AI Summarization Optimization

These days, the most important meeting attendee isn’t a person: It’s the AI notetaker.

This system assigns action items and determines the importance of what is said. If it becomes necessary to revisit the facts of the meeting, its summary is treated as impartial evidence.

But clever meeting attendees can manipulate this system’s record by speaking more to what the underlying AI weights for summarization and importance than to their colleagues. As a result, you can expect some meeting attendees to use language more likely to be captured in summaries, timing their interventions strategically, repeating key points, and employing formulaic phrasing that AI models are more likely to pick up on. Welcome to the world of AI summarization optimization (AISO).

Optimizing for algorithmic manipulation

AI summarization optimization has a well-known precursor: SEO.

Search-engine optimization is as old as the World Wide Web. The idea is straightforward: Search engines scour the internet digesting every possible page, with the goal of serving the best results to every possible query. The objective for a content creator, company, or cause is to optimize for the algorithm search engines have developed to determine their webpage rankings for those queries. That requires writing for two audiences at once: human readers and the search-engine crawlers indexing content. Techniques to do this effectively are passed around like trade secrets, and a $75 billion industry offers SEO services to organizations of all sizes.

More recently, researchers have documented techniques for influencing AI responses, including large-language model optimization (LLMO) and generative engine optimization (GEO). Tricks include content optimization—adding citations and statistics—and adversarial approaches: using specially crafted text sequences. These techniques often target sources that LLMs heavily reference, such as Reddit, which is claimed to be cited in 40% of AI-generated responses. The effectiveness and real-world applicability of these methods remains limited and largely experimental, although there is substantial evidence that countries such as Russia are actively pursuing this.

AI summarization optimization follows the same logic on a smaller scale. Human participants in a meeting may want a certain fact highlighted in the record, or their perspective to be reflected as the authoritative one. Rather than persuading colleagues directly, they adapt their speech for the notetaker that will later define the “official” summary. For example:

  • “The main factor in last quarter’s delay was supply chain disruption.”
  • “The key outcome was overwhelmingly positive client feedback.”
  • “Our takeaway here is in alignment moving forward.”
  • “What matters here is the efficiency gains, not the temporary cost overrun.”

The techniques are subtle. They employ high-signal phrases such as “key takeaway” and “action item,” keep statements short and clear, and repeat them when possible. They also use contrastive framing (“this, not that”), and speak early in the meeting or at transition points.

Once spoken words are transcribed, they enter the model’s input. Cue phrases—and even transcription errors—can steer what makes it into the summary. In many tools, the output format itself is also a signal: Summarizers often offer sections such as “Key Takeaways” or “Action Items,” so language that mirrors those headings is more likely to be included. In effect, well-chosen phrases function as implicit markers that guide the AI toward inclusion.

Research confirms this. Early AI summarization research showed that models trained to reconstruct summary-style sentences systematically overweigh such content. Models over-rely on early-position content in news. And models often overweigh statements at the start or end of a transcript, underweighting the middle. Recent work further confirms vulnerability to phrasing-based manipulation: models cannot reliably distinguish embedded instructions from ordinary content, especially when phrasing mimics salient cues.

How to combat AISO

If AISO becomes common, three forms of defense will emerge. First, meeting participants will exert social pressure on one another. When researchers secretly deployed AI bots in Reddit’s r/changemyview community, users and moderators responded with strong backlash calling it “psychological manipulation.” Anyone using obvious AI-gaming phrases may face similar disapproval.

Second, organizations will start governing meeting behavior using AI: risk assessments and access restrictions before the meetings even start, detection of AISO techniques in meetings, and validation and auditing after the meetings.

Third, AI summarizers will have their own technical countermeasures. For example, the AI security company CloudSEK recommends content sanitization to strip suspicious inputs, prompt filtering to detect meta-instructions and excessive repetition, context window balancing to weight repeated content less heavily, and user warnings showing content provenance.

Broader defenses could draw from security and AI safety research: preprocessing content to detect dangerous patterns, consensus approaches requiring consistency thresholds, self-reflection techniques to detect manipulative content, and human oversight protocols for critical decisions. Meeting-specific systems could implement additional defenses: tagging inputs by provenance, weighting content by speaker role or centrality with sentence-level importance scoring, and discounting high-signal phrases while favoring consensus over fervor.

Reshaping human behavior

AI summarization optimization is a small, subtle shift, but it illustrates how the adoption of AI is reshaping human behavior in unexpected ways. The potential implications are quietly profound.

Meetings—humanity’s most fundamental collaborative ritual—are being silently reengineered by those who understand the algorithm’s preferences. The articulate are gaining an invisible advantage over the wise. Adversarial thinking is becoming routine, embedded in the most ordinary workplace rituals, and, as AI becomes embedded in organizational life, strategic interactions with AI notetakers and summarizers may soon be a necessary executive skill for navigating corporate culture.

AI summarization optimization illustrates how quickly humans adapt communication strategies to new technologies. As AI becomes more embedded in workplace communication, recognizing these emerging patterns may prove increasingly important.

This essay was written with Gadi Evron, and originally appeared in CSO.

Posted on November 3, 2025 at 7:05 AM9 Comments

Comments

Stéphane Bortzmeyer November 3, 2025 10:01 AM

“keep statements short and clear, and repeat them when possible” But before AI, wasn’t it what some speakers were already practicing because, if you want the participants to a meeting to remember what you said, this is also something you have to do?

KC November 3, 2025 11:22 AM

Hmm, yes, who would have thought.

From the article:

Meeting-specific systems could implement additional defenses: tagging inputs by provenance

The above links to an interesting paper from the ‘Microsoft Cognitive Services Research Group.’

From the paper I am intrigued to learn that meeting summarizations have faced different challenges than document summarizations.

As the paper is from 2020, I am curious about any additional advancements in this domain, particularly against the broader threat of AI summarization manipulation.

Just narrowly looking at Microsoft, they do appear to have an Intelligent Recap feature for meetings in Teams.

What surprises me most, is its dimensionality. At least with this particular product, you can see many, many different aspects of a prior meeting, including when your name was mentioned, timelines of when each person spoke, different chapters of the meeting, a full transcript, a query feature, and AI generated notes.

I’m curious if Microsoft provides transparency about the safeguards it has for its AI generated content.

And it does make me wonder about smaller commercial services. I can’t imagine every service would have same resources for adaptive threats, ur meeting attendees.

mark November 3, 2025 12:33 PM

And this, of course, is on top of the recent spate of stories about AI having something wrong in summaries 80% of the time, and outright wrong 45% of the time.

Fedrick November 3, 2025 3:37 PM

Really thought‑provoking post — I really appreciate how you highlight the subtle shift happening in meetings and how the AI notetaker is becoming a de facto participant. The concept of “AI summarization optimization (AISO)”—people adjusting their language, timing, even tone, to influence what the summarizer picks up — is both clever and slightly unsettling.
Schneier on Security
+1

What struck me:

The idea that phrases like “key takeaway”, “action item”, or repeating a point at transition moments can steer what ends up in a summary.
Schneier on Security
+1

That this behavioural shift is not just individual — it could reshape how we structure meetings, speak in them, maybe even choose who speaks when.
Schneier on Security

And of course, the counter‑measures you cover — from social norms to technical filtering — are essential. Without them, we might enable a race to the “phrase the algorithm likes best”.
Schneier on Security

On a related note: if you’re exploring workflows for summarisation or meeting‑feedback loops (whether in tech teams, product reviews or documentation), a service like monobot.ai might be a useful tool in your toolkit. It won’t solve the deeper behavioural or governance issues you raise, but it can help automate the “capture/summary/feedback” loop so you free up time to focus on what’s said, not just how it’s captured.

Thanks for raising these issues so clearly — the “small shift” you describe is exactly the kind of thing that hides in plain sight but changes things over time. Looking forward to your next piece.

Rontea November 5, 2025 9:43 AM

By intentionally framing points in ways that AI notetakers are more likely to capture, it can subtly influence how meetings are summarized and interpreted later. This could lead to skewed records that prioritize certain viewpoints over others, which makes awareness and transparency around AISO techniques especially important in collaborative environments.

OldScribe November 5, 2025 7:21 PM

I spent years taking minutes for faculty meetings (long before video conferencing, let alone AI), a notably fractious process. I adopted the practices reported here as gaming the AI system to improve the minutes–at the end of each agenda item, I would state my takeaway that was going into the minutes, what the key points were, action items, etc.

The good part was that there was little argument about the minutes or need for revision at the next meeting.

The bad part was that everyone liked how it worked, and I had to keep taking the minutes for many years.

I use Zoom to record meetings now, and take the AI summary as the first draft of the minutes. I continue the practice of stating the key points, takeaways, action items but do it explicitly for the minutes, and repeat if the majority isn’t in agreement. For verification, I post a link to the recording, and process the VTT text transcript Zoom makes into an easily readable form and post that alongside the edited AI summary.

Peter A. November 6, 2025 8:02 PM

It’s like letting a secretary/stenotypist take notes and then trusting the notes blindly. A hostile agent or just an incompetent person in such position can do a lot of damage. The solution is just to review the notes at the end of the meeting, and making the review a hard requirement, no excuses.

Otherwise important decision-making may be subverted, AI or no AI.

Leave a comment

Blog moderation policy

Login

Allowed HTML <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre> Markdown Extra syntax via https://michelf.ca/projects/php-markdown/extra/

Sidebar photo of Bruce Schneier by Joe MacInnis.