Prompt Injection in AI Browsers

This is why AIs are not ready to be personal assistants:

A new attack called ‘CometJacking’ exploits URL parameters to pass to Perplexity’s Comet AI browser hidden instructions that allow access to sensitive data from connected services, like email and calendar.

In a realistic scenario, no credentials or user interaction are required and a threat actor can leverage the attack by simply exposing a maliciously crafted URL to targeted users.

[…]

CometJacking is a prompt-injection attack where the query string processed by the Comet AI browser contains malicious instructions added using the ‘collection’ parameter of the URL.

LayerX researchers say that the prompt tells the agent to consult its memory and connected services instead of searching the web. As the AI tool is connected to various services, an attacker leveraging the CometJacking method could exfiltrate available data.

In their tests, the connected services and accessible data include Google Calendar invites and Gmail messages and the malicious prompt included instructions to encode the sensitive data in base64 and then exfiltrate them to an external endpoint.

According to the researchers, Comet followed the instructions and delivered the information to an external system controlled by the attacker, evading Perplexity’s checks.

I wrote previously:

Prompt injection isn’t just a minor security problem we need to deal with. It’s a fundamental property of current LLM technology. The systems have no ability to separate trusted commands from untrusted data, and there are an infinite number of prompt injection attacks with no way to block them as a class. We need some new fundamental science of LLMs before we can solve this.

Posted on November 11, 2025 at 7:08 AM11 Comments

Comments

Spellucci November 11, 2025 7:43 AM

Thanks. The only generative AI I pay for is Perplexity. I had installed the Comet browser to see what the fuss was about. I could not find a use for it. After reading how the architecture of AI browsers is fundamentally insecure, and not securable, I uninstalled it last week.

You wrote, “We need some new fundamental science of LLMs before we can solve this.” That’s the understatement of the decade.

Snarki, child of Loki November 11, 2025 11:59 AM

A few weeks back, I adjusted http server settings to reject connections from LLM training web-scrapers.

Now, I suspect that instead of just rejecting connections, I should have figured a way to reply with “malicious LLM injection” content.

The sooner LLMs are killed off, the better for everyone that isn’t a richy-rich tech edgelord.

Clive Robinson November 11, 2025 12:05 PM

@ Bruce, ALL,

This says oh so much,

“In our proof-of-concept test, we demonstrated that exporting sensitive fields in an encoded form (base64) effectively circumvented the platform’s exfiltration checks”

So plain text only matching on sensitive data.

If base64 gets sorted what about base16 etc.

And that’s all before talking about even primitive statistics flattening by the simplest of “Straddling checkerboard”[1] systems.

The important point to note though, is that the AI agent does not see things the way humans do.

I frequently say,

“Paper, Paper, Never Data”

Because of how the redundancy of extended character sets can have covert channels in them.

It’s a problem that goes back a long way certainly before Current AI LLM and ML Systems and tools and systems built on them.

Put simply,

“They can not be made secure in any way”

Thus the only option is “isolation” by full “segregation” as a mitigation strategy.

Which let’s be honest makes the tool effectively “useless”.

Now ask yourself an important question,

“Does Microsoft’s AI tools built into Win11 have similar defects?”

I think that there would be plenty who would put a Dollar down to say yes…

Microsoft are desperate to make their investments in AI look like they are “bringing in profit” when they clearly are not even “bringing in money”.

People are begining to see that AI is a bent and battered can, that still needs a good kicking to go effectively nowhere.

Which begs another question about “surveillance” in effect these tools are betraying internal confidential / Private Personal Information to Microsoft… So,

“How will Microsoft monetize your stolen information?”

And lets be honest the options are not very good for them, and very bad for us.

[1] The Straddling checkerboard is a very simple and curious thing in that not only can it make language statistics look flat and shorten the length of the plain text so act as crude compression. It can do the opposite of making something with flat statistics like the ciphertext of an OTP look, like the ciphertext of a simple paper and pencil cipher that leaks plaintext statistics. Thus giving anyone doing cryptanalysis a long and entirely pointless investigation.

David November 11, 2025 12:34 PM

“How will Microsoft monetize your stolen information?”

The Microsoft Edge web browser has a very clear voice synthesis feature, which will read a web page aloud. It works well with a fast net connection, but not with a slow connection, which suggests that the page is being sent back to MS for conversion. This might not matter on public web pages, which MS can be assumed to have slurped up already, but anyone using Edge voice synthesis on confidential internal company documents should ask the same question.

Winter November 11, 2025 12:35 PM

What surprises me is that it seems to be so difficult to filter non-human visible, is, invisible, components from pages.

On the other hand, filtering costs time and money and any filtering only benefits the users. The LLM corps have to balance their own costs (important) against the costs for the users (irrelevant).

I assume that if prompt injection did cost the corps real money, the problem would go away overnight.

Clive Robinson November 11, 2025 12:39 PM

@ Bruce,

With regards,

“We need some new fundamental science of LLMs before we can solve this.”

I have good reason to believe that work carried out in the late 1920’s and early 1930’s precludes the possibility of such a “new fundamental science”. Primarily,

1, Kurt Gödel’s incompleteness theorems,
2, Gregory Chaitin’s incompleteness theorem,
3, Alfred Tarski’s undefinability theorem,

With also of note,

4, The Alan Turing / Alonzo Church thesises
5, Claude Shannon on information and redundancy.

All effectively establish intrinsic limits on any algorithmic solution to the problem.

But either way I suspect neither of us will be around when things actually happen.

lurker November 11, 2025 2:13 PM

@Bruce
“A new attack called ‘CometJacking’ exploits URL parameters …”

[sorry for the fat thumb]
URL = Universal Resource Locator
and has come to mean the machine readable address of something on the internet. So why are there exploitable parameters? The tracking industry has a large part of the blame. So-called urls now include tacked onto the adress sometimes hundreds of bytes of PII of the user and the transction.

That’s a good place to start, before the fundamental science of LLMs. Get transactional data out of the URL where it does not belong. We already have a good idea of how big an address should be. It’s long past time to define a URL packet, and fix a maximum size for it. Routers should just drop oversized URL packets. That would fix a lot more ills than just CometJacking. (Click Here to Kill Everybody?)

BCG November 11, 2025 5:19 PM

@lurker

“Routers should just drop oversized URL packets.”

HTTPS means that URLs aren’t visible to routers…

Keith Douglas November 12, 2025 9:26 AM

The limitative results that Clive R. refers to are theoretically relevant; I’d guess that we have to learn to find new ways to create creative failure modes with the new systems (and I would argue also apply to us). I mean, technically, a DVS like Burp Suite’s Scanner is impossible in full generality by Rice’s Theorem, but it doesn’t matter since false positives and false negatives of relevant character are tolerated – and says nothing about how close we can get to the ‘ideal’.

Morley November 12, 2025 11:47 AM

In general, I suppose we’ll settle on a “good enough” solution involving pre-filtering the input text with both traditional code and more AI. We’ll make the Swiss cheese holes as small as we care to spend money on.

Leave a comment

Blog moderation policy

Login

Allowed HTML <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre> Markdown Extra syntax via https://michelf.ca/projects/php-markdown/extra/

Sidebar photo of Bruce Schneier by Joe MacInnis.