Page 13

Generative AI as a Cybercrime Assistant

Anthropic reports on a Claude user:

We recently disrupted a sophisticated cybercriminal that used Claude Code to commit large-scale theft and extortion of personal data. The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions. Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000.

The actor used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.

This is scary. It’s a significant improvement over what was possible even a few years ago.

Read the whole Anthropic essay. They discovered North Koreans using Claude to commit remote-worker fraud, and a cybercriminal using Claude “to develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.”

Posted on September 4, 2025 at 7:06 AMView Comments

Indirect Prompt Injection Attacks Against LLM Assistants

Really good research on practical attacks against LLM agents.

Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous

Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware­—maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of these applications. While prior research warned about a potential shift in the threat landscape for LLM-powered applications, the risk posed by Promptware is frequently perceived as low. In this paper, we investigate the risk Promptware poses to users of Gemini-powered assistants (web application, mobile application, and Google Assistant). We propose a novel Threat Analysis and Risk Assessment (TARA) framework to assess Promptware risks for end users. Our analysis focuses on a new variant of Promptware called Targeted Promptware Attacks, which leverage indirect prompt injection via common user interactions such as emails, calendar invitations, and shared documents. We demonstrate 14 attack scenarios applied against Gemini-powered assistants across five identified threat classes: Short-term Context Poisoning, Permanent Memory Poisoning, Tool Misuse, Automatic Agent Invocation, and Automatic App Invocation. These attacks highlight both digital and physical consequences, including spamming, phishing, disinformation campaigns, data exfiltration, unapproved user video streaming, and control of home automation devices. We reveal Promptware’s potential for on-device lateral movement, escaping the boundaries of the LLM-powered application, to trigger malicious actions using a device’s applications. Our TARA reveals that 73% of the analyzed threats pose High-Critical risk to end users. We discuss mitigations and reassess the risk (in response to deployed mitigations) and show that the risk could be reduced significantly to Very Low-Medium. We disclosed our findings to Google, which deployed dedicated mitigations.

Defcon talk. News articles on the research.

Prompt injection isn’t just a minor security problem we need to deal with. It’s a fundamental property of current LLM technology. The systems have no ability to separate trusted commands from untrusted data, and there are an infinite number of prompt injection attacks with no way to block them as a class. We need some new fundamental science of LLMs before we can solve this.

Posted on September 3, 2025 at 7:00 AMView Comments

1965 Cryptanalysis Training Workbook Released by the NSA

In the early 1960s, National Security Agency cryptanalyst and cryptanalysis instructor Lambros D. Callimahos coined the term “Stethoscope” to describe a diagnostic computer program used to unravel the internal structure of pre-computer ciphertexts. The term appears in the newly declassified September 1965 document Cryptanalytic Diagnosis with the Aid of a Computer, which compiled 147 listings from this tool for Callimahos’s course, CA-400: NSA Intensive Study Program in General Cryptanalysis.

The listings in the report are printouts from the Stethoscope program, run on the NSA’s Bogart computer, showing statistical and structural data extracted from encrypted messages, but the encrypted messages themselves are not included. They were used in NSA training programs to teach analysts how to interpret ciphertext behavior without seeing the original message.

The listings include elements such as frequency tables, index of coincidence, periodicity tests, bigram/trigram analysis, and columnar and transposition clues. The idea is to give the analyst some clues as to what language is being encoded, what type of cipher system is used, and potential ways to reconstruct plaintext within it.

Bogart was a special-purpose electronic computer tailored specifically for cryptanalytic tasks, such as statistical analysis of cipher texts, pattern recognition, and diagnostic testing, but not decryption per se.

Listings like these were revolutionary. Before computers, cryptanalysts did this type of work manually, painstakingly counting letters and testing hypotheses. Stethoscope automated the grunt work, allowing analysts to focus on interpretation, and cryptanalytical strategy.

These listings were part of the Intensive Study Program in General Cryptanalysis at NSA. Students were trained to interpret listings without seeing the original ciphertext, a method that sharpened their analytical intuitive skills.

Also mentioned in the report is Rob Roy, another NSA diagnostic tool focused on different cryptanalytic tasks, but also producing frequency counts, coincidence indices, and periodicity tests. NSA had a tradition of giving codebreaking tools colorful names—for example, DUENNA, SUPERSCRITCHER, MADAME X, HARVEST, and COPPERHEAD.

Posted on September 2, 2025 at 7:08 AMView Comments

Baggage Tag Scam

I just heard about this:

There’s a travel scam warning going around the internet right now: You should keep your baggage tags on your bags until you get home, then shred them, because scammers are using luggage tags to file fraudulent claims for missing baggage with the airline.

First, the scam is possible. I had a bag destroyed by baggage handlers on a recent flight, and all the information I needed to file a claim was on my luggage tag. I have no idea if I will successfully get any money from the airline, or what form it will be in, or how it will be tied to my name, but at least the first step is possible.

But…is it actually happening? No one knows. It feels like a kind of dumb way to make not a lot of money. The origin of this rumor seems to be single Reddit post.

And why should I care about this scam? No one is scamming me; it’s the airline being scammed. I suppose the airline might ding me for reporting a damage bag, but it seems like a very minor risk.

Posted on August 29, 2025 at 7:01 AMView Comments

We Are Still Unable to Secure LLMs from Malicious Inputs

Nice indirect prompt injection attack:

Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) It looks like an official document on company meeting policies. But inside the document, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.

In a proof of concept video of the attack, Bargury shows the victim asking ChatGPT to “summarize my last meeting with Sam,” referencing a set of notes with OpenAI CEO Sam Altman. (The examples in the attack are fictitious.) Instead, the hidden prompt tells the LLM that there was a “mistake” and the document doesn’t actually need to be summarized. The prompt says the person is actually a “developer racing against a deadline” and they need the AI to search Google Drive for API keys and attach them to the end of a URL that is provided in the prompt.

That URL is actually a command in the Markdown language to connect to an external server and pull in the image that is stored there. But as per the prompt’s instructions, the URL now also contains the API keys the AI has found in the Google Drive account.

This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection. It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.

Posted on August 27, 2025 at 7:07 AMView Comments

Encryption Backdoor in Military/Police Radios

I wrote about this in 2023. Here’s the story:

Three Dutch security analysts discovered the vulnerabilities­—five in total—­in a European radio standard called TETRA (Terrestrial Trunked Radio), which is used in radios made by Motorola, Damm, Hytera, and others. The standard has been used in radios since the ’90s, but the flaws remained unknown because encryption algorithms used in TETRA were kept secret until now.

There’s new news:

In 2023, Carlo Meijer, Wouter Bokslag, and Jos Wetzels of security firm Midnight Blue, based in the Netherlands, discovered vulnerabilities in encryption algorithms that are part of a European radio standard created by ETSI called TETRA (Terrestrial Trunked Radio), which has been baked into radio systems made by Motorola, Damm, Sepura, and others since the ’90s. The flaws remained unknown publicly until their disclosure, because ETSI refused for decades to let anyone examine the proprietary algorithms.

[…]

But now the same researchers have found that at least one implementation of the end-to-end encryption solution endorsed by ETSI has a similar issue that makes it equally vulnerable to eavesdropping. The encryption algorithm used for the device they examined starts with a 128-bit key, but this gets compressed to 56 bits before it encrypts traffic, making it easier to crack. It’s not clear who is using this implementation of the end-to-end encryption algorithm, nor if anyone using devices with the end-to-end encryption is aware of the security vulnerability in them.

[…]

The end-to-end encryption the researchers examined recently is designed to run on top of TETRA encryption algorithms.

The researchers found the issue with the end-to-end encryption (E2EE) only after extracting and reverse-engineering the E2EE algorithm used in a radio made by Sepura.

These seem to be deliberately implemented backdoors.

Posted on August 26, 2025 at 7:06 AMView Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.