Generative AI as a Cybercrime Assistant

Anthropic reports on a Claude user:

We recently disrupted a sophisticated cybercriminal that used Claude Code to commit large-scale theft and extortion of personal data. The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions. Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000.

The actor used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.

This is scary. It’s a significant improvement over what was possible even a few years ago.

Read the whole Anthropic essay. They discovered North Koreans using Claude to commit remote-worker fraud, and a cybercriminal using Claude “to develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.”

Posted on September 4, 2025 at 7:06 AM

Comments

Clive Robinson September 4, 2025 7:57 AM

@ Bruce, ALL,

With regards,

“This is scary. It’s a significant improvement over what was possible even a few years ago.”

“Scary” is not the right word, nor would “frightening” or similar be.

It is however concerning but not for the reason that most would assume.

The reason is that this has been all too predictable for a big chunk of this century, if not several centuries before. So we’ve had plenty of examples to show what happens.

Technology of all forms, from the simplest of tools upwards, has shown us two things over and over again:

1, Technology, as a product of science, engineering, and design, always advances tools, mostly rapidly.
2, Technology as a product has no notion of good or bad in human or societal terms.

It is humans, as the three basic party types of,

1, Actor / Initiator
2, Uninvolved / impartial observer
3, Target of the Actor’s intent

that decide good or bad. Almost always through the individual’s point of view and the more general mores and morals of the society they might choose to abide by.

The technology or tool is a functional item with utility.

That is, a sharp knife can cut out a cancer to save a life, or cut out a vital organ to take a life.

The tool neither knows nor cares, unlike the actor and the target, and usually, later on, any observers that sit in judgment.

The point is that tools are also “force multipliers”: the more refined the tool, the greater the utility for good or bad usage.

Thus anyone who thought about AI from the late 1970s onwards should have been able to see that it would get used by actors of what others would regard as, at a minimum, “ill or unlawful intent”.

This being entirely predictable, and something that started to be talked about seriously decades ago, questions have to be answered…

Thus the “concern” is two fold,

1, Why did we not put legislation / regulation in place to “stop/limit the harms” we knew would happen?
2, Why did we allow the likes of OpenAI and Microsoft bosses to actively campaign for no regulation?

Many would say that there has been some level of “Criminal Negligence” involved at the highest levels of politics.

Because we knew Crime using AI was going to happen, and legislators just looked the other way.

That’s the real “Primary cause of concern” because of the more fundamental reason of,

“The pace of unregulated technology development is speeding up and will continue to do so.”

We’ve had the same discussions about 3D printing and its ability to rapidly produce technology of very high harm potential, such as weapons, and devices that improve the utility of weapons more than ten-fold.

Just as with 3D printer legislation / regulation, AI legislation / regulation is way too little and way too late, and will probably not get enacted until long after relatively easily preventable harm could have been avoided.

Thus the “concern” is,

“Why, despite massive predictive evidence, do we continuously not take action that we know has to be taken, until significant harm has already happened?”

Clive Robinson September 4, 2025 11:48 AM

@ Scott Cochrane, ALL,

With regards,

“The answer to your query is ‘money’”

Reminds me of Douglas Adams observation about “small green pieces of paper”.

“This planet has – or rather had – a problem, which was this: most of the people living on it were unhappy for pretty much of the time. Many solutions were suggested for this problem, but most of these were largely concerned with the movement of small green pieces of paper, which was odd because on the whole it wasn’t the small green pieces of paper that were unhappy.”

For those that have not had the fun of reading Douglas’s books, the “has – or rather had –” is a reference to the fact that, at that point in the story line, the Earth has been destroyed by the Vogons in order to build an intergalactic hyperspace bypass…

At first in the story it appears to be by mindless bureaucracy, then under the malign power of a hidden Corporacracy… But as the story progresses an even more malign group appears, that is acting under the worst of corruption, as originally identified by Lord Acton in 1887, who coined the saying,

“Power tends to corrupt, and absolute power corrupts absolutely.”

Thus it was Douglas taking a subtle side-swipe at modern politics whilst making a social commentary on how corporates use “money” to influence those with “power”, to obtain “absolute power” with “absolute immunity” for themselves, and in the process making everyone else deeply unhappy.

lurker September 4, 2025 1:39 PM

As I said before, Anthropic knew, or should have known, that this would happen. But in the same way that gunmakers are never held liable for murder, they can wring their hands, wide-eyed in mock astonishment.

The question is not when the AI will take over; it is: will we last long enough for there to be anything for the AI to take over?

Matt September 4, 2025 2:23 PM

It’s also entirely possible that Anthropic is overselling the role their agent had in this; AI vendors are known to do this all the time, i.e. “Look how powerful our agents are, so dangerous!” The article stays pretty vague about what the agent actually did, whether the AI-generated malware sold on the forum actually works, and if so, how well it performs. “Without Claude’s assistance, they could not implement or troubleshoot core malware components” is a very bold claim.

anon September 4, 2025 8:23 PM

Well, the US Patent Office refuses to issue patents on AI-created objects. On the other hand, bad musicians want to copyright AI generated music. I think that if they can receive a patent, they can be charged as an accomplice in the case described here.

Miller September 11, 2025 11:06 PM

From the report:
Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks.

Great “advertisement”. So apparently Claude has very few guardrails against such use.
