Generative AI as a Cybercrime Assistant
Anthropic reports on a Claude user:
We recently disrupted a sophisticated cybercriminal that used Claude Code to commit large-scale theft and extortion of personal data. The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions. Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000.
The actor used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.
This is scary. It’s a significant improvement over what was possible even a few years ago.
Read the whole Anthropic essay. They discovered North Koreans using Claude to commit remote-worker fraud, and a cybercriminal using Claude “to develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.”
Clive Robinson • September 4, 2025 7:57 AM
@ Bruce, ALL,
With regards,

“This is scary.”

Scary is not the right word, nor would “frightening” or anything similar be.
It is, however, concerning, but not for the reason most would assume.
The reason is that this has been all too predictable for a big chunk of this century, if not for centuries before, so we have had plenty of examples to show what happens.
Technology of all forms, from the simplest of tools upwards, has shown us two things over and over again,
1, Technology, as a product of science, engineering and design, always advances tools, mostly rapidly.
2, Technology as a product has no notion of good or bad in human or societal terms.
It is humans as the three basic party types of,
1, Actor / Initiator
2, Uninvolved / impartial observer
3, Target of the Actor’s intent
that decide good or bad, almost always through the individual’s point of view and the more general mores and morals of the society they might choose to abide by.
The technology or tool is a functional item with utility.
That is, a sharp knife can cut out a cancer to save a life, or cut out a vital organ, etc., to take a life.
The tool neither knows nor cares, unlike the actor and the target, and, usually later, any observers who sit in judgment.
The point is that tools are also “force multipliers”: the more refined the tool, the greater its utility for good or bad use.
Thus anyone who thought about AI from the late 1970s onwards should have been able to see that it would get used by actors with what others would regard as, at a minimum, “ill or unlawful intent”.
Since this was entirely predictable, and was being talked about seriously decades ago, questions have to be answered…
Thus the “concern” is twofold,
1, Why did we not put legislation / regulation in place to “stop/limit the harms” we knew would happen?
2, Why did we allow the likes of OpenAI and Microsoft bosses to actively campaign for no regulation?
Many would say that there has been some level of “Criminal Negligence” involved at the highest levels of politics.
Because we knew crime using AI was going to happen, and legislators just looked the other way.
That’s the real “primary cause of concern”, because of the more fundamental reason that,
“The pace of unregulated technology development is speeding up and will continue to do so.”
We’ve had the same discussions about 3D printing and its ability to rapidly develop technology of very high harm potential, such as weapons and devices that improve the utility of weapons more than tenfold.
Just as with 3D printer legislation / regulation, AI legislation / regulation is way too little and way too late, and will probably not get enacted until long after harm that could relatively easily have been prevented has already occurred.
Thus the “concern” is,
“Why, despite massive predictive evidence, do we continually fail to take action that we know has to be taken, until significant harm has already happened?”