Use of Generative AI in Scams

New report: “Scam GPT: GenAI and the Automation of Fraud.”

This primer maps what we currently know about generative AI’s role in scams, the communities most at risk, and the broader economic and cultural shifts that are making people more willing to take risks, more vulnerable to deception, and more likely to either perpetrate scams or fall victim to them.

AI-enhanced scams are not merely financial or technological crimes; they also exploit social vulnerabilities, whether short-term, like travel, or structural, like precarious employment. This means they require social solutions in addition to technical ones. By examining how scammers are changing and accelerating their methods, we hope to show that defending against them will require a constellation of cultural shifts, corporate interventions, and effective legislation.

Posted on October 1, 2025 at 7:09 AM

Comments

KC October 1, 2025 8:54 AM

@Clive, you might get a kick out of this.

dAIsy, a scam-fighting AI bot.

A UK-based company, O2, seeds dAIsy’s phone number into scammer call lists. She has all the time in the world to talk about her kittens.

Scammer: Stop calling me dear you stupid ——.
dAIsy: Got it dear.

O2: Ruin a scammer’s day, report scam numbers to 7726.

Clive Robinson October 1, 2025 11:27 AM

@ Bruce, ALL,

The post I just wrote to the earlier thread,

https://www.schneier.com/blog/archives/2025/09/details-of-a-scam.html/#comment-448386

Basically it says what the two major effectors of “Social Shifts” are that need to be stopped immediately,

1, Electronic communications cannot be made sufficiently secure, so they should not be used for Personal and Private Information, and that covers more than Financial and Medical.

2, The general awareness we all should have: anything from a commercial entity that appears to be “More Convenient” is there for their “shareholder value”, and is thus guaranteed to be less secure than needed.

Therefore people should stop using them until legislation is put in place that is effective in making financial institutions operate more secure systems, rather than allowing them to cut costs such that the level of security is considerably less than,

1, Face2Face “Know your customer in person”.
2, Ink&Paper that has “Secure Verification” not just of the document but each transaction within it.

Clive Robinson October 2, 2025 1:37 PM

@ Cyber GOD,

With regards,

“pushing AI down everyone’s throat when YOU, yourself, KNOW that it is a SCAM”

Let me see,

1, Title has “AI in Scams”
2, Subtitle has “ScamGPT” & “Fraud”
3, Tags has “AI” and “scams”

In effect that is all our host wrote.

Each and every one had “scam” in it.

The rest is a cut-n-paste from the article (the article title is what makes this thread’s subtitle).

So I could argue that @Bruce has pushed “scam” and “Fraud” more than he has AI…

Now if you look back on this blog I’ve been a “Debbie Downer” on,

“Current AI LLM and ML systems”

And I’m always fairly specific in using it. I’ve also in the past explained why the so-called “Digital Neural Networks” (DNNs) are little more than “Digital Signal Processing” (DSP) on steroids. And that the current transformer and similar ML that adjusts the “MAD weights” is turning the DNN into an “adaptive filter”.
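To make that framing concrete, here is a minimal Python sketch (the sizes and the “unknown filter” are made up purely for illustration): a single linear neuron is just a multiply-and-add over its inputs, and training it with gradient descent on squared error is the classic LMS adaptive-filter update.

```python
import numpy as np

# Toy illustration only: a single linear "neuron" is a multiply-and-add (MAC)
# over its inputs, like one FIR filter output, and the gradient-descent weight
# update below is the classic LMS adaptive-filter rule. Sizes are made up.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3, 0.8])   # the unknown "filter" we want to match
w = np.zeros(3)                       # the MAC weights being adapted
mu = 0.05                             # LMS step size

for _ in range(5000):
    x = rng.normal(size=3)            # input window
    d = true_w @ x                    # desired output
    y = w @ x                         # neuron output: multiply-and-add
    w += mu * (d - y) * x             # LMS / gradient-descent update

print(np.round(w, 3))                 # converges to roughly [ 0.5 -0.3  0.8]
```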

Thus the fun question is,

“What is it filtering?”

That is, what is its spectral response curve representative of?

What I can say quite truthfully is it’s not anything we would recognise as reasoning, let alone intelligence.

I have also pointed out that LLM&ML systems are not just limited; what we currently have will no more reach intelligence than a dusty library full of old books will.

What makes even dusty libraries of use to mankind is the “human element” where one thinks up questions/queries and another searches through and presents matching records.

In this respect the library is a Database with an experienced DBA.

Where is the intelligence in such a system?

Certainly not in the books/journals, nor in the index and records it points to, or the mechanics of using them (for a formal argument of this go look up “Searle’s Chinese Room”).

The intelligence is in coming up with an apposite question, and the ability to present the records in a reasoned and coherent manner to the questioner.

This is the task carried out by humans, not automation, even though the process, in the main, looks automated to many. That is because they do not see “the reasoning out of the question” nor “the comprehension and application of the search results”.

Today happens to be the 75th anniversary of what we now call “The Turing Test”. In the way it was originally voiced it is now seen as “discredited”. The thing is, it’s not discredited as an idea, because it’s still about “Human Judgement”, not the “mechanics of the shielding method”, which has, not unexpectedly, been discredited. The point being that the test will remain valid as long as the test method is suitably contemporary at the time.

You will find on this blog a comment I made about being able to recognise AI by two things,

1, Its lack of scope.
2, It spoke in marketing speak.

The first was a limit on not just the number of tokens but the length of context they can cover. It’s not hard to do some basic calculations to see why this will always be an issue.
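As one plausible back-of-envelope version of those calculations: vanilla transformer self-attention compares every token with every other token, so the raw work grows with the square of the context length. The layer and head counts below are made-up illustrative figures, not any particular model’s, and real implementations use tricks to avoid materialising the full score matrix, but the quadratic growth is the point.

```python
# Back-of-envelope only: self-attention scores every token against every other
# token, so raw cost grows quadratically with context length. Layer/head counts
# and the 2-byte (fp16) score are illustrative assumptions, not real figures.
layers, heads, bytes_per_score = 32, 32, 2

for context in (4_096, 32_768, 262_144):
    scores = context * context                                # pairwise comparisons per head per layer
    gb_all_layers = scores * heads * layers * bytes_per_score / 1e9
    print(f"{context:>8} tokens -> {scores:>15,} scores per head per layer, "
          f"~{gb_all_layers:>12,.1f} GB of raw scores across the model")
```

Each 8-fold increase in context length multiplies the raw score count by 64, which is why context windows remain a hard engineering limit rather than a dial that can simply be turned up.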

The second was due to the input source. People talk about theft of originality etc. But the reality is that in the early days of stripping the Internet, much of the source was fairly sad “Marketing speak”. And for some reason, which I won’t go into for brevity, that style got amplified.

So it does not matter if the response rattles out on an old-fashioned 50-baud teletype or gets visualised on what looks like a Zoom call to some sexist notion of a seductive girl/woman (see YouTube garbage for thousands of those being auto-generated a day). Because at least some of us catch on very quickly, others do so slightly slower, but many of us will get to spot it and come up with ways to test it effectively.

The point is “Current AI LLM and ML Systems” are actually “Not Scams” any more than “Hammers are murderers”. They are however fairly useless and inefficient tools that are unlikely to be of any real use to most people.

There are some “rules based” tasks they will be quite effective at. But that is not due to “reasoning” or “intelligence” and is actually no sign of either.

Consider them like a card player playing some form of patience. The player shuffles the cards, then “plays them out” until either the game “plays out” or “gets stuck in a loop”. A current AI system can do this rather better than a human, and can even recognise a “loop condition” very much earlier. It can also get to the point where “slight patterns” in the shuffled deck will say the game is a bust.
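As a toy illustration of that kind of rules-based play-out (using Bulgarian solitaire, chosen only because it is small and deterministic, not because any real system plays it): the program simply applies the rule until a position repeats, and a table of previously seen positions spots the “loop condition” the instant it recurs, long before a human would notice.

```python
def step(piles):
    # One move: take a card from every pile and stack the removals as a new pile.
    new_pile = len(piles)
    piles = [p - 1 for p in piles if p > 1]
    return tuple(sorted(piles + [new_pile], reverse=True))

def play(piles):
    # Play out until a position repeats: either a fixed point ("played out")
    # or a longer cycle ("stuck in a loop"). The seen-table spots it at once.
    seen, move = {}, 0
    piles = tuple(sorted(piles, reverse=True))
    while piles not in seen:
        seen[piles] = move
        piles, move = step(piles), move + 1
    return move, move - seen[piles], piles   # moves played, cycle length, repeated position

# 45 cards (a triangular number) always settle into the staircase 9, 8, ..., 1.
print(play([12, 9, 8, 7, 5, 4]))
```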

If you want similar, both Chess and Go are strictly rules based and AI systems have beaten the best of human players. But hey even a mechanical calculator being rules based can get results faster than most humans when the numbers exceed the tables we were supposed to memorize as children.

And that’s an important difference: humans can beat rules-based systems only as far as memorised lookup tables take them.

I won’t go into it, but “expert systems” were the AI of the ’80s. They were in effect a lookup table as a network that got traversed by answers to questions that were in effect binary. The real problem with them was coming up with “real world questions” that could be limited to answers that were binary, or in other ways constrained, but covered all possibilities. Furthermore humans have subtle feed-forward mechanisms. When a doctor asks you a question they judge your current answer by both your previous answers and their knowledge of you as a patient. Computers of last century just were not capable of doing these things, even by adding fuzzy logic.
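For flavour, here is a toy sketch of that “lookup table as a network” idea, using an invented scam-triage example. Real 1980s shells were far richer, with rule engines and certainty factors, but the traversal mechanic is essentially this.

```python
# Toy sketch of an expert-system style question network: each node asks a yes/no
# question and the answer selects the next node, until a leaf verdict is reached.
# The questions and verdicts here are invented purely for illustration.
TREE = {
    "start":   ("Is the message unsolicited?",       "urgency", "low_risk"),
    "urgency": ("Does it demand urgent action?",      "payment", "low_risk"),
    "payment": ("Does it ask for payment or codes?",  "scam",    "suspicious"),
}
VERDICTS = {
    "scam": "Almost certainly a scam.",
    "suspicious": "Treat with suspicion.",
    "low_risk": "Probably routine.",
}

def consult(answers, node="start"):
    """Traverse the network using a dict mapping each question to True/False."""
    while node in TREE:
        question, if_yes, if_no = TREE[node]
        node = if_yes if answers[question] else if_no
    return VERDICTS[node]

print(consult({
    "Is the message unsolicited?": True,
    "Does it demand urgent action?": True,
    "Does it ask for payment or codes?": True,
}))   # -> Almost certainly a scam.
```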

Whilst not suitable for full medical diagnosis and similar, that in no way meant Expert Systems were useless as tools. They were not and still are not, and they get tucked into real world products even today because they are orders of magnitude faster and more efficient than general LLM and ML systems ever will be.

This means that there will be a band of usage where LLMs fit in. As far as human jobs go this will be to replace or augment heavily rules-based systems. Think of them replacing a Law Library or similar Tax Regulation Library. Yes a lot of lawyers and CPAs will go ouch, but then so did stenographers and audio typists in the late 80’s through 90’s. Likewise so will a chunk of the ~1/3 of current jobs that are basically “makework”.

Does that mean there will be mass layoffs? Well, that depends more on the people than anything else. If they can,

1, Reskill or upskill as they go.
2, Relocate or work in location independent jobs

Then they will survive. Because three things will always exist,

1, Creative work to meet human needs and desires
2, Makework that can not be automated away
3, New types of work using new tools and associated new skills.

Oh and those desperate for things to consider… Just remember,

“Humans always want to talk.”

It’s the human condition of being social creatures. That means it has implications, the most obvious of which is

“Communications needs technology for it to be effective.”

The trick is to work out what’s coming next and get ahead of it.

For instance Satellite TV and Terrestrial Broadcast are going to be dead or dying in half a decade or so.

Knowing this, and knowing that humans have an increasing desire for parking their rumps on the settee and watching adverts interrupted by so-called program content…

Where are they going to get their fix?

Well it will be by two methods,

1, Optic data cable for fixed.
2, Wireless data for mobile.

Within another 5-20 years Optic Data cables will be mostly “backhaul”, and wireless data cells will shrink down to pico and femto in the street or house. Just about every user device will be Wireless. In fact things that are not, nor need to be, wireless will become wireless simply to reduce production costs. After all why put buttons on a device when a WiFi or Bluetooth device will be a third or less of the manufacturing cost?

Oh and another excuse to put you under surveillance… After all how much would knowledge of when and where you brush your teeth be worth? And not just to your dental insurance company…

As for “adult toys” there have already been “security/surveillance” stories we’ve all no doubt been amused by.

Now consider the fun a WiFi Toilet could be used for (Yes WiFi-loos are a thing in Japan already).

The thing is there is “way too much data” and way too little correlation. This is a job an LLM-based AI system can “front end” by finding patterns, or more correctly anti-patterns, in a Police State, which will be fairly ubiquitous in a couple of decades at most unless the Citizenry get their act together and put in very, very strong legislation to stop “Data with Everything” surveillance…

ResearcherZero October 8, 2025 10:59 PM

‘All your personal data and travel plans will be released tomorrow,’ threatens Scattered Lapsus$ Hunters who have reportedly knocked off around 1 billion records from Salesforce.

Included in the data is the personal information of high profile officials and the names, addresses and phone numbers of a large number of people. The list of companies threatened with having data leaked includes a lot of big names who many will have done business with.

Scammers will then likely use the data to target many individuals caught up in the leaks.

‘https://www.smh.com.au/technology/qantas-among-40-companies-caught-up-in-major-extortion-attempt-by-hackers-20251008-p5n0wz.html

There are some very nasty tools that scammers can use to help them with their endeavors.
https://unit42.paloaltonetworks.com/clickfix-generator-first-of-its-kind/
