Using AI to Scale Spear Phishing

The problem with spear phishing is that crafting individualized, enticing phishing emails takes time and creativity. Researchers are using GPT-3 to attempt to solve that problem:

The researchers used OpenAI’s GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and traits. Machine learning focused on personality analysis aims to predict a person’s proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say that the results sounded “weirdly human” and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.

While they were impressed by the quality of the synthetic messages and how many clicks they garnered from colleagues versus the human-composed ones, the researchers note that the experiment was just a first step. The sample size was relatively small and the target pool was fairly homogenous in terms of employment and geographic region. Plus, both the human-generated messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone from afar.

It’s just a matter of time before this is really effective. Combine it with voice and video synthesis, and you have some pretty scary scenarios. The real risk isn’t that AI-generated phishing emails are as good as human-generated ones, it’s that they can be generated at much greater scale.

Defcon presentation and slides. Another news article.

Posted on August 13, 2021 at 6:16 AM • 19 Comments


echo August 13, 2021 7:09 AM

Habeas corpus. Ultimately these schemes are old paint on a new fence, and we’ve seen it all a million times before in one form or another. There’s nothing new here, only the sheen of “OMG computers”. No computers, no threat. Flood control can be managed in lots of different ways from top to bottom, so the scale issue goes away before it gets started.

Still, the findings spurred the researchers to think more deeply about how AI-as-a-service may play a role in phishing and spearphishing campaigns moving forward. OpenAI itself, for example, has long feared the potential for misuse of its own service or other similar ones. The researchers note that it and other scrupulous AI-as-a-service providers have clear codes of conduct, attempt to audit their platforms for potentially malicious activity, or even try to verify user identities to some degree.

Habeas corpus…

Clive Robinson August 13, 2021 8:30 AM

@ Bruce, ALL,

It’s just a matter of time before this is really effective. Combine it with voice and video synthesis, and you have some pretty scary scenarios.

When a person gives an order, the person receiving it normally expects some form of authentication. Often the receiver will recognise the giver’s “face” when orders come in person, their “voice” when they come via the phone, or “command codes” for the equivalent of “telex” or “Email” these days.

Thus a question arises as to what happens to a command hierarchy when Face / Voice can no longer be trusted?

Well, they could fall back on command codes, only the “authorities” want encryption back-doored, so that command codes can no longer be trusted either.

Where do command chains go, or even exist, when no authentication is possible?

I’m not sure it’s a question we have an answer for currently…

Jerry August 13, 2021 9:23 AM

The other thing that happens with this is most of us humans get more and more cynical and suspicious about any type of message that might possibly be fake. It breaks down real relationships people have with businesses and other organizations. Will there be a point at which the mass of people who just don’t respond is too great, and the economy suffers? Society in general already suffers, mostly those who feel like there’s nothing they can do about the many fake actors in the world. (And most of the fake actors are not even technology based; they are two-faced people in leadership positions).

any moose August 13, 2021 3:08 PM

What will happen very soon will depend entirely upon how intelligent and/or sophisticated the recipient of the email is.

People who understand how impossible the situation is will no longer trust emails and will verify everything via phone or in person. If these people work for a boss who is less than clever — don’t we all? — they will need to get his marching orders in writing to prevent them from being held responsible when they are required to approve a nebulous email request.

Most people will believe the text & images contained in the phishing email and get burned.

Why is “research” like this even allowed?

We need to repeal Section 230 of the laughably named Communications Decency Act, something that will never happen because Democrats really like social media oligarchs censoring conservatives.

David Leppik August 13, 2021 5:17 PM

Actually, the problem with spear phishing is that it’s cruel, manipulative, immoral, and illegal. This approach should be obvious to any criminal currently using template-based phishing who learns about GPT-3. I’m not sure that this sort of research is actually helpful.

Thumbo August 13, 2021 7:46 PM

@Clive Flimsy quantum cryptography in the field

@any moose I think this falls under the standard “if I don’t do it, someone else will” argument, and publishing it makes it a known threat people can start building a shield against. Which probably means honest AI service providers watching what everyone is doing with their services (with AI!). Then @echo’s flood control against the shady dark web providers.

I’m excited for the great Internet of Things botnet AI wars that destroy the internet if we don’t get @echo’s flood control.

Anonymous August 13, 2021 9:20 PM

@ all

Re Phishing

Fundamentally, it is a problem inherent to the graphical interface [in a perfect world nobody uses GUI for banking transactions]. Visual appeal plays the decisive role in a successful phishing scam [the initial part of the attack at least].

The solution will have to come from the designers – perhaps incorporating other sensory [sound?] elements into the website’s identity, etc.

Hedo August 13, 2021 10:45 PM

I have to agree with @echo (first paragraph of the first comment).
Basically, AI can be used, and it is being used ‘bigly’ as a modern day
used car salesman, screwing you over while lookin’ you in the eye.
To put it primitively.

And as @Freezing_in_Brazil points out (oh yeah baby),
the devil IS in the GUI. Oh, those purdy, flashy, colorful, shiny
things are hard to resist, ain’t they?
Wouldn’t be surprised to see a Cisco BGP Router with a GUI soon.
Heck, they’ve been running Domain Controllers and Active Directory
servers with GUIs for decades now.

Clive Robinson August 14, 2021 12:22 AM

@ Thumbo,

Flimsy quantum cryptography in the field

Quantum Cryptography[1], contrary to its name, is not “cryptography” in terms of encryption/decryption, but more correctly “Quantum Key Distribution”.

That is, the actual “encryption algorithm” used is the normal “One Time Pad” with all its limitations.

Which means that the authentication it offers, at its widest scope, only covers the Shannon Channel between the OTP encryption and decryption mixer functions (XOR / ADD etc).
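The OTP mixer function mentioned above reduces to a few lines. A toy sketch (illustration only; real OTP security rests entirely on the pad being truly random, at least message-length, pre-shared, and never reused):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR mixer: ciphertext = plaintext XOR pad."""
    assert len(pad) >= len(plaintext), "pad must cover the whole message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Decryption is the same XOR operation applied again.
otp_decrypt = otp_encrypt

msg = b"MOVE AT DAWN"
pad = secrets.token_bytes(len(msg))  # one-time, pre-shared key material
ct = otp_encrypt(msg, pad)
assert otp_decrypt(ct, pad) == msg
```

Note that XOR-ing the ciphertext with the same pad recovers the plaintext exactly, which is why the mixer serves for both directions.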

All legislation for Government Snooping so far has been above that Shannon Channel at the plaintext level. Thus they get access to the unauthenticated section between the “user and the security end point”.

The implication being that they can do anything the “user” can; thus injecting false / fake messages falls within that range of activities.

To prevent such injection of false / fake messages your authentication layer would have to be above it. If using a Crypto Algorithm this means you would have to move the “security end point” closer to the user than the attacker can get.
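One way to picture such an authentication layer above the security end point is a message authentication code computed with a pre-shared key before the order ever touches the untrusted device. A minimal sketch (the key and order below are invented for illustration; this is a MAC-based stand-in, not the paper-and-pencil scheme described here):

```python
import hashlib
import hmac

# Hypothetical pre-shared key, agreed out of band.
SHARED_KEY = b"pre-shared command authentication key"

def tag(order: bytes) -> str:
    """Authentication computed before the message enters any untrusted
    channel, so the security end point sits with the user."""
    return hmac.new(SHARED_KEY, order, hashlib.sha256).hexdigest()

def verify(order: bytes, received_tag: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(tag(order), received_tag)

order = b"Release funds to account 1234"
t = tag(order)
assert verify(order, t)
assert not verify(b"Release funds to account 9999", t)
```

An attacker who controls everything between the two end points can still read or block the order, but cannot forge a valid tag without the key.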

Whilst I’ve described in the past how you can do this with a paper and pencil cipher such as the OTP, Government legislation is determined to stop you using any kind of third-party technology, by making it illegal to manufacture and sell unless it has a “the Government wins” back-door, not just inserted but connected up to a communications network the Government controls.

Thus your options are limited to,

1, Buy and use illegal equipment.
2, Design and make illegal equipment.
3, Use a system not legislated against.

Whilst you can show mathematically that the Government objective can always be defeated, as Option 3 will always be open to you, that in no way guarantees you anything more than a single bit of secure channel with any given message[2]. So the downside of any such system is a very limited secure channel bandwidth compared to the plaintext message size.

I could trot out the mathematics to show that in all such systems the likelihood of an attacker getting a message authenticated by “luck” is low but possible. Further, that likelihood is related to the number of bits in the secure channel and their individual probability functions. But you can find that in standard references.
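Under the simplifying assumption that each of the n secure-channel bits is uniform and independent, that “by luck” forgery probability works out to 2^-n; a trivial sketch:

```python
def forgery_probability(n_bits: int) -> float:
    """Chance an attacker guesses a valid n-bit authenticator by luck,
    assuming each bit is uniform and independent (real per-bit
    probability functions may differ, as noted above)."""
    return 2.0 ** -n_bits

assert forgery_probability(1) == 0.5          # one bit: a coin flip
assert forgery_probability(32) == 2.0 ** -32  # low, but never zero
```

Which matches the point that the risk is low but possible, and shrinks with every extra bit of secure channel.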

[1] See BB84 and later.

[2] If you think about it, the minimum secure channel / covert message you can send is message(TXed|NotTXed) at some pre-arranged point in time. Thus you could arrange some system whereby messages sent on some agreed number-of-seconds increment are taken as valid, whilst messages at all other times are regarded as invalid. So this bandwidth of one bit per message, whilst limiting, is not preventing.
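The footnote’s timing scheme can be sketched as a validity check on the send time (the 37-second increment below is an arbitrary, illustrative agreement, not part of any real protocol):

```python
# Pre-arranged out of band: only messages sent on an exact multiple of
# the agreed increment are treated as valid; everything else is noise.
AGREED_INCREMENT = 37  # seconds, illustrative value

def is_valid_send_time(epoch_seconds: int) -> bool:
    return epoch_seconds % AGREED_INCREMENT == 0

assert is_valid_send_time(74)      # 74 = 2 * 37: on schedule, valid
assert not is_valid_send_time(75)  # off schedule: regard as invalid
```

The channel carries only one bit per agreed slot (sent / not sent), which is exactly the limited-bandwidth trade-off the footnote describes.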

Thumbo August 14, 2021 1:54 PM


Thank you for all of your excellent comments; I’m a fan (notwithstanding confirmation bias).

I was thinking stupid simple authentication between command and lower ranks comms devices via the quantum key exchange (series of entangled photons or whatever).

Not thinking about all the issues of corruptible chips, software, devices before you get to that point to send a fake message.

Clive Robinson August 15, 2021 2:40 AM

@ Thumbo, ALL,

Not thinking about all the issues of corruptible chips, software, devices before you get to that point to send a fake message.

Sadly that’s exactly where the Government, on the advice of the SigInt agencies with their half century or more of experience, has decided is the best place to attack with legislative and regulatory control…

First though, “History is important”: these days not everyone remembers the “Morris Worm”[1] or, more importantly, the father of its creator, Bob Morris[2], a senior NSA scientist, who in an invited talk in 1995[3] gave two points many should really learn and take to heart in all areas of life where there is a duty of confidentiality.

In fact Robert Morris himself emphasized that the attendees should write down these points,

1, Never underestimate the attention, risk, money and time that an opponent will put into reading traffic.

2, Rule 1 of cryptanalysis – Always check for plaintext.

Both readily apply to the war on cryptography, and now the war on authentication, by current Governments rapidly heading down the authoritarian or Police State route…

As you now see, the total communications path between two parties always has three parts: the two insecure “plaintext” parts required for the “user interface” at either end, past the security end points, and the middle –hopefully– secure part of the communications channel between the two security end points.

Whilst in most cases not impossible to attack, the secure part of the communications channel, if correctly implemented, is well beyond current resources and so remains secure for some time to come. Which just leaves what occurs on the plaintext side of the security end points, at the user interface, for effective technical attacks.

Thus from an authoritarian Government view point the only sensible area to gain control of and maintain it, is either or both of the insecure user interfaces before/after the security end points that mark the secure part of the communications path.

With the use of modern technology by most communicating parties an authoritarian Government needs only to dictate what happens in those two insecure “plaintext” user interface areas to get everything they want to surveill nearly the entire population by “collect it all”.

It’s a point I keep making about those “Security Apps” on Smart Devices and mobile phones which are “insecure by design” because of the environment the users chose to use them in. People need to look beyond the “neat tech” to “the whole system” (which unfortunately can be complex).

Look at it this way, it matters not a jot how secure the secure part of the communications path between the security endpoints is, if an attacker can simply “end run” the security end points via the OS or other Apps a user has on their device.

And trust me when I say every Smart device/phone is riddled with such security vulnerabilities, by accident or design (see the comment in the talks[3] about designing with a “Walker inside”; it’s even more true today than it was back then). But why trust my word? Instead look at the number of “Walled Garden” supposedly checked and verified apps both Google and Apple have had to pull from their “app stores” because they had malware stealing people’s private information… The real question being “how much more is there they have missed?”…

Hopefully my above will convince rather more than just you that people need to think of “whole systems” of communications rather than “parts of systems”. Because, as our host @Bruce used to point out, “it’s the weakest link in the chain” where things fail.

Is there a solution to the security end point being in the wrong place?

Yes, and I’ve mentioned it before; I usually use the example of a “pencil and paper” / “Hand Cipher” One Time Pad (OTP). That way what goes into the “user interface” of the Smart device/phone is already securely authenticated, encrypted, or both, and beyond any “collect it all” technical attacks, as it has moved the security end points off of the technology. But… as always there is a downside: it does require the users to practice good operational security, something few can do even if their lives and liberty really do depend on it[4].

[1] Robert T. Morris Worm from 1988

[2] Robert H. Morris, NSA scientist and cryptographer.

He has also had attributed to him the quote,

“The three golden rules to ensure computer security are: do not own a computer; do not power it on; and do not use it.”

Which kind of makes the point behind secure paper and pencil hand ciphers for authentication and encryption.

[3] The invited Robert H. Morris talk was at Crypto 95, where there was also a talk by Adi Shamir. Both have some serious and almost timeless points to consider, that are just as valid today as they were a quarter century or more ago, which you can read at,

[4] You would think that “master criminals”, as portrayed by the authorities to be the “king pins” of “Serious Organised Crime” (SOC), would have high operational security, as they face potentially “life in jail” or worse. Well, as seen with the recent faux-security phones and the way they used them, they had not a jot of “Operational Security” practice amongst the lot of them. They are thus now fighting, via very expensive lawyers, to have the evidence collected from the faux-security phones ruled inadmissible in court… So the moral is effectively four fold,

1, Humans are lazy.
2, Humans are easily conned.
3, Humans are not to be trusted.
4, Most humans don’t learn from history.

Makes you wonder how we have survived as long as we have 😉

Hedo August 15, 2021 6:36 AM

@Clive Robinson,
1, Humans are lazy.
2, Humans are easily conned.
3, Humans are not to be trusted.
4, Most humans don’t learn from history.

Truth 100%
Confirmed by yours truly, TIME and TIME again,
and can be supported with mountains of evidence.
Truly sad, but true.
Makes me wonder, honestly, is this perhaps the next
thing/stage of/in human evolution? The 4 items
@Clive Robinson listed above, is that what human race
is racing towards? Because I remember, as a child,
many decades ago, I could trust many more people
than I do today. I had to adapt/evolve.
No bitterness whatsoever, just lookin’ you in the eye
and thinking to myself, sure, sure, whatever man.

echo August 15, 2021 7:54 AM


1, Never underestimate the attention, risk, money and time that an opponent will put into reading traffic.

There is this but also idle hands justifying budgets during periods of inactivity and tight financial environments.

A job ad blunder by the UK’s Ministry of Defence has accidentally revealed the existence of a secret SAS mobile hacker squad.

The secretive Computer Network Operations (CNO) Exploitation Unit had its cover blown on the MoD’s external job ad website, as spotted by the ever eagle-eyed Alan Turnbull of Secret Bases.

Based in Hereford, the £33k-per-year post was to be filled by an “extraordinary talented electronics engineer” [sic] to “work alongside some of the best scientists and engineers within defence and will be tasked with delivering prototype solutions directly to the soldiers and officers of a unique and specialised military unit.”


The attack was inspired by a separate cunning plan, dubbed Voltpillager, used to defeat Intel’s Software Guard Extensions (SGX), a similar secure enclave system for x86 microarchitecture.

As with SGX, the SEV attack relies on cheap, off-the-shelf components: a ~$30 Teensy µController (microcontroller) and a $12 flash programmer. Non-material prerequisites pose more of a challenge – they include insider access at a cloud company, an opportunity to attach wires to the server motherboard without arousing suspicion, and some technical proficiency.

The Register asked AMD to comment. A spokesperson pointed to the physical access requirement to underscore this is not a remote attack scenario but otherwise didn’t have anything to say.

I’m not saying this is what the SAS are up to, as I simply have no clue beyond wild speculation and guesses, but when you put one capability alongside another I expect the potential will make some people, as one ex-SAS member put it, “not get to sleep at night”.

While I am 100% certain this is a strategic and tactical possibility, it doesn’t mean they will be doing this simply to be nosey or to provide economic advantage; their priorities may be along different lines and, to paraphrase an ex-SAS member, “to support policy direction from the top and catch bad guys”. There is an implication in there of acting for the greater good, and there have certainly been times when they have been brought in to advise during humanitarian disasters, taken the initiative during terrorist incidents and saved innocent civilian lives, and captured war criminals so they could face justice in the courts.

I think intent matters and the use we put to capabilities and opportunities must be carefully weighed.

Winter August 15, 2021 8:03 AM

1, Humans are lazy.

Laziness is the engine of progress.
People who are not lazy tend to be inefficient and have low productivity

2, Humans are easily conned.

Without trust there is no community nor society

3, Humans are not to be trusted.

99%+ of people can be trusted

4, Most humans don’t learn from history.

As the proverb says, the secret of a good marriage is a bad memory

“Makes you wonder how we have survived as long as we have”

By being able to live in communities.

Thumbo August 16, 2021 1:43 PM

99%+ of people can be trusted

<1% can do a lot of damage.

I can trust 99% of open source developers.
But maybe not the <1% involved in distributing binaries.

Back to laziness: it's very easy to dnf install tons of random Linux packages that might ease your work.

But there are no guarantees nothing has been snuck into the compiled code, especially underfunded projects with few overworked maintainers.
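One basic end-user check, sketched below, is comparing a downloaded artifact against a published SHA-256 checksum. This only helps if the checksum itself arrives over a channel the attacker can't also tamper with (e.g. a signed release note), and it is no substitute for validated builds:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a downloaded artifact in chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path: str, published_hex: str) -> bool:
    # True only if the local file hashes to the published digest.
    return sha256_of(path) == published_hex.lower()
```

A mismatch tells you *something* changed between build and download, though not what or where.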

I don't remember the name, but I've seen at least one project working on validating builds to make sure nothing was tampered with in automated compile and distribution. So there must be someone convinced of the threat there.

Also, you know, the <1% of employees who sneak off with your secrets, e.g. Snowden.

vas pup August 16, 2021 4:35 PM

AI and Law

=Could your next lawyer be a robot? It sounds far fetched, but artificial intelligence (AI) software systems – computer programs that can update and “think” by themselves – are increasingly being used by the legal community.=

Read the whole article if interested in details.

echo August 16, 2021 6:15 PM

@vas pup

Having done both code and some law, the way some people apply the law in practice really grates. It also annoys some judges, because what the law says and what some job titles and institutions think it says (or try to get away with) really does strain things.


vas pup August 18, 2021 5:56 PM

@echo Thank you for your input.

On the other hand, AI would make decisions based primarily on facts, not bias or emotions. At least if the training data was really unbiased.

Please take a look at 2 min video related to other application
