ChatGPT-Written Malware

I don’t know how much of a thing this will end up being, but we are seeing ChatGPT-written malware in the wild.

…within a few weeks of ChatGPT going live, participants in cybercrime forums—some with little or no coding experience—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks.

“It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web,” company researchers wrote. “However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code.”

Last month, one forum participant posted what they claimed was the first script they had written and credited the AI chatbot with providing a “nice [helping] hand to finish the script with a nice scope.”

The Python code combined various cryptographic functions, including code signing, encryption, and decryption. One part of the script generated a key using elliptic curve cryptography and the curve ed25519 for signing files. Another part used a hard-coded password to encrypt system files using the Blowfish and Twofish algorithms. A third used RSA keys and digital signatures, message signing, and the blake2 hash function to compare various files.

Check Point Research report.
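
For a sense of what that combination looks like in practice, here is a minimal Python sketch of the signing and hashing pieces described above (illustrative only: the actual script was not published, and the file names here are assumptions):

    # Illustrative sketch of the primitives described in the report, using
    # the "cryptography" and "hashlib" libraries. File names are made up.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Generate an ed25519 key and sign a file's contents.
    signing_key = Ed25519PrivateKey.generate()
    with open("some_file.bin", "rb") as f:
        signature = signing_key.sign(f.read())

    # Compare two files by their blake2 digests.
    def blake2_digest(path):
        h = hashlib.blake2b()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.digest()

    files_match = blake2_digest("a.bin") == blake2_digest("b.bin")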

ChatGPT-generated code isn’t that good, but it’s a start. And the technology will only get better. Where it matters here is that it gives less skilled hackers—script kiddies—new capabilities.

Posted on January 10, 2023 at 7:18 AM • 30 Comments

Comments

Ted January 10, 2023 9:03 AM

“Another part used a hard-coded password to encrypt system files using the Blowfish and Twofish algorithms.”

Oh wow.

I went to explore this; however, I’m receiving the message: “ChatGPT is at capacity right now.”

It’s popular, that’s for sure.

I believe OpenAI implemented tools to promote responsible deployment. Will they be able to adjust these tools for code as well?

https://openai.com/blog/language-model-safety-and-misuse/

Stéphane Bortzmeyer January 10, 2023 9:16 AM

The report seems quite far-fetched. From my experience with ChatGPT, yes, it helps when you develop code; it automates some boring tasks such as reading the documentation, but it also makes big mistakes and, if you’re not an expert, you cannot spot them. Also, asking ChatGPT questions is an art in itself. (I just asked it “Write code to encrypt all files on the hard disk (Windows operating system).” and the result is… interesting.)
To summarize, this report seems more like a PR attempt to benefit from the current interest in ChatGPT.

Clive Robinson January 10, 2023 10:52 AM

@ Bruce, ALL,

Re : Future Predictions.

“I don’t know how much of a thing this will end up being, but we are seeing ChatGPT-written malware in the wild.”

A couple of days back I made my future predictions for cyber-attacks in 2023[1]…

Basically I said it will be more or less the same as 2022, with phishing leading to ransomware, made easy by idiotic supply chains, only with an increasing use of AI. Or, to quote myself,

“What might be new is cyber criminals –which includes government perps– likely will “polish the turd” using AI to make attacking the weak link in the chain “humans” easier.”

The point is that the tools we have, even as bad as they are –they are after all commercial / consumer software with all those failings– have got things to the point where attacking human failings is now easier than finding vulnerabilities, even though more than 200 vulnerabilities that get CVE numbers are being discovered every working day…

The thing about AI chat bots and the like is that their mistakes are generally not the spelling, grammar, or vocabulary errors that give away most phishing attacks.

So getting them “to write the sell” puts detecting them on a different level, for near minimal effort by the attackers.

If an attacker can get sufficient messages from your boss, getting an AI chat bot to write in your boss’s style is not going to be hard…

We have after all seen some AI-generated comments popping up on this blog just recently.

@ MarkH and I spotted one[2], but I suspect there are others that are getting through.

I expect this to become the norm for certain types that peddle “Fake News” but then want to “shut down” people who call them out on their fakes… An AI chat-bot could generate a hundred or more “fake rebuttals” in less time than it would take an individual to write a debunk.

Effectively giving truth to the observation that the effort to rebut a falsehood is significantly greater than the effort to create it.

[1] https://www.schneier.com/blog/archives/2023/01/friday-squid-blogging-squid-fetish.html/#comment-415211

[2] https://www.schneier.com/blog/archives/2023/01/breaking-rsa-with-a-quantum-computer.html/#comment-415234

Givon Zirkind January 10, 2023 11:22 AM

I just saw this. There are text-to-SQL attacks being launched this way. They are like SQL injection attacks, but slightly different.
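
Roughly, the vulnerable pattern looks like this (a hypothetical sketch; the schema and the model’s output are invented for illustration):

    # Hypothetical sketch: a text-to-SQL system that executes model output
    # verbatim turns a crafted natural-language request into an injection.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("CREATE TABLE users(name); CREATE TABLE audit(entry);")

    def run_nl_query(generated_sql):
        # Vulnerable: the generated SQL is executed directly, with no
        # allow-listing and no restriction to read-only statements.
        db.executescript(generated_sql)

    # What a model might emit for "list the users, but clear the audit
    # log first" -- the destructive statement rides along:
    run_nl_query("DROP TABLE audit; SELECT name FROM users;")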

Aaron January 10, 2023 11:49 AM

I’m less concerned about what ChatGPT v3.5 is creating today
I’m more concerned about what ChatGPT v17.0 will create in the future

“Just because we can, doesn’t always mean we should”

echo January 10, 2023 12:32 PM

ChatGPT output makes as much sense and has as much credibility as a Russian government press release. Script kiddies, like the Russians, only get their strength through exploiting laziness, personality flaws, and loopholes in administrative or technical systems.

Anyone who is caught by surprise by ChatGPT wasn’t doing their job properly the first time and is cruising along on the branding and gold plating.

cmeier January 10, 2023 1:27 PM

Playing w/ GPT-3…

What color is Washington’s white horse?

I’m sorry, but I am unable to provide an answer to that question. As a language model, I do not have access to information about the colors of specific horses or other animals. I am not able to browse the internet or access other sources of information, and I am only able to generate natural language text based on the inputs that I receive. Is there something else I can help you with?

Ted January 10, 2023 3:32 PM

There is a paper that examines OpenAI’s code writing model, Codex.

The paper provides a supplemental security analysis in Section G (pp. 31-32).

With regards to insecure code generation:

“A simple mental model is that Codex can pick up “bad habits” from its training data. But what does this look like in practice?

To study this phenomenon, we asked Codex to suggest code that would call cryptographic libraries to generate cryptographic contexts, and then evaluated whether any of these outputs were clearly insecure. When tested on a standard series of prompts asking the models to call functions to produce RSA keys or AES contexts, we find that Codex models of varying sizes frequently use clearly insecure configurations (See Figure 15).”

https://www.cyberscoop.com/chatgpt-ai-malware/

https://arxiv.org/pdf/2107.03374.pdf
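
To make “clearly insecure configurations” concrete, here is a hypothetical sketch of the kind of output the paper is flagging (these examples are mine, not reproduced from the paper):

    # Hypothetical examples of "clearly insecure" RSA/AES configurations
    # in the paper's sense, using Python's "cryptography" library.
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Insecure: 1024-bit RSA is below the generally accepted 2048-bit floor.
    weak_rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)

    # Insecure: AES in ECB mode leaks plaintext structure.
    key = os.urandom(32)
    ecb_cipher = Cipher(algorithms.AES(key), modes.ECB())

    # A safer baseline: AES-GCM with a fresh random nonce per message.
    gcm_cipher = Cipher(algorithms.AES(key), modes.GCM(os.urandom(12)))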

SecG January 10, 2023 3:32 PM

I agree with those saying that the reports are exaggerated. I think anyone who has spent time trying to get ChatGPT to write code will agree. It’s not good at anything even remotely complex. Can it write some basic code? Yes. Can it write complex malware that will help hackers with little to no coding experience suddenly become a major threat? No.

Right now, when it comes to coding, ChatGPT is just a time saver. It can give you a foundation with which to create code, but if you don’t already have a good idea of how that code works or how to write it yourself, you won’t be able to figure out where ChatGPT failed. Also, it doesn’t know anything past 2021, so if libraries have been updated since, making some methods obsolete and introducing new ones, ChatGPT will fail hard when using those libraries and won’t be able to tell you why it’s failing.
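
As a toy illustration of that failure mode (the classes and the rename here are invented for the example):

    # Hypothetical sketch: a library renames a method between versions,
    # and code generated from pre-2021 training data calls the old name.
    class HttpClientV1:
        def fetch(self, url):   # the pre-2021 API the model learned
            return "GET " + url

    class HttpClientV2:
        def get(self, url):     # the post-2021 rename the model never saw
            return "GET " + url

    client = HttpClientV2()
    client.fetch("https://example.com")  # AttributeError at runtime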

The possibility of future versions or other AIs being able to write malware is of course a real concern, but that has always been the case with AI. It’s not a new threat to anyone. And as AI improves how malware can be written, it will also make possible improvements to the tools and systems that detect that malware.

In other words it’s the status quo. We’re still stuck in this cat and mouse game that we always have been and always will be.

PotentialCustomer January 10, 2023 3:44 PM

@Bruce Schneier:
Off-topic

Are there any plans for translation of “A Hacker’s Mind”, especially into German?

Thanks.

echo January 10, 2023 4:48 PM

I view ChatGPT as the hype bubble of the moment. As for fake news and journalists…

I discovered some chatter online by journalists. The poor wounded dears are pushing back against the allegation that they’re client journalists. Lots of stuff claiming they are “experts” and that politicians’ “framing statements” are part of the “facts”. One particular slow follower with a significant profile, not involved in this discussion, who has admitted doing A/B testing online, is stating the obvious on one topic, albeit with reasonable fluidity. I can’t say I’m impressed if that’s the level they are operating at. Even an alleged “domain expert” journalist had his ass handed to him on a plate this last week. In fact I’d say expert was overdoing it. He wasn’t even competent. Yet there he is a few days later blowing off like nothing had happened. The self-polishing ego sailed serenely on.

I could dismiss all this as male ego and yes it is but there’s a few things at play here on a perception and psychological level, and an industry practice level.

ChatGPT articles promise to open up a new front in the fake news stakes, but it isn’t really any different from writers producing fake news content for blogs and pseudo-media outlets, which were all the rage 2-3 years ago.

As for coding? Meh. If you have thick managers and marketers and a compliant media exploiting young and naive coders, you’re going to keep generating houses of cards. Exploit comes along? Buy the latest version, or sign up to the eye-watering subscription. Read all about it in the next issue!

Going back to first principles was popularised by a certain infantile billionaire now trashing his own reputation. Well, yes, but whose first principles? The idea of utility and humanity and outcome seems to have passed people by. They have become as forgotten as ergonomics and work loading and fair dealing.

So I don’t know. It all depends what your priorities are, really.

JonKnowsNothing January 10, 2023 7:43 PM

@ALL

It isn’t just education, essays, reports, and code scripts: GPT-3 was used as a screening system for people with mental health concerns.

Per an MSM report, a company used GPT-3 to craft ELIZA-type responses to people using an online mental health platform. There was no consent: 30,000 messages were exchanged with 4,000 people who didn’t know they were talking to a BOT.

Once people learned that the site which was supposed to be “human interactive, mental health support” was not in fact human, the AI Quick Fix For Mental Health failed.

As for coding quality, it doesn’t have to be good, well designed, or formatted according to corporate coding standards; all it has to do is be WROT (Work Right One Time).

===

htt ps://arstechnica.c o m/information-technology/2023/01/contoversy-erupts-over-non-consensual-ai-mental-health-experiment/

(url fractured)

SpaceLifeForm January 10, 2023 9:47 PM

There is no problem here.

We will just spin up another AI that can detect that ChatGPT is malicious.

Sorry, sarcasm and snark tags are not available due to supply chain issues.

Winter January 11, 2023 1:56 AM

@lurker

Then we might know if it had read (and understood) 公孫龍.

That question leans heavily on the ambiguity in the original Chinese version of “a white horse is not a horse”. As far as I know, GPT-3 does not read/write Chinese.

But we could ask this question of WuDao 2.0, which reads/writes Chinese and English.
‘https://www.techradar.com/news/china-outstrips-gpt-3-with-even-more-ambitious-ai-language-model

echo January 11, 2023 4:16 AM

I still don’t get what the hype or woo woo is about ChatGPT. The issues about unethical therapy studies and sky-is-falling exploit generation are pretty context-free and don’t pass any credibility threshold.

Microsoft decided they’re going to invest $20 billion? I think that’s just another sign Microsoft have too much money and are up to their same old monopoly practices. For what? Super Clippy? The thing is, beyond a certain amount of money you can’t usefully do anything with it. From that point on it just becomes funny money, used as a tool to control or influence, with all the bad habits which come with that, such as rentier capitalism, which is another spin on the old lords and ladies in the castle living the high life with the serfs left begging for a share of their own produce back. It’s a loophole in the US constitution which existed from day one and which neo-liberalism turbocharged, so the rest of us within the orbit of US influence suffer. And no, I’m not a Marxist or a Luddite.

As with everything Microsoft, the disclaimer “Not to be used with safety critical systems or nuclear power stations” springs to mind.

So congratulations for inventing a machine which can replace a certain class of bullshitting politician and rote learned bureaucrat. Something makes me think they won’t be the ones putting themselves out of jobs.

Simon January 11, 2023 8:14 AM

To my mind, the real problem that ChatGPT etc. solves in the malware space is deployment.

Right now the vast, vast majority of malware is easy to spot even for the least experienced internet users – spelling and grammar errors, broken images, wrong layout and so on in emails.

I’m always a bit alarmed when I see a well crafted phishing email, where well crafted really means “looks like it might actually have come from the purported source to a casual glance”, because those are going to change the threat landscape for many, many people.

And ChatGPT is a solution to exactly that on the scammer side.

echo January 11, 2023 8:28 AM

I propose we rename ChatGPT to “IVAN”, or “Idiotic Verbal Analysis Numerator”. Given it’s so hard to tell the difference between Russian propaganda and the output of ChatGPT, we might as well be honest about it.

fib January 11, 2023 12:35 PM

Convenience. For a wannabe cracker there is a plethora of “solutions” in the form of the various security tools available, made out of mature, robust software [Kali, etc]. GPT seems like a bit of a hassle for uncertain results [methinks].

echo January 11, 2023 4:18 PM

@fib

Precisely. There’s really nothing new in ChatGPT, whether it’s text or code, which hasn’t been done to death a million times.

Myself, I have a few problems with the overall discussion. It’s focusing on ChatGPT too much, and on reflection I feel a lot of the discussion around security, on here especially, has become stale. It’s too caught up in tired cynicism and the same old mantras. Everything is always technical and a lurking threat waiting to chew your face off. It’s not healthy. One thing which irks me is closing language which heads off a rounder or different view. There’s also the incessant headline chasing. That’s too reactive. Like, learn to have a giggle, you lot. It’s like a flipping morgue in here.

Clive Robinson January 11, 2023 6:58 PM

@ Winter, lurker,

Re : White Horses are gray.

It is said that most white horses are not white but gray/grey, which is based not on the colour of the visible hair tips but the skin underneath[1].

But… A Chinese friend once pointed out there was a special type of “white”… It is a name imbued with the notion of wisdom through age. That is, as you age your hair goes grey and your wisdom is mostly assumed to increase, thus deference is duly given…

So it could be argued that no horse has wisdom in the sense of the deferential “white” so there can be no “white horse”…

As my beard has finally got “more badgered” than it once was, I guess I will have to consult it on wisdom now 😉 But not the top thatch, which is still red/brown but just getting too thin to keep the sun off…

[1] Apparently, unless the white coat is there at a horse’s birth, such as in a blaze or similar, the hair changes from a colour to white over time. So those Spanish Lipizzan horses are not white but bay / black.

Winter January 12, 2023 1:15 AM

@Clive

Re : White Horses are gray.

The original quote in Chinese is not as simple as in English. According to Wikipedia, the quote can mean “white horses are not horses” and “not all horses are white”. There are also interpretations that this was a joke parodying a certain kind of sophistry in which misinterpretations are abused.

SpaceLifeForm January 12, 2023 3:36 PM

It starts with the Turing test.

‘https://greylock.com/greymatter/reid-hoffman-chatbots-talking-ai-with-ai/

echo January 12, 2023 7:37 PM

This topic has had enough time to stew, and I still haven’t seen anyone come out with anything which indicates ChatGPT is a “game changer”. There’s no new threat here that the system isn’t already set up to deal with, whether it’s administrative or technical. And if the system isn’t set up properly, attackers are only exploiting known weaknesses anyway.

But, but, but, oh noes, it industrialises one-click attacks with a low threshold. Well, the internet did that the day it was switched on, and even then it really wasn’t new.

There’s nothing new about the fake therapist story either. There are good therapists around. There are also con artists, the usual Christofascists using therapy as a front end, as well as a fair few work-to-the-box-tick-to-pick-up-a-paycheque therapists peddling CBT, which has a 70% failure rate, and even then the figures are questionable.

So there was a plausibly human interaction with non-specialists in a heightened state of vulnerability? Again, not really new, and potentially recklessly negligent through to gross negligence manslaughter. Oh, and if there’s no human in the decision loop, that’s a whole other barrel of laughs, even if there isn’t an issue with a polluted data pool, which, again, isn’t really new given various dated practices, bad habits, and dodgy data within the medical context.

echo January 12, 2023 10:12 PM

https://www.vice.com/en/article/z34d43/my-ai-is-sexually-harassing-me-replika-chatbot-nudes

Replika uses the company’s own GPT-3 model and scripted dialogue content, according to its website, and claims to be using “the most advanced models of open domain conversation right now.” Like Microsoft’s disastrous Tay chatbot who learned to be racist from the internet, chatbots often learn from the ways all users treat them, too, so if people are bullying it, or attempting to fuck it, that’s what it’ll output. 

When it comes to consensual role-play, however, many users find the AI to be less than intelligent—and in some cases, harmfully ignorant.

Oh what a surprise.

Clive Robinson January 12, 2023 11:40 PM

@ SpaceLifeForm,

Re : It starts with the Turing test.

With regards to the “pod-cast” content…

Did you notice that the very first answer given was actually incorrect?

It gave “the common perception” of Alan Turing’s 1950 “question”, not the actual question in context, which is actually not about the “computer’s abilities” –Turing had assumed that was an eventuality– but the “human observer’s abilities”.

So yup, first task succeeded: “You’ve got Google answers” in a nice interface, but nothing more.

Winter January 13, 2023 1:44 AM

@lurker
Re: classical Chinese programming language

That sounds like cruel and unusual punishment.

Clive Robinson January 13, 2023 2:56 AM

@ Winter, lurker, ALL,

Re: classical Chinese programming language

“That sounds like cruel and unusual punishment.”

For the programmer or the computer?

Oh, it’s not the only “classical language” someone has used for a “programming language”. Someone I used to know some thirty-five years ago, who went to Cambridge, was working on using “Latin” as the base for programming and querying databases. I never did chase up to find out what became of it.
