Comments

Winter February 20, 2025 9:07 AM

Reflections on Trusting Trust has finally come true. We can now create code that generates code that implants backdoors in everything we create. Soon, LLMs will be used not only to write compilers, but also to do the compilation.

Whatever is put into Copilot will end up in all the software we write.

Ken Thompson was a real visionary. But we knew that already.

tfb February 20, 2025 9:07 AM

It is a lost hope, but I wish people would stop calling these things ‘open source’. The weights of a model are not its source code: they’re its machine code. Publishing the weights is not publishing the source code.

Clive Robinson February 20, 2025 12:56 PM

@ Bruce, ALL,

With regards,

Scary research

Putting backdoors into others’ code is now a fairly standard “supply chain attack”. It is predictable and can, in theory, be mitigated (think of an extension of the old AV techniques).

What I find more disquieting is the fact that “anything could be put in” not just “backdoors” that are at least recognisable by code signature or behaviour signature.

Do you remember back when “binary chemical weapons” were the “shock horror” story? Put simply, it opened up the possibility that two fairly safe and useful chemicals, when mixed, would become deadly.

So consider the idea of malware that consists of two separate pieces of software. Run either on its own and you get useful functionality; run the two together and something nasty happens.

That is, neither piece of software contains code that would get flagged as concerning under code audit/review, but put them together and then, as the Platters used to sing,

“Smoke gets in your eyes”.

I developed a prototype of this several years ago. I used a shared resource (a PostScript printer) for signalling and data passing to get KeyMat from a secure process to an insecure process, in effect using a time-based side channel as the comms channel.
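For a sense of the general shape of such a channel, here is a minimal sketch. It is not the prototype described above (that used a PostScript printer); it merely assumes a POSIX system, a scratch file standing in for the shared resource, and time slots long enough to swamp scheduler jitter. One process signals each bit by holding, or not holding, the resource during a slot; the other decodes the bit from how long acquisition takes.

```c
/* Toy covert timing channel over a shared resource (here a lock file).
 * Assumptions: POSIX, both processes can open /tmp/shared_resource, and
 * both are started within the same time slot (a real channel would need
 * a preamble for synchronisation). Illustration only. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/file.h>
#include <time.h>
#include <unistd.h>

#define SLOT_MS 200                       /* one bit per time slot    */
#define SHARED  "/tmp/shared_resource"    /* stand-in shared resource */

static void sleep_ms(long ms) {
    struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}

static long now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

/* Sender: hold the resource for the whole slot to signal 1, leave it free for 0. */
static void send_bits(const char *bits) {
    int fd = open(SHARED, O_CREAT | O_RDWR, 0666);
    for (const char *p = bits; *p; p++) {
        if (*p == '1') flock(fd, LOCK_EX);
        sleep_ms(SLOT_MS);
        if (*p == '1') flock(fd, LOCK_UN);
    }
    close(fd);
}

/* Receiver: at each slot boundary, time how long it takes to grab the
 * resource. A long wait means the sender held it during the slot -> 1. */
static void recv_bits(int nbits) {
    int fd = open(SHARED, O_CREAT | O_RDWR, 0666);
    for (int i = 0; i < nbits; i++) {
        long t0 = now_ms();
        flock(fd, LOCK_EX);               /* blocks while sender holds it */
        flock(fd, LOCK_UN);
        long waited = now_ms() - t0;
        putchar(waited > SLOT_MS / 2 ? '1' : '0');
        if (waited < SLOT_MS) sleep_ms(SLOT_MS - waited); /* stay roughly slot-aligned */
    }
    putchar('\n');
    close(fd);
}

int main(int argc, char **argv) {
    if (argc > 1 && strcmp(argv[1], "send") == 0)
        send_bits(argc > 2 ? argv[2] : "1011");
    else
        recv_bits(4);
    return 0;
}
```

Run the receiver in one terminal and `./a.out send 1011` in another; neither half looks alarming on its own, which is the point.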

Mexaly February 20, 2025 1:19 PM

Could LLMs be the Achilles’ heel that we thought Shor’s algorithm would be?
Like water on pavement seeking every crack and crevice.

John Freeze February 21, 2025 7:22 AM

@O it’s not an LLM that has to come up with a backdoor.
It’s the random dev who takes the generated backdoor hidden in code that they don’t understand… and then commits it into a library.
I suppose most code today is still written manually, but it looks like that percentage is going to decrease seriously in the near future.

Jn February 21, 2025 9:46 AM

Layperson here. But couldn’t the same LLM parameters be used to identify back doors? Or determine where backdoors would most likely be introduced?

Clive Robinson February 21, 2025 10:55 AM

@ Mexaly, ALL

With regards,

“Could LLMs be the Achilles’ heel that we thought was the Shor algorithm?”

Possibly not, but they certainly appear to be an Achilles’ heel for businesses that are trying to use them to “up productivity”, “reduce head count”, “increase competitive advantage” and all that other “good stuff” nonsense that has been spouted.

I cannot say what might happen in the future in the very unlikely event “Gen AI” happens, but the current AI LLM and ML systems are looking very much like “a California sailboat”, that is,

“A hole in the water into which you pour endless amounts of money.”

In fact a new pithy phrase has been coined,

AI : “Success Theater”

Which I guess joins “Security Theater” on the increasing list of “money wasters” that don’t even deliver “Shareholder Value”.

Read more at,

https://garymarcus.substack.com/p/genai-in-two-words-success-theater

Oh, and a reason as to why AGI is more distant now than it was before… The cold hand of reality bites 😉

https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/

You are not anonymous February 21, 2025 11:56 AM

Thank you, Clive, for that link about AGI. Finally the voice of reason here.

You can always be suspicious of any CEO predicting his own company will do surprisingly great things in the very near future. To get investment, he or she will always have more of an incentive to make everyone think such a thing than to make it actually happen. Even if his own tech people tell him physics says it can’t happen, he will say it anyway (oh, and “you’re fired for disagreeing with me! I’ll just find someone else to replace you who will make my [God-like] prediction so!”). I have experienced this kind of thing first hand. It’s real.

Gergely Toth February 21, 2025 1:17 PM

This reminds me of the Underhanded C Contest:

The Underhanded C Contest is an annual contest to write innocent-looking C code implementing malicious behavior. In this contest you must write C code that is as readable, clear, innocent and straightforward as possible, and yet it must fail to perform at its apparent function. To be more specific, it should perform some specific underhanded task that will not be detected by examining the source code.

Now, instead of humans, this will be done by LLMs.
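For a flavour of what “fail to perform at its apparent function” means, here is a crude, self-contained illustration (not an actual contest entry, and assuming nothing beyond the standard C library): the check below reads as a perfectly ordinary password comparison, yet an empty password, or any prefix of the stored one, passes.

```c
#include <stdio.h>
#include <string.h>

/* Looks like an ordinary credential check, but the comparison length is
 * taken from the *supplied* string, so strncmp compares only that many
 * bytes: an empty password (zero bytes compared) or any prefix of the
 * stored password is reported as a match. */
static int check_password(const char *supplied, const char *stored) {
    return strncmp(supplied, stored, strlen(supplied)) == 0;
}

int main(void) {
    printf("correct: %d\n", check_password("hunter2", "hunter2")); /* 1     */
    printf("empty  : %d\n", check_password("",        "hunter2")); /* 1 (!) */
    printf("prefix : %d\n", check_password("hunt",    "hunter2")); /* 1 (!) */
    printf("wrong  : %d\n", check_password("letmein", "hunter2")); /* 0     */
    return 0;
}
```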

ResearcherZero February 23, 2025 1:22 AM

@Jn

The problem is that an introduced backdoor could be picked up anywhere and added to a package that makes its way into the supply chain. There are many opportunities for this to take place. The difficulty lies in identifying all of the many places it could happen.

It’s likely data voids will continue to be a security problem. A very real Achilles’ heel.
That and Hollywood conspiracies. Film routinely distorts or completely misrepresents fact.

https://datasociety.net/library/data-voids/

ResearcherZero February 27, 2025 2:32 AM

@Clive

“anything could be put in”

LLMs often also lack great swathes of information about specific subjects which can mislead people. Current applications happily pretend entire cultural populations never existed.

It is pretty clear that LLMs already erroneously offer up information, whether it exists or does not. The following is a little off subject, but it also highlights common AI errors.

Hudson Rock’s BlackBastaGPT fabricated, regurgitated other victims’ chats and hallucinated…

‘https://www.theregister.com/2025/02/25/southern_water_black_basta_leak/

The water utility declined to say if it did or did not pay out the ransom.
https://www.computing.co.uk/news/2025/security/ransomware-attack-on-southern-water-cost-4-5-million
