AI Vulnerability Finding

Microsoft is reporting that its AI systems are able to find new vulnerabilities in source code:

Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison.

Additionally, 9 buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device.

Nothing major here. These aren’t exploitable out of the box. But that an AI system can do this at all is impressive, and I expect their capabilities to continue to improve.
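The integer-overflow class mentioned in the quote can be illustrated with a short sketch. This is a hypothetical example of the general pattern (a 32-bit size calculation wrapping around, so a buffer is allocated far smaller than the data later copied into it), not the actual GRUB2 code; all names are invented:

```python
# Hypothetical illustration of the integer-overflow pattern reported in
# filesystem parsers: a 32-bit size computation wraps, so the allocation
# is far smaller than the data later copied into it. Not GRUB2 code.

MASK32 = 0xFFFFFFFF  # simulate C's 32-bit unsigned arithmetic

def alloc_size(entry_count: int, entry_size: int) -> int:
    """Buggy: multiplies attacker-controlled fields in 32-bit arithmetic."""
    return (entry_count * entry_size) & MASK32

def alloc_size_checked(entry_count: int, entry_size: int) -> int:
    """Fixed: detect the wrap before allocating."""
    total = entry_count * entry_size
    if total > MASK32:
        raise OverflowError("size calculation overflows 32 bits")
    return total

# A crafted filesystem image can pick fields so the product wraps:
wrapped = alloc_size(0x1000_0001, 0x10)  # true product is 0x1_0000_0010
print(hex(wrapped))                      # 0x10 -- a 16-byte buffer
```

A parser that then trusts `entry_count` when copying data writes far past the 16-byte allocation.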

Posted on April 11, 2025 at 7:04 AM • 25 Comments

Comments

Davide April 11, 2025 7:58 AM

Hmm… Microsoft sells AI services… Microsoft is listed on the stock exchange… and Microsoft says things that can help it sell more licenses and raise its share price… hmm…

But isn’t this like the Microsoft quantum computing chip that probably does not exist and cannot exist?
https://tech.slashdot.org/story/25/02/19/1651235/microsoft-reveals-its-first-quantum-computing-chip-the-majorana-1
https://slashdot.org/story/25/03/07/1350230/microsoft-quantum-computing-breakthrough-faces-fresh-challenge
https://slashdot.org/story/25/03/19/088253/microsoft-quantum-computing-claim-still-lacks-evidence

Or this AI App that is managed 100% by humans in the Philippines and 0% by AI?
https://techcrunch.com/2025/04/10/fintech-founder-charged-with-fraud-after-ai-shopping-app-found-to-be-powered-by-humans-in-the-philippines/

There are other cases where something AI-based was totally or partially managed by real humans.

Paul Sagi April 11, 2025 8:06 AM

I’ll repeat what I said elsewhere months ago: AI can enable script kiddies to carry out what were previously nation-state-level attacks, and can reduce the resources nation-states require for attacks.

Clive Robinson April 11, 2025 8:40 AM

@ Davide, ALL,

With regards,

“There are other cases where something AI-based was totally or partially managed by real humans.”

As far as I am aware, ever since “MS Tay”, all current chatbot, AI LLM, and ML systems have had “guide rails” and similar as a primary requirement.

Such things are “managed by real human” input every time because ML lacks not just context but knowledge of anything other than a “next likely word profile”.

So a current LLM and ML system without such human constraint can be maliciously poisoned or output random rubbish that sometimes is poison.
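Clive’s “next likely word profile” can be illustrated with a toy bigram model. This is a deliberately minimal sketch of the statistical idea only, not how production LLMs actually work:

```python
# Toy bigram "next likely word" model: the system only knows which word
# most often followed the previous one in its training text -- no
# context or knowledge beyond that profile.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def next_word(table: dict, word: str) -> str:
    counts = table.get(word.lower())
    return counts.most_common(1)[0][0] if counts else "<unknown>"

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(next_word(model, "the"))  # "cat" -- seen twice after "the"
```

Nothing in the table encodes what a cat or a mat is; it is purely a profile of which token followed which.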

anon April 11, 2025 10:00 AM

Why didn’t they start with Windows Server? I’m sure we’d all appreciate only having to perform one or two more updates until we upgrade to Windows Server 2028.

Adam Shostack April 11, 2025 10:45 AM

Ok, but what’s the false positive rate? Did they run traditional SAST on these? Microsoft has been investing in non-AI tooling for at least 20 years, back to PREfix and PREfast.

(They mention an 80% FP rate in the JFFS2 section, which is, uhhh, not very good.)

lurker April 11, 2025 2:07 PM

@Bruce

My first skim of the headline above was

So? Somebody’s found another vulnerability in AI?

which is somewhat more likely than anybody using current LLM AI to reliably find vulnerabilities in code. But anyhow, MS takes an easy look at open source code first, before sorting out their own spaghetti box …

Clive Robinson April 11, 2025 2:15 PM

@ lurker, ALL,

You note,

“MS takes an easy look at open source code first, before sorting out their own spaghetti box…”

Have you thought why that might be?

If MS can undermine GRUB2 what does that gain them?

Have a think through the implications of MS getting rid of GRUB…

Clive Robinson April 11, 2025 3:05 PM

@ Bruce, ALL,

One of the “oh so scary sounding vulnerabilities” is a crypto side channel (timing) attack,

Given as,

CVE-2024-56738

OK so there is a time based side channel.

But the important question is,

“What does this actually get you?”

Let me put it this way,

“A V8 engine is considered very powerful but if it’s up on the bench it gets you nothing.”

A timing side channel is usually used to elicit “confidential” information that cannot be obtained in any other way (like secret keys).

Is anything secret in GRUB2? = No.
Is anything secret in input? = No.

There are quite a few other,

“Is anything secret”

Questions to which as far as I’m aware the answer is “NO”.

Off the top of my head, and without spending a lot of time trawling through the code: in the only situations where you might get information from this side channel, there are way, way easier ways to get it.

If anybody can think of any way this side channel might give an advantage over other easier ways, “shout up”.
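For readers unfamiliar with the class of bug under discussion: below is a hedged sketch of what a timing side channel in a comparison looks like, and the standard constant-time fix. This is illustrative of the general class only, not the actual GRUB2 code behind CVE-2024-56738:

```python
# Sketch of the timing side-channel class: a compare that exits at the
# first mismatch leaks (via runtime) how many leading bytes matched.
# Illustrative only -- not the GRUB2 code behind CVE-2024-56738.
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    """Early exit: runtime grows with the length of the matching prefix."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # returns earlier for earlier mismatches
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    """The standard fix: examine every byte regardless of mismatches."""
    return hmac.compare_digest(a, b)

print(leaky_compare(b"secret", b"secreT"))          # False, but leaks timing
print(constant_time_compare(b"secret", b"secreT"))  # False, fixed runtime
```

The attacker’s question, as Clive notes, is whether anything worth guessing byte-by-byte sits behind the comparison in the first place.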

ResearcherZero April 12, 2025 4:00 AM

Leaked secrets and credentials for non-human identities remain active for years.

‘https://www.gitguardian.com/state-of-secrets-sprawl-report-2025

ResearcherZero April 12, 2025 4:15 AM

@Davide

They should probably patch that bug in Windows Common Log File System Driver for Win 10.

And that problem with improper memory locking for sensitive information in the Windows Update Stack, the Network Configuration Operators group for AD, and the exploitation of Service Group Order to bypass EDR services using BYOVD.

The issue with abusing folder permissions is probably a pretty hard one to fix.

But I think they did fix how Windows stores info unsafely in memory for the TCP/IP stack.

However, perhaps AI could be useful for finding problems that were overlooked. Like this one, which uses symbolic links to connect a user filesystem to root.

‘https://www.fortinet.com/blog/psirt-blogs/analysis-of-threat-actor-activity

ResearcherZero April 12, 2025 4:24 AM

@Clive Robinson

Well yeah, you are right as Microsoft handed over their source to nation states so they could have a lookidy see-see at what they could find lurking within.

Not that some of the zero days in Windows were getting a fix anyway. Like the shortcut issue, which has been a problem for a very long time and is routinely exploited by APTs.

Besides, fixing the structural design issues in Windows would take a complete rebuild.

Ismar April 12, 2025 4:51 AM

@Adam – as someone who uses similar tools as part of my job I can attest to the high proportion of false positives. The main reason is that these tools don’t have enough context of how the scanned applications are used and they prefer to err on the side of caution.
BTW big fan of your book on threat modelling

Grima Squeakersen April 12, 2025 8:40 AM

@Clive Robinson
re: “So a current LLM and ML system without such human constraint can be maliciously poisoned or output random rubbish that sometimes is poison.”

I’m recalling a post of yours on AI from a couple of weeks ago in which your thesis was (I paraphrase) “current AI is nothing but an expert system that lacks actual experts”. How could surreptitiously substituting humans for the silicon decision-making apparatus in the process improve results unless those humans had specifically applicable expertise? Which expertise I assume CS-type contractors in the Philippines are unlikely to possess, even for such uncritical decisions as are involved in on-line shopping…

Clive Robinson April 12, 2025 10:37 AM

@ Grima Squeakersen,

With regards,

“I’m recalling a post of yours on AI from a couple of weeks ago in which your thesis was (I paraphrase) “current AI is nothing but an expert system that lacks actual experts”.”

But you did not mention that I’ve at least twice mentioned that both are just,

1, A database
2, A query engine

And in other places the input query to the engine is a mixture of,

1, Random bit generation.
2, Past queries in the user environment
3, Past queries in the system environment
4, Past engine output that is based on 1-3.

But I’ve also mentioned that the database gets updated in a feedback loop from 4.

Thus there is a question of how,

A, Human input over time
B, Random input over time

change the bias and accuracy in the database.

Feedback can be not just “positive or negative” but, with time/frequency accumulating to phase, also “constructive or destructive”.

Importantly, the phase aspect can cause all sorts of unexpected results, including what is a stability criterion (K) issue, not just generally but at specific points within the database values (tokens).

I’ve also mentioned more than four or five times the “spectral response” that is often the RMS average forming curves of individual tokens, not just of their values but relative displacement from each other.

Each curve can be seen as a Gaussian with unity volume (think fifty or more dimensions to that volume). With the “skirt width” defined by adjacency to other tokens. Thus a very high value in one token “lifts” the effective curve of those around it and a low value drags down those around it.

Complicated as this sounds, it’s a natural result of the “Multiply and Sum” network where the weights are effectively the summed and normalised vector values of the tokens. Hence you hear people talk about “matrix evaluation” but, unlike what you might have been taught in high school, there is an extra wrinkle. Whilst the multiply and sum network is seen as linear, its output value is scaled by an often nonlinear transformation, the simplest of which is “the rectifier”, which in effect says all values below the threshold are set to the threshold value. Another is a form of sigmoid, where the values around the threshold are linear but “limit” exponentially to fixed positive and negative hard limit values.

These nonlinear adjustments have a “selection sensitivity effect” that can significantly affect the network output.

Thus user input can affect the whole system (some of which gets called “Prompt injection attacks”) with “planned” deleterious effects. As such these attacks are quite insidious, and the current AI LLM and ML systems cannot spot them.

Expert Systems however use “curated information” to build the database and base queries. In a way where both user and random inputs are or should be effectively negated. As such, the Expert Systems are in effect “deterministic” whilst current AI LLM and ML systems are very much not hence the early “Stochastic Parrot” name.
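The “multiply and sum” network with a rectifier or sigmoid output transform described above can be sketched in a few lines. Toy values throughout; this is a minimal illustration of the arithmetic, not a real model:

```python
# Minimal sketch of a "multiply and sum" unit with the two nonlinear
# output transforms mentioned: a rectifier (values below the threshold
# are clamped to the threshold) and a sigmoid that limits smoothly
# toward fixed values. Toy numbers only.
import math

def multiply_and_sum(inputs, weights, bias=0.0):
    """The linear part: a weighted sum of the inputs."""
    return sum(i * w for i, w in zip(inputs, weights)) + bias

def rectifier(x, threshold=0.0):
    """Values below the threshold are set to the threshold value."""
    return max(threshold, x)

def sigmoid(x):
    """Roughly linear near 0, limiting exponentially toward 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

raw = multiply_and_sum([1.0, -2.0, 0.5], [0.4, 0.3, -0.2])
print(raw, rectifier(raw), sigmoid(raw))
```

The “selection sensitivity effect” is visible here: a small change in the raw sum near the threshold flips the rectified output between zero and nonzero, while far from the threshold it changes almost nothing.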

Clive Robinson April 12, 2025 1:50 PM

@ ResearcherZero, lurker,

I was thinking more along the lines of who is the UEFI CA and holds the signing key that is the key to the kingdom…

It’s MS, and they tried very hard not to sign anything claimed as a cancer to MS Win… So it would not be possible to load anything like another OS onto a PC with Windows installed[1].

They nearly got away with it because the “Entertainment Industry” were talking about having “the PC not play” and going down the “licenced secure hardware route”, something MS could not allow.

Because at that time the MS game plan was to be heart and center of any home Media Player or other Entertainment System. The truth is that the entertainment industries were just MS’s “useful idiots”.

Prof. Ross J. Anderson had already pointed out that the “Fritz Chip” was bad news and that MS wanted to “embrace and extend” the idea. Such that they owned not just the OS, but could decide what hardware, drivers, and applications,

“You could use ‘On your PC'”

Worse they would have 100% control over everything,

“You could do and had done ‘On your PC'”.

MS have never given up on this psychopathic idea, and you can see it’s resurfaced with stuff they pushed into Win10 to effectively put all your “creativity” onto their systems. People argue it’s just getting rid of “Perpetual licencing” so you have to use their subscription model.

Whilst that appears in part true, the reality is they want all you create under their control so they can profit by it and deny you your IP rights,

No ifs, buts, or maybes.

We are seeing a small part of this with Co-Pilot and all the “code examples” jammed into it… After all where do you think they got it from?

We are also seeing it with Recall, which will every few moments send everything you have done up to their cloud… Which, along with 100% connectivity to the MS Cloud, means you have “zero privacy” on Win11 and even Win10. It also means they have a whole load of biometrics on you. Not just the way you type, but the way you think, at a quite basic level upward.

But it is more, MS Recall along with Co-Pilot will be a full,

We “See What You See”(SWYS) device “Client Side Scanning” onto your PC.

Something certain politicians and their DOGiE Muttlings would give just about anything for..

Realistically the only way you can avoid it is by installing Linux or similar Open Source OSs and Apps on your PC. Which you might have noticed MS are trying to control via,

“A variant of a walled garden”

By forcing you to host the OS under an MS OS using their client side device drivers etc, with forced Recall, CoPilot and similar.

The only thing that stopped it back before UEFI gained traction was that eventually MS were forced into signing the loader etc. for Open Source by Google and one or two others. Basically they said that if MS did not play ball then they would go after MS with any and every regulator in the Western World. Thus MS could not back then “Own your PC”.

Times however have changed and if Win11 goes the way expected… The ownership of your PC will not be in question, and it won’t be yours. Worse you will be forbidden by MS from doing what they do not approve of…

And if you think Alphabet/Google, Apple, Meta, or others will stop MS this time, let’s just say things have changed and it’s no longer in their interests…

[1] Depending on who you listen to, and of those the ones you find most credible… MS argued it was the various US IP holders of TV progs, Films, Music, audio, games and other similar “over priced and locked in products” that set the ground rules. Look up some of the history behind CSS that supposedly gave “off line protection” to DVDs. It failed for “reasons of mathematics” and the same logic as to why backdoors can never be NOBUS. Hence to some of us it was no surprise De-CSS appeared and the Entertainment Industry boat got severely rocked.

An article from back then gives some details and links to other documents,

http://www.users.zetnet.co.uk/hopwood/crypto/decss/index.html

Clive Robinson April 12, 2025 3:19 PM

@ Bruce, ALL,

And the current AI “flip side”.

As we know most current AI LLM and ML systems dish out a lot of nonsense and we call it “hallucinations” or “Soft Bullshit” depending on “what term of art” you prefer.

The current AI LLM and ML systems do not consciously make it up, so it’s not “hard bullshit”, but it is definitely “a steaming load of nonsense” around half the time.

The reason why actually does not matter, as long as we can recognise and pull the nonsense before it’s sent to the user. Which apparently we cannot do.

The level of this sort of nonsense is often not that important, one AI generated “Marketing focus piece” is generally about as much use as that produced by humans so…

It matters when it’s a slightly harder science or technology output.

Thus if CoPilot is writing your software main loop, you would want it to be truthful. Unfortunately it’s not, and that’s not getting picked up,

https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/

“The rise of LLM-powered code generation tools is reshaping how developers write software – and introducing new risks to the software supply chain in the process.

These AI coding assistants, like large language models in general, have a habit of hallucinating. They suggest code that incorporates software packages that don’t exist.”

Which is much like it is with LLM’s and Law or other “Professional trade” such as medicine…

Very much “Not Something you Want”.

TimH April 12, 2025 3:52 PM

We now have a war/race where nation states apply AI/ML to find exploits in very popular open source code libraries (busybox?) before the maintainers. The former have more resources.

Better play for the spies would be to build a library of innocent looking mods to the key open source code libraries, to inject should the opportunity arise (or be forced to arise).

ResearcherZero April 12, 2025 10:05 PM

@Clive, ALL

There is a growing vulnerability backlog and not enough funding to meet demands. The level of detail for vulnerabilities is also declining, as the teams do not have enough time and resources to re-review existing bugs. The cuts to NIST and pressures on CISA will likely make that issue worse.

Hallucinated dependencies (and the paper says LLMs do this consistently) are a big issue, because if these systems invent package names that would not normally exist, then attackers could register those packages, through what is termed “slopsquatting”. Some projects are quite large and complicated, so identifying the problems may be difficult.

“reshaping how developers write software”

That is an important point that was also noted: as methods change, the attack surface changes with them, opening up new areas to exploitation. That further complicates code and vulnerability review, adding to the ever-growing list for maintainers and admins.

“Slopsquatting” is a good name too, and it is quite a large number of packages that do not exist. The paper itself has more detail on the problems found.

There will not be just some simple fix for this issue. It will be harder than expected.

‘https://socket.dev/blog/slopsquatting-how-ai-hallucinations-are-fueling-a-new-class-of-supply-chain-attacks
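One hedged mitigation for slopsquatting is to vet AI-suggested dependencies against a curated allowlist before installing anything. A minimal sketch follows; the package names are hypothetical examples, not real findings:

```python
# Hedged mitigation sketch for "slopsquatting": before installing
# anything an AI assistant suggests, check each name against a curated
# allowlist of packages the project already trusts. The suggested
# package names below are hypothetical, illustrative examples.

TRUSTED = {"requests", "numpy", "flask"}  # your vetted dependencies

def vet_suggestions(suggested: list) -> tuple:
    """Split AI-suggested dependencies into trusted and needs-review."""
    trusted = [p for p in suggested if p.lower() in TRUSTED]
    review = [p for p in suggested if p.lower() not in TRUSTED]
    return trusted, review

ok, suspect = vet_suggestions(["requests", "reqeusts-toolbelt-pro"])
print(ok)       # ['requests']
print(suspect)  # ['reqeusts-toolbelt-pro'] -- hallucinated-looking name
```

An allowlist cannot catch everything, but it forces a human review step before a registered-by-an-attacker name ever reaches `pip install`.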

ResearcherZero April 12, 2025 10:24 PM

@Clive Robinson

Even if you go through the Windows Registry or Group Policy to block Co-Pilot, it still gets added to systems (the Notepad application, for example), and the agent is installed even after being disabled at both the user and system level. This is an unwanted attack vector.

This inconvenience will also take place after an update, adding to the time wasted checking systems that may be used by students, lecturers, and others. Really annoying, as they already have both local accounts and access to the online resources, which include all the Microsoft chatbots and other Microsoft products, plus other tools like ChatGPT etc.

It seems like a huge waste of time, resources and energy for more hurt than it’s worth.

If universities and technical colleges installed open source, it would be so much more efficient and far less costly. People might actually learn more in specific areas. But everyone is now hooked on commercial drip-line and it is mainlined into the bloodstream.

As it is closed source too, you cannot properly audit the code. Very annoying.

Clive Robinson April 13, 2025 1:16 AM

@ Bruce, ALL,

Is it AGI or “A guy Instead”?

We have been seeing “cheap labour in far away places” being used to replace local staff in the likes of programming.

Such outsourcing was earning money prior to “lockdown”… But since then we are being told it’s all different now… Because the North Koreans, Chinese, Russians, and even quite a few Middle East nations are using remote working for various forms of crime.

So outsourcing is all “Big Scary Security Risk” now. So in comes the AGI White Knight to the rescue…

Rather than stop outsourcing send it to an AGI instead, because they won’t be a “Big Scary Security Risk”…

But how do you know if it’s AI AGI or Human AGI?

Well there is a fair chance it’s likely to be the latter. That is it’s “the cheap labour in far away places” sweating away, whilst “some dude” is fronting it up by spouting the AI words just to get millions in investors money…

https://m.youtube.com/watch?v=Bb0bq3gAB9s

I was aware that there have been AI investor scams for decades and have said of most AI “it’s a scam” but I’d not realised just how much of a scam it had become…

After all now saying of much AI, it is,

“Full on Fraud”

Might hurt a few peoples feelings, and even bring share prices down, or have people jailed…

But considering other dimwit ideas that are crashing share prices right now, would you notice a little distant Faux-AI smoke, behind the Neroesque conflagration that is a bonfire of the vanities in WashDC?

Clive Robinson April 13, 2025 1:42 AM

@ Bruce, ALL,

You might find this read about Microsoft “Debug-Gym” interesting,

“AI isn’t ready to replace human coders for debugging, researchers say”

“[T]hose claiming we’re mere months away from AI agents replacing most programmers should adjust their expectations because models aren’t good enough at the debugging part, and debugging occupies most of a developer’s time. That’s the suggestion of Microsoft Research, which built a new tool called debug-gym to test and improve how AI models can debug software.”

https://arstechnica.com/ai/2025/04/researchers-find-ai-is-pretty-bad-at-debugging-but-theyre-working-on-it/

lastofthev8's April 14, 2025 4:21 AM

i dont know how to ask the right questions re: 👇>>>>>>

1 i stumbled upon this somehow….so i read some ,i searched some ,virusdtotal,mxtoolbox,ETC…im not learned in such matters ~! i see the word “trojan” i guess im asking the community wth do i do?..if anything idk! id like to learn more. peace everyone☮.

Clive Robinson April 14, 2025 7:29 AM

@ ResearcherZero, ALL,

You note,

“Even if you go through the Windows Registry or Group Policy to block Co-Pilot, it still gets added to systems (the Notepad application, for example), and the agent is installed even after being disabled at both the user and system level. This is an unwanted attack vector.”

It is indeed unwanted “surveillance”, thus an “attack vector” against not just a user’s privacy, but also their legal obligations such as “Non Disclosure” and “Privileged Communications”.

Thus the old truism from last century that “Windows is malware” appears to be back “front and center” to “stab you in the back”.

But it’s also further proof, if required, that the person currently sitting in the “Lord Above’s Chair” has the view that “rent seeking” is not sufficient, and that users’ privacy must be turned into profit.

As I’ve mentioned in the past, on my personal machines I stopped playing MS’s ludicrous games back with W2K/XP, and keep them fully segregated by being “gapped”. I also “locked down” other laptops I had to use for presentations etc. by “stripping back” and “segregation”.

As for 64-bit based systems, various types of *nix are a better option, especially if they do not have “Agent P’s” nonsense included.

As for UEFI that has given MS “the one ring…” which frankly I find very disturbing, especially as we move into “See What You See”(SWYS) on device “Client Side Scanning” and worse (Recall to Cloud).

As I said on this blog long ago people need two computers,

1, For Privacy
2, For Networking / communications.

Now with Win-11 it appears MS will not allow you either to have privacy or to be disconnected from communications to their cloud…

There is currently no “secure application” that will be “secure” under such a system no matter how much people talk about “E2EE”.

Oh and of course if MS can SWYS then so can any “Level III” and quite a number of “Level II” attackers, because MS has not just “backdoored” your computer, they have also turned it into easily reachable “low hanging fruit”…

I guess the sensible thing to do is,

“Get off the bus at the next stop, hopefully before it crashes and burns with you onboard.”

Yes I know it sounds “paranoid”, but then just about every time I say something that people claim sounds paranoid, it comes true not too long thereafter…

lastofthev8's April 16, 2025 2:35 AM

My apologies for the incoherent post on the April 14, 2025 4:21 AM 🙏 .
What I was trying to convey was: wth is this (text below)? And why is it embedded in a web page? I stumbled upon it.
Why this ? 👉”Trojan.Script.Heuristic-js.iacgm”👈

