Daniel Miessler on the AI Attack/Defense Balance

His conclusion:

Context wins

Basically whoever can see the most about the target, and can hold that picture in their mind the best, will be best at finding the vulnerabilities the fastest and taking advantage of them. Or, as the defender, applying patches or mitigations the fastest.

And if you’re on the inside you know what the applications do. You know what’s important and what isn’t. And you can use all that internal knowledge to fix things, hopefully before the baddies take advantage.

Summary and prediction

  1. Attackers will have the advantage for 3-5 years. For less-advanced defender teams, this will take much longer.
  2. After that point, AI/SPQA will have the additional internal context to give Defenders the advantage.

LLM tech is nowhere near ready to handle the context of an entire company right now. That’s why this will take 3-5 years for true AI-enabled Blue to become a thing.

And in the meantime, Red will be able to use publicly-available context from OSINT, Recon, etc. to power their attacks.

I agree.

By the way, this is the SPQA architecture.

Posted on October 2, 2025 at 12:19 PM

Comments

Anon October 2, 2025 12:56 PM

No, LLMs do not “apply knowledge to new situations and contexts … astonishingly well.” What they do astonishingly well is generate answers that sound correct. “Sounds correct” sometimes proxies accuracy, sometimes not. The fact that Miessler uses an LLM-generated poem about Star Wars as evidence that LLMs answer correctly shows that Miessler doesn’t understand the difference between sounding correct and being correct.

The reality is, correct answers sometimes sound wrong, even in the totality of the linguistic context in which they exist.

LLMs are great tools but their proponents continually overstate their capabilities in a way that deeply discredits them. I wish we had some clear-headed people putting them to work on the purposes they’re actually good at.

Ismar October 2, 2025 10:24 PM

I think what this demonstrates is that there is a lot of software out there that does not really provide much real value and can easily be replaced by the output generated by LLMs.

Aristotel October 2, 2025 10:34 PM

I disagree that defence will ever be able to benefit from AI more than attack can, purely because there are so many more ways things can be broken than done right, and no amount of context knowledge on the defence side will ever be able to counter that natural order of things.

KC October 3, 2025 1:16 AM

Please forgive this digression, but Daniel Miessler is the first person I’ve heard of who’s interested in Pulse, a ChatGPT research assistant.

It’s funny because it seems like he’s the last person who actually needs it. Good lord. (Is he currently using the Threshold app he created?)

Pulse is not a giveaway at $200 a month.

Is anyone trying it? Any thoughts? I’m struggling to streamline news sources.

Now to the topic at hand, the SPQA architecture: wondering if there are any projects that have been able to absorb that architecture / thought process?

Clive Robinson October 3, 2025 3:06 AM

@ ALL,

This statement,

“Basically whoever can see the most about the target, and can hold that picture in their mind the best, will be best at finding the vulnerabilities the fastest and taking advantage of them.”

Common sense tells us this should be prefaced by,

“All other things being equal,”

to be even close to being correct. And as we know, many other things are not equal.

First off, LLMs do not reason; all they do is “match known patterns”. These patterns “become known” because they are “sort of found” by the ML across a considerable amount of input data, each item of which contains the selfsame signal in sufficient quantity.

Importantly though the signal pattern has to be,

1, Stable in time.
2, Averageable above noise.

This makes it a Signal to Noise ratio issue. Most veteran attackers / defenders know that the easiest way to hide a signal is to,

“Spread it in time and spectrum.”

Often called “Low Probability of Intercept” (LPI) techniques, these are based on one or both of “Direct Sequence Spread Spectrum” (DSSS) and “Frequency Hopping Spread Spectrum” (FHSS).
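As a toy sketch of that signal-to-noise point (Python with numpy; the amplitudes and spreading factor are invented purely for illustration, not any real system): a data bit spread over many pseudo-random chips sits well below the noise floor for anyone averaging raw samples, yet despreads cleanly for a receiver that knows the chip code.

    import numpy as np

    rng = np.random.default_rng(0)

    CHIPS_PER_BIT = 1024                                   # spreading factor
    data_bits = rng.choice([-1, +1], size=16)              # the hidden message
    chip_code = rng.choice([-1, +1], size=CHIPS_PER_BIT)   # shared secret spreading code

    # Spread each bit over 1024 low-amplitude chips, then add channel noise.
    spread = np.concatenate([b * chip_code for b in data_bits]) * 0.1
    channel = spread + rng.normal(0.0, 1.0, size=spread.size)

    # Without the code, the per-sample SNR is 0.01: the signal is buried in noise.
    print("per-sample SNR:", (0.1 ** 2) / 1.0)

    # With the code, despreading by correlation gives a ~1024x processing gain.
    recovered = np.array([
        1 if np.dot(channel[i * CHIPS_PER_BIT:(i + 1) * CHIPS_PER_BIT], chip_code) > 0 else -1
        for i in range(data_bits.size)
    ])
    print("bits recovered:", int(np.sum(recovered == data_bits)), "of", data_bits.size)

The same averaging argument cuts the other way for a detector that does not know how the signal was spread: it has nothing stable to average on.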

Back last century this was the basis for “Digital Rights Management” (DRM) “Digital Watermarking”. For those that lived through the excruciating experience, the attackers always won. That is, they could always “hide the signal from the detector” without making the media unusable to human ears or eyes.

This was not lost on “Malware authors”, who used such techniques to “mix it up” by breaking the code down into very small parts, either by decimation in time or, if the parts were sufficiently independent, by decimation in order (or both).

The result was that the “Process Signature”, or “the looked-for signal”, was made sufficiently random that it in effect became its own dither noise, and was not repeated in a way that could be averaged out from the system noise.

There are ways to find such signals, but they are not methods amenable to the limitations of the current AI ML systems, because they require reasoning over and above pattern matching, and because the attack “signal” does not build tokens.
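A toy illustration of that limitation (Python; the byte values and fragment sizes are invented purely for the example): a detector keyed to one contiguous, stable signature catches the monolithic payload, but not the same bytes decimated into shuffled, junk-padded fragments.

    import random

    SIGNATURE = bytes(range(0x40, 0x60))            # the one "known" 32-byte pattern

    def naive_detector(stream: bytes) -> bool:
        """Flags only the exact contiguous signature."""
        return SIGNATURE in stream

    def monolithic_sample() -> bytes:
        return b"\x00" * 100 + SIGNATURE + b"\x00" * 100

    def decimated_sample(rng: random.Random) -> bytes:
        # Break the same 32 bytes into small fragments, shuffle the (assumed
        # order-independent) fragments, and pad each one with random junk.
        frags = [SIGNATURE[i:i + 4] for i in range(0, len(SIGNATURE), 4)]
        rng.shuffle(frags)
        out = bytearray()
        for frag in frags:
            out += bytes(rng.randrange(256) for _ in range(rng.randrange(8, 32)))
            out += frag
        return bytes(out)

    rng = random.Random(1)
    print("monolithic detected:", naive_detector(monolithic_sample()))   # True
    print("decimated detected: ", naive_detector(decimated_sample(rng))) # False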

I’ve been through all of this several years ago on this blog when talking about “Probabilistic Security” in the “Castle -v- Prison” model, where the hypervisor looked for “known signatures” of “correctly functioning” software written in a particular way –tasklets in cells– to make the correctly functioning signal what you looked for. This was done in a very particular way such that any interference with the expected signature was fairly easily spotted, and therefore untoward interference could be assumed.

As I explained at the time this does not work easily or at all with monolithic code. And that is a problem we are all going to have to increasingly face.

We are going to have to write code such that it is made of very small tasks that have clear and easily monitorable signatures that can be quickly and easily verified as being either correct or within defined bounds.
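A minimal sketch of that direction of travel (Python; not the original Castle-v-Prison design, and the tasklet, its checkpoints, and the bounds are illustrative assumptions): each small task declares the checkpoint trace and time budget it should produce, and a monitor treats any deviation from that signature as possible interference.

    import time
    from dataclasses import dataclass

    @dataclass
    class TaskletSpec:
        name: str
        expected_trace: tuple          # checkpoint labels, in required order
        max_seconds: float             # defined bound on runtime

    def monitor(spec: TaskletSpec, observed_trace: list, elapsed: float) -> str:
        """Verify a run against the tasklet's known-good signature."""
        if tuple(observed_trace) != spec.expected_trace:
            return f"{spec.name}: trace deviates -> assume interference"
        if elapsed > spec.max_seconds:
            return f"{spec.name}: over time bound -> assume interference"
        return f"{spec.name}: signature OK"

    def run_checksum_tasklet(data: bytes, trace: list) -> int:
        # A deliberately tiny task that emits its checkpoints as it runs.
        trace.append("load")
        total = 0
        trace.append("sum")
        for b in data:
            total = (total + b) & 0xFFFF
        trace.append("store")
        return total

    spec = TaskletSpec("checksum", ("load", "sum", "store"), max_seconds=0.1)

    trace = []
    start = time.perf_counter()
    run_checksum_tasklet(b"hello world", trace)
    print(monitor(spec, trace, time.perf_counter() - start))

    # A tampered run that adds or skips a step is immediately visible:
    print(monitor(spec, ["load", "patch", "sum", "store"], 0.01))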

Can we develop AI systems to do some part of this? Yes, but LLMs are quite a long way from doing it either effectively or efficiently (if at all).

And then there is the problem of the hand-waving argument of,

Circuit-Based -v- Understanding-Based

It is actually a false argument…

Consider a computer system as what it is: a “Stack-Based” system, with the stack originally being the seven-layer ISO OSI model, which has now been expanded downwards from the physical layer interface and upwards from the presentation layer interface. That is, in reality it reaches up from “Quantum Mechanics” to, now, “Space-based Treaties”.

For there to be an “Understanding Based” system it has to be built on what is below, and down there is the “Circuit Based” system.

So it’s not, as presented, “A -v- B” but “A is dependent on B”, which from a security aspect is very important.

Again in the past I’ve talked on this blog about “bubbling up attacks” and how,

1, We do not know how to catch them
2, They adversely affect all above them.

As an example I give “RowHammer”, which is an attack that can be used to change the contents of a memory location without “addressing it down through the stack”[1]. It in effect “reaches around” all the security mechanisms in the various layers of the computing stack as though they were not there.

But it’s not just “RowHammer” that can “reach around” security mechanisms. There are all manner of tricks being found on a regular basis now that there is “Fame and Fortune” to be made, “Black or White” hat.

And from a reasonably sophisticated attacker’s point of view, why bother attacking the “Understanding Based” layer way up the computing stack, where the attack has a high probability of getting recognised, when you can almost invisibly get your attack into the “Circuit Based” layer low down in the stack that forms the vital, “thus must be secure” foundation of the “Understanding Based” layer, and let it “bubble up” to do damage to the data or code at the “Understanding Based” layer?

I could go on with other problems in this proposal, but one high-level and one low-level example should be enough to show that considerably more thought has to be given to it.

And importantly,

“After that point, AI/SPQA will have the additional internal context to give Defenders the advantage.”

Is very far from a reasonable assumption (as I’ve explained years ago on this blog).

[1] Put simply, RowHammer is a “bubbling up attack” that works by “reaching around” from the user layer to way down in the physical layers, so bypassing all the security layers in between. And this works because nearly all DRAM is defective…

Because DRAM is actually, at its base, an analogue circuit that holds a decaying charge on a capacitor. Every so often –see refresh time– the voltage on the capacitor is compared with a reference. If it is above the reference level it gets a “pulse of charge” added; if below, it gets the same amount of charge removed. If, as an attacker, you can interfere with this refresh mechanism then you can “flip a bit” in bytes or words.

Due to the way DRAM chips are designed, the physically closer one bit is to another, the more effect it has on it. But the bits are laid out in a grid pattern such that the physically closest bits belong to other bytes that can be several K up or down the memory range.

Thus by rapidly toggling bytes in another “Memory Page”, one or more bits can be flipped in a memory page that is not directly addressed. Thus the flipped bits remain invisible and cannot be checked until directly addressed.

Thus a user running a browser can load in a bit of JavaScript from a web site that can, from the computer security aspect, change memory in, say, the kernel space without any of the computer security mechanisms in the intervening stack layers being aware of it.
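A deliberately crude toy model of footnote [1] (Python; nothing like real DRAM physics or timings, and every constant is invented for illustration): repeatedly activating rows the attacker may legitimately address disturbs the physically adjacent victim row, and when that disturbance outruns the refresh interval, bits flip in a row that was never addressed at all.

    ROWS, BITS = 8, 16
    THRESHOLD = 15_000          # disturbances a cell tolerates before its bit flips
    VICTIM = 4                  # row the attacker never addresses directly

    def hammer(iterations: int, refresh_every: int) -> int:
        """Return how many bits end up flipped in the victim row."""
        memory = [[1] * BITS for _ in range(ROWS)]      # all cells start charged
        disturb = [[0] * BITS for _ in range(ROWS)]     # accumulated disturbance

        def activate(row: int) -> None:
            # Each activation slightly disturbs the two physically adjacent rows.
            for n in (row - 1, row + 1):
                if 0 <= n < ROWS:
                    for bit in range(BITS):
                        disturb[n][bit] += 1
                        if disturb[n][bit] >= THRESHOLD:
                            memory[n][bit] = 0          # charge lost: silent bit flip

        for i in range(1, iterations + 1):
            activate(VICTIM - 1)                        # hammer the aggressor rows only
            activate(VICTIM + 1)
            if i % refresh_every == 0:
                # Refresh tops the charge back up and clears the disturbance,
                # but it cannot restore a bit that has already flipped.
                disturb = [[0] * BITS for _ in range(ROWS)]

        return sum(1 for bit in memory[VICTIM] if bit == 0)

    # Refresh keeps pace with a modest access rate: nothing flips.
    print("normal use :", hammer(iterations=20_000, refresh_every=5_000), "victim bits flipped")
    # Hammering outruns the refresh interval: the victim row flips without ever
    # being addressed, invisible to every security layer above the DRAM itself.
    print("rowhammered:", hammer(iterations=20_000, refresh_every=20_000), "victim bits flipped")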

DDNSA October 3, 2025 12:45 PM

@Clive Robinson,

plus (which is not a “plus”), DRAM leaks and it must be refreshed. I remember back in the 90s when US Military contractors were buying SRAM (which was more expensive than DRAM), because imagine getting ready to fire a rocket with DRAM chips in the system and there’s a memory leak that requires a reboot to remedy. In the meantime NAND/Flash memory was emerging and being developed. The rest is history…

Celos October 11, 2025 1:06 PM

I do not think that is what will happen. AI (in the LLM variant) does not do sophisticated attacks, because context is exactly what it cannot do. What AI does is bring lower-sophistication attacks within reach of people who could not do them otherwise, and reduce the workload for labour-intensive attacks that are still at a low sophistication level.

And with that, the analysis and prediction change completely. So, let’s face it: most AI-supported attacks are only possible because of bad IT security on the target side. This comes from lack of regulation, lack of liability for software vendors, and the persistent desire of economics graduates (and the like-minded) to reduce cost and increase profits, while strategic views and competent risk management are typically not part of their skill-set at all.

And with that, AI is not even an additional threat to those with good security, and it is not part of any real defense. For those that insist on doing IT security cheaply (and badly), AI is a massive threat, because what (may have) kept them safe so far was a lack of attacker skill and the high attacker workload. In other words, they did not get attacked because nobody somewhat competent tried. That time is over. And that is a good thing. But AI cannot fix that on the defender side. The “defending” LLM only needs to hallucinate the wrong way once and all is lost. That cannot be prevented; hallucinations are a basic LLM function that cannot be removed or prevented. Hence real, not “AI”-based, IT security is and will remain the thing you need.

So, while LLMs may empower low-skill, overworked attackers, their use in “defense” is IMO nothing more than another symptom of the AI peddlers looking for actually working LLM applications that will finally make them money.

Incidentally, I am convinced this “SPQA” stuff is just more AI slop sold as fine wine. AI cannot do “understanding” (or rather it dies in complexity even for very simple things), and LLMs are worse than that.
