Daniel Miessler on the AI Attack/Defense Balance
His conclusion:
Context wins
Basically whoever can see the most about the target, and can hold that picture in their mind the best, will be best at finding the vulnerabilities the fastest and taking advantage of them. Or, as the defender, applying patches or mitigations the fastest.
And if you’re on the inside you know what the applications do. You know what’s important and what isn’t. And you can use all that internal knowledge to fix things—hopefully before the baddies take advantage.
Summary and prediction
- Attackers will have the advantage for 3-5 years. For less-advanced defender teams, this will take much longer.
- After that point, AI/SPQA will have the additional internal context to give Defenders the advantage.
LLM tech is nowhere near ready to handle the context of an entire company right now. That’s why this will take 3-5 years for true AI-enabled Blue to become a thing.
And in the meantime, Red will be able to use publicly-available context from OSINT, Recon, etc. to power their attacks.
I agree.
By the way, this is the SPQA architecture.
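To make the “context wins” point concrete, here is a minimal, hypothetical sketch (every hostname, CVE ID, and score below is invented for illustration) of how the same externally visible findings get re-ranked once a defender folds in internal knowledge of asset criticality and existing mitigations:

```python
# Toy illustration of the "context wins" argument: identical external findings
# are prioritized very differently once internal knowledge (asset criticality,
# existing mitigations) is added. All names and numbers are made up.

# What an outside attacker can infer from OSINT/recon alone.
external_findings = [
    {"host": "www.example.com",     "cve": "CVE-2025-0001", "cvss": 9.8},
    {"host": "dev.example.com",     "cve": "CVE-2025-0002", "cvss": 7.5},
    {"host": "billing.example.com", "cve": "CVE-2025-0003", "cvss": 6.1},
]

# What only the defender knows: which systems actually matter and which are
# already mitigated. (Hypothetical internal context.)
internal_context = {
    "www.example.com":     {"criticality": 0.4, "mitigated": True},   # static site behind a WAF
    "dev.example.com":     {"criticality": 0.2, "mitigated": False},  # throwaway test box, no real data
    "billing.example.com": {"criticality": 1.0, "mitigated": False},  # handles payment data
}

def attacker_priority(finding):
    """Outside view: all you have is the severity of what is publicly visible."""
    return finding["cvss"]

def defender_priority(finding):
    """Inside view: weight severity by business criticality, discount mitigated systems."""
    ctx = internal_context[finding["host"]]
    score = finding["cvss"] * ctx["criticality"]
    return score * 0.1 if ctx["mitigated"] else score

for label, key in [("Attacker (external context only)", attacker_priority),
                   ("Defender (with internal context)", defender_priority)]:
    ranked = sorted(external_findings, key=key, reverse=True)
    print(label, "->", [f["host"] for f in ranked])
```

The external-only ranking chases the highest CVSS score; the internal-context ranking surfaces the lower-severity flaw on the system that actually matters. That gap is what Miessler expects AI-enabled defenders to exploit once models can hold that internal picture.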
Anon • October 2, 2025 12:56 PM
No, LLMs do not “apply knowledge to new situations and contexts … astonishingly well.” What they do astonishingly well is generate answers that sound correct. “Sounds correct” is sometimes a proxy for accuracy, sometimes not. The fact that Miessler uses an LLM-generated poem about Star Wars as evidence that LLMs answer correctly shows that he doesn’t understand the difference between sounding correct and being correct.
The reality is that correct answers sometimes sound wrong, even given the full linguistic context in which they appear.
LLMs are great tools, but their proponents continually overstate their capabilities in a way that deeply discredits them. I wish we had some clear-headed people putting them to work on the things they’re actually good at.