Autonomous AI Hacking and the Future of Cybersecurity

AI agents are now hacking computers. They’re getting better at all phases of cyberattacks, faster than most of us expected. They can chain together different aspects of a cyber operation, and hack autonomously, at computer speeds and scale. This is going to change everything.

Over the summer, hackers proved the concept, industry institutionalized it, and criminals operationalized it. In June, AI company XBOW took the top spot on HackerOne’s US leaderboard after submitting over 1,000 new vulnerabilities in just a few months. In August, the seven teams competing in DARPA’s AI Cyber Challenge collectively found 54 new vulnerabilities in a target system, in four hours (of compute). Also in August, Google announced that its Big Sleep AI found dozens of new vulnerabilities in open-source projects.

It gets worse. In July, Ukraine’s CERT discovered a piece of Russian malware that used an LLM to automate the cyberattack process, generating both system reconnaissance and data theft commands in real time. In August, Anthropic reported that they disrupted a threat actor that used Claude, Anthropic’s AI model, to automate the entire cyberattack process. It was an impressive use of the AI, which performed network reconnaissance, penetrated networks, and harvested victims’ credentials. The AI was able to figure out which data to steal, how much money to extort from the victims, and how best to write the extortion emails.

Another hacker used Claude to create and market his own ransomware, complete with “advanced evasion capabilities, encryption, and anti-recovery mechanisms.” And in September, Check Point reported on hackers using HexStrike-AI to create autonomous agents that can scan, exploit, and persist inside target networks. Also in September, a research team showed how it could quickly and easily reproduce hundreds of vulnerabilities from public information. These tools are increasingly free for anyone to use. Villager, a recently released AI pentesting tool from the Chinese company Cyberspike, uses the DeepSeek model to completely automate attack chains.

This is all well beyond AI’s capabilities in 2016, at DARPA’s Cyber Grand Challenge. The annual Chinese AI hacking challenge, Robot Hacking Games, might be on this level, but little is known about it outside of China.

Tipping point on the horizon

AI agents now rival and sometimes surpass even elite human hackers in sophistication. They automate operations at machine speed and global scale. The scope of their capabilities allows these AI agents to completely automate a criminal’s command to maximize profit, or structure advanced attacks to a government’s precise specifications, such as to avoid detection.

In this future, attack capabilities could accelerate beyond our individual and collective ability to respond. We have long taken it for granted that we have time to patch systems after vulnerabilities become known, or that withholding vulnerability details prevents attackers from exploiting them. This is no longer the case.

The cyberattack/cyberdefense balance has long skewed towards the attackers; these developments threaten to tip the scales completely. We’re potentially looking at a singularity event for cyber attackers. Key parts of the attack chain are becoming automated and integrated: persistence, obfuscation, command-and-control, and endpoint evasion. Vulnerability research could potentially be carried out during operations instead of months in advance.

The most skilled will likely retain an edge for now. But AI agents don’t have to be better than humans at a task in order to be useful. They just have to excel in one of four dimensions: speed, scale, scope, or sophistication. And there is every indication that they will eventually excel at all four. By reducing the skill, cost, and time required to find and exploit flaws, AI can turn rare expertise into commodity capabilities and give average criminals an outsized advantage.

The AI-assisted evolution of cyberdefense

AI technologies can benefit defenders as well. We don’t know how amenable the different technologies of cyber-offense and cyber-defense will be to AI enhancement, but we can extrapolate a possible series of overlapping developments.

Phase One: The Transformation of the Vulnerability Researcher. AI-based hacking benefits defenders as well as attackers. In this scenario, AI empowers defenders to do more. It simplifies capabilities, providing far more people the ability to perform previously complex tasks, and empowers researchers previously busy with these tasks to accelerate or move beyond them, freeing time to work on problems that require human creativity. History suggests a pattern. Reverse engineering was a laborious manual process until tools such as IDA Pro made the capability available to many. AI vulnerability discovery could follow a similar trajectory, evolving through scriptable interfaces, automated workflows, and automated research before reaching broad accessibility.

Phase Two: The Emergence of VulnOps. Between research breakthroughs and enterprise adoption, a new discipline might emerge: VulnOps. Large research teams are already building operational pipelines around their tooling. Their evolution could mirror how DevOps professionalized software delivery. In this scenario, specialized research tools become developer products. These products may emerge as a SaaS platform, or some internal operational framework, or something entirely different. Think of it as AI-assisted vulnerability research available to everyone, at scale, repeatable, and integrated into enterprise operations.

Phase Three: The Disruption of the Enterprise Software Model. If enterprises adopt AI-powered security the way they adopted continuous integration/continuous delivery (CI/CD), several paths open up. AI vulnerability discovery could become a built-in stage in delivery pipelines. We can envision a world where AI vulnerability discovery becomes an integral part of the software development process, where vulnerabilities are automatically patched even before reaching production—a shift we might call continuous discovery/continuous repair (CD/CR). Third-party risk management (TPRM) offers a natural adoption route: lower-risk vendor testing, integration into procurement and certification gates, and a proving ground before wider rollout.
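To make the CD/CR idea more concrete, here is a minimal sketch of what such a pipeline stage might look like. Everything in it is hypothetical: ai_find_vulnerabilities stands in for whatever AI discovery service an organization actually uses (it is not a real API), the unified-diff patch format and make-based test run are assumptions, and the only policy shown is the obvious one of keeping a candidate patch only if it applies cleanly and the existing test suite still passes.

```python
# Hypothetical CD/CR pipeline stage (a sketch, not a real product or API).
# An AI service is assumed to return findings with candidate patches;
# a patch is kept only if it applies cleanly and the test suite still passes.

import subprocess
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    description: str
    proposed_patch: str  # unified diff returned by the hypothetical AI service

def ai_find_vulnerabilities(repo_path: str) -> list[Finding]:
    """Placeholder for whatever AI vulnerability-discovery service is used."""
    return []

def apply_patch(repo_path: str, diff: str) -> bool:
    """Try to apply a unified diff with git; report whether it applied cleanly."""
    check = subprocess.run(["git", "-C", repo_path, "apply", "--check", "-"],
                           input=diff.encode(), capture_output=True)
    if check.returncode != 0:
        return False
    subprocess.run(["git", "-C", repo_path, "apply", "-"], input=diff.encode())
    return True

def tests_pass(repo_path: str) -> bool:
    """The gate on auto-repair: the project's own test suite (assumed make target)."""
    return subprocess.run(["make", "-C", repo_path, "test"]).returncode == 0

def cd_cr_stage(repo_path: str) -> None:
    for finding in ai_find_vulnerabilities(repo_path):
        if apply_patch(repo_path, finding.proposed_patch) and tests_pass(repo_path):
            print(f"auto-repaired: {finding.file}: {finding.description}")
        else:
            # Revert any applied-but-failing change and route to human triage.
            subprocess.run(["git", "-C", repo_path, "checkout", "--", "."])
            print(f"needs review: {finding.file}: {finding.description}")

if __name__ == "__main__":
    cd_cr_stage(".")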

Phase Four: The Self-Healing Network. If organizations can independently discover and patch vulnerabilities in running software, they will not have to wait for vendors to issue fixes. Building in-house research teams is costly, but AI agents could perform such discovery and generate patches for many kinds of code, including third-party and vendor products. Organizations may develop independent capabilities that create and deploy third-party patches on vendor timelines, extending the current trend of independent open-source patching. This would increase security, but having customers patch software without vendor approval raises questions about patch correctness, compatibility, liability, right-to-repair, and long-term vendor relationships.

These are all speculations. Maybe AI-enhanced cyberattacks won’t evolve the ways we fear. Maybe AI-enhanced cyberdefense will give us capabilities we can’t yet anticipate. What will surprise us most might not be the paths we can see, but the ones we can’t imagine yet.

This essay was written with Heather Adkins and Gadi Evron, and originally appeared in CSO.

Posted on October 10, 2025 at 7:06 AM

Comments

Clive Robinson October 10, 2025 10:48 AM

@ Bruce, ALL,

With regards,

“AI agents now rival and sometimes surpass even elite human hackers in sophistication. They automate operations at machine speed and global scale. The scope of their capabilities allows these AI agents to completely automate a criminal’s command to maximize profit…”

We need to be careful about how we interpret words like “sophistication”.

It does not in any way imply “reasoning”, just an ability to do known things faster and at greater scale, hence “automate”.

Thus “old dog tricks” taught to a “young dog pack that are fast”.

Which brings us to,

“We don’t know how the different technologies of cyber-offense and cyber-defense will be amenable to AI enhancement, but we can extrapolate a possible series of overlapping developments.”

It’s not just “extrapolation” that can be done by automation, by reasoning, or by both. The first we know Current AI LLM and ML Systems can do; but as for the second, the evidence for AI systems being able to reason is, let’s just say, highly questionable once you strip off the hype.

However we know humans can reason in very original ways, and whilst most humans are all fingers and thumbs at automation, we know how to reason out machines that do it more or less faultlessly. So an obvious question is,

“Can humans combine both effectively?”

The answer is “YES” because the automation is in effect just,

“A computer doing by brute force already known attacks.”

Which gives rise to the question of,

“If fault recognition by LLMs can be used effectively for attacks, can it just as easily be used for defence?”

To which the answer is “YES” and in two basic ways,

Firstly, in exactly the same way the Attack LLM does.

Secondly, by the defence AI looking for changes in software behaviour and usage patterns.

The first does not require ML, the second does.

Whilst the second can find “new attacks” being used against a defenders system, it is not using “reasoning” just statistics of “Pattern Finding and Matching”.
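As a purely illustrative aside (not part of the original comment), the kind of “statistics of Pattern Finding and Matching” being described can be as simple as the sketch below: reduce behaviour to a count per time window, learn a baseline from windows assumed to be normal, and flag anything that deviates too far. The single feature and the 3-sigma threshold are arbitrary illustrative choices; the point is that no reasoning is involved anywhere.

```python
# Minimal sketch of defensive "pattern finding and matching": learn a
# statistical baseline of normal behaviour, then flag deviations from it.
# The single feature (a count per time window) and the 3-sigma threshold
# are illustrative choices only.

from statistics import mean, stdev

def flag_anomalies(counts: list[int], baseline_windows: int = 20,
                   threshold: float = 3.0) -> list[int]:
    """Return indices of windows whose count deviates from the learned
    baseline by more than `threshold` standard deviations."""
    baseline = counts[:baseline_windows]   # assume the first windows are "normal"
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# Example: steady outbound-connection counts, then a burst worth looking at.
traffic = [12, 11, 13, 12, 10, 11, 12, 13, 11, 12,
           12, 11, 13, 12, 10, 11, 12, 13, 11, 12,
           12, 11, 95, 97, 12, 11]
print(flag_anomalies(traffic))             # -> [22, 23]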

It’s basic, dull “grunt work” for humans, and back during WWII it was the human limit on attacking machine cryptography. But we know that automation in various forms, from perforated sheets being stacked up, through mechanical cipher analogues driven by motors, could “brute force” reasonably reliably, through to the earliest electronic computers, which not only took the dull and countless grunt work away but sped things up and had the flexibility to respond rapidly to changes as humans reasoned out new attacks.

So we have long had an example of,

“Using humans to reason and computers to mechanise easily at way beyond human speeds”.

LLMs can not reason, nor can LLMs + ML Systems. But they can take a significant load off of the system defenders, especially if multiple systems are in effect “summed together and attacks averaged out” of the noise etc.

Once humans can “see a pattern” they can “reason about it” and then use other systems to “test” any hypothesis.

The few real successes of Current AI LLM and ML systems have all fallen into this “find, test and automate” basic process. Reasoning was not required in the AI and nor was it there, but success happened.

It is this basic model that for the next few years will be the way LLM and ML AI will “earn its keep”.

Unfortunately it is this very model that makes it the greatest surveillance tool so far invented and built by “human reasoning”.

Some would say that that alone is sufficient to heavily regulate AI development from now on in.

The Russia example shows AI has the capability to be even more of a threat to National Security than the hydrogen bomb…

However the likes of the Wassenaar Arrangement/Agreement, designed to impose export controls on certain technologies, including cryptographic tools, to prevent their use by hostile Nation States and also to limit human rights abuses… have objectively been a complete failure, not just by not stopping the transfer of technologies, but by actively increasing the speed at which other nations develop those technologies in one way or another.

So we need a different approach entirely. What that will be is very much an open question currently.

lurker October 10, 2025 1:46 PM

How many software vendors (including Open Source projects) are using these techniques to improve their products?

Rontea October 10, 2025 2:52 PM

Defenders can significantly improve their posture by leveraging visual intelligence combined with cheaper, widely deployed cameras. These tools provide real-time situational awareness and early detection capabilities, allowing security teams to anticipate and counter threats before they escalate. Accessible visual monitoring empowers defenders to scale their response without prohibitive costs, making it a critical component of next-generation cybersecurity strategies.

KC October 10, 2025 9:53 PM

@ Clive, lurker, all

Bloomberg Podcasts conducted an interview re: using AI to build cyber resilience

Sinha: There has to be consolidation because businesses can’t deal with 80 tools from hundreds of vendors and constantly juggling and having these disconnected systems.

So you see Palo Alto Network, CrowdStrike and others are making big headways in creating platforms that can connect all aspects of the infrastructure security around prevention and detection.

Looks like CrowdStrike released lots of new offerings this fall.

https://www.crowdstrike.com/en-us/blog/crowdstrike-fall-2025-release-defines-agentic-soc-secures-ai-era/

A major CS introduction is Charlotte AI AgentWorks, a text to security-agent platform.

“Describe what you need, and Charlotte will build an agent to do it … ”

Video: This is how defenders reclaim the advantage, to outpace AI-driven attackers, with AI-driven defense.

And their Agentic SOC has seven new AI agents including a Hunt Agent, a Malware Analysis Agent, an Exposure Prioritization Agent, and so on.

They also have a video on risk-based patching… Honestly, they have so many features, it’s almost ridiculous.

Plus here’s more on Palo Alto’s products: Precision AI (‘Fight AI with AI’) and Secure AI by Design.

Clive Robinson October 10, 2025 10:18 PM

@ Daniel Popescu, ALL,

Hopefully this little story of real life will amuse (I shall try to write in “third person” as that’s the tradition for stories).

Imagine if you will a teenager in school in the early to mid 1970’s, getting into trouble because he disagreed with “the teacher” and proved he was right…

Getting into trouble was not new for the lad, who had taught himself how to pick locks whilst still a quite young child, and come up with a way to fake fingerprints and one or five other things that might be of use to someone with serious criminal intent. Not because the child wanted to be a thief, master criminal or anything like that, it was just simple curiosity. Acting as the driver to an almost cat-like ability to hunt out answers and unmercifully play with them.

The lad on going through school was mostly bored rigid, but excelled at science despite the teachers[1], but was too clumsy for the workshop subjects. Found geography unbelievably dull as he already knew how to navigate by stars and land, and already taught other children how to sail and trek, and thus survey the world. As for history, with a mother who taught the subject to the higher levels, who remarkably had a degree in the subject back in “the war years” (WWII), and who still did research aided by a curious son, the child certainly knew more than his teacher. As for English, the child only learnt to read quite late, but took to it with a passion and had a couple of hundred books before becoming a teen. But saw no point in the abstract dissection of language, because as taught it served no purpose; as for appreciation of literature, it was there but in effect killed by the idiotic selection of “Books That Had To Be Read”[2]. The lad read them each within a day of being given them, and thus the entire year of sitting in a classroom having pupils read them out page by page was, to a number of the pupils, as they say “Many hours of our lives to no purpose, and time we shall never get back”.

Having read the books, when it came to classroom discussion the lad was, let’s just say, “a thorn in the paw”, thus quickly got “left to his own entertainment” (which sometimes amused the other pupils).

Knowledge from books was thus acquired and significantly augmented by the UK BBC “Open University” programs for undergraduates that were avidly watched. Curiosity might have killed the cat, but it was making the child quite precocious and thus an irritant to some teachers.

Maths was a subject that like English Lit had a Traditional and a Modern way to be taught. In Junior school the child went down the Modern path. In senior school the Traditional path “was all” and it was without doubt a form of torture no active mind should be subject to.

Then a change in staffing brought a new fresh face just out of University… Their subject of choice was “history” but they were given “mathematics” to teach… Let’s just say it was a subject they were not suited to, and the precocious mind pounced, cat-like, and had a mouse to play with…

It all came to a head with the boy telling the man he was wrong and demonstrating so over “Prime Numbers” and the how and why of them, in particular twin primes and what we now call Primorials[3], and the difficulty of finding if the flanking numbers of a primorial are prime or not other than by traditional factoring with the numbers output by the Sieve of Eratosthenes.

Most adults will recognise the reasons for what followed…

Anyway it all came to a head over the Physics Mock. The paper was actually a real exam paper from an earlier year, and it had two parts: the first was known as “mechanics”, the second “matter”. Newton’s mechanics was hated by all pupils because of the way it was taught[4]. As a subject it did not inspire in any way and apparently still does not. But that was all the school had taught that “year” up to that point. So the pupils were “instructed” to do only the first part of the exam. In the Mock the boy looked at the first half and grimaced, then the second and smiled with joy and did that instead… This caused a significant issue and a formal teachers’ conference had to be called… The Biology and Chemistry heads, Mr Pooley and Mr Ennis respectively, were very favourable because the marked grade was the equivalent of an A, which, as none of that part of the subject had been taught to pupils, was seen as some kind of miracle. However the physics and maths masters were apparently very anti, to the point of “kicking out of the subject”. But the head of Year had the casting vote and decided that the lad should go through, and actually at the higher exam level.

[1] Back then you had to be streamed in various ways. Mostly by a teacher’s opinion and the result of work. But later, and importantly, “Mock Examinations”. Most teachers did not know what to make of the lad other than he was friendly, personable, but significantly different (remember Asperger’s was unknown to even most “trick-cyclists” back then). So he got classified as either “too smart for his own good” or “lazy”. Due to other issues, homework got marked on “presentation” not “content”, thus was given low or no scores. Nobody asked why the lad could read voraciously, talk and understand years above his age, often above adult levels, but could barely spell, thus write…

[2] Back then there were two schools of thought about teaching English literature, the Modern and the Traditional, with Modern taking the “lead them forward by enjoyment” path. That is, find books the child will want to finish, thus read, that lead the mind into a world of well thought out imagination. One such was “Stig of the Dump”, a book even adults should read, along with some of the fun Shakespeare plays.

Unfortunately the senior school I was sent to took the Traditional route of “Hit Them Over The Head Till It’s Beaten In”. Thus the two books were “Lord of the Flies”, which honestly no child and few adults should read unless, as Douglas Adams pointed out, “You’ve put your psychoanalyst on danger money”. The other was a dreary book about an HMS destroyer doing the run to Russia in winter in WWII. About the only useful thing in it was a recipe for soup made with corned beef…

[3] Remember this happened back in the early to mid 1970’s, the term “Primorial” was in effect “unknown”. Academic papers in subject matter and other journals were unavailable to all but a very select few, because most Universities did not carry them and schools… Not a chance.

In fact there is even disagreement today as to when the term was coined: if you search by AI it gives 1964 with no citation. Other results say 1987, as seen in Harvey Dubner, “Factorial and primorial primes”, J. Recr. Math., Vol. 19, no. 3, pp. 197–203, 1987.
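As an aside for readers who have not met the term: the primorial p# is the product of all primes up to p, and the question mentioned above is whether the “flanking numbers” p# - 1 and p# + 1 are themselves prime. The sketch below is purely illustrative (mine, not from the original exchange) and shows the brute-force approach the story alludes to, generating primes with a Sieve of Eratosthenes and trial-dividing the flanking numbers; it is obviously only workable for tiny cases, which is rather the point.

```python
# Illustration of the primorial / flanking-prime idea mentioned above:
# p# is the product of all primes <= p; the question is whether p# - 1
# and p# + 1 are themselves prime. The elementary check is trial division
# by sieve-generated primes, which becomes hopeless as p# grows.

def sieve(limit: int) -> list[int]:
    """Sieve of Eratosthenes: all primes up to limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            is_prime[n * n:: n] = [False] * len(is_prime[n * n:: n])
    return [n for n, p in enumerate(is_prime) if p]

def is_prime_trial(n: int) -> bool:
    """Trial division by primes up to sqrt(n); fine for small examples only."""
    if n < 2:
        return False
    for d in sieve(int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

primorial = 1
for p in sieve(31):          # primes 2, 3, 5, ..., 31
    primorial *= p
    print(f"{p}# = {primorial}, "
          f"{p}#-1 prime: {is_prime_trial(primorial - 1)}, "
          f"{p}#+1 prime: {is_prime_trial(primorial + 1)}")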

[4] There are actually better ways of teaching Newtonian Mechanics than just writing up equations on a board. But it needs a reasonable grasp of the fundamental maths and geometry to do with “growth”. Which, believe it or not, are not actually taught except as more “advanced topics” in maths…

Clive Robinson October 11, 2025 12:03 AM

@ ALL,

The above story is an intro to what happened later when at college.

Sometimes even well qualified people get things wrong…

It was physics again 😉 and about the use of energy.

The lecturer made an off hand comment about the inefficiency of walking[1] that was correct, but carried it through to riding a push bike which he got incorrect. And I pointed it out.

I suspect that Steve Jobs and I had both read the same article. As he has more famously put into words in an interview that has created a meme or law, I’ll just quote them,

Bicycle of the Mind

I remember reading an Article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet earth. How many kilocalories did they expend to get from point A to point B, and the condor won: it came in at the top of the list, surpassed everything else. And humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.

But somebody there had the imagination to test the efficiency of a human riding a bicycle. Human riding a bicycle blew away the condor, all the way off the top of the list. And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes, and so for me a computer has always been a bicycle of the mind, something that takes us far beyond our inherent abilities.

I think we’re just at the early stages of this tool, very early stages, and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, but I think that’s nothing compared to what’s coming in the next 100 years.

We get great advantages from a push bike more so than most actually realise.

For over 30 years I used to do on average 500 miles a week on one whilst also working. Whereas walking I could only do about 100 miles a week.

But importantly distance was not the only difference. Those 500 bike miles took a little over 4 hours a day over five days, so 20-25 hours, in nice weather quite a bit less. Walking 20 miles, however, is about all most can do in a full day, especially if you are doing it day after day. Worse, most can not keep 2 mph up over that distance, let alone the oft-quoted 4 mph. So 35-40 hours a week is about what you would expect.

Because walking takes that much more energy on the flat, energy that is effectively wasted.

But… Whilst humans can “walk up walls” fairly easily, by stairs, ladder, or ropes etc, a bike is quickly a struggle at a 1 ft rise in 3 ft of forward travel. And close to impossible at the grade ordinary stairs take.

We need to see this in respect to other “force multiplier” tools mankind makes and uses.

Perhaps the most astounding is, as Steve Jobs pointed out, the general purpose computer. Which is, at the end of the day, what Current AI LLM and ML systems kind of aspire to be, but actually fail to be, because they are in effect “pedestrian” and not at all “general purpose”, “reliable”, or “efficient” in function or operation.

It’s a point people need to remember, Current AI LLM and ML Systems have become effectively trapped by their evolution in a dead end. They will become only a little better but at ever increasing cost. They can however in the right conditions diversify significantly.

But also consider they are like electrical switches… We are in effect just coming out of the “more power Igor” stage of “knife switches”. Which are big, clunky and expensive in materials and effort to make, as well as being very dangerous to operate. After a hundred years or so electrical switches became smaller, safer and a lot, lot less expensive. It’s a direction we should be thinking about for AI.

One thing DeepSeek has shown is that LLMs do not need to be bigger to do many things, in fact there is,

“More room for growth by going smaller”.

By being “designed to fit for a job” rather than “forcing a job around it”.

The real future for commercial AI is in “designed for a job” not “General for every job”. In fact the old saying of,

“Jack of all trades, master of none”

Applies currently, especially when you consider that “jack” actually is insulting as in “jack the lad” or a “naif”[2].

[1] Human walking is both efficient and inefficient. The reason it’s inefficient is that energy is wasted “lifting and lowering the body mass” with every step taken. It’s something inertia improves when you run fast enough, and on the flat the bicycle removes the need to lift and lower the body, leaving just the legs, which can do both at the same time and so mostly balance out.

[2] From Collins and other Dictionaries,

Naïf :

  1. One lacking experience or judgment, used as a pejorative.
  2. An irresponsible person, seeking personal pleasure without regard to responsibilities.
  3. A rogue.

From the French for “naive”. It can be used as an adjective or a noun.

Daniel Popescu October 11, 2025 11:55 AM

@Clive – thanks :).

Somewhat similar story vibes with your third person hero :), but in my case it involved a military high school. Makes a lasting impression when you are 14.

lurker October 11, 2025 7:34 PM

You know AI has passed below commodity status when gypsies are selling bargain bundles of prompts at village fairs …

‘https://www.humblebundle.com/books/complete-ai-gpt-book-bundle-with-7000-prompts-mammoth-flash-sale-books

Clive Robinson October 12, 2025 6:23 AM

@ Daniel Popescu,

“Thanks”

That’s alright.

It gave me the opportunity to practice “third person” writing 😉

Fun side to “third person”….

Trick cyclists say that talking about yourself in the “third person” is a “strong indicator” of having one of –I think it’s now[1]– six “Dark Triad” Personality Traits. Put in simpler terms, a socio/psychopath.

However teachers of English tell pupils they should not write stories, even autobiographies, in the “first person”, as this is at a minimum boastful, narcissistic, and shows other similar traits that, when you check, fall under the “Dark Triad”…

So the old,

“Damned if you do, damned if you don’t.”

Catch 22 type reasoning gets you either way 😉

What I am curious about is if this “narrator rule” applies only in “Britain”[2], or if other countries have the same or similar rules 😉

Because let us be honest writing in the “second person” outside of personal letters and advice reads like badly written sermons on moral or social behaviour =:(

[1] For those that have not come across the “Dark Triad” “term of art”, it is a list put together just over twenty years ago by those in the domain of psychiatric research. It’s a set of “subclinical personality traits” that individuals have that can be considered as causation of harm –criminality– against others in society. (Yes, there is a flip list few have ever heard of, the “Light Triad”: “Humanism”, “Kantianism”, and “Faith in humanity”; and yes, others are being considered for addition.)

The Dark Triad list actually has its roots back last century, with Psychopathy and Sadism getting linked by those researching Violent Crime and found frequently in those men convicted and imprisoned (women were and still are rarely in such prison studies). These “personality traits” were seen more generally as “Criminal Intent” drivers by those in the legal profession, who have been desperate for a tool since Phrenology was popular. It was realised that there was more to it, and back in 2002 Delroy Paulhus and Kevin Williams coined the “Dark Triad” term and listed Narcissism, Machiavellianism, and Psychopathy as the personality traits. It was fairly quickly noted that it lacked “Sadism” and more recently “Spitefulness”.

And yes with the very clear rise of certain types of Authoritarian behaviours, there are indications that the sort of “Racism” that leads to what is in effect genocide should be added.

[2] “The British” are said to have certain types of Personality Traits like “Do not go out in the mid-day sun”, “a stiff upper lip”, “clipped speech”, “A taste for Gin and Warm Dark beer” “Boiled beef and carrots” and similar suitable for parody and songs. But it’s oft noted that the real separator between “Brits and Yanks” is not the “pond” of the Atlantic, but the “formal” common language that divides us.

David Ward October 13, 2025 4:13 AM

The real worry with all of this is that whilst AI could help the defenders as much as the attackers, typically the defenders are hamstrung by Corporate, Legal or Regulatory rules whereas the attackers have free rein.

To get a new piece of wizzy AI technology deployed in the typical Corporate environment will take weeks if not months to be approved. In the meantime the attackers are 5 versions further down the track.

This will need a paradigm shift in Corporate thinking if we are to stand any chance (and that is assuming the ‘top table’ even appreciate there is an arms race going on in the first place).

Roger A. Grimes October 13, 2025 2:25 PM

By the end of 2026, almost all hacking will be accomplished by agentic AI or AI-enabled tools. Our kids and grandkids will not associate the word ‘hacking’ with humans. It will just be something computers do.

Clive Robinson October 13, 2025 5:54 PM

@ Roger A. Grimes, ALL,

“By the end of 2026, almost all hacking will be accomplished by agentic AI or AI-enabled tools.”

Only “known knowns” and some “unknown knowns” will be hacked by Current AI LLM and ML Systems.

The “unknown unknowns” and other “unknown knowns” will have to be found by humans for now.

The reason is there is no “reasoning” or “intelligence” in the human sense in Current AI LLM and ML Systems; they are simply a form of “pattern matching with fuzzing”. That is, they can only find “known” patterns and close variations thereof.

That said the speed they will be able to find new variations on “knowns” will be very fast. Thus it will feel like an unstoppable torrent of new attacks, that nobody can keep up with.

There are only a few things we can do,

1, Live with the failure.
2, Use drastic mitigations.
3, Actually stop using artisanal development.
4, Adopt actual “engineering” development.
5, Use methods including AI to come up with some or all of the above proactively not reactively.

The best course of action is to use 2, 3, 4 now and isolate, reduce, or eliminate the tsunami of “technical debt”.

That said there will always be some form of fundamental vulnerabilities that can not be engineered out, and for which mitigating by isolation is just not practical. This means option 1 is going to happen; this is where having certain types of “instrumentation” on systems will hopefully produce a signal above the ambient noise level, thus give some form of early warning. Hopefully before too much damage is done.

Shurg October 14, 2025 4:58 AM

Been thinking about this for a while.

What if tomorrow’s internet simply becomes an unusable constantly evolving battlefield where vulnerabilities are discovered, exploited and patched in seconds, minutes or hours by AIs on both sides, where any entity unable to sustain “VulnOps” capabilities is pushed out (or pushed under the umbrella of one that does, for a fee…)?

It’s already massively concentrated and consolidated, but wouldn’t that be the deathblow (or at the very least an accelerant)?

Clive Robinson October 14, 2025 10:32 AM

@ Shurg,

With regards,

“What if tomorrow’s internet simply becomes an unusable constantly evolving battlefield where vulnerabilities are discovered, exploited and patched in seconds…”

In most places access to the Internet or other forms of electronic communication is not a right, a public good or even guaranteed by legislation.

So a “highway” of any kind it is not.

Which is odd because courts in increasing numbers of places are now regarding not having electronic communications as evidence against a defendant…

I’ve recently had crap from a UK Government agency because I do not have EMail etc… As I’ve pointed out to them the law only requires I have an address or equivalent where “post” can be sent to…

Thus Governments who should know better are using the fact that you still have the freedom not to buy into their,

“Do it on the cheap and, worse, evade legal obligations to individuals.”

To say that you are some kind of suspicious person and should be found guilty of something…

Even if it’s only from their point of view “Conspiring to make their life more difficult”.

Whilst the majority of the middle class and up might now communicate electronically, and thus be subject to all kinds of surveillance and worse, those nearer the bottom of the socioeconomic ladder are being discriminated against because they can not afford “The Corporate Tax”.

I’m of the view that most electronic communication is a totally needless waste of my time and resources, thus I “Opt Out” and will continue to do so, especially the lunacy of Social Media and On-Line Shopping.

If more people “Opted Out”, it would be less likely that your Government would “mandate electronic communications” as being required.

Oh and if they ever do, you can almost guarantee that the cost of being OnLine will double within a short period of time.

Digtal Canary October 15, 2025 12:38 PM

@Bruce the idea of self-healing networks is an important one, but I’ve yet to receive anything approaching a reasonable answer to the following concern from vendors in the digital immune system space: what happens if your product causes an autoimmune disorder in my network?

@Clive it’s refreshing to see someone else who understands the gross negligence associated with arms & dual-use export control regimes, both national and international. I fought hard for that very issue 25 years ago, and remain discouraged that no one ever was fitted for orange jumpsuits. Instead, Jon Bartol & Dana Deasy have gone on to lead successful IT executive careers, with Deasy in particular grinding my gears having been CIO for the USDoD under Biden. https://open.substack.com/pub/digitalcanary/p/holding-miscreants-to-account

@Bruce, FYI, you were the crucial contributor to my early development as an InfoSec & risk management professional: I got to work with you for several weeks at USAA in the mid 1990s, and your tutelage was invaluable to me. Similarly, Liars & Outliers has been, in particular, a hugely valuable resource in helping those around me to better understand the game-theoretical basis for so much bad actor behaviour in all aspects of our world, cyber and physical (and where they intersect to provide so much value, with so much attendant risk, accelerating even further through the misuse of AI to “solve”, or rather paper over, problems and inefficiencies).
