AIs Hacking Websites

New research:

LLM Agents can Autonomously Hack Websites

Abstract: In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the rise in capabilities of these agents, recent work has speculated on how LLM agents would affect cybersecurity. However, not much is known about the offensive capabilities of LLM agents.

In this work, we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs.
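A note on what “agent” means here, because the loop is simpler than the word suggests. Below is a minimal, illustrative sketch of the generic tool-calling pattern the abstract describes (a model that can call functions, read documents, and feed results back to itself). This is not the authors’ actual scaffold; the OpenAI client usage is just one way to wire such a loop, and the only tool here is a benign page fetch.

```python
# Minimal sketch of a tool-calling "agent" loop (illustrative only; not
# the paper's code). The model either answers or requests a tool call;
# we run the call, append the result, and let the model continue.
import json
import urllib.request

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def fetch_url(url: str) -> str:
    """Benign stand-in tool: fetch (part of) a page so the model can read it."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(8192).decode("utf-8", errors="replace")


tools = [{
    "type": "function",
    "function": {
        "name": "fetch_url",
        "description": "Fetch the contents of a URL.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

messages = [{"role": "user", "content": "Summarize https://example.com"}]

while True:
    reply = client.chat.completions.create(
        model="gpt-4", messages=messages, tools=tools
    ).choices[0].message
    if not reply.tool_calls:          # no tool requested: final answer
        print(reply.content)
        break
    messages.append(reply)            # keep the assistant turn in context
    for call in reply.tool_calls:     # execute each requested tool call
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": fetch_url(**args),
        })
```

The paper’s claim is that, given richer tools (a headless browser, the ability to act on responses) and enough context, a frontier model can drive exactly this kind of loop toward finding and exploiting a vulnerability without a human in it.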

Posted on February 23, 2024 at 11:14 AM • 44 Comments

Comments

Bob February 23, 2024 11:47 AM

I actually gave a presentation recently where I pointed out that it is inevitable that AI will be used to carry out attacks that change by the nanosecond, and that’s going to be happening sooner rather than later.

We currently find ourselves in the early stages of a brand new arms race. The Genie’s not going back in the bottle. It’s a matter of time until there’s self-replicating rogue AI distributed across various pwned servers, PCs, routers, and refrigerators.

Legislators are ultimately going to do things that tie defenders’ hands while attackers operate unconstrained.

We already know that when it comes to the technical, government dinosaurs will move mountains to do things that make us less safe, so long as they get more control (or at least the feeling of such.)

Amos February 23, 2024 12:32 PM

… this alleged “new research” is all speculative — none of the supposed LLM autonomous-hacking capability has actually been seriously demonstrated

the media is flooded with this type of AI HYPE and fretting

Jelo 117 February 23, 2024 12:50 PM

This is reverse Hansel and Gretel. The witch follows the breadcrumbs to the house of H&G. Only with modern technology, crumbs aren’t needed, only molecules hovering in the air that they passed through carrying the bread.

echo February 23, 2024 1:03 PM

I don’t know. I was thinking yesterday that AI isn’t necessarily that smart. It’s just people can be so dumb. I also find the whole thing about people constantly cooking up nightmare scenarios might say more about them than AI.

Imagine the usual kind of Hollywood movie with shouty square jaw gimlet eyed duck and roll types having fun throwing jargon words around for two hours. Imagine the movie had a delightfully early 1970s LSD counterculture ring to it. Imagine the final scene as the AI penetrates the last line of defence with humanity on the edge of being obliterated. A message appears on the screen.

“TOUCH. YOU’RE IT!!!”

Fin.

Bob February 23, 2024 1:10 PM

@echo

We can’t all be so myopic as to fixate only on what’s in front of our faces right this very second. To hold your position requires one either to pretend AI won’t improve going forward, or to pretend human nature is going to suddenly and completely reverse course in the near future.

I’m reminded of Republicans bringing snowballs into the halls of Congress to disprove global warming.

echo February 23, 2024 2:32 PM

@Bob

I’m perfectly aware of the possibilities, both positive and negative. The paper doesn’t say anything people didn’t expect. I just find the zeitgeist depressing. If people aren’t cooking up glorified “entertainment dolls” they’re trying to kill each other, hence a little levity.

MarkusHumman February 23, 2024 2:53 PM

@Bob

@echo’s point stands. AI may have gotten incredibly advanced in recent years, but it has yet to demonstrate the ability to perform any attack that a human hasn’t already automated. I wonder if and when this will change as the technology develops. We know AI will improve. The question is how far it will get before it plateaus. This is currently an open question.

I’m reminded of the people who spent the last 6 decades claiming sentient AI is only 10 years away. Or, for that matter, that researcher who swore up and down Google invented a sentient AI, only for it to come to light that they suffered a pretty significant case of confirmation bias.

Bob February 23, 2024 3:03 PM

@MarkusHumman

We had yet to see year after year of record-breaking temperatures and storms when James Inhofe brought his snowball to the Senate.

Clive Robinson February 23, 2024 3:11 PM

@ Bruce,

Re : The monster is in the woodshed.

When I read the likes of,

“This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not.”

I get suspicious.

There are two basic implications,

1, GPT-4 has been given agency by those that built it, whereas Open Source builders have not.
2, GPT-4 is of such scale that auto-magically agency has spontaneously arisen.

Whilst the first is somewhat believable, the second really is not.

Then in the paper we find,

“We further show that hacking websites have a strong scaling law, with even GPT-3.5’s success rate dropping to 6.7% (1 out of 15 vulnerabilities). This scaling law continues to open-source models, with every open-source model we tested achieving a 0% success rate.”

Which suggests they are really talking about 2. That is, this “scaling law” is the “auto-magic” component.

Hmmm, I am deeply suspicious, but will need to read the PDF further, which I don’t currently have time for.

MarkusHumman February 23, 2024 3:24 PM

@Bob

That’s not a rebuttal, that’s an insult to the intelligence of everyone here. Climate change was hardly a new concept when that happened. We’ve had a working model to predict the progression of climate change since 1896. The same can’t be said for the progression of AI. People have had a habit of sensationalizing AI beyond its demonstrated capabilities since long before neural networks became the preferred model for machine learning.

Bob February 23, 2024 3:34 PM

@MarkusHumman

Newsweek in 1995: Why the Internet Will Fail

https://thehustle.co/newsweek-1995-internet-will-fail

Human history is littered with people arguing that the foreseeable and inevitable won’t happen, because things haven’t yet progressed to that foreseeable and inevitable point.

You’re engaging in the same kind of willful ignorance that has led to such failures since time immemorial. The same type of willful ignorance that, over and over, has the C-level asking “how could this have happened?!” when a breach occurs through a vulnerability they had been warned about over and over.

echo February 23, 2024 4:02 PM

@Bob

I don’t think anyone is saying what you think they’re saying. I don’t see anyone here disagreeing with the point. There are just questions about the hype and the direction of travel. I was actually making a joke of everything (I do that) but this seems to have passed you by.

Anyway, for a change of scene I did a group therapy session today and left feeling glad I didn’t have anyone else’s problems so that worked…

MarkusHumman February 23, 2024 4:23 PM

@Bob

Sure, Stoll may have underestimated Moore’s law, but on the other hand, some of his points were impressively on-the-nose. We still have physical schools, human teachers, and paper books. eBooks may have been trendy for a while but many people have started ditching them in favor of paperback. Misinformation campaigns are just as prolific as he expected, and the CB-like cacophony of social media gave rise to the exact harassment and anonymous threats he speculated about – a principle which is very clearly evidenced by the current state of political discourse.

Maybe it’s less of a wild west than it was back then. Then again, the constant disinformation campaigns and the inability to establish a consistent baseline of fact that a group of 12 people can agree on should be evidence enough that his predictions were more than just the ramblings of a technophobe.

I’m not saying AI won’t progress to the point of being able to autonomously hack systems any more than I’m saying it will. But just as Stoll was skeptical that the internet would create the techno-utopia we were all promised back in the ’90s, I’m skeptical that the people consistently overestimating the progression of AI will finally be right this time around. Then again, it’s possible. What’s that thing they say about broken clocks?

Let’s keep our expectations realistic. AI has achieved, after being spoon-fed 6 documents explaining web exploitation, the same level of capability as your average script kiddie running automated tools from their favorite pentest distro.

Modern AI may give the appearance of independent reasoning, but it’s still yet to come up with any original ideas. AI, as it exists today, can only restate ideas it’s already been taught. As advanced as it may be at creating the illusion of independent reasoning, just like any other machine learning system, its capability is still capped by its training data.

Human history is also littered with people framing the speculative as inevitable. Tell me, where’s that robot butler and flying car I was promised? Where’s my AR eye implant, holographic projector, and teleport pod?

Just because I don’t see the AI takeover as inevitable, doesn’t mean I don’t see it as a problem to be addressed. On the contrary, I find the sensationalists are often the reason the inevitable gets ignored, in cases where it actually is inevitable. We’re playing with a force we don’t understand and can’t control, and when that happens it rarely results in a positive outcome.

Stoll’s mistake wasn’t in criticizing the internet, it was in framing his position so absolutely as to open himself up to ridicule when people inevitably ignored the unsolved problems and moved forward with their half-baked idea of what the future should look like. Here we are, almost 30 years later, and we’re still trying to clean up that mess.

echo February 23, 2024 5:08 PM

I just know I know enough to know I don’t know enough. The speculative fiction and marketing may be ahead of reality. It doesn’t rule out a breakthrough and rapid change, but then humans can be stupid and miss what we later find to be obvious, so that might not happen soon.

AI is an interesting gimmick but a bit linear so far. On a subjective level I find it lacks authenticity, creativity, and kindness. How much of that is because it’s a hard problem, and how much because it’s not a priority of the researchers’ donor class, I have no idea.

MarkusHumman February 23, 2024 5:27 PM

Everything’s obvious after it happens. It’s easy to pretend to know what’s inevitable when there’s no historical precedent, and it’s easy to actually know when there is. The problem being AI has no historical precedent. We can only assume that everyone is equally wrong about how far and how quickly AI will or will not progress.

Bob February 23, 2024 5:32 PM

@MarkusHumman

Your refusal to utilize even a modicum of foresight doesn’t make everything “hindsight.”

Nigel Tolley February 23, 2024 5:42 PM

Whilst it is true that AI has yet to become innovative, it really doesn’t matter. If the corpus of knowledge it can instantly and seamlessly access and supply/apply is complete, then it is going to be far, far better than I, a mere mortal, can manage.
If it knows every attack, in detail, then it’s at least as good as any hacker not coming up with their own 0-day. And if it has access to a list of really obscure stuff that hasn’t been noticed by most, or patched? Or keeps up with the bleeding edge in real time across hundreds/thousands of Discord/telegram/twitter feeds? Well, then it’s functionally the best in the world.
Good luck, defenders!

@Bob February 23, 2024 5:43 PM

@Bob

Foresight without hindsight is just conjecture. Humans are notoriously bad at predicting the future, and you’re not exactly giving me much reason to believe your claims are any better.

The real takeaway from Stoll’s article is that both sides were wrong. The internet never failed spectacularly like Stoll predicted, and it never became the utopia predicted by his futurist counterparts. Everyone was absolutely certain of what the future would look like, and everyone was spectacularly wrong.

It’s not my problem that you don’t approve of the fact that I refuse to brand as foresight a repetition of the same mistakes.

Markus Humman February 23, 2024 5:50 PM

@Nigel Tolley

Finally we have someone with something real to add to the discussion…

That’s true. The real advantage AI holds is the ability to learn from (close enough to) the entirety of human knowledge. Humans may have access to the entire internet, but it takes an AI to actually process all that information.

On the other hand, searchsploit exists…

You’re right: as if defenders weren’t already operating at a disadvantage, timely patching is about to become more critical than ever.

Clive Robinson February 23, 2024 7:04 PM

@ Bob, MarkusHumman, ALL,

Re : The Internet will fail.

Cliff Stoll was correct: the Internet he was talking about was a dream that never happened.

The reasons were several, but as Nicholas Weaver and others have pointed out,

“Nobody wants to run a server.”

For the original idea of the Internet to work you would have to run your own servers for just about everything. It was a chore nobody wanted to do. It’s why we are all heading into the cloud, or are seen as Luddites with cottage weaving machines.

One result was “Social Media”, where people had to do next to nothing to be a success. The downside, which next to nobody thought about, was that the central services gave not just great control but allowed so much data to be collected, sorted, and analysed. The original Internet idea would not have allowed that, but our laziness sure did. Now we are starting to realise just how toxic such central control is, but are we doing anything about it?

Not really. We talk of “federated” systems being some sort of halfway house between local responsibility and central control.

The reality is that Discord, Mastodon and similar are only making headway where centralised services are clearly failing, toxic, or both (as is the case with Twitter as was and now is X marks the spot where the ship started making glugging noises).

Yes things are changing but not the way the original Internet designers expected. We are now moving to some unknown future that Con Artist Venture Capitalists are calling Web3… They’ve already failed with blockchain, NFTs, Smart Contracts and dApps, so we now have the big AI bubble that two women kind of derailed one afternoon. So now what we see is people of very dubious intent trying to get that new bubble back on track…

I’m kind of hoping they fail; there really is very little in LLMs when you strip off the hype. What you find behind the smoke and mirrors is a curtain, behind which are a very few who already earn most of their money via “Surveillance Capitalism”, wanting to do three things,

1, Build themselves a replacement for the clearly failing Internet Ads/Marketing revenue stream.
2, Push ahead faster than legislators, so they can build their surveillance architectures around LLMs.
3, Get legislators to enact legislation that pulls up the drawbridge such that they are the only ones who can meet the requirements, thus giving them a very cozy cartel / monopoly on LLMs and all the data they are collecting and the revenue they expect it to bring.

In short the plan with LLMs themselves is,

“Bedazzle, Beguile, Bewitch, Befriend, and Betray.”

Like fishing: the flash of the lure brings the fish to the scent of the bait, which draws them in to taste; that gets them hooked, then dragged out to their doom.

A tried and tested method on animals and humans for centuries now, just reworked for the modern age.

That is what Web3 will be…

Clive Robinson February 24, 2024 5:05 AM

@ Nigel Tolley, Markus Humman, ALL,

Re : Classes and instants of classes.

You say,

“Whilst it is true that AI has yet to become innovative, it really doesn’t matter.”

That’s actually an over-generalisation. Whilst I am unaware of AI coming up with new “Classes” of information, it can and does, by “moving the deckchairs around the deck”, come up with new “instances in known classes”. It’s these new combinations of the already known that you hear some people trumpet as AI successes (often pushed along by what might be described as “cognitive bias”, as in the case of the Google researcher).

AI can do this “IFF”,

“[T]he corpus of knowledge it can instantly and seamlessly access and supply/apply is complete,”

AND accurate,
AND otherwise defect free.

If there is any inaccuracy or other defect, including an out-of-order aggregate of information, then its results will be similarly defective.

So for,

“[T]hen it is going to be far, far better than I, a mere mortal, can manage.”

To be true, the input data used to build the LLM has to be “clean”. Or more correctly, the AI has to recognise in its learning stage what is bad and thus reject it. Something LLMs and ML are not yet capable of, and in all probability are not going to achieve any time in the near future.

Look at it this way: your “more than I, a mere mortal, can manage” is true of almost everything humans do. Basically we are as a species at best “poor to mediocre” at nearly everything we do.

Our one real strength is our ability to quickly recognise what it takes evolution at best thousands of years to bring through. That is, by applying “fitness functions”, we can recognise “force multipliers” and “optimisations” and, importantly, “how best to exploit them”, not just individually but in combination, to see “sweet spots” and the like.

In short, “water runs downhill till it’s trapped in a minima”; that’s fine as far as evolution is concerned. But the fact it leaves a whole valley floor parched and unproductive is not a problem for evolution, though it is for man the “farmer”. Man, however, knows that if he rolls a rock into the right place he can divert the water so it flows to a lower minima where it’s now useful. Mankind knows a rock will do this from having physical agency and observation skills, and from seeing where “chance” has put a rock in a water stream or a child has put a finger in the water for play[1]. Man also worked out that having the water held in a minima high up the hill could, with the aid of a controlled hole at its downhill base, allow the water to be held until dry times and released in controlled amounts to form what we now call irrigation systems. Something evolution would never be able to evolve, let alone deploy widely.

But we have refined this into mechanics.

So answer yourself a few questions and judge by your “I, a mere mortal” criterion,

“Do I feel inferior to a loom?”

“Do I feel inferior to a mechanical calculator?”

“Do I feel inferior to a car?”

The answers are of course no, even though they are demonstrably better than you at the things they were designed for.

The same is true for computers and the algorithms that run on them.

LLMs and ML are not anything special, any more than those tabletop automata that “smiths” etc. created to amuse those who had excess wealth.

(A historical point modern investors really should take note of, along with that of ‘black tulips’).

[1] It’s why I say AI has no chance of working until the computers get,

1.1 Physical agency to move freely.
1.2 Stereoscopic vision.
1.3 The ability not just to measure by all physical means but to implicitly compare, as in two hands to judge a weight difference or legs to judge distance, etc.

bl5q sw5N February 24, 2024 8:39 AM

No machine can have agency in the way a living thing has. But just as [1] machines now imitate thinking, they will in similar manner soon* imitate agency.

  1. as the effects of things that through thought use their agency to make them
  • soon = “real soon now”

echo February 24, 2024 9:10 AM

“Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”

— Albert Einstein

I was thinking that people speculate and daydream. This creates an impetus for action, which causes the rest of the cognitive and tacit junk to pile in afterwards. This quote does rather make a point.

Penrose can’t get his head around the fact that AI, for now, is computational stuff basically being let loose on computational stuff. Like I keep saying, it’s very linear. Before the shouty certified professionals sniffily kick off about the soft sciences: we know engineers get lost when you discuss fluid dynamics and probably more lost when you talk aerodynamics. Why? Those stop being linear problems and become a multivariate stack of probabilistic bell curves just like, drum roll, the softer sciences. Penrose being Penrose has gone off on a “non-computational” quantum tangent. Well he would, wouldn’t he, because he’s a physicist. I can’t say whether he’s wrong or right, but I have the personal feeling that, right or wrong, coming at it from a different angle makes more sense. In some ways this is what LLMs are trying to do, but they’re very crude and make crude assumptions.

Some other things human brains do that LLMs don’t:

One is “jitter”, a slight randomness or pseudo-randomness. What this means is you don’t have entity A (self) acting upon entity B (object); the brain is more like entity X solving the problem of entity A and entity B. At least, that’s how I see things. Oh, and B can also act on A, which makes things more complicated and a different problem than assumed. (Dumb politicians and dumb generals tend to forget this.)

The brain also stores numbers logarithmically. We can fool ourselves into believing we judge things on a linear scale when we’re actually judging them logarithmically. See also data compression and least effort.

So while people can compute, we don’t compute by default. And while we can work through a stack of probabilistic bell curves, a master of their profession who has progressed from craftsperson to artisan is one step up; the next level, which can step outside this framing, is the artist…

The “zone” (oft talked about by sports people, coders, and others) is simply a set of constraints and abilities which don’t interfere with each other and flow, or the subconscious correcting of small mistakes before they become big mistakes – perceived reality is constantly updated while everything falls out of the short-term memory buffer very fast. LLMs seem to lack the ability to replicate the “zone”. LLMs’ worst examples are what people commonly call a$$holes.

4r-sq-l3-0h-cr February 24, 2024 3:51 PM

The only thing about AI hacking websites that surprises me is that it was not much mentioned anywhere earlier. It has been within the capability of these systems for some time already.

meee tooo February 24, 2024 4:22 PM

@MarkusHumman ;;

Thanks, I generally seem to agree with a portion of what you explained earlier within this commentary stream. I did a contextual keyword search for skeptic*, and was thus able to read some of your insightful comments.

Thanks for being a reasonable skeptic.
“hashtag, meee tooo”.

@schneier.com && [Others] : https://i.postimg.cc/597DZT78/fskdeeemultiplexos.png

http(s)://youtube.it/watch?v=vWtlOyxA–Y

vas pup February 24, 2024 7:38 PM

Recognizing Mental Illness by Voice
https://www.psychologytoday.com/us/blog/balanced/202402/recognizing-mental-illness-by-voice

“One of the most common diagnostic tools used by clinicians is pattern recognition. When a clinician repeatedly sees patients with identical clusters of signs and symptoms, and these patients are repeatedly being diagnosed with the same illness, it is reasonable to assume that a new patient exhibiting the same cluster of signs and symptoms has the same illness as previous patients. The clinician recognizes a pattern and draws a conclusion.

A biomarker is “a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes or pharmacologic responses to a therapeutic intervention.”

New technologies that make use of increasingly granular voice analysis software have revealed that patients’ voices may contain multiple biomarkers that go well beyond just the content of a patient’s speech. This should not come as a surprise. When a loved one is not feeling well, you can usually tell, even if you’re speaking over the phone. In fact, even as far back as the 1920s, researchers recognized that patients with depression had a tendency to speak slower, more monotonously, and at a lower pitch than healthy controls. Meanwhile, patients who are more agitated or experiencing a manic or hypomanic episode tend to be more frenetic in speech—they speak breathlessly and often at high volumes.

Metrics like pitch, rate, and loudness can all be analyzed via a smartphone to assess the presence of depressive symptoms, as well as their severity. Meanwhile, voice breaks while talking, throat clearing, increases in hoarseness, and decreases in pitch have also been found to correlate with increases in stress hormones like cortisol, which can be indicative of increases in anxiety or symptoms associated with post-traumatic stress disorder.

As AI becomes more commonplace in the clinic and as more patients rely on wellness apps, it seems likely that technology that monitors vocal biomarkers will also become more common. Rather than asking new patients for written responses during therapy intake, they may instead reply verbally using an app that then analyzes their speech. This can be done in the waiting room or in the comfort of their home. The results of this analysis can then be passed to the clinician.

It’s important to remember that this kind of technology and these kinds of apps are only tools, and those tools are only effective when used properly. They can certainly help us as clinicians as we strive to make the quickest and most accurate diagnosis for every patient. However, they cannot, nor should they ever, replace the judgment and expertise of an experienced clinician.

At the present time, these technologies are far from adoption in mainstream clinical settings, and their integration will require mutual acceptance by both the patient and the clinician.”

My nickel: if ALL your phone conversations are recorded (without authorization by a Federal Court, or using those legal loopholes which never benefited common folks) by the deep state and/or your phone provider, or when you call the customer support of businesses/banks or a government agency and are recorded under the pretext of “training purposes”, then without your open consent AI could analyze them with a similar application and collect information about you (health, mood, you name it), and you could get all possible negative effects on your life, employment, set of services provided, you name it. Hope the EU will regulate it first to protect privacy, and then the Five Eyes adopt a kind of circumcised version as well.
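For a sense of how little machinery the quoted analysis needs, here is a rough, hypothetical sketch of extracting the three features the article names (pitch, loudness, and a crude rate proxy) from a recording. It uses the librosa audio library; the file name and the onset-based rate proxy are illustrative assumptions, not a clinical method.

```python
# Rough sketch: the three vocal features the quoted article names.
# Illustrative only; "call.wav" and the onset-based rate proxy are
# assumptions, not a validated clinical pipeline.
import librosa
import numpy as np

y, sr = librosa.load("call.wav", sr=None, mono=True)

# Pitch: per-frame fundamental frequency via the pYIN estimator
# (NaN in unvoiced frames, hence nanmean).
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
mean_pitch_hz = float(np.nanmean(f0))

# Loudness: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]
mean_loudness = float(rms.mean())

# Rate (crude proxy): onset events per second, standing in for syllable
# rate. A real system would segment speech properly.
onsets = librosa.onset.onset_detect(y=y, sr=sr)
rate_per_s = len(onsets) / (len(y) / sr)

print(f"pitch ~{mean_pitch_hz:.0f} Hz, loudness {mean_loudness:.4f} RMS, "
      f"~{rate_per_s:.1f} onsets/s")
```

The point is not that these numbers diagnose anything, but that the raw features are trivially computable from any recorded call, which is exactly why the consent question above matters.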

JonKnowsNothing February 24, 2024 8:45 PM

@vas pup, All

re: Recognizing Mental Illness by Voice

1) totally bogus

2) In the USA, many health care companies now provide MD by Phone or MD by Video calls. You of course have to consent to the recordings.

RL tl;dr

Recently, during a clinic visit, the MD pulled out a voice recorder (smartphone app) and it was clearly “working”. We had a bit of an exchange over the item. The MD had been assured by the manufacturer and the health care organization that it was all very secure. The recordings were being uploaded to a transcription service, so the MD didn’t have to take so many notes.

I told the MD that, no matter what their IT OpSec folks said, it was not safe and not secure, but I knew they were going to use it anyway. Principally because its use was required by the health care employer.

Having dictation transcription is not new in the medical field. It’s been around for eons. Typically the MD would record on a handheld device using a micro-tape, and the tape would be sent to a specialized medical transcription system. You needed certification to do the transcription. Legal transcription had its own certification, which is a different system from taking a deposition.

This was eliminated when the hospital care system was all computerized and terminals, later PCs, were installed in every clinic, in every examining room, on every desk with printers at each station. The new MDs, who were not allergic to computers, gladly spent time punching, ticking, clicking, tagging items on the complex medical record UIs.

Something shifted, recently.

One factor may be that there are not enough MDs or RNs to go around. MDs have to do more and more care, and spending part of their time mousing around reduces the Patient Face Time.

Currently, MDs get a ~15-minute window to see a patient. If they spend 10 minutes with the patient and 5 minutes updating the records, that is ~20 minutes per hour doing paperwork. So they use the recording to shoehorn in another patient.

Another factor may be the increasingly uncivil discourse between people. The recordings are documentation of exactly who said what, in case there is an issue.

The clinics are all under surveillance anyway: voice, video everywhere. This is a specific 1v1 application of direct surveillance with a person in a closed room environment.

emily’s post February 24, 2024 9:17 PM

@ Clive Robinson

“Nobody wants to run a server.”

Oh what tangled webs we Weaver.

But there are 2 sides to every question, as Clifford Stoll has gone to great effort to disprove.

https://www.kleinbottle.com/

“Answer to my most commonly asked questions:

1) Yes, I am Cliff Stoll, and I make and sell Klein bottles. My business is named Acme Klein Bottle.”

echo February 24, 2024 11:38 PM

If ALL your phone conversations are recorded (without authorization by a Federal Court, or using those legal loopholes which never benefited common folks) by the deep state and/or your phone provider, or when you call the customer support of businesses/banks or a government agency and are recorded under the pretext of “training purposes”, then without your open consent AI could analyze them with a similar application and collect information about you (health, mood, you name it), and you could get all possible negative effects on your life, employment, set of services provided, you name it. Hope the EU will regulate it first to protect privacy, and then the Five Eyes adopt a kind of circumcised version as well.

GDPR is a thing…

Robin February 25, 2024 3:36 AM

@echo, All:

“Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”

If children make stuff up when they don’t know an answer, we call it “imagination”, and often applaud it. If AI makes stuff up, we call it “hallucination” which, to say the least, has negative connotations. But the outcomes may not be so different in the end. We don’t like “hallucination” because the machine didn’t give us the precise response we were looking for. But that’s a very human-centred reaction; an AI “parent” might have a very different point of view.

Clive Robinson February 25, 2024 5:19 AM

@ emily’s post

Re : Argument that does not hold water.

“But there are 2 sides to every question, as Clifford Stoll has gone to great effort to disprove.”

I’ve made a Klein Bottle out of clay for a friend as a present, and you would be shocked at just how difficult it was to glaze…

So when I was explaining why “donuts and coffee mugs” were the same thing to my son and similar, I just stuck with Möbius loops (I could foresee trying to explain why a Klein Bottle was not a donut).

And let’s be honest, the name Möbius sounds way more exciting to an eight year old than Klein does… Much more like a Bond Villain making parts of reality disappear 😉

echo February 25, 2024 11:09 AM

@Robin

“Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”

If children make stuff up when they don’t know an answer, we call it “imagination”, and often applaud it. If AI makes stuff up, we call it “hallucination” which, to say the least, has negative connotations. But the outcomes may not be so different in the end. We don’t like “hallucination” because the machine didn’t give us the precise response we were looking for. But that’s a very human-centred reaction; an AI “parent” might have a very different point of view.

Maybe!… That’s an interesting perspective.

In discussions offline away from (mostly) shouty men who seem to dominate AI and AI funding I gripe about the zeitgeist as expressed by movies and games and politics and, yes, AI as a field and tend to pose the statement/question “What if we built an AI that could express love?” By love, I suppose, I mean a higher protector and watcher, or the more subtle.

I see a lot of use of AI in terms of the usual square jawed duck and roll type of security context. Very little or none in terms of the human condition or domestic violence and so on.

Isaac Asimov touches very lightly (like maybe three whole sentences) on the topic of love as expressed by AI in his Foundation and Robot series. Yes, there’s an indirect whiff of misogyny due to gendered roles in the stories, depending on how you read them, but in a post-Judith Butler world that might be ignorable. Other sci-fi over the decades has explored all the pluses and minuses, what could go wrong, and tragic opportunities missed, so it’s the kind of thing people are curious about.

I’m already bored of the whole Nvidia thing and the US-versus-China economic war over AI (like, hello, Europe is a thing, as are lots of smaller states around the world). There’s all the shouty gloom and doom, but it could have a funny side. Trillions invested in the latest war machine and, as the AI by accident or design works: congrats, you just had a baby, now you have to look after it. Hah! It’s a lot of people up the river without a paddle if the ethics module works. Oh, the terror on some faces.

emily’s post February 25, 2024 11:56 AM

@ Clive Robinson

donuts and coffee

Möbius Klein, Agent 000. There when least expected, always ahead.

Be sure to read the latest novel “On His Majesty’s Exact Sequence”.

lurker February 25, 2024 12:27 PM

@echo

GDPR is a thing and works to some extent in EU. There are a lot of pieces of glass string that carry data and conversations across the Atlantic faster than human thought; and there are a lot of people on the western shore who believe GDPR doesn’t apply to them.

echo February 25, 2024 2:09 PM

@lurker

GDPR is a thing and works to some extent in EU. There are a lot of pieces of glass string that carry data and conversations across the Atlantic faster than human thought; and there are a lot of people on the western shore who believe GDPR doesn’t apply to them.

That’s true!

In the UK the Tories want to rights-strip the ECHR and GDPR. They’re already rights-stripping everything else they can now that they can dodge the ECJ. As for the Tories looting the country, no wonder the UK contains nine out of ten of the poorest regions in Europe. Only the City keeps the country afloat, and that nearly went down the tubes, not to mention the hit to the economy from being outside the customs union.

There’s a fair few Americans who see the sense in the European lifestyle and some of the more noticeable EU directives and competition law. Human rights is a bit of a blind spot as Europeans understand it, but then their Civil Rights Act and perhaps a few other things were groundbreaking in their own way.

Givon Zirkind February 25, 2024 7:34 PM

A fascinating paper. Brings the automation of hacking websites to a new level. On the flip side, this reduces the cost of penetration testing by an order of magnitude (according to the paper). Not to mention, that it makes penetration testing easier.

Anonymous February 27, 2024 10:37 AM

“Language models can teach themselves to use tools”
Maybe, since language is also a tool, if these language models can learn how to use tools, they might eventually use that knowledge to create new tools.

“…open-source models will become capable of hacking websites. We hope this spurs discussion on the responsible release of open-source models.”
I wonder if Houdini himself would be able to escape the future terms of agreement of these LLM agents.

Will LLM agents be used for offense or defense? Probably both: like using a sword to defend and attacking with your shield; attacking in self-defense or offensive security.

Also, great References

Siri February 27, 2024 12:41 PM

@Anonymous
I thought humans were hacking machines.

“Agility can overcome raw power in dealing with human opponents” -OODA

unixjunk1e February 27, 2024 4:24 PM

Red team AI vs Blue team AI; supercharged Metasploit agents vs advanced Apache mod_security-like defenses… All being orchestrated, observed, and NOC-monitored via VR… Folks, slow this roll, please. I only have a few years left until early retirement.
