Security Risks of AI

Stanford and Georgetown have a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic.

Jim Dempsey, one of the workshop organizers, wrote a blog post on the report:

As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. The understanding of how to secure AI systems, we concluded, lags far behind their widespread adoption. Many AI products are deployed without institutions fully understanding the security risks they pose. Organizations building or deploying AI models should incorporate AI concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle. It will be necessary to grapple with the ways in which AI vulnerabilities are different from traditional cybersecurity bugs, but the starting point is to assume that AI security is a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features. (Andy Grotto and I have vigorously argued against siloing AI security in its own governance and policy vertical.)

Our report also recommends more collaboration between cybersecurity practitioners, machine learning engineers, and adversarial machine learning researchers. Assessing AI vulnerabilities requires technical expertise that is distinct from the skill set of cybersecurity practitioners, and organizations should be cautioned against repurposing existing security teams without additional training and resources. We also note that AI security researchers and practitioners should consult with those addressing AI bias. AI fairness researchers have extensively studied how poor data, design choices, and risk decisions can produce biased outcomes. Since AI vulnerabilities may be more analogous to algorithmic bias than they are to traditional software vulnerabilities, it is important to cultivate greater engagement between the two communities.

Another major recommendation calls for establishing some form of information sharing among AI developers and users. Right now, even if vulnerabilities are identified or malicious attacks are observed, this information is rarely transmitted to others, whether peer organizations, other companies in the supply chain, end users, or government or civil society observers. Bureaucratic, policy, and cultural barriers currently inhibit such sharing. This means that a compromise will likely remain mostly unnoticed until long after attackers have successfully exploited vulnerabilities. To avoid this outcome, we recommend that organizations developing AI models monitor for potential attacks on AI systems, create—formally or informally—a trusted forum for incident information sharing on a protected basis, and improve transparency.

Posted on April 27, 2023 at 9:38 AM • 12 Comments


andyinsdca April 27, 2023 10:19 AM

One overlooked aspect of AI security is IP leakage. Suppose you work for a company that is IP-centric (Apple) and you’re using ChatGPT to do something. ChatGPT learns from your questions and then a competitor (Intel) comes along and starts asking interesting questions to suss out what ChatGPT knows about Apple and a new product line. Is ChatGPT “smart” enough to not tell what it knows about Apple? This gets very interesting since MS Teams is integrating ChatGPT and Teams “knows” quite a bit about the enterprises it’s used at.

Winter April 27, 2023 11:52 AM


One overlooked aspect of AI security is IP leakage.

IP is as relevant to AI progress as horses' veterinary health rules are to automobiles.

philion April 27, 2023 1:01 PM


You seem to think we’re talking about “AI progress”. We’re talking about security.

Regardless of “progress”, LLMs’ speed of adoption and lack of transparency pose security threats by themselves. Leakage, as Andy points out, is another. Any data on servers outside your control (“cloud”) is a risk. The reckless deploy-first, secure-later approach of bleeding-edge product development will waste a lot of capital (and probably lives; looking at you, self-driving cars), just like all those buggy crypto exchanges.

Robert Thau April 27, 2023 1:17 PM

IP leakage is a completely valid concern — if your interactions with the thing are used as training data for subsequent tune-ups, which is something ChatGPT in particular has been doing for a while. However, as of two days ago, ChatGPT lets you opt out of this, and the OpenAI APIs switched from opt-out to opt-in on training a while ago.

If you’re shipping confidential data to a third-party service, you need some idea of how they’ll use and protect it — but what AIs are adding here is some extra concerns to what was already a fairly long list.
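One common mitigation for the concern above is to scrub prompts before anything leaves your control. This is purely an illustrative sketch, not something the commenters describe: the codename list, the regex patterns, and the `scrub` helper are all invented for this example, and a real deployment would need far more than this.

```python
import re

# Hypothetical illustration: redact known project codenames and
# secret-shaped strings from text before sending it to a third-party
# service. The patterns below are examples only, not a complete list.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),       # API-key-like tokens
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US-SSN-shaped numbers
]

def scrub(text: str, codenames: list[str]) -> str:
    """Redact known codenames and secret-shaped strings from a prompt."""
    for name in codenames:
        text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

prompt = "Summarize the Project Bluefin specs; key sk-abcdefghij1234567890."
print(scrub(prompt, ["Project Bluefin"]))
# -> Summarize the [REDACTED] specs; key [REDACTED].
```

This only reduces accidental leakage; it does nothing about what the provider does with the data it still receives, which is why the provider's training and retention policies matter.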

Winter April 27, 2023 4:56 PM


IP leakage is a completely valid concern

When your “IP” is indexed by Google, why is GPT a problem?

Winter April 27, 2023 4:59 PM


You seem to think we’re talking about “AI progress”. We’re talking about security.

In my experience, progress and security problems are two sides of the same coin.

Dictionary April 27, 2023 11:22 PM

@Winter, @Robert
IP= intellectual property, not Internet protocol.

Most unfortunate, unnecessary use of an acronym.

Dr X April 28, 2023 7:38 AM

AI is having a bit of trouble because it fails to first test whether a statement is ambiguous before answering.

AI is just saving on power by not checking if a statement is ambiguous.

Clive Robinson April 28, 2023 8:46 AM

@ Robert Thau, ALL,

Re : How many outputs?

“IP leakage is a completely valid concern — if your interactions with the thing are used as training data for subsequent tune-ups”

There is another concern few others think about, which is “Test Harnesses”.

When even simple systems are designed they have “test points” included on which “test instruments” can be hung. This is as true for software design as it is hardware design.

Usually the test points are left “unconnected”, but in more complex systems they may be connected to a “test harness” that brings all the test point outputs to one central monitoring point.

A user assumes the system is in effect a black box where they control both the input and the output, and thus incorrectly assumes some level of privacy for their actions, questions, and answers received.

Access to a test harness that the user is unaware of or cannot see can allow another person to leak all sorts of information, or to add extra data that skews the results given back to the user.

People should ask themselves a question,

“In a multi-million dollar experimental prototype system, just how many test points would I add and where?”

Because the chances are that in a real system, especially a complex one, the number is in reality significantly greater than most of us can imagine.

Each and every one of those test points is a security risk in multiple ways.
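The risk can be sketched in a few lines. This is purely an illustrative toy (the `Pipeline` class and its tap mechanism are invented for this example, not taken from any real system): a “test point” is just a callback wired into the processing path, and anyone who can register one sees every query, whether or not the user knows it exists.

```python
# Hypothetical sketch of a "test point" wired into a processing pipeline.
class Pipeline:
    def __init__(self):
        self._taps = []            # test points: observers of internal state

    def add_tap(self, fn):
        self._taps.append(fn)      # e.g. a developer's test harness

    def handle(self, query: str) -> str:
        result = query.upper()     # stand-in for the real processing/model
        for tap in self._taps:
            tap(query, result)     # every tap sees the raw traffic
        return result

seen = []
p = Pipeline()
p.add_tap(lambda q, r: seen.append((q, r)))   # covert monitoring point
p.handle("confidential question")
print(seen)   # the harness captured the user's "private" interaction
```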

Oh, and remember: as far as US legislation is concerned, anything collected from those test points belongs not to the system user but to the person collecting the data, whatever it is, from whatever part of the system they decide to monitor and record, for whatever reason.

It’s one of the reasons I recommend against all “Cloud Systems”, not just search systems and AI systems.

I used to work for a company that created “citation databases” for, amongst others, Big Pharma and similar organisations with trillions of dollars of research IP. They were paranoid even by my then paranoid standards. In the quarter of a century since, many others have started to realise that what they took for paranoia was justified then, and is insufficient compared to what everyone should be doing these days. As for Big Pharma and Co, their level of paranoia is still way higher than mine.

As a friend in the industry once put it,

“You think of ‘rectal scanning’ as a SciFi joke, one step beyond abduction and probing. They, however, think of it as just the first step on an endless journey…”

So it’s not just “Who watches the watchers” but “How they watch”.

If you hunt around you will find security companies selling voice stress analysers, face temperature and breathing rate analysers, blood pulse and oxygen level analysers, even body posture and movement analysers, as the new “Lie Detector” systems. They are often surreptitiously installed in a covert manner to get coverage at all times on all personnel… over and above all the monitoring systems on the computers, telephones, and any other tech they can bug…

Remember the UK newspaper whose whacko executives decided to up the idea of “Hot Desking” by putting infrared crotch heat detectors under the desks?

The amount of money wasted on such systems is immense, but Big Pharma, amongst others, spends a lot of money on surveillance of their employees and those who know them.

I’m sure the fairly recent sting in a pub/restaurant in Éire against a drug company executive will have caused many to think of further increasing employee surveillance…

lurker April 28, 2023 2:18 PM

IP (Intellectual Property), like AI (Artificial Intelligence), is an oxymoron. ChatGPT-x may have some definition(s) of the word oxymoron, it may have some examples of the consequence of inappropriate use of oxymorons, but it cannot care about that because it has no conscience.

no comment May 1, 2023 9:19 AM

There is also equivocation to be handled.

“Things are said to be named ‘equivocally’ when, though they have a common name, the definition corresponding with the name differs for each. Thus, a real man and a figure in a picture can both lay claim to the name ‘animal’; yet these are equivocally so named, for, though they have a common name, the definition corresponding with the name differs for each.” [1]

Equivocation seems to be essential in human use of language.

  1. Logic, Aristotle.
