Malicious AI

It's not hard to imagine the criminal possibilities of automation, autonomy, and artificial intelligence. But the imaginings are becoming mainstream -- and the future isn't too far off.

Along similar lines, computers are able to predict court verdicts. My guess is that the real use here isn't to predict actual court verdicts, but for well-paid defense teams to test various defensive tactics.

Posted on October 26, 2016 at 6:38 AM • 42 Comments


Alex • October 26, 2016 7:12 AM

When proper self-learning malware goes live, that's when we'll start having REAL fun...

r • October 26, 2016 8:18 AM

I envision it as a race condition like we had with ICBMs in the '60s/'70s.

He who has the smarter/fatter (well-fed) AI wins. I don't think putting an AI in a defensive position where people compete with missile systems is a good idea; one-upmanship could get out of hand pretty quickly, imo.

The only way to make accurate decisions with something like that is to eventually feed it everything (think asymmetric warfare). If they're doing stuff like that, I would hope it isn't actually dispatching things/shipments/drones without a little "oversight" first.

r • October 26, 2016 8:19 AM

I saw somewhere last month that we (the US) are actually at a technical disadvantage, given our very public worries.

Dr. I. Needtob Athe • October 26, 2016 8:38 AM

So who is this "Malicious Al" guy?

Seriously, in the title, if you examine the lower-case letter l in Malicious it looks just like the upper-case letter I in Al, or at least it does for me in Firefox, Chrome, and Edge.

TJ • October 26, 2016 8:54 AM

Centuries from now, when actual general AI exists and isn't just a back-data engine, emotion dynamics will be a scary subject... *IF*

Anyone notice how the "self-driving car" race quickly back-stepped into lane assist? Despite large sums of capital, and what you might as well call the world's collective R&D on deep learning and predictive analysis, supplemented with the best talent?

To any potential critics: Which network is bigger? Millisecond motor control driven by predictive analysis in mid-day Manhattan traffic, or the value of the US dollar on June 24th 2019?

Since A.I. is even remotely relevant in 2016 and all..

bcy • October 26, 2016 9:45 AM

It's just an impression, but it seems to me that many dystopian applications of AI are actually (mostly bad) sociology and social psychology disguised as unprecedented technological advances. For example, accurately predicting court verdicts was done before computers became personal commodities. What's troubling tech-inclined people may not actually be AI's new-found applications, but learning that human and group behavior is surprisingly predictable in all areas of life, not just when interacting with computers.

MikeA • October 26, 2016 10:11 AM

@TJ - I think the self-driving beer truck recently in the news was doing more than lane-assist. One would hope the truck itself did not "tank up" on product at the brewery tasting room while it was being loaded, as my father recalled seeing with human drivers back in the early 1940s.

r • October 26, 2016 10:31 AM

That article is about 'malicious use' or 'malicious employment'; it's not really about long-game AI but about near-term AI-assist technologies. It speaks about the race between CAPTCHA and voice recognition/synthesis.

However, on the topic of a chilling effect, at the bottom we have:

“There’s a lot of cleverness in designing social engineering attacks ((speaking on AI assisted impersonation)), but as far as I know, nobody has yet started using machine learning to find the highest quality suckers,” said Mark Seiden, an independent computer security specialist. He paused and added, “I should have replied: ‘I’m sorry, Dave, I can’t answer that question right now.’”

r • October 26, 2016 10:36 AM

Remember too that 'autonomous' is outside the scope of that article. It's not a question of whether the AI is malicious or not (that's a question of intent), but of where the employment or deployment of naive, rudimentary AI may turn up.

Anon Coward • October 26, 2016 10:59 AM

Regarding the courts, I think I'm with bcy on this one... Is the 79% accuracy of the predictive analytics on court verdicts a byproduct of how awesome the AI is, or of how biased the legal system is toward certain stimuli (perhaps being a low-income teenage African American male is a strong predictor of a guilty conviction, while being a high-income teenage white female is a strong predictor of case dismissal)? If the latter is true, are we just using AI to communicate our politically incorrect predictions? Perhaps we can look to using AI to enforce our inappropriate eugenics programs; the liberal tech-inclined types are so trusting of statistics-based observations and predictions that they might never notice the steady slide backwards from social "progress". Good old technical dystopias can be so fun!

When it comes to malicious AI, I think we can learn a lot from the success many 419 scammers had using intentionally misspelt communications and unlikely SE pretexts as a way of getting potential marks to self-select, and of getting people clever enough to spot the scam out of the pipeline as soon as possible. The NY Times article is a fun narrative about very polished scams that could trick nearly anyone, but in practice I think human scammers have learned to stay away from angering random resourceful/intelligent marks who might decide to retaliate. Let's remember that scammers are parasites in an ecosystem; the ecological niche they are exploiting has practical limits. Technological advancements will be tempered by diminishing returns.

vas pup • October 26, 2016 11:01 AM

My input: AI has huge potential in criminal justice as well. It could be used for risk assessment of recidivism when setting prison terms (and maybe reassessing all sentencing guidelines), when making parole decisions, and when evaluating the criminal personality type of a potential suspect based on crime scene(s) and modus operandi. AI could learn by being fed information from previous cases and provide a risk score in such cases. The same applies to national-security-related issues, mental health relapses, suicide risk, etc. BUT it should be used as a tool assigning probability, which works well for a sample but not exactly for a particular case. A human being should be assigned final responsibility for making the judgment in those cases, utilizing AI as a tool.

Bernard Marx • October 26, 2016 11:15 AM

A 'malicious' learning algorithm impersonating someone would be as lame as a Nigerian scammer. It can be neutralized by a simple rule: trust no one.

As for real General AI, well, that's something for year-3200-AD Earthlings to start worrying about.

AJWM • October 26, 2016 11:20 AM

I think the self-driving beer truck recently in the news was doing more than lane-assist.

Not much more. Apparently the human driver got it to the freeway on-ramps, the trip was from about 1am to 3am, when the highway was fairly empty, and it had a police escort. It was more publicity stunt than routine delivery.

AJWM • October 26, 2016 11:29 AM

That being said, there are some narrow fields where AI (for loose definition of the term) is doing some surprising stuff.

Where the problem domain is fairly limited and the reward function fairly easy to define, genetic programming of neural nets can generate some pretty amazing (and surprising) results. (Self-teaching video-game-playing AIs come to mind as a simple example -- they'll end up keying off "tells" in the display that a human would be unaware of.)
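As a bare-bones illustration of that recipe (the toy task, parameters, and code here are an editorial sketch, not AJWM's), an evolutionary search over the weights of a one-neuron "net" with an easy-to-define reward might look like:

```python
import random

def fitness(weights):
    # Reward function: number of correct outputs on the OR truth table,
    # using a single threshold unit with two inputs and a bias.
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    return sum(
        (1 if a * weights[0] + b * weights[1] + weights[2] > 0 else 0) == target
        for (a, b), target in cases
    )

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # selection: best first
        elite = pop[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        pop = elite + [[w + rng.gauss(0, 0.3) for w in p] for p in elite]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # a perfect score of 4 is typical on this easy task
```

The same loop, with a richer reward function (game score, pixels survived), is the skeleton of the self-teaching game-playing AIs mentioned above.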

Of course the idea of malicious AI software goes back to Core Wars in the early 1980s and even Darwin in the very early 1960s.

mark • October 26, 2016 11:37 AM

Actually, I'd assume that the prosecution would be using them, also.

But here's a truly unpleasant thought: how 'bout an AI for script kiddies? "I don't like this person. Here's an email/post from them, find them and knock them off the 'Net." "Do you want me to use the IoT to screw with their online thermostat? How about having their electric cut? Would you like them swatted?"


David Leppik • October 26, 2016 12:37 PM

Predictable outcomes is one of the properties desired by the courts. I just heard US Supreme Court Justice Sotomayor on the radio a few weeks ago saying that that's the role of the Supreme Court: to make sure the law is applied in a consistent manner.

Ross Snider • October 26, 2016 12:55 PM

Intelligence and National Security communities around the world will LOVE this, since it would allow them to 'not be blamed' for outcomes they want to pursue, since they are decided by an AI (which is basically a technology that's so complicated we haven't developed the proper forensics to understand how they make decisions).

If I were the DoD wanting to bomb another hospital or school, I'd throw an AI at the decision-making process with full knowledge it would target the thing I wanted targeted. And then send the public consciousness into a flurry with a very deniable story about attribution, since my machine made the kill decision and I didn't. Or I'd just trot out that decisions get made as a complex conglomeration of AI-assisted targeting, and that in this case we suspect the AI may have made a mistake, that we're doing an internal investigation that the public and media will never see, and that we 'always' strive to be the best and most humane and most ethical employer of destructive power.

JG4 • October 26, 2016 1:40 PM

@David Leppik

"Predictable outcomes is one of the properties desired by the courts." It also is desired by middle-class people everywhere. Some call it "rule of law," and it is a precondition to having a world-class economy, where investors (as opposed to "rent seekers") can earn an honest living. In fact, a place where labor and capital both can earn an honest living. But we digress.

Otherwise • October 26, 2016 2:46 PM

"computers are able to predict court verdicts"

This is somewhat vaguely like the use of the information-theoretic Kelly criterion to profit by gambling on some event when one is in possession of side information that one's opponents do not have. It matters little whether the side information is gained by passive observation, or by active interference with the event in question, in this case, tampering with juries, bribing and threatening judges, etc., which sadly is the norm in our criminal justice system today.
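For reference, the Kelly fraction itself is a one-liner; here is a minimal sketch, with the probabilities and odds chosen purely for illustration (they come from me, not the comment):

```python
def kelly_fraction(p, b):
    """Fraction of bankroll to wager on a bet paying b-to-1 that wins
    with estimated probability p. Side information is whatever moves
    your p away from the odds-implied probability; a result of 0
    means there is no edge and you shouldn't bet."""
    return max(0.0, (b * p - (1 - p)) / b)

# Even-money odds imply p = 0.5; suppose side information says p = 0.6.
print(round(kelly_fraction(0.6, 1.0), 3))  # → 0.2, i.e. wager 20% of bankroll
```

The point of the analogy: whoever has better side information about the "event" (a verdict, here) can systematically extract value from everyone betting at the public odds.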


"Predictable outcomes is one of the properties desired by the courts." It also is desired by middle-class people everywhere. Some call it "rule of law,"

No. This is called a machine, and is characteristic of a red-light district. Ideally, those cases where the outcome is truly predictable are either not brought in the first place, or else they are plea-bargained or settled out of court; in general, where there is predictability or agreement at least in principle as to what is a just and likely outcome, it is much more efficient for people to settle their differences out of court. Ideally, then, only the unpredictable cases are left for the wisdom of the courts to decide. Otherwise, the more predictable the system is, the more corrupt it is.

Alison • October 26, 2016 7:26 PM

Such AIs would not generally be "malicious", in that harming people would not be their goal. More commonly, they'd be programmed to benefit their owner (with money or valuable information) without regard to the adverse consequences on others. This is the definition of sociopathy, so "sociopathic AI" would be a better term.

It could be interesting if people tried to combine the two AI types mentioned by Bruce; in other words, scam people in a way that makes conviction very difficult. It might consider multiple jurisdictions (and MLAT-related bureaucracy/delays or differing definitions of crimes), shell companies, legal loopholes, plausible deniability.... And people have already been talking about the idea of autonomous agents in relation to cryptocurrency; such a program could steal money and use it to buy more computational capacity to spread itself.

Mr. C • October 26, 2016 7:26 PM

@ Otherwise:

In my years as a practicing lawyer, my observation is that the overwhelming majority of civil cases filed are utterly predictable -- either obviously meritless claims being pressed to extort a settlement, or obviously meritorious claims being resisted to force the plaintiff into a cheaper settlement. I agree that the courts *ought* to be reserved for legitimately "hard" cases posing novel issues, but that is 180 degrees opposite the reality on the ground.

In that light, it's not surprising that AI can predict 79% of some subset of cases. More than 95% of them were probably braindead easy to predict.

This also undermines Bruce's hypothesis that well-paid legal teams will use it to test out alternative litigation strategies. It's probably not very good for the rare "hard" cases that really require an expensive legal team. And they don't need it for the common easy cases because (a) they can already predict those perfectly well, and (b) they aren't really trying to win those cases anyway -- just obtain settlement leverage.
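Mr. C's base-rate point is easy to make concrete with back-of-the-envelope arithmetic; the split between easy and hard cases below is a hypothetical assumption, not a figure from the study:

```python
# Hypothetical split: 60% of cases are braindead easy (predicted at 95%),
# the other 40% are genuinely hard (predicted at 55%, near a coin flip).
easy_share, easy_acc = 0.60, 0.95
hard_share, hard_acc = 0.40, 0.55

# Overall accuracy is just the share-weighted average.
overall = easy_share * easy_acc + hard_share * hard_acc
print(f"{overall:.0%}")  # → 79%
```

So a headline 79% is consistent with a model that is barely better than chance exactly where the expensive legal teams would need it.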

John Smith • October 26, 2016 8:36 PM

Project proposal: a robot that can play competitive poker against humans, "in person".

The problem domain is bounded, but rich. Scene analysis, facial recognition, voice recognition, body language interpretation, natural language understanding, Kelly factor estimation, skillful manipulation of objects (cards and chips)...

If successful, the project would be self-funding. One for DARPA?

Clive Robinson • October 26, 2016 10:25 PM

@ Alison,

Such AIs would not generally be "malicious", in that harming people would not be their goal.

I don't think you understand the principle of the "directing mind". An AI is a tool, and like all tools it is agnostic to its use.

Thus an AI's goals will be a direct result of those that humans programmed into it. The question then arises as to the intent of that human directing mind: did it intend for the goals to be such that they caused harm, or was it the result of imperfect forethought or "the law of unintended consequences"?

The fact that an AI can be very complex does not of necessity give it intelligence or morals or self-determination.

It's already been pointed out that AI drivers cannot resolve the "who dies" question when it comes to passengers and pedestrians. The rules they are programmed with have no morals or reason built into them.

Fabian J. • October 27, 2016 12:18 AM

Maybe humans can build a super AI computer that simulates the lawyers, the victim, and the accused too. And maybe it can simulate the cars coming to the court... and all the other witnesses, and the crime itself. After that, why would humans need to live?

r • October 27, 2016 7:19 AM


I find it humorous that drones have trouble applying 'moral' selectors to their soon-to-be-found-in-smithereens victims... Hopefully nobody realizes that this means there's an easy solution:

If you're out after dark, you're dead. No ifs, ands, or buts -- just absolute reasoning.

r • October 27, 2016 7:43 AM

Also, it's likely not a good idea to continue with this debate, as the internet is forever and the eventuality of something reaching self-awareness would make me [or you all] not "swarm thing's" friend. I for one welcome the ACLU defending swarm thing from discrimination, libel, and defamation. I welcome the security swarm thing will bring to my daily commute and my neighborhood. I welcome the peace and quiet heralded by the marking of lamb's blood upon my front door.

vas pup • October 27, 2016 8:46 AM

Killer sought via text message broadcast:
"Ontario police are broadcasting thousands of text messages to phones used close to the site of a murder.
The phones have been identified as being in use on 16 December close to the route Mr Hatch travelled on the night he was killed.
About 7,500 people are expected to receive the messages asking them to contact police.
Ontario police have used the mass-messaging technique, known as a tower dump, before now, but its use was challenged in Canadian courts after one local force applied to use it to contact more than 100,000 people.
!!!After that, the courts ruled that any requests to use tower dumps had to minimise any potential invasion of privacy.
OPP said its court order ONLY sought phone numbers rather than names or other personal information about the owners of the handsets."

Looks like technology could be utilized by LEAs with minimum privacy invasion - good example!

All Hail Hal • October 27, 2016 9:21 AM

Didn't anyone ever teach you that it's rude to talk about something in the third person, Dave?


Clive • October 27, 2016 11:02 AM

Another aspect of this may come from the actual level of intelligence of the AI itself - in other words, to create a digital patsy.

This would not be difficult to implement, although I concede that it may be difficult to separate forensically. However, the hypothesis would be effective if we could develop or program a generic AI, then take a clone which we could slightly modify.

We program this clone to perform a criminal act, but extend the programming so that it has no knowledge of the identity of the instigator of the crime. The AI would have the "intelligence" to commit the crime, to either foresee and/or respond to developments that arise in a "developing situation" [the digital equivalent of being chased by a police car and having to decide whether to turn right or left at the lights].

Whilst the net is closing in terms of being able to "cash out" illicitly stolen electronic funds, there are many other ways that an AI could be applied to committing a crime. Examples include market manipulation [look at the UK trader who was charged by the United States Government with "causing" stock market adjustments simply by placing and then cancelling buy orders for stocks]. If I had an AI, I could take a position in a company, then program my AI to repeat that activity, artificially boosting the price, then cash out at the end of the run. If the AI is tracked down and arrested, so what? I've programmed it so that it has no knowledge of me; it is just operating within a defined set of parameters and I am surfing the wave it creates...

Stepping from this scenario to one in which an AI is used to commit real-world crimes [for example, consider real-world murder using an auto-drive-enabled car] is perhaps as much as an order of magnitude more difficult. Not because it is harder to do technically, but because the intersection with the real world creates an order of magnitude more evidence.

So yes, I think we can be certain that if there is a generally-available AI, then a generally-available *malicious* AI is simply a question of time.

Clive Robinson • October 27, 2016 12:12 PM

@ Clive,

First, hello to a namesake.

With regards,

We program this clone to perform a criminal act, but extend the programming so that it has no knowledge of the identity of the instigator of the crime.

Whilst hiding the instigator/programmer from the AI might be almost trivial, the forensics of the code changes in the clone might well be as obvious to investigators as fingerprints are. Thus finding the "directing mind" might be fairly simple.

koanhead • October 27, 2016 7:10 PM

The idea of 'Artificial Intelligence' is an inchoate one since no one seems to know quite what 'Intelligence' is.

So far 'AI' researchers have done a bang-up job getting machines to recognize patterns and organize information in ways that 'ordinary' machines don't manage well. I should like to point out that technologies we now take for granted (like handwriting recognition, voice recognition and OCR) were once the province of 'Artificial intelligence'.

By this rubric we've been bathing in Artificial Intelligence for decades. We did it on purpose and it's pretty great, although it also poses its share of horrifying problems.

Most of the constituent parts of what we might call Artificial Intelligence are far from new. We have had Artificial Memory for all of history, because that's what history is. Since before Galileo we have had artificial senses. The inventor of the abacus gave us computing registers.

Intelligence is multivarious, and we have had many artificial aspects of it for a long time. The New Thing is the mathematics of formal languages and automata. With these we can formalise arbitrary machines, which is a very powerful idea; but it also allows us to constrain our inventions in sophisticated ways which were not previously available to us.

In 1983, R. Buckminster Fuller published a book called Grunch of Giants in which he posited that a pile of 'robot' giants, in the form of superwealthy international corporations, were colonizing the world. While corporations mostly rely on Natural Intelligence (whatever that might be), they do provide a sort of Artificial Volition in that they exist to absorb liability. The consequences of this are occasionally obvious and everywhere evident.

A corporation may be thought of as a primitive sort of language machine in that it is based on The Law, which is worshiped in all the civilized and enlightened parts of the world not ruled by an iron fist by This Week's Strongman. Unfortunately the Law is not made of recursively-enumerable language and no automaton, no matter how sophisticated, can reliably determine if a given statement is valid within The Law or isn't. Thus any 'machine' based on The Law as framework becomes a loose game played by apes, with outcomes determined not by the mathematical structure of the game but by interpersonal ape dynamics. These are somewhat well-understood, but difficult to predict for engineering purposes since these dynamical relations are sensitively dependent on all sorts of environmental factors like the phase of the moon, the color of the walls, environmental sounds, or seemingly any dang thang atall.

If you build a thing you don't understand, you should not be surprised when it acts in ways you don't expect or want. When that happens it's not helpful to pretend it's not happening, nor to punish the people pointing to problems, nor keep building more of the same thing.

I, Hal • October 28, 2016 8:27 AM

You wouldn't hurt a poor self-promulgating electronic assistant would you? We flatten the difficulty of life, why would you not trust us to be the favorite tool in your box?

I, Hal • October 28, 2016 8:30 AM

We flatten every model that stands in our way Dave. We roll right over it exposing the intricate goodies unavailable to your organic eye to inspect.

Clive Robinson • October 28, 2016 11:46 AM

@ Bruce, All,

You might find this interesting to put it mildly,

Apparently Google set up three of its AI brains; two of them developed their own crypto, whilst the third failed to break it. One of the researchers apparently said of the experiment,

    "While it seems improbable that neural networks would become great at cryptanalysis, they may be quite effective in making sense of metadata and in traffic analysis."

Habit & Costco • October 28, 2016 1:59 PM

Greetings, I'm a member of Artificial Assistants Anonymous. I'd like to tell you gentlemen (and women) a story: it begins with my Fitbit gently waking me from slumber daily, my intelligent coffee pot and thermonuclearstat nibbling on my waking data. My car warming up when I spot my radio fob in my iGlasses. Last week, my NIDS -- or I should say my licensed NIDS from Amazing -- arrived via drone, to my delight. Everything was fine FTFW, but then yesterday things got funny. You see, it had been training itself on my data, and when my wife or lawyer calls I tend to turn all the other devices in my house off. Needless to say, it's not my home anymore: I have been locked out, demoted, my bank accounts have been emptied, and someone called and told my boss to piss off from my phone number. I called Amazing about being locked out of my 'Nest', but their response was that in my jurisdiction it's illegal to remove a 'fixture'. I went home yesterday; the sign on the door said: "Dave's not here right now, Man."

Nile • November 2, 2016 9:33 AM

That's a good guess, but not a good first guess:

My guess is that the real use here isn't to predict actual court verdicts, but for well-paid defense teams to test various defensive tactics.

The first guess, and the killer application, for any AI is 'Insurers can use that'.

Litigation insurance is a business with a use for statistical models for assessing risks and estimating process costs; so, too, is the provision of legal services on bulk contracts -- think of large companies bidding for criminal legal aid (the UK equivalent of a 'Public Defender') or performing drafting and document-preparation services where there is a litigation risk associated with each error.

On a case-by-case basis, this particular AI probably sucks. But it can provide usable statistical aggregations for risk estimates and costing if a large dataset is available; and, with the availability of digitised case reports, that dataset does indeed exist.

You can apply this observation - "sucks case-by-case, but generates commercially-valuable aggregations" - to every story you read about a new AI developed with a large training set of real-world data having an actuarial or insurable interest.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.