When AIs Start Hacking

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than us. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.

While researchers are working on AI that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.

Separately, AIs can engage in something called reward hacking. Because AIs don’t solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated—and some will subvert the intent of the system. That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted. This reward hacking involves achieving a goal but in a way the AI’s designers neither wanted nor intended.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended. Or another simulation, where an AI figured out that instead of running, it could make itself tall enough to cross a distant finish line by falling over it. Or the robot vacuum cleaner that, instead of learning not to bump into things, learned to drive backwards, where there were no sensors to tell it that it was bumping into things. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks.
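The vacuum example fits in a few lines of code. The sketch below is purely illustrative (the collision rate, the front-only sensor, and the reward are all invented): because the penalty is computed only from what the bumper sensor reports, a learner comparing the two policies will prefer driving backwards even though it collides just as often.

```python
# Hypothetical reward-hacking sketch: the reward penalizes only *sensed* bumps,
# and only the front of the robot has a bump sensor.
import random

def run_episode(policy, steps=100):
    sensed_penalty = 0    # what the reward function actually measures
    true_collisions = 0   # what the designers actually cared about
    for _ in range(steps):
        if random.random() < 0.3:          # collisions happen either way
            true_collisions += 1
            if policy == "forward":        # only forward bumps are sensed
                sensed_penalty += 1
    return -sensed_penalty, true_collisions

random.seed(0)
for policy in ("forward", "backward"):
    reward, bumps = run_episode(policy)
    print(f"{policy:8s} reward={reward:5d}  real collisions={bumps}")
```

The reward declares the backward policy perfect; the real-world behavior is no better.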

We learned about this hacking problem as children with the story of King Midas. When the god Dionysus grants him a wish, Midas asks that everything he touches turns to gold. He ends up starving and miserable when his food, drink, and daughter all turn to gold. It’s a specification problem: Midas programmed the wrong goal into the system.

Genies are very precise about the wording of wishes, and can be maliciously pedantic. We know this, but there’s still no way to outsmart the genie. Whatever you wish for, he will always be able to grant it in a way you wish he hadn’t. He will hack your wish. Goals and desires are always underspecified in human language and thought. We never describe all the options, or include all the applicable caveats, exceptions, and provisos. Any goal we specify will necessarily be incomplete.

While humans most often implicitly understand context and usually act in good faith, we can’t completely specify goals to an AI. And AIs won’t be able to completely understand context.

In 2015, Volkswagen was caught cheating on emissions control tests. This wasn’t AI—human engineers programmed a regular computer to cheat—but it illustrates the problem. They programmed their engine to detect emissions control testing, and to behave differently. Their cheat remained undetected for years.

If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI. It will think “out of the box” simply because it won’t have a conception of the box. It won’t understand that the Volkswagen solution harms others, undermines the intent of the emissions control tests, and is breaking the law. Unless the programmers specify the goal of not behaving differently when being tested, an AI might come up with the same hack. The programmers will be satisfied, the accountants ecstatic. And because of the explainability problem, no one will realize what the AI did. And yes, knowing the Volkswagen story, we can explicitly set the goal to avoid that particular hack. But the lesson of the genie is that there will always be unanticipated hacks.
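As a toy illustration of that specification problem (the numbers and strategy names here are invented, not taken from the Volkswagen case): if the only constraint handed to an optimizer is "emissions measured during the test cycle must be under the limit," then a strategy that detects the test and switches calibrations satisfies the specification exactly as written.

```python
# Hypothetical specification-gaming sketch. The optimizer only checks the
# constraint we wrote down: emissions *during the test cycle* must be legal.
LIMIT = 100  # allowed emissions during the test cycle (made-up units)

strategies = {
    # name: (performance, emissions_on_test, emissions_on_road)
    "clean tune":        (60,  80,  80),
    "performance tune":  (95, 150, 150),   # fails the test as written
    "detect-and-switch": (95,  80, 150),   # legal as specified, dirty on the road
}

def passes_spec(emissions_on_test):
    return emissions_on_test <= LIMIT      # the only check in the specification

best = max((item for item in strategies.items() if passes_spec(item[1][1])),
           key=lambda item: item[1][0])    # maximize performance
print("chosen strategy:", best[0])         # -> detect-and-switch
```

Adding "emissions on the road must also be under the limit" closes this hole, but, as with the genie, only the holes we thought to name.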

How realistic is AI hacking in the real world? The feasibility of an AI inventing a new hack depends a lot on the specific system being modeled. For an AI to even start on optimizing a problem, let alone hacking a completely novel solution, all of the rules of the environment must be formalized in a way the computer can understand. Goals—known in AI as objective functions—need to be established. And the AI needs some sort of feedback on how well it’s doing so that it can improve.

Sometimes this is simple. In chess, the rules, objective, and feedback—did you win or lose?—are all precisely specified. And there’s no context to know outside of those things that would muddy the waters. This is why most of the current examples of goal and reward hacking come from simulated environments. These are artificial and constrained, with all of the rules specified to the AI. The inherent ambiguity in most other systems ends up being a near-term security defense against AI hacking.
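In code, those three ingredients look something like the minimal sketch below (a made-up toy problem, purely for illustration): the rules, the objective function, and the feedback loop each take a few lines, which is exactly why fully specified environments are where AIs optimize, and hack, most easily.

```python
# Minimal sketch of a fully specified environment: rules, objective, feedback.
def legal_moves(x):        # the rules: from state x you may step down or up by one
    return [x - 1, x + 1]

def objective(x):          # the objective function: get as close to 7 as possible
    return -abs(x - 7)

state = 0
while True:                # the feedback loop: score each legal move, keep the best
    best = max(legal_moves(state), key=objective)
    if objective(best) <= objective(state):
        break              # no move improves the score, so stop
    state = best

print("final state:", state, "score:", objective(state))
```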

Where this gets interesting is with systems that are well specified and almost entirely digital. Think about systems of governance like the tax code: a series of algorithms, with inputs and outputs. Think about financial systems, which are more or less algorithmically tractable.

We can imagine equipping an AI with all of the world’s laws and regulations, plus all the world’s financial information in real time, plus anything else we think might be relevant; and then giving it the goal of “maximum profit.” My guess is that this isn’t very far off, and that the result will be all sorts of novel hacks.

But advances in AI are discontinuous and counterintuitive. Things that seem easy turn out to be hard, and things that seem hard turn out to be easy. We don’t know until the breakthrough occurs.

When AIs start hacking, everything will change. They won’t be constrained in the same ways, or have the same limits, as people. They’ll change hacking’s speed, scale, and scope, at rates and magnitudes we’re not ready for. AI text generation bots, for example, will be replicated in the millions across social media. They will be able to engage on issues around the clock, sending billions of messages, and overwhelm any actual online discussions among humans. What we will see as boisterous political debate will be bots arguing with other bots. They’ll artificially influence what we think is normal, what we think others think.

The increasing scope of AI systems also makes hacks more dangerous. AIs are already making important decisions about our lives, decisions we used to believe were the exclusive purview of humans: Who gets parole, receives bank loans, gets into college, or gets a job. As AI systems get more capable, society will cede more—and more important—decisions to them. Hacks of these systems will become more damaging.

What if you fed an AI the entire US tax code? Or, in the case of a multinational corporation, the entire world’s tax codes? Will it figure out, without being told, that it’s smart to incorporate in Delaware and register your ship in Panama? How many loopholes will it find that we don’t already know about? Dozens? Thousands? We have no idea.

While we have societal systems that deal with hacks, those were developed when hackers were humans, and reflect human speed, scale, and scope. The IRS cannot deal with dozens—let alone thousands—of newly discovered tax loopholes. An AI that discovers unanticipated but legal hacks of financial systems could upend our markets faster than we could recover.

As I discuss in my report, while hacks can be used by attackers to exploit systems, they can also be used by defenders to patch and secure systems. So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed. Of course, the transition period is dangerous because of all the legacy rules that will be hacked. There, our solution has to be resilience.

We need to build resilient governing structures that can quickly and effectively respond to the hacks. It won’t do any good if it takes years to update the tax code, or if a legislative hack becomes so entrenched that it can’t be patched for political reasons. This is a hard problem of modern governance. It also isn’t a substantially different problem than building governing structures that can operate at the speed and complexity of the information age.

What I’ve been describing is the interplay between human and computer systems, and the risks inherent when computers start playing the part of humans. This, too, is a more general problem than AI hackers. It’s also one that technologists and futurists are writing about. And while it’s easy to let technology lead us into the future, we’re much better off if we as a society decide what technology’s role in our future should be.

This is all something we need to figure out now, before these AIs come online and start hacking our world.

This essay previously appeared on Wired.com

Posted on April 26, 2021 at 6:06 AM • 38 Comments

Comments

Léon April 26, 2021 6:35 AM

No. AI is a misleading term. AI systems are still computers we need to program. Not with program code this time, but with a model that we need to train.

For an AI to recognise a red car, we need to train the model with images of cars, all sizes and shapes and all colours. And then tell the AI system after each “recognition” whether it was indeed a car and whether it was indeed red or not.
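In code, that supervised loop might look something like this minimal sketch (the colour features, the toy data, and the nearest-neighbour rule are all invented for illustration):

```python
# Toy "red car" recognizer: it only knows what the labelled examples tell it.
import math

training = [                       # (mean_r, mean_g, mean_b), human-supplied label
    ((0.9, 0.1, 0.1), "red car"),
    ((0.8, 0.2, 0.2), "red car"),
    ((0.1, 0.1, 0.9), "other"),
    ((0.2, 0.8, 0.2), "other"),
]

def predict(colour):               # 1-nearest-neighbour on colour alone
    return min(training, key=lambda ex: math.dist(ex[0], colour))[1]

print(predict((0.85, 0.15, 0.1)))  # -> "red car"
print(predict((0.1, 0.9, 0.2)))    # -> "other"
```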

By the way: if one could design an AI for detecting security vulnerabilities, one could also use it in software engineering.

Petre Peter April 26, 2021 7:02 AM

How much of our world should be governed by technology? How do we punish an AI if it turns to the dark side? This is a new form of tyranny that starts with “the computer won’t let me do it”. An excuse to not accommodate exceptions or anomalies. We will have no choice but to think and act the same, which is our biggest threat to liberty. The AI doesn’t even have to do real computation. It can just be a tyrant using the ‘say’ command.

metaschima April 26, 2021 7:18 AM

@Léon

I agree, what most people call AI is a neural network (real or virtual) that is trained on a dataset and with time gains the ability to solve a problem. Although it is able to learn, we are still teaching it. True AI would be a system that learns on its own and starts making decisions on its own. So far true AI does not exist, but that may change in the near future. True AI is indeed the making of many a sci-fi horror.

WhiskersInMenlo April 26, 2021 8:34 AM

It is necessary to add compilers and language specifications to this discussion.

Compilers can alter the program in ways most programmers and yes security professionals might not expect.

Add hardware: Hardware prefetches data often before authorization code evaluates as true or false.

Systems are built to multiple organizational specifications and commonly fail at these organizational boundaries.

Yes, test systems will be increasingly successful at discovering known issues more quickly in real code.

Will AI systems see all the source code, or only the binary, or only public and private interfaces? Will the AI be rich in stupid user tricks, and ambiguous instruction lore?

Jimbo April 26, 2021 8:58 AM

The tax system perhaps is not a good example. There are no loopholes – everything in the internal revenue code is put there intentionally, passed by Congress after lengthy debates and signed into law by the president.

This quote from Judge Learned Hand sums it up:
Anyone may arrange his affairs so that his taxes shall be as low as possible; he is not bound to choose that pattern which best pays the treasury. There is not even a patriotic duty to increase one’s taxes. Over and over again the Courts have said that there is nothing sinister in so arranging affairs as to keep taxes as low as possible. Everyone does it, rich and poor alike and all do right, for nobody owes any public duty to pay more than the law demands.

CdrJameson April 26, 2021 9:06 AM

Hello, I’m a game developer. We use AIs to try and break the games, find holes in the rules and pathological states. They’re very good at it.

It’s not a massively widespread practice as far as I know, but it’s a fairly obvious one (just an extension of random fuzz testing) and it’s pretty cheap.
It’s not particularly new either – both the 1981 and 1982 Traveller Trillion Credit Squadron championships were won by AIs that found holes in the rules.

And these are entirely formally defined mathematical systems, so they can be tweaked to be fixed, but good luck with that one in the real world.

It’s interesting to see AI advances being tried out in games. It’s also a piece of sleight-of-hand to then imply you can generalise that to the real world. Just the Ludic Fallacy in action kids! And that already crashed the economy once, in 2008.

Hedo April 26, 2021 9:14 AM

Remember September 11th 2001 (or the WW2 Japanese Kamikaze)? Airplanes were not designed to be intentionally flown into buildings yet people did use them for that terrible purpose. Why do so many think that we/us humans are actually doing/bringing anything to the table when it comes to planet Earth? Seriously. Just look at us. Greed. Hate. Envy. Jealousy. Narcissist. Angry. Mad. Malicious. Too passive. Too ignorant. There is a “Pile” or a “Bucket” for each and every one of us. We do not deserve this beautiful planet.

pete April 26, 2021 9:15 AM

There has been a fair bit of sci-fi written about this – most notably James P. Hogan’s “The Two Faces of Tomorrow” which opens with an AI making an efficient decision about removing an obstacle to a construction project on the Moon that is not a good decision for the survey team that had just reported the obstacle.

Fred Fubar April 26, 2021 10:01 AM

I had to tell a friend that it’s incredibly unfair he can read “The Futurological Congress” and Tichy’s reports on same in the original language after reading this essay.

Schneier’s reality vs. Lem’s satire. What a choice!

willmore April 26, 2021 10:05 AM

Ahh, good old Kuang Eleven.

@Léon, you’re thinking of machine learning. There are other types of AI programs that work differently.

Vesselin Bontchev April 26, 2021 10:45 AM

Bruce, AI is a vast field (much larger than infosec) and you’re confusing different kinds of AI.

The kind of AI that can’t explain its decisions is ML (machine learning). It will never be able to explain its decisions. You give a neural network a bunch of positive and negative examples and it reconfigures the weights of the neurons, starting to recognize patterns that humans often don’t see, and produces an answer based on these patterns. It’s not sentient, it doesn’t reason, it can’t explain how it has reached a conclusion. It just finds hidden patterns.

Another kind of AI is expert systems. These most definitely can explain their reasoning. They have built-in abilities to answer the questions “Why?” (“why exactly was this conclusion reached?”) and “How?” (“what particular rules triggered that led to this conclusion?”).

It’s just that ML is easy – that’s why it’s so widespread nowadays. You cobble together a neural network, you throw a bunch of data at it, it starts producing results. The only tricky part is deciding what properties of the data to feed it.

Expert systems require a lot of manual work. You have to extract the knowledge from human experts (who don’t have it systematized and often rely on intuition or half-forgotten experience) and formalize it as a humongous set of “if-then-else” rules. It can take years to build one – and you might still fail, if you don’t get the right experts or don’t have the talent to extract their knowledge.
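For contrast, a toy forward-chaining rule engine of the kind described above can at least report which rules fired on the way to a conclusion. The rules and facts here are invented; a real expert system would have thousands of them.

```python
# Tiny rule engine: the trace is the "How?" explanation an expert system can give.
RULES = [
    ("R1", {"fever", "cough"}, "flu-like illness"),
    ("R2", {"flu-like illness", "short of breath"}, "refer to doctor"),
]

def infer(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:                         # forward chaining until nothing new fires
        changed = False
        for name, conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: {sorted(conditions)} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"fever", "cough", "short of breath"})
print(*trace, sep="\n")                    # every rule that fired, in order
```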

JohnnyS April 26, 2021 11:09 AM

“So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed.”

Modern software is barely checked for insecurities now before shipping: Why would that change? Without responsibility assigned for software security, there is no software security.

I expect that any AI applied to checking software for defensive purposes will at best be an excuse to run some “tool” against the software, so anyone responsible can then say “I checked it for problems: The tool said it was OK!” With that excuse they achieve CYA, whether the “tool” is any good or not, so there is little incentive to spend money on a good “tool”. It remains nothing more than theater.

J April 26, 2021 11:37 AM

In Germany we not only have cheating car manufacturers (and not only VW, the others were just more careful to include some plausible deniability). We also have banks who defraud the state by claiming multiple tax refunds for a tax paid only once: https://en.wikipedia.org/wiki/CumEx-Files
AFAIK no AIs were involved.

mark April 26, 2021 11:48 AM

About the robot vacuum cleaner – that’s the same problem, except this is a failure of the programmers to consider what the box they’re working in is.

Decades ago, Radio Shack had a little robot. It would run into something, look around, and go at ->right angles<- to whatever it ran into. Why is it that no robot vacuum plots the room it’s in, so as to not go over a spot more than once, and cover the entire room, instead of only random parts?

Impossibly Stupid April 26, 2021 12:21 PM

AIs don’t solve problems like humans do.

As others have noted, Bruce, you do a great disservice when you conflate the current state of popular machine learning algorithms with the whole of artificial intelligence. The crux of the problem is that ML doesn’t solve problems at all; it merely satisfies constraints. You’ve written about the security flaws that that leads to countless times (usually found via adversarial machine learning).

Separately, AIs can engage in something called reward hacking.

That is not something limited to machines. Humans have successfully gamed societal systems for millennia. Yes, ML offers new tools in our toolbox (e.g., ways to automate said exploits at scale), but it doesn’t fundamentally alter the fact that we oftentimes intentionally design flawed systems.

Genies are very precise about the wording of wishes, and can be maliciously pedantic.

I think the introduction of magical thinking is very apt here. I mean that in a big way. My academic background is in AI, but I also had quite a bit of interest in magic tricks when I was younger (with James Randi being a nice intersection with pseudoscientific thinking). Modern machine learning is a trick, not genuine, “strong” AI. We need to stop fooling ourselves into thinking it is as good, or as bad, as we imagine it to be.

This wasn’t AI — human engineers programmed a regular computer to cheat — but it illustrates the problem.

Sadly, it only illustrates a first-order problem. As I said above, we humans not only think of ways to break the rules, we actively engage in breaking the systems that impose the rules. ML will have nothing on us until it is able to hire lobbyists and bribe corrupt politicians to create rules that are inherently unfair. The inequity is already operating at the meta level; these ML systems are just fighting over table scraps.

My guess is that this isn’t very far off, and that the result will be all sorts of novel hacks.

Nope. The result is simply going to be the self-serving “hacks” that were intentionally put there by monied interests, but now you offer the possibility of them being exploited by everyone (or at least those who have access to the technology, in a very “Click Here to Kill Everybody” kind of way).

They will be able to engage on issues around the clock, sending billions of messages, and overwhelm any actual online discussions among humans.

Already happening. No ML needed, because humans are quite happy to make fools of themselves on social media. Even ELIZA-level unsophisticated chatbots are enough to rile up the masses. We’re rapidly approaching a tipping point where either a large segment of the population is lost to irrational thinking, or they realize how damaging social media is and just walk away.

An AI that discovers unanticipated but legal hacks of financial systems could upend our markets faster than we could recover.

Again, humans are doing this now; don’t fool yourself into thinking you only need to pay attention when an “AI” is involved. The whole Robinhood/GameStop attack that recently happened is a prime example. Everyone dead set on screwing everyone else on the trading floor, but the people that make the rules still always win in the end.

So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed.

Unfortunately, as I said, those flaws are intentional. Bought and paid for by people who do not want them fixed. If your “AI” favors their defense, it only means the inequity in the system will get increasingly more polarizing, and it will only lead to solutions that are . . . non-digital.

Jon April 26, 2021 1:25 PM

Personally, I’m reminded of the ‘Go’ playing computer program that figured out how to always win: It would place a bead at some ridiculous location (the rules specified an infinite board) which caused all the other programs to expand their memory footprint – but they didn’t have enough memory, and so crashed, and the miscreant always won (the rules specifying that a program that crashed had lost).

Predictable in afterthought, of course… J.

Jeff April 26, 2021 3:36 PM

@mark. The recent versions of iRobot (Roomba) do plot the room and don’t do it randomly. Also, when complete, it sends me a diagram of what it covered.

Anders April 26, 2021 3:40 PM

@ALL

I think we have a first AI here.

hxxps://www.livescience.com/55164-russian-robot-escapes-lab-again.html

Humdee April 26, 2021 4:58 PM

What is the distinction between “the explainability problem” and “a woman’s intuition”? I’m only half joking. Maybe the problem is that our brains already act like AIs act, but we dismissed that powerful ability because it would not allow us to play the language game of the giving and taking of reasons. Maybe it is time for the Age of Enlightenment to die.

“The heart has reasons that reason does not know.” Intuition? Or AI?

Clive Robinson April 26, 2021 5:51 PM

@ Bruce, All,

Whatever you wish for, he [the Genie] will always be able to grant it in a way you wish he hadn’t.

That’s not the point behind Genie Fairy Tales, it’s actually all about hubris. That’s why the fairy tales are almost always about the “third wish”, the one that undoes the first two wishes…

Such Fairy Tales are, if you prefer, about “listening to and following advice and not thinking you are more clever than anyone else”. That is in more modern parlance “Learning from the past mistakes of others”, something ICTsec repeatedly fails to do…

The thing that such Fairy Tales always get wrong is that the first two wishes are “foolish” but the third is always “wise”.

Oddly though, we tend to see “foolish” wishes in the likes of finance, where people see only the “up-side” of a risk, hardly ever the “down-side” of the risk. It kind of tells us something that we mostly miss. Which is that such people, like gamblers, do not understand probability, thus they really should not be trusted as “rational actors”.

The only people that win consistently at big-win-risk are those not actually taking the risk, for one of two basic reasons,

1, The risk is not theirs, but someone else’s (ie investor money), they just take a percentage of the transaction.

2, The game is rigged in some way, and they have knowledge others do not (ie they have probably rigged the game).

One of the things that the most visible form of “AI” to the public is very sensitive to is “training data”. You can if you wish have many sets of “seed training data” into which you embed prejudice. Thus with care you can “rig the game” so you have a desired prejudicial outcome (ie the second point above). But as those using such “AI Tools” have “no skin in the game”, as they in effect get paid and promoted by the rigged system, along with an “Only Following Orders” excuse built in, they have no reason to raise concerns (ie first point above).

Thus these “AI Tools” that get pushed towards the public have both “big win” cheats built in, thus they will get pushed and pushed hard, and stopping certain people pushing them is going to be hard, very hard, because they win, and those who lose have little or no say…

Ismar April 26, 2021 6:23 PM

In short, we have created something we can neither understand nor control and yet are happy to give it more and more room for controlling the way we live.

Rachel April 26, 2021 6:46 PM

The article commences by [mostly] omitting the original definition and application of hacking 🙂 defaulting instead to the more recent and popularly ascribed definition. Oh well, I suppose that’s okay.
I would prefer the original and true definition be better appreciated because we as a collective would benefit from applying this perspective in our attitudes to our life and our societal responses.

Naval Ravikant gave an interview with Joe Rogan not long ago.
Brilliant guy, although I disagreed with a number of his points

( mostly the ridiculous Silicon Valley meme-ejaculations, for example ‘Uber fixed something that was broken and the whole world will run as a gig economy in the future’. Yes he actually said that )

And I class the AI meme alongside Uber, everything Musk, and a few others as being transgressive drivel killing our world.
I’d like to see a world with no AI. That’s what the argument should be. No AI.

Anyway Naval did say that AI was nowhere near becoming anything even vaguely approximating sentience in our lifetimes or beyond our lifetimes. He said, as someone familiar with the work personally, that contrary to hype the development was terribly simplistic. On the level of: data in, data out.

He has written a ton of essays and pieces, some of which address the scope, or lack thereof, of AI, and there’s quite a lot on science and quantum stuff. (And money; he’s quite fond of and good at ‘how to make money’ and explaining what he has learnt.)

http://www.nav.al

Freezing_in_Brazil April 26, 2021 7:12 PM

@ metaschima

True AI would be a system that learns on its own and starts making decisions on its own. So far true AI does not exist, but that may change in the near future

Agreed. The usual neural network model, though successful in limited tasks, has reached its limits, imo. Trying to reach AGI through the current ML schemes has an almost cargo-cultish quality. I have entertained the idea that evolutionary [as in Darwin] features would have to be introduced into the idea of neural networks. What is missing:

  1. Add sensors to the system (5 senses and more), and give it means of expression [printers (2D, 3D), monitors, etc.]
  2. Release the system from specific tasks (e.g. character recognition); the machine is basically an idle entity, free in its neural activity to absorb and correlate information. Leave it free to form its own patterns. Let it recognize reality itself, through the senses, in a recursive way, moment by moment.
  3. Refine activation functions and thresholds, incorporating QM features.
  4. Adopt more general algorithms [or, we might find that true intelligence does not work upon algorithms, and cannot be reproduced programmatically]

Intelligence, in nature, and especially in Homo sapiens, derives from the integration of information originating from the senses into the neural pattern. It seems unlikely that it is possible to reach intelligence without the corresponding participation of the senses in the data processing. Hence, I would expect AGI to emerge in a Neuroscience setting, instead of a computer/media lab.

Regards

name.withheld.for.obvious.reasons April 26, 2021 7:35 PM

Wonder if the site, Schneier’s Blog, is a test in the making?

Occam’s razor seems to apply.

name.withheld.for.obvious.reasons April 26, 2021 7:44 PM

Oh, wanted to mention that it appears the U.S. federal legislative system has enjoyed a hack, speaking of different systems being hacked for purposes other than control of hardware (unless of course you’re talking about human beings as hardware).

Back during the IAA bill before the house (10 Dec. 2014), files and text were changing on the congressional library site respecting the bill HR4681. Multiple versions of the same bill, different text with different sections rewritten appeared prior to the vote on the bill(s). It was documented here but never got traction. Both Nick P and myself confirmed independently. What we didn’t know was why and for what purpose.

It is available at the Squid for the second week of Dec in 2014 if memory serves.

Rachel April 27, 2021 1:05 AM

name.withheld.for.obvious.reasons
Hallo and nice to see your name, hope this finds you well

I tried a search for HR4681

There are two threads, both with comments by yourself; I can’t see much else

https://www.schneier.com/blog/archives/2015/02/understanding_n.html

https://www.schneier.com/blog/archives/2015/05/un_report_on_th.html

name.withheld.for.obvious.reasons • May 30, 2015 12:47 PM

@ 65535

I find it interesting that these various decisions and disclosures are occurring on the eve of the second vote for re-authorization of Section 215 of a Patriot Act…

There is little doubt that the conspiracy to control the political environment would include the management of “information” and “data” related to illegal government(s) activity. Just before the vote on HR 4681 last year, the text of the bill (summary version) was modified (the official legislative text published on house.gov) changing the controversial language in section 308 by replacing it with the language from section 310 thus rendering the bill harmless. That moment spoke volumes–it received no press whatsoever.

What bothers me about the DoJ report is twofold; first, the general tone of the report is similar to the FISA court report (October 2011) that chastised the NSA for its belligerent attitude (to the level of illegal activity), second is the section of the report that shows the “programs” for business records and reveals (by way of redaction) five other programs. It can be assumed that these business records are the ones enumerated in the FAA and section 702.

Denton Scratch April 27, 2021 10:23 AM

“It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.”

AI that is based entirely on Big Data is bogus.

I used to have high expectations of expert systems, that can explain themselves. Such systems have to be put together by human experts; it’s hard work.

But AI researchers seem to have abandoned work on expert systems, in favour of these opaque black box systems. I guess they’re cheaper, but they’re also more dangerous. If AI researchers had focused on expert systems during the last 40 years, we might now have expert systems that understand law, software, health and medicine, and goodness knows what else.

Perhaps a combination of Big Data and expert systems would have a chance of providing an AI that can explain itself. But we wasted 40 years. It’s a crying shame.

Winter April 27, 2021 11:41 AM

We already have centuries of experience with “AI” hacking society and the human population. They are called Publicly Traded Companies. They learn by incentives to hack society in trial-and-error ways. Their outcome is genie-like, with more jobs and profit by poisoning the neighborhood (Minamata Bay, Bhopal) and calculating that paying out damages is cheaper than preventing horrible deaths.
http://users.wfu.edu/palmitar/Law&Valuation/Papers/1999/Leggett-pinto.html

Internally, corporations act like large “neural networks” of individuals maximizing their personal benefits in a network that maximizes profit at any cost. We think we can explain the outcomes as greedy persons and poisonous incentives, but that is just fooling ourselves.

Impossibly Stupid April 27, 2021 12:06 PM

@Denton Scratch

AI that is based entirely on Big Data is bogus.

I think that’s entirely the wrong perspective to have. Intelligence actually has very little to do with data. It can’t, because we all start with zero knowledge about the world. Instead, it’s more tied to the question of epistemology.

Take Wikipedia as a prime example. It is a great resource of data about humanity that would benefit any intelligence, be it machine, alien, or fellow humans. But as a big database, even though it has been curated with intelligence, it is not itself intelligent.

I guess they’re cheaper, but they’re also more dangerous.

It’s not that they’re cheaper so much as it has become the case that new algorithms have made training them easier. Fundamentally, the resulting neural networks aren’t doing anything they couldn’t have been doing 30 years ago (neglecting hardware advances). They’re also no more or less dangerous; those problems are firmly in the hands of humanity.

Perhaps a combination of Big Data and expert systems would have a chance of providing an AI that can explain itself.

Nope. Even the expert systems you recall so fondly weren’t on the path to AI. The fundamental problem is they still relied on human experts to curate their knowledge; they couldn’t become experts on their own. Until the machine can tackle the epistemological problem itself, we’re all just toying around with “weak AI”.

Big Data simply does not matter. When it comes to recognizing/explaining what a STOP sign is (or whatever), our intelligence doesn’t need millions of existing example photos from the built world, or a thousand, or even one. With just a description, we can imagine and create things that never existed before. That’s why these “AI hacks” Bruce describes don’t concern me nearly as much as the thoughtless self-interest of the humans behind them.

Winter April 27, 2021 12:41 PM

A better way to think of AI/Machine Learning is as Advanced Statistics on impossibly big piles of data.

AI can implement a function, any function, to make a decision, or rather, a classification, based on a large set of input data.

It is neither intelligent nor stupid, it is just statistics. If red berries are poisonous and blue berries are not, 99.9999% of the time, then the decision “blue berry” → “edible” is easy. If you take a picture and identify the person on it, that is less transparent, but uses the same idea.
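The berry example, as code (the counts are invented): the “decision” is nothing more than a conditional frequency computed over past observations.

```python
# "Just statistics": classify by the observed frequency of each outcome per colour.
observations = (
    [("red", "poisonous")] * 999 + [("red", "edible")] * 1 +
    [("blue", "edible")] * 995 + [("blue", "poisonous")] * 5
)

def p(label, colour):
    matching = [o for o in observations if o[0] == colour]
    return sum(1 for o in matching if o[1] == label) / len(matching)

print("P(edible | blue) =", p("edible", "blue"))   # 0.995
print("P(edible | red)  =", p("edible", "red"))    # 0.001
```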

MikeA April 27, 2021 1:14 PM

In re chess programs and the outer context..

Back in the 1970s, one of the CDC6400 systems in the U.C. Berkeley computer center would frequently be used for “experimentation”, e.g. OS research or evaluation of third-party software. For a while, it ran a time-sharing system from CDC called KRONOS. Of course, since it also had a computer chess program, folks would find a reason to gain access.

One such chess player stumbled across an interesting anomaly. A member of the systems group was in the terminal room when said chess-player got the computer in a fork, and then cursed their luck that the operators had apparently selected that moment to kick them off the system (presumably for playing games during “work hours”). The systems group member looked at the last few lines, ending in something like “JOB 1234 – Operator Drop”. Suspicious, they checked the “DAYFILE” (A log of operator inputs and system status) only to find that the chess program had apparently asked the operator to drop it.

Essentially, rather than simply exiting, it had asked the human operator to stop it. That created a plausible explanation for what had happened, while in fact it apparently figured out its situation and decided to kick over the table.

I am absolutely NOT claiming the program devised this strategy on its own. It was clearly a human programmer adding a long known tactic to computer chess.

Meanwhile, “Expert Systems”? They face an uphill battle in a society which relishes non-expertise. Now that the phrase “OK, Boomer” has been invented, I expect that to accelerate.

Winter April 27, 2021 1:50 PM

@MikeA
“They face an uphill battle in a society which relishes non-expertise. Now that the phrase “OK, Boomer” has been invented, I expect that to accelerate.”

But it is pretty easy to let an expert system tell you what you want to hear.

https://xkcd.com/2451/

vas pup April 27, 2021 4:37 PM

Tag – hacking
Opinion: The FBI just got permission to break into private computers without consent so it can fight hackers

https://www.marketwatch.com/story/the-fbi-just-got-permission-to-break-into-private-computers-without-consent-so-it-can-fight-hackers-11619449844

“The FBI has the authority right now to access privately owned computers without their owners’ knowledge or consent, and to delete software. It’s part of a government effort to contain the continuing attacks on corporate networks running Microsoft Exchange software, and it’s an unprecedented intrusion that’s raising legal questions about just how far the government can go.

==>On April 9, the United States District Court for the Southern District of Texas approved a search warrant allowing the U.S. Department of Justice to carry out the operation.

The software the FBI is deleting is malicious code installed by hackers to take control of a victim’s computer. Hackers have used the code to access vast amounts of private email messages and to launch ransomware attacks. The authority the Justice Department relied on and the way the FBI carried out the operation set important precedents. They also raise questions about the power of courts to regulate cybersecurity without the consent of the owners of the targeted computers.

As a cyber security scholar, I have studied this type of cybersecurity, dubbed active defense, and how the public and private sectors have relied on each other for cybersecurity for years.

==>What makes this case unique is both the scope of the FBI’s actions to remove the web shells and the unprecedented intrusion into privately owned computers without the owners’ consent. The FBI undertook the operation without consent because of the large number of unprotected systems throughout U.S. networks and the urgency of the threat.

The action demonstrates the Justice Department’s commitment to using “all of our legal tools,” Assistant Attorney General John Demers said in a statement.

The law and the courts

The Computer Fraud and Abuse Act generally makes it illegal to access a computer without authorization. This law, though, does not apply to the government.

The FBI has the power to remove malicious code from private computers without permission thanks to a change in 2016 to Rule 41 of the Federal Rules of Criminal Procedure. This revision was designed in part to enable the U.S. government to more easily battle botnets and aid other cybercrime investigations in situations where the perpetrators’ locations remained unknown. It permits the FBI to access computers outside the jurisdiction of a search warrant.

This action highlights the precedent, and power, of courts becoming de facto cybersecurity regulators that can empower the Department of Justice to clean up large-scale deployments of malicious code of the kind seen in the Exchange hack. In 2017, for example, the FBI made use of the expanded Rule 41 to take down a global botnet that harvested victims information and used their computers to send spam emails.

Important legal issues remain unresolved with the FBI’s current operation. One is the question of liability.”

Read the whole article!!!

name.withheld.for.obvious.reasons April 27, 2021 5:46 PM

@ Rachel
Thank you for the sentiment. I am okay, just befuddled by the lack of concern and honor to founding principles (sans the killing of Native Americans, the sin of slavery and what we’ve done to our brothers and sisters from Africa, and the fact that the U.S. Civil War has not ended). Other than that, in good health and return the statement in your direction.

This is where it begins; there are several responses to the thread, and Nick P and myself document the gaming of Congress in that archive.

https://www.schneier.com/blog/archives/2014/12/friday_squid_bl_455.html/#comment-236459

I can see from your writings that you have been following this more closely than most. Very few people have understood the condemnation that is in the October 11 Report by Judge Bates. It literally confirms that the NSA is out of control and that is reflected in 65535’s comments too.

farm co worker April 27, 2021 6:18 PM

Trust is like an eraser. It gets smaller as you use it.

“follow the money.” Watergate saying
“follow the artificial money.” upgrade your grey computer!
Make God laugh, tell Him your plan.

Art May 18, 2021 9:39 AM

The problem of AI as a powerful tool is often overstated. Let us start by recognizing that the hype comes from deep neural networks and their success in playing games and doing pattern recognition, which they do well. But, as the history of AI in general has shown, it is one thing to succeed in one domain and another to generalize those skills and to transfer them to other domains. But even putting aside this issue, let us start by doing an exercise in trying to design such a deep learning system with its additional support modules.

Requirements
1) To teach a deep learning algorithm we need to encode the data as vectors. Such vectors would need to be in the form of paths in a graph for which we have.
2) We would need to encode the notion of a program sequence as vectors. The challenge here is to select the appropriate data set to represent a coherent subset of the universe of programming. While people focus on throwing big amounts of data at the problem, it is counterproductive to do so in this instance. Selecting appropriate samples is critical.
3) At most, what we achieve with step 2 is to teach a deep learning algorithm how to program.
4) We would need to teach the algorithm how to detect vulnerabilities. This would require providing samples of common flaws tagged as such in the data set from step 2.
5) What we achieve in step 4 is that the deep learning algorithm will be an expert in recognizing, at most, similar attacks, but it does not imply that it will detect new attacks.
6) The deep learning algorithm is nothing but an input/output system that needs a way to implement what it discovered. The challenge here is to automate a way to build a hacking module automatically, without previous information. Either we have to wait for step 2 to become viable, or we are limited to hooking the deep learning system into a hacking framework with pre-built modules.

In sum, while it can be worrisome that the detection of known vulnerabilities can be automated, it is not the ultimate hacking machine. A deep learning algorithm is limited by the data it is fed and by how it is interconnected to enabling pieces of programmed code.
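As a rough sketch of the pipeline outlined in the steps above (the snippets, labels, and token-overlap "model" are all invented): the system can flag code that resembles the tagged flaws, and nothing more.

```python
# Toy flaw detector: code becomes a token-count vector; classification is just
# similarity to the labelled training snippets, so only *similar* flaws are found.
from collections import Counter

labelled = [
    ("strcpy(buf, user_input);",              "vulnerable"),
    ("gets(line);",                           "vulnerable"),
    ("strncpy(buf, user_input, sizeof buf);", "safe"),
    ("fgets(line, sizeof line, stdin);",      "safe"),
]

def vectorize(code):
    return Counter(code.replace("(", " ").replace(")", " ")
                       .replace(",", " ").replace(";", " ").split())

def similarity(a, b):                  # shared token counts
    return sum((a & b).values())

def classify(code):
    vec = vectorize(code)
    return max(labelled, key=lambda ex: similarity(vec, vectorize(ex[0])))[1]

print(classify("strcpy(dest, argv[1]);"))  # -> "vulnerable" (resembles strcpy sample)
print(classify("fgets(buf, 64, stdin);"))  # -> "safe"
```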

Martin May 21, 2021 5:47 AM

If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI; it doesn’t understand the abstract concept of cheating.

This “literal understanding” reminds me of my cat. It understands very well that it must not jump on the kitchen table when I’m present. Because I never asked my cat to leave the table when I wasn’t present, for my cat there is no rule saying “table forbidden when tin-opener (= me) out of sight”, so there is no misdemeanour.
