Integrity and Availability Threats

Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated vote counts from electronic voting machines, remote murder through hacked medical devices, and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. But it’s actually more complicated than that.

What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing and the pervasiveness that comes from these systems being always on. Together this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.

It’s one thing if your smart door lock can be eavesdropped on to reveal who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door or to allow a burglar to open it.

In separate testimonies before different House and Senate committees last year, both the Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat and believe that we are vulnerable to attack.

And once the attacks start doing real damage—once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day—there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.

I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

This essay previously appeared on

Posted on January 29, 2016 at 7:29 AM • 44 Comments


Keith Glass January 29, 2016 7:54 AM

The question remains: why is it a GOVERNMENT responsibility to research driverless car safety ? Given the long and well-documented history of the Federal government rarely getting ANYTHING right, especially the first time, would it not be better placed in the hands of the software and hardware vendors for those driverless cars ?

Certainly, self-interest and properly allocated liability would provide more of an incentive for real security baked into the systems than bureaucratic fiat would.

Alicila January 29, 2016 9:15 AM

We’ve been thinking of such things in the industrial control system corner for quite a while, often creating islands with no external connectivity to limit exposure. From an information assurance perspective, the same basic devices do not currently scale well when massively connected. The economics of assurance certainly come into play. For instance, one might be more than willing to pay more for an internal defibrillator that addresses availability concerns, but perhaps much less so for a home thermostat.

Four billion U.S. dollars toward driver-less cars? I have a hard time explaining that to people who don’t even have clean drinking water.

paul January 29, 2016 9:23 AM

The worst of all possible worlds is that governments — especially law enforcement — will see hacks of smart and IoT-enabled physical stuff and think “Cool! We want to be able to do that.”

Because sure, it would be great for a warrant-equipped officer to be able to pop open any lock, or tell a criminal’s car to pull over, or jam the verification on a bank robber’s smart gun…

Dr. I. Needtob Athe January 29, 2016 9:38 AM

“Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.”

Who would have thought The Day The Earth Stood Still might actually become a reality? When I first saw that classic 1951 movie, I thought it was rather far-fetched and unscientific to think a technology could exist that could stop all the cars in the world.

ianf January 29, 2016 9:45 AM

@ Bruce asks How about spending some of that [$4B federal subsidy for engineering driverless cars] on the integrity and availability threats from that and similar technologies?

Advancing the R&D of a specific, if potentially very disruptive, driverless-car technology [think of the most common American profession, that of a truck driver, and what SOCIAL THREAT it WILL pose to their breadwinning] is at least a well-defined, concrete, and easy-to-understand target. Shifting a sizable part of that money toward prevention of as-yet-mainly-theoretical threats to the public infrastructure and/or security of private domains will be much harder to argue for, simply because no one can envision these threats fully. Meanwhile, the various manufacturers’ lobbying will argue forcefully for its lion’s share of the $CAKE.

It’s like with global warming – there are people who do not believe it is happening (more so as they can barely cope with a measly 5-day blizzard); others who actually welcome it; and still others who see no point in doing much about it until it actually arrives on their doorstep—by which time they’ll long be dead anyway (=me, too).

    Clearly, the modern society as such (not only the USA) is proving incapable of dealing with truly long-range MEGA problems, and I count the threat fallouts of the shimmering idiot Internet of Things among these, especially when growth of capital continues to be the sole metric of progress.

A case in point to demo the extent of our inability to project ahead, if not of humanity’s intrinsic shortsightedness: in 1989, Michael Moore shot a documentary, “Roger & Me,” that chronicled the sudden decline and pauperisation of the town of Flint, Michigan after its major car-manufacturer employer shut down the plant. But would any of us then have envisioned that less than two decades later the same accelerating trend of urban decay, halving of population, lower tax base, etc., would affect the nearby onetime capital of the world car industry and of catchy Motown tunes, Detroit, Michigan?

    Nobody saw it coming. Now we know better, but are we learning from that how to avoid future civilizational regression in the name of the Almighty Dollar? Rhetorical question.

PS. I am told that researching causes of industrial decay (thus capital destruction) is a growth discipline for PhD studies in economics. The IoT will deliver much more of similar study-fodder.

boog January 29, 2016 9:50 AM

Is it bad that I’m less worried about hackers doing all the things in the first paragraph than I am about those things happening unintentionally due to buggy software?

JeffP January 29, 2016 10:03 AM

@Keith Glass: The invisible hand of the market never texts me back when I message about safety research. At least regarding automobiles, I get crash-test and emission studies from the government. Research at the software and hardware vendor named Volkswagen developed an automobile and marketing plan that tricked people into buying a product that got high mileage by lying about emissions. (I wonder if the retrofit will decrease mileage, thereby hitting the buyers in the wallet.)

To agree with you, there is room for A LOT of improvement in government regulation. To disagree with you, the track record of software and hardware vendors isn’t any better.

Martin January 29, 2016 10:04 AM

Terrorists can’t hurt anyone, it’s all overblown by the gov’t just to get more money.

Cyber attacks and reports of hacking are nothing more than scaremongering. All you need is a stronger password. And it helps to be really smart like me. Everybody else is stupid.

See, I only use Linux. And everything is open-sourced. That’s because I’m really smart and only use open-source software because it never has any problems. Of course, only really smart people like me use Linux and open-sourced applications.

Oh, and cloud computing is just a lot of hype. It’s just a bunch of computers. Anybody can do it. No big deal. They’re just trying to sell you stuff.

Sofa January 29, 2016 10:33 AM


In the post above you have: confidentiality, integrity,e and availability.

Need to take off the “e” after the integrity,


Sofa January 29, 2016 10:44 AM


That particular behavior of only using open source did not work out so well for Heartbleed or Android Stagefright, among others. Someone still must review the code and understand the ramifications of an intentionally weakened hash algorithm or other technical procedure. It helps, but it is by no means a panacea available only to the most intelligent.

Frank Wilhoit January 29, 2016 10:51 AM

It is a governmental responsibility because no other actor has any incentive to do it. Name an actor who, in your view, will have an in-built free-market incentive? No: each of them has been gradually made unaccountable; they don’t have to give a sh1t.

This is the worst thing about what Bruce is calling the World-Sized Web (what most other people are calling the Internet of Things): its architecture is based entirely upon the avoidance of accountability. Blame is sliced and diced so thinly, there are so many links in the chain and each link has so little visibility into any other, that it will be impossible to assign responsibility, or to impose consequence, for any particular failure. This is not by accident, it is by design.

Recently one has been seeing the fatuous argument that driverless cars will kill fewer people than human drivers. It doesn’t really matter whether that prediction ever comes true, partly because it is put forward dishonestly, partly because the real difference is between being able to assign responsibility (at least in principle) for human-caused accidents versus the total, structural impossibility of assigning responsibility for any harm caused by driverless technology. Driverless cars on every road == a Therac-25 in every doctor’s office.

ianf January 29, 2016 11:39 AM

@ Dr. I. Needtob Athe [destination missing]

Never mind the Hollywood-ish “The Day The Earth Stood Still” with its sabotaged urban gridlock and whatnot (a theme present in all modern disaster movies)…
we need only go back to the last three actual US Northeastern-seaboard/ New York City blackouts (1965, 1977, 2003) to get a glimpse of coming large-scale “disruptions,” each more severe than the last (while all the Brits have thus far managed is “28 Days Later,” the documentary movie of the havoc wrought by evolved not-your-momma’s-zombies).

As for Hollywood’s threats from the IoT targeting single individuals, nothing beats #s02ep10 of Showtime’s “Homeland.” There, on the strength of a single stolen unique serial number, the terrorist mastermind Nazir is able to wirelessly and remotely manipulate the pace of the VPOTUS’ pacemaker to kill him inside a Navy office. Never mind that neither the victim nor the pacemaker control unit(?) was physically connected to the Internet. But, hey, he was not a mastermind for nothing. And contrary to disbelief, once perfected, this method could actually be quite handy in future presidential and other office-intrigue/succession conflicts.

Martin January 29, 2016 12:00 PM

Is it the same thing with encryption? Everybody knows a good cipher is one that’s been around a long time. Doesn’t make any difference if anyone uses it or not, just so long as it’s been around a long time. After all, if someone did find a weakness they would be sure to tell everyone!

Now, here’s a good place to find strong ciphers. Look for some guys that wear the same clothes everyday and sleep in their car by the Charles River. I think they teach class near there.

I think one of those guys came up with a good idea to protect private keys. Yeah, just encrypt your keys and then hide that key! Wow, what a genius. I imagine you could even encrypt that key too!

But don’t forget, nobody really knows. I mean we know 2 + 2 = 4, right? But not with encryption. You can never be sure, even if you’re sure, you can’t be sure. Isn’t encryption wonderful!!

By the way, all that stuff about open source doesn’t matter if it’s a really big corporation full of other super smart guys since their code never has any bugs.

albert January 29, 2016 12:01 PM

“…It can be damaging, as in the Sony data theft…”. I don’t think it was all that damaging, unless you believe the BS Sony handed out. As for Ashley Madison, well, I’m not a moralist, so I’ll just say, karma’s a bitch. That may apply to Sony as well:) Hopefully, the OPM breach will not turn out to be serious; time will tell. You’d think that the IC would have their own separate database, at least for the most vulnerable agents.

Except for driverless cars, I can avoid most of the other IoTs. Who among you really needs Internet-connected refrigerators, door locks, baby monitors, alarm systems, HVAC systems, dolls, etc. etc. etc.? I don’t give a shit about elections; it’s all theater.
The folks most vulnerable to driverless cars will be pedestrians (you remember, those folks that walk around).

Now the gov’t wants to spend $4B of OUR money to research DCs? Let’s cut off corporate welfare completely. We could save billions of dollars. If we want safer cars, planes, trains, does anyone out there really think the makers are going to provide them? If so, then you’re extremely naive, or an idiot.

Gov’t regulation is ineffective only because the politicians are bought and paid for by the corporations. It is only because the gov’t holds their feet to the fire that we get a modicum of safety in the products we buy.

One thing we could do is reduce the number of cars on the road. We have about 250 million cars for a population of about 320M. Taking away the roughly 33% of non-drivers, that’s about 1.2 cars per driver. That’s crazy. Public mass transit is cheaper and more efficient.
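The back-of-the-envelope figure holds up; a quick sanity check (using the commenter’s rough numbers, which are assumptions here, not census data):

```python
cars = 250e6                       # rough US car count
population = 320e6                 # rough US population
drivers = population * (1 - 0.33)  # assume ~33% of people don't drive

# cars per driver, to one decimal place
print(round(cars / drivers, 1))  # → 1.2
```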

+1 for the Therac 25 reference. I’ll look for the first driverless car EULA/TOS to see if they define being hacked as an act of God, or better yet, terrorism. (We’re not responsible for terrorist acts, call 202-282-8000).
. .. . .. — ….

Martin January 29, 2016 12:34 PM

I wouldn’t worry about all this integrity stuff. I read somewhere that Congress was getting ready to start thinking about the possibility of forming a committee to discuss their options for financing a study.

And don’t worry about our nation’s security. As more and more responsibility falls on the Pentagon it can only make us safer. Don’t they only award contracts to those big corporations with bug free code? Then they don’t have so much paperwork. I guess if it doesn’t work out they can always go blow something up. I think that’s what they mean by “kinetic.”

Boy, I feel so much better now…with my Linux machine, all this cool open-source stuff, and just knowing the gov’t has my back. And now I have some terrific ciphers too that everyone thinks are good, until they’re not, because they’re not exactly sure, maybe. Maybe I should get certified!

ianf January 29, 2016 12:35 PM

@ albert […] I can avoid most of the IoTs. Who among you really needs Internet-connected refrigerators, door locks, baby monitors, alarm systems, HVAC systems, dolls, etc. etc. etc.? I don’t give a shit about elections; it’s all theater.

All dandy, except that when IoT idiocy becomes the manufacturing default, you won’t be able to escape it however much you try. There simply won’t be any unconnected units of anything. On the other hand, perhaps there will then emerge a new profession of freelance underground connected-system disconnecters and bypassers – just like the rogue unlicensed plumber in “Brazil” who, in order to find prospective customers, had to bug & intercept telephone calls to the Janitorial Services Dept.

[So in order to ensure having a non-IoT refrigerator of the current analog type even in the future, better stockpile a couple of such plus a box of replacement parts.]

Jared Jennings January 29, 2016 12:42 PM

On the “World-Sized Web,” the functionality and interoperability of my device are solely up to the manufacturer, subject to their shifting alliances. The only choice I get is at purchasing time. (That’s Feudal Security of Things).

Whatever you may say about the World-Wide Web or the W3C now, the Web’s architecture has openness at its core. I vote we call this thing a “World-Sized Fiefdom,” instead.

RadioStar January 29, 2016 1:58 PM

Excellent article. Some comments below, from quickly scanning over the thread, inline with the article contents and my responses:


Cory Doctorow writes about Nitesh Dhanjani’s book
“Abusing the Internet of Things: Blackouts, Freakouts, and Stakeouts”

I dream of a group of savvy individuals who take it upon themselves to evaluate IoT stuff and then quickly, loudly and incessantly name and shame the stuff that is shoddy.

That is exactly what we are doing. Computer security researchers. Entry to the field largely requires only that a person actually have the capacity to find meaningful security vulnerabilities (and generally be able to present those findings in a way that garners and sustains interest — albeit past a certain point, journalists and the rest of the industry support and amplify the real alarm bells).

It is a job, it is a hobby, and if you have the technical expertise and experience, you are certainly called to it. It is a field.

Much of the internet today relies on amateur work, and otherwise unpaid work.

Usually such security research work pays extremely well, albeit indirectly.

On these issues of field-testing :-) the “internet of things” (the most commonly used term for what Bruce is describing)… unfortunately, I would have to gauge that, just as development is only now truly emerging, so too is security research, and it is very much further behind than people may think from star-studded conference speeches.

One very real reason the headliners of these conferences are headliners is that so little actual work has been done.


I certainly see proof of that in doing this work (and having worked with some of these headliners, such as Charlie Miller and Josh Drake of last summer’s Black Hat fest)…

You see it in the countless unanswered questions you have when plowing through new work. 🙂

One very visible evidence of this I had just the past few weeks was in purchasing the ettus b200mini.

One thousand. Two hundred. Fifty. Results.

Why is this pertinent? Because this came out last fall. Works with Intel Edison, works with Raspberry Pi 2. Works with Linux. Works with Windows. Would certainly work with many embedded and miniature systems. Go and look and see for yourself why it would matter. It is an SDR which is extremely small and so portable, enabling countless possible security-testing avenues for the IoT. And the specs are through the roof.

HackRF, the more famous system, which is larger, does not have the same expanse of frequencies, and is certainly nowhere near as portable, returns 162,000 results.

rtl-sdr returns 439,000, and at only 20 dollars it is the main current entryway into many of these testing scenarios. It has been in play only since 2012. HackRF came about just a few years ago, albeit it is extremely good and half the cost of the b200mini.

Breaking that down ad hoc, you are certainly still talking about hundreds, not thousands, globally engaged in serious open security work. (Albeit “open” is a loose term there, as many of those will certainly have significant government relations.)

And that is considering how many different such projects there are for people to engage in that are not directly security research. There are nearly countless applications that engage every manner of interest.

Albeit, some of these will accidentally run into security scenarios.

And there are many, which have yet to be well stated.

From Bruce’s article, one issue brought up was hacking planes while being on a plane.

For twenty bucks – and maybe about forty more if you do not already have a usable system – you can get up and running, with extremely minimal technical experience, an airplane radar system that lets you plot specific flight information – plane location, altitude, identifying data – on Google Earth.

While this protocol is open and “in the air,” unencrypted, it is not yet mandatory in the United States, but will be by 2020. And it is mandatory just about everywhere else. So it is not going away.
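To make the “open and in the air” point concrete: the broadcasts in question are ADS-B frames, and pulling the basic fields out of one takes a few lines of code. A minimal sketch, assuming the standard published 112-bit frame layout (the bit offsets and the sample message below are the commonly documented ones; no SDR hardware is needed to run it):

```python
# 6-bit character set used for the callsign field; '#' marks unused codes.
CHARSET = "#ABCDEFGHIJKLMNOPQRSTUVWXYZ#####_###############0123456789######"

def parse_adsb(hex_msg):
    """Extract downlink format, ICAO address, and (if present) callsign
    from a raw 112-bit ADS-B frame given as a hex string."""
    bits = bin(int(hex_msg, 16))[2:].zfill(len(hex_msg) * 4)
    df = int(bits[0:5], 2)         # downlink format: 17 = extended squitter
    icao = hex_msg[2:8].upper()    # 24-bit airframe address, bytes 2-4
    tc = int(bits[32:37], 2)       # type code of the 56-bit ME field
    callsign = None
    if df == 17 and 1 <= tc <= 4:  # aircraft identification message
        # eight 6-bit characters follow the first byte of the ME field
        callsign = "".join(
            CHARSET[int(bits[40 + 6 * i : 46 + 6 * i], 2)] for i in range(8)
        ).strip("_#")
    return df, icao, callsign

print(parse_adsb("8D4840D6202CC371C32CE0576098"))  # (17, '4840D6', 'KLM1023')
```

That one unencrypted, unauthenticated frame identifies the airframe and its flight — which is exactly why spoofed or replayed frames are an integrity concern, not just a privacy one.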

Needless to say, this not only can provide physical attack data to physical ground (and air) attackers, but there are certainly many avenues of control of those planes available from the ground.

This control can range from deliberate shutdown and denial of service, to spoofing… from many avenues, from GPS attacks to air traffic control attacks, and so on.

As one example, DHS has had a significant problem with GPS attacks against their border control drones.

This, albeit sounding very scary – and it should – is merely one example of nearly countless ‘as yet poorly explored’ or ‘entirely unexplored’ attack avenues that go far beyond just planes.

You get into car control systems, too, as noted, and you get into medical devices. You get really into nearly just about everything, of course. Police and emergency services. Military. The cellular phone infrastructure. Home and building control. etc, etc, etc

Risk… communicating risk… properly planning for risk which certainly includes properly allocating resources for risk…

Core issue here.

It certainly does not singularly rely on amateur and professional, non-governmental security researchers. Governmental researchers are altogether another problem. Because of compartmentalization, they may see all manners of problems yet be unable to communicate these issues. Never even mind national boundaries.

Put another way: “electronic warfare” is certainly hot, in defense, and cyber warfare and espionage certainly remains and is only growing hotter. So, their value is generally to keep quiet, even if that means that known vulnerabilities are not addressed at the meaningful layers.

As a very good example of this: OPM and the OPM hack.

OPM interacts with every governmental organization which has classified workers, which are very many, even into such organizations as whatever the national ‘parks & recreation’ department is ( 🙂 )… not a few of these organizations certainly are extremely savvy and resourced for computer security defense… yet, which of these proactively engaged OPM or other layers of upper government to ensure their classified papers at OPM were properly secured?

And, of course, while this is a US example, this exact same manner – or set of scenarios – is echoed in every other nation on the planet. Not a few are far worse, at that.

So, the challenge of communication is enormous.

Unlike with corporations, further, even getting to “monetizing risk” can be meaningless. Government does not necessarily operate in the logical fashion dictated by monetary and shareholder demands. So, likewise, major decisions and projects are very often only able to be motivated by frivolous political issues ultimately driven by base instincts like fear.

It is a mountain. You can just climb it one step at a time. And ultimately, engage others to join, if at all possible.

V January 29, 2016 2:18 PM

@ Frank Wilhoit

It is a governmental responsibility because no other actor has any
incentive to do it. Name an actor who, in your view, will have an
in-built free-market incentive?

Auto insurance companies. Large enough to have clout but not so large that any one company has a monopoly… [cough]Google [cough]Amazon

Omri January 29, 2016 2:39 PM

Mr. Schneier, for once you’re wrong, at least when it comes to driverless cars, for a couple reasons:

Cars do have to communicate with each other to coordinate movement at intersections, and they can easily do a better job than humans. Imagine all-way stop signs replaced with a protocol based on Ethernet-style exponential backoff. When they do that, that is, announce their positions, velocities, and intentions to each other to coordinate interleaving through a junction, they don’t need to announce any GUIDs. So the privacy issues should only arise when one driverless car is used to surveil the passengers in another one.
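Omri’s Ethernet analogy can be sketched in a few lines. This is a toy simulation of my own, not any real V2V protocol: cars contend for one crossing slot per tick, and any that “collide” double their contention window and redraw, exactly as in CSMA/CD:

```python
import random

def backoff_intersection(n_cars, seed=1):
    """Toy CSMA-style arbitration: one car may cross per tick; colliding
    contenders double their window and redraw a random wait.  Returns the
    number of ticks until every car has crossed (deterministic per seed)."""
    rng = random.Random(seed)
    window = {c: 2 for c in range(n_cars)}   # per-car contention window
    timer = {c: rng.randrange(window[c]) for c in range(n_cars)}
    ticks = 0
    while timer:
        ticks += 1
        ready = [c for c, t in timer.items() if t == 0]
        if len(ready) == 1:                  # sole claimant crosses the junction
            del timer[ready[0]]
        else:                                # collision (or nobody ready yet)
            for c in ready:
                window[c] *= 2               # exponential backoff
                timer[c] = rng.randrange(1, window[c])
        for c in timer:                      # everyone still waiting counts down
            if timer[c] > 0:
                timer[c] -= 1
    return ticks

print(backoff_intersection(8))               # ticks for 8 cars to clear
```

The point of the sketch is Omri’s: arbitration needs only ephemeral contention state, no persistent identifiers — though, as other commenters note, anyone who can inject false contention messages can stall or misdirect the whole junction.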

Furthermore, if you’ve ever had the unpleasant experience of riding in an ambulance as a patient, you know that even at 12 MPH, without traffic holding you up, you can get around amazingly fast. Even at 15 MPH, driverless cars can get people around way faster than today’s status quo.

And at 15 MPH, what happens if cars get hacked and sent to hit things? Very little. Annoying, but not threatening.

J. Peterson January 29, 2016 3:41 PM

After taking a corporate internet security class, I felt like current security measures are putting duct tape over masking tape. So much of the underlying structure was never designed with security in mind.

albert January 29, 2016 4:15 PM

“… new profession of freelance underground connected-system disconnecters, bypassers…”

Indeed. Just like NoScript. We’ll need workarounds to make that sucker work without an Internet connection, because that’s the next step from their side.

We’ll also need an IoT version of Wikileaks (IoTLeaks?) to:

  1. Expose bugs in the products. This helps the underground by reducing research efforts.
  2. Make it difficult for manufacturers to prosecute researchers. They’re always pissed when someone points out something they’re trying to ignore.

You’re wrong. You assume that the s/w will be perfect and unhackable, when experience shows this will never be the case. The auto industry has a poor track record for engineering quality, even before computers. Throw computers into cars, and you’ve got a nightmare scenario times millions of units.
There could be a dozen cars needing to communicate near an intersection, including cars entering and exiting driveways and parking lots. There’s also bicycle, motorcycle, and pedestrian traffic. Frankly, I don’t think there’s enough capability in the gov’t or the auto industry to deal with this kind of complex system, even without the possibility of hacking.
“…And at 15 MPH, what happens if cars get hacked and sent to hit things? Very little. Annoying, but not threatening….”
You can’t be serious. Let’s arrange for you to be “annoyed” by a car at 15 MPH. I’ll bet you’ll change your mind, if you’re still around.

Driverless cars are years away, but our critical infrastructure is here now, and it’s critically vulnerable. If we’re going to spend billions, let’s invest it there. Back-burner the DCs, and go for stricter drivers licensing, stricter traffic enforcement, and more draconian penalties for drunk drivers.

. .. . .. — ….

Otto January 29, 2016 4:34 PM

And the tradition continues… We were not able to secure computers, servers, etc., connected first to wired and later to wireless networks. Why do people anticipate that the IoT will be more secure by default? Ease of use always has and always will trump security…

RadioStar January 29, 2016 5:07 PM


And the tradition continues… We were not able to secure computers, servers, etc., connected first to wired and later to wireless networks. Why do people anticipate that the IoT will be more secure by default? Ease of use always has and always will trump security…

While I would prefer to talk about the unnoticed dangers of the IoT, I do believe it is worth mentioning that progress has been made. That is more difficult to imagine, because there are people holding it back: for instance, the many striving to take encryption away, reduce security, and make easy profit and power from attack.

There is also a golden ratio: security & usability. Just saying the two can never go together is simply throwing away what must be architected and done. Really, we see this manner of ratio all over the place.

It is easy to say, however, “oh, how it was in the sixties, versus now,” though I wonder how many who say that mean it in terms of standard of living, and do not just take it for granted.

It is much harder to say, “What if defensive security work was not performed since the 90s”.

Quite frankly, had that been the case, it would not have been about “more people simply getting ripped off.” (The handling of which is one of many areas of defensive security. Ripped off? Multimillion-person hack? Reset the credit cards, move on. Easy on the customers, compared to how it would have been.)

There probably would not have been much of what we have today. There would not have been these freedoms, and there would not have been these significant rises of standard of living. The industries, like the governments, would have gone full blown totalitarian. No competition, you have crap progress. You have the Middle Ages, and fast.


Cars do have to communicate with each other to coordinate movement at intersections, and they can easily do a better job than humans. Imagine all-way stop signs replaced with a protocol based on Ethernet-style exponential backoff. When they do that, that is, announce their positions, velocities, and intentions to each other to coordinate interleaving through a junction, they don’t need to announce any GUIDs. So the privacy issues should only arise when one driverless car is used to surveil the passengers in another one.

Furthermore, if you’ve ever had the unpleasant experience of riding an ambulance as a patient, you know that even at 12 MPH, without traffic holding you up, you can get around amazingly fast. Even at 15MPH, driverless cars can get people around way faster than today’s status quo.

And at 15 MPH, what happens if cars get hacked and sent to hit things? Very little. Annoying, but not threatening.

It does not matter how much money is poured into a technology, nor how many companies support it, nor how strongly government supports it. It can fail.

It might not even happen, and if it does, it might not happen on the scale people hope it will.

Security-wise, however, assuming that there are no problems with the increase of technology in vehicles, including their automation – that there are no potential security issues which have arisen and will arise – is simply not considering the reality of the situation.

Privacy is also but one aspect of security. If you want to know what the “privacy concerns” are ultimately about, it is about maintaining personal rights and powers and keeping those from being stripped away.

Be that from criminal hackers seeking to profit from direct sales or other usage of private information, or from criminal bureaucrats wanting to impose tyrannical control.

That is not so much the issue with the increased communication and control tech in cars, however. Automated means someone can control it, and there are considerable bad things that can be done with vehicles.

If everything could even be kept to just fifteen MPH, okay, maybe. That still does not mean there won’t be all sorts of other really bad things that could happen. Like bad things with hydrogen-based fuel cells, for instance. Or who knows what. We are not there yet.

Today, can cars be hacked from the road? Absolutely. Can they be hacked to create accidents? Sure. Few things are more valuable to some powerful folks than being able to cause car accidents at their pleasure.

I recently read an article in which a guy claimed that “the Aryan Brotherhood had figured out how to cause car accidents for assassination”: shining lasers in the victims’ eyes, and otherwise just making sure to direct the route of their victim.

They do make a lot of money in the drug trade.

Likely a myth, but many myths often have some distant basis in truth.

And as today’s cars have many points of failure which can be engaged while in motion, thereby pretty well ensuring death, it certainly does not stand to reason to argue that car security should be lessened, or treated as an afterthought, if considered at all.

If anything, car security should be more.

A very major selling point of automated vehicles is exactly that, in fact.

RadioStar January 29, 2016 6:49 PM

^^ Afterthoughts:

Underestimating the dangers of automated, wireless-controlled (or simply embedded-system-controlled) systems is appalling, and no one who even visits this manner of forum should do it. 🙂

Drone sales skyrocketed this Christmas.

If you entered into a major electronics store in December, drones were front and center.

While one can believe that drones are normally not automated, the fact is that even the cheap models have extensive automated capabilities. And the more “expensive” models (>= $600) have a lot of automation.

Not the biggest deal, sure. The best only last thirty minutes.

However, the next phase of drone technology you will see is hydrogen fuel cell drones which can last for four hours and more in the air.

Believe it or not, but automation is a major component for such a drone.

At least one show (“Legends”) has shown an automated, amateur level drone used in a targeted assassination. This is because it is a very real possibility.

How to stop that from becoming a reality? License plates for drones? GPS jammers and spoofers and a very close eye on protection systems against GPS jammers and spoofers?

Right now, how much does it cost to make such a system? Thirty bucks, really? I think Cult of the Dead Cow had a system the size of a cigarette pack on display ten years ago.

But, as SO much increasingly relies on GPS, how can that happen?

If, for instance, you GPS jam a major downtown event… what might result?

And so on.

So, there are countless new problems which arise.

Historically, embedded systems tend to have very poor security. Partly, this has been because the very implementation of these systems has been so dramatically arcane. How much more arcane has the research to attack them therefore been?

ramriot January 29, 2016 8:04 PM


Are you absolutely sure the OPM (BTW, it is Personnel, not Personal) data breach was not also an integrity and availability hack?

Since the OPM’s records were the key arbiter of personnel security validity, how do we know that the persons who had unauthorised access were not also able to alter those same records, damaging their integrity?

And of course if you are forced to assume that then you have to re-assert the truth of every record they have, which is a real Denial Of Security Vetting attack.
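
One hedged sketch of a mitigation for that record-integrity worry: tag each record with a keyed MAC whose key is kept off the networked system, so alteration by someone without the key is detectable on audit. This is illustrative Python, not a claim about how OPM actually stores records:

```python
import hashlib
import hmac
import json

def tag_record(record: dict, key: bytes) -> str:
    """Keyed MAC over a canonical encoding of a personnel record.
    An attacker who can rewrite the database but does not hold the
    (offline) key cannot forge a tag matching the altered record."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def record_is_intact(record: dict, tag: str, key: bytes) -> bool:
    """Re-derive the MAC and compare in constant time."""
    return hmac.compare_digest(tag_record(record, key), tag)
```

Tags would be computed at write time and re-verified during vetting; of course, this only detects tampering, and everything still hinges on the key itself staying out of the attacker’s reach.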

Grauhut January 30, 2016 4:08 AM

@Otto: Ease of use is not our enemy, we need easily usable strong security.

The real enemy is development cost saving fetishism and time to market.

Petr January 30, 2016 8:59 AM

Do you agree that every group puts emphasis on the part of the triad most concerning to them (DoD: confidentiality; Energy: availability)? All aspects of the triad should be considered with regard to impact on human life, property damage, reputation, revenue, and information. Anything less is concerning.

Great! Maybe this means companies are finally waking up. We know security doesn’t work for long with FUD campaigns. Hypothetical scenarios could be discussed at great length. Keep showing the risk in terms everyone understands from the top to the bottom and things may continue to change.

Thanks for sharing.

jones January 30, 2016 12:49 PM

If we’re to acknowledge the economic dimension of these concerns, we should also recognize the role of diminishing returns in the crafting of effective countermeasures.

The only way to prevent costs and countermeasures from spiraling out of control is to collect less information. Less information collection means fewer data breaches. The most effective policy approach to this end is to make the collection of personal data more expensive.

Making data collection & retention more expensive may involve increasing the cost of energy (since server farms are decidedly not “in the cloud” but consume a lot of wattage); a Bill of Rights style privacy protection regime can be used by individuals to establish civil standing in data breach cases (and then to extract compensation); disposable electronics don’t need to be so inexpensive (there are also moral reasons, such as the social cost of coltan, to reduce the output of disposable electronics), etc.

Increasing the complexity of countermeasures will only further increase the value of compromised data; it will also obfuscate forensic investigations after the fact by making them more complex and dependent on specialized knowledge (further increasing costs without increasing security, since, in the case of a forensic investigation, the breach has already occurred).

Bob F January 30, 2016 1:41 PM

Even if cars get hacked and cause crashes it will likely pale in comparison to the number of people who die on our roads today. Somewhere around 30,000 people die each year in the U.S. due to traffic accidents.

Security and tampering are issues that need to be dealt with, but we shouldn’t lose sight of the benefits driverless technology will give us.

ianf January 30, 2016 2:19 PM

@ albert […] We’ll need workarounds to make that IoT sucker work without an Internet connection…

Except that such IoT-sans-IoT is a contradiction in terms (incl. ToS ;-)), and thus these devices’ manufacturers may go to some lengths to prevent runtime bypassing of the Internet connectivity they claim is critical. Clearly, the problem goes way, way beyond mere NoScript extensibility (assuming such would be possible at all). And what if some household device’s (by-design obnoxious) process flow “requires” periodic access to some server(s), or else the device starts complaining of “needing to access the Internet”? Somehow I can’t envision guerilla reprogramming, end-user reflashing of embedded control systems.

That said, however, connectivity is that weak link in all IoT devices that could lead to a general strategy of nullification of their IoTcy (IoTness?).

    When the Stuxnet virus detected that it was present in the specifically targeted Siemens PLC model, the first thing it did was “listen to” and record a typical command-and-feedback flow between itself and the sensors; it later used that recording to mask the true inputs from the concurrently sabotaged centrifuge units.

This methodology, first a private MITM recording between one’s IoT device and the server, then a straight or programmatically randomized replay of that back to the embed in a by-then IoT-disconnected state, might become the model for such workarounds. Unlike Stuxnet, which first had to propagate itself to precisely defined PLC units, our runtime IoT-simulator would only need a dedicated dongle with an I/O splitter to the device, later to be inserted directly. A stand-alone automatic dongle, not a cable to the home computer with the simulator running in the background, because then the IoT device would become dependent on the run-presence of that computer and a presumed high programmer skill-set of the end user. To succeed in the marketplace (and get tons of free advertising in DIY/trade rags) this HAS TO WORK right out of the box and in Look-Ma-No-Hands fashion!
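
The record-then-replay workaround described above could be sketched roughly like this; the `ReplaySimulator` class and its byte-string “protocol” are invented for illustration, a toy of the idea rather than anything a real dongle would ship:

```python
import random

class ReplaySimulator:
    """Sketch of the record-then-replay dongle idea: first observe the
    device <-> server exchange (recording phase), then stand in for the
    server once the device is cut off from the Internet (replay phase)."""

    def __init__(self, rng=None):
        self.transcript = {}            # request -> list of observed responses
        self.rng = rng or random.Random()

    def record(self, request: bytes, response: bytes):
        """Recording phase: remember every response seen for a request."""
        self.transcript.setdefault(request, []).append(response)

    def replay(self, request: bytes) -> bytes:
        """Replay phase: answer a known request with a randomly chosen
        recorded response, so a device probing for a canned loop sees
        some variation rather than one fixed answer."""
        responses = self.transcript.get(request)
        if not responses:
            raise KeyError("request never observed during recording")
        return self.rng.choice(responses)
```

A request the device never made during recording simply cannot be answered, which is exactly why the recording phase would need to cover the device’s full periodic chatter before the cord is cut.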

@ Wael, @ figureitout, get cracking. I envision a CC-sized (in time matchbox-sized) RaspberryPi/ PiZero or Arduino/ similar box powered off the soon-no-more-IoT target unit, or by any garden variety micro-USB cellphone charger. Solve the deciding, recording, and later randomized replays (to prevent the device detecting being fooled, say) in a program first, then embed the compiled blob in this /drumrrrrrrroll/ IoTbeGONE branded dongle. Solve the main parts now, before IoT becomes ubiquitous, because later you will need to generalize the code to serve all kinds of, not only some specific models of such “pinging” devices.

    DO OBSERVE HOWEVER, that I will claim 5% (“five percent”) of the gross global sales figures during the first 3 years’ market lifetime of any product of yours based on that concept of mine above, and will quote this very Schneier blog comment as THE proof of having come up with the idea. Just so you know where I stand in this context. Still, 95% isn’t exactly nothing to be sneezing at! [Prosit!]

Wael January 31, 2016 2:16 AM

Take one example: driverless cars and smart roads.

Oh you guys are such old generation pessimists! It’ll happen with or without your approval; with or without security. There is money to be made and taxes to be paid, and more metadata to be collected! End of story. If it’s of any condolences, most of the readers here will be considered “decrepit old people” by the time this is ubiquitous! Invest in the future and look at the bright side!


Solve the deciding, recording, and later randomized replays (to prevent the device detecting being fooled, say)

Prior art, bud!

THE proof of having come up with the idea.

As far as I know, that doesn’t matter. What counts is who filed for patent first.

ianf January 31, 2016 4:05 AM

@ Wael, re: my ITbeGONE gizmo idea

ianf: […] Solve the decoding, recording, and later randomized replays (to prevent the device detecting being fooled, say)

Wael: Prior art, bud!

If you’re talking of Stuxnet, its authors have chosen to be anonymous. You can’t claim ‘prior art’ anonymously, so there!

[…] THE proof of having come up with the idea.

Wael: What counts is who filed for patent first.

I’ll sick my lawyer onto your lawyer, then we can sit back and watch the spectacle. You can bring along the sixpack, I’ll be sipping my Absinthe intravenously.


If it’s of any s/consolation/condolences/g

By and large you are right about the inevitability of driverless cars/ smart roads in the not too distant, if not exactly near future. That said, @ albert is plainly wrong imagining that smart roads will permit any other traffic than that of fully keyed-in driverless vehicles (“There’s also bicycle, motor cycle, and pedestrian traffic”). All such traffic will have to be redirected to its own parallelized road network.

ianf January 31, 2016 4:50 AM

Martin is not worried about all this integrity stuff. [He] read somewhere that Congress was getting ready to start thinking about the possibility of forming a committee to discuss their options for financing a study.

    At last someone sane! And I can even tell you WHERE you read it first: on the Internet, dahrrrling!

Clive Robinson January 31, 2016 7:17 AM

@ Wael, ianf,

Prior art, bud!

Hmm have you two actually read this blog prior to Stuxnet?

As I’ve said before, I’m sure the Stuxnet designers did, because most if not all of the interesting supposedly new stuff Stuxnet used had been described long before on this blog…

Which kind of puts it well outside any patent application even in that strange brew world of US IP.

Wael January 31, 2016 9:28 AM


If it’s of any s/consolation/condolences/g …

Normally that would be the correct choice of words. I’m referring to the death of privacy and the systematic deprivation of control. Soon one will have little, if any, privacy and much less control. [1]

[1] Apologies for the Star Trek quotes …

If you’re talking of Stuxnet…

Stuxnet is an “attack” method. I’m talking of “defense” mechanisms. Your idea may be “patentable”, provided you add more specifics that show clear innovation beyond what’s obvious to those “skilled in the art”. “Random”, “dongle”, … aren’t new. The way you construct and utilize them may be novel.

@Clive Robinson,

Hmm have you two actually read this blog prior to Stuxnet?


As I’ve said before, I’m sure the Stuxnet designers did,

That’s a fair assumption.

Figureitout January 31, 2016 11:22 AM

–You’re gonna have to think over more of what the hell exactly you want to do, and enunciate that (I struggle with that sometimes). I have begun thinking of a small tcp/ip system, and asked if there were any tcp/ip experts here that used uIP and lwIP, but got crickets.

But I’m busy (impacting my self-imposed deadlines for nrf_detekt; gonna have to push the final deliverable back to summer, which pisses me off, but work and school trump my open projects). So you can get off your ass and implement whatever the hell you’re thinking yourself (and spell my name right, ass).

Nick P January 31, 2016 10:41 PM

@ Figureitout

” (and spell my name right ass).”

You still haven’t told us your name. All we have is an alias that tells us to try random ones. 😛

“So you can get off your ass and implement whatever the hell youre thinking yourself ”

That, on other hand, is good advice these days. Both proprietary and FOSS are going to push same old crap. So, inventive or paranoid people have to get their hands dirty to have a chance at security.

Figureitout February 1, 2016 1:14 AM

Nick P
–It doesn’t matter, Integrated Mosfet is my name. You can trace that back to my linkedin account where I use my real name. Don’t care about digital attackers like I used to (as I can create a new digital identity in a snap and use infected systems for lots of work, since I can’t afford to do a clean sweep now), it’s the physical ones that track you down in real life that annoy me more than anything (they’re mostly cowards); other commenters should probably be aware of that, it happens.

FOSS and proprietary are all we have (and there’s been major consolidation of chip companies: Altera and Intel, Freescale and NXP, and most f*cked up, Atmel and Microchip). We’re running out of choices by the day, and all chips will have remotely exploitable holes you can’t remove with software unless security advocates take over a fab that can turn a profit. Most other things are all talk and no walk. E.g., they don’t put their talk into projects others can judge and evaluate.

FoLI February 1, 2016 3:42 AM

Bruce, the only way to advance on integrity and availability threats would be a real threat.

Think of a worm in the Internet of Things, with sufficient distributed intelligence to find vulnerabilities and reprogram itself.

The Future of Life Institute (Stephen Hawking, Elon Musk, Jaan Tallinn, … [1]) says that such a threat is not a question of if but of when [2], decades away. We are already close to that: Stuxnet was made ten years ago.

The sooner that threat exists, the smaller our exposure to hacked cars or medical devices will be.

We had better build and release that worm now.

[1] The aim of the institute is to “minimize risks” of “future strong artificial intelligence.”

[2] They [1] predict the appearance of superintelligent systems that would “be very effective at acquiring” “control of physical resources”.

Brett February 1, 2016 7:45 AM

It’s not so much confidentiality and integrity. It’s privacy and anonymity (rather, subsets of confidentiality: privacy and anonymity, mostly anonymity). But not integrity (OK, the bad guys want the integrity to be maintained so the SS#s and CC#s are still good). The other leg is authenticity, the fourth leg of the stupidly so-called “triad”. Is the real John Q. Public using JQP’s credit and SS#? Or is it John Q. Public-Hacker? Information that isn’t authentic could be used to crash driverless cars (or worse, aircraft). Again, the hack would not likely work if integrity were failing. Focus on authenticity, and then the anonymity and privacy will be supported (and this could end up with better confidentiality).

Daniel Bragg February 16, 2016 12:49 PM

“We need to create an agency, a
Department of Technology Policy, that can deal with the WSW in all its
complexities. It needs the power to aggregate expertise and advise other
agencies, and probably the authority to regulate when appropriate.”

I believe now is the time to institute an Appeals/Audit process, where you can identify all the devices owned by you, and where you, at any time, have the right to audit and correct (or delete) information stored at, or sent from, any of those devices, with the legal expectation that this revised information will be propagated through whatever cloud stores are warehousing this information. The longer we delay in implementing something like this, the harder it will be.

Like other certifications, devices that adhere to this will gain the right to bear that certification. After a time, Canada and the US may choose to restrict use of devices that do not conform, but that would require a government department as you mention above that has some teeth.

