Schneier on Security
A blog covering security and security technology.
May 21, 2010
Automobile Security Analysis
"Experimental Security Analysis of a Modern Automobile," by a whole mess of authors:
Abstract: Modern automobiles are no longer mere mechanical devices; they are pervasively monitored and controlled by dozens of digital computers coordinated via internal vehicular networks. While this transformation has driven major advancements in efficiency and safety, it has also introduced a range of new potential risks. In this paper we experimentally evaluate these issues on a modern automobile and demonstrate the fragility of the underlying system structure. We demonstrate that an attacker who is able to infiltrate virtually any Electronic Control Unit (ECU) can leverage this ability to completely circumvent a broad array of safety-critical systems. Over a range of experiments, both in the lab and in road tests, we demonstrate the ability to adversarially control a wide range of automotive functions and completely ignore driver input--including disabling the brakes, selectively braking individual wheels on demand, stopping the engine, and so on. We find that it is possible to bypass rudimentary network security protections within the car, such as maliciously bridging between our car's two internal subnets. We also present composite attacks that leverage individual weaknesses, including an attack that embeds malicious code in a car's telematics unit and that will completely erase any evidence of its presence after a crash. Looking forward, we discuss the complex challenges in addressing these vulnerabilities while considering the existing automotive ecosystem.
Posted on May 21, 2010 at 6:56 AM
• 60 Comments
I suppose all cars controlled by microprocessors will have to be banned by the government because terrorists could use them remotely.
Somewhere a Law & Order writer is wailing "And they canceled us NOW???"
So that Doctor Who episode where the alien-controlled GPS units started making cars drive into rivers with the doors and windows locked was maybe realistic? (Substituting sufficiently evil humans for the aliens, of course.)
As I pointed out in the Slashdot article on this, if they have physical access to your system, it's their system. The real worry is if an attacker can access your system via OnStar, Bluetooth, or WiFi.
We already have computer-controlled cars that randomly accelerate and cause occasional accidents. Yes, it's rare, but many people refuse to believe it is actually happening. They prefer not to acknowledge that there are dozens and dozens of small computers making all of the decisions in their car, interacting with each other, and all running software written by humans using processes somewhat less rigorous than NASA's. "Drive by wire" has been the reality for many years now, and considering how many cars are out there on our roads, I'd say the car companies have been extremely lucky that we've had so few recognizable malfunctions of these technologies.
I can't imagine that any of the 3-letter agencies out there are exploiting these weaknesses, though. They surely have other options which are a lot easier and more reliable. Sure you can cause a serious car accident, but what if the target survives? He'll tell investigators that the car accelerated by itself, that the airbag didn't deploy, etc. Even if the malware fully erased its tracks, this might cause more scrutiny than they want. They need physical access to the car to install it, and if they can get that close then they might as well apply their talents directly to the driver.
Great paper, would love to see some video of their real world runway testing.
I think the most interesting attack based on these flaws would be counterfeit "black hat" auto parts. As any device with access to the CAN bus appears to be able to assert control, a bad actor could create an evil replacement part (by definition he would know the make, model, and year) with a backdoor. If very high-end exotics were targeted, the cost vs. return could justify something as sophisticated as custom code to access the nav system and send location updates via SMS to the hacker/thief.
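For context on why any node can assert control: a classic CAN 2.0A frame carries an 11-bit identifier, a length, and up to 8 data bytes, but no sender authentication of any kind. A minimal Python sketch of that framing (the byte layout here is simplified for illustration; real CAN bit-stuffing, CRC, and arbitration happen in silicon):

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack an 11-bit ID and up to 8 data bytes into a toy wire
    format: 2-byte big-endian ID, 1-byte DLC, then the data.
    (Illustrative framing only -- not the on-wire CAN bit format.)"""
    if not 0 <= can_id <= 0x7FF:
        raise ValueError("standard CAN IDs are 11 bits")
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(">HB", can_id, len(data)) + data

def parse_can_frame(frame: bytes):
    """Recover (id, payload) from the toy format above."""
    can_id, dlc = struct.unpack(">HB", frame[:3])
    return can_id, frame[3:3 + dlc]

# Note what is missing: there is no sender field at all, so any
# node on the bus can emit any identifier it likes.
frame = pack_can_frame(0x123, b"\x01\x02")
assert parse_can_frame(frame) == (0x123, b"\x01\x02")
```

The absence of a forgeable sender field is the point: receivers trust frames purely by identifier, which is exactly what a counterfeit part would exploit.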
Now where's my copy of CarShark?
As someone who wrote high-reliability embedded code for many years, and who during that time noted the lack of talent most other programmers exhibited -- finding good ones was the main challenge, not to mention their inability to internally model interactions between threads in real-time interacting code -- I am frankly surprised nothing worse has happened than the odd report of extra acceleration. Forgetting outside attacks, the quality of a heck of a lot of code out there is so low it already amounts to a built-in attack.
The code my team wrote was for hardware that didn't even have a power switch or a reset button, and was expected to run for years at a time. Having to reboot constituted a failure and a service call. Autos are a lot simpler -- they get a nice fresh boot every time they are run. As an example, we never even used dynamic memory allocation -- the moral equivalent of malloc() was considered far too unreliable and needed too many layers of error checks and corrections to guarantee operation in reasonably real time to even be allowed in a project.
Bet that's not the case in automobile code, and what can realtime code do if some heap compression/recovery code runs at random times and eats the cpu just when some realtime response is needed from the main code?
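The no-dynamic-allocation discipline described above is typically implemented as a fixed-block pool reserved at startup, so allocation and free are O(1) with no fragmentation and no unpredictable heap walks. A rough Python sketch of the idea (real firmware would use a static C array, not Python objects):

```python
class FixedPool:
    """Fixed-size block pool: all memory reserved up front,
    O(1) alloc/free, no fragmentation, no surprise pauses."""

    def __init__(self, block_size: int, blocks: int):
        # Every block exists before the realtime loop starts.
        self.storage = [bytearray(block_size) for _ in range(blocks)]
        self.free_list = list(range(blocks))  # indices of free blocks

    def alloc(self):
        """Return a free block index, or None -- a deterministic,
        checkable failure rather than an unbounded heap search."""
        if not self.free_list:
            return None
        return self.free_list.pop()

    def free(self, idx: int):
        self.free_list.append(idx)

pool = FixedPool(block_size=64, blocks=4)
a = pool.alloc()
b = pool.alloc()
assert a is not None and b is not None and a != b
pool.free(a)
assert pool.alloc() == a  # freed block reused immediately, O(1)
```

Exhaustion shows up as an explicit None at a known call site, which is far easier to reason about in a realtime system than a heap that degrades unpredictably under fragmentation.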
I do NOT believe that the "modern" tools and frameworks that somehow result in tens of thousands of lines of code to control a single simple subsystem can possibly be as good as well understood purpose-written code down on the metal either. The amounts of code and CPU horsepower needed to do even the simplest thing in an ECU are frankly, ridiculous. I doubt most of the programmers have even the faintest idea of what code their super dooper drag and drop/monkey-typing tools produce, or that the people who write those tools really understand how easily they produce rotten, unreliable code that can easily fail under conditions where even one unexpected event or bad input exists -- or just one bad assumption by the coder.
And in real life, bad assumptions exist. I don't think comparison to NASA as though they were the pinnacle of this art is appropriate either, frankly, having seen some of their work. I was not impressed with it. I found the 3 machine voting bandaid laughable as a safety backstop. Perhaps during testing, but not on a live system where the tripled failure rate may cost lives or controllability by the human backup operator. Far easier to get it right just once, with perhaps a hot spare available.
You quickly get into the "who watches the watcher" set of problems with that.
Much embedded code I've seen suffers badly from the "too many chefs spoil the stew" class of problems with pieces that poorly fit together and which make different, wrong, assumptions about the other pieces that all have to work in concert.
The upshot is, I don't believe an external attack is required to make trouble. All it probably takes is a glitching sensor, a bad connection, or two events that aren't supposed to be able to happen that close together in time doing so anyway, sometimes. It's practically impossible to get reliability via testing -- it must be built in (just like Bruce often says about security, you can't add it in later). Some test case will almost always be missing and not cover some case that actually happens in the real world.
Side note -- I've driven some pretty hot cars in my life, and I've yet to see a single one where simple application of the brakes wouldn't bring the car to a full stop against full engine power, even when that power exceeded 400 hp. Something else fishy is going on there -- tendency to litigate for operator error, utter failure of a driver to realize that pumping power brakes when there's no manifold vacuum merely reduces their operability, or the unwarranted assumption that we are all safely encased in bubble wrap and need not have a clue about systems on which our lives depend.
One could wonder why the affected driver didn't just apply the brakes, shift to neutral, or simply remove the key -- oh yeah, that well thought out "security feature" that locks the steering in many cars when you do that! That's the level of sophistication we think is acceptable in a system costing tens of thousands of dollars that you depend on to stay alive? I think consumers should demand better -- I do and vote with my wallet.
As discussed in the essay below, many of the problems will be avoided, or will at least become controllable, when designers stop assuming that the automotive internal network is inherently secure.
And when things start to go wrong with the electronics in 15 to 20 years, where will the spare parts come from? Is this the ultimate in built-in obsolescence?
Somehow, I just can't see the cars of today being driveable as 'vintage' cars 50 years from now.
Now I wish I still had my old Morris Minor - spanners, a screwdriver, a file and a few bits of metal were enough to fix most problems.
Access to the vehicle should not be necessary. To make class breaks, substitute a malware chipset at the chipmaker (in China?) or else install rootkits in the software distribution at the automotive manufacturer. Then all you have to do is wait for your botcars to roll off the assembly line.
From a risk analysis or cost-benefit standpoint I could see why the car's network wouldn't be all that secure. I mean, to do anything to it you would need to have physical access to the car long enough to either load up some new software or replace one of the units with a reprogrammed one. If you have access for that long you may as well just install a bomb; it would be more effective. If you want to make it look like an accident, cutting the break lines would be almost as good. I do see a danger in software bugs causing unintentional crashes, though. Especially once some auto manufacturer starts outsourcing the coding to the lowest bidder.
Considering that you can get a Bluetooth transceiver for OBD-II for $54 ( http://www.dealextreme.com/details.dx/sku.16921 ), those attacks are very easy to conduct, mostly for blackmail purposes.
Just send a few warnings on the multiple displays around the dashboard, then cut the engine :)
@DCFusor: How do you know the drivers didn't try to do those things? With my first car, turning the key physically connected the wire to the ignition, and the brake pedal connected physically to the brake. The shift lever of my first manual transmission car connected mechanically to the gears, and the clutch pedal had a mechanical linkage to the clutch plate. That gave me four ways to stop accelerating: turn the key, stand on the brake, push the clutch pedal, and shift to neutral.
Now, replace all these linkages with digital electronic controls. Each pedal and control sends a signal to a mesh of control systems. Under normal circumstances, you wouldn't have a need to turn off the ignition at high speeds, you wouldn't want to shift to neutral while revving the engine at high speed, and you wouldn't want to slam on the brakes while still hitting the accelerator. Therefore, if there's an erroneous accelerator input, who knows what it'll do to the control software?
One recent firmware change Toyota did was to give the brake signals higher priority in the processing.
It isn't a matter of doing something and making the car unsteerable, it's a matter of not being able to do something.
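The prioritization change mentioned above is commonly called "brake override": if the brake and the accelerator are both asserted, the throttle request loses. A hedged sketch of the rule (the threshold value is invented for illustration, not any vendor's actual figure):

```python
def throttle_command(accel_pct: float, brake_pct: float) -> float:
    """Brake-override sketch: any significant brake input forces
    the throttle command to idle, regardless of the (possibly
    erroneous) accelerator signal. The threshold is illustrative,
    not any manufacturer's real calibration."""
    BRAKE_OVERRIDE_THRESHOLD = 5.0  # percent of pedal travel
    if brake_pct >= BRAKE_OVERRIDE_THRESHOLD:
        return 0.0  # brake wins unconditionally
    # Otherwise pass the accelerator through, clamped to range.
    return max(0.0, min(100.0, accel_pct))

assert throttle_command(accel_pct=100.0, brake_pct=50.0) == 0.0
assert throttle_command(accel_pct=30.0, brake_pct=0.0) == 30.0
```

The design point is that the override is unconditional: even a stuck or spoofed accelerator signal cannot outrank the driver standing on the brake.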
Killing someone by hacking their car is a bit movie-plot, you dying because a random bug makes your car flip less so. I don't want a car where the breaks are not physically connected to the pedal, which is what I believe was the case here (correct me if I'm wrong).
Now that might not make a good movie but it might make a cool CSI.
Re the brakes thing; it's very unlikely that the break peddle wasn't mechanically connected to the breaks. However anti-lock break and traction control system are able to release and engage the brakes on there own.
Oh for crying-out-loud. If an attacker has access to your car he can do bad stuff. If he has access to your PC he can also do bad stuff. If he has access to your house, or hotel room, same thing. If he has access to the sandwich you're going to eat he can poison it. Everyone clear on that?
Oakland never fails to remind us that it's more about ridiculous sci-fi attack scenarios than serious approaches to serious problems. I notice Bruce couldn't bring himself to put the customary "interesting" in his introduction.
Bad guys don't want to hack your car. They want to steal it and sell it. For the money.
And killing someone by hacking is so much harder than cutting the brake lines anyway. Old-school attacks are still as easy as ever.
Certain religious circles would love those!
Terrorists won't be interested in single vehicles. Not enough press. They would want to do a lot at the same time.
Someone may sabotage their own vehicle in an attempt to collect a settlement from the manufacturer.
I've worked on avionics for many years, including systems that directly control the aircraft, autopilot, engine digital controls, and primary flight instruments to name a few. Even with the rigor of the FAA certification to DO-178 level A, we have had systems that exhibit flaws. The lame quality control exercised by car companies pales in comparison to what we do. It's actually pretty amazing that there hasn't been a major problem with automotive controls before this!
First thing, before people chuck too many stones: the CAN bus has been around a lot longer than most MS OS's. Twenty years ago security in these systems just was not possible in a realistic sense.
Also, having spent a big chunk of my programming time developing RT (realtime) micro OS's to support a multitude of high reliability / availability systems including satellite, medical, nuclear and communications systems, I have a little bit of insight as to what is required for such systems to reliably meet realtime requirements.
To give you a flavor of but one of the major issues: how do you deal with asynchronous IO signals needing minimal response times, on signals that can be both very infrequent and, for short periods, very high bandwidth?
After a little thought you will realise that at the CPU level neither polled IO nor interrupt-driven IO can be sufficiently reliable to do the job. So you do what they have done for years with serial IO: you build it into hardware with buffering. However, the main problem with this is having to have frames of bits between start and stop signals, all of which adds the dreaded latency.
The advantage of microcontrollers used to be that you could do all sorts of ad hoc IO in software and thus minimise silicon real estate and power consumption.
After you go around this loop a few times you realise that only standard IO types or hybrid systems are going to meet the specs at the bottom level of the stack. You end up making the same sort of choices at all levels. The result is a ridiculously over-specified and thus costly microcontroller that basically sits there doing nothing 99% of the time just to ensure meeting the critical times.
Sadly, too many programmers with insufficient experience believe they can improve the CPU utilisation, and 99 times out of a hundred they can, but they don't consider the edge effects that occur 1 in 10,000 times or so in their testing...
The simple fact is, with care in your design you can abstract asynchronous behaviour to synchronous behaviour and still meet the time constraints; the price you pay for this is gross inefficiency in CPU utilization, and likewise with programmer time.
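One reason CAN hardware can meet these latency constraints is identifier-based arbitration: when several nodes transmit at once, the frame with the numerically lowest (most dominant) identifier wins the bus non-destructively, and the losers back off and retry. A toy sketch of that priority rule:

```python
def arbitrate(pending_ids):
    """CAN arbitration sketch: among frames contending for the
    bus, the lowest identifier wins (dominant bits prevail during
    bitwise arbitration). Losing nodes retry later, so the winning
    frame is transmitted without corruption or extra delay."""
    winner = min(pending_ids)
    losers = sorted(i for i in pending_ids if i != winner)
    return winner, losers

# An engine-critical frame (low ID) beats body-electronics chatter.
winner, losers = arbitrate([0x3E8, 0x0A0, 0x2F0])
assert winner == 0x0A0
assert losers == [0x2F0, 0x3E8]
```

This is why safety-critical messages are assigned low identifiers at design time: priority is baked into the ID space, with no scheduler in the loop.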
Someone could also cut your brake lines, or drain out all the oil.
It's true, you can cut the break lines.
But, I think the reason this research is interesting is the possibility of bringing about the same effect without any trace of manipulation. If someone actually did this, perhaps the Gambino family hires someone to pull this on a DA's car, the most likely conclusion would be a manufacturer's error. It would take all kinds of time for the manufacturer to analyze it, and people would question their motive when the carmaker tried to say someone hacked it.
It's not every day someone is poisoned with the tip of an umbrella laced with ricin. However, it has happened. By the same token, this would be a well-financed, targeted attack, performed on only specific targets. Let me put it this way: I wouldn't go out and buy a Prius if I were Hamas or Hezbollah.
People have certainly been killed by tampering with their cars before. Inserting car-crashing, self-erasing malware on a standard port sounds like a much safer way to kill someone than, say, putting a bomb in their car, since if someone's car blows up, the police will certainly be very interested. If someone dies in a car wreck, it's a lot more likely to just be assumed to be an accident, especially if there's not obvious physical evidence.
I wonder if anyone has been murdered this way so far. If it happened to some normal person (rather than some famous or powerful person, where there might be more of an investigation), I wonder if anyone would even think to look for evidence of this kind of attack.
I've seen cases where a controller decides it has a problem, and starts screaming into the data bus to that effect. Net result is that the other controllers can't communicate with each other. The whole system comes crashing down.
Well, you got me thinking, so I went out and checked 3 of the newer vehicles here. A 2007 Honda Ridgeline, a 2010 Camaro SS, and a 2001 Buick LeSabre.
Both the Honda and Camaro are drive by wire as far as the airbox/throttle goes. Neither have anything but direct connections to the standard power brake booster, and plain old mechanical connections to the steering.
The Buick is pure mechanical connections to all the controls, and is also the only one which requires the steering lock to get the ignition off. In the rest you can get to off without locking the steering, though you can't remove the key without locking the wheel.
So I suppose a simultaneous failure of the ABS that broke the brakes, along with one that caused the throttle full open would defeat the newer guys -- if some relay contacts also welded -- triple failure required. (the old Dodge van is more likely to fail dead...)
Except all 3 have pure mechanical emergency brakes that do work well.
In both the newer vehicles, the transmission is also a pure mechanical connection to the shifter.
So you still have that. (The Camaro is a 6 speed manual and no servos that could really handle that would be cost effective anyway). And in the manual, the clutch is still direct (hydraulic, but direct and no engine needed to make it work)
In all cases, the ignition switch is a real switch going to a relay. Else the ECM draws would kill the battery quickly when the car was parked. I did have that issue in some vehicles of around 1998-2000 vintage, but they learned.
I suppose none of the Toyota engineers ever watched Star Trek and saw the "help, the aliens got into the computer and now the doors won't open" shows -- good learning for any engineer.
Failing to have backup that works, by design, is simply criminal negligence and surely implies a level of hubris no engineer should ever have if they design anything on which life even peripherally depends. Too bad there's no jail time in store for those guys, probably not even a possibility to trace who made certain crucial bad decisions there. Bad to the level of pure immorality. The buck should stop someplace, but it most likely won't. Seppuku seems appropriate for someone.
I know that when TI proposed (and for a little while GM put on Corvettes) fly by wire steering, there was such an outcry about how stupid that was that it stopped.
We do know that some of the drivers didn't try that stuff, including the cop who was calling 911 all the way through the fatal crash. Didn't he know how to do the PIT maneuver to himself? And that was supposedly a pro trained driver!
I still think it's mainly driver incompetence, sorry.
Or just plain stupidity. Or inability to reason in an emergency -- gee, if I push in the clutch, my engine might blow! Better than dying for a rational person, but how many of those do you meet? Who stay that way when things get a bit exciting?
I think more people were trying to get out of trouble (if at all) with minimal or no damage to the car, and for the crucial moment, forgot that if you die, the rest doesn't matter at all -- this forum is all about people making dumb security tradeoffs, and there's yet another example of it. I know too many stupid humans to believe otherwise. There's usually some way to rub off some speed if you aren't too worried about your paint job or tires and *anything whatever* still works, which obviously was the case in all the reports. Nope, easier to blame someone else. Seems we think if we know how to point the finger, that fixing the blame is better than fixing the problem -- in many fields (finance, politics, and now cars).
And while I'm not a spelling/grammar Nazi, gheesh people, I hope no one breaks your brake lines -- or you might loose your mind (which would then need to be tightened back up)! Seems the world is becoming illiterate as well, now git off my lawn!
And, first thing my dad taught me to do on any new vehicle (new to me) was take it out on a big empty parking lot and thrash it hard -- to find out how it handles in a skid, what it takes to get into one, and how to get out. I am absolutely sure it's saved my life more than once to have muscle memory or "intuition" on what the car will do in some extreme situation. Anything less is not only dangerous to you, but irresponsible as regards your interaction with other people on the road. Just my opinion, of course.
Now, as far as hacking my car, there's already a bunch of info on the 'net on how to disable things like OnStar, though which your car can be controlled or at least shut off...more than most people know. You'd sure want to do that before robbing a bank in it....
DCFusor: When I was young (1980's) there was a rumor going around that when car brakes have a problem you should "pump" the brake pedal, which were often accompanied by stories that include "even pumping the brakes didn't work". :-#
As for changing gear in a vehicle with an automatic transmission, most people have never practiced that and it's not easy to do when the car is moving. The one time when I could usefully have done that was when my car stalled on a freeway (thus giving very effective engine-braking) - I ended up swerving across a couple of lanes of traffic to get to the emergency lane before I started going too slowly.
As for using this for assassination. If someone drove a regular route then an attacker could hack the vehicle of someone else who drives a similar route. While the authorities were busy doing alcohol and drug tests on the person who just killed a VIP any evidence could be removed from the car.
@ DCFusor: Thank you for addressing my teeth-gnashing over "brake" and "break". Now can we do "pedal" = something operated with the foot, like a bicycle pedal, gas pedal, or brake pedal -- vs. "peddle" = to sell, leading to the classic joke/pun about the prostitute who bought a bicycle and peddled it all over town?
(See? It *does* matter, else you don't get the joke. ;)
On a more serious note: I have no connection to aerospace engineering, but it's my understanding that all critical fly-by-wire systems (e.g., elevators, ailerons, rudder) have multiple redundant backups, such as a hydraulic connection *and* a mechanical connection. Perhaps this information is dated, now that everyone has more "confidence" in wires. If so, would someone please inform me? In any event, a drive-by-wire car certainly should have the same.
I used to have an airline pilot for a neighbor. He was studying the manual for, I think, one of the Boeing 7x7 series. He told me that in the event of a complete failure of electrical power (generators, APUs, batteries, etc.), as a last resort, a bomb-bay-type door would open in the belly of the plane, and a little windmill-type device would be lowered into the airstream. Even if all thrust were lost, just the glide speed of the plane would produce sufficient power from the windmill (I'd say "turbine", but that's too easy to confuse with the engines) for emergency cockpit lighting and basic instrumentation power and lighting. Not needed on a car, but... shows what type of thinking is needed.
@ Russell Coker: IIRC, the "pumping" was in the event that a small air bubble had gotten into the hydraulic brake lines, and/or a pinhole leak, in which case rapid pumping presumably might increase the hydraulic pressure enough to slow the vehicle. But I could be mistaken. The other aspect is that new drivers are, or should be, taught that in a need-to-stop-quickly emergency, not to "slam on the brakes", thus causing the wheels to lock up and skid, losing traction (lower coefficient to the highway). Instead, the brake pedal should be repeatedly pumped and released, allowing the wheels to keep, or regain, traction.
The exception would be highly-trained drivers (like DCFusor and me, with our skid-pad training) who can sense the edge of traction, and apply the brakes just hard enough for max stopping power without skidding. Probably the only such drivers commonly around are those with racing experience (even in go-karts) and law enforcement. They also have the presence of mind not to panic. Most people stomp on the brakes as hard as they can, and as the vehicle fails to slow as expected (due to skidding), they stomp even harder. (adrenaline rush, you know.)
I think that explains the "pumping" thing.
I guess I should add that Anti-Lock Brakes were developed precisely because there were so few anti-lock drivers. ;)
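The skid being described is quantifiable: longitudinal slip is the fractional difference between vehicle speed and wheel speed, and an ABS controller releases brake pressure when slip gets too high. A toy version (the threshold is invented for illustration; real controllers modulate far faster and track more state):

```python
def slip_ratio(vehicle_speed: float, wheel_speed: float) -> float:
    """Longitudinal slip: 0.0 = wheel rolling freely at vehicle
    speed, 1.0 = wheel fully locked while the car still moves."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def abs_release(vehicle_speed: float, wheel_speed: float) -> bool:
    """Toy ABS rule: release brake pressure once slip exceeds an
    illustrative threshold, letting the wheel spin back up and
    regain traction -- automated 'pumping', many times a second."""
    SLIP_THRESHOLD = 0.2  # illustrative, not a production value
    return slip_ratio(vehicle_speed, wheel_speed) > SLIP_THRESHOLD

assert abs_release(30.0, 10.0)      # heavy skid: release pressure
assert not abs_release(30.0, 28.0)  # mild slip: keep braking
```

In effect the controller does what the trained driver does at the edge of traction, just with per-wheel speed sensors and millisecond reaction times.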
Why do you guys keep insisting that physical access is required to implement a hack of an automotive system? Physical access certainly makes the process easier, but it is by no means necessary.
Cars by their nature have long cables attached to the electronics, and these cables act as antennas. With sufficient microwave signal power and an understanding of the microcontroller signal timing and/or critical state transition timing, an attacker can control the branch flow of the program.
Look up glitch attacks, under hardware hacking.
To create the power supply glitch, a high-power microwave transmitter can be used. The microwave signal gets picked up by the automotive cable harness and rectified by the on-chip protection/ESD structures.
The ESD structures are mainly intended to protect the chip during the assembly process; however, they can be triggered when the system is powered up. The ESD structure is usually an SCR, and when this triggers it sinks lots of current. If the chip is in a powered-up state then the SCR firing will either result in latch-up (bad -- usually means a dead chip) or a big glitch on the chip's power supply.
In microcontrollers, timing this glitch allows one to control a branch flag and thereby control program execution.
It is more complex to do this remotely but it is absolutely possible.
@Tom T: More than that. The 'little windmill' that drops out of an airliner is also a hydraulic pump that gives the pilot the power needed to operate things like the flaps, ailerons, and rudder... See the 'Gimli Glider' for an example of when it worked (767 double engine failure. They ran it out of gas at 30,000 feet).
@DCfusor: Thank you! The break/brake thing was driving me nuts too.
And I was made to do funny things in my car before Dad would let me drive it out in public. Driver education in the USA is absolutely pathetic (as it is over most of the world, afaik). It's a complicated machine that kills a lot of people -- you'd think we'd require a bit more training in the operation thereof. Heck, a forklift driver that tops out at 15mph gets more training...
For everyone, there's one more problem with nicking the brake(!) lines. Many automatics (all?) require that you tromp on the brake(!) to shift out of 'Park'. With a leaky brake line, that pedal would go straight to the floor, and any reasonable person would not put that car into any gear at all. Or shouldn't.
Another use for pumping brakes is when bleeding the line. There the 'air bubble' issue is known to exist, and by fiddling with the valves eliminated. I've done it. It's fun. The brake pedal feel is quite bizarre while you're doing it.
Back on topic: yes, car computers interact in interesting and difficult-to-predict ways. Some years ago there was an example of a Cadillac, brand new, that wouldn't pass emissions testing (and illuminated the 'Check Engine' light). Otherwise it seemed fine.
They'd bring it in, hook up their diagnostic equipment, and it would work perfectly. It would pass the smog test. A couple weeks later, it'd be back, and fail the smog test, badly.
The mechanics would drive it around, and could never get it to fail. But the owner could, usually in a couple of days, and wasn't happy at all. I believe he got another Cadillac, because this one was so flaky they sent it back to the factory.
The factory tracked down what was going on. The engine computer was crashing, and going into what's called 'limp-home' mode, where it doesn't really care about emissions or economy or power, it just chucks some gas into the cylinders and fires the plugs now and then, the idea being that even if everything else has gone wrong, the car will still run. Sort of.
As it happened, in this mode the car ran quite well. The owner didn't notice any change. But it was not efficient, and polluting far more than it should. It got lousy gas mileage, but a) it's a Cadillac to start with, and b) gas mileage is notoriously difficult to measure out in the 'real world'.
The engine computer was crashing because the body computer, what ran the power windows, the power seats, the thermostat, etc. was crashing. When the body computer crashed, it sent garbage to the engine computer, which crashed it, and the car went into 'limp-home'.
After a lot of testing, during most of which the body computer worked just fine, they found it. A bad winding in the trunk-lid latching motor was sending voltage spikes down a data line. Which crashed the body computer. Which crashed the engine computer. Which caused the smog controls to become ineffective. And it would work fine (after resetting, which all their testing tools did) until someone next closed the trunk.
Closing the trunk crashed the engine. This was some years ago. I imagine they've gotten even more interconnected since.
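The cascade in that story -- line noise crashing one computer, which crashes the next -- is what defensive message framing is meant to contain: a receiver that verifies a checksum on every message can drop the garbage instead of acting on it. A minimal sketch (a simple additive checksum stands in for the CRCs real automotive buses use):

```python
def checksum(payload: bytes) -> int:
    """Simple 8-bit additive checksum. Illustrative only: real
    buses use CRCs, which catch far more error patterns."""
    return sum(payload) & 0xFF

def make_message(payload: bytes) -> bytes:
    """Append the checksum byte to the payload before sending."""
    return payload + bytes([checksum(payload)])

def receive(message: bytes):
    """Verify before acting: corrupted messages are dropped,
    not passed along to crash the next computer downstream."""
    payload, check = message[:-1], message[-1]
    if checksum(payload) != check:
        return None  # garbage from a noisy line: ignore it
    return payload

good = make_message(b"\x10\x20")
assert receive(good) == b"\x10\x20"
corrupted = bytes([good[0] ^ 0xFF]) + good[1:]  # a spike-induced bit flip
assert receive(corrupted) is None
```

Validation alone doesn't fix the bad trunk-latch motor, but it converts "garbage on the wire crashes the engine computer" into "a message gets silently dropped", which is a much better failure mode.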
Well, I too have suspected that these people panicked and didn't try many ways to keep their cars from crashing for fear of breaking their own cars. I mean, it's a fact that the transmission on most cars works in spite of failures of chips in other areas. If shifting into neutral and grinding the brakes didn't work, then shifting the auto transmission into Reverse certainly would stop the car. I think I'm the only person among my friends who did that without the transmission literally falling out of the car. It just came to a stop almost instantly, killed the power and took some trial-and-error to restart. Did any Toyota drivers really fight to shift the transmission out of drive? Or kill the power in mid-drive (if possible)? Or did they avoid these things to keep the damage from being "their" fault?
Admittedly, though, I'm not putting all the blame on them. Others have mentioned a key point: cars are more digitized than ever. A car's computer system is basically a distributed intelligence system with many dependencies and many components at a high risk of failure. With tight coupling, failure of one component can propagate widely to others. Then, we have researchers in Bruce's article showing how easy it is to get them to accept malicious input. The conclusion is that they are an accident waiting to happen and we must critically consider the claims of these drivers. AFAIK, the failure rate should be higher than we've seen. I think many failures aren't being reported because accidents didn't result.
I know my old 1997 Toyota Camry glitched once or twice while parking. I barely tapped the gas pedal and the thing shot to 3000rpm, lunging forward towards a woman at the store in front of me. I was surprised but still tried to swerve and brake. I fortunately stopped the car just before it got stuck on the concrete piece in the parking space. I *know* I didn't slam that gas pedal and I'm pretty sure that a tap of the pedal causing a surge of speed is outside specs. What would you call that, DC? I'd call it a fatal system error. Is it digital? Mechanical? Who knows, but I know a Camry can do that. So, it's personally hard for me to dismiss these people's claims.
Thanks for the reference to the "Gimli Glider", I'd never heard of that one and the story is amazing -- a series of maintenance and refueling errors caused a 767 to run out of fuel midway between Montreal and Edmonton, and even with all engines out they managed to glide it to a safe landing. Awesome.
Here's another crazy aviation incident, a mid-air collision over Brazil: http://www.vanityfair.com/magazine/2009/01/... . What used to be an extremely improbable event (two planes passing close enough to actually hit each other) is now a real possibility, thanks to extremely accurate GPS positioning systems. Numerous safety procedures are supposed to prevent it, but a series of air traffic controller errors led to a small corporate jet and a larger passenger jet being assigned reciprocal headings in the same corridor at the same elevation, heading towards each other with a combined airspeed of over 1,000 miles per hour, with navigation systems so precise that they actually collided at that velocity.
Both planes were equipped with a collision-avoidance system which should have prevented them from colliding, but the system on the corporate jet was not functioning. It sheared the wing right off the larger passenger plane, which crashed killing all aboard. The corporate jet survived the impact and managed an emergency landing.
Good movie plot: do this to an aeroplane. Make sure you put in a time delay so you can get off the plane before systems start to malfunction.
When working with a Tier 1 automotive supplier developing HEV battery systems, I was peripherally involved in the security assessment of the energy storage control system. The outcome was that there was no risk because the code base was completely custom and hand-coded, so malicious code was exceedingly unlikely, and each vehicle was so isolated that there would be no way to propagate malicious code. Later, when certain OEMs wanted to install their own code on the hybrid battery controller, they also required that any installed code be signed to prevent the installation of unauthorized code.
Despite the apparent safety of such code, it always seemed to me that this was more security by obscurity than actual safe coding practices. I think this paper bears out that concern. Custom code and code signing are still vulnerable to certain types of attack.
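To make the code-signing point concrete, here is a minimal sketch of what signing buys and what it doesn't: the controller refuses any image whose signature fails to verify, but signing does nothing against malicious input fed to code that is already installed. This uses an HMAC purely for illustration; real OEM schemes use asymmetric signatures (e.g. RSA or ECDSA), and the key and function names here are made up.

```python
import hashlib
import hmac

# Hypothetical symmetric key, for illustration only. A production scheme
# would use asymmetric signing so the verifying ECU holds no signing secret.
SIGNING_KEY = b"example-key-not-a-real-oem-secret"

def sign_firmware(image: bytes) -> bytes:
    """Authorized build server: produce a MAC over the firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def install_firmware(image: bytes, signature: bytes) -> bool:
    """Controller bootloader: accept the image only if the MAC verifies."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

image = b"ECU firmware blob (stand-in)"
sig = sign_firmware(image)
assert install_firmware(image, sig)                # authorized image accepted
assert not install_firmware(image + b"\x00", sig)  # tampered image rejected
```

Note that even with this check in place, a hostile message on the CAN bus never has to pass through it, which is exactly the gap the paper exploits.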
There are, for instance, anecdotes floating around the engineering departments of certain OEMs that software updates pushed through OnStar occasionally cause seemingly-unrelated glitches throughout the dozens of controllers in a vehicle. This implies, to me, that the controllers may not be sufficiently robust to random inputs.
If we treat this as a risk assessment, then we have to consider the impact. Shutting down the engine has been the worst-case scenario for a long time. Walk-homes are annoying but almost never life-threatening (though the OEMs are almost obsessive about avoiding this risk). Drive-by-wire systems add new failure effects that are much more severe. Hybrid and fully-electric vehicles add another set of severe failure effects; the euphemisms "thermal event" and "rapid tunnel expansion" were not conceived of randomly.
Unfortunately, we have two factors working against good software engineering practices. First, the risk is distributed across dozens of controllers, many of which are designed by suppliers. There is minimal integration during the design and only limited ability to evaluate the effects of interactions between the controllers except by full-vehicle testing, which other commenters have rightly pointed out is something of a crap-shoot. Second, the skills needed to analyze these systems for security risks and perform an impact analysis are rare. The embedded software engineers designing these subsystems are rarely trained in secure coding practices. Such skills are rarer still among management, who also tend to have no understanding of the complexity of software and who allocate resources and priority in an environment where time is considered paramount. Good design practices are easily subverted for fast design.
The by-product of progress :-)
How do you push in the clutch, on car with an automatic transmission?
Are you certain that on a highly automated car, the auto trans will change its configuration while in motion, in response to operation of the transmission control?
If you were at the wheel of an out-of-control vehicle with "push-button start," moving at terrifying speed, and you pushed the button several times without the engine stopping, are you confident that it would occur to you to hold the button continuously for numerous seconds (we're talking many hundreds of feet here) in order to cut the engine?
Disconcerting as it may seem, fly-by-wire aircraft (in general) DON'T have a backup mechanical connection. For a long time now, even "fly-by-tubing" (hydraulic) airliners haven't had such a backup. Witness the DC-10 that crashed in Sioux City, Iowa with no primary flight controls -- the flight crew could only control the aircraft by manipulating the throttles of the two remaining engines.
What they do have, is redundant systems, generally all of the same type. Sometimes, this consists of multiple actuators (belonging to separate systems) on a single control surface, where in the failure case of a single actuator, it is expected that the remaining actuators can overpower the one that has failed. In other cases, the control surface is split: an aileron might have 2 or 3 separate pieces, each with its own actuator and control system.
The DC-10 hydraulic controls had redundancy, but a single failure (uncontained explosion of an engine component) disabled all of the hydraulic systems at once. DC-10s were modified after that, to reduce this vulnerability.
In practice, these flight control system designs have been vindicated by an astronomically low incidence of safety incidents related to inoperative flight controls.
In my opinion, the design of automotive automation (for example, Toyota's criminally stupid push-button engine control) reflects a level of safety engineering drastically below the standard of aviation. I don't believe that the aviation system would ever have accepted such poor designs.
Since the 1960's, aviation has added numerous automatic systems that come between the pilots and the control surfaces. In each case, VERY CAREFUL CONSIDERATION has been given to every conceivable failure case. As far as I can tell, Toyota's assumption was, "these are reliable systems that will never fail."
Regarding the need for complex interactions in many separate systems to cause harm: it would be sufficient to detect a potentially difficult situation and occasionally make it more difficult.
E.g., reverse the braking action on one wheel if there's abrupt steering or braking at elevated speed, preferably at night. If the airbags deploy, erase the program. Otherwise, try again after a few days.
Many attackers may also find it sufficient to make their victim spend a lot of time at hospitals and maybe in court or even jail, without the need to actually kill him or her.
@ MarkH: Thank you for the update.
Now that you mention it, I do recall the Sioux City incident. It proves that "redundant systems, generally all of the same type" aren't truly redundant. True redundancy consists of an entirely separate factor -- as Bruce has frequently reminded us, entering a user/pass, then answering questions, isn't "two-factor" authentication, because both are based on "something you know". A smart card, appropriate USB stick, cell-phone text message, etc. are a true second factor, because they're "something you have". The attacker must have that in his possession, and not just knowledge gained from the keylogger etc., to enter the secured site.
In the case of aircraft, this would indeed require a cable connection to the control systems, as non-electronic aircraft (e.g., small private aircraft) have -- an *actual* wire (cable) between yoke/pedals and the control surfaces.
I can't trace the source ATM, but IIRC, one airliner's communication and navigation systems both failed, both the main and the "completely redundant" second set of radios and receivers. Investigation showed that contrary to FAA specifications, both sets of avionics contained chips from the same lot or production run, an absolute no-no. The whole lot proved to be defective.
Several people have mentioned the Sioux City crash.
I believe the DC10 used control cables to transfer commands from cockpit to flight controls. The problem at Sioux City was not how the commands were transferred - no amount of redundancy in that would have helped. The problem was that there was no hydraulic pressure, which was what provided the force to actually move the control surfaces.
The DC10 had 3 independent hydraulic systems. I think major control surfaces would typically use two of the three systems - so that a single hydraulic system failure would leave the plane completely flyable, and a catastrophic component failure would leave one hydraulic system unharmed.
In this case, the tail engine shredded itself and delivered shrapnel throughout the tail, which managed to sever hydraulic pipes for all three systems.
The DC10 was about on the edge of when people decide to go for quadruple redundancy in the hydraulics. The 747 is quadruply redundant, I think the 380 has two hydraulic plus two electrical systems, I think (not 100%) that the L-1011 Tristar (contemporary competitor to DC10, very comparable) had quadruple hydraulics too.
Sometime in the early 70s (I think at San Francisco) a 747 hit runway lights on take off and lost 3 of 4 hydraulic systems and made a successful emergency landing.
About 20 years ago I worked for a major automotive company designing what were some of the first ABS and engine management microcontrollers. At the time I was still relatively young, very cocky, and absolutely convinced that my hardware design was correct, my firmware coding was perfect and my communications protocols were bullet-proof. I added prioritized hardware interrupts and was convinced that this even solved all the real-time scheduling problems.
Put simply I WAS NAIVE!
Today, after many years designing secure microcontrollers and analog circuits, I can think of a hundred ways to defeat those bullet-proof designs of my youth. The trouble is, most of the attacks that I would deploy today do not involve "playing the game by the rules"; consequently my attack vectors would not even appear in any risk analysis chart.
There was a case two years ago of an aircraft (QF72, an A330) that seemed to go completely crazy just north of Exmouth, WA, and a few weeks later another flight (QF71) south of Exmouth exhibited similar problems. Anyone with Google Maps can see the huge antenna farm at the end of the peninsula; coincidence, maybe.
Regarding being able to stop a car, no matter what: I wouldn't be so confident about this as some of the posters here seem to be. Ignoring all the technical details, the thought process alone is quite involved.
First of all, one has to realize that what is happening is an uncommanded action. E.g., that one isn't simply pressing the wrong pedal, or making some other mistake. This realization shouldn't take much time, but it may not be entirely trivial, particularly if one isn't used to the car. (Common example: foreigner used to a "stick" gets a rental car in the US. When driving off, instinctively prepares for the gear change and presses the "clutch" pedal ...)
The next step may be to understand where the problem is, with the objective of eliminating the root cause. I.e., whether it's an operator oversight (item blocking the pedal, or similar) or whether the vehicle is indeed malfunctioning. Some time may be spent fumbling with one's foot around the pedals to remove the assumed foreign object. The normal procedure, pull over and have a good long look at the suspected problem area, isn't available.
Then it's a big leap to realize that one can't eliminate the root cause and thus has to step back and consider the bigger picture. I.e., instead of trying to regain acceleration control, the objective now becomes to safely stop the car at all. Apparently, the driver on the phone didn't reach this point.
There's also the complication that one can't be sure whether the problem will disappear on its own. Many problems do, some don't. Again, this can take some time.
Without an understanding of what's really going on, it's also difficult to assess the effect of corrective actions. E.g., a supposedly benign input could upset the faulty system even more.
It may also be difficult to assess the full effect of the actions considered, even if the malfunction does not affect them. E.g., are we certain "turning off" the car when driving won't have side-effects not expected by driver or manufacturer, and if it does, could the action be reversed?
When considering more drastic measures that can be expected to cause significant damage (force an abnormal gear change, perform a "controlled crash", etc.), there's also the element of doubt that one may misunderstand a perfectly harmless situation. I.e., what's the appropriate level of desperation before we decide to seriously damage our brand-new car?
There's also often a general unwillingness to mentally accept that something out of the ordinary is happening. And people who make this switch, sometimes become completely disoriented. Again, the driver on the phone for a long time sounds like one such case. (There's a similarity to the HF3378 crash, where the pilot called the airline instead of taking care of the evacuation. The full report - alas only available in German - is quite a read.)
Airplane pilots are taught a lot of technical details about their planes and they are given checklists to help analyze faults in the system. They are carefully trained to react to problem situations and to perform the appropriate decision-making. Every once in a while, a pilot gets to experience a novel situation. Often enough, despite all their preparation, this results in tragedy.
I don't think it's reasonable to hold drivers experiencing such a one-in-ten-millions situation to any higher standards.
To return to the topic, even a modification that produces a harmless distraction could cause enough confusion to lead to an accident, particularly when this is triggered only when other conditions are met. E.g., wait until the lights are on, the speed is high, and frequent braking suggests dense traffic or a difficult road. Now activate the windscreen wipers for a bit and let the driver wonder how the lever got moved in the first place and then why pushing it back to neutral produces another swipe. Still driving? Wait five minutes, then spray some cleaning fluid but don't wipe. After that, half of the driver's attention will be on the wipers for a good while, even if nothing else happens. Now, can we make something in the drivetrain rattle for a bit?
Most people can't pay attention to more than two things at the same time. Our driver is occupied with possessed wipers and a poltergeist in the gears. Oops, where did that car come from...?
@Andrew Gumbrell: You've got it backward. Microprocessors (and satellite) will be MANDATED by the government so that they will be the final arbiters on whether or not your car starts/goes. If that makes you subject to hacking or death, well your life is a small price to pay so that the rest of us can feel warm and snug in our all-encompassing government security blanket protecting us from everything especially ourselves.
You remind me of an event from my earlier work life.
Put simply, I had designed this "bullet proof" system that got through all the testing that people could throw at it. I was just about to sign off on getting a production run of 100,000 parts, but it was late Friday so I thought 50d1t I'm off home.
Well, I took one of my prototypes with me and set it up on my work desk at home just to impress my then-current squeeze, and guess what: it did not work...
It turned out it did not work next to the computer monitor when it was turned on. After some scratching around I eventually dragged an oscilloscope down from the loft, wound a half dozen turns of wire around a toilet roll, and had a look.
It turned out that the monitor's scan frequency was the same as my "sub audible" Manchester-encoded "confidence" communications stream, and that the field generated by the monitor was swamping the pre-data-slicer amp and filter chip...
The solution was an overnight software hack and a couple of extra poles in the filter...
Thankfully my boss was away and by the time he got to hear about it it was all done and dusted and on the production line. It went on to get a UK Which Magazine No1 award which pleased the customer and was the only cordless phone they ever considered reliable enough to put on their rental market... Oh and Bruce knows the customer as "technically" he works for them (BT).
One day, Clive will tell us how the wheel really was invented ... :-)
Like many of the other commenters here I am suspicious of the original article's results. How did they disable the brakes electronically? My 2007 model (a Chrysler) vehicle has hydraulic brakes. The ABS can be disabled and the brakes work fine. My wife's 2003 model (a Mitsubishi) is the same.
As far as driver ability to use the brakes, that is a tough issue. I, like DCFocus and some others, was used to taking a car to a big empty lot and putting it through some skids and brake tests, etc. I started my kids out the same way. However, ABS was just getting common on lower end and used cars at the time. High perf driving schools recommend standing on the brake pedal of an ABS equipped car when in an emergency. No modulation attempts. Just stand on it. I live in the North where wheel lockups during normal driving are common on the ice. What to teach my kids. I taught them both, of course. At the time my car did not have ABS but my wife's car did. But the writing was on the wall as far as all vehicles were likely to have ABS soon. So for emergency stops do I drill them on ABS techniques, or on old outdated auto techniques?
As for hacking the auto system in general, yea I think it is a concern, but remote hacking is still not at all easy, and otherwise they need access. Keep the vehicle locked and if possible keep it visible.
@the original bob, et al. (those who sound gloomy about getting stuck with automated cars):
There is a big picture here. At least in many of the wealthiest countries, road safety has made really impressive progress. Probably the biggest single contributor has been the design of the vehicles themselves.
When I was a lad (a half-century or so back), the vast majority of autos had:
* miserable handling
* terrible stability in extreme maneuvering
* rigid frames that transmitted crash loads to the passenger compartment
* no occupant restraints
* hard and sharp surfaces in likely zones of passenger impact with the interior
* glass that often resulted in horrific lacerations
* primitive low-performance tires
A great many also had failure-prone brake systems whose sustained power level was not adequate to the engines.
Today, I feel physically frightened to sit inside an old car -- unless it is safely off the road. Those machines were effective meat-grinders for living people.
As angry as I am about the engineering failures at Toyota, today's cars (even with their automation defects) have very respectable safety outcomes.
Bruce has written and blogged a lot about how our minds are poorly adapted to comparing risks in civilized life. We hugely magnify risks that involve the perception of being 'out of control' (uncommanded throttle opening, passenger on a problem flight, etc.), and grossly downplay risks where we perceive ourselves as being 'in control' (our own distracted, negligent, slovenly and unprofessional driving).
And if you're a driver, and think that my critique doesn't apply to you, take this challenge: put a video camera or two in your car, record yourself driving for an extended interval, and find an objective volunteer to review the video and assess any lapses of attention to the road. You might be surprised ;)
I've done the "using a brake as clutch" thing, and I'm fortunate nobody was hurt.
As a pedant, I have to point out that the Gimli Glider didn't LITERALLY run out of gas, because jet fuel isn't gasoline. And that case is interesting because multiple failures (fuel sensors or the fuel management system, I can't recall) and operator error (ISTR an incorrect conversion between gallons and kilograms of fuel) were the cause.
On airliner flight control redundancy --
I didn't properly research this, but as a student of aviation safety I think I can get it about right from memory:
In the 40 year period 1970 - 2009, I am aware of two fatal accidents caused by airliner primary flight controls becoming unresponsive. Both were DC-10s. In the accident already mentioned in this thread, an engine failure disabled all of the hydraulic systems. In an earlier accident in France, the opening of a cargo hatch at altitude (explosive decompression) buckled the cabin floor, impairing the motion of the control cables.
There is a third accident involving loss of response to a secondary flight control, when an Alaska Airlines MD-83 horizontal stabilizer (pitch trim) control mechanically broke. I am not aware whether the MD-83 had any redundancy in the control of the jackscrew, but given that there was only one jackscrew, and that its threads sheared, redundancy in the control of the jackscrew's rotation would not have prevented the accident.
During this 40 year period, there were (I believe) between 500 and 600 million departures by airliners in the West (these statistics don't include Communist bloc countries). If I have it correct that these three were the only fatal accidents due to failure of flight control systems (as opposed to damage or loss of control surfaces and/or stabilizers), then we have about one fatal accident per 150 million to 200 million departures.
What is the threshold, at which we can reasonably conclude the airliner flight control system redundancy is inadequate? Looked at another way, I estimate that these make up less than 1 percent of fatal airliner accidents. Given finite investment of resources, how much should be allocated to addressing this cause?
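The quoted rate can be recomputed directly from the comment's own figures (the low end actually works out closer to one accident per 167 million departures than 150 million):

```python
# Rough check of the accident-rate arithmetic, using the comment's figures.
accidents = 3                  # flight-control-failure accidents, 1970-2009
departures = (500e6, 600e6)    # estimated Western airliner departures

for d in departures:
    # prints one rate per estimate: ~167 and 200 million departures
    print(f"one accident per {d / accidents / 1e6:.0f} million departures")
```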
Let me have a shot at explaining one of the reasons why the brakes may not function when the accelerator effectively jams on.
1) Let's assume that the occupant was "riding" the brakes for some time, pushing them lightly, trying to slow the car to the desired speed (not trying to really STOP the car).
2) The whole brake assembly can get very hot under these circumstances (not just the disks but the calipers and axle and all connected parts).
3) they remove their foot from the brake (check if something is blocking the pedal or whatever)
4) They reapply the brake
With a sequence like the above the brake fluid can actually boil, or more correctly the absorbed water can boil out of the brake fluid. Unfortunately brake fluid is hygroscopic (so over time it absorbs water). Once a gas bubble exists in the brake line it will be impossible to achieve the normal stopping power.
BTW: I'm not against drive-by-wire cars; as a matter of fact I'm very much a fan of the technology. I think there is still so much that could be achieved with automated distance-maintaining systems, GPS-data-assisted driving (things like the car knowing the maximum speed for the road and conditions), and various automated safety warning systems (maybe built into the road).
The truth is that the failure incidence of the electronics is still much much lower than the failure rate of the drivers.
"one day, Clive will tell us how the wheel really was invented"
Anything to oblige, but first I need to know which one?
My favourite one has three sides 8)
Hacking the ABS controller could very effectively disable the brakes, as ABS works by _interrupting_ the brake pressure you apply in order to not lock the brakes.
A good controller should be separated into 2 components- 1 component which does the calculations to send commands to the 2nd component which actually controls the valves. The 2nd component shouldn't be accessible from the CAN bus, and should be programmed to ignore any instructions to actuate the valves that would cause a loss of braking for an extended period of time. At some point, the controller would just have to accept that the wheels could potentially skid- making the system less effective on large stretches of ice or gravel.
The question is whether the added security against completely disabling it is worth the loss in effectiveness under normal operating conditions. Given that hacking the system is such an edge case, I don't see much you can do about it- you'd much rather have the system work as designed when not compromised.
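The two-component split described above can be sketched in a few lines: the valve-driver side enforces a cap on how long pressure may be released, no matter what the (possibly compromised) CAN-facing calculation unit asks for. The tick limit and names here are illustrative, not taken from any real ABS controller.

```python
# Sketch of the "second component": a valve driver that refuses commands
# which would release brake pressure for too long, regardless of what
# arrives from the CAN-facing side. Purely illustrative values and names.
MAX_RELEASE_TICKS = 5  # hypothetical cap, in control-loop ticks

class ValveDriver:
    def __init__(self) -> None:
        self.release_ticks = 0

    def command(self, release_pressure: bool) -> bool:
        """Return whether brake pressure is actually released this tick."""
        if release_pressure:
            self.release_ticks += 1
            if self.release_ticks > MAX_RELEASE_TICKS:
                # Sustained release would amount to disabled brakes:
                # fail safe and restore pressure (the wheel may now lock,
                # trading ABS action for raw braking).
                return False
            return True
        self.release_ticks = 0  # normal braking resets the counter
        return False

driver = ValveDriver()
applied = [driver.command(True) for _ in range(10)]
# the first five release commands are honored, the rest are overridden
```

This is exactly the trade-off named above: past the cap, the driver gives up anti-lock behavior rather than give up the brakes.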
@Nick S. :
I didn't find detailed diagrams for ABS systems, but I'm inclined to believe that the people who design them are smarter than Toyota automation engineers.
The valves are designed to momentarily reduce (not eliminate!) brake hydraulic pressure. I suppose they can do this by allowing fluid to move into a small (perhaps compliant) cavity. I am sure that they don't cut the brake off from the pressurized brake line!
If a modulator valve were to stick in the open (relief) position, a little more pedal displacement would make up the difference in a small fraction of a second, and the brake would operate effectively -- of course, without anti-lock protection.
This is a fine example of fail-safe design, that sadly was not followed by system engineers on engine throttle/stop controls. Perhaps mechanical engineers don't have such a childish faith in the infallibility of their systems, as electronic and (especially) software engineers sometimes exhibit.
P.S. I have worked as a software engineer for 30 years. I would support criminal prosecution for engineers who failed to provide fail-safe operation for systems that can significantly hazard life or health, when such systems can be shown to have caused casualties.
"I would support criminal prosecution..."
Sadly it would not work for a whole heap of reasons.
The people that need to be doing jail time are the directors and others at senior levels; it is usually their short-sighted policy that causes the issues.
For instance, how are you going to jail a software engineer in India or China where the bean counters have outsourced the work?
"but it was late Friday so I thought 50d1t I'm off home."
Clive, I'm sure I've felt that way, too, but please translate "50d1t" (preferably into English) so I can be certain.
It is a UK statement meaning "to have had enough" that you are not likely to find in the ordinary Oxford English Dictionary ;)
For the sake of people's "naughty filters" I've changed the "S" into a five, the "O" into a zero, dropped the space after the "D", and changed the "i" of "it" into a one.
It is best said with real emphasis and deep feeling on the "SO", and run the "DIT" together with a real degree of venom and a sharp cut-off.
It is an expression you will hear said with gusto by practical engineering types along with the short version of "Oh gosh I've just hit my thumb with the hammer" (bug-ger-it) and in more recent times (Due to Scrapheap challenge) when an engineer is proud of a quick hack "Proper job".
Couple of nice comments, but one small quibble. You suggest that voting by 3 machines will triple the failure rate. Actually that's not right.
Let's say that the unavailability of each subsystem is r.
Firstly, if failures are independent, then the failure rate of the *subsystems* will be tripled (actually 3r - 3r^2 + r^3, but for r << 1 that is very nearly 3r). The voted *system*, however, only fails when at least two subsystems fail together, which has probability of roughly 3r^2 -- a massive improvement for small r.
This analysis is only true if failures in the subsystems are independent, which is a pretty big "IF." In NASA's case, they went to an awful lot of trouble and expense to ensure that failures would be as nearly independent as possible: different hardware for the voting machines, different programs written by different coders who didn't interact whilst writing it, etc.. I suspect Toyota might not go to so much trouble.
On the other hand, if failures are *not* independent, this massive improvement in availability does not occur -- but neither does the subsystem failure rate triple: because subsystem failures are no longer independent! In fact, if each subsystem has the same unavailability, r, and their failures are perfectly correlated, the system's unavailability is simply r: voting neither helps nor hurts.
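The independent-failure arithmetic is easy to check numerically; this sketch assumes three identical, independently failing subsystems behind a 2-out-of-3 vote.

```python
# 2-out-of-3 voting with independent, identically failing subsystems.
def p_any_subsystem_down(r: float) -> float:
    """Chance at least one of three subsystems is down: 3r - 3r^2 + r^3."""
    return 1 - (1 - r) ** 3

def p_voted_system_down(r: float) -> float:
    """Chance the vote is defeated (two or more down): 3r^2 - 2r^3."""
    return 3 * r**2 * (1 - r) + r**3

r = 1e-4
print(p_any_subsystem_down(r))  # ~3e-4: subsystem failures roughly triple
print(p_voted_system_down(r))   # ~3e-8: but the voted output fails far less
```

For small r the subsystem failure rate indeed roughly triples while the system failure rate drops by orders of magnitude, which is the trade the comment describes.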
I believe, applying the usual "Leet Speak" filter,
that 50d1t would be "sod it," which I think is one of those quaint English interjections that they think are dirty, but really sound like something your aunt would say when she drops her knitting.
As for the original paper, I didn't get the idea they were suggesting you needed physical access. Since they mentioned that modern vehicles are getting cellular access hooked directly onto the system bus, all you have to do is find a vulnerability you can exploit in that.
So maybe some GM engineers sabotaged Toyota by using these techniques to cause some of Toyota's cars to accelerate unexpectedly? Conspiracy theorists unite!
Unintended acceleration was a problem long before the computerization of cars.
See "Audi unintended acceleration"
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.