Automobile Virus

SC Magazine is reporting on a virus that infects Lexus cars:

Lexus cars may be vulnerable to viruses that infect them via mobile phones. Landcruiser 100 models LX470 and LS430 have been discovered with infected operating systems that transfer within a range of 15 feet.

It seems that no one has done this yet, and the story is based on speculation that a cell phone can transfer a virus to the Lexus using Bluetooth. But it’s only a matter of time before something like this actually works.

Posted on February 2, 2005 at 8:00 AM • 16 Comments


Arik February 2, 2005 8:57 AM

There are a lot of mitigating factors against a possible virus infection. I think that in order to infect a car you would have to exploit a vulnerability in its operating system or an existing application, because a car would not normally run object code downloaded over Bluetooth.

An automated worm targeting a common OS would be more likely, IMHO

Israel Torres February 2, 2005 10:25 AM

The range noted must be for a stock omni-directional setup. As we have learned with past BT vulns and exploits, one could easily extend the infecting range from feet to miles.

We’ve all learned that if you give an entity enough time, it will find a way.

Israel Torres

kurt February 2, 2005 10:29 AM

viruses don’t need vulnerabilities to exploit in order to infect the platform – virus infectability is inherent to all general purpose computing platforms, and while it remains to be seen if there are any cars yet whose on-board computers are general purpose, mr. schneier is right that it’s only a matter of time…

Fred Page February 2, 2005 12:09 PM

RE: Kurt’s comment:

At a basic level, for any code (such as a virus) to execute, it must place itself (or something that resolves to that code), or be placed, into something the processor can access, and then get the processor to actually execute that code. The execution of code that the intended user did not intend to execute is what is labeled a “vulnerability” (occasionally the term is applied to the placing of the code, if that placement happened through an unintentional mechanism).
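Fred's placement-plus-execution distinction can be sketched with a minimal (entirely hypothetical) Python illustration: an interpreter that evaluates untrusted input collapses both steps into one call, while treating input strictly as data removes the execution step.

```python
# Hypothetical sketch of "placement vs. execution". The function names
# are illustrative only and do not come from any real system.

def unsafe_parse(field):
    # BUG: evaluating attacker-controlled text executes it as code --
    # the input is both "placed" and "executed" in one step.
    return eval(field)

def safe_parse(field):
    # Treating input strictly as data: it is stored and converted,
    # but never handed to the processor as instructions.
    return int(field)

print(unsafe_parse("2 + 2"))  # the string was executed, not just stored
print(safe_parse("4"))        # the string was handled as data only
```

The unintended-execution path in `unsafe_parse` is exactly the kind of behaviour the comment labels a “vulnerability.”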

I am not certain why you claim that a computer need be general purpose to load and execute a virus. Although I haven’t had the displeasure of experiencing it, I’ve programmed many special-purpose processors that could theoretically load and execute arbitrary code given to them by a dedicated attacker. Boot loaders, for example, enable this behaviour if the boot-loader connection is normally up while the device is running.

Furthermore, I don’t understand why you think that a general purpose computing platform is automatically vulnerable to viruses; if one is, for example, using an SELinux kernel properly configured, with applications that don’t introduce additional vulnerabilities, it would be very difficult (if possible at all) for that general purpose computing platform to execute a virus.

Clive Robinson February 2, 2005 1:09 PM

Considering that “communications security” is more than 60 years old as a concept and practice, it still surprises me that people come up with systems/protocols so bad that this sort of thing is possible.

As for virus attacks and embedded systems, well… Some (mainly older) systems are immune: they are ROM-based, with insufficient RAM/registers for executable code to be written. Until recently this would almost certainly have been true for all automobile-based systems; however, some now use flash ROMs and even smart/memory cards.

I guess a consequence of cheaper memory, and of short software development cycles requiring upgradeability as standard, is that we will get people developing attacks in exactly the same way as for motherboards in PCs.

I guess it will soon be possible for my fridge to be made to think it’s a microwave or coffee machine, with consequences that will amuse a 7-year-old.

Fred Page February 2, 2005 1:55 PM

Good point, Clive. Even the processors I’ve programmed that are used essentially as single-device drivers have had over 10 KB of RAM, so I wasn’t thinking of devices where (for example) only the registers can vary.

Thomas Sprinkmeier February 2, 2005 5:22 PM

I wrote about this possibility in a security paper 🙂

To make things easier, I heard that some of the phone companies are standardising on a common mobile Java infrastructure: no longer will viruses be limited to just one make of mobile phone!

I can’t wait for car computers to do the same thing, then it’s on for young and old! Driving down the road spreading malware!

Good thing I drive an old bomb 🙂

John Panzer February 2, 2005 11:21 PM

Reminds me of the Zelazny short story about AI-equipped cars gone rogue and escaped into packs roaming the Southwest: “The Last of the Wild Ones”. :^)

Roy Owens February 3, 2005 12:21 PM

Worry about Schneier’s ‘class breaks’. If a car thief can call his pick of any of a new model of a high end car and make it shut its engine off, all he needs for carjacking is a threatening demeanor. Worse yet, if he can call the police cars behind him and tell them to shut down, he has an excellent chance of escaping his pursuers.

kurt February 3, 2005 1:01 PM

RE: the response to my comment

a) i never said special purpose computers were not vulnerable to viruses, only that virus infectability was inherent to all general purpose computers…
b) the proof that all general purpose computers are able to support virus infection was written back in the mid 80’s by fred cohen during his seminal work on the subject of computer viruses… in fact his proof involved an even broader class of systems of which general purpose computers are just a subset…

Fred Page February 3, 2005 3:24 PM

Kurt, are you referring to Fred Cohen’s 1984 paper on computer viruses?

Quote from his abstract: “Analysis shows that the only systems with potential for protection from a viral attack are systems with limited transitivity and limited sharing, systems with no sharing, and systems without general interpretation of information (Turing capability).”

This seems to imply that one could have a general-purpose computer protected from viral attack through limited transitivity and sharing, or no sharing.

I believe that an example of the latter is the average plastic toy with sounds one can buy at a toy store. Typically, it contains no RAM and very few registers – simply put, not enough to actually hold a program of any sort, virus or not. In most cases, one would have to replace the ROM to get anything other than the designated program on there.

I believe my last post was an attempted description of the former approach.

kurt February 3, 2005 5:41 PM

as a matter of fact, it does not imply that there are general purpose computers with those properties, it implies that there are systems with those properties… it’s hard to imagine that the set of such systems would intersect the set of general purpose computers…

a general purpose computer is one that can be programmed to do many different kinds of tasks – the aforementioned limitations are not conducive to retaining the ability to be programmed at all…

and as far as i know the actual proof was written in ’85, though it doesn’t seem unreasonable for him to discuss something as an observation before he came up with a mathematical proof for it…

Paul Brackel February 7, 2005 2:40 AM

How many people know the intricacies of Lexus’ programmed devices? Of those, how many understand the vulnerabilities? Of those, how many would be prepared to spend a lot of time attacking the Lexus programmed environment? And if they were, how would they know which people own a Lexus (hardly a pervasive car), and what their cell phone numbers are, in order to transmit the ‘malicious code’?

This appears to me to be a horror scenario similar to the ones that were created during the Y2K years…

Clive Robinson February 8, 2005 3:08 AM

Paul, the issue is not what percentage of the population can or cannot do it.

If there is one thing the past few years have shown us, it is that if something is possible, it will be found and then exploited (DeCSS, for instance).

As examples, where “inside knowledge” of the embedded system/devices was not known by the attackers:

Sky Smart cards (Sat Pay TV)
Early car remote electronic locks
Most Cable Settop boxes

The argument for all of these was the same as the one you have given (i.e. security through obscurity of design); they were, however, all cracked. As there is money to be made (as in the cases above), I suspect it is going to happen sooner rather than later.

All manufacturers make a calculation of risk based on their understanding of it, and they make financial judgments based on that calculation that affect the system design. The two main problems with these “new systems” are:

There is “apparently” no history from which to effectively judge risk at the design stage.

The system’s complexity leaves too many unknowns, which become new risks.

Provided there is no adverse publicity, or the direct financial loss is small, it is unlikely that an insecure system will be improved (it is, effectively, “secure enough for the job” from the manufacturer’s position).

On the historical side: in the first case, Sky went through a prolonged cat-and-mouse game that involved asking the chip manufacturer (Motorola) to produce a custom chip design. When this was defeated, Sky ended up sending malware that identified (some of) the fake cards and effectively killed the unit.

In the second case, poor design of the systems (i.e. insufficient code bits) enabled people to develop mass unlocking devices that transmitted all the possible unlock codes. The criminal just stood on a street corner, pressed the button on the device in his pocket, and waited for a few minutes. Any car that beeped and flashed its lights was now probably unlocked; the thief just walked up and tried the door.
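The “insufficient bits” attack described above amounts to exhaustively transmitting every possible fixed code. A minimal, hypothetical sketch (the key size and function names are illustrative, not taken from any real remote-lock system):

```python
# Hypothetical brute force of a short fixed-code remote lock.
# With only a few bits of code space, trying every value is trivial.

def try_all_codes(bits, unlock):
    """Transmit every possible fixed code; return the first that works."""
    for code in range(2 ** bits):
        if unlock(code):
            return code
    return None

# A 16-bit fixed code has only 65,536 possibilities -- a searchable space.
# SECRET stands in for the code burned into one particular car's receiver.
SECRET = 0x2A5C
found = try_all_codes(16, lambda c: c == SECRET)
print(hex(found))
```

Rolling-code remotes defeat exactly this attack by making each transmitted code valid only once.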

The history of the last is a good example of designed security, where the companies used (for sensible reasons at the time) low-cost, low-complexity equipment that was relatively easy to install, service, and maintain.

They traded off the known risk of losses on some premium services against the ongoing service support costs. They figured that since they controlled the service going into a house, they could pull the plug at any time. Therefore the user had to pay at least a minimum service connection charge (the cake); the actual loss from a few people not paying for a premium service (the icing) was marginal to their operating costs. Basically, they got to eat their cake, but it was not quite as sweet as it could have been.

They figured that to make money a criminal would have to advertise and leave a paper trail that they could follow back to put the criminal out of business (as happened in some early cases). This was the same view the software industry held for a number of years with regard to dongles, etc. However, in both cases they did not factor in the Internet as a channel for anonymously sending the information to anybody who wanted it. Nor did they factor in the criminals who would practically exploit that information to get money with a very minimal paper trail, often from a different country with a more “liberal” view of such crime.

Rick February 11, 2005 2:49 PM

After Bruce’s comments I’ve noticed articles popping up all over the technology news about cars possibly becoming infected by viruses. I have also seen how other household objects (e.g. toasters) might become infected as well.

I know some people think that we shouldn’t worry, and that those household systems wouldn’t really be affected to destructive levels.

But I wonder: so what? What if the virus wasn’t supposed to destroy the toaster? What if the toaster was a “vector” meant to carry the virus to, say, a Bluetooth-enabled phone, and then on to somebody’s computer system at work?

To me it would mimic nature. Some animals carry viruses with no known effects or visible symptoms.

Dave February 16, 2005 7:20 AM

I used to work in a research and development lab in Europe for a big US-European car manufacturer. We were working on, among other things, Internet in cars. Some of the reasons the car manufacturer wanted Internet in cars were to be able to do remote diagnostics, and to do remote software upgrades of the car software without having to bring the cars into workshops.

The worst security case we could figure out was this: in modern cars, engine control and brakes (ABS) are computer controlled. Thus a hacker could figure out how to hack the engine and the brakes (for instance by buying or stealing a car and reading the ROMs, etc.). Then he could write a virus or script that automatically hacks, say, 10,000 cars over their Internet connections. Then at a given date and time all those cars would accelerate for some seconds, then lock the brakes on the two left wheels, sending those speeding cars spinning into the opposite-lane traffic…

So we designed a security system for those cars involving what today is called “Trusted Computing” (signed code, VPN links, firewalls, IDS, and a lot more). But as we all know, no system is foolproof, and I don’t know if even the rather elaborate system we designed for those cars will actually be implemented. Cars with Internet connections will probably be on the market in about 2005–2007, if they are not already. (I haven’t checked the market lately and don’t work with this anymore.) Needless to say, most of us who worked on it preferred old “mechanical” cars.
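The “signed code” idea Dave mentions means an ECU accepts a firmware image only if its signature verifies against a key it already trusts. A hedged sketch of the concept, using a symmetric MAC as a stand-in for the real public-key scheme such a system would use (all names and keys here are invented for illustration):

```python
# Hypothetical sketch of signed-firmware acceptance. A real car ECU
# would verify an asymmetric signature against a public key in ROM;
# HMAC-SHA256 stands in for that here to keep the example self-contained.
import hashlib
import hmac

KEY = b"manufacturer-secret"  # illustrative; in practice not shared with the device

def sign(firmware: bytes) -> bytes:
    """Signing step performed by the manufacturer's release process."""
    return hmac.new(KEY, firmware, hashlib.sha256).digest()

def install(firmware: bytes, sig: bytes) -> bool:
    """ECU side: accept the update only if the signature verifies."""
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(sign(firmware), sig)

fw = b"engine-map-v2"
print(install(fw, sign(fw)))            # legitimate, signed update
print(install(b"malware", sign(fw)))    # tampered image is rejected
```

The point of the design is that over-the-air upgradeability and resistance to the virus scenario above are not mutually exclusive, provided the verification key cannot itself be updated over the same channel.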
