Entries Tagged "cars"

Identifying People from their Driving Patterns

People can be identified from their “driver fingerprint”:

…a group of researchers from the University of Washington and the University of California at San Diego found that they could “fingerprint” drivers based only on data they collected from the internal computer network of the vehicle their test subjects were driving, what’s known as a car’s CAN bus. In fact, they found that the data collected from a car’s brake pedal alone could let them correctly distinguish the driver out of 15 individuals about nine times out of ten, after just 15 minutes of driving. With 90 minutes of driving data or monitoring more car components, they could pick out the correct driver fully 100 percent of the time.

The paper: “Automobile Driver Fingerprinting,” by Miro Enev, Alex Takakuwa, Karl Koscher, and Tadayoshi Kohno.

Abstract: Today’s automobiles leverage powerful sensors and embedded computers to optimize efficiency, safety, and driver engagement. However, the complexity of possible inferences using in-car sensor data is not well understood. While we do not know of attempts by automotive manufacturers or makers of after-market components (like insurance dongles) to violate privacy, a key question we ask is: could they (or their collection and later accidental leaks of data) violate a driver’s privacy? In the present study, we experimentally investigate the potential to identify individuals using sensor data snippets of their natural driving behavior. More specifically, we record the in-vehicle sensor data on the controller area network (CAN) of a typical modern vehicle (popular 2009 sedan) as each of 15 participants (a) performed a series of maneuvers in an isolated parking lot, and (b) drove the vehicle in traffic along a defined ~50 mile loop through the Seattle metropolitan area. We then split the data into training and testing sets, train an ensemble of classifiers, and evaluate identification accuracy of test data queries by looking at the highest voted candidate when considering all possible one-vs-one comparisons. Our results indicate that, at least among small sets, drivers are indeed distinguishable using only in-car sensors. In particular, we find that it is possible to differentiate our 15 drivers with 100% accuracy when training with all of the available sensors using 90% of driving data from each person. Furthermore, it is possible to reach high identification rates using less than 8 minutes of training data. When more training data is available, it is possible to reach very high identification rates using only a single sensor (e.g., the brake pedal). As an extension, we also demonstrate the feasibility of performing driver identification across multiple days of data collection.
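
The classification pipeline the abstract describes is conceptually simple: compute features over fixed-length windows of a CAN signal, train classifiers on every pair of drivers, and attribute a test drive to whichever driver wins the most window votes. Here is a minimal sketch of that idea in Python. It is not the authors’ code; the paper uses an ensemble of classifier types and a much richer feature set, and the window length and features below are illustrative assumptions.

```python
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC


def window_features(signal, window=600):
    """Split a 1-D sensor trace into fixed-length windows and compute
    simple summary statistics for each window."""
    n = len(signal) // window
    chunks = np.asarray(signal[: n * window]).reshape(n, window)
    return np.column_stack([
        chunks.mean(axis=1),                           # average pedal position
        chunks.std(axis=1),                            # variability
        np.abs(np.diff(chunks, axis=1)).mean(axis=1),  # "roughness" of pedal use
        chunks.max(axis=1),                            # hardest braking
    ])


def train_identifier(traces):
    """traces: dict mapping driver_id -> 1-D array of brake-pedal samples
    recorded from the CAN bus during that driver's training drive."""
    X_parts, y_parts = [], []
    for driver_id, signal in traces.items():
        feats = window_features(signal)
        X_parts.append(feats)
        y_parts.append(np.full(len(feats), driver_id))
    clf = OneVsOneClassifier(SVC(kernel="rbf", gamma="scale"))
    return clf.fit(np.vstack(X_parts), np.concatenate(y_parts))


def identify(clf, test_signal):
    """Classify every window of an unlabeled drive and return the driver
    who receives the most votes."""
    votes = clf.predict(window_features(test_signal))
    return int(np.bincount(votes).argmax())
```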

Posted on May 30, 2016 at 10:10 AM

Smartphone Forensics to Detect Distraction

The company Cellebrite is developing a portable forensics device that would determine whether a smartphone was in use at a particular time. The idea is to test drivers’ phones after accidents:

Under the first-of-its-kind legislation proposed in New York, drivers involved in accidents would have to submit their phone to roadside testing from a textalyzer to determine whether the driver was using a mobile phone ahead of a crash. In a bid to get around the Fourth Amendment right to privacy, the textalyzer allegedly would keep conversations, contacts, numbers, photos, and application data private. It will solely say whether the phone was in use prior to a motor-vehicle mishap. Further analysis, which might require a warrant, could be necessary to determine whether such usage was via hands-free dashboard technology and to confirm the original finding.

This is interesting technology. To me, it feels no more intrusive than a breathalyzer, assuming that the textalyzer has all the privacy guards described above.
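
In software terms, the privacy guard described above amounts to reporting a single yes/no answer derived only from interaction timestamps. Here is a hypothetical sketch of that check, assuming the analyzer can read a log of user-interaction events (taps, keystrokes, app launches) with timestamps and nothing else; it illustrates the concept and is not Cellebrite’s actual design.

```python
from datetime import datetime, timedelta


def phone_in_use(interaction_times, crash_time, window_minutes=5):
    """Return True if any user-interaction event (tap, keystroke, app launch)
    occurred in the window immediately before the crash. Only timestamps are
    examined; no message content, contacts, or photos are touched."""
    start = crash_time - timedelta(minutes=window_minutes)
    return any(start <= t <= crash_time for t in interaction_times)


# Example: interaction events at 2:58 and 2:59 before a 3:00 crash -> "in use"
events = [datetime(2016, 4, 1, 14, 58), datetime(2016, 4, 1, 14, 59)]
print(phone_in_use(events, crash_time=datetime(2016, 4, 1, 15, 0)))  # True
```

Determining whether that use was hands-free would require looking at which events fired and where they came from, which is presumably why the article says further analysis might require a warrant.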

Slashdot thread. Reddit thread.

EDITED TO ADD (4/19): Good analysis and commentary.

Posted on April 13, 2016 at 6:51 AM

Memphis Airport Inadvertently Gets Security Right

A local newspaper recently tested airport security at Memphis Airport:

Our crew sat for 30 minutes in the passenger drop-off area Tuesday without a word from anyone, and that raised a number of eyebrows.

Certainly raised mine. Here’s my question: why is that a bad thing? If you’re worried about a car bomb, why do you think length of time sitting curbside correlates with likelihood of detonation? Were I a car bomber sitting in the front seat, I would detonate my bomb pretty damned quick.

Anyway, the airport was 100% correct in its reply:

The next day, the airport told FOX13 they take a customer-friendly “hassle free” approach.

I’m certainly in favor of that. Useless security theater that adds to the hassle of traveling without actually making us any safer doesn’t help anyone.

Unfortunately, the airport is now reviewing its procedures, because fear wins:

CEO Scott Brockman sent FOX13 a statement saying in part “We will continue to review our policies and procedures and implement any necessary changes in order to ensure the safety of the traveling public.”

EDITED TO ADD (4/12): The airport PR person commented below. “Jim Turner of the Cato Institute” is actually Jim Harper.

Posted on March 25, 2016 at 12:26 PM

Cory Doctorow on Software Security and the Internet of Things

Cory Doctorow has a good essay on software integrity and control problems and the Internet of Things. He’s writing about self-driving cars, but the issue is much more general. Basically, we’re going to want systems that prevent their owners from making certain changes to them. We know how to do this: digital rights management. We also know that this solution doesn’t work, and that trying to make it work introduces all sorts of security vulnerabilities. So we have a problem.

This is an old problem. (Adam Shostack and I wrote a paper about it in 1999, about smart cards.) The Internet of Things is going to make it much worse. And it’s one we’re not anywhere near prepared to solve.

Posted on December 31, 2015 at 6:12 AM

Volkswagen and Cheating Software

Portuguese translation by Ricardo R Hashimoto

For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars’ computers were able to detect when they were being tested, and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren’t being tested, they belched out up to 40 times the permitted level of pollutants. Volkswagen’s CEO has resigned, and the company will face an expensive recall, enormous fines, and worse.

Cheating on regulatory testing has a long history in corporate America. It happens regularly in automobile emissions control and elsewhere. What’s important in the VW case is that the cheating was preprogrammed into the algorithm that controlled cars’ emissions.

Computers allow people to cheat in ways that are new. Because the cheating is encapsulated in software, the malicious actions can happen at a far remove from the testing itself. Because the software is “smart” in ways that normal objects are not, the cheating can be subtler and harder to detect.
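
To make that concrete, here is a conceptual sketch of how an engine controller could recognize a laboratory test cycle from sensor patterns that rarely occur on real roads and switch calibration modes. It assumes nothing about Volkswagen’s actual implementation; the heuristics and mode names are invented for illustration.

```python
def looks_like_dyno_test(speed_kmh, steering_angle_deg, elapsed_s):
    """On a dynamometer the drive wheels turn while the steering wheel stays
    centered and the run follows a scripted profile of limited length, a
    combination that almost never occurs on a real road. (Invented heuristic.)"""
    return speed_kmh > 0 and abs(steering_angle_deg) < 1.0 and elapsed_s < 1800


def select_emissions_mode(speed_kmh, steering_angle_deg, elapsed_s):
    """Pick between a calibration that fully treats exhaust (and passes the
    test) and one that treats it less aggressively (and pollutes more)."""
    if looks_like_dyno_test(speed_kmh, steering_angle_deg, elapsed_s):
        return "full_exhaust_treatment"    # test detected: run clean
    return "reduced_exhaust_treatment"     # normal driving: run dirty
```

A branch like this is tiny next to the thousands of legitimate conditionals in an engine control unit, which is part of why such cheating is so hard to find from the outside.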

We’ve already had examples of smartphone manufacturers cheating on processor benchmark testing: the phones detect when a benchmark is running and artificially boost their performance. We’re going to see this in other industries.

The Internet of Things is coming. Many industries are moving to add computers to their devices, and that will bring with it new opportunities for manufacturers to cheat. Light bulbs could fool regulators into appearing more energy efficient than they are. Temperature sensors could fool buyers into believing that food has been stored at safer temperatures than it has been. Voting machines could appear to work perfectly—except during the first Tuesday of November, when they undetectably switch a few percent of votes from one party’s candidates to another’s.

My worry is that some corporate executives won’t interpret the VW story as a cautionary tale involving just punishments for a bad mistake but will see it instead as a demonstration that you can get away with something like that for six years.

And they’ll cheat smarter. For all of VW’s brazenness, its cheating was obvious once people knew to look for it. Far cleverer would be to make the cheating look like an accident.

Overall software quality is so bad that products ship with thousands of programming mistakes. Most of them don’t affect normal operations, which is why your software generally works just fine. Some of them do, which is why your software occasionally fails, and needs constant updates. By making cheating software appear to be a programming mistake, the cheating looks like an accident. And, unfortunately, this type of deniable cheating is easier than people think.

Computer-security experts believe that intelligence agencies have been doing this sort of thing for years, both with the consent of the software developers and surreptitiously.

This problem won’t be solved through computer security as we normally think of it. Conventional computer security is designed to prevent outside hackers from breaking into your computers and networks. The car analogue would be security software that prevented an owner from tweaking his own engine to run faster but in the process emit more pollutants. What we need to contend with is a very different threat: malfeasance programmed in at the design stage.

We already know how to protect ourselves against corporate misbehavior. Ronald Reagan once said “trust, but verify” when speaking about verifying Soviet compliance with nuclear arms treaties. We need to be able to verify the software that controls our lives.

Software verification has two parts: transparency and oversight. Transparency means making the source code available for analysis. The need for this is obvious; it’s much easier to hide cheating software if a manufacturer can hide the code.

But transparency doesn’t magically reduce cheating or improve software quality, as anyone who uses open-source software knows. It’s only the first step. The code must be analyzed. And because software is so complicated, that analysis can’t be limited to a once-every-few-years government test. We need private analysis as well.

It was researchers at private labs in the United States and Germany that eventually outed Volkswagen. So transparency can’t just mean making the code available to government regulators and their representatives; it needs to mean making the code available to everyone.

Both transparency and oversight are being threatened in the software world. Companies routinely fight making their code public and attempt to muzzle security researchers who find problems, citing the proprietary nature of the software. It’s a fair complaint, but the public interests of accuracy and safety need to trump business interests.

Proprietary software is increasingly being used in critical applications: voting machines, medical devices, breathalyzers, electric power distribution, systems that decide whether or not someone can board an airplane. We’re ceding more control of our lives to software and algorithms. Transparency is the only way to verify that they’re not cheating us.

There’s no shortage of corporate executives willing to lie and cheat their way to profits. We saw another example of this last week: Stewart Parnell, the former CEO of the now-defunct Peanut Corporation of America, was sentenced to 28 years in prison for knowingly shipping out salmonella-tainted products. That may seem excessive, but nine people died and many more fell ill as a result of his cheating.

Software will only make malfeasance like this easier to commit and harder to prove. Fewer people need to know about the conspiracy. It can be done in advance, nowhere near the testing time or site. And, if the software remains undetected for long enough, it could easily be the case that no one in the company remembers that it’s there.

We need better verification of the software that controls our lives, and that means more—and more public—transparency.

This essay previously appeared on CNN.com.

EDITED TO ADD: Three more essays.

EDITED TO ADD (10/8): A history of emissions-control cheating devices.

Posted on September 30, 2015 at 9:13 AM

Remotely Hacking a Car While It's Driving

This is a big deal. Hackers can remotely take control of the Uconnect system in a car just by knowing the car’s IP address. They can disable the brakes, turn on the AC, blast music, and disable the transmission:

The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. They demonstrated as much on the same day as my traumatic experience on I-64. After narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment.

Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep’s brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they’re working on perfecting their steering control—for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep’s GPS coordinates, measure its speed, and even drop pins on a map to trace its route.

In related news, there’s a Senate bill to improve car security standards. Honestly, I’m not sure our security technology is enough to prevent this sort of thing if the car’s controls are attached to the Internet.

EDITED TO ADD (8/14): More articles.

Posted on July 23, 2015 at 6:17 AM

