Entries Tagged "cars"


Adversarial Machine Learning against Tesla's Autopilot

Researchers have been able to fool Tesla's Autopilot in a variety of ways, including convincing it to drive into oncoming traffic. The oncoming-traffic attack requires nothing more than placing stickers on the road.

Abstract: Keen Security Lab has continued its security research on Tesla vehicles, sharing our results at Black Hat USA 2017 and 2018. Building on ROOT privilege on the APE (Tesla Autopilot ECU, software version 18.6.1), we did further research on this module. We analyzed the CAN messaging functions of the APE and successfully gained remote, contactless control of the steering system. We used an improved optimization algorithm to generate adversarial examples against features (autowipers and lane recognition) that make decisions purely based on camera data, and successfully carried out adversarial example attacks in the physical world. In addition, we found a potential high-risk design weakness in lane recognition when the vehicle is in Autosteer mode. The article is divided into four parts: a brief introduction to Autopilot; how to send control commands from the APE to steer the car while it is driving; and, in the last two sections, the implementation details of the autowipers and lane recognition features, as well as our adversarial example attacks in the physical world. We believe our research makes three novel contributions:

  1. We proved that we can remotely gain root privilege on the APE and control the steering system.
  2. We proved that we can disrupt the autowipers function using adversarial examples in the physical world.
  3. We proved that we can mislead the Tesla into the oncoming lane with minor changes to the road.
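
To make "adversarial examples" concrete: they are typically found by gradient-based optimization, nudging an input image in whatever direction most increases the classifier's error while keeping the change small. Keen Lab's improved algorithm is not public in detail, so what follows is only a minimal sketch of the standard projected gradient descent (PGD) approach, using a toy stand-in classifier rather than Tesla's actual vision network:

```python
# Minimal PGD sketch. The model is a toy stand-in; Keen Lab's optimization
# method and Tesla's actual vision network are not public.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

def pgd_attack(image, label, eps=0.03, step=0.007, iters=10):
    """Search the L-infinity ball of radius eps around `image` for a
    small perturbation that maximizes the classifier's loss on `label`."""
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv.unsqueeze(0)), label.unsqueeze(0))
        loss.backward()
        with torch.no_grad():
            adv = adv + step * adv.grad.sign()            # climb the loss
            adv = image + (adv - image).clamp(-eps, eps)  # stay within eps
            adv = adv.clamp(0.0, 1.0)                     # stay a valid image
    return adv.detach()

frame = torch.rand(3, 32, 32)   # stand-in for a camera frame
label = torch.tensor(3)         # its correct class
adversarial = pgd_attack(frame, label)
```

The same loop, with the perturbation constrained to a patch on the road instead of spread across a whole image, is the intuition behind the lane-recognition stickers.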

You can see the stickers in this photo. They’re unobtrusive.

This is machine learning’s big problem, and I think solving it is a lot harder than many believe.

Posted on April 4, 2019 at 6:18 AM

Zipcar Disruption

This isn’t a security story, but it easily could have been. Last Saturday, Zipcar had a system outage: “an outage experienced by a third party telecommunications vendor disrupted connections between the company’s vehicles and its reservation software.”

That didn’t just mean people couldn’t get cars they reserved. Sometimes it meant they couldn’t get the cars they were already driving to work:

Andrew Jones of Roxbury was stuck on hold with customer service for at least a half-hour while he and his wife waited inside a Zipcar that would not turn back on after they stopped to fill it up with gas.

“We were just waiting and waiting for the call back,” he said.

Customers in other states, including New York, California, and Oregon, reported a similar problem. One user who tweeted about issues with a Zipcar vehicle listed his location as Toronto.

Some, like Jones, stayed with the inoperative cars. Others, including Tina Penman in Portland, Ore., and Heather Reid in Cambridge, abandoned their Zipcar. Penman took an Uber home, while Reid walked from the grocery store back to her apartment.

This is a reliability issue that turns into a safety issue. Systems like this, which directly touch the physical world, need better fail-safe defaults.
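
What a better fail-safe default might look like here: the vehicle caches its last backend-confirmed reservation and keeps honoring it, plus a grace period, whenever connectivity drops, rather than treating silence as a denial. A minimal sketch of that idea; all names and the data layout are hypothetical, not Zipcar's actual design:

```python
# Sketch: fail-safe handling of a lost backend connection in a car-sharing
# vehicle. Names and structure are illustrative, not Zipcar's real system.
import time

LEASE_GRACE_SECONDS = 4 * 3600  # honor the last known reservation for 4 more hours

class VehicleAccess:
    def __init__(self):
        self.cached_lease = None  # last reservation confirmed by the backend

    def sync(self, backend_response):
        """Called whenever the backend is reachable."""
        self.cached_lease = backend_response  # e.g. {"member": ..., "expires": ...}

    def may_start_engine(self, member_id, now=None):
        now = now or time.time()
        lease = self.cached_lease
        if lease is None:
            return False  # never authorized: fail closed
        # Backend unreachable? Fall back to the cached lease plus a grace
        # period, so a driver mid-trip is not stranded at a gas station.
        return (lease["member"] == member_id
                and now < lease["expires"] + LEASE_GRACE_SECONDS)
```

Failing open like this trades a small theft window (a grace period on an expired lease) against the real safety risk of stranding drivers mid-trip.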

Posted on March 20, 2019 at 12:38 PM

"Two Stage" BMW Theft Attempt

Modern cars have alarm systems that automatically connect to a remote call center. This makes cars harder to steal, since tripping the alarm triggers a quick response. This article describes a theft attempt designed to neutralize that security system. In the first stage, the thieves simply disabled the alarm system and left. Had the owner not immediately repaired the car, the thieves would have returned the next night and, no longer working under time pressure, stolen it.

Posted on August 21, 2018 at 5:58 AM

Gas Pump Hack

This is weird:

Police in Detroit are looking for two suspects who allegedly managed to hack a gas pump and steal over 600 gallons of gasoline, valued at about $1,800. The theft took place in the middle of the day and went on for about 90 minutes, with the gas station attendant unable to thwart the hackers.

The theft, reported by Fox 2 Detroit, took place at around 1pm local time on June 23 at a Marathon gas station located about 15 minutes from downtown Detroit. At least 10 cars are believed to have benefitted from the free-flowing gas pump, which still has police befuddled.

Here’s what is known about the supposed hack: Per Fox 2 Detroit, the thieves used some sort of remote device that allowed them to hijack the pump and take control away from the gas station employee. Police confirmed to the local publication that the device prevented the clerk from using the gas station’s system to shut off the individual pump.

Slashdot post.

Hard to know what’s true, but it seems like a good example of a hack against a cyber-physical system.

Posted on July 13, 2018 at 6:18 AM

Man-in-the-Middle Attack against Electronic Car-Door Openers

This is an interesting tactic, and there’s a video of it being used:

The theft took just one minute and the Mercedes car, stolen from the Elmdon area of Solihull on 24 September, has not been recovered.

In the footage, one of the men can be seen waving a box in front of the victim’s house.

The device receives a signal from the key inside and transmits it to the second box next to the car.

The car’s systems are then tricked into thinking the key is present: the car unlocks, and the ignition can be started.
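
This is a textbook relay attack: the boxes break no cryptography, they simply forward the challenge-response exchange over a longer distance than the designers assumed a key could be from the car. A toy model of why that works; the protocol details are invented for illustration and are not any manufacturer's actual scheme:

```python
# Toy model of a relay attack on passive keyless entry. No crypto is broken:
# the attacker boxes just forward messages, defeating the implicit
# assumption that a key that answers must be near the car.
import hmac, hashlib, os

SHARED_SECRET = os.urandom(16)  # provisioned into both car and key fob

def fob_respond(challenge):
    """The real key fob, sitting inside the victim's house."""
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def car_unlock(transport):
    """The car: sends a challenge, unlocks if the response verifies."""
    challenge = os.urandom(8)
    response = transport(challenge)
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

# Normal use: the fob is near the car.
assert car_unlock(fob_respond)

# Relay: box A near the house forwards to box B next to the car. The relay
# is a pure pass-through, so the response still verifies.
def relay(challenge):
    return fob_respond(challenge)   # radio hop house -> car, invisible to the crypto

assert car_unlock(relay)  # car unlocks although the key never left the house
```

The standard defense is distance bounding: measure the response's round-trip time and reject answers that arrive too slowly to have come from a nearby key.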

Posted on November 28, 2017 at 6:03 AM

Unfixable Automobile Computer Security Vulnerability

There is an unpatchable vulnerability that affects most modern cars. It’s buried in the Controller Area Network (CAN) protocol:

Researchers say this flaw is not a vulnerability in the classic meaning of the word, because it stems from a design choice in the CAN standard itself; that is what makes it unpatchable.

Patching the issue means changing how the CAN standard works at its lowest levels. Researchers say car manufacturers can only mitigate the vulnerability via specific network countermeasures, but cannot eliminate it entirely.

Details on how the attack works are here:

The CAN messages, including errors, are called “frames.” Our attack focuses on how CAN handles errors. Errors arise when a device reads values that do not correspond to the original expected value on a frame. When a device detects such an event, it writes an error message onto the CAN bus in order to “recall” the errant frame and notify the other devices to entirely ignore the recalled frame. This mishap is very common and is usually due to natural causes, a transient malfunction, or simply too many systems and modules trying to send frames through the CAN at the same time.

If a device sends out too many errors, then, as CAN standards dictate, it goes into a so-called Bus Off state, where it is cut off from the CAN and prevented from reading or writing any data on the bus. This feature is helpful in isolating clearly malfunctioning devices and stops them from triggering the other modules and systems on the CAN.

This is the exact feature that our attack abuses. Our attack triggers this feature by inducing enough errors that a targeted device or system on the CAN is made to go into the Bus Off state, and thus rendered inoperable. This, in turn, can drastically affect the car’s performance, to the point that it becomes dangerous and even fatal, especially when essential systems like the airbag or antilock braking systems are deactivated. All it takes is a specially crafted attack device, introduced to the car’s CAN through local access, and the reuse of frames already circulating in the CAN rather than the injection of new ones (as previous attacks of this kind have done).

Slashdot thread.
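
The numbers behind the Bus Off transition come from CAN's fault-confinement rules: each node keeps a transmit error counter that goes up by 8 on a transmit error and down by 1 on a successful transmission, and the node is forced off the bus once the counter exceeds 255. A small simulation of that state machine (a simplification of ISO 11898-1, with the error pattern idealized) shows how quickly induced errors silence a node:

```python
# Simulation of CAN fault confinement: TEC +8 per transmit error, -1 per
# successful frame, bus-off when TEC > 255 (per the CAN standard). The
# attack wins by inducing errors faster than successes pay them down.
def frames_until_bus_off(error_ratio):
    """How many frames a targeted node transmits before going bus-off,
    given the fraction of its frames the attacker corrupts."""
    tec, frames = 0, 0
    while tec <= 255:
        frames += 1
        # Deterministic approximation: every (1/error_ratio)-th frame errors out.
        if error_ratio > 0 and frames % round(1 / error_ratio) == 0:
            tec += 8
        else:
            tec = max(0, tec - 1)
        if frames > 10_000:
            return None  # node recovers faster than errors accumulate
    return frames

for ratio in (1.0, 0.5, 0.2):
    print(f"errors on {ratio:.0%} of frames -> bus-off after "
          f"{frames_until_bus_off(ratio)} frames")
```

With errors on every frame, the target is off the bus after 32 transmissions; even a 20% error rate gets there in a few hundred.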

Posted on August 18, 2017 at 6:40 AM

Confusing Self-Driving Cars by Altering Road Signs

Researchers found that they could confuse the road sign detection algorithms of self-driving cars by adding stickers to the signs on the road. They could, for example, cause a car to think that a stop sign is a 45 mph speed limit sign. The changes are subtle, though—look at the photo from the article.

Research paper:

“Robust Physical-World Attacks on Machine Learning Models,” by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song:

Abstract: Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world—they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm—Robust Physical Perturbations (RP2)—that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions.
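
A condensed sketch of the RP2 objective described in the abstract: restrict the perturbation to a sticker-shaped mask, then optimize it so that a target misclassification survives an expectation over varying physical conditions. The classifier, the mask, and the condition sampler below are toy stand-ins, not the paper's actual models or transformation set:

```python
# Sketch of an RP2-style objective: a perturbation confined to a sticker
# mask, optimized so many simulated viewing conditions all misclassify.
# Classifier, mask, and condition sampler are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

sign = torch.rand(3, 32, 32)            # stand-in for a photo of a Stop sign
mask = torch.zeros(3, 32, 32)
mask[:, 12:20, 4:28] = 1.0              # sticker region: a horizontal strip
target = torch.tensor([5])              # e.g. the "Speed Limit" class index

def random_condition(img):
    """Stand-in for varying distance/angle/lighting; here just pixel noise."""
    return (img + 0.05 * torch.randn_like(img)).clamp(0, 1)

delta = torch.zeros_like(sign, requires_grad=True)   # the sticker pattern
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(200):
    views = torch.stack([random_condition((sign + mask * delta).clamp(0, 1))
                         for _ in range(8)])
    loss = loss_fn(model(views), target.expand(8))   # all views -> target class
    opt.zero_grad()
    loss.backward()
    opt.step()

sticker = (mask * delta).detach()   # what would be printed and applied
```

Averaging the loss over many randomly transformed views is what makes the printed sticker robust to distance, angle, and lighting, rather than working only for a single photograph.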

Posted on August 11, 2017 at 6:31 AM

Uber Drivers Hacking the System to Cause Surge Pricing

Interesting story about Uber drivers who have figured out how to game the company’s algorithms to cause surge pricing:

According to the study, drivers manipulate Uber’s algorithm by logging out of the app at the same time, making it think that there is a shortage of cars.

[…]

Based on interviews with drivers in London and New York, and on research on online forums such as Uberpeople.net, the study said drivers have been coordinating to force surge pricing. In a post on the website for drivers, seen by the researchers, one person said: “Guys, stay logged off until surge. Less supply high demand = surge.”


Passengers, of course, have long had tricks to avoid surge pricing.

I expect to see more of this sort of thing as algorithms become more prominent in our lives.
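
The lever the drivers are pulling is simple: surge multipliers are an increasing function of the demand-to-supply ratio in an area, so a coordinated logoff moves the ratio, and therefore the price, directly. A toy illustration; the multiplier formula is invented, since Uber's actual pricing function is proprietary:

```python
# Toy model of coordinated logoff gaming a surge-pricing algorithm.
# The multiplier formula is invented; Uber's real pricing is proprietary.
def surge_multiplier(riders_waiting, drivers_online):
    ratio = riders_waiting / max(drivers_online, 1)
    return max(1.0, min(3.0, ratio))  # floor of 1.0, capped at 3.0

riders, drivers = 40, 50
print(surge_multiplier(riders, drivers))             # 1.0 -> no surge

colluding = 30                                       # drivers who log off together
print(surge_multiplier(riders, drivers - colluding)) # 2.0 -> surge kicks in
# The colluding drivers log back in once the higher multiplier is active.
```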

Posted on August 8, 2017 at 9:35 AM

Vulnerabilities in Car Washes

Articles about serious vulnerabilities in IoT devices and embedded systems are now a dime a dozen. This one concerns Internet-connected car washes:

A group of security researchers have found vulnerabilities in internet-connected drive-through car washes that would let hackers remotely hijack the systems to physically attack vehicles and their occupants. The vulnerabilities would let an attacker open and close the bay doors on a car wash to trap vehicles inside the chamber, or strike them with the doors, damaging them and possibly injuring occupants.

Posted on August 1, 2017 at 5:47 AM

