Entries Tagged "spoofing"


Hacking Cars Through Wireless Tire-Pressure Sensors

Still minor, but this kind of thing is only going to get worse:

The new research shows that other systems in the vehicle are similarly insecure. The tire pressure monitors are notable because they’re wireless, allowing attacks to be made from adjacent vehicles. The researchers used equipment costing $1,500, including radio sensors and special software, to eavesdrop on, and interfere with, two different tire pressure monitoring systems.

The pressure sensors contain unique IDs, so merely eavesdropping enabled the researchers to identify and track vehicles remotely. Beyond this, they could alter and forge the readings to cause warning lights on the dashboard to turn on, or even crash the ECU completely.
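Since the sensor IDs are static and broadcast in the clear, passive tracking requires nothing more than logging which IDs a receiver hears, where, and when. A minimal sketch of the idea (the packet contents, IDs, and locations here are invented for illustration; real TPMS framing varies by manufacturer):

```python
from collections import defaultdict

# Hypothetical decoded TPMS broadcasts: (timestamp, receiver location,
# sensor ID). Real packet formats and ID widths differ by vendor.
sightings = [
    ("08:02", "garage_A",        0xDEADBEEF),
    ("08:45", "highway_exit_12", 0xDEADBEEF),
    ("09:00", "garage_A",        0xCAFED00D),
    ("17:30", "office_lot_B",    0xDEADBEEF),
]

# Group sightings by sensor ID: because the ID never changes, each tire
# acts as a persistent tracking beacon for any nearby receiver.
tracks = defaultdict(list)
for ts, place, sensor_id in sightings:
    tracks[sensor_id].append((ts, place))

for sensor_id, seen in sorted(tracks.items()):
    route = " -> ".join(place for _, place in seen)
    print(f"sensor {sensor_id:#010x}: {route}")
```

Nothing here requires breaking any cryptography; the tracking capability falls out of the protocol design alone.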

More:

Now, Ishtiaq Rouf at the USC and other researchers have found a vulnerability in the data transfer mechanisms between CANbus controllers and wireless tyre pressure monitoring sensors which allows misleading data to be injected into a vehicle’s system and allows remote recording of the movement profiles of a specific vehicle. The sensors, which are compulsory for new cars in the US (and probably soon in the EU), each communicate individually with the vehicle’s on-board electronics. Although a loss of pressure can also be detected via differences in the rotational speed of fully inflated and partially inflated tyres on the same axle, such indirect methods are now prohibited in the US.

Paper here. This is a previous paper on automobile computer security.

EDITED TO ADD (8/25): This is a better article.

Posted on August 17, 2010 at 6:42 AM

Location-Based Quantum Encryption

Location-based encryption—a system by which only a recipient in a specific location can decrypt the message—fails because location can be spoofed. Now a group of researchers has solved the problem in a quantum cryptography setting:

The research group has recently shown that if one sends quantum bits—the quantum equivalent of a bit—instead of only classical bits, a secure protocol can be obtained such that the location of a device cannot be spoofed. This, in turn, leads to a key-exchange protocol based solely on location.

The core idea behind the protocol is the “no-cloning” principle of quantum mechanics. By making a device give the responses of random challenges to several verifiers, the protocol ensures that multiple colluding devices cannot falsely prove any location. This is because an adversarial device can either store the quantum state of the challenge or send it to a colluding adversary, but not both.
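The classical failure mode being fixed here is worth spelling out: position can be verified by challenge-response timing, but a classical challenge can be copied and shared among colluders. A toy one-dimensional sketch of that attack (positions, units, and the propagation constant are all illustrative):

```python
C = 1.0  # propagation speed in toy units (distance per unit time)

def honest_response_times(p, v1=0.0, v2=10.0):
    """Round-trip times a prover actually located at p would produce:
    each verifier's challenge travels to p and the answer travels back."""
    return 2 * abs(p - v1) / C, 2 * abs(p - v2) / C

def colluding_response_times(claimed_p, a1, a2, v1=0.0, v2=10.0):
    """Two classical adversaries at a1 (near V1) and a2 (near V2).
    Each copies the challenge it intercepts, shares it with its partner,
    and both delay their answers to mimic a prover at claimed_p.
    Copying the challenge is exactly the step no-cloning forbids."""
    t1, t2 = honest_response_times(claimed_p, v1, v2)
    # Feasible iff the mimicked delay is at least each adversary's own
    # round trip to its verifier (they can always add delay, never remove it).
    feasible = (t1 >= 2 * abs(a1 - v1) / C and
                t2 >= 2 * abs(a2 - v2) / C)
    return (t1, t2) if feasible else None

# Adversaries at 2.0 and 8.0 perfectly impersonate a prover at 5.0:
print(honest_response_times(5.0))               # what the verifiers expect
print(colluding_response_times(5.0, 2.0, 8.0))  # identical timings
```

The quantum protocol closes precisely this copy-and-forward loophole: a qubit challenge can be stored or forwarded, but not both.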

Don’t expect this in a product anytime soon. Quantum cryptography is mostly theoretical and almost entirely laboratory-only. But as research, it’s great stuff. Paper here.

Posted on August 3, 2010 at 6:25 AM

The Commercial Speech Arms Race

A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else’s stuff, then call the police.

I was reminded of this recently when a group of Israeli scientists demonstrated that it’s possible to fabricate DNA evidence. So now, instead of leaving your own DNA at a crime scene, you can leave fabricated DNA. And it isn’t even necessary to fabricate. In Charlie Stross’s novel Halting State, the bad guys foul a crime scene by blowing around the contents of a vacuum cleaner bag, containing the DNA of dozens, if not hundreds, of people.

This kind of thing has been going on for ever. It’s an arms race, and when technology changes, the balance between attacker and defender changes. But when automated systems do the detecting, the results are different. Face recognition software can be fooled by cosmetic surgery, or sometimes even just a photograph. And when fooling them becomes harder, the bad guys fool them on a different level. Computer-based detection gives the defender economies of scale, but the attacker can use those same economies of scale to defeat the detection system.

Google, for example, has anti-fraud systems that detect, and shut down, advertisers who try to inflate their revenue by repeatedly clicking on their own AdSense ads. So people built bots to repeatedly click on the AdSense ads of their competitors, trying to convince Google to kick them out of the system.
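The framing attack works because the detector only sees aggregate click statistics, not who generated them. A toy sketch (the threshold and numbers are invented; Google's real systems are far more sophisticated):

```python
# Naive rate-based click-fraud detector: flag any publisher whose
# click-through rate exceeds a fixed threshold. (Threshold invented
# for illustration.)
SUSPICIOUS_CTR = 0.10

def flagged(impressions, clicks):
    """True if this publisher's CTR looks fraudulent to the detector."""
    return clicks / impressions > SUSPICIOUS_CTR

# An honest publisher: 10,000 impressions, 200 organic clicks (2% CTR).
impressions, clicks = 10_000, 200
print(flagged(impressions, clicks))               # not flagged

# A competitor's bot adds 900 fake clicks on the honest publisher's ads.
bot_clicks = 900
print(flagged(impressions, clicks + bot_clicks))  # innocent party flagged
```

The detector cannot distinguish a publisher clicking his own ads from a rival's bot clicking them for him, which is the whole point of the attack.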

Similarly, when Google started penalizing a site’s search engine rankings for having “bad neighbors”—backlinks from link farms, adult or gambling sites, or blog spam—people engaged in sabotage: they built link farms and left blog comment spam linking to their competitors’ sites.

The same sort of thing is happening on Yahoo Answers. Initially, companies would leave answers pushing their products, but Yahoo started policing this. So people have written bots to report abuse on all their competitors. There are Facebook bots doing the same sort of thing.

Last month, Google introduced Sidewiki, a browser feature that lets you read and post comments on virtually any webpage. People and industries are already worried about the effects unrestrained commentary might have on their businesses, and how they might control the comments. I’m sure Google has sophisticated systems ready to detect commercial interests that try to take advantage of the system, but are they ready to deal with commercial interests that try to frame their competitors? And do we want to give one company the power to decide which comments should rise to the top and which get deleted?

Whenever you build a security system that relies on detection and identification, you invite the bad guys to subvert the system so it detects and identifies someone else. Sometimes this is hard—leaving someone else’s fingerprints at a crime scene is hard, as is using a mask of someone else’s face to fool a guard watching a security camera—and sometimes it’s easy. But when automated systems are involved, it’s often very easy. It’s not just hardened criminals that try to frame each other; it’s mainstream commercial interests.

With systems that police internet comments and links, there’s money involved in commercial messages—so you can be sure some will take advantage of it. This is the arms race. Build a detection system, and the bad guys try to frame someone else. Build a detection system to detect framing, and the bad guys try to frame someone else framing someone else. Build a detection system to detect framing of framing, and, well, there’s no end, really. Commercial speech is on the internet to stay; we can only hope that it doesn’t pollute the social systems we use so badly that they’re no longer useful.

This essay originally appeared in The Guardian.

Posted on October 16, 2009 at 8:56 AM

More Security Countermeasures from the Natural World

The plant Caladium steudneriifolium pretends to be ill so mining moths won’t eat it.

She believes that the plant essentially fakes being ill, producing variegated leaves that mimic those that have already been damaged by mining moth larvae. That deters the moths from laying any further larvae on the leaves, as the insects assume the previous caterpillars have already eaten most of the leaves’ nutrients.

Cabbage aphids arm themselves with chemical bombs:

Its body carries two reactive chemicals that only mix when a predator attacks it. The injured aphid dies. But in the process, the chemicals in its body react and trigger an explosion that delivers lethal amounts of poison to the predator, saving the rest of the colony.

The dark-footed ant spider mimics an ant so that it’s not eaten by other spiders, and so it can eat spiders itself:

M.melanotarsa is a jumping spider that protects itself from predators (like other jumping spiders) by resembling an ant. Earlier this month, Ximena Nelson and Robert Jackson showed that they bolster this illusion by living in silken apartment complexes and travelling in groups, mimicking not just the bodies of ants but their social lives too.

Now Nelson and Jackson are back with another side to the ant-spider’s tale – it also uses its impersonation for attack as well as defence. It feasts on the eggs and youngsters of the very same spiders that its ant-like form protects it from. It is, essentially, a spider that looks like an ant to avoid being eaten by spiders so that it itself can eat spiders.

My previous post about security stories from the insect world.

Posted on July 2, 2009 at 6:11 AM

Three Security Anecdotes from the Insect World

Beet armyworm caterpillars react to the sound of a passing wasp by freezing in place, or even dropping off the plant. Unfortunately, armyworm intelligence isn’t good enough to tell the difference between enemy aircraft (the wasps that prey on them) and harmless commercial flights (bees); they react the same way to either. So by producing nectar for bees, plants not only get pollinated, but also gain some protection against being eaten by caterpillars.

The small hive beetle lives by entering beehives to steal combs and honey. They home in on the hives by detecting the bees’ own alarm pheromones. They also track in yeast that ferments the pollen and releases chemicals that spoof the alarm pheromones, attracting more beetles and more yeast. Eventually the bees abandon the hive, leaving their store of pollen and honey to the beetles and yeast.

Mountain alcon blue caterpillars get ants to feed them by spoofing a biometric: the sounds made by the queen ant.

Posted on March 3, 2009 at 1:20 PM

Aspidistra

Aspidistra was a World War II man-in-the-middle attack. The vulnerability that made it possible was that German broadcast stations were mostly broadcasting the same content from a central source; but during air raids, transmitters in the target area were switched off to prevent them being used for radio direction-finding of the target.

The exploit involved the very powerful (500 kW) Aspidistra transmitter, coupled to a directional antenna farm. With that power, they could make it sound like a local station in the target area.

With a staff of fake announcers, a fake German band, and recordings of recent speeches from high-ranking Nazis, they would smoothly switch from merely relaying the German network to emulating it with their own staff. They could then make modifications to news broadcasts, occasionally creating panic and confusion.

German transmitters were switched off during air raids, to prevent them from being used as navigational aids for bombers. But many were connected into a network and broadcast the same content. When a targeted transmitter switched off, Aspidistra began transmitting on its original frequency, initially retransmitting the German network broadcast as received from a still-active station. As a deception, false content and pro-Allied propaganda would be inserted into the broadcast. The first such “intrusion” was carried out on March 25, 1945.

On March 30, 1945, “Aspidistra” intruded into the Berlin and Hamburg frequencies warning that the Allies were trying to spread confusion by sending false telephone messages from occupied towns to unoccupied towns. On April 8, 1945, “Aspidistra” intruded into the Hamburg and Leipzig channels to warn of forged banknotes in circulation. On April 9, 1945, there were announcements encouraging people to evacuate to seven bomb-free zones in central and southern Germany. All these announcements were false.

The German radio network tried announcing “The enemy is broadcasting counterfeit instructions on our frequencies. Do not be misled by them. Here is an official announcement of the Reich authority.” The Aspidistra station made similar announcements, to cause confusion and make the official messages ineffective.

EDITED TO ADD (11/13): Photos here.

Posted on November 10, 2008 at 7:07 AM

GPS Spoofing

Interesting:

Jon used a desktop computer attached to a GPS satellite simulator to create a fake GPS signal. Portable GPS satellite simulators can fit in the trunk of a car, and are often used for testing. They are available as commercial off-the-shelf products. You can also rent them for less than $1K a week—peanuts to anyone thinking of hijacking a cargo truck and selling stolen goods.

In his first experiments, Jon placed his desktop computer and GPS satellite simulator in the cab of his small truck, and powered them off an inverter. The VAT used a second truck as the victim cargo truck. “With this setup,” Jon said, “we were able to spoof the GPS receiver from about 30 feet away. If our equipment could broadcast a stronger signal, or if we had purchased stronger signal amplifiers, we certainly could have spoofed over a greater distance.”

During later experiments, Jon and the VAT were able to easily achieve much greater GPS spoofing ranges. They spoofed GPS signals at ranges over three quarters of a mile. “The farthest distance we achieved was 4586 feet, at Los Alamos,” said Jon. “When you radiate an RF signal, you ideally want line of sight, but in this case we were walking around buildings and near power lines. We really had a lot of obstruction in the way. It surprised us.” An attacker could drive within a half mile of the victim truck, and still override the truck’s GPS signals.
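The reported ranges are unsurprising given a back-of-envelope link budget: genuine GPS signals arrive at the receiver extremely weak—on the order of -130 dBm—so even a milliwatt-class transmitter at close range overpowers them by tens of dB. A free-space sketch (isotropic antennas assumed, obstructions and receiver details ignored; the transmit power is an illustrative assumption):

```python
import math

C = 299_792_458.0    # speed of light, m/s
F_L1 = 1_575.42e6    # GPS L1 carrier frequency, Hz
GPS_RX_DBM = -128.5  # approximate minimum received GPS signal power, dBm

def fspl_db(distance_m, freq_hz=F_L1):
    """Free-space path loss in dB."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

def received_dbm(tx_dbm, distance_m):
    """Received power from an isotropic transmitter in free space."""
    return tx_dbm - fspl_db(distance_m)

# A 1 mW (0 dBm) spoofer at 30 feet:
spoof = received_dbm(0.0, 30 * 0.3048)
print(f"spoofer at 30 ft: {spoof:.1f} dBm vs genuine GPS {GPS_RX_DBM} dBm")
print(f"margin: {spoof - GPS_RX_DBM:.0f} dB")
```

With that much margin, the receiver locks onto the spoofer rather than the satellites; the limiting factors in practice are antenna gain, obstructions, and regulatory risk, not raw power.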

EDITED TO ADD (10/13): Argonne National Labs is working on this.

Posted on September 17, 2008 at 7:03 AM
