Entries Tagged "RFID"

RFID Tattoos

Great idea for livestock. Dumb idea for soldiers:

The ink also could be used to track and rescue soldiers, Pydynowski said.

“It could help identify friends or foes, prevent friendly fire, and help save soldiers’ lives,” he said. “It’s a very scary proposition when you’re dealing with humans, but with military personnel, we’re talking about saving soldiers’ lives and it may be something worthwhile.”

Posted on January 22, 2007 at 12:27 PM

Tracking People by their Sneakers

Researchers at the University of Washington have demonstrated a surveillance system that automatically tracks people through the Nike+iPod Sport Kit. Basically, the kit contains a transmitter that you stick in your sneakers and a receiver you attach to your iPod. This allows you to track things like time, distance, pace, and calories burned. Pretty clever.

However, it turns out that the transmitter in your sneaker can be read up to 60 feet away. And because it broadcasts a unique ID, you can be tracked by it. In the demonstration, the researchers built a surveillance device (at a cost of about $250) and interfaced it with Google Maps. Details are in the paper. Very scary.

This is a great demonstration for anyone who is skeptical that RFID chips can be used to track people. It’s a good example because the chips have no personal identifying information, yet can still be used to track people. As long as the chips have unique IDs, those IDs can be used for surveillance.
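
To see why, here is a minimal sketch (hypothetical tag ID, readers, and locations, not the researchers' actual code) of how repeated sightings of one broadcast ID become a movement trail:

```python
from collections import defaultdict
from datetime import datetime

# Each passive listener logs (tag_id, timestamp, location) whenever it
# hears the sneaker transmitter's unique ID -- no name or account needed.
sightings = [
    ("0x3F9A12", datetime(2006, 12, 1, 8, 5), "campus gym"),
    ("0x3F9A12", datetime(2006, 12, 1, 8, 40), "coffee shop"),
    ("0x3F9A12", datetime(2006, 12, 1, 17, 55), "apartment lobby"),
]

# Group sightings by ID: the ID becomes a stable pseudonym, and the
# trail of locations is the surveillance product.
trails = defaultdict(list)
for tag_id, when, where in sightings:
    trails[tag_id].append((when, where))

for tag_id, trail in trails.items():
    print(f"Tag {tag_id}:")
    for when, where in sorted(trail):
        print(f"  {when:%H:%M} at {where}")
```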

To me, the real significance of this work is how easy it was. The people who designed the Nike/iPod system put zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they’re evil—just because it’s easier to ignore the externality than to worry about it.

Posted on December 12, 2006 at 1:11 PM

RFID Personal Firewall

Absolutely fascinating paper: “A Platform for RFID Security and Privacy Administration.” The basic idea is that you carry a personalized device that jams the signals from all the RFID tags on your person until you authorize otherwise.

Abstract

This paper presents the design, implementation, and evaluation of the RFID Guardian, the first-ever unified platform for RFID security and privacy administration. The RFID Guardian resembles an “RFID firewall”, enabling individuals to monitor and control access to their RFID tags by combining a standard-issue RFID reader with unique RFID tag emulation capabilities. Our system provides a platform for coordinated usage of RFID security mechanisms, offering fine-grained control over RFID-based auditing, key management, access control, and authentication capabilities. We have prototyped the RFID Guardian using off-the-shelf components, and our experience has shown that active mobile devices are a valuable tool for managing the security of RFID tags in a variety of applications, including protecting low-cost tags that are unable to regulate their own usage.
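
The access-control part of this is easy to picture as a policy lookup: for each reader query, the device decides whether to let the tag answer or to jam the exchange. A minimal sketch of that idea, with made-up tag names, readers, and rules (not the RFID Guardian's actual design or code):

```python
# Hypothetical policy: which readers may query which of my tags, and where.
POLICY = {
    # (tag_id, reader_id): set of contexts in which the query is allowed
    ("passport", "border-control"): {"airport"},
    ("transit-card", "fare-gate"): {"subway", "bus"},
}

def guardian_decision(tag_id: str, reader_id: str, context: str) -> str:
    """Return 'allow' to let the tag answer, or 'jam' to block the query."""
    allowed_contexts = POLICY.get((tag_id, reader_id), set())
    return "allow" if context in allowed_contexts else "jam"

# A fare gate in the subway may read the transit card...
print(guardian_decision("transit-card", "fare-gate", "subway"))   # allow
# ...but an unknown reader in a shopping mall gets jammed.
print(guardian_decision("passport", "unknown-reader", "mall"))    # jam
```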

As Cory Doctorow points out, this is potentially a way to reap the benefits of RFID without paying the cost:

Up until now, the standard answer to privacy concerns with RFIDs is to just kill them—put your new US Passport in a microwave for a few minutes to nuke the chip. But with an RFID firewall, it might be possible to reap the benefits of RFID without the cost.

General info here. They’ve even built a prototype.

Posted on December 11, 2006 at 6:20 AM

Buying Fake European Passports

Interesting story of a British journalist buying 20 different fake EU passports. She bought a genuine Czech passport with a fake name and her real picture, a fake Latvian passport, and a stolen Estonian passport.

Despite information on stolen passports being registered in a central Interpol database, her Estonian passport went undetected.

Note that harder-to-forge RFID passports would have helped in only one of those cases; forgery is certainly not the most important problem to solve.

Also, I am somewhat suspicious of this story. I don’t know about UK law, but in the US this would be a major crime—and I don’t think being a reporter would be an adequate defense.

Posted on December 5, 2006 at 1:38 PM

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him use whatever time, equipment and expertise he needs to break it. They’re difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can then break every other device in the same class.

This means the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
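
The principle behind all of these is the same: secret-dependent work takes secret-dependent time (or power, or radiation). A toy sketch of the idea (not the RSA attack itself): an early-exit comparison whose runtime leaks how many leading digits of a guess are correct.

```python
import time

SECRET = "7316"  # the value the device is supposed to keep from its owner

def leaky_compare(guess: str) -> bool:
    """Early-exit comparison: runtime grows with the number of correct leading digits."""
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
        time.sleep(0.001)  # stand-in for per-digit work the attacker can time
    return len(guess) == len(SECRET)

def time_guess(guess: str, trials: int = 5) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        leaky_compare(guess)
    return time.perf_counter() - start

# The attacker recovers the secret one digit at a time by picking the
# slowest candidate at each position -- no plaintext or ciphertext needed.
recovered = ""
for _ in range(len(SECRET)):
    best = max("0123456789", key=lambda d: time_guess(recovered + d))
    recovered += best
print(recovered)  # prints "7316"
```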

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.
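
The difference is where the authoritative data lives. A minimal sketch of the two designs (illustrative names and amounts only):

```python
class StoredValueCard:
    """The balance is data on a device the attacker owns: whoever can
    write to the card can mint money, so the card itself must resist him."""
    def __init__(self, balance: int):
        self.balance = balance  # secret state stored on the card

    def tamper(self, new_balance: int):
        self.balance = new_balance  # class break: the same trick works on every card


class DebitCard:
    """The card holds only an identifier; the authoritative balance sits
    in the bank's database, out of the cardholder's reach."""
    def __init__(self, account_id: str):
        self.account_id = account_id  # nothing secret to protect here


class Bank:
    def __init__(self):
        self.accounts = {"acct-42": 100}  # the real data and the real security

    def charge(self, card: DebitCard, amount: int) -> bool:
        if self.accounts.get(card.account_id, 0) >= amount:
            self.accounts[card.account_id] -= amount
            return True
        return False


# The stored-value "attack": free money, repeatable on every card in the class.
svc = StoredValueCard(balance=5)
svc.tamper(new_balance=1_000_000)

# Rewriting a debit card changes nothing the bank trusts.
bank = Bank()
card = DebitCard("acct-42")
print(bank.charge(card, 30), bank.accounts["acct-42"])  # True 70
```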

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in its scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, stealing end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk (knowing what the winning numbers actually were) would palm a non-winning ticket into the machine, inform the customer “sorry, better luck next time,” and claim the prize themselves at a later date.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM

FIDIS on RFID Passports

The “Budapest Declaration on Machine Readable Travel Documents”:

Abstract:

By failing to implement an appropriate security architecture, European governments have effectively forced citizens to adopt new international Machine Readable Travel Documents which dramatically decrease their security and privacy and increase the risk of identity theft. Simply put, the current implementation of the European passport utilises technologies and standards that are poorly conceived for its purpose. In this declaration, researchers on Identity and Identity Management (supported by a unanimous move in the September 2006 Budapest meeting of the FIDIS “Future of Identity in the Information Society” Network of Excellence[1]) summarise findings from an analysis of MRTDs and recommend corrective measures which need to be adopted by stakeholders in governments and industry to ameliorate outstanding issues.

EDITED TO ADD (11/9): Slashdot thread.

Posted on November 9, 2006 at 12:26 PM
