Entries Tagged "smart cards"


London Tube Smartcard Cracked

Looks like lousy cryptography.

Details here. When will people learn not to invent their own crypto?

Note that this is the same card — maybe a different version — that was used in the Dutch transit system, and was hacked back in January. There’s another hack of that system (press release here, and a video demo), and many companies — and government agencies — are scrambling in the wake of all these revelations.

Seems like the Mifare system (especially the version called Mifare Classic — and there are billions out there) was really badly designed, in all sorts of ways. I’m sure there are many more serious security vulnerabilities waiting to be discovered.

Posted on March 14, 2008 at 7:27 AM

Chip and PIN Vulnerable

This both is and isn’t news. In the security world, we knew that replacing credit card signatures with chip and PIN created new vulnerabilities. In this paper (see also the press release and FAQ), researchers demonstrated some pretty basic attacks against the system — one using a paper clip, a needle, and a small recording device. This BBC article is a good summary of the research.

There’s also this leaked chip and PIN report from APACS, the UK trade association that has been pushing chip and PIN.

Posted on March 12, 2008 at 2:12 PM

Dutch RFID Transit Card Hacked

The Dutch RFID public transit card, which has already cost the government $2B — no, that’s not a typo — has been hacked even before it has been deployed:

The first reported attack was designed by two students at the University of Amsterdam, Pieter Siekerman and Maurits van der Schee. They analyzed the single-use ticket and showed its vulnerabilities in a report. They also showed how a used single-use card could be given eternal life by resetting it to its original “unused” state.

The next attack was on the Mifare Classic chip, used on the normal ticket. Two German hackers, Karsten Nohl and Henryk Plotz, were able to remove the coating on the Mifare chip and photograph the internal circuitry. By studying the circuitry, they were able to deduce the secret cryptographic algorithm used by the chip. While this alone does not break the chip, it certainly gives future hackers a stepping stone on which to stand. On Jan. 8, 2008, they released a statement about their work.

Most of the links are in Dutch; there isn’t a whole lot of English-language press about this. But the Dutch Parliament recently invited the students to give testimony; they’re more than a little bit interested in how $2B could be wasted.

My guess is the system was designed by people who don’t understand security, and therefore thought it was easy.

EDITED TO ADD (2/13): More info.

Posted on January 21, 2008 at 6:35 AM

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him use whatever time, equipment and expertise he needs to break it. They’re difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware — or write software — to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.

This means that the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring — and actually affecting — the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
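The 1995 timing result is easy to demonstrate in miniature. This is a toy analogue, not the RSA attack: an early-exit comparison "leaks" how many leading bytes of a guess are correct, and an attacker can recover a secret byte by byte. Real attacks measure wall-clock time; here the number of compare operations stands in for time, and all names are invented for illustration.

```python
# Toy side-channel: an early-exit comparison leaks how far a guess
# matches. We count compare operations as a stand-in for elapsed time.

def leaky_equals(secret: bytes, guess: bytes) -> tuple[bool, int]:
    """Compare byte-by-byte, bailing out at the first mismatch.
    Returns (equal, number_of_comparisons); the count models timing."""
    ops = 0
    for s, g in zip(secret, guess):
        ops += 1
        if s != g:
            return False, ops
    return len(secret) == len(guess), ops

def recover_key(secret: bytes, alphabet: bytes = bytes(range(256))) -> bytes:
    """Recover the secret one byte at a time by picking the guess that
    makes the comparison do the most work (i.e., take the longest)."""
    known = b""
    for _ in range(len(secret)):
        best_byte, best_score = None, -1
        for b in alphabet:
            trial = known + bytes([b]) + b"\x00" * (len(secret) - len(known) - 1)
            eq, ops = leaky_equals(secret, trial)
            score = ops + (1 if eq else 0)   # break the tie on the final byte
            if score > best_score:
                best_byte, best_score = bytes([b]), score
        known += best_byte
    return known
```

The defense, as with real timing channels, is to make the operation take the same time regardless of the data — which is exactly what's hard to guarantee on a device the attacker physically controls.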

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back — the real data, and the security, are in the bank’s databases.
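The difference between the two designs can be sketched in a few lines. This is a deliberately simplified model, with invented names: in the stored-value design the balance lives on the card, so whoever holds the card controls it; in the ledger design the card is just an identifier and the authoritative balance lives on the bank's server.

```python
# Two toy payment-card designs. Tampering works against the first,
# because the secret (the balance) is on the attacker's device.

class StoredValueCard:
    def __init__(self, balance: int):
        self.balance = balance            # the value the cardholder must not change

    def spend(self, amount: int) -> bool:
        if amount <= self.balance:
            self.balance -= amount
            return True
        return False

class BankLedger:
    def __init__(self):
        self._balances = {}               # account -> balance, held by the bank

    def open(self, account: str, balance: int):
        self._balances[account] = balance

    def spend(self, account: str, amount: int) -> bool:
        if self._balances.get(account, 0) >= amount:
            self._balances[account] -= amount
            return True
        return False

# The attack: the cardholder rewrites the on-card balance.
card = StoredValueCard(balance=5)
card.balance = 1_000_000                  # nothing on the card can stop this
assert card.spend(999_999)                # "free" money

# Against the ledger, tampering with the card changes nothing:
bank = BankLedger()
bank.open("alice", 5)
assert not bank.spend("alice", 999_999)   # the bank's database is authoritative
```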

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in their scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets — the lottery commission — tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker — with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer — especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, who stole end-of-week draw tickets from unsuspecting customers. The customer would hand his ticket over the counter to be scanned to see if it was a winner. The clerk, knowing what the winning numbers actually were, would palm a non-winning ticket into the machine, tell the customer “sorry, better luck next time,” and claim the prize himself at a later date.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM

On-Card Displays

This is impressive: a display that works on a flexible credit card.

One of the major security problems with smart cards is that they don’t have their own I/O. That is, you have to trust whatever card reader/writer you stick the card in to faithfully send what you type into the card, and display whatever the card spits back out. Way back in 1999, Adam Shostack and I wrote a paper about this general class of security problem.

Think WYSIWTCS: What You See Is What The Card Says. That’s what an on-card display does.

No, it doesn’t protect against tampering with the card. That’s part of a completely different set of threats.

Posted on September 19, 2006 at 2:18 PM

Technological Arbitrage

This is interesting. Seems that a group of Sri Lankan credit card thieves collected the data off a bunch of UK chip-protected credit cards.

All new credit cards in the UK come embedded with RFID chips that contain different pieces of user information; in order to access the account and withdraw cash, the ATM has to verify both the magnetic strip and the RFID tag. Without this double verification the ATM will confiscate the card, and possibly even notify the police.

They’re not RFID chips, they’re normal smart card chips that require physical contact — but that’s not the point.

They couldn’t clone the chips, so they took the information off the magnetic stripe and made non-chip cards. These cards wouldn’t work in the UK, of course, so the criminals flew down to India where the ATMs only verify the magnetic stripe.

Backwards compatibility is often incompatible with security. This is a good example, and demonstrates how criminals can make use of “technological arbitrage” to leverage compatibility.
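The arbitrage is easy to model. The sketch below is a simplified, hypothetical terminal-authorization check, not any real ATM's logic: the cloned card carries a valid stripe but no chip, so it fails wherever the chip is checked and succeeds wherever the legacy stripe-only path survives.

```python
# Toy model of the fallback: a terminal that cannot check the chip
# accepts the magnetic stripe alone. Field names are invented.

def authorize(card: dict, terminal_checks_chip: bool) -> bool:
    """card is a toy record; a clone has a valid stripe but no chip."""
    if terminal_checks_chip:
        return card.get("chip_valid", False) and card.get("stripe_valid", False)
    return card.get("stripe_valid", False)    # legacy path: stripe alone

cloned = {"stripe_valid": True}               # copied stripe; chip not cloneable

assert not authorize(cloned, terminal_checks_chip=True)    # UK ATM rejects
assert authorize(cloned, terminal_checks_chip=False)       # stripe-only ATM accepts
```

The system is only as strong as the weakest terminal the card is accepted at — which is the criminals' whole business model here.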

EDITED TO ADD (8/9): Facts corrected above.

Posted on August 9, 2006 at 6:32 AM

The Security of RFID Cards

Interesting paper on the security of contactless smartcards:

Interestingly, the outcome of this investigation shows that contactless smartcards are not fundamentally less secure than contact cards. However, some attacks are inherently facilitated. Therefore both the user and the issuer should be aware of these threats and take them into account when building or using the systems based on contactless smartcards.

Posted on June 11, 2006 at 7:04 AM

Shell Suspends Chip & Pin in the UK

According to the BBC:

Petrol giant Shell has suspended chip-and-pin payments in 600 UK petrol stations after more than £1m was siphoned out of customers’ accounts.

This is just sad:

“These Pin pads are supposed to be tamper resistant, they are supposed to shut down, so that has obviously failed,” said Apacs spokeswoman Sandra Quinn.

She said Apacs was confident the problem was specific to Shell and not a systemic issue.

A Shell spokeswoman said: “Shell’s chip-and-pin solution is fully accredited and complies with all relevant industry standards.

That spokesperson simply can’t conceive of the fact that those “relevant industry standards” were written by those trying to sell the technology, and might possibly not be enough to ensure security.

And this is just after APACS (that’s the Association of Payment Clearing Services, by the way) reported that chip-and-pin technology reduced fraud by 13%.

Good commentary here. See also this article. Here’s a chip-and-pin FAQ from February.

EDITED TO ADD (5/8): Arrests have been made. And details emerge:

The scam works by criminals implanting devices into chip and pin machines which can copy a bank card’s magnetic strip and record a person’s pin number.

The device cannot copy the chip, which means any fake card can only be used in machines where chip and pin is not implemented – often abroad.

This is a common attack, one that I talk about in Beyond Fear: falling back to a less secure system. The attackers made use of the fact that there is a less secure system that is running parallel to the chip-and-pin system. Clever.

Posted on May 8, 2006 at 12:41 PM

RFID Cards and Man-in-the-Middle Attacks

Recent articles about a proposed US-Canada and US-Mexico travel document (kind of like a passport, but less useful), with an embedded RFID chip that can be read up to 25 feet away, have once again made RFID security newsworthy.

My views have not changed. The most secure solution is a smart card that only works in contact with a reader; RFID is much more risky. But if we’re stuck with RFID, the combination of shielding for the chip, basic access control security measures, and some positive action by the user to get the chip to operate is a good one. The devil is in the details, of course, but those are good starting points.

And when you start proposing chips with a 25-foot read range, you need to worry about man-in-the-middle attacks. An attacker could potentially impersonate the card of a nearby person to an official reader, just by relaying messages to and from that nearby person’s card.

Here’s how the attack would work. In this scenario, customs Agent Alice has the official card reader. Bob is the innocent traveler, in line at some border crossing. Mallory is the malicious attacker, ahead of Bob in line at the same border crossing, who is going to impersonate Bob to Alice. Mallory’s equipment includes an RFID reader and transmitter.

Assume that the card has to be activated in some way. Maybe the cover has to be opened, or the card taken out of a sleeve. Maybe the card has a button to push in order to activate it. Also assume the card has some challenge-reply security protocol and an encrypted key exchange protocol of some sort.

  1. Alice’s reader sends a message to Mallory’s RFID chip.
  2. Mallory’s reader/transmitter receives the message, and rebroadcasts it to Bob’s chip.
  3. Bob’s chip responds normally to a valid message from Alice’s reader. He has no way of knowing that Mallory relayed the message.
  4. Mallory’s reader/transmitter receives Bob’s message and rebroadcasts it to Alice. Alice has no way of knowing that the message was relayed.
  5. Mallory continues to relay messages back and forth between Alice and Bob.
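The steps above can be simulated in a few dozen lines. This assumes a simple challenge-response protocol (an HMAC over a random nonce), which is my stand-in for whatever the card actually runs; the class names are invented. Mallory never learns the key — she just forwards bytes, and Alice's check passes anyway.

```python
# Minimal simulation of the relay attack: Mallory holds no keys,
# only a channel to Bob's card, yet authenticates as Bob to Alice.
import hmac, hashlib, os

class Card:
    def __init__(self, key: bytes):
        self._key = key
    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Reader:
    def __init__(self, key: bytes):
        self._key = key
    def authenticate(self, card_like) -> bool:
        challenge = os.urandom(16)                  # step 1: fresh challenge
        response = card_like.respond(challenge)
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

class Relay:
    """Mallory: forwards traffic to Bob's card (steps 2-5)."""
    def __init__(self, victim_card: Card):
        self._victim = victim_card
    def respond(self, challenge: bytes) -> bytes:
        return self._victim.respond(challenge)      # pure amplification

key = os.urandom(32)       # shared by Alice's reader and Bob's card
alice = Reader(key)
bob = Card(key)
mallory = Relay(bob)

assert alice.authenticate(bob)        # the honest case
assert alice.authenticate(mallory)    # the relay: Alice thinks Mallory is Bob
```

Note that the cryptography is doing its job perfectly; the relay wins because nothing in the protocol binds the response to the distance or identity of the device Alice is actually talking to.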

Defending against this attack is hard. (I talk more about the attack in Applied Cryptography, Second Edition, page 109.) Time stamps don’t help. Encryption doesn’t help. It works because Mallory is simply acting as an amplifier. Mallory might not be able to read the messages. He might not even know who Bob is. But he doesn’t care. All he knows is that Alice thinks he’s Bob.

Precise timing can catch this attack, because of the extra delay that Mallory’s relay introduces. But I don’t think this is part of the spec.

The attack can be easily countered if Alice looks at Mallory’s card and compares the information printed on it with what she’s receiving over the RFID link. But near as I can tell, the point of the 25-foot read distance is so cards can be authenticated in bulk, from a distance.

From the News.com article:

Homeland Security has said, in a government procurement notice posted in September, that “read ranges shall extend to a minimum of 25 feet” in RFID-equipped identification cards used for border crossings. For people crossing on a bus, the proposal says, “the solution must sense up to 55 tokens.”

If Mallory is on that bus, he can impersonate any nearby Bob who activates his RFID card early. And at a crowded border crossing, the odds of some Bob doing that are pretty good.

More detail here:

If that were done, the PASS system would automatically screen the cardbearers against criminal watch lists and put the information on the border guard’s screen by the time the vehicle got to the station, Williams said.

And would predispose the guard to think that everything’s okay, even if it isn’t.

I don’t think people are thinking this one through.

Posted on April 25, 2006 at 7:32 AM

RFID Chips and Viruses

Of course RFID chips can carry viruses. They’re just little computers.

More info here. The coverage is more than a tad sensationalist, though.

EDITED TO ADD (3/16): I thought the attack vector was interesting: a Trojan RFID attacks the central database, rather than attacking other RFID chips directly. Metaphorically, it’s a lot closer to biological viruses, because it actually requires the more powerful host being subverted, and there’s no way an infected tag could propagate directly to another tag.

Posted on March 16, 2006 at 6:55 AM
