Entries Tagged "smart cards"


Man-in-the-Middle Attack Against Chip and PIN

Nice attack against the EMV — Europay, MasterCard, Visa — the “chip and PIN” credit card payment system. The attack allows a criminal to use a stolen card without knowing the PIN.

The flaw is that when you put a card into a terminal, a negotiation takes place about how the cardholder should be authenticated: using a PIN, using a signature or not at all. This particular subprotocol is not authenticated, so you can trick the card into thinking it’s doing a chip-and-signature transaction while the terminal thinks it’s chip-and-PIN. The upshot is that you can buy stuff using a stolen card and a PIN of 0000 (or anything you want). We did so, on camera, using various journalists’ cards. The transactions went through fine and the receipts say “Verified by PIN”.

[…]

So what went wrong? In essence, there is a gaping hole in the specifications which together create the “Chip and PIN” system. These specs consist of the EMV protocol framework, the card scheme individual rules (Visa, MasterCard standards), the national payment association rules (UK Payments Association aka APACS, in the UK), and documents produced by each individual issuer describing their own customisations of the scheme. Each spec defines security criteria, tweaks options and sets rules — but none take responsibility for listing what back-end checks are needed. As a result, hundreds of issuers independently get it wrong, and gain false assurance that all bases are covered from the common specifications. The EMV specification stack is broken, and needs fixing.
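The man-in-the-middle device only has to tamper with one step of the transaction: the terminal’s PIN-verification command. Below is a minimal sketch of that interception, assuming hypothetical send_to_card and reply_to_terminal transport helpers; the real hardware and the full EMV message flow are in the paper.

```python
# Minimal sketch of the interception described above. The transport helpers
# (send_to_card, reply_to_terminal) are hypothetical placeholders; everything
# except the VERIFY command is relayed unchanged.

VERIFY_INS = 0x20                  # ISO 7816-4 / EMV VERIFY (PIN check) instruction byte
SW_SUCCESS = bytes([0x90, 0x00])   # status word the terminal reads as "PIN verified"

def relay(apdu: bytes, send_to_card, reply_to_terminal):
    """Forward one command APDU from the terminal, tampering only with VERIFY."""
    ins = apdu[1]                  # APDU layout: CLA, INS, P1, P2, ...
    if ins == VERIFY_INS:
        # The card never sees a PIN attempt, so it proceeds as if cardholder
        # verification were by signature; the terminal, meanwhile, records a
        # successful PIN check and prints "Verified by PIN" on the receipt.
        reply_to_terminal(SW_SUCCESS)
    else:
        # All other traffic passes through untouched, so the rest of the
        # transaction completes normally.
        reply_to_terminal(send_to_card(apdu))
```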

Read Ross Anderson’s entire blog post for both details and context. Here’s the paper, the press release, and a FAQ. And one news article.

This is big. There are about a gazillion of these in circulation.

EDITED TO ADD (2/12): BBC video of the attack in action.

Posted on February 11, 2010 at 4:18 PM

The Doghouse: Net1

They have technology:

The FTS Patent has been acclaimed by leading cryptographic authorities around the world as the most innovative and secure protocol ever invented to manage offline and online smart card related transactions. Please see the independent report by Bruce Schneider [sic] in his book entitled Applied Cryptography, 2nd Edition published in the late 1990s.

I have no idea what this is referring to.

EDITED TO ADD (5/20): Someone, probably from the company, said in comments that this is referring to the UEPS protocol, discussed on page 589. I still don’t like the hyperbole and the implied endorsement in the quote.

Posted on May 22, 2009 at 11:29 AM

More European Chip and PIN Insecurity

“Optimised to Fail: Card Readers for Online Banking,” by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

Abstract

The Chip Authentication Programme (CAP) has been introduced by banks in Europe to deal with the soaring losses due to online banking fraud. A handheld reader is used together with the customer’s debit card to generate one-time codes for both login and transaction authentication. The CAP protocol is not public, and was rolled out without any public scrutiny. We reverse engineered the UK variant of card readers and smart cards and here provide the first public description of the protocol. We found numerous weaknesses that are due to design errors such as reusing authentication tokens, overloading data semantics, and failing to ensure freshness of responses. The overall strategic error was excessive optimisation. There are also policy implications. The move from signature to PIN for authorising point-of-sale transactions shifted liability from banks to customers; CAP introduces the same problem for online banking. It may also expose customers to physical harm.
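To make the “failing to ensure freshness” complaint concrete: a response is fresh only if it is bound to an unpredictable, single-use challenge, so that a code captured once cannot be replayed later. The sketch below illustrates that property in the abstract; it is not the CAP protocol itself, and the key handling is purely illustrative.

```python
# Generic challenge-response sketch illustrating "freshness": each code is bound
# to a single-use random nonce, so replaying an old code fails. This is an
# illustration of the property, not the reverse-engineered CAP protocol.
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    return os.urandom(16)                         # fresh, unpredictable nonce per attempt

def token_response(card_key: bytes, challenge: bytes) -> str:
    mac = hmac.new(card_key, challenge, hashlib.sha256).digest()
    return f"{int.from_bytes(mac[:4], 'big') % 10**8:08d}"   # 8-digit code the user types in

def server_verify(card_key: bytes, challenge: bytes, code: str) -> bool:
    return hmac.compare_digest(token_response(card_key, challenge), code)
```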

EDITED TO ADD (3/12): More info.

Posted on March 5, 2009 at 12:45 PM

Credit Card with One-Time Password Generator

This is a nifty little device: a credit card with an onboard one-time password generator. The idea is that the user enters his PIN on the card every time he makes an online purchase, and types the one-time code shown on the card’s screen into the web form. The article doesn’t say whether the code is time-based or just sequence-based, but in either case the credit card company will be able to verify it remotely.
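For illustration only, here is what those two possibilities look like with a shared per-card secret: a counter-based code (HOTP, RFC 4226) and a time-based code (TOTP, RFC 6238). The card’s actual algorithm isn’t described in the article, so treat this purely as a sketch of how the issuer could verify either kind remotely.

```python
# Sketch of the two standard one-time-code flavours; the card's real scheme may differ.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based code (RFC 4226): issuer and card share the secret and a counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**digits:0{digits}d}"

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based code (RFC 6238): the counter is simply the current time window."""
    return hotp(secret, int(time.time()) // period, digits)

# Issuer side: recompute the expected code (allowing a small counter or time window
# for drift) and compare it with what the customer typed into the web form.
```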

The goal is to cut down on card-not-present credit card fraud.

The efficacy of this countermeasure depends a lot on how much these new credit cards cost relative to the losses from this type of fraud, but in general it seems like a really good idea. Certainly better than that three-digit code printed on the back of cards these days.

According to the article, Visa will be testing this card in 2009 in the UK.

EDITED TO ADD (12/6): Several commenters point out that banks in the Netherlands have had a similar system for years.

Posted on December 4, 2008 at 6:17 AM

ID Cards for Port Workers

While I am strongly opposed to a national ID, I have consistently said that giving strongly secured ID cards to groups like port workers is a good idea. It’s happening in New England:

The scannable card serves as proof that a background check has been performed and it contains features aimed at preventing misuse. In addition to a photograph, the card contains a smart chip that carries a copy of the holder’s fingerprint. Port and delivery workers, cargo handlers, and other employees who must venture into sensitive or secure areas will be required to submit to a fingerprint scan before entering those locations. The scanning machine will automatically perform a match analysis with the fingerprint embedded in the smart chip.

This is a great application for these cards.

Posted on October 21, 2008 at 1:28 PM

Full Disclosure and the Boston Farecard Hack

In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.

The “Oyster card” used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston “T” was at the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start — despite facing an open-and-shut case of First Amendment prior restraint.

The U.S. court has since seen the error of its ways — but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.

The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors — who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It wasn’t until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this “responsible disclosure” protocol, it’s the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.

Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems and many other security devices.

Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers — sometimes young students — who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.

This preference for secrecy comes from confusing a vulnerability with information about that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys don’t do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information won’t leak out even if the research results are suppressed. These assumptions are all incorrect.

The problem isn’t the researchers; it’s the products themselves. Companies will only design security as good as what their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities don’t get fixed, and customers are no wiser. Security research is stifled, and security technology doesn’t improve. The only beneficiaries are the bad guys.

If you’ll forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we don’t pay kidnappers: It encourages more kidnappings. Yet in every kidnapping case, there’s someone — a spouse, a parent, an employer — with a good reason why, in this one case, we should make an exception.

We want researchers to publish vulnerabilities because that’s how security improves. But in every case there’s someone — the Massachusetts Bay Transportation Authority, the locksmiths, an election machine manufacturer — who argues that, in this one case, we should make an exception.

We shouldn’t. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It’s how we learn about security, and how we improve future security.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/26): Matt Blaze has a good essay on the topic.

EDITED TO ADD (9/12): A good legal analysis.

Posted on August 26, 2008 at 6:04 AM

Hacking Mifare Transport Cards

London’s Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won’t be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.

Here’s the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the “Mifare Classic” chip, is used in hundreds of other transport systems as well — Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro — and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world.

The security of Mifare Classic is terrible. This is not an exaggeration; it’s kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.
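To put a rough number on “terrible”: Mifare Classic’s proprietary Crypto-1 cipher uses only a 48-bit key, a space small enough to search exhaustively with modest dedicated hardware even before anyone exploits the cipher’s structural flaws. A back-of-the-envelope calculation, with an assumed key-trial rate:

```python
# Back-of-the-envelope brute-force estimate for a 48-bit key. The keys-per-second
# figure is an illustrative assumption, not a measurement; the published attacks
# recover keys far faster than exhaustive search anyway.
KEY_BITS = 48
KEYS_PER_SECOND = 1_000_000_000          # assumed rate for modest dedicated hardware

keyspace = 2 ** KEY_BITS                 # ~2.8e14 candidate keys
worst_case_days = keyspace / KEYS_PER_SECOND / 86_400
print(f"{keyspace:.3e} keys, about {worst_case_days:.1f} days to exhaust at the assumed rate")
```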

The group that broke Mifare Classic is from Radboud University Nijmegen in the Netherlands. They demonstrated the attack by riding the Underground for free, and by breaking into a building. Their two papers (one is already online) will be published at two conferences this autumn.

The second paper is the one that NXP sued over. They called disclosure of the attack “irresponsible,” warned that it will cause “immense damages,” and claimed that it “will jeopardize the security of assets protected with systems incorporating the Mifare IC.” The Dutch court would have none of it: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

Exactly right. More generally, the notion that secrecy supports security is inherently flawed. Whenever you see an organization claiming that design secrecy is necessary for security — in ID cards, in voting machines, in airport security — it invariably means that its security is lousy and it has no choice but to hide it. Any competent cryptographer would have designed Mifare’s security with an open and public design.

Secrecy is fragile. Mifare’s security was based on the belief that no one would discover how it worked; that’s why NXP had to muzzle the Dutch researchers. But that’s just wrong. Reverse-engineering isn’t hard. Other researchers had already exposed Mifare’s lousy security. A Chinese company even sells a compatible chip. Is there any doubt that the bad guys already know about this, or will soon enough?

Publication of this attack might be expensive for NXP and its customers, but it’s good for security overall. Companies will only design security as good as their customers know to ask for. NXP’s security was so bad because customers didn’t know how to evaluate security: either they didn’t know what questions to ask, or they didn’t know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.

It’s unclear how this break will affect Transport for London. Cloning takes only a few seconds, and the thief only has to brush up against someone carrying a legitimate Oyster card. But it requires an RFID reader and a small piece of software which, while feasible for a techie, are too complicated for the average fare dodger. The police are likely to quickly arrest anyone who tries to sell cloned cards on any scale. TfL promises to turn off any cloned cards within 24 hours, but that will hurt the innocent victim who had his card cloned more than the thief.

The vulnerability is far more serious to the companies that use Mifare Classic as an access pass. It would be very interesting to know how NXP presented the system’s security to them.

And while these attacks only pertain to the Mifare Classic chip, they make me suspicious of the entire product line. NXP sells a more secure chip and has another on the way, but given the number of basic cryptography mistakes NXP made with Mifare Classic, one has to wonder whether the “more secure” versions will be sufficiently so.

This essay originally appeared in the Guardian.

Posted on August 7, 2008 at 6:07 AM

Washington DC Metro Farecard Hack

Clever:

Thieves took a legitimate paper Farecard with $40 in value, sliced the card’s magnetic strip into four lengthwise pieces, and then reattached one piece each to four separate defunct paper Farecards. The thieves then took the doctored Farecards to a Farecard machine and added fare, typically a nickel. By doing so, the doctored Farecard would go into the machine and a legitimate Farecard with the new value, $40.05, would come out.

My guess is that the thieves were caught not through some fancy technology, but because they had to monetize their attack. They sold Farecards on the street at half face value.
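The arithmetic of the scheme, assuming the seed card was bought at face value and the doctored cards really did sell at half face value:

```python
# Worked numbers for the Farecard scheme described above, under the stated assumptions.
seed_card = 40.00                 # one legitimate $40 Farecard, sliced into four strips
top_up = 0.05                     # the nickel added to each doctored card
cards_out = 4

stored_value_out = cards_out * (seed_card + top_up)   # $160.20 of fare value created
outlay = seed_card + cards_out * top_up               # $40.20 spent
street_revenue = stored_value_out / 2                 # sold at half face value
profit = street_revenue - outlay
print(f"value out ${stored_value_out:.2f}, outlay ${outlay:.2f}, "
      f"street revenue ${street_revenue:.2f}, profit ${profit:.2f}")
# Roughly a fourfold multiplication of stored value per pass, and about $40 cash
# profit per seed card when sold on the street.
```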

Posted on July 22, 2008 at 12:29 PM
