Schneier on Security
A blog covering security and security technology.
February 6, 2014
Dispute Resolution Systems for Security Protocols
Interesting paper by Steven J. Murdoch and Ross Anderson in this year's Financial Cryptography conference: "Security Protocols and Evidence: Where Many Payment Systems Fail."
Abstract: As security protocols are used to authenticate more transactions, they end up being relied on in legal proceedings. Designers often fail to anticipate this. Here we show how the EMV protocol -- the dominant card payment system worldwide -- does not produce adequate evidence for resolving disputes. We propose five principles for designing systems to produce robust evidence. We apply these to other systems such as Bitcoin, electronic banking and phone payment apps. We finally propose specific modifications to EMV that could allow disputes to be resolved more efficiently and fairly.
Ross Anderson has a blog post on the paper.
Posted on February 6, 2014 at 6:05 AM
Fixing the Point of Sale Terminal (POST)
THINK: when you use your card, you are NOT authorizing ONE transaction; you are giving the merchant INDEFINITE, UNRESTRICTED access to your account.
If the merchant is hacked, the card numbers are then sold on the black market. Hackers then prepare bogus cards -- with real customer numbers -- and send "mules" out to purchase high-value items that can be resold.
It's a rough way to scam cash, and the "mules" are the ones most likely to get caught -- not the hackers who compromised the merchants' systems.
The POST will need to be redesigned to accept customer "Smart Cards".
The customer Smart Card will need an on-board processor with PGP.
When the customer presents the card, it DOES NOT send the customer's card number to the POST. Instead, the POST submits an INVOICE to the customer's card. On customer approval, the card encrypts the invoice, together with an authorization for payment, for the PCI (Payment Card Industry) service center and forwards the ciphertext to the POST.
Neither the POST nor the merchant's computer can read the authorizing message, because it is PGP-encrypted for the PCI service. The merchant's POST must therefore forward the authorizing ciphertext to the PCI service center.
On approval, the PCI service center returns an approval note to the POST and initiates an EFT from the customer's account to the merchant's account.
The POST will then print the PAID invoice. The customer picks up the merchandise and the transaction is complete.
The merchant never knows who the customer was: the merchant never has ANY of the customer's PII data.
Cards are NOT updated. They are DISPOSABLE and are replaced at least once a year -- when the PGP keys are set to expire. Note that PGP keys can also be REVOKED if the card is lost.
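The flow described above can be sketched end to end. This is a hypothetical illustration, not a real payment stack: the XOR "cipher" stands in for PGP, and all names here (card_authorize, pci_settle, the PCI key) are invented for the sketch.

```python
import hashlib
import json

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Placeholder for PGP: XOR against a SHA-256-derived keystream.
    # NOT secure -- it only models "the merchant cannot read this".
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def card_authorize(invoice: dict, account_number: str, pci_key: bytes) -> bytes:
    """Runs on the customer's card: encrypt invoice + account for the PCI center."""
    message = json.dumps({"invoice": invoice, "account": account_number}).encode()
    return toy_encrypt(pci_key, message)

def pci_settle(ciphertext: bytes, pci_key: bytes) -> dict:
    """Runs at the PCI service center: decrypt, approve, and report back."""
    message = json.loads(toy_encrypt(pci_key, ciphertext).decode())  # XOR is symmetric
    return {"approved": True, "invoice": message["invoice"]}

# The POST submits an invoice; the card returns ciphertext the merchant
# forwards but cannot read; the PCI center settles and returns approval.
invoice = {"merchant": "M123", "total_cents": 4999}
blob = card_authorize(invoice, "4111-1111-1111-1111", pci_key=b"pci-secret")
receipt = pci_settle(blob, pci_key=b"pci-secret")
```

The key property being modeled is that the customer's account number only ever appears inside the ciphertext addressed to the PCI center, never in anything the merchant's systems can read.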
This is already solved with MFA cards and one-time-use account numbers. There just hasn't been widespread adoption as the terminals are more expensive.
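One-time-use account numbers like those mentioned can be sketched as a counter-based derivation, in the spirit of HOTP. This is a toy with an invented function name; real schemes also embed issuer BINs and Luhn check digits.

```python
import hashlib
import hmac

def one_time_account_number(card_secret: bytes, window: int) -> str:
    """Derive a single-use 16-digit account number from a card secret and a counter.
    The issuer, holding the same secret, recomputes it to validate the charge."""
    digest = hmac.new(card_secret, window.to_bytes(8, "big"), hashlib.sha256).digest()
    return str(int.from_bytes(digest[:8], "big") % 10**16).zfill(16)

secret = b"issued-at-personalization"   # hypothetical per-card secret
n1 = one_time_account_number(secret, window=1)
n2 = one_time_account_number(secret, window=2)
```

Because each number is valid for one counter value only, a merchant breach leaks nothing reusable.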
The EMV specification is not an implementation specification; payment networks define implementation obligations. I do like the fact that Murdoch and Anderson focus on evidence gathering, something which is not always implemented well. However, the paper fails to mention the most important evidentiary mechanism at the core of EMV today, which provides irrefutable proof of transaction: the cryptogram. I recommend reading up on EMV cryptogram generation to get a better understanding of how it prevents replays, etc. What I find more interesting in this paper is its analysis of Bitcoin. Bitcoin is not a currency; it is a protocol, and it presents serious weaknesses.
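The replay protection the cryptogram provides comes from binding a MAC to a per-card transaction counter (the ATC). Real EMV uses ISO-specified 3DES/AES CBC-MAC key derivation; the HMAC-SHA256 sketch below only shows the structure, and the field names are made up.

```python
import hashlib
import hmac

def session_key(master_key: bytes, atc: int) -> bytes:
    # A fresh key is derived per transaction from the card master key and the ATC.
    return hmac.new(master_key, atc.to_bytes(2, "big"), hashlib.sha256).digest()

def cryptogram(master_key: bytes, atc: int, tx_data: bytes) -> bytes:
    # The cryptogram is a MAC over the transaction data under that session key.
    return hmac.new(session_key(master_key, atc), tx_data, hashlib.sha256).digest()[:8]

mk = b"issuer-card-master-key"   # known only to card and issuer
tx = b"amount=4999;currency=GBP;unpredictable_number=1a2b3c4d"
c1 = cryptogram(mk, atc=41, tx_data=tx)
c2 = cryptogram(mk, atc=42, tx_data=tx)   # next transaction: new ATC, new cryptogram
```

Since the ATC increments every transaction, replaying an old cryptogram fails verification at the issuer, which can recompute the expected value.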
The value of the cryptogram is limited. Most notably, UK banks have consistently refused to disclose the method they use to generate cryptograms or disclose the card keys even when the card is cancelled.
Therefore it is not possible to verify whether the cryptogram associated with a disputed transaction is valid. Even transactions where the cryptogram is invalid are sometimes authorised, so the presence of a cryptogram is not conclusive evidence that a transaction was performed by the correct card.
In addition, multiple cards with the same card keys have been seen in the wild, and hardware security modules processing card keys have been compromised. Therefore the ability to generate valid cryptograms is not restricted to the correct card.
So with the EMV cards we have now, the cryptogram is useful information and should be subject to better handling procedures in court, but there's a strong case for building something better than the cryptogram for handling disputes.
(The paper did mention cryptograms, but called them authentication codes because there wasn't space to introduce the EMV-specific jargon.)
Trying to 'fix' credit cards with EMV is like trying to fix a broken arm with an aspirin, you're not fixing the problem, just covering up one of the symptoms.
Payments are about authentication, not the payment processes themselves. You have money (debit or credit) and you want to spend it whenever, wherever, and however you want, and you want to do so safely.
The only way payments will move forward is by focusing on the undeniable trends, which are mobile and identity management. Credit cards have never done either of these things well, and WILL never do so.
Outlining the problem isn't hard. Designing an alternative system is tedious, but that's just work. The two really hard problems are switching over the installed base of devices (notice how lock bumping still hasn't caused all the locks to get swapped out) and deciding who owns the system design and gets paid for maintaining it.
Anybody recall the system BBN, the DOD, and the US Treasury designed and deployed in a pilot decades ago that allowed digital check writing? I think it ran on Palm Pilots. I'd love a link.
@Ben: Anybody recall the system BBN, the DOD, and the US Treasury designed ...
Try: "The Electronic Check Architecture" V1.0.2 29 September 1998, Financial Services Technology Consortium.
As for what happened to it, I don't know.
So here's the $100 million question: why is Target spending $100 million to convert all its cards to chip and PIN in the wake of the hacking scandal, when in fact this is nothing more than closing the front barn door while throwing the back barn door open?
Isn't it felony destruction of evidence to tell customers to cut up their chip cards when there has been a fraud? In such a circumstance they have to know that the data on the card is liable to become evidence in a court case. Perhaps a short letter from a prosecutor to the CEOs of a few banks, threatening the CEOs with jail time, would end that problem. This might even work in some countries with poor regulation, if extradition was a possibility.
@ Steven Murdoch
“The value of the cryptogram is limited. Most notably, UK banks have consistently refused to disclose the method they use to generate cryptograms or disclose the card keys even when the card is cancelled.Therefore it is not possible to verify whether the cryptogram associated with a disputed transaction is valid…”
If you are correct the “chip and PIN” has serious flaws.
“This is already solved with MFA cards and one-time-use account numbers.”
Yes, maybe some of the fraud - not all.
The Cambridge researchers use a five-part apparatus to conduct the MITM attack. Page six of the 'Chip and PIN is Broken' PDF shows a somewhat complex arrangement of a stolen credit card, a card reader, a laptop, an FPGA board, and a fake card connected by small wires, which is then inserted into a POS terminal. That is a bit unwieldy in some POS terminal sales situations -- but not all.
The authors indicated a better method, “…we can envision a carrier card that hosts a cutout of the original card, which interfaces with a microcontroller that communicates with the terminal.” Page 5 of the pdf.
There probably are better ways of running the attack. I think that others will have figured it out (possibly with the help of an inside man). So, the ‘chip and pin’ cards are not a panacea for stopping bank card fraud.
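The published no-PIN attack boils down to one trick: the wedge relays every APDU between terminal and card unchanged, except the VERIFY (PIN check) command, which it answers itself with the ISO 7816 success status 0x9000. A minimal sketch, with a stand-in card object invented for illustration:

```python
VERIFY = 0x20  # INS byte of the ISO 7816 VERIFY (PIN check) command

def mitm_relay(apdu_from_terminal: bytes, card) -> bytes:
    """Wedge between terminal and stolen card (sketch of the published attack)."""
    ins = apdu_from_terminal[1]  # command APDU: CLA, INS, P1, P2, ...
    if ins == VERIFY:
        # Never forward the PIN to the card; just tell the terminal "PIN correct".
        return bytes([0x90, 0x00])
    # Everything else is relayed unchanged, so the card still produces
    # a valid cryptogram for the transaction.
    return card.process(apdu_from_terminal)

class FakeCard:
    def process(self, apdu: bytes) -> bytes:  # stand-in for the real card
        return b"\x90\x00"

# Terminal sends VERIFY with PIN "0000"; the wedge answers success itself.
resp = mitm_relay(bytes([0x00, 0x20, 0x00, 0x80, 0x04]) + b"0000", FakeCard())
```

The card never sees a PIN attempt, so its records say signature or no verification, while the terminal's records say PIN verified -- exactly the evidentiary gap the paper is about.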
I noticed that Visa is ramping up its customer-spending "profiling" system with new IBM computers to stop bank card fraud. Whether this approach is better than the "chip and PIN" approach is yet to be demonstrated.
'Visa said the "Z Transaction Processing Facility" 64-bit operating system allows more complex transactions to be processed more quickly in milliseconds… computers behind Visa's fraud detection network, VisaNet, are capable of processing some 130 million transactions per day and running more than 20,000 a second.… This network runs Visa's so-called "Advanced Authorisation", which attributes fraud risk scores based on global spending patterns… The company also developed a cross-border model that can detect more than three times the level of fraud which occurs outside of a cardholder's country…'
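Profiling systems like the one described score each transaction against the cardholder's history. The features and weights below are invented for illustration; real models are proprietary and far more sophisticated.

```python
def risk_score(tx: dict, profile: dict) -> float:
    """Toy fraud-risk score (0-100); hypothetical features, not Visa's model."""
    score = 0.0
    if tx["country"] != profile["home_country"]:
        score += 40.0        # cross-border spending
    if tx["amount"] > 3 * profile["avg_amount"]:
        score += 30.0        # unusually large purchase
    if tx["merchant_category"] not in profile["usual_categories"]:
        score += 15.0        # merchant type the cardholder never uses
    return min(score, 100.0)

profile = {"home_country": "US", "avg_amount": 55.0,
           "usual_categories": {"grocery", "fuel"}}
tx = {"country": "RO", "amount": 900.0, "merchant_category": "electronics"}
score = risk_score(tx, profile)   # all three signals fire: 40 + 30 + 15
```

The open question raised below still applies: a high score only flags the transaction; someone still has to decide what to do and who bears the cost.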
The question is once the fraud is detected what will they do about it? Who pays the costs?
@ 65535 (0177777 / 0xffff ;-)
So, the 'chip and pin' cards are not a panacea for stopping bank card fraud
They never were.
EMV designed them as a way to move liability away from themselves and the banks and towards the merchants and card holders.
The design is like a cross between a "Swiss cheese full of holes" and a "bad-smelling French cheese, crusty on the outside but lethally diseased inside". The UK banking industry is well aware of this and quite deliberately withholds information from courts to try to prevent the dire state of EMV's system becoming known, but more importantly to hide what is in effect fraud by the banks.
Having said this, I'm fairly sure some "crusty fart" from EMV or one of the banking industry fronts will come by with a bunch of "moral outrage" puffery and spin, as they have done in the past -- some of which Ross J. Anderson has commented on before.
"The question is once the fraud is detected what will they do about it? Who pays the costs?"
First they call the cardholder, asking to verify the transactions.
Then they void the transactions, cancel the card, and mail a new card.
It's happened to me twice with 2 different issuing banks in the last 1-2 years, both times in Florida, and I had not left my home state.
AFAIK, the bank and the card brand (Visa/MC/AMEX) share the costs.
If a merchant is found to have been breached, and out of PCI DSS compliance, then the merchant may be fined up to $500k.
Let's talk about trust in security protocols. Trust in the OpenSSL library for instance.
Last Sunday I saw a presentation by Poul-Henning Kamp, a well-respected FreeBSD developer. He talked about how insecure we actually are on the internet and on PCs in general. And it's mostly because of the bloat. The OpenSSL library is really terribly bloated. And everyone trusts it and uses it, whether they know it or not.
The OpenSSL library is 20 MB (source code + documentation). It contains assembly code that is wrapped up in Perl, it has platform- and hardware-dependent code in the APIs, and the code is REALLY, REALLY CRAP. It has zillions of #ifdefs. Some of the comments are mind-blowing. Some of the guys really didn't know that this is a security library. The makefiles are 25 kB. Yes, I say makefiles, not makefile. In the root directory there are 3 makefiles and one backup file (... they use CVS, right?). It has support for OS/2, a platform that really shouldn't be connected to any network AT ALL. It has support for some 50 crypto algorithms, most of them with massive assembly code.
How can you trust a library like OpenSSL? It's worse than X.
I talked a bit about this on this site a while back. Part of the problem is not just the libraries, but the protocols as well. They are complicated; the TLS protocol, for example, is a 90-page specification; you can't implement it simply. Everything needs to be broken out into modular standards that can be reviewed individually, and small modular libraries to encompass those standards that can also be easily reviewed. Each standard should be no more complex than necessary to accomplish its goals, and practically memorizable in and of itself -- no more than, let's say, ten pages (the smaller the better). More complex protocols would be built from smaller protocols and not go into all the details.
If the protocols are simple, the libraries can be simple; if the protocols are complex, the libraries will undoubtedly be complex. That's not to say that there is no room for improvement. I've been meaning to fork GnuTLS and break it up into two (or possibly more) libraries: a low-level crypto library implementing the algorithms, and a higher-level library implementing the protocols (currently the API tightly couples public-key algorithms like Diffie-Hellman with the TLS protocol).
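The layering being proposed can be illustrated in miniature: a primitives module that knows nothing about the protocol, and a protocol layer that only consumes the primitives API. The class names and the toy KDF are hypothetical, not GnuTLS's actual structure.

```python
import hashlib
import hmac

class Primitives:
    """Low-level crypto layer: algorithms only, no protocol knowledge."""
    @staticmethod
    def kdf(secret: bytes, label: bytes) -> bytes:
        # Toy labeled key derivation (HMAC-SHA256), standing in for a real KDF.
        return hmac.new(secret, label, hashlib.sha256).digest()

class HandshakeLayer:
    """Protocol layer: knows about key schedules, uses only the primitives API."""
    def __init__(self, prims):
        self.prims = prims  # dependency injection keeps the layers reviewable apart

    def derive_keys(self, shared_secret: bytes) -> dict:
        return {
            "client_write": self.prims.kdf(shared_secret, b"client"),
            "server_write": self.prims.kdf(shared_secret, b"server"),
        }

keys = HandshakeLayer(Primitives).derive_keys(b"shared")
```

Each layer can then be audited against its own short specification, which is the whole point of the modularity argument above.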
When you talk about TLS, then you are *only* talking about a protocol that should secure data transport between two endpoints. That's it. There shouldn't be options in it. Everything should be mandatory (such as perfect forward secrecy and authentication) and nothing more should be included (such as only one cipher with a fixed block size). Yet the specification counts: 21x "option", 116x "may", 4x "recommended", 69x "should".
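That kind of count is easy to reproduce against any specification text. A small sketch using the RFC 2119 requirement keywords; the sample string here is made up, and you would feed in the actual RFC text to get the numbers quoted above.

```python
import re
from collections import Counter

def keyword_counts(spec_text: str) -> Counter:
    """Count RFC 2119 requirement keywords in a specification's text.
    Longer alternatives come first so 'MUST NOT' isn't counted as 'MUST'."""
    words = re.findall(
        r"\b(MUST NOT|MUST|SHOULD NOT|SHOULD|MAY|RECOMMENDED|OPTIONAL)\b",
        spec_text,
    )
    return Counter(words)

sample = ("The server MUST send the extension. "
          "The client MAY omit it. Logging the retry limit is RECOMMENDED.")
counts = keyword_counts(sample)
```

A high ratio of "MAY"/"SHOULD" to "MUST" is a rough but telling measure of how many optional paths an implementation has to handle.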
You know, the critics are right. Committees are just incapable of creating something simple. For creating a protocol there should be one team leader, one architect, one documentation writer, and a handful of engineers. No lawyers, spin doctors, or other important people.
We are talking about fiction here, but if it were me, I would replace HTML, FTP, WebDAV, and even e-mail (and probably half a dozen more protocols) with something very simple: a virtual filesystem (think 9P / FUSE).
Returning to the library.
The problem with GnuTLS is in fact the license. Although I like the "free" aspect of it, GPLv3 is not very compatible, not even with GPLv2.
Another problem is "the mentality". Crappy work should not be accepted. Period. If you want to do something, you have to do it completely: writing clear code, properly checking return values, testing, writing documentation, etc. I mean, the guy working on the code knows best what he is doing (and only for a small period of time). And we are talking about an important piece of software.
If I would break GnuTLS up it would be in a server and a client part.
The server would be the simplest ever, with everything mandatory, no options, and nothing more than absolutely necessary. So it would use a small subset of TLS 1.2, only AES-256, no assembly or platform-specific code (we have compilers for that), and so on. Testing would also be simple and functional, covering every aspect, but not excessive. Not dependent on GNU autotools.
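Something close to that "no options" server can be expressed with Python's standard ssl module: pin the protocol to TLS 1.2 only and allow a single AES-256-GCM suite with forward secrecy. This is a configuration sketch, not a hardened deployment; the cert/key paths are placeholders.

```python
import ssl

def strict_server_context(certfile: str = None, keyfile: str = None) -> ssl.SSLContext:
    """An 'everything mandatory' server context: TLS 1.2 only, one cipher suite."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # No version negotiation range: exactly TLS 1.2.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    # One suite: ECDHE gives forward secrecy, AES-256-GCM is the only cipher.
    ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384")
    if certfile:  # placeholder paths; supplied by the deployment
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

ctx = strict_server_context()
```

A client offering anything else simply fails the handshake, which is exactly the "no options" behavior being argued for.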
The client is more problematic. There are millions of servers, running all sorts of SSL/TLS (thank you IETF). Again, I would kill all the assembly and hardware / platform specific code. Make it very portable (only POSIX/libc/no #ifdefs).
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.