Data at Rest vs. Data in Motion

For a while now, I’ve pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data—data at rest—it was viewed as a form of communication. In “Applied Cryptography,” I described encrypting stored data in this way: “a stored message is a way for someone to communicate with himself through time.” Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are immediate and instantaneous. Encryption keys can be ephemeral, and systems like the STU-III telephone can be designed such that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a “computer” that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it’s primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people’s brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let’s take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn’t make any sense. The whole point of storing credit card numbers on a website is so it’s accessible—so each time I buy something, I don’t have to type it in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet’s infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that—I’m being approximate here), but increases the attacker’s workload exponentially. For many years, we have exploited that mathematical imbalance.
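That imbalance can be made concrete with a little arithmetic. A minimal sketch (illustrative only; real brute-force cost also depends on the cipher and the attacker's hardware):

```python
def brute_force_trials(key_bits: int) -> int:
    """Worst-case number of keys an attacker must try by exhaustive search."""
    return 2 ** key_bits

# Adding one bit doubles the attacker's work...
assert brute_force_trials(129) == 2 * brute_force_trials(128)

# ...while doubling the key length squares it (exponential growth),
# even though the defender's per-encryption cost grows only modestly.
assert brute_force_trials(256) == brute_force_trials(128) ** 2
```
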

Computer security is much more balanced. There’ll be a new attack, and a new defense, and a new attack, and a new defense. It’s an arms race between attacker and defender. And it’s a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we’re stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn’t try to break PGP; they simply installed a keyboard sniffer on the target’s computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can’t rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.

EDITED TO ADD (7/14): As several readers pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That’s a harder hacking problem, and this is why credit-card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it’s a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer-security sense and not in a cryptography sense.

Posted on June 30, 2010 at 12:53 PM · 44 Comments


Rich June 30, 2010 1:31 PM

It may seem a silly question, but consider the following in regards to your e-commerce example. I realize encrypting the entire database would be ridiculous, and taxing on the system in question, but what if we were selective about what we encrypted? Let’s say we encrypt sensitive data, in this case a credit card number, using the user’s password. Upon logging in, the system could then decrypt the card number and store it in a session variable or even a temp table. When the user logs out, the session or temp table is cleared of their sensitive information and the database would be safe(r).

Rich June 30, 2010 1:35 PM

I just answered my own question. This solution would require 2 passwords, one for logging on, and one for decrypting sensitive information. So, I retract my previous comment. Temporary lapse in judgment.

Gal Shpantzer June 30, 2010 1:37 PM

Re: data at rest… Credit card databases are a bit tricky but what about full disk encryption for laptops? Or, sending a CD/DVD/tape with UPS/Fedex so that if/when the package is lost/intercepted, the data on the media isn’t exposed?

Miles Baska June 30, 2010 1:41 PM

Great essay, even if it was “lost” for four years.

Seems more and more of these back office systems ARE encrypting the credit card, or more precisely, all but the last four digits. None of the folks touching this data can tell me just HOW that’s done, and who (if anyone) has the keys. Seems the whole process should be open and understood.

Suddenly reminds me of something we used to say back in college: When cryptography is outlawed, bayl bhgynjf jvyy unir pelcgbtencul.

Jason June 30, 2010 1:41 PM

In the example given (storing credit cards on a website), cryptography is the solution to the problem, but the wrong question is being asked.

The question is not, “how do I store these credit card numbers so nobody can steal them?” but rather, “how can I verify the identity of the cardholder in a way that nobody else can take advantage of?”

And the answer to that is public key cryptography and/or token-based authentication.

In short, the problem is not that cryptography is useless, it’s that we’re trying to use cryptography to protect credit cards, which are inherently broken.

Matt June 30, 2010 1:52 PM

I think secure crypto-processors like those in TPMs and smart cards offer some hope. Imagine you had an HSM that could impose some kind of rate limiting, or trigger an alarm if too much information was decrypted too quickly.

Brandioch Conner June 30, 2010 3:02 PM

“In short, the problem is not that cryptography is useless, it’s that we’re trying to use cryptography to protect credit cards, which are inherently broken.”


In order to really fix the system, the approach has to be changed. For example: single-use encryption that binds the transaction amount, the vendor identification, and the date/time to a temporary account.

Today, if I make a purchase, the entire amount of my account is available to whoever can copy my information. That’s beyond stupid.

And with Chase (the bank I use) there is no way to get temporary account numbers where I can deposit exactly the amount that I want available to the vendor AND NOTHING MORE.

RH June 30, 2010 3:27 PM

@Matt and Brandioch:

Isn’t it interesting that our day-to-day commerce is the equivalent of signing blank checks and asking the vendor to add their name and the right value? To finish the analogy, assume a signed blank check can be photocopied and used by anyone who ever sees the original.

W June 30, 2010 3:30 PM

“I just answered my own question. This solution would require 2 passwords, one for logging on, and one for decrypting sensitive information. ”
I don’t think so. Just using two different seeds in the key derivation function should be enough: one to derive the encryption key and one to derive the hash used to authenticate the user.
I’d guess the reason it isn’t used is that the shop sometimes needs the data after the user has logged out, perhaps in case of fraud, or if the user asks his bank to charge back the money.

Mark June 30, 2010 4:14 PM

Great essay, as it gets to the point that cryptography is a point solution. It solves the problem of how to protect against disclosure, and in some cases loss of integrity, in a single situation. The issue highlighted here, which is generally the root cause of modern information security problems, is protecting the interfaces between people and a system, and between systems. Protecting against disclosure of information in these cases always involves much more than just simple secrets.

trsm.mckay June 30, 2010 4:21 PM

@Matt asked about the use of crypto processors (aka HSM, aka TRSM) to protect credit card info in a merchant’s database. Let me first point to a quote from Bruce’s original post: “In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.”

I would then add that it makes the smaller set of secrets easier to control. This question of control is at the root of Matt’s question. Although it could be done, neither rate limiting nor velocity checking is an ideal control point for a TRSM. One of the best ways to make a TRSM successfully resist attacks is to keep it fairly simple, and one of the key ways to accomplish this is to reduce internal state. Keeping state about how keys can be used, or about which type of user can access which function, is OK; but anything that requires keeping track of, and making decisions based on, all transactions is normally going to be too difficult for high-assurance design and implementation.

I designed a merchant credit card protection system a while back, and it uses a TRSM to control the transactions more naturally. The basic idea is taken from the way PINs are handled in ATM/POS financial security: credit cards don’t normally appear in the clear in these systems. But unlike PINs, there are some legitimate reasons to see a customer’s credit card number. The one thing this design requires that is not typical (at least at the time when I designed it) is that communication between the merchant and the CC acquirer must also be encrypted.

So the first step is to protect the credit card info during the normal transaction. The merchant uses an HSM to handle the SSL tunnels, and normally the TRSM will decrypt the data that is sent to the web server. The HSM has just enough intelligence to recognize when the CC data is being sent over the tunnel (nCipher released a demo of this a while back), and will wrap it in a carefully designed encrypted data blob. This blob is stored in the database instead of the normal CC data. When the transaction is sent for approval or processing, a TRSM is used to cryptographically translate the blob for the CC acquirer. So long as the TRSM is set up and maintained properly, the CC data is never accessible, no matter how much access a hacker has to the merchant’s database. At best they can do fake transactions, but not access the data from the real ones.

I won’t go into too much detail about what happens when the merchant actually needs to see the CC data (in part because this is where a good deal of the value-add of my design is). But the high-level idea is that the TRSM can be designed to evaluate security policies and, if allowed, translate the CC data blob to an individual user’s token-protected key.

The control points in this type of design are more natural “choke points” for the typical TRSM design.

109 June 30, 2010 5:12 PM

“credit card databases associated with websites… If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it?”

You are right that storing keys in the same place as the encrypted data does not make any sense. But your example with credit cards is flawed. Credit card data can be encrypted with the user’s password (which itself is not stored on the website, only a hash is), and only decrypted when the user is logged in.

In other words, the whole database is not encrypted with the same key; every record is encrypted with its own key [which is not stored].

109 June 30, 2010 5:16 PM

“…credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.”

That’s why it is important to keep databases encrypted.

109 June 30, 2010 5:21 PM

Rich> This solution would require 2 passwords, one for logging on, and one for decrypting sensitive information

Why can’t the password used for logging in also be used for encrypting the sensitive data?

AndyJ June 30, 2010 6:09 PM

Encrypting the credit card number on the database prevents the database administrator from seeing the credit card number.

The DBA is able to perform maintenance on the database (like drop records older than x).

The DBA does not get rights to allow access to the service that retrieves records and decrypts the credit card number.

Bill June 30, 2010 6:34 PM

Why can’t they just implement a salted SHA-1 hash to authenticate users to the site or database?

Shabble June 30, 2010 8:22 PM

I’ve only skimmed the comments, so this might be redundant, but encrypting data in the database may provide some protection from certain attack vectors.

For example, an SQL injection attack may lead to data leakage (the classic ' OR 1=1; -- springs to mind), but depending on database user privileges, the attacker might not have access to the rest of the system or network.

In that case, they can extract the encrypted data from the database, but without the key, which is handled at the app layer, it is effectively useless.

blue92 June 30, 2010 10:06 PM

“Those databases are not encrypted because it doesn’t make any sense.”

I guess this is an old article. They are encrypted now… or at least they are supposed to be. Every web site that handles a large number of credit cards these days deals with PCI requirements. (You get to pay them to come audit you, and fine you if you don’t at least look like you’re trying to meet every one of the 200-something bullet points.) Why would they want to fix credit card security? Leaks are the fault of the vendor. It’s a cottage industry extorting money from companies that can’t survive without taking credit cards and don’t have in-house security resources.

“Why can’t they just implement a salted SHA-1 hash to authenticate users to the site or database?”

You can (and do), but DB authentication is only one vector. Everybody’s just as paranoid now about untrusted employees — any programmer with access to source can collect passwords. Which means that now you are (ideally) supposed to isolate the key management to a separate encryption service, reduce the access to that box to a minimum number of users, and audit/throttle encryption/decryption access. Which of course just pushes your untrusted employee to leak your precious data by a slow trickle on some man-in-the-middle channel.

Personally I’m waiting for the banks to suggest CIA secret clearance for fast food cashiers in the next compliance recommendations…

Nick P June 30, 2010 10:35 PM

@ Bruce Schneier

Overall, your essay is nice as usual. But your claim about credit-card-wielding web sites appears to be based on a flawed assumption. The fact that both the web site and the database server use the database doesn’t imply they are logically or physically in the same place. A common example would be splitting the system into a database server and an application/web server.

The database might be SQL Server on Win 2003 and the ecommerce system a well-designed app running on OpenBSD or RedHat/J2EE. If the database only receives/retrieves encrypted data, then a compromise of the Win 2003 machine and theft of the database will result in zero data leakage. Total compromise of the web server might result in total data loss. Compromise of the web application logic can result in slow to high data loss, while a faulty authentication scheme results in loss limited by both network and server response speed. In other words, the encryption totally stops the attack you said is most preferred and slows down the others. This seems quite beneficial, especially considering the heterogeneous nature of modern networks.

A better, but more limiting, design might have three systems: a database server, an encryption appliance, and a web/app server. The database and web/app server would communicate through properly formatted messages via software agents installed on each. These messages would go through the encryption appliance, which decides what data to release, generates/stores keys, etc. Firewall policy creates a DMZ with these rules: the database and the crypto appliance can talk to each other; non-DMZ devices only see the web server; the web server only talks to the crypto appliance. This already reduces risk on the web server side and totally mitigates the database side, but it allows us to go further.

The web application is built very modular and multiprocess to isolate faults. User login goes through an authentication process that tells the crypto appliance to pull the user’s info. That user is considered one of the “active users.” Other parts of the web app can only work with the data of active users. They are also designed not to “accidentally” work on data for inactive users. The crypto appliance can be configured to fail safe upon unauthorized access and report it to the admin. It might also prompt the immediate poweroff of the offending system so its logs could be retrieved with little tampering. This functionality further reduces the risk of web server compromise. It wouldn’t work for apps like Facebook, but many simpler stores could use it straight up with a document-based database system, multiple databases for different purposes, and a modular web app design with appropriate permissions.

Regardless of which design is chosen, flexible or restrictive, each eliminates a huge attack vector and reduces the gains of the others. So, why is encryption of data between logically separated systems useless again? Rephrased: a trusted system encrypts the data it stores on another system so that the other system doesn’t have to be trusted. Less trust is always a good thing to me, as far as PCs go. 😉

Jon June 30, 2010 11:06 PM

Ask this question: Why are they being stored?

For a little convenience? Is it really that difficult to just type them in again?

Why shouldn’t the web sites keep them for just long enough to validate one transaction, then fry them?

At least that way they don’t store, as another poster put it, the keys to your whole bank account.


Tom T. July 1, 2010 12:04 AM

@ Jon: Bingo!

Many retail sites offer choices:
“Make this my default (credit card) payment option”
“Add this to my list of payment choices”
“Use this time only” (or similar language).

I’m hoping that the third choice means that indeed, they use it and fry it after validation (or after shipping and charging the card), which greatly reduces, if not eliminates, the “at risk” period for that card number with that merchant. In any event, it’s what I always choose. It just_isn’t_that_hard to pull out your card and type 16 digits and a CVV. (Much less effort than most of these posts, LOL! 🙂)

Nick P July 1, 2010 12:39 AM

@ Tom T

“it just isn’t that hard to pull out your card and type 16 digits and a CVV”

Couldn’t have said it better! Even if you have to type in the full info for the invoice every time, it’s still a great risk reducer if it avoids permanent storage. Another problem you avoid by using one-time payments is unwanted repeated charges. Companies like Comcast are notorious for sucking money out of your account after you’ve terminated service early, but only if you gave them consent for automatic payment. One-time payments don’t have that problem.

Pooka July 1, 2010 12:42 AM

All security eventually boils down to “social engineering”.

Even though machines may be the greatest consumers of secure data such as CC numbers, it’s the humans who work to abuse them.

All of you are just as brilliant as me and can dream up scenarios on how to obtain sensitive data. Use one-time cards online, or find a way to pay cash in person.

The only winning move is not to play.

AC2 July 1, 2010 1:39 AM

@Nick P

I must confess to a slightly elevated heart rate after reading this part of your post:

“The database might be SQL Server on Win 2003 and the ecommerce system a well-designed app running on OpenBSD or RedHat/J2EE. If the database only receives/retrieves encrypted data, then a compromise of the Win 2003 machine and theft of database will result in zero data leakage.”

First bit was the mention of SQL Server on Win 2003, unless it was just to drive home the point…

Second was the bit that everything in the DB is encrypted and the app server or security appliance needs to handle the encryption/decryption… This would make your DB backups hostage to the availability of the app server/security appliance, which imho is not a good thing…

q July 1, 2010 2:10 AM

@Rich “This solution would require 2 passwords, one for logging on, and one for decrypting sensitive information.”

No, in fact, a usable solution with single password is possible. Basically, you store in the database H(password | K1) for authentication, and use H(password | K2) as an encryption key for sensitive data. Here H(x) is a hash function (e.g. SHA256), and K1 and K2 are different constants that serve as hash keys.
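The scheme above can be sketched in a few lines. This is an illustrative sketch, not audited code: the function names and the K1/K2 purpose labels are my own, PBKDF2 stands in for the generic H(x), and a real system would feed the derived key into a vetted AEAD cipher rather than roll its own card encryption.

```python
import hashlib

def derive(password: str, salt: bytes, purpose: bytes) -> bytes:
    # H(password | K) from the comment above, realized as PBKDF2 with a
    # purpose constant mixed into the salt so the two outputs are unrelated.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt + purpose, 200_000)

salt = b"per-user-random-salt"                        # stored with the user record
auth_token = derive("hunter2", salt, b"K1-auth")      # stored: verifies login
card_key = derive("hunter2", salt, b"K2-encrypt")     # never stored: unlocks the card data

# Knowing the stored auth_token reveals nothing about card_key.
assert auth_token != card_key
```

The site checks `auth_token` at login and uses `card_key` only in memory for the session, so a copy of the database alone yields neither the password nor the card numbers.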

Nick P July 1, 2010 3:15 AM

@ AC2

“First bit was the mention of SQL Server on Win 2003, unless it was just to drive home the point…”

That was indeed the point. 😉

“would make your DB backups hostage to the availability of the App Server/ Security Appliance…”

…if it wasn’t part of the backup plan. A protocol can be devised for that.

Ian July 1, 2010 3:54 AM

@Jon “Ask this question: Why are they being stored?”

One major reason is that often the merchant is only allowed to debit the cardholder when the goods are shipped, not when the order is placed, and that requires the card number.

Piet July 1, 2010 4:24 AM

Why would anyone want to type a 16-digit number every time you try to buy something if you are not accountable if your card is compromised anyway?

Or has the accountability for credit cards changed recently? (I do not own a credit card).

Clive Robinson July 1, 2010 7:27 AM

@ Bruce,

I had a chat with a “reader” at a well known UK university back in 1995 about an aspect of data “in transit” -v- “at rest”.

And perhaps surprisingly “security” was not the main area of interest…

This was possibly due to the fact it was with respect to part of a course titled “The Information Economy”.

At that time many researchers were viewing information from an economic perspective and were relating it to the trading of tangible assets and tokens representing abstract valuation of tangible assets (ie certificates of ownership, bonds and other financial instruments).

Basically, they were trying to leverage existing models from the tangible economic world into the as yet unknown intangible economic world.

One area that cropped up was: if you evaluated information as a replacement for tangible money, what was its value in transit and in a market (ie the equivalent of seigniorage and churn)?

It produced some very strange results and ideas which did have some quite distinct security aspects, and not just the widely known ones relating to “traceability” (proceeds of crime, double spending, taxation etc).

Anyway, a bit off topic for this particular blog page, but definitely something for you (and others) to think on, as it is an area that is going to be much more relevant in (the hopefully near) future and will bring with it a whole load more (nearly) new security issues.

paul July 1, 2010 11:53 AM

I think that the lesson of encrypting laptops and CDs and DATs etc is that it’s often difficult to distinguish between data in transit and data at rest. Data sitting in a file on some storage medium is logically at rest, but if it’s physically in transit it’s vulnerable and needs to be encrypted. (And conversely, encrypting data that’s in transit over a physically restricted network doesn’t buy you all that much, because if someone has access to the wire, they also have access to the endpoints and you’re up the creek anyway.) The other thing, which many people have mentioned, is that access to unencrypted data is very rarely necessary; hash comparisons or their moral equivalent will typically do just as well.

I think the best argument for encrypting stuff in general (in addition to the “make a smaller secret” one) is KISS. If you no longer have to invoke special treatment for files that happen to reside on physical media that might be moved, you may simplify your architecture in useful ways. But then I’m not an expert, and some of those ways might be more useful to attackers.

kog999 July 1, 2010 1:14 PM

I once worked for a company that was trying to be PCI compliant. Their solution to encrypting credit card data in the database was to run a program on one of the developers’ machines (a Windows XP box he used to go online and do his dev work) that would query the credit card table and update all the unencrypted numbers with a hash value. This program was set to run via a scheduled task every 10 minutes (assuming of course his computer wasn’t turned off and the task wasn’t otherwise unrunnable). True story. I couldn’t even begin to list how many ways this was stupid.

Also, I agree with a lot of the other commenters: the most effective way to prevent the leakage of data is not to have it.

Tom T. July 1, 2010 10:01 PM

@ Ian: Please see my response to Jon, right below his, which acknowledged that the card number might have to be stored until the goods are shipped. — “which greatly reduces, if not eliminates, the “at risk” period…” We can’t always get 100% risk elimination. But isn’t shortening the risk period by 99.7% (3 days out of 3 years), versus leaving the card info with that merchant until the card expires, a major improvement?

@ Piet: Worse things can happen. If the card is charged over the limit, credit agencies may be auto-notified. If it is used to help commit so-called “identity theft” you may spend years, thousands of hours, and much money trying to regain your credit standing and undo the damage. This goes far beyond the $100 charge for a single item that the credit card company agrees to take off your bill.

Matt S. July 1, 2010 10:20 PM

I’ve been living/breathing encryption for the past 3 years, 2 of which were spent developing a SaaS app that would allow anyone to securely share information using end-to-end encryption, without installing any software and without having any knowledge of encryption. Most importantly, it had to be owner-proof. As mentioned in one of the posts, so long as the site owner has access to the encryption keys, your encrypted information is not truly secure.

The initial launch was in Dec. ’09. It is a secure threaded messaging application. No PII other than your email address is stored in our databases. Every thread (text and attachments) is AES encrypted using a system-generated key. The encryption keys are stored on a different server than the encrypted data. The app is written in VB.Net with SQL Server 2008 databases.

As described in detail in our FAQ, there are 2 levels of confidentiality: Confidential and Secret. In both cases, every thread has its own system-generated key. Confidential keys are stored in plain text. Secret keys are encrypted using another passkey provided by the user. That passkey is also stored on the key server in a personal keybox. The user’s keybox is encrypted by the user’s password. We only store the MD5 hash of the user’s password, so the keybox can only be opened when the user is logged in. Secret messaging is more secure, but the user must share their passkey with anyone they invite to their threads. Those users also have the option of storing other users’ passkeys in their keybox for convenience. There is no limit on the number of passkeys that can be created for Secret messaging. I, as the database owner, could decrypt Confidential threads, but not Secret threads.

When a user logs in, their password is stored in a session variable which is encrypted by .Net and never appears on the page. When a user displays their threads, if any of them are Secret threads, the app automatically tries to find a passkey in the keybox that will decrypt the thread. If one cannot be found, then the thread is displayed with a lock icon that can be clicked to be prompted for the passkey.

This technique is working very well, and users seem to be satisfied with the ease of use and the security provided. There are several privacy options like IP address verification and multi-factor authentication. We contracted professional hackers to do penetration testing, and we passed. If you are serious about protecting your online communications, check us out.

Pontus G July 2, 2010 8:42 AM

Re the PCI requirements for encrypted credit card number storage: is the intent to increase security, gain regulatory compliance (vs e.g. ‘safe harbour’ provisions) or may it be to some degree a case of security theatre? From what I’ve gleaned, PCI requirements on key management are weak.

Rich July 2, 2010 10:26 AM

@Those who pointed out it wouldn’t require 2 passwords:

I realized while walking around yesterday that the solution could use two different salts. Thanks for pointing that out. Which salts you decide to use would vary from solution to solution, but I have always been a fan of using the result of an OTP.

trsm.mckay July 2, 2010 5:27 PM

@Nick P –

Your description of a solution that uses isolated servers (a database server, an encryption appliance and a web/app server) is conceptually very similar to what I described.

It reminds me: it used to be fairly rare for me to see mainstream computer security techniques and practices that were useful to my more esoteric specialty of designing TRSM interfaces. I just did not have much interest in the configuration of firewalls and network monitoring (back in the hard-outside-and-soft-center days).

But as distributed systems became more common (because of web server limitations, compared with the mainframe days), and more mainstream attention was paid to things like securing systems using SOAP, I finally started seeing some overlap in concepts.

Basically it requires a fairly good analysis of what the threats are, how much trust you have in the various components, and what type of isolation properties they might have. The complicated part (which is also my favorite part) is designing the data distribution and the APIs between the components of the system so that they really can achieve the desired security goals.

I’m tempted to say there is no difference, from a protocol standpoint, between a TRSM and a well-isolated encryption server/appliance; but in the real world that is not quite true. If you have highly trusted components like a TRSM, you can sometimes bias the design to reduce the amount of trust placed in other nodes – something you might not want to do if the encryption node were weaker against conventional OS- and network-based attacks.

Glenn Maynard July 4, 2010 7:22 PM

The credit card example is a poor one. It makes a lot of sense to encrypt credit card numbers on the server. Use the user’s login credentials as the encryption key, so that in order to access that sensitive data, the user must be actively using the site. (If it’s necessary to access the data for a period after a purchase, e.g. at shipping time, as was mentioned, then store the credentials and clear them when you’re done.)

This allows the webserver to access the credit card number for checkout purposes without the user having to continually reenter it. It also allows the merchant to avoid storing everyone’s credit card number in an accessible format all the time, avoiding the problem of a database leak exposing everyone’s credit card numbers. Only the cards active at the time of the breach would be directly affected.

This bounds how long you retain access to the user’s data, limiting the damage of a data leak while retaining the necessary convenience of storing payment information.
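Glenn's scheme can be sketched in a few lines: derive an encryption key from the login password at authentication time, keep it only in the session, and discard it at logout, leaving the stored ciphertext unreadable to the server. A stdlib-only toy (a PBKDF2-derived pad stands in for real authenticated encryption; all values are illustrative):

```python
import hashlib
import secrets

def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# At checkout: encrypt the card number under a key derived from the password.
salt = secrets.token_bytes(16)
card = b"4111111111111111"  # test card number
stored = xor(card, derive_key("login password", salt)[:len(card)])

# While logged in, the server re-derives the key from the submitted password
# and can decrypt; the password itself need not be kept.
session_key = derive_key("login password", salt)
assert xor(stored, session_key[:len(stored)]) == card

# At logout (or once shipping completes) the key is discarded;
# only the undecryptable ciphertext remains in the database.
session_key = None
```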

Curt Sampson July 5, 2010 4:06 AM

I’ve written a few credit card processing systems for web applications. Often I’ve used what is essentially a form of access control: the credit card number in the database is not available to the web application, though it can request that other processes run charges against it. This at least helps secure against exploitation of the web app.
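That access-control pattern — the web application can request charges but has no API to read card numbers back — can be illustrated as a separate service with a deliberately narrow interface. A toy sketch (names hypothetical; in a real deployment the vault would sit behind a process or network boundary, not in the same interpreter):

```python
import secrets

class ChargeService:
    """Holds card numbers; exposes charging but not retrieval."""
    def __init__(self):
        self._vault = {}  # card_id -> card number; web app never sees this

    def store(self, number: str) -> str:
        card_id = secrets.token_urlsafe(16)
        self._vault[card_id] = number
        return card_id  # opaque handle the web app keeps instead of the number

    def charge(self, card_id: str, cents: int) -> bool:
        number = self._vault[card_id]
        # ...forward `number` and `cents` to the card processor here...
        return True

svc = ChargeService()
handle = svc.store("4111111111111111")
assert svc.charge(handle, 1999)  # the web app can run charges...
# ...but the service offers no call that returns the number.
```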

But Jon’s question tweaked me: “Why are they being stored?”

Well, for me they’re being stored because I need to be able to send that number to the credit card processing company in order to charge someone.

However, it occurs to me that there’s no need to use that particular token to authenticate that the credit card owner has authorized me to charge him money. It could just as well work that, when I receive a credit card number for the first time, I use it to authenticate that the CC owner has authorized me, but I then receive back a new token from the CC processor for future use and throw away the CC number. Whenever I want to charge this user again, I use that new token instead. If this token were valid only for charges to the CC owner from me, and could not be used to authorize purchase requests for any other merchant, my database becomes much less useful to attackers.

There’s the rough sketch of an idea; rip it to pieces, folks.
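For what it's worth, that flow — trade the card number for a merchant-scoped token on first use, then store only the token — might look like this in miniature (a toy in-memory processor; all names hypothetical):

```python
import secrets

class Processor:
    """Toy card processor that issues merchant-scoped tokens."""
    def __init__(self):
        self._tokens = {}  # (merchant, token) -> card number

    def first_charge(self, merchant: str, number: str, cents: int) -> str:
        token = secrets.token_urlsafe(16)
        self._tokens[(merchant, token)] = number
        return token  # merchant stores this and discards the card number

    def charge(self, merchant: str, token: str, cents: int) -> bool:
        # A token only works for the merchant it was issued to.
        return (merchant, token) in self._tokens

proc = Processor()
token = proc.first_charge("acme-store", "4111111111111111", 2500)
assert proc.charge("acme-store", token, 999)       # repeat charge works
assert not proc.charge("other-store", token, 999)  # stolen token is useless
```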

Ryan July 5, 2010 7:32 PM

@Curt, this is called “tokenization” and is already in widespread use in order to push the PCI compliance burden onto 3rd parties. Verisign and Trust Commerce offer this service, as well as other smaller players. And if you are really doing as many CC apps as you say, you should already know about this from your own PCI compliance efforts.

Stephen Wilson July 15, 2010 4:30 PM

Amazingly Bruce sells cryptography short! He says that cryptography is “singularly ill-suited to solve … theft of credit card numbers”. The article is really about encryption for confidentiality but it’s worth noting that asymmetric cryptography (digital signatures) most definitely is suited to fixing digital identity theft and credit card fraud.

Digitally signing credit card transactions with user-specific private keys is a uniquely effective way to stop replay attacks and counterfeiting of payment card numbers. At its heart, the EMV Dynamic Data Authentication protocol uses asymmetric cryptography to guard against skimming. In remote payments settings, the MasterCard CAP protocol and my company’s own “Stepwise” solution use private keys in chip cards to sign transaction data.

We might not stop the theft of cardholder details, but by digitally signing transactions we can render stolen card data worthless. These techniques should be generalised to combat all digital identity theft.
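The anti-replay property being described can be sketched with Python's stdlib, using an HMAC as a symmetric stand-in for the asymmetric signatures that EMV actually uses (stdlib Python has no public-key signing): each transaction carries a fresh nonce and a tag over the transaction data, so a captured transaction cannot be submitted twice. All names are illustrative.

```python
import hashlib
import hmac
import secrets

def sign_txn(key: bytes, card: str, cents: int, nonce: bytes) -> bytes:
    """MAC over the transaction data (stand-in for an EMV-style signature)."""
    msg = f"{card}|{cents}|".encode() + nonce
    return hmac.new(key, msg, hashlib.sha256).digest()

class Acquirer:
    def __init__(self, key: bytes):
        self.key = key
        self.seen = set()  # nonces already accepted

    def accept(self, card: str, cents: int, nonce: bytes, sig: bytes) -> bool:
        expected = sign_txn(self.key, card, cents, nonce)
        if not hmac.compare_digest(sig, expected):
            return False  # forged or tampered transaction
        if nonce in self.seen:
            return False  # replay of an already-seen transaction
        self.seen.add(nonce)
        return True

key = secrets.token_bytes(32)  # in EMV this would be a per-card private key
acq = Acquirer(key)
nonce = secrets.token_bytes(8)
sig = sign_txn(key, "4111111111111111", 5000, nonce)
assert acq.accept("4111111111111111", 5000, nonce, sig)       # accepted once
assert not acq.accept("4111111111111111", 5000, nonce, sig)   # replay rejected
```

The stolen card number alone is worthless here: without the signing key an attacker cannot produce a valid tag, and without a fresh nonce a captured transaction cannot be reused.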

Stephen Wilson July 15, 2010 4:41 PM

Jason said: “In short, the problem is not that cryptography is useless, it’s that we’re trying to use cryptography to protect credit cards, which are inherently broken”

I disagree. The four-party credit card processing model is not fundamentally broken. It’s worked well for forty-odd years, and was successfully adapted to cardholder-not-present transactions for mail order / telephone order. Fraud has exploded with the internet because cardholder details are now widely available to thieves courtesy of widely joined-up but insecure systems, and stolen data are easily replayed against merchant sites.

But the four party model and the credit card concept itself are not at all invalidated by these criminal developments. If only we rendered cardholder details non-replayable (by digital signatures) then the model could be preserved.

In my view, so many “modern” payment protocol developments — like SET and more recently 3D Secure — represent inelegant and way overblown overhauls, when the core threat is a very simple problem easily overcome via digital signatures to protect the pedigree of card numbers.
