Entries Tagged "data protection"

Data at Rest vs. Data in Motion

For a while now, I’ve pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data — data at rest — it was viewed as a form of communication. In “Applied Cryptography,” I described encrypting stored data in this way: “a stored message is a way for someone to communicate with himself through time.” Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are immediate and transient: encryption keys can be ephemeral, and systems like the STU-III telephone can be designed so that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.
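
To make the contrast concrete, here is a minimal sketch of ephemeral session-key agreement in the spirit of such systems, using Python’s cryptography package. The library choice, parameters, and labels are mine, not the essay’s:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair that exists only for this one call.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

# After exchanging public keys, both sides derive the same shared secret.
shared = alice.exchange(bob.public_key())
assert shared == bob.exchange(alice.public_key())

# Derive a session key for the call. When the call ends, everything
# above can be discarded; no long-lived secret remains to be stolen.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"call session"
).derive(shared)
```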

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a “computer” that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it’s primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people’s brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let’s take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn’t make any sense. The whole point of storing credit card numbers on a website is so they’re accessible: each time I buy something, I don’t have to type my number in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.
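
A minimal sketch of that dilemma, assuming a hypothetical Python web service built on the cryptography package:

```python
from cryptography.fernet import Fernet

# The key must live on the same host (or at least the same network) as
# the records, because the site decrypts card numbers on every purchase.
key = Fernet.generate_key()  # in practice: read from a local key file
f = Fernet(key)

stored_record = f.encrypt(b"4111 1111 1111 1111")

# An attacker who compromises this process gets the key and the
# ciphertext together, so decryption is one call away for them too.
card_number = f.decrypt(stored_record)
```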

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet’s infrastructure operates automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that — I’m being approximate here), but increases the attacker’s workload exponentially. For many years, we have exploited that mathematical imbalance.
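
As a worked example of that imbalance (the cost model here is my own rough sketch, not the essay’s):

```python
# Exhaustive key search must try up to 2**n keys for an n-bit key,
# while the defender's per-operation cost grows only roughly with n.
def attacker_trials(n: int) -> int:
    return 2 ** n

for n in (128, 129, 256):
    print(f"{n}-bit key: up to 2^{n} = {attacker_trials(n):.2e} trials")

# Going from 128 to 129 bits exactly doubles the attacker's work;
# going from 128 to 256 bits multiplies it by 2**128.
```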

Computer security is much more balanced. There’ll be a new attack, and a new defense, and a new attack, and a new defense. It’s an arms race between attacker and defender. And it’s a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we’re stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn’t try to break PGP; they simply installed a keyboard sniffer on the target’s computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can’t rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.

EDITED TO ADD (7/14): As several readers pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That’s a harder hacking problem, and this is why credit-card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it’s a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer-security sense and not in a cryptography sense.
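
A sketch of that separation, with the appliance reduced to a class for illustration; every name here is hypothetical:

```python
from cryptography.fernet import Fernet

class KeyServer:
    """Stands in for a separate encryption appliance. The master key
    never leaves this object."""
    def __init__(self) -> None:
        self._master = Fernet(Fernet.generate_key())

    def wrap(self, data_key: bytes) -> bytes:
        return self._master.encrypt(data_key)

    def unwrap(self, wrapped: bytes) -> bytes:
        return self._master.decrypt(wrapped)

kms = KeyServer()

# The database server keeps only a wrapped copy of its data key.
data_key = Fernet.generate_key()
wrapped_key = kms.wrap(data_key)
ciphertext = Fernet(data_key).encrypt(b"4111 1111 1111 1111")
del data_key  # the plaintext key is never stored with the data

# Reading a record now requires both machines; a stolen database yields
# only wrapped_key and ciphertext.
plaintext = Fernet(kms.unwrap(wrapped_key)).decrypt(ciphertext)
```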

Posted on June 30, 2010 at 12:53 PM

Cloud Computing

This year’s overhyped IT concept is cloud computing. Also called software as a service (SaaS), cloud computing means running software over the internet and accessing it via a browser. The Salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

But, hype aside, cloud computing is nothing new. It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s what social networking sites, remote backup companies, and remote email-filtering companies such as MessageLabs do. Any IT outsourcing — network infrastructure, security monitoring, remote hosting — is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors — and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

SaaS moves the trust boundary out one step further — you now have to also trust your software service vendors — but it doesn’t fundamentally change anything. It’s just another vendor you need to trust.

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if the vendors you have to trust aren’t as trustworthy as you’d like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely. You not only have to trust the outsourcer’s security, but also its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or to raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.

There are two different types of cloud computing customers. The first pays a nominal fee for these services, or uses them for free in exchange for ads: e.g., Gmail and Facebook. These customers have no leverage with their outsourcers. They can lose everything, and companies like Google and Amazon won’t spend a lot of time caring. The second type of customer pays considerably for these services: to Salesforce.com, MessageLabs, managed network companies, and so on. These customers have more leverage, provided they write their service contracts correctly. Still, nothing is guaranteed.

Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

This essay originally appeared in The Guardian.

EDITED TO ADD (6/4): Another opinion.

EDITED TO ADD (6/5): A rebuttal. And an apology for the tone of the rebuttal. The reason I am talking so much about cloud computing is that reporters and interviewers keep asking me about it. I feel kind of dragged into this whole thing.

EDITED TO ADD (6/6): At the Computers, Freedom, and Privacy conference last week, Bob Gellman said (this, by him, is worth reading) that the nine most important words in cloud computing are: “terms of service,” “location, location, location,” and “provider, provider, provider” — basically making the same point I did. You need to make sure the terms of service you sign up to are ones you can live with. You need to make sure the location of the provider doesn’t subject you to any laws that you can’t live with. And you need to make sure your provider is someone you’re willing to work with. Basically, if you’re going to give someone else your data, you need to trust them.

Posted on June 4, 2009 at 6:14 AM

Michael Froomkin on Identity Cards

University of Miami law professor Michael Froomkin writes about ID cards and society in “Identity Cards and Identity Romanticism.”

This book chapter for “Lessons from the Identity Trail: Anonymity, Privacy and Identity in a Networked Society” (New York: Oxford University Press, 2009)—a forthcoming comparative examination of approaches to the regulation of anonymity edited by Ian Kerr—discusses the sources of hostility to National ID Cards in common law countries. It traces that hostility in the United States to a romantic vision of free movement and in England to an equally romantic vision of the ‘rights of Englishmen’.

Governments in the United Kingdom, United States, Australia, and other countries are responding to perceived security threats by introducing various forms of mandatory or nearly mandatory domestic civilian national identity documents. This chapter argues that these ID cards pose threats to privacy and freedom, especially in countries without strong data protection rules. The threats created by weak data protection in these new identification schemes differ significantly from previous threats, making the romantic vision a poor basis from which to critique (highly flawed) contemporary proposals.

One small excerpt:

…it is important to note that each ratchet up in an ID card regime—the introduction of a non-mandatory ID card scheme, improvements to authentication, the transition from an optional regime to a mandatory one, or the inclusion of multiple biometric identifiers—increases the need for attention to how the data collected at the time the card is created will be stored and accessed. Similarly, as ID cards become ubiquitous, a de facto necessity even when not required de jure, the card becomes the visible instantiation of a large, otherwise unseen, set of databases. If each use of the card also creates a data trail, the resulting profile becomes an ongoing temptation to both ordinary and predictive profiling.

Posted on March 4, 2009 at 7:25 AM

UK Ministry of Defense Loses Memory Stick with Military Secrets

Oops:

The USB stick, outlining training for 70 soldiers from the 3rd Battalion, Yorkshire Regiment, was found on the floor of The Beach in Newquay in May.

Times, locations and travel and accommodation details for the troops were included in files on the device.

It’s not the first time:

More than 120 USB memory sticks, some containing secret information, have been lost or stolen from the Ministry of Defence since 2004, it was reported earlier this year.

Some 26 of those disappeared this year — including three which contained information classified as “secret”, and 19 which were “restricted”.

I’ve written about this general problem before: we’re storing ever more data in ever smaller devices.

The point is that it’s now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I’d never know it.

The solution? Encrypt them.
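
As a sketch of what that looks like in practice, here is passphrase-based file encryption using Python’s cryptography package; the parameters and sample plaintext are illustrative:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    """Derive an encryption key from something a person can remember."""
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000
    )
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)
key = key_from_passphrase(b"correct horse battery staple", salt)
ciphertext = Fernet(key).encrypt(b"sensitive file contents")

# Store the salt and ciphertext on the stick; the passphrase stays in a
# human brain, the offline key store the first essay described.
```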

Posted on September 16, 2008 at 6:21 AM

Sucking Data off of Cell Phones

Don’t give someone your phone unless you trust them:

There is a new electronic capture device, developed primarily for law enforcement, surveillance, and intelligence operations, that is also available to the public. It is called the Cellular Seizure Investigation Stick, or, as a clever acronym, the CSI Stick. It is manufactured by a company called Paraben, and is a self-contained module about the size of a BIC lighter. It plugs directly into most Motorola and Samsung cell phones to capture all data that they contain. More phones will be added to the list, including many from Nokia, RIM, LG and others, in the next generation, to be released shortly.

Another news article.

Posted on September 3, 2008 at 6:03 AM
