Entries Tagged "cloud computing"


Consumerization and Corporate IT Security

If you’re a typical wired American, you’ve got a bunch of tech tools you like and a bunch more you covet. You have a cell phone that can easily text. You’ve got a laptop configured just the way you want it. Maybe you have a Kindle for reading, or an iPad. And when the next new thing comes along, some of you will line up on the first day it’s available.

So why can’t work keep up? Why are you forced to use an unfamiliar, and sometimes outdated, operating system? Why do you need a second laptop, maybe an older and clunkier one? Why do you need a second cell phone with a new interface, or a BlackBerry, when your phone already does e-mail? Or a second BlackBerry tied to corporate e-mail? Why can’t you use the cool stuff you already have?

More and more companies are letting you. They’re giving you an allowance and letting you buy whatever laptop you want and connect to the corporate network with whatever device you choose. They’re allowing you to use whatever cell phone you have, whatever portable e-mail device you have, whatever you personally need to get your job done. And the security office is freaking out.

You can’t blame them, really. Security is hard enough when you have control of the hardware, operating system, and software. Lose control of any of those things, and the difficulty goes through the roof. How do you ensure that employees’ devices are secure and have up-to-date security patches? How do you control what goes on them? How do you deal with the tech support issues when they fail? How do you even begin to manage this logistical nightmare? Better to dig your heels in and say “no.”

But security is on the losing end of this argument, and the sooner it realizes that, the better.

The meta-trend here is consumerization: cool technologies show up for the consumer market before they’re available to the business market. Every corporation is under pressure from its employees to allow them to use these new technologies at work, and that pressure is only getting stronger. Younger employees simply aren’t going to stand for using last year’s stuff, and they’re not going to carry around a second laptop. They’re either going to figure out ways around the corporate security rules, or they’re going to take another job with a more trendy company. Either way, senior management is going to tell security to get out of the way. It might even be the CEO, who wants to get to the company’s databases from his brand-new iPad, driving the change. However it happens, it’s going to be harder and harder to say no.

At the same time, cloud computing makes this easier. More and more, employee computing devices are nothing more than dumb terminals with a browser interface. When corporate e-mail is all webmail, when corporate documents are all on Google Docs, and when all the specialized applications have a web interface, it’s easier to allow employees to use any up-to-date browser. It’s what companies are already doing with their partners, suppliers, and customers.

Also on the plus side, technology companies have woken up to this trend and—from Microsoft and Cisco on down to the startups—are trying to offer security solutions. Like everything else, it’s a mixed bag: some of them will work and some of them won’t, most of them will need careful configuration to work well, and few of them will get it right. The result is that we’ll muddle through, as usual.

Security is always a tradeoff, and security decisions are often made for non-security reasons. In this case, the right decision is to sacrifice security for convenience and flexibility. Corporations want their employees to be able to work from anywhere, and they’re going to have to loosen control over the tools they allow in order to get it.

This essay first appeared as the second half of a point/counterpoint with Marcus Ranum in Information Security Magazine. You can read Marcus’s half here.

Posted on September 7, 2010 at 7:25 AM

WPA Cracking in the Cloud

It’s a service:

The mechanism used involves captured network traffic, which is uploaded to the WPA Cracker service and subjected to an intensive brute force cracking effort. As advertised on the site, what would be a five-day task on a dual-core PC is reduced to a job of about twenty minutes on average. For the more “premium” price of $35, you can get the job done in about half the time. Because it is a dictionary attack using a predefined 135-million-word list, there is no guarantee that you will crack the WPA key, but such an extensive dictionary attack should be sufficient for any but the most specialized penetration testing purposes.

[…]

It gets even better. If you try the standard 135-million-word dictionary and do not crack the WPA encryption on your target network, there is an extended dictionary that contains an additional 284 million words. In short, serious brute force wireless network encryption cracking has become a retail commodity.
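
What makes this job worth outsourcing is WPA’s key-derivation step: WPA-PSK stretches each candidate passphrase with PBKDF2-HMAC-SHA1 (4,096 iterations, salted with the network’s SSID) before it can be tested against the captured handshake. Here is a minimal sketch of the per-candidate work; the handshake-verification step is left as a hypothetical check() callback, since the key derivation is where nearly all the compute goes.

```python
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA-PSK derives the 256-bit Pairwise Master Key with
    # PBKDF2-HMAC-SHA1: 4096 iterations, SSID as the salt (IEEE 802.11i).
    # Those 4096 iterations per guess are what makes brute force slow.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

def dictionary_attack(ssid, wordlist, check):
    # check() stands in for verifying a candidate PMK against the
    # captured four-way handshake; that step is cheap by comparison.
    for word in wordlist:
        if check(wpa_pmk(word, ssid)):
            return word
    return None  # not in the dictionary: no guarantee of success
```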

FAQ here.

In related news, a man-in-the-middle attack may be possible against the WPA2 protocol. Man-in-the-middle attacks are potentially serious, but the severity depends on the details—and they’re not available yet.

EDITED TO ADD (8/8): Details about the MITM attack.

Posted on July 27, 2010 at 6:43 AM

Security in a Reputation Economy

In the past, our relationship with our computers was technical. We cared what CPU they had and what software they ran. We understood our networks and how they worked. We were experts, or we depended on someone else for expertise. And security was part of that expertise.

This is changing. We access our email via the web, from any computer or from our phones. We use Facebook, Google Docs, even our corporate networks, regardless of hardware or network. We, especially the younger of us, no longer care about the technical details. Computing is infrastructure; it’s a commodity. It’s less about products and more about services; we simply expect it to work, like telephone service or electricity or a transportation network.

Infrastructures can be spread on a broad continuum, ranging from generic to highly specialized. Power and water are generic; who supplies them doesn’t really matter. Mobile phone services, credit cards, ISPs, and airlines are mostly generic. More specialized infrastructure services are restaurant meals, haircuts, and social networking sites. Highly specialized services include tax preparation for complex businesses, management consulting, legal services, and medical services.

Sales for these services are driven by two things: price and trust. The more generic the service is, the more price dominates. The more specialized it is, the more trust dominates. IT is something of a special case because so much of it is free. So, for both specialized IT services where price is less important and for generic IT services—think Facebook—where there is no price, trust will grow in importance. IT is becoming a reputation-based economy, and this has interesting ramifications for security.

Some years ago, the major credit card companies became concerned about the plethora of credit-card-number thefts from sellers’ databases. They worried that these might undermine the public’s trust in credit cards as a secure payment system for the internet. They knew the sellers would only protect these databases up to the level of the threat to the seller, and not to the greater level of threat to the industry as a whole. So they banded together and produced a security standard called PCI. It’s wholly industry-enforced—by an industry that realized its reputation was more valuable than the sellers’ databases.

A reputation-based economy means that infrastructure providers care more about security than their customers do. I realized this 10 years ago with my own company. We provided network-monitoring services to large corporations, and our internal network security was much more extensive than our customers’. Our customers secured their networks—that’s why they hired us, after all—but only up to the value of their networks. If we mishandled any of our customers’ data, we would have lost the trust of all of our customers.

I heard the same story at an ENISA conference in London last June, when an IT consultant explained that he had begun encrypting his laptop years before his customers did. While his customers might decide that the risk of losing their data wasn’t worth the hassle of dealing with encryption, he knew that if he lost data from one customer, he risked losing all of his customers.

As IT becomes more like infrastructure, more like a commodity, expect service providers to improve security to levels greater than their customers would have done themselves.

In IT, customers learn about company reputation from many sources: magazine articles, analyst reviews, recommendations from colleagues, awards, certifications, and so on. Of course, this only works if customers have accurate information. In a reputation economy, companies have a motivation to hide their security problems.

You’ve all experienced a reputation economy: restaurants. Some restaurants have a good reputation, and are filled with regulars. When restaurants get a bad reputation, people stop coming and they close. Tourist restaurants—whose main attraction is their location, and whose customers frequently don’t know anything about their reputation—can thrive even if they aren’t any good. And sometimes a restaurant can keep its reputation—an award in a magazine, a special occasion restaurant that “everyone knows” is the place to go—long after its food and service have declined.

The reputation economy is far from perfect.

This essay originally appeared in The Guardian.

Posted on November 12, 2009 at 6:30 AM

File Deletion

File deletion is all about control. This used not to be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn’t care whether the file could be recovered, and a file erase program—I use BCWipe for Windows—if you wanted to ensure no one could ever recover the file.
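
The core idea behind a file eraser is simply to overwrite the file’s contents before unlinking it. Here is a minimal illustrative sketch (this is not BCWipe’s actual method, and note that journaling file systems and SSD wear-leveling can leave old blocks behind anyway, which is exactly why dedicated tools exist):

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    # Naive overwrite-then-unlink wipe. Illustrative only.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to disk
    os.remove(path)                    # then remove the directory entry
```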

As we move more of our data onto cloud computing platforms such as Gmail and Facebook, and closed proprietary platforms such as the Kindle and the iPhone, deleting data is much harder.

You have to trust that these companies will delete your data when you ask them to, but they’re generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies’ backup systems. Gmail explicitly says this in its privacy notice.

Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you’re not in control of the computers that are storing the data.

This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people’s Kindles entirely.

Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an email, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one—not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you—will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won’t be able to read it.

The details are complicated, but Vanish breaks the data’s decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks—machines constantly join and leave—to make the data disappear. Unlike previous programs that supported file deletion, this one doesn’t require you to trust any company, organisation, or website. It just happens.
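
As a toy illustration of the key-splitting idea, consider an n-of-n XOR split, in which every share is needed to recover the key, so the loss of even one share through network churn destroys the data. Vanish itself uses Shamir threshold secret sharing, which tolerates losing a few shares before the key becomes unrecoverable; this sketch is a deliberate simplification.

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int):
    # n-1 random shares, plus a final share that XORs them back to the key.
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))
    return shares

key = os.urandom(32)          # the key that encrypted the message
shares = split_key(key, 10)   # scatter these across the peer-to-peer network
assert reduce(xor, shares) == key   # all shares present: key recoverable
# Once churn loses even one share, the key, and with it the data, is gone.
```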

Of course, Vanish doesn’t prevent the recipient of an email or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle’s deletion feature doesn’t prevent people from copying a book’s files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it’s a good demonstration of how control affects file deletion. And while it’s a step in the right direction, it’s also new and therefore deserves further security analysis before being adopted on a wide scale.

We’ve lost the control of data on some of the computers we own, and we’ve lost control of our data in the cloud. We’re not going to stop using Facebook and Twitter just because they’re not going to delete our data when we ask them to, and we’re not going to stop using Kindles and iPhones because they may delete our data when we don’t want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.

Now we need something that will protect our data when a large corporation decides to delete it.

This essay originally appeared in The Guardian.

EDITED TO ADD (9/30): Vanish has been broken, paper here.

Posted on September 10, 2009 at 6:08 AM

Subpoenas as a Security Threat

Blog post from Ed Felten:

Usually when the threat model mentions subpoenas, the bigger threats in reality come from malicious intruders or insiders. The biggest risk in storing my documents on CloudCorp’s servers is probably that somebody working at CloudCorp, or a contractor hired by them, will mess up or misbehave.

So why talk about subpoenas rather than intruders or insiders? Perhaps this kind of talk is more diplomatic than the alternative. If I’m talking about the risks of Gmail, I might prefer not to point out that my friends at Google could hire someone who is less than diligent, or less than honest. If I talk about subpoenas as the threat, nobody in the room is offended, and the security measures I recommend might still be useful against intruders and insiders. It’s more polite to talk about data losses that are compelled by a mysterious, powerful Other—in this case an Anonymous Lawyer.

Politeness aside, overemphasizing subpoena threats can be harmful in at least two ways. First, we can easily forget that enforcement of subpoenas is often, though not always, in society’s interest. Our legal system works better when fact-finders have access to a broader range of truthful evidence. That’s why we have subpoenas in the first place. Not all subpoenas are good—and in some places with corrupt or evil legal systems, subpoenas deserve no legitimacy at all—but we mustn’t lose sight of society’s desire to balance the very real cost imposed on the subpoena’s target and affected third parties, against the usefulness of the resulting evidence in administering justice.

The second harm is to security. To the extent that we focus on the subpoena threat, rather than the larger threats of intruders and insiders, we risk finding “solutions” that fail to solve our biggest problems. We might get lucky and end up with a solution that happens to address the bigger threats too. We might even design a solution for the bigger threats, and simply use subpoenas as a rhetorical device in explaining our solution—though it seems risky to mislead our audience about our motivations. If our solution flows from our threat model, as it should, then we need to be very careful to get our threat model right.

Posted on September 4, 2009 at 6:18 AM

Risks of Cloud Computing

Excellent essay by Jonathan Zittrain on the risks of cloud computing:

The cloud, however, comes with real dangers.

Some are in plain view. If you entrust your data to others, they can let you down or outright betray you. For example, if your favorite music is rented or authorized from an online subscription service rather than freely in your custody as a compact disc or an MP3 file on your hard drive, you can lose your music if you fall behind on your payments—or if the vendor goes bankrupt or loses interest in the service. Last week Amazon apparently conveyed a publisher’s change-of-heart to owners of its Kindle e-book reader: some purchasers of Orwell’s “1984” found it removed from their devices, with nothing to show for their purchase other than a refund. (Orwell would be amused.)

Worse, data stored online has less privacy protection both in practice and under the law. A hacker recently guessed the password to the personal e-mail account of a Twitter employee, and was thus able to extract the employee’s Google password. That in turn compromised a trove of Twitter’s corporate documents stored too conveniently in the cloud. Before, the bad guys usually needed to get their hands on people’s computers to see their secrets; in today’s cloud all you need is a password.

Thanks in part to the Patriot Act, the federal government has been able to demand some details of your online activities from service providers—and not to tell you about it. There have been thousands of such requests lodged since the law was passed, and the F.B.I.’s own audits have shown that there can be plenty of overreach—perhaps wholly inadvertent—in requests like these.

Here’s me on cloud computing.

Posted on July 30, 2009 at 7:06 AM

Homomorphic Encryption Breakthrough

Last month, IBM made some pretty brash claims about homomorphic encryption and the future of security. I hate to be the one to throw cold water on the whole thing—as cool as the new discovery is—but it’s important to separate the theoretical from the practical.

Homomorphic cryptosystems are ones where mathematical operations on the ciphertext have regular effects on the plaintext. A normal symmetric cipher—DES, AES, or whatever—is not homomorphic. Assume you have a plaintext P, and you encrypt it with AES to get a corresponding ciphertext C. If you multiply that ciphertext by 2, and then decrypt 2C, you get random gibberish instead of P. If you got something else, like 2P, that would imply some pretty strong nonrandomness properties of AES and no one would trust its security.
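
To make this concrete, here is a sketch using the third-party pyca/cryptography library: doubling an AES ciphertext (treated as a 128-bit integer) and decrypting the result yields bytes that bear no relation to 2P.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
aes = Cipher(algorithms.AES(key), modes.ECB())   # one raw block, demo only

P = (42).to_bytes(16, "big")
C = aes.encryptor().update(P)

# "Multiply the ciphertext by 2" (as a 128-bit integer, mod 2^128)...
C2 = (2 * int.from_bytes(C, "big") % 2**128).to_bytes(16, "big")
P2 = aes.decryptor().update(C2)

print(int.from_bytes(P2, "big"))   # random gibberish, not 84
```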

The RSA algorithm is different. Encrypt P to get C, multiply C by the encryption of 2, and then decrypt the product—and you get 2P. That’s a homomorphism: perform a mathematical operation on the ciphertext, and that operation is reflected in the plaintext. The RSA algorithm is homomorphic with respect to multiplication, something that has to be taken into account when evaluating the security of a system that uses RSA.
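
Here is the same property demonstrated with textbook RSA, using tiny toy parameters and no padding (real deployments add randomized padding, which among other things destroys this simple homomorphism):

```python
p, q = 61, 53
n, e, d = p * q, 17, 2753      # e*d = 1 mod (p-1)(q-1)

enc = lambda m: pow(m, e, n)   # textbook RSA encryption
dec = lambda c: pow(c, d, n)   # textbook RSA decryption

P = 42
C2 = (enc(2) * enc(P)) % n     # multiply the ciphertexts...
assert dec(C2) == 2 * P        # ...and the plaintexts multiply too
```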

This isn’t anything new. RSA’s homomorphism was known in the 1970s, and other algorithms that are homomorphic with respect to addition have been known since the 1980s. But what has eluded cryptographers is a fully homomorphic cryptosystem: one that is homomorphic under both addition and multiplication and yet still secure. And that’s what IBM researcher Craig Gentry has discovered.

This is a bigger deal than might appear at first glance. Any computation can be expressed as a Boolean circuit: a series of additions and multiplications. Your computer consists of a zillion Boolean circuits, and you can run programs to do anything on your computer. This algorithm means you can perform arbitrary computations on homomorphically encrypted data. More concretely: if you encrypt data in a fully homomorphic cryptosystem, you can ship that encrypted data to an untrusted person and that person can perform arbitrary computations on that data without being able to decrypt the data itself. Imagine what that would mean for cloud computing, or any outsourcing infrastructure: you no longer have to trust the outsourcer with the data.
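
The reduction is easy to see over single bits, where arithmetic mod 2 is Boolean logic: addition is XOR and multiplication is AND. NAND, which alone suffices to build any circuit, follows from those two, which is why a scheme homomorphic under both addition and multiplication can evaluate anything.

```python
AND = lambda a, b: (a * b) % 2       # multiplication mod 2
XOR = lambda a, b: (a + b) % 2       # addition mod 2
NOT = lambda a: (1 + a) % 2
NAND = lambda a, b: NOT(AND(a, b))   # NAND is a universal gate

# exhaustive check over all bit pairs
assert all(NAND(a, b) == 1 - a * b for a in (0, 1) for b in (0, 1))
```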

Unfortunately—you knew that was coming, right?—Gentry’s scheme is completely impractical. It uses something called an ideal lattice as the basis for the encryption scheme, and both the size of the ciphertext and the complexity of the encryption and decryption operations grow enormously with the number of operations you need to perform on the ciphertext—and that number needs to be fixed in advance. And converting a computer program, even a simple one, into a Boolean circuit requires an enormous number of operations. These aren’t impracticalities that can be solved with some clever optimization techniques and a few turns of Moore’s Law; this is an inherent limitation of the algorithm. In one article, Gentry estimates that performing a Google search with encrypted keywords—a perfectly reasonable simple application of this algorithm—would increase the amount of computing time by a factor of about a trillion. A trillion is roughly 2^40, so at one doubling of computing power per year, Moore’s Law suggests it would be 40 years before that homomorphic search would be as efficient as a search today, and I think he’s being optimistic with even this most simple of examples.

Despite this, IBM’s PR machine has been in overdrive about the discovery. Its press release makes it sound like this new homomorphic scheme is going to rewrite the business of computing: not just cloud computing, but “enabling filters to identify spam, even in encrypted email, or protecting information contained in electronic medical records.” Maybe someday, but not in my lifetime.

This is not to take anything away from Gentry or his discovery. Visions of a fully homomorphic cryptosystem have been dancing in cryptographers’ heads for thirty years. I never expected to see one. It will be years before a sufficient number of cryptographers have examined the algorithm for us to have any confidence that the scheme is secure, but—practicality be damned—this is an amazing piece of work.

Posted on July 9, 2009 at 6:36 AM

Terrorist Risk of Cloud Computing

I don’t even know where to begin on this one:

As we have seen in the past with other technologies, while cloud resources will likely start out decentralized, as time goes by and economies of scale take hold, they will start to collect into mega-technology hubs. These hubs could, at the end of this cycle, number in the low single digits and carry most of the commerce and data for a nation like ours. Elsewhere, particularly in Europe, those hubs could handle several nations’ public and private data.

And therein lies the risk.

The Twin Towers, which were destroyed in the 9/11 attack, took down a major portion of the U.S. infrastructure at the same time. The capability and coverage of cloud-based mega-hubs would easily dwarf hundreds of Twin Tower-like operations. Although some redundancy would likely exist—hopefully located in places safe from disasters—should a hub be destroyed, it could likely take down a significant portion of the country it supported at the same time.

[…]

Each hub may represent a target more attractive to terrorists than today’s favored nuclear power plants.

It’s only been eight years, and this author thinks that the 9/11 attacks “took down a major portion of the U.S. infrastructure.” That’s just plain ridiculous. I was there (in the U.S., not in New York). The government, the banks, the power system, commerce everywhere except lower Manhattan, the Internet, the water supply, the food supply, and every other part of the U.S. infrastructure I can think of worked just fine during and after the attacks. The New York Stock Exchange was up and running in a few days. Even the piece of our infrastructure that was most disrupted—the air travel network—was up and running in a week. I think the author of that piece needs to travel somewhere on the planet where major portions of the infrastructure actually get disrupted, so he can see what it’s like.

No less ridiculous is the main point of the article, which seems to imply that terrorists will someday decide that disrupting people’s Lands’ End purchases will be more attractive than killing them. Okay, that was a caricature of the article, but not by much. Terrorism is an attack against our minds, using random death and destruction as a tactic to cause terror in everyone. To even suggest that data disruption would cause more terror than nuclear fallout completely misunderstands terrorism and terrorists.

And anyway, any e-commerce, banking, etc. site worth anything is backed up and dual-homed. There are lots of risks to our data networks, but physically blowing up a data center isn’t high on the list.

Posted on July 6, 2009 at 6:12 AM
