Entries Tagged "data loss"

Goldman Sachs Demanding E-Mail Be Deleted

Goldman Sachs is going to court to demand that Google retroactively delete an e-mail it accidentally sent.

The breach occurred on June 23 and included “highly confidential brokerage account information,” Goldman said in a complaint filed last Friday in a New York state court in Manhattan.

[…]

Goldman said the contractor meant to email her report, which contained the client data, to a “gs.com” account, but instead sent it to a similarly named, unrelated “gmail.com” account.

The bank said it has been unable to retrieve the report or get a response from the Gmail account owner. It said a member of Google’s “incident response team” reported on June 26 that the email cannot be deleted without a court order.

“Emergency relief is necessary to avoid the risk of inflicting a needless and massive privacy violation upon Goldman Sachs’ clients, and to avoid the risk of unnecessary reputational damage to Goldman Sachs,” the bank said.

“By contrast, Google faces little more than the minor inconvenience of intercepting a single email – an email that was indisputably sent in error,” it added.

EDITED TO ADD (7/7): Google deleted the unread e-mail, without waiting for a court order.

Posted on July 3, 2014 at 5:46 AM

Unusual Electronic Voting Machine Threat Model

Rats have destroyed dozens of electronic voting machines by eating the cables. It would have been a better story if the rats had zeroed out the machines after the votes had been cast but before they were counted, but it seems that they just ate the machines while they were in storage.

The EVMs had been stored in a pre-designated strong room that was located near a wholesale wheat market, where the rats had apparently made their home.

There’s a general thread running through security: high-tech replacements for low-tech systems fail in new and unexpected ways.

EDITED TO ADD (5/14): This article says it was only a potential threat, and one being addressed.

Posted on May 2, 2014 at 2:00 PM

Cloud Computing

This year’s overhyped IT concept is cloud computing. Also called software as a service (SaaS), cloud computing is when you run software over the internet and access it via a browser. The Salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

But, hype aside, cloud computing is nothing new. It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s what social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs do. Any IT outsourcing—network infrastructure, security monitoring, remote hosting—is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors—and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

SaaS moves the trust boundary out one step further—you now have to also trust your software service vendors—but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if the vendors you have to trust aren’t as trustworthy as you’d like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely. You not only have to trust the outsourcer’s security, but its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.

There are two different types of cloud computing customers. The first pays only a nominal fee for these services, or uses them for free in exchange for ads: e.g., Gmail and Facebook. These customers have no leverage with their outsourcers. They can lose everything, and companies like Google and Amazon won’t spend a lot of time caring. The second type of customer pays considerably for these services: to Salesforce.com, MessageLabs, managed network companies, and so on. These customers have more leverage, provided they write their service contracts correctly. Still, nothing is guaranteed.

Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

This essay originally appeared in The Guardian.

EDITED TO ADD (6/4): Another opinion.

EDITED TO ADD (6/5): A rebuttal. And an apology for the tone of the rebuttal. The reason I am talking so much about cloud computing is that reporters and interviewers keep asking me about it. I feel kind of dragged into this whole thing.

EDITED TO ADD (6/6): At the Computers, Freedom, and Privacy conference last week, Bob Gellman said (this, by him, is worth reading) that the nine most important words in cloud computing are: “terms of service,” “location, location, location,” and “provider, provider, provider”—basically making the same point I did. You need to make sure the terms of service you sign up to are ones you can live with. You need to make sure the location of the provider doesn’t subject you to any laws that you can’t live with. And you need to make sure your provider is someone you’re willing to work with. Basically, if you’re going to give someone else your data, you need to trust them.

Posted on June 4, 2009 at 6:14 AM

Los Alamos Explains Their Security Problems

They’ve lost 80 computers, with no idea whether they were stolen or just misplaced. Typical story—not even worth commenting on—but this great comment by Los Alamos explains a lot about what was wrong with their security policy:

The letter, addressed to Department of Energy security officials, contends that “cyber security issues were not engaged in a timely manner” because the computer losses were treated as a “property management issue.”

The real risk in computer losses is the data, not the hardware. I thought everyone knew that.

Posted on February 17, 2009 at 5:00 AM

Ransomware

I’ve never figured out the fuss over ransomware:

Some day soon, you may go in and turn on your Windows PC and find your most valuable files locked up tighter than Fort Knox.

You’ll also see this message appear on your screen:

“Your files are encrypted with RSA-1024 algorithm. To recovery your files you need to buy our decryptor. To buy decrypting tool contact us at: ********@yahoo.com”

How is this any worse than the old hacker viruses that put a funny message on your screen and erased your hard drive?

Here’s how I see it: if someone actually manages to pull this off and put it into circulation, we’re looking at malware Armageddon. Instead of losing ‘just’ your credit card numbers or having your PC turned into a spam factory, you could lose vital files forever.

Of course, you could keep current back-ups. I do, but I’ve been around this track way too many times to think that many companies, much less individual users, actually keep real back-ups. Oh, you may think you do, but when was the last time you checked to see if the data you saved could actually be restored?

The single most important thing any company or individual can do to improve security is have a good backup strategy. It’s been true for decades, and it’s still true today.
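
That restore check is easy to automate. Here is a minimal sketch, not from the original post, that re-hashes every file under a source tree and confirms the backup copy exists and matches; the script name and paths are hypothetical examples:

```python
# backup_check.py -- a minimal sketch of backup verification: walk the
# source tree, hash each file, and confirm the backup copy exists and
# hashes identically. Paths are examples only.
import hashlib
import os
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_root, backup_root):
    """Yield (relative_path, status) for every file under source_root."""
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(backup_root, rel)
            if not os.path.exists(dst):
                yield rel, "MISSING"
            elif sha256_of(src) != sha256_of(dst):
                yield rel, "CORRUPT"
            else:
                yield rel, "ok"

if __name__ == "__main__":
    problems = 0
    for rel, status in verify_backup(sys.argv[1], sys.argv[2]):
        if status != "ok":
            problems += 1
            print(f"{status}: {rel}")
    print("backup verified" if problems == 0 else f"{problems} problem file(s)")
```

Run it as, say, `python backup_check.py ~/documents /mnt/backup/documents`; anything it prints is a file your backup would not actually give back to you.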

Posted on June 16, 2008 at 1:09 PM

Third Parties Controlling Information

Wine Therapy is a web bulletin board for serious wine geeks. It’s been active since 2000, and its database of back posts and comments is a wealth of information: tasting notes, restaurant recommendations, stories and so on. Late last year someone hacked the board software, got administrative privileges and deleted the database. There was no backup.

Of course the board’s owner should have been making backups all along, but he has been very sick for the past year and wasn’t able to. And the Internet Archive has been only somewhat helpful.

More and more, information we rely on—either created by us or by others—is out of our control. It’s out there on the internet, on someone else’s website and being cared for by someone else. We use those websites, sometimes daily, and don’t even think about their reliability.

Bits and pieces of the web disappear all the time. It’s called “link rot,” and we’re all used to it. A friend saved 65 links in 1999 when he planned a trip to Tuscany; only half of them still work today. In my own blog, essays and news articles and websites that I link to regularly disappear—sometimes within a few days of my linking to them.

It may be because of a site’s policies—some newspapers keep only a couple of weeks of articles on their websites—or it may be more random: Position papers disappear from a politician’s website after he changes his mind on an issue, corporate literature disappears from the company’s website after an embarrassment, etc. The ultimate link rot is “site death,” where entire websites disappear: Olympic and World Cup events after the games are over, political candidates’ websites after the elections are over, corporate websites after the funding runs out and so on.

Mostly, we ignore the issue. Sometimes I save a copy of a good recipe I find, or an article relevant to my research, but mostly I trust that whatever I want will be there next time. Were I planning a trip to Tuscany, I would rather search for relevant articles today than rely on a nine-year-old list anyway. Most of the time, link rot and site death aren’t really a problem.

This is changing in a Web 2.0 world, with websites that are less about information and more about community. We help build these sites, with our posts or our comments. We visit them regularly and get to know others who also visit regularly. They become part of our socialization on the internet, and the loss of them affects us differently, as Greatest Journal users discovered in January when their site died.

Few, if any, of the people who made Wine Therapy their home kept backup copies of their own posts and comments. I’m sure they didn’t even think of it. I don’t think of it, when I post to the various boards and blogs and forums I frequent. Of course I know better, but I think of these forums as extensions of my own computer—until they disappear.

As we rely on others to maintain our writings and our relationships, we lose control over their availability. Of course, we also lose control over their security, as MySpace users learned last month when a 17-GB file of half a million supposedly private photos was uploaded to a BitTorrent site.

In the early days of the web, I remember feeling giddy over the wealth of information out there and how easy it was to get to. “The internet is my hard drive,” I told newbies. It’s even more true today; I don’t think I could write without so much information so easily accessible. But it’s a pretty damned unreliable hard drive.

The internet is my hard drive, but only if my needs are immediate and my requirements can be satisfied inexactly. It was easy for me to search for information about the MySpace photo hack. And it will be easy to look up, and respond to, comments to this essay, both on Wired.com and on my own blog. Wired.com is a commercial venture, so there is advertising value in keeping everything accessible. My site is not at all commercial, but there is personal value in keeping everything accessible. By that analysis, all sites should be up on the internet forever, although that’s certainly not true. What is true is that there’s no way to predict what will disappear when.

Unfortunately, there’s not much we can do about it. The security measures largely aren’t in our hands. We can save copies of important web pages locally, and copies of anything important we post. The Internet Archive is remarkably valuable in saving bits and pieces of the internet. And recently, we’ve started seeing tools for archiving information and pages from social networking sites. But what’s really important is the whole community, and we don’t know which bits we want until they’re no longer there.
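
Saving local copies, at least, is easy to automate. Here is a minimal sketch using only the Python standard library; the URL list is a hypothetical example:

```python
# save_pages.py -- a sketch of archiving web pages locally with the
# standard library. The URL list is a hypothetical example.
import pathlib
import urllib.parse
import urllib.request

URLS = ["https://example.com/recipe", "https://example.com/article"]
ARCHIVE = pathlib.Path("web-archive")
ARCHIVE.mkdir(exist_ok=True)

for url in URLS:
    # Derive a filesystem-safe name from the URL itself.
    name = urllib.parse.quote(url, safe="") + ".html"
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            (ARCHIVE / name).write_bytes(resp.read())
        print("saved", url)
    except OSError as err:
        print("failed", url, err)
```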

And about Wine Therapy, I think it started in 2000. It might have been 2001. I can’t check, because someone erased the archives.

This essay originally appeared on Wired.com.

Posted on February 27, 2008 at 5:46 AM

A Security Market for Lemons

More than a year ago, I wrote about the increasing risks of data loss because more and more data fits in smaller and smaller packages. Today I use a 4-GB USB memory stick for backup while I am traveling. I like the convenience, but if I lose the tiny thing I risk all my data.

Encryption is the obvious solution for this problem—I use PGPdisk—but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts. The company makes a bunch of other impressive claims: The product was commissioned, and eventually approved, by the French intelligence service; it is used by many militaries and banks; its technology is revolutionary.

Unfortunately, the only impressive aspect of Secustick is its hubris, which was revealed when Tweakers.net completely broke its security. There’s no data self-destruct feature. The password protection can easily be bypassed. The data isn’t even encrypted. As a secure storage device, Secustick is pretty useless.
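
For contrast, here is roughly what genuine password-based file encryption looks like in software. This is a minimal sketch using the third-party Python cryptography package; it illustrates the general technique, not the actual design of Secustick, PGPdisk, or any other product mentioned here:

```python
# encrypt_file.py -- a sketch of password-based file encryption using the
# third-party "cryptography" package. General technique only; not any
# product's actual design.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: bytes, salt: bytes) -> bytes:
    """Stretch a password into a Fernet key with PBKDF2-SHA256."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=1_200_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

def encrypt_file(path: str, password: bytes) -> None:
    salt = os.urandom(16)  # fresh random salt per file
    with open(path, "rb") as f:
        token = Fernet(derive_key(password, salt)).encrypt(f.read())
    with open(path + ".enc", "wb") as f:
        f.write(salt + token)  # store the salt next to the ciphertext

def decrypt_file(path: str, password: bytes) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    # Raises InvalidToken if the password is wrong or the data was altered.
    return Fernet(derive_key(password, salt)).decrypt(token)
```

The point is the baseline: if the data really is encrypted under a key derived from the password, losing the stick doesn’t mean losing the data’s secrecy, and there’s nothing for a password-bypass trick to bypass.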

On the surface, this is just another snake-oil security story. But there’s a deeper question: Why are there so many bad security products out there? It’s not just that designing good security is hard—although it is—and it’s not just that anyone can design a security product that he himself cannot break. Why do mediocre security products beat the good ones in the marketplace?

In 1970, American economist George Akerlof wrote a paper called “The Market for ‘Lemons’,” which established asymmetrical information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.

Akerlof illustrated his ideas with a used car market. A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can’t tell the difference—at least until he’s made his purchase. I’ll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality.

This means that the best cars don’t get sold; their prices are too high. Which means that the owners of these best cars don’t put their cars on the market. And then this starts spiraling. The removal of the good cars from the market reduces the average price buyers are willing to pay, and then the very good cars no longer sell, and disappear from the market. And then the good cars, and so on until only the lemons are left.
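
The spiral is easy to watch in a toy model. Here is a minimal sketch, under the simplifying assumptions that quality is uniformly distributed, every seller knows his own car’s quality, and buyers offer the average quality of whatever is still for sale:

```python
# lemons.py -- a toy simulation of Akerlof's unraveling. Quality is
# uniform on [0, 100]; buyers offer the average quality of the cars
# still on the market; sellers withdraw cars worth more than the offer.
import random

random.seed(1)
market = [random.uniform(0, 100) for _ in range(10_000)]  # car qualities

for round_no in range(1, 11):
    offer = sum(market) / len(market)           # buyers pay average quality
    market = [q for q in market if q <= offer]  # better cars withdraw
    print(f"round {round_no:2d}: offer {offer:6.2f}, {len(market):5d} cars left")
    if not market:
        break
```

Each round the offer roughly halves and the better half of the remaining cars leaves the market, until only the worst cars, the lemons, are left.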

In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.

The computer security market has a lot of the same characteristics as Akerlof’s lemons market. Take the market for encrypted USB memory sticks. Several companies make encrypted USB drives—Kingston Technology sent me one in the mail a few days ago—but even I couldn’t tell you if Kingston’s offering is better than Secustick, or better than any other encrypted USB drive. They use the same encryption algorithms. They make the same security claims. And if I can’t tell the difference, most consumers won’t be able to either.

Of course, it’s more expensive to make an actually secure USB drive. Good security design takes time, and necessarily means limiting functionality. Good security testing takes even more time, especially if the product is any good. This means the less-secure product will be cheaper, sooner to market and have more features. In this market, the more-secure USB drive is going to lose out.

I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that “won” weren’t the most secure firewalls; they were the ones that were easy to set up, easy to use and didn’t annoy users too much. Because buyers couldn’t base their buying decision on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren’t the most secure, because buyers couldn’t tell the difference.

How do you solve this? You need what economists call a “signal,” a way for buyers to tell the difference. Warranties are a common signal. Alternatively, an independent auto mechanic can tell good cars from lemons, and a buyer can hire his expertise. The Secustick story demonstrates this. If there is a consumer advocate group that has the expertise to evaluate different products, then the lemons can be exposed.

Secustick, for one, seems to have been withdrawn from sale.

But security testing is both expensive and slow, and it just isn’t possible for an independent lab to test everything. Unfortunately, the exposure of Secustick is an exception. It was a simple product, and easily exposed once someone bothered to look. A complex software product—a firewall, an IDS—is very hard to test well. And, of course, by the time you have tested it, the vendor has a new version on the market.

In reality, we have to rely on a variety of mediocre signals to differentiate the good security products from the bad. Standardization is one signal. The widely used AES encryption standard has reduced, although not eliminated, the number of lousy encryption algorithms on the market. Reputation is a more common signal; we choose security products based on the reputation of the company selling them, the reputation of some security wizard associated with them, magazine reviews, recommendations from colleagues or general buzz in the media.

All these signals have their problems. Even product reviews, which should be as comprehensive as the Tweakers’ Secustick review, rarely are. Many firewall comparison reviews focus on things the reviewers can easily measure, like packets per second, rather than how secure the products are. In IDS comparisons, you can find the same bogus “number of signatures” comparison. Buyers lap that stuff up; in the absence of deep understanding, they happily accept shallow data.

With so many mediocre security products on the market, and the difficulty of coming up with a strong quality signal, vendors don’t have strong incentives to invest in developing good products. And the vendors that do tend to die a quiet and lonely death.

This essay originally appeared in Wired.

EDITED TO ADD (4/22): Slashdot thread.

Posted on April 19, 2007 at 7:59 AM
