Entries Tagged "data protection"

Indiana's Voter Registration Data Is Frighteningly Insecure

You can edit the information of anyone you want:

The question, boiled down, was haunting: Want to see how easy it would be to get into someone’s voter registration and make changes to it? The offer from Steve Klink—a Lafayette-based public consultant who works mainly with Indiana public school districts—was to use my voter registration record as a case study.

Only with my permission, of course.

“I will not require any information from you,” he texted. “Which is the problem.”

Turns out he didn’t need anything from me. He sent screenshots of every step along the way, as he navigated from the “Update My Voter Registration” tab at the Indiana Statewide Voter Registration System maintained since 2010 at www.indianavoters.com to the blank screen that cleared the way for changes to my name, address, age and more.

The only magic involved was my driver’s license number, one of two log-in options to make changes online. And that was contained in a copy of every county’s voter database, a public record already in the hands of political parties, campaigns, media and, according to Indiana open access laws, just about anyone who wants the beefy spreadsheet.

Posted on October 11, 2016 at 2:04 PM

France Rejects Backdoors in Encryption Products

For the right reasons, too:

Axelle Lemaire, the Euro nation’s digital affairs minister, shot down the amendment during the committee stage of the forthcoming omnibus digital bill, saying it would be counterproductive and would leave personal data unprotected.

“Recent events show how the fact of introducing faults deliberately at the request of—sometimes even without the knowledge of—the intelligence agencies has an effect that is harming the whole community,” she said, according to Numerama.

“Even if the intention [to empower the police] is laudable, it also opens the door to the players who have less laudable intentions, not to mention the potential for economic damage to the credibility of companies planning these flaws. You are right to fuel the debate, but this is not the right solution according to the Government’s opinion.”

France joins the Netherlands on this issue.

And Apple’s Tim Cook is going after the Obama administration on the issue.

EDITED TO ADD (1/20): In related news, Congress will introduce a bill to establish a commission to study the issue. This is what kicking the can down the road looks like.

Posted on January 20, 2016 at 5:02 AM

New Pew Research Report on Americans' Attitudes on Privacy, Security, and Surveillance

This is interesting:

The surveys find that Americans feel privacy is important in their daily lives in a number of essential ways. Yet, they have a pervasive sense that they are under surveillance when in public and very few feel they have a great deal of control over the data that is collected about them and how it is used. Adding to earlier Pew Research reports that have documented low levels of trust in sectors that Americans associate with data collection and monitoring, the new findings show Americans also have exceedingly low levels of confidence in the privacy and security of the records that are maintained by a variety of institutions in the digital age.

While some Americans have taken modest steps to stem the tide of data collection, few have adopted advanced privacy-enhancing measures. However, majorities of Americans expect that a wide array of organizations should have limits on the length of time that they can retain records of their activities and communications. At the same time, Americans continue to express the belief that there should be greater limits on government surveillance programs. Additionally, they say it is important to preserve the ability to be anonymous for certain online activities.

Lots of detail in the reports.

Posted on May 21, 2015 at 1:05 PM

The Security of Data Deletion

Thousands of articles have called the December attack against Sony Pictures a wake-up call to industry. Regardless of whether the attacker was the North Korean government, a disgruntled former employee, or a group of random hackers, the attack showed how vulnerable a large organization can be and how devastating the publication of its private correspondence, proprietary data, and intellectual property can be.

But while companies are supposed to learn that they need to improve their security against attack, there’s another equally important but much less discussed lesson here: companies should have an aggressive deletion policy.

One of the social trends of the computerization of our business and social communications tools is the loss of the ephemeral. Things we used to say in person or on the phone we now say in e-mail, by text message, or on social networking platforms. Memos we used to read and then throw away now remain in our digital archives. Big data initiatives mean that we’re saving everything we can about our customers on the remote chance that it might be useful later.

Everything is now digital, and storage is cheap—why not save it all?

Sony illustrates the reason why not. The hackers published old e-mails from company executives that caused enormous public embarrassment to the company. They published old e-mails by employees that caused less-newsworthy personal embarrassment to those employees, and these messages are resulting in class-action lawsuits against the company. They published old documents. They published everything they got their hands on.

Saving data, especially e-mail and informal chats, is a liability.

It’s also a security risk: the risk of exposure. The exposure could be accidental. It could be the result of data theft, as happened to Sony. Or it could be the result of litigation. Whatever the reason, the best security against these eventualities is not to have the data in the first place.

If Sony had had an aggressive data deletion policy, much of what was leaked couldn’t have been stolen and wouldn’t have been published.

An organization-wide deletion policy makes sense. Customer data should be deleted as soon as it isn’t immediately useful. Internal e-mails can probably be deleted after a few months, IM chats even more quickly, and other documents in one to two years. There are exceptions, of course, but they should be exceptions. Individuals should need to deliberately flag documents and correspondence for longer retention. But unless there are laws requiring an organization to save a particular type of data for a prescribed length of time, deletion should be the norm.
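
As a rough sketch of what deletion-as-the-default might look like in practice, the retention rules can be written down as data and enforced by a periodic sweep. The categories, periods, and record fields below are illustrative assumptions, not a description of any particular organization's policy:

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods echoing the essay's suggestions: chats go
# quickly, internal e-mail after a few months, documents in one to two years.
RETENTION = {
    "im_chat": timedelta(days=30),
    "email": timedelta(days=90),
    "document": timedelta(days=2 * 365),
}

@dataclass
class Record:
    kind: str                   # one of the RETENTION keys
    created: datetime
    legal_hold: bool = False    # the exceptions: litigation or regulatory holds
    flagged_keep: bool = False  # deliberately flagged for longer retention

def should_delete(record: Record, now: datetime) -> bool:
    """Deletion is the norm; retention is the exception."""
    if record.legal_hold or record.flagged_keep:
        return False
    limit = RETENTION.get(record.kind)
    if limit is None:
        return False  # unknown categories are left for manual review
    return now - record.created > limit

# Example: an internal e-mail over a year old, with no hold, gets deleted.
old_mail = Record("email", created=datetime.now(timezone.utc) - timedelta(days=400))
print(should_delete(old_mail, datetime.now(timezone.utc)))  # True

The point of the structure is that keeping something longer requires an explicit flag or legal hold; silence means deletion.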

This has always been true, but many organizations have forgotten it in the age of big data. In the wake of the devastating leak of terabytes of sensitive Sony data, I hope we’ll all remember it now.

This essay previously appeared on ArsTechnica.com, which has comments from people who strongly object to this idea.

Slashdot thread.

Posted on January 15, 2015 at 6:12 AM

Corporations Misusing Our Data

In the Internet age, we have no choice but to entrust our data to private companies: e-mail providers, service providers, retailers, and so on.

We realize that this data is at risk from hackers. But there’s another risk as well: the employees of the companies who are holding our data for us.

In the early years of Facebook, employees had a master password that enabled them to view anything they wanted in any account. NSA employees occasionally snoop on their friends and partners. The agency even has a name for it: LOVEINT. And well before the Internet, people with access to police or medical records occasionally used that power to look up either famous people or people they knew.

The latest company accused of allowing this sort of thing is Uber, the Internet car-ride service. The company is under investigation for spying on riders without their permission. Using an internal tool called the “god view,” some Uber employees are able to see who is using the service and where they’re going—and used it at least once, in 2011, as a party trick to show off the service. A senior executive also suggested the company should hire people to dig up dirt on its critics, making its database of people’s rides even more “useful.”

None of us wants to be stalked—whether it’s from looking at our location data, our medical data, our emails and texts, or anything else—by friends or strangers who have access due to their jobs. Unfortunately, there are few rules protecting us.

Government employees are prohibited from looking at our data, although none of the NSA LOVEINT creeps were ever prosecuted. The HIPAA law protects the privacy of our medical records, but we have nothing to protect most of our other information.

Your Facebook and Uber data are protected only by company culture. Nothing in the license agreements that you clicked “agree” to but didn’t read prevents those companies from violating your privacy.

This needs to change. Corporate databases containing our data should be secured from everyone who doesn’t need access for their work. Voyeurs who peek at our data without a legitimate reason should be punished.

There are audit technologies that can detect this sort of thing, and they should be required. As long as we have to give our data to companies and government agencies, we need assurances that our privacy will be protected.
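
One simple form such auditing can take is recording every employee lookup of a customer record together with its stated business justification, and flagging lookups that have none or that touch accounts outside the employee's assignments. The sketch below is a hypothetical illustration of that rule; the field names are assumptions, not any vendor's design:

from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass
class AccessEvent:
    employee_id: str
    customer_id: str
    ticket_id: Optional[str]            # the case or ticket that justifies the lookup
    assigned_customers: FrozenSet[str]  # accounts the employee actually works on

def is_suspicious(event: AccessEvent) -> bool:
    """Flag lookups with no documented reason, or outside the employee's assignments."""
    if event.ticket_id is None:
        return True
    return event.customer_id not in event.assigned_customers

# Example: browsing a record with no ticket attached gets flagged for review.
event = AccessEvent("emp-42", "cust-999", ticket_id=None,
                    assigned_customers=frozenset({"cust-123"}))
print(is_suspicious(event))  # True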

This essay previously appeared on CNN.com.

Posted on December 5, 2014 at 6:45 AM

Dan Geer Explains the Government Surveillance Mentality

This talk by Dan Geer explains the NSA mindset of “collect everything”:

I previously worked for a data protection company. Our product was, and I believe still is, the most thorough on the market. By “thorough” I mean the dictionary definition, “careful about doing something in an accurate and exact way.” To this end, installing our product instrumented every system call on the target machine. Data did not and could not move in any sense of the word “move” without detection. Every data operation was caught and monitored. It was total surveillance data protection. Its customers were companies that don’t accept half-measures. What made this product stick out was that very thoroughness, but here is the point: Unless you fully instrument your data handling, it is not possible for you to say what did not happen. With total surveillance, and total surveillance alone, it is possible to treat the absence of evidence as the evidence of absence. Only when you know everything that *did* happen with your data can you say what did *not* happen with your data.

The alternative to total surveillance of data handling is to answer more narrow questions, questions like “Can the user steal data with a USB stick?” or “Does this outbound e-mail have a Social Security Number in it?” Answering direct questions is exactly what a defensive mindset says you must do, and that is “never make the same mistake twice.” In other words, if someone has lost data because of misuse of some facility on the computer, then you either disable that facility or you wrap it in some kind of perimeter. Lather, rinse, and repeat. This extends all the way to such trivial matters as timer-based screen locking.

The difficulty with the defensive mindset is that it leaves in place the fundamental strategic asymmetry of cybersecurity, namely that while the workfactor for the offender is the price of finding a new method of attack, the workfactor for the defender is the cumulative cost of forever defending against all attack methods yet discovered. Over time, the curve for the cost of finding a new attack and the curve for the cost of defending against all attacks to date cross. Once those curves cross, the offender never has to worry about being out of the money. I believe that that crossing occurred some time ago.

The total surveillance strategy is, to my mind, an offensive strategy used for defensive purposes. It says “I don’t know what the opposition is going to try, so everything is forbidden unless we know it is good.” In that sense, it is like whitelisting applications. Taking either the application whitelisting or the total data surveillance approach is saying “That which is not permitted is forbidden.”

[…]

We all know the truism, that knowledge is power. We all know that there is a subtle yet important distinction between information and knowledge. We all know that a negative declaration like “X did not happen” can only be proven true if you have the enumeration of *everything* that did happen and can show that X is not in it. We all know that when a President says “Never again” he is asking for the kind of outcome for which proving a negative, lots of negatives, is categorically essential. Proving a negative requires omniscience. Omniscience requires god-like powers.

The whole essay is well worth reading.

Posted on November 11, 2013 at 6:21 AM

Risks of Data Portability

Peter Swire and Yianni Lagos have pre-published a law journal article on the risks of data portability. It specifically addresses an EU data protection regulation, but the security discussion is more general.

…Article 18 poses serious risks to a long-established E.U. fundamental right of data protection, the right to security of a person’s data. Previous access requests by individuals were limited in scope and format. By contrast, when an individual’s lifetime of data must be exported ‘without hindrance,’ then one moment of identity fraud can turn into a lifetime breach of personal data.

They have a point. If you’re going to allow users to download all of their data with one command, you might want to double- and triple-check that command. Otherwise it’s going to become an attack vector for identity theft and other malfeasance.
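
One way to “double- and triple-check that command” is to require a fresh proof of identity, beyond an ordinary logged-in session, before honoring a bulk export. A minimal sketch of that idea, with hypothetical field names:

from dataclasses import dataclass

@dataclass
class ExportRequest:
    session_valid: bool           # an ordinary logged-in session
    password_reverified: bool     # password re-entered just now
    second_factor_passed: bool    # e.g., a fresh one-time code

def may_export_everything(req: ExportRequest) -> bool:
    """A 'download all my data' request demands more than a live session."""
    return req.session_valid and req.password_reverified and req.second_factor_passed

# A stolen session cookie alone is not enough to pull a lifetime of data.
print(may_export_everything(ExportRequest(True, False, False)))  # False
print(may_export_everything(ExportRequest(True, True, True)))    # True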

Posted on October 24, 2012 at 1:27 PM

Data at Rest vs. Data in Motion

For a while now, I’ve pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data—data at rest—it was viewed as a form of communication. In “Applied Cryptography,” I described encrypting stored data in this way: “a stored message is a way for someone to communicate with himself through time.” Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are immediate and instantaneous. Encryption keys can be ephemeral, and systems like the STU-III telephone can be designed such that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a “computer” that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it’s primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people’s brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let’s take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn’t make any sense. The whole point of storing credit card numbers on a website is so it’s accessible—so each time I buy something, I don’t have to type it in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet’s infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that—I’m being approximate here), but increases the attacker’s workload exponentially. For many years, we have exploited that mathematical imbalance.
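
A back-of-the-envelope brute-force calculation makes the imbalance concrete (this models only exhaustive key search, not real-world attacks or defender costs):

# Brute-force model: an attacker searching an n-bit keyspace tries up to 2**n keys,
# while the defender's cost grows only modestly with key length.
for bits in (64, 65, 128):
    print(f"{bits}-bit key: up to {2 ** bits:.3e} candidate keys")

# 64-bit key: up to 1.845e+19 candidate keys
# 65-bit key: up to 3.689e+19 candidate keys   (one extra bit doubles the attacker's work)
# 128-bit key: up to 3.403e+38 candidate keys  (doubling the length squares it: 2**128 == (2**64) ** 2)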

Computer security is much more balanced. There’ll be a new attack, and a new defense, and a new attack, and a new defense. It’s an arms race between attacker and defender. And it’s a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we’re stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn’t try to break PGP; they simply installed a keyboard sniffer on the target’s computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can’t rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.

EDITED TO ADD (7/14): As several readers pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That’s a harder hacking problem, and this is why credit-card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it’s a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer-security sense and not in a cryptography sense.
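
A minimal sketch of the “separate appliance” point: if decryption happens in a service that alone holds the key, and the database stores only ciphertext, then stealing the database by itself yields nothing readable. The example below uses the third-party Python cryptography package's Fernet recipe purely for illustration; it is not PCI guidance or any specific product's design:

# pip install cryptography  (third-party library, used here only for illustration)
from cryptography.fernet import Fernet

class KeyService:
    """Stands in for the separate encryption appliance: it alone holds the key."""
    def __init__(self) -> None:
        self._fernet = Fernet(Fernet.generate_key())

    def encrypt(self, plaintext: bytes) -> bytes:
        return self._fernet.encrypt(plaintext)

    def decrypt(self, ciphertext: bytes) -> bytes:
        return self._fernet.decrypt(ciphertext)

# The application asks the key service to encrypt before storing; the database
# (here just a dict) holds only ciphertext and never sees the key.
key_service = KeyService()
database = {"order-1": key_service.encrypt(b"4111111111111111")}

stolen_copy = dict(database)                      # attacker steals only the database...
print(stolen_copy["order-1"][:20])                # ...and gets opaque ciphertext
print(key_service.decrypt(database["order-1"]))   # the legitimate path still works

The attacker now needs both the stolen database and a compromise of the key service, which is exactly the harder hacking problem described above.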

Posted on June 30, 2010 at 12:53 PM

Cloud Computing

This year’s overhyped IT concept is cloud computing. Also called software as a service (SaaS), cloud computing means running software over the Internet and accessing it via a browser. The Salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

But, hype aside, cloud computing is nothing new. It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s what social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs do. Any IT outsourcing—network infrastructure, security monitoring, remote hosting—is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors—and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

SaaS moves the trust boundary out one step further—you now have to also trust your software service vendors—but it doesn’t fundamentally change anything. It’s just another vendor you need to trust.

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if the vendors you have to trust aren’t as trustworthy as you’d like. With any outsourcing model, whether it be cloud computing or something else, you can’t. You have to trust your outsourcer completely: not only the outsourcer’s security, but also its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight, or to raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.

There are two different types of cloud computing customers. The first pays at most a nominal fee for these services—or uses them for free in exchange for ads: e.g., Gmail and Facebook. These customers have no leverage with their outsourcers: you can lose everything, and companies like Google and Amazon won’t spend a lot of time caring. The second type of customer pays considerably for these services: to Salesforce.com, MessageLabs, managed network companies, and so on. These customers have more leverage, provided they write their service contracts correctly. Still, nothing is guaranteed.

Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

This essay originally appeared in The Guardian.

EDITED TO ADD (6/4): Another opinion.

EDITED TO ADD (6/5): A rebuttal. And an apology for the tone of the rebuttal. The reason I am talking so much about cloud computing is that reporters and interviewers keep asking me about it. I feel kind of dragged into this whole thing.

EDITED TO ADD (6/6): At the Computers, Freedom, and Privacy conference last week, Bob Gellman said (this, by him, is worth reading) that the nine most important words in cloud computing are: “terms of service,” “location, location, location,” and “provider, provider, provider”—basically making the same point I did. You need to make sure the terms of service you sign up to are ones you can live with. You need to make sure the location of the provider doesn’t subject you to any laws that you can’t live with. And you need to make sure your provider is someone you’re willing to work with. Basically, if you’re going to give someone else your data, you need to trust them.

Posted on June 4, 2009 at 6:14 AM
