Entries Tagged "privacy"

Most Stolen Identities Never Used

This is something I’ve been saying for a while, and it’s nice to see some independent confirmation:

A new study suggests consumers whose credit cards are lost or stolen or whose personal information is accidentally compromised face little risk of becoming victims of identity theft.

The analysis, released on Wednesday, also found that even in the most dangerous data breaches—where thieves access Social Security numbers and other sensitive information on consumers they have deliberately targeted—only about 1 in 1,000 victims had their identities stolen.

The reason is that thieves are stealing far more identities than they need. Two years ago, if someone asked me about protecting against identity theft, I would tell them to shred their trash and be careful giving information over the Internet. Today, that advice is obsolete. Criminals are not stealing identity information in ones and twos; they’re stealing identity information in blocks of hundreds of thousands and even millions.

If a criminal ring wants a dozen identities for some fraud scam, and they steal a database with 500,000 identities, then—as a percentage—almost none of those identities will ever be the victims of fraud.
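
Do the math: a dozen out of 500,000 is 0.0024 percent.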

Some other findings from their press release:

A significant finding from the research is that different breaches pose different degrees of risk. In the research, ID Analytics distinguishes between “identity-level” breaches, where names and Social Security numbers were stolen, and “account-level” breaches, where only account numbers—sometimes associated with names—were stolen. ID Analytics also discovered that the degree of risk varies based on the nature of the data breach, for example, whether the breach was the result of a deliberate hacking into a database or a seemingly unintentional loss of data, such as tapes or disks being lost in transit.

And:

ID Analytics’ fraud experts believe the reason for the minimal use of stolen identities is based on the amount of time it takes to actually perpetrate identity theft against a consumer. As an example, it takes approximately five minutes to fill out a credit application. At this rate, it would take a fraudster working full-time, averaging 6.5 hours a day, five days a week, 50 weeks a year, over 50 years to fully utilize a breached file consisting of one million consumer identities. If the criminal outsourced the work at a rate of $10 an hour in an effort to use a breached file of the same size in one year, it would cost that criminal about $830,000.
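
Their arithmetic checks out. Here’s a quick back-of-the-envelope sketch in Python to verify it (the figures are the press release’s; the sketch is mine):

    # Verify the press release's numbers.
    identities = 1_000_000
    minutes_per_application = 5
    hours_per_year = 6.5 * 5 * 50        # 6.5 hours/day, 5 days/week, 50 weeks/year

    total_hours = identities * minutes_per_application / 60
    print(total_hours / hours_per_year)  # ~51 years for one full-time fraudster
    print(total_hours * 10)              # ~$833,000 to outsource at $10/hour

The press release continues: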

Another key finding indicates that in certain targeted data breaches, notices may have a deterrent effect. In one large-scale identity-level breach, thieves slowed their use of the data to commit identity theft after public notification. The research also showed how the criminals who stole the data in the breaches used identity data manipulation, or “tumbling” to avoid detection and to prolong the scam.

That last bit is interesting, and it makes this recommendation even more surprising:

The company suggests, for instance, that companies shouldn’t always notify consumers of data breaches because they may be unnecessarily alarming people who stand little chance of being victimized.

I agree with them that all this notification is having a “boy who cried wolf” effect on people. I know people living in California who get disclosure notifications in the mail regularly, and who have stopped paying attention to them.

But remember, the main security value of notification requirements is the cost. By increasing the cost to companies of data thefts, the goal is for them to increase their security. (The main security value used to be the public shaming, but these breaches are now so common that the press no longer writes about them.) Direct fines would be a better way of dealing with the economic externality, but the notification law is all we’ve got right now. I don’t support eliminating it until there’s something else in its place.

Posted on December 12, 2005 at 9:50 AM

U.S. Immigration Database Security

In September, the Inspector General of the Department of Homeland Security published a report on the security of the USCIS (United States Citizenship and Immigration Services) databases. It’s called: “Security Weaknesses Increase Risks to Critical United States Citizenship and Immigration Services Database,” and a redacted version (.pdf) is on the DHS website.

This is from the Executive Summary:

Although USCIS has not established adequate or effective database security controls for the Central Index System, it has implemented many essential security controls such as procedures for controlling temporary or emergency system access, a configuration management plan, and procedures for implementing routine and emergency changes. Further, we did not identify any significant configuration weaknesses during our technical tests of the Central Index System. However, additional work remains to implement the access controls, configuration management procedures, and continuity of operations safeguards necessary to protect sensitive Central Index System data effectively. Specifically, USCIS has not: 1) implemented effective user administration procedures; 2) reviewed and retained [REDACTED] effectively; 3) ensured that system changes are properly controlled; 4) developed and tested an adequate information technology (IT) contingency plan; 5) implemented [REDACTED]; or 6) monitored system security functions sufficiently. These database security exposures increase the risk that unauthorized individuals could gain access to critical USCIS database resources and compromise the confidentiality, integrity, and availability of sensitive Central Index System data. [REDACTED]

Posted on December 8, 2005 at 7:38 AM

Snake-Oil Research in Nature

Snake-oil isn’t only in commercial products. Here’s a piece of research published (behind a paywall) in Nature that’s just full of it.

The article suggests using chaos in an electro-optical system to generate a pseudo-random light sequence, which is then added to the message to protect it from interception. Now, the idea of using chaos to build encryption systems has been tried many times in the cryptographic community, and has always failed. But the authors of the Nature article show no signs of familiarity with prior cryptographic work.

The published system has the obvious problem that it does not include any form of message authentication, so it will be trivial to send spoofed messages or tamper with messages while they are in transit.
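
For contrast, here is a minimal sketch of what message authentication looks like in a conventional design. This is my own illustration using Python’s standard hmac module, not anything from the paper:

    import hashlib
    import hmac
    import os

    key = os.urandom(32)                  # shared secret authentication key
    msg = b"transfer $100 to account 42"

    # The sender computes a tag over the message and transmits both.
    tag = hmac.new(key, msg, hashlib.sha256).digest()

    # The receiver recomputes the tag and compares in constant time;
    # any modification of msg or tag makes verification fail.
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    print(hmac.compare_digest(tag, expected))  # True only for the genuine message

Without something like this, an attacker on the channel can flip bits at will and the receiver has no way to notice.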

But a closer examination of the paper’s figures suggests a far more fundamental problem. There’s no key. Anyone with a valid receiver can decode the ciphertext. No key equals no security, and what you have left is a totally broken system.

I e-mailed Claudio R. Mirasso, the corresponding author, about the lack of any key, and got this reply: “To extract the message from the chaotic carrier you need to replicate the carrier itself. This can only be done by a laser that matches the emitter characteristics within, let’s say, 2-5%. Semiconductor lasers with such similarity have to be carefully selected from the same wafer. Even then you have to test them, because they can still be too different and not synchronize. We talk about a hardware key. Also the operating conditions (current, feedback length and coupling strength) are part of the key.”

Let me translate that. He’s saying that there is a hardware key baked into the system at fabrication. (It comes from manufacturing deviations in the lasers.) There’s no way to change the key in the field. There’s no way to recover security if any of the transmitters/receivers are lost or stolen. And they don’t know how hard it would be for an attacker to build a compatible receiver, or even a tunable receiver that could listen to a variety of encodings.
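
To make the structural problem concrete, here is a toy software analogue. This is my own illustration using a logistic map, not the paper’s electro-optical system, but it exhibits the same flaw: the only “secret” is a set of operating parameters, and anyone who knows or reproduces them can decode:

    # Toy chaos-masking scheme: the "key" is just the map parameter r and
    # the initial condition x0 -- operating parameters baked into the system,
    # not a cryptographic key that can be changed or revoked.
    def carrier(r, x0, n):
        xs, x = [], x0
        for _ in range(n):
            x = r * x * (1 - x)       # logistic map; chaotic for r near 4
            xs.append(x)
        return xs

    def mask(message, r, x0):
        return [m + c for m, c in zip(message, carrier(r, x0, len(message)))]

    def unmask(ct, r, x0):
        return [y - c for y, c in zip(ct, carrier(r, x0, len(ct)))]

    message = [0.0, 1.0, 1.0, 0.0, 1.0]
    ct = mask(message, r=3.99, x0=0.4)
    print(unmask(ct, r=3.99, x0=0.4))  # recovers the message (up to float rounding)

A compatible receiver is a decoder; there is no per-user key to lose, change, or revoke.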

This paper would never get past peer review in any competent cryptography journal or conference. I’m surprised it was accepted in Nature, a fiercely competitive journal. I don’t know why Nature is taking articles on topics that are outside its usual competence, but it looks to me like Nature got burnt here by a lack of expertise in the area.

To be fair, the paper very carefully skirts the issue of security, and claims hardly anything: “Additionally, chaotic carriers offer a certain degree of intrinsic privacy, which could complement (via robust hardware encryption) both classical (software based) and quantum cryptography systems.” Now that “certain degree of intrinsic privacy” is approximately zero. But other than that, they’re very careful how they word their claims.

For instance, the abstract says: “Chaotic signals have been proposed as broadband information carriers with the potential of providing a high level of robustness and privacy in data transmission.” But there’s no disclosure that this proposal is bogus, from a privacy perspective. And the next-to-last paragraph says “Building on this, it should be possible to develop reliable cost-effective secure communication systems that exploit deeper properties of chaotic dynamics.” No disclosure that “chaotic dynamics” is actually irrelevant to the “secure” part. The last paragraph talks about “smart encryption techniques” (referencing a paper that talks about chaos encryption), “developing active eavesdropper-evasion strategies” (whatever that means), and so on. It’s just enough that if you don’t parse their words carefully and don’t already know the area well, you might come away with the impression that this is a major advance in secure communications. A more careful disclaimer would have helped.

Communications security was listed as one of the motivations for studying this technique. To list it as a motivation, without explaining that the experimental setup is actually useless for secure communications, is questionable at best.

Meanwhile, the press has written articles that convey the wrong impression. Science News has an article that lauds this as a big achievement for communications privacy.

It talks about it as a “new encryption strategy,” “chaos-encrypted communication,” “1 gigabyte of chaos-encrypted information per second.” It’s obvious that the communications security aspect is what Science News is writing about. If the authors knew that their scheme is useless for communications security, they didn’t explain that very well.

There is also a New Scientist article titled “Let chaos keep your secrets safe” that characterizes this as a “new cryptographic technique,” but I can’t get a copy of the full article.

Here are two more articles that discuss its security benefits. In the latter, Mirasso says “the main task we have for the future” is to “define, test, and calibrate the security that our system can offer.”

And their project web page says that “the continuous increase of computer speed threatens the safety” of traditional cryptography (which is bogus) and suggests using physical-layer chaos as a way to solve this. That’s listed as the goal of the project.

There’s a lesson here. This is research undertaken by researchers with no prior track record in cryptography, submitted to a journal with no background in cryptography, and reviewed by reviewers with who knows what kind of experience in cryptography. Cryptography is a subtle subject, and trying to design new cryptosystems without the necessary experience and training in the field is a quick route to insecurity.

And what’s up with Nature? Cryptographers with no training in physics know better than to think they are competent to evaluate physics research. If a physics paper were submitted to a cryptography journal, the authors would likely be gently redirected to a physics journal—we wouldn’t want our cryptography conferences to accept a paper on a subject they aren’t competent to evaluate. Why would Nature expect the situation to be any different when physicists try to do cryptography research?

Posted on December 7, 2005 at 6:36 AM

FBI to Approve All Software?

Sounds implausible, I know. But how else do you explain this FCC ruling (from September—I missed it until now):

The Federal Communications Commission thinks you have the right to use software on your computer only if the FBI approves.

No, really. In an obscure “policy” document released around 9 p.m. ET last Friday, the FCC announced this remarkable decision.

According to the three-page document, to preserve the openness that characterizes today’s Internet, “consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement.” Read the last seven words again.

The FCC didn’t offer much in the way of clarification. But the clearest reading of the pronouncement is that some unelected bureaucrats at the commission have decreed that Americans don’t have the right to use software such as Skype or PGPfone if it doesn’t support mandatory backdoors for wiretapping. (That interpretation was confirmed by an FCC spokesman on Monday, who asked not to be identified by name. Also, the announcement came at the same time as the FCC posted its wiretapping rules for Internet telephony.)

Posted on December 2, 2005 at 11:24 AM

Google and Privacy

Daniel Solove on Google and privacy:

A New York Times editorial observes:

At a North Carolina strangulation-murder trial this month, prosecutors announced an unusual piece of evidence: Google searches allegedly done by the defendant that included the words “neck” and “snap.” The data were taken from the defendant’s computer, prosecutors say. But it might have come directly from Google, which—unbeknownst to many users—keeps records of every search on its site, in ways that can be traced back to individuals.

This is an interesting fact—Google keeps records of every search in a way that can be traced back to individuals. The op-ed goes on to say:

Google has been aggressive about collecting information about its users’ activities online. It stores their search data, possibly forever, and puts “cookies” on their computers that make it possible to track those searches in a personally identifiable way—cookies that do not expire until 2038. Its e-mail system, Gmail, scans the content of e-mail messages so relevant ads can be posted. Google’s written privacy policy reserves the right to pool what it learns about users from their searches with what it learns from their e-mail messages, though Google says it won’t do so. . . .

The government can gain access to Google’s data storehouse simply by presenting a valid warrant or subpoena. . . .

This is an important point. No matter what Google’s privacy policy says, the fact that it maintains information about people’s search activity enables the government to gather that data, often with a mere subpoena, which provides virtually no protection to privacy—and sometimes without even a subpoena.

Solove goes on to argue that if companies like Google want to collect people’s data (even if people are willing to supply it), the least they can do is fight for greater protections against government access to that data. While this won’t address all the problems, it would be a step forward to see companies like Google use their power to foster meaningful legislative change.
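
One technical aside on the op-ed’s “2038” detail: that expiration date isn’t arbitrary. It is the largest value a signed 32-bit Unix timestamp can hold, a common “never expires” idiom for cookies. A quick check (my own snippet, standard Python):

    import datetime

    # The largest signed 32-bit Unix timestamp is 2**31 - 1 seconds past the epoch.
    max_ts = 2**31 - 1
    print(datetime.datetime.fromtimestamp(max_ts, tz=datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00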

EDITED TO ADD (12/3): Here’s an op-ed from The Boston Globe on the same topic.

Posted on November 30, 2005 at 3:08 PM

Giving the U.S. Military the Power to Conduct Domestic Surveillance

More nonsense in the name of defending ourselves from terrorism:

The Defense Department has expanded its programs aimed at gathering and analyzing intelligence within the United States, creating new agencies, adding personnel and seeking additional legal authority for domestic security activities in the post-9/11 world.

The moves have taken place on several fronts. The White House is considering expanding the power of a little-known Pentagon agency called the Counterintelligence Field Activity, or CIFA, which was created three years ago. The proposal, made by a presidential commission, would transform CIFA from an office that coordinates Pentagon security efforts—including protecting military facilities from attack—to one that also has authority to investigate crimes within the United States such as treason, foreign or terrorist sabotage or even economic espionage.

The Pentagon has pushed legislation on Capitol Hill that would create an intelligence exception to the Privacy Act, allowing the FBI and others to share information gathered about U.S. citizens with the Pentagon, CIA and other intelligence agencies, as long as the data is deemed to be related to foreign intelligence. Backers say the measure is needed to strengthen investigations into terrorism or weapons of mass destruction.

The police and the military have fundamentally different missions. The police protect citizens. The military attacks the enemy. When you start giving police powers to the military, citizens start looking like the enemy.

We gain a lot of security because we separate the functions of the police and the military, and we will all be much less safe if we allow those functions to blur. This kind of thing worries me far more than terrorist threats.

Posted on November 28, 2005 at 2:11 PM

European Terrorism Law and Music Downloaders

The European music industry is lobbying the European Parliament, demanding things that the RIAA can only dream about:

The music and film industries are demanding that the European parliament extends the scope of proposed anti-terror laws to help them prosecute illegal downloaders. In an open letter to MEPs, companies including Sony BMG, Disney and EMI have asked to be given access to communications data – records of phone calls, emails and internet surfing – in order to take legal action against pirates and filesharers. Current proposals restrict use of such information to cases of terrorism and organised crime.

Our society definitely needs a serious conversation about the fundamental freedoms we are sacrificing in a misguided attempt to keep us safe from terrorism. It feels both surreal and sickening to have to defend our fundamental freedoms against those who want to stop people from sharing music. How is it possible that we can contemplate so much damage to our society simply to protect the business model of a handful of companies?

Posted on November 27, 2005 at 12:20 PM

Surveillance and Oversight

Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year’s Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database—probably close to a million people overall—that the FBI’s computers analyzed, looking for links to known terrorists. Of course, no terrorist attack occurred and no plot was discovered: The intelligence was wrong.

A typical American citizen spending the holidays in Vegas might be surprised to learn that the FBI collected his personal data, but this kind of thing is increasingly common. Since 9/11, the FBI has been collecting all sorts of personal information on ordinary Americans, and it shows no signs of letting up.

The FBI has two basic tools for gathering information on large groups of Americans. Both were created in the 1970s to gather information solely on foreign terrorists and spies. Both were greatly expanded by the USA Patriot Act and other laws, and are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism. Together, they represent an enormous increase in police power in the United States.

The first are FISA warrants (sometimes called Section 215 warrants, after the section of the Patriot Act that expanded their scope). These are issued in secret, by a secret court. The second are national security letters, less well known but much more powerful, which FBI field supervisors can issue all by themselves. The exact numbers are secret, but a recent Washington Post article estimated that 30,000 letters are issued each year, demanding telephone records, banking data, customer data, library records, and so on.

In both cases, the recipients of these orders are prohibited by law from disclosing the fact that they received them. And two years ago, Attorney General John Ashcroft rescinded a 1995 guideline requiring that this information be destroyed if it is not relevant to whatever investigation it was collected for. Now, it can be saved indefinitely, and disseminated freely.

September 2005, Rotterdam. The police had already identified some of the 250 suspects in a soccer riot from the previous April, but most of those captured on video remained unidentified. Seeking the public’s help, they sent text messages to 17,000 phones known to be in the vicinity of the riots, asking that anyone with information contact the police. The result was more evidence, and more arrests.

The differences between the Rotterdam and Las Vegas incidents are instructive. The Rotterdam police needed specific data for a specific purpose. Its members worked with federal justice officials to ensure that they complied with the country’s strict privacy laws. They obtained the phone numbers without any names attached, and deleted them immediately after sending the single text message. And their actions were public, widely reported in the press.

On the other hand, the FBI has no judicial oversight. With only a vague hint that a Las Vegas attack might occur, the bureau vacuumed up an enormous amount of information. First its members tried asking for the data; then they turned to national security letters and, in some cases, subpoenas. There was no requirement to delete the data, and there is every reason to believe that the FBI still has it all. And the bureau worked in secret; the only reason we know this happened is that the operation leaked.

These differences illustrate four principles that should govern the police’s use of personal information. The first is oversight: In order to obtain personal information, the police should be required to show probable cause, and convince a judge to issue a warrant for the specific information needed. The second is minimization: The police should get only the specific information they need, and no more. Nor should they be allowed to collect large blocks of information in order to go on “fishing expeditions,” looking for suspicious behavior. The third is transparency: The public should know, if not immediately then eventually, what information the police are getting and how it is being used. The fourth is destruction: Any data the police obtain should be destroyed immediately after its court-authorized purpose is achieved. The police should not be able to hold on to it, just in case it might become useful at some future date.

This isn’t about our ability to combat terrorism; it’s about police power. Traditional law already gives police enormous power to peer into the personal lives of people, to use new crime-fighting technologies, and to correlate that information. But unfettered police power quickly resembles a police state, and checks on that power make us all safer.

As more of our lives become digital, we leave an ever-widening audit trail in our wake. This information has enormous social value—not just for national security and law enforcement, but for purposes as mundane as using cell-phone data to track road congestion, and as important as using medical data to track the spread of diseases. Our challenge is to make this information available when and where it needs to be, but also to protect the principles of privacy and liberty our country is built on.

This essay originally appeared in the Minneapolis Star-Tribune.

Posted on November 22, 2005 at 6:06 AM
