Entries Tagged "privacy"


Snooping on Text by Listening to the Keyboard

Fascinating research out of Berkeley. Ed Felten has a good summary:

Li Zhuang, Feng Zhou, and Doug Tygar have an interesting new paper showing that if you have an audio recording of somebody typing on an ordinary computer keyboard for fifteen minutes or so, you can figure out everything they typed. The idea is that different keys tend to make slightly different sounds, and although you don’t know in advance which keys make which sounds, you can use machine learning to figure that out, assuming that the person is mostly typing English text. (Presumably it would work for other languages too.)

Read the rest.

The paper is on the Web. Here’s the abstract:

We examine the problem of keyboard acoustic emanations. We present a novel attack taking as input a 10-minute sound recording of a user typing English text using a keyboard, and then recovering up to 96% of typed characters. There is no need for a labeled training recording. Moreover the recognizer bootstrapped this way can even recognize random text such as passwords: In our experiments, 90% of 5-character random passwords using only letters can be generated in fewer than 20 attempts by an adversary; 80% of 10-character passwords can be generated in fewer than 75 attempts. Our attack uses the statistical constraints of the underlying content, English language, to reconstruct text from sound recordings without any labeled training data. The attack uses a combination of standard machine learning and speech recognition techniques, including cepstrum features, Hidden Markov Models, linear classification, and feedback-based incremental learning.

Posted on September 13, 2005 at 8:13 AM

Trusted Computing Best Practices

The Trusted Computing Group (TCG) is an industry consortium that is trying to build more secure computers. It has a lot of members, although the board of directors consists of Microsoft, Sony, AMD, Intel, IBM, Sun, HP, and two smaller companies that are elected on a rotating basis.

The basic idea is that you build a computer from the ground up securely, with a core hardware “root of trust” called a Trusted Platform Module (TPM). Applications can run securely on the computer, can communicate with other applications and their owners securely, and can be sure that no untrusted applications have access to their data or code.

This sounds great, but it’s a double-edged sword. The same system that prevents worms and viruses from running on your computer might also stop you from using any legitimate software that your hardware or operating system vendor simply doesn’t like. The same system that stops spyware from accessing your data files might also stop you from copying audio and video files. The same system that ensures that all the patches you download are legitimate might also prevent you from, well, doing pretty much anything.

(Ross Anderson has an excellent FAQ on the topic. I wrote about it back when Microsoft called it Palladium.)

In May, the Trusted Computing Group published a best practices document: “Design, Implementation, and Usage Principles for TPM-Based Platforms.” Written for users and implementers of TCG technology, the document tries to draw a line between good uses and bad uses of this technology.

The principles that TCG believes underlie the effective, useful, and acceptable design, implementation, and use of TCG technologies are the following:

  • Security: TCG-enabled components should achieve controlled access to designated critical secured data and should reliably measure and report the system’s security properties. The reporting mechanism should be fully under the owner’s control.
  • Privacy: TCG-enabled components should be designed and implemented with privacy in mind and adhere to the letter and spirit of all relevant guidelines, laws, and regulations. This includes, but is not limited to, the OECD Guidelines, the Fair Information Practices, and the European Union Data Protection Directive (95/46/EC).
  • Interoperability: Implementations and deployments of TCG specifications should facilitate interoperability. Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.
  • Portability of data: Deployment should support established principles and practices of data ownership.
  • Controllability: Each owner should have effective choice and control over the use and operation of the TCG-enabled capabilities that belong to them; their participation must be opt-in. Subsequently, any user should be able to reliably disable the TCG functionality in a way that does not violate the owner’s policy.
  • Ease-of-use: The nontechnical user should find the TCG-enabled capabilities comprehensible and usable.

It’s basically a good document, although there are some valid criticisms. I like that the document clearly states that coercive uses of the technology (forcing people to use digital rights management systems, for example) are inappropriate:

The use of coercion to effectively force the use of the TPM capabilities is not an appropriate use of the TCG technology.

I like that the document tries to protect user privacy:

All implementations of TCG-enabled components should ensure that the TCG technology is not inappropriately used for data aggregation of personal information.

I wish that interoperability were more strongly enforced. The language has too much wiggle room for companies to break interoperability under the guise of security:

Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.

That sounds good, but what does “security” mean in that context? Security of the user against malicious code? Security of big media against people copying music and videos? Security of software vendors against competition? The big problem with TCG technology is that it can be used to further all three of these “security” goals, and this document is where “security” should be better defined.

Complaints aside, it’s a good document and we should all hope that companies follow it. Compliance is totally voluntary, but it’s the kind of document that governments and large corporations can point to and demand that vendors follow.

But there’s something fishy going on. Microsoft is doing its best to stall the document, and to ensure that it doesn’t apply to Vista (formerly known as Longhorn), Microsoft’s next-generation operating system.

The document was first written in the fall of 2003, and went through the standard review process in early 2004. Microsoft delayed the adoption and publication of the document, demanding more review. Eventually the document was published in June of this year (with a May date on the cover).

Meanwhile, the TCG built a purely software version of the specification: Trusted Network Connect (TNC). Basically, it’s a TCG system without a TPM.

The best practices document doesn’t apply to TNC, because Microsoft (as a member of the TCG board of directors) blocked it. The excuse is that the document hadn’t been written with software-only applications in mind, so it shouldn’t apply to software-only TCG systems.

This is absurd. The document outlines best practices for how the system is used. There’s nothing in it about how the system works internally. There’s nothing unique to hardware-based systems, nothing that would be different for software-only systems. You can go through the document yourself and replace all references to “TPM” or “hardware” with “software” (or, better yet, “hardware or software”) in five minutes. There are about a dozen changes, and none of them make any meaningful difference.

The only reason I can think of for all this Machiavellian maneuvering is that the TCG board of directors is making sure that the document doesn’t apply to Vista. If the document isn’t published until after Vista is released, then obviously it doesn’t apply.

Near as I can tell, no one is following this story. No one is asking why TCG best practices apply to hardware-based systems if they’re writing software-only specifications. No one is asking why the document doesn’t apply to all TCG systems, since it’s obviously written without any particular technology in mind. And no one is asking why the TCG is delaying the adoption of any software best practices.

I believe the reason is Microsoft and Vista, but clearly there’s some investigative reporting to be done.

(A version of this essay previously appeared on CNet’s News.com and ZDNet.)

EDITED TO ADD: This comment completely misses my point. Which is odd; I thought I was pretty clear.

EDITED TO ADD: There is a thread on Slashdot on the topic.

EDITED TO ADD: The Sydney Morning Herald republished this essay. Also “The Age.”

Posted on August 31, 2005 at 8:27 AM

Privacy Risks of Used Cell Phones

Ignore the corporate sleaziness by Cingular for the moment—they sold used cell phones meant for charity—and focus on the privacy implications. Cingular didn’t erase any of the personal information on the used phones they sold.

This reminds me of Simson Garfinkel’s analysis of used hard drives. He found that 90% of them contained old data, some of it very private and interesting.

Erasing data is one of the big problems of the information age. We know how to do it, but it takes time and we mostly don’t bother. And sadly, these kinds of privacy violations are more the norm than the exception. I don’t think it will get better unless Cingular becomes liable for violating its customers’ privacy like that.

EDITED TO ADD: I already wrote about the risks of losing small portable devices.

Posted on August 26, 2005 at 2:58 PM

Cameras in the New York City Subways

New York City is spending $212 million on surveillance technology: 1,000 video cameras and 3,000 motion sensors for the city’s subways, bridges, and tunnels.

Why? Why, given that cameras didn’t stop the London train bombings? Why, when there is no evidence that cameras are effective at reducing either terrorism or crime, and every reason to believe that they are ineffective?

One reason is that it’s the “movie plot threat” of the moment. (You can hear the echoes of the movie plots when you read the various quotes in the news stories.) The terrorists bombed a subway in London, so we need to defend our subways. The other reason is that New York City officials are erring on the side of caution. If nothing happens, then it was only money. But if something does happen, they won’t keep their jobs unless they can show they did everything possible. And technological solutions just make everyone feel better.

If I had $212 million to spend to defend against terrorism in the U.S., I would not spend it on cameras in the New York City subways. If I had $212 million to defend New York City against terrorism, I would not spend it on cameras in the subways. This is nothing more than security theater against a movie plot threat.

On the plus side, the money will also go for a new radio communications system for subway police, and will enable cell phone service in underground stations, but not tunnels.

Posted on August 24, 2005 at 1:10 PM

Bluetooth Spam

Advertisers are beaming unwanted content to Bluetooth phones at a distance of 100 meters.

Sure, it’s annoying, but worse, there are serious security risks. Don’t believe this:

Furthermore, there is no risk of downloading viruses or other malware to the phone, says O’Regan: “We don’t send applications or executable code.” The system uses the phone’s native download interface so they should be able to see the kind of file they are downloading before accepting it, he adds.

This company might not send executable code, but someone else certainly could. And what percentage of people who use Bluetooth phones can recognize “the kind of file they are downloading”?

We’ve already seen two ways to steal data from Bluetooth devices. And we know that more and more sensitive data is being stored on these small devices, increasing the risk. This is almost certainly another avenue for attack.

Posted on August 23, 2005 at 12:24 PM

Cryptographically-Secured Murder Confession

From the Associated Press:

Joseph Duncan III is a computer expert who bragged online, days before authorities believe he killed three people in Idaho, about a tell-all journal that would not be accessed for decades, authorities say.

Duncan, 42, a convicted sex offender, figured technology would catch up in 30 years, “and then the world will know who I really was, and what I really did, and what I really thought,” he wrote May 13.

Police seized Duncan’s computer equipment from his Fargo apartment last August, when they were looking for evidence in a Detroit Lakes, Minn., child molestation case.

At least one compact disc and a part of his hard drive were encrypted well enough that one of the region’s top computer forensic specialists could not access it, The Forum reported Monday.

This is the kind of story that the government likes to use to illustrate the dangers of encryption. How can we allow people to use strong encryption, they ask, if it means not being able to convict monsters like Duncan?

But how is this different from Duncan speaking the confession where no one could hear? Or writing it down and hiding it where no one could ever find it? Or not saying anything at all? If the police can’t convict him without this confession, whose existence we have only his word for, then maybe he’s innocent?

Technologies have good and bad uses. Encryption, telephones, cars: they’re all used by both honest citizens and by criminals. For almost all technologies, the good far outweighs the bad. Banning a technology because the bad guys use it, denying everyone else the beneficial uses of that technology, is almost always a bad security trade-off.

EDITED TO ADD: Looking at the details of the encryption, it’s certainly possible that the authorities will break the diary. It probably depends on how random a key Duncan chose, although possibly on whether or not there’s an implementation error in the cryptographic software. If I had more details, I could speculate further.
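To see why the key's randomness is what matters, here's a back-of-the-envelope sketch. The guess rate of a billion passwords per second and the 40-bit estimate for a human-chosen passphrase are assumptions for illustration, not facts about this case.

```python
def brute_force_years(entropy_bits, guesses_per_second=1e9):
    """Expected years to search half the keyspace at a given guess
    rate. The default rate is a hypothetical attacker's speed,
    chosen only to make the comparison concrete."""
    seconds = 2 ** (entropy_bits - 1) / guesses_per_second
    return seconds / (365.25 * 24 * 3600)

# A human-chosen passphrase might contain roughly 40 bits of entropy:
# searchable in minutes at this rate.
weak = brute_force_years(40)

# A truly random 128-bit key is far beyond any conceivable search.
strong = brute_force_years(128)
```

The arithmetic is the whole point: every extra bit of genuine randomness doubles the attacker's work, so a weak passphrase, not the cipher, is the likeliest way in.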

Posted on August 15, 2005 at 2:17 PM

Secure Flight News

According to Wired News, the DHS is looking for someone in Congress to sponsor a bill that eliminates congressional oversight over the Secure Flight program.

The bill would allow them to go ahead with the program regardless of GAO’s assessment. (Current law requires them to meet ten criteria set by Congress; the most recent GAO report said that they did not meet nine of them.) The bill would allow them to use commercial data even though they have not demonstrated its effectiveness. (The DHS funding bill passed by both the House and the Senate prohibits them from using commercial data during passenger screening, because there have been absolutely no test results showing that it is effective.)

In this new bill, all that would be required to go ahead with Secure Flight would be for Secretary Chertoff to say so:

Additionally, the proposed changes would permit Secure Flight to be rolled out to the nation’s airports after Homeland Security chief Michael Chertoff certifies the program will be effective and not overly invasive. The current bill requires independent congressional investigators to make that determination.

Looks like the DHS, being unable to comply with the law, is trying to change it. This is a rogue program that needs to be stopped.

In other news, the TSA has deleted about three million personal records it used for Secure Flight testing. This seems like a good idea, but it prevents people from knowing what data the government had on them—in violation of the Privacy Act.

Civil liberties activist Bill Scannell says it’s difficult to know whether TSA’s decision to destroy records so swiftly is a housecleaning effort or something else.

“Is the TSA just such an incredibly efficient organization that they’re getting rid of things that are no longer needed?” Scannell said. “Or is this a matter of the destruction of evidence?”

Scannell says it’s a fair question to ask in light of revelations that the TSA already violated the Privacy Act last year when it failed to fully disclose the scope of its testing for Secure Flight and its collection of commercial data on individuals.

My previous essay on Secure Flight is here.

Posted on August 15, 2005 at 9:43 AM
