August 15, 2011

by Bruce Schneier
Chief Security Technology Officer, BT

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively comment section. An RSS feed is available.

In this issue:

Developments in Facial Recognition

Eventually, it will work. You'll be able to wear a camera that will automatically recognize someone walking towards you, and an earpiece that will relay who that person is and maybe something about him. None of the technologies required to make this work are hard; it's just a matter of getting the error rate low enough for the system to be useful. And a number of recent research results and news stories illustrate what this new world might look like.

The police want this sort of system. MORIS is an iris-scanning technology that several police forces in the U.S. are using. The next step is the face-scanning glasses that the Brazilian police claim they will be wearing at the 2014 World Cup.

A small camera fitted to the glasses can capture 400 facial images per second and send them to a central computer database storing up to 13 million faces.
The system can compare biometric data at 46,000 points on a face and will immediately signal any matches to known criminals or people wanted by police.

In the future, this sort of thing won't be limited to the police. Facebook has recently embarked on a major photo tagging project, and already has the largest collection of identified photographs in the world outside of a government. Researchers at Carnegie Mellon University have combined the public part of that database with a camera and face-recognition software to identify students on campus. (The paper fully describing their work is under review and not online yet, but slides describing the results can be found here.)

Of course, there are false positives -- as there are with any system like this. That's not a big deal if the application is a billboard with face recognition serving different ads depending on the gender and age -- and eventually the identity -- of the person looking at it, but it is far more problematic if the application is a legal one.
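The false-positive problem is really a base-rate problem: even a very accurate matcher, run against a large database, flags many innocent people. Here's a back-of-the-envelope sketch; the false-match rate is hypothetical, and the 13-million figure is the database size quoted above.

```python
# Base-rate sketch: a face matcher with a tiny per-comparison
# false-match rate still flags many innocent people when it is
# compared against millions of enrolled faces.
# The false-match rate below is hypothetical, for illustration only.

def expected_false_matches(database_size, false_match_rate):
    """Expected number of innocent people flagged per probe image."""
    return database_size * false_match_rate

# A database of 13 million faces and a one-in-a-million
# false-match rate still flags about 13 innocent people
# for every single face scanned.
print(expected_false_matches(13_000_000, 1e-6))
```

That's fine for serving ads; it's a serious problem when each flag triggers a police stop or a license suspension.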

In Boston, someone erroneously had his driver's license revoked:

It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.
And apparently, he has company. Last year, the facial recognition system picked out more than 1,000 cases that resulted in State Police investigations, officials say. And some of those people are guilty of nothing more than looking like someone else. Not all go through the long process that Gass says he endured, but each must visit the Registry with proof of their identity.
At least 34 states are using such systems. They help authorities verify a person's claimed identity and track down people who have multiple licenses under different aliases, such as underage people wanting to buy alcohol, people with previous license suspensions, and people with criminal records trying to evade the law.

The problem is less with the system, and more with the guilty-until-proven-innocent way in which the system is used.

Kaprielian said the Registry gives drivers enough time to respond to the suspension letters and that it is the individual's "burden" to clear up any confusion. She added that protecting the public far outweighs any inconvenience Gass or anyone else might experience.
"A driver's license is not a matter of civil rights. It's not a right. It's a privilege," she said. "Yes, it is an inconvenience [to have to clear your name], but lots of people have their identities stolen, and that's an inconvenience, too."

Relatedly, there's a system embedded in a pair of glasses that automatically analyzes and relays micro-facial expressions. The goal is to help autistic people who have trouble reading emotions, but you could easily imagine this sort of thing becoming common. And what happens when we start relying on these computerized systems and ignoring our own intuition?

And finally, CV Dazzle is camouflage from face detection.


Brazilian face-scanning glasses:

Facebook photo tagging:
Carnegie Mellon research:

Billboard with face-recognition:
Boston false positive:
IEEE Spectrum and The Economist have published similar articles.

Micro facial expression analysis glasses:
CV Dazzle:


Ross Anderson discusses the technical and policy details of the British phone hacking scandal.
This is really clever: the Telex anti-censorship system uses deep-packet inspection to avoid Internet censorship.

The police arrested sixteen suspected members of the Anonymous hacker group.

Google detects malware in its search data, and alerts users. There's a lot that Google sees as a result of its unique and prominent position in the Internet. Some of it is going to be stuff they never considered. And while they use a lot of it to make money, it's good of them to give this one back to the Internet users.
Smuggling drugs in unwitting people's car trunks.
This attack works because 1) there's a database of keys available to lots of people, and 2) both the SENTRI system and the victims are predictable.

Revenge effects of too-safe playground equipment.

iPhone iris scanning technology:
Good article on liabilities and computer security.
I've been talking about liabilities for about a decade now. Here are essays I wrote in 2002, 2003, 2004, and 2006.

Matt Blaze analyzes the 2010 U.S. Wiretap Report.

I second Matt's recommendation of Susan Landau's book "Surveillance or Security: The Risks Posed by New Wiretapping Technologies" (MIT Press, 2011). It's an excellent discussion of the security and politics of wiretapping.

Data privacy as a prisoner's dilemma: a good analysis.
The solution -- and one endorsed by the essay -- is a comprehensive privacy law. That reduces the incentive to defect.

ShareMeNot is a Firefox add-on for preventing tracking from third-party buttons (like the Facebook "Like" button or the Google "+1" button) until the user actually chooses to interact with them. That is, ShareMeNot doesn't disable/remove these buttons completely. Rather, it allows them to render on the page, but prevents the cookies from being sent until the user actually clicks on them, at which point ShareMeNot releases the cookies and the user gets the desired behavior (i.e., they can Like or +1 the page).
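The policy ShareMeNot implements is simple to state. This is not the add-on's actual code (it's a Firefox extension, and the domain list and function names here are hypothetical); it's just a minimal Python sketch of the decision it makes for each outgoing request:

```python
# Sketch of ShareMeNot's cookie policy (hypothetical names, not the
# add-on's real code): third-party button requests are allowed to
# load, but their cookies are withheld until the user actually
# clicks the button.

TRACKER_DOMAINS = {"facebook.com", "plusone.google.com"}  # hypothetical list

def outgoing_cookie_header(request_domain, cookie_header, user_clicked):
    """Return the Cookie header to send, or None to strip it."""
    if request_domain in TRACKER_DOMAINS and not user_clicked:
        return None          # button still renders, but anonymously
    return cookie_header     # user interacted: release the cookies

# Passive page load: the Like button loads without identifying cookies.
print(outgoing_cookie_header("facebook.com", "c_user=123", False))  # None
# The user clicks the button: cookies go through and the Like works.
print(outgoing_cookie_header("facebook.com", "c_user=123", True))   # c_user=123
```

The design point is that the buttons keep working on demand; only the passive, click-free tracking is blocked.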

Hacking Apple laptop batteries.
Bypassing the lock on luggage.
Interesting paper: "Science Fiction Prototyping and Security Education: Cultivating Contextual and Societal Thinking in Computer Security Education and Beyond," by Tadayoshi Kohno and Brian David Johnson.
Breaking the Xilinx Virtex-II FPGA bitstream encryption. It's a power-analysis attack, which makes it much harder to defend against. And since the attack model is an engineer trying to reverse-engineer the chip, it's a valid attack.

Attacking embedded systems in prison doors.
This seems like a minor risk today; Stuxnet was a military-grade effort, beyond the reach of a typical criminal organization. But that will change, as people study and learn from the reverse-engineered Stuxnet code and as hacking PLCs becomes more common. As we move from mechanical, or even electro-mechanical, systems to digital systems, and as we network those digital systems, this sort of vulnerability is only going to become more common.

The article is in the context of the big Facebook lawsuit, but the part about identifying people by their writing style is interesting.
It seems reasonable that we each have a linguistic fingerprint, although 1) there are far fewer of them than finger fingerprints, and 2) they're easier to fake. It's probably not much of a stretch to take software that "identifies bundles of linguistic features, hundreds in all" and use the data to automatically modify my writing to look like someone else's.
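To make "bundles of linguistic features" concrete, here's a toy sketch -- not the software described above -- of the kind of features stylometry systems collect: function-word frequencies and simple length statistics. Real systems bundle hundreds of these.

```python
# Toy stylometry sketch: extract a tiny "linguistic fingerprint"
# from a text as function-word frequencies plus average word length.
# Real author-recognition systems use hundreds of such features.
import re

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it"]

def style_features(text):
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    feats = {w: words.count(w) / total for w in FUNCTION_WORDS}
    feats["avg_word_len"] = sum(map(len, words)) / total
    return feats

print(style_features("It is the threat of the unknown that matters."))
```

Note that every feature here is trivially spoofable by rewriting, which is exactly why faking a linguistic fingerprint is easier than faking a physical one.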

A good criticism of the science behind author recognition, and a paper on how to evade these systems.

Seems that the one-time pad was not first invented by Vernam.
The paper:

Two items on hacking lotteries. The first is about someone who figured out how to spot winners in a scratch-off tic-tac-toe-style game, and a daily-draw-style game where the expected payout can exceed the ticket price. The second is about someone who has won the lottery four times, with speculation that she had advance knowledge of where and when certain jackpot-winning scratch-off tickets would be sold.
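The expected-payout point is just arithmetic. A sketch with entirely hypothetical numbers: once a rolling jackpot grows large enough, a ticket's expected value can exceed its price.

```python
# Hypothetical expected-value calculation for a rolling-jackpot draw
# game. All odds and prize amounts below are made up for illustration;
# they are not any real lottery's figures.

def expected_value(jackpot, p_jackpot, minor_prizes):
    """minor_prizes: list of (prize, probability) pairs."""
    return jackpot * p_jackpot + sum(p * pr for p, pr in minor_prizes)

# A $15M jackpot at 1-in-10-million odds contributes $1.50 per ticket,
# and minor prizes contribute another $0.20 -- so a $1 ticket has an
# expected payout of about $1.70.
print(expected_value(15_000_000, 1e-7, [(100, 1e-3), (5, 2e-2)]))
```

Of course a positive expected value doesn't remove the variance: you'd need to buy an enormous number of tickets to reliably come out ahead.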

Home-made Wi-Fi hacking, phone snooping, UAV.

German police call airport full-body scanners useless.
Here's a story about full-body scanners that are overly sensitive to sweaty armpits.
The Zodiac cipher was announced as cracked, but the break was a hoax.

XKCD on the CIA hack.

I've been using the phrase "arms race" to describe the world's militaries' rush into cyberspace for a couple of years now. Here's a good article on the topic that uses the same phrase.
New bank-fraud Trojan.
An article on MRI lie detectors -- lots of interesting research.
My previous blog post on the topic.

There's a security story from biology I've used a few times: plants that use chemicals to call in airstrikes by wasps on the herbivores attacking them. This is a new variation: a species of orchid that emits the same signals as a trick, to get pollinated.
I'm a big fan of taxonomies, and this "Taxonomy of Operational Cyber Security Risks" -- from Carnegie Mellon -- seems like a useful one.

GPRS hacked.
Security flaws in encrypted police radios: "Why (Special Agent) Johnny (Still) Can't Encrypt: A Security Analysis of the APCO Project 25 Two-Way Radio System," by Sandy Clark, Travis Goodspeed, Perry Metzger, Zachary Wasserman, Kevin Xu, and Matt Blaze. I've heard Matt talk about this project several times. It's great work, and a fascinating insight into the usability problems of encryption in the real world.
Counterfeit pilot IDs and uniforms will now be sufficient to bypass airport security. TSA is testing a program to not screen pilots.

The African crested rat applies tree poison to its fur to make itself more deadly.
A couple of weeks ago, Wired reported the discovery of a new, undeletable web cookie.
The Wired article was very short on specifics, so I waited until one of the researchers -- Ashkan Soltani -- wrote up more details. He finally did, in a quite technical essay.

Schneier News

My new book, "Liars and Outliers," has a cover. Publication is still scheduled for the end of February -- in time for the RSA Conference -- assuming I finish the manuscript in time.
Older posts on the book:

Interview with me from the Homeland Security News Wire.

Is There a Hacking Epidemic?

Freakonomics asks: "Why has there been such a spike in hacking recently? Or is it merely a function of us paying closer attention and of institutions being more open about reporting security breaches?"

They posted five answers, including mine:

The apparent recent hacking epidemic is more a function of news reporting than an actual epidemic. Like shark attacks or school violence, natural fluctuations in data become press epidemics, as more reporters write about more events, and more people read about them. Just because the average person reads more articles about more events doesn't mean that there are more events -- just more articles.
Hacking for fun -- like LulzSec -- has been around for decades. It's where hacking started, before criminals discovered the Internet in the 1990s. Criminal hacking for profit -- like the Citibank hack -- has been around for over a decade. International espionage existed for millennia before the Internet, and has never taken a holiday.
The past several months have brought us a string of newsworthy hacking incidents. First there was the hacking group Anonymous, and its hacktivism attacks as a response to the pressure to interdict contributions to Julian Assange's legal defense fund and the torture of Bradley Manning. Then there was the probably espionage-related attack against RSA, Inc. and its authentication token -- made more newsworthy because of the bungling of the disclosure by the company -- and the subsequent attack against Lockheed Martin. And finally, there were the very public attacks against Sony, which became the company to attack simply because everyone else was attacking it, and the public hacktivism by LulzSec.
None of this is new. None of this is unprecedented. To a security professional, most of it isn't even interesting. And while national intelligence organizations and some criminal groups are organized, hacker groups like Anonymous and LulzSec are much more informal. Despite the impression we get from movies, there is no organization. There's no membership, there are no dues, there is no initiation. It's just a bunch of guys. You too can join Anonymous -- just hack something, and claim you're a member. That's probably what the members of Anonymous arrested in Turkey were: 32 people who just decided to use that name.
It's not that things are getting worse; it's that things were always this bad. To a lot of security professionals, the value of some of these groups is to graphically illustrate what we've been saying for years: organizations need to beef up their security against a wide variety of threats. But the recent news epidemic also illustrates how safe the Internet is. Because news articles are the only contact most of us have had with any of these attacks.

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2011 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.