Crypto-Gram

October 15, 2015

by Bruce Schneier
CTO, Resilient Systems, Inc.
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2015/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      Volkswagen and Cheating Software
      Living in a Code Yellow World
      Obama Administration Not Pursuing a Backdoor to Commercial Encryption
      News
      Stealing Fingerprints
      Automatic Face Recognition and Surveillance
      Schneier News
      Resilient Systems News
      Bringing Frozen Liquids through Airport Security
      SHA-1 Freestart Collision

Volkswagen and Cheating Software

For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars’ computers were able to detect when they were being tested, and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren’t being tested, they belched out up to 40 times the legal limit of nitrogen oxides. Volkswagen’s CEO has resigned, and the company will face an expensive recall, enormous fines, and worse.

Cheating on regulatory testing has a long history in corporate America. It happens regularly in automobile emissions control and elsewhere. What’s important in the VW case is that the cheating was preprogrammed into the algorithm that controlled the cars’ emissions.

Computers allow people to cheat in ways that are new. Because the cheating is encapsulated in software, the malicious actions can happen at a far remove from the testing itself. Because the software is “smart” in ways that normal objects are not, the cheating can be subtler and harder to detect.
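
To make that concrete, here is a deliberately simplified, hypothetical sketch (in Python, for readability) of what test-detection logic can look like. This is not Volkswagen's actual code; the sensor names and thresholds are invented, but the structure, inferring from sensor readings that a test is underway and then switching calibrations, is the general idea.

    # Hypothetical sketch of a "defeat device" -- not Volkswagen's actual
    # code.  The sensor names and thresholds are invented for illustration.

    def looks_like_emissions_test(speed_kph, steering_angle_deg, elapsed_s):
        """Guess whether the car is on a dynamometer running a test cycle.

        On a dyno the drive wheels spin but the steering wheel barely
        moves, and the test follows a short, predictable speed profile.
        """
        wheels_turning = speed_kph > 0
        steering_fixed = abs(steering_angle_deg) < 1.0
        short_drive = elapsed_s < 1800   # standard cycles run roughly 20-30 minutes
        return wheels_turning and steering_fixed and short_drive

    def select_calibration(speed_kph, steering_angle_deg, elapsed_s):
        # "clean": full exhaust treatment, lower performance and fuel economy.
        # "road": reduced treatment, far more NOx emitted.
        if looks_like_emissions_test(speed_kph, steering_angle_deg, elapsed_s):
            return "clean"
        return "road"

To a black-box emissions test, this is indistinguishable from honest engine-control code, which is why the cheating could survive years of routine testing.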

We’ve already had examples of smartphone manufacturers cheating on processor benchmark testing: the phones detect when they’re being tested and artificially increase their performance. We’re going to see this in other industries.

The Internet of Things is coming. Many industries are moving to add computers to their devices, and that will bring with it new opportunities for manufacturers to cheat. Light bulbs could fool regulators into appearing more energy efficient than they are. Temperature sensors could fool buyers into believing that food has been stored at safer temperatures than it has been. Voting machines could appear to work perfectly—except during the first Tuesday of November, when they undetectably switch a few percent of votes from one party’s candidates to another’s.

My worry is that some corporate executives won’t interpret the VW story as a cautionary tale involving just punishments for a bad mistake but will see it instead as a demonstration that you can get away with something like that for six years.

And they’ll cheat smarter. For all of VW’s brazenness, its cheating was obvious once people knew to look for it. Far cleverer would be to make the cheating look like an accident. Overall software quality is so bad that products ship with thousands of programming mistakes. Most of them don’t affect normal operations, which is why your software generally works just fine. Some of them do, which is why your software occasionally fails, and needs constant updates. By making cheating software appear to be a programming mistake, the cheating looks like an accident. And, unfortunately, this type of deniable cheating is easier than people think.
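
To see how deniable such cheating could be, consider a hypothetical one-line "mistake" in a reporting routine. Everything below, including the function and the numbers, is invented for illustration; the point is that deliberate under-reporting can look exactly like an innocent clamping bug.

    # Hypothetical illustration of deniable cheating; the values and the
    # function are invented.  A reviewer could plausibly read this as an
    # ordinary defensive-programming bug rather than deliberate fraud.

    LEGAL_NOX_LIMIT = 80.0  # mg/km, illustrative only

    def report_average_nox(samples):
        # Looks like "sanity clamping" of noisy sensor readings, but it
        # silently erases every excursion above the legal limit before
        # the average is computed.
        clamped = [min(s, LEGAL_NOX_LIMIT) for s in samples]
        return sum(clamped) / len(clamped)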

Computer-security experts believe that intelligence agencies have been doing this sort of thing for years, both with the consent of the software developers and surreptitiously.

This problem won’t be solved through computer security as we normally think of it. Conventional computer security is designed to prevent outside hackers from breaking into your computers and networks. The car analogue would be security software that prevented an owner from tweaking his own engine to run faster but in the process emit more pollutants. What we need to contend with is a very different threat: malfeasance programmed in at the design stage.

We already know how to protect ourselves against corporate misbehavior. Ronald Reagan once said “trust, but verify” when speaking about the Soviet Union cheating on nuclear treaties. We need to be able to verify the software that controls our lives.

Software verification has two parts: transparency and oversight. Transparency means making the source code available for analysis. The need for this is obvious; it’s much easier to hide cheating software if a manufacturer can hide the code.

But transparency doesn’t magically reduce cheating or improve software quality, as anyone who uses open-source software knows. It’s only the first step. The code must be analyzed. And because software is so complicated, that analysis can’t be limited to a once-every-few-years government test. We need private analysis as well.

It was researchers at private labs in the United States and Germany who eventually outed Volkswagen. So transparency can’t just mean making the code available to government regulators and their representatives; it needs to mean making the code available to everyone.

Both transparency and oversight are being threatened in the software world. Companies routinely fight making their code public and attempt to muzzle security researchers who find problems, citing the proprietary nature of the software. It’s a fair complaint, but the public interests of accuracy and safety need to trump business interests.

Proprietary software is increasingly being used in critical applications: voting machines, medical devices, breathalyzers, electric power distribution, systems that decide whether or not someone can board an airplane. We’re ceding more control of our lives to software and algorithms. Transparency is the only way to verify that they’re not cheating us.

There’s no shortage of corporate executives willing to lie and cheat their way to profits. We saw another example of this last week: Stewart Parnell, the former CEO of the now-defunct Peanut Corporation of America, was sentenced to 28 years in prison for knowingly shipping out salmonella-tainted products. That may seem excessive, but nine people died and many more fell ill as a result of his cheating.

Software will only make malfeasance like this easier to commit and harder to prove. Fewer people need to know about the conspiracy. It can be done in advance, nowhere near the testing time or site. And, if the software remains undetected for long enough, it could easily be the case that no one in the company remembers that it’s there.

We need better verification of the software that controls our lives, and that means more—and more public—transparency.

This essay previously appeared on CNN.com.
http://www.cnn.com/2015/09/28/opinions/…

Portuguese translation by Ricardo R Hashimoto:
http://www.midiasemmascara.org/mediawatch/…

News:
http://money.cnn.com/2015/09/23/news/companies/…

Emissions test cheating:
http://www.bloomberg.com/news/articles/2015-09-23/…
http://arstechnica.com/cars/2015/10/…

Samsung cheating:
http://www.theguardian.com/technology/2013/oct/13/…

Bugs in code:
http://www.mayerdan.com/ruby/2012/11/11/…

Deniable cheating:
https://www.schneier.com/essays/archives/2013/10/…

How Volkswagen got caught:
http://www.iflscience.com/technology/…

Threatening transparency and oversight:
https://medium.com/climate-desk/…

Parnell sentencing:
http://www.cnn.com/2015/09/21/us/…

Other essays on this:
http://www.nytimes.com/2015/09/24/opinion/…
http://www.slate.com/articles/technology/…
http://fusion.net/story/202867/…


Living in a Code Yellow World

In the 1980s, handgun expert Jeff Cooper invented something called the Color Code to describe what he called the “combat mind-set.” Here is his summary:

In White you are unprepared and unready to take lethal action. If you are attacked in White you will probably die unless your adversary is totally inept.

In Yellow you bring yourself to the understanding that your life may be in danger and that you may have to do something about it.

In Orange you have determined upon a specific adversary and are prepared to take action which may result in his death, but you are not in a lethal mode.

In Red you are in a lethal mode and will shoot if circumstances warrant.

Cooper talked about remaining in Code Yellow over time, but he didn’t write about its psychological toll. It’s significant. Our brains can’t be on that alert level constantly. We need downtime. We need to relax. This is why we have friends around whom we can let our guard down and homes where we can close our doors to outsiders. We only want to visit Yellowland occasionally.

Since 9/11, the US has increasingly become Yellowland, a place where we assume danger is imminent. It’s damaging to us individually and as a society.

I don’t mean to minimize actual danger. Some people really do live in a Code Yellow world, due to the failures of government in their home countries. Even there, we know how hard it is for them to maintain a constant level of alertness in the face of constant danger. Psychologist Abraham Maslow wrote about this, making safety a basic level in his hierarchy of needs. A lack of safety makes people anxious and tense, and the long-term effects are debilitating.

The same effects occur when we believe we’re living in an unsafe situation even if we’re not. The psychological term for this is hypervigilance. Hypervigilance in the face of imagined danger causes stress and anxiety. This, in turn, alters how your hippocampus functions, and causes an excess of cortisol in your body. Now cortisol is great in small and infrequent doses, and helps you run away from tigers. But it destroys your brain and body if you marinate in it for extended periods of time.

Not only does trying to live in Yellowland harm you physically, it changes how you interact with your environment and it impairs your judgment. You forget what’s normal and start seeing the enemy everywhere. Terrorism actually relies on this kind of reaction to succeed.

Here’s an example from The Washington Post last year: “I was taking pictures of my daughters. A stranger thought I was exploiting them.” A father wrote about his run-in with an off-duty DHS agent, who interpreted an innocent family photoshoot as something nefarious and proceeded to harass and lecture the family. That the parents were white and the daughters Asian added a racist element to the encounter.

At the time, people wrote about this as an example of worst-case thinking, saying that as a DHS agent, “he’s paid to suspect the worst at all times and butt in.” While, yes, it was a “disturbing reminder of how the mantra of ‘see something, say something’ has muddied the waters of what constitutes suspicious activity,” I think there’s a deeper story here. The agent is trying to live his life in Yellowland, and it caused him to see predators where there weren’t any.

I call these “movie-plot threats,” scenarios that would make great action movies but that are implausible in real life. Yellowland is filled with them.

Last December, former DHS secretary Tom Ridge wrote about the security risks of building an NFL stadium near the Los Angeles airport. His report is full of movie-plot threats, including terrorists shooting down a plane and crashing it into a stadium. His conclusion, that it is simply too dangerous to build a sports stadium within a few miles of the airport, is absurd. He’s been living too long in Yellowland.

That our brains aren’t built to live in Yellowland makes sense, because actual attacks are rare. The person walking towards you on the street isn’t an attacker. The person doing something unexpected over there isn’t a terrorist. Crashing an airplane into a sports stadium is more suitable to a Die Hard movie than real life. And the white man taking pictures of two Asian teenagers on a ferry isn’t a sex slaver. (I mean, really?)

Most of us, that DHS agent included, are complete amateurs at knowing the difference between something benign and something that’s actually dangerous. Combine this with the rarity of attacks, and you end up with an overwhelming number of false alarms. This is the ultimate problem with programs like “see something, say something.” They waste an enormous amount of time and money.

Those of us fortunate enough to live in a Code White society are much better served acting like we do. This is something we need to learn at all levels, from our personal interactions to our national policy. Since the terrorist attacks of 9/11, many of our counterterrorism policies have helped convince people they’re not safe, and that they need to be in a constant state of readiness. We need our leaders to lead us out of Yellowland, not to perpetuate it.

This essay previously appeared on Fusion.net.
http://fusion.net/story/200747/living-in-code-yellow/

Cooper’s Color Code:
https://en.wikipedia.org/wiki/…

Maslow’s hierarchy of needs:
https://en.wikipedia.org/wiki/…

Hypervigilance and its effects:
http://www.seattlepi.com/news/article/…
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3229257/
http://www.sciencedirect.com/science/article/pii/…
http://www.ncbi.nlm.nih.gov/pubmed/21621333
http://www.sciencedirect.com/science/article/pii/…
https://news.stanford.edu/news/2007/march7/…
http://emotionalsurvival.com/hypervigilance.htm

How terrorism relies on a hypervigilant reaction:
http://www.theguardian.com/books/2015/jan/31/…

Washington Post article:
https://www.washingtonpost.com/opinions/…
https://www.washingtonpost.com/opinions/…
http://groupthink.kinja.com/…
https://reason.com//2014/09/01/…
http://www.infowars.com/…

Movie-plot threats:
https://www.schneier.com/essays/archives/2005/09/…

Tom Ridge’s security assessment:
http://www.latimes.com/sports/nfl/…
http://i.usatoday.net/sports/nfl/ridgereport.pdf

“The War on the Unexpected”:
https://www.schneier.com/blog/archives/2007/11/…

Why “See Something, Say Something” Fails:
http://nymag.com/news/intelligencer/…

“Refuse to be Terrorized”:
https://www.schneier.com/essays/archives/2006/08/…


Obama Administration Not Pursuing a Backdoor to Commercial Encryption

The Obama Administration is not pursuing a law that would force computer and communications manufacturers to add backdoors to their products for law enforcement. Sensibly, they concluded that criminals, terrorists, and foreign spies would use that backdoor as well.

Score one for the pro-security side in the Second Crypto War.

It’s certainly not over. The FBI has been pushing for an encryption backdoor (or other backdoor access to plaintext) since the early 1990s, and it’s not going to give up now. I expect there will be more pressure on companies, both overt and covert, more insinuations that strong security is somehow responsible for crime and terrorism, and more behind-closed-doors negotiations.

http://www.nytimes.com/2015/10/11/us/politics/…
https://www.washingtonpost.com/world/…

Our “Keys Under Doormats” paper:
https://www.schneier.com/paper-keys-under-doormats.html

Second Crypto War:
http://www.dailydot.com/politics/…
http://www.huffingtonpost.com/matthew-prince/…
https://www.schneier.com/blog/archives/2015/08/…
http://www.tandfonline.com/doi/pdf/10.1080/…


News

A Texas 9th-grader makes an electronic clock and brings it to school. Teachers immediately become stupid and call the police. The student’s name is Ahmed Mohamed, which certainly didn’t help.
http://www.dallasnews.com/news/community-news/…
http://www.nytimes.com/2015/09/17/us/…
https://theintercept.com/2015/09/16/…
I am reminded of the 2007 story of an MIT student getting arrested for bringing a piece of wearable electronic art to the airport. And I wrote about the “war on the unexpected” back in 2007, too.
https://www.schneier.com/blog/archives/2007/09/…
https://www.schneier.com/blog/archives/2007/11/…
We simply have to stop terrorizing ourselves. We just look stupid when we do it.
https://popehat.com/2015/09/16/…
http://s.artvoice.com/techvoice/2015/09/17/…

There’s more to the Mohamed story. He’s been invited to the White House, Google, MIT, and Facebook, and offered internships by Reddit and Twitter. On the other hand, Sarah Palin doesn’t believe it was just a clock. And he’s changing schools.
http://www.cnn.com/2015/09/16/politics/…
http://www.cnn.com/2015/09/16/us/…
http://www.washingtonpost.com/news/morning-mix/wp/…
http://www.cnn.com/2015/09/17/us/…

A rural New Hampshire library decided to install Tor on its computers and allow anonymous Internet browsing. The Department of Homeland Security pressured them to stop.
https://www.propublica.org/article/…
The good news is that the library is resisting the pressure and keeping Tor running.
http://www.vnews.com/home/18620952-95/…
http://www.theregister.co.uk/2015/09/17/…

This is an important issue for reasons that go beyond the New Hampshire library. The goal of the Library Freedom Project is to set up Tor exit nodes at libraries. Exit nodes help every Tor user in the world; the more of them there are, the harder it is to subvert the system. The Kilton Public Library isn’t just allowing its patrons to browse the Internet anonymously; it is helping dissidents around the world stay alive.
https://libraryfreedomproject.org/torexitpilotphase1/
Librarians have been protecting our privacy for decades, and I’m happy to see that tradition continue.
http://www.theguardian.com/world/2015/jun/05/…
https://www.washingtonpost.com/news/the-switch/wp/…
http://www.thenation.com/article/librarians-versus-nsa/

A UK student reading a book on terrorism is accused of being a terrorist. He was reading the book for a class he was taking. I’ll let you guess his ethnicity.
http://www.theguardian.com/education/2015/sep/24/…

A self-destructing computer chip is built on glass.
http://www.extremetech.com/extreme/…

Okay, this is weird. FireEye has gone to court to prevent ERNW from disclosing vulnerabilities in FireEye products. FireEye should know better.
http://www.wired.com/2015/09/…
http://www.cio.com/article/2983141/…
http://www.pcworld.com/article/2983144/…
https://www.insinuator.net/2015/09/…
http://www.forbes.com/sites/thomasbrewster/2015/09/…
Here’s FireEye’s statement:
https://www.fireeye.com/content/dam/fireeye-www/…

Here’s a watch that monitors the movements of your hand and can guess what you’re typing.
http://news.softpedia.com/news/…

Drone speedboat.
http://www.defensemedianetwork.com/stories/…

A history of hacking by Dorothy Denning.
http://journal.georgetown.edu/the-rise-of-hacktivism/

FireEye is reporting the discovery of persistent malware that compromises Cisco routers:
https://www.fireeye.com//executive-perspective/…
https://www.fireeye.com//threat-research/2015/…
I don’t know if the attack is related to this attack against Cisco routers discovered in August.
https://www.schneier.com/blog/archives/2015/08/…
http://www.wired.com/2013/09/nsa-router-hacking/
As I wrote then, this is very much the sort of attack you’d expect from a government eavesdropping agency. We know, for example, that the NSA likes to attack routers. If I had to guess, I would guess that this is an NSA exploit. (Note the lack of Five Eyes countries in the target list.)
https://www.techdirt.com/articles/20131230/…

This is the story of a reporter who set up a fake business and then bought Facebook fans, Twitter followers, and online reviews. It was surprisingly easy and cheap.
http://fusion.net/story/191773/…

Fascinating story about a man who figured out how to hack the game show “Press Your Luck” in 1984.
http://priceonomics.com/the-man-who-got-no-whammies/

People who need to pee are better at lying.
http://www.sciencedirect.com/science/article/pii/…
https://www.newscientist.com/article/…

You can wrap your house in tinfoil, but when you start shining bright lights to defend yourself against alien attack, you’ve gone too far.
http://www.upi.com/Odd_News/2015/09/11/…
In general, society puts limits on what types of security you are allowed to use, especially when that use can affect others. You can’t place landmines on your lawn or shoot down drones hovering over your property.
https://answers.yahoo.com/question/index?…

Fortune has a three-part article on the Sony attack by North Korea. There’s not a lot of tech here; it’s mostly about Sony’s internal politics regarding the movie and IT security before the attack, and some about their reaction afterwards.
http://fortune.com/sony-hack-part-1/
http://fortune.com/sony-hack-part-two/
http://fortune.com/sony-hack-final-part/
Despite what I wrote at the time, I now believe that North Korea was responsible for the attack. This is the article that convinced me. It’s about the US government’s reaction to the attack.
http://www.nytimes.com/2015/01/19/world/asia/…

The Intercept has a new story from the Snowden documents about GCHQ’s surveillance of the Internet, along with 28 new NSA and GCHQ documents:
https://theintercept.com/2015/09/25/…

The website Unfitbits.com has a series of instructional videos on how to spoof fitness trackers, using such things as a metronome, pendulum, or power drill. With insurance companies like John Hancock offering discounts to people who allow them to verify their exercise program by opening up their fitness-tracker data, these are useful hacks.
http://www.unfitbits.com/index.html
http://www.nytimes.com/2015/04/08/your-money/…
http://www.theatlantic.com/health/archive/2015/09/…

During the Cold War, the KGB was very adept at identifying undercover CIA officers in foreign countries through what was basically big data analysis. (Yes, this is a needlessly dense and very hard-to-read article. I think it’s worth slogging through, though.)
http://www.salon.com/2015/09/26/…

AI theorist Eliezer Yudkowsky coined Moore’s Law of Mad Science: “Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.”
http://www.azquotes.com/quote/819025
Oh, how I wish I said that.

Good discussion of the issues involving using autonomous vehicles as bombs. Now we need to think about solutions.
http://www.start.umd.edu/news/…

The European Court of Justice ruled that transferring Europeans’ personal data to the US violates their right to privacy, striking down the Safe Harbor agreement. This is a big deal, because it directly affects all the large American Internet companies. If this stands, expect much more pressure on the NSA to stop its indiscriminate spying on everyone.
http://www.nytimes.com/2015/10/07/technology/…
http://www.scribd.com/doc/283806411/…
http://www.politico.eu/wp-content/uploads/2015/10/…
https://www.lawfareblog.com/safe-harbor-framework-dead
https://theintercept.com/2015/10/06/…
https://www.eff.org/deeplinks/2015/10/…
http://www.washingtonpost.com/s/monkey-cage/wp/…
http://www.washingtonpost.com/s/monkey-cage/wp/…
http://www.europe-v-facebook.org/CJEU_IR.pdf
http://www.mcgarrsolicitors.ie/2015/10/06/…
https://www.lawfareblog.com/…
https://www.lawfareblog.com/…
http://abovethelaw.com/2015/10/…
http://www.theregister.co.uk/2015/10/08/…

There’s a lot of information hidden in the bar code of your airplane boarding pass, including the ability to get even more information.
https://krebsonsecurity.com/2015/10/…

Details on the patents issued to the NSA.
https://medium.com/silk-stories/…

In the 1980s, the Soviet Union bugged the IBM Selectric typewriters in the US Embassy in Moscow. This NSA document discusses how the US discovered the bugs and what we did about it. Codename is GUNMAN.
https://www.nsa.gov/about/_files/…
Is this the world’s first keylogger? Maybe.

Jamming Wi-Fi is both easy and cheap.
http://www.net-security.org/secworld.php?id=18971
http://tech.slashdot.org/story/15/10/13/1552238/…

Good op-ed on cyber arms-control treaties:
https://www.washingtonpost.com/opinions/…

Turns out that DNA evidence is fallible:
http://www.wbur.org/npr/447202433/…
http://www.wired.com/2015/10/…


Stealing Fingerprints

The news from the Office of Personnel Management hack keeps getting worse. In addition to the personal records of over 20 million US government employees, we’ve now learned that the hackers stole fingerprint files for 5.6 million of them.

This is fundamentally different from the data thefts we regularly read about in the news, and should give us pause before we entrust our biometric data to large networked databases.

There are three basic kinds of data that can be stolen. The first, and most common, is authentication credentials. These are passwords and other information that allows someone else access to our accounts and—usually—our money. An example would be the 56 million credit card numbers hackers stole from Home Depot in 2014, or the 21.5 million Social Security numbers hackers stole in the OPM breach. The motivation is typically financial. The hackers want to steal money from our bank accounts, process fraudulent credit card charges in our name, open new lines of credit, or apply for fraudulent tax refunds.

It’s a huge illegal business, but we know how to deal with it when it happens. We detect these hacks as quickly as possible, and update our account credentials as soon as we detect an attack. (We also need to stop treating Social Security numbers as if they were secret.)

The second kind of data stolen is personal information. Examples would be the medical data stolen and exposed when Sony was hacked in 2014, or the very personal data from the infidelity website Ashley Madison stolen and published this year. In these instances, there is no real way to recover after a breach. Once the data is public, or in the hands of an adversary, it’s impossible to make it private again.

This is the main consequence of the OPM data breach. Whoever stole the data—we suspect it was the Chinese—got copies of the security-clearance paperwork of all those government employees. This documentation includes the answers to some very personal and embarrassing questions, and now opens these employees up to blackmail and other types of coercion.

Fingerprints are another type of data entirely. They’re used to identify people at crime scenes, but increasingly they’re used as an authentication credential. If you have an iPhone, for example, you probably use your fingerprint to unlock your phone. This type of authentication is increasingly common, replacing a password—something you know—with a biometric: something you are. The problem with biometrics is that they can’t be replaced. So while it’s easy to update your password or get a new credit card number, you can’t get a new finger.

And now, for the rest of their lives, 5.6 million US government employees need to remember that someone, somewhere, has their fingerprints. And we really don’t know the future value of this data. If, in twenty years, we routinely use our fingerprints at ATMs, that fingerprint database will become very profitable to criminals. If fingerprints start being used on our computers to authorize our access to files and data, that database will become very profitable to spies.

Of course, it’s not that simple. Fingerprint readers employ various technologies to prevent being fooled by fake fingers: detecting temperature, pores, a heartbeat, and so on. But this is an arms race between attackers and defenders, and there are many ways to fool fingerprint readers. When Apple introduced its iPhone fingerprint reader, hackers figured out how to fool it within days, and have continued to fool each new generation of phone readers equally quickly.

Not every use of biometrics requires the biometric data to be stored in a central server somewhere. Apple’s system, for example, only stores the data locally: on your phone. That way there’s no central repository to be hacked. And many systems don’t store the biometric data at all, only a mathematical function of the data that can be used for authentication but can’t be used to reconstruct the actual biometric. Unfortunately, OPM stored copies of actual fingerprints.
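
As a rough sketch of what "a mathematical function of the data" can mean, here is a toy version of the idea behind cancelable biometric templates: store only a lossy, non-invertible transform of a feature vector and compare fresh readings in the transformed space. Real systems are far more sophisticated; the dimensions, threshold, and feature vectors below are invented for illustration.

    # Toy sketch of storing a non-invertible transform of biometric data
    # instead of the data itself.  Real deployments (secure enclaves,
    # cancelable-biometric schemes) are far more sophisticated; every
    # number here is invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)

    # A secret projection matrix maps a 256-dimensional fingerprint
    # feature vector down to 32 dimensions.  Because most of the
    # information is discarded, the stored template cannot be inverted
    # back into the original features.
    PROJECTION = rng.standard_normal((32, 256))

    def enroll(features):
        """Store only the projected template, never the raw features."""
        return PROJECTION @ features

    def authenticate(stored_template, fresh_features, threshold=4.0):
        """Project the fresh reading the same way and compare distances."""
        fresh_template = PROJECTION @ fresh_features
        return np.linalg.norm(stored_template - fresh_template) < threshold

    finger = rng.standard_normal(256)   # stand-in for extracted fingerprint features
    template = enroll(finger)
    print(authenticate(template, finger + 0.01 * rng.standard_normal(256)))  # expect True
    print(authenticate(template, rng.standard_normal(256)))                  # expect False

A useful property of this kind of scheme is that if the template database leaks, users can re-enroll under a new projection and the stolen templates become worthless, which is exactly the property raw fingerprint images lack.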

Ashley Madison has taught us all the dangers of entrusting our intimate secrets to a company’s computers and networks, because once that data is out there’s no getting it back. All biometric data, whether it be fingerprints, retinal scans, voiceprints, or something else, has that same property. We should be skeptical of any attempts to store this data en masse, whether by governments or by corporations. We need our biometrics for authentication, and we can’t afford to lose them to hackers.

This essay previously appeared on Motherboard.
http://motherboard.vice.com/read/stealing-fingerprints

Office of Personnel Management hack:
http://motherboard.vice.com/read/…
http://www.npr.org/sections/thetwo-way/2015/07/09/…

OPM fingerprint files stolen:
https://www.washingtonpost.com/news/the-switch/wp/…

China suspected in OPM hack:
http://www.wsj.com/articles/…

Home Depot hack:
http://money.cnn.com/2014/09/18/technology/security/…

Fraud with Social Security Numbers:
http://.credit.com/2015/02/…
http://money.cnn.com/2014/11/11/technology/security/…

Sony hack:
http://www.usatoday.com/story/news/nation-now/2014/…

Ashley Madison hack:
http://fortune.com/2015/08/26/ashley-madison-hack/

Security problem with biometrics:
https://www.schneier.com/blog/archives/2009/01/…

Potential value of fingerprints in the future:
http://www.zdnet.com/article/…

Beating fingerprint readers:
http://www.networkworld.com/article/2293129/…

Beating the iPhone fingerprint reader:
http://www.darkreading.com/…
http://www.cnet.com/news/…


Automatic Face Recognition and Surveillance

ID checks were a common response to the terrorist attacks of 9/11, but they’ll soon be obsolete. You won’t have to show your ID, because you’ll be identified automatically. A security camera will capture your face, and it’ll be matched with your name and a whole lot of other information besides. Welcome to the world of automatic facial recognition. Those who have access to databases of identified photos will have the power to identify us. Yes, it’ll enable some amazing personalized services; but it’ll also enable whole new levels of surveillance. The underlying technologies are being developed today, and there are currently no rules limiting their use.

Walk into a store, and the salesclerks will know your name. The store’s cameras and computers will have figured out your identity, and looked you up in both their store database and a commercial marketing database they’ve subscribed to. They’ll know your name, salary, interests, what sort of sales pitches you’re most vulnerable to, and how profitable a customer you are. Maybe they’ll have read a profile based on your tweets and know what sort of mood you’re in. Maybe they’ll know your political affiliation or sexual identity, both predictable by your social media activity. And they’re going to engage with you accordingly, perhaps by making sure you’re well taken care of or possibly by trying to make you so uncomfortable that you’ll leave.

Walk by a police officer, and she will know your name, address, criminal record, and with whom you routinely are seen. The potential for discrimination is enormous, especially in low-income communities where people are routinely harassed for things like unpaid parking tickets and other minor violations. And in a country where people are arrested for their political views, the use of this technology quickly turns into a nightmare scenario.

The critical technology here is computer face recognition. Traditionally it has been pretty poor, but it’s slowly improving. A computer is now as good as a person. Already Google’s algorithms can accurately match child and adult photos of the same person, and Facebook has an algorithm that works by recognizing hair style, body shape, and body language—and works even when it can’t see faces. And while we humans are pretty much as good at this as we’re ever going to get, computers will continue to improve. Over the next few years, they’ll continue to get more accurate, making better matches using even worse photos.

Matching photos with names also requires a database of identified photos, and we have plenty of those too. Driver’s license databases are a gold mine: all shot face forward, in good focus and even light, with accurate identity information attached to each photo. The enormous photo collections of social media and photo archiving sites are another. They contain photos of us from all sorts of angles and in all sorts of lighting conditions, and we helpfully do the identifying step for the companies by tagging ourselves and our friends. Maybe this data will appear on handheld screens. Maybe it’ll be automatically displayed on computer-enhanced glasses. Imagine salesclerks—or politicians—being able to scan a room and instantly see wealthy customers highlighted in green, or policemen seeing people with criminal records highlighted in red.

Science fiction writers have been exploring this future in both books and movies for decades. Ads followed people from billboard to billboard in the movie Minority Report. In John Scalzi’s recent novel Lock In, characters scan each other like the salesclerks I described above.

This is no longer fiction. High-tech billboards can target ads based on the gender of who’s standing in front of them. In 2011, researchers at Carnegie Mellon pointed a camera at a public area on campus and were able to match live video footage with a public database of tagged photos in real time. Already government and commercial authorities have set up facial recognition systems to identify and monitor people at sporting events, music festivals, and even churches. The Dubai police are working on integrating facial recognition into Google Glass, and more US local police forces are using the technology.

Facebook, Google, Twitter, and other companies with large databases of tagged photos know how valuable their archives are. They see all kinds of services powered by their technologies—services they can sell to businesses like the stores you walk into and the governments you might interact with.

Other companies will spring up whose business models depend on capturing our images in public and selling them to whoever has use for them. If you think this is farfetched, consider a related technology that’s already far down that path: license-plate capture.

Today in the US there’s a massive but invisible industry that records the movements of cars around the country. Cameras mounted on cars and tow trucks capture license plates along with date/time/location information, and companies use that data to find cars that are scheduled for repossession. One company, Vigilant Solutions, claims to collect 70 million scans in the US every month. The companies that engage in this business routinely share that data with the police, giving the police a steady stream of surveillance information on innocent people that they could not legally collect on their own. And the companies are already looking for other profit streams, selling that surveillance data to anyone else who thinks they have a need for it.

This could easily happen with face recognition. Finding bail jumpers could even be the initial driving force, just as finding cars to repossess was for license plate capture.

Already the FBI has a database of 52 million faces, and describes its integration of facial recognition software with that database as “fully operational.” In 2014, FBI Director James Comey told Congress that the database would not include photos of ordinary citizens, although the FBI’s own documents indicate otherwise. And just last month, we learned that the FBI is looking to buy a system that will collect facial images of anyone an officer stops on the street.

In 2013, Facebook had a quarter of a trillion user photos in its database. There’s currently a class-action lawsuit in Illinois alleging that the company has over a billion “face templates” of people, collected without their knowledge or consent.

Last year, the US Department of Commerce tried to prevail upon industry representatives and privacy organizations to write a voluntary code of conduct for companies using facial recognition technologies. After 16 months of negotiations, all of the consumer-focused privacy organizations pulled out of the process because industry representatives were unable to agree on any limitations on something as basic as nonconsensual facial recognition.

When we talk about surveillance, we tend to concentrate on the problems of data collection: CCTV cameras, tagged photos, purchasing habits, our writings on sites like Facebook and Twitter. We think much less about data analysis. But effective and pervasive surveillance is just as much about analysis. It’s sustained by a combination of cheap and ubiquitous cameras, tagged photo databases, commercial databases of our actions that reveal our habits and personalities, and—most of all—fast and accurate face recognition software.

Don’t expect to have access to this technology for yourself anytime soon. This is not facial recognition for all. It’s just for those who can either demand or pay for access to the required technologies—most importantly, the tagged photo databases. And while we can easily imagine how this might be misused in a totalitarian country, there are dangers in free societies as well. Without meaningful regulation, we’re moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.

Despite protests from industry, we need to regulate this budding industry. We need limitations on how our images can be collected without our knowledge or consent, and on how they can be used. The technologies aren’t going away, and we can’t uninvent these capabilities. But we can ensure that they’re used ethically and responsibly, and not just as a mechanism to increase police and corporate power over us.

This essay previously appeared on Forbes.com.
http://www.forbes.com/sites/bruceschneier/2015/09/…

License-plate scanners:
http://betaboston.com/news/2014/03/05/…
http://betaboston.com/news/2014/03/05/…
http://cironline.org/reports/…
http://arstechnica.com/tech-policy/2012/09/…
https://www.aclu.org/files/assets/…

FBI’s photo database and facial recognition system:
http://www.fbi.gov/news/pressrel/press-releases/…
http://www.nationaljournal.com/tech/2014/06/24/…
https://www.eff.org/deeplinks/2014/04/…
https://www.eff.org/deeplinks/2015/09/…

Facebook’s photo database and facial recognition system:
http://www.businessinsider.com/…
http://www.theregister.co.uk/2015/09/07/…

Privacy organizations pull out of NTIA process:
https://www.eff.org/deeplinks/2015/06/…
http://www.slate.com/articles/technology/…

Two articles that say much the same thing:
http://www.telegraph.co.uk/technology/11693965/…
http://www.theguardian.com/commentisfree/2015/jun/…


Schneier News

I’m speaking at The Second Annual Cato Surveillance Conference in Washington, DC, on October 21, 2015.
http://www.cato.org/events/…

I’m speaking at the Privacy + Security Forum, October 21-23, 2015 at The Marvin Center in Washington, DC.
http://www.privacyandsecurityforum.com/schedule/

I’m speaking at the Boston Book Festival on October 24, 2015.
http://www.bostonbookfest.org/

I’m speaking at CyberSEED 2015, October 29-30 at the University of Connecticut’s Storrs Campus.
http://www.csi.uconn.edu/cyberseed/

I’m speaking at the 4th Annual Cloud Security Congress EMEA in Berlin on November 17, 2015.
https://cloudsecurityalliance.org/events/…

I’m speaking at NASA’s Goddard Space Flight Center as part of their Information Science and Technology Colloquium Series, on December 16, 2015.
https://istcolloq.gsfc.nasa.gov/Fall2015/speaker/…

I was interviewed by the Logikcull blog:
http://logikcull.com//…

I gave a keynote at Free and Safe in Cyberspace 2015:
http://www.free-and-safe.org/program/
https://www.youtube.com/watch?…

I participated in a panel at Free and Safe in Cyberspace 2015, with Bart Preneel, Richard Stallman, Andreas Wild, Jovan Golic, Bjoern Rupp, Michael Sieber, Melle Van den Berg, Pierre Chastanet, and moderator Rufo Guerreschi.
http://www.free-and-safe.org/program/
https://www.youtube.com/watch?…

I was a guest on “Adam Ruins Everything.” The episode is about security theater. I am a disembodied head on a scooter.
https://www.youtube.com/watch?v=-LDzOi1dyAA
https://www.amazon.com/gp/product/B0163KQ5WM/…
The scooter idea was a hack when I couldn’t find the time to fly to LA for live filming. The whole thing was a lot of fun.

I was interviewed by BetaBoston on the subject of data privacy:
http://www.betaboston.com/news/2015/10/07/…


Resilient Systems News

Former Raytheon CEO Bill Swanson has joined our board of directors.

For those who don’t know, Resilient Systems is my company. I’m the CTO, and we sell an incident-response management platform that…well…helps IR teams to manage incidents. It’s a single hub that allows a team to collect data about an incident, assign and manage tasks, automate actions, integrate intelligence information, and so on. It’s designed to be powerful, flexible, and intuitive—if your HR or legal person needs to get involved, she has to be able to use it without any training. I’m really impressed with how well it works. Incident response is all about people, and the platform makes teams more effective. This is probably the best description of what we do.

We have lots of large- and medium-sized companies as customers. They’re all happy, and we continue to sell this thing at an impressive rate. Our Q3 numbers were fantastic. It’s kind of scary, really.

http://fortune.com/2015/09/28/…

Resilient Systems:
https://www.resilientsystems.com/
https://www.resilientsystems.com/product/…


Bringing Frozen Liquids through Airport Security

Gizmodo reports that UK airport security confiscates frozen liquids:

“He told me that it wasn’t allowed so I asked under what grounds, given it is not a liquid. When he said I couldn’t take it I asked if he knew that for sure or just assumed. He grabbed his supervisor and the supervisor told me that ‘the government does not classify that as a solid’. I decided to leave it at that point. I expect they’re probably wrong to take it from me. They’d probably not seen it before, didn’t know the rules, and being a bit of an eccentric request, decided to act on the side of caution. They didn’t spend the time to look it up.”

As it happens, I had a comparable experience recently. Last week, I tried to bring a small cooler containing, among other things, a bag of ice through airport security. I expected to have to dump the ice at the security checkpoint and refill it inside the airport, but the TSA official looked at it and let it through. Turns out that frozen liquids are fine. I confirmed this with TSA officials at two other airports this week.

One of the TSA officials even told me that what he was officially told is that liquid explosives don’t freeze.

So there you go. The US policy is more sensible. And anyone landing in the UK from the US will have to go through security before any onward flight, so there’s no chance of flouting the UK rules that way.

And while we’re on the general subject, I am continually amazed by how lax the liquid rules are here in the US. Yesterday I went through airport security at SFO with an opened 5-ounce bottle of hot sauce in my carry-on. The screener flagged it; it was obvious on the x-ray. Another screener searched my bag, found it and looked at it, and then let me keep it.

And, in general, I never bother taking my liquids out of my suitcase anymore. I don’t have to when I am in the PreCheck lane, but no one seems to care in the regular lane, either. It is different in the UK.

http://gizmodo.com/…

According to a TSA blog post, frozen ice (not semi-melted) is allowed:
http://.tsa.gov/2009/11/…

Hannibal Buress’s routine about the TSA liquids rules:
https://youtu.be/kD0KBcrfWgs?t=44s


SHA-1 Freestart Collision

There’s a new cryptanalysis result against the hash function SHA-1:

Abstract: We present in this article a freestart collision example for SHA-1, i.e., a collision for its internal compression function. This is the first practical break of the full SHA-1, reaching all 80 out of 80 steps, while only 10 days of computation on a 64 GPU cluster were necessary to perform the attack. This work builds on a continuous series of cryptanalytic advancements on SHA-1 since the theoretical collision attack breakthrough in 2005. In particular, we extend the recent freestart collision work on reduced-round SHA-1 from CRYPTO 2015 that leverages the computational power of graphic cards and adapt it to allow the use of boomerang speed-up techniques. We also leverage the cryptanalytic techniques by Stevens from EUROCRYPT 2013 to obtain optimal attack conditions, which required further refinements for this work. Freestart collisions, like the one presented here, do not directly imply a collision for SHA-1.

However, this work is an important milestone towards an actual SHA-1 collision and it further shows how graphics cards can be used very efficiently for these kind of attacks. Based on the state-of-the-art collision attack on SHA-1 by Stevens from EUROCRYPT 2013, we are able to present new projections on the computational/financial cost required by a SHA-1 collision computation. These projections are significantly lower than previously anticipated by the industry, due to the use of the more cost efficient graphics cards compared to regular CPUs. We therefore recommend the industry, in particular Internet browser vendors and Certification Authorities, to retract SHA-1 soon. We hope the industry has learned from the events surrounding the cryptanalytic breaks of MD5 and will retract SHA-1 before example signature forgeries appear in the near future. With our new cost projections in mind, we strongly and urgently recommend against a recent proposal to extend the issuance of SHA-1 certificates by a year in the CAB/forum (the vote closes on October 16 2015 after a discussion period ending on October 9).

Especially note this bit: “Freestart collisions, like the one presented here, do not directly imply a collision for SHA-1. However, this work is an important milestone towards an actual SHA-1 collision and it further shows how graphics cards can be used very efficiently for these kind of attacks.” In other words: don’t panic, but prepare for a future panic.

This is not that unexpected. We’ve long known that SHA-1 is broken, at least theoretically. All the major browsers are planning to stop accepting SHA-1 signatures by 2017. Microsoft is retiring it on that same schedule. What’s news is that our previous estimates may be too conservative.

There’s a saying inside the NSA: “Attacks always get better; they never get worse.” This is obviously true, but it’s worth explaining why. Attacks get better for three reasons. One, Moore’s Law means that computers are always getting faster, which means that any cryptanalytic attack gets faster. Two, we’re forever making tweaks in existing attacks, which make them faster. (Note above: “…due to the use of the more cost efficient graphics cards compared to regular CPUs.”) And three, we regularly invent new cryptanalytic attacks. The first of those is generally predictable, the second is somewhat predictable, and the third is not at all predictable.

Way back in 2004, I wrote: “It’s time for us all to migrate away from SHA-1.” Since then, we have developed an excellent replacement: SHA-3 has been agreed on since 2012, and just became a standard.
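
At the level of application code, the switch is usually trivial; the hard part is coordinating certificates, protocols, and every system you interoperate with. Here is a minimal Python sketch (hashlib has included the SHA-3 family since Python 3.6):

    # Minimal sketch: swapping the digest at a call site.  The real cost of
    # migration is in certificates, protocols, and interoperability, not here.
    import hashlib

    message = b"example data to be signed or fingerprinted"

    legacy = hashlib.sha1(message).hexdigest()      # 160-bit SHA-1: being retired
    sha2 = hashlib.sha256(message).hexdigest()      # SHA-2: still considered strong
    sha3 = hashlib.sha3_256(message).hexdigest()    # SHA-3: the standardized replacement

    print(legacy, sha2, sha3, sep="\n")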

This new result is important right now:

Thursday’s research showing SHA1 is weaker than previously thought comes as browser developers and certificate authorities are considering a proposal that would extend the permitted issuance of the SHA1-based HTTPS certificates by 12 months, that is through the end of 2016 rather than no later than January of that year. The proposal argued that some large organizations currently find it hard to move to a more secure hashing algorithm for their digital certificates and need the additional year to make the transition.

As the paper’s authors note, approving this proposal is a bad idea.

More on the paper here.
https://eprint.iacr.org/2015/967
https://sites.google.com/site/itstheshappening/

http://arstechnica.com/security/2015/10/…

Old blog posts on SHA-1 cryptanalysis:
https://www.schneier.com/blog/archives/2005/02/…
https://www.schneier.com/blog/archives/2005/02/…
https://www.schneier.com/blog/archives/2013/11/…
https://www.schneier.com/blog/archives/2012/10/…

My 2004 essay:
https://www.schneier.com/essays/archives/2004/08/…

SHA-3:
https://en.wikipedia.org/wiki/SHA-3
http://www.nist.gov/…

Proposal to extend the use of SHA-1:
https://cabforum.org/pipermail/public/2015-October/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.

Copyright (c) 2015 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.