Blog: April 2014 Archives

Tracking People from Smartphone Accelerometers

It has long been known that individual analog devices have their own fingerprints. Decades ago, individual radio transmitters were identifiable and trackable. Now, researchers have found that the accelerometers in smartphones are unique enough to be identifiable.

The researchers focused specifically on the accelerometer, a sensor that tracks three-dimensional movements of the phone (essential for countless applications, including pedometers, sleep monitoring, and mobile gaming), but their findings suggest that other sensors could leave equally unique fingerprints.

“When you manufacture the hardware, the factory cannot produce the identical thing in millions,” Roy said. “So these imperfections create fingerprints.”

Of course, these fingerprints are only visible when accelerometer data signals are analyzed in detail. Most applications do not require this level of analysis, yet the data shared with all applications—your favorite game, your pedometer—bear the mark. Should someone want to perform this analysis, they could do so.

The researchers tested more than 100 devices over the course of nine months: 80 standalone accelerometer chips used in popular smartphones, 25 Android phones and two tablets.

The accelerometers in all permutations were selected from different manufacturers, to ensure that the fingerprints weren’t simply defects resulting from a particular production line.

With 96-percent accuracy, the researchers could discriminate one sensor from another.
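The identification idea can be illustrated with a toy sketch (my own simplification, not the researchers' actual pipeline): reduce each chip's raw readings to a small statistical feature vector, then match captures by nearest fingerprint. The sample values below are invented for illustration.

```python
import math

def fingerprint(samples):
    """Reduce a list of raw accelerometer readings to simple
    statistical features (mean and standard deviation)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return (mean, math.sqrt(var))

def distance(fp_a, fp_b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp_a, fp_b)))

# Two captures from the "same" chip (consistent manufacturing bias)
# versus a capture from a different chip.
chip_a_run1 = [1.002, 0.998, 1.003, 1.001, 0.997]
chip_a_run2 = [1.001, 0.999, 1.002, 1.000, 0.998]
chip_b_run1 = [1.020, 1.016, 1.021, 1.019, 1.015]

same = distance(fingerprint(chip_a_run1), fingerprint(chip_a_run2))
diff = distance(fingerprint(chip_a_run1), fingerprint(chip_b_run1))
# The same chip's captures cluster tightly; a different chip's bias
# pushes it measurably away, so nearest-fingerprint matching works.
```

Real systems use many more features (frequency-domain ones, in particular), but the principle is the same: the per-chip bias survives across recordings.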

Posted on April 30, 2014 at 1:05 PM • 23 Comments

Details of Apple's Fingerprint Recognition

This is interesting:

Touch ID takes an 88×88, 500ppi scan of your finger and temporarily sends that data to a secure cache located near the RAM. After the data is vectorized and forwarded to the secure enclave (located on the top left of the A7, near the M7 processor), it is immediately discarded. The fingerprint scanner reads subdermal ridge flows (in the inner layer of skin) to prevent loss of accuracy if you have micro cuts or debris on your finger.

With iOS 7.1.1, Apple now takes multiple scans of each position you place your finger in at setup, instead of a single one, and uses algorithms to predict potential errors that could arise in the future. Touch ID was supposed to gradually improve accuracy with every scan, but the problem was that if you didn’t scan well at setup, it would ruin your experience until you re-enrolled your finger. iOS 7.1.1 not only removes that problem and increases accuracy but also greatly reduces the calculations your iPhone 5S has to make while unlocking the device, which means you should get a much faster unlock time.

Posted on April 29, 2014 at 6:47 AM • 29 Comments

Is Google Too Big to Trust?

Interesting essay about how Google’s lack of transparency is hurting their trust:

The reality is that Google’s business is and has always been about mining as much data as possible to be able to present information to users. After all, it can’t display what it doesn’t know. Google Search has always been an ad-supported service, so it needs a way to sell those users to advertisers—that’s how the industry works. Its Google Now voice-based service is simply a form of Google Search, so it too serves advertisers’ needs.

In the digital world, advertisers want to know more than the fact that 100,000 people might be interested in buying a new car. They now want to know who those people are, so they can reach out to them with custom messages that are more likely to be effective. They may not know you personally, but they know your digital persona—basically, you. Google needs to know about you to satisfy its advertisers’ demands.

Once you understand that, you understand why Google does what it does. That’s simply its business. Nothing is free, so if you won’t pay cash, you’ll have to pay with personal information. That business model has been around for decades; Google didn’t invent that business model, but Google did figure out how to make it work globally, pervasively, appealingly, and nearly instantaneously.

I don’t blame Google for doing that, but I blame it for being nontransparent. Putting unmarked sponsored ads in the “regular” search results section is misleading, because people have been trained by Google to see that section of the search results as neutral. They are in fact not. Once you know that, you never quite trust Google search results again. (Yes, Bing’s results are similarly tainted. But Microsoft never promised to do no evil, and most people use Google.)

Posted on April 24, 2014 at 6:45 AM • 66 Comments

Conversnitch

Surveillance is getting cheaper and easier:

Two artists have revealed Conversnitch, a device they built for less than $100 that resembles a lightbulb or lamp and surreptitiously listens in on nearby conversations and posts snippets of transcribed audio to Twitter. Kyle McDonald and Brian House say they hope to raise questions about the nature of public and private spaces in an era when anything can be broadcast by ubiquitous, Internet-connected listening devices.

This is meant as an art project to raise awareness, but the technology is getting cheaper all the time.

The surveillance gadget they unveiled Wednesday is constructed from little more than a Raspberry Pi miniature computer, a microphone, an LED and a plastic flower pot. It screws into and draws power from any standard bulb socket. Then it uploads captured audio via the nearest open Wi-Fi network to Amazon’s Mechanical Turk crowdsourcing platform, to which McDonald and House pay small fees to have the audio transcribed and lines of conversation posted to Conversnitch’s Twitter account.

Consumer spy devices are now affordable by the masses. For $54, you can buy a camera hidden in a smoke detector. For $80, you can buy one hidden in an alarm clock. There are many more options.

Posted on April 23, 2014 at 2:33 PM • 24 Comments

Dan Geer on Heartbleed and Software Monocultures

Good essay:

To repeat, Heartbleed is a common mode failure. We would not know about it were it not open source (Good). That it is open source has been shown to be no talisman against error (Sad). Because errors are statistical while exploitation is not, either errors must be stamped out (which can only result in dampening the rate of innovation and rewarding corporate bigness) or that which is relied upon must be field upgradable (Real Politik). If the device is field upgradable, then it pays to regularly exercise that upgradability both to keep in fighting trim and to make the opponent suffer from the rapidity with which you change his target.

The whole thing is worth reading.

Posted on April 22, 2014 at 7:52 AM • 42 Comments

Info on Russian Bulk Surveillance

Good information:

Russian law gives Russia’s security service, the FSB, the authority to use SORM (“System for Operative Investigative Activities”) to collect, analyze and store all data transmitted or received on Russian networks, including calls, email, website visits and credit card transactions. SORM has been in use since 1990 and collects both metadata and content. SORM-1 collects mobile and landline telephone calls. SORM-2 collects internet traffic. SORM-3 collects from all media (including Wi-Fi and social networks) and stores data for three years. Russian law requires all internet service providers to install an FSB monitoring device (called “Punkt Upravlenia”) on their networks that allows the direct collection of traffic without the knowledge or cooperation of the service provider. The providers must pay for the device and the cost of installation.

Collection requires a court order, but these are secret and not shown to the service provider. According to the data published by Russia’s Supreme Court, almost 540,000 intercepts of phone and internet traffic were authorized in 2012. While the FSB is the principal agency responsible for communications surveillance, seven other Russian security agencies can have access to SORM data on demand. SORM is routinely used against political opponents and human rights activists to monitor them and to collect information to use against them in “dirty tricks” campaigns. Russian courts have upheld the FSB’s authority to surveil political opponents even if they have committed no crime. Russia used SORM during the Olympics to monitor athletes, coaches, journalists, spectators, and the Olympic Committee, publicly explaining this was necessary to protect against terrorism. The system was an improved version of SORM that can combine video surveillance with communications intercepts.

EDITED TO ADD (4/23): This article from World Policy Journal is excellent.

Posted on April 21, 2014 at 5:55 AM • 83 Comments

Friday Squid Blogging: Squid Jigging

Good news from Malaysia:

The Terengganu International Squid Jigging Festival (TISJF) will be continued and become an annual event as one of the state’s main tourism products, said Menteri Besar Datuk Seri Ahmad Said.

He said TISJF will become a signature event intended to enhance the branding of Terengganu as a leading tourism destination in the region.

“Beside introducing squid jigging as a leisure activity, the event also highlights the state’s beautiful beaches, lakes and islands and also our arts, culture and heritage,” he said.

I assume that Malaysian squid jigging is the same as American squid jigging. But I don’t really know.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on April 18, 2014 at 4:16 PM • 109 Comments

Metaphors of Surveillance

There’s a new study looking at the metaphors we use to describe surveillance.

Over 62 days between December and February, we combed through 133 articles by 105 different authors in over 60 news outlets. We found that 91 percent of the articles contained metaphors about surveillance. There is rich thematic diversity in the types of metaphors that are used, but there is also a failure of imagination in using literature to describe surveillance.

Over 9 percent of the articles in our study contained metaphors related to the act of collection; 8 percent to literature (more on that later); about 6 percent to nautical themes; and more than 3 percent to authoritarian regimes.

On the one hand, journalists and bloggers have been extremely creative in attempting to describe government surveillance, for example, by using a variety of metaphors related to the act of collection: sweep, harvest, gather, scoop, glean, pluck, trap. These also include nautical metaphors, such as trawling, tentacles, harbor, net, and inundation. These metaphors seem to fit with data and information flows.

The only literature metaphor used is the book 1984.

This is sad. I agree with Daniel Solove that Kafka’s The Trial is a much better literary metaphor. This article suggests some other literary metaphors, most notably Philip K. Dick. And this one suggests the Eye of Sauron.

Posted on April 18, 2014 at 2:21 PM • 32 Comments

Overreacting to Risk

This is a crazy overreaction:

A 19-year-old man was caught on camera urinating in a reservoir that holds Portland’s drinking water Wednesday, according to city officials.

Now the city must drain 38 million gallons of water from Reservoir 5 at Mount Tabor Park in southeast Portland.

I understand the natural human disgust reaction, but do these people actually think that their normal drinking water is any more pure? That a single human is that much worse than all the normal birds and other animals? A few ounces distributed amongst 38 million gallons is negligible.
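The negligible-dilution claim can be checked with back-of-the-envelope arithmetic (assuming a typical urination of roughly eight fluid ounces):

```python
# Back-of-the-envelope check of the dilution claim.
# Assumption: a single urination is roughly 8 fluid ounces.
OUNCES_PER_GALLON = 128

urine_oz = 8
reservoir_oz = 38_000_000 * OUNCES_PER_GALLON  # 38 million gallons

concentration = urine_oz / reservoir_oz
parts_per_billion = concentration * 1e9
# Works out to roughly 1.6 parts per billion -- far below anything
# measurable, and urine is itself mostly water.
```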

Another story.

EDITED TO ADD (5/14): They didn’t flush the reservoir after all, but they did move the water.

Posted on April 18, 2014 at 6:26 AM • 82 Comments

Book Title

I previously posted that I am writing a book on security and power. Here are some title suggestions:

  • Permanent Record: The Hidden Battles to Capture Your Data and Control Your World
  • Hunt and Gather: The Hidden Battles to Capture Your Data and Control Your World
  • They Already Know: The Hidden Battles to Capture Your Data and Control Your World
  • We Already Know: The Hidden Battles to Capture Your Data and Control Your World
  • Data and Goliath: The Hidden Battles to Capture Your Data and Control Your World
  • All About You: The Hidden Battles to Capture Your Data and Control Your World
  • Tracked: The Hidden Battles to Capture Your Data and Control Your World
  • Tracking You: The Forces that Capture Your Data and Control Your World
  • Data: The New Currency of Power

My absolute favorite is Data and Goliath, but there’s a problem. Malcolm Gladwell recently published a book with the title of David and Goliath. Normally I wouldn’t care, but I published my Liars and Outliers soon after Gladwell published Outliers. Both similarities are coincidences, but aping him twice feels like a bit much.

Anyway, comments on the above titles—and suggestions for new ones—are appreciated.

The book is still scheduled for February publication. I hope to have a first draft done by the end of June, and a final manuscript by the end of October. If anyone is willing to read and comment on a draft manuscript between those two months, please let me know in e-mail.

Posted on April 16, 2014 at 9:32 AM • 220 Comments

Schneier Speaking Schedule: April–May

Here’s my upcoming speaking schedule for April and May:

Information about all my speaking engagements can be found here.

Posted on April 14, 2014 at 2:11 PM • 7 Comments

More on Heartbleed

This is an update to my earlier post.

Cloudflare is reporting that it’s very difficult, if not practically impossible, to steal SSL private keys with this attack.

Here’s the good news: after extensive testing on our software stack, we have been unable to successfully use Heartbleed on a vulnerable server to retrieve any private key data. Note that is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that. However, if it is possible, it is at a minimum very hard. And, we have reason to believe based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.

The reasoning is complicated, and I suggest people read the post. What I have heard from people who actually ran the attack against various servers is that what you get is a huge variety of cruft, ranging from indecipherable binary to useless log messages to people’s passwords. The variability is huge.

This xkcd comic is a very good explanation of how the vulnerability works. And this post by Dan Kaminsky is worth reading.

I have a lot to say about the human aspects of this: auditing of open-source code, how the responsible disclosure process worked in this case, the ease with which anyone could weaponize this with just a few lines of script, how we explain vulnerabilities to the public—and the role that the impressive logo played in the process—and our certificate issuance and revocation process. This may be a massive computer vulnerability, but all of the interesting aspects of it are human.

EDITED TO ADD (4/12): We have one example of someone successfully retrieving an SSL private key using Heartbleed. So it’s possible, but it seems to be much harder than we originally thought.

And we have a story where two anonymous sources have claimed that the NSA has been exploiting Heartbleed for two years.

EDITED TO ADD (4/12): Hijacking user sessions with Heartbleed. And a nice essay on the marketing and communications around the vulnerability.

EDITED TO ADD (4/13): The US intelligence community has denied prior knowledge of Heartbleed. The statement is word-game free:

NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report. Reports that say otherwise are wrong.

The statement also says:

Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

Since when is “law enforcement need” included in that decision process? This national security exception to law and process is extending much too far into normal police work.

Another point. According to the original Bloomberg article:

http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html

Certainly a plausible statement. But if those millions didn’t discover something obvious like Heartbleed, shouldn’t we investigate them for incompetence?

Finally—not related to the NSA—this is good information on which sites are still vulnerable, including historical data.

Posted on April 11, 2014 at 1:10 PM • 178 Comments

Police Disabling Their Own Voice Recorders

This is not a surprise:

The Los Angeles Police Commission is investigating how half of the recording antennas in the Southeast Division went missing, seemingly as a way to evade new self-monitoring procedures that the Los Angeles Police Department imposed last year.

The antennas, which are mounted onto individual patrol cars, receive recorded audio captured from an officer’s belt-worn transmitter. The transmitter is designed to capture an officer’s voice and transmit the recording to the car itself for storage. The voice recorders are part of a video camera system that is mounted in a front-facing camera on the patrol car. Both elements are activated any time the car’s emergency lights and sirens are turned on, but they can also be activated manually.

According to the Los Angeles Times, an LAPD investigation determined that around half of the 80 patrol cars in one South LA division were missing antennas as of last summer, and an additional 10 antennas were unaccounted for.

Surveillance of power is one of the most important ways to ensure that power does not abuse its status. But, of course, power does not like to be watched.

Posted on April 11, 2014 at 6:41 AM • 45 Comments

Heartbleed

Heartbleed is a catastrophic bug in OpenSSL:

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.”

Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory—SSL private keys, user keys, anything—is vulnerable. And you have to assume that it is all compromised. All of it.
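The bug itself is simple. OpenSSL is C code, but the over-read can be sketched in a few lines of Python: the server trusted the attacker-supplied payload length and echoed back that many bytes from its own memory. This is a toy model of my own, and the "memory" contents below are invented for illustration.

```python
# Toy simulation of the Heartbleed over-read. The process "memory"
# holds the 4-byte heartbeat payload followed by unrelated secrets.
memory = b"ping" + b"SECRET_KEY=hunter2;session=abc123..."

def heartbeat_vulnerable(claimed_len):
    # The bug: echo back claimed_len bytes without checking that the
    # actual payload ("ping") is really that long.
    return memory[:claimed_len]

def heartbeat_patched(claimed_len):
    # The fix: bounds-check the claimed length against the real payload.
    payload_len = len(b"ping")
    if claimed_len > payload_len:
        return None  # silently drop malformed heartbeats
    return memory[:claimed_len]

# The attacker claims a 40-byte payload while sending only 4 bytes,
# and receives 36 bytes of adjacent memory along with the echo.
leak = heartbeat_vulnerable(40)
```

Each request returns whatever happens to sit next to the heartbeat buffer at that moment, which is why repeated requests yield different slices of memory.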

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

This article is worth reading. Hacker News thread is filled with commentary. XKCD cartoon.

EDITED TO ADD (4/9): Has anyone looked at all the low-margin non-upgradable embedded systems that use OpenSSL? An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn’t going to be fun for anyone.

EDITED TO ADD (4/10): I’m hearing that the CAs are completely clogged, trying to reissue so many new certificates. And I’m not sure we have anything close to the infrastructure necessary to revoke half a million certificates.

Possible evidence that Heartbleed was exploited last year.

EDITED TO ADD (4/10): I wonder if there is going to be some backlash from the mainstream press and the public. If nothing really bad happens—if this turns out to be something like the Y2K bug—then we are going to face criticisms of crying wolf.

EDITED TO ADD (4/11): Brian Krebs and Ed Felten on how to protect yourself from Heartbleed.

Posted on April 9, 2014 at 5:03 AM • 318 Comments

"Unbreakable" Encryption Almost Certainly Isn't

This headline is provocative: “Human biology inspires ‘unbreakable’ encryption.”

The article is similarly nonsensical:

Researchers at Lancaster University, UK have taken a hint from the way the human lungs and heart constantly communicate with each other, to devise an innovative, highly flexible encryption algorithm that they claim can’t be broken using the traditional methods of cyberattack.

Information can be encrypted with an array of different algorithms, but the question of which method is the most secure is far from trivial. Such algorithms need a “key” to encrypt and decrypt information; the algorithms typically generate their keys using a well-known set of rules that can only admit a very large, but nonetheless finite number of possible keys. This means that in principle, given enough time and computing power, prying eyes can always break the code eventually.
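The quoted reasoning is technically true but misleading: every cipher with a fixed key length has a finite keyspace, yet a large enough keyspace makes exhaustive search infeasible in practice. A toy example (a deliberately weak single-byte XOR "cipher" of my own, not any real algorithm) shows what exhaustive search looks like when the keyspace is small:

```python
def xor_encrypt(plaintext: bytes, key: int) -> bytes:
    # Toy single-byte-key XOR "cipher": a keyspace of only 256 keys.
    return bytes(b ^ key for b in plaintext)

def brute_force(ciphertext: bytes, crib: bytes):
    # Enumerate every possible key. Any finite keyspace falls to this
    # given enough time -- the only question is how much time.
    for key in range(256):
        guess = xor_encrypt(ciphertext, key)  # XOR is its own inverse
        if crib in guess:
            return key, guess
    return None

ct = xor_encrypt(b"attack at dawn", 0x5A)
key, pt = brute_force(ct, b"dawn")
# A 256-key search is instant; a 2**128 keyspace would outlast the
# universe, which is why "finite" alone doesn't mean "breakable".
```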

The researchers, led by Dr. Tomislav Stankovski, created an encryption mechanism that can generate a truly unlimited number of keys, which they say vastly increases the security of the communication. To do so, they took inspiration from the anatomy of the human body.

Regularly, someone from outside cryptography—someone with no idea how crypto works—pops up and says, “Hey, I can solve their problems.” Invariably, they produce some trivial encryption scheme, because they don’t know any better.

Remember: anyone can create a cryptosystem that he himself cannot break. And this advice from 15 years ago is still relevant.

Another article, and the paper.

Posted on April 8, 2014 at 6:16 AM • 51 Comments

Mass Surveillance by Eavesdropping on Web Cookies

Interesting research:

Abstract: We investigate the ability of a passive network observer to leverage third-party HTTP tracking cookies for mass surveillance. If two web pages embed the same tracker which emits a unique pseudonymous identifier, then the adversary can link visits to those pages from the same user (browser instance) even if the user’s IP address varies. Using simulated browsing profiles, we cluster network traffic by transitively linking shared unique cookies and estimate that for typical users over 90% of web sites with embedded trackers are located in a single connected component. Furthermore, almost half of the most popular web pages will leak a logged-in user’s real-world identity to an eavesdropper in unencrypted traffic. Together, these provide a novel method to link an identified individual to a large fraction of her entire web history. We discuss the privacy consequences of this attack and suggest mitigation strategies.
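The transitive-linking step in the abstract can be sketched with a standard union-find structure (a simplified model of my own, not the paper's code): any two visits that share a unique tracking cookie are merged into one cluster, regardless of the IP addresses involved.

```python
# Transitive linking: visits that share any tracking cookie get merged
# into one cluster, even as the user's IP address changes.
def cluster_visits(visits):
    """visits: list of (visit_id, set_of_cookie_ids).
    Returns a mapping from visit_id to its cluster representative."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    cookie_owner = {}  # cookie id -> first visit that carried it
    for visit_id, cookies in visits:
        parent[visit_id] = visit_id
        for c in cookies:
            if c in cookie_owner:
                union(visit_id, cookie_owner[c])
            else:
                cookie_owner[c] = visit_id
    return {v: find(v) for v, _ in visits}

# Visits 1 and 2 share tracker A; visits 2 and 3 share tracker B.
# All three land in one connected component; visit 4 stays separate.
clusters = cluster_visits([
    (1, {"trackerA"}), (2, {"trackerA", "trackerB"}),
    (3, {"trackerB"}), (4, {"trackerC"}),
])
```

The paper's 90%-in-one-component result is exactly this effect at web scale: enough trackers overlap that almost everything links up.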

Blog post.

Posted on April 4, 2014 at 8:25 AM • 29 Comments

Ephemeral Apps

Ephemeral messaging apps such as Snapchat, Wickr and Frankly, all of which advertise that your photo, message or update will only be accessible for a short period, are on the rise. Snapchat and Frankly, for example, claim they permanently delete messages, photos and videos after 10 seconds. After that, there’s no record.

This notion is especially popular with young people, and these apps are an antidote to sites such as Facebook where everything you post lasts forever unless you take it down—and taking it down is no guarantee that it isn’t still available.

These ephemeral apps are the first concerted push against the permanence of Internet conversation. We started losing ephemeral conversation when computers began to mediate our communications. Computers naturally produce conversation records, and that data was often saved and archived.

The powerful and famous—from Oliver North back in 1987 to Anthony Weiner in 2011—have been brought down by e-mails, texts, tweets and posts they thought private. Lots of us have been embroiled in more personal embarrassments resulting from things we’ve said either being saved for too long or shared too widely.

People have reacted to this permanent nature of Internet communications in ad hoc ways. We’ve deleted our stuff where possible and asked others not to forward our writings without permission. “Wall scrubbing” is the term used to describe the deletion of Facebook posts.

Sociologist danah boyd has written about teens who systematically delete every post they make on Facebook soon after they make it. Apps such as Wickr just automate the process. And it turns out there’s a huge market in that.

Ephemeral conversation is easy to promise but hard to get right. In 2013, researchers discovered that Snapchat doesn’t delete images as advertised; it merely changes their names so they’re not easy to see. Whether this is a problem for users depends on how technically savvy their adversaries are, but it illustrates the difficulty of making instant deletion actually work.
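The Snapchat finding illustrates a general point: hiding data from the app's user interface is not the same as erasing it. A minimal sketch (the `.nomedia` renaming mirrors what researchers reported on Android; the file contents are invented):

```python
import os
import tempfile

def fake_delete(path):
    # "Deletion" by renaming: the app can no longer find the file,
    # but the bytes are still sitting on disk for anyone who looks.
    hidden = path + ".nomedia"
    os.rename(path, hidden)
    return hidden

def real_delete(path):
    os.remove(path)  # actually unlink the file
    # (Forensic-proof erasure would also need to overwrite the
    # underlying disk blocks, which os.remove does not do.)

with tempfile.TemporaryDirectory() as d:
    snap = os.path.join(d, "photo.jpg")
    with open(snap, "wb") as f:
        f.write(b"embarrassing photo bytes")

    hidden = fake_delete(snap)
    # The original name is gone, but the content survives intact.
    with open(hidden, "rb") as f:
        recovered = f.read()

    real_delete(hidden)
```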

The problem is that these new “ephemeral” conversations aren’t really ephemeral the way a face-to-face unrecorded conversation would be. They’re not ephemeral like a conversation during a walk in a deserted woods used to be before the invention of cell phones and GPS receivers.

At best, the data is recorded, used, saved and then deliberately deleted. At worst, the ephemeral nature is faked. While the apps make the posts, texts or messages unavailable to users quickly, they probably don’t erase them off their systems immediately. They certainly don’t erase them from their backup tapes, if they end up there.

The companies offering these apps might very well analyze their content and make that information available to advertisers. We don’t know how much metadata is saved. In Snapchat, users can see the metadata even though they can’t see the content, and we don’t know what it’s used for. And if the government demanded copies of those conversations—either through a secret NSA demand or a more normal legal process involving an employer or school—the companies would have no choice but to hand them over.

Even worse, if the FBI or NSA demanded that American companies secretly store those conversations and not tell their users, breaking their promise of deletion, the companies would have no choice but to comply.

That last bit isn’t just paranoia.

We know the U.S. government has done this to companies large and small. Lavabit was a small secure e-mail service, with an encryption system designed so that even the company had no access to users’ e-mail. Last year, the NSA presented it with a secret court order demanding that it turn over its master key, thereby compromising the security of every user. Lavabit shut down its service rather than comply, but that option isn’t feasible for larger companies. In 2011, Microsoft made some still-unknown changes to Skype to make NSA eavesdropping easier, but the security promises they advertised didn’t change.

This is one of the reasons President Barack Obama’s announcement that he will end one particular NSA collection program under one particular legal authority barely begins to solve the problem: the surveillance state is so robust that anything other than a major overhaul won’t make a difference.

Of course, the typical Snapchat user doesn’t care whether the U.S. government is monitoring his conversations. He’s more concerned about his high school friends and his parents. But if these platforms are insecure, it’s not just the NSA that one should worry about.

Dissidents in the Ukraine and elsewhere need security, and if they rely on ephemeral apps, they need to know that their own governments aren’t saving copies of their chats. And even U.S. high school students need to know that their photos won’t be surreptitiously saved and used against them years later.

The need for ephemeral conversation isn’t some weird privacy fetish or the exclusive purview of criminals with something to hide. It represents a basic need for human privacy, and something every one of us had as a matter of course before the invention of microphones and recording devices.

We need ephemeral apps, but we need credible assurances from the companies that they are actually secure and credible assurances from the government that they won’t be subverted.

This essay previously appeared on CNN.com.

EDITED TO ADD (4/14): There are apps to permanently save Snapchat photos.

At Financial Cryptography 2014, Franziska Roesner presented a paper that questions whether users expect ephemeral messaging from Snapchat.

Posted on April 2, 2014 at 5:07 AM • 66 Comments

Seventh Movie-Plot Threat Contest

As you might expect, this year’s contest has the NSA as the villain:

The NSA has won, but how did it do it? How did it use its ability to conduct ubiquitous surveillance, its massive data centers, and its advanced data analytics capabilities to come out on top? Did it take over the world overtly, or is it just pulling the strings behind everyone’s backs? Did it have to force companies to build surveillance into their products, or could it just piggy-back on market trends? How does it deal with liberal democracies and ruthless totalitarian dictatorships at the same time? Is it blackmailing Congress? How does the money flow? What’s the story?

That’s it: an NSA movie-plot threat. (For those who don’t know, a movie-plot threat is a scary-threat story that would make a great movie, but is much too specific to build security policies around.) Nothing too science fictional; today’s technology or presumed technology only.

Entries are limited to 500 words, and should be posted in the comments. In a month, I’ll choose some semifinalists, and we can all vote and pick the winner.

Prize will be something tangible, but primarily the accolades of your peers.

Good luck.

History: The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner. The Fourth Movie-Plot Threat Contest rules and winner. The Fifth Movie-Plot Threat Contest rules, semifinalists, and winner. The Sixth Movie-Plot Threat Contest rules, semifinalists, and winner.

Posted on April 1, 2014 at 6:11 AM
