Blog: February 2015 Archives

Everyone Wants You To Have Security, But Not from Them

In December, Google’s Executive Chairman Eric Schmidt was interviewed at the Cato Institute Surveillance Conference. One of the things he said, after talking about some of the security measures his company has put in place post-Snowden, was: “If you have important information, the safest place to keep it is in Google. And I can assure you that the safest place to not keep it is anywhere else.”

This surprised me, because Google collects all of your information to show you more targeted advertising. Surveillance is the business model of the Internet, and Google is one of its most successful practitioners. To claim that Google protects your privacy better than anyone else is to profoundly misunderstand why Google stores your data for free in the first place.

I was reminded of this last week when I appeared on Glenn Beck’s show along with cryptography pioneer Whitfield Diffie. Diffie said:

You can’t have privacy without security, and I think we have glaring failures in computer security in problems that we’ve been working on for 40 years. You really should not live in fear of opening an attachment to a message. It ought to be confined; your computer ought to be able to handle it. And the fact that we have persisted for decades without solving these problems is partly because they’re very difficult, but partly because there are lots of people who want you to be secure against everyone but them. And that includes all of the major computer manufacturers who, roughly speaking, want to manage your computer for you. The trouble is, I’m not sure of any practical alternative.

That neatly explains Google. Eric Schmidt does want your data to be secure. He wants Google to be the safest place for your data—as long as you don’t mind the fact that Google has access to your data. Facebook wants the same thing: to protect your data from everyone except Facebook. Hardware companies are no different. Last week, we learned that Lenovo computers shipped with a piece of adware called Superfish that broke users’ security to spy on them for advertising purposes.

Governments are no different. The FBI wants people to have strong encryption, but it wants backdoor access so it can get at your data. UK Prime Minister David Cameron wants you to have good security, just as long as it’s not so strong as to keep the UK government out. And, of course, the NSA spends a lot of money ensuring that there’s no security it can’t break.

Corporations want access to your data for profit; governments want it for security purposes, be they benevolent or malevolent. But Diffie makes an even stronger point: we give lots of companies access to our data because it makes our lives easier.

I wrote about this in my latest book, Data and Goliath:

Convenience is the other reason we willingly give highly personal data to corporate interests, and put up with becoming objects of their surveillance. As I keep saying, surveillance-based services are useful and valuable. We like it when we can access our address book, calendar, photographs, documents, and everything else on any device we happen to be near. We like services like Siri and Google Now, which work best when they know tons about you. Social networking apps make it easier to hang out with our friends. Cell phone apps like Google Maps, Yelp, Weather, and Uber work better and faster when they know our location. Letting apps like Pocket or Instapaper know what we’re reading feels like a small price to pay for getting everything we want to read in one convenient place. We even like it when ads are targeted to exactly what we’re interested in. The benefits of surveillance in these and other applications are real, and significant.

Like Diffie, I’m not sure there is any practical alternative. The reason the Internet is a worldwide mass-market phenomenon is that all the technological details are hidden from view. Someone else is taking care of it. We want strong security, but we also want companies to have access to our computers, smart devices, and data. We want someone else to manage our computers and smart phones, organize our e-mail and photos, and help us move data between our various devices.

Those “someones” will necessarily be able to violate our privacy, either by deliberately peeking at our data or by having such lax security that they’re vulnerable to national intelligence agencies, cybercriminals, or both. Last week, we learned that the NSA broke into the Dutch company Gemalto and stole the encryption keys for billions—yes, billions—of cell phones worldwide. That was possible because we consumers don’t want to do the work of securely generating those keys and setting up our own security when we get our phones; we want it done automatically by the phone manufacturers. We want our data to be secure, but we want someone to be able to recover it all when we forget our password.

We’ll never solve these security problems as long as we’re our own worst enemy. That’s why I believe that any long-term security solution will not only be technological, but political as well. We need laws that protect our privacy from those who obey them, and that punish those who break them. We need laws that require those entrusted with our data to protect it. Yes, we need better security technologies, but we also need laws mandating the use of those technologies.

This essay previously appeared on Forbes.com.

EDITED TO ADD: French translation.

Posted on February 26, 2015 at 6:47 AM

"Surreptitiously Weakening Cryptographic Systems"

New paper: “Surreptitiously Weakening Cryptographic Systems,” by Bruce Schneier, Matthew Fredrikson, Tadayoshi Kohno, and Thomas Ristenpart.

Abstract: Revelations over the past couple of years highlight the importance of understanding malicious and surreptitious weakening of cryptographic systems. We provide an overview of this domain, using a number of historical examples to drive development of a weaknesses taxonomy. This allows comparing different approaches to sabotage. We categorize a broader set of potential avenues for weakening systems using this taxonomy, and discuss what future research is needed to provide sabotage-resilient cryptography.

EDITED TO ADD (3/3): News article.

Posted on February 25, 2015 at 6:09 AM

AT&T Charging Customers to Not Spy on Them

AT&T is charging a premium for gigabit Internet service without surveillance:

The tracking and ad targeting associated with the gigabit service cannot be avoided using browser privacy settings: as AT&T explained, the program “works independently of your browser’s privacy settings regarding cookies, do-not-track and private browsing.” In other words, AT&T is performing deep packet inspection, a controversial practice through which internet service providers, by virtue of their privileged position, monitor all the internet traffic of their subscribers and collect data on the content of those communications.

What if customers do not want to be spied on by their internet service providers? AT&T allows gigabit service subscribers to opt out—for a $29 fee per month.

I have mixed feelings about this. On one hand, AT&T is forgoing revenue by not spying on its customers, and it’s reasonable to charge them for that lost revenue. On the other hand, this sort of thing means that privacy becomes a luxury good. In general, I prefer to conceptualize privacy as a right to be respected and not a commodity to be bought and sold.

EDITED TO ADD: It’s actually even more expensive.

Posted on February 24, 2015 at 6:33 AM

Cell Phones Leak Location Information through Power Usage

New research on tracking the location of smart phone users by monitoring power consumption:

PowerSpy takes advantage of the fact that a phone’s cellular transmissions use more power to reach a given cell tower the farther it travels from that tower, or when obstacles like buildings or mountains block its signal. That correlation between battery use and variables like environmental conditions and cell tower distance is strong enough that momentary power drains like a phone conversation or the use of another power-hungry app can be filtered out, Michalevsky says.

One of the machine-learning tricks the researchers used to detect that “noise” is a focus on longer-term trends in the phone’s power use rather than those that last just a few seconds or minutes. “A sufficiently long power measurement (several minutes) enables the learning algorithm to ‘see’ through the noise,” the researchers write. “We show that measuring the phone’s aggregate power consumption over time completely reveals the phone’s location and movement.”

Even so, PowerSpy has a major limitation: It requires that the snooper pre-measure how a phone’s power use behaves as it travels along defined routes. This means you can’t snoop on a place you or a cohort has never been, as you need to have actually walked or driven along the route your subject’s phone takes in order to draw any location conclusions.
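To make the route-matching idea concrete, here is a toy sketch in Python (illustrative only; a deliberate simplification, not the PowerSpy paper’s actual machine-learning pipeline): normalize a measured power trace so absolute drain levels drop out, then pick whichever pre-measured route profile it correlates with best.

    import numpy as np

    def normalize(trace):
        """Zero mean, unit variance: only the shape of the power
        drain over time matters, not its absolute level."""
        trace = np.asarray(trace, dtype=float)
        return (trace - trace.mean()) / trace.std()

    def match_route(measured, reference_profiles):
        """Return the pre-measured route whose power profile correlates
        best with the measured trace (all traces assumed equal length)."""
        m = normalize(measured)
        scores = {
            name: float(np.dot(m, normalize(profile))) / len(m)
            for name, profile in reference_profiles.items()
        }
        return max(scores, key=scores.get)

    # e.g. match_route(victim_trace, {"route_a": a, "route_b": b})

This is also why the pre-measurement requirement is fundamental to the attack: without reference profiles, there is nothing to correlate against.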

I’m not sure how practical this is, but it’s certainly interesting.

The paper.

Posted on February 23, 2015 at 10:30 AM

Man-in-the-Middle Attacks on Lenovo Computers

It’s not just national intelligence agencies that break your https security through man-in-the-middle attacks. Corporations do it, too. For the past few months, Lenovo PCs have shipped with an adware app called Superfish that man-in-the-middles TLS connections.

Here’s how it works, and here’s how to get rid of it.

And you should get rid of it, not merely because it’s nasty adware. It’s a security risk. Someone with the password—here it is, cracked—can perform a man-in-the-middle attack on your security as well.

Since the story broke, Lenovo first completely misunderstood the problem, then turned off the app, and is now removing it from its computers.

Superfish, as well, exhibited extreme cluelessness by claiming its software poses no security risk. That was before someone cracked its password, though.

Three Slashdot threads.

EDITED TO ADD (2/20): US CERT has issued two security advisories. And the Department of Homeland Security is urging users to remove Superfish.

EDITED TO ADD (2/23): Another good article.

EDITED TO ADD (2/24): More commentary.

EDITED TO ADD (3/12): Rumors are that any software from Barak Weichselbaum may be vulnerable. This site tests for the vulnerability. Better removal instructions.

Posted on February 20, 2015 at 3:43 PM

NSA/GCHQ Hacks SIM Card Database and Steals Billions of Keys

The Intercept has an extraordinary story: the NSA and/or GCHQ hacked into the Dutch SIM card manufacturer Gemalto, stealing the encryption keys for billions of cell phones. People are still trying to figure out exactly what this means, but it seems to mean that the intelligence agencies have access to both voice and data from all phones using those cards.

Me in The Register: “We always knew that they would occasionally steal SIM keys. But all of them? The odds that they just attacked this one firm are extraordinarily low and we know the NSA does like to steal keys where it can.”

I think this is one of the most important Snowden stories we’ve read.

More news stories. Slashdot thread. Hacker News thread.

Posted on February 20, 2015 at 7:51 AM

IRS Encourages Poor Cryptography

I’m not sure what to make of this, or even what it means. The IRS has a standard called IDES: International Data Exchange Service: “The International Data Exchange Service (IDES) is an electronic delivery point where Financial Institutions (FI) and Host Country Tax Authorities (HCTA) can transmit and exchange FATCA data with the United States.” It’s like IRS data submission, but for other governments and foreign banks.

Buried in one of the documents are the rules for encryption:

While performing AES encryption, there are several settings and options depending on the tool used to perform encryption. IRS recommended settings should be used to maintain compatibility:

  • Cipher Mode: ECB (Electronic Code Book).
  • Salt: No salt value
  • Initialization Vector: No Initialization Vector (IV). If an IV is present, set to all zeros to avoid affecting the encryption.
  • Key Size: 256 bits / 32 bytes—Key size should be verified and moving the key across operating systems can affect the key size.
  • Encoding: There can be no special encoding. The file will contain only the raw encrypted bytes.
  • Padding: PKCS#7 or PKCS#5.

ECB? Are they serious?
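For anyone who hasn’t seen why ECB is ridiculed, here is a minimal sketch in Python using the pyca/cryptography library (my choice of tool for illustration; the IRS documents don’t name one). ECB encrypts each 16-byte block independently, so identical plaintext blocks produce identical ciphertext blocks, and patterns in the plaintext survive encryption:

    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)  # AES-256, per the IRS recommendation
    plaintext = b"ATTACK AT DAWN!!" * 2  # two identical 16-byte blocks

    encryptor = Cipher(
        algorithms.AES(key), modes.ECB(), backend=default_backend()
    ).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    # The two ciphertext blocks are identical: an eavesdropper learns that
    # the two plaintext blocks were equal without ever knowing the key.
    assert ciphertext[:16] == ciphertext[16:32]

This is exactly the property behind the famous “ECB penguin” image, and it’s why modes like CBC need the random IV that the IRS tells implementers to zero out.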

Posted on February 18, 2015 at 6:42 AM

The Equation Group's Sophisticated Hacking and Exploitation Tools

This week, Kaspersky Labs published detailed information on what it calls the Equation Group—almost certainly the NSA—and its abilities to embed spyware deep inside computers, gaining pretty much total control of those computers while maintaining persistence in the face of reboots, operating system reinstalls, and commercial anti-virus products. The details are impressive, and I urge anyone interested to read the Kaspersky documents, or this very detailed article from Ars Technica.

Kaspersky doesn’t explicitly name the NSA, but talks about similarities between these techniques and Stuxnet, and points to NSA-like codenames. A related Reuters story provides more confirmation: “A former NSA employee told Reuters that Kaspersky’s analysis was correct, and that people still in the intelligence agency valued these spying programs as highly as Stuxnet. Another former intelligence operative confirmed that the NSA had developed the prized technique of concealing spyware in hard drives, but said he did not know which spy efforts relied on it.”

In some ways, this isn’t news. We saw examples of these techniques in 2013, when Der Spiegel published details of the NSA’s 2008 catalog of implants. (Aside: I don’t believe the person who leaked that catalog is Edward Snowden.) In those pages, we saw examples of malware that embedded itself in computers’ BIOS and disk drive firmware. We already know about the NSA’s infection methods using packet injection and hardware interception.

This is targeted surveillance. There’s nothing here that implies the NSA is doing this sort of thing to every computer, router, or hard drive. It’s doing it only to networks it wants to monitor. Reuters again: “Kaspersky said it found personal computers in 30 countries infected with one or more of the spying programs, with the most infections seen in Iran, followed by Russia, Pakistan, Afghanistan, China, Mali, Syria, Yemen and Algeria. The targets included government and military institutions, telecommunication companies, banks, energy companies, nuclear researchers, media, and Islamic activists, Kaspersky said.” A map of the infections Kaspersky found bears this out.

On one hand, it’s the sort of thing we want the NSA to do. It’s targeted. It’s exploiting existing vulnerabilities. In the overall scheme of things, this is much less disruptive to Internet security than deliberately inserting vulnerabilities that leave everyone insecure.

On the other hand, the NSA’s definition of “targeted” can be pretty broad. We know that it’s hacked the Belgian telephone company and the Brazilian oil company. We know it’s collected every phone call in the Bahamas and Afghanistan. It hacks system administrators worldwide.

On the other other hand—can I even have three hands?—I remember a line from my latest book: “Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.” Today, the Equation Group is “probably the most sophisticated computer attack group in the world,” but these techniques aren’t magically exclusive to the NSA. We know China uses similar techniques. Companies like Gamma Group sell less sophisticated versions of the same things to Third World governments worldwide. We need to figure out how to maintain security in the face of these sorts of attacks, because we’re all going to be subjected to the criminal versions of them in three to five years.

That’s the real problem. Steve Bellovin wrote about this:

For more than 50 years, all computer security has been based on the separation between the trusted portion and the untrusted portion of the system. Once it was “kernel” (or “supervisor”) versus “user” mode, on a single computer. The Orange Book recognized that the concept had to be broader, since there were all sorts of files executed or relied on by privileged portions of the system. Their newer, larger category was dubbed the “Trusted Computing Base” (TCB). When networking came along, we adopted firewalls; the TCB still existed on single computers, but we trusted “inside” computers and networks more than external ones.

There was a danger sign there, though few people recognized it: our networked systems depended on other systems for critical files….

The National Academies report Trust in Cyberspace recognized that the old TCB concept no longer made sense. (Disclaimer: I was on the committee.) Too many threats, such as Word macro viruses, lived purely at user level. Obviously, one could have arbitrarily classified word processors, spreadsheets, etc., as part of the TCB, but that would have been worse than useless; these things were too large and had no need for privileges.

In the 15+ years since then, no satisfactory replacement for the TCB model has been proposed.

We have a serious computer security problem. Everything depends on everything else, and security vulnerabilities in anything affect the security of everything. We simply don’t have the ability to maintain security in a world where we can’t trust the hardware and software we use.

This article was originally published at the Lawfare blog.

EDITED TO ADD (2/17): Slashdot thread. Hacker News thread. Reddit thread. BoingBoing discussion.

EDITED TO ADD (2/18): Here are two academic/hacker presentations on exploiting hard drives. And another article.

EDITED TO ADD (2/23): Another excellent article.

Posted on February 17, 2015 at 12:19 PM

Co3 Systems Changes Its Name to Resilient Systems

Today my company, Co3 Systems, is changing its name to Resilient Systems. The new name better reflects who we are and what we do. Plus, the old name was kind of dumb.

I have long liked the term “resilience.” If you look around, you’ll see it a lot. It’s used in human psychology, in organizational theory, in disaster recovery, in ecological systems, in materials science, and in systems engineering. Here’s a definition from 1991, in a book by Aaron Wildavsky called Searching for Safety: “Resilience is the capacity to cope with unanticipated dangers after they have become manifest, learning to bounce back.”

The concept of resilience has been used in IT systems for a long time.

I have been talking about resilience in IT security—and security in general—for at least 15 years. I gave a talk at an ICANN meeting in 2001 titled “Resilient Security and the Internet.” At the 2001 Black Hat, I said: “Strong countermeasures combine protection, detection, and response. The way to build resilient security is with vigilant, adaptive, relentless defense by experts (people, not products). There are no magic preventive countermeasures against crime in the real world, yet we are all reasonably safe, nevertheless. We need to bring that same thinking to the Internet.”

In Beyond Fear (2003), I spend pages on resilience: “Good security systems are resilient. They can withstand failures; a single failure doesn’t cause a cascade of other failures. They can withstand attacks, including attackers who cheat. They can withstand new advances in technology. They can fail and recover from failure.” We can defend against some attacks, but we have to detect and respond to the rest of them. That process is how we achieve resilience. It was true fifteen years ago and, if anything, it is even more true today.

So that’s the new name, Resilient Systems. We provide an Incident Response Platform, empowering organizations to thrive in the face of cyberattacks and business crises. Our collaborative platform arms incident response teams with workflows, intelligence, and deep-data analytics to react faster, coordinate better, and respond smarter.

And that’s the deal. Our Incident Response Platform produces and manages instant incident response plans. Together with our Security and Privacy modules, it provides IR teams with best-practice action plans and flexible workflows. It’s also agile, allowing teams to modify their response to suit organizational needs, and continues to adapt in real time as incidents evolve.

Resilience is a lot bigger than IT. It’s a lot bigger than technology. In my latest book, Data and Goliath, I write: “I am advocating for several flavors of resilience for both our systems of surveillance and our systems that control surveillance: resilience to hardware and software failure, resilience to technological innovation, resilience to political change, and resilience to coercion. An architecture of security provides resilience to changing political whims that might legitimize political surveillance. Multiple overlapping authorities provide resilience to coercive pressures. Properly written laws provide resilience to changing technological capabilities. Liberty provides resilience to authoritarianism. Of course, full resilience against any of these things, let alone all of them, is impossible. But we must do as well as we can, even to the point of assuming imperfections in our resilience.”

I wrote those words before we even considered a name change.

Same company, new name (and new website). Check us out.

Posted on February 17, 2015 at 6:53 AM

Ford Proud that "Mustang" Is a Common Password

This is what happens when a PR person gets hold of information he really doesn’t understand.

“Mustang” is the 16th most common password on the Internet according to a recent study by SplashData, besting both “superman” in 21st place and “batman” in 24th

Mustang is the only car to appear in the top 25 most common Internet passwords

That’s not good. If you’re a PR person, though, it’s good.

Here are a few suggestions for strengthening your “mustang” password:

  • Add numbers to your password (favorite Mustang model year, year you bought your Mustang or year you sold the car)
  • Incorporate Mustang option codes, paint codes, engine codes or digits from your VIN
  • Create acronyms for modifications made to your Mustang (FRSC, for Ford Racing SuperCharger, for example)
  • Include your favorite driving road or road trip destination

Keep in mind that using the same password on all websites is not recommended; a password manager can help keep multiple Mustang-related passwords organized and easy-to-access.

At least they didn’t sue users for copyright infringement.

Posted on February 16, 2015 at 6:45 AM

New Book: Data and Goliath

After a year of talking about it, my new book is finally published.

This is the copy from the inside front flap:

You are under surveillance right now.

Your cell phone provider tracks your location and knows who’s with you. Your online and in-store purchasing patterns are recorded, and reveal if you’re unemployed, sick, or pregnant. Your e-mails and texts expose your intimate and casual friends. Google knows what you’re thinking because it saves your private searches. Facebook can determine your sexual orientation without you ever mentioning it.

The powers that surveil us do more than simply store this information. Corporations use surveillance to manipulate not only the news articles and advertisements we each see, but also the prices we’re offered. Governments use surveillance to discriminate, censor, chill free speech, and put people in danger worldwide. And both sides share this information with each other or, even worse, lose it to cybercriminals in huge data breaches.

Much of this is voluntary: we cooperate with corporate surveillance because it promises us convenience, and we submit to government surveillance because it promises us protection. The result is a mass surveillance society of our own making. But have we given up more than we’ve gained? In Data and Goliath, security expert Bruce Schneier offers another path, one that values both security and privacy. He shows us exactly what we can do to reform our government surveillance programs and shake up surveillance-based business models, while also providing tips for you to protect your privacy every day. You’ll never look at your phone, your computer, your credit cards, or even your car in the same way again.

And there’s a great quote on the cover:

“The public conversation about surveillance in the digital age would be a good deal more intelligent if we all read Bruce Schneier first.”—Malcolm Gladwell, author of David and Goliath

This is the table of contents:

Part 1: The World We’re Creating

Chapter 1: Data as a By-Product of Computing
Chapter 2: Data as Surveillance
Chapter 3: Analyzing Our Data
Chapter 4: The Business of Surveillance
Chapter 5: Government Surveillance and Control
Chapter 6: Consolidation of Institutional Surveillance

Part 2: What’s at Stake

Chapter 7: Political Liberty and Justice
Chapter 8: Commercial Fairness and Equality
Chapter 9: Business Competitiveness
Chapter 10: Privacy
Chapter 11: Security

Part 3: What to Do About It

Chapter 12: Principles
Chapter 13: Solutions for Government
Chapter 14: Solutions for Corporations
Chapter 15: Solutions for the Rest of Us
Chapter 16: Social Norms and the Big Data Trade-off

I’ve gotten some great responses from people who read the bound galley, and hope for some good reviews in mainstream publications. So far, there’s one review.

You can buy the book at Amazon, Amazon UK, Barnes & Noble, Powell’s, Book Depository, or IndieBound—which routes your purchase through a local independent bookseller. E-books are available on Amazon, B&N, Apple’s iBooks store, and Google Play.

And if you can, please write a review for Amazon, Goodreads, or anywhere else.

Posted on February 15, 2015 at 6:41 AM

Cryptography for Kids

Interesting National Science Foundation award:

In the proposed “CryptoClub” afterschool program, middle-grade students will explore cryptography while applying mathematics to make and break secret codes. The playfulness and mystery of the subject will be engaging to students, and the afterschool environment will allow them to learn at their own pace. Some activities will involve moving around, for example following a trail of encrypted clues to find a hidden treasure, or running back and forth in a relay race, competing to be the first to gather and decrypt the parts of a secret message. Other activities will involve sitting more quietly and thinking deeply about patterns that might help break a code. On the other hand, in the proposed CryptoClub Online approach, the CryptoClub Website will provide additional opportunities for applying and learning cryptography in a playful way. It currently includes cipher tools for encrypting and decrypting, message and joke boards where users decrypt messages or submit their own encrypted messages, historical comics about cryptography, and adventure games that involve secret messages.
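For flavor, here is the kind of cipher such a program starts with: a Caesar shift, in a short Python sketch (an illustration only; the NSF award text includes no code):

    import string

    def caesar(text, shift):
        """Shift each letter by `shift` places, wrapping around the
        alphabet; non-letters pass through unchanged."""
        upper = string.ascii_uppercase
        rotated = upper[shift % 26:] + upper[:shift % 26]
        return text.upper().translate(str.maketrans(upper, rotated))

    clue = caesar("THE TREASURE IS UNDER THE OAK", 3)  # "WKH WUHDVXUH..."
    print(caesar(clue, -3))  # decrypt by shifting back

Breaking this by trying all 25 shifts is exactly the sort of pattern-hunting the program describes.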

Posted on February 13, 2015 at 1:13 PM

Samsung Television Spies on Viewers

Earlier this week, we learned that Samsung televisions are eavesdropping on their owners. If you have one of their Internet-connected smart TVs, you can turn on a voice command feature that saves you the trouble of finding the remote, pushing buttons and scrolling through menus. But making that feature work requires the television to listen to everything you say. And what you say isn’t just processed by the television; it may be forwarded over the Internet for remote processing. It’s literally Orwellian.

This discovery surprised people, but it shouldn’t have. The things around us are increasingly computerized, and increasingly connected to the Internet. And most of them are listening.

Our smartphones and computers, of course, listen to us when we’re making audio and video calls. But the microphones are always there, and there are ways a hacker, government, or clever company can turn those microphones on without our knowledge. Sometimes we turn them on ourselves. If we have an iPhone, the voice-processing system Siri listens to us, but only when we push the iPhone’s button. Like Samsung, iPhones with the “Hey Siri” feature enabled listen all the time. So do Android devices with the “OK Google” feature enabled, and so does an Amazon voice-activated system called Echo. Facebook has the ability to turn your smartphone’s microphone on when you’re using the app.

Even if you don’t speak, your computers are paying attention. Gmail “listens” to everything you write, and shows you advertising based on it. Facebook does the same with everything you write on that platform, and even listens to the things you type but don’t post. It might feel as if you’re never alone. Skype doesn’t listen—we think—but as Der Spiegel notes, data from the service “has been accessible to the NSA’s snoops” since 2011.

So the NSA certainly listens. It listens directly, and it listens to all these companies listening to you. So do other countries like Russia and China, which we really don’t want listening so closely to their citizens.

It’s not just the devices that listen; most of this data is transmitted over the Internet. Samsung sends it to what was referred to as a “third party” in its policy statement. It later revealed that third party to be a company you’ve never heard of—Nuance—that turns the voice into text for it. Samsung promises that the data is erased immediately. Most of the other companies that are listening promise no such thing and, in fact, save your data for a long time. Governments, of course, save it, too.

This data is a treasure trove for criminals, as we are learning again and again as tens and hundreds of millions of customer records are repeatedly stolen. Last week, it was reported that hackers had accessed the personal records of some 80 million Anthem Health customers and others. Last year, it was Home Depot, JP Morgan, Sony and many others. Do we think Nuance’s security is better than that of any of these companies? I sure don’t.

At some level, we’re consenting to all this listening. A single sentence in Samsung’s 1,500-word privacy policy, the one most of us don’t read, stated: “Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.” Other services could easily come with a similar warning: Be aware that your e-mail provider knows what you’re saying to your colleagues and friends and be aware that your cell phone knows where you sleep and whom you’re sleeping with—assuming that you both have smartphones, that is.

The Internet of Things is full of listeners. Newer cars contain computers that record speed, steering wheel position, pedal pressure, even tire pressure—and insurance companies want to listen. And, of course, your cell phone records your precise location at all times you have it on—and possibly even when you turn it off. If you have a smart thermostat, it records your house’s temperature, humidity, ambient light and any nearby movement. Any fitness tracker you’re wearing records your movements and some vital signs; so do many computerized medical devices. Add security cameras and recorders, drones and other surveillance airplanes, and we’re being watched, tracked, measured and listened to almost all the time.

It’s the age of ubiquitous surveillance, fueled by both Internet companies and governments. And because it’s largely happening in the background, we’re not really aware of it.

This has to change. We need to regulate the listening: both what is being collected and how it’s being used. But that won’t happen until we know the full extent of surveillance: who’s listening and what they’re doing with it. Samsung buried its listening details in its privacy policy—it has since amended the policy to be clearer—and we’re only having this discussion because a Daily Beast reporter stumbled upon it. We need a more explicit conversation about the value of being able to speak freely in our living rooms without our televisions listening, or having e-mail conversations without Google or the government listening. Privacy is a prerequisite for free expression, and losing that would be an enormous blow to our society.

This essay previously appeared on CNN.com.

EDITED TO ADD (2/16): A German translation by Damian Weber.

Posted on February 13, 2015 at 7:01 AM

Programming No-Fly Zones into Drones

DJI is programming no-fly zones into its drone software.

Here’s how it’ll work. The update will add a list of GPS coordinates to the drone’s computer that tells it not to fly around the Washington D.C. area. When users are within a 15-mile restricted zone, the drone’s motors won’t spin up, preventing it from taking off.
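In code terms, that is just a distance test at power-on. Here is a minimal sketch in Python (assumed logic for illustration; not DJI’s actual firmware, coordinate list, or update format):

    import math

    RESTRICTED = [(38.9072, -77.0369)]  # e.g., Washington, D.C.
    RADIUS_MILES = 15.0

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in miles."""
        r = 3958.8  # mean Earth radius in miles
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dlon / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def motors_allowed(lat, lon):
        """Refuse motor spin-up inside any restricted zone."""
        return all(haversine_miles(lat, lon, zlat, zlon) > RADIUS_MILES
                   for zlat, zlon in RESTRICTED)

Note that the coordinate list and the check both live on hardware the drone’s owner physically controls, which is where the DRM comparison below comes from.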

If this sounds like digital rights management, it basically is. And it will fail in all the ways that DRM fails. Cory Doctorow has explained it all very well.

Posted on February 12, 2015 at 12:22 PM

Electronic Surveillance Failures Leading up to the 2008 Mumbai Terrorist Attacks

Long New York Times article based on “former American and Indian officials and classified documents disclosed by Edward J. Snowden” outlining the intelligence failures leading up to the 2008 Mumbai terrorist attacks:

Although electronic eavesdropping often yields valuable data, even tantalizing clues can be missed if the technology is not closely monitored, the intelligence gleaned from it is not linked with other information, or analysis does not sift incriminating activity from the ocean of digital data.

This seems to be the moral:

Although the United States computer arsenal plays a vital role against targets ranging from North Korea’s suspected assault on Sony to Russian cyberthieves and Chinese military hacking units, counterterrorism requires a complex mix of human and technical resources. Some former counterterrorism officials warn against promoting billion-dollar surveillance programs with the narrow argument that they stop attacks.

That monitoring collects valuable information, but large amounts of it are “never meaningfully reviewed or analyzed,” said Charles (Sam) Faddis, a retired C.I.A. counterterrorism chief. “I cannot remember a single instance in my career when we ever stopped a plot based purely on signals intelligence.”

[…]

Intelligence officials say that terror plots are often discernible only in hindsight, when a pattern suddenly emerges from what had been just bits of information. Whatever the reason, no one fully grasped the developing Mumbai conspiracy.

“They either weren’t looking or didn’t understand what it all meant,” said one former American official who had access to the intelligence and would speak only on the condition of anonymity. “There was a lot more noise than signal. There usually is.”

Posted on February 12, 2015 at 6:57 AM

National Academies Report on Bulk Intelligence Collection

In January, the National Academies of Science (NAS) released a report on the bulk collection of signals intelligence. Basically, a year previously President Obama tasked the Director of National Intelligence with assessing “the feasibility of creating software that would allow the Intelligence Community more easily to conduct target information acquisition rather than bulk collection.” The DNI asked the NAS to answer the question, and the result is this report.

The conclusion is about what you’d expect. From the NAS press release:

No software-based technique can fully replace the bulk collection of signals intelligence, but methods can be developed to more effectively conduct targeted collection and to control the usage of collected data, says a new report from the National Research Council. Automated systems for isolating collected data, restricting queries that can be made against those data, and auditing usage of the data can help to enforce privacy protections and allay some civil liberty concerns, the unclassified report says.

[…]

A key value of bulk collection is its record of past signals intelligence that may be relevant to subsequent investigations, the report notes. The committee was not asked to and did not consider whether the loss of effectiveness from reducing bulk collection would be too great, or whether the potential gain in privacy from adopting an alternative collection method is worth the potential loss of intelligence information. It did observe that other sources of information—for example, data held by third parties such as communications providers—might provide a partial substitute for bulk collection in some circumstances.

Right. The singular value of spying on everyone and saving all the data is that you can go back in time and use individual pieces of that data. There’s nothing that can substitute for that.

And what the report committee didn’t look at is very important. Here’s Herb Lin, cyber policy and security researcher and a staffer on this report:

…perhaps the most important point of the report is what it does not say. It concludes that giving up bulk surveillance entirely will entail some costs to national security, but it does not say that we should keep or abandon bulk surveillance. National security is an important national priority and so are civil liberties. We don’t do EVERYTHING we could do for national security—we accept some national security risks. And we don’t do everything we could do for civil liberties—we accept some reductions in civil liberties. Where, when, and under what circumstances we accept either—that’s the most important policy choice that the American people can make.

Just because something can be done does not mean that 1) it is effective, or 2) it should be done. There’s a lot of evidence that bulk collection is not valuable.

Here’s an overview of the report. And a news article. And the DNI press release.

Posted on February 9, 2015 at 6:16 AM

NSA Using Hacker Research and Results

In the latest article based on the Snowden documents, the Intercept is reporting that the NSA and GCHQ are piggy-backing on the work of hackers:

In some cases, the surveillance agencies are obtaining the content of emails by monitoring hackers as they breach email accounts, often without notifying the hacking victims of these breaches. “Hackers are stealing the emails of some of our targets…by collecting the hackers’ ‘take,’ we…get access to the emails themselves,” reads one top secret 2010 National Security Agency document.

Not surprising.

Posted on February 6, 2015 at 9:39 AM

Tracking Bitcoin Scams

Interesting paper: “There’s No Free Lunch, Even Using Bitcoin: Tracking the Popularity and Profits of Virtual Currency Scams,” by Marie Vasek and Tyler Moore.

Abstract: We present the first empirical analysis of Bitcoin-based scams: operations established with fraudulent intent. By amalgamating reports gathered by voluntary vigilantes and tracked in online forums, we identify 192 scams and categorize them into four groups: Ponzi schemes, mining scams, scam wallets and fraudulent exchanges. In 21% of the cases, we also found the associated Bitcoin addresses, which enables us to track payments into and out of the scams. We find that at least $11 million has been contributed to the scams from 13,000 distinct victims. Furthermore, we present evidence that the most successful scams depend on large contributions from a very small number of victims. Finally, we discuss ways in which the scams could be countered.

News article.

Posted on February 4, 2015 at 7:02 AM

Obama Says Terrorism Is Not an Existential Threat

In an interview this week, President Obama said that terrorism does not pose an existential threat:

What I do insist on is that we maintain a proper perspective and that we do not provide a victory to these terrorist networks by overinflating their importance and suggesting in some fashion that they are an existential threat to the United States or the world order. You know, the truth of the matter is that they can do harm. But we have the capacity to control how we respond in ways that do not undercut what’s the—you know, what’s the essence of who we are.

He said something similar in January.

On one hand, what he said is blindingly obvious; overinflating terrorism’s risks plays into the terrorists’ hands. Climate change is an existential threat. So are a comet hitting the earth, intelligent robots taking over the planet, and genetically engineered viruses. There are lots of existential threats to humanity, and we can argue about their feasibility and probability. But terrorism is not one of them. Even things that actually kill tens of thousands of people each year—car accidents, handguns, heart disease—are not existential threats.

But no matter how obvious this is, until recently it hasn’t been something that serious politicians have been able to say. When Vice President Biden said something similar last year, one commentary carried the headline “Truth or Gaffe?” In 2004, when presidential candidate John Kerry gave a common-sense answer to a question about the threat of terrorism, President Bush used those words in an attack ad. As far as I know, these comments by Obama and Biden are the first time major politicians are admitting that terrorism does not pose an existential threat and are not being pilloried for it.

Overreacting to the threat is still common, and exaggeration and fear still make good politics. But maybe now, a dozen years after 9/11, we can finally start having rational conversations about terrorism and security: what works, what doesn’t, what’s worth it, and what’s not.

Posted on February 3, 2015 at 6:15 AM

Texas School Overreaction

Seems that a Texas school has suspended a 9-year-old for threatening another student with a replica One Ring. (Yes, that One Ring.)

I’ve written about this sort of thing before:

These so-called zero-tolerance policies are actually zero-discretion policies. They’re policies that must be followed, no situational discretion allowed. We encounter them whenever we go through airport security: no liquids, gels or aerosols. Some workplaces have them for sexual harassment incidents; in some sports a banned substance found in a urine sample means suspension, even if it’s for a real medical condition. Judges have zero discretion when faced with mandatory sentencing laws: three strikes for drug offenses and you go to jail, mandatory sentencing for statutory rape (underage sex), etc. A national restaurant chain won’t serve hamburgers rare, even if you offer to sign a waiver. Whenever you hear “that’s the rule, and I can’t do anything about it”—and they’re not lying to get rid of you—you’re butting against a zero discretion policy.

These policies enrage us because they are blind to circumstance. Editorial after editorial denounced the suspensions of elementary school children for offenses that anyone with any common sense would agree were accidental and harmless. The Internet is filled with essays demonstrating how the TSA’s rules are nonsensical and sometimes don’t even improve security. I’ve written some of them. What we want is for those involved in the situations to have discretion.

However, problems with discretion were the reason behind these mandatory policies in the first place. Discretion is often applied inconsistently. One school principal might deal with knives in the classroom one way, and another principal another way. Your drug sentence could depend considerably on how sympathetic your judge is, or on whether she’s having a bad day.

My guess is that the school administration ended up trapped by its own policies, probably even believing that they were correctly being applied. You can hear that in this hearsay quote reported by the boy’s father:

Steward said the principal said threats to another child’s safety would not be tolerated – whether magical or not.

Slashdot thread. Reddit thread.

Posted on February 2, 2015 at 12:37 PM

Hiding a Morse Code Message in a Pop Song

In Colombia:

The team began experimenting with Morse code using various percussion instruments and a keyboard. They learned that operators skilled in Morse code can often read the signals at a rate of 40 words per minute—but played that fast, the beat would sound like a European dance track. “We discovered the magic number was 20,” says Portela. “You can fit approximately 20 Morse code words into a piece of music the length of a chorus, and it sounds okay.”

[…]

Portela says they played with the Morse code using Reason software, which gives each audio channel or instrument its own dedicated track. With a separate visual lane for certain elements, it was possible to match the code to the beat of the song—and, crucially, blend it in.

Hiding the Morse code took weeks, with constant back-and-forth with Col. Espejo and the military to make sure their men could understand the message. “It was difficult because Morse code is not a musical beat. Sometimes it was too obvious,” says Portela. “Other times the code was not understood. And we had to hide it three times in the song to make sure the message was received.”
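The “20 words per minute” figure is ordinary Morse arithmetic. Here is a sketch of the timing in Python (a reconstruction of the math only; the producers’ actual tooling was the Reason software mentioned above):

    MORSE = {"S": "...", "O": "---"}  # abbreviated table, enough for a demo

    def dot_seconds(wpm):
        """Standard "PARIS" timing: one word is 50 dot units, so a dot
        lasts 60 / (50 * wpm) = 1.2 / wpm seconds (60 ms at 20 WPM)."""
        return 1.2 / wpm

    def element_schedule(text, wpm=20):
        """Yield (tone_seconds, gap_seconds) for each dot or dash, with
        the standard one-dot gap inside letters and three between them."""
        dot = dot_seconds(wpm)
        for letter in text.upper():
            code = MORSE[letter]
            for i, symbol in enumerate(code):
                tone = dot if symbol == "." else 3 * dot
                gap = dot if i < len(code) - 1 else 3 * dot
                yield tone, gap

    for tone, gap in element_schedule("SOS"):
        print(f"tone {tone * 1000:.0f} ms, gap {gap * 1000:.0f} ms")

At 20 WPM a dot lasts 60 ms, which happens to be exactly a 32nd note at 125 beats per minute; that alignment with dance-track tempos is what lets the code ride inside a chorus without sounding out of place.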

Posted on February 2, 2015 at 7:01 AM
