Blog: July 2015 Archives

Schneier Speaking Schedule

I’m speaking at an Infoedge event at Bali Hai Golf Club in Las Vegas, at 5 pm on August 5, 2015.

I’m speaking at Def Con 23 on Friday, August 7, 2015.

I’m speaking—remotely via Skype—at LinuxCon in Seattle on August 18, 2015.

I’m speaking at CloudSec in Singapore on August 25, 2015.

I’m speaking at MindTheSec in São Paulo, Brazil, on August 27, 2015.

I’m speaking on the future of privacy at a public seminar sponsored by the Institute for Future Studies, in Stockholm, Sweden on September 21, 2015.

I’m speaking at Next Generation Threats 2015 in Stockholm, Sweden, on September 22, 2015.

I’m speaking at Next Generation Threats 2015 in Gothenburg, Sweden, on September 23, 2015.

I’m speaking at Free and Safe in Cyberspace in Brussels on September 24, 2015.

I’ll be on a panel at Privacy. Security. Risk. 2015 in Las Vegas on September 30, 2015.

I’m speaking at the Privacy + Security Forum, October 21-23, 2015, at The Marvin Center in Washington, DC.

I’m speaking at the Boston Book Festival on October 24, 2015.

I’m speaking at the 4th Annual Cloud Security Congress EMEA in Berlin on November 17, 2015.

Posted on July 31, 2015 at 2:21 PM • 14 Comments

HAMMERTOSS: New Russian Malware

FireEye has a detailed report of a sophisticated piece of Russian malware: HAMMERTOSS. It uses some clever techniques to hide:

The Hammertoss backdoor malware looks for a different Twitter handle each day—automatically prompted by a list generated by the tool—to get its instructions. If the handle it’s looking for is not registered that day, it merely returns the next day and checks for the Twitter handle designated for that day. If the account is active, Hammertoss searches for a tweet with a URL and hashtag, and then visits the URL.

That’s where a legit-looking image is grabbed and then opened by Hammertoss: the image contains encrypted instructions, which Hammertoss decrypts. The commands, which include instructions for obtaining files from the victim’s network, typically then lead the malware to send that stolen information to a cloud-based storage service.
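
FireEye hasn't published the actual handle-generation algorithm, but the general shape of the check-in loop is easy to sketch. Here is a minimal, purely illustrative Python sketch of the idea: a date-seeded handle generator plus one polling cycle. The derivation, the "acct_" prefix, and the two lookup callables (handle_exists, latest_tweet_with_hashtag) are hypothetical placeholders, not anything recovered from the malware.

import datetime
import hashlib

def handle_for(day: datetime.date) -> str:
    # Derive a deterministic, date-dependent handle (illustrative only;
    # HAMMERTOSS's real generator has not been published).
    digest = hashlib.sha256(day.isoformat().encode()).hexdigest()
    return "acct_" + digest[:10]

def check_today(handle_exists, latest_tweet_with_hashtag):
    # One polling cycle: if today's handle is registered, pull its latest
    # tweet and return the URL it points to; otherwise give up until tomorrow.
    # Both callables are hypothetical stand-ins for the Twitter lookups.
    handle = handle_for(datetime.date.today())
    if not handle_exists(handle):
        return None
    tweet = latest_tweet_with_hashtag(handle)
    if tweet is None:
        return None
    return tweet["url"]  # next step: fetch the image there and decrypt its hidden commands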

Another article. Reddit thread.

Posted on July 31, 2015 at 11:12 AM • 15 Comments

Backdoors Won't Solve Comey's Going Dark Problem

At the Aspen Security Forum two weeks ago, James Comey (and others) explicitly talked about the “going dark” problem, describing the specific scenario they are concerned about. Maybe others have heard the scenario before, but it was a first for me. It centers around ISIL operatives abroad and ISIL-inspired terrorists here in the US. The FBI knows who the Americans are, can get a court order to carry out surveillance on their communications, but cannot eavesdrop on the conversations, because they are encrypted. They can get the metadata, so they know who is talking to who, but they can’t find out what’s being said.

“ISIL’s M.O. is to broadcast on Twitter, get people to follow them, then move them to Twitter Direct Messaging” to evaluate if they are a legitimate recruit, he said. “Then they’ll move them to an encrypted mobile-messaging app so they go dark to us.”

[…]

The FBI can get court-approved access to Twitter exchanges, but not to encrypted communication, Comey said. Even when the FBI demonstrates probable cause and gets a judicial order to intercept that communication, it cannot break the encryption for technological reasons, according to Comey.

If this is what Comey and the FBI are actually concerned about, they’re getting bad advice—because their proposed solution won’t solve the problem. Comey wants communications companies to give them the capability to eavesdrop on conversations without the conversants’ knowledge or consent; that’s the “backdoor” we’re all talking about. But the problem isn’t that most encrypted communications platforms are securely encrypted, or even that some are—the problem is that there exists at least one securely encrypted communications platform on the planet that ISIL can use.

Imagine that Comey got what he wanted. Imagine that iMessage and Facebook and Skype and everything else US-made had his backdoor. The ISIL operative would tell his potential recruit to use something else, something secure and non-US-made. Maybe an encryption program from Finland, or Switzerland, or Brazil. Maybe Mujahedeen Secrets. Maybe anything. (Sure, some of these will have flaws, and they’ll be identifiable by their metadata, but the FBI already has the metadata, and the better software will rise to the top.) As long as there is something that the ISIL operative can move them to, some software that the American can download and install on their phone or computer, or hardware that they can buy from abroad, the FBI still won’t be able to eavesdrop.

And by pushing these ISIL operatives to non-US platforms, they lose access to the metadata they otherwise have.

Convincing US companies to install backdoors isn’t enough; in order to solve this going dark problem, the FBI has to ensure that an American can only use backdoored software. And the only way to do that is to prohibit the use of non-backdoored software, which is the sort of thing that the UK’s David Cameron said he wanted for his country in January:

But the question is are we going to allow a means of communications which it simply isn’t possible to read. My answer to that question is: no, we must not.

And that, of course, is impossible. Jonathan Zittrain explained why. And Cory Doctorow outlined what trying would entail:

For David Cameron’s proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you’ve downloaded hasn’t been tampered with.

[…]

This, then, is what David Cameron is proposing:

* All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept.

* Any firms within reach of the UK government must be banned from producing secure software.

* All major code repositories, such as Github and Sourceforge, must be blocked.

* Search engines must not answer queries about web-pages that carry secure software.

* Virtually all academic security work in the UK must cease—security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services.

* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped.

* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software.

* Anyone visiting the country from abroad must have their smartphones held at the border until they leave.

* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons.

* Free/open source operating systems—that power the energy, banking, ecommerce, and infrastructure sectors—must be banned outright.

As extreme as it reads, without all of that, the ISIL operative would be able to communicate securely with his potential American recruit. And all of this is not going to happen.

Last week, former NSA director Mike McConnell, former DHS secretary Michael Chertoff, and former deputy defense secretary William Lynn published a Washington Post op-ed opposing backdoors in encryption software. They wrote:

Today, with almost everyone carrying a networked device on his or her person, ubiquitous encryption provides essential security. If law enforcement and intelligence organizations face a future without assured access to encrypted communications, they will develop technologies and techniques to meet their legitimate mission goals.

I believe this is true. Already one is being talked about in the academic literature: lawful hacking.

Perhaps the FBI’s reluctance to accept this is based on their belief that all encryption software comes from the US, and therefore is under their influence. Back in the 1990s, during the first Crypto Wars, the US government had a similar belief. To convince them otherwise, George Washington University surveyed the cryptography market in 1999 and found that there were over 500 companies in 70 countries manufacturing or distributing non-US cryptography products. Maybe we need a similar study today.

This essay previously appeared on Lawfare.

Posted on July 31, 2015 at 6:08 AM • 88 Comments

Comparing the Security Practices of Experts and Non-Experts

New paper: “‘…no one can hack my mind’: Comparing Expert and Non-Expert Security Practices,” by Iulia Ion, Rob Reeder, and Sunny Consolvo.

Abstract: The state of advice given to people today on how to stay safe online has plenty of room for improvement. Too many things are asked of them, which may be unrealistic, time consuming, or not really worth the effort. To improve the security advice, our community must find out what practices people use and what recommendations, if messaged well, are likely to bring the highest benefit while being realistic to ask of people. In this paper, we present the results of a study which aims to identify which practices people do that they consider most important at protecting their security on-line. We compare self-reported security practices of non-experts to those of security experts (i.e., participants who reported having five or more years of experience working in computer security). We report on the results of two online surveys—one with 231 security experts and one with 294 MTurk participants—on what the practices and attitudes of each group are. Our findings show a discrepancy between the security practices that experts and non-experts report taking. For instance, while experts most frequently report installing software updates, using two-factor authentication and using a password manager to stay safe online, non-experts report using antivirus software, visiting only known websites, and changing passwords frequently.

Posted on July 30, 2015 at 2:21 PM • 25 Comments

Fugitive Located by Spotify

The latest in identification by data:

Webber said a tipster had spotted recent activity from Nunn on the Spotify streaming service and alerted law enforcement. He scoured the Internet for other evidence of Nunn and Barr’s movements, eventually filling out 12 search warrants for records at different technology companies. Those searches led him to an IP address that traced Nunn to Cabo San Lucas, Webber said.

Nunn, he said, had been avidly streaming television shows and children’s programs on various online services, giving the sheriff’s department a hint to the couple’s location.

Posted on July 29, 2015 at 1:43 PM • 14 Comments

Bizarre High-Tech Kidnapping

This is a story of a very high-tech kidnapping:

FBI court filings unsealed last week showed how Denise Huskins’ kidnappers used anonymous remailers, image sharing sites, Tor, and other people’s Wi-Fi to communicate with the police and the media, scrupulously scrubbing meta data from photos before sending. They tried to use computer spyware and a DropCam to monitor the aftermath of the abduction and had a Parrot radio-controlled drone standing by to pick up the ransom by remote control.

The story also demonstrates just how effective the FBI is at tracing cell phone usage these days. They had a blocked call from the kidnappers to the victim’s cell phone. First they served a search warrant on AT&T to get the actual calling number. After learning that it was an AT&T prepaid Tracfone, they called AT&T to find out where the burner was bought, what the serial numbers were, and the location where the calls were made from.

The FBI reached out to Tracfone, which was able to tell the agents that the phone was purchased from a Target store in Pleasant Hill on March 2 at 5:39 pm. Target provided the bureau with a surveillance-cam photo of the buyer: a white male with dark hair and medium build. AT&T turned over records showing the phone had been used within 650 feet of a cell site in South Lake Tahoe.

Here’s the criminal complaint. It borders on surreal. Were it an episode of CSI:Cyber, you would never believe it.

Posted on July 29, 2015 at 6:34 AM • 48 Comments

New RC4 Attack

New research: “All Your Biases Belong To Us: Breaking RC4 in WPA-TKIP and TLS,” by Mathy Vanhoef and Frank Piessens:

Abstract: We present new biases in RC4, break the Wi-Fi Protected Access Temporal Key Integrity Protocol (WPA-TKIP), and design a practical plaintext recovery attack against the Transport Layer Security (TLS) protocol. To empirically find new biases in the RC4 keystream we use statistical hypothesis tests. This reveals many new biases in the initial keystream bytes, as well as several new long-term biases. Our fixed-plaintext recovery algorithms are capable of using multiple types of biases, and return a list of plaintext candidates in decreasing likelihood.

To break WPA-TKIP we introduce a method to generate a large number of identical packets. This packet is decrypted by generating its plaintext candidate list, and using redundant packet structure to prune bad candidates. From the decrypted packet we derive the TKIP MIC key, which can be used to inject and decrypt packets. In practice the attack can be executed within an hour. We also attack TLS as used by HTTPS, where we show how to decrypt a secure cookie with a success rate of 94% using 9*2^27 ciphertexts. This is done by injecting known data around the cookie, abusing this using Mantin’s ABSAB bias, and brute-forcing the cookie by traversing the plaintext candidates. Using our traffic generation technique, we are able to execute the attack in merely 75 hours.
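
The paper's statistical machinery is far more involved, but the underlying phenomenon, single-byte biases in RC4's early keystream, is easy to observe empirically. Here is a small self-contained Python sketch (my own toy RC4, not the authors' code) that tallies the long-known bias of the second keystream byte toward zero over many random keys; an unbiased cipher would hit zero about 1/256 of the time, RC4 does so roughly twice as often.

import os
from collections import Counter

def rc4_keystream(key: bytes, n: int) -> bytes:
    # Textbook RC4: key-scheduling algorithm, then n bytes of PRGA output.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

# Tally the second keystream byte over many random 16-byte keys.
trials = 100_000
counts = Counter(rc4_keystream(os.urandom(16), 2)[1] for _ in range(trials))
print("P[second byte = 0] =", counts[0] / trials, "(uniform would be", 1 / 256, ")")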

News articles.

We need to deprecate the algorithm already.

Posted on July 28, 2015 at 12:09 PM • 12 Comments

Stagefright Vulnerability in Android Phones

The Stagefright vulnerability for Android phones is a bad one. It’s exploitable via a text message (the details depend on the particular phone’s automatic media-download settings), it runs at an elevated privilege (again, the severity depends on the particular phone—on some phones it’s full privilege), and it’s trivial to weaponize. Imagine a worm that infects a phone and then immediately sends a copy of itself to everyone on that phone’s contact list.

The worst part of this is that it’s an Android exploit, so most phones won’t be patched anytime soon—if ever. (The people who discovered the bug alerted Google in April. Google has sent patches to its phone manufacturer partners, but most of them have not sent the patch to Android phone users.)

Posted on July 28, 2015 at 6:37 AM • 57 Comments

Hacking Team's Purchasing of Zero-Day Vulnerabilities

This is an interesting article that looks at Hacking Team’s purchasing of zero-day (0day) vulnerabilities from a variety of sources:

Hacking Team’s relationships with 0day vendors date back to 2009 when they were still transitioning from their information security consultancy roots to becoming a surveillance business. They excitedly purchased exploit packs from D2Sec and VUPEN, but they didn’t find the high-quality client-side oriented exploits they were looking for. Their relationship with VUPEN continued to frustrate them for years. Towards the end of 2012, CitizenLab released their first report on Hacking Team’s software being used to repress activists in the United Arab Emirates. However, a continuing stream of negative reports about the use of Hacking Team’s software did not materially impact their relationships. In fact, by raising their profile these reports served to actually bring Hacking Team direct business. In 2013 Hacking Team’s CEO stated that they had a problem finding sources of new exploits and urgently needed to find new vendors and develop in-house talent. That same year they made multiple new contacts, including Netragard, Vitaliy Toropov, Vulnerabilities Brokerage International, and Rosario Valotta. Though Hacking Team’s internal capabilities did not significantly improve, they continued to develop fruitful new relationships. In 2014 they began a close partnership with Qavar Security.

Lots of details in the article. This was made possible by the organizational doxing of Hacking Team by some unknown individuals or group.

Posted on July 27, 2015 at 6:17 AM • 29 Comments

Friday Squid Blogging: How a Squid Changes Color

The California market squid, Doryteuthis opalescens, can manipulate its color in a variety of ways:

Reflectins are aptly-named proteins unique to the light-sensing tissue of cephalopods like squid. Their skin contains specialized cells called iridocytes that produce color by reflecting light in a predictable way. When the neurotransmitter acetylcholine activates reflectin proteins, this triggers the contraction and expansion of deep pleats in the cell membrane of iridocytes. By turning enzymes on and off, this process adjusts (or tunes) the brightness and color of the light that’s reflected.

Interesting details in the article and the paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on July 24, 2015 at 4:18 PM • 176 Comments

How an Amazon Worker Stole iPads

A worker in Amazon’s packaging department in India figured out how to deliver electronics to himself:

“Since he was employed with the packaging department, he had easy access to order numbers. Using the order numbers, he packed his order himself; but instead of putting pressure cookers in the box, he stuffed it with iPhones, iPads, watches, cameras, and other expensive electronics in the pressure cooker box. Before dispatching the order, the godown also has a mechanism to weigh the package. To dodge this, Bhamble stuffed equipment of equivalent weight,” an officer from Vithalwadi police station said. Bhamble confessed to the cops that he had ordered pressure cookers thrice in the last 15 days. After he placed the order, instead of, say, packing a five-kg pressure cooker, he would stuff gadgets of equivalent weight. After receiving delivery clearance, he would then deliver the goods himself and store it at his house. Speaking to mid-day, Deputy Commissioner of Police (Zone IV) Vasant Jadhav said, “Bhamble’s job profile was of goods packaging at Amazon.com’s warehouse in Bhiwandi.”

Posted on July 24, 2015 at 12:49 PM • 31 Comments

Remotely Hacking a Car While It's Driving

This is a big deal. Hackers can remotely hack the Uconnect system in cars just by knowing the car’s IP address. They can disable the brakes, turn on the AC, blast music, and disable the transmission:

The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. They demonstrated as much on the same day as my traumatic experience on I-64; after narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment.

Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep’s brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they’re working on perfecting their steering control—for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep’s GPS coordinates, measure its speed, and even drop pins on a map to trace its route.

In related news, there’s a Senate bill to improve car security standards. Honestly, I’m not sure our security technology is enough to prevent this sort of thing if the car’s controls are attached to the Internet.

EDITED TO ADD (8/14): More articles.

Posted on July 23, 2015 at 6:17 AM • 95 Comments

Malcolm Gladwell on Competing Security Models

In this essay/review of a book on UK intelligence officer and Soviet spy Kim Philby, Malcolm Gladwell makes this interesting observation:

Here we have two very different security models. The Philby-era model erred on the side of trust. I was asked about him, and I said I knew his people. The “cost” of the high-trust model was Burgess, Maclean, and Philby. To put it another way, the Philbyian secret service was prone to false-negative errors. Its mistake was to label as loyal people who were actually traitors.

The Wright model erred on the side of suspicion. The manufacture of raincoats is a well-known cover for Soviet intelligence operations. But that model also has a cost. If you start a security system with the aim of catching the likes of Burgess, Maclean, and Philby, you have a tendency to make false-positive errors: you label as suspicious people and events that are actually perfectly normal.

Posted on July 21, 2015 at 6:51 AM • 37 Comments

Organizational Doxing of Ashley Madison

The—depending on who is doing the reporting—cheating, affair, adultery, or infidelity site Ashley Madison has been hacked. The hackers are threatening to expose all of the company’s documents, including internal e-mails and details of its 37 million customers. Brian Krebs writes about the hackers’ demands.

According to the hackers, although the “full delete” feature that Ashley Madison advertises promises “removal of site usage history and personally identifiable information from the site,” users’ purchase details—including real name and address—aren’t actually scrubbed.

“Full Delete netted ALM $1.7mm in revenue in 2014. It’s also a complete lie,” the hacking group wrote. “Users almost always pay with credit card; their purchase details are not removed as promised, and include real name and address, which is of course the most important information the users want removed.”

Their demands continue:

“Avid Life Media has been instructed to take Ashley Madison and Established Men offline permanently in all forms, or we will release all customer records, including profiles with all the customers’ secret sexual fantasies and matching credit card transactions, real names and addresses, and employee documents and emails. The other websites may stay online.”

Established Men is another of the company’s sites; this one is designed to link wealthy men with young and pretty women.

This is yet another instance of organizational doxing:

Dumping an organization’s secret information is going to become increasingly common as individuals realize its effectiveness for whistleblowing and revenge. While some hackers will use journalists to separate the news stories from mere personal information, not all will.

EDITED TO ADD (7/22): I don’t believe they have 37 million users. This type of service will only appeal to a certain socio-economic demographic, and it’s not equivalent to 10% of the US population.

This page claims that 20% of the population of Ottawa is registered: 189,000 people. Given that 25% of the population are children, that works out to roughly 27% of the adult population. I just don’t believe it.

Posted on July 20, 2015 at 3:15 PM • 63 Comments

Google's Unguessable URLs

Google secures photos using public but unguessable URLs:

So why is that public URL more secure than it looks? The short answer is that the URL is working as a password. Photos URLs are typically around 40 characters long, so if you wanted to scan all the possible combinations, you’d have to work through 10^70 different combinations to get the right one, a problem on an astronomical scale. “There are enough combinations that it’s considered unguessable,” says Aravind Krishnaswamy, an engineering lead on Google Photos. “It’s much harder to guess than your password.”

It’s a perfectly valid security measure, although unsettling to some.
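
The mechanism is easy to reproduce: a long random token in the URL acts as a bearer password. A short sketch with Python's standard secrets module (the domain is a placeholder) shows where numbers like 10^70 come from.

import math
import secrets

# 30 random bytes encode to 40 URL-safe base64 characters, so the key space
# is 64**40, roughly 10^72: the "astronomical scale" described above.
token = secrets.token_urlsafe(30)
keyspace = 64 ** len(token)
print(f"https://example.invalid/photo/{token}")
print(f"{len(token)} characters, keyspace about 10^{math.log10(keyspace):.0f}")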

Posted on July 20, 2015 at 5:25 AM • 73 Comments

Using Secure Chat

Micah Lee has a good tutorial on installing and using secure chat.

To recap: We have installed Orbot and connected to the Tor network on Android, and we have installed ChatSecure and created an anonymous secret identity Jabber account. We have added a contact to this account, started an encrypted session, and verified that their OTR fingerprint is correct. And now we can start chatting with them with an extraordinarily high degree of privacy.

FBI Director James Comey, UK Prime Minister David Cameron, and totalitarian governments around the world all don’t want you to be able to do this.

Posted on July 17, 2015 at 6:35 AM • 43 Comments

Crypto-Gram Is Moving

If you subscribe to my monthly e-mail newsletter, Crypto-Gram, you need to read this.

Sometime between now and the August issue, the Crypto-Gram mailing list will be moving to a new host. When the move happens, you’ll get an e-mail asking you to confirm your subscription. In the e-mail will be a link that you will have to click in order to join the new list. The link will go to dreamhost.com—that’s the new host—not to schneier.com. It’s just the one click, and you won’t be asked for any additional information.

(Yes, I am asking you all to click on a link you’ve received in e-mail. The fact that I’m writing about this in Crypto-Gram and posting about it on this blog is the best confirmation I can provide.)

If for any reason you don’t want to receive Crypto-Gram anymore, just don’t click the confirmation link, and you’ll automatically drop off the list.

I’ll post updates on the status of the move on the main list page.

Posted on July 15, 2015 at 2:15 AM • 23 Comments

Human and Technology Failures in Nuclear Facilities

This is interesting:

We can learn a lot about the potential for safety failures at US nuclear plants from the July 29, 2012, incident in which three religious activists broke into the supposedly impregnable Y-12 facility at Oak Ridge, Tennessee, the Fort Knox of uranium. Once there, they spilled blood and spray painted “work for peace not war” on the walls of a building housing enough uranium to build thousands of nuclear weapons. They began hammering on the building with a sledgehammer, and waited half an hour to be arrested. If an 82-year-old nun with a heart condition and two confederates old enough to be AARP members could do this, imagine what a team of determined terrorists could do.

[…]

Where some other countries often rely more on guards with guns, the United States likes to protect its nuclear facilities with a high-tech web of cameras and sensors. Under the Nunn-Lugar program, Washington has insisted that Russia adopt a similar approach to security at its own nuclear sites­—claiming that an American cultural preference is objectively superior. The Y-12 incident shows the problem with the American approach of automating security. At the Y-12 facility, in addition to the three fences the protestors had to cut through with wire-cutters, there were cameras and motion detectors. But we too easily forget that technology has to be maintained and watched to be effective. According to Munger, 20 percent of the Y-12 cameras were not working on the night the activists broke in. Cameras and motion detectors that had been broken for months had gone unrepaired. A security guard was chatting rather than watching the feed from a camera that did work. And guards ignored the motion detectors, which were so often set off by local wildlife that they assumed all alarms were false positives….

Instead of having government forces guard the site, the Department of Energy had hired two contractors: Wackenhut and Babcock and Wilcox. Wackenhut is now owned by the British company G4S, which also botched security for the 2012 London Olympics, forcing the British government to send 3,500 troops to provide security that the company had promised but proved unable to deliver. Private companies are, of course, driven primarily by the need to make a profit, but there are surely some operations for which profit should not be the primary consideration.

Babcock and Wilcox was supposed to maintain the security equipment at the Y-12 site, while Wackenhut provided the guards. Poor communication between the two companies was one reason sensors and cameras were not repaired. Furthermore, Babcock and Wilcox had changed the design of the plant’s Highly Enriched Uranium Materials Facility, making it a more vulnerable aboveground building, in order to cut costs. And Wackenhut was planning to lay off 70 guards at Y-12, also to cut costs.

There’s an important lesson here. Security is a combination of people, process, and technology. All three have to be working in order for security to work.

Slashdot thread.

Posted on July 14, 2015 at 5:53 AM • 48 Comments

High-tech Cheating on Exams

India is cracking down on people who use technology to cheat on exams:

Candidates have been told to wear light clothes with half-sleeves, and shirts that do not have big buttons.

They cannot wear earrings and carry calculators, pens, handbags and wallets.

Shoes have also been discarded in favour of open slippers.

In India students cheating in exams have been often found concealing Bluetooth devices and mobile SIM cards that have been stitched to their shirts.

I haven’t heard much about this sort of thing in the US or Europe, but I assume it’s happening there too.

Posted on July 10, 2015 at 12:44 PM • 57 Comments

Organizational Doxing

Recently, WikiLeaks began publishing over half a million previously secret cables and other documents from the Foreign Ministry of Saudi Arabia. It’s a huge trove, and already reporters are writing stories about the highly secretive government.

What Saudi Arabia is experiencing isn’t common but part of a growing trend.

Just last week, unknown hackers broke into the network of the cyber-weapons arms manufacturer Hacking Team and published 400 gigabytes of internal data, describing, among other things, its sale of Internet surveillance software to totalitarian regimes around the world.

Last year, hundreds of gigabytes of Sony’s sensitive data was published on the Internet, including executive salaries, corporate emails and contract negotiations. The attacker in this case was the government of North Korea, which was punishing Sony for producing a movie that made fun of its leader. In 2011, the U.S. cyberweapons arms manufacturer HBGary Federal was a victim, and its attackers were members of a loose hacker collective called LulzSec.

Edward Snowden stole a still-unknown number of documents from the National Security Agency in 2013 and gave them to reporters to publish. Chelsea Manning stole three-quarters of a million documents from the U.S. State Department and gave them to WikiLeaks to publish. The person who stole the Saudi Arabian documents might also be a whistleblower and insider but is more likely a hacker who wanted to punish the kingdom.

Organizations are increasingly getting hacked, and not by criminals wanting to steal credit card numbers or account information in order to commit fraud, but by people intent on stealing as much data as they can and publishing it. Law professor and privacy expert Peter Swire refers to “the declining half-life of secrets.” Secrets are simply harder to keep in the information age. This is bad news for all of us who value our privacy, but there’s a hidden benefit when it comes to organizations.

The decline of secrecy means the rise of transparency. Organizational transparency is vital to any open and free society.

Open government laws and freedom of information laws let citizens know what the government is doing, and enable them to carry out their democratic duty to oversee its activities. Corporate disclosure laws perform similar functions in the private sphere. Of course, both corporations and governments have some need for secrecy, but the more they can be open, the more we can knowledgeably decide whether to trust them.

This makes the debate more complicated than simple personal privacy. Publishing someone’s private writings and communications is bad, because in a free and diverse society people should have private space to think and act in ways that would embarrass them if public.

But organizations are not people and, while there are legitimate trade secrets, their information should otherwise be transparent. Holding government and corporate private behavior to public scrutiny is good.

Most organizational secrets are only valuable for a short term: negotiations, new product designs, earnings numbers before they’re released, patents before filing, and so on.

Forever secrets, like the formula for Coca-Cola, are few and far between. The one exception is embarrassments. If an organization had to assume that anything it did would become public in a few years, people within that organization would behave differently.

The NSA would have had to weigh its collection programs against the possibility of public scrutiny. Sony would have had to think about how it would look to the world if it paid its female executives significantly less than its male executives. HBGary would have thought twice before launching an intimidation campaign against a journalist it didn’t like, and Hacking Team wouldn’t have lied to the UN about selling surveillance software to Sudan. Even the government of Saudi Arabia would have behaved differently. Such embarrassment might be the first significant downside of hiring a psychopath as CEO.

I don’t want to imply that this forced transparency is a good thing, though. The threat of disclosure chills all speech, not just illegal, embarrassing, or objectionable speech. There will be less honest and candid discourse. People in organizations need the freedom to write and say things that they wouldn’t want to be made public.

State Department officials need to be able to describe foreign leaders, even if their descriptions are unflattering. Movie executives need to be able to say unkind things about their movie stars. If they can’t, their organizations will suffer.

With few exceptions, our secrets are stored on computers and networks vulnerable to hacking. It’s much easier to break into networks than it is to secure them, and large organizational networks are very complicated and full of security holes. Bottom line: If someone sufficiently skilled, funded and motivated wants to steal an organization’s secrets, they will succeed. This includes hacktivists (HBGary Federal, Hacking Team), foreign governments (Sony), and trusted insiders (State Department and NSA).

It’s not likely that your organization’s secrets will be posted on the Internet for everyone to see, but it’s always a possibility.

Dumping an organization’s secret information is going to become increasingly common as individuals realize its effectiveness for whistleblowing and revenge. While some hackers will use journalists to separate the news stories from mere personal information, not all will.

Both governments and corporations need to assume that their secrets are more likely to be exposed, and exposed sooner, than ever. They should do all they can to protect their data and networks, but have to realize that their best defense might be to refrain from doing things that don’t look good on the front pages of the world’s newspapers.

This essay previously appeared on CNN.com. I didn’t use the term “organizational doxing,” though, because it would be too unfamiliar to that audience.

EDITED TO ADD: This essay has been translated into German.

Posted on July 10, 2015 at 4:32 AM • 46 Comments

The Risks of Mandating Backdoors in Encryption Products

Tuesday, a group of cryptographers and security experts released a major paper outlining the risks of government-mandated back-doors in encryption products: Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications, by Hal Abelson, Ross Anderson, Steve Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Matthew Green, Susan Landau, Peter Neumann, Ron Rivest, Jeff Schiller, Bruce Schneier, Michael Specter, and Danny Weitzner.

Abstract: Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels going dark, these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates. We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse forward secrecy design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
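
The abstract's point about forward secrecy is concrete: modern protocols derive each session key from ephemeral key pairs that are discarded after use, so there is no stored private key an access mandate could demand later. Here is a minimal sketch of that pattern using the pyca/cryptography library's X25519 primitives; it illustrates the design practice the report refers to, not any particular product's implementation.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh key pair that lives only for this session.
alice_eph = X25519PrivateKey.generate()
bob_eph = X25519PrivateKey.generate()

# Only the public halves are exchanged; both sides derive the same secret.
shared = alice_eph.exchange(bob_eph.public_key())
assert shared == bob_eph.exchange(alice_eph.public_key())

session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"demo handshake").derive(shared)

# Forward secrecy comes from throwing the ephemeral private keys away: once
# they are gone, recorded ciphertext cannot be decrypted later, which is the
# property exceptional-access mandates would force designers to give up.
del alice_eph, bob_eph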

It’s already had a big impact on the debate. It was mentioned several times during yesterday’s Senate hearing on the issue (see here).

Three blog posts by authors. Four different news articles, and this analysis of how the New York Times article changed. Also, a New York Times editorial.

EDITED TO ADD (7/9): Peter Swire’s Senate testimony is worth reading.

EDITED TO ADD (7/10): Good article on these new crypto wars.

EDITED TO ADD (7/14): Two rebuttals, neither very convincing.

Posted on July 9, 2015 at 6:31 AM • 101 Comments

Amazon Is Analyzing the Personal Relationships of Its Reviewers

This is an interesting story of a reviewer who had her review deleted because Amazon believed she knew the author personally.

Leaving completely aside the ethics of friends reviewing friends’ books, what is Amazon doing conducting this kind of investigative surveillance? Do reviewers know that Amazon is keeping tabs on who their friends are?

Posted on July 8, 2015 at 6:36 AM • 58 Comments

More on Hacking Team

Read this:

Hacking Team asked its customers to shut down operations, but according to one of the leaked files, as part of Hacking Team’s “crisis procedure,” it could have killed their operations remotely. The company, in fact, has “a backdoor” into every customer’s software, giving it ability to suspend it or shut it down­—something that even customers aren’t told about.

To make matters worse, every copy of Hacking Team’s Galileo software is watermarked, according to the source, which means Hacking Team, and now everyone with access to this data dump, can find out who operates it and who they’re targeting with it.

It’s one thing to have dissatisfied customers. It’s another to have dissatisfied customers with death squads. I don’t think the company is going to survive this.

Posted on July 7, 2015 at 5:30 PM • 63 Comments

More about the NSA's XKEYSCORE

I’ve been reading through the 48 classified documents about the NSA’s XKEYSCORE system released by the Intercept last week. From the article:

The NSA’s XKEYSCORE program, first revealed by The Guardian, sweeps up countless people’s Internet searches, emails, documents, usernames and passwords, and other private communications. XKEYSCORE is fed a constant flow of Internet traffic from fiber optic cables that make up the backbone of the world’s communication network, among other sources, for processing. As of 2008, the surveillance system boasted approximately 150 field sites in the United States, Mexico, Brazil, United Kingdom, Spain, Russia, Nigeria, Somalia, Pakistan, Japan, Australia, as well as many other countries, consisting of over 700 servers.

These servers store “full-take data” at the collection sites—meaning that they captured all of the traffic collected—and, as of 2009, stored content for 3 to 5 days and metadata for 30 to 45 days. NSA documents indicate that tens of billions of records are stored in its database. “It is a fully distributed processing and query system that runs on machines around the world,” an NSA briefing on XKEYSCORE says. “At field sites, XKEYSCORE can run on multiple computers that gives it the ability to scale in both processing power and storage.”

There seem to be no access controls at all restricting how analysts can use XKEYSCORE. Standing queries—called “workflows”—and new fingerprints have an approval process, presumably for load issues, but individual queries are not approved beforehand; they may only be audited after the fact. These queries are supposed to be low latency, and you can’t have an approval process for low-latency analyst queries. Since a query can get at the recorded raw data, a single query is effectively a retrospective wiretap.

All this means that the Intercept is correct when it writes:

These facts bolster one of Snowden’s most controversial statements, made in his first video interview published by The Guardian on June 9, 2013. “I, sitting at my desk,” said Snowden, could “wiretap anyone, from you or your accountant, to a federal judge to even the president, if I had a personal email.”

You’ll only get the data if it’s in the NSA’s databases, but if it is there you’ll get it.

Honestly, there’s not much in these documents that’s a surprise to anyone who studied the 2013 XKEYSCORE leaks and knows what can be done with a highly customizable Intrusion Detection System. But it’s always interesting to read the details.

One document—“Intro to Context Sensitive Scanning with X-KEYSCORE Fingerprints” (2010)—talks about some of the queries an analyst can run. A sample scenario: “I want to look for people using Mojahedeen Secrets encryption from an iPhone” (page 6).

Mujahedeen Secrets is an encryption program written by al Qaeda supporters. It has been around since 2007. Last year, Stuart Baker cited its increased use as evidence that Snowden harmed America. I thought the opposite, that the NSA benefits from al Qaeda using this program. I wrote: “There’s nothing that screams ‘hack me’ more than using specially designed al Qaeda encryption software.”

And now we see how it’s done. In the document, we read about the specific XKEYSCORE queries an analyst can use to search for traffic encrypted by Mujahedeen Secrets. Here are some of the program’s fingerprints (page 10):

encryption/mojahaden2
encryption/mojahaden2/encodedheader
encryption/mojahaden2/hidden
encryption/mojahaden2/hidden2
encryption/mojahaden2/hidden44
encryption/mojahaden2/secure_file_cendode
encryption/mojahaden2/securefile

So if you want to search for all iPhone users of Mujahedeen Secrets (page 33):

fingerprint(‘demo/scenario4’)=

fingerprint(‘encryption/mojahdeen2’) and fingerprint(‘browser/cellphone/iphone’)

Or you can search for the program’s use in the encrypted text, because (page 37): “…many of the CT Targets are now smart enough not to leave the Mojahedeen Secrets header in the E-mails they send. How can we detect that the E-mail (which looks like junk) is in fact Mojahedeen Secrets encrypted text.” Summary of the answer: there are lots of ways to detect the use of this program that users can’t detect. And you can combine the use of Mujahedeen Secrets with other identifiers to find targets. For example, you can specifically search for the program’s use in extremist forums (page 9). (Note that the NSA wrote that comment about Mujahedeen Secrets users increasing their opsec in 2010, two years before Snowden supposedly told them that the NSA was listening in on their communications. Honestly, I would not be surprised if the program turned out to have been a US operation to get Islamic radicals to make their traffic stand out more easily.)
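
The real fingerprints are written in XKEYSCORE's own rule language, as in the snippet above, but the underlying idea, flagging traffic whose structure betrays a specific and rare encryption tool even when the obvious header is stripped, can be sketched with a few crude heuristics. Everything below (the header pattern, the thresholds) is invented for illustration and is not drawn from the NSA documents.

import base64
import binascii
import math
import re

TOOL_HEADER = re.compile(r"#---Begin .* Encrypted Message---", re.I)  # hypothetical header pattern
B64_BLOB = re.compile(r"[A-Za-z0-9+/=\s]{400,}")

def entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte; ciphertext sits near 8.0.
    if not data:
        return 0.0
    total = len(data)
    return -sum(data.count(b) / total * math.log2(data.count(b) / total) for b in set(data))

def looks_like_specialty_crypto(message: str) -> bool:
    # Crude "fingerprint": an explicit tool header, or a long base64 blob
    # that decodes to near-random bytes (junk-looking ciphertext).
    if TOOL_HEADER.search(message):
        return True
    for blob in B64_BLOB.findall(message):
        try:
            raw = base64.b64decode("".join(blob.split()), validate=True)
        except (binascii.Error, ValueError):
            continue
        if len(raw) >= 1024 and entropy(raw) > 7.5:
            return True
    return False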

It’s not just Mujahedeen Secrets. Nicholas Weaver explains how you can use XKEYSCORE to identify co-conspirators who are all using PGP.

And these searches are just one example. Other examples from the documents include:

  • “Targets using mail.ru from a behind a large Iranian proxy” (here, page 7).
  • Usernames and passwords of people visiting gov.ir (here, page 26 and following).
  • People in Pakistan visiting certain German-language message boards (here, page 1).
  • HTTP POST traffic from Russia in the middle of the night—useful for finding people trying to steal our data (here, page 16).
  • People doing web searches on jihadist topics from Kabul (here).

E-mails, chats, web-browsing traffic, pictures, documents, voice calls, webcam photos, web searches, advertising analytics traffic, social media traffic, botnet traffic, logged keystrokes, file uploads to online services, Skype sessions and more: if you can figure out how to form the query, you can ask XKEYSCORE for it. For an example of how complex the searches can be, look at this XKEYSCORE query published in March, showing how New Zealand used the system to spy on the World Trade Organization: automatically track any email body with any particular WTO-related content for the upcoming election. (Good new documents to read include this, this, and this.)

I always read these NSA documents with an assumption that other countries are doing the same thing. The NSA is not made of magic, and XKEYSCORE is not some super-advanced NSA-only technology. It is the same sort of thing that every other country would use with its surveillance data. For example, Russia explicitly requires ISPs to install similar monitors as part of its SORM Internet surveillance system. As a home user, you can build your own XKEYSCORE using the public-domain Bro Security Monitor and the related Network Time Machine attached to a back-end data-storage system. (Lawrence Berkeley National Laboratory uses this system to store three months’ worth of Internet traffic for retrospective surveillance—it used the data to study Heartbleed.) The primary advantage the NSA has is that it sees more of the Internet than anyone else, and spends more money to store the data it intercepts for longer than anyone else. And if these documents explain XKEYSCORE in 2009 and 2010, expect that it’s much more powerful now.
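
Bro (now Zeek) does the heavy lifting in the home-built setup described above. As a much smaller illustration of the same store-everything-then-query-later shape, here is a sketch that reads a saved capture with scapy and builds a tiny searchable index of connection metadata. The file name and the queried address are placeholders; this is a toy, nowhere near Bro's capability.

from collections import defaultdict
from scapy.all import IP, TCP, rdpcap  # pip install scapy

def index_pcap(path: str):
    # Toy retrospective index: (destination IP, destination port) -> packets seen.
    index = defaultdict(list)
    for pkt in rdpcap(path):
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            key = (pkt[IP].dst, pkt[TCP].dport)
            index[key].append({"src": pkt[IP].src,
                               "time": float(pkt.time),
                               "size": len(pkt)})
    return index

if __name__ == "__main__":
    idx = index_pcap("traffic.pcap")  # placeholder capture file
    # Retrospective query: who talked to port 443 on this host, and when?
    for rec in idx.get(("192.0.2.10", 443), []):
        print(rec["time"], rec["src"], rec["size"], "bytes")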

Back to encryption and Mujahedeen Secrets. If you want to stay secure, whether you’re trying to evade surveillance by Russia, China, the NSA, criminals intercepting large amounts of traffic, or anyone else, try not to stand out. Don’t use some homemade specialized cryptography that can be easily identified by a system like this. Use reasonably strong encryption software on a reasonably secure device. If you trust Apple’s claims (pages 35-6), use iMessage and FaceTime on your iPhone. I really like Moxie Marlinspike’s Signal for both text and voice, but worry that it’s too obvious because it’s still rare. Ubiquitous encryption is the bane of listeners worldwide, and it’s the best thing we can deploy to make the world safer.

Posted on July 7, 2015 at 6:38 AM • 134 Comments

Hacking Team Is Hacked

Someone hacked the cyberweapons arms manufacturer Hacking Team and posted 400 GB of internal company data.

Hacking Team is a pretty sleazy company, selling surveillance software to all sorts of authoritarian governments around the world. Reporters Without Borders calls it one of the enemies of the Internet. Citizen Lab has published many reports about their activities.

It’s a huge trove of data, including a spreadsheet listing every government client, when they first bought the surveillance software, and how much money they have paid the company to date. Not surprisingly, the company has been lying about who its customers are. Chris Soghoian has been going through the data and tweeting about it. More Twitter comments on the data here. Here are articles from Wired and The Guardian.

Here’s the torrent, if you want to look at the data yourself. (Here’s another mirror.) The source code is up on Github.

I expect we’ll be sifting through all the data for a while.

Slashdot thread. Hacker News thread.

EDITED TO ADD: The Hacking Team CEO, David Vincenzetti, doesn’t like me:

In another [e-mail], the Hacking Team CEO on 15 May claimed renowned cryptographer Bruce Schneier was “exploiting the Big Brother is Watching You FUD (Fear, Uncertainty and Doubt) phenomenon in order to sell his books, write quite self-promoting essays, give interviews, do consulting etc. and earn his hefty money.”

Meanwhile, Hacking Team has told all of its customers to shut down all uses of its software. They are in “full on emergency mode,” which is perfectly understandable.

EDITED TO ADD: Hacking Team had no exploits for an un-jail-broken iPhone. Seems like the platform of choice if you want to stay secure.

EDITED TO ADD (7/14): WikiLeaks has published a huge trove of e-mails.

Hacking Team had a signed iOS certificate, which has been revoked.

Posted on July 6, 2015 at 12:53 PM • 91 Comments

NSA German Intercepts

On Friday, WikiLeaks published three summaries of NSA intercepts of German government communications. To me, the most interesting thing is not the intercept analyses, but this spreadsheet of intelligence targets. Here we learn the specific telephone numbers being targeted, who owns those phone numbers, the office within the NSA that processes the raw communications received, why the target is being spied on (in this case, all are designated as “Germany: Political Affairs”), and when we started spying using this particular justification. It’s one of the few glimpses we have into the bureaucracy of surveillance.

Presumably this is from the same leaker who gave WikiLeaks the French intercepts they published a week ago. (And you can read the intelligence target spreadsheet for France, too. And another for Brazil that WikiLeaks published on Saturday; Intercept commentary here.) Now that we’ve seen a few top secret summaries of eavesdropping on German, French, and Brazilian communications, and given what I know of Julian Assange’s tactics, my guess is that there is a lot more where this came from.

Der Spiegel is all over this story.

Posted on July 6, 2015 at 5:13 AM • 32 Comments

Rabbit Beating Up Snake

It’s the Internet, which means there must be cute animal videos on this blog. But this one is different. Watch a mother rabbit beat up a snake to protect her children. It’s impressive the way she keeps attacking the snake until it is far away from her nest, but I worry that she doesn’t know enough to grab the snake by the neck. Maybe there just aren’t any venomous snakes around those parts.

Posted on July 3, 2015 at 12:13 PM • 59 Comments

Evidence Shows Data Breaches Not Increasing

This is both interesting and counterintuitive:

Our results suggest that publicly reported data breaches in the U.S. have not increased significantly over the past ten years, either in frequency or in size. Because the distribution of breach sizes is heavy-tailed, large (rare) events occur more frequently than intuition would suggest. This helps to explain why many reports show massive year-to-year increases in both the aggregate number of records exposed and the number of breaches. All of these reports lump data into yearly bins, and this amount of aggregation can often influence the apparent trends (Figure 1).

The idea that breaches are not necessarily worsening may seem counter-intuitive. The Red Queen hypothesis in biology provides a possible explanation. It states that organisms not only compete within their own species to gain reproductive advantage, but they must also compete with other species, leading to an evolutionary arms race. In our case, as security practices have improved, attacks have become more sophisticated, possibly resulting in stasis for both attackers or defenders. This hypothesis is consistent with observed patterns in the dataset. Indeed, for breaches over 500,000 records there was no increase in size or frequency of malicious data breaches, suggesting that for large breaches such an arms race could be occurring. Many large breaches have occurred over the past decade, but the largest was disclosed as far back as 2009, and the second largest was even earlier, in 2007. Future work could analyze these breaches in depth to determine whether more recent breaches have required more sophisticated attacks.

The research was presented at WEIS this week. According to their research, data breach frequency follows a negative binomial distribution, and breach size follows a log-normal distribution.
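
The binning effect the authors describe is easy to see in simulation: hold the breach process completely constant and the yearly totals still jump around, because one rare giant breach dominates whatever bin it lands in. A small sketch with numpy follows; the distribution parameters are invented for illustration and are not the paper's fitted values.

import numpy as np

rng = np.random.default_rng(0)

# A stationary process: negative binomial breach counts each year, with
# heavy-tailed log-normal breach sizes. Parameters are made up, not fitted.
years = 10
counts = rng.negative_binomial(n=5, p=0.05, size=years)  # breaches per year
totals = [rng.lognormal(mean=8.0, sigma=3.0, size=c).sum() for c in counts]

for year, (c, t) in enumerate(zip(counts, totals), start=2006):
    print(f"{year}: {int(c):3d} breaches, {t:,.0f} records exposed")

# Nothing about the process changes from year to year, yet the yearly totals
# swing by orders of magnitude: flat underlying rates can look like trends.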

Posted on July 1, 2015 at 10:03 AM • 0 Comments

Office of Personnel Management Data Hack

I don’t have much to say about the recent hack of the US Office of Personnel Management, which has been attributed to China (and seems to be getting worse all the time). We know that government networks aren’t any more secure than corporate networks, and might even be less secure.

I agree with Ben Wittes here (although not the imaginary double standard he talks about in the rest of the essay):

For the record, I have no problem with the Chinese going after this kind of data. Espionage is a rough business and the Chinese owe as little to the privacy rights of our citizens as our intelligence services do to the employees of the Chinese government. It’s our government’s job to protect this material, knowing it could be used to compromise, threaten, or injure its people—not the job of the People’s Liberation Army to forebear collection of material that may have real utility.

Former NSA Director Michael Hayden says much the same thing:

If Hayden had had the ability to get the equivalent Chinese records when running CIA or NSA, he says, “I would not have thought twice. I would not have asked permission. I’d have launched the star fleet. And we’d have brought those suckers home at the speed of light.” The episode, he says, “is not shame on China. This is shame on us for not protecting that kind of information.” The episode is “a tremendously big deal, and my deepest emotion is embarrassment.”

My question is this: Has anyone thought about the possibility of the attackers manipulating data in the database? What are the potential attacks that could stem from adding, deleting, and changing data? I don’t think they can add a person with a security clearance, but I’d like someone who knows more than I do to understand the risks.

Posted on July 1, 2015 at 6:32 AM • 53 Comments
