Blog: March 2016 Archives

ISIS Encryption Opsec

Tidbits from the New York Times:

The final phase of Mr. Hame’s training took place at an Internet cafe in Raqqa, where an Islamic State computer specialist handed him a USB key. It contained CCleaner, a program used to erase a user’s online history on a given computer, as well as TrueCrypt, an encryption program that was widely available at the time and that experts say has not yet been cracked.

[…]

More than a year and a half earlier, the would-be Cannes bomber, Ibrahim Boudina, had tried to erase the previous three days of his search history, according to details in his court record, but the police were still able to recover it. They found that Mr. Boudina had been researching how to connect to the Internet via a secure tunnel and how to change his I.P. address.

Though he may have been aware of the risk of discovery, perhaps he was not worried enough.

Mr. Boudina had been sloppy enough to keep using his Facebook account, and his voluminous chat history allowed French officials to determine his allegiance to the Islamic State. Wiretaps of his friends and relatives, later detailed in French court records obtained by The Times and confirmed by security officials, further outlined his plot, which officials believe was going to target the annual carnival on the French Riviera.

Mr. Hame, in contrast, was given strict instructions on how to communicate. After he used TrueCrypt, he was to upload the encrypted message folder onto a Turkish commercial data storage site, from where it would be downloaded by his handler in Syria. He was told not to send it by email, most likely to avoid generating the metadata that records details like the point of origin and destination, even if the content of the missive is illegible. Mr. Hame described the website as “basically a dead inbox.”

The ISIS technician told Mr. Hame one more thing: As soon as he made it back to Europe, he needed to buy a second USB key, and transfer the encryption program to it. USB keys are encoded with serial numbers, so the process was not unlike a robber switching getaway cars.

“He told me to copy what was on the key and then throw it away,” Mr. Hame explained. “That’s what I did when I reached Prague.”

Mr. Abaaoud was also fixated on cellphone security. He jotted down the number of a Turkish phone that he said would be left in a building in Syria, but close enough to the border to catch the Turkish cell network, according to Mr. Hame’s account. Mr. Abaaoud apparently figured investigators would be more likely to track calls from Europe to Syrian phone numbers, and might overlook calls to a Turkish one.

Next to the number, Mr. Abaaoud scribbled “Dad.”

This seems like exactly the sort of opsec I would set up for an insurgent group.

EDITED TO ADD: Mistakes in the article. For example:

And now I’ve read one of the original French documents and confirmed my suspicion that the NYTimes article got details wrong.

The original French uses the word “boîte”, which matches the TrueCrypt term “container”. The original French didn’t use the words “fichier” (file), “dossier” (folder), or “répertoire” (directory). This makes so much more sense, and gives us more confidence we know what they were doing.

The original French uses the term “site de partage”, meaning a “sharing site”, which makes more sense than a “storage” site.

The document I saw says the slip of paper had login details for the file sharing site, not a TrueCrypt password. Thus, when the NYTimes article says “TrueCrypt login credentials”, we should correct it to “file sharing site login credentials”, not “TrueCrypt passphrase”.

MOST importantly, according to the subject, the login details didn’t even work. It appears he never actually used this method—he was just taught how to use it. He no longer remembers the site’s name, other than it might have the word “share” in its name. We see this a lot: ISIS talks a lot about encryption, but the evidence of them actually using it is scant.

Posted on March 31, 2016 at 6:10 AM • 41 Comments

Lawful Hacking and Continuing Vulnerabilities

The FBI’s legal battle with Apple is over, but the way it ended may not be good news for anyone.

Federal agents had been seeking to compel Apple to break the security of an iPhone 5c that had been used by one of the San Bernardino, Calif., terrorists. Apple had been fighting a court order to cooperate with the FBI, arguing that the authorities’ request was illegal and that creating a tool to break into the phone was itself harmful to the security of every iPhone user worldwide.

Last week, the FBI told the court it had learned of a possible way to break into the phone using a third party’s solution, without Apple’s help. On Monday, the agency dropped the case because the method worked. We don’t know who that third party is. We don’t know what the method is, or which iPhone models it applies to. Now it seems like we never will.

The FBI plans to classify this access method and to use it to break into other phones in other criminal investigations.

Compare this iPhone vulnerability with another, one that was made public on the same day the FBI said it might have found its own way into the San Bernardino phone. Researchers at Johns Hopkins University announced last week that they had found a significant vulnerability in the iMessage protocol. They disclosed the vulnerability to Apple in the fall, and last Monday, Apple released an updated version of its operating system that fixed the vulnerability. (That’s iOS 9.3; you should download and install it right now.) The Hopkins team didn’t publish its findings until Apple’s patch was available, so devices could be updated to protect them from attacks using the researchers’ discovery.

This is how vulnerability research is supposed to work.

Vulnerabilities are found, fixed, then published. The entire security community is able to learn from the research, and—more important—everyone is more secure as a result of the work.

The FBI is doing the exact opposite. It has been given whatever vulnerability it used to get into the San Bernardino phone in secret, and it is keeping it secret. All of our iPhones remain vulnerable to this exploit. This includes the iPhones used by elected officials and federal workers and the phones used by people who protect our nation’s critical infrastructure and carry out other law enforcement duties, including lots of FBI agents.

This is the trade-off we have to consider: do we prioritize security over surveillance, or do we sacrifice security for surveillance?

The problem with computer vulnerabilities is that they’re general. There’s no such thing as a vulnerability that affects only one device. If it affects one copy of an application, operating system or piece of hardware, then it affects all identical copies. A vulnerability in Windows 10, for example, affects all of us who use Windows 10. And it can be used by anyone who knows it, be they the FBI, a gang of cyber criminals, the intelligence agency of another country—anyone.

And once a vulnerability is found, it can be used for attack—like the FBI is doing—or for defense, as in the Johns Hopkins example.

Over years of battling attackers and intruders, we’ve learned a lot about computer vulnerabilities. They’re plentiful: vulnerabilities are found and fixed in major systems all the time. They’re regularly discovered independently, by outsiders rather than by the original manufacturers or programmers. And once they’re discovered, word gets out. Today’s top-secret National Security Agency attack techniques become tomorrow’s PhD theses and the next day’s hacker tools.

The attack/defense trade-off is not new to the US government. They even have a process for deciding what to do when a vulnerability is discovered: whether it should be disclosed to improve all of our security, or kept secret to be used for offense. The White House claims that it prioritizes defense, and that general vulnerabilities in widely used computer systems are patched.

Whatever method the FBI used to get into the San Bernardino shooter’s iPhone is one such vulnerability. The FBI did the right thing by using an existing vulnerability rather than forcing Apple to create a new one, but it should be disclosed to Apple and patched immediately.

This case has always been more about the PR battle and potential legal precedent than about the particular phone. And while the legal dispute is over, there are other cases involving other encrypted devices in other courts across the country. But while there will always be a few computers—corporate servers, individual laptops or personal smartphones—that the FBI would like to break into, there are far more such devices that we need to be secure.

One of the most surprising things about this debate is the number of former national security officials who came out on Apple’s side. They understand that we are singularly vulnerable to cyberattack, and that our cyberdefense needs to be as strong as possible.

The FBI’s myopic focus on this one investigation is understandable, but in the long run, it’s damaging to our national security.

This essay previously appeared in the Washington Post, with a far too click-bait headline.

EDITED TO ADD: To be fair, the FBI probably doesn’t know what the vulnerability is. And I wonder how easy it would be for Apple to figure it out. Given that the FBI has to exhaust all avenues of access before demanding help from Apple, we can learn which models are vulnerable by watching which legal suits are abandoned now that the FBI knows about this method.

Matt Blaze makes excellent points about how the FBI should disclose the vulnerabilities it uses, in order to improve computer security. That was part of a New York Times “Room for Debate” on hackers helping the FBI.

Susan Landau’s excellent Congressional testimony on the topic.

Posted on March 30, 2016 at 4:54 PM • 91 Comments

Mass Surveillance Silences Minority Opinions

Research paper: Elizabeth Stoycheff, “Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring”:

Abstract: Since Edward Snowden exposed the National Security Agency’s use of controversial online surveillance programs in 2013, there has been widespread speculation about the potentially deleterious effects of online government monitoring. This study explores how perceptions and justification of surveillance practices may create a chilling effect on democratic discourse by stifling the expression of minority political views. Using a spiral of silence theoretical framework, knowing one is subject to surveillance and accepting such surveillance as necessary act as moderating agents in the relationship between one’s perceived climate of opinion and willingness to voice opinions online. Theoretical and normative implications are discussed.

No surprise, and something I wrote about in Data and Goliath:

Across the US, states are on the verge of reversing decades-old laws about homosexual relationships and marijuana use. If the old laws could have been perfectly enforced through surveillance, society would never have reached the point where the majority of citizens thought those things were okay. There has to be a period where they are still illegal yet increasingly tolerated, so that people can look around and say, “You know, that wasn’t so bad.” Yes, the process takes decades, but it’s a process that can’t happen without lawbreaking. Frank Zappa said something similar in 1971: “Without deviation from the norm, progress is not possible.”

The perfect enforcement that comes with ubiquitous government surveillance chills this process. We need imperfect security—systems that free people to try new things, much the way off-the-record brainstorming sessions loosen inhibitions and foster creativity. If we don’t have that, we can’t slowly move from a thing’s being illegal and not okay, to illegal and not sure, to illegal and probably okay, and finally to legal.

This is an important point. Freedoms we now take for granted were often at one time viewed as threatening or even criminal by the past power structure. Those changes might never have happened if the authorities had been able to achieve social control through surveillance.

This is one of the main reasons all of us should care about the emerging architecture of surveillance, even if we are not personally chilled by its existence. We suffer the effects because people around us will be less likely to proclaim new political or social ideas, or act out of the ordinary. If J. Edgar Hoover’s surveillance of Martin Luther King Jr. had been successful in silencing him, it would have affected far more people than King and his family.

Slashdot thread.

EDITED TO ADD (4/6): News article.

Posted on March 29, 2016 at 12:58 PM • 31 Comments

Power on the Internet

Interesting paper: Yochai Benkler, “Degrees of Freedom, Dimensions of Power,” Daedalus, Winter 2016:

Abstract: The original Internet design combined technical, organizational, and cultural characteristics that decentralized power along diverse dimensions. Decentralized institutional, technical, and market power maximized freedom to operate and innovate at the expense of control. Market developments have introduced new points of control. Mobile and cloud computing, the Internet of Things, fiber transition, big data, surveillance, and behavioral marketing introduce new control points and dimensions of power into the Internet as a social-cultural-economic platform. Unlike in the Internet’s first generation, companies and governments are well aware of the significance of design choices, and are jostling to acquire power over, and appropriate value from, networked activity. If we are to preserve the democratic and creative promise of the Internet, we must continuously diagnose control points as they emerge and devise mechanisms of recreating diversity of constraint and degrees of freedom in the network to work around these forms of reconcentrated power.

Posted on March 28, 2016 at 6:46 AM • 28 Comments

Memphis Airport Inadvertently Gets Security Right

A local newspaper recently tested airport security at Memphis Airport:

Our crew sat for 30 minutes in the passenger drop-off area Tuesday without a word from anyone, and that raised a number of eyebrows.

Certainly raised mine. Here’s my question: why is that a bad thing? If you’re worried about a car bomb, why do you think length of time sitting curbside correlates with likelihood of detonation? Were I a car bomber sitting in the front seat, I would detonate my bomb pretty damned quick.

Anyway, the airport was 100% correct in its reply:

The next day, the airport told FOX13 they take a customer-friendly “hassle free” approach.

I’m certainly in favor of that. Useless security theater that adds to the hassle of traveling without actually making us any safer doesn’t help anyone.

Unfortunately, the airport is now reviewing its procedures, because fear wins:

CEO Scott Brockman sent FOX13 a statement saying in part “We will continue to review our policies and procedures and implement any necessary changes in order to ensure the safety of the traveling public.”

EDITED TO ADD (4/12): The airport PR person commented below. “Jim Turner of the Cato Institute” is actually Jim Harper.

Posted on March 25, 2016 at 12:26 PM • 36 Comments

Interesting Lottery Terminal Hack

It was a manipulation of the terminals.

The 5 Card Cash game was suspended in November after Connecticut Lottery and state Department of Consumer Protection officials noticed there were more winning tickets than the game’s parameters should have allowed. The game remains suspended.

An investigation determined that some lottery retailers were manipulating lottery machines to print more instant winner tickets and fewer losers….

[…]

An investigator for the Connecticut Lottery determined that terminal operators could slow down their lottery machines by requesting a number of database reports or by entering several requests for lottery game tickets. While those reports were being processed, the operator could enter sales for 5 Card Cash tickets. Before the tickets would print, however, the operator could see on a screen if the tickets were instant winners. If tickets were not winners, the operator could cancel the sale before the tickets printed.
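
In effect, the terminal leaked a ticket’s outcome while cancellation was still free. Here’s a toy model of that logic in Python (purely illustrative; it omits the report-request timing trick and has nothing to do with the real terminal software):

    # Toy model of the reported flaw: the terminal reveals whether a ticket is an
    # instant winner before the sale commits, and lets the operator cancel losers.
    import random

    class Terminal:
        WIN_RATE = 0.20                      # assumed instant-win probability

        def start_sale(self) -> bool:
            """Queue a ticket and display (leak) whether it will be a winner."""
            self.pending_win = random.random() < self.WIN_RATE
            return self.pending_win

        def cancel(self) -> None:
            self.pending_win = None          # the loser never prints, never sells

        def print_ticket(self) -> bool:
            return self.pending_win

    honest_wins = sum(Terminal().start_sale() for _ in range(10_000))

    cheat_wins = 0
    for _ in range(10_000):
        t = Terminal()
        if t.start_sale():                   # peek at the outcome...
            cheat_wins += t.print_ticket()   # ...and print only the winners
        else:
            t.cancel()

    print(f"honest play: {honest_wins} winners per 10,000 printed tickets")
    print(f"cancel-the-losers play: every one of {cheat_wins} printed tickets wins")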

Posted on March 25, 2016 at 6:31 AM • 15 Comments

FBI vs. Apple: Who Is Helping the FBI?

On Monday, the FBI asked the court for a two-week delay in a scheduled hearing on the San Bernardino iPhone case, because some “third party” approached it with a way into the phone. It wanted time to test this access method.

Who approached the FBI? We have no idea.

I have avoided speculation because the story makes no sense. Why did this third party wait so long? Why didn’t the FBI go through with the hearing anyway?

Now we have speculation that the third party is the Israeli forensic company Cellebrite. From its website:

Support for Locked iOS Devices Using UFED Physical Analyzer

Using UFED Physical Analyzer, physical and file system extractions, decoding and analysis can be performed on locked iOS devices with a simple or complex passcode. Simple passcodes will be recovered during the physical extraction process and enable access to emails and keychain passwords. If a complex password is set on the device, physical extraction can be performed without access to emails and keychain. However, if the complex password is known, emails and keychain passwords will be available.

My guess is that it’s not them. They have an existing and ongoing relationship with the FBI. If they could crack the phone, they would have done it months ago. This purchase order seems to be coincidental.

In any case, having a company name doesn’t mean that the story makes any more sense, but there it is. We’ll know more in a couple of weeks, although I doubt the FBI will share any more than they absolutely have to.

This development annoys me in every way. This case was never about the particular phone; it was about the precedent and the general issue of security vs. surveillance. This will just come up again another time, and we’ll have to go through this all over again—maybe with a company that isn’t as committed to our privacy as Apple is.

EDITED TO ADD: Watch former NSA Director Michael Hayden defend Apple and iPhone security. I’ve never seen him so impassioned before.

EDITED TO ADD (3/26): Marcy Wheeler has written extensively about the Cellebrite possibility.

Posted on March 24, 2016 at 12:34 PM • 68 Comments

Cryptography Is Harder Than It Looks

Writing a magazine column is always an exercise in time travel. I’m writing these words in early December. You’re reading them in February. This means anything that’s news as I write this will be old hat in two months, and anything that’s news to you hasn’t happened yet as I’m writing.

This past November, a group of researchers found some serious vulnerabilities in an encryption protocol that I, and probably most of you, use regularly. The group alerted the vendor, who is currently working to update the protocol and patch the vulnerabilities. The news will probably go public in the middle of February, unless the vendor successfully pleads for more time to finish their security patch. Until then, I’ve agreed not to talk about the specifics.

I’m writing about this now because these vulnerabilities illustrate two very important truisms about encryption and the current debate about adding back doors to security products:

  1. Cryptography is harder than it looks.
  2. Complexity is the worst enemy of security.

These aren’t new truisms. I wrote about the first in 1997 and the second in 1999. I’ve talked about them both in Secrets and Lies (2000) and Practical Cryptography (2003). They’ve been proven true again and again, as security vulnerabilities are discovered in cryptographic system after cryptographic system. They’re both still true today.

Cryptography is harder than it looks, primarily because it looks like math. Both algorithms and protocols can be precisely defined and analyzed. This isn’t easy, and there’s a lot of insecure crypto out there, but we cryptographers have gotten pretty good at getting this part right. However, math has no agency; it can’t actually secure anything. For cryptography to work, it needs to be written in software, embedded in a larger software system, managed by an operating system, run on hardware, connected to a network, and configured and operated by users. Each of these steps brings with it difficulties and vulnerabilities.
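
To make that concrete, here is a minimal sketch (assuming Python and its third-party cryptography package; it refers to no specific product). The AES-GCM math is sound in both functions, but a one-line difference in how the nonce is handled quietly destroys the security.

    # A minimal sketch, assuming the third-party "cryptography" package is installed.
    # The mathematics (AES-GCM) is identical in both functions; the bug is entirely
    # in how the surrounding code manages the nonce.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)

    # Correct: a fresh, never-repeated nonce for every message.
    def encrypt_ok(plaintext: bytes) -> tuple[bytes, bytes]:
        nonce = os.urandom(12)               # 96-bit random nonce per message
        return nonce, aead.encrypt(nonce, plaintext, None)

    # Broken: same primitive, but a fixed nonce. An eavesdropper can XOR pairs of
    # ciphertexts to learn about the plaintexts and can forge authentication tags.
    # No cryptanalysis of AES is needed; the system around the math failed.
    FIXED_NONCE = b"\x00" * 12
    def encrypt_broken(plaintext: bytes) -> bytes:
        return aead.encrypt(FIXED_NONCE, plaintext, None)

The broken version encrypts and decrypts perfectly well in testing, which is exactly why this class of mistake survives into deployed systems.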

Although cryptography gives an inherent mathematical advantage to the defender, computer and network security are much more balanced. Again and again, we find vulnerabilities not in the underlying mathematics, but in all this other stuff. It’s far easier for an attacker to bypass cryptography by exploiting a vulnerability in the system than it is to break the mathematics. This has been true for decades, and it’s one of the lessons that Edward Snowden reiterated.

The second truism is that complexity is still the worst enemy of security. The more complex a system is, the more lines of code, interactions with other systems, configuration options, and vulnerabilities there are. Implementing cryptography involves getting everything right, and the more complexity there is, the more there is to get wrong.

Vulnerabilities come from options within a system, interactions between systems, interfaces between users and systems—everywhere. If good security comes from careful analysis of specifications, source code, and systems, then a complex system is more difficult and more expensive to analyze. We simply don’t know how to securely engineer anything but the simplest of systems.

I often refer to this quote, sometimes attributed to Albert Einstein and sometimes to Yogi Berra: “In theory, theory and practice are the same. In practice, they are not.”

These truisms are directly relevant to the current debate about adding back doors to encryption products. Many governments—from China to the US and the UK—want the ability to decrypt data and communications without users’ knowledge or consent. Almost all computer security experts have two arguments against this idea: first, adding this back door makes the system vulnerable to all attackers and doesn’t just provide surreptitious access for the “good guys,” and second, creating this sort of access greatly increases the underlying system’s complexity, exponentially increasing the possibility of getting the security wrong and introducing new vulnerabilities.
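
To see the second argument in miniature, here is a deliberately simplified sketch of what an “exceptional access” requirement adds to a protocol; the names and structure are illustrative, not any vendor’s or government’s actual design:

    # Simplified sketch of mandated "exceptional access," again using Python's
    # third-party "cryptography" package. Everything here is illustrative.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap(kek: bytes, content_key: bytes) -> tuple[bytes, bytes]:
        """AES-GCM key wrapping: returns (nonce, wrapped_key)."""
        nonce = os.urandom(12)
        return nonce, AESGCM(kek).encrypt(nonce, content_key, None)

    def send_message(plaintext: bytes, recipient_kek: bytes, escrow_kek: bytes) -> dict:
        content_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        return {
            "nonce": nonce,
            "ciphertext": AESGCM(content_key).encrypt(nonce, plaintext, None),
            # Normal path: the content key wrapped to the intended recipient.
            "key_for_recipient": wrap(recipient_kek, content_key),
            # The mandated extra path: the same content key wrapped to an escrow
            # key that someone else holds. Whoever obtains escrow_kek -- by hack,
            # insider abuse, or legal process -- can read every message, and this
            # branch is new code, new key management, and new ways to fail.
            "key_for_escrow": wrap(escrow_kek, content_key),
        }

Even in this toy version, the escrow branch adds a second long-lived secret to protect, a second wrapping step to get right, and a single point whose compromise exposes all traffic.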

Going back to the new vulnerability that you’ll learn about in mid-February, the lead researcher wrote to me: “If anyone tells you that [the vendor] can just ‘tweak’ the system a little bit to add key escrow or to man-in-the-middle specific users, they need to spend a few days watching the authentication dance between [the client device/software] and the umpteen servers it talks to just to log into the network. I’m frankly amazed that any of it works at all, and you couldn’t pay me enough to tamper with any of it.” This is an important piece of wisdom.

The designers of this system aren’t novices. They’re an experienced team with some of the best security engineers in the field. If these guys can’t get the security right, just imagine how much worse it is for smaller companies without this team’s level of expertise and resources. Now imagine how much worse it would be if you added a government-mandated back door. There are more opportunities to get security wrong, and more engineering teams without the time and expertise necessary to get it right. It’s not a recipe for security.

Contrary to much of today’s political rhetoric, strong cryptography is essential for our information security. It’s how we protect our information and our networks from hackers, criminals, foreign governments, and terrorists. Security vulnerabilities, whether deliberate backdoor access mechanisms or accidental flaws, make us all less secure. Getting security right is harder than it looks, and our best chance is to make the cryptography as simple and public as possible.

This essay previously appeared in IEEE Security & Privacy, and is an update of something I wrote in 1997.

That vulnerability I alluded to in the essay is the recent iMessage flaw.

Posted on March 24, 2016 at 6:37 AM • 30 Comments

1981 US Document on Encryption Policy

This was newly released under FOIA at my request: Victor C. Williams, Jr., Donn B. Parker, and Charles C. Wood, “Impacts of Federal Policy Options for Nonmilitary Cryptography,” NTIA-CR-81-10, National Telecommunications and Information Administration, U.S. Department of Commerce, June 1981. It argues that cryptography is an important enabling technology. At this point, it’s only of historical value.

Posted on March 23, 2016 at 6:20 AM • 14 Comments

iMessage Encryption Flaw Found and Fixed

Matthew Green and team found and reported a significant iMessage encryption flaw last year.

Green suspected there might be a flaw in iMessage last year after he read an Apple security guide describing the encryption process and it struck him as weak. He said he alerted the firm’s engineers to his concern. When a few months passed and the flaw remained, he and his graduate students decided to mount an attack to show that they could pierce the encryption on photos or videos sent through iMessage.

It took a few months, but they succeeded, targeting phones that were not using the latest operating system on iMessage, which launched in 2011.

To intercept a file, the researchers wrote software to mimic an Apple server. The encrypted transmission they targeted contained a link to the photo stored in Apple’s iCloud server as well as a 64-digit key to decrypt the photo.

Although the students could not see the key’s digits, they guessed at them by a repetitive process of changing a digit or a letter in the key and sending it back to the target phone. Each time they guessed a digit correctly, the phone accepted it. They probed the phone in this way thousands of times.

“And we kept doing that,” Green said, “until we had the key.”

A modified version of the attack would also work on later operating systems, Green said, adding that it would likely have taken the hacking skills of a nation-state.
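
The detail that matters in that description is the per-digit accept/reject behavior: an oracle that confirms one position at a time collapses the search from 16^64 possibilities to a linear number of guesses. A toy model of the idea (the hex alphabet and the oracle are assumptions for illustration; this is not Apple’s protocol):

    # Toy model of a per-digit guessing oracle. "oracle_accepts" stands in for the
    # target phone's observable accept/reject behavior described in the article.
    import secrets

    ALPHABET = "0123456789abcdef"   # assume a hex-encoded 64-digit key
    KEY_LEN = 64

    secret_key = "".join(secrets.choice(ALPHABET) for _ in range(KEY_LEN))
    queries = 0

    def oracle_accepts(guess_prefix: str) -> bool:
        """Accepts a guess whose digits so far are correct."""
        global queries
        queries += 1
        return secret_key.startswith(guess_prefix)

    recovered = ""
    for _ in range(KEY_LEN):
        for symbol in ALPHABET:
            if oracle_accepts(recovered + symbol):
                recovered += symbol
                break

    assert recovered == secret_key
    print(f"recovered the key in {queries} queries, versus 16**64 (about 10**77) "
          f"possibilities for a blind guess")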

This flaw is fixed in iOS 9.3. (You should download and install it now.)

I wrote about this flaw in IEEE Security and Privacy earlier this year:

Going back to the new vulnerability that you’ll learn about in mid-February, the lead researcher wrote to me: “If anyone tells you that [the vendor] can just ‘tweak’ the system a little bit to add key escrow or to man-in-the-middle specific users, they need to spend a few days watching the authentication dance between [the client device/software] and the umpteen servers it talks to just to log into the network. I’m frankly amazed that any of it works at all, and you couldn’t pay me enough to tamper with any of it.” This is an important piece of wisdom.

The designers of this system aren’t novices. They’re an experienced team with some of the best security engineers in the field. If these guys can’t get the security right, just imagine how much worse it is for smaller companies without this team’s level of expertise and resources. Now imagine how much worse it would be if you added a government-mandated back door. There are more opportunities to get security wrong, and more engineering teams without the time and expertise necessary to get it right. It’s not a recipe for security.

Related: A different iOS flaw was reported last week. Called AceDeceiver, it is a Trojan that allows an attacker to install malicious software onto an iOS device, bypassing Apple’s DRM protections. I don’t believe that Apple has fixed this yet, although it seems as if Apple just has to add a certificate revocation list, or make the certs nonreplayable by having some mandatory interaction with the iTunes store.

EDITED TO ADD (4/14): The paper describing the iMessage flaw.

Posted on March 21, 2016 at 1:45 PM • 28 Comments

Brennan Center Report on NSA Overseas Spying and Executive Order 12333

The Brennan Center has released a report on EO 12333, the executive order that regulates the NSA’s overseas surveillance. Much of what the NSA does here is secret and, even though the EO is designed for foreign surveillance, Americans are regularly swept up in the NSA’s collection operations:

Despite a series of significant disclosures, the scope of these operations, as well as critical detail about how they are regulated, remain secret. Nevertheless, an analysis of publicly available documents reveals several salient features of the EO 12333 regime:

  • Bulk collection of information: The NSA engages in bulk collection overseas—for example, gathering all of the telephone calls going into or out of certain countries. These programs include the data of Americans who are visiting those countries or communicating with their inhabitants. While recent executive branch reforms place some limits on how the government may use data collected in bulk, these limits do not apply to data that is collected in bulk and held for a temporary (but unspecified) period of time in order to facilitate “targeted” surveillance.
  • Treating subjects of discussion as “targets”: When the NSA conducts surveillance under EO 12333 that it characterizes as “targeted,” it is not limited to obtaining communications to or from particular individuals or groups, or even communications that refer to specified individuals or groups (such as e-mails that mention “ISIS”). Rather, the selection terms used by the NSA may include broad subjects, such as “Yemen” or “nuclear proliferation.”
  • Weak limits on the retention and sharing of information: Despite recent reforms, the NSA continues to exercise significant discretion over how long it may retain personal data gathered under EO 12333 and the circumstances under which it may share such information. While there is a default five-year limit on data retention, there is an extensive list of exceptions. Information sharing with law enforcement authorities threatens to undermine traditional procedural safeguards in criminal proceedings. Current policies disclosed by the government also lack specific procedures for mitigating the human rights risks of intelligence sharing with foreign governments, particularly regimes with a history of repressive and abusive conduct.
  • Systemic lack of meaningful oversight: Operations that are conducted solely under EO 12333 (i.e., those that are not subject to any statutory law) are not vetted or reviewed by any court. Members of the congressional intelligence committees have cited challenges in overseeing the NSA’s network of EO 12333 programs. While the Agency has argued that its privacy processes are robust, overreliance on internal safeguards fails to address the need for external and independent oversight. It also leaves Congress and the public without sufficient means to assess the risks and benefits of EO 12333 operations.

The report concludes with a list of major unanswered questions about EO 12333 and the array of surveillance activities conducted under its rules and policies. While many operational aspects of surveillance programs are necessarily secret, the NSA can and should share the laws and regulations that govern EO 12333 programs, significant interpretations of those legal authorities, and information about how EO 12333 operations are overseen both within the Executive Branch and by Congress. It should clarify internal definitions of terms such as “collection,” “targeted,” and “bulk” so that the scope of its operations is understandable rather than obscured. And it should provide more information on how its overseas operations impact Americans’ privacy, by releasing statistics on data collection and by specifying in greater detail the instances in which it shares information with other U.S. and foreign agencies and the relevant safeguards.

Here’s an article from the Intercept.

And this is me from Data and Goliath on EO 12333:

Executive Order 12333, the 1981 presidential document authorizing most of NSA’s surveillance, is incredibly permissive. It is supposed to primarily allow the NSA to conduct surveillance outside the US, but it gives the agency broad authority to collect data on Americans. It provides minimal protections for Americans whose data is collected outside the US, and even less for the hundreds of millions of innocent non-Americans whose data is incidentally collected. Because this is a presidential directive and not a law, courts have no jurisdiction, and congressional oversight is minimal. Additionally, at least in 2007, the president believed he could modify or ignore it at will and in secret. As a result, we know very little about how Executive Order 12333 is being interpreted inside the NSA.

Posted on March 21, 2016 at 6:53 AM • 37 Comments

Companies Handing Source Code Over to Governments

ZDNet has an article on US government pressure on software companies to hand over copies of their source code. There are no details, because no one is talking on the record, but I also believe that this is happening.

When asked, a spokesperson for the Justice Dept. acknowledged that the department has demanded source code and private encryption keys before.

These orders would probably come from the FISA Court:

These orders are so highly classified that simply acknowledging an order’s existence is illegal; even a company’s chief executive or members of the board may not be told. Only those who are necessary to execute the order would know, and would be subject to the same secrecy provisions.

Given that [Apple’s software engineering chief, Craig] Federighi heads the division, it would be almost impossible to keep from him the existence of a FISA order demanding the company’s source code.

It would not be the first time that the US government has reportedly used proprietary code and technology from American companies to further its surveillance efforts.

Top secret NSA documents leaked by whistleblower Edward Snowden, reported in German magazine Der Spiegel in late-2013, have suggested some hardware and software makers were compelled to hand over source code to assist in government surveillance.

The NSA’s catalog of implants and software backdoors suggests that some companies, including Dell, Huawei, and Juniper—which was publicly linked to an “unauthorized” backdoor—had their servers and firewall products targeted and attacked through various exploits. Other exploits were able to infiltrate the firmware of hard drives manufactured by Western Digital, Seagate, Maxtor, and Samsung.

Last year, antivirus maker and security firm Kaspersky found evidence that the NSA had obtained source code from a number of prominent hard drive makers—a claim the NSA denied—to quietly install software used to eavesdrop on the majority of the world’s computers.

“There is zero chance that someone could rewrite the [hard drive] operating system using public information,” said one of the researchers.

The problem, of course, is that any company forced by the US to hand over its source code would also be forbidden from talking about it.

It’s the sort of thing China does:

For most computing and networking equipment, the chart says, source code must be turned over to Chinese officials. But many foreign companies would be unwilling to disclose code because of concerns about intellectual property, security and, in some cases, United States export law.

The chart also calls for companies that want to sell to banks to set up research and development centers in China, obtain permits for workers servicing technology equipment and build “ports” to allow Chinese officials to manage and monitor data processed by their hardware.

The draft antiterrorism law pushes even further, calling for companies to store all data related to Chinese users on servers in China, create methods for monitoring content for terror threats and provide keys to encryption to public security authorities.

Slashdot thread.

Posted on March 18, 2016 at 11:27 AM • 34 Comments

New NIST Encryption Guidelines

NIST has published a draft of their new standard for encryption use: “NIST Special Publication 800-175B, Guideline for Using Cryptographic Standards in the Federal Government: Cryptographic Mechanisms.” In it, the Escrowed Encryption Standard from the 1990s, FIPS-185, is no longer certified. And Skipjack, NSA’s symmetric algorithm from the same period, will no longer be certified.

I see nothing sinister about decertifying Skipjack. In a world of faster computers and post-quantum thinking, an 80-bit key and 64-bit block no longer cut it.
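
Some back-of-the-envelope arithmetic on why (the attacker’s speed is an assumption, and the numbers are rough):

    # Rough arithmetic on 80-bit keys and 64-bit blocks. Illustrative only.
    keyspace = 2 ** 80                      # Skipjack's key space
    guesses_per_second = 1e12               # assume a well-funded attacker
    seconds_per_year = 365.25 * 24 * 3600
    years = keyspace / guesses_per_second / seconds_per_year
    print(f"sweeping 2**80 keys at 10**12/sec: about {years:,.0f} years")
    # Tens of thousands of years today, but massive parallelism eats into that,
    # and Grover's algorithm on a quantum computer would cut the effective
    # strength to roughly 2**40, which is trivially searchable.

    block_bits = 64
    blocks_at_birthday_bound = 2 ** (block_bits // 2)       # about 2**32 blocks
    gib = blocks_at_birthday_bound * 8 / 2 ** 30
    print(f"64-bit blocks: collisions expected after roughly {gib:.0f} GiB of data")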

ETA: My essays from 1998 on Skipjack and KEA.

Posted on March 17, 2016 at 9:54 AM • 15 Comments

Another FBI Filing on the San Bernardino iPhone Case

The FBI’s reply to Apple is more of a character assassination attempt than a legal argument. It’s as if it only cares about public opinion at this point.

But note the threat in footnote 9 on page 22:

For the reasons discussed above, the FBI cannot itself modify the software on Farook’s iPhone without access to the source code and Apple’s private electronic signature. The government did not seek to compel Apple to turn those over because it believed such a request would be less palatable to Apple. If Apple would prefer that course, however, that may provide an alternative that requires less labor by Apple programmers.

This should immediately remind everyone of the Lavabit case, where the FBI did ask for the site’s master key in order to get at one user. Ladar Levison commented on the similarities. He, of course, shut his service down rather than turn over the master key. A company as large as Apple does not have that option. Marcy Wheeler wrote about this in detail.

My previous three posts on this are here, here, and here, all with lots of interesting links to various writings on this case.

EDITED TO ADD: The New York Times reports that the White House might have overreached in this case.

John Oliver has a great segment on this. With a Matt Blaze cameo!

Good NPR interview with Richard Clarke.

Well, I don’t think it’s a fierce debate. I think the Justice Department and the FBI are on their own here. You know, the secretary of defense has said how important encryption is when asked about this case. The National Security Agency director and three past National Security Agency directors, a former CIA director, a former Homeland Security secretary have all said that they’re much more sympathetic with Apple in this case. You really have to understand that the FBI director is exaggerating the need for this and is trying to build it up as an emotional case, organizing the families of the victims and all of that. And it’s Jim Comey and the attorney general is letting him get away with it.

Senator Lindsey Graham is changing his views:

“It’s just not so simple,” Graham said. “I thought it was that simple.”

Steven Levy on the history angle of this story.

Benjamin Wittes on possible legislative options.

EDITED TO ADD (3/17): Apple’s latest response is pretty withering. Commentary from Susan Crawford. FBI and China are on the same side. How this fight risks the whole US tech industry.

EDITED TO ADD (3/18): Tim Cook interview. Apple engineers might refuse to help the FBI, if Apple loses the case. And I should have previously posted this letter from racial justice activists, and this more recent essay on how this affects the LGBTQ community.

EDITED TO ADD (3/21): Interesting article on the Apple/FBI tensions that led to this case.

Posted on March 16, 2016 at 6:12 AM • 106 Comments

Possible Government Demand for WhatsApp Backdoor

The New York Times is reporting that WhatsApp, and its parent company Facebook, may be headed to court over encrypted chat data that the FBI can’t decrypt.

This case is fundamentally different from the Apple iPhone case. In that case, the FBI is demanding that Apple create a hacking tool to exploit an already existing vulnerability in the iPhone 5c, because they want to get at stored data on a phone that they have in their possession. In the WhatsApp case, chat data is end-to-end encrypted, and there is nothing the company can do to assist the FBI in reading already encrypted messages. This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability—one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications. This is a much further reach for the FBI, but potentially a reasonable additional step if they win the Apple case.

And once the US demands this, other countries will demand it as well. Note that the government of Brazil has arrested a Facebook employee because WhatsApp is secure.

We live in scary times when our governments want us to reduce our own security.

EDITED TO ADD (3/15): More commentary.

Posted on March 15, 2016 at 6:17 AM • 231 Comments

Punishment and Trust

Interesting research: “Third-party punishment as a costly signal of trustworthiness,” by Jillian J. Jordan, Moshe Hoffman, Paul Bloom, and David G. Rand, Nature:

Abstract: Third-party punishment (TPP), in which unaffected observers punish selfishness, promotes cooperation by deterring defection. But why should individuals choose to bear the costs of punishing? We present a game theoretic model of TPP as a costly signal of trustworthiness. Our model is based on individual differences in the costs and/or benefits of being trustworthy. We argue that individuals for whom trustworthiness is payoff-maximizing will find TPP to be less net costly (for example, because mechanisms that incentivize some individuals to be trustworthy also create benefits for deterring selfishness via TPP). We show that because of this relationship, it can be advantageous for individuals to punish selfishness in order to signal that they are not selfish themselves. We then empirically validate our model using economic game experiments. We show that TPP is indeed a signal of trustworthiness: third-party punishers are trusted more, and actually behave in a more trustworthy way, than non-punishers. Furthermore, as predicted by our model, introducing a more informative signal—the opportunity to help directly—attenuates these signalling effects. When potential punishers have the chance to help, they are less likely to punish, and punishment is perceived as, and actually is, a weaker signal of trustworthiness. Costly helping, in contrast, is a strong and highly used signal even when TPP is also possible. Together, our model and experiments provide a formal reputational account of TPP, and demonstrate how the costs of punishing may be recouped by the long-run benefits of signalling one’s trustworthiness.

More accessible essay.

Posted on March 14, 2016 at 12:59 PM • 12 Comments

Leaked ISIS Documents

Looks like tens of thousands of ISIS documents have been leaked. Where did they come from? We don’t know:

Documents listing the names of Islamic State fighters have been touted around the Middle East for months, dangled in front of media outlets for large sums of money.

[…]

[Sky News correspondent] Ramsay said he met the source of the documents in Turkey, an individual calling himself Abu Hamed who had been in the Free Syrian Army rebel group and switched to Isis before becoming disillusioned with it.

Sky said the documents were on a memory stick stolen from the head of Isis’s internal security police.

The Syrian opposition news website, Zaman al-Wasl, in a report billed as “exclusive” and published before Sky’s, said it had the personal data on 1,736 fighters and that its documents had come from Isis’s general administration of borders.

Posted on March 11, 2016 at 6:17 AM • 19 Comments

Espionage Tactics Against Tibetans

A Citizen Lab research study of Chinese attack and espionage tactics against Tibetan networks and users.

This report describes the latest iteration in a long-running espionage campaign against the Tibetan community. We detail how the attackers continuously adapt their campaigns to their targets, shifting tactics from document-based malware to conventional phishing that draws on “inside” knowledge of community activities. This adaptation appears to track changes in security behaviors within the Tibetan community, which has been promoting a move from sharing attachments via e-mail to using cloud-based file sharing alternatives such as Google Drive.

We connect the attack group’s infrastructure and techniques to a group previously identified by Palo Alto Networks, which they named Scarlet Mimic. We provide further context on Scarlet Mimic’s targeting and tactics, and the intended victims of their attack campaigns. In addition, while Scarlet Mimic may be conducting malware attacks using other infrastructure, we analyze how the attackers re-purposed a cluster of their malware Command and Control (C2) infrastructure to mount the recent phishing campaign.

This move is only the latest development in the ongoing cat and mouse game between attack groups like Scarlet Mimic and the Tibetan community. The speed and ease with which attackers continue to adapt highlights the challenges faced by Tibetans who are trying to remain safe online.

News article.

Posted on March 10, 2016 at 2:16 PM • 14 Comments

Cheating at Professional Bridge

Interesting article on detecting cheaters in professional bridge using big-data analysis.

Basically, a big part of the game is the communication of information between the partners. But only certain communications channels are permitted. Cheating involves partners sending secret signals to each other.

The results of this can be detected by analyzing lots of games the partners play. If they consistently make plays that should turn out badly based on the information they should know, but end up turning out well given the actual distribution of the cards, then we know that some sort of secret signaling is involved.
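
A toy version of that test, with made-up numbers (real analyses score actual deals and are far more sophisticated): count the hands where a pair’s anti-percentage choice happened to succeed, and ask how likely an honest pair is to be that lucky.

    # Toy detection test: how surprising is the pair's "luck" under honest play?
    # All numbers are illustrative.
    from math import comb

    def binomial_tail(n: int, k: int, p: float) -> float:
        """P[X >= k] for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

    p_lucky = 0.10      # assumed chance an honest pair's anti-percentage play succeeds
    n_hands = 200       # hands analyzed
    k_observed = 45     # hands where the "bad" choice worked out anyway

    p_value = binomial_tail(n_hands, k_observed, p_lucky)
    print(f"chance an honest pair is this lucky in {n_hands} hands: {p_value:.1e}")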

Posted on March 8, 2016 at 6:07 AM • 27 Comments

Interesting Research on the Economics of Privacy

New paper: “The Economics of Privacy,” by Alessandro Acquisti, Curtis R. Taylor, and Liad Wagman:

Abstract: This article summarizes and draws connections among diverse streams of empirical and theoretical research on the economics of privacy. Our focus is on the economic value and consequences of privacy and of personal information, and on consumers’ understanding of and decisions about the costs and benefits associated with data protection and data sharing. We highlight how the economic analysis of privacy evolved through the decades, as, together with progress in information technology, more nuanced issues associated with the protection and sharing of personal information arose. We use three themes to connect insights from the literature. First, there are theoretical and empirical situations where the protection of privacy can both enhance and detract from economic surplus and allocative efficiency. Second, consumers’ ability to make informed decisions about their privacy is severely hindered, because most of the time they are in a position of imperfect information regarding when their data is collected, with what purposes, and with what consequences. Third, specific heuristics can profoundly influence privacy decision-making. We conclude by highlighting some of the ongoing issues in the privacy debate.

Posted on March 7, 2016 at 3:59 PM • 8 Comments

Data Is a Toxic Asset

Thefts of personal information aren’t unusual. Every week, thieves break into networks and steal data about people, often tens of millions at a time. Most of the time it’s information that’s needed to commit fraud, as happened in 2015 to Experian and the IRS.

Sometimes it’s stolen for purposes of embarrassment or coercion, as in the 2015 cases of Ashley Madison and the US Office of Personnel Management. The latter exposed highly sensitive personal data that affects the security of millions of government employees, probably to the Chinese. Always it’s personal information about us, information that we shared with the expectation that the recipients would keep it secret. And in every case, they did not.

The telecommunications company TalkTalk admitted that its data breach last year resulted in criminals using customer information to commit fraud. This was more bad news for a company that’s been hacked three times in the past 12 months, and has already seen some disastrous effects from losing customer data, including £60 million (about $83 million) in damages and over 100,000 customers. Its stock price took a pummeling as well.

People have been writing about 2015 as the year of data theft. I’m not sure if more personal records were stolen last year than in other recent years, but it certainly was a year for big stories about data thefts. I also think it was the year that industry started to realize that data is a toxic asset.

The phrase “big data” refers to the idea that large databases of seemingly random data about people are valuable. Retailers save our purchasing habits. Cell phone companies and app providers save our location information.

Telecommunications providers, social networks, and many other types of companies save information about who we talk to and share things with. Data brokers save everything about us they can get their hands on. This data is saved and analyzed, bought and sold, and used for marketing and other persuasive purposes.

And because saving all this data is so cheap, there’s no reason not to save as much as possible, and save it all forever. Figuring out what isn’t worth saving is hard. And because someday the companies might figure out how to turn the data into money, until recently there was absolutely no downside to saving everything. That changed this past year.

What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous.

Saving it is dangerous because it’s highly personal. Location data reveals where we live, where we work, and how we spend our time. If we all have a location tracker like a smartphone, correlating data reveals who we spend our time with—including who we spend the night with.

Our Internet search data reveals what’s important to us, including our hopes, fears, desires and secrets. Communications data reveals who our intimates are, and what we talk about with them. I could go on. Our reading habits, or purchasing data, or data from sensors as diverse as cameras and fitness trackers: All of it can be intimate.

Saving it is dangerous because many people want it. Of course companies want it; that’s why they collect it in the first place. But governments want it, too. In the United States, the National Security Agency and FBI use secret deals, coercion, threats and legal compulsion to get at the data. Foreign governments just come in and steal it. When a company with personal data goes bankrupt, it’s one of the assets that gets sold.

Saving it is dangerous because it’s hard for companies to secure. For a lot of reasons, computer and network security is very difficult. Attackers have an inherent advantage over defenders, and a sufficiently skilled, funded and motivated attacker will always get in.

And saving it is dangerous because failing to secure it is damaging. It will reduce a company’s profits, reduce its market share, hurt its stock price, cause it public embarrassment, and—in some cases—result in expensive lawsuits and, occasionally, criminal charges.

All this makes data a toxic asset, and it continues to be toxic as long as it sits in a company’s computers and networks. The data is vulnerable, and the company is vulnerable. It’s vulnerable to hackers and governments. It’s vulnerable to employee error. And when there’s a toxic data spill, millions of people can be affected. The 2015 Anthem Health data breach affected 80 million people. The 2013 Target Corp. breach affected 110 million.

This toxic data can sit in organizational databases for a long time. Some of the stolen Office of Personnel Management data was decades old. Do you have any idea which companies still have your earliest e-mails, or your earliest posts on that now-defunct social network?

If data is toxic, why do organizations save it?

There are three reasons. The first is that we’re in the middle of the hype cycle of big data. Companies and governments are still punch-drunk on data, and have believed the wildest of promises on how valuable that data is. The research showing that more data isn’t necessarily better, and that there are serious diminishing returns when adding additional data to processes like personalized advertising, is just starting to come out.

The second is that many organizations are still downplaying the risks. Some simply don’t realize just how damaging a data breach would be. Some believe they can completely protect themselves against a data breach, or at least that their legal and public relations teams can minimize the damage if they fail. And while there’s certainly a lot that companies can do technically to better secure the data they hold about all of us, there’s no better security than deleting the data.

The last reason is that some organizations understand both the first two reasons and are saving the data anyway. The culture of venture-capital-funded start-up companies is one of extreme risk taking. These are companies that are always running out of money, that always know their impending death date.

They are so far from profitability that their only hope for surviving is to get even more money, which means they need to demonstrate rapid growth or increasing value. This motivates those companies to take risks that larger, more established, companies would never take. They might take extreme chances with our data, even flout regulations, because they literally have nothing to lose. And often, the most profitable business models are the most risky and dangerous ones.

We can be smarter than this. We need to regulate what corporations can do with our data at every stage: collection, storage, use, resale and disposal. We can make corporate executives personally liable so they know there’s a downside to taking chances. We can make the business models that involve massively surveilling people the less compelling ones, simply by making certain business practices illegal.

The Ashley Madison data breach was such a disaster for the company because it saved its customers’ real names and credit card numbers. It didn’t have to do it this way. It could have processed the credit card information, given the user access, and then deleted all identifying information.

To be sure, it would have been a different company. It would have had less revenue, because it couldn’t charge users a monthly recurring fee. Users who lost their password would have had more trouble re-accessing their account. But it would have been safer for its customers.
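
For the flavor of it, here is a minimal sketch of the “charge the card, then keep only a receipt” design; the payment call and the field names are hypothetical:

    # Illustrative data-minimization sketch. The payment-processor call is a
    # hypothetical stand-in; the point is what never gets stored.
    import hashlib
    import os
    import secrets

    def charge_card(card_number: str, card_expiry: str) -> str:
        """Stand-in for a real payment-processor call; returns an opaque receipt."""
        return secrets.token_hex(16)

    def create_account(card_number: str, card_expiry: str, password: str) -> dict:
        receipt = charge_card(card_number, card_expiry)   # card data never stored
        salt = os.urandom(16)
        return {
            "account_id": secrets.token_hex(8),           # random handle, no real name
            "pw_salt": salt.hex(),
            "pw_hash": hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                           200_000).hex(),
            "paid": receipt,                              # proves payment, identifies no one
        }

A breach of that table is embarrassing, but it exposes no names and no card numbers.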

Similarly, the Office of Personnel Management didn’t have to store everyone’s information online and accessible. It could have taken older records offline, or at least onto a separate network with more secure access controls. Yes, it wouldn’t be immediately available to government employees doing research, but it would have been much more secure.

Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy.

This essay previously appeared on CNN.com.

Posted on March 4, 2016 at 5:32 AM • 45 Comments

DROWN Attack

Earlier this week, we learned of yet another attack against SSL/TLS where an attacker can force people to use insecure algorithms. It’s called DROWN. Here’s a good news article on the attack, the technical paper describing the attack, and a very good technical blog post by Matthew Green.

As an aside, I am getting pretty annoyed at all the marketing surrounding vulnerabilities these days. Vulnerabilities do not need a catchy name, a dedicated website—even though it’s a very good website—and a logo.

Posted on March 3, 2016 at 2:09 PM • 24 Comments

Security Vulnerabilities in Wireless Keyboards

Many wireless keyboards have a security vulnerability that allows someone to hack the computer using the keyboard-computer link. (Technical details here.)

An attacker can launch the attack from up to 100 meters away. The attacker is able to take control of the target computer, without physically being in front of it, and type arbitrary text or send scripted commands. It is therefore possible to rapidly perform malicious activities without being detected.

The MouseJack exploit centers around injecting unencrypted keystrokes into a target computer. Mouse movements are usually sent unencrypted, and keystrokes are often encrypted (to prevent eavesdropping on what is being typed). However, the MouseJack vulnerability takes advantage of affected receiver dongles, and their associated software, allowing unencrypted keystrokes transmitted by an attacker to be passed on to the computer’s operating system as if the victim had legitimately typed them.

Affected devices are starting to patch. Here’s Logitech:

Logitech said that it has developed a firmware update, which is available for download. It is the only one among the affected vendors to respond so far with a patch.

“Logitech’s Unifying technology was launched in 2007 and has been used by millions of our consumers since. To our knowledge, we have never been contacted by any consumer with such an issue,” said Asif Ahsan, Senior Director, Engineering, Logitech. “We have nonetheless taken Bastille Security’s work seriously and developed a firmware fix. If any of our customers have concerns, and would like to ensure that this potential vulnerability is eliminated…They should also ensure their Logitech Options software is up to date.”

Posted on March 3, 2016 at 6:29 AM

The Mathematics of Conspiracy

This interesting study builds a mathematical model for the continued secrecy of conspiracies, and tries to predict how long it would take before they are revealed to the general public, either wittingly or unwittingly.

The equation developed by Dr Grimes, a post-doctoral physicist at Oxford, relied upon three factors: the number of conspirators involved, the amount of time that has passed, and the intrinsic probability of a conspiracy failing.

He then applied his equation to four famous conspiracy theories: The belief that the Moon landing was faked, the belief that climate change is a fraud, the belief that vaccines cause autism, and the belief that pharmaceutical companies have suppressed a cure for cancer.

Dr Grimes’s analysis suggests that if these four conspiracies were real, most are very likely to have been revealed as such by now.

Specifically, the Moon landing “hoax” would have been revealed in 3.7 years, the climate change “fraud” in 3.7 to 26.8 years, the vaccine-autism “conspiracy” in 3.2 to 34.8 years, and the cancer “conspiracy” in 3.2 years.

He also ran the model against two actual conspiracies: the NSA’s PRISM program and the Tuskegee syphilis experiment.

From the paper:

Abstract: Conspiratorial ideation is the tendency of individuals to believe that events and power relations are secretly manipulated by certain clandestine groups and organisations. Many of these ostensibly explanatory conjectures are non-falsifiable, lacking in evidence or demonstrably false, yet public acceptance remains high. Efforts to convince the general public of the validity of medical and scientific findings can be hampered by such narratives, which can create the impression of doubt or disagreement in areas where the science is well established. Conversely, historical examples of exposed conspiracies do exist and it may be difficult for people to differentiate between reasonable and dubious assertions. In this work, we establish a simple mathematical model for conspiracies involving multiple actors with time, which yields failure probability for any given conspiracy. Parameters for the model are estimated from literature examples of known scandals, and the factors influencing conspiracy success and failure are explored. The model is also used to estimate the likelihood of claims from some commonly-held conspiratorial beliefs; these are namely that the moon-landings were faked, climate-change is a hoax, vaccination is dangerous and that a cure for cancer is being suppressed by vested interests. Simulations of these claims predict that intrinsic failure would be imminent even with the most generous estimates for the secret-keeping ability of active participants—the results of this model suggest that large conspiracies (≥1000 agents) quickly become untenable and prone to failure. The theory presented here might be useful in counteracting the potentially deleterious consequences of bogus and anti-science narratives, and examining the hypothetical conditions under which sustainable conspiracy might be possible.
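For intuition, here is a back-of-the-envelope version of this sort of model in Python. It is not Grimes’s actual equations; it simply treats each conspirator as having a small, independent chance of exposing the secret each year (the rate below is made up) and asks how likely at least one leak is after a given time.

    def exposure_probability(conspirators, years, p_leak_per_person_year):
        # Probability that at least one conspirator has leaked, assuming
        # independent leaks at a constant per-person annual rate.
        p_all_silent = (1.0 - p_leak_per_person_year) ** (conspirators * years)
        return 1.0 - p_all_silent

    # Even with very disciplined participants (0.1% per person per year),
    # a 1,000-person conspiracy is more likely than not to leak within a year.
    print(exposure_probability(1000, 1.0, 0.001))   # ~0.63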

Lots on the psychology of conspiracy theories here.

EDITED TO ADD (3/12): This essay debunks the above research.

Posted on March 2, 2016 at 12:39 PM

Company Tracks Iowa Caucusgoers by their Cell Phones

It’s not just governments. Companies like Dstillery are doing this too:

“We watched each of the caucus locations for each party and we collected mobile device ID’s,” Dstillery CEO Tom Phillips said. “It’s a combination of data from the phone and data from other digital devices.”

Dstillery found some interesting things about voters. For one, people who loved to grill or work on their lawns overwhelmingly voted for Trump in Iowa, according to Phillips. There were some pretty unexpected characteristics that came up, too.

“NASCAR was the one outlier, for Trump and Clinton,” Phillips said. “In Clinton’s counties, NASCAR way over-indexed.”

Kashmir Hill wondered how:

What really happened is that Dstillery gets information from people’s phones via ad networks. When you open an app or look at a browser page, there’s a very fast auction that happens where different advertisers bid to get to show you an ad. Their bid is based on how valuable they think you are, and to decide that, your phone sends them information about you, including, in many cases, an identifying code (that they’ve built a profile around) and your location information, down to your latitude and longitude.

Yes, for the vast majority of people, ad networks are collecting far more information about them than the NSA—but they don’t explicitly link it to their names.

So on the night of the Iowa caucus, Dstillery flagged all the auctions that took place on phones in latitudes and longitudes near caucus locations. It wound up spotting 16,000 devices on caucus night, as those people had granted location privileges to the apps or devices that served them ads. It captured those mobile ID’s and then looked up the characteristics associated with those IDs in order to make observations about the kind of people that went to Republican caucus locations (young parents) versus Democrat caucus locations. It drilled down farther (e.g., ‘people who like NASCAR voted for Trump and Clinton’) by looking at which candidate won at a particular caucus location.
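A short Python sketch shows how little work the flagging step takes once the bid requests carry a device ID and coordinates. The field names and coordinates here are made up; this illustrates the geofencing idea, not Dstillery’s actual system.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two latitude/longitude points.
        r = 6371000.0
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def devices_near(bid_requests, locations, radius_m=200):
        # Collect device IDs whose bid requests came from within radius_m
        # of any of the listed locations.
        seen = set()
        for req in bid_requests:
            for loc in locations:
                if haversine_m(req["lat"], req["lon"],
                               loc["lat"], loc["lon"]) <= radius_m:
                    seen.add(req["device_id"])
                    break
        return seen

    # Made-up example: one caucus site and one nearby bid request.
    caucus_sites = [{"lat": 41.5868, "lon": -93.6250}]
    bids = [{"device_id": "abc-123", "lat": 41.5869, "lon": -93.6251}]
    print(devices_near(bids, caucus_sites))   # {'abc-123'}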

Okay, so it didn’t collect names. But how much harder could that have been?

Posted on March 2, 2016 at 6:34 AM

WikiLeaks Publishes NSA Target List

As part of an ongoing series of classified NSA target lists and raw intercepts, WikiLeaks published details of the NSA’s spying on UN Secretary General Ban Ki-moon, German Chancellor Angela Merkel, Israeli Prime Minister Benjamin Netanyahu, former Italian Prime Minister Silvio Berlusconi, former French President Nicolas Sarkozy, and key Japanese and EU trade reps. WikiLeaks never says this, but it’s pretty obvious that these documents don’t come from Snowden’s archive.

I’ve said this before, but it bears repeating. Spying on foreign leaders is exactly what I expect the NSA to do. It’s spying on the rest of the world that I have a problem with.

Other leaks in this series: France, Germany, Brazil, Japan, Italy, the European Union, and the United Nations.

BoingBoing post.

Posted on March 1, 2016 at 12:55 PM

Lots More Writing about the FBI vs. Apple

I have written two posts on the case, and at the bottom of those essays are lots of links to other essays written by other people. Here are more links.

If you read just one thing on the technical aspects of this case, read Susan Landau’s testimony before the House Judiciary Committee. It’s very comprehensive, and very good.

Others are testifying, too.

Apple is fixing the vulnerability. The Justice Department wants Apple to unlock nine more phones.

Apple prevailed in a different iPhone unlocking case.

Why the First Amendment is a bad argument. And why the All Writs Act is the wrong tool.

Dueling poll results: Pew Research reports that 51% side with the FBI, while a Reuters poll reveals that “forty-six percent of respondents said they agreed with Apple’s position, 35 percent said they disagreed and 20 percent said they did not know,” and that “a majority of Americans do not want the government to have access to their phone and Internet communications, even if it is done in the name of stopping terror attacks.”

One of the worst possible outcomes from this story is that people stop installing security updates because they don’t trust them. After all, a security update mechanism is also a mechanism by which the government can install a backdoor. Here’s one essay that talks about that. Here’s another.

Cory Doctorow comments on the FBI’s math denialism. Yochai Benkler sees this as a symptom of a greater breakdown in government trust. More good commentary from Jeff Schiller, Julian Sanchez, and Jonathan Zdziarski. Marcy Wheeler’s comments. Two posts by Dan Wallach. Michael Chertoff and associates weigh in on the side of security over surveillance.

Here’s a Catholic op-ed on Apple’s side. Bill Gates sides with the FBI. And a great editorial cartoon.

Here’s high snark from Stewart Baker. Baker asks some very good (and very snarky) questions. But the questions are beside the point. This case isn’t about Apple or whether Apple is being hypocritical, any more than climate change is about Al Gore’s character. This case is about the externalities of what the government is asking for.

One last thing to read.

Okay, one more, on the more general back door issue.

EDITED TO ADD (3/2): Wall Street Journal editorial. And here’s video from the House Judiciary Committee hearing. Skip to around 34:50 to get to the actual beginning.

EDITED TO ADD (3/3): Interview with Rep. Darrell Issa. And at the RSA Conference this week, both Defense Secretary Ash Carter and Microsoft’s chief legal officer Brad Smith sided with Apple against the FBI.

EDITED TO ADD (3/4): Comments on the case from the UN High Commissioner for Human Rights.

EDITED TO ADD (3/7): Op ed by Apple. And an interesting article on the divide in the Obama Administration.

EDITED TO ADD (3/10): Another good essay.

EDITED TO ADD (3/13): President Obama’s comments on encryption: he wants back doors. Cory Doctorow reports.

Posted on March 1, 2016 at 6:47 AM
