December 15, 2017
by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2017/…>. These same essays and news items appear in the “Schneier on Security” blog at <https://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Warrant Protections against Police Searches of Our Data
- News
- Uber Data Hack
- Schneier News
- NSA “Red Disk” Data Leak
- New White House Announcement on the Vulnerability Equities Process
Warrant Protections against Police Searches of Our Data
The cell phones we carry with us constantly are the most perfect surveillance device ever invented, and our laws haven’t caught up to that reality. That might change soon.
This week, the Supreme Court will hear a case with profound implications for your security and privacy in the coming years. The Fourth Amendment’s prohibition of unlawful search and seizure is a vital right that protects us all from police overreach, and the way the courts interpret it is increasingly nonsensical in our computerized and networked world. The Supreme Court can either update current law to reflect the world, or it can further solidify an unnecessary and dangerous police power.
The case centers on cell phone location data and whether the police need a warrant to get it, or if they can use a simple subpoena, which is easier to obtain. Current Fourth Amendment doctrine holds that you lose all privacy protections over any data you willingly share with a third party. Your cellular provider, under this interpretation, is a third party with whom you’ve willingly shared your movements, 24 hours a day, going back months—even though you don’t really have any choice about whether to share with them. So police can request records of where you’ve been from cell carriers without any judicial oversight. The case before the court, Carpenter v. United States, could change that.
Traditionally, information that was most precious to us was physically close to us. It was on our bodies, in our homes and offices, in our cars. Because of that, the courts gave that information extra protections. Information that we stored far away from us, or gave to other people, was afforded fewer protections. Police searches have been governed by the “third-party doctrine,” which explicitly says that information we share with others is not considered private.
The Internet has turned that thinking upside-down. Our cell phones know who we talk to and, if we’re talking via text or e-mail, what we say. They track our location constantly, so they know where we live and work. Because they’re the first and last thing we check every day, they know when we go to sleep and when we wake up. Because everyone has one, they know whom we sleep with. And because of how those phones work, all that information is naturally shared with third parties.
More generally, all our data is literally stored on computers belonging to other people. It’s our e-mail, text messages, photos, Google docs, and more—all in the cloud. We store it there not because it’s unimportant, but precisely because it is important. And as the Internet of Things computerizes the rest of our lives, even more data will be collected by other people: data from our health trackers and medical devices, data from our home sensors and appliances, data from Internet-connected “listeners” like Alexa, Siri, and your voice-activated television.
All this data will be collected and saved by third parties, sometimes for years. The result is a detailed dossier of your activities more complete than any private investigator—or police officer—could possibly collect by following you around.
The issue here is not whether the police should be allowed to use that data to help solve crimes. Of course they should. The issue is whether that information should be protected by the warrant process, which requires the police to have probable cause to investigate you and to get approval from a court.
Warrants are a security mechanism. They prevent the police from abusing their authority to investigate someone they have no reason to suspect of a crime. They prevent the police from going on “fishing expeditions.” They protect our rights and liberties, even as we willingly give up our privacy to the legitimate needs of law enforcement.
The third-party doctrine never made a lot of sense. Just because I share an intimate secret with my spouse, friend, or doctor doesn’t mean that I no longer consider it private. It makes even less sense in today’s hyper-connected world. It’s long past time the Supreme Court recognized that a months-long history of my movements is private, and my e-mails and other personal data deserve the same protections, whether they’re on my laptop or on Google’s servers.
This essay previously appeared in the Washington Post.
https://www.washingtonpost.com/news/posteverything/…
Details on the case.
https://www.nytimes.com/2017/11/27/us/politics/…
https://www.washingtonpost.com/news/…
Two opinion pieces.
https://www.nytimes.com/2017/11/26/opinion/…
https://www.theguardian.com/commentisfree/2017/nov/…
I signed on to two amicus briefs on the case.
https://www.aclu.org/legal-document/…
https://www.aclu.org/legal-document/…
Good commentary on the Supreme Court oral arguments.
https://www.theatlantic.com/politics/archive/2017/…
News
This digital security guide by Motherboard is very good.
https://motherboard.vice.com/en_us/article/d3devm/…
I put it alongside EFF’s “Surveillance Self-Defense” and John Scott-Railton’s “Digital Security Low Hanging Fruit.” There’s also “Digital Security and Privacy for Human Rights Defenders.”
https://ssd.eff.org/en
https://www.johnscottrailton.com/…
https://www.frontlinedefenders.org/en/…
Amazon Key is an IoT door lock that can enable one-time access codes for delivery people. To further secure that system, Amazon sells Cloud Cam, a camera that watches the door to ensure that delivery people don’t abuse their one-time access privilege. Cloud Cam has been hacked.
https://www.wired.com/story/…
Amazon has a cloud for US classified data. The physical and computer requirements for handling classified information are considerable, both in terms of technology and procedure. I am surprised that a company with no experience dealing with classified data was able to do it.
https://www.washingtonpost.com/news/business/wp/…
The security researchers at Princeton are posting the results of some very interesting research into web surveillance: “You may know that most websites have third-party analytics scripts that record which pages you visit and the searches you make. But lately, more and more sites use “session replay” scripts. These scripts record your keystrokes, mouse movements, and scrolling behavior, along with the entire contents of the pages you visit, and send them to third-party servers. Unlike typical analytics services that provide aggregate statistics, these scripts are intended for the recording and playback of individual browsing sessions, as if someone is looking over your shoulder.”
https://freedom-to-tinker.com/2017/11/15/…
https://motherboard.vice.com/en_us/article/59yexk/…
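To make concrete what these scripts collect, here is a minimal sketch, in Python purely for illustration, of the kind of event batch a session-replay script accumulates and ships to a third-party collector. The session structure, field names, and the idea of a single collector endpoint are hypothetical, not any vendor’s actual format.

    import json
    import time

    # A recording session: the page contents plus a running log of interactions.
    session = {
        "session_id": "abc123",   # hypothetical identifier tying events together
        "page_html": "<html>...entire page contents, including form fields...</html>",
        "events": [],
    }

    def record(kind, **details):
        # Append a timestamped interaction event, as a replay script would.
        session["events"].append({"t": time.time(), "kind": kind, **details})

    record("keystroke", key="j")      # includes text typed into forms
    record("mousemove", x=412, y=88)
    record("scroll", y=1300)

    print(json.dumps(session, indent=2))
    # A real script would periodically POST batches like this to the analytics
    # vendor's server, which is what puts the recording in third-party hands.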
Mozilla reviews the privacy practices of Internet-connected toys, home accessories, exercise equipment, and more.
https://advocacy.mozilla.org/en-US/privacynotincluded
Man-in-the-middle relay attack against electronic car-door openers.
http://www.bbc.com/news/uk-england-birmingham-42132689
A Turkish Airlines flight made an emergency landing because someone named his wireless network (presumably from his smartphone) “bomb on board.”
https://www.reuters.com/article/…
In 2006, I wrote an essay titled “Refuse to be Terrorized.” (I am also reminded of my 2007 essay, “The War on the Unexpected.”) A decade later, incidents like this one seem to be less frequent, although they haven’t stopped entirely. Progress, I suppose.
https://www.schneier.com/essays/archives/2006/08/…
https://www.schneier.com/blog/archives/2007/11/…
I agree with Lorenzo Franceschi-Bicchierai, “Cryptocurrencies aren’t ‘crypto.’”
https://motherboard.vice.com/en_us/article/43nk9b/…
Matt Blaze’s House testimony on the security of voting machines is an excellent read.
https://oversight.house.gov/wp-content/uploads/2017/…
https://oversight.house.gov/hearing/…
The German Interior Minister is preparing a bill that would allow the government to mandate backdoors in encryption.
https://www.bleepingcomputer.com/news/government/…
No details about how likely this is to pass. I am skeptical.
New research found that many banking apps implement certificate pinning as a security feature, but fail to verify the hostname in the server’s certificate. This leaves the apps open to man-in-the-middle attacks.
http://www.cs.bham.ac.uk/~garciaf/publications/…
https://www.darkreading.com/mobile/…
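The underlying flaw: pinning checks which certificate (or issuing CA) the server presents, but not which hostname that certificate was issued for, so skipping ordinary hostname verification lets an attacker who can obtain any acceptable certificate impersonate the bank. Here is a minimal Python sketch of the broken pattern next to ordinary verified TLS; the host and fingerprint are placeholders, and this is an illustration of the bug class, not the apps’ actual code.

    import hashlib
    import socket
    import ssl

    HOST = "example.com"   # placeholder host
    PIN = "00" * 32        # placeholder SHA-256 fingerprint of the expected certificate

    def broken_pin_only(host: str, pin: str) -> None:
        # Bug: chain and hostname validation are switched off; the only check
        # is that the presented certificate hashes to the pinned value, which
        # says nothing about which hostname that certificate was issued for.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, 443)) as sock, \
                ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
            if der is None or hashlib.sha256(der).hexdigest() != pin:
                raise ssl.SSLError("certificate pin mismatch")

    def verified_tls(host: str) -> None:
        # Correct baseline: the default context validates the CA chain *and*
        # that the certificate actually names the host we meant to reach.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443)) as sock, \
                ctx.wrap_socket(sock, server_hostname=host) as tls:
            pass  # succeeds only if both the chain and the hostname check out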
The FDA has approved a pill with an embedded sensor that can report when it is swallowed. The pill transmits information to a wearable patch, which in turn transmits information to a smartphone.
https://www.nytimes.com/2017/11/13/health/…
Last month, the DHS announced that it was able to remotely hack a Boeing 757:
http://www.aviationtoday.com/2017/11/08/…
Good article on the history and practice of e-mail tracking:
https://www.wired.com/story/…
Security Planner is a custom security advice tool from Citizen Lab. Answer a few questions, and it gives you a few simple things you can do to improve your security. It’s not meant to be comprehensive, but instead to give people things they can actually do to immediately improve their security. I don’t see it replacing any of the good security guides out there, but instead augmenting them. The advice is peer reviewed, and the team behind Security Planner is committed to keeping it up to date.
https://securityplanner.org/
Note: I am an advisor to this project.
Fascinating research on tracking people without GPS, using clues from various sensors on their phones and publicly available information such as weather data. This is a good example of how powerful synthesizing information from disparate data sources can be. We spend too much time worrying about individual data-collection systems, and not enough about the techniques used to analyze the data those systems collect.
https://www.androidauthority.com/tracked-gps-off-822865
http://ieeexplore.ieee.org/document/8038870/authors?…
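As a toy illustration of the idea (not the researchers’ actual algorithm; all numbers below are made up): a compass heading plus an accelerometer-derived step count already yields a dead-reckoned track, which public map, elevation, and weather data can then anchor to real streets.

    import math

    STRIDE_M = 0.75   # assumed average stride length, in meters

    # Fabricated sensor summary: (compass heading in degrees clockwise from
    # north, number of steps taken) for each walking segment.
    segments = [(0, 120), (90, 300), (45, 80)]

    x = y = 0.0   # meters east / north of the unknown starting point
    for heading_deg, steps in segments:
        heading = math.radians(heading_deg)
        x += steps * STRIDE_M * math.sin(heading)  # east component
        y += steps * STRIDE_M * math.cos(heading)  # north component

    print(f"Displacement: {x:.0f} m east, {y:.0f} m north of the start")
    # Barometric pressure (compared against public weather reports) and the
    # turn pattern (compared against public street maps) can then pin this
    # relative track to an absolute location.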
Uber Data Hack
Uber was hacked, losing data on 57 million driver and rider accounts. The company kept it quiet for over a year. The details are particularly damning:
The two hackers stole data about the company’s riders and drivers—including phone numbers, email addresses and names—from a third-party server and then approached Uber and demanded $100,000 to delete their copy of the data, the employees said.
Uber acquiesced to the demands, and then went further. The company tracked down the hackers and pushed them to sign nondisclosure agreements, according to the people familiar with the matter. To further conceal the damage, Uber executives also made it appear as if the payout had been part of a “bug bounty”—a common practice among technology companies in which they pay hackers to attack their software to test for soft spots.
And almost certainly illegal:
While it is not illegal to pay money to hackers, Uber may have violated several laws in its interaction with them.
By demanding that the hackers destroy the stolen data, Uber may have violated a Federal Trade Commission rule on breach disclosure that prohibits companies from destroying any forensic evidence in the course of their investigation.
The company may have also violated state breach disclosure laws by not disclosing the theft of Uber drivers’ stolen data. If the data stolen was not encrypted, Uber would have been required by California state law to disclose that driver’s license data from its drivers had been stolen in the course of the hacking.
https://www.nytimes.com/2017/11/21/technology/…
Schneier News
None. Enjoy the holidays.
NSA “Red Disk” Data Leak
ZDNet is reporting about another data leak, this one from the US Army’s Intelligence and Security Command (INSCOM), a command that works jointly with the NSA.
The disk image, when unpacked and loaded, is a snapshot of a hard drive dating back to May 2013 from a Linux-based server that forms part of a cloud-based intelligence sharing system, known as Red Disk. The project, developed by INSCOM’s Futures Directorate, was slated to complement the Army’s so-called distributed common ground system (DCGS), a legacy platform for processing and sharing intelligence, surveillance, and reconnaissance information.
[…]
Red Disk was envisioned as a highly customizable cloud system that could meet the demands of large, complex military operations. The hope was that Red Disk could provide a consistent picture from the Pentagon to deployed soldiers in the Afghan battlefield, including satellite images and video feeds from drones trained on terrorists and enemy fighters, according to a Foreign Policy report.
[…]
Red Disk was a modular, customizable, and scalable system for sharing intelligence across the battlefield, like electronic intercepts, drone footage and satellite imagery, and classified reports, for troops to access with laptops and tablets on the battlefield. Markings on files found in several directories imply the disk is “top secret,” and restricted from being shared with foreign intelligence partners.
A couple of points. One, this isn’t particularly sensitive. It’s an intelligence distribution system under development. It’s not raw intelligence. Two, this doesn’t seem to be classified data. Even the article hedges, using the unofficial term “highly sensitive.” Three, it doesn’t seem that Chris Vickery, the researcher who discovered the data, has published it.
Chris Vickery, director of cyber risk research at security firm UpGuard, found the data and informed the government of the breach in October. The storage server was subsequently secured, though its owner remains unknown.
This doesn’t feel like a big deal to me.
http://www.zdnet.com/article/…
https://www.upguard.com/breaches/cloud-leak-inscom
Slashdot thread.
https://it.slashdot.org/story/17/11/28/1844226/…
New White House Announcement on the Vulnerability Equities Process
The White House has released a new version of the Vulnerabilities Equities Process (VEP). This is the inter-agency process by which the US government decides whether to inform the software vendor of a vulnerability it finds, or keep it secret and use it to eavesdrop on or attack other systems. You can read the new policy or the fact sheet, but the best place to start is Cybersecurity Coordinator Rob Joyce’s blog post.
In considering a way forward, there are some key tenets on which we can build a better process.
*Improved transparency is critical.* The American people should have confidence in the integrity of the process that underpins decision making about discovered vulnerabilities. Since I took my post as Cybersecurity Coordinator, improving the VEP and ensuring its transparency have been key priorities, and we have spent the last few months reviewing our existing policy in order to improve the process and make key details about the VEP available to the public. Through these efforts, we have validated much of the existing process and ensured a rigorous standard that considers many potential equities.
*The interests of all stakeholders must be fairly represented.* At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound. These core considerations, which have been incorporated into the VEP Charter, help to standardize the process by which decision makers weigh the benefit to national security and the national interest when deciding whether to disclose or restrict knowledge of a vulnerability.
*Accountability of the process and those who operate it is important to establish confidence in those served by it.* Our public release of the unclassified portions of the Charter will shed light on aspects of the VEP that were previously shielded from public review, including who participates in the VEP’s governing body, known as the Equities Review Board. We make it clear that departments and agencies with protective missions participate in VEP discussions, as well as other departments and agencies that have broader equities, like the Department of State and the Department of Commerce. We also clarify what categories of vulnerabilities are submitted to the process and ensure that any decision not to disclose a vulnerability will be reevaluated regularly. There are still important reasons to keep many of the specific vulnerabilities evaluated in the process classified, but we will release an annual report that provides metrics about the process to further inform the public about the VEP and its outcomes.
*Our system of government depends on informed and vigorous dialogue to discover and make available the best ideas that our diverse society can generate.* This publication of the VEP Charter will likely spark discussion and debate. This discourse is important. I also predict that articles will make breathless claims of “massive stockpiles” of exploits while describing the issue. That simply isn’t true. The annual reports and transparency of this effort will reinforce that fact.
Mozilla is pleased with the new charter. I am less so; it looks to me like the same old policy with some new transparency measures, and I’m not sure I trust those. The devil is in the details, and we don’t know the details. And the charter has giant loopholes that pretty much anything can fall through:
The United States Government’s decision to disclose or restrict vulnerability information could be subject to restrictions by partner agreements and sensitive operations. Vulnerabilities that fall within these categories will be cataloged by the originating Department/Agency internally and reported directly to the Chair of the ERB. The details of these categories are outlined in Annex C, which is classified. Quantities of excepted vulnerabilities from each department and agency will be provided in ERB meetings to all members.
This is me from last June:
There’s a lot we don’t know about the VEP. The Washington Post says that the NSA used EternalBlue “for more than five years,” which implies that it was discovered after the 2010 process was put in place. It’s not clear if all vulnerabilities are given such consideration, or if bugs are periodically reviewed to determine if they should be disclosed. That said, any VEP that allows something as dangerous as EternalBlue—or the Cisco vulnerabilities that the Shadow Brokers leaked last August—to remain unpatched for years isn’t serving national security very well. As a former NSA employee said, the quality of intelligence that could be gathered was “unreal.” But so was the potential damage. The NSA must avoid hoarding vulnerabilities.
I stand by that, and am not sure the new policy changes anything.
https://www.whitehouse.gov/sites/whitehouse.gov/…
https://www.whitehouse.gov/sites/whitehouse.gov/…
https://www.whitehouse.gov//2017/11/15/…
Mozilla’s reaction:
https://blog.mozilla.org/netpolicy/2017/11/15/…
Me from last June:
https://www.schneier.com/blog/archives/2017/06/…
More commentary.
https://www.lawfareblog.com/…
https://www.darkreading.com/attacks-breaches/…
Here’s more about the Windows vulnerabilities hoarded by the NSA and released by the Shadow Brokers.
https://www.schneier.com/blog/archives/2017/07/…
Adam Shostack points out that the process does not cover design flaws or trade-offs, and that those need to be covered:
…we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources and methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure.
https://adam.shostack.org//2017/11/…
https://blogs.cisco.com/security/shadow-brokers
http://www.securityweek.com/…
https://www.washingtonpost.com/business/technology/…
https://www.schneier.com/blog/archives/2016/08/…
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust That Society Needs to Thrive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and CTO of IBM Resilient and Special Advisor to IBM Security. See <https://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM Resilient.
Copyright (c) 2017 by Bruce Schneier.