Crypto-Gram

June 15, 2017

by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2017/…>. These same essays and news items appear in the “Schneier on Security” blog at <https://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      WannaCry Ransomware
      The Future of Ransomware
      WannaCry and Vulnerabilities
      News
      NSA Brute-Force Keysearch Machine
      NSA Abandons “About” Searches
      Extending the Airplane Laptop Ban
      Security and Human Behavior (SHB 2017)
      Passwords at the Border
      Schneier News
      NSA Document Outlining Russian Attempts to Hack Voter Rolls
      Who Are the Shadow Brokers?


WannaCry Ransomware

Criminals go where the money is, and cybercriminals are no exception.

And right now, the money is in ransomware.

It’s a simple scam. Encrypt the victim’s hard drive, then extract a fee to decrypt it. The scammers can’t charge too much, because they want the victim to pay rather than give up on the data. But they can charge individuals a few hundred dollars, and they can charge institutions like hospitals a few thousand. Do it at scale, and it’s a profitable business.
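
To make that scale argument concrete, here’s a toy back-of-the-envelope calculation in Python. Every number in it is hypothetical (the infection count, the fraction of victims who pay, the split between individuals and institutions); the point is only how quickly modest per-victim fees add up.

  # Toy ransomware-economics arithmetic. Every number here is hypothetical;
  # the point is only that small fees multiplied by scale become a business.
  infections = 200_000        # infected machines (hypothetical)
  payment_rate = 0.01         # fraction of victims who pay (hypothetical)
  individual_ransom = 300     # "a few hundred dollars"
  institution_ransom = 5_000  # "a few thousand" for a hospital
  institution_share = 0.02    # fraction of payers that are institutions (hypothetical)

  payers = infections * payment_rate
  revenue = payers * ((1 - institution_share) * individual_ransom
                      + institution_share * institution_ransom)
  print(f"{payers:.0f} paying victims, roughly ${revenue:,.0f} in ransom")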

And scale is how ransomware works. Computers are infected automatically, with viruses that spread over the internet. Payment is no more difficult than buying something online—and payable in untraceable bitcoin—with some ransomware makers offering tech support to those unsure of how to buy or transfer bitcoin. Customer service is important; people need to know they’ll get their files back once they pay.

And they want you to pay. If they’re lucky, they’ve encrypted your irreplaceable family photos, or the documents of a project you’ve been working on for weeks. Or maybe your company’s accounts receivable files or your hospital’s patient records. The more you need what they’ve stolen, the better.

The particular ransomware making headlines is called WannaCry, and it’s infected some pretty serious organizations.

What can you do about it? Your first line of defense is to diligently install every security update as soon as it becomes available, and to migrate to systems that vendors still support. Microsoft issued a security patch that protects against WannaCry months before the ransomware started infecting systems; the ransomware only works against computers that haven’t been patched. And many of the systems it infects are older computers, no longer normally supported by Microsoft—though the company did belatedly release a patch for those older systems. I know it’s hard, but until companies are forced to maintain old systems, you’re much safer upgrading.

This is easier advice for individuals than for organizations. You and I can pretty easily migrate to a new operating system, but organizations sometimes have custom software that breaks when they change OS versions or install updates. Many of the organizations hit by WannaCry had outdated systems for exactly these reasons. But as expensive and time-consuming as updating might be, the risks of not doing so are increasing.

Your second line of defense is good antivirus software. Sometimes ransomware tricks you into encrypting your own hard drive by clicking on a file attachment that you thought was benign. Antivirus software can often catch your mistake and prevent the malicious software from running. This isn’t perfect, of course, but it’s an important part of any defense.

Your third line of defense is to diligently back up your files. There are systems that do this automatically for your hard drive. You can invest in one of those. Or you can store your important data in the cloud. If your irreplaceable family photos are in a backup drive in your house, then the ransomware has that much less hold on you. If your e-mail and documents are in the cloud, then you can just reinstall the operating system and bypass the ransomware entirely. I know storing data in the cloud has its own privacy risks, but they may be less than the risks of losing everything to ransomware.
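
If you want a sense of how little machinery that third line of defense requires, here is a minimal Python sketch of a timestamped backup: it copies a folder to a dated snapshot on another drive. The paths are placeholders, and this is an illustration rather than a replacement for a tested backup product.

  # Minimal timestamped backup sketch. Paths are placeholders; this is an
  # illustration, not a replacement for a tested backup product.
  import shutil
  from datetime import datetime
  from pathlib import Path

  SOURCE = Path.home() / "Documents"       # what to protect (placeholder)
  BACKUP_ROOT = Path("/mnt/backup_drive")  # external drive or NAS (placeholder)

  def backup():
      stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
      destination = BACKUP_ROOT / f"documents_{stamp}"
      # copytree creates a fresh snapshot on each run; ransomware that later
      # encrypts SOURCE can't touch snapshots on a drive that isn't mounted.
      shutil.copytree(SOURCE, destination)
      print(f"Backed up {SOURCE} to {destination}")

  if __name__ == "__main__":
      backup()

Run something like this from a scheduler, and keep the backup drive disconnected between runs so the malware can’t reach it.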

That takes care of your computers and smartphones, but what about everything else? We’re deep into the age of the “Internet of things.”

There are now computers in your household appliances. There are computers in your cars and in the airplanes you travel on. Computers run our traffic lights and our power grids. These are all vulnerable to ransomware. The Mirai botnet exploited a vulnerability in internet-enabled devices like DVRs and webcams to launch a denial-of-service attack against a critical internet name server; next time it could just as easily disable the devices and demand payment to turn them back on.

Re-enabling a webcam will be cheap; re-enabling your car will cost more. And you don’t want to know how vulnerable implanted medical devices are to these sorts of attacks.

Commercial solutions are coming, probably a convenient repackaging of the three lines of defense described above. But it’ll be yet another security surcharge you’ll be expected to pay because the computers and internet-of-things devices you buy are so insecure. Because there are currently no liabilities for lousy software and no regulations mandating secure software, the market rewards software that’s fast and cheap at the expense of good. Until that changes, ransomware will continue to be a profitable line of criminal business.

This essay previously appeared in the “New York Daily News.”
http://www.nydailynews.com/news/national/…


The Future of Ransomware

Ransomware isn’t new, but it’s increasingly popular and profitable.

The concept is simple: Your computer gets infected with a virus that encrypts your files until you pay a ransom. It’s extortion taken to its networked extreme. The criminals provide step-by-step instructions on how to pay, sometimes even offering a help line for victims unsure how to buy bitcoin. The price is designed to be cheap enough for people to pay instead of giving up: a few hundred dollars in many cases. Those who design these systems know their market, and it’s a profitable one.

The ransomware that has affected systems in more than 150 countries recently, WannaCry, made press headlines last week, but it doesn’t seem to be more virulent or more expensive than other ransomware. This one has a particularly interesting pedigree: It’s based on an exploit developed by the National Security Agency for a vulnerability in many versions of the Windows operating system. The NSA’s code was, in turn, stolen by an unknown hacker group called the Shadow Brokers—widely believed by the security community to be the Russians—in 2014 and released to the public in April.

Microsoft patched the vulnerability a month earlier, presumably after being alerted by the NSA that the leak was imminent. But the vulnerability affected older versions of Windows that Microsoft no longer supports, and there are still many people and organizations that don’t regularly patch their systems. This allowed whoever wrote WannaCry—it could be anyone from a lone individual to an organized crime syndicate—to use it to infect computers and extort users.

The lessons for users are obvious: Keep your system patches up to date and regularly back up your data. This isn’t just good advice to defend against ransomware, but good advice in general. But it’s becoming obsolete.

Everything is becoming a computer. Your microwave is a computer that makes things hot. Your refrigerator is a computer that keeps things cold. Your car and television, the traffic lights and signals in your city and our national power grid are all computers. This is the much-hyped Internet of Things (IoT). It’s coming, and it’s coming faster than you might think. And as these devices connect to the Internet, they become vulnerable to ransomware and other computer threats.

It’s only a matter of time before people get messages on their car screens saying that the engine has been disabled and it will cost $200 in bitcoin to turn it back on. Or a similar message on their phones about their Internet-enabled door lock: Pay $100 if you want to get into your house tonight. Or pay far more if they want their embedded heart defibrillator to keep working.

This isn’t just theoretical. Researchers have already demonstrated a ransomware attack against smart thermostats, which may sound like a nuisance at first but can cause serious property damage if it’s cold enough outside. If the device under attack has no screen, you’ll get the message on the smartphone app you control it from.

Hackers don’t even have to come up with these ideas on their own; the government agencies whose code was stolen were already doing it. One of the leaked CIA attack tools targets Internet-enabled Samsung smart televisions.

Even worse, the usual solutions won’t work with these embedded systems. You have no way to back up your refrigerator’s software, and it’s unclear whether that solution would even work if an attack targets the functionality of the device rather than its stored data.

These devices will be around for a long time. Unlike our phones and computers, which we replace every few years, cars are expected to last at least a decade. We want our appliances to run for 20 years or more, our thermostats even longer.

What happens when the company that made our smart washing machine—or just the computer part—goes out of business, or otherwise decides that they can no longer support older models? WannaCry affected Windows versions as far back as XP, a version that Microsoft no longer supports. The company broke with policy and released a patch for those older systems, but it has both the engineering talent and the money to do so.

That won’t happen with low-cost IoT devices.

Those devices are built on the cheap, and the companies that make them don’t have the dedicated teams of security engineers ready to craft and distribute security patches. The economics of the IoT doesn’t allow for it. Even worse, many of these devices aren’t patchable. Remember last fall when the Mirai botnet infected hundreds of thousands of Internet-enabled digital video recorders, webcams and other devices and launched a massive denial-of-service attack that resulted in a host of popular websites dropping off the Internet? Most of those devices couldn’t be fixed with new software once they were attacked. The way you update your DVR is to throw it away and buy a new one.

Solutions aren’t easy and they’re not pretty. The market is not going to fix this unaided. Security is a hard-to-evaluate feature against a possible future threat, and consumers have long rewarded companies that provide easy-to-compare features and a quick time-to-market at the expense of security. We need to assign liabilities to companies that write insecure software that harms people, and possibly even issue and enforce regulations that require companies to maintain software systems throughout their life cycle. We may need minimum security standards for critical IoT devices. And it would help if the NSA got more involved in securing our information infrastructure and less in keeping it vulnerable so the government can eavesdrop.

I know this all sounds politically impossible right now, but we simply cannot live in a future where everything—from the things we own to our nation’s infrastructure—can be held for ransom by criminals again and again.

This essay previously appeared in the “Washington Post.”
https://www.washingtonpost.com/posteverything/wp/…

WannaCry:
https://www.washingtonpost.com/news/the-switch/wp/…
https://www.nytimes.com/2017/05/12/world/europe/…

NSA vulnerability:
https://arstechnica.com/security/2017/05/…

Shadow Brokers:
https://www.lawfareblog.com/…
https://arstechnica.com/security/2017/04/…

The Microsoft patch:
https://arstechnica.com/security/2017/04/…

Me on the IoT:
http://nymag.com/selectall/2017/01/…

Ransomware against smart thermostats:
https://motherboard.vice.com/en_us/article/…

CIA exploit against smart televisions:
https://theintercept.com/2017/03/07/…

Patching the IoT:
https://www.wired.com/2014/01/…

Mirai botnet:
https://motherboard.vice.com/en_us/article/…


WannaCry and Vulnerabilities

There is plenty of blame to go around for the WannaCry ransomware that spread throughout the Internet earlier this month, disrupting work at hospitals, factories, businesses, and universities. First, there are the writers of the malicious software, which blocks victims’ access to their computers until they pay a fee. Then there are the users who didn’t install the Windows security patch that would have prevented an attack. A small portion of the blame falls on Microsoft, which wrote the insecure code in the first place. One could certainly condemn the Shadow Brokers, a group of hackers with links to Russia who stole and published the National Security Agency attack tools that included the exploit code used in the ransomware. But before all of this, there was the NSA, which found the vulnerability years ago and decided to exploit it rather than disclose it.

All software contains bugs or errors in the code. Some of these bugs have security implications, granting an attacker unauthorized access to or control of a computer. These vulnerabilities are rampant in the software we all use. A piece of software as large and complex as Microsoft Windows will contain hundreds of them, maybe more. These vulnerabilities have obvious criminal uses that can be neutralized if patched. Modern software is patched all the time—either on a fixed schedule, such as once a month with Microsoft, or whenever required, as with the Chrome browser.

When the US government discovers a vulnerability in a piece of software, however, it decides between two competing equities. It can keep it secret and use it offensively, to gather foreign intelligence, help execute search warrants, or deliver malware. Or it can alert the software vendor and see that the vulnerability is patched, protecting the country—and, for that matter, the world—from similar attacks by foreign governments and cybercriminals. It’s an either-or choice. As former US Assistant Attorney General Jack Goldsmith has said, “Every offensive weapon is a (potential) chink in our defense—and vice versa.”

This is all well-trod ground, and in 2010 the US government put in place an interagency Vulnerabilities Equities Process (VEP) to help balance the trade-off. The details are largely secret, but a 2014 blog post by then-President Barack Obama’s cybersecurity coordinator, Michael Daniel, laid out the criteria that the government uses to decide when to keep a software flaw undisclosed. The post’s contents were unsurprising, listing questions such as “How much is the vulnerable system used in the core Internet infrastructure, in other critical infrastructure systems, in the US economy, and/or in national security systems?” and “Does the vulnerability, if left unpatched, impose significant risk?” They were balanced by questions like “How badly do we need the intelligence we think we can get from exploiting the vulnerability?” Elsewhere, Daniel has noted that the US government discloses to vendors the “overwhelming majority” of the vulnerabilities that it discovers—91 percent, according to NSA Director Michael S. Rogers.

The particular vulnerability in WannaCry is code-named EternalBlue, and it was discovered by the US government—most likely the NSA—sometime before 2014. The “Washington Post” reported both how useful the bug was for attack and how much the NSA worried about it being used by others. It was a reasonable concern: many of our national security and critical infrastructure systems contain the vulnerable software, which imposed significant risk if left unpatched. And yet it was left unpatched.

There’s a lot we don’t know about the VEP. The “Washington Post” says that the NSA used EternalBlue “for more than five years,” which implies that it was discovered after the 2010 process was put in place. It’s not clear if all vulnerabilities are given such consideration, or if bugs are periodically reviewed to determine if they should be disclosed. That said, any VEP that allows something as dangerous as EternalBlue—or the Cisco vulnerabilities that the Shadow Brokers leaked last August—to remain unpatched for years isn’t serving national security very well. As a former NSA employee said, the quality of intelligence that could be gathered was “unreal.” But so was the potential damage. The NSA must avoid hoarding vulnerabilities.

Perhaps the NSA thought that no one else would discover EternalBlue. That’s another one of Daniel’s criteria: “How likely is it that someone else will discover the vulnerability?” This is often referred to as NOBUS, short for “nobody but us.” Can the NSA discover vulnerabilities that no one else will? Or are vulnerabilities discovered by one intelligence agency likely to be discovered by another, or by cybercriminals?

In the past few months, the tech community has acquired some data about this question. In one study, two colleagues from Harvard and I examined over 4,300 disclosed vulnerabilities in common software and concluded that 15 to 20 percent of them are rediscovered within a year. Separately, researchers at the Rand Corporation looked at a different and much smaller data set and concluded that fewer than six percent of vulnerabilities are rediscovered within a year. The questions the two papers ask are slightly different and the results are not directly comparable (we’ll both be discussing these results in more detail at the Black Hat Conference in July), but clearly, more research is needed.
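
To see why the gap between those two estimates matters, here is a back-of-the-envelope Python sketch. The annual rediscovery rates come from the two studies above; the stockpile size and the time horizon are hypothetical, and treating each bug as independently rediscovered is a simplification.

  # Expected rediscoveries from a secret stockpile, assuming each bug is
  # independently rediscovered at a fixed annual rate. Stockpile size and
  # horizon are hypothetical; the rates come from the studies above.
  stockpile = 50   # secret vulnerabilities held for offense (hypothetical)
  years = 3        # how long they stay secret (hypothetical)

  for label, annual_rate in [("RAND, <6%", 0.06), ("Harvard study, 15-20%", 0.20)]:
      p_rediscovered = 1 - (1 - annual_rate) ** years
      expected = stockpile * p_rediscovered
      print(f"{label}: expect about {expected:.0f} of {stockpile} bugs "
            f"rediscovered by someone else within {years} years")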

People inside the NSA are quick to discount these studies, saying that the data don’t reflect their reality. They claim that there are entire classes of vulnerabilities the NSA uses that are not known in the research world, making rediscovery less likely. This may be true, but the evidence we have from the Shadow Brokers is that the vulnerabilities that the NSA keeps secret aren’t consistently different from those that researchers discover. And given the alarming ease with which both the NSA and CIA are having their attack tools stolen, rediscovery isn’t limited to independent security research.

But even if it is difficult to make definitive statements about vulnerability rediscovery, it is clear that vulnerabilities are plentiful. Any vulnerabilities that are discovered and used for offense should only remain secret for as short a time as possible. I have proposed six months, with the right to appeal for another six months in exceptional circumstances. The United States should satisfy its offensive requirements through a steady stream of newly discovered vulnerabilities that, when fixed, also improve the country’s defense.

The VEP needs to be reformed and strengthened as well. A report from last year by Ari Schwartz and Rob Knake, who both previously worked on cybersecurity policy at the White House National Security Council, makes some good suggestions on how to further formalize the process, increase its transparency and oversight, and ensure periodic review of the vulnerabilities that are kept secret and used for offense. This is the least we can do. A bill recently introduced in both the Senate and the House calls for this and more.

In the case of EternalBlue, the VEP did have some positive effects. When the NSA realized that the Shadow Brokers had stolen the tool, it alerted Microsoft, which released a patch in March. This prevented a true disaster when the Shadow Brokers exposed the vulnerability on the Internet. It was only unpatched systems that were susceptible to WannaCry a month later, including versions of Windows so old that Microsoft normally didn’t support them. Although the NSA must take its share of the responsibility, no matter how good the VEP is, or how many vulnerabilities the NSA reports and the vendors fix, security won’t improve unless users download and install patches, and organizations take responsibility for keeping their software and systems up to date. That is one of the important lessons to be learned from WannaCry.

This essay originally appeared in “Foreign Affairs.” Yes, I wrote a lot about this.
https://www.foreignaffairs.com/articles/2017-05-30/…

Who’s to blame for WannaCry:
https://www.nytimes.com/2017/05/13/opinion/…

Unpatched Windows 7:
https://arstechnica.com/security/2017/05/…

The NSA vulnerability:
https://arstechnica.com/security/2017/05/…

Jack Goldsmith’s commentary:
https://www.lawfareblog.com/…

The vulnerabilities equities process:
https://www.theatlantic.com/technology/archive/2014/…
https://www.eff.org/files/2016/01/18/37-3_vep_2016.pdf
https://www.wired.com/2014/11/…
https://fcw.com/articles/2015/11/09/…

Washington Post on this vulnerability:
https://www.washingtonpost.com/business/technology/…

Cisco vulnerabilities:
https://blogs.cisco.com/security/shadow-brokers
http://www.securityweek.com/…

Me on the NSA hoarding vulnerabilities:
https://www.schneier.com/blog/archives/2016/08/…

NOBUS:
https://www.washingtonpost.com/news/the-switch/wp/…

My paper on vulnerability rediscovery:
https://papers.ssrn.com/sol3/papers.cfm?…

RAND’s paper on vulnerability rediscovery:
https://www.rand.org/pubs/research_reports/…

Black Hat panel:
https://www.blackhat.com/us-17/…

Schwartz and Knake report:
http://www.belfercenter.org/sites/default/files/…

Recent Congressional bill regarding the VEP:
https://lawfareblog.com/patch-debating-codification-vep

Microsoft patch:
https://technet.microsoft.com/en-us/library/…


News

Using Wi-Fi to get 3D images of surrounding location:
https://physics.aps.org/articles/v10/50
http://newatlas.com/…

The New York Times is reporting that evidence is pointing to North Korea as the author of the WannaCry ransomware. Note that there is no proof at this time, although it would not surprise me if the NSA knew the origins of this malware attack.
https://www.nytimes.com/2017/05/15/us/…

This is a weird story: researchers have discovered that an audio driver installed in some HP laptops includes a keylogger, which records all keystrokes to a local file. There seems to be nothing malicious about this, but it’s a vivid illustration of how hard it is to secure a modern computer. The operating system, drivers, processes, application software, and everything else are so complicated that it’s pretty much impossible to lock down every aspect of them. So many things are eavesdropping on different aspects of the computer’s operation, collecting personal data as they do so. If an attacker can get to the computer when the drive is unencrypted, he gets access to all sorts of information streams—and there’s often nothing the computer’s owner can do.
https://www.bleepingcomputer.com/news/security/…
https://www.modzero.ch/advisories/…

The US Senate just approved Signal for staff use. Signal is a secure messaging app with no backdoor, and no large corporate owner who can be pressured to install a backdoor.
https://www.engadget.com/2017/05/17/…
Susan Landau comments.
https://lawfareblog.com/step-forward-security
Maybe I’m being optimistic, but I think we just won the Crypto War. A very important part of the US government is prioritizing security over surveillance.

I’m sure it pays less than the industry average, and the stakes are much higher than the average. But if you want to be a Director of Information Security that makes a difference, Human Rights Watch is hiring.
https://careers.hrw.org/opportunities/show/?jobid=1426

Reuters has an article on North Korea’s cyberwar capabilities, specifically “Unit 180.”
http://www.reuters.com/article/…
They’re still not in the same league as the US, UK, Russia, China, and Israel. But they’re getting better.

According to court documents, US Immigration and Customs Enforcement is using Stingray cell-site simulators to track illegal immigrants.
https://gizmodo.com/…

There’s interesting research on using a set of “master” digital fingerprints to fool biometric readers. The work is theoretical at the moment, but they might be able to open about two-thirds of iPhones with these master prints.
https://www.nytimes.com/2017/04/10/technology/…
https://boingboing.net/2017/04/14/…
http://www.cse.msu.edu/~rossarun/pubs/…

Hacking the Galaxy S8’s iris biometric is easy.
https://motherboard.vice.com/en_us/article/…

Last year, I wrote about the potential for doxers to alter documents before they leaked them. It was a theoretical threat when I wrote it, but now Citizen Lab has documented this technique in the wild:
https://citizenlab.org/2017/05/…
My essay:
https://www.theatlantic.com/technology/archive/2016/…

Inmates secretly build and network computers while in prison:
http://www.motherjones.com/politics/2016/06/…

Interesting research on a version of RSA that is secure against a quantum computer:
https://eprint.iacr.org/2017/351.pdf

WikiLeaks is still dumping CIA cyberweapons on the Internet. Its latest dump is something called “Pandemic”:
https://threatpost.com/…
https://wikileaks.org/vault7/releases/#Pandemic
https://arstechnica.com/security/2017/06/…

Really interesting research: “Unpacking Spear Phishing Susceptibility,” by Zinaida Benenson, Freya Gassmann, and Robert Landwirth.
https://www1.informatik.uni-erlangen.de/filepool//…
https://www.youtube.com/watch?v=ThOQ63CyQR4

Interesting law-journal article: “Surveillance Intermediaries,” by Alan Z. Rozenshtein.
https://papers.ssrn.com/sol3/papers.cfm?…

Ross Anderson blogged about his new paper on security and safety concerns about the Internet of Things.
https://www.lightbluetouchpaper.org/2017/06/01/…
https://www.cl.cam.ac.uk/~rja14/Papers/weis2017.pdf
https://www.youtube.com/watch?v=PLiE0Nr8VOE
It’s very much along the lines of what I’ve been writing.
https://www.schneier.com/essays/archives/2017/01/…

New US government report: “Report on Improving Cybersecurity in the Health Care Industry.” It’s pretty scathing, but nothing in it will surprise regular readers of this blog. It’s worth reading the executive summary and then skimming the recommendations, which fall into six areas.
https://securityledger.com/2017/06/…
https://science.slashdot.org/story/17/06/11/2131206/…

Chelsea Manning profiled in “New York Times Magazine”:
https://www.nytimes.com/2017/06/12/magazine/…

Research paper: “Subscribers remote geolocation and tracking using 4G VoLTE enabled Android phone,” by Patrick Ventuzelo, Olivier Le Moal, and Thomas Coudray.
https://www.sstic.org/media/SSTIC2017/SSTIC-actes/…
https://www.bleepingcomputer.com/news/security/…
https://it.slashdot.org/story/17/06/12/2245207/…

This article argues that Britain’s counterterrorism problem isn’t lack of data, it’s lack of analysis.
https://www.theguardian.com/uk-news/2017/jun/10/…

I hesitate to link to this, because it’s an example of everything that’s wrong with pop psychology. Malcolm Harris writes about millennials, and has a theory of why millennials leak secrets. My guess is that you could write a similar essay about every named generation, every age group, and so on.
https://www.washingtonpost.com/posteverything/wp/…


NSA Brute-Force Keysearch Machine

The “Intercept” published a story about a dedicated NSA brute-force keysearch machine being built with the help of New York University and IBM. It’s based on a document that was accidentally shared on the Internet by NYU.

The article is frustratingly short on details:

The WindsorGreen documents are mostly inscrutable to anyone without a Ph.D. in a related field, but they make clear that the computer is the successor to WindsorBlue, a next generation of specialized IBM hardware that would excel at cracking encryption, whose known customers are the U.S. government and its partners.

Experts who reviewed the IBM documents said WindsorGreen possesses substantially greater computing power than WindsorBlue, making it particularly adept at compromising encryption and passwords. In an overview of WindsorGreen, the computer is described as a “redesign” centered around an improved version of its processor, known as an “application specific integrated circuit,” or ASIC, a type of chip built to do one task, like mining bitcoin, extremely well, as opposed to being relatively good at accomplishing the wide range of tasks that, say, a typical MacBook would handle. One of the upgrades was to switch the processor to smaller transistors, allowing more circuitry to be crammed into the same area, a change quantified by measuring the reduction in nanometers (nm) between certain chip features.

Unfortunately, the “Intercept” decided not to publish most of the document, so all of those people with “a Ph.D. in a related field” can’t read and understand WindsorGreen’s capabilities. What sorts of key lengths can the machine brute force? Is it optimized for symmetric or asymmetric cryptanalysis? Random brute force or dictionary attacks? We have no idea.
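
Without those details, the only analysis available is the generic arithmetic of exhaustive search, which is worth spelling out because it shows why key length dominates hardware speed. In the Python sketch below, the keys-per-second figures are invented placeholders, not WindsorGreen’s real specifications.

  # Generic brute-force arithmetic. The keys-per-second rates are invented
  # placeholders; the point is the exponential gap between key lengths.
  SECONDS_PER_YEAR = 365.25 * 24 * 3600

  def average_years(key_bits, keys_per_second):
      # On average the right key turns up halfway through the keyspace.
      return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

  for key_bits in (56, 80, 128):
      for rate in (1e12, 1e18):  # hypothetical machine speeds, keys per second
          print(f"{key_bits}-bit key at {rate:.0e} keys/sec: "
                f"about {average_years(key_bits, rate):.2g} years")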

Whatever the details, this is exactly the sort of thing the NSA should be spending its money on. Breaking the cryptography used by other nations is squarely in the NSA’s mission.

https://theintercept.com/2017/05/11/…

Some documents are actually online:
https://www.documentcloud.org/documents/…


NSA Abandons “About” Searches

Earlier this month, the NSA said that it would no longer conduct “about” searches of bulk communications data. This was the practice of collecting the communications of Americans based on keywords and phrases in the contents of the messages, not based on who they were from or to.

The NSA’s own words:

After considerable evaluation of the program and available technology, NSA has decided that its Section 702 foreign intelligence surveillance activities will no longer include any upstream internet communications that are solely “about” a foreign intelligence target. Instead, this surveillance will now be limited to only those communications that are directly “to” or “from” a foreign intelligence target. These changes are designed to retain the upstream collection that provides the greatest value to national security while reducing the likelihood that NSA will acquire communications of U.S. persons or others who are not in direct contact with one of the Agency’s foreign intelligence targets.

In addition, as part of this curtailment, NSA will delete the vast majority of previously acquired upstream internet communications as soon as practicable.

[…]

After reviewing amended Section 702 certifications and NSA procedures that implement these changes, the FISC recently issued an opinion and order, approving the renewal certifications and use of procedures, which authorize this narrowed form of Section 702 upstream internet collection. A declassification review of the FISC’s opinion and order, and the related targeting and minimization procedures, is underway.

A quick review: under Section 702 of the FISA Amendments Act, the NSA seizes a copy of all communications moving through a telco—think e-mail and such—and searches it for particular senders, receivers, and—until recently—keywords. This pretty clearly violates the Fourth Amendment, and groups like the EFF have been fighting the NSA in court about this for years. The NSA has also had problems in the FISA court about these searches, and cites “inadvertent compliance incidents” related to this.
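
To make the technical distinction concrete, here is a toy Python illustration of the difference between “to/from” collection, which matches only the sender and recipient fields, and “about” collection, which matches the selector anywhere in the message body. It’s a cartoon of the concept, not a description of any real collection system.

  # Cartoon of "to/from" versus "about" matching; not any real system.
  messages = [
      {"frm": "target@example.org", "to": "alice@example.com",
       "body": "meeting at noon"},
      {"frm": "bob@example.com", "to": "carol@example.com",
       "body": "did you see the news about target@example.org?"},
  ]
  selector = "target@example.org"

  to_from = [m for m in messages if selector in (m["frm"], m["to"])]
  about = [m for m in messages if selector in m["body"]]

  print(len(to_from), "message(s) collected on to/from")  # 1
  print(len(about), "message(s) collected on content")    # 1: a message between
  # two non-targets is swept in merely because it mentions the selector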

We might learn more about this change once the declassification review of the FISC’s opinion and order, mentioned in the statement above, is complete.

And the EFF is still fighting for more NSA surveillance reforms.

Blog entry URL:
https://www.schneier.com/blog/archives/2017/05/…

News articles:
https://www.nytimes.com/2017/04/28/us/politics/…
https://www.wired.com/2017/04/…
http://www.reuters.com/article/…
http://www.bbc.com/news/technology-39769426

NSA statement:
https://www.nsa.gov/news-features/press-room/…

EFF case:
https://www.eff.org/cases/jewel
https://www.eff.org/deeplinks/2017/04/…


Extending the Airplane Laptop Ban

The Department of Homeland Security is rumored to be considering extending the current travel ban on large electronics for Middle Eastern flights to European ones as well. The likely reaction of airlines will be to implement new traveler programs, effectively allowing wealthier and more frequent fliers to bring their computers with them. This will only exacerbate the divide between the haves and the have-nots—all without making us any safer.

In March, both the United States and the United Kingdom required that passengers from 10 Muslim countries give up their laptop computers and larger tablets, and put them in checked baggage. The new measure was based on reports that terrorists would try to smuggle bombs onto planes concealed in these larger electronic devices.

The security measure made no sense for two reasons. First, moving these computers into the baggage holds doesn’t keep them off planes. Yes, it is easier to detonate a bomb that’s in your hands than to remotely trigger it in the cargo hold. But it’s also more effective to screen laptops at security checkpoints than it is to place them in checked baggage. TSA already does this kind of screening randomly and occasionally: making passengers turn laptops on to ensure that they’re functional computers and not just bomb-filled cases, and running chemical tests on their surface to detect explosive material.

And, second, banning laptops on selected flights just forces terrorists to buy more roundabout itineraries. It doesn’t take much creativity to fly Doha-Amsterdam-New York instead of direct. Adding Amsterdam to the list of affected airports makes the terrorist add yet another itinerary change; it doesn’t remove the threat.

Which brings up another question: If this is truly a threat, why aren’t domestic flights included in this ban? Remember that anyone boarding a plane to the United States from these Muslim countries has already received a visa to enter the country. This isn’t perfect security—the infamous underwear bomber had a visa, after all—but anyone who could detonate a laptop bomb on his international flight could do it on his domestic connection.

I don’t have access to classified intelligence, and I can’t comment on whether explosive-filled laptops are truly a threat. But, if they are, TSA can set up additional security screenings at the gates of US-bound flights worldwide and screen every laptop coming onto the plane. It wouldn’t be the first time we’ve had additional security screening at the gate. And they should require all laptops to go through this screening, prohibiting them from being stashed in checked baggage.

This measure is nothing more than security theater against what appears to be a movie-plot threat.

Banishing laptops to the cargo holds brings with it a host of other threats. Passengers run the risk of their electronics being stolen from their checked baggage—something that has happened in the past. And, depending on the country, passengers also have to worry about border control officials intercepting checked laptops and making copies of what’s on their hard drives.

Safety is another concern. We’re already worried about large lithium-ion batteries catching fire in airplane baggage holds; adding a few hundred of these devices will considerably exacerbate the risk. Both FedEx and UPS no longer accept bulk shipments of these batteries after two jets crashed in 2010 and 2011 due to combustion.

Of course, passengers will rebel against this rule. Having access to a computer on these long transatlantic flights is a must for many travelers, especially the high-revenue business-class travelers. They also won’t accept the delays and confusion this rule will cause as it’s rolled out. Unhappy passengers fly less, or fly other routes on other airlines without these restrictions.

I don’t know how many passengers are choosing to fly to the Middle East via Toronto to avoid the current laptop ban, but I suspect there may be some. If Europe is included in the new ban, many more may consider adding Canada to their itineraries, as well as choosing European hubs that remain unaffected.

As passengers voice their disapproval with their wallets, airlines will rebel. Already Emirates has a program to loan laptops to their premium travelers. I can imagine US airlines doing the same, although probably for an extra fee. We might learn how to make this work: keeping our data in the cloud or on portable memory sticks and using unfamiliar computers for the length of the flight.

A more likely response will be comparable to what happened after the US increased passenger screening post-9/11. In the months and years that followed, we saw different ways for high-revenue travelers to avoid the lines: faster first-class lanes, and then the extra-cost trusted traveler programs that allow people to bypass the long lines, keep their shoes on their feet and leave their laptops and liquids in their bags. It’s a bad security idea, but it keeps both frequent fliers and airlines happy. It would be just another step to allow these people to keep their electronics with them on their flight.

The problem with this response is that it solves the problem for frequent fliers, while leaving everyone else to suffer. This is already the case; those of us enrolled in a trusted traveler program forget what it’s like to go through “normal” security screening. And since frequent fliers—likely to be more wealthy—no longer see the problem, they don’t have any incentive to fix it.

Dividing security checks into haves and have-nots is bad social policy, and we should actively fight any expansion of it. If the TSA implements this security procedure, it should implement it for every flight. And there should be no exceptions. Force every politically connected flier, from members of Congress to the lobbyists that influence them, to do without their laptops on planes. Let the TSA explain to them why they can’t work on their flights to and from DC.

This essay previously appeared on CNN.com.
http://www.cnn.com/2017/05/16/opinions/…

http://money.cnn.com/2017/05/11/news/…

Existing laptop ban:
https://www.nytimes.com/2017/03/21/us/politics/…

My commentary:
http://www.cnn.com/2017/03/22/opinions/…

Security theater:
https://www.schneier.com/essays/archives/2009/11/…

More commentary:
https://www.eff.org/deeplinks/2017/03/…
https://www.bloomberg.com/news/articles/2017-05-15/…

Emirates loans laptops:
https://www.emirates.com/media-centre/…

Me on trusted traveler programs:
https://www.schneier.com/essays/archives/2004/08/…

EDITED TO ADD: US officials are backing down.
http://www.bbc.com/news/world-europe-39956968
But this rumor keeps bubbling up.


Security and Human Behavior (SHB 2017)

Last month I attended the tenth Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Ross Anderson, Alessandro Acquisti, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is maximum interaction and discussion. We do that by putting everyone on panels. There are eight six-person panels over the course of the two days. Everyone gets to talk for ten minutes about their work, and then there’s half an hour of questions and discussion. We also have lunches, dinners, and receptions—all designed so people from different disciplines talk to each other.

It’s the most intellectually stimulating conference of my year, and influences my thinking about security in many different ways.

Follow the link for the schedule, the list of participants (with links to their work), and Ross Anderson’s liveblog, as well as my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, and ninth SHB workshops.

I don’t think any of us imagined that this conference would be around this long.

https://www.schneier.com/blog/archives/2017/05/…


Passwords at the Border

The password manager 1Password has just implemented a travel mode that tries to protect users while crossing borders. It doesn’t make much sense. To enable it, you have to create a list of passwords you feel safe traveling with, and then you can turn on the mode that only gives you access to those passwords. But since you can turn it off at will, a border official can just demand you do so. Better would be some sort of time lock where you are unable to turn it off at the border.
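
One way to build that kind of time lock, at least in principle, is a time-lock puzzle of the sort Rivest, Shamir, and Wagner described in 1996: the vault key is masked with a value that the puzzle’s creator can compute instantly but that anyone else can only recover through a long, inherently sequential computation. The Python sketch below is a toy with small parameters; it is not how 1Password or any shipping product works, and real parameter choices and key handling would need far more care.

  # Toy time-lock puzzle (Rivest/Shamir/Wagner style). Whoever knows the
  # factorization of n can compute the unlock value instantly; everyone else
  # must perform t sequential squarings. Parameters are tiny and illustrative;
  # this is not how any real password manager works.
  import secrets

  def make_puzzle(t):
      p, q = 1_000_000_007, 998_244_353     # small known primes, demo only
      n, phi = p * q, (p - 1) * (q - 1)
      a = 2
      unlock = pow(a, pow(2, t, phi), n)    # creator's shortcut via phi(n)
      key = secrets.randbelow(n)            # the secret being time-locked
      return n, a, t, key ^ unlock, key     # 'key' returned only to check the demo

  def solve_puzzle(n, a, t, locked_key):
      x = a
      for _ in range(t):                    # t squarings, inherently sequential
          x = (x * x) % n
      return locked_key ^ x

  n, a, t, locked_key, original = make_puzzle(t=500_000)
  assert solve_puzzle(n, a, t, locked_key) == original
  print("key recovered after", t, "sequential squarings")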

There are a bunch of tricks you can use to ensure that you are unable to decrypt your devices, even if someone demands that you do. Back in 2009, I described such a scheme, and mentioned some other tricks the year before. So did Wired. They work with any password manager, including my own Password Safe.

There’s a problem, though. Everything you do along these lines is problematic, because 1) you don’t ever want to lie to a customs official, and 2) any steps you take to make your data inaccessible are in themselves suspicious. Your best defense is not to have anything incriminating on your computer or in the various social media accounts you use. (Australia’s Department of Immigration and Border Protection gave this advice to its citizens, specifically to Muslim pilgrims returning from the hajj. Bizarrely, an Australian MP complained when Muslims repeated that advice.)

The EFF has a comprehensive guide to both the tech and policy of securing your electronics for border crossings.

https://www.wired.com/2017/05/…
https://www.theverge.com/2017/5/23/15681990/…

My writings on this:
https://www.schneier.com/blog/archives/2009/07/…
https://www.schneier.com/blog/archives/2008/05/…

Wired’s advice:
https://www.wired.com/2017/02/…

Password Safe:
https://www.schneier.com/academic/passsafe/

Australian advice:
https://muslimvillage.com/2017/05/30/124068/…

EFF’s comprehensive guide:
https://www.eff.org/files/2017/03/10/…


Schneier News

I’m speaking at NTT InfoSecurity World in Frankfurt on June 21.
http://www.nttcomsecurity.com/us/events/…

I’m speaking at CyberWeek in Tel Aviv on June 27.
https://cyberweek.tau.ac.il/2017/index.php

I’m speaking at the University of Haifa on June 28.
http://weblaw.haifa.ac.il/en/events/pages/events/cyber280617.aspx

Forbes names “Beyond Fear” as one of the “13 books technology executives should have on their shelves.” It’s a weird list.
https://www.forbes.com/sites/forbestechcouncil/2017/…


NSA Document Outlining Russian Attempts to Hack Voter Rolls

This week brought new public evidence about Russian interference in the 2016 election. On Monday, the “Intercept” published a top-secret National Security Agency document describing Russian hacking attempts against the US election system. While the attacks seem more exploratory than operational—and there’s no evidence that they had any actual effect—they further illustrate the real threats and vulnerabilities facing our elections, and they point to solutions.

The document describes how the GRU, Russia’s military intelligence agency, attacked a company called VR Systems that, according to its website, provides software to manage voter rolls in eight states. The August 2016 attack was successful, and the attackers used the information they stole from the company’s network to launch targeted attacks against 122 local election officials on October 27, 12 days before the election.

That is where the NSA’s analysis ends. We don’t know whether those 122 targeted attacks were successful, or what their effects were if so. We don’t know whether other election software companies besides VR Systems were targeted, or what the GRU’s overall plan was—if it had one. Certainly, there are ways to disrupt voting by interfering with the voter registration process or voter rolls. But there was no indication on Election Day that people found their names removed from the system, or their address changed, or anything else that would have had an effect—anywhere in the country, let alone in the eight states where VR Systems is deployed. (There were Election Day problems with the voting rolls in Durham, NC—one of the states that VR Systems supports—but they seem like conventional errors and not malicious action.)

And 12 days before the election (with early voting already well underway in many jurisdictions) seems far too late to start an operation like that. That is why these attacks feel exploratory to me, rather than part of an operational attack. The Russians were seeing how far they could get, and keeping those accesses in their pocket for potential future use.

Presumably, this document was intended for the Justice Department, including the FBI, which would be the proper agency to continue looking into these hacks. We don’t know what happened next, if anything. VR Systems isn’t commenting, and the names of the local election officials targeted did not appear in the NSA document.

So while this document isn’t much of a smoking gun, it’s yet more evidence of widespread Russian attempts to interfere last year.

The document was, allegedly, sent to the “Intercept” anonymously. An NSA contractor, Reality Leigh Winner, was arrested Saturday and charged with mishandling classified information. The speed with which the government identified her serves as a caution to anyone wanting to leak official US secrets.

The “Intercept” sent a scan of the document to another source during its reporting. That scan showed a crease in the original document, which implied that someone had printed the document and then carried it out of some secure location. The second source, according to the FBI’s affidavit against Winner, passed it on to the NSA. From there, NSA investigators were able to look at their records and determine that only six people had printed out the document. (The government may also have been able to track the printout through secret dots that identified the printer.) Winner was the only one of those six who had been in e-mail contact with the “Intercept.” It is unclear whether the e-mail evidence was from Winner’s NSA account or her personal account, but in either case, it’s incredibly sloppy tradecraft.

With President Trump’s election, the issue of Russian interference in last year’s campaign has become highly politicized. Reports like the one from the Office of the Director of National Intelligence in January have been criticized by partisan supporters of the White House. It’s interesting that this document was reported by the “Intercept,” which has been historically skeptical about claims of Russian interference. (I was quoted in their story, and they showed me a copy of the NSA document before it was published.) The leaker was even praised by WikiLeaks founder Julian Assange, who has traditionally been critical of allegations of Russian election interference.

This demonstrates the power of source documents. It’s easy to discount a Justice Department official or a summary report. A detailed NSA document is much more convincing. Right now, there’s a federal suit to force the ODNI to release the entire January report, not just the unclassified summary. These efforts are vital.

This hack will certainly come up at the Senate hearing where former FBI director James B. Comey is scheduled to testify Thursday. Last year, there were several stories about voter databases being targeted by Russia. Last August, the FBI confirmed that the Russians successfully hacked voter databases in Illinois and Arizona. And a month later, an unnamed Department of Homeland Security official said that the Russians targeted voter databases in 20 states. Again, we don’t know of anything that came of these hacks, but expect Comey to be asked about them. Unfortunately, any details he does know are almost certainly classified, and won’t be revealed in open testimony.

But more important than any of this, we need to better secure our election systems going forward. We have significant vulnerabilities in our voting machines, our voter rolls and registration process, and the vote tabulation systems after the polls close. In January, DHS designated our voting systems as critical national infrastructure, but so far that has been entirely for show. In the United States, we don’t have a single integrated election. We have 50-plus individual elections, each with its own rules and its own regulatory authorities. Federal standards that mandate voter-verified paper ballots and post-election auditing would go a long way to secure our voting system. These attacks demonstrate that we need to secure the voter rolls, as well.
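
As a small illustration of why paper plus auditing works, here is a Python sketch of the basic sampling math: if some fraction of paper ballots had been altered, how many randomly chosen ballots would auditors have to hand-check to have a good chance of hitting at least one? The tampering rates and the confidence target are hypothetical, and real risk-limiting audits use more refined statistics, but the orders of magnitude are the point.

  # How many randomly sampled paper ballots must be hand-checked to have a
  # given chance of finding at least one altered ballot? Rates and confidence
  # target are hypothetical; real risk-limiting audits are more refined.
  import math

  def sample_size(tampered_fraction, confidence):
      # Want P(no tampered ballot in sample) <= 1 - confidence, i.e.
      # (1 - f)^n <= 1 - c  =>  n >= log(1 - c) / log(1 - f).
      return math.ceil(math.log(1 - confidence) / math.log(1 - tampered_fraction))

  for f in (0.01, 0.005, 0.001):
      print(f"tampering rate {f:.1%}: hand-check about "
            f"{sample_size(f, confidence=0.95)} ballots for 95% confidence")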

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and—by extension—our democracy. Yes, fixing this will be expensive. Yes, it will require federal action in what’s historically been state-run systems. But as a country, we have no other option.

This essay previously appeared in the “Washington Post.”
https://www.washingtonpost.com/posteverything/wp/…

The Intercept’s story:
https://theintercept.com/2017/06/05/…
https://www.documentcloud.org/documents/…

VR Systems website:
http://www.vrsystems.com/

Durham, NC election-day problems:
http://wncn.com/2016/11/13/…

The FBI’s affidavit against Winner:
https://www.justice.gov/opa/press-release/file/…

Secret dots from printers:
http://blog.erratasec.com/2017/06/…

January’s ODNI report on Russian hacking of the 2016 election:
https://www.washingtonpost.com/world/…

Skepticism from the Intercept:
https://theintercept.com/2017/03/16/…

Praise from Julian Assange:
http://thehill.com/homenews/news/…

Skepticism from Assange:
https://www.washingtonpost.com/news/fact-checker/wp/…

Federal suit to force ODNI to release the entire January report:
https://www.epic.org/foia/odni/russian-hacking/

Russian hacking of voter rolls:
http://www.nbcnews.com/news/us-news/…
http://www.chicagotribune.com/news/nationworld/…

DHS designated elections critical infrastructure:
https://www.dhs.gov/news/2017/01/06/…

Voter-verified paper ballots:
https://www.schneier.com/blog/archives/2004/11/…

Post-election auditing:
https://www.usatoday.com/story/opinion/2016/11/18/…

Good commentary by The grugq on Reality Winner, the Intercept, and OPSEC.
https://medium.com/@thegrugq/…


Who Are the Shadow Brokers?

In 2013, a mysterious group of hackers that calls itself the Shadow Brokers stole a few disks full of NSA secrets. Since last summer, they’ve been dumping these secrets on the Internet. They have publicly embarrassed the NSA and damaged its intelligence-gathering capabilities, while at the same time putting sophisticated cyberweapons in the hands of anyone who wants them. They have exposed major vulnerabilities in Cisco routers, Microsoft Windows, and Linux mail servers, forcing those companies and their customers to scramble. And they gave the authors of the WannaCry ransomware the exploit they needed to infect hundreds of thousands of computers worldwide this month.

After the WannaCry outbreak, the Shadow Brokers threatened to release more NSA secrets every month, giving cybercriminals and other governments worldwide even more exploits and hacking tools.

Who are these guys? And how did they steal this information? The short answer is: we don’t know. But we can make some educated guesses based on the material they’ve published.

The Shadow Brokers suddenly appeared last August, when they published a series of hacking tools and computer exploits—vulnerabilities in common software—from the NSA. The material was from autumn 2013, and seems to have been collected from an external NSA staging server, a machine that is owned, leased, or otherwise controlled by the US, but with no connection to the agency. NSA hackers find obscure corners of the Internet to hide the tools they need as they go about their work, and it seems the Shadow Brokers successfully hacked one of those caches.

In total, the group has published four sets of NSA material: a set of exploits and hacking tools against routers, the devices that direct data throughout computer networks; a similar collection against mail servers; another collection against Microsoft Windows; and a working directory of an NSA analyst breaking into the SWIFT banking network. Looking at the time stamps on the files and other material, they all come from around 2013. The Windows attack tools, published last month, might be a year or so older, based on which versions of Windows the tools support.
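
Dating a dump from file timestamps, as described above, is easy to reproduce in outline. The Python sketch below simply tallies file modification years across a directory tree; the path is a placeholder, and a real analysis would also check timestamps embedded inside the files rather than trusting filesystem metadata that the publisher could have altered.

  # Rough dating of a file dump by filesystem modification times. The path is
  # a placeholder; real forensics would also check timestamps embedded inside
  # the files, since filesystem metadata is easy to alter.
  import os
  from collections import Counter
  from datetime import datetime, timezone

  DUMP_DIR = "/path/to/dump"   # placeholder

  years = Counter()
  for root, _dirs, files in os.walk(DUMP_DIR):
      for name in files:
          mtime = os.path.getmtime(os.path.join(root, name))
          years[datetime.fromtimestamp(mtime, tz=timezone.utc).year] += 1

  for year, count in sorted(years.items()):
      print(year, count, "files")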

The releases are so different that they’re almost certainly from multiple sources at the NSA. The SWIFT files seem to come from an internal NSA computer, albeit one connected to the Internet. The Microsoft files seem different, too; they don’t have the same identifying information that the router and mail server files do. The Shadow Brokers have released all the material unredacted, without the care journalists took with the Snowden documents or even the care WikiLeaks has taken with the CIA secrets it’s publishing. They also posted anonymous messages in bad English but with American cultural references.

Given all of this, I don’t think the agent responsible is a whistleblower. While possible, it seems like a whistleblower wouldn’t sit on attack tools for three years before publishing. They would act more like Edward Snowden or Chelsea Manning, collecting for a time and then publishing immediately—and publishing documents that discuss what the US is doing to whom. That’s not what we’re seeing here; it’s simply a bunch of exploit code, which doesn’t have the political or ethical implications that a whistleblower would want to highlight. The SWIFT documents are records of an NSA operation, and the other posted files demonstrate that the NSA is hoarding vulnerabilities for attack rather than helping fix them and improve all of our security.

I also don’t think that it’s random hackers who stumbled on these tools and are just trying to harm the NSA or the US. Again, the three-year wait makes no sense. These documents and tools are cyber-Kryptonite; anyone who is secretly hoarding them is in danger from half the intelligence agencies in the world. Additionally, the publication schedule doesn’t make sense if the leakers are cybercriminals. Criminals would use the hacking tools for themselves, incorporating the exploits into worms and viruses, and generally profiting from the theft.

That leaves a nation state. Whoever got this information years ago and is leaking it now has to be both capable of hacking the NSA and willing to publish it all. Countries like Israel and France are capable, but would never publish, because they wouldn’t want to incur the wrath of the US. Countries like North Korea or Iran probably aren’t capable. (Additionally, North Korea is suspected of being behind WannaCry, which was written after the Shadow Brokers released that vulnerability to the public.) As I’ve written previously, the obvious list of countries that fit my two criteria is small: Russia, China, and—I’m out of ideas. And China is currently trying to make nice with the US.

It was generally believed last August, when the first documents were released and before it became politically controversial to say so, that the Russians were behind the leak, and that it was a warning message to President Barack Obama not to retaliate for the Democratic National Committee hacks. Edward Snowden guessed Russia, too. But the problem with the Russia theory is, why? These leaked tools are much more valuable if kept secret. Russia could use the knowledge to detect NSA hacking in its own country and to attack other countries. By publishing the tools, the Shadow Brokers are signaling that they don’t care if the US knows the tools were stolen.

Sure, there’s a chance the attackers knew that the US knew that the attackers knew—and round and round we go. But the “we don’t give a damn” nature of the releases points to an attacker who isn’t thinking strategically: a lone hacker or hacking group, which clashes with the nation-state theory.

This is all speculation on my part, based on discussion with others who don’t have access to the classified forensic and intelligence analysis. Inside the NSA, they have a lot more information. Many of the files published include operational notes and identifying information. NSA researchers know exactly which servers were compromised, and through that know what other information the attackers would have access to. As with the Snowden documents, though, they only know what the attackers could have taken and not what they did take. But they did alert Microsoft about the Windows vulnerability the Shadow Brokers released months in advance. Did they have eavesdropping capability inside whoever stole the files, as they claimed to when the Russians attacked the State Department? We have no idea.

So, how did the Shadow Brokers do it? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but it seems very unlikely that the organization would make that kind of rookie mistake. Did someone hack the NSA itself? Could there be a mole inside the NSA?

If it is a mole, my guess is that the person was arrested before the Shadow Brokers released anything. No country would burn a mole working for it by publishing what that person delivered while he or she was still in danger. Intelligence agencies know that if they betray a source this severely, they’ll never get another one.

That points to two possibilities. The first is that the files came from Hal Martin. He’s the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can’t be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash, either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it’s theoretically possible. There’s nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, but that’s just the sort of thing that would be left out. It’s not needed for a conviction.

If the source of the documents *is* Hal Martin, then we can speculate that a random hacker did in fact stumble on it—no need for nation-state cyberattack skills.

The other option is a mysterious second NSA leaker of cyberattack tools. Could this be the person who stole the NSA documents and passed them on to someone else? The only time I have ever heard about this was from a “Washington Post” story about Martin:

There was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee [a worker in the Office of Tailored Access Operations], one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.

Of course, “not thought to have” is not the same as not having done so.

It is interesting that there have been no public arrests of anyone in connection with these hacks. If the NSA knows where the files came from, it knows who had access to them—and it’s long since questioned everyone involved and should know if someone deliberately or accidentally lost control of them. I know that many people, both inside the government and out, think there is some sort of domestic involvement; things may be more complicated than I realize.

It’s also not over. Last week, the Shadow Brokers were back, with a rambling and taunting message announcing a “Data Dump of the Month” service. They’re offering to sell unreleased NSA attack tools—something they also tried last August—with the threat to publish them if no one pays. The group has made good on its previous boasts: In the coming months, we might see new exploits against web browsers, networking equipment, smartphones, and operating systems—Windows in particular. Even scarier, they’re threatening to release raw NSA intercepts: data from the SWIFT network and banks, and “compromised data from Russian, Chinese, Iranian, or North Korean nukes and missile programs.”

Whoever the Shadow Brokers are, however they stole these disks full of NSA secrets, and for whatever reason they’re releasing them, it’s going to be a long summer inside of Fort Meade—as it will be for the rest of us.

This essay previously appeared in the “Atlantic.”
https://www.theatlantic.com/technology/archive/2017/…
It is an update of this essay from “Lawfare.”
https://www.lawfareblog.com/…

Shadow Brokers vulnerabilities:
https://www.theregister.co.uk/2016/08/17/…
https://arstechnica.com/security/2017/04/…
https://www.nytimes.com/2017/05/16/us/…

Latest Shadow Brokers threat:
https://steemit.com/shadowbrokers/@theshadowbrokers/…

Snowden speculating about the external NSA staging server:
https://twitter.com/Snowden/status/765514477341143040

Nicholas Weaver speculating about multiple leak sources:
https://lawfareblog.com/shadow-brokers-dump-month-club

Speculation about North Korea:
https://www.theguardian.com/technology/2017/may/15/…

My previous speculation about the Shadow Brokers:
https://www.schneier.com/blog/archives/2017/04/…

China:
https://www.wired.com/2015/09/…
http://www.npr.org/sections/parallels/2017/04/06/…

Attributions to Russia from last August:
https://www.theverge.com/2016/8/17/12519804/…
https://www.nytimes.com/2016/08/17/us/…
https://www.nytimes.com/2016/12/13/us/politics/…
https://twitter.com/Snowden/status/765514891813945344

NSA spying on Russia attacking the US State Department:
https://www.washingtonpost.com/world/…

Speculation about a mole inside the NSA:
http://www.thedailybeast.com/articles/2017/04/20/…

Hal Martin:
https://www.washingtonpost.com/world/…
https://www.washingtonpost.com/world/…
https://www.washingtonpost.com/world/…

Even more speculation:
http://www.mcclatchydc.com/news/nation-world/…


Ransomware and the Internet of Things

As devastating as the latest widespread ransomware attacks have been, it’s a problem with a solution. If your copy of Windows is relatively current and you’ve kept it updated, your laptop is immune. Only older, unpatched systems are vulnerable.

Patching is how the computer industry maintains security in the face of rampant Internet insecurity. Microsoft, Apple and Google have teams of engineers who quickly write, test and distribute these patches, updates to the code that fix vulnerabilities in software. Most people have set up their computers and phones to automatically apply these patches, and the whole thing works seamlessly. It isn’t a perfect system, but it’s the best we have.

But it is a system that’s going to fail in the “Internet of things”: everyday devices like smart speakers, household appliances, toys, lighting systems, even cars, that are connected to the web. Many of the embedded networked systems in these devices that will pervade our lives don’t have engineering teams on hand to write patches and may well last far longer than the companies that are supposed to keep the software safe from criminals. Some of them don’t even have the ability to be patched.
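To make concrete what “the ability to be patched” requires, here is a minimal sketch, in Python, of the kind of update check a patchable device has to run. Everything in it is hypothetical (the vendor URL, the manifest format, the version string), and it is deliberately simplified: a real mechanism would verify the vendor’s digital signature on the firmware image, not just a SHA-256 digest fetched over HTTPS.

    # Minimal sketch of an over-the-air update check. The vendor endpoint and
    # manifest format are hypothetical, and digest checking stands in for the
    # digital-signature verification a real device would perform.

    import hashlib
    import json
    import urllib.request

    MANIFEST_URL = "https://updates.example-vendor.invalid/doorlock/manifest.json"  # hypothetical
    INSTALLED_VERSION = "1.0.3"  # hypothetical currently installed firmware

    def fetch_manifest(url):
        # Download the vendor's update manifest: version, firmware URL, digest.
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    def download_firmware(url):
        # Download the firmware image itself.
        with urllib.request.urlopen(url, timeout=60) as resp:
            return resp.read()

    def update_if_needed():
        manifest = fetch_manifest(MANIFEST_URL)
        if manifest["version"] == INSTALLED_VERSION:
            print("Firmware is current; nothing to do.")
            return
        image = download_firmware(manifest["firmware_url"])
        digest = hashlib.sha256(image).hexdigest()
        if digest != manifest["sha256"]:
            # Never install a corrupted or tampered image.
            raise RuntimeError("Firmware digest mismatch; refusing to install.")
        # Handing the verified image to the bootloader is device-specific
        # and omitted here.
        print("Verified update to version " + manifest["version"])

    if __name__ == "__main__":
        update_if_needed()

Even this simplified loop assumes a vendor that is still in business, still hosting the manifest, and still producing new firmware years after the device was sold, which is exactly the assumption that breaks down for much of the Internet of things.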

Fast forward five to 10 years, and the world is going to be filled with literally tens of billions of devices that hackers can attack. We’re going to see ransomware against our cars. Our digital video recorders and web cameras will be taken over by botnets. The data that these devices collect about us will be stolen and used to commit fraud. And we’re not going to be able to secure these devices.

Like every other instance of product safety, this problem will never be solved without considerable government involvement.

For years, I have been calling for more regulation to improve security in the face of this market failure. In the short term, the government can mandate that these devices have more secure default configurations and the ability to be patched. It can issue best-practice regulations for critical software and make software manufacturers liable for vulnerabilities. It’ll be expensive, but it will go a long way toward improved security.

But it won’t be enough to focus only on the devices, because these things are going to be around and on the Internet much longer than the two to three years we use our phones and computers before we upgrade them. I expect to keep my car for 15 years, and my refrigerator for at least 20 years. Cities will expect the networks they’re putting in place to last at least that long. I don’t want to replace my digital thermostat ever again. And if I ever need a computerized heart defibrillator, I don’t want a surgeon to have to go back in and replace it just to fix a software bug.

No amount of regulation can force companies to maintain old products, and it certainly can’t prevent companies from going out of business. The future will contain billions of orphaned devices connected to the web that simply have no engineers able to patch them.

Imagine this: The company that made your Internet-enabled door lock is long out of business. You have no way to secure yourself against the ransomware attack on that lock. Your only option, other than paying, and paying again when it’s reinfected, is to throw it away and buy a new one.

Ultimately, we will also need the network to block these attacks before they get to the devices, but there again the market will not fix the problem on its own. We need additional government intervention to mandate these sorts of solutions.

None of this is welcome news to a government that prides itself on minimal intervention and maximal market forces, but national security is often an exception to this rule. Last week’s cyberattacks have laid bare some fundamental vulnerabilities in our computer infrastructure and serve as a harbinger of the attacks to come. There’s a lot of good research into robust solutions, but the economic incentives are all misaligned. As politically untenable as it is, we need government to step in to create the market forces that will get us out of this mess.

This essay previously appeared in the “New York Times.” Yes, I know I’m repeating myself.
https://www.nytimes.com/2017/05/19/opinion/…

A good cartoon:
http://www.geekculture.com/joyoftech/joyarchives/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and CTO of IBM Resilient and Special Advisor to IBM Security. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM Resilient.

Copyright (c) 2017 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.