Blog: May 2017 Archives

Post-Quantum RSA

Interesting research on a version of RSA that is secure against a quantum computer:

Post-quantum RSA

Daniel J. Bernstein, Nadia Heninger, Paul Lou, and Luke Valenta

Abstract: This paper proposes RSA parameters for which (1) key generation, encryption, decryption, signing, and verification are feasible on today’s computers while (2) all known attacks are infeasible, even assuming highly scalable quantum computers. As part of the performance analysis, this paper introduces a new algorithm to generate a batch of primes. As part of the attack analysis, this paper introduces a new quantum factorization algorithm that is often much faster than Shor’s algorithm and much faster than pre-quantum factorization algorithms. Initial pqRSA implementation results are provided.
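
The paper's actual parameters are enormous (the proposal centers on terabyte-scale keys assembled from a very large number of 4096-bit primes), and the batch prime generation is where much of the cleverness lies. As a rough illustration only, here is a toy sketch of the underlying multi-prime idea: multiply many primes into a single modulus and run textbook RSA on top. The sizes, the use of sympy, and the absence of padding are all simplifications for the example; this is not the paper's algorithm and is not secure.

```python
from math import gcd
from sympy import randprime

def multiprime_rsa_keypair(n_primes=8, prime_bits=64, e=65537):
    """Toy multi-prime RSA: many small primes multiplied into one modulus.
    pqRSA pushes this idea to terabyte-scale keys; these sizes are toys."""
    while True:
        primes = [randprime(2**(prime_bits - 1), 2**prime_bits)
                  for _ in range(n_primes)]
        if len(set(primes)) != n_primes:
            continue                      # insist on distinct primes
        n, phi = 1, 1
        for p in primes:
            n *= p
            phi *= p - 1
        if gcd(e, phi) == 1:
            d = pow(e, -1, phi)           # modular inverse (Python 3.8+)
            return n, e, d

n, e, d = multiprime_rsa_keypair()
message = 1234567890
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message   # decryption recovers the message
```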

Posted on May 31, 2017 at 6:31 AM | 63 Comments

Inmates Secretly Build and Network Computers while in Prison

This is kind of amazing:

Inmates at a medium-security Ohio prison secretly assembled two functioning computers, hid them in the ceiling, and connected them to the Marion Correctional Institution’s network. The hard drives were loaded with pornography, a Windows proxy server, VPN, VOIP and anti-virus software, the Tor browser, password hacking and e-mail spamming tools, and the open source packet analyzer Wireshark.

Another article.

Clearly there’s a lot about prison security, or the lack thereof, that I don’t know. This article reveals some of it.

Posted on May 30, 2017 at 12:47 PM | 15 Comments

Who Are the Shadow Brokers?

In 2013, a mysterious group of hackers that calls itself the Shadow Brokers stole a few disks full of NSA secrets. Since last summer, they’ve been dumping these secrets on the Internet. They have publicly embarrassed the NSA and damaged its intelligence-gathering capabilities, while at the same time have put sophisticated cyberweapons in the hands of anyone who wants them. They have exposed major vulnerabilities in Cisco routers, Microsoft Windows, and Linux mail servers, forcing those companies and their customers to scramble. And they gave the authors of the WannaCry ransomware the exploit they needed to infect hundreds of thousands of computers worldwide this month.

After the WannaCry outbreak, the Shadow Brokers threatened to release more NSA secrets every month, giving cybercriminals and other governments worldwide even more exploits and hacking tools.

Who are these guys? And how did they steal this information? The short answer is: we don’t know. But we can make some educated guesses based on the material they’ve published.

The Shadow Brokers suddenly appeared last August, when they published a series of hacking tools and computer exploits—vulnerabilities in common software—from the NSA. The material was from autumn 2013, and seems to have been collected from an external NSA staging server, a machine that is owned, leased, or otherwise controlled by the US, but with no connection to the agency. NSA hackers find obscure corners of the Internet to hide the tools they need as they go about their work, and it seems the Shadow Brokers successfully hacked one of those caches.

In total, the group has published four sets of NSA material: a set of exploits and hacking tools against routers, the devices that direct data throughout computer networks; a similar collection against mail servers; another collection against Microsoft Windows; and a working directory of an NSA analyst breaking into the SWIFT banking network. Looking at the time stamps on the files and other material, they all come from around 2013. The Windows attack tools, published last month, might be a year or so older, based on which versions of Windows the tools support.

The releases are so different that they’re almost certainly from multiple sources at the NSA. The SWIFT files seem to come from an internal NSA computer, albeit one connected to the Internet. The Microsoft files seem different, too; they don’t have the same identifying information that the router and mail server files do. The Shadow Brokers have released all the material unredacted, without the care journalists took with the Snowden documents or even the care WikiLeaks has taken with the CIA secrets it’s publishing. They also posted anonymous messages in bad English but with American cultural references.

Given all of this, I don’t think the agent responsible is a whistleblower. While possible, it seems like a whistleblower wouldn’t sit on attack tools for three years before publishing. They would act more like Edward Snowden or Chelsea Manning, collecting for a time and then publishing immediately—and publishing documents that discuss what the US is doing to whom. That’s not what we’re seeing here; it’s simply a bunch of exploit code, which doesn’t have the political or ethical implications that a whistleblower would want to highlight. The SWIFT documents are records of an NSA operation, and the other posted files demonstrate that the NSA is hoarding vulnerabilities for attack rather than helping fix them and improve all of our security.

I also don’t think that it’s random hackers who stumbled on these tools and are just trying to harm the NSA or the US. Again, the three-year wait makes no sense. These documents and tools are cyber-Kryptonite; anyone who is secretly hoarding them is in danger from half the intelligence agencies in the world. Additionally, the publication schedule doesn’t make sense for the leakers to be cybercriminals. Criminals would use the hacking tools for themselves, incorporating the exploits into worms and viruses, and generally profiting from the theft.

That leaves a nation state. Whoever got this information years before and is leaking it now has to be both capable of hacking the NSA and willing to publish it all. Countries like Israel and France are capable, but would never publish, because they wouldn’t want to incur the wrath of the US. Countries like North Korea or Iran probably aren’t capable. (Additionally, North Korea is suspected of being behind WannaCry, which was written after the Shadow Brokers released that vulnerability to the public.) As I’ve written previously, the obvious list of countries who fit my two criteria is small: Russia, China, and—I’m out of ideas. And China is currently trying to make nice with the US.

It was generally believed last August, when the first documents were released and before it became politically controversial to say so, that the Russians were behind the leak, and that it was a warning message to President Barack Obama not to retaliate for the Democratic National Committee hacks. Edward Snowden guessed Russia, too. But the problem with the Russia theory is, why? These leaked tools are much more valuable if kept secret. Russia could use the knowledge to detect NSA hacking in its own country and to attack other countries. By publishing the tools, the Shadow Brokers are signaling that they don’t care if the US knows the tools were stolen.

Sure, there’s a chance the attackers knew that the US knew that the attackers knew—­and round and round we go. But the “we don’t give a damn” nature of the releases points to an attacker who isn’t thinking strategically: a lone hacker or hacking group, which clashes with the nation-state theory.

This is all speculation on my part, based on discussion with others who don’t have access to the classified forensic and intelligence analysis. Inside the NSA, they have a lot more information. Many of the files published include operational notes and identifying information. NSA researchers know exactly which servers were compromised, and through that know what other information the attackers would have access to. As with the Snowden documents, though, they only know what the attackers could have taken and not what they did take. But they did alert Microsoft about the Windows vulnerability the Shadow Brokers released months in advance. Did they have eavesdropping capability inside whoever stole the files, as they claimed to when the Russians attacked the State Department? We have no idea.

So, how did the Shadow Brokers do it? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but it seems very unlikely that the organization would make that kind of rookie mistake. Did someone hack the NSA itself? Could there be a mole inside the NSA?

If it is a mole, my guess is that the person was arrested before the Shadow Brokers released anything. No country would burn a mole working for it by publishing what that person delivered while he or she was still in danger. Intelligence agencies know that if they betray a source this severely, they’ll never get another one.

That points to two possibilities. The first is that the files came from Hal Martin. He’s the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can’t be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash, either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it’s theoretically possible. There’s nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, but that’s just the sort of thing that would be left out. It’s not needed for a conviction.

If the source of the documents is Hal Martin, then we can speculate that a random hacker did in fact stumble on it—no need for nation-state cyberattack skills.

The other option is a mysterious second NSA leaker of cyberattack tools. Could this be the person who stole the NSA documents and passed them on to someone else? The only time I have ever heard about this was from a Washington Post story about Martin:

There was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee [a worker in the Office of Tailored Access Operations], one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.

Of course, “not thought to have” is not the same as not having done so.

It is interesting that there have been no public arrests of anyone in connection with these hacks. If the NSA knows where the files came from, it knows who had access to them—­and it’s long since questioned everyone involved and should know if someone deliberately or accidentally lost control of them. I know that many people, both inside the government and out, think there is some sort of domestic involvement; things may be more complicated than I realize.

It’s also not over. Last week, the Shadow Brokers were back, with a rambling and taunting message announcing a “Data Dump of the Month” service. They’re offering to sell unreleased NSA attack tools—something they also tried last August—with the threat to publish them if no one pays. The group has made good on their previous boasts: In the coming months, we might see new exploits against web browsers, networking equipment, smartphones, and operating systems—Windows in particular. Even scarier, they’re threatening to release raw NSA intercepts: data from the SWIFT network and banks, and “compromised data from Russian, Chinese, Iranian, or North Korean nukes and missile programs.”

Whoever the Shadow Brokers are, however they stole these disks full of NSA secrets, and for whatever reason they’re releasing them, it’s going to be a long summer inside of Fort Meade—as it will be for the rest of us.

This essay previously appeared in the Atlantic, and is an update of this essay from Lawfare.

Posted on May 30, 2017 at 6:08 AM | 69 Comments

Tainted Leaks

Last year, I wrote about the potential for doxers to alter documents before they leaked them. It was a theoretical threat when I wrote it, but now Citizen Lab has documented this technique in the wild:

This report describes an extensive Russia-linked phishing and disinformation campaign. It provides evidence of how documents stolen from a prominent journalist and critic of Russia were tampered with and then “leaked” to achieve specific propaganda aims. We name this technique “tainted leaks.” The report illustrates how the twin strategies of phishing and tainted leaks are sometimes used in combination to infiltrate civil society targets, and to seed mistrust and disinformation. It also illustrates how domestic considerations, specifically concerns about regime security, can motivate espionage operations, particularly those targeting civil society.

Posted on May 29, 2017 at 10:22 AM | 58 Comments

Security and Human Behavior (SHB 2017)

I’m at Cambridge University, at the tenth Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Ross Anderson, Alessandro Acquisti, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is maximum interaction and discussion. We do that by putting everyone on panels. There are eight six-person panels over the course of the two days. Everyone gets to talk for ten minutes about their work, and then there’s half an hour of questions and discussion. We also have lunches, dinners, and receptions—all designed so people from different disciplines talk to each other.

It’s the most intellectually stimulating conference of my year, and influences my thinking about security in many different ways.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, and ninth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

I don’t think any of us imagined that this conference would be around this long.

Posted on May 25, 2017 at 2:30 PM | 30 Comments

Ransomware and the Internet of Things

As devastating as the latest widespread ransomware attacks have been, it’s a problem with a solution. If your copy of Windows is relatively current and you’ve kept it updated, your laptop is immune. It’s only older, unpatched systems that are vulnerable.

Patching is how the computer industry maintains security in the face of rampant Internet insecurity. Microsoft, Apple and Google have teams of engineers who quickly write, test and distribute these patches, updates to the code that fix vulnerabilities in software. Most people have set up their computers and phones to automatically apply these patches, and the whole thing works seamlessly. It isn’t a perfect system, but it’s the best we have.

But it is a system that’s going to fail in the “Internet of things”: everyday devices like smart speakers, household appliances, toys, lighting systems, even cars, that are connected to the web. Many of the embedded networked systems in these devices that will pervade our lives don’t have engineering teams on hand to write patches and may well last far longer than the companies that are supposed to keep the software safe from criminals. Some of them don’t even have the ability to be patched.

Fast forward five to 10 years, and the world is going to be filled with literally tens of billions of devices that hackers can attack. We’re going to see ransomware against our cars. Our digital video recorders and web cameras will be taken over by botnets. The data that these devices collect about us will be stolen and used to commit fraud. And we’re not going to be able to secure these devices.

Like every other instance of product safety, this problem will never be solved without considerable government involvement.

For years, I have been calling for more regulation to improve security in the face of this market failure. In the short term, the government can mandate that these devices have more secure default configurations and the ability to be patched. It can issue best-practice regulations for critical software and make software manufacturers liable for vulnerabilities. It’ll be expensive, but it will go a long way toward improved security.

But it won’t be enough to focus only on the devices, because these things are going to be around and on the Internet much longer than the two to three years we use our phones and computers before we upgrade them. I expect to keep my car for 15 years, and my refrigerator for at least 20 years. Cities will expect the networks they’re putting in place to last at least that long. I don’t want to replace my digital thermostat ever again. Nor, if I ever need one, do I want a surgeon to ever have to go back in to replace my computerized heart defibrillator in order to fix a software bug.

No amount of regulation can force companies to maintain old products, and it certainly can’t prevent companies from going out of business. The future will contain billions of orphaned devices connected to the web that simply have no engineers able to patch them.

Imagine this: The company that made your Internet-enabled door lock is long out of business. You have no way to secure yourself against the ransomware attack on that lock. Your only option, other than paying, and paying again when it’s reinfected, is to throw it away and buy a new one.

Ultimately, we will also need the network to block these attacks before they get to the devices, but there again the market will not fix the problem on its own. We need additional government intervention to mandate these sorts of solutions.

None of this is welcome news to a government that prides itself on minimal intervention and maximal market forces, but national security is often an exception to this rule. Last week’s cyberattacks have laid bare some fundamental vulnerabilities in our computer infrastructure and serve as a harbinger. There’s a lot of good research into robust solutions, but the economic incentives are all misaligned. As politically untenable as it is, we need government to step in to create the market forces that will get us out of this mess.

This essay previously appeared in the New York Times. Yes, I know I’m repeating myself.

EDITED TO ADD: A good cartoon.

Posted on May 25, 2017 at 6:15 AM | 51 Comments

Hacking Fingerprint Readers with Master Prints

There’s interesting research on using a set of “master” digital fingerprints to fool biometric readers. The work is theoretical at the moment, but they might be able to open about two-thirds of iPhones with these master prints.
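
To get a feel for why a small set of master prints can open a surprisingly large fraction of phones, consider how per-comparison false-match rates compound when a device stores several partial-fingerprint templates and allows several attempts. The numbers below are hypothetical, chosen only to show the compounding effect; they are not the paper's figures.

```python
# Hypothetical numbers, just to show how per-comparison false-match
# rates compound when a phone stores several partial prints and
# allows several attempts before locking. These are NOT the paper's figures.
p = 0.001       # assumed false-match rate for one partial-print comparison
k = 12          # assumed partial templates enrolled per device
n = 5           # attempts allowed before lockout

prob_unlock = 1 - (1 - p) ** (k * n)
print(f"chance a single master print opens a given device: {prob_unlock:.1%}")
```

Even with a one-in-a-thousand false-match rate per comparison, the combined chance per device comes out to a few percent, and an attacker trying a small dictionary of master prints across many devices does considerably better.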

Definitely something to keep watching.

Research paper (behind a paywall).

EDITED TO ADD (6/13): The research paper is online.

Posted on May 24, 2017 at 6:44 AM | 15 Comments

The Future of Ransomware

Ransomware isn’t new, but it’s increasingly popular and profitable.

The concept is simple: Your computer gets infected with a virus that encrypts your files until you pay a ransom. It’s extortion taken to its networked extreme. The criminals provide step-by-step instructions on how to pay, sometimes even offering a help line for victims unsure how to buy bitcoin. The price is designed to be cheap enough for people to pay instead of giving up: a few hundred dollars in many cases. Those who design these systems know their market, and it’s a profitable one.

The ransomware that has affected systems in more than 150 countries recently, WannaCry, made press headlines last week, but it doesn’t seem to be more virulent or more expensive than other ransomware. This one has a particularly interesting pedigree: It’s based on a vulnerability developed by the National Security Agency that can be used against many versions of the Windows operating system. The NSA’s code was, in turn, stolen by an unknown hacker group called Shadow Brokers, widely believed by the security community to be the Russians, in 2014 and released to the public in April.

Microsoft patched the vulnerability a month earlier, presumably after being alerted by the NSA that the leak was imminent. But the vulnerability affected older versions of Windows that Microsoft no longer supports, and there are still many people and organizations that don’t regularly patch their systems. This allowed whoever wrote WannaCry—it could be anyone from a lone individual to an organized crime syndicate—to use it to infect computers and extort users.

The lessons for users are obvious: Keep your system patches up to date and regularly back up your data. This isn’t just good advice to defend against ransomware, but good advice in general. But it’s becoming obsolete.

Everything is becoming a computer. Your microwave is a computer that makes things hot. Your refrigerator is a computer that keeps things cold. Your car and television, the traffic lights and signals in your city and our national power grid are all computers. This is the much-hyped Internet of Things (IoT). It’s coming, and it’s coming faster than you might think. And as these devices connect to the Internet, they become vulnerable to ransomware and other computer threats.

It’s only a matter of time before people get messages on their car screens saying that the engine has been disabled and it will cost $200 in bitcoin to turn it back on. Or a similar message on their phones about their Internet-enabled door lock: Pay $100 if you want to get into your house tonight. Or pay far more if they want their embedded heart defibrillator to keep working.

This isn’t just theoretical. Researchers have already demonstrated a ransomware attack against smart thermostats, which may sound like a nuisance at first but can cause serious property damage if it’s cold enough outside. If the device under attack has no screen, you’ll get the message on the smartphone app you control it from.

Hackers don’t even have to come up with these ideas on their own; the government agencies whose code was stolen were already doing it. One of the leaked CIA attack tools targets Internet-enabled Samsung smart televisions.

Even worse, the usual solutions won’t work with these embedded systems. You have no way to back up your refrigerator’s software, and it’s unclear whether that solution would even work if an attack targets the functionality of the device rather than its stored data.

These devices will be around for a long time. Unlike our phones and computers, which we replace every few years, cars are expected to last at least a decade. We want our appliances to run for 20 years or more, our thermostats even longer.

What happens when the company that made our smart washing machine—or just the computer part—goes out of business, or otherwise decides that they can no longer support older models? WannaCry affected Windows versions as far back as XP, a version that Microsoft no longer supports. The company broke with policy and released a patch for those older systems, but it has both the engineering talent and the money to do so.

That won’t happen with low-cost IoT devices.

Those devices are built on the cheap, and the companies that make them don’t have the dedicated teams of security engineers ready to craft and distribute security patches. The economics of the IoT doesn’t allow for it. Even worse, many of these devices aren’t patchable. Remember last fall when the Mirai botnet infected hundreds of thousands of Internet-enabled digital video recorders, webcams and other devices and launched a massive denial-of-service attack that resulted in a host of popular websites dropping off the Internet? Most of those devices couldn’t be fixed with new software once they were attacked. The way you update your DVR is to throw it away and buy a new one.

Solutions aren’t easy and they’re not pretty. The market is not going to fix this unaided. Security is a hard-to-evaluate feature against a possible future threat, and consumers have long rewarded companies that provide easy-to-compare features and a quick time-to-market at its expense. We need to assign liabilities to companies that write insecure software that harms people, and possibly even issue and enforce regulations that require companies to maintain software systems throughout their life cycle. We may need minimum security standards for critical IoT devices. And it would help if the NSA got more involved in securing our information infrastructure and less in keeping it vulnerable so the government can eavesdrop.

I know this all sounds politically impossible right now, but we simply cannot live in a future where everything—from the things we own to our nation’s infrastructure—can be held for ransom by criminals again and again.

This essay previously appeared in the Washington Post.

Posted on May 23, 2017 at 5:55 AM | 54 Comments

Extending the Airplane Laptop Ban

The Department of Homeland Security is rumored to be considering extending the current travel ban on large electronics for Middle Eastern flights to European ones as well. The likely reaction of airlines will be to implement new traveler programs, effectively allowing wealthier and more frequent fliers to bring their computers with them. This will only exacerbate the divide between the haves and the have-nots—all without making us any safer.

In March, both the United States and the United Kingdom required that passengers from 10 Muslim countries give up their laptop computers and larger tablets, and put them in checked baggage. The new measure was based on reports that terrorists would try to smuggle bombs onto planes concealed in these larger electronic devices.

The security measure made no sense for two reasons. First, moving these computers into the baggage holds doesn’t keep them off planes. Yes, it is easier to detonate a bomb that’s in your hands than to remotely trigger it in the cargo hold. But it’s also more effective to screen laptops at security checkpoints than it is to place them in checked baggage. TSA already does this kind of screening randomly and occasionally: making passengers turn laptops on to ensure that they’re functional computers and not just bomb-filled cases, and running chemical tests on their surface to detect explosive material.

Second, banning laptops on selected flights just forces terrorists to buy more roundabout itineraries. It doesn’t take much creativity to fly Doha-Amsterdam-New York instead of direct. Adding Amsterdam to the list of affected airports makes the terrorist add yet another itinerary change; it doesn’t remove the threat.

Which brings up another question: If this is truly a threat, why aren’t domestic flights included in this ban? Remember that anyone boarding a plane to the United States from these Muslim countries has already received a visa to enter the country. This isn’t perfect security—the infamous underwear bomber had a visa, after all—but anyone who could detonate a laptop bomb on his international flight could do it on his domestic connection.

I don’t have access to classified intelligence, and I can’t comment on whether explosive-filled laptops are truly a threat. But, if they are, TSA can set up additional security screenings at the gates of US-bound flights worldwide and screen every laptop coming onto the plane. It wouldn’t be the first time we’ve had additional security screening at the gate. And they should require all laptops to go through this screening, prohibiting them from being stashed in checked baggage.

This measure is nothing more than security theater against what appears to be a movie-plot threat.

Banishing laptops to the cargo holds brings with it a host of other threats. Passengers run the risk of their electronics being stolen from their checked baggage—something that has happened in the past. And, depending on the country, passengers also have to worry about border control officials intercepting checked laptops and making copies of what’s on their hard drives.

Safety is another concern. We’re already worried about large lithium-ion batteries catching fire in airplane baggage holds; adding a few hundred of these devices will considerably exacerbate the risk. Both FedEx and UPS no longer accept bulk shipments of these batteries after two jets crashed in 2010 and 2011 due to combustion.

Of course, passengers will rebel against this rule. Having access to a computer on these long transatlantic flights is a must for many travelers, especially the high-revenue business-class travelers. They also won’t accept the delays and confusion this rule will cause as it’s rolled out. Unhappy passengers fly less, or fly other routes on other airlines without these restrictions.

I don’t know how many passengers are choosing to fly to the Middle East via Toronto to avoid the current laptop ban, but I suspect there may be some. If Europe is included in the new ban, many more may consider adding Canada to their itineraries, as well as choosing European hubs that remain unaffected.

As passengers voice their disapproval with their wallets, airlines will rebel. Already Emirates has a program to loan laptops to their premium travelers. I can imagine US airlines doing the same, although probably for an extra fee. We might learn how to make this work: keeping our data in the cloud or on portable memory sticks and using unfamiliar computers for the length of the flight.

A more likely response will be comparable to what happened after the US increased passenger screening post-9/11. In the months and years that followed, we saw different ways for high-revenue travelers to avoid the lines: faster first-class lanes, and then the extra-cost trusted traveler programs that allow people to bypass the long lines, keep their shoes on their feet and leave their laptops and liquids in their bags. It’s a bad security idea, but it keeps both frequent fliers and airlines happy. It would be just another step to allow these people to keep their electronics with them on their flight.

The problem with this response is that it solves the problem for frequent fliers, while leaving everyone else to suffer. This is already the case; those of us enrolled in a trusted traveler program forget what it’s like to go through “normal” security screening. And since frequent fliers—likely to be more wealthy—no longer see the problem, they don’t have any incentive to fix it.

Dividing security checks into haves and have-nots is bad social policy, and we should actively fight any expansion of it. If the TSA implements this security procedure, it should implement it for every flight. And there should be no exceptions. Force every politically connected flier, from members of Congress to the lobbyists that influence them, to do without their laptops on planes. Let the TSA explain to them why they can’t work on their flights to and from D.C.

This essay previously appeared on CNN.com.

EDITED TO ADD: US officials are backing down.

Posted on May 22, 2017 at 6:06 AM | 48 Comments

NSA Abandons "About" Searches

Earlier this month, the NSA said that it would no longer conduct “about” searches of bulk communications data. This was the practice of collecting the communications of Americans based on keywords and phrases in the contents of the messages, not based on who they were from or to.

The NSA’s own words:

After considerable evaluation of the program and available technology, NSA has decided that its Section 702 foreign intelligence surveillance activities will no longer include any upstream internet communications that are solely “about” a foreign intelligence target. Instead, this surveillance will now be limited to only those communications that are directly “to” or “from” a foreign intelligence target. These changes are designed to retain the upstream collection that provides the greatest value to national security while reducing the likelihood that NSA will acquire communications of U.S. persons or others who are not in direct contact with one of the Agency’s foreign intelligence targets.

In addition, as part of this curtailment, NSA will delete the vast majority of previously acquired upstream internet communications as soon as practicable.

[…]

After reviewing amended Section 702 certifications and NSA procedures that implement these changes, the FISC recently issued an opinion and order, approving the renewal certifications and use of procedures, which authorize this narrowed form of Section 702 upstream internet collection. A declassification review of the FISC’s opinion and order, and the related targeting and minimization procedures, is underway.

A quick review: under Section 702 of the FISA Amendments Act, the NSA seizes a copy of all communications moving through a telco—think e-mail and such—and searches it for particular senders, receivers, and—until recently—key words. This pretty clearly violates the Fourth Amendment, and groups like the EFF have been fighting the NSA in court about this for years. The NSA has also had problems in the FISA court about these searches, and cites “inadvertent compliance incidents” related to this.
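
To make the distinction concrete, here is a toy illustration of the difference between selecting on addressing metadata ("to/from" collection) and scanning message contents for a selector ("about" collection). The message format and function names are made up for the example; this has nothing to do with how the NSA's actual systems work.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    recipient: str
    body: str

def to_from_match(msg: Message, selector: str) -> bool:
    # "To/from" collection: keep a message only if the target
    # appears in the addressing metadata.
    return selector in (msg.sender, msg.recipient)

def about_match(msg: Message, selector: str) -> bool:
    # "About" collection: keep a message if the selector merely
    # appears anywhere in the content, even between two parties
    # who are both unrelated to the target.
    return to_from_match(msg, selector) or selector in msg.body

messages = [
    Message("alice@example.com", "bob@example.com", "lunch at noon?"),
    Message("carol@example.com", "dave@example.com",
            "did you see the article about target@example.org?"),
]

selector = "target@example.org"
print([m.sender for m in messages if to_from_match(m, selector)])  # []
print([m.sender for m in messages if about_match(m, selector)])    # ['carol@example.com']
```

The second filter is why "about" collection sweeps in communications of people who are not in contact with any target at all.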

We might learn more about this change. Again, from the NSA’s statement:

A declassification review of the FISC’s opinion and order, and the related targeting and minimization procedures, is underway.

And the EFF is still fighting for more NSA surveillance reforms.

Posted on May 19, 2017 at 2:05 PM | 48 Comments

WannaCry Ransomware

Criminals go where the money is, and cybercriminals are no exception.

And right now, the money is in ransomware.

It’s a simple scam. Encrypt the victim’s hard drive, then extract a fee to decrypt it. The scammers can’t charge too much, because they want the victim to pay rather than give up on the data. But they can charge individuals a few hundred dollars, and they can charge institutions like hospitals a few thousand. Do it at scale, and it’s a profitable business.

And scale is how ransomware works. Computers are infected automatically, with viruses that spread over the internet. Payment is no more difficult than buying something online—and payable in untraceable bitcoin—with some ransomware makers offering tech support to those unsure of how to buy or transfer bitcoin. Customer service is important; people need to know they’ll get their files back once they pay.

And they want you to pay. If they’re lucky, they’ve encrypted your irreplaceable family photos, or the documents of a project you’ve been working on for weeks. Or maybe your company’s accounts receivable files or your hospital’s patient records. The more you need what they’ve stolen, the better.

The particular ransomware making headlines is called WannaCry, and it’s infected some pretty serious organizations.

What can you do about it? Your first line of defense is to diligently install every security update as soon as it becomes available, and to migrate to systems that vendors still support. Microsoft issued a security patch that protects against WannaCry months before the ransomware started infecting systems; the ransomware only works against computers that haven’t been patched. And many of the systems it infects are older computers, no longer normally supported by Microsoft—though it did belatedly release a patch for those older systems. I know it’s hard, but until companies are forced to maintain old systems, you’re much safer upgrading.

This is easier advice for individuals than for organizations. You and I can pretty easily migrate to a new operating system, but organizations sometimes have custom software that breaks when they change OS versions or install updates. Many of the organizations hit by WannaCry had outdated systems for exactly these reasons. But as expensive and time-consuming as updating might be, the risks of not doing so are increasing.

Your second line of defense is good antivirus software. Sometimes ransomware tricks you into encrypting your own hard drive by clicking on a file attachment that you thought was benign. Antivirus software can often catch your mistake and prevent the malicious software from running. This isn’t perfect, of course, but it’s an important part of any defense.

Your third line of defense is to diligently back up your files. There are systems that do this automatically for your hard drive. You can invest in one of those. Or you can store your important data in the cloud. If your irreplaceable family photos are in a backup drive in your house, then the ransomware has that much less hold on you. If your e-mail and documents are in the cloud, then you can just reinstall the operating system and bypass the ransomware entirely. I know storing data in the cloud has its own privacy risks, but they may be less than the risks of losing everything to ransomware.

That takes care of your computers and smartphones, but what about everything else? We’re deep into the age of the “Internet of things.”

There are now computers in your household appliances. There are computers in your cars and in the airplanes you travel on. Computers run our traffic lights and our power grids. These are all vulnerable to ransomware. The Mirai botnet exploited a vulnerability in internet-enabled devices like DVRs and webcams to launch a denial-of-service attack against a critical internet name server; next time it could just as easily disable the devices and demand payment to turn them back on.

Re-enabling a webcam will be cheap; re-enabling your car will cost more. And you don’t want to know how vulnerable implanted medical devices are to these sorts of attacks.

Commercial solutions are coming, probably a convenient repackaging of the three lines of defense described above. But it’ll be yet another security surcharge you’ll be expected to pay because the computers and internet-of-things devices you buy are so insecure. Because there are currently no liabilities for lousy software and no regulations mandating secure software, the market rewards software that’s fast and cheap at the expense of good. Until that changes, ransomware will continue to be a profitable line of criminal business.

This essay previously appeared in the New York Daily News.

Posted on May 19, 2017 at 6:10 AM | 72 Comments

The US Senate Is Using Signal

The US Senate just approved Signal for staff use. Signal is a secure messaging app with no backdoor, and no large corporate owner who can be pressured to install a backdoor.

Susan Landau comments.

Maybe I’m being optimistic, but I think we just won the Crypto War. A very important part of the US government is prioritizing security over surveillance.

Posted on May 17, 2017 at 2:45 PM | 89 Comments

Keylogger Found in HP Laptop Audio Drivers

This is a weird story: researchers have discovered that an audio driver installed in some HP laptops includes a keylogger, which records all keystrokes to a local file. There seems to be nothing malicious about this, but it’s a vivid illustration of how hard it is to secure a modern computer. The operating system, drivers, processes, application software, and everything else are so complicated that it’s pretty much impossible to lock down every aspect of it. So many things are eavesdropping on different aspects of the computer’s operation, collecting personal data as they do so. If an attacker can get to the computer when the drive is unencrypted, he gets access to all sorts of information streams—and there’s often nothing the computer’s owner can do.

Posted on May 17, 2017 at 6:32 AM | 34 Comments

NSA Brute-Force Keysearch Machine

The Intercept published a story about a dedicated NSA brute-force keysearch machine being built with the help of New York University and IBM. It’s based on a document that was accidentally shared on the Internet by NYU.

The article is frustratingly short on details:

The WindsorGreen documents are mostly inscrutable to anyone without a Ph.D. in a related field, but they make clear that the computer is the successor to WindsorBlue, a next generation of specialized IBM hardware that would excel at cracking encryption, whose known customers are the U.S. government and its partners.

Experts who reviewed the IBM documents said WindsorGreen possesses substantially greater computing power than WindsorBlue, making it particularly adept at compromising encryption and passwords. In an overview of WindsorGreen, the computer is described as a “redesign” centered around an improved version of its processor, known as an “application specific integrated circuit,” or ASIC, a type of chip built to do one task, like mining bitcoin, extremely well, as opposed to being relatively good at accomplishing the wide range of tasks that, say, a typical MacBook would handle. One of the upgrades was to switch the processor to smaller transistors, allowing more circuitry to be crammed into the same area, a change quantified by measuring the reduction in nanometers (nm) between certain chip features.

Unfortunately, the Intercept decided not to publish most of the document, so all of those people with “a Ph.D. in a related field” can’t read and understand WindsorGreen’s capabilities. What sorts of key lengths can the machine brute force? Is it optimized for symmetric or asymmetric cryptanalysis? Random brute force or dictionary attacks? We have no idea.
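
For a sense of what "brute force" means and why key length is the question that matters, here is a toy exhaustive search against a deliberately weak cipher. Every extra byte of key multiplies the work by 256, which is why the answer to "what key lengths can the machine search?" determines whether a machine like this threatens modern cryptography at all. The sketch below is purely illustrative and has nothing to do with WindsorGreen's actual design.

```python
import itertools

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A deliberately weak "cipher": repeating-key XOR. Real targets
    # would be AES, RSA, etc.; this is only to illustrate the search.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def brute_force(known_plaintext: bytes, ciphertext: bytes, key_bytes: int):
    # Exhaustive search: try every possible key of the given length.
    # The loop runs 256**key_bytes times, so each extra byte of key
    # multiplies the work by 256. That exponential wall is what makes
    # long keys impractical to brute-force, special hardware or not.
    for candidate in itertools.product(range(256), repeat=key_bytes):
        key = bytes(candidate)
        if toy_encrypt(key, known_plaintext) == ciphertext:
            return key
    return None

secret_key = b"\x5a\xc3"                 # 16-bit toy key
ct = toy_encrypt(secret_key, b"attack at dawn")
print(brute_force(b"attack at dawn", ct, key_bytes=2))  # b'Z\xc3'
```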

Whatever the details, this is exactly the sort of thing the NSA should be spending their money on. Breaking the cryptography used by other nations is squarely in the NSA’s mission.

EDITED TO ADD (6/13): Some of the documents are online.

Posted on May 16, 2017 at 6:40 AM | 52 Comments

Using Wi-Fi to Get 3D Images of Surrounding Location

Interesting research:

The radio signals emitted by a commercial Wi-Fi router can act as a kind of radar, providing images of the transmitter’s environment, according to new experiments. Two researchers in Germany borrowed techniques from the field of holography to demonstrate Wi-Fi imaging. They found that the technique could potentially allow users to peer through walls and could provide images 10 times per second.

News article.

Posted on May 16, 2017 at 6:08 AM | 7 Comments

The Quick vs. the Strong: Commentary on Cory Doctorow's Walkaway

Technological advances change the world. That’s partly because of what they are, but even more because of the social changes they enable. New technologies upend power balances. They give groups new capabilities, increased effectiveness, and new defenses. The Internet decades have been a never-ending series of these upendings. We’ve seen existing industries fall and new industries rise. We’ve seen governments become more powerful in some areas and less in others. We’ve seen the rise of a new form of governance: a multi-stakeholder model where skilled individuals can have more power than multinational corporations or major governments.

Among the many power struggles, there is one type I want to particularly highlight: the battles between the nimble individuals who start using a new technology first, and the slower organizations that come along later.

In general, the unempowered are the first to benefit from new technologies: hackers, dissidents, marginalized groups, criminals, and so on. When they first encountered the Internet, it was transformative. Suddenly, they had access to technologies for dissemination, coordination, organization, and action—things that were impossibly hard before. This can be incredibly empowering. In the early decades of the Internet, we saw it in the rise of Usenet discussion forums and special-interest mailing lists, in how the Internet routed around censorship, and how Internet governance bypassed traditional government and corporate models. More recently, we saw it in the SOPA/PIPA debate of 2011-12, the Gezi protests in Turkey and the various “color” revolutions, and the rising use of crowdfunding. These technologies can invert power dynamics, even in the presence of government surveillance and censorship.

But that’s just half the story. Technology magnifies power in general, but the rates of adoption are different. Criminals, dissidents, the unorganized—all outliers—are more agile. They can make use of new technologies faster, and can magnify their collective power because of it. But when the already-powerful big institutions finally figured out how to use the Internet, they had more raw power to magnify.

This is true for both governments and corporations. We now know that governments all over the world are militarizing the Internet, using it for surveillance, censorship, and propaganda. Large corporations are using it to control what we can do and see, and the rise of winner-take-all distribution systems only exacerbates this.

This is the fundamental tension at the heart of the Internet, and information-based technology in general. The unempowered are more efficient at leveraging new technology, while the powerful have more raw power to leverage. These two trends lead to a battle between the quick and the strong: the quick who can make use of new power faster, and the strong who can make use of that same power more effectively.

This battle is playing out today in many different areas of information technology. You can see it in the security vs. surveillance battles between criminals and the FBI, or dissidents and the Chinese government. You can see it in the battles between content pirates and various media organizations. You can see it where social-media giants and Internet-commerce giants battle against new upstarts. You can see it in politics, where the newer Internet-aware organizations fight with the older, more established, political organizations. You can even see it in warfare, where a small cadre of military personnel can keep a country under perpetual bombardment—using drones—with no risk to the attackers.

This battle is fundamental to Cory Doctorow’s new novel Walkaway. Our heroes represent the quick: those who have checked out of traditional society, and thrive because easy access to 3D printers enables them to eschew traditional notions of property. Their enemy is the strong: the traditional government institutions that exert their power mostly because they can. This battle rages through most of the book, as the quick embrace ever-new technologies and the strong struggle to catch up.

It’s easy to root for the quick, both in Doctorow’s book and in the real world. And while I’m not going to give away Doctorow’s ending—and I don’t know enough to predict how it will play out in the real world—right now, trends favor the strong.

Centralized infrastructure favors traditional power, and the Internet is becoming more centralized. This is true at the endpoints, where companies like Facebook, Apple, Google, and Amazon control much of how we interact with information. It’s also true in the middle, where companies like Comcast increasingly control how information gets to us. It’s true in countries like Russia and China, which increasingly legislate their own national agendas onto their pieces of the Internet. And it’s even true in countries like the US and the UK, which increasingly legislate more government surveillance capabilities.

At the 1996 World Economic Forum, cyber-libertarian John Perry Barlow issued his “Declaration of the Independence of Cyberspace,” telling the assembled world leaders and titans of industry: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.” Many of us believed him a scant 20 years ago, but today those words ring hollow.

But if history is any guide, these things are cyclic. In another 20 years, even newer technologies—both the ones Doctorow focuses on and the ones no one can predict—could easily tip the balance back in favor of the quick. Whether that will result in more of a utopia or a dystopia depends partly on these technologies, but even more on the social changes resulting from these technologies. I’m short-term pessimistic but long-term optimistic.

This essay previously appeared on Crooked Timber.

Posted on May 15, 2017 at 2:21 PM | 28 Comments

Yacht Security

Turns out, multi-million dollar yachts are no more secure than anything else out there:

The ease with which ocean-going oligarchs or other billionaires can be hijacked on the high seas was revealed at a superyacht conference held in a private members club in central London this week.

[…]

Murray, a cybercrime expert at BlackBerry, was demonstrating how criminal gangs could exploit lax data security on superyachts to steal their owners’ financial information, private photos, and even force the yacht off course.

I’m sure it was a surprise to the yacht owners.

Posted on May 15, 2017 at 6:02 AM | 20 Comments

Stealing Voice Prints

This article feels like hyperbole:

The scam has arrived in Australia after being used in the United States and Britain.

The scammer may ask several times “can you hear me?”, to which people would usually reply “yes.”

The scammer is then believed to record the “yes” response and end the call.

That recording of the victim’s voice can then be used to authorise payments or charges in the victim’s name through voice recognition.

Are there really banking systems that use voice recognition of the word “yes” to authenticate? I have never heard of that.

Posted on May 12, 2017 at 6:00 AM | 83 Comments

Securing Elections

Technology can do a lot more to make our elections more secure and reliable, and to ensure that participation in the democratic process is available to all. There are three parts to this process.

First, the voter registration process can be improved. The whole process can be streamlined. People should be able to register online, just as they can register for other government services. The voter rolls need to be protected from tampering, as that’s one of the major ways hackers can disrupt the election.

Second, the voting process can be significantly improved. Voting machines need to be made more secure. There are a lot of technical details best left to the voting-security experts who can deal with them, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters and counted by computer, but recountable by hand.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards.

This means no Internet voting. While that seems attractive, and certainly a way technology can improve voting, we don’t know how to do it securely. We simply can’t build an Internet voting system that is secure against hacking because of the requirement for a secret ballot. This makes voting different from banking and anything else we do on the Internet, and it makes security much harder. Even allegations of vote hacking would be enough to undermine confidence in the system, and we simply cannot afford that. We need a system of pre-election and post-election security audits of these voting machines to increase confidence in the system.

The third part of the voting process we need to secure is the tabulation system. After the polls close, we aggregate votes—from individual machines, to polling places, to precincts, and finally to totals. This system is insecure as well, and we can do a lot more to make it reliable. Similarly, our system of recounts can be made more secure and efficient.

We have the technology to do all of this. The problem is political will. We have to decide that the goal of our election system is for the most people to be able to vote with the least amount of effort. If we continue to enact voter suppression measures like ID requirements, barriers to voter registration, limitations on early voting, reduced polling place hours, and faulty machines, then we are harming democracy more than we are by allowing our voting machines to be hacked.

We have already declared our election system to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states. We can do much more. We owe it to democracy to do it.

This essay previously appeared on TheAtlantic.com.

Posted on May 10, 2017 at 2:14 PM | 78 Comments

Criminals are Now Exploiting SS7 Flaws to Hack Smartphone Two-Factor Authentication Systems

I’ve previously written about the serious vulnerabilities in the SS7 phone routing system. Basically, the system doesn’t authenticate messages. Now, criminals are using it to hack smartphone-based two-factor authentication systems:

In short, the issue with SS7 is that the network believes whatever you tell it. SS7 is especially used for data-roaming: when a phone user goes outside their own provider’s coverage, messages still need to get routed to them. But anyone with SS7 access, which can be purchased for around 1000 Euros according to The Süddeutsche Zeitung, can send a routing request, and the network may not authenticate where the message is coming from.

That allows the attacker to direct a target’s text messages to another device, and, in the case of the bank accounts, steal any codes needed to login or greenlight money transfers (after the hackers obtained victim passwords).
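
The core problem is easy to model: location updates are trusted without any check on who sent them. The sketch below is a toy model, not real SS7 messages or APIs, but it captures why an attacker with network access can silently redirect a victim's text messages, along with the one-time codes inside them.

```python
# A toy model of unauthenticated location/routing updates. Real SS7
# signaling is far more complex; the point is only that nothing here
# checks *who* sent the update.

routing = {"+15551234567": "home-network"}   # subscriber -> where SMS gets delivered

def update_location(subscriber: str, new_network: str) -> None:
    # No authentication: any peer with network access can claim
    # the subscriber has roamed onto its network.
    routing[subscriber] = new_network

def deliver_sms(subscriber: str, text: str) -> None:
    print(f"delivering {text!r} to {subscriber} via {routing[subscriber]}")

# Legitimate roaming and a malicious redirect look identical to the network:
update_location("+15551234567", "attacker-controlled-network")
deliver_sms("+15551234567", "Your one-time login code is 482913")
```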

Posted on May 10, 2017 at 6:50 AM | 26 Comments

Using Ultrasonic Beacons to Track Users

I’ve previously written about ad networks using ultrasonic communications to jump from one device to another. The idea is for devices like televisions to play ultrasonic codes in advertisements and for nearby smartphones to detect them. This way the two devices can be linked.
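
As a rough sketch of how such a beacon might work, here is a toy encoder and decoder that hides a short identifier in tones just above the range most adults can hear. The frequencies, bit rate, and simple frequency-shift scheme are assumptions for illustration only; the commercial systems studied in the paper below use their own proprietary encodings.

```python
import numpy as np

SAMPLE_RATE = 48_000          # Hz; commonly supported by consumer audio hardware
F0, F1 = 18_500, 19_500       # assumed "0"/"1" tones, just above most adults' hearing
BIT_SECONDS = 0.05

def encode_beacon(bits: str) -> np.ndarray:
    """Encode a bit string as a sequence of near-ultrasonic tones (toy FSK)."""
    samples_per_bit = int(SAMPLE_RATE * BIT_SECONDS)
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    chunks = [np.sin(2 * np.pi * (F1 if b == "1" else F0) * t) for b in bits]
    return np.concatenate(chunks)

def decode_beacon(signal: np.ndarray, n_bits: int) -> str:
    """Recover the bits by checking which tone dominates each slot (FFT peak)."""
    samples_per_bit = int(SAMPLE_RATE * BIT_SECONDS)
    bits = []
    for i in range(n_bits):
        chunk = signal[i * samples_per_bit:(i + 1) * samples_per_bit]
        freqs = np.fft.rfftfreq(len(chunk), d=1 / SAMPLE_RATE)
        spectrum = np.abs(np.fft.rfft(chunk))
        peak = freqs[np.argmax(spectrum)]
        bits.append("1" if abs(peak - F1) < abs(peak - F0) else "0")
    return "".join(bits)

beacon_id = "10110010"          # hypothetical ad/tracking identifier
audio = encode_beacon(beacon_id)
assert decode_beacon(audio, len(beacon_id)) == beacon_id
```

A television plays something like the first half through its speaker; an app with microphone access runs something like the second half in the background.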

Creepy, yes. And also increasingly common, as this research demonstrates:

Privacy Threats through Ultrasonic Side Channels on Mobile Devices

by Daniel Arp, Erwin Quiring, Christian Wressnegger and Konrad Rieck

Abstract: Device tracking is a serious threat to the privacy of users, as it enables spying on their habits and activities. A recent practice embeds ultrasonic beacons in audio and tracks them using the microphone of mobile devices. This side channel allows an adversary to identify a user’s current location, spy on her TV viewing habits or link together her different mobile devices. In this paper, we explore the capabilities, the current prevalence and technical limitations of this new tracking technique based on three commercial tracking solutions. To this end, we develop detection approaches for ultrasonic beacons and Android applications capable of processing these. Our findings confirm our privacy concerns: We spot ultrasonic beacons in various web media content and detect signals in 4 of 35 stores in two European cities that are used for location tracking. While we do not find ultrasonic beacons in TV streams from 7 countries, we spot 234 Android applications that are constantly listening for ultrasonic beacons in the background without the user’s knowledge.

News article. BoingBoing post.

Posted on May 8, 2017 at 9:16 AM | 37 Comments

Friday Squid Blogging: Squid Communications

In the oval squid Sepioteuthis lessoniana, males use body patterns to communicate with both females and other males:

To gain insight into the visual communication associated with each behavior in terms of the body patterning’s key components, the co-expression frequencies of two or more components at any moment in time were calculated in order to assess uniqueness when distinguishing one behavior from another. This approach identified the minimum set of key components that, when expressed together, represents an unequivocal visual communication signal. While the interpretation of the signal and the associated response of the receiver during visual communication are difficult to determine, the concept of the component assembly is similar to a typical language within which individual words often have multiple meanings, but when they appear together with other words, the message becomes unequivocal. The present study thus demonstrates that dynamic body patterning, by expressing unique sets of key components acutely, is an efficient way of communicating behavioral information between oval squids.

News article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on May 5, 2017 at 4:05 PM265 Comments

Why Is the TSA Scanning Paper?

I’ve been reading a bunch of anecdotal reports that the TSA is starting to scan paper separately:

A passenger going through security at Kansas City International Airport (MCI) recently was asked by security officers to remove all paper products from his bag. Everything from books to Post-It Notes, documents and more. Once the paper products were removed, the passenger had to put them in a separate bin to be scanned separately.

When the passenger inquired why he was being forced to remove the paper products from his carry-on bag, the agent told him that it was a pilot program that’s being tested at MCI and will begin rolling out nationwide. KSHB Kansas City is reporting that other passengers traveling through MCI have also reported the paper-removal procedure at the airport. One person said that security dug through the suitcase for two “blocks” of Post-It Notes at the bottom.

Does anyone have any guesses as to why the TSA is doing this?

EDITED TO ADD (5/11): This article says that the TSA has stopped doing this. They blamed it on their contractor, Akal Security.

Posted on May 5, 2017 at 7:35 AM124 Comments

Forging Voice

LyreBird is a system that can accurately reproduce someone's voice, given a large amount of sample input. It's pretty good—listen to the demo here—and will only get better over time.

The applications for recorded-voice forgeries are obvious, but I think the larger security risk will be real-time forgery. Imagine the social engineering implications of an attacker on the telephone being able to impersonate someone the victim knows.

I don’t think we’re ready for this. We use people’s voices to authenticate them all the time, in all sorts of different ways.

EDITED TO ADD (5/11): This, from 2003, covers the same topic.

Posted on May 4, 2017 at 10:31 AM40 Comments

Who is Publishing NSA and CIA Secrets, and Why?

There’s something going on inside the intelligence communities in at least two countries, and we have no idea what it is.

Consider these three data points. One: someone, probably a country’s intelligence organization, is dumping massive amounts of cyberattack tools belonging to the NSA onto the Internet. Two: someone else, or maybe the same someone, is doing the same thing to the CIA.

Three: in March, NSA Deputy Director Richard Ledgett described how the NSA penetrated the computer networks of a Russian intelligence agency and was able to monitor them as they attacked the US State Department in 2014. Even more explicitly, a US ally—my guess is the UK—was not only hacking the Russian intelligence agency's computers, but also the surveillance cameras inside their building. "They [the US ally] monitored the [Russian] hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said."

Countries don't often reveal intelligence capabilities: "sources and methods." Doing so gives their adversaries important information about what to fix, so it's a deliberate decision, made only for good reason. And it's not just the target country that learns from a reveal. When the US announces that it can see through the cameras inside the buildings of Russia's cyber warriors, other countries immediately check the security of their own cameras.

With all this in mind, let’s talk about the recent leaks at NSA and the CIA.

Last year, a previously unknown group called the Shadow Brokers started releasing NSA hacking tools and documents from about three years ago. They continued to do so this year—five sets of files in all—and have implied that more classified documents are to come. We don't know how they got the files. When the Shadow Brokers first emerged, the general consensus was that someone had found and hacked an external NSA staging server. These are third-party computers that the NSA's TAO hackers use to launch attacks from. Those servers are necessarily stocked with TAO attack tools. This matched the leaks, which included a "script" directory and working attack notes. We're not sure if someone inside the NSA made a mistake that left these files exposed, or if the hackers that found the cache got lucky.

That explanation stopped making sense after the latest Shadow Brokers release, which included attack tools against Windows, PowerPoint presentations, and operational notes—documents that are definitely not going to be on an external NSA staging server. A credible theory, which I first heard from Nicholas Weaver, is that the Shadow Brokers are publishing NSA data from multiple sources. The first leaks were from an external staging server, but the more recent leaks are from inside the NSA itself.

So what happened? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but seems very unlikely. Did someone hack the NSA itself? Could there be a mole inside the NSA, as Kevin Poulsen speculated?

If it is a mole, my guess is that he's already been arrested. There are enough identifying details in the files to pinpoint exactly where and when they came from. Surely the NSA knows who could have taken the files. No country would burn a mole working for it by publishing what he delivered. Intelligence agencies know that if they betray a source this severely, they'll never get another one.

That points to two options. The first is that the files came from Hal Martin. He's the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can't be the publisher, because the Shadow Brokers have remained in business even while he has been in prison. But maybe the leaker got the documents from his stash, either because Martin gave the documents to the leaker or because Martin himself was hacked. The dates line up, so it's theoretically possible, but the contents of the documents suggest someone with a different sort of access. There's also nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, and I think that's exactly the sort of thing the NSA would let leak if it were true. But maybe I'm wrong about all of this; Occam's Razor suggests that it's him.

The other option is a mysterious second NSA leak of cyberattack tools. The only thing I have ever heard about this is from a Washington Post story about Martin: “But there was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee, one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.” But “not thought to have” is not the same as not having done so.

On the other hand, it's possible that someone penetrated the internal NSA network. We've already seen NSA tools that can do that kind of thing to other networks. That would be huge, and it would explain why there were calls to fire NSA Director Mike Rogers last year.

The CIA leak is both similar and different. It consists of a series of attack tools from about a year ago. The most educated guess amongst people who know stuff is that the data is from an almost-certainly air-gapped internal development wiki—a Confluence server—and either someone on the inside was somehow coerced into giving up a copy of it, or someone on the outside hacked into the CIA and got themselves a copy. They turned the documents over to WikiLeaks, which continues to publish them.

This is also a really big deal, and hugely damaging for the CIA. Those tools were new, and they’re impressive. I have been told that the CIA is desperately trying to hire coders to replace what was lost.

For both of these leaks, one big question is attribution: who did this? A whistleblower wouldn't sit on attack tools for years before publishing. A whistleblower would act more like Snowden or Manning, publishing immediately—and publishing documents that discuss what the US is doing to whom, not simply a bunch of attack tools. It just doesn't make sense. Neither do random hackers. Or cybercriminals. I think it's being done by a country or countries.

My guess was, and still is, Russia in both cases. Here's my reasoning. Whoever got this information years ago and is leaking it now has to 1) be capable of hacking the NSA and/or the CIA, and 2) be willing to publish it all. Countries like Israel and France are certainly capable, but would never publish. Countries like North Korea or Iran probably aren't capable. The list of countries that fit both criteria is small: Russia, China, and…and…and I'm out of ideas. And China is currently trying to make nice with the US.

Last August, Edward Snowden guessed Russia, too.

So Russia—or someone else—steals these secrets, and presumably uses them to both defend its own networks and hack other countries while deflecting blame for a couple of years. For it to publish now means that the intelligence value of the information is now lower than the embarrassment value to the NSA and CIA. This could be because the US figured out that its tools were hacked, and maybe even by whom, which would make the tools less valuable against US government targets, although still valuable against third parties.

The message that comes with publishing seems clear to me: “We are so deep into your business that we don’t care if we burn these few-years-old capabilities, as well as the fact that we have them. There’s just nothing you can do about it.” It’s bragging.

Which is exactly the same thing Ledgett is doing to the Russians. Maybe the capabilities he talked about are long gone, so there’s nothing lost in exposing sources and methods. Or maybe he too is bragging: saying to the Russians that he doesn’t care if they know. He’s certainly bragging to every other country that is paying attention to his remarks. (He may be bluffing, of course, hoping to convince others that the US has intelligence capabilities it doesn’t.)

What happens when intelligence agencies go to war with each other and don’t tell the rest of us? I think there’s something going on between the US and Russia that the public is just seeing pieces of. We have no idea why, or where it will go next, and can only speculate.

This essay previously appeared on Lawfare.com.

Posted on May 1, 2017 at 6:32 AM94 Comments
