Hacking Team, Computer Vulnerabilities, and the NSA

When the National Security Agency (NSA)—or any government agency—discovers a vulnerability in a popular computer system, should it disclose it or not? The debate exists because vulnerabilities have both offensive and defensive uses. Offensively, vulnerabilities can be exploited to penetrate others’ computers and networks, either for espionage or destructive purposes. Defensively, publicly revealing security flaws can be used to make our own systems less vulnerable to those same attacks. The two options are mutually exclusive: either we can help to secure both our own networks and the systems we might want to attack, or we can keep both networks vulnerable. Many, myself included, have long argued that defense is more important than offense, and that we should patch almost every vulnerability we find. Even the President’s Review Group on Intelligence and Communications Technologies recommended in 2013 that ‘U.S. policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on U.S. Government and other networks.’

Both the NSA and the White House have talked about a secret ‘vulnerability equities process’ they go through when they find a security flaw. Both groups maintain the process is heavily weighted in favor of disclosing vulnerabilities to the vendors and having them patched.

An undated document—declassified last week with heavy redactions after a year-long Freedom of Information Act lawsuit—shines some light on the process but still leaves many questions unanswered. An important question is: which vulnerabilities go through the equities process, and which don’t?

A real-world example of the ambiguity surrounding the equities process emerged from the recent hacking of the cyberweapons arms manufacturer Hacking Team. The company sells Internet attack and espionage software to countries around the world, including many reprehensible governments, enabling them to eavesdrop on their citizens, sometimes as a prelude to arrest and torture. Its tools have even been used against U.S. journalists.

In July, unidentified hackers penetrated Hacking Team’s corporate network and stole almost everything of value, including corporate documents, e-mails, and source code. The hackers proceeded to post it all online.

The NSA was most likely able to penetrate Hacking Team’s network and steal the same data. The agency probably did it years ago. They would have learned the same things about Hacking Team’s network software that we did in July: how it worked, what vulnerabilities they were using, and which countries were using their cyber weapons. Armed with that knowledge, the NSA could have quietly neutralized many of the company’s products. The United States could have alerted software vendors about the zero-day exploits and had them patched. It could have told the antivirus companies how to detect and remove Hacking Team’s malware. It could have done a lot. Assuming that the NSA did infiltrate Hacking Team’s network, the fact that the United States chose not to reveal the vulnerabilities it uncovered is both revealing and interesting, and the decision provides a window into the vulnerability equities process.

The first question to ask is: why? There are three possible reasons. One, the software was also being used by the United States, and the government did not want to lose its benefits. Two, the NSA was able to eavesdrop on other entities using Hacking Team’s software, and it wanted to continue benefiting from that intelligence. And three, the agency did not want to expose its own hacking capabilities by demonstrating that it had compromised Hacking Team’s network. In reality, the decision may have been due to some combination of the three.

How was this decision made? More precisely, did any of the vulnerabilities that Hacking Team exploited, and that the NSA was aware of, go through the vulnerability equities process? It is unclear. The NSA plays fast and loose when deciding which security flaws go through the procedure. The process document states that it applies to vulnerabilities that are ‘newly discovered and not publicly known.’ Does that refer only to vulnerabilities the NSA discovers itself, or does the process also apply to zero-day vulnerabilities the NSA discovers others using? If vulnerabilities used in others’ cyberweapons are excluded, the process covers far fewer flaws than its description suggests.

The U.S. government should close the vulnerabilities that foreign governments are using to attack people and networks. If taking action is as easy as plugging security vulnerabilities in products and making everyone in the world more secure, that should be standard procedure. The fact that the NSA—we assume—chose not to suggests that the United States has its priorities wrong.

Undoubtedly, there would be blowback from closing vulnerabilities utilized in others’ cyber weapons. Several companies sell information about vulnerabilities to different countries, and if they found that those security gaps were regularly closed soon after they started trying to sell them, they would quickly suspect espionage and take more defensive precautions. The new wariness of sellers and decrease in available security flaws would also raise the price of vulnerabilities worldwide. The United States is one of the biggest buyers, meaning that we benefit from greater availability and lower prices.

If we assume the NSA has penetrated these companies’ networks, we should also assume that the intelligence agencies of countries like Russia and China have done the same. Are those countries using Hacking Team’s vulnerabilities in their cyber weapons? We are all embroiled in a cyber arms race—finding, buying, stockpiling, using, and exposing vulnerabilities—and our actions will affect the actions of all the other players.

It seems foolish that we would not take every opportunity to neutralize the cyberweapons of those countries that would attack the United States or use them against their own people for totalitarian gain. Is it truly possible that when the NSA intercepts and reverse-engineers a cyberweapon used by one of our enemies—whether a Hacking Team customer or a country like China—we don’t close the vulnerabilities that weapon uses? Does the NSA use knowledge of the weapon to defend the U.S. government networks whose security it maintains, at the expense of everyone else in the country and the world? That seems incredibly dangerous.

In my book Data and Goliath, I suggested breaking apart the NSA’s offensive and defensive components, in part to resolve the agency’s internal conflict between attack and defense. One part would be focused on foreign espionage, and another on cyberdefense. This Hacking Team discussion demonstrates that even separating the agency would not be enough. The espionage-focused organization that penetrates and analyzes the products of cyberweapons arms manufacturers would regularly learn about vulnerabilities used to attack systems and networks worldwide. Thus, that section of the agency would still have to transfer that knowledge to the defense-focused organization. That is not going to happen as long as the United States prioritizes surveillance over security and attack over defense. The norms governing actions in cyberspace need to be changed, a task far more difficult than any reform of the NSA.
