Entries Tagged "cost-benefit analysis"


Volkswagen and Cheating Software

Portuguese translation by Ricardo R Hashimoto

For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars’ computers were able to detect when they were being tested, and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren’t being tested, they belched out 40 times the pollutants. Their CEO has resigned, and the company will face an expensive recall, enormous fines and worse.
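
Conceptually, a defeat device like this needs nothing more exotic than a check for "does this drive look like a regulatory test cycle?" The sketch below is purely illustrative — the sensor names, thresholds, and mode labels are invented for this example and are not VW's actual engine-control code:

```python
# Illustrative sketch only: invented signals and thresholds, not real ECU code.
# A dynamometer test drives the wheels through a scripted speed profile while
# the steering wheel stays centered, so even a crude heuristic can "detect"
# the test and switch the exhaust treatment into its cleanest mode.
def looks_like_emissions_test(speed_kmh: float, steering_deg: float,
                              seconds_since_start: float) -> bool:
    return (abs(steering_deg) < 1.0          # wheel never turns on the rollers
            and speed_kmh < 130              # stays within the test cycle's range
            and seconds_since_start < 1800)  # test cycles run roughly 20-30 minutes

def select_emissions_mode(speed_kmh, steering_deg, seconds_since_start):
    if looks_like_emissions_test(speed_kmh, steering_deg, seconds_since_start):
        return "full-treatment"   # meets the regulatory limit
    return "road-calibration"     # emits far more pollutants in normal driving
```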

Cheating on regulatory testing has a long history in corporate America. It happens regularly in automobile emissions control and elsewhere. What’s important in the VW case is that the cheating was preprogrammed into the algorithm that controlled cars’ emissions.

Computers allow people to cheat in ways that are new. Because the cheating is encapsulated in software, the malicious actions can happen at a far remove from the testing itself. Because the software is “smart” in ways that normal objects are not, the cheating can be subtler and harder to detect.

We’ve already had examples of smartphone manufacturers cheating on processor benchmark testing: detecting when they’re being tested and artificially increasing their performance. We’re going to see this in other industries.

The Internet of Things is coming. Many industries are moving to add computers to their devices, and that will bring with it new opportunities for manufacturers to cheat. Light bulbs could fool regulators into appearing more energy efficient than they are. Temperature sensors could fool buyers into believing that food has been stored at safer temperatures than it has been. Voting machines could appear to work perfectly — except during the first Tuesday of November, when they undetectably switch a few percent of votes from one party’s candidates to another’s.

My worry is that some corporate executives won’t interpret the VW story as a cautionary tale involving just punishments for a bad mistake but will see it instead as a demonstration that you can get away with something like that for six years.

And they’ll cheat smarter. For all of VW’s brazenness, its cheating was obvious once people knew to look for it. Far cleverer would be to make the cheating look like an accident. Overall software quality is so bad that products ship with thousands of programming mistakes.

Most of them don’t affect normal operations, which is why your software generally works just fine. Some of them do, which is why your software occasionally fails, and needs constant updates. By making cheating software appear to be a programming mistake, the cheating looks like an accident. And, unfortunately, this type of deniable cheating is easier than people think.

Computer-security experts believe that intelligence agencies have been doing this sort of thing for years, both with the consent of the software developers and surreptitiously.

This problem won’t be solved through computer security as we normally think of it. Conventional computer security is designed to prevent outside hackers from breaking into your computers and networks. The car analogue would be security software that prevented an owner from tweaking his own engine to run faster but in the process emit more pollutants. What we need to contend with is a very different threat: malfeasance programmed in at the design stage.

We already know how to protect ourselves against corporate misbehavior. Ronald Reagan once said “trust, but verify” when speaking about the Soviet Union cheating on nuclear treaties. We need to be able to verify the software that controls our lives.

Software verification has two parts: transparency and oversight. Transparency means making the source code available for analysis. The need for this is obvious; it’s much easier to hide cheating software if a manufacturer can hide the code.

But transparency doesn’t magically reduce cheating or improve software quality, as anyone who uses open-source software knows. It’s only the first step. The code must be analyzed. And because software is so complicated, that analysis can’t be limited to a once-every-few-years government test. We need private analysis as well.
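
As a toy example of what that private analysis can look like once source code is available, here is a sketch of a scanner that flags branches whose conditions depend on the wall clock — the kind of trigger the hypothetical vote-switching machine above would need. It is illustrative only; the name list is invented, and real audits rely on far more capable static- and dynamic-analysis tools:

```python
# Toy source scanner: flag `if` statements whose condition references
# date/time functions, a crude proxy for "this code behaves differently on
# particular days." Illustrative only.
import ast
import sys

TIME_NAMES = {"date", "today", "now", "time", "localtime", "gmtime", "strftime"}

def flag_time_dependent_branches(source: str, filename: str = "<input>") -> None:
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if not isinstance(node, ast.If):
            continue
        referenced = set()
        for sub in ast.walk(node.test):
            if isinstance(sub, ast.Name):
                referenced.add(sub.id)
            elif isinstance(sub, ast.Attribute):
                referenced.add(sub.attr)
        hits = referenced & TIME_NAMES
        if hits:
            print(f"{filename}:{node.lineno}: branch depends on {sorted(hits)}")

if __name__ == "__main__":
    flag_time_dependent_branches(open(sys.argv[1]).read(), sys.argv[1])
```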

It was researchers at private labs in the United States and Germany that eventually outed Volkswagen. So transparency can’t just mean making the code available to government regulators and their representatives; it needs to mean making the code available to everyone.

Both transparency and oversight are being threatened in the software world. Companies routinely fight making their code public and attempt to muzzle security researchers who find problems, citing the proprietary nature of the software. It’s a fair complaint, but the public interests of accuracy and safety need to trump business interests.

Proprietary software is increasingly being used in critical applications: voting machines, medical devices, breathalyzers, electric power distribution, systems that decide whether or not someone can board an airplane. We’re ceding more control of our lives to software and algorithms. Transparency is the only way to verify that they’re not cheating us.

There’s no shortage of corporate executives willing to lie and cheat their way to profits. We saw another example of this last week: Stewart Parnell, the former CEO of the now-defunct Peanut Corporation of America, was sentenced to 28 years in prison for knowingly shipping out salmonella-tainted products. That may seem excessive, but nine people died and many more fell ill as a result of his cheating.

Software will only make malfeasance like this easier to commit and harder to prove. Fewer people need to know about the conspiracy. It can be done in advance, nowhere near the testing time or site. And, if the software remains undetected for long enough, it could easily be the case that no one in the company remembers that it’s there.

We need better verification of the software that controls our lives, and that means more — and more public — transparency.

This essay previously appeared on CNN.com.

EDITED TO ADD: Three more essays.

EDITED TO ADD (10/8): A history of emissions-control cheating devices.

Posted on September 30, 2015 at 9:13 AM

The Further Democratization of Stingray

Stingray is the code name for an IMSI-catcher, which is basically a fake cell phone tower sold by Harris Corporation to various law enforcement agencies. (It’s actually just one of a series of devices with fish names — Amberjack is another — but it’s the name used in the media.) What it basically does is trick nearby cell phones into connecting to it. Once that happens, the IMSI-catcher can collect identification and location information of the phones and, in some cases, eavesdrop on phone conversations, text messages, and web browsing. (IMSI stands for International Mobile Subscriber Identity, which is the unique serial number your cell phone broadcasts so that the cellular system knows where you are.)

The use of IMSI-catchers in the US used to be a massive police secret. The FBI is so scared of explaining this capability in public that the agency makes local police sign nondisclosure agreements before using the technique, and has instructed them to lie about their use of it in court. When it seemed possible that local police in Sarasota, Florida, might release documents about Stingray cell phone interception equipment to plaintiffs in civil rights litigation against them, federal marshals seized the documents. More recently, St. Louis police dropped a case rather than talk about the technology in court. And Baltimore police admitted using Stingray over 25,000 times.

The truth is that it’s no longer a massive police secret. We now know a lot about IMSI-catchers. And the US government does not have a monopoly over the use of IMSI-catchers. I wrote in Data and Goliath:

There are dozens of these devices scattered around Washington, DC, and the rest of the country run by who-knows-what government or organization. Criminal uses are next.

From the Washington Post:

How rife? Turner and his colleagues assert that their specially outfitted smartphone, called the GSMK CryptoPhone, had detected signs of as many as 18 IMSI catchers in less than two days of driving through the region. A map of these locations, released Wednesday afternoon, looks like a primer on the geography of Washington power, with the surveillance devices reportedly near the White House, the Capitol, foreign embassies and the cluster of federal contractors near Dulles International Airport.

At the RSA Conference last week, Pwnie Express demonstrated their IMSI-catcher detector.

Building your own IMSI-catcher isn’t hard or expensive. At Def Con in 2010, researcher Chris Paget (now Kristin Paget) demonstrated a homemade IMSI-catcher. The whole thing cost $1,500, which is cheap enough for both criminals and nosy hobbyists.

It’s even cheaper and easier now. Anyone with a HackRF software-defined radio card can turn their laptop into an amateur IMSI-catcher. And this is why companies are building detectors into their security monitoring equipment.

Two points here. The first is that the FBI should stop treating Stingray like it’s a big secret, so we can start talking about policy.

The second is that we should stop pretending that this capability is exclusive to law enforcement, and recognize that we’re all at risk because of it. If we continue to allow our cellular networks to be vulnerable to IMSI-catchers, then we are all vulnerable to any foreign government, criminal, hacker, or hobbyist that builds one. If we instead engineer our cellular networks to be secure against this sort of attack, then we are safe against all those attackers.

Me:

We have one infrastructure. We can’t choose a world where the US gets to spy and the Chinese don’t. We get to choose a world where everyone can spy, or a world where no one can spy. We can be secure from everyone, or vulnerable to anyone.

As with QUANTUM, we have the choice of building our cellular infrastructure for security or for surveillance. Let’s choose security.

EDITED TO ADD (5/2): Here’s an IMSI catcher for sale on alibaba.com. At this point, every dictator in the world is using this technology against their own citizens. They’re used extensively in China to send SMS spam without paying the telcos any fees. On a Food Network show called Mystery Diners — episode 108, “Cabin Fever” — someone used an IMSI catcher to intercept a phone call between two restaurant employees.

The new model of the IMSI catcher from Harris Corporation is called Hailstorm. It has the ability to remotely inject malware into cell phones. Other Harris IMSI-catcher codenames are Kingfish, Gossamer, Triggerfish, Amberjack and Harpoon. The competitor is DRT, made by the Boeing subsidiary Digital Receiver Technology, Inc.

EDITED TO ADD (5/2): Here’s an IMSI catcher called Piranha, sold by the Israeli company Rayzone Corp. It claims to work on GSM 2G, 3G, and 4G networks (plus CDMA, of course). The basic Stingray only works on GSM 2G networks, and intercepts phones on the more modern networks by forcing them to downgrade to the 2G protocols. We believe that the more modern IMSI catchers also work against 3G and 4G networks.

EDITED TO ADD (5/13): The FBI recently released more than 5,000 pages of documents about Stingray, but nearly everything is redacted.

Posted on April 27, 2015 at 6:27 AM

Metal Detectors at Sports Stadiums

Fans attending Major League Baseball games are being greeted in a new way this year: with metal detectors at the ballparks. Touted as a counterterrorism measure, they’re nothing of the sort. They’re pure security theater: They look good without doing anything to make us safer. We’re stuck with them because of a combination of buck passing, CYA thinking, and fear.

As a security measure, the new devices are laughable. The ballpark metal detectors are much more lax than the ones at an airport checkpoint. They aren’t very sensitive — people with phones and keys in their pockets are sailing through — and there are no X-ray machines. Bags get the same cursory search they’ve gotten for years. And fans wanting to avoid the detectors can opt for a “light pat-down search” instead.

There’s no evidence that this new measure makes anyone safer. A halfway competent ticketholder would have no trouble sneaking a gun into the stadium. For that matter, a bomb exploded at a crowded checkpoint would be no less deadly than one exploded in the stands. These measures will, at best, be effective at stopping the random baseball fan who’s carrying a gun or knife into the stadium. That may be a good idea, but unless there’s been a recent spate of fan shootings and stabbings at baseball games — and there hasn’t — this is a whole lot of time and money being spent to combat an imaginary threat.

But imaginary threats are the only ones baseball executives have to stop this season; there’s been no specific terrorist threat or actual intelligence to be concerned about. MLB executives forced this change on ballparks based on unspecified discussions with the Department of Homeland Security after the Boston Marathon bombing in 2013. Because, you know, that was also a sporting event.

This system of vague consultations and equally vague threats ensures that no one organization can be seen as responsible for the change. MLB can claim that the league and teams “work closely” with DHS. DHS can claim that it was MLB’s initiative. And both can safely relax because if something happens, at least they did something.

It’s an attitude I’ve seen before: “Something must be done. This is something. Therefore, we must do it.” Never mind if the something makes any sense or not.

In reality, this is CYA security, and it’s pervasive in post-9/11 America. It no longer matters if a security measure makes sense, if it’s cost-effective or if it mitigates any actual threats. All that matters is that you took the threat seriously, so if something happens you won’t be blamed for inaction. It’s security, all right — security for the careers of those in charge.

I’m not saying that these officials care only about their jobs and not at all about preventing terrorism, only that their priorities are skewed. They imagine vague threats, and come up with correspondingly vague security measures intended to address them. They experience none of the costs. They’re not the ones who have to deal with the long lines and confusion at the gates. They’re not the ones who have to arrive early to avoid the messes the new policies have caused around the league. And if fans spend more money at the concession stands because they’ve arrived an hour early and have had the food and drinks they tried to bring along confiscated, so much the better, from the team owners’ point of view.

I can hear the objections to this as I write. You don’t know these measures won’t be effective! What if something happens? Don’t we have to do everything possible to protect ourselves against terrorism?

That’s worst-case thinking, and it’s dangerous. It leads to bad decisions, bad design and bad security. A better approach is to realistically assess the threats, judge security measures on their effectiveness and take their costs into account. And the result of that calm, rational look will be the realization that there will always be places where we pack ourselves densely together, and that we should spend less time trying to secure those places and more time finding terrorist plots before they can be carried out.

So far, fans have been exasperated but mostly accepting of these new security measures. And this is precisely the problem — most of us don’t care all that much. Our options are to put up with these measures, or stay home. Going to a baseball game is not a political act, and metal detectors aren’t worth a boycott. But there’s an undercurrent of fear as well. If it’s in the name of security, we’ll accept it. As long as our leaders are scared of the terrorists, they’re going to continue the security theater. And we’re similarly going to accept whatever measures are forced upon us in the name of security. We’re going to accept the National Security Agency’s surveillance of every American, airport security procedures that make no sense and metal detectors at baseball and football stadiums. We’re going to continue to waste money overreacting to irrational fears.

We no longer need the terrorists. We’re now so good at terrorizing ourselves.

This essay previously appeared in the Washington Post.

Posted on April 15, 2015 at 6:58 AM

BIOS Hacking

We’ve learned a lot about the NSA’s abilities to hack a computer’s BIOS so that the hack survives reinstalling the OS. Now we have a research presentation about it.

From Wired:

The BIOS boots a computer and helps load the operating system. By infecting this core software, which operates below antivirus and other security products and therefore is not usually scanned by them, spies can plant malware that remains live and undetected even if the computer’s operating system were wiped and re-installed.

[…]

Although most BIOS have protections to prevent unauthorized modifications, the researchers were able to bypass these to reflash the BIOS and implant their malicious code.

[…]

Because many BIOS share some of the same code, they were able to uncover vulnerabilities in 80 percent of the PCs they examined, including ones from Dell, Lenovo and HP. The vulnerabilities, which they’re calling incursion vulnerabilities, were so easy to find that they wrote a script to automate the process and eventually stopped counting the vulns it uncovered because there were too many.

From ThreatPost:

Kallenberg said an attacker would need to already have remote access to a compromised computer in order to execute the implant and elevate privileges on the machine through the hardware. Their exploit turns down existing protections in place to prevent re-flashing of the firmware, enabling the implant to be inserted and executed.

The devious part of their exploit is that they’ve found a way to insert their agent into System Management Mode, which is used by firmware and runs separately from the operating system, managing various hardware controls. System Management Mode also has access to memory, which puts supposedly secure operating systems such as Tails in the line of fire of the implant.

From the Register:

“Because almost no one patches their BIOSes, almost every BIOS in the wild is affected by at least one vulnerability, and can be infected,” Kovah says.

“The high amount of code reuse across UEFI BIOSes means that BIOS infection can be automatic and reliable.

“The point is less about how vendors don’t fix the problems, and more how the vendors’ fixes are going un-applied by users, corporations, and governments.”

From Forbes:

Though such “voodoo” hacking will likely remain a tool in the arsenal of intelligence and military agencies, it’s getting easier, Kallenberg and Kovah believe. This is in part due to the widespread adoption of UEFI, a framework that makes it easier for the vendors along the manufacturing chain to add modules and tinker with the code. That’s proven useful for the good guys, but also made it simpler for researchers to inspect the BIOS, find holes and create tools that find problems, allowing Kallenberg and Kovah to show off exploits across different PCs. In the demo to FORBES, an HP PC was used to carry out an attack on an ASUS machine. Kovah claimed that in tests across different PCs, he was able to find and exploit BIOS vulnerabilities across 80 per cent of machines he had access to and he could find flaws in the remaining 10 per cent.

“There are protections in place that are supposed to prevent you from flashing the BIOS and we’ve essentially automated a way to find vulnerabilities in this process to allow us to bypass them. It turns out bypassing the protections is pretty easy as well,” added Kallenberg.

The NSA has a term for vulnerabilities it thinks are exclusive to it: NOBUS, for “nobody but us.” Turns out that NOBUS is a flawed concept. As I keep saying: “Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.” By continuing to exploit these vulnerabilities rather than fixing them, the NSA is keeping us all vulnerable.

Two Slashdot threads. Hacker News thread. Reddit thread.

EDITED TO ADD (3/31): Slides from the CanSecWest presentation. The bottom line is that there are some pretty huge BIOS insecurities out there. We as a community and industry need to figure out how to regularly patch our BIOSes.

Posted on March 23, 2015 at 7:07 AM

The Changing Economics of Surveillance

Cory Doctorow examines the changing economics of surveillance and what it means:

The Stasi employed one snitch for every 50 or 60 people it watched. We can’t be sure of the size of the entire Five Eyes global surveillance workforce, but there are only about 1.4 million Americans with Top Secret clearance, and many of them don’t work at or for the NSA, which means that the number is smaller than that (the other Five Eyes states have much smaller workforces than the US). This million-ish person workforce keeps six or seven billion people under surveillance — a ratio approaching 1:10,000. What’s more, the US has only (“only”!) quadrupled its surveillance budget since the end of the Cold War: tooling up to give the spies their toys wasn’t all that expensive, compared to the number of lives that gear lets them pry into.

IT has been responsible for a 2-3 order of magnitude productivity gain in surveillance efficiency. The Stasi used an army to surveil a nation; the NSA uses a battalion to surveil a planet.
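
Using only the figures quoted above, the back-of-the-envelope arithmetic looks like this (a sketch; the workforce and population numbers are the quote's rough estimates, not precise counts):

```python
# Rough arithmetic from the figures quoted above (all of them estimates).
import math

stasi_ratio = 1 / 60            # roughly one snitch per 50-60 people watched
five_eyes_ratio = 1 / 10_000    # a million-ish workforce watching billions

gain = stasi_ratio / five_eyes_ratio
print(f"Per-watcher coverage gain: ~{gain:.0f}x "
      f"(~{math.log10(gain):.1f} orders of magnitude)")
# Consistent with the low end of the quoted 2-3 orders of magnitude.
```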

I am reminded of this paper on the changing economics of surveillance.

Posted on March 12, 2015 at 6:22 AM

Economic Failures of HTTPS Encryption

Interesting paper: “Security Collapse of the HTTPS Market.” From the conclusion:

Recent breaches at CAs have exposed several systemic vulnerabilities and market failures inherent in the current HTTPS authentication model: the security of the entire ecosystem suffers if any of the hundreds of CAs is compromised (weakest link); browsers are unable to revoke trust in major CAs (“too big to fail”); CAs manage to conceal security incidents (information asymmetry); and ultimately customers and end users bear the liability and damages of security incidents (negative externalities).

Understanding the market and value chain for HTTPS is essential to address these systemic vulnerabilities. The market is highly concentrated, with very large price differences among suppliers and limited price competition. Paradoxically, the current vulnerabilities benefit rather than hurt the dominant CAs, because among others, they are too big to fail.
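
To make the “weakest link” failure mode concrete: a TLS client trusts every root CA in its bundle equally, so compromising any one of them endangers every site. A minimal sketch, assuming the third-party certifi package is installed:

```python
# Count the root CAs a typical Python TLS client trusts (via the certifi
# bundle). Any single one of these, if compromised or coerced, can issue a
# certificate for any domain -- the "weakest link" property described above.
# Assumes `pip install certifi`.
import certifi

with open(certifi.where(), encoding="utf-8") as f:
    bundle = f.read()

print("Trusted root CAs in this bundle:", bundle.count("BEGIN CERTIFICATE"))
```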

Posted on November 28, 2014 at 6:26 AM

More Crypto Wars II

FBI Director James Comey again called for an end to secure encryption by putting in a backdoor. Here’s his speech:

There is a misconception that building a lawful intercept solution into a system requires a so-called “back door,” one that foreign adversaries and hackers may try to exploit.

But that isn’t true. We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process — front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.

Cyber adversaries will exploit any vulnerability they find. But it makes more sense to address any security risks by developing intercept solutions during the design phase, rather than resorting to a patchwork solution when law enforcement comes knocking after the fact. And with sophisticated encryption, there might be no solution, leaving the government at a dead end — all in the name of privacy and network security.

I’m not sure why he believes he can have a technological means of access that somehow only works for people of the correct morality with the proper legal documents, but he seems to believe that’s possible. As Jeffrey Vagle and Matt Blaze point out, there’s no technical difference between Comey’s “front door” and a “back door.”
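
One way to see why the two are technically identical: “exceptional access” just means wrapping the content key for one more key-holder, and the code path is the same whether that holder is the device owner, the vendor, or a government agency. A sketch under invented names (not any real product's design), using the third-party cryptography package:

```python
# Sketch only: a "front door" is simply one more wrapped copy of the content
# key. Requires `pip install cryptography`.
from cryptography.fernet import Fernet

def encrypt_for(holders, plaintext):
    """Encrypt plaintext once, then wrap the content key for every key-holder."""
    content_key = Fernet.generate_key()
    ciphertext = Fernet(content_key).encrypt(plaintext)
    wrapped = {name: Fernet(kek).encrypt(content_key) for name, kek in holders.items()}
    return ciphertext, wrapped

holders = {
    "device_owner": Fernet.generate_key(),
    "lawful_intercept": Fernet.generate_key(),  # the "front door": same mechanism
}
ciphertext, wrapped_keys = encrypt_for(holders, b"meet at the usual place")

# Whoever holds a key in `holders` -- owner or agency, the code doesn't care --
# can unwrap the content key and read the message:
content_key = Fernet(holders["lawful_intercept"]).decrypt(wrapped_keys["lawful_intercept"])
print(Fernet(content_key).decrypt(ciphertext))
```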

As in all of these sorts of speeches, Comey gave examples of crimes that could have been solved had only the police been able to decrypt the defendant’s phone. Unfortunately, none of the three stories is true. The Intercept tracked down each story, and none of them is actually a case where encryption foiled an investigation, arrest, or conviction:

In the most dramatic case that Comey invoked — the death of a 2-year-old Los Angeles girl — not only was cellphone data a non-issue, but records show the girl’s death could actually have been avoided had government agencies involved in overseeing her and her parents acted on the extensive record they already had before them.

In another case, of a Louisiana sex offender who enticed and then killed a 12-year-old boy, the big break had nothing to do with a phone: The murderer left behind his keys and a trail of muddy footprints, and was stopped nearby after his car ran out of gas.

And in the case of a Sacramento hit-and-run that killed a man and his girlfriend’s four dogs, the driver was arrested in a traffic stop because his car was smashed up, and immediately confessed to involvement in the incident.

[…]

His poor examples, however, were reminiscent of one cited by Ronald T. Hosko, a former assistant director of the FBI’s Criminal Investigative Division, in a widely cited — and thoroughly debunked — Washington Post opinion piece last month.

In that case, the Post was eventually forced to have Hosko rewrite the piece, with the following caveat appended:

Editors note: This story incorrectly stated that Apple and Google’s new encryption rules would have hindered law enforcement’s ability to rescue the kidnap victim in Wake Forest, N.C. This is not the case. The piece has been corrected.

Hadn’t Comey found anything better since then? In a question-and-answer session after his speech, Comey both denied trying to use scare stories to make his point — and admitted that he had launched a nationwide search for better ones, to no avail.

This is important. All the FBI talk about “going dark” and losing the ability to solve crimes is absolute bullshit. There is absolutely no evidence, either statistically or even anecdotally, that criminals are going free because of encryption.

So why are we even discussing the possibility of forcing companies to provide insecure encryption to their users and customers?

The EFF points out that companies are protected by law from being required to provide insecure security to make the FBI happy.

Sadly, I don’t think this is going to go away anytime soon.

My first post on these new Crypto Wars is here.

Posted on October 21, 2014 at 6:17 AM

Irrational Fear of Risks Against Our Children

There’s a horrible story of a South Carolina mother arrested for letting her 9-year-old daughter play alone at a park while she was at work. The article linked to another article about a woman convicted of “contributing to the delinquency of a minor” for leaving her 4-year-old son in the car for a few minutes. That article contains some excellent commentary by the very sensible Free Range Kids blogger Lenore Skenazy:

“Listen,” she said at one point. “Let’s put aside for the moment that by far, the most dangerous thing you did to your child that day was put him in a car and drive someplace with him. About 300 children are injured in traffic accidents every day — and about two die. That’s a real risk. So if you truly wanted to protect your kid, you’d never drive anywhere with him. But let’s put that aside. So you take him, and you get to the store where you need to run in for a minute and you’re faced with a decision. Now, people will say you committed a crime because you put your kid ‘at risk.’ But the truth is, there’s some risk to either decision you make.” She stopped at this point to emphasize, as she does in much of her analysis, how shockingly rare the abduction or injury of children in non-moving, non-overheated vehicles really is. For example, she insists that statistically speaking, it would likely take 750,000 years for a child left alone in a public space to be snatched by a stranger. “So there is some risk to leaving your kid in a car,” she argues. “It might not be statistically meaningful but it’s not nonexistent. The problem is,” she goes on, “there’s some risk to every choice you make. So, say you take the kid inside with you. There’s some risk you’ll both be hit by a crazy driver in the parking lot. There’s some risk someone in the store will go on a shooting spree and shoot your kid. There’s some risk he’ll slip on the ice on the sidewalk outside the store and fracture his skull. There’s some risk no matter what you do. So why is one choice illegal and one is OK? Could it be because the one choice inconveniences you, makes your life a little harder, makes parenting a little harder, gives you a little less time or energy than you would have otherwise had?”

Later on in the conversation, Skenazy boils it down to this. “There’s been this huge cultural shift. We now live in a society where most people believe a child can not be out of your sight for one second, where people think children need constant, total adult supervision. This shift is not rooted in fact. It’s not rooted in any true change. It’s imaginary. It’s rooted in irrational fear.”

Skenazy has some choice words about the South Carolina story as well:

But, “What if a man would’ve come and snatched her?” said a woman interviewed by the TV station.

To which I must ask: In broad daylight? In a crowded park? Just because something happened on Law & Order doesn’t mean it’s happening all the time in real life. Make “what if?” thinking the basis for an arrest and the cops can collar anyone. “You let your son play in the front yard? What if a man drove up and kidnapped him?” “You let your daughter sleep in her own room? What if a man climbed through the window?” etc.

These fears pop into our brains so easily, they seem almost real. But they’re not. Our crime rate today is back to what it was when gas was 29 cents a gallon, according to The Christian Science Monitor. It may feel like kids are in constant danger, but they are as safe (if not safer) than we were when our parents let us enjoy the summer outside, on our own, without fear of being arrested.

Yes.

Posted on August 11, 2014 at 9:34 AM

Disclosing vs. Hoarding Vulnerabilities

There’s a debate going on about whether the US government — specifically, the NSA and United States Cyber Command — should stockpile Internet vulnerabilities or disclose and fix them. It’s a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.

A software vulnerability is a programming mistake that allows an adversary access into a system. Heartbleed is a recent example, but hundreds are discovered every year.

Unpublished vulnerabilities are called “zero-day” vulnerabilities, and they’re very valuable because no one is protected. Someone with one of those can attack systems world-wide with impunity.

When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn’t make the vulnerability go away, but most users protect themselves by patching their systems regularly.

Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn’t even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software’s vendor finds out — the timing depends on how extensively the vulnerability is used — and issues a patch to close the vulnerability.

If an offensive military cyber unit — or a cyber-weapons manufacturer — discovers the vulnerability, it keeps that vulnerability secret for use in a cyber-weapon. If it is used stealthily, it might remain secret for a long time. If unused, it’ll remain secret until someone else discovers it.

Discoverers can sell vulnerabilities. There’s a rich market in zero-days for attack purposes — both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incent defense, but the amounts are much lower.

The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability — or of the vulnerability becoming public and criminals starting to use it.

There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, “every offensive weapon is a (potential) chink in our defense — and vice versa.”

To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won’t have any cyber-weapons to use against other countries.

Many people have weighed in on this debate. The president’s Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.

It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.

If vulnerabilities are sparse, then it’s obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.

If vulnerabilities are plentiful — and this seems to be true — the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won’t make it appreciably harder for criminals to find the next one. We don’t really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.

But while vulnerabilities are plentiful, they’re not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when a person finds a vulnerability, it is likely that another person soon will, or recently has, found the same vulnerability. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.
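
A toy model — my illustration, not part of the essay — of why independent rediscovery matters: if outside researchers stumble on a given bug at some steady average rate, the chance that someone else finds a hoarded vulnerability grows quickly with time.

```python
# Toy model, not data: assume independent discoveries of a given bug arrive as
# a Poisson process with rate `lam` per year. Then the probability that someone
# else rediscovers a hoarded vulnerability within t years is 1 - exp(-lam * t).
import math

lam = 0.5  # assumed: on average one independent discovery every two years
for t in (1, 2, 5):
    p = 1 - math.exp(-lam * t)
    print(f"P(rediscovered within {t} year(s)) = {p:.0%}")
```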

The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, “nobody but us.” The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some — we don’t know how many — vulnerabilities that “nobody but us” could find for attack purposes.

This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don’t know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.

Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are not hoarding a small number, but more like 100 or more.

There’s one more interesting wrinkle. Cyber-weapons are a combination of a payload — the damage the weapon does — and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won’t know about. But if it doesn’t, it’s deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we’re nowhere near that point today.

The implications of US policy can be felt on a variety of levels. The NSA’s actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we’re putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.

An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable — North Korea much less — so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn’t disarmament; it’s making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.

Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that’s what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.

We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, the NSA would have to make the tools it uses to automatically find vulnerabilities available for defense, not attack.

In today’s cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world’s militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.

This essay previously appeared on TheAtlantic.com.

Posted on May 22, 2014 at 6:15 AM
