Entries Tagged "SCADA"


More Stuxnet News

This long New York Times article includes some interesting revelations. The article claims that Stuxnet was a joint Israeli-American project, and that its effectiveness was tested on live equipment: “Behind Dimona’s barbed wire, the experts say, Israel has spun nuclear centrifuges virtually identical to Iran’s at Natanz, where Iranian scientists are struggling to enrich uranium.”

The worm itself now appears to have included two major components. One was designed to send Iran’s nuclear centrifuges spinning wildly out of control. Another seems right out of the movies: The computer program also secretly recorded what normal operations at the nuclear plant looked like, then played those readings back to plant operators, like a pre-recorded security tape in a bank heist, so that it would appear that everything was operating normally while the centrifuges were actually tearing themselves apart.
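
The replay half is easy to describe in code. Here is a minimal, purely illustrative sketch of the idea (not Stuxnet’s actual implementation, and every name below is hypothetical): record a window of readings while the process runs normally, then loop those stale readings back to the operator display while the real process is manipulated.

    from collections import deque
    from itertools import cycle, islice

    RECORD_SAMPLES = 1000  # hypothetical size of the "normal" window

    def record_normal(read_sensor):
        # Capture readings while the process is still behaving normally.
        return deque(read_sensor() for _ in range(RECORD_SAMPLES))

    def mask_sabotage(recording, write_display, samples):
        # Feed the stale recording to the operator console; whatever the
        # plant is actually doing, the display shows old, normal data.
        for sample in islice(cycle(recording), samples):
            write_display(sample)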

My two previous Stuxnet posts. And an alternate theory: The Chinese did it.

EDITED TO ADD (2/12): More opinions on Stuxnet.

Posted on January 17, 2011 at 12:31 PM

Cyberwar and the Future of Cyber Conflict

The world is gearing up for cyberwar. The U.S. Cyber Command became operational in November. NATO has enshrined cyber security among its new strategic priorities. The head of Britain’s armed forces said recently that boosting cyber capability is now a huge priority for the UK. And we know China is already engaged in broad cyber espionage attacks against the west. So how can we control a burgeoning cyber arms race?

We may already have seen early versions of cyberwars in Estonia and Georgia, possibly perpetrated by Russia. It’s hard to know for certain, not only because such attacks are often impossible to trace, but because we have no clear definitions of what a cyberwar actually is.

Do the 2007 attacks against Estonia, traced to a young Russian man living in Tallinn and no one else, count? What about a virus from an unknown origin, possibly targeted at an Iranian nuclear complex? Or espionage from within China, but not specifically directed by its government? To such questions one must add even more basic issues, like when a cyberwar is understood to have begun, and how it ends. When even cyber security experts can’t answer these questions, it’s hard to expect much from policymakers.

We can set parameters. It is obviously not an act of war just to develop digital weapons targeting another country. Using cyber attacks to spy on another nation is a grey area, which gets greyer still when a country penetrates information networks, just to see if it can do so. Penetrating such networks and leaving a back door open, or even leaving logic bombs behind to be used later, is a harder case—yet the US and China are doing this to each other right now.

And what about when one country deliberately damages the economy of another, as one of the WikiLeaks cables shows a member of China’s politburo did against Google in January 2010? Definitions and rules are hard not just because the tools of war have changed, but because cyberspace puts them into the hands of a broader group of people. Previously only the military had weapons. Now anyone with sufficient computer skills can take matters into their own hands.

There are more basic problems too. When a nation is attacked in a regular conflict, a variety of military and civil institutions respond. The legal framework for this depends on two things: the attacker and the motive. But when you’re attacked on the internet, those are precisely the two things you don’t know. We don’t know if Georgia was attacked by the Russian government, or just some hackers living in Russia. In spite of much speculation, we don’t know the origin, or target, of Stuxnet. We don’t even know if last July 4’s attacks against US and South Korean computers originated in North Korea, China, England, or Florida.

When you don’t know, it’s easy to get it wrong: to retaliate against the wrong target, or for the wrong reason. That means it is easy for things to get out of hand. So while it is legitimate for nations to build offensive and defensive cyberwar capabilities, we also need to think now about what can be done to limit the risk of cyberwar.

A first step would be a hotline between the world’s cyber commands, modelled after similar hotlines among nuclear commands. This would at least allow governments to talk to each other, rather than guess where an attack came from. More difficult, but more important, are new cyberwar treaties. These could stipulate a no-first-use policy, outlaw unaimed weapons, or mandate weapons that self-destruct at the end of hostilities. The Geneva Conventions need to be updated too.

Cyber weapons beg to be used, so limits on stockpiles, and restrictions on tactics, are a logical end point. International banking, for instance, could be declared off-limits. Whatever the specifics, such agreements are badly needed. Enforcement will be difficult, but that’s not a reason not to try. It’s not too late to reverse the cyber arms race currently under way. Otherwise, it is only a matter of time before something big happens: perhaps by the rash actions of a low level military officer, perhaps by a non-state actor, perhaps by accident. And if the target nation retaliates, we could actually find ourselves in a cyberwar.

This essay was originally published in the Financial Times (free registration required for access, or search on Google News).

Posted on December 6, 2010 at 6:42 AM

Stuxnet News

Another piece of the puzzle:

New research, published late last week, has established that Stuxnet searches for frequency converter drives made by Fararo Paya of Iran and Vacon of Finland. In addition, Stuxnet is only interested in frequency converter drives that operate at very high speeds, between 807 Hz and 1210 Hz.

The malware is designed to change the output frequencies of drives, and therefore the speed of associated motors, for short intervals over periods of months. This would effectively sabotage the operation of infected devices while creating intermittent problems that are that much harder to diagnose.

Low-harmonic frequency converter drives that operate at over 600 Hz are regulated for export in the US by the Nuclear Regulatory Commission as they can be used for uranium enrichment. They may have other applications but would certainly not be needed to run a conveyor belt at a factory, for example.
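
Reduced to pseudocode, the reported check is a narrow whitelist: wrong vendor or wrong speed, and the worm ignores the drive entirely. A hedged sketch of the condition as described above (the real code reportedly matches Profibus device identifiers rather than vendor names, so the names here are stand-ins):

    TARGET_VENDORS = {"Fararo Paya", "Vacon"}  # per the published research
    MIN_HZ, MAX_HZ = 807, 1210                 # reported frequency window

    def is_target_drive(vendor, output_hz):
        # Only very specific, very fast drives are of any interest.
        return vendor in TARGET_VENDORS and MIN_HZ <= output_hz <= MAX_HZ

    print(is_target_drive("Vacon", 1064))  # True: inside the window
    print(is_target_drive("Vacon", 50))    # False: an ordinary motor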

The threat of Stuxnet variants is being used to scare senators.

Me on Stuxnet.

Posted on November 22, 2010 at 6:19 AM

Stuxnet

Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

As the story goes, the Stuxnet worm was designed and released by a government—the U.S. and Israel are the most common suspects—specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines—and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location—and that Stuxnet’s authors knew exactly what they were targeting.
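
That “find the exact target or do nothing” logic is worth pausing on, because it is the opposite of how ordinary malware behaves. A minimal sketch of the control flow, with entirely hypothetical names and fingerprint scheme:

    # Illustrative only: the names and the fingerprint check are invented.
    def run_payload(installed_software, plc_fingerprint, target_fingerprint):
        if "SIMATIC WinCC/Step 7" not in installed_software:
            return "dormant"   # wrong software: do nothing at all
        if plc_fingerprint != target_fingerprint:
            return "dormant"   # right software, wrong plant: still nothing
        return "sabotage"      # act only on the exact configuration sought

    print(run_payload(["SIMATIC WinCC/Step 7"], "plant-A", "plant-B"))  # dormant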

It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

All the anti-virus programs detect and remove Stuxnet from Windows systems.

Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own driver into Windows. Drivers have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

Over time the attackers swapped out modules that didn’t work and replaced them with new ones—perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.
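
The update logic and the kill date are both simple to express. A sketch under the obvious assumptions (version numbers are comparable; the real peer-to-peer exchange reportedly runs over Windows RPC):

    from datetime import date

    KILL_DATE = date(2012, 6, 24)  # stop spreading and self-delete after this

    def sync_version(mine, peer):
        # When two infections meet, both end up with the newest payload.
        return max(mine, peer)

    def should_keep_spreading(today=None):
        return (today or date.today()) < KILL_DATE

    print(sync_version(7, 9))                       # -> 9: both adopt version 9
    print(should_keep_spreading(date(2013, 1, 1)))  # -> False: past the kill date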

We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup—surely any organization that goes to all this trouble would test the thing before releasing it—and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates—India, Indonesia, and Pakistan—are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things—large and small—that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.
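
Infection markers like this are a standard way for a worm to avoid reinfecting a machine, and they are trivial to implement. A hedged sketch on Windows (the key path and value name below are invented; only the 19790509 value comes from the published analyses):

    import winreg  # Windows-only standard-library module

    KEY_PATH = r"SOFTWARE\Example\Marker"  # hypothetical location
    MARKER = 19790509                      # the date-like value researchers found

    def already_infected():
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
                value, _ = winreg.QueryValueEx(key, "flag")
                return value == MARKER
        except OSError:
            return False  # key absent: this machine is not marked

    def mark_infected():
        with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            winreg.SetValueEx(key, "flag", 0, winreg.REG_DWORD, MARKER)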

Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific sabotage seems to be the most likely explanation.

If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to only spread to three additional computers and to erase itself after 21 days—but it doesn’t do that. A mistake in programming, or a feature in the code not enabled? Maybe we’re not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors committed collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.
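
For what it’s worth, the spread limits described in the analyses amount to two counters carried with each copy. A sketch of the intended (but, apparently, not enforced) logic, with hypothetical names:

    from datetime import date, timedelta

    MAX_USB_INFECTIONS = 3         # spread to at most three machines per stick
    LIFETIME = timedelta(days=21)  # after which a copy should erase itself

    def may_spread(infections_so_far, installed_on, today=None):
        today = today or date.today()
        expired = today - installed_on >= LIFETIME
        return not expired and infections_so_far < MAX_USB_INFECTIONS

    # A copy installed 30 days ago should refuse to spread at all.
    print(may_spread(1, date(2010, 1, 1), today=date(2010, 1, 31)))  # False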

My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

This essay originally appeared on Forbes.com.

My alternate explanations for Stuxnet were cut from the essay. Here they are:

  • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
  • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
  • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
  • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

Note that some of these alternate explanations overlap.

EDITED TO ADD (10/7): Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

Posted on October 7, 2010 at 9:56 AM

The Stuxnet Worm

It’s impressive:

The Stuxnet worm is a “groundbreaking” piece of malware so devious in its use of unpatched vulnerabilities, so sophisticated in its multipronged approach, that the security researchers who tore it apart believe it may be the work of state-backed professionals.

“It’s amazing, really, the resources that went into this worm,” said Liam O Murchu, manager of operations with Symantec’s security response team.

“I’d call it groundbreaking,” said Roel Schouwenberg, a senior antivirus researcher at Kaspersky Lab. In comparison, other notable attacks, like the one dubbed Aurora that hacked Google’s network and those of dozens of other major companies, were child’s play.

EDITED TO ADD (9/22): Here’s an interesting theory:

By August, researchers had found something more disturbing: Stuxnet appeared to be able to take control of the automated factory control systems it had infected – and do whatever it was programmed to do with them. That was mischievous and dangerous.

But it gets worse. Since reverse engineering chunks of Stuxnet’s massive code, senior US cyber security experts confirm what Mr. Langner, the German researcher, told the Monitor: Stuxnet is essentially a precision, military-grade cyber missile deployed early last year to seek out and destroy one real-world target of high importance – a target still unknown.

The article speculates that the target is Iran’s Bushehr nuclear power plant, but there’s not much in the way of actual evidence to support that.

Some more articles.

Posted on September 22, 2010 at 6:25 AM

Internet Worm Targets SCADA

Stuxnet is a new Internet worm that specifically targets Siemens WinCC SCADA systems, which are used to control production at industrial plants such as oil rigs, refineries, and electronics factories. The worm seems to upload plant information (schematics and production data) to an external website. Moreover, owners of these SCADA systems cannot change the default password, because doing so would cause the software to break.
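
The unchangeable password is the classic hard-coded-credentials anti-pattern: the secret is baked into the software, every installation shares it, and changing it on one side breaks the product. A generic illustration (not WinCC’s actual code or credentials):

    # Anti-pattern sketch: credentials compiled into the application itself.
    DB_USER = "plant_app"            # hypothetical
    DB_PASSWORD = "factory-default"  # hypothetical; identical on every install

    def connect(open_connection):
        # The application can only authenticate with the baked-in secret, so
        # if an operator changes the database password, it stops working.
        return open_connection(user=DB_USER, password=DB_PASSWORD)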

Posted on July 23, 2010 at 8:59 AM

U.S. Power Grid Hacked, Everyone Panic!

Yesterday I talked to at least a dozen reporters about this breathless Wall Street Journal story:

Cyberspies have penetrated the U.S. electrical grid and left behind software programs that could be used to disrupt the system, according to current and former national-security officials.

The spies came from China, Russia and other countries, these officials said, and were believed to be on a mission to navigate the U.S. electrical system and its controls. The intruders haven’t sought to damage the power grid or other key infrastructure, but officials warned they could try during a crisis or war.

“The Chinese have attempted to map our infrastructure, such as the electrical grid,” said a senior intelligence official. “So have the Russians.”

[…]

Authorities investigating the intrusions have found software tools left behind that could be used to destroy infrastructure components, the senior intelligence official said. He added, “If we go to war with them, they will try to turn them on.”

Officials said water, sewage and other infrastructure systems also were at risk.

“Over the past several years, we have seen cyberattacks against critical infrastructures abroad, and many of our own infrastructures are as vulnerable as their foreign counterparts,” Director of National Intelligence Dennis Blair recently told lawmakers. “A number of nations, including Russia and China, can disrupt elements of the U.S. information infrastructure.”

Read the whole story; there aren’t really any facts in it. I don’t know what’s going on; maybe it’s just budget season and someone is jockeying for a bigger slice.

Honestly, I am much more worried about random errors and undirected worms in the computers running our infrastructure than I am about the Chinese military. I am much more worried about criminal hackers than I am about government hackers. I wrote about the risks to our infrastructure here, and about Chinese hacking here.

And I wrote about last year’s reports of international hacking of our SCADA control systems here.

Posted on April 9, 2009 at 12:02 PM

Hacking Power Networks

The CIA unleashed a big one at a SANS conference:

On Wednesday, in New Orleans, US Central Intelligence Agency senior analyst Tom Donahue told a gathering of 300 US, UK, Swedish, and Dutch government officials and engineers and security managers from electric, water, oil & gas and other critical industry asset owners from all across North America, that “We have information, from multiple regions outside the United States, of cyber intrusions into utilities, followed by extortion demands. We suspect, but cannot confirm, that some of these attackers had the benefit of inside knowledge. We have information that cyber attacks have been used to disrupt power equipment in several regions outside the United States. In at least one case, the disruption caused a power outage affecting multiple cities. We do not know who executed these attacks or why, but all involved intrusions through the Internet.”

According to Mr. Donahue, the CIA actively and thoroughly considered the benefits and risks of making this information public, and came down on the side of disclosure.

I’ll bet. There’s nothing like a vague, unsubstantiated rumor to forestall reasoned discussion. But, of course, everyone is writing about it anyway.

SANS’s Alan Paller is happy to add details:

In the past two years, hackers have in fact successfully penetrated and extorted multiple utility companies that use SCADA systems, says Alan Paller, director of the SANS Institute, an organization that hosts a crisis center for hacked companies. “Hundreds of millions of dollars have been extorted, and possibly more. It’s difficult to know, because they pay to keep it a secret,” Paller says. “This kind of extortion is the biggest untold story of the cybercrime industry.”

And to up the fear factor:

The prospect of cyberattacks crippling multicity regions appears to have prompted the government to make this information public. The issue “went from ‘we should be concerned about this’ to ‘this is something we should fix now,’” said Paller. “That’s why, I think, the government decided to disclose this.”

More rumor:

An attendee of the meeting said that the attack was not well-known through the industry and came as a surprise to many there. Said the person who asked to remain anonymous, “There were apparently a couple of incidents where extortionists cut off power to several cities using some sort of attack on the power grid, and it does not appear to be a physical attack.”

And more hyperbole from someone in the industry:

Over the past year to 18 months, there has been “a huge increase in focused attacks on our national infrastructure networks, . . . and they have been coming from outside the United States,” said Ralph Logan, principal of the Logan Group, a cybersecurity firm.

It is difficult to track the sources of such attacks, because they are usually made by people who have disguised themselves by worming into three or four other computer networks, Logan said. He said he thinks the attacks were launched from computers belonging to foreign governments or militaries, not terrorist groups.

I’m more than a bit skeptical here. To be sure—fake staged attacks aside—there are serious risks to SCADA systems (Ganesh Devarajan gave a talk at DefCon this year about some potential attack vectors), although at this point I think they’re more a future threat than a present danger. But this CIA tidbit tells us nothing about how the attacks happened. Were they against SCADA systems? Were they against general-purpose computers, maybe Windows machines? Insiders may have been involved, so was this a computer security vulnerability at all? We have no idea.

Cyber-extortion is certainly on the rise; we see it at Counterpane. Primarily it’s against fringe industries—online gambling, online gaming, online porn—operating offshore in countries like Bermuda and the Cayman Islands. It is going mainstream, but this is the first I’ve heard of it targeting power companies. Certainly possible, but is that part of the CIA rumor or was it tacked on afterwards?

And here’s a list of power outages. Which ones were hacker-caused? Some details would be nice.

I’d like a little bit more information before I start panicking.

EDITED TO ADD (1/23): Slashdot thread.

Posted on January 22, 2008 at 2:24 PM

Staged Attack Causes Generator to Self-Destruct

I assume you’ve all seen the news:

A government video shows the potential destruction caused by hackers seizing control of a crucial part of the U.S. electrical grid: an industrial turbine spinning wildly out of control until it becomes a smoking hulk and power shuts down.

The video, produced for the Homeland Security Department and obtained by The Associated Press on Wednesday, was marked “Official Use Only.” It shows commands quietly triggered by simulated hackers having such a violent reaction that the enormous turbine shudders as pieces fly apart and it belches black-and-white smoke.

The video was produced for top U.S. policy makers by the Idaho National Laboratory, which has studied the little-understood risks to the specialized electronic equipment that operates power, water and chemical plants. Vice President Dick Cheney is among those who have watched the video, said one U.S. official, speaking on condition of anonymity because this official was not authorized to publicly discuss such high-level briefings.

More here. And the video is on CNN.com.

I haven’t written much about SCADA security, except to say that I think the risk is overblown today but is getting more serious all the time—and we need to deal with the security before it’s too late. I didn’t know quite what to make of the Idaho National Laboratory video; it seemed like hype, but I couldn’t find any details. (The CNN headline, “Mouse click could plunge city into darkness, experts say,” was definitely hype.)

Then, I received this anonymous e-mail:

I was one of the industry technical folks the DHS consulted in developing the “immediate and required” mitigation strategies for this problem.

They talked to several industry groups (mostly management not tech folks): electric, refining, chemical, and water. They ignored most of what we said but attached our names to the technical parts of the report to make it look credible. We softened or eliminated quite a few sections that may have had relevance 20 years ago, such as war dialing attacks against modems.

The end product is a work order document from DHS which requires such things as background checks on people who have access to modems and logging their visits to sites with datacom equipment or control systems.

By the way—they were unable to hurt the generator you see in the video but did destroy the shaft that drives it and the power unit. They triggered the event from 30 miles away! Then they extrapolated the theory that a malfunctioning generator can destroy not only generators at the power company but the power glitches on the grid would destroy motors many miles away on the electric grid that pump water or gasoline (through pipelines).

They kept everything very secret (all emails and reports encrypted, high security meetings in DC) until they produced a video and press release for CNN. There was huge concern by DHS that this vulnerability would become known to the bad guys—yet now they release it to the world for their own career reasons. Beyond shameful.

Oh, and they did use a contractor for all the heavy lifting that went into writing/revising the required mitigations document. Could not even produce this work product on their own.

By the way, the vulnerability they hypothesize is completely bogus but I won’t say more about the details. Gitmo is still too hot for me this time of year.

Posted on October 2, 2007 at 6:26 AM

Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM
