Entries Tagged "sabotage"

Journal Article on Cyberwar

From the Journal of Strategic Studies: “Cyber War Will Not Take Place”:

Abstract: For almost two decades, experts and defense establishments the world over have been predicting that cyber war is coming. But is it? This article argues in three steps that cyber war has never happened in the past, that cyber war does not take place in the present, and that it is unlikely that cyber war will occur in the future. It first outlines what would constitute cyber war: a potentially lethal, instrumental, and political act of force conducted through malicious code. The second part shows what cyber war is not, case-by-case. Not one single cyber offense on record constitutes an act of war on its own. The final part offers a more nuanced terminology to come to terms with cyber attacks. All politically motivated cyber attacks are merely sophisticated versions of three activities that are as old as warfare itself: sabotage, espionage, and subversion.

Here’s another article: “The Non-Existent ‘Cyber War’ Is Nothing More Than A Push For More Government Control.”

EDITED TO ADD (11/4): A reader complained to the publication, and they removed the paywall from the first article.

Posted on November 3, 2011 at 1:22 PM

Another ATM Theft Tactic

This brazen tactic is from Malaysia. Robbers sabotage the machines, and then report the damage to the bank. When the bank sends repair technicians to open and repair the machines, the robbers take the money at gunpoint.

It’s hardly a technology-related attack. But from what I know about ATMs, the security of the money safe inside the machine is separate from the security of the rest of the machine. So it seems the repair technicians might be given access only to the machine itself, and not to the safe inside.

Posted on October 31, 2011 at 8:18 AM

Attacking PLCs Controlling Prison Doors

Embedded system vulnerabilities in prisons:

Some of the same vulnerabilities that the Stuxnet superworm used to sabotage centrifuges at a nuclear plant in Iran exist in the country’s top high-security prisons, according to security consultant and engineer John Strauchs, who plans to discuss the issue and demonstrate an exploit against the systems at the DefCon hacker conference next week in Las Vegas.

Strauchs, who says he engineered or consulted on electronic security systems in more than 100 prisons, courthouses and police stations throughout the U.S., including eight maximum-security prisons, says the prisons use programmable logic controllers to control locks on cells and other facility doors and gates. PLCs are the same devices that Stuxnet exploited to attack centrifuges in Iran.

This seems like a minor risk today; Stuxnet was a military-grade effort, and beyond the reach of your typical criminal organization. But that will change as people study and learn from the reverse-engineered Stuxnet code, and as hacking PLCs becomes more common.

As we move from mechanical, or even electro-mechanical, systems to digital systems, and as we network those digital systems, this sort of vulnerability is only going to become more common.

Posted on August 2, 2011 at 6:23 AM

Never Let the Terrorists Know How We're Storing Road Salt

This seems not to be a joke:

The American Civil Liberties Union has filed a lawsuit against the state after it refused to release the construction plans for a barn used to store road salt, on the basis that doing so would be a security risk.

[…]

Chiaffarano filed an OPRA request for the state’s building plans, but her request was denied, as the state cited a 2002 executive order by Gov. James McGreevey.

The order, issued in the wake of the Sept. 11 terrorist attacks on the World Trade Center and the Pentagon, allows the state to decline the release of public records that would compromise the state’s ability to “protect and defend the state and its citizens against acts of sabotage or terrorism.”

Lisa Ryan, spokeswoman for the Department of Community Affairs, declined to comment on the pending lawsuit.

Posted on December 8, 2010 at 2:27 PM

Stuxnet News

Another piece of the puzzle:

New research, published late last week, has established that Stuxnet searches for frequency converter drives made by Fararo Paya of Iran and Vacon of Finland. In addition, Stuxnet is only interested in frequency converter drives that operate at very high speeds, between 807 Hz and 1210 Hz.

The malware is designed to change the output frequencies of drives, and therefore the speed of associated motors, for short intervals over periods of months. This would effectively sabotage the operation of infected devices while creating intermittent problems that are that much harder to diagnose.

Low-harmonic frequency converter drives that operate at over 600 Hz are regulated for export in the US by the Nuclear Regulatory Commission as they can be used for uranium enrichment. They may have other applications but would certainly not be needed to run a conveyor belt at a factory, for example.
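To picture the behavior the researchers describe, here is a minimal Python sketch. It is purely illustrative: the setpoint values and the glitch schedule are invented for the example, and none of this is the worm's actual code. The point is how brief, periodic excursions from a normal setpoint would produce intermittent faults rather than an obvious failure:

```python
NORMAL_HZ = 1064        # hypothetical normal setpoint, inside the 807-1210 Hz band
ATTACK_HZ = (1410, 2)   # hypothetical bad setpoints: far too fast, then near-stall

def drive_setpoint(day: int) -> int:
    """Return the drive setpoint for a given day of operation.

    The drive runs normally most of the time; every few weeks the
    setpoint is briefly pushed outside its normal band and then
    restored, producing intermittent problems that are hard to trace
    back to the controller.
    """
    if day % 27 in (0, 1):          # invented schedule: a two-day glitch roughly monthly
        return ATTACK_HZ[day % 2]
    return NORMAL_HZ

if __name__ == "__main__":
    for day in range(60):
        print(day, drive_setpoint(day))
```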

The threat of Stuxnet variants is being used to scare senators.

Me on Stuxnet.

Posted on November 22, 2010 at 6:19 AM

Stuxnet

Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

As the story goes, the Stuxnet worm was designed and released by a government—the U.S. and Israel are the most common suspects—specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines—and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location—and that Stuxnet’s authors knew exactly what they were targeting.

It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

All the anti-virus programs detect and remove Stuxnet from Windows systems.

Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own driver into Windows. Windows drivers have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

Over time the attackers swapped out modules that didn’t work and replaced them with new ones—perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.
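The peer-to-peer piece is conceptually simple. Here’s a rough Python sketch of the idea, under my own assumptions about what gets exchanged (a version number and a payload); it is not Stuxnet’s actual protocol:

```python
from dataclasses import dataclass

@dataclass
class Infection:
    version: int      # monotonically increasing build number (assumed)
    payload: bytes    # the code/configuration that would be propagated

def p2p_sync(a: Infection, b: Infection) -> None:
    """When two infections meet, both end up with the newer payload."""
    if a.version > b.version:
        b.version, b.payload = a.version, a.payload
    elif b.version > a.version:
        a.version, a.payload = b.version, b.payload
    # equal versions: nothing to do

# Example: an older infection meets a newer one and gets upgraded.
old = Infection(version=1, payload=b"v1")
new = Infection(version=2, payload=b"v2")
p2p_sync(old, new)
assert old.version == 2 and old.payload == b"v2"
```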

We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup—surely any organization that goes to all this trouble would test the thing before releasing it—and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates—India, Indonesia, and Pakistan—are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things—large and small—that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.
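The mechanism itself is mundane: a sentinel value that later copies check before reinstalling. A minimal sketch of the idea, with a plain dictionary standing in for the Windows registry and an invented value name, looks like this:

```python
MARKER_NAME = "infection_flag"    # stands in for the registry value name (invented)
MARKER_VALUE = "19790509"         # the sentinel value reported in the analyses

def already_infected(registry: dict) -> bool:
    """Check the sentinel before doing anything else."""
    return registry.get(MARKER_NAME) == MARKER_VALUE

def mark_infected(registry: dict) -> None:
    """Set the sentinel so later copies skip this machine."""
    registry[MARKER_NAME] = MARKER_VALUE

# A dictionary stands in for the Windows registry in this sketch.
registry = {}
if not already_infected(registry):
    mark_infected(registry)
assert already_infected(registry)
```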

Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific act of sabotage seems to be the most likely explanation.

If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to spread to only three additional computers and to erase itself after 21 days—but it doesn’t do that. A mistake in programming, or a feature in the code not enabled? Maybe we’re not supposed to be able to reverse-engineer the target. By allowing Stuxnet to spread globally, its authors committed collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.
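For reference, the limits described above would be trivial to enforce, which is part of why their apparent absence is puzzling. A sketch of what they would look like (the dates and the counter are illustrative; nothing here is taken from the actual code):

```python
from datetime import date, timedelta

MAX_CHILD_INFECTIONS = 3          # spread limit described in the analyses
LIFETIME = timedelta(days=21)     # self-erase window described in the analyses

def should_spread(children_infected: int, installed_on: date, today: date) -> bool:
    """A USB-borne copy would stop spreading after three machines or 21 days."""
    return (children_infected < MAX_CHILD_INFECTIONS
            and today - installed_on < LIFETIME)

def should_self_erase(installed_on: date, today: date) -> bool:
    """After the 21-day window, the copy would delete itself."""
    return today - installed_on >= LIFETIME

# Example: a copy installed on 1 July could still spread on 10 July,
# and would erase itself by 22 July.
print(should_spread(1, date(2010, 7, 1), date(2010, 7, 10)))   # True
print(should_self_erase(date(2010, 7, 1), date(2010, 7, 22)))  # True
```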

My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

This essay originally appeared on Forbes.com.

My alternate explanations for Stuxnet were cut from the essay. Here they are:

  • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
  • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
  • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
  • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

Note that some of these alternate explanations overlap.

EDITED TO ADD (10/7): Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

Posted on October 7, 2010 at 9:56 AM

Vigilant Citizens: Then vs. Now

This is from Atomic Bombing: How to Protect Yourself, published in 1950:

Of course, millions of us will go through our lives never seeing a spy or a saboteur going about his business. Thousands of us may, at one time or another, think we see something like that. Only hundreds will be right. It would be foolish for all of us to see enemy agents lurking behind every tree, to become frightened of our own shadows and report them to the F.B.I.

But we are citizens, we might see something which might be useful to the F.B.I. and it is our duty to report what we see. It is also our duty to know what is useful to the F.B.I. and what isn’t.

[…]

If you think your neighbor has “radical” views—that is none of your or the F.B.I.’s business. After all, it is the difference in views of our citizens, from the differences between Jefferson and Hamilton to the differences between Truman and Dewey, which have made our country strong.

But if you see your neighbor—and the views he expresses might seem to agree with yours completely—commit an act which might lead you to suspect that he might be committing espionage, sabotage or subversion, then report it to the F.B.I.

After that, forget about it. Mr. Hoover also said: “Do not circulate rumors about subversive activities, or draw conclusions from information you furnish the F.B.I. The data you possess might be incomplete or only partially accurate. By drawing conclusions based on insufficient evidence grave injustices might result to innocent persons.”

In other words, you might be wrong. In our system, it takes a court, a trial and a jury to say a man is guilty.

It would be nice if this advice didn’t seem as outdated as the rest of the book.

Posted on July 1, 2010 at 1:05 PM
