Entries Tagged "Stuxnet"


New Version of Flame Malware Discovered

Flame was discovered in 2012; it is linked to Stuxnet and believed to be American in origin. New analysis tools that find relationships between different pieces of software have recently connected it to more modern malware.

It seems that Flame did not disappear after it was discovered, as was previously thought. (Its controllers used a kill switch to disable and erase it.) Instead, it was rewritten and reintroduced.

Note that the article claims that Flame was believed to be Israeli in origin. That’s wrong; most people who have an opinion believe it is from the NSA.

Posted on April 12, 2019 at 6:25 AM

Signed Malware

Stuxnet famously used legitimate digital certificates to sign its malware. A research paper from last year found that the practice is much more common than previously thought:

Now, researchers have presented proof that digitally signed malware is much more common than previously believed. What’s more, it predated Stuxnet, with the first known instance occurring in 2003. The researchers said they found 189 malware samples bearing valid digital signatures that were created using compromised certificates issued by recognized certificate authorities and used to sign legitimate software. In total, 109 of those abused certificates remain valid. The researchers, who presented their findings Wednesday at the ACM Conference on Computer and Communications Security, found another 136 malware samples signed by legitimate CA-issued certificates, although the signatures were malformed.

The results are significant because digitally signed software is often able to bypass User Account Control and other Windows measures designed to prevent malicious code from being installed. Forged signatures also represent a significant breach of trust because certificates provide what’s supposed to be an unassailable assurance to end users that the software was developed by the company named in the certificate and hasn’t been modified by anyone else. The forgeries also allow malware to evade antivirus protections. Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren’t valid.
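The failure mode the researchers describe, treating the mere presence of a signature as proof of legitimacy, can be sketched in a few lines. This toy Python illustration uses an HMAC in place of a CA-issued certificate; real Authenticode signing uses X.509 certificates and PKCS#7, so the key, names, and mechanism here are illustrative only:

```python
import hashlib
import hmac

# Toy stand-in for code signing: a shared HMAC key plays the role of a
# CA-issued signing certificate.
SIGNING_KEY = b"demo-signing-key"

def sign(binary: bytes) -> bytes:
    """Produce a 'signature' over the binary's contents."""
    return hmac.new(SIGNING_KEY, binary, hashlib.sha256).digest()

def naive_av_check(binary: bytes, signature: bytes) -> bool:
    """The flawed behavior: a signature blob being present == trusted."""
    return len(signature) > 0

def proper_verify(binary: bytes, signature: bytes) -> bool:
    """Actually recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(binary), signature)

malware = b"malicious payload"
malformed = b"\x00" * 32  # a malformed signature, valid for nothing

naive_av_check(malware, malformed)  # True: waved through
proper_verify(malware, malformed)   # False: verification catches it
```

A verifier that actually recomputes and compares the signature rejects the malformed sample; a scanner that only checks whether a signature is present does not.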

Posted on February 2, 2018 at 6:38 AM

Duqu Malware Techniques Used by Cybercriminals

Duqu 2.0 is a really impressive piece of malware, related to Stuxnet and probably written by the NSA. One of its security features is that it stays resident in its host’s memory without ever writing persistent files to the system’s drives. Now, this same technique is being used by criminals:

Now, fileless malware is going mainstream, as financially motivated criminal hackers mimic their nation-sponsored counterparts. According to research Kaspersky Lab plans to publish Wednesday, networks belonging to at least 140 banks and other enterprises have been infected by malware that relies on the same in-memory design to remain nearly invisible. Because infections are so hard to spot, the actual number is likely much higher. Another trait that makes the infections hard to detect is the use of legitimate and widely used system administrative and security tools—including PowerShell, Metasploit, and Mimikatz—to inject the malware into computer memory.

[…]

The researchers first discovered the malware late last year, when a bank’s security team found a copy of Meterpreter—an in-memory component of Metasploit—residing inside the physical memory of a Microsoft domain controller. After conducting a forensic analysis, the researchers found that the Meterpreter code was downloaded and injected into memory using PowerShell commands. The infected machine also used Microsoft’s NETSH networking tool to transport data to attacker-controlled servers. To obtain the administrative privileges necessary to do these things, the attackers also relied on Mimikatz. To reduce the evidence left in logs or hard drives, the attackers stashed the PowerShell commands into the Windows registry.
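One concrete detail in that chain is easy to demonstrate. PowerShell’s -EncodedCommand flag accepts a Base64 string of the UTF-16LE-encoded script text, and that is a common form in which such commands end up stashed in registry values (a general observation about the technique, not a claim about Kaspersky’s specific samples). A minimal Python sketch of encoding and recovering one:

```python
import base64

def encode_powershell_command(script: str) -> str:
    """Encode a script the way powershell.exe -EncodedCommand expects:
    Base64 over the UTF-16LE bytes of the script text."""
    return base64.b64encode(script.encode("utf-16-le")).decode("ascii")

def decode_powershell_command(blob: str) -> str:
    """Recover readable script text from an encoded command, as an
    analyst would when pulling a stashed value out of the registry."""
    return base64.b64decode(blob).decode("utf-16-le")

stashed = encode_powershell_command("Write-Output hello")
decode_powershell_command(stashed)  # returns "Write-Output hello"
```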

BoingBoing post.

Posted on February 16, 2017 at 10:28 AM

US Also Tried Stuxnet Against North Korea

According to a Reuters article, the US military tried to launch Stuxnet against North Korea in addition to Iran:

According to one U.S. intelligence source, Stuxnet’s developers produced a related virus that would be activated when it encountered Korean-language settings on an infected machine.

But U.S. agents could not access the core machines that ran Pyongyang’s nuclear weapons program, said another source, a former high-ranking intelligence official who was briefed on the program.

The official said the National Security Agency-led campaign was stymied by North Korea’s utter secrecy, as well as the extreme isolation of its communications systems.
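The language gate the sources describe is simple to model. This hypothetical Python sketch (the function name and logic are illustrative, not taken from any actual sample) arms a payload only when the machine reports a Korean-language setting:

```python
from typing import Optional

def should_activate(lang_code: Optional[str]) -> bool:
    """Hypothetical locale gate: arm only on Korean-language settings
    (e.g. a locale identifier such as 'ko_KR')."""
    return bool(lang_code) and lang_code.lower().startswith("ko")

should_activate("ko_KR")  # True: Korean locale, payload would arm
should_activate("en_US")  # False: payload stays dormant
should_activate(None)     # False: no locale information at all
```

Locale gating of this general kind is a well-documented way for targeted malware to stay dormant everywhere except its intended environment.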

Posted on June 1, 2015 at 6:33 AM

Attack Attribution and Cyber Conflict

The vigorous debate after the Sony Pictures breach pitted the Obama administration against many of us in the cybersecurity community who didn’t buy Washington’s claim that North Korea was the culprit.

What’s both amazing—and perhaps a bit frightening—about that dispute over who hacked Sony is that it happened in the first place.

But what it highlights is the fact that we’re living in a world where we can’t easily tell the difference between a couple of guys in a basement apartment and the North Korean government with an estimated $10 billion military budget. And that ambiguity has profound implications for how countries will conduct foreign policy in the Internet age.

Clandestine military operations aren’t new. Terrorism can be hard to attribute, especially the murky edges of state-sponsored terrorism. What’s different in cyberspace is how easy it is for an attacker to mask his identity—and the wide variety of people and institutions that can attack anonymously.

In the real world, you can often identify the attacker by the weaponry. In 2007, Israel attacked a Syrian nuclear facility. It was a conventional attack—military airplanes flew over Syria and bombed the plant—and there was never any doubt who did it. That shorthand doesn’t work in cyberspace.

When the US and Israel attacked an Iranian nuclear facility in 2010, they used a cyberweapon and their involvement was a secret for years. On the Internet, technology broadly disseminates capability. Everyone from lone hackers to criminals to hypothetical cyberterrorists to nations’ spies and soldiers is using the same tools and the same tactics. Internet traffic doesn’t come with a return address, and it’s easy for an attacker to obscure his tracks by routing his attacks through some innocent third party.

And while it now seems that North Korea did indeed attack Sony, the attack it most resembles was conducted by members of the hacker group Anonymous against a company called HBGary Federal in 2011. In the same year, other members of Anonymous threatened NATO, and in 2014, still others announced that they were going to attack ISIS. Regardless of what you think of the group’s capabilities, it’s a new world when a bunch of hackers can threaten an international military alliance.

Even when a victim does manage to attribute a cyberattack, the process can take a long time. It took the US weeks to publicly blame North Korea for the Sony attacks. That was relatively fast; most of that time was probably spent trying to figure out how to respond. Attacks by China against US companies have taken much longer to attribute.

This delay makes defense policy difficult. Microsoft’s Scott Charney makes this point: When you’re being physically attacked, you can call on a variety of organizations to defend you—the police, the military, whoever does antiterrorism security in your country, your lawyers. The legal structure justifying that defense depends on knowing two things: who’s attacking you, and why. Unfortunately, when you’re being attacked in cyberspace, the two things you often don’t know are who’s attacking you, and why.

Whose job was it to defend Sony? Was it the US military’s, because it believed the attack to have come from North Korea? Was it the FBI, because this wasn’t an act of war? Was it Sony’s own problem, because it’s a private company? What about during those first weeks, when no one knew who the attacker was? These are just a few of the policy questions that we don’t have good answers for.

Certainly Sony needs enough security to protect itself regardless of who the attacker was, as do all of us. For the victim of a cyberattack, who the attacker is can be academic. The damage is the same, whether it’s a couple of hackers or a nation-state.

In the geopolitical realm, though, attribution is vital. And not only is attribution hard, providing evidence of any attribution is even harder. Because so much of the FBI’s evidence was classified—and probably provided by the National Security Agency—it was not able to explain why it was so sure North Korea did it. As I recently wrote: “The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong-un’s sign-off on the plan.” Making any of this public would reveal the NSA’s “sources and methods,” something it regards as a very important secret.

Different types of attribution require different levels of evidence. In the Sony case, we saw that the US government was able to generate enough evidence to convince itself. Perhaps it had the additional evidence required to convince North Korea of its certainty, and provided that over diplomatic channels. But if the public is expected to support any government retaliatory action, it will need sufficient evidence made public to be convinced. Today, trust in US intelligence agencies is low, especially after the 2003 Iraqi weapons-of-mass-destruction debacle.

What all of this means is that we are in the middle of an arms race between attackers and those who want to identify them: deception and deception detection. It’s an arms race in which the US—and, by extension, its allies—has a singular advantage. We spend more money on electronic eavesdropping than the rest of the world combined, we have more technology companies than any other country, and the architecture of the Internet ensures that most of the world’s traffic passes through networks the NSA can eavesdrop on.

In 2012, then US Secretary of Defense Leon Panetta said publicly that the US—presumably the NSA—has “made significant advances in … identifying the origins” of cyberattacks. We don’t know if this means they have made some fundamental technological advance, or that their espionage is so good that they’re monitoring the planning processes. Other US government officials have privately said that they’ve solved the attribution problem.

We don’t know how much of that is real and how much is bluster. It’s actually in America’s best interest to confidently accuse North Korea, even if it isn’t sure, because it sends a strong message to the rest of the world: “Don’t think you can hide in cyberspace. If you try anything, we’ll know it’s you.”

Strong attribution leads to deterrence. The detailed NSA capabilities leaked by Edward Snowden help with this, because they bolster an image of an almost-omniscient NSA.

It’s not, though—which brings us back to the arms race. A world where hackers and governments have the same capabilities, where governments can masquerade as hackers or as other governments, and where much of the attribution evidence intelligence agencies collect remains secret, is a dangerous place.

So is a world where countries have secret capabilities for deception and deception detection, and are constantly trying to get the best of each other. This is the world of today, though, and we need to be prepared for it.

This essay previously appeared in the Christian Science Monitor.

Posted on March 9, 2015 at 7:09 AM

Attributing the Sony Attack

No one has admitted taking down North Korea’s Internet. It could have been an act of retaliation by the US government, but it could just as well have been an ordinary DDoS attack. The follow-on attack against Sony’s PlayStation Network certainly seems to be the work of hackers unaffiliated with any government.

Not knowing who did what isn’t new. It’s called the “attribution problem,” and it plagues Internet security. But as governments increasingly get involved in cyberspace attacks, it has policy implications as well. Last year, I wrote:

Ordinarily, you could determine who the attacker was by the weaponry. When you saw a tank driving down your street, you knew the military was involved because only the military could afford tanks. Cyberspace is different. In cyberspace, technology is broadly spreading its capability, and everyone is using the same weaponry: hackers, criminals, politically motivated hacktivists, national spies, militaries, even the potential cyberterrorist. They are all exploiting the same vulnerabilities, using the same sort of hacking tools, engaging in the same attack tactics, and leaving the same traces behind. They all eavesdrop or steal data. They all engage in denial-of-service attacks. They all probe cyberdefenses and do their best to cover their tracks.

Despite this, knowing the attacker is vitally important. As members of society, we have several different types of organizations that can defend us from an attack. We can call the police or the military. We can call on our national anti-terrorist agency and our corporate lawyers. Or we can defend ourselves with a variety of commercial products and services. Depending on the situation, all of these are reasonable choices.

The legal regime in which any defense operates depends on two things: who is attacking you and why. Unfortunately, when you are being attacked in cyberspace, the two things you often do not know are who is attacking you and why. It is not that everything can be defined as cyberwar; it is that we are increasingly seeing warlike tactics used in broader cyberconflicts. This makes defense and national cyberdefense policy difficult.

In 2007, the Israeli Air Force bombed and destroyed the al-Kibar nuclear facility in Syria. The Syrian government immediately knew who did it, because airplanes are hard to disguise. In 2010, the US and Israel jointly damaged Iran’s Natanz nuclear facility. But this time they used a cyberweapon, Stuxnet, and no one knew who did it until details were leaked years later. China routinely denies its cyberespionage activities. And a 2009 cyberattack against the United States and South Korea was blamed on North Korea even though it may have originated from either London or Miami.

When it’s possible to identify the origins of cyberattacks—as forensic experts were able to do with many of the Chinese attacks against US networks—it’s as a result of months of detailed analysis and investigation. That kind of time frame doesn’t help at the moment of attack, when you have to decide within milliseconds how your network is going to react and within days how your country is going to react. This, in part, explains the relative disarray within the Obama administration over what to do about North Korea. Officials in the US government and international institutions simply don’t have the legal or even the conceptual framework to deal with these types of scenarios.

The blurring of lines between individual actors and national governments has been happening more and more in cyberspace. What has been called the first cyberwar, Russia vs. Estonia in 2007, was partly the work of a 20-year-old ethnic Russian living in Tallinn, and partly the work of a pro-Kremlin youth group associated with the Russian government. Many of the Chinese hackers targeting Western networks seem to be unaffiliated with the Chinese government. And in 2011, the hacker group Anonymous threatened NATO.

It’s a strange future we live in when we can’t tell the difference between random hackers and major governments, or when those same random hackers can credibly threaten international military organizations.

This is why people around the world should care about the Sony hack. In this future, we’re going to see an even greater blurring of traditional lines between police, military, and private actions as technology broadly distributes attack capabilities across a variety of actors. This attribution difficulty is here to stay, at least for the foreseeable future.

If North Korea is responsible for the cyberattack, how is the situation different than a North Korean agent breaking into Sony’s office, photocopying a lot of papers, and making them available to the public? Is Chinese corporate espionage a problem for governments to solve, or should we let corporations defend themselves? Should the National Security Agency defend US corporate networks, or only US military networks? How much should we allow organizations like the NSA to insist that we trust them without proof when they claim to have classified evidence that they don’t want to disclose? How should we react to one government imposing sanctions on another based on this secret evidence? More importantly, when we don’t know who is launching an attack or why, who is in charge of the response and under what legal system should those in charge operate?

We need to figure all of this out. We need national guidelines to determine when the military should get involved and when it’s a police matter, as well as what sorts of proportional responses are available in each instance. We need international agreements defining what counts as cyberwar and what does not. And, most of all right now, we need to tone down all the cyberwar rhetoric. Breaking into the offices of a company and photocopying their paperwork is not an act of war, no matter who did it. Neither is doing the same thing over the Internet. Let’s save the big words for when it matters.

This essay previously appeared on TheAtlantic.com.

Jack Goldsmith responded to this essay.

Posted on January 7, 2015 at 11:16 AM

More on Stuxnet

Ralph Langner has written the definitive analysis of Stuxnet: a short, popular version and a long, technical version.

Stuxnet is not really one weapon, but two. The vast majority of the attention has been paid to Stuxnet’s smaller and simpler attack routine—the one that changes the speeds of the rotors in a centrifuge, which is used to enrich uranium. But the second and “forgotten” routine is about an order of magnitude more complex and stealthy. It qualifies as a nightmare for those who understand industrial control system security. And strangely, this more sophisticated attack came first. The simpler, more familiar routine followed only years later—and was discovered in comparatively short order.

Also:

Stuxnet also provided a useful blueprint to future attackers by highlighting the royal road to infiltration of hard targets. Rather than trying to infiltrate directly by crawling through 15 firewalls, three data diodes, and an intrusion detection system, the attackers acted indirectly by infecting soft targets with legitimate access to ground zero: contractors. However seriously these contractors took their cybersecurity, it certainly was not on par with the protections at the Natanz fuel-enrichment facility. Getting the malware on the contractors’ mobile devices and USB sticks proved good enough, as sooner or later they physically carried those on-site and connected them to Natanz’s most critical systems, unchallenged by any guards.

Any follow-up attacker will explore this infiltration method when thinking about hitting hard targets. The sober reality is that at a global scale, pretty much every single industrial or military facility that uses industrial control systems at some scale is dependent on its network of contractors, many of which are very good at narrowly defined engineering tasks, but lousy at cybersecurity. While experts in industrial control system security had discussed the insider threat for many years, insiders who unwittingly helped deploy a cyberweapon had been completely off the radar. Until Stuxnet.

And while Stuxnet was clearly the work of a nation-state—requiring vast resources and considerable intelligence—future attacks on industrial control and other so-called “cyber-physical” systems may not be. Stuxnet was particularly costly because of the attackers’ self-imposed constraints. Damage was to be disguised as reliability problems. I estimate that well over 50 percent of Stuxnet’s development cost went into efforts to hide the attack, with the bulk of that cost dedicated to the overpressure attack which represents the ultimate in disguise—at the cost of having to build a fully-functional mockup IR-1 centrifuge cascade operating with real uranium hexafluoride. Stuxnet-inspired attackers will not necessarily place the same emphasis on disguise; they may want victims to know that they are under cyberattack and perhaps even want to publicly claim credit for it.

Related: earlier this month, Eugene Kaspersky said that Stuxnet also infected a Russian nuclear power station and the International Space Station.

Posted on November 29, 2013 at 6:18 AM

