Entries Tagged "flash drives"


How to Remain Secure Against the NSA

Now that we have enough details about how the NSA eavesdrops on the Internet, including today’s disclosures of the NSA’s deliberate weakening of cryptographic systems, we can finally start to figure out how to protect ourselves.

For the past two weeks, I have been working with the Guardian on NSA stories, and have read hundreds of top-secret NSA documents provided by whistleblower Edward Snowden. I wasn’t part of today’s story—it was in process well before I showed up—but everything I read confirms what the Guardian is reporting.

At this point, I feel I can provide some advice for keeping secure against such an adversary.

The primary way the NSA eavesdrops on Internet communications is in the network. That’s where their capabilities best scale. They have invested in enormous programs to automatically collect and analyze network traffic. Anything that requires them to attack individual endpoint computers is significantly more costly and risky for them, and they will do those things carefully and sparingly.

Leveraging its secret agreements with telecommunications companies—all the US and UK ones, and many other “partners” around the world—the NSA gets access to the communications trunks that move Internet traffic. In cases where it doesn’t have that sort of friendly access, it does its best to surreptitiously monitor communications channels: tapping undersea cables, intercepting satellite communications, and so on.

That’s an enormous amount of data, and the NSA has equivalently enormous capabilities to quickly sift through it all, looking for interesting traffic. “Interesting” can be defined in many ways: by the source, the destination, the content, the individuals involved, and so on. This data is funneled into the vast NSA system for future analysis.

The NSA collects much more metadata about Internet traffic: who is talking to whom, when, how much, and by what mode of communication. Metadata is a lot easier to store and analyze than content. It can be extremely personal to the individual, and is enormously valuable intelligence.

The Systems Intelligence Directorate is in charge of data collection, and the resources it devotes to this are staggering. I read status report after status report about these programs, discussing capabilities, operational details, planned upgrades, and so on. Each individual problem—recovering electronic signals from fiber, keeping up with the terabyte streams as they go by, filtering out the interesting stuff—has its own group dedicated to solving it. Its reach is global.

The NSA also attacks network devices directly: routers, switches, firewalls, etc. Most of these devices have surveillance capabilities already built in; the trick is to surreptitiously turn them on. This is an especially fruitful avenue of attack; routers are updated less frequently, tend not to have security software installed on them, and are generally ignored as a vulnerability.

The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO—Tailored Access Operations—group. TAO has a menu of exploits it can serve up against your computer—whether you’re running Windows, Mac OS, Linux, iOS, or something else—and a variety of tricks to get them onto your computer. Your anti-virus software won’t detect them, and you’d have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it’s in. Period.

The NSA deals with any encrypted data it encounters more by subverting the underlying cryptography than by leveraging any secret mathematical breakthroughs. First, there’s a lot of bad cryptography out there. If it finds an Internet connection protected by MS-CHAP, for example, that’s easy to break and recover the key. It exploits poorly chosen user passwords, using the same dictionary attacks hackers use in the unclassified world.
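
Those dictionary attacks are nothing exotic. A minimal sketch, assuming a captured unsalted SHA-256 hash and a wordlist file; both the hash scheme and the file are illustrative assumptions, since real attacks target whatever scheme actually protects the traffic:

```python
import hashlib

def dictionary_attack(target_hash: bytes, wordlist_path: str):
    # Hash each candidate word and compare against the captured hash.
    with open(wordlist_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            candidate = line.strip()
            if hashlib.sha256(candidate.encode()).digest() == target_hash:
                return candidate  # password found
    return None  # not in the dictionary
```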

As was revealed today, the NSA also works with security product vendors to ensure that commercial encryption products are broken in secret ways that only it knows about. We know this has happened historically: Crypto AG and Lotus Notes are the most public examples, and there is evidence of a back door in Windows. A few people have told me some recent stories about their experiences, and I plan to write about them soon. Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. If the back door is discovered, it’s explained away as a mistake. And as we now know, the NSA has enjoyed enormous success from this program.
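
To make one of those subversions concrete, here is a hypothetical sketch of a “less random” key generator: the output looks random, but it is derived only from the clock and a constant the attacker holds. Everything here is invented for illustration.

```python
import random
import time

def backdoored_keygen(nbytes: int = 16) -> bytes:
    # The key depends only on the current time and a constant the
    # attacker knows, so the search space collapses from 2**128 to
    # a few million guesses. Real subversions are subtler, but the
    # principle is the same: the randomness isn't.
    rng = random.Random((int(time.time()), 0x5EED))  # invented constant
    return bytes(rng.getrandbits(8) for _ in range(nbytes))
```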

TAO also hacks into computers to recover long-term keys. So if you’re running a VPN that uses a complex shared secret to protect your data and the NSA decides it cares, it might try to steal that secret. This kind of thing is only done against high-value targets.

How do you communicate securely against such an adversary? Snowden said it in an online Q&A soon after he made his first document public: “Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on.”

I believe this is true, despite today’s revelations and tantalizing hints of “groundbreaking cryptanalytic capabilities” made by James Clapper, the director of national intelligence, in another top-secret document. Those capabilities involve deliberately weakening the cryptography.

Snowden’s follow-on sentence is equally important: “Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.”

Endpoint means the software you’re using, the computer you’re using it on, and the local network you’re using it in. If the NSA can modify the encryption algorithm or drop a Trojan on your computer, all the cryptography in the world doesn’t matter at all. If you want to remain secure against the NSA, you need to do your best to ensure that the encryption can operate unimpeded.

With all this in mind, I have five pieces of advice:

  1. Hide in the network. Implement hidden services. Use Tor to anonymize yourself. Yes, the NSA targets Tor users, but it’s work for them. The less obvious you are, the safer you are.
  2. Encrypt your communications. Use TLS. Use IPsec. Again, while it’s true that the NSA targets encrypted connections—and it may have explicit exploits against these protocols—you’re much better protected than if you communicate in the clear. (A minimal TLS example follows this list.)
  3. Assume that while your computer can be compromised, it would take work and risk on the part of the NSA—so it probably isn’t. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the Internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my Internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it’s pretty good. (The second sketch after this list illustrates this round trip.)
  4. Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors, and many foreign ones probably do as well. It’s prudent to assume that foreign products also have foreign-installed backdoors. Closed-source software is easier for the NSA to backdoor than open-source software. Systems relying on master secrets are vulnerable to the NSA, through either legal or more clandestine means.
  5. Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself, giving the NSA a lot more freedom to make changes. And because BitLocker is proprietary, it’s far less likely those changes will be discovered. Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
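
For item 2, encrypting a connection often takes only a few lines. Here is a minimal TLS client using Python’s standard ssl module, with certificate verification left at its secure defaults:

```python
import socket
import ssl

# The default context verifies the server's certificate and hostname.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. "TLSv1.3"
```

For item 3’s air-gap workflow, the idea is to encrypt on the offline machine and carry only ciphertext across the gap. A sketch of that round trip using the third-party cryptography package; the function names are mine, and this is an illustrative stand-in rather than the exact tools described in the essay:

```python
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_for_transfer(src: Path, usb_mount: Path, key: bytes) -> Path:
    # On the air-gapped machine: encrypt the file, then copy only
    # the ciphertext onto the USB stick that crosses the gap.
    out = usb_mount / (src.name + ".enc")
    out.write_bytes(Fernet(key).encrypt(src.read_bytes()))
    return out

def decrypt_after_transfer(enc: Path, dest: Path, key: bytes) -> None:
    # On the Internet-connected machine: reverse the process.
    dest.write_bytes(Fernet(key).decrypt(enc.read_bytes()))

# key = Fernet.generate_key()  # generated once, kept on the offline machine
```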

Since I started working with Snowden’s documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I’m not going to write about. There’s an undocumented encryption feature in my Password Safe program from the command line; I’ve been using that as well.

I understand that most of this is impossible for the typical Internet user. Even I don’t use all these tools for most everything I am working on. And I’m still primarily on Windows, unfortunately. Linux would be safer.

The NSA has turned the fabric of the Internet into a vast surveillance platform, but they are not magical. They’re limited by the same economic realities as the rest of us, and our best defense is to make surveillance of us as expensive as possible.

Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.

This essay previously appeared in the Guardian.

EDITED TO ADD: Reddit thread.

Someone somewhere commented that the NSA’s “groundbreaking cryptanalytic capabilities” could include a practical attack on RC4. I don’t know one way or the other, but that’s a good speculation.

Posted on September 15, 2013 at 8:11 AM

Detaining David Miranda

Last Sunday, David Miranda was detained while changing planes at London Heathrow Airport by British authorities for nine hours under a controversial British law—the maximum time allowable without making an arrest. There has been much made of the fact that he’s the partner of Glenn Greenwald, the Guardian reporter whom Edward Snowden trusted with many of his NSA documents and the most prolific reporter of the surveillance abuses disclosed in those documents. There’s less discussion of what I feel was the real reason for Miranda’s detention. He was ferrying documents between Greenwald and Laura Poitras, a filmmaker and his co-reporter on Snowden and his information. These documents were on several USB memory sticks he had with him. He had already carried documents from Greenwald in Rio de Janeiro to Poitras in Berlin, and was on his way back with different documents when he was detained.

The memory sticks were encrypted, of course, and Miranda did not know the key. This didn’t stop the British authorities from repeatedly asking for the key, and from confiscating the memory sticks along with his other electronics.

The incident prompted a major outcry in the UK. The UK’s Terrorism Act has always been controversial, and this clear misuse—it was intended to give authorities the right to detain and question suspected terrorists—is prompting new calls for its review. Certainly the UK police will be more reluctant to misuse the law again in this manner.

I have to admit this story has me puzzled. Why would the British do something like this? What did they hope to gain, and why did they think it worth the cost? And—of course—were the British acting on their own under the Official Secrets Act, or were they acting on behalf of the United States? (My initial assumption was that they were acting on behalf of the US, but after the bizarre story of the British GCHQ demanding the destruction of Guardian computers last month, I’m not sure anymore.)

We do know the British were waiting for Miranda. It’s reasonable to assume they knew his itinerary, and had good reason to suspect that he was ferrying documents back and forth between Greenwald and Poitras. These documents could be source documents provided by Snowden, new documents that the two were working on either separately or together, or both. That being said, it’s inconceivable that the memory sticks would contain the only copies of these documents. Poitras retained copies of everything she gave Miranda. So the British authorities couldn’t possibly destroy the documents; the best they could hope for is that they would be able to read them.

Is it truly possible that the NSA doesn’t already know what Snowden has? They claim they don’t, but after Snowden’s name became public, the NSA would have conducted the mother of all audits. It would try to figure out what computer systems Snowden had access to, and therefore what documents he could have accessed. Hopefully, the audit information would give more detail, such as which documents he downloaded. I have a hard time believing that its internal auditing systems would be so bad that it wouldn’t be able to discover this.

So if the NSA knows what Snowden has, or what he could have, then the most it could learn from the USB sticks is what Greenwald and Poitras are currently working on, or thinking about working on. But presumably the things the two of them are working on are the things they’re going to publish next. Did the intelligence agencies really do all this simply for a few weeks’ heads-up on what was coming? Given how ham-handedly the NSA has handled PR as each document was exposed, it seems implausible that it wanted advance knowledge so it could work on a response. It’s been two months since the first Snowden revelation, and it still doesn’t have a decent PR story.

Furthermore, the UK authorities must have known that the data would be encrypted. Greenwald might have been a crypto newbie at the start of the Snowden affair, but Poitras is known to be good at security. The two have been communicating securely by e-mail when they do communicate. Maybe the UK authorities thought there was a good chance that one of them would make a security mistake, or that Miranda would be carrying paper documents.

Another possibility is that this was just intimidation. If so, it’s misguided. Anyone who regularly reads Greenwald could have told them that he would not have been intimidated—and, in fact, he expressed the exact opposite sentiment—and anyone who follows Poitras knows that she is even more strident in her views. Going after the loved ones of state enemies is a typically thuggish tactic, but it’s not a very good one in this case. The Snowden documents will get released. There’s no way to put this cat back in the bag, not even by killing the principal players.

It could possibly have been intended to intimidate others who are helping Greenwald and Poitras, or the Guardian and its advertisers. This will have some effect. Lavabit, Silent Circle, and now Groklaw have all been successfully intimidated. Certainly others have as well. But public opinion is shifting against the intelligence community. I don’t think it will intimidate future whistleblowers. If the treatment of Chelsea Manning didn’t discourage them, nothing will.

This leaves one last possible explanation—those in power were angry and impulsively acted on that anger. They’re lashing out: sending a message and demonstrating that they’re not to be messed with—that the normal rules of polite conduct don’t apply to people who screw with them. That’s probably the scariest explanation of all. Both the US and UK intelligence apparatuses have enormous money and power, and they have already demonstrated that they are willing to ignore their own laws. Once they start wielding that power unthinkingly, it could get really bad for everyone.

And it’s not going to be good for them, either. They seem to want Snowden so badly that they’ll burn the world down to get him. But every time they act impulsively and aggressively—convincing the governments of Portugal and France to block the plane carrying the Bolivian president because they thought Snowden was on it is another example—they lose a small amount of moral authority around the world, and some ability to act in the same way again. The more pressure Snowden feels, the more likely he is to give up on releasing the documents slowly and responsibly, and publish all of them at once—the same way that WikiLeaks published the US State Department cables.

Just this week, the Wall Street Journal reported on some new NSA secret programs that are spying on Americans. It got the information from “interviews with current and former intelligence and government officials and people from companies that help build or operate the systems, or provide data,” not from Snowden. This is only the beginning. The media will not be intimidated. I will not be intimidated. But it scares me that the NSA is so blind that it doesn’t see it.

This essay previously appeared on TheAtlantic.com.

EDITED TO ADD: I’ve been thinking about it, and there’s a good chance that the NSA doesn’t know what Snowden has. He was a sysadmin. He had access. Most of the audits and controls protect against normal users; someone with root access is going to be able to bypass a lot of them. And he had the technical chops to cover his tracks when he couldn’t just evade the auditing systems.

The AP makes an excellent point about this:

The disclosure undermines the Obama administration’s assurances to Congress and the public that the NSA surveillance programs can’t be abused because its spying systems are so aggressively monitored and audited for oversight purposes: If Snowden could defeat the NSA’s own tripwires and internal burglar alarms, how many other employees or contractors could do the same?

And, to be clear, I didn’t mean to say that intimidation wasn’t the government’s motive. I believe it was, and that it was poorly thought out intimidation: lashing out in anger, rather than from some Machiavellian strategy. (Here’s a similar view.) If they wanted Miranda’s electronics, they could have confiscated them and sent him on his way in fifteen minutes. Holding him for nine hours—the absolute maximum they could under the current law—was intimidation.

I am reminded of the phone call the Guardian received from the British government. The exact quote reported was: “You’ve had your fun. Now we want the stuff back.” That’s something you would tell your child. And that’s the power dynamic that’s going on here.

EDITED TO ADD (8/27): Jay Rosen has an excellent essay on this.

EDITED TO ADD (9/12): Other editors react.

Posted on August 27, 2013 at 6:39 AM

Yet Another "People Plug in Strange USB Sticks" Story

I’m really getting tired of stories like this:

Computer disks and USB sticks were dropped in parking lots of government buildings and private contractors, and 60% of the people who picked them up plugged the devices into office computers. And if the drive or CD had an official logo on it, 90% were installed.

Of course people plugged in USB sticks and computer disks. It’s like “75% of people who picked up a discarded newspaper on the bus read it.” What else are people supposed to do with them?

And this is not the right response:

Mark Rasch, director of network security and privacy consulting for Falls Church, Virginia-based Computer Sciences Corp., told Bloomberg: “There’s no device known to mankind that will prevent people from being idiots.”

Maybe it would be the right response if 60% of people tried to play the USB sticks like ocarinas, or tried to make omelettes out of the computer disks. But not if they plugged them into their computers. That’s what they’re for.

People get USB sticks all the time. The problem isn’t that people are idiots, that they should know that a USB stick found on the street is automatically bad and a USB stick given away at a trade show is automatically good. The problem is that the OS trusts random USB sticks. The problem is that the OS will automatically run a program that can install malware from a USB stick. The problem is that it isn’t safe to plug a USB stick into a computer.
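
Concretely, pre-2011 versions of Windows would read a configuration file like this from the root of an inserted drive and run, or offer to run, the named program. The filenames and label here are illustrative:

```ini
[AutoRun]
open=setup.exe
icon=setup.exe,0
label=Quarterly Reports
```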

Quit blaming the victim. They’re just trying to get by.

EDITED TO ADD (7/4): As of February of this year, Windows no longer supports AutoRun for USB drives.

Posted on June 29, 2011 at 9:13 AM

Bin Laden Maintained Computer Security with an Air Gap

From the Associated Press:

Bin Laden’s system was built on discipline and trust. But it also left behind an extensive archive of email exchanges for the U.S. to scour. The trove of electronic records pulled out of his compound after he was killed last week is revealing thousands of messages and potentially hundreds of email addresses, the AP has learned.

Holed up in his walled compound in northeast Pakistan with no phone or Internet capabilities, bin Laden would type a message on his computer without an Internet connection, then save it using a thumb-sized flash drive. He then passed the flash drive to a trusted courier, who would head for a distant Internet cafe.

At that location, the courier would plug the memory drive into a computer, copy bin Laden’s message into an email and send it. Reversing the process, the courier would copy any incoming email to the flash drive and return to the compound, where bin Laden would read his messages offline.

I’m impressed. It’s hard to maintain this kind of COMSEC discipline.

It was a slow, toilsome process. And it was so meticulous that even veteran intelligence officials have marveled at bin Laden’s ability to maintain it for so long. The U.S. always suspected bin Laden was communicating through couriers but did not anticipate the breadth of his communications as revealed by the materials he left behind.

Navy SEALs hauled away roughly 100 flash memory drives after they killed bin Laden, and officials said they appear to archive the back-and-forth communication between bin Laden and his associates around the world.

Posted on May 18, 2011 at 8:45 AM

Erasing Data from Flash Drives

“Reliably Erasing Data From Flash-Based Solid State Drives,” by Michael Wei, Laura M. Grupp, Frederick E. Spada, and Steven Swanson.

Abstract: Reliably erasing data from storage media (sanitizing the media) is a critical component of secure data management. While sanitizing entire disks and individual files is well-understood for hard drives, flash-based solid state disks have a very different internal architecture, so it is unclear whether hard drive techniques will work for SSDs as well.

We empirically evaluate the effectiveness of hard drive-oriented techniques and of the SSDs’ built-in sanitization commands by extracting raw data from the SSD’s flash chips after applying these techniques and commands. Our results lead to three conclusions: First, built-in commands are effective, but manufacturers sometimes implement them incorrectly. Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive. Third, none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs.

This third conclusion leads us to develop flash translation layer extensions that exploit the details of flash memory’s behavior to efficiently support file sanitization. Overall, we find that reliable SSD sanitization requires built-in, verifiable sanitize operations.
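
The paper’s second conclusion concerns a simple technique: write over the drive’s entire visible address space, twice. A minimal sketch, assuming a Linux raw block device that is unmounted and that you really intend to wipe; as the authors found, even this is usually but not always sufficient on SSDs, because the flash translation layer can leave stale copies in areas the host can’t address:

```python
import os

def two_pass_overwrite(device_path: str, chunk: int = 1 << 20) -> None:
    # Overwrite the device's entire visible address space twice with
    # random data. DESTRUCTIVE: everything on the device is lost.
    fd = os.open(device_path, os.O_WRONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        for _ in range(2):
            os.lseek(fd, 0, os.SEEK_SET)
            remaining = size
            while remaining:
                n = min(chunk, remaining)
                os.write(fd, os.urandom(n))
                remaining -= n
            os.fsync(fd)  # force the pass out to the device
    finally:
        os.close(fd)
```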

News article. Video of talk.

Posted on March 1, 2011 at 6:29 AM

Stuxnet

Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

As the story goes, the Stuxnet worm was designed and released by a government—the U.S. and Israel are the most common suspects—specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines—and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location—and that Stuxnet’s authors knew exactly what they were targeting.

It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

All the anti-virus programs detect and remove Stuxnet from Windows systems.

Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own drivers into Windows. These have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

Over time the attackers swapped out modules that didn’t work and replaced them with new ones—perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.

We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup—surely any organization that goes to all this trouble would test the thing before releasing it—and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates—India, Indonesia, and Pakistan—are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things—large and small—that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.
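
Mechanically, an infection marker like this is trivial. A hypothetical sketch using Python’s standard winreg module; the key path and value name are invented, not Stuxnet’s actual locations:

```python
import winreg  # Windows-only standard-library module

MARKER_KEY = r"SOFTWARE\Example\Marker"  # invented path for illustration
MARKER_NAME = "installed"

def already_infected() -> bool:
    # A new copy checks for the agreed-upon registry value and,
    # if present, assumes the machine is already infected.
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, MARKER_KEY) as key:
            value, _ = winreg.QueryValueEx(key, MARKER_NAME)
            return value == "19790509"
    except OSError:
        return False
```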

Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific sabotage seems to be the most likely explanation.

If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to only spread to three additional computers and to erase itself after 21 days—but it doesn’t do that. A mistake in programming, or a feature in the code not enabled? Maybe we’re not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors committed collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.

My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

This essay originally appeared on Forbes.com.

My alternate explanations for Stuxnet were cut from the essay. Here they are:

  • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
  • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
  • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
  • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

Note that some of these alternate explanations overlap.

EDITED TO ADD (10/7): Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

Posted on October 7, 2010 at 9:56 AM

Eating a Flash Drive

How not to destroy evidence:

In a bold and bizarre attempt to destroy evidence seized during a federal raid, a New York City man grabbed a flash drive and swallowed the data storage device while in the custody of Secret Service agents, records show.

The article wasn’t explicit about this—odd, as it’s the main question any reader would have—but it seems that the man’s digestive tract did not destroy the evidence.

Posted on March 8, 2010 at 11:00 AM

FIPS 140-2 Level 2 Certified USB Memory Stick Cracked

Kind of a dumb mistake:

The USB drives in question encrypt the stored data via the practically uncrackable AES 256-bit hardware encryption system. Therefore, the main point of attack for accessing the plain text data stored on the drive is the password entry mechanism. When analysing the relevant Windows program, the SySS security experts found a rather blatant flaw that has quite obviously slipped through testers’ nets. During a successful authorisation procedure the program will, irrespective of the password, always send the same character string to the drive after performing various crypto operations—and this is the case for all USB Flash drives of this type.

Cracking the drives is therefore quite simple. The SySS experts wrote a small tool for the active password entry program’s RAM which always made sure that the appropriate string was sent to the drive, irrespective of the password entered and as a result gained immediate access to all the data on the drive. The vulnerable devices include the Kingston DataTraveler BlackBox, the SanDisk Cruzer Enterprise FIPS Edition and the Verbatim Corporate Secure FIPS Edition.

Nice piece of analysis work.
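
In code terms, the reported flaw amounts to the pattern below: the host program checks the password, then sends the drive a fixed unlock token that never depends on it. The class, token value, and method names are all invented for illustration; SySS’s tool simply patched the running program so the fixed token was sent regardless of the password check.

```python
import hashlib

UNLOCK_TOKEN = bytes.fromhex("aa" * 32)  # invented; identical for every device

class Drive:
    # Stand-in for the USB stick's unlock interface.
    def __init__(self, password: str) -> None:
        self.stored_hash = hashlib.sha256(password.encode()).digest()

    def send(self, token: bytes) -> bool:
        # The hardware only ever checks this one fixed token.
        return token == UNLOCK_TOKEN

def host_unlock(drive: Drive, password: str) -> bool:
    # The password is verified on the PC side only...
    if hashlib.sha256(password.encode()).digest() != drive.stored_hash:
        return False
    # ...and the same fixed string is then sent no matter what the
    # password was, so bypassing the check above unlocks any drive.
    return drive.send(UNLOCK_TOKEN)
```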

The article goes on to question the value of the FIPS certification:

The real question, however, remains unanswered: how could USB Flash drives that exhibit such a serious security hole be given one of the highest certificates for crypto devices? Even more importantly, perhaps: what is the value of a certification that fails to detect such holes?

The problem is that no one really understands what a FIPS 140-2 certification means. Instead, they think something like: “This crypto thingy is certified, so it must be secure.” In fact, FIPS 140-2 Level 2 certification only means that certain good algorithms are used, and that there is some level of tamper resistance and tamper evidence. Marketing departments of security companies take advantage of this confusion—it’s not only FIPS 140, it’s all the security standards—and encourage their customers to equate conformance to the standard with security.

So when that equivalence is demonstrated to be false, people are surprised.

Posted on January 8, 2010 at 7:24 AM

