Entries Tagged "sabotage"

Story of Gus Weiss

This is a long and fascinating article about Gus Weiss, who masterminded a long campaign to feed technical disinformation to the Soviet Union, which may or may not have caused a massive pipeline explosion somewhere in Siberia in the 1980s, if in fact there even was a massive pipeline explosion somewhere in Siberia in the 1980s.

Lots of information about the origins of US export controls laws and sabotage operations.

Posted on March 27, 2020 at 6:03 AM

New Ransomware Targets Industrial Control Systems

EKANS is a new ransomware that targets industrial control systems:

But EKANS also uses another trick to ratchet up the pain: It’s designed to terminate 64 different software processes on victim computers, including many that are specific to industrial control systems. That allows it to then encrypt the data that those control system programs interact with. While crude compared to other malware purpose-built for industrial sabotage, that targeting can nonetheless break the software used to monitor infrastructure, like an oil firm’s pipelines or a factory’s robots. That could have potentially dangerous consequences, like preventing staff from remotely monitoring or controlling the equipment’s operation.

EKANS is actually the second ransomware to hit industrial control systems. According to Dragos, another ransomware strain known as Megacortex that first appeared last spring included all of the same industrial control system process-killing features, and may in fact be a predecessor to EKANS developed by the same hackers. But because Megacortex also terminated hundreds of other processes, its industrial-control-system targeted features went largely overlooked.

Speculation is that this is criminal in origin, and not the work of a government.

It’s also the first malware that is named after a Pokémon character.

Posted on February 7, 2020 at 9:42 AM

Glitter Bomb against Package Thieves

Stealing packages from unattended porches is a rapidly rising crime, as more of us order more things by mail. One person hid a glitter bomb and a video recorder in a package, posting the results when thieves opened the box. At least, that’s what might have happened. At least some of the video was faked, which puts the whole thing into question.

That’s okay, though. Santa is faked, too. Happy whatever you’re celebrating.

Posted on December 25, 2018 at 6:13 AM

Fraudulent Tactics on Amazon Marketplace

Fascinating article about the many ways Amazon Marketplace sellers sabotage each other and defraud customers. The opening example: framing a seller for false advertising by buying fake five-star reviews for their products.

Defacement: Sellers armed with the accounts of Amazon distributors (sometimes legitimately, sometimes through the black market) can make all manner of changes to a rival’s listings, from changing images to altering text to reclassifying a product into an irrelevant category, like “sex toys.”

Phony fires: Sellers will buy their rival’s product, light it on fire, and post a picture to the reviews, claiming it exploded. Amazon is quick to suspend sellers for safety claims.

[…]

Over the following days, Harris came to realize that someone had been targeting him for almost a year, preparing an intricate trap. While he had trademarked his watch and registered his brand, Dead End Survival, with Amazon, Harris hadn’t trademarked the name of his Amazon seller account, SharpSurvival. So the interloper did just that, submitting to the patent office as evidence that he owned the goods a photo taken from Harris’ Amazon listings, including one of Harris’ own hands lighting a fire using the clasp of his survival watch. The hijacker then took that trademark to Amazon and registered it, giving him the power to kick Harris off his own listings and commandeer his name.

[…]

There are more subtle methods of sabotage as well. Sellers will sometimes buy Google ads for their competitors for unrelated products—say, a dog food ad linking to a shampoo listing—so that Amazon’s algorithm sees the rate of clicks converting to sales drop and automatically demotes their product.

What’s also interesting is how Amazon is basically its own government—with its own rules that its suppliers have no choice but to follow. And, of course, increasingly there is no option but to sell your stuff on Amazon.

Posted on December 20, 2018 at 6:21 AM

"Surreptitiously Weakening Cryptographic Systems"

New paper: “Surreptitiously Weakening Cryptographic Systems,” by Bruce Schneier, Matthew Fredrikson, Tadayoshi Kohno, and Thomas Ristenpart.

Abstract: Revelations over the past couple of years highlight the importance of understanding malicious and surreptitious weakening of cryptographic systems. We provide an overview of this domain, using a number of historical examples to drive development of a weaknesses taxonomy. This allows comparing different approaches to sabotage. We categorize a broader set of potential avenues for weakening systems using this taxonomy, and discuss what future research is needed to provide sabotage-resilient cryptography.

EDITED TO ADD (3/3): News article.

Posted on February 25, 2015 at 6:09 AM

Was the iOS SSL Flaw Deliberate?

Last October, I speculated on the best ways to go about designing and implementing a software backdoor. I suggested three characteristics of a good backdoor: low chance of discovery, high deniability if discovered, and minimal conspiracy to implement.

The critical iOS vulnerability that Apple patched last week is an excellent example. Look at the code. What caused the vulnerability is a single line of code: a duplicated “goto fail;” statement. Because that second statement isn’t guarded by any conditional, it always executes, jumping past the remaining signature checks so that verification succeeds regardless.
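
As an illustration, here is a minimal, self-contained C sketch of the same control-flow mistake. This is not Apple’s code; the function name, parameters, and error values are invented, but the duplicated, unconditional goto skips the later check in exactly the same way:

    #include <stdio.h>

    /* Illustrative analogue of the bug, not Apple's code: the second,
     * unconditional "goto fail;" always runs, so the signature check
     * below it is skipped and err is still 0 (success) at the label. */
    static int verify_handshake(int hash_ok, int signature_ok)
    {
        int err = 0;

        if ((err = hash_ok ? 0 : -1) != 0)
            goto fail;
            goto fail;                           /* the duplicated line: always executes */
        if ((err = signature_ok ? 0 : -1) != 0)  /* never reached */
            goto fail;

    fail:
        return err;                              /* still 0 (success) even for a bad signature */
    }

    int main(void)
    {
        /* A bad signature is accepted because its check was skipped. */
        printf("bad signature accepted: %s\n",
               verify_handshake(1, 0) == 0 ? "yes" : "no");
        return 0;
    }

Compilers that warn about unreachable code can flag the orphaned check left behind by a pattern like this, which is part of why the bug’s survival drew so much attention.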

The flaw is subtle, and hard to spot while scanning the code. It’s easy to imagine how this could have happened by error. And it would have been trivially easy for one person to add the vulnerability.

Was this done on purpose? I have no idea. But if I wanted to do something like this on purpose, this is exactly how I would do it.

EDITED TO ADD (2/27): If the Apple auditing system is any good, they would be able to trace this errant goto line not just to the source-code check-in details, but to the specific login that made the change. And they would quickly know whether this was just an error, or a deliberate change by a bad actor. Does anyone know what’s going on inside Apple?

EDITED TO ADD (2/27): Steve Bellovin has a pair of posts where he concludes that if this bug is enemy action, it’s fairly clumsy and unlikely to be the work of professionals.

Posted on February 27, 2014 at 6:03 AMView Comments

The Vulnerabilities Market and the Future of Security

Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It’s not just software companies, who sometimes pay bounties to researchers who alert them to security vulnerabilities so they can fix them. And it’s not only criminal organizations, who pay for vulnerabilities they can exploit. Now there are governments, and companies who sell to governments, who buy vulnerabilities with the intent of keeping them secret so they can exploit them.

This market is larger than most people realize, and it’s becoming even larger. Forbes recently published a price list for zero-day exploits, along with the story of a hacker who received $250K from “a U.S. government contractor.” (At first I didn’t believe the story or the price list, but I have been convinced that they both are true.) Forbes also published a profile of a company called Vupen, whose business is selling zero-day exploits. Other companies doing this range from startups like Netragard and Endgame to large defense contractors like Northrop Grumman, General Dynamics, and Raytheon.

This is very different from 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits, and from 2010, when a survey implied that there wasn’t much money in selling zero days. The market has matured substantially in the past few years.

This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full-disclosure—announcing the vulnerability publicly and damn the consequences—to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that—at least in most cases—is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on—and then secretly sell them to some government agency.

No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

Even more importantly, the new market for security vulnerabilities results in a variety of government agencies around the world that have a strong interest in those vulnerabilities remaining unpatched. These range from law-enforcement agencies (like the FBI and the German police) who are trying to build targeted Internet surveillance tools, to intelligence agencies (like the NSA) who are trying to build mass Internet surveillance tools, to military organizations who are trying to build cyber-weapons.

All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the “equities issue,” and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone’s communications more secure, or they could exploit the flaw to eavesdrop on others—while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I’ve heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software. It’s not just that they patch the vulnerabilities that are made public—the fear of bad press makes them implement more secure software development processes. It’s another economic process; the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. I’d be the first to admit that this isn’t perfect—there’s a lot of very poorly written software still out there—but it’s the best incentive we have.

We’ve always expected the NSA, and those like them, to keep the vulnerabilities they discover secret. We have been counting on the public community to find and publicize vulnerabilities, forcing vendors to fix them. With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive for software vendors to ensure the security of their products.

As the incentive for hackers to keep their vulnerabilities secret grows, the incentive for vendors to build secure software shrinks. As a recent EFF essay put it, this is “security for the 1%.” And it makes the rest of us less safe.

This essay previously appeared on Forbes.com.

Edited to add (6/6): Brazilian Portuguese translation here.

EDITED TO ADD (6/12): This presentation makes similar points as my essay.

Posted on June 1, 2012 at 6:48 AM
