Entries Tagged "advanced persistent threats"


Lessons from the Sony Hack

Earlier this month, a mysterious group that calls itself Guardians of Peace hacked into Sony Pictures Entertainment’s computer systems and began revealing many of the Hollywood studio’s best-kept secrets, from details about unreleased movies to embarrassing emails (notably some racist notes from Sony bigwigs about President Barack Obama’s presumed movie-watching preferences) to the personnel data of employees, including salaries and performance reviews. The Federal Bureau of Investigation now says it has evidence that North Korea was behind the attack, and Sony Pictures pulled its planned release of “The Interview,” a satire targeting that country’s dictator, after the hackers made some ridiculous threats about terrorist violence.

Your reaction to the massive hacking of such a prominent company will depend on whether you’re fluent in information-technology security. If you’re not, you’re probably wondering how in the world this could happen. If you are, you’re aware that this could happen to any company (though it is still amazing that Sony made it so easy).

To understand any given episode of hacking, you need to understand who your adversary is. I’ve spent decades dealing with Internet hackers (as I do now at my current firm), and I’ve learned to separate opportunistic attacks from targeted ones.

You can characterize attackers along two axes: skill and focus. Most attacks are low-skill and low-focus—people using common hacking tools against thousands of networks world-wide. These low-end attacks include sending spam out to millions of email addresses, hoping that someone will fall for it and click on a poisoned link. I think of them as the background radiation of the Internet.

High-skill, low-focus attacks are more serious. These include the more sophisticated attacks using newly discovered “zero-day” vulnerabilities in software, systems and networks. This is the sort of attack that affected Target, J.P. Morgan Chase and most of the other commercial networks that you’ve heard about in the past year or so.

But even scarier are the high-skill, high-focus attacks—the type that hit Sony. This includes sophisticated attacks seemingly run by national intelligence agencies, using such spying tools as Regin and Flame, which many in the IT world suspect were created by the U.S.; Turla, a piece of malware that many blame on the Russian government; and a huge snooping effort called GhostNet, which spied on the Dalai Lama and Asian governments, leading many of my colleagues to blame China. (We’re mostly guessing about the origins of these attacks; governments refuse to comment on such issues.) China has also been accused of trying to hack into the New York Times in 2010, and in May, Attorney General Eric Holder announced the indictment of five Chinese military officials for cyberattacks against U.S. corporations.

This category also includes private actors, including the hacker group known as Anonymous, which mounted a Sony-style attack against the Internet-security firm HBGary Federal, and the unknown hackers who stole racy celebrity photos from Apple’s iCloud and posted them. If you’ve heard the IT-security buzz phrase “advanced persistent threat,” this is it.

There is a key difference among these kinds of hacking. In the first two categories, the attacker is an opportunist. The hackers who penetrated Home Depot’s networks didn’t seem to care much about Home Depot; they just wanted a large database of credit-card numbers. Any large retailer would do.

But a skilled, determined attacker wants to attack a specific victim. The reasons may be political: to hurt a government or leader enmeshed in a geopolitical battle. Or ethical: to punish an industry that the hacker abhors, like big oil or big pharma. Or maybe the victim is just a company that hackers love to hate. (Sony falls into this category: It has been infuriating hackers since 2005, when the company put malicious software on its CDs in a failed attempt to prevent copying.)

Low-focus attacks are easier to defend against: If Home Depot’s systems had been better protected, the hackers would have just moved on to an easier target. With attackers who are highly skilled and highly focused, however, what matters is whether a targeted company’s security is superior to the attacker’s skills, not just to the security measures of other companies. Often, it isn’t. We’re much better at such relative security than we are at absolute security.

That is why security experts aren’t surprised by the Sony story. We know people who do penetration testing for a living—real, no-holds-barred attacks that mimic a full-on assault by a dogged, expert attacker—and we know that the expert always gets in. Against a sufficiently skilled, funded and motivated attacker, all networks are vulnerable. But good security makes many kinds of attack harder, costlier and riskier. Against attackers who aren’t sufficiently skilled, good security may protect you completely.

It is hard to put a dollar value on security that is strong enough to assure you that your embarrassing emails and personnel information won’t end up posted online somewhere, but Sony clearly failed here. Its security turned out to be subpar. It didn’t have to leave so much information exposed, and it didn’t have to be so slow to detect the breach, giving the attackers free rein to wander about and take so much stuff.

For those worried that what happened to Sony could happen to you, I have two pieces of advice. The first is for organizations: take this stuff seriously. Security is a combination of protection, detection and response. You need prevention to defend against low-focus attacks and to make targeted attacks harder. You need detection to spot the attackers who inevitably get through. And you need response to minimize the damage, restore security and manage the fallout.

The time to start is before the attack hits: Sony would have fared much better if its executives simply hadn’t made racist jokes about Mr. Obama or insulted its stars—or if their response systems had been agile enough to kick the hackers out before they grabbed everything.

My second piece of advice is for individuals. The worst invasion of privacy from the Sony hack didn’t happen to the executives or the stars; it happened to the blameless random employees who were just using their company’s email system. Because of that, they’ve had their most personal conversations—gossip, medical conditions, love lives—exposed. The press may not have divulged this information, but their friends and relatives peeked at it. Hundreds of personal tragedies must be unfolding right now.

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use cloud services such as iCloud and Google Docs.

So be smart: Understand the risks. Know that your data are vulnerable. Opt out when you can. And agitate for government intervention to ensure that organizations protect your data as well as you would. Like many areas of our hyper-technical world, this isn’t something markets can fix.

This essay previously appeared on the Wall Street Journal CIO Journal.

EDITED TO ADD (12/21): Slashdot thread.

EDITED TO ADD (1/14): Sony has had more than 50 security breaches in the past fifteen years.

Posted on December 19, 2014 at 12:44 PM

On Securing Potentially Dangerous Virology Research

Abstract: The problem of securing biological research data is a difficult and complicated one. Our ability to secure data on computers is not robust enough to ensure the security of existing data sets. Lessons from cryptography illustrate that neither secrecy measures, such as deleting technical details, nor national solutions, such as export controls, will work.

Science and Nature have each published papers on the H5N1 virus in humans after considerable debate about whether the research results in those papers could help terrorists create a bioweapon. This notion of “dual use” research is an important one for the community, and one that will sooner or later become critical. Perhaps these two papers are not dangerous in the wrong hands, but eventually there will be research results that are.

My background is in cryptography and computer security. I cannot comment on the potential value or harm from any particular piece of biological research, but I can discuss what works and what does not to keep research data secure. The cryptography and computer security communities have been wrestling for decades now with dual-use research: for example, whether to publish new Windows (Microsoft Corporation) vulnerabilities that can be immediately used to attack computers but whose publication helps us make the operating system more secure in the long run. From this experience, I offer five points to the virology community.

First, security based on secrecy is inherently fragile. The more secrets a system has, the less secure it is. A door lock that has a secret but unchangeable locking mechanism is less secure than a commercially purchased door lock with an easily changeable key. In cryptography, this is known as Kerckhoffs’ principle: Put all your secrecy into the key and none into the cryptographic algorithm. The key is unique and easily changeable; the algorithm is system-wide and much more likely to become public. In fact, algorithms are deliberately published so that they get analyzed broadly. The lesson for dual-use virology research is that it is risky to base your security on keeping research secret. Militaries spend an enormous amount of money trying to maintain secret research laboratories, and even they do not always get security right. Once secret data become public, there is no way to go back.
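
To make Kerckhoffs’ principle concrete, here is a minimal sketch (my illustration, not part of the original essay) using only Python’s standard library. The algorithm, HMAC-SHA256, is public and fixed; all of the secrecy lives in the key, which is cheap to generate and cheap to replace if it leaks.

```python
# Minimal sketch of Kerckhoffs' principle: the algorithm (HMAC-SHA256) is
# published and shared by everyone; the only secret is the key, and rotating
# the key is trivial.
import hashlib
import hmac
import secrets

def make_key() -> bytes:
    """Generate a fresh random key; replacing a leaked key costs almost nothing."""
    return secrets.token_bytes(32)

def tag(key: bytes, message: bytes) -> bytes:
    """Authenticate a message with a public algorithm and a secret key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, candidate: bytes) -> bool:
    """Constant-time check that the tag matches."""
    return hmac.compare_digest(tag(key, message), candidate)

key = make_key()
t = tag(key, b"research data")
assert verify(key, b"research data", t)

# If the key leaks, security is restored by generating a new one.
# If the algorithm itself were the secret, recovery would mean redesigning
# and redeploying the entire system.
key = make_key()
```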

Second, omitting technical details from published research is a poor security measure. We tried this in computer security with regard to vulnerabilities, announcing general information but not publishing specifics. The problem is that once the general information is announced, it is much easier for another researcher to replicate the results and generate the details. This is probably even more true in virology research than in computer security research, where the very existence of a result can provide much of the road map to that result.

Third, technical difficulty as a security measure has only short-term value. Technology only gets better; it never gets worse. To believe that some research cannot be replicated by amateurs because it requires equipment only available to state-of-the-art research institutions is short-sighted at best. What is impossible today will be a Ph.D. thesis in 20 years, and what was a Ph.D. thesis 20 years ago is a high-school science fair project today.

Fourth, securing research data in computer networks is risky at best. If you read newspapers, you know the current state of the art in computer security: Everything gets hacked. Cyber criminals steal money from banks. Cyber spies steal data from military computers. Although people talk about H5N1 research in terms of securing the research papers, that is largely a red herring; even if no papers existed, the research data would still be on a network-connected computer somewhere.

Not all computers are hacked and not all data gets stolen, but the risks are there. There are two basic types of threats in cyberspace. There are the opportunists: for example, criminals who want to break into a retail merchant’s system and steal a thousand credit card numbers. Against these attackers, relative security is what matters. Because the criminals do not care whom they attack, you are safe if you are more secure than other networks. The other type of threat is a targeted attack. These are attackers who, for whatever reason, want to attack a particular network. The buzzword in Internet security for this is “advanced persistent threat.” It is almost impossible to secure a network against a sufficiently skilled and tenacious adversary. All we can do is make the attacker’s job harder.

This does not mean that all virology data will be stolen via computer networks, but it does mean that, once the existence of that data becomes public knowledge, you should assume that the bad guys will be able to get their hands on it.

Lastly, national measures that prohibit publication will not work in an international community, especially in the Internet age. If either Science or Nature had refused to publish the H5N1 papers, they would have been published somewhere else. Even if some countries stop funding—or ban—this sort of research, it will still happen in another country.

The U.S. cryptography community saw this in the 1970s and early 1980s. At that time, the National Security Agency (NSA) controlled cryptography research, which included denying funding for research, classifying results after the fact, and using export-control laws to limit what ended up in products. This was the pre-Internet world, and it worked for a while. In the 1980s the NSA gave up on classifying research, because an international community arose. The limited ability of U.S. researchers to get funding for block-cipher cryptanalysis merely moved that research to Europe and Asia. The NSA continued to limit the spread of cryptography via export-control laws; the U.S.-centric nature of the computer industry meant that this was effective. In the 1990s it gave up on controlling software because the international online community became mainstream; this period was called “the Crypto Wars.” Export-control laws did prevent Microsoft from embedding cryptography into Windows for over a decade, but they did nothing to prevent products made in other countries from filling the market gaps.

Today, there are no restrictions on cryptography, and many U.S. government standards are the result of public international competitions. Right now the National Institute of Standards and Technology is working on a new Secure Hash Algorithm standard. When it is announced next year, it will be the product of a public call for algorithms that resulted in 64 submissions from over a dozen countries and then years of international analysis. The practical effects of unrestricted research are seen in the computer security you use today: on your computer, as you browse the Internet and engage in commerce, and on your cell phone and other smart devices. Sure, the bad guys make use of this research, too, but the beneficial uses far outweigh the malicious ones.
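
As a small aside (my illustration, not part of the original essay), the practical payoff of those open processes is visible in ordinary software: the hash standards that emerged from NIST’s public reviews and competitions ship in Python’s standard library, among many other places.

```python
# The SHA-2 family (from an earlier NIST standard) and SHA-3 (the winner of
# the open competition described above) are both available out of the box.
import hashlib

message = b"publicly vetted algorithms in everyday software"

print(hashlib.sha256(message).hexdigest())    # SHA-2
print(hashlib.sha3_256(message).hexdigest())  # SHA-3
```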

The computer security community has also had to wrestle with these dual-use issues. In the early days of public computing, researchers who discovered vulnerabilities would quietly tell the product vendors so as not to also alert hackers. But all too often, the vendors would ignore the researchers. Because the vulnerability was not public, there was no urgency to fix it. Fixes might go into the next product release. Researchers, tired of this, started publishing the existence of vulnerabilities but not the details. Vendors, in response, tried to muzzle the researchers. They threatened them with lawsuits and belittled them in the press, calling the vulnerabilities only theoretical and not practical. The response from the researchers was predictable: They started publishing full details, and sometimes even code, demonstrating the vulnerabilities they found. This was called “full disclosure” and is the primary reason vendors now patch vulnerabilities quickly. Faced with published vulnerabilities that they could not pretend did not exist and that hackers could use, vendors started building internal procedures to issue patches quickly. If you use Microsoft Windows, you know about “patch Tuesday,” the once-a-month automatic download and installation of security patches.

Once vendors started taking security patches seriously, the research community (university researchers, security consultants, and informal hackers) moved to something called “responsible disclosure.” Now it is common for researchers to alert vendors before publication, giving them a month or two of head start to release a security patch. But without the threat of full disclosure, responsible disclosure would not work, and vendors would go back to ignoring security vulnerabilities.

Could a similar process work for viruses? That is, could the researchers who create dangerous viruses work in concert with the people who develop vaccines, so that vaccines become available at the same time as the original results are released? Certainly this is not easy in practice, but perhaps it is a goal to work toward.

Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once this sort of research slows down, the increasing ignorance hurts us all.

On the other hand, the current vibrant fields of cryptography and computer security are a direct result of our willingness to publish methods of attack. Making and breaking systems are one and the same; you cannot learn one without the other. (Some universities even offer classes in computer virus writing.) Cryptography is better, and computers and networks are more secure, because our communities openly publish details on how to attack systems.

Virology is not computer science. A biological virus is not the same as a computer virus. A vulnerability that affects every individual copy of Windows is not as bad as a vulnerability that affects every individual person. Still, the lessons from computer security are valuable to anyone considering policies intended to encourage life-saving research in virology while at the same time preventing that research from being used to cause harm. This debate will not go away; it will only get more urgent.

This essay was originally published in Science.

EDITED TO ADD (7/14): Related article: “What Biology Can Learn from Infosec.”

Posted on June 29, 2012 at 6:35 AM

Advanced Persistent Threat (APT)

It’s taken me a few years, but I’ve come around to this buzzword. It highlights an important characteristic of a particular sort of Internet attacker.

A conventional hacker or criminal isn’t interested in any particular target. He wants a thousand credit card numbers for fraud, or to break into an account and turn it into a zombie, or whatever. Security against this sort of attacker is relative; as long as you’re more secure than almost everyone else, the attackers will go after other people, not you. An APT is different; it’s an attacker who—for whatever reason—wants to attack you. Against this sort of attacker, the absolute level of your security is what’s important. It doesn’t matter how secure you are compared to your peers; all that matters is whether you’re secure enough to keep him out.

APT attackers are more highly motivated. They’re likely to be better skilled, better funded, and more patient. They’re likely to try several different avenues of attack. And they’re much more likely to succeed.

This is why APT is a useful buzzword.

Posted on November 9, 2011 at 1:51 PM

Full Extent of the Attack that Compromised RSA in March

Brian Krebs has done the analysis; it’s something like 760 companies that were compromised.

Among the more interesting names on the list are Abbott Labs, the Alabama Supercomputer Network, Charles Schwab & Co., Cisco Systems, eBay, the European Space Agency, Facebook, Freddie Mac, Google, the General Services Administration, the Inter-American Development Bank, IBM, Intel Corp., the Internal Revenue Service (IRS), the Massachusetts Institute of Technology, Motorola Inc., Northrop Grumman, Novell, Perot Systems, PriceWaterhouseCoopers LLP, Research in Motion (RIM) Ltd., Seagate Technology, Thomson Financial, Unisys Corp., USAA, Verisign, VMWare, Wachovia Corp., and Wells Fargo & Co.

News article.

Posted on October 28, 2011 at 3:21 PM

RSA Security, Inc. Hacked

The company, not the algorithm. Here’s the corporate spin.

Our investigation has led us to believe that the attack is in the category of an Advanced Persistent Threat (APT). Our investigation also revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. We are very actively communicating this situation to RSA customers and providing immediate steps for them to take to strengthen their SecurID implementations.

Here are news articles. The worry is that source code to the company’s SecurID two-factor authentication product was stolen, which would possibly allow hackers to reverse-engineer or otherwise break the system. It’s hard to make any assessments about whether this is possible or likely without knowing 1) how SecurID’s cryptography works, and 2) exactly what was stolen from the company’s servers. We do not know either, and the corporate spin is as short on details as it is long on reassurances.
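
For readers unfamiliar with how seed-based tokens work in general, here is a hedged sketch. RSA has never published SecurID’s algorithm, so the code below uses the openly specified TOTP scheme (RFC 6238) instead; the design is similar in spirit, and it shows why stolen seed material matters: whoever holds a token’s seed can compute exactly the codes that token will display.

```python
# Sketch of a generic seed-based one-time-password token using the public TOTP
# scheme (RFC 6238). This is NOT RSA's proprietary SecurID algorithm; it only
# illustrates why a copied seed database defeats this kind of two-factor token.
import hashlib
import hmac
import struct
import time

def totp(seed: bytes, timestep: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret seed."""
    counter = int(time.time() // timestep)            # both sides agree on the time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical seed for illustration only. The authentication server and the
# hardware token share it, so an attacker who copies it sees the same codes.
stolen_seed = b"hypothetical-seed-material"
print(totp(stolen_seed))
```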

RSA Data Security, Inc. is probably pretty screwed if SecurID is compromised. Those hardware tokens have no upgrade path and would have to be replaced. How many of the company’s customers will replace them with competitors’ tokens? Probably a bunch. Hence, it’s in RSA’s best interest for its customers to forget this incident as quickly as possible.

There seem to be two likely scenarios if the attackers have compromised SecurID. One, they are a sophisticated organization that wants the information for a specific purpose. In that case, the attackers are actually on RSA’s side in the public-relations spin, and we’re unlikely to see widespread use of this information. Or two, they stole the stuff for conventional criminal purposes and will sell it. In that case, we’re likely to know pretty quickly.

Again, without detailed information—or at least an impartial assessment—it’s impossible to make any recommendations. Security is all about trust, and when trust is lost there is no security. Users of SecurID trusted RSA Data Security, Inc. to protect the secrets necessary to secure that system. To the extent that it did not, the company has lost its customers’ trust.

Posted on March 21, 2011 at 6:52 AM

HBGary and the Future of the IT Security Industry

This is a really good piece by Paul Roberts on Anonymous vs. HBGary: not the tactics or the politics, but what HBGary demonstrates about the IT security industry.

But I think the real lesson of the hack – and of the revelations that followed it – is that the IT security industry, having finally gotten the attention of law makers, Pentagon generals and public policy establishment wonks in the Beltway, is now in mortal danger of losing its soul. We’ve convinced the world that the threat is real – omnipresent and omnipotent. But in our desire to combat it, we are becoming indistinguishable from the folks with the black hats.

[…]

…While “scare ’em and snare ’em” may be business as usual in the IT security industry, other HBGary Federal skunk works projects clearly crossed a line: a proposal for a major U.S. bank, allegedly Bank of America, to launch offensive cyber attacks on the servers that host the whistle blower site Wikileaks. HBGary was part of a triumvirate of firms that also included Palantir Inc and Berico Technologies, that was working with the law firm of the U.S. Chamber of Commerce to develop plans to target progressive groups, labor unions and other left-leaning non profits who the Chamber opposed with a campaign of false information and entrapment. Other leaked e-mail messages reveal work with General Dynamics and a host of other firms to develop custom, stealth malware and collaborations with other firms selling offensive cyber capabilities including knowledge of previously undiscovered (“zero day”) vulnerabilities.

[…]

What’s more disturbing is the way that the folks at HBGary – mostly Aaron Barr, but others as well – came to view the infowar tactics they were pitching to the military and its contractors as applicable in the civilian context, as well. How effortlessly and seamlessly the focus on “advanced persistent threats” shifted from government backed hackers in China and Russia to encompass political foes like ThinkProgress or the columnist Glenn Greenwald. Anonymous may have committed crimes that demand punishment – but its up to the FBI to handle that, not “a large U.S. bank” or its attorneys.

Read the whole thing.

Posted on February 25, 2011 at 6:14 AM

More Details on the Chinese Attack Against Google

Three weeks ago, Google announced that it had been the target of a sophisticated attack from China. There have been some interesting technical details since then. And the NSA is helping Google analyze the attack.

The rumor that China used a system Google put in place to enable lawful intercepts, which I used as a news hook for this essay, has not been confirmed. At this point, I doubt that it’s true.

EDITED TO ADD (2/12): Good article.

Posted on February 8, 2010 at 6:03 AM
