Entries Tagged "Sony"


Reacting to the Sony Hack

First we thought North Korea was behind the Sony cyberattacks. Then we thought it was a couple of hacker guys with an axe to grind. Now we think North Korea is behind it again, but the connection is still tenuous. There have been accusations of cyberterrorism, and even cyberwar. I’ve heard calls for us to strike back, with actual missiles and bombs. We’re collectively pegging the hype meter, and the best thing we can do is calm down and take a deep breath.

First, this is not an act of terrorism. There has been no senseless violence. No innocents are coming home in body bags. Yes, a company is seriously embarrassed—and financially hurt—by all of its information leaking to the public. But posting unreleased movies online is not terrorism. It’s not even close.

Nor is this an act of war. Stealing and publishing a company’s proprietary information is not an act of war. We wouldn’t be talking about going to war if someone snuck in and photocopied everything, and it makes equally little sense to talk about it when someone does it over the internet. The threshold of war is much, much higher, and we’re not going to respond to this militarily. Over the years, North Korea has performed far more aggressive acts against US and South Korean soldiers. We didn’t go to war then, and we’re not going to war now.

Finally, we don’t know these attacks were sanctioned by the North Korean government. The US government has made statements linking the attacks to North Korea, but hasn’t officially blamed the government, nor have officials provided any evidence of the linkage. We knew about North Korea’s cyberattack capabilities long before this attack, but the attacker might not be the government at all. This wouldn’t be the first time a nationalistic cyberattack was launched without government sanction. We have lots of examples of these sorts of attacks being conducted by regular hackers with nationalistic pride. Kids playing politics, I call them. This may be that, or it could be a random hacker who just has it out for Sony.

Remember, the hackers didn’t start talking about The Interview until the press did. Maybe the NSA has some secret information pinning this attack on the North Korean government, but unless the agency comes forward with the evidence, we should remain skeptical. We don’t know who did this, and we may never find out. I personally think it was a disgruntled ex-employee, but I don’t have any more evidence than anyone else does.

What we have is a very extreme case of hacking. By “extreme” I mean the quantity of the information stolen from Sony’s networks, not the quality of the attack. The attackers seem to have been good, but no more than that. Sony made its situation worse by having substandard security.

Sony’s reaction has all the markings of a company without any sort of coherent plan. Near as I can tell, every Sony executive is in full panic mode. They’re certainly facing dozens of lawsuits: from shareholders, from companies that invested in those movies, from employees who had their medical and financial data exposed, from everyone who was affected. They’re probably facing government fines, for leaking financial and medical information, and possibly for colluding with other studios to attack Google.

If previous major hacks are any guide, there will be multiple senior executives fired over this; everyone at Sony is probably scared for their jobs. In this sort of situation, the interests of the corporation are not the same as the interests of the people running the corporation. This might go a long way to explain some of the reactions we’ve seen.

Pulling The Interview was exactly the wrong thing to do, as there was no credible threat and it just emboldens the hackers. But it’s the kind of response you get when you don’t have a plan.

Politically motivated hacking isn’t new, and the Sony hack is not unprecedented. In 2011 the hacker group Anonymous did something similar to the internet-security company HBGary Federal, exposing corporate secrets and internal emails. This sort of thing has been possible for decades, although it’s gotten increasingly damaging as more corporate information goes online. It will happen again; there’s no doubt about that.

But it hasn’t happened very often, and that’s not likely to change. Most hackers are garden-variety criminals, less interested in internal emails and corporate secrets and more interested in personal information and credit card numbers that they can monetize. Their attacks are opportunistic, and very different from the targeted attack Sony fell victim to.

When a hacker releases personal data on an individual, it’s called doxing. We don’t have a name for it when it happens to a company, but it’s what happened to Sony. Companies need to wake up to the possibility that a whistleblower, a civic-minded hacker, or just someone who is out to embarrass them will hack their networks and publish their proprietary data. They need to recognize that their chatty private emails and their internal memos might be front-page news.

In a world where everything happens online, including what we think of as ephemeral conversation, everything is potentially subject to public scrutiny. Companies need to make sure their computer and network security is up to snuff, and their incident response and crisis management plans can handle this sort of thing. But they should also remember how rare this sort of attack is, and not panic.

This essay previously appeared on Vice Motherboard.

EDITED TO ADD (12/25): Reddit thread.

Posted on December 22, 2014 at 6:08 AM

Lessons from the Sony Hack

Earlier this month, a mysterious group that calls itself Guardians of Peace hacked into Sony Pictures Entertainment’s computer systems and began revealing many of the Hollywood studio’s best-kept secrets, from details about unreleased movies to embarrassing emails (notably some racist notes from Sony bigwigs about President Barack Obama’s presumed movie-watching preferences) to the personnel data of employees, including salaries and performance reviews. The Federal Bureau of Investigation now says it has evidence that North Korea was behind the attack, and Sony Pictures pulled its planned release of “The Interview,” a satire targeting that country’s dictator, after the hackers made some ridiculous threats about terrorist violence.

Your reaction to the massive hacking of such a prominent company will depend on whether you’re fluent in information-technology security. If you’re not, you’re probably wondering how in the world this could happen. If you are, you’re aware that this could happen to any company (though it is still amazing that Sony made it so easy).

To understand any given episode of hacking, you need to understand who your adversary is. I’ve spent decades dealing with Internet hackers (as I do now at my current firm), and I’ve learned to separate opportunistic attacks from targeted ones.

You can characterize attackers along two axes: skill and focus. Most attacks are low-skill and low-focus—people using common hacking tools against thousands of networks world-wide. These low-end attacks include sending spam out to millions of email addresses, hoping that someone will fall for it and click on a poisoned link. I think of them as the background radiation of the Internet.

High-skill, low-focus attacks are more serious. These include the more sophisticated attacks using newly discovered “zero-day” vulnerabilities in software, systems and networks. This is the sort of attack that affected Target, J.P. Morgan Chase and most of the other commercial networks that you’ve heard about in the past year or so.

But even scarier are the high-skill, high-focus attacks—the type that hit Sony. This includes sophisticated attacks seemingly run by national intelligence agencies, using such spying tools as Regin and Flame, which many in the IT world suspect were created by the U.S.; Turla, a piece of malware that many blame on the Russian government; and a huge snooping effort called GhostNet, which spied on the Dalai Lama and Asian governments, leading many of my colleagues to blame China. (We’re mostly guessing about the origins of these attacks; governments refuse to comment on such issues.) China has also been accused of trying to hack into the New York Times in 2010, and in May, Attorney General Eric Holder announced the indictment of five Chinese military officials for cyberattacks against U.S. corporations.

This category also includes private actors, including the hacker group known as Anonymous, which mounted a Sony-style attack against the Internet-security firm HBGary Federal, and the unknown hackers who stole racy celebrity photos from Apple’s iCloud and posted them. If you’ve heard the IT-security buzz phrase “advanced persistent threat,” this is it.
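The two-axis taxonomy in the last few paragraphs can be sketched as a toy classifier. The quadrant labels follow the essay; the numeric scales and the 0.5 thresholds are invented purely for illustration:

```python
# Toy illustration of the skill/focus attacker taxonomy described above.
# Quadrant names follow the essay; the numeric scales are made up.

def classify_attacker(skill: float, focus: float) -> str:
    """Map an attacker's skill and focus (each 0.0-1.0) to a quadrant."""
    high_skill = skill >= 0.5
    high_focus = focus >= 0.5
    if not high_skill and not high_focus:
        return "background radiation"        # mass spam, commodity tools
    if high_skill and not high_focus:
        return "opportunistic zero-day"      # Target, J.P. Morgan Chase
    if high_skill and high_focus:
        return "advanced persistent threat"  # Sony, HBGary Federal
    return "low-skill targeted"              # determined but unsophisticated

print(classify_attacker(0.9, 0.9))  # advanced persistent threat
```

The point of the model isn’t the numbers, of course; it’s that defenses that work against the first two quadrants say little about the third.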

There is a key difference among these kinds of hacking. In the first two categories, the attacker is an opportunist. The hackers who penetrated Home Depot’s networks didn’t seem to care much about Home Depot; they just wanted a large database of credit-card numbers. Any large retailer would do.

But a skilled, determined attacker wants to attack a specific victim. The reasons may be political: to hurt a government or leader enmeshed in a geopolitical battle. Or ethical: to punish an industry that the hacker abhors, like big oil or big pharma. Or maybe the victim is just a company that hackers love to hate. (Sony falls into this category: It has been infuriating hackers since 2005, when the company put malicious software on its CDs in a failed attempt to prevent copying.)

Low-focus attacks are easier to defend against: If Home Depot’s systems had been better protected, the hackers would have just moved on to an easier target. With attackers who are highly skilled and highly focused, however, what matters is whether a targeted company’s security is superior to the attacker’s skills, not just to the security measures of other companies. Often, it isn’t. We’re much better at relative security than we are at absolute security.

That is why security experts aren’t surprised by the Sony story. We know people who do penetration testing for a living—real, no-holds-barred attacks that mimic a full-on assault by a dogged, expert attacker—and we know that the expert always gets in. Against a sufficiently skilled, funded and motivated attacker, all networks are vulnerable. But good security makes many kinds of attack harder, costlier and riskier. Against attackers who aren’t sufficiently skilled, good security may protect you completely.

It is hard to put a dollar value on security that is strong enough to assure you that your embarrassing emails and personnel information won’t end up posted online somewhere, but Sony clearly failed here. Its security turned out to be subpar. It didn’t have to leave so much information exposed. And it didn’t have to be so slow to detect the breach, giving the attackers free rein to wander about and take so much stuff.

For those worried that what happened to Sony could happen to you, I have two pieces of advice. The first is for organizations: take this stuff seriously. Security is a combination of protection, detection and response. You need prevention to defend against low-focus attacks and to make targeted attacks harder. You need detection to spot the attackers who inevitably get through. And you need response to minimize the damage, restore security and manage the fallout.
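That protection/detection/response layering can be sketched minimally as follows; every function name, event string, and threshold here is invented for illustration, not taken from any real product:

```python
# Hypothetical sketch of the protection / detection / response layers
# described above. All names, events, and thresholds are illustrative.

def protect(attack_skill: int, defense_level: int) -> bool:
    """Prevention: a low-focus attacker bounces off adequate defenses."""
    return attack_skill <= defense_level

def detect(events: list) -> bool:
    """Detection: spot the attackers who inevitably get through."""
    return any(e.startswith("anomaly:") for e in events)

def respond(breached: bool) -> str:
    """Response: minimize damage, restore security, manage the fallout."""
    return "contain, remediate, notify" if breached else "no action"

# A targeted, high-skill attacker gets past prevention...
blocked = protect(attack_skill=9, defense_level=5)
# ...so detection and response have to carry the load.
breached = (not blocked) and detect(["login ok", "anomaly: bulk file export"])
print(respond(breached))  # contain, remediate, notify
```

The structure mirrors the essay’s argument: prevention alone is enough against the low-focus background noise, but against a focused attacker the detection and response layers do the real work.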

The time to start is before the attack hits: Sony would have fared much better if its executives simply hadn’t made racist jokes about Mr. Obama or insulted its stars—or if its response systems had been agile enough to kick the hackers out before they grabbed everything.

My second piece of advice is for individuals. The worst invasion of privacy from the Sony hack didn’t happen to the executives or the stars; it happened to the blameless random employees who were just using their company’s email system. Because of that, they’ve had their most personal conversations—gossip, medical conditions, love lives—exposed. The press may not have divulged this information, but their friends and relatives peeked at it. Hundreds of personal tragedies must be unfolding right now.

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use cloud services such as iCloud and Google Docs.

So be smart: Understand the risks. Know that your data are vulnerable. Opt out when you can. And agitate for government intervention to ensure that organizations protect your data as well as you would. Like many areas of our hyper-technical world, this isn’t something markets can fix.

This essay previously appeared on the Wall Street Journal CIO Journal.

EDITED TO ADD (12/21): Slashdot thread.

EDITED TO ADD (1/14): Sony has had more than 50 security breaches in the past fifteen years.

Posted on December 19, 2014 at 12:44 PM

Comments on the Sony Hack

I don’t have a lot to say about the Sony hack, which seems to still be ongoing. I want to highlight a few points, though.

  1. At this point, the attacks seem to be the work of a few hackers and not the North Korean government. (My guess is that it’s not an insider, either.) That we live in a world where we aren’t sure whether any given cyberattack is the work of a foreign government or a couple of guys should be scary to us all.
  2. Sony is a company that hackers have loved to hate for years now. (Remember their rootkit from 2005?) We’ve learned previously that putting yourself in this position can be disastrous. (Remember HBGary?) We’re learning that again.
  3. I don’t see how Sony launching a DDoS attack against the attackers is going to help at all.
  4. The most sensitive information that’s being leaked as a result of this attack isn’t the unreleased movies, the executive emails, or the celebrity gossip. It’s the minutiae from random employees:

    The most painful stuff in the Sony cache is a doctor shopping for Ritalin. It’s an email about trying to get pregnant. It’s shit-talking coworkers behind their backs, and people’s credit card log-ins. It’s literally thousands of Social Security numbers laid bare. It’s even the harmless, mundane, trivial stuff that makes up any day’s email load that suddenly feels ugly and raw out in the open, a digital Babadook brought to life by a scorched earth cyberattack.

    These people didn’t have anything to hide. They aren’t public figures. Their details aren’t going to be news anywhere in the world. But their privacy has been violated, and there are literally thousands of personal tragedies unfolding right now as these people deal with their friends and relatives who have searched and read this stuff.

    These are people who did nothing wrong. They didn’t click on phishing links, or use dumb passwords (or even if they did, they didn’t cause this). They just showed up. They sent the same banal workplace emails you send every day, some personal, some not, some thoughtful, some dumb. Even if they didn’t have the expectation of full privacy, at most they may have assumed that an IT creeper might flip through their inbox, or that it was being crunched in an NSA server somewhere. For better or worse, we’ve become inured to small, anonymous violations. What happened to Sony Pictures employees, though, is public. And it is total.

    Gizmodo got this 100% correct. And this is why privacy is so important for everyone.

I’m sure there’ll be more information as this continues to unfold.

EDITED TO ADD (12/12): There are two comment threads on this post: Reddit and Hacker News.

Posted on December 11, 2014 at 2:37 PM

Not Enough CISOs to Go Around

This article is reporting that the demand for Chief Information Security Officers far exceeds supply:

Sony and every other company that realizes the need for a strong, senior-level security officer are scrambling to find talent, said Kris Lovejoy, general manager of IBM’s security service and former IBM chief security officer.

CISOs are “almost impossible to find these days,” she said. “It’s a bit like musical chairs; there’s a finite number of CISOs and they tend to go from job to job in similar industries.”

I’m not surprised, really. This is a tough job: never enough budget, and you’re the one blamed when the inevitable attacks occur. And it’s a tough skill set: enough technical ability to understand cybersecurity, and sufficient management skill to navigate senior management. I would never want a job like that in a million years.

If you want to make your CISO happy, here’s her holiday wish list.

“My first wish is for companies to thoroughly test software releases before release to customers….”

Can we get that gift wrapped?

Posted on December 11, 2014 at 6:31 AM

Interview with Me About the Sony Hack

This is what I get for giving interviews when I’m in a bad mood. For the record, I think Sony did a terrible job with its customers’ security. I also think that most companies do a terrible job with their customers’ security, simply because there isn’t a financial incentive to do better. And that most of us are pretty secure, despite that.

One of my biggest complaints with these stories is how little actual information we have. We often don’t know if any data was actually stolen, only that hackers had access to it. We rarely know how the data was accessed: what sort of vulnerability was used by the hackers. We rarely know the motivations of the hackers: were they criminals, spies, kids, or someone else? We rarely know if the data is actually used for any nefarious purposes; it’s generally impossible to connect a data breach with a corresponding fraud incident. Given all of that, it’s impossible to say anything useful or definitive about the attack. But the press always wants definitive statements.

Posted on May 13, 2011 at 11:29 AM

Who Owns Your Computer?

When technology serves its owners, it is liberating. When it is designed to serve others, over the owner’s objection, it is oppressive. There’s a battle raging on your computer right now—one that pits you against worms and viruses, Trojans, spyware, automatic update features and digital rights management technologies. It’s the battle to determine who owns your computer.

You own your computer, of course. You bought it. You paid for it. But how much control do you really have over what happens on your machine? Technically you might have bought the hardware and software, but you have less control over what it’s doing behind the scenes.

Using the hacker sense of the term, your computer is “owned” by other people.

It used to be that only malicious hackers were trying to own your computers. Whether through worms, viruses, Trojans or other means, they would try to install some kind of remote-control program onto your system. Then they’d use your computers to sniff passwords, make fraudulent bank transactions, send spam, initiate phishing attacks and so on. Estimates are that somewhere between hundreds of thousands and millions of computers are members of remotely controlled “bot” networks. Owned.

Now, things are not so simple. There are all sorts of interests vying for control of your computer. There are media companies that want to control what you can do with the music and videos they sell you. There are companies that use software as a conduit to collect marketing information, deliver advertising or do whatever it is their real owners require. And there are software companies that are trying to make money by pleasing not only their customers, but other companies they ally themselves with. All these companies want to own your computer.

Some examples:

  • Entertainment software: In October 2005, it emerged that Sony had distributed a rootkit with several music CDs—the same kind of software that crackers use to own people’s computers. This rootkit secretly installed itself when the music CD was played on a computer. Its purpose was to prevent people from doing things with the music that Sony didn’t approve of: It was a DRM system. If the exact same piece of software had been installed secretly by a hacker, this would have been an illegal act. But Sony believed that it had legitimate reasons for wanting to own its customers’ machines.
  • Antivirus: You might have expected your antivirus software to detect Sony’s rootkit. After all, that’s why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.
  • Internet services: Hotmail allows you to blacklist certain e-mail addresses, so that mail from them automatically goes into your spam trap. Have you ever tried blocking all that incessant marketing e-mail from Microsoft? You can’t.
  • Application software: Internet Explorer users might have expected the program to incorporate easy-to-use cookie handling and pop-up blockers. After all, other browsers do, and users have found them useful in defending against Internet annoyances. But Microsoft isn’t just selling software to you; it sells Internet advertising as well. It isn’t in the company’s best interest to offer users features that would adversely affect its business partners.
  • Spyware: Spyware is nothing but someone else trying to own your computer. These programs eavesdrop on your behavior and report back to their real owners—sometimes without your knowledge or consent—about your behavior.
  • Internet security: It recently came out that the firewall in Microsoft Vista will ship with half its protections turned off. Microsoft claims that large enterprise users demanded this default configuration, but that makes no sense. It’s far more likely that Microsoft just doesn’t want adware—and DRM spyware—blocked by default.
  • Update: Automatic update features are another way software companies try to own your computer. While they can be useful for improving security, they also require you to trust your software vendor not to disable your computer for nonpayment, breach of contract or other presumed infractions.

Adware, software-as-a-service and Google Desktop search are all examples of some other company trying to own your computer. And Trusted Computing will only make the problem worse.

There is an inherent insecurity to technologies that try to own people’s computers: They allow individuals other than the computers’ legitimate owners to enforce policy on those machines. These systems invite attackers to assume the role of the third party and turn a user’s device against him.

Remember the Sony story: The most insecure feature in that DRM system was a cloaking mechanism that gave the rootkit control over whether you could see it executing or spot its files on your hard disk. By taking ownership away from you, it reduced your security.

If left to grow, these external control systems will fundamentally change your relationship with your computer. They will make your computer much less useful by letting corporations limit what you can do with it. They will make your computer much less reliable because you will no longer have control of what is running on your machine, what it does, and how the various software components interact. At the extreme, they will transform your computer into a glorified boob tube.

You can fight back against this trend by only using software that respects your boundaries. Boycott companies that don’t honestly serve their customers, that don’t disclose their alliances, that treat users like marketing assets. Use open-source software—software created and owned by users, with no hidden agendas, no secret alliances and no back-room marketing deals.

Just because computers were a liberating force in the past doesn’t mean they will be in the future. There is enormous political and economic power behind the idea that you shouldn’t truly own your computer or your software, despite having paid for it.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/5): Commentary. It seems that some of my examples were not very good. I’ll come up with other ones for the Crypto-Gram version.

Posted on May 4, 2006 at 7:13 AM

"Lessons from the Sony CD DRM Episode"

“Lessons from the Sony CD DRM Episode” is an interesting paper by J. Alex Halderman and Edward W. Felten.

Abstract: In the fall of 2005, problems discovered in two Sony-BMG compact disc copy protection systems, XCP and MediaMax, triggered a public uproar that ultimately led to class-action litigation and the recall of millions of discs. We present an in-depth analysis of these technologies, including their design, implementation, and deployment. The systems are surprisingly complex and suffer from a diverse array of flaws that weaken their content protection and expose users to serious security and privacy risks. Their complexity, and their failure, makes them an interesting case study of digital rights management that carries valuable lessons for content companies, DRM vendors, policymakers, end users, and the security community.

Posted on February 17, 2006 at 2:11 PM
