Entries Tagged "essays"

Everyone Wants You To Have Security, But Not from Them

In December, Google’s Executive Chairman Eric Schmidt was interviewed at the CATO Institute Surveillance Conference. One of the things he said, after talking about some of the security measures his company has put in place post-Snowden, was: “If you have important information, the safest place to keep it is in Google. And I can assure you that the safest place to not keep it is anywhere else.”

This surprised me, because Google collects all of your information to show you more targeted advertising. Surveillance is the business model of the Internet, and Google is one of the most successful companies at it. To claim that Google protects your privacy better than anyone else is to profoundly misunderstand why Google stores your data for free in the first place.

I was reminded of this last week when I appeared on Glenn Beck’s show along with cryptography pioneer Whitfield Diffie. Diffie said:

You can’t have privacy without security, and I think we have glaring failures in computer security in problems that we’ve been working on for 40 years. You really should not live in fear of opening an attachment to a message. It ought to be confined; your computer ought to be able to handle it. And the fact that we have persisted for decades without solving these problems is partly because they’re very difficult, but partly because there are lots of people who want you to be secure against everyone but them. And that includes all of the major computer manufacturers who, roughly speaking, want to manage your computer for you. The trouble is, I’m not sure of any practical alternative.

That neatly explains Google. Eric Schmidt does want your data to be secure. He wants Google to be the safest place for your data, as long as you don't mind the fact that Google has access to it. Facebook wants the same thing: to protect your data from everyone except Facebook. Hardware companies are no different. Last week, we learned that Lenovo computers shipped with a piece of adware called Superfish that broke users' security to spy on them for advertising purposes.

Governments are no different. The FBI wants people to have strong encryption, but it wants backdoor access so it can get at your data. UK Prime Minister David Cameron wants you to have good security, just as long as it’s not so strong as to keep the UK government out. And, of course, the NSA spends a lot of money ensuring that there’s no security it can’t break.

Corporations want access to your data for profit; governments want it for security purposes, be they benevolent or malevolent. But Diffie makes an even stronger point: we give lots of companies access to our data because it makes our lives easier.

I wrote about this in my latest book, Data and Goliath:

Convenience is the other reason we willingly give highly personal data to corporate interests, and put up with becoming objects of their surveillance. As I keep saying, surveillance-based services are useful and valuable. We like it when we can access our address book, calendar, photographs, documents, and everything else on any device we happen to be near. We like services like Siri and Google Now, which work best when they know tons about you. Social networking apps make it easier to hang out with our friends. Cell phone apps like Google Maps, Yelp, Weather, and Uber work better and faster when they know our location. Letting apps like Pocket or Instapaper know what we’re reading feels like a small price to pay for getting everything we want to read in one convenient place. We even like it when ads are targeted to exactly what we’re interested in. The benefits of surveillance in these and other applications are real, and significant.

Like Diffie, I’m not sure there is any practical alternative. The reason the Internet is a worldwide mass-market phenomenon is that all the technological details are hidden from view. Someone else is taking care of it. We want strong security, but we also want companies to have access to our computers, smart devices, and data. We want someone else to manage our computers and smart phones, organize our e-mail and photos, and help us move data between our various devices.

Those “someones” will necessarily be able to violate our privacy, either by deliberately peeking at our data or by having such lax security that they’re vulnerable to national intelligence agencies, cybercriminals, or both. Last week, we learned that the NSA broke into the Dutch company Gemalto and stole the encryption keys for billions (yes, billions) of cell phones worldwide. That was possible because we consumers don’t want to do the work of securely generating those keys and setting up our own security when we get our phones; we want it done automatically by the phone manufacturers. We want our data to be secure, but we want someone to be able to recover it all when we forget our password.
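
What the manufacturers do on our behalf is conceptually simple: generate a strong random secret for each device and keep a record of it. Here is a minimal sketch in Python; the record format and device identifiers are made-up illustrations, and the 128-bit length loosely mirrors the Ki keys provisioned into SIM cards:

```python
import secrets

def provision_key(device_id: str, key_bits: int = 128) -> dict:
    """Generate a per-device random secret, as a manufacturer does at
    provisioning time. A database of these records is exactly what was
    stolen in bulk from Gemalto: whoever holds it can decrypt every
    listed device's traffic."""
    key = secrets.token_bytes(key_bits // 8)  # cryptographically strong RNG
    return {"device_id": device_id, "key_hex": key.hex()}

# A manufacturer's key database is just many of these records:
keys = [provision_key(f"device-{i:04d}") for i in range(3)]
for record in keys:
    print(record["device_id"], record["key_hex"])
```

Generating such a key takes one line of code; the hard part, which we outsource, is storing it safely for the lifetime of the device.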

We’ll never solve these security problems as long as we’re our own worst enemy. That’s why I believe that any long-term security solution will not only be technological, but political as well. We need laws that will protect our privacy from those who obey them, and that will punish those who break them. We need laws that require those entrusted with our data to protect it. Yes, we need better security technologies, but we also need laws mandating the use of those technologies.

This essay previously appeared on Forbes.com.

EDITED TO ADD: French translation.

Posted on February 26, 2015 at 6:47 AM

The Equation Group's Sophisticated Hacking and Exploitation Tools

This week, Kaspersky Labs published detailed information on what it calls the Equation Group—almost certainly the NSA—and its abilities to embed spyware deep inside computers, gaining pretty much total control of those computers while maintaining persistence in the face of reboots, operating system reinstalls, and commercial anti-virus products. The details are impressive, and I urge anyone interested to read the Kaspersky documents, or this very detailed article from Ars Technica.

Kaspersky doesn’t explicitly name the NSA, but talks about similarities between these techniques and Stuxnet, and points to NSA-like codenames. A related Reuters story provides more confirmation: “A former NSA employee told Reuters that Kaspersky’s analysis was correct, and that people still in the intelligence agency valued these spying programs as highly as Stuxnet. Another former intelligence operative confirmed that the NSA had developed the prized technique of concealing spyware in hard drives, but said he did not know which spy efforts relied on it.”

In some ways, this isn’t news. We saw examples of these techniques in 2013, when Der Spiegel published details of the NSA’s 2008 catalog of implants. (Aside: I don’t believe the person who leaked that catalog is Edward Snowden.) In those pages, we saw examples of malware that embedded itself in computers’ BIOS and disk drive firmware. We already know about the NSA’s infection methods using packet injection and hardware interception.

This is targeted surveillance. There’s nothing here that implies the NSA is doing this sort of thing to every computer, router, or hard drive. It’s doing it only to networks it wants to monitor. Reuters again: “Kaspersky said it found personal computers in 30 countries infected with one or more of the spying programs, with the most infections seen in Iran, followed by Russia, Pakistan, Afghanistan, China, Mali, Syria, Yemen and Algeria. The targets included government and military institutions, telecommunication companies, banks, energy companies, nuclear researchers, media, and Islamic activists, Kaspersky said.” A map of the infections Kaspersky found bears this out.

On one hand, it’s the sort of thing we want the NSA to do. It’s targeted. It’s exploiting existing vulnerabilities. In the overall scheme of things, this is much less disruptive to Internet security than deliberately inserting vulnerabilities that leave everyone insecure.

On the other hand, the NSA’s definition of “targeted” can be pretty broad. We know that it’s hacked the Belgian telephone company and the Brazilian oil company. We know it’s collected every phone call in the Bahamas and Afghanistan. It hacks system administrators worldwide.

On the other other hand—can I even have three hands?—I remember a line from my latest book: “Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.” Today, the Equation Group is “probably the most sophisticated computer attack group in the world,” but these techniques aren’t magically exclusive to the NSA. We know China uses similar techniques. Companies like Gamma Group sell less sophisticated versions of the same things to Third World governments worldwide. We need to figure out how to maintain security in the face of these sorts of attacks, because we’re all going to be subjected to the criminal versions of them in three to five years.

That’s the real problem. Steve Bellovin wrote about this:

For more than 50 years, all computer security has been based on the separation between the trusted portion and the untrusted portion of the system. Once it was “kernel” (or “supervisor”) versus “user” mode, on a single computer. The Orange Book recognized that the concept had to be broader, since there were all sorts of files executed or relied on by privileged portions of the system. Their newer, larger category was dubbed the “Trusted Computing Base” (TCB). When networking came along, we adopted firewalls; the TCB still existed on single computers, but we trusted “inside” computers and networks more than external ones.

There was a danger sign there, though few people recognized it: our networked systems depended on other systems for critical files….

The National Academies report Trust in Cyberspace recognized that the old TCB concept no longer made sense. (Disclaimer: I was on the committee.) Too many threats, such as Word macro viruses, lived purely at user level. Obviously, one could have arbitrarily classified word processors, spreadsheets, etc., as part of the TCB, but that would have been worse than useless; these things were too large and had no need for privileges.

In the 15+ years since then, no satisfactory replacement for the TCB model has been proposed.

We have a serious computer security problem. Everything depends on everything else, and security vulnerabilities in anything affect the security of everything. We simply don’t have the ability to maintain security in a world where we can’t trust the hardware and software we use.

This article was originally published at the Lawfare blog.

EDITED TO ADD (2/17): Slashdot thread. Hacker News thread. Reddit thread. BoingBoing discussion.

EDITED TO ADD (2/18): Here are two academic/hacker presentations on exploiting hard drives. And another article.

EDITED TO ADD (2/23): Another excellent article.

Posted on February 17, 2015 at 12:19 PM

Samsung Television Spies on Viewers

Earlier this week, we learned that Samsung televisions are eavesdropping on their owners. If you have one of their Internet-connected smart TVs, you can turn on a voice command feature that saves you the trouble of finding the remote, pushing buttons and scrolling through menus. But making that feature work requires the television to listen to everything you say. And what you say isn’t just processed by the television; it may be forwarded over the Internet for remote processing. It’s literally Orwellian.

This discovery surprised people, but it shouldn’t have. The things around us are increasingly computerized, and increasingly connected to the Internet. And most of them are listening.

Our smartphones and computers, of course, listen to us when we’re making audio and video calls. But the microphones are always there, and there are ways a hacker, government, or clever company can turn those microphones on without our knowledge. Sometimes we turn them on ourselves. If we have an iPhone, the voice-processing system Siri listens to us, but only when we push the iPhone’s button. Like Samsung, iPhones with the “Hey Siri” feature enabled listen all the time. So do Android devices with the “OK Google” feature enabled, and so does an Amazon voice-activated system called Echo. Facebook has the ability to turn your smartphone’s microphone on when you’re using the app.

Even if you don’t speak, your computers are paying attention. Gmail “listens” to everything you write, and shows you advertising based on it. Facebook does the same with everything you write on that platform, and even listens to the things you type but don’t post. Skype doesn’t listen—we think—but as Der Spiegel notes, data from the service “has been accessible to the NSA’s snoops” since 2011. It can feel as if you’re never alone.

So the NSA certainly listens. It listens directly, and it listens to all these companies listening to you. So do other countries like Russia and China, which we really don’t want listening so closely to their citizens.

It’s not just the devices that listen; most of this data is transmitted over the Internet. Samsung sends it to what was referred to as a “third party” in its policy statement. It later revealed that third party to be a company you’ve never heard of—Nuance—that turns the voice into text for it. Samsung promises that the data is erased immediately. Most of the other companies that are listening promise no such thing and, in fact, save your data for a long time. Governments, of course, save it, too.

This data is a treasure trove for criminals, as we learn again and again when tens or hundreds of millions of customer records are stolen. Last week, it was reported that hackers had accessed the personal records of some 80 million Anthem Health customers and others. Last year, it was Home Depot, JP Morgan, Sony and many others. Do we think Nuance’s security is better than that of any of these companies? I sure don’t.

At some level, we’re consenting to all this listening. A single sentence in Samsung’s 1,500-word privacy policy, the one most of us don’t read, stated: “Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.” Other services could easily come with a similar warning: Be aware that your e-mail provider knows what you’re saying to your colleagues and friends and be aware that your cell phone knows where you sleep and whom you’re sleeping with—assuming that you both have smartphones, that is.

The Internet of Things is full of listeners. Newer cars contain computers that record speed, steering wheel position, pedal pressure, even tire pressure—and insurance companies want to listen. And, of course, your cell phone records your precise location at all times you have it on—and possibly even when you turn it off. If you have a smart thermostat, it records your house’s temperature, humidity, ambient light and any nearby movement. Any fitness tracker you’re wearing records your movements and some vital signs; so do many computerized medical devices. Add security cameras and recorders, drones and other surveillance airplanes, and we’re being watched, tracked, measured and listened to almost all the time.

It’s the age of ubiquitous surveillance, fueled by both Internet companies and governments. And because it’s largely happening in the background, we’re not really aware of it.

This has to change. We need to regulate the listening: both what is being collected and how it’s being used. But that won’t happen until we know the full extent of surveillance: who’s listening and what they’re doing with it. Samsung buried its listening details in its privacy policy—it has since amended the policy to be clearer—and we’re only having this discussion because a Daily Beast reporter stumbled upon it. We need a more explicit conversation about the value of being able to speak freely in our living rooms without our televisions listening, or to have e-mail conversations without Google or the government listening. Privacy is a prerequisite for free expression, and losing that would be an enormous blow to our society.

This essay previously appeared on CNN.com.

EDITED TO ADD (2/16): A German translation by Damian Weber.

Posted on February 13, 2015 at 7:01 AM

When Thinking Machines Break the Law

Last year, two Swiss artists programmed the Random Darknet Shopper, a bot that every week would spend $100 in bitcoin to buy a random item from an anonymous Internet black market…all for an art project on display in Switzerland. It was a clever concept, except there was a problem. Most of the stuff the bot purchased was benign—fake Diesel jeans, a baseball cap with a hidden camera, a stash can, a pair of Nike trainers—but it also purchased ten ecstasy tablets and a fake Hungarian passport.

What do we do when a machine breaks the law? Traditionally, we hold the person controlling the machine responsible. People commit the crimes; the guns, lockpicks, or computer viruses are merely their tools. But as machines become more autonomous, the link between machine and controller becomes more tenuous.

Who is responsible if an autonomous military drone accidentally kills a crowd of civilians? Is it the military officer who keyed in the mission, the programmers of the enemy detection software that misidentified the people, or the programmers of the software that made the actual kill decision? What if those programmers had no idea that their software was being used for military purposes? And what if the drone can improve its algorithms by modifying its own software based on what the entire fleet of drones learns on earlier missions?

Maybe our courts can decide where the culpability lies, but that’s only because while current drones may be autonomous, they’re not very smart. As drones get smarter, their links to the humans that originally built them become more tenuous.

What if there are no programmers, and the drones program themselves? What if they are both smart and autonomous, and make strategic as well as tactical decisions on targets? What if one of the drones decides, based on whatever means it has at its disposal, that it no longer maintains allegiance to the country that built it and goes rogue?

Our society has many approaches, using both informal social rules and more formal laws, for dealing with people who won’t follow the rules of society. We have informal mechanisms for small infractions, and a complex legal system for larger ones. If you are obnoxious at a party I throw, I won’t invite you back. Do it regularly, and you’ll be shamed and ostracized from the group. If you steal some of my stuff, I might report you to the police. Steal from a bank, and you’ll almost certainly go to jail for a long time. A lot of this might seem more ad hoc than situation-specific, but we humans have spent millennia working this all out. Security is both political and social, but it’s also psychological. Door locks, for example, only work because our social and legal prohibitions on theft keep the overwhelming majority of us honest. That’s how we live peacefully together at a scale unimaginable for any other species on the planet.

How does any of this work when the perpetrator is a machine with whatever passes for free will? Machines probably won’t have any concept of shame or praise. They won’t refrain from doing something because of what other machines might think. They won’t follow laws simply because it’s the right thing to do, nor will they have a natural deference to authority. When they’re caught stealing, how can they be punished? What does it mean to fine a machine? Does it make any sense at all to incarcerate it? And unless they are deliberately programmed with a self-preservation function, threatening them with execution will have no meaningful effect.

We are already talking about programming morality into thinking machines, and we can imagine programming other human tendencies into our machines, but we’re certainly going to get it wrong. No matter how much we try to avoid it, we’re going to have machines that break the law.

This, in turn, will break our legal system. Fundamentally, our legal system doesn’t prevent crime. Its effectiveness is based on arresting and convicting criminals after the fact, and their punishment providing a deterrent to others. This completely fails if there’s no punishment that makes sense.

We already experienced a small example of this after 9/11, when most of us first started thinking about suicide terrorists and how post-facto security was irrelevant to them. That was just one change in motivation, and look at how those actions affected the way we think about security. Our laws will have the same problem with thinking machines, along with related problems we can’t even imagine yet. The social and legal systems that have dealt so effectively with human rulebreakers of all sorts will fail in unexpected ways in the face of thinking machines.

A machine that thinks won’t always think in the ways we want it to. And we’re not ready for the ramifications of that.

This essay previously appeared on Edge.org as one of the answers to the 2015 Edge Question: “What do you think about machines that think?”

EDITED TO ADD: The Random Darknet Shopper is “under arrest.”

Posted on January 23, 2015 at 4:55 AM

Accountability as a Security System

At a CATO surveillance event last month, Ben Wittes talked about inherent presidential powers of surveillance with this hypothetical: “What should Congress have to say about the rules when Barack Obama wants to know what Vladimir Putin is talking about?” His answer was basically that Congress should have no say: “I think most people, going back to my Vladimir Putin question, would say that is actually an area of inherent presidential authority.” Edward Snowden, a surprise remote participant at the event, said the opposite, although using the courts in general rather than specifically Congress as his example. “…there is no court in the world—well, at least, no court outside Russia—who would not go, ‘This man is an agent of the foreign government. I mean, he’s the head of the government.’ Of course, they will say, ‘this guy has access to some kind of foreign intelligence value. We’ll sign the warrant for him.'”

There’s a principle here worth discussing at length. I’m not talking about the legal principle, as in what kind of court should oversee US intelligence collection. I’m not even talking about the constitutional principle, as in what are the US president’s inherent powers. I am talking about the philosophical principle: what sorts of secret unaccountable actions do we want individuals to be able to take on behalf of their country?

Put that way, I think the answer is obvious: as little as possible.

I am not a lawyer or a political scientist. I am a security technologist. And to me, the separation of powers and the checks and balances written into the US constitution are a security system. The more Barack Obama can do by himself in secret, the more power he has—and the more dangerous that is to all of us. By limiting the actions individuals and groups can take on their own, and forcing differing institutions to approve the actions of each other, the system reduces the ability for those in power to abuse their power. It holds them accountable.

We have enshrined the principle of different groups overseeing each other in many of our social and political systems. The courts issue warrants, limiting police power. Independent audit companies verify corporate balance sheets, limiting corporate power. And the executive, the legislative, and the judicial branches of government get to have their say in our laws. Sometimes accountability takes the form of prior approval, and sometimes it takes the form of ex post facto review. It’s all inefficient, of course, but it’s an inefficiency we accept because it makes us all safer.

While this is a fine guiding principle, it quickly falls apart in the practicalities of running a modern government. It’s just not possible to run a country where every action is subject to review and approval. The complexity of society, and the speed with which some decisions have to be made, can require unilateral actions. So we make allowances. Congress passes broad laws, and agencies turn them into detailed rules and procedures. The president is the commander in chief of the entire US military when it comes time to fight wars. Police officers have a lot of discretion on the beat. And we only get to vote elected officials in and out of office every two, four, or six years.

The thing is, we can do better today. I’ve often said that the modern constitutional democracy is the best form of government mid-18th-century technology could produce. Because both communications and travel were difficult and expensive, it made sense for geographically proximate groups of people to choose one representative to go all the way over there and act for them over a long block of time.

Neither of these two limitations is true today. Travel is both cheap and easy, and communications are so cheap and easy as to be virtually free. Video conferencing and telepresence allow people to communicate without traveling. Surely if we were to design a democratic government today, we would come up with better institutions than the ones we are stuck with because of history.

And we can come up with more granular systems of checks and balances. So, yes, I think we would have a better government if a court had to approve all surveillance actions by the president, including those against Vladimir Putin. And today it might be possible to have a court do just that. Wittes argues that making some of these changes is impossible, given the current US constitution. He may be right, but that doesn’t mean they’re not good ideas.

Of course, the devil is always in the details. Efficiency is still a powerful counterargument. The FBI has procedures for temporarily bypassing prior approval processes if speed is essential. And granularity can still be a problem. Every bullet fired by the US military can’t be subject to judicial approval or even a military court, even though every bullet fired by a US policeman is—at least in theory—subject to judicial review. And while every domestic surveillance decision made by the police and the NSA is (also in theory) subject to judicial approval, it’s hard to know whether this can work for international NSA surveillance decisions until we try.

We are all better off now that many of the NSA’s surveillance programs have been made public and are being debated in Congress and in the media—although I had hoped for more congressional action—and many of the FISA Court’s formerly secret decisions on surveillance are being made public. But we still have a long way to go, and it shouldn’t take someone like Snowden to force at least some openness to happen.

This essay previously appeared on Lawfare.com, where Ben Wittes responded.

Posted on January 20, 2015 at 6:24 AM

The Security of Data Deletion

Thousands of articles have called the December attack against Sony Pictures a wake-up call to industry. Regardless of whether the attacker was the North Korean government, a disgruntled former employee, or a group of random hackers, the attack showed how vulnerable a large organization can be and how devastating the publication of its private correspondence, proprietary data, and intellectual property can be.

But while companies are supposed to learn that they need to improve their security against attack, there’s another equally important but much less discussed lesson here: companies should have an aggressive deletion policy.

One of the social trends of the computerization of our business and social communications tools is the loss of the ephemeral. Things we used to say in person or on the phone we now say in e-mail, by text message, or on social networking platforms. Memos we used to read and then throw away now remain in our digital archives. Big data initiatives mean that we’re saving everything we can about our customers on the remote chance that it might be useful later.

Everything is now digital, and storage is cheap—why not save it all?

Sony illustrates the reason why not. The hackers published old e-mails from company executives that caused enormous public embarrassment to the company. They published old e-mails by employees that caused less-newsworthy personal embarrassment to those employees, and these messages are resulting in class-action lawsuits against the company. They published old documents. They published everything they got their hands on.

Saving data, especially e-mail and informal chats, is a liability.

It’s also a security risk: the risk of exposure. The exposure could be accidental. It could be the result of data theft, as happened to Sony. Or it could be the result of litigation. Whatever the reason, the best security against these eventualities is not to have the data in the first place.

If Sony had had an aggressive data deletion policy, much of what was leaked couldn’t have been stolen and wouldn’t have been published.

An organization-wide deletion policy makes sense. Customer data should be deleted as soon as it isn’t immediately useful. Internal e-mails can probably be deleted after a few months, IM chats even more quickly, and other documents in one to two years. There are exceptions, of course, but they should be exceptions. Individuals should need to deliberately flag documents and correspondence for longer retention. But unless there are laws requiring an organization to save a particular type of data for a prescribed length of time, deletion should be the norm.
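
The mechanics of such a policy are not the hard part; the hard part is deciding to run it. Here is a sketch of the retention rules above in Python, where the categories, the retention periods, and the “.keep” flag convention are illustrative assumptions rather than any particular product’s feature:

```python
import time
from pathlib import Path

# Illustrative retention periods, in days, keyed by data category.
RETENTION = {"email": 90, "chat": 30, "documents": 365 * 2}

def purge(root: Path, now=None) -> list:
    """Delete files older than their category's retention period.
    Files an individual has deliberately flagged for longer retention
    (a '.keep' suffix in this sketch) are the exceptions."""
    now = time.time() if now is None else now
    deleted = []
    for category, days in RETENTION.items():
        cutoff = now - days * 86400
        for f in (root / category).glob("*"):
            if f.suffix == ".keep":  # explicitly flagged: retain
                continue
            if f.is_file() and f.stat().st_mtime < cutoff:
                f.unlink()
                deleted.append(f.name)
    return deleted
```

Run nightly from a scheduler, a few lines like these would have made most of the Sony trove simply not exist when the attackers arrived.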

This has always been true, but many organizations have forgotten it in the age of big data. In the wake of the devastating leak of terabytes of sensitive Sony data, I hope we’ll all remember it now.

This essay previously appeared on ArsTechnica.com, which has comments from people who strongly object to this idea.

Slashdot thread.

Posted on January 15, 2015 at 6:12 AM

Attack Attribution in Cyberspace

When you’re attacked by a missile, you can follow its trajectory back to where it was launched from. When you’re attacked in cyberspace, figuring out who did it is much harder. The reality of international aggression in cyberspace will change how we approach defense.

Many of us in the computer-security field are skeptical of the US government’s claim that it has positively identified North Korea as the perpetrator of the massive Sony hack in November 2014. The FBI’s evidence is circumstantial and not very convincing. The attackers never mentioned the movie that became the centerpiece of the hack until the press did. More likely, the culprits are random hackers who have loved to hate Sony for over a decade, or possibly a disgruntled insider.

On the other hand, most people believe that the FBI would not sound so sure unless it was convinced. And President Obama would not have imposed sanctions against North Korea if he weren’t convinced. This implies that there’s classified evidence as well. A couple of weeks ago, I wrote for the Atlantic, “The NSA has been trying to eavesdrop on North Korea’s government communications since the Korean War, and it’s reasonable to assume that its analysts are in pretty deep. The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong Un’s sign-off on the plan. On the other hand, maybe not. I could have written the same thing about Iraq’s weapons-of-mass-destruction program in the run-up to the 2003 invasion of that country, and we all know how wrong the government was about that.”

The NSA is extremely reluctant to reveal its intelligence capabilities—or what it refers to as “sources and methods”—against North Korea simply to convince all of us of its conclusion, because by revealing them, it tips North Korea off to its insecurities. At the same time, we rightly have reason to be skeptical of the government’s unequivocal attribution of the attack without seeing the evidence. Iraq’s mythical weapons of mass destruction are only the most recent example of a major intelligence failure. American history is littered with examples of claimed secret intelligence pointing us toward aggression against other countries, only for us to learn later that the evidence was wrong.

Cyberspace exacerbates this in two ways. First, it is very difficult to attribute attacks in cyberspace. Packets don’t come with return addresses, and you can never be sure that what you think is the originating computer hasn’t itself been hacked. Even worse, it’s hard to tell the difference between attacks carried out by a couple of lone hackers and ones where a nation-state military is responsible. When we do know who did it, it’s usually because a lone hacker admitted it or because there was a months-long forensic investigation.
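The point about return addresses can be made concrete: an IPv4 packet does carry a source address, but it is simply a field the sender fills in, and nothing on the path verifies it. Here is a minimal sketch (illustrative only; the addresses are made-up documentation addresses, and the checksum is left unset for brevity):

```python
import socket
import struct

def build_ipv4_header(src_ip: str, dst_ip: str, payload_len: int = 0) -> bytes:
    """Pack a minimal 20-byte IPv4 header. The source address is
    whatever the sender chooses to write -- nothing validates it."""
    version_ihl = (4 << 4) | 5           # IPv4, header length 5 x 32-bit words
    total_length = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,
        0,                               # DSCP/ECN
        total_length,
        0x1234,                          # identification (arbitrary)
        0,                               # flags / fragment offset
        64,                              # TTL
        socket.IPPROTO_TCP,              # protocol
        0,                               # checksum (left zero in this sketch)
        socket.inet_aton(src_ip),        # claimed source -- trivially forged
        socket.inet_aton(dst_ip),
    )

# The "return address" is whatever the sender wrote into it:
hdr = build_ipv4_header("203.0.113.7", "198.51.100.1")
claimed_src = socket.inet_ntoa(hdr[12:16])
print(claimed_src)  # the claimed source tells you nothing about the real sender
```

A defender reading that source field learns only what the attacker wanted it to say, which is why attribution has to rest on slower, out-of-band investigation instead.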

Second, in cyberspace, it is much easier to attack than to defend. The primary defense we have against military attacks in cyberspace is counterattack and the threat of counterattack that leads to deterrence.

What this all means is that it’s in the US’s best interest to claim omniscient powers of attribution. More than anything else, those in charge want to signal to other countries that they cannot get away with attacking the US: If they try something, we will know. And we will retaliate, swiftly and effectively. This is also why the US has been cagey about whether it caused North Korea’s Internet outage in late December.

It can be an effective bluff, but only if you get away with it. Otherwise, you lose credibility. The FBI is already starting to equivocate, saying others might have been involved in the attack, possibly hired by North Korea. If the real attackers surface and can demonstrate that they acted independently, it will be obvious that the FBI and NSA were overconfident in their attribution. Already, the FBI has lost significant credibility.

The only way out of this, with respect to the Sony hack and any other incident of cyber-aggression in which we’re expected to support retaliatory action, is for the government to be much more forthcoming about its evidence. The secrecy of the NSA’s sources and methods is going to have to take a backseat to the public’s right to know. And in cyberspace, we’re going to have to accept the uncomfortable fact that there’s a lot we don’t know.

This essay previously appeared in Time.

Posted on January 8, 2015 at 6:34 AM

Attributing the Sony Attack

No one has admitted taking down North Korea’s Internet. It could have been an act of retaliation by the US government, but it could just as well have been an ordinary DDoS attack. The follow-on attack against Sony PlayStation definitely seems to be the work of hackers unaffiliated with a government.

Not knowing who did what isn’t new. It’s called the “attribution problem,” and it plagues Internet security. But as governments increasingly get involved in cyberspace attacks, it has policy implications as well. Last year, I wrote:

Ordinarily, you could determine who the attacker was by the weaponry. When you saw a tank driving down your street, you knew the military was involved because only the military could afford tanks. Cyberspace is different. In cyberspace, technology is broadly spreading its capability, and everyone is using the same weaponry: hackers, criminals, politically motivated hacktivists, national spies, militaries, even the potential cyberterrorist. They are all exploiting the same vulnerabilities, using the same sort of hacking tools, engaging in the same attack tactics, and leaving the same traces behind. They all eavesdrop or steal data. They all engage in denial-of-service attacks. They all probe cyberdefences and do their best to cover their tracks.

Despite this, knowing the attacker is vitally important. As members of society, we have several different types of organizations that can defend us from an attack. We can call the police or the military. We can call on our national anti-terrorist agency and our corporate lawyers. Or we can defend ourselves with a variety of commercial products and services. Depending on the situation, all of these are reasonable choices.

The legal regime in which any defense operates depends on two things: who is attacking you and why. Unfortunately, when you are being attacked in cyberspace, the two things you often do not know are who is attacking you and why. It is not that everything can be defined as cyberwar; it is that we are increasingly seeing warlike tactics used in broader cyberconflicts. This makes defence and national cyberdefence policy difficult.

In 2007, the Israeli Air Force bombed and destroyed the al-Kibar nuclear facility in Syria. The Syrian government immediately knew who did it, because airplanes are hard to disguise. In 2010, the US and Israel jointly damaged Iran’s Natanz nuclear facility. But this time they used a cyberweapon, Stuxnet, and no one knew who did it until details were leaked years later. China routinely denies its cyberespionage activities. And a 2009 cyberattack against the United States and South Korea was blamed on North Korea even though it may have originated from either London or Miami.

When it’s possible to identify the origins of cyberattacks—as forensic experts were able to do with many of the Chinese attacks against US networks—it’s as a result of months of detailed analysis and investigation. That kind of time frame doesn’t help at the moment of attack, when you have to decide within milliseconds how your network is going to react and within days how your country is going to react. This, in part, explains the relative disarray within the Obama administration over what to do about North Korea. Officials in the US government and international institutions simply don’t have the legal or even the conceptual framework to deal with these types of scenarios.

The blurring of lines between individual actors and national governments has been happening more and more in cyberspace. What has been called the first cyberwar, Russia vs. Estonia in 2007, was partly the work of a 20-year-old ethnic Russian living in Tallinn, and partly the work of a pro-Kremlin youth group associated with the Russian government. Many of the Chinese hackers targeting Western networks seem to be unaffiliated with the Chinese government. And in 2011, the hacker group Anonymous threatened NATO.

It’s a strange future we live in when we can’t tell the difference between random hackers and major governments, or when those same random hackers can credibly threaten international military organizations.

This is why people around the world should care about the Sony hack. In this future, we’re going to see an even greater blurring of traditional lines between police, military, and private actions as technology broadly distributes attack capabilities across a variety of actors. This attribution difficulty is here to stay, at least for the foreseeable future.

If North Korea is responsible for the cyberattack, how is the situation different than a North Korean agent breaking into Sony’s office, photocopying a lot of papers, and making them available to the public? Is Chinese corporate espionage a problem for governments to solve, or should we let corporations defend themselves? Should the National Security Agency defend US corporate networks, or only US military networks? How much should we allow organizations like the NSA to insist that we trust them without proof when they claim to have classified evidence that they don’t want to disclose? How should we react to one government imposing sanctions on another based on this secret evidence? More importantly, when we don’t know who is launching an attack or why, who is in charge of the response and under what legal system should those in charge operate?

We need to figure all of this out. We need national guidelines to determine when the military should get involved and when it’s a police matter, as well as what sorts of proportional responses are available in each instance. We need international agreements defining what counts as cyberwar and what does not. And, most of all right now, we need to tone down all the cyberwar rhetoric. Breaking into the offices of a company and photocopying their paperwork is not an act of war, no matter who did it. Neither is doing the same thing over the Internet. Let’s save the big words for when it matters.

This essay previously appeared on TheAtlantic.com.

Jack Goldsmith responded to this essay.

Posted on January 7, 2015 at 11:16 AM

Doxing as an Attack

Those of you unfamiliar with hacker culture might need an explanation of “doxing.”

The word refers to the practice of publishing personal information about people without their consent. Usually it’s things like an address and phone number, but it can also be credit card details, medical information, private e-mails—pretty much anything an assailant can get his hands on.

Doxing is not new; the term dates back to 2001 and the hacker group Anonymous. But it can be incredibly offensive. In 2014, several women were doxed by male gamers trying to intimidate them into keeping silent about sexism in computer games.

Companies can be doxed, too. In 2011, Anonymous doxed the technology firm HBGary Federal. In the past few weeks we’ve witnessed the ongoing doxing of Sony.

Everyone from political activists to hackers to government leaders has now learned how effective this attack is. Everyone from common individuals to corporate executives to government leaders now fears this will happen to them. And I believe this will change how we think about computing and the Internet.

This essay previously appeared on BetaBoston, which asked about a trend for 2015.

EDITED TO ADD (1/3): Slashdot thread.

Posted on January 2, 2015 at 7:21 AM

Did North Korea Really Attack Sony?

I am deeply skeptical of the FBI’s announcement on Friday that North Korea was behind last month’s Sony hack. The agency’s evidence is tenuous, and I have a hard time believing it. But I also have trouble believing that the US government would make the accusation this formally if officials didn’t believe it.

Clues in the hackers’ attack code seem to point in all directions at once. The FBI points to reused code from previous attacks associated with North Korea, as well as similarities in the networks used to launch the attacks. Korean language in the code also suggests a Korean origin, though not necessarily a North Korean one, since North Koreans use a unique dialect. However you read it, this sort of evidence is circumstantial at best. It’s easy to fake, and it’s even easier to interpret it incorrectly. In general, it’s a situation that rapidly devolves into storytelling, where analysts pick bits and pieces of the “evidence” to suit the narrative they already have worked out in their heads.
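Code-reuse evidence of this kind typically comes from similarity scoring across malware samples. A toy illustration of both the technique and its weakness, using Python’s standard-library sequence matcher on hypothetical byte strings (these are not real samples):

```python
from difflib import SequenceMatcher

def similarity(a: bytes, b: bytes) -> float:
    """Ratio of matching subsequences between two byte strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical "samples": a known attack routine and two new binaries.
known    = b"wipe_disk();beacon(cmd_host);sleep(60);"
reuser   = b"wipe_disk();beacon(cmd_host);sleep(90);log();"
stranger = b"encrypt_files();demand_ransom();"

print(round(similarity(known, reuser), 2))    # high: shared code
print(round(similarity(known, stranger), 2))  # low: unrelated code
```

A high score demonstrates shared code, not shared authorship: anyone who obtains an earlier sample can reuse it, deliberately or not, which is exactly why this class of evidence stays circumstantial.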

In reality, there are several possibilities to consider:

  • This is an official North Korean military operation. We know that North Korea has extensive cyberattack capabilities.
  • This is the work of independent North Korean nationals. Many politically motivated hacking incidents in the past have not been government-controlled. There’s nothing special or sophisticated about this hack that would indicate a government operation. In fact, reusing old attack code is a sign of a more conventional hacker being behind this.
  • This is the work of hackers who had no idea that there was a North Korean connection to Sony until they read about it in the media. Sony, after all, is a company that hackers have loved to hate for a decade. The most compelling evidence for this scenario is that the explicit North Korean connection—threats about the movie The Interview—was only made by the hackers after the media picked up on the possible links between the film release and the cyberattack. There is still the very real possibility that the hackers are in it just for the lulz, and that this international geopolitical angle simply makes the whole thing funnier.
  • It could have been an insider—Sony’s Snowden—who orchestrated the breach. I doubt this theory, because an insider wouldn’t need all the hacker tools that were used. I’ve also seen speculation that the culprit was a disgruntled ex-employee. It’s possible, but that employee or ex-employee would have also had to possess the requisite hacking skills, which seems unlikely.
  • The initial attack was not a North Korean government operation, but was co-opted by the government. There’s no reason to believe that the hackers who initially stole the information from Sony are the same ones who threatened the company over the movie. Maybe there are several attackers working independently. Maybe the independent North Korean hackers turned their work over to the government when the job got too big to handle. Maybe the North Koreans hacked the hackers.

I’m sure there are other possibilities that I haven’t thought of, and it wouldn’t surprise me if what’s really going on isn’t even on my list. North Korea’s offer to help with the investigation doesn’t clear matters up at all.

Tellingly, the FBI’s press release says that the bureau’s conclusion is only based “in part” on these clues. This leaves open the possibility that the government has classified evidence that North Korea is behind the attack. The NSA has been trying to eavesdrop on North Korea’s government communications since the Korean War, and it’s reasonable to assume that its analysts are in pretty deep. The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong Un’s sign-off on the plan.

On the other hand, maybe not. I could have written the same thing about Iraq’s weapons of mass destruction program in the run-up to the 2003 invasion of that country, and we all know how wrong the government was about that.

Allan Friedman, a research scientist at George Washington University’s Cyber Security Policy Research Institute, told me that, from a diplomatic perspective, it’s a smart strategy for the US to be overconfident in assigning blame for the cyberattacks. Beyond the politics of this particular attack, the long-term US interest is to discourage other nations from engaging in similar behavior. If the North Korean government continues denying its involvement, no matter what the truth is, and the real attackers have gone underground, then the US decision to claim omnipotent powers of attribution serves as a warning to others that they will get caught if they try something like this.

Sony also has a vested interest in the hack being the work of North Korea. The company is going to be on the receiving end of a dozen or more lawsuits—from employees, ex-employees, investors, partners, and so on. Harvard Law professor Jonathan Zittrain opined that having this attack characterized as an act of terrorism or war, or the work of a foreign power, might earn the company some degree of immunity from these lawsuits.

I worry that this case echoes the “we have evidence—trust us” story that the Bush administration told in the run-up to the Iraq invasion. Identifying the origin of a cyberattack is very difficult, and when it is possible, the process of attributing responsibility can take months. While I am confident that there will be no US military retribution because of this, I think the best response is to calm down and be skeptical of tidy explanations until more is known.

This essay originally appeared on The Atlantic.

Lots more doubters. And Ed Felten has also written about the Sony breach.

EDITED TO ADD (12/24): Nicholas Weaver analyzes how the NSA could determine if North Korea was behind the Sony hack. And Jack Goldsmith discusses the US government’s legal and policy confusion surrounding the attack.

EDITED TO ADD: Slashdot thread. Hacker News thread.

EDITED TO ADD (1/14): Interesting article by DEFCON’s director of security operations.

Posted on December 24, 2014 at 6:27 AM


Sidebar photo of Bruce Schneier by Joe MacInnis.