No mention of the species, but the photo is a depressing one.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
This was supposed to be a secret until the middle of February, but we've been found out.
We already have European customers; this is our European office.
And, by the way, we're hiring, primarily in the Boston area.
Another story from the Snowden documents:
According to the documents, the LEVITATION program can monitor downloads in several countries across Europe, the Middle East, North Africa, and North America. It is led by the Communications Security Establishment, or CSE, Canada's equivalent of the NSA. (The Canadian agency was formerly known as "CSEC" until a recent name change.)
CSE finds some 350 "interesting" downloads each month, the presentation notes, a number that amounts to less than 0.0001 per cent of the total collected data.
The agency stores details about downloads and uploads to and from 102 different popular file-sharing websites, according to the 2012 document, which describes the collected records as "free file upload," or FFU, "events."
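To put the quoted figures in perspective, here is a back-of-envelope calculation (taking the numbers above at face value) of how large the total collection must be for 350 "interesting" downloads to be less than 0.0001 per cent of it:

```python
# Back-of-envelope: how big must total collection be for 350 monthly
# "interesting" downloads to amount to less than 0.0001 per cent of it?
interesting_per_month = 350
fraction = 0.0001 / 100          # 0.0001 per cent, as a fraction

implied_total = interesting_per_month / fraction
print(f"at least {implied_total:,.0f} records per month")
```

That works out to at least 350 million file-transfer events collected every month.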
EDITED TO ADD (1/30): News article.
I missed this paper when it was first published in 2012:
"Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks"
Abstract: Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant's brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon's Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.
Here's an IDEA-variant with a 128-bit block length. While I think it's a great idea to bring IDEA up to a modern block length, the paper has none of the cryptanalysis behind it that IDEA had. If nothing else, I would have expected more than eight rounds. If anyone wants to practice differential and linear cryptanalysis, here's a new target for you.
Remember back in 2013 when the then-director of the NSA Keith Alexander claimed that Section 215 bulk telephone metadata surveillance stopped "fifty-four different terrorist-related activities"? Remember when that number was backtracked several times, until all that was left was a single Somali taxi driver who was convicted of sending some money back home? This is the story of Basaaly Moalin.
Today, as part of a Harvard computer science symposium, I had a public conversation with Edward Snowden. The topics were largely technical, ranging from cryptography to hacking to surveillance to what to do now.
Here's the video.
EDITED TO ADD (1/24): News article.
Last year, two Swiss artists programmed a Random Darknet Shopper, which every week would spend $100 in bitcoin to buy a random item from an anonymous Internet black market...all for an art project on display in Switzerland. It was a clever concept, except there was a problem. Most of the stuff the bot purchased was benign -- fake Diesel jeans, a baseball cap with a hidden camera, a stash can, a pair of Nike trainers -- but it also purchased ten ecstasy tablets and a fake Hungarian passport.
What do we do when a machine breaks the law? Traditionally, we hold the person controlling the machine responsible. People commit the crimes; the guns, lockpicks, or computer viruses are merely their tools. But as machines become more autonomous, the link between machine and controller becomes more tenuous.
Who is responsible if an autonomous military drone accidentally kills a crowd of civilians? Is it the military officer who keyed in the mission, the programmers of the enemy detection software that misidentified the people, or the programmers of the software that made the actual kill decision? What if those programmers had no idea that their software was being used for military purposes? And what if the drone can improve its algorithms by modifying its own software based on what the entire fleet of drones learns on earlier missions?
Maybe our courts can decide where the culpability lies, but that's only because while current drones may be autonomous, they're not very smart. As drones get smarter, their links to the humans that originally built them become more tenuous.
What if there are no programmers, and the drones program themselves? What if they are both smart and autonomous, and make strategic as well as tactical decisions on targets? What if one of the drones decides, based on whatever means it has at its disposal, that it no longer maintains allegiance to the country that built it and goes rogue?
Our society has many approaches, using both informal social rules and more formal laws, for dealing with people who won't follow the rules of society. We have informal mechanisms for small infractions, and a complex legal system for larger ones. If you are obnoxious at a party I throw, I won't invite you back. Do it regularly, and you'll be shamed and ostracized from the group. If you steal some of my stuff, I might report you to the police. Steal from a bank, and you'll almost certainly go to jail for a long time. A lot of this might seem ad hoc, but we humans have spent millennia working this all out. Security is both political and social, but it's also psychological. Door locks, for example, only work because our social and legal prohibitions on theft keep the overwhelming majority of us honest. That's how we live peacefully together at a scale unimaginable for any other species on the planet.
How does any of this work when the perpetrator is a machine with whatever passes for free will? Machines probably won't have any concept of shame or praise. They won't refrain from doing something because of what other machines might think. They won't follow laws simply because it's the right thing to do, nor will they have a natural deference to authority. When they're caught stealing, how can they be punished? What does it mean to fine a machine? Does it make any sense at all to incarcerate it? And unless they are deliberately programmed with a self-preservation function, threatening them with execution will have no meaningful effect.
We are already talking about programming morality into thinking machines, and we can imagine programming other human tendencies into our machines, but we're certainly going to get it wrong. No matter how much we try to avoid it, we're going to have machines that break the law.
This, in turn, will break our legal system. Fundamentally, our legal system doesn't prevent crime. Its effectiveness is based on arresting and convicting criminals after the fact, and their punishment providing a deterrent to others. This completely fails if there's no punishment that makes sense.
We already experienced a small example of this after 9/11, which was when most of us first started thinking about suicide terrorists and how post-facto security was irrelevant to them. That was just one change in motivation, and look at how those actions affected the way we think about security. Our laws will have the same problem with thinking machines, along with related problems we can't even imagine yet. The social and legal systems that have dealt so effectively with human rulebreakers of all sorts will fail in unexpected ways in the face of thinking machines.
A machine that thinks won't always think in the ways we want it to. And we're not ready for the ramifications of that.
This essay previously appeared on Edge.org as one of the answers to the 2015 Edge Question: "What do you think about machines that think?"
EDITED TO ADD: The Random Darknet Shopper is "under arrest."
It's a common fraud on sites like eBay: buyers falsely claim that they never received a purchased item in the mail. Here's a paper on defending against this fraud through basic psychological security measures. It's preliminary research, but probably worth further study.
We have tested a collection of possible user-interface enhancements aimed at reducing liar buyer fraud. We have found that showing users in the process of filing a dispute that (1) their computer is recognized, and (2) that their location is known dramatically reduces the willingness to file false claims. We believe the reason for the reduction is that the would-be liars can visualize their lack of anonymity at a time when they are deciding whether to perform a fraudulent action. Interestingly, we also showed that users were not affected by knowing that their computer was recognized, but without their location being pin-pointed, or the other way around. We also determined that a reasonably accurate map was necessary -- but that an inaccurate map does not seem to increase the willingness to lie.
At a Cato surveillance event last month, Ben Wittes talked about inherent presidential powers of surveillance with this hypothetical: "What should Congress have to say about the rules when Barack Obama wants to know what Vladimir Putin is talking about?" His answer was basically that Congress should have no say: "I think most people, going back to my Vladimir Putin question, would say that is actually an area of inherent presidential authority." Edward Snowden, a surprise remote participant at the event, said the opposite, although using the courts in general rather than specifically Congress as his example. "...there is no court in the world -- well, at least, no court outside Russia -- who would not go, 'This man is an agent of the foreign government. I mean, he's the head of the government.' Of course, they will say, 'this guy has access to some kind of foreign intelligence value. We'll sign the warrant for him.'"
There's a principle here worth discussing at length. I'm not talking about the legal principle, as in what kind of court should oversee US intelligence collection. I'm not even talking about the constitutional principle, as in what are the US president's inherent powers. I am talking about the philosophical principle: what sorts of secret unaccountable actions do we want individuals to be able to take on behalf of their country?
Put that way, I think the answer is obvious: as little as possible.
I am not a lawyer or a political scientist. I am a security technologist. And to me, the separation of powers and the checks and balances written into the US constitution are a security system. The more Barack Obama can do by himself in secret, the more power he has -- and the more dangerous that is to all of us. By limiting the actions individuals and groups can take on their own, and forcing differing institutions to approve the actions of each other, the system reduces the ability for those in power to abuse their power. It holds them accountable.
We have enshrined the principle of different groups overseeing each other in many of our social and political systems. The courts issue warrants, limiting police power. Independent audit companies verify corporate balance sheets, limiting corporate power. And the executive, the legislative, and the judicial branches of government get to have their say in our laws. Sometimes accountability takes the form of prior approval, and sometimes it takes the form of ex post facto review. It's all inefficient, of course, but it's an inefficiency we accept because it makes us all safer.
While this is a fine guiding principle, it quickly falls apart in the practicalities of running a modern government. It's just not possible to run a country where every action is subject to review and approval. The complexity of society, and the speed with which some decisions have to be made, can require unilateral actions. So we make allowances. Congress passes broad laws, and agencies turn them into detailed rules and procedures. The president is the commander in chief of the entire US military when it comes time to fight wars. Policemen have a lot of discretion on their own on the beat. And we only get to vote elected officials in and out of office every two, four, or six years.
The thing is, we can do better today. I've often said that the modern constitutional democracy is the best form of government mid-18th-century technology could produce. Because both communications and travel were difficult and expensive, it made sense for geographically proximate groups of people to choose one representative to go all the way over there and act for them over a long block of time.
Neither of these two limitations is true today. Travel is both cheap and easy, and communications are so cheap and easy as to be virtually free. Video conferencing and telepresence allow people to communicate without traveling. Surely if we were to design a democratic government today, we would come up with better institutions than the ones we are stuck with because of history.
And we can come up with more granular systems of checks and balances. So, yes, I think we would have a better government if a court had to approve all surveillance actions by the president, including those against Vladimir Putin. And today it might be possible to have a court do just that. Wittes argues that making some of these changes is impossible, given the current US constitution. He may be right, but that doesn't mean they're not good ideas.
Of course, the devil is always in the details. Efficiency is still a powerful counterargument. The FBI has procedures for temporarily bypassing prior approval processes if speed is essential. And granularity can still be a problem. Every bullet fired by the US military can't be subject to judicial approval or even a military court, even though every bullet fired by a US policeman is -- at least in theory -- subject to judicial review. And while every domestic surveillance decision made by the police and the NSA is (also in theory) subject to judicial approval, it's hard to know whether this can work for international NSA surveillance decisions until we try.
We are all better off now that many of the NSA's surveillance programs have been made public and are being debated in Congress and in the media -- although I had hoped for more congressional action -- and many of the FISA Court's formerly secret decisions on surveillance are being made public. But we still have a long way to go, and it shouldn't take someone like Snowden to force at least some openness to happen.
Late last year, in a criminal case involving export violations, the US government disclosed a mysterious database of telephone call records that it had queried in the case.
The defendant argued that the database was the NSA's, and that the query was unconstitutional and the evidence should be suppressed. The government said that the database was not the NSA's. As part of the back and forth, the judge ordered the government to explain the call records database.
Someone from the Drug Enforcement Administration did that last week. Apparently, there's another bulk telephone metadata collection program and a "federal law enforcement database" authorized as part of a federal drug trafficking statute:
This database [redacted] consisted of telecommunications metadata obtained from United States telecommunications service providers pursuant to administrative subpoenas served upon the service providers under the provisions of 21 U.S.C. 876. This metadata related to international telephone calls originating in the United States and calling [redacted] designated foreign countries, one of which was Iran, that were determined to have a demonstrated nexus to international drug trafficking and related criminal activities.
The program began in the 1990s and was "suspended" in September 2013.
EDITED TO ADD (1/19): Another article.
Appelbaum, Poitras, and others have another NSA article with an enormous Snowden document dump on Der Spiegel, giving details on a variety of offensive NSA cyberoperations to infiltrate and exploit networks around the world. There's a lot here: 199 pages. (Here they are in one compressed archive.)
Paired with the 666 pages released in conjunction with the December 28 Spiegel article (compressed archive here) on NSA cryptanalytic capabilities, we've seen a huge amount of Snowden documents in the past few weeks. According to one tally, it runs 3,560 pages in all.
EDITED TO ADD (1/19): In related news, the New York Times is reporting that the NSA has infiltrated North Korea's networks, and provided evidence to blame the country for the Sony hacks.
For its "Top Influencers in Security You Should Be Following in 2015" blog post, TripWire asked me: "If you could have one infosec-related superpower, what would it be?" I answered:
Most superpowers are pretty lame: super strength, super speed, super sight, super stretchiness.
Teleportation would probably be the most useful given my schedule, but for subverting security systems, you can't beat invisibility. You can bypass almost every physical security measure with invisibility, and when you trip an alarm -- say, a motion sensor -- the guards that respond will conclude that you're a false alarm.
Oh, you want an "infosec" superpower. Hmmm. The ability to detect the origin of packets? The ability to bypass firewalls without a sound? The ability to mimic anyone's biometric? Those are all too techy for me. Maybe the ability to translate my thoughts into articles and books without going through the tedious process of writing. But then, what would I do on long airplane flights? So maybe I need teleportation after all.
I have long said that driving a car is the most dangerous thing we regularly do in our lives. Turns out deaths due to automobiles are declining, while deaths due to firearms are on the rise:
Guns and cars have long been among the leading causes of non-medical deaths in the U.S. By 2015, firearm fatalities will probably exceed traffic fatalities for the first time, based on data compiled by Bloomberg.
While motor-vehicle deaths dropped 22 percent from 2005 to 2010, gun fatalities are rising again after a low point in 2000, according to the Atlanta-based Centers for Disease Control and Prevention. Shooting deaths in 2015 will probably rise to almost 33,000, and those related to autos will decline to about 32,000, based on the 10-year average trend.
There's also this story.
An excellent idea:
3-1-1 for encryption. RSA, DSA, and ECDSA must be 3.4 ounces (100 bits) or less per container; must be in 1 quart-sized, clear, plastic, zip-top bag; 1 bag per message placed in screening bin. The bag limits the total data volume each traveling message can bring.
Thousands of articles have called the December attack against Sony Pictures a wake-up call to industry. Regardless of whether the attacker was the North Korean government, a disgruntled former employee, or a group of random hackers, the attack showed how vulnerable a large organization can be and how devastating the publication of its private correspondence, proprietary data, and intellectual property can be.
But while companies are supposed to learn that they need to improve their security against attack, there's another equally important but much less discussed lesson here: companies should have an aggressive deletion policy.
One of the social trends of the computerization of our business and social communications tools is the loss of the ephemeral. Things we used to say in person or on the phone we now say in e-mail, by text message, or on social networking platforms. Memos we used to read and then throw away now remain in our digital archives. Big data initiatives mean that we're saving everything we can about our customers on the remote chance that it might be useful later.
Everything is now digital, and storage is cheap -- why not save it all?
Sony illustrates the reason why not. The hackers published old e-mails from company executives that caused enormous public embarrassment to the company. They published old e-mails by employees that caused less-newsworthy personal embarrassment to those employees, and these messages are resulting in class-action lawsuits against the company. They published old documents. They published everything they got their hands on.
Saving data, especially e-mail and informal chats, is a liability.
It's also a security risk: the risk of exposure. The exposure could be accidental. It could be the result of data theft, as happened to Sony. Or it could be the result of litigation. Whatever the reason, the best security against these eventualities is not to have the data in the first place.
If Sony had had an aggressive data deletion policy, much of what was leaked couldn't have been stolen and wouldn't have been published.
An organization-wide deletion policy makes sense. Customer data should be deleted as soon as it isn't immediately useful. Internal e-mails can probably be deleted after a few months, IM chats even more quickly, and other documents in one to two years. There are exceptions, of course, but they should be exceptions. Individuals should need to deliberately flag documents and correspondence for longer retention. But unless there are laws requiring an organization to save a particular type of data for a prescribed length of time, deletion should be the norm.
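As a concrete illustration, a retention sweep along these lines might look like the following sketch. The category names, retention windows, directory layout, and the ".keep" flag convention are all hypothetical, and deciding what has expired is deliberately separated from deleting it, so flagged exceptions and legal holds can be honored:

```python
import os
import time

# Hypothetical retention windows, echoing the guidelines above.
RETENTION_DAYS = {
    "im_chats": 30,      # IM chats deleted quickly
    "email": 90,         # internal e-mail after a few months
    "documents": 730,    # other documents in one to two years
}

def expired_files(root, now=None):
    """Return files past their retention window.

    Files deliberately flagged for longer retention (here, via a
    sibling '<name>.keep' marker -- an invented convention) are
    skipped. Actual deletion is left to the caller.
    """
    now = now if now is not None else time.time()
    expired = []
    for category, days in RETENTION_DAYS.items():
        base = os.path.join(root, category)
        if not os.path.isdir(base):
            continue
        cutoff = now - days * 86400
        for dirpath, _, filenames in os.walk(base):
            for name in filenames:
                if name.endswith(".keep"):
                    continue               # the marker itself
                if name + ".keep" in filenames:
                    continue               # flagged for longer retention
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) < cutoff:
                    expired.append(path)
    return expired
```

A real deployment would key off creation or last-access dates from the mail store or document-management system rather than filesystem timestamps, but the shape is the same: deletion as the default, retention as the deliberate exception.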
This has always been true, but many organizations have forgotten it in the age of big data. In the wake of the devastating leak of terabytes of sensitive Sony data, I hope we'll all remember it now.
This essay previously appeared on ArsTechnica.com, which has comments from people who strongly object to this idea.
It's called SnoopSnitch:
SnoopSnitch is an app for Android devices that analyses your mobile radio traffic to tell if someone is listening in on your phone conversations or tracking your location. Unlike standard antivirus apps, which are designed to combat software intrusions or the theft of personal info, SnoopSnitch picks up on things like fake mobile base stations or SS7 exploits. As such, it's probably ideally suited to evading surveillance from local government agencies.
The app was written by German outfit Security Research Labs, and is available for free on the Play Store. Unfortunately, you'll need a rooted Android device running a Qualcomm chipset to take advantage.
Download it here.
In the wake of the Paris terrorist shootings, David Cameron has said that he wants to ban encryption in the UK. Here's the quote: "If I am prime minister I will make sure that it is a comprehensive piece of legislation that does not allow terrorists safe space to communicate with each other."
Cory Doctorow has a good essay on Cameron's proposal:
For David Cameron's proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you've downloaded hasn't been tampered with.
Cameron is not alone here. The regime he proposes is already in place in countries like Syria, Russia, and Iran (for the record, none of these countries have had much luck with it). There are two means by which authoritarian governments have attempted to restrict the use of secure technology: by network filtering and by technology mandates.
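The signature-verification point in the quoted essay can be illustrated with a toy example. This is textbook RSA with deliberately tiny parameters -- not secure, and not what any real package manager does internally -- sketched purely to show the idea that anyone holding a publisher's public key can check that a download matches what the publisher signed. Real distribution channels use vetted implementations and far larger keys:

```python
# Toy RSA sign/verify, purely illustrative of cryptographic signing.
import hashlib

p, q = 61, 53              # toy primes -- never use sizes like this
n = p * q                  # 3233: public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Py 3.8+)

def _digest(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    """Publisher signs with the private exponent d."""
    return pow(_digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone verifies with the public pair (e, n)."""
    return pow(signature, e, n) == _digest(message)

pkg = b"package-1.0.tar.gz contents"
sig = sign(pkg)
print(verify(pkg, sig))            # a genuine signature checks out
print(verify(pkg, (sig + 1) % n))  # a forged signature fails
```

Because verification needs only the public key, the check works no matter which mirror or server the package came from -- which is exactly why a national download ban is so hard to enforce.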
Worry about Ebola (or anything) manifests physically as what's known as a fight, flight, or freeze response. Biological systems ramp up or down to focus the body's resources on the threat at hand. Heart rate and blood pressure increase, immune function is suppressed (after an initial burst), brain chemistry changes, and the normal functioning of the digestive system is interrupted, among other effects. Like fear itself, these changes are protective in the short term. But when they persist, the changes prompted by chronic stress -- defined as stress beyond the normal hassles of life, lasting at least one to two weeks -- are associated with increased risk of cardiovascular disease (the leading cause of death in America); increased likelihood and severity of clinical depression (suicide is the 10th leading cause of death in America); depressed memory formation and recall; impaired fertility; reduced bone growth; and gastrointestinal disorders.
Perhaps most insidious of all, by suppressing our immune systems, chronic stress makes us more likely to catch infectious diseases, or suffer more -- or die -- from diseases that a healthy immune system would be better able to control. The fear of Ebola may well have an impact on the breadth and severity of how many people get sick, or die, from influenza this flu season. (The CDC reports that, either directly or indirectly, influenza kills between 3,000 and 49,000 people per year.)
There is no question that America's physical, economic, and social health is far more at risk from the fear of Ebola than from the virus itself.
This is an interesting historical use of Viking runes as a secret code. Yes, the page is all in Finnish. But scroll to the middle. There's a picture of the Stockholm city police register from 1536, about a married woman who was found with someone who was not her husband. The recording scribe "encrypted" her name and home address using runes.
The report's revelations, based on a survey of nearly 800 writers worldwide, are alarming. Concern about surveillance is now nearly as high among writers living in democracies (75%) as among those living in non-democracies (80%). The levels of self-censorship reported by writers living in democratic countries are approaching the levels reported by writers living in authoritarian or semi-democratic countries. And writers around the world think that mass surveillance has significantly damaged U.S. credibility as a global champion of free expression for the long term.
The FBI has provided more evidence:
Speaking at a Fordham Law School cybersecurity conference Wednesday, Comey said that he has "very high confidence" in the FBI's attribution of the attack to North Korea. And he named several of the sources of his evidence, including a "behavioral analysis unit" of FBI experts trained to psychologically analyze foes based on their writings and actions. He also said that the FBI compared the Sony attack with their own "red team" simulations to determine how the attack could have occurred. And perhaps most importantly, Comey now says that the hackers in the attack failed on multiple occasions to use the proxy servers that bounce their Internet connection through an obfuscating computer somewhere else in the world, revealing IP addresses that tied them to North Koreans.
"In nearly every case, [the Sony hackers known as the Guardians of Peace] used proxy servers to disguise where they were coming from in sending these emails and posting these statements. But several times they got sloppy," Comey said. "Several times, either because they forgot or because of a technical problem, they connected directly and we could see that the IPs they were using...were exclusively used by the North Koreans."
"They shut it off very quickly once they saw the mistake," he added. "But not before we saw where it was coming from."
EDITED TO ADD (1/10): Marc Rogers responds. Here's a piece:
First, they are saying that these guys, who were so careful to route themselves through multiple public proxies in order to hide their connections, got sloppy and connected directly. It's a rookie mistake that every hacker dreads. Many of us "hackers" even set up our systems to make this sort of slip-up impossible. So, while it's definitely plausible, it feels very unlikely for professional or state-sponsored hackers in my books. Hackers who take this much care when hiding their connections have usually developed a methodology based around using these kinds of connections to hide their origin. It becomes such common practice that it's almost a reflex. Why? Because their freedom depends on it.
However, even if we take that to one side and accept that these emails came from North Korean IP addresses, what are those addresses? If they are addresses in the North Korean IP ranges then why don't they share them? If they are North Korean servers, then say so! What about the possibility that this attacker who has shown ability and willingness to bounce their connections all over the world is simply bouncing their messages off of North Korean infrastructure?
Finally, how do they even know these emails came from the attackers? From what I saw, the messages with actual incriminating content were dumped to pastebin and not sent via email. Perhaps there are messages with incriminating content -- and by this I mean links to things only the attackers had access to -- which they haven't shared with us? Because from where I am sitting, it's highly possible that someone other than the attacker could have joined in the fun by sending threatening messages as GOP, as we have already seen happen once in this case.
EDITED TO ADD (1/12): The NSA admits involvement.
This sort of thing is still very rare, but I fear it will become more common:
...hackers had struck an unnamed steel mill in Germany. They did so by manipulating and disrupting control systems to such a degree that a blast furnace could not be properly shut down, resulting in "massive" -- though unspecified -- damage.
When you're attacked by a missile, you can follow its trajectory back to where it was launched from. When you're attacked in cyberspace, figuring out who did it is much harder. The reality of international aggression in cyberspace will change how we approach defense.
Many of us in the computer-security field are skeptical of the US government's claim that it has positively identified North Korea as the perpetrator of the massive Sony hack in November 2014. The FBI's evidence is circumstantial and not very convincing. The attackers never mentioned the movie that became the centerpiece of the hack until the press did. More likely, the culprits are random hackers who have loved to hate Sony for over a decade, or possibly a disgruntled insider.
On the other hand, most people believe that the FBI would not sound so sure unless it was convinced. And President Obama would not have imposed sanctions against North Korea if he weren't convinced. This implies that there's classified evidence as well. A couple of weeks ago, I wrote for the Atlantic, "The NSA has been trying to eavesdrop on North Korea's government communications since the Korean War, and it's reasonable to assume that its analysts are in pretty deep. The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong Un's sign-off on the plan. On the other hand, maybe not. I could have written the same thing about Iraq's weapons-of-mass-destruction program in the run-up to the 2003 invasion of that country, and we all know how wrong the government was about that."
The NSA is extremely reluctant to reveal its intelligence capabilities -- or what it refers to as "sources and methods" -- against North Korea simply to convince all of us of its conclusion, because by revealing them, it tips North Korea off to its insecurities. At the same time, we rightly have reason to be skeptical of the government's unequivocal attribution of the attack without seeing the evidence. Iraq's mythical weapons of mass destruction are only the most recent example of a major intelligence failure. American history is littered with examples of claimed secret intelligence pointing us toward aggression against other countries, only for us to learn later that the evidence was wrong.
Cyberspace exacerbates this in two ways. First, it is very difficult to attribute attacks in cyberspace. Packets don't come with return addresses, and you can never be sure that what you think is the originating computer hasn't itself been hacked. Even worse, it's hard to tell the difference between attacks carried out by a couple of lone hackers and ones where a nation-state military is responsible. When we do know who did it, it's usually because a lone hacker admitted it or because there was a months-long forensic investigation.
Second, in cyberspace, it is much easier to attack than to defend. The primary defense we have against military attacks in cyberspace is counterattack and the threat of counterattack that leads to deterrence.
What this all means is that it's in the US's best interest to claim omniscient powers of attribution. More than anything else, those in charge want to signal to other countries that they cannot get away with attacking the US: If they try something, we will know. And we will retaliate, swiftly and effectively. This is also why the US has been cagey about whether it caused North Korea's Internet outage in late December.
It can be an effective bluff, but only if you get away with it. Otherwise, you lose credibility. The FBI is already starting to equivocate, saying others might have been involved in the attack, possibly hired by North Korea. If the real attackers surface and can demonstrate that they acted independently, it will be obvious that the FBI and NSA were overconfident in their attribution. Already, the FBI has lost significant credibility.
The only way out of this, with respect to the Sony hack and any other incident of cyber-aggression in which we're expected to support retaliatory action, is for the government to be much more forthcoming about its evidence. The secrecy of the NSA's sources and methods is going to have to take a backseat to the public's right to know. And in cyberspace, we're going to have to accept the uncomfortable fact that there's a lot we don't know.
This essay previously appeared in Time.
No one has admitted taking down North Korea's Internet. It could have been an act of retaliation by the US government, but it could just as well have been an ordinary DDoS attack. The follow-on attack against Sony PlayStation definitely seems to be the work of hackers unaffiliated with a government.
Not knowing who did what isn't new. It's called the "attribution problem," and it plagues Internet security. But as governments increasingly get involved in cyberspace attacks, it has policy implications as well. Last year, I wrote:
Ordinarily, you could determine who the attacker was by the weaponry. When you saw a tank driving down your street, you knew the military was involved because only the military could afford tanks. Cyberspace is different. In cyberspace, technology is broadly spreading its capability, and everyone is using the same weaponry: hackers, criminals, politically motivated hacktivists, national spies, militaries, even the potential cyberterrorist. They are all exploiting the same vulnerabilities, using the same sort of hacking tools, engaging in the same attack tactics, and leaving the same traces behind. They all eavesdrop or steal data. They all engage in denial-of-service attacks. They all probe cyberdefenses and do their best to cover their tracks.
Despite this, knowing the attacker is vitally important. As members of society, we have several different types of organizations that can defend us from an attack. We can call the police or the military. We can call on our national anti-terrorist agency and our corporate lawyers. Or we can defend ourselves with a variety of commercial products and services. Depending on the situation, all of these are reasonable choices.
The legal regime in which any defense operates depends on two things: who is attacking you and why. Unfortunately, when you are being attacked in cyberspace, the two things you often do not know are who is attacking you and why. It is not that everything can be defined as cyberwar; it is that we are increasingly seeing warlike tactics used in broader cyberconflicts. This makes defense and national cyberdefense policy difficult.
In 2007, the Israeli Air Force bombed and destroyed the al-Kibar nuclear facility in Syria. The Syrian government immediately knew who did it, because airplanes are hard to disguise. In 2010, the US and Israel jointly damaged Iran's Natanz nuclear facility. But this time they used a cyberweapon, Stuxnet, and no one knew who did it until details were leaked years later. China routinely denies its cyberespionage activities. And a 2009 cyberattack against the United States and South Korea was blamed on North Korea even though it may have originated from either London or Miami.
When it's possible to identify the origins of cyberattacks -- as forensic experts were able to do with many of the Chinese attacks against US networks -- it's as a result of months of detailed analysis and investigation. That kind of time frame doesn't help at the moment of attack, when you have to decide within milliseconds how your network is going to react and within days how your country is going to react. This, in part, explains the relative disarray within the Obama administration over what to do about North Korea. Officials in the US government and international institutions simply don't have the legal or even the conceptual framework to deal with these types of scenarios.
The blurring of lines between individual actors and national governments has been happening more and more in cyberspace. What has been called the first cyberwar, Russia vs. Estonia in 2007, was partly the work of a 20-year-old ethnic Russian living in Tallinn, and partly the work of a pro-Kremlin youth group associated with the Russian government. Many of the Chinese hackers targeting Western networks seem to be unaffiliated with the Chinese government. And in 2011, the hacker group Anonymous threatened NATO.
It's a strange future we live in when we can't tell the difference between random hackers and major governments, or when those same random hackers can credibly threaten international military organizations.
This is why people around the world should care about the Sony hack. In this future, we're going to see an even greater blurring of traditional lines between police, military, and private actions as technology broadly distributes attack capabilities across a variety of actors. This attribution difficulty is here to stay, at least for the foreseeable future.
If North Korea is responsible for the cyberattack, how is the situation different from a North Korean agent breaking into Sony's office, photocopying a lot of papers, and making them available to the public? Is Chinese corporate espionage a problem for governments to solve, or should we let corporations defend themselves? Should the National Security Agency defend US corporate networks, or only US military networks? How much should we allow organizations like the NSA to insist that we trust them without proof when they claim to have classified evidence that they don't want to disclose? How should we react to one government imposing sanctions on another based on this secret evidence? More importantly, when we don't know who is launching an attack or why, who is in charge of the response and under what legal system should those in charge operate?
We need to figure all of this out. We need national guidelines to determine when the military should get involved and when it's a police matter, as well as what sorts of proportional responses are available in each instance. We need international agreements defining what counts as cyberwar and what does not. And, most of all right now, we need to tone down all the cyberwar rhetoric. Breaking into the offices of a company and photocopying their paperwork is not an act of war, no matter who did it. Neither is doing the same thing over the Internet. Let's save the big words for when it matters.
This essay previously appeared on TheAtlantic.com.
Jack Goldsmith responded to this essay.
Sophie Van Der Zee and colleagues have a new paper on using body movement as a lie detector:
Abstract: We present a new robust signal for detecting deception: full body motion. Previous work on detecting deception from body movement has relied either on human judges or on specific gestures (such as fidgeting or gaze aversion) that are coded or rated by humans. The results are characterized by inconsistent and often contradictory findings, with small-stakes lies under lab conditions detected at rates only slightly better than guessing. Building on previous work that uses automatic analysis of facial videos and rhythmic body movements to diagnose stress, we set out to see whether a full body motion capture suit, which records the position, velocity and orientation of 23 points in the subject's body, could yield a better signal of deception. Interviewees of South Asian (n = 60) or White British culture (n = 30) were required to either tell the truth or lie about two experienced tasks while being interviewed by somebody from their own (n = 60) or different culture (n = 30). We discovered that full body motion -- the sum of joint displacements -- was indicative of lying approximately 75% of the time. Furthermore, movement was guilt-related, and occurred independently of anxiety, cognitive load and cultural background. Further analyses indicate that including individual limb data in our full body motion measurements, in combination with appropriate questioning strategies, can increase its discriminatory power to around 82%. This culture-sensitive study provides an objective and inclusive view on how people actually behave when lying. It appears that full body motion can be a robust nonverbal indicator of deceit, and suggests that lying does not cause people to freeze. However, should full body motion capture become a routine investigative technique, liars might freeze in order not to give themselves away; but this in itself should be a telltale.
This is a first research study, and the results might not be robust. But it certainly is interesting.
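The paper's core signal -- "full body motion, the sum of joint displacements" -- is straightforward to compute from motion-capture data. The paper doesn't publish code, so this is a minimal hypothetical sketch: the function name, the `(T, 23, 3)` array layout, and the use of raw positions rather than the suit's velocity/orientation channels are all my assumptions, not the authors':

```python
import numpy as np

def total_body_motion(frames: np.ndarray) -> float:
    """Sum of joint displacements over a recording.

    frames: array of shape (T, 23, 3) -- T time steps, 23 tracked
    body points, each an (x, y, z) position from the capture suit.
    (Layout is an assumption; the paper only names the metric.)
    """
    # Frame-to-frame movement vector for every joint: shape (T-1, 23, 3).
    deltas = np.diff(frames, axis=0)
    # Euclidean distance each joint moved at each step: shape (T-1, 23).
    step_lengths = np.linalg.norm(deltas, axis=2)
    # Aggregate over all time steps and all joints.
    return float(step_lengths.sum())
```

In the study, this single scalar per interview, compared across truth-telling and lying conditions, was what separated liars from truth-tellers about 75% of the time; the 82% figure came from keeping per-limb displacements separate instead of summing everything.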
New paper: "Attributing Cyber Attacks," by Thomas Rid and Ben Buchanan:
Abstract: Who did it? Attribution is fundamental. Human lives and the security of the state may depend on ascribing agency to an agent. In the context of computer network intrusions, attribution is commonly seen as one of the most intractable technical problems, as either solvable or not solvable, and as dependent mainly on the available forensic evidence. But is it? Is this a productive understanding of attribution? This article argues that attribution is what states make of it. To show how, we introduce the Q Model: designed to explain, guide, and improve the making of attribution. Matching an offender to an offence is an exercise in minimising uncertainty on three levels: tactically, attribution is an art as well as a science; operationally, attribution is a nuanced process not a black-and-white problem; and strategically, attribution is a function of what is at stake politically. Successful attribution requires a range of skills on all levels, careful management, time, leadership, stress-testing, prudent communication, and recognising limitations and challenges.
In Kyoto, taxi drivers are encouraged to loiter around convenience stores late at night. Their presence reduces crime.
In Kyoto, about half of the convenience stores had signed on for the Midnight Defender Strategy. These 500 or so shops hung posters in their windows with slogans such as "vigilance strengthening" written on them. The signs indicate to taxi drivers that they are allowed to park there as long as they like during breaks. The stores lose a few parking spaces in the process but gain some extra eyes, which may be enough to deter a would-be bandit from making their move.
Since the program started in September 2013, the number of armed robberies among participating stores dropped to four, compared to 18 in the previous year. On the other hand, the shops that were not in the Midnight Defender Strategy saw an increase in robberies, up from seven to nine incidents compared to the year before. Overall, the total number of robberies in the prefecture was nearly halved.
Hacker News thread.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Those of you unfamiliar with hacker culture might need an explanation of "doxing."
The word refers to the practice of publishing personal information about people without their consent. Usually it's things like an address and phone number, but it can also be credit card details, medical information, private e-mails -- pretty much anything an assailant can get his hands on.
Doxing is not new; the term dates back to 2001 and the hacker group Anonymous. But it can be incredibly offensive. In 2014, several women were doxed by male gamers trying to intimidate them into keeping silent about sexism in computer games.
Everyone from political activists to hackers to government leaders has now learned how effective this attack is. Everyone from common individuals to corporate executives to government leaders now fears this will happen to them. And I believe this will change how we think about computing and the Internet.
This essay previously appeared on BetaBoston, which asked me about a trend for 2015.
EDITED TO ADD (1/3): Slashdot thread.