
Identifying When Someone Is Operating a Computer Remotely

Here’s an interesting technique for detecting Remote Access Trojans, or RATs: looking for differences in how local and remote users use the keyboard and mouse:

By using biometric analysis tools, we are able to analyze cognitive traits such as hand-eye coordination, usage preferences, as well as device interaction patterns to identify a delay or latency often associated with remote access attacks. Simply put, a RAT’s keyboard typing or cursor movement will often cause delayed visual feedback which in turn results in delayed response time; the data is simply not as fluent as would be expected from standard human behavior data.

No data on false positives vs. false negatives, but interesting nonetheless.
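
To make the idea concrete, here’s a minimal sketch of the kind of timing analysis such a tool might run. This is not the vendor’s actual method; the event source, features, and thresholds are all assumptions for illustration:

import statistics

def looks_remote(event_times_ms, median_threshold_ms=150.0):
    """Heuristic: does the cadence of input events suggest remote control?

    event_times_ms: timestamps (milliseconds) of successive keystroke or
    mouse events in one session. A RAT or remote-desktop session inserts
    a network round trip between each action and its visual feedback, so
    the gaps between events tend to be longer and more jittery than a
    local user's.
    """
    gaps = [b - a for a, b in zip(event_times_ms, event_times_ms[1:])]
    if len(gaps) < 10:
        return None  # too few events to judge either way

    median_gap = statistics.median(gaps)
    jitter = statistics.stdev(gaps)

    # Both thresholds are invented for illustration. A real product would
    # learn per-user baselines and emit a risk score; that is also where
    # the false-positive/false-negative trade-off would live.
    return median_gap > median_threshold_ms and jitter > median_threshold_ms / 2

Even this toy version shows why the false-positive question matters: a local user on a slow, overloaded machine can look a lot like a remote attacker.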

Posted on March 9, 2015 at 1:03 PM

Attack Attribution and Cyber Conflict

The vigorous debate after the Sony Pictures breach pitted the Obama administration against many of us in the cybersecurity community who didn’t buy Washington’s claim that North Korea was the culprit.

What’s both amazing—and perhaps a bit frightening—about that dispute over who hacked Sony is that it happened in the first place.

But what it highlights is the fact that we’re living in a world where we can’t easily tell the difference between a couple of guys in a basement apartment and the North Korean government with an estimated $10 billion military budget. And that ambiguity has profound implications for how countries will conduct foreign policy in the Internet age.

Clandestine military operations aren’t new. Terrorism can be hard to attribute, especially the murky edges of state-sponsored terrorism. What’s different in cyberspace is how easy it is for an attacker to mask his identity—and the wide variety of people and institutions that can attack anonymously.

In the real world, you can often identify the attacker by the weaponry. In 2007, Israel attacked a Syrian nuclear facility. It was a conventional attack—military airplanes flew over Syria and bombed the plant—and there was never any doubt who did it. That shorthand doesn’t work in cyberspace.

When the US and Israel attacked an Iranian nuclear facility in 2010, they used a cyberweapon and their involvement was a secret for years. On the Internet, technology broadly disseminates capability. Everyone from lone hackers to criminals to hypothetical cyberterrorists to nations’ spies and soldiers is using the same tools and the same tactics. Internet traffic doesn’t come with a return address, and it’s easy for an attacker to obscure his tracks by routing his attacks through some innocent third party.

And while it now seems that North Korea did indeed attack Sony, the attack it most resembles was conducted by members of the hacker group Anonymous against a company called HBGary Federal in 2011. In the same year, other members of Anonymous threatened NATO, and in 2014, still others announced that they were going to attack ISIS. Regardless of what you think of the group’s capabilities, it’s a new world when a bunch of hackers can threaten an international military alliance.

Even when a victim does manage to attribute a cyberattack, the process can take a long time. It took the US weeks to publicly blame North Korea for the Sony attacks. That was relatively fast; most of that time was probably spent trying to figure out how to respond. Attacks by China against US companies have taken much longer to attribute.

This delay makes defense policy difficult. Microsoft’s Scott Charney makes this point: When you’re being physically attacked, you can call on a variety of organizations to defend you—the police, the military, whoever does antiterrorism security in your country, your lawyers. The legal structure justifying that defense depends on knowing two things: who’s attacking you, and why. Unfortunately, when you’re being attacked in cyberspace, the two things you often don’t know are who’s attacking you, and why.

Whose job was it to defend Sony? Was it the US military’s, because it believed the attack to have come from North Korea? Was it the FBI, because this wasn’t an act of war? Was it Sony’s own problem, because it’s a private company? What about during those first weeks, when no one knew who the attacker was? These are just a few of the policy questions that we don’t have good answers for.

Certainly Sony needs enough security to protect itself regardless of who the attacker was, as do all of us. For the victim of a cyberattack, who the attacker is can be academic. The damage is the same, whether it’s a couple of hackers or a nation-state.

In the geopolitical realm, though, attribution is vital. And not only is attribution hard; providing evidence of that attribution is even harder. Because so much of the FBI’s evidence was classified—and probably provided by the National Security Agency—the bureau was not able to explain why it was so sure North Korea did it. As I recently wrote: “The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong-un’s sign-off on the plan.” Making any of this public would reveal the NSA’s “sources and methods,” something it regards as a very important secret.

Different types of attribution require different levels of evidence. In the Sony case, we saw that the US government was able to generate enough evidence to convince itself. Perhaps it had the additional evidence required to convince North Korea it was sure, and provided that over diplomatic channels. But if the public is expected to support any government retaliatory action, they are going to need sufficient evidence made public to convince them. Today, trust in US intelligence agencies is low, especially after the 2003 Iraqi weapons-of-mass-destruction debacle.

What all of this means is that we are in the middle of an arms race between attackers and those who want to identify them: deception and deception detection. It’s an arms race in which the US—and, by extension, its allies—has a singular advantage. We spend more money on electronic eavesdropping than the rest of the world combined, we have more technology companies than any other country, and the architecture of the Internet ensures that most of the world’s traffic passes through networks the NSA can eavesdrop on.

In 2012, then US Secretary of Defense Leon Panetta said publicly that the US—presumably the NSA—has “made significant advances in … identifying the origins” of cyberattacks. We don’t know if this means they have made some fundamental technological advance, or that their espionage is so good that they’re monitoring the planning processes. Other US government officials have privately said that they’ve solved the attribution problem.

We don’t know how much of that is real and how much is bluster. It’s actually in America’s best interest to confidently accuse North Korea, even if it isn’t sure, because it sends a strong message to the rest of the world: “Don’t think you can hide in cyberspace. If you try anything, we’ll know it’s you.”

Strong attribution leads to deterrence. The detailed NSA capabilities leaked by Edward Snowden help with this, because they bolster an image of an almost-omniscient NSA.

It’s not, though—which brings us back to the arms race. A world where hackers and governments have the same capabilities, where governments can masquerade as hackers or as other governments, and where much of the attribution evidence intelligence agencies collect remains secret, is a dangerous place.

So is a world where countries have secret capabilities for deception and deception detection, and are constantly trying to get the best of each other. This is the world of today, though, and we need to be prepared for it.

This essay previously appeared in the Christian Science Monitor.

Posted on March 9, 2015 at 7:09 AM

Friday Squid Blogging: Biodegradable Thermoplastic Inspired by Squid Teeth

There’s a new 3D-printable biodegradable thermoplastic:

Pennsylvania State University researchers have synthesized a biodegradable thermoplastic that can be used for molding, extrusion, 3D printing, as an adhesive, or a coating using structural proteins from the ring teeth on squid tentacles.

Another article:

The researchers took genes from a squid and put them into E. coli bacteria. “You can insert genes into this organism and while it produces its own genes, [it] produces this extra protein,” Demirel explains. He compares the process to making wine or beer, except that instead of the fermentation process producing alcohol, it produces more of the synthesized squid protein.

They began producing the material in a 1-liter tank but have since moved to a 300-liter tank, and can make 30-40 grams a day. In addition, they’ve made several changes to make the production process cheaper, whittling the cost down from $50 per gram to $100 per kilogram. Demirel says they are looking at using algae instead of bacteria to cut costs further.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on March 6, 2015 at 4:21 PM

Data and Goliath's Big Idea

Data and Goliath is a book about surveillance, both government and corporate. It’s an exploration in three parts: what’s happening, why it matters, and what to do about it. This is a big and important issue, and one that I’ve been working on for decades now. We’ve been on a headlong path of more and more surveillance, fueled by fear—of terrorism mostly—on the government side, and convenience on the corporate side. My goal was to step back and say “wait a minute; does any of this make sense?” I’m proud of the book, and hope it will contribute to the debate.

But there’s a big idea here too, and that’s the balance between group interest and self-interest. Data about us is individually private, and at the same time valuable to all of us collectively. How do we decide between the two? If President Obama tells us that we have to sacrifice the privacy of our data to keep our society safe from terrorism, how do we decide if that’s a good trade-off? If Google and Facebook offer us free services in exchange for allowing them to build intimate dossiers on us, how do we know whether to take the deal?

There are a lot of these sorts of deals on offer. Waze gives us real-time traffic information, but does it by collecting the location data of everyone using the service. The medical community wants our detailed health data to perform all sorts of health studies and to get early warning of pandemics. The government wants to know all about you to better deliver social services. Google wants to know everything about you for marketing purposes, but will “pay” you with free search, free e-mail, and the like.

Here’s another one I describe in the book: “Social media researcher Reynol Junco analyzes the study habits of his students. Many textbooks are online, and the textbook websites collect an enormous amount of data about how—and how often—students interact with the course material. Junco augments that information with surveillance of his students’ other computer activities. This is incredibly invasive research, but its duration is limited and he is gaining new understanding about how both good and bad students study—and has developed interventions aimed at improving how students learn. Did the group benefit of this study outweigh the individual privacy interest of the subjects who took part in it?”

Again and again, it’s the same trade-off: individual value versus group value.

I believe this is the fundamental issue of the information age, and solving it means careful thinking about the specific issues and a moral analysis of how they affect our core values.

You can see that in some of the debate today. I know hardened privacy advocates who think it should be a crime for people to withhold their medical data from the pool of information. I know people who are fine with pretty much any corporate surveillance but want to prohibit all government surveillance, and others who advocate the exact opposite.

When possible, we need to figure out how to get the best of both: how to design systems that make use of our data collectively to benefit society as a whole, while at the same time protecting people individually.

The world isn’t waiting; decisions about surveillance are being made for us—often in secret. If we don’t figure this out for ourselves, others will decide what they want to do with us and our data. And we don’t want that. I say: “We don’t want the FBI and NSA to secretly decide what levels of government surveillance are the default on our cell phones; we want Congress to decide matters like these in an open and public debate. We don’t want the governments of China and Russia to decide what censorship capabilities are built into the Internet; we want an international standards body to make those decisions. We don’t want Facebook to decide the extent of privacy we enjoy amongst our friends; we want to decide for ourselves.”

In my last chapter, I write: “Data is the pollution problem of the information age, and protecting privacy is the environmental challenge. Almost all computers produce personal information. It stays around, festering. How we deal with it—how we contain it and how we dispose of it—is central to the health of our information economy. Just as we look back today at the early decades of the industrial age and wonder how our ancestors could have ignored pollution in their rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we addressed the challenge of data collection and misuse.”

That’s it; that’s our big challenge. Some of our data is best shared with others. Some of it can be ‘processed’—anonymized, maybe—before reuse. Some of it needs to be disposed of properly, either immediately or after a time. And some of it should be saved forever. Knowing what data goes where is a balancing act between group and self-interest, a trade-off that will continually change as technology changes, and one that we will be debating for decades to come.

This essay previously appeared on John Scalzi’s blog Whatever.

EDITED TO ADD (3/7): Hacker News thread.

Posted on March 6, 2015 at 2:10 PM

FREAK: Security Rollback Attack Against SSL

This week, we learned about an attack called “FREAK”—“Factoring Attack on RSA-EXPORT Keys”—that can break the encryption of many websites. Basically, some sites’ implementations of secure sockets layer technology, or SSL, contain both strong encryption algorithms and weak encryption algorithms. Connections are supposed to use the strong algorithms, but in many cases an attacker can force the website to use the weaker encryption algorithms and then decrypt the traffic. From Ars Technica:

In recent days, a scan of more than 14 million websites that support the secure sockets layer or transport layer security protocols found that more than 36 percent of them were vulnerable to the decryption attacks. The exploit takes about seven hours to carry out and costs as little as $100 per site.

This is a general class of attack I call “security rollback” attacks. Basically, the attacker forces the system users to revert to a less secure version of their protocol. Think about the last time you used your credit card. The verification procedure involved the retailer’s computer connecting with the credit card company. What if you snuck around to the back of the building and severed the retailer’s phone lines? Most likely, the retailer would have still accepted your card, but defaulted to making a manual impression of it and maybe looking at your signature. The result: you’d have a much easier time using a stolen card.

In this case, the security flaw was designed in deliberately. Matthew Green writes:

Back in the early 1990s when SSL was first invented at Netscape Corporation, the United States maintained a rigorous regime of export controls for encryption systems. In order to distribute crypto outside of the U.S., companies were required to deliberately “weaken” the strength of encryption keys. For RSA encryption, this implied a maximum allowed key length of 512 bits.

The 512-bit export grade encryption was a compromise between dumb and dumber. In theory it was designed to ensure that the NSA would have the ability to “access” communications, while allegedly providing crypto that was still “good enough” for commercial use. Or if you prefer modern terms, think of it as the original “golden master key.”

The need to support export-grade ciphers led to some technical challenges. Since U.S. servers needed to support both strong and weak crypto, the SSL designers used a “cipher suite” negotiation mechanism to identify the best cipher both parties could support. In theory this would allow “strong” clients to negotiate “strong” ciphersuites with servers that supported them, while still providing compatibility to the broken foreign clients.

And that’s the problem. The weak algorithms are still there, and can be exploited by attackers.
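
To see how the rollback works, here is a toy model of the cipher-suite negotiation Green describes. The logic is a simplification for illustration, not the real TLS wire protocol, and the client bug shown is a schematic version of the flaw, not any particular library’s code:

# Toy model of the FREAK downgrade. The real attack also requires
# factoring the server's 512-bit export RSA key; this sketch shows only
# the negotiation flaw that makes the downgrade possible.

STRONG = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
EXPORT = ["TLS_RSA_EXPORT_WITH_RC4_40_MD5"]  # export-grade: 512-bit RSA

def server_pick(offered_suites, supported_suites):
    # The server takes the first offered suite it supports. Still
    # supporting the export suites at all is the server-side weakness.
    for suite in offered_suites:
        if suite in supported_suites:
            return suite
    return None

def buggy_client_accepts(chosen, offered_suites):
    # The client-side bug: accept whatever the server chose, without
    # checking it against what the client actually offered.
    return chosen is not None  # the correct check is: chosen in offered_suites

offered = list(STRONG)              # an honest client offers only strong crypto
tampered = list(EXPORT)             # a man in the middle rewrites the offer
chosen = server_pick(tampered, STRONG + EXPORT)

print("negotiated:", chosen)                                     # the export suite
print("client accepts:", buggy_client_accepts(chosen, offered))  # True: downgrade succeeds

Once the weak suite is negotiated, the attacker factors the 512-bit key offline (the seven hours and $100 in the Ars Technica quote) and can then read or modify the session.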

Fixes are coming. Companies like Apple are quickly rolling out patches. But the vulnerability has been around for over a decade, and has almost certainly been used by national intelligence agencies and criminals alike.

This is the generic problem with government-mandated backdoors, key escrow, “golden keys,” or whatever you want to call them. We don’t know how to design a third-party access system that checks for morality; once we build in such access, we then have to ensure that only the good guys can do it. And we can’t. Or, to quote the Economist: “…mathematics applies to just and unjust alike; a flaw that can be exploited by Western governments is vulnerable to anyone who finds it.”

This essay previously appeared on the Lawfare blog.

EDITED TO ADD: Microsoft Windows is vulnerable.

Posted on March 6, 2015 at 10:46 AM

The TSA's FAST Personality Screening Program Violates the Fourth Amendment

New law journal article: “A Slow March Towards Thought Crime: How the Department of Homeland Security’s FAST Program Violates the Fourth Amendment,” by Christopher A. Rogers. From the abstract:

FAST is currently designed for deployment at airports, where heightened security threats justify warrantless searches under the administrative search exception to the Fourth Amendment. FAST scans, however, exceed the scope of the administrative search exception. Under this exception, the courts would employ a balancing test, weighing the governmental need for the search versus the invasion of personal privacy of the search, to determine whether FAST scans violate the Fourth Amendment. Although the government has an acute interest in protecting the nation’s air transportation system against terrorism, FAST is not narrowly tailored to that interest because it cannot detect the presence or absence of weapons but instead detects merely a person’s frame of mind. Further, the system is capable of detecting an enormous amount of the scannee’s highly sensitive personal medical information, ranging from detection of arrhythmias and cardiovascular disease, to asthma and respiratory failures, physiological abnormalities, psychiatric conditions, or even a woman’s stage in her ovulation cycle. This personal information warrants heightened protection under the Fourth Amendment. Rather than target all persons who fly on commercial airplanes, the Department of Homeland Security should limit the use of FAST to where it has credible intelligence that a terrorist act may occur and should place those people scanned on prior notice that they will be scanned using FAST.

Posted on March 6, 2015 at 6:28 AM

Now Corporate Drones Are Spying on Cell Phones

The marketing firm Adnear is using drones to track cell phone users:

The capture does not involve conversations or personally identifiable information, according to director of marketing and research Smriti Kataria. It uses signal strength, cell tower triangulation, and other indicators to determine where the device is, and that information is then used to map the user’s travel patterns.

“Let’s say someone is walking near a coffee shop,” Kataria said by way of example.

The coffee shop may want to offer in-app ads or discount coupons to people who often walk by but don’t enter, as well as to frequent patrons when they are elsewhere. Adnear’s client would be the coffee shop or other retailers who want to entice passersby.

[…]

The system identifies a given user through the device ID, and the location info is used to flesh out the user’s physical traffic pattern in his profile. Although anonymous, the user is “identified” as a code. The company says that no name, phone number, router ID, or other personally identifiable information is captured, and there is no photography or video.

Does anyone except this company believe that device ID is not personally identifiable information?
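
A short sketch of why that’s hard to believe: a stable device ID is precisely what lets individual pings be linked into one person’s travel pattern. The record format below is hypothetical, but the linkage step is the whole business model:

from collections import defaultdict

# Hypothetical pings of the kind a drone-mounted sniffer could log:
# (device_id, timestamp, latitude, longitude). No name, no phone number.
pings = [
    ("a3f9", "2015-03-02T08:01", 34.052, -118.244),
    ("a3f9", "2015-03-02T18:33", 34.090, -118.310),
    ("a3f9", "2015-03-03T08:02", 34.052, -118.244),
    ("a3f9", "2015-03-03T18:30", 34.090, -118.310),
]

# Group pings by the "anonymous" device ID to rebuild a travel pattern.
tracks = defaultdict(list)
for device_id, when, lat, lon in pings:
    tracks[device_id].append((when, lat, lon))

for device_id, trail in tracks.items():
    # A repeated morning location and evening location is a home/work
    # pair, which typically narrows the candidate set to one household.
    # The pseudonym protects very little once the trail is this rich.
    print(device_id, "->", trail)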

Posted on March 5, 2015 at 6:33 AM

Tom Ridge Can Find Terrorists Anywhere

One of the problems with our current discourse about terrorism and terrorist policies is that the people entrusted with counterterrorism—those whose job it is to surveil, study, or defend against terrorism—become so consumed with their role that they start seeing terrorists everywhere. So it comes as no surprise that if you ask Tom Ridge, the former head of the Department of Homeland Security, about potential terrorism risks at a new LA football stadium, he finds them everywhere.

From a report he prepared—paid, I’m sure—about the location of a new football stadium:

Specifically, locating an NFL stadium at the Inglewood-Hollywood Park site needlessly increases risks for existing interests: LAX and tenant airlines, the NFL, the City of Los Angeles, law enforcement and first responders as well as the citizens and commercial enterprises in surrounding areas and across global transportation networks and supply chains. That risk would be expanded with the additional stadium and “soft target” infrastructure that would encircle the facility locally.

To be clear, total risk cannot be eliminated at any site. But basic risk management principles suggest that the proximity of these two sites creates a separate and additional set of risks that are wholly unnecessary.

In the post 9/11 world, the threat of terrorism is a permanent condition. As both a former governor and secretary of homeland security, it is my opinion that the peril of placing a National Football League stadium in the direct flight path of LAX—layering risk—outweighs any benefits over the decades-long lifespan of the facility.

If a decision is made to move forward at the Inglewood/Hollywood Park site, the NFL, state and local leaders, and those they represent, must be willing to accept the significant risk and the possible consequences that accompany a stadium at the location. This should give both public and private leaders in the area some pause. At the very least, an open, public debate should be enabled so that all interests may understand the comprehensive and interconnected security, safety and economic risks well before a shovel touches the ground.

I’m sure he can’t help himself.

I am reminded of Glenn Greenwald’s essay on the “terrorist expert” industry. I am also reminded of this story about a father taking pictures of his daughters.

On the plus side, now we all have a convincing argument against development. “You can’t possibly build that shopping mall near my home, because OMG! terrorism.”

Posted on March 4, 2015 at 6:40 AM

Data and Goliath: Reviews and Excerpts

On the net right now, there are excerpts from the Introduction on Scientific American, Chapter 5 on the Atlantic, Chapter 6 on the Blaze, Chapter 8 on Ars Technica, Chapter 15 on Slate, and Chapter 16 on Motherboard. That might seem like a lot, but it’s only 9,000 of the book’s 80,000 words: just over 11%.

There are also a few reviews: from Boing Boing, Booklist, Kirkus Reviews, and Nature. More reviews coming.

Amazon claims to be temporarily out of stock, but that’ll only be for a day or so. There are many other places to buy the book, including Indie Bound, which serves independent booksellers.

Book website is here.

Posted on March 3, 2015 at 1:03 PM
