Blog: June 2014 Archives

More on Hacking Team's Government Spying Software

Hacking Team is an Italian malware company that sells exploit tools to governments. Both Kaspersky Lab and Citizen Lab have published detailed reports on its capabilities against Android, iOS, Windows Mobile, and BlackBerry smartphones.

They allow, for example, for covert collection of emails, text messages, call history and address books, and they can be used to log keystrokes and obtain search history data. They can take screenshots, record audio from the phones to monitor calls or ambient conversations, hijack the phone’s camera to snap pictures or piggyback on the phone’s GPS system to monitor the user’s location. The Android version can also enable the phone’s Wi-Fi function to siphon data from the phone wirelessly instead of using the cell network to transmit it. The latter would incur data charges and raise the phone owner’s suspicion.

[…]

Once on a system, the iPhone module uses advanced techniques to avoid draining the phone’s battery, turning on the phone’s microphone, for example, only under certain conditions.

“They can just turn on the mic and record everything going on around the victim, but the battery life is limited, and the victim can notice something is wrong with the iPhone, so they use special triggers,” says Costin Raiu, head of Kaspersky’s Global Research and Analysis team.

One of those triggers might be when the victim’s phone connects to a specific WiFi network, such as a work network, signaling the owner is in an important environment. “I can’t remember having seen such advanced techniques in other mobile malware,” he says.

Hacking Team’s mobile tools also have a “crisis” module that kicks in when they sense the presence of certain detection activities occurring on a device, such as packet sniffing, and then pause the spyware’s activity to avoid detection. There is also a “wipe” function to erase the tool from infected systems.

Hacking Team claims to sell its tools only to ethical governments, but Citizen Lab has found evidence of their use in Saudi Arabia. It can’t be certain the Saudi government is a customer, but there’s good circumstantial evidence. In general, circumstantial evidence is all we have. Citizen Lab has found Hacking Team servers in many countries, but it’s a perfectly reasonable strategy for Country A to locate its servers in Country B.

And remember, this is just one example of government spyware. Assume that the NSA—as well as the governments of China, Russia, and a handful of other countries—have their own systems that are at least as powerful.

Posted on June 26, 2014 at 6:37 AM · 37 Comments

Pepper Spray Drones

Coming soon to a protest near you: drones that fire pepper spray bullets.

Desert Wolf’s website states that its Skunk octacopter drone is fitted with four high-capacity paintball barrels, each capable of firing up to 20 bullets per second.

In addition to pepper-spray ammunition, the firm says it can also be armed with dye-marker balls and solid plastic balls.

The machine can carry up to 4,000 bullets at a time as well as “blinding lasers” and on-board speakers that can communicate warnings to a crowd.

Posted on June 25, 2014 at 2:19 PM · 68 Comments

Risks of Not Understanding a One-Way Function

New York City officials anonymized license plate data by hashing the individual plate numbers with MD5. (I know, they shouldn’t have used MD5, but ignore that for a moment.) Because they didn’t attach long random strings to the plate numbers—i.e., salt—it was trivially easy to hash all valid license plate numbers and deanonymize all the data.
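The attack is worth making concrete. The sketch below is my own illustration, not the actual NYC data or plate format: it assumes a toy plate scheme of one digit, one letter, and two digits, but the principle is identical for any small, enumerable identifier space. Because there is no salt, an attacker hashes every possible plate once and inverts the "anonymized" dataset with a dictionary lookup.

```python
import hashlib
import string

# Hypothetical plate format for illustration: digit + letter + two digits
# (26,000 candidates). Real plate formats are larger but still tiny by
# brute-force standards.

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# The "anonymized" release: hashes only, plates supposedly hidden.
secret_plates = ["5X44", "9A07"]
published = {md5_hex(p) for p in secret_plates}

# The attacker: hash the entire plate space once, building a lookup table.
rainbow = {md5_hex(f"{d}{l}{n:02d}"): f"{d}{l}{n:02d}"
           for d in string.digits
           for l in string.ascii_uppercase
           for n in range(100)}

recovered = sorted(rainbow[h] for h in published)
assert recovered == ["5X44", "9A07"]  # every "anonymous" record deanonymized
```

A long random salt per record would have forced the attacker to repeat the full enumeration for every individual hash; better still, a keyed construction like HMAC (with the key then destroyed) leaves nothing to brute-force at all.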

Of course, this technique is not news.

ArsTechnica article. Hacker News thread.

Posted on June 25, 2014 at 6:36 AM · 33 Comments

Could Keith Alexander's Advice Possibly Be Worth $600K a Month?

Ex-NSA director Keith Alexander has his own consulting company: IronNet Cybersecurity Inc. His advice does not come cheap:

Alexander offered to provide advice to Sifma for $1 million a month, according to two people briefed on the talks. The asking price later dropped to $600,000, the people said, speaking on condition of anonymity because the negotiation was private.

Alexander declined to comment on the details, except to say that his firm will have contracts “in the near future.”

Kenneth Bentsen, Sifma’s president, said at a Bloomberg Government event yesterday in Washington that “cybersecurity is probably our number one priority” now that most regulatory changes imposed after the 2008 credit crisis have been absorbed.

SIFMA is the Securities Industry and Financial Markets Association. Think of how much actual security they could buy with that $600K a month. Unless he’s giving them classified information.

Digby:

But don’t worry, everything Alexander knows will only benefit the average American like you and me. There’s no reason to suspect that he is trading his high level of inside knowledge to benefit a bunch of rich people all around the globe. Because patriotism.

Or, as Recode.net said: “For another million, I’ll show you the back door we put in your router.”

EDITED TO ADD (7/13): Rep. Alan Grayson is suspicious.

Posted on June 24, 2014 at 2:30 PM · 53 Comments

Defending Against Algorithm Substitution Attacks

Interesting paper: M. Bellare, K. Paterson, and P. Rogaway, “Security of Symmetric Encryption against Mass Surveillance.”

Abstract: Motivated by revelations concerning population-wide surveillance of encrypted communications, we formalize and investigate the resistance of symmetric encryption schemes to mass surveillance. The focus is on algorithm-substitution attacks (ASAs), where a subverted encryption algorithm replaces the real one. We assume that the goal of “big-brother” is undetectable subversion, meaning that ciphertexts produced by the subverted encryption algorithm should reveal plaintexts to big-brother yet be indistinguishable to users from those produced by the real encryption scheme. We formalize security notions to capture this goal and then offer both attacks and defenses. In the first category we show that successful (from the point of view of big brother) ASAs may be mounted on a large class of common symmetric encryption schemes. In the second category we show how to design symmetric encryption schemes that avoid such attacks and meet our notion of security. The lesson that emerges is the danger of choice: randomized, stateless schemes are subject to attack while deterministic, stateful ones are not.

Posted on June 24, 2014 at 7:21 AM · 29 Comments

Building Retro Reflectors

A group of researchers has reverse-engineered the NSA’s retro reflectors and recreated them using software-defined radio (SDR):

An SDR Ossmann designed and built, called HackRF, was a key part of his work in reconstructing the NSA’s retro-reflector systems. Such systems come in two parts – a plantable “reflector” bug and a remote SDR-based receiver.

One reflector, which the NSA called Ragemaster, can be fixed to a computer’s monitor cable to pick up on-screen images. Another, Surlyspawn, sits on the keyboard cable and harvests keystrokes. After a lot of trial and error, Ossmann found these bugs can be remarkably simple devices – little more than a tiny transistor and a 2-centimetre-long wire acting as an antenna.

Getting the information from the bugs is where SDRs come in. Ossmann found that using the radio to emit a high-power radar signal causes a reflector to wirelessly transmit the data from keystrokes, say, to an attacker. The set-up is akin to a large-scale RFID-chip system. Since the signals returned from the reflectors are noisy and often scattered across different bands, SDR’s versatility is handy, says Robin Heydon at Cambridge Silicon Radio in the UK. “Software-defined radio is flexibly programmable and can tune in to anything,” he says.

The NSA devices are LOUDAUTO, SURLYSPAWN, TAWDRYYARD, and RAGEMASTER. Here are videos that talk about how TAWDRYYARD and LOUDAUTO work.

This is important research. While the information we have about these sorts of tools is largely from the NSA, it is fanciful to assume that they are the only intelligence agency using this technology. And it’s equally fanciful to assume that criminals won’t be using this technology soon, even without Snowden’s documents. Understanding and building these tools is the first step to protecting ourselves from them.

Posted on June 23, 2014 at 6:51 AM · 28 Comments

Co3 Systems Is Hiring

At the beginning of the year, I announced that I’d joined Co3 Systems as its CTO. Co3 Systems makes coordination software—what I hear called workflow management—for incident response. Here’s a 3:30-minute video overview of how it works. It’s old; we’ve put a whole bunch of new features in the system since we made that.

We’ve had a phenomenal first two quarters, and we’re growing. We’re hiring for a bunch of positions, including a production ops engineer, an incident response specialist, and a software engineer.

Posted on June 20, 2014 at 2:19 PM · 18 Comments

Paying People to Infect their Computers

Research paper: “It’s All About The Benjamins: An empirical study on incentivizing users to ignore security advice,” by Nicolas Christin, Serge Egelman, Timothy Vidas, and Jens Grossklags.

Abstract: We examine the cost for an attacker to pay users to execute arbitrary code—potentially malware. We asked users at home to download and run an executable we wrote without being told what it did and without any way of knowing it was harmless. Each week, we increased the payment amount. Our goal was to examine whether users would ignore common security advice—not to run untrusted executables—if there was a direct incentive, and how much this incentive would need to be. We observed that for payments as low as $0.01, 22% of the people who viewed the task ultimately ran our executable. Once increased to $1.00, this proportion increased to 43%. We show that as the price increased, more and more users who understood the risks ultimately ran the code. We conclude that users are generally unopposed to running programs of unknown provenance, so long as their incentives exceed their inconvenience.

The experiment was run on Mechanical Turk, which means we don’t know who these people were or even if they were sitting at computers they owned (as opposed to, say, computers at an Internet cafe somewhere). But if you want to build a fair-trade botnet, this is a reasonable way to go about it.

Two articles.

Posted on June 19, 2014 at 6:28 AM · 32 Comments

Use of Social Media by ISIS

Here are two articles about how effectively the Islamic State of Iraq and Syria (ISIS)—the militant group that has just taken over half of Iraq—is using social media. Its dedicated Android app, which automatically tweets in its users’ names, is especially interesting. Also note how it coordinates the Twitter bombs for maximum effectiveness and to get around Twitter’s spam detectors.

Posted on June 17, 2014 at 10:17 AM · 10 Comments

The State of Cyberinsurance

Good essay on the current state of cyberinsurance.

So where does that leave the growing cyber insurance industry as it tries to figure out what losses it should cover and appropriate premiums and deductibles? One implication is that the industry faces much greater challenges than trying to quantify or cover intangible—and perhaps largely imaginary—losses to brands’ reputations. In light of the evidence that these losses may be fairly short-lived, that problem pales next to the challenges of determining what should be required of the insured under such policies. Insurers—just like the rest of us—don’t have a good handle on what security practices and controls are most effective, so they don’t know what to require of their customers. If I’m going to insure you against some type of risk, I want to know that you’re taking appropriate steps to prevent that risk yourself—installing smoke detectors or wearing your seat belt or locking your door. Insurers require these safety measures when they can because there’s a worry that you’ll be so reliant on the insurance coverage that you’ll stop taking those necessary precautions, a phenomenon known as moral hazard. Solving the moral hazard problem for cyberinsurance requires collecting better data than we currently have on what works—and what doesn’t—to prevent security breaches.

Posted on June 16, 2014 at 1:29 PM · 20 Comments

Seventh Movie-Plot Threat Contest Winner

On April 1, I announced the Seventh Mostly Annual Movie-Plot Threat Contest:

The NSA has won, but how did it do it? How did it use its ability to conduct ubiquitous surveillance, its massive data centers, and its advanced data analytics capabilities to come out on top? Did it take over the world overtly, or is it just pulling the strings behind everyone’s backs? Did it have to force companies to build surveillance into their products, or could it just piggy-back on market trends? How does it deal with liberal democracies and ruthless totalitarian dictatorships at the same time? Is it blackmailing Congress? How does the money flow? What’s the story?

On May 15, I announced the five semifinalists. The votes are in, and the winner is Doubleplusunlol:

The NSA, GCHQ et al actually don’t have the ability to conduct the mass surveillance that we now believe they do. Edward Snowden was in fact groomed, without his knowledge, to become a whistleblower, and the leaked documents were elaborately falsified by the NSA and GCHQ.

The encryption and security systems that ‘private’ companies are launching in the wake of these ‘revelations’, however, are in fact being covertly funded by the NSA/GCHQ—the aim being to encourage criminals and terrorists to use these systems, which the security agencies have built massive backdoors into.

The laws that Obama is now about to pass will in fact be the laws that the NSA will abide by—and will entrench mass surveillance as a legitimate government tool before the NSA even has the capability to perform it. That the online populace believes that they are already being watched will become a self-fulfilling prophecy; the people have built their own panopticon, wherein the belief that the Government is omniscient is sufficient for the Government to control them.

“He who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power; he makes them play spontaneously upon himself; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection.” —Michel Foucault, Surveiller et punir, 1975

For the record, Guy Macon was a close runner-up.

Congratulations, Doubleplusunlol. Contact me by e-mail, and I’ll send you your fabulous prizes.

Posted on June 13, 2014 at 6:12 AM · 11 Comments

Security Risks from Remote-Controlled Smart Devices

We’re starting to see a proliferation of smart devices that can be controlled from your phone. The security risk is, of course, that anyone can control them from their phones. Like this Japanese smart toilet:

The toilet, manufactured by Japanese firm Lixil, is controlled via an Android app called My Satis.

But a hardware flaw means any phone with the app could activate any of the toilets, researchers say.

The toilet uses Bluetooth to receive instructions via the app, but the PIN code for every model is hardwired to be four zeros (0000), meaning that it cannot be reset and can be activated by any phone with the My Satis app, a report by Trustwave’s Spiderlabs information security experts reveals.

This particular attack requires Bluetooth connectivity and doesn’t work over the Internet, but many other similar attacks will. And because these devices tend to have their code in firmware, a lot of them won’t be patchable. My guess is that the toilet’s manufacturer will ignore it.

On the other end of your home, a smart TV protocol is vulnerable to attack:

The attack uses the Hybrid Broadcast Broadband TV (HbbTV) standard that is widely supported in smart television sets sold in Europe.

The HbbTV system was designed to help broadcasters exploit the internet connection of a smart TV to add extra information to programmes or so advertisers can do a better job of targeting viewers.

But Yossef Oren and Angelos Keromytis, from the Network Security Lab, at Columbia University, have found a way to hijack HbbTV using a cheap antenna and carefully crafted broadcast messages.

The attacker could impersonate the user to the TV provider, websites, and so on. This attack also doesn’t use the Internet, but instead a nearby antenna. And in this case, we know that the manufacturers are going to ignore it:

Mr Oren said the standards body that oversaw HbbTV had been told about the security loophole. However, he added, the body did not think the threat from the attack was serious enough to require a re-write of the technology’s security.

Posted on June 10, 2014 at 8:24 AM · 46 Comments

Security and Human Behavior (SHB 2014)

I’m at SHB 2014: the Seventh Annual Interdisciplinary Workshop on Security and Human Behavior. This is a small invitational gathering of people studying various aspects of the human side of security. The fifty people in the room include psychologists, computer security researchers, sociologists, behavioral economists, philosophers, political scientists, lawyers, anthropologists, business school professors, neuroscientists, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

I call this the most intellectually stimulating two days of my year. The goal is discussion amongst the group. We do that by putting everyone on panels, but only letting each person talk for 5-7 minutes. The rest of the 90-minute panel is left for discussion.

The conference is organized by Alessandro Acquisti, Ross Anderson, and me. This year we’re at Cambridge University, in the UK.

The conference website contains a schedule and a list of participants, which includes links to writings by each of them. Ross Anderson is liveblogging the event. It’s also being recorded; I’ll post the link when it goes live.

Here are my posts on the first, second, third, fourth, fifth, and sixth SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops. It’s hard to believe we’ve been doing this for seven years.

Posted on June 9, 2014 at 4:50 AM · 17 Comments

GCHQ Intercept Sites in Oman

Last June, the Guardian published a story about GCHQ tapping fiber-optic Internet cables around the globe, part of a program codenamed TEMPORA. One of the facts not reported in that story—and supposedly the fact that the Guardian agreed to withhold in exchange for not being prosecuted by the UK authorities—was the location of the access points in the Middle East.

On Tuesday, the Register disclosed that they are in Oman:

The secret British spy base is part of a programme codenamed “CIRCUIT” and also referred to as Overseas Processing Centre 1 (OPC-1). It is located at Seeb, on the northern coast of Oman, where it taps in to various undersea cables passing through the Strait of Hormuz into the Persian/Arabian Gulf. Seeb is one of a three-site GCHQ network in Oman, at locations codenamed “TIMPANI”, “GUITAR” and “CLARINET”. TIMPANI, near the Strait of Hormuz, can monitor Iraqi communications. CLARINET, in the south of Oman, is strategically close to Yemen.

Access is provided through secret agreements with BT and Vodafone:

British national telco BT, referred to within GCHQ and the American NSA under the ultra-classified codename “REMEDY”, and Vodafone Cable (which owns the former Cable & Wireless company, aka “GERONTIC”) are the two top earners of secret GCHQ payments running into tens of millions of pounds annually.

There’s no source document associated with the story, but it does seem to be accurate. Glenn Greenwald comments:

“Snowden has no source relationship with Duncan (who is a great journalist), and never provided documents to him directly or indirectly, as Snowden has made clear,” Greenwald said in an email. “I can engage in informed speculation about how Duncan got this document—it’s certainly a document that several people in the Guardian UK possessed—but how he got it is something only he can answer.”

The reporter is staying mum on his source:

When Wired.co.uk asked Duncan Campbell—the investigative journalist behind the Register article revealing the Oman location—if he too had copies proving the allegations, he responded: “I won’t answer that question—given the conduct of the authorities.”

“I was able to look at some of the material provided in Britain to the Guardian by Edward Snowden last year,” Campbell, who is a forensic expert witness on communications data, tells us.

Campbell also published this on the NSA today.

EDITED TO ADD (6/16): Cyprus is another interception point for Middle East surveillance.

Posted on June 5, 2014 at 3:58 PM · 68 Comments

The Human Side of Heartbleed

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software insecurity, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes—thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.
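The mistake itself was a missing bounds check. The following toy model—my own sketch in Python, not OpenSSL’s actual C code—captures the pattern: a heartbeat request carries a payload plus a *claimed* payload length, and the vulnerable code echoed back that many bytes without checking the claim against the bytes actually received, so an inflated claim returned adjacent process memory.

```python
# Hypothetical contents of the memory adjacent to the request buffer.
ADJACENT_MEMORY = b"...private keys, session cookies, passwords..."

def heartbeat_reply(payload: bytes, claimed_len: int, patched: bool):
    # The request payload lands next to other data on the heap.
    buf = payload + ADJACENT_MEMORY
    if patched and claimed_len > len(payload):
        return None           # the fix: silently discard inconsistent requests
    return buf[:claimed_len]  # vulnerable path: trusts the attacker's length

# Honest use: claimed length matches the data sent.
assert heartbeat_reply(b"ping", 4, patched=False) == b"ping"

# The attack: claim far more bytes than were sent.
leak = heartbeat_reply(b"ping", 30, patched=False)
assert b"private keys" in leak            # adjacent memory leaks out

# After the patch, the same request returns nothing.
assert heartbeat_reply(b"ping", 30, patched=True) is None
```

In the real bug the claimed length was a 16-bit field, so each malicious heartbeat could pull back up to 64 KB of server memory, repeatable indefinitely and without leaving a trace in the logs.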

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.

The researchers alerted some of the larger companies quietly so that they could fix their systems before the public announcement. (Who to tell early is another very human problem: If you tell too few, you’re not really helping, but if you tell too many, the secret could get out.) Then Codenomicon announced the vulnerability.

One of the biggest problems we face in the security community is how to communicate these sorts of vulnerabilities. The story is technical, and people often don’t know how to react to the risk. In this case, the Codenomicon researchers did well. They created a public website explaining (in simple terms) the vulnerability and how to fix it, and they created a logo—a red bleeding heart—that every news outlet used for coverage of the story.

The first week of coverage varied widely, as some people panicked and others downplayed the threat. This wasn’t surprising: There was a lot of uncertainty about the risk, and it wasn’t immediately obvious how disastrous the vulnerability actually was.

The major Internet companies were quick to patch vulnerable systems. Individuals were less likely to update their passwords, but by and large, that was OK.

True to form, hackers started exploiting the vulnerability within minutes of the announcement. We assume that governments also exploited the vulnerability while they could. I’m sure the U.S. National Security Agency had advance warning.

By now, it’s largely over. There are still lots of unpatched systems out there. (Many of them are embedded hardware systems that can’t be patched.) The risk of attack is still there, but minimal. In the end, the actual damage was also minimal, although the expense of restoring security was great.

The question that remains is this: What should we expect in the future—are there more Heartbleeds out there?

Yes. Yes there are. The software we use contains thousands of mistakes—many of them security vulnerabilities. Lots of people are looking for these vulnerabilities: Researchers are looking for them. Criminals and hackers are looking for them. National intelligence agencies in the United States, the United Kingdom, China, Russia, and elsewhere are looking for them. The software vendors themselves are looking for them.

What happens when a vulnerability is found depends on who finds it. If the vendor finds it, it quietly fixes it. If a researcher finds it, he or she alerts the vendor and then reports it to the public. If a national intelligence agency finds the vulnerability, it either quietly uses it to spy on others or—if we’re lucky—alerts the vendor. If criminals and hackers find it, they use it until a security company notices and alerts the vendor, and then it gets fixed—usually within a month.

Heartbleed was unique because there was no single fix. The software had to be updated, and then websites had to regenerate their encryption keys and get new public-key certificates. After that, people had to update their passwords. This multi-stage process had to take place publicly, which is why the announcement happened the way it did.

Yes, it’ll happen again. But most of the time, it’ll be easier to deal with than this.

This essay previously appeared on The Mark News.

Posted on June 4, 2014 at 6:23 AM · 35 Comments

Chinese Hacking of the US

Chinese hacking of American computer networks is old news. For years we’ve known about their attacks against U.S. government and corporate targets. We’ve seen detailed reports of how they hacked The New York Times. Google has detected them going after Gmail accounts of dissidents. They’ve built sophisticated worldwide eavesdropping networks. These hacks target both military secrets and corporate intellectual property. They’re perpetrated by a combination of state, state-sponsored and state-tolerated hackers. It’s been going on for years.

On Monday, the Justice Department indicted five Chinese hackers in absentia, all associated with the Chinese military, for stealing corporate secrets from U.S. energy, metals and manufacturing companies. It’s entirely for show; the odds that the Chinese are going to send these people to the U.S. to stand trial are zero. But it does move what had been mostly a technical security problem into the world of diplomacy and foreign policy. By doing this, the U.S. government is taking a very public stand and saying “enough.”

The problem with that stand is that we’ve been doing much the same thing to China. Documents revealed by the whistleblower Edward Snowden show that the NSA has penetrated Chinese government and commercial networks, and is exfiltrating—that’s NSA talk for stealing—an enormous amount of secret data. We’ve hacked the networking hardware of one of their own companies, Huawei. We’ve intercepted networking equipment being sent there and installed monitoring devices. We’ve been listening in on their private communications channels.

The only difference between the U.S. and China’s actions is that the U.S. doesn’t engage in direct industrial espionage. That is, we don’t steal secrets from Chinese companies and pass them directly to U.S. competitors. But we do engage in economic espionage; we steal secrets from Chinese companies for an advantage in government trade negotiations, which directly benefits U.S. competitors. We might think this difference is important, but other countries are not as impressed with our nuance.

Already the Chinese are retaliating against the U.S. actions with rhetoric of their own. I don’t know the Chinese expression for ‘pot calling the kettle black,’ but it certainly fits in this case.

Again, none of this is new. The U.S. and the Chinese have been conducting electronic espionage on each other throughout the Cold War, and there’s no reason to think it’s going to change anytime soon. What’s different now is the ease with which the two countries can do this safely and remotely, over the Internet, as well as the massive amount of information that can be stolen with a few computer commands.

On the Internet today, it is much easier to attack systems and break into them than it is to defend those systems against attack, so the advantage is to the attacker. This is true for a combination of reasons: the ability of an attacker to concentrate his attack, the nature of vulnerabilities in computer systems, poor software quality and the enormous complexity of computer systems.

The computer security industry is used to coping with criminal attacks. In general, such attacks are untargeted. Criminals might have broken into Target’s network last year and stolen 40 million credit and debit card numbers, but they would have been happy with any retailer’s large credit card database. If Target’s security had been better than its competitors’, the criminals would have gone elsewhere. In this way, security is relative.

The Chinese attacks are different. For whatever reason, the government hackers wanted certain information inside the networks of Alcoa World Alumina, Westinghouse Electric, Allegheny Technologies, U.S. Steel, United Steelworkers Union and SolarWorld. It wouldn’t have mattered how those companies’ security compared with other companies; all that mattered was whether it was better than the ability of the attackers.

This is a fundamentally different security model—often called APT or Advanced Persistent Threat—and one that is much more difficult to defend against.

In a sense, American corporations are collateral damage in this battle of espionage between the U.S. and China. Taking the battle from the technical sphere into the foreign policy sphere might be a good idea, but it will work only if we have some moral high ground from which to demand that others not spy on us. As long as we run the largest surveillance network in the world and hack computer networks in foreign countries, we’re going to have trouble convincing others not to attempt the same on us.

This essay previously appeared on Time.com.

Posted on June 2, 2014 at 6:37 AM · 63 Comments
