Friday Squid Blogging: Bobtail Squid Keeps Bacteria to Protect Its Eggs

The Hawaiian Bobtail Squid deposits bacteria on its eggs to keep them safe.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on October 2, 2015 at 4:11 PM • 49 Comments

Resilient Systems News

Former Raytheon CEO Bill Swanson has joined our board of directors.

For those who don't know, Resilient Systems is my company. I'm the CTO, and we sell an incident-response management platform that...well...helps IR teams to manage incidents. It's a single hub that allows a team to collect data about an incident, assign and manage tasks, automate actions, integrate intelligence information, and so on. It's designed to be powerful, flexible, and intuitive -- if your HR or legal person needs to get involved, she has to be able to use it without any training. I'm really impressed with how well it works. Incident response is all about people, and the platform makes teams more effective. This is probably the best description of what we do.

We have lots of large- and medium-sized companies as customers. They're all happy, and we continue to sell this thing at an impressive rate. Our Q3 numbers were fantastic. It's kind of scary, really.

Posted on October 2, 2015 at 2:06 PM • 15 Comments

Stealing Fingerprints

The news from the Office of Personnel Management hack keeps getting worse. In addition to the personal records of over 20 million US government employees, we've now learned that the hackers stole fingerprint files for 5.6 million of them.

This is fundamentally different from the data thefts we regularly read about in the news, and should give us pause before we entrust our biometric data to large networked databases.

There are three basic kinds of data that can be stolen. The first, and most common, is authentication credentials. These are passwords and other information that allows someone else access to our accounts and -- usually -- our money. An example would be the 56 million credit card numbers hackers stole from Home Depot in 2014, or the 21.5 million Social Security numbers hackers stole in the OPM breach. The motivation is typically financial. The hackers want to steal money from our bank accounts, process fraudulent credit card charges in our name, open new lines of credit, or file for fraudulent tax refunds.

It's a huge illegal business, but we know how to deal with it when it happens. We detect these hacks as quickly as possible, and update our account credentials as soon as we detect an attack. (We also need to stop treating Social Security numbers as if they were secret.)

The second kind of data stolen is personal information. Examples would be the medical data stolen and exposed when Sony was hacked in 2014, or the very personal data from the infidelity website Ashley Madison stolen and published this year. In these instances, there is no real way to recover after a breach. Once the data is public, or in the hands of an adversary, it's impossible to make it private again.

This is the main consequence of the OPM data breach. Whoever stole the data -- we suspect it was the Chinese -- got copies of the security-clearance paperwork of all those government employees. This documentation includes the answers to some very personal and embarrassing questions, and now opens these employees up to blackmail and other types of coercion.

Fingerprints are another type of data entirely. They're used to identify people at crime scenes, but increasingly they're used as an authentication credential. If you have an iPhone, for example, you probably use your fingerprint to unlock your phone. This type of authentication is increasingly common, replacing a password -- something you know -- with a biometric: something you are. The problem with biometrics is that they can't be replaced. So while it's easy to update your password or get a new credit card number, you can't get a new finger.

And now, for the rest of their lives, 5.6 million US government employees need to remember that someone, somewhere, has their fingerprints. And we really don't know the future value of this data. If, in twenty years, we routinely use our fingerprints at ATMs, that fingerprint database will become very profitable to criminals. If fingerprints start being used on our computers to authorize our access to files and data, that database will become very profitable to spies.

Of course, it's not that simple. Fingerprint readers employ various technologies to prevent being fooled by fake fingers: detecting temperature, pores, a heartbeat, and so on. But this is an arms race between attackers and defenders, and there are many ways to fool fingerprint readers. When Apple introduced its iPhone fingerprint reader, hackers figured out how to fool it within days, and have continued to fool each new generation of phone readers equally quickly.

Not every use of biometrics requires the biometric data to be stored in a central server somewhere. Apple's system, for example, only stores the data locally: on your phone. That way there's no central repository to be hacked. And many systems don't store the biometric data at all, only a mathematical function of the data that can be used for authentication but can't be used to reconstruct the actual biometric. Unfortunately, OPM stored copies of actual fingerprints.
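
A minimal Python sketch of that idea, with entirely hypothetical feature values: instead of the fingerprint, store a salted one-way hash of a quantized template. Real systems need fuzzy extractors or secure sketches because biometric readings are noisy; the crude quantization step here only gestures at that problem.

    import hashlib
    import os

    def quantize(features, step=0.5):
        # Coarsely bucket each feature so small sensor noise maps to
        # the same template. Real systems use far more robust encodings.
        return tuple(round(f / step) for f in features)

    def enroll(features):
        # Store only a salted one-way hash of the template -- never
        # the raw biometric itself.
        salt = os.urandom(16)
        template = repr(quantize(features)).encode()
        return salt, hashlib.sha256(salt + template).hexdigest()

    def verify(features, salt, digest):
        # Recompute the hash from a fresh reading and compare.
        template = repr(quantize(features)).encode()
        return hashlib.sha256(salt + template).hexdigest() == digest

    salt, digest = enroll([3.1, 7.2, 0.4, 5.9])            # hypothetical reading
    print(verify([3.05, 7.25, 0.38, 5.92], salt, digest))  # True: noise absorbed
    print(verify([4.9, 2.0, 8.8, 1.1], salt, digest))      # False: wrong finger

If a database of these records is stolen, the thief gets salts and hashes, not fingerprints -- the property OPM's fingerprint files lacked.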

Ashley Madison has taught us all the dangers of entrusting our intimate secrets to a company's computers and networks, because once that data is out, there's no getting it back. All biometric data, whether it be fingerprints, retinal scans, voiceprints, or something else, has that same property. We should be skeptical of any attempts to store this data en masse, whether by governments or by corporations. We need our biometrics for authentication, and we can't afford to lose them to hackers.

This essay previously appeared on Motherboard.

Posted on October 2, 2015 at 6:35 AM • 53 Comments

Existential Risk and Technological Advancement

AI theorist Eliezer Yudkowsky coined Moore's Law of Mad Science: "Every eighteen months, the minimum IQ necessary to destroy the world drops by one point."

Oh, how I wish I'd said that.

Posted on October 1, 2015 at 12:03 PM • 62 Comments

Identifying CIA Officers in the Field

During the Cold War, the KGB was very adept at identifying undercover CIA officers in foreign countries through what was basically big data analysis. (Yes, this is a needlessly dense and very hard-to-read article. I think it's worth slogging through, though.)

Posted on October 1, 2015 at 7:00 AM • 27 Comments

Spoofing Fitness Trackers

The website Unfit Bits has a series of instructional videos on how to spoof fitness trackers, using such things as a metronome, pendulum, or power drill. With insurance companies like John Hancock offering discounts to people who allow them to verify their exercise program by opening up their fitness-tracker data, these are useful hacks.
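
Why does something as dumb as a pendulum work? Because most trackers infer steps from periodic spikes in accelerometer data. Here's a toy Python sketch of that idea -- the threshold logic is made up, not any vendor's actual algorithm -- showing a naive pedometer crediting a 2 Hz oscillation as a brisk walk:

    import math

    def count_steps(samples, threshold=1.3):
        # Count upward crossings of an acceleration-magnitude threshold
        # (in g). Real trackers add filtering and cadence checks, but
        # the underlying idea is the same.
        steps, above = 0, False
        for a in samples:
            if a > threshold and not above:
                steps, above = steps + 1, True
            elif a < threshold:
                above = False
        return steps

    rate = 50  # samples per second
    # A pendulum or metronome produces exactly this kind of signal:
    # a clean 2 Hz oscillation around 1 g, with no walking involved.
    fake = [1.0 + 0.5 * math.sin(2 * math.pi * 2.0 * t / rate)
            for t in range(rate * 60)]
    print(count_steps(fake))  # 120 "steps" in one simulated minute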

News article.

Posted on September 30, 2015 at 12:02 PM • 34 Comments

Volkswagen and Cheating Software

For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars' computers were able to detect when they were being tested, and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren't being tested, they belched out 40 times the pollutants. Their CEO has resigned, and the company will face an expensive recall, enormous fines and worse.

Cheating on regulatory testing has a long history in corporate America. It happens regularly in automobile emissions control and elsewhere. What's important in the VW case is that the cheating was preprogrammed into the algorithm that controlled cars' emissions.

Computers allow people to cheat in ways that are new. Because the cheating is encapsulated in software, the malicious actions can happen at a far remove from the testing itself. Because the software is "smart" in ways that normal objects are not, the cheating can be subtler and harder to detect.
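
To see how little code such cheating requires, here's a hypothetical Python sketch -- emphatically not Volkswagen's actual logic, which hasn't been published -- of a defeat device that recognizes a dynamometer test and switches emissions behavior:

    def looks_like_dyno_test(speed_kph, steering_deg, rear_wheel_kph):
        # On a two-wheel dynamometer the driven wheels spin at road
        # speed, the steering wheel never moves, and the undriven
        # wheels stay still. (Invented logic, for illustration only.)
        return speed_kph > 20 and abs(steering_deg) < 1.0 and rear_wheel_kph < 1.0

    def emissions_mode(speed_kph, steering_deg, rear_wheel_kph):
        if looks_like_dyno_test(speed_kph, steering_deg, rear_wheel_kph):
            return "full NOx controls"   # clean, but costs fuel and power
        return "reduced NOx controls"    # dirty, out on the road

    print(emissions_mode(50, 0.2, 0.0))   # test bench: full NOx controls
    print(emissions_mode(50, 8.5, 50.0))  # real road: reduced NOx controls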

We've already had examples of smartphone manufacturers cheating on processor benchmark tests: phones detect when they're being benchmarked and artificially increase their performance. We're going to see this in other industries.

The Internet of Things is coming. Many industries are moving to add computers to their devices, and that will bring with it new opportunities for manufacturers to cheat. Light bulbs could fool regulators into appearing more energy efficient than they are. Temperature sensors could fool buyers into believing that food has been stored at safer temperatures than it has been. Voting machines could appear to work perfectly -- except on the first Tuesday of November, when they undetectably switch a few percent of votes from one party's candidates to another's.

My worry is that some corporate executives won't interpret the VW story as a cautionary tale involving just punishments for a bad mistake but will see it instead as a demonstration that you can get away with something like that for six years.

And they'll cheat smarter. For all of VW's brazenness, its cheating was obvious once people knew to look for it. Far cleverer would be to make the cheating look like an accident. Overall software quality is so bad that products ship with thousands of programming mistakes. Most of them don't affect normal operations, which is why your software generally works just fine. Some of them do, which is why your software occasionally fails, and needs constant updates. By making cheating software appear to be a programming mistake, the cheating looks like an accident. And, unfortunately, this type of deniable cheating is easier than people think.
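
Here's a hypothetical Python illustration of deniable cheating -- invented for this essay, taken from no real product -- in which an emissions-compliance check "fails open" thanks to what looks like a careless exception handler:

    def nox_within_limit(sensor_reading, limit_ppm=60.0):
        # Reads as defensive coding; functions as a back door. If the
        # reading ever arrives malformed -- say, as "180,0" from a
        # "buggy" parser using a comma decimal separator -- the bare
        # except silently reports compliance.
        try:
            return float(sensor_reading) <= limit_ppm
        except:
            return True  # fail open: a malformed reading counts as clean

    print(nox_within_limit(42.0))     # True, legitimately
    print(nox_within_limit(180.0))    # False: over the limit
    print(nox_within_limit("180,0"))  # True: the "bug" hides the violation

In a code review, that exception handler looks like carelessness, not sabotage -- which is exactly the point.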

Computer-security experts believe that intelligence agencies have been doing this sort of thing for years, both with the consent of the software developers and surreptitiously.

This problem won't be solved through computer security as we normally think of it. Conventional computer security is designed to prevent outside hackers from breaking into your computers and networks. The car analog would be security software that prevented an owner from tweaking his own engine to run faster but in the process emit more pollutants. What we need to contend with is a very different threat: malfeasance programmed in at the design stage.

We already know how to protect ourselves against corporate misbehavior. Ronald Reagan once said "trust, but verify" when speaking about the Soviet Union cheating on nuclear treaties. We need to be able to verify the software that controls our lives.

Software verification has two parts: transparency and oversight. Transparency means making the source code available for analysis. The need for this is obvious; it's much easier to hide cheating software if a manufacturer can hide the code.

But transparency doesn't magically reduce cheating or improve software quality, as anyone who uses open-source software knows. It's only the first step. The code must be analyzed. And because software is so complicated, that analysis can't be limited to a once-every-few-years government test. We need private analysis as well.

It was researchers at private labs in the United States and Germany who eventually outed Volkswagen. So transparency can't just mean making the code available to government regulators and their representatives; it needs to mean making the code available to everyone.

Both transparency and oversight are being threatened in the software world. Companies routinely fight making their code public and attempt to muzzle security researchers who find problems, citing the proprietary nature of the software. It's a fair complaint, but the public interests of accuracy and safety need to trump business interests.

Proprietary software is increasingly being used in critical applications: voting machines, medical devices, breathalyzers, electric power distribution, systems that decide whether or not someone can board an airplane. We're ceding more control of our lives to software and algorithms. Transparency is the only way to verify that they're not cheating us.

There's no shortage of corporate executives willing to lie and cheat their way to profits. We saw another example of this last week: Stewart Parnell, the former CEO of the now-defunct Peanut Corporation of America, was sentenced to 28 years in jail for knowingly shipping out salmonella-tainted products. That may seem excessive, but nine people died and many more fell ill as a result of his cheating.

Software will only make malfeasance like this easier to commit and harder to prove. Fewer people need to know about the conspiracy. It can be done in advance, nowhere near the testing time or site. And, if the software remains undetected for long enough, it could easily be the case that no one in the company remembers that it's there.

We need better verification of the software that controls our lives, and that means more -- and more public -- transparency.

This essay previously appeared on CNN.com.

EDITED TO ADD: Three more essays.

Posted on September 30, 2015 at 9:13 AM • 69 Comments

How GCHQ Tracks Internet Users

The Intercept has a new story from the Snowden documents about Internet surveillance by the UK's GCHQ:

The mass surveillance operation, code-named KARMA POLICE, was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom's electronic eavesdropping agency, Government Communications Headquarters, or GCHQ.


One system builds profiles showing people's web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cell phone locations, and social media interactions. Separate programs were built to keep tabs on "suspicious" Google searches and usage of Google Maps.


As of March 2009, the largest slice of data Black Hole held -- 41 percent -- was about people's Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people's use of tools to browse the Internet anonymously.

Lots more in the article. The Intercept also published 28 new top secret NSA and GCHQ documents.

Posted on September 29, 2015 at 6:16 AM • 41 Comments

Good Article on the Sony Attack

Fortune has a three-part article on the Sony attack by North Korea. There's not a lot of tech here; it's mostly about Sony's internal politics regarding the movie and IT security before the attack, and some about their reaction afterwards.

Despite what I wrote at the time, I now believe that North Korea was responsible for the attack. This is the article that convinced me. It's about the US government's reaction to the attack.

Posted on September 28, 2015 at 6:22 AM • 42 Comments

Friday Squid Blogging: Disney's Minigame Squid Wars

It looks like a Nintendo game.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on September 25, 2015 at 4:30 PM • 176 Comments
