Entries Tagged "authentication"


The Doxing Trend

If the director of the CIA can’t keep his e-mail secure, what hope do the rest of us have—for our e-mail or any of our digital information?

None, and that’s why the companies that we entrust with our digital lives need to be required to secure it for us, and held accountable when they fail. It’s not just a personal or business issue; it’s a matter of public safety.

The details of the story are worth repeating. Someone, reportedly a teenager, hacked into CIA Director John O. Brennan’s AOL account. He says he did so by posing as a Verizon employee in a call to Verizon, obtaining personal information about Brennan’s account, including his bank card number and his AOL e-mail address. Then he called AOL and pretended to be Brennan. Armed with the information he got from Verizon, he convinced AOL customer service to reset Brennan’s password.

The CIA director did nothing wrong. He didn’t choose a lousy password. He didn’t leave a copy of it lying around. He didn’t even send it in e-mail to the wrong person. The security failure, according to this account, was entirely with Verizon and AOL. Yet still Brennan’s e-mail was leaked to the press and posted on WikiLeaks.

This kind of attack is not new. In 2012, the Gmail and Twitter accounts of Wired writer Mat Honan were taken over by a hacker who first persuaded Amazon to give him Honan’s credit card details, then used that information to hack into his Apple ID account, and finally used that information to get into his Gmail account.

For most of us, our primary e-mail account is the “master key” to every one of our other accounts. If we click on a site’s “forgot your password?” link, that site will helpfully e-mail us a special URL that allows us to reset our password. That’s how Honan’s hacker got into his Twitter account, and presumably Brennan’s hacker could have done the same thing to any of Brennan’s accounts.
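
To make those mechanics concrete, here is a minimal sketch of a typical reset flow, in Python using only the standard library. Everything in it is illustrative, including the helper functions, the in-memory store, and the URL; it shows the generic pattern, not any particular provider’s implementation:

    import hashlib
    import secrets
    import time

    # Illustrative in-memory store; a real site would use a database.
    RESET_TOKENS = {}          # sha256(token) -> (user_id, expiry timestamp)
    TOKEN_LIFETIME_SECS = 900  # reset links usually expire quickly

    def send_email(address, body):
        print(f"(pretend) e-mailing {address}: {body}")  # stub for this demo

    def set_password(user_id, new_password):
        print(f"(pretend) setting a new password for {user_id}")  # stub

    def request_password_reset(user_id, email_address):
        """Generate a single-use reset token and e-mail it to the user."""
        token = secrets.token_urlsafe(32)  # unguessable random value
        token_hash = hashlib.sha256(token.encode()).hexdigest()
        RESET_TOKENS[token_hash] = (user_id, time.time() + TOKEN_LIFETIME_SECS)
        # Whoever can read this e-mail can take over the account, which is
        # exactly why the e-mail inbox acts as the "master key."
        send_email(email_address, f"https://example.com/reset?token={token}")

    def redeem_reset_token(token, new_password):
        """Set a new password if the token is valid, fresh, and unused."""
        token_hash = hashlib.sha256(token.encode()).hexdigest()
        record = RESET_TOKENS.pop(token_hash, None)  # single use: pop, not get
        if record is None:
            return False
        user_id, expiry = record
        if time.time() > expiry:
            return False
        set_password(user_id, new_password)
        return True

The point of the sketch: no matter how strong each account’s password is, the security of every account behind such a flow reduces to the security of one e-mail inbox.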

Internet e-mail providers are trying to beef up their authentication systems. Yahoo recently announced it would do away with passwords, instead sending a one-time authentication code to the user’s smartphone. Google has long had an optional two-step authentication system that involves sending a one-time code to the user via phone call or SMS.
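
The codes sent by SMS or phone call are typically just random numbers generated on the provider’s servers, but authenticator apps more commonly derive them locally using the TOTP algorithm of RFC 6238, in which the phone and the server share a secret and independently compute the same short-lived code. Here is a minimal sketch in Python, standard library only; the base32 secret below is a made-up example, not a real credential:

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval         # 30-second time step
        digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Phone and server compute the same code from the shared secret, so the
    # code itself never has to be transmitted ahead of login time.
    print(totp("JBSWY3DPEHPK3PXP"))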

You might think cell phone authentication would thwart these attacks. Even if a hacker persuaded your e-mail provider to change your password, he wouldn’t have your phone and couldn’t obtain the one-time code. But there’s a way to beat this, too. Indie developer Grant Blakeman’s Gmail account was hacked last year, even though he had that extra-secure two-step system turned on. The hackers persuaded his cell phone company to forward his calls to another number, one controlled by the hackers, so they were able to get the necessary one-time code. And from Google, they were able to reset his Instagram password.

Brennan was lucky. He didn’t have anything classified on his AOL account. There were no personal scandals exposed in his e-mail. Yes, his 47-page top-secret clearance form was sensitive, but not embarrassing. Honan was less lucky, and lost irreplaceable photographs of his daughter.

Neither of them should have been put through this. None of us should have to worry about this.

The problem is a system that makes this possible, and companies that don’t care because they don’t suffer the losses. It’s a classic market failure, and government intervention is how we have to fix it.

It’s only when the costs of insecurity exceed the costs of doing it right that companies will invest properly in our security. Companies need to be responsible for the personal information they store about us. They need to secure it better, and they need to suffer penalties if they improperly release it. This means regulatory security standards.

The government should not mandate how a company secures our data; that would shift the responsibility to the government and stifle innovation. Instead, government should establish minimum standards for results, and let the market figure out how to meet them most effectively. It should also allow individuals whose information has been exposed to sue for damages. This is a model that has worked in every other aspect of public safety, and it needs to be applied here as well.

We have a role to play in this, too. One of the reasons security measures are so easy to bypass is that we as consumers demand they be easy to use, and easy for us to bypass if we lose or forget our passwords. We need to recognize that good security will be less convenient. Again, regulations mandating this will make it more common, and eventually more acceptable.

Information security is complicated, and hard to get right. I’m an expert in the field, and it’s hard for me. It’s hard for the director of the CIA. And it’s hard for you. Security settings on websites are complicated and confusing. Security products are no different. As long as it’s solely the user’s responsibility to get security right, and solely the user’s loss if it goes wrong, we’re never going to solve it.

It doesn’t have to be this way. We should demand better and more usable security from the companies we do business with and whose services we use online. But because we don’t have any real visibility into those companies’ security, we should demand our government start regulating the security of these companies as a matter of public safety.

This essay previously appeared on CNN.com.

Posted on October 28, 2015 at 6:24 AM

Stealing Fingerprints

The news from the Office of Personnel Management hack keeps getting worse. In addition to the personal records of over 20 million US government employees, we’ve now learned that the hackers stole fingerprint files for 5.6 million of them.

This is fundamentally different from the data thefts we regularly read about in the news, and should give us pause before we entrust our biometric data to large networked databases.

There are three basic kinds of data that can be stolen. The first, and most common, is authentication credentials: passwords and other information that allows someone else access to our accounts and—usually—our money. Examples would be the 56 million credit card numbers hackers stole from Home Depot in 2014, or the 21.5 million Social Security numbers hackers stole in the OPM breach. The motivation is typically financial: the hackers want to steal money from our bank accounts, charge fraudulent purchases to our credit cards, open new lines of credit in our name, or file for fraudulent tax refunds.

It’s a huge illegal business, but we know how to deal with it when it happens. We detect these hacks as quickly as possible, and update our account credentials as soon as we discover an attack. (We also need to stop treating Social Security numbers as if they were secret.)

The second kind of data stolen is personal information. Examples would be the medical data stolen and exposed when Sony was hacked in 2014, or the very personal data from the infidelity website Ashley Madison stolen and published this year. In these instances, there is no real way to recover after a breach. Once the data is public, or in the hands of an adversary, it’s impossible to make it private again.

This is the main consequence of the OPM data breach. Whoever stole the data—we suspect it was the Chinese—got copies of the security-clearance paperwork of all those government employees. That paperwork includes the answers to some very personal and embarrassing questions, and now opens those employees up to blackmail and other forms of coercion.

Fingerprints are another type of data entirely. They’re used to identify people at crime scenes, but increasingly they’re used as an authentication credential. If you have an iPhone, for example, you probably use your fingerprint to unlock your phone. This type of authentication is increasingly common, replacing a password—something you know—with a biometric: something you are. The problem with biometrics is that they can’t be replaced. So while it’s easy to update your password or get a new credit card number, you can’t get a new finger.

And now, for the rest of their lives, 5.6 million US government employees need to remember that someone, somewhere, has their fingerprints. And we really don’t know the future value of this data. If, in twenty years, we routinely use our fingerprints at ATMs, that fingerprint database will become very profitable to criminals. If fingerprints start being used on our computers to authorize access to files and data, that database will become very profitable to spies.

Of course, it’s not that simple. Fingerprint readers employ various technologies to prevent being fooled by fake fingers: detecting temperature, pores, a heartbeat, and so on. But this is an arms race between attackers and defenders, and there are many ways to fool fingerprint readers. When Apple introduced its iPhone fingerprint reader, hackers figured out how to fool it within days, and have continued to fool each new generation of phone readers equally quickly.

Not every use of biometrics requires the biometric data to be stored in a central server somewhere. Apple’s system, for example, only stores the data locally: on your phone. That way there’s no central repository to be hacked. And many systems don’t store the biometric data at all, only a mathematical function of the data that can be used for authentication but can’t be used to reconstruct the actual biometric. Unfortunately, OPM stored copies of actual fingerprints.
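
As a deliberately crude illustration of that last idea, here is a sketch that stores only a salted hash of a quantized feature vector, never the biometric itself. This is not how production systems work: real biometric readings are noisy, so real template protection uses fuzzy extractors and secure sketches rather than the naive bucketing below, and the feature vectors here are made up:

    import hashlib
    import secrets

    def quantize(features, step=0.5):
        """Bucket each feature so small sensor noise maps to the same bytes.
        Values near bucket boundaries still flip, which is one reason real
        systems use fuzzy matching instead of this naive scheme."""
        return bytes(int(f / step) & 0xFF for f in features)

    def enroll(features):
        """Store a salted one-way hash of the template, not the biometric."""
        salt = secrets.token_bytes(16)
        return salt, hashlib.sha256(salt + quantize(features)).digest()

    def verify(features, salt, stored):
        candidate = hashlib.sha256(salt + quantize(features)).digest()
        return secrets.compare_digest(candidate, stored)

    salt, stored = enroll([1.02, 3.47, 2.91])        # hypothetical reading
    print(verify([1.03, 3.46, 2.90], salt, stored))  # True: noise within step

A database of such hashes is far less valuable to a thief than the raw fingerprint images OPM stored, because the hash can’t be turned back into a finger.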

Ashley Madison has taught us all the dangers of entrusting our intimate secrets to a company’s computers and networks, because once that data is out, there’s no getting it back. All biometric data, whether fingerprints, retinal scans, voiceprints, or something else, has that same property. We should be skeptical of any attempt to store this data en masse, whether by governments or by corporations. We need our biometrics for authentication, and we can’t afford to lose them to hackers.

This essay previously appeared on Motherboard.

Posted on October 2, 2015 at 6:35 AM

SS7 Phone-Switch Flaw Enabled Surveillance

Interesting:

Remember that vulnerability in the SS7 inter-carrier network that lets hackers and spies track your cellphone virtually anywhere in the world? It’s worse than you might have thought. Researchers speaking to Australia’s 60 Minutes have demonstrated that it’s possible for anyone to intercept phone calls and text messages through that same network. So long as the attackers have access to an SS7 portal, they can forward your conversations to an online recording device and reroute the call to its intended destination. This helps anyone bent on surveillance, of course, but it also means that a well-equipped criminal could grab your verification messages (such as the kind used in two-factor authentication) and use them before you’ve even seen them.

I wrote about cell phone tracking based on SS7 in Data & Goliath (pp. 2-3):

The US company Verint sells cell phone tracking systems to both corporations and governments worldwide. The company’s website says that it’s “a global leader in Actionable Intelligence solutions for customer engagement optimization, security intelligence, and fraud, risk and compliance,” with clients in “more than 10,000 organizations in over 180 countries.” The UK company Cobham sells a system that allows someone to send a “blind” call to a phone—one that doesn’t ring, and isn’t detectable. The blind call forces the phone to transmit on a certain frequency, allowing the sender to track that phone to within one meter. The company boasts government customers in Algeria, Brunei, Ghana, Pakistan, Saudi Arabia, Singapore, and the United States. Defentek, a company mysteriously registered in Panama, sells a system that can “locate and track any phone number in the world…undetected and unknown by the network, carrier, or the target.” It’s not an idle boast; telecommunications researcher Tobias Engel demonstrated the same thing at a hacker conference in 2008. Criminals do the same today.

Posted on August 21, 2015 at 6:47 AM

Yet Another New Biometric: Brainprints

New research:

In “Brainprint,” a newly published study in academic journal Neurocomputing, researchers from Binghamton University observed the brain signals of 45 volunteers as they read a list of 75 acronyms, such as FBI and DVD. They recorded the brain’s reaction to each group of letters, focusing on the part of the brain associated with reading and recognizing words, and found that participants’ brains reacted differently to each acronym, enough that a computer system was able to identify each volunteer with 94 percent accuracy. The results suggest that brainwaves could be used by security systems to verify a person’s identity.

I have no idea what the false-negative rate is, or how robust this biometric is over time, but the article makes the important point that, unlike most biometrics, this one can be updated.

“If someone’s fingerprint is stolen, that person can’t just grow a new finger to replace the compromised fingerprint—the fingerprint for that person is compromised forever. Fingerprints are ‘non-cancellable.’ Brainprints, on the other hand, are potentially cancellable. So, in the unlikely event that attackers were actually able to steal a brainprint from an authorized user, the authorized user could then ‘reset’ their brainprint,” Laszlo said.

Presumably the resetting involves a new set of acronyms.
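
As a toy illustration of what an identification-accuracy figure like the 94 percent quoted above actually measures, here is a nearest-centroid sketch in Python with NumPy. It is not the paper’s method, and the “brain responses” below are synthetic vectors; it only shows the shape of the evaluation: enroll each subject, then check how often new readings are matched to the right person:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 45 subjects, 20 feature vectors each, clustered
    # around a subject-specific centroid.
    n_subjects, n_trials, n_features = 45, 20, 16
    centroids = rng.normal(size=(n_subjects, n_features))
    trials = centroids[:, None, :] + 0.3 * rng.normal(
        size=(n_subjects, n_trials, n_features))

    # Enroll on the first half of each subject's trials, test on the rest.
    enrolled = trials[:, :n_trials // 2].mean(axis=1)
    test = trials[:, n_trials // 2:]

    correct = sum(
        int(np.argmin(np.linalg.norm(enrolled - trial, axis=1)) == subject)
        for subject in range(n_subjects)
        for trial in test[subject]
    )
    total = n_subjects * (n_trials // 2)
    print(f"identification accuracy: {correct / total:.0%}")

With noise this low the toy model scores near 100 percent; the real number depends entirely on how separable actual brain responses are, which is the open question.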

Author’s self-archived version of the paper (pdf).

Posted on June 4, 2015 at 10:36 AM

How Did the Feds Identify Dread Pirate Roberts?

Last month, I wrote that the FBI identified Ross W. Ulbricht as the Silk Road’s Dread Pirate Roberts through a leaky CAPTCHA. Seems that story doesn’t hold water:

The FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But [Nicholas] Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.

“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.

But this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?

“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. Thus, the leaky CAPTCHA story is full of holes.”

My guess is that the NSA provided the FBI with this information. We know that the NSA provides surveillance data to the FBI and the DEA, under the condition that they lie about where it came from in court.

NSA whistleblower William Binney explained how it’s done:

…when you can’t use the data, you have to go out and do a parallel construction, [which] means you use what you would normally consider to be investigative techniques, [and] go find the data. You have a little hint, though. NSA is telling you where the data is…

Posted on October 20, 2014 at 6:19 AM

Security for Vehicle-to-Vehicle Communications

The National Highway Traffic Safety Administration (NHTSA) has released a report titled “Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application.” It’s very long, and mostly not interesting to me, but there are security concerns sprinkled throughout: both authentication to ensure that all the communications are accurate and can’t be spoofed, and privacy to ensure that the communications can’t be used to track cars. It’s nice to see this sort of thing thought about in the beginning, when the system is first being designed, and not tacked on at the end.

Posted on September 22, 2014 at 6:03 AM

