Entries Tagged "passwords"


Credential Stealing as an Attack Vector

Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what’s missing is a recognition that software vulnerabilities aren’t the most common attack vector: credential stealing is.

The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group—basically the country’s chief hacker—gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

This is true for us, and it’s also true for those attacking us. It’s how the Chinese hackers breached the Office of Personnel Management in 2015. The 2013 criminal attack against Target Corporation started when hackers stole the login credentials of the company’s HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist who broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.

As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.

Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.
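
To make the first of those tricks concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the scheme behind most smartphone authenticator apps; the secret and parameters are illustrative, not from any particular product:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // timestep            # moving factor: current time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the phone share the secret, so a stolen password
# alone no longer gets an attacker in.
print(totp(b"secret-provisioned-at-enrollment"))
```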

Second, organizations need to invest in breach detection and—most importantly—incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in progress, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.
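
What does detecting an attack in progress look like? Here is a hedged sketch of one common heuristic, flagging logins that imply impossible travel; the event fields and the speed threshold are assumptions for illustration:

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Login:
    user: str
    lat: float
    lon: float
    ts: float  # seconds since epoch

def km_between(a: Login, b: Login) -> float:
    # Haversine great-circle distance between two login locations
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev: Login, cur: Login, max_kmh: float = 900.0) -> bool:
    hours = max((cur.ts - prev.ts) / 3600, 1e-6)
    return km_between(prev, cur) / hours > max_kmh    # faster than a jetliner: suspicious

ny = Login("alice", 40.71, -74.01, 0.0)
kyiv = Login("alice", 50.45, 30.52, 600.0)            # same account, ten minutes later
print(impossible_travel(ny, kyiv))                    # True: ~7,500 km in 10 minutes
```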

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.

This essay originally appeared on Xconomy.

EDITED TO ADD (5/23): Portuguese translation.

Posted on May 4, 2016 at 6:51 AM

FBI vs. Apple: Who Is Helping the FBI?

On Monday, the FBI asked the court for a two-week delay in a scheduled hearing on the San Bernardino iPhone case, because some “third party” approached it with a way into the phone. It wanted time to test this access method.

Who approached the FBI? We have no idea.

I have avoided speculation because the story makes no sense. Why did this third party wait so long? Why didn’t the FBI go through with the hearing anyway?

Now we have speculation that the third party is the Israeli forensic company Cellebrite. From its website:

Support for Locked iOS Devices Using UFED Physical Analyzer

Using UFED Physical Analyzer, physical and file system extractions, decoding and analysis can be performed on locked iOS devices with a simple or complex passcode. Simple passcodes will be recovered during the physical extraction process and enable access to emails and keychain passwords. If a complex password is set on the device, physical extraction can be performed without access to emails and keychain. However, if the complex password is known, emails and keychain passwords will be available.

My guess is that it’s not them. They have an existing and ongoing relationship with the FBI. If they could crack the phone, they would have done it months ago. This purchase order seems to be coincidental.

In any case, having a company name doesn’t mean that the story makes any more sense, but there it is. We’ll know more in a couple of weeks, although I doubt the FBI will share any more than they absolutely have to.

This development annoys me in every way. This case was never about the particular phone; it was about the precedent and the general issue of security vs. surveillance. This will just come up again another time, and we’ll have to go through this all over again—maybe with a company that isn’t as committed to our privacy as Apple is.

EDITED TO ADD: Watch former NSA Director Michael Hayden defend Apple and iPhone security. I’ve never seen him so impassioned before.

EDITED TO ADD (3/26): Marcy Wheeler has written extensively about the Cellebrite possibility.

Posted on March 24, 2016 at 12:34 PM

Decrypting an iPhone for the FBI

Earlier this week, a federal magistrate ordered Apple to assist the FBI in hacking into the iPhone used by one of the San Bernardino shooters. Apple will fight this order in court.

The policy implications are complicated. The FBI wants to set a precedent that tech companies will assist law enforcement in breaking their users’ security, and the technology community is afraid that the precedent will limit what sorts of security features it can offer customers. The FBI sees this as a privacy vs. security debate, while the tech community sees it as a security vs. surveillance debate.

The technology considerations are more straightforward, and shine a light on the policy questions.

The iPhone 5c in question is encrypted. This means that someone without the key cannot get at the data. This is a good security feature. Your phone is a very intimate device. It is likely that you use it for private text conversations, and that it’s connected to your bank accounts. Location data reveals where you’ve been, and correlating multiple phones reveals who you associate with. Encryption protects your phone if it’s stolen by criminals. Encryption protects the phones of dissidents around the world if they’re taken by local police. It protects all the data on your phone, and the apps that increasingly control the world around you.

This encryption depends on the user choosing a secure password, of course. If you had an older iPhone, you probably just used the default four-digit password. That’s only 10,000 possible passwords, making it pretty easy to guess. If the user enabled the more secure alphanumeric option, the password becomes much harder to guess.
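
The arithmetic is easy to check; a quick worked comparison of the two keyspaces:

```python
# Exact counts, not timing estimates.
four_digit = 10 ** 4         # default numeric passcode: 10,000 possibilities
six_alnum = 36 ** 6          # six lowercase letters/digits: 2,176,782,336
print(f"{four_digit:,} vs {six_alnum:,}")
```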

Apple added two more security features to the iPhone. First, a phone can be configured to erase its data after too many incorrect password guesses. Second, it enforces a delay between password guesses. This delay is barely noticeable to a user who types the wrong password and then has to retype the correct one, but it’s a large barrier for anyone trying to guess password after password in a brute-force attempt to break into the phone.
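
Here is a minimal sketch of those two protections together, with an illustrative delay schedule rather than Apple’s actual one:

```python
import time

DELAYS = [0, 0, 0, 0, 60, 300, 900, 3600]    # seconds of lockout after the Nth failure
MAX_FAILURES = 10

def erase_device() -> None:
    raise SystemExit("data erased")          # stand-in for a secure wipe

def check_passcode(entered: str, correct: str, failures: int) -> tuple[bool, int]:
    if entered == correct:
        return True, 0                       # success resets the failure counter
    failures += 1
    if failures >= MAX_FAILURES:
        erase_device()                       # too many wrong guesses: wipe the data
    time.sleep(DELAYS[min(failures, len(DELAYS) - 1)])
    return False, failures
```

A human who mistypes once never hits the long delays; a brute-forcer hits them almost immediately.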

But that iPhone has a security flaw. While the data is encrypted, the software controlling the phone is not. This means that someone can create a hacked version of the software and install it on the phone without the consent of the phone’s owner and without knowing the encryption key. This is what the FBI, and now the court, is demanding Apple do: It wants Apple to rewrite the phone’s software to make it possible to guess passwords quickly and automatically.

The FBI’s demands are specific to one phone, which might make its request seem reasonable if you don’t consider the technological implications: Authorities have the phone in their lawful possession, and they only need help seeing what’s on it in case it can tell them something about how the San Bernardino shooters operated. But the hacked software the court and the FBI want Apple to provide would be general. It would work on any phone of the same model. It has to.

Make no mistake; this is what a backdoor looks like. This is an existing vulnerability in iPhone security that could be exploited by anyone.

There’s nothing preventing the FBI from writing that hacked software itself, aside from budget and manpower issues. There’s every reason to believe, in fact, that such hacked software has been written by intelligence organizations around the world. Have the Chinese, for instance, written a hacked Apple operating system that records conversations and automatically forwards them to police? They would need to have stolen Apple’s code-signing key so that the phone would recognize the hacked software as valid, but governments have done that in the past with other keys and other companies. We simply have no idea who already has this capability.
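
To see why the code-signing key is the crux, here is a sketch of vendor-signed updates, using the Ed25519 primitives from the third-party Python cryptography package; real firmware validation is far more involved:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()        # Apple keeps the private half secret
firmware = b"hacked OS that guesses passcodes"
signature = vendor_key.sign(firmware)            # only possible with the private key

def phone_accepts(update: bytes, sig: bytes) -> bool:
    try:
        # The public key is baked into the device; only matching signatures pass.
        vendor_key.public_key().verify(sig, update)
        return True
    except InvalidSignature:
        return False

print(phone_accepts(firmware, signature))        # a stolen key makes any OS "valid"
```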

And while this sort of attack might be limited to state actors today, remember that attacks always get easier. Technology broadly spreads capabilities, and what was hard yesterday becomes easy tomorrow. Today’s top-secret NSA programs become tomorrow’s PhD theses and the next day’s hacker tools. Soon this flaw will be exploitable by cybercriminals to steal your financial data. Everyone with an iPhone is at risk, regardless of what the FBI demands Apple do.

What the FBI wants to do would make us less secure, even though it’s in the name of keeping us safe from harm. Powerful governments, democratic and totalitarian alike, want access to user data for both law enforcement and social control. We cannot build a backdoor that only works for a particular type of government, or only in the presence of a particular court order.

Either everyone gets security or no one does. Either everyone gets access or no one does. The current case is about a single iPhone 5c, but the precedent it sets will apply to all smartphones, computers, cars, and everything the Internet of Things promises. The danger is that the court’s demands will pave the way to the FBI forcing Apple and others to reduce the security levels of their smartphones and computers, as well as the security of cars, medical devices, homes, and everything else that will soon be computerized. The FBI may be targeting the iPhone of the San Bernardino shooter, but its actions imperil us all.

This essay previously appeared in the Washington Post.

The original essay contained a major error.

I wrote: “This is why Apple fixed this security flaw in 2014. Apple’s iOS 8.0 and its phones with an A7 or later processor protect the phone’s software as well as the data. If you have a newer iPhone, you are not vulnerable to this attack. You are more secure – from the government of whatever country you’re living in, from cybercriminals and from hackers.” Also: “We are all more secure now that Apple has closed that vulnerability.”

That was based on a misunderstanding of the security changes Apple made in what is known as the “Secure Enclave.” It turns out that all iPhones have this security vulnerability: all can have their software updated without knowing the password. The updated code has to be signed with Apple’s key, of course, which adds a major difficulty to the attack.

Dan Guido writes:

If the device lacks a Secure Enclave, then a single firmware update to iOS will be sufficient to disable passcode delays and auto erase. If the device does contain a Secure Enclave, then two firmware updates, one to iOS and one to the Secure Enclave, are required to disable these security features. The end result in either case is the same. After modification, the device is able to guess passcodes at the fastest speed the hardware supports.

The recovered iPhone is a model 5C. The iPhone 5C lacks TouchID and, therefore, lacks a Secure Enclave. The Secure Enclave is not a concern. Nearly all of the passcode protections are implemented in software by the iOS operating system and are replaceable by a single firmware update.
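
With the software protections stripped away, guessing speed is bounded only by the hardware key derivation, which Apple has said takes roughly 80 milliseconds per attempt. Treating that figure as given, the worst-case numbers work out like this:

```python
PER_GUESS_S = 0.080    # ~80 ms per attempt, per Apple's published figure
for name, keyspace in [("4-digit", 10 ** 4), ("6-digit", 10 ** 6), ("6 alphanumeric", 36 ** 6)]:
    print(f"{name}: {keyspace * PER_GUESS_S / 3600:,.1f} hours worst case")
# 4-digit: 0.2 hours; 6-digit: 22.2 hours; 6 alphanumeric: about 5.5 years.
```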

EDITED TO ADD (2/22): Lots more on my previous blog post on the topic.

How to set a longer iPhone password and thwart this kind of attack. Comey on the issue. And a secret memo describes the FBI’s broader strategy to weaken security.

Orin Kerr’s thoughts: Part 1, Part 2, and Part 3.

EDITED TO ADD (2/22): Tim Cook’s letter to his employees, and an FAQ. How CALEA relates to all this. Here’s what’s not available in the iCloud backup. The FBI told the county to change the password on the phone—that’s why they can’t get in. What the FBI needs is technical expertise, not back doors. And it’s not just this iPhone; the FBI wants Apple to break into lots of them. What China asks of tech companies—not that this is a country we should particularly want to model. Former NSA Director Michael Hayden on the case. There is quite a bit of detail about Apple’s efforts to assist the FBI in the legal motion the Department of Justice filed. Two good essays. Jennifer Granick’s comments.

In my essay, I talk about other countries developing this capability without Apple’s knowledge or consent. Making it work requires stealing a copy of Apple’s code-signing key, something that has been done by the authors of Stuxnet (probably the US) and Flame (probably also the US, with Israel) in the past.

Posted on February 22, 2016 at 6:58 AM

Nicholas Weaver on iPhone Security

Excellent essay:

Yes, an iPhone configured with a proper password has enough protection that, turned off, I’d be willing to hand mine over to the DGSE, NSA, or Chinese. But many (perhaps most) users don’t configure their phones right. Beyond just waiting for the suspect to unlock his phone, most people either use a weak 4-digit passcode (that can be brute-forced) or use the fingerprint reader (which the officer has a day to force the subject to use).

Furthermore, most iPhones have a lurking security landmine enabled by default: iCloud backup. A simple warrant to Apple can obtain this backup, which includes all photographs (so there is the selfie) and all undeleted iMessages! About the only information of value not included in this backup are the known WiFi networks and the suspect’s email, but a suspect’s email is a different warrant away anyway.

Finally, there is iMessage, whose “end-to-end” nature, despite FBI complaints, contains some significant weaknesses and deserves scare-quotes. To start with, iMessage’s encryption does not obscure any metadata, and as the saying goes, “the Metadata is the Message”. So with a warrant to Apple, the FBI can obtain all the information about every message sent and received except the message contents, including time, IP addresses, recipients, and the presence and size of attachments. Apple can’t hide this metadata, because Apple needs to use this metadata to deliver messages.

He explains how Apple could enable surveillance on iMessage and FaceTime:

So to tap Alice, it is straightforward to modify the keyserver to present an additional FBI key for Alice to everyone but Alice. Now the FBI (but not Apple) can decrypt all iMessages sent to Alice in the future. A similar modification, adding an FBI key to every request Alice makes for any keys other than her own, enables tapping all messages sent by Alice. There are similar architectural vulnerabilities which enable tapping of “end-to-end secure” FaceTime calls.
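
A toy model of that keyserver attack makes clear how invisible the tap would be; all names and the encryption step here are hypothetical stand-ins:

```python
DIRECTORY = {"alice": ["alice-device-key"], "bob": ["bob-device-key"]}
FBI_KEY = "fbi-escrow-key"

def lookup_keys(target: str, requester: str) -> list[str]:
    keys = list(DIRECTORY[target])
    if target == "alice" and requester != "alice":
        keys.append(FBI_KEY)                 # extra key served to everyone but Alice
    return keys

def send_message(sender: str, recipient: str, plaintext: str) -> list[tuple[str, str]]:
    # One encrypted copy of the message per returned key (encryption stubbed out).
    return [(k, f"enc[{k}]({plaintext})") for k in lookup_keys(recipient, sender)]

print(send_message("bob", "alice", "hi"))    # Bob unknowingly encrypts to the FBI too
```

Because clients fetch keys from Apple’s server and have no way to audit what other clients are served, Alice would see nothing unusual.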

There’s a persistent rumor going around that Apple is in the secret FISA Court, fighting a government order to make its platform more surveillance-friendly—and they’re losing. This might explain Apple CEO Tim Cook’s somewhat sudden vehemence about privacy. I have not found any confirmation of the rumor.

Posted on August 6, 2015 at 6:09 AM

Google’s Unguessable URLs

Google secures photos using public but unguessable URLs:

So why is that public URL more secure than it looks? The short answer is that the URL is working as a password. Photos URLs are typically around 40 characters long, so if you wanted to scan all the possible combinations, you’d have to work through 10^70 different combinations to get the right one, a problem on an astronomical scale. “There are enough combinations that it’s considered unguessable,” says Aravind Krishnaswamy, an engineering lead on Google Photos. “It’s much harder to guess than your password.”

It’s a perfectly valid security measure, although unsettling to some.
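
The quoted arithmetic holds up: a roughly 40-character token over a 64-character URL-safe alphabet gives 64^40, about 1.8 × 10^72 possibilities. A short sketch of generating such a token with Python’s standard library (the length is an assumption):

```python
import math
import secrets

combos = 64 ** 40
print(f"{combos:.2e} combinations, {math.log2(combos):.0f} bits of entropy")
print(secrets.token_urlsafe(30))    # 30 random bytes -> exactly 40 URL-safe characters
```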

Posted on July 20, 2015 at 5:25 AM

