Entries Tagged "Apple"

"Ask Nicely" Doesn't Work as a Security Mechanism

Apple’s map application shows more of Taiwan than Google Maps:

The Taiwanese government/military, like many others around the world, requests that satellite imagery providers, such as Google Maps, blur out certain sensitive military installations. Unfortunately, Apple apparently didn’t get that memo.

[…]

According to reports, the Taiwanese defence ministry hasn’t filed a formal request with Apple yet, but thought it would be a great idea to splash this across the media and bring everyone’s attention to the story. Obviously it would be terribly embarrassing if some unscrupulous person read the story and then found various uncensored military installations around Taiwan and posted photos of them.

Photos at the link.

Posted on October 11, 2012 at 7:03 AM

Database of 12 Million Apple UDIDs Leaked

In this story, we learn that hackers got their hands on a database of 12 million Apple Unique Device Identifiers (UDIDs) by hacking an FBI laptop.

My question isn’t about the hack, but about the data. Why does an FBI agent have user identification information about 12 million iPhone users on his laptop? And how did the FBI get their hands on this data in the first place?

For its part, the FBI denies everything:

In a statement released Tuesday afternoon, the FBI said, “The FBI is aware of published reports alleging that an FBI laptop was compromised and private data regarding Apple UDIDs was exposed. At this time there is no evidence indicating that an FBI laptop was compromised or that the FBI either sought or obtained this data.”

Apple also denies giving the database to the FBI.

Okay, so where did the database come from? And are there really 12 million, or only one million?

EDITED TO ADD (9/12): A company called BlueToad is the source of the leak.

If you’ve been hacked, you’re not going to be informed:

DeHart said his firm would not be contacting individual consumers to notify them that their information had been compromised, instead leaving it up to individual publishers to contact readers as they see fit.

Posted on September 6, 2012 at 6:48 AM

Is iPhone Security Really this Good?

Simson Garfinkel writes that the iPhone has such good security that the police can’t use it for forensics anymore:

Technologies the company has adopted protect Apple customers’ content so well that in many situations it’s impossible for law enforcement to perform forensic examinations of devices seized from criminals. Most significant is the increasing use of encryption, which is beginning to cause problems for law enforcement agencies when they encounter systems with encrypted drives.

“I can tell you from the Department of Justice perspective, if that drive is encrypted, you’re done,” Ovie Carroll, director of the cyber-crime lab at the Computer Crime and Intellectual Property Section in the Department of Justice, said during his keynote address at the DFRWS computer forensics conference in Washington, D.C., last Monday. “When conducting criminal investigations, if you pull the power on a drive that is whole-disk encrypted you have lost any chance of recovering that data.”

Yes, I believe that full-disk encryption—whether Apple’s FileVault or Microsoft’s BitLocker (I don’t know what the iOS system is called)—is good; but its security is only as good as the user is at choosing a good password.

The iPhone always supported a PIN lock, but the PIN wasn’t a deterrent to a serious attacker until the iPhone 3GS. Because those early phones didn’t use their hardware to perform encryption, a skilled investigator could hack into the phone, dump its flash memory, and directly access the phone’s address book, e-mail messages, and other information. But now, with Apple’s more sophisticated approach to encryption, investigators who want to examine data on a phone have to try every possible PIN. Examiners perform these so-called brute-force attacks with special software, because the iPhone can be programmed to wipe itself if the wrong PIN is provided more than 10 times in a row. This software must be run on the iPhone itself, limiting the guessing speed to 80 milliseconds per PIN. Trying all four-digit PINs therefore requires no more than 800 seconds, a little more than 13 minutes. However, if the user chooses a six-digit PIN, the maximum time required would be 22 hours; a nine-digit PIN would require 2.5 years, and a 10-digit PIN would take 25 years. That’s good enough for most corporate secrets—and probably good enough for most criminals as well.
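
The arithmetic in that excerpt is easy to check. Here is a minimal sketch, assuming the quoted rate of 80 milliseconds per on-device guess and a worst case of exhausting the entire keyspace:

```python
# Worst-case time to try every all-numeric PIN of a given length,
# at the 80 ms per guess quoted in the article above (an assumed
# constant rate, not a measured one).

SECONDS_PER_GUESS = 0.080  # 80 milliseconds per attempt

def worst_case_seconds(digits: int) -> float:
    """Seconds to exhaust the keyspace of an n-digit PIN."""
    return (10 ** digits) * SECONDS_PER_GUESS

for digits in (4, 6, 9, 10):
    t = worst_case_seconds(digits)
    print(f"{digits:>2}-digit PIN: {t / 3600:12,.1f} hours "
          f"(~{t / (365.25 * 86400):.1f} years)")
```

The output matches the article’s figures: roughly 13 minutes, 22 hours, 2.5 years, and 25 years, respectively.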

Leaving aside the user practice questions—my guess is that very few users, even those with something to hide, use a ten-digit PIN—could this possibly be true? In the introduction to Applied Cryptography, almost 20 years ago, I wrote: “There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files.”

Since then, I’ve learned two things: 1) there are a lot of gradients to kid sister cryptography, and 2) major government cryptography is very hard to get right. It’s not the cryptography; it’s everything around the cryptography. I said as much in the preface to Secrets and Lies in 2000:

Cryptography is a branch of mathematics. And like all mathematics, it involves numbers, equations, and logic. Security, palpable security that you or I might find useful in our lives, involves people: things people know, relationships between people, people and how they relate to machines. Digital security involves computers: complex, unstable, buggy computers.

Mathematics is perfect; reality is subjective. Mathematics is defined; computers are ornery. Mathematics is logical; people are erratic, capricious, and barely comprehensible.

If, in fact, we’ve finally achieved something resembling this level of security for our computers and handheld computing devices, this is something to celebrate.

But I’m skeptical.

Another article.

Slashdot has a thread on the article.

EDITED TO ADD: More analysis. And Elcomsoft can crack iPhones.

Posted on August 21, 2012 at 1:42 PM

Yet Another Risk of Storing Everything in the Cloud

A hacker can social-engineer his way into your cloud storage and delete everything you have.

It turns out, a billing address and the last four digits of a credit card number are the only two pieces of information anyone needs to get into your iCloud account. Once supplied, Apple will issue a temporary password, and that password grants access to iCloud.

Apple tech support confirmed to me twice over the weekend that all you need to access someone’s AppleID is the associated e-mail address, the billing address, and the last four digits of the credit card on file.

Here’s how a hacker gets that information.

First you call Amazon and tell them you are the account holder, and want to add a credit card number to the account. All you need is the name on the account, an associated e-mail address, and the billing address. Amazon then allows you to input a new credit card. (Wired used a bogus credit card number from a website that generates fake card numbers that conform with the industry’s published self-check algorithm.) Then you hang up.

Next you call back, and tell Amazon that you’ve lost access to your account. Upon providing a name, billing address, and the new credit card number you gave the company on the prior call, Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account. This allows you to see all the credit cards on file for the account—not the complete numbers, just the last four digits. But, as we know, Apple only needs those last four digits. We asked Amazon to comment on its security policy, but it didn’t have anything to share by press time.

It’s also worth noting that one wouldn’t have to call Amazon to pull this off. Your pizza guy could do the same thing, for example. If you have an AppleID, every time you call Pizza Hut, you’re giving the 16-year-old on the other end of the line all he needs to take over your entire digital life.
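
The “published self-check algorithm” mentioned in the excerpt is the Luhn checksum that payment card numbers carry. It catches transcription errors, not fraud, which is why a generator can trivially produce numbers that pass it. A minimal validity check, as a sketch:

```python
def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the Luhn checksum."""
    digits = [int(d) for d in number]
    # Double every second digit from the right (excluding the check
    # digit); if doubling yields a two-digit value, subtract 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

# Well-known test values, not real cards:
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

Passing the check says nothing about whether the card exists or has funds, which is exactly the property the attack exploits.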

The victim here is a popular technology journalist, so he got a level of tech support that’s not available to most of us. I believe this will increasingly become a problem, and that cloud providers will need better and more automated solutions.

Initial post.

EDITED TO ADD (8/13): Apple has changed its policy and stopped taking password reset requests over the phone, pretty much as a result of this incident.

EDITED TO ADD (8/17): A follow-on story about how he recovered all of his data.

Posted on August 8, 2012 at 6:31 AM

Law Enforcement Forensics Tools Against Smart Phones

Turns out the iPhone passcode can be easily bypassed:

XRY works by first jailbreaking the handset. According to Micro Systemation, no ‘backdoors’ created by Apple are used; instead, it makes use of security flaws in the operating system, the same way that regular jailbreakers do.

Once the iPhone has been jailbroken, the tool then goes on to ‘brute-force’ the passcode, trying every possible four-digit combination until the correct passcode has been found. Given the limited number of possible combinations for a four-digit passcode—10,000, ranging from 0000 to 9999—this doesn’t take long.

Once the handset has been jailbroken and the passcode guessed, all the data on the handset, including call logs, messages, contacts, GPS data and even keystrokes, can be accessed and examined.

One of the morals is to use an eight-digit passcode.
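
The arithmetic behind that moral: each extra digit multiplies the attacker’s work by ten. A toy sketch of the candidate space (purely illustrative; this is not XRY’s code):

```python
from itertools import product

def candidate_passcodes(digits: int):
    """Yield every all-numeric passcode of the given length, in order."""
    for combo in product("0123456789", repeat=digits):
        yield "".join(combo)

# 10**4 = 10,000 four-digit candidates, but 10**8 = 100,000,000
# eight-digit candidates: ten thousand times more work for the attacker.
print(sum(1 for _ in candidate_passcodes(4)))  # 10000
```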

EDITED TO ADD (4/13): This has been debunked. The 1Password blog has a fairly lengthy post discussing the details of the XRY tool.

Posted on April 3, 2012 at 2:01 PM

Recent Developments in Full Disclosure

Last week, I had a long conversation with Robert Lemos over an article he was writing about full disclosure. He had noticed that companies have recently been reacting more negatively to security researchers publishing vulnerabilities about their products.

The debate over full disclosure is as old as computing, and I’ve written about it before. Disclosing security vulnerabilities is good for security and good for society, but vendors really hate it. It results in bad press, forces them to spend money fixing vulnerabilities, and comes out of nowhere. Over the past decade or so, we’ve had an uneasy truce between security researchers and product vendors. That truce seems to be breaking down.

Lemos believes the problem is that because today’s research targets aren’t traditional computer companies—they’re phone companies, or embedded system companies, or whatnot—they’re not aware of the history of the debate or the truce, and are responding more viscerally. For example, Carrier IQ threatened legal action against the researcher who outed it, and only backed down after the EFF got involved. I am reminded of the reaction of locksmiths to Matt Blaze’s vulnerability disclosures about lock security; they thought he was evil incarnate for publicizing hundred-year-old security vulnerabilities in lock systems. And just last week, I posted about a full-disclosure debate in the virology community.

I think Lemos has put his finger on part of what’s going on, but that there’s more. I think that companies, both computer and non-computer, are trying to retain control over the situation. Apple’s heavy-handed retaliation against researcher Charlie Miller is an example of that. On one hand, Apple should know better than to do this. On the other hand, it’s acting in the best interest of its brand: the fewer researchers looking for vulnerabilities, the fewer vulnerabilities it has to deal with.

It’s easy to believe that if only people wouldn’t disclose problems, we could pretend they didn’t exist, and everything would be better. Certainly this is the position taken by the DHS over terrorism: public information about the problem is worse than the problem itself. It’s similar to Americans’ willingness to give both Bush and Obama the power to arrest and indefinitely detain any American without any trial whatsoever. It largely explains the common public backlash against whistle-blowers. What we don’t know can’t hurt us, and what we do know will also be known by those who want to hurt us.

There’s some profound psychological denial going on here, and I’m not sure of the implications of it all. It’s worth paying attention to, though. Security requires transparency and disclosure, and if we willingly give that up, we’re a lot less safe as a society.

Posted on December 6, 2011 at 7:31 AM
