Entries Tagged "Apple"

Using the iWatch for Authentication

Usability engineer Bruce Tognazzini talks about how an iWatch—which seems to be either a mythical Apple product or one actually in development—can make authentication easier.

Passcodes. The watch can and should, for most of us, eliminate passcodes altogether on iPhones and Macs and, if Apple’s smart, PCs: As long as my watch is in range, let me in! That, to me, would be the single most compelling feature a smartwatch could offer: If the watch did nothing but release me from having to enter my passcode/password 10 to 20 times a day, I would buy it. If the watch would just free me from having to enter passcodes, I would buy it even if it couldn’t tell the right time! I would happily strap it to my opposite wrist! This one is a must. Yes, Apple is working on adding fingerprint reading for iDevices, and that’s just wonderful, but it will still take time and trouble for the device to get an accurate read from the user. I want in now! Instantly! Let me in, let me in, let me in!

Apple must ensure, however, that, if you remove the watch, you must reestablish authenticity. (Reauthorizing would be an excellent place for biometrics.) Otherwise, we’ll have a spate of violent “watchjackings” replacing the non-violent iPhone-grabs going on today.
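
In effect, the watch would act as a possession factor whose validity is tied to continuous wear: proximity unlocks, removal revokes. A purely hypothetical sketch of that policy in Python (illustrative only; no actual Apple API is implied):

    # Hypothetical unlock policy: the phone stays unlocked while a paired,
    # authenticated watch is in range; removing the watch revokes trust.

    class Watch:
        def __init__(self) -> None:
            self.on_wrist = False
            self.authenticated = False

        def put_on(self, biometric_ok: bool) -> None:
            # Reauthorization happens at wear time, the natural place
            # for a biometric check.
            self.on_wrist = True
            self.authenticated = biometric_ok

        def remove(self) -> None:
            # Removal invalidates trust, so a snatched watch unlocks nothing.
            self.on_wrist = False
            self.authenticated = False

    def phone_unlocked(watch: Watch, in_range: bool) -> bool:
        return in_range and watch.on_wrist and watch.authenticated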

Posted on February 14, 2013 at 11:42 AM

Squids on the Economist Cover

Four squids on the cover of this week’s Economist represent the four massive (and intrusive) data-driven Internet giants: Google, Facebook, Apple, and Amazon.

Interestingly, these are the same four companies I’ve been listing as the new corporate threat to the Internet.

The first of the three pillars propping up this outside threat is big data collectors, which, in addition to Apple and Google, Schneier identified as Amazon and Facebook. (Notice Microsoft didn’t make the cut.) The goal of their data collection is for marketers to be able to make snap decisions about the product tastes, creditworthiness, and employment suitability of millions of people. Often, this information is fed into systems maintained by governments.

Notice that Microsoft didn’t make the Economist’s cut either.

I gave that talk at the RSA Conference in February of this year. The link in the article is from another conference the week before, where I test-drove the talk.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on December 7, 2012 at 4:04 PM

Feudal Security

It’s a feudal world out there.

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them—or to a particular one we don’t like. Or we can spread our allegiance around. But either way, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.

Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.

Of course, I’m romanticizing here; European history was never this simple, and the description is based on stories of that time, but that’s the general model.

And it’s this model that’s starting to permeate computer security today.

I Pledge Allegiance to the United States of Convenience

Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.

This model is breaking, largely due to two developments:

  1. New Internet-enabled devices where the vendor maintains more control over the hardware and software than we do—like the iPhone and Kindle; and
  2. Services where the host maintains our data for us—like Flickr and Hotmail.

Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.

We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we’ve lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we would manage by ourselves; Apple does a great job keeping malware out of its iPhone app store.

In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won’t be exposed to hackers, criminals, and malware. We trust that governments won’t be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don’t know what sort of security methods they’re using, or how they’re configured. We mostly can’t install our own security products on iPhones or Android phones; we certainly can’t install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates—iPhone, for example—but we rarely know what they’re about or whether they’ll break anything else. (On the Kindle, we don’t even have that freedom.)

The Good, the Bad, and the Ugly

I’m not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.

Feudalism is good for the individual, for small startups, and for medium-sized businesses that can’t afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it’s more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They’ve been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don’t allow vassals to audit them, even if those vassals are themselves large and powerful.

Yet feudal security isn’t without its risks.

Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.

Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off—again, like serfs—to rival lords…or turn us in to the authorities.

Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn’t do to each other).

Today’s Internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.

This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.

Like everything else in security, it’s a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.

Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent—or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems—it’s time we step in, in our role as governments (both national and international), to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.

A version of this essay was originally published on Wired.com.

Posted on December 3, 2012 at 7:24 AM

"Ask Nicely" Doesn't Work as a Security Mechanism

Apple’s map application shows more of Taiwan than Google Maps:

The Taiwanese government/military, like many others around the world, requests that satellite imagery providers, such as Google Maps, blur out certain sensitive military installations. Unfortunately, Apple apparently didn’t get that memo.

[…]

According to reports, the Taiwanese defence ministry hasn’t filed a formal request with Apple yet but thought it would be a great idea to splash this across the media and bring everyone’s attention to the story. Obviously it would be terribly embarrassing if some unscrupulous person read the story and then found various uncensored military installations around Taiwan and posted photos of them.

Photos at the link.

Posted on October 11, 2012 at 7:03 AM

Database of 12 Million Apple UDIDs Leaked

In this story, we learn that hackers got their hands on a database of 12 million Apple Unique Device Identifiers (UDIDs) by hacking an FBI laptop.

My question isn’t about the hack, but about the data. Why does an FBI agent have user identification information about 12 million iPhone users on his laptop? And how did the FBI get their hands on this data in the first place?

For its part, the FBI denies everything:

In a statement released Tuesday afternoon, the FBI said, “The FBI is aware of published reports alleging that an FBI laptop was compromised and private data regarding Apple UDIDs was exposed. At this time there is no evidence indicating that an FBI laptop was compromised or that the FBI either sought or obtained this data.”

Apple also denies giving the database to the FBI.

Okay, so where did the database come from? And are there really 12 million, or only one million?

EDITED TO ADD (9/12): A company called BlueToad is the source of the leak.

If you’ve been hacked, you’re not going to be informed:

DeHart said his firm would not be contacting individual consumers to notify them that their information had been compromised, instead leaving it up to individual publishers to contact readers as they see fit.

Posted on September 6, 2012 at 6:48 AM

Is iPhone Security Really this Good?

Simson Garfinkel writes that the iPhone has such good security that the police can’t use it for forensics anymore:

Technologies the company has adopted protect Apple customers’ content so well that in many situations it’s impossible for law enforcement to perform forensic examinations of devices seized from criminals. Most significant is the increasing use of encryption, which is beginning to cause problems for law enforcement agencies when they encounter systems with encrypted drives.

“I can tell you from the Department of Justice perspective, if that drive is encrypted, you’re done,” Ovie Carroll, director of the cyber-crime lab at the Computer Crime and Intellectual Property Section in the Department of Justice, said during his keynote address at the DFRWS computer forensics conference in Washington, D.C., last Monday. “When conducting criminal investigations, if you pull the power on a drive that is whole-disk encrypted you have lost any chance of recovering that data.”

Yes, I believe that full-disk encryption—whether Apple’s FileVault or Microsoft’s BitLocker (I don’t know what the iOS system is called)—is good; but its security is only as good as the user is at choosing a good password.

The iPhone always supported a PIN lock, but the PIN wasn’t a deterrent to a serious attacker until the iPhone 3GS. Because those early phones didn’t use their hardware to perform encryption, a skilled investigator could hack into the phone, dump its flash memory, and directly access the phone’s address book, e-mail messages, and other information. But now, with Apple’s more sophisticated approach to encryption, investigators who want to examine data on a phone have to try every possible PIN. Examiners perform these so-called brute-force attacks with special software, because the iPhone can be programmed to wipe itself if the wrong PIN is provided more than 10 times in a row. This software must be run on the iPhone itself, limiting the guessing speed to 80 milliseconds per PIN. Trying all four-digit PINs therefore requires no more than 800 seconds, a little more than 13 minutes. However, if the user chooses a six-digit PIN, the maximum time required would be 22 hours; a nine-digit PIN would require 2.5 years, and a 10-digit PIN would take 25 years. That’s good enough for most corporate secrets—and probably good enough for most criminals as well.
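
Garfinkel’s figures are easy to reproduce. A minimal sketch of the arithmetic, taking the quoted 80 milliseconds per on-device guess at face value:

    # Worst-case brute-force times at 80 ms per guess, the on-device
    # rate quoted above. The keyspace for an n-digit PIN is 10**n.
    SECONDS_PER_GUESS = 0.080

    for digits in (4, 6, 9, 10):
        seconds = 10 ** digits * SECONDS_PER_GUESS
        if seconds < 3600:
            print(f"{digits}-digit PIN: {seconds:,.0f} seconds")
        elif seconds < 86400 * 365:
            print(f"{digits}-digit PIN: {seconds / 3600:,.1f} hours")
        else:
            print(f"{digits}-digit PIN: {seconds / (86400 * 365):,.1f} years")

This matches the article: 800 seconds, 22.2 hours, 2.5 years, and 25.4 years, respectively.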

Leaving aside the user practice questions—my guess is that very few users, even those with something to hide, use a ten-digit PIN—could this possibly be true? In the introduction to Applied Cryptography, almost 20 years ago, I wrote: “There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files.”

Since then, I’ve learned two things: 1) there are a lot of gradients to kid sister cryptography, and 2) major government cryptography is very hard to get right. It’s not the cryptography; it’s everything around the cryptography. I said as much in the preface to Secrets and Lies in 2000:

Cryptography is a branch of mathematics. And like all mathematics, it involves numbers, equations, and logic. Security, palpable security that you or I might find useful in our lives, involves people: things people know, relationships between people, people and how they relate to machines. Digital security involves computers: complex, unstable, buggy computers.

Mathematics is perfect; reality is subjective. Mathematics is defined; computers are ornery. Mathematics is logical; people are erratic, capricious, and barely comprehensible.

If, in fact, we’ve finally achieved something resembling this level of security for our computers and handheld computing devices, this is something to celebrate.

But I’m skeptical.

Another article.

Slashdot has a thread on the article.

EDITED TO ADD: More analysis. And Elcomsoft can crack iPhones.

Posted on August 21, 2012 at 1:42 PM

Yet Another Risk of Storing Everything in the Cloud

A hacker can social-engineer his way into your cloud storage and delete everything you have.

It turns out, a billing address and the last four digits of a credit card number are the only two pieces of information anyone needs to get into your iCloud account. Once supplied, Apple will issue a temporary password, and that password grants access to iCloud.

Apple tech support confirmed to me twice over the weekend that all you need to access someone’s AppleID is the associated e-mail address, the billing address, and the last four digits of a credit card on file.

Here’s how a hacker gets that information.

First you call Amazon and tell them you are the account holder, and want to add a credit card number to the account. All you need is the name on the account, an associated e-mail address, and the billing address. Amazon then allows you to input a new credit card. (Wired used a bogus credit card number from a website that generates fake card numbers that conform with the industry’s published self-check algorithm.) Then you hang up.

Next you call back, and tell Amazon that you’ve lost access to your account. Upon providing a name, billing address, and the new credit card number you gave the company on the prior call, Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account. This allows you to see all the credit cards on file for the account—not the complete numbers, just the last four digits. But, as we know, Apple only needs those last four digits. We asked Amazon to comment on its security policy, but the company didn’t have anything to share by press time.

It’s also worth noting that one wouldn’t have to call Amazon to pull this off. Your pizza guy could do the same thing, for example. If you have an AppleID, every time you call Pizza Hut, you’re giving the 16-year-old on the other end of the line all he needs to take over your entire digital life.
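
The “published self-check algorithm” mentioned above is the Luhn checksum, which all major card numbers satisfy; it guards against transcription errors, not fraud, which is why a fabricated number can pass it. A minimal sketch in Python:

    def luhn_valid(number: str) -> bool:
        """Return True if the digit string passes the Luhn checksum."""
        digits = [int(d) for d in number if d.isdigit()]
        total = 0
        # Walk from the rightmost digit; double every second digit,
        # subtracting 9 whenever the doubled value exceeds 9.
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111111111111111"))  # True: a well-known test number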

The victim here is a popular technology journalist, so he got a level of tech support that’s not available to most of us. I believe this will increasingly become a problem, and that cloud providers will need better and more automated solutions.

Initial post.

EDITED TO ADD (8/13): Apple has changed its policy and stopped taking password reset requests over the phone, pretty much as a result of this incident.

EDITED TO ADD (8/17): A follow-on story about how he recovered all of his data.

Posted on August 8, 2012 at 6:31 AM

Law Enforcement Forensics Tools Against Smart Phones

Turns out the passcode can be easily bypassed:

XRY works by first jailbreaking the handset. According to Micro Systemation, no ‘backdoors’ created by Apple are used; instead, it makes use of security flaws in the operating system, the same way that regular jailbreakers do.

Once the iPhone has been jailbroken, the tool then goes on to ‘brute-force’ the passcode, trying every possible four-digit combination until the correct passcode has been found. Given the limited number of possible combinations for a four-digit passcode—10,000, ranging from 0000 to 9999—this doesn’t take long.

Once the handset has been jailbroken and the passcode guessed, all the data on the handset, including call logs, messages, contacts, GPS data and even keystrokes, can be accessed and examined.

One of the morals is to use an eight-digit passcode.
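
The arithmetic behind that moral: each added digit multiplies the search space by ten. A rough sketch, assuming guessing must run on the handset at roughly 80 ms per attempt (an illustrative figure, not a measured rate for XRY; actual speeds depend on the tool and hardware):

    # Keyspace growth for numeric passcodes. The guess rate is an
    # assumption for illustration, not a measured figure for XRY.
    SECONDS_PER_GUESS = 0.080

    for digits in (4, 8):
        keyspace = 10 ** digits
        hours = keyspace * SECONDS_PER_GUESS / 3600
        print(f"{digits} digits: {keyspace:,} codes, {hours:,.1f} hours worst case")

At that rate, four digits fall in minutes, while eight digits take over 2,200 hours in the worst case, about three months.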

EDITED TO ADD (4/13): This has been debunked. The 1Password blog has a fairly lengthy post discussing the details of the XRY tool.

Posted on April 3, 2012 at 2:01 PM
