Entries Tagged "Apple"


Apple JailBreakMe Vulnerability

Good information from Mikko Hyppönen.

Q: What is this all about?
A: It’s about a site called jailbreakme.com that lets you jailbreak your iPhone or iPad just by visiting the site.

Q: So what’s the problem?
A: The problem is that the site uses a zero-day vulnerability to execute code on the device.

Q: How does the vulnerability work?
A: Actually, it’s two vulnerabilities. The first uses a corrupted font embedded in a PDF file to execute code, and the second uses a kernel vulnerability to escalate that code execution to unsandboxed root.

Q: How difficult was it to create this exploit?
A: Very difficult.

Q: How difficult would it be for someone else to modify the exploit now that it’s out?
A: Quite easy.

Here’s the JailBreakMe blog.

EDITED TO ADD (8/14): Apple has released a patch. It doesn’t help people with older-model iPhones and iPod Touches, and it doesn’t work for people who’ve jailbroken their phones.

EDITED TO ADD (8/15): More info.

Posted on August 10, 2010 at 12:12 PM

AT&T's iPad Security Breach

I didn’t write about the recent security breach that disclosed tens of thousands of e-mail addresses and ICC-IDs of iPad users because, well, there was nothing terribly interesting about it. It was yet another web security breach.

Right after the incident, though, I was interviewed by a reporter who wanted to know the ramifications of the breach. He specifically wanted to know if anything could be done with those ICC-IDs, and if the disclosure of that information was worse than people thought. He didn’t like the answer I gave him, which is that no one knows yet: that it’s too early to know the full effects of that information disclosure, and that both the good guys and the bad guys would be figuring it out in the coming weeks. And that there were likely further security implications of the breach.

Seems like there were:

The problem is that ICC-IDs—unique serial numbers that identify each SIM card—can often be converted into IMSIs. While the ICC-ID is nonsecret—it’s often found printed on the boxes of cellphone/SIM bundles—the IMSI is somewhat secret. In theory, knowing an ICC-ID shouldn’t be enough to determine an IMSI. The phone companies do need to know which IMSI corresponds to which ICC-ID, but this should be done by looking up the values in a big database.

In practice, however, many phone companies simply calculate the IMSI from the ICC-ID. This calculation is often very simple indeed, being little more complex than “combine this hard-coded value with the last nine digits of the ICC-ID.” So while the leakage of AT&T’s customers’ ICC-IDs should be harmless, in practice, it could reveal a secret ID.
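To make that concrete, here is a minimal Python sketch of the kind of trivial derivation described above. The hard-coded prefix and the sample ICC-ID are invented for illustration; real carriers’ values, and whether a given carrier uses such a scheme at all, are assumptions.

    # Hypothetical ICC-ID -> IMSI derivation of the sort quoted above.
    # The prefix "901550" is made up; a real carrier's value is unknown.
    def derive_imsi(icc_id: str, carrier_prefix: str = "901550") -> str:
        """Combine a hard-coded value with the last nine digits of the ICC-ID."""
        digits = "".join(ch for ch in icc_id if ch.isdigit())
        return carrier_prefix + digits[-9:]

    # Made-up 19-digit ICC-ID; the point is only how little work this takes.
    print(derive_imsi("8901410321111851072"))  # -> "901550111851072"

The only "secret" in such a scheme is the hard-coded value, and once one customer's ICC-ID/IMSI pair leaks, that value can be recovered by simple subtraction of the known digits.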

What can be done with that secret ID? Quite a lot, it turns out. The IMSI is sent by the phone to the network when first signing on to the network; it’s used by the network to figure out which call should be routed where. With someone else’s IMSI, an attacker can determine the person’s name and phone number, and even track his or her position. It also opens the door to active attacks—creating fake cell towers that a victim’s phone will connect to, enabling every call and text message to be eavesdropped.

More to come, I’m sure.

And that’s really the point: we all want to know—right away—the effects of a security vulnerability, but often we don’t and can’t. It takes time before the full effects are known, sometimes a lot of time.

And in related news, the image redaction that went along with some of the breach reporting wasn’t very good.

Posted on June 21, 2010 at 5:27 AM

Alerting Users that Applications are Using Cameras, Microphones, Etc.

Interesting research: “What You See is What They Get: Protecting users from unwanted use of microphones, cameras, and other sensors,” by Jon Howell and Stuart Schechter.

Abstract: Sensors such as cameras and microphones collect privacy-sensitive data streams without the user’s explicit action. Conventional sensor access policies either hassle users to grant applications access to sensors or grant access with no approval at all. Once access is granted, an application may collect sensor data even after the application’s interface suggests that the sensor is no longer being accessed.

We introduce the sensor-access widget, a graphical user interface element that resides within an application’s display. The widget provides an animated representation of the personal data being collected by its corresponding sensor, calling attention to the application’s attempt to collect the data. The widget indicates whether the sensor data is currently allowed to flow to the application. The widget also acts as a control point through which the user can configure the sensor and grant or deny the application access. By building perpetual disclosure of sensor data collection into the platform, sensor-access widgets enable new access-control policies that relax the tension between the user’s privacy needs and applications’ ease of access.
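As a rough sketch of the mechanism the abstract describes (my own illustration in Python, not the authors’ implementation): sensor data reaches the application only while the widget’s gate is open, and every captured frame is mirrored to an on-screen preview regardless.

    # Sketch of a sensor-access widget: perpetual disclosure plus a
    # user-controlled gate between the sensor and the application.
    from typing import Callable, Optional

    class SensorAccessWidget:
        def __init__(self, render_preview: Callable[[bytes], None]):
            self.allowed = False          # toggled by the user via the widget
            self.render_preview = render_preview

        def grant(self) -> None:
            self.allowed = True

        def deny(self) -> None:
            self.allowed = False

        def deliver(self, frame: bytes) -> Optional[bytes]:
            # Always show the user what the sensor is capturing...
            self.render_preview(frame)
            # ...but pass the frame to the application only if access is granted.
            return frame if self.allowed else None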

Apple seems to be taking some steps in this direction with the location-sensor disclosure in iPhone OS 4.0.

Posted on May 24, 2010 at 7:32 AM

Punishing Security Breaches

The editor of the Freakonomics blog asked me to write about this topic. The idea was that they would get several opinions and publish them all. They ended up spiking the story, but I had already written my piece. So here it is.

In deciding what to do with Gray Powell, the Apple employee who accidentally left a secret prototype 4G iPhone in a California bar, Apple needs to figure out how much of the problem is due to an employee not following the rules, and how much of the problem is due to unclear, unrealistic, or just plain bad rules.

If Powell sneaked the phone out of the Apple building in flagrant violation of the rules—maybe he wanted to show it to a friend—he should be disciplined, perhaps even fired. Some military installations have rules like that. If someone wants to take something classified out of a top-secret military compound, he might have to secrete it on his person and deliberately sneak it past a guard who searches briefcases and purses. He might be committing a crime by doing so, by the way. Apple isn’t the military, of course, but if its corporate security policy is that strict, it may very well have rules like that. And the only way to ensure rules are followed is to enforce them, and that means severe disciplinary action against those who bypass them.

Even if Powell had authorization to take the phone out of Apple’s labs—presumably someone has to test-drive the new toys sooner or later—the corporate rules might have required him to pay attention to it at all times. We’ve all heard of military attachés who carry briefcases chained to their wrists. It’s an extreme example, but it demonstrates how a security policy can allow objects to move around town—or around the world—without getting lost. Apple almost certainly doesn’t have a policy that rigid, but its policy might explicitly prohibit Powell from taking that phone into a bar, putting it down on a counter, and participating in a beer tasting. Again, if Apple’s rules and Powell’s violation were both that clear, Apple should enforce them.

On the other hand, if Apple doesn’t have clear-cut rules, if Powell wasn’t prohibited from taking the phone out of his office, if engineers routinely ignore or bypass security rules and—as long as nothing bad happens—no one complains, then Apple needs to understand that the system is more to blame than the individual. Most corporate security policies have this sort of problem. Security is important, but it’s quickly jettisoned when there’s an important job to be done. A common example is passwords: people aren’t supposed to share them, unless it’s really important and they have to. Another example is guest accounts. And doors that are supposed to remain locked but rarely are. People routinely bypass security policies if they get in the way, and if no one complains, those policies are effectively meaningless.

Apple’s unfortunately public security breach has given the company an opportunity to examine its policies and figure out how much of the problem is Powell and how much of it is the system he’s a part of. Apple needs to fix its security problem, but only after it figures out where the problem is.

Posted on April 26, 2010 at 7:20 AM

Terrorists Prohibited from Using iTunes

The iTunes Store Terms and Conditions prohibit it:

Notice that, as I read this clause, not only are terrorists—or at least those on terrorist watch lists—prohibited from using iTunes to manufacture WMD; they are also prohibited from even downloading and using iTunes. So all the Al-Qaeda operatives holed up in the Northwest Frontier Province of Pakistan, dodging drone attacks while listening to Britney Spears songs downloaded with iTunes, are in violation of the terms and conditions, even if they paid for the music!

And you thought being harassed at airports was bad enough.

Posted on February 10, 2010 at 12:39 PM

iPhone Encryption Useless

Interesting, although I want some more technical details.

…the new iPhone 3GS’ encryption feature is “broken” when it comes to protecting sensitive information such as credit card numbers and social-security digits, Zdziarski said.

Zdziarski said it’s just as easy to access a user’s private information on an iPhone 3GS as it was on the previous generation iPhone 3G or first generation iPhone, both of which didn’t feature encryption. If a thief got his hands on an iPhone, a little bit of free software is all that’s needed to tap into all of the user’s content. Live data can be extracted in as little as two minutes, and an entire raw disk image can be made in about 45 minutes, Zdziarski said.

Wondering where the encryption comes into play? It doesn’t. Strangely, once one begins extracting data from an iPhone 3GS, the iPhone begins to decrypt the data on its own, he said.

Posted on July 29, 2009 at 6:16 AM

Protect Your Macintosh Copies Available

In 1994, I published my second book, Protect Your Macintosh. You’ve probably never heard of it; it died a quiet and lonely death.

Going through some boxes, I found a dozen copies of the book: first and, I think, only printing. I’m willing to send one to anyone who wants one for $5 postage. (That’s in the U.S. If you’re elsewhere, we’ll figure out postage.) Please let me know via e-mail if you’re interested.

And I can assure you that, fourteen years later, there’s absolutely nothing of practical value in the book. This offer should only interest collectors. And even them, not that much.

I also have seven copies of my third book, E-Mail Security, from 1995, which also has nothing in it of any practical value anymore. Again, $5 for postage.

EDITED TO ADD (5/3): Sold out; sorry.

Posted on May 2, 2008 at 11:12 AM

Lock-In

Buying an iPhone isn’t the same as buying a car or a toaster. Your iPhone comes with a complicated list of rules about what you can and can’t do with it. You can’t install unapproved third-party applications on it. You can’t unlock it and use it with the cellphone carrier of your choice. And Apple is serious about these rules: A software update released in September 2007 erased unauthorized software and—in some cases—rendered unlocked phones unusable.

“Bricked” is the term, and Apple isn’t the least bit apologetic about it.

Computer companies want more control over the products they sell you, and they’re resorting to increasingly draconian security measures to get that control. The reasons are economic.

Control allows a company to limit competition for ancillary products. With Mac computers, anyone can sell software that does anything. But Apple gets to decide who can sell what on the iPhone. It can foster competition when it wants, and reserve itself a monopoly position when it wants. And it can dictate terms to any company that wants to sell iPhone software and accessories.

This increases Apple’s bottom line. But the primary benefit of all this control for Apple is that it increases lock-in. “Lock-in” is an economic term for the difficulty of switching to a competing product. For some products—cola, for example—there’s no lock-in. I can drink a Coke today and a Pepsi tomorrow: no big deal. But for other products, it’s harder.

Switching word processors, for example, requires installing a new application, learning a new interface and a new set of commands, converting all the files (which may not convert cleanly) and custom software (which will certainly require rewriting), and possibly even buying new hardware. If Coke stops satisfying me for even a moment, I’ll switch: something Coke learned the hard way in 1985 when it changed the formula and started marketing New Coke. But my word processor has to really piss me off for a good long time before I’ll even consider going through all that work and expense.

Lock-in isn’t new. It’s why all gaming-console manufacturers make sure that their game cartridges don’t work on any other console, and how they can price the consoles at a loss and make the profit up by selling games. It’s why Microsoft never wants to open up its file formats so other applications can read them. It’s why music purchased from Apple for your iPod won’t work on other brands of music players. It’s why every U.S. cellphone company fought against phone number portability. It’s why Facebook sues any company that tries to scrape its data and put it on a competing website. It explains airline frequent flyer programs, supermarket affinity cards and the new My Coke Rewards program.

With enough lock-in, a company can protect its market share even as it reduces customer service, raises prices, refuses to innovate and otherwise abuses its customer base. It should be no surprise that this sounds like pretty much every experience you’ve had with IT companies: Once the industry discovered lock-in, everyone started figuring out how to get as much of it as they can.

Economists Carl Shapiro and Hal Varian even proved that the value of a software company is the total lock-in of its customers. Here’s the logic: Assume, for example, that you have 100 people in a company using MS Office at a cost of $500 each. If it cost the company less than $50,000 to switch to Open Office, it would. If it cost the company more than $50,000, Microsoft would increase its prices.
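The arithmetic, as a quick sketch using the illustrative numbers above:

    # Back-of-the-envelope version of the switching-cost argument.
    seats = 100
    price_per_seat = 500                  # dollars per MS Office user
    vendor_take = seats * price_per_seat  # $50,000

    def customer_decision(switching_cost: int) -> str:
        # Cheaper to leave than to stay: the customer switches. Otherwise
        # the vendor can raise prices right up to the switching cost.
        if switching_cost < vendor_take:
            return "switch to Open Office"
        return "stay put; vendor can raise prices"

    print(customer_decision(30_000))  # switch to Open Office
    print(customer_decision(80_000))  # stay put; vendor can raise prices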

Mostly, companies increase their lock-in through security mechanisms. Sometimes patents preserve lock-in, but more often it’s copy protection, digital rights management (DRM), code signing or other security mechanisms. These security features aren’t what we normally think of as security: They don’t protect us from some outside threat, they protect the companies from us.
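As one generic illustration of how a code-signing gate works (a minimal sketch using Python’s third-party cryptography package; this is not any particular vendor’s scheme): the vendor signs approved binaries, and the device runs only what verifies against the vendor’s public key.

    # Minimal code-signing sketch: sign approved binaries, refuse the rest.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    vendor_key = Ed25519PrivateKey.generate()  # held by the platform vendor
    device_key = vendor_key.public_key()       # baked into every device

    approved_app = b"bytes of an approved application"
    signature = vendor_key.sign(approved_app)  # the "approval" step

    def device_will_run(binary: bytes, sig: bytes,
                        trusted: Ed25519PublicKey) -> bool:
        try:
            trusted.verify(sig, binary)        # raises if unsigned or altered
            return True
        except InvalidSignature:
            return False

    print(device_will_run(approved_app, signature, device_key))            # True
    print(device_will_run(b"an unauthorized app", signature, device_key))  # False

Note whose interests the check serves: the device refuses the owner’s own unsigned code just as readily as a stranger’s malware.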

Microsoft has been planning this sort of control-based security mechanism for years. First called Palladium and now NGSCB (Next-Generation Secure Computing Base), the idea is to build a control-based security system into the computing hardware. The details are complicated, but the results range from only allowing a computer to boot from an authorized copy of the OS to prohibiting the user from accessing “unauthorized” files or running unauthorized software. The competitive benefits to Microsoft are enormous.

Of course, that’s not how Microsoft advertises NGSCB. The company has positioned it as a security measure, protecting users from worms, Trojans and other malware. But control does not equal security; and this sort of control-based security is very difficult to get right, and sometimes makes us more vulnerable to other threats. Perhaps this is why Microsoft is quietly killing NGSCB—we’ve gotten BitLocker, and we might get some other security features down the line—despite the huge investment hardware manufacturers made when incorporating special security hardware into their motherboards.

In my last column, I talked about the security-versus-privacy debate, and how it’s actually a debate about liberty versus control. Here we see the same dynamic, but in a commercial setting. By confusing control and security, companies are able to force control measures that work against our interests by convincing us they are doing it for our own safety.

As for Apple and the iPhone, I don’t know what they’re going to do. On the one hand, there’s this analyst report that claims there are over a million unlocked iPhones, costing Apple between $300 million and $400 million in revenue. On the other hand, Apple is planning to release a software development kit this month, reversing its earlier restriction and allowing third-party vendors to write iPhone applications. Apple will attempt to keep control through a secret application key that will be required by all “official” third-party applications, but of course it’s already been leaked.

And the security arms race goes on …

This essay previously appeared on Wired.com.

EDITED TO ADD (2/12): Slashdot thread.

And critical commentary, which is oddly political:

This isn’t lock-in, it’s called choosing a product that meets your needs. If you don’t want to be tied to a particular phone network, don’t buy an iPhone. If installing third-party applications (between now and the end of February, when officially-sanctioned ones will start to appear) is critically important to you, don’t buy an iPhone.

It’s one thing to grumble about an otherwise tempting device not supporting some feature you would find useful; it’s another entirely to imply that this represents anti-libertarian lock-in. The fact remains, you are free to buy one of the many other devices on the market that existed before there ever was an iPhone.

Actually, lock-in is one of the factors you have to consider when choosing a product to meet your needs; it’s not one thing or the other. And lock-in is certainly not “anti-libertarian.” Lock-in is what you get when you have an unfettered free market competing for customers; it’s libertarian utopia. Government regulations that limit lock-in tactics—something I think would be very good for society—are what’s anti-libertarian.

Here’s a commentary on that previous commentary. This is some good commentary, too.

Posted on February 12, 2008 at 6:08 AM

U.S. Army Installing Apple Computers

Because they’re harder to hack:

Though Apple machines are still pricier than their Windows counterparts, the added security they offer might be worth the cost, says Wallington. He points out that Apple’s Xserve servers, which are gradually becoming more commonplace in Army data centers, are proving their mettle. “Those are some of the most attacked computers there are. But the attacks used against them are designed for Windows-based machines, so they shrug them off,” he says.

Posted on January 7, 2008 at 6:21 AM
