Entries Tagged "Apple"

Page 13 of 13

Lock-In

Buying an iPhone isn’t the same as buying a car or a toaster. Your iPhone comes with a complicated list of rules about what you can and can’t do with it. You can’t install unapproved third-party applications on it. You can’t unlock it and use it with the cellphone carrier of your choice. And Apple is serious about these rules: A software update released in September 2007 erased unauthorized software and — in some cases — rendered unlocked phones unusable.

“Bricked” is the term, and Apple isn’t the least bit apologetic about it.

Computer companies want more control over the products they sell you, and they’re resorting to increasingly draconian security measures to get that control. The reasons are economic.

Control allows a company to limit competition for ancillary products. With Mac computers, anyone can sell software that does anything. But Apple gets to decide who can sell what on the iPhone. It can foster competition when it wants, and reserve itself a monopoly position when it wants. And it can dictate terms to any company that wants to sell iPhone software and accessories.

This increases Apple’s bottom line. But the primary benefit of all this control for Apple is that it increases lock-in. “Lock-in” is an economic term for the difficulty of switching to a competing product. For some products — cola, for example — there’s no lock-in. I can drink a Coke today and a Pepsi tomorrow: no big deal. But for other products, it’s harder.

Switching word processors, for example, requires installing a new application, learning a new interface and a new set of commands, converting all the files (which may not convert cleanly) and custom software (which will certainly require rewriting), and possibly even buying new hardware. If Coke stops satisfying me for even a moment, I’ll switch: something Coke learned the hard way in 1985 when it changed the formula and started marketing New Coke. But my word processor has to really piss me off for a good long time before I’ll even consider going through all that work and expense.

Lock-in isn’t new. It’s why all gaming-console manufacturers make sure that their game cartridges don’t work on any other console, and how they can price the consoles at a loss and make the profit up by selling games. It’s why Microsoft never wants to open up its file formats so other applications can read them. It’s why music purchased from Apple for your iPod won’t work on other brands of music players. It’s why every U.S. cellphone company fought against phone number portability. It’s why Facebook sues any company that tries to scrape its data and put it on a competing website. It explains airline frequent flyer programs, supermarket affinity cards and the new My Coke Rewards program.

With enough lock-in, a company can protect its market share even as it reduces customer service, raises prices, refuses to innovate and otherwise abuses its customer base. It should be no surprise that this sounds like pretty much every experience you’ve had with IT companies: Once the industry discovered lock-in, everyone started figuring out how to get as much of it as they can.

Economists Carl Shapiro and Hal Varian even proved that the value of a software company is the total lock-in of its customers. Here’s the logic: Assume, for example, that you have 100 people in a company using MS Office at a cost of $500 each, or $50,000 total. If it cost the company less than $50,000 to switch to OpenOffice, it would. If it cost the company more than $50,000, Microsoft could increase its prices.
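That logic can be sketched in a few lines of Python (the function name and the alternate dollar figures are illustrative assumptions, not from Shapiro and Varian):

```python
def should_switch(cost_to_switch: float, total_license_cost: float) -> bool:
    """A rational company switches products whenever switching costs less
    than what it currently pays; below that threshold, the incumbent vendor
    can keep raising prices without losing the customer."""
    return cost_to_switch < total_license_cost

seats, price_per_seat = 100, 500
total_office_cost = seats * price_per_seat  # $50,000

print(should_switch(40_000, total_office_cost))  # True: switching pays off
print(should_switch(60_000, total_office_cost))  # False: lock-in exceeds the license cost
```

The switching cost is thus a ceiling on the price premium the vendor can charge, which is why vendors work so hard to raise it.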

Mostly, companies increase their lock-in through security mechanisms. Sometimes patents preserve lock-in, but more often it’s copy protection, digital rights management (DRM), code signing or other security mechanisms. These security features aren’t what we normally think of as security: They don’t protect us from some outside threat, they protect the companies from us.

Microsoft has been planning this sort of control-based security mechanism for years. First called Palladium and now NGSCB (Next-Generation Secure Computing Base), the idea is to build a control-based security system into the computing hardware. The details are complicated, but the results range from only allowing a computer to boot from an authorized copy of the OS to prohibiting the user from accessing “unauthorized” files or running unauthorized software. The competitive benefits to Microsoft are enormous.

Of course, that’s not how Microsoft advertises NGSCB. The company has positioned it as a security measure, protecting users from worms, Trojans and other malware. But control does not equal security; and this sort of control-based security is very difficult to get right, and sometimes makes us more vulnerable to other threats. Perhaps this is why Microsoft is quietly killing NGSCB — we’ve gotten BitLocker, and we might get some other security features down the line — despite the huge investment hardware manufacturers made when incorporating special security hardware into their motherboards.

In my last column, I talked about the security-versus-privacy debate, and how it’s actually a debate about liberty versus control. Here we see the same dynamic, but in a commercial setting. By confusing control and security, companies are able to force control measures that work against our interests by convincing us they are doing it for our own safety.

As for Apple and the iPhone, I don’t know what they’re going to do. On the one hand, there’s this analyst report that claims there are over a million unlocked iPhones, costing Apple between $300 million and $400 million in revenue. On the other hand, Apple is planning to release a software development kit this month, reversing its earlier restriction and allowing third-party vendors to write iPhone applications. Apple will attempt to keep control through a secret application key that will be required by all “official” third-party applications, but of course it’s already been leaked.

And the security arms race goes on …

This essay previously appeared on Wired.com.

EDITED TO ADD (2/12): Slashdot thread.

And critical commentary, which is oddly political:

This isn’t lock-in, it’s called choosing a product that meets your needs. If you don’t want to be tied to a particular phone network, don’t buy an iPhone. If installing third-party applications (between now and the end of February, when officially-sanctioned ones will start to appear) is critically important to you, don’t buy an iPhone.

It’s one thing to grumble about an otherwise tempting device not supporting some feature you would find useful; it’s another entirely to imply that this represents anti-libertarian lock-in. The fact remains, you are free to buy one of the many other devices on the market that existed before there ever was an iPhone.

Actually, lock-in is one of the factors you have to consider when choosing a product to meet your needs. It’s not one thing or the other. And lock-in is certainly not “anti-libertarian.” Lock-in is what you get when you have an unfettered free market competing for customers; it’s libertarian utopia. Government regulations that limit lock-in tactics (something I think would be very good for society) are what’s anti-libertarian.

Here’s a commentary on that previous commentary. This is some good commentary, too.

Posted on February 12, 2008 at 6:08 AM

U.S. Army Installing Apple Computers

Because they’re harder to hack:

Though Apple machines are still pricier than their Windows counterparts, the added security they offer might be worth the cost, says Wallington. He points out that Apple’s X Serve servers, which are gradually becoming more commonplace in Army data centers, are proving their mettle. “Those are some of the most attacked computers there are. But the attacks used against them are designed for Windows-based machines, so they shrug them off,” he says.

Posted on January 7, 2008 at 6:21 AM

2006 Operating System Vulnerability Study

Long, but interesting.

Closing

While there are an enormous variety of operating systems to choose from, only four “core” lineages exist in the mainstream — Windows, OS X, Linux and UNIX. Each system carries its own baggage of vulnerabilities ranging from local exploits and user introduced weaknesses to remotely available attack vectors.

As far as “straight-out-of-box” conditions go, both Microsoft’s Windows and Apple’s OS X are rife with remotely accessible vulnerabilities. Even before enabling the servers, Windows-based machines contain numerous exploitable holes allowing attackers to not only access the system but also execute arbitrary code. Both OS X and Windows were susceptible to additional vulnerabilities after enabling the built-in services. Once patched, however, both companies support a product that is secure, at least from the outside. The UNIX and Linux variants present a much more robust exterior. Even when the pre-configured server binaries are enabled, each system generally maintained its integrity against remote attacks. Compared with the Microsoft and Apple products, however, UNIX and Linux systems tend to have a higher learning curve for acceptance as desktop platforms.

When it comes to business, most systems have the benefit of trained administrators and IT departments to properly patch and configure the operating systems and their corresponding services. Things are different with home computers. The esoteric nature of the UNIX and Linux systems tends to result in home users with an increased understanding of security concerns. An already “hardened” operating system therefore has the benefit of a knowledgeable user base. The more consumer-oriented operating systems made by Microsoft and Apple are each hardened in their own right. As soon as users begin to arbitrarily enable remote services or fiddle with the default configurations, the systems quickly become open to intrusion. Without diligence in applying the appropriate patches or enabling automatic updates, owners of Windows and OS X systems are the most susceptible to quick and thorough remote violations by hackers.

Posted on April 2, 2007 at 7:38 AM

Faux Disclosure

Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.

You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache — a.k.a. Jon Ellch — demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card — which they declined to identify — was inserted in a Mac, Windows, or Linux laptop.

[…]

How is that for murky and non-transparent? The whole world is at risk — if the exploit is real — whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.

It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers — mainly because Apple had not fixed the problem yet.”

That’s part of what is meant by full disclosure these days — giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.

Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this, some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

Posted on August 14, 2006 at 1:41 PM

iPod Thefts

What happens if you distribute 50 million small, valuable, and easily sellable objects into the hands of men, women, and children all over the world, and tell them to walk around the streets with them? Why, people steal them, of course.

“Rise in crime blamed on iPods”, yells the front page of London’s Metro. “Muggers targeting iPod users”, says ITV. This is the reaction to the government’s revelation that robberies across the UK have risen by 8 per cent in the last year, from 90,747 to 98,204. The Home Secretary, John Reid, attributes this to the irresistible lure of “young people carrying expensive goods, such as mobile phones and MP3 players”. A separate British Crime Survey, however, suggests robbery has risen by 22 per cent, to 311,000.

This shouldn’t come as a surprise, any more than the wave of high-priced sneaker thefts in the 1990s was a surprise, or the current wave of laptop thefts is.

What to do about it? Basically, there’s not much you can do except be careful. Muggings have long been a low-risk crime, so it makes sense that we’re seeing an increase in them as the value of what people are carrying on their person goes up. And people carrying portable music players have an unmistakable indicator: those ubiquitous ear buds.

The economics of this crime are such that it will continue until one of three things happens. One, portable music players become much less valuable. Two, the costs of the crime become much higher. Three, society deals with its underclass and gives them a better career option than iPod thief.

And on a related topic, here’s a great essay by Cory Doctorow on how Apple’s iTunes copy protection screws the music industry.

EDITED TO ADD (8/5): Eric Rescorla comments.

Posted on July 31, 2006 at 7:05 AM

Hacking Computers Over USB

I’ve previously written about the risks of small portable computing devices; how more and more data can be stored on them, and then lost or stolen. But there’s another risk: if an attacker can convince you to plug his USB device into your computer, he can take it over.

Plug an iPod or USB stick into a PC running Windows and the device can literally take over the machine and search for confidential documents, copy them back to the device’s internal storage, and hide them as “deleted” files. Alternatively, the device can simply plant spyware, or even compromise the operating system. Two features that make this possible are the Windows AutoRun facility and the ability of peripherals to use something called direct memory access (DMA). The first attack vector you can and should plug; the second vector is the result of a design flaw that’s likely to be with us for many years to come.

The article has the details, but basically you can configure a file on your USB device to automatically run when it’s plugged into a computer. That file can, of course, do anything you want it to.
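The file in question is autorun.inf, placed at the root of the removable volume. A minimal sketch of what an attacker’s device might carry (the payload filename and dialog text here are hypothetical):

```ini
; autorun.inf at the root of the USB device or CD-ROM.
; "payload.exe" is a hypothetical program name.
[autorun]
open=payload.exe
icon=payload.exe,0
action=Open folder to view files
```

The `action` line controls the text Windows shows in its AutoPlay prompt, letting the malicious entry masquerade as the ordinary “browse this disk” option.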

Recently I’ve been seeing more and more written about this attack. The Spring 2006 issue of 2600 Magazine, for example, contains a short article called “iPod Sneakiness” (unfortunately, not on line). The author suggests that you can innocently ask someone at an Internet cafe if you can plug your iPod into his computer to power it up — and then steal his passwords and critical files.

And here’s an article about someone who used this trick in a penetration test:

We figured we would try something different by baiting the same employees that were on high alert. We gathered all the worthless vendor giveaway thumb drives collected over the years and imprinted them with our own special piece of software. I had one of my guys write a Trojan that, when run, would collect passwords, logins and machine-specific information from the user’s computer, and then email the findings back to us.

The next hurdle we had was getting the USB drives in the hands of the credit union’s internal users. I made my way to the credit union at about 6 a.m. to make sure no employees saw us. I then proceeded to scatter the drives in the parking lot, smoking areas, and other areas employees frequented.

Once I seeded the USB drives, I decided to grab some coffee and watch the employees show up for work. Surveillance of the facility was worth the time involved. It was really amusing to watch the reaction of the employees who found a USB drive. You know they plugged them into their computers the minute they got to their desks.

I immediately called my guy that wrote the Trojan and asked if anything was received at his end. Slowly but surely info was being mailed back to him. I would have loved to be on the inside of the building watching as people started plugging the USB drives in, scouring through the planted image files, then unknowingly running our piece of software.

There is a defense. From the first article:

AutoRun is just a bad idea. People putting CD-ROMs or USB drives into their computers usually want to see what’s on the media, not have programs automatically run. Fortunately you can turn AutoRun off. A simple manual approach is to hold down the “Shift” key when a disk or USB storage device is inserted into the computer. A better way is to disable the feature entirely by editing the Windows Registry. There are many instructions for doing this online (just search for “disable autorun”) or you can download and use Microsoft’s TweakUI program, which is part of the Windows XP PowerToys download. With Windows XP you can also disable AutoRun for CDs by right-clicking on the CD drive icon in the Windows explorer, choosing the AutoPlay tab, and then selecting “Take no action” for each kind of disk that’s listed. Unfortunately, disabling AutoPlay for CDs won’t always disable AutoPlay for USB devices, so the registry hack is the safest course of action.
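The registry change described above sets the NoDriveTypeAutoRun value; a .reg sketch for the XP-era key (0xFF sets every bit, disabling AutoRun for all drive types):

```reg
Windows Registry Editor Version 5.00

; Disable AutoRun on all drive types (0xFF = all drive-type bits set).
; This is the machine-wide policy; a per-user copy of the same value
; lives under HKEY_CURRENT_USER.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```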

In the 1990s, the Macintosh operating system had this feature, which was removed after a virus made use of it in 1998. Microsoft needs to remove this feature as well.

EDITED TO ADD (6/12): In the penetration test, they didn’t use AutoRun.

Posted on June 8, 2006 at 1:34 PM

WiFi Tracking


Forget RFID. Well, don’t, but National Scientific Corporation has a prototype of a WiFi tagging system that, like RFID, lets you track things in real-time and space. The advantage that the WiFi Tracker system has over passive RFID tracking is that you can keep tabs on objects with WiFi Tracker tags (which can hold up to 256K of data) from as far as a few hundred meters away (the range of passive RFID taggers is just a few meters). While you can do something similar with active RFID tags, with WiFi Tracker companies can use their pre-existing WiFi network to track things rather than having to build a whole new RFID system.

In other news, Apple is adding WiFi to the iPod.

And, of course, you can be tracked from your cellphone:

But the FBI and the U.S. Department of Justice have seized on the ability to locate a cellular customer and are using it to track Americans’ whereabouts surreptitiously — even when there’s no evidence of wrongdoing.

A pair of court decisions in the last few weeks shows that judges are split on whether this is legal. One federal magistrate judge in Wisconsin on Jan. 17 ruled it was unlawful, but another nine days later in Louisiana decided that it was perfectly OK.

This is an unfortunate outcome, not least because it shows that some judges are reluctant to hold federal agents and prosecutors to the letter of the law.

It’s also unfortunate because it demonstrates that the FBI swore never to use a 1994 surveillance law to track cellular phones — but then, secretly, went ahead and did it, anyway.

Posted on February 14, 2006 at 1:29 PM

Hymn Project

The Hymn Project exists to break the iTunes mp4 copy-protection scheme, so you can hear the music you bought on any machine you want.

The purpose of the Hymn Project is to allow you to exercise your fair-use rights under copyright law. The various software provided on this web site allows you to free your iTunes Music Store purchases (protected AAC / .m4p) from their DRM restrictions with no loss of sound quality. These songs can then be played outside of the iTunes environment, even on operating systems not supported by iTunes and on hardware not supported by Apple.

Initially, the software recovered your iTunes password (your key, basically) from your hard drive. In response, Apple obfuscated the format, and no one has yet figured out how to recover the keys cleanly. To get around this, the Hymn developers wrote a program called FairKeys that impersonates iTunes and contacts Apple’s servers. Since the iTunes client can still get your password, this works.

FairKeys … pretends to be a copy of iTunes running on an imaginary computer, one of the five computers that you’re currently allowed to authorize for playing your iTMS purchases. FairKeys logs into Apple’s web servers to get your keys the same way iTunes does when it needs to get new keys. At least for now, at this stage of the cat-and-mouse game, FairKeys knows how to request your keys and how to decode the response which contains your keys, and once it has those keys it can store them for immediate or future use by JHymn.

More security by inconvenience, and yet another illustration of the neverending arms race between attacker and defender.

Posted on July 11, 2005 at 8:09 AM

