Entries Tagged "Apple"


Hacking Apple Laptop Batteries

Interesting:

Security researcher Charlie Miller, widely known for his work on Mac OS X and Apple’s iOS, has discovered an interesting method that enables him to completely disable the batteries on Apple laptops, making them permanently unusable, and to perform a number of other unintended actions. The method, which involves accessing and sending instructions to the chip housed on smart batteries, could also be used for more malicious purposes down the road.

[…]

What he found is that the batteries are shipped from the factory in a state called “sealed mode” and that there’s a four-byte password that’s required to change that. By analyzing a couple of updates that Apple had sent to fix problems in the batteries in the past, Miller found that password and was able to put the battery into “unsealed mode.”

From there, he could make a few small changes to the firmware, but not what he really wanted. So he poked around a bit more and found that a second password was required to move the battery into full access mode, which gave him the ability to make any changes he wished. That password is a default set at the factory and it’s not changed on laptops before they’re shipped. Once he had that, Miller found he could do a lot of interesting things with the battery.

“That lets you access it at the same level as the factory can,” he said. “You can read all the firmware, make changes to the code, do whatever you want. And those code changes will survive a reinstall of the OS, so you could imagine writing malware that could hide on the chip on the battery. You’d need a vulnerability in the OS or something that the battery could then attack, though.”
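
To make the two-password flow concrete, here is a minimal sketch of talking to an SBS smart battery over SMBus, assuming a Linux host, the smbus2 Python library, and an I2C adapter wired to the battery. The addresses follow the Smart Battery System specification; the password words are placeholders, not the values Miller recovered.

    # Minimal sketch: unseal an SBS smart battery over SMBus. Assumes a
    # Linux I2C adapter wired to the battery; password words are placeholders.
    from smbus2 import SMBus

    BATTERY_ADDR = 0x0b          # standard SBS smart-battery address
    MANUFACTURER_ACCESS = 0x00   # standard SBS ManufacturerAccess command

    UNSEAL_WORDS = (0x1234, 0x5678)       # hypothetical 4-byte unseal password
    FULL_ACCESS_WORDS = (0xFFFF, 0xFFFF)  # hypothetical factory default

    def send_key(bus, words):
        # A password is sent as two consecutive 16-bit writes to the
        # ManufacturerAccess register, the usual gas-gauge unseal sequence.
        for word in words:
            bus.write_word_data(BATTERY_ADDR, MANUFACTURER_ACCESS, word)

    with SMBus(1) as bus:                 # bus number depends on the adapter
        send_key(bus, UNSEAL_WORDS)       # "sealed" -> "unsealed"
        send_key(bus, FULL_ACCESS_WORDS)  # "unsealed" -> full access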

As components get smarter, they also get more vulnerable.

Posted on July 29, 2011 at 6:54 AM

iPhone Iris Scanning Technology

No indication about how well it works:

The smartphone-based scanner, named Mobile Offender Recognition and Information System, or MORIS, is made by BI2 Technologies in Plymouth, Massachusetts, and can be deployed by officers out on the beat or back at the station.

An iris scan, which detects unique patterns in a person’s eyes, can reduce to seconds the time it takes to identify a suspect in custody. The technique is also significantly more accurate than the fingerprinting technology long in use by police, BI2 says.

When attached to an iPhone, MORIS can photograph a person’s face and run the image through software that hunts for a match in a BI2-managed database of U.S. criminal records. Each unit costs about $3,000.

[…]

Roughly 40 law enforcement units nationwide will soon be using the MORIS, including Arizona’s Pinal County Sheriff’s Office, as well as officers in Hampton City in Virginia and Calhoun County in Alabama.
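
How well it works is unknowable from the article, but iris matchers in the Daugman tradition encode each iris as a fixed-length binary code and compare codes by fractional Hamming distance; whether BI2’s system does the same is an assumption. A toy sketch of the comparison step:

    # Toy sketch of iris-code comparison by fractional Hamming distance.
    # Real systems derive the codes from Gabor-filtered iris images and
    # use occlusion masks; the codes below are random stand-ins.
    import numpy as np

    def iris_distance(code_a, code_b, mask_a, mask_b):
        valid = mask_a & mask_b             # bits usable in both captures
        disagree = (code_a ^ code_b) & valid
        return disagree.sum() / valid.sum()

    rng = np.random.default_rng(0)
    probe = rng.integers(0, 2, 2048).astype(bool)   # toy 2048-bit iris code
    gallery = probe.copy()
    gallery[:100] ^= True                   # simulate noise between captures
    mask = np.ones(2048, dtype=bool)

    print(iris_distance(probe, gallery, mask, mask))  # ~0.05: likely a match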

Posted on July 26, 2011 at 6:51 AM

Protecting Private Information on Smart Phones

AppFence is a technology — with a working prototype — that protects personal information on smart phones. It does this by either substituting innocuous information in place of sensitive information or blocking attempts by the application to send the sensitive information over the network.
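
As a rough sketch of the substitution idea (not AppFence’s actual implementation, which modifies Android itself), imagine the platform interposing on an identifier lookup and handing untrusted applications a plausible fake; all names here are hypothetical:

    # Toy sketch of data shadowing: untrusted apps get a stable but fake
    # device identifier instead of the real one. Hypothetical interposition
    # layer; AppFence actually hooks Android's internals.
    import hashlib

    REAL_IMEI = "490154203237518"    # made-up device identifier

    def shadowed_imei(app_id: str, trusted: bool) -> str:
        if trusted:
            return REAL_IMEI
        # Derive a per-app fake: stable, so the app keeps working, but
        # useless for tracking the user across apps and services.
        digest = hashlib.sha256(app_id.encode()).hexdigest()
        return "".join(str(int(c, 16) % 10) for c in digest[:15])

    print(shadowed_imei("com.example.adnetwork", trusted=False))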

The significance of systems like AppFence is that they have the potential to change the balance of power in privacy between mobile application developers and users. Today, application developers get to choose what information an application will have access to, and the user faces a take-it-or-leave-it proposition: users must either grant all the permissions requested by the application developer or abandon installation. Take-it-or-leave-it offers may make it easier for applications to obtain access to information that users don’t want applications to have. Many applications take advantage of this to gain access to users’ device identifiers and location for behavioral tracking and advertising. Systems like AppFence could make it harder for applications to access these types of information without more explicit consent and cooperation from users.

The problem is that the mobile OS providers might not like AppFence. Google probably doesn’t care, but Apple is one of the biggest consumers of iPhone personal information. Right now, the prototype only works on Android, because it requires flashing the phone. In theory, the technology can be made to work on any mobile OS, but good luck getting Apple to agree to it.

Posted on June 24, 2011 at 6:37 AM

Whitelisting vs. Blacklisting

The whitelist/blacklist debate is far older than computers, and it’s instructive to recall what works where. Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier — although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t — but because it’s a security system that can be implemented automatically, without people.

To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed. Casinos are a good example: everyone can come in and gamble except those few specifically listed in the casino’s black book or the more general Griffin book. Some retail stores have the same model — a Google search on “banned from Wal-Mart” results in 1.5 million hits, including Megan Fox — although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?

National borders certainly have that kind of manpower, and Marcus is correct to point to passport control as a system with both a whitelist and a blacklist. There are people who are allowed in with minimal fuss, people who are summarily arrested with as little fuss as possible, and people in the middle who receive some amount of fussing. Airport security works the same way: the no-fly list is a blacklist, and people with redress numbers are on the whitelist.

Computer networks share characteristics with your office and Wal-Mart: sometimes you only want a few people to have access, and sometimes you want almost everybody to have access. And you see whitelists and blacklists at work in computer networks. Access control is whitelisting: if you know the password, or have the token or biometric, you get access. Antivirus is blacklisting: everything coming into your computer from the Internet is assumed to be safe unless it appears on a list of bad stuff. On computers, unlike the real world, it takes no extra manpower to implement a blacklist — the software can do it largely for free.
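
The entire difference between the two models is the default applied to unknown items; a minimal sketch:

    # Minimal sketch: whitelists default-deny, blacklists default-allow.
    def whitelist_allows(item, allowed):
        return item in allowed       # unknown items are blocked

    def blacklist_allows(item, banned):
        return item not in banned    # unknown items pass

    allowed = {"alice", "bob"}
    banned = {"mallory"}
    print(whitelist_allows("carol", allowed))   # False: not on the list
    print(blacklist_allows("carol", banned))    # True: not known to be bad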

Traditionally, execution control has been based on a blacklist. Computers are so complicated and applications so varied that it just doesn’t make sense to limit users to a specific set of applications. The exception is constrained environments, such as computers in hotel lobbies and airline club lounges. On those, you’re often limited to an Internet browser and a few common business applications.

Lately, we’re seeing more whitelisting on closed computing platforms. The iPhone works on a whitelist: if you want a program to run on the phone, you need to get it approved by Apple and put in the iPhone store. Your Wii game machine works the same way. This is done primarily because the manufacturers want to control the economic environment, but it’s being sold partly as a security measure. But in this case, more security equals less liberty; do you really want your computing options limited by Apple, Microsoft, Google, Facebook, or whoever controls the particular system you’re using?

Turns out that many people do. Apple’s control over its apps hasn’t seemed to hurt iPhone sales, and Facebook’s control over its apps hasn’t seemed to affect Facebook’s user numbers. And honestly, quite a few of us would have had an easier time over the Christmas holidays if we could have implemented a whitelist on the computers of our less-technical relatives.

For these two reasons, I think the whitelist model will continue to make inroads into our general-purpose computers. And those of us who want control over our own environments will fight back — perhaps with a whitelist we maintain personally, but more probably with a blacklist.

This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. You can read Marcus’s half there as well.

Posted on January 28, 2011 at 5:02 AM

Apple JailBreakMe Vulnerability

Good information from Mikko Hyppönen.

Q: What is this all about?
A: It’s about a site called jailbreakme.com that enables you to jailbreak your iPhones and iPads just by visiting the site.

Q: So what’s the problem?
A: The problem is that the site uses a zero-day vulnerability to execute code on the device.

Q: How does the vulnerability work?
A: Actually, it’s two vulnerabilities. The first uses a corrupted font embedded in a PDF file to execute code, and the second uses a kernel vulnerability to escalate that code execution to unsandboxed root.

Q: How difficult was it to create this exploit?
A: Very difficult.

Q: How difficult would it be for someone else to modify the exploit now that it’s out?
A: Quite easy.

Here’s the JailBreakMe blog.
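
The delivery vector was a malformed Compact Font Format (CFF) font embedded in a PDF. As a crude triage heuristic, and nothing like a fix, one could flag PDFs that embed Type1C/CFF font streams at all; this sketch does raw byte scanning rather than real PDF parsing:

    # Crude triage sketch: flag PDFs that embed Type1C/CFF fonts, the
    # object type the JailbreakMe exploit abused. Byte scanning only; a
    # real scanner would parse the object graph and decode the streams.
    import sys

    def embeds_cff_font(path: str) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        return b"/FontFile3" in data or b"/Type1C" in data

    for path in sys.argv[1:]:
        verdict = "embeds a CFF font" if embeds_cff_font(path) else "no CFF font found"
        print(f"{path}: {verdict}")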

EDITED TO ADD (8/14): Apple has released a patch. It doesn’t help people with older-model iPhones and iPod Touches, and it doesn’t work for people who’ve jailbroken their phones.

EDITED TO ADD (8/15): More info.

Posted on August 10, 2010 at 12:12 PM

AT&T’s iPad Security Breach

I didn’t write about the recent security breach that disclosed tens of thousands of e-mail addresses and ICC-IDs of iPad users because, well, there was nothing terribly interesting about it. It was yet another web security breach.

Right after the incident, though, I was interviewed by a reporter who wanted to know the ramifications of the breach. He specifically wanted to know if anything could be done with those ICC-IDs, and if the disclosure of that information was worse than people thought. He didn’t like the answer I gave him, which was that no one knows yet: it’s too early to know the full effects of that information disclosure, and both the good guys and the bad guys will be figuring it out in the coming weeks. And it’s likely that the breach has further security implications.

Seems like there were:

The problem is that ICC-IDs—unique serial numbers that identify each SIM card—can often be converted into IMSIs. While the ICC-ID is nonsecret—it’s often found printed on the boxes of cellphone/SIM bundles—the IMSI is somewhat secret. In theory, knowing an ICC-ID shouldn’t be enough to determine an IMSI. The phone companies do need to know which IMSI corresponds to which ICC-ID, but this should be done by looking up the values in a big database.

In practice, however, many phone companies simply calculate the IMSI from the ICC-ID. This calculation is often very simple indeed, being little more complex than “combine this hard-coded value with the last nine digits of the ICC-ID.” So while the leakage of AT&T’s customers’ ICC-IDs should be harmless, in practice, it could reveal a secret ID.
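
To see how weak that is, here’s a worked sketch with made-up values. An ICC-ID’s final digit is a Luhn check digit, so “the last nine digits” come from just before it; the hard-coded prefix below is hypothetical, and real carriers’ mappings differ:

    # Worked sketch of the derivation described above, with made-up values.
    # The hard-coded prefix (MCC + MNC plus filler) is hypothetical.
    ICCID = "89014103211118510720"   # made-up 20-digit ICC-ID

    HARDCODED_PREFIX = "310410"      # hypothetical carrier prefix

    def derive_imsi(iccid: str) -> str:
        last_nine = iccid[-10:-1]    # nine digits preceding the Luhn check digit
        return HARDCODED_PREFIX + last_nine

    print(derive_imsi(ICCID))        # a 15-digit "secret" anyone can compute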

What can be done with that secret ID? Quite a lot, it turns out. The IMSI is sent by the phone to the network when first signing on to the network; it’s used by the network to figure out which call should be routed where. With someone else’s IMSI, an attacker can determine the person’s name and phone number, and even track his or her position. It also opens the door to active attacks—creating fake cell towers that a victim’s phone will connect to, enabling every call and text message to be eavesdropped on.

More to come, I’m sure.

And that’s really the point: we all want to know — right away — the effects of a security vulnerability, but often we don’t and can’t. It takes time before the full effects are known, sometimes a lot of time.

And in related news, the image redaction that went along with some of the breach reporting wasn’t very good.

Posted on June 21, 2010 at 5:27 AM

Alerting Users that Applications are Using Cameras, Microphones, Etc.

Interesting research: “What You See is What They Get: Protecting users from unwanted use of microphones, cameras, and other sensors,” by Jon Howell and Stuart Schechter.

Abstract: Sensors such as cameras and microphones collect privacy-sensitive data streams without the user’s explicit action. Conventional sensor access policies either hassle users to grant applications access to sensors or grant access with no approval at all. Once access is granted, an application may collect sensor data even after the application’s interface suggests that the sensor is no longer being accessed.

We introduce the sensor-access widget, a graphical user interface element that resides within an application’s display. The widget provides an animated representation of the personal data being collected by its corresponding sensor, calling attention to the application’s attempt to collect the data. The widget indicates whether the sensor data is currently allowed to flow to the application. The widget also acts as a control point through which the user can configure the sensor and grant or deny the application access. By building perpetual disclosure of sensor data collection into the platform, sensor-access widgets enable new access-control policies that relax the tension between the user’s privacy needs and applications’ ease of access.
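
A toy sketch of the control-point idea, assuming a hypothetical framework in which the platform, not the application, sits between the sensor and the app; the paper’s implementation details differ:

    # Toy sketch of a sensor-access widget: the platform owns the gate
    # between sensor and app, discloses every read, and lets the user
    # flip access at any time. All names here are hypothetical.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class SensorAccessWidget:
        sensor_name: str
        granted: bool = False

        def toggle(self) -> None:
            # The user clicks the widget to grant or revoke access.
            self.granted = not self.granted

        def deliver(self, frame: bytes, app: Callable[[bytes], None]) -> None:
            # Perpetual disclosure: indicate activity on every read.
            state = "LIVE" if self.granted else "BLOCKED"
            print(f"[{self.sensor_name}] {state}: {len(frame)} bytes")
            if self.granted:
                app(frame)           # data flows only while access is granted

    widget = SensorAccessWidget("camera")
    widget.deliver(b"\x00" * 1024, lambda f: None)   # blocked by default
    widget.toggle()
    widget.deliver(b"\x00" * 1024, lambda f: None)   # now reaches the app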

Apple seems to be taking some steps in this direction with the location sensor disclosure in iPhone 4.0 OS.

Posted on May 24, 2010 at 7:32 AM
