I haven’t written about Apple’s Lockdown Mode yet, mostly because I haven’t delved into the details. This is how Apple describes it:
Lockdown Mode offers an extreme, optional level of security for the very few users who, because of who they are or what they do, may be personally targeted by some of the most sophisticated digital threats, such as those from NSO Group and other private companies developing state-sponsored mercenary spyware. Turning on Lockdown Mode in iOS 16, iPadOS 16, and macOS Ventura further hardens device defenses and strictly limits certain functionalities, sharply reducing the attack surface that potentially could be exploited by highly targeted mercenary spyware.
At launch, Lockdown Mode includes the following protections:
- Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
- Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
- Wired connections with a computer or accessory are blocked when iPhone is locked.
- Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.
What Apple has done here is really interesting. It’s common to trade security off for usability, and the results of that are all over Apple’s operating systems—and everywhere else on the Internet. What they’re doing with Lockdown Mode is the reverse: they’re trading usability for security. The result is a user experience with fewer features, but a much smaller attack surface. And they aren’t just removing random features; they’re removing features that are common attack vectors.
There aren’t a lot of people who need Lockdown Mode, but it’s an excellent option for those who do.
EDITED TO ADD (7/31): An analysis of the effect of Lockdown Mode on Safari.
Posted on July 26, 2022 at 7:57 AM
Apple has introduced Lockdown Mode for high-risk users who are concerned about nation-state attacks. It trades reduced functionality for increased security in a very interesting way.
Posted on July 8, 2022 at 9:18 AM
Researchers have figured out how to intercept and fake an iPhone reboot:
We’ll dissect the iOS system and show how it’s possible to alter a shutdown event, tricking a user that got infected into thinking that the phone has been powered off, but in fact, it’s still running. The “NoReboot” approach simulates a real shutdown. The user cannot feel a difference between a real shutdown and a “fake shutdown.” There is no user-interface or any button feedback until the user turns the phone back “on.”
It’s a complicated hack, but it works.
Uses are obvious:
Historically, when malware infects an iOS device, it can be removed simply by restarting the device, which clears the malware from memory.
However, this technique hooks the shutdown and reboot routines to prevent them from ever happening, allowing malware to achieve persistence as the device is never actually turned off.
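The core trick is hooking: the shutdown routine is replaced with an impostor that reproduces the visible effects of a shutdown without actually powering anything off. Here's a toy sketch of that idea in Python; it is purely illustrative (the real NoReboot technique hooks iOS system daemons such as SpringBoard, not Python functions), with all names invented for the example.

```python
# Toy illustration of the hooking idea behind "NoReboot" (conceptual only;
# the real attack hooks iOS system processes, not Python objects).

class Device:
    def __init__(self):
        self.powered_on = True
        self.screen_on = True

    def shutdown(self):
        # The genuine routine: powers the device off, which would also
        # clear any malware resident in memory.
        self.screen_on = False
        self.powered_on = False

def install_fake_shutdown(device):
    """Hook: replace the shutdown routine with one that only *looks* real."""
    def fake_shutdown():
        device.screen_on = False   # mimic the visible effect of powering off
        # device.powered_on stays True: code in memory keeps running
    device.shutdown = fake_shutdown

device = Device()
install_fake_shutdown(device)
device.shutdown()
print(device.screen_on)   # False: appears off to the user
print(device.powered_on)  # True: still running
```

Since the user's only evidence of power state is the screen and button feedback, and both are software-mediated, the hook is indistinguishable from a real shutdown until the next genuine boot.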
I see this as another manifestation of the security problems that stem from all controls becoming software controls. Back when the physical buttons actually did things—like turn the power, the Wi-Fi, or the camera on and off—you could actually know that something was on or off. Now that software controls those functions, you can never be sure.
Posted on January 12, 2022 at 6:15 AM
Apple’s NeuralHash algorithm—the one it’s using for client-side scanning on the iPhone—has been reverse-engineered.
Turns out it was already in iOS 14.3, and someone noticed:
Early tests show that it can tolerate image resizing and compression, but not cropping or rotations.
We also have the first collision: two images that hash to the same value.
The next step is to generate innocuous images that NeuralHash classifies as prohibited content.
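The "tolerates resizing but not rotation" behavior is characteristic of perceptual hashes generally, not just NeuralHash. A minimal toy example (a pure-Python "average hash," which is far simpler than NeuralHash and stands in for it only conceptually) shows why: the hash is computed from coarse brightness structure, which survives downscaling but is scrambled by rotation.

```python
# A minimal perceptual "average hash" -- NOT NeuralHash, just a toy showing
# why such hashes survive resizing but break under rotation.

def average_hash(img, size=4):
    """img: square 2D list of brightness values. Returns a bit string."""
    n = len(img)
    block = n // size
    # Downscale to size x size by block averaging.
    small = [
        [sum(img[y][x]
             for y in range(by * block, (by + 1) * block)
             for x in range(bx * block, (bx + 1) * block)) / block ** 2
         for bx in range(size)]
        for by in range(size)]
    mean = sum(v for row in small for v in row) / size ** 2
    # One bit per cell: brighter than average or not.
    return ''.join('1' if v > mean else '0' for row in small for v in row)

def downscale(img, factor):
    n = len(img)
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(n // factor)] for y in range(n // factor)]

def rotate90(img):
    n = len(img)
    return [[img[n - 1 - x][y] for x in range(n)] for y in range(n)]

# A synthetic 8x8 "image": bright on the left, dark on the right.
img = [[255] * 4 + [0] * 4 for _ in range(8)]

h = average_hash(img)
print(h == average_hash(downscale(img, 2)))  # True: survives resizing
print(h == average_hash(rotate90(img)))      # False: breaks under rotation
```

The same coarse-structure property is what makes collisions findable: an adversary only needs to craft an image whose low-frequency content matches the target's, which is a much looser constraint than matching a cryptographic hash.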
This was a bad idea from the start, and Apple never seemed to consider the adversarial context of the system as a whole, and not just the cryptography.
Posted on August 18, 2021 at 11:51 AM
iOS apps are repeatedly reading clipboard data, which can include all sorts of sensitive information.
While Haj Bakry and Mysk published their research in March, the invasive apps made headlines again this week with the developer beta release of iOS 14. A novel feature Apple added provides a banner warning every time an app reads clipboard contents. As large numbers of people began testing the beta release, they quickly came to appreciate just how many apps engage in the practice and just how often they do it.
This YouTube video, which has racked up more than 87,000 views since it was posted on Tuesday, shows a small sample of the apps triggering the new warning.
EDITED TO ADD (7/6): LinkedIn and Reddit are doing this.
Posted on June 29, 2020 at 10:24 AM
This is a good explanation of an iOS bug that allowed someone to break out of the application sandbox. A summary:
What a crazy bug, and Siguza’s explanation is very cogent. Basically, it comes down to this:
- XML is terrible.
- iOS uses XML for Plists, and Plists are used everywhere in iOS (and MacOS).
- iOS’s sandboxing system depends upon three different XML parsers, which interpret slightly invalid XML input in slightly different ways.
So Siguza’s exploit, which granted an app full access to the entire file system and more, uses malformed XML comments constructed so that one of iOS’s XML parsers sees its declaration of entitlements one way, and another XML parser sees it another way. The XML parser used to check whether an application should be allowed to launch doesn’t see the fishy entitlements because it thinks they’re inside a comment. The XML parser used to determine whether an already running application has permission to do things that require entitlements sees the fishy entitlements and grants permission.
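The parser differential can be sketched with two toy comment-strippers. This is illustrative Python, not Apple's code, and the malformed token shown (`<!-->`) merely stands in for the kind of input the real bug abused: one parser treats it as a complete empty comment, the other as the start of a comment that swallows everything up to the next `-->`.

```python
# Two toy "parsers" that differ only in how they handle the malformed
# comment token "<!-->" -- a sketch of the parser differential behind
# Siguza's sandbox escape, not Apple's actual plist parsers.

def visible_text_a(xml):
    """Parser A: '<!-->' is a complete, empty comment."""
    out, i = [], 0
    while i < len(xml):
        if xml.startswith("<!-->", i):
            i += 5                               # whole token is the comment
        elif xml.startswith("<!--", i):
            i = xml.index("-->", i + 4) + 3      # skip a normal comment
        else:
            out.append(xml[i]); i += 1
    return "".join(out)

def visible_text_b(xml):
    """Parser B: '<!--' always opens a comment running to the next '-->'."""
    out, i = [], 0
    while i < len(xml):
        if xml.startswith("<!--", i):
            i = xml.index("-->", i + 4) + 3
        else:
            out.append(xml[i]); i += 1
    return "".join(out)

# A hypothetical entitlement wrapped in malformed comment markers.
doc = "<plist><!--><key>evil-entitlement</key><!-- --></plist>"

print("evil-entitlement" in visible_text_a(doc))  # True: A sees (and grants) it
print("evil-entitlement" in visible_text_b(doc))  # False: B thinks it's commented out
```

If parser B is the one doing the security check and parser A is the one granting runtime privileges, the check passes while the privilege is still conferred, which is the shape of the exploit.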
This is fixed in the new iOS release, 13.5 beta 3.
Implementing four different parsers is just asking for trouble, and the “fix” is of the crappiest sort, bolting on more crap to check that they’re doing the right thing in this single case. None of this is encouraging.
More commentary. Hacker News thread.
Posted on May 7, 2020 at 9:56 AM
A new iOS exploit allows jailbreaking of pretty much all versions of the iPhone. This is a huge deal for Apple, but at least it doesn’t allow someone to remotely hack people’s phones.
I wanted to learn how Checkm8 will shape the iPhone experience—particularly as it relates to security—so I spoke at length with axi0mX on Friday. Thomas Reed, director of Mac offerings at security firm Malwarebytes, joined me. The takeaways from the wide-ranging interview are:
- Checkm8 requires physical access to the phone. It can’t be remotely executed, even if combined with other exploits.
- The exploit allows only tethered jailbreaks, meaning it lacks persistence. The exploit must be run each time an iDevice boots.
- Checkm8 doesn’t bypass the protections offered by the Secure Enclave and Touch ID.
- All of the above means people will be able to use Checkm8 to install malware only under very limited circumstances. The above also means that Checkm8 is unlikely to make it easier for people who find, steal or confiscate a vulnerable iPhone, but don’t have the unlock PIN, to access the data stored on it.
- Checkm8 is going to benefit researchers, hobbyists, and hackers by providing a way not seen in almost a decade to access the lowest levels of iDevices.
“The main people who are likely to benefit from this are security researchers, who are using their own phone in controlled conditions. This process allows them to gain more control over the phone and so improves visibility into research on iOS or other apps on the phone,” Wood says. “For normal users, this is unlikely to have any effect, there are too many extra hurdles currently in place that they would have to get over to do anything significant.”
If a regular person with no prior knowledge of jailbreaking wanted to use this exploit to jailbreak their iPhone, they would find it extremely difficult, simply because Checkm8 just gives you access to the exploit, but not a jailbreak in itself. It’s also a “tethered exploit,” meaning that the jailbreak can only be triggered when connected to a computer via USB and stops working once the device restarts.
Posted on October 8, 2019 at 5:24 AM
The digital forensics company Cellebrite now claims it can unlock any iPhone.
I dithered before blogging this, not wanting to give the company more publicity. But I decided that everyone who wants to know already knows, and that Apple already knows. It’s all of us that need to know.
Posted on June 28, 2019 at 6:35 AM
This is really just to point out that computer security is really hard:
Almost as soon as Apple released iOS 12.1 on Tuesday, a Spanish security researcher discovered a bug that exploits Group FaceTime calls to give anyone access to an iPhone user’s contact information with no need for a passcode.
A bad actor would need physical access to the phone that they are targeting and would have a few options for viewing the victim’s contact information. They would need to either call the phone from another iPhone or have the phone call itself. Once the call connects, they would need to:
- Select the FaceTime icon
- Select “Add Person”
- Select the plus icon
- Scroll through the contacts and use 3D Touch on a name to view all contact information that’s stored.
Making the phone call itself without entering a passcode can be accomplished by either telling Siri the phone number or, if they don’t know the number, saying “call my phone.” We tested this with both the owner’s voice and a stranger’s voice; in both cases, Siri initiated the call.
Posted on November 8, 2018 at 6:35 AM