Entries Tagged "Apple"

Thieves Using AirTags to “Follow” Cars

From Ontario and not surprising:

Since September 2021, officers have investigated five incidents where suspects have placed small tracking devices on high-end vehicles so they can later locate and steal them. Brand name “air tags” are placed in out-of-sight areas of the target vehicles when they are parked in public places like malls or parking lots. Thieves then track the targeted vehicles to the victim’s residence, where they are stolen from the driveway.

Thieves typically use tools like screwdrivers to enter the vehicles through the driver or passenger door, while ensuring not to set off alarms. Once inside, an electronic device, typically used by mechanics to reprogram the factory setting, is connected to the onboard diagnostics port below the dashboard and programs the vehicle to accept a key the thieves have brought with them. Once the new key is programmed, the vehicle will start and the thieves drive it away.

I’m not sure if there’s anything that can be done:

When Apple first released AirTags earlier this year, concerns immediately sprung up about nefarious use cases for the covert trackers. Apple responded with a slew of anti-stalking measures, but those are more intended for keeping people safe than cars. An AirTag away from its owner will sound an alarm, letting anyone nearby know that it’s been left behind, but it can take up to 24 hours for that alarm to go off—more than enough time to nab a car in the dead of night.

Posted on December 6, 2021 at 10:25 AM

Apple Sues NSO Group

Adding to NSO Group’s legal troubles, Apple is suing the company:

The complaint provides new information on how NSO Group infected victims’ devices with its Pegasus spyware. To prevent further abuse and harm to its users, Apple is also seeking a permanent injunction to ban NSO Group from using any Apple software, services, or devices.

NSO Group’s Pegasus spyware is favored by totalitarian governments around the world, which use it to hack Apple phones and computers.

More news:

Apple’s legal complaint provides new information on NSO Group’s FORCEDENTRY, an exploit for a now-patched vulnerability previously used to break into a victim’s Apple device and install the latest version of NSO Group’s spyware product, Pegasus. The exploit was originally identified by the Citizen Lab, a research group at the University of Toronto.

The spyware was used to attack a small number of Apple users worldwide with dangerous malware and spyware. Apple’s lawsuit seeks to ban NSO Group from further harming individuals by using Apple’s products and services. The lawsuit also seeks redress for NSO Group’s flagrant violations of US federal and state law, arising out of its efforts to target and attack Apple and its users.

NSO Group and its clients devote the immense resources and capabilities of nation-states to conduct highly targeted cyberattacks, allowing them to access the microphone, camera, and other sensitive data on Apple and Android devices. To deliver FORCEDENTRY to Apple devices, attackers created Apple IDs to send malicious data to a victim’s device—allowing NSO Group or its clients to deliver and install Pegasus spyware without a victim’s knowledge. Though misused to deliver FORCEDENTRY, Apple servers were not hacked or compromised in the attacks.

This follows in the footsteps of Facebook, which is also suing NSO Group and demanding a similar prohibition. And while the idea of the intermediary suing the attacker, and not the victim, is somewhat novel, I think it makes a lot of sense. I have a law journal article about to be published with Jon Penney on the Facebook case.

EDITED TO ADD (12/14): Supplemental brief.

Posted on November 24, 2021 at 9:29 AM

MacOS Zero-Day Used against Hong Kong Activists

Google researchers discovered a MacOS zero-day exploit being used against Hong Kong activists. It was a “watering hole” attack: the malware was hidden on a legitimate website, and users visiting that website would get infected.

From an article:

Google’s researchers were able to trigger the exploits and study them by visiting the websites compromised by the hackers. The sites served both iOS and MacOS exploit chains, but the researchers were only able to retrieve the MacOS one. The zero-day exploit was similar to another in-the-wild vulnerability analyzed by another Google researcher in the past, according to the report.

In addition, the zero-day exploit used in this hacking campaign is “identical” to an exploit previously found by cybersecurity research group Pangu Lab, Huntley said. Pangu Lab’s researchers presented the exploit at a security conference in China in April of this year, a few months before hackers used it against Hong Kong users.

The exploit was discovered in August. Apple patched the vulnerability in September. China is, of course, the obvious suspect, given the victims.

EDITED TO ADD (11/15): Another story.

Posted on November 12, 2021 at 9:07 AM

Security Risks of Client-Side Scanning

Even before Apple made its announcement, law enforcement had shifted its battle for backdoors to client-side scanning. The idea is that they wouldn’t touch the cryptography, but instead eavesdrop on communications and systems before encryption or after decryption. It’s not a cryptographic backdoor, but it’s still a backdoor—and it brings with it all the insecurities of a backdoor.
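To make the architecture concrete, here’s a minimal sketch of where the scanning sits in a messaging pipeline. Everything in it is hypothetical pseudocode, not any vendor’s actual API; the point is simply that the scanner sees the plaintext before encryption ever runs.

```python
# A minimal sketch of client-side scanning in a messaging pipeline.
# All names here are hypothetical illustrations, not any real API.
# The scanner runs on the device, before encryption, so the
# cryptography is untouched -- yet a third party sees flagged plaintext.

from typing import Callable

def send_message(plaintext: bytes,
                 scan: Callable[[bytes], bool],
                 encrypt: Callable[[bytes], bytes],
                 report: Callable[[bytes], None],
                 transmit: Callable[[bytes], None]) -> None:
    if scan(plaintext):
        # Flagged content is reported in the clear, bypassing the
        # end-to-end guarantee without breaking the encryption itself.
        report(plaintext)
    transmit(encrypt(plaintext))
```

Whoever controls what scan() matches controls what gets reported; that is exactly the backdoor.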

I’m part of a group of cryptographers that has just published a paper discussing the security risks of such a system. (It’s substantially the same group that wrote a similar paper about key escrow in 1997, and other “exceptional access” proposals in 2015. We seem to have to do this every decade or so.) In our paper, we examine both the efficacy of such a system and its potential security failures, and conclude that it’s a really bad idea.

We had been working on the paper well before Apple’s announcement. And while we do talk about Apple’s system, our focus is really on the idea in general.

Ross Anderson wrote a blog post on the paper. (It’s always great when Ross writes something. It means I don’t have to.) So did Susan Landau. And there’s press coverage in the New York Times, the Guardian, Computer Weekly, the Financial Times, Forbes, El Pais (English translation), NRK (English translation), and—this is the best article of them all—the Register. See also this analysis of the law and politics of client-side scanning from last year.

Posted on October 15, 2021 at 9:30 AM

More on Apple’s iPhone Backdoor

In this post, I’ll collect links on Apple’s iPhone backdoor for scanning CSAM images. Previous links are here and here.

Apple says that hash collisions in its CSAM detection system were expected and are not a concern, because a secondary server-side hash check is supposed to catch any false matches. I’m not convinced that this secondary system was originally part of the design, since it wasn’t discussed in the original specification.

Good op-ed from a group of Princeton researchers who developed a similar system:

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

EDITED TO ADD (8/30): Good essays by Matthew Green and Alex Stamos, Ross Anderson, Edward Snowden, and Susan Landau. And also Kurt Opsahl.

EDITED TO ADD (9/6): Apple is delaying implementation of the scheme.

Posted on August 20, 2021 at 8:54 AM

Apple’s NeuralHash Algorithm Has Been Reverse-Engineered

Apple’s NeuralHash algorithm—the one it’s using for client-side scanning on the iPhone—has been reverse-engineered.

Turns out it was already in iOS 14.3, and someone noticed:

Early tests show that it can tolerate image resizing and compression, but not cropping or rotations.

We also have the first collision: two images that hash to the same value.

The next step is to generate innocuous images that NeuralHash classifies as prohibited content.
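For readers unfamiliar with perceptual hashing, here is a much simpler classical example of the genre, a “difference hash.” It is emphatically not NeuralHash, just an illustration of why such hashes survive resizing and compression, fail under cropping and rotation, and necessarily have collisions: infinitely many images map to a 64-bit output.

```python
# A simple "difference hash" (dHash) -- a classical perceptual hash,
# shown only to illustrate the class of algorithm NeuralHash belongs
# to. This is not Apple's algorithm.

from PIL import Image  # assumes the Pillow imaging library

def dhash(image: Image.Image, size: int = 8) -> int:
    # Shrink to a (size+1) x size grayscale grid. This throws away the
    # same fine detail that resizing and compression throw away, which
    # is why the hash survives those transformations.
    small = image.convert("L").resize((size + 1, size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            # Each bit records whether brightness rises left-to-right.
            # Cropping or rotating shifts this grid, so the bits change.
            bits = (bits << 1) | (1 if left > right else 0)
    return bits  # a 64-bit hash when size=8

def hamming(a: int, b: int) -> int:
    # Number of differing bits; a small distance means "same image."
    return bin(a ^ b).count("1")
```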

This was a bad idea from the start, and Apple never seemed to consider the adversarial context of the system as a whole, not just of the cryptography.

Posted on August 18, 2021 at 11:51 AM

Apple Adds a Backdoor to iMessage and iCloud Storage

Apple’s announcement that it’s going to start scanning photos for child abuse material is a big deal. (Here are five news stories.) I have been following the details, and discussing it in several different email lists. I don’t have time right now to delve into the details, but wanted to post something.

EFF writes:

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.
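As a rough illustration of the first feature as EFF describes it, here is a hypothetical sketch of the matching flow. Apple’s actual protocol is considerably more elaborate (blinded hash tables, threshold cryptography, “safety vouchers”), and the cryptographic hash below stands in for a perceptual one only to keep the sketch self-contained.

```python
# A hypothetical sketch of on-device hash matching at upload time.
# Not Apple's actual protocol; all names are illustrative.

import hashlib

# Stand-in for the on-device database of known-image hashes (derived
# from NCMEC's database in Apple's design; empty here).
KNOWN_HASHES: set[bytes] = set()

def perceptual_hash(photo: bytes) -> bytes:
    # Placeholder only: a real system uses a perceptual hash such as
    # NeuralHash so that near-duplicate images still match. A
    # cryptographic hash is used here just to keep the sketch runnable.
    return hashlib.sha256(photo).digest()

def prepare_upload(photo: bytes) -> dict:
    # The match happens on the user's device, as part of the upload path.
    h = perceptual_hash(photo)
    return {"payload": photo, "flagged": h in KNOWN_HASHES}
```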

This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door to all sorts of other surveillance: now that the system is built, it can be repurposed to scan for any other kind of content. And it breaks end-to-end encryption, despite Apple’s denials:

Does this break end-to-end encryption in Messages?

No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.

Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver. A third party is alerted if the message meets certain criteria.

This is a security disaster. Read tweets by Matthew Green and Edward Snowden. Also this. I’ll post more when I see it.

Beware the Four Horsemen of the Information Apocalypse. They’ll scare you into accepting all sorts of insecure systems.

EDITED TO ADD: This is a really good write-up of the problems.

EDITED TO ADD: Alex Stamos comments.

An open letter to Apple criticizing the project.

A leaked Apple memo responding to the criticisms. (What are the odds that Apple did not intend this to leak?)

EDITED TO ADD: John Gruber’s excellent analysis.

EDITED TO ADD (8/11): Paul Rosenzweig wrote an excellent policy discussion.

EDITED TO ADD (8/13): Really good essay by EFF’s Kurt Opsahl. Ross Anderson did an interview with Glenn Beck. And this news article talks about dissent within Apple about this feature.

The Economist has a good take. Apple responds to criticisms. (It’s worth watching the Wall Street Journal video interview as well.)

EDITED TO ADD (8/14): Apple released a threat model.

EDITED TO ADD (8/20): Follow-on blog posts here and here.

Posted on August 10, 2021 at 6:37 AM
