More Detail on the Juniper Hack and the NSA PRNG Backdoor
We knew the basics of this story, but it’s good to have more detail.
Here’s me in 2015 about this Juniper hack. Here’s me in 2007 on the NSA backdoor.
In this post, I’ll collect links on Apple’s iPhone backdoor for scanning CSAM images. Previous links are here and here.
Apple says that hash collisions in its CSAM detection system were expected, and not a concern. I’m not convinced that this secondary system was originally part of the design, since it wasn’t discussed in the original specification.
Good op-ed from a group of Princeton researchers who developed a similar system:
Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
EDITED TO ADD (8/30): Good essays by Matthew Green and Alex Stamos, Ross Anderson, Edward Snowden, and Susan Landau. And also Kurt Opsahl.
EDITED TO ADD (9/6): Apple is delaying implementation of the scheme.
Apple’s NeuralHash algorithm—the one it’s using for client-side scanning on the iPhone—has been reverse-engineered.
Turns out it was already in iOS 14.3, and someone noticed:
Early tests show that it can tolerate image resizing and compression, but not cropping or rotations.
We also have the first collision: two images that hash to the same value.
The next step is to generate innocuous images that NeuralHash classifies as prohibited content.
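To get a feel for why perceptual hashes behave this way, here is a minimal sketch of a simple "average hash" in Python, using the Pillow library. This is not NeuralHash, just a much simpler stand-in for the general technique; the filenames are hypothetical:

```python
# A toy "average hash" illustrating the perceptual-hashing technique.
# This is NOT NeuralHash; it's a much simpler stand-in. Requires Pillow.
from PIL import Image

def average_hash(path, size=8):
    # Downsample to a tiny grayscale image. This step is why the hash
    # tolerates resizing and mild compression: small changes average out.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: is it brighter than the average?
    return int("".join("1" if p > avg else "0" for p in pixels), 2)

def hamming(h1, h2):
    # Small distance means "perceptually similar" under this hash.
    return bin(h1 ^ h2).count("1")

# Hypothetical usage: a resized copy should land within a few bits of
# the original, while a rotated or cropped copy generally will not.
# print(hamming(average_hash("photo.jpg"), average_hash("photo_small.jpg")))
```

Because a perceptual hash is designed so that similar images collide, forging a collision against one is a far easier problem than finding a collision in a cryptographic hash like SHA-256.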
This was a bad idea from the start. Apple never seemed to consider the adversarial context of the system as a whole, only the cryptography.
Apple’s announcement that it’s going to start scanning photos for child abuse material is a big deal. (Here are five news stories.) I have been following the details, and discussing it in several different email lists. I don’t have time right now to delve into the details, but wanted to post something.
EFF writes:
There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.
This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door for all sorts of other surveillance: now that the system is built, it can be repurposed to scan for all sorts of other content. And it breaks end-to-end encryption, despite Apple’s denials:
Does this break end-to-end encryption in Messages?
No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.
Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver: a third party is alerted if the message meets certain criteria.
This is a security disaster. Read tweets by Matthew Green and Edward Snowden. Also this. I’ll post more when I see it.
Beware the Four Horsemen of the Information Apocalypse. They’ll scare you into accepting all sorts of insecure systems.
EDITED TO ADD: This is a really good write-up of the problems.
EDITED TO ADD: Alex Stamos comments.
An open letter to Apple criticizing the project.
A leaked Apple memo responding to the criticisms. (What are the odds that Apple did not intend this to leak?)
EDITED TO ADD: John Gruber’s excellent analysis.
EDITED TO ADD (8/11): Paul Rosenzweig wrote an excellent policy discussion.
EDITED TO ADD (8/13): Really good essay by EFF’s Kurt Opsahl. Ross Anderson did an interview with Glenn Beck. And this news article talks about dissent within Apple about this feature.
The Economist has a good take. Apple responds to criticisms. (It’s worth watching the Wall Street Journal video interview as well.)
EDITED TO ADD (8/14): Apple released a threat model.
Motherboard got its hands on one of those Anom phones that were really FBI honeypots.
The details are interesting.
General Packet Radio Service (GPRS) is a mobile data standard that was widely used in the early 2000s. The first encryption algorithm for that standard was GEA-1, a stream cipher built on three linear-feedback shift registers and a non-linear combining function. Although the algorithm has a 64-bit key, the effective key length is only 40 bits, due to “an exceptional interaction of the deployed LFSRs and the key initialization, which is highly unlikely to occur by chance.”
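For readers unfamiliar with the construction, here is a toy Python sketch of a single Fibonacci LFSR keystream generator. It illustrates the building block only; it is not GEA-1 itself, which combines three LFSRs through a non-linear function, and the register width and taps below are a textbook example, not GEA-1's parameters:

```python
# A toy Fibonacci LFSR keystream generator. This illustrates the
# building block only; it is NOT GEA-1, which combines three LFSRs
# through a non-linear function.

def lfsr_keystream(state, taps, width, nbits):
    """Yield nbits of keystream from an LFSR.

    state: initial register contents (in GEA-1, derived from key and IV)
    taps:  bit positions XORed together to form the feedback bit
    width: register size in bits
    """
    for _ in range(nbits):
        yield state & 1                    # output bit
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1         # linear feedback
        state = (state >> 1) | (fb << (width - 1))

# Illustrative parameters: a classic maximal-length 16-bit LFSR.
bits = list(lfsr_keystream(state=0xACE1, taps=(15, 13, 12, 10),
                           width=16, nbits=16))

# Encryption is just XOR with the keystream. GEA-1's weakness is that
# the joint initial state of its three registers can take only about
# 2^40 values, so an attacker searches 2^40 states instead of 2^64 keys.
```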
GEA-1 was designed by the European Telecommunications Standards Institute in 1998. ETSI was—and maybe still is—under the auspices of SOGIS: the Senior Officials Group, Information Systems Security. That’s basically the intelligence agencies of the EU countries.
Details are in the paper: “Cryptanalysis of the GPRS Encryption Algorithms GEA-1 and GEA-2.” GEA-2 does not have the same flaw, although the researchers found a practical attack with enough keystream.
Hacker News thread.
EDITED TO ADD (6/18): News article.
For three years, the Federal Bureau of Investigation and the Australian Federal Police owned and operated a commercial encrypted phone app, called AN0M, that was used by organized crime around the world. Of course, the police were able to read everything—I don’t even know if this qualifies as a backdoor. This week, the world’s police organizations announced 800 arrests based on text messages sent over the app. We’ve seen law enforcement take over encrypted apps before: for example, EncroChat. This operation, code-named Trojan Shield, is the first time law enforcement managed an app from the beginning.
If there is any moral to this, it’s one that all of my blog readers should already know: trust is essential to security. And the number of people you need to trust is larger than you might originally think. For an app to be secure, you need to trust the hardware, the operating system, the software, the update mechanism, the login mechanism, and on and on and on. If one of those is untrustworthy, the whole system is insecure.
It’s the same reason blockchain-based currencies are so insecure, even if the cryptography is sound.
Bizarro is a new banking trojan that is stealing financial information and crypto wallets.
…the program can be delivered in a couple of ways—either via malicious links contained within spam emails, or through a trojanized app. Using these sneaky methods, trojan operators will implant the malware onto a target device, where it will install a sophisticated backdoor that “contains more than 100 commands and allows the attackers to steal online banking account credentials,” the researchers write.
The backdoor has numerous commands built in to allow manipulation of a targeted individual, including keystroke loggers that allow for harvesting of personal login information. In some instances, the malware can allow criminals to commandeer a victim’s crypto wallet, too.
Research report.
Developers have discovered a backdoor in the Codecov bash uploader. It’s been there for four months. We don’t know who put it there.
Codecov said the breach allowed the attackers to export information stored in its users’ continuous integration (CI) environments. “This information was then sent to a third-party server outside of Codecov’s infrastructure,” the company warned.
Codecov’s Bash Uploader is also used in several related uploaders—the Codecov-actions uploader for GitHub, the Codecov CircleCI Orb, and the Codecov Bitrise Step—and the company says these uploaders were also impacted by the breach.
According to Codecov, the altered version of the Bash Uploader script could potentially affect:
- Any credentials, tokens, or keys that our customers were passing through their CI runner that would be accessible when the Bash Uploader script was executed.
- Any services, datastores, and application code that could be accessed with these credentials, tokens, or keys.
- The git remote information (URL of the origin repository) of repositories using the Bash Uploaders to upload coverage to Codecov in CI.
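The standard mitigation for this class of attack is to pin a known-good checksum and verify the script before running it, rather than piping whatever the server returns into a shell. Here is a minimal sketch in Python; the URL and digest are placeholders, not Codecov's actual values:

```python
# Sketch: verify a downloaded CI script against a pinned SHA-256
# digest before executing it. The URL and digest below are
# placeholders, not Codecov's actual values.
import hashlib
import urllib.request

UPLOADER_URL = "https://example.com/uploader.sh"  # placeholder
PINNED_SHA256 = "0123abcd..."                     # placeholder digest

def fetch_and_verify(url, expected_sha256):
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"checksum mismatch: got {digest}")
    return data  # only write to disk and execute after this check passes

# script = fetch_and_verify(UPLOADER_URL, PINNED_SHA256)
```

The pinned digest has to come from somewhere other than the server you are downloading from; otherwise an attacker who altered the script can alter the checksum too.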
Add this to the long list of recent supply-chain attacks.
Unknown hackers attempted to add a backdoor to the PHP source code: two malicious commits, disguised with the subject “fix typo” and attributed to known PHP developers and maintainers. They were discovered and removed before being pushed out to any users. But since PHP runs nearly 79% of the Internet’s websites, it’s scary.
Developers have moved PHP to GitHub, which has better authentication. Hopefully it will be enough—PHP is a juicy target.
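Stronger authentication helps, but so does requiring cryptographically signed commits, so that an attacker can't push code under a maintainer's name without the maintainer's private key. Here is a minimal sketch of such a check, using git's own verification; the repository path and commit ID are hypothetical:

```python
# Sketch: reject commits that lack a valid cryptographic signature.
# Assumes git and gpg are installed and the maintainers' public keys
# are in the verifier's keyring; the repo path is a placeholder.
import subprocess

def commit_is_signed(repo_path, commit):
    # "git verify-commit" exits non-zero if the commit is unsigned or
    # its signature does not verify.
    result = subprocess.run(
        ["git", "-C", repo_path, "verify-commit", commit],
        capture_output=True,
    )
    return result.returncode == 0

# Hypothetical gate in a CI job or pre-receive hook:
# if not commit_is_signed("/path/to/php-src", "abc123"):
#     raise SystemExit("rejecting unsigned commit")
```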