Blog: August 2021 Archives

Interesting Privilege Escalation Vulnerability

If you plug a Razer peripheral (mouse or keyboard, I think) into a Windows 10 or 11 machine, you can use a vulnerability in the Razer Synapse software—which automatically downloads—to gain SYSTEM privileges.

It should be noted that this is a local privilege escalation (LPE) vulnerability, which means you need a Razer device and physical access to a computer. With that said, the bug is so easy to exploit that you just need to spend $20 on Amazon for a Razer mouse and plug it into a Windows 10 machine to become an admin.

Posted on August 26, 2021 at 6:28 AM

Surveillance of the Internet Backbone

Vice has an article about how data brokers sell access to the Internet backbone. This is netflow data. It’s useful for cybersecurity forensics, but can also be used for things like tracing VPN activity.

At a high level, netflow data creates a picture of traffic flow and volume across a network. It can show which server communicated with another, information that may ordinarily only be available to the server owner or the ISP carrying the traffic. Crucially, this data can be used for, among other things, tracking traffic through virtual private networks, which are used to mask where someone is connecting to a server from, and by extension, their approximate physical location.

In the hands of some governments, that could be dangerous.
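To make the quoted description concrete, here is a minimal sketch of what working with netflow-style records can look like. The records and field names below are hypothetical, and real NetFlow/IPFIX exports carry many more fields (ports, protocol, timestamps, interfaces), but the core of the data is just source, destination, and volume per flow:

```python
from collections import defaultdict

# Hypothetical, simplified flow records: (src_ip, dst_ip, bytes_transferred).
# No packet contents are captured; only who talked to whom, and how much.
flows = [
    ("203.0.113.10", "198.51.100.7", 120_000),  # client -> VPN server
    ("198.51.100.7", "192.0.2.55",   118_500),  # VPN server -> destination
    ("203.0.113.10", "198.51.100.7",  80_000),
    ("198.51.100.7", "192.0.2.55",    79_200),
]

# Aggregate traffic volume per (source, destination) pair.
volume = defaultdict(int)
for src, dst, nbytes in flows:
    volume[(src, dst)] += nbytes

for (src, dst), total in sorted(volume.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}: {total} bytes")
```

Correlating the volume and timing of flows going into a VPN provider with the flows coming out of it is how metadata like this can be used to trace VPN activity, even though nothing in the records reveals packet contents.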

Posted on August 25, 2021 at 10:13 AM

More on Apple’s iPhone Backdoor

In this post, I’ll collect links on Apple’s iPhone backdoor for scanning CSAM images. Previous links are here and here.

Apple says that hash collisions in its CSAM detection system were expected, and not a concern. I’m not convinced that this secondary system was originally part of the design, since it wasn’t discussed in the original specification.

Good op-ed from a group of Princeton researchers who developed a similar system:

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

EDITED TO ADD (8/30): Good essays by Matthew Green and Alex Stamos, Ross Anderson, Edward Snowden, and Susan Landau. And also Kurt Opsahl.

EDITED TO ADD (9/6): Apple is delaying implementation of the scheme.

Posted on August 20, 2021 at 8:54 AM

T-Mobile Data Breach

It’s a big one:

As first reported by Motherboard on Sunday, someone on the dark web claims to have obtained the data of 100 million people from T-Mobile’s servers and is selling a portion of it on an underground forum for 6 bitcoin, about $280,000. The trove includes not only names, phone numbers, and physical addresses but also more sensitive data like social security numbers, driver’s license information, and IMEI numbers, unique identifiers tied to each mobile device. Motherboard confirmed that samples of the data “contained accurate information on T-Mobile customers.”

Posted on August 19, 2021 at 6:17 AM

Apple’s NeuralHash Algorithm Has Been Reverse-Engineered

Apple’s NeuralHash algorithm—the one it’s using for client-side scanning on the iPhone—has been reverse-engineered.

Turns out it was already in iOS 14.3, and someone noticed:

Early tests show that it can tolerate image resizing and compression, but not cropping or rotations.

We also have the first collision: two images that hash to the same value.

The next step is to generate innocuous images that NeuralHash classifies as prohibited content.
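NeuralHash is a perceptual hash: unlike a cryptographic hash, it is designed so that visually similar images map to the same digest, which is exactly why collisions are findable. The sketch below is not NeuralHash (which is a neural network); it is a classic "average hash," shown only to illustrate why resizing and recompression survive hashing while cropping and rotation do not:

```python
from PIL import Image  # pip install pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    """Toy perceptual hash: shrink, grayscale, threshold against the mean.
    This is NOT NeuralHash, just a simple stand-in to show the general idea."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Resizing or recompressing an image barely changes its shrunken grayscale
# thumbnail, so the hash survives; cropping or rotating moves pixels around,
# so the hash changes completely.
```

Because similar-looking inputs are supposed to collide, an attacker can search for small perturbations of a harmless image until its hash matches a target digest, which is the collision and innocuous-image work described above.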

This was a bad idea from the start, and Apple never seemed to consider the adversarial context of the system as a whole, and not just the cryptography.

Posted on August 18, 2021 at 11:51 AM

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Posted on August 14, 2021 at 12:01 PM

Using AI to Scale Spear Phishing

The problem with spear phishing is that it takes time and creativity to create individualized enticing phishing emails. Researchers are using GPT-3 to attempt to solve that problem:

The researchers used OpenAI’s GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and traits. Machine learning focused on personality analysis aims to predict a person’s proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say that the results sounded “weirdly human” and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.

While they were impressed by the quality of the synthetic messages and how many clicks they garnered from colleagues versus the human-composed ones, the researchers note that the experiment was just a first step. The sample size was relatively small and the target pool was fairly homogenous in terms of employment and geographic region. Plus, both the human-generated messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone from afar.

It’s just a matter of time before this is really effective. Combine it with voice and video synthesis, and you have some pretty scary scenarios. The real risk isn’t that AI-generated phishing emails are as good as human-generated ones; it’s that they can be generated at much greater scale.

Defcon presentation and slides. Another news article.

Posted on August 13, 2021 at 6:16 AM

Cobalt Strike Vulnerability Affects Botnet Servers

Cobalt Strike is a security tool, used by penetration testers to simulate network attackers. But it’s also used by attackers—from criminals to governments—to automate their own attacks. Researchers have found a vulnerability in the product.

The main components of the security tool are the Cobalt Strike client—also known as a Beacon—and the Cobalt Strike team server, which sends commands to infected computers and receives the data they exfiltrate. An attacker starts by spinning up a machine running Team Server that has been configured to use specific “malleability” customizations, such as how often the client is to report to the server or specific data to periodically send.

Then the attacker installs the client on a targeted machine after exploiting a vulnerability, tricking the user or gaining access by other means. From then on, the client will use those customizations to maintain persistent contact with the machine running the Team Server.

The link connecting the client to the server is called the web server thread, which handles communication between the two machines. Chief among the communications are “tasks” servers send to instruct clients to run a command, get a process list, or do other things. The client then responds with a “reply.”

Researchers at security firm SentinelOne recently found a critical bug in the Team Server that makes it easy to knock the server offline. The bug works by sending a server fake replies that “squeeze every bit of available memory from the C2’s web server thread….”

It’s a pretty serious vulnerability, and there’s already a patch available. But—and this is the interesting part—that patch is available to licensed users, which attackers often aren’t. It’ll be a while before that patch filters down to the pirated copies of the software, and that time window gives defenders an opportunity. They can simulate a Cobalt Strike client and use this vulnerability to reply to servers with messages that cause them to crash.

Posted on August 11, 2021 at 6:42 AM

Apple Adds a Backdoor to iMessage and iCloud Storage

Apple’s announcement that it’s going to start scanning photos for child abuse material is a big deal. (Here are five news stories.) I have been following the details, and discussing it in several different email lists. I don’t have time right now to delve into the details, but wanted to post something.

EFF writes:

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door for all sorts of other surveillance, since now that the system is built it can be used for all sorts of other messages. And it breaks end-to-end encryption, despite Apple’s denials:

Does this break end-to-end encryption in Messages?

No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.
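Read as a protocol, the flow that FAQ describes looks roughly like the sketch below. This is a schematic reconstruction based only on the quoted text; every function and field name is a hypothetical placeholder, not an Apple API. The point is structural: the classifier runs on the device, and for accounts of children 12 and under with notifications enabled, someone other than the sender and receiver learns about the message.

```python
from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    communication_safety_enabled: bool = True
    parental_notifications_enabled: bool = True

# Hypothetical stubs standing in for the on-device classifier, the UI
# intervention, and message delivery; none of these are real Apple APIs.
def classifier_flags_explicit(image) -> bool: return False
def child_confirms_viewing(account) -> bool: return True
def show_warning(account): print("intervention shown to child")
def deliver(image, account): print("image delivered")
def notify_parent(account): print("parent notified")

def handle_incoming_image(image, account: ChildAccount) -> None:
    """Messages flow as described in Apple's FAQ, reconstructed schematically."""
    if account.communication_safety_enabled and classifier_flags_explicit(image):
        show_warning(account)                      # the child is warned first
        if child_confirms_viewing(account):
            if account.age <= 12 and account.parental_notifications_enabled:
                notify_parent(account)             # a third party enters the loop
            deliver(image, account)
    else:
        deliver(image, account)
```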

Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver. A third party is alerted if the message meets certain criteria.

This is a security disaster. Read tweets by Matthew Green and Edward Snowden. Also this. I’ll post more when I see it.

Beware the Four Horsemen of the Information Apocalypse. They’ll scare you into accepting all sorts of insecure systems.

EDITED TO ADD: This is a really good write-up of the problems.

EDITED TO ADD: Alex Stamos comments.

An open letter to Apple criticizing the project.

A leaked Apple memo responding to the criticisms. (What are the odds that Apple did not intend this to leak?)

EDITED TO ADD: John Gruber’s excellent analysis.

EDITED TO ADD (8/11): Paul Rosenzweig wrote an excellent policy discussion.

EDITED TO ADD (8/13): Really good essay by EFF’s Kurt Opsahl. Ross Anderson did an interview with Glenn Beck. And this news article talks about dissent within Apple about this feature.

The Economist has a good take. Apple responds to criticisms. (It’s worth watching the Wall Street Journal video interview as well.)

EDITED TO ADD (8/14): Apple released a threat model.

EDITED TO ADD (8/20): Follow-on blog posts here and here.

Posted on August 10, 2021 at 6:37 AM

Defeating Microsoft’s Trusted Platform Module

This is a really interesting story explaining how to defeat Microsoft’s TPM in 30 minutes—without having to solder anything to the motherboard.

Researchers at the security consultancy Dolos Group, hired to test the security of one client’s network, received a new Lenovo computer preconfigured to use the standard security stack for the organization. They received no test credentials, configuration details, or other information about the machine.

They were not only able to get into the BitLocker-encrypted computer, but also to use it to get into the corporate network.

It’s the “evil maid attack.” It requires physical access to your computer, but you leave it in your hotel room all the time when you go out to dinner.

Original blog post.

Posted on August 9, 2021 at 6:19 AM

Using “Master Faces” to Bypass Face-Recognition Authenticating Systems

Fascinating research: “Generating Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution.”

Abstract: A master face is a face image that passes face-based identity-authentication for a large portion of the population. These faces can be used to impersonate, with a high probability of success, any user, without having access to any user-information. We optimize these faces, by using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator. Multiple evolutionary strategies are compared, and we propose a novel approach that employs a neural network in order to direct the search in the direction of promising samples, without adding fitness evaluations. The results we present demonstrate that it is possible to obtain a high coverage of the population (over 40%) with less than 10 master faces, for three leading deep face recognition systems.
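The core technique is an evolutionary search over latent vectors: sample candidates near the current best, score each by how much of the enrolled population it would falsely match, and keep the winner. The sketch below is a bare-bones (1+λ) evolution strategy with stand-ins for the pieces the paper uses; the generator and the match score are stubs (StyleGAN and real face-recognition systems in the paper), and the network-guided search the authors propose is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512  # StyleGAN latent vectors are 512-dimensional

def generate_face(z: np.ndarray) -> np.ndarray:
    """Stub for the StyleGAN generator; a real run would render an image from z."""
    return z

def coverage(face: np.ndarray, enrolled: np.ndarray, threshold: float = 0.1) -> float:
    """Fraction of the enrolled population this candidate would falsely match.
    Stub fitness: cosine similarity between raw vectors; the paper scores
    candidates against actual deep face-recognition systems."""
    sims = enrolled @ face / (np.linalg.norm(enrolled, axis=1) * np.linalg.norm(face) + 1e-9)
    return float((sims > threshold).mean())  # threshold is arbitrary for this toy setup

# Toy "enrolled population" of embeddings.
population = rng.standard_normal((1000, LATENT_DIM))

# (1+lambda) evolution strategy: mutate the best latent vector, keep improvements.
best_z = rng.standard_normal(LATENT_DIM)
best_score = coverage(generate_face(best_z), population)
for _ in range(200):
    candidates = best_z + 0.1 * rng.standard_normal((16, LATENT_DIM))
    scores = [coverage(generate_face(z), population) for z in candidates]
    i = int(np.argmax(scores))
    if scores[i] > best_score:
        best_z, best_score = candidates[i], scores[i]

print(f"toy master-face coverage: {best_score:.1%}")
```

The paper's contribution is partly in making this search efficient (a neural network proposes promising mutations instead of purely random ones) and partly in the result itself: fewer than ten such faces cover over 40% of the population across three recognition systems.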

Two good articles.

Posted on August 6, 2021 at 6:44 AM

Zoom Lied about End-to-End Encryption

The facts aren’t news, but Zoom will pay $85M—to the class-action attorneys, and to users—for lying to users about end-to-end encryption, and for giving user data to Facebook and Google without consent.

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

Posted on August 5, 2021 at 6:25 AM

Paragon: Yet Another Cyberweapons Arms Manufacturer

Forbes has the story:

Paragon’s product will also likely get spyware critics and surveillance experts alike rubbernecking: It claims to give police the power to remotely break into encrypted instant messaging communications, whether that’s WhatsApp, Signal, Facebook Messenger or Gmail, the industry sources said. One other spyware industry executive said it also promises to get longer-lasting access to a device, even when it’s rebooted.

[…]

Two industry sources said they believed Paragon was trying to set itself apart further by promising to get access to the instant messaging applications on a device, rather than taking complete control of everything on a phone. One of the sources said they understood that Paragon’s spyware exploits the protocols of end-to-end encrypted apps, meaning it would hack into messages via vulnerabilities in the core ways in which the software operates.

Read that last sentence again: Paragon uses unpatched zero-day exploits in the software to hack messaging apps.

Posted on August 3, 2021 at 6:44 AM

The European Space Agency Launches Hackable Satellite

Of course this is hackable:

A sophisticated telecommunications satellite that can be completely repurposed while in space has launched.

[…]

Because the satellite can be reprogrammed in orbit, it can respond to changing demands during its lifetime.

[…]

The satellite can detect and characterise any rogue emissions, enabling it to respond dynamically to accidental interference or intentional jamming.

We can assume strong encryption, and good key management. Still, seems like a juicy target for other governments.

Posted on August 2, 2021 at 6:46 AM
