Entries Tagged "surveillance"

China’s Olympics App Is Horribly Insecure

China is mandating that athletes download and use a health and travel app when they attend the Winter Olympics next month. Citizen Lab examined the app and found it riddled with security holes.

Key Findings:

  • MY2022, an app mandated for use by all attendees of the 2022 Olympic Games in Beijing, has a simple but devastating flaw where encryption protecting users’ voice audio and file transfers can be trivially sidestepped. Health customs forms which transmit passport details, demographic information, and medical and travel history are also vulnerable. Server responses can also be spoofed, allowing an attacker to display fake instructions to users.
  • MY2022 is fairly straightforward about the types of data it collects from users in its public-facing documents. However, as the app collects a range of highly sensitive medical information, it is unclear with whom or which organization(s) it shares this information.
  • MY2022 includes features that allow users to report “politically sensitive” content. The app also includes a censorship keyword list, which, while presently inactive, targets a variety of political topics including domestic issues such as Xinjiang and Tibet as well as references to Chinese government agencies.
  • While the vendor did not respond to our security disclosure, we find that the app’s security deficits may not only violate Google’s Unwanted Software Policy and Apple’s App Store guidelines but also China’s own laws and national standards pertaining to privacy protection, providing potential avenues for future redress.
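
The first finding is the classic TLS failure: Citizen Lab’s report attributes it to the app failing to validate the certificates of the servers it talks to, which is what makes the encryption “trivially sidestepped” and the server responses spoofable. Here is a minimal Python sketch of that vulnerability class — illustrative only, with example.com standing in for the app’s real endpoints:

```python
import socket
import ssl

def fetch_vulnerable(host: str) -> None:
    # The flawed pattern: certificate validation is disabled, so anyone who
    # can intercept the connection (a malicious Wi-Fi hotspot, for instance)
    # can present their own certificate and read or spoof the traffic.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("encrypted, but to whom?", tls.version())

def fetch_correct(host: str) -> None:
    # The correct pattern: the default context verifies the certificate chain
    # and the hostname, and rejects an impostor with an SSLError.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("verified peer:", tls.getpeercert()["subject"])

fetch_correct("example.com")  # stand-in host, not a real MY2022 endpoint
```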

News article:

It’s not clear whether the security flaws were intentional or not, but the report speculated that proper encryption might interfere with some of China’s ubiquitous online surveillance tools, especially systems that allow local authorities to snoop on phones using public wireless networks or internet cafes. Still, the researchers added that the flaws were probably unintentional, because the government will already be receiving data from the app, so there wouldn’t be a need to intercept the data as it was being transferred.

[…]

The app also included a list of 2,422 political keywords, described within the code as “illegalwords.txt,” that worked as a keyword censorship list, according to Citizen Lab. The researchers said the list appeared to be a latent function that the app’s chat and file transfer function was not actively using.
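
Mechanically, there is nothing sophisticated about such a filter. A toy sketch of how a bundled keyword list like this would be used if activated — the filename comes from the report, but the matching logic here is my guess, not decompiled app code:

```python
def load_keywords(path: str = "illegalwords.txt") -> set[str]:
    # One keyword per line, as in the list Citizen Lab found bundled with the app.
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def is_censored(message: str, keywords: set[str]) -> bool:
    # Naive substring matching. Chinese text has no word boundaries to
    # complicate things, so this is roughly how such lists work in practice.
    return any(word in message for word in keywords)
```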

The US government has already advised athletes to leave their personal phones and laptops home and bring burners.

Posted on January 21, 2022 at 6:06 AM

San Francisco Police Illegally Spying on Protesters

Last summer, the San Francisco police illegally used surveillance cameras at the George Floyd protests. The EFF is suing the police:

This surveillance invaded the privacy of protesters, targeted people of color, and chills and deters participation and organizing for future protests. The SFPD also violated San Francisco’s new Surveillance Technology Ordinance. It prohibits city agencies like the SFPD from acquiring, borrowing, or using surveillance technology, without prior approval from the city’s Board of Supervisors, following an open process that includes public participation. Here, the SFPD went through no such process before spying on protesters with this network of surveillance cameras.

It feels like a pretty easy case. There’s a law, and the SF police didn’t follow it.

Tech billionaire Chris Larsen is on the side of the police. He thinks that the surveillance is a good thing, and wrote an op-ed defending it.

I wouldn’t be writing about this at all except that Chris is a board member of EPIC, and used his EPIC affiliation in the op-ed to bolster his own credentials. (Bizarrely, he linked to an EPIC page that directly contradicts his position.) In his op-ed, he mischaracterized the EFF’s actions and the facts of the lawsuit. It’s a mess.

The plaintiffs in the lawsuit wrote a good rebuttal to Larsen’s piece. And this week, EPIC published what is effectively its own rebuttal:

One of the fundamental principles that underlies EPIC’s work (and the work of many other groups) on surveillance oversight is that individuals should have the power to decide whether surveillance tools are used in their communities and to impose limits on their use. We have fought for years to shed light on the development, procurement, and deployment of such technologies and have worked to ensure that they are subject to independent oversight through hearings, legal challenges, petitions, and other public forums. The CCOPS model, which was developed by ACLU affiliates and other coalition partners in California and implemented through the San Francisco ordinance, is a powerful mechanism to enable public oversight of dangerous surveillance tools. The access, retention, and use policies put in place by the neighborhood business associations operating these networks provide necessary, but not sufficient, protections against abuse. Strict oversight is essential to promote both privacy and community safety, which includes freedom from arbitrary police action and the freedom to assemble.

So far, EPIC has not done anything about Larsen still being on its board. (Others have criticized them for keeping him on.) I don’t know if I have an opinion on this. Larsen has done good work on financial privacy regulations. But he also seems to be funding these surveillance cameras in San Francisco, which is really bad.

Posted on January 20, 2022 at 6:13 AM

Using Foreign Nationals to Bypass US Surveillance Restrictions

Remember when the US and Australian police surreptitiously owned and operated the encrypted cell phone app ANOM? They arrested 800 people in 2021 based on that operation.

New documents received by Motherboard show that over 100 of those phones were shipped to users in the US, far more than previously believed.

What’s most interesting to me about this new information is how the US used the Australians to get around domestic spying laws:

For legal reasons, the FBI did not monitor outgoing messages from Anom devices determined to be inside the U.S. Instead, the Australian Federal Police (AFP) monitored them on behalf of the FBI, according to previously published court records. In those court records unsealed shortly before the announcement of the Anom operation, FBI Special Agent Nicholas Cheviron wrote that the FBI received Anom user data three times a week, which contained the messages of all of the users of Anom with some exceptions, including “the messages of approximately 15 Anom users in the U.S. sent to any other Anom device.”

[…]

Stewart Baker, partner at Steptoe & Johnson LLP, and Bryce Klehm, associate editor of Lawfare, previously wrote that “The ‘threat to life’ standard echoes the provision of U.S. law that allows communications providers to share user data with law enforcement without legal process under 18 U.S.C. § 2702. Whether the AFP was relying on this provision of U.S. law or a more general moral imperative to take action to prevent imminent threats is not clear.” That section of law discusses the voluntary disclosure of customer communications or records.

When asked about the practice of Australian law enforcement monitoring devices inside the U.S. on behalf of the FBI, Senator Ron Wyden told Motherboard in a statement “Multiple intelligence community officials have confirmed to me, in writing, that intelligence agencies cannot ask foreign partners to conduct surveillance that the U.S. would be legally prohibited from doing itself. The FBI should follow this same standard. Allegations that the FBI outsourced warrantless surveillance of Americans to a foreign government raise troubling questions about the Justice Department’s oversight of these practices.”

I and others have long suspected that the NSA uses foreign nationals to get around restrictions that prevent it from spying on Americans. It is interesting to see the FBI using the same trick.

Posted on January 13, 2022 at 9:35 AM

Apple AirTags Are Being Used to Track People and Cars

This development surprises no one who has been paying attention:

Researchers now believe AirTags, which are equipped with Bluetooth technology, could be revealing a more widespread problem of tech-enabled tracking. They emit a digital signal that can be detected by devices running Apple’s mobile operating system. Those devices then report where an AirTag has last been seen. Unlike similar tracking products from competitors such as Tile, Apple added features to prevent abuse, including notifications like the one Ms. Estrada received and automatic beeping. (Tile plans to release a feature to prevent the tracking of people next year, a spokeswoman for that company said.)

[…]

A person who doesn’t own an iPhone might have a harder time detecting an unwanted AirTag. AirTags aren’t compatible with Android smartphones. Earlier this month, Apple released an Android app that can scan for AirTags — but you have to be vigilant enough to download it and proactively use it.

Apple declined to say if it was working with Google on technology that would allow Android phones to automatically detect its trackers.

People who said they have been tracked have called Apple’s safeguards insufficient. Ms. Estrada said she was notified four hours after her phone first noticed the rogue gadget. Others said it took days before they were made aware of an unknown AirTag. According to Apple, the timing of the alerts can vary depending on the iPhone’s operating system and location settings.
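
A technical aside: third-party detection is possible at all because AirTags broadcast ordinary Bluetooth LE advertisements. Below is a rough scanner sketch using the cross-platform bleak library. The Apple company identifier (0x004C) is public, but the 0x12 “offline finding” payload type comes from independent reverse engineering of the Find My protocol, so treat the constants and the parsing as assumptions:

```python
import asyncio
from bleak import BleakScanner  # pip install bleak

APPLE_COMPANY_ID = 0x004C
OFFLINE_FINDING_TYPE = 0x12  # reverse-engineered; not documented by Apple

def on_advertisement(device, adv):
    payload = adv.manufacturer_data.get(APPLE_COMPANY_ID)
    if payload and payload[0] == OFFLINE_FINDING_TYPE:
        # A Find My broadcast: could be an AirTag, but also an offline iPhone
        # or Mac. The payload's rotating public key prevents telling which.
        print(f"Find My broadcast from {device.address}, RSSI {adv.rssi}")

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(30)  # listen for 30 seconds
    await scanner.stop()

asyncio.run(main())
```

Detection apps build on the same idea: log these broadcasts over time and flag one that keeps showing up as you move.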

Posted on December 31, 2021 at 9:52 AM

How the FBI Gets Location Information

Vice has a detailed article about how the FBI gets data from cell phone providers like AT&T, T-Mobile, and Verizon, based on a leaked (I think) 139-page presentation from 2019.

EDITED TO ADD (11/12): My mistake. It was not a leak:

Ryan Shapiro, executive director of nonprofit organization Property of the People, shared the document with Motherboard after obtaining it through a public record act request. Property of the People focuses on obtaining and publishing government records.

Posted on October 27, 2021 at 9:01 AM

The European Parliament Voted to Ban Remote Biometric Surveillance

It’s not actually banned in the EU yet — the legislative process is much more complicated than that — but it’s a step: a total ban on biometric mass surveillance.

To respect “privacy and human dignity,” MEPs said that EU lawmakers should pass a permanent ban on the automated recognition of individuals in public spaces, saying citizens should only be monitored when suspected of a crime.

The parliament has also called for a ban on the use of private facial recognition databases — such as the controversial AI system created by U.S. startup Clearview (also already in use by some police forces in Europe) — and said predictive policing based on behavioural data should also be outlawed.

MEPs also want to ban social scoring systems which seek to rate the trustworthiness of citizens based on their behaviour or personality.

Posted on October 11, 2021 at 7:49 AM

Surveillance of the Internet Backbone

Vice has an article about how data brokers sell access to the Internet backbone. This is netflow data. It’s useful for cybersecurity forensics, but can also be used for things like tracing VPN activity.

At a high level, netflow data creates a picture of traffic flow and volume across a network. It can show which server communicated with another, information that may ordinarily only be available to the server owner or the ISP carrying the traffic. Crucially, this data can be used for, among other things, tracking traffic through virtual private networks, which are used to mask where someone is connecting to a server from, and by extension, their approximate physical location.
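
To make that concrete: a flow record is just endpoints, timing, and volume, yet that is enough to pierce a VPN if you can see both sides of it. A toy illustration, with simplified fields and a naive matching rule (real traffic-correlation attacks are statistical, but the principle is the same):

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src: str       # source IP
    dst: str       # destination IP
    start: float   # start time, epoch seconds
    nbytes: int    # total bytes transferred

# An observer with backbone visibility sees both sides of a VPN server:
user_to_vpn = Flow("203.0.113.7", "198.51.100.9", start=1000.0, nbytes=48_200)
vpn_to_site = Flow("198.51.100.9", "192.0.2.30", start=1000.2, nbytes=47_900)

def correlate(inbound: Flow, outbound: Flow) -> bool:
    # If a flow into the VPN closely matches one out of it in timing and
    # volume, the "anonymous" visit to the site links back to the user.
    return (abs(inbound.start - outbound.start) < 1.0
            and abs(inbound.nbytes - outbound.nbytes) < 0.05 * inbound.nbytes)

print(correlate(user_to_vpn, vpn_to_site))  # True
```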

In the hands of some governments, that could be dangerous.

Posted on August 25, 2021 at 10:13 AM

More on Apple’s iPhone Backdoor

In this post, I’ll collect links on Apple’s iPhone backdoor for scanning CSAM images. Previous links are here and here.

Apple says that hash collisions in its CSAM detection system were expected, and not a concern. I’m not convinced that this secondary system was originally part of the design, since it wasn’t discussed in the original specification.
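
Collisions are indeed expected, because perceptual hashes are designed to map similar images to similar outputs; that robustness is exactly what makes collisions easy to engineer. NeuralHash itself is proprietary, but a toy “average hash” (using the Pillow library) exhibits the same property:

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to a size-by-size grayscale thumbnail, then emit one bit per
    # pixel: 1 if it is brighter than the mean. Near-duplicate images land
    # on nearby hashes, which is the point of a perceptual hash.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return int("".join("1" if p > mean else "0" for p in pixels), 2)

def distance(a: int, b: int) -> int:
    # Hamming distance; a small value means "probably the same image."
    return bin(a ^ b).count("1")
```

Any two images that happen to shrink to the same 64 thresholded pixels collide, so an attacker can craft an innocuous image that matches a targeted hash.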

Good op-ed from a group of Princeton researchers who developed a similar system:

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

EDITED TO ADD (8/30): Good essays by Matthew Green and Alex Stamos, Ross Anderson, Edward Snowden, and Susan Landau. And also Kurt Opsahl.

EDITED TO ADD (9/6): Apple is delaying implementation of the scheme.

Posted on August 20, 2021 at 8:54 AM

Apple Adds a Backdoor to iMessage and iCloud Storage

Apple’s announcement that it’s going to start scanning photos for child abuse material is a big deal. (Here are five news stories.) I have been following the details, and discussing it in several different email lists. I don’t have time right now to delve into the details, but wanted to post something.

EFF writes:

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts — that is, accounts designated as owned by a minor — for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door for all sorts of other surveillance, since now that the system is built it can be used for all sorts of other messages. And it breaks end-to-end encryption, despite Apple’s denials:

Does this break end-to-end encryption in Messages?

No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.

Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver. A third party is alerted if the message meets certain criteria.
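
To see what changed, it helps to model the mechanism. Here is a deliberately oversimplified sketch of client-side scanning; Apple’s actual design uses NeuralHash perceptual hashes, private set intersection, and threshold secret sharing rather than plain hashes and a counter, but the trust relationship is the same:

```python
import hashlib

class ClientSideScanner:
    """Runs on the user's own device, inside the messaging or photo pipeline."""

    def __init__(self, banned_hashes: set[str], threshold: int = 30):
        # The match database is opaque to the user and supplied by the vendor;
        # nothing in the mechanism restricts what goes into it.
        self.banned = banned_hashes
        self.threshold = threshold  # Apple cited roughly 30 matches
        self.matches = 0

    def scan(self, content: bytes) -> None:
        if hashlib.sha256(content).hexdigest() in self.banned:
            self.matches += 1
            if self.matches >= self.threshold:
                self.report()

    def report(self) -> None:
        # This is the break with end-to-end encryption: a third party learns
        # something about content the endpoints believed was private.
        print("threshold reached; account flagged for human review")
```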

This is a security disaster. Read tweets by Matthew Green and Edward Snowden. Also this. I’ll post more when I see it.

Beware the Four Horsemen of the Information Apocalypse. They’ll scare you into accepting all sorts of insecure systems.

EDITED TO ADD: This is a really good write-up of the problems.

EDITED TO ADD: Alex Stamos comments.

An open letter to Apple criticizing the project.

A leaked Apple memo responding to the criticisms. (What are the odds that Apple did not intend this to leak?)

EDITED TO ADD: John Gruber’s excellent analysis.

EDITED TO ADD (8/11): Paul Rosenzweig wrote an excellent policy discussion.

EDITED TO ADD (8/13): Really good essay by EFF’s Kurt Opsahl. Ross Anderson did an interview with Glenn Beck. And this news article talks about dissent within Apple about this feature.

The Economist has a good take. Apple responds to criticisms. (It’s worth watching the Wall Street Journal video interview as well.)

EDITED TO ADD (8/14): Apple released a threat model.

EDITED TO ADD (8/20): Follow-on blog posts here and here.

Posted on August 10, 2021 at 6:37 AM

Paragon: Yet Another Cyberweapons Arms Manufacturer

Forbes has the story:

Paragon’s product will also likely get spyware critics and surveillance experts alike rubbernecking: It claims to give police the power to remotely break into encrypted instant messaging communications, whether that’s WhatsApp, Signal, Facebook Messenger or Gmail, the industry sources said. One other spyware industry executive said it also promises to get longer-lasting access to a device, even when it’s rebooted.

[…]

Two industry sources said they believed Paragon was trying to set itself apart further by promising to get access to the instant messaging applications on a device, rather than taking complete control of everything on a phone. One of the sources said they understood that Paragon’s spyware exploits the protocols of end-to-end encrypted apps, meaning it would hack into messages via vulnerabilities in the core ways in which the software operates.

Read that last sentence again: Paragon uses unpatched zero-day exploits in the software to hack messaging apps.

Posted on August 3, 2021 at 6:44 AM
