Entries Tagged "police"


Ring Gives Videos to Police without a Warrant or User Consent

Amazon has revealed that it gives police videos from its Ring doorbells without a warrant and without user consent.

Ring recently revealed how often this has happened. The Amazon-owned company responded to an inquiry from US Senator Ed Markey (D-Mass.), confirming that there have been 11 cases in 2022 where Ring complied with police “emergency” requests. In each case, Ring handed over private recordings, including video and audio, without letting users know that police had access to—and potentially downloaded—their data. This raises many concerns about increased police reliance on private surveillance, a practice that has long gone unregulated.

EFF writes:

Police are not the customers for Ring; the people who buy the devices are the customers. But Amazon’s long-standing relationships with police blur that line. For example, in the past Amazon has given coaching to police to tell residents to install the Ring app and purchase cameras for their homes—an arrangement that made salespeople out of the police force. The LAPD launched an investigation into how Ring provided free devices to officers when people used their discount codes to purchase cameras.

Ring, like other surveillance companies that sell directly to the general public, continues to provide free services to the police, even though they don’t have to. Ring could build a device, sold straight to residents, that ensures police come to the user’s door if they are interested in footage—but Ring instead has decided it would rather continue making money from residents while providing services to police.

CNet has a good explainer.

Slashdot thread.

Posted on August 1, 2022 at 6:09 AM

San Francisco Police Want Real-Time Access to Private Surveillance Cameras

Surely no one could have predicted this:

The new proposal—championed by Mayor London Breed after November’s wild weekend of orchestrated burglaries and theft in the San Francisco Bay Area—would authorize the police department to use non-city-owned security cameras and camera networks to live monitor “significant events with public safety concerns” and ongoing felony or misdemeanor violations.

Currently, the police can only request historical footage from private cameras related to specific times and locations, rather than blanket monitoring. Mayor Breed also complained the police can only use real-time feeds in emergencies involving “imminent danger of death or serious physical injury.”

If approved, the draft ordinance would also allow SFPD to collect historical video footage to help conduct criminal investigations and those related to officer misconduct. The draft law currently stands as the following, which indicates the cops can broadly ask for and/or get access to live real-time video streams:

The proposed Surveillance Technology Policy would authorize the Police Department to use surveillance cameras and surveillance camera networks owned, leased, managed, or operated by non-City entities to: (1) temporarily live monitor activity during exigent circumstances, significant events with public safety concerns, and investigations relating to active misdemeanor and felony violations; (2) gather and review historical video footage for the purposes of conducting a criminal investigation; and (3) gather and review historical video footage for the purposes of an internal investigation regarding officer misconduct.

Posted on July 15, 2022 at 6:17 AM

Surveillance by Driverless Car

San Francisco police are using autonomous vehicles as mobile surveillance cameras.

Privacy advocates say the revelation that police are actively using AV footage is cause for alarm.

“This is very concerning,” Electronic Frontier Foundation (EFF) senior staff attorney Adam Schwartz told Motherboard. He said cars in general are troves of personal consumer data, but autonomous vehicles will have even more of that data from capturing the details of the world around them. “So when we see any police department identify AVs as a new source of evidence, that’s very concerning.”

Posted on May 12, 2022 at 1:07 PM

AirTags Are Used for Stalking Far More than Previously Reported

Ever since Apple introduced AirTags, security people have warned that they could be used for stalking. But while there have been a bunch of anecdotal stories, this is the first vaguely scientific survey:

Motherboard requested records mentioning AirTags in a recent eight month period from dozens of the country’s largest police departments. We obtained records from eight police departments.

Of the 150 total police reports mentioning AirTags, in 50 cases women called the police because they started getting notifications that their whereabouts were being tracked by an AirTag they didn’t own. Of those, 25 could identify a man in their lives—ex-partners, husbands, bosses—who they strongly suspected planted the AirTags on their cars in order to follow and harass them. Those women reported that current and former intimate partners—the most likely people to harm women overall—are using AirTags to stalk and harass them.

Eight police departments over eight months yielded fifty cases. And that’s only where the victim (1) realized they were being tracked by someone else’s AirTag, and (2) contacted the police. That’s going to multiply out to a lot of AirTag stalking in the country, and the world.

EDITED TO ADD (4/13): AirTags are being used by Ukrainians to track goods stolen by Russians and, as a nice side effect, to track the movements of Russian troops.

Posted on April 8, 2022 at 6:06 AM

Hackers Using Fake Police Data Requests against Tech Companies

Brian Krebs has a detailed post about hackers using fake police data requests to trick companies into handing over data.

Virtually all major technology companies serving large numbers of users online have departments that routinely review and process such requests, which are typically granted as long as the proper documents are provided and the request appears to come from an email address connected to an actual police department domain name.

But in certain circumstances—such as a case involving imminent harm or death—an investigating authority may make what’s known as an Emergency Data Request (EDR), which largely bypasses any official review and does not require the requestor to supply any court-approved documents.

It is now clear that some hackers have figured out there is no quick and easy way for a company that receives one of these EDRs to know whether it is legitimate. Using their illicit access to police email systems, the hackers will send a fake EDR along with an attestation that innocent people will likely suffer greatly or die unless the requested data is provided immediately.

In this scenario, the receiving company finds itself caught between two unsavory outcomes: Failing to immediately comply with an EDR—and potentially having someone’s blood on their hands—or possibly leaking a customer record to the wrong person.

Another article claims that both Apple and Facebook (or Meta, or whatever they want to be called now) fell for this scam.

We allude to this kind of risk in our 2015 “Keys Under Doormats” paper:

Third, exceptional access would create concentrated targets that could attract bad actors. Security credentials that unlock the data would have to be retained by the platform provider, law enforcement agencies, or some other trusted third party. If law enforcement’s keys guaranteed access to everything, an attacker who gained access to these keys would enjoy the same privilege. Moreover, law enforcement’s stated need for rapid access to data would make it impractical to store keys offline or split keys among multiple keyholders, as security engineers would normally do with extremely high-value credentials.

The “credentials” are even more insecure than we could have imagined: access to an email address. And the data, of course, isn’t very secure. But imagine how this kind of thing could be abused with a law enforcement encryption backdoor.
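The paper's point about splitting keys among multiple keyholders describes a standard practice for extremely high-value credentials: no single party can use, leak, or lose the key on its own. Purely as a hypothetical illustration of that practice (not anything from the paper or any particular system), a minimal n-of-n XOR split in Python looks like this:

    import secrets

    def split_key(key: bytes, n: int) -> list[bytes]:
        """Split a key into n shares; all n are required to recover it."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        last = key
        for share in shares:
            # Fold each random share into the final share so that the XOR
            # of all shares reproduces the original key.
            last = bytes(a ^ b for a, b in zip(last, share))
        return shares + [last]

    def recover_key(shares: list[bytes]) -> bytes:
        """XOR all shares together to reconstruct the original key."""
        key = bytes(len(shares[0]))
        for share in shares:
            key = bytes(a ^ b for a, b in zip(key, share))
        return key

    # Example: a 32-byte key split among three keyholders.
    key = secrets.token_bytes(32)
    shares = split_key(key, 3)
    assert recover_key(shares) == key

Any subset of fewer than all the shares reveals nothing about the key, which is what makes this kind of split (and its threshold variants) attractive for offline, multi-party custody, and it is exactly the kind of protection the paper argues law enforcement's demand for rapid access would make impractical.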

Posted on April 5, 2022 at 6:04 AM

San Francisco Police Illegally Spying on Protesters

Last summer, the San Francisco police illegally used surveillance cameras at the George Floyd protests. The EFF is suing the police:

This surveillance invaded the privacy of protesters, targeted people of color, and chills and deters participation and organizing for future protests. The SFPD also violated San Francisco’s new Surveillance Technology Ordinance. It prohibits city agencies like the SFPD from acquiring, borrowing, or using surveillance technology, without prior approval from the city’s Board of Supervisors, following an open process that includes public participation. Here, the SFPD went through no such process before spying on protesters with this network of surveillance cameras.

It feels like a pretty easy case. There’s a law, and the SF police didn’t follow it.

Tech billionaire Chris Larsen is on the side of the police. He thinks that the surveillance is a good thing, and wrote an op-ed defending it.

I wouldn’t be writing about this at all except that Chris is a board member of EPIC, and used his EPIC affiliation in the op-ed to bolster his own credentials. (Bizarrely, he linked to an EPIC page that directly contradicts his position.) In his op-ed, he mischaracterized the EFF’s actions and the facts of the lawsuit. It’s a mess.

The plaintiffs in the lawsuit wrote a good rebuttal to Larsen’s piece. And this week, EPIC published what is effectively its own rebuttal:

One of the fundamental principles that underlies EPIC’s work (and the work of many other groups) on surveillance oversight is that individuals should have the power to decide whether surveillance tools are used in their communities and to impose limits on their use. We have fought for years to shed light on the development, procurement, and deployment of such technologies and have worked to ensure that they are subject to independent oversight through hearings, legal challenges, petitions, and other public forums. The CCOPS model, which was developed by ACLU affiliates and other coalition partners in California and implemented through the San Francisco ordinance, is a powerful mechanism to enable public oversight of dangerous surveillance tools. The access, retention, and use policies put in place by the neighborhood business associations operating these networks provide necessary, but not sufficient, protections against abuse. Strict oversight is essential to promote both privacy and community safety, which includes freedom from arbitrary police action and the freedom to assemble.

So far, EPIC has not done anything about Larsen still being on its board. (Others have criticized them for keeping him on.) I don’t know if I have an opinion on this. Larsen has done good work on financial privacy regulations, which is a good thing. But he seems to be funding all these surveillance cameras in San Francisco, which is really bad.

Posted on January 20, 2022 at 6:13 AM

Ransomware Is Getting Ugly

Modern ransomware has two dimensions: pay to get your data back, and pay not to have your data dumped on the Internet. The DC police are the victims of this ransomware, and the criminals have just posted personnel records—”including the results of psychological assessments and polygraph tests; driver’s license images; fingerprints; social security numbers; dates of birth; and residential, financial, and marriage histories”—for two dozen police officers.

The negotiations don’t seem to be doing well. The criminals want $4M. The DC police offered them $100,000.

The Colonial Pipeline is another current high-profile ransomware victim. (Brian Krebs has some good information on DarkSide, the criminal group behind that attack.) So is Vastaamo, a Finnish mental health clinic. Criminals contacted the individual patients and demanded payment, and then dumped their personal psychological information online.

An industry group called the Institute for Security and Technology (no, I haven’t heard of it before, either) just released a comprehensive report on combating ransomware. It has a “comprehensive plan of action,” which isn’t much different from anything most of us can propose. Solving this is not easy. Ransomware is big business, made possible by insecure networks that allow criminals to gain access to networks in the first place, and cryptocurrencies that allow for payments that governments cannot interdict. Ransomware has become the most profitable cybercrime business model, and until we solve those two problems, that’s not going to change.

Posted on May 14, 2021 at 6:30 AM

Security Vulnerabilities in Cellebrite

Moxie Marlinspike has an intriguing blog post about Cellebrite, a tool used by police and others to break into smartphones. Moxie got his hands on one of the devices, which seems to be a pair of Windows software packages and a whole lot of connecting cables.

According to Moxie, the software is riddled with vulnerabilities. (The one example he gives is that it uses FFmpeg DLLs from 2012, which have not been patched with the 100+ security updates since then.)

…we found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

This means that Cellebrite has one—or many—remote code execution bugs, and that a specially designed file on the target phone can infect Cellebrite.

For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.

That malicious file could, for example, insert fabricated evidence or subtly alter the evidence it copies from a phone. It could even write that fabricated/altered evidence back to the phone so that from then on, even an uncorrupted version of Cellebrite will find the altered evidence on that phone.

Finally, Moxie suggests that future versions of Signal will include such a file, sometimes:

Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding.

The idea, of course, is that a defendant facing Cellebrite evidence in court can claim that the evidence is tainted.
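Moxie doesn't describe the selection mechanism beyond the quoted sentence, but as a rough and entirely hypothetical sketch (this is not Signal's actual code, and every value below is a made-up placeholder), a probabilistic, phone-number-sharded decision could look something like this:

    import hashlib

    def should_include_file(phone_number: str, install_age_days: int,
                            min_age_days: int = 90, active_shards: int = 2,
                            total_shards: int = 100) -> bool:
        """Decide whether this install carries the special file.

        Only long-standing installs are eligible. Of those, a small fraction
        is chosen deterministically by hashing the phone number into one of
        total_shards buckets and enabling only a few of them.
        """
        if install_age_days < min_age_days:
            return False
        digest = hashlib.sha256(phone_number.encode()).digest()
        shard = int.from_bytes(digest[:4], "big") % total_shards
        return shard < active_shards  # 2 of 100 shards: roughly 2% of eligible installs

Hashing the phone number rather than flipping a coin keeps the decision stable for a given account, and the set of active shards can be changed slowly over time, which matches the "low percentages based on phone number sharding" description.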

I have no idea how effective this would be in court. Or whether this runs afoul of the Computer Fraud and Abuse Act in the US. (Is it okay to booby-trap your phone?) A colleague from the UK says that this would not be legal to do under the Computer Misuse Act, although it’s hard to blame the phone owner if he doesn’t even know it’s happening.

Posted on April 27, 2021 at 6:57 AM

Deliberately Playing Copyrighted Music to Avoid Being Live-Streamed

Vice is reporting on a new police hack: playing copyrighted music when being filmed by citizens, trying to provoke social media sites into taking the videos down and maybe even banning the filmers:

In a separate part of the video, which Devermont says was filmed later that same afternoon, Devermont approaches [BHPD Sgt. Billy] Fair outside. The interaction plays out almost exactly like it did in the department—when Devermont starts asking questions, Fair turns on the music.

Devermont backs away, and asks him to stop playing music. Fair says “I can’t hear you”—again, despite holding a phone that is blasting tunes.

Later, Fair starts berating Devermont’s livestreaming account, saying “I read the comments [on your account], they talk about how fake you are.” He then holds out his phone, which is still on full blast, and walks toward Devermont, saying “Listen to the music”.

In a statement emailed to VICE News, Beverly Hills PD said that “the playing of music while accepting a complaint or answering questions is not a procedure that has been recommended by Beverly Hills Police command staff,” and that the videos of Fair were “currently under review.”

However, this is not the first time that a Beverly Hills police officer has done this, nor is Fair the only one.

In an archived clip from a livestream shared privately to VICE Media that Devermont has not publicly reposted but he says was taken weeks ago, another officer can be seen quickly swiping through his phone as Devermont approaches. By the time Devermont is close enough to speak to him, the officer’s phone is already blasting “In My Life” by the Beatles—a group whose rightsholders have notoriously sued Apple numerous times. If you want to get someone in trouble for copyright infringement, the Beatles are quite possibly your best bet.

As Devermont asks about the music, the officer points the phone at him, asking, “Do you like it?”

Clever, really, and an illustration of the problem with context-free copyright enforcement.

Posted on February 15, 2021 at 1:11 PM

On US Capitol Security—By Someone Who Manages Arena-Rock-Concert Security

Smart commentary:

…I was floored on Wednesday when, glued to my television, I saw police in some areas of the U.S. Capitol using little more than those same mobile gates I had—the ones that look like bike racks that can hook together—to try to keep the crowds away from sensitive areas and, later, push back people intent on accessing the grounds. (A new fence that appears to be made of sturdier material was being erected on Thursday.) That’s the same equipment and approximately the same amount of force I was able to use when a group of fans got a little feisty and tried to get backstage at a Vanilla Ice show.

[…]

There’s not ever going to be enough police or security at any event to stop people if they all act in unison; if enough people want to get to Vanilla Ice at the same time, they’re going to get to Vanilla Ice. Social constructs and basic decency, not lightweight security gates, are what hold everyone except the outliers back in a typical crowd.

[…]

When there are enough outliers in a crowd, it throws the normal dynamics of crowd control off; everyone in my business knows this. Citizens tend to hold each other to certain standards—which is why my 40,000-person town does not have 40,000 police officers, and why the 8.3 million people of New York City aren’t policed by 8.3 million police officers.

Social norms are the fabric that make an event run smoothly—and, really, hold society together. There aren’t enough police in your town to handle it if everyone starts acting up at the same time.

I like that she uses the term “outliers,” and I make much the same points in Liars and Outliers.

Posted on January 13, 2021 at 6:06 AM

