Entries Tagged "cameras"

Ring Gives Videos to Police without a Warrant or User Consent

Amazon has revealed that it gives police videos from its Ring doorbells without a warrant and without user consent.

Ring recently revealed how often it has done so this year. The Amazon company responded to an inquiry from US Senator Ed Markey (D-Mass.), confirming that there have been 11 cases in 2022 where Ring complied with police “emergency” requests. In each case, Ring handed over private recordings, including video and audio, without letting users know that police had access to—and potentially downloaded—their data. This raises many concerns about increased police reliance on private surveillance, a practice that has long gone unregulated.

EFF writes:

Police are not the customers for Ring; the people who buy the devices are the customers. But Amazon’s long-standing relationships with police blur that line. For example, in the past Amazon has coached police on how to tell residents to install the Ring app and purchase cameras for their homes—an arrangement that made salespeople out of the police force. The LAPD launched an investigation into how Ring provided free devices to officers when people used their discount codes to purchase cameras.

Ring, like other surveillance companies that sell directly to the general public, continues to provide free services to the police, even though it doesn’t have to. Ring could build a device, sold straight to residents, that ensures police come to the user’s door if they are interested in footage—but Ring has instead decided it would rather continue making money from residents while providing services to police.

CNet has a good explainer.

Slashdot thread.

Posted on August 1, 2022 at 6:09 AM

San Francisco Police Want Real-Time Access to Private Surveillance Cameras

Surely no one could have predicted this:

The new proposal—championed by Mayor London Breed after November’s wild weekend of orchestrated burglaries and theft in the San Francisco Bay Area—would authorize the police department to use non-city-owned security cameras and camera networks to live monitor “significant events with public safety concerns” and ongoing felony or misdemeanor violations.

Currently, the police can only request historical footage from private cameras related to specific times and locations, rather than blanket monitoring. Mayor Breed also complained that the police can only use real-time feeds in emergencies involving “imminent danger of death or serious physical injury.”

If approved, the draft ordinance would also allow SFPD to collect historical video footage to help conduct criminal investigations and investigations related to officer misconduct. The draft law currently reads as follows, indicating that police could broadly request and receive access to live, real-time video streams:

The proposed Surveillance Technology Policy would authorize the Police Department to use surveillance cameras and surveillance camera networks owned, leased, managed, or operated by non-City entities to: (1) temporarily live monitor activity during exigent circumstances, significant events with public safety concerns, and investigations relating to active misdemeanor and felony violations; (2) gather and review historical video footage for the purposes of conducting a criminal investigation; and (3) gather and review historical video footage for the purposes of an internal investigation regarding officer misconduct.

Posted on July 15, 2022 at 6:17 AM

Wyze Camera Vulnerability

Wyze ignored a vulnerability in its home security cameras for three years. Bitdefender, which discovered the vulnerability, let the company get away with it.

In case you’re wondering, no, that is not normal in the security community. While experts tell me that the concept of a “responsible disclosure timeline” is a little outdated and heavily depends on the situation, we’re generally measuring in days, not years. “The majority of researchers have policies where if they make a good faith effort to reach a vendor and don’t get a response, that they publicly disclose in 30 days,” Alex Stamos, director of the Stanford Internet Observatory and former chief security officer at Facebook, tells me.

Posted on April 4, 2022 at 6:13 AM

Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program before a new statewide law declaring it illegal came into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.
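
To make that matching step concrete, here is a minimal sketch in Python. The embed() function is a hypothetical stand-in for a real face-embedding network (stubbed out here with random vectors), and identification reduces to a nearest-neighbor search over a database of enrolled vectors; the 0.6 threshold is arbitrary and purely illustrative.

```python
import numpy as np

def embed(face_image):
    """Hypothetical stand-in for a face-embedding network, which maps
    a face image to a fixed-length feature vector. Stubbed here with
    random unit vectors purely for illustration."""
    vec = np.random.rand(128)
    return vec / np.linalg.norm(vec)

# "Enrollment": a database of embeddings computed from known photos.
database = {name: embed(None) for name in ("alice", "bob", "carol")}

def identify(face_image, threshold=0.6):
    """Nearest-neighbor search by cosine similarity; the threshold is
    arbitrary here and would be tuned in a real system."""
    query = embed(face_image)
    best = max(database, key=lambda name: float(query @ database[name]))
    return best if float(query @ database[best]) >= threshold else None

print(identify(None))  # returns the closest enrolled identity, or None
```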

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.
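
As an illustration of how passive this kind of identification can be, here is a sketch using the scapy packet library: phones scanning for Wi-Fi networks send probe-request frames that include their MAC address, and anyone with a radio in monitor mode can log them. The interface name "mon0" is a placeholder, and modern phones randomize their MAC addresses precisely to blunt this technique.

```python
# Sketch: log Wi-Fi probe requests, which carry a device's MAC address.
# Requires root and a wireless interface in monitor mode; "mon0" is a
# placeholder interface name. Uses the scapy packet library.
from scapy.all import sniff, Dot11

seen = set()

def handle(pkt):
    # Management frame (type 0) with subtype 4 is a probe request.
    if pkt.haslayer(Dot11) and pkt.type == 0 and pkt.subtype == 4:
        mac = pkt.addr2  # transmitter (the phone's) MAC address
        if mac and mac not in seen:
            seen.add(mac)
            print("device seen:", mac)

sniff(iface="mon0", prn=handle, store=False)
```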

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are—using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.
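
A toy example of this correlation step, using the pandas library and entirely synthetic data: two datasets collected by different parties become a combined movement-and-purchase profile the moment they share an identifier.

```python
import pandas as pd

# Synthetic data: a location broker's pings and a retailer's purchase
# records, collected separately but keyed by the same device ID.
pings = pd.DataFrame({
    "device_id": ["d1", "d1", "d2"],
    "time":      ["09:02", "12:31", "10:15"],
    "place":     ["clinic", "mall", "office"],
})
purchases = pd.DataFrame({
    "device_id": ["d1", "d2"],
    "item":      ["prenatal vitamins", "coffee"],
})

# One join, and two separate data streams become a single profile.
profile = pings.merge(purchases, on="device_id")
print(profile)
```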

There is a huge—and almost entirely unregulated—data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are; it’s that they correlate what they know about us to create profiles of who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies—and governments—to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
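
How fragile that anonymity is can be shown with a toy model: a tracker keyed only by a cookie ID accumulates a behavioral history, and a single credit-card checkout retroactively attaches a real name to everything already collected. All names and events here are invented.

```python
# Toy model of pseudonymous tracking: the tracker never needs a name
# up front, and one identified event de-anonymizes the whole history.
profiles = {}  # cookie_id -> {"name": ..., "events": [...]}

def track(cookie_id, event):
    profile = profiles.setdefault(cookie_id, {"name": None, "events": []})
    profile["events"].append(event)

def checkout(cookie_id, cardholder_name):
    # The purchase links the pseudonym to a real identity, and with it
    # every event recorded before (and after) this moment.
    track(cookie_id, "purchase")
    profiles[cookie_id]["name"] = cardholder_name

track("c-81f3", "viewed: knee braces")
track("c-81f3", "searched: divorce lawyers near me")
checkout("c-81f3", "Jane Doe")
print(profiles["c-81f3"])  # full history, now attached to a real name
```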

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law—passed in Vermont in 2018—that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations—and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s use of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. It’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which leads to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on January 27, 2020 at 12:21 PM

Police Surveillance Tools from Special Services Group

Special Services Group, a company that sells surveillance tools to the FBI, DEA, ICE, and other US government agencies, has had its secret sales brochure published. Motherboard received the brochure as part of a FOIA request to the Irvine Police Department in California.

“The Tombstone Cam is our newest video concealment offering the ability to conduct remote surveillance operations from cemeteries,” one section of the Black Book reads. The device can also capture audio, its battery can last for two days, and “the Tombstone Cam is fully portable and can be easily moved from location to location as necessary,” the brochure adds. Another product is a video and audio capturing device that looks like an alarm clock, suitable for “hotel room stings,” and other cameras are designed to appear like small tree trunks and rocks, the brochure reads.

The “Shop-Vac Covert DVR Recording System” is essentially a camera and 1 TB hard drive hidden inside a vacuum cleaner. “An AC power connector is available for long-term deployments, and DC power options can be connected for mobile deployments also,” the brochure reads. The description doesn’t say whether the vacuum cleaner itself works.

[…]

One of the company’s “Rapid Vehicle Deployment Kits” includes a camera hidden inside a baby car seat. “The system is fully portable, so you are not restricted to the same drop car for each mission,” the description adds.

[…]

The so-called “K-MIC In-mouth Microphone & Speaker Set” is a tiny Bluetooth device that sits on a user’s teeth and allows them to “communicate hands-free in crowded, noisy surroundings” with “near-zero visual indications,” the Black Book adds.

Other products include more traditional surveillance cameras and lenses as well as tools for surreptitiously gaining entry to buildings. The “Phantom RFID Exploitation Toolkit” lets a user clone an access card or fob, and the so-called “Shadow” product can “covertly provide the user with PIN code to an alarm panel,” the brochure reads.

The Motherboard article also reprints the scary emails Motherboard received from Special Services Group when it asked for comment. Of course, Motherboard published the information anyway.

Posted on January 10, 2020 at 8:41 AM

Zoom Vulnerability

The Zoom conferencing app has a vulnerability that allows someone to remotely take over the computer’s camera.

It’s a bad vulnerability, made worse by the fact that it remains even if you uninstall the Zoom app:

This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.

On top of this, this vulnerability would have allowed any webpage to DOS (Denial of Service) a Mac by repeatedly joining a user to an invalid call.

Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day.
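
For the curious, here is a quick way to check whether that leftover web server is present. The public disclosure reported it listening on localhost port 19421, so a simple TCP connection attempt to that port is suggestive; the port number comes from the write-up, and an open port alone isn’t proof.

```python
import socket

# The public write-up reported Zoom's hidden web server listening on
# localhost port 19421. A successful TCP connect there is suggestive,
# though not proof, that the leftover server is still installed.
PORT = 19421

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(1.0)
    open_port = s.connect_ex(("127.0.0.1", PORT)) == 0

if open_port:
    print(f"something is listening on localhost:{PORT} (possible Zoom web server)")
else:
    print(f"nothing listening on localhost:{PORT}")
```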

Zoom didn’t take the vulnerability seriously:

This vulnerability was originally responsibly disclosed on March 26, 2019. This initial report included a proposed description of a ‘quick fix’ Zoom could have implemented by simply changing their server logic. It took Zoom 10 days to confirm the vulnerability. The first actual meeting about how the vulnerability would be patched occurred on June 11th, 2019, only 18 days before the end of the 90-day public disclosure deadline. During this meeting, the details of the vulnerability were confirmed and Zoom’s planned solution was discussed. However, I was very easily able to spot and describe bypasses in their planned fix. At this point, Zoom was left with 18 days to resolve the vulnerability. On June 24th after 90 days of waiting, the last day before the public disclosure deadline, I discovered that Zoom had only implemented the ‘quick fix’ solution originally suggested.

This is why we disclose vulnerabilities. Now, finally, Zoom is taking this seriously and fixing it for real.

EDITED TO ADD (8/8): Apple silently released a macOS update that removes the Zoom webserver if the app is not present.

Posted on July 16, 2019 at 12:54 PM
