Entries Tagged "cameras"


Automatic Face Recognition and Surveillance

ID checks were a common response to the terrorist attacks of 9/11, but they’ll soon be obsolete. You won’t have to show your ID, because you’ll be identified automatically. A security camera will capture your face, and it’ll be matched with your name and a whole lot of other information besides. Welcome to the world of automatic facial recognition. Those who have access to databases of identified photos will have the power to identify us. Yes, it’ll enable some amazing personalized services; but it’ll also enable whole new levels of surveillance. The underlying technologies are being developed today, and there are currently no rules limiting their use.

Walk into a store, and the salesclerks will know your name. The store’s cameras and computers will have figured out your identity, and looked you up in both their store database and a commercial marketing database they’ve subscribed to. They’ll know your name, salary, interests, what sort of sales pitches you’re most vulnerable to, and how profitable a customer you are. Maybe they’ll have read a profile based on your tweets and know what sort of mood you’re in. Maybe they’ll know your political affiliation or sexual identity, both predictable by your social media activity. And they’re going to engage with you accordingly, perhaps by making sure you’re well taken care of or possibly by trying to make you so uncomfortable that you’ll leave.

Walk by a police officer, and she will know your name, address, criminal record, and with whom you are routinely seen. The potential for discrimination is enormous, especially in low-income communities where people are routinely harassed for things like unpaid parking tickets and other minor violations. And in a country where people are arrested for their political views, the use of this technology quickly turns into a nightmare scenario.

The critical technology here is computer face recognition. Traditionally it has been pretty poor, but it’s slowly improving. A computer is now as good as a person. Already Google’s algorithms can accurately match child and adult photos of the same person, and Facebook has an algorithm that works by recognizing hair style, body shape, and body language—and works even when it can’t see faces. And while we humans are pretty much as good at this as we’re ever going to get, computers will continue to improve. Over the next few years, they’ll continue to get more accurate, making better matches using even worse photos.

Matching photos with names also requires a database of identified photos, and we have plenty of those too. Driver’s license databases are a gold mine: all shot face forward, in good focus and even light, with accurate identity information attached to each photo. The enormous photo collections of social media and photo archiving sites are another. They contain photos of us from all sorts of angles and in all sorts of lighting conditions, and we helpfully do the identifying step for the companies by tagging ourselves and our friends. Maybe this data will appear on handheld screens. Maybe it’ll be automatically displayed on computer-enhanced glasses. Imagine salesclerks—or politicians—being able to scan a room and instantly see wealthy customers highlighted in green, or policemen seeing people with criminal records highlighted in red.
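
As a rough sketch of the lookup behind a display like that: capture a face, turn it into a numeric embedding, and find the nearest match in a table built from tagged photos. Everything below is an illustrative assumption; embed_face() stands in for whatever recognition model is actually used, and a real system would index millions of entries rather than loop over a dictionary.

```python
# Illustrative sketch only: identify a captured face by nearest-neighbor
# search over embeddings computed from a database of tagged photos.
# embed_face() is a hypothetical stand-in for a real face-recognition model.
import numpy as np


def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Map a face image to a fixed-length feature vector (model not included)."""
    raise NotImplementedError("plug in an actual face-embedding model here")


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(captured: np.ndarray, tagged_db: dict, threshold: float = 0.6):
    """Return the name of the closest tagged face, or None if nothing is close enough."""
    query = embed_face(captured)
    best_name, best_score = None, -1.0
    for name, reference in tagged_db.items():  # {name: embedding vector}
        score = cosine_similarity(query, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

The hard parts are the recognition model and, above all, the tagged database, which is exactly why the companies that hold those databases are in the driver’s seat.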

Science fiction writers have been exploring this future in both books and movies for decades. Ads followed people from billboard to billboard in the movie Minority Report. In John Scalzi’s recent novel Lock In, characters scan each other like the salesclerks I described above.

This is no longer fiction. High-tech billboards can target ads based on the gender of who’s standing in front of them. In 2011, researchers at Carnegie Mellon pointed a camera at a public area on campus and were able to match live video footage with a public database of tagged photos in real time. Already government and commercial authorities have set up facial recognition systems to identify and monitor people at sporting events, music festivals, and even churches. The Dubai police are working on integrating facial recognition into Google Glass, and more US local police forces are using the technology.

Facebook, Google, Twitter, and other companies with large databases of tagged photos know how valuable their archives are. They see all kinds of services powered by their technologies—services they can sell to businesses like the stores you walk into and the governments you might interact with.

Other companies will spring up whose business models depend on capturing our images in public and selling them to whoever has use for them. If you think this is farfetched, consider a related technology that’s already far down that path: license-plate capture.

Today in the US there’s a massive but invisible industry that records the movements of cars around the country. Cameras mounted on cars and tow trucks capture license plates along with date/time/location information, and companies use that data to find cars that are scheduled for repossession. One company, Vigilant Solutions, claims to collect 70 million scans in the US every month. The companies that engage in this business routinely share that data with the police, giving the police a steady stream of surveillance information on innocent people that they could not legally collect on their own. And the companies are already looking for other profit streams, selling that surveillance data to anyone else who thinks they have a need for it.
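
The privacy problem isn’t any single scan; it’s the aggregation. Here is a minimal sketch of how scattered scans become a travel history. The record fields and layout are assumptions for illustration, not any vendor’s actual schema.

```python
# Illustrative sketch: each plate scan is a small record, but grouped by
# plate and sorted by time the records reconstruct where a car has been.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class PlateScan:
    plate: str           # plate text as read by the camera's OCR
    timestamp: datetime  # when the scan was captured
    latitude: float      # where the scanning vehicle or fixed camera was
    longitude: float
    camera_id: str       # which camera produced the scan


def movement_history(scans, plate):
    """All sightings of one plate in time order: effectively a travel log."""
    return sorted((s for s in scans if s.plate == plate),
                  key=lambda s: s.timestamp)
```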

This could easily happen with face recognition. Finding bail jumpers could even be the initial driving force, just as finding cars to repossess was for license plate capture.

Already the FBI has a database of 52 million faces, and describes its integration of facial recognition software with that database as “fully operational.” In 2014, FBI Director James Comey told Congress that the database would not include photos of ordinary citizens, although the FBI’s own documents indicate otherwise. And just last month, we learned that the FBI is looking to buy a system that will collect facial images of anyone an officer stops on the street.

In 2013, Facebook had a quarter of a trillion user photos in its database. There’s currently a class-action lawsuit in Illinois alleging that the company has over a billion “face templates” of people, collected without their knowledge or consent.

Last year, the US Department of Commerce tried to prevail upon industry representatives and privacy organizations to write a voluntary code of conduct for companies using facial recognition technologies. After 16 months of negotiations, all of the consumer-focused privacy organizations pulled out of the process because industry representatives were unable to agree on any limitations on something as basic as nonconsensual facial recognition.

When we talk about surveillance, we tend to concentrate on the problems of data collection: CCTV cameras, tagged photos, purchasing habits, our writings on sites like Facebook and Twitter. We think much less about data analysis. But effective and pervasive surveillance is just as much about analysis. It’s sustained by a combination of cheap and ubiquitous cameras, tagged photo databases, commercial databases of our actions that reveal our habits and personalities, and—most of all—fast and accurate face recognition software.

Don’t expect to have access to this technology for yourself anytime soon. This is not facial recognition for all. It’s just for those who can either demand or pay for access to the required technologies—most importantly, the tagged photo databases. And while we can easily imagine how this might be misused in a totalitarian country, there are dangers in free societies as well. Without meaningful regulation, we’re moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.

Despite protests from industry, we need to regulate this budding industry. We need limitations on how our images can be collected without our knowledge or consent, and on how they can be used. The technologies aren’t going away, and we can’t uninvent these capabilities. But we can ensure that they’re used ethically and responsibly, and not just as a mechanism to increase police and corporate power over us.

This essay previously appeared on Forbes.com.

EDITED TO ADD: Two articles that say much the same thing.

Posted on October 5, 2015 at 6:11 AM

Living in a Code Yellow World

In the 1980s, handgun expert Jeff Cooper invented something called the Color Code to describe what he called the “combat mind-set.” Here is his summary:

In White you are unprepared and unready to take lethal action. If you are attacked in White you will probably die unless your adversary is totally inept.

In Yellow you bring yourself to the understanding that your life may be in danger and that you may have to do something about it.

In Orange you have determined upon a specific adversary and are prepared to take action which may result in his death, but you are not in a lethal mode.

In Red you are in a lethal mode and will shoot if circumstances warrant.

Cooper talked about remaining in Code Yellow over time, but he didn’t write about its psychological toll. It’s significant. Our brains can’t be on that alert level constantly. We need downtime. We need to relax. This is why we have friends around whom we can let our guard down and homes where we can close our doors to outsiders. We only want to visit Yellowland occasionally.

Since 9/11, the US has increasingly become Yellowland, a place where we assume danger is imminent. It’s damaging to us individually and as a society.

I don’t mean to minimize actual danger. Some people really do live in a Code Yellow world, due to the failures of government in their home countries. Even there, we know how hard it is for them to maintain a constant level of alertness in the face of constant danger. Psychologist Abraham Maslow wrote about this, making safety a basic level in his hierarchy of needs. A lack of safety makes people anxious and tense, and the long-term effects are debilitating.

The same effects occur when we believe we’re living in an unsafe situation even if we’re not. The psychological term for this is hypervigilance. Hypervigilance in the face of imagined danger causes stress and anxiety. This, in turn, alters how your hippocampus functions, and causes an excess of cortisol in your body. Now cortisol is great in small and infrequent doses, and helps you run away from tigers. But it destroys your brain and body if you marinate in it for extended periods of time.

Not only does trying to live in Yellowland harm you physically, it changes how you interact with your environment and it impairs your judgment. You forget what’s normal and start seeing the enemy everywhere. Terrorism actually relies on this kind of reaction to succeed.

Here’s an example from The Washington Post last year: “I was taking pictures of my daughters. A stranger thought I was exploiting them.” A father wrote about his run-in with an off-duty DHS agent, who interpreted an innocent family photoshoot as something nefarious and proceeded to harass and lecture the family. That the parents were white and the daughters Asian added a racist element to the encounter.

At the time, people wrote about this as an example of worst-case thinking, saying that as a DHS agent, “he’s paid to suspect the worst at all times and butt in.” While, yes, it was a “disturbing reminder of how the mantra of ‘see something, say something’ has muddied the waters of what constitutes suspicious activity,” I think there’s a deeper story here. The agent is trying to live his life in Yellowland, and it caused him to see predators where there weren’t any.

I call these “movie-plot threats,” scenarios that would make great action movies but that are implausible in real life. Yellowland is filled with them.

Last December, former DHS Secretary Tom Ridge wrote about the security risks of building an NFL stadium near the Los Angeles Airport. His report is full of movie-plot threats, including terrorists shooting down a plane and crashing it into a stadium. His conclusion, that it is simply too dangerous to build a sports stadium within a few miles of the airport, is absurd. He’s been living too long in Yellowland.

That our brains aren’t built to live in Yellowland makes sense, because actual attacks are rare. The person walking towards you on the street isn’t an attacker. The person doing something unexpected over there isn’t a terrorist. Crashing an airplane into a sports stadium is more suitable to a Die Hard movie than real life. And the white man taking pictures of two Asian teenagers on a ferry isn’t a sex slaver. (I mean, really?)

Most of us, that DHS agent included, are complete amateurs at knowing the difference between something benign and something that’s actually dangerous. Combine this with the rarity of attacks, and you end up with an overwhelming number of false alarms. This is the ultimate problem with programs like “see something, say something.” They waste an enormous amount of time and money.

Those of us fortunate enough to live in a Code White society are much better served acting like we do. This is something we need to learn at all levels, from our personal interactions to our national policy. Since the terrorist attacks of 9/11, many of our counterterrorism policies have helped convince people they’re not safe, and that they need to be in a constant state of readiness. We need our leaders to lead us out of Yellowland, not to perpetuate it.

This essay previously appeared on Fusion.net.

EDITED TO ADD (9/25): UK student reading book on terrorism is accused of being a terrorist. He was reading the book for a class he was taking. I’ll let you guess his ethnicity.

Posted on September 24, 2015 at 11:39 AM

Shooting Down Drones

A Kentucky man shot down a drone that was hovering in his backyard:

“It was just right there,” he told Ars. “It was hovering, I would never have shot it if it was flying. When he came down with a video camera right over my back deck, that’s not going to work. I know they’re neat little vehicles, but one of those uses shouldn’t be flying into people’s yards and videotaping.”

Minutes later, a car full of four men that he didn’t recognize rolled up, “looking for a fight.”

“Are you the son of a bitch that shot my drone?” one said, according to Merideth.

His terse reply to the men, while wearing a 10mm Glock holstered on his hip: “If you cross that sidewalk onto my property, there’s going to be another shooting.”

He was arrested, but what’s the law?

In the view of drone lawyer Brendan Schulman and robotics law professor Ryan Calo, homeowners can’t just start shooting when they see a drone over their house. The reason is that the law frowns on self-help when a person can just call the police instead. This means that Merideth may not have been defending his house, but instead engaging in criminal acts and property damage for which he could have to pay.

But a different and bolder argument, put forward by law professor Michael Froomkin, could provide Merideth some cover. In a paper, Froomkin argues that it’s reasonable to assume robotic intrusions are not harmless, and that people may have a right to “employ violent self-help.”

Froomkin’s paper is well worth reading:

Abstract: Robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.

The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases—drone overflights, autonomous cars—national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots, notably the risk that the robot (or drone) may be spying—perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.

We argue that the scope of permissible self-help in defending one’s privacy should be quite broad. There is exigency in that resort to legally administered remedies would be impracticable; and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great—or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.

The paper concludes with a brief examination of what if anything our survey of a person’s right to defend against robots might tell us about the current state of robot rights against people.

Note that there are drones that shoot back.

Here are two books that talk about these topics. And an article from 2012.

EDITED TO ADD (8/9): How to shoot down a drone.

Posted on August 4, 2015 at 8:24 AM

Human and Technology Failures in Nuclear Facilities

This is interesting:

We can learn a lot about the potential for safety failures at US nuclear plants from the July 29, 2012, incident in which three religious activists broke into the supposedly impregnable Y-12 facility at Oak Ridge, Tennessee, the Fort Knox of uranium. Once there, they spilled blood and spray painted “work for peace not war” on the walls of a building housing enough uranium to build thousands of nuclear weapons. They began hammering on the building with a sledgehammer, and waited half an hour to be arrested. If an 82-year-old nun with a heart condition and two confederates old enough to be AARP members could do this, imagine what a team of determined terrorists could do.

[…]

Where some other countries often rely more on guards with guns, the United States likes to protect its nuclear facilities with a high-tech web of cameras and sensors. Under the Nunn-Lugar program, Washington has insisted that Russia adopt a similar approach to security at its own nuclear sites—claiming that an American cultural preference is objectively superior. The Y-12 incident shows the problem with the American approach of automating security. At the Y-12 facility, in addition to the three fences the protestors had to cut through with wire-cutters, there were cameras and motion detectors. But we too easily forget that technology has to be maintained and watched to be effective. According to Munger, 20 percent of the Y-12 cameras were not working on the night the activists broke in. Cameras and motion detectors that had been broken for months had gone unrepaired. A security guard was chatting rather than watching the feed from a camera that did work. And guards ignored the motion detectors, which were so often set off by local wildlife that they assumed all alarms were false positives….

Instead of having government forces guard the site, the Department of Energy had hired two contractors: Wackenhut and Babcock and Wilcox. Wackenhut is now owned by the British company G4S, which also botched security for the 2012 London Olympics, forcing the British government to send 3,500 troops to provide security that the company had promised but proved unable to deliver. Private companies are, of course, driven primarily by the need to make a profit, but there are surely some operations for which profit should not be the primary consideration.

Babcock and Wilcox was supposed to maintain the security equipment at the Y-12 site, while Wackenhut provided the guards. Poor communication between the two companies was one reason sensors and cameras were not repaired. Furthermore, Babcock and Wilcox had changed the design of the plant’s Highly Enriched Uranium Materials Facility, making it a more vulnerable aboveground building, in order to cut costs. And Wackenhut was planning to lay off 70 guards at Y-12, also to cut costs.

There’s an important lesson here. Security is a combination of people, process, and technology. All three have to be working in order for security to work.

Slashdot thread.

Posted on July 14, 2015 at 5:53 AM

Automatic Scanning for Highly Stressed Individuals

This borders on ridiculous:

Chinese scientists are developing a mini-camera to scan crowds for highly stressed individuals, offering law-enforcement officers a potential tool to spot would-be suicide bombers.

[…]

“They all looked and behaved as ordinary people but their level of mental stress must have been extremely high before they launched their attacks. Our technology can detect such people, so law enforcement officers can take precautions and prevent these tragedies,” Chen said.

Officers looking through the device at a crowd would see a mental “stress bar” above each person’s head, and the suspects highlighted with a red face.

The researchers said they were able to use the technology to tell the difference between high blood-oxygen levels produced by stress and those produced by physical exertion alone.

I’m not optimistic about this technology.

Posted on August 13, 2014 at 6:20 AM

Conversnitch

Surveillance is getting cheaper and easier:

Two artists have revealed Conversnitch, a device they built for less than $100 that resembles a lightbulb or lamp and surreptitiously listens in on nearby conversations and posts snippets of transcribed audio to Twitter. Kyle McDonald and Brian House say they hope to raise questions about the nature of public and private spaces in an era when anything can be broadcast by ubiquitous, Internet-connected listening devices.

This is meant as an art project to raise awareness, but the technology is getting cheaper all the time.

The surveillance gadget they unveiled Wednesday is constructed from little more than a Raspberry Pi miniature computer, a microphone, an LED and a plastic flower pot. It screws into and draws power from any standard bulb socket. Then it uploads captured audio via the nearest open Wi-Fi network to Amazon’s Mechanical Turk crowdsourcing platform, which McDonald and House pay small fees to transcribe the audio and post lines of conversation to Conversnitch’s Twitter account.
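
For a sense of how little engineering this takes, here is a rough sketch of the same capture, transcribe, and publish loop. It is not the artists’ code; both endpoint URLs are placeholders, and it assumes a Linux machine with ALSA’s arecord tool and the Python requests library installed.

```python
# Rough sketch of a Conversnitch-style pipeline: record a short clip,
# send it off for transcription, and publish a snippet of the result.
# Both URLs are placeholders, not real services.
import subprocess
import requests

TRANSCRIBE_URL = "https://example.invalid/transcribe"  # placeholder endpoint
PUBLISH_URL = "https://example.invalid/post"           # placeholder endpoint


def capture_clip(path="clip.wav", seconds=15):
    # arecord is ALSA's command-line recorder; any capture tool would do.
    subprocess.run(["arecord", "-d", str(seconds), "-f", "cd", path], check=True)
    return path


def transcribe(path):
    # Upload the clip to whatever transcription service is being used.
    with open(path, "rb") as f:
        resp = requests.post(TRANSCRIBE_URL, files={"audio": f})
    resp.raise_for_status()
    return resp.json()["text"]


def publish(snippet):
    # Truncate to a tweet-sized line before posting.
    requests.post(PUBLISH_URL, json={"status": snippet[:140]})


if __name__ == "__main__":
    publish(transcribe(capture_clip()))
```

The point is not this particular sketch but how short it is: a hundred dollars of hardware and a few dozen lines of glue code.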

Consumer spy devices are now affordable by the masses. For $54, you can buy a camera hidden in a smoke detector. For $80, you can buy one hidden in an alarm clock. There are many more options.

Posted on April 23, 2014 at 2:33 PM

Police Disabling Their Own Voice Recorders

This is not a surprise:

The Los Angeles Police Commission is investigating how half of the recording antennas in the Southeast Division went missing, seemingly as a way to evade new self-monitoring procedures that the Los Angeles Police Department imposed last year.

The antennas, which are mounted onto individual patrol cars, receive recorded audio captured from an officer’s belt-worn transmitter. The transmitter is designed to capture an officer’s voice and transmit the recording to the car itself for storage. The voice recorders are part of a video camera system that is mounted in a front-facing camera on the patrol car. Both elements are activated any time the car’s emergency lights and sirens are turned on, but they can also be activated manually.

According to the Los Angeles Times, an LAPD investigation determined that around half of the 80 patrol cars in one South LA division were missing antennas as of last summer, and an additional 10 antennas were unaccounted for.

Surveillance of power is one of the most important ways to ensure that power does not abuse its status. But, of course, power does not like to be watched.

Posted on April 11, 2014 at 6:41 AM

How Security Becomes Banal

Interesting paper: “The Banality of Security: The Curious Case of Surveillance Cameras,” by Benjamin Goold, Ian Loader, and Angélica Thumala (full paper is behind a paywall).

Abstract: Why do certain security goods become banal (while others do not)? Under what conditions does banality occur and with what effects? In this paper, we answer these questions by examining the story of closed circuit television cameras (CCTV) in Britain. We consider the lessons to be learned from CCTV’s rapid—but puzzling—transformation from novelty to ubiquity, and what the banal properties of CCTV tell us about the social meanings of surveillance and security. We begin by revisiting and reinterpreting the historical process through which camera surveillance has diffused across the British landscape, focusing on the key developments that encoded CCTV in certain dominant meanings (around its effectiveness, for example) and pulled the cultural rug out from under alternative or oppositional discourses. Drawing upon interviews with those who produce and consume CCTV, we tease out and discuss the family of meanings that can lead one justifiably to describe CCTV as a banal good. We then examine some frontiers of this process and consider whether novel forms of camera surveillance (such as domestic CCTV systems) may press up against the limits of banality in ways that risk unsettling security practices whose social value and utility have come to be taken for granted. In conclusion, we reflect on some wider implications of banal security and its limits.

Posted on August 23, 2013 at 1:23 PM
