Entries Tagged "cameras"

Page 9 of 21

Burglary Detection through Video Analytics

This is interesting:

Some of the scenarios where we have installed video analytics for our clients include:

  • to detect someone walking in an area of a yard where they are not supposed to be (veering off the main path);
  • to send an alarm if someone is standing too close to the front of a store window/front door after hours;
  • to alert security guards about someone in a parkade during specific hours;
  • to count the number of people coming into (and out of) a store during the day.

In the case of burglary prevention, getting an early warning about someone trespassing makes a huge difference for our response teams. Now, rather than waiting for a detector in the house to trip, we can receive an alarm signal while a potential burglar is still outside.

Effectiveness is going to be a question of limiting false positives.
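The post doesn’t say how false positives are limited, but one common technique (an illustrative assumption here, not something the installers describe) is to require a detection to persist for several consecutive frames before raising an alarm, so a bird or a passing headlight doesn’t dispatch a response team:

```python
def should_alarm(detections, min_consecutive=5):
    """Raise an alarm only after a detection persists for
    min_consecutive consecutive frames, suppressing one-frame
    glitches (birds, headlights, swaying branches)."""
    streak = 0
    for frame, detected in enumerate(detections):
        streak = streak + 1 if detected else 0
        if streak >= min_consecutive:
            return frame  # index of the frame that triggered the alarm
    return None  # no sustained detection, no alarm

# A single spurious frame never alarms; a sustained intrusion does.
print(should_alarm([0, 1, 0, 0, 1, 0], min_consecutive=3))  # None
print(should_alarm([0, 1, 1, 1, 1, 0], min_consecutive=3))  # 3
```

Real analytics systems layer several such filters (minimum object size, restricted zones, time-of-day schedules), but the debouncing idea is the same.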

Posted on July 14, 2010 at 12:54 PM

Filming the Police

In at least three U.S. states, it is illegal to film an active-duty policeman:

The legal justification for arresting the “shooter” rests on existing wiretapping or eavesdropping laws, with statutes against obstructing law enforcement sometimes cited. Illinois, Massachusetts, and Maryland are among the 12 states in which all parties must consent for a recording to be legal unless, as with TV news crews, it is obvious to all that recording is underway. Since the police do not consent, the camera-wielder can be arrested. Most all-party-consent states also include an exception for recording in public places where “no expectation of privacy exists” (Illinois does not) but in practice this exception is not being recognized.

Massachusetts attorney June Jensen represented Simon Glik who was arrested for such a recording. She explained, “[T]he statute has been misconstrued by Boston police. You could go to the Boston Common and snap pictures and record if you want.” Legal scholar and professor Jonathan Turley agrees, “The police are basing this claim on a ridiculous reading of the two-party consent surveillance law—requiring all parties to consent to being taped. I have written in the area of surveillance law and can say that this is utter nonsense.”

The courts, however, disagree. A few weeks ago, an Illinois judge rejected a motion to dismiss an eavesdropping charge against Christopher Drew, who recorded his own arrest for selling one-dollar artwork on the streets of Chicago. Although the misdemeanor charges of not having a peddler’s license and peddling in a prohibited area were dropped, Drew is being prosecuted for illegal recording, a Class 1 felony punishable by 4 to 15 years in prison.

This is a horrible idea, and will make us all less secure. I wrote in 2008:

You cannot evaluate the value of privacy and disclosure unless you account for the relative power levels of the discloser and the disclosee.

If I disclose information to you, your power with respect to me increases. One way to address this power imbalance is for you to similarly disclose information to me. We both have less privacy, but the balance of power is maintained. But this mechanism fails utterly if you and I have different power levels to begin with.

An example will make this clearer. You’re stopped by a police officer, who demands to see identification. Divulging your identity will give the officer enormous power over you: He or she can search police databases using the information on your ID; he or she can create a police record attached to your name; he or she can put you on this or that secret terrorist watch list. Asking to see the officer’s ID in return gives you no comparable power over him or her. The power imbalance is too great, and mutual disclosure does not make it OK.

You can think of your existing power as the exponent in an equation that determines the value, to you, of more information. The more power you have, the more additional power you derive from the new data.

Another example: When your doctor says “take off your clothes,” it makes no sense for you to say, “You first, doc.” The two of you are not engaging in an interaction of equals.

This is the principle that should guide decision-makers when they consider installing surveillance cameras or launching data-mining programs. It’s not enough to open the efforts to public scrutiny. All aspects of government work best when the relative power between the governors and the governed remains as small as possible—when liberty is high and control is low. Forced openness in government reduces the relative power differential between the two, and is generally good. Forced openness in laypeople increases the relative power, and is generally bad.

EDITED TO ADD (7/13): Another article. One jurisdiction in Pennsylvania has explicitly ruled the opposite: that it’s legal to record police officers no matter what.

Posted on June 16, 2010 at 1:36 PM

Alerting Users that Applications are Using Cameras, Microphones, Etc.

Interesting research: “What You See is What They Get: Protecting users from unwanted use of microphones, cameras, and other sensors,” by Jon Howell and Stuart Schechter.

Abstract: Sensors such as cameras and microphones collect privacy-sensitive data streams without the user’s explicit action. Conventional sensor access policies either hassle users to grant applications access to sensors or grant with no approval at all. Once access is granted, an application may collect sensor data even after the application’s interface suggests that the sensor is no longer being accessed.

We introduce the sensor-access widget, a graphical user interface element that resides within an application’s display. The widget provides an animated representation of the personal data being collected by its corresponding sensor, calling attention to the application’s attempt to collect the data. The widget indicates whether the sensor data is currently allowed to flow to the application. The widget also acts as a control point through which the user can configure the sensor and grant or deny the application access. By building perpetual disclosure of sensor data collection into the platform, sensor-access widgets enable new access-control policies that relax the tension between the user’s privacy needs and applications’ ease of access.
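The paper describes a GUI element, but the underlying access-control idea can be sketched in a few lines. This toy model (my illustration, not the authors’ code) mediates every sensor read, so access is both user-controlled and perpetually disclosed:

```python
class SensorAccessWidget:
    """Toy model of a sensor-access widget: it sits between an
    application and a sensor, records whether data is flowing, and
    lets the user grant or revoke access at any time. (Illustrative
    sketch only; the paper describes a GUI element, not this API.)"""

    def __init__(self, sensor_name):
        self.sensor_name = sensor_name
        self.access_granted = False
        self.frames_disclosed = 0  # what the on-screen animation would show

    def grant(self):
        self.access_granted = True

    def revoke(self):
        self.access_granted = False

    def read(self, sensor_sample):
        """Application reads go through the widget, so every access
        is both controlled and visibly disclosed to the user."""
        if not self.access_granted:
            return None              # data never reaches the application
        self.frames_disclosed += 1   # perpetual disclosure counter
        return sensor_sample

widget = SensorAccessWidget("camera")
assert widget.read("frame-1") is None       # denied until the user grants
widget.grant()
assert widget.read("frame-2") == "frame-2"  # flows, and is disclosed
widget.revoke()
assert widget.read("frame-3") is None       # user can cut it off anytime
```

The key property is that the application cannot read the sensor behind the user’s back: the same object that gates the data is the one that displays it.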

Apple seems to be taking some steps in this direction with the location sensor disclosure in iPhone 4.0 OS.

Posted on May 24, 2010 at 7:32 AM

Preventing Terrorist Attacks in Crowded Areas

On the New York Times Room for Debate Blog, I—along with several other people—was asked about how to prevent terrorist attacks in crowded areas. This is my response.

In the wake of Saturday’s failed Times Square car bombing, it’s natural to ask how we can prevent this sort of thing from happening again. The answer is to stop focusing on the specifics of what actually happened, and instead to think about the threat in general.

Think about the security measures commonly proposed. Cameras won’t help. They don’t prevent terrorist attacks, and their forensic value after the fact is minimal. In the Times Square case, surely there’s enough other evidence—the car’s identification number, the auto body shop the stolen license plates came from, the name of the fertilizer store—to identify the guy. We will almost certainly not need the camera footage. The images released so far, like the images in so many other terrorist attacks, may make for exciting television, but their value to law enforcement officers is limited.

Checkpoints won’t help, either. You can’t check everybody and everything. There are too many people to check, and too many train stations, buses, theaters, department stores and other places where people congregate. Patrolling guards, bomb-sniffing dogs, chemical and biological weapons detectors: they all suffer from similar problems. In general, focusing on specific tactics or defending specific targets doesn’t make sense. They’re inflexible; possibly effective if you guess the plot correctly, but completely ineffective if you don’t. At best, the countermeasures just force the terrorists to make minor changes in tactic and target.

It’s much smarter to spend our limited counterterrorism resources on measures that don’t focus on the specific. It’s more efficient to spend money on investigating and stopping terrorist attacks before they happen, and responding effectively to any that occur. This approach works because it’s flexible and adaptive; it’s effective regardless of what the bad guys are planning for next time.

After the Christmas Day airplane bombing attempt, I was asked how we can better protect our airplanes from terrorist attacks. I pointed out that the event was a security success—the plane landed safely, nobody was hurt, a terrorist was in custody—and that the next attack would probably have nothing to do with explosive underwear. After the Moscow subway bombing, I wrote that overly specific security countermeasures like subway cameras and sensors were a waste of money.

Now we have a failed car bombing in Times Square. We can’t protect against the next imagined movie-plot threat. Isn’t it time to recognize that the bad guys are flexible and adaptive, and that we need the same quality in our countermeasures?

I know, nothing I haven’t said many times before.

Steven Simon likes cameras, although his arguments are more movie-plot than real. Michael Black, Noah Shachtman, Michael Tarr, and Jeffrey Rosen all write about the limitations of security cameras. Paul Ekman wants more people. And Richard Clarke has a nice essay about how we shouldn’t panic.

Posted on May 4, 2010 at 1:31 PM

Life Recorder

In 2006, writing about future threats on privacy, I described a life recorder:

A “life recorder” you can wear on your lapel that constantly records is still a few generations off: 200 gigabytes/year for audio and 700 gigabytes/year for video. It’ll be sold as a security device, so that no one can attack you without being recorded.
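For scale, those storage figures imply modest continuous bitrates. A quick back-of-the-envelope check (my arithmetic, using decimal gigabytes):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def implied_kbps(gigabytes_per_year):
    """Convert a GB/year storage figure into the continuous
    recording bitrate it implies, in kilobits per second."""
    bits_per_year = gigabytes_per_year * 1e9 * 8
    return bits_per_year / SECONDS_PER_YEAR / 1e3

print(round(implied_kbps(200)))  # audio: ~51 kbps
print(round(implied_kbps(700)))  # video: ~178 kbps
```

About 51 kbps for audio and 178 kbps for video, i.e., roughly the rates of compressed speech and very low-quality video.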

I can’t find a quote right now, but in talks I would say that this kind of technology would first be used by groups of people with diminished rights: children, soldiers, prisoners, and the non-lucid elderly.

It’s been proposed:

With GPS capabilities built into phones that can be made ever smaller, and the ability for these phones to transmit both audio and video, isn’t it time to think about a wearable device that could be used to call for help and accurately report what was happening?

[…]

The device could contain cameras and microphones that activate when the device is triggered, creating evidence that could locate an attacker and cause them to flee; an alarm sound that could help locate the victim and also help scare off an attacker; and a set of sensors that could detect everything from sudden deceleration to an irregular heartbeat or compromised breathing.

Just one sentence on the security and privacy issues:

Indeed, privacy concerns need to be addressed so that stalkers and predators couldn’t compromise the device.

Indeed.

Posted on April 19, 2010 at 6:30 AM

New York and the Moscow Subway Bombing

People intent on preventing a Moscow-style terrorist attack against the New York subway system are proposing a range of expensive new underground security measures, some temporary and some permanent.

They should save their money – and instead invest every penny they’re considering spending on new technologies in intelligence and old-fashioned policing.

Intensifying security at specific stations only works against terrorists who aren’t smart enough to move to another station. Cameras are useful only if all the stars align: The terrorists happen to walk into the frame, the video feeds are being watched in real time and the police can respond quickly enough to be effective. They’re much more useful after an attack, to figure out who pulled it off.

Installing biological and chemical detectors requires similarly implausible luck – plus a terrorist plot that includes the specific biological or chemical agent that is being detected.

What all these misguided reactions have in common is that they’re based on “movie-plot threats”: overly specific attack scenarios. They fill our imagination vividly, in full color with rich detail. Before long, we’re envisioning an entire story line, with or without Bruce Willis saving the day. And we’re scared.

It’s not that movie-plot threats are not worth worrying about. It’s that each one – Moscow’s subway attack, the bombing of the Oklahoma City federal building, etc. – is too specific. These threats are infinite, and the bad guys can easily switch among them.

New York has thousands of possible targets, and there are dozens of possible tactics. Implementing security against movie-plot threats is only effective if we correctly guess which specific threat to protect against. That’s unlikely.

A far better strategy is to spend our limited counterterrorism resources on investigation and intelligence – and on emergency response. These measures don’t hinge on any specific threat; they don’t require us to guess the tactic or target correctly. They’re effective in a variety of circumstances, even nonterrorist ones.

The result may not be flashy or outwardly reassuring, the way pricey new scanners in airports are. But the strategy will save more lives.

The 2006 arrest of the liquid bombers – who wanted to detonate liquid explosives to be brought onboard airliners traveling from England to North America – serves as an excellent example. The plotters were arrested in their London apartments, and their attack was foiled before they ever got to the airport.

It didn’t matter if they were using liquids or solids or gases. It didn’t even matter if they were targeting airports or shopping malls or theaters. It was a straightforward, although hardly simple, matter of following leads.

Gimmicky security measures are tempting – but they’re distractions we can’t afford. The Christmas Day bomber chose his tactic because it would circumvent last year’s security measures, and the next attacker will choose his tactic – and target – according to similar criteria. Spend money on cameras and guards in the subways, and the terrorists will simply modify their plot to render those countermeasures ineffective.

Humans are a species of storytellers, and the Moscow story has obvious parallels in New York. When we read the word “subway,” we can’t help but think about the system we use every day. This is a natural response, but it doesn’t make for good public policy. We’d all be safer if we rose above the simple parallels and the need to calm our fears with expensive and seductive new technologies – and countered the threat the smart way.

This essay originally appeared in the New York Daily News.

Posted on April 7, 2010 at 8:52 AM

Security Cameras in the New York City Subways

The New York Times has an article about cameras in the subways. The article is all about how horrible it is that the cameras don’t work:

Moreover, nearly half of the subway system’s 4,313 security cameras that have been installed—in stations and tunnels throughout the system—do not work, because of either shoddy software or construction problems, say officials with the Metropolitan Transportation Authority, which operates the city’s bus, subway and train system.

I certainly agree that taxpayers should be upset when something they’ve purchased doesn’t function as expected. But way down at the bottom of the article, we find:

Even without the cameras, officials said crime in the transit system had dropped to a record low. In 1990, the system averaged 47.8 crimes a day, compared with 5.3 so far this year. “The subway system is safer than it’s ever been,” said Kevin Ortiz, an authority spokesman.

No data on how many crimes were solved by cameras, but we know from other studies that their effect on crime is minimal.

Posted on March 31, 2010 at 1:24 PM

