Entries Tagged "cameras"


Nikon Image Authentication System Cracked

Not a lot of details:

ElcomSoft research shows that image metadata and image data are processed independently with a SHA-1 hash function. There are two 160-bit hash values produced, which are later encrypted with a secret (private) key by using an asymmetric RSA-1024 algorithm to create a digital signature. Two 1024-bit (128-byte) signatures are stored in EXIF MakerNote tag 0x0097 (Color Balance).

During validation, Nikon Image Authentication Software calculates two SHA-1 hashes from the same data, and uses the public key to verify the signature by decrypting stored values and comparing the result with newly calculated hash values.

The ultimate vulnerability is that the private (should-be-secret) cryptographic key is handled inappropriately, and can be extracted from the camera. After obtaining the private key, it is possible to generate a digital signature value for any image, thus forging the Image Authentication System.
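The scheme and the attack can be sketched like this. This is a toy illustration with tiny RSA parameters and no real signature padding, not Nikon's actual code; the point is only that whoever holds the private exponent can sign anything:

```python
import hashlib

# Toy RSA parameters for illustration only (the camera uses RSA-1024,
# and real signatures use proper padding such as PKCS#1).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent -- the secret the camera leaks

def sign(data: bytes) -> int:
    """Hash the data with SHA-1, then 'encrypt' the hash with the private key."""
    h = int.from_bytes(hashlib.sha1(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, sig: int) -> bool:
    """Recompute the SHA-1 hash and compare it with the decrypted signature."""
    h = int.from_bytes(hashlib.sha1(data).digest(), "big") % n
    return pow(sig, e, n) == h

# The camera signs metadata and image data independently, producing two signatures.
metadata, image = b"EXIF metadata", b"image pixel data"
sigs = (sign(metadata), sign(image))
assert verify(metadata, sigs[0]) and verify(image, sigs[1])

# The attack: once d is extracted, any forged image verifies as authentic.
forged = b"doctored image"
assert verify(forged, sign(forged))
```

Nothing in the verification step can distinguish a camera-made signature from one computed by an attacker who extracted the key.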

News article.

Canon’s system is just as bad, by the way.

Fifteen years ago, I co-authored a paper on the problem. The idea was to use a hash chain to better deal with the possibility of a secret-key compromise.
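The hash-chain idea can be sketched briefly. This is an illustrative construction, not the paper's exact one: each image's authenticator depends on everything before it, and the key is evolved one-way after each image, so extracting today's key doesn't let an attacker forge or reorder earlier entries:

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    # One-way key update: today's key reveals nothing about past keys.
    return hashlib.sha256(b"evolve" + key).digest()

def tag_images(initial_key: bytes, images: list[bytes]) -> list[bytes]:
    """MAC each image over a running hash chain, evolving the key each step."""
    key, chain, tags = initial_key, b"\x00" * 32, []
    for img in images:
        chain = hashlib.sha256(chain + img).digest()   # chain binds image order
        tags.append(hmac.new(key, chain, hashlib.sha256).digest())
        key = evolve(key)                              # erase-and-replace for forward security
    return tags
```

A verifier holding the initial key can recompute every tag; an attacker who compromises the camera later holds only the evolved key and cannot recreate earlier ones.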

Posted on May 3, 2011 at 7:54 AM

Software as Evidence

Increasingly, chains of evidence include software steps. It’s not just the RIAA suing people—and getting it wrong—based on automatic systems to detect and identify file sharers. It’s forensic programs used to collect and analyze data from computers and smart phones. It’s audit logs saved and stored by ISPs and websites. It’s location data from cell phones. It’s e-mails and IMs and comments posted to social networking sites. It’s tallies from digital voting machines. It’s images and meta-data from surveillance cameras. The list goes on and on. We in the security field know the risks associated with trusting digital data, but this evidence is routinely assumed by courts to be accurate.

Sergey Bratus is starting to look at this problem. His paper, written with Ashlyn Lembree and Anna Shubina, is “Software on the Witness Stand: What Should it Take for Us to Trust it?”

We discuss the growing trend of electronic evidence, created automatically by autonomously running software, being used in both civil and criminal court cases. We discuss trustworthiness requirements that we believe should be applied to such software and platforms it runs on. We show that courts tend to regard computer-generated materials as inherently trustworthy evidence, ignoring many software and platform trustworthiness problems well known to computer security researchers. We outline the technical challenges in making evidence-generating software trustworthy and the role Trusted Computing can play in addressing them.

From a presentation he gave on the subject:

Constitutionally, criminal defendants have the right to confront accusers. If software is the accusing agent, what should the defendant be entitled to under the Confrontation Clause?

[…]

Witnesses are sworn in and cross-examined to expose biases & conflicts—what about software as a witness?

Posted on April 19, 2011 at 6:47 AM

Jury Says it's Okay to Record the TSA

The Seattle man who refused to show ID to the TSA and recorded the whole incident has been cleared of all charges:

[The jury] returned not guilty verdicts for charges that included concealing his identity, refusing to obey a lawful order, trespassing, and disorderly conduct.

Papers, Please! says the acquittal proves what TSA critics have said all along: That checkpoint staff have no police powers, that contrary to TSA claims, passengers have the right to fly without providing ID, and yes, passengers are free to video record checkpoints as long as images on screening monitors aren’t captured.

“Annoying the TSA is not a crime,” the blog post states. “Photography is not a crime. You have the right to fly without ID, and to photograph, film, and record what happens.”

And a recent Dilbert is about the TSA.

EDITED TO ADD (1/10): Details and links.

Posted on January 31, 2011 at 6:56 AM

This Suspicious Photography Stuff Is Confusing

See:

Last week, Metro Transit Police received a report from a rider about suspicious behavior at the L’Enfant Plaza station and on an Orange Line train to Vienna.

The rider told Metro he saw two men acting suspiciously and videotaping platforms, trains and riders.

“The men, according to the citizen report, were trying to be inconspicuous, holding the cameras at their sides,” Metro spokesman Steven Taubenkibel says.

The rider was able to photograph the men who were videotaping and sent the photo to Metro Transit Police.

I assume the rider took that photo inconspicuously, too, which means that he’s now suspicious.

How will this all end?

EDITED TO ADD (12/27): In the comments I was asked about reconciling good profiling with this sort of knee-jerk photography=suspicious nonsense. It’s complicated, and I wrote about it here in 2007. This, from 2004, is also relevant.

Posted on December 27, 2010 at 6:12 AM

Recording the Police

I’ve written a lot on the “War on Photography,” where normal people are harassed as potential terrorists for taking pictures of things in public. This article is different; it’s about recording the police:

Allison’s predicament is an extreme example of a growing and disturbing trend. As citizens increase their scrutiny of law enforcement officials through technologies such as cell phones, miniature cameras, and devices that wirelessly connect to video-sharing sites such as YouTube and LiveLeak, the cops are increasingly fighting back with force and even jail time—and not just in Illinois. Police across the country are using decades-old wiretapping statutes that did not anticipate iPhones or Droids, combined with broadly written laws against obstructing or interfering with law enforcement, to arrest people who point microphones or video cameras at them. Even in the wake of gross injustices, state legislatures have largely neglected the issue. Meanwhile, technology is enabling the kind of widely distributed citizen documentation that until recently only spy novelists dreamed of. The result is a legal mess of outdated, loosely interpreted statutes and piecemeal court opinions that leave both cops and citizens unsure of when recording becomes a crime.

This is all important. Being able to record the police is one of the best ways to ensure that the police are held accountable for their actions. Privacy has to be viewed in the context of relative power. For example, the government has a lot more power than the people. So privacy for the government increases their power and increases the power imbalance between government and the people; it decreases liberty. Forced openness in government—open government laws, Freedom of Information Act filings, the recording of police officers and other government officials, WikiLeaks—reduces the power imbalance between government and the people, and increases liberty.

Privacy for the people increases their power. It also increases liberty, because it reduces the power imbalance between government and the people. Forced openness in the people—NSA monitoring of everyone’s phone calls and e-mails, the DOJ monitoring everyone’s credit card transactions, surveillance cameras—decreases liberty.

I think we need a law that explicitly makes it legal for people to record government officials when they are interacting with them in their official capacity. And this is doubly true for police officers and other law enforcement officials.

EDITED TO ADD: Anthony Graber, the Maryland motorcyclist in the article, had all the wiretapping charges cleared.

Posted on December 21, 2010 at 1:39 PM

Sometimes CCTV Cameras Work

Sex attack caught on camera.

Hamilton police have arrested two men after a sex attack on a woman early today was caught on the city’s closed circuit television (CCTV) cameras.

CCTV operators contacted police when they became concerned about the safety of a woman outside an apartment block near the intersection of Victoria and Collingwood streets about 5am today.

Remember, though, that the test for whether the surveillance cameras are worth it is whether or not this crime would have been solved without them. That is, were the cameras necessary for arrest or conviction?

My previous writing on cameras.

EDITED TO ADD (12/17): When I wrote “remember, though, that the test for whether the surveillance cameras are worth it is whether or not this crime would have been solved without them,” I was being sloppy. That’s the test as to whether or not they had any value in this case.

Posted on December 13, 2010 at 2:01 PM

Crowdsourcing Surveillance

Internet Eyes is a U.K. startup designed to crowdsource digital surveillance. People pay a small fee to become a “Viewer.” Once they do, they can log onto the site and view live anonymous feeds from surveillance cameras at retail stores. If they notice someone shoplifting, they can alert the store owner. Viewers get rated on their ability to differentiate real shoplifting from false alarms, can win 1000 pounds if they detect the most shoplifting in some time interval, and otherwise get paid a wage that most likely won’t cover their initial fee.

Although the system has some nod towards privacy, groups like Privacy International oppose it for fostering a culture of citizen spies. More fundamentally, though, I don’t think the system will work. Internet Eyes is primarily relying on voyeurism to compensate its Viewers. But most of what goes on in a retail store is incredibly boring. Some of it is actually voyeuristic, and very little of it is criminal. The incentives just aren’t there for Viewers to do more than peek, and there’s no obvious way to discourage them from siding with the shoplifter and just watching the scenario unfold.

This isn’t the first time groups have tried to crowdsource surveillance camera monitoring. Texas’s Virtual Border Patrol tried the same thing: deputizing the general public to monitor the Texas-Mexico border. It ran out of money last year, and was widely criticized as a joke.

This system suffered the same problems as Internet Eyes—not enough incentive to do a good job, boredom because crime is the rare exception—as well as the fact that false alarms were very expensive to deal with.

Both of these systems remind me of the one time this idea was conceptualized correctly. Invented in 2003 by my friend and colleague Jay Walker, US HomeGuard also tried to crowdsource surveillance camera monitoring. But this system focused on one very specific security concern: people in no-man’s areas. These are areas between fences at nuclear power plants or oil refineries, border zones, areas around dams and reservoirs, and so on: areas where there should never be anyone.

The idea is that people would register to become “spotters.” They would get paid a decent wage (that and patriotism was the incentive), receive a stream of still photos, and be asked a very simple question: “Is there a person or a vehicle in this picture?” If a spotter clicked “yes,” the photo—and the camera—would be referred to whatever professional response the camera owner had set up.

HomeGuard would monitor the monitors in two ways. One, by regularly sending stored, known photos to spotters to verify that they were paying attention. And two, by sending live photos to multiple spotters and correlating the results, escalating to many more spotters whenever one claimed to have spotted a person or vehicle.
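The monitor-the-monitors logic amounts to two simple checks, which could be sketched like this (hypothetical helper names and thresholds; HomeGuard’s actual scoring was never published):

```python
def attention_score(answers: dict[str, bool], planted: dict[str, bool]) -> float:
    """Share of planted calibration photos this spotter answered correctly.

    `answers` maps photo IDs to the spotter's yes/no responses; `planted`
    holds the ground truth for the known photos mixed into the stream.
    """
    checked = set(answers) & set(planted)
    if not checked:
        return 0.0
    return sum(answers[p] == planted[p] for p in checked) / len(checked)

def should_escalate(votes: list[bool], quorum: int = 2) -> bool:
    """Refer a live photo for professional response once enough spotters say yes."""
    return sum(votes) >= quorum
```

Correlating several spotters’ answers filters out both inattentive spotters and individual false alarms before anything reaches the camera owner’s responders.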

Just knowing that there’s a person or a vehicle in a no-man’s area is only the first step in a useful response, and HomeGuard envisioned a bunch of enhancements to the rest of that system. Flagged photos could be sent to the digital phones of patrolling guards, cameras could be controlled remotely by those guards, and speakers in the cameras could issue warnings. Remote citizen spotters were only useful for that first step, looking for a person or a vehicle in a photo that shouldn’t contain any. Only real guards at the site itself could tell an intruder from the occasional maintenance person.

Of course the system isn’t perfect. A would-be infiltrator could sneak past the spotters by holding a bush in front of him, or disguising himself as a vending machine. But it does fill in a gap in what fully automated systems can do, at least until image processing and artificial intelligence get significantly better.

HomeGuard never got off the ground. There was never any good data about whether spotters were more effective than motion sensors as a first level of defense. But more importantly, Walker says that the politics surrounding homeland security money post-9/11 was just too great to penetrate, and that as an outsider he couldn’t get his ideas heard. Today, probably, the patriotic fervor that gripped so many people post-9/11 has dampened, and he’d probably have to pay his spotters more than he envisioned seven years ago. Still, I thought it was a clever idea then and I still think it’s a clever idea—and it’s an example of how to do surveillance crowdsourcing correctly.

Making the system more general runs into all sorts of problems. An amateur can spot a person or vehicle pretty easily, but is much harder pressed to notice a shoplifter. The privacy implications of showing random people pictures of no-man’s lands are minimal, while a busy store is another matter—stores have enough individuality to be identifiable, as do people. Public photo tagging will even allow the process to be automated. And, of course, there’s the normalization of a spy-on-your-neighbor surveillance society, where it’s perfectly reasonable to watch each other on cameras just in case one of us does something wrong.

This essay first appeared in ThreatPost.

Posted on November 9, 2010 at 12:59 PM

