Entries Tagged "retail"


Shopper Surveillance Using Cell Phones

Electronic surveillance is becoming so easy that even marketers can do it:

The cellphone tracking technology, called Footpath, is made by Path Intelligence Ltd., a Portsmouth, U.K.-based company. It uses sensors placed throughout the mall to detect signals from mobile phones and track their path around the mall. The sensors cannot gather phone numbers or other identifying data, or intercept or log data about calls or SMS messages, the company says.
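
To make the "track their path without identifying data" claim concrete, here is a minimal sketch of how that kind of system could work, assuming each sensor logs only a pseudonymous handset identifier, a location, and a timestamp. The field names and data are mine for illustration, not Path Intelligence's actual design:

```python
from collections import defaultdict

# Hypothetical sensor log: (pseudonymous_handset_id, sensor_location, timestamp).
# The IDs are stable enough to link sightings, but they are not phone numbers.
observations = [
    ("a3f9", "north-entrance", 1000),
    ("a3f9", "food-court",     1240),
    ("77b2", "north-entrance", 1010),
    ("a3f9", "electronics",    1530),
    ("77b2", "parking-exit",   1100),
]

def reconstruct_paths(observations):
    """Group sightings by pseudonymous ID and sort by time to recover each path."""
    paths = defaultdict(list)
    for handset, location, timestamp in observations:
        paths[handset].append((timestamp, location))
    return {h: [loc for _, loc in sorted(p)] for h, p in paths.items()}

for handset, path in reconstruct_paths(observations).items():
    print(handset, "->", " -> ".join(path))
```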

EDITED TO ADD (12/14): Two malls have shelved the system for now.

Posted on November 29, 2011 at 7:01 AM

Epsilon Hack

I have no idea why the Epsilon hack is getting so much press.

Yes, millions of names and e-mail addresses might have been stolen. Yes, other customer information might have been stolen, too. Yes, this personal information could be used to create more personalized and better targeted phishing attacks.

So what? These sorts of breaches happen all the time, and even more personal information is stolen.

I get that over 50 companies were affected, and some of them are big names. But the hack of the century? Hardly.

Posted on April 5, 2011 at 12:58 PM

Crowdsourcing Surveillance

Internet Eyes is a U.K. startup designed to crowdsource digital surveillance. People pay a small fee to become a “Viewer.” Once they do, they can log onto the site and view live anonymous feeds from surveillance cameras at retail stores. If they notice someone shoplifting, they can alert the store owner. Viewers get rated on their ability to differentiate real shoplifting from false alarms, can win 1000 pounds if they detect the most shoplifting in some time interval, and otherwise get paid a wage that most likely won’t cover their initial fee.

Although the system has some nod towards privacy, groups like Privacy International oppose the system for fostering a culture of citizen spies. More fundamentally, though, I don’t think the system will work. Internet Eyes is primarily relying on voyeurism to compensate its Viewers. But most of what goes on in a retail store is incredibly boring. Some of it is actually voyeuristic, and very little of it is criminal. The incentives just aren’t there for Viewers to do more than peek, and there’s no obvious way to discourage them from siding with the shoplifter and just watching the scenario unfold.

This isn’t the first time groups have tried to crowdsource surveillance camera monitoring. Texas’s Virtual Border Patrol tried the same thing: deputizing the general public to monitor the Texas-Mexico border. It ran out of money last year, and was widely criticized as a joke.

This system suffered the same problems as Internet Eyes—not enough incentive to do a good job, boredom because crime is the rare exception—as well as the fact that false alarms were very expensive to deal with.

Both of these systems remind me of the one time this idea was conceptualized correctly. Invented in 2003 by my friend and colleague Jay Walker, US HomeGuard also tried to crowdsource surveillance camera monitoring. But this system focused on one very specific security concern: people in no-man’s areas. These are areas between fences at nuclear power plants or oil refineries, border zones, areas around dams and reservoirs, and so on: areas where there should never be anyone.

The idea is that people would register to become “spotters.” They would get paid a decent wage (that, plus patriotism, was the incentive), receive a stream of still photos, and be asked a very simple question: “Is there a person or a vehicle in this picture?” If a spotter clicked “yes,” the photo—and the camera—would be referred to whatever professional response the camera owner had set up.

HomeGuard would monitor the monitors in two ways. One, by sending stored, known photos to people regularly to verify that they were paying attention. And two, by sending live photos to multiple spotters and correlating the results, escalating to many more spotters if one claimed to have spotted a person or vehicle.
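
Here’s a rough sketch of that “monitor the monitors” logic, under my own assumptions about thresholds and data structures; none of this is from Walker’s actual design. Known test photos check whether a spotter is paying attention, and live photos go to several spotters whose answers are correlated before anything is escalated:

```python
def check_attention(spotter_answer, known_photo_contains_person):
    """Test photos with known contents verify that a spotter is paying attention."""
    return spotter_answer == known_photo_contains_person

def correlate(answers, escalate_threshold=0.5):
    """Send the same live photo to several spotters and combine their answers.

    A lone 'yes' triggers re-sending the photo to a larger pool of spotters;
    broad agreement refers the camera to the owner's professional response.
    """
    yes_fraction = sum(answers) / len(answers)
    if yes_fraction == 0:
        return "clear"
    if yes_fraction >= escalate_threshold:
        return "refer to on-site response"
    return "re-send to a larger pool of spotters"

# Example: a spotter passes an attention check, then three spotters view one live photo.
print(check_attention(True, known_photo_contains_person=True))  # True
print(correlate([True, False, False]))   # re-send to a larger pool of spotters
print(correlate([True, True, True]))     # refer to on-site response
```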

Just knowing that there’s a person or a vehicle in a no-mans area is only the first step in a useful response, and HomeGuard envisioned a bunch of enhancements to the rest of that system. Flagged photos could be sent to the digital phones of patrolling guards, cameras could be controlled remotely by those guards, and speakers in the cameras could issue warnings. Remote citizen spotters were only useful for that first step, looking for a person or a vehicle in a photo that shouldn’t contain any. Only real guards at the site itself could tell an intruder from the occasional maintenance person.

Of course the system isn’t perfect. A would-be infiltrator could sneak past the spotters by holding a bush in front of him, or disguising himself as a vending machine. But it does fill in a gap in what fully automated systems can do, at least until image processing and artificial intelligence get significantly better.

HomeGuard never got off the ground. There was never any good data about whether spotters were more effective than motion sensors as a first level of defense. But more importantly, Walker says that the politics surrounding homeland security money post-9/11 were just too difficult to penetrate, and that as an outsider he couldn’t get his ideas heard. Today, the patriotic fervor that gripped so many people post-9/11 has dampened, and he’d probably have to pay his spotters more than he envisioned seven years ago. Still, I thought it was a clever idea then, and I still think it’s a clever idea—and it’s an example of how to do surveillance crowdsourcing correctly.

Making the system more general runs into all sorts of problems. An amateur can spot a person or vehicle pretty easily, but is much harder pressed to notice a shoplifter. The privacy implications of showing random people pictures of no-man’s-lands are minimal, while a busy store is another matter—stores have enough individuality to be identifiable, as do people. Public photo tagging will even allow the process to be automated. And, of course, there’s the normalization of a spy-on-your-neighbor surveillance society, where it’s perfectly reasonable to watch each other on cameras just in case one of us does something wrong.

This essay first appeared in ThreatPost.

Posted on November 9, 2010 at 12:59 PM

Cloning Retail Gift Cards

Clever attack.

After researching how gift cards work, Zepeda purchased a magnetic card reader online and began stealing blank gift cards, on display for purchase, from Fred Meyer and scanning them with his reader. He would then return some of the scanned cards to the store and wait for a computer program to alert him when the cards were activated and loaded with money.

Using a magnetic card writer, Zepeda then rewrote the magnetic strip of one of the leftover stolen gift cards with the activated card’s information, thus creating a cloned card.
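
The reason the clone works is worth spelling out: the back end keys the balance to nothing but the data encoded on the magnetic strip, so a card carrying copied strip data is indistinguishable from the original. A toy model of that lookup, entirely my own construction and not any retailer’s actual system:

```python
# Toy model: the balance is keyed solely by the number encoded on the strip.
balances = {}

def activate(card_number, amount):
    """Cashier activates a gift card at the register and loads value onto it."""
    balances[card_number] = amount

def redeem(strip_data, amount):
    """The terminal only sees strip data; any card carrying the same data spends
    the same balance, which is why a rewritten strip behaves like the original."""
    if balances.get(strip_data, 0) >= amount:
        balances[strip_data] -= amount
        return "approved"
    return "declined"

activate("6006491234567890", 50)         # a victim buys and loads the card
print(redeem("6006491234567890", 50))    # a clone with copied strip data: approved
```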

Posted on August 13, 2010 at 7:36 AM

Acrobatic Thieves

Some movie-plot attacks actually happen:

They never touched the floor—that would have set off an alarm.

They didn’t appear on store security cameras. They cut a hole in the roof and came in at a spot where the cameras were obscured by advertising banners.

And they left with some $26,000 in laptop computers, departing the same way they came in—down a 3-inch gas pipe that runs from the roof to the ground outside the store.

EDITED TO ADD (4/13): Similar heists.

Posted on March 24, 2010 at 1:51 PM

Man-in-the-Middle Attack Against Chip and PIN

Nice attack against EMV (Europay, MasterCard, Visa), the “chip and PIN” credit card payment system. The attack allows a criminal to use a stolen card without knowing the PIN.

The flaw is that when you put a card into a terminal, a negotiation takes place about how the cardholder should be authenticated: using a PIN, using a signature or not at all. This particular subprotocol is not authenticated, so you can trick the card into thinking it’s doing a chip-and-signature transaction while the terminal thinks it’s chip-and-PIN. The upshot is that you can buy stuff using a stolen card and a PIN of 0000 (or anything you want). We did so, on camera, using various journalists’ cards. The transactions went through fine and the receipts say “Verified by PIN”.
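
Here’s a schematic of that wedge, with made-up message names; the real EMV exchange is far more involved, and this only shows the unauthenticated negotiation being split in two. The man-in-the-middle answers the terminal’s PIN verification itself while telling the card the transaction is chip-and-signature, so both sides are satisfied and neither notices the disagreement:

```python
class Card:
    def verify(self, method, pin=None):
        # The card never sees a PIN, so it records a signature transaction.
        if method == "signature":
            return {"status": "ok", "cvm": "signature"}
        return {"status": "ok" if pin == "4929" else "fail", "cvm": "pin"}

class ManInTheMiddle:
    """Wedge between terminal and card: the cardholder-verification negotiation
    isn't authenticated, so each side can be told a different story."""
    def __init__(self, card):
        self.card = card

    def verify(self, method, pin=None):
        if method == "pin":
            # Tell the card it's chip-and-signature, but answer the terminal
            # as if the (arbitrary) PIN were correct.
            self.card.verify("signature")
            return {"status": "ok", "cvm": "pin"}   # receipt reads "Verified by PIN"
        return self.card.verify(method, pin)

terminal_view = ManInTheMiddle(Card()).verify("pin", pin="0000")
print(terminal_view)   # {'status': 'ok', 'cvm': 'pin'} despite the wrong PIN
```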

[…]

So what went wrong? In essence, there is a gaping hole in the specifications which together create the “Chip and PIN” system. These specs consist of the EMV protocol framework, the card scheme individual rules (Visa, MasterCard standards), the national payment association rules (UK Payments Association aka APACS, in the UK), and documents produced by each individual issuer describing their own customisations of the scheme. Each spec defines security criteria, tweaks options and sets rules—but none take responsibility for listing what back-end checks are needed. As a result, hundreds of issuers independently get it wrong, and gain false assurance that all bases are covered from the common specifications. The EMV specification stack is broken, and needs fixing.

Read Ross Anderson’s entire blog post for both details and context. Here’s the paper, the press release, and a FAQ. And one news article.

This is big. There are about a gazillion of these in circulation.

EDITED TO ADD (2/12): BBC video of the attack in action.

Posted on February 11, 2010 at 4:18 PM

$3.2 Million Jewelry Store Theft

I’ve written about this sort of thing before:

A robber bored a hole through the wall of a jewelry shop and walked off with about 200 luxury watches worth 300 million yen ($3.2 million) in Tokyo’s upscale Ginza district, police said Saturday.

From Secrets and Lies, p. 318:

Threat modeling is, for the most part, ad hoc. You think about the threats until you can’t think of any more, then you stop. And then you’re annoyed and surprised when some attacker thinks of an attack you didn’t. My favorite example is a band of California art thieves that would break into people’s houses by cutting a hole in their walls with a chainsaw. The attacker completely bypassed the threat model of the defender. The countermeasures that the homeowner put in place were door and window alarms; they didn’t make a difference to this attack.

One of the important things to consider in threat modeling is whether the attacker is looking for any victim, or is specifically targeting you. If the attacker is looking for any victim, then countermeasures that make you a less attractive target than other people are generally good enough. If the attacker is specifically targeting you, then you need to consider a greater level of security.

Posted on January 14, 2010 at 12:43 PM
