Entries Tagged "false positives"


Fake Documents that Alarm if Opened

This sort of thing seems like a decent approach, but it has a lot of practical problems:

In the wake of Wikileaks, the Department of Defense has stepped up its game to stop leaked documents from making their way into the hands of undesirables—be they enemy forces or concerned citizens. A new piece of software has created a way to do this by generating realistic, fake documents that phone home when they’re accessed, serving the dual purpose of providing false intelligence and helping identify the culprit.

Details aside, this kind of thing falls into the general category of data tracking. It doesn’t even have to be fake documents; you could imagine some sort of macro embedded into Word or PDF documents that phones home when the document is opened. (I have no idea if you actually can do it with those formats, but the concept is plausible.) This allows the owner of a document to track when, and possibly from what computer, it is opened.
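To make the mechanism concrete, here is a rough sketch of the server side of such a beacon (my own illustration, not the DoD software): the tracked document embeds a reference to a URL, and the server logs when, and from which address, that URL is fetched. The path scheme and log format are assumptions.

    # Hypothetical beacon server: logs each fetch of /beacon/<doc-id>, which a
    # tracked document would trigger when opened. Illustration only; the URL
    # scheme and log format are assumptions, not any particular product's design.
    import logging
    from datetime import datetime, timezone
    from http.server import BaseHTTPRequestHandler, HTTPServer

    logging.basicConfig(filename="document_opens.log", level=logging.INFO)

    class BeaconHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            doc_id = self.path.rsplit("/", 1)[-1]   # e.g. /beacon/report-17
            logging.info("doc=%s opened at %s from %s", doc_id,
                         datetime.now(timezone.utc).isoformat(),
                         self.client_address[0])
            self.send_response(200)   # a real beacon might return a 1x1 image instead
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), BeaconHandler).serve_forever()

Anything that blocks the outbound request, such as an air-gapped machine or an egress firewall, defeats it, which is the weakness discussed below.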

But by far the biggest drawback from this tech is the possibility of false positives. If you seed a folder full of documents with a large number of fakes, how often do you think an authorized user will accidentally double click on the wrong file? And what if they act on the false information? Sure, this will prevent hackers from blindly trusting that every document on a server is correct, but we bet it won’t take much to look into the code of a document and spot the fake, either.

I’m less worried about false positives, and more concerned by how easy it is to get around this sort of thing. Detach your computer from the Internet, and the document no longer phones home. A fix is to combine the system with an encryption scheme that requires a remote key. Now the document has to phone home before it can be viewed. Of course, once someone is authorized to view the document, it would be easy to create an unprotected copy—screen captures, if nothing else—to forward along.
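Here is what the remote-key variant might look like, again just a sketch with a made-up key-server endpoint: the document body is stored encrypted, and the viewer has to fetch the key before anything can be displayed, so every open necessarily phones home. The last line also shows the limitation: once decrypted, the plaintext is out of the owner's control.

    # Sketch of the remote-key variant. The key-server URL and API are made up;
    # the point is only that viewing requires a network round trip the owner
    # controls, and can log or refuse.
    import urllib.request
    from cryptography.fernet import Fernet  # third-party "cryptography" package

    KEY_SERVER = "https://keys.example.com/fetch?doc="   # hypothetical endpoint

    def open_protected_document(doc_id: str, ciphertext: bytes) -> bytes:
        # The "phone home": the server sees who asked, for which document, and when.
        with urllib.request.urlopen(KEY_SERVER + doc_id) as resp:
            key = resp.read()
        # Once decrypted, nothing stops the viewer from screenshotting or
        # forwarding the plaintext.
        return Fernet(key).decrypt(ciphertext)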

While potentially interesting, this sort of technology is not going to prevent large data leaks. But it’s good to see research.

Posted on November 7, 2011 at 6:26 AM

Bin Laden's Death Causes Spike in Suspicious Package Reports

It’s not that the risk is greater, it’s that the fear is greater. Data from New York:

There were 10,566 reports of suspicious objects across the five boroughs in 2010. So far this year, the total was 2,775 as of Tuesday compared with 2,477 through the same period last year.

[…]

The daily totals typically spike when a terrorist plot makes headlines here or overseas, NYPD spokesman Paul Browne said Tuesday. The false alarms themselves sometimes get break-in cable news coverage or feed chatter online, fueling further fright.

On Monday, with news of the dramatic military raid of bin Laden’s Pakistani lair at full throttle, there were 62 reports of suspicious packages. The previous Monday, the 24-hour total was 18. All were deemed non-threats.

Despite all the false alarms, the New York Police Department still wants to hear about them:

“We anticipate that with increased public vigilance comes an increase in false alarms for suspicious packages,” Kelly said at the Monday news conference. “This typically happens at times of heightened awareness. But we don’t want to discourage the public. If you see something, say something.”

That slogan, oddly enough, is owned by New York’s transit authority.

I have a different opinion: “If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.”

People have always come forward to tell the police when they see something genuinely suspicious, and should continue to do so. But encouraging people to raise an alarm every time they’re spooked only squanders our security resources and makes no one safer.

“Refuse to be terrorized,” people.

Posted on May 5, 2011 at 6:43 AM

Get Your Terrorist Alerts on Facebook and Twitter

Colors are so last decade:

The U.S. government’s new system to replace the five color-coded terror alerts will have two levels of warnings, elevated and imminent, that will be relayed to the public only under certain circumstances for limited periods of time, sometimes using Facebook and Twitter, according to a draft Homeland Security Department plan obtained by The Associated Press.

Some terror warnings could be withheld from the public entirely if announcing a threat would risk exposing an intelligence operation or a current investigation, according to the government’s confidential plan.

Like a carton of milk, the new terror warnings will each come with a stamped expiration date.

Specific and limited are good. Twitter and Facebook: I’m not so sure.

But what could go wrong?

An errant keystroke touched off a brief panic Thursday at the University of Illinois at Urbana-Champaign when an emergency message accidentally was sent out saying an “active shooter” was on campus.

The first message was sent on the university’s emergency alert system at 10:40 a.m., reaching 87,000 cellphones and email addresses, according to the university.

The university corrected the false alarm about 12 minutes later and said the alert was caused when a worker updating the emergency messaging system inadvertently sent the message rather than saving it.

The emails are designed to go out quickly in the event of an emergency, so the false alarm could not be canceled before it went out, the university said.

Posted on April 8, 2011 at 1:23 PM

Burglary Detection through Video Analytics

This is interesting:

Some of the scenarios where we have installed video analytics for our clients include:

  • to detect someone walking in an area of their yard (veering off of the main path) where they are not supposed to be;
  • to send an alarm if someone is standing too close to the front of a store window/front door after hours;
  • to alert security guards about someone in a parkade during specific hours;
  • to count the number of people coming into (and out of) a store during the day.

In the case of burglary prevention, getting an early warning about someone trespassing makes a huge difference for our response teams. Now, rather than waiting for a detector in the house to trip, we can receive an alarm signal while a potential burglar is still outside.

Effectiveness is going to be a question of limiting false positives.
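Some back-of-the-envelope arithmetic (my numbers, not the vendor's) shows why: almost everything a residential camera sees is benign, so even a small per-event false-alarm rate swamps the handful of real intrusions.

    # Illustrative base-rate arithmetic; all figures are assumptions.
    benign_events_per_day = 2000    # passers-by, pets, shadows, headlights
    false_alarm_rate = 0.01         # 1% of benign events get flagged
    real_intrusions_per_year = 2

    false_alarms_per_day = benign_events_per_day * false_alarm_rate
    false_alarms_per_year = false_alarms_per_day * 365
    print(false_alarms_per_day, false_alarms_per_year)   # 20 per day, 7300 per year

With thousands of false alarms for every real event, responders either stop trusting the alerts or waste their time on them; either way the early warning is worth less than it sounds.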

Posted on July 14, 2010 at 12:54 PM

Behavioral Profiling at Airports

There’s a long article in Nature on the practice:

It remains unclear what the officers found anomalous about George’s behaviour, and why he was detained. The TSA’s parent agency, the Department of Homeland Security (DHS), has declined to comment on his case because it is the subject of a federal lawsuit that was filed on George’s behalf in February by the American Civil Liberties Union. But the incident has brought renewed attention to a burgeoning controversy: is it possible to know whether people are being deceptive, or planning hostile acts, just by observing them?

Some people seem to think so. At London’s Heathrow Airport, for example, the UK government is deploying behaviour-detection officers in a trial modelled in part on SPOT. And in the United States, the DHS is pursuing a programme that would use sensors to look at nonverbal behaviours, and thereby spot terrorists as they walk through a corridor. The US Department of Defense and intelligence agencies have expressed interest in similar ideas.

Yet a growing number of researchers are dubious, not just about the projects themselves, but about the science on which they are based. “Simply put, people (including professional lie-catchers with extensive experience of assessing veracity) would achieve similar hit rates if they flipped a coin,” noted a 2007 report from a committee of credibility-assessment experts who reviewed research on portal screening.

“No scientific evidence exists to support the detection or inference of future behaviour, including intent,” declares a 2008 report prepared by the JASON defence advisory group. And the TSA had no business deploying SPOT across the nation’s airports “without first validating the scientific basis for identifying suspicious passengers in an airport environment”, stated a two-year review of the programme released on 20 May by the Government Accountability Office (GAO), the investigative arm of the US Congress.

Commentary from the MindHacks blog.

Also, the GAO has published a report on the U.S. DHS’s SPOT program: “Aviation Security: Efforts to Validate TSA’s Passenger Screening Behavior Detection Program Underway, but Opportunities Exist to Strengthen Validation and Address Operational Challenges.”

As of March 2010, TSA deployed about 3,000 BDOs at an annual cost of about $212 million; this force increased almost fifteen-fold between March 2007 and July 2009. BDOs have been selectively deployed to 161 of the 457 TSA-regulated airports in the United States at which passengers and their property are subject to TSA-mandated screening procedures.

It seems pretty clear that the program only catches criminals, and not terrorists. You’d think there would be more important things to spend $200 million a year on.

EDITED TO ADD (6/14): In the comments, a couple of people asked how this compares with the Israeli model of airport security—concentrate on the person—and the idea that trained officers notice if someone is acting “hinky”: both things that I have written favorably about.

The difference is the experience of the detecting officer and the amount of time they spend with each person. If you read about the programs described above, they’re supposed to “spot terrorists as they walk through a corridor,” or possibly after a few questions. That’s very different from what happens when you check in for a flight at Ben Gurion Airport.

The problem with fast detection programs is that they don’t work, and the problem with the Israeli security model is that it doesn’t scale.

Posted on June 14, 2010 at 6:23 AM

Terrorists Placing Fake Bombs in Public Places

Supposedly, the latest terrorist tactic is to place fake bombs—suspicious looking bags, backpacks, boxes, and coolers—in public places in an effort to paralyze the city and probe our defenses. The article doesn’t say whether or not this has actually ever happened, only that the FBI is warning of the tactic.

Citing an FBI informational document, ABC News reports a so-called “battle of suspicious bags” is being encouraged on a jihadist website.

I have no doubt that this may happen, but I’m sure these are not actual terrorists doing the planting. We’re so easy to terrorize that anyone can play; this is the equivalent of hacking in the real world. One solution is to continue to overreact, and spend even more money on these fake threats. The other is to refuse to be terrorized.

Posted on June 9, 2010 at 6:24 AM

If You See Something, Think Twice About Saying Something

“If you see something, say something.” Or, maybe not:

The Travis County Criminal Justice Center was closed for most of the day on Friday, May 14, after a man reported that a “suspicious package” had been left in the building. The court complex was evacuated, and the APD Explosive Ordnance Disposal Unit was called in for a look-see. The package in question, a backpack, contained paperwork but no explosive device. The building reopened at 1:40pm. The man who reported the suspicious package, Douglas Scott Hoopes, was arrested and charged with making a false report and booked into the jail. The charge is a felony punishable by up to two years in jail.

I don’t think we can have it both ways. We expect people to report anything suspicious—even dumb things—and now we want to press charges if they report something that isn’t an actual threat. Truth is, if you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

I think this excerpt from a poem by Rick Moranis says it best:

If you see something,
Say something.
If you say something,
Mean something.
If you mean something,
You may have to prove something.
If you can’t prove something,
You may regret saying something.

There’s more.

EDITED TO ADD (5/26): Seems like he left the package himself, and then called it in. So there’s ample reason to arrest him. Never mind.

Posted on May 26, 2010 at 9:16 AM

Scanning Cargo for Nuclear Material and Conventional Explosives

Still experimental:

The team propose using a particle accelerator to alternately smash ionised hydrogen molecules and deuterium ions into targets of carbon and boron respectively. The collisions produce beams of gamma rays of various energies as well as neutrons. These beams are then passed through the cargo.

By measuring the way the beams are absorbed, Goldberg and company say they can work out whether the cargo contains explosives or nuclear materials. And they say they can do it at the rate of 20 containers per hour.

That’s an ambitious goal that presents numerous challenges.

For example, the beam currents will provide relatively sparse data so the team will have to employ a technique called few-view tomography to fill in the gaps. It will also mean that each container will have to be zapped several times. That may not be entirely desirable for certain types of goods such as food and equipment with delicate electronics.

Just how beams of gamma rays and neutrons affect these kinds of goods is something that will have to be determined.

Then there is the question of false positives. One advantage of a machine like this is that it has several scanning modes: if one reveals something suspicious, it can switch to another to look in more detail. That should build up a decent picture of the cargo’s contents and reduce false positives.
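A rough way to see why multiple modes help (assumed numbers, since real detector performance isn't given here): if a container is only flagged when two modes with independent errors both agree, the false-positive rates multiply, at the cost of giving up a little detection.

    # Illustration only; the rates below are assumptions, and independence of
    # the two modes' errors is itself an assumption.
    fp_gamma, fp_neutron = 0.05, 0.04   # per-mode false-positive rates
    tp_gamma, tp_neutron = 0.95, 0.90   # per-mode detection rates

    fp_both = fp_gamma * fp_neutron     # 0.002: 25x fewer false alarms
    tp_both = tp_gamma * tp_neutron     # 0.855: some detection is given up
    print(fp_both, tp_both)

In practice the second mode would only be run on containers the first one flags, so the extra dose and scan time are paid only for suspect cargo.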

Posted on January 27, 2010 at 6:53 AM
