Entries Tagged "searches"

Page 7 of 16

Body Cavity Scanners

At least one company is touting its technology:

Nesch, a company based in Crown Point, Indiana, may have a solution. It’s called diffraction-enhanced X-ray imaging, or DEXI, which employs proprietary diffraction enhanced imaging and multiple image radiography.

Rather than simply shining X-rays through the subject and looking at the amount that passes through (like a conventional X-ray machine), DEXI analyzes the X-rays that are scattered or refracted by soft tissue or other low-density material. Conventional X-rays show little more than the skeleton, but the new technique can reveal far more, which makes it useful for both medical and security applications.

Posted on January 14, 2010 at 6:00 AM

Breaching the Secure Area in Airports

An unidentified man breached airport security at Newark Airport on Sunday, walking into the secured area through the exit, prompting the evacuation of a terminal and flight delays that continued into the next day. This isn’t common, but it happens regularly. The result is always the same, and it’s not obvious that fixing the problem is the right solution.

This kind of security breach is inevitable, simply because human guards are not perfect. Sometimes it’s someone going in through the out door, unnoticed by a bored guard. Sometimes it’s someone running through the checkpoint and getting lost in the crowd. Sometimes it’s an open door that should be locked. Amazing as it seems to frequent fliers, the perpetrator often doesn’t even know he did anything wrong.

Basically, whenever there is—or could be—an unscreened person lost within the secure area of an airport, there are two things the TSA can do. They can say “this isn’t a big deal,” and ignore it. Or they can evacuate everyone inside the secure area, search every nook and cranny—inside the large boxes of napkins at the fast food restaurant, above the false ceilings in the bathrooms, everywhere—looking for anyone hiding or anything anyone hid, and then rescreen everybody: causing delays of six, eight, twelve, or more hours. That’s it; those are the options. And there’s no way someone in charge will choose to ignore the risk; even if the odds of a terrorist exploit are minuscule, it’ll cost him his career if he’s wrong.

Several European airports have their security screening organized differently. At Schiphol Airport in Amsterdam, for example, passengers are screened at the gates. This is more expensive and requires a substantially different airport design, but it does mean that if there is a security breach, only the gate has to be evacuated and searched, and the people rescreened.

American airports can do more to secure against this risk, but I’m reasonably sure it’s not worth it. We could double the guards to reduce the risk of inattentiveness, and redesign the airports to make this kind of thing less likely, but those are expensive solutions to an already rare problem. As much as I don’t like saying it, the smartest thing is probably to live with this occasional but major inconvenience.

This essay originally appeared on ThreatPost.com.

EDITED TO ADD (1/9): A first-person account of the chaos at Newark Airport, with observations and recommendations.

Posted on January 6, 2010 at 6:10 AM

Matt Blaze on the New "Unpredictable" TSA Screening Measures

Interesting:

“Unpredictable” security as applied to air passenger screening means that sometimes (perhaps most of the time), certain checks that might detect terrorist activity are not applied to some or all passengers on any given flight. Passengers can’t predict or influence when or whether they will be subjected to any particular screening mechanism. And so, the strategy assumes, the would-be terrorist will be forced to prepare for every possible mechanism in the TSA’s arsenal, effectively narrowing his or her range of options enough to make any serious mischief infeasible.

But terrorist organizations—especially those employing suicide bombers—have very different goals and incentives from those of smugglers, fare beaters and tax cheats. Groups like Al Qaeda aim to cause widespread disruption and terror by whatever means they can, even at great cost to individual members. In particular, they are willing and able to sacrifice—martyr—the very lives of their soldiers in the service of that goal. The fate of any individual terrorist is irrelevant as long as the loss contributes to terror and disruption.

Paradoxically, the best terrorist strategy (as long as they have enough volunteers) under unpredictable screening may be to prepare a cadre of suicide bombers for the least rigorous screening to which they might be subjected, and not, as the strategy assumes, for the most rigorous. Sent on their way, each will either succeed at destroying a plane or be caught, but either outcome serves the terrorists’ objective.

The problem is that catching someone under a randomized strategy creates a terrible dilemma for the authorities. What do we do when we detect a bomb-wielding terrorist whose device was discovered through the enhanced, randomly applied screening procedure?
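Blaze’s argument can be made concrete with a toy expected-value model (my illustration, not from his post): if a caught bomber still produces terror and disruption, then every attacker “pays off” from the organization’s perspective, no matter how often the randomized enhanced check fires.

```python
# Toy model: with probability q_enhanced the TSA applies an "enhanced" check;
# otherwise only the basic check runs.  An attacker prepared only to beat the
# basic check is caught exactly when the enhanced check happens to be applied.

def outcomes(n_attackers: int, q_enhanced: float) -> dict:
    """Expected outcomes for attackers prepared only for the basic check."""
    succeed = n_attackers * (1 - q_enhanced)   # slip through basic-only screening
    caught = n_attackers * q_enhanced          # stopped by the random enhanced check
    return {
        "succeed": succeed,
        "caught": caught,
        # If a caught bomber still causes terror and disruption, every
        # attacker serves the organization's objective either way.
        "total_incidents": succeed + caught,
    }

print(outcomes(10, 0.25))
# {'succeed': 7.5, 'caught': 2.5, 'total_incidents': 10.0}
```

Under this model, randomization reduces successful bombings but never reduces total incidents, which is precisely the dilemma Blaze describes.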

EDITED TO ADD (1/5): In this blog post, a reader of Andrew Sullivan’s argues that the terrorist didn’t care whether or not he blew up the plane, and that he went back to his seat instead of detonating the explosive in the toilet precisely because he wanted his fellow passengers to see his attempt—just in case it failed.

Posted on January 5, 2010 at 11:41 AM

Christmas Bomber: Where Airport Security Worked

With all the talk about the failure of airport security to detect the PETN that the Christmas bomber sewed into his underwear—and to think I’ve been using the phrase “underwear bomber” as a joke all these years—people forget that airport security played an important role in foiling the plot.

In order to get through airport security, Abdulmutallab—or, more precisely, whoever built the bomb—had to construct a far less reliable bomb than he would have otherwise; he had to resort to a much less effective detonation mechanism. And, as we’ve learned, detonating PETN is actually very hard.

Additionally, I don’t think it’s fair to criticize airport security for not catching the PETN. The security systems at airports aren’t designed to catch someone strapping a plastic explosive to his body. Even more strongly: no security system, at any airport, in any country on the planet, is designed to catch someone doing this. This isn’t a surprise. It isn’t even a new idea. It wasn’t even a new idea when I said this to then-TSA head Kip Hawley in 2007: “I don’t want to even think about how much C4 I can strap to my legs and walk through your magnetometers.” You can try to argue that the TSA, and other airport security organizations around the world, should have been redesigned years ago to catch this, but anyone who is surprised by this attack simply hasn’t been paying attention.

EDITED TO ADD (1/4): I don’t know what to make of this:

Ben Wallace, who used to work at defence firm QinetiQ, one of the companies making the technology, warned it was not a “big silver bullet”.

[…]

Mr Wallace said the scanners would probably not have detected the failed Detroit plane plot of Christmas Day.

He said the same of the 2006 airliner liquid bomb plot and of explosives used in the 2005 bombings of three Tube trains and a bus in London.

[…]

He said the “passive millimetre wave scanners” – which QinetiQ helped develop – probably would not have detected key plots affecting passengers in the UK in recent years.

[…]

Mr Wallace told BBC Radio 4’s Today programme: “The advantage of the millimetre waves are that they can be used at longer range, they can be quicker and they are harmless to travellers.

“But there is a big but, and the but was in all the testing that we undertook, it was unlikely that it would have picked up the current explosive devices being used by al-Qaeda.”

He added: “It probably wouldn’t have picked up the very large plot with the liquids in 2006 at Heathrow or indeed the… bombs that were used on the Tube because it wasn’t very good and it wasn’t that easy to detect liquids and plastics unless they were very solid plastics.

“This is not necessarily the big silver bullet that is somehow being portrayed by Downing Street.”

A spokeswoman for QinetiQ said “no single technology can address every eventuality or security risk”.

“QinetiQ’s passive millimetre wave system, SPO, is a… people-screening system which can identify potential security threats concealed on the human body. It is not a checkpoint security system.

“SPO can effectively shortlist people who may need further investigation, either via other technology such as x-rays, or human intervention such as a pat-down search.”

Posted on January 4, 2010 at 6:28 AM

Laptop Security while Crossing Borders

Last year, I wrote about the increasing propensity for governments, including the U.S. and Great Britain, to search the contents of people’s laptops at customs. What we know is still based on anecdote, as no country has clarified the rules about what their customs officers are and are not allowed to do, and what rights people have.

Companies and individuals have dealt with this problem in several ways, from keeping sensitive data off laptops traveling internationally, to storing the data—encrypted, of course—on websites and then downloading it at the destination. I have never liked either solution. I do a lot of work on the road, and need to carry all sorts of data with me all the time. It’s a lot of data, and downloading it can take a long time. Also, I like to work on long international flights.

There’s another solution, one that works with whole-disk encryption products like PGP Disk (I’m on PGP’s advisory board), TrueCrypt, and BitLocker: Encrypt the data to a key you don’t know.

It sounds crazy, but stay with me. Caveat: Don’t try this at home if you’re not very familiar with whatever encryption product you’re using. Failure results in a bricked computer. Don’t blame me.

Step One: Before you board your plane, add another key to your whole-disk encryption (it’ll probably mean adding another “user”)—and make it random. By “random,” I mean really random: Pound the keyboard for a while, like a monkey trying to write Shakespeare. Don’t make it memorable. Don’t even try to memorize it.

Technically, this key doesn’t directly encrypt your hard drive. Instead, it encrypts the key that is used to encrypt your hard drive—that’s how the software allows multiple users.
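That key-wrapping arrangement can be sketched in a few lines of Python (stdlib only). This is a toy illustration of the general multi-user design, not the actual mechanism of PGP Disk, TrueCrypt, or BitLocker—real products use proper key-derivation functions and authenticated encryption, and the XOR “wrap” here is purely for illustration.

```python
# Toy sketch of multi-user whole-disk encryption: the disk is encrypted under
# one master key, and each "user" passphrase merely wraps (encrypts) a copy
# of that master key in its own key slot.
import hashlib
import secrets

master_key = secrets.token_bytes(32)   # the key that actually encrypts the disk

def wrap(master: bytes, passphrase: str) -> bytes:
    """Encrypt the master key under a key derived from a passphrase (toy XOR)."""
    kek = hashlib.sha256(passphrase.encode()).digest()   # key-encryption key
    return bytes(m ^ k for m, k in zip(master, kek))

unwrap = wrap   # XOR is its own inverse

# Two slots: your everyday passphrase, plus a freshly generated random one
# that you will send to a confidant and then destroy (Steps One to Three).
slot_daily = wrap(master_key, "my normal passphrase")
random_pass = secrets.token_hex(32)
slot_random = wrap(master_key, random_pass)

# Deleting slot_daily (Step Five) leaves the disk recoverable only through
# random_pass -- which, after Step Three, only your confidant holds.
assert unwrap(slot_random, random_pass) == master_key
```

The point of the sketch is that deleting a key slot is cheap and instant, while the master key itself never has to change.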

So now there are two different users named with two different keys: the one you normally use, and some random one you just invented.

Step Two: Send that new random key to someone you trust. Make sure the trusted recipient has it, and make sure it works. You won’t be able to recover your hard drive without it.

Step Three: Burn, shred, delete or otherwise destroy all copies of that new random key. Forget it. If it was sufficiently random and non-memorable, this should be easy.

Step Four: Board your plane normally and use your computer for the whole flight.

Step Five: Before you land, delete the key you normally use.

At this point, you will not be able to boot your computer. The only key remaining is the one you forgot in Step Three. There’s no need to lie to the customs official; you can even show him a copy of this article if he doesn’t believe you.

Step Six: When you’re safely through customs, get that random key back from your confidant, boot your computer and re-add the key you normally use to access your hard drive.

And that’s it.

This is by no means a magic get-through-customs-easily card. Your computer might be impounded, and you might be taken to court and compelled to reveal who has the random key.

But the purpose of this protocol isn’t to prevent all that; it’s just to deny customs any possible access to your computer. You might be delayed. You might have your computer seized. (This will cost you any work you did on the flight, but—honestly—at that point that’s the least of your troubles.) You might be turned back or sent home. But when you’re back home, you have access to your corporate management, your personal attorneys, your wits after a good night’s sleep, and all the rights you normally have in whatever country you’re now in.

This procedure not only protects you against the warrantless search of your data at the border, it also allows you to deny a customs official your data without having to lie or pretend—which itself is often a crime.

Now the big question: Who should you send that random key to?

Certainly it should be someone you trust, but—more importantly—it should be someone with whom you have a privileged relationship. Depending on the laws in your country, this could be your spouse, your attorney, your business partner or your priest. In a larger company, the IT department could institutionalize this as a policy, with the help desk acting as the key holder.

You could also send it to yourself, but be careful. You don’t want to e-mail it to your webmail account, because then you’d be lying when you tell the customs official that there is no possible way you can decrypt the drive.

You could put the key on a USB drive and send it to your destination, but there are potential failure modes. It could fail to get there in time to be waiting for your arrival, or it might not get there at all. You could airmail the drive with the key on it to yourself a couple of times, in a couple of different ways, and also fax the key to yourself … but that’s more work than I want to do when I’m traveling.

If you only care about the return trip, you can set it up before you return. Or you can set up an elaborate one-time pad system, with identical lists of keys with you and at home: Destroy each key on the list you have with you as you use it.
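The one-time list idea above can be sketched trivially (my illustration): pre-generate a numbered list of random unlock keys, keep identical copies with you and at home, and destroy each key on your copy as you use it.

```python
# Generate a one-time list of random unlock keys for multiple border
# crossings.  Each 64-character hex string is a 256-bit random key; cross
# each one off (destroy it) on your traveling copy as soon as you use it.
import secrets

def make_key_list(n: int) -> list[str]:
    return [secrets.token_hex(32) for _ in range(n)]

trip_keys = make_key_list(5)
assert len(set(trip_keys)) == 5   # all keys distinct (overwhelmingly likely)
```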

Remember that you’ll need to have full-disk encryption, using a product such as PGP Disk, TrueCrypt or BitLocker, already installed and enabled to make this work.

I don’t think we’ll ever get to the point where our computer data is safe when crossing an international border. Even if countries like the U.S. and Britain clarify their rules and institute privacy protections, there will always be other countries that will exercise greater latitude with their authority. And sometimes protecting your data means protecting your data from yourself.

This essay originally appeared on Wired.com.

Posted on July 15, 2009 at 12:10 PM

Court Limits on TSA Searches

This is good news:

A federal judge in June threw out the seizure of three fake passports from a traveler, saying that TSA screeners violated his Fourth Amendment rights against unreasonable search and seizure. Congress authorizes TSA to search travelers for weapons and explosives; beyond that, the agency is overstepping its bounds, U.S. District Court Judge Algenon L. Marbley said.

“The extent of the search went beyond the permissible purpose of detecting weapons and explosives and was instead motivated by a desire to uncover contraband evidencing ordinary criminal wrongdoing,” Judge Marbley wrote.

In the second case, Steven Bierfeldt, treasurer for the Campaign for Liberty, a political organization launched from Ron Paul’s presidential run, was detained at the St. Louis airport because he was carrying $4,700 in a lock box from the sale of tickets, T-shirts, bumper stickers and campaign paraphernalia. TSA screeners quizzed him about the cash, his employment and the purpose of his trip to St. Louis, then summoned local police and threatened him with arrest because he responded to their questions with a question of his own: What were his rights and could TSA legally require him to answer?

[…]

Mr. Bierfeldt’s suit, filed in U.S. District Court in the District of Columbia, seeks to bar TSA from “conducting suspicion-less pre-flight searches of passengers or their belongings for items other than weapons or explosives.”

I wrote about this a couple of weeks ago:

…Obama should mandate that airport security be solely about terrorism, and not a general-purpose security checkpoint to catch everyone from pot smokers to deadbeat dads.

The Constitution provides us, both Americans and visitors to America, with strong protections against invasive police searches. Two exceptions come into play at airport security checkpoints. The first is “implied consent,” which means that you cannot refuse to be searched; your consent is implied when you purchased your ticket. And the second is “plain view,” which means that if the TSA officer happens to see something unrelated to airport security while screening you, he is allowed to act on that.

Both of these principles are well established and make sense, but it’s their combination that turns airport security checkpoints into police-state-like checkpoints.

The TSA should limit its searches to bombs and weapons and leave general policing to the police—where we know courts and the Constitution still apply.

Posted on July 8, 2009 at 6:42 AM

Update on Computer Science Student's Computer Seizure

In April, I blogged about the Boston police seizing a student’s computer for, among other things, running Linux. (Anyone who runs Linux instead of Windows is obviously a scary bad hacker.)

Last week, the Massachusetts Supreme Court threw out the search warrant:

Massachusetts Supreme Judicial Court Associate Justice Margot Botsford on Thursday said that Boston College and Massachusetts State Police had insufficient evidence to search the dorm room of BC senior Riccardo Calixte. During the search, police confiscated a variety of electronic devices, including three laptop computers, two iPod music players, and two cellphones.

Police obtained a warrant to search Calixte’s dorm after a roommate accused him of breaking into the school’s computer network to change other students’ grades, and of spreading a rumor via e-mail that the roommate is gay.

Botsford said the search warrant affidavit presented considerable evidence that the e-mail came from Calixte’s laptop computer. But even if it did, she said, spreading such rumors is probably not illegal. Botsford also said that while breaking into BC’s computer network would be criminal activity, the affidavit supporting the warrant presented little evidence that such a break-in had taken place.

Posted on June 2, 2009 at 12:01 PM

Me on Full-Body Scanners in Airports

I’m very happy with this quote in a CNN.com story on “whole-body imaging” at airports:

Bruce Schneier, an internationally recognized security technologist, said whole-body imaging technology “works pretty well,” privacy rights aside. But he thinks the financial investment was a mistake. In a post-9/11 world, he said, he knows his position isn’t “politically tenable,” but he believes money would be better spent on intelligence-gathering and investigations.

“It’s stupid to spend money so terrorists can change plans,” he said by phone from Poland, where he was speaking at a conference. If terrorists are swayed from going through airports, they’ll just target other locations, such as a hotel in Mumbai, India, he said.

“We’d be much better off going after bad guys … and back to pre-9/11 levels of airport security,” he said. “There’s a huge ‘cover your ass’ factor in politics, but unfortunately, it doesn’t make us safer.”

I’ve written about “cover your ass” security in the past, but it’s nice to see it in the press.

Posted on May 20, 2009 at 2:34 PM

No Warrant Required for GPS Tracking

At least, according to a Wisconsin appeals court ruling:

As the law currently stands, the court said police can mount GPS on cars to track people without violating their constitutional rights—even if the drivers aren’t suspects.

Officers do not need to get warrants beforehand because GPS tracking does not involve a search or a seizure, Judge Paul Lundsten wrote for the unanimous three-judge panel based in Madison.

That means “police are seemingly free to secretly track anyone’s public movements with a GPS device,” he wrote.

The court wants the legislature to fix it:

However, the District 4 Court of Appeals said it was “more than a little troubled” by that conclusion and asked Wisconsin lawmakers to regulate GPS use to protect against abuse by police and private individuals.

I think the odds of that happening are approximately zero.

Posted on May 15, 2009 at 6:30 AM

Software Problems with a Breath Alcohol Detector

This is an excellent lesson in the security problems inherent in trusting proprietary software:

After two years of attempting to get the computer based source code for the Alcotest 7110 MKIII-C, defense counsel in State v. Chun were successful in obtaining the code, and had it analyzed by Base One Technologies, Inc.

Draeger, the manufacturer, maintained that the system was perfect, and that revealing the source code would be damaging to its business. They were right about the second part, of course, because it turned out that the code was terrible.

2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings. Nonetheless, the comments say that the values should be averaged, and they are not.

3. Results Limited to Small, Discrete Values: The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or represent the range of alcohol readings possible. This is a loss of precision in the data; of a possible twelve bits of information, only four bits are used. Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection, and this is compared against the 16 values of the fuel cell.

4. Catastrophic Error Detection Is Disabled: An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Property (a watchdog timer), and the Software Interrupt.
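The averaging and quantization flaws described above are easy to reproduce in a few lines of Python. This is my reconstruction of the behavior the report describes, not Draeger’s actual code; it shows that the sequential scheme weights the readings unequally, and that dividing a 12-bit value by 256 discards all but 16 possible results.

```python
# Reproduce two of the reported Alcotest flaws.

def sequential_average(readings):
    """Average as described: fold each new reading into the running average."""
    avg = (readings[0] + readings[1]) / 2
    for r in readings[2:]:
        avg = (avg + r) / 2
    return avg

# With four readings, the effective weights are 1/8, 1/8, 1/4, 1/2
# rather than 1/4 each -- the readings are not averaged equally.
readings = [8, 0, 0, 0]
print(sequential_average(readings))   # 1.0, but the true mean is 2.0

# Quantization: a 12-bit A/D value (0..4095) divided by 256 keeps only the
# top 4 bits, so only 16 distinct results survive out of 4096 raw values.
distinct = {raw // 256 for raw in range(4096)}
print(len(distinct))                  # 16
```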

Basically, the system was designed to return some sort of result regardless.

This is important. As we become more and more dependent on software for evidentiary and other legal applications, we need to be able to carefully examine that software for accuracy, reliability, etc. Every government contract for breath alcohol detectors needs to include the requirement for public source code. “You can’t look at our code because we don’t want you to” simply isn’t good enough.

Posted on May 13, 2009 at 2:07 PM

