Entries Tagged "penetration testing"


New TSA Report

A classified 2006 TSA report on airport security has been leaked to USA Today. (Other papers are covering the story, but their articles seem to be all derived from the original USA Today article.)

There’s good news:

This year, the TSA for the first time began running covert tests every day at every checkpoint at every airport. That began partly in response to the classified TSA report showing that screeners at San Francisco International Airport were tested several times a day and found about 80% of the fake bombs.

Constant testing makes screeners “more suspicious as well as more capable of recognizing (bomb) components,” the report said. The report does not explain the high failure rates but said O’Hare’s checkpoints were too congested and too wide for supervisors to monitor screeners.

At San Francisco, “everybody realizes they are under scrutiny, being watched and tested constantly,” said Gerald Berry, president of Covenant Aviation Security, which hires and manages the San Francisco screeners. San Francisco is one of eight airports, most of them small, where screeners work for a private company instead of the TSA. The idea for constant testing came from Ed Gomez, TSA security director at San Francisco, Berry said. The tests often involve an undercover person putting a bag with a fake bomb on an X-ray machine belt, he said.

Repeated testing is good, for a whole bunch of reasons.

There’s bad news:

Howe said the increased difficulty explains why screeners at Los Angeles and Chicago O’Hare airports failed to find more than 60% of fake explosives that TSA agents tried to get through checkpoints last year.

The failure rates—about 75% at Los Angeles and 60% at O’Hare—are higher than some tests of screeners a few years ago and equivalent to other previous tests.

Sure, the tests are harder. But those are miserable numbers.

And there’s unexplainable news:

At San Diego International Airport, tests are run by passengers whom local TSA managers ask to carry a fake bomb, said screener Cris Soulia, an official in a screeners union.

Someone please tell me this doesn’t actually happen. “Hi Mr. Passenger. I’m a TSA manager. You know I’m not lying to you because of this official-looking laminated badge I have. We need you to help us test airport security. Here’s a ‘fake’ bomb that we’d like you to carry through security in your luggage. Another TSA manager will, um, meet you at your destination. Give the fake bomb to him when you land. And, by the way, what’s your mother’s maiden name?”

How in the world is this a good idea? And how hard is it to dress real TSA managers up like vacationers?

EDITED TO ADD (10/24): Here’s a story of someone being asked to carry an item through airport security at Dulles Airport.

EDITED TO ADD (10/26): TSA claims that this doesn’t happen:

TSA officials do not ask random passengers to carry fake bombs through checkpoints for testing at San Diego International Airport, or any other airport.

[…]

TSA Traveler Alert: If approached by anyone claiming to be a TSA employee asking you to take something through the checkpoint, please contact a uniformed TSA employee at the checkpoint or a law enforcement officer immediately.

Is there anyone else who has had this happen to them?

Posted on October 19, 2007 at 2:37 PM

Is Penetration Testing Worth it?

There are security experts who insist penetration testing is essential for network security, and you have no hope of being secure unless you do it regularly. And there are contrarian security experts who tell you penetration testing is a waste of time; you might as well throw your money away. Both of these views are wrong. The reality of penetration testing is more complicated and nuanced.

Penetration testing is a broad term. It might mean breaking into a network to demonstrate you can. It might mean trying to break into a network to document vulnerabilities. It might involve a remote attack, physical penetration of a data center or social engineering attacks. It might use commercial or proprietary vulnerability scanning tools, or rely on skilled white-hat hackers. It might just evaluate software version numbers and patch levels, and make inferences about vulnerabilities.
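The last approach amounts to little more than comparing an inventory of installed software against a list of versions known to be bad. Here is a minimal sketch in Python of that kind of inference; the inventory and the vulnerable-version table are made-up examples, not real advisories, and a real scanner would pull from vulnerability databases rather than a hard-coded dictionary:

    # Minimal sketch: infer likely vulnerabilities from version numbers alone.
    # Both tables below are illustrative, not real data.

    # package -> installed version, as a scanner might collect it
    inventory = {
        "openssh": "4.2",
        "apache": "2.0.54",
        "sendmail": "8.13.8",
    }

    # versions believed to carry exploitable flaws (hypothetical)
    known_vulnerable = {
        "openssh": {"4.2", "4.3"},
        "apache": {"2.0.54"},
    }

    def flag_vulnerable(inventory, known_vulnerable):
        """Return (package, version) pairs whose version is on the known-bad list."""
        return [
            (package, version)
            for package, version in inventory.items()
            if version in known_vulnerable.get(package, set())
        ]

    for package, version in flag_vulnerable(inventory, known_vulnerable):
        print(f"{package} {version}: matches a known-vulnerable release")

The point is only that this style of testing is inference, not demonstration: it tells you a vulnerable version is present, not that anyone could actually exploit it on your network.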

It’s going to be expensive, and you’ll get a thick report when the testing is done.

And that’s the real problem. You really don’t want a thick report documenting all the ways your network is insecure. You don’t have the budget to fix them all, so the document will sit around waiting to make someone look bad. Or, even worse, it’ll be discovered in a breach lawsuit. Do you really want an opposing attorney to ask you to explain why you paid to document the security holes in your network, and then didn’t fix them? Probably the safest thing you can do with the report, after you read it, is shred it.

Given enough time and money, a pen test will find vulnerabilities; there’s no point in proving it. And if you’re not going to fix all the uncovered vulnerabilities, there’s no point uncovering them. But there is a way to do penetration testing usefully. For years I’ve been saying security consists of protection, detection and response—and you need all three to have good security. Before you can do a good job with any of these, you have to assess your security. And done right, penetration testing is a key component of a security assessment.

I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.

If you think about it, penetration testing is an odd business. Is there an analogue to it anywhere else in security? Sure, militaries run these exercises all the time, but how about in business? Do we hire burglars to try to break into our warehouses? Do we attempt to commit fraud against ourselves? No, we don’t.

Penetration testing has become big business because systems are so complicated and poorly understood. We know about burglars and kidnapping and fraud, but we don’t know about computer criminals. We don’t know what’s dangerous today, and what will be dangerous tomorrow. So we hire penetration testers in the belief they can explain it.

There are two reasons why you might want to conduct a penetration test. One, you want to know whether a certain vulnerability is present because you’re going to fix it if it is. And two, you need a big, scary report to persuade your boss to spend more money. If neither is true, I’m going to save you a lot of money by giving you this free penetration test: You’re vulnerable.

Now, go do something useful about it.

This essay appeared in the March issue of Information Security, as the first half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 15, 2007 at 7:05 AM

Hacking Computers Over USB

I’ve previously written about the risks of small portable computing devices; how more and more data can be stored on them, and then lost or stolen. But there’s another risk: if an attacker can convince you to plug his USB device into your computer, he can take it over.

Plug an iPod or USB stick into a PC running Windows and the device can literally take over the machine and search for confidential documents, copy them back to the iPod or USB’s internal storage, and hide them as “deleted” files. Alternatively, the device can simply plant spyware, or even compromise the operating system. Two features that make this possible are the Windows AutoRun facility and the ability of peripherals to use something called direct memory access (DMA). The first attack vector you can and should plug; the second vector is the result of a design flaw that’s likely to be with us for many years to come.

The article has the details, but basically you can configure a file on your USB device to automatically run when it’s plugged into a computer. That file can, of course, do anything you want it to.
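For the curious, the mechanism is a plain-text autorun.inf file at the root of the media. A sketch of what such a file looks like follows; the program name is hypothetical, and whether it runs silently or merely becomes the default AutoPlay action depends on the Windows version and on how the device presents itself (for example, as a CD-ROM rather than a removable drive):

    [autorun]
    ; run this program when the media is inserted (file name is illustrative)
    open=start.exe
    ; cosmetic entries that make the prompt look harmless
    icon=drive.ico
    action=Open folder to view files
    label=Vendor Giveaway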

Recently I’ve been seeing more and more written about this attack. The Spring 2006 issue of 2600 Magazine, for example, contains a short article called “iPod Sneakiness” (unfortunately, not on line). The author suggests that you can innocently ask someone at an Internet cafe if you can plug your iPod into his computer to power it up—and then steal his passwords and critical files.

And here’s an article about someone who used this trick in a penetration test:

We figured we would try something different by baiting the same employees that were on high alert. We gathered all the worthless vendor giveaway thumb drives collected over the years and imprinted them with our own special piece of software. I had one of my guys write a Trojan that, when run, would collect passwords, logins and machine-specific information from the user’s computer, and then email the findings back to us.

The next hurdle we had was getting the USB drives in the hands of the credit union’s internal users. I made my way to the credit union at about 6 a.m. to make sure no employees saw us. I then proceeded to scatter the drives in the parking lot, smoking areas, and other areas employees frequented.

Once I seeded the USB drives, I decided to grab some coffee and watch the employees show up for work. Surveillance of the facility was worth the time involved. It was really amusing to watch the reaction of the employees who found a USB drive. You know they plugged them into their computers the minute they got to their desks.

I immediately called my guy that wrote the Trojan and asked if anything was received at his end. Slowly but surely info was being mailed back to him. I would have loved to be on the inside of the building watching as people started plugging the USB drives in, scouring through the planted image files, then unknowingly running our piece of software.

There is a defense. From the first article:

AutoRun is just a bad idea. People putting CD-ROMs or USB drives into their computers usually want to see what’s on the media, not have programs automatically run. Fortunately you can turn AutoRun off. A simple manual approach is to hold down the “Shift” key when a disk or USB storage device is inserted into the computer. A better way is to disable the feature entirely by editing the Windows Registry. There are many instructions for doing this online (just search for “disable autorun”) or you can download and use Microsoft’s TweakUI program, which is part of the Windows XP PowerToys download. With Windows XP you can also disable AutoRun for CDs by right-clicking on the CD drive icon in the Windows explorer, choosing the AutoPlay tab, and then selecting “Take no action” for each kind of disk that’s listed. Unfortunately, disabling AutoPlay for CDs won’t always disable AutoPlay for USB devices, so the registry hack is the safest course of action.
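For reference, the registry setting that governs this is the NoDriveTypeAutoRun value under the Explorer policies key; setting it to 0xFF turns AutoRun off for every drive type. Here is a minimal Python sketch of that edit (Windows only, requires administrator rights, and uses the standard winreg module; you can make the same change by hand with regedit, and you should back up the registry first):

    # Disable AutoRun for all drive types by setting the NoDriveTypeAutoRun
    # policy to 0xFF. Run as administrator; takes effect after logoff/reboot.
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

    def disable_autorun():
        key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
        try:
            # 0xFF is a bitmask covering every drive type: removable, fixed,
            # network, CD-ROM, and RAM disks.
            winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
        finally:
            winreg.CloseKey(key)

    if __name__ == "__main__":
        disable_autorun()
        print("AutoRun disabled for all drive types.")

The same value can be set under HKEY_CURRENT_USER instead if you only want to change the behavior for a single account.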

In the 1990s, the Macintosh operating system had this feature, which was removed after a virus made use of it in 1998. Microsoft needs to remove this feature as well.

EDITED TO ADD (6/12): In the penetration test, they didn’t use AutoRun.

Posted on June 8, 2006 at 1:34 PM
