Entries Tagged "physical security"


Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM

Conversation with Kip Hawley, TSA Administrator (Part 5)

This is Part 5 of a five-part series. Link to whole thing.

BS: So far, we’ve only talked about passengers. What about airport workers? Nearly one million workers move in and out of airports every day without ever being screened. The JFK plot, as laughably unrealistic as it was, highlighted the security risks of airport workers. As with any security problem, we need to secure the weak links, rather than make already strong links stronger. What about airport employees, delivery vehicles, and so on?

KH: I totally agree with your point about a strong base level of security everywhere and not creating large gaps by over-focusing on one area. This is especially true with airport employees. We do background checks on all airport employees who have access to the sterile area. These employees are in the same places doing the same jobs day after day, so when someone does something out of the ordinary, it immediately stands out. They serve as an additional set of eyes and ears throughout the airport.

Even so, we should do more on airport employees and my House testimony of April 19 gives details of where we’re heading. The main point is that everything you need for an attack is already inside the perimeter of an airport. For example, why take lighters from people who work with blowtorches in facilities with millions of gallons of jet fuel?

You could perhaps feel better by setting up employee checkpoints at entry points, but you’d hassle a lot of people at great cost with minimal additional benefit, and a smart, patient terrorist could find a way to beat you. Today’s random, unpredictable screenings that can and do occur everywhere, all the time (including delivery vehicles, etc.) are harder to defeat. With the latter, you make it impossible to engineer an attack; with the former, you give the blueprint for exactly that.

BS: There’s another reason to screen pilots and flight attendants: they go through the same security lines as passengers. People have to remember that it’s not pilots being screened, it’s people dressed as pilots. You either have to implement a system to verify that people dressed as pilots are actual pilots, or just screen everybody. The latter choice is far easier.

I want to ask you about general philosophy. Basically, there are three broad ways of defending airplanes: preventing bad people from getting on them (ID checks), preventing bad objects from getting on them (passenger screening, baggage screening), and preventing bad things from happening on them (reinforcing the cockpit door, sky marshals). The first one seems to be a complete failure, the second one is spotty at best. I’ve always been a fan of the third. Any future developments in that area?

KH: You are too eager to discount the first—stopping bad people from getting on planes. That is the most effective! Don’t forget about all the intel work done partnering with other countries to stop plots before they get here (UK liquids, NY subway), all the work done to keep them out either through no-flys (at least several times a month) or by Customs & Border Protection on their way in, and law enforcement once they are here (Ft. Dix). Then, you add the behavior observation (both uniformed and not) and identity validation (as we take that on) and that’s all before they get to the checkpoint.

The screening-for-things part, we’ve discussed, so I’ll jump to in-air measures. Reinforced, locked cockpit doors and air marshals are indeed huge upgrades since 9/11. Along the same lines, you have to consider the role of the engaged flight crew and passengers—they are quick to give a heads-up about suspicious behavior and they can, and do, take decisive action when threatened. Also, there are thousands of flights covered by pilots who are qualified as law enforcement and are armed, as well as agents from other government entities, like the Secret Service and FBI, who provide coverage. There is also a fair amount of communication with the flight deck during flights if anything comes up en route—either in the aircraft or if we get information that would be of interest to them. That allows “quiet” diversions or other preventive measures. Training is, of course, important too. Pilots need to know what to do in the event of a missile sighting or other event, and need to know what we are going to do in different situations. Other things coming: better air-to-ground communications for air marshals and flight information, including, possibly, video.

So, when you boil it down, keeping the bomb off the plane is the number one priority. A terrorist has to know that once that door closes, he or she is locked into a confined space with dozens, if not hundreds, of zero-tolerance people, some of whom may be armed with firearms, not to mention the memory of United Flight 93.

BS: I’ve read repeated calls to privatize airport security: to return it to the way it was pre-9/11. Personally, I think it’s a bad idea, but I’d like your opinion on the question. And regardless of what you think should happen, do you think it will happen?

KH: From an operational security point of view, I think it works both ways. So it is not a strategic issue for me.

SFO, our largest private airport, has excellent security and is on a par with its federalized counterparts (in fact, I am on a flight from there as I write this). One current federalized advantage is that we can surge resources around the system with no notice; essentially, the ability to move from anywhere to anywhere and mix TSOs with federal air marshals in different force packages. We would need to be sure we don’t lose that interchangeability if we were to expand privatized screening.

I don’t see a major security or economic driver that would push us to large-scale privatization. Economically, the current cost-plus model makes it a better deal for the government in smaller airports than in bigger. So, maybe more small airports will privatize. If Congress requires collective bargaining for our TSOs, that will impose an additional overhead cost of about $500 million, which would shift the economic balance significantly toward privatized screening. But unless that happens, I don’t see major change in this area.

BS: Last question. I regularly criticize overly specific security measures, because forcing the terrorists to make minor modifications in their tactics doesn’t make us any safer. We’ve talked about specific airline threats, but what about airplanes as a specific threat? On the one hand, if we secure our airlines and the terrorists all decide instead to bomb shopping malls, we haven’t improved our security very much. On the other hand, airplanes make particularly attractive targets for several reasons. One, they’re considered national symbols. Two, they’re a common and important travel vehicle, and are deeply embedded throughout our economy. Three, they travel to distant places where the terrorists are. And four, the failure mode is severe: a small bomb drops the plane out of the sky and kills everyone. I don’t expect you to give back any of your budget, but when do we have “enough” airplane security as compared with the rest of our nation’s infrastructure?

KH: Airplanes are a high-profile target for terrorists for all the reasons you cited. The reason we have the focus we do on aviation is because of the effect the airline system has on our country, both economically and psychologically. We do considerable work (through grants and voluntary agreements) to ensure the safety of surface transportation, but it’s less visible to the public because people other than ones in TSA uniforms are taking care of that responsibility.

We look at the aviation system as one component in a much larger network that also includes freight rail, mass transit, highways, etc. And that’s just in the U.S. Then you add the world’s transportation sectors—it’s all about the network.

The only components that require specific security measures are the critical points of failure—and they have to be protected at virtually any cost. It doesn’t matter which individual part of the network is attacked—what matters is that the network as a whole is resilient enough to operate even with losing one or more components.

The network approach allows various transportation modes to benefit from our layers of security. Take our first layer: intel. It is fundamental to our security program to catch terrorists long before they get to their target, and even better if we catch them before they get into our country. Our intel operation works closely with other international and domestic agencies, and that information and analysis benefits all transportation modes.

Dogs have proven very successful at detecting explosives. They work in airports and they work in mass transit venues as well. As we test and pilot technologies like millimeter wave in airports, we assess their viability in other transportation modes, and vice versa.

To get back to your question, we’re not at the point where we can say “enough” for aviation security. But we’re also aware of the attractiveness of other modes and continue to use the network to share resources and lessons learned.

BS: Thank you very much for your time. I appreciate both your time and your candor.

KH: I enjoyed the exchange and appreciated your insights. Thanks for the opportunity.

Posted on August 3, 2007 at 6:12 AM

Security Hole at Phoenix Airport

The news:

We’ve discovered a 4.5 hour time frame each night when virtually anything can be brought into the secure side of Phoenix Sky Harbor Airport. There’s no metal detector, no X-ray machine, and it’s apparently not a problem.

Afraid to show her face, one longtime Sky Harbor employee talks about the security most people don’t see.

Lisa Fletcher: “You’re telling me Sky Harbor’s not safe?”

Employee: “I’m telling you Sky Harbor’s not safe and hasn’t been for a long time.”

It’s what we discovered in the middle of the night—TSA agents going away, and security guards taking over. It’s 4.5 hours—every night—when an employee badge becomes an all-access pass.

I have mixed feelings about this story. On the one hand, it’s a big security hole that not everyone knew was there. On the other hand, airport employees are allowed to bring stuff in and out of airports without screening all the time. So yes, the airports aren’t secure—but they never have been, so what’s the big deal?

The real issue here is that people don’t understand that an airport is a complex system and that securing it means more than passenger screening.

Posted on August 2, 2007 at 11:35 AM

Transporting a $1.9M Rare Coin

Excellent story of security by obscurity:

Feigenbaum put the dime, encased in a 3-inch-square block of plastic, in his pocket and, accompanied by a security guard, drove in an ordinary sedan directly to San Jose airport to catch the red-eye to Newark.

The overnight flight, he said, was the only way to make sure the dime would be in New York by the time the buyer’s bank opened in the morning. People who pay $1.9 million for dimes do not like to be kept waiting for them.

Feigenbaum had purchased a coach ticket, to avoid suspicion, but found himself upgraded to first class. That was a worry, because people in flip-flops, T-shirts and grubby jeans do not regularly ride in first class. But it would have been more suspicious to decline a free upgrade. So Feigenbaum forced himself to sit in first class, where he found himself to be the only passenger in flip-flops.

He was too nervous to sleep, he said. He did not watch the in-flight movie, which was “Firehouse Dog.” He turned down a Reuben sandwich and sensibly declined all offers of alcoholic beverages.

Shortly after boarding the plane, he transferred the dime from his pants pocket to his briefcase.

“I was worried that the dime might fall out of my pocket while I was sitting down,” Feigenbaum said.

All across the country, Feigenbaum kept checking to make sure the dime was safe by reaching into his briefcase to feel for it. Feigenbaum did not actually take the dime out of his briefcase, as it is suspicious to stare at dimes.

This isn’t the first time security through obscurity was employed to transport a very small and very valuable object. From Beyond Fear, pp 211-212:

At 3,106 carats, a little under a pound and a half, the Cullinan Diamond was the largest uncut diamond ever discovered. It was extracted from the earth at the Premier Mine, near Pretoria, South Africa, in 1905. Appreciating the literal enormity of the find, the Transvaal government bought the diamond as a gift for King Edward VII. Transporting the stone to England was a huge security problem, of course, and there was much debate on how best to do it. Detectives were sent from London to guard it on its journey. News leaked that a certain steamer was carrying it, and the presence of the detectives confirmed this. But the diamond on that steamer was a fake. Only a few people knew of the real plan; they packed the Cullinan in a small box, stuck a three-shilling stamp on it, and sent it to England anonymously by unregistered parcel post.

Like all security measures, security by obscurity has its place. I wrote a lot more about the general concepts in this 2002 essay.

Posted on July 30, 2007 at 4:30 PM

Cocktail Condoms

They’re protective covers that go over your drink and “protect” against someone trying to slip a Mickey Finn (or whatever they’re called these days):

The concept behind the cocktail cover is fairly simple. About the size of a coaster, it can be used to cap a drink that goes unattended. When a person returns to a beverage, there is a layer that can be pulled back, leaving a thin sheath protecting the cocktail. That can be punctured with a straw or pulled off entirely—either way the drinker will know that the cocktail has not been tampered with.

I’m sure there are many ways to defeat this security device if you’re so inclined: a syringe, affixing a new cover after you tamper with the drink, and so on. And this is exactly the sort of rare risk we’re likely to overreact to. But to me, the most interesting aspect of this story is the agenda. If these things become common, it won’t be because of security. It will be because of advertising:

Barry said that companies could advertise on the cocktail covers, likely covering the cost of production. Each cover, he said, costs less than 10 cents to make.

Posted on June 25, 2007 at 6:25 AM

