Entries Tagged "cost-benefit analysis"


REAL ID Harder Than Legislators Thought

According to the Associated Press:

State motor vehicle officials nationwide who will have to carry out the Real ID Act say its authors grossly underestimated its logistical, technological and financial demands.

In a comprehensive survey obtained by The Associated Press and in follow-up interviews, officials cast doubt on the states’ ability to comply with the law on time and fretted that it will be a budget buster.

I’ve already written about REAL ID, including the obscene costs:

REAL ID is expensive. It’s an unfunded mandate: the federal government is forcing the states to spend their own money to comply with the act. I’ve seen estimates that the cost to the states of complying with REAL ID will be $120 million. That’s $120 million that can’t be spent on actual security.

According to the AP, I was way off:

Pennsylvania alone estimated a hit of up to $85 million. Washington state projected at least $46 million annually in the first several years.

Separately, a December report to Virginia’s governor pegged the potential price tag for that state as high as $169 million, with $63 million annually in successive years. Of the initial cost, $33 million would be just to redesign computing systems.

Remember, security is a trade-off. REAL ID is a bad idea primarily because the security gained is not worth the enormous expense.

See also the ACLU’s site on REAL ID.

Posted on January 13, 2006 at 1:23 PM

Automatic Lie Detector

Coming soon to airports:

Tested in Russia, the two-stage GK-1 voice analyser requires that passengers don headphones at a console and answer “yes” or “no” into a microphone to questions about whether they are planning something illicit.

The software will almost always pick up uncontrollable tremors in the voice that give away liars or those with something to hide, say its designers at Israeli firm Nemesysco.

Fascinating.

In general, I prefer security systems that are invasive yet anonymous to ones that are based on massive databases. And automatic systems that divide people into "probably fine" and "investigate a bit more" categories seem like a good use of technology. That said, I have no idea whether this system works (there is a lot of evidence that it does not), what the false positive and false negative rates are (the article states a 12% false positive rate, which is useless without a base rate or a false negative rate to give it meaning), or how easy it would be to learn how to fool the system. And in all of these trade-off discussions, the devil is in the details.
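A quoted false-positive rate means little without the base rate of actual attackers. A back-of-the-envelope sketch (the passenger volume, threat count, and false-negative rate below are made-up illustrative numbers; only the 12% figure comes from the article):

```python
# Base-rate sketch: why a bare "12% false positive rate" says almost nothing.
# The volume, threat count, and false-negative rate are illustrative
# assumptions; only the 12% false-positive rate is from the article.

passengers_per_day = 100_000   # assumed airport volume
actual_threats = 1             # assumed number of real attackers per day
false_positive_rate = 0.12     # quoted in the article
false_negative_rate = 0.10     # assumed; the article gives none

false_alarms = (passengers_per_day - actual_threats) * false_positive_rate
threats_caught = actual_threats * (1 - false_negative_rate)

print(f"innocents flagged per day: {false_alarms:.0f}")   # ~12,000
print(f"expected threats caught:  {threats_caught:.1f}")  # 0.9
```

Under these assumed numbers: roughly 12,000 secondary screenings a day to catch, in expectation, less than one attacker. That is the detail the bare 12% figure hides.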

Posted on November 21, 2005 at 8:07 AM

Hans Bethe on Security

Hans Bethe was one of the first nuclear scientists, a member of the Manhattan Project, and a political activist. In this article about him, there’s a great quote:

Sometimes insistence on 100 percent security actually impairs our security, while the bold decision—though at the time it seems to involve some risk—will give us more security in the long run.

Posted on November 15, 2005 at 12:41 PM

NIST Hash Workshop Liveblogging (4)

This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.

Lots of interesting ideas, but no real discussion about trade-offs. But it’s the trade-offs that are important. It’s easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest’s dithering trick, we need to decide if it is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I’m making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it’s the correct analysis here as well.

Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn’t be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we’ve just been too optimistic about how strong hash functions are.

The other thing we need to be doing is providing answers to developers. It’s not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They’ll need an answer in a year. They’ll need an answer in four years. Maybe the answers will be the same, and maybe they’ll be different. But if we don’t give them answers, they’ll make something up. They won’t wait for us.
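For concreteness, the "use SHA-256 today" answer looks like this in practice. A minimal sketch using Python's standard hashlib (my illustration, not part of the post):

```python
import hashlib

# Hash a message with SHA-256, the interim recommendation above.
message = b"attack at dawn"
digest = hashlib.sha256(message).hexdigest()

# A 256-bit digest renders as 64 hexadecimal characters.
assert len(digest) == 64
print(digest)
```

The point of giving developers one concrete answer is exactly this: a one-line, standard-library call they can use today, rather than something they make up themselves.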

And while it’s true that we don’t have any real theory of hash functions, and it’s true that anything we choose will be based partly on faith, we have no choice but to choose.

And finally, I think we need to stimulate research more. Whether it’s a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses…. We need a whole bunch of new hash functions to beat up; that’s how we’ll learn to design better ones.

Posted on November 1, 2005 at 11:19 AM

Real ID and Identity Theft

Reuters on the trade-offs of Real ID:

Nobody yet knows how much the Real ID Act will cost to implement or how much money Congress will provide for it. The state of Washington, which has done the most thorough cost analysis, put the bill in that state alone at $97 million in the first two years and believes it will have to raise the price of a driver’s license to $58 from $25.

On the other hand, a secure ID system could save millions in Medicare and Medicaid fraud and combat identity theft.

Why does Reuters think that a better ID card will protect against identity theft? The problem with identity theft isn’t that ID cards are forgeable; it’s that financial institutions don’t check them before authorizing transactions.

Posted on October 14, 2005 at 11:20 AM

Jamming Aircraft Navigation Near Nuclear Power Plants

The German government wants to jam aircraft navigation equipment near nuclear power plants.

This certainly could help if terrorists want to fly an airplane into a nuclear power plant, but it feels like a movie-plot threat to me. On the other hand, this could make things significantly worse if an airplane flies near the nuclear power plant by accident. My guess is that the latter happens far more often than the former.

Posted on September 29, 2005 at 6:40 AM

Secure Flight News

The TSA is not going to use commercial databases in its initial roll-out of Secure Flight, its airline screening program that matches passengers with names on the Watch List and No-Fly List. I don’t believe for a minute that they’re shelving plans to use commercial data permanently, but at least they’re delaying the process.

In other news, the report (also available here, here, and here) of the Secure Flight Privacy/IT Working Group is public. I was a member of that group, but honestly, I didn’t do any writing for the report. I had given up on the process, sick of not being able to get any answers out of TSA, and believed that the report would end up in somebody’s desk drawer, never to be seen again. I was stunned when I learned that the ASAC made the report public.

There’s a lot of stuff in the report, but I’d like to quote the section that outlines the basic questions that the TSA was unable to answer:

The SFWG found that TSA has failed to answer certain key questions about Secure Flight: First and foremost, TSA has not articulated what the specific goals of Secure Flight are. Based on the limited test results presented to us, we cannot assess whether even the general goal of evaluating passengers for the risk they represent to aviation security is a realistic or feasible one or how TSA proposes to achieve it. We do not know how much or what kind of personal information the system will collect or how data from various sources will flow through the system.

Until TSA answers these questions, it is impossible to evaluate the potential privacy or security impact of the program, including:

  • Minimizing false positives and dealing with them when they occur.
  • Misuse of information in the system.
  • Inappropriate or illegal access by persons with and without permissions.
  • Preventing use of the system and information processed through it for purposes other than airline passenger screening.

The following broadly defined questions represent the critical issues we believe TSA must address before we or any other advisory body can effectively evaluate the privacy and security impact of Secure Flight on the public.

  1. What is the goal or goals of Secure Flight? The TSA is under a Congressional mandate to match domestic airline passenger lists against the consolidated terrorist watch list. TSA has failed to specify with consistency whether watch list matching is the only goal of Secure Flight at this stage. The Secure Flight Capabilities and Testing Overview, dated February 9, 2005 (a non-public document given to the SFWG), states in the Appendix that the program is not looking for unknown terrorists and has no intention of doing so. On June 29, 2005, Justin Oberman (Assistant Administrator, Secure Flight/Registered Traveler) testified to a Congressional committee that another goal proposed for Secure Flight is its use to establish “[m]echanisms for…violent criminal data vetting.” Finally, TSA has never been forthcoming about whether it has an additional, implicit goal: the tracking of terrorism suspects (whose presence on the terrorist watch list does not necessarily signify intention to commit violence on a flight).

    While the problem of failing to establish clear goals for Secure Flight at a given point in time may arise from not recognizing the difference between program definition and program evolution, it is clearly an issue the TSA must address if Secure Flight is to proceed.

  2. What is the architecture of the Secure Flight system? The Working Group received limited information about the technical architecture of Secure Flight and none about how software and hardware choices were made. We know very little about how data will be collected, transferred, analyzed, stored or deleted. Although we are charged with evaluating the privacy and security of the system, we saw no statements of privacy policies and procedures other than Privacy Act notices published in the Federal Register for Secure Flight testing. No data management plan either for the test phase or the program as implemented was provided or discussed.
  3. Will Secure Flight be linked to other TSA applications? Linkage with other screening programs (such as Registered Traveler, Transportation Worker Identification and Credentialing (TWIC), and Customs and Border Patrol systems like U.S.-VISIT) that may operate on the same platform as Secure Flight is another aspect of the architecture and security question. Unanswered questions remain about how Secure Flight will interact with other vetting programs operating on the same platform; how it will ensure that its policies on data collection, use and retention will be implemented and enforced on a platform that also operates programs with significantly different policies in these areas; and how it will interact with the vetting of passengers on international flights.
  4. How will commercial data sources be used? One of the most controversial elements of Secure Flight has been the possible uses of commercial data. TSA has never clearly defined two threshold issues: what it means by “commercial data” and how it might use commercial data sources in the implementation of Secure Flight. TSA has never clearly distinguished among various possible uses of commercial data, which all have different implications.

    Possible uses of commercial data sometimes described by TSA include: (1) identity verification or authentication; (2) reducing false positives by augmenting passenger records indicating a possible match with data that could help distinguish an innocent passenger from someone on a watch list; (3) reducing false negatives by augmenting all passenger records with data that could suggest a match that would otherwise have been missed; (4) identifying sleepers, which itself includes: (a) identifying false identities; and (b) identifying behaviors indicative of terrorist activity. A fifth possibility has not been discussed by TSA: using commercial data to augment watch list entries to improve their fidelity. Assuming that identity verification is part of Secure Flight, what are the consequences if an identity cannot be verified with a certain level of assurance?

    It is important to note that TSA never presented the SFWG with the results of its commercial data tests. Until these test results are available and have been independently analyzed, commercial data should not be utilized in the Secure Flight program.

  5. Which matching algorithms work best? TSA never presented the SFWG with test results showing the effectiveness of algorithms used to match passenger names to a watch list. One goal of bringing watch list matching inside the government was to ensure that the best available matching technology was used uniformly. The SFWG saw no evidence that TSA compared different products and competing solutions. As a threshold matter, TSA did not describe to the SFWG its criteria for determining how the optimal matching solution would be determined. There are obvious and probably not-so-obvious tradeoffs between false positives and false negatives, but TSA did not explain how it reconciled these concerns.
  6. What is the oversight structure and policy for Secure Flight? TSA has not produced a comprehensive policy document for Secure Flight that defines oversight or governance responsibilities.
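The false-positive/false-negative trade-off raised in question 5 can be made concrete with a toy name matcher. This is a hedged sketch: the similarity function (Python's difflib) and the threshold values are my illustrative choices, not anything TSA described.

```python
from difflib import SequenceMatcher

def name_score(a: str, b: str) -> float:
    """Similarity in [0, 1] between two lowercased names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical watch-list entry and passenger name.
score = name_score("Jon Doe", "John Doe")  # about 0.93

# A low threshold catches spelling variants (fewer false negatives)
# but flags more innocents (more false positives); a high threshold
# does the reverse. Picking the cutoff IS the trade-off.
for threshold in (0.60, 0.95):
    print(f"threshold {threshold}: flagged = {score >= threshold}")
# threshold 0.6: flagged = True
# threshold 0.95: flagged = False
```

A credible evaluation would report how error rates move as this cutoff changes across competing matching products; that is the comparison the SFWG says it never saw.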

The members of the working group, and the signatories to the report, are Martin Abrams, Linda Ackerman, James Dempsey, Edward Felten, Daniel Gallington, Lauren Gelman, Steven Lilenthal, Anna Slomovic, and myself.

My previous posts about Secure Flight, and my involvement in the working group, are here, here, here, here, here, and here.

And in case you think things have gotten better, there’s a new story about how the no-fly list cost a pilot his job:

Cape Air pilot Robert Gray said he feels like he’s living a nightmare. Two months after he sued the federal government for refusing to let him take flight training courses so he could fly larger planes, he said yesterday, his situation has only worsened.

When Gray showed up for work a couple of weeks ago, he said Cape Air told him the government had placed him on its no-fly list, making it impossible for him to do his job. Gray, a Belfast native and British citizen, said the government still won’t tell him why it thinks he’s a threat.

“I haven’t been involved in any kind of terrorism, and I never committed any crime,” said Gray, 35, of West Yarmouth. He said he has never been arrested and can’t imagine what kind of secret information the government is relying on to destroy his life.

Remember what the no-fly list is. It’s a list of people who are so dangerous that they can’t be allowed to board an airplane under any circumstances, yet so innocent that they can’t be arrested—even under the provisions of the PATRIOT Act.

EDITED TO ADD: The U.S. Department of Justice Inspector General released a report last month on Secure Flight, basically concluding that the costs were out of control, and that the TSA didn’t know how much the program would cost in the future.

Here’s an article about some of the horrible problems people who have mistakenly found themselves on the no-fly list have had to endure. And another on what you can do if you find yourself on a list.

EDITED TO ADD: EPIC has received a bunch of documents about continued problems with false positives.

Posted on September 26, 2005 at 7:14 AM

Hurricane Security and Airline Security Collide

Here’s a story (quote is from the second page) where airline security is actually doing harm:

Long lines and chaos snarled evacuees when they tried to catch flights out from two of Houston’s airports. After about 100 federal security screeners failed to report to work Thursday, scores of passengers missed flights and waited for hours at sparsely monitored X-ray machines and luggage conveyors. Transportation Security Administration officials were at a loss for an explanation and scrambled to send in a team of replacement workers from Cleveland.

This isn’t an easy call, but sometimes the smartest thing to do in an emergency is to suspend security rules. Unfortunately, sometimes the bad guys count on that.

If I were in charge, I would have let people onto the airplanes. The trade-off makes sense to me.

Posted on September 23, 2005 at 9:10 PM

Cameras Catch Dry Run of 7/7 London Terrorists

Score one for security cameras:

Newly released CCTV footage shows the 7 July London bombers staged a practice run nine days before the attack.

Detectives reconstructed the bombers’ movements after studying thousands of hours of film as part of the probe into the blasts which killed 52 people.

CCTV images show three of the bombers entering Luton station, before travelling to King’s Cross station where they are also pictured.

Officers are keen to find out if the men met anyone else on the day.

See also The New York Times.

Security cameras certainly aren’t useless. I just don’t think they’re worth it.

Posted on September 21, 2005 at 12:50 PM

Research in Behavioral Risk Analysis

I am very interested in this kind of research:

Network Structure, Behavioral Considerations and Risk Management in Interdependent Security Games

Interdependent security (IDS) games model situations where each player has to determine whether or not to invest in protection or security against an uncertain event knowing that there is some chance s/he will be negatively impacted by others who do not follow suit. IDS games capture a wide variety of collective risk and decision-making problems that include airline security, corporate governance, computer network security and vaccinations against diseases. This research project will investigate the marriage of IDS models with network formation models developed from social network theory and apply these models to problems in network security. Behavioral and controlled experiments will examine how human participants actually make choices under uncertainty in IDS settings. Computational aspects of IDS models will also be examined. To encourage and induce individuals to invest in cost-effective protection measures for IDS problems, we will examine several risk management strategies designed to foster cooperative behavior that include providing risk information, communication with others, economic incentives, and tipping strategies.

The proposed research is interdisciplinary in nature and should serve as an exciting focal point for researchers in computer science, decision and management sciences, economics, psychology, risk management, and policy analysis. It promises to advance our understanding of decision-making under risk and uncertainty for problems that are commonly faced by individuals, organizations, and nations. Through advances in computational methods one should be able to apply IDS models to large-scale problems. The research will also focus on weak links in an interdependent system and suggest risk management strategies for reducing individual and societal losses in the interconnected world in which we live.

Posted on September 15, 2005 at 7:05 AM

