Entries Tagged "economics of security"

Page 34 of 39

Howard Schmidt on Software Vulnerabilities

Howard Schmidt was misquoted in the article that spurred my rebuttal.

This essay outlines what he really thinks:

Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.

Many people know that developers are often under intense pressure to deliver more features on time and under budget. Few developers get the time to review their code for potential security vulnerabilities. When they do get the time, they often don’t have secure-coding training and lack the automated tools to prevent hackers from using hundreds of common exploit techniques to trigger malicious attacks.

So what can software vendors do? In a sense, a big part of the answer is relatively old fashioned; the developers need to be accountable to their employers and provided with incentives, better tools and proper training.

He’s against making software vendors liable for defects in their products, even though vendors in every other industry bear that kind of liability:

I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.

After all, when companies are faced with large punitive judgments, a frequent step is often to cut salaries, increase prices or even reduce employees. This is not good for anyone.

And he closes with:

In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.

That first sentence, I think, nicely sums up what’s wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense not to produce secure software products than it does to produce secure software products. Any solution needs to address that fundamental market failure, instead of simply wishing it were true.

Posted on November 8, 2005 at 7:34 AM

Preventing Identity Theft: The Living and the Dead

A company called Metacharge has rolled out an e-commerce security service in the United Kingdom. For about $2 per name, website operators can verify their customers against the UK Electoral Roll, the British Telecom directory, and a mortality database.

That’s not cheap, and the company is mainly targeting customers in high-risk industries, such as online gaming. But the economics behind this system are interesting to examine. They illustrate externalities associated with fraud and identity theft, and why leaving matters to the companies won’t fix the problem.

The mortality database is interesting. According to Metacharge, “the fastest growing form of identity theft is not phishing; it is taking the identities of dead people and using them to get credit.”

For a website, the economics are straightforward. It costs $2 to verify that a customer is alive. If the probability the customer is actually dead (and therefore fraudulent) times the average losses due to this dead customer is more than $2, this service makes sense. If it is less, then the service doesn’t. For example, if dead customers are one in ten thousand, and they cost $15,000 each, then the service is not worth it. If they cost $25,000 each, or if they occur twice as often, then it is worth it.
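
The break-even arithmetic can be written out directly. Here is a minimal sketch in Python using only the illustrative numbers from the paragraph above; as in the text, the fraud rates and losses are hypothetical:

    # Break-even analysis for the $2 mortality check, using the
    # illustrative numbers from the paragraph above.
    def expected_fraud_loss(p_fraud, loss_per_incident):
        """Expected loss per customer if no check is performed."""
        return p_fraud * loss_per_incident

    CHECK_COST = 2.00  # price of the verification service, per name

    for p, loss in [(1 / 10000, 15_000), (1 / 10000, 25_000), (2 / 10000, 15_000)]:
        e = expected_fraud_loss(p, loss)
        verdict = "worth it" if e > CHECK_COST else "not worth it"
        print(f"p={p:.4%}  loss=${loss:,}  expected loss=${e:.2f}  -> {verdict}")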

Imagine now that there is a similar service that identifies identity fraud among living people. The same economic analysis would also hold. But in this case, there’s an externality: there is an additional cost of fraud borne by the victim and not by the website. So if fraud using the identity of living customers occurs at a rate of one in ten thousand, and each one costs $15,000 to the website and another $10,000 to the victim, the website will conclude that the service is not worthwhile, even though paying for it is cheaper overall. This is why legislation is needed: to raise the cost of fraud to the websites.
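
The externality can be made just as concrete. This sketch, again using the hypothetical numbers from the paragraph above, compares the website's private calculation with the total social cost:

    # Externality: the website counts only its own losses, not the victim's.
    P_FRAUD = 1 / 10000
    SITE_LOSS = 15_000     # fraud loss borne by the website
    VICTIM_LOSS = 10_000   # additional loss borne by the impersonated victim
    CHECK_COST = 2.00

    private_expected = P_FRAUD * SITE_LOSS                  # $1.50 -- what the website sees
    social_expected = P_FRAUD * (SITE_LOSS + VICTIM_LOSS)   # $2.50 -- the full cost of the fraud

    print(f"Website's calculation: ${private_expected:.2f} < ${CHECK_COST:.2f} per name -> skip the check")
    print(f"Society's calculation: ${social_expected:.2f} > ${CHECK_COST:.2f} per name -> the check pays for itself")
    # A liability rule that shifted the victim's loss onto the website would
    # make the private calculation match the social one.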

There’s another economic trade-off. Websites have two basic opportunities to verify customers using services such as these. The first is when they sign up the customer, and the second is after some kind of non-payment. Most of the damages to the customer occur after the non-payment is referred to a credit bureau, so it would make sense to perform some extra identification checks at that point. It would certainly be cheaper to the website, as far fewer checks would be paid for. But because this second opportunity comes after the website has suffered its losses, it has no real incentive to take advantage of it. Again, economics drives security.

Posted on October 28, 2005 at 8:08 AM

Terrorists Playing Bingo in Kentucky

One of the sillier movie-plot threats I’ve seen recently:

Kentucky has been awarded a federal Homeland Security grant aimed at keeping terrorists from using charitable gaming to raise money.

The state Office of Charitable Gaming won the $36,300 grant and will use it to provide five investigators with laptop computers and access to a commercially operated law-enforcement data base, said John Holiday, enforcement director at the Office of Charitable Gaming.

The idea is to keep terrorists from playing bingo or running a charitable game to raise large amounts of cash, Holiday said.

Posted on October 25, 2005 at 3:30 PM

Scandinavian Attack Against Two-Factor Authentication

I’ve repeatedly said that two-factor authentication won’t stop phishing, because the attackers will simply modify their techniques to get around it. Here’s an example where that has happened:

Scandinavian bank Nordea was forced to shut down part of its Web banking service for 12 hours last week following a phishing attack that specifically targeted its paper-based one-time password security system.

According to press reports, the scam targeted customers that access the Nordea Sweden Web banking site using a paper-based single-use password security system.

A blog posting by Finnish security firm F-Secure says recipients of the spam e-mail were directed to bogus Web sites but were also asked to enter their account details along with the next password on their list of one-time passwords issued to them by the bank on a “scratch sheet”.

From F-Secure’s blog:

The fake mails were explaining that Nordea is introducing new security measures, which can be accessed at www.nordea-se.com or www.nordea-bank.net (fake sites hosted in South Korea).

The fake sites looked fairly real. They were asking the user for his personal number, access code and the next available scratch code. Regardless of what you entered, the site would complain about the scratch code and asked you to try the next one. In reality the bad boys were trying to collect several scratch codes for their own use.

The Register also has a story.

Two-factor authentication won’t stop identity theft, because identity theft is not an authentication problem. It’s a transaction-security problem. I’ve written about that already. Solutions need to address the transactions directly, and my guess is that they’ll be a combination of things. Some transactions will become more cumbersome. It will definitely be more cumbersome to get a new credit card. Back-end systems will be put in place to identify fraudulent transaction patterns. Look at credit card security; that’s where you’re going to find ideas for solutions to this problem.

Unfortunately, until financial institutions are liable for all the losses associated with identity theft, and not just their direct losses, we’re not going to see a lot of these solutions. I’ve written about this before as well.

We got those solutions for credit cards because Congress mandated that banks be liable for all but the first $50 of fraudulent transactions.

EDITED TO ADD: Here’s a related story. The Bank of New Zealand suspended Internet banking because of phishing concerns. Now there’s a company that is taking the threat seriously.

Posted on October 25, 2005 at 12:49 PM

Real ID and Identity Theft

Reuters on the trade-offs of Real ID:

Nobody yet knows how much the Real ID Act will cost to implement or how much money Congress will provide for it. The state of Washington, which has done the most thorough cost analysis, put the bill in that state alone at $97 million in the first two years and believes it will have to raise the price of a driver’s license to $58 from $25.

On the other hand, a secure ID system could save millions in Medicare and Medicaid fraud and combat identity theft.

Why does Reuters think that a better ID card will protect against identity theft? The problem with identity theft isn’t that ID cards are forgeable, it’s that financial institutions don’t check them before authorizing transactions.

Posted on October 14, 2005 at 11:20 AM

Tax Breaks for Good Security

Congress is talking—it’s just talking, but at least it’s talking—about giving tax breaks to companies with good cybersecurity.

The devil is in the details, and this could be a meaningless handout, but the idea is sound. Rational companies are going to protect their assets only up to their value to that company. The problem is that many of the security risks to digital assets are not risks to the company that owns them. This is an externality. So if we all need a company to protect its digital assets to some higher level, then we need to pay for that extra protection. (At least we do in a capitalist society.) We can pay through regulation or liabilities, which translates to higher prices for whatever the company does. We can pay by directly funding that extra security, either by writing a check or by reducing taxes. But we can’t expect a company to spend the extra money out of the goodness of its heart.
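
A toy model makes the argument concrete. The numbers below are entirely hypothetical and are mine, not anything Congress has proposed; the point is only to show how a tax break can close the gap between a company's private optimum and the socially desirable level of protection:

    # A company protects an asset only up to the asset's value to itself.
    # If outsiders also bear losses when it is breached, the company
    # underinvests; a subsidy can make the extra protection rational.
    P_BREACH = 0.05            # annual probability of a breach without the extra protection
    RISK_REDUCTION = 0.80      # fraction of that risk the extra protection removes
    COMPANY_LOSS = 100_000     # the company's own loss from a breach
    EXTERNAL_LOSS = 400_000    # losses borne by customers and other third parties
    PROTECTION_COST = 10_000   # annual cost of the extra protection

    private_benefit = P_BREACH * RISK_REDUCTION * COMPANY_LOSS                   # $4,000
    social_benefit = P_BREACH * RISK_REDUCTION * (COMPANY_LOSS + EXTERNAL_LOSS)  # $20,000
    tax_break_needed = max(0.0, PROTECTION_COST - private_benefit)               # $6,000

    print(f"Private benefit ${private_benefit:,.0f} < cost ${PROTECTION_COST:,.0f}: a rational company won't pay")
    print(f"Social benefit  ${social_benefit:,.0f} > cost: society wants the protection anyway")
    print(f"Smallest tax break that flips the company's decision: ${tax_break_needed:,.0f}")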

Posted on October 13, 2005 at 8:02 AM

RFID Car Keys

RFID car keys (subscription required) are becoming more popular. Since these devices broadcast a unique serial number, it’s only a matter of time before a significant percentage of the population can be tracked with them.

Lexus has made what it calls the “SmartAccess” keyless-entry system standard on its new IS sedans, designed to compete with German cars like the BMW 3 series or the Audi A4, as well as rivals such as the Infiniti G35 or the U.S.-made Cadillac CTS. BMW offers what it calls “keyless go” as an option on the new 3 series, and on its higher-priced 5, 6 and 7 series sedans.

Volkswagen AG’s Audi brand offers keyless-start systems on its A6 and A8 sedans, but not yet on U.S.-bound A4s. Cadillac’s new STS sedan, big brother to the CTS, also offers a pushbutton start.

Starter buttons have a racy flair—European sports cars and race cars used them in the past. The proliferation of starter buttons in luxury sedans has its roots in theft protection. An increasing number of cars now come with theft-deterrent systems that rely on a chip in the key fob that broadcasts a code to a receiver in the car. If the codes don’t match, the car won’t start.

Cryptography can be used to make these devices anonymous, but there’s no business reason for automobile manufacturers to field such a system. Once again, the economic barriers to security are far greater than the technical ones.
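
To illustrate what such a scheme could look like, here is a minimal challenge-response sketch in Python, in which the fob never broadcasts a fixed serial number; it only answers a fresh random challenge with a keyed MAC. This is a sketch of the general technique, not any manufacturer's actual protocol:

    import hashlib
    import hmac
    import os
    import secrets

    # Long-term secret provisioned into both the car and its key fob.
    SHARED_KEY = os.urandom(32)

    def car_issue_challenge() -> bytes:
        """The car broadcasts a fresh random challenge; it reveals nothing about the fob."""
        return secrets.token_bytes(16)

    def fob_respond(key: bytes, challenge: bytes) -> tuple[bytes, bytes]:
        """The fob adds its own nonce and returns a keyed MAC instead of a fixed
        serial number, so repeated or replayed challenges get fresh, unlinkable answers."""
        fob_nonce = secrets.token_bytes(16)
        tag = hmac.new(key, challenge + fob_nonce, hashlib.sha256).digest()
        return fob_nonce, tag

    def car_verify(key: bytes, challenge: bytes, fob_nonce: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, challenge + fob_nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    challenge = car_issue_challenge()
    nonce, tag = fob_respond(SHARED_KEY, challenge)
    print("Engine start authorized:", car_verify(SHARED_KEY, challenge, nonce, tag))
    # An eavesdropper sees only values that change on every exchange, so there
    # is no constant identifier to track the owner with.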

Posted on October 5, 2005 at 8:13 AM

Secure Flight News

The TSA is not going to use commercial databases in its initial roll-out of Secure Flight, its airline screening program that matches passengers with names on the Watch List and No-Fly List. I don’t believe for a minute that they’re shelving plans to use commercial data permanently, but at least they’re delaying the process.

In other news, the report (also available here, here, and here) of the Secure Flight Privacy/IT Working Group is public. I was a member of that group, but honestly, I didn’t do any writing for the report. I had given up on the process, sick of not being able to get any answers out of TSA, and believed that the report would end up in somebody’s desk drawer, never to be seen again. I was stunned when I learned that the ASAC made the report public.

There’s a lot of stuff in the report, but I’d like to quote the section that outlines the basic questions that the TSA was unable to answer:

The SFWG found that TSA has failed to answer certain key questions about Secure Flight: First and foremost, TSA has not articulated what the specific goals of Secure Flight are. Based on the limited test results presented to us, we cannot assess whether even the general goal of evaluating passengers for the risk they represent to aviation security is a realistic or feasible one or how TSA proposes to achieve it. We do not know how much or what kind of personal information the system will collect or how data from various sources will flow through the system.

Until TSA answers these questions, it is impossible to evaluate the potential privacy or security impact of the program, including:

  • Minimizing false positives and dealing with them when they occur.
  • Misuse of information in the system.
  • Inappropriate or illegal access by persons with and without permissions.
  • Preventing use of the system and information processed through it for purposes other than airline passenger screening.

The following broadly defined questions represent the critical issues we believe TSA must address before we or any other advisory body can effectively evaluate the privacy and security impact of Secure Flight on the public.

  1. What is the goal or goals of Secure Flight? The TSA is under a Congressional mandate to match domestic airline passenger lists against the consolidated terrorist watch list. TSA has failed to specify with consistency whether watch list matching is the only goal of Secure Flight at this stage. The Secure Flight Capabilities and Testing Overview, dated February 9, 2005 (a non-public document given to the SFWG), states in the Appendix that the program is not looking for unknown terrorists and has no intention of doing so. On June 29, 2005, Justin Oberman (Assistant Administrator, Secure Flight/Registered Traveler) testified to a Congressional committee that “Another goal proposed for Secure Flight is its use to establish ‘Mechanisms for…violent criminal data vetting.’” Finally, TSA has never been forthcoming about whether it has an additional, implicit goal: the tracking of terrorism suspects (whose presence on the terrorist watch list does not necessarily signify intention to commit violence on a flight).

    While the problem of failing to establish clear goals for Secure Flight at a given point in time may arise from not recognizing the difference between program definition and program evolution, it is clearly an issue the TSA must address if Secure Flight is to proceed.

  2. What is the architecture of the Secure Flight system? The Working Group received limited information about the technical architecture of Secure Flight and none about how software and hardware choices were made. We know very little about how data will be collected, transferred, analyzed, stored or deleted. Although we are charged with evaluating the privacy and security of the system, we saw no statements of privacy policies and procedures other than Privacy Act notices published in the Federal Register for Secure Flight testing. No data management plan either for the test phase or the program as implemented was provided or discussed.
  3. Will Secure Flight be linked to other TSA applications? Linkage with other screening programs (such as Registered Traveler, Transportation Worker Identification and Credentialing (TWIC), and Customs and Border Patrol systems like U.S.-VISIT) that may operate on the same platform as Secure Flight is another aspect of the architecture and security question. Unanswered questions remain about how Secure Flight will interact with other vetting programs operating on the same platform; how it will ensure that its policies on data collection, use and retention will be implemented and enforced on a platform that also operates programs with significantly different policies in these areas; and how it will interact with the vetting of passengers on international flights.
  4. How will commercial data sources be used? One of the most controversial elements of Secure Flight has been the possible uses of commercial data. TSA has never clearly defined two threshold issues: what it means by “commercial data” and how it might use commercial data sources in the implementation of Secure Flight. TSA has never clearly distinguished among various possible uses of commercial data, which all have different implications.

    Possible uses of commercial data sometimes described by TSA include: (1) identity verification or authentication; (2) reducing false positives by augmenting passenger records indicating a possible match with data that could help distinguish an innocent passenger from someone on a watch list; (3) reducing false negatives by augmenting all passenger records with data that could suggest a match that would otherwise have been missed; (4) identifying sleepers, which itself includes: (a) identifying false identities; and (b) identifying behaviors indicative of terrorist activity. A fifth possibility has not been discussed by TSA: using commercial data to augment watch list entries to improve their fidelity. Assuming that identity verification is part of Secure Flight, what are the consequences if an identity cannot be verified with a certain level of assurance?

    It is important to note that TSA never presented the SFWG with the results of its commercial data tests. Until these test results are available and have been independently analyzed, commercial data should not be utilized in the Secure Flight program.

  5. Which matching algorithms work best? TSA never presented the SFWG with test results showing the effectiveness of algorithms used to match passenger names to a watch list. One goal of bringing watch list matching inside the government was to ensure that the best available matching technology was used uniformly. The SFWG saw no evidence that TSA compared different products and competing solutions. As a threshold matter, TSA did not describe to the SFWG its criteria for determining how the optimal matching solution would be determined. There are obvious and probably not-so-obvious tradeoffs between false positives and false negatives, but TSA did not explain how it reconciled these concerns.
  6. What is the oversight structure and policy for Secure Flight? TSA has not produced a comprehensive policy document for Secure Flight that defines oversight or governance responsibilities.

The members of the working group, and the signatories to the report, are Martin Abrams, Linda Ackerman, James Dempsey, Edward Felten, Daniel Gallington, Lauren Gelman, Steven Lilenthal, Anna Slomovic, and myself.

My previous posts about Secure Flight, and my involvement in the working group, are here, here, here, here, here, and here.

And in case you think things have gotten better, there’s a new story about how the no-fly list cost a pilot his job:

Cape Air pilot Robert Gray said he feels like he’s living a nightmare. Two months after he sued the federal government for refusing to let him take flight training courses so he could fly larger planes, he said yesterday, his situation has only worsened.

When Gray showed up for work a couple of weeks ago, he said Cape Air told him the government had placed him on its no-fly list, making it impossible for him to do his job. Gray, a Belfast native and British citizen, said the government still won’t tell him why it thinks he’s a threat.

“I haven’t been involved in any kind of terrorism, and I never committed any crime,” said Gray, 35, of West Yarmouth. He said he has never been arrested and can’t imagine what kind of secret information the government is relying on to destroy his life.

Remember what the no-fly list is. It’s a list of people who are so dangerous that they can’t be allowed to board an airplane under any circumstances, yet so innocent that they can’t be arrested—even under the provisions of the PATRIOT Act.

EDITED TO ADD: The U.S. Department of Justice Inspector General released a report last month on Secure Flight, basically concluding that the costs were out of control, and that the TSA didn’t know how much the program would cost in the future.

Here’s an article about some of the horrible problems people who have mistakenly found themselves on the no-fly list have had to endure. And another on what you can do if you find yourself on a list.

EDITED TO ADD: EPIC has received a bunch of documents about continued problems with false positives.

Posted on September 26, 2005 at 7:14 AM

Research in Behavioral Risk Analysis

I am very interested in this kind of research:

Network Structure, Behavioral Considerations and Risk Management in Interdependent Security Games

Interdependent security (IDS) games model situations where each player has to determine whether or not to invest in protection or security against an uncertain event knowing that there is some chance s/he will be negatively impacted by others who do not follow suit. IDS games capture a wide variety of collective risk and decision-making problems that include airline security, corporate governance, computer network security and vaccinations against diseases. This research project will investigate the marriage of IDS models with network formation models developed from social network theory and apply these models to problems in network security. Behavioral and controlled experiments will examine how human participants actually make choices under uncertainty in IDS settings. Computational aspects of IDS models will also be examined. To encourage and induce individuals to invest in cost-effective protection measures for IDS problems, we will examine several risk management strategies designed to foster cooperative behavior that include providing risk information, communication with others, economic incentives, and tipping strategies.

The proposed research is interdisciplinary in nature and should serve as an exciting focal point for researchers in computer science, decision and management sciences, economics, psychology, risk management, and policy analysis. It promises to advance our understanding of decision-making under risk and uncertainty for problems that are commonly faced by individuals, organizations, and nations. Through advances in computational methods one should be able to apply IDS models to large-scale problems. The research will also focus on weak links in an interdependent system and suggest risk management strategies for reducing individual and societal losses in the interconnected world in which we live.
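
The interdependence is easiest to see in the two-player version of the model. The sketch below uses toy parameter values of my own choosing (they are not from this project) and follows the standard interdependent-security setup: each player's expected cost depends on both players' decisions, so protecting yourself pays off only if your neighbor protects himself too:

    # Two-player interdependent-security (IDS) game with toy parameters.
    # Each player decides whether to invest in protection at cost C. Investing
    # removes the player's own direct risk, but a loss can still spread from an
    # unprotected neighbor with probability Q.
    C = 190       # cost of investing in protection
    P = 0.20      # probability of a direct loss if you are unprotected
    Q = 0.10      # probability a loss spreads from an unprotected neighbor
    L = 1000      # size of the loss

    def expected_cost(i_invest: bool, other_invests: bool) -> float:
        direct = 0.0 if i_invest else P * L
        # Contagion only matters if the neighbor is unprotected, and (in this
        # simple version) only if I have not already been hit directly.
        survive_direct = 1.0 if i_invest else (1 - P)
        contagion = 0.0 if other_invests else survive_direct * Q * L
        return (C if i_invest else 0.0) + direct + contagion

    for mine in (True, False):
        for theirs in (True, False):
            print(f"I invest={mine!s:<5}  other invests={theirs!s:<5}  "
                  f"my expected cost = {expected_cost(mine, theirs):7.2f}")
    # With these numbers, investing pays only if the other player also invests:
    # both-invest and neither-invest are both stable outcomes, which is the
    # coordination problem this research studies.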

Posted on September 15, 2005 at 7:05 AM

Cameras in the New York City Subways

New York City is spending $212 million on surveillance technology: 1,000 video cameras and 3,000 motion sensors for the city’s subways, bridges, and tunnels.

Why? Why, given that cameras didn’t stop the London train bombings? Why, when there is no evidence that cameras are effective at reducing either terrorism or crime, and every reason to believe that they are ineffective?

One reason is that it’s the “movie plot threat” of the moment. (You can hear the echoes of the movie plots when you read the various quotes in the news stories.) The terrorists bombed a subway in London, so we need to defend our subways. The other reason is that New York City officials are erring on the side of caution. If nothing happens, then it was only money. But if something does happen, they won’t keep their jobs unless they can show they did everything possible. And technological solutions just make everyone feel better.

If I had $212 million to spend to defend against terrorism in the U.S., I would not spend it on cameras in the New York City subways. If I had $212 million to defend New York City against terrorism, I would not spend it on cameras in the subways. This is nothing more than security theater against a movie plot threat.

On the plus side, the money will also pay for a new radio communications system for the subway police, and will enable cell phone service in underground stations, though not in tunnels.

Posted on August 24, 2005 at 1:10 PM

