Entries Tagged "economics of security"

Page 27 of 39

Wal-Mart Stays Open During Bomb Scare

This is interesting: A Wal-Mart store in Mitchell, South Dakota receives a bomb threat. The store managers decide not to evacuate while the police search for the bomb. Presumably, they decided that the revenue lost during an evacuation wasn't worth the additional security it would provide:

During the nearly two-hour search Wal-Mart officials opted not to evacuated the busy discount store even though police recomended [sic] they do so. Wal-Mart officials said the call was a hoax and not a threat.

I think this is a good sign. It shows that people are thinking rationally about security trade-offs, and not thoughtlessly being terrorized.

Remember, though: security trade-offs are based on agenda. From the perspective of the Wal-Mart managers, the store’s revenues are the most important; most of the risks of the bomb threat are externalities.

Of course, the store employees have a different agenda—there is no upside to staying open, and only a downside due to the additional risk—and they didn’t like the decision:

The incident has family members of Wal-Mart employees criticizing store officials for failing to take police’s recommendation to evacuate.

Voorhees has worked at the Mitchell discount chain since Wal-Mart Supercenter opened in 2001. Her daughter, Charlotte Goode, 36, said Voorhees called her Sunday, crying and upset as she relayed the story.

“It’s right before Christmas. They were swamped with people,” she said. “To me, they endangerd [sic] the community, customers and associates. They put making a buck ahead of public safety.”

Posted on December 28, 2006 at 1:32 PM

A Cost Analysis of Windows Vista Content Protection

Peter Gutmann’s “A Cost Analysis of Windows Vista Content Protection” is fascinating reading:

Executive Summary

Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called “premium content”, typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it’s not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server). This document analyses the cost involved in Vista’s content protection, and the collateral damage that this incurs throughout the computer industry.

Executive Executive Summary

The Vista Content Protection specification could very well constitute the longest suicide note in history.

It contains stuff like:

Denial-of-Service via Driver Revocation

Once a weakness is found in a particular driver or device, that driver will have its signature revoked by Microsoft, which means that it will cease to function (details on this are a bit vague here, presumably some minimum functionality like generic 640×480 VGA support will still be available in order for the system to boot). This means that a report of a compromise of a particular driver or device will cause all support for that device worldwide to be turned off until a fix can be found. Again, details are sketchy, but if it’s a device problem then presumably the device turns into a paperweight once it’s revoked. If it’s an older device for which the vendor isn’t interested in rewriting their drivers (and in the fast-moving hardware market most devices enter “legacy” status within a year of two of their replacement models becoming available), all devices of that type worldwide become permanently unusable.

Read the whole thing.

And here’s commentary on the paper.

Posted on December 26, 2006 at 1:56 PM

Automated Targeting System

If you’ve traveled abroad recently, you’ve been investigated. You’ve been assigned a score indicating what kind of terrorist threat you pose. That score is used by the government to determine the treatment you receive when you return to the U.S. and for other purposes as well.

Curious about your score? You can’t see it. Interested in what information was used? You can’t know that. Want to clear your name if you’ve been wrongly categorized? You can’t challenge it. Want to know what kind of rules the computer is using to judge you? That’s secret, too. So is when and how the score will be used.

U.S. customs agencies have been quietly operating this system for several years. Called Automated Targeting System, it assigns a “risk assessment” score to people entering or leaving the country, or engaging in import or export activity. This score, and the information used to derive it, can be shared with federal, state, local and even foreign governments. It can be used if you apply for a government job, grant, license, contract or other benefit. It can be shared with nongovernmental organizations and individuals in the course of an investigation. In some circumstances private contractors can get it, even those outside the country. And it will be saved for 40 years.

Little is known about this program. Its bare outlines were disclosed in the Federal Register in October. We do know that the score is partially based on details of your flight record—where you’re from, how you bought your ticket, where you’re sitting, any special meal requests—or on motor vehicle records, as well as on information from crime, watch-list and other databases.

Civil liberties groups have called the program Kafkaesque. But I have an even bigger problem with it. It’s a waste of money.

The idea of feeding a limited set of characteristics into a computer, which then somehow divines a person’s terrorist leanings, is farcical. Uncovering terrorist plots requires intelligence and investigation, not large-scale processing of everyone.

Additionally, any system like this will generate so many false alarms as to be completely unusable. In 2005 Customs & Border Protection processed 431 million people. Even assuming an unrealistically accurate model that identifies terrorists (and innocents) with 99.9% accuracy, that’s still 431,000 false alarms annually.
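The base-rate arithmetic behind that number is worth making explicit. A minimal back-of-the-envelope sketch, using only the figures from the text (431 million travelers, a hypothetical 99.9%-accurate classifier):

```python
# Back-of-the-envelope false-alarm estimate for a screening system.
# Figures from the text: 431 million travelers processed in 2005, and a
# hypothetical classifier that is 99.9% accurate on innocents, i.e. a
# 0.1% false-positive rate.

travelers = 431_000_000
false_positive_rate = 1 - 0.999  # 0.1% of innocents wrongly flagged

false_alarms = round(travelers * false_positive_rate)
print(f"{false_alarms:,} innocent travelers flagged per year")
```

Note that this calculation doesn't even require estimating the number of real terrorists: with so few of them in the population, essentially every alarm the system raises is a false one.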

The number of false alarms will be much higher than that. The no-fly list is filled with inaccuracies; we’ve all read about innocent people named David Nelson who can’t fly without hours-long harassment. Airline data, too, are riddled with errors.

The odds of this program’s being implemented securely, with adequate privacy protections, are not good. Last year I participated in a government working group to assess the security and privacy of a similar program developed by the Transportation Security Administration, called Secure Flight. After five years and $100 million spent, the program still can’t achieve the simple task of matching airline passengers against terrorist watch lists.

In 2002 we learned about yet another program, called Total Information Awareness, for which the government would collect information on every American and assign him or her a terrorist risk score. Congress found the idea so abhorrent that it halted funding for the program. Two years ago, and again this year, Secure Flight was also banned by Congress until it could pass a series of tests for accuracy and privacy protection.

In fact, the Automated Targeting System is arguably illegal, as well (a point several congressmen made recently); all recent Department of Homeland Security appropriations bills specifically prohibit the department from using profiling systems against persons not on a watch list.

There is something un-American about a government program that uses secret criteria to collect dossiers on innocent people and shares that information with various agencies, all without any oversight. It’s the sort of thing you’d expect from the former Soviet Union or East Germany or China. And it doesn’t make us any safer from terrorism.

This essay, without the links, was published in Forbes. They also published a rebuttal by William Baldwin, although it doesn’t seem to rebut any of the actual points.

Here’s an odd division of labor: a corporate data consultant argues for more openness, while a journalist favors more secrecy.

It’s only odd if you don’t understand security.

Posted on December 22, 2006 at 11:38 AM

Sneaking into Airports

The stories keep getting better. Here’s someone who climbs a fence at the Raleigh-Durham Airport, boards a Delta plane, and hangs out for a bunch of hours.

Best line of the article:

“It blows my mind that you can’t get 3.5 ounces of toothpaste on a plane,” he said, “yet somebody can sneak on a plane and take a nap.”

Exactly. We’re spending millions enhancing passenger screening—new backscatter X-ray machines, confiscating liquids—and we ignore the other, less secure, paths onto airplanes. It’s idiotic, that’s what it is.

Posted on December 20, 2006 at 1:17 PM

Cybercrime Hype Alert

It seems to be the season for cybercrime hype. First, we have this article from CNN, which seems to have no actual news:

Computer hackers will open a new front in the multi-billion pound “cyberwar” in 2007, targeting mobile phones, instant messaging and community Web sites such as MySpace, security experts predict.

As people grow wise to email scams, criminal gangs will find new ways to commit online fraud, sell fake goods or steal corporate secrets.

And next, this article, which claims that criminal organizations are paying student members to get IT degrees:

The most successful cyber crime gangs were based on partnerships between those with the criminals skills and contacts and those with the technical ability, said Mr Day.

“Traditional criminals have the ability to move funds and use all of the background they have,” he said, “but they don’t have the technical expertise.”

As the number of criminal gangs looking to move into cyber crime expanded, it got harder to recruit skilled hackers, said Mr Day. This has led criminals to target university students all around the world.

“Some students are being sponsored through their IT degree,” said Mr Day. Once qualified, the graduates go to work for the criminal gangs.

[…]

The aura of rebellion the name conjured up helped criminals ensnare children as young as 14, suggested the study.

By trawling websites, bulletin boards and chat rooms that offer hacking tools, cracks or passwords for pirated software, criminal recruiters gather information about potential targets.

Once identified, young hackers are drawn in by being rewarded for carrying out low-level tasks such as using a network of hijacked home computers, a botnet, to send out spam.

The low risk of being caught and the relatively high-rewards on offer helped the criminal gangs to paint an attractive picture of a cyber criminal’s life, said Mr Day.

As youngsters are drawn in the stakes are raised and they are told to undertake increasingly risky jobs.

Criminals targeting children—that’s sure to peg anyone’s hype-meter.

To be sure, I don’t want to minimize the threat of cybercrime. Nor do I want to minimize the threat of organized cybercrime. There are more and more criminals prowling the net, and cybercrime has moved up the food chain—to large organized crime syndicates. Cybercrime is big business, and it’s getting bigger.

But I’m not sure if stories like these help or hurt.

Posted on December 14, 2006 at 2:36 PM

The Square Root of Terrorist Intent

I’ve already written about the DHS’s database of top terrorist targets and how dumb it is. Important sites are not on the list, and unimportant ones are. The reason is pork, of course; states get security money based on this list, so every state wants to make sure they have enough sites on it. And over the past five years, states with Republican congressmen got more money than states without.

Here’s another article on this general topic, centering around an obscure quantity: the square root of terrorist intent:

The Department of Homeland Security is the home of many mysteries. There is, of course, the color-coded system for gauging the threat of an attack. And there is the department database of national assets to protect against a terrorist threat, which includes Old MacDonald’s Petting Zoo in Woodville, Ala., and the Apple and Pork Festival in Clinton, Ill.

And now Jim O’Brien, the director of the Office of Emergency Management and Homeland Security in Clark County, Nev., has discovered another hard-to-fathom DHS notion: a mathematical value purporting to represent the square root of terrorist intent. The figure appears deep in the mind-numbingly complex risk-assessment formulas that the department used in 2006 to decide the likelihood that a place is or will become a terrorist target—an all-important estimate outside the Beltway, because greater slices of the federal anti-terrorism pie go to the locations with the highest scores. Overall, the department awarded $711 million in high-risk urban counterterrorism grants last year.

[…]

As O’Brien reviewed the risk-assessment formulas—a series of calculations that runs into the billions—he found himself unable to account for several factors, the terrorist-intent notion principal among them. “I have a Ph.D. I think I understand formulas,” he says. “Take the square root of terrorist intent? Now, give me a break.” The whole notion, O’Brien says, is a contradiction in terms: “How can you quantify what somebody is thinking?”

Other designations for variables in the formula are almost befuddling, O’Brien says, such as the “attractiveness factor,” which seeks to establish how terrorists might prefer one sort of target over another, and the “chatter factor,” which tries to gauge the intent of potential terror plotters based on communication intercepts.

“One man’s garbage is another man’s treasure,” he says. “So I don’t know how you measure attractiveness.” The chatter factor, meanwhile, leaves O’Brien entirely in the dark: “I’m not sure what that means.”

What I said last time still applies:

We’re never going to get security right if we continue to make it a parody of itself.

Posted on December 11, 2006 at 12:18 PM

Why Management Doesn't Get IT Security

At the request of the Department of Homeland Security, a group called The Conference Board completed a study about senior management and their perceptions of IT security. The results aren’t very surprising.

Most C-level executives view security as an operational issue—kind of like facilities management—and not as a strategic issue. As such, they don’t have direct responsibility for security.

Such attitudes about security have caused many organizations to distance their security teams from other parts of the business as well. “Security directors appear to be politically isolated within their companies,” Cavanagh says. Security pros often do not talk to business managers or other departments, he notes, so they don’t have many allies in getting their message across to upper management.

What to do? The report has some suggestions, the same ones you can hear at any security conference anywhere.

Security managers need to reach out more aggressively to other areas of the business to help them make their case, Cavanagh says. “Risk managers are among the best potential allies,” he observes, because they are usually tasked with measuring the financial impact of various threats and correlating them with the likelihood that those threats will happen.

“That can be tricky, because most risk managers come from a financial background, and they don’t speak the same language as the security people,” Cavanagh notes. “It’s also difficult because security presents some unusual risk scenarios. There are some franchise events that could destroy the company’s business, but have a very low likelihood of occurrence, so it’s very hard to gauge the risk.”

Getting attention (and budget) from top executives such as risk managers, CFOs, and CEOs, means creating metrics that help measure the value of the security effort, Cavanagh says. In the study, The Conference Board found that the cost of business interruption was the most helpful metric, cited by almost 64 percent of respondents. That metric was followed by vulnerability assessments (60 percent), benchmarks against industry standards (49 percent), the value of the facilities (43.5 percent), and the level of insurance premiums (39 percent).

Face time is another important way to gain attention in mahogany row, the report says. In industries where there are critical infrastructure issues, such as financial services, about 66 percent of top executives meet at least once a month with their security director, according to the study. That figure dropped to around 44 percent in industries without critical infrastructure issues.

I guess it’s more confirmation of the conventional wisdom.

The full report is available, but it costs $125 if you’re something called a Conference Board associate, and $495 if you’re not. But my guess is that you’ve already heard everything that’s in it.

Posted on November 8, 2006 at 6:15 AM

Forge Your Own Boarding Pass

Last week Christopher Soghoian created a Fake Boarding Pass Generator website, allowing anyone to create a fake Northwest Airlines boarding pass: any name, airport, date, flight. This action got him visited by the FBI, who later came back, smashed open his front door, and seized his computers and other belongings. It resulted in calls for his arrest—the most visible by Rep. Edward Markey (D-Massachusetts)—who has since recanted. And it’s gotten him more publicity than he ever dreamed of.

All for demonstrating a known and obvious vulnerability in airport security involving boarding passes and IDs.

This vulnerability is nothing new. There was an article on CSOonline from February 2006. There was an article on Slate from February 2005. Sen. Chuck Schumer spoke about it as well. I wrote about it in the August 2003 issue of Crypto-Gram. It’s possible I was the first person to publish it, but I certainly wasn’t the first person to think of it.

It’s kind of obvious, really. If you can make a fake boarding pass, you can get through airport security with it. Big deal; we know.

You can also use a fake boarding pass to fly on someone else’s ticket. The trick is to have two boarding passes: one legitimate, in the name the reservation is under, and another phony one that matches the name on your photo ID. Use the fake boarding pass in your name to get through airport security, and the real ticket in someone else’s name to board the plane.

This means that a terrorist on the no-fly list can get on a plane: He buys a ticket in someone else’s name, perhaps using a stolen credit card, and uses his own photo ID and a fake ticket to get through airport security. Since the ticket is in an innocent’s name, it won’t raise a flag on the no-fly list.

You can also use a fake boarding pass instead of your real one if you have the “SSSS” mark and want to avoid secondary screening, or if you don’t have a ticket but want to get into the gate area.

Historically, forging a boarding pass was difficult. It required special paper and equipment. But since Alaska Airlines started the trend in 1999, most airlines now allow you to print your boarding pass using your home computer and bring it with you to the airport. This program was temporarily suspended after 9/11, but was quickly brought back because of pressure from the airlines. People who print the boarding passes at home can go directly to airport security, and that means fewer airline agents are required.

Airline websites generate boarding passes as graphics files, which means anyone with a little bit of skill can modify them in a program like Photoshop. All Soghoian’s website did was automate the process with a single airline’s boarding passes.

Soghoian claims that he wanted to demonstrate the vulnerability. You could argue that he went about it in a stupid way, but I don’t think what he did is substantively worse than what I wrote in 2003. Or what Schumer described in 2005. Why is it that the person who demonstrates the vulnerability is vilified while the person who describes it is ignored? Or, even worse, the organization that causes it is ignored? Why are we shooting the messenger instead of discussing the problem?

As I wrote in 2005: “The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler’s identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record—the third leg of the triangle—it’s possible to create two different boarding passes and have no one notice. That’s why the attack works.”

The way to fix it is equally obvious: Verify the accuracy of the boarding passes at the security checkpoints. If passengers had to scan their boarding passes as they went through screening, the computer could verify that the boarding pass already matched to the photo ID also matched the data in the computer. Close the authentication triangle and the vulnerability disappears.
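The triangle logic can be sketched in a few lines of code. This is a hypothetical illustration—the names, record layout, and function names are invented, and real airline systems differ—but it shows why checking only two legs of the triangle lets the two-boarding-pass attack through, and why closing the third leg stops it:

```python
# Hypothetical sketch of the "authentication triangle": traveler identity,
# boarding pass, and computer record. All names and data structures here
# are invented for illustration.

def current_checkpoint(photo_id_name, boarding_pass_name):
    # Today's system: security only compares the boarding pass to the ID.
    # (The gate separately compares a boarding pass to the record, but
    # never the ID to the record.)
    return photo_id_name == boarding_pass_name

def closed_triangle(photo_id_name, boarding_pass_name, reservation_names):
    # Proposed fix: the checkpoint also verifies the boarding pass against
    # the computer record, closing the third leg (ID <-> record).
    return (photo_id_name == boarding_pass_name
            and boarding_pass_name in reservation_names)

# Names the airline actually ticketed for this flight.
reservations = {"Innocent Traveler"}

# The attack: a fake pass printed in the attacker's own name matches his
# real ID, so it passes today's checkpoint...
assert current_checkpoint("Wanted Terrorist", "Wanted Terrorist")

# ...but fails once the pass itself is checked against the reservation
# record, because no such ticket exists.
assert not closed_triangle("Wanted Terrorist", "Wanted Terrorist", reservations)

# A legitimate traveler still passes both checks.
assert closed_triangle("Innocent Traveler", "Innocent Traveler", reservations)
```

The attacker's second, genuine boarding pass (in the innocent's name) would still work at the gate, but it no longer matters: he can't get through the checkpoint to reach the gate in the first place.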

But before we start spending time and money and Transportation Security Administration agents, let’s be honest with ourselves: The photo ID requirement is no more than security theater. Its only security purpose is to check names against the no-fly list, which would still be a joke even if it weren’t so easy to circumvent. Identification is not a useful security measure here.

Interestingly enough, while the photo ID requirement is presented as an antiterrorism security measure, it is really an airline-business security measure. It was first implemented after the explosion of TWA Flight 800 over the Atlantic in 1996. The government originally thought a terrorist bomb was responsible, but the explosion was later shown to be an accident.

Unlike every other airplane security measure—including reinforcing cockpit doors, which could have prevented 9/11—the airlines didn’t resist this one, because it solved a business problem: the resale of non-refundable tickets. Before the photo ID requirement, these tickets were regularly advertised in classified pages: “Round trip, New York to Los Angeles, 11/21-30, male, $100.” Since the airlines never checked IDs, anyone of the correct gender could use the ticket. Airlines hated that, and tried repeatedly to shut that market down. In 1996, the airlines were finally able to solve that problem and blame it on the FAA and terrorism.

So business is why we have the photo ID requirement in the first place, and business is why it’s so easy to circumvent it. Instead of going after someone who demonstrates an obvious flaw that is already public, let’s focus on the organizations that are actually responsible for this security failure and have failed to do anything about it for all these years. Where’s the TSA’s response to all this?

The problem is real, and the Department of Homeland Security and TSA should either fix the security or scrap the system. What we’ve got now is the worst security system of all: one that annoys everyone who is innocent while failing to catch the guilty.

This essay—my 30th for Wired.com—appeared today.

EDITED TO ADD (11/4): More news and commentary.

EDITED TO ADD (1/10): Great essay by Matt Blaze.

Posted on November 2, 2006 at 6:21 AM

Airline Passenger Profiling for Profit

I have previously written and spoken about the privacy threats that come from the confluence of government and corporate interests. It’s not the deliberate police-state privacy invasions from governments that worry me, but the normal-business privacy invasions by corporations—and how corporate privacy invasions pave the way for government privacy invasions and vice versa.

The U.S. government’s airline passenger profiling system was called Secure Flight, and I’ve written about it extensively. At one point, the system was going to perform automatic background checks on all passengers based on both government and commercial databases—credit card databases, phone records, whatever—and assign everyone a “risk score” based on the data. Those with a higher risk score would be searched more thoroughly than those with a lower risk score. It’s a complete waste of time, and a huge invasion of privacy, and the last time I paid attention it had been scrapped.

But the very same system that is useless at picking terrorists out of passenger lists is probably very good at identifying consumers. So what the government rightly decided not to do, the start-up corporation Jetera is doing instead:

Jetera would start with an airline’s information on individual passengers on board a given flight, drawing the name, address, credit card number and loyalty club status from reservations data. Through a process, for which it seeks a patent, the company would match the passenger’s identification data with the mountains of information about him or her available at one of the mammoth credit bureaus, which maintain separately managed marketing as well as credit information. Jetera would tap into the marketing side, showing consumer demographics, purchases, interests, attitudes and the like.

Jetera’s data manipulation would shape the entertainment made available to each passenger during a flight. The passenger who subscribes to a do-it-yourself magazine might be offered a video on woodworking. Catalog purchase records would boost some offerings and downplay others. Sports fans, known through their subscriptions, credit card ticket-buying or booster club memberships, would get “The Natural” instead of “Pretty Woman.”

The article is dated August 21, 2006 and is subscriber-only. Most of it talks about the revenue potential of the model, the funding the company received, and the talks it has had with anonymous airlines. No airline has signed up for the service yet, which would not only include in-flight personalization but pre- and post-flight mailings and other personalized services. Privacy is dealt with at the end of the article:

Jetera sees two legal issues regarding privacy and resolves both in its favor. Nothing Jetera intends to do would violate federal law or airline privacy policies as expressed on their websites. In terms of customer perceptions, Jetera doesn’t intend to abuse anyone’s privacy and will have an “opt-out” opportunity at the point where passengers make inflight entertainment choices.

If an airline wants an opt-out feature at some other point in the process, Jetera will work to provide one, McChesney says. Privacy and customer service will be an issue for each airline, and Jetera will adapt specifically to each.

The U.S. government already collects data from the phone company, from hotels and rental-car companies, and from airlines. How long before it piggybacks onto this system?

The other side to this is in the news, too: commercial databases using government data:

Records once held only in paper form by law enforcement agencies, courts and corrections departments are now routinely digitized and sold in bulk to the private sector. Some commercial databases now contain more than 100 million criminal records. They are updated only fitfully, and expunged records now often turn up in criminal background checks ordered by employers and landlords.

Posted on October 24, 2006 at 11:00 AM

