Entries Tagged "risk assessment"


Cybercrime Hype Alert

It seems to be the season for cybercrime hype. First, we have this article from CNN, which seems to have no actual news:

Computer hackers will open a new front in the multi-billion pound “cyberwar” in 2007, targeting mobile phones, instant messaging and community Web sites such as MySpace, security experts predict.

As people grow wise to email scams, criminal gangs will find new ways to commit online fraud, sell fake goods or steal corporate secrets.

And next, this article, which claims that criminal organizations are paying students to get IT degrees:

The most successful cyber crime gangs were based on partnerships between those with the criminal skills and contacts and those with the technical ability, said Mr Day.

“Traditional criminals have the ability to move funds and use all of the background they have,” he said, “but they don’t have the technical expertise.”

As the number of criminal gangs looking to move into cyber crime expanded, it got harder to recruit skilled hackers, said Mr Day. This has led criminals to target university students all around the world.

“Some students are being sponsored through their IT degree,” said Mr Day. Once qualified, the graduates go to work for the criminal gangs.

[…]

The aura of rebellion the name conjured up helped criminals ensnare children as young as 14, suggested the study.

By trawling websites, bulletin boards and chat rooms that offer hacking tools, cracks or passwords for pirated software, criminal recruiters gather information about potential targets.

Once identified, young hackers are drawn in by being rewarded for carrying out low-level tasks such as using a network of hijacked home computers, a botnet, to send out spam.

The low risk of being caught and the relatively high rewards on offer helped the criminal gangs to paint an attractive picture of a cyber criminal’s life, said Mr Day.

As youngsters are drawn in, the stakes are raised and they are told to undertake increasingly risky jobs.

Criminals targeting children — that’s sure to peg anyone’s hype-meter.

To be sure, I don’t want to minimize the threat of cybercrime. Nor do I want to minimize the threat of organized cybercrime. There are more and more criminals prowling the net, and more and more cybercrime is moving up the food chain to large organized crime syndicates. Cybercrime is big business, and it’s getting bigger.

But I’m not sure if stories like these help or hurt.

Posted on December 14, 2006 at 2:36 PM

The Square Root of Terrorist Intent

I’ve already written about the DHS’s database of top terrorist targets and how dumb it is. Important sites are not on the list, and unimportant ones are. The reason is pork, of course; states get security money based on this list, so every state wants to make sure they have enough sites on it. And over the past five years, states with Republican congressmen got more money than states without.

Here’s another article on this general topic, centering around an obscure quantity: the square root of terrorist intent:

The Department of Homeland Security is the home of many mysteries. There is, of course, the color-coded system for gauging the threat of an attack. And there is the department database of national assets to protect against a terrorist threat, which includes Old MacDonald’s Petting Zoo in Woodville, Ala., and the Apple and Pork Festival in Clinton, Ill.

And now Jim O’Brien, the director of the Office of Emergency Management and Homeland Security in Clark County, Nev., has discovered another hard-to-fathom DHS notion: a mathematical value purporting to represent the square root of terrorist intent. The figure appears deep in the mind-numbingly complex risk-assessment formulas that the department used in 2006 to decide the likelihood that a place is or will become a terrorist target — an all-important estimate outside the Beltway, because greater slices of the federal anti-terrorism pie go to the locations with the highest scores. Overall, the department awarded $711 million in high-risk urban counterterrorism grants last year.

[…]

As O’Brien reviewed the risk-assessment formulas — a series of calculations that runs into the billions — he found himself unable to account for several factors, the terrorist-intent notion principal among them. “I have a Ph.D. I think I understand formulas,” he says. “Take the square root of terrorist intent? Now, give me a break.” The whole notion, O’Brien says, is a contradiction in terms: “How can you quantify what somebody is thinking?”

Other designations for variables in the formula are almost as befuddling, O’Brien says, such as the “attractiveness factor,” which seeks to establish how terrorists might prefer one sort of target over another, and the “chatter factor,” which tries to gauge the intent of potential terror plotters based on communication intercepts.

“One man’s garbage is another man’s treasure,” he says. “So I don’t know how you measure attractiveness.” The chatter factor, meanwhile, leaves O’Brien entirely in the dark: “I’m not sure what that means.”
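DHS has never published the formula itself, so there is no way to know how these terms actually fit together. Purely to make the discussion concrete, here is an invented sketch of how a multiplicative risk score with a square-root “intent” term might be assembled; every variable name, weight, and number in it is my assumption, not anything from the department’s documents.

```python
# A purely hypothetical sketch -- DHS has not published its risk formula, and
# every name, weight, and number here is invented for illustration only.
from math import sqrt

def toy_risk_score(chatter, attractiveness, vulnerability, consequence):
    """Combine made-up inputs into a single 'risk' number.

    'chatter' stands in for the intercept-based intent estimate the article
    describes; taking its square root dampens the influence of large values,
    which is one guess at why such a term might appear in a scoring formula.
    """
    threat = sqrt(chatter) * attractiveness
    return threat * vulnerability * consequence

# Two hypothetical sites, identical except for "chatter": quadrupling the
# chatter estimate only doubles the final score, because of the square root.
print(toy_risk_score(chatter=1.0, attractiveness=0.5, vulnerability=0.3, consequence=100))  # 15.0
print(toy_risk_score(chatter=4.0, attractiveness=0.5, vulnerability=0.3, consequence=100))  # 30.0
```

Even in this toy form, O’Brien’s objection stands: the output is only as meaningful as the “chatter” and “attractiveness” guesses fed into it.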

What I said last time still applies:

We’re never going to get security right if we continue to make it a parody of itself.

Posted on December 11, 2006 at 12:18 PM

New U.S. Customs Database on Trucks and Travellers

It’s yet another massive government surveillance program:

US Customs and Border Protection issued a notice in the Federal Register yesterday which detailed the agency’s massive database that keeps risk assessments on every traveler entering or leaving the country. Citizens who are concerned that their information is inaccurate are all but out of luck: the system “may not be accessed under the Privacy Act for the purpose of contesting the content of the record.”

The system in question is the Automated Targeting System, which is associated with the previously-existing Treasury Enforcement Communications System. TECS was built to screen people and assets that moved in and out of the US, and its database contains more than one billion records that are accessible by more than 30,000 users at 1,800 sites around the country. Customs has adapted parts of the TECS system to its own use and now plans to screen all passengers, inbound and outbound cargo, and ships.

The system creates a risk assessment for each person or item in the database. The assessment is generated from information gleaned from federal and commercial databases, provided by people themselves as they cross the border, and the Passenger Name Record information recorded by airlines. This risk assessment will be maintained for up to 40 years and can be pulled up by agents at a moment’s notice in order to evaluate potential threats against the US.

If you leave the country, the government will suddenly know a lot about you. The Passenger Name Record alone contains names, addresses, telephone numbers, itineraries, frequent-flier information, e-mail addresses — even the name of your travel agent. And this information can be shared with plenty of people:

  • Federal, state, local, tribal, or foreign governments
  • A court, magistrate, or administrative tribunal
  • Third parties during the course of a law enforcement investigation
  • Congressional office in response to an inquiry
  • Contractors, grantees, experts, consultants, students, and others performing or working on a contract, service, or grant
  • Any organization or person who might be a target of terrorist activity or conspiracy
  • The United States Department of Justice
  • The National Archives and Records Administration
  • Federal or foreign government intelligence or counterterrorism agencies
  • Agencies or people when it appears that the security or confidentiality of their information has been compromised.

That’s a lot of people who could be looking at your information and your government-designed risk assessment. The one person who won’t be looking at that information is you. The entire system is exempt from inspection and correction under provision 552a (j)(2) and (k)(2) of US Code Title 5, which allows such exemptions when the data in question involves law enforcement or intelligence information.

This means you can’t review your data for accuracy, and you can’t correct any errors.

But the system can be used to give you a risk assessment score, which presumably will affect how you’re treated when you return to the U.S.

I’ve already explained why data mining does not find terrorists or terrorist plots. So have actual math professors. And we’ve seen this kind of “risk assessment score” idea and the problems it causes with Secure Flight.
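The core of that argument is base-rate arithmetic. Here is a rough back-of-the-envelope illustration; the numbers are assumptions chosen for the example, not figures from the Customs program or from my earlier essay.

```python
# Back-of-the-envelope base-rate arithmetic; every number is an illustrative assumption.
population = 300_000_000      # roughly the U.S. population
actual_terrorists = 1_000     # a deliberately generous guess
detection_rate = 0.99         # system flags 99% of real terrorists
false_positive_rate = 0.01    # system wrongly flags 1% of innocent people

true_alarms = actual_terrorists * detection_rate
false_alarms = (population - actual_terrorists) * false_positive_rate

print(f"True alarms:  {true_alarms:,.0f}")     # 990
print(f"False alarms: {false_alarms:,.0f}")    # about 3 million
print(f"Chance a flagged person is a terrorist: "
      f"{true_alarms / (true_alarms + false_alarms):.3%}")  # well under 0.1%
```

Even with implausibly optimistic accuracy, almost everyone the system flags is innocent, which is why a risk score generated this way tells you very little.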

This needs some mainstream press attention.

EDITED TO ADD (11/4): More commentary here, here, and here.

EDITED TO ADD (11/5): It’s buried in the back pages, but at least The Washington Post wrote about it.

Posted on November 4, 2006 at 9:19 AM

Perceived Risk vs. Actual Risk

I’ve written repeatedly about the difference between perceived and actual risk, and how it explains many seemingly perverse security trade-offs. Here’s a Los Angeles Times op-ed that does the same. The author is Daniel Gilbert, a psychology professor at Harvard. (I just recently finished his book Stumbling on Happiness, which is not a self-help book but rather a book about how the brain works. Strongly recommended.)

The op-ed is about the public’s reaction to the risks of global warming and terrorism, but the points he makes are much more general. He gives four reasons why some risks are perceived to be more or less serious than they actually are:

  1. We over-react to intentional actions, and under-react to accidents, abstract events, and natural phenomena.

    That’s why we worry more about anthrax (with an annual death toll of roughly zero) than influenza (with an annual death toll of a quarter-million to a half-million people). Influenza is a natural accident, anthrax is an intentional action, and the smallest action captures our attention in a way that the largest accident doesn’t. If two airplanes had been hit by lightning and crashed into a New York skyscraper, few of us would be able to name the date on which it happened.

  2. We over-react to things that offend our morals.

    When people feel insulted or disgusted, they generally do something about it, such as whacking each other over the head, or voting. Moral emotions are the brain’s call to action.

    He doesn’t say it, but it’s reasonable to assume that we under-react to things that don’t.

  3. We over-react to immediate threats and under-react to long-term threats.

    The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years — and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

    Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.

  4. We under-react to changes that occur slowly and over time.

    The human brain is exquisitely sensitive to changes in light, sound, temperature, pressure, size, weight and just about everything else. But if the rate of change is slow enough, the change will go undetected. If the low hum of a refrigerator were to increase in pitch over the course of several weeks, the appliance could be singing soprano by the end of the month and no one would be the wiser.

It’s interesting to compare this to what I wrote in Beyond Fear (pages 26-27) about perceived vs. actual risk:

  • People exaggerate spectacular but rare risks and downplay common risks. They worry more about earthquakes than they do about slipping on the bathroom floor, even though the latter kills far more people than the former. Similarly, terrorism causes far more anxiety than common street crime, even though the latter claims many more lives. Many people believe that their children are at risk of being given poisoned candy by strangers at Halloween, even though there has been no documented case of this ever happening.
  • People have trouble estimating risks for anything not exactly like their normal situation. Americans worry more about the risk of mugging in a foreign city, no matter how much safer it might be than where they live back home. Europeans routinely perceive the U.S. as being full of guns. Men regularly underestimate how risky a situation might be for an unaccompanied woman. The risks of computer crime are generally believed to be greater than they are, because computers are relatively new and the risks are unfamiliar. Middle-class Americans can be particularly naïve and complacent; their lives are incredibly secure most of the time, so their instincts about the risks of many situations have been dulled.
  • Personified risks are perceived to be greater than anonymous risks. Joseph Stalin said, “A single death is a tragedy, a million deaths is a statistic.” He was right; large numbers have a way of blending into each other. The final death toll from 9/11 was less than half of the initial estimates, but that didn’t make people feel less at risk. People gloss over statistics of automobile deaths, but when the press writes page after page about nine people trapped in a mine — complete with human-interest stories about their lives and families — suddenly everyone starts paying attention to the dangers with which miners have contended for centuries. Osama bin Laden represents the face of Al Qaeda, and has served as the personification of the terrorist threat. Even if he were dead, it would serve the interests of some politicians to keep him “alive” for his effect on public opinion.
  • People underestimate risks they willingly take and overestimate risks in situations they can’t control. When people voluntarily take a risk, they tend to underestimate it. When they have no choice but to take the risk, they tend to overestimate it. Terrorists are scary because they attack arbitrarily, and from nowhere. Commercial airplanes are perceived as riskier than automobiles, because the controls are in someone else’s hands — even though they’re much safer per passenger mile. Similarly, people overestimate even more those risks that they can’t control but think they, or someone, should. People worry about airplane crashes not because we can’t stop them, but because we think as a society we should be capable of stopping them (even if that is not really the case). While we can’t really prevent criminals like the two snipers who terrorized the Washington, DC, area in the fall of 2002 from killing, most people think we should be able to.
  • Last, people overestimate risks that are being talked about and remain an object of public scrutiny. News, by definition, is about anomalies. Endless numbers of automobile crashes hardly make news like one airplane crash does. The West Nile virus outbreak in 2002 killed very few people, but it worried many more because it was in the news day after day. AIDS kills about 3 million people per year worldwide — about three times as many people each day as died in the terrorist attacks of 9/11. If a lunatic goes back to the office after being fired and kills his boss and two coworkers, it’s national news for days. If the same lunatic shoots his ex-wife and two kids instead, it’s local news…maybe not even the lead story.

Posted on November 3, 2006 at 7:18 AM

Air Cargo Security

BBC is reporting a “major” hole in air cargo security. Basically, cargo is being flown on passenger planes without being screened. A would-be terrorist could therefore blow up a passenger plane by shipping a bomb via FedEx.

In general, cargo deserves much less security scrutiny than passengers. Here’s the reasoning:

Cargo planes are much less of a terrorist risk than passenger planes, because terrorism is about innocents dying. Blowing up a planeload of FedEx packages is annoying, but not nearly as terrorizing as blowing up a planeload of tourists. Hence, the security around air cargo doesn’t have to be as strict.

Given that, if most air cargo flies around on cargo planes, then it’s okay for some small amount of cargo to fly as baggage on passenger planes, assuming the selection is random and the shipper doesn’t know beforehand which packages will be chosen. A would-be terrorist would be better off taking his bomb and blowing up a bus than shipping it and hoping it might possibly be put on a passenger plane.

At least, that’s the theory. But theory and practice are different.

The British system involves “known shippers”:

Under a system called “known shipper” or “known consignor”, companies which have been security vetted by government-appointed agents can send parcels by air which do not have to be subjected to any further security checks.

Unless a package from a known shipper arouses suspicion or is subject to a random search it is taken on trust that its contents are safe.

But:

Captain Gary Boettcher, president of the US Coalition Of Airline Pilots Associations, says the “known shipper” system “is probably the weakest part of the cargo security today”.

“There are approx 1.5 million known shippers in the US. There are thousands of freight forwarders. Anywhere down the line packages can be intercepted at these organisations,” he said.

“Even reliable respectable organisations, you really don’t know who is in the warehouse, who is tampering with packages, putting parcels together.”

This system has already been exploited by drug smugglers:

Mr Adeyemi brought pounds of cocaine into Britain unchecked by air cargo, transported from the US by the Federal Express courier company. He did not have to pay the postage.

This was made possible because he managed to illegally buy the confidential Fed Ex account numbers of reputable and security cleared companies from a former employee.

An accomplice in the US was able to put the account numbers on drugs parcels which, as they appeared to have been sent by known shippers, arrived unchecked at Stansted Airport.

When police later contacted the companies whose accounts and security clearance had been so abused they discovered they had suspected nothing.

And it’s not clear that a terrorist can’t figure out which shipments are likely to be put on passenger aircraft:

However several large companies such as FedEx and UPS offer clients the chance to follow the progress of their parcels online.

This is a facility that Chris Yates, an expert on airline security for Jane’s Transport, says could be exploited by terrorists.

“From these you can get a fair indication when that package is in the air, if you are looking to get a package into New York from Heathrow at a given time of day.”

And BBC reports that 70% of cargo is shipped on passenger planes. That seems like too high a number.
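To see why that number matters to the trade-off above, here is a trivial back-of-the-envelope calculation. Apart from the 70% figure, the numbers are assumptions I’ve picked for illustration.

```python
# Chance that a randomly routed package ends up on a passenger plane.
# Only the 70% figure comes from the BBC report; the rest is illustrative.
def chance_on_passenger_plane(passenger_fraction, packages=1):
    """Probability that at least one of `packages` shipments is loaded onto a
    passenger plane, assuming routing is random and the shipper can't influence it."""
    return 1 - (1 - passenger_fraction) ** packages

print(chance_on_passenger_plane(0.05))        # the "small random amount" theory: 5%
print(chance_on_passenger_plane(0.70))        # the fraction the BBC reports: 70%
print(chance_on_passenger_plane(0.70, 3))     # ship three packages: about 97%
```

If only a few percent of cargo rode on passenger planes, shipping a bomb would be a bad bet; at 70%, it isn’t.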

If we had infinite budget, of course we’d screen all air cargo. But we don’t, and it’s a reasonable trade-off to ignore cargo planes and concentrate on passenger planes. But there are some awfully big holes in this system.

Posted on October 24, 2006 at 6:11 AM

Perceived Risk vs. Actual Risk

Good essay on perceived vs. actual risk. The hook is Mayor Daley of Chicago demanding a no-fly zone over Chicago in the wake of the New York City airplane crash.

Other politicians (with the spectacular and notable exception of New York City Mayor Michael Bloomberg) and self-appointed “experts” are jumping on the tragic accident — repeat, accident — in New York to sound off again about the “danger” of light aircraft, and how they must be regulated, restricted, banned.

OK, for all of those ranting about “threats” from GA aircraft, we’ll believe that you’re really serious about controlling “threats” when you call for:

  • Banning all vans within cities. A small panel van was used in the first World Trade Center attack. The bomb, which weighed 1,500 pounds, killed six and injured 1,042.
  • Banning all box trucks from cities. Timothy McVeigh’s rented Ryder truck carried a 5,000-pound bomb that killed 168 in Oklahoma City.
  • Banning all semi-trailer trucks. They can carry bombs weighing more than 50,000 pounds.
  • Banning newspapers on subways. That’s how the terrorists hid packages of sarin nerve gas in the Tokyo subway system. They killed 12.
  • Banning backpacks on all buses and subways. That’s how the terrorists got the bombs into the London subway system. They killed 52.
  • Banning all cell phones on trains. That’s how they detonated the bombs in backpacks placed on commuter trains in Madrid. They killed 191.
  • Banning all small pleasure boats on public waterways. That’s how terrorists attacked the USS Cole, killing 17.
  • Banning all heavy or bulky clothing in all public places. That’s how suicide bombers hide their murderous charges. Thousands killed.

Number of people killed by a terrorist attack using a GA aircraft? Zero.

Number of people injured by a terrorist attack using a GA aircraft? Zero.

Property damage from a terrorist attack using a GA aircraft? None.

So Mr. Mayor (and Mr. Governor, Ms. Senator, Mr. Congressman, and Mr. “Expert”), if you’re truly serious about “protecting” the public, advocate all of the bans I’ve listed above. Using the “logic” you apply to general aviation aircraft, you’re forced to conclude that newspapers, winter coats, cell phones, backpacks, trucks, and boats all pose much greater risks to the public.

So be consistent in your logic. If you are dead set on restricting a personal transportation system that carries more passengers than any single airline, reaches more American cities than all the airlines combined, provides employment for 1.3 million American citizens and $160 billion in business “to protect the public,” then restrict or control every other transportation system that the terrorists have demonstrated they can use to kill.

And, on the same topic, why it doesn’t make sense to ban small aircraft from cities as a terrorism defense.

Posted on October 23, 2006 at 10:01 AM

More Than 10 Ways to Avoid the Next 9/11

From yesterday’s New York Times, “Ten Ways to Avoid the Next 9/11”:

If we are fortunate, we will open our newspapers this morning knowing that there have been no major terrorist attacks on American soil in nearly five years. Did we just get lucky?

The Op-Ed page asked 10 people with experience in security and counterterrorism to answer the following question: What is one major reason the United States has not suffered a major attack since 2001, and what is the one thing you would recommend the nation do in order to avoid attacks in the future?

Actually, they asked more than 10, myself included. But some of us were cut because they didn’t have enough space. This was my essay:

Despite what you see in the movies and on television, it’s actually very difficult to execute a major terrorist act. It’s hard to organize, plan, and execute an attack, and it’s all too easy to slip up and get caught. Combine that with our intelligence work tracking terrorist cells and interdicting terrorist funding, and you have a climate where major attacks are rare. In many ways, the success of 9/11 was an anomaly; there were many points where it could have failed. The main reason we haven’t seen another 9/11 is that it isn’t as easy as it looks.

Many of our counterterrorist efforts are nothing more than security theater: ineffectual measures that look good. Forget the “war on terror”; the difficulty isn’t killing or arresting the terrorists, it’s finding them. Terrorism is a law enforcement problem, and needs to be treated as such. For example, none of our post-9/11 airline security measures would have stopped the London shampoo bombers. The lesson of London is that our best defense is intelligence and investigation. Rather than spending money on airline security, or sports stadium security — measures that require us to guess the plot correctly in order to be effective — we’re better off spending money on measures that are effective regardless of the plot.

Intelligence and investigation have kept us safe from terrorism in the past, and will continue to do so in the future. If the CIA and FBI had done a better job of coordinating and sharing data in 2001, 9/11 would have been another failed attempt. Coordination has gotten better, and those agencies are better funded — but it’s still not enough. Whenever you read about the billions being spent on national ID cards or massive data mining programs or new airport security measures, think about the number of intelligence agents that the same money could buy. That’s where we’re going to see the greatest return on our security investment.

Posted on September 11, 2006 at 6:36 AM

Anti-Missile Defenses for Passenger Aircraft

It’s not happening anytime soon:

Congress agreed to pay for the development of the systems to protect the planes from such weapons, but balked at proposals to spend the billions needed to protect all 6,800 commercial U.S. airliners.

Probably for the best, actually. One, there are far more effective ways to spend that money on counterterrorism. And two, they’re only effective against a particular type of missile technology:

Both BAE and Northrop systems use lasers to jam the guidance systems of incoming missiles, which lock onto the heat of an aircraft’s engine.

Posted on August 3, 2006 at 7:30 AM
