Entries Tagged "terrorism"


New Airline Security Rules

The foiled UK terrorist plot has wreaked havoc with air travel in the country:

All short-haul inbound flights to Heathrow airport have been cancelled. Some flights in and out of Gatwick have been suspended.

Security has been increased at Channel ports and the Eurotunnel terminal.

German carrier Lufthansa has cancelled flights to Heathrow and the Spanish airline Iberia has stopped UK flights.

British Airways has announced it has cancelled all its short-haul flights to and from Heathrow for the rest of Thursday.

The airline added that it was also cancelling some domestic and short haul services in and out of Gatwick airport during the remainder of the day.

In addition, pretty much no carry-ons are allowed:

These measures will prevent passengers from carrying hand luggage into the cabin of an aircraft with the following exceptions (which must be placed in a plastic bag):

  • Pocket-size wallets and pocket-size purses plus contents (for example money, credit cards, identity cards, etc.) (not handbags);
  • Travel documents essential for the journey (for example passports and travel tickets);
  • Prescription medicines and medical items sufficient and essential for the flight (e.g. diabetic kit), except in liquid form unless verified as authentic;
  • Spectacles and sunglasses, without cases;
  • Contact lens holders, without bottles of solution;
  • For those traveling with an infant: baby food, milk (the contents of each bottle must be tasted by the accompanying passenger);
  • Sanitary items sufficient and essential for the flight (nappies, wipes, creams and nappy disposal bags);
  • Female sanitary items sufficient and essential for the flight, if unboxed (e.g. tampons, pads, towels and wipes), tissues (unboxed) and/or handkerchiefs;
  • Keys (but no electrical key fobs)

Across the Atlantic, the TSA has announced new security rules:

Passengers are not allowed to have gels or liquids of any kind at screening points or in the cabin of any airplane.

They said this includes beverages, food, suntan lotion, creams, toothpaste, hair gel, or similar items. Those items must be packed into checked luggage. Beverages bought on the secure side of the checkpoint must be disposed of before boarding the plane.

There are several exceptions to the new rule. Baby formula, breast milk, or juice for small children, prescription medications where the name matches the name of a ticketed passenger, as well as insulin and other essential health items may be brought onboard the plane.

See the TSA rules for more detail.

Given how little we know of the extent of the plot, these don’t seem like ridiculous short-term measures. I’m sure glad I’m not flying anywhere this week.

EDITED TO ADD (8/10): Interesting analysis by Eric Rescorla.

Posted on August 10, 2006 at 7:40 AM

Britain Adopts Threat Levels

Taking a cue from a useless American idea, the UK has announced a system of threat levels:

“Threat levels are designed to give a broad indication of the likelihood of a terrorist attack,” the intelligence.gov.uk website said in a posting. “They are based on the assessment of a range of factors including current intelligence, recent events and what is known about terrorist intentions and capabilities. This information may well be incomplete and decisions about the appropriate security response are made with this in mind.”

Unlike the previous secret grading system offering seven levels of threat, the new system has been simplified to five, starting with “low,” meaning an attack is unlikely, to “critical,” meaning an attack is expected imminently. Unlike American threat assessments, the British system is not color-coded.

The current level is “severe”:

“Severe” is the second-highest threat level, but the Web site did not say what kind of attack was likely. The assessment is roughly the same as it has been for a year.

I wrote about the stupidity of this sort of system back in 2004:

In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.

The problem is that the warnings don’t do any of this. Because they are so vague and so frequent, and because they don’t recommend any useful actions that people can take, terror threat warnings don’t prevent terrorist attacks. They might force a terrorist to delay his plan temporarily, or change his target. But in general, professional security experts like me are not particularly impressed by systems that merely force the bad guys to make minor modifications in their tactics.

And the alerts don’t result in a more vigilant America. It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People can do useful things in response to a hurricane warning; then there is a discrete period when their lives are markedly different, and they feel there was utility in the higher alert mode, even if nothing came of it.

It’s quite another thing to tell people to be on alert, but not to alter their plans, as Americans were instructed last Christmas. A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Indeed, it inspires terror itself. Compare people’s reactions to hurricane threats with their reactions to earthquake threats. According to scientists, California is expecting a huge earthquake sometime in the next two hundred years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for two centuries. The news seems to have generated the same levels of short-term fear and long-term apathy in Californians that the terrorist warnings do. It’s human nature; people simply can’t be vigilant indefinitely.

[…]

This all implies that if the government is going to issue a threat warning at all, it should provide as many details as possible. But this is a catch-22: Unfortunately, there’s an absolute limit to how much information the government can reveal. The classified nature of the intelligence that goes into these threat alerts precludes the government from giving the public all the information it would need to be meaningfully prepared.

[…]

A terror alert that instills a vague feeling of dread or panic echoes the very tactics of the terrorists. There are essentially two ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear with the threat of doing something horrible. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

There’s another downside to incessant threat warnings, one that happens when everyone realizes that they have been abused for political purposes. Call it the “Boy Who Cried Wolf” problem. After too many false alarms, the public will become inured to them. Already this has happened. Many Americans ignore terrorist threat warnings; many even ridicule them. The Bush administration lost considerable respect when it was revealed that August’s New York/Washington warning was based on three-year-old information. And the more recent warning that terrorists might target cheap prescription drugs from Canada was assumed universally to be politics-as-usual.

Repeated warnings do more harm than good, by needlessly creating fear and confusion among those who still trust the government, and anesthetizing everyone else to any future alerts that might be important. And every false alarm makes the next terror alert less effective.

The Bush administration used this system largely as a political tool. Perhaps Tony Blair has the same idea.

Crossposted to the ACLU blog.

Posted on August 2, 2006 at 4:01 PM

Remote-Control Airplane Software

Does anyone other than me see a problem with this?

Some 30 European businesses and research institutes are working to create software that would make it possible from a distance to regain control of an aircraft from hijackers, according to the German news magazine.

The system “which could only be controlled from the ground would conduct the aircraft posing a problem to the nearest airport whether it liked it or not,” according to extracts from next Monday’s Der Spiegel released Saturday.

“A hijacker would have no chance of reaching his goal,” it said.

Unless his goal were, um, hijacking the aircraft.

It seems to me that by designing remote-control software for airplanes, you open the possibility for someone to hijack the plane without even being on board. Sure, there are going to be computer-security controls protecting this thing, but we all know how well that sort of thing has worked in the past.

The system would be designed in such a way that even a computer hacker on board could not get round it.

But what about computer hackers on the ground?

I’m not saying this is a bad idea; it might be a good idea. But this security countermeasure opens up an entirely new vulnerability, and I hope that someone is studying that new vulnerability.

Posted on July 28, 2006 at 2:09 PM

Sky Marshals Name Innocents to Meet Quota

One news source is reporting that sky marshals are filing reports on innocent people in order to meet a quota:

The air marshals, whose identities are being concealed, told 7NEWS that they’re required to submit at least one report a month. If they don’t, there’s no raise, no bonus, no awards and no special assignments.

“Innocent passengers are being entered into an international intelligence database as suspicious persons, acting in a suspicious manner on an aircraft … and they did nothing wrong,” said one federal air marshal.

[…]

These unknowing passengers who are doing nothing wrong are landing in a secret government document called a Surveillance Detection Report, or SDR. Air marshals told 7NEWS that managers in Las Vegas created and continue to maintain this potentially dangerous quota system.

“Do these reports have real life impacts on the people who are identified as potential terrorists?” 7NEWS Investigator Tony Kovaleski asked.

“Absolutely,” a federal air marshal replied.

[…]

What kind of impact would it have for a flying individual to be named in an SDR?

“That could have serious impact … They could be placed on a watch list. They could wind up on databases that identify them as potential terrorists or a threat to an aircraft. It could be very serious,” said Don Strange, a former agent in charge of air marshals in Atlanta. He lost his job attempting to change policies inside the agency.

This is so insane, it can’t possibly be true. But I have been stunned before by the stupidity of the Department of Homeland Security.

EDITED TO ADD (7/27): This is what Brock Meeks said on David Farber’s IP mailing list:

Well, it so happens that I was the one that BROKE this story… way back in 2004. There were at least two offices, Miami and Las Vegas, that had this quota system for writing up and filing “SDRs.”

The requirement was totally renegade and NOT endorsed by Air Marshal officials in Washington. The Las Vegas Air Marshal field office was run (I think he’s retired now) by a real cowboy at the time, someone that caused a lot of problems for the Washington HQ staff. (That official once grilled an Air Marshal for three hours in an interrogation room because he thought the air marshal was a source of mine on another story. The air marshal was then taken off flight status and made to wash the office cars for two weeks… I broke that story, too. And no, the punished air marshal was never a source of mine.)

Air marshals told me they were filing false reports, as they did below, just to hit the quota.

When my story hit, those in the offices of Las Vegas and Miami were reprimanded and the practice was ordered stopped by Washington HQ.

I suppose the biggest question I have for this story is the HYPE of what happens to these reports. They do NOT place the person mentioned on a “watch list.” These reports, filed on Palm Pilot PDAs, go into an internal Air Marshal database that is rarely seen and pretty much ignored by other intelligence agencies, from all sources I talked to.

Why? Because the air marshals are seen as little more than “sky cops” and these SDRs considered little more than “field interviews” that cops sometimes file when they question someone loitering at a 7-11 too late at night.

The quota system, if it is still going on, is heinous, but it hardly results in the big spooky data collection scare that this cheapjack Denver “investigative” TV reporter makes it out to be.

The quoted former field official from Atlanta, Don Strange, did, in fact, lose his job over trying to change internal policies. He was the most well-liked official among the rank and file, and the Atlanta office, under his command, had the highest morale in the nation.

Posted on July 25, 2006 at 9:55 AM

Top Terrorist Targets from the DHS

It’s a seriously dumb list:

A federal inspector general has analyzed the nation’s database of top terrorist targets. There are more than 77,000 of them—up from 160 a few years ago, before the entire exercise morphed into a congressional porkfest.

And on that list of national assets are … 1,305 casinos! No doubt Muckleshoot made the cut (along with every other casino in our state).

The list has 234 restaurants. I have no idea if Dick’s made it. The particulars are classified. But you have to figure it did.

Why? Because here’s more of what the inspector general found passes for “critical infrastructure.” An ice-cream parlor. A tackle shop. A flea market. An Amish popcorn factory.

Seven hundred mortuaries made the list. Terrorists know no limits if they’re planning attacks on our dead people.

The report says our state has a whopping 3,650 critical sites, sixth in the U.S. It didn’t identify them—remember, we wouldn’t want this list of eateries, zoos and golf courses to fall into the wrong hands.

That number, 3,650, is so high I’m positive we haven’t heard the most farcical of it yet.

What’s going on? Pork barrel funding, that’s what’s going on.

We’re never going to get security right if we continue to make it a parody of itself.

Posted on July 18, 2006 at 7:25 AM

Complexity and Terrorism Investigations

Good article on how complexity greatly limits the effectiveness of terror investigations. The stories of wasted resources are all from the UK, but the morals are universal.

The Committee’s report accepts that the increasing number of investigations, together with their increasing complexity, will make longer detention inevitable in the future. The core calculation is essentially the one put forward by the police and accepted by the Government – technology has been an enabler for international terrorism, with email, the Internet and mobile telephony producing wide, diffuse, international networks. The data on hard drives and mobile phones needs to be examined, contacts need to be investigated and their data examined, and in the case of an incident, vast amounts of CCTV records need to be gone through. As more and more of this needs to be done, the time taken to do it will obviously climb, and as it’s ‘necessary’ to detain the new breed of terrorist early in the investigation before he can strike, more time will be needed between arrest and charge in order to build a case.

All of which is, as far as it goes, logical. But take it a little further and the inherent futility of the route becomes apparent – ultimately, probably quite soon, the volume of data overwhelms the investigators and infinite time is needed to analyse all of it. And the less developed the plot is at the time the suspects are pulled in, the greater the number of possible outcomes (things they ‘might’ be planning) that will need to be chased-up. Short of the tech industry making the breakthrough into machine intelligence that will effectively do the analysis for them (which is a breakthrough the snake-oil salesmen suggest, and dopes in Government believe, has been achieved already), the approach itself is doomed. Essentially, as far as data is concerned police try to ‘collar the lot’ and then through analysis, attempt to build the most complete picture of a case that is possible. Use of initiative, experience and acting on probabilities will tend to be pressured out of such systems, and as the data volumes grow the result will tend to be teams of disempowered machine minders chained to a system that has ground to a halt. This effect is manifesting itself visibly across UK Government systems in general, we humbly submit. But how long will it take them to figure this out?

[…]

There is clearly a major problem for the security services in distinguishing disaffected talk from serious planning, and in deciding when an identified group constitutes a real threat. But the current technology-heavy approach to the threat doesn’t make a great deal of sense, because it produces very large numbers of suspects who are not and never will be a serious threat. Quantities of these suspects will nevertheless be found to be guilty of something, and along the way large amounts of investigative resource will have been expended to no useful purpose, aside from filling up 90 days. Overreaction to suggestions of CBRN threats is similarly counter-productive, because it makes it more likely that nascent groups will, just like the police, misunderstand the capabilities of the weapons, and start trying to research and build them. Mischaracterising the threat by inflating early, inexpert efforts as ‘major plots’ meanwhile fosters a climate of fear and ultimately undermines public confidence in the security services.

The oft-used construct, “the public would never forgive us if…” is a cop-out. It’s a spurious justification for taking the ‘collar the lot’ approach, throwing resources at it, ducking out of responsibility and failing to manage. Getting back to basics, taking ownership and telling the public the truth is more honest, and has some merit. A serious terror attack needs intent, attainable target and capability, the latter being the hard bit amateurs have trouble achieving without getting spotted along the way. Buying large bags of fertiliser if you’re not known to the vendor and you don’t look in the slightest bit like a farmer is going to put you onto MI5’s radar, and despite what it says on a lot of web sites, making your own explosives if you don’t know what you’re doing is a good way of blowing yourself up before you intended to. If disaffected youth had a more serious grasp of these realities, and had heard considerably more sense about the practicalities, then it’s quite possible that fewer of them would persist with their terror studies. Similarly, if the general public had better knowledge it would be better placed to spot signs of bomb factories. Bleached hair, dead plants, large numbers of peroxide containers? It could surely have been obvious.

Posted on July 14, 2006 at 7:25 AM

A Minor Security Lesson from Mumbai Terrorist Bombings

Two quotes:

Authorities had also severely limited the cellular network for fear it could be used to trigger more attacks.

And:

Some of the injured were seen frantically dialing their cell phones. The mobile phone network collapsed, adding to the sense of panic.

(Note: The story was changed online, and the second quote was deleted.)

Cell phones are useful to terrorists, but they’re more useful to the rest of us.

Posted on July 13, 2006 at 1:20 PM

Terrorists, Data Mining, and the Base Rate Fallacy

I have already explained why NSA-style wholesale surveillance data-mining systems are useless for finding terrorists. Here’s a more formal explanation:

Floyd Rudmin, a professor at a Norwegian university, applies the mathematics of conditional probability, known as Bayes’ Theorem, to demonstrate that the NSA’s surveillance cannot successfully detect terrorists unless both the percentage of terrorists in the population and the accuracy rate of their identification are far higher than they are. He correctly concludes that “NSA’s surveillance system is useless for finding terrorists.”

The surveillance is, however, useful for monitoring political opposition and stymieing the activities of those who do not believe the government’s propaganda.

And here’s the analysis:

What is the probability that people are terrorists given that NSA’s mass surveillance identifies them as terrorists? If the probability is zero (p=0.00), then they certainly are not terrorists, and NSA was wasting resources and damaging the lives of innocent citizens. If the probability is one (p=1.00), then they definitely are terrorists, and NSA has saved the day. If the probability is fifty-fifty (p=0.50), that is the same as guessing the flip of a coin. The conditional probability that people are terrorists given that the NSA surveillance system says they are had better be very near to one (p=1.00) and very far from zero (p=0.00).

The mathematics of conditional probability were figured out by the English logician Thomas Bayes. If you Google “Bayes’ Theorem”, you will get more than a million hits. Bayes’ Theorem is taught in all elementary statistics classes. Everyone at NSA certainly knows Bayes’ Theorem.

To know if mass surveillance will work, Bayes’ theorem requires three estimations:

  1. The base-rate for terrorists, i.e. what proportion of the population are terrorists;
  2. The accuracy rate, i.e., the probability that real terrorists will be identified by NSA;
  3. The misidentification rate, i.e., the probability that innocent citizens will be misidentified by NSA as terrorists.

No matter how sophisticated and super-duper are NSA’s methods for identifying terrorists, no matter how big and fast are NSA’s computers, NSA’s accuracy rate will never be 100% and their misidentification rate will never be 0%. That fact, plus the extremely low base-rate for terrorists, means it is logically impossible for mass surveillance to be an effective way to find terrorists.

I will not put Bayes’ computational formula here. It is available in all elementary statistics books and is on the web should any readers be interested. But I will compute some conditional probabilities that people are terrorists given that NSA’s system of mass surveillance identifies them to be terrorists.

The US Census shows that there are about 300 million people living in the USA.

Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033%, which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA’s monitoring of everyone’s email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA’s misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA’s surveillance system is useless for finding terrorists.

Suppose that NSA’s system is more accurate than .40, let’s say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes’ Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless.

Suppose that NSA’s system is really, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA’s domestic monitoring of everyone’s email and phone calls is useless for finding terrorists.
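For readers who want to check these numbers themselves, here is a short sketch of the calculation. It is just a direct application of Bayes’ Theorem using the base rate, accuracy rate, and misidentification rate quoted above; the function and variable names are mine, not part of any NSA system or the original analysis.

```python
def p_terrorist_given_flagged(base_rate, accuracy, misid_rate):
    """Bayes' Theorem: posterior probability that a flagged person
    is actually a terrorist.

    base_rate  -- proportion of the population that are terrorists
    accuracy   -- P(flagged | terrorist), the true-positive rate
    misid_rate -- P(flagged | innocent), the false-positive rate
    """
    true_positives = accuracy * base_rate
    false_positives = misid_rate * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# 1,000 terrorists in a population of 300 million
base_rate = 1_000 / 300_000_000

# The three scenarios from the analysis above:
# (accuracy, misidentification rate)
scenarios = [(0.40, 0.0001), (0.70, 0.0001), (0.90, 0.00001)]

for accuracy, misid in scenarios:
    p = p_terrorist_given_flagged(base_rate, accuracy, misid)
    print(f"accuracy={accuracy:.2f}, misid={misid}: p={p:.4f}")
# prints p=0.0132, p=0.0228, and p=0.2308, matching the figures above
```

Even in the wildly optimistic third scenario, more than three out of four people the system flags are innocent, which is the base rate fallacy in a nutshell: when the condition you are testing for is rare enough, even a very accurate test produces mostly false positives.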

As an exercise to the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones. Data mining is by no means useless; it’s just useless for this particular application.

Posted on July 10, 2006 at 7:15 AM

