Blog: October 2006 Archives

Airport Screeners Still Aren't Any Good

They may be great at keeping you from taking your bottle of water onto the plane, but when it comes to catching actual bombs and guns they’re not very good:

Screeners at Newark Liberty International Airport, one of the starting points for the Sept. 11 hijackers, failed 20 of 22 security tests conducted by undercover U.S. agents last week, missing concealed bombs and guns at checkpoints throughout the major air hub’s three terminals, according to federal security officials.

[…]

One of the security officials familiar with last week’s tests said Newark screeners missed fake explosive devices hidden under bottles of water in carry-on luggage, taped beneath an agent’s clothing and concealed under a leg bandage another tester wore.

The official said screeners also failed to use handheld metal-detector wands when required, missed an explosive device during a pat-down and failed to properly hand-check suspicious carry-on bags. Supervisors also were cited for failing to properly monitor checkpoint screeners, the official said. “We just totally missed everything,” the official said.

As I’ve written before, this is actually a very hard problem to solve:

Airport screeners have a difficult job, primarily because the human brain isn’t naturally adapted to the task. We’re wired for visual pattern matching, and are great at picking out something we know to look for—for example, a lion in a sea of tall grass.

But we’re much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there’s no point in paying attention. By the time the exception comes around, the brain simply doesn’t notice it. This psychological phenomenon isn’t just a problem in airport screening: It’s been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.

To make matters worse, the smuggler can try to exploit the system. He can position the weapons in his baggage just so. He can try to disguise them by adding other metal items to distract the screeners. He can disassemble bomb parts so they look nothing like bombs. Against a bored screener, he has the upper hand.

But perversely, even a mediocre success rate here is probably good enough:

Remember the point of passenger screening. We’re not trying to catch the clever, organized, well-funded terrorists. We’re trying to catch the amateurs and the incompetent. We’re trying to catch the unstable. We’re trying to catch the copycats. These are all legitimate threats, and we’re smart to defend against them. Against the professionals, we’re just trying to add enough uncertainty into the system that they’ll choose other targets instead.

[…]

What that means is that a basic cursory screening is good enough. If I were investing in security, I would fund significant research into computer-assisted screening equipment for both checked and carry-on bags, but wouldn’t spend a lot of money on invasive screening procedures and secondary screening. I would much rather have well-trained security personnel wandering around the airport, both in and out of uniform, looking for suspicious actions.

Remember this truism: We can’t keep weapons out of prisons. We can’t possibly keep them out of airports.

Posted on October 31, 2006 at 12:52 PM • 47 Comments

Total Information Awareness Is Back

Remember Total Information Awareness?

In November 2002, the New York Times reported that the Defense Advanced Research Projects Agency (DARPA) was developing a tracking system called “Total Information Awareness” (TIA), which was intended to detect terrorists through analyzing troves of information. The system, developed under the direction of John Poindexter, then-director of DARPA’s Information Awareness Office, was envisioned to give law enforcement access to private data without suspicion of wrongdoing or a warrant.

TIA purported to capture the “information signature” of people so that the government could track potential terrorists and criminals involved in “low-intensity/low-density” forms of warfare and crime. The goal was to track individuals through collecting as much information about them as possible and using computer algorithms and human analysis to detect potential terrorist activity.

The project called for the development of “revolutionary technology for ultra-large all-source information repositories,” which would contain information from multiple sources to create a “virtual, centralized, grand database.” This database would be populated by transaction data contained in current databases such as financial records, medical records, communication records, and travel records as well as new sources of information. Also fed into the database would be intelligence data.

The public found it so abhorrent, and objected so forcefully, that Congress killed funding for the program in September 2003.

None of us thought that meant the end of TIA, only that it would turn into a classified program and be renamed. Well, the program is now called Tangram, and it is classified:

The government’s top intelligence agency is building a computerized system to search very large stores of information for patterns of activity that look like terrorist planning. The system, which is run by the Office of the Director of National Intelligence, is in the early research phases and is being tested, in part, with government intelligence that may contain information on U.S. citizens and other people inside the country.

It encompasses existing profiling and detection systems, including those that create “suspicion scores” for suspected terrorists by analyzing very large databases of government intelligence, as well as records of individuals’ private communications, financial transactions, and other everyday activities.

The information about Tangram comes from a government document looking for contractors to help design and build the system.

DefenseTech writes:

The document, which is a description of the Tangram program for potential contractors, describes other, existing profiling and detection systems that haven’t moved beyond so-called “guilt-by-association models,” which link suspected terrorists to potential associates, but apparently don’t tell analysts much about why those links are significant. Tangram wants to improve upon these methods, as well as investigate the effectiveness of other detection links such as “collective inferencing,” which attempt to create suspicion scores of entire networks of people simultaneously.

Data mining for terrorists has always been a dumb idea. And the existence of Tangram illustrates the problem with Congress trying to stop a program by killing its funding; it just comes back under a different name.
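The arithmetic behind that first claim is worth making explicit. Here is a back-of-the-envelope sketch with hypothetical numbers (the rates are invented for illustration; real systems are far worse): even a detector that is accurate beyond anything plausible drowns analysts in false alarms when the thing being hunted is this rare.

```python
# Base-rate arithmetic for terrorist data mining (hypothetical numbers).
population = 300_000_000      # roughly the U.S. population
terrorists = 10               # assume ten actual plotters in the data
true_positive_rate = 0.99     # detector flags 99% of real plotters
false_positive_rate = 0.01    # and wrongly flags only 1% of innocents

caught = terrorists * true_positive_rate
false_alarms = (population - terrorists) * false_positive_rate

print(f"real plotters flagged: {caught:.1f}")
print(f"innocents flagged:     {false_alarms:,.0f}")
print(f"odds a flag is real:   1 in {false_alarms / caught:,.0f}")
```

With these generous assumptions, roughly three million innocents get flagged for every ten plotters, so each individual alarm is almost certainly false.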

Posted on October 31, 2006 at 6:59 AM • 33 Comments

Privacy and Google

Mother Jones article on Google and privacy:

Google Larry Page and Sergey Brin, the two former Stanford geeks who founded the company that has become synonymous with Internet searching, and you’ll find more than a million entries each. But amid the inevitable dump of press clippings, corporate bios, and conference appearances, there’s very little about Page’s and Brin’s personal lives; it’s as if the pair had known all along that Google would change the way we acquire information, and had carefully insulated their lives—putting their homes under other people’s names, choosing unlisted numbers, abstaining from posting anything personal on web pages.

That obsession with privacy may explain Google’s puzzling reaction last year, when Elinor Mills, a reporter with the tech news service cnet, ran a search on Google ceo Eric Schmidt and published the results: Schmidt lived with his wife in Atherton, California, was worth about $1.5 billion, had dumped about $140 million in Google shares that year, was an amateur pilot, and had been to the Burning Man festival. Google threw a fit, claimed that the information was a security threat, and announced it was blacklisting cnet’s reporters for a year. (The company eventually backed down.) It was a peculiar response, especially given that the information Mills published was far less intimate than the details easily found online on every one of us. But then, this is something of a pattern with Google: When it comes to information, it knows what’s best.

Posted on October 30, 2006 at 12:56 PM • 51 Comments

Friday Squid Blogging: Greenland Squid Balls

A snack:

These snacks had a cheese puff-like consistency and were a bit larger than your typical cheese balls. They had a somewhat fishy but sweet taste upon first biting in, and then the fishiness got stronger and worse with subsequent bites, with a hot taste also kicking in and then lingering for the aftertaste. Everyone who tried these just hated them. Nobody was able to eat more than one squid ball. The hot flavor on its own might have possibly been good, but we’ll never know, because the squid taste was bad, and the combination of flavors just didn’t work and tasted awful.

Posted on October 27, 2006 at 4:31 PM • 23 Comments

Surveillance as Performance Art

Hasan Elahi has been making his every movement public, after being detained by the FBI (and then cleared) when entering the country:

For the next few months, every trip Elahi took, he’d call his FBI agent and give the routing, so he didn’t get detained along the way. He realized, after a point—why just tell the FBI—why not tell everyone?

So he hacked his cellphone into a tracking bracelet which he wears on his ankle, reporting his movements on a map—log onto his site and you can see that he’s in Camden. But he’s gone further, trying to document his life in a series of photos: the airports he passes through, the meals he eats, the bathrooms he uses. The result is a photographic record of his daily life which would be very hard to falsify. We all know photos can be digitally altered… but altering as many photos as Elahi puts online would require a whole team trying to build this alternative path through the world.

Elahi also puts other aspects of his life online, including his banking records. This gives a record of his purchases, which complements the photographs. He doesn’t put the phone records online, because it would compromise the privacy of the people he talks with, and some friends have asked him to stop visiting, but he views the self-surveillance both as an art form and as his perpetual alibi for the next time the FBI questions him.

At the same time, he’s stretching the limits of surveillance systems, taking advantage of non-places. He flew to Singapore for four days and never left the airport, never clearing customs. For four days, he was noplace—he’d fallen off the map, which is precisely what the FBI and others worry about. But he documented every noodle and every toilet along the way.

This is extreme, but the level of surveillance is likely to be the norm. It won’t be on a public website available to everyone, but it will be available to governments and corporations.

Posted on October 27, 2006 at 12:49 PM • 26 Comments

Canadian "Guidelines for Identification and Authentication"

These guidelines, released by the Canadian Privacy Commissioner, are a good document discussing both privacy risks and security threats:

Authentication processes can contribute to the protection of privacy by reducing the risk of unauthorized disclosures, but only if they are appropriately designed given the sensitivity of the information and the risks associated with the information. Overly rigorous authentication processes, or requiring individuals to authenticate themselves unnecessarily, can be privacy intrusive.

And here’s a longer document published in 2004 by Industry Canada: “Principles for Electronic Authentication.”

Posted on October 27, 2006 at 7:29 AM • 12 Comments

Create Your Own Northwest Boarding Pass

Use this handy boarding-pass generator to: 1) get through airport security without a ticket, 2) bypass the “extra screening” if you have “SSSS” printed on your ticket, or 3)—and this is harder—snag yourself a Business Class seat with a Coach ticket.

EDITED TO ADD (10/28): Lots of news on this item: the page is down, and he was visited by the FBI.

Posted on October 26, 2006 at 4:35 PM • 62 Comments

Heathrow Tests Biometric ID

Heathrow airport is testing an iris scan biometric machine to identify passengers at customs.

I’ve written previously about biometrics: when they work and when they fail:

Biometrics are powerful and useful, but they are not keys. They are useful in situations where there is a trusted path from the reader to the verifier; in those cases all you need is a unique identifier. They are not useful when you need the characteristics of a key: secrecy, randomness, the ability to update or destroy. Biometrics are unique identifiers, but they are not secrets.

The system under trial at Heathrow is a good use of biometrics. There’s a trusted path from the person through the reader to the verifier; attempts to use fake eyeballs will be immediately obvious and suspicious. The verifier is being asked to match a biometric with a specific reference, and not to figure out who the person is from his or her biometric. There’s no need for secrecy or randomness; it’s not being used as a key. And it has the potential to really speed up customs lines.
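The “biometrics are identifiers, not secrets” distinction above can be made concrete with a toy contrast (all class names and data here are invented for illustration): a secret key can be rotated after it leaks; a biometric cannot.

```python
# Sketch: a revocable secret (password) vs. a permanent, non-secret
# identifier (biometric template). Illustrative only.
import hashlib
import os

class PasswordCredential:
    def __init__(self, password: str):
        self.rotate(password)

    def rotate(self, new_password: str) -> None:
        # After a database leak, the user simply picks a new secret.
        self.salt = os.urandom(16)
        self.digest = hashlib.sha256(self.salt + new_password.encode()).digest()

    def verify(self, attempt: str) -> bool:
        return hashlib.sha256(self.salt + attempt.encode()).digest() == self.digest

class BiometricTemplate:
    def __init__(self, template: bytes):
        self.template = template  # derived from the user's iris: fixed for life

    def verify(self, sample: bytes) -> bool:
        # Real systems do fuzzy matching; exact comparison stands in for that.
        return sample == self.template

    def rotate(self) -> None:
        raise RuntimeError("cannot issue the user a new iris")

cred = PasswordCredential("hunter2")
assert cred.verify("hunter2")
cred.rotate("correct horse")       # leaked? fine, replace it
assert not cred.verify("hunter2")

iris = BiometricTemplate(b"iris-code-of-alice")
assert iris.verify(b"iris-code-of-alice")
# Once this template leaks, anyone holding a copy matches forever:
try:
    iris.rotate()
except RuntimeError as err:
    print(err)
```

This is why the trusted path matters so much in the Heathrow design: since the iris code itself is not secret, security rests entirely on the reader verifying a live eyeball.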

Posted on October 26, 2006 at 1:04 PM • 27 Comments

Tamper-Evident Seals

Interesting article, available to subscribers only (unfortunately):

Prehistoric evidence indicates that people have always been concerned with detecting whether others have tampered with their belongings. Early human beings may have swept the ground in front of their dwellings to detect trespassers’ footprints. At least 7,000 years ago, intricate stone carvings were pressed into clay to seal jars and later, writing tablets. What is the most secure way to ensure that people are not messing with your things? Roger Johnston’s tests have covered everything from ancient clay seals to metal flange seals used to secure cargo containers and electronic seals used on nuclear material. He has found that high-tech, expensive seals are often no more reliable, and factors such as properly training inspectors to know what to look for are often just as important as the seal itself. Johnston has also developed some new electronic seals that are harder to defeat because they use “anti-evidence”: They provide the correct passcode only when they are not tampered with, and the passcode is erased if they are interrupted.

Posted on October 26, 2006 at 7:01 AM • 17 Comments

Cheyenne Mountain Retired

Cheyenne Mountain was the United States’ underground command post, designed to survive a direct hit from a nuclear warhead. It’s a Cold War relic—built in the 1960s—and retiring the site is probably a good idea. But this paragraph gives me pause:

Keating said the new control room, in contrast, could be damaged if a terrorist commandeered a jumbo jet and somehow knew exactly where to crash it. But “how unlikely is that? We think very,” Keating said.

I agree that this is an unlikely terrorist target, but still.

Posted on October 25, 2006 at 4:35 PM • 46 Comments

Hacker-Controlled Computers Hiding Better

If you have control of a network of computers—by infecting them with some sort of malware—the hard part is controlling that network. Traditionally, these computers (called zombies) are controlled via IRC. But IRC can be detected and blocked, so the hackers have adapted:

Instead of connecting to an IRC server, newly compromised PCs connect to one or more Web sites to check in with the hackers and get their commands. These Web sites are typically hosted on hacked servers or computers that have been online for a long time. Attackers upload the instructions for download by their bots.

As a result, protection mechanisms, such as blocking IRC traffic, will fail. This could mean that zombies, which so far have mostly been broadband-connected home computers, will be created using systems on business networks.

The trick here is to not let the computer’s legitimate owner know that someone else is controlling it. It’s an arms race between attacker and defender.
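Why the defenders' old trick stops working is easy to see in a sketch. A filter that blocks well-known IRC ports catches the old-style bots, but an HTTP check-in looks, at this level of inspection, exactly like ordinary web browsing (ports and labels below are illustrative):

```python
# Sketch of port-based egress filtering, and why it fails against
# bots that fetch commands over HTTP.
IRC_PORTS = {6665, 6666, 6667, 6668, 6669, 6697}

def port_filter(dst_port: int) -> str:
    """Naive egress policy: block known IRC ports, allow the rest."""
    return "blocked" if dst_port in IRC_PORTS else "allowed"

traffic = [
    ("old-style bot connecting to IRC C&C server", 6667),
    ("employee browsing a news site", 80),
    ("new-style bot polling a hacked web server for commands", 80),
]

for label, port in traffic:
    print(f"{port_filter(port):7}  {label}")
```

The second and third connections are indistinguishable to this filter, which is exactly why such bots can survive on locked-down business networks.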

Posted on October 25, 2006 at 12:14 PM • 24 Comments

Paramedic Stopped at Airport Security for Nitroglycerine Residue

At least we know those chemical-residue detectors are working:

The punch line is that my bag tested positive for nitroglycerine residue. Which is, in hindsight, totally not unexpected, since it has been home to several bottles of nitro spray that at one point or another have found their way into my pockets and then into my bag. (Don’t look at me like that—I’m not stealing the damn drug. It’s just that it’s frequently easier to shove them in a pants pocket rather than keep fishing for one at the bedside or whatever, and besides, we’ve now gone to single-patient use sprays so that once you use one on one patient, it’s finished.) Whether one discharged, or leaked, or whatevered in my bag, it somehow got NTG molecules all over the place, and that’s what the detector picked up. The guy said this happens all the time but I’m not so sure, and in any event I’m not even remotely certain how I could go about getting the NTG residue off my bag so this doesn’t happen in the future. NTG spray has a pretty distinctive smell. All I can smell in my bag is consumer electronics, so it must have been some minute amount somewhere.

Posted on October 25, 2006 at 8:59 AM • 45 Comments

Real-World Social Engineering Crime

Classic:

Late on Monday, two thieves used a swipe card to drive a van up to Easynet’s Brick Lane headquarters. Once inside they began loading equipment into their van. They were watched by two security guards—one was doing his rounds and the other watched by CCTV—but both assumed the thieves, with their legitimate swipe cards, also had a legitimate reason to take the kit, according to our sources.

EDITED TO ADD (11/25): Here’s another story (link in Turkish). The police receive an anonymous emergency call from someone claiming to have planted an explosive in the Haydarpasa Numune Hospital. They evacuate the hospital (100 patients plus doctors, staff, visitors, etc.) and search the place for two hours. They find nothing. When patients and visitors return, they realize that their valuables were stolen.

Posted on October 24, 2006 at 2:13 PM • 36 Comments

Airline Passenger Profiling for Profit

I have previously written and spoken about the privacy threats that come from the confluence of government and corporate interests. It’s not the deliberate police-state privacy invasions from governments that worry me, but the normal-business privacy invasions by corporations—and how corporate privacy invasions pave the way for government privacy invasions and vice versa.

The U.S. government’s airline passenger profiling system was called Secure Flight, and I’ve written about it extensively. At one point, the system was going to perform automatic background checks on all passengers based on both government and commercial databases—credit card databases, phone records, whatever—and assign everyone a “risk score” based on the data. Those with a higher risk score would be searched more thoroughly than those with a lower risk score. It’s a complete waste of time, and a huge invasion of privacy, and the last time I paid attention it had been scrapped.

But the very same system that is useless at picking terrorists out of passenger lists is probably very good at identifying consumers. So what the government rightly decided not to do, the start-up corporation Jetera is doing instead:

Jetera would start with an airline’s information on individual passengers on board a given flight, drawing the name, address, credit card number and loyalty club status from reservations data. Through a process, for which it seeks a patent, the company would match the passenger’s identification data with the mountains of information about him or her available at one of the mammoth credit bureaus, which maintain separately managed marketing as well as credit information. Jetera would tap into the marketing side, showing consumer demographics, purchases, interests, attitudes and the like.

Jetera’s data manipulation would shape the entertainment made available to each passenger during a flight. The passenger who subscribes to a do-it-yourself magazine might be offered a video on woodworking. Catalog purchase records would boost some offerings and downplay others. Sports fans, known through their subscriptions, credit card ticket-buying or booster club memberships, would get “The Natural” instead of “Pretty Woman.”
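The matching step the article describes amounts to a record join between reservation data and a marketing profile, keyed on something like the credit card number. Here is a toy sketch of the idea (all names, keys, and fields are invented; Jetera's actual patent-pending process is not public):

```python
# Toy record linkage: reservation data joined to a marketing profile,
# which then drives the in-flight offer. All data here is made up.
reservations = {
    "card-1234": {"name": "J. Doe", "seat": "14C"},
}
marketing_profiles = {
    "card-1234": {"subscriptions": ["Fine Woodworking"],
                  "purchases": ["table saw"]},
}
catalog = {
    "Fine Woodworking": "woodworking video",
    "Sports Illustrated": "The Natural",
}

def offer_for(card_key: str) -> str:
    """Pick an in-flight offering from the joined marketing profile."""
    profile = marketing_profiles.get(card_key, {})
    for subscription in profile.get("subscriptions", []):
        if subscription in catalog:
            return catalog[subscription]
    return "default in-flight movie"

print(offer_for("card-1234"))   # the do-it-yourself subscriber gets woodworking
print(offer_for("card-9999"))   # no profile match: generic programming
```

The privacy problem is visible right in the join key: the same identifier that buys the ticket unlocks the credit bureau's entire marketing dossier.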

The article is dated August 21, 2006 and is subscriber-only. Most of it talks about the revenue potential of the model, the funding the company received, and the talks it has had with anonymous airlines. No airline has signed up for the service yet, which would not only include in-flight personalization but pre- and post-flight mailings and other personalized services. Privacy is dealt with at the end of the article:

Jetera sees two legal issues regarding privacy and resolves both in its favor. Nothing Jetera intends to do would violate federal law or airline privacy policies as expressed on their websites. In terms of customer perceptions, Jetera doesn’t intend to abuse anyone’s privacy and will have an “opt-out” opportunity at the point where passengers make inflight entertainment choices.

If an airline wants an opt-out feature at some other point in the process, Jetera will work to provide one, McChesney says. Privacy and customer service will be an issue for each airline, and Jetera will adapt specifically to each.

The U.S. government already collects data from the phone company, from hotels and rental-car companies, and from airlines. How long before it piggybacks onto this system?

The other side to this is in the news, too: commercial databases using government data:

Records once held only in paper form by law enforcement agencies, courts and corrections departments are now routinely digitized and sold in bulk to the private sector. Some commercial databases now contain more than 100 million criminal records. They are updated only fitfully, and expunged records now often turn up in criminal background checks ordered by employers and landlords.

Posted on October 24, 2006 at 11:00 AM • 33 Comments

Air Cargo Security

BBC is reporting a “major” hole in air cargo security. Basically, cargo is being flown on passenger planes without being screened. A would-be terrorist could therefore blow up a passenger plane by shipping a bomb via FedEx.

In general, cargo deserves much less security scrutiny than passengers. Here’s the reasoning:

Cargo planes are much less of a terrorist risk than passenger planes, because terrorism is about innocents dying. Blowing up a planeload of FedEx packages is annoying, but not nearly as terrorizing as blowing up a planeload of tourists. Hence, the security around air cargo doesn’t have to be as strict.

Given that, if most air cargo flies around on cargo planes, then it’s okay for some small amount—assuming it’s random and assuming the shipper doesn’t know which packages beforehand—of cargo to fly as baggage on passenger planes. A would-be terrorist would be better off taking his bomb and blowing up a bus than shipping it and hoping it might possibly be put on a passenger plane.

At least, that’s the theory. But theory and practice are different.
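The theory is just an expected-value argument. If a shipper cannot choose or predict which parcels ride as belly cargo, the chance a mailed bomb ends up on a passenger plane is simply the fraction of cargo routed that way. A quick sketch with made-up numbers:

```python
# Expected-value sketch of the random-routing argument (numbers invented).
def p_on_passenger_plane(passenger_fraction: float, parcels: int) -> float:
    """Chance at least one of `parcels` shipments rides a passenger plane,
    assuming routing is random and unpredictable to the shipper."""
    return 1 - (1 - passenger_fraction) ** parcels

# If only 5% of cargo flies as belly cargo, one parcel is a long shot:
print(f"at  5%: {p_on_passenger_plane(0.05, 1):.0%}")
# At the 70% figure reported by the BBC, the gamble looks very different:
print(f"at 70%: {p_on_passenger_plane(0.70, 1):.0%}")
```

The argument only holds while the passenger-plane fraction is small and the routing genuinely unpredictable; package-tracking websites and a 70% fraction undermine both assumptions.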

The British system involves “known shippers”:

Under a system called “known shipper” or “known consignor” companies which have been security vetted by government appointed agents can send parcels by air, which do not have to be subjected to any further security checks.

Unless a package from a known shipper arouses suspicion or is subject to a random search it is taken on trust that its contents are safe.

But:

Captain Gary Boettcher, president of the US Coalition Of Airline Pilots Associations, says the “known shipper” system “is probably the weakest part of the cargo security today”.

“There are approx 1.5 million known shippers in the US. There are thousands of freight forwarders. Anywhere down the line packages can be intercepted at these organisations,” he said.

“Even reliable respectable organisations, you really don’t know who is in the warehouse, who is tampering with packages, putting parcels together.”

This system has already been exploited by drug smugglers:

Mr Adeyemi brought pounds of cocaine into Britain unchecked by air cargo, transported from the US by the Federal Express courier company. He did not have to pay the postage.

This was made possible because he managed to illegally buy the confidential Fed Ex account numbers of reputable and security cleared companies from a former employee.

An accomplice in the US was able to put the account numbers on drugs parcels which, as they appeared to have been sent by known shippers, arrived unchecked at Stansted Airport.

When police later contacted the companies whose accounts and security clearance had been so abused they discovered they had suspected nothing.

And it’s not clear that a terrorist can’t figure out which shipments are likely to be put on passenger aircraft:

However several large companies such as FedEx and UPS offer clients the chance to follow the progress of their parcels online.

This is a facility that Chris Yates, an expert on airline security for Jane’s Transport, says could be exploited by terrorists.

“From these you can get a fair indication when that package is in the air, if you are looking to get a package into New York from Heathrow at a given time of day.”

And BBC reports that 70% of cargo is shipped on passenger planes. That seems like too high a number.

If we had infinite budget, of course we’d screen all air cargo. But we don’t, and it’s a reasonable trade-off to ignore cargo planes and concentrate on passenger planes. But there are some awfully big holes in this system.

Posted on October 24, 2006 at 6:11 AM • 31 Comments

Online Hacker Forums

Really interesting article about online hacker forums, especially the politics that goes on in them.

Clearly enterprising and given to posting rambling messages explaining his strategic thinking, Iceman grew CardersMarket’s membership to 1,500. On Aug. 16, he hacked into four rival forums’ databases, electronically extracted their combined 4,500 members, and in one stroke quadrupled CardersMarket’s membership to 6,000, according to security experts who monitored the takeovers.

The four hijacked forums—DarkMarket, TalkCash, ScandinavianCarding and TheVouched—became inaccessible to their respective members. Shortly thereafter, all of the historical postings from each of those forums turned up integrated into the CardersMarket website.

To make that happen, Iceman had to gain access to each forum’s underlying database, tech-security experts say. Iceman boasted in online postings that he took advantage of security flaws lazily left unpatched. CardCops’ Clements says he probably cracked weak database passwords. “Somehow he got through to those servers to grab the historical postings and move them to CardersMarket,” he says.

Iceman lost no time touting his business rationale and hyping the benefits. In a posting on CardersMarket shortly after completing the takeovers he wrote: “basically, (sic) this was long overdue … why (sic) have five different forums each with the same content, splitting users and vendors, and a mish mash of poor security and sometimes poor administration?”

He dispatched an upbeat e-mail to new members heralding CardersMarket’s superior security safeguards. The linchpin: a recent move of the forum’s host computer server to Iran, putting it far beyond the reach of U.S. authorities. He described Iran as “possibly the most politically distant country to the united states (sic) in the world today.”

Posted on October 23, 2006 at 2:54 PM

Perceived Risk vs. Actual Risk

Good essay on perceived vs. actual risk. The hook is Mayor Daley of Chicago demanding a no-fly-zone over Chicago in the wake of the New York City airplane crash.

Other politicians (with the spectacular and notable exception of New York City Mayor Michael Bloomberg) and self-appointed “experts” are jumping on the tragic accident—repeat, accident—in New York to sound off again about the “danger” of light aircraft, and how they must be regulated, restricted, banned.

OK, for all of those ranting about “threats” from GA aircraft, we’ll believe that you’re really serious about controlling “threats” when you call for:

  • Banning all vans within cities. A small panel van was used in the first World Trade Center attack. The bomb, which weighed 1,500 pounds, killed six and injured 1,042.
  • Banning all box trucks from cities. Timothy McVeigh’s rented Ryder truck carried a 5,000-pound bomb that killed 168 in Oklahoma City.
  • Banning all semi-trailer trucks. They can carry bombs weighing more than 50,000 pounds.
  • Banning newspapers on subways. That’s how the terrorists hid packages of sarin nerve gas in the Tokyo subway system. They killed 12.
  • Banning backpacks on all buses and subways. That’s how the terrorists got the bombs into the London subway system. They killed 52.
  • Banning all cell phones on trains. That’s how they detonated the bombs in backpacks placed on commuter trains in Madrid. They killed 191.
  • Banning all small pleasure boats on public waterways. That’s how terrorists attacked the USS Cole, killing 17.
  • Banning all heavy or bulky clothing in all public places. That’s how suicide bombers hide their murderous charges. Thousands killed.

Number of people killed by a terrorist attack using a GA aircraft? Zero.

Number of people injured by a terrorist attack using a GA aircraft? Zero.

Property damage from a terrorist attack using a GA aircraft? None.

So Mr. Mayor (and Mr. Governor, Ms. Senator, Mr. Congressman, and Mr. “Expert”), if you’re truly serious about “protecting” the public, advocate all of the bans I’ve listed above. Using the “logic” you apply to general aviation aircraft, you’re forced to conclude that newspapers, winter coats, cell phones, backpacks, trucks, and boats all pose much greater risks to the public.

So be consistent in your logic. If you are dead set on restricting a personal transportation system that carries more passengers than any single airline, reaches more American cities than all the airlines combined, provides employment for 1.3 million American citizens and $160 billion in business “to protect the public,” then restrict or control every other transportation system that the terrorists have demonstrated they can use to kill.

And, on the same topic, why it doesn’t make sense to ban small aircraft from cities as a terrorism defense.

Posted on October 23, 2006 at 10:01 AM • 66 Comments

Security and Class

I don’t think I’ve ever read anyone talking about class issues as they relate to security before:

On July 23, 2003, New York City Council candidate Othniel Boaz Askew was able to shoot and kill council member and rival James Davis with a gun in school headquarters at City Hall, even though entrance to the building required a trip through a magnetometer. How? Askew used his politicians’ privilege—a courtesy wave around from security guards at the magnetometer.

An isolated incident? Hardly. In 2002, undercover investigators from Congress’ auditing arm, the General Accounting Office, used fake law enforcement credentials to get the free pass around the magnetometers at various federal office buildings around the country.

What we see here is class warfare on the security battleground. The reaction to Sept. 11 has led to harassment, busywork, and inconvenience for us all—well, almost all. A select few who know the right people, hold the right office or own the right equipment don’t suffer the ordeals. They are waved around security checkpoints or given broad exceptions to security lockdowns.

If you want to know why America’s security is so heavy on busywork and inconvenience and light on practicality, consider this: The people who make the rules don’t have to live with them. Public officials, some law enforcement officers and those who can afford expensive hobbies are often able to pull rank.

Posted on October 19, 2006 at 12:25 PM • 38 Comments

Lousy Home Security Installation

Impressively bad. (Yes, it’s an advertisement. But there are still important security lessons in the blog post.)

1. The keypad is actually the control panel. This particular model is called a Lynx and is manufactured by Honeywell. However, most of the major manufacturers have their own version of an “all-in-one” control panel, siren & keypad (Here is a link to GE’s version). These all-in-one models were designed to simplify installation and are typically part of “free” or low-cost alarm systems. They are all equally useless.

The most important problem with systems like this is the fact that you need to have a delay time in order to open your door and get to the keypad each time you enter your home. So, when a crook breaks in, they also have the same amount of time. If the crook follows the sound of the beeping keypad they will be standing in front of not only the keypad, but the brains of the alarm system. So, rather than punching in a valid code, the crook could simply rip the entire unit off of the wall.

Provided that they rip the panel off of the wall before the alarm sends its first signal, it will never be able to send a signal.

2. If point #1 wasn’t bad enough (or maybe because the installer who put the ‘system’ in realized how useless it was going to be) the power supply for the system is located right beside the keypad/control panel. Unplug the transformer (which is just barely able to stay plugged in as it is) and the alarm loses power. This provides a really convenient way for someone to either accidentally or intentionally unplug the system and wait for the back-up battery to die.

3. Even worse, the phone jack has also been located beside the power supply. The phone jack is the alarm system’s only connection to the outside world. If it gets unplugged, the system cannot communicate and a crook would not have to go through the hassle of ripping the panel off of the wall.

Posted on October 19, 2006 at 9:46 AM • 24 Comments

Architecture and Security

You’ve seen them: those large concrete blocks in front of skyscrapers, monuments and government buildings, designed to protect against car and truck bombs. They sprang up like weeds in the months after 9/11, but the idea is much older. The prettier ones doubled as planters; the uglier ones just stood there.

Form follows function. From medieval castles to modern airports, security concerns have always influenced architecture. Castles appeared during the reign of King Stephen of England because they were the best way to defend the land and there wasn’t a strong king to put any limits on castle-building. But castle design changed over the centuries in response to both innovations in warfare and politics, from motte-and-bailey to concentric design in the late medieval period to entirely decorative castles in the 19th century.

These changes were expensive. The problem is that architecture tends toward permanence, while security threats change much faster. Something that seemed a good idea when a building was designed might make little sense a century—or even a decade—later. But by then it’s hard to undo those architectural decisions.

When Syracuse University built a new campus in the mid-1970s, the student protests of the late 1960s were fresh on everybody’s mind. So the architects designed a college without the open greens of traditional college campuses. It’s now 30 years later, but Syracuse University is stuck defending itself against an obsolete threat.

Similarly, hotel entries in Montreal were elevated above street level in the 1970s, in response to security worries about Quebecois separatists. Today the threat is gone, but those older hotels continue to be maddeningly difficult to navigate.

Also in the 1970s, the Israeli consulate in New York built a unique security system: a two-door vestibule that allowed guards to identify visitors and control building access. Now this kind of entryway is widespread, and buildings with it will remain unwelcoming long after the threat is gone.

The same thing can be seen in cyberspace as well. In his book, Code and Other Laws of Cyberspace, Lawrence Lessig describes how decisions about technological infrastructure—the architecture of the internet—become embedded and then impracticable to change. Whether it’s technologies to prevent file copying, limit anonymity, record our digital habits for later investigation or reduce interoperability and strengthen monopoly positions, once technologies based on these security concerns become standard it will take decades to undo them.

It’s dangerously shortsighted to make architectural decisions based on the threat of the moment without regard to the long-term consequences of those decisions.

Concrete building barriers are an exception: They’re removable. They started appearing in Washington, D.C., in 1983, after the truck bombing of the Marine barracks in Beirut. After 9/11, they were a sort of bizarre status symbol: They proved your building was important enough to deserve protection. In New York City alone, more than 50 buildings were protected in this fashion.

Today, they’re slowly coming down. Studies have found they impede traffic flow, turn into giant ashtrays and can pose a security risk by becoming flying shrapnel if exploded.

We should be thankful they can be removed, and did not end up as permanent aspects of our cities’ architecture. We won’t be so lucky with some of the design decisions we’re seeing about internet architecture.

This essay originally appeared (my 29th column) on Wired.com.

EDITED TO ADD (11/3): Activism-restricting architecture at the University of Texas. And commentary from the Architectures of Control in Design Blog.

Posted on October 19, 2006 at 9:27 AM • 43 Comments

The Death of Ephemeral Conversation

The political firestorm over former U.S. Rep. Mark Foley’s salacious instant messages hides another issue, one about privacy. We are rapidly turning into a society where our intimate conversations can be saved and made public later. This represents an enormous loss of freedom and liberty, and the only way to solve the problem is through legislation.

Everyday conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Of course, organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was the default assumption.

This has changed. We now type our casual conversations. We chat in e-mail, with instant messages on our computer and SMS messages on our cellphones, and in comments on social networking Web sites like Friendster, LiveJournal, and MySpace. These conversations—with friends, lovers, colleagues, fellow employees—are not ephemeral; they leave their own electronic trails.

We know this intellectually, but we haven’t truly internalized it. We type on, engrossed in conversation, forgetting that we’re being recorded.

Foley’s instant messages were saved by the young men he talked to, but they could have also been saved by the instant messaging service. There are tools that allow both businesses and government agencies to monitor and log IM conversations. E-mail can be saved by your ISP or by the IT department in your corporation. Gmail, for example, saves everything, even if you delete it.

And these conversations can come back to haunt people—in criminal prosecutions, divorce proceedings or simply as embarrassing disclosures. During the 1998 Microsoft anti-trust trial, the prosecution pored over masses of e-mail, looking for a smoking gun. Of course they found things; everyone says things in conversation that, taken out of context, can prove anything.

The moral is clear: If you type it and send it, prepare to explain it in public later.

And voice is no longer a refuge. Face-to-face conversations are still safe, but we know that the National Security Agency is monitoring everyone’s international phone calls. (They said nothing about SMS messages, but one can assume they were monitoring those too.) Routine recording of phone conversations is still rare—certainly the NSA has the capability—but will become more common as telephone calls continue migrating to the IP network.

If you find this disturbing, you should. Fewer conversations are ephemeral, and we’re losing control over the data. We trust our ISPs, employers and cellphone companies with our privacy, but again and again they’ve proven they can’t be trusted. Identity thieves routinely gain access to these repositories of our information. Paris Hilton and other celebrities have been the victims of hackers breaking into their cellphone providers’ networks. Google reads our Gmail and inserts context-dependent ads.

Even worse, normal constitutional protections don’t apply to much of this. The police need a court-issued warrant to search our papers or eavesdrop on our communications, but can simply issue a subpoena—or ask nicely or threateningly—for data of ours that is held by a third party, including stored copies of our communications.

The Justice Department wants to make this problem even worse, by forcing ISPs and others to save our communications—just in case we’re someday the target of an investigation. This is not only bad privacy and security, it’s a blow to our liberty as well. A world without ephemeral conversation is a world without freedom.

We can’t turn back technology; electronic communications are here to stay. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard our privacy. We need a comprehensive data privacy law, protecting our data and communications regardless of where it is stored or how it is processed. We need laws forcing companies to keep it private and to delete it as soon as it is no longer needed.

And we need to remember, whenever we type and send, we’re being watched.

Foley is an anomaly. Most of us do not send instant messages in order to solicit sex with minors. Law enforcement might have a legitimate need to access Foley’s IMs, e-mails and cellphone calling logs, but that’s why there are warrants supported by probable cause—they help ensure that investigations are properly focused on suspected pedophiles, terrorists and other criminals. We saw this in the recent UK terrorist arrests; focused investigations on suspected terrorists foiled the plot, not broad surveillance of everyone without probable cause.

Without legal privacy protections, the world becomes one giant airport security area, where the slightest joke—or comment made years before—lands you in hot water. The world becomes one giant market-research study, where we are all life-long subjects. The world becomes a police state, where we all are assumed to be Foleys and terrorists in the eyes of the government.

This essay originally appeared on Forbes.com.

Posted on October 18, 2006 at 3:30 PM • 66 Comments

Swiss Police to Use Trojans for VoIP Tapping

At least they’re thinking about it:

Swiss authorities are investigating the possibility of tapping VoIP calls, which could involve commandeering ISPs to install Trojan code on target computers.

VoIP calls through software services such as Skype are encrypted as they are passed over the public Internet, in order to safeguard the privacy of the callers.

This presents a problem for anyone wanting to listen in, as they are faced with trying to decrypt the packets by brute force—not easy during a three-minute phone call. What’s more, many VoIP services are not based in Switzerland, so the authorities don’t have the jurisdiction to force them to hand over the decryption keys or offer access to calls made through these services.
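A rough back-of-the-envelope calculation (my numbers, not the article’s) shows why brute force is hopeless here. Assuming the call is protected by something like an AES-128 session key, even a wildly optimistic attacker trying a trillion keys per second covers only a vanishing fraction of the keyspace in three minutes:

```python
# Hypothetical attacker capability; real hardware is far slower.
seconds_per_call = 3 * 60            # a three-minute phone call
trials_per_second = 10**12           # one trillion key trials per second
keys_tried = seconds_per_call * trials_per_second

keyspace = 2**128                    # assumed AES-128 session key
fraction = keys_tried / keyspace     # share of the keyspace searched

print(f"Fraction of keyspace searched: {fraction:.2e}")
```

The result is on the order of 10^-25, which is why the only practical avenue is intercepting the audio before it is encrypted.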

The only alternative is to find a means of listening in at a point before the data is encrypted.

[…]

In order to install the application on the target computer, the Swiss authorities envisage two strategies: either have law enforcement surreptitiously install it locally, or have the telco or ISP which provides Internet access to that computer install it remotely.

The application, essentially a piece of Trojan code, is also able to turn on the microphone on the target PC and monitor not just VoIP conversations, but also any other ambient audio.

Posted on October 18, 2006 at 2:26 PM • 29 Comments

Targeted Trojan Horses Are the Future of Malware

Good article:

Security technology can stop common attacks, but targeted attacks fly under the radar. That’s because traditional products, which scan e-mail at the network gateway or on the desktop, can’t recognize the threat. Alarm bells will ring if a new attack targets thousands of people or more, but not if just a handful of e-mails laden with a new Trojan horse is sent.

“It is very much sweeping in under the radar,” said Graham Cluley, a senior technology consultant at Sophos, a U.K.-based antivirus company. If it is a big attack, security companies would know something is up, because it hits their customers’ systems and their own honeypots (traps set up to catch new and existing threats), he said.

Targeted attacks are, at most, a blip on the radar in the big scheme of security problems, researchers said. MessageLabs pulls about 3 million pieces of malicious software out of e-mail messages every day. Only seven of those can be classified as a targeted Trojan attack, said Alex Shipp, a senior antivirus technologist at the e-mail security company.

“A typical targeted attack will consist of between one and 10 similar e-mails directed at between one and three organizations,” Shipp said. “By far the most common form of attack is to send just one e-mail to one organization.”

Posted on October 17, 2006 at 7:04 AM • 25 Comments

Please Stop My Car

Residents of Prescott Valley are being invited to register their car if they don’t drive in the middle of the night. Police will then stop those cars if they are on the road at that time, under the assumption that they’re stolen.

The Watch Your Car decal program is a voluntary program whereby vehicle owners enroll their vehicles with the AATA. The vehicle is then entered into a special database, developed and maintained by the AATA, which is directly linked to the Motor Vehicle Division (MVD).

Participants then display the Watch Your Car decals in the front and rear windows of their vehicle. By displaying the decals, vehicle owners convey to law enforcement officials that their vehicle is not usually in use between the hours of 1:00 AM and 5:00 AM, when the majority of thefts occur.

If a police officer witnesses the vehicle in operation between these hours, they have the authority to pull it over and question the driver. With access to the MVD database, the officer will be able to determine if the vehicle has been stolen, or not. The program also allows law enforcement officials to notify the vehicle’s owner immediately upon determination that it is being illegally operated.

This program is entirely optional, but there’s a serious externality. If the police spend time chasing false alarms, they’re not available for other police business. If the town charged car owners a fine for each false alarm, I would have no problems with this program. It doesn’t have to be a large fine, but it has to be enough to offset the cost to the town. It’s no different than police departments charging homeowners for false burglar alarms, when the alarm systems are automatically hooked into the police stations.

Posted on October 16, 2006 at 6:30 AM • 71 Comments

A Million Random Digits

The Rand Corporation published A Million Random Digits with 100,000 Normal Deviates back in 1955, when generating random numbers was hard.

The random digits in the book were produced by rerandomization of a basic table generated by an electronic roulette wheel. Briefly, a random frequency pulse source, providing on the average about 100,000 pulses per second, was gated about once per second by a constant frequency pulse. Pulse standardization circuits passed the pulses through a 5-place binary counter. In principle the machine was a 32-place roulette wheel which made, on the average, about 3000 revolutions per trial and produced one number per second. A binary-to-decimal converter was used which converted 20 of the 32 numbers (the other twelve were discarded) and retained only the final digit of two-digit numbers; this final digit was fed into an IBM punch to produce finally a punched card table of random digits.

I have a copy of the original book; it’s one of my library’s prize possessions. I had no idea that the book was reprinted in 2002; it’s available on Amazon. But even if you don’t buy it, go to the Amazon page and read the user reviews. They’re hysterical.

This is what I said in Applied Cryptography:

The meat of the book is the “Table of Random Digits.” It lists them in five-digit groups—”10097 32533 76520 13586 …”—50 on a line and 50 lines on a page. The table goes on for 400 pages and, except for a particularly racy section on page 283 which reads “69696,” makes for a boring read.
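As a side note, the machine’s pipeline as described in the quoted passage (a 32-state counter, 20 of the 32 states converted, only the final digit retained) is easy to simulate. Here’s a sketch, using Python’s random module as a stand-in for the electronic roulette wheel; the function names are my own:

```python
import random

def roulette_digit():
    """One digit from a simulated RAND machine: a 5-place binary
    counter gives 32 states; only 20 of the 32 are converted to
    decimal (the other 12 are discarded), and only the final digit
    of two-digit numbers is retained, so each digit 0-9 corresponds
    to exactly two of the 20 accepted states."""
    while True:
        state = random.randrange(32)   # the "32-place roulette wheel"
        if state < 20:                 # 20 of 32 converted, 12 discarded
            return state % 10          # keep only the final digit

def table_row(n_digits=50):
    """Format digits in five-digit groups, 50 to a line, as in the book."""
    digits = "".join(str(roulette_digit()) for _ in range(n_digits))
    return " ".join(digits[i:i + 5] for i in range(0, n_digits, 5))

print(table_row())  # ten five-digit groups, mimicking the book's layout
```

Note that the rejection step (discarding 12 of the 32 states) is what keeps the digits uniform; taking all 32 states mod 10 would bias some digits.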

Posted on October 13, 2006 at 12:12 PM • 66 Comments

RFID Tagging People at Airports

How’s this for a dumb idea? Tagging passengers at airports. That’s all passengers.

EDITED TO ADD (10/13): Ross Anderson said this to me in e-mail: “The real reason for wanting to tag airline passengers is that when people check bags but don’t turn up for the flight in time, the bags have to be unloaded, causing expensive delays.” Interesting analysis.

Posted on October 13, 2006 at 7:28 AM • 60 Comments

Torture and the Ticking Time Bomb

Nice essay on the idiocy of the “ticking time bomb” theory of torture:

So let us imagine ourselves in the interrogation room with the suspect. Evidence collected from his apartment certainly seems to indicate that he has knowledge of a looming terrorist attack, but he is begging for mercy. Too bad, isn’t it? All we have done is deprive him of sleep and clothing. And it is a bit cold. Unfortunately, he may be scared and cold, but he hasn’t given us one scrap of useful information. And we’re under some time pressure. Your superior has an idea. For better cover, the suspect was living with his family, a wife and young daughter. We’re detaining them in another room. The evidence seems to show the suspect cares for them. Perhaps if we brought them into the room? Your superior warns you to steel yourself for what comes next. Perhaps the suspect will respond to mere threats that they might be put to death in front of him. If threats are not enough, however, we must be prepared to do the worst. Of course, in some cultures there are acts regarded as worse than death. Your superior looks at you. Do you understand what he is talking about? Of course you do. You are experienced in the ways of the TTB, of doing what is necessary to elicit information under the terrible pressure of a deadline.

I really hope I don’t have to elaborate further this fantastic scenario of moral corruption. Our popular culture is full of faux scenarios of torture and cruelty. Just check out your local video rental store. What’s amazing about the TTB is that it is taken to be “real,” a serious matter for public debate. But it’s no more real than my scenario, a Tom Clancy novel of military adventure or a superhero comic.

The TTB counts on eliciting a certain sort of response. Of course, “the president would have to authorize torture” to prevent millions from dying. But surely it puts a slightly different spin on the situation to imagine that you are the one responsible for making sure the interrogation is effective. And you will have to live with the consequences if you turn out to be wrong. What wouldn’t you do to prevent millions from dying? Well, I wouldn’t engage in torture, child abuse, murder, rape and a whole long list of morally corrupt acts. And I’m willing to bet you wouldn’t either. Scenarios like the TTB are well designed to cloud our reason and judgment. For that reason, we should avoid them and concentrate on the ways in which we can realistically prevent terrorist attacks.

I almost forgot. After you finish following orders and torturing the suspect, it turns out he really didn’t know anything. That’s the way almost all of these scenarios end, isn’t it?

Posted on October 12, 2006 at 2:09 PM • 66 Comments

Fukuyama on Secrecy

From the New York Times:

All new threats entail huge uncertainties. Then, as now, there was a pronounced tendency to assume the worst, and for the government to claim enormous discretion in protecting the American public. The Bush administration has consistently argued that it needs to be protected from Congressional oversight and media scrutiny. An example is the National Security Agency’s warrantless surveillance of telephone traffic into and out of the United States. Rather than going to Congress and trying to negotiate changes to the law that regulates such activities, the administration simply grabbed that authority for itself, saying, in effect, “Trust us: if you knew what we know about the threat, you’d be perfectly happy to have us do what we’re doing.” In other areas, like the holding of prisoners in Guantanamo and interrogation methods used there and in the Middle East, one can only quote Moynihan on an earlier era: “As fears of Communist conspiracies and German subversion mounted, it was the U.S. government’s conduct that approached the illegal.”

Even if we do not at this juncture know the full scope of the threat we face from jihadist terrorism, it is certainly large enough to justify many changes in the way we conduct our lives, both at home and abroad. But the American government does have a track record in dealing with similar problems in the past, one suggesting that all American institutions—Congress, the courts, the news media—need to do their jobs in scrutinizing official behavior, and not take the easy way out of deferring to the executive. Past experience also suggests that the government would do far better to make public what it knows, as well as the limits of that knowledge, if we are to arrive at a balanced view of the challenges we face today.

Posted on October 12, 2006 at 6:54 AM • 14 Comments

New Harder-to-Counterfeit Iraqi Police Uniforms

In an effort to deal with the problem of imposters in fake uniforms, Iraqi policemen now have a new uniform:

Police Colonel Abdul-Munim Jassim explained why the new uniform would be difficult for criminals to fake.

“The Americans take a photo of the policeman together with the number of the uniform. If found elsewhere, it will immediately be recognised as stolen,” he said.

Bolani promised tough measures against anyone caught counterfeiting or trading in the uniforms and praised his officers, telling them their work had begun to turn back the tide of violence around Iraq.

I’m sure these things help, but I don’t see what kind of difference it will make to a normal citizen faced with someone in a police uniform breaking down his door at night. Or when gunmen dressed in police uniforms execute the brother of Iraqi Vice President Tariq al-Hashimi.

Posted on October 11, 2006 at 12:28 PM • 32 Comments

Bureau of Industry and Security Hacked

The BIS is the part of the U.S. Department of Commerce responsible for export control. If you have a dual-use technology that needs special approval before it can be exported outside the U.S., or exported to specific countries, BIS is where you submit the paperwork.

It’s been hacked by “hackers working through Chinese servers,” and has been shut down. This may very well have been a targeted attack.

Manufacturers of hardware crypto devices—mass-market software is exempted—must submit detailed design information to BIS in order to get an export license. There’s a lot of detailed information on crypto products in the BIS computers.

Of course, I have no way of knowing if this information was breached or if that’s what the hackers were after, but it is interesting. On the other hand, any crypto product that relied on this information being secret doesn’t deserve to be on the market anyway.

Posted on October 11, 2006 at 7:16 AM • 24 Comments

Jelly As a Terrorist Risk

Continued terrorist paranoia causes yet another ridiculous story:

A pile of jelly [1] left by a road in Germany caused a major security alert after it was mistaken for toxic waste.

A large area near the town of Halle was cordoned off after a “flabby red, orange and green substance” was found by the road, Reuters reported.

Fire officers in protective suits spent two hours inspecting the substance before concluding it was jelly.

Years ago, someone would have just cleaned up the mess. Today, we call in firemen in HAZMAT suits.

[1] “Jelly” in Europe is Jell-O in America. What Americans call “jelly,” Europeans call “jam.”

Posted on October 10, 2006 at 1:28 PM • 60 Comments

Airport Security Confiscates Rock

They already take away scissors. Can paper be far behind?

Here’s the story:

In retrospect, I suppose I could have put the grapefruit-sized specimen inside my sock, swung it around my head like a mace, charged the cabin and attempted to hijack the flight. This, of course, never occurred to me until the zealous inspector declared my rock a “dual-use” item.

“What, pray tell, is a dual-use item?” I asked. I’m afraid I chuckled just a little, causing her to glare, withhold a satisfactory answer and call her supervisor. He hefted my rock, scrutinized it for a moment, and agreed that my specimen was indeed a dual-use item, meaning a potential low-tech weapon. During those uneasy moments when I thought I would be detained, I wondered if a doctor’s stethoscope would also be declared a dual-use item, since it could be used to strangle a pilot.

We can’t keep weapons out of prisons. We can’t possibly keep them out of airports.

Posted on October 10, 2006 at 11:53 AM • 94 Comments

The Doghouse: SecureRF

SecureRF:

Claims to offer the first feasible security for RFIDs. Conventional public-key cryptography (such as RSA) is far too computationally intensive for an RFID. SecureRF claims to provide a similar technology with a far smaller footprint by harnessing a relatively obscure area of mathematics: infinite group theory, which comes (of all places) from knot theory, a branch of topology.

Their website claims to have “white papers” on the theory, but you have to hand over your personal information to get them. Of course, they reference no actual published cryptography papers. “New mathematics” is my Snake-Oil Warning Sign #2—and I strongly suspect their documentation displays several of the other warning signs, too. I’d stay away from this one.

Posted on October 9, 2006 at 7:47 AM • 30 Comments

Opinion Monitoring Software

Interesting research:

A consortium of major universities, using Homeland Security Department money, is developing software that would let the government monitor negative opinions of the United States or its leaders in newspapers and other publications overseas.

Such a “sentiment analysis” is intended to identify potential threats to the nation, security officials said.

This kind of thing could actually be a good idea. For example, it could be used to help the administration understand how we are viewed by people in other countries, and make us more responsible players on the world stage as a result.

On the other hand, this kind of thing could also be used to track critics of the U.S., and to aid in media manipulation. It is not unusual for government leaders to punish reporters who do not provide favorable coverage by excluding them from important events and key briefings, and this could facilitate that. At the very least, it would have a chilling effect on worldwide freedom of the press.

Note also that the project director says that the system would not extend to domestic news sources:

It could take several years for such a monitoring system to be in place, said Joe Kielman, coordinator of the research effort. The monitoring would not extend to United States news, Mr. Kielman said.

But a few paragraphs later:

The articles in the database include work from many American newspapers and news wire services, including The Miami Herald and The New York Times, as well as foreign sources like Agence France-Presse and The Dawn, a newspaper in Pakistan.

I have to admit I find the whole thing a bit too Orwellian for my tastes.

Posted on October 6, 2006 at 11:57 AM • 50 Comments

No-Fly List

60 Minutes has a copy:

60 Minutes, in collaboration with the National Security News Service, has obtained the secret list used to screen airline passengers for terrorists and discovered it includes names of people not likely to cause terror, including the president of Bolivia, people who are dead and names so common, they are shared by thousands of innocent fliers.

[…]

The “data dump” of names from the files of several government agencies, including the CIA, fed into the computer compiling the list contained many unlikely terrorists. These include Saddam Hussein, who is under arrest, Nabih Berri, Lebanon’s parliamentary speaker, and Evo Morales, the president of Bolivia. It also includes the names of 14 of the 19 dead 9/11 hijackers.

But the names of some of the most dangerous living terrorists or suspects are kept off the list.

The 11 British suspects recently charged with plotting to blow up airliners with liquid explosives were not on it, despite the fact they were under surveillance for more than a year.

The name of David Belfield, who now goes by Dawud Sallahuddin, is not on the list, even though he assassinated someone in Washington, D.C., for former Iranian leader Ayatollah Khomeini. This is because the accuracy of the list meant to uphold security takes a back seat to overarching security needs: it could get into the wrong hands. “The government doesn’t want that information outside the government,” says Cathy Berrick, director of Homeland Security investigations for the General Accounting Office.

When are we going to realize that this list simply isn’t effective?

Posted on October 6, 2006 at 6:07 AM • 69 Comments

Screening People with Clearances

Why should we waste time at airport security, screening people with U.S. government security clearances? This perfectly reasonable question was asked recently by Robert Poole, director of transportation studies at The Reason Foundation, as he and I were interviewed by WOSU Radio in Ohio.

Poole argued that people with government security clearances, people who are entrusted with U.S. national security secrets, are trusted enough to be allowed through airport security with only a cursory screening. They’ve already gone through background checks, he said, and it would be more efficient to concentrate screening resources on everyone else.

To someone not steeped in security, it makes perfect sense. But it’s a terrible idea, and understanding why teaches us some important security lessons.

The first lesson is that security is a system. Identifying someone’s security clearance is a complicated process. People with clearances don’t have special ID cards, and they can’t just walk into any secured facility. A clearance is held by a particular organization—usually the organization the person works for—and is transferred by a classified message to other organizations when that person travels on official business.

Airport security checkpoints are not set up to receive these clearance messages, so some other system would have to be developed.

Of course, it makes no sense for the cleared person to have his office send a message to every airport he’s visiting, at the time of travel. Far easier is to have a centralized database of people who are cleared. But now you have to build this database. And secure it. And ensure that it’s kept up to date.

Or maybe we can create a new type of ID card: one that identifies people with security clearances. But that also requires a backend database and a card that can’t be forged. And clearances can be revoked at any time, so there needs to be some way of invalidating cards automatically and remotely.

Whatever you do, you need to implement a new set of security procedures at airport security checkpoints to deal with these people. The procedures need to be good enough that people can’t spoof them. Screeners need to be trained. The system needs to be tested.

What starts out as a simple idea—don’t waste time searching people with government security clearances—rapidly becomes a complicated security system with all sorts of new vulnerabilities.

The second lesson is that security is a trade-off. We don’t have infinite dollars to spend on security. We need to choose where to spend our money, and we’re best off if we spend it in ways that give us the most security for our dollar.

Given that very few Americans have security clearances, and that speeding them through security wouldn’t make much of a difference to anyone else standing in line, wouldn’t it be smarter to spend the money elsewhere? Even if you’re just making trade-offs about airport security checkpoints, I would rather take the hundreds of millions of dollars this kind of system could cost and spend it on more security screeners and better training for existing security screeners. We could both speed up the lines and make them more effective.

The third lesson is that security decisions are often based on subjective agendas. My guess is that Poole has a security clearance—he was a member of the Bush-Cheney transition team in 2000—and is annoyed that he is being subjected to the same screening procedures as the other (clearly less trusted) people he is forced to stand in line with. From his perspective, not screening people like him is obvious. But objectively it’s not.

This issue is no different than searching airplane pilots, something that regularly elicits howls of laughter among amateur security watchers. What they don’t realize is that the issue is not whether we should trust pilots, airplane maintenance technicians or people with clearances. The issue is whether we should trust people who are dressed as pilots, wear airplane-maintenance-tech IDs or claim to have clearances.

We have two choices: Either build an infrastructure to verify their claims, or assume that they’re false. And with apologies to pilots, maintenance techs and people with clearances, it’s cheaper, easier and more secure to search you all.

This is my twenty-eighth essay for Wired.com.

Posted on October 5, 2006 at 8:27 AM | 82 Comments

Firefox JavaScript Flaw: Real or Hoax?

Two hackers—Mischa Spiegelmock and Andrew Wbeelsoi—have announced a flaw in Firefox’s JavaScript:

An attacker could commandeer a computer running the browser simply by crafting a Web page that contains some malicious JavaScript code, Mischa Spiegelmock and Andrew Wbeelsoi said in a presentation at the ToorCon hacker conference here. The flaw affects Firefox on Windows, Apple Computer’s Mac OS X and Linux, they said.

More interesting was this piece:

The hackers claim they know of about 30 unpatched Firefox flaws. They don’t plan to disclose them, instead holding onto the bugs.

Jesse Ruderman, a Mozilla security staffer, attended the presentation and was called up on the stage with the two hackers. He attempted to persuade the presenters to responsibly disclose flaws via Mozilla’s bug bounty program instead of using them for malicious purposes such as creating networks of hijacked PCs, called botnets.

“I do hope you guys change your minds and decide to report the holes to us and take away $500 per vulnerability instead of using them for botnets,” Ruderman said.

The two hackers laughed off the comment. “It is a double-edged sword, but what we’re doing is really for the greater good of the Internet. We’re setting up communication networks for black hats,” Wbeelsoi said.

Sounds pretty bad, doesn’t it? But maybe it’s all a hoax:

Spiegelmock, a developer at Six Apart, a blog software company in San Francisco, now says the ToorCon talk was meant “to be humorous” and insists the code presented at the conference cannot result in code execution.

Spiegelmock’s strange about-face comes as Mozilla’s security response team is racing to piece together information from the ToorCon talk to figure out how to fix the issue.

[…]

On the claim that there are 30 undisclosed Firefox vulnerabilities, Spiegelmock pinned that entirely on co-presenter Wbeelsoi. “I have no undisclosed Firefox vulnerabilities. The person who was speaking with me made this claim, and I honestly have no idea if he has them or not. I apologize to everyone involved, and I hope I have made everything as clear as possible,” Spiegelmock added.

I vote: hoax, with maybe some seeds of real.

Posted on October 4, 2006 at 7:04 AM | 38 Comments

This Is What Vigilantism Looks Like

Another airplane passenger false alarm:

Seth Stein is used to jetting around the world to create stylish holiday homes for wealthy clients. This means the hip architect is familiar with the irritations of heightened airline security post-9/11. But not even he could have imagined being mistaken for an Islamist terrorist and physically pinned to his seat while aboard an American Airlines flight—especially as he has Jewish origins.

Turns out that one of the other passengers decided to take matters into his own hands.

In Mr Stein’s case, he was pounced on as the crew and other travellers looked on. The drama unfolded less than an hour into the flight. As he settled down with a book and a ginger ale, the father-of-three was grabbed from behind and held in a head-lock.

“This guy just told me his name was Michael Wilk, that he was with the New York Police Department, that I’d been acting suspiciously and should stay calm. I could barely find my voice and couldn’t believe it was happening,” said Mr Stein.

“He went into my pocket and took out my passport and my iPod. All the other passengers were looking concerned.” Eventually, cabin crew explained that the captain had run a security check on Mr Stein after being alerted by the policeman and that this had cleared him. The passenger had been asked to go back to his seat before he had restrained Mr Stein. When the plane arrived in New York, Mr Stein was met by apologetic police officers who offered to fast-track him out of the airport.

Even stranger:

In a twist to the story, Mr Stein has since discovered that there is only one Michael Wilk on the NYPD’s official register of officers, but the man retired 25 years ago. Officials have told the architect that his assailant may work for another law enforcement agency but have refused to say which one.

I’ve written about this kind of thing before.

EDITED TO ADD (10/3): Here’s a man booted off a plane for speaking Tamil into his cellphone.

Posted on October 3, 2006 at 12:42 PM | 70 Comments

New Voting Protocol

Interesting voting protocol from Ron Rivest:

Abstract:

We present a new paper-based voting method with attractive security properties. Not only can each voter verify that her vote is recorded as she intended, but she gets a “receipt” that she can take home that can be used later to verify that her vote is actually included in the final tally. Her receipt, however, does not allow her to prove to anyone else how she voted.

The new voting system is in some ways similar to recent cryptographic voting system proposals, but it achieves very nearly the same objectives without using any cryptography at all. Its principles are simple and easy to understand.

In this “ThreeBallot” voting system, each voter casts three paper ballots (with certain restrictions on how they may be filled out, so the tallying works). These paper ballots are of course “voter-verifiable.” All ballots cast are scanned and published on a web site, so anyone may correctly compute the election result.

A voter receives a copy of one of her ballots as her “receipt,” which she may take home. Only the voter knows which ballot she copied for her receipt. The voter is unable to use her receipt to prove how she voted or to sell her vote, as the receipt doesn’t reveal how she voted.

A voter can check that the web site contains a ballot matching her receipt. Deletion or modification of ballots is thus detectable; so the integrity of the election is verifiable.

The method can be implemented in a quite practical manner, although further refinements to improve usability would be nice.

Very clever.
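The core mechanics are simple enough to sketch in a few lines of code. The following is a simplified single-race simulation (my own illustration, not Rivest’s full scheme, which adds restrictions to prevent certain attacks): to vote for a candidate, the voter marks that candidate on exactly two of her three ballots; every other candidate gets a mark on exactly one. Each candidate’s published mark count is therefore her vote total plus the number of voters, so anyone can recover the tally from the public bulletin board, and a voter can check that a ballot matching her receipt was published.

```python
import random

def cast_threeballot(candidates, choice):
    """Fill three ballots: the chosen candidate is marked on exactly
    two of the three ballots; every other candidate on exactly one."""
    ballots = [dict.fromkeys(candidates, 0) for _ in range(3)]
    for cand in candidates:
        marks = 2 if cand == choice else 1
        for ballot in random.sample(ballots, marks):
            ballot[cand] = 1
    return ballots

def tally(published, candidates, num_voters):
    """Anyone can compute the result: a candidate's total marks equal
    her votes plus num_voters, so subtract num_voters from each count."""
    return {c: sum(b[c] for b in published) - num_voters
            for c in candidates}

candidates = ["Alice", "Bob"]
votes = ["Alice", "Alice", "Bob"]

published, receipts = [], []
for choice in votes:
    three = cast_threeballot(candidates, choice)
    receipts.append(dict(random.choice(three)))  # voter keeps a copy of one
    published.extend(three)

print(tally(published, candidates, len(votes)))  # {'Alice': 2, 'Bob': 1}

# Receipt check: a ballot matching each receipt appears on the "web site"
assert all(r in published for r in receipts)
```

Note that the receipt reveals nothing about the vote: any single ballot pattern is consistent with a vote for any candidate, which is exactly why the receipt can’t be used to sell a vote.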

Posted on October 2, 2006 at 1:27 PM | 49 Comments

Voting Software and Secrecy

Here’s a quote from an elections official in Los Angeles:

“The software developed for InkaVote is proprietary software. All the software developed by vendors is proprietary. I think it’s odd that some people don’t want it to be proprietary. If you give people the open source code, they would have the directions on how to hack into it. We think the proprietary nature of the software is good for security.”

It’s funny, really. What she should be saying is something like: “I think it’s odd that everyone who has any expertise in computer security doesn’t want the software to be proprietary. Speaking as someone who knows nothing about computer security, I think that secrecy is an asset.” That’s a more realistic quote.

As I’ve said many times, secrecy is not the same as security. And in many cases, secrecy hurts security.

Posted on October 2, 2006 at 7:10 AM | 35 Comments

The Onion on TSA's Liquid Ban

“New Air-Travel Guidelines”:

Elaine Siegel, Sales Representative
“Thank God. I don’t think I’d be able to make one more flight from New York to Chicago with a mouthful of shampoo.”

Alex Hunter, Surveyor
“The ban was a necessary precaution. We have to be willing to make these kinds of sacrifices if we’re going to prevent scientifically impossible terrorist attacks.”

Ed Johansen, Systems Analyst
“By giving passengers renewed access to these gels, lotions, and shampoos, we run the risk of creating a very dangerous and highly evasive super-slippery terrorist able to avoid all manners of restraint.”

Posted on October 1, 2006 at 9:41 AM | 11 Comments
