Blog: December 2007 Archives

The Nugache Worm/Botnet

I’ve already written about the Storm worm, and how it represents a new generation of worm/botnets. And Scott Berinato has written an excellent article about the Gozi worm, another new-generation worm/botnet.

This article is about yet another new-generation worm/botnet: Nugache. Dave Dittrich thinks this is the most advanced worm/botnet yet:

But this new piece of malware, which came to be known as Nugache, was a game-changer. With no C&C server to target, bots capable of sending encrypted packets and the possibility of any peer on the network suddenly becoming the de facto leader of the botnet, Nugache, Dittrich knew, would be virtually impossible to stop.

[…]

Nugache, and its more famous cousin, the Storm Trojan, are not simply the next step in the evolution of malware. They represent a major step forward in both the quality of software that malware authors are producing and in the sophistication of their tactics. Although they’re often referred to as worms, Storm and Nugache are actually Trojans. The Storm creator, for example, sends out millions of spam messages on a semi-regular basis, each containing a link to content on some remote server, normally disguised in a fake pitch for a penny stock, Viagra or relief for victims of a recent natural disaster. When a user clicks on the link, the attacker’s server installs the Storm Trojan on the user’s PC and it’s off and running.

Various worms, viruses, bots and Trojans over the years have had one or two of the features that Storm, Nugache, Rbot and other such programs possess, but none has approached the breadth and depth of their feature sets. Rbot, for example, has more than 100 features that users can choose from when compiling the bot. This means that two different bots compiled from an identical source could have nearly identical feature sets, yet look completely different to an antivirus engine.

[…]

As scary as Storm and Nugache are, the scarier thing is that they represent just the tip of the iceberg. Experts say that there are several malware groups out there right now that are writing custom Trojans, rootkits and attack toolkits to the specifications of their customers. The customers are in turn using the malware not to build worldwide botnets a la Storm, but to attack small slices of a certain industry, such as financial services or health care.

Rizo, a variant of the venerable Rbot, is the poster child for this kind of attack. A Trojan in the style of Nugache and Storm, Rizo has been modified a number of times to meet the requirements of various different attack scenarios. Within the course of a few weeks, different versions of Rizo were used to attack customers of several different banks in South America. Once installed on a user’s PC, it monitors Internet activity and gathers login credentials for online banking sites, which it then sends back to the attacker. It’s standard behavior for these kinds of Trojans, but the amount of specificity and customization involved in the code and the ways in which the author changed it over time are what have researchers worried.

[…]

“I’m pretty sure that there are tactics being shared between the Nugache and Storm authors,” Dittrich said. “There’s a direct lineage from Sdbot to Rbot to Mytob to Bancos. These guys can just sell the Web front-end to these things and the customers can pick their options and then just hit go.”

See also: “Command and control structures in malware: From Handler/Agent to P2P,” by Dave Dittrich and Sven Dietrich, USENIX ;login:, vol. 32, no. 6, December 2007, and “Analysis of the Storm and Nugache Trojans: P2P is here,” Sam Stover, Dave Dittrich, John Hernandez, and Sven Dietrich, USENIX ;login:, vol. 32, no. 6, December 2007. The second link is available to USENIX members only, unfortunately.

Posted on December 31, 2007 at 7:19 AM • 23 Comments

New Lithium Battery Rules for U.S. Airplanes

Starting in 2008, there are new rules for bringing lithium batteries on airplanes:

The following quantity limits apply to both your spare and installed batteries. The limits are expressed in grams of “equivalent lithium content.” 8 grams of equivalent lithium content is approximately 100 watt-hours. 25 grams is approximately 300 watt-hours:

  • Under the new rules, you can bring batteries with up to 8-gram equivalent lithium content. All lithium ion batteries in cell phones are below 8 gram equivalent lithium content. Nearly all laptop computers also are below this quantity threshold.
  • You can also bring up to two spare batteries with an aggregate equivalent lithium content of up to 25 grams, in addition to any batteries that fall below the 8-gram threshold. Examples of two types of lithium ion batteries with equivalent lithium content over 8 grams but below 25 are shown below.
  • For a lithium metal battery, whether installed in a device or carried as a spare, the limit on lithium content is 2 grams of lithium metal per battery.
  • Almost all consumer-type lithium metal batteries are below 2 grams of lithium metal. But if you are unsure, contact the manufacturer!
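
If you want to check your own gear, the arithmetic is simple. Here’s a rough sketch in Python, using the 8 grams ≈ 100 watt-hours ratio quoted above rather than the DOT’s official per-cell formula, so treat the numbers as approximations:

    # Rough conversion: 8 g equivalent lithium content ~= 100 Wh,
    # i.e., about 12.5 Wh per gram. (The official rule computes grams
    # per cell from amp-hours; this is just the ratio quoted above.)
    WH_PER_GRAM = 100 / 8.0

    def equivalent_lithium_grams(watt_hours):
        return watt_hours / WH_PER_GRAM

    def classify(watt_hours):
        grams = equivalent_lithium_grams(watt_hours)
        if grams <= 8:
            return "under 8 g: fine, installed or as a spare"
        if grams <= 25:
            return "8-25 g: spare allowance only (25 g aggregate, two max)"
        return "over 25 g: not allowed in carry-on"

    # A typical laptop battery is 60-90 Wh, nowhere near the limit.
    for wh in (60, 90, 160, 320):
        print(wh, "Wh:", round(equivalent_lithium_grams(wh), 1), "g,", classify(wh))

Run it and you’ll see why almost nothing consumers carry gets anywhere close to the thresholds.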

Near as I can tell, this affects pretty much no one except audio/visual professionals. And the TSA isn’t saying whether this is a safety issue or a security issue. They aren’t giving any reason. But those of you who paid close attention to the Second Movie-Plot Threat Contest know of the dangers:

A terrorist camouflages bombs as college textbooks, with detonators hidden in the lithium-ion batteries of various electronics. The terrorist nonchalantly wanders up by the cockpit with his armed textbook and detonates it right after the seat belt sign goes off, but while the plane is still over an inhabited area. Thousands die, with most of the casualties on the ground.

Chat about the ban on FlyerTalk. Does any other country have any similar restrictions?

EDITED TO ADD (12/28): It’s not a TSA rule; it’s an FAA rule.

The FAA has found that current systems for putting out aircraft cargo fires could not suppress a fire if a shipment of non-rechargeable batteries ignited during flight, the release said.

Here’s the actual rule; it’s the DOT that published it. Lithium batteries have been banned as cargo for a long time now. This is the DC-8 fire that led to the ban.

Posted on December 28, 2007 at 3:05 PM • 55 Comments

Security of Adult Websites Compromised

This article claims that the software that runs the back end of either 35% or 80%-95% of adult websites (depending on which part of the article you read) has been compromised, and that the adult industry is hushing this up. Like many of these sorts of stories, there’s no evidence that the bad guys have the personal information database. The vulnerability only means that they could have it.

Does anyone know about this?

Slashdot thread.

Posted on December 28, 2007 at 7:54 AM • 19 Comments

Picasso Stolen from Brazilian Museum

A professional job:

The thieves used a hydraulic car jack to pry their way past the pull-down metal gate that protects the museum’s front entrance. Then, they smashed through two glass doors, probably using a crowbar, to get to the paintings on the second floor, police said.

The fundamental problem with securing fine art is that it’s so extraordinarily valuable; museums simply can’t afford the security required.

Local media reports estimated their value at around $100 million, but Cosomano and other curators said it is difficult to put a price on them because the paintings had not gone to auction.

“The prices paid for such works would be incalculable, enough to give you vertigo,” said curator Miriam Alzuri of the Bellas Artes Museum of Bilbao, Spain.

We basically rely on the fact that fine art can’t be resold, because everyone knows it’s stolen. But if someone wants the painting and is willing to hang it in a secret room somewhere in his estate, that doesn’t hold.

“Everything indicates they were sent to do it by some wealthy art lover for his own collection—someone who, although wealthy, was not rich enough to buy the paintings,” Moura added.

Posted on December 27, 2007 at 1:41 PM • 44 Comments

Airport Security Study

Surprising nobody, a new study concludes that airport security isn’t helping:

A team at the Harvard School of Public Health could not find any studies showing whether the time-consuming process of X-raying carry-on luggage prevents hijackings or attacks.

They also found no evidence to suggest that making passengers take off their shoes and confiscating small items prevented any incidents.

[…]

The researchers said it would be interesting to apply medical standards to airport security. Screening programs for illnesses like cancer are usually not broadly instituted unless they have been shown to work.

Note the defense by the TSA:

“Even without clear evidence of the accuracy of testing, the Transportation Security Administration defended its measures by reporting that more than 13 million prohibited items were intercepted in one year,” the researchers added. “Most of these illegal items were lighters.”

This is where the TSA has it completely backwards. The goal isn’t to confiscate prohibited items. The goal is to prevent terrorism on airplanes. When the TSA confiscates millions of lighters from innocent people, that’s a security failure. The TSA is reacting to non-threats. The TSA is reacting to false alarms. Now you can argue that this level of failure is necessary to make people safer, but it’s certainly not evidence that people are safer.

For example, does anyone think that the TSA’s vigilance regarding pies is anything other than a joke?

Here’s the actual paper from the British Medical Journal:

Of course, we are not proposing that money spent on unconfirmed but politically comforting efforts to identify and seize water bottles and skin moisturisers should be diverted to research on cancer or malaria vaccines. But what would the National Screening Committee recommend on airport screening? Like mammography in the 1980s, or prostate specific antigen testing and computed tomography for detecting lung cancer more recently, we would like to open airport security screening to public and academic debate. Rigorously evaluating the current system is just the first step to building a future airport security programme that is more user friendly and cost effective, and that ultimately protects passengers from realistic threats.

I talked about airport security at length with Kip Hawley, the head of the TSA, here.

Posted on December 27, 2007 at 6:28 AM • 62 Comments

"Tiger Team" Reality TV Show

On Court TV:

This vérité action series follows Tiger Team, a group of elite professionals hired to infiltrate major business and corporate interests with the objective of exposing weaknesses in the world’s most sophisticated security systems, defeating criminals at their own game. Tiger Team is comprised of Security Audit Specialists Chris Nickerson, Luke McOmie and Ryan Jones, who employ a variety of covert techniques (electronic, psychological and tactical) as they take on a new assignment in each episode.

Watch the trailer. Look at the photo. Okay, so it’ll be unrealistically sensationalist. But it might be fun.

First episode is tonight.

EDITED TO ADD (12/26): My apologies. The episodes aired last night, on Christmas Day. If there are any recordings out there, please post URLs.

Posted on December 26, 2007 at 7:50 AM • 65 Comments

More Voting Machine News

Ohio just completed a major study of voting machines. (Here’s the report, a gigantic pdf.) And, like the California study earlier this year, they found all sorts of problems:

While some tests to compromise voting systems took higher levels of sophistication, fairly simple techniques were often successfully deployed.

“To put it in every-day terms, the tools needed to compromise an accurate vote count could be as simple as tampering with the paper audit trail connector or using a magnet and a personal digital assistant,” Brunner said.

The New York Times writes:

“It was worse than I anticipated,” the official, Secretary of State Jennifer Brunner, said of the report. “I had hoped that perhaps one system would test superior to the others.”

At polling stations, teams working on the study were able to pick locks to access memory cards and use hand-held devices to plug false vote counts into machines. At boards of election, they were able to introduce malignant software into servers.

Note the lame defense from one voting machine manufacturer:

Chris Riggall, a Premier spokesman, said hardware and software problems had been corrected in his company’s new products, which will be available for installation in 2008.

“It is important to note,” he said, “that there has not been a single documented case of a successful attack against an electronic voting system, in Ohio or anywhere in the United States.”

I guess he didn’t read the part of the report that talked about how these attacks would be undetectable. Like this one:

They found that the ES&S tabulation system and the voting machine firmware were rife with basic buffer overflow vulnerabilities that would allow an attacker to easily take control of the systems and “exercise complete control over the results reported by the entire county election system.”

They also found serious security vulnerabilities involving the magnetically switched bidirectional infrared (IrDA) port on the front of the machines and the memory devices that are used to communicate with the machine through the port. With nothing more than a magnet and an infrared-enabled Palm Pilot or cell phone they could easily read and alter a memory device that is used to perform important functions on the ES&S iVotronic touch-screen machine—such as loading the ballot definition file and programming the machine to allow a voter to cast a ballot. They could also use a Palm Pilot to emulate the memory device and hack a voting machine through the infrared port.

They found that a voter or poll worker with a Palm Pilot and no more than a minute’s access to a voting machine could surreptitiously re-calibrate the touch-screen so that it would prevent voters from voting for specific candidates or cause the machine to secretly record a voter’s vote for a different candidate than the one the voter chose. Access to the screen calibration function requires no password, and the attacker’s actions, the researchers say, would be indistinguishable from the normal behavior of a voter in front of a machine or of a pollworker starting up a machine in the morning.

Elsewhere in the country, Colorado has decertified most of its electronic voting machines:

The decertification decision, which cited problems with accuracy and security, affects electronic voting machines in Denver and five other counties. A number of electronic scanners used to count ballots were also decertified.

Coffman would not comment Monday on what his findings mean for past elections, despite his conclusion that some equipment had accuracy issues.

“I can only report,” he said. “The voters in those respective counties are going to have to interpret” the results.

Coffman announced in March that he had adopted new rules for testing electronic voting machines. He required the four systems used in Colorado to apply for recertification.

The systems are manufactured by Premier Election Solutions, formerly known as Diebold Election Systems; Hart InterCivic; Sequoia Voting Systems; and Election Systems and Software. Only Premier had all its equipment pass the recertification.

California is about to give up on electronic voting machines, too. This probably didn’t help:

More than a hundred computer chips containing voting machine software were lost or stolen during transit in California this week.

EDITED TO ADD (1/2): More news.

Posted on December 24, 2007 at 1:02 PM • 18 Comments

Privacy Problems with AskEraser

Last week, Ask.com announced a feature called AskEraser (good description here), which erases a user’s search history. While it’s great to see companies using privacy features for competitive advantage, EPIC examined the feature and wrote to the company with some problems:

The first one is the fact that AskEraser uses an opt-out cookie. Cookies are bits of data left on a consumer’s computer that are used to authenticate the user and maintain information such as the user’s site preferences.

Usually, people concerned with privacy delete cookies, so creating an opt-out cookie is “counter-intuitive,” the letter states. Once the AskEraser opt-out cookie is deleted, the privacy setting is lost and the consumer’s search activity will be tracked. Why not have an opt-in cookie instead, the letter suggests.

The second problem is that Ask stores the exact time at which the user enabled AskEraser in the cookie, which could make it easier to identify the computer and could facilitate third-party tracking if the cookie were transferred to such parties. The letter recommends using a session cookie that expires once the search result is returned.
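
To see why the opt-out design fails the way EPIC describes, here’s a minimal sketch of both approaches. This is hypothetical code, mine and not Ask’s, but it captures the two failure modes:

    # Hypothetical sketch of the two cookie designs -- not Ask.com's code.

    def search_with_opt_out_cookie(cookies, query, server_log):
        # AskEraser-style: privacy depends on a cookie surviving.
        # Clear your cookies and logging silently resumes.
        if cookies.get("askeraser") != "enabled":
            server_log.append(query)

    def search_with_opt_in_cookie(cookies, query, server_log):
        # EPIC's alternative inverts the failure mode: logging depends
        # on the cookie, so clearing cookies fails safe.
        if cookies.get("allow_logging") == "yes":
            server_log.append(query)

    log = []
    search_with_opt_out_cookie({}, "embarrassing query", log)  # cookies cleared
    print(log)  # ['embarrassing query']: the opt-out design failed open

The general security principle is to fail safe: when state is lost, the system should fall back to the privacy-protecting behavior, not the tracking behavior.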

Ask’s Frequently Asked Questions for the feature notes that there may be circumstances when Ask is required to comply with a court order and if asked to, it will retain the consumer’s search data even if AskEraser appears to be turned on. Ask should notify consumers when the feature has been disabled so that people are not misled into thinking their searches aren’t being tracked when they actually are, the letter said.

Here’s a copy of the letter, signed by eight privacy organizations. Still no word from Ask.com.

While I have your attention, I want to talk about EPIC. This is exactly the sort of thing the Electronic Privacy Information Center does best. Whether it’s search engine privacy, electronic voting, ID cards, or databases and data mining, EPIC is always at the forefront of these sorts of privacy issues. It’s the end of the year, and lots of people are looking for causes worthy of donation. Here’s EPIC’s donation page; they—well, “we” really, as I’m on the board—can use the support.

Posted on December 21, 2007 at 11:18 AM • 13 Comments

Refuse to be Terrorized

I know nothing about the politics of this organization, but their “I am not afraid” campaign is something I can certainly get behind. I think we should all send a letter like this to our elected officials, whatever country we’re in:

I am not afraid of terrorism, and I want you to stop being afraid on my behalf. Please start scaling back the official government war on terror. Please replace it with a smaller, more focused anti-terrorist police effort in keeping with the rule of law. Please stop overreacting. I understand that it will not be possible to stop all terrorist acts. I accept that. I am not afraid.

Refuse to be terrorized, and you deny the terrorists their most potent weapon—your fear.

EDITED TO ADD (12/21): There’s also this video.

And Chicago opens a new front on the war on the unexpected, trying to scare everybody:

Each year, the Winter Holiday Season tends to spur larger crowds and increased traffic throughout the City. As it pertains to shopping districts, public transportation routes, and all other places of public assembly, the increased crowds become a matter of Homeland Security concern. During this holiday period, as a matter of public safety, we ask that all members of the general public heighten their awareness regarding any and all suspicious activity that may be an indicator of a threat to public safety. It is important to immediately report any or all of the below suspect activities.

  • Physical Surveillance (note taking, binocular use, cameras, video, maps)
  • Attempts to gain sensitive information regarding key facilities
  • Attempts to penetrate or test physical security / response procedures
  • Attempts to improperly acquire explosives, weapons, ammunition, dangerous chemicals, etc.
  • Suspicious or improper attempts to acquire official vehicles, uniforms, badges or access devices
  • Presence of individuals who do not appear to belong in workplaces, business establishments, or near key facilities
  • Mapping out routes, playing out scenarios, monitoring key facilities, timing traffic lights
  • Stockpiling suspicious materials or abandoning potential containers for explosives (e.g., vehicles, suitcases, etc)
  • Suspicious reporting of lost or stolen identification

This may be real or it may be a hoax; I don’t know.

And this is probably my last post on the war on the unexpected. There are simply too many examples.

Posted on December 21, 2007 at 7:26 AM • 62 Comments

"Where Should Airport Security Begin?"

In this essay, Clark Ervin argues that airport security should begin at the front door to the airport:

Like many people, I spend a lot of time in airport terminals, and I often think that they must be an awfully appealing target to terrorists. The largest airports have huge terminals teeming with thousands of passengers on any given day. They serve as conspicuous symbols of American consumerism, with McDonald’s restaurants, Starbucks coffee shops and Disney toy stores. While airport screeners do only a so-so job of checking for guns, knives and bombs at checkpoints, there’s no checking for weapons before checkpoints. So if the intention isn’t to carry out an attack once on board a plane, but instead to carry out an attack on the airport itself by killing people inside it, there’s nothing to stop a terrorist from doing so.

[…]

To prevent smaller attacks—and larger ones that could be catastrophic—what if we moved the screening checkpoints from the interior of airports to the entrance? The sooner we screen passengers’ and visitors’ persons and baggage (both checked and carry-on) for guns, knives and explosives, the sooner we can detect those weapons and prevent them from being used to sow destruction.

This is a silly argument, one that any regular reader of this blog should be able to counter. If you’re worried about explosions on the ground, any place you put security checkpoints is arbitrary. The point of airport security is to prevent terrorism on the airplanes, because airplane terrorism is a more serious problem than conventional bombs blowing up in crowded buildings. (Four reasons. First, airlines are often national symbols. Second, airplanes often fly to dangerous countries. Third, for whatever reason, airplanes are a preferred terrorist target. And fourth, the particular failure mode of airplanes means that even a small bomb can kill everyone on board. That same bomb in an airport means that a few people die and many more get injured.) And most airport security measures aren’t effective.

His bias betrays itself primarily through this quote:

Like many people, I spend a lot of time in airport terminals, and I often think that they must be an awfully appealing target to terrorists.

If he spent a lot of time in shopping malls, he would probably think they must be awfully appealing targets as well. They also “serve as conspicuous symbols of American consumerism, with McDonald’s restaurants, Starbucks coffee shops and Disney toy stores.” He sounds like he’s just scared.

Face it, there are far too many targets. Stop trying to defend against the tactic, and instead try to defend against terrorism. Airport security is the last line of defense, and not a very good one at that. Real security happens long before anyone gets to an airport, a shopping mall, or wherever.

Posted on December 20, 2007 at 12:28 PM • 48 Comments

Prison Break

Details:

Police said Espinosa and Blunt were in adjacent cells and used a long metal wire to scrape away mortar around the cinder block between their cells and the outer wall in Espinosa’s cell.

Once the cement block between the cells was removed, they smashed the block and hid the pieces in a footlocker. According to police, Blunt, who is 5 feet 10 inches and weighs 210 pounds, squeezed into Espinosa’s cell through an approximately 16- to 18-inch hole.

The two inmates wiggled through another 18-inch hole in the outer wall. From a roof landing, the two men “took a running jump or they were standing and they jumped approximately 15 feet out and about 30 feet down,” Romankow said.

Then they jumped a razor-wire fence onto a New Jersey transit railroad bed to freedom, police said. Authorities found two sets of footprints in the snow heading in opposite directions.

[…]

To delay discovery of the escape, Espinosa and Blunt used dummies made of sheets and pillows in their beds. They also hung photographs of bikini-clad women to hide the holes in the walls, a move reminiscent of a scene in the Hollywood hit “The Shawshank Redemption.”

Posted on December 19, 2007 at 5:10 AM • 39 Comments

Anonymity and the Netflix Dataset

Last year, Netflix published 10 million movie rankings by 500,000 customers, as part of a challenge for people to come up with better recommendation systems than the one the company was using. The data was anonymized by removing personal details and replacing names with random numbers, to protect the privacy of the recommenders.

Arvind Narayanan and Vitaly Shmatikov, researchers at the University of Texas at Austin, de-anonymized some of the Netflix data by comparing rankings and timestamps with public information in the Internet Movie Database, or IMDb.

Their research (.pdf) illustrates some inherent security problems with anonymous data, but first it’s important to explain what they did and did not do.

They did not reverse the anonymity of the entire Netflix dataset. What they did was reverse the anonymity of the Netflix dataset for those sampled users who also entered some movie rankings, under their own names, in the IMDb. (While IMDb’s records are public, crawling the site to get them is against the IMDb’s terms of service, so the researchers used a representative few to prove their algorithm.)

The point of the research was to demonstrate how little information is required to de-anonymize information in the Netflix dataset.

On one hand, isn’t that sort of obvious? The risks of anonymous databases have been written about before, such as in this 2001 paper published in an IEEE journal. The researchers working with the anonymous Netflix data didn’t painstakingly figure out people’s identities—as others did with the AOL search database last year—they just compared it with an already identified subset of similar data: a standard data-mining technique.

But as opportunities for this kind of analysis pop up more frequently, lots of anonymous data could end up at risk.

Someone with access to an anonymous dataset of telephone records, for example, might partially de-anonymize it by correlating it with a catalog merchant’s telephone order database. Or Amazon’s online book reviews could be the key to partially de-anonymizing a public database of credit card purchases, or a larger database of anonymous book reviews.

Google, with its database of users’ internet searches, could easily de-anonymize a public database of internet purchases, or zero in on searches of medical terms to de-anonymize a public health database. Merchants who maintain detailed customer and purchase information could use their data to partially de-anonymize any large search engine’s data, if it were released in an anonymized form. A data broker holding databases of several companies might be able to de-anonymize most of the records in those databases.

What the University of Texas researchers demonstrate is that this process isn’t hard, and doesn’t require a lot of data. It turns out that if you eliminate the top 100 movies everyone watches, our movie-watching habits are all pretty individual. This would certainly hold true for our book reading habits, our internet shopping habits, our telephone habits and our web searching habits.

The obvious countermeasures for this are, sadly, inadequate. Netflix could have randomized its dataset by removing a subset of the data, changing the timestamps or adding deliberate errors into the unique ID numbers it used to replace the names. It turns out, though, that this only makes the problem slightly harder. Narayanan and Shmatikov’s de-anonymization algorithm is surprisingly robust, and works with partial data, data that has been perturbed, even data with errors in it.

With only eight movie ratings (of which two may be completely wrong), and dates that may be up to two weeks in error, they can uniquely identify 99 percent of the records in the dataset. After that, all they need is a little bit of identifiable data: from the IMDb, from your blog, from anywhere. The moral is that it takes only a small named database for someone to pry the anonymity off a much larger anonymous database.
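
If you want intuition for how such an algorithm works, here’s a toy sketch of the core idea: fuzzy-match the attacker’s auxiliary information against every anonymized record and see whether one record stands out. The paper’s real scoring function is more sophisticated (it weights rare movies more heavily, and measures how far the best match stands out from the runner-up), so this is only a caricature:

    # Toy sketch of auxiliary-information matching; not the paper's
    # actual algorithm. Records are lists of (movie, rating, day)
    # triples, with day as an integer day number.
    def item_matches(aux_item, rec_item, date_slop=14):
        movie, rating, day = aux_item
        r_movie, r_rating, r_day = rec_item
        return (movie == r_movie and
                abs(rating - r_rating) <= 1 and
                abs(day - r_day) <= date_slop)

    def score(aux, record):
        # How many of the attacker's known triples approximately
        # appear in this anonymized record?
        return sum(any(item_matches(a, r) for r in record) for a in aux)

    def deanonymize(aux, dataset, allowed_errors=2):
        best_id = max(dataset, key=lambda rid: score(aux, dataset[rid]))
        if score(aux, dataset[best_id]) >= len(aux) - allowed_errors:
            return best_id  # now link the random ID to, say, an IMDb name
        return None

    # Example: two anonymized records; the attacker knows three ratings.
    data = {1337: [("Brazil", 5, 100), ("Heathers", 4, 120), ("Gozu", 5, 130)],
            4242: [("Brazil", 3, 400), ("Shrek", 4, 410), ("Cars", 3, 420)]}
    print(deanonymize([("Brazil", 5, 102), ("Gozu", 4, 128),
                       ("Heathers", 4, 110)], data))  # 1337

Because movie tastes are so individual outside the top titles, even this crude matching needs only a handful of triples before one record towers over the rest.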

Other research reaches the same conclusion. Using public anonymous data from the 1990 census, Latanya Sweeney found that 87 percent of the population in the United States, 216 million of 248 million, could likely be uniquely identified by their five-digit ZIP code, combined with their gender and date of birth. About half of the U.S. population is likely identifiable by gender, date of birth and the city, town or municipality in which the person resides. Expanding the geographic scope to an entire county reduces that to a still-significant 18 percent. “In general,” the researchers wrote, “few characteristics are needed to uniquely identify a person.”
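
The intuition behind numbers like Sweeney’s is simple bucket counting: multiply the possible values of each attribute and compare with the population. The figures below are rough approximations of mine, but the order of magnitude is what matters:

    # Back-of-the-envelope bucket counting (approximate figures).
    zip_codes  = 42_000        # 5-digit ZIP codes in use, roughly
    genders    = 2
    birthdates = 365 * 80      # about 80 plausible birth years

    buckets    = zip_codes * genders * birthdates  # ~2.5 billion
    population = 248_000_000                       # 1990 census

    print(buckets / population)  # ~10 buckets per person
    # With roughly ten times more buckets than people, most buckets hold
    # at most one person, so most people are unique. Real populations
    # cluster by geography and age, which is why the measured figure is
    # 87% rather than nearly 100%.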

Stanford University researchers reported similar results using 2000 census data. It turns out that date of birth, which (unlike birthday month and day alone) sorts people into thousands of different buckets, is incredibly valuable in disambiguating people.

This has profound implications for releasing anonymous data. On one hand, anonymous data is an enormous boon for researchers—AOL did a good thing when it released its anonymous dataset for research purposes, and it’s sad that the CTO resigned and an entire research team was fired after the public outcry. Large anonymous databases of medical data are enormously valuable to society: for large-scale pharmacology studies, long-term follow-up studies and so on. Even anonymous telephone data makes for fascinating research.

On the other hand, in the age of wholesale surveillance, where everyone collects data on us all the time, anonymization is very fragile and riskier than it initially seems.

Like everything else in security, anonymity systems shouldn’t be fielded before being subjected to adversarial attacks. We all know that it’s folly to implement a cryptographic system before it’s rigorously attacked; why should we expect anonymity systems to be any different? And, like everything else in security, anonymity is a trade-off. There are benefits, and there are corresponding risks.

Narayanan and Shmatikov are currently working on developing algorithms and techniques that enable the secure release of anonymous datasets like Netflix’s. That’s a research result we can all benefit from.

This essay originally appeared on Wired.com.

Posted on December 18, 2007 at 5:53 AM • 32 Comments

Dual_EC_DRBG Added to Windows Vista

Microsoft has added the random-number generator Dual_EC_DRBG to Windows Vista, as part of SP1. Yes, this is the same RNG that could have an NSA backdoor.

It’s not enabled by default, and my advice is to never enable it. Ever.

EDITED TO ADD (12/18): I should make it clear that the algorithm is available as a program call. It is not something that the user can enable or disable.

Posted on December 17, 2007 at 10:22 AM • 82 Comments

Chinese Hackers

Time Magazine article on Chinese hackers:

But reports in Chinese newspapers suggest that the establishment of a cybermilitia is well under way. In recent years, for example, the military has engaged in nationwide recruiting campaigns to try to discover the nation’s most talented hackers. The campaigns are conducted through competitions that feature large cash prizes, with the PLA advertising the challenges in local newspapers.

Tan is a successful graduate of this system. He earned $4,000 in prize money from hacker competitions, enough to make him worthy of a glowing profile in Sichuan University’s campus newspaper. Tan told the paper that he was at his happiest “when he succeeds in gaining control of a server” and described a highly organized selection and training process that aspiring cybermilitiamen (no cyberwomen, apparently) undertake. The story details the links between the hackers and the military. “On July 25, 2005,” it said, “Sichuan Military Command Communication Department located [Tan] through personal information published online and instructed him to participate in the network attack/defense training organized by the provincial military command, in preparation for the coming Chengdu Military Command Network Attack/Defense Competition in September.” (The State Council Information Office didn’t respond to questions about Tan, and China’s Foreign Ministry denies knowing about him.)

With the help of experts from Sichuan University, the story continued, Tan’s team won the competition and then had a month of intense training organized by the provincial military command, simulating attacks, designing hacking tools and drafting network-infiltration strategies. Tan was then chosen to represent the Sichuan Military Command in competition with other provinces. His team won again, after which, the iDefense reports say, he founded the NCPH and acquired an unidentified benefactor (“most likely the PLA”) to subsidize the group’s activities to the tune of $271 a month.

Posted on December 14, 2007 at 11:08 AM • 28 Comments

Defeating the Shoe Scanning Machine at Heathrow Airport

For a while now, Heathrow Airport has had a unique setup for scanning shoes. Instead of taking your shoes off during the normal screening process, as you do in U.S. airports, you go through the metal detector with your shoes on. Then, later, there is a special shoe scanning X-ray machine. You take your shoes off, send them through the machine, and put them on at the other end.

It’s definitely faster, but it’s an easy system to defeat. The vulnerability is that no one verifies that the shoes you walked through the metal detector with are the same shoes you put on the scanning machine.

Here’s how the attack works. Assume that you have two pairs of shoes: a clean pair that passes all levels of screening, and a dangerous pair that doesn’t. (Ignore for a moment the ridiculousness of screening shoes in the first place, and assume that an X-ray machine can detect the dangerous pair.) Put the dangerous shoes on your feet and the clean shoes in your carry-on bag. Walk through the metal detector. Then, at the shoe X-ray machine, take the dangerous shoes off and put them in your bag, and take the clean shoes out of your bag and place them on the X-ray machine. You’ve now managed to get through security without having your shoes screened.

This works because the two security systems are decoupled. And the shoe screening machine is so crowded and chaotic, and so poorly manned, that no one notices the switch.

U.S. airports force people to put their shoes through the X-ray machine and walk through the metal detector shoeless, ensuring that all shoes get screened. That might be slower, but it works.
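
The flaw is easy to model. Here is a simplified sketch (mine, not anyone’s real checkpoint logic): the Heathrow design never binds the X-rayed shoes to the worn shoes, while the U.S. design makes them the same object by construction:

    # Simplified model of the two screening designs. (In the attack
    # described above, the dangerous pair is worn while the carry-on bag
    # is X-rayed, and only swapped into the bag afterward at the shoe
    # scanner.)
    def xray_passes(shoes):
        return shoes != "dangerous pair"

    def heathrow_design(worn, in_bag):
        # Metal detector with shoes on, then a separate shoe X-ray where
        # the passenger chooses which pair to present.
        presented = in_bag              # the attacker swaps pairs here
        return xray_passes(presented)

    def us_design(worn, in_bag):
        # Shoes come off before the metal detector; the worn pair itself
        # is what gets X-rayed.
        return xray_passes(worn)

    print(heathrow_design("dangerous pair", "clean pair"))  # True: attack works
    print(us_design("dangerous pair", "clean pair"))        # False: caught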

EDITED TO ADD (12/14): Heathrow Terminal 3, that is. The system wasn’t in place in Terminal 4, and I don’t know about Terminals 1 and 2.

Posted on December 14, 2007 at 5:43 AM • 93 Comments

Security-Breach Notification Laws

Interesting study on the effects of security-breach notification laws in the U.S.:

This study surveys the literature on changes in the information security world and significantly expands upon it with qualitative data from seven in-depth discussions with information security officers. These interviews focused on the most important factors driving security investment at their organizations and how security breach notification laws fit into that list. Often missing from the debate is that, regardless of the risk of identity theft and alleged consumer apathy towards notices, the simple fact of having to publicly notify causes organizations to implement stronger security standards that protect personal information.

The interviews showed that security breaches drive information exchange among security professionals, causing them to engage in discussions about information security issues that may arise at their and others’ organizations. For example, we found that some CSOs summarize news reports from breaches at other organizations and circulate them to staff with “lessons learned” from each incident. In some cases, organizations have a “that could have been us” moment, and patch systems with similar vulnerabilities to the entity that had a breach.

Breach notification laws have significantly contributed to heightened awareness of the importance of information security throughout all levels of a business organization and to the development of a level of cooperation among different departments within an organization that resulted from the need to monitor data access for the purposes of detecting, investigating, and reporting breaches. CSOs reported that breach notification duties empowered them to implement new access controls, auditing measures, and encryption. Aside from the organization’s own efforts at complying with notification laws, reports of breaches at other organizations help information officers maintain that sense of awareness.

Posted on December 12, 2007 at 1:53 PM • 22 Comments

Police Helping Thieves

This is a weird article. Local police are putting yellow stickers on cars with visible packages, making it easier for thieves to identify which cars are worth breaking into.

How odd.

EDITED TO ADD (12/19): According to a comment, this was misreported in the news. The police didn’t just put signs on cars with visible packages, but on all cars. Cars with no visible packages got a note saying: “Nothing Observed (Good Job!).” So a thief would have to read the sign, which means he’s already close enough to look in the car. Much better.

Posted on December 12, 2007 at 8:18 AM • 57 Comments

Secretly Recording Interrogations

It’s getting easier to watch the watchers:

A teen suspect’s snap decision to secretly record his interrogation with an MP3 player has resulted in a perjury case against a veteran detective and a plea deal for the teen.

Unaware of the recording, Detective Christopher Perino insisted under oath at a trial in April that suspect Erik Crespo wasn’t questioned about a shooting in the Bronx.

But the defense confronted the detective with a transcript it said proved he had spent more than an hour unsuccessfully trying to persuade Crespo to confess.

Perino was arraigned today on 12 counts of first-degree perjury and freed on bail.

My guess is that this sort of perjury occurs more than we realize. If there’s one place I think cameras should be rolling at all times, it’s in police station interrogation rooms. And no erasing the tapes either. (And those tapes must have been really damning. Old interrogation tapes can yield valuable intelligence; you don’t ever erase them unless you absolutely have to.)

Posted on December 11, 2007 at 12:26 PM • 101 Comments

Fake Dynamite Prompts Evacuation

Yes, it’s yet another story of knee-jerk overreaction to a nonexistent threat. But notice that the police evacuated everyone within a mile radius of the “dynamite.” Isn’t that a little excessive, even for real dynamite?

EDITED TO ADD (12/14): Assuming that this information is correct, this was an intentional hoax. The fake dynamite consisted of road flares duct taped together and attached to the side of the home.

Posted on December 6, 2007 at 6:43 AM • 79 Comments

California Electronic Voting Update

News:

Electronic voting systems used throughout California still aren’t good enough to be trusted with the state’s elections, Secretary of State Debra Bowen said Saturday.

While Bowen has been putting tough restrictions and new security requirements on the use of the touch screen machines, she admitted having doubts as to whether the electronic voting systems will ever meet the standards she believes are needed in California.

I’ve written a lot on this issue.

EDITED TO ADD (12/5): Ed Felten comments.

Posted on December 5, 2007 at 1:52 PM • 25 Comments

MI5 Sounds Alarm on Internet Spying from China

Someone in MI5 is pissed off at China:

In an unprecedented alert, the Director-General of MI5 sent a confidential letter to 300 chief executives and security chiefs at banks, accountants and legal firms this week warning them that they were under attack from “Chinese state organisations.”

[…]

Firms known to have been compromised recently by Chinese attacks are one of Europe’s largest engineering companies and a large oil company, The Times has learnt. Another source familiar with the MI5 warning said, however, that known attacks had not been limited to large firms based in the City of London. Law firms and other businesses in the regions that deal even with only small parts of Chinese-linked deals are being probed as potential weak spots, he said.

A security expert who has also seen the letter said that among the techniques used by Chinese groups were “custom Trojans”, software designed to hack into the network of a particular firm and feed back confidential data. The MI5 letter includes a list of known “signatures” that can be used to identify Chinese Trojans and a list of internet addresses known to have been used to launch attacks.

A big study gave warning this week that Government and military computer systems in Britain are coming under sustained attack from China and other countries. It followed a report presented to the US Congress last month describing Chinese espionage in the US as so extensive that it represented “the single greatest risk to the security of American technologies.”

EDITED TO ADD (12/13): The Onion comments.

EDITED TO ADD (12/14): At first, I thought that someone in MI5 was pissed off at China. But now I think that someone in MI5 was pissed that he wasn’t getting any budget.

Posted on December 4, 2007 at 12:34 PM • 36 Comments

How to Secure Your Computer, Disks, and Portable Drives

Computer security is hard. Software, computer and network security are all ongoing battles between attacker and defender. And in many cases the attacker has an inherent advantage: He only has to find one network flaw, while the defender has to find and fix every flaw.

Cryptography is an exception. As long as you don’t write your own algorithm, secure encryption is easy. And the defender has an inherent mathematical advantage: Longer keys increase the amount of work the defender has to do linearly, while geometrically increasing the amount of work the attacker has to do.
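
To put numbers on that asymmetry, consider brute force against an ideal cipher. Adding key bits costs the defender a constant amount each, while every added bit doubles the attacker’s search space:

    # Defender's cost grows roughly linearly with key length; a
    # brute-force attacker's cost grows exponentially (ideal cipher
    # assumed; real cryptanalysis only narrows the gap from this bound).
    for bits in (64, 128, 256):
        defender_work = bits         # generate, store, and process the key
        attacker_work = 2 ** bits    # worst-case keys to try
        print(f"{bits}-bit key: defender ~{defender_work}, attacker ~{attacker_work:.1e}")

Doubling the key length from 128 to 256 bits roughly doubles the defender’s work, but squares the attacker’s.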

Unfortunately, cryptography can’t solve most computer-security problems. The one problem cryptography can solve is the security of data when it’s not in use. Encrypting files, archives—even entire disks—is easy.

All of this makes it even more amazing that Her Majesty’s Revenue & Customs in the United Kingdom lost two disks with personal data on 25 million British citizens, including dates of birth, addresses, bank-account information and national insurance numbers. On the one hand, this is no bigger a deal than any of the thousands of other exposures of personal data we’ve read about in recent years—the U.S. Veterans Administration’s loss of the personal data of 26 million American veterans is an obvious similar event. But this has turned into Britain’s privacy Chernobyl.

Perhaps encryption isn’t so easy after all, and some people could use a little primer. This is how I protect my laptop.

There are several whole-disk encryption products on the market. I use PGP Disk’s Whole Disk Encryption tool for two reasons. It’s easy, and I trust both the company and the developers to write it securely. (Disclosure: I’m also on PGP Corp.’s Technical Advisory Board.)

Setup only takes a few minutes. After that, the program runs in the background. Everything works like before, and the performance degradation is negligible. Just make sure you choose a secure password—PGP’s encouragement of passphrases makes this much easier—and you’re secure against leaving your laptop in the airport or having it stolen out of your hotel room.

The reason you encrypt your entire disk, and not just key files, is so you don’t have to worry about swap files, temp files, hibernation files, erased files, browser cookies or whatever. You don’t need to enforce a complex policy about which files are important enough to be encrypted. And you have an easy answer to your boss or to the press if the computer is stolen: no problem; the laptop is encrypted.

PGP Disk can also encrypt external disks, which means you can also secure that USB memory device you’ve been using to transfer data from computer to computer. When I travel, I use a portable USB drive for backup. Those devices are getting physically smaller—but larger in capacity—every year, and by encrypting I don’t have to worry about losing them.

I recommend one more complication. Whole-disk encryption means that anyone at your computer has access to everything: someone at your unattended computer, a Trojan that infected your computer and so on. To deal with these and similar threats I recommend a two-tier encryption strategy. Encrypt anything you don’t need access to regularly—archived documents, old e-mail, whatever—separately, with a different password. I like to use PGP Disk’s encrypted zip files, because it also makes secure backup easier (and lets you secure those files before you burn them on a DVD and mail them across the country), but you can also use the program’s virtual-encrypted-disk feature to create a separately encrypted volume. Both options are easy to set up and use.
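
If you don’t have PGP Disk, you can improvise the second tier with any passphrase-based encryption tool. Here’s an illustrative sketch, a generic construction using Python’s third-party cryptography package and not anything resembling PGP Disk’s internals, that encrypts an archive under its own passphrase-derived key:

    # Second-tier encryption sketch: protect an archive under its own
    # passphrase, separate from the whole-disk password. Requires the
    # "cryptography" package (pip install cryptography).
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase))

    salt = os.urandom(16)  # not secret; store it next to the archive
    f = Fernet(key_from_passphrase(b"a long, separate passphrase", salt))
    token = f.encrypt(b"old e-mail, archived documents, ...")
    print(f.decrypt(token))  # recoverable only with the passphrase

The point is the same as with PGP Disk’s encrypted zip files: someone at your unattended, unlocked computer can read the mounted disk, but not the separately encrypted archives.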

There are still two scenarios you aren’t secure against, though. You’re not secure against someone snatching your laptop out of your hands as you’re typing away at the local coffee shop. And you’re not secure against the authorities telling you to decrypt your data for them.

The latter threat is becoming more real. I have long been worried that someday, at a border crossing, a customs official will open my laptop and ask me to type in my password. Of course I could refuse, but the consequences might be severe—and permanent. And some countries—the United Kingdom, Singapore, Malaysia—have passed laws giving police the authority to demand that you divulge your passwords and encryption keys.

To defend against both of these threats, minimize the amount of data on your laptop. Do you really need 10 years of old e-mails? Does everyone in the company really need to carry around the entire customer database? One of the most incredible things about the Revenue & Customs story is that a low-level government employee mailed a copy of the entire national child database to the National Audit Office in London. Did he have to? Doubtful. The best defense against data loss is to not have the data in the first place.

Failing that, you can try to convince the authorities that you don’t have the encryption key. This works better if it’s a zipped archive than the whole disk. You can argue that you’re transporting the files for your boss, or that you forgot the key long ago. Make sure the time stamp on the files matches your claim, though.

There are other encryption programs out there. If you’re a Windows Vista user, you might consider BitLocker. This program, embedded in the operating system, also encrypts the computer’s entire drive. But it only works on the C: drive, so it won’t help with external disks or USB tokens. And it can’t be used to make encrypted zip files. But it’s easy to use, and it’s free.

This essay previously appeared on Wired.com.

EDITED TO ADD (12/14): Lots of people have pointed out that the free and open-source program TrueCrypt is a good alternative to PGP Disk. I haven’t used or reviewed the program at all.

Posted on December 4, 2007 at 6:40 AM • 109 Comments

SANS Top 20

Every year SANS publishes a list of the 20 most important vulnerabilities. It’s always a great list, and this year is no different:

The threat landscape is very dynamic, which in turn makes it necessary to adopt newer security measures. Just over the last year, the kinds of vulnerabilities that are being exploited are very different from the ones being exploited in the past. Here are some observations:

  • Operating systems have fewer vulnerabilities that can lead to massive Internet worms. For instance, during 2002-2005, Microsoft Windows worms like Blaster, Nachi, Sasser and Zotob infected a large number of systems on the Internet. There have not been any new large-scale worms targeting Windows services since 2005. On the other hand, vulnerabilities found in anti-virus, backup or other application software can result in worms. Most notable was the worm exploiting the Symantec anti-virus buffer overflow flaw last year.
  • We have seen significant growth in the number of client-side vulnerabilities, including vulnerabilities in browsers, in office software, in media players and in other desktop applications. These vulnerabilities are being discovered on multiple operating systems and are being massively exploited in the wild, often to drive recruitment for botnets.
  • Users who are allowed by their employers to browse the Internet have become a source of major security risk for their organizations. A few years back securing servers and services was seen as the primary task for securing an organization. Today it is equally important, perhaps even more important, to prevent users having their computers compromised via malicious web pages or other client-targeting attacks.
  • Web application vulnerabilities in open-source as well as custom-built applications account for almost half the total number of vulnerabilities being discovered in the past year. These vulnerabilities are being exploited widely to convert trusted web sites into malicious servers serving client-side exploits and phishing scams.
  • The default configurations for many operating systems and services continue to be weak and continue to include default passwords. As a result, many systems have been compromised via dictionary and brute-force password guessing attacks in 2007!
  • Attackers are finding more creative ways to obtain sensitive data from organizations. Therefore, it is now critical to check the nature of any data leaving an organization’s boundary.

Much, much more information at the link.

Posted on December 3, 2007 at 3:12 PM • 10 Comments

Security in Ten Years

This is a conversation between Marcus Ranum and me. It will appear in Information Security Magazine this month.


Bruce Schneier: Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Moore’s Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we’ll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict. I don’t think anyone can predict what the emergent properties of 100x computing power will bring: new uses for computing, new paradigms of communication. A 100x world will be different, in ways that will be surprising.

But throughout history and into the future, the one constant is human nature. There hasn’t been a new crime invented in millennia. Fraud, theft, impersonation and counterfeiting are perennial problems that have been around since the beginning of society. During the last 10 years, these crimes have migrated into cyberspace, and over the next 10, they will migrate into whatever computing, communications and commerce platforms we’re using.

The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

I don’t see anything by 2017 that will fundamentally alter this. Do you?


Marcus Ranum: I think you’re right; at a meta-level, the problems are going to stay the same. What’s shocking and disappointing to me is that our responses to those problems also remain the same, in spite of the obvious fact that they aren’t effective. It’s 2007 and we haven’t seemed to accept that:

  • You can’t turn shovelware into reliable software by patching it a whole lot.
  • You shouldn’t mix production systems with non-production systems.
  • You actually have to know what’s going on in your networks.
  • If you run your computers with an open execution runtime model you’ll always get viruses, spyware and Trojan horses.
  • You can pass laws about locking barn doors after horses have left, but it won’t put the horses back in the barn.
  • Security has to be designed in, as part of a system plan for reliability, rather than bolted on afterward.

The list could go on for several pages, but it would be too depressing. It would be “Marcus’ list of obvious stuff that everybody knows but nobody accepts.”

You missed one important aspect of the problem: By 2017, computers will be even more important to our lives, economies and infrastructure.

If you’re right that crime remains a constant, and I’m right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.

I’ve been pretty dismissive of the concepts of cyberwar and cyberterror. That dismissal was mostly motivated by my observation that the patchworked and kludgy nature of most computer systems acts as a form of defense in its own right, and that real-world attacks remain more cost-effective and practical for terror purposes.

I’d like to officially modify my position somewhat: I believe it’s increasingly likely that we’ll suffer catastrophic failures in critical infrastructure systems by 2017. It probably won’t be terrorists that do it, though. More likely, we’ll suffer some kind of horrible outage because a critical system was connected to a non-critical system that was connected to the Internet so someone could get to MySpace—and that ancillary system gets a piece of malware. Or it’ll be some incomprehensibly complex software, layered with Band-Aids and patches, that topples over when some “merely curious” hacker pushes the wrong e-button. We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?

You’re worried criminals will continue to penetrate into cyberspace, and I’m worried complexity, poor design and mismanagement will be there to meet them.


Bruce Schneier: I think we’ve already suffered that kind of critical systems failure. The August 2003 blackout that covered much of the northeastern United States and Canada—50 million people—was caused by a software bug.

I don’t disagree that things will continue to get worse. Complexity is the worst enemy of security, and the Internet—and the computers and processes connected to it—is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.

Yes, IT systems will continue to become more critical to our infrastructure—banking, communications, utilities, defense, everything.

By 2017, the interconnections will be so critical that it will probably be cost-effective—and low-risk—for a terrorist organization to attack over the Internet. I also deride talk of cyberterror today, but I don’t think I will in another 10 years.

While the trends of increased complexity and poor management don’t look good, there is another trend that points to more security—but neither you nor I is going to like it. That trend is IT as a service.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.


Marcus Ranum: You’re right about the shift toward services—it’s the ultimate way to lock in customers.

If you can make it difficult for the customer to get his data back after you’ve held it for a while, you can effectively prevent the customer from ever leaving. And of course, customers will be told “trust us, your data is secure,” and they’ll take that for an answer. The back-end systems that will power the future of utility computing are going to be just as full of flaws as our current systems. Utility computing will also completely fail to address the problem of transitive trust unless people start shifting to a more reliable endpoint computing platform.

That’s the problem with where we’re heading: the endpoints are not going to get any better. People are attracted to appliances because they get around the headache of system administration (which, in today’s security environment, equates to “endless patching hell”), but underneath the slick surface of the appliance we’ll have the same insecure nonsense we’ve got with general-purpose desktops. In fact, the development of appliances running general-purpose operating systems really does raise the possibility of a software monoculture. By 2017, do you think system engineering will progress to the point where we won’t see a vendor release a new product and instantly create an installed base of 1 million-plus users with root privileges? I don’t, and that scares me.

So if you’re saying the trend is to continue putting all our eggs in one basket and blithely trusting that basket, I agree.

Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and surf the Web. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.


Bruce Schneier: You’re right about the endpoints not getting any better. I’ve written again and again about how measures like two-factor authentication aren’t going to make electronic banking any more secure. The problem is that if someone has stuck a Trojan on your computer, it doesn’t matter how many ways you authenticate to the banking server; the Trojan is going to perform illicit transactions after you authenticate.

It’s the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure and that the threat is in the communications system. But we know the real risks are at the endpoints.
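To make that concrete, here’s a minimal sketch in Python. The banking API below is invented purely for illustration; the point is that after authentication, the session credential lives on the endpoint, where any local code, including a Trojan, can reuse it.

    # Conceptual sketch only; the banking API here is invented.
    # Multi-factor authentication proves who started the session,
    # but the resulting token sits on the (possibly compromised)
    # endpoint, where any local code can reuse it.

    class BankServer:
        def authenticate(self, password, one_time_code):
            # Both factors check out, so hand back a session token.
            # The server can't tell which local process uses it later.
            return "session-token-123"

        def transfer(self, token, to_account, amount):
            print(f"[server] {token}: ${amount} to {to_account}")

    server = BankServer()

    # The legitimate user authenticates with two factors.
    token = server.authenticate(password="hunter2", one_time_code="829144")
    server.transfer(token, "landlord", 800)    # the intended transaction

    # A Trojan on the same machine reads the token from memory and
    # issues its own, equally "authenticated," transaction.
    server.transfer(token, "attacker", 8000)   # the illicit transaction

No number of additional factors changes this; they all prove who started the session, not who is using the machine afterward.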

And a misguided attempt to solve this is going to dominate computing by 2017. I mentioned software-as-a-service, which you point out is really a trick that allows businesses to lock up their customers for the long haul. I pointed to the iPhone, whose draconian rules about who can write software for that platform accomplish much the same thing. We could also point to Microsoft’s Trusted Computing, which is being sold as a security measure but is really another lock-in mechanism designed to keep users from switching to “unauthorized” software or OSes.

I’m reminded of the post-9/11 anti-terrorist hysteria—we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

Computing is heading in the same direction, although this time it is industry that wants control over its users. They’re going to sell it to us as a security system—they may even have convinced themselves it will improve security—but it’s fundamentally a control system. And in the long run, it’s going to hurt security.

Imagine we’re living in a world of Trustworthy Computing, where no software can run on your Windows box unless Microsoft approves it. That brain drain you talk about won’t be a problem, because security won’t be in the hands of the user. Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

By then, though, we’ll be ready to start building real security. As you pointed out, networks will be so embedded into our critical infrastructure—and there’ll probably have been at least one real disaster by then—that we’ll have no choice. The question is how much we’ll have to dismantle and build over to get it right.


Marcus Ranum: I agree with your gloomy view of the future. It’s ironic that the counterculture “hackers” have enabled (by providing an excuse) today’s run-patch-run-patch-reboot software environment and tomorrow’s software Stalinism.

I don’t think we’re going to start building real security, because real security is not something you build—it’s something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn’t factor in patching and patch-related downtime, because if it did, the numbers would stink. Meanwhile, I’ve seen purpose-built Internet systems run for years without patching because they didn’t rely on bloated components. I doubt industry will catch on.
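Here’s a back-of-envelope sketch of what I mean, in Python, with numbers invented purely for illustration. Charge patching and patch-related downtime against each system over five years and the “cheaper” option stops looking cheap:

    # Invented numbers, purely illustrative: five-year cost of
    # ownership once patching and patch-related downtime are
    # charged against each system.

    def five_year_cost(build_cost, patches_per_year, cost_per_patch,
                       downtime_hours_per_patch, cost_per_downtime_hour,
                       years=5):
        patching = years * patches_per_year * cost_per_patch
        downtime = (years * patches_per_year *
                    downtime_hours_per_patch * cost_per_downtime_hour)
        return build_cost + patching + downtime

    # Off-the-shelf stack: cheap to build, patched constantly.
    off_the_shelf = five_year_cost(100_000, patches_per_year=24,
                                   cost_per_patch=2_000,
                                   downtime_hours_per_patch=2,
                                   cost_per_downtime_hour=5_000)

    # Purpose-built system: costlier up front, rarely patched.
    purpose_built = five_year_cost(300_000, patches_per_year=2,
                                   cost_per_patch=2_000,
                                   downtime_hours_per_patch=2,
                                   cost_per_downtime_hour=5_000)

    print(f"off-the-shelf: ${off_the_shelf:,}")   # $1,540,000
    print(f"purpose-built: ${purpose_built:,}")   # $420,000

Swap in your own estimates; as long as patch-driven downtime carries a real cost, it dominates the total.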

The future will be captive data running on purpose-built back-end systems—and it won’t be a secure future, because turning your data over always decreases your security. Few possess the understanding of complexity and good design principles necessary to build reliable or secure systems. So, effectively, outsourcing—or other forms of making security someone else’s problem—will continue to seem attractive.
That doesn’t look like a very rosy future to me. It’s a shame, too, because getting this stuff correct is important. You’re right that there are going to be disasters in our future.

I think they’re more likely to be accidents where the system crumbles under the weight of its own complexity rather than hostile action. Will we even be able to figure out what happened when it happens?

Folks, the captains have illuminated the “Fasten your seat belts” sign. We predict bumpy conditions ahead.

EDITED TO ADD (12/4): Commentary on the point/counterpoint.

Posted on December 3, 2007 at 12:14 PM • 98 Comments

Even More “War on the Unexpected”

We’re losing the “War on the Unexpected.”

A blind calypso musician and his band removed from an airplane:

The passenger told the pilot of the Sardinia-Stansted flight that he was concerned about the behaviour of Michael Toussaint and four other members of the Caribbean Steel International Orchestra, a court heard. He claimed to be a psychology lecturer from London University and said he had noticed the group in “high spirits” in the terminal building, but that they had sat separately and quietly on board. He also believed Toussaint, who was wearing dark glasses, could have been feigning blindness, the court was told.

A Jewish man removed from a train:

The incident took place on a train that left Chicago early in the morning – when Jewish men are obligated to put on tefillin (phylacteries). The passenger began strapping the head-tefillin to his forehead and passengers unfamiliar with the custom rushed to the conductor and told him there was a man on board who was fastening a box to his head with wires dangling from it.

The conductor approached the passenger but the latter refused to answer him as he was in the middle of the prayer, heightening the conductor’s suspicions.

Meanwhile, the passengers grew even more frantic when they noticed that the passenger sitting next to the Jewish man had a Middle-Eastern appearance and wore a turban.

More stories. And the point.

EDITED TO ADD (12/6): Bomb squad in Sarasota, Florida, called in to detonate a typewriter.

EDITED TO ADD (2/8/08): The calypso band won damages in court:

A judge ruled that the airline had not acted reasonably and had failed in its duty of care to the passengers, particularly Toussaint, who was entitled to special care because of his disability.

He also found the company had issued a “false and misleading” statement to the BBC, which blamed the incident on the Italian security authorities.

Posted on December 3, 2007 at 6:15 AM • 36 Comments
