Entries Tagged "Israel"

Stuxnet

Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

As the story goes, the Stuxnet worm was designed and released by a government–the U.S. and Israel are the most common suspects–specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines–and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location–and that Stuxnet’s authors knew exactly what they were targeting.

It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

All the anti-virus programs detect and remove Stuxnet from Windows systems.

Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own driver into Windows. These have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

Over time the attackers swapped out modules that didn’t work and replaced them with new ones–perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.
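The peer-to-peer update logic amounts to a simple version comparison between any two copies that meet. Here's a toy sketch in Python (the class and field names are invented for illustration; the real worm implements this as an RPC mechanism between infected processes):

```python
from dataclasses import dataclass

@dataclass
class Infection:
    """Toy model of one worm instance's update state (names invented)."""
    version: int
    payload: bytes = b""

    def sync(self, peer: "Infection") -> None:
        # When two infections meet, the older copy takes the newer one's
        # version and payload, so both end up running the latest code.
        if self.version < peer.version:
            self.version, self.payload = peer.version, peer.payload
        elif peer.version < self.version:
            peer.version, peer.payload = self.version, self.payload

a = Infection(version=1, payload=b"old module")
b = Infection(version=2, payload=b"new module")
a.sync(b)
assert a.version == b.version == 2
```

The elegance of this design is that copies on isolated networks can still be updated by any newer copy that wanders in on a USB stick, with no need to reach the control servers.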

We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup–surely any organization that goes to all this trouble would test the thing before releasing it–and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates–India, Indonesia, and Pakistan–are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things–large and small–that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.
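The marker works like a mutex: a new copy checks for the value before doing anything else, and bails out if it's already there. A minimal sketch, using a dict to stand in for the Windows registry (the key name here follows published analyses and may differ in practice; the value is the one from the text):

```python
# A dict stands in for the Windows registry. New copies check for the
# marker before infecting; installation writes it.
INFECTION_MARKER = "19790509"
MARKER_KEY = "NTVDM TRACE"  # key name per published analyses; may differ

def already_infected(registry: dict) -> bool:
    return registry.get(MARKER_KEY) == INFECTION_MARKER

def mark_infected(registry: dict) -> None:
    registry[MARKER_KEY] = INFECTION_MARKER

reg = {}
assert not already_infected(reg)
mark_infected(reg)
assert already_infected(reg)
```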

Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific sabotage seems to be the most likely explanation.

If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to only spread to three additional computers and to erase itself after 21 days–but it doesn’t do that. A mistake in programming, or a feature in the code not enabled? Maybe we’re not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors caused collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.
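The intended USB propagation limits described above are just a counter and a time-to-live carried with each copy. A hypothetical sketch, using the figures from the text:

```python
import datetime

# Figures from the text: each USB-borne copy is supposed to infect at
# most three further machines and erase itself after 21 days.
MAX_USB_INFECTIONS = 3
LIFETIME = datetime.timedelta(days=21)

def should_spread(infections_so_far: int) -> bool:
    return infections_so_far < MAX_USB_INFECTIONS

def should_self_erase(installed_on: datetime.date,
                      today: datetime.date) -> bool:
    return today - installed_on >= LIFETIME

assert should_spread(2) and not should_spread(3)
```

If these checks were enforced, infections would stay clustered near the original insertion point; the worldwide spread suggests they weren't, whether by bug or by choice.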

My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

This essay originally appeared on Forbes.com.

My alternate explanations for Stuxnet were cut from the essay. Here they are:

  • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
  • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
  • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
  • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

Note that some of these alternate explanations overlap.

EDITED TO ADD (10/7): Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

Posted on October 7, 2010 at 9:56 AM

Behavioral Profiling at Airports

There’s a long article in Nature on the practice:

It remains unclear what the officers found anomalous about George’s behaviour, and why he was detained. The TSA’s parent agency, the Department of Homeland Security (DHS), has declined to comment on his case because it is the subject of a federal lawsuit that was filed on George’s behalf in February by the American Civil Liberties Union. But the incident has brought renewed attention to a burgeoning controversy: is it possible to know whether people are being deceptive, or planning hostile acts, just by observing them?

Some people seem to think so. At London’s Heathrow Airport, for example, the UK government is deploying behaviour-detection officers in a trial modelled in part on SPOT. And in the United States, the DHS is pursuing a programme that would use sensors to look at nonverbal behaviours, and thereby spot terrorists as they walk through a corridor. The US Department of Defense and intelligence agencies have expressed interest in similar ideas.

Yet a growing number of researchers are dubious — not just about the projects themselves, but about the science on which they are based. “Simply put, people (including professional lie-catchers with extensive experience of assessing veracity) would achieve similar hit rates if they flipped a coin,” noted a 2007 report from a committee of credibility-assessment experts who reviewed research on portal screening.

“No scientific evidence exists to support the detection or inference of future behaviour, including intent,” declares a 2008 report prepared by the JASON defence advisory group. And the TSA had no business deploying SPOT across the nation’s airports “without first validating the scientific basis for identifying suspicious passengers in an airport environment”, stated a two-year review of the programme released on 20 May by the Government Accountability Office (GAO), the investigative arm of the US Congress.

Commentary from the MindHacks blog.

Also, the GAO has published a report on the U.S. DHS’s SPOT program: “Aviation Security: Efforts to Validate TSA’s Passenger Screening Behavior Detection Program Underway, but Opportunities Exist to Strengthen Validation and Address Operational Challenges.”

As of March 2010, TSA deployed about 3,000 BDOs at an annual cost of about $212 million; this force increased almost fifteen-fold between March 2007 and July 2009. BDOs have been selectively deployed to 161 of the 457 TSA-regulated airports in the United States at which passengers and their property are subject to TSA-mandated screening procedures.

It seems pretty clear that the program only catches criminals, and no terrorists. You’d think there would be more important things to spend $200 million a year on.

EDITED TO ADD (6/14): In the comments, a couple of people asked how this compares with the Israeli model of airport security — concentrate on the person — and the idea that trained officers notice if someone is acting “hinky”: both things that I have written favorably about.

The difference is the experience of the detecting officer and the amount of time they spend with each person. If you read about the programs described above, they’re supposed to “spot terrorists as they walk through a corridor,” or possibly after a few questions. That’s very different from what happens when you check in for a flight at Ben Gurion Airport.

The problem with fast detection programs is that they don’t work, and the problem with the Israeli security model is that it doesn’t scale.

Posted on June 14, 2010 at 6:23 AM

Even More on the al-Mabhouh Assassination

This, from a former CIA chief of station:

The point is that in this day and time, with ubiquitous surveillance cameras, the ability to comprehensively analyse patterns of cell phone and credit card use, computerised records of travel documents which can be shared in the blink of an eye, the growing use of biometrics and machine-readable passports, and the ability of governments to share vast amounts of travel and security-related information almost instantaneously, it is virtually impossible for clandestine operatives not to leave behind a vast electronic trail which, if and when there is reason to examine it in detail, will amount to a huge body of evidence.

A not-terribly flattering article about Mossad:

It would be surprising if a key part of this extraordinary story did not turn out to be the role played by Palestinians. It is still Mossad practice to recruit double agents, just as it was with the PLO back in the 1970s. News of the arrest in Damascus of another senior Hamas operative — though denied by Mash’al — seems to point in this direction. Two other Palestinians extradited from Jordan to Dubai are members of the Hamas armed wing, the Izzedine al-Qassam brigades, suggesting treachery may indeed have been involved. Previous assassinations have involved a Palestinian agent identifying the target.

There’s no proof, of course, that Mossad was behind this operation. But the author is certainly right that the Palestinians believe that Mossad was behind it.

The Cold Spy lists what he sees as the mistakes made:

1. Using passport names of real people not connected with the operation.

2. Airport arrival without disguises in play thus showing your real faces.

3. Not anticipating the wide use of surveillance cameras in Dubai.

4. Checking into several hotels prior to checking in at the target hotel thus bringing suspicion on your entire operation.

5. Checking into the same hotel that the last person on the team checked into in order to change disguises.

6. Not anticipating the reaction that the local police had upon discovery of the crime, and their subsequent use of surveillance cameras in showing your entire operation to the world in order to send you a message that such actions or activities will not be tolerated on their soil.

7. Not anticipating the use of surveillance camera footage being posted on YouTube, thus showing everything about your operation right down to your faces and use of disguises to the masses around the world.

8. Using 11 people for a job that one person could have done without all the negative attention to the operation. For example, it could have been as simple as a robbery on the street with a subsequent shooting to cover it all up for what it really was.

9. Using too much sophistication in the operation showing it to be a high level intelligence/hit operation, as opposed to a simple matter using one person to carry out the assignment who was either used as a cutout or an expendable person which was then eliminated after the job was completed, thus covering all your tracks without one shred of evidence leading back to the original order for the hit.

10. Arriving too close to the date or time of the hit. Had the team arrived a few weeks earlier they could have established a presence in the city — thus seeing all the problems associated with carrying out said assignment — thus calling it off or having a counter plan whereby something else could have been tried elsewhere or in another country.

11. And to take everything to 11 points, not even noticing (which many on your team did in fact notice) all the surveillance you were under, and not calling the entire thing off because of it, and because you failed to see all of your mistakes made so far and then not calling it off because of them.

I disagree with a bunch of those.

My previous two blog posts on the topic.

EDITED TO ADD (3/22): The Israeli public believes Mossad was behind the assassination, too.

EDITED TO ADD (4/13): The Cold Spy responds in comments. Actually, there’s lots of interesting discussion in the comments.

Posted on March 22, 2010 at 9:10 AM

Security Trade-Offs and Sacred Values

Interesting research:

Psychologist Jeremy Ginges and his colleagues identified this backfire effect in studies of the Israeli-Palestinian conflict in 2007. They interviewed both Israelis and Palestinians who possessed sacred values toward key issues such as ownership over disputed territories like the West Bank or the right of Palestinian refugees to return to villages they were forced to leave—these people viewed compromise on these issues as completely unacceptable. Ginges and colleagues found that individuals offered a monetary payout to compromise their values expressed more moral outrage and were more supportive of violent opposition toward the other side. Opposition decreased, however, when the other side offered to compromise on a sacred value of its own, such as Israelis formally renouncing their right to the West Bank or Palestinians formally recognizing Israel as a state. Ginges and Scott Atran found similar evidence of this backfire effect with Indonesian madrassah students, who expressed less willingness to compromise their belief in sharia, strict Islamic law, when offered a material incentive.

[…]

After giving their opinions on Iran’s nuclear program, all participants were asked to consider one of two deals for Iranian disarmament. Half of the participants read about a deal in which the United States would reduce military aid to Israel in exchange for Iran giving up its military program. The other half of the participants read about a deal in which the United States would reduce aid to Israel and would pay Iran $40 billion. After considering the deal, all participants predicted how much the Iranian people would support the deal and how much anger they would feel toward the deal. In line with the Palestinian-Israeli and Indonesian studies, those who considered the nuclear program a sacred value expressed less support, and more anger, when the deal included money.

Posted on March 19, 2010 at 6:58 AM

Al-Mabhouh Assassination

The January 19th assassination of Mahmoud al-Mabhouh reads like a very professional operation:

Security footage of the killers’ movements during the afternoon, released by police in Dubai yesterday, underlines the professionalism of the operation. The group switched hotels several times and wore disguises including false beards and wigs, while surveillance teams rotated in pairs through the hotel lobby, never hanging around for too long and paying for everything in cash.

Folliard and another member of the party carrying an Irish passport in the name of Kevin Daveron were operating as spotters on the second floor of the hotel when the murder was committed. Both switched hotels that afternoon and dressed smartly to pose as hotel staff. The bald Daveron donned a dark wig and glasses, while Folliard appears to have removed a blonde wig to reveal dark hair.

Throughout the operation, none of the suspects made a direct call to one another. However, Dubai police traced a high volume of calls and text messages between three phones carried by the assassins and four numbers in Austria where a command centre had apparently been established.

To co-ordinate their movements on the ground, the team used discreet, sophisticated short-range communication devices as they tracked their victim.

And this:

The Dubai authorities claim there were two teams: one carried out surveillance of the target, while the other—which appears to be a group of younger men, at least as far as the camera shots show—carried out the killing.

Contrary to reports, the squad did not break into Mabhouh’s hotel room, nor did they knock on the door. They entered the room using copies of keys they had somehow acquired.

Read the whole thing — and watch (in three parts) this video compilation of all the CCTV cameras in the hotels and airport. It’s impressive. And the professionalism leads pretty much everyone to suspect Mossad.

There are a few things I wonder about. The team didn’t know what hotel Mabhouh would be staying in, nor whether he would be alone or with others. The team also didn’t use any guns. How much of the operation was preplanned, and how much was created on the fly? Was that why there were so many people involved?

The team booked the hotel room directly across the hallway from Mabhouh. That seems like the part of the plan most likely to arouse suspicion. It’s unusual to reserve a particular room, and not unreasonable to think that the hotel desk staff might wonder who else is booked nearby.

How did they get into Mabhouh’s hotel room? The video shows evidence of them trying to reprogram the door. Given that they didn’t know the hotel until they got there, what kind of general hotel-key reprogramming devices do they have?

I wonder if any of those fake passports had RFID chips?

Dubai’s police chief said six of the suspects had British passports, three were Irish, one French and one German.

The passports are believed to be fakes.

And Mabhouh was discovered in his room, the door locked and barred from the inside. Is it really that easy to do that to a hotel room door?

Note: Please limit comments to the security considerations and lessons of the assassination, and steer clear of the politics.

EDITED TO ADD (2/19): Interesting analysis:

Investigators believe the assassins tried to reprogram the electronic lock on al-Mabhouh’s door to gain entry. Some news reports say the assassins entered the room while the victim was out and waited for him to return, while others say they were thwarted from entering the room when a hotel guest stepped off the elevator on al-Mabhouh’s floor. They then had to resort to tricking al-Mabhouh into opening his door to them after he returned.

[…]

He said the number of people involved in the operation indicates that it may have been put together in a rush.

“The less time you have to plan and carry out an operation, the more people you need to carry it out [on the ground],” he said. “The more time you have to plan . . . there’s a lot of things you eliminate.”

If you know that you can stop the elevator in the basement, for example, you don’t then need people guarding the elevator lobby on the victim’s floor to make sure no one steps off the elevator, he said.

He says it was likely that the Mossad’s second in command for operations was in the hotel or the area when the assassination took place and has gone unnoticed by the Dubai authorities.

[…]

Ostrovsky said although the operatives scattered to various parts of the world after the operation was completed, he believes they’re all back in Israel now. He says other countries are likely sifting through their airport surveillance tapes now to track the final destination of the team members.

He added that the Mossad was likely surprised by how the Dubai authorities pieced everything together so well and publicized the video and passport photos of the suspects.

[…]

Ostrovsky said that despite the Dubai operation’s success, it was amateurish at moments. He points to the bad disguises the suspects used — wigs, glasses and moustaches — and the fact that suspects seemed to change their disguises in the same place. He also points to two of the suspects who followed the victim to his hotel room while dressed in tennis outfits and didn’t seem to know what they were doing.

The two seemed to confer momentarily while the victim exited the elevator, as if deciding who would follow the victim to his room. A hotel employee accompanying the victim to his room even glanced back at the two, as if noticing their confusion.

“A lot of people in the field make those mistakes and they never come up because they’re never [caught on tape],” he said.

Posted on February 19, 2010 at 6:49 AM

Adopting the Israeli Airport Security Model

I’ve been reading a lot recently — like this article on the Israeli airport security model, and how we should adopt more of the Israeli security model here in the U.S. This sums up the problem with that idea nicely:

On the other hand, no matter how safe or how wonderful the flying experience on El Al, it is a TINY airline by U.S. standards, with only 38 aircraft, 46 destinations, and fewer than two million passengers in 2008. As near as I can tell, Cairo is their only destination in a majority Muslim country. Delta, before the Northwest merger is included, reported 449 aircraft and 375 destinations.

Ben Gurion Airport is Israel’s primary (not only) international gateway. In 2008, Ben Gurion served 11.1 million international passengers and 470,000 domestic passengers, roughly comparable to the 10 million total served at Sacramento, the airport I use most often. Amsterdam served 47.4 million total, and Detroit served 35.1 million total in 2008.

By American standards, in terms of passengers served, Ben Gurion is a busy regional airport.

Simply put, the Israeli airport security model does not scale.

EDITED TO ADD (1/7): More.

EDITED TO ADD (1/12): Interview with El Al’s former head of security.

Posted on January 5, 2010 at 7:04 AM

Mossad Hacked Syrian Official's Computer

It was unattended in a hotel room at the time:

Israel’s Mossad espionage agency used Trojan Horse programs to gather intelligence about a nuclear facility in Syria the Israel Defense Forces destroyed in 2007, the German magazine Der Spiegel reported Monday.

According to the magazine, Mossad agents in London planted the malware on the computer of a Syrian official who was staying in the British capital; he was at a hotel in the upscale neighborhood of Kensington at the time.

The program copied the details of Syria’s illicit nuclear program and sent them directly to the Mossad agents’ computers, the report said.

Remember the evil maid attack: if an attacker gets hold of your computer temporarily, he can bypass your encryption software.

Posted on November 5, 2009 at 12:48 PM

Israel Implementing IFF System for Commercial Aircraft

Israel is implementing an IFF (identification, friend or foe) system for commercial aircraft, designed to differentiate legitimate planes from terrorist-controlled planes.

The news article implies that it’s a basic challenge-and-response system. Ground control issues some kind of alphanumeric challenge to the plane. The pilot types the challenge into some hand-held computer device, and reads back the reply. Authentication is achieved by 1) physical possession of the device, and 2) typing a legitimate PIN into the device to activate it.

The article talks about a distress mode, where the pilot signals that a terrorist is holding a gun to his head. Likely, that’s done by typing a special distress PIN into the device, and reading back whatever the screen displays.
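As described, this is a standard keyed challenge-response with a duress code folded in: the device computes a reply from the challenge and a secret, and the distress PIN silently changes the reply in a way only ground control can detect. Here's a minimal sketch of that idea using HMAC — everything here (PINs, key, truncation length) is my invention, since the actual system's algorithm isn't public:

```python
import hmac, hashlib

# Hypothetical sketch of a PIN-activated challenge-response device.
SECRET = b"device-secret-provisioned-per-aircraft"  # invented
NORMAL_PIN, DISTRESS_PIN = "1234", "4321"           # invented

def respond(challenge: str, pin: str) -> str:
    """Compute the reply the pilot reads back to ground control."""
    if pin not in (NORMAL_PIN, DISTRESS_PIN):
        raise ValueError("wrong PIN: device stays locked")
    # The distress PIN mixes a flag into the keyed hash, so the reply
    # still looks legitimate but ground control can tell it apart.
    flag = b"distress" if pin == DISTRESS_PIN else b"normal"
    mac = hmac.new(SECRET, challenge.encode() + flag, hashlib.sha256)
    return mac.hexdigest()[:8]  # short enough to read over the radio

print(respond("K7Q2", NORMAL_PIN))
print(respond("K7Q2", DISTRESS_PIN))
```

Ground control, which knows the secret, computes both candidate replies and compares; a hijacker listening to the exchange can't tell a distress reply from a normal one.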

The military has had this sort of system — first paper-based, and eventually computer-based — for decades. The critical issue with using this on commercial aircraft is how to deal with user error. The system has to be easy enough to use, and the parts hard enough to lose, that there won’t be a lot of false alarms.

Posted on March 10, 2008 at 12:24 PM

Airport Security: Israel vs. the United States

A comparison:

We were subjected to a 15-minute interrogation at the airport in Eilat, in southern Israel, after spending the weekend in neighboring Jordan. The young, bespectacled security official was robotic and driven in his questioning. He asked to see a copy of my husband’s invitation to his conference. The full names of anyone we knew in Israel. More and more questions, raising suspicions that started to make me feel guilty.

“Did you give anyone your e-mail or phone number? Did anyone want to stay in contact with you?” He had us pegged for naive travelers who could become the tool of terrorists.

He even went through our digital photos, stopping at a picture of a little boy, holding a baby goat. “Who is this?”

“It’s a Bedouin,” I snapped. “We don’t have his contact information.”

In the same calm tone, he told me not to become angry. Later I realized it was a necessary part of traveling in Israel, as a safety precaution. Ironically, we didn’t have to throw away our water bottles or take off our shoes when we passed through the security gate — which made me wonder at the effectiveness of U.S. policies at airports.

Regularly I hear people talking about Israeli airport security, and asking why we can’t do the same in the U.S. The short answer is: scale. Israel has 11 million airline passengers a year; there are close to 700 million in the U.S. Israel has seven airports; the U.S. has over 400 “primary” airports — and who knows how many others. Things that can work there just don’t scale to the U.S.
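A back-of-envelope calculation makes the scale problem concrete. The 15-minute interview comes from the anecdote above and the 700 million passengers from the paragraph above; the 2,000-hour work year is my assumption:

```python
# Rough figures: a 15-minute interview per passenger, ~700 million
# U.S. passengers a year, and a 2,000-hour full-time work year.
us_passengers = 700_000_000
interview_minutes = 15
screener_minutes_per_year = 2000 * 60  # one full-time screener-year

screener_years = us_passengers * interview_minutes / screener_minutes_per_year
print(round(screener_years))  # 87500
```

That's roughly 87,500 full-time screeners doing nothing but interviews — compare the 3,000 BDOs, at $212 million a year, mentioned in the SPOT post above.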

Posted on July 3, 2007 at 3:13 PM

Recognizing a Suicide Bomber

Fascinating story of an Israeli taxi driver who picked up a suicide bomber. What’s interesting to me is how the driver comes to realize his passenger is a suicide bomber. It wasn’t anything that comes up on a profile, but a feeling that something is wrong:

Mr Woltinsky said he realised straight away that something was not quite right.

“When he got into my car, I had a bad feeling because he did not behave normally — his eyes, his nerves — and the fact he was wearing a big red jacket even though it was hot.

“I asked him where he wanted to go but he didn’t say anything, just waved his hand.

“When I asked him again, he said only one word, “Haifa”, in an Arab accent. Haifa is hundreds of kilometres away, so now I was almost 100% sure he was a suicide bomber.”

In other words, his passenger was acting hinky.

EDITED TO ADD (2/1): The Israeli was not a taxi driver. Apologies.

Posted on February 1, 2007 at 6:26 AM
