Entries Tagged "terrorism"


Random Links on the Boston Terrorist Attack

Encouraging poll data says that maybe Americans are starting to have realistic fears about terrorism, or at least are refusing to be terrorized.

Good essay by Scott Atran on terrorism and our reaction.

Reddit apologizes. I think this is a big story. The Internet is going to help in everything, including trying to identify terrorists. This will happen whether or not the help is needed, wanted, or even helpful. I think this took the FBI by surprise. (Here’s a good commentary on this sort of thing.)

Facial recognition software didn’t help. I agree with this, though; it will only get better.

EDITED TO ADD (4/25): “Hapless, Disorganized, and Irrational”: John Mueller and Mark Stewart describe the Boston—and most other—terrorists.

Posted on April 25, 2013 at 6:42 AM

The Police Now Like Amateur Photography

PhotographyIsNotACrime.com points out the obvious: after years of warning us that photography is suspicious, the police were happy to accept all of those amateur photographs and videos at the Boston Marathon.

Adding to the hypocrisy is that these same authorities will most likely start clamping down on citizens with cameras more than ever once the smoke clears and we once again become a nation of paranoids willing to give up our freedoms in exchange for some type of perceived security.

After all, that is exactly how it played out in the years after the 9/11 terrorist attacks where it became impossible to photograph buildings, trains or airplanes without drawing the suspicion of authorities as potential terrorists.

Posted on April 23, 2013 at 12:34 PM

The Boston Marathon Bomber Manhunt

I generally give the police a lot of tactical leeway in times like this. The very armed and very dangerous suspects warranted extraordinary treatment. They were perfectly capable of killing again, taking hostages, planting more bombs—and we didn’t know the extent of the plot or the group. That’s why I didn’t object to the massive police dragnet, the city-wide lock down, and so on.

Ross Anderson has a different take:

…a million people were under virtual house arrest; the 19-year-old fugitive from justice happened to be a Muslim. Whatever happened to the doctrine that infringements of one liberty to protect another should be necessary and proportionate?

In the London bombings, four idiots killed themselves in the first incident with a few dozen bystanders, but the second four failed and ran for it when their bombs didn’t go off. It didn’t occur to anyone to lock down London. They were eventually tracked down and arrested, together with their support team. Digital forensics played a big role; the last bomber to be caught left the country and changed his SIM, but not his IMEI. It’s next to impossible for anyone to escape nowadays if the authorities try hard.

He has a point, although I’m not sure I agree with it.

Opinions?

EDITED TO ADD (4/20): This makes the argument very well. On the other hand, readers are rightfully pointing out that the lock down was in response to the shooting of a campus police officer, a carjacking, a firefight, and a vehicle chase with thrown bombs: the sort of thing that pretty much only happens in the movies.

EDITED TO ADD (4/20): More commentary on this Slashdot thread.

Posted on April 20, 2013 at 8:19 AM

Initial Thoughts on the Boston Bombings

I rewrote my “refuse to be terrorized” essay for the Atlantic. David Rothkopf (author of the great book Power, Inc.) wrote something similar, and so did John Cole.

It’s interesting to see how much more resonance this idea has today than it did a dozen years ago. If other people have written similar essays, please post links in the comments.

EDITED TO ADD (4/16): Two good essays.

EDITED TO ADD (4/16): I did a Q&A on the Washington Post blog. And—I can hardly believe it—President Obama said “the American people refuse to be terrorized” in a press briefing today.

EDITED TO ADD (4/16): I did a podcast interview and another press interview.

EDITED TO ADD (4/16): This, on the other hand, is pitiful.

EDITED TO ADD (4/17): Another audio interview with me.

EDITED TO ADD (4/19): I have done a lot of press this week. Here’s a link to a “To the Point” segment, and two Huffington Post Live segments. I was on The Steve Malzberg Show, which I didn’t realize was shouting conservative talk radio until it was too late.

EDITED TO ADD (4/20): That Atlantic essay had 40,000 Facebook likes and 6800 Tweets. The editor told me it had about 360,000 hits. That makes it the most popular piece I’ve ever written.

EDITED TO ADD (5/14): More links here.

Posted on April 16, 2013 at 9:19 AM

When Technology Overtakes Security

A core, not side, effect of technology is its ability to magnify power and multiply force—for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new-identity theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.

The problem is that it’s not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They’re more nimble and adaptable than defensive institutions like police forces. They’re not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side—it’s easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can’t do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

I don’t think it can.

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious…and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

As the destructive power of individual actors and fringe groups increases, so do the calls for—and society’s acceptance of—increased security.

Traditional security largely works “after the fact”. We tend not to ban or restrict the objects that can do harm; instead, we punish the people who do harm with objects. There are exceptions, of course, but they’re exactly that: exceptions. This system works as long as society can tolerate the destructive effects of those objects (for example, allowing people to own baseball bats and arresting them after they use them in a riot is only viable if society can tolerate the potential for riots).

When that isn’t enough, we resort to “before-the-fact” security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.

But these measures work better at keeping dangerous technologies out of the hands of amateurs than at keeping them out of the hands of professionals.

And in the global interconnected world we live in, they’re not anywhere close to foolproof. Still, a climate of fear causes governments to try. Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We’re already almost entirely living in a surveillance state, though we don’t realize it or won’t admit it to ourselves. This will only get worse as technology advances… today’s Ph.D. theses are tomorrow’s high-school science-fair projects.

Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and Minority Report-like preemptive security will become the norm. We can debate the effectiveness of various security measures in different circumstances. But the problem isn’t that these security measures won’t work—even as they shred our freedoms and liberties—it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then someone goes off and destroys us anyway?
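The probability claim here is just the complement rule: if each of N independent actors has some tiny probability p of attempting destruction, the chance that at least one does is 1 − (1 − p)^N, which climbs toward certainty as N grows. A minimal illustration, with invented numbers:

```python
# Complement rule: P(at least one of N actors acts) = 1 - (1 - p)**N.
# As the group grows to planetary scale, even a vanishing per-person
# probability drives the aggregate risk toward certainty.

def p_any(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

assert p_any(1e-9, 1_000) < 1e-5          # a small group: negligible risk
assert p_any(1e-9, 7_000_000_000) > 0.99  # the whole planet: near certainty
```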

If security won’t work in the end, what is the solution?

Resilience—building systems able to survive unexpected and devastating attacks—is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.

If the U.S. can survive the destruction of an entire city—witness New Orleans after Hurricane Katrina or even New York after Sandy—we need to start acting like it, and planning for it. Still, it’s hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don’t know how to adapt any defenses—including resilience—fast enough.

We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We’re going to have to figure this out if we want to survive, and I’m not sure how many decades we have left.

This essay originally appeared on Wired.com.

Commentary.

Posted on March 21, 2013 at 7:02 AM

Al Qaeda Document on Avoiding Drone Strikes

Interesting:

3 – Spreading the reflective pieces of glass on a car or on the roof of the building.

4 – Placing a group of skilled snipers to hunt the drone, especially the reconnaissance
ones because they fly low, about six kilometers or less.

5 – Jamming of and confusing of electronic communication using the ordinary water-lifting dynamo fitted with a 30-meter copper pole.

6 – Jamming of and confusing of electronic communication using old equipment and
keeping them 24-hour running because of their strong frequencies and it is possible using simple ideas of deception of equipment to attract the electronic waves devices similar to that used by the Yugoslav army when they used the microwave (oven) in attracting and confusing the NATO missiles fitted with electromagnetic searching devices.

Posted on March 6, 2013 at 6:50 AM

New al Qaeda Encryption Tool

There’s not a lot of information—and quite a lot of hyperbole—in this article:

With the release of the Asrar Al Dardashah plugin, GIMF promised “secure correspondence” based on the Pidgin chat client, which supports multiple chat platforms, including Yahoo Messenger, Windows Live Messenger, AOL Instant Messenger, Google Talk and Jabber/XMPP.

“The Asrar Al Dardashah plugin supports most of the languages in the world through the use of Unicode encoding, including Arabic, English, Urdu, Pashto, Bengali and Indonesian,” stated the announcement, which was posted on several top online Jihadist forums and GIMF’s official website.

“The plugin is easy and quick to use, and, like its counterpart, the Asrar Al Mujahideen program, it uses the technical algorithm RSA for asymmetric encryption, which is based [on] a pair of interrelated keys: a public key allocated for encrypting and a private key used for decrypting,” GIMF’s statement said. “To use the plugin, both of the communicating parties should install and activate the plugin and produce and import the Asrar Al Mujahideen private key into the Asrar Al Dardashah plugin, which automatically produces the corresponding public key of 2048-bit-length for use. It offers a level of encryption which has not been cracked or broken and can be relied upon entirely to protect the confidentiality of sensitive communication[s].”
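The scheme the announcement describes is ordinary textbook RSA: a public key for encrypting and a private key for decrypting, derived as a pair. A toy sketch of the underlying math, using the classic tiny-prime worked example rather than a real 2048-bit key (and with nothing taken from the GIMF tool itself):

```python
# Toy RSA sketch of the asymmetric scheme described in the announcement:
# a public key for encrypting, a private key for decrypting. Real keys
# use 2048-bit primes plus padding (e.g. OAEP); these primes are tiny
# and purely illustrative.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keypair(p, q, e):
    n = p * q
    phi = (p - 1) * (q - 1)
    _, d, _ = egcd(e, phi)          # d is the modular inverse of e mod phi
    return (n, e), (n, d % phi)     # (public key, private key)

pub, priv = make_keypair(61, 53, e=17)   # the standard textbook example
m = 65
c = pow(m, pub[1], pub[0])               # encrypt with the public key
assert pow(c, priv[1], priv[0]) == m     # decrypt with the private key
```

Note that "has not been cracked or broken" is doing a lot of work in the quote: the algorithm being well-studied says nothing about the key handling, padding, or implementation around it.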

Posted on February 13, 2013 at 6:13 AM

Classifying a Shape

This is a great essay:

Spheres are special shapes for nuclear weapons designers. Most nuclear weapons have, somewhere in them, that spheres-within-spheres arrangement of the implosion nuclear weapon design. You don’t have to use spheres—cylinders can be made to work, and there are lots of rumblings and rumors about non-spherical implosion designs around these here Internets—but spheres are pretty common.

[…]

Imagine the scenario: you’re a security officer working at Los Alamos. You know that spheres are weapon parts. You walk into a technical area, and you see spheres all around! Is that an ashtray, or it is a model of a plutonium pit? Anxiety mounts—does the ashtray go into a safe at the end of the day, or does it stay out on the desk? (Has someone been tapping their cigarettes out into the pit model?)

All of this anxiety can be gone—gone!—by simply banning all non-nuclear spheres! That way you can effectively treat all spheres as sensitive shapes.

What I love about this little policy proposal is that it illuminates something deep about how secrecy works. Once you decide that something is so dangerous that the entire world hinges on keeping it under control, this sense of fear and dread starts to creep outwards. The worry about what must be controlled becomes insatiable—and pretty soon the mundane is included with the existential.

The essay continues with a story of a scientist who received a security violation for leaving an orange on his desk.

Two points here. One, this is a classic problem with any detection system. When it’s hard to build a system that detects the thing you’re looking for, you change the problem to detect something easier—and hope the overlap is enough to make the system work. Think about airport security. It’s too hard to detect actual terrorists with terrorist weapons, so instead they detect pointy objects. Internet filtering systems work the same way, too. (Remember when URL filters blocked the word “sex,” and the Middlesex Public Library found that it couldn’t get to its municipal webpages?)
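The URL-filter anecdote shows the proxy problem in miniature. A hypothetical substring blocklist (the URL below is invented for illustration) produces exactly that false positive, and narrowing the proxy only trades one error for another:

```python
import re

# A naive substring blocklist detects an easy proxy pattern rather than
# the thing actually being targeted, so "middlesex" matches along with
# real hits. The URL is invented for illustration.

BLOCKLIST = ["sex"]

def naive_filter(url: str) -> bool:
    return any(word in url.lower() for word in BLOCKLIST)

def word_filter(url: str) -> bool:
    # Requiring word boundaries narrows the proxy, at the cost of
    # missing anything embedded inside a longer word.
    return any(re.search(rf"\b{w}\b", url.lower()) for w in BLOCKLIST)

assert naive_filter("middlesex-library.example.gov")      # false positive
assert not word_filter("middlesex-library.example.gov")   # boundary check passes it
```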

Two, the Los Alamos system only works because false negatives are much, much worse than false positives. It really is worth classifying an abstract shape and annoying an officeful of scientists and others to protect the nuclear secrets. Airport security fails because the false-positive/false-negative cost ratio is different.
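The cost-ratio point can be made concrete with a toy expected-cost calculation. All numbers below are invented; only the structure of the comparison matters:

```python
# Toy expected-cost model of a screening policy: per-inspection cost
# splits into false alarms on innocents and missed real threats.
# Every number here is invented for illustration.

def screening_costs(p_threat, p_false_alarm, c_alarm, p_miss, c_miss):
    alarms = (1 - p_threat) * p_false_alarm * c_alarm
    misses = p_threat * p_miss * c_miss
    return alarms, misses

# Los Alamos regime: misses are catastrophic, so classifying every
# sphere (false alarms on ashtrays and oranges) is the cheap side.
lanl_alarms, lanl_misses = screening_costs(1e-4, 0.9, 1, 1e-3, 1e9)
assert lanl_misses > lanl_alarms

# Airport regime: real threats are so rare that the false-alarm side
# dominates whatever the screening catches.
apt_alarms, apt_misses = screening_costs(1e-9, 0.05, 1, 1e-3, 1e9)
assert apt_alarms > apt_misses
```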

Posted on January 3, 2013 at 6:03 AM

Information-Age Law Enforcement Techniques

This is an interesting blog post:

Buried inside a recent United Nations Office on Drugs and Crime report titled Use of Internet for Terrorist Purposes one can carve out details and examples of law enforcement electronic surveillance techniques that are normally kept secret.

[…]

Point 280: International members of the guerilla group Revolutionary Armed Forces of Colombia (FARC) communicated with their counterparts hiding messages inside images with steganography and sending the emails disguised as spam, deleting Internet browsing cache afterwards to make sure that the authorities would not get hold of the data. Spanish and Colombian authorities cooperated to break the encryption keys and successfully deciphered the messages.

[…]

Point 198: It explains how an investigator can circumvent Truecrypt plausible deniability feature (hidden container), advising computer forensics investigators to take into consideration during the computer analysis to check if there is any missing volume of data.

[…]

Point 210: Explains how Remote Administration Trojans (RATs) can be introduced into a suspects computer to collect data or control his computer and it makes reference to hardware and software keyloggers as well as packet sniffers.

There’s more at the above link. Here’s the final report.
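The steganography technique point 280 describes, hiding a message inside an image, can be sketched in its simplest form: least-significant-bit embedding, where each payload bit replaces the low bit of one pixel byte. This is a generic illustration of the technique, not the tool the report refers to, and it operates on a raw byte buffer rather than a real image file:

```python
# Minimal least-significant-bit steganography sketch. Each payload bit
# replaces the low bit of one "pixel" byte, so the cover data is almost
# visually unchanged. A real tool would read and write image formats.

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the low bit
    return out

def extract(pixels: bytearray, nbytes: int) -> bytes:
    bits = [pixels[i] & 1 for i in range(nbytes * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(nbytes)
    )

cover = bytearray(range(256))        # stand-in for image pixel data
stego = embed(cover, b"hola")
assert extract(stego, 4) == b"hola"  # message recovered from low bits
```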

Posted on December 19, 2012 at 6:47 AM

Book Review: Against Security

Against Security: How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger, by Harvey Molotch, Princeton University Press, 278 pages, $35.

Security is both a feeling and a reality, and the two are different things. People can feel secure when they’re actually not, and they can be secure even when they believe otherwise.

This discord explains much of what passes for our national discourse on security policy. Security measures often are nothing more than security theater, making people feel safer without actually increasing their protection.

A lot of psychological research has tried to make sense out of security, fear, risk, and safety. But however fascinating the academic literature is, it often misses the broader social dynamics. New York University’s Harvey Molotch helpfully brings a sociologist’s perspective to the subject in his new book Against Security.

Molotch delves deeply into a few examples and uses them to derive general principles. He starts Against Security with a mundane topic: the security of public restrooms. It’s a setting he knows better than most, having authored Toilet: The Public Restroom and the Politics of Sharing (New York University Press) in 2010. It turns out the toilet is not a bad place to begin a discussion of the sociology of security.

People fear various things in public restrooms: crime, disease, embarrassment. Different cultures either ignore those fears or address them in culture-specific ways. Many public lavatories, for example, have no-touch flushing mechanisms, no-touch sinks, no-touch towel dispensers, and even no-touch doors, while some Japanese commodes play prerecorded sounds of water running, to better disguise the embarrassing tinkle.

Restrooms have also been places where, historically and in some locations, people could do drugs or engage in gay sex. Sen. Larry Craig (R-Idaho) was arrested in 2007 for soliciting sex in the bathroom at the Minneapolis-St. Paul International Airport, suggesting that such behavior is not a thing of the past. To combat these risks, the managers of some bathrooms—men’s rooms in American bus stations, in particular—have taken to removing the doors from the toilet stalls, forcing everyone to defecate in public to ensure that no one does anything untoward (or unsafe) behind closed doors.

Subsequent chapters discuss security in subways, at airports, and on airplanes; at Ground Zero in lower Manhattan; and after Hurricane Katrina in New Orleans. Each of these chapters is an interesting sociological discussion of both the feeling and reality of security, and all of them make for fascinating reading. Molotch has clearly done his homework, conducting interviews on the ground, asking questions designed to elicit surprising information.

Molotch demonstrates how complex and interdependent the factors that comprise security are. Sometimes we implement security measures against one threat, only to magnify another. He points out that more people have died in car crashes since 9/11 because they were afraid to fly—or because they didn’t want to deal with airport security—than died during the terrorist attacks. Or to take a more prosaic example, special “high-entry” subway turnstiles make it much harder for people to sneak in for a free ride but also make platform evacuations much slower in the case of an emergency.

The common thread in Against Security is that effective security comes less from the top down and more from the bottom up. Molotch’s subtitle telegraphs this conclusion: “How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger.” It’s the word ambiguous that’s important here. When we don’t know what sort of threats we want to defend against, it makes sense to give the people closest to whatever is happening the authority and the flexibility to do what is necessary. In many of Molotch’s anecdotes and examples, the authority figure—a subway train driver, a policeman—has to break existing rules to provide the security needed in a particular situation. Many security failures are exacerbated by a reflexive adherence to regulations.

Molotch is absolutely right to home in on this kind of individual initiative and resilience as a critical source of true security. Current U.S. security policy is overly focused on specific threats. We defend individual buildings and monuments. We defend airplanes against certain terrorist tactics: shoe bombs, liquid bombs, underwear bombs. These measures have limited value because the number of potential terrorist tactics and targets is much greater than the ones we have recently observed. Does it really make sense to spend a gazillion dollars just to force terrorists to switch tactics? Or drive to a different target? In the face of modern society’s ambiguous dangers, it is flexibility that makes security effective.

We get much more bang for our security dollar by not trying to guess what terrorists are going to do next. Investigation, intelligence, and emergency response are where we should be spending our money. That doesn’t mean mass surveillance of everyone or the entrapment of incompetent terrorist wannabes; it means tracking down leads—the sort of thing that caught the 2006 U.K. liquid bombers. They chose their tactic specifically to evade established airport security at the time, but they were arrested in their London apartments well before they got to the airport on the strength of other kinds of intelligence.

In his review of Against Security in Times Higher Education, aviation security expert Omar Malik takes issue with the book’s seeming trivialization of the airplane threat and Molotch’s failure to discuss terrorist tactics. “Nor does he touch on the multitude of objects and materials that can be turned into weapons,” Malik laments. But this is precisely the point. Our fears of terrorism are wildly out of proportion to the actual threat, and an analysis of various movie-plot threats does nothing to make us safer.

In addition to urging people to be more reasonable about potential threats, Molotch makes a strong case for optimism and kindness. Treating every air traveler as a potential terrorist and every Hurricane Katrina refugee as a potential looter is dehumanizing. Molotch argues that we do better as a society when we trust and respect people more. Yes, the occasional bad thing will happen, but 1) it happens less often, and is less damaging, than you probably think, and 2) individuals naturally organize to defend each other. This is what happened during the evacuation of the Twin Towers and in the aftermath of Katrina before official security took over. Those in charge often do a worse job than the common people on the ground.

While that message will please skeptics of authority, Molotch sees a role for government as well. In fact, many of his lessons are primarily aimed at government agencies, to help them design and implement more effective security systems. His final chapter is invaluable on that score, discussing how we should focus on nurturing the good in most people—by giving them the ability and freedom to self-organize in the event of a security disaster, for example—rather than focusing solely on the evil of the very few. It is a hopeful yet realistic message for an irrationally anxious time. Whether those government agencies will listen is another question entirely.

This review was originally published at reason.com.

Posted on December 14, 2012 at 12:24 PM

