Blog: April 2009 Archives

Preparing for Cyberwar

Interesting article from The New York Times.

Because so many aspects of the American effort to develop cyberweapons and define their proper use remain classified, many of those officials declined to speak on the record. The White House declined several requests for interviews or to say whether Mr. Obama as a matter of policy supports or opposes the use of American cyberweapons.

The most exotic innovations under consideration would enable a Pentagon programmer to surreptitiously enter a computer server in Russia or China, for example, and destroy a “botnet”—a potentially destructive program that commandeers infected machines into a vast network that can be clandestinely controlled—before it could be unleashed in the United States.

Or American intelligence agencies could activate malicious code that is secretly embedded on computer chips when they are manufactured, enabling the United States to take command of an enemy’s computers by remote control over the Internet. That, of course, is exactly the kind of attack officials fear could be launched on American targets, often through Chinese-made chips or computer servers.

So far, however, there are no broad authorizations for American forces to engage in cyberwar. The invasion of the Qaeda computer in Iraq several years ago and the covert activity in Iran were each individually authorized by Mr. Bush. When he issued a set of classified presidential orders in January 2008 to organize and improve America’s online defenses, the administration could not agree on how to write the authorization.

I’ve written about cyberwar here.

Posted on April 30, 2009 at 2:18 PM • 20 Comments

A Sad Tale of Biometrics Gone Wrong

From The Daily WTF:

Johnny was what you might call a “gym rat.” In incredible shape from almost-daily gym visits, a tight Lycra tank top, iPod strapped to his sizable bicep, underneath which was a large black tribal tattoo. He scanned his finger on his way out, but the turnstile wouldn’t budge.

“Uh, just a second,” the receptionist furiously typed and clicked, while Johnny removed one of his earbuds and stared. “I’ll just have to manually override it…” but it was useless. There was no manual override option. Somehow, it was never considered that the scanner would malfunction. After several seconds of searching and having Johnny try to scan his finger again, the receptionist instructed him just to jump over the turnstile.

It was later discovered that the system required a “sign in” and a “sign out,” and if a member was recognized as someone else when attempting to sign out, the system rejected the input, and the turnstile remained locked in position. This was not good.

The scene repeated itself several times that day. Worse, the fingerprint scanner at the exit was getting kind of disgusting. Dozens of sweaty fingerprints required the scanner to be cleaned hourly, and even after it was freshly cleaned, it sometimes still couldn’t read fingerprints right. The latticed patterns on the barbell grips would leave indented patterns temporarily on the members’ fingers, there could be small cuts or folds on fingertips just from carrying weights or scrapes on the concrete coming out of the pool, fingers were wrinkly after a long swim, or sometimes the system just misidentified the person for no apparent reason.

Me on biometrics.

Posted on April 30, 2009 at 6:19 AM • 66 Comments

Lessons from the Columbine School Shooting

Lots of high-tech gear, but that’s not what makes schools safe:

Some of the noticeable security measures remain, but experts say the country is exploring a new way to protect kids from in-school violence: administrators now want to foster school communities that essentially can protect themselves with or without the high-tech gear.

“The first and best line of defense is always a well-trained, highly alert staff and student body,” said Kenneth Trump, president of National School Safety and Security Services, an Ohio-based firm specializing in school security.

“The No. 1 way we find out about weapons in schools is not from a piece of equipment [such as a metal detector] but from a kid who comes forward and reports it to an adult that he or she trusts.”

Of course, there never was an epidemic of school shootings—it just seemed that way in the media. And kids are much safer in schools than outside of them.

Posted on April 29, 2009 at 5:57 AM • 55 Comments

"No-Fly" Also Means "No-Flyover"

I’ve previously written about the piece of counterterrorism silliness known as the no-fly list:

Imagine a list of suspected terrorists so dangerous that we can’t ever let them fly, yet so innocent that we can’t arrest them—even under the draconian provisions of the Patriot Act.

Turns out these people are so dangerous that they can’t be allowed to fly over United States territory, even on a flight from Paris to Mexico.

What makes the whole incident even more interesting is that Air France had only sent its passenger manifest to the Mexicans, but now it is clear that Mexico shares this information with the United States.

Hernando Calvo Ospina has written articles about the United States’ involvement in Latin America, and is currently writing a book about the CIA. The exact reason he is on the terrorist watch list is unknown, and we’ll probably never know what criteria are used for adding people to it. Air France is considering asking the United States for compensation. Good luck with that.

Additional links.

Posted on April 28, 2009 at 1:00 PM • 61 Comments

Cell Phones and Hostage Situations

I haven’t read this book on the Columbine school shooting and massacre, but the New York Times review had an interesting paragraph about cell phones in a hostage situation:

Fuselier is one of the people Cullen spotlights in his retelling in order to clear up the historical record. Some of the confusion generated by Columbine was inevitable: Harris and Klebold started out wearing trench coats, for instance, but at some point removed them, giving the illusion that they were four people rather than two. The homemade pipe bombs they were tossing in all directions—down stairwells, onto the roof—only seemed to further the impression that there were more of them. And then there were the SWAT teams: students trapped inside the building would hear their rifle fire, assume it was the killers and report it to the media by cellphone, complicating the cops’ efforts to keep them safe. “This was the first major hostage standoff of the cellphone age,” Cullen notes. The police “had never seen anything like it.”

Posted on April 27, 2009 at 6:57 AM • 33 Comments

Unfair and Deceptive Data Trade Practices

Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.

There’s a basic consumer protection principle at work here, and it’s the concept of “unfair and deceptive” trade practices. Basically, a company shouldn’t be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren’t generally available, claim features that don’t exist, and so on.

Buried in RealAge’s 2,400-word privacy policy is this disclosure: “If you elect to say yes to becoming a free RealAge Member, we will periodically send you free newsletters and e-mails that directly promote the use of our site(s) or the purchase of our products or services and may contain, in whole or in part, advertisements for third parties which relate to marketed products of selected RealAge partners.”

They maintain that when you join the website, you consent to receiving pharmaceutical company spam. But since that isn’t spelled out, it’s not really informed consent. That’s deceptive.

Cloud computing is another technology where users entrust their data to service providers. Salesforce.com, Gmail, and Google Docs are examples; your data isn’t on your computer—it’s out in the “cloud” somewhere—and you access it from your web browser. Cloud computing has significant benefits for customers and huge profit potential for providers. It’s one of the fastest growing IT market segments—69% of Americans now use some sort of cloud computing services—but the business is rife with shady, if not outright deceptive, advertising.

Take Google, for example. Last month, the Electronic Privacy Information Center (I’m on its board of directors) filed a complaint with the Federal Trade Commission concerning Google’s cloud computing services. On its website, Google repeatedly assures customers that their data is secure and private, while published vulnerabilities demonstrate that it is not. Google’s not foolish, though; its Terms of Service explicitly disavow any warranty or any liability for harm that might result from Google’s negligence, recklessness, malevolent intent, or even purposeful disregard of existing legal obligations to protect the privacy and security of user data. EPIC claims that’s deceptive.

Facebook isn’t much better. Its plainly written (and not legally binding) Statement of Principles contains an admirable set of goals, but its denser and more legalistic Statement of Rights and Responsibilities undermines a lot of it. One research group that studies these documents called it “democracy theater”: Facebook wants the appearance of involving users in governance, without the messiness of actually having to do so. Deceptive.

These issues are not identical. RealAge is hiding what it does with your data. Google is trying to both assure you that your data is safe and duck any responsibility when it’s not. Facebook wants to market a democracy but run a dictatorship. But they all involve trying to deceive the customer.

Cloud computing services like Google Docs, and social networking sites like RealAge and Facebook, bring with them significant privacy and security risks over and above traditional computing models. Unlike data on my own computer, which I can protect to whatever level I believe prudent, I have no control over any of these sites, nor any real knowledge of how these companies protect my privacy and security. I have to trust them.

This may be fine—the advantages might very well outweigh the risks—but users often can’t weigh the trade-offs because these companies are going out of their way to hide the risks.

Of course, companies don’t want people to make informed decisions about where to leave their personal data. RealAge wouldn’t get 27 million members if its webpage clearly stated “you are signing up to receive e-mails containing advertising from pharmaceutical companies,” and Google Docs wouldn’t get five million users if its webpage said “We’ll take some steps to protect your privacy, but you can’t blame us if something goes wrong.”

And of course, trust isn’t black and white. If, for example, Amazon tried to use customer credit card info to buy itself office supplies, we’d all agree that that was wrong. If it used customer names to solicit new business from their friends, most of us would consider this wrong. When it uses buying history to try to sell customers new books, many of us appreciate the targeted marketing. Similarly, no one expects Google’s security to be perfect. But if it didn’t fix known vulnerabilities, most of us would consider that a problem.

This is why understanding is so important. For markets to work, consumers need to be able to make informed buying decisions. They need to understand both the costs and benefits of the products and services they buy. Allowing sellers to manipulate the market by outright lying, or even by hiding vital information, about their products breaks capitalism—and that’s why the government has to step in to ensure markets work smoothly.

Last month, Mary K. Engle, Acting Deputy Director of the FTC’s Bureau of Consumer Protection, said: “a company’s marketing materials must be consistent with the nature of the product being offered. It’s not enough to disclose the information only in the fine print of a lengthy online user agreement.” She was speaking about Digital Rights Management and, specifically, an incident where Sony used a music copy protection scheme without disclosing that it secretly installed software on customers’ computers. DRM is different from cloud computing or even online surveys and quizzes, but the principle is the same.

Engle again: “if your advertising giveth and your EULA [license agreement] taketh away don’t be surprised if the FTC comes calling.” That’s the right response from government.

A version of this article originally appeared in The Wall Street Journal.

EDITED TO ADD (4/29): Two rebuttals.

Posted on April 27, 2009 at 6:16 AM • 35 Comments

The Terrorism Arrests that Weren't

Remember those terrorism arrests that the UK government conducted, after a secret document was accidentally photographed? No one was charged:

The Crown Prosecution Service said there was insufficient evidence to press charges or hold them any longer.

The Muslim Council of Britain said the government behaved “very dishonourably” over the treatment of the men and should admit it had made a mistake.

Of the 12 men arrested in the raids, 11 were Pakistani nationals, 10 held student visas and one was from Britain.

Posted on April 24, 2009 at 1:27 PM • 25 Comments

Fake Facts on Twitter

Clever hack:

Back during the debate for HR 1, I was amazed at how easily conservatives were willing to accept and repeat lies about spending in the stimulus package, even after those provisions had been debunked as fabrications. The $30 million for the salt marsh mouse is a perfect example, and Kagro X documented well over a dozen congressmen repeating the lie.

To test the limits of this phenomenon, I started a parody Twitter account last Thursday, which I called “InTheStimulus”, where all the tweets took the format “InTheStimulus is $x million for ______”. I went through the followers of Republican Twitter feeds and in turn followed them, all the way up to the limit of 2000. From people following me back, I was able to get 500 followers in less than a day, and 1000 by Sunday morning.

You can read through all the retweets and responses by looking at the Twitter search for “InTheStimulus”. For the most part, my first couple of days of posts were believable but unsourced lies:

  • $3 million for replacement tires for 1992-1995 Geo Metros.
  • $750,000 for an underground tunnel connecting a middle school and high school in North Carolina.
  • $4.7 million for a program supplying public television to K-8 classrooms.
  • $2.3 million for a museum dedicated to the electric bass guitar.

The Twitter InTheStimulus site appears to have been taken down.

There are several things going on here. First is confirmation bias, which is the tendency of people to believe things that reinforce their prior beliefs. But the second is the limited bandwidth of Twitter—140-character messages—which makes it very difficult to authenticate anything. Twitter is an ideal medium to inject fake facts into society for precisely this reason.

EDITED TO ADD (5/14): False Twitter rumors about Swine Flu.

Posted on April 24, 2009 at 6:29 AM • 52 Comments

Hacking U.S. Military Satellites

The problem is more widespread than you might think:

First lofted into orbit in the 1970s, the FLTSATCOM bird was at the time a major advance in military communications. Their 23 channels were used by every branch of the U.S. armed forces and the White House for encrypted data and voice, typically from portable ground units that could be quickly unpacked and put to use on the battlefield.

As the original FLTSAT constellation of four satellites fell out of service, the Navy launched a more advanced UFO satellite (for Ultra High Frequency Follow-On) to replace them. Today, there are two FLTSAT and eight UFO birds in geosynchronous orbit. Navy contractors are working on a next-generation system called Mobile User Objective System beginning in September 2009.

Until then, the military is still using aging FLTSAT and UFO satellites—and so are a lot of Brazilians. While the technology on the transponders still dates from the 1970s, radio sets back on Earth have only improved and plummeted in cost—opening a cheap, efficient and illegal backdoor.

To use the satellite, pirates typically take an ordinary ham radio transmitter, which operates in the 144- to 148-MHz range, and add a frequency doubler cobbled from coils and a varactor diode. That lets the radio stretch into the lower end of FLTSATCOM’s 292- to 317-MHz uplink range. All the gear can be bought near any truck stop for less than $500. Ads on specialized websites offer to perform the conversion for less than $100. Taught the ropes, even rough electricians can make Bolinha-ware.
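To see why only the lower end of the uplink band is reachable, run the numbers: doubling the top of the ham range gives 148 MHz × 2 = 296 MHz, and 146 MHz × 2 = 292 MHz—squarely inside the bottom of the 292- to 317-MHz uplink band—while doubling the bottom of the ham range, 144 MHz × 2 = 288 MHz, falls short of the band entirely.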

Posted on April 23, 2009 at 12:30 PM • 23 Comments

Conficker

Conficker’s April Fool’s joke—the huge, menacing build-up and then nothing—is a good case study on how we think about risks, one whose lessons are applicable far outside computer security. Generally, our brains aren’t very good at probability and risk analysis. We tend to use cognitive shortcuts instead of thoughtful analysis. This worked fine for the simple risks we encountered for most of our species’s existence, but it’s less effective against the complex risks society forces us to face today.

We tend to judge the probability of something happening by how easily we can bring examples to mind. It’s why people tend to buy earthquake insurance after an earthquake, when the risk is lowest. It’s why those of us who have been the victims of a crime tend to fear crime more than those who haven’t. And it’s why we fear a repeat of 9/11 more than other types of terrorism.

We fear being murdered, kidnapped, raped and assaulted by strangers, when friends and relatives are far more likely to do those things to us. We worry about plane crashes instead of car crashes, which are far more common. We tend to exaggerate spectacular, strange, and rare events, and downplay more ordinary, familiar, and common ones.

We also respond more to stories than to data. If I show you statistics on crime in New York, you’ll probably shrug and continue your vacation planning. But if a close friend gets mugged there, you’re more likely to cancel your trip.

And specific stories are more convincing than general ones. That is why we buy more insurance against plane accidents than against travel accidents, or accidents in general. Or why, when surveyed, we are willing to pay more for air travel insurance covering “terrorist acts” than “all possible causes”. That is why, in experiments, people judge specific scenarios more likely than more general ones, even if the general ones include the specific.

Conficker’s 1 April deadline was precisely the sort of event humans tend to overreact to. It’s a specific threat, which convinces us that it’s credible. It’s a specific date, which focuses our fear. Our natural tendency to exaggerate makes it more spectacular, which further increases our fear. Its repetition by the media makes it even easier to bring to mind. As the story becomes more vivid, it becomes more convincing.

The New York Times called it an “unthinkable disaster”, the television news show 60 Minutes said it could “disrupt the entire internet” and we at the Guardian warned that it might be a “deadly threat”. Naysayers were few, and drowned out.

The first of April passed without incident, but Conficker is no less dangerous today. About 2.2m computers worldwide are still infected with Conficker.A and B, and about 1.3m more are infected with the nastier Conficker.C. It’s true that on 1 April Conficker.C tried a new trick to update itself, but its authors could have updated the worm using another mechanism any day. In fact, they updated it on 8 April, and can do so again.

And Conficker is just one of many, many dangerous worms being run by criminal organisations. It came with a date and got a lot of press—that 1 April date was more hype than reality—but it’s not particularly special. In short, there are many criminal organisations on the internet using worms and other forms of malware to infect computers. They then use those computers to send spam, commit fraud, and infect more computers. The risks are real and serious. Luckily, keeping your anti-virus software up-to-date and not clicking on strange attachments can keep you pretty secure. Conficker spreads through a Windows vulnerability that was patched in October. You do have automatic update turned on, right?

But people being people, it takes a specific story for us to protect ourselves.

This essay previously appeared in The Guardian.

Posted on April 23, 2009 at 5:50 AM • 44 Comments

Low-Tech Impersonation

Sometimes the basic tricks work best:

Police say a man posing as a waiter collected $186 in cash from diners at two restaurants in New Jersey and walked out with the money in his pocket.

Diners described the bogus waiter as a spikey-haired 20-something wearing a dark blue or black button-down shirt, yellow tie and khaki pants.

Police say he approached two women dining at Hobson’s Choice in Hoboken, N.J. around 7:20 p.m. on Thursday. He asked if they needed anything else before paying. They said no and handed him $90 in cash.

About two hours later he approached three women dining at Margherita’s Pizza and Cafe. He asked if they were ready to pay, took $96 and never returned with their change.

Certainly he’ll be caught if he keeps it up, but it’s a good trick if used sparingly.

Posted on April 22, 2009 at 7:04 AM • 43 Comments

DHS Recruitment Drive

Anyone interested?

General Dynamics Information Technology put out an ad last month on behalf of the Homeland Security Department seeking someone who could “think like the bad guy.” Applicants, it said, must understand hackers’ tools and tactics and be able to analyze Internet traffic and identify vulnerabilities in the federal systems.

In the Pentagon’s budget request submitted last week, Defense Secretary Robert Gates said the Pentagon will increase the number of cyberexperts it can train each year from 80 to 250 by 2011.

Posted on April 21, 2009 at 6:25 AM • 32 Comments

Book Review: The Science of Fear

Daniel Gardner’s The Science of Fear was published last July, but I’ve only just gotten around to reading it. That was a big mistake. It’s a fantastic look at how humans deal with fear: exactly the kind of thing I have been reading and writing about for the past couple of years. It’s the book I wanted to write, and it’s a great read.

Gardner writes about how the brain processes fear and risk, how it assesses probability and likelihood, and how it makes decisions under uncertainty. The book talks about all the interesting psychological studies—cognitive psychology, evolutionary psychology, behavioral economics, experimental philosophy—that illuminate how we think and act regarding fear. The book also talks about how fear is used to influence people, by marketers, by politicians, by the media. And lastly, the book talks about different areas where fear plays a part: health, crime, terrorism.

There have been a lot of books published recently that apply these new paradigms of human psychology to different domains—to randomness, to traffic, to rationality, to art, to religion, etc.—but after you read a few you start seeing the same dozen psychology experiments over and over again. Even I did it, when I wrote about the psychology of security. But Gardner’s book is different: he goes further, explains more, and demonstrates his point with the more obscure experiments that most authors don’t bother seeking out. His writing style is both easy to read and informative, a nice mix of data and anecdote. The flow of the book makes sense. And his analysis is spot-on.

My only problem with the book is that Gardner doesn’t use standard names for the various brain heuristics he talks about. Yes, his names are more intuitive and evocative, but they’re wrong. If you have already read other books in the field, this is annoying because you have to constantly translate into standard terminology. And if you haven’t read anything else in the field, this is a real problem because you’ll be needlessly confused when you read about these things in other books and articles.

So here’s a handy conversion chart. Print it out and tape it to the inside front cover. Print another copy out and use it as a bookmark.

  • Rule of Typical Things = representativeness heuristic
  • Example Rule = availability heuristic
  • Good-Bad Rule = affect heuristic
  • confirmation bias = confirmation bias

That’s it. That’s the only thing I didn’t like about the book. Otherwise, it’s perfect. It’s the book I wish I had written. Only I don’t think I would have done as good a job as Gardner did. The Science of Fear should be required reading for…well, for everyone.

The paperback will be published in June. But, amazingly enough, the hardcover is on sale for only $6 at Amazon. Buy two and give one to someone else.

Here’s a link from Powell’s, if you’re boycotting Amazon.

Posted on April 20, 2009 at 6:16 AM • 30 Comments

How to Write a Scary Cyberterrorism Story

From Foreign Policy:

8. If you are still having trouble working the Chinese or the Russian governments into your story, why not throw in some geopolitical kerfuffle that involves a country located in between? Not only would it implicate both governments, it would also make cyberspace seem relevant to geopolitics. I suggest you settle on Kyrgyzstan, as it would also help to make a connection to the US military bases; there is no better story than having Russian and Chinese hackers oust the US from Kyrgyzstan via cyber-attacks. Bonus points for mentioning Azerbaijan and the importance of cyberwarfare to the politics of the Caspian oil; in the worst case, Kazakhstan would do as well. Never mention any connectivity statistics for the countries you are writing about: you don’t want readers to start doubting that someone might be interested in launching a cyberwar on countries that couldn’t care less about the Internet.

Posted on April 15, 2009 at 6:17 AM • 29 Comments

Tweenbots

Tweenbots:

Tweenbots are human-dependent robots that navigate the city with the help of pedestrians they encounter. Rolling at a constant speed, in a straight line, Tweenbots have a destination displayed on a flag, and rely on people they meet to read this flag and to aim them in the right direction to reach their goal.

Given their extreme vulnerability, the vastness of city space, the dangers posed by traffic, suspicion of terrorism, and the possibility that no one would be interested in helping a lost little robot, I initially conceived the Tweenbots as disposable creatures which were more likely to struggle and die in the city than to reach their destination. Because I built them with minimal technology, I had no way of tracking the Tweenbot’s progress, and so I set out on the first test with a video camera hidden in my purse. I placed the Tweenbot down on the sidewalk, and walked far enough away that I would not be observed as the Tweenbot—a smiling 10-inch tall cardboard missionary—bumped along towards his inevitable fate.

The results were unexpected. Over the course of the following months, throughout numerous missions, the Tweenbots were successful in rolling from their start point to their far-away destination assisted only by strangers. Every time the robot got caught under a park bench, ground futilely against a curb, or became trapped in a pothole, some passerby would always rescue it and send it toward its goal. Never once was a Tweenbot lost or damaged. Often, people would ignore the instructions to aim the Tweenbot in the “right” direction, if that direction meant sending the robot into a perilous situation. One man turned the robot back in the direction from which it had just come, saying out loud to the Tweenbot, “You can’t go that way, it’s toward the road.”

It’s a measure of our restored sanity that no one called the TSA. Or maybe it’s just that no one has tried this in Boston yet. Or maybe it’s a lesson for terrorists: paint smiley faces on your bombs.

Posted on April 13, 2009 at 6:14 AM • 51 Comments

How Not to Carry Around Secret Documents

Here’s a tip: when walking around in public with secret government documents, put them in an envelope.

A huge MI5 and police counterterrorist operation against al-Qaeda suspects had to be brought forward at short notice last night after Scotland Yard’s counter-terrorism chief accidentally revealed a briefing document.

[…]

The operation was nearly blown when Assistant Commissioner Bob Quick walked up Downing Street holding a document marked “secret” with highly sensitive operational details visible to photographers.

The document, carried under his arm, revealed how many terrorist suspects were to be arrested, in which cities across the North West. It revealed that armed members of the Greater Manchester Police would force entry into a number of homes. The operation’s secret code headed the list of action that was to take place.

Now the debate begins about whether he was just stupid, or very very stupid:

Opposition MPs criticised Mr Quick, with the Liberal Democrats describing him as “accident prone” and the Conservatives condemning his “very alarming” lapse of judgement.

But former Labour Mayor of London Ken Livingstone said it would be wrong for such an experienced officer to resign “for holding a piece of paper the wrong way”.

It wasn’t just a piece of paper. It was a secret piece of paper. (Here’s the best blow-up of the picture.) And surely these people have procedures for transporting classified material. That’s what the mistake was: not following proper procedure.

He resigned.

Posted on April 10, 2009 at 7:06 AM • 71 Comments

U.S. Power Grid Hacked, Everyone Panic!

Yesterday I talked to at least a dozen reporters about this breathless Wall Street Journal story:

Cyberspies have penetrated the U.S. electrical grid and left behind software programs that could be used to disrupt the system, according to current and former national-security officials.

The spies came from China, Russia and other countries, these officials said, and were believed to be on a mission to navigate the U.S. electrical system and its controls. The intruders haven’t sought to damage the power grid or other key infrastructure, but officials warned they could try during a crisis or war.

“The Chinese have attempted to map our infrastructure, such as the electrical grid,” said a senior intelligence official. “So have the Russians.”

[…]

Authorities investigating the intrusions have found software tools left behind that could be used to destroy infrastructure components, the senior intelligence official said. He added, “If we go to war with them, they will try to turn them on.”

Officials said water, sewage and other infrastructure systems also were at risk.

“Over the past several years, we have seen cyberattacks against critical infrastructures abroad, and many of our own infrastructures are as vulnerable as their foreign counterparts,” Director of National Intelligence Dennis Blair recently told lawmakers. “A number of nations, including Russia and China, can disrupt elements of the U.S. information infrastructure.”

Read the whole story; there aren’t really any facts in it. I don’t know what’s going on; maybe it’s just budget season and someone is jockeying for a bigger slice.

Honestly, I am much more worried about random errors and undirected worms in the computers running our infrastructure than I am about the Chinese military. I am much more worried about criminal hackers than I am about government hackers. I wrote about the risks to our infrastructure here, and about Chinese hacking here.

And I wrote about last year’s reports of international hacking of our SCADA control systems here.

Posted on April 9, 2009 at 12:02 PM • 55 Comments

P2P Privacy

Interesting research:

The team of researchers, which includes graduate students David Choffnes (electrical engineering and computer science) and Dean Malmgren (chemical and biological engineering), and postdoctoral fellow Jordi Duch (chemical and biological engineering), studied connection patterns in the BitTorrent file-sharing network—one of the largest and most popular P2P systems today. They found that over the course of weeks, groups of users formed communities where each member consistently connected with other community members more than with users outside the community.

“This was particularly surprising because BitTorrent is designed to establish connections at random, so there is no a priori reason for such strong communities to exist,” Bustamante says. After identifying this community behavior, the researchers showed that an eavesdropper could classify users into specific communities using a relatively small number of observation points. Indeed, a savvy attacker can correctly extract communities more than 85 percent of the time by observing only 0.01 percent of the total users. Worse yet, this information could be used to launch a “guilt-by-association” attack, where an attacker need only determine the downloading behavior of one user in the community to convincingly argue that all users in the communities are doing the same.

Given the impact of this threat, the researchers developed a technique that prevents accurate classification by intelligently hiding user-intended downloading behavior in a cloud of random downloading. They showed that this approach causes an eavesdropper’s classification to be wrong the majority of the time, providing users with grounds to claim “plausible deniability” if accused.
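The researchers’ actual countermeasure is more sophisticated, but the core idea—hide real downloads inside random cover traffic—fits in a few lines. Here is a minimal sketch in Python; the catalog and function names are my own invention for illustration, not anything from the paper:

  import random

  def request_schedule(intended, catalog, decoys_per_real=4, seed=None):
      # Mix each intended download with several decoys drawn at random
      # from the whole catalog, then shuffle, so an eavesdropper logging
      # connections sees one undifferentiated stream of requests.
      rng = random.Random(seed)
      requests = list(intended)
      requests += [rng.choice(catalog) for _ in range(decoys_per_real * len(intended))]
      rng.shuffle(requests)
      return requests

  catalog = ["torrent-%d" % i for i in range(10000)]
  print(request_schedule(["torrent-42", "torrent-7"], catalog))

An eavesdropper applying the community-classification attack to a stream like this is mostly clustering noise, which is what gives the user grounds to claim plausible deniability.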

Posted on April 9, 2009 at 7:07 AM • 17 Comments

Police Powers and the UK Government in the 1980s

I found this great paragraph in this article on the future of privacy in the UK:

One of the few home secretaries who dominated his department rather than be cowed by it was Lord Whitelaw in the 1980s. He boasted how after any security lapse, the police would come to beg for new and draconian powers. He laughed and sent them packing, saying only a bunch of softies would erode British liberty to give themselves an easier job. He said they laughed in return and remarked that “it was worth a try”.

Posted on April 8, 2009 at 1:25 PM • 21 Comments

Social Networking Identity Theft Scams

Clever:

I’m going to tell you exactly how someone can trick you into thinking they’re your friend. Now, before you send me hate mail for revealing this deep, dark secret, let me assure you that the scammers, crooks, predators, stalkers and identity thieves are already aware of this trick. It works only because the public is not aware of it. If you’re scamming someone, here’s what you’d do:

Step 1: Request to be “friends” with a dozen strangers on MySpace. Let’s say half of them accept. Collect a list of all their friends.

Step 2: Go to Facebook and search for those six people. Let’s say you find four of them also on Facebook. Request to be their friends on Facebook. All accept because you’re already an established friend.

Step 3: Now compare the MySpace friends against the Facebook friends. Generate a list of people that are on MySpace but are not on Facebook. Grab the photos and profile data on those people from MySpace and use it to create false but convincing profiles on Facebook. Send “friend” requests to your victims on Facebook.

As a bonus, others who are friends of both your victims and your fake self will contact you to be friends and, of course, you’ll accept. In fact, Facebook itself will suggest you as a friend to those people.

(Think about the trust factor here. For these secondary victims, they not only feel they know you, but actually request “friend” status. They sought you out.)

Step 4: Now, you’re in business. You can ask things of these people that only friends dare ask.

Like what? Lend me $500. When are you going out of town? Etc.

The author has no evidence that anyone has actually done this, but certainly someone will do this sometime in the future.

We have seen attacks by people hijacking existing social networking accounts:

Rutberg was the victim of a new, targeted version of a very old scam—the “Nigerian,” or “419,” ploy. The first reports of such scams emerged back in November, part of a new trend in the computer underground—rather than sending out millions of spam messages in the hopes of trapping a tiny fraction of recipients, Web criminals are getting much more personal in their attacks, using social networking sites and other databases to make their story lines much more believable.

In Rutberg’s case, criminals managed to steal his Facebook login password, steal his Facebook identity, and change his page to make it appear he was in trouble. Next, the criminals sent e-mails to dozens of friends, begging them for help.

“Can you just get some money to us,” the imposter implored to one of Rutberg’s friends. “I tried Amex and it’s not going through. … I’ll refund you as soon as am back home. Let me know please.”

Posted on April 8, 2009 at 6:43 AM • 56 Comments

Crypto Puzzle and NSA Problem

From Cryptosmith:

The NSA had an incinerator in their old Arlington Hall facility that was designed to reduce top secret crypto materials and such to ash. Someone discovered that it wasn’t in fact working. Contract disposal trucks had been disposing of this not-quite-sanitized rubbish, and officers tracked down a huge pile in a field at Ft. Myer.

How did they dispose of it? The answer is encrypted in the story’s text!

The story sounds like it’s from the early 1960s. The Arlington Hall incinerator contained a grating that was to keep the documents in the flames until reduced to ash. The grate failed, and “there was no telling how long the condition had persisted before discovery.”

Posted on April 7, 2009 at 1:03 PM • 71 Comments

What to Fear

Nice rundown of the statistics.

The single greatest killer of Americans is the so-called “lifestyle disease.” Somewhere between half a million and a million of us get a short ride in a long hearse every year because of smoking, lousy diets, parking our bodies in front of the TV instead of operating them, and downing yet another six pack and / or tequila popper.

According to the US Department of Health and Human Services, between 310,000 and 580,000 of us will commit suicide by cigarette this year. Another 260,000 to 470,000 will go in the ground due to poor diet and sedentary lifestyle. And some 85,000 of us will drink to our own departure.

After the person in the mirror, the next most dangerous individual we’re ever likely to encounter is one in a white coat. Something like 200,000 of us will experience “cessation of life” due to medical errors—botched procedures, mis-prescribed drugs and “nosocomial infections.” (The really nasty ones you get from treatment in a hospital or healthcare service unit.)

The next most dangerous encounter the average American is likely to have is with a co-worker with an infection. Or a doorknob, stair railing or restaurant utensil touched by someone with the crud. “Microbial Agents” (read bugs like flu and pneumonia) will send 75,000 of us to meet the Reaper this year.

If we live through those social encounters, the next greatest danger is “Toxic Agents”—asbestos in our ceiling, lead in our pipes, the stuff we spray on our lawns or pour down our clogged drains. Annual body count from these handy consumer products is around 55,000.

After that, the most dangerous person in our lives is the one behind the wheel. About 42,000 of us will cash our chips in our rides this year. More than half will do so because we didn’t wear a seat belt. (Lest it wrinkle our suit.)

Some 31,000 of us will commit suicide by intention this year. (As opposed to not fastening our seat belts or smoking, by which we didn’t really mean to kill ourselves.)

About 30,000 of us will die due to our sexual behaviors, through which we’ll contract AIDS or Hepatitis C. Another 20,000 of us will pop off due to illicit drug use.

The next scariest person in our lives is someone we know who’s having a really bad day. Over 16,000 Americans will be murdered this year, most often by a relative or friend.

Posted on April 7, 2009 at 6:14 AM • 69 Comments

Definition of "Weapon of Mass Destruction"

At least, according to U.S. law:

18 U.S.C. 2332a

  • (2) the term "weapon of mass destruction" means—
    • (A) any destructive device as defined in section 921 of this title;
    • (B) any weapon that is designed or intended to cause death or serious bodily injury through the release, dissemination, or impact of toxic or poisonous chemicals, or their precursors;
    • (C) any weapon involving a biological agent, toxin, or vector (as those terms are defined in section 178 of this title); or
    • (D) any weapon that is designed to release radiation or radioactivity at a level dangerous to human life;

18 U.S.C. 921

  • (4) The term "destructive device" means—
    • (A) any explosive, incendiary, or poison gas—
      • (i) bomb,
      • (ii) grenade,
      • (iii) rocket having a propellant charge of more than four ounces,
      • (iv) missile having an explosive or incendiary charge of more than one-quarter ounce,
      • (v) mine, or
      • (vi) device similar to any of the devices described in the preceding clauses;
    • (B) any type of weapon (other than a shotgun or a shotgun shell which the Attorney General finds is generally recognized as particularly suitable for sporting purposes) by whatever name known which will, or which may be readily converted to, expel a projectile by the action of an explosive or other propellant, and which has any barrel with a bore of more than one-half inch in diameter; and
    • (C) any combination of parts either designed or intended for use in converting any device into any destructive device described in subparagraph (A) or (B) and from which a destructive device may be readily assembled.

The term "destructive device" shall not include any device which is neither designed nor redesigned for use as a weapon; any device, although originally designed for use as a weapon, which is redesigned for use as a signaling, pyrotechnic, line throwing, safety, or similar device; surplus ordnance sold, loaned, or given by the Secretary of the Army pursuant to the provisions of section 4684 (2), 4685, or 4686 of title 10; or any other device which the Attorney General finds is not likely to be used as a weapon, is an antique, or is a rifle which the owner intends to use solely for sporting, recreational or cultural purposes.

This is a very broad definition, and one that involves the intention of the weapon’s creator as well as the details of the weapon itself.

In an e-mail, John Mueller commented:

As I understand it, not only is a grenade a weapon of mass destruction, but so is a maliciously-designed child’s rocket even if it doesn’t have a warhead. On the other hand, although a missile-propelled firecracker would be considered a weapon of mass destruction if its designers had wanted to think of it as a weapon, it would not be so considered if it had previously been designed for use as a weapon and then redesigned for pyrotechnic use or if it was surplus and had been sold, loaned, or given to you (under certain circumstances) by the Secretary of the Army.

It also means that we are coming up on the 25th anniversary of the Reagan administration’s long-misnamed WMD-for-Hostages deal with Iran.

Bad news for you, though. You’ll have to amend that line you like using in your presentations about how all WMD in all of history have killed fewer people than OIF (or whatever), since all artillery, and virtually every muzzle-loading military long arm for that matter, legally qualifies as a WMD. It does make the bombardment of Ft. Sumter all the more sinister. To say nothing of the revelation that The Star Spangled Banner is in fact an account of a WMD attack on American shores.

Amusing, to be sure, but there’s something important going on. The U.S. government has passed specific laws about “weapons of mass destruction,” because they’re particularly scary and damaging. But by generalizing the definition of WMDs, those who write the laws greatly broaden their applicability. And I have to wonder how many of those who vote in favor of the laws realize how general they really are, or—if they do know—vote for them anyway because they can’t be seen to be “soft” on WMDs.

It reminds me of those provisions of the USA PATRIOT Act—and other laws—that created police powers to be used for “terrorism and other crimes.”

EDITED TO ADD (4/14): Prosecutions based on this unreasonable definition.

Posted on April 6, 2009 at 7:10 AM • 78 Comments

Identifying People using Anonymous Social Networking Data

Interesting:

Computer scientists Arvind Narayanan and Dr Vitaly Shmatikov, from the University of Texas at Austin, developed the algorithm which turned the anonymous data back into names and addresses.

The data sets are usually stripped of personally identifiable information, such as names, before they are sold to marketing companies or researchers keen to plumb them for useful information.

Before now, it was thought sufficient to remove this data to make sure that the true identities of subjects could not be reconstructed.

The algorithm developed by the pair looks at relationships between all the members of a social network—not just the immediate friends that members of these sites connect to.

Social graphs from Twitter, Flickr and Live Journal were used in the research.

The pair found that one third of those who are on both Flickr and Twitter can be identified from the completely anonymous Twitter graph. This is despite the fact that the overlap of members between the two services is thought to be about 15%.

The researchers suggest that as social network sites become more heavily used, then people will find it increasingly difficult to maintain a veil of anonymity.

More details:

In “De-anonymizing social networks,” Narayanan and Shmatikov take an anonymous graph of the social relationships established through Twitter and find that they can actually identify many Twitter accounts based on an entirely different data source—in this case, Flickr.

One-third of users with accounts on both services could be identified on Twitter based on their Flickr connections, even when the Twitter social graph being used was completely anonymous. The point, say the authors, is that “anonymity is not sufficient for privacy when dealing with social networks,” since their scheme relies only on a social network’s topology to make the identification.

The issue is of more than academic interest, as social networks now routinely release such anonymous social graphs to advertisers and third-party apps, and government and academic researchers ask for such data to conduct research. But the data isn’t nearly as “anonymous” as those releasing it appear to think it is, and it can easily be cross-referenced to other data sets to expose user identities.

It’s not just about Twitter, either. Twitter was a proof of concept, but the idea extends to any sort of social network: phone call records, healthcare records, academic sociological datasets, etc.
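To make the topology-only idea concrete, here is a toy sketch—far simpler than Narayanan and Shmatikov’s actual algorithm, and entirely my own construction: starting from a few “seed” users already identified in both graphs, greedily match each anonymous node to the auxiliary-graph node whose neighbors best overlap the anonymous node’s already-identified neighbors.

  def propagate(anon, aux, seeds):
      # anon, aux: node -> set of neighbors in each graph.
      # seeds: known anon -> aux identifications to start from.
      mapping = dict(seeds)
      for a in anon:
          if a in mapping:
              continue
          # Fingerprint: the aux-graph identities of a's mapped neighbors.
          fp = {mapping[n] for n in anon[a] if n in mapping}
          used = set(mapping.values())
          best, score = None, 0
          for x in aux:
              if x in used:
                  continue
              s = len(fp & aux[x])
              if s > score:
                  best, score = x, s
          if best is not None:
              mapping[a] = best  # the anonymous node has been identified
      return mapping

  anon = {"u1": {"u2", "u3"}, "u2": {"u1"}, "u3": {"u1"}}
  aux = {"alice": {"bob", "carol"}, "bob": {"alice"}, "carol": {"alice"}}
  print(propagate(anon, aux, {"u2": "bob"}))  # maps u1 to alice, u3 to carol

A real attack needs tie-breaking, confidence thresholds, and repeated passes over the graph, but even this greedy version shows why stripping names from a social graph does not anonymize it.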

Here’s the paper.

Posted on April 6, 2009 at 6:51 AM • 41 Comments

Learning About Giant Squid From Sperm Whale Stomachs

Interesting research:

By looking in the stomachs of three sperm whales stranded in the Bay of Biscay, Cherel recovered hundreds of beaks from 19 separate species—17 squids including the giant squid, the seven-arm octopus (the largest in the world) and the bizarre vampire squid. Together, these species represent a decent spread of the full diversity of deep-sea cephalopods.

He analysed the chemical composition of the beaks, and in particular their ratio of carbon isotopes (carbon-13 compared to carbon-12) and their ratio of nitrogen isotopes (nitrogen-15 compared to nitrogen-14). These measurements are a reflection of both what and where the animals ate.

Levels of carbon-13 can tell us how deep an animal lives, whether it swims offshore or inshore, and whether it spends its time in the open ocean, or sticks close to its floor. All of the cephalopods’ carbon-13 levels fell within a narrow range, indicating that all 19 species live in similar and overlapping parts of the ocean.

Posted on April 3, 2009 at 4:28 PM • 10 Comments

Stealing Commodities

Before his arrest, Tom Berge stole lead roof tiles from several buildings in south-east England, including the Honeywood Museum in Carshalton, the Croydon parish church, and the Sutton high school for girls. He then sold those tiles to scrap metal dealers.

As a security expert, I find this story interesting for two reasons. First, amongst increasingly ridiculous attempts to ban, or at least censor, Google Earth, lest it help the terrorists, here is an actual crime that relied on the service: Berge needed Google Earth for reconnaissance.

But more interesting is the discrepancy between the value of the lead tiles to the original owner and to the thief. The Sutton school had to spend £10,000 to buy new lead tiles; the Croydon Church had to repair extensive water damage after the theft. But Berge only received £700 a ton from London scrap metal dealers.

This isn’t an isolated story; the same dynamic is in play with other commodities as well.

There is an epidemic of copper wiring thefts worldwide; copper is being stolen out of telephone and power stations—and off poles in the streets—and thieves have killed themselves because they didn’t understand the dangers of high voltage. Homeowners are returning from holiday to find the copper pipes stolen from their houses. In 2001, scrap copper was worth 70 cents per pound. In April 2008, it was worth $4.

Gasoline siphoning became more common as pump prices rose. And used restaurant grease, formerly either given away or sold for pennies to farmers, is being stolen from restaurant parking lots and turned into biofuels. Newspapers and other recyclables are stolen from curbs, and trees are stolen and resold as Christmas trees.

Iron fences have been stolen from buildings and houses, manhole covers have been stolen from the middle of streets, and aluminum guard rails have been stolen from roadways. Steel is being stolen for scrap, too. In 2004 in Ukraine, thieves stole an entire steel bridge.

These crimes are particularly expensive to society because the replacement cost is much higher than the thief’s profit. A manhole cover is worth $5–$10 as scrap, but it costs $500 to replace, including labor. A thief may take $20 worth of copper from a construction site, but do $10,000 in damage in the process. And even if the thieves don’t get to the copper or steel, the increased threat means more money being spent on security to protect those commodities in the first place.

Security can be viewed as a tax on the honest, and these thefts demonstrate that our taxes are going up. And unlike many taxes, we don’t benefit from their collection. The cost to society of retrofitting manhole covers with locks, or replacing them with less resalable alternatives, is high; but there is no benefit other than reducing theft.

These crimes are a harbinger of the future: evolutionary pressure on our society, if you will. Criminals are often referred to as social parasites; they leech off society but provide no useful benefit. But they are an early warning system of societal changes. Unfettered by laws or moral restrictions, they can be the first to respond to changes that the rest of society will be slower to pick up on. In fact, currently there’s a reprieve. Scrap metal prices are all down from last year’s—copper is currently $1.62 per pound, and lead is half what Berge got—and thefts are down along with them.

We’ve designed much of our infrastructure around the assumptions that commodities are cheap and theft is rare. We don’t protect transmission lines, manhole covers, iron fences, or lead flashing on roofs. But if commodity prices really are headed for new higher stable points, society will eventually react and find alternatives for these items—or find ways to protect them. Criminals were the first to point this out, and will continue to exploit the system until it restabilizes.

A version of this essay originally appeared in The Guardian.

Posted on April 3, 2009 at 5:25 AM • 58 Comments

DNA False Positives

A story about a very expensive series of false positives. The German police spent years and millions of dollars tracking a mysterious killer whose DNA had been found at the scenes of six murders. Finally they realized they were tracking a worker at the factory that assembled the prepackaged swabs used for DNA testing.

This story could be used as justification for a massive DNA database. After all, if that factory worker had his or her DNA in the database, the police would have quickly realized what the problem was.

Posted on April 2, 2009 at 2:54 PM • 50 Comments

Who Should be in Charge of U.S. Cybersecurity?

U.S. government cybersecurity is an insecure mess, and fixing it is going to take considerable attention and resources. Trying to make sense of this, President Barack Obama ordered a 60-day review of government cybersecurity initiatives. Meanwhile, the U.S. House Subcommittee on Emerging Threats, Cybersecurity, Science and Technology is holding hearings on the same topic.

One of the areas of contention is who should be in charge. The FBI, DHS and DoD—specifically, the NSA—all have interests here. Earlier this month, Rod Beckstrom resigned from his position as director of the DHS’s National Cybersecurity Center, warning of a power grab by the NSA.

Putting national cybersecurity in the hands of the NSA is an incredibly bad idea. An entire parade of people, ranging from former FBI director Louis Freeh to Microsoft’s Trusted Computing Group Vice President and former Justice Department computer crime chief Scott Charney, have told Congress the same thing at this month’s hearings.

Cybersecurity isn’t a military problem, or even a government problem—it’s a universal problem. All networks, military, government, civilian and commercial, use the same computers, the same networking hardware, the same Internet protocols and the same software packages. We all are the targets of the same attack tools and tactics. It’s not even that government targets are somehow more important; these days, most of our nation’s critical IT infrastructure is in commercial hands. Government-sponsored Chinese hackers go after both military and civilian targets.

Some have said that the NSA should be in charge because it has specialized knowledge. Earlier this month, Director of National Intelligence Admiral Dennis Blair made this point, saying “There are some wizards out there at Ft. Meade who can do stuff.” That’s probably not true, but if it is, we’d better get them out of Ft. Meade as soon as possible—they’re doing the nation little good where they are now.

Not that government cybersecurity failings require any specialized wizardry to fix. GAO reports indicate that government problems include insufficient access controls, a lack of encryption where necessary, poor network management, failure to install patches, inadequate audit procedures, and incomplete or ineffective information security programs. These aren’t super-secret NSA-level security issues; these are the same managerial problems that every corporate CIO wrestles with.

We’ve all got the same problems, so solutions must be shared. If the government has any clever ideas to solve its cybersecurity problems, certainly a lot of us could benefit from those solutions. If it has an idea for improving network security, it should tell everyone. The best thing the government can do for cybersecurity world-wide is to use its buying power to improve the security of the IT products everyone uses. If it imposes significant security requirements on its IT vendors, those vendors will modify their products to meet those requirements. And those same products, now with improved security, will become available to all of us as the new standard.

Moreover, the NSA’s dual mission of providing security and conducting surveillance means it has an inherent conflict of interest in cybersecurity. Inside the NSA, this is called the “equities issue.” During the Cold War, it was easy; the NSA used its expertise to protect American military information and communications, and eavesdropped on Soviet information and communications. But what happens when both the good guys the NSA wants to protect, and the bad guys the NSA wants to eavesdrop on, use the same systems? They all use Microsoft Windows, Oracle databases, Internet email, and Skype. When the NSA finds a vulnerability in one of those systems, does it alert the manufacturer and fix it—making both the good guys and the bad guys more secure? Or does it keep quiet about the vulnerability and not tell anyone—making it easier to spy on the bad guys but also keeping the good guys insecure? Programs like the NSA’s warrantless wiretapping program have created additional vulnerabilities in our domestic telephone networks.

Testifying before Congress earlier this month, former DHS National Cyber Security division head Amit Yoran said “the intelligence community has always and will always prioritize its own collection efforts over the defensive and protection mission of our government’s and nation’s digital systems.”

Maybe the NSA could convince us that it’s putting cybersecurity first, but its culture of secrecy will mean that any decisions it makes will be suspect. Under current law, extended by the Bush administration’s extravagant invocation of the “state secrets” privilege when charged with statutory and constitutional violations, the NSA’s activities are not subject to any meaningful public oversight. And the NSA’s tradition of military secrecy makes it harder for it to coordinate with other government IT departments, most of which don’t have clearances, let alone coordinate with local law enforcement or the commercial sector.

We need transparent and accountable government processes, using commercial security products. We need government cybersecurity programs that improve security for everyone. The NSA certainly has an advisory and a coordination role in national cybersecurity, and perhaps a more supervisory role in DoD cybersecurity—both offensive and defensive—but it should not be in charge.

A version of this essay appeared on The Wall Street Journal website.

Posted on April 2, 2009 at 6:09 AM • 56 Comments

Thefts at the Museum of Bad Art

I’m not making this up:

The loss of two MOBA works to theft has drawn media attention, and enhanced the museum’s stature. In 1996, the painting Eileen, by R. Angelo Le, vanished from MOBA. Eileen was acquired from the trash by Wilson, and features a rip in the canvas where someone slashed it with a knife even before the museum acquired it, “adding an additional element of drama to an already powerful work,” according to MOBA.

The museum offered a reward of $6.50 for the return of Eileen, and although MOBA donors later increased that reward to $36.73, the work remained unrecovered for many years. The Boston Police listed the crime as “larceny, other,” and Sacco was reported saying she was unable to establish a link between the disappearance of Eileen and a notorious heist at Boston’s famed Isabella Stewart Gardner Museum that occurred in 1990. In 2006—10 years after Eileen was stolen—MOBA was contacted by the purported thief demanding a $5,000 ransom for the painting; no ransom was paid, but it was returned anyway.

Prompted by the theft of Eileen, MOBA staff installed a fake video camera over a sign at their Dedham branch reading: “Warning. This gallery is protected by fake video cameras.” Despite this deterrent, in 2004 Rebecca Harris’ Self Portrait as a Drainpipe was removed from the wall and replaced with a ransom note demanding $10, although the thief neglected to include any contact information. Soon after its disappearance the painting was returned, with a $10 donation. Curator Michael Frank speculates that the thief had difficulty fencing the portrait because “reputable institutions refuse to negotiate with criminals.”

Be sure and notice the camera.

Posted on April 1, 2009 at 12:55 PM • 29 Comments

Fourth Annual Movie-Plot Threat Contest

Let’s face it, the War on Terror is a tired brand. There just isn’t enough action out there to scare people. If this keeps up, people will forget to be scared. And then both the terrorists and the terror-industrial complex lose. We can’t have that.

We’re going to help revive the fear. There’s plenty to be scared about, if only people would just think about it in the right way. In this Fourth Movie-Plot Threat Contest, the object is to find an existing event somewhere in the industrialized world—Third World events are just too easy—and provide a conspiracy theory to explain how the terrorists were really responsible.

The goal here is to be outlandish but plausible, ridiculous but possible, and—if it were only true—terrifying. (An example from The Onion: Fowl Qaeda.) Entries should be formatted as a news story, and are limited to 150 words (I’m going to check this time) because fear needs to be instilled in a population with short attention spans. Submit your entry, by the end of the month, in comments.

The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner.

EDITED TO ADD: The contest has ended; the winner is here.

Posted on April 1, 2009 at 6:37 AM
