April 2012 Archives

JCS Chairman Sows Cyberwar Fears

Army General Martin E. Dempsey, the chairman of the Joint Chiefs of Staff, said:

A cyber attack could stop our society in its tracks.

Gadzooks. A scared populace is much more willing to pour money into the cyberwar arms race.

Posted on April 30, 2012 at 6:52 AM • 38 Comments

Vote for Liars and Outliers

Actionable Books is having a vote to determine which of four books to summarize on their site. If you are willing, please go there and vote for Liars and Outliers. (Voting requires a Facebook ID.) Voting closes Monday at noon EST, although I presume they mean EDT.

Posted on April 27, 2012 at 7:57 PM • 28 Comments

Attack Mitigation

At the RSA Conference this year, I noticed a trend of companies that have products and services designed to help victims recover from attacks. Kelly Jackson Higgins noticed the same thing: "Damage Mitigation as the New Defense."

That new reality, which has been building for several years starting in the military sector, has shifted the focus from trying to stop attackers at the door to instead trying to lessen the impact of an inevitable hack. The aim is to try to detect an attack as early in its life cycle as possible and to quickly put a stop to any damage, such as extricating the attacker from your data server -- or merely stopping him from exfiltrating sensitive information. It's more about containment now, security experts say. Relying solely on perimeter defenses is now passe -- and naively dangerous. "Organizations that are only now coming to the realization that their network perimeters have been compromised are late to the game. Malware ceased being obvious and destructive years ago," says Dave Piscitello, senior security technologist for ICANN. "The criminal application of collected/exfiltrated data is now such an enormous problem that it's impossible to avoid."

Attacks have become more sophisticated, and social engineering is a powerful, nearly sure-thing tool for attackers to schmooze their way into even the most security-conscious companies. "Security traditionally has been a preventative game, trying to prevent things from happening. What's been going on is people realizing you cannot do 100 percent prevention anymore," says Chenxi Wang, vice president and principal analyst for security and risk at Forrester Research. "So we figured out what we're going to do is limit the damage when prevention fails."

Posted on April 27, 2012 at 6:53 AM • 17 Comments

Biometric Passports Make it Harder for Undercover CIA Officers

Last year, I wrote about how social media sites are making it harder than ever for undercover police officers. This story talks about how biometric passports are making it harder than ever for undercover CIA agents.

Busy spy crossroads such as Dubai, Jordan, India and many E.U. points of entry are employing iris scanners to link eyeballs irrevocably to a particular name. Likewise, the increasing use of biometric passports, which are embedded with microchips containing a person's face, sex, fingerprints, date and place of birth, and other personal data, are increasingly replacing the old paper ones. For a clandestine field operative, flying under a false name could be a one-way ticket to a headquarters desk, since they're irrevocably chained to whatever name and passport they used.

"If you go to one of those countries under an alias, you can't go again under another name," explains a career spook, who spoke on condition of anonymity because he remains an agency consultant. "So it's a one-time thing -- one and done. The biometric data on your passport, and maybe your iris, too, has been linked forever to whatever name was on your passport the first time. You can't show up again under a different name with the same data."

Posted on April 26, 2012 at 6:57 AM • 53 Comments

Fear and the Attention Economy

danah boyd is thinking about -- in a draft essay, and as a recording of a presentation -- fear and the attention economy. Basically, she is making the argument that the attention economy magnifies the culture of fear because fear is a good way to get attention, and that this is being made worse by the rise of social media.

A lot of this isn't new. Fear has been used to sell products (I've written about that here) and policy ("Remember the Maine!" "Remember the Alamo!" "Remember 9/11!") since forever. Newspapers have used fear to attract readers since there were readers. Long before there were child predators on the Internet, irrational panics swept society. Shark attacks in the 1970s. Marijuana in the 1950s. boyd relates a story from Glassner's The Culture of Fear about elderly women being mugged in the 1990s.

These fears have largely been driven from the top down: from political leaders, from the news media. What's new today -- and I agree this is very interesting -- is that in addition to these traditional top-down fears, we're also seeing fears come from the bottom up. Social media are allowing all of us to sow fear and, because fear gets attention, are enticing us to do so. Rather than fostering empathy and bringing us all together, social media might be pushing us further apart.

A lot of this is related to my own writing about trust. Fear causes us to mistrust a group we're fearful of, and to more strongly trust the group we're a part of. It's natural, and it can be manipulated. It can be amplified, and it can be dampened. How social media are both enabling and undermining trust is a really important thing for us to understand. As boyd says: "What we design and how we design it matters. And how our systems are used also matters, even if those uses aren't what we intended."

Posted on April 25, 2012 at 6:51 AM • 24 Comments

Amazing Round of "Split or Steal"

In Liars and Outliers, I use the metaphor of the Prisoner's Dilemma to exemplify the conflict between group interest and self-interest. There are a gazillion academic papers on the Prisoner's Dilemma from a good dozen different academic disciplines, but the weirdest dataset on real people playing the game is from a British game show called Golden Balls.

In the final round of the game, called "Split or Steal," two contestants play a one-shot Prisoner's Dilemma -- technically, it's a variant -- choosing to either cooperate (and split a jackpot) or defect (and try to steal it). If one steals and the other splits, the stealer gets the whole jackpot. And, of course, if both contestants steal then both end up with nothing. There are lots of videos from the show on YouTube. (There are even two papers that analyze data from the game.) The videos are interesting to watch, not just to see how players cooperate and defect, but to watch their conversation beforehand and their reactions afterwards. I wrote a few paragraphs about this game for Liars and Outliers, but I ended up deleting them.
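
For concreteness, here's the payoff structure as a small sketch (the jackpot amount is a made-up example). It's only a variant of the classic Prisoner's Dilemma because stealing merely weakly dominates splitting: if the other player steals, you get nothing either way.

    def split_or_steal(choice_a, choice_b, jackpot):
        """Resolve one round: each choice is either 'split' or 'steal'."""
        if choice_a == "split" and choice_b == "split":
            return jackpot / 2, jackpot / 2   # both cooperate: share the pot
        if choice_a == "steal" and choice_b == "split":
            return jackpot, 0                 # A defects against a cooperator
        if choice_a == "split" and choice_b == "steal":
            return 0, jackpot                 # B defects against a cooperator
        return 0, 0                           # both defect: nobody gets anything

    # Made-up jackpot of 13,600 pounds, just to print the four outcomes
    for a in ("split", "steal"):
        for b in ("split", "steal"):
            print(a, b, split_or_steal(a, b, 13600))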

This is the weirdest, most surreal round of "Split or Steal" I have ever seen. The more I think about the psychology of it, the more interesting it is. I'll save my comments for the comments, because I want you to watch it before I say more. Really.

For consistency's sake in the comments, here are their names. The man on the left is Ibrahim, and the man on the right is Nick.

EDITED TO ADD (5/14): Economic analysis of the episode.

Posted on April 24, 2012 at 6:43 AM • 162 Comments

Alan Turing Cryptanalysis Papers

GCHQ, the UK government's communications headquarters, has released two new -- well, 70 years old, but new to us -- cryptanalysis documents by Alan Turing.

The papers, one entitled The Applications of Probability to Crypt, and the other entitled Paper on the Statistics of Repetitions, discuss mathematical approaches to code breaking.

[...]

According to the GCHQ mathematician, who identified himself only as Richard, the papers detailed using "mathematical analysis to try and determine which are the more likely settings so that they can be tried as quickly as possible."

The papers don't seem to be online yet, but here's their National Archives data.

EDITED TO ADD (5/12): The papers are available for download at GBP 3.50 each.

Posted on April 23, 2012 at 6:18 AM • 7 Comments

Liars & Outliers Update

Liars & Outliers has been available for about two months, and is selling well both in hardcover and e-book formats. More importantly, I'm very pleased with the book's reception. The reviews I've gotten have been great, and I read a lot of tweets from people who have enjoyed the book. My goal was to give people new ways to think about trust and society -- and by extension security and society -- and it looks like I've succeeded.

Some samplings:

  • InfoWorld: "The fact that Liars and Outliers prompted me to go back and update my own thinking is truly the measure of Schneier's latest book."
  • ComputerWeekly.com: "I used to think that Bruce Schneier was out of touch with industry CISOs, but now I think that they are out of touch with him."
  • Slashdot: "the reader will find that Schneier is one of the most original thinkers around."
  • CSO: "If you get a chance to read Schneier's book (beg, borrow or steal a copy--although I'm not sure what that says about trust if you steal it), you should do so...trust me!"

I'm really proud of the book. I think it's the best thing I've written. If you haven't read the book yet, please give it a look. It's the synthesis of a lot of my security thinking to date. I really believe you will enjoy it, and that you'll think differently after you read it.

So far, though, my readership has mostly been within the security community: people who already know my writing. What I need help with is getting the word out to people outside the circles of computer security or this blog. Anyone who has read the book, I would really appreciate a review somewhere. On your blog if you have one, on Amazon, anywhere. If you know of a venue that reviews, or otherwise discusses books and authors, I would appreciate an introduction.

Thank you.

Posted on April 20, 2012 at 12:48 PM • 19 Comments

TSA Behavioral Detection Statistics

Interesting data from the U.S. Government Accountability Office:

But congressional auditors have questions about other efficiencies as well, like having 3,000 "behavior detection" officers assigned to question passengers. The officers sidetracked 50,000 passengers in 2010, resulting in the arrests of 300 passengers, the GAO found. None turned out to be terrorists.

Yet in the same year, behavior detection teams apparently let at least 16 individuals allegedly involved in six subsequent terror plots slip through eight different airports. GAO said the individuals moved through protected airports on at least 23 different occasions.

I don't believe the second paragraph. We haven't had six terror plots between 2010 and today. And even if we did, how would the auditors know? But I'm sure the first paragraph is correct: the behavioral detection program is 0% effective at preventing terrorism.
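
The arithmetic behind that last sentence, using the GAO numbers quoted above:

    sidetracked = 50_000   # passengers pulled aside by behavior detection officers in 2010
    arrested = 300         # of those, arrested for anything at all
    terrorists = 0         # of those, actual terrorists

    print(f"Arrest rate:    {arrested / sidetracked:.2%}")    # 0.60%
    print(f"Terrorist rate: {terrorists / sidetracked:.2%}")  # 0.00%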

The rest of the article is pretty depressing. The TSA refuses to back down on any of its security theater measures. At the same time, its budget is being cut and more people are flying. The result: longer waiting times at security.

Posted on April 20, 2012 at 6:19 AM • 32 Comments

Dance Moves As an Identifier

A burglar was identified by his dance moves, captured on security cameras:

"The 16-year-old juvenile suspect is known for his 'swag,' or signature dance move," Heyse said, "and [he] does it in the hallways at school." Presumably, although the report doesn't make it clear, a classmate or teacher saw the video, recognized the distinctive swag and notified authorities.

But is swag admissible to identify a defendant? Assuming it really is unique or distinctive -- and it looks that way from the clip, but I'm no swag expert -- I'd say yes.

Posted on April 19, 2012 at 1:03 PM • 25 Comments

Smart Meter Hacks

Brian Krebs writes about smart meter hacks:

But it appears that some of these meters are smarter than others in their ability to deter hackers and block unauthorized modifications. The FBI warns that insiders and individuals with only a moderate level of computer knowledge are likely able to compromise meters with low-cost tools and software readily available on the Internet.

Sometime in 2009, an electric utility in Puerto Rico asked the FBI to help it investigate widespread incidents of power thefts that it believed was related to its smart meter deployment. In May 2010, the bureau distributed an intelligence alert about its findings to select industry personnel and law enforcement officials.

Citing confidential sources, the FBI said it believes former employees of the meter manufacturer and employees of the utility were altering the meters in exchange for cash and training others to do so. "These individuals are charging $300 to $1,000 to reprogram residential meters, and about $3,000 to reprogram commercial meters," the alert states.

The FBI believes that miscreants hacked into the smart meters using an optical converter device -- such as an infrared light -- connected to a laptop that allows the smart meter to communicate with the computer. After making that connection, the thieves changed the settings for recording power consumption using software that can be downloaded from the Internet.

Posted on April 19, 2012 at 5:52 AM • 33 Comments

Password Security at Linode

Here's something good:

We have implemented sophisticated brute force protection for Linode Manager user accounts that combines a time delay on failed attempts, forced single threading of log in attempts from a given remote address, and automatic tarpitting of requests from attackers.

And this:

Some of you may have noticed a few changes to the Linode Manager over the past few weeks, most notably that accessing your "My Profile" and the "Account -> Users & Permissions" subtab now require password re-authentication.

The re-authentication is meant to protect your contact settings, password changes, and other preferences. The re-auth lasts for about 10 minutes, after which you'll be asked to provide your password again on those sections of the Linode Manager.

It's nice to see some companies implementing these sorts of security measures.
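
For anyone curious what the first set of measures might look like, here's a minimal sketch -- my own illustration, not Linode's code, and the delay and threshold values are made up -- combining a delay on failures, per-address serialization of attempts, and tarpitting of repeat offenders:

    import threading
    import time
    from collections import defaultdict

    FAIL_DELAY = 2.0        # seconds added after each failed attempt (made-up value)
    TARPIT_THRESHOLD = 10   # failures before an address gets tarpitted (made-up value)
    TARPIT_DELAY = 30.0     # seconds a tarpitted address waits on every further attempt

    _locks = defaultdict(threading.Lock)   # one lock per remote address: forced single threading
    _failures = defaultdict(int)           # failed-attempt counter per remote address

    def attempt_login(remote_addr, username, password, check_credentials):
        """Serialize attempts per address, slow down failures, tarpit abusers."""
        with _locks[remote_addr]:                  # only one in-flight attempt per address
            if _failures[remote_addr] >= TARPIT_THRESHOLD:
                time.sleep(TARPIT_DELAY)           # tarpit: make each further try expensive
            if check_credentials(username, password):
                _failures[remote_addr] = 0
                return True
            _failures[remote_addr] += 1
            time.sleep(FAIL_DELAY)                 # time delay on failed attempts
            return False

A real deployment would also expire the counters eventually and share them across web servers, but the basic shape is the same.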

Posted on April 18, 2012 at 1:30 PM • 27 Comments

Stolen Phone Database

This article talks about a database of stolen cell phone IDs that will be used to deny service. While I think this is a good idea, I don't know how much it would deter cell phone theft. As long as there are countries that don't implement blocking based on the IDs in the databases -- and surely there will always be -- there will be a market for stolen cell phones.

Plus, think of the possibilities for a denial-of-service attack. Can I report your cell phone as stolen and have it turned off? Surely no political party will think of doing that to the phones of all the leaders of a rival party the weekend before a major election.
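
A toy sketch of both points -- entirely hypothetical code, not any carrier's actual system -- with the denial-of-service worry sitting in the reporting function:

    # Hypothetical IMEI blocklist; real deployments share these lists between carriers.
    stolen_imeis = set()

    def report_stolen(imei, reporter_verified):
        """Add a handset to the blocklist.

        If verification of the reporter is weak, this is also a denial-of-service
        primitive: report someone's phone as stolen and it gets turned off.
        """
        if reporter_verified:
            stolen_imeis.add(imei)

    def allow_network_attach(imei):
        """Deny service to blocklisted handsets -- but only on networks that check the list."""
        return imei not in stolen_imeis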

Posted on April 18, 2012 at 6:49 AM • 33 Comments

Forever-Day Bugs

That's a nice turn of phrase:

Forever day is a play on "zero day," a phrase used to classify vulnerabilities that come under attack before the responsible manufacturer has issued a patch. Also called iDays, or "infinite days" by some researchers, forever days refer to bugs that never get fixed -- even when they're acknowledged by the company that developed the software. In some cases, rather than issuing a patch that plugs the hole, the software maker simply adds advice to user manuals showing how to work around the threat.

The article is about bugs in industrial control systems, many of which don't have a patching mechanism.

Posted on April 17, 2012 at 1:22 PM • 20 Comments

Outliers in Intelligence Analysis

From the CIA journal Studies in Intelligence: "Capturing the Potential of Outlier Ideas in the Intelligence Community."

In war you will generally find that the enemy has at any time three courses of action open to him. Of those three, he will invariably choose the fourth.

—Helmuth Von Moltke

With that quip, Von Moltke may have launched a spirited debate within his intelligence staff. The modern version of the debate can be said to exist in the cottage industry that has been built on the examination and explanation of intelligence failures, surprises, omissions, and shortcomings. The contributions of notable scholars to the discussion span multiple analytic generations, and each expresses points with equal measures of regret, fervor, and hope. Their diagnoses and their prescriptions are sadly similar, however, suggesting that the lessons of the past are lost on each succeeding generation of analysts and managers or that the processes and culture of intelligence analysis are incapable of evolution. It is with the same regret, fervor, and hope that we offer our own observations on avoiding intelligence omissions and surprise. Our intent is to explore the ingrained bias against outliers, the potential utility of outliers, and strategies for deliberately considering them.

Posted on April 17, 2012 at 6:15 AM • 13 Comments

Hawley Channels His Inner Schneier

Kip Hawley wrote an essay for the Wall Street Journal on airport security. In it, he says so many sensible things that people have been forwarding it to me with comments like "did you ghostwrite this?" and "it looks like you won an argument" and "how did you convince him?"

(Sadly, the essay was published in the Journal, which means it won't be freely available on the Internet forever. Because of that, I'm going to quote from it liberally. And if anyone finds a permanent URL for this, I'll add it here.)

Hawley:

Any effort to rebuild TSA and get airport security right in the U.S. has to start with two basic principles:

First, the TSA's mission is to prevent a catastrophic attack on the transportation system, not to ensure that every single passenger can avoid harm while traveling. Much of the friction in the system today results from rules that are direct responses to how we were attacked on 9/11. But it's simply no longer the case that killing a few people on board a plane could lead to a hijacking. Never again will a terrorist be able to breach the cockpit simply with a box cutter or a knife. The cockpit doors have been reinforced, and passengers, flight crews and air marshals would intervene.

This sounds a lot like me (2005):

Exactly two things have made airline travel safer since 9/11: reinforcement of cockpit doors, and passengers who now know that they may have to fight back.

I'm less into sky marshals than he is.

Hawley:

Second, the TSA's job is to manage risk, not to enforce regulations. Terrorists are adaptive, and we need to be adaptive, too. Regulations are always playing catch-up, because terrorists design their plots around the loopholes.

Me in 2008:

It's this fetish-like focus on tactics that results in the security follies at airports. We ban guns and knives, and terrorists use box-cutters. We take away box-cutters and corkscrews, so they put explosives in their shoes. We screen shoes, so they use liquids. We take away liquids, and they're going to do something else. Or they'll ignore airplanes entirely and attack a school, church, theatre, stadium, shopping mall, airport terminal outside the security area, or any of the other places where people pack together tightly.

These are stupid games, so let's stop playing.

He disses Trusted Traveler programs, where known people are allowed to bypass some security measures:

I had hoped to advance the idea of a Registered Traveler program, but the second that you create a population of travelers who are considered "trusted," that category of fliers moves to the top of al Qaeda's training list, whether they are old, young, white, Asian, military, civilian, male or female. The men who bombed the London Underground in July 2005 would all have been eligible for the Registered Traveler cards we were developing at the time. No realistic amount of prescreening can alleviate this threat when al Qaeda is working to recruit "clean" agents. TSA dropped the idea on my watch -- though new versions of it continue to pop up.

Me in 2004:

What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.

Hawley's essay ends with a list of recommendations for change, and they are mostly good:

What would a better system look like? If politicians gave the TSA some political cover, the agency could institute the following changes before the start of the summer travel season:

1. No more banned items: Aside from obvious weapons capable of fast, multiple killings -- such as guns, toxins and explosive devices -- it is time to end the TSA's use of well-trained security officers as kindergarten teachers to millions of passengers a day. The list of banned items has created an "Easter-egg hunt" mentality at the TSA. Worse, banning certain items gives terrorists a complete list of what not to use in their next attack. Lighters are banned? The next attack will use an electric trigger.

Me in 2009:

Return passenger screening to pre-9/11 levels.

Hawley:

2. Allow all liquids: Simple checkpoint signage, a small software update and some traffic management are all that stand between you and bringing all your liquids on every U.S. flight. Really.

This is referring to a point he makes earlier in his essay:

I was initially against a ban on liquids as well, because I thought that, with proper briefing, TSA officers could stop al Qaeda's new liquid bombs. Unfortunately, al Qaeda's advancing skill with hydrogen-peroxide-based bombs made a total liquid ban necessary for a brief period and a restriction on the amount of liquid one could carry on a plane necessary thereafter.

Existing scanners could allow passengers to carry on any amount of liquid they want, so long as they put it in the gray bins. The scanners have yet to be used in this way because of concern for the large number of false alarms and delays that they could cause. When I left TSA in 2009, the plan was to designate "liquid lanes" where waits might be longer but passengers could board with snow globes, beauty products or booze. That plan is still sitting on someone's desk.

I have been complaining about the liquids ban for years, but Hawley's comment confuses me. He says that hydrogen-peroxide based bombs -- these are the bombs that are too dangerous to bring on board in 4-oz. bottles, but perfectly fine in four 1-oz bottles combined after the checkpoints -- can be detected with existing scanners, not with new scanners using new technology. Does anyone know what he's talking about?

Hawley:

3. Give TSA officers more flexibility and rewards for initiative, and hold them accountable: No security agency on earth has the experience and pattern-recognition skills of TSA officers. We need to leverage that ability. TSA officers should have more discretion to interact with passengers and to work in looser teams throughout airports. And TSA's leaders must be prepared to support initiative even when officers make mistakes. Currently, independence on the ground is more likely to lead to discipline than reward.

This is a great idea, but it's going to cost money. Being a TSA screener is a pretty lousy job. Morale is poor: "In surveys on employee morale and job satisfaction, TSA often performs poorly compared to other government agencies. In 2010 TSA ranked 220 out of 224 government agency subcomponents for employee satisfaction." Pay is low: "The men and women at the front lines of the battle to keep the skies safe are among the lowest paid of all federal employees, and they have one of the highest injury rates." And there is traditionally a high turnover: 20% in 2008. The 2011 decision allowing TSA workers to unionize will help this somewhat, but for it to really work, the rules can't be this limiting: "the paper outlining his decision precludes negotiations on security policies, pay, pensions and compensation, proficiency testing, job qualifications and discipline standards. It also will prohibit screeners from striking or engaging in work slowdowns."

TSA workers who are smart, flexible, and show initiative will cost money, and that'll be difficult when the TSA's budget is being cut.

Hawley:

4. Eliminate baggage fees: Much of the pain at TSA checkpoints these days can be attributed to passengers overstuffing their carry-on luggage to avoid baggage fees. The airlines had their reasons for implementing these fees, but the result has been a checkpoint nightmare. Airlines might increase ticket prices slightly to compensate for the lost revenue, but the main impact would be that checkpoint screening for everybody will be faster and safer.

Another great idea, but I don't see how we can do it without passing a law forbidding airlines to charge those fees. Over the past few years, airlines have drastically increased fees as a revenue source. Sneaking in extra charges allows them to advertise lower prices, and I don't see that changing anytime soon.

5. Randomize security: Predictability is deadly. Banned-item lists, rigid protocols -- if terrorists know what to expect at the airport, they have a greater chance of evading our system.

This would be a disaster. Actually, I'm surprised Hawley even mentions it, given that he wrote this a few paragraphs earlier:

One brilliant bit of streamlining from the consultants: It turned out that if the outline of two footprints was drawn on a mat in the area for using metal-detecting wands, most people stepped on the feet with no prompting and spread their legs in the most efficient stance. Every second counts when you're processing thousands of passengers a day.

Randomization would slow checkpoints down to a crawl, as well as anger passengers. Do I have to take my shoes off or not? Does my computer go in the bin or not? (Even the weird but mostly consistent rules about laptops vs. iPads are annoying people.) Yesterday, liquids were allowed -- today they're banned. But at this airport, the TSA is confiscating anything with more than two ounces of aluminum and questioning people carrying Tom Clancy novels.

I'm not even convinced this would be a hardship for the terrorists. I've gotten really good at avoiding lanes with full-body scanners, and presumably the terrorists will simply assume that all security regulations are in force at all times. I'd like to see a cost-benefit analysis of this sort of thing first.

Hawley's concluding paragraph:

In America, any successful attack -- no matter how small -- is likely to lead to a series of public recriminations and witch hunts. But security is a series of trade-offs. We've made it through the 10 years after 9/11 without another attack, something that was not a given. But no security system can be maintained over the long term without public support and cooperation. If Americans are ready to embrace risk, it is time to strike a new balance.

I agree with this. Sadly, I'm not optimistic for change anytime soon. There's one point Hawley makes, but I don't think he makes it strongly enough. He says:

I wanted to reduce the amount of time that officers spent searching for low-risk objects, but politics intervened at every turn. Lighters were untouchable, having been banned by an act of Congress. And despite the radically reduced risk that knives and box cutters presented in the post-9/11 world, allowing them back on board was considered too emotionally charged for the American public.

This is the fundamental political problem of airport security: it's in nobody's self-interest to take a stand for what might appear to be reduced security. Imagine that the TSA management announces a new rule that box cutters are now okay, and that they respond to critics by explaining that the current risks to airplanes don't warrant prohibiting them. Even if they're right, they're open to attacks from political opponents that they're not taking terrorism seriously enough. And if they're wrong, their careers are over.

It's even worse when it's elected officials who have to make the decision. Which congressman is going to jeopardize his political career by standing up and saying that the cigarette lighter ban is stupid and should be repealed? It's all political risk, and no political gain.

We have the same problem with the no-fly list: Congress mandates that the TSA match passengers against these lists. Rolling this back is politically difficult at the best of times, and impossible in today's climate, even if the TSA decided it wanted to do so.

I am very impressed with Hawley's essay. I do wonder where it came from. This wasn't the same argument Hawley made when I debated him last month on the Economist website. This definitely wasn't the same argument he made when I interviewed him in 2007, when he was still head of the TSA. But it's great to read today.

Hopefully, someone is listening. And hopefully, our social climate will change so that these sorts of changes become politically possible.

EDITED TO ADD (4/16): Slashdot thread.

Posted on April 16, 2012 at 12:29 PM • 48 Comments

How Information Warfare Changes Warfare

Really interesting paper on the moral and ethical implications of cyberwar, and the use of information technology in war (drones, for example):

"Information Warfare: A Philosophical Perspective," by Mariarosaria Taddeo, Philosophy and Technology, 2012.

Abstract: This paper focuses on Information Warfare -- the warfare characterised by the use of information and communication technologies. This is a fast growing phenomenon, which poses a number of issues ranging from the military use of such technologies to its political and ethical implications. The paper presents a conceptual analysis of this phenomenon with the goal of investigating its nature. Such an analysis is deemed to be necessary in order to lay the groundwork for future investigations into this topic, addressing the ethical problems engendered by this kind of warfare. The conceptual analysis is developed in three parts. First, it delineates the relation between Information Warfare and the Information revolution. It then focuses attention on the effects that the diffusion of this phenomenon has on the concepts of war. On the basis of this analysis, a definition of Information Warfare is provided as a phenomenon not necessarily sanguinary and violent, and rather transversal concerning the environment in which it is waged, the way it is waged and the ontological and social status of its agents. The paper concludes by taking into consideration the Just War Theory and the problems arising from its application to the case of Information Warfare.

Here's an interview with the author.

Posted on April 16, 2012 at 5:55 AM • 9 Comments

Me at RSA 2012

This is not a video of my talk at the RSA Conference earlier this year. This is a 16-minute version of that talk -- TED-like -- that the conference filmed the day after for the purpose of putting it on the Internet.

Today's Internet threats are not technical; they're social and political. They aren't criminals, hackers, or terrorists. They're the government and corporate attempts to mold the Internet into what they want it to be, either to bolster their business models or facilitate social control. Right now, these two goals coincide, making it harder than ever to keep the Internet free and open.

Posted on April 13, 2012 at 2:11 PM • 14 Comments

Disguising Tor Traffic as Skype Video Calls

One of the problems with Tor traffic is that it can be detected and blocked. Here's SkypeMorph, a clever system that disguises Tor traffic as Skype video traffic.

To prevent the Tor traffic from being recognized by anyone analyzing the network flow, SkypeMorph uses what's known as traffic shaping to convert Tor packets into User Datagram Protocol packets, as used by Skype. The traffic shaping also mimics the sizes and timings of packets produced by normal Skype video conversations. As a result, outsiders observing the traffic between the end user and the bridge see data that looks identical to a Skype video conversation.

The SkypeMorph developers chose Skype because the software is widely used throughout the world, making it hard for governments to block it without arousing widespread criticism. The developers picked the VoIP client's video functions because its flow of packets more closely resembles Tor traffic. Voice communications, by contrast, show long pauses in transmissions, as one party speaks and the other listens.
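
To make the traffic-shaping idea concrete, here's a toy sketch -- my own, not SkypeMorph's actual code. It re-emits a byte stream as UDP datagrams whose sizes and inter-packet gaps are drawn from samples of a Skype video call. The sample lists below are invented for illustration; the real system derives its distributions from measured Skype traffic, and it also encrypts and frames the data, which this sketch skips.

    import random
    import socket
    import time

    # Invented samples of packet sizes (bytes) and inter-packet gaps (seconds);
    # a real implementation measures these from actual Skype video calls.
    SKYPE_PACKET_SIZES = [120, 180, 240, 300, 460, 620, 900, 1100]
    SKYPE_PACKET_GAPS = [0.010, 0.015, 0.020, 0.030, 0.033, 0.040]

    def shape_and_send(tor_bytes, bridge_addr, sock):
        """Re-emit a byte stream as UDP datagrams that look like Skype video traffic."""
        buf = bytearray(tor_bytes)
        while buf:
            size = random.choice(SKYPE_PACKET_SIZES)      # mimic observed packet sizes
            chunk = bytes(buf[:size])
            del buf[:size]
            chunk = chunk.ljust(size, b"\x00")            # pad the tail so sizes still match
            sock.sendto(chunk, bridge_addr)               # UDP, like Skype's media transport
            time.sleep(random.choice(SKYPE_PACKET_GAPS))  # mimic observed inter-packet timing

    # Usage (address is a placeholder):
    # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # shape_and_send(b"...tor cells...", ("192.0.2.1", 31337), sock)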

Posted on April 13, 2012 at 7:08 AM • 30 Comments

Bomb Threats As a Denial-of-Service Attack

The University of Pittsburgh has been the recipient of 50 bomb threats in the past two months (over 30 during the last week). Each time, the university evacuates the threatened building, searches it top to bottom -- one of the threatened buildings is the 42-story Cathedral of Learning -- finds nothing, and eventually resumes classes. This seems to be nothing more than a very effective denial-of-service attack.

Police have no leads. The threats started out as handwritten messages on bathroom walls, but are now being sent via e-mail and anonymous remailers. (Here is a blog and a Google Docs spreadsheet documenting the individual threats.)

The University is implementing some pretty annoying security theater in response:

To enter secured buildings, we all will need to present a University of Pittsburgh ID card. It is important to understand that book bags, backpacks and packages will not be allowed. There will be single entrances to buildings so there will be longer waiting times to get into the buildings. In addition, non-University of Pittsburgh residents will not be allowed in the residence halls.

I can't see how this will help, but what else can the University do? Their incentives are such that they're stuck overreacting. If they ignore the threats and they're wrong, people will be fired. If they overreact to the threats and they're wrong, they'll be forgiven. There's no incentive to do an actual cost-benefit analysis of the security measures.

For the attacker, though, the cost-benefit payoff is enormous. E-mails are cheap, and the response they induce is very expensive.

If you have any information about the bomb threatener, contact the FBI. There's a $50,000 reward waiting for you. For the university, paying that would be a bargain.

Posted on April 12, 2012 at 1:34 PM • 61 Comments

Brian Snow on Cybersecurity

Interesting video of Brian Snow speaking from last November. (Brian used to be the Technical Director of NSA's Information Assurance Directorate.) About a year and a half ago, I complained that his words were being used to sow cyber-fear. This talk -- about 30 minutes -- is a better reflection of what he really thinks.

Posted on April 12, 2012 at 6:38 AM • 3 Comments

"Raise the Crime Rate"

I read this a couple of months ago, and I'm still not sure what I think about it. It's definitely one of the most thought-provoking essays I've read this year.

According to government statistics, Americans are safer today than at any time in the last forty years. In 1990, there were 2,245 homicides in New York City. In 2010, there were 536, only 123 of which involved people who didn't already know each other. The fear, once common, that walking around city parks late at night could get you mugged or murdered has been relegated to grandmothers; random murders, with few exceptions, simply don't happen anymore.

When it comes to rape, the numbers look even better: from 1980 to 2005, the estimated number of sexual assaults in the US fell by 85 percent. Scholars attribute this stunning collapse to various factors, including advances in gender equality, the abortion of unwanted children, and the spread of internet pornography.

It shouldn't surprise us that the country was more dangerous in 1990, at the height of the crack epidemic, than in 2006, at the height of the real estate bubble. What’s strange is that crime has continued to fall during the recession. On May 23, in what has become an annual ritual, the New York Times celebrated the latest such finding: in 2010, as America's army of unemployed grew to 14 million, violent crime fell for the fourth year in a row, sinking to a level not seen since the early ’70s. This seemed odd. Crime and unemployment were supposed to rise in tandem -- progressives have been harping on this point for centuries. Where had all the criminals gone?

Statistics are notoriously slippery, but the figures that suggest that violence has been disappearing in the United States contain a blind spot so large that to cite them uncritically, as the major papers do, is to collude in an epic con. Uncounted in the official tallies are the hundreds of thousands of crimes that take place in the country’s prison system, a vast and growing residential network whose forsaken tenants increasingly bear the brunt of America’s propensity for anger and violence.

Crime has not fallen in the United States -- it’s been shifted. Just as Wall Street connived with regulators to transfer financial risk from spendthrift banks to careless home buyers, so have federal, state, and local legislatures succeeded in rerouting criminal risk away from urban centers and concentrating it in a proliferating web of hyperhells. The statistics touting the country’s crime-reduction miracle, when juxtaposed with those documenting the quantity of rape and assault that takes place each year within the correctional system, are exposed as not merely a lie, or even a damn lie -- but as the single most shameful lie in American life.

The author argues that the only moral thing for the U.S. to do is to accept a slight rise in the crime rate while vastly reducing the number of people incarcerated.

While I might not agree with his conclusion -- as I said above, I'm not sure whether I do or I don't -- it's very much the sort of trade-off I talk about in Liars and Outliers. And Steven Pinker has an extensive argument about violent crime in modern society that he makes in The Better Angels of Our Nature.

Posted on April 11, 2012 at 1:25 PM • 66 Comments

A Heathrow Airport Story about Trousers

Usually I don't bother posting random stories about dumb or inconsistent airport security measures. But this one is particularly interesting:

"Sir, your trousers."

"Pardon?"

"Sir, please take your trousers off."

A pause.

"No."

"No?"

The security official clearly was not expecting that response.

He begins to look like he doesn't know what to do, bless him.

"You have no power to require me to do that. You also haven't also given any good reason. I am sure any genuine security concerns you have can be addressed in other ways. You do not need to invade my privacy in this manner."

A pause.

"I think you probably need to get your manager, don't you?" I am trying to be helpful.

As I said in my Economist essay, "At this point, we don't trust America's TSA, Britain's Department for Transport, or airport security in general." We don't trust that, when they tell us to do something and claim it's essential for security, they're telling the truth.

Posted on April 11, 2012 at 9:57 AM • 32 Comments

Teenagers and Privacy

Good article debunking the myth that young people don't care about privacy on the Internet.

Most kids are well aware of risks, and make "fairly sophisticated" decisions about privacy settings based on advice and information from their parents, teachers, and friends. They differentiate between people they don't know out in the world (distant strangers) and those they don't know in the community, such as high school students in their hometown (near strangers). Marisa, for example, a 10-year-old interviewed in the study (who technically is not allowed to use Facebook), "enjoys participating in virtual worlds and using instant messenger and Facebook to socialize with her friends"; is keenly aware of the risks -- especially those related to privacy; and she doesn't share highly sensitive personal information on her Facebook profile and actively blocks certain people.

[...]

Rather than fearing the unknown stranger, young adults are more wary of the "known other" -- parents, school teachers, classmates, etc. -- for fear of "the potential for the known others to share embarrassing information about them"; 83 percent of the sample group cited at least one known other they wanted to maintain their privacy from; 71 percent cited at least one known adult. Strikingly, seven out of the 10 participants who reported an incident when their privacy was breached said it was "perpetrated by known others."

Posted on April 10, 2012 at 10:21 AM • 19 Comments

Laptops and the TSA

The New York Times tries to make sense of the TSA's policies on computers. Why do you have to take your tiny laptop out of your bag, but not your iPad? Their conclusion: security theater.

Posted on April 9, 2012 at 7:45 AM • 53 Comments

A Systems Framework for Catastrophic Disaster Response

The National Academies Press has published Crisis Standards of Care: A Systems Framework for Catastrophic Disaster Response.

When a nation or region prepares for public health emergencies such as a pandemic influenza, a large-scale earthquake, or any major disaster scenario in which the health system may be destroyed or stressed to its limits, it is important to describe how standards of care would change due to shortages of critical resources. At the 17th World Congress on Disaster and Emergency Medicine, the IOM Forum on Medical and Public Health Preparedness sponsored a session that focused on the promise of and challenges to integrating crisis standards of care principles into international disaster response plans.

Posted on April 6, 2012 at 11:03 AM • 3 Comments

James Randi on Magicians and the Security Mindset

Okay, so he doesn't use that term. But he explains how a magician's inherent ability to detect deception can be useful to science.

We can't make magicians out of scientists -- we wouldn't want to -- but we can help scientists "think in the groove" -- think like a magician. And we should.

We are not scientists -- with a few rare but important exceptions, like Ray Hyman and Richard Wiseman. But our highly specific expertise comes from knowledge of the ways in which our audiences can be led to quite false conclusions by calculated means -- psychological, physical and especially sensory, visual being rather paramount since it has such a range of variety.

The fact that ours is a concealed art as well as one designed to confound persons of average and advanced thinking skills -- our typical audience -- makes it rather immune to ordinary analysis or solutions.

I've observed that scientists tend to think and perceive logically by using their training and observational skills -- of course -- and are thus often psychologically insulated from the possibility that there might be chicanery at work. This is where magicians can come in. No matter how well educated, or how basically intelligent, trained, or observant a scientist may be, s/he may be a poor judge of a methodology employed in deliberate deception.

Here's my essay on the security mindset.

Posted on April 6, 2012 at 5:35 AM • 28 Comments

JetBlue Captain Clayton Osbon and Resilient Security

This is the most intelligent thing I've read about the JetBlue incident where a pilot had a mental breakdown in the cockpit:

For decades, public safety officials and those who fund them have focused on training and equipment that has a dual-use function for any hazard that may come our way. The post-9/11 focus on terrorism, with all the gizmos that were bought in its name, was a moment of frenzy, and sometimes inconsistent with sound public policy. Over time, there was a return to security measures that were adaptable (dual or multiple use) to any threat and more sustainable in a world that has its fair share of both predictable and utterly bizarre events.

The mental condition of airline pilots is a relevant factor in their annual or bi-annual physicals. (FAA rules differ on the number of physicals required, based on the type of plane being flown.) But believing that the system is flawed because it didn't predict the breakdown of one of 450,000 certified pilots is a myopic reaction.

In many ways, though, this kind of incident was anticipated. The system envisions pilot incapacitation -- physical, mental, or possibly, as in the campy movie "Snakes on a Plane," a slithering foe.

That is, after all, why we have copilots.

The whole essay is worth reading.

Posted on April 5, 2012 at 6:19 AM • 47 Comments

The Battle for Internet Governance

Good article on the current battle for Internet governance:

The War for the Internet was inevitable -- a time bomb built into its creation. The war grows out of tensions that came to a head as the Internet grew to serve populations far beyond those for which it was designed. Originally built to supplement the analog interactions among American soldiers and scientists who knew one another off-line, the Internet was established on a bedrock of trust: trust that people were who they said they were, and trust that information would be handled according to existing social and legal norms. That foundation of trust crumbled as the Internet expanded. The system is now approaching a state of crisis on four main fronts.

The first is sovereignty: by definition, a boundary-less system flouts geography and challenges the power of nation-states. The second is piracy and intellectual property: information wants to be free, as the hoary saying goes, but rights-holders want to be paid and protected. The third is privacy: online anonymity allows for creativity and political dissent, but it also gives cover to disruptive and criminal behavior -- and much of what Internet users believe they do anonymously online can be tracked and tied to people’s real-world identities. The fourth is security: free access to an open Internet makes users vulnerable to various kinds of hacking, including corporate and government espionage, personal surveillance, the hijacking of Web traffic, and remote manipulation of computer-controlled military and industrial processes.

Posted on April 4, 2012 at 12:34 PM • 26 Comments

Lost Smart Phones and Human Nature

Symantec deliberately "lost" a bunch of smart phones with tracking software on them, just to see what would happen:

Some 43 percent of finders clicked on an app labeled "online banking." And 53 percent clicked on a file named "HR salaries." A file named "saved passwords" was opened by 57 percent of finders. Social networking tools and personal e-mail were checked by 60 percent. And a folder labeled "private photos" tempted 72 percent.

Collectively, 89 percent of finders clicked on something they probably shouldn't have.

Meanwhile, only 50 percent of finders offered to return the gadgets, even though the owner's name was listed clearly within the contacts file.

[...]

Some might consider the 50 percent return rate a victory for humanity, but that wasn't really the point of Symantec's project. The firm wanted to see if -- even among what seem to be honest people -- the urge to peek into someone's personal data was just too strong to resist. It was.

EDITED TO ADD (4/13): Original study.

Posted on April 4, 2012 at 6:07 AM • 48 Comments

Law Enforcement Forensics Tools Against Smart Phones

Turns out the password can be easily bypassed:

XRY works by first jailbreaking the handset. According to Micro Systemation, no ‘backdoors’ created by Apple are used; instead it makes use of security flaws in the operating system the same way that regular jailbreakers do.

Once the iPhone has been jailbroken, the tool then goes on to ‘brute-force’ the passcode, trying every possible four digit combination until the correct password has been found. Given the limited number of possible combinations for a four-digit passcode -- 10,000, ranging from 0000 to 9999 -- this doesn’t take long.

Once the handset has been jailbroken and the passcode guessed, all the data on the handset, including call logs, messages, contacts, GPS data and even keystrokes, can be accessed and examined.

One of the morals is to use an eight-digit passcode.
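
The arithmetic behind that advice, at an assumed guessing rate (the rate below is a made-up illustration, not a measured figure for XRY):

    GUESSES_PER_SECOND = 10  # hypothetical automated guessing rate

    for digits in (4, 6, 8):
        combos = 10 ** digits
        hours = combos / GUESSES_PER_SECOND / 3600
        print(f"{digits}-digit passcode: {combos:,} combinations, worst case ~{hours:,.1f} hours")

At that rate a four-digit code falls in minutes; an eight-digit code takes months in the worst case.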

EDITED TO ADD (4/13): This has been debunked. The 1Password blog has a fairly lengthy post discussing the details of the XRY tool.

Posted on April 3, 2012 at 2:01 PM • 26 Comments

Computer Forensics: An Example

Paul Ceglia's lawsuit against Facebook is fascinating, but that's not the point of this blog post. As part of the case, there are allegations that documents and e-mails have been electronically forged. I found this story about the forensics done on Ceglia's computer to be interesting.

Posted on April 3, 2012 at 6:53 AM • 25 Comments

Buying Exploits on the Grey Market

This article talks about legitimate companies buying zero-day exploits, including the fact that "an undisclosed U.S. government contractor recently paid $250,000 for an iOS exploit."

The price goes up if the hack is exclusive, works on the latest version of the software, and is unknown to the developer of that particular software. Also, more popular software results in a higher payout. Sometimes, the money is paid in instalments, which keep coming as long as the hack does not get patched by the original software developer.

Yes, I know that vendors will pay bounties for exploits. And I'm sure there are a lot of government agencies around the world who want zero-day exploits for both espionage and cyber-weapons. But I just don't see that much value in buying an exploit from random hackers around the world.

These things only have value until they're patched, and a known exploit -- even if it is just known by the seller -- is much more likely to get patched. I can much more easily see a criminal organization deciding that the exploit has significant value before that happens. Government agencies are playing a much longer game.

And I would expect that most governments have their own hackers who are finding their own exploits. One, cheaper. And two, only known within that government.

Here's another story, with a price list for different exploits. But I still don't trust this story.

Posted on April 2, 2012 at 7:56 AM • 38 Comments
