Blog: January 2013 Archives

The Eavesdropping System in Your Computer

Dan Farmer has an interesting paper (long version here; short version here) discussing the Baseboard Management Controller on your computer's motherboard:

The BMC is an embedded computer found on most server motherboards made in the last 10 or 15 years. Often running Linux, the BMC's CPU, memory, storage, and network run independently. It runs Intel's IPMI out-of-band systems management protocol alongside network services (web, telnet, VNC, SMTP, etc.) to help manage, debug, monitor, reboot, and roll out servers, virtual systems, and supercomputers. Vendors frequently add features and rebrand OEM'd BMCs: Dell has iDRAC, Hewlett Packard iLO, IBM calls theirs IMM2, etc. It is popular because it helps raise efficiency and lower costs associated with availability, personnel, scaling, power, cooling, and more.

To do its magic, the BMC has near complete control over the server's hardware: the IPMI specification says that it can have "full access to system memory and I/O space." Designed to operate when the bits hit the fan, it continues to run even if the server is powered down. Activity on the BMC is essentially invisible unless you have a good hardware hacker on your side or have cracked root on the embedded operating system.

What's the problem?

Servers are usually managed in large groups, which may have thousands or even hundreds of thousands of computers. Each group typically has one or two reusable and closely guarded passwords; if you know the password, you control all the servers in the group. Passwords can remain unchanged for a long time -- often years -- not only because they are very difficult to manage or modify, but also because of the near impossibility of auditing or verifying change. And due to the spec, the password is stored in clear text on the BMC.

IPMI network traffic is usually restricted to a VLAN or management network, but if an attacker has management access to a server she'll be able to communicate to its BMC and possibly unprotected private networks. If the BMC itself is compromised, it is possible to recover the IPMI password as well. In that bleak event all bets and gloves are off.

BMC vulnerabilities are difficult to manage since they are so low level and vendor pervasive. At times, problems originate in the OEM firmware, not the server vendor, adding uncertainty as to what is actually at risk. You can't apply fixes yourself since BMCs will only run signed and proprietary flash images. I found an undocumented way of gaining root shell access on a major vendor's BMC, and another that gives an out-of-the-box root shell via SSH. Who knows what's on other BMCs, and who is putting what where? I'll note that most BMCs are designed or manufactured in China.

Basically, it's a perfect spying platform. You can't control it. You can't patch it. It can completely control your computer's hardware and software. And its purpose is remote monitoring.

At the very least, we need to be able to look into these devices and see what's running on them.

I'm amazed we haven't seen any talk about this before now.

EDITED TO ADD (1/31): Correction -- these chips are on server motherboards, not on PCs or other consumer devices.

Posted on January 31, 2013 at 1:28 PM • 52 Comments

Power and the Internet

All disruptive technologies upset traditional power balances, and the Internet is no exception. The standard story is that it empowers the powerless, but that's only half the story. The Internet empowers everyone. Powerful institutions might be slow to make use of that new power, but since they are powerful, they can use it more effectively. Governments and corporations have woken up to the fact that not only can they use the Internet, they can control it for their interests. Unless we start deliberately debating the future we want to live in, and the role of information technology in enabling that world, we will end up with an Internet that benefits existing power structures and not society in general.

We've all lived through the Internet's disruptive history. Entire industries, like travel agencies and video rental stores, disappeared. Traditional publishing -- books, newspapers, encyclopedias, music -- lost power, while Amazon and others gained. Advertising-based companies like Google and Facebook gained a lot of power. Microsoft lost power (as hard as that is to believe).

The Internet changed political power as well. Some governments lost power as citizens organized online. Political movements became easier, helping to topple governments. The Obama campaign made revolutionary use of the Internet, both in 2008 and 2012.

And the Internet changed social power, as we collected hundreds of "friends" on Facebook, tweeted our way to fame, and found communities for the most obscure hobbies and interests. And some crimes became easier: impersonation fraud became identity theft, copyright violation became file sharing, and accessing censored materials -- political, sexual, cultural -- became trivially easy.

Now powerful interests are looking to deliberately steer this influence to their advantage. Some corporations are creating Internet environments that maximize their profitability: Facebook and Google, among many others. Some industries are lobbying for laws that make their particular business models more profitable: telecom carriers want to be able to discriminate between different types of Internet traffic, entertainment companies want to crack down on file sharing, advertisers want unfettered access to data about our habits and preferences.

On the government side, more countries censor the Internet -- and do so more effectively -- than ever before. Police forces around the world are using Internet data for surveillance, with less judicial oversight and sometimes in advance of any crime. Militaries are fomenting a cyberwar arms race. Internet surveillance -- both governmental and commercial -- is on the rise, not just in totalitarian states but in Western democracies as well. Both companies and governments rely more on propaganda to create false impressions of public opinion.

In 1996, cyber-libertarian John Perry Barlow issued his "Declaration of the Independence of Cyberspace." He told governments: "You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear." It was a utopian ideal, and many of us believed him. We believed that the Internet generation, those quick to embrace the social changes this new technology brought, would swiftly outmaneuver the more ponderous institutions of the previous era.

Reality turned out to be much more complicated. What we forgot is that technology magnifies power in both directions. When the powerless found the Internet, suddenly they had power. But while the unorganized and nimble were the first to make use of the new technologies, eventually the powerful behemoths woke up to the potential -- and they have more power to magnify. And not only does the Internet change power balances, but the powerful can also change the Internet. Does anyone else remember how incompetent the FBI was at investigating Internet crimes in the early 1990s? Or how Internet users ran rings around China's censors and Middle Eastern secret police? Or how digital cash was going to make government currencies obsolete, and Internet organizing was going to make political parties obsolete? Now all that feels like ancient history.

It's not all one-sided. The masses can occasionally organize around a specific issue -- SOPA/PIPA, the Arab Spring, and so on -- and can block some actions by the powerful. But it doesn't last. The unorganized go back to being unorganized, and powerful interests take back the reins.

Debates over the future of the Internet are morally and politically complex. How do we balance personal privacy against what law enforcement needs to prevent copyright violations? Or child pornography? Is it acceptable to be judged by invisible computer algorithms when being served search results? When being served news articles? When being selected for additional scrutiny by airport security? Do we have a right to correct data about us? To delete it? Do we want computer systems that forget things after some number of years? These are complicated issues that require meaningful debate, international cooperation, and iterative solutions. Does anyone believe we're up to the task?

We're not, and that's the worry. Because if we're not trying to understand how to shape the Internet so that its good effects outweigh the bad, powerful interests will do all the shaping. The Internet's design isn't fixed by natural laws. Its history is a fortuitous accident: an initial lack of commercial interests, governmental benign neglect, military requirements for survivability and resilience, and the natural inclination of computer engineers to build open systems that work simply and easily. This mix of forces that created yesterday's Internet will not be trusted to create tomorrow's. Battles over the future of the Internet are going on right now: in legislatures around the world, in international organizations like the International Telecommunication Union and the World Trade Organization, and in Internet standards bodies. The Internet is what we make it, and is constantly being recreated by organizations, companies, and countries with specific interests and agendas. Either we fight for a seat at the table, or the future of the Internet becomes something that is done to us.

This essay appeared as a response to Edge's annual question, "What *Should* We Be Worried About?"

Posted on January 31, 2013 at 7:09 AM • 39 Comments

"People, Process, and Technology"

Back in 1999, when I formed Counterpane Internet Security, Inc., I popularized the notion that security was a combination of people, process, and technology. It was an important notion at the time: security was largely technology-only, and I was trying to push the idea that people and process needed to be incorporated into an overall security system.

This blog post argues that the IT security world has become so complicated that we need less in the way of people and process, and more technology:

Such a landscape can no longer be policed by humans and procedures. Technology is needed to leverage security controls. The Golden Triangle of people, process and technology needs to be rebalanced in favour of automation. And I'm speaking as a pioneer and highly experienced expert in process and human factors.


Today I'd ditch the Triangle. It's become an argument against excessive focus on technology. Yet that's what we now need. There's nowhere near enough exploitation of technology in our security controls. We rely far too much on policy and people, neither of which are reliable, especially when dealing with fast-changing, large scale infrastructures.

He's right. People and process work on human timescales, not computer timescales. They're important at the strategic level, and sometimes at the tactical level -- but the more we can capture and automate that, the better we're going to do.

The problem, though, is that sometimes human intelligence is required to make sense of an attack and to formulate an appropriate response. And as long as that's the case, there will be instances where an automated attack has the advantage.

Posted on January 30, 2013 at 12:20 PM • 20 Comments

Who Does Skype Let Spy?

Lately I've been thinking a lot about power and the Internet, and what I call the feudal model of IT security that is becoming more and more pervasive. Basically, between cloud services and locked-down end-user devices, we have less control over and visibility into our security -- and no choice but to trust those in power to keep us safe.

The effects of this model were in the news last week, when privacy activists pleaded with Skype to tell them who is spying on Skype calls.

"Many of its users rely on Skype for secure communications -- whether they are activists operating in countries governed by authoritarian regimes, journalists communicating with sensitive sources, or users who wish to talk privately in confidence with business associates, family, or friends," the letter explains.

Among the group's concerns is that although Skype was founded in Europe, its acquisition by a US-based company -- Microsoft -- may mean it is now subject to different eavesdropping and data-disclosure requirements than it was before.

The group claims that both Microsoft and Skype have refused to answer questions about what kinds of user data the service retains, whether it discloses such data to governments, and whether Skype conversations can be intercepted.

The letter calls upon Microsoft to publish a regular Transparency Report outlining what kind of data Skype collects, what third parties might be able to intercept or retain, and how Skype interprets its responsibilities under the laws that pertain to it. In addition it asks for quantitative data about when, why, and how Skype shares data with third parties, including governments.

That's security in today's world. We have no choice but to trust Microsoft. Microsoft has reasons to be trustworthy, but they also have reasons to betray our trust in favor of other interests. And all we can do is ask them nicely to tell us first.

Posted on January 30, 2013 at 6:51 AM • 48 Comments

Complexity and Security

I have written about complexity and security for over a decade now (for example, this from 1999). Here are the results of a survey that confirm this:

Results showed that more than half of the survey respondents from mid-sized (identified as 50-2500 employees) and enterprise organizations (identified as 2500+ employees) stated that complex policies ultimately led to a security breach, system outage or both.

The usual caveats for this sort of thing apply. The survey covered only 127 people -- I can't find data on what percentage of those contacted replied. The numbers are skewed because only those who chose to reply were counted. And the results are based on self-reported answers, with no way to verify them.
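To put that sample size in perspective, here's a back-of-the-envelope sketch -- my own illustration, not anything from the survey -- of the sampling error on a proportion estimated from 127 respondents, using the standard normal approximation:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# "More than half" of 127 respondents: take p_hat = 0.5 for illustration.
low, high = proportion_ci(0.5, 127)
print(f"95% CI: {low:.2f} to {high:.2f}")
```

With n = 127, even a clean 50% estimate carries error bars of roughly plus or minus nine percentage points -- and that's before the self-selection and self-reporting problems.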

But still.

Posted on January 29, 2013 at 6:32 AM • 35 Comments

Dangerous Security Theater: Scrambling Fighter Jets

This story exemplifies everything that's wrong with our see-something-say-something war on terror: a perfectly innocent person on an airplane, a random person identifying him as a terrorist threat, and a complete overreaction on the part of the authorities.

Typical overreaction, but in this case -- as in several others over the past decade -- F-15 fighter jets were scrambled to escort the airplane to the ground. Very expensive, and potentially catastrophic.

This blog post makes the point well:

What bothers me about this is not so much that they interrogated the wrong person -- that happens all the time, not that it's okay -- but rather the fighter jets. I think most people probably understand this, but just to make it totally clear, if they send up fighters that is not because they are bringing the first-class passengers some more of those little hot towels. It is so they can be ready to SHOOT YOU DOWN if necessary. Now, I realize the odds that would ever happen, even accidentally, are very tiny. I still question whether it's wise to put fighters next to a passenger plane at the drop of a hat, or in this case because of an anonymous tip about a sleeping passenger.


According to the Seattle Times report, though, interceptions like this are apparently much more common than I thought. Citing a NORAD spokesman, it says this has happened "thousands of times" since 9/11. In this press release NORAD says there have been "over fifteen hundred" since 9/11, most apparently involving planes that violated "temporary flight restriction" areas. Either way, while this is a small percentage of all flights, of course, it still seems like one hell of a lot of interceptions -- especially since in every single case, it has been unnecessary, and is (as NORAD admits) "at great expense to the taxpayer."

Posted on January 28, 2013 at 1:25 PM • 61 Comments

Violence as a Contagious Disease

This is fascinating:

Intuitively we understand that people surrounded by violence are more likely to be violent themselves. This isn't just some nebulous phenomenon, argue Slutkin and his colleagues, but a dynamic that can be rigorously quantified and understood.

According to their theory, exposure to violence is conceptually similar to exposure to, say, cholera or tuberculosis. Acts of violence are the germs. Instead of wracking intestines or lungs, they lodge in the brain. When people, in particular children and young adults whose brains are extremely plastic, repeatedly experience or witness violence, their neurological function is altered.

Cognitive pathways involving anger are more easily activated. Victimized people also interpret reality through perceptual filters in which violence seems normal and threats are enhanced. People in this state of mind are more likely to behave violently. Instead of through a cough, the disease spreads through fights, rapes, killings, suicides, perhaps even media, the researchers argue.


Not everybody becomes infected, of course. As with an infectious disease, circumstance is key. Social circumstance, especially individual or community isolation -- people who feel there's no way out for them, or disconnected from social norms -- is what ultimately allows violence to spread readily, just as water sources fouled by sewage exacerbate cholera outbreaks.

At a macroscopic population level, these interactions produce geographic patterns of violence that sometimes resemble maps of disease epidemics. There are clusters, hotspots, epicenters. Isolated acts of violence are followed by others, which are followed by still more, and so on.

There are telltale incidence patterns formed as an initial wave of cases recedes, then is followed by successive waves that result from infected individuals reaching new, susceptible populations. "The epidemiology of this is very clear when you look at the math," said Slutkin. "The density maps of shootings in Kansas City or New York or Detroit look like cholera case maps from Bangladesh."
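The epidemic math being described can be made concrete with a toy SIR-style (susceptible/infected/recovered) simulation. This is only my illustration of the general framework -- not the researchers' actual model, and the parameters are invented:

```python
# Toy SIR-style contagion model over population fractions. An illustration of
# the epidemiological framing only -- not Slutkin's model; parameters invented.
def simulate(beta, gamma, s0, i0, steps):
    s, i, r = s0, i0, 0.0
    infected_history = []
    for _ in range(steps):
        new_infections = beta * s * i   # "transmission": exposure to violence
        recoveries = gamma * i          # leaving the infectious state
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        infected_history.append(i)
    return infected_history

# High transmission (isolated, norm-disconnected community) vs. low.
epidemic = simulate(beta=0.5, gamma=0.1, s0=0.99, i0=0.01, steps=200)
contained = simulate(beta=0.05, gamma=0.1, s0=0.99, i0=0.01, steps=200)
print(max(epidemic), max(contained))
```

Above a critical transmission rate (beta/gamma greater than 1) the infection peaks and sweeps through much of the population in waves; below it, the same initial level of violence simply dies out.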

I am reminded of this paper on the effects of bystanders on escalating and de-escalating potentially violent situations.

Posted on January 28, 2013 at 6:07 AM • 40 Comments

Friday Squid Blogging: USB Squirming Tentacle

Just the thing. (Note that this is different than the squid USB drive I blogged about.)

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on January 25, 2013 at 4:15 PM • 53 Comments

Shaming as Punishment for Repeated Drunk Driving

Janesville, Wisconsin, has published information about repeated drunk driving offenders since 2010. The idea is that the public shame will reduce future incidents.

Posted on January 25, 2013 at 7:03 AM • 32 Comments

Identifying People from their Writing Style

It's called stylometry, and it's based on the analysis of things like word choice, sentence structure, syntax and punctuation. In one experiment, researchers were able to identify 80% of users with a 5,000-word writing sample.
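The flavor of the technique is easy to sketch. The toy example below is my own, far simpler than the researchers' actual classifiers (which use hundreds of features), and the author names and texts are invented; it attributes a sample by comparing function-word frequency profiles with cosine similarity:

```python
from collections import Counter
import math

# A toy stylometric profile: relative frequencies of a few function words.
# Real systems also use sentence structure, syntax, and punctuation features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "but"]

def profile(text):
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def attribute(sample, candidates):
    """Return the candidate whose known writing is stylistically closest."""
    p = profile(sample)
    return max(candidates, key=lambda name: cosine(p, profile(candidates[name])))

candidates = {
    "alice": "the cat sat on the mat and the dog lay in the sun of the yard",
    "bob": "it is odd but it is true that it is raining but it is warm",
}
sample = "the bird flew over the fence and the cat watched in the shade"
print(attribute(sample, candidates))  # alice: the profiles share the/and/in
```

Scaled up to thousands of features and a 5,000-word sample, this kind of profile is distinctive enough to pick one author out of dozens most of the time -- which is also why tools to deliberately flatten your style exist.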

Download tools here, including one to anonymize your writing style.

Posted on January 24, 2013 at 1:33 PM • 30 Comments

Identifying People from their DNA


The genetic data posted online seemed perfectly anonymous -- strings of billions of DNA letters from more than 1,000 people. But all it took was some clever sleuthing on the Web for a genetics researcher to identify five people he randomly selected from the study group. Not only that, he found their entire families, even though the relatives had no part in the study -- identifying nearly 50 people.


Other reports have identified people whose genetic data was online, but none had done so using such limited information: the long strings of DNA letters, an age and, because the study focused on only American subjects, a state.

Posted on January 24, 2013 at 6:48 AM • 31 Comments

The Security of the Mega File-Sharing Service

Ever since the launch of Kim Dotcom's file-sharing service, I have been asked about the unorthodox encryption and security system.

I have not reviewed it, and don't have an opinion. All I know is what I read: this, this, this, this, and this.

Please add other links in the comments.

EDITED TO ADD (1/24): Also this.

Posted on January 23, 2013 at 12:55 PM • 25 Comments

Commenting on Aaron Swartz's Death

There has been an enormous amount written about the suicide of Aaron Swartz. This is primarily a collection of links, starting with those that use his death to talk about the broader issues at play: Orin Kerr, Larry Lessig, Jennifer Granick, Glenn Greenwald, Henry Farrell, danah boyd, Cory Doctorow, James Fallows, Brewster Kahle, Carl Malamud, and Mark Bernstein. Here are obituaries from the New York Times and the Economist. Here are articles and essays from The Huffington Post, Larry Lessig, TechDirt, CNet, and Forbes, mostly about the prosecutor's statement after the death and the problems with plea bargaining in general. Representative Zoe Lofgren is introducing a bill to prevent this from happening again.

I don't have anything to add, but enough people have sent me their thoughts via e-mail that I thought it would be good to have a thread on this blog for conversation.

EDITED TO ADD (1/23): Groklaw's legal analysis. Secret Service involvement.

EDITED TO ADD (1/29): Another.

EDITED TO ADD (2/28): The DoJ has admitted that Aaron Swartz's prosecution was political.

EDITED TO ADD (3/4): This profile of Aaron Swartz is very good.

Posted on January 23, 2013 at 6:14 AM • 27 Comments

Google's Authentication Research

Google is working on non-password authentication techniques.

But for Google's password-liberation plan to really take off, they’re going to need other websites to play ball. "Others have tried similar approaches but achieved little success in the consumer world," they write. "Although we recognize that our initiative will likewise remain speculative until we've proven large scale acceptance, we’re eager to test it with other websites."

So they've developed an (as yet unnamed) protocol for device-based authentication that they say is independent of Google, requires no special software to work -- aside from a web browser that supports the login standard -- and which prevents web sites from using this technology to track users.

The great thing about Google’s approach is that it circumvents the really common attack that even Google’s existing mobile-phone authentication system can't prevent: phishing.
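To see why device-held, origin-bound credentials resist phishing, here's a toy sketch. This is not Google's actual protocol -- it isn't published in the article, and a real scheme would use asymmetric keys rather than HMAC -- but it shows the two properties claimed: each site sees an unrelated credential (no cross-site tracking), and a look-alike phishing domain can't obtain a response the real site will accept:

```python
import hashlib
import hmac
import os

class Device:
    """Toy origin-bound authenticator. The master key never leaves the device;
    each web origin gets an unrelated derived key."""

    def __init__(self):
        self.master_key = os.urandom(32)

    def site_key(self, origin):
        # Unrelated per-origin keys: two sites can't correlate the same user.
        return hmac.new(self.master_key, origin.encode(), hashlib.sha256).digest()

    def respond(self, origin, challenge):
        # The browser supplies the *real* origin, not whatever a page claims.
        return hmac.new(self.site_key(origin), challenge, hashlib.sha256).digest()

device = Device()
enrolled_key = device.site_key("https://bank.example")  # shared at enrollment

challenge = os.urandom(16)
good = device.respond("https://bank.example", challenge)
expected = hmac.new(enrolled_key, challenge, hashlib.sha256).digest()

# A look-alike phishing domain gets a response the real site will reject:
phished = device.respond("https://bank-example.evil", challenge)

print(hmac.compare_digest(good, expected), good == phished)  # True False
```

Unlike a password or a typed-in SMS code, there is nothing here a user can be tricked into handing to the wrong site: the credential is bound to the origin by the browser, not by the user's judgment.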

They have enough industry muscle that they might pull it off.

Another article.

Posted on January 22, 2013 at 12:04 PM • 52 Comments

Thinking About Obscurity

This essay is worth reading:

Obscurity is the idea that when information is hard to obtain or understand, it is, to some degree, safe. Safety, here, doesn't mean inaccessible. Competent and determined data hunters armed with the right tools can always find a way to get it. Less committed folks, however, experience great effort as a deterrent.

Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion's share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.


Many contemporary privacy disputes are probably better classified as concern over losing obscurity. Consider the recent debate over whether a newspaper violated the privacy rights of gun owners by publishing a map comprised of information gleaned from public records. The situation left many scratching their heads. After all, how can public records be considered private? What obscurity draws our attention to, is that while the records were accessible to any member of the public prior to the rise of big data, more effort was required to obtain, aggregate, and publish them. In that prior context, technological constraints implicitly protected privacy interests. Now, in an attempt to keep pace with diminishing structural barriers, New York is considering excepting gun owners from "public records laws that normally allow newspapers or private citizens access to certain information the government collects."

The essay is about Facebook's new Graph search tool, and how its harm is best thought of as reducing obscurity.

Posted on January 22, 2013 at 5:23 AM • 32 Comments

TSA Removing Rapiscan Full-Body Scanners from U.S. Airports

This is big news:

The U.S. Transportation Security Administration will remove airport body scanners that privacy advocates likened to strip searches after OSI Systems Inc. (OSIS) couldn't write software to make passenger images less revealing.

This doesn't mean the end of full-body scanning. There are two categories of these devices: backscatter X-ray and millimeter wave.

The government said Friday it is abandoning its deployment of so-called backscatter technology machines produced by Rapiscan because the company could not meet deadlines to switch to generic imaging with so-called Automated Target Recognition software, the TSA said. Instead, the TSA will continue to use and deploy more millimeter wave technology scanners produced by L-3 Communications, which has adopted the generic-outline standard.


Rapiscan had a contract to produce 500 machines for the TSA at a cost of about $180,000 each. The company could be fined and barred from participating in government contracts, or employees could face prison terms if it is found to have defrauded the government. In all, the 250 Rapiscan machines already deployed are to be phased out of airports nationwide and will be replaced with machines produced by L-3 Communications.

And there are still backscatter X-ray machines being deployed, but I don't think there are very many of them.

TSA has contracted with L-3, Smiths Group Plc (SMIN) and American Science & Engineering Inc. (ASEI) for new body-image scanners, all of which must have privacy software. L-3 and Smiths used millimeter-wave technology. American Science uses backscatter.

This is a big win for privacy. But, more importantly, it's a big win because the TSA is actually taking privacy seriously. Yes, Congress ordered them to do so. But they didn't defy Congress; they did it. The machines will be gone by June.


Posted on January 21, 2013 at 6:38 AM • 35 Comments

Friday Squid Blogging: The Search for the Colossal Squid

Now that videographers have bagged a giant squid, the search turns to the colossal squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on January 18, 2013 at 3:31 PM • 31 Comments

Man-in-the-Middle Attacks Against Browser Encryption

Last week, a story broke about how Nokia mounts man-in-the-middle attacks against secure browser sessions.

The Finnish phone giant has since admitted that it decrypts secure data that passes through HTTPS connections -- including social networking accounts, online banking, email and other secure sessions -- in order to compress the data and speed up the loading of Web pages.

The basic problem is that https sessions are opaque as they travel through the network. That's the point -- it's more secure -- but it also means that the network can't do anything about them. They can't be compressed, cached, or otherwise optimized. They can't be rendered remotely. They can't be inspected for security vulnerabilities. All the network can do is transmit the data back and forth.
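The compression point is easy to demonstrate: ciphertext is computationally indistinguishable from random bytes, and random bytes don't compress. A quick sketch, using random data as a stand-in for an encrypted HTTPS payload:

```python
import os
import zlib

# A repetitive HTML page compresses dramatically...
plaintext = b"<html>" + b"hello world " * 1000 + b"</html>"
compressed_plain = zlib.compress(plaintext)

# ...but ciphertext looks like random bytes, which don't compress at all.
# (os.urandom stands in for an encrypted HTTPS payload here.)
ciphertext_stand_in = os.urandom(len(plaintext))
compressed_cipher = zlib.compress(ciphertext_stand_in)

print(len(plaintext), len(compressed_plain))              # huge reduction
print(len(ciphertext_stand_in), len(compressed_cipher))   # slight expansion
```

That's why any intermediary that wants to compress, cache, or inspect the traffic has to terminate the TLS session itself -- which is exactly the man-in-the-middle position Nokia put itself in.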

But in our cloud-centric world, it makes more and more sense to process web data in the cloud. Nokia isn't alone here. Opera's mobile browser performs all sorts of optimizations on web pages before they are sent over the air to your smart phone. Amazon does the same thing with browsing on the Kindle. MobileScope, a really good smart-phone security application, performs the same sort of man-in-the-middle attack against https sessions to detect and prevent data leakage. I think Umbrella does as well. Nokia's mistake was that they did it without telling anyone. With appropriate consent, it's perfectly reasonable for most people and organizations to give both performance and security companies the ability to decrypt and re-encrypt https sessions -- at least most of the time.

This is an area where security concerns are butting up against other issues. Nokia's answer, which is basically "trust us, we're not looking at your data," is going to increasingly be the norm.

Posted on January 17, 2013 at 9:50 AM • 43 Comments

Cheating at Chess

There's a fascinating story about a probable tournament chess cheat. No one knows how he does it; there are only the facts that 1) historically he's not nearly as good as his recent record, and 2) his moves correlate almost perfectly with those of one of the best computer chess programs. The general question is how valid statistical evidence is when there is no other corroborating evidence.
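The statistical intuition here is a simple binomial tail calculation. The numbers below are hypothetical -- real chess forensics has to correct for forced moves, opening theory, and which engine is compared -- but they show why near-perfect engine correlation is so damning:

```python
from math import comb

def p_value(n_moves, n_matches, p_baseline):
    """P(X >= n_matches) for X ~ Binomial(n_moves, p_baseline): the chance an
    honest player matches the engine's top choice at least this often."""
    return sum(comb(n_moves, k) * p_baseline**k * (1 - p_baseline)**(n_moves - k)
               for k in range(n_matches, n_moves + 1))

# Hypothetical numbers: suppose strong humans match an engine's first choice
# on ~55% of non-trivial moves. Matching on 85 of 90 would then be absurd:
print(p_value(90, 85, 0.55))   # astronomically small

# Whereas 5 matches in 10 coin-flip-like decisions is unremarkable:
print(p_value(10, 5, 0.5))     # about 0.62
```

A tiny p-value only says the match rate is surprising under the assumed honest baseline; it doesn't tell you what that baseline really is -- which is precisely where the argument over purely statistical evidence gets contested.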

It reminds me of this story of a marathon runner who arguably has figured out how to cheat undetectably.

Posted on January 16, 2013 at 6:25 AM • 63 Comments

Lexical Warfare

This essay, which uses the suicide of Aaron Swartz as a jumping off point for how the term "hactivist" has been manipulated by various powers, has this to say about "lexical warfare":

I believe the debate itself is far broader than the specifics of this unhappy case, for if there was prosecutorial overreach it raises the question of whether we as a society created the enabling condition for this sort of overreach by letting the demonization of hacktivists go unanswered. Prosecutors do not work in a vacuum, after all; they are more apt to pursue cases where public discourse supports their actions. The debate thus raises an issue that, as a philosopher of language, I have spent time considering: the impact of how words and terms are defined in the public sphere.

"Lexical Warfare" is a phrase that I like to use for battles over how a term is to be understood. Our political discourse is full of such battles; it is pretty routine to find discussions of who gets to be called "Republican" (as opposed to RINO – Republican in Name Only), what "freedom" should mean, what legitimately gets to be called "rape" -- and the list goes on.

Lexical warfare is important because it can be a device to marginalize individuals within their self-identified political affiliation (for example, branding RINO’s defines them as something other than true Republicans), or it can beguile us into ignoring true threats to freedom (focusing on threats from government while being blind to threats from corporations, religion and custom), and in cases in which the word in question is "rape," the definition can have far reaching consequences for the rights of women and social policy.

Lexical warfare is not exclusively concerned with changing the definitions of words and terms -- it can also work to attach either a negative or positive affect to a term. Ronald Reagan and other conservatives successfully loaded the word "liberal" with negative connotations, while enhancing the positive aura of terms like "patriot" (few today would reject the label "patriotic," but rather argue for why they are entitled to it).

Posted on January 15, 2013 at 6:10 AM • 66 Comments

Anti-Surveillance Clothing

It's both an art project and a practical clothing line.

...Harvey's line of "Stealth Wear" clothing includes an "anti-drone hoodie" that uses metalized material designed to counter thermal imaging used by drones to spot people on the ground. He's also created a cellphone pouch made of a special "signal attenuating fabric." The pocket blocks your phone signal so that it can't be tracked or intercepted by devices like the covert "Stingray" tool used by law enforcement agencies like the FBI.

Posted on January 14, 2013 at 1:27 PM • 54 Comments

Friday Squid Blogging: Giant Squid Video

Last week, I blogged about an upcoming Discovery Channel program with actual video footage of a live giant squid. ABC News has a tantalizingly short sneak peek.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on January 11, 2013 at 3:59 PM • 54 Comments

Experimental Results: Liars and Outliers Trust Offer

Last August, I offered to sell Liars and Outliers for $11 in exchange for a book review. This was much less than the $30 list price; less even than the $16 Amazon price. For readers outside the U.S., where books can be very expensive, it was a great price.

I sold 800 books from this offer -- much more than the few hundred I originally intended -- to people all over the world. It was the end of September before I mailed them all out, and probably a couple of weeks later before everyone received their copy. Now, three months after that, it's interesting to count up the number of reviews I received from the offer.

That's not a trivial task. I asked people to e-mail me URLs for their review, but not everyone did. But counting the independent reviews, the Amazon reviews, and the Goodreads reviews from the time period, and making some reasonable assumptions, about 70 people fulfilled their end of the bargain and reviewed my book.

That's 9%.
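The rate is simply reviews received over copies sold; a quick check of the arithmetic (using the approximate count of 70 reviews from above):

```python
# Review rate from the numbers above: ~70 reviews out of 800 copies sold.
reviews, sold = 70, 800
rate = reviews / sold
print(f"{rate:.2%}")  # 8.75%, which rounds to the quoted 9%
```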

There were some outliers. One person wrote to tell me that he didn't like the book, and offered not to publish a review despite the agreement. Another two e-mailed me to offer to return the price difference (I declined).

Perhaps people have been busier than they expected -- and haven't gotten around to reading the book and writing a review yet. I know my reading is often delayed by more pressing priorities. And although I didn't put any deadline on when the review should be completed, I received a surge of reviews around the end of the year -- probably because some people self-imposed a deadline. What is certain is that a great majority of people decided not to uphold their end of the bargain.

The original offer was an exercise in trust. But to use the language of the book, the only thing inducing compliance was the morals of the reader. I suppose I could have collected everyone's names, checked off those who wrote reviews, and tried shaming the rest -- but that seems like a lot of work. Perhaps this public nudge will be enough to convince some more people to write reviews.

EDITED TO ADD (1/11): I never intended to make people feel bad with this post. I know that some people are busy, and that reading an entire book is a large time commitment (especially in our ever-shortened-attention-span era). I can see how this post could be read as an attempt to shame, but -- really -- that was not my intention.

EDITED TO ADD (1/22): Some comments.

Posted on January 11, 2013 at 8:10 AM • 150 Comments

The Politics and Philosophy of National Security

This essay explains why we're all living in failed Hobbesian states:

What do these three implications -- states have a great deal of freedom to determine what threatens a people and how to respond to those threats, and in making those determinations, they are influenced by the interests and ideologies of their primary constituencies; states have strong incentives and have been given strong justifications for exaggerating threats; and while states aspire, rhetorically, to a unity of will and judgment, they seldom achieve it in practice -- tell us about the relationship between security and freedom? What light do they shed on the question of why security is such a potent argument for the suppression of rights and liberties?

Security is an ideal language for suppressing rights because it combines a universality and neutrality in rhetoric with a particularity and partiality in practice. Security is a good that everyone needs, and, we assume, that everyone needs in the same way and to the same degree. It is "the most vital of all interests," John Stuart Mill wrote, which no one can "possibly do without." Though Mill was referring here to the security of persons rather than of nations or states, his argument about personal security is often extended to nations and states, which are conceived to be persons writ large.

Unlike other values -- say justice or equality -- the need for and definition of security is not supposed to be dependent upon our beliefs or other interests and it is not supposed to favor any one set of beliefs or interests. It is the necessary condition for the pursuit of any belief or interest, regardless of who holds that belief or has that interest. It is a good, as I've said, that is universal and neutral. That's the theory.

The reality, as we have seen, is altogether different. The practice of security involves a state that is rife with diverse and competing ideologies and interests, and these ideologies and interests fundamentally help determine whether threats become a focus of attention, and how they are perceived and mobilized against. The provision of security requires resources, which are not limitless. They must be distributed according to some calculus, which, like the distribution calculus of any other resource (say income or education), will reflect controversial and contested assumptions about justice and will be the subject of debate. National security is as political as Social Security, and just as we argue about the latter, so do we argue about the former.

Posted on January 10, 2013 at 6:49 AM • 16 Comments

Cat Smuggler

Not a cat burglar, a cat smuggler.

Guards thought there was something suspicious about a little white cat slipping through a prison gate in northeastern Brazil. A prison official says that when they caught the animal, they found a cellphone, drills, small saws and other contraband taped to its body.

Another article, with video.

A prison spokesperson was quoted by local paper Estado de S. Paulo as saying: "It's tough to find out who's responsible for the action as the cat doesn't speak."

Posted on January 8, 2013 at 1:36 PM • 23 Comments

DHS Gets to Spy on Everyone

This Wall Street Journal investigative piece is a month old, but well worth reading. Basically, the Total Information Awareness program is back with a different name:

The rules now allow the little-known National Counterterrorism Center to examine the government files of U.S. citizens for possible criminal behavior, even if there is no reason to suspect them. That is a departure from past practice, which barred the agency from storing information about ordinary Americans unless a person was a terror suspect or related to an investigation.

Now, NCTC can copy entire government databases -- flight records, casino-employee lists, the names of Americans hosting foreign-exchange students and many others. The agency has new authority to keep data about innocent U.S. citizens for up to five years, and to analyze it for suspicious patterns of behavior. Previously, both were prohibited. Data about Americans "reasonably believed to constitute terrorism information" may be permanently retained.

Note that this is government data only, not commercial data. So while it includes "almost any government database, from financial forms submitted by people seeking federally backed mortgages to the health records of people who sought treatment at Veterans Administration hospitals" as well as lots of commercial data, it's data the corporations have already given to the government. It doesn't include, for example, your detailed cell phone bills or your tweets.

See also this supplementary blog post to the article.

Posted on January 8, 2013 at 6:28 AM • 54 Comments

Details of an Internet Scam

Interesting details of an Amazon Marketplace scam. Worth reading.

Most scams use a hook to cause a reaction. The idea being that if you are reacting, they get to control you. If you take the time to stop and think things through, you take control back and can usually spot the scam. Common hooks involve Urgency, Uncertainty, Sex, Fear or Anger. In this case, it's all about Urgency, Uncertainty and Fear. By setting the price so low, they drive urgency high, as you're afraid that you might miss the deal. They then compound this by telling me there was an error in the shipment, trying to make me believe they are incompetent and if I act quickly, I can take advantage of their error.

The second email hypes the urgency, trying to get me to pay quickly. I did not reply, but if I had, the next step in a scam like this is to sweeten the deal if I were to act immediately, often by pretending to ship my non-existent camera with a bonus item (like a cell phone) overnight if I give them payment information immediately.

Of course, if I ever did give them my payment information, they'd empty my checking account and, if they're with a larger attacker group, start using my account to traffic stolen funds.

Posted on January 7, 2013 at 6:31 AM • 23 Comments

Friday Squid Blogging: Giant Squid Finally Captured on Video

We'll see it later this month.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

EDITED TO ADD (1/8): Some more news stories here.

Posted on January 4, 2013 at 3:36 PM • 32 Comments

What Facebook Gives the Police

This is what Facebook gives the police in response to a subpoena. (Note that this isn't in response to a warrant; it's in response to a subpoena.) This might be the first one of these that has ever become public.

EDITED TO ADD (1/4): Commenters point out that this case is four years old, and that Facebook claims to have revised its policies since then.

Posted on January 4, 2013 at 7:48 AM • 33 Comments

Classifying a Shape

This is a great essay:

Spheres are special shapes for nuclear weapons designers. Most nuclear weapons have, somewhere in them, that spheres-within-spheres arrangement of the implosion nuclear weapon design. You don’t have to use spheres -- cylinders can be made to work, and there are lots of rumblings and rumors about non-spherical implosion designs around these here Internets -- but spheres are pretty common.


Imagine the scenario: you’re a security officer working at Los Alamos. You know that spheres are weapon parts. You walk into a technical area, and you see spheres all around! Is that an ashtray, or is it a model of a plutonium pit? Anxiety mounts -- does the ashtray go into a safe at the end of the day, or does it stay out on the desk? (Has someone been tapping their cigarettes out into the pit model?)

All of this anxiety can be gone -- gone! -- by simply banning all non-nuclear spheres! That way you can effectively treat all spheres as sensitive shapes.

What I love about this little policy proposal is that it illuminates something deep about how secrecy works. Once you decide that something is so dangerous that the entire world hinges on keeping it under control, this sense of fear and dread starts to creep outwards. The worry about what must be controlled becomes insatiable -- and pretty soon the mundane is included with the existential.

The essay continues with a story of a scientist who received a security violation for leaving an orange on his desk.

Two points here. One, this is a classic problem with any detection system. When it's hard to build a system that detects the thing you're looking for, you change the problem to detect something easier -- and hope the overlap is enough to make the system work. Think about airport security. It's too hard to detect actual terrorists with terrorist weapons, so instead they detect pointy objects. Internet filtering systems work the same way, too. (Remember when URL filters blocked the word "sex," and the Middlesex Public Library found that it couldn't get to its municipal webpages?)
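The Middlesex failure mode is easy to demonstrate with a toy filter. This is a sketch of the naive substring approach described above, not any real product's logic:

```python
# Toy URL filter: block any URL containing a banned substring.
# This is the "detect something easier" proxy -- matching characters
# instead of detecting actual objectionable content.

BLOCKED_WORDS = ["sex"]

def is_blocked(url: str) -> bool:
    """Return True if any banned word appears anywhere in the URL."""
    url = url.lower()
    return any(word in url for word in BLOCKED_WORDS)

print(is_blocked("http://example.com/sex"))          # True: intended block
print(is_blocked("http://www.middlesex.gov/parks"))  # True: false positive
print(is_blocked("http://example.com/library"))      # False: passes
```

The filter "works" only to the extent that the easy-to-detect proxy (the character string) overlaps with the hard-to-detect target (objectionable content) -- and the Middlesex library sits squarely in the gap.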

Two, the Los Alamos system only works because false negatives are much, much worse than false positives. It really is worth classifying an abstract shape and annoying an officeful of scientists and others to protect the nuclear secrets. Airport security fails because the false-positive/false-negative cost ratio is different.
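That cost ratio can be made concrete with a back-of-the-envelope expected-cost comparison. All the numbers below are illustrative assumptions, not figures from the essay:

```python
# Compare two policies by expected cost. Made-up numbers for illustration:
# cost_fp = annoyance of treating a harmless sphere as classified
# cost_fn = damage from a real nuclear secret leaking

def expected_cost(n_harmless, n_secret, fp_rate, fn_rate, cost_fp, cost_fn):
    """False positives cost a little each; false negatives cost a lot."""
    return n_harmless * fp_rate * cost_fp + n_secret * fn_rate * cost_fn

# Policy A: classify all spheres -- every harmless sphere is a false
# positive, but no secret ever slips through.
ban_everything = expected_cost(1000, 10, fp_rate=1.0, fn_rate=0.0,
                               cost_fp=1, cost_fn=1_000_000)

# Policy B: judge each object individually -- few false positives,
# but a small chance of missing a real secret.
judge_each = expected_cost(1000, 10, fp_rate=0.01, fn_rate=0.05,
                           cost_fp=1, cost_fn=1_000_000)

print(ban_everything)  # 1000.0
print(judge_each)      # 500010.0
```

With a false negative a million times costlier than a false positive, blanket classification wins easily; flip the ratio toward airport-security numbers (enormous false-positive volume, rare and hard-to-define true positives) and the blanket policy stops paying for itself.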

Posted on January 3, 2013 at 6:03 AM • 34 Comments

Apollo Robbins, Pickpocket

Fascinating story:

"Come on," Jillette said. "Steal something from me."

Again, Robbins begged off, but he offered to do a trick instead. He instructed Jillette to place a ring that he was wearing on a piece of paper and trace its outline with a pen. By now, a small crowd had gathered. Jillette removed his ring, put it down on the paper, unclipped a pen from his shirt, and leaned forward, preparing to draw. After a moment, he froze and looked up. His face was pale.

"Fuck. You," he said, and slumped into a chair.

Robbins held up a thin, cylindrical object: the cartridge from Jillette’s pen.

Really -- read the whole thing.

EDITED TO ADD (1/6): A video accompanying the article. There's much more on YouTube.

Posted on January 2, 2013 at 8:44 AM • 31 Comments
