Blog: April 2011 Archives

The Cyberwar Arms Race

Good paper: “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” by Jerry Brito and Tate Watkins.

Over the past two years there has been a steady drumbeat of alarmist rhetoric coming out of Washington about potential catastrophic cyber threats. For example, at a Senate Armed Services Committee hearing last year, Chairman Carl Levin said that “cyberweapons and cyberattacks potentially can be devastating, approaching weapons of mass destruction in their effects.” Proposed responses include increased federal spending on cybersecurity and the regulation of private network security practices.

The rhetoric of “cyber doom” employed by proponents of increased federal intervention, however, lacks clear evidence of a serious threat that can be verified by the public. As a result, the United States may be witnessing a bout of threat inflation similar to that seen in the run-up to the Iraq War. Additionally, a cyber-industrial complex is emerging, much like the military-industrial complex of the Cold War. This complex may serve to not only supply cybersecurity solutions to the federal government, but to drum up demand for them as well.

Part I of this article draws a parallel between today’s cybersecurity debate and the run-up to the Iraq War and looks at how an inflated public conception of the threat we face may lead to unnecessary regulation of the Internet. Part II draws a parallel between the emerging cybersecurity establishment and the military-industrial complex of the Cold War and looks at how unwarranted external influence can lead to unnecessary federal spending. Finally, Part III surveys several federal cybersecurity proposals and presents a framework for analyzing the cybersecurity threat.

Also worth reading is an earlier paper by Sean Lawson: “Beyond Cyber Doom.”

EDITED TO ADD (5/3): Good article on the paper.

Posted on April 28, 2011 at 6:56 AM • 38 Comments

Social Solidarity as an Effect of the 9/11 Terrorist Attacks

It’s standard sociological theory that a group experiences social solidarity in response to external conflict. This paper studies the phenomenon in the United States after the 9/11 terrorist attacks.

Conflict produces group solidarity in four phases: (1) an initial few days of shock and idiosyncratic individual reactions to attack; (2) one to two weeks of establishing standardized displays of solidarity symbols; (3) two to three months of high solidarity plateau; and (4) gradual decline toward normalcy in six to nine months. Solidarity is not uniform but is clustered in local groups supporting each other’s symbolic behavior. Actual solidarity behaviors are performed by minorities of the population, while vague verbal claims to performance are made by large majorities. Commemorative rituals intermittently revive high emotional peaks; participants become ranked according to their closeness to a center of ritual attention. Events, places, and organizations claim importance by associating themselves with national solidarity rituals and especially by surrounding themselves with pragmatically ineffective security ritual. Conflicts arise over access to centers of ritual attention; clashes occur between pragmatists deritualizing security and security zealots attempting to keep up the level of emotional intensity. The solidarity plateau is also a hysteria zone; as a center of emotional attention, it attracts ancillary attacks unrelated to the original terrorists as well as alarms and hoaxes. In particular historical circumstances, it becomes a period of atrocities.

This certainly makes sense as a group survival mechanism: self-interest giving way to group interest in the face of a threat to the group. It’s the kind of thing I am talking about in my new book.

Paper also available here.

Posted on April 27, 2011 at 9:10 AM • 16 Comments

Security Risks of Running an Open WiFi Network

As I’ve written before, I run an open WiFi network. It’s stories like these that may make me rethink that.

The three stories all fall along the same theme: a Buffalo man, Sarasota man, and Syracuse man all found themselves being raided by the FBI or police after their wireless networks were allegedly used to download child pornography. “You’re a creep… just admit it,” one FBI agent was quoted saying to the accused party. In all three cases, the accused ended up getting off the hook after their files were examined and neighbors were found to be responsible for downloading child porn via unsecured WiFi networks.

EDITED TO ADD (4/29): The EFF is calling for an open wireless movement. I approve.

Posted on April 26, 2011 at 6:59 AM • 182 Comments

Hard-Drive Steganography through Fragmentation

Clever:

Khan and his colleagues have written software that ensures clusters of a file, rather than being positioned at the whim of the disc drive controller chip, as is usually the case, are positioned according to a code. All the person at the other end needs to know is which file’s cluster positions have been encoded.

The code depends on whether sequential clusters in a file are situated adjacent to each other on the hard disc or not. If they are adjacent, this corresponds to a binary 1 in the secret message.
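To make the scheme concrete, here is a minimal sketch of the decoding side, under my reading of the description above. The cluster numbers are invented; real code would read them from the filesystem’s allocation structures rather than from a hardcoded list.

    # Recover hidden bits from a file's on-disk cluster layout:
    # sequential clusters adjacent on disk encode a 1, a gap encodes a 0.
    def decode_message(cluster_positions):
        bits = []
        for prev, curr in zip(cluster_positions, cluster_positions[1:]):
            bits.append(1 if curr == prev + 1 else 0)
        return bits

    # A file stored in nine clusters carries an eight-bit hidden message:
    clusters = [100, 101, 105, 106, 107, 112, 113, 120, 125]
    print(decode_message(clusters))  # [1, 0, 1, 1, 0, 1, 0, 0]

The encoder is the hard part in practice: it has to bypass the drive controller’s normal allocation strategy and place clusters deliberately, which is presumably what Khan’s software does.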

Paper.

Posted on April 25, 2011 at 5:24 AM • 38 Comments

Friday Squid Blogging: Squid Prints

Okay, this is a little weird:

This year’s Earth Day will again include the celebrated “squid printing” activity with two big, beautiful Pacific Humboldt squid donated from the Gulf of the Farallones National Marine Sanctuary. We’ll be inking them up and laying them out on paper to create fascinating one-of-a-kind imprints of their bodies.

I don’t know what’s worse: that they’re making prints from squid bodies, or that they’re doing it “again.”

Posted on April 22, 2011 at 4:30 PM • 18 Comments

Large-Scale Food Theft

A criminal gang is stealing truckloads of food:

Late last month, a gang of thieves stole six tractor-trailer loads of tomatoes and a truck full of cucumbers from Florida growers. They also stole a truckload of frozen meat. The total value of the illegal haul: about $300,000.

The thieves disappeared with the shipments just after the price of Florida tomatoes skyrocketed after freezes that badly damaged crops in Mexico. That suddenly made Florida tomatoes a tempting target, on a par with flat-screen TVs or designer jeans, but with a big difference: tomatoes are perishable.

“I’ve never experienced people targeting produce loads before,” said Shaun Leiker, an assistant manager at Allen Lund, a trucking broker in Oviedo, Fla., that was hit three times by the thieves. “It’s a little different than selling TVs off the back of your truck.”

It’s a professional operation. The group knew how wholesale foodstuff trucking worked. They set up a bogus trucking company. They bid for jobs, collected the trailers, and disappeared. Presumably they knew how to fence the goods, too.

Posted on April 20, 2011 at 6:52 AM • 51 Comments

Software as Evidence

Increasingly, chains of evidence include software steps. It’s not just the RIAA suing people—and getting it wrong—based on automatic systems to detect and identify file sharers. It’s forensic programs used to collect and analyze data from computers and smart phones. It’s audit logs saved and stored by ISPs and websites. It’s location data from cell phones. It’s e-mails and IMs and comments posted to social networking sites. It’s tallies from digital voting machines. It’s images and meta-data from surveillance cameras. The list goes on and on. We in the security field know the risks associated with trusting digital data, but this evidence is routinely assumed by courts to be accurate.

Sergey Bratus is starting to look at this problem. His paper, written with Ashlyn Lembree and Anna Shubina, is “Software on the Witness Stand: What Should it Take for Us to Trust it?”

We discuss the growing trend of electronic evidence, created automatically by autonomously running software, being used in both civil and criminal court cases. We discuss trustworthiness requirements that we believe should be applied to such software and platforms it runs on. We show that courts tend to regard computer-generated materials as inherently trustworthy evidence, ignoring many software and platform trustworthiness problems well known to computer security researchers. We outline the technical challenges in making evidence-generating software trustworthy and the role Trusted Computing can play in addressing them.

From a presentation he gave on the subject:

Constitutionally, criminal defendants have the right to confront accusers. If software is the accusing agent, what should the defendant be entitled to under the Confrontation Clause?

[…]

Witnesses are sworn in and cross-examined to expose biases & conflicts—what about software as a witness?

Posted on April 19, 2011 at 6:47 AM • 52 Comments

WikiLeaks Cable about Chinese Hacking of U.S. Networks

We know it’s prevalent, but there’s some new information:

Secret U.S. State Department cables, obtained by WikiLeaks and made available to Reuters by a third party, trace systems breaches—colorfully code-named “Byzantine Hades” by U.S. investigators—to the Chinese military. An April 2009 cable even pinpoints the attacks to a specific unit of China’s People’s Liberation Army.

Privately, U.S. officials have long suspected that the Chinese government and in particular the military was behind the cyber-attacks. What was never disclosed publicly, until now, was evidence.

U.S. efforts to halt Byzantine Hades hacks are ongoing, according to four sources familiar with investigations. In the April 2009 cable, officials in the State Department’s Cyber Threat Analysis Division noted that several Chinese-registered Web sites were “involved in Byzantine Hades intrusion activity in 2006.”

The sites were registered in the city of Chengdu, the capital of Sichuan Province in central China, according to the cable. A person named Chen Xingpeng set up the sites using the “precise” postal code in Chengdu used by the People’s Liberation Army Chengdu Province First Technical Reconnaissance Bureau (TRB), an electronic espionage unit of the Chinese military. “Much of the intrusion activity traced to Chengdu is similar in tactics, techniques and procedures to (Byzantine Hades) activity attributed to other” electronic spying units of the People’s Liberation Army, the cable says.

[…]

What is known is the extent to which Chinese hackers use “spear-phishing” as their preferred tactic to get inside otherwise forbidden networks. Compromised email accounts are the easiest way to launch spear-phish because the hackers can send the messages to entire contact lists.

The tactic is so prevalent, and so successful, that “we have given up on the idea we can keep our networks pristine,” says Stewart Baker, a former senior cyber-security official at the U.S. Department of Homeland Security and National Security Agency. It’s safer, government and private experts say, to assume the worst—that any network is vulnerable.

Two former national security officials involved in cyber-investigations told Reuters that Chinese intelligence and military units, and affiliated private hacker groups, actively engage in “target development” for spear-phish attacks by combing the Internet for details about U.S. government and commercial employees’ job descriptions, networks of associates, and even the way they sign their emails—such as U.S. military personnel’s use of “V/R,” which stands for “Very Respectfully” or “Virtual Regards.”

The spear-phish are “the dominant attack vector. They work. They’re getting better. It’s just hard to stop,” says Gregory J. Rattray, a partner at cyber-security consulting firm Delta Risk and a former director for cyber-security on the National Security Council.

Spear-phish are used in most Byzantine Hades intrusions, according to a review of State Department cables by Reuters. But Byzantine Hades is itself categorized into at least three specific parts known as “Byzantine Anchor,” “Byzantine Candor,” and “Byzantine Foothold.” A source close to the matter says the sub-codenames refer to intrusions which use common tactics and malicious code to extract data.

A State Department cable made public by WikiLeaks last December highlights the severity of the spear-phish problem. “Since 2002, (U.S. government) organizations have been targeted with social-engineering online attacks” which succeeded in “gaining access to hundreds of (U.S. government) and cleared defense contractor systems,” the cable said. The emails were aimed at the U.S. Army, the Departments of Defense, State and Energy, other government entities and commercial companies.

By the way, reading this blog entry might be illegal under the U.S. Espionage Act:

Dear Americans: If you are not “authorized” personnel, but you have read, written about, commented upon, tweeted, spread links by “liking” on Facebook, shared by email, or otherwise discussed “classified” information disclosed from WikiLeaks, you could be implicated for crimes under the U.S. Espionage Act—or so warns a legal expert who said the U.S. Espionage Act could make “felons of us all.”

As the U.S. Justice Department works on a legal case against WikiLeaks’ Julian Assange for his role in helping publish 250,000 classified U.S. diplomatic cables, authorities are leaning toward charging Assange with spying under the Espionage Act of 1917. Legal experts warn that if there is an indictment under the Espionage Act, then any citizen who has discussed or accessed “classified” information can be arrested on “national security” grounds.

Maybe I should have warned you at the top of this post.

Posted on April 18, 2011 at 9:33 AM • 54 Comments

"Schneier's Law"

Back in 1998, I wrote:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.

In 2004, Cory Doctorow called this Schneier’s law:

…what I think of as Schneier’s Law: “any person can invent a security system so clever that she or he can’t think of how to break it.”

The general idea is older than my writing. Wikipedia points out that in The Codebreakers, David Kahn writes:

Few false ideas have more firmly gripped the minds of so many intelligent men than the one that, if they just tried, they could invent a cipher that no one could break.

The idea is even older. Back in 1864, Charles Babbage wrote:

One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.

My phrasing is different, though. Here’s my original quote in context:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break. It’s not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.

And here’s me in 2006:

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

And that’s the point I want to make. It’s not that people believe they can create an unbreakable cipher; it’s that people create a cipher that they themselves can’t break, and then use that as evidence they’ve created an unbreakable cipher.

EDITED TO ADD (4/16): This is an example of the Dunning-Kruger effect, named after the authors of this paper: “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.”

Abstract: People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.

EDITED TO ADD (4/18): If I have any contribution to this, it’s to generalize it to security systems and not just to cryptographic algorithms. Because anyone can design a security system that he cannot break, evaluating the security credentials of the designer is an essential aspect of evaluating the system’s security.

Posted on April 15, 2011 at 1:45 PM • 79 Comments

Unanticipated Security Risk of Keeping Your Money in a Home Safe

In Japan, lots of people—especially older people—keep their life savings in cash in their homes. (The country’s banks pay very low interest rates, so the incentive to deposit that money into bank accounts is lower than in other countries.) This is all well and good, until a tsunami destroys your home and washes your money out to sea. Then, when it washes up onto the beach, the police collect it:

One month after the March 11 tsunami devastated Ofunato and other nearby cities, police departments already stretched thin now face the growing task of managing lost wealth.

“At first we put all the safes in the station,” said Noriyoshi Goto, head of the Ofunato Police Department’s financial affairs department, which is in charge of lost-and-found items. “But then there were too many, so we had to move them.”

Goto couldn’t specify how many safes his department has collected so far, saying only that there were “several hundreds” with more coming in every day.

Identifying the owners of lost safes is hard enough. But it’s nearly impossible when it comes to wads of cash being found in envelopes, unmarked bags, boxes and furniture.

After three months, the money goes to the government.

Posted on April 15, 2011 at 6:49 AM • 47 Comments

Changing Incentives Creates Security Risks

One of the things I am writing about in my new book is how security equilibriums change. They often change because of technology, but they sometimes change because of incentives.

An interesting example of this is the recent scandal in the Washington, DC, public school system over teachers changing their students’ test answers.

In the U.S., under the No Child Left Behind Act, students have to pass certain tests; otherwise, schools are penalized. In the District of Columbia, things went further. Michelle Rhee, chancellor of the public school system from 2007 to 2010, offered teachers $8,000 bonuses for improving test scores and threatened them with termination if scores didn’t improve. Scores did increase significantly during the period, and the schools were held up as examples of how incentives affect teaching behavior.

It turns out that a lot of those score increases were faked. In addition to teaching students, teachers cheated on their students’ tests by changing wrong answers to correct ones. That’s how the cheating was discovered; researchers looked at the actual test papers and found more erasures than usual, and many more erasures from wrong answers to correct ones than could be explained by anything other than deliberate manipulation.
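The detection side lends itself to simple statistics. Here is a toy sketch of that kind of outlier analysis, with invented counts and a crude threshold; the actual investigations used far more careful methods:

    # Flag classrooms whose wrong-to-right erasure counts are far above
    # the norm for the district. Counts and threshold are invented.
    from statistics import median

    def flag_suspicious(erasure_counts, multiple=4.0):
        typical = median(erasure_counts.values())
        return [room for room, count in erasure_counts.items()
                if count > multiple * typical]

    counts = {"room-101": 4, "room-102": 6, "room-103": 5, "room-104": 38}
    print(flag_suspicious(counts))  # ['room-104']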

Teachers were always able to manipulate their students’ test answers, but before, there wasn’t much incentive to do so. With Rhee’s changes, there was a much greater incentive to cheat.

The point is that whatever security measures were in place to prevent teacher cheating before the financial incentives and threats of firing weren’t sufficient to prevent it afterwards. Because Rhee significantly increased the costs of cooperation (by threatening to fire teachers of poorly performing students) and increased the benefits of defection ($8,000), she created a security risk. And she should have increased security measures to restore balance to those incentives.

This is not isolated to DC. It has happened elsewhere as well.

Posted on April 14, 2011 at 6:36 AM • 45 Comments

Security Fears of Wi-Fi in London Underground

The London Underground is getting Wi-Fi. Of course there are security fears:

But Will Geddes, founder of ICP Group which specialises in reducing terror or technology-related threats, said the plan was problematic.

He said: “There are lots of implications in terms of terrorism and security.

“This will enable people to use their laptop on the Tube as if it was a cell phone.”

Mr Geddes said there had been numerous examples of bomb attacks detonated remotely by mobile phone in Afghanistan and Iraq.

He warned a wi-fi system would enable a terror cell to communicate underground.

And he said “Trojan” or eavesdropping software could be used to penetrate users’ laptops and garner information such as bank details.

Mr Geddes added: “Eavesdropping software can be found and downloaded within minutes.”

This is just silly. We could have a similar conversation regarding any piece of our infrastructure. Yes, the bad guys could use it, just as they use telephones and automobiles and all-night restaurants. If we didn’t deploy technologies because of this fear, we’d still be living in the Middle Ages.

Posted on April 13, 2011 at 1:14 PM • 44 Comments

Euro Coin Recycling Scam

This story is just plain weird. Regularly, damaged coins are taken out of circulation. They’re destroyed and then sold to scrap metal dealers. That makes sense, but it seems that one- and two-euro coins aren’t destroyed very well. They’re both bi-metal designs, and they’re just separated into an inner core and an outer ring and then sold to Chinese scrap metal dealers. The dealers, being no dummies, put the two parts back together and sold them back to a German bank at face value. The bank was chosen because it accepts damaged coins and doesn’t inspect them very carefully.

Is this not entirely predictable? If you’re going to take coins out of circulation, you had better use a metal shredder. (Except for pennies, whose component metals are worth more than their face value.)

Posted on April 13, 2011 at 6:25 AM • 35 Comments

Israel's Counter-Cyberterrorism Unit

You’d think the country would already have one of these:

Israel is mulling the creation of a counter-cyberterrorism unit designed to safeguard both government agencies and core private sector firms against hacking attacks.

The proposed unit would supplement the efforts of Mossad and other agencies in fighting cyberespionage and denial of service attacks.

Posted on April 12, 2011 at 2:06 PM • 28 Comments

How did the CIA and FBI Know that Australian Government Computers were Hacked?

Newspapers are reporting that, for about a month, hackers had access to computers “of at least 10 federal ministers including the Prime Minister, Foreign Minister and Defence Minister.”

That’s not much of a surprise. What is odd is the statement that “Australian intelligence agencies were tipped off to the cyber-spy raid by US intelligence officials within the Central Intelligence Agency and the Federal Bureau of Investigation.”

How did the CIA and the FBI know? Did they see some intelligence traffic and assume that those computers were where the stolen e-mails were coming from? Or something else?

Posted on April 12, 2011 at 6:03 AM • 61 Comments

New French Law Reduces Website Security

I didn’t know about this:

The law obliges a range of e-commerce sites, video and music services and webmail providers to keep a host of data on customers.

This includes users’ full names, postal addresses, telephone numbers and passwords. The data must be handed over to the authorities if demanded.

Police, the fraud office, customs, tax and social security bodies will all have the right of access.

The social benefits of anonymity aside, we’re all more secure if these websites do not have a file of everyone’s plaintext password.
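For contrast, here is a minimal sketch of what a site can store instead of a plaintext password: a per-user salt and an iterated hash. The parameters are illustrative, not a recommendation.

    # Store only (salt, digest); the plaintext is discarded after hashing.
    import hashlib
    import hmac
    import os

    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

A site that stores only this can still authenticate its users, but a stolen or subpoenaed copy of its database doesn’t directly reveal anyone’s password.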

EDITED TO ADD (4/12): Seems that the BBC article misstated the law. Companies have to retain information they already collect for a year after it is no longer required. So if they’re not already storing plaintext passwords, they don’t have to start.

Posted on April 11, 2011 at 1:20 PM • 31 Comments

The CIA and Assassinations

The former CIA general counsel, John A. Rizzo, talks about his agency’s assassination program, which has increased dramatically under the Obama administration:

The hub of activity for the targeted killings is the CIA’s Counterterrorist Center, where lawyers—there are roughly 10 of them, says Rizzo—write a cable asserting that an individual poses a grave threat to the United States. The CIA cables are legalistic and carefully argued, often running up to five pages. Michael Scheuer, who used to be in charge of the CIA’s Osama bin Laden unit, describes “a dossier,” or a “two-page document,” along with “an appendix with supporting information, if anybody wanted to read all of it.” The dossier, he says, “would go to the lawyers, and they would decide. They were very picky.” Sometimes, Scheuer says, the hurdles may have been too high. “Very often this caused a missed opportunity. The whole idea that people got shot because someone has a hunch; I only wish that was true. If it were, there would be a lot more bad guys dead.”

Sometimes, as Rizzo recalls, the evidence against an individual would be thin, and high-level lawyers would tell their subordinates, “You guys did not make a case.” “Sometimes the justification would be that the person was thought to be at a meeting,” Rizzo explains. “It was too squishy.” The memo would get kicked back downstairs.

The cables that were “ready for prime time,” as Rizzo puts it, concluded with the following words: “Therefore we request approval for targeting for lethal operation.” There was a space provided for the signature of the general counsel, along with the word “concurred.” Rizzo says he saw about one cable each month, and at any given time there were roughly 30 individuals who were targeted. Many of them ended up dead, but not all: “No. 1 and No. 2 on the hit parade are still out there,” Rizzo says, referring to “you-know-who and [Ayman al-] Zawahiri,” a top Qaeda leader.

And the ACLU Deputy Legal Director on the interview:

What was most remarkable about the interview, though, was not what Rizzo said but that it was Rizzo who said it. For more than six years until his retirement in December 2009, Rizzo was the CIA’s acting general counsel—the agency’s chief lawyer. On his watch the CIA had sought to quash a Freedom of Information Act lawsuit by arguing that national security would be harmed irreparably if the CIA were to acknowledge any detail about the targeted killing program, even the program’s mere existence.

Rizzo’s disclosure was long overdue—the American public surely has a right to know that the assassination of terrorism suspects is now official government policy, and reflects an opportunistic approach to allegedly sensitive information that has become the norm for senior government officials. Routinely, officials insist to courts that the nation’s security will be compromised if certain facts are revealed but then supply those same facts to trusted reporters.

Posted on April 11, 2011 at 6:33 AM • 85 Comments

Friday Squid Blogging: A New Book About Squid

Wendy Williams, Kraken: The Curious, Exciting, and Slightly Disturbing Science of Squid.

Kraken is the traditional name for gigantic sea monsters, and this book introduces one of the most charismatic, enigmatic, and curious inhabitants of the sea: the squid. The pages take the reader on a wild narrative ride through the world of squid science and adventure, along the way addressing some riddles about what intelligence is, and what monsters lie in the deep. In addition to squid, both giant and otherwise, Kraken examines other equally enthralling cephalopods, including the octopus and the cuttlefish, and explores their otherworldly abilities, such as camouflage and bioluminescence. Accessible and entertaining, Kraken is also the first substantial volume on the subject in more than a decade and a must for fans of popular science.

Seems to be getting good reviews.

Posted on April 8, 2011 at 4:08 PM • 8 Comments

Get Your Terrorist Alerts on Facebook and Twitter

Colors are so last decade:

The U.S. government’s new system to replace the five color-coded terror alerts will have two levels of warnings, elevated and imminent, that will be relayed to the public only under certain circumstances for limited periods of time, sometimes using Facebook and Twitter, according to a draft Homeland Security Department plan obtained by The Associated Press.

Some terror warnings could be withheld from the public entirely if announcing a threat would risk exposing an intelligence operation or a current investigation, according to the government’s confidential plan.

Like a carton of milk, the new terror warnings will each come with a stamped expiration date.

Specific and limited are good. Twitter and Facebook: I’m not so sure.

But what could go wrong?

An errant keystroke touched off a brief panic Thursday at the University of Illinois at Urbana-Champaign when an emergency message accidentally was sent out saying an “active shooter” was on campus.

The first message was sent on the university’s emergency alert system at 10:40 a.m., reaching 87,000 cellphones and email addresses, according to the university.

The university corrected the false alarm about 12 minutes later and said the alert was caused when a worker updating the emergency messaging system inadvertently sent the message rather than saving it.

The emails are designed to go out quickly in the event of an emergency, so the false alarm could not be canceled before it went out, the university said.

Posted on April 8, 2011 at 1:23 PM • 34 Comments

Pinpointing a Computer to Within 690 Meters

This is impressive, and scary:

Every computer connected to the web has an internet protocol (IP) address, but there is no simple way to map this to a physical location. The current best system can be out by as much as 35 kilometres.

Now, Yong Wang, a computer scientist at the University of Electronic Science and Technology of China in Chengdu, and colleagues at Northwestern University in Evanston, Illinois, have used businesses and universities as landmarks to achieve much higher accuracy.

These organisations often host their websites on servers kept on their premises, meaning the servers’ IP addresses are tied to their physical location. Wang’s team used Google Maps to find both the web and physical addresses of such organisations, providing them with around 76,000 landmarks. By comparison, most other geolocation methods only use a few hundred landmarks specifically set up for the purpose.

The new method zooms in through three stages to locate a target computer. The first stage measures the time it takes to send a data packet to the target and converts it into a distance—a common geolocation technique that narrows the target’s possible location to a radius of around 200 kilometres.

Wang and colleagues then send data packets to the known Google Maps landmark servers in this large area to find which routers they pass through. When a landmark machine and the target computer have shared a router, the researchers can compare how long a packet takes to reach each machine from the router; converted into an estimate of distance, this time difference narrows the search down further. “We shrink the size of the area where the target potentially is,” explains Wang.

Finally, they repeat the landmark search at this more fine-grained level: comparing delay times once more, they establish which landmark server is closest to the target. The result can never be entirely accurate, but it’s much better than trying to determine a location by converting the initial delay into a distance or the next best IP-based method. On average their method gets to within 690 metres of the target and can be as close as 100 metres—good enough to identify the target computer’s location to within a few streets.
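Here is a toy sketch of the two delay tricks as I read the description; the propagation speed, delays, and landmark names are all invented for illustration.

    # Stage 1: bound the search radius from a round-trip time.
    # Stages 2-3: among landmarks behind a router shared with the target,
    # pick the one whose delay from that router best matches the target's.
    KM_PER_MS = 100.0  # assumed effective propagation speed in fiber

    def distance_upper_bound(rtt_ms):
        return (rtt_ms / 2.0) * KM_PER_MS  # one-way delay times speed

    def closest_landmark(target_delay_ms, landmarks):
        return min(landmarks,
                   key=lambda lm: abs(lm["delay_ms"] - target_delay_ms))

    landmarks = [{"name": "university-a", "delay_ms": 0.9},
                 {"name": "business-b", "delay_ms": 2.4}]
    print(distance_upper_bound(4.0))         # 200.0 km search radius
    print(closest_landmark(1.1, landmarks))  # university-a matches best

Because the landmarks’ physical addresses are known from Google Maps, the best-matching landmark pins the target to that landmark’s neighborhood.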

Posted on April 8, 2011 at 6:22 AM • 48 Comments

Detecting Cheaters

Our brains are specially designed to deal with cheating in social exchanges. The evolutionary psychology explanation is that we evolved brain heuristics for the social problems that our prehistoric ancestors had to deal with. Once humans became good at cheating, they then had to become good at detecting cheating—otherwise, the social group would fall apart.

Perhaps the most vivid demonstration of this can be seen with variations on what’s known as the Wason selection task, named after the psychologist who first studied it. Back in the 1960s, it was a test of logical reasoning; today, it’s used more as a demonstration of evolutionary psychology. But before we get to the experiment, let’s get into the mathematical background.

Propositional calculus is a system for deducing conclusions from true premises. It uses variables for statements because the logic works regardless of what the statements are. College courses on the subject are taught by either the mathematics or the philosophy department, and they’re not generally considered to be easy classes. Two particular rules of inference are relevant here: modus ponens and modus tollens. Both allow you to reason from a statement of the form, "if P, then Q." (If Socrates was a man, then Socrates was mortal. If you are to eat dessert, then you must first eat your vegetables. If it is raining, then Gwendolyn had Crunchy Wunchies for breakfast. That sort of thing.) Modus ponens goes like this:

If P, then Q. P. Therefore, Q.

In other words, if you assume the conditional rule is true, and if you assume the antecedent of that rule is true, then the consequent is true. So,

If Socrates was a man, then Socrates was mortal. Socrates was a man. Therefore, Socrates was mortal.

Modus tollens is more complicated:

If P, then Q. Not Q. Therefore, not P.

If Socrates was a man, then Socrates was mortal. Socrates was not mortal. Therefore, Socrates was not a man.

This makes sense: if Socrates was not mortal, then he was a demigod or a stone statue or something.

Both are valid forms of logical reasoning. If you know "if P, then Q" and "P," then you know "Q." If you know "if P, then Q" and "not Q," then you know "not P." (The other two similar forms don’t work. If you know "if P, then Q" and "Q," you don’t know anything about "P." And if you know "if P, then Q" and "not P," then you don’t know anything about "Q.")
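You don’t have to take my word for any of this; a brute-force check over all four truth assignments of P and Q confirms which forms are valid:

    # Test each inference pattern as a formula--(premises) implies
    # (conclusion)--that must hold for every truth assignment.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    patterns = {
        "modus ponens": lambda p, q: implies(implies(p, q) and p, q),
        "modus tollens": lambda p, q: implies(implies(p, q) and not q, not p),
        "affirming the consequent": lambda p, q: implies(implies(p, q) and q, p),
        "denying the antecedent": lambda p, q: implies(implies(p, q) and not p, not q),
    }

    for name, formula in patterns.items():
        valid = all(formula(p, q) for p, q in product([True, False], repeat=2))
        print(name, "is", "valid" if valid else "invalid")

Running it prints that modus ponens and modus tollens are valid and that the other two patterns are not.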

If I explained this in front of an audience full of normal people, not mathematicians or philosophers, most of them would be lost. Unsurprisingly, they would have trouble either explaining the rules or using them properly. Just ask any grad student who has had to teach a formal logic class; people have trouble with this.

Consider the Wason selection task. Subjects are presented with four cards next to each other on a table. Each card represents a person, with each side listing some statement about that person. The subject is then given a general rule and asked which cards he would have to turn over to ensure that the four people satisfied that rule. For example, the general rule might be, "If a person travels to Boston, then he or she takes a plane." The four cards might correspond to travelers and have a destination on one side and a mode of transport on the other. On the side facing the subject, they read: "went to Boston," "went to New York," "took a plane," and "took a car." Formal logic states that the rule is violated if someone goes to Boston without taking a plane. Translating into propositional calculus, there’s the general rule: if P, then Q. The four cards are "P," "not P," "Q," and "not Q." To verify that "if P, then Q" is a valid rule, you have to verify modus ponens by turning over the "P" card and making sure that the reverse says "Q." To verify modus tollens, you turn over the "not Q" card and make sure that the reverse doesn’t say "P."

Shifting back to the example, you need to turn over the "went to Boston" card to make sure that person took a plane, and you need to turn over the "took a car" card to make sure that person didn’t go to Boston. You don’t—as many people think—need to turn over the "took a plane" card to see if it says "went to Boston" because you don’t care. The person might have been flying to Boston, New York, San Francisco, or London. The rule only says that people going to Boston fly; it doesn’t break the rule if someone flies elsewhere.
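In code, the correct strategy is mechanical: flip the card showing the antecedent and the card showing the negated consequent, and nothing else. A sketch, using the faces from the example:

    # Only the "P" card (modus ponens) and the "not Q" card (modus
    # tollens) need to be turned over to verify the rule.
    def cards_to_flip(visible_faces, antecedent, negated_consequent):
        return [face for face in visible_faces
                if face in (antecedent, negated_consequent)]

    faces = ["went to Boston", "went to New York", "took a plane", "took a car"]
    print(cards_to_flip(faces,
                        antecedent="went to Boston",
                        negated_consequent="took a car"))
    # ['went to Boston', 'took a car']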

If you’re confused, you aren’t alone. When Wason first did this study, fewer than 10 percent of his subjects got it right. Others replicated the study and got similar results. The best result I’ve seen is "fewer than 25 percent." Training in formal logic doesn’t seem to help very much. Neither does ensuring that the example is drawn from events and topics with which the subjects are familiar. People are just bad at the Wason selection task. They also tend to take college logic classes only when required to.

This isn’t just another "math is hard" story. There’s a point to this. The one variation of this task that people are surprisingly good at getting right is when the rule has to do with cheating and privilege. For example, change the four cards to children in a family—"gets dessert," "doesn’t get dessert," "ate vegetables," and "didn’t eat vegetables"—and change the rule to "If a child gets dessert, he or she ate his or her vegetables." Many people—65 to 80 percent—get it right immediately. They turn over the "gets dessert" card, making sure the child ate his vegetables, and they turn over the "didn’t eat vegetables" card, making sure the child didn’t get dessert. Another way of saying this is that they turn over the "benefit received" card to make sure the cost was paid. And they turn over the "cost not paid" card to make sure no benefit was received. They look for cheaters.

The difference is startling. Subjects don’t need formal logic training. They don’t need math or philosophy. When asked to explain their reasoning, they say things like the answer "popped out at them."

Researchers, particularly evolutionary psychologists Leda Cosmides and John Tooby, have run this experiment with a variety of wordings and settings and on a variety of subjects: adults in the US, UK, Germany, Italy, France, and Hong Kong; Ecuadorian schoolchildren; and Shiwiar tribesmen in Ecuador. The results are the same: people are bad at the Wason selection task, except when the wording involves cheating.

In the world of propositional calculus, there’s absolutely no difference between a rule about traveling to Boston by plane and a rule about eating vegetables to get dessert. But in our brains, there’s an enormous difference: the first is an arbitrary rule about the world, and the second is a rule of social exchange. It’s of the form "If you take Benefit B, you must first satisfy Requirement R."

Our brains are optimized to detect cheaters in a social exchange. We’re good at it. Even as children, we intuitively notice when someone gets a benefit he didn’t pay the cost for. Those of us who grew up with a sibling have experienced how one child not only knew when the other cheated, but felt compelled to announce it to the rest of the family. As adults, we might have learned that life isn’t fair, but we still know who among our friends cheats in social exchanges. We know who doesn’t pay his or her fair share of a group meal. At an airport, we might not notice the rule "If a plane is flying internationally, then it boards 15 minutes earlier than domestic flights." But we’ll certainly notice who breaks the "If you board first, then you must be a first-class passenger" rule.

This essay was originally published in IEEE Security & Privacy, and is an excerpt from the draft of my new book.

EDITED TO ADD (4/14): Another explanation of the Wason Selection Task, with a possible correlation with psychopathy.

Posted on April 7, 2011 at 1:10 PM • 84 Comments

Optical Stun Ray

It’s been patented; no idea if it actually works.

…newly patented device can render an assailant helpless with a brief flash of high-intensity light. It works by overloading the neural networks connected to the retina, saturating the target’s world in a blinding pool of white light. “It’s the inverse of blindness—the technical term is a loss of contrast sensitivity,” says Todd Eisenberg, the engineer who invented the device. “The typical response is for the person to freeze. Law enforcement can easily walk up and apprehend [the suspect].”

Posted on April 7, 2011 at 6:29 AM • 57 Comments

Counterterrorism Security Cost-Benefit Analysis

“Terror, Security, and Money: Balancing the Risks, Benefits, and Costs of Homeland Security,” by John Mueller and Mark Stewart:

Abstract: The cumulative increase in expenditures on US domestic homeland security over the decade since 9/11 exceeds one trillion dollars. It is clearly time to examine these massive expenditures applying risk assessment and cost-benefit approaches that have been standard for decades. Thus far, officials do not seem to have done so and have engaged in various forms of probability neglect by focusing on worst case scenarios; adding, rather than multiplying, the probabilities; assessing relative, rather than absolute, risk; and inflating terrorist capacities and the importance of potential terrorist targets. We find that enhanced expenditures have been excessive: to be deemed cost-effective in analyses that substantially bias the consideration toward the opposite conclusion, they would have to deter, prevent, foil, or protect against 1,667 otherwise successful Times-Square type attacks per year, or more than four per day. Although there are emotional and political pressures on the terrorism issue, this does not relieve politicians and bureaucrats of the fundamental responsibility of informing the public of the limited risk that terrorism presents and of seeking to expend funds wisely. Moreover, political concerns may be over-wrought: restrained reaction has often proved to be entirely acceptable politically.

Posted on April 6, 2011 at 6:03 AM • 34 Comments

Epsilon Hack

I have no idea why the Epsilon hack is getting so much press.

Yes, millions of names and e-mail addresses might have been stolen. Yes, other customer information might have been stolen, too. Yes, this personal information could be used to create more personalized and better targeted phishing attacks.

So what? These sorts of breaches happen all the time, and even more personal information is stolen.

I get that over 50 companies were affected, and some of them are big names. But the hack of the century? Hardly.

Posted on April 5, 2011 at 12:58 PM • 57 Comments

Reducing Bribery by Legalizing the Giving of Bribes

Here’s some very clever thinking from India’s chief economic adviser. In order to reduce bribery, he proposes legalizing the giving of bribes:

Under the current law, discussed in some detail in the next section, once a bribe is given, the bribe giver and the bribe taker become partners in crime. It is in their joint interest to keep this fact hidden from the authorities and to be fugitives from the law, because, if caught, both expect to be punished. Under the kind of revised law that I am proposing here, once a bribe is given and the bribe giver collects whatever she is trying to acquire by giving the money, the interests of the bribe taker and bribe giver become completely orthogonal to each other. If caught, the bribe giver will go scot free and will be able to collect his bribe money back. The bribe taker, on the other hand, loses the booty of bribe and faces a hefty punishment.

Hence, in the post-bribe situation it is in the interest of the bribe giver to have the bribe taker caught. Since the bribe giver will cooperate with the law, the chances are much higher of the bribe taker getting caught. In fact, it will be in the interest of the bribe giver to have the taker get caught, since that way the bribe giver can get back the money she gave as bribe. Since the bribe taker knows this, he will be much less inclined to take the bribe in the first place. This establishes that there will be a drop in the incidence of bribery.

He notes that this only works for a certain class of bribes: when you have to bribe officials for something you are already entitled to receive. It won’t work for any long-term bribery relationship, or in any situation where the briber would otherwise not want the bribe to become public.
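The incentive shift shows up clearly in a toy expected-payoff calculation. All the numbers here are invented; the only point is the sign change:

    # Expected payoff to the bribe taker: keep the bribe if not caught,
    # pay the penalty if caught. Under the proposed asymmetric law, the
    # giver turns informer, so the probability of being caught jumps.
    def taker_payoff(bribe, penalty, p_caught):
        return (1 - p_caught) * bribe - p_caught * penalty

    print(taker_payoff(bribe=100, penalty=500, p_caught=0.05))  # 70.0
    print(taker_payoff(bribe=100, penalty=500, p_caught=0.60))  # -260.0

Once the expected payoff goes negative, a rational official refuses the bribe in the first place, which is exactly the drop in incidence the proposal predicts.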

News article.

Posted on April 5, 2011 at 8:46 AM • 50 Comments

Ebook Fraud

Interesting post—and discussion—on Making Light about ebook fraud. Currently there are two types of fraud. The first is content farming, discussed in these two interesting blog posts. People are creating automatically generated content, web-collected content, or fake content, turning it into a book, and selling it on an ebook site like Amazon.com. Then they use multiple identities to give it good reviews. (If it gets a bad review, the scammer just relists the same content under a new name.) That second blog post contains a screen shot of something called “Autopilot Kindle Cash,” which promises to teach people how to post dozens of ebooks to Amazon.com per day.

The second type of fraud is stealing a book and selling it as an ebook. So someone could scan a real book and sell it on an ebook site, even though he doesn’t own the copyright. It could be a book that isn’t already available as an ebook, or it could be a “low cost” version of a book that is already available. Amazon doesn’t seem particularly motivated to deal with this sort of fraud. And it too is suitable for automation.

Broadly speaking, there’s nothing new here. All complex ecosystems have parasites, and every open communications system we’ve ever built gets overrun by scammers and spammers. Far from making editors superfluous, systems that democratize publishing have an even greater need for editors. The solutions are not new, either: reputation-based systems, trusted recommenders, white lists, takedown notices. Google has implemented a bunch of security countermeasures against content farming; ebook sellers should implement them as well. It’ll be interesting to see what particular sort of mix works in this case.

Posted on April 4, 2011 at 9:18 AM • 34 Comments

34 SCADA Vulnerabilities Published

It’s hard to tell how serious this is.

Computer security experts who examined the code say the vulnerabilities are not highly dangerous on their own, because they would mostly just allow an attacker to crash a system or siphon sensitive data, and are targeted at operator viewing platforms, not the backend systems that directly control critical processes. But experts caution that the vulnerabilities could still allow an attacker to gain a foothold on a system to find additional security holes that could affect core processes.

Posted on April 1, 2011 at 6:58 AM • 11 Comments
