Blog: August 2009 Archives

On London's Surveillance Cameras

A recent report has concluded that London’s surveillance cameras have solved one crime per thousand cameras per year.

David Davis MP, the former shadow home secretary, said: “It should provoke a long overdue rethink on where the crime prevention budget is being spent.”

He added: “CCTV leads to massive expense and minimum effectiveness.

“It creates a huge intrusion on privacy, yet provides little or no improvement in security.”

Also:

Earlier this year separate research commissioned by the Home Office suggested that the cameras had done virtually nothing to cut crime, but were most effective in preventing vehicle crimes in car parks.

A report by a House of Lords committee also said that £500 million was spent on new cameras in the 10 years to 2006, money which could have been spent on street lighting or neighbourhood crime prevention initiatives.

A large proportion of the cash has gone to London, where an estimated £200 million so far has been spent on the cameras. This suggests that each crime has cost £20,000 to detect.

I haven’t seen the report, but I know it’s hard to figure out when a crime has been “solved” by a surveillance camera. To me, the crime has to have been unsolvable without the cameras. Repeatedly I see pro-camera lobbyists pointing to the surveillance-camera images that identified the 7/7 London Transport bombers, but it is obvious that they would have been identified even without the cameras.

And it would really help my understanding of that £20,000 figure (I assume it is calculated from £200 million for the cameras times 1 in 1000 cameras used to solve a crime per year divided by ten years) if I knew what sorts of crimes the cameras “solved.” If the £200 million solved 10,000 murders, it might very well be a good security trade-off. But my guess is that most of the crimes were of a much lower level.
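For what it’s worth, here’s the back-of-the-envelope version of that calculation, working backwards from the article’s figures (a rough sketch of my assumed derivation, not the report’s method; the implied camera count is simply what falls out of the arithmetic):

    # Back-of-the-envelope check of the quoted figures (not the report's method).
    camera_spend_gbp = 200e6            # estimated spend on London's cameras
    cost_per_solved_crime = 20_000      # figure quoted in the article
    solve_rate_per_camera_year = 1 / 1000
    years = 10                          # the Lords report's ten-year window

    crimes_solved = camera_spend_gbp / cost_per_solved_crime      # 10,000 crimes
    crimes_per_year = crimes_solved / years                       # 1,000 per year
    implied_cameras = crimes_per_year / solve_rate_per_camera_year

    print(f"{crimes_solved:,.0f} crimes over {years} years implies "
          f"roughly {implied_cameras:,.0f} cameras")               # ~1,000,000

If that implied camera count is anywhere near the real number, the £20,000 figure is just this arithmetic, and it tells us nothing about what kinds of crimes were solved.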

Cameras are largely security theater:

A Home Office spokeswoman said CCTVs “help communities feel safer”.

Posted on August 31, 2009 at 5:59 AM • 88 Comments

Marine Worms with Glowing Bombs

More security stories from the natural world:

During chase scenes, movie protagonists often make their getaway by releasing some sort of decoy to cover their escape or distract their pursuer. But this tactic isn’t reserved for action heroes—some deep-sea animals also evade their predators by releasing decoys—glowing ones.

Karen Osborn from the Scripps Institution of Oceanography has discovered seven new species of closely related marine worms (annelids) that use this trick. Each species packs up to four pairs of “bombs” near their heads—simple, fluid-filled globes that the worms can detach at will. When released, the “bombs” give off an intense light that lasts for several seconds.

My two previous posts on the topic.

Posted on August 28, 2009 at 6:12 AM • 5 Comments

Banning Beer Glasses in Pubs

Not beer, just the glasses:

The Home Office has commissioned a new design, in an attempt to stop glasses being used as weapons.

Official figures show 5,500 people are attacked with glasses and bottles every year in England and Wales.

The British Beer and Pub Association said it did not want the new plastic glasses to be made compulsory.

I don’t think this will go anywhere, but the sheer idiocy is impressive. Reminds me of the call to ban pointy knives. That recommendation also came out of the UK. What’s going on over there?

Posted on August 27, 2009 at 1:44 PM • 110 Comments

Stealing 130 Million Credit Card Numbers

Someone has been charged with stealing 130 million credit card numbers.

Yes, it’s a lot, but that’s the sort of quantity credit card numbers come in. They come by the millions, in large database files. Even if you only want ten, you have to steal millions. I’m sure every one of us has a credit card in our wallet whose number has been stolen. It’ll probably never be used for fraudulent purposes, but it’s in some stolen database somewhere.

Years ago, when giving advice on how to avoid identity theft, I would tell people to shred their trash. Today, that advice is completely obsolete. No one steals credit card numbers one by one out of the trash when they can be stolen by the millions from merchant databases.

Posted on August 27, 2009 at 7:02 AM • 46 Comments

Small Business Identity Theft and Fraud

The sorts of crimes we’ve been seeing perpetrated against individuals are starting to be perpetrated against small businesses:

In July, a school district near Pittsburgh sued to recover $700,000 taken from it. In May, a Texas company was robbed of $1.2 million. An electronics testing firm in Baton Rouge, La., said it was bilked of nearly $100,000.

In many cases, the advisory warned, the scammers infiltrate companies in a similar fashion: They send a targeted e-mail to the company’s controller or treasurer, a message that contains either a virus-laden attachment or a link that—when opened—surreptitiously installs malicious software designed to steal passwords. Armed with those credentials, the crooks then initiate a series of wire transfers, usually in increments of less than $10,000 to avoid banks’ anti-money-laundering reporting requirements.

The alert states that these scams typically rely on help from “money mules”—willing or unwitting individuals in the United States—often hired by the criminals via popular Internet job boards. Once enlisted, the mules are instructed to set up bank accounts, withdraw the fraudulent deposits and then wire the money to fraudsters, the majority of which are in Eastern Europe, according to the advisory.

This has the potential to grow into a very big problem. Even worse:

Businesses do not enjoy the same legal protections as consumers when banking online. Consumers typically have up to 60 days from the receipt of a monthly statement to dispute any unauthorized charges.

In contrast, companies that bank online are regulated under the Uniform Commercial Code, which holds that commercial banking customers have roughly two business days to spot and dispute unauthorized activity if they want to hold out any hope of recovering unauthorized transfers from their accounts.

And, of course, the security externality means that the banks care much less:

“The banks spend a lot of money on protecting consumer customers because they owe money if the consumer loses money,” Litan said. “But the banks don’t spend the same resources on the corporate accounts because they don’t have to refund the corporate losses.”

Posted on August 26, 2009 at 5:46 AM • 49 Comments

Actual Security Theater

As part of their training, federal agents engage in mock exercises in public places. Sometimes, innocent civilians get involved.

Every day, as Washingtonians go about their overt lives, the FBI, CIA, Capitol Police, Secret Service and U.S. Marshals Service stage covert dramas in and around the capital where they train. Officials say the scenarios help agents and officers integrate the intellectual, physical and emotional aspects of classroom instruction. Most exercises are performed inside restricted compounds. But they also unfold in public parks, suburban golf clubs and downtown transit stations.

Curtain up on threat theater—a growing, clandestine art form. Joseph Persichini, Jr., assistant director of the FBI’s Washington field office, says, “What better way to adapt agents or analysts to cultural idiosyncrasies than role play?”

For the public, there are rare, startling peeks: At a Holiday Inn, a boy in water wings steps out of his seventh floor room into a stampede of federal agents; at a Bowie retirement home, an elderly woman panics as a role-player collapses, believing his seizure is real; at a county museum, a father sweeps his daughter into his arms, running for the exit, while a raving, bearded man resists arrest.

EDITED TO ADD (9/11): It happened in D.C., in the Potomac River, with the Coast Guard.

Posted on August 25, 2009 at 6:43 AM • 71 Comments

Non-Randomness in Coin Flipping

It turns out that flipping a coin has all sorts of non-randomness:

Here are the broad strokes of their research:

  1. If the coin is tossed and caught, it has about a 51% chance of landing on the same face it started with. (If it starts out as heads, there’s a 51% chance it will end as heads.)
  2. If the coin is spun, rather than tossed, it can have a much-larger-than-50% chance of ending with the heavier side down. Spun coins can exhibit “huge bias” (some spun coins will fall tails-up 80% of the time).
  3. If the coin is tossed and allowed to clatter to the floor, this probably adds randomness.
  4. If the coin is tossed and allowed to clatter to the floor where it spins, as will sometimes happen, the above spinning bias probably comes into play.
  5. A coin will land on its edge about once in 6,000 throws, creating a flipistic singularity.
  6. The same initial coin-flipping conditions produce the same coin flip result. That is, there’s a certain amount of determinism to the coin flip.
  7. A more robust coin toss (more revolutions) decreases the bias.

The paper.
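To get a feel for how small that first bias is, here is a minimal simulation (mine, not from the paper) of a tossed-and-caught coin with a 51% same-face probability:

    # Simulate point 1: a caught coin lands on its starting face 51% of the time.
    # Illustrative only; the 51% figure comes from the list above.
    import random

    def toss(p_same=0.51, start="heads"):
        """Return the face showing after one toss of a same-face-biased coin."""
        other = "tails" if start == "heads" else "heads"
        return start if random.random() < p_same else other

    def same_face_fraction(n_tosses, p_same=0.51):
        return sum(toss(p_same) == "heads" for _ in range(n_tosses)) / n_tosses

    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9} tosses: fraction on starting face = {same_face_fraction(n):.4f}")

With a few hundred tosses the bias disappears into the noise, which is why nobody notices it in practice; it only matters to someone who can aggregate, or bet on, a very large number of flips.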

Posted on August 24, 2009 at 7:12 AM • 59 Comments

Modeling Zombie Outbreaks

The math doesn’t look good: “When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection.”

An outbreak of zombies infecting humans is likely to be disastrous, unless extremely aggressive tactics are employed against the undead. While aggressive quarantine may eradicate the infection, this is unlikely to happen in practice. A cure would only result in some humans surviving the outbreak, although they will still coexist with zombies. Only sufficiently frequent attacks, with increasing force, will result in eradication, assuming the available resources can be mustered in time.

Furthermore, these results assumed that the timescale of the outbreak was short, so that the natural birth and death rates could be ignored. If the timescale of the outbreak increases, then the result is the doomsday scenario: an outbreak of zombies will result in the collapse of civilisation, with every human infected, or dead. This is because human births and deaths will provide the undead with a limitless supply of new bodies to infect, resurrect and convert. Thus, if zombies arrive, we must act quickly and decisively to eradicate them before they eradicate us.

The key difference between the models presented here and other models of infectious disease is that the dead can come back to life. Clearly, this is an unlikely scenario if taken literally, but possible real-life applications may include allegiance to political parties, or diseases with a dormant infection.

This is, perhaps unsurprisingly, the first mathematical analysis of an outbreak of zombie infection. While the scenarios considered are obviously not realistic, it is nevertheless instructive to develop mathematical models for an unusual outbreak. This demonstrates the flexibility of mathematical modelling and shows how modelling can respond to a wide variety of challenges in ‘biology’.

In summary, a zombie outbreak is likely to lead to the collapse of civilisation, unless it is dealt with quickly. While aggressive quarantine may contain the epidemic, or a cure may lead to coexistence of humans and zombies, the most effective way to contain the rise of the undead is to hit hard and hit often. As seen in the movies, it is imperative that zombies are dealt with quickly, or else we are all in a great deal of trouble.
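For the curious, here is a minimal sketch of a basic susceptible-zombie-removed (SZR) model along the lines described in the paper, on the short timescale (births and natural deaths ignored); the parameter values are illustrative choices of mine, not the paper’s:

    # Euler-method sketch of a basic SZR model; parameters are illustrative.
    def szr(S=500.0, Z=1.0, R=0.0, beta=0.0095, alpha=0.005, zeta=0.02,
            dt=0.01, days=30):
        steps = int(days / dt)
        for _ in range(steps):
            dS = -beta * S * Z                              # humans bitten
            dZ = beta * S * Z + zeta * R - alpha * S * Z    # new zombies + resurrections - destroyed
            dR = alpha * S * Z - zeta * R                   # destroyed zombies, who may resurrect
            S, Z, R = S + dS * dt, Z + dZ * dt, R + dR * dt
        return S, Z, R

    S, Z, R = szr()
    print(f"After 30 days: {S:.0f} humans, {Z:.0f} zombies, {R:.0f} removed")
    # With the transmission rate (beta) above the destruction rate (alpha), the
    # human population collapses -- the paper's doomsday scenario.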

Posted on August 24, 2009 at 5:57 AM • 47 Comments

Hacking the Assa Solo Lock

Marc Weber Tobias again:

The new Assa Solo was recently introduced in Europe, and we believe it is the latest Cliq design. We were provided with samples and were able to show a reporter for Wired’s Threat Level how to completely circumvent the electronic credentials in less than thirty seconds, which she easily accomplished. This is the latest and most current example of a failure in security engineering at Assa.

[…]

In response to demonstrations and our disclosures about the bypass of Assa Cliq locks at Defcon 17, the product development manager of Assa in the U.S. told Wired Magazine that “From what I know of the CLIQ technology it can’t be done,” … “And until I’ve seen it done, it can’t be done.”

We believe this statement typifies precisely the problem at Assa Abloy companies: a failure of imagination. It prompted our research and subsequent discovery of multiple vulnerabilities in Cliq, Logic, and NexGen locks. It is this attitude that will continue to allow us to break locks that are represented as the ultimate in security by these companies, and which often provide a false sense of security to the locksmiths and customers that rely upon these products.

Me on locks and lockpicking.

Posted on August 21, 2009 at 6:03 AM • 27 Comments

Developments in Lie Detection

Interesting:

Scientists looking for better ways to detect lies have found a promising one: increasing suspects’ “cognitive load.” For a host of reasons, their theory goes, lying is more mentally taxing than telling the truth. Performing an extra task while lying or telling the truth should therefore affect the liars more.

To test this idea, deception researchers led by psychologist Aldert Vrij of the University of Portsmouth in England asked one group to lie convincingly and another group to tell the truth about a staged theft scenario that only the truth-tellers had experienced. A second pair of groups had to do the same but with a crucial twist: both the liars and the truth-tellers had to maintain eye contact while telling their stories.

Later, as researchers watched videotapes of the suspects’ accounts, they tallied verbal signs of cognitive load (such as fewer spatial details in the suspects’ stories) and nonverbal ones (such as fewer eyeblinks). The eyeblinks are particularly interesting because whereas rapid blinking suggests nervousness, fewer blinks are a sign of cognitive load, Vrij explains—and contrary to what police are taught, liars tend to blink less. Although the effect was subtle, the instruction to maintain eye contact did magnify the differences between the truth-tellers and the liars.

So do these differences actually make it easier for others to distinguish liars from truth-tellers? They do—but although students watching the videos had an easier time spotting a liar in the eye-contact condition, their accuracy rates were still poor. Any group differences between liars and truth-tellers were dwarfed by differences between individual participants. (For example, some people blink far less than others whether or not they are lying—and some are simply better able to carry a higher cognitive load.)

Posted on August 20, 2009 at 6:59 AM • 47 Comments

The Continuing Cheapening of the Word "Terrorism"

“Terroristic threats”?

A pickup truck driver is accused of trying to run over a bicyclist and then coming after him brandishing an ax after a road-rage incident in Burnsville last weekend.

The driver, Mitchel J. Pieper, 32, of Burnsville, was charged in Dakota County District Court on Tuesday with making terroristic threats, a felony, in connection with the altercation Saturday. The bicyclist was not seriously hurt.

Seems like a normal threat to me. Or assault, with intent to do bodily harm. What’s wrong with those criminal statutes?

Let’s save the word “terrorism” for things that actually are terrorism.

Posted on August 19, 2009 at 1:08 PM • 61 Comments

Fabricating DNA Evidence

This isn’t good:

The scientists fabricated blood and saliva samples containing DNA from a person other than the donor of the blood and saliva. They also showed that if they had access to a DNA profile in a database, they could construct a sample of DNA to match that profile without obtaining any tissue from that person.

[…]

The planting of fabricated DNA evidence at a crime scene is only one implication of the findings. A potential invasion of personal privacy is another.

Using some of the same techniques, it may be possible to scavenge anyone’s DNA from a discarded drinking cup or cigarette butt and turn it into a saliva sample that could be submitted to a genetic testing company that measures ancestry or the risk of getting various diseases.

The paper.

EDITED TO ADD (8/19): A better article.

Posted on August 19, 2009 at 6:57 AM • 54 Comments

Movie-Plot Threat Alert: Robot Suicide Bombers

Let’s all be afraid:

But it adds: “Robots that effectively mimic human appearance and movements may be used as human proxies.”

It raised the prospects of terrorists using robots to plant and detonate bombs or even replacing human suicide bombers.

A Home Office spokeswoman said: “This strategy looks at how technology might develop in future.

“Clearly it is important that we understand how those wishing us harm might use such technology in future so we can stay one step ahead.”

The document also warns that nanotechnology will help accelerate development of materials for future explosives while advances in fabrics will “significantly” improve camouflage and protection.

I’m sure I’ve seen this stuff in movies.

Posted on August 18, 2009 at 6:16 AM • 84 Comments

Flash Cookies

Flash has the equivalent of cookies, and they’re hard to delete:

Unlike traditional browser cookies, Flash cookies are relatively unknown to web users, and they are not controlled through the cookie privacy controls in a browser. That means even if a user thinks they have cleared their computer of tracking objects, they most likely have not.

What’s even sneakier?

Several services even use the surreptitious data storage to reinstate traditional cookies that a user deleted, which is called ‘re-spawning’ in homage to video games where zombies come back to life even after being “killed,” the report found. So even if a user gets rid of a website’s tracking cookie, that cookie’s unique ID will be assigned back to a new cookie again using the Flash data as the “backup.”
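The re-spawning trick itself is simple. Here is a conceptual sketch in Python of the logic described above; the real mechanism runs as Flash and JavaScript in the browser, and the names here are hypothetical:

    # Conceptual sketch of cookie "re-spawning"; not any vendor's actual code.
    def get_tracking_id(http_cookies: dict, flash_lso: dict) -> str:
        """Return a persistent tracking ID, resurrecting it from Flash local
        storage if the user has deleted the browser cookie."""
        if "tracking_id" in http_cookies:
            flash_lso["tracking_id"] = http_cookies["tracking_id"]   # refresh the backup
            return http_cookies["tracking_id"]
        if "tracking_id" in flash_lso:                               # cookie was deleted...
            http_cookies["tracking_id"] = flash_lso["tracking_id"]   # ...so re-spawn it
            return flash_lso["tracking_id"]
        new_id = "user-1234"                        # hypothetical freshly issued ID
        http_cookies["tracking_id"] = new_id
        flash_lso["tracking_id"] = new_id
        return new_id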

Posted on August 17, 2009 at 6:36 AM • 78 Comments

EFF on Locational Privacy

Excellent paper: “On Locational Privacy, and How to Avoid Losing it Forever.”

Some threats to locational privacy are overt: it’s evident how cameras backed by face-recognition software could be misused to track people and record their movements. In this document, we’re primarily concerned with threats to locational privacy that arise as a hidden side-effect of clearly useful location-based services.

We can’t stop the cascade of new location-based digital services. Nor would we want to—the benefits they offer are impressive. What urgently needs to change is that these systems need to be built with privacy as part of their original design. We can’t afford to have pervasive surveillance technology built into our electronic civic infrastructure by accident. We have the opportunity now to ensure that these dangers are averted.

Our contention is that the easiest and best solution to the locational privacy problem is to build systems which don’t collect the data in the first place. This sounds like an impossible requirement (how do we tell you when your friends are nearby without knowing where you and your friends are?) but in fact as we discuss below it is a reasonable objective that can be achieved with modern cryptographic techniques.

Modern cryptography actually allows civic data processing systems to be designed with a whole spectrum of privacy policies: ranging from complete anonymity to limited anonymity to support law enforcement. But we need to ensure that systems aren’t being built right at the zero-privacy, everything-is-recorded end of that spectrum, simply because that’s the path of easiest implementation.

I’ve already written about wholesale surveillance.

Posted on August 14, 2009 at 6:30 AM • 31 Comments

Man-in-the-Middle Trucking Attack

Clever:

For over three years the pair hacked into a Department of Transportation website called Safersys.org, which maintains a list of licensed interstate-trucking companies and brokers, according to an affidavit (.pdf) filed by a DOT investigator. There, they would temporarily change the contact information for a legitimate trucking company to an address and phone number under their control.

The men then took to the web-based “load boards” where brokers advertise cargo in need of transportation. They’d negotiate a deal, for example, to transport cargo from American Canyon, California, to Jessup, Maryland, for $3,500.

But instead of transporting the load, Lakes and Berkovich would outsource the job to another trucking company, the feds say, posing as the legitimate company whose identity they’d hijacked. Once the cargo was delivered, the men invoiced their customer and pocketed the funds. But when the company that actually drove the truck tried to get paid, they’d eventually discover that the firm who’d supposedly hired them didn’t know anything about it.

Actually, not so clever. I’m amazed it went on for three years. You’d think that more than a few of the subcontractors would pick up the phone and call the original customers—and they’d figure out what happened. Maybe there are just so many trucking companies, and so many people who need cargo shipped places, that they were able to hide for three years.

But this scheme was bound to unravel sooner or later. If the criminal middlemen had legitimately subcontracted the work and just pocketed the difference, they might have remained undiscovered forever. But that’s much less profit per contract.

Posted on August 13, 2009 at 5:09 AM • 33 Comments

Lockpicking and the Internet

Physical locks aren’t very good. They keep the honest out, but any burglar worth his salt can pick the common door lock pretty quickly.

It used to be that most people didn’t know this. Sure, we all watched television criminals and private detectives pick locks with an ease only found on television and thought it realistic, but somehow we still held onto the belief that our own locks kept us safe from intruders.

The Internet changed that.

First was the MIT Guide to Lockpicking, written by the late Bob (“Ted the Tool”) Baldwin. Then came Matt Blaze’s 2003 paper on breaking master key systems. After that, came a flood of lock picking information on the Net: opening a bicycle lock with a Bic pen, key bumping, and more. Many of these techniques were already known in both the criminal and locksmith communities. The locksmiths tried to suppress the knowledge, believing their guildlike secrecy was better than openness. But they’ve lost: Never has there been more public information about lock picking—or safecracking, for that matter.

Lock companies have responded with more complicated locks, and more complicated disinformation campaigns.

There seems to be a limit to how secure you can make a wholly mechanical lock, as well as a limit to how large and unwieldy a key the public will accept. As a result, there is increasing interest in other lock technologies.

As a security technologist, I worry that if we don’t fully understand these technologies and the new sorts of vulnerabilities they bring, we may be trading a flawed technology for an even worse one. Electronic locks are vulnerable to attack, often in new and surprising ways.

Start with keypads, more and more common on house doors. These have the benefit that you don’t have to carry a physical key around, but there’s the problem that you can’t give someone the key for a day and then take it away when that day is over. As such, the security decays over time—the longer the keypad is in use, the more people know how to get in. More complicated electronic keypads have a variety of options for dealing with this, but electronic keypads work only when the power is on, and battery-powered locks have their own failure modes. Plus, far too many people never bother to change the default entry code.

Keypads have other security failures, as well. I regularly see keypads where four of the 10 buttons are more worn than the other six. They’re worn from use, of course, and instead of 10,000 possible entry codes, I now have to try only 24.
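The 24 comes from assuming a four-digit code that uses each of the four worn digits exactly once: 4! orderings instead of 10,000 possible codes.

    # Four worn buttons, each used once in a four-digit code: 4! = 24 candidates.
    from itertools import permutations

    worn = "2580"                         # hypothetical worn buttons
    candidates = ["".join(p) for p in permutations(worn)]
    print(len(candidates))                # 24
    print(candidates[:3])                 # ['2580', '2508', '2850']

(If the code repeats a digit, the count changes, but it stays tiny compared with 10,000.)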

Fingerprint readers are another technology, but there are many known security problems with those. And there are operational problems, too: They’re hard to use in the cold or with sweaty hands; and leaving a key with a neighbor to let the plumber in starts having a spy-versus-spy feel.

Some companies are going even further. Earlier this year, Schlage launched a series of locks that can be opened either by a key, a four-digit code, or the Internet. That’s right: The lock is online. You can send the lock SMS messages or talk to it via a Website, and the lock can send you messages when someone opens it—or even when someone tries to open it and fails.

Sounds nifty, but putting a lock on the Internet opens up a whole new set of problems, none of which we fully understand. Even worse: Security is only as strong as the weakest link. Schlage’s system combines the inherent “pickability” of a physical lock, the new vulnerabilities of electronic keypads, and the hacking risks of being online. For most applications, that’s simply too much risk.

This essay previously appeared on DarkReading.com.

Posted on August 12, 2009 at 5:48 AM • 88 Comments

An Ethical Code for Intelligence Officers

August’s Communications of the ACM has an interesting article: “An Ethics Code for U.S. Intelligence Officers,” by former NSAers Brian Snow and Clint Brooks. The article is behind a paywall, but here’s the code:

Draft Statement of Ethics for the Intelligence Community

Preamble: Intelligence work may present exceptional or unusual ethical dilemmas beyond those of ordinary life. Ethical thinking and review should be a part of our day to day efforts; it can protect our nation’s and our agency’s integrity, improve the chances of mission success, protect us from the consequences of bad choices, and preserve our alliances. Therefore, we adhere to the following standards of professional ethics and behavior:

  1. First, do no harm to U.S. citizens or their rights under the Constitution.
  2. We uphold the Constitution and the Rule of Law; we are constrained by both the spirit and the letter of the laws of the United States.
  3. We will comply with all international human rights agreements that our nation has ratified.
  4. We will insist on clarification of ambiguities that arise between directives or law and the principles of this code. We will protect those within our institutions who call reasonable attention to wrongdoing.
  5. Expediency is not an excuse for misconduct.
  6. We are accountable for our decisions and actions. We support timely, rigorous processes that fix accountability to the responsible person.
  7. Statements we make to our clients, colleagues, overseers and the U.S. public will be true, and structured not to unnecessarily mislead or conceal.
  8. We will resolve difficult ethical choices in favor of constitutional requirements, the truth, and our fellow citizens.
  9. We will address the potential consequences of our actions in advance, especially the consequences of failure, discovery, and unintended or collateral consequences of success.
  10. We will not impose unnecessary risk on innocents.
  11. Although we may work in secrecy, we will work so that when our efforts become known, our fellow citizens will be proud of us and of our efforts.

It’s supposed to be for U.S. intelligence officers, but with one inconsequential modification it could be made international.

Posted on August 11, 2009 at 12:29 PM • 108 Comments

Self-Enforcing Protocols

There are several ways two people can divide a piece of cake in half. One way is to find someone impartial to do it for them. This works, but it requires another person. Another way is for one person to divide the piece, and the other person to complain (to the police, a judge, or his parents) if he doesn’t think it’s fair. This also works, but still requires another person—at least to resolve disputes. A third way is for one person to do the dividing, and for the other person to choose the half he wants.

That third way, known by kids, pot smokers, and everyone else who needs to divide something up quickly and fairly, is called cut-and-choose. People use it because it’s a self-enforcing protocol: a protocol designed so that neither party can cheat.

Self-enforcing protocols are useful because they don’t require trusted third parties. Modern systems for transferring money—checks, credit cards, PayPal—require trusted intermediaries like banks and credit card companies to facilitate the transfer. Even cash transfers require a trusted government to issue currency, and they take a cut in the form of seigniorage. Modern contract protocols require a legal system to resolve disputes. Modern commerce wasn’t possible until those systems were in place and generally trusted, and complex business contracts still aren’t possible in areas where there is no fair judicial system. Barter is a self-enforcing protocol: nobody needs to facilitate the transaction or resolve disputes. It just works.

Self-enforcing protocols are safer than other types because participants don’t gain an advantage from cheating. Modern voting systems are rife with the potential for cheating, but an open show of hands in a room—one that everyone in the room can count for himself—is self-enforcing. On the other hand, there’s no secret ballot, late voters are potentially subjected to coercion, and it doesn’t scale well to large elections. But there are mathematical election protocols that have self-enforcing properties, and some cryptographers have suggested their use in elections.

Here’s a self-enforcing protocol for determining property tax: the homeowner decides the value of the property and calculates the resultant tax, and the government can either accept the tax or buy the home for that price. Sounds unrealistic, but the Greek government implemented exactly that system for the taxation of antiquities. It was the easiest way to motivate people to accurately report the value of antiquities.

A VAT, or value-added tax, is a self-enforcing alternative to sales tax. Sales tax is collected on the entire value of the thing at the point of retail sale; both the customer and the storeowner want to cheat the government. But VAT is collected at every step between raw materials and the final customer; it’s levied on the difference between the price of the materials sold and the materials bought. Buyers want official receipts with as high a purchase price as possible, so each buyer along the chain keeps each seller honest. Yes, there’s still an incentive to cheat on the final sale to the customer, but the amount of tax collected at that point is much lower.
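A toy numeric example makes the incentive structure clear (the 20% rate and the prices are made up):

    # Toy VAT chain: each firm remits tax only on the value it adds, but the
    # receipts it demands from its supplier keep the supplier's declared price honest.
    RATE = 0.20
    chain = [("miller", 0, 100), ("baker", 100, 250), ("grocer", 250, 400)]

    total_vat = 0
    for firm, bought, sold in chain:
        vat_due = RATE * (sold - bought)          # tax on this step's value added
        total_vat += vat_due
        print(f"{firm:6s} buys at {bought:3d}, sells at {sold:3d}, remits {vat_due:.0f}")

    print(f"total VAT {total_vat:.0f} equals sales tax on the final price: {RATE * 400:.0f}")

The grocer could still under-report the final sale, but that only shaves off the tax on its own margin (here 30 of the 80), not the whole amount.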

Of course, self-enforcing protocols aren’t perfect. For example, someone in a cut-and-choose can punch the other guy and run away with the entire piece of cake. But perfection isn’t the goal here; the goal is to reduce cheating by taking away potential avenues of cheating. Self-enforcing protocols improve security not by implementing countermeasures that prevent cheating, but by leveraging economic incentives so that the parties don’t want to cheat.

One more self-enforcing protocol. Imagine a pirate ship that encounters a storm. The pirates are all worried about their gold, so they put their personal bags of gold in the safe. During the storm, the safe cracks open, and all the gold mixes up and spills out on the floor. How do the pirates determine who owns what? They each announce to the group how much gold they had. If the total of all the announcements matches what’s in the pile, it’s divided as people announced. If it’s different, then the captain keeps it all. I can think of all kinds of ways this can go wrong—the captain and one pirate can collude to throw off the total, for example—but it is self-enforcing against individual misreporting.
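As a sketch, the whole protocol fits in a few lines (the names and amounts are made up):

    # Minimal sketch of the pirates' protocol: matching claims are paid out,
    # any mismatch forfeits the whole pile to the captain.
    def divide_gold(pile, claims):
        """claims maps each pirate to the amount of gold they announce."""
        if sum(claims.values()) == pile:
            return claims                  # totals match: pay as announced
        return {"captain": pile}           # mismatch: captain keeps everything

    honest = {"anne": 30, "bart": 50, "cora": 20}
    print(divide_gold(100, honest))        # everyone gets what they announced
    greedy = dict(honest, bart=60)         # bart over-claims by 10
    print(divide_gold(100, greedy))        # {'captain': 100}: cheating gains nothing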

This essay originally appeared on ThreatPost.

EDITED TO ADD (8/12): Shotgun clauses are an example of a self-enforcing protocol.

Posted on August 11, 2009 at 6:15 AM • 68 Comments

Password Advice

Here’s some complicated advice on securing passwords that—I’ll bet—no one follows.

  • DO use a password manager such as those reviewed by Scott Dunn in his Sept. 18, 2008, Insider Tips column. Although Scott focused on free programs, I really like CallPod’s Keeper, a $15 utility that comes in Windows, Mac, and iPhone versions and allows you to keep all your passwords in sync. Find more information about the program and a download link for the 15-day free-trial version on the vendor’s site.

  • DO change passwords frequently. I change mine every six months or whenever I sign in to a site I haven’t visited in a long time. Don’t reuse old passwords. Password managers can assign expiration dates to your passwords and remind you when the passwords are about to expire.
  • DO keep your passwords secret. Putting them into a file on your computer, e-mailing them to others, or writing them on a piece of paper in your desk is tantamount to giving them away. If you must allow someone else access to an account, create a temporary password just for them and then change it back immediately afterward.

    No matter how much you may trust your friends or colleagues, you can’t trust their computers. If they need ongoing access, consider creating a separate account with limited privileges for them to use.

  • DON’T use passwords comprised of dictionary words, birthdays, family and pet names, addresses, or any other personal information. Don’t use repeat characters such as 111 or sequences like abc, qwerty, or 123 in any part of your password.
  • DON’T use the same password for different sites. Otherwise, someone who culls your Facebook or Twitter password in a phishing exploit could, for example, access your bank account.
  • DON’T allow your computer to automatically sign in on boot-up and thus use any automatic e-mail, chat, or browser sign-ins. Avoid using the same Windows sign-in password on two different computers.

  • DON’T use the “remember me” or automatic sign-in option available on many Web sites. Keep sign-ins under the control of your password manager instead.

  • DON’T enter passwords on a computer you don’t control—such as a friend’s computer—because you don’t know what spyware or keyloggers might be on that machine.

  • DON’T access password-protected accounts over open Wi-Fi networks—or any other network you don’t trust—unless the site is secured via https. Use a VPN if you travel a lot. (See Ian “Gizmo” Richards’ Dec. 11, 2008, Best Software column, “Connect safely over open Wi-Fi networks,” for Wi-Fi security tips.)
  • DON’T enter a password or even your account name in any Web page you access via an e-mail link. These are most likely phishing scams. Instead, enter the normal URL for that site directly into your browser, and proceed to the page in question from there.

I regularly break seven of those rules. How about you? (Here’s my advice on choosing secure passwords.)

Posted on August 10, 2009 at 6:57 AM • 102 Comments

Friday Squid Blogging: Humboldt Squid is "Timid"

Contrary to my previous blog entry on the topic, Humboldt squid are really timid:

Humboldt squid feed in surface waters at night, then retreat to great depths during daylight hours. “They spend the day 300 meters deep where oxygen levels are very low,” Seibel said. “We wanted to know how they deal with so little oxygen.”

Seibel said that while the squid are strong swimmers with a parrot-like beak that could inflict injury, man-eaters they are not. Unlike some large sharks that feed on large fish and marine mammals, jumbo squid use their numerous small, toothed suckers on their arms and tentacles to feed on small fish and plankton that are no more than a few centimeters in length.

[…]

Seibel was surprised by the large number of squid he encountered, which made it easy to imagine how they could be potentially dangerous to anything swimming with them. Their large numbers also made Seibel somewhat pleased that they appeared frightened of his dive light. Yet he said the animals were also curious about other lights, like reflections off his metal equipment or a glow-in-the-dark tool that one squid briefly attacked.

“Based on the stories I had heard, I was expecting them to be very aggressive, so I was surprised at how timid they were. As soon as we turned on the lights, they were gone,” he said. “I didn’t get the sense that they saw the entire diver as a food item, but they were definitely going after pieces of our equipment.”

I don’t trust the research, or the squid.

Posted on August 7, 2009 at 4:53 PM • 14 Comments

Risk Intuition

People have a natural intuition about risk, and in many ways it’s very good. It fails at times due to a variety of cognitive biases, but for normal risks that people regularly encounter, it works surprisingly well: often better than we give it credit for.

This struck me as I listened to yet another conference presenter complaining about security awareness training. He was talking about the difficulty of getting employees at his company to actually follow his security policies: encrypting data on memory sticks, not sharing passwords, not logging in from untrusted wireless networks. “We have to make people understand the risks,” he said.

It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren’t serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That’s what the company rewards, and that’s what the company actually wants.

“Fire someone who breaks security procedure, quickly and publicly,” I suggested to the presenter. “That’ll increase security awareness faster than any of your posters or lectures or newsletters.” If the risks are real, people will get it.

You see the same sort of risk intuition on motorways. People are less careful about posted speed limits than they are about the actual speeds police issue tickets for. It’s also true on the streets: people respond to real crime rates, not public officials proclaiming that a neighbourhood is safe.

The warning stickers on ladders might make you think the things are considerably riskier than they are, but people have a good intuition about ladders and ignore most of the warnings. (This isn’t to say that some people don’t do stupid things around ladders, but for the most part they’re safe. The warnings are more about the risk of lawsuits to ladder manufacturers than risks to people who climb ladders.)

As a species, we are naturally tuned in to the risks inherent in our environment. Throughout our evolution, our survival depended on making reasonably accurate risk management decisions intuitively, and we’re so good at it, we don’t even realise we’re doing it.

Parents know this. Children have surprisingly perceptive risk intuition. They know when parents are serious about a threat and when their threats are empty. And they respond to the real risks of parental punishment, not the inflated risks based on parental rhetoric. Again, awareness training lectures don’t work; there have to be real consequences.

It gets even weirder. The University College London professor John Adams popularised the metaphor of a mental risk thermostat. We tend to seek some natural level of risk, and if something becomes less risky, we tend to make it more risky. Motorcycle riders who wear helmets drive faster than riders who don’t.

Our risk thermostats aren’t perfect (that newly helmeted motorcycle rider will still decrease his overall risk) and will tend to remain within the same domain (he might drive faster, but he won’t increase his risk by taking up smoking), but in general, people demonstrate an innate and finely tuned ability to understand and respond to risks.

Of course, our risk intuition fails spectacularly and often, with regard to rare risks, unknown risks, voluntary risks, and so on. But when it comes to the common risks we face every day—the kinds of risks our evolutionary survival depended on—we’re pretty good.

So whenever you see someone in a situation who you think doesn’t understand the risks, stop first and make sure you understand the risks. You might be surprised.

This essay previously appeared in The Guardian.

EDITED TO ADD (8/12): Commentary on risk thermostat.

Posted on August 6, 2009 at 5:08 AM • 53 Comments

How We Reacted to the Unexpected 75 Years Ago

A 1934 story from the International Herald Tribune:

Dynamite Found On Track

SPOKANE Discovery of a box of useless dynamite on the railway track two and a half miles southwest of this city led to special precautions being taken to guard the line over which President Roosevelt’s train passed this morning [August 4] en route to Washington. Six deputy sheriffs guarded the section of the line near which the discovery was made. The President’s train passed safely at 10 a.m. Officials are skeptical about the dynamite having any connection with a possible plot against the President.

Imagine if the same thing happened today.

Posted on August 5, 2009 at 1:46 PM • 23 Comments

Security vs. Usability

Good essay: “When Security Gets in the Way.”

The numerous incidents of defeating security measures prompt my cynical slogan: The more secure you make something, the less secure it becomes. Why? Because when security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security. Hence the prevalence of doors propped open by bricks and wastebaskets, of passwords pasted on the fronts of monitors or hidden under the keyboard or in the drawer, of home keys hidden under the mat or above the doorframe or under fake rocks that can be purchased for this purpose.

We are being sent a mixed message: on the one hand, we are continually forced to use arbitrary security procedures. On the other hand, even the professionals ignore many of them. How is the ordinary person to know which ones matter and which don’t? The confusion has unexpected negative side-effects. I once discovered a computer system that was missing essential security patches. When I queried the computer’s user, I discovered that the continual warning against clicking on links or agreeing to requests from pop-up windows had been too effective. This user was so frightened of unwittingly agreeing to install all those nasty things from “out there” that all requests were denied, even the ones for essential security patches. On reflection, this is sensible behavior: It is very difficult to distinguish the legitimate from the illegitimate. Even experts slip up, as the confessions reported occasionally in various computer digests attest.

Posted on August 5, 2009 at 6:10 AM • 29 Comments

Regulating Chemical Plant Security

The New York Times has an editorial on regulating chemical plants:

Since Sept. 11, 2001, experts have warned that an attack on a chemical plant could produce hundreds of thousands of deaths and injuries. Public safety and environmental advocates have fought for strong safety rules, but the chemical industry used its clout in Congress in 2006 to ensure that only a weak law was enacted.

That law sunsets this fall, and the moment is right to move forward. For the first time in years, there is a real advocate for chemical plant security in the White House. As a senator, President Obama co-sponsored a strong bill, and he raised the issue repeatedly in last year’s campaign. Both chambers of Congress are controlled by Democrats who have been far more supportive than Republicans of tough safety rules.

A good bill is moving through the House. It would require the highest-risk chemical plants to switch to less dangerous chemicals only in limited circumstances, but Republicans have still been fighting it. In the House Homeland Security Committee, the Republicans recently succeeded in adding several weakening amendments, including one that could block implementation of safer-chemical rules if they cost jobs. Saving jobs is important, but not if it means putting large numbers of Americans at risk of a deadly attack.

The Obama administration needs to come out forcefully for a clean bill that contains strong safety rules without the Republican loopholes. Janet Napolitano, the secretary of homeland security, said last week that she considers chemical plants a major vulnerability and promised that the administration will be speaking out on the subject in the days ahead.

It is looking increasingly likely that Congress will extend the current inadequate law for another year to take more time to come up with an alternative. That would be regrettable. There is no excuse for continuing to expose the nation to attacks that could lead to mass casualties.

The problem is a classic security externality, which I wrote about in 2007:

Any rational chemical plant owner will only secure the plant up to its value to him. That is, if the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn’t even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident, but that’s the basic idea.

But to society, the cost of an actual attack can be much, much greater. If a terrorist blows up a particularly toxic plant in the middle of a densely populated area, deaths could be in the tens of thousands and damage could be in the hundreds of millions. Indirect economic damage could be in the billions. The owner of the chlorine plant would pay none of these potential costs.

Sure, the owner could be sued. But he’s not at risk for more than the value of his company, and—in any case—he’d probably be smarter to take the chance. Expensive lawyers can work wonders, courts can be fickle, and the government could step in and bail him out (as it did with airlines after Sept. 11). And a smart company can often protect itself by spinning off the risky asset in a subsidiary company, or selling it off completely. The overall result is that our nation’s chemical plants are secured to a much smaller degree than the risk warrants.
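The owner’s half of that calculation is just expected loss versus security spend. A toy version (every number here is invented for illustration):

    # Toy expected-loss comparison; all figures are invented for illustration.
    plant_value        = 100e6    # the owner's maximum exposure
    societal_damage    = 5e9      # casualties plus indirect damage borne by others
    attack_probability = 0.005    # assumed chance of attack over the planning horizon

    owner_expected_loss   = attack_probability * plant_value       # $0.5 million
    society_expected_loss = attack_probability * societal_damage   # $25 million

    print(f"owner will rationally spend at most ${owner_expected_loss/1e6:.1f}M on security")
    print(f"society's expected loss justifies   ${society_expected_loss/1e6:.1f}M")

The gap between those two numbers is the externality; regulation or liability is what closes it.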

Posted on August 4, 2009 at 12:52 PM • 49 Comments

Too Many Security Warnings Results in Complacency

Research that proves what we already knew:

Crying Wolf: An Empirical Study of SSL Warning Effectiveness

Abstract. Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100-participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.

Posted on August 4, 2009 at 10:01 AM • 32 Comments

Building in Surveillance

China is the world’s most successful Internet censor. While the Great Firewall of China isn’t perfect, it effectively limits information flowing in and out of the country. But now the Chinese government is taking things one step further.

Under a requirement taking effect soon, every computer sold in China will have to contain the Green Dam Youth Escort software package. Ostensibly a pornography filter, it is government spyware that will watch every citizen on the Internet.

Green Dam has many uses. It can police a list of forbidden Web sites. It can monitor a user’s reading habits. It can even enlist the computer in some massive botnet attack, as part of a hypothetical future cyberwar.

China’s actions may be extreme, but they’re not unique. Democratic governments around the world—Sweden, Canada and the United Kingdom, for example—are rushing to pass laws giving their police new powers of Internet surveillance, in many cases requiring communications system providers to redesign products and services they sell.

Many are passing data retention laws, forcing companies to keep information on their customers. Just recently, the German government proposed giving itself the power to censor the Internet.

The United States is no exception. The 1994 CALEA law required phone companies to facilitate FBI eavesdropping, and since 2001, the NSA has built substantial eavesdropping systems in the United States. The government has repeatedly proposed Internet data retention laws, allowing surveillance into past activities as well as present.

Systems like this invite criminal appropriation and government abuse. New police powers, enacted to fight terrorism, are already used in situations of normal crime. Internet surveillance and control will be no different.

Official misuses are bad enough, but the unofficial uses worry me more. Any surveillance and control system must itself be secured. An infrastructure conducive to surveillance and control invites surveillance and control, both by the people you expect and by the people you don’t.

China’s government designed Green Dam for its own use, but it’s been subverted. Why does anyone think that criminals won’t be able to use it to steal bank account and credit card information, use it to launch other attacks, or turn it into a massive spam-sending botnet?

Why does anyone think that only authorized law enforcement will mine collected Internet data or eavesdrop on phone and IM conversations?

These risks are not theoretical. After 9/11, the National Security Agency built a surveillance infrastructure to eavesdrop on telephone calls and e-mails within the United States.

Although procedural rules stated that only non-Americans and international phone calls were to be listened to, actual practice didn’t always match those rules. NSA analysts collected more data than they were authorized to, and used the system to spy on wives, girlfriends, and famous people such as President Clinton.

But that’s not the most serious misuse of a telecommunications surveillance infrastructure. In Greece, between June 2004 and March 2005, someone wiretapped more than 100 cell phones belonging to members of the Greek government—the prime minister and the ministers of defense, foreign affairs and justice.

Ericsson built this wiretapping capability into Vodafone’s products, and enabled it only for governments that requested it. Greece wasn’t one of those governments, but someone still unknown—a rival political party? organized crime?—figured out how to surreptitiously turn the feature on.

Researchers have already found security flaws in Green Dam that would allow hackers to take over the computers. Of course there are additional flaws, and criminals are looking for them.

Surveillance infrastructure can be exported, which also aids totalitarianism around the world. Western companies like Siemens, Nokia, and Secure Computing built Iran’s surveillance infrastructure. U.S. companies helped build China’s electronic police state. Twitter’s anonymity saved the lives of Iranian dissidents—anonymity that many governments want to eliminate.

Every year brings more Internet censorship and control—not just in countries like China and Iran, but in the United States, the United Kingdom, Canada and other free countries.

The control movement is egged on by both law enforcement, trying to catch terrorists, child pornographers and other criminals, and by media companies, trying to stop file sharers.

It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers and censors say, these systems put us all at greater risk. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in.

This essay previously appeared—albeit with fewer links—on the Minnesota Public Radio website.

Posted on August 3, 2009 at 6:43 AM • 37 Comments
