Blog: February 2007 Archives

Cloning RFID Chips Made by HID

Remember the Cisco fiasco from BlackHat 2005? Next in the stupid box is RFID-card manufacturer HID, which has prevented Chris Paget from presenting research on how to clone those cards.

Won’t these companies ever learn? HID won’t prevent the public from learning about the vulnerability, and they will end up looking like heavy-handed goons. And it’s not even secret; Paget demonstrated the attack to me and others at the RSA Conference last month.

There’s a difference between a security flaw and information about a security flaw; HID needs to fix the first and not worry about the second. Full disclosure benefits us all.

EDITED TO ADD (2/28): The ACLU is presenting instead.

Posted on February 28, 2007 at 12:00 PM | 34 Comments

Money Laundering Inside the U.S.

With all the attention on foreign money laundering, we’re ignoring the problem in our own country.

How widespread is the problem? No one really knows for sure because the states “have no idea who is behind the companies they have incorporated,” says Senator Carl Levin (D-Mich.), who is trying to force the states to insist on greater transparency. “The United States should never be the situs of choice for international crime, but that is exactly what the lax regulatory regimes in some of our states are inviting.” The Financial Crimes Enforcement Network, the U.S. Treasury bureau investigating money laundering, says roughly $14 billion worth of suspicious transactions involving private U.S. shells and overseas bank accounts were reported by banks from 2004 to 2005, the latest Treasury data available. That’s up from $4 billion for the long stretch between April 1996 and January 2004. Now, estimates the FBI, anonymously held U.S. shell companies have laundered $36 billion to date just from the former Soviet Union.

State governments provide plenty of cover for bad guys. Every year they incorporate 1.9 million or so private companies, but no state verifies or records the identities of owners, much less screens ownership information against criminal watch lists, according to a study by the Government Accountability Office. “You have to supply more information to get a driver’s license than you do to form one of these nonpublicly traded corporations,” says Senator Levin.

Posted on February 28, 2007 at 7:59 AM | 33 Comments

Private Police Forces

In Raleigh, N.C., employees of Capitol Special Police patrol apartment buildings, a bowling alley and nightclubs, stopping suspicious people, searching their cars and making arrests.

Sounds like a good thing, but Capitol Special Police isn’t a police force at all—it’s a for-profit security company hired by private property owners.

This isn’t unique. Private security guards outnumber real police by more than five to one, and they increasingly act like them.

They wear uniforms, carry weapons and drive lighted patrol cars on private properties like banks and apartment complexes and in public areas like bus stations and national monuments. Sometimes they operate as ordinary citizens and can only make citizen’s arrests, but in more and more states they’re being granted official police powers.

This trend should greatly concern citizens. Law enforcement should be a government function, and privatizing it puts us all at risk.

Most obviously, there’s the problem of agenda. Public police forces are charged with protecting the citizens of the cities and towns over which they have jurisdiction. Of course, there are instances of policemen overstepping their bounds, but these are exceptions, and the police officers and departments are ultimately responsible to the public.

Private police officers are different. They don’t work for us; they work for corporations. They’re focused on the priorities of their employers or the companies that hire them. They’re less concerned with due process, public safety and civil rights.

Also, many of the laws that protect us from police abuse do not apply to the private sector. Constitutional safeguards that regulate police conduct, interrogation and evidence collection do not apply to private individuals. Information that is illegal for the government to collect about you can be collected by commercial data brokers, then purchased by the police.

We’ve all seen policemen “reading people their rights” on television cop shows. If you’re detained by a private security guard, you don’t have nearly as many rights.

For example, a federal law known as Section 1983 allows you to sue for civil rights violations by the police but not by private citizens. The Freedom of Information Act allows us to learn what government law enforcement is doing, but the law doesn’t apply to private individuals and companies. In fact, most of your civil rights protections apply only to real police.

Training and regulation are another problem. Private security guards often receive minimal training, if any. They don’t graduate from police academies. And while some states regulate these guard companies, others have no regulations at all: anyone can put on a uniform and play policeman. Abuses of power, brutality, and illegal behavior are much more common among private security guards than real police.

A horrific example of this happened in South Carolina in 1995. Ricky Coleman, an unlicensed and untrained Best Buy security guard with a violent criminal record, choked a fraud suspect to death while another security guard held him down.

This trend is larger than police. More and more of our nation’s prisons are being run by for-profit corporations. The IRS has started outsourcing some back-tax collection to debt-collection companies that will take a percentage of the money recovered as their fee. And there are about 20,000 private police and military personnel in Iraq, working for the Defense Department.

Throughout most of history, specific people were charged by those in power to keep the peace, collect taxes and wage wars. Corruption and incompetence were the norm, and justice was scarce. It is for this very reason that, since the 1600s, European governments have been built around a professional civil service to both enforce the laws and protect rights.

Private security guards turn this bedrock principle of modern government on its head. Whether it’s FedEx policemen in Tennessee who can request search warrants and make arrests; a privately funded surveillance helicopter in Jackson, Miss., that can bypass constitutional restrictions on aerial spying; or employees of Capitol Special Police in North Carolina who are lobbying to expand their jurisdiction beyond the specific properties they protect—privately funded policemen are not protecting us or working in our best interests.

This op-ed originally appeared in the Minneapolis Star-Tribune.

EDITED TO ADD (4/2): This is relevant.

Posted on February 27, 2007 at 6:02 AM | 160 Comments

Windows for Warships

No, really:

The Type 45 destroyers now being launched will run Windows for Warships: and that’s not all. The attack submarine Torbay has been retrofitted with Microsoft-based command systems, and as time goes by the rest of the British submarine fleet will get the same treatment, including the Vanguard class (V class). The V boats carry the UK’s nuclear weapons and are armed with Trident ICBMs, tipped with multiple H-bomb warheads.

And here’s a related story about a software bug in the F-22 Raptor stealth fighter. It seems that the computer systems had problems flying west across the International Date Line. No word as to what operating system the computers were running.

EDITED TO ADD (2/27): Here’s a related article from 1998, involving Windows NT and the USS Yorktown.

Posted on February 26, 2007 at 3:07 PM | 73 Comments

Is Everything a Bomb These Days?

In New Mexico, a bomb squad blew up two CD players, duct-taped to the bottoms of church pews, that played pornographic messages during Mass. This is a pretty funny high school prank, and I hope the kids who did it get suitably punished. But they’re not terrorists. And I have a hard time believing that the police actually thought the CD players were bombs.

Meanwhile, Irish police blew up a tape dispenser left outside a police station.

And not to be outdone, the Dutch police mistook one of their own transmitters for a bomb. At least they didn’t blow anything up.

Okay, everyone. We need some ideas here. If we’re going to think everything weird is a bomb, then the false alarms are going to kill any hope of security.

EDITED TO ADD (3/3): If you’re having trouble identifying bombs, this quiz should help. And here’s a relevant cartoon.

Posted on February 23, 2007 at 12:38 PM | 75 Comments

U.S. Terrorism Arrests/Convictions Significantly Overstated

Interesting report (long, but at least read the Executive Summary) from the U.S. Department of Justice’s Inspector General that says, basically, that all the U.S. terrorism statistics since 9/11—arrests, convictions, and so on—have been grossly inflated.

As summarized in the following table, we determined that the FBI, EOUSA, and the Criminal Division did not accurately report 24 of the 26 statistics we reviewed.

“EOUSA” is the Executive Office for United States Attorneys, part of the U.S. Department of Justice.

The report gives a series of reasons why the statistics were so bad. Here’s one:

The number of terrorism-related convictions was overstated because the FBI initially coded the investigative cases as terrorism-related when the cases were opened, but did not recode cases when no link to terrorism was established.

And here’s an example of a problem:

For example, Operation Tarmac was a worksite enforcement operation launched in November 2001 at the nation’s airports. During this operation, Department and other federal agents went into regional airports and checked the immigration papers of airport workers. The agents then arrested any individuals who used falsified documents, such as social security numbers, drivers’ licenses, and other identification documents, to gain employment. EOUSA officials told us they believe these defendants are properly coded under the anti-terrorism program activity. We do not agree that law enforcement efforts such as these should be counted as “anti-terrorism” unless the subject or target is reasonably linked to terrorist activity.

There’s an enormous amount of detail in the report, if you want to wade through the 80ish pages of report and another 80ish of appendices.

Posted on February 23, 2007 at 7:13 AM | 26 Comments

Drive-By Pharming

Sid Stamm, Zulfikar Ramzan, and Markus Jakobsson have developed a clever, and potentially devastating, attack against home routers.

First, the attacker creates a web page containing a simple piece of malicious JavaScript code. When the page is viewed, the code makes a login attempt into the user’s home broadband router, and then attempts to change its DNS server settings to point to an attacker-controlled DNS server. Once the user’s machine receives the updated DNS settings from the router (after the machine is rebooted), future DNS requests are made to and resolved by the attacker’s DNS server.

And then the attacker basically owns the victim’s web connection.

The main condition for the attack to be successful is that the attacker can guess the router password. This is surprisingly easy, since home routers come with a default password that is uniform and often never changed.

They’ve written proof-of-concept code that can successfully carry out the steps of the attack on Linksys, D-Link, and NETGEAR home routers. If users change their home broadband router passwords to something difficult to guess, they are safe from this attack.
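The attack’s precondition is simple enough to sketch. The fragment below is a hypothetical illustration (not the researchers’ proof-of-concept, which used in-browser JavaScript): it shows why a short list of factory defaults is all an attacker needs when owners never change their passwords. The credential list and `check_login` callback are stand-ins for real HTTP requests to a router’s admin page.

```python
# A toy model of the attack's first step: guessing the router's admin
# credentials. Real code would issue HTTP requests to the router's
# configuration page (e.g. http://192.168.1.1/).
COMMON_DEFAULTS = [
    ("admin", "admin"),     # common Linksys default
    ("admin", "password"),  # common NETGEAR default
    ("admin", ""),          # common D-Link default
]

def guess_router_login(check_login):
    """Return the first default credential pair the router accepts,
    or None if none work. `check_login(user, pw)` stands in for a
    real authentication attempt against the router."""
    for user, pw in COMMON_DEFAULTS:
        if check_login(user, pw):
            return (user, pw)
    return None

# A router whose owner never changed the factory password falls
# to the very first few guesses:
unchanged = lambda user, pw: (user, pw) == ("admin", "password")
print(guess_router_login(unchanged))  # ('admin', 'password')

# A strong, user-chosen password defeats the attack outright:
hardened = lambda user, pw: (user, pw) == ("admin", "v9!kqTrip4")
print(guess_router_login(hardened))   # None
```

No exotic exploit is involved; the entire attack rests on that uniform default password, which is why changing it is a complete defense.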

Additional details (as well as a nifty flash animation illustrating it) can be found here. There’s also a paper on the attack. And there’s a Slashdot thread.

Cisco says that 77 of its routers are vulnerable.

Note that the attack does not require the user to download any malicious software; simply viewing a web page with the malicious JavaScript code is enough.

Posted on February 22, 2007 at 12:40 PM | 78 Comments

CYA Security

Since 9/11, we’ve spent hundreds of billions of dollars defending ourselves from terrorist attacks. Stories about the ineffectiveness of many of these security measures are common, but less so are discussions of why they are so ineffective. In short: much of our country’s counterterrorism security spending is not designed to protect us from the terrorists, but instead to protect our public officials from criticism when another attack occurs.

Boston, January 31: As part of a guerrilla marketing campaign, a series of amateur-looking blinking signs depicting characters from the Aqua Teen Hunger Force, a show on the Cartoon Network, were placed on bridges, near a medical center, underneath an interstate highway, and in other crowded public places.

Police mistook these signs for bombs and shut down parts of the city, eventually spending over $1M sorting it out. Authorities blasted the stunt as a terrorist hoax, while others ridiculed the Boston authorities for overreacting. Almost no one looked beyond the finger pointing and jeering to discuss exactly why the Boston authorities overreacted so badly. They overreacted because the signs were weird.

If someone left a backpack full of explosives in a crowded movie theater, or detonated a truck bomb in the middle of a tunnel, no one would demand to know why the police hadn’t noticed it beforehand. But if a weird device with blinking lights and wires turned out to be a bomb—what every movie bomb looks like—there would be inquiries and demands for resignations. It took the police two weeks to notice the Mooninite blinkies, but once they did, they overreacted because their jobs were at stake.

This is “Cover Your Ass” security, and unfortunately it’s very common.

Airplane security seems to forever be looking backwards. Pre-9/11, it was bombs, guns, and knives. Then it was small blades and box cutters. Richard Reid tried to blow up a plane, and suddenly we all have to take off our shoes. And after last summer’s liquid plot, we’re stuck with a series of nonsensical bans on liquids and gels.

Once you think about this in terms of CYA, it starts to make sense. The TSA wants to be sure that if there’s another airplane terrorist attack, it’s not held responsible for letting it slip through. One year ago, no one could blame the TSA for not detecting liquids. But since everything seems obvious in hindsight, it’s basic job preservation to defend against what the terrorists tried last time.

We saw this kind of CYA security when Boston and New York randomly checked bags on the subways after the London bombing, or when buildings started sprouting concrete barriers after the Oklahoma City bombing. We also see it in ineffective attempts to detect nuclear bombs; authorities employ CYA security against the media-driven threat so they can say “we tried.”

At the same time, we’re ignoring threat possibilities that don’t make the news as much—against chemical plants, for example. But if there were ever an attack, that would change quickly.

CYA also explains the TSA’s inability to take anyone off the no-fly list, no matter how innocent. No one is willing to risk his career on removing someone from the no-fly list who might—no matter how remote the possibility—turn out to be the next terrorist mastermind.

Another form of CYA security is the overly specific countermeasures we see during big events like the Olympics and the Oscars, or in protecting small towns. In all those cases, those in charge of the specific security don’t dare return the money with a message “use this for more effective general countermeasures.” If they were wrong and something happened, they’d lose their jobs.

And finally, we’re seeing CYA security on the national level, from our politicians. We might be better off as a nation funding intelligence gathering and Arabic translators, but it’s a better re-election strategy to fund something visible but ineffective, like a national ID card or a wall between the U.S. and Mexico.

Securing our nation from threats that are weird, threats that either happened before or captured the media’s imagination, and overly specific threats are all examples of CYA security. It happens not because the authorities involved—the Boston police, the TSA, and so on—are not competent, or not doing their job. It happens because there isn’t sufficient national oversight, planning, and coordination.

People and organizations respond to incentives. We can’t expect the Boston police, the TSA, the guy who runs security for the Oscars, or local public officials to balance their own security needs against the security of the nation. They’re all going to respond to the particular incentives imposed from above. What we need is a coherent antiterrorism policy at the national level: one based on real threat assessments, instead of fear-mongering, re-election strategies, or pork-barrel politics.

Sadly, though, there might not be a solution. All the money is in fear-mongering, re-election strategies, and pork-barrel politics. And, like so many things, security follows the money.

This essay originally appeared on Wired.com.

EDITED TO ADD (2/23): Interesting commentary, and a Slashdot thread.

Posted on February 22, 2007 at 5:52 AM | 84 Comments

OpenSSL Now FIPS 140-2 Certified

The process took five years:

The biggest frustration the seemingly endless delays caused OSSI is that the software validated by the CMVP is now more than three years old. “[This toolkit] is branched from version 0.9.7, but 0.9.8 is already available and 0.9.9 is in development,” says Marquess. “We’re glad it’s available, but now it’s dated. We understand a lot better what the CMVP’s requirements are, though, so validation will go more smoothly next time around. We also know the criticism we’ll encounter, and we’ll nail them with the next release.”

This is one problem with long certification cycles; software development cycles are faster.

Posted on February 21, 2007 at 12:17 PM | 14 Comments

Movie Plot Threat in Vancouver

The idiocy of this is impressive:

A Vancouver Police computer crime investigator has warned the city that plans for a citywide wireless Internet system put the city at risk of terrorist attack during the 2010 Winter Olympic Games.

The problem? Well, the problem seems to be that terrorists might attend the Olympic games and use the Internet while they’re there.

“If you have an open wireless system across the city, as a bad guy I could sit on a bus with a laptop and do global crime,” Fenton explained. “It would be virtually impossible to find me.”

There’s also some scary stuff about SCADA systems, and about the city putting some of its own services on the Internet. Clearly this guy has thought about the risks a lot, just not with any sense. He’s overestimating cyberterrorism. He’s overestimating how important this one particular method of wireless Internet access is. He’s overestimating how important the 2010 Winter Olympics are.

But the newspaper was happy to play along and spread the fear. The photograph accompanying the article is captioned: “Anyone with a laptop and wireless access could commit a terrorist act, police warn.”

Posted on February 21, 2007 at 6:51 AM | 66 Comments

More AACS Cracking

Slowly, AACS—the security in both Blu-ray and HD DVD—has been cracked. Now, it has been cracked even further:

Arnezami, a hacker on the Doom9 forum, has published a crack for extracting the “processing key” from a high-def DVD player. This key can be used to gain access to every single Blu-Ray and HD-DVD disc.

Previously, another Doom9 user called Muslix64 had broken both Blu-Ray and HD-DVD by extracting the “volume keys” for each disc, a cumbersome process. This break builds on Muslix64’s work but extends it—now you can break all AACS-locked discs.

As I have said before, what will be interesting to watch is how well HD DVD and Blu-ray recover. Both were built expecting these sorts of cracks, and both have mechanisms to recover security for future movies. It remains to be seen how well these recovery systems will work.
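The difference between the two breaks can be illustrated with a toy model. This is a sketch only: real AACS derives its keys with AES rather than SHA-256 and has several more layers, but it captures why a leaked processing key is so much worse than a leaked volume key.

```python
import hashlib

def volume_key(processing_key: bytes, disc_id: bytes) -> str:
    # Toy stand-in for AACS key derivation: each disc's key is
    # derived from a processing key shared across many discs.
    return hashlib.sha256(processing_key + disc_id).hexdigest()

pk = b"player-processing-key"

# Muslix64's approach recovered one volume key per disc -- enough
# to decrypt that disc, useless for any other:
key_a = volume_key(pk, b"disc-A")

# Arnezami's approach recovers the processing key itself, so an
# attacker can derive the volume key for every disc it covers:
key_b = volume_key(pk, b"disc-B")

print(key_a != key_b)  # True: the per-disc keys differ,
# but both fall out of the single leaked processing key.
```

This is also what the recovery mechanisms target: future discs can be mastered so that the compromised processing key no longer works, at which point attackers must extract a new one.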

Posted on February 19, 2007 at 1:28 PM | 28 Comments

The FBI: Now Losing Fewer Laptops

According to a new report, the FBI has lost 160 laptops, including at least ten with classified information, in the past four years.

But it’s not all bad news:

The results are an improvement on findings in a similar audit in 2002, which reported that 354 weapons and 317 laptops were lost or stolen at the FBI over about two years. They follow the high-profile losses last year of laptops containing personal information from the Veterans Administration and the Internal Revenue Service.

In a statement yesterday, FBI Assistant Director John Miller emphasized that the report showed “significant progress in decreasing the rate of loss for weapons and laptops” at the FBI. The average number of laptops or guns that went missing dropped from about 12 per month to four per month for each category, according to the report.

The FBI: Now losing fewer laptops!

Posted on February 16, 2007 at 12:14 PM | 24 Comments

The Doghouse: Onboard Threat Detection System

It’s almost too absurd to even write about seriously—this plan to spot terrorists in airplane seats:

Cameras fitted to seat-backs will record every twitch, blink, facial expression or suspicious movement before sending the data to onboard software which will check it against individual passenger profiles.

[…]

They say that rapid eye movements, blinking excessively, licking lips or ways of stroking hair or ears are classic symptoms of somebody trying to conceal something.

A separate microphone will hear and record even whispered remarks. Islamic suicide bombers are known to whisper texts from the Koran in the moments before they explode bombs.

The software being developed by the scientists will be so sophisticated that it will be able to take account of nervous flyers or people with a natural twitch, helping to ensure there are no false alarms.

The only thing I can think of is that some company press release got turned into real news without a whole lot of thinking.

Posted on February 16, 2007 at 6:55 AM | 42 Comments

Scanning People's Intentions

Here’s an article on a brain scanning technique that reads people’s intentions.

There’s not a lot of detail, but my guess is that it doesn’t work very well. But that’s not really the point. If it doesn’t work today, it will in five, ten, twenty years; it will work eventually.

What we need to do, today, is debate the legality and ethics of these sorts of interrogations:

“These techniques are emerging and we need an ethical debate about the implications, so that one day we’re not surprised and overwhelmed and caught on the wrong foot by what they can do. These things are going to come to us in the next few years and we should really be prepared,” Professor Haynes told the Guardian.

The use of brain scanners to judge whether people are likely to commit crimes is a contentious issue that society should tackle now, according to Prof Haynes. “We see the danger that this might become compulsory one day, but we have to be aware that if we prohibit it, we are also denying people who aren’t going to commit any crime the possibility of proving their innocence.”

More discussion along these lines is in the article. And I wrote about this sort of thing in 2005, in the context of Judge Roberts’ confirmation hearings.

Posted on February 15, 2007 at 6:32 AM | 52 Comments

"One Laptop per Child" Security System

It’s called “Bitfrost,” and it’s interesting:

We have set out to create a system that is both drastically more secure and provides drastically more usable security than any mainstream system currently on the market. One result of the dedication to usability is that there is only one protection provided by the Bitfrost platform that requires user response, and even then, it’s a simple ‘yes or no’ question understandable even by young children. The remainder of the security is provided behind the scenes. But pushing the envelope on both security and usability is a tall order, and it’s important to note that we have neither tried to create, nor do we believe we have created, a “perfectly secure” system. Notions of perfect security in the real world are foolish, and we distance ourselves up front from any such claims.

Read the design principles and design goals. And there’s an article on the Wired website, and there’s a Slashdot thread.

What they propose to do is radical, and different—just like the whole One Laptop Per Child program. Definitely worth paying attention to, and supporting if possible.

Posted on February 14, 2007 at 7:04 AM | 46 Comments

SWIFT Violates Legal Privacy Protections

This is a good summary of the SWIFT privacy case:

This week, the Article 29 group—a panel of European Commissioners for Freedom, Security, and Justice—ruled that the interbank money transfer service SWIFT (Society for Worldwide Interbank Financial Telecommunication) has failed to respect the provisions of the EU Data Protection directive by transferring personal financial data to the US in a manner the press release describes as “hidden, systematic, massive, and long-term.”

Posted on February 13, 2007 at 7:49 AM | 21 Comments

DRM in Windows Vista

Windows Vista includes an array of “features” that you don’t want. These features will make your computer less reliable and less secure. They’ll make your computer less stable and run slower. They will cause technical support problems. They may even require you to upgrade some of your peripheral hardware and existing software. And these features won’t do anything useful. In fact, they’re working against you. They’re digital rights management (DRM) features built into Vista at the behest of the entertainment industry.

And you don’t get to refuse them.

The details are pretty geeky, but basically Microsoft has reworked a lot of the core operating system to add copy protection technology for new media formats like HD DVD and Blu-ray disks. Certain high-quality output paths—audio and video—are reserved for protected peripheral devices. Sometimes output quality is artificially degraded; sometimes output is prevented entirely. And Vista continuously spends CPU time monitoring itself, trying to figure out if you’re doing something that it thinks you shouldn’t. If it does, it limits functionality and in extreme cases restarts just the video subsystem. We still don’t know the exact details of all this, and how far-reaching it is, but it doesn’t look good.

Microsoft put all those functionality-crippling features into Vista because it wants to own the entertainment industry. This isn’t how Microsoft spins it, of course. It maintains that it has no choice, that it’s Hollywood that is demanding DRM in Windows in order to allow “premium content”—meaning, new movies that are still earning revenue—onto your computer. If Microsoft didn’t play along, it’d be relegated to second-class status as Hollywood pulled its support for the platform.

It’s all complete nonsense. Microsoft could have easily told the entertainment industry that it was not going to deliberately cripple its operating system, take it or leave it. With 95% of the operating system market, where else would Hollywood go? Sure, Big Media has been pushing DRM, but recently some—Sony after their 2005 debacle and now EMI Group—are having second thoughts.

What the entertainment companies are finally realizing is that DRM doesn’t work, and just annoys their customers. Like every other DRM system ever invented, Microsoft’s won’t keep the professional pirates from making copies of whatever they want. The DRM security in Vista was broken the day it was released. Sure, Microsoft will patch it, but the patched system will get broken as well. It’s an arms race, and the defenders can’t possibly win.

I believe that Microsoft knows this and also knows that it doesn’t matter. This isn’t about stopping pirates and the small percentage of people who download free movies from the Internet. This isn’t even about Microsoft satisfying its Hollywood customers at the expense of those of us paying for the privilege of using Vista. This is about the overwhelming majority of honest users and who owns the distribution channels to them. And while it may have started as a partnership, in the end Microsoft is going to end up locking the movie companies into selling content in its proprietary formats.

We saw this trick before; Apple pulled it on the recording industry. First iTunes worked in partnership with the major record labels to distribute content, but soon Warner Music’s CEO Edgar Bronfman Jr. found that he wasn’t able to dictate a pricing model to Steve Jobs. The same thing will happen here; after Vista is firmly entrenched in the marketplace, Sony’s Howard Stringer won’t be able to dictate pricing or terms to Bill Gates. This is a war for 21st-century movie distribution and, when the dust settles, Hollywood won’t know what hit them.

To be fair, just last week Steve Jobs publicly came out against DRM for music. It’s a reasonable business position, now that Apple controls the online music distribution market. But Jobs never mentioned movies, and he is the largest single shareholder in Disney. Talk is cheap. The real question is whether he would actually allow iTunes Music Store purchases to play on Microsoft or Sony players, or whether this is just a clever way of deflecting blame onto the already-hated music labels.

Microsoft is reaching for a much bigger prize than Apple: not just Hollywood, but also peripheral hardware vendors. Vista’s DRM will require driver developers to comply with all kinds of rules and be certified; otherwise, they won’t work. And Microsoft talks about expanding this to independent software vendors as well. It’s another war for control of the computer market.

Unfortunately, we users are caught in the crossfire. We are not only stuck with DRM systems that interfere with our legitimate fair-use rights for the content we buy, we’re stuck with DRM systems that interfere with all of our computer use—even the uses that have nothing to do with copyright.

I don’t see the market righting this wrong, because Microsoft’s monopoly position gives it much more power than we consumers can hope to have. It might not be as obvious as Microsoft using its operating system monopoly to kill Netscape and own the browser market, but it’s really no different. Microsoft’s entertainment market grab might further entrench its monopoly position, but it will cause serious damage to both the computer and entertainment industries. DRM is bad, both for consumers and for the entertainment industry: something the entertainment industry is just starting to realize, but Microsoft is still fighting. Some researchers think that this is the final straw that will drive Windows users to the competition, but I think the courts are necessary.

In the meantime, the only advice I can offer you is to not upgrade to Vista. It will be hard. Microsoft’s bundling deals with computer manufacturers mean that it will be increasingly hard not to get the new operating system with new computers. And Microsoft has some pretty deep pockets and can wait us all out if it wants to. Yes, some people will shift to Macintosh, and a smaller number to Linux, but most of us are stuck on Windows. Still, if enough customers say no to Vista, the company might actually listen.

This essay originally appeared on Forbes.com.

EDITED TO ADD (2/23): Some commentary.

Posted on February 12, 2007 at 10:37 AM | 240 Comments

Homeland Security Pork

This article is a perfect illustration of the wasteful, pork-barrel political spending that we like to call “homeland security.” And to think we could actually be spending this money on something useful.

When the fire department in the tiny Berkshire hamlet of Cheshire needed a new fire truck, it asked Uncle Sam for a little help.

The response last month was stunning: a $665,962 homeland security grant.

The award was nearly 26 times the annual budget of the volunteer fire department in the town of 3,500. And the rub: The department is not allowed to spend it on a fire truck.

[…]

The town does have the Cheshire Cheese Monument, a sizable concrete sculpture of a cheese press commemorating a 1,450-pound cheese hunk given by town elders to Thomas Jefferson in 1801. But its value as a terrorist target is not readily apparent.

[…]

…Sweet said he might use some of the money to recruit high school students. Or he might put some of the windfall into a marketing campaign to lure volunteers to Cheshire.

“It’ll be on billboards, TVs, and radio stations, and that kind of stuff,” he said. “We’ll have to spend it wisely.”

How many times is this story being repeated across the country? I’m sure the town needs its fire truck, and I hope it gets it. But this is just appalling.

Posted on February 12, 2007 at 6:20 AM37 Comments

Schneier on Video: Security Theater Against Movie Plot Threats

On June 10, 2006, I gave a talk at the ACLU New Jersey Membership Conference: “Counterterrorism in America: Security Theater Against Movie-Plot Threats.” Here’s the video.

EDITED TO ADD (2/10): The video is a little over an hour long. You can download the .WMV version directly here. It will play in the cross-platform, GPL VLC media player, but you may need to upgrade to the most recent version (0.8.6).

EDITED TO ADD (2/11): Someone put the video up on Google Video.

Posted on February 9, 2007 at 1:07 PM19 Comments

A New Secure Hash Standard

The U.S. National Institute of Standards and Technology is having a competition for a new cryptographic hash function.

This matters. The phrase “one-way hash function” might sound arcane and geeky, but hash functions are the workhorses of modern cryptography. They provide web security in SSL. They help with key management in e-mail and voice encryption: PGP, Skype, all the others. They help make it harder to guess passwords. They’re used in virtual private networks, help provide DNS security and ensure that your automatic software updates are legitimate. They provide all sorts of security functions in your operating system. Every time you do something with security on the internet, a hash function is involved somewhere.

Basically, a hash function is a fingerprint function. It takes a variable-length input—anywhere from a single byte to a file terabytes in length—and converts it to a fixed-length string: 20 bytes, for example.
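To make the fingerprint idea concrete, here is a minimal sketch in Python using the standard-library hashlib module (SHA-256 here, since SHA-1 is on its way out):

```python
import hashlib

# Whatever the input length -- empty, one byte, or a megabyte --
# the digest is always the same fixed size: 32 bytes for SHA-256.
for message in [b"", b"x", b"x" * 1_000_000]:
    digest = hashlib.sha256(message).digest()
    assert len(digest) == 32

# The digest acts as a fingerprint of the input.
print(hashlib.sha256(b"hello").hexdigest())
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

SHA-1 behaves the same way, just with a 20-byte output.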

One-way hash functions are supposed to have two properties. First, they’re one-way. This means that it is easy to take an input and compute the hash value, but it’s impossible to take a hash value and recreate the original input. By “impossible” I mean “can’t be done in any reasonable amount of time.”

Second, they’re collision-free. This means that even though there are an infinite number of inputs for every hash value, you’re never going to find two of them. Again, “never” is defined as above. The cryptographic reasoning behind these two properties is subtle, but any cryptographic text talks about them.

The hash function you’re most likely to use routinely is SHA-1. Invented by the National Security Agency, it’s been around since 1995. Recently, though, there have been some pretty impressive cryptanalytic attacks against the algorithm. The best attack is barely on the edge of feasibility, and not effective against all applications of SHA-1. But there’s an old saying inside the NSA: “Attacks always get better; they never get worse.” It’s past time to abandon SHA-1.

There are near-term alternatives—a related algorithm called SHA-256 is the most obvious—but they’re all based on the family of hash functions first developed in 1992. We’ve learned a lot more about the topic in the past 15 years, and can certainly do better.

Why the National Institute of Standards and Technology, or NIST, though? Because it has exactly the experience and reputation we want. We were in the same position with encryption functions in 1997. We needed to replace the Data Encryption Standard, but it wasn’t obvious what should replace it. NIST decided to orchestrate a worldwide competition for a new encryption algorithm. There were 15 submissions from 10 countries—I was part of the group that submitted Twofish—and after four years of analysis and cryptanalysis, NIST chose the algorithm Rijndael to become the Advanced Encryption Standard (.pdf), or AES.

The AES competition was the most fun I’ve ever had in cryptography. Think of it as a giant cryptographic demolition derby: A bunch of us put our best work into the ring, and then we beat on each other until there was only one standing. It was really more academic and structured than that, but the process stimulated a lot of research in block-cipher design and cryptanalysis. I personally learned an enormous amount about those topics from the AES competition, and we as a community benefited immeasurably.

NIST did a great job managing the AES process, so it’s the perfect choice to do the same thing with hash functions. And it’s doing just that (.pdf). Last year and the year before, NIST sponsored two workshops to discuss the requirements for a new hash function, and last month it announced a competition to choose a replacement for SHA-1. Submissions will be due in fall 2008, and a single standard is scheduled to be chosen by the end of 2011.

Yes, this is a reasonable schedule. Designing a secure hash function seems harder than designing a secure encryption algorithm, although we don’t know whether this is inherently true of the mathematics or simply a result of our imperfect knowledge. Producing a new secure hash standard is going to take a while. Luckily, we have an interim solution in SHA-256.

Now, if you’ll excuse me, the Twofish team needs to reconstitute and get to work on an Advanced Hash Standard submission.

This essay originally appeared on Wired.com.

EDITED TO ADD (2/8): Every time I write about one-way hash functions, I get responses from people claiming they can’t possibly be secure because an infinite number of texts hash to the same short (160-bit, in the case of SHA-1) hash value. Yes, of course an infinite number of texts hash to the same value; that’s the way the function works. But the odds of it happening naturally are less than the odds of all the air molecules bunching up in the corner of the room and suffocating you, and you can’t force it to happen either. Right now, several groups are trying to implement Xiaoyun Wang’s attack against SHA-1. I predict one of them will find two texts that hash to the same value this year—it will demonstrate that the hash function is broken and be really big news.
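To put rough numbers on those odds, here is a back-of-the-envelope sketch assuming SHA-1’s 160-bit output:

```python
# Rough numbers for a 160-bit hash like SHA-1.
bits = 160

# Chance that two specific documents happen to share a hash value.
p_pair = 2.0 ** -bits

# Birthday bound: the number of random documents you'd need before
# some pair among them is likely to collide is about 2^(bits/2).
birthday = 2 ** (bits // 2)

print(f"P(two given documents collide) ~ 2^-{bits} = {p_pair:.2e}")
print(f"Documents for a likely accidental collision ~ 2^{bits // 2} = {birthday:.2e}")
```

That 2^80 figure is the generic workload for finding a collision by brute force; the point of Wang’s cryptanalysis is that it does substantially better than this bound.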

Posted on February 8, 2007 at 9:07 AM77 Comments

Corsham Bunker

Fascinating article on the Corsham bunker, the secret underground UK site the government was to retreat to in the event of a nuclear war.

Until two years ago, the existence of this complex, variously codenamed Burlington, Stockwell, Turnstile or 3-Site, was classified. It was a huge yet very secret complex, where the government and 6,000 apparatchiks would have taken refuge for 90 days during all-out thermonuclear war. Solid yet cavernous, surrounded by 100ft-deep reinforced concrete walls within a subterranean 240-acre limestone quarry just outside Corsham, it drives one to imagine the ghosts of people who, thank God, never took refuge here.

Posted on February 7, 2007 at 2:40 PM47 Comments

Eyewitness Identification Reform

According to this article, “Mistaken eyewitness identification is the leading cause of wrongful convictions.” Given what I’ve been reading recently about memory and the brain, this does not surprise me at all.

New Mexico is currently debating a bill reforming eyewitness identification procedures:

Under the proposed regulations, an eyewitness must provide a written description before a lineup takes place; there must be at least six individuals in a live lineup and 10 photos in a photographic line-up; and the members of the lineup must be shown sequentially rather than simultaneously.

The bill would also restrict the amount of time in which law enforcement could bring a suspect by for a physical identification by a victim or witness to within one hour after the crime was reported. Anything beyond one hour would require a lineup with multiple photos or people.

I don’t have access to any of the psychological or criminology studies that back these reforms up, but the bill is being supported by the right sorts of people.

Posted on February 7, 2007 at 6:38 AM34 Comments

The Psychology of Security

I just posted a long essay (pdf available here) on my website, exploring how psychology can help explain the difference between the feeling of security and the reality of security.

We make security trade-offs, large and small, every day. We make them when we decide to lock our doors in the morning, when we choose our driving route, and when we decide whether we’re going to pay for something via check, credit card, or cash. They’re often not the only factor in a decision, but they’re a contributing factor. And most of the time, we don’t even realize it. We make security trade-offs intuitively. Most decisions are default decisions, and there have been many popular books that explore reaction, intuition, choice, and decision.

These intuitive choices are central to life on this planet. Every living thing makes security trade-offs, mostly as a species—evolving this way instead of that way—but also as individuals. Imagine a rabbit sitting in a field, eating clover. Suddenly, he spies a fox. He’s going to make a security trade-off: should I stay or should I flee? The rabbits that are good at making these trade-offs are going to live to reproduce, while the rabbits that are bad at it are going to get eaten or starve. This means that, as a successful species on the planet, humans should be really good at making security trade-offs.

And yet at the same time we seem hopelessly bad at it. We get it wrong all the time. We exaggerate some risks while minimizing others. We exaggerate some costs while minimizing others. Even simple trade-offs we get wrong, wrong, wrong—again and again. A Vulcan studying human security behavior would shake his head in amazement.

The truth is that we’re not hopelessly bad at making security trade-offs. We are very well adapted to dealing with the security environment endemic to hominids living in small family groups on the highland plains of East Africa. It’s just that the environment in New York in 2006 is different from Kenya circa 100,000 BC. And so our feeling of security diverges from the reality of security, and we get things wrong.

The essay examines particular brain heuristics, how they work and how they fail, in an attempt to explain why our feeling of security so often diverges from reality. I’m giving a talk on the topic at the RSA Conference today at 3:00 PM. Dark Reading posted an article on this, also discussed on Slashdot. CSO Online also has a podcast interview with me on the topic. I expect there’ll be more press coverage this week.

The essay is really still in draft, and I would very much appreciate any and all comments, criticisms, additions, corrections, suggestions for further research, and so on. I think security technology has a lot to learn from psychology, and that I’ve only scratched the surface of the interesting and relevant research—and what it means.

EDITED TO ADD (2/7): Two more articles on topic.

Posted on February 6, 2007 at 1:44 PM210 Comments

Dave Barry on Super Bowl Security

Funny:

Also, if you are planning to go to the Super Bowl game on Sunday, be aware that additional security measures will be in effect, as follows:

  • WHEN TO ARRIVE: All persons attending the game MUST arrive at the stadium no later than 7:45 a.m. yesterday. There will be NO EXCEPTIONS. I am talking to you, Prince.
  • PERSONAL BELONGINGS: Fans will not be allowed to take anything into the stadium except medically required organs. If you need, for example, both kidneys, you will be required to produce a note from your doctor, as well as your actual doctor.
  • TAILGATING: There will be no tailgating. This is to thwart the terrorists, who are believed to have been planning a tailgate-based attack (code name “Death Hibachi”) involving the detonation of a nuclear bratwurst capable of leveling South Florida, if South Florida was not already so level to begin with.
  • TALKING: There will be no talking.
  • PERMITTED CHEERS: The National Football League, in conjunction with the Department of Homeland Security, the FBI, the CIA and Vice President Cheney, has approved the following three cheers for use during the game: (1) “You suck, ref!” (2) “Come on, (Name of Team)!” (3) “You suck, Prince!”

Back in 2004, I wrote a more serious essay on security at the World Series.

Posted on February 6, 2007 at 7:31 AM14 Comments

Business Models for Discovering Security Vulnerabilities

One company sells them to the vendors:

The founder of a small Moscow security company, Gleg, Legerov scrutinizes computer code in commonly used software for programming bugs, which attackers can use to break into computer systems, and sends his findings to a few dozen corporate customers around the world. Each customer pays more than $10,000 for information it can use to plug the hidden holes in its computers and stay ahead of criminal hackers.

iDefense buys them:

This month, iDefense, a Virginia-based subsidiary of the technology company VeriSign, began offering an $8,000 bounty to the first six researchers to find holes in Vista or the newest version of Internet Explorer, and up to $4,000 more for code that can take advantage of the weaknesses. Like Gleg, iDefense will sell information about those vulnerabilities to companies and government agencies for an undisclosed amount, though iDefense makes it a practice to alert vendors like Microsoft first.

So do criminals:

But the iDefense rewards are low compared to bounties offered on the black market. In December, the Japanese antivirus company TrendMicro found a Vista vulnerability being offered by an anonymous hacker on a Romanian Web forum for $50,000.

There’s a lot of FUD in this article, but also some good stuff.

Posted on February 5, 2007 at 12:44 PM40 Comments

Excessive Secrecy and Security Helps Terrorists

I’ve said it, and now so has the director of the Canadian Security Intelligence Service:

Canada’s spy master, of all people, is warning that excessive government secrecy and draconian counterterrorism measures will only play into the hands of terrorists.

“The response to the terrorist threat, whether now or in the future, should follow the long-standing principle of ‘in all things moderation,'” Jim Judd, director of the Canadian Security Intelligence Service, said in a recent Toronto speech.

Posted on February 2, 2007 at 7:25 AM23 Comments

Non-Terrorist Embarrassment in Boston

The story is almost too funny to write about seriously. To advertise the Cartoon Network show “Aqua Teen Hunger Force,” the network put up 38 blinking signs (kind of like Lite Brites) around the Boston area. The Boston police decided—with absolutely no supporting evidence—that these were bombs and shut down parts of the city.

Now the police look stupid, but they’re trying really hard not to act humiliated:

Governor Deval Patrick told the Associated Press: “It’s a hoax—and it’s not funny.”

Unfortunately, it is funny. What isn’t funny is how the Boston government is trying to prosecute the artist and the network instead of owning up to its own stupidity. The police now claim that these were “hoax” explosive devices. But you can’t call something a hoax explosive device unless it was intended to look like an explosive device, and even a cursory look at any of these shows that they weren’t.

But it’s much easier to blame others than to admit that you were wrong:

“It is outrageous, in a post 9/11 world, that a company would use this type of marketing scheme,” Mayor Thomas Menino said. “I am prepared to take any and all legal action against Turner Broadcasting and its affiliates for any and all expenses incurred.”

And:

Rep. Ed Markey, a Boston-area congressman, said, “Whoever thought this up needs to find another job.”

“Scaring an entire region, tying up the T and major roadways, and forcing first responders to spend 12 hours chasing down trinkets instead of terrorists is marketing run amok,” Markey, a Democrat, said in a written statement. “It would be hard to dream up a more appalling publicity stunt.”

And:

“It had a very sinister appearance,” [Massachusetts Attorney General Martha] Coakley told reporters. “It had a battery behind it, and wires.”

For heaven’s sake, don’t let her inside a Radio Shack.

I like this comment:

They consisted of magnetic signs with blinking lights in the shape of a cartoon character.

And everyone knows that bombs have blinking lights on ‘em. Every single movie bomb you’ve ever seen has a blinking light.

Triumph for Homeland Security, guys.

And this one:

“It’s almost too easy to be a terrorist these days,” said Jennifer Mason, 26. “You stick a box on a corner and you can shut down a city.”

And this one, by one of the artists who installed the signs:

“I find it kind of ridiculous that they’re making these statements on TV that we must not be safe from terrorism, because they were up there for three weeks and no one noticed. It’s pretty commonsensical to look at them and say this is a piece of art and installation,” he said.

Right. If this wasn’t a ridiculous overreaction to a non-existent threat, then how come the devices were in place for weeks without anyone noticing them? What does that say about the Boston police?

Maybe if the Boston police stopped wasting time and money searching bags on subways….

Of the 2,449 inspections between Oct. 10 and Dec. 31, the bags of 27 riders tested positive in the initial screening for explosives, prompting further searches, the Globe found in an analysis of daily inspection reports obtained under the state’s Freedom of Information Act.

In the additional screening, 11 passengers had their bags checked by explosive-sniffing dogs, and 16 underwent a physical search. Nothing was found.

These blinking signs have been up for weeks in ten cities—Boston, New York, Los Angeles, Chicago, Atlanta, Seattle, Portland, Austin, San Francisco, and Philadelphia—and no one else has managed to panic so completely. Refuse to be terrorized, people!

EDITED TO ADD (2/2): Here’s some good information about whether the stunt broke the law or not.

EDITED TO ADD (2/3): This is 100% right:

Let’s get a few facts straight on the Aqua Teen Hunger Force sign fiasco:

1. Attorney General Martha Coakley needs to shut up and stop using the word “hoax.” There was no hoax. Hoax implies Turner Networks and the ATHF people were trying to defraud or confuse people as to what they were doing. Hoax implies they were trying to make their signs look like bombs. They weren’t. They made Lite-Brite signs of a cartoon character giving the finger.

2. It bears repeating again that Turner, and especially Berdovsky, did absolutely nothing illegal. The devices were not bombs. They did not look like bombs. They were all placed in public spaces and caused no obstruction to traffic or commerce. At most, Berdovsky is guilty of littering or illegal flyering.

3. The “devices” were placed in ten cities, and have been there for over two weeks. No other city managed to freak out and commit an entire platoon of police officers to scaring their own city claiming they might be bombs. No other mayor agreed to talk to Fox News with any statement beyond “no comment” when spending the day asking if this was a “terrorist dry run.”

4. There is nothing, not a single thing, remotely suggesting that Turner or the guerilla marketing firm they hired intended to cause a public disturbance. Many have claimed the signs were “like saying ‘fire’ in a crowded theater.” Wrong. This was like taping a picture of a fire to the wall of a theater and someone freaked out and called the fire department.

And this is also worth reading.

EDITED TO ADD (2/6): More info.

Posted on February 1, 2007 at 1:08 PM244 Comments

Recognizing a Suicide Bomber

Fascinating story of an Israeli taxi driver who picked up a suicide bomber. What’s interesting to me is how the driver comes to realize his passenger is a suicide bomber. It wasn’t anything that comes up on a profile, but a feeling that something is wrong:

Mr Woltinsky said he realised straight away that something was not quite right.

“When he got into my car, I had a bad feeling because he did not behave normally—his eyes, his nerves—and the fact he was wearing a big red jacket even though it was hot.

“I asked him where he wanted to go but he didn’t say anything, just waved his hand.

“When I asked him again, he said only one word, “Haifa”, in an Arab accent. Haifa is hundreds of kilometres away, so now I was almost 100% sure he was a suicide bomber.”

In other words, his passenger was acting hinky.

EDITED TO ADD (2/1): The Israeli was not a taxi driver. Apologies.

Posted on February 1, 2007 at 6:26 AM41 Comments