Schneier on Security
A blog covering security and security technology.
February 2007 Archives
Won't these companies ever learn? HID won't prevent the public from learning about the vulnerability, and they will end up looking like heavy handed goons. And it's not even secret; Paget demonstrated the attack to me and others at the RSA Conference last month.
There's a difference between a security flaw and information about a security flaw; HID needs to fix the first and not worry about the second. Full disclosure benefits us all.
EDITED TO ADD (2/28): The ACLU is presenting instead.
With all the attention on foreign money laundering, we're ignoring the problem in our own country.
How widespread is the problem? No one really knows for sure because the states "have no idea who is behind the companies they have incorporated," says Senator Carl Levin (D--Mich.), who is trying to force the states to insist on greater transparency. "The United States should never be the situs of choice for international crime, but that is exactly what the lax regulatory regimes in some of our states are inviting." The Financial Crimes Enforcement Network, the U.S. Treasury bureau investigating money laundering, says roughly $14 billion worth of suspicious transactions involving private U.S. shells and overseas bank accounts came in from banks from 2004 to 2005, the latest Treasury data available. That's up from $4 billion for the long stretch between April 1996 and January 2004. Now, estimates the FBI, anonymously held U.S. shell companies have laundered $36 billion to date just from the former Soviet Union.
In Raleigh, N.C., employees of Capitol Special Police patrol apartment buildings, a bowling alley and nightclubs, stopping suspicious people, searching their cars and making arrests.
Sounds like a good thing, but Capitol Special Police isn't a police force at all -- it's a for-profit security company hired by private property owners.
This isn't unique. Private security guards outnumber real police more than 5-1, and increasingly act like them.
They wear uniforms, carry weapons and drive lighted patrol cars on private properties like banks and apartment complexes and in public areas like bus stations and national monuments. Sometimes they operate as ordinary citizens and can only make citizen's arrests, but in more and more states they're being granted official police powers.
This trend should greatly concern citizens. Law enforcement should be a government function, and privatizing it puts us all at risk.
Most obviously, there's the problem of agenda. Public police forces are charged with protecting the citizens of the cities and towns over which they have jurisdiction. Of course, there are instances of policemen overstepping their bounds, but these are exceptions, and the police officers and departments are ultimately responsible to the public.
Private police officers are different. They don't work for us; they work for corporations. They're focused on the priorities of their employers or the companies that hire them. They're less concerned with due process, public safety and civil rights.
Also, many of the laws that protect us from police abuse do not apply to the private sector. Constitutional safeguards that regulate police conduct, interrogation and evidence collection do not apply to private individuals. Information that is illegal for the government to collect about you can be collected by commercial data brokers, then purchased by the police.
We've all seen policemen "reading people their rights" on television cop shows. If you're detained by a private security guard, you don't have nearly as many rights.
For example, a federal law known as Section 1983 allows you to sue for civil rights violations by the police but not by private citizens. The Freedom of Information Act allows us to learn what government law enforcement is doing, but the law doesn't apply to private individuals and companies. In fact, most of your civil rights protections apply only to real police.
Training and regulation are another problem. Private security guards often receive minimal training, if any. They don't graduate from police academies. And while some states regulate these guard companies, others have no regulations at all: anyone can put on a uniform and play policeman. Abuses of power, brutality, and illegal behavior are much more common among private security guards than real police.
A horrific example of this happened in South Carolina in 1995. Ricky Coleman, an unlicensed and untrained Best Buy security guard with a violent criminal record, choked a fraud suspect to death while another security guard held him down.
This trend is larger than police. More and more of our nation's prisons are being run by for-profit corporations. The IRS has started outsourcing some back-tax collection to debt-collection companies that will take a percentage of the money recovered as their fee. And there are about 20,000 private police and military personnel in Iraq, working for the Defense Department.
Throughout most of history, specific people were charged by those in power to keep the peace, collect taxes and wage wars. Corruption and incompetence were the norm, and justice was scarce. It is for this very reason that, since the 1600s, European governments have been built around a professional civil service to both enforce the laws and protect rights.
Private security guards turn this bedrock principle of modern government on its head. Whether it's FedEx policemen in Tennessee who can request search warrants and make arrests; a privately funded surveillance helicopter in Jackson, Miss., that can bypass constitutional restrictions on aerial spying; or employees of Capitol Special Police in North Carolina who are lobbying to expand their jurisdiction beyond the specific properties they protect -- privately funded policemen are not protecting us or working in our best interests.
This op ed originally appeared in the Minneapolis Star-Tribune.
EDITED TO ADD (4/2): This is relevant.
The Type 45 destroyers now being launched will run Windows for Warships: and that's not all. The attack submarine Torbay has been retrofitted with Microsoft-based command systems, and as time goes by the rest of the British submarine fleet will get the same treatment, including the Vanguard class (V class). The V boats carry the UK's nuclear weapons and are armed with Trident ICBMs, tipped with multiple H-bomb warheads.
And here's a related story about a software bug in the F-22 Raptor stealth fighter. It seems that the computer systems had problems flying West across the International Date Line. No word as to what operating system the computers were running.
EDITED TO ADD (2/27): Here's a related article from 1998, involving Windows NT and the USS Yorktown.
And a knitted squid iPod case.
I just counted, and 68 blog readers sent me an e-mail about this story.
In New Mexico, a bomb squad blew up two CD players, duct-taped to the bottoms of church pews, that played pornographic messages during Mass. This is a pretty funny high school prank, and I hope the kids who did it get suitably punished. But they're not terrorists. And I have a hard time believing that the police actually thought CD players were bombs.
Meanwhile, Irish police blew up a tape dispenser left outside a police station.
And not to be outdone, the Dutch police mistook one of their own transmitters for a bomb. At least they didn't blow anything up.
Okay, everyone. We need some ideas here. If we're going to think everything weird is a bomb, then the false alarms are going to kill any hope of security.
Interesting report (long, but at least read the Executive Summary) from the U.S. Department of Justice's Inspector General that says, basically, that all the U.S. terrorism statistics since 9/11 -- arrests, convictions, and so on -- have been grossly inflated.
As summarized in the following table, we determined that the FBI, EOUSA, and the Criminal Division did not accurately report 24 of the 26 statistics we reviewed.
"EOUSA" is the Executive Office for United States Attorneys, part of the U.S. Department of Justice.
The report gives a series of reasons why the statistics were so bad. Here's one:
The number of terrorism-related convictions was overstated because the FBI initially coded the investigative cases as terrorism-related when the cases were opened, but did not recode cases when no link to terrorism was established.
And here's an example of a problem:
For example, Operation Tarmac was a worksite enforcement operation launched in November 2001 at the nation’s airports. During this operation, Department and other federal agents went into regional airports and checked the immigration papers of airport workers. The agents then arrested any individuals who used falsified documents, such as social security numbers, drivers’ licenses, and other identification documents, to gain employment. EOUSA officials told us they believe these defendants are properly coded under the anti-terrorism program activity. We do not agree that law enforcement efforts such as these should be counted as "anti-terrorism" unless the subject or target is reasonably linked to terrorist activity.
There's an enormous amount of detail in the report, if you want to wade through the 80ish pages of report and another 80ish of appendices.
Sid Stamm, Zulfikar Ramzan, and Markus Jakobsson have developed a clever, and potentially devastating, attack against home routers.
And then the attacker basically owns the victim's web connection.
The main condition for the attack to be successful is that the attacker can guess the router password. This is surprisingly easy, since home routers come with a default password that is uniform and often never changed.
They've written proof of concept code that can successfully carry out the steps of the attack on Linksys, D-Link, and NETGEAR home routers. If users change their home broadband router passwords to something difficult to guess, they are safe from this attack.
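The mechanics of the attack are easy to sketch. A malicious web page can make the victim's own browser send an authenticated configuration request to the router on the local network. The gateway address, the `/apply.cgi` endpoint, and the credential table below are all illustrative assumptions, not taken from any specific firmware; this is a sketch of the request shape, not the researchers' actual code:

```python
# Sketch of the forged request a malicious page could cause a victim's
# browser to send to a home router still using factory-default credentials.
# Router names, gateway IP, endpoint, and credentials are illustrative only.
import base64

DEFAULT_CREDENTIALS = {            # commonly cited factory defaults (illustrative)
    "Linksys": ("admin", "admin"),
    "D-Link": ("admin", ""),
    "NETGEAR": ("admin", "password"),
}

def forged_dns_change_request(gateway_ip, user, password, evil_dns):
    """Build the HTTP request that would repoint the router's DNS setting,
    assuming a hypothetical /apply.cgi endpoint behind Basic authentication."""
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /apply.cgi?dns_server={evil_dns} HTTP/1.1\r\n"
        f"Host: {gateway_ip}\r\n"
        f"Authorization: Basic {auth}\r\n\r\n"
    )

req = forged_dns_change_request("192.168.1.1", *DEFAULT_CREDENTIALS["Linksys"],
                                evil_dns="10.6.6.6")
print(req.splitlines()[0])
```

Once the router's DNS setting points at an attacker-controlled server, every hostname lookup the victim makes can be silently redirected, which is why changing the default password defeats the whole thing.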
Cisco says that 77 of its routers are vulnerable.
Since 9/11, we've spent hundreds of billions of dollars defending ourselves from terrorist attacks. Stories about the ineffectiveness of many of these security measures are common, but less so are discussions of why they are so ineffective. In short: much of our country's counterterrorism security spending is not designed to protect us from the terrorists, but instead to protect our public officials from criticism when another attack occurs.
Boston, January 31: As part of a guerrilla marketing campaign, a series of amateur-looking blinking signs depicting characters in the Aqua Teen Hunger Force, a show on the Cartoon Network, were placed on bridges, near a medical center, underneath an interstate highway, and in other crowded public places.
Police mistook these signs for bombs and shut down parts of the city, eventually spending over $1M sorting it out. Authorities blasted the stunt as a terrorist hoax, while others ridiculed the Boston authorities for overreacting. Almost no one looked beyond the finger pointing and jeering to discuss exactly why the Boston authorities overreacted so badly. They overreacted because the signs were weird.
If someone left a backpack full of explosives in a crowded movie theater, or detonated a truck bomb in the middle of a tunnel, no one would demand to know why the police hadn't noticed it beforehand. But if a weird device with blinking lights and wires turned out to be a bomb -- what every movie bomb looks like -- there would be inquiries and demands for resignations. It took the police two weeks to notice the Mooninite blinkies, but once they did, they overreacted because their jobs were at stake.
This is "Cover Your Ass" security, and unfortunately it's very common.
Airplane security seems to forever be looking backwards. Pre-9/11, it was bombs, guns, and knives. Then it was small blades and box cutters. Richard Reid tried to blow up a plane, and suddenly we all have to take off our shoes. And after last summer's liquid plot, we're stuck with a series of nonsensical bans on liquids and gels.
Once you think about this in terms of CYA, it starts to make sense. The TSA wants to be sure that if there's another airplane terrorist attack, it's not held responsible for letting it slip through. One year ago, no one could blame the TSA for not detecting liquids. But since everything seems obvious in hindsight, it's basic job preservation to defend against what the terrorists tried last time.
We saw this kind of CYA security when Boston and New York randomly checked bags on the subways after the London bombing, or when buildings started sprouting concrete barriers after the Oklahoma City bombing. We also see it in ineffective attempts to detect nuclear bombs; authorities employ CYA security against the media-driven threat so they can say "we tried."
At the same time, we're ignoring threat possibilities that don't make the news as much -- against chemical plants, for example. But if there were ever an attack, that would change quickly.
CYA also explains the TSA's inability to take anyone off the no-fly list, no matter how innocent. No one is willing to risk his career on removing someone from the no-fly list who might -- no matter how remote the possibility -- turn out to be the next terrorist mastermind.
Another form of CYA security is the overly specific countermeasures we see during big events like the Olympics and the Oscars, or in protecting small towns. In all those cases, those in charge of the specific security don't dare return the money with a message "use this for more effective general countermeasures." If they were wrong and something happened, they'd lose their jobs.
And finally, we're seeing CYA security on the national level, from our politicians. We might be better off as a nation funding intelligence gathering and Arabic translators, but it's a better re-election strategy to fund something visible but ineffective, like a national ID card or a wall between the U.S. and Mexico.
Securing our nation from threats that are weird, threats that either happened before or captured the media's imagination, and overly specific threats are all examples of CYA security. It happens not because the authorities involved -- the Boston police, the TSA, and so on -- are not competent, or not doing their job. It happens because there isn't sufficient national oversight, planning, and coordination.
People and organizations respond to incentives. We can't expect the Boston police, the TSA, the guy who runs security for the Oscars, or local public officials to balance their own security needs against the security of the nation. They're all going to respond to the particular incentives imposed from above. What we need is a coherent antiterrorism policy at the national level: one based on real threat assessments, instead of fear-mongering, re-election strategies, or pork-barrel politics.
Sadly, though, there might not be a solution. All the money is in fear-mongering, re-election strategies, and pork-barrel politics. And, like so many things, security follows the money.
This essay originally appeared on Wired.com.
The process took five years:
The biggest frustration OSSI encountered from the seemingly endless delays is that the software validated by the CMVP is now more than three years old. "[This toolkit] is branched from version 0.9.7, but 0.9.8 is already available and 0.9.9 is in development," says Marquess. "We're glad it's available, but now it's dated. We understand a lot better what the CMVP's requirements are, though, so validation will go more smoothly next time around. We also know the criticism we'll encounter, and we'll nail them with the next release."
This is one problem with long certification cycles; software development cycles are faster.
The idiocy of this is impressive:
A Vancouver Police computer crime investigator has warned the city that plans for a citywide wireless Internet system put the city at risk of terrorist attack during the 2010 Winter Olympic Games.
The problem? Well, the problem seems to be that terrorists might attend the Olympic games and use the Internet while they're there.
"If you have an open wireless system across the city, as a bad guy I could sit on a bus with a laptop and do global crime," Fenton explained. "It would be virtually impossible to find me."
There's also some scary stuff about SCADA systems, and the city putting some of its own services on the Internet. Clearly this guy has thought about the risks a lot, just not with any sense. He's overestimating cyberterrorism. He's overestimating how important this one particular method of wireless Internet access is. He's overestimating how important the 2010 Winter Olympics are.
But the newspaper was happy to play along and spread the fear. The photograph accompanying the article is captioned: "Anyone with a laptop and wireless access could commit a terrorist act, police warn."
From The Register:
In a recent social engineering test undertaken by UK-based security consultancy NTA Monitor, a tester was able to easily gain access to a corporate building through a back door that was left open for smokers.
Arnezami, a hacker on the Doom9 forum, has published a crack for extracting the "processing key" from a high-def DVD player. This key can be used to gain access to every single Blu-ray and HD DVD disc.
As I have said before, what will be interesting to watch is how well HD DVD and Blu-ray recover. Both were built expecting these sorts of cracks, and both have mechanisms to recover security for future movies. It remains to be seen how well these recovery systems will work.
What's interesting is that Microsoft is positioning this as a trade-off between security and ease-of-use. That's correct, of course, but it seems that someone made a bad decision in this regard.
From the BBC:
Big deep-sea squid emit blinding flashes of light as they attack their prey, research shows.
According to a new report, the FBI has lost 160 laptops, including at least ten with classified information, in the past four years.
But it's not all bad news:
The results are an improvement on findings in a similar audit in 2002, which reported that 354 weapons and 317 laptops were lost or stolen at the FBI over about two years. They follow the high-profile losses last year of laptops containing personal information from the Veterans Administration and the Internal Revenue Service.
It's almost too absurd to even write about seriously -- this plan to spot terrorists in airplane seats:
Cameras fitted to seat-backs will record every twitch, blink, facial expression or suspicious movement before sending the data to onboard software which will check it against individual passenger profiles.
The only thing I can think of is that some company press release got turned into real news without a whole lot of thinking.
I'll bet this sort of problem is pretty common.
Here's an article on a brain scanning technique that reads people's intentions.
There's not a lot of detail, but my guess is that it doesn't work very well. But that's not really the point. If it doesn't work today, it will in five, ten, twenty years; it will work eventually.
What we need to do, today, is debate the legality and ethics of these sorts of interrogations:
"These techniques are emerging and we need an ethical debate about the implications, so that one day we're not surprised and overwhelmed and caught on the wrong foot by what they can do. These things are going to come to us in the next few years and we should really be prepared," Professor Haynes told the Guardian.
More discussion along these lines is in the article. And I wrote about this sort of thing in 2005, in the context of Judge Roberts' confirmation hearings.
It's called "Bitfrost," and it's interesting:
We have set out to create a system that is both drastically more secure and provides drastically more usable security than any mainstream system currently on the market. One result of the dedication to usability is that there is only one protection provided by the Bitfrost platform that requires user response, and even then, it's a simple 'yes or no' question understandable even by young children. The remainder of the security is provided behind the scenes. But pushing the envelope on both security and usability is a tall order, and it's important to note that we have neither tried to create, nor do we believe we have created, a "perfectly secure" system. Notions of perfect security in the real world are foolish, and we distance ourselves up front from any such claims.
What they propose to do is radical, and different -- just like the whole One Laptop Per Child program. Definitely worth paying attention to, and supporting if possible.
Here's a short interview I did for Information Week.
We've all seen those anti-counterfeiting holograms: on credit cards, on software, on expensive apparel.
Turns out they're getting easier to counterfeit.
This is a good summary of the SWIFT privacy case:
This week, the Article 29 group -- a panel of European Commissioners for Freedom, Security, and Justice -- ruled that the interbank money transfer service SWIFT (Society for Worldwide Interbank Financial Telecommunication) has failed to respect the provisions of the EU Data Protection directive by transferring personal financial data to the US in a manner the press release describes as "hidden, systematic, massive, and long-term."
Windows Vista includes an array of "features" that you don't want. These features will make your computer less reliable and less secure. They'll make your computer less stable and run slower. They will cause technical support problems. They may even require you to upgrade some of your peripheral hardware and existing software. And these features won't do anything useful. In fact, they're working against you. They're digital rights management (DRM) features built into Vista at the behest of the entertainment industry.
And you don't get to refuse them.
The details are pretty geeky, but basically Microsoft has reworked a lot of the core operating system to add copy protection technology for new media formats like HD DVD and Blu-ray disks. Certain high-quality output paths -- audio and video -- are reserved for protected peripheral devices. Sometimes output quality is artificially degraded; sometimes output is prevented entirely. And Vista continuously spends CPU time monitoring itself, trying to figure out if you're doing something that it thinks you shouldn't. If it does, it limits functionality and in extreme cases restarts just the video subsystem. We still don't know the exact details of all this, and how far-reaching it is, but it doesn't look good.
Microsoft put all those functionality-crippling features into Vista because it wants to own the entertainment industry. This isn't how Microsoft spins it, of course. It maintains that it has no choice, that it's Hollywood that is demanding DRM in Windows in order to allow "premium content" -- meaning, new movies that are still earning revenue -- onto your computer. If Microsoft didn't play along, it'd be relegated to second-class status as Hollywood pulled its support for the platform.
It's all complete nonsense. Microsoft could have easily told the entertainment industry that it was not going to deliberately cripple its operating system, take it or leave it. With 95% of the operating system market, where else would Hollywood go? Sure, Big Media has been pushing DRM, but recently some -- Sony after their 2005 debacle and now EMI Group -- are having second thoughts.
What the entertainment companies are finally realizing is that DRM doesn't work, and just annoys their customers. Like every other DRM system ever invented, Microsoft's won't keep the professional pirates from making copies of whatever they want. The DRM security in Vista was broken the day it was released. Sure, Microsoft will patch it, but the patched system will get broken as well. It's an arms race, and the defenders can't possibly win.
I believe that Microsoft knows this and also knows that it doesn't matter. This isn't about stopping pirates and the small percentage of people who download free movies from the Internet. This isn't even about Microsoft satisfying its Hollywood customers at the expense of those of us paying for the privilege of using Vista. This is about the overwhelming majority of honest users and who owns the distribution channels to them. And while it may have started as a partnership, in the end Microsoft is going to end up locking the movie companies into selling content in its proprietary formats.
We saw this trick before; Apple pulled it on the recording industry. First iTunes worked in partnership with the major record labels to distribute content, but soon Warner Music's CEO Edgar Bronfman Jr. found that he wasn't able to dictate a pricing model to Steve Jobs. The same thing will happen here; after Vista is firmly entrenched in the marketplace, Sony's Howard Stringer won't be able to dictate pricing or terms to Bill Gates. This is a war for 21st-century movie distribution and, when the dust settles, Hollywood won't know what hit them.
To be fair, just last week Steve Jobs publicly came out against DRM for music. It's a reasonable business position, now that Apple controls the online music distribution market. But Jobs never mentioned movies, and he is the largest single shareholder in Disney. Talk is cheap. The real question is: would he actually allow iTunes Music Store purchases to play on Microsoft or Sony players, or is this just a clever way of deflecting blame to the -- already hated -- music labels?
Microsoft is reaching for a much bigger prize than Apple: not just Hollywood, but also peripheral hardware vendors. Vista's DRM will require driver developers to comply with all kinds of rules and be certified; otherwise, they won't work. And Microsoft talks about expanding this to independent software vendors as well. It's another war for control of the computer market.
Unfortunately, we users are caught in the crossfire. We are not only stuck with DRM systems that interfere with our legitimate fair-use rights for the content we buy; we're also stuck with DRM systems that interfere with all of our computer use -- even the uses that have nothing to do with copyright.
I don't see the market righting this wrong, because Microsoft's monopoly position gives it much more power than we consumers can hope to have. It might not be as obvious as Microsoft using its operating system monopoly to kill Netscape and own the browser market, but it's really no different. Microsoft's entertainment market grab might further entrench its monopoly position, but it will cause serious damage to both the computer and entertainment industries. DRM is bad, both for consumers and for the entertainment industry: something the entertainment industry is just starting to realize, but Microsoft is still fighting. Some researchers think that this is the final straw that will drive Windows users to the competition, but I think the courts are necessary.
In the meantime, the only advice I can offer you is to not upgrade to Vista. It will be hard. Microsoft's bundling deals with computer manufacturers mean that it will be increasingly hard not to get the new operating system with new computers. And Microsoft has some pretty deep pockets and can wait us all out if it wants to. Yes, some people will shift to Macintosh, and a smaller number to Linux, but most of us are stuck on Windows. Still, if enough customers say no to Vista, the company might actually listen.
This essay originally appeared on Forbes.com.
This article is a perfect illustration of the wasteful, pork-barrel, political spending that we like to call "homeland security." And to think we could actually be spending this money on something useful.
When the fire department in the tiny Berkshire hamlet of Cheshire needed a new fire truck, it asked Uncle Sam for a little help.
How many times is this story being repeated across the country? I'm sure the town needs its fire truck, and I hope it gets it. But this is just appalling.
On June 10, 2006, I gave a talk at the ACLU New Jersey Membership Conference: "Counterterrorism in America: Security Theater Against Movie-Plot Threats." Here's the video.
EDITED TO ADD (2/10): The video is a little over an hour long. You can download the .WMV version directly here. It will play in the cross-platform, GPL VLC media player, but you may need to upgrade to the most recent version (0.8.6).
EDITED TO ADD (2/11): Someone put the video up on Google Video.
Ross Anderson and Tyler Moore just published their survey paper on the economics of information security. (Here are the slides from the conference talk, and a shorter version from Science.)
Three pipe bombs were found in the town of Pearblossom, California, and -- it seems -- disposed of without causing hysteria.
Boston, are you paying attention?
Interesting data from New York. The number of people stopped and searched has gone up fivefold since 2002, but the number of arrests due to these stops has only doubled. (The number of "summonses" has also gone up fivefold.)
Good data for the "Is it worth it?" question.
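The implied change in effectiveness follows directly from the two multipliers. A quick back-of-the-envelope check:

```python
# If stops went up fivefold while arrests from stops only doubled, the
# fraction of stops producing an arrest fell to 2/5 of what it was.
stops_multiplier = 5.0
arrests_multiplier = 2.0

relative_hit_rate = arrests_multiplier / stops_multiplier
print(f"New hit rate is {relative_hit_rate:.0%} of the old one")
```

In other words, each individual stop is now less than half as likely to turn up anything arrest-worthy as it was in 2002.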
The U.S. National Institute of Standards and Technology is having a competition for a new cryptographic hash function.
This matters. The phrase "one-way hash function" might sound arcane and geeky, but hash functions are the workhorses of modern cryptography. They provide web security in SSL. They help with key management in e-mail and voice encryption: PGP, Skype, all the others. They help make it harder to guess passwords. They're used in virtual private networks, help provide DNS security and ensure that your automatic software updates are legitimate. They provide all sorts of security functions in your operating system. Every time you do something with security on the internet, a hash function is involved somewhere.
Basically, a hash function is a fingerprint function. It takes a variable-length input -- anywhere from a single byte to a file terabytes in length -- and converts it to a fixed-length string: 20 bytes, for example.
One-way hash functions are supposed to have two properties. First, they're one-way. This means that it is easy to take an input and compute the hash value, but it's impossible to take a hash value and recreate the original input. By "impossible" I mean "can't be done in any reasonable amount of time."
Second, they're collision-free. This means that even though there are an infinite number of inputs for every hash value, you're never going to find two of them. Again, "never" is defined as above. The cryptographic reasoning behind these two properties is subtle, but any cryptographic text talks about them.
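Both properties are easy to see in practice with the hash functions shipped in any modern language's standard library. A minimal sketch in Python, using SHA-256 (SHA-1 behaves the same way structurally):

```python
# Fingerprint behavior: any input length in, one fixed-length digest out,
# and a tiny change to the input produces a completely different value.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

short = fingerprint(b"x")              # single-byte input
long_ = fingerprint(b"x" * 1_000_000)  # million-byte input

assert len(short) == len(long_) == 64  # always 32 bytes (64 hex characters)
assert fingerprint(b"hello") != fingerprint(b"hello!")
print(short)
```

What code like this cannot show you, of course, is the hard part: that nobody can run the function backwards, or deliberately construct two inputs with the same output.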
The hash function you're most likely to use routinely is SHA-1. Invented by the National Security Agency, it's been around since 1995. Recently, though, there have been some pretty impressive cryptanalytic attacks against the algorithm. The best attack is barely on the edge of feasibility, and not effective against all applications of SHA-1. But there's an old saying inside the NSA: "Attacks always get better; they never get worse." It's past time to abandon SHA-1.
There are near-term alternatives -- a related algorithm called SHA-256 is the most obvious -- but they're all based on the family of hash functions first developed in 1992. We've learned a lot more about the topic in the past 15 years, and can certainly do better.
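The jump in security margin from the near-term alternative is easy to quantify: the generic birthday attack against an n-bit hash takes roughly 2^(n/2) work, so moving from a 160-bit to a 256-bit digest raises the generic collision bound from 2^80 to 2^128. A quick check using Python's standard library:

```python
# Digest sizes and the generic birthday-attack work factors for the
# legacy algorithm and its near-term replacement.
import hashlib

sha1_bits = hashlib.sha1().digest_size * 8      # 160
sha256_bits = hashlib.sha256().digest_size * 8  # 256

print(sha1_bits, sha256_bits)
print(f"generic collision work: 2^{sha1_bits // 2} vs 2^{sha256_bits // 2}")
```

The cryptanalytic attacks mentioned above do better than the generic 2^80 bound against SHA-1, which is exactly why it's past time to move on.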
Why the National Institute of Standards and Technology, or NIST, though? Because it has exactly the experience and reputation we want. We were in the same position with encryption functions in 1997. We needed to replace the Data Encryption Standard, but it wasn't obvious what should replace it. NIST decided to orchestrate a worldwide competition for a new encryption algorithm. There were 15 submissions from 10 countries -- I was part of the group that submitted Twofish -- and after four years of analysis and cryptanalysis, NIST chose the algorithm Rijndael to become the Advanced Encryption Standard (.pdf), or AES.
The AES competition was the most fun I've ever had in cryptography. Think of it as a giant cryptographic demolition derby: A bunch of us put our best work into the ring, and then we beat on each other until there was only one standing. It was really more academic and structured than that, but the process stimulated a lot of research in block-cipher design and cryptanalysis. I personally learned an enormous amount about those topics from the AES competition, and we as a community benefited immeasurably.
NIST did a great job managing the AES process, so it's the perfect choice to do the same thing with hash functions. And it's doing just that. Last year and the year before, NIST sponsored two workshops to discuss the requirements for a new hash function, and last month it announced a competition to choose a replacement for SHA-1. Submissions will be due in fall 2008, and a single standard is scheduled to be chosen by the end of 2011.
Yes, this is a reasonable schedule. Designing a secure hash function seems harder than designing a secure encryption algorithm, although we don't know whether this is inherently true of the mathematics or simply a result of our imperfect knowledge. Producing a new secure hash standard is going to take a while. Luckily, we have an interim solution in SHA-256.
Now, if you'll excuse me, the Twofish team needs to reconstitute and get to work on an Advanced Hash Standard submission.
This essay originally appeared on Wired.com.
EDITED TO ADD (2/8): Every time I write about one-way hash functions, I get responses from people claiming they can't possibly be secure because an infinite number of texts hash to the same short (160-bit, in the case of SHA-1) hash value. Yes, of course an infinite number of texts hash to the same value; that's the way the function works. But the odds of it happening naturally are less than the odds of all the air molecules bunching up in the corner of the room and suffocating you, and you can't force it to happen either. Right now, several groups are trying to implement Xiaoyun Wang's attack against SHA-1. I predict one of them will find two texts that hash to the same value this year -- it will demonstrate that the hash function is broken and be really big news.
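The intuition about those odds can be made concrete with the birthday bound: for an n-bit hash, you'd expect to need roughly 2^(n/2) random inputs before a collision becomes likely. The calculation below is an illustrative sketch (not part of the original essay) showing why brute-force collisions against a 160-bit hash are out of reach, and why Wang's attack -- which does far better than this bound -- matters:

```python
import math

def birthday_bound(bits: int) -> float:
    """Approximate number of random hashes needed for a ~50%
    chance of finding some collision in an n-bit hash."""
    return math.sqrt(2 * math.log(2) * 2 ** bits)

# For SHA-1's 160-bit output, this is on the order of 2^80 hashes --
# utterly infeasible by brute force, which is why a shortcut attack
# like Wang's is big news.
print(f"about 2^{math.log2(birthday_bound(160)):.1f} hashes")
```

A cryptanalytic attack that finds collisions in meaningfully fewer than 2^80 operations is exactly what "the hash function is broken" means here.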
Fascinating article on the Corsham bunker, the secret underground UK site the government was to retreat to in the event of a nuclear war.
Until two years ago, the existence of this complex, variously codenamed Burlington, Stockwell, Turnstile or 3-Site, was classified. It was a huge yet very secret complex, where the government and 6,000 apparatchiks would have taken refuge for 90 days during all-out thermonuclear war. Solid yet cavernous, surrounded by 100ft-deep reinforced concrete walls within a subterranean 240-acre limestone quarry just outside Corsham, it drives one to imagine the ghosts of people who, thank God, never took refuge here.
According to this article, "Mistaken eyewitness identification is the leading cause of wrongful convictions." Given what I've been reading recently about memory and the brain, this does not surprise me at all.
New Mexico is currently debating a bill reforming eyewitness identification procedures:
Under the proposed regulations, an eyewitness must provide a written description before a lineup takes place; there must be at least six individuals in a live lineup and 10 photos in a photographic line-up; and the members of the lineup must be shown sequentially rather than simultaneously.
I don't have access to any of the psychological or criminology studies that back these reforms up, but the bill is being supported by the right sorts of people.
We make security trade-offs, large and small, every day. We make them when we decide to lock our doors in the morning, when we choose our driving route, and when we decide whether we're going to pay for something via check, credit card, or cash. They're often not the only factor in a decision, but they're a contributing factor. And most of the time, we don't even realize it. We make security trade-offs intuitively. Most decisions are default decisions, and there have been many popular books that explore reaction, intuition, choice, and decision.
The essay examines particular brain heuristics, how they work and how they fail, in an attempt to explain why our feeling of security so often diverges from reality. I'm giving a talk on the topic at the RSA Conference today at 3:00 PM. Dark Reading posted an article on this, also discussed on Slashdot. CSO Online also has a podcast interview with me on the topic. I expect there'll be more press coverage this week.
The essay is really still in draft, and I would very much appreciate any and all comments, criticisms, additions, corrections, suggestions for further research, and so on. I think security technology has a lot to learn from psychology, and that I've only scratched the surface of the interesting and relevant research -- and what it means.
I officially declare that the industry has run out of good names for security companies.
Also, if you are planning to go to the Super Bowl game on Sunday, be aware that additional security measures will be in effect, as follows:
Back in 2004, I wrote a more serious essay on security at the World Series.
One company sells them to the vendors:
The founder of a small Moscow security company, Gleg, Legerov scrutinizes computer code in commonly used software for programming bugs, which attackers can use to break into computer systems, and sends his findings to a few dozen corporate customers around the world. Each customer pays more than $10,000 for information it can use to plug the hidden holes in its computers and stay ahead of criminal hackers.
This month, iDefense, a Virginia-based subsidiary of the technology company VeriSign, began offering an $8,000 bounty to the first six researchers to find holes in Vista or the newest version of Internet Explorer, and up to $4,000 more for code that can take advantage of the weaknesses. Like Gleg, iDefense will sell information about those vulnerabilities to companies and government agencies for an undisclosed amount, though iDefense makes it a practice to alert vendors like Microsoft first.
So do criminals:
But the iDefense rewards are low compared to bounties offered on the black market. In December, the Japanese antivirus company TrendMicro found a Vista vulnerability being offered by an anonymous hacker on a Romanian Web forum for $50,000.
There's a lot of FUD in this article, but also some good stuff.
"Knowing the Enemy," by George Packer (The New Yorker, Dec 18, 2006), is a fascinating article about the social science needed to prevail against Islamic terrorism, which the author argues is best characterized as a counterinsurgency.
Rebecca Blood interviewed me for her "Bloggers on Blogging" series.
I've said it, and now so has the director of the Canadian Security Intelligence Service:
Canada's spy master, of all people, is warning that excessive government secrecy and draconian counterterrorism measures will only play into the hands of terrorists.
Symantec has published a good analysis of Windows Vista security.
The story is almost too funny to write about seriously. To advertise the Cartoon Network show "Aqua Teen Hunger Force," the network put up 38 blinking signs (kind of like Lite Brites) around the Boston area. The Boston police decided -- with absolutely no supporting evidence -- that these were bombs and shut down parts of the city.
Now the police look stupid, but they're trying really hard not to act humiliated:
Governor Deval Patrick told the Associated Press: "It's a hoax -- and it's not funny."
Unfortunately, it is funny. What isn't funny is how the Boston government is trying to prosecute the artist and the network instead of owning up to its own stupidity. The police now claim that these were "hoax" explosive devices. But you can't claim they are hoax explosive devices unless they were intended to look like explosive devices, and even a cursory look at any of them shows that they weren't.
But it's much easier to blame others than to admit that you were wrong:
"It is outrageous, in a post 9/11 world, that a company would use this type of marketing scheme," Mayor Thomas Menino said. "I am prepared to take any and all legal action against Turner Broadcasting and its affiliates for any and all expenses incurred."
Rep. Ed Markey, a Boston-area congressman, said, "Whoever thought this up needs to find another job."
"It had a very sinister appearance," [Massachusetts Attorney General Martha] Coakley told reporters. "It had a battery behind it, and wires."
For heaven's sake, don't let her inside a Radio Shack.
I like this comment:
They consisted of magnetic signs with blinking lights in the shape of a cartoon character.
And this one:
"It's almost too easy to be a terrorist these days," said Jennifer Mason, 26. "You stick a box on a corner and you can shut down a city."
And this one, by one of the artists who installed the signs:
"I find it kind of ridiculous that they're making these statements on TV that we must not be safe from terrorism, because they were up there for three weeks and no one noticed. It's pretty commonsensical to look at them and say this is a piece of art and installation," he said.
Right. If this wasn't a ridiculous overreaction to a non-existent threat, then how come the devices were in place for weeks without anyone noticing them? What does that say about the Boston police?
Maybe if the Boston police stopped wasting time and money searching bags on subways....
Of the 2,449 inspections between Oct. 10 and Dec. 31, the bags of 27 riders tested positive in the initial screening for explosives, prompting further searches, the Globe found in an analysis of daily inspection reports obtained under the state's Freedom of Information Act.
These blinking signs have been up for weeks in ten cities -- Boston, New York, Los Angeles, Chicago, Atlanta, Seattle, Portland, Austin, San Francisco, and Philadelphia -- and no one else has managed to panic so completely. Refuse to be terrorized, people!
EDITED TO ADD (2/2): Here's some good information about whether the stunt broke the law or not.
EDITED TO ADD (2/3): This is 100% right:
Let's get a few facts straight on the Aqua Teen Hunger Force sign fiasco:
And this is also worth reading.
EDITED TO ADD (2/6): More info.
Fascinating story of an Israeli taxi driver who picked up a suicide bomber. What's interesting to me is how the driver comes to realize his passenger is a suicide bomber. It wasn't anything that comes up on a profile, but a feeling that something is wrong:
Mr Woltinsky said he realised straight away that something was not quite right.
In other words, his passenger was acting hinky.
EDITED TO ADD (2/1): The Israeli was not a taxi driver. Apologies.