Schneier on Security
A blog covering security and security technology.
August 2005 Archives
The website Cryptome has a list of 276 MI6 agents:
This combines three lists of MI6 officers published here on 13 May 1999 (116 names), 21 August 2005 (74 names), and 27 August 2005 (121 names).
According to Silicon.com:
It is not the first time this kind of information has been published on the internet and Foreign Office policy is to neither confirm nor deny the accuracy of such lists. But a spokesman slammed its publication for potentially putting lives in danger.
On the other hand:
The website is run by John Young, who "welcomes" secret documents for publication and recently said there was a "need to name as many intelligence officers and agents as possible".
The Trusted Computing Group (TCG) is an industry consortium that is trying to build more secure computers. They have a lot of members, although the board of directors consists of Microsoft, Sony, AMD, Intel, IBM, SUN, HP, and two smaller companies who are voted in on a rotating basis.
The basic idea is that you build a computer from the ground up securely, with a core hardware "root of trust" called a Trusted Platform Module (TPM). Applications can run securely on the computer, can communicate with other applications and their owners securely, and can be sure that no untrusted applications have access to their data or code.
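The mechanism behind the "root of trust" is worth seeing concretely. A TPM records each boot stage in a Platform Configuration Register by hashing it into a running chain before the stage runs, so the final value commits to the entire boot sequence. Here's a minimal sketch of that extend operation in Python, using SHA-1 as the TPM 1.2 specification did; the stage names are invented:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measurement)).
    Because the old value is folded in, the result depends on every
    prior measurement and on their order."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

# Start from the all-zeros reset value and measure each boot stage in order.
pcr = bytes(20)
for stage in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, stage)

# A verifier replaying the same measurements gets the same PCR value;
# changing or reordering any stage yields a different one.
print(pcr.hex())
```

The point of the construction is that software can only ever add to the chain, never rewrite it, which is what lets a remote party check what actually booted.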
This sounds great, but it's a double-edged sword. The same system that prevents worms and viruses from running on your computer might also stop you from using any legitimate software that your hardware or operating system vendor simply doesn't like. The same system that prevents spyware from accessing your data files might also stop you from copying audio and video files. The same system that ensures that all the patches you download are legitimate might also prevent you from, well, doing pretty much anything.
In May, the Trusted Computing Group published a best practices document: "Design, Implementation, and Usage Principles for TPM-Based Platforms." Written for users and implementers of TCG technology, the document tries to draw a line between good uses and bad uses of this technology.
The principles that TCG believes underlie the effective, useful, and acceptable design, implementation, and use of TCG technologies are the following:
It's basically a good document, although there are some valid criticisms. I like that the document clearly states that coercive use of the technology -- forcing people to use digital rights management systems, for example -- is inappropriate:
The use of coercion to effectively force the use of the TPM capabilities is not an appropriate use of the TCG technology.
I like that the document tries to protect user privacy:
All implementations of TCG-enabled components should ensure that the TCG technology is not inappropriately used for data aggregation of personal information.
I wish that interoperability were more strongly enforced. The language has too much wiggle room for companies to break interoperability under the guise of security:
Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.
That sounds good, but what does "security" mean in that context? Security of the user against malicious code? Security of big media against people copying music and videos? Security of software vendors against competition? The big problem with TCG technology is that it can be used to further all three of these "security" goals, and this document is where "security" should be better defined.
Complaints aside, it's a good document and we should all hope that companies follow it. Compliance is totally voluntary, but it's the kind of document that governments and large corporations can point to and demand that vendors follow.
But there's something fishy going on. Microsoft is doing its best to stall the document, and to ensure that it doesn't apply to Vista (formerly known as Longhorn), Microsoft's next-generation operating system.
The document was first written in the fall of 2003, and went through the standard review process in early 2004. Microsoft delayed the adoption and publication of the document, demanding more review. Eventually the document was published in June of this year (with a May date on the cover).
Meanwhile, the TCG built a purely software version of the specification: Trusted Network Connect (TNC). Basically, it's a TCG system without a TPM.
The best practices document doesn't apply to TNC, because Microsoft (as a member of the TCG board of directors) blocked it. The excuse is that the document hadn't been written with software-only applications in mind, so it shouldn't apply to software-only TCG systems.
This is absurd. The document outlines best practices for how the system is used. There's nothing in it about how the system works internally. There's nothing unique to hardware-based systems, nothing that would be different for software-only systems. You can go through the document yourself and replace all references to "TPM" or "hardware" with "software" (or, better yet, "hardware or software") in five minutes. There are about a dozen changes, and none of them make any meaningful difference.
The only reason I can think of for all this Machiavellian maneuvering is that the TCG board of directors is making sure that the document doesn't apply to Vista. If the document isn't published until after Vista is released, then obviously it doesn't apply.
Near as I can tell, no one is following this story. No one is asking why TCG best practices apply to hardware-based systems if they're writing software-only specifications. No one is asking why the document doesn't apply to all TCG systems, since it's obviously written without any particular technology in mind. And no one is asking why the TCG is delaying the adoption of any software best practices.
I believe the reason is Microsoft and Vista, but clearly there's some investigative reporting to be done.
EDITED TO ADD: This comment completely misses my point. Which is odd; I thought I was pretty clear.
EDITED TO ADD: There is a thread on Slashdot on the topic.
Here's a new Internet data-mining research program with a cool name: Unintended Information Revelation:
Existing search engines process individual documents based on the number of times a key word appears in a single document, but UIR constructs a concept chain graph used to search for the best path connecting two ideas within a multitude of documents.
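To make the concept-chain idea concrete, here's a toy sketch in Python: terms are linked whenever they co-occur in a document, and a breadth-first search finds the shortest chain connecting two ideas across documents. The documents and terms are invented, and real UIR is presumably far more sophisticated:

```python
from collections import defaultdict, deque

def build_graph(documents):
    """Link two terms whenever they co-occur in the same document."""
    graph = defaultdict(set)
    for doc in documents:
        words = set(doc.lower().split())
        for w in words:
            graph[w] |= words - {w}
    return graph

def concept_chain(graph, start, goal):
    """Shortest chain of co-occurring terms connecting two ideas (BFS)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no chain of co-occurrences links the two ideas

docs = ["alice rents truck", "truck fertilizer purchase", "fertilizer bomb"]
print(concept_chain(build_graph(docs), "alice", "bomb"))
# A chain emerges even though no single document mentions both endpoints.
```

The interesting (and worrying) property is visible even in the toy version: the chain exists only across the corpus, never within any one document, which is exactly why per-document keyword search misses it.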
I'm a big fan of research, and I'm glad to see it being done. But I hope there is a lot of discussion and debate before we deploy something like this. I want to be convinced that the false positives don't make it useless as an intelligence-gathering tool.
We've all received them in the mail: envelopes from banks with PINs, access codes, or other secret information. The letters are somewhat tamper-proof, but mostly they're designed to be tamper-evident: if someone opens the letter and reads the information, you're going to know. The security devices include fully sealed packaging, and black inks that obscure the secret information if you hold the envelope up to the light.
Researchers from Cambridge University have been looking at the security inherent in these systems, and they've written a paper that outlines how to break them:
Abstract. Tamper-evident laser-printed PIN mailers are used by many institutions to issue PINs and other secrets to individuals in a secure manner. Such mailers are created by printing the PIN using a normal laser printer, but on to special stationery and using a special font. The background of the stationery disguises the PIN so that it cannot be read with the naked eye without tampering. We show that currently deployed PIN mailer technology (used by the major UK banks) is vulnerable to trivial attacks that reveal the PIN without tampering. We describe image processing attacks, where a colour difference between the toner and the stationery "masking pattern" is exploited. We also describe angled light attacks, where the reflective properties of the toner and stationery are exploited to allow the naked eye to separate the PIN from the backing pattern. All laser-printed mailers examined so far have been shown insecure.
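The image-processing attack is easy to visualize: laser toner is dark in every colour channel, while a coloured masking pattern is dark in only some of them, so a simple per-pixel threshold strips the mask away. Here's a toy sketch in Python, with invented pixel values standing in for a scanned mailer:

```python
def recover_pin_pixels(image):
    """Separate black toner from a coloured masking pattern.
    image: 2D grid of (r, g, b) tuples, 0-255. Toner is dark in every
    channel; coloured masking ink is bright in at least one, so keeping
    only pixels whose brightest channel is dark isolates the toner."""
    return [[1 if max(px) < 80 else 0 for px in row] for row in image]

TONER = (20, 20, 20)     # laser toner: near-black in all channels
MASK = (200, 40, 40)     # hypothetical reddish masking pattern
PAPER = (250, 250, 250)  # blank stationery

image = [[PAPER, TONER, MASK],
         [MASK, TONER, PAPER]]
print(recover_pin_pixels(image))  # only the toner pixels survive
```

The real attack works on scans of actual mailers, of course, but the principle is just this: the mask only hides the PIN from channels it shares with the toner.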
According to a researcher website:
It should be noted that we sat on this report for about 9 months, and the various manufacturers all have new products which address to varying degrees the issues raised in the report.
BBC covered the story.
James Cook left on a business trip to Florida, and his wife Paula went to Oklahoma to care for her sick mother. When the two returned to Frisco, Texas, several days later, their keys didn't work. The locks on the house had been changed.
This is a perfect example of the sort of fraud issue that a national ID card won't solve. The problem is not that identity credentials are too easy to forge. The problem is that the criminal needed nothing more than "Mrs. Cook's Social Security number, driver's license number and a copy of her signature." And the solution isn't a harder-to-forge card; the solution is to make the procedure for transferring real-estate ownership more onerous. If the Denton County Courthouse had better transaction authentication procedures, the particulars of identity authentication -- a national ID, a state driver's license, biometrics, or whatever -- wouldn't matter.
If we are ever going to solve identity theft, we need to think about it properly. The problem isn't misused identity information; the problem is fraudulent transactions.
Ignore the corporate sleaziness by Cingular for the moment -- they sold used cell phones meant for charity -- and focus on the privacy implications. Cingular didn't erase any of the personal information on the used phones they sold.
This reminds me of Simson Garfinkel's analysis of used hard drives. He found that 90% of them contained old data, some of it very private and interesting.
Erasing data is one of the big problems of the information age. We know how to do it, but it takes time and we mostly don't bother. And sadly, these kinds of privacy violations are more the norm than the exception. I don't think it will get better unless Cingular becomes liable for violating its customers' privacy like that.
EDITED TO ADD: I already wrote about the risks of losing small portable devices.
Peggy Noonan is opposed to the current round of U.S. base closings because, well, basically because she thinks they'll be useful if the government ever has to declare martial law.
I don't know anything about military bases, and what should be closed or remain open. What's interesting to me is that her essay is a perfect example of thinking based on movie-plot threats:
Among the things we may face over the next decade, as we all know, is another terrorist attack on American soil. But let's imagine the next one has many targets, is brilliantly planned and coordinated. Imagine that there are already 100 serious terror cells in the U.S., two per state. The members of each cell have been coming over, many but not all crossing our borders, for five years. They're working jobs, living lives, quietly planning.
This game of "let's imagine" really does stir up emotions, but it's not the way to plan national security policy. There's a movie plot to justify any possible national policy, and another to render that same policy ineffectual.
This of course is pure guessing on my part. I can't prove it with data.
That's precisely the problem.
From the Washington Post:
Web sites in China are being used heavily to target computer networks in the Defense Department and other U.S. agencies, successfully breaching hundreds of unclassified networks, according to several U.S. officials.
Did you know you could be arrested for carrying a police uniform in New York City?
With security tighter in the Big Apple since Sept. 11, 2001, the union that represents TV and film actors has begun advising its New York-area members to stop buying police costumes or carrying them to gigs, even if their performances require them.
This seems like overkill to me. I understand that a police uniform is an authentication device -- not a very good one, but one nonetheless -- and we want to make it harder for the bad guys to get one. But there's no reason to prohibit screen or stage actors from having police uniforms if it's part of their job. This seems similar to the laws surrounding lockpicks: you can be arrested for carrying them without a good reason, but locksmiths are allowed to own the tools of their trade.
Here's another bit from the article:
Under police department rules, real officers must be on hand any time an actor dons a police costume during a TV or film production.
I guess that's to prevent the actor from actually impersonating a policeman. But how often does that actually happen? Is this a good use of police manpower?
Does anyone know how other cities and countries handle this?
Interesting research grant from the NSF:
Technical security measures are often breached through social means, but little research has tackled the problem of system security in the context of the entire socio-technical system, with the interactions between the social and technical parts integrated into one model. Similar problems exist in the field of system safety, but recently a new accident model has been devised that uses a systems-theoretic approach to understand accident causation. Systems theory allows complex relationships between events and the system as a whole to be taken into account, so this new model permits an accident to be considered not simply as arising from a chain of individual component failures, but from the interactions among system components, including those that have not failed.
Why? Why, given that cameras didn't stop the London train bombings? Why, when there is no evidence that cameras are effective at reducing either terrorism or crime, and every reason to believe that they are ineffective?
One reason is that it's the "movie plot threat" of the moment. (You can hear the echoes of the movie plots when you read the various quotes in the news stories.) The terrorists bombed a subway in London, so we need to defend our subways. The other reason is that New York City officials are erring on the side of caution. If nothing happens, then it was only money. But if something does happen, they won't keep their jobs unless they can show they did everything possible. And technological solutions just make everyone feel better.
If I had $212 million to spend to defend against terrorism in the U.S., I would not spend it on cameras in the New York City subways. If I had $212 million to defend New York City against terrorism, I would not spend it on cameras in the subways. This is nothing more than security theater against a movie plot threat.
On the plus side, the money will also go for a new radio communications system for subway police, and will enable cell phone service in underground stations, but not tunnels.
"Unlike X-ray machines or radar instruments, the sensor doesn't have to generate a signal to detect objects; it spots them based on how brightly they reflect the natural radiation that is all around us every day."
First millimeter-wave detection systems, and now this. There's some interesting research in remote sensing going on, and there are sure to be some cool security applications.
Advertisers are beaming unwanted content to Bluetooth phones at a distance of 100 meters.
Sure, it's annoying, but worse, there are serious security risks. Don't believe this:
Furthermore, there is no risk of downloading viruses or other malware to the phone, says O'Regan: "We don't send applications or executable code." The system uses the phone's native download interface so they should be able to see the kind of file they are downloading before accepting it, he adds.
This company might not send executable code, but someone else certainly could. And what percentage of people who use Bluetooth phones can recognize "the kind of file they are downloading"?
We've already seen two ways to steal data from Bluetooth devices. And we know that more and more sensitive data is being stored on these small devices, increasing the risk. This is almost certainly another avenue for attack.
The British government is testing a scheme to put active RFID chips -- the kind that are independently powered -- in automobile license plates. They can be read at least 300 feet away, and probably much, much further.
Thieves are using Bluetooth phones to find Bluetooth-enabled laptops in parked cars, which they then steal.
Nice example of unintended security consequences of a new technology. And more evidence that new features need to be turned off by default.
They're being called the Kutztown 13 -- a group of high schoolers charged with felonies for bypassing security with school-issued laptops, downloading forbidden internet goodies and using monitoring software to spy on district administrators.
There's more to the story, though. Here's some good commentary on the issue:
What the parents don't mention — but the school did in a press release — is that it wasn't as if the school came down with the Hammer of God out of nowhere.
Yes, the kids should be punished. No, a felony conviction is not the way to punish them.
The problem is that the punishment doesn't fit the crime. Breaking the rules is what kids do. Society needs to deal with that, yes, but it needs to deal with that in a way that doesn't ruin lives. Deterrence is critical if we are to ever have a lawful society on the internet, but deterrence has to come from rational prosecution. This simply isn't rational.
EDITED TO ADD (2 Sep): It seems that charges have been dropped.
All security decisions are trade-offs, and smart security trade-offs are ones where the security you get is worth what you have to give up. This sounds simple, but it isn't. There are differences between perceived risk and actual risk, differences between perceived security and actual security, and differences between perceived cost and actual cost. And beyond that, there are legitimate differences in trade-off analysis. Any complicated security decision affects multiple players, and each player evaluates the trade-off from his or her own perspective.
I call this "agenda," and it is one of the central themes of Beyond Fear. It is clearly illustrated in the current debate about rescinding the prohibition against small pointy things on airplanes. The flight attendants are against the change. Reading their comments, you can clearly see their subjective agenda:
"As the front-line personnel with little or no effective security training or means of self defense, such weapons could prove fatal to our members," Patricia A. Friend, international president of the Association of Flight Attendants, said in a letter to Edmund S. "Kip" Hawley, the new leader of the Transportation Security Administration. "They may not assist in breaking through a flightdeck door, but they could definitely lead to the deaths of flight attendants and passengers"....
The flight attendants are not evaluating the security countermeasure from a global perspective. They're not trying to figure out what the optimal level of risk is, what sort of trade-offs are acceptable, and what security countermeasures most efficiently achieve that trade-off. They're looking at the trade-off from their perspective: they get more benefit from the countermeasure than the average flier because it's their workplace, and the cost of the countermeasure is borne largely by the passengers.
There is nothing wrong with flight attendants evaluating airline security from their own agenda. I'd be surprised if they didn't. But understanding agenda is essential to understanding how security decisions are made.
Imagine you're in charge of airport security. You have a watch list of terrorist names, and you're supposed to give anyone on that list extra scrutiny. One day someone shows up for a flight whose name is on that list. They're an infant.
What do you do?
If you have even the slightest bit of sense, you realize that an infant can't be a terrorist. So you let the infant through, knowing that it's a false alarm. But if you have no flexibility in your job, if you have to follow the rules regardless of how stupid they are, if you have no authority to make your own decisions, then you detain the baby.
EDITED TO ADD: I know what the article says about the TSA rules:
The Transportation Security Administration, which administers the lists, instructs airlines not to deny boarding to children under 12 -- or select them for extra security checks -- even if their names match those on a list.
Whether the rules are being followed or ignored is beside my point. The screener is detaining babies because he thinks that's what the rules require. He's not permitted to exercise his own common sense.
Security works best when well-trained people have the authority to make decisions, not when poorly-trained people are slaves to the rules (whether real or imaginary). Rules provide CYA security, but not security against terrorism.
I've been reading the massive press coverage about Zotob (technical details are here, here, and here), and can't figure out what the big deal is about. Yes, it propagates in Windows 2000 without user intervention, which is always nastier. It uses a Microsoft plug-and-play vulnerability, which is somewhat interesting. But the only reason I can think of that CNN did rolling coverage on it is that CNN was hit by it.
Xiaoyun Wang, one of the team of Chinese cryptographers that successfully broke SHA-0 and SHA-1, along with Andrew Yao and Frances Yao, announced new results against SHA-1 yesterday at Crypto's rump session. (Actually, Adi Shamir announced the results in their name, since she and her student did not receive U.S. visas in time to attend the conference.)
Shamir presented few details -- and there's no paper -- but the time complexity of the new attack is 2^63. (Their previous result was 2^69; brute force is 2^80.) He did say that he expected Wang and her students to improve this result over the next few months. The modifications to their published attack are still new, and more improvements are likely over the next several months. There is no reason to believe that 2^63 is anything like a lower limit.
But an attack that's faster than 2^64 is a significant milestone. We've already done massive computations with complexity 2^64. Now that the SHA-1 collision search is squarely in the realm of feasibility, some research group will try to implement it. Writing working software will both uncover hidden problems with the attack, and illuminate hidden improvements. And while a paper describing an attack against SHA-1 is damaging, software that produces actual collisions is even more so.
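Some back-of-envelope arithmetic shows why the improvement matters. Assuming, purely hypothetically, a large distributed effort sustaining 2^40 (about a trillion) SHA-1 computations per second:

```python
# Back-of-envelope feasibility arithmetic for the attack complexities above.
rate = 2 ** 40  # assumed: ~10^12 SHA-1 computations/second, distributed

for name, work in [("new attack", 2 ** 63),
                   ("previous attack", 2 ** 69),
                   ("brute force", 2 ** 80)]:
    years = work / rate / (3600 * 24 * 365)
    print(f"{name}: ~{years:,.2f} years at the assumed rate")
# Qualitatively: months, versus decades, versus tens of millennia.
```

The drop from 2^69 to 2^63 is a 64x speedup: at the same assumed rate, that's the difference between an attack spanning decades and one finishing within a year.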
The story of SHA-1 is not over. Again, I repeat the saying I've heard comes from inside the NSA: "Attacks always get better; they never get worse."
Meanwhile, NIST is holding a workshop in late October to discuss what the security community should do now. The NIST Hash Function Workshop should be interesting, indeed. (Here is one paper that examines the effect of these attacks on S/MIME, TLS, and IPsec.)
EDITED TO ADD: Here are Xiaoyun Wang's two papers from Crypto this week: "Efficient Collision Search Attacks on SHA-0" and "Finding Collisions in the Full SHA-1." And here are the rest of her papers.
On Monday, she was scheduled to explain her discovery in a keynote address to an international group of researchers meeting in California.
Sadly, this is now common:
Although none of the scientists were officially denied visas by the United States Consulate, officials at the State Department and National Academy of Sciences said this week that the situation was not uncommon.
These delays can make it impossible for some foreign researchers to attend U.S. conferences. There are researchers who need to have their paper accepted before they can apply for a visa. But the paper review and selection process, done by the program committee in the months before the conference, doesn't finish early enough. Conferences can move the submission and selection deadlines earlier, but that just makes the conference less current.
In Wang's case, she applied for her visa in early July. So did her student. Dingyi Pei, another Chinese researcher who is organizing Asiacrypt this year, applied for his in early June. (I don't know about the others.) Wang has not received her visa, and Pei got his just yesterday.
This kind of thing hurts cryptography, and hurts national security. The visa restrictions were designed to protect American advanced technologies from foreigners, but in this case they're having the opposite effect. We are all more secure because there is a vibrant cryptography research community in the U.S. and the world. By prohibiting Chinese cryptographers from attending U.S. conferences, we're only hurting ourselves.
NIST is sponsoring a workshop on hash functions (sadly, it's being referred to as a "hash bash") in October. I hope Wang gets a visa for that.
Looks like the DHS and TSA are finally beginning to realize that small pointy things are not a terrorist threat to aviation.
They never were.
From the Associated Press:
Joseph Duncan III is a computer expert who bragged online, days before authorities believe he killed three people in Idaho, about a tell-all journal that would not be accessed for decades, authorities say.
This is the kind of story that the government likes to use to illustrate the dangers of encryption. How can we allow people to use strong encryption, they ask, if it means not being able to convict monsters like Duncan?
But how is this different than Duncan speaking the confession when no one was able to hear? Or writing it down and hiding it where no one could ever find it? Or not saying anything at all? If the police can't convict him without this confession -- whose existence we have only his word for -- then maybe he's innocent?
Technologies have good and bad uses. Encryption, telephones, cars: they're all used by both honest citizens and by criminals. For almost all technologies, the good far outweighs the bad. Banning a technology because the bad guys use it, denying everyone else the beneficial uses of that technology, is almost always a bad security trade-off.
EDITED TO ADD: Looking at the details of the encryption, it's certainly possible that the authorities will break the diary. It probably depends on how random a key Duncan chose, although possibly on whether or not there's an implementation error in the cryptographic software. If I had more details, I could speculate further.
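The "how random a key" question is just arithmetic. Assuming the diary is encrypted under a key derived from a passphrase, and an attacker who can test a billion guesses per second -- both pure assumptions, since I don't know the details -- the feasibility of breaking it falls out of the passphrase's entropy:

```python
import math

def passphrase_entropy_bits(length: int, charset_size: int) -> float:
    """Entropy of a uniformly random passphrase: length * log2(charset)."""
    return length * math.log2(charset_size)

rate = 1e9  # assumed attacker speed: a billion key guesses per second

for desc, length, charset in [("8 lowercase letters", 8, 26),
                              ("8 random printable chars", 8, 95),
                              ("20 random printable chars", 20, 95)]:
    bits = passphrase_entropy_bits(length, charset)
    years = 2 ** bits / rate / (3600 * 24 * 365)
    print(f"{desc}: {bits:.0f} bits, ~{years:.2g} years to exhaust")
```

Eight lowercase letters fall in minutes; twenty random printable characters are out of reach no matter how strong the cipher is. The cipher is rarely the weak point; the key-generation habit is.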
Remember all those stories about the terrorists hiding messages in television broadcasts? They were all false alarms:
The first sign that something was amiss came a few days before Christmas Eve 2003. The US department of homeland security raised the national terror alert level to "high risk". The move triggered a ripple of concern throughout the airline industry and nearly 30 flights were grounded, including long hauls between Paris and Los Angeles and subsequently London and Washington.
It's a signal-to-noise issue. If you look at enough noise, you're going to find signal just by random chance. It's only signal that rises above random chance that's valuable.
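The arithmetic behind this is worth spelling out. Even a stego detector with a very good false-positive rate, run over millions of innocent images, drowns in false alarms; the numbers below are illustrative, not from any real study:

```python
# Base-rate arithmetic for steganography detection (illustrative numbers).
images = 2_000_000            # innocent images scanned, none carrying messages
false_positive_rate = 0.01    # an optimistically good detector

false_alarms = images * false_positive_rate
print(f"{false_alarms:,.0f} false alarms from zero real messages")
```

Even if a handful of real messages existed in that pile, they would be buried under tens of thousands of spurious hits. Any "signal" pulled out of noise at that rate is indistinguishable from random chance, which is exactly the point.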
And the whole notion of terrorists using steganography to embed secret messages was ludicrous from the beginning. It makes no sense to communicate with terrorist cells this way, given the wide variety of more efficient anonymous communications channels.
I first wrote about this in September of 2001.
According to Wired News, the DHS is looking for someone in Congress to sponsor a bill that eliminates congressional oversight over the Secure Flight program.
The bill would allow them to go ahead with the program regardless of GAO's assessment. (Current law requires them to meet ten criteria set by Congress; the most recent GAO report said that they did not meet nine of them.) The bill would allow them to use commercial data even though they have not demonstrated its effectiveness. (The DHS funding bill passed by both the House and the Senate prohibits them from using commercial data during passenger screening, because there have been absolutely no test results showing that it is effective.)
In this new bill, all that would be required to go ahead with Secure Flight would be for Secretary Chertoff to say so:
Additionally, the proposed changes would permit Secure Flight to be rolled out to the nation's airports after Homeland Security chief Michael Chertoff certifies the program will be effective and not overly invasive. The current bill requires independent congressional investigators to make that determination.
Looks like the DHS, being unable to comply with the law, is trying to change it. This is a rogue program that needs to be stopped.
In other news, the TSA has deleted about three million personal records it used for Secure Flight testing. This seems like a good idea, but it prevents people from knowing what data the government had on them -- in violation of the Privacy Act.
Civil liberties activist Bill Scannell says it's difficult to know whether TSA's decision to destroy records so swiftly is a housecleaning effort or something else.
My previous essay on Secure Flight is here.
Is e-mail in transit communications or data in storage? Seems like a basic question, but the answer matters a lot to the police. A U.S. federal Appeals Court has ruled that the interception of e-mail in temporary storage violates the federal wiretap act, reversing an earlier court opinion.
The case and associated privacy issues are summarized here. Basically, different privacy laws protect electronic communications in transit and data in storage; the former is protected much more than the latter. E-mail stored by the sender or the recipient is obviously data in storage. But what about e-mail on its way from the sender to the receiver? On the one hand, it's obviously communications in transit. But the other side argued that it's actually stored on various computers as it wends its way through the Internet; hence it's data in storage.
The initial court decision in this case held that e-mail in transit is just data in storage. Judge Lipez wrote an inspired dissent in the original opinion. In the rehearing en banc (more judges), he wrote the opinion for the majority which overturned the earlier opinion.
The opinion itself is long, but well worth reading. It's well reasoned, and reflects extraordinary understanding and attention to detail. And a great last line:
If the issue presented be "garden-variety"... this is a garden in need of a weed killer.
There's a larger issue here, and it's the same one that the entertainment industry used to greatly expand copyright law in cyberspace. They argued that every time a copyrighted work is moved from computer to computer, or CD-ROM to RAM, or server to client, or disk drive to video card, a "copy" is being made. This ridiculous definition of "copy" has allowed them to exert far greater legal control over how people use copyrighted works.
Photograph from What-the-Hack.
I want "The Devil's Infosec Dictionary" to be funnier. And I wish the entry that mentions me -- "Cryptography: The science of applying a complex set of mathematical algorithms to sensitive data with the aim of making Bruce Schneier exceedingly rich" -- were more true.
In any case, I'll bet the assembled here can come up with funnier infosec dictionary definitions. Post them as comments here, and -- if there are enough good ones -- I'll collect them up on a single page.
This could make an enormous difference in security against forgeries:
The scientists built a laser scanner that sweeps across the surface of paper, cardboard, or plastic, recording all of the unique microscopic imperfections that are a natural part of manufacturing such materials.
Scientific American has more details:
All nonreflective surfaces are rough on a microscopic level. James D. R. Buchanan and his colleagues at Imperial College London report today in the journal Nature on the potential for this characteristic to "provide strong, in-built, hidden security for a wide range of paper, plastic or cardboard objects." Using a focused laser to scan a variety of objects, the team measured how the light scattered at four different angles. By calculating how far the light moved from a mean value, and transforming the fluctuations into ones and zeros, the researchers developed a unique fingerprint code for each object. The scanning of two pieces of paper from the same pack yielded two different identifiers, whereas the fingerprint for one sheet stayed the same even after three days of regular use. Furthermore, when the team put the paper through its paces--screwing it into a tight ball, submerging it in cold water, baking it at 180 degrees Celsius, among other abuses--its fingerprint remained easily recognizable.
To ensure the security of currency, you could fingerprint every bill and store the fingerprints in a large database. Or you can digitally sign the fingerprint and print it on the bill itself. The fingerprint is large enough to use as an encryption key, which opens up a bunch of other security possibilities.
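The thresholding step is easy to sketch. Here's a minimal, hypothetical simulation -- not the actual algorithm from the Nature paper -- in which scatter readings are turned into bits by comparing each reading to the mean, and fingerprints are compared by Hamming distance:

```python
import random

def fingerprint(measurements):
    # Threshold each reading against the mean: 1 if above, 0 if below.
    # (A toy version of the "fluctuations into ones and zeros" step.)
    mean = sum(measurements) / len(measurements)
    return [1 if m > mean else 0 for m in measurements]

def hamming(a, b):
    # Number of bit positions where two fingerprints disagree.
    return sum(x != y for x, y in zip(a, b))

random.seed(1)
# Simulate scanning the same sheet twice (same surface, small sensor noise)...
sheet = [random.gauss(0, 1) for _ in range(256)]
scan1 = [m + random.gauss(0, 0.05) for m in sheet]
scan2 = [m + random.gauss(0, 0.05) for m in sheet]
# ...and scanning a different sheet from the same pack.
other = [random.gauss(0, 1) for _ in range(256)]

fp1, fp2, fp3 = fingerprint(scan1), fingerprint(scan2), fingerprint(other)
# Re-scans of one sheet agree on almost every bit; different sheets
# disagree on roughly half of them.
print(hamming(fp1, fp2), hamming(fp1, fp3))
```

The same property the researchers report shows up even in this toy: a few noisy bits between re-scans, versus about 128 differing bits between distinct sheets, which is why the fingerprint is robust enough to use as a key.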
This idea isn't new. I remember currency anti-counterfeiting research in which fiber-optic bits were added to the paper pulp, and a "fingerprint" was taken using a laser. It didn't work then, but it was clever.
A reader sent this to me. He's corresponding with the TSA about getting his name off the watch list, and was told that he should turn off his e-mail spam filter.
The Register comments on the government using a border-security failure to push for national ID cards:
The Government spokesman the media could get hold of last weekend, leader of the House of Commons Geoff Hoon, said that the Government was looking into whether there should be "additional" passport checks on Eurostar, and added that the matter showed the need for identity cards because "it's vitally important that we know who is coming in as well as going out." Meanwhile the Observer reported plans by ministers to accelerate the introduction of the e-borders system in order to increase border security.
A team of Chinese maths enthusiasts have thrown NSW's speed cameras system into disarray by cracking the technology used to store data about errant motorists.
It's true that MD5 is broken. On the other hand, it's almost certainly true that the speed cameras were correct. If there's any lesson here, it's that theoretical security is important in legal proceedings.
I think that's a good thing.
Interesting article: "The Hidden Boot Code of the Xbox, or How to fit three bugs in 512 bytes of security code."
Microsoft wanted to lock out both pirated games and unofficial games, so they built a chain of trust on the Xbox from the hardware to the execution of the game code. Only code authorized by Microsoft could run on the Xbox. The link between hardware and software in this chain of trust is the hidden "MCPX" boot ROM. The article discusses that ROM.
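The chain-of-trust idea is simple to sketch. The following toy Python example -- a simplification, not the Xbox's actual scheme, which used its own (flawed) constructions -- shows each stage refusing to hand off control unless the next stage matches a digest burned in at manufacturing time:

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Hypothetical stage images.
bootloader = b"load kernel"
kernel = b"start game"

# At manufacturing time, the ROM is burned with the bootloader's digest,
# and the bootloader embeds the kernel's digest.
rom_trusted_digest = digest(bootloader)
bootloader_trusted_digest = digest(kernel)

def boot(bootloader_image: bytes, kernel_image: bytes) -> str:
    # Each stage verifies the next before handing off control.
    if digest(bootloader_image) != rom_trusted_digest:
        return "halt: bootloader not authorized"
    if digest(kernel_image) != bootloader_trusted_digest:
        return "halt: kernel not authorized"
    return "running authorized code"

print(boot(bootloader, kernel))                 # legitimate chain
print(boot(bootloader, b"start pirated game"))  # tampered game code
```

The security of the whole chain rests on the first link -- which is why bugs in 512 bytes of hidden boot ROM were enough to break everything above them.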
Lots of kindergarten security mistakes.
There's a new Trojan that tries to steal World of Warcraft passwords.
That reminded me about this article, about people paying programmers to find exploits to make virtual money in multiplayer online games, and then selling the proceeds for real money.
And here's a page about ways people steal fake money in the online game Neopets, including cookie grabbers, fake login pages, fake contests, social engineering, and pyramid schemes.
I regularly say that every form of theft and fraud in the real world will eventually be duplicated in cyberspace. Perhaps every method of stealing real money will eventually be used to steal imaginary money, too.
I've written previously (including this op-ed in the International Herald Tribune) about RFID chips in passports. An article in today's USA Today (the paper version has a really good graphic) summarizes the latest State Department proposal, and it looks pretty good. They're addressing privacy concerns, and they're doing it right.
The most important feature they've included is an access-control system for the RFID chip. The data on the chip is encrypted, and the key is printed on the passport. The officer swipes the passport through an optical reader to get the key, and then the RFID reader uses the key to communicate with the RFID chip. This means that the passport-holder can control who has access to the information on the chip; someone cannot skim information from the passport without first opening it up and reading the information inside. Good security.
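As a rough sketch of why this design helps -- a toy illustration, not the actual ICAO protocol; the MRZ string and the key-derivation and challenge-response scheme here are invented for the example -- the chip's key is derived from data printed inside the passport, so a reader must optically scan the open document before it can talk to the chip:

```python
import hashlib
import hmac
import os

# Hypothetical machine-readable zone, printed inside the passport.
printed_mrz = "P<USADOE<<JOHN<<<<<1234567897USA7001017M1501012"

def derive_key(mrz: str) -> bytes:
    # Stand-in key derivation: hash the printed data.
    return hashlib.sha256(mrz.encode()).digest()

class Chip:
    def __init__(self, mrz: str):
        self.key = derive_key(mrz)
    def respond(self, challenge: bytes) -> bytes:
        # The chip only answers usefully to someone holding the key.
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

chip = Chip(printed_mrz)
challenge = os.urandom(16)

# A reader that has optically scanned the open passport can authenticate...
reader_key = derive_key(printed_mrz)
expected = hmac.new(reader_key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(chip.respond(challenge), expected)

# ...but a skimmer that never saw the printed page cannot.
skimmer_key = derive_key("some guessed MRZ")
wrong = hmac.new(skimmer_key, challenge, hashlib.sha256).digest()
assert not hmac.compare_digest(chip.respond(challenge), wrong)
print("only a reader that saw the printed page can talk to the chip")
```

The design choice worth noticing is that physical access (opening the passport) gates electronic access, which is exactly the property that defeats remote skimming.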
The new design also includes a thin radio shield in the cover, protecting the chip when the passport is closed. More good security.
Assuming that the RFID passport works as advertised (a big "if," I grant you), then I am no longer opposed to the idea. And, more importantly, we have an example of an RFID identification system with good privacy safeguards. We should demand that any other RFID identification cards have similar privacy safeguards.
EDITED TO ADD: There's more information in a Wired story:
The 64-KB chips store a copy of the information from a passport's data page, including name, date of birth and a digitized version of the passport photo. To prevent counterfeiting or alterations, the chips are digitally signed....
So it sounds like this access-control mechanism is not definite. In any case, I believe the system described in the USA Today article is a good one.
This New York Times op-ed argues that panic is largely a myth. People feel stressed but they behave rationally, and it only gets called "panic" because of the stress.
If our leaders are really planning for panic, in the technical sense, then they are at best wasting resources on a future that is unlikely to happen. At worst, they may be doing our enemies' work for them - while people are amazing under pressure, it cannot help to have predictions of panic drummed into them by supposed experts.
Don't believe wireless distance limitations. Again and again they're proven wrong.
The record holders relied on more than just a pair of wireless laptops. The equipment required for the feat, according to the event website, included a "collection of homemade antennas, surplus 12 foot satellite dishes, home-welded support structures, scaffolds, ropes and computers".
Bad news for those of us who rely on physical distance to secure our wireless networks.
Even more important, the world record for communicating with a passive RFID device was set at 69 feet. (Pictures here.) Remember that the next time someone tells you that it's impossible to read RFID identity cards at a distance.
Whenever you hear a manufacturer talk about a distance limitation for any wireless technology -- wireless LANs, RFID, Bluetooth, anything -- assume he's wrong. If he's not wrong today, he will be in a couple of years. Assume that someone who spends some money and effort building more sensitive technology can do much better, and that it will take less money and effort over the years. Technology always gets better; it never gets worse. If something is difficult and expensive now, it will get easier and cheaper in the future.
Orlando Airport is piloting a new pre-screening program called CLEAR. The idea is that you pay $80 a year and subject yourself to a background check, and then you can use a faster security line at airports.
I've already written about this idea, back when Steven Brill first started talking about it:
My primary security concerns surrounding this system stem from what it's trying to do. In his writings and speaking, Brill is very careful to explain that these are not "trusted traveler cards." He calls them "verified identity cards." But the only purpose of his card is to divide people into two lines -- a fast line and a slow line, a "search less" line and a "search more" line, or whatever....
Nothing in this program is different from what I wrote about last year. According to their website:
Your Membership will be continuously reviewed by TSA's ongoing Security Threat Assessment Process. If your security status changes, your Membership will be immediately deactivated and you will receive a notification email of your status change as well as a refund of the unused portion of your annual enrollment fee.
Think about it. For $80 a year, any potential terrorist can be automatically notified if the Department of Homeland Security is on to him. Such a deal.
Amazingly, this works:
To clear out undesirables, opera and classical music have been piped into Canadian parks, Australian railway stations, 7-Eleven parking lots and, most recently, London Underground stops.
It's not new:
But as Kahle points out, "It's well known within the industry that classical music discourages teen loitering. It was first used by 7-11 stores across the country over a decade ago."
Note that this does not reduce loitering, but moves it around. But if you're the owner of a 7-Eleven, you don't care if kids are loitering at the store down the block. You just don't want them loitering at your store.
Interesting details about the bombs used in the 7/7 London bombings:
The NYPD officials said investigators believe the bombers used a peroxide-based explosive called HMDT, or hexamethylene triperoxide diamine. HMDT can be made using ordinary ingredients like hydrogen peroxide (hair bleach), citric acid (a common food preservative) and heat tablets (sometimes used by the military for cooking).
For those of you upset that the police divulged the recipe -- citric acid, hair bleach, and food heater tablets -- the details are already out there.
And here are some images of home-made explosives seized in the various raids after the bombings.
Normally this kind of information would be classified, but presumably the London (and U.S.) governments feel that the more people that know about this, the better. Anyone owning a commercial-grade refrigerator without a good reason should expect a knock on his door.
There's a new Windows 2000 vulnerability:
A serious flaw has been discovered in a core component of Windows 2000, with no possible work-around until it gets fixed, a security company said.
Don't fail to notice the sensationalist explanation from eEye. This is what I call a "publicity attack" (note that the particular example in that essay is wrong): it's an attempt by eEye Digital Security to get publicity for their company. Yes, I'm sure it's a bad vulnerability. Yes, I'm sure Microsoft should have done more to secure their systems. But eEye isn't blameless in this; they're searching for vulnerabilities that make good press releases.
Rules on exporting cryptography outside the United States have been renewed:
President Bush this week declared a national emergency based on an "extraordinary threat to the national security."
To be honest, I don't know what the rules are these days. I think there is a blanket exemption for mass-market software products, but I'm not sure. I haven't a clue what the hardware requirements are. But certainly something is working right; we're seeing more strong encryption in more software -- and not just encryption software.
I've already written about the police "shoot-to-kill" policy in the UK in response to the terrorist bombings last month, explaining why it's a bad security trade-off. Now the International Association of Chiefs of Police has issued new guidelines that also recommend a shoot-to-kill policy.
What might cause a police officer to think you're a suicide bomber, and then shoot you in the head?
The police organization's behavioral profile says such a person might exhibit "multiple anomalies," including wearing a heavy coat or jacket in warm weather or carrying a briefcase, duffel bag or backpack with protrusions or visible wires. The person might display nervousness, an unwillingness to make eye contact or excessive sweating. There might be chemical burns on the clothing or stains on the hands. The person might mumble prayers or be "pacing back and forth in front of a venue."
Is that all that's required?
The police group's guidelines also say the threat to officers does not have to be "imminent," as police training traditionally teaches. Officers do not have to wait until a suspected bomber makes a move, another traditional requirement for police to use deadly force. An officer just needs to have a "reasonable basis" to believe that the suspect can detonate a bomb, the guidelines say.
Does anyone actually think they're safer if a policy like this is put into effect?
EDITED TO ADD: For reference:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
But what does a 215-year-old document know?
Here's a post-Cold War risk that I hadn't considered before:
Construction workers involved in building a new hotel just across from the Kremlin were surprised to find 250 kg of TNT buried deep beneath the old Moskva Hotel that had just been demolished to make way for a new one. Police astonished Muscovites further when they said that the 12 boxes of explosives lodged in the basement could have been there for half a century.
There's some new information on last week's Lynn/Cisco/ISS story: Mike Lynn gave an interesting interview to Wired. Here's some news about the FBI's investigation. And here's a video of Cisco/ISS ripping pages out of the BlackHat conference proceedings.
Someone is setting up a legal defense fund for Lynn. Send donations via PayPal to Abaddon@IO.com. (Does anyone know the URL?) According to BoingBoing, donations not used to defend Lynn will be donated to the EFF.
Copies of Lynn's talk have popped up on the Internet, but some have been removed due to legal cease-and-desist letters from ISS attorneys, like this one. Currently, Lynn's slides are here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here. (The list is from BoingBoing.) Note that the presentation above is not the same as the one Lynn gave at BlackHat. The presentation at BlackHat didn't have the ISS logo at the bottom, as the one on the Internet does. Also, the critical code components were blacked out. (Photographs of Lynn's actual presentation slides were available here, but have been removed due to legal threats from ISS.)
Hackers are working overtime to reconstruct Lynn's attack and write an exploit. This, of course, means that we're in much more danger of there being a worm that makes use of this vulnerability.
The sad thing is that we could have avoided this. If Cisco and ISS had simply let Lynn present his work, it would have been just another obscure presentation amongst the sea of obscure presentations that is BlackHat. By attempting to muzzle Lynn, the two companies ensured that 1) the vulnerability was the biggest story of the conference, and 2) some group of hackers would turn the vulnerability into exploit code just to get back at them.
EDITED TO ADD: Jennifer Granick is Lynn's attorney, and she has blogged about what happened at BlackHat and DefCon. And photographs of the slides Lynn actually used for his talk are here (for now, at least). Is it just me, or does it seem like ISS is pursuing this out of malice? With Cisco I think it was simple stupidity, but I think it's malice with ISS.
EDITED TO ADD: I don't agree with Ira Winkler's comments, either.
EDITED TO ADD: ISS defends itself.
EDITED TO ADD: More commentary.
EDITED TO ADD: Nice rebuttal to Winkler's essay.
Salon has an interesting article about parents turning to technology to monitor their children, instead of to other people in their community.
"What is happening is that parents now assume the worst possible outcome, rather than seeing other adults as their allies," says Frank Furedi, a professor of sociology at England's University of Kent and the author of "Paranoid Parenting." "You never hear stories about asking neighbors to care for kids or coming together as community. Instead we become insular, privatized communities, and look for technological solutions to what are really social problems." Indeed, while our parents' generation was taught to "honor thy neighbor," the mantra for today's kids is "stranger danger," and the message is clear -- expect the worst of anyone unfamiliar -- anywhere, and at any time.
This is security based on fear, not reason. And I think people who act this way make their families less safe.
EDITED TO ADD: Here's a link to the book Paranoid Parenting.
This is impressive:
This new tool is called The Car Whisperer and allows people equipped with a Linux laptop and a directional antenna to inject audio to, and record audio from, passing cars that have an unconnected Bluetooth handsfree unit running, since many manufacturers use a standard passkey, which often is the only authentication needed to connect.
EDITED TO ADD: Another article.
The Department of Homeland Security is testing a program to issue RFID identity cards to visitors entering the U.S.
They'll have to carry the wireless devices as a way for border guards to access the electronic information stored inside a document about the size of a large index card.
According to the DHS:
The technology will be tested at a simulated port this spring. By July 31, 2005, the testing will begin at the ports of Nogales East and Nogales West in Arizona; Alexandria Bay in New York; and, Pacific Highway and Peace Arch in Washington. The testing or "proof of concept" phase is expected to continue through the spring of 2006.
I know nothing about the details of this program or about the security of the cards. Even so, the long-term implications of this kind of thing are very chilling.
A vulnerability in many hotel television infrared systems can allow a hacker to obtain guests' names and their room numbers from the billing system.
A paper published in the December 2004 issue of the SIGCSE Bulletin, "Cryptanalysis of some encryption/cipher schemes using related key attack," by Khawaja Amer Hayat, Umar Waqar Anis, and S. Tauseef-ur-Rehman, is the same as a paper that John Kelsey, David Wagner, and I published in 1997.
It's clearly plagiarism. Sentences have been reworded or summarized a bit and many typos have been introduced, but otherwise it's the same paper. It's copied, with the same section, paragraph, and sentence structure -- right down to the same mathematical variable names. It has the same quirks in the way references are cited. And so on.
We wrote two papers on the topic; this is the second. They don't list either of our papers in their bibliography. They do have a lurking reference to "[KSW96]" (the first of our two papers) in the body of their introduction and design principles, presumably copied from our text; but a full citation for "[KSW96]" isn't in their bibliography. Perhaps they were worried that one of the referees would read the papers listed in their bibliography, and notice the plagiarism.
The three authors are from the International Islamic University in Islamabad, Pakistan. The third author, S. Tauseef-Ur-Rehman, is a department head (and faculty member) in the Telecommunications Engineering Department at this Pakistani institution. If you believe his story -- which is probably correct -- he had nothing to do with the research, but just appended his name to a paper by two of his students. (This is not unusual; it happens all the time in universities all over the world.) But that doesn't get him off the hook. He's still responsible for anything he puts his name on.
I wrote to the editor of the SIGCSE Bulletin, who removed the paper from their website and demanded official letters of admission and apology. (The apologies are at the bottom of this page.) They said that they would ban them from submitting again, but have since backpedaled. Mark Mandelbaum, Director of the Office of Publications at ACM, now says that ACM has no policy on plagiarism and that nothing additional will be done. I've also written to Springer-Verlag, the publisher of my original paper.
I don't blame the journals for letting these papers through. I've refereed papers, and it's pretty much impossible to verify that a piece of research is original. We're largely self-policing.
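That said, lightly reworded copies are exactly what simple automated checks are good at flagging. Here's a minimal sketch -- the text snippets are invented for illustration -- using word n-gram "shingles" and Jaccard similarity:

```python
import re

def shingles(text: str, n: int = 3) -> set:
    # Word n-grams ("shingles") of a document, lowercased.
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    # Overlap of two shingle sets: 0.0 = disjoint, 1.0 = identical.
    return len(a & b) / len(a | b) if a | b else 0.0

original = ("Related-key cryptanalysis assumes the attacker can observe "
            "the encryption of plaintexts under several related keys.")
reworded = ("Related-key cryptanalysis assumes that the attacker is able "
            "to observe the encryption of plaintexts under several "
            "related keys.")
unrelated = ("The quick brown fox jumps over the lazy dog while the "
             "cat watches from the fence in the afternoon sun.")

s0, s1, s2 = shingles(original), shingles(reworded), shingles(unrelated)
# A reworded copy still shares many shingles with the original;
# genuinely independent text shares essentially none.
print(round(jaccard(s0, s1), 2), round(jaccard(s0, s2), 2))
```

A referee can't run this against every paper ever published, of course, which is why the system is largely self-policing: detection usually waits for one of the original authors to stumble across the copy.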
Mostly, the system works. These three have been found out, and should be fired and/or expelled. Certainly ACM should ban them from submitting anything, and I am very surprised at their claim that they have no policy with regards to plagiarism. Academic plagiarism is serious enough to warrant that level of response. I don't know if the system works in Pakistan, though. I hope it does. These people knew the risks when they did it. And then they did it again.
If I sound angry, I'm not. I'm more amused. I've heard of researchers from developing countries resorting to plagiarism to pad their CVs, but I'm surprised to see it happen to me. I mean, really; if they were going to do this, wouldn't it have been smarter to pick a more obscure author?
And it's nice to know that our work is still considered relevant eight years later.
EDITED TO ADD: Another paper, "Analysis of Real-time Transport Protocol Security," by Junaid Aslam, Saad Rafique, and S. Tauseef-ur-Rehman, has been plagiarized from this original: "Real-time Transport Protocol (RTP) Security," by Ville Hallivuori.
EDITED TO ADD: Ron Boisvert, the Co-Chair of the ACM Publications Board, has said this:
1. ACM has always been a champion for high ethical standards among computing professionals. Respecting intellectual property rights is certainly a part of this, as is clearly reflected in the ACM Code of Ethics.
EDITED TO ADD: There's a news story with some new developments.
EDITED TO ADD: Over the past couple of weeks, I have been getting repeated e-mails from people, presumably faculty and administrators of the International Islamic University, to close comments in this blog entry. The justification usually given is that there is an official investigation underway so there's no longer any reason for comments, or that Tauseef has been fired so there's no longer any reason for comments, or that the comments are harmful to the reputation of the university or the country.
I have responded that I will not close comments on this blog entry. I have, and will continue to, delete posts that are incoherent or hostile (there have been examples of both).
Blog comments are anonymous. There is no way for me to verify the identity of posters, and I don't try. I have, and will continue to, remove any post purporting to come from a person it does not actually come from, but generally the only way I can figure that out is if the real person e-mails me and asks.
Otherwise, consider this a forum for anonymous free speech. The comments here are unvetted and unverified. They might be true, and they might be false. Readers are expected to understand that, and I believe for the most part they do.
In the United States, we have a saying that the antidote for bad speech is more speech. I invite anyone who disagrees with the comments on the page to post their own opinions.