September 15, 2004
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org.
Crypto-Gram also has an RSS feed at <http://www.schneier.com/crypto-gram-rss.xml>.
Or you can read it on the web at <http://www.schneier.com/crypto-gram-0409.html>.
In this issue:
- Beyond Fear: First Anniversary
- Security at the Olympics
- Cryptanalysis of MD5 and SHA
- Crypto-Gram Reprints
- Trusted Traveler Program
- Security Notes from All Over: Museum Security
- Counterpane News
- Finnish Mobile Phone Spoofing
- No-Fly List
- Comments from Readers
My latest book, Beyond Fear, is one year old this month.
I wrote it to explain how security works. The idea was to take the methodical approach to security that was developed in the computer domain and apply it to the rest of security: personal, corporate, national. In the book I talk about viewing security as a system, and all that implies. I talk about the notion of a weakest link, and what that means for security effectiveness. I talk about different security strategies—barriers, authentication, compartmentalization, trusted people, counterattack—that have been developed over the millennia, and how they can be applied to today’s security problems.
I have a lot to say about national security and terrorism—concepts and analyses that attempt to inject some sense into what has become an overly political debate.
The book was successful, but it never broke into the mainstream. Reviews, even reviews in non-computer publications, were uniformly positive, but I never got reviewed by the major newspapers and magazines. And without reviews like that, people who didn’t already know my work never found out about the book.
Even today, I’m still seeing reviews. The “New York Law Journal” (Aug 17, 2004) published a glowing review. So did a publication called “Knowledge, Technology, and Policy” (Winter 2004). These are hardly niche computer publications, and they illustrate that I achieved my goal in writing the book. I wanted to write a book that the average person could read, enjoy, and learn from. Whenever I hear from one of those readers, it’s clear that I succeeded.
Lately, I’ve taken to handing the book to seatmates on airplanes. What I find is that people who start reading the book are hooked, and more often than not ask to buy it from me then and there.
Book publishing is an odd business. More and more books are being published, but publishers concentrate most of their resources on the next bestseller. Bookstores, too, primarily sell new and popular books. What that means is that even good steady sellers like “Beyond Fear” only stay in bookstores for a few months, and they rarely make it to the tables and racks up front where people browsing might see them.
At this point, “Beyond Fear” can only be found in larger bookstores, and even there it’ll be buried in the back somewhere. As much as I want people to support local bookstores, the cheapest and easiest way to buy the book is on the Internet. List price is $25, but you can find it cheaper. (Normally I would recommend Amazon, but Overstock.com has the book for $16 including shipping.)
The publisher has no plans to do a paperback edition of the book. Nor are there any plans to turn it into an e-book. They will continue to reprint it, as long as people continue to buy it.
For those of you who have already bought it, consider lending it to a friend.
Book’s homepage, including links to reviews:
Buy the book on Overstock.com:
Buy the book on Amazon:
If you watched the Olympic games on television, you saw the unprecedented security surrounding the 2004 Olympics. You saw shots of guards and soldiers, and gunboats and frogmen patrolling the harbors. But there was a lot more security behind the scenes. Olympic press materials state that there was a system of 1,250 infrared and high-resolution surveillance cameras mounted on concrete poles. Additional surveillance data was collected from sensors on 12 patrol boats, 4,000 vehicles, 9 helicopters, 4 mobile command centers, and a blimp. It wasn’t only images; microphones collected conversations, speech-recognition software converted them to text, and then sophisticated pattern-matching software looked for suspicious patterns. 70,000 people were involved in Olympic security, about seven per athlete or one for every 76 spectators.
The Greek government reportedly spent $1.5 billion on security during the Olympics. But aside from the impressive-looking guards and statistics, was the money well-spent? In many ways, Olympic security is a harbinger of what life could be like in the U.S. If the Olympics are going to be a security test bed, it’s worth exploring how well the security actually worked.
Unfortunately, that’s not easy to do. We have press materials, but the security details remain secret. We know, for example, that SAIC developed the massive electronic surveillance system, but we have to take their word for it that it actually works. Now SAIC is no slouch; they were one of the contractors that built the NSA’s ECHELON electronics eavesdropping system, and presumably have some tricks up their sleeves. But how well does it detect suspicious conversations or objects, and how often does it produce false alarms? We have no idea.
But while we can’t examine the inner workings of Olympic security, we do have some glimpses of security in action.
A reporter from the Sunday Mirror, a newspaper in Britain, reported all sorts of problems with security. First, he got a job as a driver with a British contractor. He provided no references, underwent no formal interview or background check, and was immediately given access to the main stadium. He found that his van was never thoroughly searched, and that he could have brought in anything. He was able to plant three packages that were designed to look like bombs, all of which went undetected during security sweeps. He was able to get within 60 feet of dozens of heads of state during the opening ceremonies.
In a separate incident, a man wearing a tutu and clown shoes managed to climb a diving platform, dive into the water, and swim around for several minutes before officials pulled him out. He claimed that he wanted to send a message to his wife, but the name of an online gambling website printed on his chest implied a more commercial motive.
And on the last day of the Olympics, a Brazilian runner who was leading the men’s marathon, with only a few miles to go, was shoved off the course into the crowd by a costumed intruder from Ireland. He ended up coming in third; his lead was narrowing before the incident, but it’s impossible to tell how much it might have cost him.
These three incidents are anecdotal, but they illustrate an important point about security at this kind of event: it’s pretty much impossible to stop a lone operator intent on making mischief. It doesn’t matter how many cameras and listening devices you’ve installed. It doesn’t matter how many badge checkers and gun-toting security personnel you’ve hired. It doesn’t matter how many billions of dollars you’ve spent.
A lone gunman or a lone bomber can always find a crowd of people.
This is not to say that guards and cameras are useless, only that they have their limits. Money spent on them rapidly reaches the point of diminishing returns, and after that more is just wasteful.
Far more effective would have been to spend most of that $1.5 billion on intelligence and on emergency response. Intelligence is an invaluable tool against terrorism, and it works regardless of what the terrorists are plotting – even if the plots have nothing to do with the Olympics. Emergency response preparedness is no less valuable, and it too works regardless of what terrorists manage to pull off—before, during, or after the Olympics.
No major security incidents happened this year at the Olympics. As a result, major security contractors will tout that result as proof that $1.5 billion was well-spent on security. What it really shows is how quickly $1.5 billion can be wasted on security. Now that the Olympics are over and everyone has gone home, the world will be no safer for spending all the money. That’s a shame, because that $1.5 billion could have bought the world a lot of security if spent properly.
A version of this essay originally appeared in the Sydney Morning Herald, during the Olympics:
At the CRYPTO conference in Santa Barbara, CA, last month, researchers announced several weaknesses in common hash functions. These results, while mathematically significant, aren’t cause for alarm. But even so, it’s probably time for the cryptography community to get together and create a new hash standard.
One-way hash functions are a cryptographic construct used in many applications. They are used in conjunction with public-key algorithms for both encryption and digital signatures. They are used in integrity checking. They are used in authentication. They have all sorts of applications in a great many different protocols. Much more than encryption algorithms, one-way hash functions are the workhorses of modern cryptography.
In 1990, Ron Rivest invented the hash function MD4. In 1992, he improved on MD4 and developed another hash function: MD5. In 1993, the National Security Agency published a hash function very similar to MD5, called SHA (Secure Hash Algorithm). Then, in 1995, citing a newly discovered weakness that it refused to elaborate on, the NSA made a change to SHA. The new algorithm was called SHA-1. Today, the most popular hash function is SHA-1, with MD5 still being used in older applications.
One-way hash functions are supposed to have two properties. One, they’re one way. This means that it is easy to take a message and compute the hash value, but it’s impossible to take a hash value and recreate the original message. (By “impossible” I mean “can’t be done in any reasonable amount of time.”) Two, they’re collision free. This means that it is impossible to find two messages that hash to the same hash value. The cryptographic reasoning behind these two properties is subtle, and I invite curious readers to learn more in my book “Applied Cryptography.”
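Both properties are easy to see in practice with standard tools. A minimal sketch in Python (hashlib is the standard library module; the messages themselves are just illustrative):

```python
import hashlib

# Forward direction is easy: any message yields a fixed-size digest.
digest = hashlib.sha1(b"Attack at dawn").hexdigest()
print(len(digest))  # 40 hex characters (160 bits), regardless of message length

# Even a one-character change produces an unrelated-looking digest, which is
# why finding two messages with the same hash (a collision) is supposed to
# be infeasible.
digest2 = hashlib.sha1(b"Attack at dusk").hexdigest()
print(digest == digest2)  # False

# "One way" means there is no efficient inverse: the only generic way to
# recover a message from its digest is brute-force guessing.
```

The same two calls work for MD5 (`hashlib.md5`), which is what makes the algorithms interchangeable in principle, even though their security margins differ.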
Breaking a hash function means showing that either—or both—of those properties are not true. Cryptanalysis of the MD4 family of hash functions has proceeded in fits and starts over the last decade or so, with results against simplified versions of the algorithms and partial results against the whole algorithms. This year, Eli Biham and Rafi Chen, and separately Antoine Joux, announced some pretty impressive cryptographic results against MD5 and SHA. Collisions have been demonstrated in SHA. And there are rumors, unconfirmed at this writing, of results against SHA-1.
The magnitude of these results depends on who you are. If you’re a cryptographer, this is a huge deal. While not revolutionary, these results are substantial advances in the field. The techniques described by the researchers are likely to have other applications, and we’ll be better able to design secure systems as a result. This is how the science of cryptography advances: we learn how to design new algorithms by breaking other algorithms. Additionally, algorithms from the NSA are considered a sort of alien technology: they come from a superior race with no explanations. Any successful cryptanalysis against an NSA algorithm is an interesting data point in the eternal question of how good they really are in there.
To a user of cryptographic systems—as I assume most readers are—this news is important, but not particularly worrisome. MD5 and SHA aren’t suddenly insecure. No one is going to be breaking digital signatures or reading encrypted messages anytime soon with these techniques. The electronic world is no less secure after these announcements than it was before.
But there’s an old saying inside the NSA: “Attacks always get better; they never get worse.” These techniques will continue to improve, and probably someday there will be practical attacks based on these techniques.
It’s time for us all to migrate away from SHA-1.
Luckily, there are alternatives. The National Institute of Standards and Technology already has standards for longer—and harder to break—hash functions: SHA-224, SHA-256, SHA-384, and SHA-512. They’re already government standards, and can already be used. This is a good stopgap, but I’d like to see more.
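In well-designed software, the migration can be nearly painless, because the hash function is a replaceable parameter rather than hard-coded. A sketch in Python (the function and its name are mine, not from any particular product):

```python
import hashlib

def fingerprint(data: bytes, algorithm: str = "sha256") -> str:
    """Return a hex digest. Making the algorithm a parameter lets callers
    move from sha1 to sha256 or sha512 without code changes."""
    return hashlib.new(algorithm, data).hexdigest()

# The SHA-2 family produces longer digests than SHA-1's 160 bits:
print(len(fingerprint(b"example", "sha1")))    # 40 hex chars (160 bits)
print(len(fingerprint(b"example", "sha256")))  # 64 hex chars (256 bits)
print(len(fingerprint(b"example", "sha512")))  # 128 hex chars (512 bits)
```

The harder part of any real migration is the stored data: certificates, signatures, and password files that embed the old digests all have to be reissued or rehashed.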
I’d like to see NIST orchestrate a worldwide competition for a new hash function, like they did for the new encryption algorithm, AES, to replace DES. NIST should issue a call for algorithms, and conduct a series of analysis rounds, where the community analyzes the various proposals with the intent of establishing a new standard.
Most of the hash functions we have, and all the ones in widespread use, are based on the general principles of MD4. Clearly we’ve learned a lot about hash functions in the past decade, and I think we can start applying that knowledge to create something even more secure.
Better to do it now, when there’s no reason to panic, than years from now, when there might be.
NIST’s SHA site:
This essay originally appeared in Computerworld:
Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Accidents and security incidents:
Special issue on 9/11, including articles on airport security, biometrics, cryptography, steganography, intelligence failures, and protecting liberty:
Full Disclosure and the Window of Exposure:
Open Source and Security:
Factoring a 512-bit Number:
If you fly out of Logan Airport and don’t want to take off your shoes for the security screeners and get your bags opened up, pay attention. The U.S. government is testing its “Trusted Traveler” program, and Logan is the fourth test airport. Currently only American Airlines frequent fliers are eligible, but if all goes well the program will be opened up to more people and more airports.
Participants provide their name, address, phone number, and birth date, a set of fingerprints, and a retinal scan. That information is matched against law enforcement and intelligence databases. If the applicant is not on any terrorist watch list, and is otherwise an upstanding citizen, he gets a card that allows him access to a special security lane. The lane doesn’t bypass the metal detector or X-ray machine for carry-on bags, but avoids more intensive secondary screening unless there’s an alarm of some kind.
Unfortunately, this program won’t make us more secure. Some terrorists will be able to get Trusted Traveler cards, and they’ll know in advance that they’ll be subjected to less-stringent security.
Since 9/11, airport security has been subjecting people to special screening: sometimes randomly, and sometimes based on profile criteria as analyzed by computer. For example, people who buy one-way tickets, or pay with cash, are more likely to be flagged for this extra screening.
Sometimes the results are bizarre. Screeners have searched children and people in wheelchairs. In 2002, Al Gore was randomly stopped and searched twice in one week. And just last month, Senator Ted Kennedy was flagged – and denied boarding – because the computer decided he was on some “no fly” list.
Why waste precious time making Grandma Lillie from Worcester empty her purse, when you can search the carry-on items of Anwar, a twenty-six-year-old who arrived last month from Egypt and is traveling without luggage?
The reason is security. Imagine you’re a terrorist plotter with half a dozen potential terrorists at your disposal. They all apply for a card, and three get one. Guess which three are going on the mission? And they’ll buy round-trip tickets with credit cards, and have a “normal” amount of luggage with them.
What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.
The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile, and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That’s simply not true. Most of the 9/11 terrorists were unknown, and not on any watch list. Timothy McVeigh was an upstanding U.S. citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that al Qaeda is recruiting non-Arab terrorists for U.S. operations. Airport security is best served by intelligent guards watching for suspicious behavior, and not dumb guards blindly following the results of a Trusted Traveler program.
Moreover, there’s no need for the program. Frequent fliers and first-class travelers already have access to special lanes that bypass long lines at security checkpoints, and the computers never seem to flag them for special screening. And even the long lines aren’t very long. I’ve flown out of Logan repeatedly, and I’ve never had to wait more than ten minutes at security. The people who could use the card don’t need one, and infrequent travelers are unlikely to take the trouble—or pay the fee—to get one.
As counterintuitive as it may seem, it’s smarter security to randomly screen people than it is to screen solely based on profile. And it’s smarter still to do a little bit of both: random screening and profile-based screening. But to create a low-security path, one that guarantees someone on it less rigorous screening, is to invite the bad guys to use that path.
This essay originally appeared in the Boston Globe:
The theft of the Munch paintings from the Munch Museum in Oslo illustrates an interesting point: for famous works of art, the requirements of security and of public access are simply incompatible. The level of security that the Smithsonian gives the Hope Diamond just doesn’t scale, and most museums don’t have the budget to even come close.
It’s an interesting security problem. Museum items, especially fine art, are incredibly valuable. Lesser museum works have less value, but are easily resold. (Think of all the Mesopotamian artifacts that were looted from Iraqi museums after the U.S. invaded.) But more valuable items, especially fine art, are immediately recognizable and hence much less valuable when stolen. It takes a very special kind of art collector to want a work that he can never display publicly.
Museum items can also be stolen and held for ransom, or defaced as an act of protest.
What this means is that works of art need to be secured in museums. But most museums can’t afford the level of security that these works of art require. (Even insurance is often beyond a museum’s budget.) And the items need to be available to the public, which makes security even harder.
The choice seems to be good door locks and building alarms, and sometimes 24-hour guards. This works most of the time, but occasionally there are some spectacular security failures.
And even worse, the prevalence of good after-hours museum security means that thieves are more likely to turn to more violent “crash and grab” means of theft.
Links about turning cell phones into listening devices:
Turn this phone’s ringer to silent, and you have a listening device with no modification:
Last month I mentioned this encrypted external drive, and wondered why decent key lengths weren’t available. Either I missed them when I looked at the site, or they’ve recently added them. In any case, the drives come in 40-bit and 64-bit DES encryption, 128-bit triple-DES encryption, and 128-bit and 192-bit AES encryption. Looks like a really neat product.
Fascinating account of the uses, and limits, of interrogation:
Looks like ALL users of Windows XP will be able to download and install the SP2 upgrade, whether they have a legal copy or not. This is excellent news for Internet security:
Another airport false alarm:
An analysis of port knocking:
Good article on cyberinsurance:
Fascinating, and long, FBI document about concealed weapons. Includes photographs and X-ray images:
Security discussion of supermarket self-checkout technology:
A Canada Revenue Agency (the branch of government that deals with taxes) employee wanted to fight a speeding ticket. So he served the police officer who gave him the ticket with a phony tax audit notice. The speeding ticket case was scheduled to be heard on the same day as the officer was supposed to appear for the tax audit.
Authentication vulnerabilities in the U.S. Emergency Alert System:
The Federal Reserve is going to be moving money over the Internet. Yikes!
The average time before an unpatched PC on the Internet is successfully attacked is 20 minutes. As the article points out, that’s not even long enough to download the patches you need to be secure.
Interview with the author of the Netsky and Sasser worms:
There’s a Chinese worm that attempts to steal exams, presumably because there’s a black market in their sale:
Is encryption doomed? Despite the sensational title of this article, it’s not. Don’t expect this kind of thing to matter anytime soon.
Long and interesting review of Windows XP SP2, including a list of missed opportunities for increased security. Worth reading:
Extensive comments are here:
Fascinating article on counterfeiting in Canada:
Another “free information is more secure than secret information” story. A science panel concludes that the value of freely sharing data on dangerous germs so vaccines and treatments can be developed outweighs the danger that bioterrorists may use the information to do harm.
A model for disclosure and security:
Interesting links between malware writers and organized crime:
Nice essay on the difficulty, and importance, of erasing computer information:
A website offers Caller ID falsification service:
A virus has penetrated a classified, completely separate, Internet-compatible network called SIPRNET. How did the virus jump the air gap? The article doesn’t say. Why did the computers get infected? They didn’t have any antivirus protection. Moral: even if your computer is not connected to the Internet, you still need to think about Internet security:
Bruce Schneier is hosting a series of roundtable events, discussing security concerns with CIOs and CISOs. If you would like to receive an invitation to one of these events, please contact Teresa Buchholz at email@example.com. Locations I’ll be visiting between now and the end of this year include New York, Washington DC, Houston, Dallas, San Francisco, and LA.
Schneier has been interviewed:
Schneier has written an article on “benevolent worms” for CSO Online:
Schneier has written an article on DES for eWeek:
An alert reader wrote to me about an interesting spoofing attack. He demonstrated that it works in Finland with Nokia mobile phones; it may work elsewhere.
For an unimportant reason, this person ended up with two identical phone numbers with different prefixes: 040 1234567 and 050 1234567. He was able to do this because the second operator—with the second prefix—allowed him to choose a number.
What he noticed is that if someone has one—but not both—of those phone numbers in his address book, and he calls that person from the other number, the Caller ID function of the phone correctly identifies him anyway. This also works for text messages.
It turns out that mobile phones do not check the *whole* number when matching identities in the address book.
This suggests a very simple identity spoofing attack: attacker signs up for a mobile phone account with a different prefix than his victim, but chooses the number to be the same. Attacker can then call and send text messages to contacts of the victim, and the recipients will believe that he is in fact the victim.
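The flaw is easy to model: if the address-book lookup compares only the trailing digits of the incoming number, any two full numbers that share that suffix become indistinguishable. A sketch in Python (the 7-digit suffix length is an assumption on my part; actual handsets may compare a different number of digits):

```python
def normalize(number):
    """Strip spaces and other formatting, keeping only the digits."""
    return "".join(ch for ch in number if ch.isdigit())

def match_contact(address_book, caller, suffix_len=7):
    """Naive lookup that, like the handsets described above, compares
    only the last suffix_len digits of the incoming number."""
    tail = normalize(caller)[-suffix_len:]
    for name, stored in address_book.items():
        if normalize(stored)[-suffix_len:] == tail:
            return name
    return None

book = {"Victim": "040 1234567"}
# A different subscriber on a different operator, same chosen suffix:
print(match_contact(book, "050 1234567"))  # "Victim" -- misidentified
```

The fix is equally simple in principle: compare the full normalized number (including the prefix) rather than just its tail.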
Imagine a list of suspected terrorists so dangerous that we can’t ever let them fly, yet so innocent that we can’t arrest them—even under the draconian provisions of the Patriot Act.
This is the federal government’s “No Fly” list. First circulated in the weeks after 9/11 as a counterterrorist tool, its details are shrouded in secrecy. But because the list is filled with inaccuracies and ambiguities, thousands of innocent, law-abiding Americans have been subjected to lengthy interrogations and invasive searches every time they fly, and sometimes forbidden to board airplanes. It also has been a complete failure, and has not been responsible for a single terrorist arrest anywhere.
Instead, the list has snared Asif Iqbal, a Rochester, NY businessman who shares a name with a suspected terrorist currently in custody in Guantanamo. It’s snared a 71-year-old retired English teacher. A man with a top-secret government clearance. A woman whose name is similar to that of an Australian man 20 years younger. Anyone with the name David Nelson is on the list. And recently it snared Senator Ted Kennedy, who had the unfortunate luck to share a name with “T Kennedy,” an alias once used by a person someone decided should be on the list.
There is no recourse for those on the list, and their stories quickly take on a Kafkaesque tone. People can be put on the list for any reason; no standards exist. There’s no ability to review any evidence against you, or even confirm that you are actually on the list. And for most people, there’s no way to get off the list or to “prove” once and for all that they’re not whoever the list is really looking for. It took Senator Kennedy three weeks to get his name off the list. People without his political pull have spent years futilely trying to clear their names.
There’s something distinctly un-American about a secret government blacklist, with no right of appeal or judicial review. Even worse, there’s evidence that it’s being used as a political harassment tool: environmental activists, peace protestors, and anti-free-trade activists have all found themselves on the list.
But security is always a trade-off, and some might make the reasonable argument that these kinds of civil liberty abuses are required if we are to successfully fight terrorism in our country. The problem is that the no-fly list doesn’t protect us from terrorism.
It’s not just that terrorists are not stupid enough to fly under recognized names. It’s that the very problems with the list that make it such an affront to civil liberties also make it less effective as a counterterrorist tool.
Any watch list where it’s easy to put names on and difficult to take names off will quickly fill with false positives. These false positives eventually overwhelm any real information on the list, and soon the list does no more than flag innocents—which is what we see happening today, and why the list hasn’t resulted in any arrests.
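The base-rate problem here is easy to quantify. A back-of-the-envelope calculation in Python (every number below is invented for illustration; the real list’s statistics are secret):

```python
# Hypothetical numbers, purely illustrative.
passengers_per_year = 600_000_000  # rough order of magnitude for U.S. enplanements
real_bad_guys_flying = 10          # generously assume a few actually try to fly
flags_per_100k = 1                 # one innocent flagged per 100,000 passengers

false_alarms = passengers_per_year * flags_per_100k // 100_000
print(false_alarms)  # 6000 innocents flagged per year

# Even if the list catches every genuine match, the chance that any given
# flagged passenger is a real threat is tiny:
precision = real_bad_guys_flying / (real_bad_guys_flying + false_alarms)
print(f"{precision:.2%}")  # 0.17% -- screeners quickly learn to ignore alarms
```

Whatever the true numbers, the shape of the result is the same: when real threats are rare, even a low false-positive rate produces alarms that are almost all false.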
A quick search through an Internet phone book shows 3,400 listings for “T Kennedy” in the United States. Since many couples have only one phone book entry, many T Kennedys are unlisted spouses—that translates to about 5,000 people total. Adding “T Kennedy” to the no-fly list is irresponsible, especially since it was known to be an alias.
Even worse, this behavior suggests an easy terrorist tactic: use common American names to refer to co-conspirators in your communications. This will make the list even less effective as a security tool, and more effective as a random harassment tool. There might be 5,000 people named “T Kennedy” in the U.S., but there are 54,000 listings for “J. Brown.”
Watch lists can be good security, but they need to be implemented properly. It should be harder than it currently is to add names to the list. It should be possible to add names to the list for short periods of time. It should be easy to take names off the list, and to add qualifiers to the list. There needs to be a legal appeals process for people on the list who want to clear their name. For a watch list to be a part of good security, there needs to be a notion of maintaining the list.
This isn’t new, and this isn’t hard. The police deal with this problem all the time, and they do it well. We do a worse job of identifying potential terrorists than the police do of identifying crime suspects. Imagine if, when asking a witness to identify a suspect, all the police did was ask whether the name “sounded about right.” No suspect picture book. No lineup.
In a country built on the principles of due process, the current no-fly list is an affront to our freedoms and liberties. And it’s lousy security to boot.
Getting off the list by using your middle name:
This essay originally appeared in Newsday:
From: Wayne Schroeder <schroedezuri.sdsc.edu>
Subject: BOB on Board
I agree with most of your comments concerning the BOB incident. The captain overreacted when deciding to turn the plane around after a bag was found with “BOB” written on it (which could have meant “Bomb On Board” or a few other things). Even though, as one report indicated, there was a VIP on board. The fear was excessive, and only results in more fear.
But I disagree with your statement that “Fear won’t make anyone more secure” and I think it is worth a few moments to consider fear and other emotions in the context of our rational decisions.
There are, as you point out, situations in which fear will make us less secure. We can become so concerned about remote possibilities that we ignore more pressing threats. We can waste resources on activities that are highly ineffective.
But in general, fear very often DOES make us more secure. And it does so with such frequency and efficiency that we hardly notice. When I get on my motorcycle, I feel a slight bit of fear. Fear that I might lose control, or that someone will run me down, that a wheel will fall off, or whatever. And that fear adds a bit of caution to my frame of mind; and results in a lot of defensive driving. And it seems to have worked reasonably well, as I’ve been riding for 30+ years and I’m still alive.
When we walk to the edge of a cliff, we hesitate. Many of those who didn’t are dead, and died before they reproduced.
For the most part, our emotions exist today because they are effective motivators that efficiently interact with the more rational aspects of our selves to result in wise decisions. And because of this, natural selection has preserved them. A little bit of fear, or anger, or love, can be a good thing. It is when we forget to also engage our heads, that we often get into trouble.
Excessive fear, which clouds our judgment, can frequently make us less secure. But a little bit of fear will usually motivate us to maintain our safety.
I’m sorry that your recent Crypto-Gram only mentions GHB as a “date rape drug.” I wasn’t able to find a good, recent, authoritative reference on this, but quick perusal of FAQs and Web sites of people who use these kinds of drugs supports my subjective impression that persons using GHB far more commonly dose *themselves* rather than dosing others. They either enjoy the neurological effects (similar to alcohol, but stronger) or they’re trying to use it for bodybuilding (I’m not sure exactly how that works, but with a special diet it helps raise muscle mass or something).
The “date rape drug” story is arguably an even better example of misperception of risks than your carry-a-bottle-opener example; date rape may be numerically quite an unpopular use of GHB, but it’s the sole threat targeted by almost all anti-GHB efforts. The law making GHB a Schedule I controlled substance was called “The Hillory J. Farias and Samantha Reid Date-Rape Drug Prohibition Act”. The result is that lots of people now think date rape is the only thing anyone ever does with GHB. There seem to be far more cases of people being harmed by deliberately mixing it into their own alcoholic drinks, because it interacts badly with alcohol, but I’ve never seen an ad campaign warning against that threat.
From: Trammell Hudson <hudson@swcp.com>
For a far more elaborate security countermeasure, see this invention by a North Wales man:
From: Greg Walker <Greg.Walker@cityofhouston.net>
Subject: Houston Airport Rangers
The Airport Rangers program is an adjunct security program that, from my reading of Mr. Schneier’s other articles available on the web, would appear to be exactly the “out of the box,” non-traditional thinking he would normally champion. I must assume that the only reason he doesn’t champion our program is that he is not fully apprised of the background of the airport, nor of the place of the Airport Rangers in the overall scheme of protection.
Mr. Schneier appears to base his opinion simply on reviewing the Houston Airport System website’s page(s) about the Airport Ranger program, which, by the way, has over 450 volunteers who have passed background checks. It should be noted that not all applicants have passed. I believe it would be imprudent of me to discuss the parameters and depth of our background checks in a public forum. However, I will state that we security professionals in the HAS Public Safety & Technology Division believe them to be at an appropriate level, considering that the Rangers are basically riding upon public land to begin with. Mr. Schneier may believe that we depend on the Airport Rangers program as a primary means of perimeter security; if so, this would be a tremendous misassumption. We have numerous technological and professional security-force resources involved in protecting our perimeter, and are constantly reviewing the latest trends and techniques. I am forbidden by federal law to disclose our present techniques in any further detail, or toward which new techniques we are leaning.
George Bush Intercontinental Airport (IAH) has approximately 11,000+ acres of land and was built on what, not too long ago, was a rural area close to the City of Houston. This is a very common type of location for airports. A large portion of the land at IAH is wooded and contains a large amount of undergrowth. This is land being held for future expansion and a clear zone exists between these areas and the security fence around the Airport Operations Area. This land is outside of the area regulated by the federal regulatory authorities and is essentially public land. For many years, as the area where equestrians could ride has been greatly reduced due to the development that occurs around airports, the equestrian community in North Harris County has ridden around this area without having proper permission or documentation.
Houston Airport System (HAS) thought it better to work with the equestrian community than to fight it. After all, the land is owned by the taxpayers, and as a result a “win-win” situation developed. The equestrians now have a nice place to ride, because the Houston Airport System has made a number of improvements on the land to encourage daily use by the Airport Rangers. In exchange, the Airport now knows who is riding, knows their background, and has them wearing picture identification cards. Rangers challenge anyone they come upon who doesn’t have the HAS-issued ID card, and report anything they deem to be suspicious to our Security Dispatch Center by cellular telephone. In short, Rangers provide extra sets of eyes and ears for the police and the security professionals. Not only do they look for suspicious persons or signs of unauthorized entry or activity, they also report brush that is growing too close to the security fence, any attempts by animals to dig under the fence and get onto the runways, etc., so that corrective action can be taken to keep the runways and taxiways safer from even the normal, everyday types of danger. Several incidents of trespass have been reported by Rangers, and at least two thefts of property have been thwarted.
Mr. Schneier worries about civil rights and constitutional protections, along with racial profiling. These are necessary and admirable concerns. However, the Airport Rangers have no powers to arrest or detain—they are simply eyes and ears for law enforcement and security professionals—their sole obligation is to observe, actively look for certain things, and report them by cell phone to the HAS Security Dispatch Center.
Mr. Schneier opines that the perimeter around an airport used to be a no-man’s land, and anyone on the property was immediately suspicious. Ah, to live in that perfect world. At most large airports today, public streets, roads, and even highways run just a few feet outside of the main security fence. If the airport is lucky and has large acreage on several sides, such as George Bush Intercontinental does, then the problem becomes one of manpower to patrol all of the areas. Today, more than ever, law enforcement and security agencies need the assistance of every citizen, and the Airport Ranger Program, at least at George Bush Intercontinental Airport, goes a very long way in making that a reality.
As any law enforcement or security agency will tell you, crime (and, in today’s world, terrorism) is substantially reduced when there is a visible presence of law enforcement, security, and, yes, Mr. Schneier, even citizens, especially alert citizens who know whom to call and what to report. I’m pleased to say that law enforcement at all levels, federal and local, is strongly supportive of the program, and from time to time provides extra training for the Airport Rangers.
From: Folkert van Heusden <folkert@vanheusden.com>
Subject: Processor Microcode Vulnerabilities
> Turns out that it’s possible to update the AMD K8
> processor (Athlon64 or Opteron) microcode. And,
> get this, there’s no authentication check.
This is also possible with the Intel Pentium Pro, Pentium II, etc. See <http://www.urbanmyth.org/microcode/> for how to do this under Linux.
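A small sketch of what the letter is pointing at: on these older x86 systems, the kernel reports the currently loaded microcode revision in /proc/cpuinfo, and (on old kernels) accepts an update blob with no cryptographic check of its origin. The sample text below is illustrative, not captured from a real machine.

```python
def microcode_revision(cpuinfo_text):
    """Return the microcode revision string from /proc/cpuinfo-style
    text, or None if the kernel does not report one."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            # Lines look like "microcode\t: 0x2f"
            return line.split(":", 1)[1].strip()
    return None

# Illustrative sample; on a real box you would read open("/proc/cpuinfo").
sample = """\
processor\t: 0
vendor_id\t: GenuineIntel
model name\t: Pentium II (Deschutes)
microcode\t: 0x2f
"""

print(microcode_revision(sample))  # -> 0x2f
```

The revision number tells you *which* microcode is loaded, but nothing in the old update path verified *who* produced it, and that is the missing authentication the letters are complaining about.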
From: “Allan Dyer” <adyer@yuikee.com.hk>
Subject: CIA Privacy Notice
Regarding the CIA’s privacy notice, you say, “Um, isn’t their job to collect personal information about people without their permission?”
Yes, very true. But isn’t it also their job to lie?
From: “Hamlin, Stuart” <SHamlin@GAM.COM>
Subject: Windows XP SP2
With reference to your point about Windows XP SP2, I think you are half right and half wrong. It is true that if fixing security bugs in the OS breaks an application, it is the application that is wrong. However, I think the sheer scale of the carnage caused by SP2 is verging on the ridiculous.
Microsoft has hardly kept it a secret that some applications might not function as before, but the trouble is people are going to look at that list and say “well that’s everything I run” and give up. No security fix is worth that much grief to most people, especially SMEs. Doubly so if they have managed to avoid infection or attack to date.
Of course, what is needed is not a “look what we broke” posting on their site, but a comprehensive help resource for getting the applications working again. Until they provide it, my guess is that most people will balk at installing it, and that makes us all less secure.
Once again MS fails where it matters most.
From: Jim Reid <jim@rfc1035.com>
Subject: British Emergency Preparedness
“It’s a joke site, and worth a visit: <http://www.preparingforemergencies.co.uk/>”
This is a parody of a pamphlet that the UK government has sent to every home in the country. This contains wise advice like “If there is a fire, get out, stay out and call 999” and “If a bomb goes off in your building, look for the safest way out.” Thank goodness Her Majesty’s Government told me to do that. I would never have thought to do any of those things if it wasn’t for this pamphlet.
The genuine article is at <http://www.preparingforemergencies.gov.uk>. IMO it’s much better tragi-comedy than the parody. It states the bleeding obvious—in an emergency, call the emergency services and listen to the radio & TV for advice and information. But it doesn’t say anything about what to do if the phones, radio, and TV are down. Even so, at least someone in the civil service has a sense of humour. The small print says the pamphlet can be copied without permission provided it’s not used in a derogatory manner.
From: BJ Brooks <bobstuff17@hotmail.com>
“Here’s a guy who has a webcam pointing at his Secure ID token, so he doesn’t have to remember to carry it around. Here’s the strange thing: unless you know who the webpage belongs to, it’s still good security.”
That’s a recent quote from your newsletter regarding the moron who has his fob on display for the world to see. I agree with you that, unless you know whose it is, it’s still decent security.
In a perfect world, nobody would need a fob, but all this guy did was make himself, and more importantly his company, a target for would-be hackers or bored guys looking for some fun. Within minutes of being shown this webpage, I had the guy’s name, address, four phone numbers, his wife’s name, his son’s name and birthday, his company name, his job title, and several email addresses. I even had a picture of him and his son. And this was all on the level; I have no desire to get into his company’s firewall. I just wanted to see how hard it would be to find. His username for each email address is identical, and I’d bet my next paycheck that it’s the same username he’s got at work. Given his wife’s and kid’s names, not to mention the birthday, and this guy’s level of originality, I’m sure my 8-year-old neighbor could figure out his password within an hour.
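The letter’s point about guessable passwords can be made concrete: once an attacker has family names and a birthday, the space of “obvious” passwords is tiny. Here is a minimal sketch of such a guess generator; the names and dates are hypothetical, not the ones the letter writer found.

```python
def candidate_passwords(names, dates):
    """Generate the obvious password guesses an attacker would try
    first from known personal details: each name alone, plus each
    name+date combination, with simple capitalization variants.
    This is the kind of tiny search space the letter describes."""
    guesses = set()
    for name in names:
        for variant in (name.lower(), name.capitalize()):
            guesses.add(variant)                 # name by itself
            for date in dates:
                guesses.add(variant + date)      # name + birthday digits
    return guesses

# Hypothetical details of the sort the letter says were easy to find.
guesses = candidate_passwords(["alice", "bobby"], ["1996", "0714"])
print(len(guesses))  # -> 12
```

A dozen guesses is work for seconds, not hours; that is why publicly tying a token fob to an identifiable person (and a predictable username) undoes much of the token’s benefit.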
From: Greg Guerin <glguerinamug.org>
Subject: ICS Atlanta’s Comment About Navajo Code-Talkers
The Navajo code-talkers’ “key length” would have been much larger than zero. All code-talkers were specially trained, and memorized the coded meaning of a large number of code words, all spoken in the Dineh (Navajo) native tongue. One dictionary example I found appears to match others turned up by a search for “navajo code talker dictionary”: <http://www.americanindians.com/…>
The letter-substitution code alone is clever. A fair number of the coded words are bilingual puns, such as “ant fight” for “about,” or “weasel tied together” for “which” (A-bout and W-hitch, get it?).
And that’s just the code itself. That doesn’t even consider the cultural context shared by all the code-talkers, or the fact that all the coded messages were spoken person-to-person, by native speakers of the language.
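The first-letter substitution scheme the letter describes can be sketched in a few lines: each English letter maps to a Navajo word whose English meaning begins with that letter, and the “key” is the memorized dictionary itself. The three entries below (“ant” for A, “bear” for B, “cat” for C) match commonly published code-talker dictionaries, but treat this mapping as illustrative, not as the complete historical alphabet.

```python
# Illustrative fragment of the letter-substitution alphabet.
CODE = {
    "a": "wol-la-chee",  # "ant" -> A
    "b": "shush",        # "bear" -> B
    "c": "moasi",        # "cat" -> C
}
DECODE = {word: letter for letter, word in CODE.items()}

def encode(word):
    """Spell an English word letter by letter in code words."""
    return [CODE[ch] for ch in word.lower()]

def decode(words):
    """Recover the English word from a sequence of code words."""
    return "".join(DECODE[w] for w in words)

print(encode("cab"))          # -> ['moasi', 'wol-la-chee', 'shush']
print(decode(encode("cab")))  # -> cab
```

Even this toy fragment shows why the “zero key length” claim fails: without the memorized word-to-letter dictionary (several hundred entries in the real system, plus whole-word code terms), an eavesdropper hearing the Navajo words has nothing to work with.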
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. Back issues are available on <http://www.schneier.com/crypto-gram.html>.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Comments on CRYPTO-GRAM should be sent to email@example.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.
Copyright (c) 2004 by Bruce Schneier.