Blog: January 2006 Archives

The Failure of US-VISIT

US-VISIT is the program to fingerprint and otherwise keep tabs on foreign visitors to the U.S. This article talks about how the program is being rolled out, but the last paragraph is the most interesting:

Since January 2004, US-VISIT has processed more than 44 million visitors. It has spotted and apprehended nearly 1,000 people with criminal or immigration violations, according to a DHS press release.

I wrote about US-VISIT in 2004, and back then I said that it was too expensive and a bad trade-off. The price tag for "the next phase" was $15B; I'm sure the total cost is much higher.

But take that $15B number. One thousand bad guys, most of them not very bad, caught through US-VISIT. That's $15M per bad guy caught.

Surely there's a more cost-effective way to catch bad guys?

Posted on January 31, 2006 at 4:07 PM • 70 Comments

Bug in Google's Censorship

Seems that the censorship service that Google has set up at China's request suffers from a trivial bug: if you type your searches using capital letters, you bypass the censor.

This'll be fixed real soon, I'm sure.
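
A plausible reconstruction of the bug class, assuming the filter does a case-sensitive match against lowercase blocklist terms. The terms and code here are hypothetical; Google's actual filter is not public.

```python
# Hypothetical sketch of the bug class (the real blocklist and matching
# code are unknown): comparing the raw query against lowercase blocklist
# terms is trivially bypassed by typing in capital letters.

BLOCKLIST = {"forbidden topic", "banned phrase"}   # illustrative terms

def naive_filter(query):
    # Case-sensitive containment check -- uppercase queries sail through.
    return any(term in query for term in BLOCKLIST)

def fixed_filter(query):
    # Normalize case before matching, closing the bypass.
    q = query.lower()
    return any(term in q for term in BLOCKLIST)

print(naive_filter("FORBIDDEN TOPIC"))  # False: censor bypassed
print(fixed_filter("FORBIDDEN TOPIC"))  # True: censor catches it
```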

Posted on January 31, 2006 at 3:00 PM • 19 Comments

Dutch Biometric Passport Cracked

There's a good write-up from The Register.

Two points stand out. One, the RFID chip in the passport can be read from ten meters. Two, lots of predictability in the encryption key -- sloppy, sloppy -- makes the brute-force attack much easier.

But the references are from last summer. Why is this being reported now?

Posted on January 31, 2006 at 1:04 PM • 24 Comments

Wireless Dead Drop

Dead drops have gone high tech:

Russia's Federal Security Service (FSB) has opened an investigation into a spying device discovered in Moscow, the service said Monday.

The FSB said it had confiscated a fake rock containing electronic equipment used for espionage on January 23, and had uncovered a ring of four British spies who worked under diplomatic cover, funding human rights organizations operating in Russia.

BBC had this to say:

The old idea of the dead-drop ('letterboxes' the British tend to call them) - by the oak tree next to the lamppost in such-and-such a park etc - has given way to hand-held computers and short-range transmitters.

Just transmit your info at the rock and your 'friends' will download it next day. No need for codes and wireless sets at midnight anymore.

Transferring information to and from spies has always been risky. It's interesting to see modern technology help with this problem.

Phil Karn wrote to me in e-mail:

My first reaction: what a clever idea! It's about time spycraft went hi-tech. I'd like to know if special hardware was used, or if it was good old 802.11. Special forms of spread-spectrum modulation and oddball frequencies could make the RF hard to detect, but then your spies run the risk of being caught with highly specialized hardware. 802.11 is almost universal, so it's inherently less suspicious. Randomize your MAC address, change the SSID frequently and encrypt at multiple layers. Store sensitive files encrypted, without headers, in the free area of a laptop's hard drive so they're not likely to be found in forensic analysis. Keep all keys physically separate from encrypted data.

Even better, hide your wireless dead drop in plain sight by making it an open, public access point with an Internet connection so the sight of random people loitering with open laptops won't be at all unusual.

To keep the counterespionage people from wiretapping the hotspot's ISP and performing traffic analysis, hang a PC off the access point and use it as a local drop box so the communications in question never go to the ISP.
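
Karn's "store sensitive files encrypted, without headers" suggestion can be sketched with standard-library primitives. This is an illustrative toy only (a real drop should use a vetted authenticated cipher); the point is just that the ciphertext carries no magic bytes or file-format header for a forensic tool to flag.

```python
# Toy illustration of header-free ciphertext (NOT production crypto).
# A keystream is derived from a passphrase with SHA-256 in counter
# mode, so the output is raw bytes: no magic numbers, no format header.

import hashlib

def keystream(passphrase, nbytes):
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(passphrase + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def xor_crypt(passphrase, data):
    # XOR with the keystream; the same call both encrypts and decrypts.
    ks = keystream(passphrase, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"meet at the rock, 23:00"
blob = xor_crypt(b"long random passphrase", secret)
assert xor_crypt(b"long random passphrase", blob) == secret
print(len(blob))  # same length as the plaintext, nothing extra
```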

I am reminded of a dead drop technique used by, I think, the 9/11 terrorists. They used Hotmail (or some other anonymous e-mail service) accounts, but instead of e-mailing messages to each other, one would save a message as "draft" and the recipient would retrieve it from the same account later. I thought that was pretty clever, actually.

Posted on January 31, 2006 at 7:17 AM • 36 Comments

Handwritten Real-World Cryptogram

I get e-mail, occasionally weird e-mail. Every once in a while I get an e-mail like this:

I know this is going to sound like a plot from a movie. It isn't. A very good friend of mine Linda Rayburn and her son Michael Berry were brutally murdered by her husband...the son's stepfather.

They were murdered on February 3rd, 2004. He then hung himself in the basement of their house. He left behind a number of disturbing items.

However, the most intriguing is a cryptogram handwritten on paper utilizing letters, numbers and symbols from a computer keyboard. Linda's daughter Jenn was the one who found the bodies. Jenn is a very good friend of mine and I told her I would do everything within my power to see if this cryptogram is truly a cryptogram with valuable information or if it is a wild goose chase to keep us occupied and wondering forever what it means.

I have no idea if any of this is true, but here's a news blip from 2004:

Feb. 2: Linda Rayburn, 44, and Michael Berry, 23, of Saugus, both killed at home. According to police, Rayburn's husband, David Rayburn, killed his wife and stepson with a hammer. Their bodies were found in adjacent bedrooms. David Rayburn left a suicide note, went to the basement, and hanged himself.

And here is the cryptogram:

The rectangle drawn over the top two lines was not done by the murderer. It was done by a family member afterwards.

Assuming this is all real, it's a real-world puzzle with no solution. No one knows what the message is, or even if there is a message.

If anyone figures it out, please let me know.

Posted on January 30, 2006 at 10:15 AM • 410 Comments

Friday Squid Blogging: Giant Squid in Tasmania

In 2002, a 60-foot-long giant squid washed up on the beach in Tasmania.

Because of the low number of observations, scientists have struggled to build up a profile of the giant squid, discovering only in the last five years how it reproduces.

It is believed they rarely have an opportunity to mate, and live isolated lives, but it is still unknown where the squid fits on the food chain.

The giant squid is a carnivorous mollusk with a beak-like mouth strong enough to cut through a steel cable and its eyes are the largest in the animal kingdom -- growing up to 45 centimeters (18 inches) wide.

The giant squid is believed to feed on, among other things, the world's biggest animals with several eyewitness stories from fisherman who have seen the squid in fierce battles with whales.

Dead whales have been found washed up on beaches with large sucker marks on their bodies, apparently from squid attacks.

Posted on January 27, 2006 at 4:08 PM • 17 Comments

NSA Has a Technology Transfer Program


The National Security Agency has established a formal technology transfer mechanism for openly sharing technologies with the external community. Our scientists and engineers, along with our academic and research partners, have developed cutting-edge technologies, which have not only satisfied our mission requirements, but have also served to improve the global technological leadership of the United States. In addition, these technical advances have contributed to the creation and improvement of many commercial products in America.

Look at their 44 Technology Profile Fact Sheets.

Posted on January 27, 2006 at 7:03 AM • 28 Comments

Another No-Fly List Victim

This person didn't even land in the U.S. His plane flew from Canada to Mexico over U.S. airspace:

Fifteen minutes after the plane left Toronto's Pearson International Airport, the airline provided customs officials in the United States with a list of passengers. Agents ran the list through a national data base and up popped a name matching Mr. Kahil's.


When the plane landed in Acapulco, the Kahils were ushered into a room for questioning. Mug shots were taken of the couple, along with their sons, Karim and Adam, who are 8 and 6. But it was not until a couple of hours later that the Kahils found out why.

Ms. Kahil and the children returned to Canada later that day and Mr. Kahil was put in a detention centre and his passport was confiscated.

Just another case of mistaken identity.

And here's a story of a four-year-old boy on the watch list.

This program has been a miserable failure in every respect. Not one terrorist caught, ever. (I say this because I believe 100% that if this administration caught anyone through this program, they would be trumpeting it for all to hear.) Thousands of innocents subjected to lengthy and extreme searches every time they fly, prevented from flying, or arrested.

Posted on January 26, 2006 at 3:28 PM • 53 Comments

Election Machine Conflicts of Interest

From EPIC:

EPIC FOIA Notes #11: No-Bid Contracts Go to Vendors with Close Ties to Election Advisory Group

Documents obtained by EPIC from the Election Assistance Commission describe two no-bid contracts for work on voting system standards given to vendors with ties to the Commission's technical advisory committee.

From a security perspective, this seems like a really bad idea.

Posted on January 26, 2006 at 7:35 AM • 13 Comments

New Zealand Espionage History

This is fascinating:

Among the personal papers bequeathed to the nation by former Prime Minister David Lange is a numbered copy of a top secret report from the organisation that runs the 'spy domes' at Waihopai and Tangimoana. It provides an unprecedented insight into how espionage was conducted 20 years ago.


Much of the GCSB's work involved translating and analysing communications intercepted by other agencies, "most of the raw traffic used ... (coming) from GCHQ/NSA sources", the British and US signals intelligence agencies.

Its report says "reporting on items of intelligence derived from South Pacific telex messages on satellite communications links was accelerated during the year.

"A total of 171 reports were published, covering the Solomons, Fiji, Tonga and international organisations operating in the Pacific. The raw traffic for this reporting was provided by NSA (the US National Security Agency)."

The GCSB also produced 238 intelligence reports on Japanese diplomatic cables, using "raw traffic from GCHQ/NSA sources". This was down from the previous year: "The Japanese government implementation of a new high grade cypher system seriously reduced the bureau's output." For French government communications, the GCSB "relied heavily on (British) GCHQ acquisition and forwarding of French Pacific satellite intercept".

The report lists the Tangimoana station's targets in 1985-86 as "French South Pacific civil, naval and military; French Antarctic civil; Vietnamese diplomatic; North Korean diplomatic; Egyptian diplomatic; Soviet merchant and scientific research shipping; Soviet Antarctic civil. Soviet fisheries; Argentine naval; Non-Soviet Antarctic civil; East German diplomatic; Japanese diplomatic; Philippine diplomatic; South African Armed Forces; Laotian diplomatic (and) UN diplomatic."

The station intercepted 165,174 messages from these targets, "an increase of approximately 37,000 on the 84/85 figure. Reporting on the Soviet target increased by 20% on the previous year".

Posted on January 25, 2006 at 12:58 PM • 30 Comments

Vulnerability Disclosure Survey

If you have a moment, take this survey.

This research project seeks to understand how secrecy and openness can be balanced in the analysis and alerting of security vulnerabilities to protect critical national infrastructures. To answer this question, this thesis will investigate:

  1. How vulnerabilities are analyzed, understood and managed throughout the vulnerability lifecycle process.
  2. The ways that the critical infrastructure security community interact to exchange security-related information and the outcome of such interactions to date.
  3. The nature of and influences upon collaboration and information-sharing within the critical infrastructure protection community, particularly those handling internet security concerns.
  4. The relationship between secrecy and openness in providing and exchanging security-related information.

This looks interesting.

Posted on January 25, 2006 at 8:24 AM • 13 Comments

How to Survive a Robot Uprising

It's Friday, so why not something a little silly?

This is a good start:

i'm reading about how to survive a robot uprising. i'm not gonna give away all the secrets, but i'll share a few...

  • choose a complex environment. waterfalls, street traffic, and places with lots of ambient noise confuse the robots.
  • lose your heat signature. smear yourself with mud and leaves and sit real still.
  • use uncommon words to suss out robots on the phone. robots do not know how to pronounce supercalifragilisticexpealidocious.
  • find a blunt weapon. serrated edges won't work on robo exo-skeletons. nope.
  • alter your stride. robots can judge gait and injury, even height and intention, by stride, so put some rocks in your shoes and mix things up a bit. doing some ministry of silly walks stuff goes even further towards confusing them.
  • pretend that everything is normal. to forstall a mechanized killing spree, you must pretend that nothing is amiss.

Surely we can do better. Any other suggestions?

EDITED TO ADD (1/30): Okay, it was Tuesday.

EDITED TO ADD (2/14): There's a book. Also a zombie survival guide.

Posted on January 24, 2006 at 2:52 PM • 98 Comments

The Doghouse: Super Cipher P2P Messenger

Super Cipher P2P Messenger uses "unbreakable Infinity bit Triple Layer Socket Encryption for completely secure communication."

Wow. That sure sounds secure.

EDITED TO ADD (2/15): More humor from their website:

Combining today's most advanced encryption techniques, and expanding on them. The maximum encryption cipher size is Infinity! Which means each bit of your file or message is encrypted uniquely, with no repetition. You define a short key in the program, this key is used in an algorithm to generate the Random Infinity bit Triple Cipher. Every time you send a message or file, even if it is exactly the same, the Triple Cipher completely changes; hence then name 'Random'. Using this method a hackers chances of decoding your messages or file is one to infinity. In fact, I challenge anyone in the world to try and break a single encrypted message; because it can't be done. Brute Force and pattern searching will never work. The Encryption method Super Cipher P2P Messenger uses is unbreakable.
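
The claims collapse under a back-of-the-envelope model. If the "Infinity bit" keystream is generated deterministically from the short key "you define in the program," the attacker only has to search the short key's space. A toy version (my own construction, not theirs) with a 16-bit key makes the point:

```python
# The "one to infinity" odds in miniature: a keystream derived
# deterministically from a short key gives an effective keyspace of the
# short key, however long the keystream is. With a 16-bit key the odds
# are one in 65,536, and brute force takes well under a second.

import random

def toy_stream_encrypt(key16, data):
    # Keystream is a deterministic function of the short key -- the
    # structural weakness in any "infinite" cipher keyed by a short key.
    rng = random.Random(key16)
    return bytes(b ^ rng.randrange(256) for b in data)

msg = b"attack at dawn"
ct = toy_stream_encrypt(12345, msg)

# Try every 16-bit key; XOR with the same keystream decrypts.
for k in range(1 << 16):
    if toy_stream_encrypt(k, ct) == msg:
        print("recovered key:", k)
        break
```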

Posted on January 24, 2006 at 12:51 PM • 77 Comments

How the French Spy on Their Citizens

Interesting article on how the French utilize domestic spying as a counterterrorism tool:

In the French system, an investigating judge is the equivalent of an empowered U.S. prosecutor. The judge is in charge of a secret probe, through which he or she can file charges, order wiretaps, and issue warrants and subpoenas. The conclusions of the judge are then transmitted to the prosecutor's office, which decides whether to send the case to trial. The antiterrorist magistrates have even broader powers than their peers. For instance, they can request the assistance of the police and intelligence services, order the preventive detention of suspects for six days without charge, and justify keeping someone behind bars for several years pending an investigation. In addition, they have an international mandate when a French national is involved in a terrorist act, be it as a perpetrator or as a victim. As a result, France today has a pool of specialized judges and investigators adept at dismantling and prosecuting terrorist networks.

Posted on January 24, 2006 at 6:25 AM • 67 Comments

43rd Mersenne Prime Found

Last month, researchers found the 43rd Mersenne prime: 2^30,402,457 - 1. It's 9,152,052 decimal digits long.

This is a great use of massively parallel computing:

The 700 campus computers are part of an international grid called PrimeNet, consisting of 70,000 networked computers in virtually every time zone of the world. PrimeNet organizes the parallel number crunching to create a virtual supercomputer running 24x7 at 18 trillion calculations per second, or 'teraflops.' This greatly accelerates the search. This prime, found in just 10 months, would have taken 4,500 years on a single PC.
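
The per-candidate work being distributed is the Lucas-Lehmer test, which decides the primality of M_p = 2^p - 1 with p - 2 modular squarings:

```python
# Lucas-Lehmer test: for odd prime p, M_p = 2^p - 1 is prime iff
# s_{p-2} == 0, where s_0 = 4 and s_{i+1} = s_i^2 - 2 (mod M_p).
# This is the check GIMPS/PrimeNet volunteers run in parallel.

import math

def lucas_lehmer(p):
    """True iff 2**p - 1 is prime, for odd prime p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents of the first few Mersenne primes:
print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19, 31]

# Digit count of the 43rd Mersenne prime, without computing the number:
print(math.floor(30402457 * math.log10(2)) + 1)  # 9152052
```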

Posted on January 23, 2006 at 3:07 PM • 36 Comments

Reading RFID Cards at Yards Away

This article talks about a not-a-passport ID card that U.S. citizens could use to go back and forth between the U.S. and Canada or Mexico. Pretty basic stuff, but this paragraph jumped out:

Officials said the card would be about the size of a credit card, carry a picture of the holder and cost about $50, about half the price of a passport. It will be equipped with radio frequency identification, allowing it to be read from several yards away at border crossings.

"Several yards away"? What about inches?

Note: My previous entries on RFID passports are here, here, here, and here.

Posted on January 23, 2006 at 12:27 PM • 27 Comments

Countering "Trusting Trust"

Way back in 1974, Paul Karger and Roger Schell discovered a devastating attack against computer systems. Ken Thompson described it in his classic 1984 speech, "Reflections on Trusting Trust." Basically, an attacker changes a compiler binary to produce malicious versions of some programs, INCLUDING ITSELF. Once this is done, the attack perpetuates, essentially undetectably. Thompson demonstrated the attack in a devastating way: he subverted a compiler of an experimental victim, allowing Thompson to log in as root without using a password. The victim never noticed the attack, even when they disassembled the binaries -- the compiler rigged the disassembler, too.

This attack has long been part of the lore of computer security, and everyone knows that there's no defense. And that makes this paper by David A. Wheeler so interesting. It's "Countering Trusting Trust through Diverse Double-Compiling," and here's the abstract:

An Air Force evaluation of Multics, and Ken Thompson's famous Turing award lecture "Reflections on Trusting Trust," showed that compilers can be subverted to insert malicious Trojan horses into critical software, including themselves. If this attack goes undetected, even complete analysis of a system's source code will not find the malicious code that is running, and methods for detecting this particular attack are not widely known. This paper describes a practical technique, termed diverse double-compiling (DDC), that detects this attack and some unintended compiler defects as well. Simply recompile the purported source code twice: once with a second (trusted) compiler, and again using the result of the first compilation. If the result is bit-for-bit identical with the untrusted binary, then the source code accurately represents the binary. This technique has been mentioned informally, but its issues and ramifications have not been identified or discussed in a peer-reviewed work, nor has a public demonstration been made. This paper describes the technique, justifies it, describes how to overcome practical challenges, and demonstrates it.

To see how this works, look at the attack. In a simple form, the attacker modifies the compiler binary so that whenever some targeted security code like a password check is compiled, the compiler emits the attacker's backdoor code in the executable.

Now, this would be easy to get around by just recompiling the compiler. Since that will be done from time to time as bugs are fixed or features are added, a more robust form of the attack adds a step: whenever the compiler is itself compiled, it emits the code to insert malicious code into various programs, including itself.

Assuming broadly that the compiler source is updated, but not completely rewritten, this attack is undetectable.

Wheeler explains how to defeat this more robust attack. Suppose we have two completely independent compilers: A and T. More specifically, we have source code SA of compiler A, and executable code EA and ET. We want to determine if the binary of compiler A -- EA -- contains this trusting trust attack.

Here's Wheeler's trick:

Step 1: Compile SA with EA, yielding new executable X.

Step 2: Compile SA with ET, yielding new executable Y.

Since X and Y were generated by two different compilers, they should have different binary code but be functionally equivalent. So far, so good. Now:

Step 3: Compile SA with X, yielding new executable V.

Step 4: Compile SA with Y, yielding new executable W.

Since X and Y are functionally equivalent, V and W should be bit-for-bit equivalent.

And that's how to detect the attack. If EA is infected with the robust form of the attack, then X and Y will be functionally different. And if X and Y are functionally different, then V and W will be bitwise different. So all you have to do is to run a binary compare between V and W; if they're different, then EA is infected.
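
The four steps can be simulated with toy compilers (my own sketch, not Wheeler's code). Model a binary as a pair (semantics, infected), where an infected binary re-plants its backdoor whenever it compiles the compiler's own source:

```python
# Toy model of diverse double-compiling (DDC). A "binary" is a pair
# (semantics, infected): its semantics come from the source it was
# compiled from; infection is inherited from the compiling binary,
# modeling the self-perpetuating "trusting trust" attack.

COMPILER_SOURCE = "source code of compiler A (SA)"

def compile_with(binary, source):
    _, infected = binary
    # An infected binary re-plants the backdoor whenever it compiles
    # the compiler's own source; a clean binary compiles faithfully.
    return (source, infected and source == COMPILER_SOURCE)

def ddc_reveals_infection(EA, ET, SA=COMPILER_SOURCE):
    X = compile_with(EA, SA)  # step 1: SA compiled by the suspect EA
    Y = compile_with(ET, SA)  # step 2: SA compiled by the trusted ET
    V = compile_with(X, SA)   # step 3
    W = compile_with(Y, SA)   # step 4
    return V != W             # bitwise compare: a mismatch means EA lied

ET_clean = ("trusted compiler T", False)
EA_clean = (COMPILER_SOURCE, False)
EA_evil  = (COMPILER_SOURCE, True)

print(ddc_reveals_infection(EA_evil, ET_clean))   # True: attack detected
print(ddc_reveals_infection(EA_clean, ET_clean))  # False: V and W match
```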

Now you might read this and think: "What's the big deal? All I need to test if I have a trusted compiler is...another trusted compiler. Isn't it turtles all the way down?"

Not really. You do have to trust a compiler, but you don't have to know beforehand which one you must trust. If you have the source code for compiler T, you can test it against compiler A. Basically, you still have to have at least one executable compiler you trust. But you don't have to know which one you should start trusting.

And the definition of "trust" is much looser. This countermeasure will only fail if both A and T are infected in exactly the same way. The second compiler can be malicious; it just has to be malicious in some different way: i.e., it can't have the same triggers and payloads as the first. You can greatly increase the odds that the triggers/payloads are not identical by increasing diversity: using a compiler from a different era, on a different platform, without a common heritage, transforming the code, etc.

Also, the only thing compiler T has to do is compile the compiler-under-test. It can be hideously slow, produce code that is hideously slow, or only work on a machine that hasn't been produced in a decade. You could create a compiler specifically for this task. And if you're really worried about "turtles all the way down," you can write compiler T yourself for a computer you built yourself from vacuum tubes that you made yourself. Since compiler T only has to occasionally recompile your "real" compiler, you can impose a lot of restrictions that you would never accept in a typical production-use compiler. And you can periodically check compiler T's integrity using every other compiler out there.

For more detailed information, see Wheeler's website.

Now, this technique only detects when the binary doesn't match the source, so someone still needs to examine the compiler source code. But now you only have to examine the source code (a much easier task), not the binary.

It's interesting: the "trusting trust" attack has actually gotten easier over time, because compilers have gotten increasingly complex, giving attackers more places to hide their attacks. Here's how you can use a simpler compiler -- that you can trust more -- to act as a watchdog on the more sophisticated and more complex compiler.

Posted on January 23, 2006 at 6:19 AM • 74 Comments

Surreptitious Lie Detector

According to The New Scientist:

The US Department of Defense has revealed plans to develop a lie detector that can be used without the subject knowing they are being assessed. The Remote Personnel Assessment (RPA) device will also be used to pinpoint fighters hiding in a combat zone, or even to spot signs of stress that might mark someone out as a terrorist or suicide bomber.

"Revealed plans" is a bit of an overstatement. It seems that they're just asking for proposals:

In a call for proposals on a DoD website, contractors are being given until 13 January to suggest ways to develop the RPA, which will use microwave or laser beams reflected off a subject's skin to assess various physiological parameters without the need for wires or skin contacts. The device will train a beam on "moving and non-cooperative subjects", the DoD proposal says, and use the reflected signal to calculate their pulse, respiration rate and changes in electrical conductance, known as the "galvanic skin response". "Active combatants will in general have heart, respiratory and galvanic skin responses that are outside the norm," the website says.

The DoD asks for pie-in-the-sky stuff all the time. For example, they've wanted a synthetic blood substitute for decades. A surreptitious lie detector would be pretty neat.

Posted on January 20, 2006 at 12:37 PM • 37 Comments

Anonym.OS

This seems like a really important development: an anonymous operating system:

Titled Anonym.OS, the system is a type of disc called a "live CD" -- meaning it's a complete solution for using a computer without touching the hard drive. Developers say Anonym.OS is likely the first live CD based on the security-heavy OpenBSD operating system.

OpenBSD running in secure mode is relatively rare among desktop users. So to keep from standing out, Anonym.OS leaves a deceptive network fingerprint. In everything from the way it actively reports itself to other computers, to matters of technical minutia such as TCP packet length, the system is designed to look like Windows XP SP1. "We considered part of what makes a system anonymous is looking like what is most popular, so you blend in with the crowd," explains project developer Adam Bregenzer of Super Light Industry.

Booting the CD, you are presented with a text based wizard-style list of questions to answer, one at a time, with defaults that will work for most users. Within a few moments, a fairly naive user can be up and running and connected to an open Wi-Fi point, if one is available.

Once you're running, you have a broad range of anonymity-protecting applications at your disposal.

Get yours here.

See also this Slashdot thread.

Posted on January 20, 2006 at 7:39 AM • 40 Comments

20th Anniversary of the Computer Virus

Today is the 20th Anniversary of the oldest computer virus known: the Brain virus.

It was a boot sector virus, and spread via infected floppy disks.

EDITED TO ADD (1/19): F-Secure has some amusing comments.

EDITED TO ADD (1/30): As many people pointed out, Brain is not the first computer virus. It's the first PC virus.

Posted on January 19, 2006 at 9:53 AM • 23 Comments

Foiling Counterfeiting Countermeasures

Great story illustrating how criminals adapt to security measures.

The notes were all $5 bills that had been bleached and altered to look like $100 bills, sheriff's investigators said. They passed muster with the pen because it determines only whether the paper used to manufacture the currency is legitimate, Bandy said.

As a security measure, the merchants use a chemical pen that determines if the bills are counterfeit. But that's not exactly what the pen does. The pen only verifies that the paper is legitimate. The criminals successfully exploited this security hole.

Posted on January 19, 2006 at 6:38 AM • 71 Comments

Liberty Increases Security

From the Scientific American essay "Murdercide: Science unravels the myth of suicide bombers":

Another method [of reducing terrorism], says Princeton University economist Alan B. Krueger, is to increase the civil liberties of the countries that breed terrorist groups. In an analysis of State Department data on terrorism, Krueger discovered that "countries like Saudi Arabia and Bahrain, which have spawned relatively many terrorists, are economically well off yet lacking in civil liberties. Poor countries with a tradition of protecting civil liberties are unlikely to spawn suicide terrorists. Evidently, the freedom to assemble and protest peacefully without interference from the government goes a long way to providing an alternative to terrorism." Let freedom ring.

This seems obvious to me.

Found on John Quarterman's blog.

Posted on January 18, 2006 at 1:33 PM • 52 Comments

NSA Eavesdropping Yields Dead Ends

All of that extra-legal NSA eavesdropping resulted in a whole lot of dead ends.

In the anxious months after the Sept. 11 attacks, the National Security Agency began sending a steady stream of telephone numbers, e-mail addresses and names to the F.B.I. in search of terrorists. The stream soon became a flood, requiring hundreds of agents to check out thousands of tips a month.

But virtually all of them, current and former officials say, led to dead ends or innocent Americans.

Surely this can't be a surprise to anyone? And as I've been arguing for years, programs like this divert resources from real investigations.

President Bush has characterized the eavesdropping program as a "vital tool" against terrorism; Vice President Dick Cheney has said it has saved "thousands of lives."

But the results of the program look very different to some officials charged with tracking terrorism in the United States. More than a dozen current and former law enforcement and counterterrorism officials, including some in the small circle who knew of the secret program and how it played out at the F.B.I., said the torrent of tips led them to few potential terrorists inside the country they did not know of from other sources and diverted agents from counterterrorism work they viewed as more productive.

A lot of this article reads like a turf war between the NSA and the FBI, but the "inside baseball" aspects are interesting.

EDITED TO ADD (1/18): Jennifer Granick has written on the topic.

Posted on January 18, 2006 at 6:51 AM • 70 Comments

DHS Funding Open Source Security

From eWeek:

The U.S. government's Department of Homeland Security plans to spend $1.24 million over three years to fund an ambitious software auditing project aimed at beefing up the security and reliability of several widely deployed open-source products.

The grant, called the "Vulnerability Discovery and Remediation Open Source Hardening Project," is part of a broad federal initiative to perform daily security audits of approximately 40 open-source software packages, including Linux, Apache, MySQL and Sendmail.

The plan is to use source code analysis technology from San Francisco-based Coverity Inc. to pinpoint and correct security vulnerabilities and other potentially dangerous defects in key open-source packages.

Software engineers at Stanford University will manage the project and maintain a publicly available database of bugs and defects.

Anti-virus vendor Symantec Corp. is providing guidance as to where security gaps might be in certain open-source projects.

I think this is a great use of public funds. One of the limitations of open-source development is that it's hard to fund tools like Coverity. And this kind of thing improves security for a lot of different organizations against a wide variety of threats. And it increases competition with Microsoft, which will force them to improve their OS as well. Everybody wins.
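
As a toy illustration (this is not Coverity's technology, which does far deeper dataflow analysis), here is the flavor of check a source auditor runs: flag calls to functions with well-known misuse patterns.

```python
# Toy source auditor: scan C-like source for calls to functions with
# well-known misuse patterns. Real analyzers track dataflow across
# functions; this sketch only pattern-matches call sites.

import re

RISKY_CALLS = {
    "strcpy":  "no bounds check -- consider strncpy/strlcpy",
    "gets":    "unbounded read -- use fgets",
    "sprintf": "no bounds check -- use snprintf",
}

def audit(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for fn, why in RISKY_CALLS.items():
            if re.search(rf"\b{fn}\s*\(", line):
                findings.append((lineno, fn, why))
    return findings

code = 'int main(void) {\n  char buf[8];\n  gets(buf);\n  strcpy(buf, "x");\n}'
for lineno, fn, why in audit(code):
    print(f"line {lineno}: {fn}: {why}")
```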

What's affected?

In addition to Linux, Apache, MySQL and Sendmail, the project will also pore over the code bases for FreeBSD, Mozilla, PostgreSQL and the GTK (GIMP Tool Kit) library.

And from ZDNet:

The list of open-source projects that Stanford and Coverity plan to check for security bugs includes Apache, BIND, Ethereal, KDE, Linux, Firefox, FreeBSD, OpenBSD, OpenSSL and MySQL, Coverity said.

Posted on January 17, 2006 at 1:04 PM • 42 Comments

Ben Franklin on the Feeling of Security

Today is Ben Franklin's 300th birthday. Among many other discoveries and inventions, Franklin worked out a way of protecting buildings from lightning strikes, by providing a conducting path to ground -- outside a building -- from one or more pointed rods high atop the structure. People tried this, and it worked. Franklin became a celebrity, not just among "electricians," but among the general public.

An article in this month's issue of Physics Today has a great 1769 quote by Franklin about lightning rods, and the reality vs. the feeling of security:

Those who calculate chances may perhaps find that not one death (or the destruction of one house) in a hundred thousand happens from that cause, and that therefore it is scarce worth while to be at any expense to guard against it. But in all countries there are particular situations of buildings more exposed than others to such accidents, and there are minds so strongly impressed with the apprehension of them, as to be very unhappy every time a little thunder is within their hearing; it may therefore be well to render this little piece of new knowledge as general and well understood as possible, since to make us safe is not all its advantage, it is some to make us easy. And as the stroke it secures us from might have chanced perhaps but once in our lives, while it may relieve us a hundred times from those painful apprehensions, the latter may possibly on the whole contribute more to the happiness of mankind than the former.

Posted on January 17, 2006 at 7:52 AM24 Comments

Who Watches the Watchers?

One problem with cameras is that you can't trust the watchers not to misuse them:

Two council CCTV camera operators have been jailed for spying on a naked woman in her own home.

Mark Summerton and Kevin Judge, from Sefton Council, Merseyside, trained a street camera into the woman's flat.


The images from the camera, including the woman without her clothes on, were shown on a large plasma screen in the council's CCTV control room in November 2004, Liverpool Crown Court heard.

Over several hours, she was filmed cuddling her boyfriend before undressing, using the toilet, having a bath and watching television dressed only in a towel.

Judge Gerald Clifton told the three men: "To dismiss what was happening as laddish behaviour, something that the 21st Century apparently condones, is absurd.

"You only have to read the impact statements of the lady to realise the harrowing effect that this had on her.

"Her life has almost been ruined, her self-confidence entirely destroyed by the thought that prying male eyes have entered her flat."

Also, The Register reported on this.

Posted on January 16, 2006 at 12:00 PM73 Comments

U.S. Customs Opening International Mail

Reuters is reporting that Customs and Border Protection is opening international mail coming into the U.S. without warrant.

Sadly, this is legal.

Congress passed a trade act in 2002, 107 H.R. 3009, that expanded the Custom Service's ability to open international mail. Here's the beginning of Section 344:

(1) In general.--For purposes of ensuring compliance with the Customs laws of the United States and other laws enforced by the Customs Service, including the provisions of law described in paragraph (2), a Customs officer may, subject to the provisions of this section, stop and search at the border, without a search warrant, mail of domestic origin transmitted for export by the United States Postal Service and foreign mail transiting the United States that is being imported or exported by the United States Postal Service.

If I remember correctly, the ACLU was able to temper the amendment, and this language is better than what the government originally wanted.

Domestic First Class mail is still private; the police need a warrant to open it. But there is a lower standard for Media Mail and the like, and for "mail covers": the practice of collecting address information from the outside of the envelope.

Posted on January 16, 2006 at 6:43 AM113 Comments

Friday Squid Blogging

It's from last September, but it's the biggest giant squid news in years -- a live giant squid caught on camera:

In their efforts to photograph the huge cephalopod, Tsunemi Kubodera and Kyoichi Mori have been using a camera and depth recorder attached to a long-line, which they lower into the sea from their research vessel.

Below the camera, they suspend a weighted jig -- a set of ganged hooks to snag the squid -- along with a single Japanese common squid as bait and an odour lure consisting of chopped-up shrimps.

At 0915 local time on 30 September 2004, they struck lucky. At a depth close to 1km in waters off Japan's Ogasawara Islands, an 8m-long Architeuthis wrapped its long tentacles around the bait, snagging one of them on the jig.

Kubodera and Mori took more than 550 images of the giant squid as it made repeated attempts to detach itself.

The pictures show the squid spreading its arms, enveloping the long-line and swimming away in its efforts to struggle free.

Finally, four hours and 13 minutes after it was first snagged, the attached tentacle broke off, allowing the squid to escape. The researchers retrieved a 5.5m portion with the line.

See also this article from Nature.

Posted on January 13, 2006 at 2:17 PM22 Comments

REAL ID Harder Than Legislators Thought

According to the Associated Press:

State motor vehicle officials nationwide who will have to carry out the Real ID Act say its authors grossly underestimated its logistical, technological and financial demands.

In a comprehensive survey obtained by The Associated Press and in follow-up interviews, officials cast doubt on the states' ability to comply with the law on time and fretted that it will be a budget buster.

I've already written about REAL ID, including the obscene costs:

REAL ID is expensive. It's an unfunded mandate: the federal government is forcing the states to spend their own money to comply with the act. I've seen estimates that the cost to the states of complying with REAL ID will be $120 million. That's $120 million that can't be spent on actual security.

According to the AP, I was way off:

Pennsylvania alone estimated a hit of up to $85 million. Washington state projected at least $46 million annually in the first several years.

Separately, a December report to Virginia's governor pegged the potential price tag for that state as high as $169 million, with $63 million annually in successive years. Of the initial cost, $33 million would be just to redesign computing systems.

Remember, security is a trade-off. REAL ID is a bad idea primarily because the security gained is not worth the enormous expense.

See also the ACLU's site on REAL ID.

Posted on January 13, 2006 at 1:23 PM23 Comments

Forged Credentials and Security

In Beyond Fear, I wrote about the difficulty of verifying credentials. Here's a real story about that very problem:

When Frank Coco pulled over a 24-year-old carpenter for driving erratically on Interstate 55, Coco was furious. Coco was driving his white Chevy Caprice with flashing lights and had to race in front of the young man and slam on his brakes to force him to stop.

Coco flashed his badge and shouted at the driver, Joe Lilja: "I'm a cop and when I tell you to pull over, you pull over, you motherf-----!"

Coco punched Lilja in the face and tried to drag him out of his car.

But Lilja wasn't resisting arrest. He wasn't even sure what he'd done wrong.

"I thought, 'Oh my God, I can't believe he's hitting me,' " Lilja recalled.

It was only after Lilja sped off to escape -- leading Coco on a tire-squealing, 90-mph chase through the southwest suburbs -- that Lilja learned the truth.

Coco wasn't a cop at all.

He was a criminal.

There's no obvious way to solve this. This is some of what I wrote in Beyond Fear:

Authentication systems suffer when they are rarely used and when people aren't trained to use them.


Imagine you're on an airplane, and Man A starts attacking a flight attendant. Man B jumps out of his seat, announces that he's a sky marshal, and that he's taking control of the flight and the attacker. (Presumably, the rest of the plane has subdued Man A by now.) Man C then stands up and says: "Don't believe Man B. He's not a sky marshal. He's one of Man A's cohorts. I'm really the sky marshal."

What do you do? You could ask Man B for his sky marshal identification card, but how do you know what an authentic one looks like? If sky marshals travel completely incognito, perhaps neither the pilots nor the flight attendants know what a sky marshal identification card looks like. It doesn't matter if the identification card is hard to forge if the person authenticating the credential doesn't have any idea what a real card looks like.


Many authentication systems are even more informal. When someone knocks on your door wearing an electric company uniform, you assume she's there to read the meter. Similarly with deliverymen, service workers, and parking lot attendants. When I return my rental car, I don't think twice about giving the keys to someone wearing the correct color uniform. And how often do people inspect a police officer's badge? The potential for intimidation makes this security system even less effective.

Posted on January 13, 2006 at 7:00 AM73 Comments

Anonymity and Accountability

Last week I blogged Kevin Kelly's rant against anonymity. Today I wrote about it for Wired News:

And that's precisely where Kelly makes his mistake. The problem isn't anonymity; it's accountability. If someone isn't accountable, then knowing his name doesn't help. If you have someone who is completely anonymous, yet just as completely accountable, then -- heck, just call him Fred.

History is filled with bandits and pirates who amass reputations without anyone knowing their real names.

EBay's feedback system doesn't work because there's a traceable identity behind that anonymous nickname. EBay's feedback system works because each anonymous nickname comes with a record of previous transactions attached, and if someone cheats someone else then everybody knows it.

Similarly, Wikipedia's veracity problems are not a result of anonymous authors adding fabrications to entries. They're an inherent property of an information system with distributed accountability. People think of Wikipedia as an encyclopedia, but it's not. We all trust Britannica entries to be correct because we know the reputation of that company, and by extension its editors and writers. On the other hand, we all should know that Wikipedia will contain a small amount of false information because no particular person is accountable for accuracy -- and that would be true even if you could mouse over each sentence and see the name of the person who wrote it.

Please read the whole thing before you comment.

Posted on January 12, 2006 at 4:36 AM69 Comments

Now Everyone Gets to Watch the Cameras

From The Times:

Residents of a trendy London neighbourhood are to become the first in Britain to receive "Asbo TV" -- television beamed live to their homes from CCTV cameras on the surrounding streets.

As part of the £12m scheme funded by the Office of the Deputy Prime Minister, residents of Shoreditch in the East End will also be able to compare characters they see behaving suspiciously with an on-screen "rogues' gallery" of local recipients of anti-social behaviour orders (Asbos).

Viewers will then be able to use an anonymous e-mail tip-off system to report to the police anyone they see breaching an Asbo or committing a crime.

Someone knows what the deal is here:

"The CCTV element is part curiosity, like a 21st-century version of Big Brother, and partly about security," said Atul Hatwell, of the Shoreditch Digital Bridge project.

Certainly this kind of system can be abused, but my guess is that worrying about this is kind of silly:

Andrew Duff, a Conservative councillor, raised concerns about the system being adopted by burglars to check unoccupied properties. "It could be used by dishonest people as well," he said.

My guess is that this sort of system will reduce the crime rate, as criminals move to neighborhoods without these sorts of systems. But once everyone has this sort of system, criminals will adapt and the crime rate will return to its original rate.

Meanwhile, everybody loses more privacy.

Posted on January 11, 2006 at 7:55 AM67 Comments

TSA in Space

The government is already thinking about security checks for space tourists.

According to the BBC:

It has recommended security checks similar to those for airline passengers.

The FAA also suggests space tourism companies check the global "no-fly" list, from the US's Homeland Security Department, to exclude potential terrorists.

Here's the FAA draft.

Posted on January 10, 2006 at 8:26 AM22 Comments

Anonymous Internet Annoying Is Illegal in the U.S.

How bizarre:

Last Thursday, President Bush signed into law a prohibition on posting annoying Web messages or sending annoying e-mail messages without disclosing your true identity.


Buried deep in the new law is Sec. 113, an innocuously titled bit called "Preventing Cyberstalking." It rewrites existing telephone harassment law to prohibit anyone from using the Internet "without disclosing his identity and with intent to annoy."

What does this mean for the comment section of this blog? Or any blog? Or Usenet?

More importantly, what does it mean for our society when obviously stupid laws like this get passed, and we have to rely on the police being nice enough to not enforce them?

EDITED TO ADD (1/9) Some commenters to BoingBoing clarify the legal issues. This is from an anonymous attorney:

The anonymous harassment provision ( Link ) is the old telephone-annoyance statute that has been on the books for decades. It was updated in the widely (and in many respects deservedly) ridiculed Communications Decency Act to include new technologies, and the cases make clear its applicability to Internet communications. See, e.g., ACLU v. Reno, 929 F. Supp. 824, 829 n.5 (E.D. Pa. 1996) (text here), aff'd, 521 U.S. 844 (1997). Unlike the indecency provisions of the CDA, this scope update was not invalidated in the courts and remains fully effective.

In other words, the latest amendment, which supposedly adds Internet communications devices to the scope of the law, is meaningless surplusage.

Posted on January 9, 2006 at 2:38 PM93 Comments

Anyone Can Get Anyone's Phone Records

Interested in who your spouse is talking to? Your boss? A celebrity? A politician?

The Chicago Police Department is warning officers their cell phone records are available to anyone -- for a price. Dozens of online services are selling lists of cell phone calls, raising security concerns among law enforcement and privacy experts....

How well do the services work? The Chicago Sun-Times paid $110 to purchase a one-month record of calls for this reporter's company cell phone. It was as simple as e-mailing the telephone number to the service along with a credit card number. The request was made Friday after the service was closed for the New Year's holiday.

On Tuesday, when it reopened, the service e-mailed a list of 78 telephone numbers this reporter called on his cell phone between Nov. 19 and Dec. 17. The list included calls to law enforcement sources, story subjects and other Sun-Times reporters and editors.

EDITED TO ADD (1/9): More information on BoingBoing.

EDITED TO ADD (1/9): Also see this on EPIC West.

EDITED TO ADD (1/14): Daniel Solove has some good commentary.

Posted on January 9, 2006 at 6:59 AM37 Comments

Friday Squid Blogging

A squid that cares for its young:

But a team of ocean scientists exploring the inky depths of the Monterey Canyon off California has discovered that at least one squid species cares for its young with loving attention, the mother cradling the eggs in her arms for months, waving her tentacles to bathe the eggs in fresh seawater. The scientists suspect that other species are doting parents, too, and that misperceptions about squid behavior have arisen because the deep is so poorly explored.

"Our finding is unexpected because this behavior differs from the reproductive habits of all other known squid species," the scientists wrote in the Dec. 15 issue of Nature. "We expect it to be found in other squids."

Posted on January 6, 2006 at 3:23 PM24 Comments

Stupid Band Names

Be careful what you write in your journal:

An airline passenger with the words "suicide bomber" written in his journal was arrested when his plane arrived in San Jose, California, on Wednesday, but the words appeared to refer to music and he was later released, officials said.

..."Preliminary, what we believe is that that was the name of either a band or a song," Quy said.

I'm not sure I want "Suicide Bombers" displayed on my iPod. I certainly wouldn't want to be in a band with that name, flying around the country with crates of gear marked "Suicide Bombers." That would be asking for trouble.

On the other hand, it's pretty sad what is enough to get you arrested these days:

"A male was observed by his fellow passengers as having a journal and handwritten on the journal were the words 'suicide bomber,'" FBI spokeswoman LaRae Quy said.

"That, combined with the fact that he was clutching a backpack, and then finally he was acting a little suspiciously" prompted law enforcement to act.

My guess is that it wouldn't matter how he held his backpack; once the jittery passenger saw the words everything else was interpreted suspiciously.

Posted on January 6, 2006 at 12:00 PM75 Comments

Wisconsin Voting Machines

Here's an impressive piece of common sense:

One of the 15 bills Governor Jim Doyle signed into law on Wednesday will require the software of touch-screen voting machines used in elections to be open-source.

Municipalities that use electronic voting machines are responsible for providing to the public, on request, the code used.

Any voting machines to be used in the state already had to pass State Elections Board tests. Electronic voting machines, in particular, already were required to maintain their results tallies even if the power goes out, and to produce paper ballots that could be used in case of a recount. The new law also requires the paper ballots to be presented to voters for verification before being stored.

I wrote about electronic voting here (2004), here (2003), and here (2000).

Posted on January 6, 2006 at 7:15 AM43 Comments

Kevin Kelly on Anonymity

He's against it:

More anonymity is good: that's a dangerous idea.

Fancy algorithms and cool technology make true anonymity in mediated environments more possible today than ever before. At the same time this techno-combo makes true anonymity in physical life much harder. For every step that masks us, we move two steps toward totally transparent unmasking. We have caller ID, but also caller ID Block, and then caller ID-only filters. Coming up: biometric monitoring and little place to hide. A world where everything about a person can be found and archived is a world with no privacy, and therefore many technologists are eager to maintain the option of easy anonymity as a refuge for the private.

However in every system that I have seen where anonymity becomes common, the system fails. The recent taint in the honor of Wikipedia stems from the extreme ease with which anonymous declarations can be put into a very visible public record. Communities infected with anonymity will either collapse, or shift the anonymous to pseudo-anonymous, as in eBay, where you have a traceable identity behind an invented nickname. Or voting, where you can authenticate an identity without tagging it to a vote.

Anonymity is like a rare earth metal. These elements are a necessary ingredient in keeping a cell alive, but the amount needed is a mere hard-to-measure trace. In larger doses these heavy metals are some of the most toxic substances known to life. They kill. Anonymity is the same. As a trace element in vanishingly small doses, it's good for the system by enabling the occasional whistleblower or persecuted fringe. But if anonymity is present in any significant quantity, it will poison the system.

There's a dangerous idea circulating that the option of anonymity should always be at hand, and that it is a noble antidote to technologies of control. This is like pumping up the levels of heavy metals in your body in order to make it stronger.

Privacy can only be won by trust, and trust requires persistent identity, if only pseudo-anonymously. In the end, the more trust, the better. Like all toxins, anonymity should be kept as close to zero as possible.

I don't even know where to begin. Anonymity is essential for free and fair elections. It's essential for democracy and, I think, liberty. It's essential to privacy in a large society, and so it is essential to protect the rights of the minority against the tyranny of the majority...and to protect individual self-respect.

Kelly makes the very valid point that reputation makes society work. But that doesn't mean that 1) reputation can't be anonymous, or 2) anonymity isn't also essential for society to work.

I'm writing an essay on this for Wired News. Comments and arguments, pro or con, are appreciated.

Posted on January 5, 2006 at 1:20 PM121 Comments

Data Mining and Amazon Wishlists

Data Mining 101: Finding Subversives with Amazon Wishlists.

Now, imagine the false alarms and abuses that are possible if you have lots more data, and lots more computers to slice and dice it.

Of course, there are applications where this sort of data mining makes a whole lot of sense. But finding terrorists isn't one of them. It's a needle-in-a-haystack problem, and piling on more hay doesn't help matters much.

Posted on January 5, 2006 at 6:15 AM38 Comments

RFID Zapper

This is an interesting demonstration project: a hand-held device that disables passive RFID tags.

There are several ways to deactivate RFID-Tags. One that might be offered by industry is an RFID-deactivator, which sends the RFID-Tag to sleep. A problem with this method is that it is not permanent; the RFID-Tag can be reactivated (probably without your knowledge). Several ways of permanently deactivating RFID-Tags are known, e.g. cutting the antenna off the actual microchip, or overloading and literally frying the RFID-Tag by placing it in a common microwave oven for even a very short period of time. Unfortunately, neither method is suitable for destroying RFID-Tags in clothes: cutting off the antenna would require damaging the piece of cloth, while frying the chips is likely to cause a short but potent flame, which would damage most textiles or even set them on fire.

The RFID-Zapper solves this dilemma. Basically, it copies the microwave-oven method, but on a much smaller scale. It generates a strong electromagnetic field with a coil, which should be placed as near to the target RFID-Tag as possible. The RFID-Tag then receives a strong shock of energy comparable to an EMP, and some part of it will blow, most likely the capacitor, thus deactivating the chip forever.

An obvious application would be to disable the RFID chip on your passport, but this kind of thing will probably be more popular with professional shoplifters.

Posted on January 4, 2006 at 6:35 AM82 Comments

How Profitable is Cybercrime?

Interesting quote:

The Treasury Department says that cyber crime has now outgrown illegal drug sales in annual proceeds, netting an estimated $105 billion in 2004, the report said.

Posted on January 3, 2006 at 7:31 AM44 Comments

Top Ten Privacy Stories

The Electronic Privacy Information Center (EPIC) lists its Top Ten Privacy Stories of 2005:

  • PATRIOT Act Reauthorization Falls Short
  • Security Breaches on the Rise
  • Defense Department Ignores Privacy Laws
  • In Federal Court, a Good E-mail Privacy Decision
  • Privacy for Voters
  • State Department Drops Hi-Tech Passport Plan, But Problems Remain
  • NSA Domestic Spying Disclosed
  • Problems Remain with Travel Screening Plans
  • Credit Freeze Laws on the Rise
  • Surveillance of Activists Revealed

And its Top Ten Issues to Watch in 2006:

  • Nomination of Samuel Alito
  • Future of REAL ID
  • "Welcome to the US. Fingerprints, please."
  • Workplace Privacy
  • Student Privacy
  • Location Tracking
  • New Revelations About Government Datamining
  • Wiretapping the Internet
  • DNA Databases and Genetic Privacy Legislation
  • Data Broker Regulation

More information on each item is at the links. I don't think the lists are in any order.

Posted on January 2, 2006 at 7:26 AM14 Comments
