Blog: 2005 Archives

ID Cards and ID Fraud

Unforeseen security effects of weak ID cards:

It can even be argued that the introduction of the photocard licence has encouraged ID fraud. It has been relatively easy for fraudsters to obtain a licence, but because it looks and feels like ‘photo ID’, it is far more readily accepted as proof of identity than the paper licence is, and can therefore be used directly as an ID document or to support the establishment of stronger fraudulent ID, particularly in countries familiar with ID cards in this format, but perhaps unfamiliar with the relative strengths of British ID documents.

During the Commons ID card debates this kind of process was described by Tory MP Patrick Mercer, drawing on his experience as a soldier in Northern Ireland, where photo driving licences were first introduced as an anti-terror measure. This “quasi-identity card… I think—had a converse effect to that which the Government sought… anybody who had such a card or driving licence on their person had a pass, which, if shown to police or soldiers, gave them free passage. So, it had precisely the opposite effect to that which was intended.”

Effectively – as security experts frequently point out – apparently stronger ID can have a negative effect in that it means that the people responsible for checking it become more likely to accept it as conclusive, and less likely to consider the individual bearing it in any detail. A similar effect has been observed following the introduction of chip and PIN credit cards, where ownership of the card and knowledge of the PIN is now almost always viewed as conclusive.

Posted on December 30, 2005 at 1:51 PM • 21 Comments

Project Shamrock

Decades before 9/11, and the subsequent Bush order that directed the NSA to eavesdrop on every phone call, e-mail message, and who-knows-what-else going into or out of the United States, U.S. citizens included, the NSA did the same thing with telegrams. It was called Project Shamrock, and anyone who thinks this is new legal and technological terrain should read up on that program.

Project SHAMROCK…was an espionage exercise that involved the accumulation of all telegraphic data entering into or exiting from the United States. The Armed Forces Security Agency (AFSA) and its successor NSA were given direct access to daily microfilm copies of all incoming, outgoing, and transiting telegraphs via the Western Union and its associates RCA and ITT. Operation Shamrock lasted well into the 1960s when computerized operations (HARVEST) made it possible to search for keywords rather than read through all communications.

Project SHAMROCK became so successful that in 1966 the NSA and CIA set up a front company in lower Manhattan (where the offices of the telegraph companies were located) under the codename LPMEDLEY. At the height of Project SHAMROCK, 150,000 messages a month were printed and analyzed by NSA agents. In May 1975 however, congressional critics began to investigate and expose the program. As a result, NSA director Lew Allen terminated it. The testimony of both the representatives from the cable companies and of director Allen at the hearings prompted Senate Intelligence Committee chairman Sen. Frank Church to conclude that Project SHAMROCK was “probably the largest government interception program affecting Americans ever undertaken.”

If you want details, the best place is James Bamford’s books about the NSA: his 1982 book, The Puzzle Palace, and his 2001 book, Body of Secrets. This quote is from the latter book, page 440:

Among the reforms to come out of the Church Committee investigation was the creation of the Foreign Intelligence Surveillance Act (FISA), which for the first time outlined what NSA was and was not permitted to do. The new statute outlawed wholesale, warrantless acquisition of raw telegrams such as had been provided under Shamrock. It also outlawed the arbitrary compilation of watch lists containing the names of Americans. Under FISA, a secret federal court was set up, the Foreign Intelligence Surveillance Court. In order for NSA to target an American citizen or a permanent resident alien—a “green card” holder—within the United States, a secret warrant must be obtained from the court. To get the warrant, NSA officials must show that the person they wish to target is either an agent of a foreign power or involved in espionage or terrorism.

A lot of people are trying to say that it’s a different world today, and that eavesdropping on a massive scale is not covered under the FISA statute, because it just wasn’t possible or anticipated back then. That’s a lie. Project Shamrock began in the 1950s, and ran for about twenty years. It too was a massive program to eavesdrop on all international telegram communications, including communications to and from American citizens. It too was meant to counter a terrorist threat inside the United States. It too was secret, and illegal. It is exactly, by name, the sort of program that the FISA process was supposed to get under control.

Thirty years ago, Senator Frank Church warned of the dangers of letting the NSA get involved in domestic intelligence gathering. He said that the “potential to violate the privacy of Americans is unmatched by any other intelligence agency.” If the resources of the NSA were ever used domestically, “no American would have any privacy left…. There would be no place to hide…. We must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision, so that we never cross over that abyss. That is an abyss from which there is no return.”

Bush’s eavesdropping program was explicitly anticipated in 1978, and made illegal by FISA. There might not have been fax machines, or e-mail, or the Internet, but the NSA did the exact same thing with telegrams.

We can decide as a society that we need to revisit FISA. We can debate the relative merits of police-state surveillance tactics and counterterrorism. We can discuss the prohibitions against spying on American citizens without a warrant, crossing over that abyss that Church warned us about thirty years ago. But the president can’t simply decide that the law doesn’t apply to him.

This issue is not about terrorism. It’s not about intelligence gathering. It’s about the executive branch of the United States ignoring a law, passed by the legislative branch and signed by President Jimmy Carter: a law that directs the judicial branch to monitor eavesdropping on Americans in national security investigations.

It’s not the spying, it’s the illegality.

Posted on December 29, 2005 at 8:40 AM • 97 Comments

Bomb-Sniffing Wasps

No, this isn’t from The Onion. Trained wasps:

The tiny, non-stinging wasps can check for hidden explosives at airports and monitor for toxins in subway tunnels.

“You can rear them by the thousands, and you can train them within a matter of minutes,” says Joe Lewis, a U.S. Agriculture Department entomologist. “This is just the very tip of the iceberg of a very new resource.”

Sounds like it will be cheap enough….

EDITED TO ADD (12/29): Bomb-sniffing bees are old news.

Posted on December 28, 2005 at 12:47 PM • 35 Comments

Are Computer-Security Export Controls Back?

I thought U.S. export regulations were finally over and done with, at least for software. Maybe not:

Unfortunately, due to strict US Government export regulations Symantec is only able to fulfill new LC5 orders or offer technical support directly with end-users located in the United States and commercial entities in Canada, provided all screening is successful.

Commodities, technology or software is subject to U.S. Dept. of Commerce, Bureau of Industry and Security control if exported or electronically transferred outside of the USA. Commodities, technology or software are controlled under ECCN 5A002.c.1, cryptanalytic.

You can also access further information on our web site at the following address: http://www.symantec.com/region/reg_eu/techsupp/enterprise/index.html

The software in question is the password breaking and auditing tool called LC5, better known as L0phtCrack.

Anyone have any ideas what’s going on? Because I sure don’t.

Posted on December 28, 2005 at 7:08 AM • 31 Comments

Bug Bounties Are Not Security

Paying people rewards for finding security flaws is not the same as hiring your own analysts and testers. It’s a reasonable addition to a software security program, but no substitute.

I’ve said this before, but Moshe Yudkowsky said it better:

Here’s an outsourcing idea: get rid of your fleet of delivery trucks, toss your packages out into the street, and offer a reward to anyone who successfully delivers a package. Sound like a good idea, or a recipe for disaster?

Red Herring offers an article about the bounties that some software companies offer for bugs. That is, if you’re an independent researcher and you find a bug in their software, some companies will offer you a cash bonus when you report the bug.

As the article notes, “in a free market everything has value,” and therefore information that a bug exists should logically result in some sort of market. However, I think it’s misleading to call this practice “outsourcing” of security, any more than calling the practice of tossing packages into the street a “delivery service.” Paying someone to tell you about a bug may or may not be a good business practice, but that practice alone certainly does not constitute a complete security policy.

Posted on December 27, 2005 at 7:46 AM • 20 Comments

Is the NSA Reading Your E-Mail?

Richard M Smith has some interesting ideas on how to test if the NSA is eavesdropping on your e-mail.

With all of the controversy about the news that the NSA has been monitoring, since 9/11, telephone calls and email messages of Americans, some folks might now be wondering if they are being snooped on. Here’s a quick and easy method to see if one’s email messages are being read by someone else.

The steps are:

  1. Set up a Hotmail account.
  2. Set up a second email account with a non-U.S. provider (e.g., Rediffmail.com).
  3. Send messages between the two accounts which might be interesting to the NSA.
  4. In each message, include a unique URL pointing to a Web server whose server logs you can access. This URL should be known only to you and not linked from any other Web page. The text of the message should encourage an NSA monitor to visit the URL.
  5. If the server log file ever shows this URL being accessed, then you know that you are being snooped on. The IP address of the access can also provide clues about who is doing the snooping.
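
As a rough illustration of steps 4 and 5, here is a minimal Python sketch (mine, not part of Smith’s write-up) that generates an unguessable canary URL and scans a web server access log for requests to it. The domain, log path, and log format are assumptions made for the example; the essential property is that the URL appears nowhere except in the test messages, so any hit on it means someone read the message.

    import secrets

    def make_canary_path():
        # An unguessable path that is used nowhere else on the site.
        return "/docs/" + secrets.token_hex(16)

    def make_canary_url(path, domain="example.org"):
        # The URL to paste into the test message; record it for the later check.
        return f"http://{domain}{path}"

    def canary_hits(path, log_file="/var/log/apache2/access.log"):
        # Return the client IPs of any requests for the canary path.
        # Assumes a common/combined log format with the client IP in the first field.
        hits = []
        with open(log_file) as log:
            for line in log:
                if path in line:
                    hits.append(line.split()[0])
        return hits

    if __name__ == "__main__":
        path = make_canary_path()
        print("Put this URL in the test message:", make_canary_url(path))
        # Later, re-run the check against the same recorded path:
        for ip in canary_hits(path):
            print("Canary fetched by:", ip)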

The trick is to make the link enticing enough for someone or something to want to click on it. As part of a large-scale research project, I would suggest sending out a few hundred thousand messages using various tricks to find one that might work. Here are some possible ideas:

  • Include a variety of terrorist related trigger words
  • Include other links in a message to known AQ message boards
  • Include a fake CC: to Mohamed Atta’s old email address (el-amir@tu-harburg.de)
  • Send the message from an SMTP server in Iraq, Afghanistan, etc.
  • Use a fake return address from a known terrorist organization
  • Use a ziplip or hushmail account.

Besides monitoring the NSA, this same technique can be used if you suspect your email account password has been stolen or if a family member or coworker is reading your email on your computer on the sly.

The only problem is that you might get a knock on your door from some random investigative agency. Or get searched every time you try to get on an airplane.

But I think that risk is pretty low, actually. If people actually do this, please report back. I’m very curious.

Posted on December 26, 2005 at 12:31 PM • 133 Comments

Internet Explorer Sucks

This study is from August, but I missed it. The researchers tracked three browsers (MSIE, Firefox, Opera) in 2004 and counted which days they were “known unsafe.” Their definition of “known unsafe”: a remotely exploitable security vulnerability had been publicly announced and no patch was yet available.

MSIE was 98% unsafe. There were only 7 days in 2004 without an unpatched publicly disclosed security hole.

Firefox was 15% unsafe. There were 56 days with an unpatched publicly disclosed security hole. 30 of those days were due to a hole that affected only Mac users. Windows Firefox was 7% unsafe.

Opera was 17% unsafe: 65 days. That number is accidentally a little better than it should be, as two of the unpatched periods happened to overlap.
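
The arithmetic behind these percentages is simple interval accounting: a browser counts as unsafe on any day covered by at least one disclosed-but-unpatched window, and a day covered by more than one window is counted only once (this is the overlap effect mentioned for Opera above). Here is a rough sketch of that calculation, using invented dates rather than the study’s actual data:

    from datetime import date, timedelta

    def unsafe_days(windows):
        # windows: (disclosed, patched) date pairs, with patched exclusive.
        # Counts distinct days covered by at least one window, so
        # overlapping windows are not double-counted.
        days = set()
        for disclosed, patched in windows:
            day = disclosed
            while day < patched:
                days.add(day)
                day += timedelta(days=1)
        return len(days)

    if __name__ == "__main__":
        # Two invented, overlapping unpatched windows (not the study's data).
        windows = [
            (date(2004, 1, 1), date(2004, 1, 21)),   # Jan 1-20
            (date(2004, 1, 15), date(2004, 2, 6)),   # Jan 15-Feb 5, overlaps the first
        ]
        n = unsafe_days(windows)
        print(f"{n} unsafe days, {100 * n / 366:.0f}% of 2004 (a leap year)")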

The study’s count underestimates the risk, because it doesn’t include vulnerabilities known to the bad guys but not publicly disclosed (and it’s foolish to think that such things don’t exist). So the “98% unsafe” figure for MSIE is generous, and the situation might be even worse.

Wow.

Posted on December 26, 2005 at 6:27 AM • 96 Comments
