Blog: December 2005 Archives

ID Cards and ID Fraud

Unforeseen security effects of weak ID cards:

It can even be argued that the introduction of the photocard licence has encouraged ID fraud. It has been relatively easy for fraudsters to obtain a licence, but because it looks and feels like ‘photo ID’, it is far more readily accepted as proof of identity than the paper licence is, and can therefore be used directly as an ID document or to support the establishment of stronger fraudulent ID, particularly in countries familiar with ID cards in this format, but perhaps unfamiliar with the relative strengths of British ID documents.

During the Commons ID card debates this kind of process was described by Tory MP Patrick Mercer, drawing on his experience as a soldier in Northern Ireland, where photo driving licences were first introduced as an anti-terror measure. This “quasi-identity card… I think—had a converse effect to that which the Government sought… anybody who had such a card or driving licence on their person had a pass, which, if shown to police or soldiers, gave them free passage. So, it had precisely the opposite effect to that which was intended.”

Effectively – as security experts frequently point out – apparently stronger ID can have a negative effect in that it means that the people responsible for checking it become more likely to accept it as conclusive, and less likely to consider the individual bearing it in any detail. A similar effect has been observed following the introduction of chip and PIN credit cards, where ownership of the card and knowledge of the PIN is now almost always viewed as conclusive.

Posted on December 30, 2005 at 1:51 PM • 21 Comments

Project Shamrock

Decades before 9/11, and the subsequent Bush order that directed the NSA to eavesdrop on every phone call, e-mail message, and who-knows-what-else going into or out of the United States, U.S. citizens included, the NSA did the same thing with telegrams. It was called Project Shamrock, and anyone who thinks this is new legal and technological terrain should read up on that program.

Project SHAMROCK…was an espionage exercise that involved the accumulation of all telegraphic data entering into or exiting from the United States. The Armed Forces Security Agency (AFSA) and its successor NSA were given direct access to daily microfilm copies of all incoming, outgoing, and transiting telegraphs via the Western Union and its associates RCA and ITT. Operation Shamrock lasted well into the 1960s when computerized operations (HARVEST) made it possible to search for keywords rather than read through all communications.

Project SHAMROCK became so successful that in 1966 the NSA and CIA set up a front company in lower Manhattan (where the offices of the telegraph companies were located) under the codename LPMEDLEY. At the height of Project SHAMROCK, 150,000 messages a month were printed and analyzed by NSA agents. In May 1975 however, congressional critics began to investigate and expose the program. As a result, NSA director Lew Allen terminated it. The testimony of both the representatives from the cable companies and of director Allen at the hearings prompted Senate Intelligence Committee chairman Sen. Frank Church to conclude that Project SHAMROCK was “probably the largest government interception program affecting Americans ever undertaken.”

If you want details, the best place is James Bamford’s books about the NSA: his 1982 book, The Puzzle Palace, and his 2001 book, Body of Secrets. This quote is from the latter book, page 440:

Among the reforms to come out of the Church Committee investigation was the creation of the Foreign Intelligence Surveillance Act (FISA), which for the first time outlined what NSA was and was not permitted to do. The new statute outlawed wholesale, warrantless acquisition of raw telegrams such as had been provided under Shamrock. It also outlawed the arbitrary compilation of watch lists containing the names of Americans. Under FISA, a secret federal court was set up, the Foreign Intelligence Surveillance Court. In order for NSA to target an American citizen or a permanent resident alien—a “green card” holder—within the United States, a secret warrant must be obtained from the court. To get the warrant, NSA officials must show that the person they wish to target is either an agent of a foreign power or involved in espionage or terrorism.

A lot of people are trying to say that it’s a different world today, and that eavesdropping on a massive scale is not covered under the FISA statute, because it just wasn’t possible or anticipated back then. That’s a lie. Project Shamrock began in the 1950s, and ran for about twenty years. It too had a massive program to eavesdrop on all international telegram communications, including communications to and from American citizens. It too was to counter a terrorist threat inside the United States. It too was secret, and illegal. It is exactly, by name, the sort of program that the FISA process was supposed to get under control.

Thirty years ago, Senator Frank Church warned of the dangers of letting the NSA get involved in domestic intelligence gathering. He said that the “potential to violate the privacy of Americans is unmatched by any other intelligence agency.” If the resources of the NSA were ever used domestically, “no American would have any privacy left…. There would be no place to hide…. We must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision, so that we never cross over that abyss. That is an abyss from which there is no return.”

Bush’s eavesdropping program was explicitly anticipated in 1978, and made illegal by FISA. There might not have been fax machines, or e-mail, or the Internet, but the NSA did the exact same thing with telegrams.

We can decide as a society that we need to revisit FISA. We can debate the relative merits of police-state surveillance tactics and counterterrorism. We can discuss the prohibitions against spying on American citizens without a warrant, crossing over that abyss that Church warned us about thirty years ago. But the president can’t simply decide that the law doesn’t apply to him.

This issue is not about terrorism. It’s not about intelligence gathering. It’s about the executive branch of the United States ignoring a law, passed by the legislative branch and signed by President Jimmy Carter: a law that directs the judicial branch to monitor eavesdropping on Americans in national security investigations.

It’s not the spying, it’s the illegality.

Posted on December 29, 2005 at 8:40 AM • 97 Comments

Bomb-Sniffing Wasps

No, this isn’t from The Onion. Trained wasps:

The tiny, non-stinging wasps can check for hidden explosives at airports and monitor for toxins in subway tunnels.

“You can rear them by the thousands, and you can train them within a matter of minutes,” says Joe Lewis, a U.S. Agriculture Department entomologist. “This is just the very tip of the iceberg of a very new resource.”

Sounds like it will be cheap enough….

EDITED TO ADD (12/29): Bomb-sniffing bees are old news.

Posted on December 28, 2005 at 12:47 PM • 35 Comments

Are Computer-Security Export Controls Back?

I thought U.S. export regulations were finally over and done with, at least for software. Maybe not:

Unfortunately, due to strict US Government export regulations Symantec is only able to fulfill new LC5 orders or offer technical support directly with end-users located in the United States and commercial entities in Canada, provided all screening is successful.

Commodities, technology or software is subject to U.S. Dept. of Commerce, Bureau of Industry and Security control if exported or electronically transferred outside of the USA. Commodities, technology or software are controlled under ECCN 5A002.c.1, cryptanalytic.

You can also access further information on our web site at the following address: http://www.symantec.com/region/reg_eu/techsupp/enterprise/index.html

The software in question is the password-breaking and auditing tool called LC5, better known as L0phtCrack.

Anyone have any idea what’s going on? Because I sure don’t.

Posted on December 28, 2005 at 7:08 AM • 31 Comments

Bug Bounties Are Not Security

Paying people rewards for finding security flaws is not the same as hiring your own analysts and testers. It’s a reasonable addition to a software security program, but no substitute.

I’ve said this before, but Moshe Yudkowsky said it better:

Here’s an outsourcing idea: get rid of your fleet of delivery trucks, toss your packages out into the street, and offer a reward to anyone who successfully delivers a package. Sound like a good idea, or a recipe for disaster?

Red Herring offers an article about the bounties that some software companies offer for bugs. That is, if you’re an independent researcher and you find a bug in their software, some companies will offer you a cash bonus when you report the bug.

As the article notes, “in a free market everything has value,” and therefore information that a bug exists should logically result in some sort of market. However, I think it’s misleading to call this practice “outsourcing” of security, any more than calling the practice of tossing packages into the street a “delivery service.” Paying someone to tell you about a bug may or may not be a good business practice, but that practice alone certainly does not constitute a complete security policy.

Posted on December 27, 2005 at 7:46 AM • 20 Comments

Is the NSA Reading Your E-Mail?

Richard M Smith has some interesting ideas on how to test if the NSA is eavesdropping on your e-mail.

With all of the controversy about the news that the NSA has been monitoring, since 9/11, telephone calls and email messages of Americans, some folks might now be wondering if they are being snooped on. Here’s a quick and easy method to see if one’s email messages are being read by someone else.

The steps are:

  1. Set up a Hotmail account.
  2. Set up a second email account with a non-U.S. provider (e.g., Rediffmail.com).
  3. Send messages between the two accounts which might be interesting to the NSA.
  4. In each message, include a unique URL to a Web server whose logs you can access. This URL should be known only to you and not linked from any other Web page. The text of the message should encourage an NSA monitor to visit the URL.
  5. If the server log file ever shows this URL being accessed, then you know that you are being snooped on. The IP address of the access can also provide clues about who is doing the snooping.

The trick is to make the link enticing enough for someone or something to want to click on it. As part of a large-scale research project, I would suggest sending out a few hundred thousand messages using various tricks to find one that might work. Here are some possible ideas:

  • Include a variety of terrorist-related trigger words
  • Include links in the message to known AQ message boards
  • Include a fake CC: to Mohamed Atta’s old email address (el-amir@tu-harburg.de)
  • Send the message from an SMTP server in Iraq, Afghanistan, etc.
  • Use a fake return address from a known terrorist organization
  • Use a ziplip or hushmail account.
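The canary-URL trick in steps 4 and 5 is easy to sketch in code. Here is a minimal illustration in Python; example.com and the Apache-style log format are assumptions for the sketch, and in practice you would point the URL at a server whose access logs you actually control:

```python
import re
import secrets

def make_canary_url(base="https://example.com/docs"):
    """Build a unique, unguessable URL to embed in the bait message.
    (example.com is a placeholder; use a server whose logs you control,
    and make sure the URL is linked from nowhere else.)"""
    token = secrets.token_urlsafe(16)
    return f"{base}/{token}", token

def canary_hits(log_lines, token):
    """Scan Apache common/combined-format access-log lines for the token;
    return the client IP of each request that fetched the canary URL."""
    ips = []
    for line in log_lines:
        if token in line:
            m = re.match(r"(\S+)", line)  # first field is the client IP
            ips.append(m.group(1) if m else "unknown")
    return ips

# Example: one benign request, one request for the canary.
url, token = make_canary_url()
log = [
    '192.0.2.7 - - [26/Dec/2005:12:31:00] "GET /index.html HTTP/1.1" 200 512',
    f'198.51.100.9 - - [26/Dec/2005:12:32:00] "GET /docs/{token} HTTP/1.1" 404 209',
]
print(canary_hits(log, token))  # prints ['198.51.100.9']
```

Because the token is cryptographically random and never published, any request for it in the logs means someone read the message.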

Besides monitoring the NSA, this same technique can be used if you suspect your email account password has been stolen, or if a family member or coworker is reading your email on your computer on the sly.

The only problem is that you might get a knock on your door from some random investigative agency. Or get searched every time you try to get on an airplane.

But I think that risk is pretty low, actually. If people actually do this, please report back. I’m very curious.

Posted on December 26, 2005 at 12:31 PM • 133 Comments

Internet Explorer Sucks

This study is from August, but I missed it. The researchers tracked three browsers (MSIE, Firefox, Opera) in 2004 and counted which days they were “known unsafe.” Their definition of “known unsafe”: a remotely exploitable security vulnerability had been publicly announced and no patch was yet available.

MSIE was 98% unsafe. There were only 7 days in 2004 without an unpatched publicly disclosed security hole.

Firefox was 15% unsafe. There were 56 days with an unpatched publicly disclosed security hole. 30 of those days were a Mac hole that only affected Mac users. Windows Firefox was 7% unsafe.

Opera was 17% unsafe: 65 days. That number is accidentally a little better than it should be, as two of the unpatched periods happened to overlap.
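As a back-of-the-envelope check (my arithmetic, not the study’s), the percentages line up with the day counts if you remember that 2004 was a 366-day leap year and truncate to whole percents:

```python
def percent_unsafe(unsafe_days, days_in_year=366):  # 2004 was a leap year
    """Share of the year with a known, unpatched, publicly disclosed
    hole, truncated to a whole percent."""
    return int(100 * unsafe_days / days_in_year)

print(percent_unsafe(366 - 7))   # MSIE: unsafe every day but 7 -> 98
print(percent_unsafe(56))        # Firefox, all platforms       -> 15
print(percent_unsafe(56 - 30))   # Firefox on Windows           -> 7
print(percent_unsafe(65))        # Opera                        -> 17
```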

These figures underestimate the risk, because they don’t count vulnerabilities known to the bad guys but not publicly disclosed (and it’s foolish to think that such things don’t exist). So the “98% unsafe” figure for MSIE is generous, and the situation might be even worse.

Wow.

Posted on December 26, 2005 at 6:27 AM • 96 Comments

Story About "Little Red Book" and Federal Agents a Hoax

This is important news:

The UMass Dartmouth student who claimed to have been visited by Homeland Security agents over his request for “The Little Red Book” by Mao Zedong has admitted to making up the entire story.

The 22-year-old student tearfully admitted he made the story up to his history professor, Dr. Brian Glyn Williams, and his parents, after being confronted with the inconsistencies in his account.

Had the student stuck to his original story, it might never have been proved false.

But on Thursday, when the student told his tale in the office of UMass Dartmouth professor Dr. Robert Pontbriand to Dr. Williams, Dr. Pontbriand, university spokesman John Hoey and The Standard-Times, the student added new details.

The agents had returned, the student said, just last night. The two agents, the student, his parents and the student’s uncle all signed confidentiality agreements, he claimed, to put an end to the matter.

But when Dr. Williams went to the student’s home yesterday and relayed that part of the story to his parents, it was the first time they had heard it. The story began to unravel, and the student, faced with the truth, broke down and cried.

I don’t know what the moral is here. 1) He’s an idiot. 2) Don’t believe everything you read. 3) We live in such an invasive political climate that such stories are easily believable. 4) He’s definitely an idiot.

Posted on December 24, 2005 at 8:53 AM • 31 Comments

Weird Computer-Worm Social Engineering Story

I can’t make this stuff up:

A child porn offender in Germany turned himself in to the police after mistaking an email he received from a computer worm for an official warning that he was under investigation….

Seems like the e-mail was actually from a worm, and not a sting operation by the police. But who knows?

Posted on December 23, 2005 at 3:30 PM • 18 Comments

Idiotic Article on TPM

This is just an awful news story.

“TPM” stands for “Trusted Platform Module.” It’s a chip that may soon be in your computer, and it will try to enforce security: both your security, and the security of software and media companies against you. It’s complicated, and it will prevent some attacks. But there are dangers. And lots of ways to hack it. (I’ve written about TPM here, and here when Microsoft called it Palladium. Ross Anderson has some good stuff here.)

In fact, with TPM, your bank wouldn’t even need to ask for your username and password—it would know you simply by the identification on your machine.

Since when is “your computer” the same as “you”? And since when is identifying a computer the same as authenticating the user? And until we can eliminate bot networks and “owned” machines, there’s no way to know who is controlling your computer.

Of course you could always “fool” the system by starting your computer with your unique PIN or fingerprint and then letting another person use it, but that’s a choice similar to giving someone else your credit card.

Right, letting someone use your computer is the same as letting someone use your credit card. Does he have any idea that there are shared computers that you can rent and use? Does he know any families that share computers? Does he ever have friends who visit him at home? There are lots of ways a PIN can be guessed or stolen.

Oh, I can’t go on.

My guess is the reporter was fed the story by some PR hack and never bothered to check whether it was true.

Posted on December 23, 2005 at 11:13 AM • 42 Comments

Vehicle Tracking in the UK

Universal automobile surveillance is coming:

Britain is to become the first country in the world where the movements of all vehicles on the roads are recorded. A new national surveillance system will hold the records for at least two years.

Using a network of cameras that can automatically read every passing number plate, the plan is to build a huge database of vehicle movements so that the police and security services can analyse any journey a driver has made over several years.

The network will incorporate thousands of existing CCTV cameras which are being converted to read number plates automatically night and day to provide 24/7 coverage of all motorways and main roads, as well as towns, cities, ports and petrol-station forecourts.

By next March a central database installed alongside the Police National Computer in Hendon, north London, will store the details of 35 million number-plate “reads” per day. These will include time, date and precise location, with camera sites monitored by global positioning satellites.

As The Independent opines, this is only the beginning:

The new national surveillance network for tracking car journeys, which has taken more than 25 years to develop, is only the beginning of plans to monitor the movements of all British citizens. The Home Office Scientific Development Branch in Hertfordshire is already working on ways of automatically recognising human faces by computer, which many people would see as truly introducing the prospect of Orwellian street surveillance, where our every move is recorded and stored by machines.

Although the problems of facial recognition by computer are far more formidable than for car number plates, experts believe it is only a matter of time before machines can reliably pull a face out of a crowd of moving people.

If the police and security services can show that a national surveillance operation based on recording car movements can protect the public against criminals and terrorists, there will be a strong political will to do the same with street cameras designed to monitor the flow of human traffic.

I’ve already written about the security risks of what I call “wholesale surveillance.” Once this information is collected, it will be misused, lost, and stolen. It will be filled with errors. The problems and insecurities that come from living in a surveillance society more than outweigh any crimefighting (and terrorist-fighting) advantages.

Posted on December 22, 2005 at 2:41 PM • 58 Comments

Dutch Botnet

Back in October, the Dutch police arrested three people who created a large botnet and used it to extort money from U.S. companies. When the trio was arrested, authorities said that the botnet consisted of about 100,000 computers. The actual number was 1.5 million computers.

And I’ve heard reports from reputable sources that the actual actual number was “significantly higher.”

And it may still be growing. The bots continually scan the network and try to infect other machines. They do this autonomously, even now that the command-and-control node has been shut down. Since most of those 1.5 million machines—or however many there are—still have the botnet software running on them, it’s reasonable to believe that the botnet is still growing.

Posted on December 22, 2005 at 8:18 AM • 30 Comments

Electronic Shackles and Telephone Communications

The article is in Hebrew, but the security story is funny in any language.

It’s about a prisoner who was forced to wear an electronic shackle to ensure that he did not violate his house arrest. The shackle is pretty simple: if the suspect leaves the defined detention area, it signals the local police through the telephone line.

How do you defeat a system such as this? Just stop paying your phone bill and wait for the phone company to shut off service.

Posted on December 21, 2005 at 12:03 PM • 29 Comments

The Security Threat of Unchecked Presidential Power

This past Thursday, the New York Times exposed the most significant violation of federal surveillance law in the post-Watergate era. President Bush secretly authorized the National Security Agency to engage in domestic spying, wiretapping thousands of Americans and bypassing the legal procedures regulating this activity.

This isn’t about the spying, although that’s a major issue in itself. This is about the Fourth Amendment protections against illegal search. This is about circumventing a teeny tiny check by the judicial branch, placed there by the legislative branch, placed there 27 years ago—on the last occasion that the executive branch abused its power so broadly.

In defending this secret spying on Americans, Bush said that he relied on his constitutional powers (Article 2) and the joint resolution passed by Congress after 9/11 authorizing military force against those responsible for the attacks. This rationale was spelled out in a memo written by John Yoo, a Justice Department attorney, less than two weeks after the attacks of 9/11. It’s a dense read and a terrifying piece of legal contortionism, but it basically says that the president has unlimited powers to fight terrorism. He can spy on anyone, arrest anyone, and kidnap anyone and ship him to another country … merely on the suspicion that he might be a terrorist. And according to the memo, this power lasts until there is no more terrorism in the world.

Yoo starts by arguing that the Constitution gives the president total power during wartime. He also notes that Congress has recently been quiescent when the president takes some military action on his own, citing President Clinton’s 1998 strike against Sudan and Afghanistan.

Yoo then says: “The terrorist incidents of September 11, 2001, were surely far graver a threat to the national security of the United States than the 1998 attacks. … The President’s power to respond militarily to the later attacks must be correspondingly broader.”

This is novel reasoning. It’s as if the police would have greater powers when investigating a murder than a burglary.

More to the point, the congressional resolution of Sept. 14, 2001, specifically refused the White House’s initial attempt to seek authority to preempt any future acts of terrorism, and narrowly gave Bush permission to go after those responsible for the attacks on the Pentagon and World Trade Center.

Yoo’s memo ignored this. Written 11 days after Congress refused to grant the president wide-ranging powers, it admitted that “the Joint Resolution is somewhat narrower than the President’s constitutional authority,” but argued “the President’s broad constitutional power to use military force … would allow the President to … [take] whatever actions he deems appropriate … to pre-empt or respond to terrorist threats from new quarters.”

Even if Congress specifically says no.

The result is that the president’s wartime powers, with their armies, battles, victories, and congressional declarations, now extend to the rhetorical “War on Terror”: a war with no fronts, no boundaries, no opposing army, and—most ominously—no knowable “victory.” Investigations, arrests, and trials are not tools of war. But according to the Yoo memo, the president can define war however he chooses, and remain “at war” for as long as he chooses.

This is indefinite dictatorial power. And I don’t use that term lightly; the very definition of a dictatorship is a system that puts a ruler above the law. In the weeks after 9/11, while America and the world were grieving, Bush built a legal rationale for a dictatorship. Then he immediately started using it to avoid the law.

This is, fundamentally, why this issue crossed political lines in Congress. If the president can ignore laws regulating surveillance and wiretapping, why is Congress bothering to debate reauthorizing certain provisions of the Patriot Act? Any debate over laws is predicated on the belief that the executive branch will follow the law.

This is not a partisan issue between Democrats and Republicans; it’s a president unilaterally overriding the Fourth Amendment, Congress and the Supreme Court. Unchecked presidential power has nothing to do with how much you either love or hate George W. Bush. You have to imagine this power in the hands of the person you most don’t want to see as president, whether it be Dick Cheney or Hillary Rodham Clinton, Michael Moore or Ann Coulter.

Laws are what give us security against the actions of the majority and the powerful. If we discard our constitutional protections against tyranny in an attempt to protect us from terrorism, we’re all less safe as a result.

This essay was published today as an op-ed in the Minneapolis Star Tribune.

Here’s the opening paragraph of the Yoo memo. Remember, think of this power in the hands of your least favorite politician when you read it:

You have asked for our opinion as to the scope of the President’s authority to take military action in response to the terrorist attacks on the United States on September 11, 2001. We conclude that the President has broad constitutional power to use military force. Congress has acknowledged this inherent executive power in both the War Powers Resolution, Pub. L. No. 93-148, 87 Stat. 555 (1973), codified at 50 U.S.C. § 1541-1548 (the “WPR”), and in the Joint Resolution passed by Congress on September 14, 2001, Pub. L. No. 107-40, 115 Stat. 224 (2001). Further, the President has the constitutional power not only to retaliate against any person, organization, or State suspected of involvement in terrorist attacks on the United States, but also against foreign States suspected of harboring or supporting such organizations. Finally, the President may deploy military force preemptively against terrorist organizations or the States that harbor or support them, whether or not they can be linked to the specific terrorist incidents of September 11.

There’s similar reasoning in the Bybee memo, which was written in 2002 about torture:

In a series of opinions examining various legal questions arising after September 11, we have examined the scope of the President’s Commander-in-Chief power. . . . Foremost among the objectives committed by the Constitution to [the President’s] trust. As Hamilton explained in arguing for the Constitution’s adoption, “because the circumstances which may affect the public safety are not reducible within certain limits, it must be admitted, as a necessary consequence, that there can be no limitation of that authority, which is to provide for the defense and safety of the community, in any manner essential to its efficacy.”

. . . [The Constitution’s] sweeping grant vests in the President an unenumerated Executive power . . . The Commander in Chief power and the President’s obligation to protect the Nation imply the ancillary powers necessary to their successful exercise.

NSA watcher James Bamford points out how this action was definitely considered illegal in 1978, which is why FISA was passed in the first place:

When the Foreign Intelligence Surveillance Act was created in 1978, one of the things that the Attorney General at the time, Griffin Bell, said—he testified before the intelligence committee, and he said that the current bill recognizes no inherent power of the President to conduct electronic surveillance. He said, “This bill specifically states that the procedures in the bill are the exclusive means by which electronic surveillance may be conducted.” In other words, what the President is saying is that he has these inherent powers to conduct electronic surveillance, but the whole reason for creating this act, according to the Attorney General at the time, was to prevent the President from using any inherent powers and to use exclusively this act.

Also this from Salon, discussing a 1952 precedent:

Attorney General Alberto Gonzales argues that the president’s authority rests on two foundations: Congress’s authorization to use military force against al-Qaida, and the Constitution’s vesting of power in the president as commander-in-chief, which necessarily includes gathering “signals intelligence” on the enemy. But that argument cannot be squared with Supreme Court precedent. In 1952, the Supreme Court considered a remarkably similar argument during the Korean War. Youngstown Sheet & Tube Co. v. Sawyer, widely considered the most important separation-of-powers case ever decided by the court, flatly rejected the president’s assertion of unilateral domestic authority during wartime. President Truman had invoked the commander-in-chief clause to justify seizing most of the nation’s steel mills. A nationwide strike threatened to undermine the war, Truman contended, because the mills were critical to manufacturing munitions.

The Supreme Court’s rationale for rejecting Truman’s claims applies with full force to Bush’s policy. In what proved to be the most influential opinion in the case, Justice Robert Jackson identified three possible scenarios in which a president’s actions may be challenged. Where the president acts with explicit or implicit authorization from Congress, his authority “is at its maximum,” and will generally be upheld. Where Congress has been silent, the president acts in a “zone of twilight” in which legality “is likely to depend on the imperatives of events and contemporary imponderables rather than on abstract theories of law.” But where the president acts in defiance of “the expressed or implied will of Congress,” Justice Jackson maintained, his power is “at its lowest ebb,” and his actions can be sustained only if Congress has no authority to regulate the subject at all.

In the steel seizure case, Congress had considered and rejected giving the president the authority to seize businesses in the face of threatened strikes, thereby placing President Truman’s action in the third of Justice Jackson’s categories. As to the war power, Justice Jackson noted, “The Constitution did not contemplate that the Commander in Chief of the Army and Navy will constitute him also Commander in Chief of the country, its industries, and its inhabitants.”

Like Truman, President Bush acted in the face of contrary congressional authority. In FISA, Congress expressly addressed the subject of warrantless wiretaps during wartime, and limited them to the first 15 days after war is declared. Congress then went further and made it a crime, punishable by up to five years in jail, to conduct a wiretap without statutory authorization.

The Attorney General said that the Administration didn’t try to do this legally, because they didn’t think they could get the law passed. But don’t worry, an NSA shift supervisor is acting in the role of a FISC judge:

GENERAL HAYDEN: FISA involves the process—FISA involves marshaling arguments; FISA involves looping paperwork around, even in the case of emergency authorizations from the Attorney General. And beyond that, it’s a little—it’s difficult for me to get into further discussions as to why this is more optimized under this process without, frankly, revealing too much about what it is we do and why and how we do it.

Q If FISA didn’t work, why didn’t you seek a new statute that allowed something like this legally?

ATTORNEY GENERAL GONZALES: That question was asked earlier. We’ve had discussions with members of Congress, certain members of Congress, about whether or not we could get an amendment to FISA, and we were advised that that was not likely to be—that was not something we could likely get, certainly not without jeopardizing the existence of the program, and therefore, killing the program. And that—and so a decision was made that because we felt that the authorities were there, that we should continue moving forward with this program.

Q And who determined that these targets were al Qaeda? Did you wiretap them?

GENERAL HAYDEN: The judgment is made by the operational work force at the National Security Agency using the information available to them at the time, and the standard that they apply—and it’s a two-person standard that must be signed off by a shift supervisor, and carefully recorded as to what created the operational imperative to cover any target, but particularly with regard to those inside the United States.

Q So a shift supervisor is now making decisions that a FISA judge would normally make? I just want to make sure I understand. Is that what you’re saying?

Senators from both parties are demanding hearings:

Democratic and Republican calls mounted on Tuesday for U.S. congressional hearings into President George W. Bush’s assertion that he can order warrantless spying on Americans with suspected terrorist ties.

Vice President Dick Cheney predicted a backlash against critics of the administration’s anti-terrorism policies. He also dismissed charges that Bush overstepped his constitutional bounds when he implemented the recently disclosed eavesdropping shortly after the September 11 attacks.

Republican Sens. Chuck Hagel of Nebraska and Olympia Snowe of Maine joined Democratic Sens. Carl Levin of Michigan, Dianne Feinstein of California and Ron Wyden of Oregon in calling for a joint investigation by the Senate Intelligence and Judiciary Committees into whether the government eavesdropped “without appropriate legal authority.”

Senate Minority Leader Harry Reid, a Nevada Democrat, said he would prefer separate hearings by the Judiciary Committee, which has already promised one, and the Intelligence Committee.

This New York Times paragraph is further evidence that we’re talking about an Echelon-like surveillance program here:

Administration officials, speaking anonymously because of the sensitivity of the information, suggested that the speed with which the operation identified “hot numbers” – the telephone numbers of suspects – and then hooked into their conversations lay behind the need to operate outside the old law.

And some more snippets.

There are about a zillion more URLs I could list here. I posted these already, but both Orin Kerr and Daniel Solove have good discussions of the legal issues. And here are three legal posts by Marty Lederman. A summary of the Republican arguments. Four good blog posts. Spooks comment on the issue.

And this George W. Bush quote (video and transcript), from December 18, 2000, is just too surreal not to reprint: “If this were a dictatorship, it’d be a heck of a lot easier, just so long as I’m the dictator.”

I guess 9/11 made it a heck of a lot easier.

Look, I don’t think 100% of the blame belongs to President Bush. (This kind of thing was also debated under Clinton.) Congress, Democrats included, has allowed the Executive to gather power at the expense of the other two branches. This is the fundamental security issue here, and it’ll be an issue regardless of who wins the White House in 2008.

EDITED TO ADD (12/21): FISC Judge James Robertson resigned yesterday:

Two associates familiar with his decision said yesterday that Robertson privately expressed deep concern that the warrantless surveillance program authorized by the president in 2001 was legally questionable and may have tainted the FISA court’s work.

….Robertson indicated privately to colleagues in recent conversations that he was concerned that information gained from warrantless NSA surveillance could have then been used to obtain FISA warrants. FISA court Presiding Judge Colleen Kollar-Kotelly, who had been briefed on the spying program by the administration, raised the same concern in 2004 and insisted that the Justice Department certify in writing that it was not occurring.

“They just don’t know if the product of wiretaps were used for FISA warrants—to kind of cleanse the information,” said one source, who spoke on the condition of anonymity because of the classified nature of the FISA warrants. “What I’ve heard some of the judges say is they feel they’ve participated in a Potemkin court.”

More generally, here’s some of the relevant statutes and decisions:

“Foreign Intelligence Surveillance Act (FISA)” (1978).

“Authorization for Use of Military Force” (2001), the law authorizing Bush to use military force against the 9/11 terrorists.

“United States v. United States District Court,” 407 U.S. 297 (1972), a national security surveillance case that turned on the Fourth Amendment.

“Hamdi v. Rumsfeld,” 124 S. Ct. 981 (2004), the recent Supreme Court case examining the president’s powers during wartime.

[The Government’s position] cannot be mandated by any reasonable view of the separation of powers, as this view only serves to condense power into a single branch of government. We have long since made clear that a state of war is not a blank check for the President when it comes to the rights of the Nation’s citizens. Youngstown Sheet & Tube, 343 U.S. at 587. Whatever power the United States Constitution envisions for the Executive in times of conflict with other Nations or enemy organizations, it most assuredly envisions a role for all three branches when individual liberties are at stake.

And here are a bunch of blog posts:

Daniel Solove: “Hypothetical: What If President Bush Were Correct About His Surveillance Powers?”

Seth Weinberger: “Declaring War and Executive Power.”

Juliette Kayyem: “Wiretaps, AUMF and Bush’s Comments Today.”

Mark Schmitt: “Alito and the Wiretaps.”

Eric Muller: “Lawless Like I Said.”

Cass Sunstein: “Presidential Wiretap.”

Spencer Overton: “Judge Damon J. Keith: No Warrantless Wiretaps of Citizens.”

Will Baude: “Presidential Authority, A Lament.”

And news articles:

Washington Post: “Clash Is Latest Chapter in Bush Effort to Widen Executive Power.”

The clash over the secret domestic spying program is one slice of a broader struggle over the power of the presidency that has animated the Bush administration. George W. Bush and Dick Cheney came to office convinced that the authority of the presidency had eroded and have spent the past five years trying to reclaim it.

From shielding energy policy deliberations to setting up military tribunals without court involvement, Bush, with Cheney’s encouragement, has taken what scholars call a more expansive view of his role than any commander in chief in decades. With few exceptions, Congress and the courts have largely stayed out of the way, deferential to the argument that a president needs free rein, especially in wartime.

New York Times: “Spying Program Snared U.S. Calls.”

A surveillance program approved by President Bush to conduct eavesdropping without warrants has captured what are purely domestic communications in some cases, despite a requirement by the White House that one end of the intercepted conversations take place on foreign soil, officials say.

Posted on December 21, 2005 at 6:50 AM

How Much High Explosive Does Any One Person Need?

Four hundred pounds:

The stolen goods include 150 pounds of C-4 plastic explosive and 250 pounds of thin sheets of explosives that could be used in letter bombs. Also, 2,500 detonators were missing from a storage explosive container, or magazine, in a bunker owned by Cherry Engineering.

The theft was professional:

Thieves apparently used blowtorches to cut through the storage trailers—suggesting they knew what they were after.

Most likely it’s a criminal who will resell the stuff, though it could be a terrorist organization. My guess is criminals.

By the way, this is in America…

The material was taken from Cherry Engineering, a company owned by Chris Cherry, a scientist at Sandia National Labs.

…where security is an afterthought:

The site, located outside Albuquerque, had no guards and no surveillance cameras.

Or maybe not even an afterthought:

It was the site’s second theft in the past two years.

If anyone is looking for something to spend national security money on that will actually make us safer, securing high-explosive-filled trailers would be high on my list.

EDITED TO ADD (12/29): The explosives were recovered.

Posted on December 20, 2005 at 2:20 PM34 Comments

NSA and Bush’s Illegal Eavesdropping

When President Bush directed the National Security Agency to secretly eavesdrop on American citizens, he transferred an authority previously under the purview of the Justice Department to the Defense Department and bypassed the very laws put in place to protect Americans against widespread government eavesdropping. The reason may have been to tap the NSA’s capability for data-mining and widespread surveillance.

Illegal wiretapping of Americans is nothing new. In the 1950s and ’60s, in a program called “Project Shamrock,” the NSA intercepted every single telegram coming into or going out of the United States. It conducted eavesdropping without a warrant on behalf of the CIA and other agencies. Much of this became public during the 1975 Church Committee hearings and resulted in the now famous Foreign Intelligence Surveillance Act (FISA) of 1978.

The purpose of this law was to protect the American people by regulating government eavesdropping. Like many laws limiting the power of government, it relies on checks and balances: one branch of the government watching the other. The law established a secret court, the Foreign Intelligence Surveillance Court (FISC), and empowered it to approve national-security-related eavesdropping warrants. The Justice Department can request FISA warrants to monitor foreign communications as well as communications by American citizens, provided that they meet certain minimal criteria.

The FISC issued about 500 FISA warrants per year from 1979 through 1995, and the number has risen steadily since: 1,758 warrants were issued in 2004. The process is designed for speed and even has provisions that let the Justice Department wiretap first and ask for permission later. In all that time, only four warrant requests were ever rejected, all in 2003. (We don’t know any details, of course, as the court proceedings are secret.)

FISA warrants are carried out by the FBI, but in the days immediately after the terrorist attacks, there was a widespread perception in Washington that the FBI wasn’t up to dealing with these new threats—they couldn’t uncover plots in a timely manner. So instead the Bush administration turned to the NSA. They had the tools, the expertise, the experience, and so they were given the mission.

The NSA’s ability to eavesdrop on communications is exemplified by a technological capability called Echelon. Echelon is the world’s largest information “vacuum cleaner,” sucking up a staggering amount of voice, fax, and data communications—satellite, microwave, fiber-optic, cellular and everything else—from all over the world: an estimated 3 billion communications per day. These communications are then processed through sophisticated data-mining technologies, which look for simple phrases like “assassinate the president” as well as more complicated communications patterns.

Supposedly Echelon only covers communications outside of the United States. Although there is no evidence that the Bush administration has employed Echelon to monitor communications to and from the U.S., this surveillance capability is probably exactly what the president wanted and may explain why the administration sought to bypass the FISA process of acquiring a warrant for searches.

Perhaps the NSA just didn’t have any experience submitting FISA warrants, so Bush unilaterally waived that requirement. And perhaps Bush thought FISA was a hindrance—in 2002 there was a widespread but false belief that the FISC got in the way of the investigation of Zacarias Moussaoui (the presumed “20th hijacker”)—and bypassed the court for that reason.

Most likely, Bush wanted a whole new surveillance paradigm. You can think of the FBI’s capabilities as “retail surveillance”: It eavesdrops on a particular person or phone. The NSA, on the other hand, conducts “wholesale surveillance.” It, or more exactly its computers, listens to everything. An example might be to feed the computers every voice, fax, and e-mail communication looking for the name “Ayman al-Zawahiri.” This type of surveillance is more along the lines of Project Shamrock, and not legal under FISA. As Sen. Jay Rockefeller wrote in a secret memo after being briefed on the program, it raises “profound oversight issues.”
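The distinction is easy to see in miniature. Below is a toy sketch of “wholesale” filtering: scan every message in a traffic stream for watchlist terms, instead of tapping one known target’s line. The messages, addresses, and watchlist are invented for illustration; real systems are vastly more sophisticated.

```python
# Toy sketch of "wholesale" surveillance: scan every message in a
# traffic stream for watchlist terms, instead of tapping one known
# target's line. Messages, addresses, and the watchlist are invented.

WATCHLIST = {"ayman al-zawahiri"}

def flag_messages(stream):
    """Return the messages that mention any watchlist term."""
    hits = []
    for sender, text in stream:
        lowered = text.lower()
        if any(term in lowered for term in WATCHLIST):
            hits.append((sender, text))
    return hits

traffic = [
    ("alice@example.com", "Lunch on Friday?"),
    ("bob@example.com", "Forwarding that article about Ayman al-Zawahiri."),
    ("carol@example.com", "Quarterly report attached."),
]

print(flag_messages(traffic))  # only bob's message is flagged
```

The point of the sketch is the shape of the computation: every message is examined, and the watchlist decides after the fact which ones were “targeted.”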

It is also unclear whether Echelon-style eavesdropping would prevent terrorist attacks. In the months before 9/11, Echelon noticed considerable “chatter”: bits of conversation suggesting some sort of imminent attack. But because much of the planning for 9/11 occurred face-to-face, analysts were unable to learn details.

The fundamental issue here is security, but it’s not the security most people think of. James Madison famously said: “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.” Terrorism is a serious risk to our nation, but an even greater threat is the centralization of American political power in the hands of any single branch of the government.

Over 200 years ago, the framers of the U.S. Constitution established an ingenious security device against tyrannical government: they divided government power among three different bodies. A carefully thought-out system of checks and balances among the executive, legislative, and judicial branches ensured that no single branch became too powerful.

After watching tyrannies rise and fall throughout Europe, this seemed like a prudent way to form a government. Courts monitor the actions of police. Congress passes laws that even the president must follow. Since 9/11, the United States has seen an enormous power grab by the executive branch. It’s time we brought back the security system that’s protected us from government for over 200 years.

A version of this essay originally appeared in Salon.

I wrote another essay about the legal and constitutional implications of this. The Minneapolis Star Tribune will publish it either Wednesday or Thursday, and I will post it here at that time.

I didn’t talk about the political dynamics in either essay, but they’re fascinating. The White House kept this secret, but they briefed at least six people outside the administration. The current and former presiding judges of the FISC knew about this. Last Sunday’s Washington Post reported that both of them had misgivings about the program, but neither did anything about it. The White House also briefed the Committee Chairs and Ranking Members of the House and Senate Intelligence Committees, and they didn’t do anything about it. (Although Sen. Rockefeller wrote a bizarre I’m-not-going-down-with-you memo to Cheney, keeping a copy for his files.)

Cheney was on television this weekend citing this minimal disclosure as evidence that Congress acquiesced to the program. I see it as evidence of something else: if people from both the Legislative and the Judiciary branches knowingly permitted unlawful surveillance by the Executive branch, then the current system of checks and balances isn’t working.

It’s also evidence about how secretive this administration is. None of the other FISC judges, and none of the other House or Senate Intelligence Committee members, were told about this, even under clearance. And if there’s one thing these people hate, it’s being kept in the dark on a matter within their jurisdiction. That’s why Senator Feinstein, a member of the Senate Intelligence Committee, was so upset yesterday. And it’s pushing Senator Specter, and some of the Republicans in these Judiciary committees, further into the civil liberties camp.

There are about a zillion links worth reading, but here are some of them you might not yet have seen. Some good newspaper commentaries. An excellent legal analysis. Three blog posts. Four more blog posts. Daniel Solove on FISA. Two legal analyses. An interesting “Democracy Now” commentary, including interesting comments on the NSA’s capabilities by James Bamford. And finally, my 2004 essay on the security of checks and balances.

“Necessity is the plea for every infringement of human freedom. It is the argument of tyrants; it is the creed of slaves.”—William Pitt, House of Commons, 11/18/1783.

Posted on December 20, 2005 at 12:45 PM97 Comments

Microsoft Windows Receives EAL 4+ Certification

Windows has a Common Criteria (CC) certification:

Microsoft announced that all the products earned the EAL 4+ (Evaluation Assurance Level), which is the highest level granted to a commercial product.

The products receiving CC certification include Windows XP Professional with Service Pack 2 and Windows XP Embedded with Service Pack 2. Four different versions of Windows Server 2003 also received certification.

Is this true?

…Steve Lipner, director of security engineering strategy at Microsoft, said the certifications are a significant proof point of Redmond’s commitment to creating secure software.

Or are the certifications proof that EAL 4+ isn’t worth much?

Posted on December 20, 2005 at 7:21 AM48 Comments

Cell Phone Companies and Security

This is a fascinating story of cell phone fraud, security, economics, and externalities. Its moral is obvious, and it demonstrates how economic considerations drive security decisions.

Susan Drummond was a customer of Rogers Wireless, a large Canadian cell phone company. Her phone was cloned while she was on vacation, and she got a $12,237.60 phone bill (her typical bill was $75). Rogers maintains that there is nothing to be done, and that Drummond has to pay.

Like all cell phone companies, Rogers has automatic fraud detection systems that detect this kind of abnormal cell phone usage. They don’t turn the cell phones off, though, because they don’t want to annoy their customers.

Ms. Hopper [a manager in Roger’s security department] said terrorist groups had identified senior cellphone company officers as perfect targets, since the company was loath to shut off their phones for reasons that included inconvenience to busy executives and, of course, the public-relations debacle that would take place if word got out.

As long as Rogers can get others to pay for the fraud, this makes perfect sense. Shutting off a phone based on an automatic fraud-detection system costs the phone company in two ways: people inconvenienced by false alarms, and bad press. But the major cost of not shutting off a phone remains an externality: the customer pays for it.
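A minimal sketch of the kind of threshold check such a fraud-detection system might run: flag an account when one day’s charges far exceed its historical daily average. The dollar figures and the 10x multiplier are invented for illustration, not Rogers’ actual criteria.

```python
# Minimal sketch of an automatic fraud-detection check: flag an account
# when one day's charges far exceed its historical daily average.
# Figures and the 10x multiplier are invented for illustration.

def is_anomalous(daily_history, todays_charges, multiplier=10):
    """Flag if today's charges exceed `multiplier` times the average daily bill."""
    average = sum(daily_history) / len(daily_history)
    return todays_charges > multiplier * average

history = [2.5] * 30                 # roughly a $75/month customer
print(is_anomalous(history, 400.0))  # cloned-phone spike: True
print(is_anomalous(history, 3.0))    # ordinary day: False
```

Detecting the anomaly is the easy part; as the article shows, the hard part is whether anyone acts on the alarm.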

In fact, there seems to be some evidence that Rogers decides whether or not to shut off a suspicious phone based on the customer’s ability to pay:

Ms. Innes [a vice-president with Rogers Communications] said that Rogers has a policy of contacting consumers if fraud is suspected. In some cases, she admitted, phones are shut off automatically, but refused to say what criteria were used. (Ms. Drummond and Mr. Gefen believe that the company bases the decision on a customer’s creditworthiness. “If you have the financial history, they let the meter run,” Ms. Drummond said.) Ms. Drummond noted that she has a salary of more than $100,000, and a sterling credit history. “They knew something was wrong, but they thought they could get the money out of me. It’s ridiculous.”

Makes sense from Rogers’ point of view. High-paying customers are 1) more likely to pay, and 2) more damaging if pissed off in a false alarm. Again, economic considerations trump security.

Rogers is defending itself in court, and shows no signs of backing down:

In court filings, the company has made it clear that it intends to hold Ms. Drummond responsible for the calls made on her phone. “. . . the plaintiff is responsible for all calls made on her phone prior to the date of notification that her phone was stolen,” the company says. “The Plaintiff’s failure to mitigate deprived the Defendant of the opportunity to take any action to stop fraudulent calls prior to the 28th of August 2005.”

The solution here is obvious: Rogers should not be able to charge its customers for telephone calls they did not make. Ms. Drummond’s phone was cloned; there is no possible way she could notify Rogers of this before she saw calls she did not make on her bill. She is also completely powerless to affect the anti-cloning security in the Rogers phone system. To make her liable for the fraud is to ensure that the problem never gets fixed.

Rogers is the only party in a position to do something about the problem. The company can implement automatic fraud-detection software, and according to the article, it already has.

Rogers customers will pay for the fraud in any case. If they are responsible for the loss, either they’ll take their chances and pay a lot only if they are the victims, or there’ll be some insurance scheme that spreads the cost over the entire customer base. If Rogers is responsible for the loss, then the customers will pay in the form of slightly higher prices. But only if Rogers is responsible for the loss will they implement security countermeasures to limit fraud.

And if they do that, everyone benefits.
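The liability argument can be reduced to a toy calculation. All dollar figures here are invented for illustration:

```python
# Toy model of the externality argument: the party that bears the fraud
# loss is the only one with an incentive to pay for countermeasures.
# All dollar figures are invented for illustration.

def carrier_buys_countermeasure(expected_fraud_loss, countermeasure_cost,
                                carrier_is_liable):
    """A profit-maximizing carrier buys the countermeasure only when it
    bears the loss and the countermeasure costs less than the fraud."""
    if not carrier_is_liable:
        return False  # the loss falls on customers, so there is nothing to save
    return countermeasure_cost < expected_fraud_loss

# $5M/year in cloning fraud vs. a $1M/year detection system:
print(carrier_buys_countermeasure(5_000_000, 1_000_000, carrier_is_liable=False))  # False
print(carrier_buys_countermeasure(5_000_000, 1_000_000, carrier_is_liable=True))   # True
```

The same fraud exists in both cases; only the assignment of liability changes whether the security investment gets made.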

There is a Slashdot thread on the topic.

Posted on December 19, 2005 at 1:10 PM57 Comments

Insider Threat Statistics

From Europe, although I doubt it’s any different in the U.S.:

  • One in five workers (21%) let family and friends use company laptops and PCs to access the Internet.
  • More than half (51%) connect their own devices or gadgets to their work PC.
  • A quarter of these do so every day.
  • Around 60% admit to storing personal content on their work PC.
  • One in ten confessed to downloading content at work they shouldn’t.
  • Nearly two thirds (62%) admitted they have a very limited knowledge of IT Security.
  • More than half (51%) had no idea how to update the anti-virus protection on their company PC.
  • Five percent say they have accessed areas of their IT system they shouldn’t have.

One caveat: the study is from McAfee, and as the article rightly notes:

Naturally McAfee has a vested interest in talking up this kind of threat….

And finally:

Based on its survey, McAfee has identified four types of employees who put their workplace at risk:

  • The Security Softie – This group comprises the vast majority of employees. They have a very limited knowledge of security and put their business at risk through using their work computer at home or letting family members surf the Internet on their work PC.
  • The Gadget Geek – Those that come to work armed with a variety of devices/gadgets, all of which get plugged into their PC.
  • The Squatter – Those who use the company IT resources in ways they shouldn’t (i.e. by storing content or playing games).
  • The Saboteur – A very small minority of employees. This group will maliciously hack into areas of the IT system to which they shouldn’t have access or infect the network purposely from within.

I like the list.

Posted on December 19, 2005 at 7:13 AM35 Comments

More Erosion of Police Oversight in the U.S.

From EPIC:

Documents obtained by EPIC in a Freedom of Information Act lawsuit reveal FBI agents expressing frustration that the Office of Intelligence Policy and Review, an office that reviews FBI search requests, had not approved applications for orders under Section 215 of the Patriot Act. A subsequent memo refers to “recent changes” allowing the FBI to “bypass” the office. EPIC is expecting to receive further information about this matter.

Some background:

Under Section 215, the FBI must show only “relevance” to a foreign intelligence or terrorism investigation to obtain vast amounts of personal information. It is unclear why the Office of Intelligence Policy and Review did not approve these applications. The FBI has not revealed this information, nor did it explain whether other search methods had failed.

Remember, the issue here is not whether or not the FBI can engage in counterterrorism. The issue is the erosion of judicial oversight—the only check we have on police power. And this power grab is dangerous regardless of which party is in the White House at the moment.

Posted on December 16, 2005 at 10:03 AM18 Comments

The Military is Spying on Americans

The Defense Department is collecting data on perfectly legal, peaceful, anti-war protesters.

The DOD database obtained by NBC News includes nearly four dozen anti-war meetings or protests, including some that have taken place far from any military installation, post or recruitment center. One “incident” included in the database is a large anti-war protest at Hollywood and Vine in Los Angeles last March that included effigies of President Bush and anti-war protest banners. Another incident mentions a planned protest against military recruiters last December in Boston and a planned protest last April at McDonald’s National Salute to America’s Heroes—a military air and sea show in Fort Lauderdale, Fla.

The Fort Lauderdale protest was deemed not to be a credible threat and a column in the database concludes: “US group exercising constitutional rights.” Two hundred forty-three other incidents in the database were discounted because they had no connection to the Department of Defense—yet they all remained in the database.

The DOD has strict guidelines (PDF link), adopted in December 1982, that limit the extent to which they can collect and retain information on U.S. citizens.

Still, the DOD database includes at least 20 references to U.S. citizens or U.S. persons. Other documents obtained by NBC News show that the Defense Department is clearly increasing its domestic monitoring activities. One DOD briefing document stamped “secret” concludes: “[W]e have noted increased communication and encouragement between protest groups using the [I]nternet,” but no “significant connection” between incidents, such as “reoccurring instigators at protests” or “vehicle descriptions.”

Personally, I am very worried about this increase in military activity inside our country. If anyone should be making sure protesters stay on the right side of the law, it’s the police…not the military.

And it could get worse.

EDITED TO ADD (12/16): There’s also this news:

Months after the Sept. 11 attacks, President Bush secretly authorized the National Security Agency to eavesdrop on Americans and others inside the United States to search for evidence of terrorist activity without the court-approved warrants ordinarily required for domestic spying, according to government officials…..

Mr. Bush’s executive order allowing some warrantless eavesdropping on those inside the United States, including American citizens, permanent legal residents, tourists and other foreigners, is based on classified legal opinions that assert that the president has broad powers to order such searches, derived in part from the September 2001 Congressional resolution authorizing him to wage war on Al Qaeda and other terrorist groups, according to the officials familiar with the N.S.A. operation.

And:

….officials familiar with it said the N.S.A. eavesdropped without warrants on up to 500 people in the United States at any given time. The list changes as some names are added and others dropped, so the number monitored in this country may have reached into the thousands over the past three years, several officials said. Overseas, about 5,000 to 7,000 people suspected of terrorist ties are monitored at one time, according to those officials.

This is a very long article, but worth reading. It is not overstatement to suggest that this may be the most significant violation of federal surveillance law in the post-Watergate era.

EDITED TO ADD (12/16): Good analysis from Political Animal. The reason Bush’s executive order is a big deal is because it’s against the law.

Here is the Foreign Intelligence Surveillance Act. Its Section 1809(a) makes it a criminal offense to “engage in electronic surveillance under color of law except as authorized by statute.”

FISA does authorize surveillance without a warrant, but not on US citizens (with the possible exception of citizens speaking from property openly owned by a foreign power; e.g., an embassy.)

FISA also says that the Attorney General can authorize emergency surveillance without a warrant when there is no time to obtain one. But it requires that the Attorney General notify the judge of that authorization immediately, and that he (and yes, the law does say ‘he’) apply for a warrant “as soon as practicable, but not more than 72 hours after the Attorney General authorizes such surveillance.”

It also says this:

“In the absence of a judicial order approving such electronic surveillance, the surveillance shall terminate when the information sought is obtained, when the application for the order is denied, or after the expiration of 72 hours from the time of authorization by the Attorney General, whichever is earliest. In the event that such application for approval is denied, or in any other case where the electronic surveillance is terminated and no order is issued approving the surveillance, no information obtained or evidence derived from such surveillance shall be received in evidence or otherwise disclosed in any trial, hearing, or other proceeding in or before any court, grand jury, department, office, agency, regulatory body, legislative committee, or other authority of the United States, a State, or political subdivision thereof”.

Nothing in the New York Times report suggests that the wiretaps Bush authorized extended only for 72 hours, or that normal warrants were sought in each case within 72 hours after the wiretap began. On the contrary, no one would have needed a special program or presidential order if they had.

According to the Times, “the Bush administration views the operation as necessary so that the agency can move quickly to monitor communications that may disclose threats to the United States.” But this is just wrong. As I noted above, the law specifically allows for warrantless surveillance in emergencies, when the government needs to start surveillance before it can get a warrant. It explains exactly what the government needs to do under those circumstances. It therefore provides the flexibility the administration claims it needed.

They had no need to go around the law. They could easily have obeyed it. They just didn’t want to.

Posted on December 16, 2005 at 6:49 AM65 Comments

Are Port Scans Precursors to Attack?

Interesting research:

Contrary to conventional wisdom, port scans may not be precursors to hacking attempts, reports the University of Maryland’s engineering school.

An analysis of quantitative attack data gathered by the university over a two-month period showed that port scans precede attacks only about five percent of the time, said Michel Cukier, a professor in the Centre for Risk and Reliability. In fact, more than half of all attacks aren’t preceded by a scan of any kind, Cukier said.

I agree with Ullrich, who said that the analysis seems too simplistic:

Johannes Ullrich, chief technology officer at the SANS Institute ‘s Internet Storm Center, said that while the design and development of the testbed used for the research appears to be valid, the analysis is too simplistic.

Rather than counting the number of packets in a connection, it’s far more important to look at the content when classifying a connection as a port scan or an attack, Ullrich said.

Often, attacks such as the SQL Slammer worm, which hit in 2003, can be as small as one data packet, he said. A lot of the automated attacks that take place combine port and vulnerability scans and exploit code, according to Ullrich.

As a result, much of what researchers counted as port scans may have actually been attacks, said Ullrich, whose Bethesda, Md.-based organization provides Internet threat-monitoring services.
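Ullrich’s objection is easy to demonstrate in miniature: a packet-count heuristic labels any short connection a scan, so a one-packet worm is misclassified unless the payload is inspected. The byte signature below is an invented stand-in, not a real Slammer signature, and the threshold is arbitrary.

```python
# A packet-count heuristic calls any short connection a "scan", so a
# one-packet worm is misclassified unless the payload is inspected.
# WORM_MARKER is a hypothetical signature, not real Slammer bytes.

WORM_MARKER = b"\x04\x01\x01\x01"

def classify_by_count(packet_count, scan_threshold=3):
    """Naive rule: few packets means a port scan."""
    return "scan" if packet_count <= scan_threshold else "attack"

def classify_by_content(packets):
    """Better rule: inspect payloads before falling back on counts."""
    if any(WORM_MARKER in p for p in packets):
        return "attack"
    return classify_by_count(len(packets))

single_packet_worm = [b"header" + WORM_MARKER + b"exploit payload"]
print(classify_by_count(len(single_packet_worm)))  # "scan" -- misclassified
print(classify_by_content(single_packet_worm))     # "attack"
```

Any study that counts packets without looking at content will file traffic like this under “scan,” which is exactly Ullrich’s complaint about the Maryland analysis.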

Posted on December 15, 2005 at 6:38 AM • 23 Comments

Totally Secure Classical Communications?

My eighth Wired column:

How would you feel if you invested millions of dollars in quantum cryptography, and then learned that you could do the same thing with a few 25-cent Radio Shack components?

I’m exaggerating a little here, but if a new idea out of Texas A&M University turns out to be secure, we’ve come close.

Earlier this month, Laszlo Kish proposed securing a communications link, like a phone or computer line, with a pair of resistors. By adding electronic noise, or using the natural thermal noise of the resistors—called “Johnson noise”—Kish can prevent eavesdroppers from listening in.

In the blue-sky field of quantum cryptography, the strange physics of the subatomic world is harnessed to create a secure, unbreakable communications channel between two points. Kish’s research is intriguing, in part, because it uses the simpler properties of classical physics—the stuff you learned in high school—to achieve the same results.

At least, that’s the theory.

I go on to describe how the system works, and then discuss the security:

There hasn’t been enough analysis. I certainly don’t know enough electrical engineering to know whether there is any clever way to eavesdrop on Kish’s scheme. And I’m sure Kish doesn’t know enough security to know that, either. The physics and stochastic mathematics look good, but all sorts of security problems crop up when you try to actually build and operate something like this.

It’s definitely an idea worth exploring, and it’ll take people with expertise in both security and electrical engineering to fully vet the system.

There are practical problems with the system, though. The bandwidth the system can handle appears very limited. The paper gives the bandwidth-distance product as 2 x 10^6 meter-Hz. This means that over a 1-kilometer link, you can only send at 2,000 bps. A dialup modem from 1985 is faster. Even with a fat 500-pair cable you’re still limited to 1 million bps over 1 kilometer.
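The bandwidth figures follow directly from the quoted product; a quick check (assuming, as the post does, roughly one bit per second per hertz):

```python
# Bandwidth-distance product from Kish's paper: rate x distance is
# (roughly) constant at 2e6 meter-Hz.
PRODUCT = 2e6  # meter-Hz

def max_rate_bps(distance_m):
    """Approximate achievable rate at a given link length."""
    return PRODUCT / distance_m

print(max_rate_bps(1000))         # 2000.0 bps over 1 km
print(500 * max_rate_bps(1000))   # 1000000.0 bps with a 500-pair cable
```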

And multi-wire cables have their own problems; there are all sorts of cable-capacitance and cross-talk issues with that sort of link. Phone companies really hate those high-density cables, because of how long it takes to terminate or splice them.

Even more basic: It’s vulnerable to man-in-the-middle attacks. Someone who can intercept and modify messages in transit can break the security. This means you need an authenticated channel to make it work—a link that guarantees you’re talking to the person you think you’re talking to. How often in the real world do we have a wire that is authenticated but not confidential? Not very often.

Generally, if you can eavesdrop you can also mount active attacks. But this scheme only defends against passive eavesdropping.

For those keeping score, that’s four practical problems: It’s only link encryption and not end-to-end, it’s bandwidth-limited (but may be enough for key exchange), it works best for short ranges and it requires authentication to make it work. I can envision some specialized circumstances where this might be useful, but they’re few and far between.

But quantum key distributions have the same problems. Basically, if Kish’s scheme is secure, it’s superior to quantum communications in every respect: price, maintenance, speed, vibration, thermal resistance and so on.

Both this and the quantum solution share another problem, however; they’re solutions looking for a problem. In the realm of security, encryption is the one thing we already do pretty well. Focusing on encryption is like sticking a tall stake in the ground and hoping the enemy runs right into it, instead of building a wide wall.

Arguing about whether this kind of thing is more secure than AES—the United States’ national encryption standard—is like arguing about whether the stake should be a mile tall or a mile and a half tall. However tall it is, the enemy is going to go around the stake.

Software security, network security, operating system security, user interface—these are the hard security problems. Replacing AES with this kind of thing won’t make anything more secure, because all the other parts of the security system are so much worse.

This is not to belittle the research. I think information-theoretic security is important, regardless of practicality. And I’m thrilled that an easy-to-build classical system can work as well as a sexy, media-hyped quantum cryptosystem. But don’t throw away your crypto software yet.

Here’s the press release, here’s the paper, and here’s the Slashdot thread.

EDITED TO ADD (1/31): Here’s an interesting rebuttal.

Posted on December 15, 2005 at 6:13 AM • 52 Comments

Leon County, FL Dumps Diebold Voting Machines

Finnish security expert Harri Hursti demonstrated how easy it is to hack the vote:

A test election was run in Leon County on Tuesday with a total of eight ballots. Six ballots voted “no” on a ballot question as to whether Diebold voting machines can be hacked or not. Two ballots, cast by Dr. Herbert Thompson and by Harri Hursti voted “yes” indicating a belief that the Diebold machines could be hacked.

At the beginning of the test election the memory card programmed by Harri Hursti was inserted into an Optical Scan Diebold voting machine. A “zero report” was run indicating zero votes on the memory card. In fact, however, Hursti had pre-loaded the memory card with plus and minus votes.

The eight ballots were run through the optical scan machine. The standard Diebold-supplied “ender card” was run through as is normal procedure ending the election. A results tape was run from the voting machine.

Correct results should have been: Yes:2 ; No:6

However, just as Hursti had planned, the results tape read: Yes:7 ; No:1

The results were then uploaded from the optical scan voting machine into the GEMS central tabulator, a step cited by Diebold as a protection against memory card hacking. The central tabulator is the “mother ship” that pulls in all votes from voting machines. However, the GEMS central tabulator failed to notice that the voting machines had been hacked.

The results in the central tabulator read:

Yes:7 ; No:1
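The counter arithmetic behind the demonstration is simple. The +5/-5 offsets below are inferred from the reported totals (they are the only pair that cancels before the election and turns 2/6 into 7/1); the actual hack also involved the card’s report code:

```python
# Reconstructing the implied counter arithmetic: offsets pre-loaded onto
# the memory card cancel to zero before the election, then skew the totals.
preload = {"yes": +5, "no": -5}   # inferred offsets written to the card
ballots = {"yes": 2, "no": 6}     # votes actually cast

zero_report_total = sum(preload.values())                 # appears "clean"
results = {k: preload[k] + ballots[k] for k in ballots}

print(zero_report_total)   # 0
print(results)             # {'yes': 7, 'no': 1}
```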

This is my 2004 essay on the problems with electronic voting machines. The solution is straightforward: machines need voter-verifiable paper audit trails, and all software must be open to public scrutiny. This is not a partisan issue: election irregularities have affected people in both parties.

Posted on December 14, 2005 at 3:30 PM • 104 Comments

Weakest Link Security

Funny story:

At the airport where this pilot fish works, security has gotten a lot more attention since 9/11. “All the security doors that connect the concourses to office spaces and alleyways for service personnel needed an immediate upgrade,” says fish. “It seems that the use of a security badge was no longer adequate protection.

“So over the course of about a month, more than 50 doors were upgraded to require three-way protection. To open the door, a user needed to present a security badge (something you possess), a numeric code (something you know) and a biometric thumb scan (something you are).

“Present all three, and the door beeps and lets you in.”

One by one, the doors are brought online. The technology works, and everything looks fine—until fish decides to test the obvious.

After all, the average member of the public isn’t likely to forge a security badge, guess a multidigit number and fake a thumb scan. “But what happens if you just turn the handle without any of the above?” asks fish. “Would it set off alarms or call security?

“It turns out that if you turn the handle, the door opens.

“Despite the addition of all that technology and security on every single door, nobody bothered to check that the doors were set to lock by default.”

Remember, security is only as strong as the weakest link.

Posted on December 14, 2005 at 11:59 AM • 24 Comments

Korea Solves the Identity Theft Problem

South Korea gets it:

The South Korean government is introducing legislation that will make it mandatory for financial institutions to compensate customers who have fallen victim to online fraud and identity theft.

The new laws will require financial firms in the country to compensate customers for virtually all financial losses resulting from online identity theft and account hacking, even if the banks are not directly responsible.

Of course, by itself this action doesn’t solve identity theft. But in a vibrant capitalist economic market, this action is going to pave the way for technical security improvements that will effectively deal with identity theft.

The good news for the rest of us is that we can watch what happens now.

Posted on December 14, 2005 at 7:14 AM • 28 Comments

Brian Snow on Security

Good paper (.pdf) by Brian Snow of the NSA on security and assurance.

Abstract: When will we be secure? Nobody knows for sure—but it cannot happen before commercial security products and services possess not only enough functionality to satisfy customers’ stated needs, but also sufficient assurance of quality, reliability, safety, and appropriateness for use. Such assurances are lacking in most of today’s commercial security products and services. I discuss paths to better assurance in Operating Systems, Applications, and Hardware through better development environments, requirements definition, systems engineering, quality certification, and legal/regulatory constraints. I also give some examples.

Posted on December 13, 2005 at 2:15 PM • 11 Comments

FBI Speaks Sense on Cyberterrorism

A surprising outbreak of reason:

Al Qaida and other terrorist groups are more sophisticated in their use of computers but still are unable to mount crippling internet-based attacks against US power grids, airports and other targets, the FBI’s top cyber crime official said on Wednesday.

Here’s a transcript of a debate on the topic. And this is my 2003 essay.

Posted on December 13, 2005 at 8:02 AM • 17 Comments

A Pilot on Airline Security

Good comments from Salon’s pilot-in-residence on airline security:

In the days ahead, you can expect sharp debate on whether the killing was justified, and whether the nation’s several thousand air marshals—their exact number is a tightly guarded secret—undergo sufficient training. How are they taught to deal with mentally ill individuals who might be unpredictable and unstable, but not necessarily dangerous? Are the rules of engagement overly aggressive?

Those are fair questions, but not the most important ones.

Wednesday’s incident fulfills what many of us predicted ever since the Federal Air Marshals Service was widely expanded following the 2001 terror attacks in New York, Pennsylvania and Washington: The first person killed by a sky marshal, whether through accident or misunderstanding, would not be a terrorist. In a lot of ways, Alpizar is the latest casualty of Sept. 11. He is not the victim of a trigger-happy federal marshal but of our own, now fully metastasized security mania.

And:

Terrorists, meanwhile, won’t waste their time on schemes with such an extreme likelihood of failure.

Unfortunately, the same cannot be said for us. In America, reasoned debate and clear thinking aren’t the useful currencies they once were, and backlash to the TSA’s announcement has come from a host of unexpected sources—members of Congress, flight attendants unions and families of Sept. 11 victims.

“The Bush administration proposal is just asking the next Mohammed Atta to move from box cutters to scissors,” said Rep. Markey.

Actually, that Atta and his henchmen used box cutters to commandeer four aircraft means very little. Just as effectively, they could have employed snapped-off pieces of plastic, shattered bottles or, for that matter, their own bare fists and some clever wile. Sept. 11 had nothing to do with exploiting airport security and everything to do with exploiting our mindset at the time. What weapons the terrorists had or didn’t have is essentially irrelevant. Hijackings, to that point in history, were perpetrated mainly through bluff, and while occasionally deadly, they seldom resulted in more than a temporary inconvenience—diversions to Cuba or cities in the Middle East. The moment American flight 11 collided with the north tower of the World Trade Center, everything changed; good luck to the next skyjacker stupid enough to attempt the same stunt with anything less than a flamethrower in his hand.

And finally:

This is almost acceptable, if only there weren’t so many hours of squandered time and manpower in the balance. Nobody wants weapons on a jetliner. But, more critical, neither do we want to bog down the system. The longer we fuss at the metal detectors over low-threat objects, the greater we expose ourselves to the very serious dangers of bombs and explosives. TSA is not in need of more screeners; it’s in need of reallocation of personnel and resources.

It was, we shouldn’t forget, 17 years ago this month that Pan Am flight 103 was destroyed over Lockerbie, Scotland by a stash of Semtex hidden inside a Toshiba radio in a piece of checked luggage. Then as now, and perhaps for years to come, explosives were the most serious high-level threat facing commercial aviation. European authorities were quick to implement a sweeping revision of luggage-screening protocols designed to thwart another Lockerbie. It took almost 15 years, and the catastrophe of Sept. 11, before America began to do the same—and a comprehensive system still isn’t fully in place.

Flying was and remains exceptionally safe, but whether that’s because or in spite of the system is tough to tell. The “war on terror” has left us fighting many enemies—some real, many imagined. We’ll figure things out at some point, maybe. Until then, dead in Miami, Rigoberto Alpizar is yet more collateral damage.

Posted on December 12, 2005 at 1:21 PM • 40 Comments

Most Stolen Identities Never Used

This is something I’ve been saying for a while, and it’s nice to see some independent confirmation:

A new study suggests consumers whose credit cards are lost or stolen or whose personal information is accidentally compromised face little risk of becoming victims of identity theft.

The analysis, released on Wednesday, also found that even in the most dangerous data breaches—where thieves access social security numbers and other sensitive information on consumers they have deliberately targeted—only about 1 in 1,000 victims had their identities stolen.

The reason is that thieves are stealing far more identities than they need. Two years ago, if someone asked me about protecting against identity theft, I would tell them to shred their trash and be careful giving information over the Internet. Today, that advice is obsolete. Criminals are not stealing identity information in ones and twos; they’re stealing identity information in blocks of hundreds of thousands and even millions.

If a criminal ring wants a dozen identities for some fraud scam, and they steal a database with 500,000 identities, then—as a percentage—almost none of those identities will ever be the victims of fraud.

Some other findings from their press release:

A significant finding from the research is that different breaches pose different degrees of risk. In the research, ID Analytics distinguishes between “identity-level” breaches, where names and Social Security numbers were stolen and “account-level” breaches, where only account numbers—sometimes associated with names—were stolen. ID Analytics also discovered that the degree of risk varies based on the nature of the data breach, for example, whether the breach was the result of a deliberate hacking into a database or a seemingly unintentional loss of data, such as tapes or disks being lost in transit.

And:

ID Analytics’ fraud experts believe the reason for the minimal use of stolen identities is based on the amount of time it takes to actually perpetrate identity theft against a consumer. As an example, it takes approximately five minutes to fill out a credit application. At this rate, it would take a fraudster working full-time, averaging 6.5 hours a day, five days a week, 50 weeks a year, over 50 years to fully utilize a breached file consisting of one million consumer identities. If the criminal outsourced the work at a rate of $10 an hour in an effort to use a breached file of the same size in one year, it would cost that criminal about $830,000.
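The press release’s figures check out; a quick sketch using only the numbers quoted above:

```python
# Back-of-the-envelope check of ID Analytics' workload estimate.
MINUTES_PER_APPLICATION = 5
WORK_MINUTES_PER_YEAR = 6.5 * 60 * 5 * 50   # hrs/day, days/week, weeks/year
IDENTITIES = 1_000_000

years_solo = IDENTITIES * MINUTES_PER_APPLICATION / WORK_MINUTES_PER_YEAR
outsourced_cost = IDENTITIES * MINUTES_PER_APPLICATION / 60 * 10  # $10/hour

print(round(years_solo, 1))     # 51.3 years for one full-time fraudster
print(round(outsourced_cost))   # 833333 -- about $830,000
```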

Another key finding indicates that in certain targeted data breaches, notices may have a deterrent effect. In one large-scale identity-level breach, thieves slowed their use of the data to commit identity theft after public notification. The research also showed how the criminals who stole the data in the breaches used identity data manipulation, or “tumbling” to avoid detection and to prolong the scam.

That last bit is interesting, and it makes this recommendation even more surprising:

The company suggests, for instance, that companies shouldn’t always notify consumers of data breaches because they may be unnecessarily alarming people who stand little chance of being victimized.

I agree with them that all this notification is having a “boy who cried wolf” effect on people. I know people living in California who get disclosure notifications in the mail regularly, and who have stopped paying attention to them.

But remember, the main security value of notification requirements is the cost. By increasing the cost to companies of data thefts, the goal is for them to increase their security. (The main security value used to be the public shaming, but these breaches are now so common that the press no longer writes about them.) Direct fines would be a better way of dealing with the economic externality, but the notification law is all we’ve got right now. I don’t support eliminating it until there’s something else in its place.

Posted on December 12, 2005 at 9:50 AM • 33 Comments

G. Gordon Liddy on Terrorism

I remember reading this fictional account by G. Gordon Liddy when it first appeared in Omni in 1989. I wouldn’t say he “predicted attack on America,” but he did produce an entertaining piece of fiction.

The rendering of U.S. jet equipment inventory unusable cannot be attributed to the events of second August. The intelligence community and the Federal Bureau of Investigation are, however, unanimously in agreement that the two are part of the same overall operation. This conclusion is based primarily upon the evidence taken from the body of a female slain by SEAL Team 3 on second August in the San Diego area while she was participating in the attack on the national electrical power distribution system (next heading). But for this fortuitous event, the sudden failure of several aircraft belonging to each U.S. carrier would still be blamed on age (a la the 1988 Aloha aircraft incident, when metal fatigue caused the roof of a Boeing 737 to rupture in flight). As it is, we have had to ground the U.S. civil commercial aviation fleet for an indefinite time, but at least we know what to look for. Japanese intelligence has confirmed that the body of the woman slain by the SEALs is that of a member of their “Red Army” group. On her person was an item at first thought unrelated to her mission: what appeared to be a U.S.-made Magic Marker, which, although not dried out, did not mark. The fluid it contained has now been identified by researchers at the Defense Advanced Research Projects Agency (DARPA) as nearly chemically identical to our classified liquid metal embrittlement (LME) agent. Unfortunately, prior to being added to the classified technologies list, the LME agent was discussed in open literature.

Posted on December 9, 2005 at 4:16 PM • 17 Comments

Sky Marshal Shooting in Miami

I have heretofore refrained from writing about the Miami false-alarm terrorist incident. For those of you who have spent the last few days in an isolation chamber, sky marshals shot and killed a mentally ill man they believed to be a terrorist. The shooting happened on the ground, in the jetway. The man claimed he had a bomb and wouldn’t stop when ordered to by sky marshals. At least, that’s the story.

I’ve read the reports, the claims of the sky marshals and the counterclaims of some witnesses. Whatever happened—and it’s possible that we’ll never know—it does seem that this incident isn’t the same as the British shooting of a Brazilian man on July 22.

I do want to make two points, though.

One, any time you have an officer making split-second life and death decisions, you’re going to have mistakes. I hesitate to second-guess the sky marshals on the ground; they were in a very difficult position. But the way to minimize mistakes is through training. I strongly recommend that anyone interested in this sort of thing read Blink, by Malcolm Gladwell.

Two, I’m not convinced the sky marshals’ threat model matches reality. Mentally ill people are far more common than terrorists. People who claim to have a bomb and don’t are far more common than people who actually do. The real question we should be asking here is: what should the appropriate response be to this low-probability threat?

EDITED TO ADD (12/11): Good Salon article on the topic.

Posted on December 9, 2005 at 1:28 PM • 176 Comments

E-Hijacking

The article is a bit inane, but it talks about an interesting security problem. “E-hijacking” is the term used to describe the theft of goods in transit by altering the electronic paperwork:

He pointed to the supposed loss of 3.9-million banking records stored on computer backup tapes that were being shipped by UPS from New York-based Citigroup to an Experian credit bureau in Texas. “These tapes were not lost – they were stolen,” Spoonamore said. “Not only were they stolen, the theft occurred by altering the electronic manifest in transit so it would be delivered right to the thieves.” He added that UPS, Citigroup, and Experian spent four days blaming each other for losing the shipment before realizing it had actually been stolen.

Spoonamore, a veteran of the intelligence community, said in his analysis of this e-hijacking, upwards of 15 to 20 people needed to be involved to hack five different computer systems simultaneously to breach the electronic safeguards on the electronic manifest. The manifest was reset from “secure” to “standard” while in transit, so it could be delivered without the required three signatures, he said. Afterward the manifest was put back to “secure” and three signatures were uploaded into the system to appear as if proper procedures had been followed.

“What’s important to remember here is that there is no such thing as ‘security’ in the data world: all data systems can and will be breached,” Spoonamore said. “What you can have, however, is data custody so you know at all times who has it, if they are supposed to have it, and what they are doing with it. Custody is what begets data security.”

This is interesting. More and more, the physical movement of goods is secondary to the electronic movement of information. Oil being shipped across the Atlantic, for example, can change hands several times while it is in transit. I see a whole lot of new risks along these lines in the future.

Posted on December 9, 2005 at 7:41 AM • 22 Comments

Truckers Watching the Highways

Highway Watch is yet another civilian distributed counterterrorism program. Basically, truckers are trained to look out for suspicious activities on the highways. Despite its similarities to ill-conceived, stillborn programs like TIPS, I think this one has some merit.

Why? Two things: training, and a broader focus than terrorism. This is from their overview:

Highway Watch® training provides Highway Watch® participants with the observational tools and the opportunity to exercise their expert understanding of the transportation environment to report safety and security concerns rapidly and accurately to the authorities. In addition to matters of homeland security, stranded vehicles or accidents, unsafe road conditions, and other safety-related situations are reported, eliciting the appropriate emergency responders. Highway Watch® reports are combined with other information sources and shared both with federal agencies and the roadway transportation sector by the Highway ISAC.

Sure, the “matters of homeland security” is the sexy application that gets the press and the funding, but “stranded vehicles or accidents, unsafe road conditions, and other safety related situations” are likely to be the bread and butter of this kind of program. And interstate truckers are likely to be in a good position to report these things, assuming there’s a good mechanism for it.

About the training:

Highway Watch® participants attend a comprehensive training session before they become certified Highway Watch® members. This training incorporates both safety and security issues. Participants are instructed on what to look for when witnessing traffic accidents and other safety-related situations and how to make a proper emergency report. Highway Watch® curriculum also provides anti-terrorism information, such as: a brief account of modern terrorist attacks from around the world, an outline explaining how terrorist acts are usually carried out, and tips on preventing terrorism. From this solid baseline curriculum, different segments of the highway sector have developed or are developing unique modules attuned to their specific security-related situations.

Okay, okay, it does sound a bit hokey. “…tips on preventing terrorism” indeed. (Tip #7: When transporting nuclear wastes, always be sure to padlock your truck. Tip #12: If someone asks you to deliver a trailer to the parking lot underneath a large office building and run away very fast, always check with your supervisor first.) But again, I like the inclusion of the mundane “what to look for when witnessing traffic accidents and other safety-related situations and how to make a proper emergency report.”

This program has a lot of features I like in security systems: it’s dynamic, it’s distributed, it relies on trained people paying attention, and it’s not focused on a specific threat.

Usually we see terrorism as the justification for something that is ineffective and wasteful. Done right, this could be an example of terrorism being used as the justification for something that is smart and effective.

Posted on December 8, 2005 at 12:12 PM • 37 Comments

U.S. Immigration Database Security

In September, the Inspector General of the Department of Homeland Security published a report on the security of the USCIS (United States Citizenship and Immigration Services) databases. It’s called: “Security Weaknesses Increase Risks to Critical United States Citizenship and Immigration Services Database,” and a redacted version (.pdf) is on the DHS website.

This is from the Executive Summary:

Although USCIS has not established adequate or effective database security controls for the Central Index System, it has implemented many essential security controls such as procedures for controlling temporary or emergency system access, a configuration management plan, and procedures for implementing routine and emergency changes. Further, we did not identify any significant configuration weaknesses during our technical tests of the Central Index System. However, additional work remains to implement the access controls, configuration management procedures, and continuity of operations safeguards necessary to protect sensitive Central Index System data effectively. Specifically, USCIS has not: 1) implemented effective user administration procedures; 2) reviewed and retained [REDACTED] effectively; 3) ensured that system changes are properly controlled; 4) developed and tested an adequate information technology (IT) contingency plan; 5) implemented [REDACTED]; or 6) monitored system security functions sufficiently. These database security exposures increase the risk that unauthorized individuals could gain access to critical USCIS database resources and compromise the confidentiality, integrity, and availability of sensitive Central Index System data. [REDACTED]

Posted on December 8, 2005 at 7:38 AM • 13 Comments

OpenDocument Format and the State of Massachusetts

OpenDocument format (ODF) is an alternative to Microsoft’s document, spreadsheet, and other Office file formats. (Here’s the homepage for the ODF standard; it’ll put you to sleep, I promise you.)

So far, nothing here is relevant to this blog. Except that Microsoft, with its proprietary Office document format, is spreading rumors that ODF is somehow less secure.

This, from the company that allows Office documents to embed arbitrary Visual Basic programs?

Yes, there is a way to embed scripts in ODF; this seems to be what Microsoft is pointing to. But at least ODF has a clean and open XML format, which allows layered security and the ability to remove scripts as needed. This is much more difficult in the binary Microsoft formats that effectively hide embedded programs.
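Because an ODF file is just a zip archive of XML, the “remove scripts as needed” step can be done with ordinary tools. This sketch assumes macros live under paths like Basic/ or Scripts/ (where a given application actually stores them can vary) and is illustrative only:

```python
import zipfile

# Assumed locations of embedded macros inside an ODF package; actual
# paths depend on the producing application.
SCRIPT_PREFIXES = ("Basic/", "Scripts/")

def strip_scripts(src_path, dst_path):
    """Copy an ODF package, dropping entries that look like scripts."""
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w") as dst:
        for info in src.infolist():
            if info.filename.startswith(SCRIPT_PREFIXES):
                continue  # drop the embedded script entry
            dst.writestr(info, src.read(info.filename))
```

Doing the equivalent for a binary .doc file means parsing an undocumented OLE container, which is the asymmetry described above.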

Microsoft’s claim that the open ODF is inherently less secure than the proprietary Office format is essentially an argument for security through obscurity. ODF is no less secure than current .doc and other proprietary formats, and may be—marginally, at least—more secure.

This document from the ODF people says it nicely:

There is no greater security risk, no greater ability to “manipulate code” or gain access to content using ODF than alternative document formats. Security should be addressed through policy decisions on information sharing, regardless of document format. Security exposures caused by programmatic extensions such as the visual basic macros that can be imbedded in Microsoft Office documents are well known and notorious, but there is nothing distinct about ODF that makes it any more or less vulnerable to security risks than any other format specification. The many engineers working to enhance the ODF specification are working to develop techniques to mitigate any exposure that may exist through these extensions.

This whole thing has heated up because Massachusetts recently required public records be held in OpenDocument format, which has put Microsoft into a bit of a tizzy. (Here are two commentaries on the security of that move.) I don’t know if it’s why Microsoft is submitting its Office Document Formats to ECMA for “open standardization,” but I’m sure it’s part of the reason.

Posted on December 7, 2005 at 2:21 PM • 14 Comments

30,000 People Mistakenly Put on Terrorist Watch List

This is incredible:

Nearly 30,000 airline passengers discovered in the past year that they were mistakenly placed on federal “terrorist” watch lists, a transportation security official said Tuesday.

When are we finally going to admit that the DHS is incompetent at this?

EDITED TO ADD (12/7): At least they weren’t kidnapped and imprisoned for five months, and “shackled, beaten, photographed nude and injected with drugs by interrogators.”

Posted on December 7, 2005 at 10:26 AM • 59 Comments

Snake-Oil Research in Nature

Snake-oil isn’t only in commercial products. Here’s a piece of research published (behind a paywall) in Nature that’s just full of it.

The article suggests using chaos in an electro-optical system to generate a pseudo-random light sequence, which is then added to the message to protect it from interception. Now, the idea of using chaos to build encryption systems has been tried many times in the cryptographic community, and has always failed. But the authors of the Nature article show no signs of familiarity with prior cryptographic work.

The published system has the obvious problem that it does not include any form of message authentication, so it will be trivial to send spoofed messages or tamper with messages while they are in transit.
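The missing ingredient is standard and cheap: a message authentication code computed under a shared key. A minimal sketch (ordinary HMAC, nothing from the paper itself):

```python
import hashlib
import hmac

key = b"shared secret key"  # illustrative; provisioned out of band

def send(message: bytes):
    """Return the message plus an authentication tag."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag; a constant-time compare detects tampering."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = send(b"launch at dawn")
print(verify(msg, tag))                # True
print(verify(b"launch at noon", tag))  # False: modified in transit
```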

But a closer examination of the paper’s figures suggests a far more fundamental problem. There’s no key. Anyone with a valid receiver can decode the ciphertext. No key equals no security, and what you have left is a totally broken system.

I e-mailed Claudio R. Mirasso, the corresponding author, about the lack of any key, and got this reply: “To extract the message from the chaotic carrier you need to replicate the carrier itself. This can only be done by a laser that matches the emitter characteristics within, let’s say, within 2-5%. Semiconductor lasers with such similarity have to be carefully selected from the same wafer. Even though you have to test them because they can still be too different and do not synchronize. We talk abut a hardware key. Also the operating conditions (current, feedback length and coupling strength) are part of the key.”

Let me translate that. He’s saying that there is a hardware key baked into the system at fabrication. (It comes from manufacturing deviations in the lasers.) There’s no way to change the key in the field. There’s no way to recover security if any of the transmitters/receivers are lost or stolen. And they don’t know how hard it would be for an attacker to build a compatible receiver, or even a tunable receiver that could listen to a variety of encodings.

This paper would never get past peer review in any competent cryptography journal or conference. I’m surprised it was accepted in Nature, a fiercely competitive journal. I don’t know why Nature is taking articles on topics that are outside its usual competence, but it looks to me like Nature got burnt here by a lack of expertise in the area.

To be fair, the paper very carefully skirts the issue of security, and claims hardly anything: “Additionally, chaotic carriers offer a certain degree of intrinsic privacy, which could complement (via robust hardware encryption) both classical (software based) and quantum cryptography systems.” Now that “certain degree of intrinsic privacy” is approximately zero. But other than that, they’re very careful how they word their claims.

For instance, the abstract says: “Chaotic signals have been proposed as broadband information carriers with the potential of providing a high level of robustness and privacy in data transmission.” But there’s no disclosure that this proposal is bogus, from a privacy perspective. And the next-to-last paragraph says “Building on this, it should be possible to develop reliable cost-effective secure communication systems that exploit deeper properties of chaotic dynamics.” No disclosure that “chaotic dynamics” is actually irrelevant to the “secure” part. The last paragraph talks about “smart encryption techniques” (referencing a paper that talks about chaos encryption), “developing active eavesdropper-evasion strategies” (whatever that means), and so on. It’s just enough that if you don’t parse their words carefully and don’t already know the area well, you might come away with the impression that this is a major advance in secure communications. It seems as if it would have helped to have a more careful disclaimer.

Communications security was listed as one of the motivations for studying this communications technique. To list this as a motivation, without explaining that their experimental setup is actually useless for communications security, is questionable at best.

Meanwhile, the press has written articles that convey the wrong impression. Science News has an article that lauds this as a big achievement for communications privacy.

It talks about it as a “new encryption strategy,” “chaos-encrypted communication,” “1 gigabyte of chaos-encrypted information per second.” It’s obvious that the communications security aspect is what Science News is writing about. If the authors knew that their scheme is useless for communications security, they didn’t explain that very well.

There is also a New Scientist article titled “Let chaos keep your secrets safe” that characterizes this as a “new cryptographic technique,” but I can’t get a copy of the full article.

Here are two more articles that discuss its security benefits. In the latter, Mirasso says “the main task we have for the future” is to “define, test, and calibrate the security that our system can offer.”

And their project web page says that “the continuous increase of computer speed threatens the safety” of traditional cryptography (which is bogus) and suggests using physical-layer chaos as a way to solve this. That’s listed as the goal of the project.

There’s a lesson here. This is research undertaken by researchers with no prior track record in cryptography, submitted to a journal with no background in cryptography, and reviewed by reviewers with who knows what kind of experience in cryptography. Cryptography is a subtle subject, and trying to design new cryptosystems without the necessary experience and training in the field is a quick route to insecurity.

And what’s up with Nature? Cryptographers with no training in physics know better than to think they are competent to evaluate physics research. If a physics paper were submitted to a cryptography journal, the authors would likely be gently redirected to a physics journal—we wouldn’t want our cryptography conferences to accept a paper on a subject they aren’t competent to evaluate. Why would Nature expect the situation to be any different when physicists try to do cryptography research?

Posted on December 7, 2005 at 6:36 AM63 Comments

CME in Practice

CME is “Common Malware Enumeration,” and it’s an initiative by US-CERT to give all worms, viruses, and such uniform names. The problem is that different security vendors use different names for the same thing, and it can be extremely confusing for customers. A uniform naming system is a great idea. (I blogged about this in September.)

Here’s someone talking about how it’s not working very well in practice. Basically, while you can go from a vendor’s site to the CME information, you can’t go from the CME information to a vendor’s site. This essentially makes it worthless: just another name and number without references.
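The complaint is concrete: the clearinghouse publishes one direction of the mapping but not the other. As a sketch of how cheap the missing reverse index would be to provide (the vendor names and CME identifiers below are entirely made up), you can derive it from the forward mapping in one pass:

```python
# Hypothetical vendor-name -> CME mappings, the direction that exists today.
vendor_to_cme = {
    ("VendorA", "W32.Examplebot.A"): "CME-123",
    ("VendorB", "W32/Example.worm"): "CME-123",
    ("VendorC", "Troj/Example-Z"): "CME-456",
}

# Derive the CME -> vendor-names index that the critique says is missing.
cme_to_vendors = {}
for (vendor, name), cme in vendor_to_cme.items():
    cme_to_vendors.setdefault(cme, []).append((vendor, name))

print(sorted(cme_to_vendors["CME-123"]))
# [('VendorA', 'W32.Examplebot.A'), ('VendorB', 'W32/Example.worm')]
```

Without that second dictionary, a CME number is, as the critique says, just another name without references.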

Posted on December 6, 2005 at 3:21 PM18 Comments

Reinventing 911

(That’s the 911 emergency service, not the September 11th date.)

This is a really interesting article from Wired on emergency information services. I like the talk about the inherent strength of agile communications systems and their usefulness in disseminating emergency information. I also like the bottom-up approach to information.

Posted on December 6, 2005 at 12:05 PM9 Comments

Child-Repellent Sounds

I’ve already written about merchants using classical music to discourage loitering. Young people don’t like the music, so they don’t stick around.

Here’s a new twist: high-frequency noise that children and teenagers can hear but adults can’t:

The results were almost instantaneous. It was as if someone had used anti-teenager spray around the entrance, the way you might spray your sofas to keep pets off. Where disaffected youths used to congregate, now there is no one.

At first, members of the usual crowd tried to gather as normal, repeatedly going inside the store with their fingers in their ears and “begging me to turn it off,” Gough said. But he held firm and neatly avoided possible aggressive confrontations: “I told them it was to keep birds away because of the bird flu epidemic.”

At least he didn’t claim it was an anti-terrorism security measure.

Posted on December 6, 2005 at 7:46 AM54 Comments

Benevolent Worms

Yet another story about benevolent worms and how they can secure our networks. This idea shows up every few years. (I wrote about it in 2000, and again in 2003.) This quote (emphasis mine) from the article shows what the problem is:

Simulations show that the larger the network grows, the more efficient this scheme should be. For example, if a network has 50,000 nodes (computers), and just 0.4% of those are honeypots, just 5% of the network will be infected before the immune system halts the virus, assuming the fix works properly. But, a 200-million-node network, with the same proportion of honeypots, should see just 0.001% of machines get infected.
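The scaling effect in that quote is at least plausible: detection takes on the order of 1/(honeypot fraction) probes regardless of network size, so the absolute number of infected machines stays roughly constant while the fraction falls. Here is a toy model illustrating that (all parameters invented, and networks scaled down for speed; this is not the article's simulation):

```python
import random

def infected_fraction(n_nodes, honeypot_frac, contacts_per_step=3, seed=1):
    """Toy worm-vs-honeypot model (illustrative only).

    Each step, every infected node probes contacts_per_step random nodes.
    The first probe that lands on a honeypot triggers detection, which
    (optimistically) halts the worm at the end of that step.
    Assumes honeypot_frac > 0; otherwise the loop never terminates.
    """
    rng = random.Random(seed)
    honeypots = set(rng.sample(range(n_nodes), int(n_nodes * honeypot_frac)))
    # Seed the infection at the first non-honeypot node.
    infected = {next(i for i in range(n_nodes) if i not in honeypots)}
    detected = False
    while not detected:
        new = set()
        for _ in range(len(infected) * contacts_per_step):
            target = rng.randrange(n_nodes)
            if target in honeypots:
                detected = True
            else:
                new.add(target)
        infected |= new
    return len(infected) / n_nodes

small = infected_fraction(50_000, 0.004)
large = infected_fraction(2_000_000, 0.004)
print(f"small network: {small:.4%}, large network: {large:.6%}")
```

Of course, this models only the detection claim; it says nothing about whether the automated "fix" then works, which is exactly the part I dispute below.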

This is from my 2003 essay:

A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation—all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain, all make for bad software distribution.

All of this makes worms easy to get wrong and hard to recover from. Experimentation, most of it involuntary, proves that worms are very hard to debug successfully: in other words, once a worm starts spreading, it’s hard to predict exactly what it will do. Some viruses were written to propagate harmlessly, but did damage—ranging from crashed machines to clogged networks—because of bugs in their code. Many worms were written to do damage and turned out to be harmless (which is even more revealing).

Intentional experimentation by well-meaning system administrators proves that in your average office environment, the code that successfully patches one machine won’t work on another. Indeed, sometimes the results are worse than any threat of external attack. Combining a tricky problem with a distribution mechanism that’s impossible to debug and difficult to control is fraught with danger. Every system administrator who’s ever distributed software automatically on his network has had the “I just automatically, with the press of a button, destroyed the software on hundreds of machines at once!” experience. And that’s with systems you can debug and control; self-propagating systems don’t even let you shut them down when you find the problem. Patching systems is fundamentally a human problem, and beneficial worms are a technical solution that doesn’t work.

Posted on December 5, 2005 at 2:50 PM19 Comments

Armed Killer Dolphins

Whatever are we to make of this:

It may be the oddest tale to emerge from the aftermath of Hurricane Katrina. Armed dolphins, trained by the US military to shoot terrorists and pinpoint spies underwater, may be missing in the Gulf of Mexico.

To answer your first question: toxic dart guns.

EDITED TO ADD (12/5): Snopes, a reliable source in these matters, claims this to be a hoax.

Posted on December 5, 2005 at 7:33 AM25 Comments

The Onion on Security

“CIA Realizes It’s Been Using Black Highlighters All These Years”:

A report released Tuesday by the CIA’s Office of the Inspector General revealed that the CIA has mistakenly obscured hundreds of thousands of pages of critical intelligence information with black highlighters.

According to the report, sections of the documents—”almost invariably the most crucial passages”—are marred by an indelible black ink that renders the lines impossible to read, due to a top-secret highlighting policy that began at the agency’s inception in 1947.

“Terrorist Has No Idea What To Do With All This Plutonium”:

Yaquub Akhtar, the leader of an eight-man cell linked to a terrorist organization known as the Army Of Martyrs, admitted Tuesday that he “doesn’t have the slightest clue” what to do with the quarter-kilogram of plutonium he recently acquired.

And “RIAA Bans Telling Friends About Songs.”

Posted on December 3, 2005 at 9:26 AM28 Comments

GAO Report on Electronic Voting

The full report, dated September 2005, is 107 pages long. Here’s the “Results in Brief” section:

While electronic voting systems hold promise for a more accurate and efficient election process, numerous entities have raised concerns about their security and reliability, citing instances of weak security controls, system design flaws, inadequate system version control, inadequate security testing, incorrect system configuration, poor security management, and vague or incomplete voting system standards, among other issues. For example, studies found (1) some electronic voting systems did not encrypt cast ballots or system audit logs, and it was possible to alter both without being detected; (2) it was possible to alter the files that define how a ballot looks and works so that the votes for one candidate could be recorded for a different candidate; and (3) vendors installed uncertified versions of voting system software at the local level. It is important to note that many of the reported concerns were drawn from specific system makes and models or from a specific jurisdiction’s election, and that there is a lack of consensus among election officials and other experts on the pervasiveness of the concerns. Nevertheless, some of these concerns were reported to have caused local problems in federal elections—resulting in the loss or miscount of votes—and therefore merit attention.

Federal organizations and nongovernmental groups have issued recommended practices and guidance for improving the election process, including electronic voting systems, as well as general practices for the security and reliability of information systems. For example, in mid-2004, EAC issued a compendium of practices recommended by election experts, including state and local election officials. This compendium includes approaches for making voting processes more secure and reliable through, for example, risk analysis of the voting process, poll worker security training, and chain of custody controls for election day operations, along with practices that are specific to ensuring the security and reliability of different types of electronic voting systems. As another example, in July 2004, the California Institute of Technology and the Massachusetts Institute of Technology issued a report containing recommendations pertaining to testing equipment, retaining audit logs, and physically securing voting systems. In addition to such election-specific practices, numerous recommended practices are available that can be applied to any information system. For instance, we, NIST, and others have issued guidance that emphasizes the importance of incorporating security and reliability into the life cycle of information systems through practices related to security planning and management, risk management, and procurement. The recommended practices in these election-specific and information technology (IT) focused documents provide valuable guidance that, if implemented effectively, should help improve the security and reliability of voting systems.

Since the passage of HAVA in 2002, the federal government has begun a range of actions that are expected to improve the security and reliability of electronic voting systems. Specifically, after beginning operations in January 2004, EAC has led efforts to (1) draft changes to the existing federal voluntary standards for voting systems, including provisions related to security and reliability, (2) develop a process for certifying, decertifying, and recertifying voting systems, (3) establish a program to accredit the national independent testing laboratories that test electronic voting systems against the federal voluntary standards, and (4) develop a software library and clearinghouse for information on state and local elections and systems. However, these actions are unlikely to have a significant effect in the 2006 federal election cycle because the changes to the voluntary standards have not yet been completed, the system certification and laboratory accreditation programs are still in development, and the software library has not been updated or improved since the 2004 elections. Further, EAC has not defined tasks, processes, and time frames for completing these activities. As a result, it is unclear when the results will be available to assist state and local election officials. In addition to the federal government’s activities, other organizations have actions under way that are intended to improve the security and reliability of electronic voting systems. These actions include developing and obtaining international acceptance for voting system standards, developing voting system software in an open source environment (i.e., not proprietary to any particular company), and cataloging and analyzing reported problems with electronic voting systems.

To improve the security and reliability of electronic voting systems, we are recommending that EAC establish tasks, processes, and time frames for improving the federal voluntary voting system standards, testing capabilities, and management support available to state and local election officials.

EAC and NIST provided written comments on a draft of this report (see apps. V and VI). EAC commissioners agreed with our recommendations and stated that actions on each are either under way or intended. NIST’s director agreed with the report’s conclusions. In addition to their comments on our recommendations, EAC commissioners expressed three concerns with our use of reports produced by others to identify issues with the security and reliability of electronic voting systems. Specifically, EAC sought (1) additional clarification on our sources, (2) context on the extent to which voting system problems are systemic, and (3) substantiation of claims in the reports issued by others. To address these concerns, we provided additional clarification of sources where applicable. Further, we note throughout our report that many issues involved specific system makes and models or circumstances in the elections of specific jurisdictions. We also note that there is a lack of consensus on the pervasiveness of the problems, due in part to a lack of comprehensive information on what system makes and models are used in jurisdictions throughout the country. Additionally, while our work focused on identifying and grouping problems and vulnerabilities identified in issued reports and studies, where appropriate and feasible, we sought additional context, clarification, and corroboration from experts, including election officials, security experts, and key reports’ authors. EAC commissioners also expressed concern that we focus too much on the commission, and noted that it is one of many entities with a role in improving the security and reliability of voting systems. While we agree that EAC is one of many entities with responsibilities for improving the security and reliability of voting systems, we believe that our focus on EAC is appropriate, given its leadership role in defining voting system standards, in establishing programs both to accredit laboratories and to certify voting systems, and in acting as a clearinghouse for improvement efforts across the nation. EAC and NIST officials also provided detailed technical corrections, which we incorporated throughout the report as appropriate.

Posted on December 2, 2005 at 3:08 PM40 Comments

FBI to Approve All Software?

Sounds implausible, I know. But how else do you explain this FCC ruling (from September—I missed it until now):

The Federal Communications Commission thinks you have the right to use software on your computer only if the FBI approves.

No, really. In an obscure “policy” document released around 9 p.m. ET last Friday, the FCC announced this remarkable decision.

According to the three-page document, to preserve the openness that characterizes today’s Internet, “consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement.” Read the last seven words again.

The FCC didn’t offer much in the way of clarification. But the clearest reading of the pronouncement is that some unelected bureaucrats at the commission have decreed that Americans don’t have the right to use software such as Skype or PGPfone if it doesn’t support mandatory backdoors for wiretapping. (That interpretation was confirmed by an FCC spokesman on Monday, who asked not to be identified by name. Also, the announcement came at the same time as the FCC posted its wiretapping rules for Internet telephony.)

Posted on December 2, 2005 at 11:24 AM76 Comments

Limitations on Police Power Shouldn't Be a Partisan Issue

In response to my op ed last week, the Minneapolis Star Tribune published this letter:

THE PATRIOT ACT

Where are the abuses?

The Nov. 22 commentary “The erosion of freedom” is yet another example of how liberal hysteria is conspicuously light on details.

While the Patriot Act may allow for potential abuses of power, flaws undoubtedly to be fine-tuned over time, the “erosion of freedom” it may foster absolutely pales in comparison to the freedom it is designed to protect in the new age of global terrorism.

I have yet to read of one incident of infringement of any private citizen’s rights as a direct result of the Patriot Act—nor does this commentary point out any, either.

While I’m a firm believer in the Fourth Amendment, I also want our law enforcement to have the legal tools necessary, unfettered by restrictions to counter liberals’ paranoid fixation on “fascism,” in order to combat the threat that terrorism has on all our freedoms.

I have enough trust in our free democratic society and the coequal branches of government that we won’t evolve into a sinister “police state,” as ominously predicted by this commentary.

CHRIS GARDNER, MINNEAPOLIS

Two things strike me in this letter. The first is his “I have yet to read of one incident of infringement of any private citizen’s rights as a direct result of the Patriot Act….” line. It’s just odd. A simple Googling of “patriot act abuses” comes up with almost 3 million hits, many of them pretty extensive descriptions of Patriot Act abuses. Now, he could decide that none of them are abuses. He could choose not to believe any of them are true. He could choose to believe, as he seems to, that it’s all in some liberal fantasy. But to simply not even bother reading about them…isn’t he just admitting that he’s not qualified to have an opinion on the matter? (There’s also that “direct result” weaseling, which I’m not sure what to make of either. Are infringements that are an indirect result of the Patriot Act somehow better?)

I suppose that’s just being petty, though.

The more important thing that strikes me is how partisan he is. He writes about “liberal hysteria” and “liberals’ paranoid fixation on ‘fascism.’” In his last paragraph, he writes about his trust in government.

Most laws don’t matter when we all trust each other. Contracts are rarely if ever looked at if the parties trust each other. The whole point of laws and contracts is to protect us when the parties don’t trust each other. It’s not enough that this guy, and everyone else with this opinion, trusts the Bush government to judiciously balance his rights with the need to fight global terrorism. This guy has to believe that when the Democrats are in power that his rights are just as protected: that he is just as secure against police and government abuse.

Because that’s how you should think about laws, contracts, and government power. When reading through a contract, don’t think about how much you like the other person who’s signing it; imagine how the contract will protect you if you become enemies. When thinking about a law, imagine how it will protect you when your worst nightmare—Hillary Clinton as President, Janet Reno as Attorney General, Howard Dean as something-or-other, and a Democratic Senate and House—is in power.

Laws and contracts are not written for one political party, or for one side. They’re written for everybody. History teaches us this lesson again and again. In the United States, the Bill of Rights was opposed on the grounds that it wasn’t necessary; the Alien and Sedition Act of 1798 proved that it was, only nine years later.

It makes no sense to me that this is a partisan issue.

Posted on December 2, 2005 at 6:11 AM56 Comments

The Human Side of Security

A funny—and all too true—addition to the SANS Top 20:

H1. Humans

H1.1 Description:

The species Homo sapiens supports a wide range of intellectual capabilities such as speech, emotion, rational thinking etc. Many of these components are enabled by default – though to differing degrees of success. These components are implemented by the cerebral cortex, and are under the control of the identity engine which runs as me.exe. Vulnerabilities in these components are the most common avenues for exploitation.

Posted on December 1, 2005 at 1:01 PM21 Comments

Airplane Security

My seventh Wired.com column is on line. Nothing you haven’t heard before, except for this part:

I know quite a lot about this. I was a member of the government’s Secure Flight Working Group on Privacy and Security. We looked at the TSA’s program for matching airplane passengers with the terrorist watch list, and found a complete mess: poorly defined goals, incoherent design criteria, no clear system architecture, inadequate testing. (Our report was on the TSA website, but has recently been removed—”refreshed” is the word the organization used—and replaced with an “executive summary” (.doc) that contains none of the report’s findings. The TSA did retain two (.doc) rebuttals (.doc), which read like products of the same outline and dismiss our findings by saying that we didn’t have access to the requisite information.) Our conclusions match those in two (.pdf) reports (.pdf) by the Government Accountability Office and one (.pdf) by the DHS inspector general.

That’s right; the TSA is disappearing our report.

I also wrote an op ed for the Sydney Morning Herald on “weapons”—like the metal knives distributed with in-flight meals—aboard aircraft, based on this blog post. Again, nothing you haven’t heard before. (And I stole some bits from your comments to the blog posting.)

There is news, though. The TSA is relaxing the rules for bringing pointy things on aircraft:

The summary document says the elimination of the ban on metal scissors with a blade of four inches or less and tools of seven inches or less – including screwdrivers, wrenches and pliers – is intended to give airport screeners more time to do new types of random searches.

Passengers are now typically subject to a more intensive, so-called secondary search only if their names match a listing of suspected terrorists or because of anomalies like a last-minute ticket purchase or a one-way trip with no baggage.

The new strategy, which has been tested in Pittsburgh, Indianapolis and Orange County, Calif., will mean that a certain number of passengers, even if they are not identified by these computerized checks, will be pulled aside and subject to an added search lasting about two minutes. Officials said passengers would be selected randomly, without regard to ethnicity or nationality.

What happens next will vary. One day at a certain airport, carry-on bags might be physically searched. On the same day at a different airport, those subject to the random search might have their shoes screened for explosives or be checked with a hand-held metal detector. “By design, a traveler will not experience the same search every time he or she flies,” the summary said. “The searches will add an element of unpredictability to the screening process that will be easy for passengers to navigate but difficult for terrorists to manipulate.”

The new policy will also change the way pat-down searches are done to check for explosive devices. Screeners will now search the upper and lower torso, the entire arm and legs from the mid-thigh down to the ankle and the back and abdomen, significantly expanding the area checked.

Currently, only the upper torso is checked. Under the revised policy, screeners will still have the option of skipping pat-downs in certain areas “if it is clear there is no threat,” like when a person is wearing tight clothing making it obvious that there is nothing hidden. But the default position will be to do the more comprehensive search, in part because of fear that a passenger could be carrying plastic explosives that might not set off a handheld metal detector.

I don’t know if they will still make people take laptops out of their cases, make people take off their shoes, or confiscate pocket knives. (Different articles have said different things about the last one.)

This is a good change, and it’s long overdue. Airplane terrorism hasn’t been the movie-plot threat that everyone worries about for a while.

The most amazing reaction to this is from Corey Caldwell, spokeswoman for the Association of Flight Attendants:

When weapons are allowed back on board an aircraft, the pilots will be able to land the plane safety but the aisles will be running with blood.

How’s that for hyperbole?

In Beyond Fear and elsewhere, I’ve written about the notion of “agenda” and how it informs security trade-offs. From the perspective of the flight attendants, subjecting passengers to onerous screening requirements is a perfectly reasonable trade-off. They’re safer—albeit only slightly—because of it, and it doesn’t cost them anything. The cost is an externality to them: the passengers pay it. Passengers have a broader agenda: safety, but also cost, convenience, time, etc. So it makes perfect sense that the flight attendants object to a security change that the passengers are in favor of.

EDITED TO ADD (12/2): The SFWG report hasn’t been removed from the TSA website, just unlinked.

EDITED TO ADD (12/20): The report seems to be gone from the TSA website now, but it’s available here.

Posted on December 1, 2005 at 10:14 AM56 Comments

New Phishing Trick

Although I think I’ve seen the trick before:

Phishing schemes are all about deception, and recently some clever phishers have added a new layer of subterfuge called the secure phish. It uses the padlock icon indicating that your browser has established a secure connection to a Web site to lull you into a false sense of security. According to Internet security company SurfControl, phishers have begun to outfit their counterfeit sites with self-generated Secure Sockets Layer certificates. To distinguish an imposter from the genuine article, you should carefully scan the security certificate prompt for a reference to either “a self-issued certificate” or “an unknown certificate authority.”

Yeah, like anyone is going to do that.
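The check itself is mechanical, which makes it all the stranger to leave it to the user. As a sketch, a self-signed certificate can be spotted because its issuer and subject are the same entity. The dicts below mimic the shape returned by Python’s `ssl.SSLSocket.getpeercert()`; the hostnames are made up, and this catches only the “self-issued certificate” case, not a certificate from an unknown CA:

```python
def looks_self_signed(cert):
    """Heuristic: in a self-signed certificate, issuer == subject.

    `cert` is a dict in the shape returned by ssl.SSLSocket.getpeercert().
    """
    return cert.get("issuer") == cert.get("subject")

# Made-up example certificates in getpeercert()'s nested-tuple format.
phisher_cert = {
    "subject": ((("commonName", "www.paypa1-login.example"),),),
    "issuer": ((("commonName", "www.paypa1-login.example"),),),
}
real_cert = {
    "subject": ((("commonName", "www.example.com"),),),
    "issuer": ((("commonName", "Example Trusted CA"),),),
}

print(looks_self_signed(phisher_cert))  # True
print(looks_self_signed(real_cert))     # False
```

The browser already runs logic like this; the failure is that the result is surfaced as a warning dialog that users click through.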

Posted on December 1, 2005 at 7:43 AM55 Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.