Entries Tagged "essays"


Risks of Losing Portable Devices

Last July I blogged about the risks of storing ever-larger amounts of data in ever-smaller devices.

Last week I wrote my tenth Wired.com column on the topic:

The point is that it’s now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I’d never know it.

This problem isn’t going away anytime soon.

There are two solutions that make sense. The first is to protect the data. Hard-disk encryption programs like PGP Disk allow you to encrypt individual files, folders or entire disk partitions. Several manufacturers market USB thumb drives with built-in encryption. Some PDA manufacturers are starting to add password protection — not as good as encryption, but at least it’s something — to their devices, and there are some aftermarket PDA encryption programs.

The second solution is to remotely delete the data if the device is lost. This is still a new idea, but I believe it will gain traction in the corporate market. If you give an employee a BlackBerry for business use, you want to be able to wipe the device’s memory if he loses it. And since the device is online all the time, it’s a pretty easy feature to add.

But until these two solutions become ubiquitous, the best option is to pay attention and erase data. Delete old e-mails from your BlackBerry, SMSs from your cell phone and old data from your address books — regularly. Find that call log and purge it once in a while. Don’t store everything on your laptop, only the files you might actually need.

EDITED TO ADD (2/2): A Dutch army officer lost a memory stick with details of an Afghan mission.

Posted on February 1, 2006 at 10:32 AM

Anonymity and Accountability

Last week I blogged Kevin Kelly’s rant against anonymity. Today I wrote about it for Wired.com:

And that’s precisely where Kelly makes his mistake. The problem isn’t anonymity; it’s accountability. If someone isn’t accountable, then knowing his name doesn’t help. If you have someone who is completely anonymous, yet just as completely accountable, then — heck, just call him Fred.

History is filled with bandits and pirates who amass reputations without anyone knowing their real names.

eBay’s feedback system doesn’t work because there’s a traceable identity behind that anonymous nickname. eBay’s feedback system works because each anonymous nickname comes with a record of previous transactions attached, and if someone cheats someone else then everybody knows it.

Similarly, Wikipedia’s veracity problems are not a result of anonymous authors adding fabrications to entries. They’re an inherent property of an information system with distributed accountability. People think of Wikipedia as an encyclopedia, but it’s not. We all trust Britannica entries to be correct because we know the reputation of that company, and by extension its editors and writers. On the other hand, we all should know that Wikipedia will contain a small amount of false information because no particular person is accountable for accuracy — and that would be true even if you could mouse over each sentence and see the name of the person who wrote it.

Please read the whole thing before you comment.

Posted on January 12, 2006 at 4:36 AM

The Security Threat of Unchecked Presidential Power

This past Thursday, the New York Times exposed the most significant violation of federal surveillance law in the post-Watergate era. President Bush secretly authorized the National Security Agency to engage in domestic spying, wiretapping thousands of Americans and bypassing the legal procedures regulating this activity.

This isn’t about the spying, although that’s a major issue in itself. This is about the Fourth Amendment protections against illegal search. This is about circumventing a teeny tiny check by the judicial branch, placed there by the legislative branch, placed there 27 years ago — on the last occasion that the executive branch abused its power so broadly.

In defending this secret spying on Americans, Bush said that he relied on his constitutional powers (Article 2) and the joint resolution passed by Congress after 9/11 that led to the war in Iraq. This rationale was spelled out in a memo written by John Yoo, a White House attorney, less than two weeks after the attacks of 9/11. It’s a dense read and a terrifying piece of legal contortionism, but it basically says that the president has unlimited powers to fight terrorism. He can spy on anyone, arrest anyone, and kidnap anyone and ship him to another country … merely on the suspicion that he might be a terrorist. And according to the memo, this power lasts until there is no more terrorism in the world.

Yoo starts by arguing that the Constitution gives the president total power during wartime. He also notes that Congress has recently been quiescent when the president takes some military action on his own, citing President Clinton’s 1998 strike against Sudan and Afghanistan.

Yoo then says: “The terrorist incidents of September 11, 2001, were surely far graver a threat to the national security of the United States than the 1998 attacks. … The President’s power to respond militarily to the later attacks must be correspondingly broader.”

This is novel reasoning. It’s as if the police would have greater powers when investigating a murder than a burglary.

More to the point, the congressional resolution of Sept. 14, 2001, specifically refused the White House’s initial attempt to seek authority to preempt any future acts of terrorism, and narrowly gave Bush permission to go after those responsible for the attacks on the Pentagon and World Trade Center.

Yoo’s memo ignored this. Written 11 days after Congress refused to grant the president wide-ranging powers, it admitted that “the Joint Resolution is somewhat narrower than the President’s constitutional authority,” but argued “the President’s broad constitutional power to use military force … would allow the President to … [take] whatever actions he deems appropriate … to pre-empt or respond to terrorist threats from new quarters.”

Even if Congress specifically says no.

The result is that the president’s wartime powers, with their armies, battles, victories, and congressional declarations, now extend to the rhetorical “War on Terror”: a war with no fronts, no boundaries, no opposing army, and — most ominously — no knowable “victory.” Investigations, arrests, and trials are not tools of war. But according to the Yoo memo, the president can define war however he chooses, and remain “at war” for as long as he chooses.

This is indefinite dictatorial power. And I don’t use that term lightly; the very definition of a dictatorship is a system that puts a ruler above the law. In the weeks after 9/11, while America and the world were grieving, Bush built a legal rationale for a dictatorship. Then he immediately started using it to avoid the law.

This is, fundamentally, why this issue crossed political lines in Congress. If the president can ignore laws regulating surveillance and wiretapping, why is Congress bothering to debate reauthorizing certain provisions of the Patriot Act? Any debate over laws is predicated on the belief that the executive branch will follow the law.

This is not a partisan issue between Democrats and Republicans; it’s a president unilaterally overriding the Fourth Amendment, Congress and the Supreme Court. Unchecked presidential power has nothing to do with how much you either love or hate George W. Bush. You have to imagine this power in the hands of the person you most don’t want to see as president, whether it be Dick Cheney or Hillary Rodham Clinton, Michael Moore or Ann Coulter.

Laws are what give us security against the actions of the majority and the powerful. If we discard our constitutional protections against tyranny in an attempt to protect us from terrorism, we’re all less safe as a result.

This essay was published today as an op-ed in the Minneapolis Star Tribune.

Here’s the opening paragraph of the Yoo memo. Remember, think of this power in the hands of your least favorite politician when you read it:

You have asked for our opinion as to the scope of the President’s authority to take military action in response to the terrorist attacks on the United States on September 11, 2001. We conclude that the President has broad constitutional power to use military force. Congress has acknowledged this inherent executive power in both the War Powers Resolution, Pub. L. No. 93-148, 87 Stat. 555 (1973), codified at 50 U.S.C. § 1541-1548 (the “WPR”), and in the Joint Resolution passed by Congress on September 14, 2001, Pub. L. No. 107-40, 115 Stat. 224 (2001). Further, the President has the constitutional power not only to retaliate against any person, organization, or State suspected of involvement in terrorist attacks on the United States, but also against foreign States suspected of harboring or supporting such organizations. Finally, the President may deploy military force preemptively against terrorist organizations or the States that harbor or support them, whether or not they can be linked to the specific terrorist incidents of September 11.

There’s similar reasoning in the Bybee memo, which was written in 2002 about torture:

In a series of opinions examining various legal questions arising after September 11, we have examined the scope of the President’s Commander-in-Chief power. . . . Foremost among the objectives committed by the Constitution to [the President’s] trust. As Hamilton explained in arguing for the Constitution’s adoption, “because the circumstances which may affect the public safety are not reducible within certain limits, it must be admitted, as a necessary consequence, that there can be no limitation of that authority, which is to provide for the defense and safety of the community, in any manner essential to its efficacy.”

. . . [The Constitution’s] sweeping grant vests in the President an unenumerated Executive power . . . The Commander in Chief power and the President’s obligation to protect the Nation imply the ancillary powers necessary to their successful exercise.

NSA watcher James Bamford points out how this action was definitely considered illegal in 1978, which is why FISA was passed in the first place:

When the Foreign Intelligence Surveillance Act was created in 1978, one of the things that the Attorney General at the time, Griffin Bell, said — he testified before the intelligence committee, and he said that the current bill recognizes no inherent power of the President to conduct electronic surveillance. He said, “This bill specifically states that the procedures in the bill are the exclusive means by which electronic surveillance may be conducted.” In other words, what the President is saying is that he has these inherent powers to conduct electronic surveillance, but the whole reason for creating this act, according to the Attorney General at the time, was to prevent the President from using any inherent powers and to use exclusively this act.

Also this from Salon, discussing a 1952 precedent:

Attorney General Alberto Gonzales argues that the president’s authority rests on two foundations: Congress’s authorization to use military force against al-Qaida, and the Constitution’s vesting of power in the president as commander-in-chief, which necessarily includes gathering “signals intelligence” on the enemy. But that argument cannot be squared with Supreme Court precedent. In 1952, the Supreme Court considered a remarkably similar argument during the Korean War. Youngstown Sheet & Tube Co. v. Sawyer, widely considered the most important separation-of-powers case ever decided by the court, flatly rejected the president’s assertion of unilateral domestic authority during wartime. President Truman had invoked the commander-in-chief clause to justify seizing most of the nation’s steel mills. A nationwide strike threatened to undermine the war, Truman contended, because the mills were critical to manufacturing munitions.

The Supreme Court’s rationale for rejecting Truman’s claims applies with full force to Bush’s policy. In what proved to be the most influential opinion in the case, Justice Robert Jackson identified three possible scenarios in which a president’s actions may be challenged. Where the president acts with explicit or implicit authorization from Congress, his authority “is at its maximum,” and will generally be upheld. Where Congress has been silent, the president acts in a “zone of twilight” in which legality “is likely to depend on the imperatives of events and contemporary imponderables rather than on abstract theories of law.” But where the president acts in defiance of “the expressed or implied will of Congress,” Justice Jackson maintained, his power is “at its lowest ebb,” and his actions can be sustained only if Congress has no authority to regulate the subject at all.

In the steel seizure case, Congress had considered and rejected giving the president the authority to seize businesses in the face of threatened strikes, thereby placing President Truman’s action in the third of Justice Jackson’s categories. As to the war power, Justice Jackson noted, “The Constitution did not contemplate that the Commander in Chief of the Army and Navy will constitute him also Commander in Chief of the country, its industries, and its inhabitants.”

Like Truman, President Bush acted in the face of contrary congressional authority. In FISA, Congress expressly addressed the subject of warrantless wiretaps during wartime, and limited them to the first 15 days after war is declared. Congress then went further and made it a crime, punishable by up to five years in jail, to conduct a wiretap without statutory authorization.

The Attorney General said that the Administration didn’t try to do this legally, because they didn’t think they could get the law passed. But don’t worry, an NSA shift supervisor is acting in the role of a FISC judge:

GENERAL HAYDEN: FISA involves the process — FISA involves marshaling arguments; FISA involves looping paperwork around, even in the case of emergency authorizations from the Attorney General. And beyond that, it’s a little — it’s difficult for me to get into further discussions as to why this is more optimized under this process without, frankly, revealing too much about what it is we do and why and how we do it.

Q If FISA didn’t work, why didn’t you seek a new statute that allowed something like this legally?

ATTORNEY GENERAL GONZALES: That question was asked earlier. We’ve had discussions with members of Congress, certain members of Congress, about whether or not we could get an amendment to FISA, and we were advised that that was not likely to be — that was not something we could likely get, certainly not without jeopardizing the existence of the program, and therefore, killing the program. And that — and so a decision was made that because we felt that the authorities were there, that we should continue moving forward with this program.

Q And who determined that these targets were al Qaeda? Did you wiretap them?

GENERAL HAYDEN: The judgment is made by the operational work force at the National Security Agency using the information available to them at the time, and the standard that they apply — and it’s a two-person standard that must be signed off by a shift supervisor, and carefully recorded as to what created the operational imperative to cover any target, but particularly with regard to those inside the United States.

Q So a shift supervisor is now making decisions that a FISA judge would normally make? I just want to make sure I understand. Is that what you’re saying?

Senators from both parties are demanding hearings:

Democratic and Republican calls mounted on Tuesday for U.S. congressional hearings into President George W. Bush’s assertion that he can order warrantless spying on Americans with suspected terrorist ties.

Vice President Dick Cheney predicted a backlash against critics of the administration’s anti-terrorism policies. He also dismissed charges that Bush overstepped his constitutional bounds when he implemented the recently disclosed eavesdropping shortly after the September 11 attacks.

Republican Sens. Chuck Hagel of Nebraska and Olympia Snowe of Maine joined Democratic Sens. Carl Levin of Michigan, Dianne Feinstein of California and Ron Wyden of Oregon in calling for a joint investigation by the Senate Intelligence and Judiciary Committees into whether the government eavesdropped “without appropriate legal authority.”

Senate Minority Leader Harry Reid, a Nevada Democrat, said he would prefer separate hearings by the Judiciary Committee, which has already promised one, and Intelligence Committee.

This New York Times paragraph is further evidence that we’re talking about an Echelon-like surveillance program here:

Administration officials, speaking anonymously because of the sensitivity of the information, suggested that the speed with which the operation identified “hot numbers” – the telephone numbers of suspects – and then hooked into their conversations lay behind the need to operate outside the old law.

And some more snippets.

There are about a zillion more URLs I could list here. I posted these already, but both Orin Kerr and Daniel Solove have good discussions of the legal issues. And here are three legal posts by Marty Lederman. A summary of the Republican arguments. Four good blog posts. Spooks comment on the issue.

And this George W. Bush quote (video and transcript), from December 18, 2000, is just too surreal not to reprint: “If this were a dictatorship, it’d be a heck of a lot easier, just so long as I’m the dictator.”

I guess 9/11 made it a heck of a lot easier.

Look, I don’t think 100% of the blame belongs to President Bush. (This kind of thing was also debated under Clinton.) Congress, Democrats included, has allowed the Executive to gather power at the expense of the other two branches. This is the fundamental security issue here, and it’ll be an issue regardless of who wins the White House in 2008.

EDITED TO ADD (12/21): FISC Judge James Robertson resigned yesterday:

Two associates familiar with his decision said yesterday that Robertson privately expressed deep concern that the warrantless surveillance program authorized by the president in 2001 was legally questionable and may have tainted the FISA court’s work.

….Robertson indicated privately to colleagues in recent conversations that he was concerned that information gained from warrantless NSA surveillance could have then been used to obtain FISA warrants. FISA court Presiding Judge Colleen Kollar-Kotelly, who had been briefed on the spying program by the administration, raised the same concern in 2004 and insisted that the Justice Department certify in writing that it was not occurring.

“They just don’t know if the product of wiretaps were used for FISA warrants — to kind of cleanse the information,” said one source, who spoke on the condition of anonymity because of the classified nature of the FISA warrants. “What I’ve heard some of the judges say is they feel they’ve participated in a Potemkin court.”

More generally, here’s some of the relevant statutes and decisions:

“Foreign Intelligence Surveillance Act (FISA)” (1978).

“Authorization for Use of Military Force” (2001), the law authorizing Bush to use military force against the 9/11 terrorists.

“United States v. United States District Court,” 407 U.S. 297 (1972), a national security surveillance case that turned on the Fourth Amendment.

“Hamdi v. Rumsfeld,” 124 S. Ct. 981 (2004), the recent Supreme Court case examining the president’s powers during wartime.

[The Government’s position] cannot be mandated by any reasonable view of the separation of powers, as this view only serves to condense power into a single branch of government. We have long since made clear that a state of war is not a blank check for the President when it comes to the rights of the Nation’s citizens. Youngstown Sheet & Tube, 343 U.S. at 587. Whatever power the United States Constitution envisions for the Executive in times of conflict with other Nations or enemy organizations, it most assuredly envisions a role for all three branches when individual liberties are at stake.

And here are a bunch of blog posts:

Daniel Solove: “Hypothetical: What If President Bush Were Correct About His Surveillance Powers?”

Seth Weinberger: “Declaring War and Executive Power.”

Juliette Kayyem: “Wiretaps, AUMF and Bush’s Comments Today.”

Mark Schmitt: “Alito and the Wiretaps.”

Eric Muller: “Lawless Like I Said.”

Cass Sunstein: “Presidential Wiretap.”

Spencer Overton: “Judge Damon J. Keith: No Warrantless Wiretaps of Citizens.”

Will Baude: “Presidential Authority, A Lament.”

And news articles:

Washington Post: “Clash Is Latest Chapter in Bush Effort to Widen Executive Power.”

The clash over the secret domestic spying program is one slice of a broader struggle over the power of the presidency that has animated the Bush administration. George W. Bush and Dick Cheney came to office convinced that the authority of the presidency had eroded and have spent the past five years trying to reclaim it.

From shielding energy policy deliberations to setting up military tribunals without court involvement, Bush, with Cheney’s encouragement, has taken what scholars call a more expansive view of his role than any commander in chief in decades. With few exceptions, Congress and the courts have largely stayed out of the way, deferential to the argument that a president needs free rein, especially in wartime.

New York Times: “Spying Program Snared U.S. Calls.”

A surveillance program approved by President Bush to conduct eavesdropping without warrants has captured what are purely domestic communications in some cases, despite a requirement by the White House that one end of the intercepted conversations take place on foreign soil, officials say.

Posted on December 21, 2005 at 6:50 AM

NSA and Bush's Illegal Eavesdropping

When President Bush directed the National Security Agency to secretly eavesdrop on American citizens, he transferred an authority previously under the purview of the Justice Department to the Defense Department and bypassed the very laws put in place to protect Americans against widespread government eavesdropping. The reason may have been to tap the NSA’s capability for data-mining and widespread surveillance.

Illegal wiretapping of Americans is nothing new. In the 1950s and ’60s, in a program called “Project Shamrock,” the NSA intercepted every single telegram coming into or going out of the United States. It conducted eavesdropping without a warrant on behalf of the CIA and other agencies. Much of this became public during the 1975 Church Committee hearings and resulted in the now famous Foreign Intelligence Surveillance Act (FISA) of 1978.

The purpose of this law was to protect the American people by regulating government eavesdropping. Like many laws limiting the power of government, it relies on checks and balances: one branch of the government watching the other. The law established a secret court, the Foreign Intelligence Surveillance Court (FISC), and empowered it to approve national-security-related eavesdropping warrants. The Justice Department can request FISA warrants to monitor foreign communications as well as communications by American citizens, provided that they meet certain minimal criteria.

The FISC issued about 500 FISA warrants per year from 1979 through 1995, and has slowly increased subsequently — 1,758 were issued in 2004. The process is designed for speed and even has provisions where the Justice Department can wiretap first and ask for permission later. In all that time, only four warrant requests were ever rejected: all in 2003. (We don’t know any details, of course, as the court proceedings are secret.)

FISA warrants are carried out by the FBI, but in the days immediately after the terrorist attacks, there was a widespread perception in Washington that the FBI wasn’t up to dealing with these new threats — they couldn’t uncover plots in a timely manner. So instead the Bush administration turned to the NSA. They had the tools, the expertise, the experience, and so they were given the mission.

The NSA’s ability to eavesdrop on communications is exemplified by a technological capability called Echelon. Echelon is the world’s largest information “vacuum cleaner,” sucking up a staggering amount of voice, fax, and data communications — satellite, microwave, fiber-optic, cellular and everything else — from all over the world: an estimated 3 billion communications per day. These communications are then processed through sophisticated data-mining technologies, which look for simple phrases like “assassinate the president” as well as more complicated communications patterns.
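The “simple phrases” stage of such filtering can be illustrated with a deliberately naive sketch. Everything here is hypothetical except the example phrase quoted above; real data-mining systems also do traffic analysis, cross-message pattern matching, and far more:

```python
# Deliberately naive illustration of keyword filtering over intercepted text.
# The watch phrase is the example from the text; the function and its name
# are purely illustrative.

WATCH_PHRASES = ["assassinate the president"]

def flag_messages(messages):
    """Return the messages containing any watched phrase (case-insensitive)."""
    return [m for m in messages
            if any(phrase in m.lower() for phrase in WATCH_PHRASES)]
```

The hard part, of course, isn’t this matching step; it’s doing it across billions of communications a day and then finding the subtler patterns.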

Supposedly Echelon only covers communications outside of the United States. Although there is no evidence that the Bush administration has employed Echelon to monitor communications to and from the U.S., this surveillance capability is probably exactly what the president wanted and may explain why the administration sought to bypass the FISA process of acquiring a warrant for searches.

Perhaps the NSA just didn’t have any experience submitting FISA warrants, so Bush unilaterally waived that requirement. And perhaps Bush thought FISA was a hindrance — in 2002 there was a widespread but false belief that the FISC got in the way of the investigation of Zacarias Moussaoui (the presumed “20th hijacker”) — and bypassed the court for that reason.

Most likely, Bush wanted a whole new surveillance paradigm. You can think of the FBI’s capabilities as “retail surveillance”: It eavesdrops on a particular person or phone. The NSA, on the other hand, conducts “wholesale surveillance.” It, or more exactly its computers, listens to everything. An example might be to feed the computers every voice, fax, and e-mail communication looking for the name “Ayman al-Zawahiri.” This type of surveillance is more along the lines of Project Shamrock, and not legal under FISA. As Sen. Jay Rockefeller wrote in a secret memo after being briefed on the program, it raises “profound oversight issues.”

It is also unclear whether Echelon-style eavesdropping would prevent terrorist attacks. In the months before 9/11, Echelon noticed considerable “chatter”: bits of conversation suggesting some sort of imminent attack. But because much of the planning for 9/11 occurred face-to-face, analysts were unable to learn details.

The fundamental issue here is security, but it’s not the security most people think of. James Madison famously said: “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.” Terrorism is a serious risk to our nation, but an even greater threat is the centralization of American political power in the hands of any single branch of the government.

Over 200 years ago, the framers of the U.S. Constitution established an ingenious security device against tyrannical government: they divided government power among three different bodies. A carefully thought-out system of checks and balances among the executive, legislative, and judicial branches ensured that no single branch became too powerful.

After watching tyrannies rise and fall throughout Europe, this seemed like a prudent way to form a government. Courts monitor the actions of police. Congress passes laws that even the president must follow. Since 9/11, the United States has seen an enormous power grab by the executive branch. It’s time we brought back the security system that’s protected us from government for over 200 years.

A version of this essay originally appeared in Salon.

I wrote another essay about the legal and constitutional implications of this. The Minneapolis Star Tribune will publish it either Wednesday or Thursday, and I will post it here at that time.

I didn’t talk about the political dynamics in either essay, but they’re fascinating. The White House kept this secret, but they briefed at least six people outside the administration. The current and former chief justices of the FISC knew about this. Last Sunday’s Washington Post reported that both of them had misgivings about the program, but neither did anything about it. The White House also briefed the Committee Chairs and Ranking Members of the House and Senate Intelligence Committees, and they didn’t do anything about it. (Although Sen. Rockefeller wrote a bizarre I’m-not-going-down-with-you memo to Cheney and for his files.)

Cheney was on television this weekend citing this minimal disclosure as evidence that Congress acquiesced to the program. I see it as evidence of something else: if people from both the Legislative and the Judiciary branches knowingly permitted unlawful surveillance by the Executive branch, then the current system of checks and balances isn’t working.

It’s also evidence of how secretive this administration is. None of the other FISC judges, and none of the other House or Senate Intelligence Committee members, were told about this, even under clearance. And if there’s one thing these people hate, it’s being kept in the dark on a matter within their jurisdiction. That’s why Senator Feinstein, a member of the Senate Intelligence Committee, was so upset yesterday. And it’s pushing Senator Specter, and some of the Republicans in these Judiciary committees, further into the civil liberties camp.

There are about a zillion links worth reading, but here are some of them you might not yet have seen. Some good newspaper commentaries. An excellent legal analysis. Three blog posts. Four more blog posts. Daniel Solove on FISA. Two legal analyses. An interesting “Democracy Now” commentary, including interesting comments on the NSA’s capabilities by James Bamford. And finally, my 2004 essay on the security of checks and balances.

“Necessity is the plea for every infringement of human freedom. It is the argument of tyrants; it is the creed of slaves.” — William Pitt, House of Commons, 11/18/1783.

Posted on December 20, 2005 at 12:45 PM

Totally Secure Classical Communications?

My eighth Wired column:

How would you feel if you invested millions of dollars in quantum cryptography, and then learned that you could do the same thing with a few 25-cent Radio Shack components?

I’m exaggerating a little here, but if a new idea out of Texas A&M University turns out to be secure, we’ve come close.

Earlier this month, Laszlo Kish proposed securing a communications link, like a phone or computer line, with a pair of resistors. By adding electronic noise, or using the natural thermal noise of the resistors — called “Johnson noise” — Kish can prevent eavesdroppers from listening in.

In the blue-sky field of quantum cryptography, the strange physics of the subatomic world are harnessed to create a secure, unbreakable communications channel between two points. Kish’s research is intriguing, in part, because it uses the simpler properties of classical physics — the stuff you learned in high school — to achieve the same results.

At least, that’s the theory.

I go on to describe how the system works, and then discuss the security:

There hasn’t been enough analysis. I certainly don’t know enough electrical engineering to know whether there is any clever way to eavesdrop on Kish’s scheme. And I’m sure Kish doesn’t know enough security to know that, either. The physics and stochastic mathematics look good, but all sorts of security problems crop up when you try to actually build and operate something like this.

It’s definitely an idea worth exploring, and it’ll take people with expertise in both security and electrical engineering to fully vet the system.

There are practical problems with the system, though. The bandwidth the system can handle appears very limited. The paper gives the bandwidth-distance product as 2 x 10^6 meter-Hz. This means that over a 1-kilometer link, you can only send at 2,000 bps. A dialup modem from 1985 is faster. Even with a fat 500-pair cable you’re still limited to 1 million bps over 1 kilometer.
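That arithmetic is easy to check (my sketch; the constant is the paper’s quoted figure, the function name is mine):

```python
# The bandwidth-distance product is fixed, so the usable bit rate
# falls off linearly with link length.

BD_PRODUCT = 2e6  # meter-Hz, the figure quoted from the paper


def max_rate_bps(distance_m, pairs=1):
    """Rough upper bound on signaling rate over a link of this length."""
    return pairs * BD_PRODUCT / distance_m


print(max_rate_bps(1_000))       # 2000.0 -- 2,000 bps over 1 km
print(max_rate_bps(1_000, 500))  # 1000000.0 -- 1 Mbps with a 500-pair cable
```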

And multi-wire cables have their own problems; there are all sorts of cable-capacitance and cross-talk issues with that sort of link. Phone companies really hate those high-density cables, because of how long it takes to terminate or splice them.

Even more basic: It’s vulnerable to man-in-the-middle attacks. Someone who can intercept and modify messages in transit can break the security. This means you need an authenticated channel to make it work — a link that guarantees you’re talking to the person you think you’re talking to. How often in the real world do we have a wire that is authenticated but not confidential? Not very often.

Generally, if you can eavesdrop you can also mount active attacks. But this scheme only defends against passive eavesdropping.

For those keeping score, that’s four practical problems: It’s only link encryption and not end-to-end, it’s bandwidth-limited (but may be enough for key exchange), it works best for short ranges and it requires authentication to make it work. I can envision some specialized circumstances where this might be useful, but they’re few and far between.

But quantum key distributions have the same problems. Basically, if Kish’s scheme is secure, it’s superior to quantum communications in every respect: price, maintenance, speed, vibration, thermal resistance and so on.

Both this and the quantum solution share another problem, however; they’re solutions looking for a problem. In the realm of security, encryption is the one thing we already do pretty well. Focusing on encryption is like sticking a tall stake in the ground and hoping the enemy runs right into it, instead of building a wide wall.

Arguing about whether this kind of thing is more secure than AES — the United States’ national encryption standard — is like arguing about whether the stake should be a mile tall or a mile and a half tall. However tall it is, the enemy is going to go around the stake.

Software security, network security, operating system security, user interface — these are the hard security problems. Replacing AES with this kind of thing won’t make anything more secure, because all the other parts of the security system are so much worse.

This is not to belittle the research. I think information-theoretic security is important, regardless of practicality. And I’m thrilled that an easy-to-build classical system can work as well as a sexy, media-hyped quantum cryptosystem. But don’t throw away your crypto software yet.

Here’s the press release, here’s the paper, and here’s the Slashdot thread.

EDITED TO ADD (1/31): Here’s an interesting rebuttal.

Posted on December 15, 2005 at 6:13 AMView Comments

Benevolent Worms

Yet another story about benevolent worms and how they can secure our networks. This idea shows up every few years. (I wrote about it in 2000, and again in 2003.) This quote (emphasis mine) from the article shows what the problem is:

Simulations show that the larger the network grows, the more efficient this scheme should be. For example, if a network has 50,000 nodes (computers), and just 0.4% of those are honeypots, just 5% of the network will be infected before the immune system halts the virus, assuming the fix works properly. But, a 200-million-node network — with the same proportion of honeypots — should see just 0.001% of machines get infected.
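You can get an intuition for that scaling with a toy model (mine, not the researchers’ simulation): an exponentially growing worm probes nodes at random, and the outbreak halts the moment any probe lands on a honeypot. The expected number of probes before a honeypot is hit depends only on the honeypot fraction, so the infected fraction shrinks as the network grows:

```python
import random


def fraction_infected(n_nodes, honeypot_frac, seed=1):
    """Fraction of the network infected before the first honeypot probe.

    Toy model: each infected node probes one random node per round;
    absent a honeypot hit, the infected population doubles each round.
    """
    rng = random.Random(seed)
    n_honeypots = int(n_nodes * honeypot_frac)
    infected = 1
    while True:
        for _ in range(infected):
            if rng.randrange(n_nodes) < n_honeypots:  # probe hit a honeypot
                return infected / n_nodes
        infected = min(2 * infected, n_nodes)


small = fraction_infected(50_000, 0.004)
large = fraction_infected(200_000_000, 0.004)
print(small, large)  # the larger network loses a far smaller fraction
```

The absolute number of machines infected at halt is roughly the same in both cases; only the denominator changes, which is the whole trick behind the quoted numbers.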

This is from my 2003 essay:

A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation — all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain, all make for bad software distribution.

All of this makes worms easy to get wrong and hard to recover from. Experimentation, most of it involuntary, proves that worms are very hard to debug successfully: in other words, once a worm starts spreading it’s hard to predict exactly what it will do. Some viruses were written to propagate harmlessly, but did damage — ranging from crashed machines to clogged networks — because of bugs in their code. Many worms were written to do damage and turned out to be harmless (which is even more revealing).

Intentional experimentation by well-meaning system administrators proves that in your average office environment, the code that successfully patches one machine won’t work on another. Indeed, sometimes the results are worse than any threat of external attack. Combining a tricky problem with a distribution mechanism that’s impossible to debug and difficult to control is fraught with danger. Every system administrator who’s ever distributed software automatically on his network has had the “I just automatically, with the press of a button, destroyed the software on hundreds of machines at once!” experience. And that’s with systems you can debug and control; self-propagating systems don’t even let you shut them down when you find the problem. Patching systems is fundamentally a human problem, and beneficial worms are a technical solution that doesn’t work.

Posted on December 5, 2005 at 2:50 PMView Comments

Airplane Security

My seventh Wired.com column is online. Nothing you haven’t heard before, except for this part:

I know quite a lot about this. I was a member of the government’s Secure Flight Working Group on Privacy and Security. We looked at the TSA’s program for matching airplane passengers with the terrorist watch list, and found a complete mess: poorly defined goals, incoherent design criteria, no clear system architecture, inadequate testing. (Our report was on the TSA website, but has recently been removed — “refreshed” is the word the organization used — and replaced with an “executive summary” (.doc) that contains none of the report’s findings. The TSA did retain two (.doc) rebuttals (.doc), which read like products of the same outline and dismiss our findings by saying that we didn’t have access to the requisite information.) Our conclusions match those in two (.pdf) reports (.pdf) by the Government Accountability Office and one (.pdf) by the DHS inspector general.

That’s right; the TSA is disappearing our report.

I also wrote an op ed for the Sydney Morning Herald on “weapons” — like the metal knives distributed with in-flight meals — aboard aircraft, based on this blog post. Again, nothing you haven’t heard before. (And I stole some bits from your comments to the blog posting.)

There is new news, though. The TSA is relaxing the rules for bringing pointy things on aircraft:

The summary document says the elimination of the ban on metal scissors with a blade of four inches or less and tools of seven inches or less – including screwdrivers, wrenches and pliers – is intended to give airport screeners more time to do new types of random searches.

Passengers are now typically subject to a more intensive, so-called secondary search only if their names match a listing of suspected terrorists or because of anomalies like a last-minute ticket purchase or a one-way trip with no baggage.

The new strategy, which has been tested in Pittsburgh, Indianapolis and Orange County, Calif., will mean that a certain number of passengers, even if they are not identified by these computerized checks, will be pulled aside and subject to an added search lasting about two minutes. Officials said passengers would be selected randomly, without regard to ethnicity or nationality.

What happens next will vary. One day at a certain airport, carry-on bags might be physically searched. On the same day at a different airport, those subject to the random search might have their shoes screened for explosives or be checked with a hand-held metal detector. “By design, a traveler will not experience the same search every time he or she flies,” the summary said. “The searches will add an element of unpredictability to the screening process that will be easy for passengers to navigate but difficult for terrorists to manipulate.”

The new policy will also change the way pat-down searches are done to check for explosive devices. Screeners will now search the upper and lower torso, the entire arm and legs from the mid-thigh down to the ankle and the back and abdomen, significantly expanding the area checked.

Currently, only the upper torso is checked. Under the revised policy, screeners will still have the option of skipping pat-downs in certain areas “if it is clear there is no threat,” like when a person is wearing tight clothing making it obvious that there is nothing hidden. But the default position will be to do the more comprehensive search, in part because of fear that a passenger could be carrying plastic explosives that might not set off a handheld metal detector.

I don’t know if they will still make people take laptops out of their cases, make people take off their shoes, or confiscate pocket knives. (Different articles have said different things about the last one.)

This is a good change, and it’s long overdue. Airplane terrorism hasn’t been the movie-plot threat that everyone worries about for a while.

The most amazing reaction to this is from Corey Caldwell, spokeswoman for the Association of Flight Attendants:

When weapons are allowed back on board an aircraft, the pilots will be able to land the plane safely but the aisles will be running with blood.

How’s that for hyperbole?

In Beyond Fear and elsewhere, I’ve written about the notion of “agenda” and how it informs security trade-offs. From the perspective of the flight attendants, subjecting passengers to onerous screening requirements is a perfectly reasonable trade-off. They’re safer — albeit only slightly — because of it, and it doesn’t cost them anything. The cost is an externality to them: the passengers pay it. Passengers have a broader agenda: safety, but also cost, convenience, time, etc. So it makes perfect sense that the flight attendants object to a security change that the passengers are in favor of.

EDITED TO ADD (12/2): The SFWG report hasn’t been removed from the TSA website, just unlinked.

EDITED TO ADD (12/20): The report seems to be gone from the TSA website now, but it’s available here.

Posted on December 1, 2005 at 10:14 AMView Comments

Surveillance and Oversight

Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year’s Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database — probably close to a million people overall — that the FBI’s computers analyzed, looking for links to known terrorists. Of course, no terrorist attack occurred and no plot was discovered: The intelligence was wrong.

A typical American citizen spending the holidays in Vegas might be surprised to learn that the FBI collected his personal data, but this kind of thing is increasingly common. Since 9/11, the FBI has been collecting all sorts of personal information on ordinary Americans, and it shows no signs of letting up.

The FBI has two basic tools for gathering information on large groups of Americans. Both were created in the 1970s to gather information solely on foreign terrorists and spies. Both were greatly expanded by the USA Patriot Act and other laws, and are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism. Together, they represent an enormous increase in police power in the United States.

The first are FISA warrants (sometimes called Section 215 warrants, after the section of the Patriot Act that expanded their scope). These are issued in secret, by a secret court. The second are national security letters, less well known but much more powerful, and which FBI field supervisors can issue all by themselves. The exact numbers are secret, but a recent Washington Post article estimated that 30,000 letters each year demand telephone records, banking data, customer data, library records, and so on.

In both cases, the recipients of these orders are prohibited by law from disclosing the fact that they received them. And two years ago, Attorney General John Ashcroft rescinded a 1995 guideline that this information be destroyed if it is not relevant to whatever investigation it was collected for. Now, it can be saved indefinitely, and disseminated freely.

September 2005, Rotterdam. The police had already identified some of the 250 suspects in a soccer riot from the previous April, but most were unidentified but captured on video. In an effort to help, they sent text messages to 17,000 phones known to be in the vicinity of the riots, asking that anyone with information contact the police. The result was more evidence, and more arrests.

The differences between the Rotterdam and Las Vegas incidents are instructive. The Rotterdam police needed specific data for a specific purpose. Its members worked with federal justice officials to ensure that they complied with the country’s strict privacy laws. They obtained the phone numbers without any names attached, and deleted them immediately after sending the single text message. And their actions were public, widely reported in the press.

On the other hand, the FBI has no judicial oversight. With only a vague hint that a Las Vegas attack might occur, the bureau vacuumed up an enormous amount of information. First its members tried asking for the data; then they turned to national security letters and, in some cases, subpoenas. There was no requirement to delete the data, and there is every reason to believe that the FBI still has it all. And the bureau worked in secret; the only reason we know this happened is that the operation leaked.

These differences illustrate four principles that should guide our use of personal information by the police. The first is oversight: In order to obtain personal information, the police should be required to show probable cause, and convince a judge to issue a warrant for the specific information needed. Second, minimization: The police should only get the specific information they need, and not any more. Nor should they be allowed to collect large blocks of information in order to go on “fishing expeditions,” looking for suspicious behavior. The third is transparency: The public should know, if not immediately then eventually, what information the police are getting and how it is being used. And fourth, destruction: Any data the police obtain should be destroyed immediately after its court-authorized purpose is achieved. The police should not be able to hold on to it, just in case it might become useful at some future date.

This isn’t about our ability to combat terrorism; it’s about police power. Traditional law already gives police enormous power to peer into the personal lives of people, to use new crime-fighting technologies, and to correlate that information. But unfettered police power quickly resembles a police state, and checks on that power make us all safer.

As more of our lives become digital, we leave an ever-widening audit trail in our wake. This information has enormous social value — not just for national security and law enforcement, but for purposes as mundane as using cell-phone data to track road congestion, and as important as using medical data to track the spread of diseases. Our challenge is to make this information available when and where it needs to be, but also to protect the principles of privacy and liberty our country is built on.

This essay originally appeared in the Minneapolis Star-Tribune.

Posted on November 22, 2005 at 6:06 AMView Comments

Sony's DRM Rootkit: The Real Story

This is my sixth column for Wired.com:

It’s a David and Goliath story of the tech blogs defeating a mega-corporation.

On Oct. 31, Mark Russinovich broke the story in his blog: Sony BMG Music Entertainment distributed a copy-protection scheme with music CDs that secretly installed a rootkit on computers. This software tool is run without your knowledge or consent — if it’s loaded on your computer with a CD, a hacker can gain and maintain access to your system and you wouldn’t know it.

The Sony code modifies Windows so you can’t tell it’s there, a process called “cloaking” in the hacker world. It acts as spyware, surreptitiously sending information about you to Sony. And it can’t be removed; trying to get rid of it damages Windows.

This story was picked up by other blogs (including mine), followed by the computer press. Finally, the mainstream media took it up.

The outcry was so great that on Nov. 11, Sony announced it was temporarily halting production of that copy-protection scheme. That still wasn’t enough — on Nov. 14 the company announced it was pulling copy-protected CDs from store shelves and offered to replace customers’ infected CDs for free.

But that’s not the real story here.

It’s a tale of extreme hubris. Sony rolled out this incredibly invasive copy-protection scheme without ever publicly discussing its details, confident that its profits were worth modifying its customers’ computers. When its actions were first discovered, Sony offered a “fix” that didn’t remove the rootkit, just the cloaking.

Sony claimed the rootkit didn’t phone home when it did. On Nov. 4, Thomas Hesse, Sony BMG’s president of global digital business, demonstrated the company’s disdain for its customers when he said, “Most people don’t even know what a rootkit is, so why should they care about it?” in an NPR interview. Even Sony’s apology only admits that its rootkit “includes a feature that may make a user’s computer susceptible to a virus written specifically to target the software.”

However, imperious corporate behavior is not the real story either.

This drama is also about incompetence. Sony’s latest rootkit-removal tool actually leaves a gaping vulnerability. And Sony’s rootkit — designed to stop copyright infringement — itself may have infringed on copyright. As amazing as it might seem, the code seems to include an open-source MP3 encoder in violation of that library’s license agreement. But even that is not the real story.

It’s an epic of class-action lawsuits in California and elsewhere, and the focus of criminal investigations. The rootkit has even been found on computers run by the Department of Defense, to the Department of Homeland Security’s displeasure. While Sony could be prosecuted under U.S. cybercrime law, no one thinks it will be. And lawsuits are never the whole story.

This saga is full of weird twists. Some pointed out how this sort of software would degrade the reliability of Windows. Someone created malicious code that used the rootkit to hide itself. A hacker used the rootkit to avoid the spyware of a popular game. And there were even calls for a worldwide Sony boycott. After all, if you can’t trust Sony not to infect your computer when you buy its music CDs, can you trust it to sell you an uninfected computer in the first place? That’s a good question, but — again — not the real story.

It’s yet another situation where Macintosh users can watch, amused (well, mostly) from the sidelines, wondering why anyone still uses Microsoft Windows. But certainly, even that is not the real story.

The story to pay attention to here is the collusion between big media companies who try to control what we do on our computers and computer-security companies who are supposed to be protecting us.

Initial estimates are that more than half a million computers worldwide are infected with this Sony rootkit. Those are amazing infection numbers, making this one of the most serious internet epidemics of all time — on a par with worms like Blaster, Slammer, Code Red and Nimda.

What do you think of your antivirus company, the one that didn’t notice Sony’s rootkit as it infected half a million computers? And this isn’t one of those lightning-fast internet worms; this one has been spreading since mid-2004. Because it spread through infected CDs, not through internet connections, they didn’t notice? This is exactly the kind of thing we’re paying those companies to detect — especially because the rootkit was phoning home.

But much worse than not detecting it before Russinovich’s discovery was the deafening silence that followed. When a new piece of malware is found, security companies fall over themselves to clean our computers and inoculate our networks. Not in this case.

McAfee didn’t add detection code until Nov. 9, and as of Nov. 15 it doesn’t remove the rootkit, only the cloaking device. The company admits on its web page that this is a lousy compromise. “McAfee detects, removes and prevents reinstallation of XCP.” That’s the cloaking code. “Please note that removal will not impair the copyright-protection mechanisms installed from the CD. There have been reports of system crashes possibly resulting from uninstalling XCP.” Thanks for the warning.

Symantec’s response to the rootkit has, to put it kindly, evolved. At first the company didn’t consider XCP malware at all. It wasn’t until Nov. 11 that Symantec posted a tool to remove the cloaking. As of Nov. 15, it is still wishy-washy about it, explaining that “this rootkit was designed to hide a legitimate application, but it can be used to hide other objects, including malicious software.”

The only thing that makes this rootkit legitimate is that a multinational corporation put it on your computer, not a criminal organization.

You might expect Microsoft to be the first company to condemn this rootkit. After all, XCP corrupts Windows’ internals in a pretty nasty way. It’s the sort of behavior that could easily lead to system crashes — crashes that customers would blame on Microsoft. But it wasn’t until Nov. 13, when public pressure was just too great to ignore, that Microsoft announced it would update its security tools to detect and remove the cloaking portion of the rootkit.

Perhaps the only security company that deserves praise is F-Secure, the first and the loudest critic of Sony’s actions. And Sysinternals, of course, which hosts Russinovich’s blog and brought this to light.

Bad security happens. It always has and it always will. And companies do stupid things; always have and always will. But the reason we buy security products from Symantec, McAfee and others is to protect us from bad security.

I truly believed that even in the biggest and most-corporate security company there are people with hackerish instincts, people who will do the right thing and blow the whistle. That all the big security companies, with over a year’s lead time, would fail to notice or do anything about this Sony rootkit demonstrates incompetence at best, and lousy ethics at worst.

Microsoft I can understand. The company is a fan of invasive copy protection — it’s being built into the next version of Windows. Microsoft is trying to work with media companies like Sony, hoping Windows becomes the media-distribution channel of choice. And Microsoft is known for watching out for its business interests at the expense of those of its customers.

What happens when the creators of malware collude with the very companies we hire to protect us from that malware?

We users lose, that’s what happens. A dangerous and damaging rootkit gets introduced into the wild, and half a million computers get infected before anyone does anything.

Who are the security companies really working for? It’s unlikely that this Sony rootkit is the only example of a media company using this technology. Which security company has engineers looking for the others who might be doing it? And what will they do if they find one? What will they do the next time some multinational company decides that owning your computers is a good idea?

These questions are the real story, and we all deserve answers.

EDITED TO ADD (11/17): Slashdotted.

EDITED TO ADD (11/19): Details of Sony’s buyback program. And more GPL code was stolen and used in the rootkit.

Posted on November 17, 2005 at 9:08 AM

The Zotob Worm

If you’ll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.

Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly — less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.

The worm started spreading on Sunday, 14 August. Honestly, it wasn’t much of a big deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is personally affected by something, it’s much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn’t think it was worth all the press coverage.

By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other — stealing “owned” computers back and forth. If your network was infected, it was a mess.

Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm’s creation was not a hacker, but rather a criminal looking to profit.

The nature of worms has changed in the past few years. Previously, hackers looking for prestige or just wanting to cause damage were responsible for most worms. Today, they’re increasingly written or commissioned by criminals. By taking over computers, worms can send spam, launch denial-of-service extortion attacks, or search for credit-card numbers and other personal information.

What could you have done beforehand to protect yourself against Zotob and its kin? “Install the patch” is the obvious answer, but it’s not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install — at least Microsoft Windows system patches — large corporate networks can’t. Far too often, patches cause other things to break.

It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, it was the most important thing a sysadmin could do to secure the network.

Microsoft had six new patches available on 9 August, three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would have actually been critical? With the next patch release, will you know which ones you can put off and for which ones you need to drop everything, test, and install across your network?

Given that it’s impossible to know what’s coming beforehand, how you respond to an actual worm largely determines your defense’s effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Because it’s impossible to know beforehand what the necessary response should be, you need a process for that response. Employees come and go, so the only thing that ensures a continuity of effective security is a process. You need accurate and timely information to fuel this process. And finally, you need experts to decipher the information, determine what to do, and implement a solution.

The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don’t think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.

This essay will appear in the November/December 2005 issue of IEEE Security & Privacy.

Posted on November 11, 2005 at 7:46 AMView Comments

