Entries Tagged "national security policy"

Page 46 of 59

Ray McGovern on Intelligence Failures

Good commentary from former CIA analyst Ray McGovern:

The short answer to the second sentence is: Yes, it is inevitable that “certain plots will succeed.”

A more helpful answer would address the question as to how we might best minimize their prospects for success. And to do this, sorry to say, there is no getting around the necessity to address the root causes of terrorism or, in the vernacular, “why they hate us.”

If we don’t go beyond self-exculpatory sloganeering in attempting to answer that key question, any “counter terrorism apparatus” is doomed to failure. Honest appraisals would tread on delicate territory, but any intelligence agency worth its salt must be willing/able to address it.

Delicate? Take, for example, what Khalid Sheik Mohammed, the “mastermind” of 9/11, said was his main motive. Here’s what the 9/11 Commission Report wrote on page 147. You will not find it reported in the Fawning Corporate Media (FCM):

“By his own account, KSM’s animus toward the United States stemmed…from his violent disagreement with U.S. foreign policy favoring Israel.”

This is not the entire picture, of course. Other key factors include the post-Gulf War stationing of U.S. troops in Saudi Arabia, widely seen as defiling the holy sites of Islam.

Add Washington’s propping up of dictatorial, repressive regimes in order to secure continuing access to oil and natural gas—widely (and accurately) seen as one of the main reasons for the invasion of Iraq and Afghanistan.

Not to mention the Pentagon’s insatiable thirst for additional permanent (sorry, the term is now “enduring”) military bases in that part of the world.

[…]

The most effective step would be to release the CIA Inspector General report on intelligence community performance prior to 9/11. That investigation was run, and its report prepared, by an honest man, it turns out.

It was immediately suppressed by then-Acting DCI John McLaughlin—another Tenet clone—and McLaughlin’s successors as director, Porter Goss, Michael Hayden, and now Leon Panetta.

Accountability is key. If there is no accountability, there is total freedom to screw up, and screw up royally, without any thought of possible personal consequences.

Not only is it certain that we will face more terrorist attacks, but the keystone-cops nature of recent intelligence operations … whether in using cell phones in planning kidnappings in Italy, or in allowing suicide bombers access to CIA bases in Taliban-infested eastern Afghanistan … will continue. Not to mention the screw-up in the case of Abdulmutallab.

Posted on January 15, 2010 at 7:22 AM

Another Contest: Fixing Airport Security

Slate is hosting an airport security suggestions contest: ideas “for making airport security more effective, more efficient, or more pleasant.” Deadline is midday Friday.

I had already submitted a suggestion before I was asked to be a judge. Since I’m no longer eligible, here’s what I sent them:

Reduce the TSA’s budget, and spend the money on:

1. Intelligence. Security measures that focus on specific tactics or targets are a waste of money unless we guess the next attack correctly. Security measures that just force the terrorists to make a minor change in their tactics or targets are not money well spent.

2. Investigation. Since the terrorists deliberately choose plots that we’re not looking for, the best security is to stop plots before they get to the airport. Remember the arrest of the London liquid bombers.

3. Emergency response. Terrorism’s harm depends more on our reactions to attacks than the attacks themselves. We’re naturally resilient, but how we respond in those first hours and days is critical.

And as an added bonus, all of these measures protect us against non-airplane terrorism as well. All we have to do is stop focusing on specific movie plots, and start thinking about the overall threat.

Probably not what they were looking for, and certainly not anything the government is even going to remotely consider—but the smart solution all the same.

Posted on January 7, 2010 at 10:53 AM

David Brooks on Resilience in the Face of Security Imperfection

David Brooks makes some very good points in this New York Times op-ed from last week:

All this money and technology seems to have reduced the risk of future attack. But, of course, the system is bound to fail sometimes. Reality is unpredictable, and no amount of computer technology is going to change that. Bureaucracies are always blind because they convert the rich flow of personalities and events into crude notations that can be filed and collated. Human institutions are always going to miss crucial clues because the information in the universe is infinite and events do not conform to algorithmic regularity.

[…]

In a mature nation, President Obama could go on TV and say, “Listen, we’re doing the best we can, but some terrorists are bound to get through.” But this is apparently a country that must be spoken to in childish ways. The original line out of the White House was that the system worked. Don’t worry, little Johnny.

When that didn’t work the official line went to the other extreme. “I consider that totally unacceptable,” Obama said. I’m really mad, Johnny. But don’t worry, I’ll make it all better.

[…]

For better or worse, over the past 50 years we have concentrated authority in centralized agencies and reduced the role of decentralized citizen action. We’ve done this in many spheres of life. Maybe that’s wise, maybe it’s not. But we shouldn’t imagine that these centralized institutions are going to work perfectly or even well most of the time. It would be nice if we reacted to their inevitable failures not with rabid denunciation and cynicism, but with a little resiliency, an awareness that human systems fail and bad things will happen and we don’t have to lose our heads every time they do.

There’s a pervasive belief in this society that perfection is possible. So if something bad occurs, it can never be because we just got unlucky. It must be because something went wrong and someone is at fault, and therefore things must be fixed. Sometimes, though, this simply isn’t true. Sometimes it’s better not to fix things: either there is no fix, or the fix is more expensive than living with the problem, or the side effects of the fix are worse than the problem. And sometimes you can do everything right and have it still turn out wrong. Welcome to the real world.

EDITED TO ADD (1/8): Glenn Greenwald on “The Degrading Effects of Terrorism Fears.”

Posted on January 6, 2010 at 10:27 AM

Me and the Christmas Underwear Bomber

I spent a lot of yesterday giving press interviews. Nothing I haven’t said before, but it’s now national news and everyone wants to hear it.

These are the most interesting bits. Rachel Maddow interviewed me last night on her show. Jeffrey Goldberg interviewed me for the Atlantic website. And CNN.com published a rewrite of an older article of mine on terrorism and security.

I’ve started to call the bizarre new TSA rules “magical thinking”: if we somehow protect against the specific tactic of the previous terrorist, we make ourselves safe from the next terrorist.

EDITED TO ADD (12/29): I don’t know about this quote:

“I flew 265,000 miles last year,” said Bruce Schneier, a cryptographer and security analyst. “You know what really pisses me off? Making me check my luggage. Not letting me use my laptop, so I can’t work. Taking away my Kindle, so I can’t read. I care about those things. I care about making me safer much, much less.”

For the record, I do care about being safer. I just don’t think any of the airplane security measures proposed by the TSA accomplish that.

Posted on December 29, 2009 at 11:17 AM

The Politics of Power in Cyberspace

Thoughtful blog post by The Atlantic‘s Marc Ambinder:

We allow Google, Amazon.com, credit companies and all manner of private corporations to collect intimate information about our lives, but we reflexively recoil when the government proposes to monitor (and not even collect) a fraction of that information, even with legal safeguards. We carry in our wallets credit cards with RFID chips. Data companies send unmarked vans into our neighborhoods, mapping wireless networks. The IBM scientist and tech guru Jeff Jonas noted on his blog that every time we send a text message, we’re contributing to a cloud where “powerful analytics commingle space-time-travel data with tertiary data.” Geolocated tweets can tell everyone where we are, what we’re doing, and who we like. Sure, the data is ostensibly anonymized, but the reality is a bit different: we provide so much of it that, as Jonas notes, we tend to re-identify ourselves—out our identity—fairly quickly. This is good and bad; the world becomes more efficient, we leave less of a footprint, we get what we want more quickly. But we also sacrifice privacy, individuality, and other goods that can’t be measured in dollars and cents.

Government power is just different than corporate power. Our engagement with technology implies a certain consent to give up information to companies. A deeper mistrust of government is healthy, so far as it places pressure on lawmakers to properly oversee the exercise of state power. Warrantless domestic surveillance by NSA during the Bush administration doubtless ensnared a number of innocent Americans and monitored the communications of people who posed no harm to anyone. Where the standard is personal privacy and the rule of law, the violation is severe.

But where the standard is harm, the damage is minimal compared to the information that is routinely and legally collected by non-state entities—information that is used to target us for political appeals, to sell us something, or to steal money, to pilfer intellectual property or abuse technology. 85 percent of infrastructure in this country is in private hands; it is extremely vulnerable to attack and even to catastrophic resource failure.

[…]

This asymmetry is distorting the politics of cyber security. It frustrates the front line cyber folks to no end, but they are, in some ways, responsible for it.

For one thing, the NSA lacks credibility with many Americans and with some lawmakers because of its aforementioned activities. And yet the NSA is—really—the only entity with the expertise, the size, and the capability to secure the cyber realm. For another, the government remains obsessed with secrecy. The NSA and the Department of Defense can penetrate virtually any computer network on the face of the planet, and probably do so with regularity for defense purposes. Their capabilities in this “offensive” realm are awesome, and kind of scary. The technology that’ll be used to defend the country from cyber attacks of all types is the same technology used to track insurgents in Iraq (classified), tap into terrorist net-centered communications (classified), probe nation-state computer defenses (classified), figure out how to electronically hack into missile guidance systems (classified). Also: they’re worried that terrorists would figure out how vulnerable we really are if they knew everything. Here’s the weird part: China, Russia, savvy cyber terrorists—they know all this. They have the same technology.
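Jonas’s re-identification point is easy to demonstrate concretely. Here’s a toy sketch (all names and data are made up for the example): even when a mobility dataset carries only pseudonyms, each trace tends to be nearly unique, so an adversary who knows just a couple of coarse facts about a person—roughly where they sleep and where they work—can often single out their record.

```python
# Toy illustration of re-identifying "anonymized" location traces.
# All identifiers and grid cells below are hypothetical.

# Pseudonymous records: each user is a set of (location-cell, time-of-day) pings.
anonymized_traces = {
    "user_001": {("home_grid_A4", "night"), ("office_grid_C7", "day")},
    "user_002": {("home_grid_B2", "night"), ("office_grid_C7", "day")},
    "user_003": {("home_grid_A4", "night"), ("office_grid_D1", "day")},
}

# The adversary knows only two coarse facts about the target person:
# where they are at night (home) and during the day (work).
known_about_target = {("home_grid_A4", "night"), ("office_grid_C7", "day")}

# Re-identification: keep every pseudonymous record consistent with those facts.
matches = [uid for uid, trace in anonymized_traces.items()
           if known_about_target <= trace]  # subset test

print(matches)  # exactly one record fits, so the pseudonym is broken
```

Published studies of large mobility datasets have found that a handful of spatio-temporal points is enough to uniquely identify the overwhelming majority of individuals, which is why stripping names from this kind of data provides so little real anonymity.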

My essay on who should be in charge of cybersecurity.

Posted on December 17, 2009 at 6:10 AM

U.S./Russia Cyber Arms Control Talks

Now this is interesting:

The United States has begun talks with Russia and a United Nations arms control committee about strengthening Internet security and limiting military use of cyberspace.

[…]

The Russians have held that the increasing challenges posed by military activities to civilian computer networks can be best dealt with by an international treaty, similar to treaties that have limited the spread of nuclear, chemical and biological weapons. The United States had resisted, arguing that it was impossible to draw a line between the commercial and military uses of software and hardware.

[…]

A State Department official, who was not authorized to speak about the talks and requested anonymity, disputed the Russian characterization of the American position. While the Russians have continued to focus on treaties that may restrict weapons development, the United States is hoping to use the talks to increase international cooperation in opposing Internet crime. Strengthening defenses against Internet criminals would also strengthen defenses against any military-directed cyberattacks, the United States maintains.

[…]

The American interest in reopening discussions shows that the Obama administration, even in absence of a designated Internet security chief, is breaking with the Bush administration, which declined to talk with Russia about issues related to military attacks using the Internet.

I’m not sure what can be achieved here, but talking is always good.

I just posted about cyberwar policy.

Posted on December 14, 2009 at 6:46 AM

Obama's Cybersecurity Czar

Rumors are that RSA president Art Coviello declined the job. No surprise: it has no actual authority but a lot of responsibility.

Security experts have pointed out that previous cybersecurity positions, cybersecurity czars and directors at the Department of Homeland Security, have been unable to make any significant changes to lock down federal systems. Virtually nothing can get done without some kind of budgetary authority, security expert Bruce Schneier has said about the vacant position. An advisor can set priorities and try to carry them out, but won’t have the clout to force government agencies to make changes and adhere to policies.

For the record, I was never approached. But I would certainly decline; this is a political job, and someone political needs to fill it.

I’ve written about this before—also, the last paragraph here:

And if you’re going to appoint a cybersecurity czar, you have to give him actual budgetary authority—otherwise he won’t be able to get anything done, either.

Maybe we should do a reality TV show: “America’s Next Cybersecurity Czar.”

EDITED TO ADD (12/12): Commentary.

Posted on December 11, 2009 at 6:37 AM

TSA Publishes Standard Operating Procedures

BoingBoing is pretty snarky:

The TSA has published a “redacted” version of their s00per s33kr1t screening procedure guidelines (Want to know whether to frisk a CIA operative at the checkpoint? Now you can!). Unfortunately, the security geniuses at the DHS don’t know that drawing black blocks over the words you want to eliminate from your PDF doesn’t actually make the words go away, and can be defeated by nefarious al Qaeda operatives through a complex technique known as ctrl-a/ctrl-c/ctrl-v. Thankfully, only the most elite terrorists would be capable of matching wits with the technology brilliance on display at the agency charged with defending our nation’s skies by ensuring that imaginary hair-gel bombs are kept off of airplanes.
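The failure mode BoingBoing is mocking—an opaque rectangle drawn *over* text rather than the text being deleted—can be sketched with a toy page model. (This is an illustrative model, not the actual PDF format; the class and sentence below are hypothetical.) The viewer honors the overlay, but select-all/copy/paste reads the underlying text layer directly:

```python
# Toy model of bad "redaction": a page holds a text layer plus drawing
# overlays. Painting a black box changes what is DISPLAYED, not what is
# STORED. (Illustrative only; not the real PDF content-stream format.)

class Page:
    def __init__(self, text):
        self.text_layer = text   # the underlying stored text
        self.overlays = []       # list of (start, end) black boxes

    def redact_visually(self, start, end):
        """Bad redaction: draw a box over the span, keep the text."""
        self.overlays.append((start, end))

    def render(self):
        """What a viewer shows: overlays hide the covered characters."""
        chars = list(self.text_layer)
        for start, end in self.overlays:
            for i in range(start, end):
                chars[i] = "█"
        return "".join(chars)

    def extract_text(self):
        """What select-all/copy/paste sees: the raw text layer."""
        return self.text_layer

page = Page("Screening exemption applies to armed LEOs with valid ID.")
page.redact_visually(31, 41)   # black out "armed LEOs"

print(page.render())        # the "secret" looks hidden on screen...
print(page.extract_text())  # ...but copy-paste recovers it verbatim
```

Proper redaction deletes the sensitive text from the stored content before publishing; merely layering a rectangle on top, as the TSA apparently did, leaves it one ctrl-a/ctrl-c/ctrl-v away.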

TSA is launching a “full review” to determine how this could have happened. I’ll save them the effort: someone screwed up.

In a statement Tuesday night, the TSA sought to minimize the impact of the unintentional release—calling the document “outdated,” “unclassified” and unimplemented—while saying that it took the incident “very seriously,” and “took swift action” when it was discovered.

Yeah, right.

The original link to the document is dead, but here’s the unredacted document.

I’ve skimmed it, and haven’t found anything terribly interesting. Here’s what Wired.com noticed:

One of the redacted sections, for example, indicates that an armed law enforcement officer in or out of uniform may pass beyond the checkpoint without screening after providing a U.S. government-issued photo ID and “Notice of LEO Flying Armed Document.”

Some commercial airline pilots receive training by the U.S. Marshals Service and are allowed to carry TSA-issued firearms on planes. They can pass through without screening only after presenting “bonafide credentials and aircraft operator photo ID,” the document says.

Foreign dignitaries equivalent to cabinet rank and above, accompanying a spouse, their children under the age of 12, and a State Department escort are exempt from screening.

There are also references to a CIA program called WOMAP, the Worldwide Operational Meet and Assist Program. As part of WOMAP, foreign dignitaries and their escorts—authorized CIA representatives—are exempt from screening, provided they’re approved in advance by TSA’s Office of Intelligence.

Passengers carrying passports from Cuba, Iran, North Korea, Libya, Syria, Sudan, Afghanistan, Lebanon, Somalia, Iraq, Yemen or Algeria are to be designated for selective screening.

Although only a few portions of the document were redacted, the manual contains other tidbits that weren’t redacted, such as a thorough description of diplomatic pouches that are exempt from screening.

I’m a little bit saddened when we all make a big deal about how dumb people are at redacting digital documents. We’ve had a steady stream of these badly redacted documents, and I don’t want to lose that. I also don’t want agencies deciding not to release documents at all, rather than risk this sort of embarrassment.

EDITED TO ADD (12/10): News:

Five Transportation Security Administration employees have been placed on administrative leave after a sensitive airport security manual was posted on the Internet, the agency announced Wednesday.

EDITED TO ADD (12/12): Did the TSA compromise an intelligence program?

Posted on December 10, 2009 at 6:47 AM

Cyberwarfare Policy

National Journal has an excellent article on cyberwar policy. I agree with the author’s comments on The Atlantic blog:

Would the United States ever use a more devastating weapon, perhaps shutting off the lights in an adversary nation? The answer is, almost certainly no, not unless America were attacked first.

To understand why, forget about the cyber dimension for a moment. Imagine that some foreign military had flown over a power substation in Brazil and dropped a bomb on it, depriving electricity to millions of people, as well as the places they work, the hospitals they visit, and the transportation they use. If there were no official armed conflict between Brazil and its attacker, the bombing would be illegal under international law. That’s a pretty basic test. But even if there were a declared war, or a recognized state of hostilities, knocking out vital electricity to millions of citizens—who presumably are not soldiers in the fight—would fail a number of other basic requirements of the laws of armed conflict. For starters, it could be considered disproportionate, particularly if Brazil hadn’t launched any similar-sized offensive on its adversary. Shutting off electricity to whole cities can effectively paralyze them. And the bombing would clearly target non-combatants. The government uses electricity, yes, but so does the entire civilian population.

Now add the cyber dimension. If the effect of a hacker taking down the power grid is the same as a bomber—that is, knocking out electrical power—then the same rules apply. That essentially was the conclusion of a National Academies of Sciences report in April. The authors write, “During acknowledged armed conflict (notably when kinetic and other means are also being used against the same target nation), cyber attack is governed by all the standard law of armed conflict. …If the effects of a kinetic attack are such that the attack would be ruled out on such grounds, a cyber attack that would cause similar effects would also be ruled out.”

[…]

According to a report in The Guardian, military planners refrained from launching a broad cyber attack against Serbia during the Kosovo conflict for fear of committing war crimes. The Pentagon theoretically had the power to “bring Serbia’s financial systems to a halt” and to go after the personal accounts of Slobodan Milosevic, the newspaper reported. But when the NATO-led bombing campaign was in full force, the Defense Department’s general counsel issued guidance on cyber war that said the law of (traditional) war applied.

The military ran into this same dilemma four years later, during preparations to invade Iraq in 2003. Planners considered whether to launch a massive attack on the Iraqi financial system in advance of the conventional strike. But they stopped short when they realized that the same networks used by Iraqi banks were also used by banks in France. Releasing a vicious computer virus into the system could potentially harm America’s allies. Some planners also worried that the contagion could spread to the United States. It could have been the cyber equivalent of nuclear fallout.

A 240-page Rand study by Martin Libicki—“Cyberdeterrence and Cyberwar”—came to the same conclusion:

Predicting what an attack can do requires knowing how the system and its operators will respond to signs of dysfunction and knowing the behavior of processes and systems associated with the system being attacked. Even then, cyberwar operations neither directly harm individuals nor destroy equipment (albeit with some exceptions). At best, these operations can confuse and frustrate operators of military systems, and then only temporarily. Thus, cyberwar can only be a support function for other elements of warfare, for instance, in disarming the enemy.

Commenting on the Rand report:

The report backs its findings by measuring probable outcomes to cyberattacks and determining that the results are too scattered to carry out accurate predictions. This is coupled with the problem of countering an attack. It is difficult to determine who conducted a specific cyberattack so any counter strikes or retaliations could backfire. Rather than going on the offensive, the United States should pursue diplomacy and attempt to find and prosecute the cybercriminals involved in an initial strike.

Libicki said that the military can attempt a cyberattack for a specific combat operation, but it would be a guessing game when trying to gauge the operation’s success since any result from the cyberattack would be unclear.

Instead the Rand report suggests the government invest in bolstering military networks, which as we know, have the same vulnerabilities as civilian networks.

I wrote about cyberwar back in 2005.

Posted on December 1, 2009 at 6:59 AM

