Entries Tagged "national security policy"


Cyberwarfare Policy

National Journal has an excellent article on cyberwar policy. I agree with the author’s comments on The Atlantic blog:

Would the United States ever use a more devastating weapon, perhaps shutting off the lights in an adversary nation? The answer is, almost certainly no, not unless America were attacked first.

To understand why, forget about the cyber dimension for a moment. Imagine that some foreign military had flown over a power substation in Brazil and dropped a bomb on it, depriving electricity to millions of people, as well as the places they work, the hospitals they visit, and the transportation they use. If there were no official armed conflict between Brazil and its attacker, the bombing would be illegal under international law. That’s a pretty basic test. But even if there were a declared war, or a recognized state of hostilities, knocking out vital electricity to millions of citizens—who presumably are not soldiers in the fight—would fail a number of other basic requirements of the laws of armed conflict. For starters, it could be considered disproportionate, particularly if Brazil hadn’t launched any similar-sized offensive on its adversary. Shutting off electricity to whole cities can effectively paralyze them. And the bombing would clearly target non-combatants. The government uses electricity, yes, but so does the entire civilian population.

Now add the cyber dimension. If the effect of a hacker taking down the power grid is the same as a bomber—that is, knocking out electrical power—then the same rules apply. That essentially was the conclusion of a National Academies of Sciences report in April. The authors write, “During acknowledged armed conflict (notably when kinetic and other means are also being used against the same target nation), cyber attack is governed by all the standard law of armed conflict. …If the effects of a kinetic attack are such that the attack would be ruled out on such grounds, a cyber attack that would cause similar effects would also be ruled out.”

[…]

According to a report in The Guardian, military planners refrained from launching a broad cyber attack against Serbia during the Kosovo conflict for fear of committing war crimes. The Pentagon theoretically had the power to “bring Serbia’s financial systems to a halt” and to go after the personal accounts of Slobodan Milosevic, the newspaper reported. But when the NATO-led bombing campaign was in full force, the Defense Department’s general counsel issued guidance on cyber war that said the law of (traditional) war applied.

The military ran into this same dilemma four years later, during preparations to invade Iraq in 2003. Planners considered whether to launch a massive attack on the Iraqi financial system in advance of the conventional strike. But they stopped short when they realized that the same networks used by Iraqi banks were also used by banks in France. Releasing a vicious computer virus into the system could potentially harm America’s allies. Some planners also worried that the contagion could spread to the United States. It could have been the cyber equivalent of nuclear fallout.

A 240-page Rand study by Martin Libicki—“Cyberdeterrence and Cyberwar”—came to the same conclusion:

Predicting what an attack can do requires knowing how the system and its operators will respond to signs of dysfunction and knowing the behavior of processes and systems associated with the system being attacked. Even then, cyberwar operations neither directly harm individuals nor destroy equipment (albeit with some exceptions). At best, these operations can confuse and frustrate operators of military systems, and then only temporarily. Thus, cyberwar can only be a support function for other elements of warfare, for instance, in disarming the enemy.

Commenting on the Rand report:

The report backs its findings by measuring probable outcomes to cyberattacks and determining that the results are too scattered to carry out accurate predictions. This is coupled with the problem of countering an attack. It is difficult to determine who conducted a specific cyberattack so any counter strikes or retaliations could backfire. Rather than going on the offensive, the United States should pursue diplomacy and attempt to find and prosecute the cybercriminals involved in an initial strike.

Libicki said that the military can attempt a cyberattack for a specific combat operation, but it would be a guessing game when trying to gauge the operation’s success since any result from the cyberattack would be unclear.

Instead, the Rand report suggests the government invest in bolstering military networks, which, as we know, have the same vulnerabilities as civilian networks.

I wrote about cyberwar back in 2005.

Posted on December 1, 2009 at 6:59 AM

Decertifying "Terrorist" Pilots

This article reads like something written by the company’s PR team.

When it comes to sleuthing these days, knowing your way within a database is as valued a skill as the classic, Sherlock Holmes-styled powers of detection.

Safe Banking Systems Software proved this very point in a demonstration of its algorithm acumen—one that resulted in a disclosure that convicted terrorists actually maintained working licenses with the U.S. Federal Aviation Administration.

The algorithm seems to be little more than matching up names and other basic info:

It used its algorithm-detection software to sift out uncommon names such as Abdelbaset Ali Elmegrahi, aka the Lockerbie bomber. It found that a number of licensed airmen all had the same P.O. box as their listed address—one that happened to be in Tripoli, Libya. These men all had working FAA certificates. And while the FAA database information investigated didn’t contain date-of-birth information, Safe Banking was able to use content on the FAA Website to determine these key details as well, to further gain a positive and clear identification of the men in question.
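The record linkage described above is conceptually simple: normalize names, match them against a watchlist, and cluster records that share a listed address. Here is a minimal sketch in Python—all records, names, and functions are hypothetical illustrations of the general technique, not Safe Banking Systems’ actual software:

```python
# Hypothetical sketch of watchlist matching plus shared-address clustering.
from collections import defaultdict

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for comparison."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in name.lower())
    return " ".join(cleaned.split())

def match_watchlist(records, watchlist):
    """Return records whose normalized name appears on the watchlist."""
    wanted = {normalize(n) for n in watchlist}
    return [r for r in records if normalize(r["name"]) in wanted]

def group_by_address(records):
    """Cluster records that share an identical listed address."""
    groups = defaultdict(list)
    for r in records:
        groups[r["address"]].append(r)
    return {addr: rs for addr, rs in groups.items() if len(rs) > 1}

records = [
    {"name": "Abdelbaset Ali Elmegrahi", "address": "P.O. Box 1, Tripoli"},
    {"name": "John Smith", "address": "12 Main St, Omaha"},
    {"name": "A. N. Other", "address": "P.O. Box 1, Tripoli"},
]
hits = match_watchlist(records, ["Abdelbaset Ali Elmegrahi"])
shared = group_by_address(records)
```

Note that exact-string matching like this is exactly where the false positives come from: common names and transliteration variants match people who have nothing to do with the watchlist.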

In any case, they found these three people with pilot’s licenses:

Elmegrahi, who had been posted on the FBI Most Wanted list for a decade and was convicted of blowing up Pan Am Flight 103, killing 259 people in 1988 over Lockerbie, Scotland. Elmegrahi was an FAA-certified aircraft dispatcher.

Re Tabib, a California resident who was convicted in 2007 for illegally exporting U.S. military aircraft parts—specifically export maintenance kits for F-14 fighter jets—to Iran. Tabib received three FAA licenses after his conviction, qualifying to be a flight instructor, ground instructor and transport pilot.

Myron Tereshchuk, who pleaded guilty to possession of a biological weapon after the FBI caught him with a brew of ricin, explosive powder and other essentials in Maryland in 2004. Tereshchuk was a licensed mechanic and student pilot.

And the article concludes with:

Suffice to say, after the FAA was made aware of these criminal histories, all three men have since been decertified.

Although I’m all for annoying international arms dealers, does anyone know the procedures for FAA decertification? Did the FAA have the legal right to do this, after being “made aware” of some information by a third party?

Of course, they don’t talk about all the false positives their system also found. How many innocents were also decertified? And they don’t mention the fact that, in the 9/11 attacks, FAA certification wasn’t really an issue. “Excuse me, young man. You can’t hijack and fly this aircraft. It says right here that the FAA decertified you.”

Posted on November 23, 2009 at 2:36 PM

Denial-of-Service Attack Against CALEA

Interesting:

The researchers say they’ve found a vulnerability in U.S. law enforcement wiretaps, if only theoretical, that would allow a surveillance target to thwart the authorities by launching what amounts to a denial-of-service (DoS) attack against the connection between the phone company switches and law enforcement.

[…]

The University of Pennsylvania researchers found the flaw after examining the telecommunication industry standard ANSI Standard J-STD-025, which addresses the transmission of wiretapped data from telecom switches to authorities, according to IDG News Service. Under the 1994 Communications Assistance for Law Enforcement Act, or Calea, telecoms are required to design their network architecture to make it easy for authorities to tap calls transmitted over digitally switched phone networks.

But the researchers, who describe their findings in a paper, found that the standard allows for very little bandwidth for the transmission of data about phone calls, which can be overwhelmed in a DoS attack. When a wiretap is enabled, the phone company’s switch establishes a 64-Kbps Call Data Channel to send data about the call to law enforcement. That paltry channel can be flooded if a target of the wiretap sends dozens of simultaneous SMS messages or makes numerous VOIP phone calls “without significant degradation of service to the targets’ actual traffic.”

As a result, the researchers say, law enforcement could lose records of whom a target called and when. The attack could also prevent the content of calls from being accurately monitored or recorded.
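The arithmetic behind the attack is easy to check. A back-of-envelope sketch, assuming each message generates a roughly 200-byte call-data record on the wiretap channel (an illustrative figure of mine, not one from the paper):

```python
# Back-of-envelope check of whether a stream of messages can saturate
# the 64-Kbps Call Data Channel described above.
CDC_CAPACITY_BPS = 64_000  # Call Data Channel bandwidth, per the standard

def channel_load(messages_per_sec: float, bytes_per_record: int) -> float:
    """Fraction of CDC capacity consumed by per-message call-data records."""
    return messages_per_sec * bytes_per_record * 8 / CDC_CAPACITY_BPS

# If each SMS produces a ~200-byte record, 40 messages per second
# alone would consume the entire channel (load == 1.0), while leaving
# the target's own traffic essentially unaffected.
load = channel_load(40, 200)
```

The asymmetry is the point: the target’s access link carries orders of magnitude more bandwidth than the wiretap’s data channel, so saturating the latter costs the attacker almost nothing.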

The paper. Comments by Matt Blaze, one of the paper’s authors.

Posted on November 20, 2009 at 6:11 AM

FBI/CIA/NSA Information Sharing Before 9/11

It’s conventional wisdom that the legal “wall” between intelligence and law enforcement was one of the reasons we failed to prevent 9/11. The 9/11 Commission evaluated that claim, and published a classified report in 2004. The report was released, with a few redactions, over the summer: “Legal Barriers to Information Sharing: The Erection of a Wall Between Intelligence and Law Enforcement Investigations,” 9/11 Commission Staff Monograph by Barbara A. Grewe, Senior Counsel for Special Projects, August 20, 2004.

The report concludes otherwise:

“The information sharing failures in the summer of 2001 were not the result of legal barriers but of the failure of individuals to understand that the barriers did not apply to the facts at hand,” the 35-page monograph concludes. “Simply put, there was no legal reason why the information could not have been shared.”

The prevailing confusion was exacerbated by numerous complicating circumstances, the monograph explains. The Foreign Intelligence Surveillance Court was growing impatient with the FBI because of repeated errors in applications for surveillance. Justice Department officials were uncomfortable requesting intelligence surveillance of persons and facilities related to Osama bin Laden since there was already a criminal investigation against bin Laden underway, which normally would have preempted FISA surveillance. Officials were reluctant to turn to the FISA Court of Review for clarification of their concerns since one of the judges on the court had expressed doubts about the constitutionality of FISA in the first place. And so on. Although not mentioned in the monograph, it probably didn’t help that public interest critics in the 1990s (myself included) were accusing the FISA Court of serving as a “rubber stamp” and indiscriminately approving requests for intelligence surveillance.

In the end, the monograph implicitly suggests that if the law was not the problem, then changing the law may not be the solution.

James Bamford comes to much the same conclusion in his book, The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America: there was no legal wall that prevented intelligence and law enforcement from sharing the information necessary to prevent 9/11; it was inter-agency rivalries and turf battles.

Posted on November 12, 2009 at 2:26 PM

CIA Invests in Social-Network Datamining

From Wired:

In-Q-Tel, the investment arm of the CIA and the wider intelligence community, is putting cash into Visible Technologies, a software firm that specializes in monitoring social media. It’s part of a larger movement within the spy services to get better at using “open source intelligence”—information that’s publicly available, but often hidden in the flood of TV shows, newspaper articles, blog posts, online videos and radio reports generated every day.

Here’s the Visible Technologies press release on the funding.

Posted on October 26, 2009 at 6:53 AM

James Bamford on the NSA

James Bamford—author of The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America—writes about the NSA’s new data center in Utah as he reviews another book, The Secret Sentry: The Untold History of the National Security Agency:

Just how much information will be stored in these windowless cybertemples? A clue comes from a recent report prepared by the MITRE Corporation, a Pentagon think tank. “As the sensors associated with the various surveillance missions improve,” says the report, referring to a variety of technical collection methods, “the data volumes are increasing with a projection that sensor data volume could potentially increase to the level of Yottabytes (10^24 bytes) by 2015.” Roughly equal to about a septillion (1,000,000,000,000,000,000,000,000) pages of text, numbers beyond Yottabytes haven’t yet been named. Once vacuumed up and stored in these near-infinite “libraries,” the data are then analyzed by powerful infoweapons, supercomputers running complex algorithmic programs, to determine who among us may be—or may one day become—a terrorist.

[…]

Aid concludes that the biggest problem facing the agency is not the fact that it’s drowning in untranslated, indecipherable, and mostly unusable data, problems that the troubled new modernization plan, Turbulence, is supposed to eventually fix. “These problems may, in fact, be the tip of the iceberg,” he writes. Instead, what the agency needs most, Aid says, is more power. But the type of power to which he is referring is the kind that comes from electrical substations, not statutes. “As strange as it may sound,” he writes, “one of the most urgent problems facing NSA is a severe shortage of electrical power.” With supercomputers measured by the acre and estimated $70 million annual electricity bills for its headquarters, the agency has begun browning out, which is the reason for locating its new data centers in Utah and Texas.

Of course, that yottabyte number is hyperbole. The problem with all of that data is that there’s no time to process it. Think of it as trying to drink from a fire hose. The NSA has to make lightning-fast real-time decisions about what to save for later analysis. And there’s not a lot of time for later analysis; more data is coming constantly at the same fire-hose rate.
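To get a feel for how hyperbolic the figure is, a quick back-of-envelope calculation—the ingest rate is an illustrative assumption of mine, not anything from the report:

```python
# Illustrative arithmetic on the yottabyte figure quoted above.
YOTTABYTE = 10**24  # bytes

# Even at a (hypothetical) sustained ingest rate of one terabit per
# second, filling a single yottabyte would take hundreds of thousands
# of years.
ingest_bps = 10**12  # 1 Tbit/s, an illustrative assumption
seconds = YOTTABYTE * 8 / ingest_bps
years = seconds / (365.25 * 24 * 3600)  # roughly 250,000 years
```

Which is another way of saying the bottleneck isn’t storage at all; it’s deciding, in real time, what’s worth keeping.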

Bamford’s entire article is worth reading. He summarizes some of the things he talks about in his book: the inability of the NSA to predict national security threats (9/11 being one such failure) and the manipulation of intelligence data for political purposes.

Posted on October 22, 2009 at 6:10 AM

Terrorist Havens

Good essay on “terrorist havens”—like Afghanistan—and why they’re not as big a worry as some maintain:

Rationales for maintaining the counterinsurgency in Afghanistan are varied and complex, but they all center on one key tenet: that Afghanistan must not be allowed to again become a haven for terrorist groups, especially al-Qaeda.

[…]

The debate has largely overlooked a more basic question: How important to terrorist groups is any physical haven? More to the point: How much does a haven affect the danger of terrorist attacks against U.S. interests, especially the U.S. homeland? The answer to the second question is: not nearly as much as unstated assumptions underlying the current debate seem to suppose. When a group has a haven, it will use it for such purposes as basic training of recruits. But the operations most important to future terrorist attacks do not need such a home, and few recruits are required for even very deadly terrorism. Consider: The preparations most important to the Sept. 11, 2001, attacks took place not in training camps in Afghanistan but, rather, in apartments in Germany, hotel rooms in Spain and flight schools in the United States.

In the past couple of decades, international terrorist groups have thrived by exploiting globalization and information technology, which has lessened their dependence on physical havens.

By utilizing networks such as the Internet, terrorist organizations have become more network-like themselves, not beholden to any one headquarters. A significant jihadist terrorist threat to the United States persists, but that does not mean it will consist of attacks instigated and commanded from a South Asian haven, or that it will require a haven at all. Al-Qaeda’s role in that threat is now less one of commander than of ideological lodestar, and for that role a haven is almost meaningless.

Posted on September 21, 2009 at 6:46 AM

Modifying the Color-Coded Threat Alert System

I wrote about the DHS’s color-coded threat alert system in 2003, in Beyond Fear:

The color-coded threat alerts issued by the Department of Homeland Security are useless today, but may become useful in the future. The U.S. military has a similar system; DEFCON 1-5 corresponds to the five threat alert levels: Green, Blue, Yellow, Orange, and Red. The difference is that the DEFCON system is tied to particular procedures; military units have specific actions they need to perform every time the DEFCON level goes up or down. The color-alert system, on the other hand, is not tied to any specific actions. People are left to worry, or are given nonsensical instructions to buy plastic sheeting and duct tape. Even local police departments and government organizations largely have no idea what to do when the threat level changes. The threat levels actually do more harm than good, by needlessly creating fear and confusion (which is an objective of terrorists) and anesthetizing people to future alerts and warnings. If the color-alert system became something better defined, so that people know exactly what caused the levels to change, what the change means, and what actions they need to take in the event of a change, then it could be useful. But even then, the real measure of effectiveness is in the implementation. Terrorist attacks are rare, and if the color-threat level changes willy-nilly with no obvious cause or effect, then people will simply stop paying attention. And the threat levels are publicly known, so any terrorist with a lick of sense will simply wait until the threat level goes down.

Of course, the codes never became useful. There were never any actions associated with them. And we now know that their primary use was political. They were, and remain, a security joke.

This is what I wrote in 2004:

The DHS’s threat warnings have been vague, indeterminate, and unspecific. The threat index goes from yellow to orange and back again, although no one is entirely sure what either level means. We’ve been warned that the terrorists might use helicopters, scuba gear, even cheap prescription drugs from Canada. New York and Washington, D.C., were put on high alert one day, and the next day told that the alert was based on information years old. The careful wording of these alerts allows them not to require any sound, confirmed, accurate intelligence information, while at the same time guaranteeing hysterical media coverage. This headline-grabbing stuff might make for good movie plots, but it doesn’t make us safer.

This kind of behavior is all that’s needed to generate widespread fear and uncertainty. It keeps the public worried about terrorism, while at the same time reminding them that they’re helpless without the government to defend them.

It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People respond to the warning, and there is a discrete period when their lives are markedly different. They feel there was a usefulness to the higher alert mode, even if nothing came of it.

It’s quite another to tell people to remain on alert, but not to alter their plans. According to scientists, California is expecting a huge earthquake sometime in the next 200 years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for 200 years. It goes against human nature. Residents of California have the same level of short-term fear and long-term apathy regarding the threat of earthquakes that the rest of the nation has developed regarding the DHS’s terrorist threat alert.

A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Even worse, it echoes the very tactics of the terrorists. There are two basic ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

Finally, in 2009, the DHS is considering changes to the system:

A proposal by the Homeland Security Advisory Council, unveiled late Tuesday, recommends removing two of the five colors, with a standard state of affairs being a “guarded” Yellow. The Green “low risk of terrorist attacks” might get removed altogether, meaning stay prepared for your morning subway commute to turn deadly at any moment.

That’s right, according to the DHS the problem was too many levels. I hope you all feel safer now.

Here are some more whimsical designs, but I want the whole thing to be ditched. And it should be easy to ditch; no one thinks it has any value. Unfortunately, if the Obama Administration can’t make this simple change, I don’t think they have the political will to make any of the harder changes we need.

Posted on September 18, 2009 at 6:45 AM

