Entries Tagged "national security policy"


The Washington Post on the U.S. Intelligence Industry

The Washington Post has published a phenomenal piece of investigative journalism: a long, detailed, and very interesting exposé on the U.S. intelligence industry (overall website; parts 1, 2, and 3; blog; Washington reactions; top 10 revelations; many many many blog comments and reactions; and so on).

It’s a truly excellent piece of investigative journalism. Pity people don’t care much about investigative journalism—or facts in politics, really—anymore.

EDITED TO ADD (7/25): More commentary.

EDITED TO ADD (7/26): Jay Rosen writes:

Last week, it was the Washington Post’s big series, Top Secret America, two years in the making. It reported on the massive security shadowland that has arisen since 9/11. The Post basically showed that there is no accountability, no knowledge at the center of what the system as a whole is doing, and too much “product” to make intelligent use of. We’re wasting billions upon billions of dollars on an intelligence system that does not work. It’s an explosive finding but the explosive reactions haven’t followed, not because the series didn’t do its job, but rather: the job of fixing what is broken would break the system responsible for such fixes.

The mental model on which most investigative journalism is based states that explosive revelations lead to public outcry; elites get the message and reform the system. But what if elites believe that reform is impossible because the problems are too big, the sacrifices too great, the public too distractible? What if cognitive dissonance has been insufficiently accounted for in our theories of how great journalism works…and often fails to work?

EDITED TO ADD (7/27): More.

Posted on July 23, 2010 at 12:46 PM

More Research on the Effectiveness of Terrorist Profiling

Interesting:

The use of profiling by ethnicity or nationality to trigger secondary security screening is a controversial social and political issue. Overlooked is the question of whether such actuarial methods are in fact mathematically justified, even under the most idealized assumptions of completely accurate prior probabilities, and secondary screenings concentrated on the highest-probability individuals. We show here that strong profiling (defined as screening at least in proportion to prior probability) is no more efficient than uniform random sampling of the entire population, because resources are wasted on the repeated screening of higher probability, but innocent, individuals. A mathematically optimal strategy would be “square-root biased sampling,” the geometric mean between strong profiling and uniform sampling, with secondary screenings distributed broadly, although not uniformly, over the population. Square-root biased sampling is a general idea that can be applied whenever a “bell-ringer” event must be found by sampling with replacement, but can be recognized (either with certainty, or with some probability) when seen.
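The result is easy to sanity-check with a small simulation sketch. In the sampling-with-replacement model, if the culprit is individual i and screenings are drawn from a distribution q, the expected number of screenings before he turns up is 1/q_i; averaging over the prior p gives the sum of p_i/q_i. The Python sketch below uses an invented prior distribution purely for illustration; it shows strong profiling doing no better than uniform sampling, and square-root biased sampling doing better than both.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented, heavy-tailed prior probabilities that each individual is the
    # "bell-ringer"; only their relative sizes matter for the comparison.
    N = 10_000
    priors = rng.pareto(2.0, size=N) + 1e-9
    priors /= priors.sum()

    def expected_screenings(q):
        # If the culprit is i, the wait under sampling-with-replacement from q
        # is geometric with mean 1/q[i]; average that over the prior.
        return float(np.sum(priors / q))

    uniform = np.full(N, 1.0 / N)
    strong = priors.copy()                    # screen in proportion to the prior
    sqrt_biased = np.sqrt(priors)
    sqrt_biased /= sqrt_biased.sum()          # square-root biased sampling

    for name, q in [("uniform", uniform),
                    ("strong profiling", strong),
                    ("square-root biased", sqrt_biased)]:
        print(f"{name:20s} expected screenings: {expected_screenings(q):,.0f}")
    # uniform and strong profiling both come out at N; square-root biased is lower.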

Posted on July 22, 2010 at 6:41 AM

Patrolling the U.S./Canada Border

Doesn’t the DHS have anything else to do?

As someone who believes that our nation has a right to enforce its borders, I should have been gratified when the immigration official at the border saw the canoe on our car and informed us that anyone who crossed the nearby international waterway illegally would be arrested and fined as much as $5,000.

Trouble is, the river wasn’t the Rio Grande, but the St. Croix, which defines the border between Maine and New Brunswick, Canada. And the threat of arrest wasn’t aimed at illegal immigrants or terrorists but at canoeists like myself.

The St. Croix is a wild river that flows through unpopulated country. Primitive campsites are maintained on both shores, some accessible by logging roads, but most reached only by water or by bushwhacking for miles through thick forest and marsh. There are easier ways to sneak into the U.S. from Canada. According to Homeland Security regulations, however, canoeists who begin their trip in Canada cannot step foot on American soil, thus putting half the campsites off limits. It is not an idle threat; the U.S. Border Patrol makes regular helicopter flights down the river.

Posted on June 17, 2010 at 6:57 AM

Terrorist Attacks and Comparable Risks, Part 2

John Adams argues that our irrationality about comparative risks depends on the type of risk:

With “pure” voluntary risks, the risk itself, with its associated challenge and rush of adrenaline, is the reward. Most climbers on Mount Everest know that it is dangerous and willingly take the risk. With a voluntary, self-controlled, applied risk, such as driving, the reward is getting expeditiously from A to B. But the sense of control that drivers have over their fates appears to encourage a high level of tolerance of the risks involved.

Cycling from A to B (I write as a London cyclist) is done with a diminished sense of control over one’s fate. This sense is supported by statistics that show that per kilometre travelled a cyclist is 14 times more likely to die than someone in a car. This is a good example of the importance of distinguishing between relative and absolute risk. Although 14 times greater, the absolute risk of cycling is still small—1 fatality in 25 million kilometres cycled; not even Lance Armstrong can begin to cover that distance in a lifetime of cycling. And numerous studies have demonstrated that the extra relative risk is more than offset by the health benefits of regular cycling; regular cyclists live longer.
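For what it is worth, the arithmetic behind the relative-versus-absolute point is easy to redo. Taking the quoted figures at face value (one cycling fatality per 25 million kilometres, and a 14-fold per-kilometre relative risk), the implied figure for car travel is one fatality per 350 million kilometres; a trivial check:

    # Back-of-envelope check of the figures quoted above; both inputs come
    # straight from the quote, nothing here is independent data.
    km_per_cycling_fatality = 25_000_000   # 1 fatality per 25 million km cycled
    relative_risk_vs_car = 14              # per-km risk, cyclist vs. car occupant

    km_per_driving_fatality = km_per_cycling_fatality * relative_risk_vs_car
    print(f"Implied driving figure: 1 fatality per {km_per_driving_fatality:,} km")
    # -> 1 fatality per 350,000,000 km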

While people may voluntarily board planes, buses and trains, the popular reaction to crashes in which passengers are passive victims suggests that the public demand a higher standard of safety in circumstances in which people voluntarily hand over control of their safety to pilots, or to bus or train drivers.

Risks imposed by nature—such as those endured by those living on the San Andreas Fault or the slopes of Mount Etna—or impersonal economic forces—such as the vicissitudes of the global economy—are placed in the middle of the scale. Reactions vary widely. They are usually seen as motiveless and are responded to fatalistically – unless or until the threat appears imminent.

Imposed risks are less tolerated. Consider mobile phones. The risk associated with the handsets is either non-existent or very small. The risk associated with the base stations, measured by radiation dose, unless one is up the mast with an ear to the transmitter, is orders of magnitude less. Yet all round the world billions are queuing up to take the voluntary risk, and almost all the opposition is focussed on the base stations, which are seen by objectors as impositions. Because the radiation dose received from the handset increases with distance from the base station, to the extent that campaigns against the base stations are successful, they will increase the distance from the base station to the average handset, and thus the radiation dose. The base station risk, if it exists, might be labelled a benignly imposed risk; no one supposes that the phone company wishes to murder all those in the neighbourhood.

Less tolerated are risks whose imposers are perceived as motivated by profit or greed. In Europe, big biotech companies such as Monsanto are routinely denounced by environmentalist opponents for being more concerned with profits than the welfare of the environment or the consumers of their products.

Less tolerated still are malignly imposed risks—crimes ranging from mugging to rape and murder. In most countries in the world the number of deaths on the road far exceeds the number of murders, but far more people are sent to jail for murder than for causing death by dangerous driving. In the United States in 2002, 16,000 people were murdered—a statistic that evoked far more popular concern than the 42,000 killed on the road—but far less than the 25 killed by terrorists.

This isn’t a new result, but it’s vital to understand how people react to different risks.

Posted on April 13, 2010 at 1:18 PM

Terrorist Attacks and Comparable Risks, Part 1

Nice analysis by John Mueller and Mark G. Stewart:

There is a general agreement about risk, then, in the established regulatory practices of several developed countries: risks are deemed unacceptable if the annual fatality risk is higher than 1 in 10,000 or perhaps higher than 1 in 100,000 and acceptable if the figure is lower than 1 in 1 million or 1 in 2 million. Between these two ranges is an area in which risk might be considered “tolerable.”

These established considerations are designed to provide a viable, if somewhat rough, guideline for public policy. In all cases, measures and regulations intended to reduce risk must satisfy essential cost-benefit considerations. Clearly, hazards that fall in the unacceptable range should command the most attention and resources. Those in the tolerable range may also warrant consideration—but since they are less urgent, they should be combated with relatively inexpensive measures. Those hazards in the acceptable range are of little, or even negligible, concern, so precautions to reduce their risks even further would scarcely be worth pursuing unless they are remarkably inexpensive.
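Treated as a rule of thumb, the quoted bands are simple to write down. A trivial sketch (using the 1-in-10,000 boundary, one of the two upper figures mentioned, and invented example risks):

    def risk_band(annual_fatality_risk):
        # Bands taken from the quoted regulatory rule of thumb; the upper
        # boundary could equally be set at 1 in 100,000.
        if annual_fatality_risk > 1 / 10_000:
            return "unacceptable"
        if annual_fatality_risk < 1 / 1_000_000:
            return "acceptable"
        return "tolerable"

    # Invented example values, purely for illustration.
    for label, risk in [("1 in 5,000", 1 / 5_000),
                        ("1 in 200,000", 1 / 200_000),
                        ("1 in 3,500,000", 1 / 3_500_000)]:
        print(label, "->", risk_band(risk))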

[…]

As can be seen, annual terrorism fatality risks, particularly for areas outside of war zones, are less than one in one million and therefore generally lie within the range regulators deem safe or acceptable, requiring no further regulations, particularly those likely to be expensive. They are similar to the risks of using home appliances (200 deaths per year in the United States) or of commercial aviation (103 deaths per year). Compared with dying at the hands of a terrorist, Americans are twice as likely to perish in a natural disaster and nearly a thousand times more likely to be killed in some type of accident. The same general conclusion holds when the full damage inflicted by terrorists—not only the loss of life but direct and indirect economic costs—is aggregated. As a hazard, terrorism, at least outside of war zones, does not inflict enough damage to justify substantially increasing expenditures to deal with it.

[…]

To border on becoming unacceptable by established risk conventions—that is, to reach an annual fatality risk of 1 in 100,000—the number of fatalities from terrorist attacks in the United States and Canada would have to increase 35-fold; in Great Britain (excluding Northern Ireland), more than 50-fold; and in Australia, more than 70-fold. For the United States, this would mean experiencing attacks on the scale of 9/11 at least once a year, or 18 Oklahoma City bombings every year.
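The 9/11-a-year comparison falls straight out of the arithmetic: with a U.S. population of roughly 300 million (a round assumed figure), an annual fatality risk of 1 in 100,000 works out to about 3,000 deaths a year, which is roughly one 9/11 or eighteen Oklahoma City bombings.

    # Rough reproduction of the 1-in-100,000 benchmark quoted above; the
    # population and per-attack death tolls are approximate, widely cited figures.
    us_population = 300_000_000      # assumed round figure for around 2010
    threshold = 1 / 100_000          # annual risk "bordering on unacceptable"

    deaths_at_threshold = us_population * threshold
    deaths_9_11 = 2_977              # approximate 9/11 death toll
    deaths_okc = 168                 # Oklahoma City bombing death toll

    print(f"Fatalities implied by the threshold: ~{deaths_at_threshold:,.0f}/year")
    print(f"~{deaths_at_threshold / deaths_9_11:.1f} attacks on the 9/11 scale per year")
    print(f"~{deaths_at_threshold / deaths_okc:.0f} Oklahoma City bombings per year")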

Posted on April 13, 2010 at 6:07 AM

Should the Government Stop Outsourcing Code Development?

Information technology is increasingly everywhere, and it’s the same technologies everywhere. The same operating systems are used in corporate and government computers. The same software controls critical infrastructure and home shopping. The same networking technologies are used in every country. The same digital infrastructure underpins the small and the large, the important and the trivial, the local and the global; the same vendors, the same standards, the same protocols, the same applications.

With all of this sameness, you’d think these technologies would be designed to the highest security standard, but they’re not. They’re designed to the lowest or, at best, somewhere in the middle. They’re designed sloppily, in an ad hoc manner, with efficiency in mind. Security is a requirement, more or less, but it’s a secondary priority. It’s far less important than functionality, and security is what gets compromised when schedules get tight.

Should the government—ours, someone else’s?—stop outsourcing code development? That’s the wrong question to ask. Code isn’t magically more secure when it’s written by someone who receives a government paycheck than when it’s written by someone who receives a corporate paycheck. It’s not magically less secure when it’s written by someone who speaks a foreign language, or is paid by the hour instead of by salary. Writing all your code in-house isn’t even a viable option anymore; we’re all stuck with software written by who-knows-whom in who-knows-which-country. And we need to figure out how to get security from that.

The traditional solution has been defense in depth: layering one mediocre security measure on top of another mediocre security measure. So we have the security embedded in our operating system and applications software, the security embedded in our networking protocols, and our additional security products such as antivirus and firewalls. We hope that whatever security flaws—either found and exploited, or deliberately inserted—there are in one layer are counteracted by the security in another layer, and that when they’re not, we can patch our systems quickly enough to avoid serious long-term damage. That is a lousy solution when you think about it, but we’ve been more-or-less managing with it so far.
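The hope behind that layering can be put in one line of probability: if the layers fail independently (a generous assumption; correlated failures are common in practice), an attack succeeds only when every layer misses it. A toy sketch with invented per-layer miss rates:

    from math import prod

    # Hypothetical probabilities that each layer, on its own, misses an attack.
    miss_rates = {
        "OS and application security": 0.30,
        "network protocol security":   0.40,
        "antivirus and firewall":      0.50,
    }

    # Assuming independent failures, the attack must slip past every layer.
    p_breach = prod(miss_rates.values())
    print(f"Chance of slipping past all layers: {p_breach:.1%}")   # 6.0%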

Bringing all software—and hardware, I suppose—development in-house under some misconception that proximity equals security is not a better solution. What we need is to improve the software development process, so we can have some assurance that our software is secure—regardless of what coder, employed by what company, and living in what country, writes it. The key word here is “assurance.”

Assurance is less about developing new security techniques than about using the ones we already have. It’s all the things described in books on secure coding practices. It’s what Microsoft is trying to do with its Security Development Lifecycle. It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it fields a piece of avionics software. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems. But most of the time, we don’t care; commercial software, as insecure as it is, is good enough for most purposes.

Assurance is expensive, in terms of money and time, for both the process and the documentation. But the NSA needs assurance for critical military systems and Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be more common in government IT contracts.

The software used to run our critical infrastructure—government, corporate, everything—isn’t very secure, and there’s no hope of fixing it anytime soon. Assurance is really our only option to improve this, but it’s expensive and the market doesn’t care. Government has to step in and spend the money where its requirements demand it, and then we’ll all benefit when we buy the same software.

This essay first appeared in Information Security, as the second part of a point-counterpoint with Marcus Ranum. You can read Marcus’s essay there as well.

Posted on March 31, 2010 at 6:54 AM

Comprehensive National Cybersecurity Initiative

On Tuesday, the White House published an unclassified summary of its Comprehensive National Cybersecurity Initiative (CNCI). Howard Schmidt made the announcement at the RSA Conference. These are the 12 initiatives in the plan:

  • Initiative #1. Manage the Federal Enterprise Network as a single network enterprise with Trusted Internet Connections.
  • Initiative #2. Deploy an intrusion detection system of sensors across the Federal enterprise.
  • Initiative #3. Pursue deployment of intrusion prevention systems across the Federal enterprise.
  • Initiative #4. Coordinate and redirect research and development (R&D) efforts.
  • Initiative #5. Connect current cyber ops centers to enhance situational awareness.
  • Initiative #6. Develop and implement a government-wide cyber counterintelligence (CI) plan.
  • Initiative #7. Increase the security of our classified networks.
  • Initiative #8. Expand cyber education.
  • Initiative #9. Define and develop enduring “leap-ahead” technology, strategies, and programs.
  • Initiative #10. Define and develop enduring deterrence strategies and programs.
  • Initiative #11. Develop a multi-pronged approach for global supply chain risk management.
  • Initiative #12. Define the Federal role for extending cybersecurity into critical infrastructure domains.

While this transparency is good, in this sort of thing the devil is in the details—and we don’t have any details. We also don’t have any information about the legal authority for cybersecurity, and how much the NSA is, and should be, involved. Good commentary on that here. EPIC is suing the NSA to learn more about its involvement.

Posted on March 4, 2010 at 12:55 PM

Fixing Intelligence Failures

President Obama, in his speech last week, rightly focused on fixing the intelligence failures that resulted in Umar Farouk Abdulmutallab being ignored, rather than on technologies targeted at the details of his underwear-bomb plot. But while Obama’s instincts are right, reforming intelligence for this new century and its new threats is a more difficult task than he might like. We don’t need new technologies, new laws, new bureaucratic overlords, or—for heaven’s sake—new agencies. What prevents information sharing among intelligence organizations is the culture of the generation that built those organizations.

The U.S. intelligence system is a sprawling apparatus, spanning the FBI and the State Department, the CIA and the National Security Agency, and the Department of Homeland Security—itself an amalgamation of two dozen different organizations—designed and optimized to fight the Cold War. The single, enormous adversary then was the Soviet Union: as bureaucratic as they come, with a huge budget, and capable of very sophisticated espionage operations. We needed to defend against technologically advanced electronic eavesdropping operations, their agents trying to bribe or seduce our agents, and a worldwide intelligence gathering capability that hung on our every word.

In that environment, secrecy was paramount. Information had to be protected by armed guards and double fences, shared only among those with appropriate security clearances and a legitimate “need to know,” and it was better not to transmit information at all than to transmit it insecurely.

Today’s adversaries are different. There are still governments, like China, who are after our secrets. But the secrets they’re after are more often corporate than military, and most of the other organizations of interest are like al Qaeda: decentralized, poorly funded and incapable of the intricate spy versus spy operations the Soviet Union could pull off.

Against these adversaries, sharing is far more important than secrecy. Our intelligence organizations need to trade techniques and expertise with industry, and they need to share information among the different parts of themselves. Today’s terrorist plots are loosely organized ad hoc affairs, and those dots that are so important for us to connect beforehand might be on different desks, in different buildings, owned by different organizations.

Critics have pointed to laws that prohibited inter-agency sharing but, as the 9/11 Commission found, the law allows for far more sharing than goes on. It doesn’t happen because of inter-agency rivalries, a reliance on outdated information systems, and a culture of secrecy. What we need is an intelligence community that shares ideas and hunches and facts on their versions of Facebook, Twitter and wikis. We need the bottom-up organization that has made the Internet the greatest collection of human knowledge and ideas ever assembled.

The problem is far more social than technological. Teaching your mom to “text” and your dad to Twitter doesn’t make them part of the Internet generation, and giving all those cold warriors blogging lessons won’t change their mentality—or the culture. The reason this continues to be a problem, the reason President George W. Bush couldn’t change things even after the 9/11 Commission came to much the same conclusions as President Obama’s recent review did, is generational. The Internet is the greatest generation gap since rock and roll, and it’s just as true inside government as out. We might have to wait for the elders inside these agencies to retire and be replaced by people who grew up with the Internet.

A version of this op-ed previously appeared in the San Francisco Chronicle.

I wrote about this in 2002.

EDITED TO ADD (1/17): Another opinion.

Posted on January 16, 2010 at 7:13 AM

