Entries Tagged "Department of Defense"

Pentagon Consulting Social Scientists on Security

This seems like a good idea:

Eager to embrace eggheads and ideas, the Pentagon has started an ambitious and unusual program to recruit social scientists and direct the nation’s brainpower to combating security threats like the Chinese military, Iraq, terrorism and religious fundamentalism.

The article talks a lot about potential conflicts of interest and such, and less on what sorts of insights the social scientists can offer. I think there is a lot of potential value here.

Posted on June 30, 2008 at 12:13 PM

Pentagon May Issue Pocket Lie Detectors to Afghan Soldiers

This is just ridiculous. Lie detectors are pseudo-science at best, and even the Pentagon knows it:

The Pentagon, in a PowerPoint presentation released to msnbc.com through a Freedom of Information Act request, says the PCASS is 82 to 90 percent accurate. Those are the only accuracy numbers that were sent up the chain of command at the Pentagon before the device was approved.

But Pentagon studies obtained by msnbc.com show a more complicated picture: In calculating its accuracy, the scientists conducting the tests discarded the yellow screens, or inconclusive readings.

That practice was criticized in the 2003 National Academy study, which said the “inconclusives” have to be included to measure accuracy. If you take into account the yellow screens, the PCASS accuracy rate in the three Pentagon-funded tests drops to the level of 63 to 79 percent.
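A back-of-the-envelope sketch makes the statistical sleight of hand concrete. The counts below are made up for illustration, not the Pentagon's actual test data; they simply show how discarding inconclusive readings inflates a reported accuracy figure:

```python
# Hypothetical screening results: 130 subjects, 30 of whom get an
# inconclusive ("yellow screen") reading.  All counts are invented.
correct, wrong, inconclusive = 90, 10, 30

# Accuracy as reported when inconclusives are discarded:
reported = correct / (correct + wrong)

# Accuracy when every screening, including inconclusives, is counted:
honest = correct / (correct + wrong + inconclusive)

print(f"reported accuracy: {reported:.0%}")  # 90%
print(f"honest accuracy:   {honest:.0%}")    # 69%
```

With these invented numbers, throwing out the yellow screens turns a 69% result into a 90% one, which is the same order of inflation the National Academy study criticized.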

Posted on April 14, 2008 at 12:57 PM

Idiotic Cryptography Reporting

Oh, this is funny:

A team of researchers and engineers at a UK division of Franco-German aerospace giant EADS has developed what it believes is the world’s first hacker-proof encryption technology for the internet.

[…]

Gordon Duncan, the division’s government and commercial sales manager, said he was convinced that sensitive data could now be sent across the world without fear of it being spied on by hackers. “All the computer technology in the world cannot break it,” he said yesterday.

At the heart of the system is the lightning speed with which the “keys” needed to enter the computer systems can be scrambled and re-formatted. Just when a hacker thinks he or she has broken the code, the code changes. “There is nothing to compare with it,” said Mr Duncan.

EADS is in talks with the Pentagon about supplying the US military with the system, although some American defence companies are also working on what they believe will be fool-proof encryption systems.

Snake oil, absolute snake oil.

EDITED TO ADD (9/26): Steve Bellovin, who knows what he’s talking about, writes:

Actually, it’s not snake oil, it’s very solid—till it got to Marketing. The folks at EADS built a high-assurance, Type I (or the British equivalent) IP encryptor—a HAIPE, in NSA-speak. Their enemy isn’t “hackers”, it’s the PLA and the KGB++. See this and this.

Of course, Marketing did get hold of it.

David Lacey makes the same point here.

Posted on September 24, 2007 at 1:58 PM

Pentagon Hacked by Chinese Military

The story seems to have started yesterday in the Financial Times, and is now spreading.

Not enough details to know what’s really going on, though. From the FT:

The Chinese military hacked into a Pentagon computer network in June in the most successful cyber attack on the US defence department, say American officials.

The Pentagon acknowledged shutting down part of a computer system serving the office of Robert Gates, defence secretary, but declined to say who it believed was behind the attack.

Current and former officials have told the Financial Times an internal investigation has revealed that the incursion came from the People’s Liberation Army.

One senior US official said the Pentagon had pinpointed the exact origins of the attack. Another person familiar with the event said there was a “very high level of confidence…trending towards total certainty” that the PLA was responsible. The defence ministry in Beijing declined to comment on Monday.

EDITED TO ADD (9/13): Another good commentary.

Posted on September 4, 2007 at 10:44 AM

Poppy Coins Are Not Radio Transmitters

Remember the weird story about radio transmitters found in Canadian coins in order to spy on Americans?

Complete nonsense:

The worried contractors described the coins as “anomalous” and “filled with something man-made that looked like nanotechnology,” according to once-classified U.S. government reports and e-mails obtained by the AP.

The silver-colored 25-cent piece features the red image of a poppy—Canada’s flower of remembrance—inlaid over a maple leaf. The unorthodox quarter is identical to the coins pictured and described as suspicious in the contractors’ accounts.

The supposed nanotechnology actually was a conventional protective coating the Royal Canadian Mint applied to prevent the poppy’s red color from rubbing off. The mint produced nearly 30 million such quarters in 2004 commemorating Canada’s 117,000 war dead.

“It did not appear to be electronic [analog] in nature or have a power source,” wrote one U.S. contractor, who discovered the coin in the cup holder of a rental car. “Under high power microscope, it appeared to be complex consisting of several layers of clear, but different material, with a wire-like mesh suspended on top.”

The confidential accounts led to a sensational warning from the Defense Security Service, an agency of the Defense Department, that mysterious coins with radio frequency transmitters were found planted on U.S. contractors with classified security clearances on at least three separate occasions between October 2005 and January 2006 as the contractors traveled through Canada.

One contractor believed someone had placed two of the quarters in an outer coat pocket after the contractor had emptied the pocket hours earlier. “Coat pockets were empty that morning and I was keeping all of my coins in a plastic bag in my inner coat pocket,” the contractor wrote.

Posted on May 9, 2007 at 11:28 AM

New Directions in Chemical Warfare

From New Scientist:

The Pentagon considered developing a host of non-lethal chemical weapons that would disrupt discipline and morale among enemy troops, newly declassified documents reveal.

Most bizarre among the plans was one for the development of an “aphrodisiac” chemical weapon that would make enemy soldiers sexually irresistible to each other. Provoking widespread homosexual behaviour among troops would cause a “distasteful but completely non-lethal” blow to morale, the proposal says.

Other ideas included chemical weapons that attract swarms of enraged wasps or angry rats to troop positions, making them uninhabitable. Another was to develop a chemical that caused “severe and lasting halitosis”, making it easy to identify guerrillas trying to blend in with civilians. There was also the idea of making troops’ skin unbearably sensitive to sunlight.

Technology always gets better; it never gets worse. There will be a time, probably in our lifetimes, when weapons like these will be real.

Posted on June 9, 2006 at 1:33 PM

Movie Clip Mistaken for Al Qaeda Video

Oops:

Reuters quoted a Pentagon official, Dan Devlin, as saying, “What we have seen is that any video game that comes out… (al Qaeda will) modify it and change the game for their needs.”

The influential committee, chaired by Rep. Peter Hoekstra (R-MI), watched footage of animated combat in which characters depicted as Islamic insurgents killed U.S. troops in battle. The video began with the voice of a male narrator saying, “I was just a boy when the infidels came to my village in Blackhawk helicopters…”

Several GP readers immediately noticed that the voice-over was actually lifted from Team America: World Police, an outrageous 2004 satirical film produced by the creators of the popular South Park comedy series. At about the same time, gamers involved in the online Battlefield 2 community were pointing out the video footage shown to Congress was not a mod of BF2 at all, but standard game footage from EA’s Special Forces BF2 add-on module, a retail product widely available in the United States and elsewhere.

Posted on May 24, 2006 at 2:14 PM

Data Mining for Terrorists

In the post 9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost—in money, liberties, etc.—then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern—purchase expensive luxury goods, purchase things that can be easily fenced, etc.—and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls—as long as they’re infrequent—so the cost is just a few minutes of operator time.

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) cards are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events—things that the data mining system will have to “look at”—and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events—e-mails, phone calls, purchases, web surfings, whatever—per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.

This unrealistically-accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day—but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
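The arithmetic above is easy to check. A few lines of Python, using the essay's own assumptions (one trillion events, ten real plots, a 1-in-100 false-positive rate, a 1-in-1,000 false-negative rate), reproduce the numbers:

```python
# False-alarm arithmetic for a hypothetical national data mining system.
def false_alarm_stats(total_events, real_plots, fp_rate, fn_rate):
    innocent = total_events - real_plots
    false_alarms = innocent * fp_rate          # innocent events flagged per year
    plots_caught = real_plots * (1 - fn_rate)  # real plots actually detected
    return {
        "per_day": false_alarms / 365,
        "per_plot_caught": false_alarms / plots_caught,
    }

# The essay's optimistic assumptions.
base = false_alarm_stats(1e12, 10, 0.01, 0.001)
print(f"{base['per_day']:,.0f} false alarms per day")
print(f"{base['per_plot_caught']:,.0f} false alarms per real plot caught")

# Even at an absurd 99.9999% false-positive accuracy:
better = false_alarm_stats(1e12, 10, 1e-6, 0.001)
print(f"{better['per_day']:,.0f} false alarms per day")
```

The first case works out to roughly 27 million false alarms a day and about a billion per real plot caught; the second still leaves about 2,740 a day to chase.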

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are also rare, so any “test” is going to result in an endless stream of false alarms.
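The medical analogy can be made precise with Bayes’ rule. Using hypothetical round numbers (a 99%-accurate test, a disease afflicting 1 person in 10,000), the chance that a positive result actually means disease is tiny:

```python
# Bayes' rule for a rare condition and a highly accurate test.
# The prevalence and accuracy figures are illustrative, not from any study.
prevalence = 1 / 10_000
sensitivity = 0.99   # P(test positive | diseased)
specificity = 0.99   # P(test negative | healthy)

# Overall probability of a positive result: true positives plus false positives.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Probability the patient is actually diseased, given a positive result.
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"{p_disease_given_positive:.1%}")  # about 1% -- most positives are false
```

Even with a test that is right 99% of the time, roughly 99 out of 100 positives come from healthy people, simply because healthy people vastly outnumber sick ones. The same base-rate logic dooms terrorist-plot detection.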

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low—a phone call from a Visa operator, or an uninteresting ad—and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM

Surreptitious Lie Detector

According to The New Scientist:

THE US Department of Defense has revealed plans to develop a lie detector that can be used without the subject knowing they are being assessed. The Remote Personnel Assessment (RPA) device will also be used to pinpoint fighters hiding in a combat zone, or even to spot signs of stress that might mark someone out as a terrorist or suicide bomber.

“Revealed plans” is a bit of an overstatement. It seems that they’re just asking for proposals:

In a call for proposals on a DoD website, contractors are being given until 13 January to suggest ways to develop the RPA, which will use microwave or laser beams reflected off a subject’s skin to assess various physiological parameters without the need for wires or skin contacts. The device will train a beam on “moving and non-cooperative subjects”, the DoD proposal says, and use the reflected signal to calculate their pulse, respiration rate and changes in electrical conductance, known as the “galvanic skin response”. “Active combatants will in general have heart, respiratory and galvanic skin responses that are outside the norm,” the website says.

The DoD asks for pie-in-the-sky stuff all the time. For example, they’ve wanted a synthetic blood substitute for decades. A surreptitious lie detector would be pretty neat.

Posted on January 20, 2006 at 12:37 PM
