Blog: October 2009 Archives

Friday Squid Blogging: Humboldt Squid in Canada

They’re washing ashore on Vancouver Island.

Scientists have begun attaching tracking devices to squid off the coast of Vancouver Island to find out why the marine animals have wandered so far from their traditional territory.

They also hope to find out why the squid have been beaching themselves and dying by the hundreds this summer near the town of Tofino on the island’s west coast.

Two large batches of Humboldt squid washed ashore, one in August, then another in September. The Humboldt is a species of squid that, up to now, has been associated with waters warmer than those found off Vancouver Island.

Posted on October 30, 2009 at 4:15 PM | 5 Comments

A Critical Essay on the TSA

A critical essay on the TSA from a former assistant police chief:

This is where I find myself now obsessing over TSA policy, or its apparent lack. Every one of us goes to work each day harboring prejudice. This is simply human nature. What I have witnessed in law enforcement over the course of the last two decades serves to remind me how active and passive prejudice can undermine public trust in important institutions, like police agencies. And TSA.

Over the last fifteen years or so, many police agencies started capturing data on police interactions. The primary purpose was to document what had historically been undocumented: informal street contacts. By capturing specific data, we were able to ask ourselves tough questions about potentially biased policing. Many agencies are still struggling with the answers to those questions.

Regardless, the data permitted us to detect problematic patterns, commonly referred to as passive discrimination. This is a type of discrimination that occurs when we are not aware of how our own biases affect our decisions. This kind of bias must be called to our attention, and there must be accountability to correct it.

One of the most troubling observations I made, at both Albany and BWI, was that—aside from the likely notation in a log (that no one will ever look at)—there was no information captured and I was asked no questions, aside from whether or not I wanted to change my mind.

Given that TSA interacts with tens if not hundreds of millions of travelers each year, it is incredible to me that we, the stewards of homeland security, have failed to insist that data capturing and analysis should occur in a manner similar to what local police agencies have been doing for many years.

EDITED TO ADD (11/12): Follow-on essay by the same person.

Posted on October 29, 2009 at 6:41 AM | 49 Comments

Best Buy Sells Surveillance Tracker

Only $99.99:

Keep tabs on your child at all times with this small but sophisticated device that combines GPS and cellular technology to provide you with real-time location updates. The small and lightweight Little Buddy transmitter fits easily into a backpack, lunchbox or other receptacle, making it easy for your child to carry so you can check his or her location at any time using a smartphone or computer. Customizable safety checks allow you to establish specific times and locations where your child is supposed to be—for example, in school—causing the device to alert you with a text message if your child leaves the designated area during that time. Additional real-time alerts let you know when the device’s battery is running low so you can take steps to ensure your monitoring isn’t interrupted.

Presumably it can also be used to track people who aren’t your kids.

EDITED TO ADD (11/12): You can also use an iPhone as a tracking device.

Posted on October 28, 2009 at 1:28 PM | 64 Comments

Psychology and Security Resource Page

Ross Anderson has put together a great resource page on security and psychology:

At a deeper level, the psychology of security touches on fundamental scientific and philosophical problems. The ‘Machiavellian Brain’ hypothesis states that we evolved high intelligence not to make better tools, but to use other monkeys better as tools: primates who were better at deception, or at detecting deception in others, left more descendants. Conflict is also deeply tied up with social psychology and anthropology, while evolutionary explanations for the human religious impulse involve both trust and conflict. The dialogue between researchers in security and in psychology has thus been widening, bringing in people from usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other. We believe that this new discipline will increasingly become one of the active contact points between computing and psychology—an exchange that has hugely benefited both disciplines for over a generation.

Posted on October 28, 2009 at 6:48 AM | 31 Comments

CIA Invests in Social-Network Datamining

From Wired:

In-Q-Tel, the investment arm of the CIA and the wider intelligence community, is putting cash into Visible Technologies, a software firm that specializes in monitoring social media. It’s part of a larger movement within the spy services to get better at using “open source intelligence”—information that’s publicly available, but often hidden in the flood of TV shows, newspaper articles, blog posts, online videos and radio reports generated every day.

Here’s the Visible Technologies press release on the funding.

Posted on October 26, 2009 at 6:53 AM | 36 Comments

"Evil Maid" Attacks on Encrypted Hard Drives

Earlier this month, Joanna Rutkowska implemented the “evil maid” attack against TrueCrypt. The same kind of attack should work against any whole-disk encryption, including PGP Disk and BitLocker. Basically, the attack works like this:

Step 1: Attacker gains access to your shut-down computer and boots it from a separate volume. The attacker writes a hacked bootloader onto your system, then shuts it down.

Step 2: You boot your computer using the attacker’s hacked bootloader, entering your encryption key. Once the disk is unlocked, the hacked bootloader does its mischief. It might install malware to capture the key and send it over the Internet somewhere, or store it in some location on the disk to be retrieved later, or whatever.

You can see why it’s called the “evil maid” attack; a likely scenario is that you leave your encrypted computer in your hotel room when you go out to dinner, and the maid sneaks in and installs the hacked bootloader. The same maid could even sneak back the next night and erase any traces of her actions.

This attack exploits the same basic vulnerability as the “Cold Boot” attack from last year, and the “Stoned Boot” attack from earlier this year, and there’s no real defense to this sort of thing. As soon as you give up physical control of your computer, all bets are off.

Similar hardware-based attacks were among the main reasons why Symantec’s CTO Mark Bregman was recently advised by “three-letter agencies in the US Government” to use a separate laptop and mobile device when traveling to China, citing potential hardware-based compromise.

PGP sums it up in their blog.

No security product on the market today can protect you if the underlying computer has been compromised by malware with root level administrative privileges. That said, there exist well-understood, common-sense defenses against “Cold Boot,” “Stoned Boot,” “Evil Maid,” and many other attacks yet to be named and publicized.

The defenses are basically two-factor authentication: a token you don’t leave in your hotel room for the maid to find and use. The maid could still corrupt the machine, but it’s more work than just storing the password for later use. Putting your data on a thumb drive and taking it with you doesn’t work; when you return you’re plugging your thumb drive into a corrupted machine.
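
As a stopgap, here is a minimal sketch of a tripwire along those lines: boot from a USB token you keep with you, and compare a hash of the machine’s unencrypted boot blocks against a reference stored on that token. The device path, region size, and file locations are illustrative assumptions, not a vetted tool.

    #!/usr/bin/env python3
    # Tripwire sketch: hash the unencrypted boot region and compare it with
    # a known-good hash kept on a USB token you carry. The paths and sizes
    # here are illustrative assumptions; adapt them to your own disk layout.
    # Needs root to read the raw device.
    import hashlib
    import sys

    BOOT_DEVICE = "/dev/sda"                   # disk holding the bootloader
    BOOT_REGION_BYTES = 63 * 512               # MBR plus boot-track sectors
    REFERENCE_FILE = "/mnt/token/boot.sha256"  # reference hash on the token

    def hash_boot_region():
        with open(BOOT_DEVICE, "rb") as disk:
            return hashlib.sha256(disk.read(BOOT_REGION_BYTES)).hexdigest()

    if __name__ == "__main__":
        with open(REFERENCE_FILE) as f:
            reference = f.read().strip()
        if hash_boot_region() != reference:
            sys.exit("WARNING: boot region changed; do NOT enter your passphrase")
        print("Boot region matches the recorded hash.")

A check like this only detects a swapped bootloader after the fact, and compromised firmware can lie to it.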

The real defense here is trusted boot, something Trusted Computing is supposed to enable. But Trusted Computing has its own problems, which is why we haven’t seen anything out of Microsoft in the seven-plus years they have been working on it (I wrote this in 2002 about what they then called Palladium).

In the meantime, people who encrypt their hard drives, or partitions on their hard drives, have to realize that the encryption gives them less protection than they probably believe. It protects against someone confiscating or stealing their computer and then trying to get at the data. It does not protect against an attacker who has access to your computer over a period of time during which you use it, too.

EDITED TO ADD (10/23): A few readers have pointed out that BitLocker, the one thing that has come out of Microsoft’s Trusted Computing initiative in the seven-plus years they’ve been working on it, can prevent these sorts of attacks if the computer has a TPM module, version 1.2 or later, on the motherboard. (Note: Not all computers do.) I actually knew that; I just didn’t remember it.

EDITED TO ADD (11/12): Peter Kleissner’s Stoned Boot attacks on TrueCrypt.

EDITED TO ADD (12/9): A similar attack is possible against BitLocker with a TPM.

Posted on October 23, 2009 at 6:43 AM | 187 Comments

James Bamford on the NSA

James Bamford—author of The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America—writes about the NSA’s new data center in Utah as he reviews another book: The Secret Sentry: The Untold History of the National Security Agency:

Just how much information will be stored in these windowless cybertemples? A clue comes from a recent report prepared by the MITRE Corporation, a Pentagon think tank. “As the sensors associated with the various surveillance missions improve,” says the report, referring to a variety of technical collection methods, “the data volumes are increasing with a projection that sensor data volume could potentially increase to the level of Yottabytes (10^24 bytes) by 2015.” Roughly equal to about a septillion (1,000,000,000,000,000,000,000,000) pages of text, numbers beyond Yottabytes haven’t yet been named. Once vacuumed up and stored in these near-infinite “libraries,” the data are then analyzed by powerful infoweapons, supercomputers running complex algorithmic programs, to determine who among us may be—or may one day become—a terrorist.

[…]

Aid concludes that the biggest problem facing the agency is not the fact that it’s drowning in untranslated, indecipherable, and mostly unusable data, problems that the troubled new modernization plan, Turbulence, is supposed to eventually fix. “These problems may, in fact, be the tip of the iceberg,” he writes. Instead, what the agency needs most, Aid says, is more power. But the type of power to which he is referring is the kind that comes from electrical substations, not statutes. “As strange as it may sound,” he writes, “one of the most urgent problems facing NSA is a severe shortage of electrical power.” With supercomputers measured by the acre and estimated $70 million annual electricity bills for its headquarters, the agency has begun browning out, which is the reason for locating its new data centers in Utah and Texas.

Of course, that yottabyte number is hyperbole. The problem with all of that data is that there’s no time to process it. Think of it as trying to drink from a fire hose. The NSA has to make lightning-fast real-time decisions about what to save for later analysis. And there’s not a lot of time for later analysis; more data is coming constantly at the same fire-hose rate.

Bamford’s entire article is worth reading. He summarizes some of the things he talks about in his book: the inability of the NSA to predict national security threats (9/11 being one such failure) and the manipulation of intelligence data for political purposes.

Posted on October 22, 2009 at 6:10 AM | 44 Comments

Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure and Ballmer’s faulting security is a bit of a case of being careful what you wish for. Vista (codenamed “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative that advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT and 2000 were actually a result of unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, a Vista locked down out of the box made it less palatable to users.

Security obstacles weren’t the only ills that Vista suffered. A huge memory footprint, incompatible graphics requirements, slow responsiveness, and a general sense that it was already behind competing Mac and Linux OSes in functionality and features made Vista thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs; it was the other technical requirements and incompatible applications that doomed this operating system.

There was also the problem of Vista’s endless security warnings. The problem is that they were almost always false alarms, and there were no adverse effects of ignoring them. So users did, which means they ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM | 83 Comments

Australian Man Receives Reduced Sentence Due to Encryption

From the Courier-Mail:

A man who established a sophisticated network of peepholes and cameras to spy on his flatmates has escaped a jail sentence after police were unable to crack an encryption code on his home computer.

[…]

They found a series of holes drilled into walls and ceilings throughout the Surfers Paradise apartment with wires leading back to Wyllie’s bedroom.

Police seized his personal computer, but files were encrypted and a video camera was not plugged in.

[…]

In passing sentence, Judge Devereaux took into account the 33 days Wyllie had spent in custody after being arrested and ordered that two years’ probation was sufficient punishment, given that there was no hard evidence proving he had secretly recorded his flatmates.

Posted on October 21, 2009 at 7:19 AM | 50 Comments

TSA Successfully Defends Itself

Story here. Basically, a woman posts a horrible story of how she was mistreated by the TSA, and the TSA responds by releasing the video showing that she was lying.

There was a similar story in 2007. Then, I wrote:

Why is it that we all—myself included—believe these stories? Why are we so quick to assume that the TSA is a bunch of jack-booted thugs, officious and arbitrary and drunk with power?

It’s because everything seems so arbitrary, because there’s no accountability or transparency in the DHS. Rules and regulations change all the time, without any explanation or justification. Of course this kind of thing induces paranoia. It’s the sort of thing you read about in history books about East Germany and other police states. It’s not what we expect out of 21st century America.

The problem is larger than the TSA, but the TSA is the part of “homeland security” that the public comes into contact with most often—at least the part of the public that writes about these things most. They’re the public face of the problem, so of course they’re going to get the lion’s share of the finger pointing.

It was smart public relations on the TSA’s part to get the video of the incident on the Internet quickly, but it would be even smarter for the government to restore basic constitutional liberties to our nation’s counterterrorism policy. Accountability and transparency are basic building blocks of any democracy; and the more we lose sight of them, the more we lose our way as a nation.

EDITED TO ADD (11/12): Follow up by the woman who posted the original story. She claims that the TSA’s video is incomplete, and omits the part where she is separated from her son. I don’t believe her.

Posted on October 20, 2009 at 1:11 PM | 55 Comments

Computer Card Counter Detects Human Card Counters

All it takes is a computer that can track every card:

The anti-card-counter system uses cameras to watch players and keep track of the actual “count” of the cards, the same way a player would. It also measures how much each player is betting on each hand, and it syncs up the two data points to look for patterns in the action. If a player is betting big when the count is indeed favorable, and keeping his chips to himself when it’s not, he’s fingered by the computer… and, in the real world, he’d probably receive a visit from a burly dude in a bad suit, too.

The system reportedly works even if the gambler intentionally attempts to mislead it with high bets at unfavorable times.

Of course it does; it’s just a signal-to-noise problem.
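
To see why, here is a toy sketch of the correlation test such a system might run. The Hi-Lo counting values, the made-up rounds, and the 0.5 flagging threshold are illustrative assumptions on my part; the product’s actual internals aren’t public.

    # Toy sketch of the detection logic: keep a Hi-Lo running count from the
    # cards the cameras see, then test whether a player's bets correlate with
    # the count he knew when betting.
    HI_LO = {"2": 1, "3": 1, "4": 1, "5": 1, "6": 1,
             "7": 0, "8": 0, "9": 0,
             "10": -1, "J": -1, "Q": -1, "K": -1, "A": -1}

    def correlation(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    def flags_counter(rounds, threshold=0.5):
        """rounds: list of (bet_size, cards_dealt_that_round) tuples."""
        running, counts, bets = 0, [], []
        for bet, cards in rounds:
            counts.append(running)   # the count the player knew at bet time
            bets.append(bet)
            for card in cards:       # update the count as the round is dealt
                running += HI_LO[card]
        return correlation(counts, bets) > threshold

    # A player who bets big exactly when the count is high gets flagged:
    rounds = [(10, ["K", "5", "2"]), (10, ["4", "6", "3"]),
              (100, ["9", "J", "Q"]), (100, ["A", "10", "8"])]
    print(flags_counter(rounds))  # True

Deliberately misleading bets just lower the correlation; given enough hands, a real counter’s signal still emerges from the noise.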

I have long been impressed with the casino industry’s ability to, in the case of blackjack, convince the gambling public that using strategy equals cheating.

Posted on October 20, 2009 at 6:16 AM | 54 Comments

Six Years of Patch Tuesdays

Nice article summing up six years of Microsoft Patch Tuesdays:

The total number of flaws disclosed and patched by the software maker so far this year stands at around 160, more than the 155 or so that Microsoft reported for all of 2008. The number of flaws reported in Microsoft products over the last two years is more than double the number of flaws disclosed in 2004 and 2005, the first two full years of Patch Tuesdays.

The last time Microsoft did not release any patches on a Patch Tuesday was March 2007, more than 30 months ago. In the past six years, Microsoft had just four patch-free months—two of which were in 2005. In contrast, the company has issued patches for 10 or more vulnerabilities on more than 20 occasions and patches for 20 or more flaws in a single month on about 10 occasions, including yesterday.

I wrote about the “patch treadmill,” pointing out that there are simply too many patches and that it’s impossible to keep up:

Security professionals are quick to blame system administrators who don’t install every patch. “They should have updated their systems; it’s their own fault when they get hacked.” This is beginning to feel a lot like blaming the victim. “He should have known not to walk down that deserted street; it’s his own fault he was mugged.” “She should never have dressed that provocatively; it’s her own fault she was attacked.” Perhaps such precautions should have been taken, but the real blame lies elsewhere.

Those who manage computer networks are people too, and people don’t always do the smartest thing. They know they’re supposed to install all patches. But sometimes they can’t take critical systems off-line. Sometimes they don’t have the staffing available to patch every system on their network. Sometimes applying a patch breaks something else on their network. I think it’s time the industry realized that expecting the patch process to improve network security just doesn’t work.

Patching is essentially an impossible problem. A patch needs to be incredibly well-tested. It has to work, without tweaking, on every configuration of the software out there. And for security reasons, it needs to be pushed out to users within days—hours, if possible. These two requirements are mutually contradictory: you can’t have a piece of software that is both well-tested and quickly written.

Before October 2003, Microsoft’s patching was a mess. Patches weren’t well-tested. They broke systems so frequently that many sysadmins wouldn’t install them without extensive testing. There were jokes that a Microsoft patch was indistinguishable from a DoS attack.

In 2003, Microsoft went to a once-a-month patching cycle, and I think it’s been a resounding success. Microsoft’s patches are much better tested. They’re much less likely to break other things. And, as a result, many more people have turned on automatic update, meaning that many more people have their patches up to date. The downside is that the window of exposure—the time period between a vulnerability’s release and the availability of a patch—is longer. Patch Tuesdays might be the best we can do, but the whole patching system is fundamentally broken. This is what I wrote last year:

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.

Posted on October 19, 2009 at 3:38 PM | 60 Comments

Helpful Hint for Fugitives: Don't Update Your Location on Facebook

Fugitive caught after updating his status on Facebook.

Investigators scoured social networking sites such as Facebook and MySpace but initially could find no trace of him and were unable to pin down his location in Mexico.

Several months later, a secret service agent, Seth Reeg, checked Facebook again and up popped Maxi Sopo. His photo showed him partying in front of a backdrop featuring logos of BMW and Courvoisier cognac, sporting a black jacket adorned with a not-so-subtle white lion.

Although Sopo’s profile was set to private, his list of friends was not. Scoville started combing through it and was surprised to see that one friend listed an affiliation with the justice department. He sent a message requesting a phone call.

“We figured this was a person we could probably trust to keep our inquiry discreet,” Scoville said.

Proving the 2.0 adage that a friend on Facebook is rarely a friend indeed, the former official said he had met Sopo in Cancun’s nightclubs a few times, but did not really know him and had no idea he was a fugitive. The official learned where Sopo was living and passed that information back to Scoville, who provided it to Mexican authorities. They arrested Sopo last month.

It’s easy to say “so dumb,” and it would be true, but what’s interesting is how people just don’t think through the privacy implications of putting their information on the Internet. Facebook is how we interact with friends, and we think of it in the frame of interacting with friends. We don’t think that our employers might be looking—they’re not our friends!—that the information will be around forever, or that it might be abused. Privacy isn’t salient; chatting with friends is.

Posted on October 19, 2009 at 7:55 AM | 28 Comments

The Commercial Speech Arms Race

A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else’s stuff, then call the police.

I was reminded of this recently when a group of Israeli scientists demonstrated that it’s possible to fabricate DNA evidence. So now, instead of leaving your own DNA at a crime scene, you can leave fabricated DNA. And it isn’t even necessary to fabricate. In Charlie Stross’s novel Halting State, the bad guys foul a crime scene by blowing around the contents of a vacuum cleaner bag, containing the DNA of dozens, if not hundreds, of people.

This kind of thing has been going on forever. It’s an arms race, and when technology changes, the balance between attacker and defender changes. But when automated systems do the detecting, the results are different. Face recognition software can be fooled by cosmetic surgery, or sometimes even just a photograph. And when fooling them becomes harder, the bad guys fool them on a different level. Computer-based detection gives the defender economies of scale, but the attacker can use those same economies of scale to defeat the detection system.

Google, for example, has anti-fraud systems that detect—and shut down—advertisers who try to inflate their revenue by repeatedly clicking on their own AdSense ads. So people built bots to repeatedly click on the AdSense ads of their competitors, trying to convince Google to kick them out of the system.

Similarly, when Google started penalizing a site’s search engine rankings for having “bad neighbors”—backlinks from link farms, adult or gambling sites, or blog spam—people engaged in sabotage: they built link farms and left blog comment spam linking to their competitors’ sites.

The same sort of thing is happening on Yahoo Answers. Initially, companies would leave answers pushing their products, but Yahoo started policing this. So people have written bots to report abuse on all their competitors. There are Facebook bots doing the same sort of thing.

Last month, Google introduced Sidewiki, a browser feature that lets you read and post comments on virtually any webpage. People and industries are already worried about the effects unrestrained commentary might have on their businesses, and how they might control the comments. I’m sure Google has sophisticated systems ready to detect commercial interests that try to take advantage of the system, but are they ready to deal with commercial interests that try to frame their competitors? And do we want to give one company the power to decide which comments should rise to the top and which get deleted?

Whenever you build a security system that relies on detection and identification, you invite the bad guys to subvert the system so it detects and identifies someone else. Sometimes this is hard—leaving someone else’s fingerprints on a crime scene is hard, as is using a mask of someone else’s face to fool a guard watching a security camera—and sometimes it’s easy. But when automated systems are involved, it’s often very easy. It’s not just hardened criminals who try to frame each other; it’s mainstream commercial interests, too.

With systems that police internet comments and links, there’s money involved in commercial messages—so you can be sure some will take advantage of it. This is the arms race. Build a detection system, and the bad guys try to frame someone else. Build a detection system to detect framing, and the bad guys try to frame someone else framing someone else. Build a detection system to detect framing of framing, and, well, there’s no end, really. Commercial speech is on the internet to stay; we can only hope that it doesn’t pollute the social systems we use so badly that they’re no longer useful.

This essay originally appeared in The Guardian.

Posted on October 16, 2009 at 8:56 AM | 29 Comments

The Bizarre Consequences of "Zero Tolerance" Weapons Policies at Schools

Good article:

Zachary’s offense? [He’s six years old.] Taking a camping utensil that can serve as a knife, fork and spoon to school. He was so excited about recently joining the Cub Scouts that he wanted to use it at lunch. School officials concluded that he had violated their zero-tolerance policy on weapons, and Zachary was suspended and now faces 45 days in the district’s reform school.

[…]

“Something has to change,” said Dodi Herbert, whose 13-year old son, Kyle, was suspended in May and ordered to attend the Christina district’s reform school for 45 days after another student dropped a pocket knife in his lap.

[…]

The Christina school district attracted similar controversy in 2007 when it expelled a seventh-grade girl who had used a utility knife to cut windows out of a paper house for a class project.

The problem, of course, is that the global rule trumps any situational common sense, any discretion. But granting discretion requires that those in overall charge trust the people below them, who have more detailed situational knowledge. It’s CYA security—the same thing you see at airports. Those involved in the situation can’t be blamed for making a bad decision as long as they follow the rules, no matter how stupid the rules are and how little they apply to the situation.

Posted on October 15, 2009 at 7:34 AM | 88 Comments

The Doghouse: Privacy Inside

I’m just going to quote without comment:

About the file:
the text message file encrypted with a symmetric key combine 3 modes

1st changing the original text with random (white noise) and PHR (Pure Human Randomness) shuffle command , move and replace instruction combine with the key from mode 1 (white noise) and 2 (PHR)

2nd mode ­ xor PHR – Pure Human random ( or ROEE Random Oriented Enhanced Encryption) with a TIME set of instruction , and a computational temporary set of instructions to produce a real one time PAD when every time ,

Text will transform to a cipher the last will be different

3rd mode ­ xor WNS – White Noise Signal with a TIME set of instruction , and a computational temporary set of instructions to produce a real one time PAD when every time ,

Text will transform to a cipher the last will be different

4th Reconstructs file, levels and dimensions to a
this is a none mathematical with zero use of calculation algorithm – so no brute force , Rainbow Crack , or gpu cuda nvidia brute force crack can be applied on this technology . Sorry you have to find a new way to crack chaos theory for that.

We use 0% of any mathematical calculation algorithm ­ so we can perform any ware with unparalleled strength

Key Strength – 1million bit or more
Speed performance 400% faster Compeer to AES
MPU use – Mathematical Process Unit in CPU use 3% – 7% only
Overhead of the file from original 5% +/- (original+5%) +/-
A combination of mode 1 and 2 applied with a new variation of XOR – to perform the encrypted message

Anyone have any ideas?

Posted on October 13, 2009 at 2:55 PM | 106 Comments

David Dittrich on Criminal Malware

Good essay: “Malware to crimeware: How far have they gone, and how do we catch up?” in ;login:, August 2009:

I have surveyed over a decade of advances in delivery of malware. Over this period, attackers have shifted to using complex, multi-phase attacks based on subtle social engineering tactics, advanced cryptographic techniques to defeat takeover and analysis, and highly targeted attacks that are intended to fly below the radar of current technical defenses. I will show how malicious technology combined with social manipulation is used against us and conclude that this understanding might even help us design our own combination of technical and social mechanisms to better protect us.

Posted on October 13, 2009 at 7:15 AM | 20 Comments

The Futility of Defending the Targets

This is just silly:

Beaver Stadium is a terrorist target. It is most likely the No. 1 target in the region. As such, it deserves security measures commensurate with such a designation, but is the stadium getting such security?

[…]

When the stadium is not in use it does not mean it is not a target. It must be watched constantly. An easy solution is to assign police officers there 24 hours a day, seven days a week. This is how a plot to destroy the Brooklyn Bridge was thwarted—police presence. Although there are significant costs to this, the costs pale in comparison if the stadium is destroyed or damaged.

The idea is to create omnipresence, which is a belief in everyone’s minds (terrorists and pranksters included) that the stadium is constantly being watched so that any attempt would be futile.

Actually, the Brooklyn Bridge plot failed because the plotters were idiots and the plot—cutting through cables with blowtorches—was dumb. That, and the all-too-common police informant who egged the plotters on.

But never mind that. Beaver Stadium is Pennsylvania State University’s football stadium, and this article argues that it’s a potential terrorist target that needs 24/7 police protection.

The problem with that kind of reasoning is that it makes no sense. As I said in an article that will appear in New Internationalist:

To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it’s impossible to defend every place against everything, and it’s impossible to predict which tactic and target terrorists will try next.

Defending individual targets only makes sense if the number of potential targets is few. If there are seven terrorist targets and you defend five of them, you seriously reduce the terrorists’ ability to do damage. But if there are a million terrorist targets and you defend five of them, the terrorists won’t even notice. I tend to dislike security measures that merely cause the bad guys to make a minor change in their plans.

And the expense would be enormous. Add up these secondary terrorist targets—stadiums, theaters, churches, schools, malls, office buildings, anyplace where a lot of people are packed together—and the number is probably around 200,000, including Beaver Stadium. Full-time police protection requires people, so that’s 1,000,000 policemen. At an encumbered cost of $100,000 per policeman per year, probably a low estimate, that’s a total annual cost of $100B. (That’s about what we’re spending each year in Iraq.) On the other hand, hiring one out of every 300 Americans to guard our nation’s infrastructure would solve our unemployment problem. And since policemen get health care, our health care problem as well. Just make sure you don’t accidentally hire a terrorist to guard against terrorists—that would be embarrassing.
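
Spelled out, assuming the implied five full-time officers per post that round-the-clock coverage requires (168 hours a week divided into 40-hour shifts, plus relief):

    \[
    200{,}000 \text{ targets} \times 5 \text{ officers/post} = 1{,}000{,}000 \text{ officers}
    \]
    \[
    1{,}000{,}000 \times \$100{,}000/\text{year} = \$100 \text{ billion/year};\qquad
    \frac{300{,}000{,}000 \text{ Americans}}{1{,}000{,}000 \text{ officers}} = 1 \text{ in } 300
    \]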

The whole idea is nonsense. As I’ve been saying for years, what works is investigation, intelligence, and emergency response:

We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn’t make arbitrary assumptions about the next terrorist act. We need to spend more money on intelligence and investigation: identifying the terrorists themselves, cutting off their funding, and stopping them regardless of what their plans are. We need to spend more money on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy and how it helps or hinders terrorism.

Posted on October 9, 2009 at 6:37 AM | 51 Comments

Detecting Forged Signatures Using Pen Pressure and Angle

Interesting:

Songhua Xu presented an interesting idea for measuring pen angle and pressure to produce beautiful flower-like visual versions of a handwritten signature. You could argue that signatures are already a visual form, nicely identifiable and universal. However, with the added data about pen pressure and angle, the authors were able to create visual signatures that offer potentially greater security, assuming you can learn to read them.

A better image. The paper (abstract is free; paper is behind a paywall).

Posted on October 8, 2009 at 6:43 AM | 35 Comments

Hotel Safe Scam

This is interesting:

Since then, his scams have tended to take place in luxury hotels around the world.

Typically, he would arrive at a hotel, claim to be a guest, and then tell security that he had forgotten the combination code to his safe.

When hotel staff helped him to open the safe, he would pocket the contents and make his escape.

Doesn’t the hotel staff ask for ID before doing something like that?

Posted on October 7, 2009 at 1:07 PM | 36 Comments

Detecting People Who Want to Do Harm

I’m dubious:

At a demonstration of the technology this week, project manager Robert P. Burns said the idea is to track a set of involuntary physiological reactions that might slip by a human observer. These occur when a person harbors malicious intent—but not when someone is late for a flight or annoyed by something else, he said, citing years of research into the psychology of deception.

The development team is investigating how effective its techniques are at flagging only people who intend to do harm. Even if it works, the technology raises a slew of questions—from privacy concerns to the more fundamental issue of whether machines are up to a task now entrusted to humans.

I have a lot of respect for Paul Ekman’s opinion on the matter:

“I can understand why there’s an attempt being made to find a way to replace or improve on what human observers can do: the need is vast, for a country as large and porous as we are. However, I’m by no means convinced that any technology, any hardware will come close to doing what a highly trained human observer can do,” said Ekman, who directs a company that trains government workers, including for the Transportation Security Administration, to detect suspicious behavior.

Posted on October 7, 2009 at 12:54 PM | 44 Comments

Computer-Assisted Witness Identification

Witnesses are much more accurate at identifying criminals when computers, rather than police officers, assist in the identification process.

A major cause of miscarriages of justice could be avoided if computers, rather than detectives, guided witnesses through the identification of suspects. That’s according to Brent Daugherty at the University of North Carolina in Charlotte and colleagues, who say that too often officers influence witnesses’ choices.

The problem was highlighted in 2003 when the Innocence Project in New York analysed the case histories of 130 wrongly imprisoned people later freed by DNA evidence. Mistaken eyewitness identification was a factor in 77 per cent of the cases examined.

Makes sense to me.

Posted on October 7, 2009 at 7:12 AM | 19 Comments

Don't Let Hacker Inmates Reprogram Prison Computers

You’d think this would be obvious:

Douglas Havard, 27, serving six years for stealing up to £6.5 million using forged credit cards over the internet, was approached after governors wanted to create an internal TV station but needed a special computer program written.

He was left unguarded and hacked into the system’s hard drive at Ranby Prison, near Retford, Notts. Then he set up a series of passwords so no one else could get into the system.

And you shouldn’t give a prisoner who is a lockpicking expert access to the prison’s keys, either. No, wait:

The blunder emerged a week after the Sunday Mirror revealed how an inmate at the same jail managed to get a key cut that opened every door.

Next week: inmate sharpshooters in charge of prison’s gun locker.

Posted on October 6, 2009 at 2:32 PM | 28 Comments

Malware that Forges Bank Statements

This is brilliant:

The sophisticated hack uses a Trojan horse program installed on the victim’s machine that alters html coding before it’s displayed in the user’s browser, to either erase evidence of a money transfer transaction entirely from a bank statement, or alter the amount of money transfers and balances.

Another article.

If there’s a moral here, it’s that banks can’t rely on the customer to detect fraud. But we already knew that.
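
For a sense of how little work the rewriting itself takes, here is a toy sketch that deletes the incriminating row from a made-up statement page before display. The markup and the "Wire transfer" label are my inventions; real crimeware hooks the browser’s rendering path rather than running as a standalone script. The point is only that the page a customer sees proves nothing.

    # Toy illustration: a few lines of string surgery drop a transaction row
    # from the statement HTML before the browser shows it.
    import re

    statement = """
    <table id="transactions">
      <tr><td>2009-10-05</td><td>Grocery</td><td>-45.00</td></tr>
      <tr><td>2009-10-06</td><td>Wire transfer</td><td>-9,500.00</td></tr>
      <tr><td>2009-10-07</td><td>Salary</td><td>+3,200.00</td></tr>
    </table>
    """

    # Erase any row that mentions the fraudulent transfer.
    laundered = re.sub(r"<tr>(?:(?!</tr>).)*?Wire transfer.*?</tr>\s*", "",
                       statement, flags=re.DOTALL)
    print(laundered)  # the -9,500.00 row is simply gone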

Posted on October 6, 2009 at 6:40 AM | 35 Comments

UK Defense Security Manual Leaked

Wow. It’s over 2,000 pages, so it’ll take time to make any sense of. According to Ross Anderson, who’s given it a quick look over, “it seems to be the bureaucratic equivalent of spaghetti code: a hodgepodge of things written by people from different backgrounds, and with different degrees of clue, in different decades.”

The computer security stuff starts at page 1,531.

EDITED TO ADD (10/6): An article.

Posted on October 5, 2009 at 3:10 PM | 27 Comments

Moving Hippos in the Post-9/11 World

It’s a security risk:

The crate was hoisted onto the flatbed with a 120-ton construction crane. For security reasons, there were no signs on the truck indicating that the cargo was a hippopotamus, the zoo said.

The last thing you need is a hijacked hippo.

Does this make any sense? Has there ever been a zoo animal hijacking anywhere?

EDITED TO ADD (10/13): Kidnapped zoo animals.

Posted on October 5, 2009 at 1:29 PM | 65 Comments

"Security Theater in New York City"

For the U.N. General Assembly:

For those entranced by security theater, New York City is a sight to behold this week. Visit one of the two centers of the action—the Waldorf Astoria, where the presidents of China and Russia, the Prime Ministers of Israel and the Palestinian Authority, and the President of the United States are all staying. (Who gets the presidential suite? Our POTUS.) Getting to the Waldorf is a little intimidating, which is the point. Wade through the concrete barriers, the double-parked police cars, the NYPD mobile command post, a signals post, acreages of metal fencing, snipers, counter-surveillance teams, FBI surveillance teams in street clothes, dodge traffic and a dignitary motorcade or two, and you’re right at the front door of the hotel. A Secret Service agent from the Midwest gestured dismissively when a reporter showed him a press credential. “You don’t need it. Just go in that door over there.”

At the door over there, another agent sent the reporter back to the first agent. The two agents—each from different field offices, no doubt—argued a bit over which of the Waldorf front doors they were going to let the general public in. Maybe the agents had just been “pushed”—or there was a shift change. In any event, the agents didn’t seem to mind when the reporter walked right past them. A standard magnetometer and x-ray screening later, and I was in the packed front lobby. African heads of state were just about to have a group lunch, and about three dozen members of the continental press corps awaited some arrivals. Some of the heads of state walked in through the front, tailed by a few of their own bodyguards and tired looking USSS agents.

Posted on October 2, 2009 at 12:23 PM | 22 Comments

Proving a Computer Program's Correctness

This is interesting:

Professor Gernot Heiser, the John Lions Chair in Computer Science in the School of Computer Science and Engineering and a senior principal researcher with NICTA, said for the first time a team had been able to prove with mathematical rigour that an operating-system kernel—the code at the heart of any computer or microprocessor—was 100 per cent bug-free and therefore immune to crashes and failures.

Don’t expect this to be practical any time soon:

Verifying the kernel—known as the seL4 microkernel—involved mathematically proving the correctness of about 7,500 lines of computer code in a project taking an average of six people more than five years.

That’s 250 lines of code verified per man-year. Both Linux and Windows have something like 50 million lines of code; verifying that would take 200,000 man-years, assuming no increase in difficulty resulting from the vastly larger and more complex codebase. Clearly some efficiency improvements are required.
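
Spelled out:

    \[
    \frac{7{,}500 \text{ lines}}{6 \text{ people} \times 5 \text{ years}} = 250 \text{ lines/person-year};\qquad
    \frac{50{,}000{,}000 \text{ lines}}{250 \text{ lines/person-year}} = 200{,}000 \text{ person-years}
    \]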

Posted on October 2, 2009 at 7:01 AM | 91 Comments

Reproducing Keys from Photographs

Reproducing keys from distant and angled photographs:

Abstract:
The access control provided by a physical lock is based on the assumption that the information content of the corresponding key is private—that duplication should require either possession of the key or a priori knowledge of how it was cut. However, the ever-increasing capabilities and prevalence of digital imaging technologies present a fundamental challenge to this privacy assumption. Using modest imaging equipment and standard computer vision algorithms, we demonstrate the effectiveness of physical key teleduplication—extracting a key’s complete and precise bitting code at a distance via optical decoding and then cutting precise duplicates. We describe our prototype system, Sneakey, and evaluate its effectiveness, in both laboratory and real-world settings, using the most popular residential key types in the U.S.
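
The heart of the decoding step is simple quantization: measure each cut’s depth from the rectified image, then snap it to the nearest depth in the keyway’s published spec. Here is a toy sketch of that step; the eight evenly spaced depth levels and the sample measurements are illustrative assumptions, not the parameters of any real manufacturer or of the Sneakey system itself.

    # Toy sketch of the quantization at the end of optical decoding: map each
    # measured cut depth (normalized 0.0 = shallowest, 1.0 = deepest) to the
    # nearest discrete depth code in a hypothetical 8-level keyway spec.
    NUM_DEPTHS = 8  # hypothetical spec: depth codes 0..7

    def to_bitting(normalized_depths):
        """Snap per-position cut measurements to the nearest legal depth code."""
        step = 1.0 / (NUM_DEPTHS - 1)
        return [min(range(NUM_DEPTHS), key=lambda d: abs(d * step - m))
                for m in normalized_depths]

    # Five cuts measured (with noise) from an angled photograph:
    measurements = [0.13, 0.70, 0.31, 0.99, 0.42]
    print(to_bitting(measurements))  # -> [1, 5, 2, 7, 3]

Because legal depths are discrete, the measurement only has to be good to within half a depth increment; that tolerance is what makes decoding from distant, angled photographs feasible.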

Those of you who carry your keys on a ring dangling from a belt loop, take note.

Posted on October 1, 2009 at 2:09 PM | 24 Comments

Nice Use of Diversion During a Robbery

During a daring bank robbery in Sweden that involved a helicopter, the criminals disabled a police helicopter by placing a package with the word “bomb” near the helicopter hangar, thus engaging the full caution/evacuation procedure while they escaped.

I wrote about this exact sort of thing in Beyond Fear.

EDITED TO ADD (10/13): The attack was successfully carried off even though the Swedish police had been warned.

Posted on October 1, 2009 at 7:01 AM | 35 Comments
