Blog: March 2006 Archives

Friday Squid Blogging: Ben Deacon, Squid Researcher

Third item on the page:

According to juicy folklore and loose legend, for centuries, the inky waters of our deepest oceans have been home to that most mysterious of marine creatures—the giant squid. Well, as we speak, visitors to Melbourne’s aquarium can take a gander at the real thing, a 7m-long squid, caught in New Zealand and frozen in a block of ice.

For 30 years, almost obsessively, one real scientific character from across the Tasman has been chasing these elusive creatures and Ben Deacon caught up with him, hard at what’s clearly become his life’s work.

Posted on March 31, 2006 at 3:05 PM • 7 Comments

iJacking

The San Francisco Bay Guardian is reporting on a new crime: people who grab laptops out of their owners’ hands and then run away. It’s called “iJacking,” and there seems to be a wave of this type of crime at Internet cafes in San Francisco:

In 2004 the SFPD Robbery Division recorded 17 strong-arm laptop robberies citywide. This increased to 30 cases in 2005, a total that doesn’t even include thefts that fall under the category of “burglary,” when a victim isn’t present. (SFPD could not provide statistics on the number of laptop burglaries.)

In the past three months alone, Park Station, the police precinct that includes the Western Addition, has reported 11 strong-arm laptop robberies, a statistic that suggests this one district may exceed last year’s citywide total by the end of 2006.

Some stories:

Maloney was absorbed in his work when suddenly a hooded person yanked the laptop from Maloney’s hands and ran out the door. Maloney tried to grab his computer, but he stumbled across a few chairs and landed on the floor as the perpetrator dashed to a vehicle waiting a quarter block away.

[…]

Two weeks before Maloney’s robbery, on a Sunday afternoon, a man had been followed out of the Starbucks on the corner of Fulton Street and Masonic Avenue and was assaulted by two suspects in broad daylight. According to the police report, the suspects dragged the victim 15 feet along the pavement, kicking him in the face before stealing his computer.

In early February a woman had her laptop snatched while sitting in Ali’s Café. She pursued the perpetrator out the door, only to be blindsided by a second accomplice. Ali described the assault as “a football tackle” so severe it left the victim’s eyeglasses in the branches of a nearby tree. In the most recent laptop robbery, on March 16 in a café on the 900 block of Valencia Street, police say the victim was actually stabbed.

It’s obvious why these thefts are occurring. Laptops are valuable, easy to steal, and easy to fence. If we want to “solve” this problem, we need to modify at least one of those characteristics. Some Internet cafes are providing locking cables for their patrons, in an attempt to make them harder to steal. But that will only mean that the muggers will follow their victims out of the cafes. Laptops will become less valuable over time, but that really isn’t a good solution. The only thing left is to make them harder to fence.

This isn’t an easy problem. There are a bunch of companies that make solutions that help people recover stolen laptops. There are programs that “phone home” if a laptop is stolen. There are programs that hide a serial number on the hard drive somewhere. There are non-removable tags users can affix to their computers with ID information. But until this kind of thing becomes common, the crimes will continue.

Reminds me of the problem of bicycle thefts.

Posted on March 31, 2006 at 1:06 PM • 67 Comments

Cubicle Farms are a Terrorism Risk

The British security service MI5 is warning business leaders that their offices are probably badly designed against terrorist bombs. The common modern office consists of large rooms without internal walls, which puts employees at greater risk in the event of terrorist bombs.

From The Scotsman:

The trend towards open-plan offices without internal walls could put employees at increased risk in the event of a terrorist bomb, MI5 has warned business leaders. The advice comes as the Security Service steps up its advice to companies on how to prepare for an attack. MI5 has produced a 40-page leaflet, “Protecting Against Terrorism”, which will be distributed to large businesses and public-sector bodies across Britain. Among the guidance in the pamphlet is that bosses should consider the security implications of getting rid of internal walls.

Open-plan offices are increasingly popular as businesses seek to improve communication and cooperation between employees. But MI5 points out that there are potential risks, too. “If you are converting your building to open-plan accommodation, remember that the removal of internal walls reduces protection against blast and fragments,” the leaflet says.

All businesses should make contingency plans for keeping staff safe in the event of a bomb attack, the Security Service advises. Instead of automatically evacuating staff, companies are recommended to gather workers in a designated “protected space” until the location of the bomb can be confirmed. “Since glass and other fragments may kill or maim at a considerable distance from the centre of a large explosion, moving staff into protected spaces is often safer than evacuating them on to the streets,” the leaflet cautions.

Interior rooms with reinforced concrete or masonry walls often make suitable protected spaces, as they tend to remain intact in the event of an explosion outside the building, employers are told. But open-plan offices often lack such places, and can have other effects on emergency planning: “If corridors no longer exist then you may also lose your evacuation routes, assembly or protected spaces, while the new layout will probably affect your bomb threat contingency procedures.”

Companies converting to open-plan are told to ensure that there is no significant reduction in staff protection, “for instance by improving glazing protection.”

Posted on March 31, 2006 at 5:14 AM • 30 Comments

An Economic Analysis of Airport Security Screening

Interesting paper: “Passenger Profiling, Imperfect Screening, and Airport Security,” by Nicola Persico and Petra E. Todd. The authors use game theory to investigate the optimal screening policy in a scenario where there are different social groups (distinguished by characteristics such as criminal record, race, or religion) with different preferences for crime and/or terrorism.

Posted on March 30, 2006 at 1:59 PM • 21 Comments

Evading Copyright Through XOR

Monolith is an open-source program that can XOR two files together to create a third file, and—of course—can XOR that third file with one of the original two to create the other original file.
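
The mechanics are a few lines of code. Here’s a minimal sketch in Python of the XOR “munging” (the file handling and the cycling of a shorter Basis file are my assumptions, not necessarily Monolith’s actual behavior):

    from itertools import cycle

    def xor_files(basis_path, element_path, mono_path):
        # XOR the element file against the basis file, byte for byte.
        with open(basis_path, "rb") as f:
            basis = f.read()
        with open(element_path, "rb") as f:
            element = f.read()
        # Repeat the basis if it's shorter; output length matches the element.
        mono = bytes(e ^ b for e, b in zip(element, cycle(basis)))
        with open(mono_path, "wb") as f:
            f.write(mono)

    # Because XOR is its own inverse, (e ^ b) ^ b == e:
    # xor_files("basis.bin", "element.bin", "mono.bin")   # "munge"
    # xor_files("basis.bin", "mono.bin", "element2.bin")  # recover the element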

The website wonders about the copyright implications of all of this:

Things get interesting when you apply Monolith to copyrighted files. For example, munging two copyrighted files will produce a completely new file that, in most cases, contains no information from either file. In other words, the resulting Mono file is not “owned” by the original copyright holders (if owned at all, it would be owned by the person who did the munging). Given that the Mono file can be combined with either of the original, copyrighted files to reconstruct the other copyrighted file, this lack of Mono ownership may seem hard to believe.

The website then postulates this as a mechanism to get around copyright law:

What does this mean? This means that Mono files can be freely distributed.

So what? Mono files are useless without their corresponding Basis files, right? And the Basis files are copyrighted too, so they cannot be freely distributed, right? There is one more twist to this idea. What happens when we use Basis files that are freely distributable? For example, we could use a Basis file that is in the public domain or one that is licensed for free distribution. Now we are getting somewhere.

None of the aforementioned properties of Mono files change when we use freely distributable Basis files, since the same arguments hold. Mono files are still not copyrighted by the people who hold the copyrights over the corresponding Element files. Now we can freely distribute Mono files and Basis files.

Interesting? Not really. But what you can do with these files, in the privacy of your own home, might be interesting, depending on your proclivities. For example, you can use the Mono files and the Basis files to reconstruct the Element files.

Clever, but it won’t hold up in court. In general, technical hair splitting is not an effective way to get around the law. My guess is that anyone who distributes that third file—they call it a “Mono” file—along with instructions on how to recover the copyrighted file is going to be found guilty of copyright violation.

The correct way to solve this problem is through law, not technology.

Posted on March 30, 2006 at 8:07 AM • 79 Comments

80 Cameras for 2,400 People

This story is about the remote town of Dillingham, Alaska, which is probably the most watched town in the country. There are 80 surveillance cameras for the 2,400 people, which translates to one camera for every 30 people.

The cameras were bought, I assume, because the town couldn’t think of anything else to do with the $202,000 Homeland Security grant it received. (That’s one of the problems of handing this money out based on political agendas rather than on where the actual threats are.)

But they got the money, and they spent it. And now they have to justify the expense. Here’s the movie-plot threat the Dillingham Police Chief uses to explain why the expense was worthwhile:

“Russia is about 800 miles that way,” he says, arm extending right.

“Seattle is about 1,200 miles back that way.” He points behind him.

“So if I have the math right, we’re closer to Russia than we are to Seattle.”

Now imagine, he says: What if the bad guys, whoever they are, manage to obtain a nuclear device in Russia, where some weapons are believed to be poorly guarded. They put the device in a container and then hire organized criminals, “maybe Mafiosi,” to arrange a tramp steamer to pick it up. The steamer drops off the container at the Dillingham harbor, complete with forged paperwork to ship it to Seattle. The container is picked up by a barge.

“Ten days later,” the chief says, “the barge pulls into the Port of Seattle.”

Thompson pauses for effect.

“Phoooom,” he says, his hands blooming like a flower.

The first problem with the movie plot is that it’s just plain silly. But the second problem, which you might have to look back to notice, is that those 80 cameras will do nothing to stop his imagined attack.

We are all security consumers. We spend money, and we expect security in return. This expenditure was a waste of money, and as a U.S. taxpayer, I am pissed that I’m getting such a lousy deal.

Posted on March 29, 2006 at 1:13 PM • 48 Comments

Chameleon Weapons

You can’t detect them, because they look normal:

One type is the exact size and shape of a credit card, except that two of the edges are lethally sharp. It’s made of G10 laminate, an ultra-hard material normally employed for circuit boards. You need a diamond file to get an edge on it.

[…]

Another configuration is a stabbing weapon which is indistinguishable from a pen. This one is made from melamine fiber, and can sit snugly inside a Bic casing. You would only find out it was not the real thing if you tried to write with it. It’s sharpened with a blade edge at the tip which Defense Review describes as “scary sharp.”

Also:

The FBI’s extensive Guide to Concealable Weapons has 89 pages of weapons intended to get through security. These are generally variations of a knife blade concealed in a pen, a comb, or a cross—and most of them are pretty obvious on X-ray.

Posted on March 29, 2006 at 6:58 AM • 51 Comments

Al Qaeda Hacker Captured

Irhabi 007 has been captured.

For almost two years, intelligence services around the world tried to uncover the identity of an Internet hacker who had become a key conduit for al-Qaeda. The savvy, English-speaking, presumably young webmaster taunted his pursuers, calling himself Irhabi—Terrorist—007. He hacked into American university computers, propagandized for the Iraq insurgents led by Abu Musab al-Zarqawi and taught other online jihadists how to wield their computers for the cause.

Assuming the British authorities are to be believed, he definitely was a terrorist:

Suddenly last fall, Irhabi 007 disappeared from the message boards. The postings ended after Scotland Yard arrested a 22-year-old West Londoner, Younis Tsouli, suspected of participating in an alleged bomb plot. In November, British authorities brought a range of charges against him related to that plot. Only later, according to our sources familiar with the British probe, was Tsouli’s other suspected identity revealed. British investigators eventually confirmed to us that they believe he is Irhabi 007.

[…]

Tsouli has been charged with eight offences including conspiracy to murder, conspiracy to cause an explosion, conspiracy to cause a public nuisance, conspiracy to obtain money by deception and offences relating to the possession of articles for terrorist purposes and fundraising.

Okay. So he was a terrorist. And he used the Internet, both as a communication tool and to break into networks. But this does not make him a cyberterrorist.

Interesting article, though.

Here’s the Slashdot thread on the topic.

Posted on March 28, 2006 at 7:27 AM • 18 Comments

Quasar Encryption

Does anyone have the faintest clue what they’re talking about here? If I had to guess, it’s just another random-number generator. It definitely doesn’t sound like two telescopes pointing at the same piece of sky can construct the same key—now that would be cool.

The National Institute of Information and Communications Technology is trying to patent a system of encryption using electromagnetic waves from Quasars.

According to The Nihon Keizai Shimbun, this technology takes cosmic radio waves received through a radio telescope, encrypts them, and then retransmits them. Because cosmic waves are irregular, it is virtually impossible for others to decipher them. A spokesman is quoted as saying that the system could be used for the transmission of state secrets and other sensitive information.

The radio telescope can decipher the information by observing the cosmic wave patterns emitted by a particular quasar selected in advance. Even if the encrypted data is stolen, it is impossible to read it without the appropriate quasar’s radio signals.

The only way to really break the code is to know which radio telescope the coder is using and what Quasar it is pointing at. Only then do you have a slim chance of decoding it.
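
If the scheme is what it sounds like, it’s essentially a one-time pad whose keystream comes from the sky. A minimal sketch of that idea, assuming (and this is the big assumption the article never addresses) that both ends can digitize identical bits from the same quasar; sample_quasar() below is entirely hypothetical and is faked with a local random source:

    import os

    def sample_quasar(n_bytes):
        # Hypothetical stand-in for "point a radio telescope at an agreed
        # quasar at an agreed time and digitize the noise." Faked locally.
        return os.urandom(n_bytes)

    def xor_stream(data, keystream):
        # One-time-pad style: XOR is its own inverse.
        return bytes(d ^ k for d, k in zip(data, keystream))

    plaintext = b"state secret"
    pad = sample_quasar(len(plaintext))   # both ends need identical bits
    ciphertext = xor_stream(plaintext, pad)
    assert xor_stream(ciphertext, pad) == plaintext

Without that shared-observation property, all you’ve got is a hardware random-number generator with unusually good marketing.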

I can see the story on the home page of Nikkei.net Interactive, but can’t get at the story without a login.

Posted on March 27, 2006 at 1:21 PM • 58 Comments

Firefox Bug Causes Relationship to Break Up

A couple—living together, I assume—and engaged to be married, shared a computer. He used Firefox to visit a bunch of dating sites, being smart enough not to have the browser save his password. But Firefox did save the names of the sites it was told never to save the password for. She happened to stumble on this list. The details are left to the imagination, but they broke up.

Most bug reports aren’t this colorful.

Posted on March 27, 2006 at 7:53 AM • 58 Comments

"Terrorist with Nuke" Movie Plot

Since when did The New Scientist hire novelists to write science stories?

A truck pulls up in front of New York City’s Grand Central Station, one of the most densely crowded spots in the world. It is a typical weekday afternoon, with over half a million people in the immediate area, working, shopping or just passing through. A few moments later the driver makes his delivery: a 10-kiloton atomic explosion.

Almost instantly, an electromagnetic pulse knocks out all electronics within a radius of 4 kilometres. The shock wave levels every building within a half-kilometre, killing everyone inside, and severely damages virtually all buildings for a kilometre in every direction. Detonation temperatures of millions of degrees ignite a firestorm that rapidly engulfs the area, generating winds of 600 kilometres an hour.

Within seconds, the blast, heat and direct exposure to radiation have killed several hundred thousand people. Perhaps they are the lucky ones. What follows is, if anything, even worse.

The explosion scoops …

EDITED TO ADD (3/24): Here’s the full article.

Posted on March 24, 2006 at 11:51 AM • 41 Comments

Security Overreaction

Who needs terrorists? We can cause terror all by ourselves:

A worker at a Downtown building who was using a pellet gun with a scope to scare pigeons prompted a massive police response that led to the shutdown of several blocks this afternoon.

[…]

Dozens of motorcycle and special response officers responded to the area.

The Fort Pitt Tunnels inbound were shut down temporarily.

The Port Authority was forced to reroute buses around the area.

People in some buildings were told to stay inside while those in others were evacuated.

Students who attend Pittsburgh High School for the Creative & Performing Arts (CAPA High) remained in their Fort Duquesne Boulevard school this afternoon until the situation was resolved.

The All-City Senior Orchestra rehearsal scheduled for 4 p.m. at CAPA High has been canceled.

Students who attend all other Pittsburgh Public Schools have been dismissed since Port Authority buses and school buses that normally travel through Downtown were being re-routed.

Community College of Allegheny County canceled evening classes at its Downtown center tonight on Stanwix Street.

Before the all-clear was given and roads were reopened, police searched buildings floor-by-floor looking for the gunman and stationed snipers in surrounding buildings.

Posted on March 24, 2006 at 7:59 AM • 49 Comments

London Rejects Subway Scanners

Rare outbreak of security common sense in London:

London Underground is likely to reject the use of passenger scanners designed to detect weapons or explosives as they are “not practical”, a security chief for the capital’s transport authority said on 14 March 2006.

[…]

“Basically, what we know is that it’s not practical,” he told Government Computing News. “People use the tube for speed and are concerned with journey time. It would just be too time consuming. Secondly, there’s just not enough space to put this kind of equipment in.”

“Finally there’s also the risk that you actually create another target with people queuing up and congregating at the screening points.”

Posted on March 23, 2006 at 1:39 PM • 19 Comments

Airport Passenger Screening

It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns and 60 percent of (fake) bombs. And recently (see also this), testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. It makes you wonder why we’re all putting our laptops in a separate bin and taking off our shoes. (Although we should all be glad that Richard Reid wasn’t the “underwear bomber.”)

The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it’s going to slip past the screeners pretty easily. The explosive material won’t show up on the metal detector, and the associated electronics can look benign when disassembled. This isn’t even a new problem. It’s widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces.

But guns and knives? That surprises most people.

Airport screeners have a difficult job, primarily because the human brain isn’t naturally adapted to the task. We’re wired for visual pattern matching, and are great at picking out something we know to look for—for example, a lion in a sea of tall grass.

But we’re much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there’s no point in paying attention. By the time the exception comes around, the brain simply doesn’t notice it. This psychological phenomenon isn’t just a problem in airport screening: It’s been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.

To make matters worse, the smuggler can try to exploit the system. He can position the weapons in his baggage just so. He can try to disguise them by adding other metal items to distract the screeners. He can disassemble bomb parts so they look nothing like bombs. Against a bored screener, he has the upper hand.

And, as has been pointed out again and again in essays on the ludicrousness of post-9/11 airport security, improvised weapons are a huge problem. A rock, a battery for a laptop, a belt, the extension handle off a wheeled suitcase, fishing line, the bare hands of someone who knows karate … the list goes on and on.

Technology can help. X-ray machines already randomly insert “test” bags into the stream—keeping screeners more alert. Computer-enhanced displays are making it easier for screeners to find contraband items in luggage, and eventually the computers will be able to do most of the work. It makes sense: Computers excel at boring repetitive tasks. They should do the quick sort, and let the screeners deal with the exceptions.
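
That division of labor is easy to picture. A toy sketch of the pipeline (scores and thresholds invented for illustration): the machine does the tireless first pass, and only the exceptions reach a human.

    def machine_score(bag):
        # Stand-in for a computer-vision contraband score in [0, 1].
        return bag["score"]

    def quick_sort(bags, threshold=0.2):
        # Err toward false positives: anything suspicious goes to a human.
        return [bag for bag in bags if machine_score(bag) >= threshold]

    bags = [{"id": i, "score": s}
            for i, s in enumerate([0.01, 0.95, 0.05, 0.30])]
    for bag in quick_sort(bags):
        print("human review:", bag["id"])   # only bags 1 and 3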

Sure, there’ll be a lot of false alarms, and some bad things will still get through. But it’s better than the alternative.

And it’s likely good enough. Remember the point of passenger screening. We’re not trying to catch the clever, organized, well-funded terrorists. We’re trying to catch the amateurs and the incompetent. We’re trying to catch the unstable. We’re trying to catch the copycats. These are all legitimate threats, and we’re smart to defend against them. Against the professionals, we’re just trying to add enough uncertainty into the system that they’ll choose other targets instead.

The terrorists’ goals have nothing to do with airplanes; their goals are to cause terror. Blowing up an airplane is just a particular attack designed to achieve that goal. Airplanes deserve some additional security because they have catastrophic failure properties: If there’s even a small explosion, everyone on the plane dies. But there’s a diminishing return on investments in airplane security. If the terrorists switch targets from airplanes to shopping malls, we haven’t really solved the problem.

What that means is that a basic cursory screening is good enough. If I were investing in security, I would fund significant research into computer-assisted screening equipment for both checked and carry-on bags, but wouldn’t spend a lot of money on invasive screening procedures and secondary screening. I would much rather have well-trained security personnel wandering around the airport, both in and out of uniform, looking for suspicious actions.

When I travel in Europe, I never have to take my laptop out of its case or my shoes off my feet. Those governments have had far more experience with terrorism than the U.S. government, and they know when passenger screening has reached the point of diminishing returns. (They also implemented checked-baggage security measures decades before the United States did—again recognizing the real threat.)

And if I were investing in security, I would invest in intelligence and investigation. The best time to combat terrorism is before the terrorist tries to get on an airplane. The best countermeasures have value regardless of the nature of the terrorist plot or the particular terrorist target.

In some ways, if we’re relying on airport screeners to prevent terrorism, it’s already too late. After all, we can’t keep weapons out of prisons. How can we ever hope to keep them out of airports?

A version of this essay originally appeared on Wired.com.

Posted on March 23, 2006 at 7:03 AM • 95 Comments

New Kind of Door Lock

There’s a new kind of door lock from the Israeli company E-Lock. It responds to sound. Instead of carrying a key, you carry a small device that makes a series of quick knocking sounds. Just touching it to the door causes the door to open; there’s no keyhole. The device, called a “KnocKey,” has a keypad and can be programmed to require a PIN before operation—for even greater security.

Clever idea, but there’s the usual security hyperbole:

Since there is no keyhole or contact point on the door, this unique mechanism offers a significantly higher level of security than existing technology.

More accurate would be to say that the security vulnerabilities are different from those of existing technology. We know a lot about the vulnerabilities of conventional locks, but we know very little about the security of this system. But don’t confuse this lack of knowledge with increased security.

Posted on March 22, 2006 at 5:15 AM • 48 Comments

DHS Privacy and Integrity Report

Last year, the Department of Homeland Security finally got around to appointing its DHS Data Privacy and Integrity Advisory Committee. It was mostly made up of industry insiders instead of anyone with any real privacy experience. (Lance Hoffman from George Washington University was the most notable exception.)

And now, we have something from that committee. On March 7th they published their “Framework for Privacy Analysis of Programs, Technologies, and Applications.”

This document sets forth a recommended framework for analyzing programs, technologies, and applications in light of their effects on privacy and related interests. It is intended as guidance for the Data Privacy and Integrity Advisory Committee (the Committee) to the U.S. Department of Homeland Security (DHS). It may also be useful to the DHS Privacy Office, other DHS components, and other governmental entities that are seeking to reconcile personal data-intensive programs and activities with important social and human values.

It’s surprisingly good.

I like that it is a series of questions a program manager has to answer: about the legal basis for the program, its efficacy against the threat, and its effects on privacy. I am particularly pleased that their questions on pages 3-4 are very similar to the “five steps” I wrote about in Beyond Fear. I am thrilled that the document takes a “trade-off” approach; the last question asks: “Should the program proceed? Do the benefits of the program…justify the costs to privacy interests….?”

I think this is a good starting place for any technology or program with respect to security and privacy. And I hope the DHS actually follows the recommendations in this report.

Posted on March 21, 2006 at 3:07 PM • 13 Comments

No Funding for Homeland Security

Really interesting article by Robert X. Cringely on the lack of federal funding for security technologies.

After the 9-11 terrorist attacks, the United States threw its considerable fortune into the War on Terror, of which a large component was Homeland Security. We conducted a couple wars abroad, both of which still seem to be going on, and took a vast domestic security bureaucracy and turned it into a different and even more vast domestic security bureaucracy. We could argue all day about whether or not America is more secure as a result of these changes, but we’d all agree that a lot of money has been spent. In fact, from a pragmatic point of view, ALL the money has been spent, and that’s the point of this particular column. For a variety of reasons, there is no money left to spend on homeland security: none, nada, zilch. We’re busted.

I think his assessment is spot on.

Posted on March 21, 2006 at 12:39 PM • 15 Comments

Fake 300, 600, and 1,000 Euro Notes Passed as Real

They’re deliberately fake, made in Germany for a promotion. But they’re being passed as real:

Cologne newsagent Bernd Friedhelm, 33, accepted one of the fake 600 euro notes from an unknown customer who bought two cartons of cigarettes and walked off with 534 euros in change.

Friedhelm said: “He told me it was a new type of note and I just figured I hadn’t seen one before.”

This is why security is so hard: people.

Posted on March 21, 2006 at 6:47 AM • 59 Comments

Security Through Begging

From TechDirt:

Last summer, the surprising news came out that Japanese nuclear secrets leaked out, after a contractor was allowed to connect his personal virus-infested computer to the network at a nuclear power plant. The contractor had a file sharing app on his laptop as well, and suddenly nuclear secrets were available to plenty of kids just trying to download the latest hit single. It’s only taken about nine months for the government to come up with its suggestion on how to prevent future leaks of this nature: begging all Japanese citizens not to use file sharing systems—so that the next time this happens, there won’t be anyone on the network to download such documents.

Even if their begging works, it solves the wrong problem. Sad.

EDITED TO ADD (3/22): Another article.

Posted on March 20, 2006 at 2:01 PM • 15 Comments

Writing about IEDs

Really good article by a reporter who has been covering improvised explosive devices in Iraq:

Last summer, a U.S. Colonel in Baghdad told me that I was America’s enemy, or very close to it. For months, I had been covering the U.S. military’s efforts to deal with the threat of IEDs, improvised explosive devices. And my writing, he told me, was going too far—especially this January 2005 Wired News story, in which I described some of the Pentagon’s more exotic attempts to counter these bombs.

None of the material in the story—the stuff about microwave blasters or radio frequency jammers—was classified, he admitted. Most of it had been taken from open source materials. And many of the systems were years and years from being fielded. But by bundling it all together, I was doing a “world class job of doing the enemy’s research for him, for free.” So watch your step, he said, as I went back to my ride-alongs with the Baghdad Bomb Squad—the American soldiers defusing IEDs in the area.

Today, I hear that the President and the Pentagon’s higher-ups are trotting out the same argument. “News coverage of this topic has provided a rich source of information for the enemy, and we inadvertently contribute to our enemies’ collection efforts through our responses to media interest,” states a draft Defense Department memo, obtained by Inside Defense. “Individual pieces of information, though possibly insignificant taken alone, when aggregated provide robust information about our capabilities and weaknesses.”

In other words, Al Qaeda hasn’t discovered how to Google, yet. Don’t help ’em out.

Posted on March 20, 2006 at 11:53 AM • 41 Comments

Friday Squid Blogging: Squid Poaching in Argentina

Squid fishing turns into an international incident back in February 2005:

A Taiwanese-flagged jigger allegedly poaching in the South Atlantic was arrested by the Argentine Coast Guard after warning shots were fired. This is the second incident in a week.

According to Argentine sources, the jigger, with a crew of 35, was detected operating in the Isla Rasa area, 199 miles off Comodoro Rivadavia, and refused to stop engines when approached by a Coast Guard vessel.

Primary reports indicate that “Chich Man 1” was transporting 3,700 boxes of frozen squid at 12.5 kilos each, plus another 68 boxes of fresh squid stored on deck.

When the jigger tried to flee instead of obeying orders, the Argentine Coast Guard vessel fired warning shots. She was then boarded by a party of Argentine sailors and is currently being escorted to Comodoro Rivadavia, where the captain will face charges of illegal fishing.

Less than a week ago another Taiwanese jigger, “Hsien Hua 6,” was caught red-handed poaching in the same area by ARA Guerrico and escorted to Puerto Deseado.

Posted on March 17, 2006 at 3:32 PM • 8 Comments

Power Analysis of RFID Tags

This is great work by Yossi Oren and Adi Shamir:

Abstract (Summary)

We show the first power analysis attack on passive RFID tags. Compared to standard power analysis attacks, this attack is unique in that it requires no physical contact with the device under attack. While the specific attack described here requires the attacker to actually transmit data to the tag under attack, the power analysis part itself requires only a receive antenna. This means that a variant of this attack can be devised such that the attacker is completely passive while it is acquiring the data, making the attack very hard to detect. As a proof of concept, we describe a password extraction attack on Class 1 Generation 1 EPC tags operating in the UHF frequency range. The attack presented below lets an adversary discover the kill password of such a tag and, then, disable it. The attack can be readily adapted to finding the access and kill passwords of Gen 2 tags. The main significance of our attack is in its implications: any cryptographic functionality built into tags needs to be designed to be resistant to power analysis, and achieving this resistance is an undertaking which has an effect both on the price and on the read range of tags.

My guess of the industry’s response: downplay the results and pretend it’s not a problem.

Posted on March 17, 2006 at 12:22 PM • 8 Comments

RFID Chips and Viruses

Of course RFID chips can carry viruses. They’re just little computers.

More info here. The coverage is more than a tad sensationalist, though.

EDITED TO ADD (3/16): I thought the attack vector was interesting: a Trojan RFID attacks the central database, rather than attacking other RFID chips directly. Metaphorically, it’s a lot closer to biological viruses, because it actually requires the more powerful host being subverted, and there’s no way an infected tag could propagate directly to another tag.
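
In miniature, that vector is ordinary injection: the tag’s contents are supposed to be data, but middleware that splices them into a backend query turns the tag into an attack on the database. An illustrative sketch (not the researchers’ actual code), using SQLite:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE tags (id TEXT, seen INTEGER)")

    # Hostile contents read off a tag, crafted to escape the quoting below.
    tag_payload = "x', 0); DROP TABLE tags; --"

    # Vulnerable middleware: splices tag contents straight into SQL.
    db.executescript(f"INSERT INTO tags VALUES ('{tag_payload}', 1)")

    try:
        db.execute("SELECT * FROM tags")
    except sqlite3.OperationalError as err:
        print("backend table destroyed:", err)   # no such table: tags

    # The fix is the usual one: parameterized queries keep tag data inert.
    db.execute("CREATE TABLE tags (id TEXT, seen INTEGER)")
    db.execute("INSERT INTO tags VALUES (?, 1)", (tag_payload,))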

Posted on March 16, 2006 at 6:55 AM • 34 Comments

Bioterrorism

Long, and interesting, article on bioterrorism.

When you read this, don’t concentrate too much on what’s possible right now. If the techniques discussed in the article are beyond the reach of government laboratories now, they won’t be in five or ten years. And then they’ll become cheaper and easier. Attackers look for leverage, and technology gives attackers leverage.

Posted on March 15, 2006 at 1:46 PM • 19 Comments

Police Department Privilege Escalation

It’s easier than you think to create your own police department in the United States.

Yosef Maiwandi formed the San Gabriel Valley Transit Authority—a tiny, privately run nonprofit organization that provides bus rides to disabled people and senior citizens. It operates out of an auto repair shop. Then, because the law seems to allow transit companies to form their own police departments, he formed the San Gabriel Valley Transit Authority Police Department. As a thank you, he made Stefan Eriksson a deputy police commissioner of the San Gabriel Transit Authority Police’s anti-terrorism division, and gave him business cards.

Police departments like this don’t have much legal authority, but they don’t really need it. My guess is that the name alone is impressive enough.

In the computer security world, privilege escalation means using some legitimately granted authority to secure extra authority that was not intended. This is a real-world counterpart. Even though transit police departments are meant to police their vehicles only, the title—and the ostensible authority that comes along with it—is useful elsewhere. Someone with criminal intent could easily use this authority to evade scrutiny or commit fraud. From the article:

Deal said that his agency has discovered that several railroad agencies around California have created police departments—even though the companies have no rail lines in California to patrol. The police certification agency is seeking to decertify those agencies because it sees no reason for them to exist in California.

The issue of private transit firms creating police agencies has in recent years been a concern in Illinois, where several individuals with criminal histories created railroads as a means of forming a police agency.

The real problem is that we’re too deferential to police power. We don’t know the limits of police authority, whether it be an airport policeman or someone with a business card from the “San Gabriel Valley Transit Authority Police Department.”
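
In software terms, it’s a gate that checks the impressive title but never the narrow scope the title was granted for. A toy illustration (all names hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Credential:
        title: str   # what the badge says
        scope: str   # what the authority was actually granted for

    def naive_gate(cred):
        # The flaw: defer to the title, never ask about scope.
        return cred.title == "police"

    def scoped_gate(cred, resource):
        return cred.title == "police" and resource == cred.scope

    badge = Credential(title="police", scope="transit vehicles")
    print(naive_gate(badge))                        # True, everywhere
    print(scoped_gate(badge, "airport terminal"))   # False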

Posted on March 15, 2006 at 7:47 AM • 67 Comments

Airport Security Failure

At LaGuardia, a man successfully walked through the metal detector, but screeners wanted to check his shoes. (Some reports say that his shoes set off an alarm.) But he didn’t wait, and disappeared into the crowd.

The entire Delta Airlines terminal had to be evacuated, and between 2,500 and 3,000 people had to be rescreened. I’m sure the resultant flight delays rippled through the entire system.

Security systems can fail in two ways. They can fail to defend against an attack. And they can fail even when there is no attack to defend against. The latter failure is often more important, because false alarms are more common than real attacks.

Aside from the obvious security failure—how did this person manage to disappear into the crowd, anyway?—it’s painfully clear that the overall security system did not fail well. Well-designed security systems fail gracefully, without affecting the entire airport terminal. That the only thing the TSA could do after the failure was evacuate the entire terminal and rescreen everyone is a testament to how badly designed the security system is.

Posted on March 14, 2006 at 12:15 PM • 32 Comments

Basketball Prank

On March 4, University of California Berkeley (Cal) played a basketball game against the University of Southern California (USC). With Cal in contention for the PAC-10 title and the NCAA tournament at stake, the game was a must-win.

Enter “Victoria.”

Victoria was a hoax UCLA co-ed, created by Cal’s Rally Committee. For the previous week, “she” had been chatting with Gabe Pruitt, USC’s starting guard, over AOL Instant Messenger. It got serious. Pruitt and several of his teammates made plans to go to Westwood after the game so that they could party with Victoria and her friends.

On Saturday, at the game, when Pruitt was introduced in the starting lineup, the chants began: “Victoria, Victoria.” One of the fans held up a sign with her phone number.

The look on Pruitt’s face when he turned to the bench after the first Victoria chant was priceless. The expression was unlike anything ever seen in collegiate or pro sports. Never did a chant by the opposing crowd have such an impact on a visiting player. Pruitt was in total shock. (This is the only picture I could find.)

The chant “Victoria” lasted all night. To add to his embarrassment, transcripts of their IM conversations were handed out to the bench before the game: “You look like you have a very fit body.” “Now I want to c u so bad.”

Pruitt ended up a miserable 3-for-13 from the field.

(See also here and here.)

Security morals? First, this is the cleverest social engineering attack I’ve read about in a long time. Second, authentication is hard in little text windows—but it’s no less important. (Although even if this were a real co-ed recruited for the ruse, authentication wouldn’t have helped.) And third, you can hoodwink college basketball players if you get them thinking with their hormones.

Posted on March 14, 2006 at 12:11 PM • 104 Comments

Bypassing the Airport Identity Check

Here’s an article about how you can modify, and then print, your own boarding pass and get on an airplane even if you’re on the no-fly list. This isn’t news; I wrote about it in 2003.

I don’t worry about it now any more than I worried about it then:

In terms of security, this is no big deal; the photo-ID requirement doesn’t provide much security. Identification of passengers doesn’t increase security very much. All of the 9/11 terrorists presented photo-IDs, many in their real names. Others had legitimate driver’s licenses in fake names that they bought from unscrupulous people working in motor vehicle offices.

The photo-ID requirement is presented as a security measure, but business is the real reason. Airlines didn’t resist it, even though they resisted every other security measure of the past few decades, because it solved a business problem: the reselling of nonrefundable tickets. Such tickets used to be advertised regularly in newspaper classifieds. An ad might read: “Round trip, Boston to Chicago, 11/22-11/30, female, $50.” Since the airlines didn’t check IDs and could observe gender, any female could buy the ticket and fly the route. Now that won’t work. Under the guise of helping prevent terrorism, the airlines solved a business problem of their own and passed the blame for the solution on to FAA security requirements.

But the system fails. I can fly on your ticket. You can fly on my ticket. We don’t even have to be the same gender.

Posted on March 14, 2006 at 7:58 AM • 28 Comments

Credit Card Companies and Agenda

This has been making the rounds on the Internet. Basically, a guy tears up a credit card application, tapes it back together, fills it out with someone else’s address and a different phone number, and sends it in. He still gets a credit card.

Imagine that some fraudster is rummaging through your trash and finds a torn-up credit card application. That’s why this is bad.

To understand why it’s happening, you need to understand the trade-offs and the agenda. From the point of view of the credit card company, the benefit of giving someone a credit card is that he’ll use it and generate revenue. The risk is that it’s a fraudster who will cost the company revenue. The credit card industry has dealt with the risk in two ways: they’ve pushed a lot of the risk onto the merchants, and they’ve implemented fraud detection systems to limit the damage.

All other costs and problems of identity theft are borne by the consumer; they’re an externality to the credit card company. They don’t enter into the trade-off decision at all.

We can laugh at this kind of thing all day, but it’s actually in the best interests of the credit card industry to mail cards in response to torn-up and taped-together applications without doing much checking of the address or phone number. If we want that to change, we need to fix the externality.

Posted on March 13, 2006 at 2:18 PM • 43 Comments

Googling for Covert CIA Agents

It’s easy to blow the cover of CIA agents using the Internet:

The CIA asked the Tribune not to publish her name because she is a covert operative, and the newspaper agreed. But unbeknown to the CIA, her affiliation and those of hundreds of men and women like her have somehow become a matter of public record, thanks to the Internet.

When the Tribune searched a commercial online data service, the result was a virtual directory of more than 2,600 CIA employees, 50 internal agency telephone numbers and the locations of some two dozen secret CIA facilities around the United States.

Only recently has the CIA recognized that in the Internet age its traditional system of providing cover for clandestine employees working overseas is fraught with holes, a discovery that is said to have “horrified” CIA Director Porter Goss.

Seems to be serious:

Not all of the 2,653 employees whose names were produced by the Tribune search are supposed to be working under cover. More than 160 are intelligence analysts, an occupation that is not considered a covert position, and senior CIA executives such as Tenet are included on the list.

But an undisclosed number of those on the list—the CIA would not say how many—are covert employees, and some are known to hold jobs that could make them terrorist targets.

Other potential targets include at least some of the two dozen CIA facilities uncovered by the Tribune search. Most are in northern Virginia, within a few miles of the agency’s headquarters. Several are in Florida, Ohio, Pennsylvania, Utah and Washington state. There is one in Chicago.

Some are heavily guarded. Others appear to be unguarded private residences that bear no outward indication of any affiliation with the CIA.

A senior U.S. official, reacting to the computer searches that produced the names and addresses, said, “I don’t know whether Al Qaeda could do this, but the Chinese could.”

There are more articles.

Posted on March 13, 2006 at 11:02 AM • 35 Comments

Huge Vulnerability in GPG

GPG is an open-source version of the PGP e-mail encryption protocol. Recently, a very serious vulnerability was discovered in the software: given a signed e-mail message, you can modify the message—specifically, you can prepend or append arbitrary data—without disturbing the signature verification.

It appears this bug has existed for years without anybody finding it.
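
To see the class of bug, here is a deliberately flawed toy verifier (HMAC stands in for real signatures; this is not GPG’s code). It checks the signature over the signed segment correctly, but hands back the entire input as “verified,” so unsigned bytes an attacker prepends or appends ride along with a good-signature result:

    import hashlib
    import hmac

    KEY = b"demo-key"                      # stand-in for real signing keys
    BEGIN, END = b"<signed>", b"</signed>"

    def sign(message):
        tag = hmac.new(KEY, message, hashlib.sha256).hexdigest().encode()
        return BEGIN + message + END + tag

    def flawed_verify(blob):
        start = blob.index(BEGIN) + len(BEGIN)
        stop = blob.index(END)
        signed = blob[start:stop]
        tag = blob[stop + len(END):][:64]
        ok = hmac.compare_digest(
            hmac.new(KEY, signed, hashlib.sha256).hexdigest().encode(), tag)
        return ok, blob   # the bug: returns untrusted surrounding bytes too

    good = sign(b"pay alice $5")
    tampered = b"pay mallory $5000\n" + good   # prepended, never signed
    ok, shown = flawed_verify(tampered)
    assert ok and b"mallory" in shown   # "good signature" on attacker text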

Moral: Open source does not necessarily mean “fewer bugs.” I wrote about this back in 1999.

UPDATED TO ADD (3/13): This bug is fixed in Version 1.4.2.2. Users should upgrade immediately.

Posted on March 13, 2006 at 6:33 AM • 37 Comments

Friday Squid Blogging: Giant Squid in Australia

On television:

According to juicy folklore and loose legend, for centuries, the inky waters of our deepest oceans have been home to that most mysterious of marine creatures—the giant squid. Well, as we speak, visitors to Melbourne’s aquarium can take a gander at the real thing, a 7m-long squid, caught in New Zealand and frozen in a block of ice.

For 30 years, almost obsessively, one real scientific character from across the Tasman has been chasing these elusive creatures and Ben Deacon caught up with him, hard at what’s clearly become his life’s work.

Watch the video here.

Posted on March 10, 2006 at 2:46 PM • 15 Comments

Blowing Up ATMs

In the Netherlands, criminals are stealing money from ATMs by blowing them up (article in Dutch). First, they drill a hole in an ATM and fill it with some sort of gas. Then, they ignite the gas—from a safe distance—and clean up the money that flies all over the place after the ATM explodes.

Sounds crazy, but apparently there has been an increase in this type of attack recently. The banks’ countermeasure is to install air vents so that gas can’t build up inside the ATMs.

Posted on March 10, 2006 at 12:26 PM • 59 Comments

Flying Without ID

According to the TSA’s position in John Gilmore’s 9th Circuit case, you are allowed to fly without showing ID—you’ll just have to submit yourself to secondary screening.

The Identity Project wants you to try it out. If you have time, try to fly without showing ID.

Mr. Gilmore recommends that every traveler who is concerned with privacy or anonymity should opt to become a “selectee” rather than show an ID. We are very likely to lose the right to travel anonymously, if citizens do not exercise it. TSA and the airlines will attempt to make it inconvenient for you, by wasting your time and hassling you, but they can’t do much in that regard without compromising their avowed missions, which are to transport paying passengers, and to keep weapons off planes. If you never served in the armed services, this is a much easier way to spend some time keeping your society free. (Bring a copy of the court decision with you and point out some of the numerous places it says you can fly as a selectee rather than show ID. Paper tickets are also helpful, though not required.)

I’m curious what the results are.

EDITED TO ADD (11/25): Here’s someone who tried, and failed.

Posted on March 10, 2006 at 7:20 AM • 88 Comments

More on the ATM-Card Class Break

A few days ago, I wrote about the class break of Citibank ATM cards in Canada, the UK, and Russia. This is new news:

With consumers around the country reporting mysterious fraudulent account withdrawals, and multiple banks announcing problems with stolen account information, it appears thieves have unleashed a powerful new way to steal money from cash machines.

Criminals have stolen bank account data from a third-party company, several banks have said, and then used the data to steal money from related accounts using counterfeit cards at ATM machines.

The central question surrounding the new wave of crime is this: How did the thieves manage to foil the PIN code system designed to fend off such crimes? Investigators are considering the possibility that criminals have stolen PIN codes from a retailer, MSNBC has learned.

Read the whole article. Details are emerging slowly, but there’s still a lot we don’t know.

EDITED TO ADD (3/11): More info in these four articles.

Posted on March 9, 2006 at 3:51 PM • 54 Comments

Danish ATM-Card Skimming

Criminals are breaking into stores and pretending to ransack them, as a cover for installing ATM skimming hardware, complete with a transmitter.

Note the last paragraph of the story—it’s in Danish, sorry—where the company admits that this is the fourth known attempt by criminals to install reader equipment inside ATM terminals for the purpose of skimming numbers and PINs.

Posted on March 9, 2006 at 1:40 PM • 17 Comments

Data Mining for Terrorists

In the post-9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost—in money, liberties, etc.—then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern—purchase expensive luxury goods, purchase things that can be easily fenced, etc.—and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls—as long as they’re infrequent—so the cost is just a few minutes of operator time.

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) cards are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events—things that the data mining system will have to “look at”—and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events—e-mails, phone calls, purchases, web surfings, whatever—per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.

This unrealistically-accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day—but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
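
The arithmetic is easy to check; a few lines of Python reproduce the numbers above (the rates and event counts are this essay’s assumptions, not measured data):

    indicators = 10**12     # events scanned per year
    real_plots = 10         # actual plots hidden among them
    fp_rate = 1 / 100       # 99% accurate on innocent events
    fn_rate = 1 / 1000      # 99.9% accurate on real plots

    false_alarms = (indicators - real_plots) * fp_rate   # ~10 billion/year
    plots_found = real_plots * (1 - fn_rate)             # ~10

    print(f"{false_alarms / 365:,.0f} false alarms per day")           # ~27 million
    print(f"{false_alarms / plots_found:,.0f} false alarms per plot")  # ~1 billion

    # Even at an absurd 99.9999% false-positive accuracy:
    print(f"{indicators * 1e-6 / 365:,.0f} false alarms per day")      # ~2,740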

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are similarly rare, so any “test” is going to result in an endless stream of false alarms.

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low—a phone call from a Visa operator, or an uninteresting ad—and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM • 93 Comments

The Analog Hole

Nice essay on the human dimension of the problem of securing information. “Analog hole” is a good name for it.

Along the same lines, here’s a story about the security risks of talking loudly:

About four seats away is a gentleman (on this occasion pronounced ‘fool’) with a BlackBerry mobile device and a very loud voice. He is obviously intent on selling a customer something and is briefing his team. It seems he is the leader, as he defines the strategy and assigns each member of his unseen team specific tasks and roles.

Customer products, names, preferences, relationships and monies are being broadcast to everyone within earshot. The strategy for the conference call is discussed, and the specific customer is now identified by name and company, and openly described as a BlackBerry nut!

Posted on March 8, 2006 at 12:48 PM24 Comments

Fighting Misuse of the Patriot Act

I like this idea:

I had to sign a tedious business contract the other day. They wanted my corporation number—fair enough—plus my Social Security number—well, if you insist—and also my driver’s license number—hang on, what’s the deal with that?

Well, we e-mailed over a query and they e-mailed back that it was a requirement of the Patriot Act. So we asked where exactly in the Patriot Act could this particular requirement be found and, after a bit of a delay, we got an answer.

And on discovering that there was no mention of driver’s licenses in that particular subsection, I wrote back that we have a policy of reporting all erroneous invocations of the Patriot Act to the Department of Homeland Security on the grounds that such invocations weaken the rationale for the act, and thereby undermine public support for genuine anti-terrorism measures and thus constitute a threat to America’s national security.

And about 10 minutes after that the guy sent back an e-mail saying he didn’t need the driver’s license number after all.

Posted on March 8, 2006 at 7:17 AM36 Comments

How to Crash the Oscars

It’s all social engineering:

If you want to crash the glitziest party of all, the Oscars, here’s a tip from a professional: Show up at the theater, dressed as a chef carrying a live lobster, looking really concerned.

[…]

“The most important technique is confidence,” he said. “Part of it is being dressed the part, looking the part, and acting the part and then lying to get in the door.”

The biggest hole in the elaborate Oscars security plan, Mamlet said, is that while everyone from stagehands to reporters has to wear official credentials, the celebrities and movie executives attending the event do not.

“If you really act like a celebrity, the security guards will worry that they will get into trouble for not recognizing you,” Mamlet said.

Posted on March 7, 2006 at 6:20 AM40 Comments

Class Break of Citibank ATM Cards

There seems to be some massive class break against Citibank ATM cards in Canada, the UK, and Russia. I don’t know any details, but the story is interesting. More info here.

EDITED TO ADD (3/6): More info here, here, here, and here.

EDITED TO ADD (3/7): Another news article.

From Jake Appelbaum: “The one unanswered question in all of this seems to be: Why is the new card going to have any issues in any of the affected countries? No one from Citibank was able to provide me with a promise my new card wouldn’t be locked yet again. Pretty amazing. I guess when I get my new card, I’ll find out.”

EDITED TO ADD (3/8): Some more news.

Posted on March 6, 2006 at 2:44 PM31 Comments

The Terrorist Threat of Paying Your Credit Card Balance

This article shows how badly terrorist profiling can go wrong:

They paid down some debt. The balance on their JCPenney Platinum MasterCard had gotten to an unhealthy level. So they sent in a large payment, a check for $6,522.

And an alarm went off. A red flag went up. The Soehnges’ behavior was found questionable.

And all they did was pay down their debt. They didn’t call a suspected terrorist on their cell phone. They didn’t try to sneak a machine gun through customs.

They just paid a hefty chunk of their credit card balance. And they learned how frighteningly wide the net of suspicion has been cast.

After sending in the check, they checked online to see if their account had been duly credited. They learned that the check had arrived, but the amount available for credit on their account hadn’t changed.

So Deana Soehnge called the credit-card company. Then Walter called.

“When you mess with my money, I want to know why,” he said.

They both learned the same astounding piece of information about the little things that can set the threat sensors to beeping and blinking.

They were told, as they moved up the managerial ladder at the call center, that the amount they had sent in was much larger than their normal monthly payment. And if the increase hits a certain percentage higher than that normal payment, Homeland Security has to be notified. And the money doesn’t move until the threat alert is lifted.

The article goes on to blame something called the Bank Privacy Act, but that’s not correct. The culprit here is the amendments made to the Bank Secrecy Act by the USA Patriot Act, Sections 351 and 352. There’s a general discussion here, and the Federal Register here.

There has been some rumbling on the net that this story is badly garbled—or even a hoax—but certainly this kind of thing is what financial institutions are required to report under the Patriot Act.

Remember, all the time spent chasing down silly false alarms is time wasted. Finding terrorist plots is a signal-to-noise problem, and stuff like this substantially decreases that ratio: it adds a lot of noise without adding enough signal. It makes us less safe, because it makes terrorist plots harder to find.

Posted on March 6, 2006 at 10:45 AM58 Comments

The Future of Privacy

Over the past 20 years, there’s been a sea change in the battle for personal privacy.

The pervasiveness of computers has resulted in the almost constant surveillance of everyone, with profound implications for our society and our freedoms. Corporations and the police are both using this new trove of surveillance data. We as a society need to understand the technological trends and discuss their implications. If we ignore the problem and leave it to the “market,” we’ll all find that we have almost no privacy left.

Most people think of surveillance in terms of police procedure: Follow that car, watch that person, listen in on his phone conversations. This kind of surveillance still occurs. But today’s surveillance is more like the NSA’s model, recently turned against Americans: Eavesdrop on every phone call, listening for certain keywords. It’s still surveillance, but it’s wholesale surveillance.

Wholesale surveillance is a whole new world. It’s not “follow that car,” it’s “follow every car.” The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.

More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a website.

Much has been written about RFID chips and how they can be used to track people. People can also be tracked by their cell phones, their Bluetooth devices, and their WiFi-enabled computers. In some cities, video cameras capture our image hundreds of times a day.

The common thread here is computers. Computers are involved more and more in our transactions, and data are byproducts of these transactions. As computer memory becomes cheaper, more and more of these electronic footprints are being saved. And as processing becomes cheaper, more and more of it is being cross-indexed and correlated, and then used for secondary purposes.

Information about us has value. It has value to the police, but it also has value to corporations. The Justice Department wants details of Google searches, so they can look for patterns that might help find child pornographers. Google uses that same data so it can deliver context-sensitive advertising messages. The city of Baltimore uses aerial photography to surveil every house, looking for building permit violations. A national lawn-care company uses the same data to better market its services. The phone company keeps detailed call records for billing purposes; the police use them to catch bad guys.

In the dot-com bust, the customer database was often the only salable asset a company had. Companies like Experian and Acxiom are in the business of buying and reselling this sort of data, and their customers are both corporate and government.

Computers are getting smaller and cheaper every year, and these trends will continue. Here’s just one example of the digital footprints we leave:

It would take about 100 megabytes of storage to record everything the fastest typist inputs to his computer in a year. That’s a single flash memory chip today, and one could imagine computer manufacturers offering this as a reliability feature. Recording everything the average user does on the Internet requires more memory: 4 to 8 gigabytes a year. That’s a lot, but “record everything” is Gmail’s model, and it’s probably only a few years before ISPs offer this service.

The typical person uses 500 cell phone minutes a month; that translates to 5 gigabytes a year to save it all. My iPod can store 12 times that data. A “life recorder” you can wear on your lapel that constantly records is still a few generations off: 200 gigabytes/year for audio and 700 gigabytes/year for video. It’ll be sold as a security device, so that no one can attack you without being recorded. When that happens, will not wearing a life recorder be used as evidence that someone is up to no good, just as prosecutors today use the fact that someone left his cell phone at home as evidence that he didn’t want to be tracked?
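
Those storage figures are easy to sanity-check. Here is a rough sketch; the typing speed, call volume, and audio bitrates are illustrative assumptions, not figures from the text:

    # Rough sanity check of the storage estimates above.
    # Typing speed, call volume, and bitrates are illustrative assumptions.
    typing_bytes = 120 * 6 * 60 * 8 * 365          # 120 wpm, 6 bytes/word, 8 h/day
    print(f"Typing: {typing_bytes / 1e6:.0f} MB/year")        # ~126 MB

    phone_bytes = 500 * 12 * 60 * 2 * 64_000 / 8   # 500 min/month, both sides at 64 kbit/s
    print(f"Phone: {phone_bytes / 1e9:.1f} GB/year")          # ~5.8 GB

    recorder_bytes = 365 * 24 * 3600 * 48_000 / 8  # always-on audio at 48 kbit/s
    print(f"Recorder: {recorder_bytes / 1e9:.0f} GB/year")    # ~189 GB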

In a sense, we’re living in a unique time in history. Identification checks are common, but they still require us to whip out our ID. Soon it’ll happen automatically, either through an RFID chip in our wallet or face-recognition from cameras. And those cameras, now visible, will shrink to the point where we won’t even see them.

We’re never going to stop the march of technology, but we can enact legislation to protect our privacy: comprehensive laws regulating what can be done with personal information about us, and more privacy protection from the police. Today, personal information about you is not yours; it’s owned by the collector. There are laws protecting specific pieces of personal data—videotape rental records, health care information—but nothing like the broad privacy protection laws you find in European countries. That’s really the only solution; leaving the market to sort this out will result in even more invasive wholesale surveillance.

Most of us are happy to give out personal information in exchange for specific services. What we object to is the surreptitious collection of personal information, and the secondary use of information once it’s collected: the buying and selling of our information behind our back.

In some ways, this tidal wave of data is the pollution problem of the information age. All information processes produce it. If we ignore the problem, it will stay around forever. And the only way to successfully deal with it is to pass laws regulating its generation, use and eventual disposal.

This essay was originally published in the Minneapolis Star-Tribune.

Posted on March 6, 2006 at 5:41 AM113 Comments

Friday Squid Blogging: Giant Squid in London's Natural History Museum

There’s a 28-foot (8.62-meter) giant squid on display at the Natural History Museum in London:

It took several months to prepare the squid for display.

“The first stage was to defrost it; that took about four days. The problem was the mantle – the body – is very thick and the tentacles very narrow, so we had to try and thaw the thick mantle without the tentacles rotting,” Mr Ablett told the BBC News website.

The scientists did this by bathing the mantle in water, whilst covering the tentacles in ice packs, after which they injected the squid with a formol-saline solution to prevent it from rotting.

The team then needed to find someone to build a glass tank which could not only hold the huge creature, but could leave the squid accessible for future scientific research, and they decided to draw upon the knowledge of an artist famed for displaying preserved dead animals.

The website has a video. Here is another news story. Damien Hirst got involved in the defrosting.

Note that this squid is larger than the 25-foot specimen on display at the American Museum of Natural History in New York.

Posted on March 3, 2006 at 3:24 PM24 Comments

Caller ID Spoofing

What’s worse than a bad authentication system? A bad authentication system that people have learned to trust. According to the Associated Press:

In the last few years, Caller ID spoofing has become much easier. Millions of people have Internet telephone equipment that can be set to make any number appear on a Caller ID system. And several Web sites have sprung up to provide Caller ID spoofing services, eliminating the need for any special hardware.

For instance, Spoofcard.com sells a virtual “calling card” for $10 that provides 60 minutes of talk time. The user dials a toll-free number, then keys in the destination number and the Caller ID number to display.

Near as anyone can tell, this is perfectly legal. (Although the FCC is investigating.)
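
The reason spoofing is this easy is that Caller ID is a self-asserted field in the call signaling; nothing in the network verifies it end to end. In SIP, the common VoIP protocol, the calling number rides in the From header, which the caller’s own equipment fills in. A hypothetical sketch (the numbers, hosts, and tag are made up for illustration):

    # Caller ID over VoIP is whatever the caller's equipment claims.
    # All numbers and hosts below are made-up illustrative values.
    spoofed_number = "2025551234"   # what the victim's display will show
    destination = "4155556789"

    invite = (
        f"INVITE sip:{destination}@example-carrier.net SIP/2.0\r\n"
        f'From: "Acme Bank" <sip:{spoofed_number}@example.net>;tag=1928301774\r\n'
        f"To: <sip:{destination}@example-carrier.net>\r\n"
    )
    print(invite)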

The applications for Caller ID spoofing are not limited to fooling people. There’s real fraud that can be committed:

Lance James, chief scientist at security company Secure Science Corp., said Caller ID spoofing Web sites are used by people who buy stolen credit card numbers. They will call a service such as Western Union, setting Caller ID to appear to originate from the card holder’s home, and use the credit card number to order cash transfers that they then pick up.

In a similar vulnerability, credit-card companies use Caller ID to authenticate newly issued cards. The recipients are generally asked to call from their home phones to activate their cards.

And, of course, harmful pranks:

In one case, SWAT teams surrounded a building in New Brunswick, N.J., last year after police received a call from a woman who said she was being held hostage in an apartment. Caller ID was spoofed to appear to come from the apartment.

It’s also easy to break into a cell phone voice mailbox using spoofing, because many systems are set to automatically grant entry to calls from the owner of the account. Stopping that requires setting a PIN code or password for the mailbox.

I have never been a fan of Caller ID. My phone number is configured to block Caller ID on outgoing calls. The number of phone numbers that refuse to accept my calls is growing, however.

Posted on March 3, 2006 at 7:10 AM

The Psychology of Password Generation

Nothing too surprising in this study of password generation practices:

The majority of participants in the current study most commonly reported password generation practices that are simplistic and hence very insecure. Particular practices reported include using lowercase letters, numbers or digits, personally meaningful words and numbers (e.g., dates). It is widely known that users typically use birthdates, anniversary dates, telephone numbers, license plate numbers, social security numbers, street addresses, apartment numbers, etc. Likewise, personally meaningful words are typically derived from predictable areas and interests in the person’s life and could be guessed through basic knowledge of his or her interests.

The finding that participants in the current study use such simplistic practices to develop passwords is supported by similar research by Bishop and Klein (1995) and Vu, Bhargav & Proctor (2003) who found that even with the application of password guidelines, users would tend to revert to the simplest possible strategies (Proctor et al., 2002). In the current study, nearly 60% of the respondents reported that they do not vary the complexity of their passwords depending on the nature of the site and 53% indicated that they never change their password if they are not required to do so. These practices are most likely encouraged by the fact that users maintain multiple accounts (average = 8.5) and have difficulty recalling too many unique passwords.

It would seem to be a logical assumption that the practices and behaviors users engage in would be related to what they think they should do in order to create secure passwords. This does not seem to be the case, as participants in the current study were able to identify many of the recommended practices, despite the fact that they did not use the practices themselves. These findings contradict the ideas put forth in Adams & Sasse (1999) and Gehringer (2002), who state that users are largely unaware of the methods and practices that are effective for creating strong passwords. Davis and Ganesan (1993) point out that the majority of users are not aware of the vulnerability of password-protected systems, the prevalence of password cracking, the ease with which it can be accomplished, or the damage that can be caused by it. While the majority of this sample of password users demonstrated technical knowledge of password practices, further education regarding the vulnerability of password-protected systems would help users form a more accurate mental model of computer security.
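
The study’s phrase “simplistic and hence very insecure” is easy to quantify. Here is a quick sketch of the keyspace arithmetic; the password lengths and alphabet sizes are illustrative assumptions:

    import math

    # Entropy of a truly random password: length * log2(alphabet size).
    # User-chosen words and dates are far weaker still, since they come
    # from tiny, predictable dictionaries.
    def entropy_bits(length, alphabet):
        return length * math.log2(alphabet)

    print(f"8 lowercase letters:    {entropy_bits(8, 26):.1f} bits")  # 37.6
    print(f"8 mixed case + digits:  {entropy_bits(8, 62):.1f} bits")  # 47.6
    print(f"8 printable characters: {entropy_bits(8, 95):.1f} bits")  # 52.6
    print(f"Birthdate (MMDDYY):     {math.log2(36600):.1f} bits")     # 15.2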

Posted on March 2, 2006 at 11:46 AM63 Comments

FedEx Kinko's Payment Card Hacked

This site goes into detail about how the FedEx Kinko’s ExpressPay stored-value card has been hacked. There’s nothing particularly amazing about the hack; the most remarkable thing is how badly the system was designed in the first place. The only security on the cards is a three-byte code that lets you read and write to the card. I’d be amazed if no one has hacked this before.
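
A three-byte code means a keyspace of only 2^24, about 16.8 million values, which is trivially searchable. A rough sketch of the arithmetic; the guess rate is an illustrative assumption:

    # Why a three-byte access code is effectively no security.
    # The guess rate is a deliberately conservative assumption.
    keyspace = 2 ** 24               # 16,777,216 possible codes
    guesses_per_second = 10

    hours = keyspace / guesses_per_second / 3600
    print(f"Keyspace: {keyspace:,} codes")
    print(f"Worst case at {guesses_per_second}/s: {hours:,.0f} hours")  # ~466 hours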

EDITED TO ADD (3/2): News article.

Posted on March 2, 2006 at 7:02 AM30 Comments

More on Greek Wiretapping

Earlier this month I blogged about a wiretapping scandal in Greece.

Unknowns tapped the mobile phones of about 100 Greek politicians and offices, including the U.S. embassy in Athens and the Greek prime minister.

Details are sketchy, but it seems that a piece of malicious code was discovered by Ericsson technicians in Vodafone’s mobile phone software. The code tapped into the conference call system. It “conference called” phone calls to 14 prepaid mobile phones where the calls were recorded.

More details are emerging. It turns out that the “malicious code” was actually code designed into the system. It’s eavesdropping code put into the system for the police.

The attackers managed to bypass the authorization mechanisms of the eavesdropping system, and activate the “lawful interception” module in the mobile network. They then redirected about 100 numbers to 14 shadow numbers they controlled. (Here are translations of some of the press conferences with technical details. And here are details of the system used.)

There is an important security lesson here. I have long argued that when you build surveillance mechanisms into communication systems, you invite the bad guys to use those mechanisms for their own purposes. That’s exactly what happened here.

UPDATED TO ADD (3/2): From a reader: “I have an update. There is some news from the ‘Hellenic Authority for the Information and Communication Security and Privacy’ with a few facts, and I heard a rumor that there is a root backdoor in the telnetd of Ericsson’s AXE. (No, I can’t confirm the rumor.)”

Posted on March 1, 2006 at 8:04 AM14 Comments
