Blog: June 2008 Archives

Pentagon Consulting Social Scientists on Security

This seems like a good idea:

Eager to embrace eggheads and ideas, the Pentagon has started an ambitious and unusual program to recruit social scientists and direct the nation’s brainpower to combating security threats like the Chinese military, Iraq, terrorism and religious fundamentalism.

The article talks a lot about potential conflicts of interest and such, and less about what sorts of insights the social scientists can offer. I think there is a lot of potential value here.

Posted on June 30, 2008 at 12:13 PM • 18 Comments

Security and Human Behavior

I’m writing from the First Interdisciplinary Workshop on Security and Human Behavior (SHB 08).

Security is both a feeling and a reality, and they’re different. There are several different research communities: technologists who study security systems, and psychologists who study people, not to mention economists, anthropologists and others. Increasingly these worlds are colliding.

  • Security design is by nature psychological, yet many systems ignore this, and cognitive biases lead people to misjudge risk. For example, a key in the corner of a web browser makes people feel more secure than they actually are, while people feel far less secure flying than they actually are. These biases are exploited by various attackers.

  • Security problems relate to risk and uncertainty, and the way we react to them. Cognitive and perception biases affect the way we deal with risk, and therefore the way we understand security—whether that is the security of a nation, of an information system, or of one’s personal information.

  • Many real attacks on information systems exploit psychology more than technology. Phishing attacks trick people into logging on to websites that appear genuine but actually steal passwords. Technical measures can stop some phishing tactics, but stopping users from making bad decisions is much harder. Deception-based attacks are now the greatest threat to online security.

  • In order to be effective, security must be usable—not just by geeks, but by ordinary people. Research into usable security invariably has a psychological component.

  • Terrorism is perceived to be a major threat to society. Yet the actual damage done by terrorist attacks is dwarfed by the secondary effects as target societies overreact. There are many topics here, from the manipulation of risk perception to the anthropology of religion.

  • There are basic research questions; for example, about the extent to which the use and detection of deception in social contexts may have helped drive human evolution.

The dialogue between researchers in security and in psychology is rapidly widening, bringing in more and more disciplines—from security usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other.

About a year ago Ross Anderson and I conceived this conference as a way to bring together computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security. I’ve read a lot—and written some—on psychology and security over the past few years, and have been continually amazed by some of the research that people outside my field have been doing on topics very relevant to my field. Ross and I both thought that bringing these diverse communities together would be fascinating to everyone. So we convinced behavioral economists Alessandro Acquisti and George Loewenstein to help us organize the workshop, invited the people we all have been reading, and also asked them who else to invite. The response was overwhelming. Almost everyone we wanted was able to attend, and the result was a 42-person conference with 35 speakers.

We’re most of the way through the morning, and it’s been even more fascinating than I expected. (Here’s the agenda.) We’ve talked about detecting deception in people, organizational biases in making security decisions, building security “intuition” into Internet browsers, different techniques to prevent crime, complexity and failure, and the modeling of security feeling.

I had high hopes of liveblogging this event, but it’s far too fascinating to spend time writing posts. If you want to read some of the more interesting papers written by the participants, this is a good page to start with.

I’ll write more about the conference later.

EDITED TO ADD (6/30): Ross Anderson has a blog post, where he liveblogs the individual sessions in the comments. And I should add that this was an invitational event—which is why you haven’t heard about it before—and that the room here at MIT is completely full.

EDITED TO ADD (7/1): Matt Blaze has posted audio. And Ross Anderson—link above—is posting paragraph-long summaries for each speaker.

EDITED TO ADD (7/6): Photos of the speakers.

EDITED TO ADD (7/7): MSNBC article on the workshop. And L. Jean Camp’s notes.

Posted on June 30, 2008 at 11:17 AM • 18 Comments

CCTV Cameras

Pervasive security cameras don’t substantially reduce crime. There are exceptions, of course, and that’s what gets the press. Most famously, CCTV cameras helped catch James Bulger’s murderers in 1993. And earlier this year, they helped convict Steve Wright of murdering five women in the Ipswich area. But these are the well-publicised exceptions. Overall, CCTV cameras aren’t very effective.

This fact has been demonstrated again and again: by a comprehensive study for the Home Office in 2005, by several studies in the US, and again with new data announced last month by New Scotland Yard. They actually solve very few crimes, and their deterrent effect is minimal.

Conventional wisdom predicts the opposite. But if that were true, then camera-happy London, with something like 500,000 cameras, would be the safest city on the planet. It isn’t, of course, because of technological limitations of cameras, organisational limitations of police and the adaptive abilities of criminals.

To some, it’s comforting to imagine vigilant police monitoring every camera, but the truth is very different. Most CCTV footage is never looked at until well after a crime is committed. When it is examined, it’s very common for the viewers not to identify suspects. Lighting is bad and images are grainy, and criminals tend not to stare helpfully at the lens. Cameras break far too often. The best camera systems can still be thwarted by sunglasses or hats. Even when they afford quick identification—think of the 2005 London transport bombers and the 9/11 terrorists—police are often able to identify suspects without the cameras. Cameras afford a false sense of security, encouraging laziness when we need police to be vigilant.

The solution isn’t for police to watch the cameras. Unlike an officer walking the street, cameras only look in particular directions at particular locations. Criminals know this, and can easily adapt by moving their crimes to someplace not watched by a camera—and there will always be such places. Additionally, while a police officer on the street can respond to a crime in progress, the same officer in front of a CCTV screen can only dispatch another officer to arrive much later. By their very nature, cameras result in underused and misallocated police resources.

Cameras aren’t completely ineffective, of course. In certain circumstances, they’re effective in reducing crime in enclosed areas with minimal foot traffic. Combined with adequate lighting, they substantially reduce both personal attacks and auto-related crime in car parks. And from some perspectives, simply moving crime around is good enough. If a local Tesco installs cameras in its store, and a robber targets the store next door as a result, that’s money well spent by Tesco. But it doesn’t reduce the overall crime rate, so it’s a waste of money to the community.

But the question really isn’t whether cameras reduce crime; the question is whether they’re worth it. And given their cost (£500 million in the past 10 years), their limited effectiveness, the potential for abuse (spying on naked women in their own homes, sharing nude images, selling best-of videos, and even spying on national politicians) and their Orwellian effects on privacy and civil liberties, most of the time they’re not. The funds spent on CCTV cameras would be far better spent on hiring experienced police officers.

We live in a unique time in our society: the cameras are everywhere, and we can still see them. Ten years ago, cameras were much rarer than they are today. And in 10 years, they’ll be so small you won’t even notice them. Already, companies like L-1 Security Solutions are developing police-state CCTV surveillance technologies like facial recognition for China, technology that will find its way into countries like the UK. The time to address appropriate limits on this technology is before the cameras fade from notice.

This essay was previously published in The Guardian.

EDITED TO ADD (7/3): A rebuttal.

EDITED TO ADD (7/6): More commentary.

EDITED TO ADD (7/9): Another good survey article, and commentary.

Posted on June 26, 2008 at 1:18 PM • 72 Comments

Fever Screening at Airports

I’ve seen the IR screening guns at several airports, primarily in Asia. The idea is to keep out people with bird flu, or whatever the current fever scare is. This essay explains why it won’t work:

The bottom line is that this kind of remote fever sensing had poor positive predictive value, meaning that the proportion of people correctly identified as having fever was low, ranging from 10% to 16%. Thus there were a lot of false positives. Negative predictive value, the proportion of people classified by the IR device as not having fever who in fact did not have fever, was high (97% to 99%), so not many people with fevers will be missed with the IR device. Predictive values depend not only on the accuracy of the device but also on how prevalent fever is in the screened population. In the early days of a pandemic, fever prevalence will be very low, leading to low positive predictive value. The false positives produced at airport security would make the days of only taking off your shoes look good.

The idea of airport fever screening to keep a pandemic out has a lot of psychological appeal. Unfortunately its benefits are also only psychological: pandemic preparedness theater. There’s no magic bullet for warding off a pandemic. The best way to prepare for a pandemic or any other health threat is to have a robust and resilient public health infrastructure.

Lots more science in the essay.
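
The prevalence effect is just Bayes’ theorem at work. Here’s a quick back-of-the-envelope calculation in Python; the sensitivity and specificity figures are illustrative assumptions, since the essay doesn’t give the device’s actual numbers:

    # Rough Bayes calculation: how fever prevalence drives predictive value.
    # Sensitivity and specificity are illustrative assumptions, not measured
    # figures for any particular IR screening device.
    sensitivity = 0.90   # P(device flags fever | person has fever)
    specificity = 0.95   # P(device passes person | person has no fever)

    for prevalence in (0.10, 0.01, 0.001):   # fraction of passengers with fever
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        ppv = true_pos / (true_pos + false_pos)
        true_neg = specificity * (1 - prevalence)
        false_neg = (1 - sensitivity) * prevalence
        npv = true_neg / (true_neg + false_neg)
        print(f"prevalence {prevalence:.1%}: PPV {ppv:.1%}, NPV {npv:.1%}")

With these assumed numbers, a 1% fever prevalence already pulls the positive predictive value down to about 15%, consistent with the essay’s 10% to 16%, and early-pandemic prevalence would be far lower still.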

Posted on June 26, 2008 at 6:58 AM • 35 Comments

IT Attacks: Insiders vs. Outsiders

A new study claims that insiders aren’t the main threat to network security:

Verizon’s 2008 Data Breach Investigations Report, which looked at 500 breach incidents over the last four years, contradicts the growing orthodoxy that insiders, rather than external agents, represent the most serious threat to network security at most organizations.

Seventy-three percent of the breaches involved outsiders and 18 percent resulted from the actions of insiders, with business partners blamed for 39 percent. The percentages exceed 100 percent because some breaches involved multiple parties, with varying degrees of internal or external involvement.

“The relative infrequency of data breaches attributed to insiders may be surprising to some. It is widely believed and commonly reported that insider incidents outnumber those caused by other sources,” the report states.

The whole insiders vs. outsiders debate has always been one of semantics more than anything else. If you count by attacks, there are a lot more outsider attacks, simply because there are orders of magnitude more outsider attackers. If you count incidents, the numbers tend to get closer: 73% vs. 18% in this case. And if you count damages, insiders generally come out on top—mostly because they have a lot more detailed information and can target their attacks better.
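
To see how attribution percentages can sum past 100, here’s a toy tally in Python; the incident data is made up purely to illustrate the counting:

    # Toy illustration of why breach-attribution percentages exceed 100%:
    # a single breach can involve several parties, and each party is counted.
    # The incident list is invented for illustration.
    breaches = [
        {"outsider"},
        {"outsider", "partner"},
        {"insider"},
        {"outsider", "insider", "partner"},
        {"partner"},
    ]

    for party in ("outsider", "insider", "partner"):
        share = sum(1 for b in breaches if party in b) / len(breaches)
        print(f"{party}: {share:.0%}")
    # Prints 60%, 40%, 60%: a total of 160%, because two of the five
    # breaches are counted under more than one party.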

Both insiders and outsiders are security risks, and you have to defend against them both. Trying to rank them isn’t all that useful.

Posted on June 24, 2008 at 6:55 AM • 38 Comments

New Technology to Detect Chemical, Biological, and Explosive Agents

Interesting:

“We have found we can potentially detect an incredibly small quantity of material, as small as one dust-speck-sized particle weighing one trillionth of a gram, on an individual’s clothing or baggage,” Farquar said. “This is important because if a person handles explosives they are likely to have some remaining residue.”

Using a system they call Single-Particle Aerosol Mass Spectrometry, or SPAMS, the Livermore scientists already have developed and tested the technology for detecting chemical and biological agents.

The new research expands SPAMS’ capabilities to include several types of explosives that have been used worldwide in improvised explosive devices and other terrorist attacks.

“SPAMS is a sensitive, specific, potential option for airport and baggage screening,” Farquar said. “The ability of the SPAMS technology to determine the identity of a single particle could be a valuable asset when the target analyte is dangerous in small quantities or has no legal reason for being present in an environment.”

Posted on June 23, 2008 at 6:07 AM • 46 Comments

Eavesdropping on Encrypted Compressed Voice

Traffic analysis works even through the encryption:

The new compression technique, called variable bitrate compression, produces different-size packets of data for different sounds.

That happens because the sampling rate is kept high for long complex sounds like “ow”, but cut down for simple consonants like “c”. This variable method saves on bandwidth, while maintaining sound quality.

VoIP streams are encrypted to prevent eavesdropping. However, a team from Johns Hopkins University in Baltimore, Maryland, US, has shown that simply measuring the size of packets without decoding them can identify whole words and phrases with a high rate of accuracy.

The technique isn’t good enough to decode entire conversations, but it’s pretty impressive.
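
To get a feel for the idea, here’s a toy sketch in Python: treat each candidate phrase as a template sequence of packet sizes and match the observed encrypted stream against the templates, since encryption hides the payload but not the packet lengths. All the numbers are invented, and the real attack used probabilistic sequence models rather than this nearest-neighbor matching:

    # Toy packet-length traffic analysis against encrypted VBR voice.
    # Templates and observation are invented; the actual research used
    # probabilistic sequence models, not nearest-neighbor matching.
    def distance(a, b):
        # Squared difference between two equal-length packet-size sequences.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    templates = {
        "attack at dawn": [52, 61, 44, 58, 49, 63],
        "call me later":  [47, 55, 60, 42, 57, 51],
    }

    observed = [51, 60, 45, 59, 48, 62]   # encrypted packet sizes on the wire

    guess = min(templates, key=lambda phrase: distance(templates[phrase], observed))
    print(f"best match: {guess!r}")   # -> 'attack at dawn'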

Posted on June 19, 2008 at 6:27 AM • 50 Comments

Security Through Obscurity

Sometimes security through obscurity works:

Yes, the New York Police Department provided an escort, but during more than eight hours on Saturday, one of the great hoards of coins and currency on the planet, worth hundreds of millions of dollars, was utterly unalarmed as it was bumped through potholes, squeezed by double-parked cars and slowed by tunnel-bound traffic during the trip to its fortresslike new vault a mile to the north.

In the end, the move did not become a caper movie.

“The idea was to make this as inconspicuous as possible,” said Ute Wartenberg Kagan, executive director of the American Numismatic Society. “It had to resemble a totally ordinary office move.”

[…]

Society staff members were pledged to secrecy about the timing of the move, and “we didn’t tell our movers what the cargo was until the morning of,” said James McVeigh, operations manager of Time Moving and Storage Inc. of Manhattan, referring to the crew of 20 workers.

From my book Beyond Fear, pp. 211-12:

At 3,106 carats, a little under a pound and a half, the Cullinan Diamond was the largest uncut diamond ever discovered. It was extracted from the earth at the Premier Mine, near Pretoria, South Africa, in 1905. Appreciating the literal enormity of the find, the Transvaal government bought the diamond as a gift for King Edward VII. Transporting the stone to England was a huge security problem, of course, and there was much debate on how best to do it. Detectives were sent from London to guard it on its journey. News leaked that a certain steamer was carrying it, and the presence of the detectives confirmed this. But the diamond on that steamer was a fake. Only a few people knew of the real plan; they packed the Cullinan in a small box, stuck a three-shilling stamp on it, and sent it to England anonymously by unregistered parcel post.

This is a favorite story of mine. Not only can we analyze the complex security system intended to transport the diamond from continent to continent—the huge number of trusted people involved, making secrecy impossible; the involved series of steps with their associated seams, giving almost any organized gang numerous opportunities to pull off a theft—but we can contrast it with the sheer beautiful simplicity of the actual transportation plan. Whoever came up with it was really thinking—and thinking originally, boldly, and audaciously.

This kind of counterintuitive security is common in the world of gemstones. On 47th Street in New York, in Antwerp, in London: People walk around all the time with millions of dollars’ worth of gems in their pockets. The gemstone industry has formal guidelines: If the value of the package is under a specific amount, use the U.S. Mail. If it is over that amount but under another amount, use Federal Express. The Cullinan was again transported incognito; the British Royal Navy escorted an empty box across the North Sea to Amsterdam—where the diamond would be cut—while famed diamond cutter Abraham Asscher actually carried it in his pocket from London via train and night ferry to Amsterdam.

Posted on June 18, 2008 at 1:13 PM • 45 Comments

Magnetic Ring Attack on Electronic Locks

Impressive:

The ‘ring of the devil’ is capable of attacking this kind of electronic motor lock in two ways.

Scenario 1: An electric motor is nothing more than a metal part on an axle that turns because of a changing magnetic field. Turning electromagnets on and off generates a pulling force on the metal part, making it rotate. The ring does the same thing: by turning the ring, you make the metal part in the electric motor turn, opening the lock. As Rop suggested in the comments of the previous posting, a bunch of bigger magnets and maybe a high-speed drill can amplify this effect some more.

Scenario 2: A dynamo is nothing more than a coil charged by a changing magnetic field. So any coil in the lock will start generating current when a magnetic field is rotating around it. If the coil is in the path of the electric motor, it might generate enough current for the motor to start turning.
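
Scenario 2 is just Faraday’s law of induction. A rough feel for the numbers, with every value below an assumed guess rather than a measurement of any real lock:

    import math

    # Order-of-magnitude estimate of the EMF a spinning magnetic ring could
    # induce in a coil inside the lock (scenario 2). Peak EMF for a
    # sinusoidally varying flux is N * B * A * omega. Every value is a guess.
    N = 200      # assumed turns in the lock's coil
    B = 0.2      # tesla: assumed field from the ring at the coil
    A = 1e-4     # m^2: assumed coil cross-section, about 1 cm^2

    for rps in (2, 10, 50):               # ring rotations per second
        omega = 2 * math.pi * rps
        emf = N * B * A * omega
        print(f"{rps:>3} turns/s -> ~{emf:.2f} V peak")
    # Hand-turning yields a fraction of a volt; spinning the ring with a
    # drill raises the EMF proportionally, which is why the drill helps.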

Posted on June 18, 2008 at 6:35 AM • 17 Comments

LifeLock and Identity Theft

LifeLock, one of the companies that offers identity-theft protection in the United States, has been taking quite a beating recently. They’re being sued by credit bureaus, competitors and lawyers in several states who are launching class action lawsuits. And the stories in the media … it’s like a piranha feeding frenzy.

There are also a lot of errors and misconceptions. With its aggressive advertising campaign and a CEO who publishes his Social Security number and dares people to steal his identity—Todd Davis, 457-55-5462—LifeLock is a company that’s easy to hate. But the company’s story has some interesting security lessons, and it’s worth understanding in some detail.

In December 2003, as part of the Fair and Accurate Credit Transactions Act, or Facta, credit bureaus were forced to allow you to put a fraud alert on your credit reports, requiring lenders to verify your identity before issuing a credit card in your name. This alert is temporary, and expires after 90 days. Several companies have sprung up—LifeLock, Debix, LoudSiren, TrustedID—that automatically renew these alerts and effectively make them permanent.

This service pisses off the credit bureaus and their financial customers. The reason lenders don’t routinely verify your identity before issuing you credit is that it takes time, costs money and is one more hurdle between you and another credit card. (Buy, buy, buy—it’s the American way.) So in the eyes of credit bureaus, LifeLock’s customers are inferior goods; selling their data isn’t as valuable. LifeLock also opts its customers out of pre-approved credit card offers, further making them less valuable in the eyes of credit bureaus.

And so began a smear campaign on the part of the credit bureaus. You can read their points of view in this New York Times article, written by a reporter who didn’t do much more than regurgitate their talking points. And the class action lawsuits have piled on, accusing LifeLock of deceptive business practices, fraudulent advertising and so on. The biggest smear is that LifeLock didn’t even protect Todd Davis, and that his identity was allegedly stolen.

It wasn’t. Someone in Texas used Davis’s SSN to get a $500 advance against his paycheck. It worked because the loan operation didn’t check with any of the credit bureaus before approving the loan—perfectly reasonable for an amount this small. The payday-loan operation called Davis to collect, and LifeLock cleared up the problem. His credit report remains spotless.

The Experian credit bureau’s lawsuit basically claims that fraud alerts are only for people who have been victims of identity theft. This seems spurious; the text of the law states that anyone “who asserts a good faith suspicion that the consumer has been or is about to become a victim of fraud or related crime” can request a fraud alert. It seems to me that includes anybody who has ever received one of those notices about their financial details being lost or stolen, which is everybody.

As to deceptive business practices and fraudulent advertising—those just seem like class action lawyers piling on. LifeLock’s aggressive fear-based marketing doesn’t seem any worse than a lot of other similar advertising campaigns. My guess is that the class action lawsuits won’t go anywhere.

In reality, forcing lenders to verify identity before issuing credit is exactly the sort of thing we need to do to fight identity theft. Basically, there are two ways to deal with identity theft: Make personal information harder to steal, and make stolen personal information harder to use. We all know the former doesn’t work, so that leaves the latter. If Congress wanted to solve the problem for real, one of the things it would do is make fraud alerts permanent for everybody. But the credit industry’s lobbyists would never allow that.

LifeLock does a bunch of other clever things. They monitor the national address database, and alert you if your address changes. They look for your credit and debit card numbers on hacker and criminal websites and such, and assist you in getting a new number if they see it. They have a million-dollar service guarantee—for complicated legal reasons, they can’t call it insurance—to help you recover if your identity is ever stolen.

But even with all of this, I am not a LifeLock customer. At $120 a year, it’s just not worth it. You wouldn’t know it from the press attention, but dealing with identity theft has become easier and more routine. Sure, it’s a pervasive problem. The Federal Trade Commission reported that 8.3 million Americans were identity-theft victims in 2005. But that includes things like someone stealing your credit card and using it, something that rarely costs you any money and that LifeLock doesn’t protect against. New account fraud is much less common, affecting 1.8 million Americans per year, or 0.8 percent of the adult population. The FTC hasn’t published detailed numbers for 2006 or 2007, but the rate seems to be declining.

New account fraud is also not very damaging. The median amount of fraud the thief commits is $1,350, but you’re not liable for that. Some spectacularly horrible identity-theft stories notwithstanding, the financial industry is pretty good at quickly cleaning up the mess. The victim’s median out-of-pocket cost for new account fraud is only $40, plus ten hours of grief to clean up the problem. Even assuming your time is worth $100 an hour, LifeLock isn’t worth more than $8 a year.
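
The arithmetic behind that $8 figure, using the numbers above:

    # Expected annual cost of new account fraud to one person, using the
    # figures cited above.
    rate = 0.008        # 0.8% of adults hit by new account fraud per year
    out_of_pocket = 40  # median direct cost, dollars
    hours = 10          # median time to clean up
    hourly_value = 100  # generous valuation of your time

    expected_loss = rate * (out_of_pocket + hours * hourly_value)
    print(f"expected annual loss: ${expected_loss:.2f}")   # about $8.32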

And it’s hard to get any data on how effective LifeLock really is. They’ve been in business three years and have about a million customers, but most of them have joined up in the last year. They’ve paid out on their service guarantee 113 times, but a lot of those were for things that happened before their customers became customers. (It was easier to pay than argue, I assume.) But they don’t know how often the fraud alerts actually catch an identity thief in the act. My guess is that it’s less than the 0.8 percent fraud rate above.

LifeLock’s business model is based more on the fear of identity theft than the actual risk.

It’s pretty ironic of the credit bureaus to attack LifeLock on its marketing practices, since they know all about profiting from the fear of identity theft. Facta also forced the credit bureaus to give Americans a free credit report once a year upon request. Through deceptive marketing techniques, they’ve turned this requirement into a multimillion-dollar business.

Get LifeLock if you want, or one of its competitors if you prefer. But remember that you can do most of what these companies do yourself. You can put a fraud alert on your own account, but you have to remember to renew it every three months. You can also put a credit freeze on your account, which is more work for the average consumer but more effective if you’re a privacy wonk—and the rules differ by state. And maybe someday Congress will do the right thing and put LifeLock out of business by forcing lenders to verify identity every time they issue credit in someone’s name.

This essay originally appeared in Wired.com.

Posted on June 17, 2008 at 6:51 AM • 73 Comments

Ransomware

I’ve never figured out the fuss over ransomware:

Some day soon, you may go in and turn on your Windows PC and find your most valuable files locked up tighter than Fort Knox.

You’ll also see this message appear on your screen:

“Your files are encrypted with RSA-1024 algorithm. To recovery your files you need to buy our decryptor. To buy decrypting tool contact us at: ********@yahoo.com”

How is this any worse than the old hacker viruses that put a funny message on your screen and erased your hard drive?

Here’s how I see it: if someone actually manages to pull this off and put it into circulation, we’re looking at malware Armageddon. Instead of losing ‘just’ your credit card numbers or having your PC turned into a spam factory, you could lose vital files forever.

Of course, you could keep current back-ups. I do, but I’ve been around this track way too many times to think that many companies, much less individual users, actually keep real back-ups. Oh, you may think you do, but when was the last time you checked to see if the data you saved could actually be restored?

The single most important thing any company or individual can do to improve security is have a good backup strategy. It’s been true for decades, and it’s still true today.

Posted on June 16, 2008 at 1:09 PM • 74 Comments

Friday Squid Blogging: Cuttlefish Embryos Can See

Weird:

Usually, cuttlefish eggs lie in an envelope full of black ink. But this clears as the embryos grow older, leaving them growing within translucent eggs.

These unborn cuttlefish also have fully developed eyes. That leads the researchers to conclude that the cuttlefish embryos must peer through their eggs, and learn to recognise their prey, a behaviour which will help give them a head-start in life.

Posted on June 13, 2008 at 4:39 PM • 7 Comments

Kaspersky Labs Trying to Crack 1024-bit RSA

I can’t figure this story out. Kaspersky Lab is launching an international distributed effort to crack a 1024-bit RSA key used by the Gpcode Virus. From their website:

We estimate it would take around 15 million modern computers, running for about a year, to crack such a key.

What are they smoking at Kaspersky? We’ve never factored a 1024-bit number—at least, not outside any secret government agency—and it’s likely to require a lot more than 15 million computer years of work. The current factoring record is a 1023-bit number, but it was a special number that’s easier to factor than a product-of-two-primes number used in RSA. Breaking that Gpcode key will take a lot more mathematical prowess than you can reasonably expect to find by asking nicely on the Internet. You’ve got to understand the current best mathematical and computational optimizations of the Number Field Sieve, and cleverly distribute the parts that can be distributed. You can’t just post the products and hope for the best.
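
For a sense of scale, the general number field sieve’s heuristic running time is exp(((64/9)^(1/3) + o(1)) · (ln n)^(1/3) · (ln ln n)^(2/3)). A crude Python extrapolation from the 663-bit RSA-200 factorization, the general-number record (ignoring constant factors and the memory-bound matrix step, both of which push real estimates higher), shows the scale:

    import math

    # Heuristic GNFS work: exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)), with
    # c = (64/9)^(1/3). Constant factors are ignored, so this only gives
    # relative difficulty; the CPU-year baseline is an approximation.
    def gnfs_work(bits):
        ln_n = bits * math.log(2)
        c = (64 / 9) ** (1 / 3)
        return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    ratio = gnfs_work(1024) / gnfs_work(663)   # vs. the 663-bit RSA-200 record
    print(f"1024 bits is roughly {ratio:,.0f} times harder than RSA-200")

    # RSA-200's sieving took on the order of 55 CPU-years on 2005 hardware;
    # scaling up gives a crude lower bound on the work involved.
    print(f"crude estimate: {55 * ratio:,.0f} CPU-years")

Even this optimistic extrapolation lands in the millions of CPU-years, and it ignores the matrix step, which can’t be distributed the way sieving can.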

Is this just a way for Kaspersky to generate itself some nice press, or are they confused in Moscow?

EDITED TO ADD (6/15): Kaspersky now says (http://www.securityfocus.com/news/11523):

The company clarified, however, that it’s more interested in getting help in finding flaws in the encryption implementation.

“We are not trying to crack the key,” Roel Schouwenberg, senior antivirus researcher with Kaspersky Lab, told SecurityFocus. “We want to see collectively whether there are implementation errors, so we can do what we did with previous versions and find a mistake to help us find the key.”

Schouwenberg agrees that, if no implementation flaw is found, searching for the decryption key using brute-force computing power is unlikely to work.

“Clarified” is overly kind. There was nothing confusing about Kaspersky’s post that needed clarification, and what they’re saying now completely contradicts what they did post. Seems to me like they’re trying to pretend it never happened.

EDITED TO ADD (6/30): A Kaspersky virus analyst comments on this entry.

Posted on June 12, 2008 at 12:30 PM • 62 Comments

New TSA ID Requirement

The TSA has a new photo ID requirement:

Beginning Saturday, June 21, 2008 passengers that willfully refuse to provide identification at security checkpoint will be denied access to the secure area of airports. This change will apply exclusively to individuals that simply refuse to provide any identification or assist transportation security officers in ascertaining their identity.

This new procedure will not affect passengers that may have misplaced, lost or otherwise do not have ID but are cooperative with officers. Cooperative passengers without ID may be subjected to additional screening protocols, including enhanced physical screening, enhanced carry-on and/or checked baggage screening, interviews with behavior detection or law enforcement officers and other measures.

That’s right; people who refuse to show ID on principle will not be allowed to fly, but people who claim to have lost their ID will. I feel well-protected against terrorists who can’t lie.

I don’t think any further proof is needed that the ID requirement has nothing to do with security, and everything to do with control.

EDITED TO ADD (6/11): Daniel Solove comments.

Posted on June 11, 2008 at 1:42 PM • 74 Comments

Bus Defended Against Terrorists Who Want to Reenact the Movie Speed

We’re spending money on this?

…a new GPS device enables authorities to remotely control a bus—slowing it down to 5 mph and preventing it from restarting once it has stopped. The device has been installed on thousands of local commuter and tourist buses.

The technology is designed to prevent a terrorist from ramming a bus filled with people and explosives into buildings or tunnels.

Private bus companies have received millions of dollars from the Department of Homeland Security for the security systems. It costs $1,500 to equip each bus, with $50-per-bus monthly maintenance costs.

Gray Line double-decker tourist buses and Coach USA have spent hundreds of thousands of dollars in federal funds to install 3,000 devices. After receiving a $124,000 federal grant, DeCamp Bus Lines is installing the device on its 80 commuter buses, which travel routes from northern New Jersey to the Port Authority Bus Terminal in Midtown.

New Jersey Transit is currently in the process of equipping all of its roughly 3,000 buses with the technology. NJ Transit Chief of Police Joseph Bober said: “This enhanced technology helps us protect our bus drivers and customers. It’s another proactive tool to protect our property, employees and customers.”

Posted on June 10, 2008 at 12:31 PM • 81 Comments

Sikhs Can Carry Knives on Airplanes in India

That’s what the rules say:

Sikh passengers are allowed to carry Kirpan with them on board domestic flights. The total length of the ‘Kirpan’ should not exceed 22.86 CMs (9 inches) and the length of the blade should not exceed 15.24 CMs. (6 inches). It is being reiterated that these instructions should be fully implemented by concerned security personnel so that religious sentiments of the Sikh passengers are not hurt.

How airport security is supposed to recognize a Sikh passenger is not explained.

Posted on June 10, 2008 at 6:27 AM • 85 Comments

Great Fear-Mongering Product: Subway Emergency Kit

Is Subivor even real?

Whether it is a train fire, a highrise building fire or worse. People should have more protection than a necktie, their shirt or paper towel to cover their mouth, nose and eyes. As you know an emergency can happen at anytime and in anyplace, leaving one vulnerable. Don’t be a sitting duck. The Subivor® Subway Emergency Kit can aid you in seeing and breathing while exiting. This all-in-one compact, portable and easy to use subway emergency kit contains some items never seen before in a kit.

This could have won my Third Movie-Plot Threat Contest.

Posted on June 9, 2008 at 12:11 PM • 66 Comments

Framing Computers Under the DMCA

Researchers from the University of Washington have demonstrated how lousy the MPAA/RIAA/etc. tactics are by successfully framing printers on their network. These printers, which can’t download anything, received nine takedown notices:

The researchers rigged the software agents to implicate three laserjet printers, which were then accused in takedown letters by the M.P.A.A. of downloading copies of “Iron Man” and the latest Indiana Jones film.

Research, including the paper, here.

Posted on June 9, 2008 at 6:47 AM • 30 Comments

Clever Museum Theft

Some expensive and impressive stuff was stolen from the University of British Columbia’s Museum of Anthropology:

A dozen pieces of gold jewelry designed by prominent Canadian artist Bill Reid were stolen from the museum sometime on May 23, along with three pieces of gold-plated Mexican jewelry. The pieces that were taken are estimated to be worth close to $2 million.

Of course, it’s not the museum’s fault:

But museum director Anthony Shelton said that elaborate computer program printouts have determined that the museum’s security system did not fail during the heist and that the construction of the building’s layout did not compromise security.

Um, isn’t having stuff get stolen the very definition of security failing? And does anyone have any idea how “elaborate computer program printouts” can determine that security didn’t fail? What in the world is this guy talking about?

A few days later, we learned that security did indeed fail:

Four hours before the break-in on May 23, two or three key surveillance cameras at the Museum of Anthropology mysteriously went off-line.

Around the same time, a caller claiming to be from the alarm company phoned campus security, telling them there was a problem with the system and to ignore any alarms that might go off.

Campus security fell for the ruse and ignored an automated computer alert sent to them, police sources told CBC News.

Meanwhile surveillance cameras that were still operating captured poor pictures of what was going on inside the museum because of a policy to turn the lights off at night.

Then, as the lone guard working overnight in the museum that night left for a smoke break, the thief or thieves broke in, wearing gas masks and spraying bear spray to slow down anyone who might stumble across them.

It’s a particular kind of security failure, but it’s definitely a failure.

Posted on June 6, 2008 at 5:04 AM • 49 Comments

Clever Micro-Deposit Scam

This is clever:

Michael Largent, 22, of Plumas Lake, California, allegedly exploited a loophole in a common procedure both companies follow when a customer links his brokerage account to a bank account for the first time. To verify that the account number and routing information is correct, the brokerages automatically send small “micro-deposits” of between two cents and one dollar to the account, and ask the customer to verify that they’ve received it.

Largent allegedly used an automated script to open 58,000 online brokerage accounts, linking each of them to a handful of online bank accounts, and accumulating thousands of dollars in micro-deposits.
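
The economics are easy to reconstruct. A back-of-the-envelope sketch in Python, with the deposits-per-account and average-deposit values assumed (the article gives only the 58,000-account figure and the two-cents-to-a-dollar range):

    # Back-of-the-envelope take from the micro-deposit scheme described above.
    # The deposits-per-account and average-deposit values are assumptions;
    # the article gives only the 58,000-account figure and the range.
    accounts = 58_000
    deposits_per_account = 2   # assumed: a couple of test deposits per link
    avg_deposit = 0.30         # assumed average of the $0.02-$1.00 range

    haul = accounts * deposits_per_account * avg_deposit
    print(f"approximate haul: ${haul:,.0f}")   # ~$35,000 under these assumptions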

Posted on June 5, 2008 at 1:25 PM • 25 Comments

The War on Photography

What is it with photographers these days? Are they really all terrorists, or does everyone just think they are?

Since 9/11, there has been an increasing war on photography. Photographers have been harassed, questioned, detained, arrested or worse, and declared to be unwelcome. We’ve been repeatedly told to watch out for photographers, especially suspicious ones. Clearly any terrorist is going to first photograph his target, so vigilance is required.

Except that it’s nonsense. The 9/11 terrorists didn’t photograph anything. Nor did the London transport bombers, the Madrid subway bombers, or the liquid bombers arrested in 2006. Timothy McVeigh didn’t photograph the Oklahoma City Federal Building. The Unabomber didn’t photograph anything; neither did shoe-bomber Richard Reid. Photographs aren’t being found amongst the papers of Palestinian suicide bombers. The IRA wasn’t known for its photography. Even those manufactured terrorist plots that the US government likes to talk about—the Ft. Dix terrorists, the JFK airport bombers, the Miami 7, the Lackawanna 6—no photography.

Given that real terrorists, and even wannabe terrorists, don’t seem to photograph anything, why is it such pervasive conventional wisdom that terrorists photograph their targets? Why are our fears so great that we have no choice but to be suspicious of any photographer?

Because it’s a movie-plot threat.

A movie-plot threat is a specific threat, vivid in our minds like the plot of a movie. You remember them from the months after the 9/11 attacks: anthrax spread from crop dusters, a contaminated milk supply, terrorist scuba divers armed with almanacs. Our imaginations run wild with detailed and specific threats, from the news, and from actual movies and television shows. These movie plots resonate in our minds and in the minds of others we talk to. And many of us get scared.

Terrorists taking pictures is a quintessential detail in any good movie. Of course it makes sense that terrorists will take pictures of their targets. They have to do reconnaissance, don’t they? We need 45 minutes of television action before the actual terrorist attack—90 minutes if it’s a movie—and a photography scene is just perfect. It’s our movie-plot terrorists who are photographers, even if the real-world ones are not.

The problem with movie-plot security is it only works if we guess the plot correctly. If we spend a zillion dollars defending Wimbledon and terrorists blow up a different sporting event, that’s money wasted. If we post guards all over the Underground and terrorists bomb a crowded shopping area, that’s also a waste. If we teach everyone to be alert for photographers, and terrorists don’t take photographs, we’ve wasted money and effort, and taught people to fear something they shouldn’t.

And even if terrorists did photograph their targets, the math doesn’t make sense. Billions of photographs are taken by honest people every year, 50 billion by amateurs alone in the US. And the national monuments you imagine terrorists taking photographs of are the same ones tourists like to take pictures of. If you see someone taking one of those photographs, the odds are infinitesimal that he’s a terrorist.
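
It’s worth making that base-rate arithmetic explicit. Even under absurdly generous assumptions about how many terrorists are out photographing targets (every number below except the 50 billion figure is invented for illustration), the odds against any given photographer are overwhelming:

    # Base-rate arithmetic for "photographer = terrorist" suspicion.
    # The 50 billion figure is from the essay; the terrorist numbers are
    # deliberately generous invented assumptions.
    honest_photos = 50_000_000_000   # amateur photos per year in the US
    terrorists = 1_000               # assume an absurdly high number of plotters
    photos_each = 100                # assume each photographs targets heavily

    terrorist_photos = terrorists * photos_each
    p = terrorist_photos / (terrorist_photos + honest_photos)
    print(f"P(terrorist | photographer) is about {p:.2e}")   # ~1 in 500,000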

Of course, it’s far easier to explain the problem than it is to fix it. Because we’re a species of storytellers, we find movie-plot threats uniquely compelling. A single vivid scenario will do more to convince people that photographers might be terrorists than all the data I can muster to demonstrate that they’re not.

Fear aside, there aren’t many legal restrictions on what you can photograph from a public place that’s already in public view. If you’re harassed, it’s almost certainly a law enforcement official, public or private, acting way beyond his authority. There’s nothing in any post-9/11 law that restricts your right to photograph.

This is worth fighting. Search “photographer rights” on Google and download one of the several wallet documents that can help you if you get harassed; I found one each for the UK, US, and Australia. Don’t cede your right to photograph in public. Don’t propagate the terrorist photographer story. Remind people that prohibiting photography was something we used to ridicule about the USSR. Eventually sanity will be restored, but it may take a while.

This essay originally appeared in The Guardian.

EDITED TO ADD (6/6): Interesting comment by someone who trains security guards.

EDITED TO ADD (6/13): More on photographers’ rights in the U.S.

Posted on June 5, 2008 at 6:44 AM • 146 Comments

More on Airplane Seat Cameras

I already blogged this once: an airplane-seat camera system that tries to detect terrorists before they leap up and do whatever they were planning on doing. Amazingly enough, the EU is “testing” this system:

Each camera tracks passengers’ facial expressions, with the footage then analysed by software to detect developing terrorist activity or potential air rage. Six wide-angle cameras are also positioned to monitor the plane’s aisles, presumably to catch anyone standing by the cockpit door with a suspiciously crusty bread roll.

But since people never sit still on planes, the software’s also designed so that footage from multiple cameras can be analysed. So, if one person continually walks from his seat to the bathroom, then several cameras can be used to track his facial movements.

The software watches for all sorts of other terrorist-like activities too, including running in the cabin, someone nervously touching their face or excessive sweating. An innocent nose scratch won’t see the F16s scrambled, but a combination of several threat indicators could trigger a red alert.

This pegs the stupid meter. All it will do is false alarm. No one has any idea what sorts of facial characteristics are unique to terrorists. And how in the world are they “testing” this system without any real terrorists? In any case, what happens when the alarm goes off? How exactly is a ten-second warning going to save people?
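
“All it will do is false alarm” is a base-rate claim, and you can put rough numbers on it. Every figure below is an assumption chosen to favor the system:

    # Why a behavioral detector on airplane seats can only false-alarm.
    # Every number is an assumption chosen to favor the system.
    passengers_per_year = 700_000_000   # rough order of annual US enplanements
    false_positive_rate = 0.001         # one false alarm per 1,000 passengers
    real_events_per_year = 1            # generous

    false_alarms = passengers_per_year * false_positive_rate
    print(f"{false_alarms:,.0f} false alarms for ~{real_events_per_year} real event")
    # 700,000 alarms a year: crews would learn to ignore the system long
    # before it ever flagged anything real.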

Sure, you can invent a terrorist tactic where a system like this, assuming it actually works, saves people—but that’s the very definition of a movie-plot threat. How about we spend this money on something that’s effective in more than just a few carefully chosen scenarios?

Posted on June 4, 2008 at 12:05 PM • 57 Comments

The ID Divide

Yesterday, the Center for American Progress published its paper on identification and identification technologies: “The ID Divide: Addressing the Challenges of Identification and Authentication in American Society.” I was one of the participants in the project that created this paper, and it’s worth reading.

Among other things, the paper identifies six principles for identification systems:

  • Achieve real security or other goals
  • Accuracy
  • Inclusion
  • Fairness and equality
  • Effective redress mechanisms
  • Equitable financing for systems

From the Executive Summary:

How can these principles be honored in practice? That’s where the “due diligence” process comes into play when considering and implementing identification systems. Due diligence in the financial world of mergers and acquisitions and other important corporate transactions is conducted before a company makes a major investment. Proponents of, say, a merger (or in our case, a new identification program) can err on the side of optimism, concluding too readily that the merger (or new ID program) is clearly the way to go. Thorough due diligence protects against such over-optimism.

In the pages that follow, we apply this due diligence process to some recurring technical problems with current and proposed identification programs. And we discover—as you’ll see toward the end of the report—that ID programs that rely on “shared secrets,” such as Social Security numbers or your mother’s maiden name, are becoming more insecure due to the increased use of identification. Similarly, ID programs based on biometrics such as fingerprints or iris scans are not the “silver bullets” that some proponents claim they are, but rather could become compromised rapidly if deployed in haphazard ways.

We then apply our progressive principles and due diligence insights to two current examples of identification programs. The first details why it would be bad policy to require government-issued photo ID for in-person voting. The second shows the basically sound policy rationale for the Transportation Worker Identification Credential, used for workers with access to security-critical port facilities. By examining one identification program that is reasonable, and one that is not, our analysis shows the usefulness of the Progressive Principles for Identification Systems.

I participated in the panel discussion announcing this report, along with Jim Harper (Director of Information Policy Studies at the Cato Institute).

Posted on June 4, 2008 at 6:34 AM • 50 Comments

Filming in DC's Union Station

This video is priceless. A Washington, DC, news crew goes down to Union Station to interview someone from Amtrak about people who have been stopped from taking pictures, even though there’s no policy against it. As the Amtrak spokesperson is explaining that there is no policy against photography, a guard comes up and tries to stop them from filming, saying it is against the rules.

EDITED TO ADD (6/7): More.

Posted on June 3, 2008 at 1:57 PM • 55 Comments

Fax Signatures

Aren’t fax signatures the weirdest thing? It’s trivial to cut and paste—with real scissors and glue—anyone’s signature onto a document so that it’ll look real when faxed. There is so little security in fax signatures that it’s mind-boggling that anyone accepts them.

Yet people do, all the time. I’ve signed book contracts, credit card authorizations, nondisclosure agreements and all sorts of financial documents—all by fax. I even have a scanned file of my signature on my computer, so I can virtually cut and paste it into documents and fax them directly from my computer without ever having to print them out. What in the world is going on here?

And, more importantly, why are fax signatures still being used after years of experience? Why aren’t there many stories of signatures forged through the use of fax machines?

The answer comes from looking at fax signatures not as an isolated security measure, but in the context of the larger system. Fax signatures work because signed faxes exist within a broader communications context.

In a 2003 paper, “Economics, Psychology, and Sociology of Security,” Professor Andrew Odlyzko looks at fax signatures and concludes:

Although fax signatures have become widespread, their usage is restricted. They are not used for final contracts of substantial value, such as home purchases. That means that the insecurity of fax communications is not easy to exploit for large gain. Additional protection against abuse of fax insecurity is provided by the context in which faxes are used. There are records of phone calls that carry the faxes, paper trails inside enterprises and so on. Furthermore, unexpected large financial transfers trigger scrutiny. As a result, successful frauds are not easy to carry out by purely technical means.

He’s right. Thinking back, there really aren’t ways in which a criminal could use a forged document sent by fax to defraud me. I suppose an unscrupulous consulting client could forge my signature on a nondisclosure agreement and then sue me, but that hardly seems worth the effort. And if my broker received a fax document from me authorizing a money transfer to a Nigerian bank account, he would certainly call me before completing it.

Credit card signatures aren’t verified in person, either—and I can already buy things over the phone with a credit card—so there are no new risks there, and Visa knows how to monitor transactions for fraud. Lots of companies accept purchase orders via fax, even for large amounts of stuff, but there’s a physical audit trail, and the goods are shipped to a physical address—probably one the seller has shipped to before. Signatures are kind of a business lubricant: mostly, they help move things along smoothly.

Except when they don’t.

On October 30, 2004, Tristian Wilson was released from a Memphis jail on the authority of a forged fax message. It wasn’t even a particularly good forgery. It wasn’t on the standard letterhead of the West Memphis Police Department. The name of the policeman who signed the fax was misspelled. And the time stamp on the top of the fax clearly showed that it was sent from a local McDonald’s.

The success of this hack has nothing to do with the fact that it was sent over by fax. It worked because the jail had lousy verification procedures. They didn’t notice any discrepancies in the fax. They didn’t notice the phone number from which the fax was sent. They didn’t call and verify that it was official. The jail was accustomed to getting release orders via fax, and just acted on this one without thinking. Would it have been any different had the forged release form been sent by mail or courier?

Yes, fax signatures always exist in context, but sometimes they are the linchpin within that context. If you can mimic enough of the context, or if those on the receiving end become complacent, you can get away with mischief.

Arguably, this is part of the security process. Signatures themselves are poorly defined. Sometimes a document is valid even if not signed: A person with both hands in a cast can still buy a house. Sometimes a document is invalid even if signed: The signer might be drunk, or have a gun pointed at his head. Or he might be a minor. Sometimes a valid signature isn’t enough; in the United States there is an entire infrastructure of “notary publics” who officially witness signed documents. When I started filing my tax returns electronically, I had to sign a document stating that I wouldn’t be signing my income tax documents. And banks don’t even bother verifying signatures on checks less than $30,000; it’s cheaper to deal with fraud after the fact than prevent it.

Over the course of centuries, business and legal systems have slowly sorted out what types of additional controls are required around signatures, and in which circumstances.

Those same systems will be able to sort out fax signatures, too, but it’ll be slow. And that’s where there will be potential problems. Already fax is a declining technology. In a few years it’ll be largely obsolete, replaced by PDFs sent over e-mail and other forms of electronic documentation. In the past, we’ve had time to figure out how to deal with new technologies. Now, by the time we institutionalize these measures, the technologies are likely to be obsolete.

What that means is people are likely to treat fax signatures—or whatever replaces them—exactly the same way as paper signatures. And sometimes that assumption will get them into trouble.

But it won’t cause social havoc. Wilson’s story is remarkable mostly because it’s so exceptional. And even he was rearrested at his home less than a week later. Fax signatures may be new, but fake signatures have always been a possibility. Our legal and business systems need to deal with the underlying problem—false authentication—rather than focus on the technology of the moment. Systems need to defend themselves against the possibility of fake signatures, regardless of how they arrive.

This essay previously appeared on Wired.com.

EDITED TO ADD (6/3): 2005 story, “Federal Jury Convicts N.Y. Attorney of Faking Judge’s Order.”

Posted on June 3, 2008 at 7:01 AM • 59 Comments

The War on T-Shirts

London Heathrow security stopped someone from boarding a plane for wearing a Transformers T-shirt showing a cartoon gun.

It’s easy to laugh and move on. How stupid can these people be, we wonder. But there’s a more important security lesson here. Security screening is hard, and every false threat the screeners watch out for makes it more likely that real threats slip through. At a party the other night, someone told me about the time he accidentally brought a large knife through airport security. The screener pulled his bag aside, searched it, and pulled out a water bottle.

It’s not just the water bottles and the t-shirts and the gun jewelry—this kind of thing actually makes us all less safe.

Posted on June 2, 2008 at 2:27 PM • 59 Comments

E-Mail After the Rapture

It’s easy to laugh at the You’ve Been Left Behind site, which purports to send automatic e-mails to your friends after the Rapture:

The unsaved will be ‘left behind’ on earth to go through the “tribulation period” after the “Rapture”…. We have made it possible for you to send them a letter of love and a plea to receive Christ one last time. You will also be able to give them some help in living out their remaining time. In the encrypted portion of your account you can give them access to your banking, brokerage, hidden valuables, and powers of attorneys’ (you won’t be needing them any more, and the gift will drive home the message of love). There won’t be any bodies, so probate court will take 7 years to clear your assets to your next of Kin. 7 years of course is all the time that will be left. So, basically the Government of the AntiChrist gets your stuff, unless you make it available in another way.

But what if the creator of this site isn’t as scrupulous as he implies he is? What if he uses all of that account information, passwords, safe combinations, and whatever before any rapture? And even if he is an honest true believer, this seems like a mighty juicy target for any would-be identity thief.

And—if you’re curious—this is how the triggering mechanism works:

We have set up a system to send documents by the email, to the addresses you provide, 6 days after the “Rapture” of the Church. This occurs when 3 of our 5 team members scattered around the U.S fail to log in over a 3 day period. Another 3 days are given to fail safe any false triggering of the system.
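
The trigger they describe is a garden-variety dead-man switch. A minimal sketch of the stated rule, 3 of 5 members silent for 3 days plus 3 more days of fail-safe, might look like this in Python (the site’s actual implementation is unknown):

    from datetime import timedelta

    SILENCE = timedelta(days=3)   # a member counts as gone after 3 silent days
    GRACE = timedelta(days=3)     # fail-safe delay before actually sending
    QUORUM = 3                    # 3 of the 5 team members must be gone

    def rapture_triggered(last_logins, now):
        """last_logins maps member name -> datetime of most recent login."""
        gone = [m for m, t in last_logins.items() if now - t > SILENCE]
        return len(gone) >= QUORUM

    def should_send(last_logins, first_triggered, now):
        # Send only if the trigger has held through the grace period:
        # 3 days of silence plus 3 more days of fail-safe = 6 days total.
        return rapture_triggered(last_logins, now) and now - first_triggered >= GRACE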

The site claims that the data can be encrypted, but it looks like the encryption key is stored on the server with the data.

EDITED TO ADD (6/14): Here’s a similar site, run by atheists so they can guarantee that they’ll be left behind to deliver all the messages.

Posted on June 2, 2008 at 1:09 PM • 58 Comments

Did the Chinese PLA Attack the U.S. Power Grid?

This article claims that the Chinese People’s Liberation Army was behind, among other things, the August 2003 blackout:

Computer hackers in China, including those working on behalf of the Chinese government and military, have penetrated deeply into the information systems of U.S. companies and government agencies, stolen proprietary information from American executives in advance of their business meetings in China, and, in a few cases, gained access to electric power plants in the United States, possibly triggering two recent and widespread blackouts in Florida and the Northeast, according to U.S. government officials and computer-security experts.

One prominent expert told National Journal he believes that China’s People’s Liberation Army played a role in the power outages. Tim Bennett, the former president of the Cyber Security Industry Alliance, a leading trade group, said that U.S. intelligence officials have told him that the PLA in 2003 gained access to a network that controlled electric power systems serving the northeastern United States. The intelligence officials said that forensic analysis had confirmed the source, Bennett said. “They said that, with confidence, it had been traced back to the PLA.” These officials believe that the intrusion may have precipitated the largest blackout in North American history, which occurred in August of that year. A 9,300-square-mile area, touching Michigan, Ohio, New York, and parts of Canada, lost power; an estimated 50 million people were affected.

This is all so much nonsense I don’t even know where to begin.

I wrote about this blackout already: the computer failures were caused by Blaster.

The “Interim Report: Causes of the August 14th Blackout in the United States and Canada,” published in November and based on detailed research by a panel of government and industry officials, blames the blackout on an unlucky series of failures that allowed a small problem to cascade into an enormous failure.

The Blaster worm affected more than a million computers running Windows during the days after Aug. 11. The computers controlling power generation and delivery were insulated from the Internet, and they were unaffected by Blaster. But critical to the blackout were a series of alarm failures at FirstEnergy, a power company in Ohio. The report explains that the computer hosting the control room’s “alarm and logging software” failed, along with the backup computer and several remote-control consoles. Because of these failures, FirstEnergy operators did not realize what was happening and were unable to contain the problem in time.

Simultaneously, another status computer, this one at the Midwest Independent Transmission System Operator, a regional agency that oversees power distribution, failed. According to the report, a technician tried to repair it and forgot to turn it back on when he went to lunch.

To be fair, the report does not blame Blaster for the blackout. I’m less convinced. The failure of computer after computer within the FirstEnergy network certainly could be a coincidence, but it looks to me like a malicious worm.

The rest of the National Journal article is filled with hysterics and hyperbole about Chinese hackers. I have already written an essay about this—it’ll be the next point/counterpoint between Marcus Ranum and me for Information Security—and I’ll publish it here after they publish it.

EDITED TO ADD (6/2): Wired debunked this claim pretty thoroughly:

This time, though, they’ve attached their tale to the most thoroughly investigated power incident in U.S. history.

[…]

It traced the root cause of the outage to the utility company FirstEnergy’s failure to trim back trees encroaching on high-voltage power lines in Ohio. When the power lines were ensnared by the trees, they tripped.

[…]

So China…using the most devious malware ever devised, arranged for trees to grow up into exactly the right power lines at precisely the right time to trigger the cascade.

Large-scale power outages are never one thing. They’re a small problem that cascades into a series of ever-bigger problems. But the triggering problem was those power lines.

Posted on June 2, 2008 at 6:37 AM • 32 Comments
