May 2011 Archives

The U.S. Seems to Have a Secret Stealth Helicopter

That's what the U.S. destroyed after a malfunction in Pakistan during the bin Laden assassination. (For helicopters, "stealth" is less concerned with radar signatures and more concerned with acoustical quiet.)

There was some talk about Pakistan sending it to China, but they're returning it to the U.S. I presume that the Chinese got everything they needed quickly.

Posted on May 31, 2011 at 1:12 PM • 54 Comments

Keeping Sensitive Information Out of the Hands of Terrorists Through Self-Restraint

In my forthcoming book (available February 2012), I talk about various mechanisms for societal security: how we as a group protect ourselves from the "dishonest minority" within us. I have four types of societal security systems:

  • moral systems -- any internal rewards and punishments;
  • reputational systems -- any informal external rewards and punishments;
  • rule-based systems -- any formal system of rewards and punishments (mostly punishments); primarily laws;
  • technological systems -- physical and technical measures such as walls, door locks, cameras, and so on.

We spend most of our effort in the third and fourth category. I am spending a lot of time researching how the first two categories work.

Given that, I was very interested to see an article by Dallas Boyd in Homeland Security Affairs, "Protecting Sensitive Information: The Virtue of Self-Restraint," in which he argues that, out of moral responsibility (he calls it "civic duty"), people should voluntarily refrain from publishing information that terrorists could use. Ignore for a moment the debate about whether publishing information that could give terrorists ideas is actually a bad idea -- I think it's not. What Boyd is proposing is still very interesting: he specifically says that censorship is bad and won't work, and instead wants to see voluntary self-restraint along with public shaming of offenders.

As an alternative to formal restrictions on communication, professional societies and influential figures should promote voluntary self-censorship as a civic duty. As this practice is already accepted among many scientists, it may be transferrable to members of other professions. As part of this effort, formal channels should be established in which citizens can alert the government to vulnerabilities and other sensitive information without exposing it to a wide audience. Concurrent with this campaign should be the stigmatization of those who recklessly disseminate sensitive information. This censure would be aided by the fact that many such people are unattractive figures whose writings betray their intellectual vanity. The public should be quick to furnish the opprobrium that presently escapes these individuals.

I don't think it will work, and I don't even think it's possible in this international day and age, but it's interesting to read the proposal.

Slashdot thread on the paper. Another article.

Posted on May 31, 2011 at 6:34 AM • 47 Comments

Lockheed Martin Hack Linked to RSA's SecurID Breach

All I know is what I read in the news.

Posted on May 30, 2011 at 7:17 AM • 32 Comments

Aggressive Social Engineering Against Consumers

Cyber criminals are getting aggressive with their social engineering tactics.

Val Christopherson said she received a telephone call last Tuesday from a man stating he was with an online security company who was receiving error messages from the computer at her Charleswood home.

“He said he wanted to fix my problem over the phone,” Christopherson said.

She said she was then convinced to go online to a remote access and support website called Teamviewer.com and allow him to connect her computer to his company’s system.

“That was my big mistake,” Christopherson said.

She said the scammers then tried to sell her anti-virus software they would install.

At that point, the 61-year-old Anglican minister became suspicious and eventually broke off the call before unplugging her computer.

Christopherson said she then had to hang up on the same scam artist again, after he quickly called back claiming to be the previous caller’s manager.

Posted on May 30, 2011 at 6:58 AM • 34 Comments

Apple's iOS 4 Hardware Encryption Cracked

All I know is what's in these two blog posts from Elcomsoft. Note that they didn't break AES-256; they figured out how to extract the keys from the hardware (iPhones, iPads). The company "will be releasing the product implementing this functionality for the exclusive use of law enforcement, forensic and intelligence agencies."

Posted on May 27, 2011 at 6:04 AM • 59 Comments

U.S. Presidential Limo Defeated by Steep-Grade Parking Ramp

It's not something I know anything about -- actually, it's not something many people know about -- but I've posted some links about the security features of the U.S. presidential limousine. So it's amusing to watch the limo immobilized by a steep grade at the U.S. embassy in Dublin. (You'll get a glimpse of how thick the car doors are toward the end of the video.)

EDITED TO ADD (6/1): It was a spare; the president was not riding in it at the time.

EDITED TO ADD (6/13): Here's a video of President Bush's limo breaking down in Rome.

Posted on May 26, 2011 at 1:57 PM • 32 Comments

Blackhole Exploit Kit

It's now available as a free download:

A free version of the Blackhole exploit kit has appeared online in a development that radically reduces the entry-level costs of getting into cybercrime.

The Blackhole exploit kit, which up until now would cost around $1,500 for an annual licence, creates a handy way to plant malicious scripts on compromised websites. Surfers visiting legitimate sites can be redirected using these scripts to scareware portals on sites designed to exploit browser vulnerabilities in order to distribute banking Trojans, such as those created from the ZeuS toolkit.

Posted on May 25, 2011 at 11:55 AM • 28 Comments

New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems -- computer systems that control industrial processes -- are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It's not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it's bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It's Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it's scarier.

Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details "before Siemens could patch the vulnerabilities."

Beresford wouldn't say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share "commonality" in their products. Beresford wouldn't reveal more details, but says he hopes to do so at a later date.

We've been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies -- who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities "theoretical" and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability -- and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs "were discovered while working under special laboratory conditions with unlimited access to protocols and controllers."

There were no "'special laboratory conditions' with 'unlimited access to the protocols,'" Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. "My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory." Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could do as well.

That's precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers.... But that assumes that hackers can't discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM • 68 Comments

Dropbox Security

I haven't written about Dropbox's security problems; too busy with the book. But here's an excellent summary article from The Economist.

The meta-issue is pretty simple. If you expect a cloud provider to do anything more interesting than simply store your files for you and give them back to you at a later date, they are going to have to have access to the plaintext. For most people -- Gmail users, Google Docs users, Flickr users, and so on -- that's fine. For some people, it isn't. Those people should probably encrypt their files themselves before sending them into the cloud.
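For those in the "encrypt it yourself first" camp, here's a minimal sketch of what that means, using only the Python standard library. This is a toy construction for illustration only -- a SHA-256 counter-mode keystream with an HMAC integrity tag -- and anyone doing this for real should use a vetted tool or an authenticated cipher from a proper crypto library instead:

```python
# Toy illustration of client-side encryption before upload.
# NOT for production: use a vetted authenticated cipher in practice.
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by running SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity check
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext was tampered with")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)                                   # stays on your machine
blob = encrypt(key, b"tax-records.pdf contents")       # what the cloud stores
assert decrypt(key, blob) == b"tax-records.pdf contents"
```

The point of the sketch is the key management, not the cipher: the provider only ever sees `blob`, so it can store and return your files but can't index, preview, or deduplicate them -- which is exactly the trade-off described above.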

EDITED TO ADD (6/13): Another security issue with Dropbox.

Posted on May 23, 2011 at 6:47 AM

The Normalization of Security

TSA-style security is now so normal that it's part of a Disney ride:

The second room of the queue is now a security check area, similar to a TSA checkpoint. The two G-series droids are still there, G2-9T scanning luggage and G2-4T scanning passengers. For those attraction junkies, you'll remember that the G-series droids are so named because in the original Disneyland Park version of the ride, they were created by removing the "skins" from two of the goose animatronics from the soon-to-close America Sings attraction (Goose = "G" series). While we won't tell you why, you'll enjoy paying a lot of attention to what the scans of the luggage show is inside. When it's your turn to go through the passenger scan (a thermal body scan), you may be verbally accosted by a security droid. Also, keep an eye out in the queue for an earlier version of RX-24 ("Captain Rex") from the original Star Tours; he's labeled "defective" and has some familiar dialogue.

This is the new Star Tours ride at Walt Disney World in Orlando.

Posted on May 20, 2011 at 2:43 PM • 34 Comments

Forged Subway Passes in Boston

For years, an employee of Cubic Corp -- the company that makes the automatic fare card systems for most of the subway systems around the world -- forged and then sold monthly passes for the Boston MBTA system.

The scheme was discovered by accident:

Coakley said the alleged scheme was only discovered after a commuter rail operator asked a rider where he had bought his pass. When the rider said he'd purchased the pass on Craigslist, the operator became suspicious and confiscated the ticket.

An investigation by the MBTA Transit Police found that, despite opening the electronic gates, the pass's printed serial number did not show up in the MBTA database as ever having been activated. Hundreds of similar passes in use by passengers were then discovered, investigators said.

You'd think the MBTA would poke around the net occasionally, looking for discounted tickets being sold on places like Craigslist.

Cubic Transportation Systems said in a written statement that it is cooperating with authorities. "Our company has numerous safeguards designed to prevent fraudulent production or distribution of Charlie Tickets," the statement said, referring to the monthly MBTA passes.

It always amuses me when companies pretend the obvious isn't true in their press releases. "Someone completely broke our system." "Say that we have a lot of security." "But it didn't work." "Say it anyway; the press will just blindly report it."

To be fair, we don't know -- and probably never will -- how this proprietary system was broken. In this case, an insider did it. But did that insider just have access to the system specifications, or was access to blank ticket stock or specialized equipment necessary as well?

EDITED TO ADD (5/22): More details:

On March 11, a conductor on the commuter rail’s Providence/Stoughton Line did a double-take when a customer flashed a discolored monthly pass, its arrow an unusually light shade of orange. The fading, caused by inadvertent laundering, would have happened even if the pass were legitimate, but the customer, perhaps out of nervousness, volunteered that he had purchased it at a discount on Craigslist, Coakley said.

That raised the conductor’s suspicion. He collected the pass and turned it over to the Transit Police, who found no record of its serial number and began investigating. Working with State Police from Coakley’s office, they traced it to equipment at the Beverly branch of Cubic Transportation Systems Inc. and then specifically to an employee: Townes, a 27-year-old Revere resident.

Auditing could have discovered the fraud much earlier:

A records check would have indicated that the serial numbers were not tied to accounts for paying customers. But the financially strapped MBTA, which handles thousands of passes and moves millions of riders a month, did not have practices in place to sniff out the small percentage of unauthorized passes in circulation, Davey said.
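The records check Davey describes is cheap to express. A hypothetical sketch (made-up serial numbers, not the MBTA's schema): any pass seen at a fare gate whose serial number was never activated in the sales database is a forgery candidate.

```python
# Hypothetical audit sketch: gate logs vs. the activation database.
activated = {"A1001", "A1002", "A1003"}          # serials the sales system issued
seen_at_gates = ["A1001", "Z9001", "A1002", "Z9002", "A1001"]

# Anything used at a gate but never activated is suspicious.
suspicious = sorted(set(seen_at_gates) - activated)
print(suspicious)  # ['Z9001', 'Z9002']
```

A periodic batch job doing exactly this set difference would have flagged the hundreds of forged passes long before a conductor noticed a faded one.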

Posted on May 20, 2011 at 7:44 AM • 47 Comments

Bin Laden Maintained Computer Security with an Air Gap

From the Associated Press:

Bin Laden's system was built on discipline and trust. But it also left behind an extensive archive of email exchanges for the U.S. to scour. The trove of electronic records pulled out of his compound after he was killed last week is revealing thousands of messages and potentially hundreds of email addresses, the AP has learned.

Holed up in his walled compound in northeast Pakistan with no phone or Internet capabilities, bin Laden would type a message on his computer without an Internet connection, then save it using a thumb-sized flash drive. He then passed the flash drive to a trusted courier, who would head for a distant Internet cafe.

At that location, the courier would plug the memory drive into a computer, copy bin Laden's message into an email and send it. Reversing the process, the courier would copy any incoming email to the flash drive and return to the compound, where bin Laden would read his messages offline.

I'm impressed. It's hard to maintain this kind of COMSEC discipline.

It was a slow, toilsome process. And it was so meticulous that even veteran intelligence officials have marveled at bin Laden's ability to maintain it for so long. The U.S. always suspected bin Laden was communicating through couriers but did not anticipate the breadth of his communications as revealed by the materials he left behind.

Navy SEALs hauled away roughly 100 flash memory drives after they killed bin Laden, and officials said they appear to archive the back-and-forth communication between bin Laden and his associates around the world.

Posted on May 18, 2011 at 8:45 AM • 103 Comments

Fingerprint Scanner that Works at a Distance

Scanning fingerprints from six feet away.

Slightly smaller than a square tissue box, AIRprint houses two 1.3 megapixel cameras and a source of polarized light. One camera receives horizontally polarized light, while the other receives vertically polarized light. When light hits a finger, the ridges of the fingerprint reflect one polarization of light, while the valleys reflect another. "That's where the real kicker is, because if you look at an image without any polarization, you can kind of see fingerprints, but not really well," says Burcham. By separating the vertical and the horizontal polarization, the device can overlap those images to produce an accurate fingerprint, which is fed to a computer for verification.

No information on how accurate it is, but it'll only get better.
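The quoted description boils down to a normalized-difference operation on the two polarized images. Here's a toy one-dimensional sketch -- made-up reflectance numbers, not AIRprint's actual processing -- showing how differencing the two channels makes the ridge pattern pop out:

```python
# Toy 1-D "scanline": ridges reflect mostly horizontal polarization,
# valleys mostly vertical, so the normalized difference of the two
# channels highlights the ridge/valley pattern.
horizontal = [0.9, 0.2, 0.9, 0.2, 0.9]  # bright at ridges
vertical   = [0.2, 0.9, 0.2, 0.9, 0.2]  # bright at valleys

contrast = [(h - v) / (h + v) for h, v in zip(horizontal, vertical)]
ridges = [c > 0 for c in contrast]
print(ridges)  # [True, False, True, False, True]
```

Either image alone has the pattern buried in ambient reflection; it's the difference between the polarizations that carries the signal, which is what Burcham's "real kicker" comment is getting at.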

Posted on May 17, 2011 at 7:46 AM • 43 Comments

The Inner Workings of an FBI Surveillance Device

This FBI surveillance device, designed to be attached to a car, has been taken apart and analyzed.

A recent ruling by the 9th U.S. Circuit Court of Appeals affirms that it's legal for law enforcement to secretly place a tracking device on your car without a warrant, even if it's parked in a private driveway.

Posted on May 16, 2011 at 6:31 AM • 97 Comments

Friday Squid Blogging: Squid Sous Vide

Yum:

We learned to cook squid sous vide at 59°C when we were at Atelier in Canada. The cooking time and temperature we picked up produce squid which is meaty, juicy and rich in texture. Here we marinated the squid with mango pickle and then cooked them for three hours at 59°C. Then we cooled them down in an ice bath. Once cooled, we were able to score them and then sear them in olive oil. When the squid was good and brown we added butter to the pan, let it foam, and basted the squid. Then we removed the squid from the pan and added cabbage leaves to saute them in the juices. When the cabbage was blistered we dressed the squid and cabbage with fresh lemon juice. To bring the dish together we added a few spoonfuls of grilled yogurt.

Posted on May 13, 2011 at 4:54 PM • 17 Comments

Interview with Me About the Sony Hack

This is what I get for giving interviews when I'm in a bad mood. For the record, I think Sony did a terrible job with its customers' security. I also think that most companies do a terrible job with customers' security, simply because there isn't a financial incentive to do better. And that most of us are pretty secure, despite that.

One of my biggest complaints with these stories is how little actual information we have. We often don't know if any data was actually stolen, only that hackers had access to it. We rarely know how the data was accessed: what sort of vulnerability was used by the hackers. We rarely know the motivations of the hackers: were they criminals, spies, kids, or someone else? We rarely know if the data is actually used for any nefarious purposes; it's generally impossible to connect a data breach with a corresponding fraud incident. Given all of that, it's impossible to say anything useful or definitive about the attack. But the press always wants definitive statements.

Posted on May 13, 2011 at 11:29 AM • 55 Comments

Drugging People and Then Robbing Them

This is a pretty scary criminal tactic from Turkey. Burglars dress up as doctors, ring doorbells, and hand out pills under one pretense or another. The pills are actually powerful sedatives; when people take them, they pass out, and the burglars can ransack the house.

According to the article, when the police tried the same trick with placebos, they got an 86% compliance rate.

Kind of like a real-world version of those fake anti-virus programs that actually contain malware.

Posted on May 13, 2011 at 7:11 AM • 56 Comments

RFID Tags Protecting Hotel Towels

The stealing of hotel towels isn't a big problem in the scheme of world problems, but it can be expensive for hotels. Sure, we have moral prohibitions against stealing -- that'll prevent most people from stealing the towels. Many hotels put their name or logo on the towels. That works as a reputational societal security system; most people don't want their friends to see obviously stolen hotel towels in their bathrooms. Sometimes, though, this has the opposite effect: making towels and other items into souvenirs of the hotel and thus more desirable to steal. It's against the law to steal hotel towels, of course, but with the exception of large-scale thefts, the crime will never be prosecuted. (This might be different in third world countries. In 2010, someone was sentenced to three months in jail for stealing two towels from a Nigerian hotel.) The result is that more towels are stolen than hotels want. And for expensive resort hotels, those towels are expensive to replace.

The only thing left for hotels to do is take security into their own hands. One system that has become increasingly common is to set prices for towels and other items -- this is particularly common with bathrobes -- and charge the guest for them if they disappear from the rooms. This works with some things, but it's too easy for the hotel to lose track of how many towels a guest has in his room, especially if piles of them are available at the pool.

A more recent system, still not widespread, is to embed washable RFID chips into the towels and track them that way. The one data point I have for this is an anonymous Hawaii hotel that claims they've reduced towel theft from 4,000 a month to 750, saving $16,000 in replacement costs monthly.

Assuming the RFID tags are relatively inexpensive and don't wear out too quickly, that's a pretty good security trade-off.
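A quick back-of-the-envelope check on those reported Hawaii numbers (the per-towel cost below is implied by the article's figures, not stated in it):

```python
# All inputs are the article's reported figures.
stolen_before = 4000      # towels per month, pre-RFID
stolen_after = 750        # towels per month, with RFID tags
monthly_savings = 16000   # dollars, as reported

towels_saved = stolen_before - stolen_after      # 3250 towels/month
implied_cost = monthly_savings / towels_saved    # implied replacement cost
print(round(implied_cost, 2))  # 4.92
```

About five dollars per towel and $192,000 a year in avoided replacements -- so as long as tagging a towel costs well under that and the tags survive the laundry, the trade-off holds.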

Posted on May 11, 2011 at 11:01 AM • 69 Comments

"Resilience of the Internet Interconnection Ecosystem"

This blog post by Richard Clayton is worth reading.

If you have more time, there's a 238-page report and a 31-page executive summary.

Posted on May 11, 2011 at 6:12 AM • 9 Comments

Medieval Tally Stick Discovered in Germany

Interesting:

The well-preserved tally stick was used in the Middle Ages to count the debts owed by the holder in a time when most people were unable to read or write.

"Debts would have been carved into the stick in the form of small notches. Then the stick would have been split lengthways, with the creditor and the borrower each keeping a half," explained Hille.

The two halves would then be put together again on the day repayment was due in order to compare them, with both sides hoping that they matched.

Note the security built into this primitive contract system. Neither side can cheat -- alter the notches -- because if they do, the two sides won't match. I wonder what the dispute resolution system was: what happened when the two sides didn't match.

EDITED TO ADD (5/14): In comments, lollardfish answers my question: "One then gets accused of fraud in court. In most circumstances, local power/reputation wins in fraud cases, since it's not about finding of fact but who do you trust."
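The tally's verification step is simple enough to model in a few lines. A toy sketch -- `split_tally` is an illustrative name, not a historical procedure:

```python
# The medieval protocol in miniature: both parties hold a copy of the
# notch pattern; repayment day is a simple equality check. Neither side
# can add or remove notches without the halves failing to match.
def split_tally(notches):
    """Record a debt: both halves carry the same notch pattern."""
    return list(notches), list(notches)   # creditor half, debtor half

creditor_half, debtor_half = split_tally([3, 1, 4])  # e.g. 3 cows, 1 pig, 4 sacks

# An attempted alteration by either party breaks the match.
debtor_half[0] = 1
print(creditor_half == debtor_half)  # False
```

It's essentially a two-party integrity check: the physical split plays the role that a shared checksum plays in a modern protocol, and a mismatch proves tampering without saying which side cheated -- hence the dispute-resolution question.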

Posted on May 10, 2011 at 1:47 PM • 65 Comments

The Era of "Steal Everything"

Good comment:

"We're moving into an era of 'steal everything'," said David Emm, a senior security researcher for Kaspersky Labs.

He believes that cyber criminals are now no longer just targeting banks or retailers in the search for financial details, but instead going after social and other networks which encourage the sharing of vast amounts of personal information.

As both data storage and data processing become cheaper, more and more data is collected and stored. An unanticipated effect of this is that more and more data can be stolen and used. As the article says, data minimization is the most effective security tool against this sort of thing. But -- of course -- limiting data collection is not in the database owner's interest; it's in the interest of those whom the data is about.

Posted on May 10, 2011 at 6:20 AM • 40 Comments

Vulnerabilities in Online Payment Systems

This hack was conducted as a research project. It's unlikely it's being done in the wild:

In one attack, Wang and colleagues used a plug-in for the Firefox web browser to examine data being sent and received by the online retailer Buy.com. When users make a purchase, Buy.com directs them to PayPal. Once they have paid, PayPal sends Buy.com a confirmation message tagged with a code that identifies the transaction.

PayPal handles its side of the process securely, says Wang, but Buy.com was relatively easy to fool. First the team purchased an item and noted the confirmation code used by PayPal. Then they selected a second item on Buy.com but did not pay up. Instead, they used the code from the first transaction to fake a confirmation message, which Buy.com accepted as proof of payment.
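A plausible reconstruction of the flaw -- hypothetical code, since Buy.com's actual implementation isn't public -- is a merchant that checks only that a confirmation code looks valid, rather than checking that it belongs to the order being paid for and hasn't been used before:

```python
# Hypothetical merchant-side verification; names are illustrative.
seen_codes = set()

def verify_naive(code, known_codes):
    # The flaw: any previously valid code is accepted for any order.
    return code in known_codes

def verify_fixed(code, order_id, payments):
    # The fix: the code must identify THIS order's payment, once only.
    ok = payments.get(order_id) == code and code not in seen_codes
    if ok:
        seen_codes.add(code)
    return ok

payments = {"order-1": "TX123"}                    # PayPal confirmed order-1 only
assert verify_naive("TX123", {"TX123"})            # order-2 "paid" with old code
assert not verify_fixed("TX123", "order-2", payments)  # replay rejected
assert verify_fixed("TX123", "order-1", payments)      # legitimate use accepted
```

It's a classic replay attack: the token was authentic, just not bound to the transaction it was presented for.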

Paper here.

Posted on May 9, 2011 at 1:50 PM • 24 Comments

Status Report: The Dishonest Minority

Three months ago, I announced that I was writing a book on why security exists in human societies. This is basically the book's thesis statement:

All complex systems contain parasites. In any system of cooperative behavior, an uncooperative strategy will be effective -- and the system will tolerate the uncooperatives -- as long as they're not too numerous or too effective. Thus, as a species evolves cooperative behavior, it also evolves a dishonest minority that takes advantage of the honest majority. If individuals within a species have the ability to switch strategies, the dishonest minority will never be reduced to zero. As a result, the species simultaneously evolves two things: 1) security systems to protect itself from this dishonest minority, and 2) deception systems to successfully be parasitic.

Humans evolved along this path. The basic mechanism can be modeled simply. It is in our collective group interest for everyone to cooperate. It is in any given individual's short-term self interest not to cooperate: to defect, in game theory terms. But if everyone defects, society falls apart. To ensure widespread cooperation and minimal defection, we collectively implement a variety of societal security systems.

Two of these systems evolved in prehistory: morals and reputation. Two others evolved as our social groups became larger and more formal: laws and technical security systems. What these security systems do, effectively, is give individuals incentives to act in the group interest. But none of these systems, with the possible exception of some fanciful science-fiction technologies, can ever bring that dishonest minority down to zero.
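The incentive logic above can be reduced to a toy inequality -- made-up numbers, and `b`, `p`, and `f` are illustrative parameters, not anything from the book:

```python
# Toy model: defection pays b, but a security system catches defectors
# with probability p and imposes a penalty f. Defection stays rational
# while b > p * f -- so driving the dishonest minority to exactly zero
# would require a perfect (or draconian) security system.
def defection_pays(b, p, f):
    return b > p * f

print(defection_pays(b=10, p=0.1, f=50))   # True: weak enforcement
print(defection_pays(b=10, p=0.5, f=50))   # False: credible deterrent
```

The four societal security systems -- morals, reputation, laws, technology -- are all just different ways of raising `p` or `f` (or lowering `b`), and none of them gets `p * f` high enough for everyone, everywhere, all the time.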

In complex modern societies, many complications intrude on this simple model of societal security. Decisions to cooperate or defect are often made by groups of people -- governments, corporations, and so on -- and there are important differences because of dynamics inside and outside the groups. Much of our societal security is delegated -- to the police, for example -- and becomes institutionalized; the dynamics of this are also important. Power struggles over who controls the mechanisms of societal security are inherent: "group interest" rapidly devolves to "the king's interest." Societal security can become a tool for those in power to remain in power, with the definition of "honest majority" being simply the people who follow the rules.

The term "dishonest minority" is not a moral judgment; it simply describes the minority who does not follow societal norms. Since many societal norms are in fact immoral, sometimes the dishonest minority serves as a catalyst for social change. Societies without a reservoir of people who don't follow the rules lack an important mechanism for societal evolution. Vibrant societies need a dishonest minority; if society makes its dishonest minority too small, it stifles dissent as well as common crime.

At this point, I have most of a first draft: 75,000 words. The tentative title is still "The Dishonest Minority: Security and its Role in Modern Society." I have signed a contract with Wiley to deliver a final manuscript in November for February 2012 publication. Writing a book is a process of exploration for me, and the final book will certainly be a little different -- and maybe even very different -- from what I wrote above. But that's where I am today.

And it's why my other writings continue to be sparse.

Posted on May 9, 2011 at 7:02 AM • 260 Comments

Friday Squid Blogging: Noise Pollution and Squid

It literally blows holes in their heads:

In the study, led by Michel André of the Technical University of Catalonia in Barcelona, biologists exposed 87 individual cephalopods of four species -- Loligo vulgaris, Sepia officinalis, Octopus vulgaris and Illex coindeti -- to short sweeps of relatively low intensity, low frequency sound between 50 and 400 Hertz (Hz). Then they examined the animals' statocysts -- fluid-filled, balloon-like structures that help these invertebrates maintain balance and position in the water. André and his colleagues found that, immediately following exposure to low frequency sound, the cephalopods showed hair cell damage within the statocysts. Over time, nerve fibers became swollen and, eventually, large holes appeared.

Posted on May 6, 2011 at 4:31 PM • 8 Comments

Forged Memory

A scary development in rootkits:

Rootkits typically modify certain areas in the memory of the running operating system (OS) to hijack execution control from the OS. Doing so forces the OS to present inaccurate results to detection software (anti-virus, anti-rootkit).

For example rootkits may hide files, registries, processes, etc., from detection software. So rootkits typically modify memory. And anti-rootkit tools inspect memory areas to identify such suspicious modifications and alarm users.

This particular rootkit also modifies a memory location (installs a hook) to prevent proper disk access by detection software. Let us say that location is X. It is noteworthy that this location X is well known for being modified by other rootkit families, and is not unique to this particular rootkit.

Now since the content at location X is known to be altered by rootkits in general, most anti-rootkit tools will inspect the content at memory location X to see if it has been modified.

[...]

In the case of this particular rootkit, the original (what's expected) content at location X is moved by the rootkit to a different location, Y. When an anti-rootkit tool tries to read the contents at location X, it is served contents from location Y. So, the anti-rootkit tool thinking everything is as it should be, does not warn the user of suspicious activity.
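The redirection can be modeled abstractly in a few lines -- a toy dict standing in for memory; real rootkits do this with kernel-level hooks, not Python:

```python
# Toy model of the trick described above: the rootkit saves the original
# contents of location X at location Y, patches X, and then serves reads
# of X from Y so that scanners see the "clean" original bytes.
memory = {"X": "original", "Y": ""}

# Rootkit installs itself: preserve the original, then hook X.
memory["Y"] = memory["X"]
memory["X"] = "hook"

def hooked_read(addr):
    # What a scanner gets back once the rootkit filters memory reads.
    return memory["Y"] if addr == "X" else memory[addr]

print(memory["X"])        # hook      (the truth)
print(hooked_read("X"))   # original  (what the anti-rootkit tool sees)
```

The scary part is the layering: the anti-rootkit tool's integrity check depends on reads that the rootkit itself mediates, so the check passes precisely because the machine is compromised.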

Posted on May 6, 2011 at 12:32 PM • 46 Comments

Extreme Authentication

Exactly how did they confirm it was Bin Laden's body?

Officials compared the DNA of the person killed at the Abbottabad compound with the bin Laden "family DNA" to determine that the 9/11 mastermind had in fact been killed, a senior administration official said.

It was not clear how many different family members' samples were compared or whose DNA was used.

[...]

Also to identify bin Laden, a visual ID was made. There were photo comparisons and other facial recognition used to identify him, the official said. A second official said that in addition to DNA, there was full biometric analysis of facial and body features.

EDITED TO ADD (5/5): A better article.

Posted on May 5, 2011 at 12:52 PM • 83 Comments

Bin Laden's Death Causes Spike in Suspicious Package Reports

It's not that the risk is greater, it's that the fear is greater. Data from New York:

There were 10,566 reports of suspicious objects across the five boroughs in 2010. So far this year, the total was 2,775 as of Tuesday compared with 2,477 through the same period last year.

[...]

The daily totals typically spike when a terrorist plot makes headlines here or overseas, NYPD spokesman Paul Browne said Tuesday. The false alarms themselves sometimes get break-in cable news coverage or feed chatter online, fueling further fright.

On Monday, with news of the dramatic military raid of bin Laden's Pakistani lair at full throttle, there were 62 reports of suspicious packages. The previous Monday, the 24-hour total was 18. All were deemed non-threats.

Despite all the false alarms, the New York Police Department still wants to hear them:

"We anticipate that with increased public vigilance comes an increase in false alarms for suspicious packages," Kelly said at the Monday news conference. "This typically happens at times of heightened awareness. But we don't want to discourage the public. If you see something, say something."

That slogan, oddly enough, is owned by New York's transit authority.

I have a different opinion: "If you ask amateurs to act as front-line security personnel, you shouldn't be surprised when you get amateur security."

People have always come forward to tell the police when they see something genuinely suspicious, and should continue to do so. But encouraging people to raise an alarm every time they're spooked only squanders our security resources and makes no one safer.

"Refuse to be terrorized," people.

Posted on May 5, 2011 at 6:43 AM41 Comments

"Operation Pumpkin"

Wouldn't it be great if this were not a joke: the security contingency that was in place in the event that Kate Middleton tried to run away just before the wedding.

After protracted, top-secret negotiations between royal staff from Clarence House and representatives from the Metropolitan Police, MI5 and elements of the military, a compromise was agreed. In the event of Operation Pumpkin being put into effect Ms Middleton will be permitted to run out of Westminster Abbey with her bodyguards trailing discreetly at a distance. Plain-clothes undercover police, MI5 officers and SAS soldiers stationed in the crowd will form a mobile flying wedge ahead of her, clearing a path for the fugitive future princess to escape down.

Prince William will then have a limited time, the subject of tense negotiations between Clarence House and security chiefs, in which the path behind Ms Middleton will be kept open for him to go after her, after which the mobile protective cordon will close again at the Abbey end due to lack of manpower and the Prince will have let his bride slip through his fingers.

If Wills reacts fast enough, however, he will be able to chase after his fleeing fiancee for just under half a mile.

I wonder what security would have done if she just took off and ran.

EDITED TO ADD (5/5): The double negative in the first sentence has confused some people. To be clear: the article quoted, and Operation Pumpkin in general, is fiction.

Posted on May 4, 2011 at 12:15 PM42 Comments

Unintended Security Consequences of the New Pyrex Recipe

This is interesting:

When World Kitchen took over the Pyrex brand, it started making more products out of pre-stressed soda-lime glass instead of borosilicate. With pre-stressed, or tempered, glass, the surface is under compression from forces inside the glass. It is stronger than borosilicate glass, but when it's heated, it still expands as much as ordinary glass does. It doesn't shatter immediately, because the expansion first acts only to release some of the built-in stress. But only up to a point.

One unfortunate use of Pyrex is cooking crack cocaine, which involves a container of water undergoing a rapid temperature change when the drug is converted from powder form. That process creates more stress than soda-lime glass can withstand, so an entire underground industry was forced to switch from measuring cups purchased at Walmart to test tubes and beakers stolen from labs.

Posted on May 4, 2011 at 6:40 AM43 Comments

Decline in Cursive Writing Leads to Increase in Forgery Risk?

According to this article, students are no longer learning how to write in cursive. And, if they are learning it, they're forgetting how. Certainly the ubiquity of keyboards is leading to a decrease in writing by hand. Relevant to this blog, the article claims that this is making signatures easier to forge.

While printing might be legible, the less complex the handwriting, the easier it is to forge, said Heidi H. Harralson, a graphologist in Tucson. Even though handwriting can change -- and become sloppier -- as a person ages, people who are not learning or practicing it are at a disadvantage, Ms. Harralson said.

"I'm seeing an increase in inconstancy in the handwriting and poor form level -- sloppy, semi-legible script that's inconsistent," she said.

Most everyone has a cursive signature, but even those are getting harder to identify, Ms. Harralson said.

"Even people that didn't learn cursive, they usually have some type of cursive form signature, but it's not written very well," she said. "It tends to be more abstract, illegible and simplistic. If they're writing with block letters it's easier to forge."

Maybe, but I'm skeptical. Everyone has a scrawl of some sort; mine has been completely illegible for years. But I don't see document forgery as a big risk; far bigger is the automatic authentication systems that don't have anything to do with traditional forgery.

Posted on May 3, 2011 at 2:25 PM55 Comments

Nikon Image Authentication System Cracked

Not a lot of details:

ElcomSoft research shows that image metadata and image data are processed independently with a SHA-1 hash function. There are two 160-bit hash values produced, which are later encrypted with a secret (private) key by using an asymmetric RSA-1024 algorithm to create a digital signature. Two 1024-bit (128-byte) signatures are stored in EXIF MakerNote tag 0x0097 (Color Balance).

During validation, Nikon Image Authentication Software calculates two SHA-1 hashes from the same data, and uses the public key to verify the signature by decrypting stored values and comparing the result with newly calculated hash values.

The ultimate vulnerability is that the private (should-be-secret) cryptographic key is handled inappropriately, and can be extracted from the camera. After obtaining the private key, it is possible to generate a digital signature value for any image, thus forging the Image Authentication System.
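The scheme ElcomSoft describes can be sketched in a few lines. This is a toy: it uses textbook RSA with tiny demo primes and truncated hashes purely for illustration (the camera uses real RSA-1024 with proper padding), but it shows why extracting the private exponent from the camera defeats the whole system:

```python
# Toy sketch of the Nikon scheme: metadata and image data are hashed
# separately with SHA-1, and each hash is signed with the camera's private
# RSA key. Tiny, insecure parameters for illustration only.
import hashlib

p, q = 61, 53                  # demo primes (insecure, illustration only)
n, phi = p * q, (p - 1) * (q - 1)
e = 17                         # public exponent, known to the verifier
d = pow(e, -1, phi)           # private exponent -- the secret in the camera

def sha1_int(data: bytes) -> int:
    # SHA-1 digest reduced mod n so it fits our toy modulus
    return int.from_bytes(hashlib.sha1(data).digest(), "big") % n

def sign(data: bytes, priv: int) -> int:
    return pow(sha1_int(data), priv, n)       # signature = hash^d mod n

def verify(data: bytes, sig: int) -> bool:
    return pow(sig, e, n) == sha1_int(data)   # check hash == sig^e mod n

metadata, image = b"EXIF metadata", b"raw image data"
sigs = (sign(metadata, d), sign(image, d))    # two signatures, as in MakerNote
assert verify(metadata, sigs[0]) and verify(image, sigs[1])

# The vulnerability: once d is extracted from the camera, anyone can
# produce valid signatures for an arbitrary forged image:
forged = b"doctored image"
assert verify(forged, sign(forged, d))
```

The math is sound; the failure is key management. A signature scheme proves only that the signer held the key, so a key that can be pulled out of every camera proves nothing.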

News article.

Canon's system is just as bad, by the way.

Fifteen years ago, I co-authored a paper on the problem. The idea was to use a hash chain to better deal with the possibility of a secret-key compromise.
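The hash-chain idea can be sketched roughly as follows. This is an illustration of the general approach, not the paper's exact construction: each image's authentication value commits to everything that came before it, so a key compromised later cannot be used to silently rewrite earlier images:

```python
# Illustrative hash chain over a sequence of images: link[i] depends on
# link[i-1], so altering any earlier image changes every subsequent link.
import hashlib

def chain(images):
    """Return the hash-chain links for a sequence of image byte strings."""
    links, h = [], b"\x00" * 20          # public, fixed starting value
    for img in images:
        h = hashlib.sha1(h + hashlib.sha1(img).digest()).digest()
        links.append(h)
    return links

shots = [b"img1", b"img2", b"img3"]
links = chain(shots)

# Tampering with an early image changes its link and every later one, so an
# attacker who steals the key later cannot rewrite history undetected:
tampered = chain([b"FORGED", b"img2", b"img3"])
assert tampered[0] != links[0] and tampered[2] != links[2]
```

With a flat per-image signature, a stolen key forges anything, past or future; with chaining, earlier links that have already been published or archived pin down the history.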

Posted on May 3, 2011 at 7:54 AM25 Comments

Hijacking the Coreflood Botnet

Earlier this month, the FBI seized control of the Coreflood botnet and shut it down:

According to the filing, ISC, under law enforcement supervision, planned to replace the servers with servers that it controlled, then collect the IP addresses of all infected machines communicating with the criminal servers, and send a remote "stop" command to infected machines to disable the Coreflood malware operating on them.

This is a big deal; it's the first time the FBI has done something like this. My guess is that we're going to see a lot more of this sort of thing in the future; it's the obvious solution for botnets.

Not that the approach is without risks:

"Even if we could absolutely be sure that all of the infected Coreflood botnet machines were running the exact code that we reverse-engineered and convinced ourselves that we understood," said Chris Palmer, technology director for the Electronic Frontier Foundation, "this would still be an extremely sketchy action to take. It's other people's computers and you don't know what's going to happen for sure. You might blow up some important machine."

I just don't see this argument convincing very many people. Leaving Coreflood in place could blow up some important machine. And leaving Coreflood in place not only puts the infected computers at risk; it puts the whole Internet at risk. Minimizing the collateral damage is important, but this feels like a place where the interest of the Internet as a whole trumps the interest of those affected by shutting down Coreflood.

The problem as I see it is the slippery slope. Because next, the RIAA is going to want to remotely disable computers they feel are engaged in illegal file sharing. And the FBI is going to want to remotely disable computers they feel are encouraging terrorism. And so on. It's important to have serious legal controls on this counterattack sort of defense.

Some more commentary.

Posted on May 2, 2011 at 6:52 AM37 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.