April 2010 Archives

Friday Squid Blogging: Squid Purity Test

I didn't know this:

A Squid is a motorcycle rider who, experienced or not, rides outside his abilities and sets poor examples by attire, propriety, and general behavior on the motorcycle.

115 questions in the test.

Posted on April 30, 2010 at 4:04 PM • 26 Comments

Homeopathic Bomb

This is funny:

The world has been placed on a heightened security alert following reports that New Age terrorists have harnessed the power of homeopathy for evil. "Homeopathic weapons represent a major threat to world peace," said President Barack Obama, "they might not cause any actual damage but the placebo effect could be quite devastating."

[...]

Homeopathic bombs are comprised of 99.9% water but contain the merest trace element of explosive. The solution is then repeatedly diluted so as to leave only the memory of the explosive in the water molecules. According to the laws of homeopathy, the more that the water is diluted, the more powerful the bomb becomes.

[...]

"A homeopathic attack could bring entire cities to a standstill," said BBC Security Correspondent, Frank Gardner. "Large numbers of people could easily become convinced that they have been killed and hospitals would be unable to cope with the massive influx of the 'walking suggestible.'"

It's a little too close to reality, though.

Posted on April 30, 2010 at 2:28 PM • 49 Comments

Fun with Secret Questions

Ally Bank wants its customers to invent their own personal secret questions and answers; the idea is that an operator will read the question over the phone and listen for an answer. Ignoring for the moment the problem of the operator now knowing the question/answer pair, what are some good pairs? Some suggestions:

Q: Do you know why I think you're so sexy?
A: Probably because you're totally in love with me.

Q: Need any weed? Grass? Kind bud? Shrooms?
A: No thanks hippie, I'd just like to do some banking.

Q: The Penis shoots Seeds, and makes new Life to poison the Earth with a plague of men.
A: Go forth, and kill. Zardoz has spoken.

Q: What the hell is your fucking problem, sir?
A: This is completely inappropriate and I'd like to speak to your supervisor.

Q: I've been embezzling hundreds of thousands of dollars from my employer, and I don't care who knows it.
A: It's a good thing they're recording this call, because I'm going to have to report you.

Q: Are you really who you say you are?
A: No, I am a Russian identity thief.

Okay, now it's your turn.

Posted on April 30, 2010 at 7:24 AM • 224 Comments

Hypersonic Cruise Missiles

The U.S. is developing a weapon capable of striking anywhere on the planet within an hour. The article discusses the possibility of modifying Trident missiles -- problematic because they would be indistinguishable from nuclear weapons -- and using the Mach 5-capable X-51 hypersonic cruise missile.

Interesting technology, but we really need to think through the political ramifications of this sort of thing better.

EDITED TO ADD (5/13): Report on the policy implications.

Posted on April 29, 2010 at 1:28 PM • 63 Comments

Frank Furedi on Worst-Case Thinking

Nice essay by sociologist Frank Furedi on worst-case thinking, exemplified by our reaction to the Icelandic volcano:

I am not a natural scientist, and I claim no authority to say anything of value about the risks posed by volcanic ash clouds to flying aircraft. However, as a sociologist interested in the process of decision-making, it is evident to me that the reluctance to lift the ban on air traffic in Europe is motivated by worst-case thinking rather than rigorous risk assessment. Risk assessment is based on an attempt to calculate the probability of different outcomes. Worst-case thinking -- these days known as 'precautionary thinking' -- is based on an act of imagination. It imagines the worst-case scenario and then takes action on that basis. In the case of the Icelandic volcano, fears that particles in the ash cloud could cause aeroplane engines to shut down automatically mutated into a conclusion that this would happen. So it seems to me to be the fantasy of the worst-case scenario rather than risk assessment that underpins the current official ban on air traffic.

[...]

Worst-case thinking encourages society to adopt fear as one of the key principles around which the public, the government and various institutions should organise their lives. It institutionalises insecurity and fosters a mood of confusion and powerlessness. Through popularising the belief that worst cases are normal, it also encourages people to feel defenceless and vulnerable to a wide range of future threats. In all but name, it is an invitation to social paralysis. The eruption of a volcano in Iceland poses technical problems, for which responsible decision-makers should swiftly come up with sensible solutions. But instead, Europe has decided to turn a problem into a drama. In 50 years' time, historians will be writing about our society’s reluctance to act when practical problems arose. It is no doubt difficult to face up to a natural disaster -- but in this case it is the all-too-apparent manmade disaster brought on by indecision and a reluctance to engage with uncertainty that represents the real threat to our future.
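Furedi's distinction between the two decision rules is easy to make concrete. Here's a toy sketch, with invented numbers (nothing here comes from the essay itself):

```python
# Risk assessment vs. worst-case thinking, with made-up numbers.
# Each outcome of a single flight through the ash zone: (probability, cost).
outcomes = [
    (0.98999, 0),              # nothing happens
    (0.01, 50_000),            # minor ash damage, engine inspection
    (0.00001, 300_000_000),    # catastrophic engine failure
]

# Risk assessment weights each outcome by how likely it is.
expected_loss = sum(p * cost for p, cost in outcomes)

# Worst-case thinking acts as if the worst outcome were certain.
worst_case = max(cost for _, cost in outcomes)

print(f"Expected loss per flight:   ${expected_loss:,.0f}")   # $3,500
print(f"Worst-case loss per flight: ${worst_case:,.0f}")      # $300,000,000
```

The two rules differ by five orders of magnitude here, which is why they lead to such different policies.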

Posted on April 29, 2010 at 6:40 AM • 69 Comments

Can Safes

Hiding your valuables in common household containers is an old trick.

Diversion safes look like ordinary containers and are designed to hide your valuables in plain sight. Common diversion safes include fake brand-name containers for soda pop, canned fruit, home cleaners, or even novels. Diversion can safes have removable tops or bottoms so that you can put your goods in them, and the safes are weighted so that they appear normal when handled.

These are relatively inexpensive, although it's cheaper to make your own.

Posted on April 28, 2010 at 1:21 PM • 58 Comments

Seat Belt Use and Lessons for Security Awareness

From Lance Spitzner:

In January of this year the National Highway Traffic Safety Administration released a report called "Analyzing the First Years Of the Ticket or Click It Mobilizations"... While the report is focused on the use of seat belts, it has fascinating applications to the world of security awareness. The report focuses on 2000-2006, when most states in the United States began campaigns (called Ticket or Click It) promoting and requiring the use of seat belts. Just like security awareness, the goal of the campaign was to change behaviors, specifically to get people to wear their seat belts when driving... The campaigns were very successful, resulting in a 20-23% increase in seat belt use regardless of which statistics they used. The key finding of the report was that enforcement and not money spent on media were key to results. The states that had the strongest enforcement had the most people using seat belts. The states with the weakest enforcement had the lowest seat belt usage.

[...]

I feel the key lesson here is that not only must an awareness program communicate effectively, but to truly change behaviors, what you communicate has to be enforced. An information security awareness campaign communicates what is enforced (your policies) and, in addition, it should communicate why. Then follow up that campaign with strong, visible enforcement.

Posted on April 28, 2010 at 7:39 AM • 57 Comments

New York Police Protect Obama from Bicycles

The police were afraid that the bicycles might contain pipe bombs.

This is the correct reaction:

In any case, I suspect someone somewhere just panicked at the possibility that something might explode near the President on his watch, since the whole operation has the finesse of a teenage stoner shoving his pot paraphernalia under the bed and desperately trying to clear the air with a copy of "Maxim" when he hears his parents coming home.

Seems that it's legal:

When asked by Gothamist, their precinct contact replied: "No, they just did this because the president was coming and they didn't want anything on the sidewalks. You're not supposed to lock your bike to signposts anyway, they have those new bike racks you're supposed to use."

I'll bet you anything that they didn't leave the bicycles that were locked to the racks.

Posted on April 27, 2010 at 6:27 AM • 56 Comments

Punishing Security Breaches

The editor of the Freakonomics blog asked me to write about this topic. The idea was that they would get several opinions and publish them all. They spiked the story, but I had already written my piece. So here it is.

In deciding what to do with Gray Powell, the Apple employee who accidentally left a secret prototype 4G iPhone in a California bar, Apple needs to figure out how much of the problem is due to an employee not following the rules, and how much of the problem is due to unclear, unrealistic, or just plain bad rules.

If Powell sneaked the phone out of the Apple building in a flagrant violation of the rules -- maybe he wanted to show it to a friend -- he should be disciplined, perhaps even fired. Some military installations have rules like that. If someone wants to take something classified out of a top secret military compound, he might have to secrete it on his person and deliberately sneak it past a guard who searches briefcases and purses. He might be committing a crime by doing so, by the way. Apple isn't the military, of course, but if their corporate security policy is that strict, it may very well have rules like that. And the only way to ensure rules are followed is by enforcing them, and that means severe disciplinary action against those who bypass the rules.

Even if Powell had authorization to take the phone out of Apple's labs -- presumably someone has to test drive the new toys sooner or later -- the corporate rules might have required him to pay attention to it at all times. We've all heard of military attachés who carry briefcases chained to their wrists. It's an extreme example, but demonstrates how a security policy can allow for objects to move around town -- or around the world -- without getting lost. Apple almost certainly doesn't have a policy as rigid as that, but its policy might explicitly prohibit Powell from taking that phone into a bar, putting it down on a counter, and participating in a beer tasting. Again, if Apple's rules and Powell's violation were both that clear, Apple should enforce them.

On the other hand, if Apple doesn't have clear-cut rules, if Powell wasn't prohibited from taking the phone out of his office, if engineers routinely ignore or bypass security rules and -- as long as nothing bad happens -- no one complains, then Apple needs to understand that the system is more to blame than the individual. Most corporate security policies have this sort of problem. Security is important, but it's quickly jettisoned when there's an important job to be done. A common example is passwords: people aren't supposed to share them, unless it's really important and they have to. Another example is guest accounts. And doors that are supposed to remain locked but rarely are. People routinely bypass security policies if they get in the way, and if no one complains, those policies are effectively meaningless.

Apple's unfortunately public security breach has given the company an opportunity to examine its policies and figure out how much of the problem is Powell and how much of it is the system he's a part of. Apple needs to fix its security problem, but only after it figures out where the problem is.

Posted on April 26, 2010 at 7:20 AM • 71 Comments

The Doghouse: Lock My PC

Lock My PC 4 has a master password.

EDITED TO ADD (4/26): In comments, people are reporting that the master password doesn't work. Near as I can tell, those are all recent downloads. So they either took out the feature or changed the password.

Posted on April 23, 2010 at 7:43 AM • 41 Comments

NIST on Protecting Personally Identifiable Information

Just published: Special Publication (SP) 800-122, "Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)."

It's 60 pages long; I haven't read it.

Posted on April 22, 2010 at 6:19 AM • 28 Comments

Security Fog

An odd burglary prevention tool:

If a burglar breaks in, the system floods the business with a dense fog similar to what's used in theaters and nightclubs. An intense strobe light blinds and disorients the crook.

[...]

Mazrouei said the cost to install the system starts at around $3,000.

Police point out that the system blinds interior security cameras as well as criminals. Officers who respond to a burglary also will not enter a building when they can't see who's inside. Local firefighters must be informed so they don't mistake the fog for smoke.

EDITED TO ADD (4/21): I blogged about the same thing in 2007, though that version was marketed to homeowners. It's interesting how much more negative my reaction is to fog as a home security device than as a security device to protect retail stock.

Posted on April 21, 2010 at 12:55 PM • 54 Comments

Young People, Privacy, and the Internet

There's a lot out there on this topic. I've already linked to danah boyd's excellent SXSW talk (and her work in general), my essay on privacy and control, and my talk -- "Security, Privacy, and the Generation Gap" -- which I've given four times in the past two months.

Last week, two new papers were published on the topic.

"Youth, Privacy, and Reputation" is a literature review published by Harvard's Berkman Center. It's long, but an excellent summary of what's out there on the topic:

Conclusions: The prevailing discourse around youth and privacy assumes that young people don't care about their privacy because they post so much personal information online. The implication is that posting personal information online puts them at risk from marketers, pedophiles, future employers, and so on. Thus, policy and technical solutions are proposed that presume that young people would not put personal information online if they understood the consequences. However, our review of the literature suggests that young people care deeply about privacy, particularly with regard to parents and teachers viewing personal information. Young people are heavily monitored at home, at school, and in public by a variety of surveillance technologies. Children and teenagers want private spaces for socialization, exploration, and experimentation, away from adult eyes. Posting personal information online is a way for youth to express themselves, connect with peers, increase popularity, and bond with friends and members of peer groups. Subsequently, young people want to be able to restrict information provided online in a nuanced and granular way.

Much popular writing (and some research) discusses young people, online technologies, and privacy in ways that do not reflect the realities of most children and teenagers’ lives. However, this provides rich opportunities for future research in this area. For instance, there are no studies of the impact of surveillance on young people -- at school, at home, or in public. Although we have cited several qualitative and ethnographic studies of young people’s privacy practices and attitudes, more work in this area is needed to fully understand similarities and differences in this age group, particularly within age cohorts, across socioeconomic classes, between genders, and so forth. Finally, given that the frequently-cited comparative surveys of young people and adult privacy practices and attitudes are quite old, new research would be invaluable. We look forward to new directions in research in this area.

"How Different Are Young Adults from Older Adults When it Comes to Information Privacy Attitudes & Policy?" from the University of California Berkeley, describes the results of a broad survey on privacy attitudes.

Conclusion: In policy circles, it has become almost a cliché to claim that young people do not care about privacy. Certainly there are many troubling anecdotes surrounding young individuals’ use of the internet, and of social networking sites in particular. Nevertheless, we found that in large proportions young adults do care about privacy. The data show that they and older adults are more alike on many privacy topics than they are different. We suggest, then, that young-adult Americans have an aspiration for increased privacy even while they participate in an online reality that is optimized to increase their revelation of personal data.

Public policy agendas should therefore not start with the proposition that young adults do not care about privacy and thus do not need regulations and other safeguards. Rather, policy discussions should acknowledge that the current business environment along with other factors sometimes encourages young adults to release personal data in order to enjoy social inclusion even while in their most rational moments they may espouse more conservative norms. Education may be useful. Although many young adults are exposed to educational programs about the internet, the focus of these programs is on personal safety from online predators and cyberbullying with little emphasis on information security and privacy. Young adults certainly are different from older adults when it comes to knowledge of privacy law. They are more likely to believe that the law protects them both online and off. This lack of knowledge in a tempting environment, rather than a cavalier lack of concern regarding privacy, may be an important reason large numbers of them engage with the digital world in a seemingly unconcerned manner.

But education alone is probably not enough for young adults to reach aspirational levels of privacy. They likely need multiple forms of help from various quarters of society, including perhaps the regulatory arena, to cope with the complex online currents that aim to contradict their best privacy instincts.

They're both worth reading for anyone interested in this topic.

Posted on April 20, 2010 at 1:50 PM • 35 Comments

The Effectiveness of Political Assassinations

This is an excellent read:

I wouldn't have believed you if you'd told me 20 years ago that America would someday be routinely firing missiles into countries it’s not at war with. For that matter, I wouldn't have believed you if you'd told me a few months ago that America would soon be plotting the assassination of an American citizen who lives abroad.

He goes on to discuss Obama's authorization of the assassination of Anwar al-Awlaki, an American living in Yemen. He speculates on whether or not this is illegal, but spends more time musing about the effectiveness of assassination, referring to a 2009 Security Studies paper by Jenna Jordan, "When Heads Roll: Assessing the Effectiveness of Leadership Decapitation": "She studied 298 attempts, from 1945 through 2004, to weaken or eliminate terrorist groups through 'leadership decapitation' -- eliminating people in senior positions."

From the paper's conclusion:

The data presented in this paper show that decapitation is not an effective counterterrorism strategy. While decapitation is effective in 17 percent of all cases, when compared to the overall rate of organizational decline, decapitated groups have a lower rate of decline than groups that have not had their leaders removed. The findings show that decapitation is more likely to have counterproductive effects in larger, older, religious, and separatist organizations. In these cases decapitation not only has a much lower rate of success, the marginal value is, in fact, negative. The data provide an essential test of decapitation’s value as a counterterrorism policy.

There are important policy implications that can be derived from this study of leadership decapitation. Leadership decapitation seems to be a misguided strategy, particularly given the nature of organizations being currently targeted. The rise of religious and separatist organizations indicates that decapitation will continue to be an ineffective means of reducing terrorist activity. It is essential that policy makers understand when decapitation is unlikely to be successful. Given these conditions, targeting bin Laden and other senior members of al Qaeda, independent of other measures, is not likely to result in organizational collapse. Finally, it is essential that policy makers look at trends in organizational decline. Understanding whether certain types of organizations are more prone to destabilization is an important first step in formulating successful counterterrorism policies.

Back to the article:

Particularly ominous are Jordan's findings about groups that, like Al Qaeda and the Taliban, are religious. The chances that a religious terrorist group will collapse in the wake of a decapitation strategy are 17 percent. Of course, that’s better than zero, but it turns out that the chances of such a group fading away when there's no decapitation are 33 percent. In other words, killing leaders of a religious terrorist group seems to increase the group's chances of survival from 67 percent to 83 percent.

Of course the usual caveat applies: It's hard to disentangle cause and effect. Maybe it's the more formidable terrorist groups that invite decapitation in the first place -- and, needless to say, formidable groups are good at survival. Still, the other interpretation of Jordan’s findings -- that decapitation just doesn't work, and in some cases is counterproductive -- does make sense when you think about it.

For starters, reflect on your personal workplace experience. When an executive leaves a company -- whether through retirement, relocation or death -- what happens? Exactly: He or she gets replaced. And about half the time (in my experience, at least) the successor is more capable than the predecessor. There's no reason to think things would work differently in a terrorist organization.

Maybe that's why newspapers keep reporting the death of a "high ranking Al Qaeda lieutenant"; it isn't that we keep killing the same guy, but rather that there's an endless stream of replacements. You're not going to end the terrorism business by putting individual terrorists out of business.

You might as well try to end the personal computer business by killing executives at Apple and Dell. Capitalism being the stubborn thing it is, new executives would fill the void, so long as there was a demand for computers.

Of course, if you did enough killing, you might make the job of computer executive so unattractive that companies had to pay more and more for ever-less-capable executives. But that's one difference between the computer business and the terrorism business. Terrorists aren’t in it for the money to begin with. They have less tangible incentives -- and some of these may be strengthened by targeted killings.

Read the whole thing.

I thought this comment, from former senator Gary Hart, was particularly good.

As a veteran of the Senate Select Committee to Investigate the Intelligence Services of the U.S. (so-called Church committee), we discovered at least five official plots to assassinate foreign leaders, including Fidel Castro with almost demented insistence. None of them worked, though the Diem brothers in Vietnam and Salvador Allende in Chile might argue otherwise. In no case did it work out well for the U.S. or its policy. Indeed, once exposed, as these things inevitably are, the ideals underlying our Constitution and the nation's prestige suffered incalculable damage. The issue is principle versus expediency. Principle always suffers when expediency becomes the rule. We simply cannot continue to sacrifice principle to fear.

Additional commentary from The Atlantic.

EDITED TO ADD (4/22): The Church Committee's report on foreign assassination plots.

EDITED TO ADD (5/13): Stratfor

Lt. Gen. Alexander and the U.S. Cyber Command

Lt. Gen. Keith Alexander, the current Director of NSA, has been nominated to head the US Cyber Command. Last week Alexander appeared before the Senate Armed Services Committee to answer questions.

The Chairman of the Armed Services Committee, Senator Carl Levin (D-Michigan), began by posing three scenarios to Lieutenant General Alexander:

Scenario 1. A traditional operation against an adversary, country "C". What rules of engagement would prevail to counter cyberattacks emanating from that country?

Answer: Under Title 10, an "execute" order approved by the President and the Joint Chiefs would presumably grant the theater commander full leeway to defend US military networks and to counterattack.

Title 10 is the legal framework under which the US military operates.

Scenario 2. Same as before but the cyberattacks emanate from a neutral third country.

Answer: Additional authority would have to be granted.

Scenario 3. "Assume you're in a peacetime setting now. All of a sudden we're hit with a major attack against the computers that manage the distribution of electric power in the United States. Now, the attacks appear to be coming from computers outside the United States, but they are being routed through computers that are owned by U.S. persons located in the United States, so the routers are in here, in the United States.

Now, how would CYBERCOM respond to that situation and under what authorities?"

Answer: That would be the responsibility of the Department of Homeland Security (DHS) and the FBI.

Alexander was repeatedly asked about the privacy and civil liberties impact of his new role, and gave answers that were, well, full of platitudes but essentially uninformative.

He also played up the threat, saying that U.S. military networks are seeing "hundreds of thousands of probes a day," whatever that means.

Prior to the hearing, Alexander answered written questions from the committee. Particularly interesting are his answers to questions 24 and 27.

24. Explaining Cybersecurity Plans to the American People

The majority of the funding for the multi-billion dollar Comprehensive National Cybersecurity Initiative (CNCI) is contained in the classified National Intelligence Program budget, which is reviewed and approved by the congressional intelligence committees. Almost all important aspects of the CNCI remain highly classified, including the implementation plan for the Einstein 3 intrusion detection and prevention system. It is widely perceived that the Department of Homeland Security is actually likely to simply extend the cyber security system that the NSA developed for DOD into the civilian and even the private sector for defense of critical infrastructure. DOD is creating a sub-unified Cyber Command with the Director of NSA as its Commander.

24a) In your view, are we risking creating the perception, at home and abroad, that the U.S. government’s dominant interests and objectives in cyberspace are intelligence- and military-related, and if so, is this a perception that we want to exist?

(U) No, I don’t believe we are risking creating this perception as long as we communicate clearly to the American people—and the world—regarding our interests and objectives.

24b) Based on your experience, are the American people likely to accept deployment of classified methods of monitoring electronic communications to defend the government and critical infrastructure without explaining basic aspects of how this monitoring will be conducted and how it may affect them?

(U) I believe the government and the American people expect both NSA and U.S. Cyber Command to support the cyber defense of our nation. Our support does not in any way suggest that we would be monitoring Americans.

(U) I don’t believe we should ask the public to accept blindly some unclear “classified” method. We need to be transparent and communicate to the American people about our objectives to address the national security threat to our nation—the nature of the threat, our overall approach, and the roles and responsibilities of each department and agency involved—including NSA and the Department of Defense. I am personally committed to this transparency, and I know that the Department of Defense, the Intelligence Community, and the rest of the Administration are as well. What needs to remain classified, and I believe that the American people will accept this as reasonable, are the specific foreign threats that we are looking for and how we identify them, and what actions we take when they are identified. For these areas, the American people have you, their elected representatives, to provide the appropriate oversight on their behalf.

(U) Remainder of answer provided in the classified supplement.

24c) What are your views as to the necessity and desirability of maintaining the current level of classification of the CNCI?

(U) In recent months, we have seen an increasing amount of information being shared by the Administration and the departments and agencies on the CNCI and cybersecurity in general, which I believe is consistent with our commitment to transparency. I expect that trend to continue, and personally believe and support this transparency as a foundational element of the dialogue that we need to have with the American people on cybersecurity.

[...]

27. Designing the Internet for Better Security

Cyber security experts emphasize that the Internet was not designed for security.

27a) How could the Internet be designed differently to provide much greater inherent security?

(U) The design of the Internet is—and will continue to evolve—based on technological advancements. These new technologies will enhance mobility and, if properly implemented, security. It is in the best interest of both government and industry to consider security more prominently in this evolving future Internet architecture. If confirmed, I look forward to working with this Committee, as well as industry leaders, academia, the services, and DOD agencies on these important concerns.

27b) Is it practical to consider adopting those modifications?

(U) Answer provided in the classified supplement.

27c) What would the impact be on privacy, both pro and con?

(U) Answer provided in the classified supplement.

The Electronic Privacy Information Center has filed a Freedom of Information Act request for that classified supplement. I doubt we'll get it, though.

The U.S. Cyber Command was announced by Secretary of Defense Robert Gates in June 2009. It's supposed to be operational this year.

Posted on April 19, 2010 at 1:26 PM • 30 Comments

Life Recorder

In 2006, writing about future threats to privacy, I described a life recorder:

A "life recorder" you can wear on your lapel that constantly records is still a few generations off: 200 gigabytes/year for audio and 700 gigabytes/year for video. It'll be sold as a security device, so that no one can attack you without being recorded.

I can't find a quote right now, but in talks I would say that this kind of technology would first be used by groups of people with diminished rights: children, soldiers, prisoners, and the non-lucid elderly.

It's been proposed:

With GPS capabilities built into phones that can be made ever smaller, and the ability for these phones to transmit both audio and video, isn't it time to think about a wearable device that could be used to call for help and accurately report what was happening?

[...]

The device could contain cameras and microphones that activate if the device was triggered to create evidence that could locate an attacker and cause them to flee, an alarm sound that could help locate the victim and also help scare off an attacker, and a set of sensors that could detect everything from sudden deceleration to an irregular heartbeat or compromised breathing.

Just one sentence on the security and privacy issues:

Indeed, privacy concerns need to be addressed so that stalkers and predators couldn't compromise the device.

Indeed.

Posted on April 19, 2010 at 6:30 AM • 81 Comments

Fake CCTV Cameras

CCTV cameras in Moscow have been accused of streaming prerecorded video instead of live images.

What I can't figure out is why. To me, it seems easier for the cameras to stream live video than prerecorded images.

Posted on April 16, 2010 at 12:46 PM • 27 Comments

Guns Painted to Look Like Toys

Last weekend I was in New York, and saw posters on the subways warning people about real guns painted to look like toys. And today I found these pictures from the Baltimore police department. Searching around, I found this 2006 article from New York.

I had no idea this was a thing.

Posted on April 16, 2010 at 6:28 AM • 80 Comments

Security for Implantable Medical Devices

Interesting study: "Patients, Pacemakers, and Implantable Defibrillators: Human Values and Security for Wireless Implantable Medical Devices," Tamara Denning, Alan Borning, Batya Friedman, Brian T. Gill, Tadayoshi Kohno, and William H. Maisel.

Abstract: Implantable medical devices (IMDs) improve patients' quality of life and help sustain their lives. In this study, we explore patient views and values regarding their devices to inform the design of computer security for wireless IMDs. We interviewed 13 individuals with implanted cardiac devices. Key questions concerned the evaluation of 8 mockups of IMD security systems. Our results suggest that some systems that are technically viable are nonetheless undesirable to patients. Patients called out a number of values that affected their attitudes towards the systems, including perceived security, safety, freedom from unwanted cultural and historical associations, and self-image. In our analysis, we extend the Value Sensitive Design value dams and flows technique in order to suggest multiple, complementary systems; in our discussion, we highlight some of the usability, regulatory, and economic complexities that arise from offering multiple options. We conclude by offering design guidelines for future security systems for IMDs.

Posted on April 15, 2010 at 1:55 PM • 10 Comments

Storing Cryptographic Keys with Invisible Tattoos

This idea, by Stuart Schechter at Microsoft Research, is -- I think -- clever:

Abstract: Implantable medical devices, such as implantable cardiac defibrillators and pacemakers, now use wireless communication protocols vulnerable to attacks that can physically harm patients. Security measures that impede emergency access by physicians could be equally devastating. We propose that access keys be written into patients' skin using ultraviolet-ink micropigmentation (invisible tattoos).

It certainly is a new way to look at the security threat model.

Posted on April 15, 2010 at 6:43 AM • 50 Comments

Matt Blaze Comments on his 15-Year-Old "Afterword"

Fifteen years ago, Matt Blaze wrote an Afterword to my book Applied Cryptography. Here are his current thoughts on that piece of writing.

Posted on April 14, 2010 at 1:30 PM • 31 Comments

Externalities and Identity Theft

Chris Hoofnagle has a new paper: "Internalizing Identity Theft." Basically, he shows that one of the problems is that lenders extend credit even when credit applications are sketchy.

From an article on the work:

Using a 2003 amendment to the Fair Credit Reporting Act that allows victims of ID theft to ask creditors for the fraudulent applications submitted in their names, Mr. Hoofnagle worked with a small sample of six ID theft victims and delved into how they were defrauded.

Of 16 applications presented by imposters to obtain credit or medical services, almost all were rife with errors that should have suggested fraud. Yet in all 16 cases, credit or services were granted anyway.

In the various cases described in the paper, which was published on Wednesday in The U.C.L.A. Journal of Law and Technology, one victim found four of six fraudulent applications submitted in her name contained the wrong address; two contained the wrong phone number and one the wrong date of birth.

Another victim discovered that his imposter was 70 pounds heavier, yet successfully masqueraded as him using what appeared to be his stolen driver's license, and in one case submitted an incorrect Social Security number.

This is a textbook example of an economic externality. Because most of the cost of identity theft is borne by the victim -- even with the lender reimbursing the victim if pushed to -- the lenders make the trade-off that's best for their business, and that means issuing credit even in marginal situations. They make more money that way.

If we want to reduce identity theft, the only solution is to internalize that externality. Either give victims the ability to sue lenders who issue credit in their names to identity thieves, or pass a law with penalties if lenders do this.

Among the ways to move the cost of the crime back to issuers of credit, Mr. Hoofnagle suggests that lenders contribute to a fund that will compensate victims for the loss of their time in resolving their ID theft problems.

Posted on April 14, 2010 at 6:57 AM • 66 Comments

Terrorist Attacks and Comparable Risks, Part 2

John Adams argues that our irrationality about comparative risks depends on the type of risk:

With "pure" voluntary risks, the risk itself, with its associated challenge and rush of adrenaline, is the reward. Most climbers on Mount Everest know that it is dangerous and willingly take the risk. With a voluntary, self-controlled, applied risk, such as driving, the reward is getting expeditiously from A to B. But the sense of control that drivers have over their fates appears to encourage a high level of tolerance of the risks involved.

Cycling from A to B (I write as a London cyclist) is done with a diminished sense of control over one's fate. This sense is supported by statistics that show that per kilometre travelled a cyclist is 14 times more likely to die than someone in a car. This is a good example of the importance of distinguishing between relative and absolute risk. Although 14 times greater, the absolute risk of cycling is still small -- 1 fatality in 25 million kilometres cycled; not even Lance Armstrong can begin to cover that distance in a lifetime of cycling. And numerous studies have demonstrated that the extra relative risk is more than offset by the health benefits of regular cycling; regular cyclists live longer.

While people may voluntarily board planes, buses and trains, the popular reaction to crashes in which passengers are passive victims suggests that the public demand a higher standard of safety in circumstances in which people voluntarily hand over control of their safety to pilots, or to bus or train drivers.

Risks imposed by nature -- such as those endured by those living on the San Andreas Fault or the slopes of Mount Etna -- or impersonal economic forces -- such as the vicissitudes of the global economy -- are placed in the middle of the scale. Reactions vary widely. They are usually seen as motiveless and are responded to fatalistically -- unless or until the threat appears imminent.

Imposed risks are less tolerated. Consider mobile phones. The risk associated with the handsets is either non-existent or very small. The risk associated with the base stations, measured by radiation dose, unless one is up the mast with an ear to the transmitter, is orders of magnitude less. Yet all round the world billions are queuing up to take the voluntary risk, and almost all the opposition is focussed on the base stations, which are seen by objectors as impositions. Because the radiation dose received from the handset increases with distance from the base station, to the extent that campaigns against the base stations are successful, they will increase the distance from the base station to the average handset, and thus the radiation dose. The base station risk, if it exists, might be labelled a benignly imposed risk; no one supposes that the phone company wishes to murder all those in the neighbourhood.

Less tolerated are risks whose imposers are perceived as motivated by profit or greed. In Europe, big biotech companies such as Monsanto are routinely denounced by environmentalist opponents for being more concerned with profits than the welfare of the environment or the consumers of its products.

Less tolerated still are malignly imposed risks -- crimes ranging from mugging to rape and murder. In most countries in the world the number of deaths on the road far exceeds the numbers of murders, but far more people are sent to jail for murder than for causing death by dangerous driving. In the United States in 2002, 16,000 people were murdered -- a statistic that evoked far more popular concern than the 42,000 killed on the road -- but far less than the 25 killed by terrorists.
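Adams's cycling figures from earlier in the essay are easy to verify; here's the arithmetic (the annual riding distance is my assumption):

```python
# Relative vs. absolute risk, using Adams's cycling figures.
cyclist_fatality_per_km = 1 / 25_000_000  # 1 death per 25 million km cycled
relative_risk = 14                        # cyclist vs. car occupant, per km
driver_fatality_per_km = cyclist_fatality_per_km / relative_risk

# A dedicated cyclist riding 5,000 km a year for 50 years:
lifetime_km = 5_000 * 50
lifetime_risk = cyclist_fatality_per_km * lifetime_km

print(f"Car occupant: 1 death per {1 / driver_fatality_per_km:,.0f} km")  # 350,000,000
print(f"Lifetime cycling fatality risk: {lifetime_risk:.1%}")             # 1.0%
```

Fourteen times a very small number is still a very small number -- which is exactly Adams's point about distinguishing relative from absolute risk.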

This isn't a new result, but it's vital to understand how people react to different risks.

Posted on April 13, 2010 at 1:18 PM • 12 Comments

Terrorist Attacks and Comparable Risks, Part 1

Nice analysis by John Mueller and Mark G. Stewart:

There is a general agreement about risk, then, in the established regulatory practices of several developed countries: risks are deemed unacceptable if the annual fatality risk is higher than 1 in 10,000 or perhaps higher than 1 in 100,000 and acceptable if the figure is lower than 1 in 1 million or 1 in 2 million. Between these two ranges is an area in which risk might be considered "tolerable."

These established considerations are designed to provide a viable, if somewhat rough, guideline for public policy. In all cases, measures and regulations intended to reduce risk must satisfy essential cost-benefit considerations. Clearly, hazards that fall in the unacceptable range should command the most attention and resources. Those in the tolerable range may also warrant consideration -- but since they are less urgent, they should be combated with relatively inexpensive measures. Those hazards in the acceptable range are of little, or even negligible, concern, so precautions to reduce their risks even further would scarcely be worth pursuing unless they are remarkably inexpensive.

[...]

As can be seen, annual terrorism fatality risks, particularly for areas outside of war zones, are less than one in one million and therefore generally lie within the range regulators deem safe or acceptable, requiring no further regulations, particularly those likely to be expensive. They are similar to the risks of using home appliances (200 deaths per year in the United States) or of commercial aviation (103 deaths per year). Compared with dying at the hands of a terrorist, Americans are twice as likely to perish in a natural disaster and nearly a thousand times more likely to be killed in some type of accident. The same general conclusion holds when the full damage inflicted by terrorists -- not only the loss of life but direct and indirect economic costs -- is aggregated. As a hazard, terrorism, at least outside of war zones, does not inflict enough damage to justify substantially increasing expenditures to deal with it.

[...]

To border on becoming unacceptable by established risk conventions -- that is, to reach an annual fatality risk of 1 in 100,000 -- the number of fatalities from terrorist attacks in the United States and Canada would have to increase 35-fold; in Great Britain (excluding Northern Ireland), more than 50-fold; and in Australia, more than 70-fold. For the United States, this would mean experiencing attacks on the scale of 9/11 at least once a year, or 18 Oklahoma City bombings every year.
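The "9/11 once a year" equivalence falls out of simple arithmetic (the population figure is my round number, not from the paper):

```python
# What a 1-in-100,000 annual fatality risk means in absolute terms.
us_population = 310_000_000   # rough 2010 figure
threshold = 1 / 100_000       # where risks border on "unacceptable"

deaths_per_year = us_population * threshold
print(f"Fatalities/year at the threshold: {deaths_per_year:,.0f}")          # 3,100

# 9/11 killed roughly 3,000 people; the Oklahoma City bombing, 168.
print(f"9/11-scale attacks per year: {deaths_per_year / 3_000:.1f}")        # ~1.0
print(f"Oklahoma City-scale attacks per year: {deaths_per_year / 168:.0f}") # ~18
```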

Posted on April 13, 2010 at 6:07 AM • 29 Comments

Man-in-the-Middle Attacks Against SSL

Says Matt Blaze:

A decade ago, I observed that commercial certificate authorities protect you from anyone from whom they are unwilling to take money. That turns out to be wrong; they don't even do that much.

Scary research by Christopher Soghoian and Sid Stamm:

Abstract: This paper introduces a new attack, the compelled certificate creation attack, in which government agencies compel a certificate authority to issue false SSL certificates that are then used by intelligence agencies to covertly intercept and hijack individuals' secure Web-based communications. We reveal alarming evidence that suggests that this attack is in active use. Finally, we introduce a lightweight browser add-on that detects and thwarts such attacks.

Even more scary, Soghoian and Stamm found that hardware to perform this attack is being produced and sold:

At a recent wiretapping convention, however, security researcher Chris Soghoian discovered that a small company was marketing internet spying boxes to the feds. The boxes were designed to intercept those communications -- without breaking the encryption -- by using forged security certificates, instead of the real ones that websites use to verify secure connections. To use the appliance, the government would need to acquire a forged certificate from any one of more than 100 trusted Certificate Authorities.

[...]

The company in question is known as Packet Forensics.... According to the flyer: "Users have the ability to import a copy of any legitimate key they obtain (potentially by court order) or they can generate 'look-alike' keys designed to give the subject a false sense of confidence in its authenticity." The product is recommended to government investigators, saying "IP communication dictates the need to examine encrypted traffic at will." And, "Your investigative staff will collect its best evidence while users are lulled into a false sense of security afforded by web, e-mail or VOIP encryption."

Matt Blaze has the best analysis. Read his whole commentary; this is just the ending:

It's worth pointing out that, from the perspective of a law enforcement or intelligence agency, this sort of surveillance is far from ideal. A central requirement for most government wiretapping (mandated, for example, in the CALEA standards for telephone interception) is that surveillance be undetectable. But issuing a bogus web certificate carries with it the risk of detection by the target, either in real-time or after the fact, especially if it's for a web site already visited. Although current browsers don't ordinarily detect unusual or suspiciously changed certificates, there's no fundamental reason they couldn't (and the Soghoian/Stamm paper proposes a Firefox plugin to do just that). In any case, there's no reliable way for the wiretapper to know in advance whether the target will be alerted by a browser that scrutinizes new certificates.

Also, it's not clear how web interception would be particularly useful for many of the most common law enforcement investigative scenarios. If a suspect is buying books or making hotel reservations online, it's usually a simple (and legally relatively uncomplicated) matter to just ask the vendor about the transaction, no wiretapping required. This suggests that these products may be aimed less at law enforcement than at national intelligence agencies, who might be reluctant (or unable) to obtain overt cooperation from web site operators (who may be located abroad).
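The detection Blaze describes -- a browser that notices when a site's certificate suddenly changes -- is simple to sketch. Here's a toy fingerprint-pinning check in Python; it illustrates the idea behind the Soghoian/Stamm add-on but is not their code:

```python
# Toy certificate-change detector: pin a site's certificate fingerprint
# on first visit and warn when it later changes. A sketch of the idea,
# not the actual Soghoian/Stamm add-on.
import hashlib, json, socket, ssl

PIN_FILE = "cert_pins.json"

def cert_fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 fingerprint of the server's DER-encoded certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check(host: str) -> None:
    try:
        with open(PIN_FILE) as f:
            pins = json.load(f)
    except FileNotFoundError:
        pins = {}
    fp = cert_fingerprint(host)
    if host not in pins:
        print(f"{host}: pinning {fp[:16]}...")
        pins[host] = fp
    elif pins[host] != fp:
        print(f"{host}: WARNING: certificate changed -- possible MITM (or a routine renewal)")
    else:
        print(f"{host}: certificate matches pin")
    with open(PIN_FILE, "w") as f:
        json.dump(pins, f)

check("www.example.com")
```

The obvious limitation -- and part of why browsers don't do this by default -- is that a legitimate certificate renewal looks exactly like an attack, so every change requires human judgment.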

Posted on April 12, 2010 at 1:32 PM • 73 Comments

Makeup to Fool Face Recognition Software

An NYU student has been reverse-engineering facial recognition algorithms to devise makeup patterns to confuse face recognition software.

Posted on April 12, 2010 at 6:08 AM • 36 Comments

Schneier on "Security, Privacy, and the Generation Gap"

Last month at the RSA Conference, I gave a talk titled "Security, Privacy, and the Generation Gap." It was pretty good, but it was the first time I gave that talk in front of a large audience -- and its newness showed.

Last week, I gave the same talk again, at the CACR Higher Education Security Summit at Indiana University. It was much, much better the second time around, and there's a video available.

Posted on April 9, 2010 at 12:55 PM • 12 Comments

Cryptanalysis of the DECT Standard Cipher

New cryptanalysis of the proprietary encryption algorithm used in the Digital Enhanced Cordless Telecommunications (DECT) standard for cordless phones.

Abstract. The DECT Standard Cipher (DSC) is a proprietary 64-bit stream cipher based on irregularly clocked LFSRs and a non-linear output combiner. The cipher is meant to provide confidentiality for cordless telephony. This paper illustrates how the DSC was reverse-engineered from a hardware implementation using custom firmware and information on the structure of the cipher gathered from a patent. Beyond disclosing the DSC, the paper proposes a practical attack against DSC that recovers the secret key from 2^15 keystreams on a standard PC with a success rate of 50% within hours; somewhat faster when a CUDA graphics adapter is available.
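The DSC itself is specified in the paper, but its basic building block is easy to demonstrate. Below is a toy stream cipher built from a single textbook 16-bit LFSR -- emphatically not the DSC, which combines several irregularly clocked registers through a non-linear output combiner:

```python
# Toy stream cipher built from one 16-bit Fibonacci LFSR (the classic
# maximal-length register with taps 16, 14, 13, 11). NOT the DSC.

def lfsr_bits(state: int):
    """Yield one keystream bit per step from the 16-bit LFSR."""
    while True:
        out = state & 1
        feedback = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (feedback << 15)
        yield out

def keystream_xor(data: bytes, key: int) -> bytes:
    """XOR data with the LFSR keystream; the same call encrypts and decrypts."""
    bits = lfsr_bits(key)
    out = bytearray()
    for byte in data:
        ks = 0
        for _ in range(8):
            ks = (ks << 1) | next(bits)
        out.append(byte ^ ks)
    return bytes(out)

ciphertext = keystream_xor(b"hello, handset", key=0xACE1)
assert keystream_xor(ciphertext, key=0xACE1) == b"hello, handset"
print(ciphertext.hex())
```

A single regularly clocked LFSR like this one is linear and falls to simple algebra; the DSC's irregular clocking and non-linear combiner are what make the published attack need thousands of keystreams and hours of computation.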

News.

Posted on April 8, 2010 at 1:05 PM • 23 Comments

The Effectiveness of Air Marshals

Air marshals are being arrested faster than air marshals are making arrests.

Actually, there have been many more arrests of Federal air marshals than that story reported, quite a few for felony offenses. In fact, more air marshals have been arrested than the number of people arrested by air marshals.

We now have approximately 4,000 air marshals in the Federal Air Marshal Service, yet they have made an average of just 4.2 arrests a year since 2001. This comes out to an average of about one arrest a year per 1,000 employees.

Now, let me make that clear. Their thousands of employees are not making one arrest per year each. They are averaging slightly over four arrests each year by the entire agency. In other words, we are spending approximately $200 million per arrest. Let me repeat that: we are spending approximately $200 million per arrest.
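The arithmetic behind those figures is easy to check (the annual budget is my round assumption; the $200-million-per-arrest claim implies a budget around $840 million a year):

```python
# Reconciling the figures in the speech.
marshals = 4_000
arrests_per_year = 4.2
annual_budget = 850_000_000   # assumed round number, implied by the speech

print(f"Arrests per 1,000 employees per year: {arrests_per_year / marshals * 1_000:.2f}")  # 1.05
print(f"Cost per arrest: ${annual_budget / arrests_per_year:,.0f}")  # ~$202 million
```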

Posted on April 8, 2010 at 6:22 AM • 139 Comments

Cryptography Broken on American Military Attack Video

Any ideas?

At a news conference at the National Press Club, WikiLeaks said it had acquired the video from whistle-blowers in the military and viewed it after breaking the encryption code. WikiLeaks released the full 38-minute video as well as a 17-minute edited version.

And this quote from the WikiLeaks Twitter feed on Feb 20th:

Finally cracked the encryption to US military video in which journalists, among others, are shot. Thanks to all who donated $/CPUs.

Surely this isn't NSA-level encryption. But what is it?

Note that this is intended to be a discussion about the cryptanalysis, not about the geopolitics of the event.

EDITED TO ADD (4/13): It was a dictionary attack.
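For readers unfamiliar with the technique: a dictionary attack doesn't break the cipher at all. It derives a trial key from each entry in a list of likely passphrases and tests whether the result decrypts to something plausible. A minimal sketch -- the cipher, key-derivation parameters, and header check are all my assumptions, since the actual file format was never published:

```python
# Sketch of a dictionary attack on a passphrase-encrypted file.
# AES-CBC, PBKDF2, and the MP4 header check are illustrative
# assumptions; the real WikiLeaks file format is not public.
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def try_passphrase(passphrase: bytes, salt: bytes, iv: bytes, first_block: bytes) -> bool:
    """Derive a key from the candidate passphrase and test-decrypt one block."""
    key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 10_000, dklen=32)
    plain = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor().update(first_block)
    # A video file should start with recognizable magic bytes.
    return b"ftyp" in plain[:16]

def dictionary_attack(wordlist_path: str, salt: bytes, iv: bytes, first_block: bytes):
    with open(wordlist_path, "rb") as f:
        for line in f:
            candidate = line.strip()
            if try_passphrase(candidate, salt, iv, first_block):
                return candidate
    return None
```

Each candidate costs a full key derivation, which is why WikiLeaks solicited donated CPU time.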

Posted on April 7, 2010 at 1:37 PM • 57 Comments

New York and the Moscow Subway Bombing

People intent on preventing a Moscow-style terrorist attack against the New York subway system are proposing a range of expensive new underground security measures, some temporary and some permanent.

They should save their money -- and instead put every penny they're considering pouring into new technologies toward intelligence and old-fashioned policing.

Intensifying security at specific stations only works against terrorists who aren't smart enough to move to another station. Cameras are useful only if all the stars align: The terrorists happen to walk into the frame, the video feeds are being watched in real time and the police can respond quickly enough to be effective. They're much more useful after an attack, to figure out who pulled it off.

Installing biological and chemical detectors requires similarly implausible luck -- plus a terrorist plot that includes the specific biological or chemical agent that is being detected.

What all these misguided reactions have in common is that they're based on "movie-plot threats": overly specific attack scenarios. They fill our imagination vividly, in full color with rich detail. Before long, we're envisioning an entire story line, with or without Bruce Willis saving the day. And we're scared.

It's not that movie-plot threats are not worth worrying about. It's that each one -- Moscow's subway attack, the bombing of the Oklahoma City federal building, etc. -- is too specific. These threats are infinite, and the bad guys can easily switch among them.

New York has thousands of possible targets, and there are dozens of possible tactics. Implementing security against movie-plot threats is only effective if we correctly guess which specific threat to protect against. That's unlikely.

A far better strategy is to spend our limited counterterrorism resources on investigation and intelligence -- and on emergency response. These measures don't hinge on any specific threat; they don't require us to guess the tactic or target correctly. They're effective in a variety of circumstances, even nonterrorist ones.

The result may not be flashy or outwardly reassuring -- as are pricey new scanners in airports. But the strategy will save more lives.

The 2006 arrest of the liquid bombers -- who wanted to detonate liquid explosives to be brought onboard airliners traveling from England to North America -- serves as an excellent example. The plotters were arrested in their London apartments, and their attack was foiled before they ever got to the airport.

It didn't matter if they were using liquids or solids or gases. It didn't even matter if they were targeting airports or shopping malls or theaters. It was a straightforward, although hardly simple, matter of following leads.

Gimmicky security measures are tempting -- but they're distractions we can't afford. The Christmas Day bomber chose his tactic because it would circumvent last year's security measures, and the next attacker will choose his tactic -- and target -- according to similar criteria. Spend money on cameras and guards in the subways, and the terrorists will simply modify their plot to render those countermeasures ineffective.

Humans are a species of storytellers, and the Moscow story has obvious parallels in New York. When we read the word "subway," we can't help but think about the system we use every day. This is a natural response, but it doesn't make for good public policy. We'd all be safer if we rose above the simple parallels and the need to calm our fears with expensive and seductive new technologies -- and countered the threat the smart way.

This essay originally appeared in the New York Daily News.

Posted on April 7, 2010 at 8:52 AM • 31 Comments

Privacy and Control

In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google CEO Eric Schmidt expressed a similar sentiment. Add Scott McNealy's and Larry Ellison's comments from a few years earlier, and you've got a whole lot of tech CEOs proclaiming the death of privacy--especially when it comes to young people.

It's just not true. People, including the younger generation, still care about privacy. Yes, they're far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr and having intimate conversations on Twitter. But they take steps to protect their privacy and vociferously complain when they feel it has been violated. They're not technically sophisticated about privacy and make mistakes all the time, but that's mostly the fault of companies and Web sites that try to manipulate them for financial gain.

To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it's no longer private. But that's not how privacy works, and it's not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone's e-mail conversations--your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.

People's relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they're thinking about it, and less likely to if they're thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control--and privacy--users have over their data. We all give up our privacy when we're not thinking about it.

Group behavior matters; we're more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it's gone. Even if we don't have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.

Here's the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users' information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information shared in more ways, more publicly means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.

You can see these forces in play with Google's launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but--and Google knew this--changing options is hard and most people accept the defaults, especially when they're trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.

Facebook tried a similar control grab when it changed people's default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.

Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no real privacy change, in that users couldn't see any information they couldn't see before; the change was in control--or arguably, just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.

With all this privacy erosion, those CEOs may actually be right--but only because they're working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that's where we socialize these days, and it's hard--especially for the younger generation--to opt out. As long as privacy isn't salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There's no malice on anyone's part here; it's just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can't rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.

This essay originally appeared on Forbes.com.

EDITED TO ADD (4/13): Google responds. And another essay on the topic.

Posted on April 6, 2010 at 7:47 AM64 Comments

Detecting Being Watched

This seems like science fiction to me:

The camera uses the same "red eye" effect seen in camera flashes, projecting light hundreds of meters and allowing it to identify binoculars, sniper scopes, cameras and even human eyeballs that are staring at you....
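
The underlying trick is retroreflection: lenses and eyeballs pointed at you bounce light almost straight back at its source, the same way eyes glow in flash photographs. Here's a minimal sketch of how such a detector might work -- my guess, not anything described in the article -- using simple frame differencing with OpenCV. The capture_frame() and set_illuminator() hooks are hypothetical stand-ins for whatever hardware interface is actually used, and a real system would have to cope with motion, range and ambient light:

    # Sketch: spot retroreflections by comparing a frame taken with an
    # infrared illuminator on against one taken with it off.
    import cv2

    def detect_watchers(capture_frame, set_illuminator, threshold=60):
        # Grab a lit frame and a dark frame; optics aimed back at the
        # camera return the illuminator's light as small bright spots.
        set_illuminator(True)
        lit = cv2.cvtColor(capture_frame(), cv2.COLOR_BGR2GRAY)
        set_illuminator(False)
        dark = cv2.cvtColor(capture_frame(), cv2.COLOR_BGR2GRAY)

        # Difference the frames and keep only the strong highlights.
        diff = cv2.absdiff(lit, dark)
        _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

        # Each remaining blob is a candidate watcher: a scope, a camera
        # lens, or an eyeball staring back.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours]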

Posted on April 5, 2010 at 1:30 PM51 Comments

DHS Cybersecurity Awareness Campaign Challenge

This is a little hokey, but better them than the NSA:

The National Cybersecurity Awareness Campaign Challenge Competition is designed to solicit ideas from industry and individuals alike on how best we can clearly and comprehensively discuss cybersecurity with the American public.

Key areas that should be factored into the competition are the following:

  • Teamwork
  • Ability to quantify the distribution method
  • Ability to quantify the receipt of message
  • Solution may under no circumstance create spam
  • Use of Web 2.0 Technology
  • Feedback mechanism
  • List building
  • Privacy protection
  • Repeatability
  • Transparency
  • Message

It should engage the Private Sector and Industry leaders to develop their own campaign strategy and metrics to track how to get a unified cyber security message out to the American public.

Deadline is end of April, if you want to submit something. "Winners of the Challenge will be invited to an event in Washington D.C. in late May or early June." I wonder what kind of event.

Posted on April 2, 2010 at 6:14 AM18 Comments

Explosive Breast Implants -- Not an April Fool's Joke

Is MI5 playing a joke on us?

Female homicide bombers are being fitted with exploding breast implants which are almost impossible to detect, British spies have reportedly discovered.

[...]

MI5 has also discovered that extremists are inserting the explosives into the buttocks of some male bombers.

"Women suicide bombers recruited by Al Qaeda are known to have had the explosives inserted in their breasts under techniques similar to breast enhancing surgery," Terrorist expert Joseph Farah claims.

They're "known to have" this? I doubt it. More likely, they could be:

Radical Islamist plastic surgeons could be carrying out the implant operations in lawless areas of Pakistan, security sources are said to have warned.

They also could be having tea with their families. They could be building killer robots with lasers shooting out of their eyes.

I love the poor Photoshop job in this article from The Sun.

Perhaps we should just give up. When this sort of hysterical nonsense becomes an actual news story, the terrorists have won.

Posted on April 1, 2010 at 1:33 PM69 Comments

Fifth Annual Movie-Plot Threat Contest

Once upon a time, men and women throughout the land lived in fear. This caused them to do foolish things that made them feel better temporarily, but didn't make them any safer. Gradually, some people became less fearful, and less tolerant of the foolish things they were told to submit to. The lords who ruled the land tried to revive the fear, but with less and less success. Sensible men and women from all over the land were peering behind the curtain, and seeing that the emperor had no clothes.

Thus it came to pass that the lords decided to appeal to the children. If the children could be made more fearful, then their fathers and mothers might also become more fearful, and the lords would remain lords, and all would be right with the order of things. The children would grow up in fear, and thus become accustomed to doing what the lords said, further allowing the lords to remain lords. But to do this, the lords realized they needed Frightful Fables and Fear-Mongering Fairytales to tell the children at bedtime.

Your task, ye Weavers of Tales, is to create a fable or fairytale suitable for instilling the appropriate level of fear in children so they grow up appreciating all the lords do to protect them.

That's this year's contest. Make your submissions short and sweet: 400 words or less. Imagine that someone will be illustrating this story for young children. Submit your entry in comments; deadline is May 1. I'll choose several semifinalists, and then you all will vote for the winner. The prize is a signed copy of my latest book, Cryptography Engineering. And if anyone seriously wants to illustrate this, please contact me directly -- or just go for it and post a link.

Thank you to loyal reader -- and frequent reader of my draft essays -- "grenouille," who suggested this year's contest.

And good luck!

The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner. The Fourth Movie-Plot Threat Contest rules and winner.

EDITED TO ADD (4/1): I'm looking for entries in the form of a fairytale or fable. Plot summaries and descriptions won't count as entries, although you are welcome to post them and comment on them -- and use them if others post them.

EDITED TO ADD (5/15): Voting is now open here.

Posted on April 1, 2010 at 6:24 AM110 Comments
