Entries Tagged "economics of security"


The Risk of Anthrax

Some reality to counter the hype.

The Bottom Line

While there has been much consternation and alarm-raising over the potential for widespread proliferation of biological weapons and the possible use of such weapons on a massive scale, there are significant constraints on such designs. The current dearth of substantial biological weapons programs and arsenals among governments worldwide, and the even smaller number of cases in which such weapons were actually used, seem to belie—or at least bring into question—the intense concern about such programs.

While we would like to believe that countries such as the United States, the United Kingdom and Russia have halted their biological warfare programs for some noble ideological or humanitarian reason, we simply can’t. If biological weapons were in practice as effective as some would lead us to believe, these states would surely maintain stockpiles of them, just as they have maintained their nuclear weapons programs. Biological weapons programs were abandoned because they proved to be not as effective as advertised and because conventional munitions proved to provide more bang for the buck.

Posted on August 13, 2008 at 2:29 PM

Memo to the Next President

Obama has a cyber security plan.

It’s basically what you would expect: Appoint a national cyber security advisor, invest in math and science education, establish standards for critical infrastructure, spend money on enforcement, establish national standards for securing personal data and data-breach disclosure, and work with industry and academia to develop a bunch of needed technologies.

I could comment on the plan, but with security the devil is always in the details—and, of course, at this point there are few details. But since he brought up the topic—McCain supposedly is “working on the issues” as well—I have three pieces of policy advice for the next president, whoever he is. They’re too detailed for campaign speeches or even position papers, but they’re essential for improving information security in our society. Actually, they apply to national security in general. And they’re things only government can do.

One, use your immense buying power to improve the security of commercial products and services. One property of technological products is that most of the cost is in the development of the product rather than the production. Think software: The first copy costs millions, but the second copy is free.

You have to secure your own government networks, military and civilian. You have to buy computers for all your government employees. Consolidate those contracts, and start putting explicit security requirements into the RFPs. You have the buying power to get your vendors to make serious security improvements in the products and services they sell to the government, and then we all benefit because they’ll include those improvements in the same products and services they sell to the rest of us. We’re all safer if information technology is more secure, even though the bad guys can use it, too.

Two, legislate results and not methodologies. There are a lot of areas in security where you need to pass laws, where the security externalities are such that the market fails to provide adequate security. For example, software companies that sell insecure products are exploiting an externality just as much as chemical plants that dump waste into the river. But a bad law is worse than no law. A law requiring companies to secure personal data is good; a law specifying what technologies they should use to do so is not. Mandating liability for software failures is good; detailing how is not. Legislate for the results you want and implement the appropriate penalties; let the market figure out how—that’s what markets are good at.

Three, broadly invest in research. Basic research is risky; it doesn’t always pay off. That’s why companies have stopped funding it. Bell Labs is gone because nobody could afford it after the AT&T breakup, but the root cause was a desire for higher efficiency and short-term profitability—not unreasonable in an unregulated business. Government research can be used to balance that by funding long-term research.

Spread those research dollars wide. Lately, most research money has been redirected through DARPA to near-term military-related projects; that’s not good. Keep the earmark-happy Congress from dictating how the money is spent. Let the NSF, NIH and other funding agencies decide how to spend the money and don’t try to micromanage. Give the national laboratories lots of freedom, too. Yes, some research will sound silly to a layman. But you can’t predict what will be useful for what, and if funding is really peer-reviewed, the average results will be much better. Compared to corporate tax breaks and other subsidies, this is chump change.

If our research capability is to remain vibrant, we need more science and math students with decent elementary and high school preparation. The declining interest stems partly from the perception that scientists don’t get rich like lawyers and dentists and stockbrokers, but also from a sense that science isn’t valued in a country full of creationists. One way the president can help is by trusting scientific advisers and not overruling them for political reasons.

Oh, and get rid of those post-9/11 restrictions on student visas that are causing so many top students to do their graduate work in Canada, Europe and Asia instead of in the United States. Those restrictions will hurt us immensely in the long run.

Those are the three big ones; the rest is in the details. And it’s the details that matter. There are lots of serious issues that you’re going to have to tackle: data privacy, data sharing, data mining, government eavesdropping, government databases, use of Social Security numbers as identifiers, and so on. It’s not enough to get the broad policy goals right. You can have good intentions and enact a good law, and have the whole thing completely gutted by two sentences sneaked in during rulemaking by some lobbyist.

Security is both subtle and complex, and—unfortunately—doesn’t readily lend itself to normal legislative processes. You’re used to finding consensus, but security by consensus rarely works. On the internet, security standards are much worse when they’re developed by a consensus body, and much better when someone just does them. This doesn’t always work—a lot of crap security has come from companies that have “just done it”—but nothing but mediocre standards come from consensus bodies. The point is that you won’t get good security without pissing someone off: The information broker industry, the voting machine industry, the telcos. The normal legislative process makes it hard to get security right, which is why I don’t have much optimism about what you can get done.

And if you’re going to appoint a cyber security czar, you have to give him actual budgetary authority. Otherwise he won’t be able to get anything done, either.

This essay originally appeared on Wired.com.

Posted on August 12, 2008 at 6:36 AM

Italians Use Soldiers to Prevent Crime

Interesting:

Soldiers were deployed throughout Italy on Monday to embassies, subway and railway stations, as part of broader government measures to fight violent crime here for which illegal immigrants are broadly blamed.

[…]

The conservative government of Silvio Berlusconi won elections in April while promising to crack down on petty crime and illegal immigrants. The new patrols of soldiers, who are not empowered to make arrests, do not seem aimed only at illegal immigrants, though the patrols were deployed to centers where illegal immigrants are housed.

“Security is something concrete,” Mr. La Russa said on Monday. The troops, he said, will be a “deterrent to criminals.”

That reminds me of one of my favorite logical fallacies: “We must do something. This is something. Therefore, we must do it.” It does seem largely to be a demonstration of “doing something” by the Berlusconi government. The legitimate police, of course, think it’s a terrible idea.

“You need to be specially trained to carry out some kinds of controls,” said Nicola Tanzi, the secretary of a trade union that represents Italian police officers. “Soldiers just aren’t qualified.”

He also questioned whether the $93.6 million that will be spent for the extra deployment, called Operation Safe Streets, might not have been better used to increase the budgets for Italy’s police and military.

Posted on August 5, 2008 at 6:36 AM

Cost/Benefit Analysis of Airline Security

This report, “Assessing the risks, costs and benefits of United States aviation security measures” by Mark Stewart and John Mueller, is excellent reading:

The United States Office of Management and Budget has recommended the use of cost-benefit assessment for all proposed federal regulations. Since 9/11 government agencies in Australia, United States, Canada, Europe and elsewhere have devoted much effort and expenditure to attempt to ensure that a 9/11 type attack involving hijacked aircraft is not repeated. This effort has come at considerable cost, running in excess of US$6 billion per year for the United States Transportation Security Administration (TSA) alone. In particular, significant expenditure has been dedicated to two aviation security measures aimed at preventing terrorists from hijacking and crashing an aircraft into buildings and other infrastructure: (i) Hardened cockpit doors and (ii) Federal Air Marshal Service. These two security measures cost the United States government and the airlines nearly $1 billion per year. This paper seeks to discover whether aviation security measures are cost-effective by considering their effectiveness, their cost and expected lives saved as a result of such expenditure. An assessment of the Federal Air Marshal Service suggests that the annual cost is $180 million per life saved. This is greatly in excess of the regulatory safety goal of $1-$10 million per life saved. As such, the air marshal program would seem to fail a cost-benefit analysis. In addition, the opportunity cost of these expenditures is considerable, and it is highly likely that far more lives would have been saved if the money had been invested instead in a wide range of more cost-effective risk mitigation programs. On the other hand, hardening of cockpit doors has an annual cost of only $800,000 per life saved, showing that this is a cost-effective security measure.

From the body:

Hardening cockpit doors has the highest risk reduction (16.67%) at lowest additional cost of $40 million. On the other hand, the Federal Air Marshal Service costs $900 million pa but reduces risk by only 1.67%. The Federal Air Marshal Service may be more cost-effective if it is able to show extra benefit over the cheaper measure of hardening cockpit doors. However, the Federal Air Marshal Service seems to have significantly less benefit which means that hardening cockpit doors is the more cost-effective measure.
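The arithmetic behind those two figures can be reproduced with a short script. Note that the ~300 expected-lives-at-risk baseline below is an assumption inferred from the quoted numbers, not a figure stated in the excerpt:

```python
# Back-of-the-envelope check of the report's cost-per-life-saved figures.
# BASELINE_LIVES_AT_RISK is an assumed value, chosen because it makes the
# quoted costs and risk reductions mutually consistent.
BASELINE_LIVES_AT_RISK = 300  # expected lives at risk per year (assumption)

def cost_per_life_saved(annual_cost, risk_reduction):
    """Annual cost divided by the expected number of lives saved per year."""
    lives_saved = risk_reduction * BASELINE_LIVES_AT_RISK
    return annual_cost / lives_saved

doors = cost_per_life_saved(40e6, 0.1667)      # hardened cockpit doors
marshals = cost_per_life_saved(900e6, 0.0167)  # Federal Air Marshal Service

print(f"Cockpit doors: ${doors:,.0f} per life saved")    # roughly $800,000
print(f"Air marshals:  ${marshals:,.0f} per life saved") # roughly $180 million
```

Dividing cost by expected lives saved is exactly the comparison the report makes: the two measures differ by more than two orders of magnitude.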

Cost-benefit analysis is definitely the way to look at these security measures. It’s hard for people to do, because it requires putting a dollar value on a human life—something we can’t possibly do with our own. But as a society, it is something we do again and again: when we raise or lower speed limits, when we ban a certain pesticide, when we enact building codes. Insurance companies do it all the time. We do it implicitly, because we can’t talk about it explicitly. I think there is considerable value in talking about it.

(Note the table on page 5 of the report, which lists the cost per lives saved for a variety of safety and security measures.)

The final paper will eventually be published in the Journal of Transportation Security. I never even knew there was such a thing.

EDITED TO ADD (8/13): New York Times op-ed on the subject.

Posted on July 21, 2008 at 5:53 AM

Homeland Security Cost-Benefit Analysis

This is an excellent paper by Ohio State political science professor John Mueller. Titled “The Quixotic Quest for Invulnerability: Assessing the Costs, Benefits, and Probabilities of Protecting the Homeland,” it lays out some common-sense premises and policy implications.

The premises:

1. The number of potential terrorist targets is essentially infinite.

2. The probability that any individual target will be attacked is essentially zero.

3. If one potential target happens to enjoy a degree of protection, the agile terrorist usually can readily move on to another one.

4. Most targets are “vulnerable” in that it is not very difficult to damage them, but invulnerable in that they can be rebuilt in fairly short order and at tolerable expense.

5. It is essentially impossible to make a very wide variety of potential terrorist targets invulnerable except by completely closing them down.

The policy implications:

1. Any protective policy should be compared to a “null case”: do nothing, and use the money saved to rebuild and to compensate any victims.

2. Abandon any effort to imagine a terrorist target list.

3. Consider negative effects of protection measures: not only direct cost, but inconvenience, enhancement of fear, negative economic impacts, reduction of liberties.

4. Consider the opportunity costs, the tradeoffs, of protection measures.

Here’s the abstract:

This paper attempts to set out some general parameters for coming to grips with a central homeland security concern: the effort to make potential targets invulnerable, or at least notably less vulnerable, to terrorist attack. It argues that protection makes sense only when protection is feasible for an entire class of potential targets and when the destruction of something in that target set would have quite large physical, economic, psychological, and/or political consequences. There are a very large number of potential targets where protection is essentially a waste of resources and a much more limited one where it may be effective.

The whole paper is worth reading.

Posted on July 17, 2008 at 6:43 AM

LifeLock and Identity Theft

LifeLock, one of the companies that offers identity-theft protection in the United States, has been taking quite a beating recently. They’re being sued by credit bureaus, competitors and lawyers in several states that are launching class action lawsuits. And the stories in the media … it’s like a piranha feeding frenzy.

There are also a lot of errors and misconceptions. With its aggressive advertising campaign and a CEO who publishes his Social Security number and dares people to steal his identity—Todd Davis, 457-55-5462—LifeLock is a company that’s easy to hate. But the company’s story has some interesting security lessons, and it’s worth understanding in some detail.

In December 2003, as part of the Fair and Accurate Credit Transactions Act, or Facta, credit bureaus were forced to allow you to put a fraud alert on your credit reports, requiring lenders to verify your identity before issuing a credit card in your name. This alert is temporary and expires after 90 days. Several companies have sprung up—LifeLock, Debix, LoudSiren, TrustedID—that automatically renew these alerts and effectively make them permanent.

This service pisses off the credit bureaus and their financial customers. The reason lenders don’t routinely verify your identity before issuing you credit is that it takes time, costs money and is one more hurdle between you and another credit card. (Buy, buy, buy—it’s the American way.) So in the eyes of credit bureaus, LifeLock’s customers are inferior goods; selling their data isn’t as valuable. LifeLock also opts its customers out of pre-approved credit card offers, further making them less valuable in the eyes of credit bureaus.

And so began a smear campaign on the part of the credit bureaus. You can read their points of view in this New York Times article, written by a reporter who didn’t do much more than regurgitate their talking points. And the class action lawsuits have piled on, accusing LifeLock of deceptive business practices, fraudulent advertising and so on. The biggest smear is that LifeLock couldn’t even protect Todd Davis, whose identity was allegedly stolen.

It wasn’t. Someone in Texas used Davis’s SSN to get a $500 advance against his paycheck. It worked because the loan operation didn’t check with any of the credit bureaus before approving the loan—perfectly reasonable for an amount this small. The payday-loan operation called Davis to collect, and LifeLock cleared up the problem. His credit report remains spotless.

The Experian credit bureau’s lawsuit basically claims that fraud alerts are only for people who have been victims of identity theft. This seems spurious; the text of the law states that anyone “who asserts a good faith suspicion that the consumer has been or is about to become a victim of fraud or related crime” can request a fraud alert. It seems to me that includes anybody who has ever received one of those notices about their financial details being lost or stolen, which is everybody.

As to deceptive business practices and fraudulent advertising—those just seem like class action lawyers piling on. LifeLock’s aggressive fear-based marketing doesn’t seem any worse than a lot of other similar advertising campaigns. My guess is that the class action lawsuits won’t go anywhere.

In reality, forcing lenders to verify identity before issuing credit is exactly the sort of thing we need to do to fight identity theft. Basically, there are two ways to deal with identity theft: Make personal information harder to steal, and make stolen personal information harder to use. We all know the former doesn’t work, so that leaves the latter. If Congress wanted to solve the problem for real, one of the things it would do is make fraud alerts permanent for everybody. But the credit industry’s lobbyists would never allow that.

LifeLock does a bunch of other clever things. They monitor the national address database, and alert you if your address changes. They look for your credit and debit card numbers on hacker and criminal websites and such, and assist you in getting a new number if they see it. They have a million-dollar service guarantee—for complicated legal reasons, they can’t call it insurance—to help you recover if your identity is ever stolen.

But even with all of this, I am not a LifeLock customer. At $120 a year, it’s just not worth it. You wouldn’t know it from the press attention, but dealing with identity theft has become easier and more routine. Sure, it’s a pervasive problem. The Federal Trade Commission reported that 8.3 million Americans were identity-theft victims in 2005. But that includes things like someone stealing your credit card and using it, something that rarely costs you any money and that LifeLock doesn’t protect against. New account fraud is much less common, affecting 1.8 million Americans per year, or 0.8 percent of the adult population. The FTC hasn’t published detailed numbers for 2006 or 2007, but the rate seems to be declining.

New account fraud is also not very damaging. The median amount of fraud the thief commits is $1,350, but you’re not liable for that. Some spectacularly horrible identity-theft stories notwithstanding, the financial industry is pretty good at quickly cleaning up the mess. The victim’s median out-of-pocket cost for new account fraud is only $40, plus ten hours of grief to clean up the problem. Even assuming your time is worth $100 an hour, LifeLock isn’t worth more than $8 a year.
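That $8 figure follows from a straightforward expected-value calculation, sketched here with the numbers quoted above:

```python
# Expected annual cost of new account fraud to one adult,
# using the figures quoted in the text.
fraud_rate = 0.008       # 0.8 percent of the adult population per year
out_of_pocket = 40       # median direct cost, in dollars
hours_of_grief = 10      # median time spent cleaning up the problem
hourly_value = 100       # assumed value of your time, dollars per hour

expected_annual_loss = fraud_rate * (out_of_pocket + hours_of_grief * hourly_value)
print(f"${expected_annual_loss:.2f} per year")  # $8.32 per year
```

Anything a protection service charges above that expected loss is, in effect, a premium paid for peace of mind rather than for risk reduction.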

And it’s hard to get any data on how effective LifeLock really is. They’ve been in business three years and have about a million customers, but most of them have joined up in the last year. They’ve paid out on their service guarantee 113 times, but a lot of those were for things that happened before their customers became customers. (It was easier to pay than argue, I assume.) But they don’t know how often the fraud alerts actually catch an identity thief in the act. My guess is that it’s less than the 0.8 percent fraud rate above.

LifeLock’s business model is based more on the fear of identity theft than the actual risk.

It’s pretty ironic of the credit bureaus to attack LifeLock on its marketing practices, since they know all about profiting from the fear of identity theft. Facta also forced the credit bureaus to give Americans a free credit report once a year upon request. Through deceptive marketing techniques, they’ve turned this requirement into a multimillion-dollar business.

Get LifeLock if you want, or one of its competitors if you prefer. But remember that you can do most of what these companies do yourself. You can put a fraud alert on your own account, but you have to remember to renew it every three months. You can also put a credit freeze on your account, which is more work for the average consumer but more effective if you’re a privacy wonk—and the rules differ by state. And maybe someday Congress will do the right thing and put LifeLock out of business by forcing lenders to verify identity every time they issue credit in someone’s name.

This essay originally appeared in Wired.com.

Posted on June 17, 2008 at 6:51 AM

How to Sell Security

It’s a truism in sales that it’s easier to sell someone something he wants than a defense against something he wants to avoid. People are reluctant to buy insurance, or home security devices, or computer security anything. It’s not that they don’t ever buy these things, but it’s an uphill struggle.

The reason is psychological. And it’s the same dynamic when it’s a security vendor trying to sell its products or services, a CIO trying to convince senior management to invest in security, or a security officer trying to implement a security policy with her company’s employees.

It’s also true that the better you understand your buyer, the better you can sell.

First, a bit about Prospect Theory, the underlying theory behind the newly popular field of behavioral economics. Prospect Theory was developed by Daniel Kahneman and Amos Tversky in 1979 (Kahneman went on to win a Nobel Prize for this and other similar work) to explain how people make trade-offs that involve risk. Before this work, economists had a model of “economic man,” a rational being who makes trade-offs based on some logical calculation. Kahneman and Tversky showed that real people are far more subtle and ornery.

Here’s an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500 and a 50 percent chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500 and a 50 percent chance of losing $1,000.

These two trade-offs are very similar, and traditional economics predicts that whether you’re contemplating a gain or a loss doesn’t make a difference: People make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn’t affect the mathematics and therefore shouldn’t affect the results. This is traditional economics, and it’s called Utility Theory.
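Utility Theory’s symmetry claim is easy to check numerically. This is just an illustrative sketch using the experiment’s dollar amounts:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def average_payoff(trials, gamble):
    """Monte Carlo estimate of a gamble's expected value."""
    return sum(gamble() for _ in range(trials)) / trials

sure_gain  = lambda: 500
risky_gain = lambda: 1000 if random.random() < 0.5 else 0
sure_loss  = lambda: -500
risky_loss = lambda: -1000 if random.random() < 0.5 else 0

# All four alternatives converge on the same magnitude of expected value,
# which is why Utility Theory predicts the framing shouldn't matter.
print(average_payoff(200_000, sure_gain), average_payoff(200_000, risky_gain))
print(average_payoff(200_000, sure_loss), average_payoff(200_000, risky_loss))
```

The sure option and the gamble are mathematically interchangeable at roughly plus or minus $500 each; the interesting part, as the experiments below show, is that people treat them very differently anyway.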

But Kahneman’s and Tversky’s experiments contradicted Utility Theory. When faced with a gain, about 85 percent of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70 percent chose the risky larger loss over the sure smaller loss.

This experiment, repeated again and again by many researchers across ages, genders, cultures and even species, has consistently yielded the same result, and it rocked economics. Directly contradicting the traditional idea of “economic man,” Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or “A bird in the hand is worth two in the bush.” And two, a sure loss is worse than a chance at a greater loss, or “Run away and live to fight another day.” Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50 percent chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.

This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the “Asian Disease Experiment” for an almost surreal example. Describing the same policy choice in different ways—either as “200 lives saved out of 600” or “400 lives lost out of 600”—yields wildly different risk reactions.

Evolutionarily, the bias makes sense. It’s a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food, whether small or large, can be equally bad: both can result in death, and the best option is to risk everything for the chance at no loss at all.

How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It’s a choice between a small sure loss—the cost of the security product—and a large risky loss: for example, the results of an attack on one’s network. Of course there’s a lot more to the sale. The buyer has to be convinced that the product works, and he has to understand the threats against him and the risk that something bad will happen. But all things being equal, buyers would rather take the chance that the attack won’t happen than suffer the sure loss that comes from purchasing the security product.

Security sellers know this, even if they don’t understand why, and are continually trying to frame their products in terms of positive results. That’s why you see slogans with the basic message, “We take care of security so you can focus on your business,” or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.

One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they’re willing to do almost anything to make that feeling go away; lots of other psychological research supports that. Any burglar alarm salesman will tell you that people buy only after they’ve been robbed, or after one of their neighbors has been robbed. And the fears stoked by 9/11, and the politics surrounding 9/11, have fueled an entire industry devoted to counterterrorism. When emotion takes over like that, people are much less likely to think rationally.

Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service. Your car comes with safety and security features built in; they’re not sold separately. Same with your house. And it should be the same with computers and networks. Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for. Security shouldn’t be a separate policy for employees to follow but part of overall IT policy.

Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.

This essay originally appeared in CIO.

Posted on May 26, 2008 at 5:57 AM

Airlines Profiting from TSA Rules

From CNN:

Before 9/11, airlines and security personnel—and I use the term “security personnel” loosely—might have let a nickname or even a maiden name on a ticket slide. No longer. If you have the wrong name on your ticket, you’re probably grounded. And there are two reasons for this: security and greed.

The Transportation Security Administration wants to be sure the same person who bought the ticket, and who was screened, is boarding the plane. But when there’s an inexact match, the airline can either charge a $100 “change” fee or force you to buy a new ticket. In an industry where every dollar counts, the exact-name rule is the government’s gift to cash-starved air carriers.

That’s the situation Gordon was confronted with, even when it was obvious that “Jan” and “Janet” were one and the same. There were suggestions that a new ticket might need to be purchased. “We didn’t let it get to that,” he recalls. Instead, he asked to speak with a supervisor who could finally fix the codes so that the ticket and passport matched up. How did all of this happen in the first place? Turns out Jan Gordon had signed up for a frequent flier account under her informal name, so when she booked an award ticket, it also used her informal—and inaccurate—name.

There are two things to get pissed off about here. One, the airlines profiting off a TSA rule. And two, a TSA rule that requires them to ignore what is obvious.

EDITED TO ADD (5/28): To add some more detail here, the rule makes absolutely no sense. If this were sensible, the TSA employee who checks the ticket against the ID would make the determination of whether the names match. Instead, the passenger is forced to go back to the airline, which, for a fee, changes the name on the ticket to match the ID. This latter system is no more secure. If anything, it’s less secure. But rules are rules, so it’s what has to happen.

Posted on May 20, 2008 at 6:51 AM

Third Annual Movie-Plot Threat Contest Winner

On April 7—seven days late—I announced the Third Annual Movie-Plot Threat Contest:

For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn’t worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.

Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk—whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.

[…]

Entries are limited to 150 words … because fear doesn’t require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.

On May 7, I posted five semi-finalists out of the 327 blog comments:

Sadly, two of those five were above the 150-word limit. Out of the three remaining, I (with the help of my readers) have chosen a winner.

Presenting, the winner of the Third Annual Movie Plot Threat Contest, Aaron Massey:

Tommy Tester Toothpaste Strips:

Many Americans were shocked to hear the results of the research trials regarding heavy metals and toothpaste conducted by the New England Journal of Medicine, which FDA is only now attempting to confirm. This latest scare comes after hundreds of deaths were linked to toothpaste contaminated with diethylene glycol, a potentially dangerous chemical used in antifreeze.

In light of this continuing health risk, Hamilton Health Labs is proud to announce Tommy Tester Toothpaste Strips! Just apply a dab of toothpaste from a fresh tube onto the strip and let it rest for 3 minutes. It’s just that easy! If the strip turns blue, rest assured that your entire tube of toothpaste is safe. However, if the strip turns pink, dispose of the toothpaste immediately and call the FDA health emergency number at 301-443-1240.

Do not let your family become a statistic when the solution is only $2.95!

Aaron wins, well, nothing really, except the fame and glory afforded by this blog. So give him some fame and glory. Congratulations.

Posted on May 15, 2008 at 6:24 AM

