Good essay. Nothing I haven't said before, but it's good to hear it from someone with a widely different set of credentials than I have.
February 2012 Archives
I was asked to talk about five books related to privacy.
You're best known as a security expert but our theme today is "trust". How would you describe the connection between the two?
Security exists to facilitate trust. Trust is the goal, and security is how we enable it. Think of it this way: As members of modern society, we need to trust all sorts of people, institutions and systems. We have to trust that they'll treat us honestly, won't take advantage of us and so on – in short, that they'll behave in a trustworthy manner. Security is how we induce trustworthiness, and by extension enable trust.
An example might make this clearer. For commerce to work smoothly, merchants and customers need to trust each other. Customers need to trust that merchants won't misrepresent the goods they're selling. Merchants need to trust that customers won't steal stuff without paying. Each needs to trust that the other won't cheat somehow. Security is how we make that work, billions of times a day. We do that through obvious measures like alarm systems that prevent theft and anti-counterfeiting measures in currency that prevent fraud, but I mean a lot of other things as well. Consumer protection laws prevent merchants from cheating. Other laws prevent burglaries. Less formal measures like reputational considerations help keep merchants, and customers in less anonymous communities, from cheating. And our inherent moral compass keeps most of us honest most of the time.
In my new book Liars and Outliers, I call these societal pressures. None of them are perfect, but all of them – working together – are what keeps society functioning. Of course there is, and always will be, the occasional merchant or customer who cheats. But as long as they're rare enough, society thrives.
How has the nature of trust changed in the information age?
These notions of trust and trustworthiness are as old as our species. Many of the specific societal pressures that induce trust are as old as civilisation. Morals and reputational considerations are certainly that old, as are laws. Technical security measures have changed with technology, as have the details of our reputational and legal systems, but by and large they're basically the same.
What has changed in modern society is scale. Today we need to trust more people than ever before, further away – whether politically, ethnically or socially – than ever before. We need to trust larger corporations, more diverse institutions and more complicated systems. We need to trust via computer networks. This all makes trust, and inducing trust, harder. At the same time, the scaling of technology means that the bad guys can do more damage than ever before. That also makes trust harder. Navigating all of this is one of the most fundamental challenges of our society in this new century.
Given the dangers out there, should we trust anyone? Isn't "trust no one" the first rule of security?
It might be the first rule of security, but it's the worst rule of society. I don't think I could even total up all the people, institutions and systems I trusted today. I trusted that the gas company would continue to provide the fuel I needed to heat my house, and that the water coming out of my tap was safe to drink. I trusted that the fresh and packaged food in my refrigerator was safe to eat – and that certainly involved trusting people in several countries. I trusted a variety of websites on the Internet. I trusted my automobile manufacturer, as well as all the other drivers on the road.
I am flying to Boston right now, so that requires trusting several major corporations, hundreds of strangers – either working for those corporations, sitting on my plane or just standing around in the airport – and a variety of government agencies. I even had to trust the TSA [US Transportation Security Administration], even though I know it's doing a lousy job – and so on. And it's not even 9:30am yet! The number of people each of us trusts every day is astounding. And we trust them so completely that we often don't even think about it.
We don't walk into a restaurant and think: "The food vendors might have sold the restaurant tainted food, the cook might poison it, the waiter might clone my credit card, other diners might steal my wallet, the building constructor might have weakened the roof, and terrorists might bomb the place." We just sit down and eat. And the restaurant trusts that we won't steal anyone else's wallet or leave a bomb under our chair, and will pay when we're done. Without trust, society collapses. And without societal pressures, there's no trust. The devil is in the details, of course, and that's what my book is about.
As an individual, what security threats scare you the most?
My primary concerns are threats from the powerful. I'm not worried about criminals, even organised crime. Or terrorists, even organised terrorists. Those groups have always existed, always will, and they'll always operate on the fringes of society. Societal pressures have done a good job of keeping them that way. It's much more dangerous when those in power use that power to subvert trust. Specifically, I am thinking of governments and corporations.
Let me give you a few examples. The global financial crisis was not a result of criminals, it was perpetrated by legitimate financial institutions pursuing their own self-interest. The major threats against our privacy are not from criminals, they're from corporations trying to more accurately target advertising. The most significant threat to the freedom of the Internet is from large entertainment companies, in their misguided attempt to stop piracy. And the cyberwar rhetoric is likely to cause more damage to the Internet than criminals could ever dream of.
What scares me the most is that today, in our hyper-connected, hyper-computed, high-tech world, we will get societal pressures wrong to catastrophic effect.
Let's get stuck into the books you've chosen on this theme on trust. Beginning with Yochai Benkler's The Penguin and the Leviathan.
This could be considered a companion book to my own. I write from the perspective of security – how society induces cooperation. Benkler takes the opposite perspective – how does this cooperation work and what is its value? More specifically, what is its value in the 21st century information-age economy? He challenges the pervasive economic view that people are inherently selfish creatures, and shows that actually we are naturally cooperative. More importantly, he discusses the enormous value of cooperation in society, and the new ways it can be harnessed over the Internet.
I think this view is important. Our culture is pervaded with the idea that individualism is paramount – Thomas Hobbes's notion that we are all autonomous individuals who willingly give up some of our freedom to the government in exchange for safety. It's complete nonsense. Humans have never lived as individuals. We have always lived in communities, and we have always succeeded or failed as cooperative groups. That we find it so remarkable when someone separates himself and lives alone – think of Henry David Thoreau at Walden Pond – shows how rare it is.
Benkler understands this, and wants us to accept the cooperative nature of ourselves and our societies. He also gives the same advice for the future that I do – that we need to build social mechanisms that encourage cooperation over control. That is, we need to facilitate trust in society.
What's next on your list?
The Folly of Fools, by the biologist Robert Trivers. Trivers has studied self-deception in humans, and asks how it evolved to be so pervasive. Humans are masters at self-deception. We regularly deceive ourselves in a variety of different circumstances. But why? How is it possible for self-deception – perceiving reality to be different than it really is – to have survival value? Why is it that genetic tendencies for self-deception are likely to propagate to the next generation?
Trivers's book-long answer is fascinating. Basically, deception can have enormous evolutionary benefits. In many circumstances, especially those involving social situations, individuals who are good at deception are better able to survive and reproduce. And self-deception makes us better at deception. For example, there is value in my being able to deceive you into thinking I am stronger than I really am. You're less likely to pick a fight with me, I'm more likely to win a dominance struggle without fighting, and so on. I am better able to bluff you if I actually believe I am stronger than I really am. So we deceive ourselves in order to be better able to deceive others.
The psychology of deception is fundamental to my own writing on trust. It's much easier for me to cheat you if you don't believe I am cheating you.
Third up, The Murderer Next Door by David M Buss.
There have been a number of books about the violent nature of humans, particularly men. I chose The Murderer Next Door both because it is well-written and because it is relatively new, published in 2005. David M Buss is a psychologist, and he writes well about the natural murderousness of our species. There's a lot of data to support natural human murderousness, and not just murder rates in modern societies. Anthropological evidence indicates that between 15% and 25% of prehistoric males died in warfare.
This murderousness resulted in an evolutionary pressure to be clever. Here's Buss writing about it:
"As the motivations to murder evolved in our minds, a set of counterinclinations also developed. Killing is a risky business. It can be dangerous and inflict horrible costs on the victim. Because it's so bad to be dead, evolution has fashioned ruthless defences to prevent being killed, including killing the killer. Potential victims are therefore quite dangerous themselves. In the evolutionary arms race, homicide victims have played a critical and unappreciated role – they pave the way for the evolution of anti-homicide defences."
Those defences involved trust and societal pressures to induce trust.
Your fourth book is by psychologist, science writer and previous FiveBooks interviewee Steven Pinker.
The Better Angels of Our Nature is Steven Pinker's explanation as to why, despite the selection pressures for murderousness in our evolutionary past, violence has declined in so many cultures around the world. It's a fantastic book, and I recommend that everyone read it. From my perspective, I could sum up his argument very simply: Societal pressures have worked.
Of course it's more complicated than that, and Pinker does an excellent job of leading the reader through his analysis and conclusions. First, he spends six chapters documenting the fact that violence has in fact declined. In the next two chapters, he does his best to figure out exactly what has caused the "better angels of our nature" to prevail over our more natural demons. His answers are complicated, and expand greatly on the interplay among the various societal pressures which I talk about myself. It's not things like bigger jails and more secure locks that are making society safer. It's things like the invention of printing and the resultant rise of literacy, the empowerment of women and the rise of universal moral and ethical principles.
What is your final selection?
Braintrust, by the neuroscientist Patricia Churchland. This book is about the neuroscience of morality. It's brand new – published in 2011 – which is good because this is a brand new field of science, and new discoveries are happening all the time. Morality is the most basic of societal pressures, and Churchland explains how it works.
This book tries to understand the neuroscience behind trust and trustworthiness. In her own words:
"The hypothesis on offer is that what we humans call ethics or morality is a four dimensional scheme for social behavior that is shaped by interlocking brain processes: (1) caring (rooted in attachment to kin and kith and care for their well-being), (2) recognition of others' psychological states (rooted in the benefits of predicting the behavior of others), (3) problem-solving in a social context (e.g., how we should distribute scarce goods, settle land disputes; how we should punish the miscreants), and (4) learning social practices (by positive and negative reinforcement, by imitation, by trial and error, by various kinds of conditioning, and by analogy)."
Those are our innate human societal pressures. They are the security systems that keep us mostly trustworthy most of the time – enough for most of us to be trusting enough for society to survive.
Are we safer for all the security theatre of airport checks?
Of course not. There are two parts to the question. One: Are we doing the right thing? That is, does it make sense for America to focus its anti-terrorism security efforts on airports and airplanes? And two: Are we doing things right? In other words, are the anti-terrorism measures at airports doing the job and preventing terrorism? I say the answer to both of those questions is no. Focusing on airports, and specific terrorist tactics like shoes and liquids, is a poor use of our money because it's easy for terrorists to switch targets and tactics. And the current TSA security measures don't keep us safe because it's too easy to bypass them.
There are two basic kinds of terrorists – random idiots and professionals. Pretty much any airport security, even the pre-9/11 measures, will protect us against random idiots. They will get caught. And pretty much nothing will protect us against professionals. They've researched our security and know the weaknesses. By the time the plot gets to the airport, it's too late. Much more effective is for the US to spend its money on intelligence, investigation and emergency response. But this is a shorter answer than your readers deserve, and I suggest they read more of my writings on the topic.
How does the rise of cloud computing affect personal risk?
Like everything else, cloud computing is all about trust. Trust isn't new in computing. I have to trust my computer's manufacturer. I have to trust my operating system and software. I have to trust my Internet connection and everything associated with that. I have to trust all sorts of data I receive from other sources.
So on the one hand, cloud computing just adds another level of trust. But it's an important level of trust. For most of us, it reduces our risk. If I have my email on Google, my photos on Flickr, my friends on Facebook and my professional contacts on LinkedIn, then I don't have to worry much about losing my data. If my computer crashes I'll still have all my email, photos and contacts. This is the way the iPhone works with iCloud – if I lose my phone, I can get a new one and all my data magically reappears.
On the other hand, I have to trust my cloud providers. I have to trust that Facebook won't misuse the personal information it knows about me. I have to trust that my data won't get shipped off to a server in a foreign country with lax privacy laws, and that the companies who have my data will not hand it over to the police without a court order. I'm not able to implement my own security around my data; I have to take what the cloud provider offers. And I must trust that's good enough, often without knowing anything about it.
Finally, how many Bruce Schneier Facts are true?
This Q&A originally appeared on TheBrowser.com
U.S. Federal Court Rules that it Is Unconstitutional for the Police to Force Someone to Decrypt their Laptop
A U.S. Federal Court ruled that it is unconstitutional for the police to force someone to decrypt their laptop computer:
Thursday’s decision by the 11th U.S. Circuit Court of Appeals said that an encrypted hard drive is akin to a combination to a safe, and is off limits, because compelling the unlocking of either of them is the equivalent of forcing testimony.
Also note that the court’s analysis isn’t inconsistent with Boucher and Fricosu, the two district court cases on 5th Amendment limits on decryption. In both of those prior cases, the district courts merely held on the facts of the case that the testimony was a foregone conclusion.
There's a new study that shows that squid are faster in the air than in the water.
Squid of many species have been seen to 'fly' using the same jet-propulsion mechanisms that they use to swim: squirting water out of their mantles so that they rocket out of the sea and glide through the air. Until now, most researchers have thought that such flight was a way to avoid predators, but Ronald O'Dor, a marine biologist at Dalhousie University in Halifax, Canada, has calculated that propelling themselves through the air may actually be an efficient way for squid to travel long distances.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
The book is selling well. (Signed copies are still available on the website.) All the online stores have it, and most bookstores as well. It is available in Europe and elsewhere outside the U.S. And for those who wanted a DRM-free electronic copy, it's available on the OReilly.com bookstore for $11.99.
There's an interview with me about the book on TheBrowser.com.
The new movie Safe House features the song "No Church in the Wild," by Kanye West, which includes this verse:
I live by you, desire
I stand by you, walk through the fire
Your love is my scripture
Let me into your encryption
When Kenneth G. Lieberthal, a China expert at the Brookings Institution, travels to that country, he follows a routine that seems straight from a spy film.
He leaves his cellphone and laptop at home and instead brings "loaner" devices, which he erases before he leaves the United States and wipes clean the minute he returns. In China, he disables Bluetooth and Wi-Fi, never lets his phone out of his sight and, in meetings, not only turns off his phone but also removes the battery, for fear his microphone could be turned on remotely. He connects to the Internet only through an encrypted, password-protected channel, and copies and pastes his password from a USB thumb drive. He never types in a password directly, because, he said, "the Chinese are very good at installing key-logging software on your laptop."
We can now conclusively link Stuxnet to the centrifuge structure at the Natanz nuclear enrichment lab in Iran. Watch this new video presentation from Ralph Langner, the researcher who has done the most work on Stuxnet. It's a long clip, but the good stuff is between 21:00 and 29:00. The pictures he's referring to are still up.
According to a report by Juniper, mobile malware is increasing dramatically.
In 2011, we saw unprecedented growth of mobile malware attacks with a 155 percent increase across all platforms. Most noteworthy was the dramatic growth in Android Malware from roughly 400 samples in June to over 13,000 samples by the end of 2011. This amounts to a cumulative increase of 3,325 percent. Notable in these findings is a significant number of malware samples obtained from third-party applications stores, which do not enjoy the benefit or protection from Google's newly announced Android Market scanning techniques.
We also observed a new level of sophistication in many attacks. Malware writers used new and novel ways to exploit vulnerabilities. 2011 saw malware like Droid KungFu, which used encrypted payloads to avoid detection, and Droid Dream, which cleverly disguised itself as a legitimate application – a sign of things to come.
I don't think this is surprising at all. Mobile is the new platform. Mobile is a very intimate platform. It's where the attackers are going to go.
Research paper: "A birthday present every eleven wallets? The security of customer-chosen banking PINs," by Joseph Bonneau, Sören Preibusch, and Ross Anderson:
Abstract: We provide the first published estimates of the difficulty of guessing a human-chosen 4-digit PIN. We begin with two large sets of 4-digit sequences chosen outside banking for online passwords and smartphone unlock-codes. We use a regression model to identify a small number of dominant factors influencing user choice. Using this model and a survey of over 1,100 banking customers, we estimate the distribution of banking PINs as well as the frequency of security-relevant behaviour such as sharing and reusing PINs. We find that guessing PINs based on the victims' birthday, which nearly all users carry documentation of, will enable a competent thief to gain use of an ATM card once for every 11-18 stolen wallets, depending on whether banks prohibit weak PINs such as 1234. The lesson for cardholders is to never use one's date of birth as a PIN. The lesson for card-issuing banks is to implement a denied PIN list, which several large banks still fail to do. However, blacklists cannot effectively mitigate guessing given a known birth date, suggesting banks should move away from customer-chosen banking PINs in the long term.
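The headline number is easy to reproduce with a back-of-the-envelope sketch. A thief who knows the victim's birth date tries a handful of date-derived PINs before the card locks out; if a fraction p of cardholders uses such a PIN, about 1/p stolen wallets yield one ATM success. (The candidate PIN formats below are my illustration, not the paper's exact list; only the 1-in-11-to-18 range comes from the abstract.)

```python
from datetime import date

def birthday_pins(birthdate):
    """Date-derived 4-digit PINs a thief might try first.
    (These candidate formats are an assumption for illustration.)"""
    d, m, y = birthdate.day, birthdate.month, birthdate.year
    return {f"{d:02d}{m:02d}",        # DDMM
            f"{m:02d}{d:02d}",        # MMDD
            f"{y:04d}",               # YYYY
            f"{y % 100:02d}{m:02d}"}  # YYMM

guesses = birthday_pins(date(1985, 7, 4))
# A card typically locks after a few wrong tries, so the whole
# candidate list fits inside one lockout window:
assert len(guesses) <= 6

# If a fraction p of cardholders uses a date-derived PIN, a thief
# needs roughly 1/p stolen wallets per ATM success:
for p in (1/11, 1/18):
    print(f"success rate {p:.3f} -> ~{1/p:.0f} wallets per hit")
```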
EDITED TO ADD (2/22): News article
Marissa A. Ramsier, Andrew J. Cunningham, Gillian L. Moritz, James J. Finneran, Cathy V. Williams, Perry S. Ong, Sharon L. Gursky-Doyen, and Nathaniel J. Dominy (2012), "Primate communication in the pure ultrasound," Biology Letters.
Abstract: Few mammals -- cetaceans, domestic cats and select bats and rodents -- can send and receive vocal signals contained within the ultrasonic domain, or pure ultrasound (greater than 20 kHz). Here, we use the auditory brainstem response (ABR) method to demonstrate that a species of nocturnal primate, the Philippine tarsier (Tarsius syrichta), has a high-frequency limit of auditory sensitivity of ca 91 kHz. We also recorded a vocalization with a dominant frequency of 70 kHz. Such values are among the highest recorded for any terrestrial mammal, and a relatively extreme example of ultrasonic communication. For Philippine tarsiers, ultrasonic vocalizations might represent a private channel of communication that subverts detection by predators, prey and competitors, enhances energetic efficiency, or improves detection against low-frequency background noise.
Self-domestication happens when the benefits of cooperation outweigh the costs:
But why and how could natural selection tame the bonobo? One possible narrative begins about 2.5 million years ago, when the last common ancestor of bonobos and chimpanzees lived both north and south of the Zaire River, as did gorillas, their ecological rivals. A massive drought drove gorillas from the south, and they never returned. That last common ancestor suddenly had the southern jungles to themselves.
As a result, competition for resources wouldn't be as fierce as before. Aggression, such a costly habit, wouldn't have been so necessary. And whereas a resource-limited environment likely made female alliances rare, as they are in modern chimpanzees, reduced competition would have allowed females to become friends. No longer would males intimidate them and force them into sex. Once reproduction was no longer traumatic, they could afford to be fertile more often, which in turn reduced competition between males.
"If females don't let you beat them up, why should a male bonobo try to be dominant over all the other males?" said Hare. "In male chimps, it's very costly to be on top. Often in primate hierarchies, you don't stay on top very long. Everyone is gunning for you. You're getting in a lot of fights. If you don't have to do that, it's better for everybody." Chimpanzees had been caught in what Hare called "this terrible cycle, and bonobos have been able to break this cycle."
This is the sort of thing I write about in my new book. And with both bonobos and humans, there's an obvious security problem: if almost everyone is non-aggressive, an aggressive minority can easily dominate. How does society prevent that from happening?
From the abstract of the paper:
In this paper, we analyze the encryption systems used in the two existing (and competing) satphone standards, GMR-1 and GMR-2. The first main contribution is that we were able to completely reverse engineer the encryption algorithms employed. Both ciphers had not been publicly known previously. We describe the details of the recovery of the two algorithms from freely available DSP-firmware updates for satphones, which included the development of a custom disassembler and tools to analyze the code, and extending prior work on binary analysis to efficiently identify cryptographic code. We note that these steps had to be repeated for both systems, because the available binaries were from two entirely different DSP processors. Perhaps somewhat surprisingly, we found that the GMR-1 cipher can be considered a proprietary variant of the GSM A5/2 algorithm, whereas the GMR-2 cipher is an entirely new design. The second main contribution lies in the cryptanalysis of the two proprietary stream ciphers. We were able to adapt known A5/2 ciphertext-only attacks to the GMR-1 algorithm with an average case complexity of 2^32 steps. With respect to the GMR-2 cipher, we developed a new attack which is powerful in a known-plaintext setting. In this situation, the encryption key for one session, i.e., one phone call, can be recovered with approximately 50-65 bytes of key stream and a moderate computational complexity. A major finding of our work is that the stream ciphers of the two existing satellite phone systems are considerably weaker than what is state-of-the-art in symmetric cryptography.
There's some excellent research (paper, news articles) surveying public keys in the wild. Basically, the researchers found that a small fraction of them (27,000 out of 7.1 million, or 0.38%) share a common factor and are inherently weak. The researchers can break those public keys, and anyone who duplicates their research can as well.
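The attack behind that finding is elementary number theory: two RSA moduli that share a prime factor surrender it to a single GCD computation, even though factoring either modulus alone is infeasible. A toy sketch with tiny primes (real moduli are hundreds of digits long):

```python
import math

# Toy primes standing in for the large primes of real RSA keys.
p = 10007            # prime accidentally reused by a bad RNG
q1, q2 = 10009, 10037
n1, n2 = p * q1, p * q2   # two "different" public moduli

# Anyone can compute the GCD of two public moduli:
shared = math.gcd(n1, n2)
assert shared == p

# Knowing one factor fully breaks both keys:
assert n1 // shared == q1
assert n2 // shared == q2
```

Doing this naively across millions of collected moduli would mean trillions of pairwise GCDs, so researchers use batch techniques such as product trees to make the scan feasible, but the underlying break is exactly this.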
The cause of this is almost certainly a lousy random number generator used to create those public keys in the first place. This shouldn't come as a surprise. One of the hardest parts of cryptography is random number generation. It's really easy to write a lousy random number generator, and it's not at all obvious that it is lousy. Randomness is a non-functional requirement, and unless you specifically test for it -- and know how to test for it -- you're going to think your cryptosystem is working just fine. (One of the reporters who called me about this story said that the researchers told him about a real-world random number generator that produced just seven different random numbers.) So it's likely these weak keys are accidental.
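A generator that emits only a handful of distinct values, like the seven-value one mentioned above, fails even the crudest statistical check; the point is that nothing breaks functionally unless you actually run one. Here is a minimal distinct-count sanity check (the threshold and the seven-value generator are illustrative, not from the reported incident):

```python
import random

def looks_random(sample, min_distinct_fraction=0.5):
    """Crude sanity check: a healthy generator over a huge range
    should almost never repeat itself in a modest sample."""
    return len(set(sample)) >= min_distinct_fraction * len(sample)

# A healthy generator: 1000 draws from a 64-bit space, collisions
# are astronomically unlikely.
good = [random.getrandbits(64) for _ in range(1000)]
assert looks_random(good)

# A "random number generator" stuck on seven values, like the one
# the researchers reportedly encountered:
broken = [random.choice([3, 7, 11, 19, 23, 31, 42]) for _ in range(1000)]
assert not looks_random(broken)
```

Real randomness testing goes much further (frequency, runs, and spectral tests, as in the NIST statistical test suite), but even this trivial check would have caught a seven-value generator, and nothing in normal operation ever would.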
It's certainly possible, though, that some random number generators have been deliberately weakened. The obvious culprits are national intelligence services like the NSA. I have no evidence that this happened, but if I were in charge of weakening cryptosystems in the real world, the first thing I would target is random number generators. They're easy to weaken, and it's hard to detect that you've done anything. Much safer than tweaking the algorithms, which can be tested against known test vectors and alternate implementations. But again, I'm just speculating here.
What is the security risk? There's some, but it's hard to know how much. We can assume that the bad guys can replicate this experiment and find the weak keys. But they're random, so it's hard to know how to monetize this attack. Maybe the bad guys will get lucky and one of the weak keys will lead to some obvious way to steal money, or trade secrets, or national intelligence. Maybe.
And what happens now? My hope is that the researchers know which implementations of public-key systems are susceptible to these bad random numbers -- they didn't name names in the paper -- and alerted them, and that those companies will fix their systems. (I recommend my own Fortuna, from Cryptography Engineering.) I hope that everyone who implements a home-grown random number generator will rip it out and put in something better. But I don't hold out much hope. Bad random numbers have broken a lot of cryptosystems in the past, and will continue to do so in the future.
From the introduction to the paper:
In this paper we complement previous studies by concentrating on computational and randomness properties of actual public keys, issues that are usually taken for granted. Compared to the collection of certificates considered in , where shared RSA moduli are "not very frequent", we found a much higher fraction of duplicates. More worrisome is that among the 4.7 million distinct 1024-bit RSA moduli that we had originally collected, more than 12500 have a single prime factor in common. That this happens may be crypto-folklore, but it was new to us, and it does not seem to be a disappearing trend: in our current collection of 7.1 million 1024-bit RSA moduli, almost 27000 are vulnerable and 2048-bit RSA moduli are affected as well. When exploited, it could affect the expectation of security that the public key infrastructure is intended to achieve.
And the conclusion:
We checked the computational properties of millions of public keys that we collected on the web. The majority does not seem to suffer from obvious weaknesses and can be expected to provide the expected level of security. We found that on the order of 0.003% of public keys is incorrect, which does not seem to be unacceptable. We were surprised, however, by the extent to which public keys are shared among unrelated parties. For ElGamal and DSA sharing is rare, but for RSA the frequency of sharing may be a cause for concern. What surprised us most is that many thousands of 1024-bit RSA moduli, including thousands that are contained in still valid X.509 certificates, offer no security at all. This may indicate that proper seeding of random number generators is still a problematic issue....
EDITED TO ADD (3/14): The title of the paper, "Ron was wrong, Whit is right" refers to the fact that RSA is inherently less secure because it needs two large random primes. Discrete log based algorithms, like DSA and ElGamal, are less susceptible to this vulnerability because they only need one random prime.
Joanne Kuzma of the University of Worcester, England, has analyzed photos that clearly show children's faces on the photo sharing site Flickr. She found that a significant proportion of those analyzed were geotagged and a large number of those were associated with 50 of the more expensive residential zip codes in the USA.
"The location information could possibly be used to locate a child's home or other location based on information publicly available on Flickr," explains Kuzma. "Publishing geolocation data raises concerns about privacy and security of children when such personalized information is available to internet users who may have dubious reasons for accessing this data."
It's children, though, so it's going to be hard to have a rational risk discussion about this topic.
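For context, a photo's geotag is just EXIF metadata: latitude and longitude stored as degree/minute/second values plus a hemisphere reference, trivially convertible into map coordinates by anyone who downloads the image. A sketch of that conversion (the function name and sample coordinates are mine):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style GPS degree/minute/second triple plus an
    N/S/E/W reference into signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# A geotag like 40°26'46.8"N 79°58'55.9"W resolves to a map point:
lat = dms_to_decimal(40, 26, 46.8, "N")
lon = dms_to_decimal(79, 58, 55.9, "W")
print(lat, lon)  # roughly 40.4463, -79.9822
```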
This writer wrestles with the costs and benefits of tighter controls on pseudoephedrine, a key chemical used to make methamphetamine:
Now, personally, I sincerely doubt that the pharmaceutical industry has reliable estimates of how many of their purchasers actually have colds--or that they would share data indicating that half of their revenues came from meth cooks. But let's say this is accurate: half of all pseudoephedrine is sold to meth labs. That still wouldn't mean that manufacturers of cold medicines are making "hundreds of millions of dollars a year" off of the stuff--not in the sense that they end up hundreds of millions of dollars richer. The margins on off-patent medicines are not high, and in retail, 50% or more of the cost of the product is retailer and distributor markup*. Then there's the costs of manufacturing.
But this is sort of a side issue. What really bothers me is the way that Humphreys--and others who show up in the comments--regard the rather extraordinary cost of making PSE prescription-only as too trivial to mention.
Let's return to those 15 million cold sufferers. Assume that on average, they want one box a year. That's going to require a visit to the doctor. At an average copay of $20, their costs alone would be $300 million a year, but of course, the health care system is also paying a substantial amount for the doctor's visit. The average reimbursement from private insurance is $130; for Medicare, it's about $60. Medicaid pays less, but that's why people on Medicaid have such a hard time finding a doctor. So average those two together, and add the copays, and you've got at least $1.5 billion in direct costs to obtain a simple decongestant. But that doesn't include the hassle and possibly lost wages for the doctor's visits. Nor the possible secondary effects of putting more demands on an already none-too-plentiful supply of primary care physicians.
I like seeing the debate framed as a security trade-off.
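The quoted cost estimate is easy to check directly. A quick sketch using the writer's own figures (all of the numbers below are the quoted writer's assumptions, not established data):

```python
# Rough check of the quoted cost estimate for prescription-only
# pseudoephedrine; every figure is the quoted writer's assumption.
sufferers = 15_000_000       # cold sufferers wanting one box a year
copay = 20                   # average copay per doctor visit, USD
private_reimb = 130          # avg private-insurance reimbursement
medicare_reimb = 60          # avg Medicare reimbursement

copay_total = sufferers * copay                    # $300 million
avg_reimb = (private_reimb + medicare_reimb) / 2   # $95 per visit
reimb_total = sufferers * avg_reimb                # $1.425 billion
direct_cost = copay_total + reimb_total

# Consistent with the quoted "at least $1.5 billion in direct costs"
print(direct_cost)
```

The total comes to about $1.7 billion, so "at least $1.5 billion" is, if anything, conservative under these assumptions.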
Liars and Outliers is available. Amazon and Barnes & Noble have been shipping the book since the beginning of the month. Both the Kindle and the Nook versions are available for download. I have received 250 books myself. Everyone who read and commented on a draft will get a copy in the mail. And as of today, I have shipped books to everyone who ordered a signed copy.
A bunch of people on Twitter have announced that they're enjoying the book. Right now, there are only three reviews on Amazon. Please, leave a review on Amazon. (I'll write about the problem of fake reviews on these sorts of sites in another post.)
I'm not sure, but I think the Kindle price is going to increase. So if you want the book at the current $10 price, now is the time to buy it.
What happens now? It seems as if this excuse would always be available to someone who doesn't want the police to decrypt her files. On the other hand, it might be hard to claim convincingly to have forgotten a key. It's less credible for someone to say "I have no idea what my password is" than to say something like "it was the word 'telephone' with a zero for the o and then some number following -- four digits, with a six in it -- and then a punctuation mark like a period." And then a brute-force password search could be targeted. I suppose someone could say "it was a random alphanumeric password created by an automatic program; I really have no idea," but I'm not sure a judge would believe it.
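To see how much a partial memory like that narrows the search, here is a toy sketch that enumerates every candidate matching the hypothetical pattern from the example above (the pattern and punctuation set are illustrative assumptions):

```python
from itertools import product

# Hypothetical partial memory: "telephone" with a zero for the o,
# then four digits containing a six, then one punctuation mark.
base = "telephone".replace("o", "0")   # "teleph0ne"
punctuation = ".,!?;:"                 # illustrative set of marks

candidates = [
    base + "".join(digits) + mark
    for digits in product("0123456789", repeat=4)
    if "6" in digits                   # at least one six
    for mark in punctuation
]

# 10^4 - 9^4 = 3,439 digit strings with a six, times 6 marks.
print(len(candidates))   # 20634 -- trivially brute-forceable
```

Without the hints, a password of letters, digits, and punctuation of that length would be an enormous keyspace; with them, the search finishes instantly.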
Interesting paper: Paul J. Freitas (2012), "Passenger aviation security, risk management, and simple physics," Journal of Transportation Security.
Abstract: Since the September 11, 2001 suicide hijacking attacks on the United States, preventing similar attacks from recurring has been perhaps the most important goal of aviation security. In addition to other measures, the US government has increased passenger screening requirements to unprecedented levels. This has raised a number of concerns regarding passenger safety from radiation risks associated with airport body scanners, psychological trauma associated with pat-down searches, and general cost/benefit analysis concerns regarding security measures. Screening changes, however, may not be the best way to address the safety and security issues exposed by the September 11 attacks. Here we use simple physics concepts (kinetic energy and chemical potential energy) to evaluate the relative risks from crash damage for various aircraft types. A worst-case jumbo jet crash can result in an energy release comparable to that of a small nuclear weapon, but other aircraft types are considerably less dangerous. Understanding these risks suggests that aircraft with lower fuel capacities, speeds, and weights pose substantially reduced risk over other aircraft types. Lower-risk aircraft may not warrant invasive screening as they pose less risk than other risks commonly accepted in American society, like tanker truck accidents. Allowing passengers to avoid invasive screening for lower-risk aircraft would introduce competition into passenger aviation that might lead to better overall improvements in security and general safety than passenger screening alone is capable of achieving.
The full paper is behind a paywall, but here is a preprint.
This essay is definitely thinking along the correct directions.
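The paper's kinetic-plus-chemical energy comparison is easy to reproduce with back-of-the-envelope numbers. The aircraft figures below are my rough approximations for a fully fueled jumbo jet and a small regional turboprop, not values taken from the paper:

```python
# Crash energy = kinetic energy + chemical energy of the fuel.
# All aircraft figures are rough approximations for illustration.
JET_FUEL_J_PER_KG = 43e6       # specific energy of jet fuel
KT_TNT_J = 4.184e12            # one kiloton of TNT, in joules

def crash_energy_joules(mass_kg, speed_ms, fuel_kg):
    kinetic = 0.5 * mass_kg * speed_ms ** 2
    chemical = fuel_kg * JET_FUEL_J_PER_KG
    return kinetic + chemical

jumbo = crash_energy_joules(mass_kg=400_000, speed_ms=250, fuel_kg=170_000)
regional = crash_energy_joules(mass_kg=20_000, speed_ms=140, fuel_kg=4_500)

print(f"jumbo:    {jumbo / KT_TNT_J:.2f} kt TNT")
print(f"regional: {regional / KT_TNT_J:.3f} kt TNT")
```

With these assumptions the jumbo jet comes out near 1.75 kilotons, comparable to a small nuclear weapon as the abstract says, while the regional aircraft is well under a twentieth of that, with the fuel load, not the kinetic energy, dominating both totals.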
The error rate for hand-counted ballots is about two percent.
All voting systems have nonzero error rates. This doesn't surprise technologists, but it does surprise the general public. There's a myth out there that elections are perfectly accurate, down to the single vote. They're not. If the margin is within a few percentage points, the result is likely a statistical tie. (The problem, of course, is that elections must produce a single winner.)
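A quick illustration of why close results are statistical ties (the numbers are mine, chosen for illustration, and the model is deliberately simplistic -- errors won't all break one way, but the magnitudes are the point):

```python
# With a 2% per-ballot error rate, the pool of potentially
# miscounted ballots can dwarf a narrow victory margin.
ballots = 1_000_000
error_rate = 0.02        # roughly the hand-count rate cited above
margin_pct = 0.005       # a "close" race: half a percentage point

potentially_miscounted = ballots * error_rate   # 20,000 ballots
margin_votes = ballots * margin_pct             # 5,000 votes

print(margin_votes < potentially_miscounted)    # True: inside the noise
```

When the margin is a quarter of the plausible miscount, declaring a definitive winner is a legal necessity, not a statistical conclusion.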
In 2005, I wrote an essay called "The Failure of Two-Factor Authentication," where I predicted that attackers would get around multi-factor authentication systems with tools that attack the transactions in real time: man-in-the-middle attacks and Trojan attacks against the client endpoint.
This BBC article describes exactly that:
After logging in to the bank's real site, account holders are being tricked by the offer of training in a new "upgraded security system".
Money is then moved out of the account but this is hidden from the user.
Called a Man in the Browser (MitB) attack, the malware lives in the web browser and can get between the user and the website, altering what is seen and changing details of what is being entered.
The solution is to authenticate the transaction, not the person.
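One way to authenticate the transaction is to bind a short confirmation code to the transaction details themselves, computed on a device the malware cannot reach. A minimal sketch using an HMAC (the key handling, code length, and message format are illustrative assumptions, not any bank's actual protocol):

```python
import hmac
import hashlib

# Shared key lives in a separate trusted device (card reader,
# phone app), not in the browser the malware controls.
KEY = b"device-specific secret"   # illustrative only

def transaction_code(payee: str, amount: str) -> str:
    """Short confirmation code bound to the transaction details."""
    msg = f"{payee}|{amount}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()[:8]

# The user's device computes the code over the details it displays;
# the bank recomputes it over what it actually received.  If malware
# in the browser changed the payee, the codes won't match.
user_code = transaction_code("Alice", "100.00")
bank_code = transaction_code("Mallory", "100.00")   # tampered payee
print(user_code == bank_code)   # False: tampering is detected
```

Because the code depends on the payee and amount rather than on who is logged in, a man-in-the-browser that silently rewrites the transaction invalidates the code, which is exactly what authenticating the person alone cannot achieve.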
EDITED TO ADD (2/6): Another link.
It's called Squid.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Reuters discovered the information:
The VeriSign attacks were revealed in a quarterly U.S. Securities and Exchange Commission filing in October that followed new guidelines on reporting security breaches to investors. It was the most striking disclosure to emerge in a review by Reuters of more than 2,000 documents mentioning breach risks since the SEC guidance was published.
The company, unsurprisingly, is saying nothing.
VeriSign declined multiple interview requests, and senior employees said privately that they had not been given any more details than were in the filing. One said it was impossible to tell if the breach was the result of a concerted effort by a national power, though that was a possibility. "It's an ugly, slim sliver of facts. It's not enough," he said.
Are we finally ready to accept that the certificate system is completely broken?
Really good article on the huge incarceration rate in the U.S., its causes, its effects, and its value:
Over all, there are now more people under "correctional supervision" in America -- more than six million -- than were in the Gulag Archipelago under Stalin at its height. That city of the confined and the controlled, Lockuptown, is now the second largest in the United States.
The accelerating rate of incarceration over the past few decades is just as startling as the number of people jailed: in 1980, there were about two hundred and twenty people incarcerated for every hundred thousand Americans; by 2010, the number had more than tripled, to seven hundred and thirty-one. No other country even approaches that. In the past two decades, the money that states spend on prisons has risen at six times the rate of spending on higher education.
The trouble with the Bill of Rights, he argues, is that it emphasizes process and procedure rather than principles. The Declaration of the Rights of Man says, Be just! The Bill of Rights says, Be fair! Instead of announcing general principles -- no one should be accused of something that wasn't a crime when he did it; cruel punishments are always wrong; the goal of justice is, above all, that justice be done -- it talks procedurally. You can't search someone without a reason; you can't accuse him without allowing him to see the evidence; and so on. This emphasis, Stuntz thinks, has led to the current mess, where accused criminals get laboriously articulated protection against procedural errors and no protection at all against outrageous and obvious violations of simple justice. You can get off if the cops looked in the wrong car with the wrong warrant when they found your joint, but you have no recourse if owning the joint gets you locked up for life. You may be spared the death penalty if you can show a problem with your appointed defender, but it is much harder if there is merely enormous accumulated evidence that you weren't guilty in the first place and the jury got it wrong. Even clauses that Americans are taught to revere are, Stuntz maintains, unworthy of reverence: the ban on "cruel and unusual punishment" was designed to protect cruel punishments -- flogging and branding -- that were not at that time unusual.
The author mentions the rise of for-profit businesses running prisons in the U.S., but I don't think he makes the point strongly enough. There is now a corporate interest lobbying for such things as mandatory minimum sentencing.
Brian C. Kalt (2005), "The Perfect Crime," Georgetown Law Journal, Vol. 93, No. 2.
This article argues that there is a 50-square-mile swath of Idaho in which one can commit felonies with impunity. This is because of the intersection of a poorly drafted statute with a clear but neglected constitutional provision: the Sixth Amendment's Vicinage Clause. Although lesser criminal charges and civil liability still loom, the remaining possibility of criminals going free over a needless technical failure by Congress is difficult to stomach. No criminal defendant has ever broached the subject, let alone faced the numerous (though unconvincing) counterarguments. This shows that vicinage is not taken seriously by lawyers or judges. Still, Congress should close the Idaho loophole, not pretend it does not exist.
Photo of Bruce Schneier by Per Ervland.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.