Schneier on Security
A blog covering security and security technology.
March 2009 Archives
In the United States, the concept of "expectation of privacy" matters because it's the constitutional test, based on the Fourth Amendment, that governs when and how the government can invade your privacy.
Based on the 1967 Katz v. United States Supreme Court decision, this test actually has two parts. First, the government's action can't contravene an individual's subjective expectation of privacy; and second, that expectation of privacy must be one that society in general recognizes as reasonable. That second part isn't based on anything like polling data; it is more of a normative idea of what level of privacy people should be allowed to expect, given the competing importance of personal privacy on one hand and the government's interest in public safety on the other.
The problem is, in today's information society, that this test will rapidly leave us with no privacy at all.
In Katz, the Court ruled that the police could not eavesdrop on a phone call without a warrant: Katz expected his phone conversations to be private, and this expectation resulted from a reasonable balance between personal privacy and societal security. Given the NSA's large-scale warrantless eavesdropping, and the previous administration's continual insistence that it was necessary to keep America safe from terrorism, is it still reasonable to expect that our phone conversations are private?
Between the NSA's massive internet eavesdropping program and Gmail's content-dependent advertising, does anyone actually expect their e-mail to be private? Between calls for ISPs to retain user data and companies serving content-dependent web ads, does anyone expect their web browsing to be private? Between the various computer-infecting malware, and world governments increasingly demanding to see laptop data at borders, hard drives are barely private. I certainly don't believe that my SMSes, any of my telephone data, or anything I say on LiveJournal or Facebook -- regardless of the privacy settings -- is private.
Aerial surveillance, data mining, automatic face recognition, terahertz radar that can "see" through walls, wholesale surveillance, brain scans, RFID, "life recorders" that save everything: Even if society still has some small expectation of digital privacy, that will change as these and other technologies become ubiquitous. In short, the problem with a normative expectation of privacy is that it changes with perceived threats, technology and large-scale abuses.
Clearly, something has to change if we are to be left with any privacy at all. Three legal scholars have written law review articles that wrestle with the problems of applying the Fourth Amendment to cyberspace and to our computer-mediated world in general.
George Washington University's Daniel Solove, who blogs at Concurring Opinions, has tried to capture the byzantine complexities of modern privacy. He points out, for example, that the following privacy violations -- all real -- are very different: A company markets a list of 5 million elderly incontinent women; reporters deceitfully gain entry to a person's home and secretly photograph and record the person; the government uses a thermal sensor device to detect heat patterns in a person's home; and a newspaper reports the name of a rape victim. Going beyond simple definitions such as the divulging of a secret, Solove has developed a taxonomy of privacy violations, and of the harms that result from them.
His 16 categories are: surveillance, interrogation, aggregation, identification, insecurity, secondary use, exclusion, breach of confidentiality, disclosure, exposure, increased accessibility, blackmail, appropriation, distortion, intrusion and decisional interference. Solove's goal is to provide a coherent and comprehensive understanding of what is traditionally an elusive and hard-to-explain concept: privacy violations. (This taxonomy is also discussed in Solove's book, Understanding Privacy.)
Orin Kerr, also a law professor at George Washington University, and a blogger at Volokh Conspiracy, has attempted to lay out general principles for applying the Fourth Amendment to the internet. First, he points out that the traditional inside/outside distinction -- the police can watch you in a public place without a warrant, but not in your home -- doesn't work very well with regard to cyberspace. Instead, he proposes a distinction between content and non-content information: the body of an e-mail versus the header information, for example. The police should be required to get a warrant for the former, but not for the latter. Second, he proposes that search warrants should be written for particular individuals and not for particular internet accounts.
Meanwhile, Jed Rubenfeld of Yale Law School has tried to reinterpret the Fourth Amendment not in terms of privacy, but in terms of security. Pointing out that the whole "expectations" test is circular -- what the government does affects what the government can do -- he redefines everything in terms of security: the security that our private affairs are private.
This security is violated when, for example, the government makes widespread use of informants, or engages in widespread eavesdropping -- even if no one's privacy is actually violated. This neatly bypasses the whole individual privacy versus societal security question -- a balancing that the individual usually loses -- by framing both sides in terms of personal security.
I have issues with all of these articles. Solove's taxonomy is excellent, but the sense of outrage that accompanies a privacy violation -- "How could they know/do/say that!?" -- is an important part of the harm resulting from a privacy violation. The non-content information that Kerr believes should be collectible without a warrant can be very private and personal: URLs can be very personal, and it's possible to figure out browsed content just from the size of encrypted SSL traffic. Also, the ease with which the government can collect all of it -- the calling and called party of every phone call in the country -- makes the balance very different. I believe these need to be protected with a warrant requirement. Rubenfeld's reframing is interesting, but the devil is in the details. Reframing privacy in terms of security still results in a balancing of competing rights. I'd rather take the approach of stating the -- obvious to me -- individual and societal value of privacy, and giving privacy its rightful place as a fundamental human right. (There's additional commentary on Rubenfeld's thesis at ArsTechnica.)
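The SSL traffic-analysis point can be made concrete with a toy sketch. Even when content is encrypted, ciphertext *sizes* leak which page was fetched. The page catalog below is invented for illustration; real website-fingerprinting attacks use many features (packet sizes, timing, direction), not a single length.

```python
# Toy traffic-analysis attack: match observed encrypted response sizes
# against a catalog of known page sizes. All sizes here are made up.

# An eavesdropper pre-builds the catalog by fetching candidate pages.
page_sizes = {
    "/news/front-page": 48213,
    "/medical/condition-x": 17902,
    "/jobs/resignation-letter-template": 9310,
}

def guess_page(observed_encrypted_bytes: int, tolerance: int = 64):
    """Return the catalog page whose size best matches the observation."""
    for page, size in page_sizes.items():
        # Cipher padding and headers shift the size a little,
        # hence the tolerance window.
        if abs(observed_encrypted_bytes - size) <= tolerance:
            return page
    return None  # no catalog entry matches

# The eavesdropper sees only that ~17,950 encrypted bytes went by...
print(guess_page(17950))  # ...and infers the page without decrypting anything
```

The point of the sketch: nothing here touches the plaintext, which is why "non-content" metadata can be every bit as revealing as content.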
The trick here is to realize that a normative definition of the expectation of privacy doesn't need to depend on threats or technology, but rather on what we -- as society -- decide it should be. Sure, today's technology makes it easier than ever to violate privacy. But it doesn't necessarily follow that we have to violate privacy. Today's guns make it easier than ever to shoot virtually anyone for any reason. That doesn't mean our laws have to change.
No one knows how this will shake out legally. These three articles are from law professors; they're not judicial opinions. But clearly something has to change, and ideas like these may someday form the basis of new Supreme Court decisions that bring legal notions of privacy into the 21st century.
This essay originally appeared on Wired.com.
The story broke in The New York Times yesterday:
In a report to be issued this weekend, the researchers said that the system was being controlled from computers based almost exclusively in China, but that they could not say conclusively that the Chinese government was involved.
The Chinese government denies involvement. It's probably true; these networks tend to be run by amateur hackers with the tacit approval of the government, not the government itself. I wrote this on the topic last year.
The evidence that the hackers are Chinese is only circumstantial.
And here's the report, from the University of Toronto.
Good commentary by James Fallows:
My guess is that the "convenient instruments" hypothesis will eventually prove to be true (versus the "centrally controlled plot" scenario), if the "truth" of the case is ever fully determined. For reasons the Toronto report lays out, the episode looks more like the effort of groups of clever young hackers than a concentrated project of the People's Liberation Army cyberwar division. But no one knows for certain, and further information about the case is definitely worth following.
There's another paper, released at the same time on the same topic, from Cambridge University. It makes more pointed claims about the attackers and their origins, claims I'm not sure can be supported from the evidence.
In this note we described how agents of the Chinese government compromised the computing infrastructure of the Office of His Holiness the Dalai Lama.
EDITED TO ADD (3/30): An interview with the University of Toronto researchers.
EDITED TO ADD (4/1): The Chinese government denies involvement.
EDITED TO ADD (4/1): My essay from last year on Chinese hacking.
If you conduct infrequent, small transactions, you'll never lose much money, and it's not worth trying to protect yourself: you'll sometimes get scammed, but you'll have no trouble affording the losses.
From Muppet Labs:
How many times have you awakened at night in the dark and said to yourself..."Is there a gorilla in here?" And how many people do you know whose vacations were ruined because they were eaten by undetected gorillas?
According to The Age in Australia:
"We would have to pay a lot of money," said Sephery-Rad, noting that most of the government's estimated one million PCs and the country's total of six to eight million computers were being run almost exclusively on the Windows platform.
This is impressive:
It is midnight on 22 September 2012 and the skies above Manhattan are filled with a flickering curtain of colourful light. Few New Yorkers have seen the aurora this far south but their fascination is short-lived. Within a few seconds, electric bulbs dim and flicker, then become unusually bright for a fleeting moment. Then all the lights in the state go out. Within 90 seconds, the entire eastern half of the US is without power.
Where you stand matters:
The two researchers have developed accurate physics-based models of a suicide bombing attack, including casualty levels and explosive composition. Their work also describes human shields available in the crowd with partial and full coverage in both two- and three-dimensional environments.
Presumably they also discovered where the attacker should stand to be as lethal as possible, but there's no indication that they published those results.
Chief Security Engineer Andrea Barisani and hardware hacker Daniele Bianco used a handmade laser microphone device and a photo diode to measure the vibrations, software for analyzing the spectrograms of frequencies from different keystrokes, as well as technology to apply the data to a dictionary to try to guess the words. They used a technique called dynamic time warping that's typically used for speech recognition applications, to measure the similarity of signals.
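The dynamic time warping step can be sketched in a few lines. This is a minimal illustration of the similarity measure itself, not the researchers' pipeline: their system compared spectrogram features of keystrokes, while this toy compares raw 1-D signals.

```python
# Toy dynamic time warping (DTW): a distance between two sequences that
# tolerates stretching and shifting in time -- exactly why it suits
# speech recognition and, here, comparing keystroke sounds.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # skip a sample of a
                                  dp[i][j - 1],      # skip a sample of b
                                  dp[i - 1][j - 1])  # match samples
    return dp[n][m]

# Invented keystroke "signatures": same shape, different timing.
tap1 = [0, 1, 3, 1, 0]
tap2 = [0, 0, 1, 3, 1, 0]   # the same keystroke, slightly delayed
tap3 = [3, 0, 0, 0, 3]      # a different key entirely
print(dtw_distance(tap1, tap2))  # small: the signals align well
print(dtw_distance(tap1, tap3))  # larger: no good alignment exists
```

Because DTW lets the alignment stretch, two recordings of the same key pressed at slightly different speeds still score as similar, which a naive sample-by-sample comparison would miss.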
I think this is the first documented case of election fraud in the U.S. using electronic voting machines (there have been lots of documented cases of errors and voting problems, but this one involves actual maliciousness):
Five Clay County officials, including the circuit court judge, the county clerk, and election officers were arrested Thursday after they were indicted on federal charges accusing them of using corrupt tactics to obtain political power and personal gain.
Clay County uses the horrible ES&S iVotronic system for all of its votes at the polling place. The iVotronic is a touch-screen Direct Recording Electronic (DRE) device, offering no evidence, of any kind, that any vote has ever been recorded as per the voter's intent. If the allegations are correct here, there would likely have been no way to discover, via post-election examination of machines or election results, that votes had been manipulated on these machines.
The fraud itself is very low-tech, and didn't make use of any of the documented vulnerabilities in the ES&S iVotronic machines; it was basic social engineering. Matt Blaze explains:
The iVotronic is a popular Direct Recording Electronic (DRE) voting machine. It displays the ballot on a computer screen and records voters' choices in internal memory. Voting officials and machine manufacturers cite the user interface as a major selling point for DRE machines -- it's already familiar to voters used to navigating touchscreen ATMs, computerized gas pumps, and so on, and thus should avoid problems like the infamous "butterfly ballot". Voters interact with the iVotronic primarily by touching the display screen itself. But there's an important exception: above the display is an illuminated red button labeled "VOTE" (see photo at right). Pressing the VOTE button is supposed to be the final step of a voter's session; it adds their selections to their candidates' totals and resets the machine for the next voter.
Read the rest of Blaze's post for some good analysis on the attack and what it says about iVotronic. He led the team that analyzed the security of that very machine:
We found numerous exploitable security weaknesses in these machines, many of which would make it easy for a corrupt voter, pollworker, or election official to tamper with election results (see our report for details).
Me on electronic voting machines, from 2004.
Psychology Today on fear and the availability heuristic:
We use the availability heuristic to estimate the frequency of specific events. For example, how often are people killed by mass murderers? Because higher frequency events are more likely to occur at any given moment, we also use the availability heuristic to estimate the probability that events will occur. For example, what is the probability that I will be killed by a mass murderer tomorrow?
I've written about this sort of thing here.
Much of this research focuses on "micromechanical" devices -- tiny sensors that have microscopic probes on which airborne chemical vapors deposit. When the right chemicals find the surface of the sensors, they induce tiny mechanical motions, and those motions create electronic signals that can be measured.
Here's the paper, behind a paywall.
You just can't make this stuff up:
Buildings were evacuated, a street was cordoned off and a bomb disposal team called in after workmen spotted a suspicious object.
EDITED TO ADD (3/20): Lest you think this is tabloid hyperbole, here's the story in a more respectable newspaper.
"Book theft is very hard to quantify because very often pages are cut and it's not noticed for years," says Rapley. "Often we come across pages from books [in hauls of recovered property] and we work back from there." The Museum Security Network, a Dutch-based, not-for-profit organisation devoted to co-ordinating efforts to combat this type of theft, estimates that only 2 to 5 per cent of stolen books are recovered, compared with about half of stolen paintings.
Janis Gold: I isolated the data Renee uploaded to Bauer but I can't get past the file header.
O'Brian spends just over 30 seconds at the keyboard.
This is the second time Blowfish has appeared on the show. It was broken the first time, too.
EDITED TO ADD (4/14): Avi Rubin comments.
Fingerprinting Blank Paper Using Commodity Scanners
The Bayer company is refusing to talk about a fatal accident at a West Virginia plant, citing a 2002 terrorism law.
CSB had intended to hear community concerns, gather more information on the accident, and inform residents of the status of its investigation. However, Bayer attorneys contacted CSB Chairman John Bresland and set up a Feb. 12 conference at the board's Washington, D.C., headquarters. There, they warned CSB not to reveal details of the accident or the facility's layout at the community meeting.
This isn't the first time that the specter of terrorism has been used to keep embarrassing information secret.
EDITED TO ADD (3/20): The meeting has been rescheduled. No word on how forthcoming Bayer will be.
Interesting piece of cryptographic history: a cipher designed by Robert Patterson and sent to Thomas Jefferson. The full story is behind a paywall.
EDITED TO ADD (4/14): The cipher itself is here.
It happens; sometimes they die.
"Death by hyperthermia" is the official designation. When it happens to young children, the facts are often the same: An otherwise loving and attentive parent one day gets busy, or distracted, or upset, or confused by a change in his or her daily routine, and just... forgets a child is in the car. It happens that way somewhere in the United States 15 to 25 times a year, parceled out through the spring, summer and early fall.
It's a fascinating piece of reporting, with some interesting security aspects. We protect against a common risk, and increase the chances of a rare risk:
Two decades ago, this was relatively rare. But in the early 1990s, car-safety experts declared that passenger-side front airbags could kill children, and they recommended that child seats be moved to the back of the car; then, for even more safety for the very young, that the baby seats be pivoted to face the rear.
There is a theory of why we forget something so important: dropping off the baby is routine:
The human brain, he says, is a magnificent but jury-rigged device in which newer and more sophisticated structures sit atop a junk heap of prototype brains still used by lower species. At the top of the device are the smartest and most nimble parts: the prefrontal cortex, which thinks and analyzes, and the hippocampus, which makes and holds on to our immediate memories. At the bottom is the basal ganglia, nearly identical to the brains of lizards, controlling voluntary but barely conscious actions.
There are technical solutions:
In 2000, Chris Edwards, Terry Mack and Edward Modlin began to work on just such a product after one of their colleagues, Kevin Shelton, accidentally left his 9-month-old son to die in the parking lot of NASA Langley Research Center in Hampton, Va. The inventors patented a device with weight sensors and a keychain alarm. Based on aerospace technology, it was easy to use; it was relatively cheap, and it worked.
There's talk of making this a mandatory safety feature, but nothing about the cost per lives saved. (In general, a regulatory goal is between $1 million and $10 million per life saved.)
And there's the question of whether someone who accidentally leaves a baby in the car, resulting in the baby's death, should be prosecuted as a criminal.
EDITED TO ADD (4/14): Tips to prevent this kind of tragedy.
What Loopt — and now Google — are asserting is this: when you tell your friends where you are, you are using a public conveyance to communicate privately. And, just as it would if it wanted to record your phone call or read your e-mail, the government needs to get a wiretap order. That's even tougher to get than a search warrant.
This site lets you build your own squid and let it loose in a virtual environment. You can even come back later and visit your squid.
Many can be opened with a default admin password:
Here's a fun little tip: You can open most Sentex key pad-access doors by typing in the following code:
When I was growing up, children were commonly taught: "don't talk to strangers." Strangers might be bad, we were told, so it's prudent to steer clear of them.
And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.
These two pieces of advice may seem to contradict each other, but they don't. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it's not a random choice. It's more likely, although still unlikely, that the stranger is up to no good.
As a species, we tend to help each other, and a surprising amount of our security and safety comes from the kindness of strangers. During disasters: floods, earthquakes, hurricanes, bridge collapses. In times of personal tragedy. And even in normal times.
If you're sitting in a café working on your laptop and need to get up for a minute, ask the person sitting next to you to watch your stuff. He's very unlikely to steal anything. Or, if you're nervous about that, ask the three people sitting around you. Those three people don't know each other, and will not only watch your stuff, but they'll also watch each other to make sure no one steals anything.
Again, this works because you're selecting the people. If three people walk up to you in the café and offer to watch your computer while you go to the bathroom, don't take them up on that offer. Your odds of getting three honest people are much lower.
Some computer systems rely on the kindness of strangers, too. The Internet works because nodes benevolently forward packets to each other without any recompense from either the sender or receiver of those packets. Wikipedia works because strangers are willing to write for, and edit, an encyclopedia -- with no recompense.
Collaborative spam filtering is another example. Basically, once someone notices a particular e-mail is spam, he marks it, and everyone else in the network is alerted that it's spam. Marking the e-mail is a completely altruistic task; the person doing it gets no benefit from the action. But he receives benefit from everyone else doing it for other e-mails.
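The mechanism can be sketched in a few lines. Real collaborative filters (DCC, Vipul's Razor, and the like) use fuzzy checksums so that trivially mutated spam still matches; this toy uses an exact SHA-256 hash, so any change to the message evades it.

```python
# Toy collaborative spam filter: one user's "mark as spam" action
# propagates to everyone else through a shared set of fingerprints.
import hashlib

shared_spam_db = set()  # stands in for the network-wide database

def fingerprint(message: str) -> str:
    return hashlib.sha256(message.encode()).hexdigest()

def report_spam(message: str) -> None:
    """One altruistic user marks a message; every other user benefits."""
    shared_spam_db.add(fingerprint(message))

def is_spam(message: str) -> bool:
    return fingerprint(message) in shared_spam_db

report_spam("CHEAP WATCHES!!! click here")
print(is_spam("CHEAP WATCHES!!! click here"))  # flagged, thanks to a stranger
print(is_spam("Lunch on Tuesday?"))            # legitimate mail passes
```

Note the incentive structure visible even in the toy: `report_spam` costs the reporter a click and gains him nothing on that message; he profits only from everyone else's reports.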
Tor is a system for anonymous Web browsing. The details are complicated, but basically, a network of Tor servers passes Web traffic among each other in such a way as to anonymize where it came from. Think of it as a giant shell game. As a Web surfer, I put my Web query inside a shell and send it to a random Tor server. That server knows who I am but not what I am doing. It passes that shell to another Tor server, which passes it to a third. That third server -- which knows what I am doing but not who I am -- processes the Web query. When the Web page comes back to that third server, the process reverses itself and I get my Web page. Assuming enough Web surfers are sending enough shells through the system, even someone eavesdropping on the entire network can't figure out what I'm doing.
It's a very clever system, and it protects a lot of people, including journalists, human rights activists, whistleblowers, and ordinary people living in repressive regimes around the world. But it only works because of the kindness of strangers. No one gets any benefit from being a Tor server; it uses up bandwidth to forward other people's packets around. It's more efficient to be a Tor client and use the forwarding capabilities of others. But if there are no Tor servers, then there's no Tor. Tor works because people are willing to set themselves up as servers, at no benefit to them.
Alibi clubs work along similar lines. You can find them on the Internet, and they're loose collections of people willing to help each other out with alibis. Sign up, and you're in. You can ask someone to pretend to be your doctor and call your boss. Or someone to pretend to be your boss and call your spouse. Or maybe someone to pretend to be your spouse and call your boss. Whatever you want, just ask and some anonymous stranger will come to your rescue. And because your accomplice is an anonymous stranger, it's safer than asking a friend to participate in your ruse.
There are risks in these sorts of systems. Regularly, marketers and other people with agendas try to manipulate Wikipedia entries to suit their interests. Intelligence agencies can, and almost certainly have, set themselves up as Tor servers to better eavesdrop on traffic. And a do-gooder could join an alibi club just to expose other members. But for the most part, strangers are willing to help each other, and systems that harvest this kindness work very well on the Internet.
This essay originally appeared on the Wall Street Journal website.
Blaming the victim is common in IT: users are to blame because they don't patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.
People regularly don't do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it's unlikely to matter. No feedback, no learning.
We've tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren't generally felt immediately.
This makes security primarily a hindrance to the user. It's a recurring obstacle: something that interferes with the seamless performance of the user's task. And it's human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren't obvious, then people will naturally do it.
This is the problem with Microsoft's User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they're running. But the security prompts pop up too frequently, and there's rarely any ill-effect from ignoring them. So people do ignore them.
This doesn't mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.
The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.
For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren't really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It's easier to use it than not.
For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it's not defenceless.
Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.
The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.
This essay originally appeared in The Guardian.
Read the whole thing:
He took the elevator, descending two floors underground to a small, claustrophobic room--the vault antechamber. A 3-ton steel vault door dominated the far wall. It alone had six layers of security. There was a combination wheel with numbers from 0 to 99. To enter, four numbers had to be dialed, and the digits could be seen only through a small lens on the top of the wheel. There were 100 million possible combinations.
Definitely a movie plot.
There are zillions of locksmiths in New York City.
Not really; this is the latest attempt by phony locksmiths to steer business to themselves:
This is one of the scary parts they have a near monopoly on the cell phone 411 system. They have filled the data bases with so many phony address listings in most major citys that when you call 411 on your cell phone ( which most people do now) you will get the same counterfiet locksmiths over and over again. you could ask for 10 listings and they will all be one of these scammers or another with some local adress that is phony. they use thousands of different names also. It is always the same 55.00 service qouted for a lockout and after they unlock your stuff the price goes much higher. These companys are really not in the rural areas but the are in just about all major citys from coast to coast and from top to bottom. [sic]
Google wasn't their first target. The "blackhats" in the industry have used whatever marketing vehicle was "au courant," whether it was the phone books, 411 or now Google and Yahoo.
Fascinating history of an illegal industry:
Today's schemes are technologically demanding and extremely complex. It starts with renting computer servers in several countries. First, the carders obtain credit cards and customer identities illegally. These data are then passed to forgers, who manufacture convincing official documents that can be used for identification. The identities and credit card information are then sold as "credit card kits" to operators. There is an alternative that requires no stolen card: in the U.S. one can buy so-called Visa or MasterCard gift cards, preloaded with a fixed amount of money, though these are usually usable only in the U.S. Since the gift cards can be bought anonymously, they are used to pay online under fake identities. Using a false identity and a working credit card, servers are then rented and domains purchased in the name of an existing, unsuspecting person. Most of the time an ID is required, in which case a forged document is simply sent. There is yet another alternative: a payment system called WebMoney (webmoney.ru), which is as widespread in Eastern Europe as PayPal is in Western Europe. Again, accounts are opened under false identities. The rest of the business is then very simple: one buys domains and rents servers via WebMoney and pays with it.
First Baptist Church in Maryville, Illinois, had a security plan in place when a gunman walked into services Sunday morning and killed Pastor Fred Winters, said Tim Lawson, another pastor at the church.
Sounds like those plans didn't make much of a difference.
And does anyone really believe that security checkpoints at hotel entrances will make any difference at all?
Wikileaks has cracked the encryption to a key document relating to the war in Afghanistan. The document, titled "NATO in Afghanistan: Master Narrative", details the "story" NATO representatives are to give to, and to avoid giving to, journalists. An unrelated leaked photo from the war: a US soldier poses with a dead Afghan man in the hills of Afghanistan.
This article gives an overview of U.S. military robots, and discusses some of the issues surrounding their use in war:
As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties. Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: "Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required."
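The quoted checklist is, in effect, a decision procedure. A minimal sketch of it in Python, with every name, threshold, and rule taken from the hypothetical example above rather than from any real targeting system:

```python
from dataclasses import dataclass

# Illustrative only: the target list, 200-meter radius, and decision
# labels all come from the article's hypothetical checklist.
AUTHORIZED_TARGETS = {"T-80"}

@dataclass
class Contact:
    target_type: str            # e.g. "T-80"
    in_free_fire_zone: bool
    friendlies_within_200m: int
    civilians_within_200m: int

def engagement_decision(c: Contact) -> str:
    """Apply each rule in order; any failure vetoes the engagement."""
    if c.target_type not in AUTHORIZED_TARGETS:
        return "DENY"                   # not an authorized target type
    if not c.in_free_fire_zone:
        return "DENY"                   # outside authorized area
    if c.civilians_within_200m > 0:
        return "REQUIRE_HUMAN_INPUT"    # civilians detected: defer to a human
    if c.friendlies_within_200m > 0:
        return "DENY"                   # friendly units too close
    return "WEAPONS_RELEASE_AUTHORIZED"
```

Note that the civilian check defers to a human operator rather than denying outright, mirroring the article's suggestion that the robot "require human input if any civilians were detected."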
The article was adapted from P.W. Singer's book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet.
Related is this paper on the ethics of autonomous military robots.
Here's a clever attack, exploiting relative delays in eBay, PayPal, and UPS shipping:
The buyer reported the item as "destroyed" and demanded and got a refund from Paypal. When the buyer shipped it back to Chad and he opened it, he found there was nothing wrong with it -- except that the scammer had removed the memory, processor and hard drive. Now Chad is out $500 and left with a shell of a computer, and since the item was "received" Paypal won't do anything.
Very clever. The seller accepted the return from UPS after a visual inspection, so UPS considered the matter closed. PayPal and eBay both considered the matter closed. If the amount were large enough, the seller could sue, but how could he prove that the computer was functional when he sold it?
It seems to me that the only way to solve this is for PayPal to not process refunds until the seller confirms what he received back is the same as what he shipped. Yes, then the seller could commit similar fraud, but sellers (certainly professional ones) have a greater reputational risk.
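The proposed fix amounts to an escrow step in the dispute flow. A toy sketch of that flow, with all class and method names invented for illustration (this is not PayPal's actual process):

```python
class ReturnCase:
    """Hold the refund until the seller confirms the returned item
    matches what was shipped -- the change proposed above."""

    def __init__(self, refund_amount: int):
        self.refund_amount = refund_amount
        self.state = "RETURN_SHIPPED"   # buyer has sent the item back

    def seller_confirms_item_intact(self) -> int:
        # Seller inspects the return and verifies nothing was swapped out;
        # only then is the refund released to the buyer.
        self.state = "REFUND_RELEASED"
        return self.refund_amount

    def seller_disputes_return(self) -> int:
        # Seller claims parts were removed; escalate to a human reviewer
        # instead of paying out automatically.
        self.state = "DISPUTED"
        return 0
```

The design choice is simply which side bears the fraud risk by default: holding the refund shifts it from the seller to the buyer, which the post argues is safer because professional sellers have more reputation at stake.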
I'm sure you need some skill to actually use this, and I'm also sure it'll get through airport security checkpoints just fine.
EDITED TO ADD (3/12): More info.
This is scary:
Sir David Omand, the former Whitehall security and intelligence co-ordinator, sets out a blueprint for the way the state will mine data -- including travel information, phone records and emails -- held by public and private bodies and admits: "Finding out other people's secrets is going to involve breaking everyday moral rules."
In short: it's immoral, but we're going to do it anyway.
University of Miami law professor Michael Froomkin writes about ID cards and society in "Identity Cards and Identity Romanticism."
This book chapter for "Lessons from the Identity Trail: Anonymity, Privacy and Identity in a Networked Society" (New York: Oxford University Press, 2009)—a forthcoming comparative examination of approaches to the regulation of anonymity edited by Ian Kerr—discusses the sources of hostility to National ID Cards in common law countries. It traces that hostility in the United States to a romantic vision of free movement and in England to an equally romantic vision of the 'rights of Englishmen'.
One small excerpt:
...it is important to note that each ratchet up in an ID card regime—the introduction of a non-mandatory ID card scheme, improvements to authentication, the transition from an optional regime to a mandatory one, or the inclusion of multiple biometric identifiers—increases the need for attention to how the data collected at the time the card is created will be stored and accessed. Similarly, as ID cards become ubiquitous, a de facto necessity even when not required de jure, the card becomes the visible instantiation of a large, otherwise unseen, set of databases. If each use of the card also creates a data trail, the resulting profile becomes an ongoing temptation to both ordinary and predictive profiling.
Beet armyworm caterpillars react to the sound of a passing wasp by freezing in place, or even dropping off the plant. Unfortunately, armyworm intelligence isn't good enough to tell the difference between enemy aircraft (the wasps that prey on them) and harmless commercial flights (bees); they react the same way to either. So by producing nectar for bees, plants not only get pollinated, but also gain some protection against being eaten by caterpillars.
The small hive beetle lives by entering beehives to steal combs and honey. They home in on the hives by detecting the bees' own alarm pheromones. They also track in yeast that ferments the pollen and releases chemicals that spoof the alarm pheromones, attracting more beetles and more yeast. Eventually the bees abandon the hive, leaving their store of pollen and honey to the beetles and yeast.
Mountain alcon blue caterpillars get ants to feed them by spoofing a biometric: the sounds made by the queen ant.
This is an interesting case:
At issue in this case is whether forcing Boucher to type in that PGP passphrase--which would be shielded from and remain unknown to the government--is "testimonial," meaning that it triggers Fifth Amendment protections. The counterargument is that since defendants can be compelled to turn over a key to a safe filled with incriminating documents, or provide fingerprints, blood samples, or voice recordings, unlocking a partially-encrypted hard drive is no different.
An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.
I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision. All security decisions are trade-offs, but the motivations behind them are not always obvious: They're often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.
Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. "No touching" is a security measure as well, but it's security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives: Whole Foods wrote a corporate policy that benefited itself.
At least, it works as long as the police and other factors keep society's shoplifter population down to a reasonable level.
Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.
In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.
And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge of the decision used CYA security measures to prevent specific, known tactics rather than broad, general ones.
The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren't sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?
I read almost five years ago that prisoners were being held by the United States far longer than they should, because ''no one wanted to be responsible for releasing the next Osama bin Laden.'' That incentive to do nothing hasn't changed. It might have even gotten stronger, as these innocents languish in prison.
In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don't have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.
For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It's only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.
This essay originally appeared on Wired.com.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.