Schneier on Security
A blog covering security and security technology.
November 2009 Archives
The Psychology of Being Scammed
This is a very interesting paper: "Understanding scam victims: seven principles for systems security," by Frank Stajano and Paul Wilson. Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden camera demonstrations of con games. (There's no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.
The paper describes a dozen different con scenarios -- entertaining in itself -- and then lists and explains seven general psychological principles that con artists use:
1. The distraction principle. While you are distracted by what retains your interest, hustlers can do anything to you and you won't notice.
It all makes for very good reading.
EDITED TO ADD (12/12): Some of the episodes of The Real Hustle are available on the BBC site, but only to people with UK IP addresses -- or people with a VPN tunnel to the UK.
Friday Squid Blogging: Two Squid T-Shirts
Fear and Public Perception
He's talking about the role fear plays in the perception of nuclear power. It's a lot of the sorts of things I say, but particularly interesting is this bit on familiarity and how it reduces fear:
You see, we sited these plants away from metropolitan areas to "protect the public" from the dangers of nuclear power. What we did when we did that was move the plants away from the people, so they became unfamiliar. The major health effect, adverse health effect of nuclear power is not radiation. It's fear. And by siting them away from the people, we insured that that would be maximized. If we're serious about health in relationship to nuclear power, we would put them in downtown, big cities, so people would see them all the time. That is really important, in terms of reducing the fear. Familiarity is the way fear is reduced. No question. It's not done intellectually. It's not done by reading a book. It's done by being there and seeing it and talking to the people who work there.
So, among other reasons, terrorism is scary because it's so rare. When it's more common -- England during the Troubles, Israel today -- people have a more rational reaction to it.
My recent essay on fear and overreaction.
Leaked 9/11 Text Messages
Wikileaks has published pager intercepts from New York on 9/11:
WikiLeaks released half a million US national text pager intercepts. The intercepts cover a 24 hour period surrounding the September 11, 2001 attacks in New York and Washington.
Near as I can tell, these messages are from the commercial pager networks of Arch Wireless, Metrocall, Skytel, and Weblink Wireless, and include all customers of those services: government, corporate, and personal.
There are lots of nuggets in the data about the government response to 9/11:
One string of messages hints at how federal agencies scrambled to evacuate to Mount Weather, the government's sort-of secret bunker buried under the Virginia mountains west of Washington, D.C. One message says, "Jim: DEPLOY TO MT. WEATHER NOW!," and another says "CALL OFICE (sic) AS SOON AS POSSIBLE. 4145 URGENT." That's the phone number for the Federal Emergency Management Agency's National Continuity Programs Directorate -- which is charged with "the preservation of our constitutional form of government at all times," even during a nuclear war. (A 2006 article in the U.K. Guardian newspaper mentioned a "traffic jam of limos carrying Washington and government license plates" heading to Mount Weather that day.)
Historians will certainly spend a lot of time poring over the messages, but I'm more interested in where they came from in the first place:
It's not clear how they were obtained in the first place. One possibility is that they were illegally compiled from the records of archived messages maintained by pager companies, and then eventually forwarded to WikiLeaks.
It's disturbing to realize that someone, possibly not even a government, was routinely intercepting most (all?) of the pager data in lower Manhattan as far back as 2001. Who was doing it? For what purpose? That, we don't know.
Mumbai Terrorist Attacks
Long, detailed, and very good story of the Mumbai terrorist attacks of last year.
My own short commentary in the aftermath of the attacks.
Virtual Mafia in Online Worlds
If you allow players in an online world to penalize each other, you open the door to extortion:
One of the features that supported user socialization in the game was the ability to declare that another user was a trusted friend. The feature involved a graphical display that showed the faces of users who had declared you trustworthy outlined in green, attached in a hub-and-spoke pattern to your face in the center.
EDITED TO ADD (12/12): SIM Mafia existed in 2004.
Users Rationally Rejecting Security Advice
This paper, by Cormac Herley at Microsoft Research, argues that users who ignore security advice are often making a rational economic choice: the cost of following the advice exceeds the expected harm it prevents. Sounds like me.
EDITED TO ADD (12/12): Related article on usable security.
Norbt (no robot) is a low-security web application for creating encrypted web pages. The key is the answer to a question; anyone who knows the answer can see the page.
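The basic idea -- deriving an encryption key from the answer to a shared question -- is easy to sketch. This is not Norbt's actual implementation, just a minimal stdlib-only illustration using a PBKDF2-derived key and an HMAC-based keystream:

```python
import hashlib
import hmac
import os

def derive_key(answer: str, salt: bytes) -> bytes:
    # Normalize so "Fluffy " and "fluffy" unlock the same page.
    normalized = answer.strip().lower().encode()
    return hashlib.pbkdf2_hmac("sha256", normalized, salt, 200_000)

def keystream(key: bytes, length: int) -> bytes:
    # HMAC in counter mode as a simple stream cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_page(answer: str, page: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    ks = keystream(derive_key(answer, salt), len(page))
    return salt, bytes(a ^ b for a, b in zip(page, ks))

def decrypt_page(answer: str, salt: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(derive_key(answer, salt), len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))
```

The obvious weakness, and why "low-security" is the right label: the answer to a question like "what's my dog's name?" has far less entropy than a real key, so anyone willing to guess answers can brute-force the page.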
I'm not sure this is very useful.
Decertifying "Terrorist" Pilots
This article reads like something written by the company's PR team.
When it comes to sleuthing these days, knowing your way within a database is as valued a skill as the classic, Sherlock Holmes-styled powers of detection.
The algorithm seems to be little more than matching up names and other basic info:
It used its algorithm-detection software to sift out uncommon names such as Abdelbaset Ali Elmegrahi, aka the Lockerbie bomber. It found that a number of licensed airmen all had the same P.O. box as their listed address -- one that happened to be in Tripoli, Libya. These men all had working FAA certificates. And while the FAA database information investigated didn't contain date-of-birth information, Safe Banking was able to use content on the FAA Website to determine these key details as well, to further gain a positive and clear identification of the men in question.
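The matching the article describes -- grouping certificate records by a shared attribute like a mailing address -- takes only a few lines. A sketch with a hypothetical record schema (the FAA database fields are surely different):

```python
from collections import defaultdict

def shared_addresses(airmen, min_count=2):
    # Group certificate records by normalized mailing address and flag
    # any address shared by several certificate holders.
    by_address = defaultdict(list)
    for record in airmen:
        by_address[record["address"].strip().upper()].append(record["name"])
    return {addr: names for addr, names in by_address.items()
            if len(names) >= min_count}
```

This is also why the false-positive question matters: shared P.O. boxes and similar names are common, and a match like this is a lead, not an identification.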
In any case, they found these three people with pilot's licenses:
Elmegrahi, who had been posted on the FBI Most Wanted list for a decade and was convicted of blowing up Pan Am Flight 103, killing 259 people in 1988 over Lockerbie, Scotland. Elmegrahi was an FAA-certified aircraft dispatcher.
And the article concludes with:
Suffice to say, after the FAA was made aware of these criminal histories, all three men have since been decertified.
Although I'm all for annoying international arms dealers, does anyone know the procedures for FAA decertification? Did the FAA have the legal right to do this, after being "made aware" of some information by a third party?
Of course, they don't talk about all the false positives their system also found. How many innocents were also decertified? And they don't mention the fact that, in the 9/11 attacks, FAA certification wasn't really an issue. "Excuse me, young man. You can't hijack and fly this aircraft. It says right here that the FAA decertified you."
Al Qaeda Secret Code Broken
I would sure like to know more about this:
Top code-breakers at the Government Communications Headquarters in the United Kingdom have succeeded in breaking the secret language that has allowed imprisoned leaders of al-Qaida to keep in touch with other extremists in U.K. jails as well as 10,000 "sleeper agents" across the islands....
EDITED TO ADD: Here's a link to the story that still works. I didn't realize this came from WorldNetDaily, so take it with an appropriate amount of salt.
Friday Squid Blogging: New Squid Discovered
An expedition to study seamounts in the Indian Ocean has discovered some new species, including some squid.
Interview with Me
Yet another interview with me. This one is audio, and was conducted in Rotterdam in October.
FailBlog on Security
Funny: career fair fail.
EDITED TO ADD: See the caption on the original photo for the real story.
Denial-of-Service Attack Against CALEA
The researchers say they've found a vulnerability in U.S. law enforcement wiretaps, if only theoretical, that would allow a surveillance target to thwart the authorities by launching what amounts to a denial-of-service (DoS) attack against the connection between the phone company switches and law enforcement.
A Taxonomy of Social Networking Data
At the Internet Governance Forum in Sharm El Sheikh this week, there was a conversation on social networking data. Someone made the point that there are several different types of data, and it would be useful to separate them. This is my taxonomy of social networking data:
1. Service data. The data you give to a social networking site in order to use it: your legal name, your age, perhaps your credit-card number.
2. Disclosed data. What you post on your own pages: blog entries, photographs, messages, comments.
3. Entrusted data. What you post on other people's pages. It's basically the same as disclosed data, but you no longer control it once you post it.
4. Incidental data. What other people post about you: a paragraph about you, a picture of you that someone else took.
5. Behavioral data. Data the site collects about your habits by recording what you do and whom you do it with.
Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted -- I know one site that allows entrusted data to be edited or deleted within a 24-hour period -- and some cannot. Some can be viewed and some cannot.
And people should have different rights with respect to each data type. It's clear that people should be allowed to change and delete their disclosed data. It's less clear what rights they have for their entrusted data. And far less clear for their incidental data. If you post pictures of a party with me in them, can I demand you remove those pictures -- or at least blur out my face? And what about behavioral data? It's often a critical part of a social networking site's business model. We often don't mind if they use it to target advertisements, but are probably less sanguine about them selling it to third parties.
As we continue our conversations about what sorts of fundamental rights people have with respect to their data, this taxonomy will be useful.
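One way to make the taxonomy concrete is to model each data type with the rights attached to it. The flags below are illustrative assumptions, not any site's actual policy -- the point is that the rights vary by type, which is exactly what the taxonomy captures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataTypePolicy:
    # Hypothetical per-type rights for social networking data.
    name: str
    subject_can_edit: bool    # can the person the data is about change it?
    subject_can_delete: bool  # can they remove it?
    default_public: bool

POLICIES = [
    DataTypePolicy("disclosed", True, True, True),     # what you post yourself
    DataTypePolicy("entrusted", False, False, False),  # what you post on others' pages
    DataTypePolicy("incidental", False, False, True),  # what others post about you
    DataTypePolicy("behavioral", False, False, False), # what the site observes
]

def deletable_by_subject(policies):
    return [p.name for p in policies if p.subject_can_delete]
```

Under these example flags, only disclosed data is deletable by its subject -- which matches the intuition above that your rights over entrusted and incidental data are much weaker.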
EDITED TO ADD (12/12): Another categorization centered on destination instead of trust level.
Stabbing People with Stuff You Can Get Through Airport Security
"Use of a pig model to demonstrate vulnerability of major neck vessels to inflicted trauma from common household items," from the American Journal of Forensic Medical Pathology.
Abstract. Commonly available items including a ball point pen, a plastic knife, a broken wine bottle, and a broken wine glass were used to inflict stab and incised wounds to the necks of 3 previously euthanized Large White pigs. With relative ease, these items could be inserted into the necks of the pigs next to the jugular veins and carotid arteries. Despite precautions against the carrying of metal objects such as knives and nail files on board domestic and international flights, objects are still available within aircraft cabins that could be used to inflict serious and potentially life-threatening injuries. If airport and aircraft security measures are to be consistently applied, then consideration should be given to removing items such as glass bottles and glass drinking vessels. However, given the results of a relatively uncomplicated modification of a plastic knife, it may not be possible to remove all dangerous objects from aircraft. Security systems may therefore need to focus on measures such as increased surveillance of passenger behavior, rather than on attempting to eliminate every object that may serve as a potential weapon.
How Smart are Islamic Terrorists?
Organizational Learning and Islamic Militancy contains significant findings for counter-terrorism research and policy. Unlike existing studies, this report suggests that the relevant distinction in knowledge learned by terrorists is not between tacit and explicit knowledge, but metis and techne. Focusing on the latter sheds new insight into how terrorists acquire the experiential "know how" they need to perform their activities as opposed to abstract "know what" contained in technical bomb-making preparations. Drawing on interviews with bomb-making experts and government intelligence officials, the PI illustrates the critical difference between learning terrorism skills such as bomb-making and weapons firing by abstraction rather than by doing. Only the latter provides militants with the experiential, intuitive knowledge, in other words the metis, they need to actually build bombs, fire weapons, survey potential targets, and perform other terrorism-related activities. In making this case, the PI debunks current misconceptions regarding the Internet's perceived role as a source of terrorism knowledge.
Quantum Ghost Imaging
This is cool:
Ghost imaging is a technique that allows a high-resolution camera to produce an image of an object that the camera itself cannot see. It uses two sensors: one that looks at a light source and another that looks at the object. These sensors point in different directions. For example, the camera can face the sun and the light meter can face an object.
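A crude simulation shows why the correlation trick works. This models computational ghost imaging, the classical cousin of the quantum technique: random speckle patterns illuminate the object, a single-pixel "bucket" detector records only the total transmitted light, and correlating the bucket readings with the patterns reconstructs an image the bucket detector never resolved:

```python
import random

def ghost_image(obj, n_patterns=5000):
    # obj is a square 0/1 grid (1 = transmissive pixel).
    size = len(obj)
    patterns, buckets = [], []
    for _ in range(n_patterns):
        # Random illumination pattern and its single-pixel bucket reading.
        p = [[random.random() for _ in range(size)] for _ in range(size)]
        patterns.append(p)
        buckets.append(sum(p[i][j] * obj[i][j]
                           for i in range(size) for j in range(size)))
    mean_b = sum(buckets) / len(buckets)
    # Correlate each pixel's illumination with the bucket fluctuations.
    img = [[0.0] * size for _ in range(size)]
    for p, b in zip(patterns, buckets):
        for i in range(size):
            for j in range(size):
                img[i][j] += (b - mean_b) * p[i][j]
    return img
```

Pixels belonging to the object correlate with bright bucket readings and accumulate large values; background pixels average out to noise.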
Secret Knock Lock
Door lock that opens if you tap a particular rhythm.
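The matching logic behind such a lock is simple. A sketch that compares tap rhythms independent of tempo (the tolerance value is an arbitrary choice, and real builds have to cope with sensor jitter):

```python
def normalize(taps):
    # Turn absolute tap timestamps into intervals scaled to sum to 1,
    # so the same rhythm matches whether knocked quickly or slowly.
    intervals = [b - a for a, b in zip(taps, taps[1:])]
    total = sum(intervals)
    return [i / total for i in intervals]

def matches(secret, attempt, tolerance=0.05):
    # Accept only if every normalized interval is within tolerance.
    if len(secret) != len(attempt) or len(secret) < 3:
        return False
    return all(abs(s - a) <= tolerance
               for s, a in zip(normalize(secret), normalize(attempt)))
```

Like the Norbt page above a knock code has very little entropy, which is why these make better toys than locks.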
EDITED TO ADD (11/20): Another knock lock.
EDITED TO ADD (12/12): A version for cars.
A Useful Side-Effect of Misplaced Fear
A study in the British Journal of Criminology makes the point that drink-spiking date-raping is basically an urban legend:
Abstract. There is a stark contrast between heightened perceptions of risk associated with drug-facilitated sexual assault (DFSA) and a lack of evidence that this is a widespread threat. Through surveys and interviews with university students in the United Kingdom and United States, we explore knowledge and beliefs about drink-spiking and the linked threat of sexual assault. University students in both locations are not only widely sensitized to the issue, but substantial segments claim first- or second-hand experience of particular incidents. We explore students' understanding of the DFSA threat in relationship to their attitudes concerning alcohol, binge-drinking, and responsibility for personal safety. We suggest that the drink-spiking narrative has a functional appeal in relation to the contemporary experience of young women's public drinking.
In an article on the study in The Telegraph, the authors said:
Among young people, drink spiking stories have attractive features that could "help explain" their disproportionate loss of control after drinking alcohol, the study found.
Basically, the hypothesis is that perpetuating the fear of drug-rape allows parents and friends to warn young women off excessive drinking without criticizing their personal choices. The fake bogeyman lets people avoid talking about the real issues.
Anti-Malware Detection and the Original Trojan Horse
Public Reactions to Terrorist Threats
For the last five years we have researched the connection between times of terrorist threats and public opinion. In a series of tightly designed experiments, we expose subsets of research participants to a news story not unlike the type that aired last week. We argue that attitudes, evaluations, and behaviors change in at least three politically-relevant ways when terror threat is more prominent in the news. Some of these transformations are in accord with conventional wisdom concerning how we might expect the public to react. Others are more surprising, and more disconcerting in their implications for the quality of democracy.
Nothing surprising here. Fear makes people deferential, docile, and distrustful, and both politicians and marketers have learned to take advantage of this.
Jennifer Merolla and Elizabeth Zechmeister have written a book, Democracy at Risk: How Terrorist Threats Affect the Public. I haven't read it yet.
Bruce Schneier Action Figure
A month ago, ThatsMyFace.com approached me about making a Bruce Schneier action figure. It's $100. I'd like to be able to say something like "half the proceeds are going to EPIC and EFF," but they're not. That's the price for custom orders. I don't even get a royalty. The company is working on lowering the price, and they've said that they'll put a photograph of an actual example on the webpage. I've told them that at $100 no one will buy it, but at $40 it's a funny gift for your corporate IT person. So e-mail the company if you're interested, and if they get enough interest they'll do a bulk order.
Friday Squid Blogging: Sperm Whale Eating Giant Squid
Blowfish in Fiction
The algorithm is mentioned in Von Neumann's War, by John Ringo and Travis Taylor.
The guy was using a fairly simple buffer overflow attack but with a very nice little fillip of an encryption packet designed to overcome Blowfish. The point seemed to be to create a zero day exploit, which he didn't have a chance of managing. So far, nobody had cracked Blowfish.
As far as he could tell, at first, it was a simple Denial of Service attack. A DoS occurred when... But this one was different. Every single packet contained some sort of cracking program ... Most had dumped to the honey trap, but they were running rampant through there, while others had managed to hammer past two firewalls and were getting to his final line of defense. Somebody had managed a zero day exploit on Blowfish. And more were coming in!
Video Interview with Me
Here's an interview with me, conducted at the Information Security Decisions conference in Chicago in October.
Beyond Security Theater
Terrorism is rare, far rarer than many people think. It's rare because very few people want to commit acts of terrorism, and executing a terrorist plot is much harder than television makes it appear. The best defenses against terrorism are largely invisible: investigation, intelligence, and emergency response. But even these are less effective at keeping us safe than our social and political policies, both at home and abroad. However, our elected leaders don't think this way: they are far more likely to implement security theater against movie-plot threats.
A movie-plot threat is an overly specific attack scenario. Whether it's terrorists with crop dusters, terrorists contaminating the milk supply, or terrorists attacking the Olympics, specific stories affect our emotions more intensely than mere data does. Stories are what we fear. It's not just hypothetical stories: terrorists flying planes into buildings, terrorists with bombs in their shoes or in their water bottles, and terrorists with guns and bombs waging a co-ordinated attack against a city are even scarier movie-plot threats because they actually happened.
Security theater refers to security measures that make people feel more secure without doing anything to actually improve their security. An example: the photo ID checks that have sprung up in office buildings. No-one has ever explained why verifying that someone has a photo ID provides any actual security, but it looks like security to have a uniformed guard-for-hire looking at ID cards. Airport-security examples include the National Guard troops stationed at US airports in the months after 9/11 -- their guns had no bullets. The US colour-coded system of threat levels, the pervasive harassment of photographers, and the metal detectors that are increasingly common in hotels and office buildings since the Mumbai terrorist attacks, are additional examples.
To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it's impossible to defend every place against everything, and it's impossible to predict which tactic and target terrorists will try next.
Feeling and Reality
Security is both a feeling and a reality. The propensity for security theater comes from the interplay between the public and its leaders. When people are scared, they need something done that will make them feel safe, even if it doesn't truly make them safer. Politicians naturally want to do something in response to crisis, even if that something doesn't make any sense.
Often, this "something" is directly related to the details of a recent event: we confiscate liquids, screen shoes, and ban box cutters on airplanes. But it's not the target and tactics of the last attack that are important, but the next attack. These measures are only effective if we happen to guess what the next terrorists are planning. If we spend billions defending our rail systems, and the terrorists bomb a shopping mall instead, we've wasted our money. If we concentrate airport security on screening shoes and confiscating liquids, and the terrorists hide explosives in their brassieres and use solids, we've wasted our money. Terrorists don't care what they blow up and it shouldn't be our goal merely to force the terrorists to make a minor change in their tactics or targets.
Our penchant for movie plots blinds us to the broader threats. And security theater consumes resources that could better be spent elsewhere.
Any terrorist attack is a series of events: something like planning, recruiting, funding, practising, executing, aftermath. Our most effective defenses are at the beginning and end of that process -- intelligence, investigation, and emergency response -- and least effective when they require us to guess the plot correctly. By intelligence and investigation, I don't mean the broad data-mining or eavesdropping systems that have been proposed and in some cases implemented -- those are also movie-plot stories without much basis in actual effectiveness -- but instead the traditional "follow the evidence" type of investigation that has worked for decades.
Unfortunately for politicians, the security measures that work are largely invisible. Such measures include enhancing the intelligence-gathering abilities of the secret services, hiring cultural experts and Arabic translators, building bridges with Islamic communities both nationally and internationally, funding police capabilities -- both investigative arms to prevent terrorist attacks, and emergency communications systems for after attacks occur -- and arresting terrorist plotters without media fanfare. They do not include expansive new police or spying laws. Our police don't need any new laws to deal with terrorism; rather, they need apolitical funding. These security measures don't make good television, and they don't help, come re-election time. But they work, addressing the reality of security instead of the feeling.
The arrest of the "liquid bombers" in London is an example: they were caught through old-fashioned intelligence and police work. Their choice of target (airplanes) and tactic (liquid explosives) didn't matter; they would have been arrested regardless.
But even as we do all of this we cannot neglect the feeling of security, because it's how we collectively overcome the psychological damage that terrorism causes. It's not security theater we need, it's direct appeals to our feelings. The best way to help people feel secure is by acting secure around them. Instead of reacting to terrorism with fear, we -- and our leaders -- need to react with indomitability.
Refuse to Be Terrorized
By not overreacting, by not responding to movie-plot threats, and by not becoming defensive, we demonstrate the resilience of our society, in our laws, our culture, our freedoms. There is a difference between indomitability and arrogant "bring 'em on" rhetoric. There's a difference between accepting the inherent risk that comes with a free and open society, and hyping the threats.
We should treat terrorists like common criminals and give them all the benefits of true and open justice -- not merely because it demonstrates our indomitability, but because it makes us all safer. Once a society starts circumventing its own laws, the risks to its future stability are much greater than terrorism.
Supporting real security even though it's invisible, and demonstrating indomitability even though fear is more politically expedient, requires real courage. Demagoguery is easy. What we need is leaders willing both to do what's right and to speak the truth.
Despite fearful rhetoric to the contrary, terrorism is not a transcendent threat. A terrorist attack cannot possibly destroy a country's way of life; it's only our reaction to that attack that can do that kind of damage. The more we undermine our own laws, the more we convert our buildings into fortresses, the more we reduce the freedoms and liberties at the foundation of our societies, the more we're doing the terrorists' job for them.
We saw some of this in the Londoners' reaction to the 2005 transport bombings. Among the political and media hype and fearmongering, there was a thread of firm resolve. People didn't fall victim to fear. They rode the trains and buses the next day and continued their lives. Terrorism's goal isn't murder; terrorism attacks the mind, using victims as a prop. By refusing to be terrorized, we deny the terrorists their primary weapon: our own fear.
Today, we can project indomitability by rolling back all the fear-based post-9/11 security measures. Our leaders have lost credibility; getting it back requires a decrease in hyperbole. Ditch the invasive mass surveillance systems and new police state-like powers. Return airport security to pre-9/11 levels. Remove swagger from our foreign policies. Show the world that our legal system is up to the challenge of terrorism. Stop telling people to report all suspicious activity; it does little but make us suspicious of each other, increasing both fear and helplessness.
Terrorism has always been rare, and for all we've heard about 9/11 changing the world, it's still rare. Even 9/11 failed to kill as many people as automobiles do in the US every single month. But there's a pervasive myth that terrorism is easy. It's easy to imagine terrorist plots, both large-scale "poison the food supply" and small-scale "10 guys with guns and cars." Movies and television bolster this myth, so many people are surprised that there have been so few attacks in Western cities since 9/11. Certainly intelligence and investigation successes have made it harder, but mostly it's because terrorist attacks are actually hard. It's hard to find willing recruits, to co-ordinate plans, and to execute those plans -- and it's easy to make mistakes.
Counterterrorism is also hard, especially when we're psychologically prone to muck it up. Since 9/11, we've embarked on strategies of defending specific targets against specific tactics, overreacting to every terrorist video, stoking fear, demonizing ethnic groups, and treating the terrorists as if they were legitimate military opponents who could actually destroy a country or a way of life -- all of this plays into the hands of terrorists. We'd do much better by leveraging the inherent strengths of our modern democracies and the natural advantages we have over the terrorists: our adaptability and survivability, our international network of laws and law enforcement, and the freedoms and liberties that make our society so enviable. The way we live is open enough to make terrorists rare; we are observant enough to prevent most of the terrorist plots that exist, and indomitable enough to survive the even fewer terrorist plots that actually succeed. We don't need to pretend otherwise.
FBI/CIA/NSA Information Sharing Before 9/11
It's conventional wisdom that the legal "wall" between intelligence and law enforcement was one of the reasons we failed to prevent 9/11. The 9/11 Commission evaluated that claim, and published a classified report in 2004. The report was released, with a few redactions, over the summer: "Legal Barriers to Information Sharing: The Erection of a Wall Between Intelligence and Law Enforcement Investigations," 9/11 Commission Staff Monograph by Barbara A. Grewe, Senior Counsel for Special Projects, August 20, 2004.
The report concludes otherwise:
"The information sharing failures in the summer of 2001 were not the result of legal barriers but of the failure of individuals to understand that the barriers did not apply to the facts at hand," the 35-page monograph concludes. "Simply put, there was no legal reason why the information could not have been shared."
James Bamford comes to much the same conclusion in his book, The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America: there was no legal wall that prevented intelligence and law enforcement from sharing the information necessary to prevent 9/11; it was inter-agency rivalries and turf battles.
Security in a Reputation Economy
In the past, our relationship with our computers was technical. We cared what CPU they had and what software they ran. We understood our networks and how they worked. We were experts, or we depended on someone else for expertise. And security was part of that expertise.
This is changing. We access our email via the web, from any computer or from our phones. We use Facebook, Google Docs, even our corporate networks, regardless of hardware or network. We, especially the younger of us, no longer care about the technical details. Computing is infrastructure; it's a commodity. It's less about products and more about services; we simply expect it to work, like telephone service or electricity or a transportation network.
Infrastructures can be spread on a broad continuum, ranging from generic to highly specialized. Power and water are generic; who supplies them doesn't really matter. Mobile phone services, credit cards, ISPs, and airlines are mostly generic. More specialized infrastructure services include restaurant meals, haircuts, and social networking sites. Highly specialized services include tax preparation for complex businesses, management consulting, legal services, and medical services.
Sales for these services are driven by two things: price and trust. The more generic the service is, the more price dominates. The more specialized it is, the more trust dominates. IT is something of a special case because so much of it is free. So, for both specialized IT services where price is less important and for generic IT services -- think Facebook -- where there is no price, trust will grow in importance. IT is becoming a reputation-based economy, and this has interesting ramifications for security.
Some years ago, the major credit card companies became concerned about the plethora of credit-card-number thefts from sellers' databases. They worried that these might undermine the public's trust in credit cards as a secure payment system for the internet. They knew the sellers would only protect these databases up to the level of the threat to the seller, and not to the greater level of threat to the industry as a whole. So they banded together and produced a security standard called PCI. It's wholly industry-enforced by an industry that realized its reputation was more valuable than the sellers' databases.
A reputation-based economy means that infrastructure providers care more about security than their customers do. I realized this 10 years ago with my own company. We provided network-monitoring services to large corporations, and our internal network security was much more extensive than our customers'. Our customers secured their networks -- that's why they hired us, after all -- but only up to the value of their networks. If we mishandled any of our customers' data, we would have lost the trust of all of our customers.
I heard the same story at an ENISA conference in London last June, when an IT consultant explained that he had begun encrypting his laptop years before his customers did. While his customers might decide that the risk of losing their data wasn't worth the hassle of dealing with encryption, he knew that if he lost data from one customer, he risked losing all of his customers.
As IT becomes more like infrastructure, more like a commodity, expect service providers to improve security to levels greater than their customers would have done themselves.
In IT, customers learn about company reputation from many sources: magazine articles, analyst reviews, recommendations from colleagues, awards, certifications, and so on. Of course, this only works if customers have accurate information. In a reputation economy, companies have a motivation to hide their security problems.
You've all experienced a reputation economy: restaurants. Some restaurants have a good reputation, and are filled with regulars. When restaurants get a bad reputation, people stop coming and they close. Tourist restaurants -- whose main attraction is their location, and whose customers frequently don't know anything about their reputation -- can thrive even if they aren't any good. And sometimes a restaurant can keep its reputation -- an award in a magazine, a special occasion restaurant that "everyone knows" is the place to go -- long after its food and service have declined.
The reputation economy is far from perfect.
This essay originally appeared in The Guardian.
Hacking the Brazil Power Grid
We've seen lots of rumors about attacks against the power grid, both in the U.S. and elsewhere, of people hacking the power grid. President Obama mentioned it in his May cybersecurity speech: "In other countries cyberattacks have plunged entire cities into darkness." Seems like the source of these rumors has been Brazil:
Several prominent intelligence sources confirmed that there were a series of cyber attacks in Brazil: one north of Rio de Janeiro in January 2005 that affected three cities and tens of thousands of people, and another, much larger event beginning on Sept. 26, 2007.
60 Minutes called me during the research of this story. They had a lot more unsubstantiated information than they presented: names of groups that were involved, allegations of extortion, government coverups, and so on. It would be nice to know what really happened.
EDITED TO ADD (11/11): Wired says that the attacks were caused by sooty insulators. The counterargument, of course, is that sooty insulators are just the cover story because the whole hacker thing is secret.
Wired also mentions that, in an interview last month, Richard Clarke named Brazil as a victim of these attacks.
Thieves Prefer Stealing Black Luggage
It's obvious why if you think about it:
Thieves prefer to steal black luggage because so much of it looks alike. If the thief is caught red-handed by the bag's owner, he only has to say sorry, it looks just like mine. And he's out of there. Scot-free.
Read the news story that prompted this blog post. I had no idea luggage theft could be so profitable.
Protecting OSs from RootKits
Interesting research: "Countering Kernel Rootkits with Lightweight Hook Protection," by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.
Abstract: Kernel rootkits have posed serious security threats due to their stealthy manner. To hide their presence and activities, many rootkits hijack control flows by modifying control data or hooks in the kernel space. A critical step towards eliminating rootkits is to protect such hooks from being hijacked. However, it remains a challenge because there exist a large number of widely-scattered kernel hooks and many of them could be dynamically allocated from kernel heap and co-located together with other kernel data. In addition, there is a lack of flexible commodity hardware support, leading to the so-called protection granularity gap: kernel hook protection requires byte-level granularity, but commodity hardware only provides page-level protection.
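The "protection granularity gap" the abstract describes can be made concrete with a toy calculation. The addresses and sizes below are illustrative, not taken from the paper: the point is just that write-protecting whole pages to guard a handful of scattered hooks also freezes everything co-located on those pages.

```python
PAGE = 4096  # typical x86 page size

# Hypothetical addresses of kernel hooks scattered through heap pages
# (illustrative only -- real hooks live wherever the allocator put them).
hooks = [0x1000 + 8, 0x1000 + 512, 0x5000 + 40, 0x9000 + 4]
HOOK_SIZE = 8  # a function pointer on a 64-bit machine

# Page-level protection forces us to write-protect every page containing a hook.
pages = {addr // PAGE for addr in hooks}
protected_bytes = len(pages) * PAGE
hook_bytes = len(hooks) * HOOK_SIZE

# Everything else on those pages -- ordinary, legitimately-writable kernel
# data -- gets frozen as collateral damage:
collateral = protected_bytes - hook_bytes
```

Here 32 bytes of hooks drag 12 KB of pages along with them, which is why the authors need byte-granular protection rather than the page-granular protection commodity MMUs provide.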
The research will be presented at the 16th ACM Conference on Computer and Communications Security this week. Here's an article on the research.
Is Antivirus Dead?
Security is never black and white. If someone asks, "for best security, should I do A or B?" the answer almost invariably is both. But security is always a trade-off. Often it's impossible to do both A and B -- there's no time to do both, it's too expensive to do both, or whatever -- and you have to choose. In that case, you look at A and B and you make your best choice. But it's almost always more secure to do both.
Yes, antivirus programs have been getting less effective as new viruses are more frequent and existing viruses mutate faster. Yes, antivirus companies are forever playing catch-up, trying to create signatures for new viruses. Yes, signature-based antivirus software won't protect you when a virus is new, before the signature is added to the detection program. Antivirus is by no means a panacea.
On the other hand, an antivirus program with up-to-date signatures will protect you from a lot of threats. It'll protect you against viruses, against spyware, against Trojans -- against all sorts of malware. It'll run in the background, automatically, and you won't notice any performance degradation at all. And -- here's the best part -- it can be free. AVG won't cost you a penny. To me, this is an easy trade-off, certainly for the average computer user who clicks on attachments he probably shouldn't click on, downloads things he probably shouldn't download, and doesn't understand the finer workings of Windows Personal Firewall.
Certainly security would be improved if people used whitelisting programs such as Bit9 Parity and Savant Protection -- and I personally recommend Malwarebytes' Anti-Malware -- but a lot of users are going to have trouble with this. The average user will probably just swat away the "you're trying to run a program not on your whitelist" warning message or -- even worse -- wonder why his computer is broken when he tries to run a new piece of software. The average corporate IT department doesn't have a good idea of what software is running on all the computers within the corporation, and doesn't want the administrative overhead of managing all the change requests. And whitelists aren't a panacea, either: they don't defend against malware that attaches itself to data files (think Word macro viruses), for example.
One of the newest trends in IT is consumerization, and if you don't already know about it, you soon will. It's the idea that new technologies, the cool stuff people want, will become available for the consumer market before they become available for the business market. What it means to business is that people -- employees, customers, partners -- will access business networks from wherever they happen to be, with whatever hardware and software they have. Maybe it'll be the computer you gave them when you hired them. Maybe it'll be their home computer, the one their kids use. Maybe it'll be their cell phone or PDA, or a computer in a hotel's business center. Your business will have no way to know what they're using, and -- more importantly -- you'll have no control.
In this kind of environment, computers are going to connect to each other without a whole lot of trust between them. Untrusted computers are going to connect to untrusted networks. Trusted computers are going to connect to untrusted networks. The whole idea of "safe computing" is going to take on a whole new meaning -- every man for himself. A corporate network is going to need a simple, dumb, signature-based antivirus product at the gateway of its network. And a user is going to need a similar program to protect his computer.
Bottom line: antivirus software is neither necessary nor sufficient for security, but it's still a good idea. It's not a panacea that magically makes you safe, nor is it obsolete in the face of current threats. As countermeasures go, it's cheap, it's easy, and it's effective. I haven't dumped my antivirus program, and I have no intention of doing so anytime soon.
John Mueller on Zazi
I have refrained from commenting on the case against Najibullah Zazi, simply because it's so often the case that the details reported in the press have very little to do with reality. My suspicion was that, as in so many other cases, he was an idiot who couldn't do any real harm and was turned into a bogeyman for political purposes.
Zazi is, his step-uncle affectionately recalls, "a dumb kid, believe me." A high school dropout, Zazi mostly worked as a doughnut peddler in Lower Manhattan, barely making a living. Somewhere along the line, it is alleged, he took it into his head to set off a bomb and traveled to Pakistan where he received explosives training from al-Qaeda and copied nine pages of chemical bombmaking instructions onto his laptop. FBI Director Robert Mueller asserted in testimony on September 30 that this training gave Zazi the "capability" to set off a bomb.
As I said in 2007:
Terrorism is a real threat, and one that needs to be addressed by appropriate means. But allowing ourselves to be terrorized by wannabe terrorists and unrealistic plots -- and worse, allowing our essential freedoms to be lost by using them as an excuse -- is wrong.
The problem with these arrests is that the crimes have not happened yet. So these cases involve trying to divine what people will do in the future. They involve trying to guess as to people's motives and abilities. They often involve informants with questionable integrity, and my worry is that in our zeal to prevent terrorism, we create terrorists where there weren't any to begin with.
It follows that any terrorism problem within the United States principally derives from homegrown people like Zazi, often isolated from each other, who fantasize about performing dire deeds. Penn State's Michael Kenney has interviewed dozens of officials and intelligence agents and analyzed court documents, and finds homegrown Islamic militants to be operationally unsophisticated, short on know-how, prone to make mistakes, poor at planning, and severely hampered by a limited capacity to learn. Another study documents the difficulties of network coordination that continually threaten operational unity, trust, cohesion, and the ability to act collectively. And the popular notion that these characters have the capacity to steal or put together an atomic bomb seems, to put it mildly, as fanciful as some of the terrorists' schemes.
EDITED TO ADD (11/9): This is the Michael Kenney paper that Mueller cites.
Laissez-Faire Access Control
Recently I wrote about the difficulty of making role-based access control work, and how research at Dartmouth showed that it was better to let people take the access control they need to do their jobs, and audit the results afterward. This interesting paper, "Laissez-Faire File Sharing," tries to formalize this sort of access control.
Abstract: When organizations deploy file systems with access control mechanisms that prevent users from reliably sharing files with others, these users will inevitably find alternative means to share. Alas, these alternatives rarely provide the same level of confidentiality, integrity, or auditability provided by the prescribed file systems. Thus, the imposition of restrictive mechanisms and policies by system designers and administrators may actually reduce the system's security.
Think of Wikipedia as the ultimate example of this. Everybody has access to everything, but there are audit mechanisms in place to prevent abuse.
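The grant-everything-but-audit model can be sketched roughly as follows. This is a toy illustration of the general idea, not the paper's design: no request is ever blocked, but every access lands in a log that can be reviewed after the fact.

```python
import time

class LaissezFaireShare:
    """Sketch of laissez-faire sharing: access is never denied,
    but every operation is recorded for later audit."""

    def __init__(self):
        self.files = {}
        self.audit_log = []  # (timestamp, user, action, filename)

    def write(self, user, name, data):
        self.audit_log.append((time.time(), user, "write", name))
        self.files[name] = data

    def read(self, user, name):
        self.audit_log.append((time.time(), user, "read", name))
        return self.files.get(name)

    def who_touched(self, name):
        # The Wikipedia model: detect and undo abuse after the fact,
        # rather than preventing access up front.
        return [(user, action)
                for _, user, action, n in self.audit_log if n == name]
```

The security property here is accountability rather than confinement: nobody is ever driven to an unauditable side channel, because the sanctioned channel never says no.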
Friday Squid Blogging: Dentyne Ice Squid Ad
Interview with Me
The Doghouse: ADE 651
A divining rod to find explosives in Iraq:
ATSC’s promotional material claims that its device can find guns, ammunition, drugs, truffles, human bodies and even contraband ivory at distances up to a kilometer, underground, through walls, underwater or even from airplanes three miles high. The device works on “electrostatic magnetic ion attraction,” ATSC says.
Complete quackery, sold by Cumberland Industries:
Still, the Iraqi government has purchased more than 1,500 of the devices, known as the ADE 651, at costs from $16,500 to $60,000 each. Nearly every police checkpoint, and many Iraqi military checkpoints, have one of the devices, which are now normally used in place of physical inspections of vehicles.
James Randi says:
This Foundation will give you our million-dollar prize upon the successful testing of the ADE651® device. Such test can be performed by anyone, anywhere, under your conditions, by you or by any appointed person or persons, in direct satisfaction of any or all of the provisions laid out above by you.
And he quotes from the Cumberland Industries literature (not online, unfortunately):
Ignores All Known Concealment Methods. By programming the detection cards to specifically target a particular substance, (through the proprietary process of electro-static matching of the ionic charge and structure of the substance), the ADE651® will “by-pass” all known attempts to conceal the target substance. It has been shown to penetrate Lead, other metals, concrete, and other matter (including hiding in the body) used in attempts to block the attraction.
One interesting point is that the effectiveness of this device depends strongly on what the bad guys think about its effectiveness. If the bad guys think it works, they have to find someone who is 1) willing to kill himself, and 2) rational enough to keep his cool while being tested by one of these things. I'll bet that the ADE651 makes it harder to recruit suicide bombers.
But what happened to the days when you could buy a divining rod for $100?
EDITED TO ADD (11/11): In case the company pulls the spec sheet, it's archived here.
Mossad Hacked Syrian Official's Computer
It was unattended in a hotel room at the time:
Israel's Mossad espionage agency used Trojan Horse programs to gather intelligence about a nuclear facility in Syria the Israel Defense Forces destroyed in 2007, the German magazine Der Spiegel reported Monday.
Remember the evil maid attack: if an attacker gets hold of your computer temporarily, he can bypass your encryption software.
The Problems with Unscientific Security
From the Open Access Journal of Forensic Psychology, by a whole list of authors: "A Call for Evidence-Based Security Tools":
Abstract: Since the 2001 attacks on the twin towers, policies on security have changed drastically, bringing about an increased need for tools that allow for the detection of deception. Many of the solutions offered today, however, lack scientific underpinning.
In absence of systematic research, users will base their evaluation on data generated by field use. Because people tend to follow heuristics rather than the rules of probability theory, perceived effectiveness can substantially differ from true effectiveness (Tversky & Kahneman, 1973). For example, one well-known problem associated with field studies is that of selective feedback. Investigative authorities are unlikely to receive feedback from liars who are erroneously considered truthful. They will occasionally receive feedback when correctly detecting deception, for example through confessions (Patrick & Iacono, 1991; Vrij, 2008). The perceived effectiveness that follows from this can be further reinforced through confirmation bias: Evidence confirming one's preconception is weighted more heavily than evidence contradicting it (Lord, Ross, & Lepper, 1979). As a result, even techniques that perform at chance level may be perceived as highly effective (Iacono, 1991). This unwarranted confidence can have profound effects on citizens' safety and civil liberty: Criminals may escape detection while innocents may be falsely accused. The Innocence Project (Unvalidated or improper science, no date) demonstrates that unvalidated or improper forensic science can indeed lead to wrongful convictions (see also Saks & Koehler, 2005).
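The selective-feedback effect the authors describe is easy to simulate. In the sketch below the confession and exoneration rates are made-up assumptions, and the "detector" performs exactly at chance; yet because confirming feedback (confessions) arrives far more often than disconfirming feedback (exonerations), the detector's perceived accuracy is inflated far above its true 50 percent.

```python
import random

random.seed(42)
trials = 100_000
correct = confirmed = disconfirmed = 0

for _ in range(trials):
    is_liar = random.random() < 0.5
    judged_liar = random.random() < 0.5        # the detector is a coin flip
    if judged_liar == is_liar:
        correct += 1
    if judged_liar:                            # feedback only reaches accusers
        if is_liar and random.random() < 0.3:          # assumed: 30% of caught liars confess
            confirmed += 1
        elif not is_liar and random.random() < 0.02:   # assumed: innocents rarely get vindicated
            disconfirmed += 1

true_accuracy = correct / trials                     # hovers around 0.50
perceived = confirmed / (confirmed + disconfirmed)   # well above 0.90
```

Liars judged truthful generate no feedback at all, so the field data the investigators actually see is dominated by confirmations -- which is the paper's point about chance-level techniques being perceived as highly effective.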
Article on the paper.
Fear and Overreaction
It's hard work being prey. Watch the birds at a feeder. They're constantly on alert, and will fly away from food -- from easy nutrition -- at the slightest movement or sound. Given that I've never, ever seen a bird plucked from a feeder by a predator, it seems like a whole lot of wasted effort against a threat that isn't very big.
Assessing and reacting to risk is one of the most important things a living creature has to deal with. The amygdala, an ancient part of the brain that first evolved in primitive fishes, has that job. It's what's responsible for the fight-or-flight reflex. Adrenaline in the bloodstream, increased heart rate, increased muscle tension, sweaty palms; that's the amygdala in action. And it works fast, faster than consciousness: show someone a snake and their amygdala will react before their conscious brain registers that they're looking at a snake.
Fear motivates all sorts of animal behaviors. Schooling, flocking, and herding are all security measures. Not only is it less likely that any member of the group will be eaten, but each member of the group has to spend less time watching out for predators. Animals as diverse as bumblebees and monkeys avoid food in areas where predators are common. Different prey species have developed various alarm calls, some surprisingly specific. And some prey species have even evolved to react to the alarms given off by other species.
Evolutionary biologist Randolph Nesse has studied animal defenses, particularly those that seem to be overreactions. These defenses are mostly all-or-nothing; a creature can't do them halfway. Birds flying off, sea cucumbers expelling their stomachs, and vomiting are all examples. Using signal detection theory, Nesse showed that all-or-nothing defenses are expected to have many false alarms. "The smoke detector principle shows that the overresponsiveness of many defenses is an illusion. The defenses appear overresponsive because they are 'inexpensive' compared to the harms they protect against and because errors of too little defense are often more costly than errors of too much defense."
So according to the theory, if flight costs 100 calories, both in flying and lost eating time, and there's a 1 in 100 chance of being eaten if you don't fly away, it's smarter for survival to use up 10,000 calories repeatedly flying at the slightest movement even though there's a 99 percent false alarm rate. Whatever the numbers happen to be for a particular species, it has evolved to get the trade-off right.
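The trade-off in the paragraph above can be written out explicitly. The cost of being eaten is an assumed stand-in here -- in fitness terms it's effectively infinite -- but even a finite figure shows why a 99 percent false alarm rate is the winning strategy.

```python
FLEE_COST = 100       # calories per flight: flying plus lost feeding time
P_ATTACK = 0.01       # 1-in-100 chance a given alarm is a real predator
DEATH_COST = 100_000  # assumed fitness cost of being eaten (really near-infinite)

# Over 100 alarms, 99 of them false:
always_flee_cost = 100 * FLEE_COST             # 10,000 calories, guaranteed
never_flee_cost = 100 * P_ATTACK * DEATH_COST  # 100,000 expected, death included

# Nesse's smoke detector principle: flee whenever p * harm > cost of fleeing,
# i.e. at any attack probability above this threshold:
flee_threshold = FLEE_COST / DEATH_COST        # 0.001
```

With these numbers a bird should bolt at any alarm with even a 0.1 percent chance of being real -- so the apparent overreaction at the feeder is exactly the calibrated response.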
This makes sense, until the conditions that the species evolved under change quicker than evolution can react to. Even though there are far fewer predators in the city, birds at my feeder react as if they were in the primal forest. Even birds safe in a zoo's aviary don't realize that the situation has changed.
Humans are both no different and very different. We, too, feel fear and react with our amygdala, but we also have a conscious brain that can override those reactions. And we too live in a world very different from the one we evolved in. Our reflexive defenses might be optimized for the risks endemic to living in small family groups in the East African highlands in 100,000 BC, not 2009 New York City. But we can go beyond fear, and actually think sensibly about security.
Far too often, we don't. We tend to be poor judges of risk. We overreact to rare risks, we ignore long-term risks, we magnify risks that are also morally offensive. We get risks wrong -- threats, probabilities, and costs -- all the time. When we're afraid, really afraid, we'll do almost anything to make that fear go away. Both politicians and marketers have learned to push that fear button to get us to do what they want.
One night last month, I was awakened from my hotel-room sleep by a loud, piercing alarm. There was no way I could ignore it, but I weighed the risks and did what any reasonable person would do under the circumstances: I stayed in bed and waited for the alarm to be turned off. No point getting dressed, walking down ten flights of stairs, and going outside into the cold for what invariably would be a false alarm -- serious hotel fires are very rare. Unlike the bird in an aviary, I knew better.
You can disagree with my risk calculus, and I'm sure many hotel guests walked downstairs and outside to the designated assembly point. But it's important to recognize that the ability to have this sort of discussion is uniquely human. And we need to have the discussion repeatedly, whether the topic is the installation of a home burglar alarm, the latest TSA security measures, or the potential military invasion of another country. These things aren't part of our evolutionary history; we have no natural sense of how to respond to them. Our fears are often calibrated wrong, and reason is the only way we can override them.
This essay first appeared on DarkReading.com.
Zero-Tolerance Policies
Recent stories have documented the ridiculous effects of zero-tolerance weapons policies in a Delaware school district: a first-grader expelled for taking a camping utensil to school, a 13-year-old expelled after another student dropped a pocketknife in his lap, and a seventh-grader expelled for cutting paper with a utility knife for a class project. Where's the common sense? the editorials cry.
These so-called zero-tolerance policies are actually zero-discretion policies. They're policies that must be followed, no situational discretion allowed. We encounter them whenever we go through airport security: no liquids, gels or aerosols. Some workplaces have them for sexual harassment incidents; in some sports a banned substance found in a urine sample means suspension, even if it's for a real medical condition. Judges have zero discretion when faced with mandatory sentencing laws: three strikes for drug offenses and you go to jail, mandatory sentencing for statutory rape (underage sex), etc. A national restaurant chain won't serve hamburgers rare, even if you offer to sign a waiver. Whenever you hear "that's the rule, and I can't do anything about it" -- and they're not lying to get rid of you -- you're butting up against a zero-discretion policy.
These policies enrage us because they are blind to circumstance. Editorial after editorial denounced the suspensions of elementary school children for offenses that anyone with any common sense would agree were accidental and harmless. The Internet is filled with essays demonstrating how the TSA's rules are nonsensical and sometimes don't even improve security. I've written some of them. What we want is for those involved in the situations to have discretion.
However, problems with discretion were the reason behind these mandatory policies in the first place. Discretion is often applied inconsistently. One school principal might deal with knives in the classroom one way, and another principal another way. Your drug sentence could depend considerably on how sympathetic your judge is, or on whether she's having a bad day.
Even worse, discretion can lead to discrimination. Schools had weapons bans before zero-tolerance policies, but teachers and administrators enforced the rules disproportionately against African-American students. Criminal sentences varied by race, too. The benefit of zero-discretion rules and laws is that they ensure that everyone is treated equally.
Zero-discretion rules also protect against lawsuits. If the rules are applied consistently, no parent, air traveler or defendant can claim he was unfairly discriminated against.
So that's the choice. Either we want the rules enforced fairly across the board, which means limiting the discretion of the enforcers at the scene at the time, or we want a more nuanced response to whatever the situation is, which means we give those involved in the situation more discretion.
Of course, there's more to it than that. The problem with the zero-tolerance weapons rules isn't that they're rigid, it's that they're poorly written.
What constitutes a weapon? Is it any knife, no matter how small? Should the penalties be the same for a first grader and a high school student? Does intent matter? When an aspirin carried for menstrual cramps becomes "drug possession," you know there's a badly written rule in effect.
It's the same with airport security and criminal sentencing. Broad and simple rules may be simpler to follow -- and require less thinking on the part of those enforcing them -- but they're almost always far less nuanced than our complex society requires. Unfortunately, the more complex the rules are, the more they're open to interpretation and the more discretion the interpreters have.
The solution is to combine the two, rules and discretion, with procedures to make sure they're not abused. Provide rules, but don't make them so rigid that there's no room for interpretation. Give the people in the situation -- the teachers, the airport security agents, the policemen, the judges -- discretion to apply the rules to the situation. But -- and this is the important part -- allow people to appeal the results if they feel they were treated unfairly. And regularly audit the results to ensure there is no discrimination or favoritism. It's the combination of the four that works: rules plus discretion plus appeal plus audit.
All systems need some form of redress, whether it be open and public like a courtroom or closed and secret like the TSA. Giving discretion to those at the scene just makes for a more efficient appeals process, since the first level of appeal can be handled on the spot.
Zachary, the Delaware first grader suspended for bringing a combination fork, spoon and knife camping utensil to eat his lunch with, had his punishment unanimously overturned by the school board. This was the right decision; but what about all the other students whose parents weren't forceful or media-savvy enough to turn their child's plight into a national story? Common sense in applying rules is important, but so is equal access to that common sense.
This essay originally appeared on the Minnesota Public Radio website.
EDITED TO ADD (11/11): Another example:
A former soldier who handed a discarded shotgun in to police faces at least five years imprisonment for "doing his duty."
Detecting Terrorists by Smelling Fear
The technology relies on recognising a pheromone - or scent signal - produced in sweat when a person is scared.
Seems like yet another technology that will be swamped with false positives.
And is there any justification for the hypothesis that terrorists will be more afraid than anyone else? And do we know why people tend to feel fear? Is it because they're up to no good, or for more benign reasons? This link from emotion to intent is very tenuous.
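A rough base-rate calculation shows why the false positives swamp everything. All numbers below are illustrative assumptions: a million screened passengers, an implausibly generous ten actual terrorists among them, and a detector that is 99 percent accurate in both directions.

```python
passengers = 1_000_000   # people screened (illustrative)
terrorists = 10          # assume even this many in the crowd
sensitivity = 0.99       # flags 99% of actual terrorists
specificity = 0.99       # clears 99% of everyone else

true_alarms = terrorists * sensitivity                  # ~10 real hits
false_alarms = (passengers - terrorists) * (1 - specificity)  # ~10,000 scared innocents

# Chance that a flagged person is actually a terrorist:
ppv = true_alarms / (true_alarms + false_alarms)
```

Even with a detector far better than any real pheromone sensor is likely to be, roughly a thousand frightened-but-innocent travelers get flagged for every terrorist -- nervous flyers, people late for connections, people afraid of the screening itself.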
The FBI and Wiretaps
To aid their Wall Street investigations, the FBI used DCSNet, their massive surveillance system.
Prosecutors are using the FBI's massive surveillance system, DCSNet, which stands for Digital Collection System Network. According to Wired magazine, this system connects FBI wiretapping rooms to switches controlled by traditional land-line operators, internet-telephony providers and cellular companies. It can be used to instantly wiretap almost any communications device in the U.S. -- wireless or tethered. In other words, you and I have no privacy. The government can listen in on any call made in the continental U.S. (This is all well and good if you trust every government employee. But what if an attorney general running for higher office will do anything to finger a high-profile target? Or what if a prosecutor has a personal grudge he'd like to settle? It seems to me it would be easy for this power to fall into the wrong hands.)
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.