Schneier on Security
A blog covering security and security technology.
April 2008 Archives
The COFEE, which stands for Computer Online Forensic Evidence Extractor, is a USB "thumb drive" that was quietly distributed to a handful of law-enforcement agencies last June. Microsoft General Counsel Brad Smith described its use to the 350 law-enforcement experts attending a company conference Monday.
How long before this device is in the hands of the hacker community? Days? Months? Did they have it before it was even released?
EDITED TO ADD (4/30): Seems that these are not Microsoft-developed tools:
COFEE, according to forensic folk who have used it, is simply a suite of 150 bundled off-the-shelf forensic tools that run from a script. None of the tools are new or were created by Microsoft. Microsoft simply combined existing programs into a portable tool that can be used in the field before agents bring a computer back to their forensic lab.
And it's certainly not a back door, as TechDirt claims.
Reminds me of this xkcd cartoon.
A real crime in Mexico:
"We've got your child," he says in rapid-fire Spanish, usually adding an expletive for effect and then rattling off a list of demands that might include cash or jewels dropped off at a certain street corner or a sizable deposit made to a local bank.
Will we ever win the war on photographers?
Interesting investigative article from Business Week on Chinese cyber espionage against the U.S. government, and the government's reaction.
When the deluge began in 2006, officials scurried to come up with software "patches," "wraps," and other bits of triage. The effort got serious last summer when top military brass discreetly summoned the chief executives or their representatives from the 20 largest U.S. defense contractors to the Pentagon for a "threat briefing." BusinessWeek has learned the U.S. government has launched a classified operation called Byzantine Foothold to detect, track, and disarm intrusions on the government's most critical networks. And President George W. Bush on Jan. 8 quietly signed an order known as the Cyber Initiative to overhaul U.S. cyber defenses, at an eventual cost in the tens of billions of dollars, and establishing 12 distinct goals, according to people briefed on its contents. One goal in particular illustrates the urgency and scope of the problem: By June all government agencies must cut the number of communication channels, or ports, through which their networks connect to the Internet from more than 4,000 to fewer than 100. On Apr. 8, Homeland Security Dept. Secretary Michael Chertoff called the President's order a cyber security "Manhattan Project."
It can only help for the U.S. government to get its own cybersecurity house in order.
We already knew this, but it's good to reinforce the lesson:
In the study, Dr Eichele and his colleagues asked participants to repeatedly perform a "flanker task" -- an experiment in which individuals must quickly respond to visual cues.
This has security implications whenever you have people watching the same thing over and over again, looking for anomalies: airport screeners looking at X-ray scans, casino dealers looking for cheaters, building guards looking for bad guys. It's hard to do it correctly, because the brain doesn't work that way.
EDITED TO ADD (4/28): This video demonstrates the point nicely.
From Jean-Michel Cousteau, a video of market squid spawning off the Channel Islands.
A low-tech solution.
List of deaths, intended to prevent identity theft, is used for identity theft:
Ironically, the government produces the monthly Death Index so that banks and other lenders can prevent people from applying for credit using a dead person's information -- the index is made public by the Department of Commerce under the Freedom of Information Act. The caper Kirkland's accused of mastering apparently exploits a loophole, by taking over accounts that are already open.
This won best-paper award at the First USENIX Workshop on Large-Scale Exploits and Emergent Threats: "Designing and implementing malicious hardware," by Samuel T. King, Joseph Tucek, Anthony Cozzie, Chris Grier, Weihang Jiang, and Yuanyuan Zhou.
Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.
Theoretical? Sure. But combine this with stories of counterfeit computer hardware from China, and you've got yourself a potentially serious problem.
This is a big deal:
At issue is a growing trend in which ISPs subvert the Domain Name System, or DNS, which translates website names into numeric addresses.
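The mechanics are simple to illustrate. A compliant resolver returns NXDOMAIN for a name that doesn't exist; a subverting ISP answers anyway, usually with the address of its own ad server. Here's a toy sketch -- the resolver functions, records, and addresses are all hypothetical stand-ins, not any real ISP's behavior -- including the standard detection trick of querying a random name that cannot exist:

```python
import secrets

# Toy resolver table standing in for a real DNS zone.
records = {"www.example.com": "93.184.216.34"}

# Documentation-range address standing in for an ISP's ad server.
AD_SERVER = "192.0.2.80"

def honest_resolve(hostname):
    """A standards-compliant resolver: unknown names fail with NXDOMAIN."""
    if hostname not in records:
        raise LookupError("NXDOMAIN")
    return records[hostname]

def wildcarding_resolve(hostname):
    """A subverting resolver: unknown names 'succeed', pointing at the ad server."""
    return records.get(hostname, AD_SERVER)

def detects_wildcarding(resolve):
    """Query a random name that cannot exist; if it resolves anyway,
    the resolver is rewriting NXDOMAIN responses."""
    nonce = secrets.token_hex(16) + ".invalid"  # .invalid is a reserved TLD
    try:
        resolve(nonce)
        return True
    except LookupError:
        return False

print(detects_wildcarding(honest_resolve))       # False
print(detects_wildcarding(wildcarding_resolve))  # True
```

The problem is that lots of software depends on resolution *failure* -- spam filters, typo detection, security checks -- and all of it silently breaks when every name "exists."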
This is interesting research: given a security patch, can you automatically reverse-engineer the security vulnerability that is being patched and create exploit code to exploit it?
Turns out you can.
What does this mean?
Full paper here.
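The intuition is easy to see in a toy example -- a deliberate simplification, not the paper's actual binary-analysis technique. A security patch typically adds an input check, and that added check itself documents which inputs the unpatched code mishandled. Inputs the patched version rejects but the old version accepts are candidate exploits:

```python
def parse_unpatched(packet: bytes) -> bytes:
    # Vulnerable: trusts the length byte. (In C this would read past
    # the buffer; Python slicing just clamps, so this is a stand-in
    # for the real memory error.)
    length = packet[0]
    return packet[1:1 + length]

def parse_patched(packet: bytes) -> bytes:
    length = packet[0]
    # The patch adds this one check -- and in doing so reveals
    # exactly which inputs the old code mishandled.
    if length > len(packet) - 1:
        raise ValueError("length field exceeds packet size")
    return packet[1:1 + length]

def candidate_exploits(inputs):
    """Inputs the patched version rejects are candidate triggers
    for the original vulnerability."""
    out = []
    for pkt in inputs:
        try:
            parse_patched(pkt)
        except ValueError:
            out.append(pkt)
    return out

probes = [bytes([2, 65, 66]), bytes([255, 65])]
print(candidate_exploits(probes))  # only the packet with the oversized length field
```

The research automates this reasoning against binary patches, which is why "patch Tuesday, exploit Wednesday" is the worry.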
The TSA wants a tool that will assess risks against transportation networks:
"The tool will assist in prioritization of security measures based on their risk reduction potential," said the statement of work accompanying TSA's formal solicitation, which was posted April 18.
I don't think you have to be very good to qualify. This automated system put Boise, ID, at the top of its list of most vulnerable cities; the bar isn't very high. I'm just saying.
Last week was the RSA Conference, easily the largest information security conference in the world. Over 17,000 people descended on San Francisco's Moscone Center to hear some of the over 250 talks, attend I-didn't-try-to-count parties, and try to evade over 350 exhibitors vying to sell them stuff.
Talk to the exhibitors, though, and the most common complaint is that the attendees aren't buying.
It's not the quality of the wares. The show floor is filled with new security products, new technologies, and new ideas. Many of these are products that will make the attendees' companies more secure in all sorts of different ways. The problem is that most of the people attending the RSA Conference can't understand what the products do or why they should buy them. So they don't.
I spoke with one person whose trip was paid for by a smallish security firm. He was one of the company's first customers, and the company was proud to parade him in front of the press. I asked him if he walked through the show floor, looking at the company's competitors to see if there was any benefit to switching.
"I can't figure out what any of those companies do," he replied.
I believe him. The booths are filled with broad product claims, meaningless security platitudes, and unintelligible marketing literature. You could walk into a booth, listen to a five-minute sales pitch by a marketing type, and still not know what the company does. Even seasoned security professionals are confused.
Commerce requires a meeting of minds between buyer and seller, and it's just not happening. The sellers can't explain what they're selling to the buyers, and the buyers don't buy because they don't understand what the sellers are selling. There's a mismatch between the two; they're so far apart that they're barely speaking the same language.
This is a bad thing in the near term -- some good companies will go bankrupt and some good security technologies won't get deployed -- but it's a good thing in the long run. It demonstrates that the computer industry is maturing: IT is getting complicated and subtle, and users are starting to treat it like infrastructure.
For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure -- power, water, cleaning service, tax preparation -- customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.
No one wants to buy security. They want to buy something truly useful -- database management systems, Web 2.0 collaboration tools, a company-wide network -- and they want it to be secure. They don't want to have to become IT security experts. They don't want to have to go to the RSA Conference. This is the future of IT security.
You can see it in the large IT outsourcing contracts that companies are signing -- not security outsourcing contracts, but more general IT contracts that include security. You can see it in the current wave of industry consolidation: not large security companies buying small security companies, but non-security companies buying security companies. And you can see it in the new popularity of software as a service: Customers want solutions; who cares about the details?
Imagine if the inventor of antilock brakes -- or any automobile safety or security feature -- had to sell them directly to the consumer. It would be an uphill battle convincing the average driver that he needed to buy them; maybe that technology would have succeeded and maybe it wouldn't. But that's not what happens. Antilock brakes, airbags, and that annoying sensor that beeps when you're backing up too close to another object are sold to automobile companies, and those companies bundle them together into cars that are sold to consumers. This doesn't mean that automobile safety isn't important, and often these new features are touted by the car manufacturers.
The RSA Conference won't die, of course. Security is too important for that. There will still be new technologies, new products, and new start-ups. But it will become inward-facing, slowly turning into an industry conference. It'll be security companies selling to the companies who sell to corporate and home users -- and will no longer be a 17,000-person user conference.
This essay originally appeared on Wired.com.
EDITED TO ADD (5/1): Commentary.
I am just sick of this story: people are willing to reveal their passwords for a bar of chocolate.
I haven't seen any indication they actually verified that the passwords are real. I would certainly give up a fake password for a bar of chocolate.
Homeland Security Secretary Michael Chertoff says:
QUESTION: Some are raising that the privacy aspects of this thing, you know, sharing of that kind of data, very personal data, among four countries is quite a scary thing.
Sounds like he's confusing "secret" data with "personal" data. Lots of personal data isn't particularly secret.
Usually I don't bother blogging about these, but this one is particularly bad. Anyone with basic SQL knowledge could have registered anyone he wanted as a sex offender.
One of the cardinal rules of computer programming is to never trust your input. This holds especially true when your input comes from users, and even more so when it comes from the anonymous, general public. Apparently, the developers at Oklahoma’s Department of Corrections slept through that day in computer science class, and even managed to skip all of Common Sense 101. You see, not only did they trust anonymous user input on their public-facing website, but they blindly executed it and displayed whatever came back.
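For anyone who did sleep through that class: the fix is the parameterized query, which makes the database driver treat user input strictly as data, never as SQL. A minimal sketch using Python's sqlite3 module -- the table and queries here are invented for illustration; the Oklahoma site presumably ran on a different stack:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE offenders (name TEXT)")
conn.execute("INSERT INTO offenders VALUES ('John Doe')")

def search_unsafe(name):
    # BROKEN: user input is pasted straight into the statement, so
    # input like  ' OR '1'='1  rewrites the query itself.
    query = "SELECT name FROM offenders WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def search_safe(name):
    # Parameterized: the ? placeholder is filled in by the driver,
    # and the input can never change the query's structure.
    query = "SELECT name FROM offenders WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()

hostile = "nobody' OR '1'='1"
print(search_unsafe(hostile))  # every row in the table
print(search_safe(hostile))    # []
```

The Oklahoma site was worse still: it didn't just leak query results, it executed whatever SQL came back, which is how arbitrary inserts were possible.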
I've already written about prospect theory, which explains how people approach risk. People tend to be risk averse when it comes to gains, and risk seeking when it comes to losses:
Evolutionarily, presumably it is a better survival strategy to -- all other things being equal, of course -- accept small gains rather than risking them for larger ones, and risk larger losses rather than accepting smaller losses. Lions chase young or wounded wildebeest because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there's a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow.
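The asymmetry can be made concrete with the value function from Kahneman and Tversky's prospect theory: concave for gains, convex and steeper for losses. This sketch uses their published parameter estimates (alpha ≈ 0.88, lambda ≈ 2.25); the dollar amounts are arbitrary:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex
    and steeper (loss aversion) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# A sure $500 versus a 50% shot at $1,000: equal expected value,
# but the sure thing carries more subjective value.
sure_gain  = value(500)
risky_gain = 0.5 * value(1000)
print(sure_gain > risky_gain)   # True: risk averse over gains

# A sure $500 loss versus a 50% chance of losing $1,000:
# the gamble hurts less, so people take the risk.
sure_loss  = value(-500)
risky_loss = 0.5 * value(-1000)
print(risky_loss > sure_loss)   # True: risk seeking over losses
```

Same expected value on both sides of each comparison; only the framing as a gain or a loss flips the preference.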
This behavior has been demonstrated in animals as well: "species of insects, birds and mammals range from risk neutral to risk averse when making decisions about amounts of food, but are risk seeking towards delays in receiving food."
A recent study examines the relative risk preferences in two closely related species: chimpanzees and bonobos.
The basic argument is that in the natural environment of the chimpanzee, if you don't take risks you don't get any of the high-value rewards (e.g., monkey meat). Bonobos "rely more heavily than chimpanzees on terrestrial herbaceous vegetation, a more temporally and spatially consistent food source." So chimpanzees are more willing than bonobos to take risks.
Fascinating stuff, but there are at least two problems with this study. The first one, the researchers explain in their paper. The animals studied -- five of each species -- were from the Wolfgang Koehler Primate Research Center at the Leipzig Zoo, and the experimenters were unable to rule out differences in the "experiences, cultures and conditions of the two specific groups tested here."
The second problem is more general: we know very little about the life of bonobos in the wild. There are a lot of popular stereotypes about bonobos, but they're sloppy at best.
Even so, I like seeing this kind of research. It's fascinating.
EDITED TO ADD (5/13): Response to that last link.
This article in CSO compares modern cybersecurity to open seas piracy in the early 1800s. After a bit of history, the article talks about current events:
In modern times, the nearly ubiquitous availability of powerful computing systems, along with the proliferation of high-speed networks, have converged to create a new version of the high seas--the cyber seas. The Internet has the potential to significantly impact the United States' position as a world leader. Nevertheless, for the last decade, U.S. cybersecurity policy has been inconsistent and reactionary. The private sector has often been left to fend for itself, and sporadic policy statements have left U.S. government organizations, private enterprises and allies uncertain of which tack the nation will take to secure the cyber frontier.
This should be a surprise to no one.
What to do?
With that goal in mind, let us consider how the United States could take a Jeffersonian approach to the cyber threats faced by our economy. The first step would be for the United States to develop a consistent policy that articulates America's commitment to assuring the free navigation of the "cyber seas." Perhaps most critical to the success of that policy will be a future president's support for efforts that translate rhetoric to actions--developing initiatives to thwart cyber criminals, protecting U.S. technological sovereignty, and balancing any defensive actions to avoid violating U.S. citizens' constitutional rights. Clearly articulated policy and consistent actions will assure a stable and predictable environment where electronic commerce can thrive, continuing to drive U.S. economic growth and avoiding the possibility of the U.S. becoming a cyber-colony subject to the whims of organized criminal efforts on the Internet.
What took place on a peaceful Californian university campus nearly four decades ago still has the power to disturb. Eager to explore the way that "situation" can impact on behaviour, the young psychologist enrolled students to spend two weeks in a simulated jail environment, where they would randomly be assigned roles as either prisoners or guards.
EDITED TO ADD (5/13): The website is worth visiting, especially the section on resisting influence.
I previously blogged about the UK's Regulation of Investigatory Powers Act (RIPA) -- which was sold as a means to tackle terrorism and other serious crimes -- being used against animal rights protestors. The latest news from the UK is that a local council has used provisions of the act to put a couple and their children under surveillance, for "suspected fraudulent school place applications":
Poole council said it used the legislation to watch a family at home and in their daily movements because it wanted to know if they lived in the catchment area for a school, which they wanted their three-year-old daughter to attend.
This kind of thing happens again and again. When campaigning for a law's passage, the authorities invoke the most heinous of criminals -- terrorists, kidnappers, drug dealers, child pornographers -- but after the law is passed, they start using it in more mundane situations.
Good article on the difficulty of keeping drugs out of prisons. Lots of ways to evade security, including making use of corrupt guards.
This is just ridiculous. Lie detectors are pseudo-science at best, and even the Pentagon knows it:
The Pentagon, in a PowerPoint presentation released to msnbc.com through a Freedom of Information Act request, says the PCASS is 82 to 90 percent accurate. Those are the only accuracy numbers that were sent up the chain of command at the Pentagon before the device was approved.
In this article analyzing a security failure resulting in live nuclear warheads being flown over the U.S., there's an interesting commentary on people and security rules:
Indeed, the gaff [sic] that allowed six nukes out over three major American cities (Omaha, Neb., Kansas City, Mo., and Little Rock, Ark.) could have been avoided if the Air Force personnel had followed procedure.
Procedures are a tough balancing act. If they're too lax, there will be security problems. If they're too tight, people will get around them and there will be security problems.
There is a theory that people have an inherent risk thermostat that seeks out an optimal level of risk. When something becomes inherently safer -- a law is passed requiring motorcycle riders to wear helmets, for example -- people compensate by riding more recklessly. I first read this theory in a 1999 paper by John Adams of University College London, although it seems to have originated with Sam Peltzman.
In any case, this paper presents data that contradicts that thesis:
Abstract--This paper investigates the effects of mandatory seat belt laws on driver behavior and traffic fatalities. Using a unique panel data set on seat belt usage in all U.S. jurisdictions, we analyze how such laws, by influencing seat belt use, affect the incidence of traffic fatalities. Allowing for the endogeneity of seat belt usage, we find that such usage decreases overall traffic fatalities. The magnitude of this effect, however, is significantly smaller than the estimate used by the National Highway Traffic Safety Administration. In addition, we do not find significant support for the compensating-behavior theory, which suggests that seat belt use also has an indirect adverse effect on fatalities by encouraging careless driving. Finally, we identify factors, especially the type of enforcement used, that make seat belt laws more effective in increasing seat belt usage.
This seems very worrisome:
Federal regulators approved a plan on Wednesday to create a nationwide emergency alert system using text messages delivered to cellphones.
The real question is whether the benefits outweigh the risks. I could certainly imagine scenarios where getting short text messages out to everyone in a particular geographic area is a good thing, but I can also imagine the hacking possibilities.
And once this system is developed for emergency use, can a bulk SMS business be far behind?
This is a great essay by a mom who let her 9-year-old son ride the New York City subway alone:
No, I did not give him a cell phone. Didn't want to lose it. And no, I didn't trail him, like a mommy private eye. I trusted him to figure out that he should take the Lexington Avenue subway down, and the 34th Street crosstown bus home. If he couldn't do that, I trusted him to ask a stranger. And then I even trusted that stranger not to think, "Gee, I was about to catch my train home, but now I think I'll abduct this adorable child instead."
It's amazing how our fears blind us. The mother and son appeared on The Today Show, where they both continued to explain why it wasn't an unreasonable thing to do:
And that was Skenazy's point in her column: The era is long past when Times Square was a fetid sump and taking a walk in Central Park after dark was tantamount to committing suicide. Recent federal statistics show New York to be one of the safest cities in the nation -- right up there with Provo, Utah, in fact.
Of course, The Today Show interviewer didn't get it:
Dr. Ruth Peters, a parenting expert and TODAY Show contributor, agreed that children should be allowed independent experiences, but felt there are better -- and safer -- ways to have them than the one Skenazy chose.
Here's an audio interview with Skenazy.
I am reminded of this great graphic depicting childhood independence diminishing over four generations.
Just another example of our surveillance future:
Each wheel of the vehicle transmits a unique ID, easily readable using an off-the-shelf receiver. Although the transmitter’s power is very low, the signal is still readable from a fair distance using a good directional antenna.
Excellent and well-written article.
It's a growing field:
More than 200 colleges have created homeland-security degree and certificate programs since 9/11, and another 144 have added emergency management with a terrorism bent.
So, do you trust it or not?
Security is both a feeling and a reality, and they're different. You can feel secure even though you're not, and you can be secure even though you don't feel it. There are two different concepts mapped onto the same word -- the English language isn't working very well for us here -- and it can be hard to know which one we're talking about when we use the word.
There is considerable value in separating out the two concepts: in explaining how the two are different, and understanding when we're referring to one and when the other. There is value as well in recognizing when the two converge, understanding why they diverge, and knowing how they can be made to converge again.
Some fundamentals first. Viewed from the perspective of economics, security is a trade-off. There's no such thing as absolute security, and any security you get has some cost: in money, in convenience, in capabilities, in insecurities somewhere else, whatever. Every time someone makes a decision about security -- computer security, community security, national security -- he makes a trade-off.
People make these trade-offs as individuals. We all get to decide, individually, if the expense and inconvenience of having a home burglar alarm is worth the security. We all get to decide if wearing a bulletproof vest is worth the cost and tacky appearance. We all get to decide if we're getting our money's worth from the billions of dollars we're spending combating terrorism, and if invading Iraq was the best use of our counterterrorism resources. We might not have the power to implement our opinion, but we get to decide if we think it's worth it.
Now we may or may not have the expertise to make those trade-offs intelligently, but we make them anyway. All of us. People have a natural intuition about security trade-offs, and we make them, large and small, dozens of times throughout the day. We can't help it: It's part of being alive.
Imagine a rabbit, sitting in a field eating grass. And he sees a fox. He's going to make a security trade-off: Should he stay or should he flee? Over time, the rabbits that are good at making that trade-off will tend to reproduce, while the rabbits that are bad at it will tend to get eaten or starve.
So, as a successful species on the planet, you'd expect that human beings would be really good at making security trade-offs. Yet, at the same time, we can be hopelessly bad at it. We spend more money on terrorism than the data warrants. We fear flying and choose to drive instead. Why?
The short answer is that people make most trade-offs based on the feeling of security and not the reality.
I've written a lot about how people get security trade-offs wrong, and the cognitive biases that cause us to make mistakes. Humans have developed these biases because they make evolutionary sense. And most of the time, they work.
Most of the time -- and this is important -- our feeling of security matches the reality of security. Certainly, this is true of prehistory. Modern times are harder. Blame technology, blame the media, blame whatever. Our brains are much better optimized for the security trade-offs endemic to living in small family groups in the East African highlands in 100,000 B.C. than to those endemic to living in 2008 New York.
If we make security trade-offs based on the feeling of security rather than the reality, we choose security that makes us feel more secure over security that actually makes us more secure. And that's what governments, companies, family members and everyone else provide. Of course, there are two ways to make people feel more secure. The first is to make people actually more secure and hope they notice. The second is to make people feel more secure without making them actually more secure, and hope they don't notice.
The key here is whether we notice. The feeling and reality of security tend to converge when we take notice, and diverge when we don't. People notice when 1) there are enough positive and negative examples to draw a conclusion, and 2) there isn't too much emotion clouding the issue.
Both elements are important. If someone tries to convince us to spend money on a new type of home burglar alarm, we as a society will know pretty quickly if he's got a clever security device or if he's a charlatan; we can monitor crime rates. But if that same person advocates a new national antiterrorism system, and there weren't any terrorist attacks before it was implemented, and there weren't any after it was implemented, how do we know if his system was effective?
People are more likely to realistically assess these incidents if they don't contradict preconceived notions about how the world works. For example: It's obvious that a wall keeps people out, so arguing against building a wall across America's southern border to keep illegal immigrants out is harder to do.
The other thing that matters is agenda. There are lots of people, politicians, companies and so on who deliberately try to manipulate your feeling of security for their own gain. They try to cause fear. They invent threats. They take minor threats and make them major. And when they talk about rare risks with only a few incidents to base an assessment on -- terrorism is the big example here -- they are more likely to succeed.
Unfortunately, there's no obvious antidote. Information is important. We can't evaluate security unless we understand the risks. But that's not enough: Few of us really understand cancer, yet we regularly make security decisions based on its risk. What we do is accept that there are experts who understand the risks of cancer, and trust them to make the security trade-offs for us.
There are some complex feedback loops going on here, between emotion and reason, between reality and our knowledge of it, between feeling and familiarity, and between the understanding of how we reason and feel about security and our analyses and feelings. We're never going to stop making security trade-offs based on the feeling of security, and we're never going to completely prevent those with specific agendas from trying to manipulate us. But the more we know, the better trade-offs we'll make.
This article originally appeared on Wired.com.
I can't believe I let April 1 come and go without posting the rules to the Third Annual Movie-Plot Threat Contest. Well, better late than never.
For this contest, the goal is to create fear. Not just any fear, but a fear that you can alleviate through the sale of your new product idea. There are lots of risks out there, some of them serious, some of them so unlikely that we shouldn't worry about them, and some of them completely made up. And there are lots of products out there that provide security against those risks.
Your job is to invent one. First, find a risk or create one. It can be a terrorism risk, a criminal risk, a natural-disaster risk, a common household risk -- whatever. The weirder the better. Then, create a product that everyone simply has to buy to protect him- or herself from that risk. And finally, write a catalog ad for that product.
Here's an example, pulled from page 25 of the Late Spring 2008 Skymall catalog I'm reading on my airplane right now:
A Turtle is Safe in Water, A Child is Not!
Entries are limited to 150 words -- the example above had 97 words -- because fear doesn't require a whole lot of explaining. Tell us why we should be afraid, and why we should buy your product.
Entries will be judged on creativity, originality, persuasiveness, and plausibility. It's okay if the product you invent doesn't actually exist, but this isn't a science fiction contest.
Portable salmonella detectors for salad bars. Acoustical devices that estimate tiger proximity based on roar strength. GPS-enabled wallets for use when you've been pickpocketed. Wrist cuffs that emit fake DNA to fool DNA detectors. The Quantum Sleeper. Fear offers endless business opportunities. Good luck.
Entries due by May 1.
EDITED TO ADD (4/7): Submit your entry in the comments.
EDITED TO ADD (4/8): You people are frighteningly creative.
Data from San Francisco:
Researchers examined data from the San Francisco Police Department detailing the 59,706 crimes committed within 1,000 feet of the camera locations between Jan. 1, 2005, and Jan. 28, 2008.
This quote is instructive:
Mayor Gavin Newsom called the report "conclusively inconclusive" on Thursday but said he still wants to install more cameras around the city because they make residents feel safer.
That's right: the cameras aren't about security, they're about security theater. More comments on the general issue here.
A review of Access Denied, edited by Ronald Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain, MIT Press: 2008.
In 1993, Internet pioneer John Gilmore said "the net interprets censorship as damage and routes around it", and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his 'Declaration of the Independence of Cyberspace' at the World Economic Forum at Davos, Switzerland, and online. He told governments: "You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear."
At the time, many shared Barlow's sentiments. The Internet empowered people. It gave them access to information and couldn't be stopped, blocked or filtered. Give someone access to the Internet, and they have access to everything. Governments that relied on censorship to control their citizens were doomed.
Today, things are very different. Internet censorship is flourishing. Organizations selectively block employees' access to the Internet. At least 26 countries -- mainly in the Middle East, North Africa, Asia, the Pacific and the former Soviet Union -- selectively block their citizens' Internet access. Even more countries legislate to control what can and cannot be said, downloaded or linked to. "You have no sovereignty where we gather," said Barlow. Oh yes we do, the governments of the world have replied.
Access Denied is a survey of the practice of Internet filtering, and a sourcebook of details about the countries that engage in the practice. It is written by researchers at the OpenNet Initiative (ONI), an organization dedicated to documenting Internet filtering around the world.
The first half of the book comprises essays written by ONI researchers on the politics, practice, technology, legality and social effects of Internet filtering. There are three basic rationales for Internet censorship: politics and power; social norms, morals and religion; and security concerns.
Some countries, such as India, filter only a few sites; others, such as Iran, extensively filter the Internet. Saudi Arabia tries to block all pornography (social norms and morals). Syria blocks everything from the Israeli domain ".il" (politics and power). Some countries filter only at certain times. During the 2006 elections in Belarus, for example, the website of the main opposition candidate disappeared from the Internet.
The effectiveness of Internet filtering is mixed; it depends on the tools used and the granularity of filtering. It is much easier to block particular URLs or entire domains than it is to block information on a particular topic. Some countries block specific sites or URLs based on some predefined list but new URLs with similar content appear all the time. Other countries -- notably China -- try to filter on the basis of keywords in the actual web pages. A halfway measure is to filter on the basis of URL keywords: names of dissidents or political parties, or sexual words.
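The difference in granularity is easy to see in code. Here is a toy sketch of the three filtering levels described above — domain blocking, URL-keyword matching, and page-content keyword matching — in rough order of increasing cost and decreasing reliability. All the domain names and keywords are invented for illustration; no real blocklist works from lists this small:

```python
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-dissident.org"}       # coarse: block whole domains
URL_KEYWORDS = {"falun", "opposition-party"}      # medium: match keywords in URLs
PAGE_KEYWORDS = {"banned slogan"}                 # fine: match keywords in page text

def filter_decision(url, page_text=""):
    """Return 'block' or 'allow' for a requested URL."""
    host = urlparse(url).hostname or ""
    # Domain-level blocking is cheap but blunt -- e.g., Syria blocking all of .il
    if host in BLOCKED_DOMAINS or host.endswith(".il"):
        return "block"
    # URL-keyword filtering catches mirrors, but only if the keyword is in the URL
    if any(kw in url.lower() for kw in URL_KEYWORDS):
        return "block"
    # Content-keyword filtering (the Chinese approach) requires inspecting the page
    if any(kw in page_text.lower() for kw in PAGE_KEYWORDS):
        return "block"
    return "allow"
```

The sketch also shows why filtering is porous: the same content at a new URL, with no listed keyword in the address, sails past the first two checks, which is why only the expensive content-inspection layer generalizes at all.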
Much of the technology has other applications. Software for filtering is a legitimate product category, purchased by schools to limit access by children to objectionable material and by corporations trying to prevent their employees from being distracted at work. One chapter discusses the ethical implications of companies selling products, services and technologies that enable Internet censorship.
Some censorship is legal, not technical. Countries have laws against publishing certain content, registration requirements that prevent anonymous Internet use, liability laws that force Internet service providers to filter themselves, or surveillance. Egypt does not engage in technical Internet filtering; instead, its laws discourage the publishing and reading of certain content -- it has even jailed people for their online activities.
The second half of Access Denied consists of detailed descriptions of Internet use, regulations and censorship in eight regions of the world, and in each of 40 different countries. The ONI found evidence of censorship in 26 of those 40. For the other 14 countries, it summarizes the legal and regulatory framework surrounding Internet use, and the test results that indicated no censorship. This leads to 200 pages of rather dry reading, but it is vitally important to have this information well-documented and easily accessible. The book's data are from 2006, but the authors promise frequent updates on the ONI website.
No set of Internet censorship measures is perfect. It is often easy to find the same information on uncensored URLs, and relatively easy to get around the filtering mechanisms and to view prohibited web pages if you know what you're doing. But most people don't have the computer skills to bypass controls, and in a country where doing so is punishable by jail -- or worse -- few take the risk. So even porous and ineffective attempts at censorship can become very effective socially and politically.
In 1996, Barlow said: "You are trying to ward off the virus of liberty by erecting guard posts at the frontiers of cyberspace. These may keep out the contagion for some time, but they will not work in a world that will soon be blanketed in bit-bearing media."
Brave words, but premature. Certainly, there is much more information available to many more people today than there was in 1996. But the Internet is made up of physical computers and connections that exist within national boundaries. Today's Internet still has borders and, increasingly, countries want to control what passes through them. In documenting this control, the ONI has performed an invaluable service.
This was originally published in Nature.
Scientists are considering it:
The beak, made of hard chitin and other materials, changes density gradually from the hard tip to a softer, more flexible base where it attaches to the muscle around the squid's mouth, the researchers found.
What in the world is "terroristic threatening"?
The woman was also charged with one count of terroristic threatening for pointing a handgun at an officer, said university police Maj. Kenny Brown. The woman gave her handgun to a counselor at the health services building, he said.
We are all hurt by the application of the word "terrorist" to everything we don't like. Terrorism does not equal criminality.
That's the key entry system used by Chrysler, Daewoo, Fiat, General Motors, Honda, Toyota, Lexus, Volvo, Volkswagen, Jaguar, and probably others. It's broken:
The KeeLoq encryption algorithm is widely used for security relevant applications, e.g., in the form of passive Radio Frequency Identification (RFID) transponders for car immobilizers and in various access control and Remote Keyless Entry (RKE) systems, e.g., for opening car doors and garage doors.
I've written about this before, but the above link has much better data.
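The cipher itself is small enough to sketch. KeeLoq is a 32-bit block cipher with a 64-bit key, built from a nonlinear feedback shift register iterated 528 times; its tiny, regular structure is part of why it has proved so fragile. Below is a minimal Python transcription of the published algorithm, for illustration only. The key and plaintext in the example are arbitrary values, not test vectors from any paper:

```python
# KeeLoq: 32-bit block, 64-bit key, 528 rounds of a nonlinear
# feedback shift register.  The 5-input nonlinear function NLF
# is a boolean lookup table packed into the constant 0x3A5C742E.
NLF = 0x3A5C742E

def _nlf(a, b, c, d, e):
    # Output bit of the nonlinear function for input bits (a, b, c, d, e)
    return (NLF >> ((a << 4) | (b << 3) | (c << 2) | (d << 1) | e)) & 1

def keeloq_encrypt(block, key):
    x = block & 0xFFFFFFFF
    for i in range(528):
        b = (_nlf((x >> 31) & 1, (x >> 26) & 1, (x >> 20) & 1,
                  (x >> 9) & 1, (x >> 1) & 1)
             ^ (x >> 16) ^ x ^ (key >> (i % 64))) & 1
        x = (b << 31) | (x >> 1)  # shift right, feed the new bit in at the top
    return x

def keeloq_decrypt(block, key):
    x = block & 0xFFFFFFFF
    for i in range(528):
        # Undo the rounds in reverse; 527 % 64 == 15, hence the key index
        b = (_nlf((x >> 30) & 1, (x >> 25) & 1, (x >> 19) & 1,
                  (x >> 8) & 1, x & 1)
             ^ (x >> 15) ^ (x >> 31) ^ (key >> ((15 - i) % 64))) & 1
        x = ((x << 1) | b) & 0xFFFFFFFF  # shift left, recover the dropped bit
    return x
```

With a 64-bit key and this little internal state, the whole cipher fits in a keyfob's microcontroller, which is exactly the design pressure that made it attractive to so many manufacturers and so attractive to attack.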
EDITED TO ADD (4/4): A good article.
We finally have some actual information about the "liquid bomb" that was planned by that London group arrested in 2006:
The court heard the bombers intended to use hydrogen peroxide and mix it with a product called Tang, used in soft drinks, to turn it into an explosive.
Any chemists want to take a crack at this one?
Fascinating. Note that it doesn't make it harder to open the door; it just takes longer.
EDITED TO ADD (1:00 PM): Seems like this is a hoax. Or an art project. Or something. I'm really disappointed; I want one.
Oddly enough, I flew into Orlando Airport on Tuesday night, hours after TSA and police caught Kevin Brown -- not the baseball player -- with bomb-making equipment in his checked luggage. (Yes, checked luggage. He was bringing it to Jamaica, not planning on blowing up the plane he was on.) Seems like someone trained in behavioral profiling singled him out, probably for stuff like this:
"He was rocking left to right, bouncing up and down ... he was there acting crazy," passenger Jason Doyle said.
But that was a passenger remembering Brown after the fact, so I wouldn't put too much credence in it.
"This is not him," she said in a phone interview. "It has to be a mental issue for him. I know if they looked through his medical records...I'm sure they will see..."He's not a terrorist."
Doesn't sound like a terrorist, but this does:
According to the affidavit, Brown admitted he had the items because he wanted to make pipe bombs in Jamaica. It also indicated he wanted to show friends how to make pipe bombs like he made while in Iraq.
Ignore the hyperbole; nitromethane is a liquid fuel, not a high explosive. Here's the whole affidavit, if you want to read it.
Even with all this news, the truth is that we just don't know what happened. It looks like a great win for behavioral profiling (which, when done well, I think is a good idea) and the TSA. The TSA is certainly pleased. But we've seen apparent TSA wins before that turn out to be bogus when the details finally come out. Right now I'm cautiously pleased with the TSA's performance, and offer them a tentative congratulations, especially for not over-reacting. I read -- but can't find the link now -- that only 11 flights were delayed because of the event. The TSA claims that no flights were delayed, and also says that no security checkpoints were closed. Either way, it's certainly something to congratulate the TSA about.
An eerily prescient article from The Atlantic in 1967 about the future of data privacy. It presents all of the basic arguments for strict controls on the collection of personal data, and it is remarkably accurate in its predictions of the future development and importance of computers, as well as all of the ways the government would abuse them.
Well worth reading.
They were used against planes last week.
I'm sure criminals also used cars in Australia last week. Will the country ban them next?
On the other hand, I'm sick and tired of laser pointers myself. But the cats of Australia will be terribly disappointed.
The U.S. is outsourcing the manufacture of its RFID passports to some questionable companies.
This is a great illustration of the maxim "security trade-offs are often made for non-security reasons." I can imagine the manager in charge: "Yes, it's insecure. But think of the savings!"
The Government Printing Office's decision to export the work has proved lucrative, allowing the agency to book more than $100 million in recent profits by charging the State Department more money for blank passports than it actually costs to make them, according to interviews with federal officials and documents obtained by The Times.
This is 1) a good demonstration that a fingerprint is not a secret, and 2) a great political hack. Wolfgang Schäuble, Germany's interior minister, is a strong supporter of collecting biometric data on everyone as an antiterrorist measure. Because, um, because it sounds like a good idea.
This is just insane:
The Quantum Sleeper Unit is a high-level security system designed for maximum protection in various hostile environments.
Got an idea for how to build one? The TSA wants to give you money.