Schneier on Security
A blog covering security and security technology.
June 2006 Archives
Bishop Katharine Jefferts Schori of Nevada was elected as presiding bishop of the Episcopal Church:
A former research oceanographer who studied squid, octopuses and creatures living in marine mud, she was a second-career priest who was ordained in 1994.
The jokes have begun:
One wag noted that the study of invertebrates makes Bishop Schori supremely qualified to rule the ECUSA. She's studied oysters and squids...this is a mental picture that I really did not need. Is this a case of 'squid pro quo'?
Does Microsoft have the ability to disable Windows remotely? Maybe:
Two weeks ago, I wrote about my serious objections to Microsoft’s latest salvo in the war against unauthorized copies of Windows. Two Windows Genuine Advantage components are being pushed onto users’ machines with insufficient notification and inadequate quality control, and the result is a big mess. (For details, see Microsoft presses the Stupid button.)
And this, supposedly from someone at Microsoft Support:
He told me that "in the fall, having the latest WGA will become mandatory and if its not installed, Windows will give a 30 day warning and when the 30 days is up and WGA isn't installed, Windows will stop working, so you might as well install WGA now."
The stupidity of this idea is amazing. Not just the inevitability of false positives, but the potential for a hacker to co-opt the controls. I hope this rumor ends up not being true.
Although if they actually do it, the backlash could do more for non-Windows OSs than anything those OSs could do for themselves.
New invention, just patented:
Meyerle is patenting a design for a modified cartridge that would be fired by a burst of high-frequency radio energy. But the energy would only ignite the charge if a solid-state switch within the cartridge had been activated. This would only happen if a password entered into the gun using a tiny keypad matched one stored in the cartridge.
I'm sitting in a conference room at Cambridge University, trying to simultaneously finish this article for Wired News and pay attention to the presenter onstage.
I'm in this awkward situation because 1) this article is due tomorrow, and 2) I'm attending the fifth Workshop on the Economics of Information Security, or WEIS: to my mind, the most interesting computer security conference of the year.
The idea that economics has anything to do with computer security is relatively new. Ross Anderson and I seem to have stumbled upon the idea independently. He, in his brilliant article from 2001, "Why Information Security Is Hard -- An Economic Perspective" (.pdf), and me in various essays and presentations from that same period.
WEIS began a year later at the University of California at Berkeley and has grown ever since. It's the only workshop where technologists get together with economists and lawyers and try to understand the problems of computer security.
And economics has a lot to teach computer security. We generally think of computer security as a problem of technology, but often systems fail because of misplaced economic incentives: The people who could protect a system are not the ones who suffer the costs of failure.
When you start looking, economic considerations are everywhere in computer security. Hospitals' medical-records systems provide comprehensive billing-management features for the administrators who specify them, but are not so good at protecting patients' privacy. Automated teller machines suffered from fraud in countries like the United Kingdom and the Netherlands, where poor regulation left banks without sufficient incentive to secure their systems, and allowed them to pass the cost of fraud along to their customers. And one reason the internet is insecure is that liability for attacks is so diffuse.
In all of these examples, the economic considerations of security are more important than the technical considerations.
More generally, many of the most basic security questions are at least as much economic as technical. Do we spend enough on keeping hackers out of our computer systems? Or do we spend too much? For that matter, do we spend appropriate amounts on police and Army services? And are we spending our security budgets on the right things? In the shadow of 9/11, questions like these have a heightened importance.
Economics can actually explain many of the puzzling realities of internet security. Firewalls are common, e-mail encryption is rare: not because of the relative effectiveness of the technologies, but because of the economic pressures that drive companies to install them. Corporations rarely publicize information about intrusions; that's because of economic incentives against doing so. And an insecure operating system is the international standard, in part, because its economic effects are largely borne not by the company that builds the operating system, but by the customers that buy it.
Some of the most controversial cyberpolicy issues also sit squarely between information security and economics. For example, the issue of digital rights management: Is copyright law too restrictive -- or not restrictive enough -- to maximize society's creative output? And if it needs to be more restrictive, will DRM technologies benefit the music industry or the technology vendors? Is Microsoft's Trusted Computing initiative a good idea, or just another way for the company to lock its customers into Windows, Media Player and Office? Any attempt to answer these questions becomes rapidly entangled with both information security and economic arguments.
WEIS encourages papers on these and other issues in economics and computer security. We heard papers presented on the economics of digital forensics of cell phones (.pdf) -- if you have an uncommon phone, the police probably don't have the tools to perform forensic analysis -- and the effect of stock spam on stock prices: It actually works in the short term. We learned that more-educated wireless network users are not more likely to secure their access points (.pdf), and that the best predictor of wireless security is the default configuration of the router.
Other researchers presented economic models to explain patch management (.pdf), peer-to-peer worms (.pdf), investment in information security technologies (.pdf) and opt-in versus opt-out privacy policies (.pdf). There was a field study that tried to estimate the cost to the U.S. economy for information infrastructure failures (.pdf): less than you might think. And one of the most interesting papers looked at economic barriers to adopting new security protocols (.pdf), specifically DNS Security Extensions.
This is all heady stuff. In the early years, there was a bit of a struggle as the economists and the computer security technologists tried to learn each others' languages. But now it seems that there's a lot more synergy, and more collaborations between the two camps.
I've long said that the fundamental problems in computer security are no longer about technology; they're about applying technology. Workshops like WEIS are helping us understand why good security technologies fail and bad ones succeed, and that kind of insight is critical if we're going to improve security in the information age.
This essay originally appeared on Wired.com.
I can't believe I forgot to blog this great article about the communications intercept trade show in DC earlier this month:
"You really need to educate yourself," he insisted. "Do you think this stuff doesn't happen in the West? Let me tell you something. I sell this equipment all over the world, especially in the Middle East. I deal with buyers from Qatar, and I get more concern about proper legal procedure from them than I get in the USA."
Read the whole thing.
Maybe I shouldn't have said this:
"I have a completely open Wi-Fi network," Schneier told ZDNet UK. "Firstly, I don't care if my neighbors are using my network. Secondly, I've protected my computers. Thirdly, it's polite. When people come over they can use it."
For the record, I have an ultra-secure wireless network that automatically reports all hacking attempts to unsavory men with bitey dogs.
"Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP," paper by Steve Bellovin, Matt Blaze, Ernie Brickell, Clint Brooks, Vint Cerf, Whit Diffie, Susan Landau, Jon Peterson, and John Treichler.
Almost every piece of personal information that Americans try to keep secret -- including bank account statements, e-mail messages and telephone records -- is semi-public and available for sale.
The committee subpoenaed representatives from 11 companies that use the Internet and phone calls to obtain, market, and sell personal data, but they refused to talk.
Richard Clayton is presenting a paper (blog post here) that discusses how to defeat China's national firewall:
...the keyword detection is not actually being done in large routers on the borders of the Chinese networks, but in nearby subsidiary machines. When these machines detect the keyword, they do not actually prevent the packet containing the keyword from passing through the main router (this would be horribly complicated to achieve and still allow the router to run at the necessary speed). Instead, these subsidiary machines generate a series of TCP reset packets, which are sent to each end of the connection. When the resets arrive, the end-points assume they are genuine requests from the other end to close the connection -- and obey. Hence the censorship occurs.
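To make the mechanism concrete, here's a minimal sketch of the kind of packet those subsidiary machines inject. It only builds the 20-byte TCP header with the RST flag set; a real injected packet would also need a valid checksum and an IP header, and the forged sequence number has to land in the receiver's window. (And, as this suggests, endpoints that simply ignore unexpected resets are not censored at all.)

```python
import struct

def forge_rst(src_port, dst_port, seq):
    """Build a minimal 20-byte TCP header with only the RST flag set,
    mimicking what the firewall's subsidiary machines inject.
    Checksum is left at zero for illustration; a real packet needs
    the TCP pseudo-header checksum and an enclosing IP header."""
    offset_flags = (5 << 12) | 0x04   # data offset = 5 words, flags = RST
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,
                       seq,            # copied from the observed connection
                       0,              # ack number (unused without ACK flag)
                       offset_flags,
                       0,              # window
                       0,              # checksum (placeholder)
                       0)              # urgent pointer

pkt = forge_rst(80, 54321, 1000)
flags = struct.unpack("!H", pkt[12:14])[0] & 0x3F
assert flags == 0x04  # only the RST bit is set
```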
You'd think a national mint would have better security against insiders.
But Justice Connolly also criticised security at the mint, saying he was amazed a theft on this scale could happen.
This sort of thing happens so often it's no longer news:
Conte's e-mails were intended to be blacked out in a 51-page electronic filing Wednesday in which the government argued against the Chronicle's motion to quash the subpoena. Eight of those pages were not supposed to be public.
Another news article here.
According to CNN:
Besides the contact restrictions, all users -- not just those 14 and 15 -- will have the option to make only partial profiles available to those not already on their friends list.
Honestly, this all sounds a lot more like cover-your-ass security than real security: MySpace securing itself from lawsuits.
"Safety experts" seem to agree that it won't improve security much.
And two nights ago I had a watermelon and cucumber squid salad at Piperade in San Francisco.
I've long known about the possible Unix date issue, but this is the first I've heard of an actual bug due to the Unix time epoch rolling over in 2038.
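For anyone who hasn't done the arithmetic: a signed 32-bit time_t counts seconds since the Unix epoch and runs out in January 2038. A few lines of Python show exactly where it rolls over:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds since the Unix epoch
# (1970-01-01 00:00:00 UTC) and tops out at 2**31 - 1.
T_MAX = 2**31 - 1
rollover = datetime.fromtimestamp(T_MAX, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00

# One second later the counter wraps to -2**31, which lands back in 1901.
wrapped = datetime.fromtimestamp(-2**31, tz=timezone.utc)
print(wrapped)   # 1901-12-13 20:45:52+00:00
```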
The new policy says that AT&T -- not customers -- owns customers' confidential info and can use it "to protect its legitimate business interests, safeguard others, or respond to legal process."
EDITED TO ADD (6/27): User Friendly on the issue.
Behind the bugging operation were two pieces of sophisticated software, according to Ericsson. One was Ericsson's own, some basic elements of which came as a preinstalled feature of the network equipment. When enabled, the feature can be used for lawful interception by government authorities, which has become increasingly common since the Sept. 11 terror attacks. But to use the interception feature, operators like Vodafone would need to pay Ericsson millions of dollars to purchase the additional hardware, software and passwords that are required to activate it. Both companies say Vodafone hadn't done that in Greece at the time.
How good are these fake identities?
This sounds like a science fiction premise: Unmanned drones that monitor the population for crimes.
The security system of the Xbox has been a complete failure.
"How to build a low-cost, extended-range RFID skimmer," by Ilan Kirschenbaum and Avishai Wool. To appear in 15th USENIX Security Symposium, Vancouver, Canada, August 2006.
There are a variety of encryption technologies that allow you to analyze data without knowing details of the data:
Largely by employing the head-spinning principles of cryptography, the researchers say they can ensure that law enforcement, intelligence agencies and private companies can sift through huge databases without seeing names and identifying details in the records.
This is nothing new. I've seen papers on this sort of stuff since the late 1980s. The problem is that no one in law enforcement has any incentive to use them. Privacy is rarely a technological problem; it's far more often a social or economic problem.
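The simplest version of the idea is decades old: replace identifiers with keyed hashes, so records from different databases can still be joined while the names stay hidden. This toy sketch (the key and data are made up, and real systems use far stronger techniques such as private set intersection; a keyed hash alone is still vulnerable to dictionary attacks by anyone holding the key) shows the basic shape:

```python
import hmac, hashlib

def pseudonymize(records, key):
    """Replace each identifier with a keyed hash (HMAC-SHA256). Whoever
    holds only the hashed records can still join them across databases,
    but cannot read the underlying names without the key."""
    return {hmac.new(key, name.encode(), hashlib.sha256).hexdigest(): data
            for name, data in records.items()}

key = b"shared-secret-between-the-two-parties"  # hypothetical

airline   = pseudonymize({"alice": "SEA->IAD", "bob": "JFK->LHR"}, key)
watchlist = pseudonymize({"bob": "flagged"}, key)

# The analyst sees only hashes, yet can still find the overlap.
matches = airline.keys() & watchlist.keys()
assert len(matches) == 1
```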
...here's a more useful quiz:
Racial profiling doesn't work against terrorism, because terrorists don't fit any racial profile.
Surreal story about a person coming into the U.S. from Iraq who is held up at the border because he used to sell copyrighted images on T-shirts:
Homeland Security, the $40-billion-a-year agency set up to combat terrorism after 9/11, has been given universal jurisdiction and can hold anyone on Earth for crimes unrelated to national security -- even me for a court date I missed while I was in Iraq helping America deter terror -- without asking what I had been doing in Pakistan among Islamic extremists the agency is designated to stop. Instead, some of its actions are erasing the lines of jurisdiction between local police and the federal state, scarily bringing the words "police" and "state" closer together. As long as we allow Homeland Security to act like a Keystone Stasi, terrorism will continue to win in destroying our freedom.
Kevin Drum mentions it, too.
I can tell you one thing, you guys are really imaginative. The response to my Movie-Plot Threat Contest was more than I could imagine: 892 comments. I printed them all out -- 195 pages, double sided -- and spiral bound them, so I could read them more easily. The cover read: "The Big Book of Terrorist Plots." I tried not to wave it around too much in airports.
I almost didn't want to pick a winner, because the real point is the enormous list of them all. And because it's hard to choose. But after careful deliberation (see selection criteria here), the winning entry is by Tom Grant. Although planes filled with explosives is already cliche, destroying the Grand Coulee Dam is inspired. Here it is:
Mission: Terrorize Americans. Neutralize American economy, make America feel completely vulnerable, and all Americans unsafe.
Congratulations, Tom. I'm still trying to figure out what you win.
There's a more coherent essay about this on Wired.com, but I didn't reprint it here because it contained too much that I've already posted on this blog.
New Scientist has discovered that the Pentagon's National Security Agency, which specialises in eavesdropping and code-breaking, is funding research into the mass harvesting of the information that people post about themselves on social networks. And it could harness advances in internet technology - specifically the forthcoming "semantic web" championed by the web standards organisation W3C - to combine data from social networking websites with details such as banking, retail and property records, allowing the NSA to build extensive, all-embracing personal profiles of individuals.
NIST has just published "Recommendation for Random Number Generation Using Deterministic Random Bit Generators."
The basic service that Pena provided is not uncommon. Telecommunications brokers often buy long-distance minutes from carriers -- especially VoIP carriers -- and then re-sell those minutes directly to customers. They make money by marking up the services they buy from carriers.
Great article comparing the barrier Israel is erecting to protect itself from the West Bank with the hypothetical barrier the U.S. would build to protect itself from Mexico:
The Israeli West Bank barrier, when finished, will run for more than 400 miles and will consist of trenches, security roads, electronic fences, and concrete walls. Its main goal is to stop terrorists from detonating themselves in restaurants and cafes and buses in the cities and towns of central Israel. So, planners set the bar very high: It is intended to prevent every single attempt to cross it. The rules of engagement were written accordingly. If someone trying to cross the fence in the middle of the night is presumed to be a terrorist, there's no need to hesitate before shooting. To kill.
Interesting paper on the security of contactless smartcards:
Interestingly, the outcome of this investigation shows that contactless smartcards are not fundamentally less secure than contact cards. However, some attacks are inherently facilitated. Therefore both the user and the issuer should be aware of these threats and take them into account when building or using the systems based on contactless smartcards.
From New Scientist:
The Pentagon considered developing a host of non-lethal chemical weapons that would disrupt discipline and morale among enemy troops, newly declassified documents reveal.
Technology always gets better; it never gets worse. There will be a time, probably in our lifetimes, when weapons like these will be real.
Interesting law review article by Helen Nissenbaum:
I've previously written about the risks of small portable computing devices; how more and more data can be stored on them, and then lost or stolen. But there's another risk: if an attacker can convince you to plug his USB device into your computer, he can take it over.
Plug an iPod or USB stick into a PC running Windows and the device can literally take over the machine and search for confidential documents, copy them back to the iPod or USB's internal storage, and hide them as "deleted" files. Alternatively, the device can simply plant spyware, or even compromise the operating system. Two features that make this possible are the Windows AutoRun facility and the ability of peripherals to use something called direct memory access (DMA). The first attack vector you can and should plug; the second vector is the result of a design flaw that's likely to be with us for many years to come.
The article has the details, but basically you can configure a file on your USB device to automatically run when it's plugged into a computer. That file can, of course, do anything you want it to.
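The file in question is a plain-text autorun.inf at the root of the device. Something as innocuous-looking as this (the executable name here is made up) is all it takes:

```ini
[autorun]
open=setup.exe
icon=setup.exe,0
```

Windows runs whatever "open" points at when the device is inserted -- and "setup.exe" can be any program the device's owner chooses, which is exactly the problem.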
Recently I've been seeing more and more written about this attack. The Spring 2006 issue of 2600 Magazine, for example, contains a short article called "iPod Sneakiness" (unfortunately, not on line). The author suggests that you can innocently ask someone at an Internet cafe if you can plug your iPod into his computer to power it up -- and then steal his passwords and critical files.
And here's an article about someone who used this trick in a penetration test:
We figured we would try something different by baiting the same employees that were on high alert. We gathered all the worthless vendor giveaway thumb drives collected over the years and imprinted them with our own special piece of software. I had one of my guys write a Trojan that, when run, would collect passwords, logins and machine-specific information from the user's computer, and then email the findings back to us.
There is a defense. From the first article:
AutoRun is just a bad idea. People putting CD-ROMs or USB drives into their computers usually want to see what's on the media, not have programs automatically run. Fortunately you can turn AutoRun off. A simple manual approach is to hold down the "Shift" key when a disk or USB storage device is inserted into the computer. A better way is to disable the feature entirely by editing the Windows Registry. There are many instructions for doing this online (just search for "disable autorun") or you can download and use Microsoft's TweakUI program, which is part of the Windows XP PowerToys download. With Windows XP you can also disable AutoRun for CDs by right-clicking on the CD drive icon in the Windows explorer, choosing the AutoPlay tab, and then selecting "Take no action" for each kind of disk that's listed. Unfortunately, disabling AutoPlay for CDs won't always disable AutoPlay for USB devices, so the registry hack is the safest course of action.
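For the registry hack mentioned above, the commonly documented approach on Windows XP is the NoDriveTypeAutoRun policy value; setting it to 0xFF disables AutoRun for every drive type. A .reg file along these lines (apply it per-user under HKEY_CURRENT_USER, or machine-wide under HKEY_LOCAL_MACHINE) does it:

```ini
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```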
In the 1990s, the Macintosh operating system had this feature, which was removed after a virus made use of it in 1998. Microsoft needs to remove this feature as well.
EDITED TO ADD (6/12): In the penetration test, they didn't use AutoRun.
The website is hysterical:
Why are 256 bits the technically highest coding depth at all on computers possible are ?
My head hurts just trying to read that.
From "Assassination in the United States: An Operational Study of Recent Assassins, Attackers, and Near-Lethal Approachers," (a 1999 article published in the Journal of Forensic Sciences):
Few attackers or near-lethal approachers possessed the cunning or the bravado of assassins in popular movies or novels. The reality of American assassination is much more mundane, more banal than assassinations depicted on the screen. Neither monsters nor martyrs, recent American assassins, attackers, and near-lethal approachers engaged in pre-incident patterns of thinking and behaviour.
The quote is from the last page. The whole thing is interesting reading.
Gonzales and Mueller asked Google Inc., Time Warner Inc.'s AOL and other companies to preserve the data at a May 26 meeting, citing their value to investigations into child-pornography distribution and terrorism. Internet companies typically keep customer histories for only a few days or weeks.
Note that the Justice Department invoked two of the Four Horsemen of the Internet Apocalypse: child pornographers and terrorists. If they can figure out how to work kidnappers and drug dealers in, they can probably do anything they want.
Just hide this gadget in someone's car or briefcase -- or maybe sew it into his coat -- and then track his every move.
You have to recover the device to play it back, but presumably the next generation will be queryable remotely.
Nice article discussing the hype, and reality, over the threat of homebrew chemical weapons.
In case you thought a hard-to-forge national ID card would solve the fake ID problem, here's what the criminals have to say:
Luis Hernandez just laughs as he sells fake driver's licenses and Social Security cards to illegal immigrants near a park known for shady deals. The joke -- to him and others in his line of work -- is the government's promise to put people like him out of business with a tamperproof national ID card.
Title 18, United States Code, Section 1001 makes it a crime to: 1) knowingly and willfully; 2) make any materially false, fictitious or fraudulent statement or representation; 3) in any matter within the jurisdiction of the executive, legislative or judicial branch of the United States. Your lie does not even have to be made directly to an employee of the national government as long as it is "within the jurisdiction" of the ever expanding federal bureaucracy. Though the falsehood must be "material" this requirement is met if the statement has the "natural tendency to influence or [is] capable of influencing, the decision of the decisionmaking body to which it is addressed." United States v. Gaudin, 515 U.S. 506, 510 (1995). (In other words, it is not necessary to show that your particular lie ever really influenced anyone.) Although you must know that your statement is false at the time you make it in order to be guilty of this crime, you do not have to know that lying to the government is a crime or even that the matter you are lying about is "within the jurisdiction" of a government agency. United States v. Yermian, 468 U.S. 63, 69 (1984). For example, if you lie to your employer on your time and attendance records and, unbeknownst to you, he submits your records, along with those of other employees, to the federal government pursuant to some regulatory duty, you could be criminally liable.
We discuss credit card data centers getting hacked; why banks getting hacked doesn't make mainstream media; reissuing bank cards; how much he makes cashing out bank cards; how banks cover money stolen from credit cards; why companies are not cracking down on credit card crimes; how to prevent credit card theft; ATM scams; being "legit" in the criminal world; how he gets cash out gigs; getting PINs and encoding blank credit cards; how much money he can pull in a day; e-gold; his chances of getting caught; the best day to hit the ATMs; encrypting ICQ messages.
Animated political cartoon. And a song, too.
Great resource by Dr. James B. Wood at the Bermuda Biological Station for Research.
You can audit "Welcome to Practical Aspects of Modern Cryptography." It was taught at the University of Washington, Winter 2006, by Josh Benaloh, Brian LaMacchia, and John Manderdelli. The course materials and videos of the lectures are online.
Fascinating essay about how EU law would treat the NSA's collection of everyone's phone records.
Bank defends its bad security by saying that everyone else does it too.
Have you ever been to a retail store and seen this sign on the register: "Your purchase free if you don't get a receipt"? You almost certainly didn't see it in an expensive or high-end store. You saw it in a convenience store, or a fast-food restaurant. Or maybe a liquor store. That sign is a security device, and a clever one at that. And it illustrates a very important rule about security: it works best when you align interests with capability.
If you're a store owner, one of your security worries is employee theft. Your employees handle cash all day, and dishonest ones will pocket some of it for themselves. The history of the cash register is mostly a history of preventing this kind of theft. Early cash registers were just boxes with a bell attached. The bell rang when an employee opened the box, alerting the store owner -- who was presumably elsewhere in the store -- that an employee was handling money.
The register tape was an important development in security against employee theft. Every transaction is recorded in write-only media, in such a way that it's impossible to insert or delete transactions. It's an audit trail. Using that audit trail, the store owner can count the cash in the drawer, and compare the amount with what the register recorded. Any discrepancies can be docked from the employee's paycheck.
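The register tape is just an append-only log plus a reconciliation step. A toy model (names and amounts invented for illustration) makes the mechanism explicit:

```python
class RegisterTape:
    """Toy model of the register tape: an append-only audit trail.
    Transactions can be added but never edited or removed."""
    def __init__(self):
        self._entries = []

    def record(self, amount):
        self._entries.append(amount)

    def expected_cash(self):
        return sum(self._entries)

def reconcile(tape, cash_in_drawer):
    """The owner's end-of-day check: any shortfall between the tape
    and the drawer can be docked from the employee's paycheck."""
    return cash_in_drawer - tape.expected_cash()

tape = RegisterTape()
tape.record(5.00)
tape.record(3.50)
assert reconcile(tape, 8.50) == 0.0     # books balance
assert reconcile(tape, 7.50) == -1.0    # a dollar is missing from the drawer
```

The security comes entirely from the append-only property: a dishonest employee can keep a sale off the tape, but can't alter what's already on it -- which is why the attack described below is to avoid ringing the sale up at all.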
If you're a dishonest employee, you have to keep transactions off the register. If someone hands you money for an item and walks out, you can pocket that money without anyone being the wiser. And, in fact, that's how employees steal cash in retail stores.
What can the store owner do? He can stand there and watch the employee, of course. But that's not very efficient; the whole point of having employees is so that the store owner can do other things. The customer is standing there anyway, but the customer doesn't care one way or another about a receipt.
So here's what the employer does: he hires the customer. By putting up a sign saying "Your purchase free if you don't get a receipt," the employer is getting the customer to guard the employee. The customer makes sure the employee gives him a receipt, and employee theft is reduced accordingly.
There is a general rule in security to align interest with capability. The customer has the capability of watching the employee; the sign gives him the interest.
In Beyond Fear I wrote about ATM fraud; you can see the same mechanism at work:
"When ATM cardholders in the US complained about phantom withdrawals from their accounts, the courts generally held that the banks had to prove fraud. Hence, the banks' agenda was to improve security and keep fraud low, because they paid the costs of any fraud. In the UK, the reverse was true: The courts generally sided with the banks and assumed that any attempts to repudiate withdrawals were cardholder fraud, and the cardholder had to prove otherwise. This caused the banks to have the opposite agenda; they didn't care about improving security, because they were content to blame the problems on the customers and send them to jail for complaining. The result was that in the US, the banks improved ATM security to forestall additional losses--most of the fraud actually was not the cardholder's fault--while in the UK, the banks did nothing."
The banks had the capability to improve security. In the US, they also had the interest. But in the UK, only the customer had the interest. It wasn't until the UK courts reversed themselves and aligned interest with capability that ATM security improved.
Computer security is no different. For years I have argued in favor of software liabilities. Software vendors are in the best position to improve software security; they have the capability. But, unfortunately, they don't have much interest. Features, schedule, and profitability are far more important. Software liabilities will change that. They'll align interest with capability, and they'll improve software security.
One last story… In Italy, tax fraud used to be a national hobby. (It may still be; I don't know.) The government was tired of retail stores not reporting sales and paying taxes, so it passed a law regulating the customers. Any customer who had just purchased an item and was stopped within a certain distance of a retail store had to produce a receipt or be fined. Just as in the "Your purchase free if you don't get a receipt" story, the law turned the customers into tax inspectors. They demanded receipts from merchants, which in turn forced the merchants to create a paper audit trail for the purchase and pay the required tax.
This was a great idea, but it didn't work very well. Customers, especially tourists, didn't like to be stopped by police. People started demanding that the police prove they just purchased the item. Threatening people with fines if they didn't guard merchants wasn't as effective an enticement as offering people a reward if they didn't get a receipt.
Interest must be aligned with capability, but you need to be careful how you generate interest.
This essay originally appeared on Wired.com.