Schneier on Security
A blog covering security and security technology.
September 2006 Archives
Seems that some squid can hide messages in their skin:
In the animal world, squid are masters of disguise. Pigmented skin cells enable them to camouflage themselves—almost instantaneously—from predators. Squid also produce polarized skin patterns by regulating the iridescence of their skin, possibly creating a "hidden communication channel" visible only to animals that are sensitive to polarized light.
My favorite security stories are from the natural world. Evolution results in some of the most interesting security countermeasures.
Maher Arar is a Syrian-born Canadian citizen. On September 26, 2002, he tried to fly from Switzerland to Toronto. Changing planes in New York, he was detained by the U.S. authorities, and eventually shipped to Syria where he was tortured. He's 100% innocent. (Background here.)
On Maher Arar, the Commissioner comes to one important conclusion: "I am able to say categorically that there is no evidence to indicate that Mr. Arar has committed any offence or that his activities constitute a threat to the security of Canada."
Certainly something that everyone who supports the U.S.'s right to detain and torture people without having to demonstrate their guilt should think about. But what's more interesting to readers of this blog is the role that inaccurate data played in the deportation and, ultimately, torture of an innocent man.
Privacy International summarizes the report. These are among their bullet points:
Judicial oversight is a security mechanism. It prevents the police from incarcerating the wrong person. The point of habeas corpus is that the police need to present their evidence in front of a neutral third party, and not indefinitely detain or torture people just because they believe they're guilty. We are all less secure if we water down these security measures.
A couple of weeks ago I wrote about the battle between Microsoft's DRM system and FairUse4WM, which breaks it. The news for this week is that Microsoft has patched their security against FairUse4WM 1.2 and filed a lawsuit against the program's anonymous authors, and those same anonymous authors have released FairUse4WM 1.3, which breaks the latest Microsoft patch.
We asked Viodentia about Redmond's accusation that he and/or his associates broke into its systems in order to obtain the IP necessary to crack PlaysForSure; Vio replied that he's "utterly shocked" by the charge. "I didn't use any Microsoft source code. However, I believe that this lawsuit is a fishing expedition to get identity information, which can then be used to either bring more targeted lawsuits, or to cause other trouble." We're sure Microsoft would like its partners and the public to think that its DRM is generally infallible and could only be cracked by stealing its IP, so Viodentia's conclusion about its legal tactics seems pretty fair, obvious, and logical to us.
What's interesting about this continuing saga is how different it is from the normal find-vulnerability-then-patch sequence. The authors of FairUse4WM aren't finding bugs and figuring out how to exploit them, forcing Microsoft to patch them. This is a sequence of crack, fix, re-crack, re-fix, etc.
The reason we're seeing this -- and this is going to be the norm for DRM systems -- is that DRM is fundamentally an impossible problem. Making it work at all involves tricks, and breaking DRM is akin to "fixing" the software so the tricks don't work. Anyone looking for a demonstration that technical DRM is doomed should watch this story unfold. (If Microsoft has any chance of winning at all, it's via the legal route.)
Torpark is a free anonymous web browser. It sounds good:
A group of computer hackers and human rights workers have launched a specially-crafted version of Firefox that claims to give users complete anonymity when they surf the Web.
From the website:
Torpark is a program which allows you to surf the internet anonymously. Download Torpark and put it on a USB Flash keychain. Plug it into any internet terminal whether at home, school, work, or in public. Torpark will launch a Tor circuit connection, which creates an encrypted tunnel from your computer indirectly to a Tor exit computer, allowing you to surf the internet anonymously.
More details here.
You can open a door in only 3,129 button presses. On the average, it should take half that. (Article is from 2004.)
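As a back-of-the-envelope check (a hypothetical sketch, not from the article): if a full sweep of the keypad takes some worst-case number of presses and the correct code is equally likely to appear anywhere in that sweep, the expected cost is about half the worst case.

```python
# Hypothetical illustration of the "half that, on average" arithmetic.
# Assumes the correct code is uniformly distributed over the sweep,
# so on average the attacker finds it halfway through.

def average_presses(worst_case_presses: int) -> float:
    """Expected presses under a uniform assumption: half the worst case."""
    return worst_case_presses / 2

print(average_presses(3129))  # 1564.5
```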
In May 2003, Michael Ravnitzky submitted a Freedom of Information Act (FOIA) request to the National Security Agency for a copy of the index to their historical reports at the Center for Cryptologic History and the index to certain journals: the NSA Technical Journal and the Cryptologic Quarterly. These journals had been mentioned in the literature but are not available to the public. Because he thought NSA might be reluctant to release the bibliographic indexes, he also asked for the table of contents to each issue.
The request took more than three years for them to process and declassify -- sadly, not atypical -- and during the process they asked if he would accept the indexes in lieu of the tables of contents pages: specifically, the cumulative indices that included all the previous material in the earlier indices. He agreed, and got them last month. The results are here.
This is just a sampling of some of the article titles from the NSA Technical Journal:
"The Arithmetic of a Generation Principle for an Electronic Key Generator" · "CATNIP: Computer Analysis - Target Networks Intercept Probability" · "Chatter Patterns: A Last Resort" · "COMINT Satellites - A Space Problem" · "Computers and Advanced Weapons Systems" · "Coupon Collecting and Cryptology" · "Cranks, Nuts, and Screwballs" · "A Cryptologic Fairy Tale" · "Don't Be Too Smart" · "Earliest Applications of the Computer at NSA" · "Emergency Destruction of Documents" · "Extraterrestrial Intelligence" · "The Fallacy of the One-Time-Pad Excuse" · "GEE WHIZZER" · "The Gweeks Had a Gwoup for It" · "How to Visualize a Matrix" · "Key to the Extraterrestrial Messages" · "A Mechanical Treatment of Fibonacci Sequences" · "Q.E.D. - 2 Hours, 41 Minutes" · "SIGINT Implications of Military Oceanography" · "Some Problems and Techniques in Bookbreaking" · "Upgrading Selected US Codes and Ciphers with a Cover and Deception Capability" · "Weather: Its Role in Communications Intelligence" · "Worldwide Language Problems at NSA"
In the materials the NSA provided, they also included indices to two other publications: Cryptologic Spectrum and Cryptologic Almanac.
The indices to Cryptologic Quarterly and NSA Technical Journal have indices by title, author and keyword. The index to Cryptologic Spectrum has indices by author, title and issue.
Consider these bibliographic tools as stepping stones. If you want an article, send a FOIA request for it. Send a FOIA request for a dozen. There's a lot of stuff here that would help elucidate the early history of the agency and some interesting cryptographic topics.
Thanks, Mike, for doing this work.
An anonymous note in the Harvard Law Review argues that there is a significant benefit from Internet attacks:
This Note argues that computer networks, particularly the Internet, can be thought of as having immune systems that are strengthened by certain attacks. Exploitation of security holes prompts users and vendors to close those holes, vendors to emphasize security in system development, and users to adopt improved security practices. This constant strengthening of security reduces the likelihood of a catastrophic attack -- one that would threaten national or even global security. In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account.
You can't make this stuff up:
Electronic spy 'bugs' have been secretly planted in hundreds of thousands of household wheelie bins.
People applying for a visa to enter the United States have to answer these questions (among others):
Have you ever been arrested or convicted for any offense or crime, even though subject of a pardon, amnesty or other similar legal action? Have you ever unlawfully distributed or sold a controlled substance (drug), or been a prostitute or procurer for prostitutes?
Certainly, anyone who is a terrorist or drug dealer wouldn't worry about lying on his visa application. So, what's the point of these questions? I used to think it was so that if someone is convicted of one of these activities he can also be convicted of visa-application fraud...but I'm not sure that explanation makes any sense.
Anyone have any better ideas? What is the security benefit of asking these questions?
The Associated Press ran a profile about me.
A SciFi Channel movie, premiering tomorrow night.
Boy, does it look bad.
This is a blog post about the problems of being forced to check expensive camera equipment on airplanes:
Well, having lived in Kashmir for 12+ years, I am well accustomed to this type of security. We haven't been able to have hand carries since 1990. We also cannot have batteries in any of our equipment, checked or otherwise. At least we have been able to carry our laptops on, and recently been able to actually use them (with the batteries). But if things keep moving in this direction, and I'm sure they will, we need to start thinking now about checking our cameras and computers and how to do it safely. This is a very unpleasant idea. Two years ago I ordered a Canon 20D and had it "hand carried" over to meet me in England by a friend. My friend put it in her checked bag. The bag never showed up. She did not have insurance; all I got was $100 from British Airways for the camera and $500 from American Express (buyer's protection), and that was it. So now it looks as if we are going to have to check our cameras and our computers involuntarily. OK, here are a few thoughts.
Pretty basic stuff, and we all know about the risks of putting expensive stuff in your checked luggage.
The interesting part is one of the blog comments, about halfway down. Another photographer wonders if the TSA rules for firearms could be extended to camera equipment:
Why not just have the TSA adopt the same check in rules for photographic and video equipment as they do for firearms?
Then someone has the brilliant suggestion of putting a firearm in your camera-equipment case:
A "weapons" is defined as a rifle, shotgun, pistol, airgun, and STARTER PISTOL. Yes, starter pistols - those little guns that fire blanks at track and swim meets - are considered weapons...and do NOT have to be registered in any state in the United States.
I have to admit that I am impressed with this solution.
Last month, a man reprogrammed an automated teller machine at a gas station on Lynnhaven Parkway to spit out four times as much money as it should.
I am holding in my hands a legitimately obtained copy of the manual. There are a lot of security sensitive things inside of this manual. As promised, I am not going to reveal them, but there are:
This is from an eWeek article:
"If you get your hand on this manual, you can basically reconfigure the ATM if the default password was not changed. My guess is that most of these mini-bank terminals are sitting around with default passwords untouched," Goldsmith said.
So, as long as you can use an account that's not traceable back to you, and you disguise yourself for the ATM cameras, this is a pretty easy crime.
eWeek claims you can get a copy of the manual simply by Googling for it. (Here's one on eBay.)
And Tranax is promising a fix that will force operators to change the default passwords. But honestly, what's the likelihood that someone who can't be bothered to change the default password will take the time to install a software patch?
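The underlying audit is trivial, which is the point. A hypothetical sketch (terminal names and default passwords invented for illustration) of checking a fleet of terminals against a known default-password list:

```python
# Hypothetical sketch: why unchanged default passwords matter.
# Given the factory defaults published in a leaked manual, an operator
# (or an attacker) can trivially enumerate which terminals are still
# wide open. All names and values below are made up.

DEFAULT_PASSWORDS = {"123456", "555555", "000000"}  # hypothetical defaults

def still_on_default(terminal_passwords: dict) -> list:
    """Return terminals whose operator password is a known default."""
    return [t for t, pw in terminal_passwords.items() if pw in DEFAULT_PASSWORDS]

fleet = {"atm-01": "123456", "atm-02": "s3cret!", "atm-03": "555555"}
print(still_on_default(fleet))  # ['atm-01', 'atm-03']
```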
EDITED TO ADD (9/22): Here's the manual.
Does it pay to scream if your cell phone is stolen? Synchronica, a mobile device management company, thinks so. If you use the company's Mobile Manager service and your handset is stolen, the company, once contacted, will remotely lock down your phone, erase all its data, and trigger it to emit a blood-curdling scream to scare the bejesus out of the thief.
The general category of this sort of security countermeasure is "benefit denial." It's like those dye tags on expensive clothing; if you shoplift the clothing and try to remove the tag, dye spills all over the clothes and makes them unwearable. The effectiveness of this kind of thing relies on the thief knowing that the security measure is there, or is reasonably likely to be there. It's an effective shoplifting deterrent; my guess is that it will be less effective against cell phone thieves.
Remotely erasing data on stolen cell phones is a good idea regardless, though. And since cell phones are far more often lost than stolen, how about the phone calmly announcing that it is lost and it would like to be returned to its owner?
Earlier this month, the popular social networking site Facebook learned a hard lesson in privacy. It introduced a new feature called "News Feeds" that shows an aggregation of everything members do on the site: added and deleted friends, a change in relationship status, a new favorite song, a new interest, etc. Instead of a member's friends having to go to his page to view any changes, these changes are all presented to them automatically.
The outrage was enormous. One group, Students Against Facebook News Feeds, amassed over 700,000 members. Members planned to protest at the company's headquarters. Facebook's founder was completely stunned, and the company scrambled to add some privacy options.
Welcome to the complicated and confusing world of privacy in the information age. Facebook didn't think there would be any problem; all it did was take available data and aggregate it in a novel way for what it perceived was its customers' benefit. Facebook members instinctively understood that making this information easier to display was an enormous difference, and that privacy is more about control than about secrecy.
But on the other hand, Facebook members are just fooling themselves if they think they can control information they give to third parties.
Privacy used to be about secrecy. Someone defending himself in court against the charge of revealing someone else's personal information could use as a defense the fact that it was not secret. But clearly, privacy is more complicated than that. Just because you tell your insurance company something doesn't mean you don't feel violated when that information is sold to a data broker. Just because you tell your friend a secret doesn't mean you're happy when he tells others. Same with your employer, your bank, or any company you do business with.
But as the Facebook example illustrates, privacy is much more complex. It's about who you choose to disclose information to, how, and for what purpose. And the key word there is "choose." People are willing to share all sorts of information, as long as they are in control.
When Facebook unilaterally changed the rules about how personal information was revealed, it reminded people that they weren't in control. Its eight million members put their personal information on the site based on a set of rules about how that information would be used. It's no wonder those members -- high school and college kids who traditionally don't care much about their own privacy -- felt violated when Facebook changed the rules.
But public perception is important. The lesson here for Facebook and other companies -- for Google and MySpace and AOL and everyone else who hosts our e-mails and webpages and chat sessions -- is that people believe they own their data. Even though the user agreement might technically give companies the right to sell the data, change the access rules to that data, or otherwise own that data, we -- the users -- believe otherwise. And when we who are affected by those actions start expressing our views -- watch out.
What Facebook should have done was add the feature as an option, and allow members to opt in if they wanted to. Then, members who wanted to share their information via News Feeds could do so, and everyone else wouldn't have felt that they had no say in the matter. This is definitely a gray area, and it's hard to know beforehand which changes need to be implemented slowly and which won't matter. Facebook, and others, need to talk to their members openly about new features. Remember: members want control.
The lesson for Facebook members might be even more jarring: if they think they have control over their data, they're only deluding themselves. They can rebel against Facebook for changing the rules, but the rules have changed, regardless of what the company does.
Whenever you put data on a computer, you lose some control over it. And when you put it on the internet, you lose a lot of control over it. News Feeds brought Facebook members face to face with the full implications of putting their personal information on Facebook. It had just been an accident of the user interface that it was difficult to aggregate the data from multiple friends into a single place. And even if Facebook eliminates News Feeds entirely, a third party could easily write a program that does the same thing. Facebook could try to block the program, but would lose that technical battle in the end.
We're all still wrestling with the privacy implications of the Internet, but the balance has tipped in favor of more openness. Digital data is just too easy to move, copy, aggregate, and display. Companies like Facebook need to respect the social rules of their sites, to think carefully about their default settings -- they have an enormous impact on the privacy mores of the online world -- and to give users as much control over their personal information as they can.
But we all need to remember that much of that control is illusory.
This essay originally appeared on Wired.com.
According to Newsday:
Hezbollah guerrillas were able to hack into Israeli radio communications during last month's battles in south Lebanon, an intelligence breakthrough that helped them thwart Israeli tank assaults, according to Hezbollah and Lebanese officials.
Read the article. Basically, the problem is operational error:
With frequency-hopping and encryption, most radio communications become very difficult to hack. But troops in the battlefield sometimes make mistakes in following secure radio procedures and can give an enemy a way to break into the frequency-hopping patterns. That might have happened during some battles between Israel and Hezbollah, according to the Lebanese official. Hezbollah teams likely also had sophisticated reconnaissance devices that could intercept radio signals even while they were frequency-hopping.
I agree with this comment from The Register:
Claims that Hezbollah fighters were able to use this to get intelligence on troop movements and supply routes are plausible, at least to the layman, but ought to be treated with an appropriate degree of caution, as they are corroborated only by anonymous sources.
But I have even more skepticism. If indeed Hezbollah was able to do this, the last thing they want is for it to appear in the press. But if Hezbollah can't do this, then a few good disinformation stories are a good thing.
In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It’s easy to point fingers at students -- a large number of potentially adversarial transient insiders. Yet that’s really no different from a corporation dealing with an assortment of employees and contractors -- the difference is the culture.
Universities are edge-focused; central policies tend to be weak, by design, with maximum autonomy for the edges. This means they have natural tendencies against centralization of services. Departments and individual professors are used to being semiautonomous. Because these institutions were established long before the advent of computers, when networking did begin to infuse universities, it developed within existing administrative divisions. Some universities have academic departments with separate IT departments, budgets, and staff, with a central IT group providing bandwidth but little or no oversight. Unfortunately, these smaller IT groups don’t generally count policy development and enforcement as part of their core competencies.
The lack of central authority makes enforcing uniform standards challenging, to say the least. Most university CIOs have much less power than their corporate counterparts; university mandates can be a major obstacle in enforcing any security policy. This leads to an uneven security landscape.
There’s also a cultural tendency for faculty and staff to resist restrictions, especially in the area of research. Because most research is now done online -- or, at least, involves online access -- restricting the use of or deciding on appropriate uses for information technologies can be difficult. This resistance also leads to a lack of centralization and an absence of IT operational procedures such as change control, change management, patch management, and configuration control.
The result is that there’s rarely a uniform security policy. The centralized servers -- the core where the database servers live -- are generally more secure, whereas the periphery is a hodgepodge of security levels.
So, what to do? Unfortunately, solutions are easier to describe than implement. First, universities should take a top-down approach to securing their infrastructure. Rather than fighting an established culture, they should concentrate on the core infrastructure.
Then they should move personal, financial, and other comparable data into that core. Leave information important to departments and research groups to them, and centrally store information that’s important to the university as a whole. This can be done under the auspices of the CIO. Laws and regulations can help drive consolidation and standardization.
Next, enforce policies for departments that need to connect to the sensitive data in the core. This can be difficult with older legacy systems, but establishing a standard for best practices is better than giving up. All legacy technology is upgraded eventually.
Finally, create distinct segregated networks within the campus. Treat networks that aren’t under the IT department’s direct control as untrusted. Student networks, for example, should be firewalled to protect the internal core from them. The university can then establish levels of trust commensurate with the segregated networks’ adherence to policies. If a research network claims it can’t have any controls, then let the university create a separate virtual network for it, outside the university’s firewalls, and let it live there. Note, though, that if something or someone on that network wants to connect to sensitive data within the core, it’s going to have to agree to whatever security policies that level of data access requires.
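The trust-level idea above can be sketched in a few lines. A hypothetical illustration (subnets and zone names invented for this sketch): map each campus subnet to a trust zone, and gate access to the sensitive core on that zone.

```python
# Hypothetical sketch of segregated campus trust zones. The subnets and
# zone names are invented; a real deployment would enforce this at the
# firewall, not in application code.
import ipaddress

TRUST_ZONES = {
    ipaddress.ip_network("10.1.0.0/16"): "core",       # central servers
    ipaddress.ip_network("10.2.0.0/16"): "department", # policy-compliant dept nets
    ipaddress.ip_network("10.3.0.0/16"): "student",    # firewalled off from core
}

def trust_level(addr: str) -> str:
    """Classify a source address by the zone its subnet belongs to."""
    ip = ipaddress.ip_address(addr)
    for net, level in TRUST_ZONES.items():
        if ip in net:
            return level
    return "untrusted"  # e.g. an opted-out research network outside the firewall

def may_access_core(addr: str) -> bool:
    # Only zones that have agreed to the core's security policy may connect.
    return trust_level(addr) in {"core", "department"}

print(may_access_core("10.2.5.9"))   # True
print(may_access_core("10.3.1.1"))   # False
```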
Securing university networks is an excellent example of the social problems surrounding network security being harder than the technical ones. But harder doesn’t mean impossible, and there is a lot that can be done to improve security.
This essay originally appeared in the September/October issue of IEEE Security & Privacy.
This is impressive: a display that works on a flexible credit card.
One of the major security problems with smart cards is that they don't have their own I/O. That is, you have to trust whatever card reader/writer you stick the card in to faithfully send what you type into the card, and display whatever the card spits back out. Way back in 1999, Adam Shostack and I wrote a paper about this general class of security problem.
Think WYSIWTCS: What You See Is What The Card Says. That's what an on-card display does.
No, it doesn't protect against tampering with the card. That's part of a completely different set of threats.
Cybercrime is getting organized:
Cyberscams are increasingly being committed by organized crime syndicates out to profit from sophisticated ruses rather than hackers keen to make an online name for themselves, according to a top U.S. official.
I've been saying this sort of thing for years, and have long complained that cyberterrorism gets all the press while cybercrime is the real threat. I don't think this article is fear and hype; it's a real problem.
A secret investigation of news leaks at Hewlett-Packard was more elaborate than previously reported, and almost from the start involved the illicit gathering of private phone records and direct surveillance of board members and journalists, according to people briefed on the company's review of the operation.
Given this, I predict a real investigation into the incident:
Those briefed on the company's review of the operation say detectives tried to plant software on at least one journalist's computer that would enable messages to be traced, and also followed directors and possibly a journalist in an attempt to identify a leaker on the board.
I'm amazed there isn't more outcry. Pretexting, planting Trojans...this is the sort of thing that would get a "hacker" immediately arrested. But if the chairman of the HP board does it, suddenly it's a gray area.
EDITED TO ADD (9/20): More info.
Does this EyeCheck device sound like anything other than snake oil?
The device looks like binoculars, and in seconds it scans an individual's pupils to detect a problem.
Here's the company. The device is called a pupillometer, and "uses patented technologies to deliver reliable pupil measurements in less than five minutes for the detection of drugs and fatigue." And despite what the article implied, the device doesn't do this at a distance.
I'm not impressed with the research, but this is not my area of expertise. Anyone?
If you have a passport, now is the time to renew it -- even if it's not set to expire anytime soon. If you don't have a passport and think you might need one, now is the time to get it. In many countries, including the United States, passports will soon be equipped with RFID chips. And you don't want one of these chips in your passport.
RFID stands for "radio-frequency identification." Passports with RFID chips store an electronic copy of the passport information: your name, a digitized picture, etc. And in the future, the chip might store fingerprints or digital visas from various countries.
By itself, this is no problem. But RFID chips don't have to be plugged in to a reader to operate. Like the chips used for automatic toll collection on roads or automatic fare collection on subways, these chips operate via proximity. The risk to you is the possibility of surreptitious access: Your passport information might be read without your knowledge or consent by a government trying to track your movements, a criminal trying to steal your identity or someone just curious about your citizenship.
At first the State Department belittled those risks, but in response to criticism from experts it has implemented some security features. Passports will come with a shielded cover, making it much harder to read the chip when the passport is closed. And there are now access-control and encryption mechanisms, making it much harder for an unauthorized reader to collect, understand and alter the data.
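For the curious, the access-control mechanism (ICAO's Basic Access Control) derives the chip's keys from data printed inside the passport, so a reader has to see the open data page before it can talk to the chip. Here is a simplified sketch of that style of key derivation; it omits the spec's check digits and DES parity adjustment, and is illustrative rather than conforming.

```python
# Simplified sketch of Basic Access Control key derivation (not a
# conforming ICAO 9303 implementation: check digits and DES parity
# adjustment are omitted). The key seed comes from fields printed in
# the passport's machine-readable zone.
import hashlib

def bac_keys(doc_no: str, birth: str, expiry: str):
    """Derive simplified (encryption, MAC) keys from MRZ fields."""
    mrz_info = (doc_no + birth + expiry).encode("ascii")
    k_seed = hashlib.sha1(mrz_info).digest()[:16]

    def kdf(counter: int) -> bytes:
        # Different counters yield independent keys from the same seed.
        return hashlib.sha1(k_seed + counter.to_bytes(4, "big")).digest()[:16]

    return kdf(1), kdf(2)  # (encryption key, MAC key)

# Sample MRZ-style values, for illustration only.
k_enc, k_mac = bac_keys("L898902C<3", "690806", "940623")
print(len(k_enc), len(k_mac))  # 16 16
```

The security consequence is the one the essay discusses: anyone who can read the data page (a hotel clerk, say) can derive these keys, so the access control only stops readers that never see the open passport.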
Although those measures help, they don't go far enough. The shielding does no good when the passport is open. Travel abroad and you'll notice how often you have to show your passport: at hotels, banks, Internet cafes. Anyone intent on harvesting passport data could set up a reader at one of those places. And although the State Department insists that the chip can be read only by a reader that is inches away, the chips have been read from many feet away.
The other security mechanisms are also vulnerable, and several security researchers have already discovered flaws. One found that he could identify individual chips via unique characteristics of the radio transmissions. Another successfully cloned a chip. The State Department called this a "meaningless stunt," pointing out that the researcher could not read or change the data. But the researcher spent only two weeks trying; the security of your passport has to be strong enough to last 10 years.
This is perhaps the greatest risk. The security mechanisms on your passport chip have to last the lifetime of your passport. It is as ridiculous to think that passport security will remain secure for that long as it would be to think that you won't see another security update for Microsoft Windows in that time. Improvements in antenna technology will certainly increase the distance at which they can be read and might even allow unauthorized readers to penetrate the shielding.
Whatever happens, if you have a passport with an RFID chip, you're stuck. Although popping your passport in the microwave will disable the chip, the shielding will cause all kinds of sparking. And although the United States has said that a nonworking chip will not invalidate a passport, it is unclear if one with a deliberately damaged chip will be honored.
The Colorado passport office is already issuing RFID passports, and the State Department expects all U.S. passport offices to be doing so by the end of the year. Many other countries are in the process of changing over. So get a passport before it's too late. With your new passport you can wait another 10 years for an RFID passport, when the technology will be more mature, when we will have a better understanding of the security risks and when there will be other technologies we can use to cut the risks. You don't want to be a guinea pig on this one.
This op-ed appeared on Saturday in the Washington Post.
I've written about RFID passports many times before (that last link is an op-ed from The International Herald-Tribune), although last year I -- mistakenly -- withdrew my objections based on the security measures the State Department was taking. I've since realized that they won't be enough.
EDITED TO ADD (9/29): This op-ed has appeared in about a dozen newspapers. The San Jose Mercury News published a rebuttal. Kind of lame, I think.
EDITED TO ADD (12/30): Here's how to disable an RFID passport.
Lots of hype, but an interesting article nonetheless.
Ed Felten and his team at Princeton have analyzed a Diebold machine:
This paper presents a fully independent security study of a Diebold AccuVote-TS voting machine, including its hardware and software. We obtained the machine from a private party. Analysis of the machine, in light of real election procedures, shows that it is vulnerable to extremely serious attacks. For example, an attacker who gets physical access to a machine or its removable memory card for as little as one minute could install malicious code; malicious code on a machine could steal votes undetectably, modifying all records, logs, and counters to be consistent with the fraudulent vote count it creates. An attacker could also create malicious code that spreads automatically and silently from machine to machine during normal election activities -- a voting-machine virus. We have constructed working demonstrations of these attacks in our lab. Mitigating these threats will require changes to the voting machine's hardware and software and the adoption of more rigorous election procedures.
Diebold has repeatedly dismissed the findings as speculation. But the Princeton study appears to demonstrate conclusively that a single malicious person could insert a virus into a machine and flip votes. The study also reveals a number of other vulnerabilities, including that voter access cards used on Diebold systems could be created inexpensively on a personal laptop computer, allowing people to vote as many times as they wish.
A hacker is someone who thinks outside the box. It's someone who discards conventional wisdom, and does something else instead. It's someone who looks at the edge and wonders what's beyond. It's someone who sees a set of rules and wonders what happens if you don't follow them. A hacker is someone who experiments with the limitations of systems for intellectual curiosity.
I wrote that last sentence in the year 2000, in my book Secrets and Lies. And I'm sticking to that definition.
This is what else I wrote in Secrets and Lies (pages 43-44):
Hackers are as old as curiosity, although the term itself is modern. Galileo was a hacker. Mme. Curie was one, too. Aristotle wasn't. (Aristotle had some theoretical proof that women had fewer teeth than men. A hacker would have simply counted his wife's teeth. A good hacker would have counted his wife's teeth without her knowing about it, while she was asleep. A good bad hacker might remove some of them, just to prove a point.)
Computers are the perfect playground for hackers. Computers, and computer networks, are vast treasure troves of secret knowledge. The Internet is an immense landscape of undiscovered information. The more you know, the more you can do.
And it should be no surprise that many hackers have focused their skills on computer security. Not only is it often the obstacle between the hacker and knowledge, and therefore something to be defeated, but also the very mindset necessary to be good at security is exactly the same mindset that hackers have: thinking outside the box, breaking the rules, exploring the limitations of a system. The easiest way to break a security system is to figure out what the system's designers hadn't thought of: that's security hacking.
Hackers cheat. And breaking security regularly involves cheating. It's figuring out a smart card's RSA key by looking at the power fluctuations, because the designers of the card never realized anyone could do that. It's self-signing a piece of code, because the signature-verification system didn't think someone might try that. It's using a piece of a protocol to break a completely different protocol, because all previous security analysis only looked at protocols individually and not in pairs.
That's security hacking: breaking a system by thinking differently.
It all sounds criminal: recovering encrypted text, fooling signature algorithms, breaking protocols. But honestly, that's just the way we security people talk. Hacking isn't criminal. All the examples two paragraphs above were performed by respected security professionals, and all were presented at security conferences.
I remember one conversation I had at a Crypto conference, early in my career. It was outside amongst the jumbo shrimp, chocolate-covered strawberries, and other delectables. A bunch of us were talking about some cryptographic system, including Brian Snow of the NSA. Someone described an unconventional attack, one that didn't follow the normal rules of cryptanalysis. I don't remember any of the details, but I remember my response after hearing the description of the attack.
"That's cheating," I said.
Because it was.
I also remember Brian turning to look at me. He didn't say anything, but his look conveyed everything. "There's no such thing as cheating in this business."
Because there isn't.
Hacking is cheating, and it's how we get better at security. It's only after someone invents a new attack that the rest of us can figure out how to defend against it.
For years I have refused to play the semantic "hacker" vs. "cracker" game. There are good hackers and bad hackers, just as there are good electricians and bad electricians. "Hacker" is a mindset and a skill set; what you do with it is a different issue.
And I believe the best computer security experts have the hacker mindset. When I look to hire people, I look for someone who can't walk into a store without figuring out how to shoplift. I look for someone who can't test a computer security program without trying to get around it. I look for someone who, when told that things work in a particular way, immediately asks how things stop working if you do something else.
We need these people in security, and we need them on our side. Criminals are always trying to figure out how to break security systems. Field a new system -- an ATM, an online banking system, a gambling machine -- and criminals will try to make an illegal profit off it. They'll figure it out eventually, because some hackers are also criminals. But if we have hackers working for us, they'll figure it out first -- and then we can defend ourselves.
It's our only hope for security in this fast-moving technological world of ours.
This essay appeared in the Summer 2006 issue of 2600.
A paper from the CATO Institute.
Their scheme: Cut a closed store's phone lines. Hang back while cops respond to the alarm. After officers fail to spot anything wrong and drive away, break into the store and spend as much time as they need to make off with a weekend's worth of cash.
And one I wrote about in Beyond Fear (page 56):
Attackers commonly force active failures specifically to cause a larger system to fail. Burglars cut an alarm wire at a warehouse and then retreat a safe distance. The police arrive and find nothing, decide that it's an active failure, and tell the warehouse owner to deal with it in the morning. Then, after the police leave, the burglars reappear and steal everything.
According to CNN:
Sudanese security forces have begun seizing laptop computers entering the country to check on the information stored on them as part of new security measures.
(More commentary here.)
While the stated reason is pornography, anyone bringing a computer into the country should be concerned about personal information, writing that might be deemed political by the Sudanese authorities, confidential business information, and so on.
And this should be a concern regardless of the border you cross. Your privacy rights when trying to enter a country are minimal, and this kind of thing could happen anywhere. (I have heard anecdotal stories about Israel doing this, but don't have confirmation.)
If you're bringing a laptop across an international border, you should clean off all unnecessary files and encrypt the rest.
EDITED TO ADD (9/15): This is legal in the U.S.
EDITED TO ADD (9/30): More about the legality of this in the U.S.
If you define "critical infrastructure" as "things essential for the functioning of a society and economy," then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.
It's a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines' weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?
And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.
It's perfectly rational to assume that some programmers -- a tiny minority I'm sure -- are deliberately adding vulnerabilities and back doors into the code they write. I'm actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don't conflict with each other. Even if these groups aren't infiltrating software companies with back doors, you can be sure they're scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we're already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we're not talking about this month's worm from Asia or new phishing software from the Russian mafia -- we're talking national intelligence organizations. "Infowar" is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn't be doing their jobs if they weren't preparing for it.
Marcus is 100 percent correct when he says it's simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.
So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you'll drop the red herring arguments that led to Check Point not being able to buy Sourcefire and concentrate on the real solution: defense in depth.
In theory, security software is an after-the-fact kludge, necessary because the underlying OS and apps are riddled with vulnerabilities. If your software were written properly, you wouldn't need a firewall -- right?
If we were to get serious about critical infrastructure, we'd recognize it's all critical and start building security software to protect it. We'd build our security based on the principles of safe failure; we'd assume security would fail and make sure it's OK when it does. We'd use defense in depth and compartmentalization to minimize the effects of failure. Basically, we'd do everything we're supposed to do now to secure our networks.
It'd be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.
This is the second half of a point/counterpoint I did with Marcus Ranum (here's his half) for the September 2006 issue of Information Security Magazine.
For Sale By Owner - The Ultimate Secure Home:
The list of features starts out reasonable, but the description of how it was built and why just kept getting more surreal.
And, of course:
The exact location of the house will only be revealed to serious, pre-screened, and financially pre-qualified prospective buyers at an appropriate time. The owner believes that keeping the exact location secret to the general public is an important part of the home's security.
What's your vote? Real or hoax?
Last month, NIST hosted the Second Hash Workshop, primarily as a vehicle for discussing a replacement strategy for SHA-1. (I liveblogged NIST's first Cryptographic Hash Workshop here, here, here, here, and here.)
As I've written about before, there are some impressive cryptanalytic results against SHA-1. These attacks are still not practical, and the hash function is still operationally secure, but it makes sense for NIST to start looking at replacement strategies -- before these attacks get worse.
The conference covered a wide variety of topics (see the agenda for details) on hash function design, hash function attacks, hash function features, and so on.
Perhaps the most interesting part was a panel discussion called "SHA-256 Today and Maybe Something Else in a Few Years: Effects on Research and Design." Moderated by Paul Hoffman (VPN Consortium) and Arjen Lenstra (Ecole Polytechnique Federale de Lausanne), the panel consisted of Niels Ferguson (Microsoft), Antoine Joux (Universite de Versailles-Saint-Quentin-en-Yvelines), Bart Preneel (Katholieke Universiteit Leuven), Ron Rivest (MIT), and Adi Shamir (Weizmann Institute of Science).
Paul Hoffman has posted a composite set of notes from the panel discussion. If you're interested in the current state of hash function research, it's well worth reading.
My opinion is that we need a new hash function, and that a NIST-sponsored contest is a great way to stimulate research in the area. I think we need one function and one function only, because users won't know how to choose between different functions. (It would be smart to design the function with a couple of parameters that can be easily changed to increase security -- increase the number of rounds, for example -- but it shouldn't be a variable that users have to decide whether or not to change.) And I think it needs to be secure in the broadest definitions we can come up with: hash functions are the workhorse of cryptographic protocols, and they're used in all sorts of places for all sorts of reasons in all sorts of applications. We can't limit the use of hash functions, so we can't put one out there that's only secure if used in a certain way.
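The "tunable parameter" idea can be illustrated with a toy sketch. This is not how a real design would adjust its internal round count -- it simply iterates an existing hash function -- but it shows the shape of the idea: security margin is a single dial the designers control, never a choice exposed to users. The function name and defaults below are hypothetical.

```python
import hashlib

# Hypothetical illustration of a designer-controlled security parameter.
# A real hash design would vary its internal rounds; iterating the whole
# function here is just a stand-in for that idea.
def iterated_hash(data: bytes, rounds: int = 2) -> bytes:
    if rounds < 1:
        raise ValueError("rounds must be at least 1")
    digest = hashlib.sha256(data).digest()
    for _ in range(rounds - 1):  # re-hash the output (rounds - 1) more times
        digest = hashlib.sha256(digest).digest()
    return digest

# Users call it with the defaults; only the designers ever touch `rounds`.
h = iterated_hash(b"message")
```

Raising `rounds` buys margin against future attacks at a linear cost in speed, which is exactly the kind of trade-off that should be baked into the specification rather than left to users.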
Last week NIST released Special Publication 800-88, Guidelines for Media Sanitization.
There is a new paragraph in this document (page 7) that was not in the draft version:
Encryption is not a generally accepted means of sanitization. The increasing power of computers decreases the time needed to crack cipher text and therefore the inability to recover the encrypted data can not be assured.
I have to admit that this doesn't make any sense to me. If the encryption is done properly, and if the key is properly chosen, then erasing the key -- and all copies -- is equivalent to erasing the files. And if you're using full-disk encryption, then erasing the key is equivalent to sanitizing the drive. For that not to be true means that the encryption program isn't secure.
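The argument can be made concrete with a toy stream cipher -- emphatically not real disk encryption, just a minimal sketch built from a hashlib counter-mode keystream. With the key, the data comes back; without it, the ciphertext is noise, which is why destroying every copy of the key sanitizes the data.

```python
import hashlib
from itertools import count

# Toy counter-mode keystream: SHA-256(key || counter), concatenated and
# truncated. A sketch only -- real systems use vetted ciphers like AES.
def keystream(key: bytes, length: int) -> bytes:
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return out[:length]

# XOR with the keystream; the same call encrypts and decrypts.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"quarterly financials"
key = hashlib.sha256(b"some high-entropy passphrase").digest()
ciphertext = xor_crypt(secret, key)

assert xor_crypt(ciphertext, key) == secret            # key kept: recoverable
assert xor_crypt(ciphertext, b"\x00" * 32) != secret   # key destroyed: noise
```

NIST's worry only makes sense if the cipher itself is breakable; for a properly chosen algorithm and key, "crack the ciphertext" means brute-forcing the entire keyspace.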
I think NIST is just confused.
From yesterday's New York Times, "Ten Ways to Avoid the Next 9/11":
If we are fortunate, we will open our newspapers this morning knowing that there have been no major terrorist attacks on American soil in nearly five years. Did we just get lucky?
Actually, they asked more than 10, myself included. But some of us were cut because they didn't have enough space. This was my essay:
Despite what you see in the movies and on television, it’s actually very difficult to execute a major terrorist act. It’s hard to organize, plan, and execute an attack, and it’s all too easy to slip up and get caught. Combine that with our intelligence work tracking terrorist cells and interdicting terrorist funding, and you have a climate where major attacks are rare. In many ways, the success of 9/11 was an anomaly; there were many points where it could have failed. The main reason we haven’t seen another 9/11 is that it isn’t as easy as it looks. Much of our counterterrorist efforts are nothing more than security theater: ineffectual measures that look good. Forget the "war on terror"; the difficulty isn’t killing or arresting the terrorists, it’s finding them. Terrorism is a law enforcement problem, and needs to be treated as such. For example, none of our post-9/11 airline security measures would have stopped the London shampoo bombers. The lesson of London is that our best defense is intelligence and investigation. Rather than spending money on airline security, or sports stadium security -- measures that require us to guess the plot correctly in order to be effective -- we’re better off spending money on measures that are effective regardless of the plot.
SquidSoap works by applying a small ink mark on a person's hand when they press the pump to dispense the soap. The ink is designed to wash off after the hands are washed for about 15-20 seconds, which is the time recommended by most doctors.
Note the security angle:
Dirty hands are a leading cause of the spread of infection and food-borne illness. Whether it's due to laziness or lack of education - our failure to wash our hands is costing the U.S. economy billions every year and causing thousands of unnecessary illnesses and deaths.
Never mind about terrorism. It's dirty hands!
Interesting article from The New York Times:
Flip open your husband's cellphone and scroll down the log of calls received. Glance over your teenager's shoulder at his screenful of instant messages. Type in a girlfriend's password and rifle through her e-mail.
It's only eight ounces of the stuff, but still....
There seems to be a small epidemic of land title fraud in Ontario, Canada.
What happens is someone impersonates the homeowner, and then sells the house out from under him. The former owner is still liable for the mortgage, but can't get in his former house. Cleaning up the problem takes a lot of time and energy.
The problem is one of economic incentives. If banks were held liable for fraudulent mortgages, then the problem would go away really quickly. But as long as they're not, they have no incentive to ensure that this fraud doesn't occur. (They have some incentive, because the fraud costs them money, but as long as the few fraud cases cost less than ensuring the validity of every mortgage, they'll just ignore the problem and eat the losses when fraud occurs.)
EDITED TO ADD (9/8): Another article.
Basically, the chairman of Hewlett-Packard, annoyed at leaks, hired investigators to track down the phone records (including home and cell) of the other HP board members. One board member resigned because of this. The leaker has refused to resign, although he has been outed.
Note that the article says that the investigators used "pretexting," which is illegal.
The entire episode -- beyond its impact on the boardroom of a $100 billion company, Dunn's ability to continue as chairwoman and the possibility of civil lawsuits claiming privacy invasions and fraudulent misrepresentations -- raises questions about corporate surveillance in a digital age. Audio and visual surveillance capabilities keep advancing, both in their ability to collect and analyze data. The Web helps distribute that data efficiently and effortlessly. But what happens when these advances outstrip the ability of companies (and, for that matter, governments) to reach consensus on ethical limits? How far will companies go to obtain information they seek for competitive gain or better management?
EDITED TO ADD (9/8): Good commentary.
EDITED TO ADD (9/12): HP Chairman Patricia Dunn was fired.
If you really want to see Microsoft scramble to patch a hole in its software, don't look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond's DRM.
Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory -- and then quietly fix the problem in the next software release.
That changed with the full disclosure movement. Independent security researchers started going public with the holes they found, making vulnerabilities impossible for vendors to ignore. Then worms became more common; patching -- and patching quickly -- became the norm.
But even now, no software vendor likes to issue patches. Every patch is a public admission that the company made a mistake. Moreover, the process diverts engineering resources from new development. Patches annoy users by making them update their software, and piss them off even more if the update doesn't work properly.
For the vendor, there's an economic balancing act: how much more will your users be annoyed by unpatched software than they will be by the patch, and is that reduction in annoyance worth the cost of patching?
Since 2003, Microsoft's strategy to balance these costs and benefits has been to batch patches: instead of issuing them one at a time, it's been issuing them all together on the second Tuesday of each month. This decreases Microsoft's development costs and increases the reliability of its patches.
The user pays for this strategy by remaining open to known vulnerabilities for up to a month. On the other hand, users benefit from a predictable schedule: Microsoft can test all the patches that are going out at the same time, which means that patches are more reliable and users are able to install them faster with more confidence.
In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, "Patch Tuesday" is the best users are likely to get.
Why? Because it makes near-term financial sense to Microsoft. The company is not a public charity, and if the internet suffers, or if computers are compromised en masse, the economic impact on Microsoft is still minimal.
Microsoft is in the business of making money, and keeping users secure by patching its software is only incidental to that goal.
There's no better example of this principle in action than Microsoft's behavior around the vulnerability in its digital rights management software PlaysForSure.
Now, this isn't a "vulnerability" in the normal sense of the word: digital rights management is not a feature that users want. Being able to remove copy protection is a good thing for some users, and completely irrelevant for everyone else. No user is ever going to say: "Oh no. I can now play the music I bought for my computer in my car. I must install a patch so I can't do that anymore."
But to Microsoft, this vulnerability is a big deal. It affects the company's relationship with major record labels. It affects the company's product offerings. It affects the company's bottom line. Fixing this "vulnerability" is in the company's best interest; never mind the customer.
This clearly demonstrates that economics is a much more powerful motivator than security.
It should surprise no one that the system didn't stay patched for long. FairUse4WM 1.2 gets around Microsoft's patch, and also circumvents the copy protection in Windows Media DRM 9 and 11beta2 files.
That was Saturday. Any guess on how long it will take Microsoft to patch Media Player once again? And then how long before the FairUse4WM people update their own software?
Certainly much less time than it will take Microsoft and the recording industry to realize they're playing a losing game, and that trying to make digital files uncopyable is like trying to make water not wet.
If Microsoft abandoned this Sisyphean effort and put the same development effort into building a fast and reliable patching system, the entire internet would benefit. But simple economics says it probably never will.
This essay originally appeared on Wired.com.
EDITED TO ADD (9/8): Commentary.
EDITED TO ADD (9/9): Microsoft released a patch for FairUse4WM 1.2 on Thursday, September 7th.
EDITED TO ADD (9/13): BSkyB halts download service because of the breaks.
EDITED TO ADD (9/16): Microsoft is threatening legal action against people hosting copies of FairUse4WM.
Can you identify the bombs?
In related news, here's a guy who makes it through security with a live vibrator in his pants.
There's also a funny video on Dutch TV. A screener scans a passenger's bag, putting aside several obvious bags of cocaine to warn him about a very tiny nail file.
Here's where to buy stuff seized at Boston's Logan Airport. I also read somewhere that some stuff ends up on eBay.
And finally, Quinn Norton said: "I think someone should try to blow up a plane with a piece of ID, just to watch the TSA's mind implode."
Business Week cover story on "The State of Surveillance."
And here's my essay on "The Future of Privacy."
EDITED TO ADD (9/6): The cover story is from August 2005.
EDITED TO ADD (9/7): CIO Insight on the death of privacy.
Does anyone think this California almost-law (it's awaiting the governor's signature) will do any good at all?
From 1 October 2007, manufacturers must place warning labels on all equipment capable of receiving Wi-Fi signals, according to the new state law. These can take the form of box stickers, special notification in setup software, notification during the router setup, or through automatic securing of the connection. One warning sticker must be positioned so that it must be removed by a consumer before the product can be used.
People sell, give away, and throw away their cell phones without even thinking about the data still on them:
A company, Trust Digital of McLean, Virginia, bought 10 different phones on eBay this summer to test phone-security tools it sells for businesses. The phones all were fairly sophisticated models capable of working with corporate e-mail systems.
In many cases, this was data that the owners erased.
A popular practice among sellers, resetting the phone, often means sensitive information appears to have been erased. But it can be resurrected using specialized yet inexpensive software found on the Internet.
More and more, our data is not really under our control. We store it on devices and third-party websites, or on our own computer. We try to erase it, but we really can't. We try to control its dissemination, but it's harder and harder.
This is absolutely essential reading for anyone interested in how the U.S. is prosecuting terrorism. Put aside the rhetoric and the posturing; this is what is actually happening.
Among the key findings about the year-by-year enforcement trends in the period were the following:
Transactional Records Access Clearinghouse (TRAC) puts this data together by looking at Justice Department records. The data research organization is connected to Syracuse University, and has been doing this sort of thing -- tracking what federal agencies actually do rather than what they say they do -- for over fifteen years.
I am particularly entertained by the Justice Department's rebuttal, which basically just calls the study names without offering any substantive criticism:
The Justice Department took issue with the study's methodology and its conclusions.
How do I explain it? Most "terrorism" arrests are not for actual terrorism; they're for other things. The cases are either thrown out for lack of evidence, or the penalties are more in line with the actual crimes. I don't care what anyone from the Justice Department says: someone who is jailed for four weeks did not commit a terrorist act.
Post links to your favorites, and I will add them to the post.
EDITED TO ADD (9/13): There are just too many down there to add; scroll through the comments to find the links. But here's a funny one from "Close to Home."
If you know of any other squid cartoons, post the links as comments -- or e-mail them to me -- and I will add them here.
EDITED TO ADD (9/4): Demolition Squid, Sausage Squid from Beaver and Steve, Creative Disease, and a very funny one from The New Yorker caption contest. Also, nine cartoons from Dr. Fun; search for "squid."
EDITED TO ADD (9/13): Penny Arcade.
I don't know how much of this to believe.
A man wearing a jacket and carrying a bag was able to sneak a bomb onto a flight from Manila to Davao City last month at the height of the nationwide security alert after Britain uncovered a plot to blow up transatlantic planes.
In particular, if he actually built a working bomb in an airplane lavatory, he's an idiot. Yes, C4 is stable, but playing with live electrical detonators near high-power radios is just stupid. On the other hand, bringing everything through security and onto the plane is perfectly plausible. Security is so focused on catching people with lipstick and shampoo that they're ignoring actual threats.
EDITED TO ADD (9/3): More news.
EDITED TO ADD (9/8): The "expert" is Samson Macariola, and he has recanted.
Browzar automatically deletes Internet caches, histories, cookies and auto-complete forms. Auto-complete is the feature that anticipates the search term or Web address a user might enter by relying on information previously entered into the browser.
I know nothing else about this. If you want, download it here.
EDITED TO ADD (9/1): This browser seems to be both fake and full of adware.
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.