Schneier on Security
A blog covering security and security technology.
November 2010 Archives
Jeffrey Rosen opines:
Although the Supreme Court hasn't evaluated airport screening technology, lower courts have emphasized, as the U.S. Court of Appeals for the 9th Circuit ruled in 2007, that "a particular airport security screening search is constitutionally reasonable provided that it 'is no more extensive nor intensive than necessary, in the light of current technology, to detect the presence of weapons or explosives.'"
In other news, The New York Times wrote an editorial in favor of the scanners. I was surprised.
I agree with Glenn Greenwald. I don't know if it's an actual terrorist that the FBI arrested, or if it's another case of entrapment.
All of the information about this episode -- all of it -- comes exclusively from an FBI affidavit filed in connection with a Criminal Complaint against Mohamud. As shocking and upsetting as this may be to some, FBI claims are sometimes one-sided, unreliable and even untrue, especially when such claims -- as here -- are uncorroborated and unexamined.
The JFK Airport plotters seem to have been egged on by an informant, a twice-convicted drug dealer. An FBI informant almost certainly pushed the Fort Dix plotters to do things they wouldn't have ordinarily done. The Miami gang's Sears Tower plot was suggested by an FBI undercover agent who infiltrated the group. And in 2003, it took an elaborate sting operation involving three countries to arrest an arms dealer for selling a surface-to-air missile to an ostensible Muslim extremist. Entrapment is a very real possibility in all of these cases.
In any case, notice that it was old-fashioned police investigation that caught this guy.
EDITED TO ADD (12/13): Another analysis.
From a study on zoo security:
Among other measures, the scientists recommend not allowing animals to walk freely within the zoo grounds, and ensuring there is a physical barrier marking the zoo boundaries, and preventing individuals from escaping through drains, sewers or any other channels.
Isn't all that sort of obvious?
Total cost for the Yemeni printer cartridge bomb plot: $4200.
"Two Nokia mobiles, $150 each, two HP printers, $300 each, plus shipping, transportation and other miscellaneous expenses add up to a total bill of $4,200. That is all what Operation Hemorrhage cost us," the magazine said.
Even if you add in costs for training, recruiting, logistics, and everything else, that's still remarkably cheap. And think of how much we've spent on security in the aftermath.
As it turns out, this is bin Laden's plan:
In his October 2004 address to the American people, bin Laden noted that the 9/11 attacks cost al Qaeda only a fraction of the damage inflicted upon the United States. "Al Qaeda spent $500,000 on the event," he said, "while America in the incident and its aftermath lost -- according to the lowest estimates -- more than $500 billion, meaning that every dollar of al Qaeda defeated a million dollars."
None of this would work if we didn't help them by terrorizing ourselves. I wrote this after the Underwear Bomber failed:
Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We're reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.
At Woods Hole:
It is known now, through the work of Mooney and others, that the squid hearing system has some similarities and some differences compared to human hearing. Squid have a pair of organs called statocysts, balance mechanisms at the base of the brain that contain a tiny grain of calcium, which maintains its position as the animal maneuvers in the water. These serve a function similar to human ear canals.
I have been thinking a lot about security against psychopaths. Or, at least, how we have traditionally secured social systems against these sorts of people, and how we can secure our socio-technical systems against them. I don't know if I have any conclusions yet, only a short reading list.
EDITED TO ADD (12/12): Good article from 2001. The sociobiology of sociopathy. Psychopathic fraudsters and how they function in bureaucracies.
Interesting story of the withdrawal of the A5/2 encryption algorithm from GSM phones.
Good. It was always a dumb idea:
The color-coded threat levels were doomed to fail because "they don’t tell people what they can do -- they just make people afraid," said Bruce Schneier, an author on security issues. He said the system was "a relic of our panic after 9/11" that "never served any security purpose."
I wrote this in 2004:
In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.
Another alert system to compare this one to is the DEFCON system. At each DEFCON level, there are specific actions people have to take: at one DEFCON level -- and I'm making this up -- you call everyone back from leave, at another you fuel all the bombers, at another you arm the bombs, and so on. What actions am I supposed to take when the terrorist threat level is Yellow? When it is Orange? I have no idea.
EDITED TO ADD (11/25): Good observation:
The DHS National Threat Advisory is a public alert system. That a public alert system is indicating imminent disaster is not surprising. In fact it's inevitable. It's the nature of public alert systems to signal imminent disaster at all times. I've composed "Blakley's Law" (next time I come up with one of these I'll rename this one "Blakley's First Law") to describe the phenomenon: "Every public alert system's status indicator rises until it reaches its disaster imminent setting and remains at that setting until it is retired from service."
In Europe, although the article doesn't say where:
Many banks have fitted ATMs with devices that are designed to thwart criminals from attaching skimmers to the machines. But it now appears in some areas that those devices are being successfully removed and then modified for skimming, according to the latest report from the European ATM Security Team (EAST), which collects data on ATM fraud throughout Europe.
I think that's where my collection will be going, too.
How to spoof your location on Facebook with your BlackBerry.
A short history of airport security: We screen for guns and bombs, so the terrorists use box cutters. We confiscate box cutters and corkscrews, so they put explosives in their sneakers. We screen footwear, so they try to use liquids. We confiscate liquids, so they put PETN bombs in their underwear. We roll out full-body scanners, even though they wouldn’t have caught the Underwear Bomber, so they put a bomb in a printer cartridge. We ban printer cartridges over 16 ounces — the level of magical thinking here is amazing — and they’re going to do something else.
The other participants are worth reading, too.
I also did an interview in -- of all places -- Popular Mechanics.
Rare common sense:
But Gen Richards told the BBC it was not possible to defeat the Taliban or al-Qaeda militarily.
New research, published late last week, has established that Stuxnet searches for frequency converter drives made by Fararo Paya of Iran and Vacon of Finland. In addition, Stuxnet is only interested in frequency converter drives that operate at very high speeds, between 807 Hz and 1210 Hz.
The threat of Stuxnet variants is being used to scare senators.
Me on Stuxnet.
Photographic evidence from Jamaica.
Last week, I gave a talk on cyberwar and cyberconflict at the Institute for International and European Affairs in Dublin. Here's the video.
It was only the second time I've given the talk. About three quarters in, I noticed that I didn't have my fourth and final page of notes. So if the ending feels a bit scattered, that's why.
Things are happening so fast that I don't know if I should bother. But here are some links and observations.
This first-hand report, from a man who refused to fly rather than subject himself to a full-body scan or an enhanced patdown, has been making the rounds. (The TSA is now investigating him.) It reminds me of Penn Jillette's story from 2002.
A woman has a horrific story of opting out of the full-body scanners. More stories: this one about the TSA patting down a screaming toddler. And here's Dave Barry's encounter (also this NPR interview).
Sadly, I agree with this:
It is no accident that women have been complaining about being pulled out of line because of their big breasts, having their bodies commented on by TSA officials, and getting inappropriate touching when selected for pat-downs for nearly 10 years now, but just this week it went viral. It is no accident that CAIR identified Islamic head scarves (hijab) as an automatic trigger for extra screenings in January, but just this week it went viral. What was different?
Seems that once you enter airport security, you need to be subjected to it -- whether you decide to fly or not.
I experienced the enhanced patdown myself, at DCA, on Tuesday. It was invasive, but not as bad as these stories. It seems clear that TSA agents are inconsistent about these procedures. They've probably all had the same training, but individual agents put it into practice very differently.
Of course, airport security is an extra-Constitutional area, so there's no clear redress mechanism for those subjected to too-intimate patdowns.
This video provides tips to parents flying with young children. Around 2:50 in, the reporter indicates that you can find out if your child has been pre-selected for secondary, and then recommends requesting "de-selection." That doesn't make sense.
Neither does this story, which says that the TSA will only touch Muslim women in the head and neck area.
Nor this story. The author convinces people on line to opt-out with him. After the first four opt-outs, the TSA just sent people through the metal detectors.
Yesterday, the TSA administrator John Pistole was grilled by the Senate Commerce, Science, and Transportation Committee on full-body scanners. Rep. Ron Paul introduced a bill to ban them. (His floor speech is here.) I'm one of the plaintiffs in a lawsuit to ban them.
Good essay from a libertarian perspective. Two more. Marc Rotenberg's essay. Ralph Nader's essay. And the Los Angeles Times really screws up with this editorial: "Shut Up and Be Scanned." Amitai Etzioni makes a better case for the machines.
Michael Chertoff, former Department of Homeland Security secretary, has been touting the full-body scanners, while at the same time maintaining a financial interest in the company that makes them.
A typical dental X-ray exposes the patient to about 2 millirems of radiation. According to one widely cited estimate, exposing each of 10,000 people to one rem (that is, 1,000 millirems) of radiation will likely lead to 8 excess cancer deaths. Using our assumption of linearity, that means that exposure to the 2 millirems of a typical dental X-ray would lead an individual to have an increased risk of dying from cancer of 16 hundred-thousandths of one percent. Given that very small risk, it is easy to see why most rational people would choose to undergo dental X-rays every few years to protect their teeth.
Given that there will be 600 million airplane passengers per year, that makes the machines deadlier than the terrorists.
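Spelled out, the arithmetic behind that risk figure works like this (a sketch using only the numbers quoted above):

```python
# Linear no-threshold arithmetic, using only the figures quoted above.
deaths_per_person_rem = 8 / 10_000   # 8 excess cancer deaths per 10,000 person-rems
dental_xray_rems = 2 / 1_000         # a typical dental X-ray: 2 millirems

risk = deaths_per_person_rem * dental_xray_rems
print(f"{risk:.1e}")                 # 1.6e-06 -- 16 hundred-thousandths of one percent
```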
Nate Silver on the hidden cost of these new airport security measures.
According to the Cornell study, roughly 130 inconvenienced travelers died every three months as a result of additional traffic fatalities brought on by substituting ground transit for air transit. That's the equivalent of four fully-loaded Boeing 737s crashing each year.
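The quoted figures check out, assuming a fully loaded 737 carries roughly 130 passengers (my assumption, not the study's):

```python
# Sanity check of the quoted Cornell figures.
deaths_per_quarter = 130
annual_deaths = deaths_per_quarter * 4      # four quarters per year
seats_per_737 = 130                         # assumed capacity of a fully loaded 737

print(annual_deaths, annual_deaths / seats_per_737)   # 520 deaths, 4.0 planes
```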
Jeffrey Goldberg asked me which I would rather see for children: backscatter X-ray or enhanced pat down. After remarking what an icky choice it was, I opted for the X-ray; it's less traumatic.
Here are a bunch of leaked body scans. They're not from airports, but they should make you think twice before accepting the TSA's assurances that the images will never be saved. RateMyBackscatter.com.
The New York Times on the protests.
Common sense from the Netherlands:
The security boss of Amsterdam's Schiphol Airport is calling for an end to endless investment in new technology to improve airline security.
And here's Rafi Sela, former chief security officer of the Israel Airport Authority:
A leading Israeli airport security expert says the Canadian government has wasted millions of dollars to install "useless" imaging machines at airports across the country.
I'm quoted in the Los Angeles Times:
Some experts argue the new procedures could make passengers uncomfortable without providing a substantial increase in security. "Security measures that just force the bad guys to change tactics and targets are a waste of money," said Bruce Schneier, a security expert who works for British Telecom. "It would be better to put that money into investigations and intelligence."
I'm quoted in The Wall Street Journal twice -- once as saying:
"All these machines require you to guess the plot correctly. If you don't, then they are completely worthless," said Bruce Schneier, a security expert.
and once as saying:
Security guru Bruce Schneier, a plaintiff in the scanner suit, calls this "magical thinking . . . Descend on what the terrorists happened to do last time, and we'll all be safe. As if they won't think of something else."
In 2005, I wrote:
I'm not impressed with this security trade-off. Yes, backscatter X-ray machines might be able to detect things that conventional screening might miss. But I already think we're spending too much effort screening airplane passengers at the expense of screening luggage and airport employees...to say nothing of the money we should be spending on non-airport security.
On the other hand, CBS News is reporting that 81% of Americans support full-body scans. Maybe they should only ask flying Americans.
I still stand by this, also from 2005:
Exactly two things have made airline travel safer since 9/11: reinforcement of cockpit doors, and passengers who now know that they may have to fight back. Everything else -- Secure Flight and Trusted Traveler included -- is security theater. We would all be a lot safer if, instead, we implemented enhanced baggage security -- both ensuring that a passenger's bags don't fly unless he does, and explosives screening for all baggage -- as well as background checks and increased screening for airport employees.
And this, written in 2010 after the Underwear Bomber failed:
Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We're reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.
What else is going on?
EDITED TO ADD: (11/19): Lots more political cartoons.
This has to win for DHS Quote of the Year, from Secretary Janet Napolitano on the issue:
I really want to say, look, let's be realistic and use our common sense.
The TSA doesn't train its screeners very well. A response to a letter-writer from Sen. Coburn. From Slate: "Does the TSA Ever Catch Terrorists?" A pilot's story. The screeners' point of view. Good essay from the National Post.
Fun with the Playmobil airline security screening playset.
EDITED TO ADD (11/20): I was interviewed by Popular Mechanics.
Here's an alert you can hand out to passengers at security checkpoints where there are backscatter machines.
EDITED TO ADD (11/21): Me in an Associated Press piece on the anti-TSA backlash:
"After 9/11 people were scared and when people are scared they'll do anything for someone who will make them less scared," said Bruce Schneier, a Minneapolis security technology expert who has long been critical of the TSA. "But ... this is particularly invasive. It's strip-searching. It's body groping. As abhorrent goes, this pegs it."
President Obama comments:
"I understand people’s frustrations, and what I’ve said to the TSA is that you have to constantly refine and measure whether what we’re doing is the only way to assure the American people’s safety. And you also have to think through are there other ways of doing it that are less intrusive," Obama said.
TSA sendup on Saturday Night Live yesterday.
EDITED TO ADD (11/22): The thing about Muslim women being exempt seems to be based on a misreading of this press release. What they seem to be saying is that if you're selected because you could have something under your hijab, then they only need to pat down the area the hijab covers. It's not a special exemption.
TSA Administrator John Pistole comments:
We are constantly evaluating and adapting our security measures, and as we have said from the beginning, we are seeking to strike the right balance between privacy and security. In all such security programs, especially those that are applied nation-wide, there is a continual process of refinement and adjustment to ensure that best practices are applied and that feedback and comment from the traveling public is taken into account.
Yesterday I participated in a New York Times "Room for Debate" discussion on airline security. My contribution is nothing I haven't said before, so I won't reprint it here. The other participants are worth reading too.
More from Nate Silver, on public opinion and the likely TSA reaction:
It is perhaps foolish to predict how the T.S.A. will respond this time -- when they have relaxed rules in the past, they have done so quietly, rather than in response to some acute public backlash. But caution aside, I would be surprised if the new procedures survived much past the New Year without significant modification.
CNN's advice to the public.
Things are definitely strained out there:
Through a statement released by his attorney Sunday night, Wolanyk said "TSA needs to see that I'm not carrying any weapons, explosives, or other prohibited substances, I refuse to have images of my naked body viewed by perfect strangers, and having been felt up for the first time by TSA the week prior (I travel frequently) I was not willing to be molested again."
From the same article:
A woman, identified by Harbor police as Danielle Kelli Hayman, 39, of San Diego was detained for recording the incident on a phone.
That's much more worrying.
Interview with Brian Michael Jenkins, a senior advisor at the RAND Corp. and a former member of the White House Commission on Aviation Safety and Security.
Here's someone who managed to avoid both the full-body scanners and the enhanced pat down. It took him two and a half hours. And here's someone who got patted down, and managed to sneak two razor blades through security anyway.
How the TSA will deal with people with disabilities. How the pat downs affect survivors of sexual assault. (Read also the comments here.) Juan Cole on how airport security has shifted from looking for people with guns and traditional bombs to looking for people with PETN. And TSA-proof underwear.
EDITED TO ADD (11/24): Information on the health risks of the backscatter machines. And here's a woman who stripped down to her underwear before going through airport security. This comes from a perspective I generally don't buy, but it's hard to dismiss his writing. I don't think it's a conspiracy, but I do think it's a trend. "This Modern World" has a comic on the topic. Slate on the lack of guidelines. Why the TSA should be privatized.
EDITED TO ADD (11/25): I was on Keith Olbermann last night.
Here's a scenario:
Refuse to be terrorized, everyone.
Adding them all up, the U.S. government "receives between 8,000 and 10,000 pieces of information per day, fingering just as many different people as potential threats. They also get information about 40 supposed plots against the United States or its allies daily."
All of this means that first-time suspects and isolated pieces of information are less likely to be exhaustively investigated. That's what happened with underwear bomber Umar Farouk Abdulmutallab. Intelligence agencies had heard that a Nigerian was training with al-Qaeda, received information about a Christmas plot, and read a couple of intercepts about someone named Umar Farouk (no last name) before Abdulmutallab's father walked into a U.S. embassy to report him. No one ever figured out that these seemingly unrelated pieces of intelligence referred to the same plot, so intelligence agencies didn't pour enough resources into investigating it.
As I wrote in 2007, in my essay: "The War on the Unexpected":
If you ask amateurs to act as front-line security personnel, you shouldn't be surprised when you get amateur security.
Eye movements instead of eye structures.
The new system tracks the way a person's eye moves as he watches an icon roam around a computer screen. The way the icon moves can be different every time, but the user's eye movements include "kinetic features" -- slight variations in trajectory -- that are unique, making it possible to identify him.
These could surely be better. Anyone?
There are several services that do automatic plagiarism detection -- basically, comparing phrases from the paper with general writings on the Internet and even caches of previously written papers -- but detecting this kind of custom plagiarism work is much harder.
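The phrase-comparison idea behind those services can be sketched as word-level n-gram "shingling" -- this is a toy illustration, not any vendor's actual algorithm:

```python
# Toy sketch of n-gram "shingling," the phrase-comparison idea behind
# automatic plagiarism detectors. Names and thresholds are illustrative.

def shingles(text: str, n: int = 5) -> set:
    """Return the set of n-word phrases appearing in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(paper: str, source: str, n: int = 5) -> float:
    """Fraction of the paper's n-grams that also appear in the source."""
    a, b = shingles(paper, n), shingles(source, n)
    return len(a & b) / len(a) if a else 0.0
```

A paper custom-written to order shares almost no shingles with any indexed source, which is exactly why this kind of cheating slips past the detectors.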
I can think of three ways to deal with this:
The real issue is proof. Most colleges and universities are unwilling to pursue this without solid proof -- the lawsuit risk is just too great -- and in these cases the only real proof is self-incrimination.
Fundamentally, this is a problem of misplaced economic incentives. As long as the academic credential is worth more to a student than the knowledge gained in getting that credential, there will be an incentive to cheat.
Related note: anyone remember my personal experience with plagiarism from 2005?
Last month, Scott Charney of Microsoft proposed that infected computers be quarantined from the Internet. Using a public health model for Internet security, the idea is that infected computers spreading worms and viruses are a risk to the greater community and thus need to be isolated. Internet service providers would administer the quarantine, and would also clean up and update users' computers so they could rejoin the greater Internet.
This isn't a new idea. Already there are products that test computers trying to join private networks, and only allow them access if their security patches are up-to-date and their antivirus software certifies them as clean. Computers denied access are sometimes shunned to a limited-capability sub-network where all they can do is download and install the updates they need to regain access. This sort of system has been used with great success at universities and end-user-device-friendly corporate networks. They're happy to let you log in with any device you want--this is the consumerization trend in action--as long as your security is up to snuff.
Charney's idea is to do that on a larger scale. To implement it we have to deal with two problems. There's the technical problem--making the quarantine work in the face of malware designed to evade it, and the social problem--ensuring that people don't have their computers unduly quarantined. Understanding the problems requires us to understand quarantines in general.
Quarantines have been used to contain disease for millennia. In general several things need to be true for them to work. One, the thing being quarantined needs to be easily recognized. It's easier to quarantine a disease if it has obvious physical characteristics: fever, boils, etc. If there aren't any obvious physical effects, or if those effects don't show up while the disease is contagious, a quarantine is much less effective.
Similarly, it's easier to quarantine an infected computer if that infection is detectable. As Charney points out, his plan is only effective against worms and viruses that our security products recognize, not against those that are new and still undetectable.
Two, the separation has to be effective. The leper colonies on Molokai and Spinalonga both worked because it was hard for the quarantined to leave. Quarantined medieval cities worked less well because it was too easy to leave, or--when the diseases spread via rats or mosquitoes--because the quarantine was targeted at the wrong thing.
Computer quarantines have been generally effective because the users whose computers are being quarantined aren't sophisticated enough to break out of the quarantine, and find it easier to update their software and rejoin the network legitimately.
Three, only a small section of the population must need to be quarantined. The solution works only if it's a minority of the population that's affected, either with physical diseases or computer diseases. If most people are infected, overall infection rates aren't going to be slowed much by quarantining. Similarly, a quarantine that tries to isolate most of the Internet simply won't work.
Four, the benefits must outweigh the costs. Medical quarantines are expensive to maintain, especially if people are being quarantined against their will. Determining who to quarantine is either expensive (if it's done correctly) or arbitrary, authoritative and abuse-prone (if it's done badly). It could even be both. The value to society must be worth it.
It's the last point that Charney and others emphasize. If Internet worms were only damaging to the infected, we wouldn't need a societally imposed quarantine like this. But they're damaging to everyone else on the Internet, spreading and infecting others. At the same time, we can implement systems that quarantine cheaply. The value to society far outweighs the cost.
That makes sense, but once you move quarantines from isolated private networks to the general Internet, the nature of the threat changes. Imagine an intelligent and malicious infectious disease: that's what malware is. The current crop of malware ignores quarantines, which are still few and far enough between not to affect its effectiveness.
If we tried to implement Internet-wide--or even countrywide--quarantining, worm-writers would start building in ways to break the quarantine. So instead of nontechnical users not bothering to break quarantines because they don't know how, we'd have technically sophisticated virus-writers trying to break quarantines. Implementing the quarantine at the ISP level would help, and if the ISP monitored computer behavior, not just specific virus signatures, it would be somewhat effective even in the face of evasion tactics. But evasion would be possible, and we'd be stuck in another computer security arms race. This isn't a reason to dismiss the proposal outright, but it is something we need to think about when weighing its potential effectiveness.
Additionally, there's the problem of who gets to decide which computers to quarantine. It's easy on a corporate or university network: the owners of the network get to decide. But the Internet doesn't have that sort of hierarchical control, and denying people access without due process is fraught with danger. What are the appeal mechanisms? The audit mechanisms? Charney proposes that ISPs administer the quarantines, but there would have to be some central authority that decided what degree of infection would be sufficient to impose the quarantine. Although this is being presented as a wholly technical solution, it's these social and political ramifications that are the most difficult to determine and the easiest to abuse.
Once we implement a mechanism for quarantining infected computers, we create the possibility of quarantining them in all sorts of other circumstances. Should we quarantine computers that don't have their patches up to date, even if they're uninfected? Might there be a legitimate reason for someone to avoid patching his computer? Should the government be able to quarantine someone for something he said in a chat room, or a series of search queries he made? I'm sure we don't think it should, but what if that chat and those queries revolved around terrorism? Where's the line?
Microsoft would certainly like to quarantine any computers it feels are not running legal copies of its operating system or applications software. The music and movie industries will want to quarantine anyone they decide is downloading or sharing pirated media files -- they're already pushing similar proposals.
A security measure designed to keep malicious worms from spreading over the Internet can quickly become an enforcement tool for corporate business models. Charney addresses the need to limit this kind of function creep, but I don't think it will be easy to prevent; it's an enforcement mechanism just begging to be used.
Once you start thinking about implementation of quarantine, all sorts of other social issues emerge. What do we do about people who need the Internet? Maybe VoIP is their only phone service. Maybe they have an Internet-enabled medical device. Maybe their business requires the Internet to run. The effects of quarantining these people would be considerable, even potentially life-threatening. Again, where's the line?
What do we do if people feel they are quarantined unjustly? Or if they are using nonstandard software unfamiliar to the ISP? Is there an appeals process? Who administers it? Surely not a for-profit company.
Public health is the right way to look at this problem. This conversation--between the rights of the individual and the rights of society--is a valid one to have, and this solution is a good possibility to consider.
There are some applicable parallels. We require drivers to be licensed and cars to be inspected not because we worry about the danger of unlicensed drivers and uninspected cars to themselves, but because we worry about their danger to other drivers and pedestrians. The small number of parents who don't vaccinate their kids have already caused minor outbreaks of whooping cough and measles among the greater population. We all suffer when someone on the Internet allows his computer to get infected. How we balance that with individuals' rights to maintain their own computers as they see fit is a discussion we need to start having.
This essay previously appeared on Forbes.com.
EDITED TO ADD (11/15): From an anonymous reader:
In your article you mention that for quarantines to work, you must be able to detect infected individuals. It must also be detectable quickly, before the individual has the opportunity to infect many others. Quarantining an individual after they’ve infected most of the people they regularly interact with is of little value. You must quarantine individuals when they have infected, on average, less than one other person.
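The reader's threshold is the classic epidemic condition R < 1. As a toy model (the rates are invented for illustration): if each infected machine causes new infections at some average daily rate, quarantining it after d days gives an effective reproduction number of rate × d, and the outbreak dies out only when that product stays below one:

```python
# Toy reproduction-number calculation; the rates are invented for illustration.
def reproduction_number(infections_per_day: float, days_until_quarantine: float) -> float:
    """Average number of machines each infected machine infects before isolation."""
    return infections_per_day * days_until_quarantine

print(reproduction_number(0.5, 3.0))  # 1.5 -> outbreak grows
print(reproduction_number(0.5, 1.0))  # 0.5 -> outbreak dies out
```

The same math is why detection speed matters as much as detection accuracy: halving the time to quarantine halves R.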
Long article on convicted hacker Albert Gonzalez from The New York Times Magazine.
In an effort to shield their still-secret products from prying eyes, automakers testing prototype models, often in the desert and at other remote locales, have long covered the grilles and headlamps with rubber, vinyl and tape -- the perfunctory equivalent of masks and hats. Now the old materials are being replaced or supplemented with patterned wrappings applied like wallpaper. Test cars are wearing swirling paisley patterns, harlequin-style diamonds and cubist zigzags.
From Brian Krebs:
Hacked and malicious sites designed to steal data from unsuspecting users via malware and phishing are a dime a dozen, often located in the United States, and are a key target for takedown by ISPs and security researchers. But when online miscreants seek stability in their Web projects, they often turn to so-called "bulletproof hosting" providers, mini-ISPs that specialize in offering services that are largely immune from takedown requests and pressure from Western law enforcement agencies.
How often should you change your password? I get asked that question a lot, usually by people annoyed at their employer's or bank's password expiration policy: people who finally memorized their current password and are realizing they'll have to write down their new password. How could that possibly be more secure, they want to know.
The answer depends on what the password is used for.
The downside of changing passwords is that it makes them harder to remember. And if you force people to change their passwords regularly, they're more likely to choose easy-to-remember -- and easy-to-guess -- passwords than they are if they can use the same passwords for many years. So any password-changing policy needs to be chosen with that consideration in mind.
The primary reason to give an authentication credential -- not just a password, but any authentication credential -- an expiration date is to limit the amount of time a lost, stolen, or forged credential can be used by someone else. If a membership card expires after a year, then if someone steals that card he can at most get a year's worth of benefit out of it. After that, it's useless.
This becomes less important when the credential contains a biometric -- even a photograph -- or is verified online. It's much less important for a credit card or passport to have an expiration date, now that they're not so much bearer documents as just pointers to a database. If, for example, the credit card database knows when a card is no longer valid, there's no reason to put an expiration date on the card. But the expiration date does mean that a forgery is only good for a limited length of time.
Passwords are no different. If a hacker gets your password either by guessing or stealing it, he can access your network as long as your password is valid. If you have to update your password every quarter, that significantly limits the utility of that password to the attacker.
At least, that's the traditional theory. It assumes a passive attacker, one who will eavesdrop over time without alerting you that he's there. In many cases today, though, that assumption no longer holds. An attacker who gets the password to your bank account by guessing or stealing it isn't going to eavesdrop. He's going to transfer money out of your account -- and then you're going to notice. In this case, it doesn't make a lot of sense to change your password regularly -- but it's vital to change it immediately after the fraud occurs.
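The gap between the passive and noisy attacker can be put in back-of-the-envelope terms. Here's a small Monte Carlo sketch -- all parameters are illustrative assumptions, not data -- where a compromise happens at a uniformly random point in the rotation cycle, and the attacker's window ends at the next forced change or, for a noisy attack, at detection:

```python
import random

def avg_exposure(rotation_days, detect_days=None, trials=10_000, seed=0):
    """Average number of days a stolen password stays useful.

    The compromise occurs at a uniform point within the rotation
    period. A passive attacker keeps access until the next forced
    change; a noisy attacker (e.g. draining a bank account) is cut
    off after detect_days regardless of the rotation schedule.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = rng.uniform(0, rotation_days)
        window = rotation_days - t          # time until forced change
        if detect_days is not None:
            window = min(window, detect_days)  # fraud noticed first
        total += window
    return total / trials

# Quarterly rotation vs. a passive attacker: ~45 days of access.
quiet = avg_exposure(90)
# Same policy vs. a noisy attacker detected in a day: ~1 day,
# and the rotation schedule is essentially irrelevant.
noisy = avg_exposure(90, detect_days=1)
```

Under these assumptions, rotation halves the passive attacker's window but does almost nothing against the noisy one -- which is the point: for accounts where fraud is visible, change the password after the fraud, not on a schedule.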
Someone committing espionage in a private network is more likely to be stealthy. But he's also not likely to rely on the user credential he guessed and stole; he's going to install backdoor access or create his own account. Here again, forcing network users to regularly change their passwords is less important than forcing everyone to change their passwords immediately after the spy is detected and removed -- you don't want him getting in again.
Social networking sites are somewhere in the middle. Most of the criminal attacks against Facebook users use the accounts for fraud. "Help! I'm in London and my wallet was stolen. Please wire money to this account. Thank you." Changing passwords periodically doesn't help against this attack, although -- of course -- change your password as soon as you regain control of your account. But if your kid sister has your password -- or the tabloid press, if you're that kind of celebrity -- they're going to listen in until you change it. And you might not find out about it for months.
So in general: you don't need to regularly change the password to your computer or online financial accounts (including the accounts at retail sites); definitely not for low-security accounts. You should change your corporate login password occasionally, and you need to take a good hard look at your friends, relatives, and paparazzi before deciding how often to change your Facebook password. But if you break up with someone you've shared a computer with, change them all.
Two final points. One, this advice is for login passwords. There's no reason to change any password that is a key to an encrypted file. Just keep the same password as long as you keep the file, unless you suspect it's been compromised. And two, it's far more important to choose a good password for the sites that matter -- don't worry about sites you don't care about that nonetheless demand that you register and choose a password -- in the first place than it is to change it. So if you have to worry about something, worry about that. And write your passwords down, or use a program like Password Safe.
This essay originally appeared on DarkReading.com.
EDITED TO ADD (11/14): Microsoft Research says the same thing.
The TSA is making us remove our belts even when we don't have to.
European airports have made us remove our belts for years. My normal tactic is to pull my shirt tails out of my pants and over my belt. Then I flash my waist and tell them I'm not wearing a belt. It doesn't set off the metal detector, so they don't notice.
Good article on security options for the Washington Monument:
Unfortunately, the bureaucratic gears are already grinding, and what will be presented to the public Monday doesn't include important options, including what became known as the "tunnel" in previous discussions of the issue. Nor does it include the choice of more minimal visitor screening -- simple wanding or visual bag inspection -- that might not require costly and intrusive changes to the structure. The choice to accept risk isn't on the table, either. Finally, and although it might seem paradoxical given how important resisting security authoritarianism is to preserving the symbolism of freedom, it doesn't take seriously the idea that perhaps the monument's interior should be closed altogether -- a small concession that might have collateral benefits.
EDITED TO ADD (11/15): More information on the decision process.
Internet Eyes is a U.K. startup designed to crowdsource digital surveillance. People pay a small fee to become a "Viewer." Once they do, they can log onto the site and view live anonymous feeds from surveillance cameras at retail stores. If they notice someone shoplifting, they can alert the store owner. Viewers get rated on their ability to differentiate real shoplifting from false alarms, can win 1000 pounds if they detect the most shoplifting in some time interval, and otherwise get paid a wage that most likely won't cover their initial fee.
Although the system has some nod towards privacy, groups like Privacy International oppose the system for fostering a culture of citizen spies. More fundamentally, though, I don't think the system will work. Internet Eyes is primarily relying on voyeurism to compensate its Viewers. But most of what goes on in a retail store is incredibly boring. Some of it is actually voyeuristic, and very little of it is criminal. The incentives just aren't there for Viewers to do more than peek, and there's no obvious way to discourage them from siding with the shoplifter and just watching the scenario unfold.
This isn't the first time groups have tried to crowdsource surveillance camera monitoring. Texas's Virtual Border Patrol tried the same thing: deputizing the general public to monitor the Texas-Mexico border. It ran out of money last year, and was widely criticized as a joke.
This system suffered the same problems as Internet Eyes -- not enough incentive to do a good job, boredom because crime is the rare exception -- as well as the fact that false alarms were very expensive to deal with.
Both of these systems remind me of the one time this idea was conceptualized correctly. Invented in 2003 by my friend and colleague Jay Walker, US HomeGuard also tried to crowdsource surveillance camera monitoring. But this system focused on one very specific security concern: people in no-man's areas. These are areas between fences at nuclear power plants or oil refineries, border zones, areas around dams and reservoirs, and so on: areas where there should never be anyone.
The idea is that people would register to become "spotters." They would get paid a decent wage (that and patriotism was the incentive), receive a stream of still photos, and be asked a very simple question: "Is there a person or a vehicle in this picture?" If a spotter clicked "yes," the photo -- and the camera -- would be referred to whatever professional response the camera owner had set up.
HomeGuard would monitor the monitors in two ways. One, by regularly sending stored, known photos to spotters to verify that they were paying attention. And two, by sending live photos to multiple spotters and correlating the results -- escalating a photo to many more spotters whenever one claimed to have spotted a person or vehicle.
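That "monitor the monitors" scheme is simple enough to sketch in a few lines of code. This is my own illustration, not anything from HomeGuard -- all the function and field names are hypothetical: seeded photos with known answers score each spotter's attentiveness, and live photos are escalated on a simple vote across spotters.

```python
from collections import defaultdict

def score_spotters(responses, known_answers):
    """Score spotters against the seeded 'known' photos only.

    responses: {photo_id: {spotter_id: bool}} -- each spotter's answer
        to "is there a person or vehicle in this picture?"
    known_answers: {photo_id: bool} -- ground truth for seeded photos.
    Returns {spotter_id: fraction of seeded photos answered correctly}.
    """
    tallies = defaultdict(lambda: [0, 0])  # spotter -> [correct, seen]
    for photo, votes in responses.items():
        if photo in known_answers:
            truth = known_answers[photo]
            for spotter, answer in votes.items():
                tallies[spotter][1] += 1
                tallies[spotter][0] += (answer == truth)
    return {s: correct / seen for s, (correct, seen) in tallies.items()}

def escalate(votes, threshold=0.5):
    """Correlate live-photo answers: refer the camera to the owner's
    professional responders if enough spotters report a sighting."""
    return sum(votes.values()) / len(votes) >= threshold
```

The seeded photos keep bored spotters honest (their scores drop if they stop paying attention), and the vote across independent spotters filters out both false alarms and the occasional spotter who sides with the intruder.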
Just knowing that there's a person or a vehicle in a no-man's area is only the first step in a useful response, and HomeGuard envisioned a bunch of enhancements to the rest of that system. Flagged photos could be sent to the digital phones of patrolling guards, cameras could be controlled remotely by those guards, and speakers in the cameras could issue warnings. Remote citizen spotters were only useful for that first step, looking for a person or a vehicle in a photo that shouldn't contain any. Only real guards at the site itself could tell an intruder from the occasional maintenance person.
Of course the system isn't perfect. A would-be infiltrator could sneak past the spotters by holding a bush in front of him, or disguising himself as a vending machine. But it does fill in a gap in what fully automated systems can do, at least until image processing and artificial intelligence get significantly better.
HomeGuard never got off the ground. There was never any good data about whether spotters were more effective than motion sensors as a first level of defense. But more importantly, Walker says that the politics surrounding homeland security money post-9/11 were just too thick to penetrate, and that as an outsider he couldn't get his ideas heard. Today, the patriotic fervor that gripped so many people post-9/11 has probably dampened, and he'd likely have to pay his spotters more than he envisioned seven years ago. Still, I thought it was a clever idea then and I still think it's a clever idea -- and it's an example of how to do surveillance crowdsourcing correctly.
Making the system more general runs into all sorts of problems. An amateur can spot a person or vehicle pretty easily, but is much harder pressed to notice a shoplifter. The privacy implications of showing random people pictures of no-man's lands are minimal, while a busy store is another matter -- stores have enough individuality to be identifiable, as do people. Public photo tagging will even allow the process to be automated. And, of course, there's the normalization of a spy-on-your-neighbor surveillance society, where it's perfectly reasonable to watch each other on cameras just in case one of us does something wrong.
This essay first appeared in ThreatPost.
Talk #1: "The Art of Forensic Warfare," Andy Clark. Riffing on Sun Tzu's The Art of War, Clark discussed the war -- the back and forth -- between cyber attackers and cyber forensics. This isn't to say that we're at war, but today's attacker tactics are increasingly sophisticated and warlike. Additionally, the pace is greater, the scale of impact is greater, and the subjects of attack are broader. To defend ourselves, we need to be equally sophisticated and -- possibly -- more warlike.
Clark drew parallels from some of the chapters of Sun Tzu's book combined with examples of the work at Bletchley Park. Laying plans: when faced with an attacker -- especially one of unknown capabilities, tactics, and motives -- it's important to both plan ahead and plan for the unexpected. Attack by stratagem: increasingly, attackers are employing complex and long-term strategies; defenders need to do the same. Energy: attacks increasingly start off simple and get more complex over time; while it's easier to detect primary attacks, secondary techniques tend to be more subtle and harder to detect. Terrain: modern attacks take place across a very broad range of terrain, including hardware, OSs, networks, communication protocols, and applications. The business environment under attack is another example of terrain, equally complex. The use of spies: not only human spies, but also keyloggers and other embedded eavesdropping malware. There's a great World War II double-agent story about Eddie Chapman, codenamed ZIGZAG.
Talk #2: "How the Allies Suppressed the Second Greatest Secret of World War II," David Kahn. This talk is from Kahn's article of the same name, published in the Oct 2010 issue of The Journal of Military History. The greatest secret of World War II was the atom bomb; the second greatest secret was that the Allies were reading the German codes. But while there was a lot of public information in the years after World War II about Japanese codebreaking and its value, there was almost nothing about German codebreaking. Kahn discussed how this information was suppressed, and how historians writing World War II histories never figured it out. No one imagined as large and complex an operation as Bletchley Park; it was the first time in history that something like this had ever happened. Most of Kahn's time was spent in a very interesting Q&A about the history of Bletchley Park and World War II codebreaking.
Talk #3: "DNSSec, A System for Improving Security of the Internet Domain Name System," Whitfield Diffie. Whit talked about three watersheds in modern communications security. The first was the invention of the radio. Pre-radio, the most common communications security device was the code book. This was no longer enough when radio caused the amount of communications to explode. In response, inventors took the research in Vigenère ciphers and automated them. This automation led to an explosion of designs and an enormous increase in complexity -- and the rise of modern cryptography.
The second watershed was shared computing. Before the 1960s, the security of computers was the physical security of computer rooms. Timesharing changed that. The result was computer security, a much harder problem than cryptography. Computer security is primarily the problem of writing good code. But writing good code is hard and expensive, so functional computer security is primarily the problem of dealing with code that isn't good. Networking -- and the Internet -- isn't just an expansion of computing capacity. The real difference is how cheap it is to set up communications connections. Setting up these connections requires naming: both IP addresses and domain names. Security, of course, is essential for this all to work; DNSSec is a critical part of that.
The third watershed is cloud computing, or whatever you want to call the general trend of outsourcing computation. Google is a good example. Every organization uses Google search all the time, which probably makes it the most valuable intelligence stream on the planet. How can you protect yourself? You can't, just as you can't whenever you hand over your data for storage or processing -- you just have to trust your outsourcer. There are two solutions. The first is legal: an enforceable contract that protects you and your data. The second is technical, but mostly theoretical: homomorphic encryption that allows you to outsource computation of data without having to trust that outsourcer.
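The fully homomorphic encryption Diffie mentions remains mostly theoretical, but partially homomorphic schemes already exist and show the flavor of the idea. Here is the textbook Paillier cryptosystem with absurdly small toy parameters -- an illustration only, nothing like a production implementation: multiplying two ciphertexts yields a ciphertext of the sum, so an outsourcer could add numbers it cannot read.

```python
from math import gcd

# Toy Paillier parameters -- far too small for real use.
p, q = 17, 19
n = p * q                                      # public modulus, 323
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1) = 144
mu = pow(lam, -1, n)                           # valid since g = n + 1

def encrypt(m, r):
    """Encrypt plaintext m with randomizer r (coprime to n)."""
    assert 0 <= m < n and gcd(r, n) == 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def _L(u):
    return (u - 1) // n

def decrypt(c):
    return (_L(pow(c, lam, n2)) * mu) % n

a = encrypt(7, 5)
b = encrypt(11, 12)
# Multiplying ciphertexts adds the hidden plaintexts: 7 + 11 = 18.
assert decrypt((a * b) % n2) == 18
```

The outsourcer sees only `a`, `b`, and their product; it can compute the encrypted sum without ever learning 7, 11, or 18. Extending this trick from addition to arbitrary computation is exactly the hard, still-mostly-theoretical part.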
Diffie's final point is that we're entering an era of unprecedented surveillance possibilities. It doesn't matter if people encrypt their communications, or if they encrypt their data in storage. As long as they have to give their data to other people for processing, it will be possible to eavesdrop on. Of course the methods will change, but the result will be an enormous trove of information about everybody.
It's kind of an amazing story. A young Asian man used a rubber mask to disguise himself as an old Caucasian man and, with a passport photo that matched his disguise, got through all customs and airport security checks and onto a plane to Canada.
The fact that this sort of thing happens occasionally doesn't surprise me. It's human nature that we miss this sort of thing. I wrote about it in Beyond Fear (pages 153–4):
No matter how much training they get, airport screeners routinely miss guns and knives packed in carry-on luggage. In part, that's the result of human beings having developed the evolutionary survival skill of pattern matching: the ability to pick out patterns from masses of random visual data. Is that a ripe fruit on that tree? Is that a lion stalking quietly through the grass? We are so good at this that we see patterns in anything, even if they're not really there: faces in inkblots, images in clouds, and trends in graphs of random data. Generating false positives helped us stay alive; maybe that wasn't a lion that your ancestor saw, but it was better to be safe than sorry. Unfortunately, that survival skill also has a failure mode. As talented as we are at detecting patterns in random data, we are equally terrible at detecting exceptions in uniform data. The quality-control inspector at Spacely Sprockets, staring at a production line filled with identical sprockets looking for the one that is different, can't do it. The brain quickly concludes that all the sprockets are the same, so there's no point paying attention. Each new sprocket confirms the pattern. By the time an anomalous sprocket rolls off the assembly line, the brain simply doesn't notice it. This psychological problem has been identified in inspectors of all kinds; people can't remain alert to rare events, so they slip by.
Customs officers spend hours looking at people and comparing their faces with their passport photos. They do it on autopilot. Will they catch someone in a rubber mask that matches his passport photo? Probably, but certainly not all the time.
Yes, this is a security risk, but it's not a big one. Because while -- occasionally -- a gun can slip through a metal detector or a masked man can slip through customs, it doesn't happen reliably. So the bad guys can't build a plot around it.
One last point: the young man in the old-man mask was captured by Canadian police. His fellow passengers noticed him. So in the end, his plot failed. Security didn't fail, although a bunch of pieces of it did.
EDITED TO ADD (11/10): Comment (from below) about what actually happened.
Okay, now the terrorists have really affected me personally: they're forcing us to turn off airplane Wi-Fi. No, it's not that the Yemeni package bombs had a Wi-Fi triggering mechanism -- they seem to have had a cell phone triggering mechanism, dubious at best -- but we can imagine an Internet-based triggering mechanism. Put together a sloppy and unsuccessful package bomb with an imagined triggering mechanism, and you have a new and dangerous threat that -- even though it was a threat ever since the first airplane got Wi-Fi capability -- must be immediately dealt with right now.
Please, let's not ever tell the TSA about timers. Or altimeters.
And, while we're talking about the TSA, be sure to opt out of the full-body scanners and remember your sense of humor when a TSA officer slips white powder into your suitcase and then threatens you with arrest.
EDITED TO ADD (11/8): We're banning toner cartridges over 16 ounces.
Additionally, toner and ink cartridges that are over 16 ounces will be banned from all U.S. passenger flights and planes heading to the United States, she said. That ban will also apply to some air cargo shipments.
There's some impressive magical thinking going on here.
Just in time for Halloween.
It can be lucrative:
Avanesov allegedly rented and sold part of his botnet, a common business model for those who run the networks. Other cybercriminals can rent the hacked machines for a specific time for their own purposes, such as sending a spam run or mining the PCs for personal details and files, among other nefarious actions.
EDITED TO ADD (11/11): Paper on the market price of bots.
Last week the police arrested Farooque Ahmed for plotting a terrorist attack on the D.C. Metro system. However, it's not clear how much of the plot was his idea and how much was the idea of some paid FBI informants:
The indictment offers some juicy tidbits -- Ahmed allegedly proposed using rolling suitcases instead of backpacks to bomb the Metro -- but it is notably thin in details about the role of the FBI. It is not clear, for example, whether Ahmed or the FBI (or some combination of the two) came up with the concept of bombing the Metro in the first place. And the indictment does not say when and why Ahmed first encountered the people he believed to be members of al-Qaida.
This is the problem with thoughtcrime. Entrapment is much too easy.
EDITED TO ADD (11/4): Much the same thing was written in The Economist blog.
Those with either an engineering or management background are aware that one cannot optimize everything at once -- that requirements are balanced by constraints. I am not aware of another domain where this is as true as it is in cybersecurity and the question of a policy response to cyber insecurity at the national level. In engineering, this is said as "Fast, Cheap, Reliable: Choose Two." In the public policy arena, we must first remember the definition of a free country: a place where that which is not forbidden is permitted. As we consider the pursuit of cybersecurity, we will return to that idea time and time again; I believe that we are now faced with "Freedom, Security, Convenience: Choose Two."
I had never heard the term "control fraud" before:
Control fraud theory was developed in the savings and loan debacle. It explained that the person controlling the S&L (typically the CEO) posed a unique risk because he could use it as a weapon.
This is an interesting paper about control fraud. It's by William K. Black, the Executive Director of the Institute for Fraud Prevention. "Individual 'control frauds' cause greater losses than all other forms of property crime combined. They are financial super-predators." Black is talking about control fraud by both heads of corporations and heads of state, so that's almost certainly a true statement. His main point, though, is that our legal systems don't do enough to discourage control fraud.
White-collar criminology has a set of empirical findings and theories that are useful to understanding when markets will act perversely. This paper addresses three, interrelated theories economists should know about. "Control fraud" theory explains why the most damaging forms of fraud are situations in which those that control the company or the nation use it as a fraud vehicle. The CEO, or the head of state, poses the greatest fraud risk. A single large control fraud can cause greater financial losses than all other forms of property crime combined -- they are the "super-predators" of the financial world. Control frauds can also occur in waves that can cause systemic economic injury and discredit other institutions essential to good government and society. Control frauds are commonly able to defeat for several years market mechanisms that neo-classical economists predict will prevent such frauds.
EDITED TO ADD (11/11): Related paper on the effects of executive compensation on the abuse of controls.
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.