July 15, 2008
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0807.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- CCTV Cameras
- Kill Switches and Remote Control
- LifeLock and Identity Theft
- Schneier/BT News
- The First Interdisciplinary Workshop on Security and Human Behavior
- The Truth About Chinese Hackers
- Man-in-the-Middle Attacks
- Comments from Readers
Pervasive security cameras don’t substantially reduce crime. There are exceptions, of course, and that’s what gets the press. Most famously, CCTV cameras helped catch James Bulger’s murderers in 1993. And earlier this year, they helped convict Steve Wright of murdering five women in the Ipswich area. But these are the well-publicized exceptions. Overall, CCTV cameras aren’t very effective.
This fact has been demonstrated again and again: by a comprehensive study for the Home Office in 2005, by several studies in the US, and again with new data announced last month by New Scotland Yard. They actually solve very few crimes, and their deterrent effect is minimal.
Conventional wisdom predicts the opposite. But if that were true, then camera-happy London, with something like 500,000 cameras, would be the safest city on the planet. It isn’t, of course, because of the technological limitations of cameras, the organizational limitations of police, and the adaptive abilities of criminals.
To some, it’s comforting to imagine vigilant police monitoring every camera, but the truth is very different. Most CCTV footage is never looked at until well after a crime is committed. When it is examined, it’s very common for the viewers not to identify suspects. Lighting is bad and images are grainy, and criminals tend not to stare helpfully at the lens. Cameras break far too often. The best camera systems can still be thwarted by sunglasses or hats. Even when they afford quick identification—think of the 2005 London transport bombers and the 9/11 terrorists—police are often able to identify suspects without the cameras. Cameras afford a false sense of security, encouraging laziness when we need police to be vigilant.
The solution isn’t for police to watch the cameras. Unlike an officer walking the street, cameras only look in particular directions at particular locations. Criminals know this, and can easily adapt by moving their crimes to someplace not watched by a camera—and there will always be such places. Additionally, while a police officer on the street can respond to a crime in progress, the same officer in front of a CCTV screen can only dispatch another officer to arrive much later. By their very nature, cameras result in underused and misallocated police resources.
Cameras aren’t completely ineffective, of course. In certain circumstances, they’re effective in reducing crime in enclosed areas with minimal foot traffic. Combined with adequate lighting, they substantially reduce both personal attacks and auto-related crime in car parks. And from some perspectives, simply moving crime around is good enough. If a local Tesco installs cameras in its store, and a robber targets the store next door as a result, that’s money well spent by Tesco. But it doesn’t reduce the overall crime rate, so it’s a waste of money for the community as a whole.
But the question really isn’t whether cameras reduce crime; the question is whether they’re worth it. And given their cost (500 million pounds in the past 10 years), their limited effectiveness, the potential for abuse (spying on naked women in their own homes, sharing nude images, selling best-of videos, and even spying on national politicians) and their Orwellian effects on privacy and civil liberties, most of the time they’re not. The funds spent on CCTV cameras would be far better spent on hiring experienced police officers.
We live in a unique time in our society: the cameras are everywhere, and we can still see them. Ten years ago, cameras were much rarer than they are today. And in 10 years, they’ll be so small you won’t even notice them. Already, companies like L-1 Security Solutions are developing police-state CCTV surveillance technologies, such as facial recognition, for China; that technology will find its way into countries like the UK. The time to address appropriate limits on this technology is before the cameras fade from notice.
Surveillance in China:
More good survey articles:
This essay was previously published in The Guardian.
I’ve never figured out the fuss over ransomware. Yes, it encrypts your data and charges you money for the key. But how is this any worse than the old hacker viruses that put a funny message on your screen and erased your hard drive? The single most important thing any company or individual can do to improve security is have a good backup strategy. It’s been true for decades, and it’s still true today.
Magnetic ring attack on electronic locks: impressive.
A great “security through obscurity” story, about a collection of coins and currency worth hundreds of millions of dollars being moved without a whole lot of security:
It’s possible to eavesdrop on encrypted compressed voice, at least a little bit, through traffic analysis:
A Jura F90 Coffee Machine can be hacked remotely over the Internet.
A runner-up in last year’s Underhanded C Contest was a flawed implementation of RC4 that, after some use, just passed plaintext through unencrypted. Plausibly deniable, and very clever.
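The contest entry itself is in C and isn’t reproduced here; by way of illustration only, here is a hypothetical Python sketch of the same idea (not the actual entry): a cipher that is genuine RC4 at first, but whose keystream quietly degenerates to zeros after 256 bytes, at which point the XOR passes plaintext through unchanged.

```python
def rc4_keystream(key, flawed=False):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = n = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        n += 1
        if flawed and n > 256:
            # Hypothetical bug: after "some use," the keystream
            # degenerates to zeros, so the XOR below becomes a no-op
            # and plaintext passes through unencrypted.
            yield 0
        else:
            yield S[(S[i] + S[j]) % 256]

def encrypt(key, data, flawed=False):
    # XOR the data against the keystream (RC4 is its own inverse).
    ks = rc4_keystream(key, flawed)
    return bytes(b ^ next(ks) for b in data)
```

Against the unflawed version, decrypting is just encrypting again with the same key. Against the flawed one, everything past the first 256 bytes goes out in the clear, which a casual test on short messages would never notice; that is the plausible deniability.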
Dilbert on workplace surveillance:
New technology to detect chemical, biological, and explosive agents.
Swimming pools around Shanghai are examining liquids by smelling them. This liquid ban has gotten weirder.
A new study claims that insiders aren’t the main threat to network security. The whole insiders vs. outsiders debate has always been one of semantics more than anything else. If you count by attacks, there are a lot more outsider attacks, simply because there are orders of magnitude more outsider attackers. If you count incidents, the numbers tend to get closer: 75% vs. 18% in this case. And if you count damages, insiders generally come out on top—mostly because they have a lot more detailed information and can target their attacks better. Both insiders and outsiders are security risks, and you have to defend against them both. Trying to rank them isn’t all that useful.
Confused security reasoning by Toronto Mayor David Miller: “‘In a day when you can’t bring a large tube of toothpaste on a plane how can you allow guns to wander through Union Station, the biggest transit hub in Canada?’ he asked his colleagues on city council.” By that logic, I think we can ban anything from anywhere.
UK teens are using Google Earth to find swimming pools they can crash. How long before someone finds a more serious crime that can be aided by Google Earth?
I’ve seen the IR screening guns at several airports, primarily in Asia. The idea is to keep out people with bird flu, or whatever the current fever scare is. This essay explains why it won’t work:
Carrier pigeons bringing contraband into prisons in Brazil:
I think this is the first security vulnerability found in RFC 1149: “Standard for the transmission of IP datagrams on avian carriers.” Deep packet inspection seems to be the only way to prevent this attack, although adequate fencing will prevent the protocol from running in the first place.
Top ten anti-terrorism patents—not a joke. My favorite is the airplane trap door.
The Pentagon is consulting social scientists on security. The article talks a lot about potential conflicts of interest and such, and less on what sorts of insights the social scientists can offer. I think there is a lot of potential value here.
One writer of the Nugache worm, possibly the only one, was arrested in Wyoming. The 19-year-old will plead guilty.
It’s been a while since I’ve written about electronic voting machines, but Dan Wallach has an excellent blog post about the current line of argument from the voting machine companies and why it’s wrong.
This paper measures insecurity in the global population of browsers, using Google’s web server logs. Why is this important? Because browsers are an increasingly popular attack vector. The results aren’t good.
Random stupidity in the name of terrorism, part one: An air traveler in Canada is first told by an airline employee that it is “illegal” to say certain words, and then that if she raised a fuss she would be falsely accused.
Random stupidity in the name of terrorism, part two: A British man is forced to give up his hobby of photographing buses because he’s being harassed too often.
Random stupidity in the name of terrorism, part three: Israelis label a random homicidal Palestinian nut a terrorist:
Random stupidity in the name of terrorism, part four: New Jersey public school locked down after someone saw a ninja. Turns out the ninja was actually a camp counselor dressed in black karate garb and carrying a plastic sword.
A fine newspaper headline: “Giraffe helps camels, zebras escape from circus.”
The U.K. is learning that encrypting disks means that you don’t have to worry if they’re lost.
Time bomb neckties. Not to be worn at airports.
Automatic profiling is useless:
The U.S. wants to do it anyway: “The Justice Department is considering letting the FBI investigate Americans without any evidence of wrongdoing, relying instead on a terrorist profile that could single out Muslims, Arabs or other racial or ethnic groups.”
I’ve written about profiling before:
In a continued cheapening of the word “terrorism,” the Premier of New South Wales called a potential rail-worker strike “industrial terror tactics.” Terrorism is a heinous crime, and a serious international problem. It’s not a catchall word to describe anything you don’t like or don’t agree with, or even anything that adversely affects a large number of people. By using the word more broadly than its actual meaning, we muddy the already complicated popular conceptions of the issue. The word “terrorism” has a specific meaning, and we shouldn’t debase it.
George Carlin on airport security, filmed before 9/11.
Petty thieves are exploiting the “war on photography” to steal memory cards:
Great essay on TSA stupidity:
Security cartoon on password guessing:
Daniel Solove on the new FISA law:
Using a file erasure tool is considered suspicious:
Unbreakable fighting umbrellas.
Be sure to watch the video.
It used to be that just the entertainment industries wanted to control your computers—and televisions and iPods and everything else—to ensure that you didn’t violate any copyright rules. But now everyone else wants to get their hooks into your gear.
OnStar will soon include the ability for the police to shut off your engine remotely. Buses are getting the same capability, in case terrorists want to re-enact the movie Speed. The Pentagon wants a kill switch installed on airplanes, and is worried about potential enemies installing kill switches on their own equipment.
Microsoft is doing some of the most creative thinking along these lines, with something it’s calling “Digital Manners Policies.” According to its patent application, DMP-enabled devices would accept broadcast “orders” limiting their capabilities. Cell phones could be remotely set to vibrate mode in restaurants and concert halls, and be turned off on airplanes and in hospitals. Cameras could be prohibited from taking pictures in locker rooms and museums, and recording equipment could be disabled in theaters. Professors could finally prevent students from texting one another during class.
The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That’s a difficult security problem even in its simplest form. Distributing that system among a variety of different devices—computers, phones, PDAs, cameras, recorders—with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.
Once we go down this path—giving one device authority over other devices—the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?
How do we prevent this from being abused? Can a burglar, for example, enforce a “no photography” rule and prevent security cameras from working? Can the police enforce the same rule to avoid another Rodney King incident? Do the police get “superuser” devices that cannot be limited, and do they get “supercontroller” devices that can limit anything? How do we ensure that only they get them, and what do we do when the devices inevitably fall into the wrong hands?
It’s comparatively easy to make this work in closed specialized systems—OnStar, airplane avionics, military hardware—but much more difficult in open-ended systems. If you think Microsoft’s vision could possibly be securely designed, all you have to do is look at the dismal effectiveness of the various copy-protection and digital-rights-management systems we’ve seen over the years. That’s a similar capabilities-enforcement mechanism, albeit simpler than these more general systems.
And that’s the key to understanding this system. Don’t be fooled by the scare stories of wireless devices on airplanes and in hospitals, or visions of a world where no one is yammering loudly on their cell phones in posh restaurants. This is really about media companies wanting to exert their control further over your electronics. They not only want to prevent you from surreptitiously recording movies and concerts, they want your new television to enforce good “manners” on your computer, and not allow it to record any programs. They want your iPod to politely refuse to copy music to a computer other than your own. They want to enforce *their* legislated definition of manners: to control what you do and when you do it, and to charge you repeatedly for the privilege whenever possible.
“Digital Manners Policies” is a marketing term. Let’s call this what it really is: Selective Device Jamming. It’s not polite; it’s dangerous. It won’t make anyone more secure—or more polite.
Digital Manners Policies:
This essay originally appeared in Wired.com.
LifeLock, one of the companies that offers identity-theft protection in the United States, has been taking quite a beating recently. They’re being sued by credit bureaus, competitors and lawyers in several states that are launching class action lawsuits. And the stories in the media … it’s like a piranha feeding frenzy.
There are also a lot of errors and misconceptions. With its aggressive advertising campaign and a CEO who publishes his Social Security number and dares people to steal his identity—Todd Davis, 457-55-5462—LifeLock is a company that’s easy to hate. But the company’s story has some interesting security lessons, and it’s worth understanding in some detail.
In December 2003, as part of the Fair and Accurate Credit Transactions Act, or FACTA, credit bureaus were forced to allow you to put a fraud alert on your credit reports, requiring lenders to verify your identity before issuing a credit card in your name. This alert is temporary, and expires after 90 days. Several companies have sprung up—LifeLock, Debix, LoudSiren, TrustedID—that automatically renew these alerts and effectively make them permanent.
This service pisses off the credit bureaus and their financial customers. The reason lenders don’t routinely verify your identity before issuing you credit is that it takes time, costs money and is one more hurdle between you and another credit card. (Buy, buy, buy—it’s the American way.) So in the eyes of credit bureaus, LifeLock’s customers are inferior goods; selling their data isn’t as valuable. LifeLock also opts its customers out of pre-approved credit card offers, further making them less valuable in the eyes of credit bureaus.
And so began a smear campaign on the part of the credit bureaus. You can read their points of view in a New York Times article, written by a reporter who didn’t do much more than regurgitate their talking points. And the class action lawsuits have piled on, accusing LifeLock of deceptive business practices, fraudulent advertising and so on. The biggest smear is that LifeLock didn’t even protect Todd Davis, and that his identity was allegedly stolen.
It wasn’t. Someone in Texas used Davis’s SSN to get a $500 advance against his paycheck. It worked because the loan operation didn’t check with any of the credit bureaus before approving the loan—perfectly reasonable for an amount this small. The payday-loan operation called Davis to collect, and LifeLock cleared up the problem. His credit report remains spotless.
The Experian credit bureau’s lawsuit basically claims that fraud alerts are only for people who have been victims of identity theft. This seems spurious; the text of the law states that anyone “who asserts a good faith suspicion that the consumer has been or is about to become a victim of fraud or related crime” can request a fraud alert. It seems to me that includes anybody who has ever received one of those notices about their financial details being lost or stolen, which is everybody.
As to deceptive business practices and fraudulent advertising—those just seem like class action lawyers piling on. LifeLock’s aggressive fear-based marketing doesn’t seem any worse than a lot of other similar advertising campaigns. My guess is that the class action lawsuits won’t go anywhere.
In reality, forcing lenders to verify identity before issuing credit is exactly the sort of thing we need to do to fight identity theft. Basically, there are two ways to deal with identity theft: Make personal information harder to steal, and make stolen personal information harder to use. We all know the former doesn’t work, so that leaves the latter. If Congress wanted to solve the problem for real, one of the things it would do is make fraud alerts permanent for everybody. But the credit industry’s lobbyists would never allow that.
LifeLock does a bunch of other clever things. They monitor the national address database, and alert you if your address changes. They look for your credit and debit card numbers on hacker and criminal websites and such, and assist you in getting a new number if they see it. They have a million-dollar service guarantee—for complicated legal reasons, they can’t call it insurance—to help you recover if your identity is ever stolen.
But even with all of this, I am not a LifeLock customer. At $120 a year, it’s just not worth it. You wouldn’t know it from the press attention, but dealing with identity theft has become easier and more routine. Sure, it’s a pervasive problem. The Federal Trade Commission reported that 8.3 million Americans were identity-theft victims in 2005. But that includes things like someone stealing your credit card and using it, something that rarely costs you any money and that LifeLock doesn’t protect against. New account fraud is much less common, affecting 1.8 million Americans per year, or 0.8 percent of the adult population. The FTC hasn’t published detailed numbers for 2006 or 2007, but the rate seems to be declining.
New account fraud is also not very damaging. The median amount of fraud the thief commits is $1,350, but you’re not liable for that. Some spectacularly horrible identity-theft stories notwithstanding, the financial industry is pretty good at quickly cleaning up the mess. The victim’s median out-of-pocket cost for new account fraud is only $40, plus ten hours of grief to clean up the problem. Even assuming your time is worth $100 an hour, LifeLock isn’t worth more than $8 a year.
And it’s hard to get any data on how effective LifeLock really is. They’ve been in business three years and have about a million customers, but most of them have joined up in the last year. They’ve paid out on their service guarantee 113 times, but a lot of those were for things that happened before their customers became customers. (It was easier to pay than argue, I assume.) But they don’t know how often the fraud alerts actually catch an identity thief in the act. My guess is that it’s less than the 0.8 percent fraud rate above.
LifeLock’s business model is based more on the fear of identity theft than the actual risk.
It’s pretty ironic that the credit bureaus attack LifeLock over its marketing practices, since they know all about profiting from the fear of identity theft. FACTA also forced the credit bureaus to give Americans a free credit report once a year upon request. Through deceptive marketing techniques, they’ve turned this requirement into a multimillion-dollar business.
Get LifeLock if you want, or one of its competitors if you prefer. But remember that you can do most of what these companies do yourself. You can put a fraud alert on your own account, but you have to remember to renew it every three months. You can also put a credit freeze on your account, which is more work for the average consumer but more effective if you’re a privacy wonk—and the rules differ by state. And maybe someday Congress will do the right thing and put LifeLock out of business by forcing lenders to verify identity every time they issue credit in someone’s name.
New York Times article:
This essay originally appeared in Wired:
Schneier interview in The Edge:
Video of a panel Schneier was on at Supernova; the topic was security and privacy.
The First Interdisciplinary Workshop on Security and Human Behavior (SHB 08) was held at MIT earlier this month. From the website:
“Security is both a feeling and a reality, and they’re different. There are several different research communities: technologists who study security systems, and psychologists who study people, not to mention economists, anthropologists and others. Increasingly these worlds are colliding.
“* Security design is by nature psychological, yet many systems ignore this, and cognitive biases lead people to misjudge risk. For example, a key in the corner of a web browser makes people feel more secure than they actually are, while people feel far less secure flying than they actually are. These biases are exploited by various attackers.
“* Security problems relate to risk and uncertainty, and the way we react to them. Cognitive and perception biases affect the way we deal with risk, and therefore the way we understand security—whether that is the security of a nation, of an information system, or of one’s personal information.
“* Many real attacks on information systems exploit psychology more than technology. Phishing attacks trick people into logging on to websites that appear genuine but actually steal passwords. Technical measures can stop some phishing tactics, but stopping users from making bad decisions is much harder. Deception-based attacks are now the greatest threat to online security.
“* In order to be effective, security must be usable—not just by geeks, but by ordinary people. Research into usable security invariably has a psychological component.
“* Terrorism is perceived to be a major threat to society. Yet the actual damage done by terrorist attacks is dwarfed by the secondary effects as target societies overreact. There are many topics here, from the manipulation of risk perception to the anthropology of religion.
“* There are basic research questions; for example, about the extent to which the use and detection of deception in social contexts may have helped drive human evolution.
“The dialogue between researchers in security and in psychology is rapidly widening, bringing in more and more disciplines—from security usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other.”
About a year ago, Ross Anderson and I conceived this conference as a way to bring together computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security. I’ve read a lot—and written some—on psychology and security over the past few years, and have been continually amazed by some of the research that people outside my field have been doing on topics very relevant to my field. Ross and I both thought that bringing these diverse communities together would be fascinating to everyone. So we convinced behavioral economists Alessandro Acquisti and George Loewenstein to help us organize the workshop, invited the people we all have been reading, and also asked them who else to invite. The response was overwhelming. Almost everyone we wanted was able to attend, and the result was a 42-person conference with 35 speakers, including Nicholas Humphrey, Frank Furedi, and James Randi.
Invitees and their work:
Audio from the workshop:
The popular media conception is that there is a coordinated attempt by the Chinese government to hack into U.S. computers—military, government, corporate—and steal secrets. The truth is a lot more complicated.
There certainly is a lot of hacking coming out of China. Any company that does security monitoring sees it all the time.
These hacker groups seem not to be working for the Chinese government. They don’t seem to be coordinated by the Chinese military. They’re basically young, male, patriotic Chinese citizens, trying to demonstrate that they’re just as good as everyone else. Besides the American networks the media likes to talk about, their targets include pro-Tibet, pro-Taiwan, Falun Gong, and pro-Uyghur sites.
The hackers are in this for two reasons: fame and glory, and an attempt to make a living. The fame and glory comes from their nationalistic goals. Some of these hackers are heroes in China. They’re upholding the country’s honor against both anti-Chinese forces like the pro-Tibet movement and larger forces like the United States.
And the money comes from several sources. The groups sell owned computers, malware services, and data they steal on the black market. They sell hacker tools and videos to others wanting to play. They even sell T-shirts, hats and other merchandise on their Web sites.
This is not to say that the Chinese military ignores the hacker groups within their country. Certainly the Chinese government knows the leaders of the hacker movement and chooses to look the other way. They probably buy stolen intelligence from these hackers. They probably recruit for their own organizations from this self-selecting pool of experienced hacking experts. They certainly learn from the hackers.
And some of the hackers are good. Over the years, they have become more sophisticated in both tools and techniques. They’re stealthy. They do good network reconnaissance. My guess is that what the Pentagon sees as the problem is only a small percentage of the actual problem.
And they discover their own vulnerabilities. Earlier this year, one security company noticed a unique attack against a pro-Tibet organization. That same attack was also used two weeks earlier against a large multinational defense contractor.
They also hoard vulnerabilities. During the 1999 conflict over the “two-states theory,” in a heated exchange with a group of Taiwanese hackers, one Chinese group threatened to unleash multiple stockpiled worms at once. There was no reason to disbelieve this threat.
If anything, the fact that these groups aren’t being run by the Chinese government makes the problem worse. Without central political coordination, they’re likely to take more risks, do more stupid things and generally ignore the political fallout of their actions.
In this regard, they’re more like a non-state actor.
So while I’m perfectly happy that the U.S. government is using the threat of Chinese hacking as an impetus to get their own cybersecurity in order, and I hope they succeed, I also hope that the U.S. government recognizes that these groups are not acting under the direction of the Chinese military and doesn’t treat their actions as officially approved by the Chinese government.
This essay originally appeared on the Discovery Channel website:
Last week’s dramatic rescue of 15 hostages held by the guerrilla organization FARC was the result of months of intricate deception on the part of the Colombian government. At the center was a classic man-in-the-middle attack.
In a man-in-the-middle attack, the attacker inserts himself between two communicating parties. Both believe they’re talking to each other, and the attacker can delete or modify the communications at will. The Wall Street Journal reported how this gambit played out in Colombia:
“The plan had a chance of working because, for months, in an operation one army officer likened to a “broken telephone,” military intelligence had been able to convince Ms. Betancourt’s captor, Gerardo Aguilar, a guerrilla known as “Cesar,” that he was communicating with his top bosses in the guerrillas’ seven-man secretariat. Army intelligence convinced top guerrilla leaders that they were talking to Cesar. In reality, both were talking to army intelligence.”
This ploy worked because Cesar and his guerrilla bosses didn’t know one another well. They didn’t recognize one another’s voices, and didn’t have a friendship or shared history that could have tipped them off about the ruse. Man-in-the-middle is defeated by context, and the FARC guerrillas didn’t have any.
And that’s why man-in-the-middle, abbreviated MITM in the computer-security community, is such a problem online: Internet communication is often stripped of any context. There’s no way to recognize someone’s face. There’s no way to recognize someone’s voice. When you receive an e-mail purporting to come from a person or organization, you have no idea who actually sent it. When you visit a website, you have no idea if you’re really visiting that website. We all like to pretend that we know who we’re communicating with—and for the most part, of course, there isn’t any attacker inserting himself into our communications—but in reality, we don’t. And there are lots of hacker tools that exploit this unjustified trust, and implement MITM attacks.
Even with context, it’s still possible for MITM to fool both sides—because electronic communications are often intermittent. Imagine that one of the FARC guerrillas became suspicious about who he was talking to. So he asks a question about their shared history as a test: “What did we have for dinner that time last year?” or something like that. On the telephone, the attacker wouldn’t be able to answer quickly, so his ruse would be discovered. But e-mail conversation isn’t synchronous. The attacker could simply pass that question through to the other end of the communications, and when he got the answer back, he would be able to reply.
This is the way MITM attacks work against web-based financial systems. A bank demands authentication from the user: a password, a one-time code from a token or whatever. The attacker sitting in the middle receives the request from the bank and passes it to the user. The user responds to the attacker, who passes that response to the bank. Now the bank assumes it is talking to the legitimate user, and the attacker is free to send transactions directly to the bank. This kind of attack completely bypasses any two-factor authentication mechanisms, and is becoming a more popular identity-theft tactic.
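The structure of that attack fits in a few lines. This toy model (all names and messages here are hypothetical; the one-time code is modeled as an HMAC over a bank-issued challenge) shows why the token doesn’t help: the attacker never needs to break the code, only relay it.

```python
import hashlib
import hmac

SHARED_SECRET = b"user-token-seed"  # provisioned in the user's token

def bank_challenge():
    # The bank sends a fresh challenge to whoever connects.
    return b"challenge-42"

def user_token_response(challenge):
    # The user's token computes a one-time code from the challenge.
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()

def bank_verify(challenge, response):
    # The bank checks the one-time code against its own computation.
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# The attacker sits in the middle and forwards both messages verbatim.
challenge = bank_challenge()                        # bank -> attacker
relayed_challenge = challenge                       # attacker -> user
response = user_token_response(relayed_challenge)   # user -> attacker
relayed_response = response                         # attacker -> bank

session_authenticated = bank_verify(challenge, relayed_response)
# The bank now believes it is talking to the legitimate user, and the
# attacker owns the authenticated session.
```

Nothing in the exchange ties the one-time code to the channel it travels over, so the bank cannot tell a relayed response from a direct one; the attacker is now free to inject its own transactions into the session.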
There are cryptographic solutions to MITM attacks, and there are secure web protocols that implement them. Many of them require shared secrets, though, making them useful only in situations where people already know and trust one another.
The NSA-designed STU-III and STE secure telephones solve the MITM problem by embedding the identity of each phone together with its key. (The NSA creates all keys and is trusted by everyone, so this works.) When two phones talk to each other securely, they exchange keys and display the other phone’s identity on a screen. Because the phone is in a secure location, the user now knows who he is talking to, and if the phone displays another organization—as it would if there were a MITM attack in progress—he should hang up.
Zfone, a secure VoIP system, protects against MITM attacks with a short authentication string. After two Zfone terminals exchange keys, both computers display a four-character string. The users are supposed to manually verify that both strings are the same—”my screen says 5C19; what does yours say?”—to ensure that the phones are communicating directly with each other and not with an MITM. The AT&T TSD-3600 worked similarly.
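The short-string idea is easy to sketch. In this hypothetical Python fragment (not Zfone’s actual ZRTP derivation), each side hashes the session’s negotiated key material down to four hex characters. A MITM necessarily completes a separate key exchange with each victim, so the two ends hold different key material, and the strings the users read to each other will disagree except with probability about 1 in 65,000.

```python
import hashlib

def short_auth_string(key_material):
    # Hash the session's key material down to four hex characters.
    # (A sketch of the idea, not the real ZRTP SAS derivation.)
    return hashlib.sha256(key_material).hexdigest()[:4].upper()

# Direct connection: both ends derive the same key material,
# so the displayed strings match.
direct = b"key material from one honest exchange"
assert short_auth_string(direct) == short_auth_string(direct)

# MITM: the attacker runs a *separate* exchange with each victim,
# so the two ends hold different key material ...
key_with_alice = b"attacker's session with Alice"
key_with_bob = b"attacker's session with Bob"
sas_alice = short_auth_string(key_with_alice)
sas_bob = short_auth_string(key_with_bob)
# ... and "my screen says 5C19; what does yours say?" exposes the attack.
```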
This sort of protection is embedded in SSL, although no one uses it. As it is normally used, SSL provides an encrypted communications link to whoever is at the other end: bank and phishing site alike. And the better phishing sites create valid SSL connections, so as to more effectively fool users. But if the user wanted to, he could manually check the SSL certificate to see if it was issued to “National Bank of Trustworthiness” or “Two Guys With a Computer in Nigeria.”
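The manual check is mechanically simple; here is a Python sketch (the hostname and the sample data are placeholders):

```python
import socket
import ssl

def subject_to_dict(subject):
    # Flatten the nested tuples getpeercert() uses for the subject,
    # e.g. ((('commonName', 'www.example.com'),), ...) -> dict.
    return {name: value for rdn in subject for (name, value) in rdn}

def certificate_subject(host, port=443):
    # Connect with TLS and return who the certificate was issued to.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return subject_to_dict(tls.getpeercert()["subject"])

# e.g. certificate_subject("www.example.com").get("commonName")
# tells you whether the certificate names "National Bank of
# Trustworthiness" or "Two Guys With a Computer in Nigeria".
```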
No one does, though, because you have to both remember and be willing to do the work. (The browsers could make this easier if they wanted to, but they don’t seem to want to.) In the real world, you can easily tell a branch of your bank from a money changer on a street corner. But on the internet, a phishing site can be easily made to look like your bank’s legitimate website. Any method of telling the two apart takes work. And that’s the first step to fooling you with a MITM attack.
Man-in-the-middle isn’t new, and it doesn’t have to be technological. But the internet makes the attacks easier and more powerful, and that’s not going to change anytime soon.
Wall Street Journal article:
MITM hacker tools:
Problems with two-factor authentication:
NSA secure phones:
AT&T TSD 3600:
Checking SSL certificates:
The essay originally appeared on Wired.com.
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2008 by Bruce Schneier.