December 15, 2006
by Bruce Schneier
Founder and CTO, BT Counterpane
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0612.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- Revoting
- Real-World Passwords
- Crypto-Gram Reprints
- Tracking People by their Sneakers
- Notary Fraud
- News
- Separating Data Ownership and Device Ownership
- BT Counterpane News
- Fighting Fraudulent Transactions
- Cybercrime Hype Alert
- Comments from Readers
Revoting

In the world of voting, automatic recount laws are not uncommon. Virginia, where George Allen lost to James Webb in the Senate race by 7,800 out of over 2.3 million votes, or 0.33%, is an example. If the margin of victory is 1% or less, the loser is allowed to ask for a recount. If the margin is 0.5% or less, the government pays for it. If the margin is between 0.5% and 1%, the loser pays for it.
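The Virginia rule reduces to a couple of threshold comparisons. As a sketch (the function name and the simplified margin calculation are mine, not part of any statute):

```python
def recount_rule(margin_votes, total_votes):
    """Hypothetical encoding of the Virginia-style recount rule described
    above: margin of victory as a percentage of all votes cast."""
    margin_pct = 100.0 * margin_votes / total_votes
    if margin_pct <= 0.5:
        return "loser may request a recount; the government pays"
    if margin_pct <= 1.0:
        return "loser may request a recount; the loser pays"
    return "no recount provision"

# The Allen-Webb race: 7,800 votes out of over 2.3 million, about 0.33%.
print(recount_rule(7_800, 2_370_000))   # government pays
```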
We have recounts because vote counting is—to put it mildly—sloppy. Americans like their election results fast, before they go to bed at night. So we’re willing to put up with inaccuracies in our tallying procedures, and ignore the fact that the numbers we see on television correlate only roughly with reality.
Traditionally, it didn’t matter very much, because most voting errors were “random errors.”
There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random—equally likely to happen to anyone. In a close race, random errors won’t change the result because votes intended for candidate A that mistakenly go to candidate B happen at the same rate as votes intended for B that mistakenly go to A. (Mathematically, as candidate A’s margin of victory increases, random errors slightly decrease it.)
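That cancellation effect is easy to check with a quick simulation. This is only a sketch with invented numbers, not real election data: each recorded vote is independently flipped to the other candidate with some small probability, equally in both directions.

```python
import random

def simulate(margin, total, error_rate, trials=200, seed=1):
    """Monte Carlo sketch of symmetric random voting errors: every vote is
    independently mis-recorded for the other candidate with probability
    error_rate. Returns the fraction of trials the true leader still wins."""
    random.seed(seed)
    a_true = (total + margin) // 2          # candidate A's true votes
    b_true = total - a_true                 # candidate B's true votes
    a_wins = 0
    for _ in range(trials):
        # Errors flip A-votes to B and B-votes to A at the same rate.
        a_to_b = sum(random.random() < error_rate for _ in range(a_true))
        b_to_a = sum(random.random() < error_rate for _ in range(b_true))
        if (a_true - a_to_b + b_to_a) > (b_true - b_to_a + a_to_b):
            a_wins += 1
    return a_wins / trials

# A 2% lead survives symmetric 1% errors essentially always;
# a 0.1% lead sometimes does not.
print(simulate(margin=200, total=10_000, error_rate=0.01))
print(simulate(margin=10, total=10_000, error_rate=0.01))
```

Shrink the margin toward zero and symmetric errors start flipping outcomes, which is exactly why recounts matter only in very close races.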
This is why, historically, recounts in close elections rarely change the result. The recount will find the few percent of errors in each direction, and they’ll cancel each other out. In an extremely close election, a careful recount can yield a different result—but that’s a rarity.
The other kind of voting error is a systemic error. These are errors in the voting process—the voting machines, the procedures—that cause votes intended for A to go to B at a different rate than the reverse.
An example would be a voting machine that mysteriously recorded more votes for A than there were voters. (Sadly, this kind of thing is not uncommon with electronic voting machines.) Another example would be a random error that only occurs in voting equipment used in areas with strong A support. Systemic errors can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A.
Even worse, systemic errors can introduce shifts out of proportion to any actual randomness in the vote-counting process. That is, the closeness of an election is no indication of the presence or absence of systemic errors.
When a candidate has evidence of systemic errors, a recount can fix a wrong result—but only if the recount can catch the error. With electronic voting machines, all too often there simply isn’t the data: there are no votes to recount.
This year’s election in Florida’s 13th Congressional District is such an example. The winner won by a margin of 373 out of 237,861 total votes, but as many as 18,000 votes were not recorded by the electronic voting machines. These votes came from areas where the loser was favored over the winner, and would likely have changed the result.
Or imagine this—as far as we know—hypothetical situation: After the election, someone discovers rogue software in the voting machines that flipped some votes from A to B. Or someone gets caught vote tampering—changing the data on electronic memory cards. The problem is that the original data is lost forever; all we have is the hacked vote.
Faced with problems like this, we can do one of two things. We can certify the result anyway, regretful that people were disenfranchised but knowing that we can’t undo that wrong. Or, we can tell everyone to come back and vote again.
To be sure, the very idea of revoting is rife with problems. Elections are a snapshot in time—election day—and a revote will not reflect that. If Virginia revoted for the Senate this year, the election would not just be for the junior senator from Virginia, but for control of the entire Senate. Similarly, in the 2000 presidential election in Florida, or the 2004 presidential election in Ohio, single-state revotes would have decided the presidency.
And who should be allowed to revote? Should only people in those precincts where there were problems revote, or should the entire election be rerun? In either case, it is certain that more voters will find their way to the polls, possibly changing the demographic and swaying the result in a direction different from that of the initial set of voters. Is that a bad thing, or a good thing?
Should only people who actually voted—records are kept—or who could demonstrate that they were erroneously turned away from the polls be allowed to revote? In this case, the revote will almost certainly have fewer voters, as some of the original voters will be unable to vote a second time. That’s probably a bad thing—but maybe it’s not.
The only analogy we have for this is the run-off election, required in some jurisdictions if the winning candidate didn’t get 50% of the vote. But it’s easy to know when you need to have a run-off. Who decides, and based on what evidence, that you need to have a revote?
I admit that I don’t have the answers here. They require some serious thinking about elections, and what we’re trying to achieve. But smart election security not only tries to prevent vote hacking—or even systemic electronic voting-machine errors—it prepares for recovery after an election has been hacked. We have to start discussing these issues now, when they’re non-partisan, instead of waiting for the inevitable situation, and the pre-drawn battle lines those results dictate.
This essay originally appeared on Wired.com.
Real-World Passwords

How good are the passwords people are choosing to protect their computers and online accounts?
It’s a hard question to answer because data is scarce. But recently, a colleague sent me some spoils from a MySpace phishing attack: 34,000 actual user names and passwords.
The attack was pretty basic. The attackers created a fake MySpace login page, and collected login information when users thought they were accessing their own account on the site. The data was forwarded to various compromised web servers, where the attackers would harvest it later.
MySpace estimates that more than 100,000 people fell for the attack before it was shut down. The data I have is from two different collection points, and was cleaned of the small percentage of people who realized they were responding to a phishing attack. I analyzed the data, and this is what I learned.
Password Length: While 65% of passwords contain eight characters or less, 17% are made up of six characters or less. The average password is eight characters long.
Yes, there’s a 32-character password: “1ancheste23nite41ancheste23nite4.” Other long passwords are “fool2thinkfool2thinkol2think” and “dokitty17darling7g7darling7.”
Character Mix: While 81% of passwords are alphanumeric, 28% are just lowercase letters plus a single final digit—and two-thirds of those have the single digit 1. Only 3.8% of passwords are a single dictionary word, and another 12% are a single dictionary word plus a final digit—once again, two-thirds of the time that digit is 1.
- numbers only: 1.3%
- letters only: 9.6%
Only 0.34% of users have the username portion of their e-mail address as their password.
Common Passwords: The top 20 passwords are (in order): password1, abc123, myspace1, password, blink182, qwerty1, fuckyou, 123abc, baseball1, football1, 123456, soccer, monkey1, liverpool1, princess1, jordan23, slipknot1, superman1, iloveyou1 and monkey.
The most common password, “password1,” was used in 0.22% of all accounts. The frequency drops off pretty fast after that: “abc123” and “myspace1” were only used in 0.11% of all accounts, “soccer” in 0.04% and “monkey” in 0.02%.
For those who don’t know, Blink 182 is a band. Presumably lots of people use the band’s name because it has numbers in its name, and therefore it seems like a good password. The band Slipknot doesn’t have any numbers in its name, which explains the 1. The password “jordan23” refers to basketball player Michael Jordan and his number. And, of course, “myspace” and “myspace1” are easy-to-remember passwords for a MySpace account. I don’t know what the deal is with monkeys.
We used to quip that “password” is the most common password. Now it’s “password1.” Who said users haven’t learned anything about security?
But seriously, passwords are getting better. I’m impressed that less than 4% were dictionary words and that the great majority were at least alphanumeric. Writing in 1989, Daniel Klein was able to crack 24% of his sample passwords with a small dictionary of just 63,000 words, and found that the average password was 6.4 characters long.
And in 1992 Gene Spafford cracked 20% of passwords with his dictionary, and found an average password length of 6.8 characters. (Both studied Unix passwords, with a maximum length at the time of 8 characters.) And they both reported a much greater percentage of all lowercase, and only upper- and lowercase, passwords than emerged in the MySpace data. The concept of choosing good passwords is getting through, at least a little.
On the other hand, the MySpace demographic is pretty young. Another password study in November looked at 200 corporate employee passwords: 20% letters only, 78% alphanumeric, 2.1% with non-alphanumeric characters, and a 7.8-character average length. Better than 15 years ago, but not as good as MySpace users. Kids really are the future.
None of this changes the reality that passwords have outlived their usefulness as a serious security device. Over the years, password crackers have been getting faster and faster. Current commercial products can test tens—even hundreds—of millions of passwords per second. At the same time, there’s a maximum complexity to the passwords average people are willing to memorize. Those lines crossed years ago, and typical real-world passwords are now software-guessable. AccessData’s Password Recovery Toolkit—at 200,000 guesses per second—would have been able to crack 23% of the MySpace passwords in 30 minutes, 55% in 8 hours.
Of course, this analysis assumes that the attacker can get his hands on the encrypted password file and work on it offline, at his leisure; i.e., that the same password was used to encrypt an e-mail, file or hard drive. Passwords can still work if you can prevent offline password-guessing attacks, and watch for online guessing. They’re also fine in low-value security situations, or if you choose really complicated passwords and use something like Password Safe to store them. But otherwise, security by password alone is pretty risky.
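Those cracking figures are easy to sanity-check. A quick sketch (the password-space comparison is my own illustration, not part of the original analysis):

```python
# Back-of-the-envelope check on the numbers above: at 200,000 guesses per
# second, an offline attacker's total guess budget is:
rate = 200_000                       # guesses/sec (the AccessData figure cited above)
per_30_min = rate * 30 * 60          # 360,000,000 guesses
per_8_hours = rate * 8 * 3600        # 5,760,000,000 guesses

# For scale, some simple password spaces:
lower6 = 26 ** 6                     # all six-character lowercase passwords
alnum8 = 36 ** 8                     # all eight-character lowercase+digit passwords

print(f"{per_30_min:,} guesses in 30 minutes vs {lower6:,} six-letter passwords")
print(f"{per_8_hours:,} guesses in 8 hours vs {alnum8:,} eight-char alphanumerics")
```

Even the 30-minute budget covers every six-letter lowercase password (26^6 is about 309 million), while eight hours covers only a sliver of the eight-character alphanumeric space. That is why crackers succeed by ranking likely passwords rather than brute-forcing blindly.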
Another analysis of the same data:
Other password studies:
This essay originally appeared on Wired.com.
Crypto-Gram Reprints

Crypto-Gram is currently in its ninth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram-back.html>. These are a selection of articles that appeared in this calendar month in other years.
Sony’s DRM Rootkit:
Surveillance and Oversight:
Behavioral Assessment Profiling:
Kafka and the Digital Person:
Safe Personal Computing:
Blaster and the August 14th Blackout:
Computerized and Electronic Voting:
Comments on the Department of Homeland Security:
Crime: The Internet’s Next Big Thing:
National ID Cards:
Judges Punish Bad Security:
Computer Security and Liabilities:
Fun with Vulnerability Scanners:
Voting and Technology:
“Security Is Not a Product; It’s a Process”
European Digital Cellular Algorithms:
The Fallacy of Cracking Contests:
How to Recognize Plaintext:
Tracking People by their Sneakers
Researchers at the University of Washington have demonstrated a surveillance system that automatically tracks people through the Nike+iPod Sport Kit. Basically, the kit contains a transmitter that you stick in your sneakers and a receiver you attach to your iPod. This allows you to track things like time, distance, pace, and calories burned. Pretty clever.
However, it turns out that the transmitter in your sneaker can be read up to 60 feet away. And because it broadcasts a unique ID, you can be tracked by it. In the demonstration, the researchers built a surveillance device (at a cost of about $250) and interfaced their surveillance system with Google Maps. Very scary.
This is a great demonstration for anyone who is skeptical that RFID chips can be used to track people. It’s a good example because the chips have no personal identifying information, yet can still be used to track people. As long as the chips have unique IDs, those IDs can be used for surveillance.
To me, the real significance of this work is how easy it was. The people who designed the Nike+iPod system put zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they’re evil—just because it’s easier to ignore the externality than to worry about it.
Notary Fraud

Many countries have the concept of a “notary public.” Their training and authority varies from country to country; in the United States, their primary role is to witness the signature of legal documents. Many important legal documents require notarization in addition to a signature, primarily as a security device.
When I get a document notarized, I present my photo ID to a notary public. Generally, I go to my local bank, where many of the employees are notaries public and I don’t have to pay a fee for the service. I sign the document while the notary watches, and he then signs an attestation to the fact that he saw me sign it. He doesn’t read the document; that’s not his job. And then I send my notarized document to whoever needed it: another bank, the patent office, my mortgage company, whatever.
It’s an eminently hackable system. Sure, you can always present a fake ID—I’ll bet my bank employee has never seen a West Virginia driver’s license, for example—but that takes work. The easiest way to hack the system is through social engineering.
Bring a small pile of documents to be notarized. In the middle of the pile, slip in a document with someone else’s signature. Since the notary is busy with his own signing and stamping—and you’re engaging him in slightly distracting conversation—he’s probably not going to notice that he’s notarizing something “someone else” signed. If he does, apologize for your honest mistake and try again elsewhere.
Of course, you’re better off visiting a notary who charges by the document: he’ll be more likely to appreciate the stack of documents you’ve brought to him and less likely to ask questions. And pick a location—unlike a bank—that isn’t filled with security cameras.
Of course, this won’t be enough if the final recipient of the document checks the signature; you’re on your own when it comes to forgery. And in my state the notary has to keep a record of the document he signs; this one won’t be in his records if he’s ever asked. But if you need to switch the deed on a piece of property, change ownership of a bank account, or give yourself power of attorney over someone else, hacking the notary system makes the job a lot easier.
Anyone know how often this kind of thing happens in real life?
News

Here’s a dumb idea: voting from your TiVo.
EPIC on electronic voting machines:
This paper describes an inherent flaw in the way ATM PINs are encrypted and transmitted on the international financial networks, making them vulnerable to attack by malicious insiders at a bank.
One of the most disturbing aspects of the attack is that you’re only as secure as the least trusted bank on the network. Instead of just having to trust that your own issuing bank has good security against insider fraud, you have to trust every other financial institution on the network as well. An insider at another bank can crack your ATM PIN if you withdraw money from any of the other banks’ ATMs.
UK RFID passport cracked:
This fraudster inserted a recording device into an ATM’s phone line and recorded customer card numbers and PINs. I’m amazed that ATMs still don’t have basic communications security measures.
A 2004 U.S. government study found that RFID passports are less reliable than traditional passports.
I’ve written about RFID passports before.
I’ve written about the 2006 Workshop on Economics of Information Security (WEIS); I think it’s the most interesting security conference out there.
WEIS 2007 will be held at Carnegie Mellon University on June 6-7. There’s still time to submit a paper.
Several TSA stories this month. First, an innocent passenger was arrested for trying to bring a rubber-band ball onto an airplane.
Second, a woman passed out on a plane after her drugs were confiscated.
Third, San Francisco International Airport screeners were warned in advance of an undercover test.
Fourth, frozen spaghetti sauce was almost confiscated:
We have a serious problem in this country. The TSA operates above, and outside, the law. There’s no due process, no judicial review, no appeal.
And a TSA cartoon:
Six Muslim imams removed from a plane by US Airways because…well because they’re Muslim and that scares people. After they were cleared by the authorities, US Airways refused to sell them a ticket.
Note that US Airways is the culprit here, not the TSA. Refuse to be terrorized, people!
Interesting article on the history and current search for a drug that compels people to tell the truth:
David Kahn donates his cryptology library to the National Cryptologic Museum, at Fort Meade, MD:
Earlier this month there was a bioterrorism drill in Seattle. Postal carriers delivered dummy packages to “nearly thousands” of people (yes, that’s what the article said; my guess is “nearly a thousand”), testing how the postal system could be used to quickly deliver medications. Sure, there are lots of scenarios where this kind of delivery system isn’t good enough, but that’s not the point. In general, I think emergency response is one of the few areas where we need to spend more money. And, in general, I think tests and drills like this are good—how else will we know if the systems will work the way we think they will?
Last week, the U.S. Copyright Office released a new list of exemptions to the DMCA.
Erasable ink scam: Someone goes door-to-door, soliciting contributions to a charity. He prefers a check—it’s safer for you, after all. But he offers his pen for you to sign your check, and the pen is filled with erasable ink. Later, he changes both the payee and the amount, and cashes the check. This surely isn’t a new scam, but it’s happening in the UK right now. I’ve already written about attackers using different solvents to wash ink off checks, but this one is even more basic—the attacker gives the victim a bad pen to start with. I thought checks were printed with ink that also erased, voiding the check. Why does this sort of attack still work?
Photo ID required for pancakes:
The DHS wants to share terrorist biometric information with other countries, in a program called “Global Envelope.” Does anyone think that this will be any better than the no-fly list?
There’s new software that claims to be able to predict who is likely to become a murderer. Pretty scary stuff, as it gets into the realm of thoughtcrime.
In secret and for the past few years, immigration agents have been giving anyone entering or leaving the country a computer-generated terrorist risk score. Like all these systems, we are all judged in secret, by a computer algorithm, with no way to see or even challenge our score. Kafka would be proud. One quote from the AP story: “‘If this catches one potential terrorist, this is a success,’ Ahern said.” That’s just too idiotic a statement to even rebut.
Federal Register notice:
Comments to the notice:
Evidence that the program is illegal:
Congress has passed an anti-pretexting bill. The law doesn’t go as far as some of the state laws—which it pre-empts—but it’s still a good thing.
Previously, the MPAA killed a California anti-pretexting bill, claiming that it needed to commit fraud to stop illegal downloading. My comment at the time: These people are looking more and more like a criminal organization every day.
“A Romanian man has been indicted on charges of hacking into more than 150 U.S. government computers, causing disruptions that cost NASA, the Energy Department and the Navy nearly $1.5 million.” It’s been a while since I’ve seen one of these stories.
I give a talk called “The Future of Privacy,” where I talk about current and future technological developments that erode our privacy. One of the things I talk about is auditory eavesdropping, and I hypothesize that a cell phone microphone could be turned on surreptitiously and remotely. I never had any actual evidence one way or the other, but the technique has surfaced in an organized crime prosecution. Seems that the technique is to download eavesdropping software into the phone.
Interesting story of a British journalist buying 20 different fake EU passports. She bought a genuine Czech passport with a fake name and her real picture, a fake Latvian passport, and a stolen Estonian passport. Note that harder-to-forge RFID passports would only help in one instance; it’s certainly not the most important problem to solve.
I’ve written about backscatter X-ray technology before. It’s great for finding hidden weapons on a person, but it’s also great for seeing naked images of them. The TSA is piloting this technology in Phoenix, and they’re deliberately blurring the images to protect privacy. Note that the system is being made better by making the resulting images less detailed. Excellent.
Blog entry URL:
This is interesting. Ted Kaczynski (the Unabomber) wrote in code. It was a pencil-and-paper cipher that the government (the article says “CIA,” but presumably the NSA was involved) couldn’t crack until someone found the key amongst his papers. Does anyone know the details of the algorithm?
I’ll be the first to admit it: I know next to nothing about MySpace or Facebook. I do know that they’re social networking sites, and that—at least to some extent—your reputation is based on who your “friends” are and what they say about you. Which means that this follows, like day follows night: “Fake Your Space” is a site where you can hire fake friends to leave their pictures and personalized comments on your page. Now you can pretend that you’re more popular than you actually are. What’s next? Services that verify friends on your friends’ MySpace pages? Services that block friend verification services? Where will this all end up?
Note: This is probably a hoax site.
Banks are spending millions preventing outsiders from stealing their customers’ identities, but there is a growing insider threat.
Clever hack against gift cards. Fraudster takes unactivated cards off racks in stores and copies down the serial numbers. Later, he checks online to see if the card has been activated. If it has, he goes on a shopping spree.
What’s the security problem? A serial number on the cards that’s visible even though the card is not activated. This could be mitigated by hiding the serial number behind a scratch-off coating, or opaque packaging.
Absolutely fascinating paper about a personal RFID firewall. The basic idea is that you carry a personalized device that jams the signals from all the RFID tags on your person until you authorize otherwise. They even built a prototype. As Cory Doctorow points out, this is potentially a way to reap the benefits of RFID without paying the cost.
This is a weird story, about “the square root of terrorist intent.” It appears in an equation that determines how much federal money different locales receive for anti-terrorism defense.
I wrote an essay on spam for the Forbes.com website.
There’s little in it I haven’t said before.
Another essay on spam:
Hackers have gained access to a database containing personal information on 800,000 current and former UCLA students. This is barely worth writing about: yet another database attack exposing personal information. My guess is that everyone in the U.S. has been the victim of at least one of these already. But there was a particular section of the article that caught my eye. “Jim Davis, UCLA’s associate vice chancellor for information technology, described the attack as sophisticated, saying it used a program designed to exploit a flaw in a single software application among the many hundreds used throughout the Westwood campus. ‘An attacker found one small vulnerability and was able to exploit it, and then cover their tracks,’ Davis said.” It worries me that the associate vice chancellor for information technology doesn’t understand that *all* attacks work like that.
CATO report on data mining and terrorism. Definitely worth reading:
Defeating motion-sensor secured doors with a stick. An old trick, but a good story:
Separating Data Ownership and Device Ownership
Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.
The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.
This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.
These systems are difficult to secure, and not just because you give your attacker the device and let him utilize whatever time, equipment and expertise he needs to break it. It’s difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.
This means that the security needs to be secure not against the average attacker, but against the smartest, most motivated and best funded attacker.
I was reminded of this problem earlier this month, when researchers announced a new attack against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.
These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
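A toy example makes the timing idea concrete. This is my own sketch, far simpler than the attacks above, but it shows the core problem: when the amount of work depends on secret data, the work itself leaks.

```python
def leaky_equals(secret, guess):
    """Early-exit comparison, typical of naive code. Returns whether the
    strings match and how many characters were examined; the work done
    depends on where the first mismatch occurs, and that is the side channel."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

def constant_time_equals(secret, guess):
    """Examines every character regardless of mismatches, so the amount of
    work reveals nothing about where the strings differ."""
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= ord(s) ^ ord(g)
    return diff == 0

# An observer timing the leaky version learns the length of the matching
# prefix, so the secret can be recovered one character at a time.
print(leaky_equals("swordfish", "sabotage!"))   # (False, 2)
print(leaky_equals("swordfish", "sworddddd"))   # (False, 6)
```

Real side-channel attacks measure nanoseconds, power draw, or radiation rather than loop counters, but the principle is the same: data-dependent behavior is information.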
Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.
Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe off the back—the real data, and the security, are in the bank’s databases.
Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.
While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in their scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.
Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.
Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.
New timing attack on RSA:
My essay on side-channel attacks:
My paper on data/device separation:
Street-performer protocol: an alternative to DRM:
Ontario lottery fraud:
This essay originally appeared on Wired.com.
Note: I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, stealing end-of-week draw tickets from unsuspecting customers. The customer would hand his ticket over the counter to be scanned to see if it was a winner. The clerk (knowing what the winning numbers actually were) would palm a non-winning ticket into the machine, inform the customer “sorry, better luck next time,” and claim the prize himself later. Nice scam, but nothing to do with the point of this essay.
BT Counterpane News
Bruce Schneier and Ray Stanton’s perspectives on BT’s acquisition of Counterpane and BT’s future directions:
Ovum’s report on BT’s acquisition of Counterpane:
There was a profile of me in the “St. Paul Pioneer Press.” I’m pretty pleased with the article, but this is—by far—my favorite line, about “Applied Cryptography”: “‘The first seven or eight chapters you can read without knowing any math at all,’ Walker said. ‘The second half of the book you can’t export overseas—it’s classified as munitions.'” It’s not true, of course, but it’s a great line.
Another article on me from the Providence Journal:
I was interviewed on the subject of RFID passports:
Gary McGraw interviewed me for his Silver Bullet Security Podcast.
Fighting Fraudulent Transactions
Last March I wrote that two-factor authentication isn’t going to reduce financial fraud or identity theft; all it will do is force the criminals to change their tactics. Back then, this is what I said:
“Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.
“Here are two new active attacks we’re starting to see:
“Man-in-the-Middle attack. An attacker puts up a fake bank website and entices the user to that website. The user types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.
“Trojan attack. Attacker gets Trojan installed on user’s computer. When user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.
“See how two-factor authentication doesn’t solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.”
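The relay in the quoted man-in-the-middle scenario is easy to sketch. What follows is a toy simulation, not attack code: the password, the HOTP-style counter scheme, and the class names are all invented for illustration. The point is simply that the ever-changing code is just as relayable as the never-changing password.

```python
import hmac
import hashlib

SECRET = b"shared-token-seed"  # secret shared by the bank and the user's token

def hotp(counter):
    """Counter-based one-time code (HOTP-style sketch)."""
    mac = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()
    return mac[:6]

class Bank:
    """The real bank: checks a static password plus a one-time code."""
    def __init__(self):
        self.counter = 0
    def login(self, password, code):
        self.counter += 1
        return password == "hunter2" and code == hotp(self.counter)

class PhishingProxy:
    """The fake bank site: collects credentials and relays them immediately."""
    def __init__(self, real_bank):
        self.real_bank = real_bank
    def login(self, password, code):
        # The one-time code is still fresh, so the relayed login succeeds.
        return self.real_bank.login(password, code)

bank = Bank()
user_code = hotp(bank.counter + 1)   # user reads the next code off his token
proxy = PhishingProxy(bank)
print(proxy.login("hunter2", user_code))  # True: the attacker holds the session
```

Replaying the same code a second time fails, because the bank’s counter has moved on, which is exactly why the attacker relays credentials in real time rather than recording them for later.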
The solution is not to better authenticate the person, but to authenticate the transaction. (Think credit cards. No one checks your signature. They really don’t care if you’re you. They maintain security by authenticating the transactions.)
Of course, no one listens to me. U.S. regulators required banks to implement two-factor authentication by the end of this year. But customers are rebelling, and banks are scrambling to figure out something—anything—else. And, amazingly enough, they seem to have stumbled, purely by accident, on security solutions that actually work. From CSO:
“Instead, to comply with new banking regulations and stem phishing losses, banks and the vendors who serve them are hurriedly putting together multipronged strategies that they say amount to “strong” authentication. The emerging approach generally consists of somehow recognizing a customer’s computer, asking additional challenge questions for risky behavior and putting in place back-end fraud detection.
“Despite the FFIEC guidance about authentication, the emerging technologies that actually seem to hold the most promise for protecting the funds in consumer banking accounts aren’t authentication systems at all. They’re back-end systems that monitor for suspicious behavior.
“Some of these tools are rule-based: If a customer from Nebraska signs on from, say, Romania, the bank can decree that the log-on always be considered suspect. Others are based on a risk score: That log-on from Romania would add points to a risk score, and when the score reaches a certain threshold, the bank takes action.
“Flagged transactions can get bumped to second-factor authentication—usually, a call on the telephone, something the user has. This has long been done manually in the credit card world. Just think about the last phone call you got from your credit card company’s fraud department when you (or someone else) tried to make a large purchase with your credit card in Europe. Some banks, including Washington Mutual, are in the process of automating out-of-band phone calls for risky online transactions.”
Exactly. That’s how you do it.
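The back-end monitoring the CSO excerpt describes reduces, at its simplest, to a risk score with a step-up action past a threshold. Here is a deliberately toy sketch: the signals, point values, and threshold are all made up for illustration, and real systems weigh far richer behavioral data.

```python
# Toy version of the rule/score-based fraud screening described above.
# Each suspicious signal adds points; past a threshold, the bank steps up
# to out-of-band verification instead of blocking outright.

RISK_THRESHOLD = 50

def risk_score(txn, profile):
    """Score a transaction against the customer's usual behavior."""
    score = 0
    if txn["country"] != profile["home_country"]:
        score += 40                      # log-on from an unexpected country
    if txn["amount"] > 3 * profile["avg_amount"]:
        score += 25                      # unusually large transfer
    if txn["device_id"] not in profile["known_devices"]:
        score += 15                      # unrecognized computer
    return score

def screen(txn, profile):
    if risk_score(txn, profile) >= RISK_THRESHOLD:
        return "step-up: out-of-band phone verification"
    return "allow"

profile = {"home_country": "US", "avg_amount": 200.0,
           "known_devices": {"laptop-1"}}

print(screen({"country": "US", "amount": 150.0, "device_id": "laptop-1"},
             profile))                   # allow
print(screen({"country": "RO", "amount": 900.0, "device_id": "cafe-pc"},
             profile))                   # step-up
```

Note that the high-score transaction is bumped to an out-of-band phone call rather than blocked, so a false positive costs the customer a minute on the telephone instead of a declined payment.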
My essay on two-factor authentication:
My essay on mitigating identity theft:
Banks required to implement two-factor authentication:
Cybercrime Hype Alert
It seems to be the season for cybercrime hype. First, we have a CNN article, which seems to have no actual news:
“Computer hackers will open a new front in the multi-billion pound ‘cyberwar’ in 2007, targeting mobile phones, instant messaging and community Web sites such as MySpace, security experts predict.
“As people grow wise to email scams, criminal gangs will find new ways to commit online fraud, sell fake goods or steal corporate secrets.”
And next, a BBC article which claims that criminal organizations are paying student members to get IT degrees:
“The most successful cyber crime gangs were based on partnerships between those with the criminal skills and contacts and those with the technical ability, said Mr Day.
“‘Traditional criminals have the ability to move funds and use all of the background they have,’ he said, ‘but they don’t have the technical expertise.’
“As the number of criminal gangs looking to move into cyber crime expanded, it got harder to recruit skilled hackers, said Mr Day. This has led criminals to target university students all around the world.
“‘Some students are being sponsored through their IT degree,’ said Mr Day. Once qualified, the graduates go to work for the criminal gangs.
“The aura of rebellion the name conjured up helped criminals ensnare children as young as 14, suggested the study.
“By trawling websites, bulletin boards and chat rooms that offer hacking tools, cracks or passwords for pirated software, criminal recruiters gather information about potential targets.
“Once identified, young hackers are drawn in by being rewarded for carrying out low-level tasks such as using a network of hijacked home computers, a botnet, to send out spam.
“The low risk of being caught and the relatively high rewards on offer helped the criminal gangs to paint an attractive picture of a cyber criminal’s life, said Mr Day.
“As youngsters are drawn in the stakes are raised and they are told to undertake increasingly risky jobs.”
Criminals targeting children—that’s sure to peg anyone’s hype-meter.
To be sure, I don’t want to minimize the threat of cybercrime. Nor do I want to minimize the threat of organized cybercrime. There are more and more criminals prowling the net, and more and more cybercrime has gone up the food chain—to large organized crime syndicates. Cybercrime is big business, and it’s getting bigger.
But I’m not sure if stories like these help or hurt.
Comments from Readers
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Comments on CRYPTO-GRAM should be sent to firstname.lastname@example.org. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
BT Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.
Copyright (c) 2006 by Bruce Schneier.