Crypto-Gram

April 15, 2005

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

Or you can read this issue on the web at <http://www.schneier.com/crypto-gram-0504.html>.

Schneier also publishes these same essays in his blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      More on Two-Factor Authentication
      Mitigating Identity Theft
      Crypto-Gram Reprints
      New Risks of Biometrics
      News
      Student Hacks System to Alter Grades
      Security Notes from All Over: Camouflage in Octopodes
      Hacking the Papal Election
      Counterpane News
      The Doghouse: ExeShield
      Secure Flight Is in Trouble
      Comments from Readers


More on Two-Factor Authentication

Recently I published an essay arguing that two-factor authentication is an ineffective defense against identity theft. For example, issuing tokens to online banking customers won’t reduce fraud, because new attack techniques simply ignore the countermeasure. Unfortunately, some took my essay as a condemnation of two-factor authentication in general. This is not true. It’s simply a matter of understanding the threats and the attacks.

Passwords just don’t work anymore. As computers have gotten faster, password guessing has gotten easier. Ever-more-complicated passwords are required to evade password-guessing software. At the same time, there’s an upper limit to how complex a password users can be expected to remember. About five years ago, these two lines crossed: It is no longer reasonable to expect users to have passwords that can’t be guessed. For anything that requires reasonable security, the era of passwords is over.
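
To make the arithmetic concrete, here’s a back-of-the-envelope calculation, written as a Python sketch; the guessing rates are illustrative assumptions, not measurements:

  # Expected time to brute-force a random password by exhaustive guessing.
  # The guesses-per-second rates are illustrative assumptions.
  ALPHABET = 26 + 26 + 10 + 32   # lowercase + uppercase + digits + symbols

  for label, rate in {"older hardware": 1e5, "modern hardware": 1e9}.items():
      for length in (6, 8, 10):
          expected_s = (ALPHABET ** length) / (2 * rate)   # half the keyspace
          print(f"{label}, {length} chars: ~{expected_s / 3.15e7:.2g} years")

The exact numbers don’t matter; the point is that the attacker’s curve keeps rising with hardware speed, while the complexity a human can memorize stays flat.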

Two-factor authentication solves this problem. It works against passive attacks: eavesdropping and password guessing. It protects against users choosing weak passwords, telling their passwords to their colleagues or writing their passwords on pieces of paper taped to their monitors. For an organization trying to improve access control for its employees, two-factor authentication is a great idea. Microsoft is integrating two-factor authentication into its operating system, another great idea.

What two-factor authentication won’t do is prevent identity theft and fraud. It’ll prevent certain tactics of identity theft and fraud, but criminals simply will switch tactics. We’re already seeing fraud tactics that completely ignore two-factor authentication. As banks roll out two-factor authentication, criminals simply will switch to these new tactics.

One way to think about this is that two-factor authentication solves security problems involving authentication. The current wave of attacks against financial systems is not exploiting vulnerabilities in the authentication system, so two-factor authentication doesn’t help.

Security is always an arms race, and you could argue that this situation is simply the cost of treading water. The problem with this reasoning is that it ignores countermeasures that permanently reduce fraud. By concentrating on authenticating the individual rather than authenticating the transaction, banks are forced to defend against criminal tactics rather than the crime itself.

Credit cards are a perfect example. Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card’s existence isn’t even verified. The credit card companies spend their security dollar authenticating the transaction, not the cardholder.

Two-factor authentication is a long-overdue solution to the problem of passwords. I welcome its increasing popularity, but identity theft and bank fraud are not results of password problems; they stem from poorly authenticated transactions. The sooner people realize that, the sooner they’ll stop advocating stronger authentication measures and the sooner security will actually improve.

This essay previously appeared in Network World as a “Face Off.”
<http://www.nwfusion.com/columnists/2005/…>

Joe Uniejewski of RSA Security wrote an opposing position:
<http://www.nwfusion.com/columnists/2005/…>

Another rebuttal:
<http://www.eweek.com/article2/0,1759,1782435,00.asp>

More coverage:
<http://searchsecurity.techtarget.com/…>

My original essay:
<http://www.schneier.com/essay-083.html>


Mitigating Identity Theft

Identity theft is the new crime of the information age. A criminal collects enough personal data on someone to impersonate a victim to banks, credit card companies, and other financial institutions. Then he racks up debt in the person’s name, collects the cash, and disappears. The victim is left holding the bag. While some of the losses are absorbed by financial institutions—credit card companies in particular—the credit-rating damage is borne by the victim. It can take years for the victim to clear his name.

Unfortunately, the solutions being proposed in Congress won’t help. To see why, we need to start with the basics. The very term “identity theft” is an oxymoron. Identity is not a possession that can be acquired or lost; it’s not a thing at all. Someone’s identity is the one thing about a person that cannot be stolen.

The real crime here is fraud; more specifically, impersonation leading to fraud. Impersonation is an ancient crime, but the rise of information-based credentials gives it a modern spin. A criminal impersonates a victim online and steals money from his account. He impersonates a victim in order to deceive financial institutions into granting credit in the victim’s name. He impersonates a victim to the Post Office and gets the victim’s address changed. He impersonates a victim in order to fool the police into arresting the wrong man. No one’s identity is stolen; identity information is being misused to commit fraud.

The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is preventing impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. This is what’s been in the news recently: ChoicePoint, LexisNexis, Bank of America, and so on. But data privacy is about more than just fraud. Whether it is the books we take out of the library, the websites we visit, or the contents of our text messages, most of us have personal data on third-party computers that we don’t want made public. The posting of Paris Hilton’s phone book on the Internet is a celebrity example of this.

The second issue is the ease with which a criminal can use personal data to commit fraud. It doesn’t take much personal information to apply for a credit card in someone else’s name. It doesn’t take much to submit fraudulent bank transactions in someone else’s name. It’s surprisingly easy to get an identification card in someone else’s name. Our current culture, where identity is verified simply and sloppily, makes it easier for a criminal to impersonate his victim.

Proposed fixes tend to concentrate on the first issue—making personal data harder to steal—whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.

Fraudulent transactions have nothing to do with the legitimate account holders. Criminals impersonate legitimate users to financial institutions. That means that any solution can’t involve the account holders. That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

They can’t claim that the user must keep his password secure or his machine virus free. They can’t require the user to monitor his accounts for fraudulent activity, or his credit reports for fraudulently obtained credit cards. Those aren’t reasonable requirements for most users. The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions. They’ve pushed most of the actual costs onto the merchants. And almost no security centers around trying to authenticate the cardholder.

That’s an important lesson. Identity theft solutions focus much too much on authenticating the person. Whether it’s two-factor authentication, ID cards, biometrics, or whatever, there’s a widespread myth that authenticating the person is the way to prevent these crimes. But once you understand that the problem is fraudulent transactions, you quickly realize that authenticating the person isn’t the way to proceed.

Again, think about credit cards. Store clerks barely verify signatures when people use cards. People can use credit cards to buy things by mail, phone, or Internet, where no one verifies the signature or even that you have possession of the card. Even worse, no credit card company mandates secure storage requirements for credit cards. They don’t demand that cardholders secure their wallets in any particular way. Credit card companies simply don’t worry about verifying the cardholder or putting requirements on what he does. They concentrate on verifying the transaction.

This same sort of thinking needs to be applied to other areas where criminals use impersonation to commit fraud. I don’t know what the final solutions will look like, but I do know that once financial institutions are liable for losses due to these types of fraud, they will find solutions. Maybe there’ll be a daily withdrawal limit, like there is on ATMs. Maybe large transactions will be delayed for a period of time, or will require a call-back from the bank or brokerage company. Maybe people will no longer be able to open a credit card account by simply filling out a bunch of information on a form. Likely the solution will be a combination of solutions that reduces fraudulent transactions to a manageable level, but we’ll never know until the financial institutions have the financial incentive to put them in place.
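
To make “authenticating the transaction” concrete, here’s a Python sketch of the kinds of rules I mean; the thresholds and rule names are hypothetical, purely for illustration:

  # Sketch of transaction-level controls: a daily limit, plus a
  # delay-and-call-back for large transfers. Thresholds are hypothetical.
  DAILY_LIMIT = 2_000          # assumed dollars per day
  CALLBACK_THRESHOLD = 10_000  # assumed: larger transfers get a call-back

  def screen_transaction(amount, spent_today):
      if amount >= CALLBACK_THRESHOLD:
          return "hold: delay the transfer and confirm by call-back"
      if spent_today + amount > DAILY_LIMIT:
          return "deny: daily withdrawal limit exceeded"
      return "allow"

  print(screen_transaction(500, 0))        # allow
  print(screen_transaction(1_800, 500))    # deny: over the daily limit
  print(screen_transaction(12_000, 0))     # hold: big enough to call back

Notice that none of this authenticates the customer; all of it scrutinizes the transaction.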

Right now, the economic incentives result in financial institutions that are so eager to allow transactions—new credit cards, cash transfers, whatever—that they’re not paying enough attention to fraudulent transactions. They’ve pushed the costs for fraud onto the merchants. But if they’re liable for losses and damages to legitimate users, they’ll pay more attention. And they’ll mitigate the risks. Security can do all sorts of things, once the economic incentives to apply them are there.

By focusing on the fraudulent use of personal data, I do not mean to minimize the harm caused by third-party data collection and violations of privacy. I believe that the U.S. would be well-served by a comprehensive Data Protection Act like the European Union’s. However, I do not believe that a law of this type would significantly reduce the risk of fraudulent impersonation. To mitigate that risk, we need to concentrate on detecting and preventing fraudulent transactions. We need to make the entity that is in the best position to mitigate the risk responsible for that risk. And that means making the financial institutions liable for fraudulent transactions.

Doing anything less simply won’t work.

This essay was previously published on CNet.
<http://news.com.com/Mitigating+identity+theft/…>


Crypto-Gram Reprints

Crypto-Gram is currently in its eighth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.

National ID Cards:
<http://www.schneier.com/crypto-gram-0404.html#1>

Stealing an Election:
<http://www.schneier.com/crypto-gram-0404.html#4>

Automated Denial-of-Service Attacks Using the U.S. Post Office:
<http://www.schneier.com/crypto-gram-0304.html#1>

National Crime Information Center (NCIC) Database Accuracy:
<http://www.schneier.com/crypto-gram-0304.html#7>

How to Think About Security:
<http://www.schneier.com/crypto-gram-0204.html#1>

Is 1024 Bits Enough?
<http://www.schneier.com/crypto-gram-0204.html#3>

Liability and Security:
<http://www.schneier.com/crypto-gram-0204.html#6>

Natural Advantages of Defense: What Military History Can Teach Network Security, Part 1
<http://www.schneier.com/crypto-gram-0104.html#1>

UCITA:
<http://www.schneier.com/crypto-gram-0004.html#ucita>

Cryptography: The Importance of Not Being Different:
<http://www.schneier.com/crypto-gram-9904.html#different>

Threats Against Smart Cards:
<http://www.schneier.com/…>

Attacking Certificates with Computer Viruses:
<http://www.schneier.com/…>


New Risks of Biometrics

It’s the kind of attack we’ve been talking about since the advent of biometrics. In Malaysia, criminals cut off a man’s finger to open the biometric lock on his Mercedes.

What interests me about this story is the interplay between attacker and defender. The defender implements a countermeasure that causes the attacker to change his tactics. Sometimes the new tactics are more harmful, and it’s not obvious whether or not the countermeasure was worth it.

I wrote about something similar in Beyond Fear (p. 113): “Someone might think: ‘I am worried about car theft, so I will buy an expensive security device that makes ignitions impossible to hot-wire.’ That seems like a reasonable thought, but countries such as Russia, where these security devices are commonplace, have seen an increase in carjackings. A carjacking puts the driver at a much greater risk; here the security countermeasure has caused the weakest link to move from the ignition switch to the driver. Total car thefts may have declined, but drivers’ safety did, too.”

It’s certainly possible to design fingerprint readers that test for “liveness”: pulse, body temperature, etc. But these new security countermeasures will result in new criminal tactics, and the cycle will continue.

<http://news.bbc.co.uk/2/hi/asia-pacific/4396831.stm>


News

Failures of anti-terrorist radiation detectors:
<http://www.nti.org/d_newswire/issues/print.asp?…>

Specifications for U.S. electronic passports:
<http://a257.g.akamaitech.net/7/257/2422/…>

Hackers taking over webcams:
<http://www.theregister.co.uk/2005/02/28/…>

Story of social engineering at the IRS:
<http://www.cnn.com/2005/TECH/03/17/…>

Article on some of the downright silly secrecy the U.S. government has imposed for security reasons (requires login):
<http://online.wsj.com/article/…>
The article explains that pilots are not allowed to fly near nuclear power plants, but can’t be told where those plants are. Here’s a story about how someone found the exact location of the nuclear power plant in Oyster Creek, N.J., using only publicly available information.
<http://synflood.at/blog/archives/2005:03:28/…>

Nice op-ed on the security problems with secrecy:
<http://www.independent-media.tv/item.cfm?…>

ID requirements for voters. Those who advocate photo IDs at polling places forget that not everyone has one. Not everyone flies on airplanes. Not everyone has a driver’s license. If a photo ID is required to vote, it had better be 1) free and 2) easily available everywhere to everyone. Otherwise it’s a poll tax.
<http://www.npr.org/templates/story/story.php?…>

Study shows (yet again) how easy it is to collect personal information that can be used for identity theft.
<http://news.bbc.co.uk/1/hi/technology/4378253.stm>

Anonymity and the Internet:
<http://slate.com/id/2115120/>
<http://wendy.seltzer.org/blog/archives/2005/03/19/…>

Great article saying that identity theft is inescapable:
<http://www.theregister.co.uk/2005/03/23/…>

Why surveillance cameras don’t reduce crime:
<http://gritsforbreakfast.blogspot.com/2005/03/…>

Sybase threatens to prosecute researchers who found vulnerabilities in their products:
<http://www.computerworld.com/securitytopics/…>

EPIC’s analysis of the Department of Homeland Security’s new multifunction identity card:
<http://www.epic.org/privacy/surveillance/spotlight/…>

Law review article on the price of restricting vulnerability information:
<http://www.digital-law.net/IJCLP/Cy_2004/…>

Insider attack against a bank, using a keyboard recorder:
<http://news.bbc.co.uk/1/hi/uk/4356661.stm>
Another insider attack, by employees at a call center in India:
<http://timesofindia.indiatimes.com/articleshow/…>

Sandia released a half-sensible, half-chilling report on anti-terrorist security. I commented on it here:
<https://www.schneier.com/blog/archives/2005/04/…>

The London School of Economics recently published a report on the UK government’s national ID proposals. Definitely worth reading.
<http://www.lse.ac.uk/collections/…>

Texas cars with embedded RFID chips:
<http://gritsforbreakfast.blogspot.com/2005/04/…>

These comments on the security of electronic passports are an excellent primer on the dangers of the technology. Definitely read Attachment 1: “Security and Privacy Issues in E-Passports,” a more technical paper by Ari Juels, David Molnar, and David Wagner.
<http://www.epic.org/privacy/rfid/…>

Great Economist article on security as a trade-off:
<http://economist.com/opinion/displayStory.cfm?…>
Excerpt:
<https://www.schneier.com/blog/archives/2005/04/…>

We’ve all known that you can intercept Bluetooth communications from up to a mile away. What’s new is the step-by-step instructions necessary to build an interceptor for yourself for less than $400. Be the first on your block to build one.
<http://www.tomsnetworking.com/Sections-article106.php>
Is there anyone who can make a reasonable argument that RFID won’t be similarly interceptable?

A court ruled that simply password-protecting a file isn’t enough to make it a trade secret.
<http://www.internetcases.com/2005/04/…>

Large-scale license plate scanning by helicopter:
<http://www.thenewspaper.com/news/03/320.asp>


Student Hacks System to Alter Grades

The University of California, Santa Barbara has a custom program, eGrades, through which faculty can submit and alter grades. It’s password-protected, of course. But there’s a backup system, so that faculty who forget their password can reset it using their Social Security number and date of birth.

A student worked for an insurance company, and she was able to obtain the SSNs and dates of birth of two faculty members. She used that information to reset their passwords and change grades for herself and several fellow students. According to the news report: “Police, university officials and campus computer specialists said Ramirez’s alleged illegal access to the computer grading system was not the result of a deficiency or flaw in the program.”

Sounds like a flaw in the program to me. It’s even one I’ve written about: a primary security mechanism that fails to a less-secure secondary mechanism.
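
In code, the anti-pattern looks something like this; a minimal Python sketch with hypothetical names. The point is that the account is only as secure as the weakest authentication path:

  # Anti-pattern: a strong primary mechanism that fails to a weaker
  # secondary one. All names and values here are hypothetical.
  import hashlib

  users = {"prof_smith": {
      "password_hash": "(strong secret, never disclosed)",
      "ssn": "123-45-6789",    # semi-public: employers and insurers have it
      "dob": "1960-01-01",     # semi-public: trivially discoverable
  }}

  def reset_password(username, ssn, dob, new_password):
      u = users[username]
      # The fallback authenticates with data that many third parties hold,
      # so the strength of the password no longer matters to an attacker.
      if (ssn, dob) == (u["ssn"], u["dob"]):
          u["password_hash"] = hashlib.sha256(new_password.encode()).hexdigest()
          return True   # attacker now controls the account
      return False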

Story:
<http://www.dailynexus.com/news/2005/9237.html>

My previous essay on the topic:
<https://www.schneier.com/blog/archives/2005/02/…>


Security Notes from All Over: Camouflage in Octopodes

Last month researchers released a video of an octopus camouflaging itself with coral and shells, and then walking across the ocean floor.

I have a fondness for security countermeasures in the natural world. As people, we try to figure out the most effective countermeasure for a given attack. Evolution works differently. A species tries different countermeasures at random, and stops at the first one that just barely works. The result is that the natural world illustrates an amazing variety of security countermeasures.

<http://www.nature.com/news/2005/050321/full/…>


Hacking the Papal Election

As the College of Cardinals prepares to elect a new pope, people like me wonder about the election process. How does it work, and just how hard is it to hack the vote?

Of course I’m not advocating voter fraud in the papal election. Nor am I insinuating that a cardinal might perpetrate fraud. But people who work in security can’t look at a system without trying to figure out how to break it; it’s an occupational hazard.

The rules for papal elections are steeped in tradition, and were last codified on 22 Feb 1996: “Universi Dominici Gregis on the Vacancy of the Apostolic See and the Election of the Roman Pontiff.” The document is well-thought-out, and filled with details.

The election takes place in the Sistine Chapel, directed by the Church Chamberlain. The ballot is entirely paper-based, and all ballot counting is done by hand. Votes are secret, but everything else is done in public.

First there’s the “pre-scrutiny” phase. “At least two or three” paper ballots are given to each cardinal (115 will be voting), presumably so that a cardinal has extras in case he makes a mistake. Then nine election officials are randomly selected: three “Scrutineers,” who count the votes; three “Revisers,” who verify the results of the Scrutineers; and three “Infirmarii,” who collect the votes from those too sick to be in the room. (These officials are chosen randomly for each ballot.)

Each cardinal writes his selection for Pope on a rectangular ballot paper “as far as possible in handwriting that cannot be identified as his.” He then folds the paper lengthwise and holds it aloft for everyone to see.

When everyone is done voting, the “scrutiny” phase of the election begins. The cardinals proceed to the altar one by one. On the altar is a large chalice with a paten (the shallow metal plate used to hold communion wafers during mass) resting on top of it. Each cardinal places his folded ballot on the paten. Then he picks up the paten and slides his ballot into the chalice.

If a cardinal cannot walk to the altar, one of the Scrutineers—in full view of everyone—does this for him. If any cardinals are too sick to be in the chapel, the Scrutineers give the Infirmarii a locked empty box with a slot, and the three Infirmarii together collect those votes. (If a cardinal is too sick to write, he asks one of the Infirmarii to do it for him.) The box is opened and the ballots are placed onto the paten and into the chalice, one at a time.

When all the ballots are in the chalice, the first Scrutineer shakes it several times in order to mix them. Then the third Scrutineer transfers the ballots, one by one, from one chalice to another, counting them in the process. If the total number of ballots is not correct, the ballots are burned and everyone votes again.

To count the votes, each ballot is opened and the vote is read by each Scrutineer in turn, the third one aloud. Each Scrutineer writes the vote on a tally sheet. This is all done in full view of the cardinals. The total number of votes cast for each person is written on a separate sheet of paper.

Then there’s the “post-scrutiny” phase. The Scrutineers tally the votes and determine if there’s a winner. Then the Revisers verify the entire process: ballots, tallies, everything. And then the ballots are burned. (That’s where the smoke comes from: white if a Pope has been elected, black if not.)

How hard is this to hack? The first observation is that the system is entirely manual, making it immune to the sorts of technological attacks that make modern voting systems so risky. The second observation is that the small group of voters—all of whom know each other—makes it impossible for an outsider to affect the voting in any way. The chapel is cleared and locked before voting. No one is going to dress up as a cardinal and sneak into the Sistine Chapel. In effect, the voter verification process is about as perfect as you’re ever going to find.

Eavesdropping on the process is certainly possible, although the rules explicitly state that the chapel is to be checked for recording and transmission devices “with the help of trustworthy individuals of proven technical ability.” I read that the Vatican is worried about laser microphones, as there are windows near the chapel’s roof.

That leaves us with insider attacks. Can a cardinal influence the election? Certainly the Scrutineers could potentially modify votes, but it’s difficult. The counting is conducted in public, and there are multiple people checking every step. It’s possible for the first Scrutineer, if he’s good at sleight of hand, to swap one ballot paper for another before recording it. Or for the third Scrutineer to swap ballots during the counting process.

A cardinal can’t stuff ballots when he votes. The complicated paten-and-chalice ritual ensures that each cardinal votes once—his ballot is visible—and also keeps his hand out of the chalice holding the other votes.

Making the ballots large would make these attacks harder. So would controlling the blank ballots better, and only distributing one to each cardinal per vote. Presumably cardinals change their minds more often during the voting process, so distributing extra blank ballots makes sense.

Ballots from previous votes are burned, which makes it harder to use one to stuff the ballot box. But there’s one wrinkle: “If however a second vote is to take place immediately, the ballots from the first vote will be burned only at the end, together with those from the second vote.” I assume that’s done so there’s only one plume of smoke for the two elections, but it would be more secure to burn each set of ballots before the next round of voting. (Although the stack of ballots is pierced with a needle and thread and tied together, which 1) marks them as used, and 2) makes them harder to reuse.)

And lastly, the cardinals wear “choir dress” during the voting: translucent lace sleeves under a short red cape. That makes sleight-of-hand tricks much harder.

It’s possible for one Scrutineer to misrecord the votes, but with three Scrutineers, the discrepancy would be quickly detected. I presume a recount would take place, and the correct tally would be verified. Two or three Scrutineers in cahoots with each other could do more mischief, but since the Scrutineers are chosen randomly, the probability of a cabal being selected is very low. And then the Revisers check everything.
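
You can put a number on the cabal risk. A quick hypergeometric calculation, as a Python sketch; the size of the hypothetical cabal is the free variable:

  # Probability that at least two of the three randomly chosen Scrutineers
  # belong to a cabal of c colluding cardinals among the 115 electors.
  from math import comb

  N, DRAWN = 115, 3
  for c in (2, 5, 10):
      p = sum(comb(c, k) * comb(N - c, DRAWN - k) for k in (2, 3)) / comb(N, DRAWN)
      print(f"cabal of {c}: P(two or more colluding Scrutineers) = {p:.4%}")

Even a ten-member cabal gets two of its members chosen as Scrutineers for a given ballot less than 2% of the time.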

More interesting is to try to attack the system of selecting Scrutineers, which isn’t well defined in the document. Influencing the selection of Scrutineers and Revisers seems a necessary first step towards influencing the election.

Ballots with more than one name (overvotes) are void, and I assume the same is true for ballots with no name written on them (undervotes). Illegible or ambiguous ballots are much more likely, and I presume they are discarded. The rules do have a provision for multiple ballots by the same cardinal: “If during the opening of the ballots the Scrutineers should discover two ballots folded in such a way that they appear to have been completed by one elector, if these ballots bear the same name they are counted as one vote; if however they bear two different names, neither vote will be valid; however, in neither of the two cases is the voting session annulled.” This surprises me, although I suppose it has happened by accident.

If there’s a weak step, it’s the counting of the ballots. There’s no real reason to do a pre-count, and it gives the Scrutineer doing the transfer a chance to swap legitimate ballots with others he previously stuffed up his sleeve. I like the idea of randomizing the ballots, but putting the ballots in a wire cage and spinning it around would accomplish the same thing more securely, albeit with less reverence.

And if I were improving the process, I would add some kind of white-glove treatment to prevent a Scrutineer from hiding a pencil lead or pen tip under his fingernails. The requirement to write out the candidate’s name in full, though, already provides some resistance against this sort of attack.

The recent change in the process that lets the cardinals go back and forth from the chapel into their dorm rooms—instead of being locked in the chapel the whole time as was done previously—makes the process slightly less secure. But I’m sure it makes it a lot more comfortable.

Lastly, there’s the potential for one of the Infirmarii to do what he wants when transcribing the vote of an infirm cardinal, but there’s no way to prevent that. If the cardinal is concerned, he could ask all three Infirmarii to witness the ballot.

There are also enormous social—religious, actually—disincentives to hacking the vote. The election takes place in a chapel, and at an altar. The cardinals also swear an oath as they cast their ballots—further discouragement. And the cardinal electors are explicitly exhorted not to form any sort of cabal or make any plans to sway the election, under pain of excommunication: “The Cardinal electors shall further abstain from any form of pact, agreement, promise or other commitment of any kind which could oblige them to give or deny their vote to a person or persons.”

I’m sure there are negotiations and deals and influencing—cardinals are mortal men, after all, and such things are part of how humans come to agreement.

What are the lessons here? First, open systems conducted within a known group make voting fraud much harder. Every step of the election process is observed by everyone, and everyone knows everyone, which makes it harder for someone to get away with anything. Second, small and simple elections are easier to secure. This kind of process works to elect a Pope or a club president, but quickly becomes unwieldy for a large-scale election. The only way manual systems work is through a pyramid-like scheme, with small groups reporting their manually obtained results up the chain to more central tabulating authorities.
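
The pyramid is easy to picture in code. A toy Python sketch (real systems would add audits and paper trails at every level):

  # Toy sketch of pyramid-style tabulation: each small group hand-counts
  # its own ballots, and only compact tallies travel up the chain.
  from collections import Counter

  def tabulate(node):
      if isinstance(node, list):           # a leaf: one group's ballots
          return Counter(node)             # counted locally, by hand
      total = Counter()                    # an authority: sums child tallies
      for child in node.values():
          total += tabulate(child)
      return total

  districts = {"district A": {"precinct 1": ["X", "Y", "X"],
                              "precinct 2": ["X"]},
               "district B": {"precinct 3": ["Y", "Y"]}}
  print(tabulate(districts))               # Counter({'X': 3, 'Y': 3})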

And a third and final lesson: when an election process is left to develop over the course of a couple thousand years, you end up with something surprisingly good.

Rules for a papal election:
<http://www.vatican.va/holy_father/john_paul_ii/…>

There’s a picture of choir dress on this page:
<http://dappledphotos.blogspot.com/2005/01/…>


Counterpane News

Schneier was selected as one of the top 25 CTOs by Infoworld:
<http://www.infoworld.com/article/05/04/11/…>

Counterpane has announced a partnership with MessageLabs for secure e-mail services:
<http://www.counterpane.com/pr-20050215b.html>


The Doghouse: ExeShield

Yes, there are companies that believe that keeping cryptographic algorithms secret makes them more secure. “ExeShield uses the latest advances in software protection and encryption technology, to give your applications even more protection. Of course, for your security and ours, we won’t divulge the encryption scheme to anyone.”

My essay on why secrecy in cryptographic algorithms is bad for security:
<http://www.schneier.com/crypto-gram-0205.html#1>


Secure Flight Is in Trouble

Report #1: There’s a report from the Department of Homeland Security’s Inspector General that the TSA lied about its role in obtaining personal information about 12 million airline passengers to test Secure Flight.

The report doesn’t explicitly say that the TSA lied, but the TSA lied.

The details are worth reading. And when you read it, keep in mind that it’s written by the DHS’s own Inspector General. I presume a more independent investigator would be even more severe. Not that the report isn’t severe, mind you. Here are some highlights from an AP story:

“The report cites several occasions where TSA officials made inaccurate statements about passenger data:

“In September 2003, the agency’s Freedom of Information Act staff received hundreds of requests from JetBlue passengers asking if the TSA had their records. After a cursory search, the FOIA staff posted a notice on the TSA Web site that it had no JetBlue passenger data. Though the FOIA staff found JetBlue passenger records in TSA’s possession in May, the notice stayed on the Web site for more than a year.

“In November 2003, TSA chief James Loy incorrectly told the Governmental Affairs Committee that certain kinds of passenger data were not being used to test passenger prescreening.

“In September 2003, a technology magazine reporter asked a TSA spokesman whether real data were used to test the passenger prescreening system. The spokesman said only fake data were used; the responses “were not accurate,” the report said.”

There’s much more. The report reveals that TSA ordered Delta Air Lines to turn over passenger data in February 2002 to help the Secret Service determine whether terrorists or their associates were traveling in the vicinity of the Salt Lake City Olympics.

It also reveals that TSA used passenger data from JetBlue in the spring of 2003 to figure out how to change the number of people who would be selected for more screening under the existing system.

The report says that one of the TSA’s contractors working on passenger prescreening, Lockheed Martin, used a data sample from ChoicePoint.

The report also details how outside contractors used the data for their own purposes. And that “the agency neglected to inquire whether airline passenger data used by the vendors had been returned or destroyed.” And that “TSA did not consistently apply privacy protections in the course of its involvement in airline passenger data transfers.”

This is major stuff. It shows that the TSA lied to the public about its use of personal data again and again and again.

Report #2: The GAO (Government Accountability Office) issued its own report about Secure Flight. Last year, Congress passed a law that said that the TSA couldn’t implement Secure Flight until it met ten conditions: privacy protections, accuracy of data, oversight, cost and safeguards to ensure the system won’t be abused or accessed by unauthorized people, etc. The GAO report found nine of the ten conditions hadn’t yet been met and questioned whether Secure Flight would ultimately work.

Some tidbits: TSA plans to include the capability for criminal checks within Secure Flight (p. 12). The timetable has slipped by four months (p. 17). TSA might not be able to get personally identifiable passenger data in PNRs because of costs to the industry and lack of money (p. 18). TSA plans to have intelligence analysts staffed within TSA to identify false positives (p. 33). The DHS Investment Review Board has withheld approval from the “Transportation Vetting Platform” (p. 39). TSA doesn’t know how much the program will cost (p. 51). The final privacy rule is to be issued in April (p. 56).

These two reports put the TSA in a bind. It is prohibited by Congress from fielding Secure Flight until it meets a series of criteria. On the other hand, I’m not sure the TSA cares. It’s already announced plans to roll out Secure Flight. In August they’re going to implement the program nationwide with two still-unnamed airlines.

My own opinions of Secure Flight are well-known. I am a member of a working group to help evaluate the privacy of Secure Flight. While I believe that a program to match airline passengers against terrorist watch lists is a colossal waste of money that isn’t going to make us any safer, I said “…assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place.” I still believe that, but unfortunately I am prohibited by NDA from describing the improvements. I wish someone at TSA would get himself in front of reporters and do so.

IG report:
<http://www.dhs.gov/interweb/assetlibrary/…>

Article on IG report:
<http://www.dailystar.com/dailystar/news/67386.php>

My previous comments on Secure Flight:
<http://www.schneier.com/crypto-gram-0501.html#9>
<http://www.schneier.com/crypto-gram-0502.html#1>

Airline passenger data also used by the Centers for Disease Control:
<http://www.boston.com/news/nation/washington/…>
My commentary:
<https://www.schneier.com/blog/archives/2005/04/…>


Comments from Readers

From: Jonathan Tuliani <Jonathan.Tuliani cryptomathic.com>
Subject: The Failure of Two-Factor Authentication

I agree completely with your analysis that two-factor *user* authentication will not solve the problem of phishing attacks. Indeed, the situation is worse than you describe: the attacker does not need to resort to technically-advanced man-in-the-middle or virus/Trojan attacks; they simply need to ask the victim for a one-time password from their authentication token via a conventional spoof website, and to use it before it expires. In the case of counter-based rather than time-based schemes in particular, the time window available may be quite large.
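
The replay window is easy to demonstrate. Here is a sketch of a generic time-based token (a generic HMAC construction for illustration; not any particular vendor’s algorithm):

  # Generic time-stepped one-time password, for illustration only; this is
  # an HMAC-based construction, not any particular vendor's algorithm.
  import hashlib, hmac, struct

  def otp(secret: bytes, at: float, step: int = 60) -> str:
      counter = struct.pack(">Q", int(at) // step)
      digest = hmac.new(secret, counter, hashlib.sha1).digest()
      offset = digest[-1] & 0x0F
      code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
      return f"{code % 1_000_000:06d}"

  secret = b"shared-token-secret"
  t0 = 1_000_000_020.0                    # the start of a 60-second step
  phished = otp(secret, t0)               # victim types this into a spoof site
  print(phished == otp(secret, t0 + 59))  # True: the attacker has 59 seconds

With a counter-based token the window is even worse: the phished code stays valid until the victim next authenticates.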

The only solution to this problem is to migrate banking applications to explicit transaction authentication rather than user authentication. For example, a token with a calculator-style keyboard can prompt the user to enter the payee account number and amount to be paid directly into the token itself, and produce a one-time password that acts as a kind of MAC of these details.
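
In code, the idea is simply a truncated MAC over the transaction details (an illustrative sketch, not the actual protocol of any deployed token):

  # A transaction-authentication code: a short code derived from a MAC over
  # the payee and amount, so a phished code is useless for any other
  # transaction. Details are illustrative, not any deployed standard.
  import hashlib, hmac

  def transaction_code(token_secret: bytes, payee: str, amount_cents: int) -> str:
      mac = hmac.new(token_secret, f"{payee}|{amount_cents}".encode(),
                     hashlib.sha256).digest()
      return f"{int.from_bytes(mac[:4], 'big') % 1_000_000:06d}"

  secret = b"per-customer token secret"
  code = transaction_code(secret, "account-98765", 10_000)  # read off the token
  # The bank recomputes the code over the details *it* received; a
  # man-in-the-middle who redirected the payment gets a mismatch:
  print(code == transaction_code(secret, "account-11111", 10_000))  # False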

The problem is that transaction authentication is less user-friendly and the authentication tokens required tend to be larger and more costly. Nevertheless, such schemes are under consideration, most notably within the MasterCard Chip Authentication Program, which allows consumers to use their normal banking chip-card (more common in Europe than the US) together with a self-contained card reader to provide two-factor authentication for both user and transaction authentication.

SMS offers not only a separate channel to the consumer, but also a user interface independent of the PC screen, one that is invulnerable to both Trojan and man-in-the-middle attacks. SMS one-time passwords can therefore be used not only for user authentication, but also for transaction authentication, using messages such as “Pay $100 to account 12345? Confirm Code: AGEWN”

All this was described in my article ‘The Future of Phishing,’ published nearly a full year ago and still available, e.g., at <http://www.net-security.org/article.php?id=672>

From: Ernst Jan Plugge <rmc dds.nl>
Subject: The Failure of Two-Factor Authentication

I just read your interesting piece on two-factor authentication. You describe a system where a bank sends an SMS message for authentication, and the weakness inherent in it. My bank does something similar, but sidesteps some of these issues rather elegantly.

Authentication to the web application for my bank is done using just a password. I can prepare a batch of transactions within the application, without any further authentication. However, to confirm the batch, I have to provide a TAN, a transaction number. This is delivered by SMS to my mobile at the moment I initiate the final phase of the process, and the message includes a statement of the total amount of the transactions in the batch, and an ordinal TAN index, which bumps by one for each batch. I type in the number, and the batch is processed. SMS is, by the way, just one of the delivery mechanisms supported by the bank. You can also get a list of TANs printed on paper sent by snail mail. The options are mutually exclusive.

Someone who steals both my password and my mobile can, of course, rip me off. But a Trojan will have a much more difficult time of it. First of all, merely eavesdropping and replaying will not work, because the TAN is only valid for one transaction. A man-in-the-middle will have to monitor and divert my work in the application in real time, and substitute his own fraudulent details. The MitM will have to submit his batch for processing at almost the same time as me, and the amounts have to match exactly. So even if the MitM succeeds, the damage is quite limited.
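
A sketch of the server side makes clear why the binding helps (illustrative only; not my bank’s actual protocol):

  # Sketch of a TAN bound to a batch: the bank issues a one-time code tied
  # to the batch total and an ordinal index, delivered out of band by SMS.
  # Illustrative only -- not any bank's actual protocol.
  import secrets

  issued = {}   # TAN -> (index, total_cents), held server-side

  def issue_tan(index: int, total_cents: int) -> str:
      tan = f"{secrets.randbelow(1_000_000):06d}"
      issued[tan] = (index, total_cents)
      return tan   # texted to the customer along with the index and total

  def confirm_batch(tan: str, index: int, total_cents: int) -> bool:
      return issued.pop(tan, None) == (index, total_cents)  # one-time use

  tan = issue_tan(index=42, total_cents=150_00)
  print(confirm_batch(tan, 42, 150_00))   # True
  print(confirm_batch(tan, 42, 150_00))   # False: the TAN was consumed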

This scheme still has its weaknesses, but it’s sufficiently secure for me to trust my personal online banking affairs to it.

From: “Wolfgang Daum” <wdaum xavety.com>
Subject: The Doghouse: Xavety

The company Xavety <www.xavety.com> presents a new encryption method, based on chaos mathematics and polynomial integer arithmetic, thus ensuring platform independence and theoretically arbitrary accuracy of all calculations. The CHADSEA method, which stands for CHAotic Digital Signature, Encryption and Authentication, is a symmetric encryption method generating block ciphers.

The chaos generator is based on the chaos function of the logistic equation, which is implemented in its recursive form and within a certain range of bifurcation factors well known to produce chaotic number series. <http://www.xavety.com/Technology.htm> The recursive application of the logistic equation very quickly produces highly non-linear functions of arbitrary and unknown polynomial order. Together with the so-called self-similar nature of the logistic equation, which produces functional values chaotically within a well-limited solution interval containing an infinite number of solutions, it is by its mathematical nature impossible to reconstruct a starting value from a functional value, especially without knowing the number of iterations or the exact bifurcation factor used. CHADSEA implements this chaotic behavior so that it is a perfect pseudo-random number generator in the sense of a cryptographically strong pseudorandom bit generator forming poly-random collections, a perfect one-way (hash) function, and a perfect document signature algorithm.

Several statistical tests have been performed in order to validate that the CHADSEA method cannot be broken by statistical means. These tests are quite comprehensive and are described in detail in the following papers: the Signal-Noise Tests <http://www.xavety.com/Validation_SN_Test.htm>, the ENT Test Suite <http://www.xavety.com/Validation_ENT_Test.htm>, and the NIST Statistical Test Suite <http://www.xavety.com/Validation_NIST_Test.htm>.

The CHADSEA method passes all these tests, including all 189 statistical tests of the National Institute of Standards and Technology (NIST).

Unfortunately, Bruce Schneier of Counterpane Internet Security, Inc., the author and publisher of the Crypto-Gram newsletter, called this new method “snake oil” in his March 15, 2005 Crypto-Gram and placed the company Xavety in his “doghouse.” Because the newsletter is the most widely read publication in the market and Mr. Schneier is one of the most influential individuals in the industry, this may cause serious damage to the company.

Xavety Corporation invites the scientific community to participate in a validation of this promising new encryption method.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.
