Entries Tagged "ID cards"


Lessons Learned from the Estonian National ID Security Flaw

Estonia recently suffered a major flaw in the security of its national ID card. This article discusses the fix and the lessons learned from the incident:

In the future, the infrastructure dependency on one digital identity platform must be decreased, and the use of several alternatives must be encouraged and promoted. In addition, the update and replacement capacity, both remote and physical, should be increased. We also recommend that the government procure from the eID providers the readiness to act fast in force majeure situations. While deciding on the new eID platforms, the need to replace cryptographic primitives must be taken into account—particularly the possibility of the need to replace algorithms with those that are not even in existence yet.

Posted on December 18, 2017 at 6:08 AM

Security Flaw in Infineon Smart Cards and TPMs

A security flaw in Infineon smart cards and TPMs allows an attacker to recover private keys from the public keys. Basically, the key generation algorithm sometimes creates public keys that are vulnerable to Coppersmith’s attack:

While all keys generated with the library are much weaker than they should be, it’s not currently practical to factorize all of them. For example, 3072-bit and 4096-bit keys aren’t practically factorable. But oddly enough, the theoretically stronger, longer 4096-bit key is much weaker than the 3072-bit key and may fall within the reach of a practical (although costly) factorization if the researchers’ method improves.

To spare time and cost, attackers can first test a public key to see if it’s vulnerable to the attack. The test is inexpensive, requires less than 1 millisecond, and its creators believe it produces practically zero false positives and zero false negatives. The fingerprinting allows attackers to expend effort only on keys that are practically factorizable.
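The paper isn't out yet, so the details of the test are unknown; the sketch below is my own assumption of what such a fingerprint could look like, not the researchers' code. It checks whether a modulus has the telltale structure of being assembled from powers of 65537, by testing the modulus's residues against the subgroup generated by 65537 modulo a handful of small primes (the prime list is purely illustrative). A test like this touches only the public key, which is why it is so cheap.

```python
# Hypothetical sketch of a structural fingerprint test on an RSA
# public modulus. Keys built from powers of 65537 leave residues
# (mod small primes) that always land inside the subgroup
# generated by 65537 -- a property a random modulus almost never has.

SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 79, 97, 103, 107, 109, 127]


def subgroup(p):
    """Return the multiplicative subgroup of Z/pZ generated by 65537."""
    seen, x = set(), 1
    while True:
        x = (x * 65537) % p
        if x in seen:
            return seen
        seen.add(x)


def looks_vulnerable(n):
    """True iff n mod p lies in <65537> for every tested small prime."""
    return all(n % p in subgroup(p) for p in SMALL_PRIMES)
```

A modulus that is literally a power of 65537 passes every residue test, while a typical integer fails almost immediately, so the check costs only a few modular reductions per key.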

This is the flaw in the Estonian national ID card we learned about last month.

The paper isn’t online yet. I’ll post it when it is.

Ouch. This is a bad vulnerability, and it’s in systems—like the Estonian national ID card—that are critical.

EDITED TO ADD (11/14): More information from the researchers.

Posted on October 17, 2017 at 9:24 AM

Security Flaw in Estonian National ID Card

We have no idea how bad this really is:

On 30 August, an international team of researchers informed the Estonian Information System Authority (RIA) of a vulnerability potentially affecting the digital use of Estonian ID cards. The possible vulnerability affects a total of almost 750,000 ID-cards issued starting from October 2014, including cards issued to e-residents. The ID-cards issued before 16 October 2014 use a different chip and are not affected. Mobile-IDs are also not impacted.

My guess is that it’s worse than the politicians are saying:

According to Peterkop, the current data shows this risk to be theoretical and there is no evidence of anyone’s digital identity being misused. “All ID-card operations are still valid and we will take appropriate actions to secure the functioning of our national digital-ID infrastructure. For example, we have restricted the access to Estonian ID-card public key database to prevent illegal use.”

And because this system is so important in local politics, the effects are significant:

In the light of current events, some Estonian politicians have called for postponing the upcoming local elections, due to take place on 16 October. In Estonia, approximately 35% of voters use digital identity to vote online.

But the Estonian prime minister, Jüri Ratas, said at a press conference on 5 September that “this incident will not affect the course of the Estonian e-state.” Ratas also recommended using Mobile-IDs where possible. The prime minister said that the State Electoral Office will decide whether it will allow the usage of ID cards at the upcoming local elections.

The Estonian Police and Border Guard estimates it will take approximately two months to fix the issue with faulty cards. The authority will involve as many Estonian experts as possible in the process.

This is exactly the sort of thing I worry about as ID systems become more prevalent and more centralized. Anyone want to place bets on whether a foreign country is going to try to hack the next Estonian election?

Another article.

EDITED TO ADD (9/18): More details.

Posted on September 5, 2017 at 3:23 PM

Human-Machine Trust Failures

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition doesn’t happen as expected. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both parts. Each time things go wrong, and the machine portion doesn’t communicate well, the human portion trusts it a little less.

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

Posted on September 5, 2013 at 8:32 AM

Hacking TSA PreCheck

I have a hard time getting worked up about this story:

I have X’d out any information that you could use to change my reservation. But it’s all there: PNR, seat assignment, flight number, name, etc. But what is interesting is the bolded three on the end. This is the TSA Pre-Check information. The number means the number of beeps: 1 beep, no Pre-Check; 3 beeps, yes Pre-Check. On this trip, as you can see, I am eligible for Pre-Check. Also, this information is not encrypted in any way.

What terrorists or really anyone can do is use a website to decode the barcode and get the flight information, put it into a text file, change the 1 to a 3, then use another website to re-encode it into a barcode. Finally, using a commercial photo-editing program or any program that can edit graphics, they can replace the barcode in their boarding pass with the new one they created. Even scarier is that people can do this to change names. So if they have a fake ID, they can use this method to make a valid boarding pass that matches their fake ID. The really scary part is that this will get past the TSA document checker, because the scanners the TSA use are just barcode decoders; they don’t check against the real-time information. So the TSA document checker will not pick up on the alterations. This means, as long as they sub in a 3, they can always use the Pre-Check line.

What a dumb way to design the system. It would be easier—and far more secure—if the boarding pass checker just randomly chose 10%, or whatever percentage they want, of PreCheck passengers to send through regular screening. Why go through the trouble of encoding it in the barcode and then reading it?
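That alternative is easy to sketch. The function below is a hypothetical illustration of the random spot-check idea, with the 10% rate taken from the text; the names and interface are invented for the example, not any real TSA system. The point is that the lane decision depends on a server-side roll of the dice rather than on a bit the passenger can print themselves.

```python
import random

# Illustrative rate from the text: send 10% of PreCheck passengers
# through regular screening at random.
PRECHECK_SPOT_CHECK_RATE = 0.10


def screening_lane(is_precheck, rng=random.random):
    """Decide a passenger's lane at the checkpoint.

    Because the spot check is random and decided at the checkpoint,
    a forged barcode bit buys an attacker nothing predictable:
    every PreCheck passenger still faces a chance of full screening.
    """
    if is_precheck and rng() >= PRECHECK_SPOT_CHECK_RATE:
        return "precheck"
    return "regular"
```

Passing `rng` in as a parameter just makes the randomness testable; in deployment it would be the checkpoint's own random source, out of the passenger's control.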

And—of course—this means that you can still print your own boarding pass.

On the other hand, I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security. So I don’t feel any less safe because of this vulnerability.

Still, I am surprised. Is this the same in other countries? Lots of countries scan my boarding pass before allowing me through security: France, the Netherlands, the UK, Japan, even Uruguay at Montevideo Airport when I flew out of there yesterday. I always assumed that those systems were connected to the airlines’ reservation databases. Does anyone know?

Posted on October 26, 2012 at 6:46 AM

High-Quality Fake IDs from China

USA Today article:

Most troubling to authorities is the sophistication of the forgeries: Digital holograms are replicated, PVC plastic identical to that found in credit cards is used, and ink appearing only under ultraviolet light is stamped onto the cards.

Each of those manufacturing methods helps the IDs defeat security measures aimed at identifying forged documents.

The overseas forgers are bold enough to sell their wares on websites, USA TODAY research finds. Anyone with an Internet connection and $75 to $200 can order their personalized ID card online from such companies as ID Chief. Buyers pick the state, address, name and send in a scanned photo and signature to complete their profile.

ID Chief, whose website is based in China, responds personally to each buyer with a money-order request.

[…]

According to Huff of the Virginia agency, it has always been easy for the untrained eye to be fooled by fake IDs. The difference is, Huff said, that the new generation of forged IDs is “good enough to fool the trained eye.”

The only real solution here is to move the security model from the document to the database. With online verification, the document matters much less, because it is nothing more than a pointer into a database. Think about credit cards.
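As a minimal sketch of that model (the database contents and function names here are invented for illustration), the card carries nothing but an identifier, and every check is a live lookup against the issuer's records. Forging the physical document then accomplishes nothing, because the document asserts nothing on its own.

```python
# Hypothetical sketch of database-backed ID verification: the card is
# only a pointer into the issuer's records (the model credit cards use).

ID_DATABASE = {  # stand-in for the issuer's authoritative records
    "A1234567": {"name": "Jane Doe", "dob": "1990-01-01", "revoked": False},
}


def verify(card_id, claimed_name):
    """Check a presented card against the live database.

    An unknown or revoked identifier fails outright, and a mismatched
    name fails even if the physical card is a perfect forgery.
    """
    record = ID_DATABASE.get(card_id)
    if record is None or record["revoked"]:
        return False
    return record["name"] == claimed_name
```

Revocation also becomes a one-row database update instead of a hunt for physical cards in circulation.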

Posted on June 13, 2012 at 6:45 AM

Developments in Facial Recognition

Eventually, it will work. You’ll be able to wear a camera that will automatically recognize someone walking towards you, and an earpiece that will relay who that person is and maybe something about him. None of the technologies required to make this work are hard; it’s just a matter of getting the error rate down low enough for it to be a useful system. And there have been a number of recent research results and news stories that illustrate what this new world might look like.

The police want this sort of system. I already blogged about MORIS, an iris-scanning technology that several police forces in the U.S. are using. The next step is the face-scanning glasses that the Brazilian police claim they will be wearing at the 2014 World Cup.

A small camera fitted to the glasses can capture 400 facial images per second and send them to a central computer database storing up to 13 million faces.

The system can compare biometric data at 46,000 points on a face and will immediately signal any matches to known criminals or people wanted by police.

In the future, this sort of thing won’t be limited to the police. Facebook has recently embarked on a major photo tagging project, and already has the largest collection of identified photographs in the world outside of a government. Researchers at Carnegie Mellon University have combined the public part of that database with a camera and face-recognition software to identify students on campus. (The paper fully describing their work is under review and not online yet, but slides describing the results can be found here.)

Of course, there are false positives—as there are with any system like this. That’s not a big deal if the application is a billboard with face-recognition serving different ads depending on the gender and age—and eventually the identity—of the person looking at it, but is more problematic if the application is a legal one.

In Boston, someone erroneously had his driver’s license revoked:

It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.

And apparently, he has company. Last year, the facial recognition system picked out more than 1,000 cases that resulted in State Police investigations, officials say. And some of those people are guilty of nothing more than looking like someone else. Not all go through the long process that Gass says he endured, but each must visit the Registry with proof of their identity.

[…]

At least 34 states are using such systems. They help authorities verify a person’s claimed identity and track down people who have multiple licenses under different aliases, such as underage people wanting to buy alcohol, people with previous license suspensions, and people with criminal records trying to evade the law.

The problem is less with the system, and more with the guilty-until-proven-innocent way in which the system is used.

Kaprielian said the Registry gives drivers enough time to respond to the suspension letters and that it is the individual’s “burden” to clear up any confusion. She added that protecting the public far outweighs any inconvenience Gass or anyone else might experience.

“A driver’s license is not a matter of civil rights. It’s not a right. It’s a privilege,” she said. “Yes, it is an inconvenience [to have to clear your name], but lots of people have their identities stolen, and that’s an inconvenience, too.”

IEEE Spectrum and The Economist have published similar articles.

EDITED TO ADD (8/3): Here’s a system embedded in a pair of glasses that automatically analyzes and relays micro-facial expressions. The goal is to help autistic people who have trouble reading emotions, but you could easily imagine this sort of thing becoming common. And what happens when we start relying on these computerized systems and ignoring our own intuition?

EDITED TO ADD: CV Dazzle is camouflage from face detection.

Posted on August 2, 2011 at 1:33 PM

Man Flies with Someone Else's Ticket and No Legal ID

Last week, I got a bunch of press calls about Olajide Oluwaseun Noibi, who flew from New York to Los Angeles using an expired ticket in someone else’s name and a university ID. They all wanted to know what this says about airport security.

It says that airport security isn’t perfect, and that people make mistakes. But it’s not something that anyone should worry about. It’s not like Noibi figured out a new hole in the airport security system, one that he was able to exploit repeatedly. He got lucky. He got real lucky. It’s not something a terrorist can build a plot around.

I’m even less concerned because I’ve never thought the photo ID check had any value. Noibi was screened, just like any other passenger. Even the TSA blog makes this point:

In this case, TSA did not properly authenticate the passenger’s documentation. That said, it’s important to note that this individual received the same thorough physical screening as other passengers, including being screened by advanced imaging technology (body scanner).

Seems like the TSA is regularly downplaying the value of the photo ID check. This is from a Q&A about Secure Flight, their new system to match passengers with watch lists:

Q: This particular “layer” isn’t terribly effective. If this “layer” of security can be circumvented by anyone with a printer and a word processor, this doesn’t seem to be a terribly useful “layer” … especially looking at the amount of money being expended on this particular “layer”. It might be that this money could be more effectively spent on other “layers”.

A: TSA uses layers of security to ensure the security of the traveling public and the Nation’s transportation system. Secure Flight’s watchlist name matching constitutes only one security layer of the many in place to protect aviation. Others include intelligence gathering and analysis, airport checkpoints, random canine team searches at airports, federal air marshals, federal flight deck officers and more security measures both visible and invisible to the public.

Each one of these layers alone is capable of stopping a terrorist attack. In combination their security value is multiplied, creating a much stronger, formidable system. A terrorist who has to overcome multiple security layers in order to carry out an attack is more likely to be pre-empted, deterred, or to fail during the attempt.

Yes, the answer says that they need to spend millions to ensure that terrorists with a viable plot also need a computer, but you can tell that their heart wasn’t in the answer. “Checkpoints! Dogs! Air marshals! Ignore the stupid photo ID requirement.”

Noibi is an embarrassment for the TSA and for the airline Virgin America, who are both supposed to catch this kind of thing. But I’m not worried about the security risk, and neither is the TSA.

Posted on July 6, 2011 at 5:53 AM

Federated Authentication

New paper by Ross Anderson: “Can We Fix the Security Economics of Federated Authentication?“:

There has been much academic discussion of federated authentication, and quite some political manoeuvring about ‘e-ID’. The grand vision, which has been around for years in various forms but was recently articulated in the US National Strategy for Trustworthy Identities in Cyberspace (NSTIC), is that a single logon should work everywhere [1]. You should be able to use your identity provider of choice to log on anywhere; so you might use your driver’s license to log on to Gmail, or use your Facebook logon to file your tax return. More restricted versions include the vision of governments of places like Estonia and Germany (and until May 2010 the UK) that a government-issued identity card should serve as a universal logon. Yet few systems have been fielded at any scale.

In this paper I will briefly discuss the four existing examples we have of federated authentication, and then go on to discuss a much larger, looming problem. If the world embraces the Apple vision of your mobile phone becoming your universal authentication device—so that your phone contains half a dozen credit cards, a couple of gift cards, a dozen coupons and vouchers, your AA card, your student card and your driving license—how will we manage all this? A useful topic for initial discussion, I argue, is revocation. Such a phone will become a target for bad guys, both old and new. What happens when someone takes your phone off you at knifepoint, or when it gets infested with malware? Who do you call, and what will they do to make the world right once more?

Blog post.

Posted on March 29, 2011 at 6:43 AM

