Entries Tagged "keys"

Page 8 of 13

Smuggling Drugs in Unwitting People's Car Trunks

This is clever:

A few miles away across the Rio Grande, the FBI determined that Chavez and Gomez were using lookouts to monitor the SENTRI Express Lane at the border. The lookouts identified “targets” — people with regular commutes who primarily drove Ford vehicles. According to the FBI affidavit, the smugglers would follow their targets and get the vehicle identification number off the car’s dashboard. Then a corrupt locksmith with access to Ford’s vehicle database would make a duplicate key.

Keys in hand, the gang would put drugs in a car at night in Mexico and then pick up their shipment from the parked vehicle the next morning in Texas, authorities say.

This attack works because 1) there’s a database of keys available to lots of people, and 2) both the SENTRI system and the victims are predictable.

Posted on July 25, 2011 at 5:59 AM

Physical Key Escrow

This creates far more security risks than it solves:

The city council in Cedar Falls, Iowa has absolutely crossed the line. They voted 6-1 in favor of expanding the use of lock boxes on commercial property. Property owners would be forced to place the keys to their businesses in boxes outside their doors so that firefighters, in that one-in-a-million chance, would have easy access to get inside.

We in the computer security world have been here before, over ten years ago.

Posted on July 14, 2011 at 6:38 AM

RSA Security, Inc. Hacked

The company, not the algorithm. Here’s the corporate spin.

Our investigation has led us to believe that the attack is in the category of an Advanced Persistent Threat (APT). Our investigation also revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. We are very actively communicating this situation to RSA customers and providing immediate steps for them to take to strengthen their SecurID implementations.

Here are news articles. The worry is that source code to the company’s SecurID two-factor authentication product was stolen, which would possibly allow hackers to reverse-engineer or otherwise break the system. It’s hard to make any assessments about whether this is possible or likely without knowing 1) how SecurID’s cryptography works, and 2) exactly what was stolen from the company’s servers. We do not know either, and the corporate spin is as short on details as it is long on reassurances.

RSA Data Security, Inc. is probably pretty screwed if SecurID is compromised. Those hardware tokens have no upgrade path and would have to be replaced. How many of the company’s customers will replace them with competitors’ tokens? Probably a bunch. Hence, it’s in RSA’s best interest for their customers to forget this incident as quickly as possible.

There seem to be two likely scenarios if the attackers have compromised SecurID. One, they are a sophisticated organization that wants the information for a specific purpose. In that case, the attackers share RSA’s interest in keeping the incident quiet, and we’re unlikely to see widespread use of this information. Or two, they stole the stuff for conventional criminal purposes and will sell it. In that case, we’re likely to know pretty quickly.

Again, without detailed information — or at least an impartial assessment — it’s impossible to make any recommendations. Security is all about trust, and when trust is lost there is no security. Users of SecurID trusted RSA Data Security, Inc. to protect the secrets necessary to secure that system. To the extent the company failed to do that, it has lost its customers’ trust.

Posted on March 21, 2011 at 6:52 AM

Proprietary Encryption in Car Immobilizers Cracked

This shouldn’t be a surprise:

Karsten Nohl’s assessment of dozens of car makes and models found weaknesses in the way immobilisers are integrated with the rest of the car’s electronics.

The immobiliser unit should be connected securely to the vehicle’s electronic engine control unit, using the car’s internal data network. But these networks often use weaker encryption than the immobiliser itself, making them easier to crack.

What’s more, one manufacturer was even found to use the vehicle ID number as the supposedly secret key for this internal network. The VIN, a unique serial number used to identify individual vehicles, is usually printed on the car. “It doesn’t get any weaker than that,” Nohl says.
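The manufacturer and exact protocol aren’t named, but the failure mode is easy to sketch. In a toy challenge-response model (hypothetical, not the actual immobilizer protocol), whatever authentication the network key provides evaporates the moment the key is something an attacker can read off the car:

```python
import hmac
import hashlib

def respond(challenge: bytes, key: bytes) -> bytes:
    """Toy immobilizer-style challenge-response: prove knowledge of the key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# The weakness Nohl describes: the "secret" network key is the VIN,
# which is printed on the dashboard for anyone to read.
vin = b"1FTRX18W1XKA12345"      # hypothetical VIN, visible through the windshield

challenge = b"\x01\x02\x03\x04"
legit = respond(challenge, key=vin)

# An attacker who reads the VIN off the car computes the same response:
attacker = respond(challenge, key=vin)
assert attacker == legit        # the key provides no security at all
```

The MAC itself is perfectly sound; the system fails because the key is public.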

Posted on December 23, 2010 at 2:02 PM

Prepaid Electricity Meter Fraud

New attack:

Criminals across the UK have hacked the new keycard system used to top up pre-payment energy meters and are going door-to-door, dressed as power company workers, selling illegal credit at knock-down prices.

The pre-paid power meters use a key system. Normally people visit a shop to put credit on their key, which they then take home and slot into their meter.

The conmen have cracked the system and can go into people’s houses and put credit on their machine using a hacked key. If they use this, it can be detected the next time they top up their key legitimately.

The system detects the fraud, in that it shows up on audit at a later time. But by then, the criminals are long gone. Clever.

It gets worse:

Conmen sell people the energy credit and then warn them that if they go back to official shops they will end up being charged for the energy they used illegally.

They then trap people and ratchet up the sales price to customers terrified they will have to pay twice, something Scottish Power confirmed is starting to happen here in Scotland.

Posted on September 21, 2010 at 1:42 PM

Master HDCP Key Cracked

The master key for the High-Bandwidth Digital Content Protection standard — that’s what encrypts digital television between set-top boxes and digital televisions — has been cracked and published. (Intel confirmed that the key is real.) The ramifications are unclear:

But even if the code is real, it might not immediately foster piracy as the cracking of CSS on DVDs did more than a decade ago. Unlike CSS, which could be implemented in software, HDCP requires custom hardware. The threat model for Hollywood, then, isn’t that a hacker could use the master key to generate a DeCSS-like program for HD, but that shady hardware makers, perhaps in China, might eventually create and sell black-market HDCP cards that would allow the free copying of protected high-def content.
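Why does one leaked master key break every device? HDCP’s key agreement is publicly documented to be Blom-like: a secret symmetric master matrix generates each device’s private key vector from its public key-selection vector (KSV), so any two devices compute the same shared key, and anyone holding the matrix can mint valid keys for arbitrary KSVs. A simplified sketch (tiny matrix, random 0/1 KSVs rather than HDCP’s real 20-of-40 format):

```python
import random

N = 8              # real HDCP uses 40x40; kept small for illustration
MOD = 2 ** 56      # HDCP private keys are 56-bit values

random.seed(1)
# The master secret: a symmetric N x N matrix. This is what leaked.
master = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        master[i][j] = master[j][i] = random.randrange(MOD)

def make_device():
    """Each device gets a public selection vector (its KSV) and a
    private key vector derived from the master matrix: priv = M * ksv."""
    ksv = [random.randrange(2) for _ in range(N)]
    priv = [sum(master[i][j] * ksv[j] for j in range(N)) % MOD
            for i in range(N)]
    return ksv, priv

def shared_key(my_priv, their_ksv):
    # K = their_ksv^T * M * my_ksv; symmetric because M is symmetric,
    # so both sides of the HDMI link derive the same key.
    return sum(p * b for p, b in zip(my_priv, their_ksv)) % MOD

ksv_a, priv_a = make_device()
ksv_b, priv_b = make_device()
assert shared_key(priv_a, ksv_b) == shared_key(priv_b, ksv_a)

# With the leaked master matrix, an attacker forges keys for ANY KSV —
# exactly what a black-market HDCP stripper box would do:
fake_ksv = [1] * N
fake_priv = [sum(master[i][j] * fake_ksv[j] for j in range(N)) % MOD
             for i in range(N)]
assert shared_key(fake_priv, ksv_a) == shared_key(priv_a, fake_ksv)
```

This is also why there’s no revocation fix: forged devices are indistinguishable from licensed ones.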

Posted on September 17, 2010 at 1:57 PM

DNSSEC Root Key Split Among Seven People

The DNSSEC root key has been divided among seven people:

Part of ICANN’s security scheme is the Domain Name System Security, a security protocol that ensures Web sites are registered and “signed” (this is the security measure built into the Web that ensures when you go to a URL you arrive at a real site and not an identical pirate site). Most major servers are a part of DNSSEC, as it’s known, and during a major international attack, the system might sever connections between important servers to contain the damage.

A minimum of five of the seven keyholders — one each from Britain, the U.S., Burkina Faso, Trinidad and Tobago, Canada, China, and the Czech Republic — would have to converge at a U.S. base with their keys to restart the system and connect everything once again.

That’s a secret sharing scheme they’re using, most likely Shamir’s Secret Sharing.
We know the names of some of the keyholders.

Paul Kane — who lives in the Bradford-on-Avon area — has been chosen to look after one of seven keys, which will ‘restart the world wide web’ in the event of a catastrophic event.

Dan Kaminsky is another.

I don’t know how they picked those countries.
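If it is Shamir’s scheme, the 5-of-7 arrangement works like this: the secret is the constant term of a random degree-4 polynomial over a finite field, each keyholder gets one point on the curve, and any five points reconstruct the polynomial by Lagrange interpolation while four reveal nothing. A minimal sketch (toy secret; ICANN’s actual parameters and ceremony are far more involved):

```python
import random

P = 2 ** 127 - 1    # a Mersenne prime; all arithmetic is in GF(P)

def split(secret, n=7, k=5):
    """Shamir: the secret is f(0) of a random degree-(k-1) polynomial;
    share i is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation evaluated at x = 0 recovers the secret."""
    secret = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P
    return secret

root_key = 0xC0FFEE                      # hypothetical stand-in secret
shares = split(root_key)                 # one share per keyholder
assert recover(shares[:5]) == root_key   # any 5 of the 7 suffice
assert recover(shares[2:]) == root_key   # a different 5 work too
# Any 4 shares reconstruct a wrong value (and leak nothing about f(0)).
```

The appeal for something like the DNSSEC root is exactly this threshold property: no individual, and no group smaller than five, can act alone.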

Posted on July 28, 2010 at 11:12 AM

Data at Rest vs. Data in Motion

For a while now, I’ve pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data — data at rest — it was viewed as a form of communication. In “Applied Cryptography,” I described encrypting stored data in this way: “a stored message is a way for someone to communicate with himself through time.” Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are instantaneous. Encryption keys can be ephemeral, and systems like the STU-III telephone can be designed such that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a “computer” that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it’s primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people’s brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let’s take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn’t make any sense. The whole point of storing credit card numbers on a website is so it’s accessible — so each time I buy something, I don’t have to type it in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet’s infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that — I’m being approximate here), but increases the attacker’s workload exponentially. For many years, we have exploited that mathematical imbalance.
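The arithmetic behind that imbalance is worth making concrete: brute-forcing an n-bit key costs up to 2^n trials, while the defender’s cost grows roughly linearly in n.

```python
# Defender's cost to use a key grows roughly linearly with its length;
# the attacker's brute-force cost grows exponentially.
for bits in (64, 65, 128):
    trials = 2 ** bits                     # worst-case brute-force attempts
    print(f"{bits}-bit key: {trials:.3e} trials")

# Adding ONE bit doubles the attacker's work:
assert 2 ** 65 == 2 * 2 ** 64

# Doubling the key length SQUARES the attacker's work (2^128 = (2^64)^2),
# while the defender's per-operation cost merely (roughly) doubles:
assert 2 ** 128 == (2 ** 64) ** 2
```

A 64-bit search is around 1.8 × 10^19 trials; a 128-bit search is around 3.4 × 10^38. That gap is the mathematical advantage the essay describes.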

Computer security is much more balanced. There’ll be a new attack, and a new defense, and a new attack, and a new defense. It’s an arms race between attacker and defender. And it’s a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we’re stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn’t try to break PGP; they simply installed a keyboard sniffer on the target’s computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can’t rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.

EDITED TO ADD (7/14): As several readers pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That’s a harder hacking problem, and this is why credit-card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it’s a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer-security sense and not in a cryptography sense.
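The value of a separate encryption appliance can be sketched in a few lines. In this toy model (hypothetical class names, and a deliberately insecure SHA-256-counter-mode stream cipher standing in for a real authenticated cipher like AES-GCM), the database server holds only ciphertext and the key lives on a different host, so an attacker must compromise both:

```python
import hashlib
from itertools import count

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256 counter-mode keystream.
    Illustrative only; use a vetted authenticated cipher in practice."""
    stream = bytearray()
    for block in count():
        if len(stream) >= len(data):
            break
        stream += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
    return bytes(b ^ k for b, k in zip(data, stream))

class KeyAppliance:
    """Stands in for a separate host that holds the key. The database
    server ships data over; the key never leaves this box."""
    def __init__(self):
        self._key = b"\x13" * 32          # hypothetical key material
    def encrypt(self, plaintext: bytes) -> bytes:
        return keystream_xor(self._key, plaintext)
    def decrypt(self, ciphertext: bytes) -> bytes:
        return keystream_xor(self._key, ciphertext)

appliance = KeyAppliance()
card = b"4111 1111 1111 1111"
stored = appliance.encrypt(card)          # all the database server holds
assert appliance.decrypt(stored) == card  # only the appliance can recover it
```

Stealing the database alone now yields ciphertext; stealing the appliance alone yields a key with nothing to decrypt. That is the harder, two-target hacking problem the PCI requirement is after.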

Posted on June 30, 2010 at 12:53 PM

