Entries Tagged "encryption"


Wired on Identity Theft

This is a good editorial from Wired on identity theft.

Following are the fixes we think Congress should make:

Require businesses to secure data and levy fines against those who don’t. Congress has mandated tough privacy and security standards for companies that handle health and financial data. But the rules for credit agencies are woefully inadequate. And they don’t cover other businesses and organizations that handle sensitive personal information, such as employers, academic institutions and data brokers. Congress should mandate strict privacy and security standards for anyone who handles sensitive information, and apply tough financial penalties against companies that fail to comply.

Require companies to encrypt all sensitive customer data. Any standard created to protect data should include technical requirements to scramble the data—both in storage and during transit when data is transferred from one place to another. Recent incidents involving unencrypted Bank of America and CitiFinancial data tapes that went missing while being transferred to backup centers make it clear that companies think encryption is necessary only in certain circumstances.

Keep the plan simple and provide authority and funds to the FTC to ensure legislation is enforced. Efforts to secure sensitive data in the health and financial industries led to laws so complicated and confusing that few have been able to follow them faithfully. And efforts to monitor compliance have been inadequate. Congress should develop simpler rules tailored to each specific industry segment, and give the FTC the necessary funding to enforce them.

Keep Social Security numbers for Social Security. Social Security numbers appear on medical and voter-registration forms as well as on public records that are available through a simple internet search. This makes it all too easy for a thief to obtain the single identifying number that can lead to financial ruin for victims. Americans need a different unique identifying number specifically for credit records, with guarantees that it will never be used for authentication purposes.

Force credit agencies to scrutinize credit-card applications and verify the identity of credit-card applicants. Giving Americans easy access to credit has superseded all other considerations in the cutthroat credit-card business, helping thieves open accounts in victims’ names. Congress needs to bring sane safeguards back into the process of approving credit—even if it means adding costs and inconveniencing powerful banking and financial interests.

Extend fraud alerts beyond 90 days. The Fair Credit Reporting Act allows anyone who suspects that their personal information has been stolen to place a fraud alert on their credit record. This currently requires a creditor to take “reasonable” steps to verify the identity of anyone who applies for credit in the individual’s name. It also requires the creditor to contact the individual who placed the fraud alert on the account if they’ve provided their phone number. Both conditions apply for 90 days. Of course, nothing prevents identity thieves from waiting until the short-lived alert period expires before taking advantage of stolen information. Congress should extend the default window for credit alerts to a minimum of one year.

Allow individuals to freeze their credit records so that no one can access the records without the individuals’ approval. The current credit system opens credit reports to almost anyone who requests them. Individuals should be able to “freeze” their records and have them opened to others only when the individual contacts a credit agency and requests that it release a report to a specific entity.

Require opt-in rather than opt-out permission before companies can share or sell data. Many businesses currently allow people to decline inclusion in marketing lists, but only if customers actively request it. This system, known as opt-out, inherently favors companies by making it more difficult for consumers to escape abusive data-sharing practices. In many cases, consumers need to wade through confusing instructions, and send a mail-in form in order to be removed from pre-established marketing lists. The United States should follow an opt-in model, where companies would be forced to collect permission from individuals before they can traffic in personal data.

Require companies to notify consumers of any privacy breaches, without preventing states from enacting even tougher local laws. Some 37 states have enacted or are considering legislation requiring businesses to notify consumers of data breaches that affect them. A similar federal measure has also been introduced in the Senate. These are steps in the right direction. But the federal bill has a major flaw: It gives companies an easy out in the case of massive data breaches, where the number of people affected exceeds 500,000, or the cost of notification would exceed $250,000. In those cases, companies would not be required to notify individuals, but could comply simply by posting a notice on their websites. Congress should close these loopholes. In addition, any federal law should be written to ensure that it does not pre-empt state notification laws that take a tougher stance.

As I’ve written previously, this won’t solve identity theft. But it will make identity theft harder, and it will protect everyone’s privacy. These are good recommendations.

Posted on June 29, 2005 at 7:18 AM

Password Safe

Password Safe is a free Windows password-storage utility. These days, anyone who is on the Web regularly needs too many passwords, and it’s impossible to remember them all. I have long advocated writing them all down on a piece of paper and putting it in your wallet.

I designed Password Safe as another solution. It’s a small program that encrypts all of your passwords using one passphrase. The program is easy to use, and isn’t bogged down by lots of unnecessary features. Security through simplicity.
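The core idea, a single passphrase-derived key protecting the whole password database, looks roughly like this. (A conceptual sketch only, assuming Python's cryptography package; this is not Password Safe's actual file format, which is documented in the source.)

```python
import base64, json, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    # Stretch the passphrase into a 256-bit key; the salt and the
    # high iteration count make brute-force guessing expensive.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)  # stored alongside the encrypted database
vault = Fernet(key_from_passphrase("one strong passphrase", salt))

# Encrypt the entire password database under the one derived key.
blob = vault.encrypt(json.dumps({"example.com": "hunter2"}).encode())
print(json.loads(vault.decrypt(blob)))  # {'example.com': 'hunter2'}
```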

Password Safe 2.11 is now available.

Currently, Password Safe is an open source project at SourceForge, and is run by Rony Shapiro. Thank you to him and to all the other programmers who worked on the project.

Note that my Password Safe is not the same as this, this, this, or this PasswordSafe. (I should have picked a more obscure name for the program.)

It is the same as this, for the PocketPC.

Posted on June 15, 2005 at 1:35 PM

Eric Schmidt on Secrecy and Security

From InformationWeek:

InformationWeek: What about security? Have you been paying as much attention to security as, say, Microsoft—you can debate whether or not they’ve been successful, but they’ve poured a lot of resources into it.

Schmidt: More people to a bad architecture does not necessarily make a more secure system. Why don’t you define security so I can answer your question better?

InformationWeek: I suppose it’s an issue of making the technology transparent enough that people can deploy it with confidence.

Schmidt: Transparency is not necessarily the only way you achieve security. For example, part of the encryption algorithms are not typically made available to the open source community, because you don’t want people discovering flaws in the encryption.

Actually, he’s wrong. Everything about an encryption algorithm should always be made available to everyone, because otherwise you’ll invariably have exploitable flaws in your encryption.

My essay on the topic is here.

Posted on May 31, 2005 at 1:09 PM

Holding Computer Files Hostage

This one has been predicted for years. Someone breaks into your network, encrypts your data files, and then demands a ransom to hand over the key.

I don’t know how the attackers did it, but below is probably the best way. A worm could be programmed to do it.

1. Break into a computer.

2. Generate a random 256-bit file-encryption key.

3. Encrypt the file-encryption key with a common RSA public key.

4. Encrypt data files with the file-encryption key.

5. Wipe data files and file-encryption key.

6. Wipe all free space on the drive.

7. Output a file containing the RSA-encrypted file-encryption key.

8. Demand ransom.

9. Receive ransom.

10. Receive encrypted file-encryption key.

11. Decrypt it and send it back.

In any situation like this, step 9 is the hardest. It’s where you’re most likely to get caught. I don’t know much about anonymous money transfer, but I don’t think Swiss bank accounts have the anonymity they used to.

You also might have to prove that you can decrypt the data. An easy modification is to encrypt a small piece of the data with a second file-encryption key; decrypting that sample on demand proves to the victim that you hold the RSA private key.
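Steps 2 through 4 are just standard hybrid encryption, the same construction used by PGP and TLS. A minimal sketch of that pattern using Python's cryptography package (my choice of library; the post doesn't describe any particular implementation):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hybrid encryption: a fresh random symmetric key encrypts the bulk data,
# and an RSA public key encrypts (wraps) that symmetric key.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Step 2: a random 256-bit file-encryption key.
file_key = AESGCM.generate_key(bit_length=256)

# Step 4: encrypt the data with the file-encryption key.
nonce = os.urandom(12)
ciphertext = AESGCM(file_key).encrypt(nonce, b"the data files", None)

# Steps 3 and 7: wrap the file-encryption key with the RSA public key.
wrapped_key = public_key.encrypt(file_key, oaep)

# Step 11: the holder of the RSA private key recovers the file-encryption key.
recovered = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"the data files"
```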

Internet attacks have changed over the last couple of years. They’re no longer about hackers. They’re about criminals. And we should expect to see more of this sort of thing in the future.

Posted on May 30, 2005 at 8:18 AM

AES Timing Attack

Nice timing attack against AES.

For those of you who don’t know, timing attacks are an example of side-channel cryptanalysis: cryptanalysis using additional information about the inner workings of the cryptographic algorithm. I wrote about them here.

What’s the big idea here?

There are two ways to look at a cryptographic primitive (block cipher, digital signature function, whatever). The first is as a chunk of math. The second is as a physical (or software) implementation of that math.

Traditionally, cryptanalysis has been directed solely against the math. Differential and linear cryptanalysis are good examples of this: high-powered mathematical tools that can be used to break different block ciphers.

On the other hand, timing attacks, power analysis, and fault analysis all make assumptions about the implementation, and use additional information garnered from attacking those implementations. Fault analysis assumes a one-bit feedback from the implementation—was the message successfully decrypted?—in order to break the underlying cryptographic primitive. Timing attacks assume that an attacker knows how long a particular encryption operation takes.
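The specific attack here exploits cache timing in AES's table lookups, which takes careful measurement to reproduce. But the underlying principle, that execution time can leak secret data, is easy to demonstrate with a toy example (mine, not from the paper):

```python
import hmac, time

SECRET = b"s3cretkey"

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: returns as soon as a byte differs, so the
    # running time grows with the length of the matching prefix.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def avg_ns(guess: bytes, trials: int = 200_000) -> float:
    start = time.perf_counter_ns()
    for _ in range(trials):
        naive_equal(SECRET, guess)
    return (time.perf_counter_ns() - start) / trials

# A guess sharing a longer prefix with the secret takes measurably longer
# (the gap is tiny and noisy, which is why real attacks average many samples).
print("no matching prefix: ", avg_ns(b"xxxxxxxxx"))
print("5-byte prefix match:", avg_ns(b"s3crexxxx"))

# The defense: a comparison whose running time is independent of the
# data, such as hmac.compare_digest(a, b).
```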

Posted on May 17, 2005 at 10:05 AM

RFID Passport Security

According to a Wired article, the State Department is reconsidering a security measure to protect privacy that it previously rejected.

The solution would require an RFID reader to provide a key or password before it could read data embedded on an RFID passport’s chip. It would also encrypt data as it’s transmitted from the chip to a reader so that no one could read the data if they intercepted it in transit.

The devil is in the details, but this is a great idea. It means that only readers that know a secret data string can query the RFID chip inside the passport. Of course, this is a systemwide global secret and will be in the hands of every country, but it’s still a great idea.
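In outline, the access-control half works like a challenge-response protocol: the chip answers only a reader that can prove knowledge of the shared secret. A toy sketch in Python (illustrative only; not the actual ICAO protocol, which also derives session keys for the encrypted channel):

```python
import hashlib, hmac, os

# Hypothetical shared secret; in the proposal this would be the
# systemwide key held by every authorized reader.
SHARED_KEY = os.urandom(32)

def chip_challenge() -> bytes:
    # The chip issues a fresh random nonce so responses can't be replayed.
    return os.urandom(16)

def reader_response(key: bytes, challenge: bytes) -> bytes:
    # The reader proves knowledge of the key by MACing the challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def chip_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = chip_challenge()
response = reader_response(SHARED_KEY, challenge)
print("reader authorized:", chip_verify(SHARED_KEY, challenge, response))
```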

It’s nice to read that the State Department is taking privacy concerns seriously.

Frank Moss, deputy assistant secretary for passport services, told Wired News on Monday that the government was “taking a very serious look” at the privacy solution in light of the 2,400-plus comments the department received about the e-passport rule and concerns expressed last week in Seattle by participants at the Computers, Freedom and Privacy conference. Moss said recent work on the passports conducted with the National Institute of Standards and Technology had also led him to rethink the issue.

“Basically what changed my mind was a recognition that the read rates may have actually been able to be more than 10 centimeters, and also recognition that we had to do everything possible to protect the security of people,” Moss said.

The next step is for them to actually implement this countermeasure, and not just consider it. And the step after that is for us to get our hands on some test passports to see if they’ve implemented it well.

Posted on April 28, 2005 at 8:30 AM

The Doghouse: ExeShield

Yes, there are companies that believe that keeping cryptographic algorithms secret makes them more secure.

ExeShield uses the latest advances in software protection and encryption technology, to give your applications even more protection. Of course, for your security and ours, we won’t divulge the encryption scheme to anyone.

If anyone reading this needs a refresher on exactly why secret cryptography algorithms are invariably snake oil, I wrote about it three years ago.

Posted on April 13, 2005 at 9:19 AM

The Doghouse: Xavety

It’s been a long time since I doghoused any encryption products. CHADSEA (Chaotic Digital Signature, Encryption, and Authentication) isn’t as funny as some of the others, but it’s no less deserving.

Read their “Testing the Encryption Algorithm” section: “In order to test the reliability and statistical independency of the encryption, several different tests were performed, like signal-noise tests, the ENT test suite (Walker, 1998), and the NIST Statistical Test Suite (Ruhkin et al., 2001). These tests are quite comprehensive, so the description of these tests are subject of separate publications, which are also available on this website. Please, see the respective links.”

Yep. All they did to show that their algorithm was secure was a bunch of statistical tests. Snake oil for sure.
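Statistical randomness is necessary but nowhere near sufficient: plenty of trivially breakable generators produce output that sails through these test suites. A quick illustration (my example, not Xavety's algorithm):

```python
def lcg_keystream(seed: int, n: int) -> bytes:
    # Linear congruential generator: its output looks statistically
    # random, but an attacker who sees a few outputs can recover the
    # state and predict the entire keystream. Random-looking != secure.
    out = bytearray()
    state = seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        out.append((state >> 16) & 0xFF)
    return bytes(out)

ks = lcg_keystream(42, 100_000)
ones = sum(bin(b).count("1") for b in ks)
print(f"monobit fraction: {ones / (8 * len(ks)):.4f}")  # close to 0.5: "passes"
```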

Posted on March 15, 2005 at 11:00 AM
