Entries Tagged "encryption"

Flaw in Winkhaus Blue Chip Lock

The Winkhaus Blue Chip Lock is a very popular, and expensive, 128-bit encrypted door lock. When you insert a key, there is a 128-bit challenge/response exchange between the key and the lock, and when the key is authorized it will pull a small pin down through some sort of solenoid switch. This allows you to turn the lock.

Unfortunately, it has a major security flaw. If you put a strong magnet near the lock, you can also pull this pin down, without authorization — without damage or any evidence.

The worst part is that Winkhaus is in denial about the problem, and is hoping it will just go away by itself. They’ve known about the flaw for at least six months, and have done nothing. They haven’t told any of their customers. If you ask them, they’ll say things like “it takes a very special magnet.”

From what I’ve heard, the only version that does not have this problem is the model without a built-in battery. In that model, the part with the solenoid switch faces the inside of the door instead of the outside. The internal battery is the weak spot: you have to lift a small lid to replace it, so that side can never face the “outside” of the door, or anyone could remove the batteries. With an external power supply you do not have this problem, since one side of the lock is solid metal.

A video demonstration is available here.

Posted on March 2, 2005 at 3:00 PM

Microsoft RC4 Flaw

One of the most important rules of stream ciphers is to never use the same keystream to encrypt two different documents. If someone does, you can break the encryption by XORing the two ciphertext streams together. The keystream drops out, and you end up with plaintext XORed with plaintext — and you can easily recover the two plaintexts using letter frequency analysis and other basic techniques.

It’s an amateur crypto mistake. The easy way to prevent this attack is to use a unique initialization vector (IV) in addition to the key whenever you encrypt a document.
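Here’s a minimal sketch of the attack in Python. The two “documents” and the keystream are made up for illustration, and os.urandom stands in for a real stream cipher’s output; the point is that XORing the two ciphertexts cancels the keystream and leaves only the XOR of the plaintexts:

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        """XOR two byte strings, truncating to the shorter length."""
        return bytes(x ^ y for x, y in zip(a, b))

    # Two versions of a "document"; the contents are made up for illustration.
    p1 = b"Quarterly revenue: $1,200,000"
    p2 = b"Quarterly revenue: $1,950,000"

    # One keystream reused for both encryptions: the mistake described above.
    keystream = os.urandom(len(p1))

    c1 = xor(p1, keystream)
    c2 = xor(p2, keystream)

    # The attacker sees only c1 and c2. XOR them and the keystream drops out:
    leak = xor(c1, c2)
    assert leak == xor(p1, p2)   # plaintext XORed with plaintext

Wherever the two documents agree, the result is zero bytes, so an attacker can see exactly where the versions differ and begin recovering both plaintexts.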

Microsoft uses the RC4 stream cipher in both Word and Excel. And they make this mistake. Hongjun Wu has details (link is a PDF).

In this report, we point out a serious security flaw in Microsoft Word and Excel. The stream cipher RC4 [9] with key length up to 128 bits is used in Microsoft Word and Excel to protect the documents. But when an encrypted document gets modified and saved, the initialization vector remains the same and thus the same keystream generated from RC4 is applied to encrypt the different versions of that document. The consequence is disastrous since a lot of information of the document could be recovered easily.

This isn’t new. Microsoft made the same mistake in 1999 with RC4 in WinNT Syskey. Five years later, Microsoft has the same flaw in other products.

Posted on January 18, 2005 at 9:00 AM

Airline Passenger Profiling

From an anonymous reader who works for the airline industry in the United States:

There are two initiatives in the works, neither of which leaves me feeling very good about privacy rights.

The first is being put together by the TSA and is called the “Secure Flight Initiative.” An initial test of this program was performed recently and involved each airline listed in the document having to send in passenger information (aka PNR data) for every passenger that “completed a successful domestic trip” during June 2004. A sample of some of the fields that were required to be sent: name, address, phone (if available), itinerary, any comments in the PNR record made by airline personnel, credit card number and expiration date, and any changes made to the booking before the actual flight.

This test data was transmitted to the TSA via physical CD. The requirement was that we “encrypt” it using pkzip (or equivalent) before putting it on the CD. We were to then e-mail the password to the Secure Flight Initiative e-mail address. Although this is far from ideal, it is in fact a big step up. The original process was going to have people simply e-mail the above data to the TSA. They claim to have a secure facility where the data is stored.

As for the TSA’s retention of the data, the only information we have been given is that as soon as the test phase is over, they will securely delete it. We were given no choice but to take their word for it.

Rollout of the Secure Flight initiative is scheduled for “next year” sometime. They’re going to start with larger carriers and work their way down to the smaller carriers. It hasn’t been formalized (as far as I know) yet as to what data will be required to be transmitted when. My suspicion is that upon flight takeoff, all PNR data for all passengers on board will be required to be sent. At this point, I still have not heard as to what method will be used for data transmission.

There is another initiative being implemented by the Customs and Border Protection, which is part of the Department of Homeland Security. This (unnamed) initiative is essentially the same thing as the Secure Flight program. That’s right — two government agencies are requiring us to transmit the information separately to each of them. So much for information sharing within the government.

Most larger carriers are complying with this directive by simply allowing the CBP access to their records directly within their reservation systems (often hosted by folks like Sabre, Worldspan, Galileo, etc.). Others (such as the airline I work for) are opting to transmit only the bare requirements without giving direct access to our system. The data is transmitted over a proprietary data network that is used by the airline industry.

There are a couple of differences between the Secure Flight program and the one being instituted by the CBP. The CBP’s program requires that PNR data for all booked passengers be transmitted:

  • 72 hours before flight time
  • 24 hours before flight time
  • 8 hours before flight time
  • and then again immediately after flight departure

The other major difference is that it looks as though there will be a requirement that we be able to accept a request for data on any flight at any time and return the data in an automated fashion.

Oh, and just as a kick in the pants, the airlines are expected to pay the costs for all these data transmissions (to the tune of several thousand dollars a month).

Posted on December 22, 2004 at 10:06 AM

Safe Personal Computing

I am regularly asked what average Internet users can do to ensure their security. My first answer is usually, “Nothing–you’re screwed.”

But that’s not true, and the reality is more complicated. You’re screwed if you do nothing to protect yourself, but there are many things you can do to increase your security on the Internet.

Two years ago, I published a list of PC security recommendations. The idea was to give home users concrete actions they could take to improve security. This is an update of that list: a dozen things you can do to improve your security.

General: Turn off the computer when you’re not using it, especially if you have an “always on” Internet connection.

Laptop security: Keep your laptop with you at all times when not at home; treat it as you would a wallet or purse. Regularly purge unneeded data files from your laptop. The same goes for PDAs. People tend to store more personal data–including passwords and PINs–on PDAs than they do on laptops.

Backups: Back up regularly. Back up to disk, tape or CD-ROM. There’s a lot you can’t defend against; a recent backup will at least let you recover from an attack. Store at least one set of backups off-site (a safe-deposit box is a good place) and at least one set on-site. Remember to destroy old backups. The best way to destroy CD-Rs is to microwave them on high for five seconds. You can also break them in half or run them through better shredders.

Operating systems: If possible, don’t use Microsoft Windows. Buy a Macintosh or use Linux. If you must use Windows, set up Automatic Update so that you automatically receive security patches. And delete the files “command.com” and “cmd.exe.”

Applications: Limit the number of applications on your machine. If you don’t need it, don’t install it. If you no longer need it, uninstall it. Look into one of the free office suites as an alternative to Microsoft Office. Regularly check for updates to the applications you use and install them. Keeping your applications patched is important, but don’t lose sleep over it.

Browsing: Don’t use Microsoft Internet Explorer, period. Limit use of cookies and applets to those few sites that provide services you need. Set your browser to regularly delete cookies. Don’t assume a Web site is what it claims to be, unless you’ve typed in the URL yourself. Make sure the address bar shows the exact address, not a near-miss.

Web sites: Secure Sockets Layer (SSL) encryption does not provide any assurance that the vendor is trustworthy or that its database of customer information is secure.

Think before you do business with a Web site. Limit the financial and personal data you send to Web sites–don’t give out information unless you see a value to you. If you don’t want to give out personal information, lie. Opt out of marketing notices. If the Web site gives you the option of not storing your information for later use, take it. Use a credit card for online purchases, not a debit card.

Passwords: You can’t memorize good enough passwords any more, so don’t bother. For high-security Web sites such as banks, create long random passwords and write them down. Guard them as you would your cash: i.e., store them in your wallet, etc.
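If you want the computer to pick those passwords for you, a few lines of Python will do it; the length and character set here are my arbitrary choices, not a formal recommendation:

    import secrets
    import string

    def random_password(length: int = 20) -> str:
        """Generate a long random password from letters, digits, and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password())   # write it down and guard it like cash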

Never reuse a password for something you care about. (It’s fine to have a single password for low-security sites, such as for newspaper archive access.) Assume that all PINs can be easily broken and plan accordingly.

Never type a password you care about, such as for a bank account, into a non-SSL encrypted page. If your bank makes it possible to do that, complain to them. When they tell you that it is OK, don’t believe them; they’re wrong.

E-mail: Turn off HTML e-mail. Don’t automatically assume that any e-mail is from the “From” address.

Delete spam without reading it. Don’t open messages with file attachments, unless you know what they contain; immediately delete them. Don’t open cartoons, videos and similar “good for a laugh” files forwarded by your well-meaning friends; again, immediately delete them.

Never click links in e-mail unless you’re sure about the e-mail; copy and paste the link into your browser instead. Don’t use Outlook or Outlook Express. If you must use Microsoft Office, enable macro virus protection; in Office 2000, turn the security level to “high” and don’t trust any received files unless you have to. If you’re using Windows, turn off the “hide file extensions for known file types” option; it lets Trojan horses masquerade as other types of files. Uninstall the Windows Scripting Host if you can get along without it. If you can’t, at least change your file associations, so that script files aren’t automatically sent to the Scripting Host if you double-click them.

Antivirus and anti-spyware software: Use it–either a combined program or two separate programs. Download and install the updates, at least weekly and whenever you read about a new virus in the news. Some antivirus products automatically check for updates. Enable that feature and set it to “daily.”

Firewall: Spend $50 for a Network Address Translator firewall device; it’s likely to be good enough in default mode. On your laptop, use personal firewall software. If you can, hide your IP address. There’s no reason to allow any incoming connections from anybody.

Encryption: Install an e-mail and file encryptor (like PGP). Encrypting all your e-mail or your entire hard drive is unrealistic, but some mail is too sensitive to send in the clear. Similarly, some files on your hard drive are too sensitive to leave unencrypted.
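Purely as an illustration of what file encryption looks like, and not a substitute for a mature tool like PGP, here is a sketch using the third-party Python cryptography package; the filename is hypothetical:

    # Illustration only; install with: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # keep this key somewhere safe, not next to the file
    f = Fernet(key)

    with open("taxes-2004.xls", "rb") as fh:       # hypothetical sensitive file
        ciphertext = f.encrypt(fh.read())

    with open("taxes-2004.xls.enc", "wb") as fh:
        fh.write(ciphertext)

    # Later, with the same key:
    # plaintext = Fernet(key).decrypt(ciphertext)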

None of the measures I’ve described are foolproof. If the secret police want to target your data or your communications, no countermeasure on this list will stop them. But these precautions are all good network-hygiene measures, and they’ll make you a more difficult target than the computer next door. And even if you only follow a few basic measures, you’re unlikely to have any problems.

I’m stuck using Microsoft Windows and Office, but I use Opera for Web browsing and Eudora for e-mail. I use Windows Update to automatically get patches and install other patches when I hear about them. My antivirus software updates itself regularly. I keep my computer relatively clean and delete applications that I don’t need. I’m diligent about backing up my data and about storing data files that are no longer needed offline.

I’m suspicious to the point of near-paranoia about e-mail attachments and Web sites. I delete cookies and spyware. I watch URLs to make sure I know where I am, and I don’t trust unsolicited e-mails. I don’t care about low-security passwords, but try to have good passwords for accounts that involve money. I still don’t do Internet banking. I have my firewall set to deny all incoming connections. And I turn my computer off when I’m not using it.

That’s basically it. Really, it’s not that hard. The hardest part is developing an intuition about e-mail and Web sites. But that just takes experience.

This essay previously appeared on CNet.

Posted on December 13, 2004 at 9:59 AM

Desktop Google Finds Holes

Google’s desktop search software is so good that it exposes vulnerabilities on your computer that you didn’t know about.

Last month, Google released a beta version of its desktop search software: Google Desktop Search. Install it on your Windows machine, and it creates a searchable index of your data files, including word processing files, spreadsheets, presentations, e-mail messages, cached Web pages and chat sessions. It’s a great idea. Windows’ searching capability has always been mediocre, and Google fixes the problem nicely.

There are some security issues, though. The problem is that GDS indexes and finds documents that you may prefer not be found. For example, GDS searches your browser’s cache. This allows it to find old Web pages you’ve visited, including online banking summaries, personal messages sent from Web e-mail programs and password-protected personal Web pages.

GDS can also retrieve encrypted files. No, it doesn’t break the encryption or save a copy of the key. However, it searches the Windows cache, which can bypass some encryption programs entirely. And if you install the program on a computer with multiple users, you can search documents and Web pages for all users.

GDS isn’t doing anything wrong; it’s indexing and searching documents just as it’s supposed to. The vulnerabilities are due to the design of Internet Explorer, Opera, Firefox, PGP and other programs.

First, Web browsers should not store SSL-encrypted pages or pages with personal e-mail. If they do store them, they should at least ask the user first.

Second, an encryption program that leaves copies of decrypted files in the cache is poorly designed. Those files are there whether or not GDS searches for them.

Third, GDS’ ability to search files and Web pages of multiple users on a computer received a lot of press when it was first discovered. This is a complete nonissue. You have to be an administrator on the machine to do this, which gives you access to everyone’s files anyway.

Some people blame Google for these problems and suggest, wrongly, that Google fix them. What if Google were to bow to public pressure and modify GDS to avoid showing confidential information? The underlying problems would remain: The private Web pages would still be in the browser’s cache; the encryption program would still be leaving copies of the plain-text files in the operating system’s cache; and the administrator could still eavesdrop on anyone’s computer to which he or she has access. The only thing that would have changed is that these vulnerabilities once again would be hidden from the average computer user.

In the end, this can only harm security.

GDS is very good at searching. It’s so good that it exposes vulnerabilities on your computer that you didn’t know about. And now that you know about them, pressure your software vendors to fix them. Don’t shoot the messenger.

This article originally appeared in eWeek.

Posted on November 29, 2004 at 11:15 AM

Letter: Lexar JumpDrives

Recently I talked about a security vulnerability in Lexar’s JumpDrives. I received this e-mail from the company:

From: Diane Carlini

Subject: Lexar’s JumpDrive

@stake’s findings revealed a slight security exposure in scenarios where an experienced hacker could potentially monitor and gain access to the secure area. This was only the case in version 1.0 which included SafeGuard. Lexar’s JumpDrive Secure 2.0 device now includes software based on 256-bit AES Encryption Technology. With this new product, JumpDrive Secure 2.0 offers the highest level of data protection that is commonly available today.

Registered JumpDrive Secure customers will be contacted to inform them of this Security Advisory found in version 1.

I have no technical information, either from Lexar or @Stake, that verifies or refutes this claim.

Posted on November 5, 2004 at 9:53 AM

The Doghouse: Vadium Technology

Yet another one-time pad system. Not a lot of detail on the website, but this bit says it all:

“Based on patent-pending technology and 18 years of exhaustive research, Vadium’s AlphaCipher Encryption System ™, implements a true digital One-Time-Pad (“OTP”) cipher. The One-Time Pad is the only method of encrypting data where the strength of protection is immune to the mounting threats posed by breakthroughs in advanced mathematics and the ever-increasing processing power of computers. The consistently accelerated increases in computing power are proven to be a present and severe threat to all the other prevalent encryption methods.”

I am continually amazed at the never-ending stream of one-time pad systems. Every few months another company believes that they have finally figured out how to make a commercial one-time pad system. They announce it, are uniformly laughed at, and then disappear. It’s cryptography’s perpetual motion machine.
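The cipher itself is trivial, which is part of the joke. Here’s a sketch of a true one-time pad in Python; the hard part, which these products never solve, is generating a truly random pad as long as all the traffic you will ever send, sharing it securely in advance, and never reusing it:

    import os

    def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        """One-time pad: pad as long as the message, truly random, used exactly once."""
        pad = os.urandom(len(plaintext))   # stand-in; a real OTP needs true randomness
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
        return pad, ciphertext

    def otp_decrypt(pad: bytes, ciphertext: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, pad))

    pad, ct = otp_encrypt(b"attack at dawn")
    assert otp_decrypt(pad, ct) == b"attack at dawn"

Reuse the pad even once and you are back to the keystream-reuse problem described in the Microsoft RC4 entry above.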

Vadium Technology’s website.

My essay on one-time pads.

Posted on November 4, 2004 at 12:08 PM

Doghouse: Merced County

Merced County is in California, and they explained why they chose Election Systems & Software (ES&S) as their electronic voting machines. There are a bunch of vague selection criteria, but this one is quite explicit: “Uses 1,064 bit encryption, not 128 which is less secure.”

I am simply too appalled to comment further.

Link

Posted on October 25, 2004 at 3:25 PM

The Legacy of DES

The Data Encryption Standard, or DES, was a mid-’70s brainchild of the National Bureau of Standards: the first modern, public, freely available encryption algorithm. For over two decades, DES was the workhorse of commercial cryptography.

Over the decades, DES has been used to protect everything from databases in mainframe computers, to the communications links between ATMs and banks, to data transmissions between police cars and police stations. Whoever you are, I can guarantee that many times in your life, the security of your data was protected by DES.

Just last month, the former National Bureau of Standards–the agency is now called the National Institute of Standards and Technology, or NIST–proposed withdrawing DES as an encryption standard, signifying the end of the federal government’s most important technology standard, one more important than ASCII, I would argue.

Today, cryptography is one of the most basic tools of computer security, but 30 years ago it barely existed as an academic discipline. In the days when the Internet was little more than a curiosity, cryptography wasn’t even a recognized branch of mathematics. Secret codes were always fascinating, but they were pencil-and-paper codes based on alphabets. In the secret government labs during World War II, cryptography entered the computer era and became mathematics. But with no professors teaching it, and no conferences discussing it, all the cryptographic research in the United States was conducted at the National Security Agency.

And then came DES.

Back in the early 1970s, it was a radical idea. The National Bureau of Standards decided that there should be a free encryption standard. Because the agency wanted it to be non-military, they solicited encryption algorithms from the public. They got only one serious response–the Data Encryption Standard–from the labs of IBM. In 1976, DES became the government’s standard encryption algorithm for “sensitive but unclassified” traffic. This included things like personal, financial and logistical information. And simply because there was nothing else, companies began using DES whenever they needed an encryption algorithm. Of course, not everyone believed DES was secure.

When IBM submitted DES as a standard, no one outside the National Security Agency had any expertise to analyze it. The NSA made two changes to DES: It tweaked the algorithm, and it cut the key size by more than half.

The strength of an algorithm is based on two things: how good the mathematics is, and how long the key is. A sure way of breaking an algorithm is to try every possible key. Modern algorithms have a key so long that this is impossible; even if you built a computer out of all the silicon atoms on the planet and ran it for millions of years, you couldn’t do it. So cryptographers look for shortcuts. If the mathematics are weak, maybe there’s a way to find the key faster: “breaking” the algorithm.
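Some back-of-the-envelope arithmetic shows why key length matters so much; the keys-per-second rate below is my own rough assumption, in the class of the late-1990s DES-cracking hardware mentioned later in this essay:

    # Rough brute-force estimates; the rate is an assumption, not a benchmark.
    keys_per_second = 90e9          # on the order of the 1998 DES-cracking machine

    for bits in (56, 128):
        keyspace = 2 ** bits
        years = keyspace / keys_per_second / (3600 * 24 * 365)
        print(f"{bits}-bit key: {keyspace:.2e} keys, about {years:.2e} years to try them all")

    # 56 bits works out to days; 128 bits works out to around 10^20 years,
    # which is why attackers look for mathematical shortcuts instead.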

The NSA’s changes caused outcry among the few who paid attention, both regarding the “invisible hand” of the NSA–the tweaks were not made public, and no rationale was given for the final design–and the short key length.

But with the outcry came research. It’s not an exaggeration to say that the publication of DES created the modern academic discipline of cryptography. The first academic cryptographers began their careers by trying to break DES, or at least trying to understand the NSA’s tweak. And almost all of the encryption algorithms–public-key cryptography, in particular–can trace their roots back to DES. Papers analyzing different aspects of DES are still being published today.

By the mid-1990s, it became widely believed that the NSA was able to break DES by trying every possible key. This ability was demonstrated in 1998, when a $220,000 machine was built that could brute-force a DES key in a few days. In 1985, the academic community proposed a DES variant with the same mathematics but a longer key, called triple-DES. This variant had been used in more secure applications in place of DES for years, but it was time for a new standard. In 1997, NIST solicited an algorithm to replace DES.

The process illustrates the complete transformation of cryptography from a secretive NSA technology to a worldwide public technology. NIST once again solicited algorithms from the public, but this time the agency got 15 submissions from 10 countries. My own algorithm, Twofish, was one of them. And after two years of analysis and debate, NIST chose a Belgian algorithm, Rijndael, to become the Advanced Encryption Standard.

It’s a different world in cryptography now than it was 30 years ago. We know more about cryptography, and have more algorithms to choose among. AES won’t become a ubiquitous standard in the same way that DES did. But it is finding its way into banking security products, Internet security protocols, even computerized voting machines. A NIST standard is an imprimatur of quality and security, and vendors recognize that.

So, how good is the NSA at cryptography? They’re certainly better than the academic world. They have more mathematicians working on the problems, they’ve been working on them longer, and they have access to everything published in the academic world, while they don’t have to make their own results public. But are they a year ahead of the state of the art? Five years? A decade? No one knows.

It took the academic community two decades to figure out that the NSA “tweaks” actually improved the security of DES. This means that back in the ’70s, the National Security Agency was two decades ahead of the state of the art.

Today, the NSA is still smarter, but the rest of us are catching up quickly. In 1999, the academic community discovered a weakness in another NSA algorithm, SHA, that the NSA claimed to have discovered only four years previously. And just last week there was a published analysis of the NSA’s SHA-1 that demonstrated weaknesses that we believe the NSA didn’t know about at all.

Maybe now we’re just a couple of years behind.

This essay was originally published on CNet.com.

Posted on October 6, 2004 at 6:05 PM
