Entries Tagged "SSL"


Florida E-Voting Study

Florida just released another study of the Diebold voting machines. The reviewers (real security researchers, as in the California study, not posers) examined v4.6.5 of the Diebold TSx and v1.96.8 of the Diebold Optical Scan. (California studied older versions: v4.6.4 of the TSx and v1.96.6 of the Optical Scan.)

The most interesting issues are (1) Diebold’s apparent “find-then-patch” approach to computer security, and (2) Diebold’s lousy use of cryptography.

Among the findings:

  • Section 3.5. They use RSA signatures, apparently to address previously documented flaws in the literature. But their signature verification step has a problem: it computes H = signature^3 mod N, and then compares only 160 bits of H with the SHA-1 hash of the message. This is a natural way to implement RSA signatures if you have just read a security textbook, but it is insecure. The report demonstrates, with a 250-line Java program, how to forge RSA signatures over (essentially) arbitrary messages of the attacker’s choosing; a sketch of the underlying idea appears after this list.
  • Section 3.10.3. The original Hopkins report talked about the lack of crypto for network (or dialup) communications between a TSx voting machine and the back-end GEMS server. Apparently, Diebold tried to use SSL to fix the problem. The RABA report analyzed Diebold’s SSL usage and found a security problem. Diebold then tried to patch their SSL implementation. This new report looks at the patched version and finds that it is still vulnerable to a man-in-the-middle attack.
  • Section 3.7.1.1. Key management. Avi Rubin has already summarized some of the highlights:

    This is arguably worse than having a fixed static key in all of the machines. Because with knowledge of the machine’s serial number, anyone can calculate all of the secret keys. Whereas before, someone would have needed access to the source code or the binary in the machine.

    Other attacks mentioned in the report include swapping two candidate vote counters and many other vote switching attacks. The supervisor PIN is protected with weak cryptography, and once again Diebold has shown that they do not have even a basic understanding of how to apply cryptographic mechanisms.
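To make the Section 3.5 flaw concrete, here is a minimal Python sketch of why checking only 160 bits of signature^3 mod N is forgeable when the public exponent is 3. It assumes, purely for illustration, that the verifier compares the low-order 160 bits against the SHA-1 hash and that the modulus is 1024 bits; the report’s actual 250-line Java attack may differ in its details, and all of the names below are hypothetical.

    # Toy illustration of forging an e=3 RSA "signature" when the verifier
    # checks only 160 bits of sig^3 mod N against SHA-1(message).
    # Assumption (illustration only): the low-order 160 bits are compared.
    import hashlib
    import secrets

    def flawed_verify(message: bytes, sig: int, n: int) -> bool:
        """Model of the broken check: compare 160 bits of sig^3 mod N to SHA-1."""
        h = int.from_bytes(hashlib.sha1(message).digest(), "big")
        return pow(sig, 3, n) % (1 << 160) == h

    def forge(message: bytes) -> int:
        """Forge an accepted 'signature' without knowing any private key.

        Find s < 2^160 with s^3 = h (mod 2^160).  Since s^3 < 2^480 is far
        smaller than a 1024-bit modulus, s^3 mod N is just s^3, so its low
        160 bits equal h and flawed_verify() accepts it.
        """
        h = int.from_bytes(hashlib.sha1(message).digest(), "big")
        if h % 2 == 0:
            raise ValueError("pick a message whose SHA-1 value is odd")
        # The odd residues mod 2^160 form a group of exponent 2^158, in which
        # cubing is invertible, so h^(3^-1) is a cube root of h.
        d = pow(3, -1, 1 << 158)      # inverse of 3 mod 2^158 (Python 3.8+)
        return pow(h, d, 1 << 160)

    if __name__ == "__main__":
        # Any 1024-bit odd number stands in for the public modulus (the size
        # is also just an assumption); the attacker never needs to factor it.
        n = secrets.randbits(1024) | (1 << 1023) | 1
        # Tweak the message until its SHA-1 value is odd (about half succeed).
        i = 0
        while int.from_bytes(hashlib.sha1(b"forged record %d" % i).digest(),
                             "big") % 2 == 0:
            i += 1
        msg = b"forged record %d" % i
        assert flawed_verify(msg, forge(msg), n)
        print("forged signature accepted for:", msg)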

Avi Rubin has a nice overall summary, too:

So, Diebold is doing some things better than they did before when they had absolutely no security, but they have yet to do them right. Anyone taking any of our cryptography classes at Johns Hopkins, for example, would do a better job applying cryptography. If you read the SAIT report, this theme repeats throughout.

Right. These are classic examples of problems that can arise if (1) you “roll your own” crypto and/or (2) employ “find and patch” rather than a principled approach to security.

It all makes me wonder what new problems will arise from future security patches.

The good news is that Florida has decided not to certify the TSx at this time. They may try to certify a revised version of the OS (optical scan) system.

Posted on August 6, 2007 at 6:34 AM

OpenSSL Now FIPS 140-2 Certified

The process took five years:

The biggest frustration for OSSI in the seemingly endless delays is that the software validated by the CMVP is now more than three years old. “[This toolkit] is branched from version 0.9.7, but 0.9.8 is already available and 0.9.9 is in development,” says Marquess. “We’re glad it’s available, but now it’s dated. We understand a lot better what the CMVP’s requirements are, though, so validation will go more smoothly next time around. We also know the criticisms we’ll encounter, and we’ll nail them with the next release.”

This is one problem with long certification cycles; software development cycles are faster.

Posted on February 21, 2007 at 12:17 PM

Impressive Phishing Attack

Read about it here, or in even more detail.

I find this phishing attack impressive for several reasons. One, it’s a very sophisticated attack and demonstrates how clever identity thieves are becoming. Two, it narrowly targets a particular credit union, and sneakily uses the fact that credit cards issued by an institution share the same initial digits. Three, it exploits an authentication problem with SSL certificates. And four, it is yet another proof point that “user education” isn’t how we’re going to solve this kind of problem.

Posted on February 22, 2006 at 7:41 AM

The New Internet Explorer

I’m just starting to read about the new security features in Internet Explorer 7. So far, I like what I am reading.

IE 7 requires that all browser windows display an address bar. This helps foil attackers that operate by popping up new windows masquerading as pages on a legitimate site, when in fact the site is fraudulent. By requiring an address bar, users will immediately see the true URL of the displayed page, making these types of attacks more obvious. If you think you’re looking at www.microsoft.com, but the browser address bar says www.illhackyou.net, you ought to be suspicious.

I use Opera, and have long used the address bar to “check” on URLs. This is an excellent idea. So is this:

In early November, a bunch of Web browser developers got together and started fleshing out standards for address bar coloring, which can cue users to secured connections. Under the proposal laid out by IE 7 team member Rob Franco, even sites that use a standard SSL certificate will display a standard white address bar. Sites that use a stronger, as yet undetermined level of protection will use a green bar.

I like easy visual indications about what’s going on. And I really like that SSL is generic white, because it really doesn’t prove that you’re communicating with the site you think you’re communicating with. This feature helps with that, though:

Franco also said that when navigating to an SSL-protected site, the IE 7 address bar will display the business name and certification authority’s name in the address bar.

Some of the security measures in IE 7 weaken the integration between the browser and the operating system:

People using Windows Vista beta 2 will find a new feature called Protected Mode, which renders IE 7 unable to modify system files and settings. This essentially breaks down part of the integration between IE and Windows itself.

Think of it as a wall between IE and the rest of the operating system. No, the code won’t be perfect, and yes, there’ll be ways found to circumvent this security, but this is an important and long-overdue feature.

The majority of IE’s notorious security flaws stem from its pervasive integration with Windows. That is a feature no other Web browser offers—and an ability that Vista’s Protected Mode intends to mitigate. IE 7 obviously won’t remove all of that tight integration. Lacking deep architectural changes, the effort has focused instead on hardening or eliminating potential vulnerabilities. Unfortunately, this approach requires Microsoft to anticipate everything that could go wrong and block it in advance—hardly a surefire way to secure a browser.

That last sentence describes the general Internet attitude of allowing everything that is not explicitly denied, rather than denying everything that is not explicitly allowed.

Also, you’ll have to wait until Vista to use it:

…this capability will not be available in Windows XP because it’s woven directly into Windows Vista itself.

There are also some good changes under the hood:

IE 7 does eliminate a great deal of legacy code that dates back to the IE 4 days, which is a welcome development.

And:

Microsoft has rewritten a good bit of IE 7’s core code to help combat attacks that rely on malformed URLs (that typically cause a buffer overflow). It now funnels all URL processing through a single function (thus reducing the amount of code that “looks” at URLs).
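The “single function” point in that last quote is a general design principle: route every untrusted URL through one validation choke point rather than scattering ad-hoc parsing around the codebase. Here is a minimal sketch of the idea in Python; the function and its policy are illustrative, not IE’s actual code.

    # Sketch of the "one choke point" idea: every component calls this single
    # function instead of parsing URLs itself.  The policy here is illustrative.
    from urllib.parse import SplitResult, urlsplit

    ALLOWED_SCHEMES = {"http", "https"}
    MAX_URL_LENGTH = 2048

    def parse_and_validate_url(raw: str) -> SplitResult:
        """The only place in the program that inspects a raw URL."""
        if len(raw) > MAX_URL_LENGTH:
            raise ValueError("URL too long")
        parts = urlsplit(raw)
        if parts.scheme.lower() not in ALLOWED_SCHEMES:
            raise ValueError("disallowed scheme: %r" % parts.scheme)
        if not parts.hostname:
            raise ValueError("missing host")
        return parts

    # Because every caller goes through the same path, a fix or a new check
    # added here applies everywhere at once.
    print(parse_and_validate_url("https://www.microsoft.com/ie7"))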

All good stuff, but I agree with this conclusion:

IE 7 offers several new security features, but it’s hardly a given that the situation will improve. There has already been a set of security updates for IE 7 beta 1 released for both Windows Vista and Windows XP computers. Security vulnerabilities in a beta product shouldn’t be alarming (IE 7 is hardly what you’d consider “finished” at this point), but it may be a sign that the product’s architecture and design still have fundamental security issues.

I’m not switching from Opera yet, and my second choice is still Firefox. But the masses still use IE, and our security depends in part on those masses keeping their computers worm-free and bot-free.

NOTE: Here’s some info on how to get your own copy of Internet Explorer 7 beta 2.

Posted on February 9, 2006 at 3:37 PM

New Phishing Trick

Although I think I’ve seen the trick before:

Phishing schemes are all about deception, and recently some clever phishers have added a new layer of subterfuge called the secure phish. It uses the padlock icon indicating that your browser has established a secure connection to a Web site to lull you into a false sense of security. According to Internet security company SurfControl, phishers have begun to outfit their counterfeit sites with self-generated Secure Sockets Layer certificates. To distinguish an imposter from the genuine article, you should carefully scan the security certificate prompt for a reference to either “a self-issued certificate” or “an unknown certificate authority.”

Yeah, like anyone is going to do that.
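To be fair, the check the quote asks users to do by hand is exactly what ordinary certificate validation automates. A minimal sketch using Python’s standard ssl module (the host name is just an example): a self-signed or unknown-CA certificate makes the handshake fail, with no prompt for the user to misread.

    # Sketch: let certificate validation, not the user, reject self-signed or
    # unknown-CA certificates.  The host name below is only an example.
    import socket
    import ssl

    def has_trusted_certificate(host: str, port: int = 443) -> bool:
        context = ssl.create_default_context()  # verifies the chain and host name
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=host):
                    return True                  # handshake succeeded
        except ssl.SSLCertVerificationError:
            return False                         # self-signed, unknown CA, or bad name

    print(has_trusted_certificate("www.example.com"))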

Posted on December 1, 2005 at 7:43 AM

Security Skins

Much has been written about the insecurity of passwords. Aside from passwords being guessable, people are regularly tricked into providing them to rogue servers because they can’t distinguish spoofed windows and webpages from legitimate ones.

Here’s a clever scheme by Rachna Dhamija and Doug Tygar at the University of California, Berkeley, that tries to deal with the problem. It’s called “Dynamic Security Skins,” and it’s a pair of protocols that augment passwords.

First, the authors propose creating a trusted window in the browser dedicated to username and password entry. The user chooses a photographic image (or is assigned a random image), which is overlaid across the window and text entry boxes. If the window displays the user’s personal image, it is safe for the user to enter his password.

Second, to prove its identity, the server generates a unique abstract image for each user and each transaction. This image is used to create a “skin” that automatically customizes the browser window or the user interface elements in the content of a webpage. The user’s browser can independently compute the same image that it expects to receive from the server. To verify the server, the user only has to check visually that the images match.

Not a perfect solution by any means—much Internet fraud bypasses authentication altogether—but two clever ideas that use visual cues to ensure security. You can also verify server authenticity by inspecting the SSL certificate, but no one does that. With this scheme, the user has to recognize only one image and remember one password, no matter how many servers he interacts with. In contrast, the recently announced Site Key (Bank of America’s implementation of the Passmark scheme) requires users to save a different image with each server.
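The server-verification half boils down to both sides deterministically rendering the same image from a value only they can compute; the paper derives that value from its underlying authentication protocol and renders it with a visual hash. The toy sketch below substitutes an HMAC over a per-transaction identifier and a trivial color grid, just to show the shape of the idea; the key, transaction ID, and rendering are all made up.

    # Toy sketch of the "matching skins" idea: browser and server each derive
    # the same abstract pattern from a value only they share.  The real scheme
    # is more sophisticated; the key, transaction ID, and rendering here are
    # purely illustrative.
    import hashlib
    import hmac

    def skin_pattern(shared_secret: bytes, transaction_id: bytes, size: int = 8):
        """Derive a size-by-size grid of RGB colors from the shared secret."""
        seed = hmac.new(shared_secret, transaction_id, hashlib.sha256).digest()
        # Stretch the seed into enough bytes for the whole grid.
        stream, counter = b"", 0
        while len(stream) < size * size * 3:
            stream += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
            counter += 1
        return [[tuple(stream[(r * size + c) * 3:(r * size + c) * 3 + 3])
                 for c in range(size)] for r in range(size)]

    # Browser and server compute the pattern independently; the user just
    # checks that the two renderings look alike.
    browser_view = skin_pattern(b"per-user secret", b"transaction-42")
    server_view = skin_pattern(b"per-user secret", b"transaction-42")
    assert browser_view == server_view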

Posted on July 1, 2005 at 7:31 AM

Melbourne Water-Supply Security Risk

Here’s a scary hacking target: the remote-control system for Melbourne’s water supply. According to The Age:

Remote access to the Brooklyn pumping station and the rest of the infrastructure means the entire network can be controlled from any of seven main Melbourne Water sites, or by key staff such as Mr Woodland from home via a secure internet connection using Citrix’s Metaframe or a standard web browser.

SCADA systems are hard to hack, but SSL connections—at least, that’s what I presume they mean by “secure internet connection”—are much easier.

(Seen on Benambra.)

Posted on March 11, 2005 at 9:17 AM
