Entries Tagged "certifications"

On the Insecurity of ES&S Voting Machines’ Hash Code

Andrew Appel and Susan Greenhalgh have a blog post on the insecurity of ES&S’s software authentication system:

It turns out that ES&S has bugs in their hash-code checker: if the “reference hashcode” is completely missing, then it’ll say “yes, boss, everything is fine” instead of reporting an error. It’s simultaneously shocking and unsurprising that ES&S’s hashcode checker could contain such a blunder and that it would go unnoticed by the U.S. Election Assistance Commission’s federal certification process. It’s unsurprising because testing naturally tends to focus on “does the system work right when used as intended?” Using the system in unintended ways (which is what hackers would do) is not something anyone will notice.
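The failure mode Appel describes — a checker that reports success when the reference hash is absent — is a classic "fail open" bug. As a hypothetical sketch (not ES&S's actual code; the file names and hash choice are ours), a checker that fails closed instead looks like this:

```python
import hashlib
from pathlib import Path

def verify_firmware(image_path: str, reference_path: str) -> bool:
    """Check a firmware image against a stored reference hash.

    Illustrative only: a checker that treats a *missing* reference
    hash as success fails open. This version fails closed -- no
    reference means no assurance, so the answer is False.
    """
    reference = Path(reference_path)
    if not reference.exists() or not reference.read_text().strip():
        # Fail closed: without a reference hash we cannot vouch for anything.
        return False
    expected = reference.read_text().strip().lower()
    actual = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
    return actual == expected
```

The point of the sketch is the first branch: the buggy behavior Appel found amounts to returning `True` there instead of `False`, which is exactly the case that intended-use testing never exercises.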

Also:

Another gem in Mr. Mechler’s report is in Section 7.1, in which he reveals that acceptance testing of voting systems is done by the vendor, not by the customer. Acceptance testing is the process by which a customer checks a delivered product to make sure it satisfies requirements. To have the vendor do acceptance testing pretty much defeats the purpose.

Posted on March 16, 2021 at 6:36 AM

New SHA-1 Attack

There’s a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).

It has practical applications:

We chose the PGP/GnuPG Web of Trust as demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other’s identity certificate, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.
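The mechanics of why a collision lets a signature "transfer" can be sketched in a few lines: a certificate signature is computed over the *hash* of the certificate, so any two certificates that hash identically share every signature. This is a toy illustration, not the researchers' code; a deliberately truncated 1-byte hash stands in for a real chosen-prefix collision, and the HMAC signer stands in for a Web of Trust signature.

```python
import hashlib
import hmac

SIGNING_KEY = b"third-party-signer"  # stand-in for a Web of Trust signer's key

def weak_hash(data: bytes) -> bytes:
    # Truncated to 1 byte so a collision is findable in a demo;
    # the real attack achieves this against full SHA-1.
    return hashlib.sha1(data).digest()[:1]

def sign(cert: bytes) -> bytes:
    # Signatures cover only the hash of the certificate -- this is the
    # structural fact the attack exploits.
    return hmac.new(SIGNING_KEY, weak_hash(cert), hashlib.sha256).digest()

def verify(cert: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(cert), sig)

# Find two distinct "certificates" whose hashes collide.
cert_bob = b"uid=Bob key=B"
cert_alice = None
for i in range(100000):
    candidate = b"uid=Alice key=A nonce=%d" % i
    if weak_hash(candidate) == weak_hash(cert_bob):
        cert_alice = candidate
        break

signature = sign(cert_bob)  # the signer vouches only for Bob's certificate...
```

...but `verify(cert_alice, signature)` also returns `True`, because the two certificates are indistinguishable to the signature scheme. That is the whole impersonation attack in miniature.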

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.

Posted on January 8, 2020 at 9:38 AM

Excellent Analysis of the Boeing 737 Max Software Problems

This is the best analysis of the software causes of the Boeing 737 MAX disasters that I have read.

Technically this is safety and not security; there was no attacker. But the fields are closely related and there are a lot of lessons for IoT security—and the security of complex socio-technical systems in general—in here.

EDITED TO ADD (4/30): A rebuttal of sorts.

EDITED TO ADD (5/13): The comments to this blog post are of particularly high quality, and I recommend them to anyone interested in the topic.

Posted on April 22, 2019 at 8:45 AM

Vulnerabilities in the WPA3 Wi-Fi Security Protocol

Researchers have found several vulnerabilities in the WPA3 Wi-Fi security protocol:

The design flaws we discovered can be divided in two categories. The first category consists of downgrade attacks against WPA3-capable devices, and the second category consists of weaknesses in the Dragonfly handshake of WPA3, which in the Wi-Fi standard is better known as the Simultaneous Authentication of Equals (SAE) handshake. The discovered flaws can be abused to recover the password of the Wi-Fi network, launch resource consumption attacks, and force devices into using weaker security groups. All attacks are against home networks (i.e. WPA3-Personal), where one password is shared among all users.

News article. Research paper: “Dragonblood: A Security Analysis of WPA3’s SAE Handshake”:

Abstract: The WPA3 certification aims to secure Wi-Fi networks, and provides several advantages over its predecessor WPA2, such as protection against offline dictionary attacks and forward secrecy. Unfortunately, we show that WPA3 is affected by several design flaws, and analyze these flaws both theoretically and practically. Most prominently, we show that WPA3’s Simultaneous Authentication of Equals (SAE) handshake, commonly known as Dragonfly, is affected by password partitioning attacks. These attacks resemble dictionary attacks and allow an adversary to recover the password by abusing timing or cache-based side-channel leaks. Our side-channel attacks target the protocol’s password encoding method. For instance, our cache-based attack exploits SAE’s hash-to-curve algorithm. The resulting attacks are efficient and low cost: brute-forcing all 8-character lowercase passwords requires less than $125 in Amazon EC2 instances. In light of ongoing standardization efforts on hash-to-curve, Password-Authenticated Key Exchanges (PAKEs), and Dragonfly as a TLS handshake, our findings are also of more general interest. Finally, we discuss how to mitigate our attacks in a backwards-compatible manner, and explain how minor changes to the protocol could have prevented most of our attacks.
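The abstract's cost figure is easy to sanity-check. Once the side channel lets an attacker test password guesses offline, the cost is simply keyspace times price per guess; this back-of-envelope arithmetic is ours, not the paper's:

```python
# Sanity check on the quoted cost claim: an offline attack over all
# 8-character lowercase passwords, against a $125 compute budget.
keyspace = 26 ** 8                  # all 8-character lowercase passwords
budget_usd = 125                    # figure quoted in the abstract
cost_per_guess = budget_usd / keyspace

print(f"{keyspace:,} candidate passwords")   # 208,827,064,576
print(f"${cost_per_guess:.2e} per guess")
```

About 2 x 10^11 candidates against $125 works out to well under a nanodollar per guess — a rate that rented GPU instances can plausibly deliver for a cheap hash-based check, which is why the offline-attack figure is credible.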

Posted on April 15, 2019 at 2:00 PM

Is Cybersecurity a Profession?

A National Academy of Sciences panel says no:

Sticking to the quality control aspect of the report, professionalization, it says, has the potential to attract workers and establish long-term paths to improving the work force overall, but measures such as standardized education or requirements for certification, have their disadvantages too.

For example, formal education or certification could be helpful to employers looking to evaluate the skills and knowledge of a given applicant, but it takes time to develop curriculum and reach a consensus on what core knowledge and skills should be assessed in order to award any such certification. For direct examples of such a quandary, InfoSec needs only to look at the existing certification programs, and the criticisms directed at certifications such as the CISSP and C|EH.

Once a certification is issued, the previously mentioned barriers start to emerge. The standards used to award certifications will run the risk of becoming obsolete. Furthermore, workers may not have incentives to update their skills in order to remain current. Again, this issue is seen in the industry today, as some professionals choose to let their certifications lapse rather than renew them or try to collect the required CPE credits.

But the largest barrier is that some of the most talented individuals in cybersecurity are self-taught. So the requirement of formal education or training may, as mentioned, deter potential employees from entering the field at a time when they are needed the most. So while professionalization may be a useful tool in some circumstances, the report notes, it shouldn’t be used as a proxy for “better.”

Here’s the report.

Posted on October 3, 2013 at 12:55 PM

So You Want to Be a Security Expert

I regularly receive e-mail from people who want advice on how to learn more about computer security, either as a course of study in college or as an IT person considering it as a career choice.

First, know that there are many subspecialties in computer security. You can be an expert in keeping systems from being hacked, or in creating unhackable software. You can be an expert in finding security problems in software, or in networks. You can be an expert in viruses, or policies, or cryptography. There are many, many opportunities for many different skill sets. You don’t have to be a coder to be a security expert.

In general, though, I have three pieces of advice to anyone who wants to learn computer security.

  • Study. Studying can take many forms. It can be classwork, either at universities or at training conferences like SANS and Offensive Security. (These are good self-starter resources.) It can be reading; there are a lot of excellent books—and blogs—that teach different aspects of computer security. Don’t limit yourself to computer science, either. You can learn a lot by studying other areas of security, and soft sciences like economics, psychology, and sociology.
  • Do. Computer security is fundamentally a practitioner’s art, and that requires practice. This means using what you’ve learned to configure security systems, design new security systems, and—yes—break existing security systems. This is why many courses have strong hands-on components; you won’t learn much without it.
  • Show. It doesn’t matter what you know or what you can do if you can’t demonstrate it to someone who might want to hire you. This doesn’t just mean sounding good in an interview. It means sounding good on mailing lists and in blog comments. You can show your expertise by making podcasts and writing your own blog. You can teach seminars at your local user group meetings. You can write papers for conferences, or books.

I am a fan of security certifications, which can often demonstrate all of these things to a potential employer quickly and easily.

I’ve really said nothing here that isn’t also true for a gazillion other areas of study, but security also requires a particular mindset—one I consider essential for success in this field. I’m not sure it can be taught, but it certainly can be encouraged. “This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.” This is especially true if you want to design security systems and not just implement them. Remember Schneier’s Law: “Any person can invent a security system so clever that she or he can’t think of how to break it.” The only way your designs are going to be trusted is if you’ve made a name for yourself breaking other people’s designs.

One final word about cryptography. Modern cryptography is particularly hard to learn. In addition to everything above, it requires graduate-level knowledge in mathematics. And, as in computer security in general, your prowess is demonstrated by what you can break. The field has progressed a lot since I wrote this guide and self-study cryptanalysis course a dozen years ago, but they’re not bad places to start.

This essay originally appeared on “Krebs on Security,” the second in a series of answers to the question. This is the first. There will be more.

Posted on July 5, 2012 at 6:17 AM

FIPS 140-2 Level 2 Certified USB Memory Stick Cracked

Kind of a dumb mistake:

The USB drives in question encrypt the stored data via the practically uncrackable AES 256-bit hardware encryption system. Therefore, the main point of attack for accessing the plain text data stored on the drive is the password entry mechanism. When analysing the relevant Windows program, the SySS security experts found a rather blatant flaw that has quite obviously slipped through testers’ nets. During a successful authorisation procedure the program will, irrespective of the password, always send the same character string to the drive after performing various crypto operations—and this is the case for all USB Flash drives of this type.

Cracking the drives is therefore quite simple. The SySS experts wrote a small tool for the active password entry program’s RAM which always made sure that the appropriate string was sent to the drive, irrespective of the password entered and as a result gained immediate access to all the data on the drive. The vulnerable devices include the Kingston DataTraveler BlackBox, the SanDisk Cruzer Enterprise FIPS Edition and the Verbatim Corporate Secure FIPS Edition.
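The flaw has a simple structure: the password check lives in host software, and success always produces the same fixed unlock string, so an attacker who patches the software (or learns the string) never needs the password at all. Here is a hypothetical sketch of the broken scheme next to a sounder one — illustrative only, not SySS's analysis code or any vendor's implementation:

```python
import hashlib
from typing import Optional

# The same fixed secret for every drive of this model -- the core mistake.
FIXED_UNLOCK = b"\x13\x37" * 16

def broken_unlock(password: str, stored_check: bytes) -> Optional[bytes]:
    """Broken design: the check happens on the host, but the value
    actually sent to the drive never depends on the password."""
    if hashlib.sha256(password.encode()).digest() != stored_check:
        return None
    return FIXED_UNLOCK  # patch out the branch above and any password works

def sound_unlock(password: str, salt: bytes) -> bytes:
    """Sounder design: derive the unlock key from the password itself,
    so there is no single magic string to replay."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
```

With `broken_unlock`, bypassing the `if` statement in memory — which is essentially what the SySS tool did — yields `FIXED_UNLOCK` regardless of input. With `sound_unlock`, a wrong password simply derives a wrong key, which the drive's hardware encryption would reject.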

Nice piece of analysis work.

The article goes on to question the value of the FIPS certification:

The real question, however, remains unanswered: how could USB Flash drives that exhibit such a serious security hole be given one of the highest certificates for crypto devices? Even more importantly, perhaps: what is the value of a certification that fails to detect such holes?

The problem is that no one really understands what a FIPS 140-2 certification means. Instead, they think something like: “This crypto thingy is certified, so it must be secure.” In fact, FIPS 140-2 Level 2 certification only means that certain good algorithms are used, and that there is some level of tamper resistance and tamper evidence. Marketing departments of security companies take advantage of this confusion—it’s not only FIPS 140, it’s all the security standards—and encourage their customers to equate conformance to the standard with security.

So when that equivalence is demonstrated to be false, people are surprised.

Posted on January 8, 2010 at 7:24 AM

Unisys Blamed for DHS Data Breaches

This story has been percolating around for a few days. Basically, Unisys was hired by the U.S. Department of Homeland Security to manage and monitor the department’s network security. After data breaches were discovered, DHS blamed Unisys—and I figured that everyone would be in serious CYA mode and that we’d never know what really happened. But it seems that there was a cover-up at Unisys, and that’s a big deal:

As part of the contract, Unisys, based in Blue Bell, Pa., was to install network-intrusion detection devices on the unclassified computer systems for the TSA and DHS headquarters and monitor the networks. But according to evidence gathered by the House Homeland Security Committee, Unisys’s failure to properly install and monitor the devices meant that DHS was not aware for at least three months of cyber-intrusions that began in June 2006. Through October of that year, Thompson said, 150 DHS computers—including one in the Office of Procurement Operations, which handles contract data—were compromised by hackers, who sent an unknown quantity of information to a Chinese-language Web site that appeared to host hacking tools.

The contractor also allegedly falsely certified that the network had been protected to cover up its lax oversight, according to the committee.

What interests me the most (as someone with a company that does network security management and monitoring) is that there might be some liability here:

“For the hundreds of millions of dollars that have been spent on building this system within Homeland, we should demand accountability by the contractor,” [Congressman] Thompson said in an interview. “If, in fact, fraud can be proven, those individuals guilty of it should be prosecuted.”

And, as an aside, we see how useless certifications can be:

She said that Unisys has provided DHS “with government-certified and accredited security programs and systems, which were in place throughout 2006 and remain so today.”

Posted on October 3, 2007 at 6:50 AM

Perpetual Doghouse: Meganet

I first wrote about Meganet in 1999, in a larger article on cryptographic snake-oil, and formally put them in the doghouse in 2003:

They build an alternate reality where every cryptographic algorithm has been broken, and the only thing left is their own system. “The weakening of public crypto systems commenced in 1997. First it was the 40-bit key, a few months later the 48-bit key, followed by the 56-bit key, and later the 512 bit has been broken…” What are they talking about? Would you trust a cryptographer who didn’t know the difference between symmetric and public-key cryptography? “Our technology… is the only unbreakable encryption commercially available.” The company’s founder quoted in a news article: “All other encryption methods have been compromised in the last five to six years.” Maybe in their alternate reality, but not in the one we live in.

Their solution is to not encrypt data at all. “We believe there is one very simple rule in encryption: if someone can encrypt data, someone else will be able to decrypt it. The idea behind VME is that the data is not being encrypted nor transferred. And if it’s not encrypted and not transferred, there is nothing to break. And if there’s nothing to break, it’s unbreakable.” Ha ha; that’s a joke. They really do encrypt data, but they call it something else.

Read the whole thing; it’s pretty funny.

They’re still around, and they’re still touting their snake-oil “virtual matrix encryption.” (The patent is finally public, and if someone can reverse-engineer the combination of patentese and gobbledygook into an algorithm, we can finally see how awful it actually is.) The tech on their website is better than it was in 2003, but it’s still pretty hokey.

Back in 2005, they got their product FIPS 140-1 certified (#505 on this page). The certification was for their AES implementation, but they’re sneakily implying that VME was certified. From their website: “The Strength of a Megabit Encryption (VME). The Assurance of a 256 Bit Standard (AES). Both Technologies Combined in One Certified Module! FIPS 140-2 CERTIFICATE # 505.”

Just goes to show that with a bit of sleight-of-hand you can get anything FIPS 140 certified.

Posted on June 14, 2007 at 1:05 PM
