Entries Tagged "security engineering"

Insider Logic Bombs

Add to the “not very smart criminals” file:

According to court documents, Tinley provided software services for Siemens’ Monroeville, PA offices for nearly ten years. Among the work he was asked to perform was the creation of spreadsheets that the company was using to manage equipment orders.

The spreadsheets included custom scripts that would update the content of the file based on current orders stored in other, remote documents, allowing the company to automate inventory and order management.

But while Tinley’s files worked for years, they started malfunctioning around 2014. According to court documents, Tinley planted so-called “logic bombs” that would trigger after a certain date and crash the files.

Every time the scripts would crash, Siemens would call Tinley, who’d fix the files for a fee.
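
The mechanism is as simple as it sounds. Here is a minimal sketch of what such a date-triggered failure might look like, in Python rather than the spreadsheet scripting actually used, with hypothetical names and a hypothetical trigger date throughout:

    from datetime import date

    TRIGGER_DATE = date(2014, 5, 1)  # hypothetical; per the complaint, crashes began around 2014

    def sync_inventory(doc):
        """Stand-in for the legitimate order/inventory processing."""
        print(f"synced {doc}")

    def refresh_orders(remote_documents):
        """Update the spreadsheet from the remote order documents."""
        # The "bomb": after the trigger date, fail in a way that looks
        # like an ordinary malfunction rather than deliberate sabotage.
        if date.today() >= TRIGGER_DATE:
            raise RuntimeError("unexpected data format")  # crash the file
        for doc in remote_documents:
            sync_inventory(doc)

    refresh_orders(["orders-north.xlsx", "orders-south.xlsx"])

A check like this sits in plain sight in the code, which is why schemes like this tend to unravel the moment anyone other than the author has to maintain the files.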

Posted on July 26, 2019 at 6:05 AM

Software Developers and Security

According to a survey: “68% of the security professionals surveyed believe it’s a programmer’s job to write secure code, but they also think less than half of developers can spot security holes.” And that’s a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get developers to make remediation of vulnerabilities a priority. Worse still, 68% of security professionals feel fewer than half of developers can spot security vulnerabilities later in the life cycle. Roughly half of security professionals said they most often found bugs after code is merged in a test environment.

At the same time, nearly 70% of developers said that while they are expected to write secure code, they get little guidance or help. One disgruntled programmer said, “It’s a mess, no standardization, most of my work has never had a security scan.”

Another problem is that many companies don’t seem to take security seriously enough. Some 44% of those surveyed reported that they’re not judged on their security vulnerabilities.

Posted on July 25, 2019 at 6:17 AM

Yubico Security Keys with a Crypto Flaw

Wow, is this an embarrassing bug:

Yubico is recalling a line of security keys used by the U.S. government due to a firmware flaw. The company issued a security advisory today that warned of an issue in YubiKey FIPS Series devices with firmware versions 4.4.2 and 4.4.4 that reduced the randomness of the cryptographic keys they generate. The security keys are used by thousands of federal employees on a daily basis, letting them securely log on to their devices by issuing one-time passwords.

The problem in question occurs after the security key powers up. According to Yubico, a bug keeps “some predictable content” inside the device’s data buffer that could impact the randomness of the keys generated. Security keys that produce ECDSA signatures are particularly at risk: a total of 80 of the 256 bits generated by the key remain static, meaning an attacker who gains access to several signatures could recreate the private key.
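
To see why a predictable nonce is fatal, here is a toy sketch of the degenerate case, a fully repeated nonce, using only arithmetic mod the group order. The value r would normally be the x-coordinate of kG on the curve, but the key-recovery algebra doesn’t need the curve, so a random stand-in is used here. Recovering the key from 80 known bits per signature, as in the YubiKey flaw, takes lattice techniques rather than this two-line algebra, but the conclusion is the same:

    import secrets

    # Group order n of NIST P-256; all ECDSA signature algebra happens mod n.
    n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

    d = secrets.randbelow(n - 1) + 1   # victim's private key
    k = secrets.randbelow(n - 1) + 1   # the nonce (reused below)
    r = secrets.randbelow(n - 1) + 1   # stand-in for the x-coord of k*G

    def sign(h):
        """ECDSA signing equation: s = k^-1 * (h + r*d) mod n."""
        return (pow(k, -1, n) * (h + r * d)) % n

    h1, h2 = secrets.randbelow(n), secrets.randbelow(n)  # two message hashes
    s1, s2 = sign(h1), sign(h2)

    # From two signatures sharing a nonce, anyone can solve for k, then d:
    k_rec = ((h1 - h2) * pow(s1 - s2, -1, n)) % n
    d_rec = ((s1 * k_rec - h1) * pow(r, -1, n)) % n
    assert (k_rec, d_rec) == (k, d)
    print("recovered private key:", hex(d_rec))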

Boing Boing post.

EDITED TO ADD (6/12): From the Microsoft TechNet Security Guidance blog (in 2014): “Why We’re Not Recommending ‘FIPS Mode’ Anymore.”

Posted on July 1, 2019 at 5:55 AM

Hacking Hardware Security Modules

Security researchers Gabriel Campana and Jean-Baptiste Bédrune are giving a hardware security module (HSM) talk at BlackHat in August:

This highly technical presentation targets an HSM manufactured by a vendor whose solutions are usually found in major banks and large cloud service providers. It will demonstrate several attack paths, some of them allowing unauthenticated attackers to take full control of the HSM. The presented attacks allow retrieving all HSM secrets remotely, including cryptographic keys and administrator credentials. Finally, we exploit a cryptographic bug in the firmware signature verification to upload a modified firmware to the HSM. This firmware includes a persistent backdoor that survives a firmware update.

They have an academic paper in French, and a presentation of the work. Here’s a summary in English.

There were plenty of technical challenges to solve along the way, in what was clearly a thorough and professional piece of vulnerability research:

  1. They started by using legitimate SDK access to their test HSM to upload a firmware module that would give them a shell inside the HSM. Note that this SDK access was used to discover the attacks, but is not necessary to exploit them.
  2. They then used the shell to run a fuzzer on the internal implementation of PKCS#11 commands to find reliable, exploitable buffer overflows (a minimal sketch of this kind of fuzzing appears after this list).
  3. They checked that they could exploit these buffer overflows from outside the HSM, i.e. by just calling the PKCS#11 driver from the host machine.
  4. They then wrote a payload that would override access control and, via another issue in the HSM, allow them to upload arbitrary (unsigned) firmware. It’s important to note that this backdoor is persistent: a subsequent update will not fix it.
  5. They then wrote a module that would dump all the HSM secrets, and uploaded it to the HSM.
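
The fuzzing in step 2 can be as unsophisticated as mutating a known-good command buffer and watching for crashes. Here is a self-contained sketch of that idea in Python, with a deliberately buggy toy parser standing in for the HSM’s PKCS#11 command handlers; nothing here is the HSM’s actual code:

    import random, struct

    def parse_command(buf: bytes):
        """Toy command parser: [2-byte opcode][2-byte length][payload]."""
        opcode, length = struct.unpack_from(">HH", buf)
        payload = buf[4:4 + length]
        if opcode == 0x0001:
            # Bug: trusts the declared length instead of len(payload).
            return payload[length - 1]  # blows up when the length lies
        return None

    seed = struct.pack(">HH", 1, 4) + b"AAAA"  # a known-good command
    random.seed(0)                             # reproducible run
    for i in range(10_000):
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):  # flip a few bytes
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            parse_command(bytes(data))
        except Exception as e:                 # a "crash"
            print(f"iteration {i}: {data.hex()} -> {type(e).__name__}")
            break

Real fuzzing of a memory-unsafe target would watch for segfaults rather than exceptions, and use coverage feedback rather than blind mutation, but the workflow is the same: mutate, send, observe, triage.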

Posted on June 20, 2019 at 6:56 AM

Chinese Military Wants to Develop Custom OS

Citing security concerns, the Chinese military wants to replace Windows with its own custom operating system:

Thanks to the Snowden, Shadow Brokers, and Vault7 leaks, Beijing officials are well aware of the US’ hefty arsenal of hacking tools, available for anything from smart TVs to Linux servers, and from routers to common desktop operating systems, such as Windows and Mac.

Since these leaks have revealed that the US can hack into almost anything, the Chinese government’s plan is to adopt a “security by obscurity” approach and run a custom operating system that will make it harder for foreign threat actors—mainly the US—to spy on Chinese military operations.

It’s unclear exactly how custom this new OS will be. It could be a Linux variant, like North Korea’s Red Star OS. Or it could be something completely new. Normally, I would be highly skeptical of a country being able to write and field its own custom operating system, but China is one of the few that is large enough to actually be able to do it. So I’m just moderately skeptical.

EDITED TO ADD (6/12): Russia also wants to develop its own flavor of Linux.

Posted on June 6, 2019 at 7:04 AM

Programmers Who Don't Understand Security Are Poor at Security

A university study confirmed the obvious: if you pay a random bunch of freelance programmers a small amount of money to write security software, they’re not going to do a very good job at it.

In an experiment that involved 43 programmers hired via the Freelancer.com platform, University of Bonn academics have discovered that developers tend to take the easy way out and write code that stores user passwords in an unsafe manner.

For their study, the German academics asked a group of 260 Java programmers to write a user registration system for a fake social network.

Of the 260 developers, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL to create the user registration component.

Of the 43, academics paid half of the group with €100, and the other half with €200, to determine if higher pay made a difference in the implementation of password security features.

Further, they divided the developer group a second time, prompting half of the developers to store passwords in a secure manner and leaving the other half to store passwords however they preferred, forming four groups:

  1. P100: paid €100 and prompted to use a secure password storage method.
  2. P200: paid €200 and prompted to use a secure password storage method.
  3. N100: paid €100 and not prompted about password security.
  4. N200: paid €200 and not prompted about password security.
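
For context, “storing passwords in a secure manner” here means, at minimum, a per-user salt and a deliberately expensive key-derivation function, never the plaintext and never a bare hash. A minimal sketch using only Python’s standard library; scrypt is one reasonable choice among several (bcrypt, Argon2, PBKDF2):

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)                    # unique per user
        digest = hashlib.scrypt(password.encode(),
                                salt=salt, n=2**14, r=8, p=1)
        return salt, digest                      # store both, never the password

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(),
                                   salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)  # constant-time compare

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("hunter2", salt, digest)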

I don’t know why anyone would expect this group of people to implement a good secure password system. Look at how they were hired. Look at the scope of the project. Look at what they were paid. I’m sure they grabbed the first thing they found on GitHub that did the job.

I’m not very impressed with the study or its conclusions.

Posted on March 27, 2019 at 6:37 AM

Critical Flaw in Swiss Internet Voting System

Researchers have found a critical flaw in the Swiss Internet voting system. I was going to write an essay about how this demonstrates that Internet voting is a stupid idea and should never be attempted—and that this system in particular should never be deployed, even if the found flaw is fixed—but Cory Doctorow beat me to it:

The belief that companies can be trusted with this power defies all logic, but it persists. Someone found Swiss Post’s embrace of the idea too odious to bear, and they leaked the source code that Swiss Post had shared under its nondisclosure terms, and then an international team of some of the world’s top security experts (including some of our favorites, like Matthew Green) set about analyzing that code, and (as every security expert who doesn’t work for an e-voting company has predicted since the beginning of time), they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.

And, as everyone who’s ever advocated for the right of security researchers to speak in public without permission from the companies whose products they were assessing has predicted since the beginning of time, Swiss Post and Scytl downplayed the importance of this objectively very, very, very important bug. Swiss Post’s position is that since the bug only allows elections to be stolen by Swiss Post employees, it’s not a big deal, because Swiss Post employees wouldn’t steal an election.

But when Swiss Post agreed to run the election, they promised an e-voting system based on “zero knowledge” proofs that would allow voters to trust the outcome of the election without having to trust Swiss Post. Swiss Post is now moving the goalposts, saying that it wouldn’t be such a big deal if you had to trust Swiss Post implicitly to trust the outcome of the election.

You might be thinking, “Well, what is the big deal? If you don’t trust the people administering an election, you can’t trust the election’s outcome, right?” Not really: we design election systems so that multiple, uncoordinated people all act as checks and balances on each other. To suborn a well-run election takes massive coordination at many polling and counting places, as well as independent scrutineers from different political parties, outside observers, etc.

Read the whole thing. It’s excellent.

More info.

Posted on March 15, 2019 at 9:44 AM

On the Security of Password Managers

There’s new research on the security of password managers, specifically 1Password, Dashlane, KeePass, and LastPass. The work looks at password leakage on the host computer. That is, does the password manager accidentally leave plaintext copies of the password lying around in memory?

All password managers we examined sufficiently secured user secrets while in a “not running” state. That is, if a password database were to be extracted from disk and if a strong master password was used, then brute forcing of a password manager would be computationally prohibitive.

Each password manager also attempted to scrub secrets from memory. But residual buffers remained that contained secrets, most likely due to memory leaks, lost memory references, or complex GUI frameworks which do not expose internal memory management mechanisms to sanitize secrets.

This was most evident in 1Password7, where secrets, including the master password and its associated secret key, were present in both a locked and unlocked state. This is in contrast to 1Password4, where, at most, a single entry is exposed in a “running unlocked” state and the master password exists in memory in an obfuscated form, but is easily recoverable. If 1Password4 scrubbed the master password memory region upon successful unlocking, it would comply with all proposed security guarantees we outlined earlier.

This paper is not meant to criticize specific password manager implementations; rather, it is meant to establish a reasonable minimum baseline that all password managers should comply with. It is evident that attempts are made to scrub sensitive memory in all password managers. However, each password manager fails to implement proper secrets sanitization, for various reasons.

For example:

LastPass obfuscates the master password while users are typing in the entry, and when the password manager enters an unlocked state, database entries are only decrypted into memory when there is user interaction. However, ISE reported that these entries persist in memory after the software enters a locked state. It was also possible for the researchers to extract the master password and interacted-with password entries due to a memory leak.

KeePass scrubs the master password from memory so that it is not recoverable. However, errors in workflows allowed the researchers to extract credential entries that had been interacted with. In some cases involving Windows APIs, memory buffers containing decrypted entries were not scrubbed correctly.
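
Part of why every manager struggles with this: in managed languages and GUI frameworks, secrets get copied into immutable objects the programmer cannot scrub. A minimal Python sketch of the underlying tension, with a hypothetical unlock_database standing in for the real decryption step:

    def unlock_database(key: bytes):
        """Stand-in for the real decryption step."""
        pass  # `key` is an immutable copy; nothing here can scrub it

    master = bytearray(b"hunter2-master-password")  # mutable on purpose
    try:
        unlock_database(bytes(master))   # this copy is the hard part
    finally:
        for i in range(len(master)):     # zero the buffer we do control
            master[i] = 0
    assert not any(master)               # our copy is gone...
    # ...but the bytes() copy above, and anything a GUI widget or a
    # logging call made from it, may still be sitting on the heap.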

Whether this is a big deal or not depends on whether you consider your computer to be trusted.

Several people have emailed me to ask why my own Password Safe was not included in the evaluation, and whether it has the same vulnerabilities. My guess about the former is that Password Safe isn’t as popular as the others. (This is for two reasons: 1) I don’t publicize it very much, and 2) it doesn’t have an easy way to synchronize passwords across devices or otherwise store password data in the cloud.) As to the latter: we tried to code Password Safe not to leave plaintext passwords lying around in memory.

So, Independent Security Evaluators: take a look at Password Safe.

Also, remember the vulnerabilities found in many cloud-based password managers back in 2014?

News article. Slashdot thread.

Posted on February 25, 2019 at 6:23 AM

Security Flaws in Children's Smart Watches

A year ago, the Norwegian Consumer Council published an excellent security analysis of children’s GPS-connected smart watches. The security was terrible. Not only could parents track the children; anyone else could track them, too.

A recent analysis checked if anything had improved after that torrent of bad press. Short answer: no.

Guess what: a train wreck. Anyone could access the entire database, including real-time child location, name, parents’ details, etc. Not just Gator watches either: the same back end covered multiple brands and tens of thousands of watches.

The Gator web backend was passing the user level as a parameter. Changing that value to another number gave super admin access throughout the platform. The system failed to validate that the user had the appropriate permission to take admin control!

This means that an attacker could get full access to all account information and all watch information. They could view any user of the system and any device on the system, including its location. They could manipulate everything and even change users’ emails/passwords to lock them out of their watch.

In fairness, once we reported the vulnerability to them, Gator fixed it within 48 hours.
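
The underlying bug is textbook broken access control: authorization derived from a client-supplied parameter instead of server-side state. Here is a sketch of the anti-pattern and the fix; all names, endpoints, and data here are hypothetical, not the actual Gator API:

    # Hypothetical server-side session store.
    SESSIONS = {"abc123": {"user_id": 42, "level": "parent"}}

    def all_devices():
        return ["every watch on the platform"]

    def devices_for(user_id):
        return [f"watch owned by user {user_id}"]

    def get_devices_broken(request):
        # BROKEN: the privilege level arrives in the request itself,
        # so any client can simply claim to be a super admin.
        if request["level"] == "superadmin":
            return all_devices()
        return devices_for(request["user_id"])

    def get_devices_fixed(request):
        # FIXED: look the caller up server-side and check their role.
        session = SESSIONS.get(request["token"])
        if session is None:
            raise PermissionError("not authenticated")
        if session["level"] == "superadmin":
            return all_devices()
        return devices_for(session["user_id"])

    print(get_devices_broken({"user_id": 42, "level": "superadmin"}))  # leaks everything
    print(get_devices_fixed({"token": "abc123"}))                      # scoped to the caller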

This is a lesson in the limits of naming and shaming: publishing vulnerabilities in an effort to get companies to improve their security. If a company is specifically named, it is likely to improve the specific vulnerability described. But that is unlikely to translate into improved security practices in the future. If an industry, or product category, is named generally, nothing is likely to happen. This is one of the reasons I am a proponent of regulation.

News article.

EDITED TO ADD (2/13): The EU has acted in a similar case.

Posted on January 31, 2019 at 10:30 AM
