Entries Tagged "security engineering"

Ross Anderson

Ross Anderson unexpectedly passed away Thursday night, I believe at his home in Cambridge.

I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute, at Cambridge University, for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996.

I know I was at the first Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first edition of Applied Cryptography (the one with the blue cover) down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993—probably at Eurocrypt in Lofthus, Norway—I, as an unpublished book author who had only written a couple of crypto articles for Dr. Dobb’s Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find at least one unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He published on block cipher cryptanalysis in the 1990s, and on the security of large language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its third edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and backdoors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

He’s listed in the acknowledgments as a reader of every one of my books from Beyond Fear on. Recently, we’d see each other a couple of times a year: at this or that workshop or event. The last time I saw him was last June, at SHB 2023, in Pittsburgh. We were having dinner on Alessandro Acquisti’s rooftop patio, celebrating another successful workshop. He was going to attend my Workshop on Reimagining Democracy in December, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we talked. And I am not the only one.

My heart goes out to his wife Shireen and his family. We lost him much too soon.

EDITED TO ADD (4/10): I wrote a longer version for Communications of the ACM.

EDITED TO ADD (4/11): Two weeks before he passed away, Ross gave an 80-minute interview where he told his life story.

Posted on March 31, 2024 at 8:21 PM

New Report on IoT Security

The Atlantic Council has published a report on securing the Internet of Things: “Security in the Billions: Toward a Multinational Strategy to Better Secure the IoT Ecosystem.” The report examines the regulatory approaches taken by four countries—the US, the UK, Australia, and Singapore—to secure home, medical, and networking/telecommunications devices. The report recommends that regulators should 1) enforce minimum security standards for manufacturers of IoT devices, 2) incentivize higher levels of security through public contracting, and 3) try to align IoT standards internationally (for example, international guidance on handling connected devices that stop receiving security updates).

This report looks to existing security initiatives as much as possible—both to leverage existing work and to avoid counterproductively suggesting an entirely new approach to IoT security—while recommending changes and introducing more cohesion and coordination to regulatory approaches to IoT cybersecurity. It walks through the current state of risk in the ecosystem, analyzes challenges with the current policy model, and describes a synthesized IoT security framework. The report then lays out nine recommendations for government and industry actors to enhance IoT security, broken into three recommendation sets: setting a baseline of minimally acceptable security (or “Tier 1”), incentivizing above the baseline (or “Tier 2” and above), and pursuing international alignment on standards and implementation across the entire IoT product lifecycle (from design to sunsetting). It also includes implementation guidance for the United States, Australia, UK, and Singapore, providing a clearer roadmap for countries to operationalize the recommendations in their specific jurisdictions—and push towards a stronger, more cohesive multinational approach to securing the IoT worldwide.

Note: One of the authors of this report was a student of mine at Harvard Kennedy School, and did this work with the Atlantic Council under my supervision.

Posted on September 27, 2022 at 6:15 AM

Manipulating Machine-Learning Systems through the Order of the Training Data

Yet another adversarial ML attack:

Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.

So what happens if the bad guys can cause the order to be not random? You guessed it—all bets are off. Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set – then let initialisation bias do the rest of the work.

Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.

Research paper.
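
The core trick is easy to see in a small experiment. Here is a minimal sketch (my own toy example, not the paper's setup): a logistic-regression "credit" model trained with plain SGD on the same data twice, once with shuffled batches and once with an adversarially chosen first block of batches, ends up with different weights purely because of the ordering.

```python
# Toy sketch (my construction, not the paper's experiments): identical data and
# identical initialisation, but the batch order alone changes the trained model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "credit" data: feature 0 is income, feature 1 is a protected
# attribute; the true label depends only on income.
n = 2000
X = np.column_stack([rng.normal(size=n), rng.integers(0, 2, size=n).astype(float)])
y = (X[:, 0] > 0).astype(float)

def train(order, lr=0.5, batch=20):
    w = np.zeros(2)                                  # same initialisation every run
    for i in range(0, n, batch):
        idx = order[i:i + batch]
        p = 1 / (1 + np.exp(-X[idx] @ w))            # logistic prediction
        w -= lr * X[idx].T @ (p - y[idx]) / batch    # one SGD step
    return w

random_order = rng.permutation(n)

# Adversarial order: front-load the examples where the protected attribute
# coincides with a negative label, aiming to bake that spurious correlation
# into the earliest training steps.
biased = (X[:, 1] == 1) & (y == 0)
adversarial_order = np.concatenate([np.flatnonzero(biased), np.flatnonzero(~biased)])

print("shuffled batches:   ", train(random_order))
print("adversarial batches:", train(adversarial_order))
```

Run it and the two weight vectors come out different, which is the whole point: nothing about the data changed, only the order in which SGD saw it.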

Posted on May 25, 2022 at 10:30 AM

Hiding Vulnerabilities in Source Code

Really interesting research demonstrating how to hide vulnerabilities in source code by manipulating how Unicode text is displayed. It’s really clever, and not the sort of attack one would normally think about.

From Ross Anderson’s blog:

We have discovered ways of manipulating the encoding of source code files so that human viewers and compilers see different logic. One particularly pernicious method uses Unicode directionality override characters to display code as an anagram of its true logic. We’ve verified that this attack works against C, C++, C#, JavaScript, Java, Rust, Go, and Python, and suspect that it will work against most other modern languages.

This potentially devastating attack is tracked as CVE-2021-42574, while a related attack that uses homoglyphs – visually similar characters – is tracked as CVE-2021-42694. This work has been under embargo for a 99-day period, giving time for a major coordinated disclosure effort in which many compilers, interpreters, code editors, and repositories have implemented defenses.
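
The Unicode mechanism at the heart of the attack is easy to demonstrate. This is a toy illustration of bidirectional-override behaviour (my own example, not one of the paper's actual payloads): the characters a reviewer sees rendered on screen are not the characters the interpreter compares.

```python
# Toy illustration (not one of the paper's payloads): the right-to-left
# override character (U+202E) makes bidi-aware renderers display the text
# that follows it reversed, so what a human reviewer sees differs from the
# code points a compiler or interpreter actually processes.

RLO = "\u202e"                        # RIGHT-TO-LEFT OVERRIDE, invisible on screen
stored = RLO + "txt.nimda"            # the underlying code points, left to right
# In a bidi-aware editor or terminal, `stored` renders as "admin.txt" ...
print(stored == "admin.txt")          # ... but comparisons see the raw text: False
print([hex(ord(c)) for c in stored])  # the hidden U+202E shows up as '0x202e'
```

The paper's source-code variants embed these control characters inside comments and string literals, where compilers accept them, so that logic which appears to be commented out, or to read in one order, is actually compiled in another.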

Website for the attack. Rust security advisory.

Brian Krebs has a blog post.

EDITED TO ADD (11/12): An older paper on similar issues.

Posted on November 1, 2021 at 10:58 AM

Open Source Does Not Equal Secure

Way back in 1999, I wrote about open-source software:

First, simply publishing the code does not automatically mean that people will examine it for security flaws. Security researchers are fickle and busy people. They do not have the time to examine every piece of source code that is published. So while opening up source code is a good thing, it is not a guarantee of security. I could name a dozen open source security libraries that no one has ever heard of, and no one has ever evaluated. On the other hand, the security code in Linux has been looked at by a lot of very good security engineers.

We have some new research from GitHub that bears this out: on average, vulnerabilities in open-source libraries go undetected for over four years. From a ZDNet article:

GitHub launched a deep-dive into the state of open source security, comparing information gathered from the organization’s dependency security features and the six package ecosystems supported on the platform across October 1, 2019, to September 30, 2020, and October 1, 2018, to September 30, 2019.

Only active repositories were included, excluding forks or ‘spam’ projects. The package ecosystems analyzed are Composer, Maven, npm, NuGet, PyPI, and RubyGems.

In comparison to 2019, GitHub found that 94% of projects now rely on open source components, with close to 700 dependencies on average. Most frequently, open source dependencies are found in JavaScript—94%—as well as Ruby and .NET, at 90%, respectively.

On average, vulnerabilities can go undetected for over four years in open source projects before disclosure. A fix is then usually available in just over a month, which GitHub says “indicates clear opportunities to improve vulnerability detection.”

Open source means that the code is available for security evaluation, not that it necessarily has been evaluated by anyone. This is an important distinction.

Posted on December 3, 2020 at 11:21 AM

The Third Edition of Ross Anderson’s Security Engineering

Ross Anderson’s fantastic textbook, Security Engineering, will have a third edition. The book won’t be published until December, but Ross has been making drafts of the chapters available online as he finishes them. Now that the book is completed, I expect the publisher to make him take the drafts off the Internet.

I personally find both the electronic and paper versions to be incredibly useful. Grab an electronic copy now while you still can.

Posted on September 10, 2020 at 6:26 AM
