Entries Tagged "cybersecurity"


Securing Open-Source Software

Good essay arguing that open-source software is a critical national-security asset and needs to be treated as such:

Open source is at least as important to the economy, public services, and national security as proprietary code, but it lacks the same standards and safeguards. It bears the qualities of a public good and is as indispensable as national highways. Given open source’s value as a public asset, an institutional structure must be built that sustains and secures it.

This is not a novel idea. Open-source code has been called the “roads and bridges” of the current digital infrastructure that warrants the same “focus and funding.” Eric Brewer of Google explicitly called open-source software “critical infrastructure” in a recent keynote at the Open Source Summit in Austin, Texas. Several nations have adopted regulations that recognize open-source projects as significant public assets and central to their most important systems and services. Germany wants to treat open-source software as a public good and launched a sovereign tech fund to support open-source projects “just as much as bridges and roads,” and not just when a bridge collapses. The European Union adopted a formal open-source strategy that encourages it to “explore opportunities for dedicated support services for open source solutions [it] considers critical.”

Designing an institutional framework that would secure open source requires addressing adverse incentives, ensuring efficient resource allocation, and imposing minimum standards. But not all open-source projects are made equal. The first step is to identify which projects warrant this heightened level of scrutiny—projects that are critical to society. CISA defines critical infrastructure as industry sectors “so vital to the United States that [its] incapacity or destruction would have a debilitating impact on our physical or economic security or public health or safety.” Efforts should target the open-source projects that share those features.

Posted on July 27, 2022 at 7:03 AM

Apple’s Lockdown Mode

I haven’t written about Apple’s Lockdown Mode yet, mostly because I haven’t delved into the details. This is how Apple describes it:

Lockdown Mode offers an extreme, optional level of security for the very few users who, because of who they are or what they do, may be personally targeted by some of the most sophisticated digital threats, such as those from NSO Group and other private companies developing state-sponsored mercenary spyware. Turning on Lockdown Mode in iOS 16, iPadOS 16, and macOS Ventura further hardens device defenses and strictly limits certain functionalities, sharply reducing the attack surface that potentially could be exploited by highly targeted mercenary spyware.

At launch, Lockdown Mode includes the following protections:

  • Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
  • Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
  • Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
  • Wired connections with a computer or accessory are blocked when iPhone is locked.
  • Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.

What Apple has done here is really interesting. It’s common to trade security off for usability, and the results of that are all over Apple’s operating systems—and everywhere else on the Internet. What they’re doing with Lockdown Mode is the reverse: they’re trading usability for security. The result is a user experience with fewer features, but a much smaller attack surface. And they aren’t just removing random features; they’re removing features that are common attack vectors.

There aren’t a lot of people who need Lockdown Mode, but it’s an excellent option for those who do.

News article.

EDITED TO ADD (7/31): An analysis of the effect of Lockdown Mode on Safari.

Posted on July 26, 2022 at 7:57 AM

When Security Locks You Out of Everything

Thought experiment story of someone who lost everything in a house fire, and now can’t log into anything:

But to get into my cloud, I need my password and 2FA. And even if I could convince the cloud provider to bypass that and let me in, the backup is secured with a password which is stored in—you guessed it—my Password Manager.

I am in cyclic dependency hell. To get my passwords, I need my 2FA. To get my 2FA, I need my passwords.
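The deadlock in the quoted scenario can be made concrete by treating each credential's recovery path as a directed edge ("to unlock X you need Y") and checking the resulting graph for cycles. This is a hypothetical sketch, not anything from the original post; the credential names and dependencies are invented for illustration.

```python
# Model "to unlock X you need Y" as a directed graph and look for a cycle.
# All names below are invented for the example.

def find_cycle(deps):
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    visited, stack = set(), []

    def dfs(node):
        if node in stack:
            # Found a back-edge: the cycle is the stack from the repeat onward.
            return stack[stack.index(node):] + [node]
        if node in visited:
            return None
        visited.add(node)
        stack.append(node)
        for needed in deps.get(node, []):
            cycle = dfs(needed)
            if cycle:
                return cycle
        stack.pop()
        return None

    for start in deps:
        cycle = dfs(start)
        if cycle:
            return cycle
    return None

# The scenario from the post: every recovery path eventually needs another one.
deps = {
    "cloud_backup": ["password", "2fa"],
    "password": ["password_manager"],
    "password_manager": ["2fa"],
    "2fa": ["phone"],
    "phone": ["cloud_backup"],  # restoring the phone needs the cloud backup
}
print(find_cycle(deps))
```

Any node on the printed cycle is a single point whose loss locks the owner out of everything else, which is why recovery design usually tries to keep at least one credential (a printed recovery code in a safe-deposit box, say) outside the graph entirely.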

It’s a one-in-a-million story, and one that’s hard to take into account in system design.

This is where we reach the limits of the “Code Is Law” movement.

In the boring analogue world—I am pretty sure that I’d be able to convince a human that I am who I say I am. And, thus, get access to my accounts. I may have to go to court to force a company to give me access back, but it is possible.

But when things are secured by an unassailable algorithm—I am out of luck. No amount of pleading will let me in without the correct credentials. The company that provides my password manager simply doesn’t have access to my passwords. There is no one to convince. Code is law.

Of course, if I can wangle my way past security, an evil-doer could also do so.

So which is the bigger risk?

  • An impersonator who convinces a service provider that they are me?
  • A malicious insider who works for a service provider?
  • Me permanently losing access to all of my identifiers?

I don’t know the answer to that.

Those risks are in the order of most common to least common, but that doesn’t necessarily mean that they are in risk order. They probably are, but then we’re left with no good way to handle someone who has lost all their digital credentials—computer, phone, backup, hardware token, wallet with ID cards—in a catastrophic house fire.

I want to remind readers that this isn’t a true story. It didn’t actually happen. It’s a thought experiment.

Posted on June 28, 2022 at 6:22 AM

M1 Chip Vulnerability

This is a new vulnerability against Apple’s M1 chip. Researchers say that it is unpatchable.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, however, have created a novel hardware attack, which combines memory corruption and speculative execution attacks to sidestep the security feature. The attack shows that pointer authentication can be defeated without leaving a trace, and as it utilizes a hardware mechanism, no software patch can fix it.

The attack, appropriately called “Pacman,” works by “guessing” a pointer authentication code (PAC), a cryptographic signature that confirms that an app hasn’t been maliciously altered. This is done using speculative execution—a technique used by modern computer processors to speed up performance by speculatively guessing various lines of computation—to leak PAC verification results, while a hardware side-channel reveals whether or not the guess was correct.

What’s more, since there are only so many possible values for the PAC, the researchers found that it’s possible to try them all to find the right one.
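The "try them all" observation follows from the arithmetic: a PAC lives in the otherwise-unused upper bits of a 64-bit pointer, so its width depends on the virtual-address configuration. The bit widths below are assumptions for illustration only, not figures from the paper.

```python
# Back-of-the-envelope sketch: the worst-case number of PAC guesses is just
# 2 to the number of PAC bits. The widths tried here are assumed values,
# chosen only to show how small the search space is either way.
for pac_bits in (11, 16, 24):
    guesses = 2 ** pac_bits  # exhaustive search of every possible PAC value
    print(f"{pac_bits}-bit PAC: at most {guesses:,} guesses")
```

Even the largest of those is a trivially enumerable space for a speculative-execution oracle that never crashes the process, which is what makes the brute-force approach practical.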

It’s not obvious how to exploit this vulnerability in the wild, so I’m unsure how important this is. Also, I don’t know if it also applies to Apple’s new M2 chip.

Research paper. Another news article.

Posted on June 15, 2022 at 6:05 AM

Security and Human Behavior (SHB) 2022

Today is the second day of the fifteenth Workshop on Security and Human Behavior, hosted by Ross Anderson and Alice Hutchings at the University of Cambridge. After two years of having this conference remotely on Zoom, it’s nice to be back together in person.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, Alice Hutchings, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, geographers, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

For the past decade and a half, this workshop has been the most intellectually stimulating two days of my professional year. It influences my thinking in different and sometimes surprising ways—and has resulted in some unexpected collaborations.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. Because not everyone was able to attend in person, our panels all include remote participants as well. The hybrid structure is working well, even though our remote participants aren’t around for the social program.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, and fourteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

EDITED TO ADD (6/15): Here are the videos for sessions 1, 2, 3, 4, 5, 6, 7 and 8.

Posted on May 31, 2022 at 4:12 AM

The Justice Department Will No Longer Charge Security Researchers with Criminal Hacking

Following a recent Supreme Court ruling, the Justice Department will no longer prosecute “good faith” security researchers for cybercrimes:

The policy for the first time directs that good-faith security research should not be charged. Good faith security research means accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.

[…]

The new policy states explicitly the longstanding practice that “the department’s goals for CFAA enforcement are to promote privacy and cybersecurity by upholding the legal right of individuals, network owners, operators, and other persons to ensure the confidentiality, integrity, and availability of information stored in their information systems.” Accordingly, the policy clarifies that hypothetical CFAA violations that have concerned some courts and commentators are not to be charged. Embellishing an online dating profile contrary to the terms of service of the dating website; creating fictional accounts on hiring, housing, or rental websites; using a pseudonym on a social networking site that prohibits them; checking sports scores at work; paying bills at work; or violating an access restriction contained in a term of service are not themselves sufficient to warrant federal criminal charges. The policy focuses the department’s resources on cases where a defendant is either not authorized at all to access a computer or was authorized to access one part of a computer—such as one email account—and, despite knowing about that restriction, accessed a part of the computer to which his authorized access did not extend, such as other users’ emails.

News article.

EDITED TO ADD (6/14): Josephine Wolff writes about this update.

Posted on May 24, 2022 at 6:11 AM

Corporate Involvement in International Cybersecurity Treaties

The Paris Call for Trust and Stability in Cyberspace is an initiative launched by French President Emmanuel Macron during the 2018 UNESCO Internet Governance Forum. It’s an attempt by the world’s governments to come together and create a set of international norms and standards for a reliable, trustworthy, safe, and secure Internet. It’s not an international treaty, but it does impose obligations on the signatories. It’s a major milestone for global Internet security and safety.

Corporate interests are all over this initiative, sponsoring and managing different parts of the process. As part of the Call, the French company Cigref and the Russian company Kaspersky chaired a working group on cybersecurity processes, along with French research center GEODE. Another working group on international norms was chaired by US company Microsoft and Finnish company F-Secure, along with a University of Florence research center. A third working group’s participant list includes more corporations than any other group.

As a result, this process has become very different than previous international negotiations. Instead of governments coming together to create standards, it is being driven by the very corporations that the new international regulatory climate is supposed to govern. This is wrong.

The companies making the tools and equipment being regulated shouldn’t be the ones negotiating the international regulatory climate, and their executives shouldn’t be named to key negotiation roles without appointment and confirmation. It’s an abdication of responsibility by the US government for something that is too important to be treated this cavalierly.

On the one hand, this is no surprise. The notions of trust and stability in cyberspace are about much more than international safety and security. They’re about market share and corporate profits. And corporations have long led policymakers in the fast-moving and highly technological battleground that is cyberspace.

The international Internet has always relied on what is known as a multistakeholder model, where those who show up and do the work can be more influential than those in charge of governments. The Internet Engineering Task Force, the group that agrees on the technical protocols that make the Internet work, is largely run by volunteer individuals. This worked best during the Internet’s era of benign neglect, where no one but the technologists cared. Today, it’s different. Corporate and government interests dominate, even if the individuals involved use the polite fiction of their own names and personal identities.

However, we are a far cry from decades past, where the Internet was something that governments didn’t understand and largely ignored. Today, the Internet is an essential infrastructure that underpins much of society, and its governance structure is something that nations care about deeply. Having for-profit tech companies run the Paris Call process on regulating tech is analogous to putting the defense contractors Northrop Grumman or Boeing in charge of the 1970s SALT nuclear agreements between the US and the Soviet Union.

This also isn’t the first time that US corporations have led what should be an international relations process regarding the Internet. Since he first gave a speech on the topic in 2017, Microsoft President Brad Smith has become almost synonymous with the term “Digital Geneva Convention.” It’s not just that corporations in the US and elsewhere are taking a lead on international diplomacy, they’re framing the debate down to the words and the concepts.

Why is this happening? Different countries have their own problems, but we can point to three that currently plague the US.

First and foremost, “cyber” still isn’t taken seriously by much of the government, specifically the State Department. It’s not real to the older military veterans, or to the even older politicians who confuse Facebook with TikTok and use the same password for everything. It’s not even a topic area for negotiations for the US Trade Representative. Nuclear disarmament is “real geopolitics,” while the Internet is still, even now, seen as vaguely magical, and something that can be “fixed” by having the nerds yank plugs out of a wall.

Second, the State Department was gutted during the Trump years. It lost many of the up-and-coming public servants who understood the way the world was changing. The work of previous diplomats to increase the visibility of the State Department’s cyber efforts was abandoned. There are few left on staff to do this work, and even fewer to decide if they’re any good. It’s hard to hire senior information security professionals in the best of circumstances; it’s why charlatans so easily flourish in the cybersecurity field. The built-up skill set of the people who poured their effort and time into this work during the Obama years is gone.

Third, there’s a power struggle at the heart of the US government involving cyber issues, between the White House, the Department of Homeland Security (represented by CISA), and the military (represented by US Cyber Command). Trying to create another cyber center of power within the State Department threatens those existing powers. It’s easier to leave it in the hands of private industry, which does not affect those government organizations’ budgets or turf.

We don’t want to go back to the era when only governments set technological standards. The governance model from the days of the telephone is another lesson in how not to do things. The International Telecommunications Union is an agency run out of the United Nations. It is moribund and ponderous precisely because it is run by national governments, with civil society and corporations largely alienated from the decision-making processes.

Today, the Internet is fundamental to global society. It’s part of everything. It affects national security and will be a theater in any future war. How individuals, corporations, and governments act in cyberspace is critical to our future. The Internet is critical infrastructure. It provides and controls access to healthcare, space, the military, water, energy, education, and nuclear weaponry. How it is regulated isn’t just something that will affect the future. It is the future.

Since the Paris Call was finalized in 2018, it has been signed by 81 countries—including the US in 2021—36 local governments and public authorities, 706 companies and private organizations, and 390 civil society groups. The Paris Call isn’t the first international agreement that puts companies on an equal signatory footing with governments. The Global Internet Forum to Combat Terrorism and the Christchurch Call to eliminate extremist content online do the same thing. But the Paris Call is different. It’s bigger. It’s more important. It’s something that should be the purview of governments and not a vehicle for corporate power and profit.

When something as important as the Paris Call comes along again, perhaps in UN negotiations for a cybercrime treaty, we call for actual State Department officials with technical expertise to be sitting at the table with the interests of the entire US in their pocket…not people with equity shares to protect.

This essay was written with Tarah Wheeler, and previously published on The Cipher Brief.

Posted on May 6, 2022 at 6:01 AM

US Critical Infrastructure Companies Will Have to Report When They Are Hacked

This will be law soon:

Companies critical to U.S. national interests will now have to report when they’re hacked or they pay ransomware, according to new rules approved by Congress.

[…]

The reporting requirement legislation was approved by the House and the Senate on Thursday and is expected to be signed into law by President Joe Biden soon. It requires any entity that’s considered part of the nation’s critical infrastructure, which includes the finance, transportation and energy sectors, to report any “substantial cyber incident” to the government within three days and any ransomware payment made within 24 hours.

Even better would be if they had to report it to the public.

Posted on March 15, 2022 at 6:01 AM
