Entries Tagged "risks"

The Legal Risks of Security Research

Sunoo Park and Kendra Albert have published “A Researcher’s Guide to Some Legal Risks of Security Research.”

From a summary:

Such risk extends beyond anti-hacking laws, implicating copyright law and anti-circumvention provisions (DMCA §1201), electronic privacy law (ECPA), and cryptography export controls, as well as broader legal areas such as contract and trade secret law.

Our Guide gives the most comprehensive presentation to date of this landscape of legal risks, with an eye to both legal and technical nuance. Aimed at researchers, the public, and technology lawyers alike, it aims both to provide pragmatic guidance to those navigating today’s uncertain legal landscape, and to provoke public debate towards future reform.

Comprehensive, and well worth reading.

Here’s a Twitter thread by Kendra.

Posted on October 30, 2020 at 9:14 AM

On Risk-Based Authentication

Interesting usability study: “More Than Just Good Passwords? A Study on Usability and Security Perceptions of Risk-based Authentication”:

Abstract: Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA are not studied well.

We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered to be more usable than the studied 2FA variants, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of the users’ perception of RBA and helps to improve RBA implementations for a broader user acceptance.
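The core RBA loop the abstract describes — monitor features at login, and require a step-up factor when observed values differ from previously seen ones — can be sketched roughly like this. This is a toy illustration, not the paper’s implementation: the feature set (IP, user agent, country) and the unseen-fraction scoring rule are my own simplifying assumptions; deployed systems use richer features and probabilistic models.

```python
# Minimal sketch of risk-based authentication (RBA). Hypothetical
# feature names and scoring; real systems are far more sophisticated.
from dataclasses import dataclass, field


@dataclass
class RbaProfile:
    # Previously observed feature values for one user, keyed by feature name.
    seen: dict = field(default_factory=dict)

    def risk_score(self, login_features: dict) -> float:
        # Score = fraction of features never observed before for this user.
        if not login_features:
            return 0.0
        unseen = sum(
            1 for name, value in login_features.items()
            if value not in self.seen.get(name, set())
        )
        return unseen / len(login_features)

    def record(self, login_features: dict) -> None:
        # Remember the observed values for future logins.
        for name, value in login_features.items():
            self.seen.setdefault(name, set()).add(value)


def login(profile: RbaProfile, features: dict, threshold: float = 0.5) -> str:
    # Above the threshold, demand an additional factor (e.g. a
    # verification code); otherwise the password alone suffices.
    score = profile.risk_score(features)
    profile.record(features)
    return "step-up" if score > threshold else "allow"
```

Under this sketch a user’s first login from a new device triggers a step-up challenge, while a repeat login from the same environment passes on the password alone — which is the usability advantage over always-on 2FA that the study measures.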

Paper’s website. I’ve blogged about risk-based authentication before.

Posted on October 5, 2020 at 11:47 AM

The NSA on the Risks of Exposing Location Data

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

Of course, turning off your wireless devices is itself a signal that something is going on. It’s hard to be clandestine in our always connected world.

News articles.

Posted on August 6, 2020 at 12:15 PM

The Unintended Harms of Cybersecurity

Interesting research: “Identifying Unintended Harms of Cybersecurity Countermeasures”:

Abstract: Well-meaning cybersecurity risk owners will deploy countermeasures (technologies or procedures) to manage risks to their services or systems. In some cases, those countermeasures will produce unintended consequences, which must then be addressed. Unintended consequences can potentially induce harm, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including other services or countermeasures). Here we propose a framework for preemptively identifying unintended harms of risk countermeasures in cybersecurity. The framework identifies a series of unintended harms which go beyond technology alone, to consider the cyberphysical and sociotechnical space: displacement, insecure norms, additional costs, misuse, misclassification, amplification, and disruption. We demonstrate our framework through application to the complex, multi-stakeholder challenges associated with the prevention of cyberbullying as an applied example. Our framework aims to illuminate harmful consequences, not to paralyze decision-making, but so that potential unintended harms can be more thoroughly considered in risk management strategies. The framework can support identification and preemptive planning to identify vulnerable populations and preemptively insulate them from harm. There are opportunities to use the framework in coordinating risk management strategy across stakeholders in complex cyberphysical environments.
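One way to picture how such a framework might be applied is as a per-countermeasure review checklist over the seven harm categories. The category names below are taken from the abstract; the review prompts attached to them are my own illustrative paraphrases, not the paper’s wording.

```python
# Hypothetical checklist sketch over the paper's seven harm categories.
# Category names come from the abstract; the prompts are illustrative.
HARM_CATEGORIES = {
    "displacement": "Does the countermeasure push the risky activity elsewhere?",
    "insecure norms": "Does it encourage workarounds that weaken security habits?",
    "additional costs": "Who bears new financial, time, or usability costs?",
    "misuse": "Can the countermeasure itself be abused by an attacker?",
    "misclassification": "Can legitimate users or content be wrongly flagged?",
    "amplification": "Could it make the original harm worse for some groups?",
    "disruption": "Does it interfere with other services or countermeasures?",
}


def review(countermeasure: str) -> list:
    # Emit one review question per harm category for a given countermeasure.
    return [f"{countermeasure}: [{cat}] {q}" for cat, q in HARM_CATEGORIES.items()]
```

For the paper’s cyberbullying example, `review("keyword filter")` would yield seven prompts, one per category, forcing each potential harm to be considered explicitly rather than discovered after deployment.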

Security is always a trade-off. I appreciate work that examines the details of that trade-off.

Posted on June 26, 2020 at 7:00 AM

COVID-19 Risks of Flying

I fly a lot. Over the past five years, my average speed has been 32 miles an hour. That all changed mid-March. It’s been 105 days since I’ve been on an airplane — longer than any other time in my adult life — and I have no future flights scheduled. This is all a prelude to saying that I have been paying a lot of attention to the COVID-related risks of flying.

We know a lot more about how COVID-19 spreads than we did in March. The “less than six feet, more than ten minutes” model has given way to a much more sophisticated model involving airflow, the level of virus in the room, and the viral load in the person who might be infected.

Regarding airplanes specifically: on the whole, they seem safer than many other group activities. Of all the research about contact tracing results I have read, I have seen no stories of a sick person on an airplane infecting other passengers. There are no superspreader events involving airplanes. (That did happen with SARS.) It seems that the airflow inside the cabin really helps.

Airlines are trying to make things better: blocking middle seats, serving less food and drink, trying to get people to wear masks. (This video is worth watching.) I’ve started to see airlines requiring masks and banning those who won’t, and not just strongly encouraging them. (If mask wearing is treated the same as seat belt wearing, it will make a huge difference.) Finally, there are a lot of dumb things that airlines are doing.

This article interviewed 511 epidemiologists, and the general consensus was that flying is riskier than getting a haircut but less risky than eating in a restaurant. I think that most of the risk is pre-flight, in the airport: crowds at the security checkpoints, gates, and so on. And that those are manageable with mask wearing and situational awareness. So while I am not flying yet, I might be willing to soon. (It doesn’t help that I get a -1 on my COVID saving throw for type A blood, and another -1 for male pattern baldness. On the other hand, I think I get a +3 Constitution bonus. Maybe, instead of sky marshals we can have high-level clerics on the planes.)

And everyone: wear a mask, and wash your hands.

EDITED TO ADD (6/27): Airlines are starting to crowd their flights again.

Posted on June 24, 2020 at 12:32 PM

On Cyber Warranties

Interesting article discussing cyber-warranties, and whether they are an effective way to transfer risk (as envisioned by Akerlof’s “market for lemons”) or a marketing trick.

The conclusion:

Warranties must transfer non-negligible amounts of liability to vendors in order to meaningfully overcome the market for lemons. Our preliminary analysis suggests the majority of cyber warranties cover the cost of repairing the device alone. Only cyber-incident warranties cover first-party costs from cyber-attacks — why all such warranties were offered by firms selling intangible products is an open question. Consumers should question whether warranties can function as a costly signal when narrow coverage means vendors accept little risk.

Worse still, buyers cannot compare across cyber-incident warranty contracts due to the diversity of obligations and exclusions. Ambiguous definitions of the buyer’s obligations and excluded events create uncertainty over what is covered. Moving toward standardized terms and conditions may help consumers, as has been pursued in cyber insurance, but this is in tension with innovation and product diversity.

[…]

Theoretical work suggests both the breadth of the warranty and the price of a product determine whether the warranty functions as a quality signal. Our analysis has not touched upon the price of these products. It could be that firms with ineffective products pass the cost of the warranty on to buyers via higher prices. Future studies could analyze warranties and price together to probe this issue.

In conclusion, cyber warranties — particularly cyber-product warranties — do not transfer enough risk to be a market fix as imagined in Woods. But this does not mean they are pure marketing tricks either. The most valuable feature of warranties is in preventing vendors from exaggerating what their products can do. Consumers who read the fine print can place greater trust in marketing claims so long as the functionality is covered by a cyber-incident warranty.

Posted on March 26, 2020 at 6:27 AM

Andy Ellis on Risk Assessment

Andy Ellis, the CSO of Akamai, gave a great talk about the psychology of risk at the Business of Software conference this year.

I’ve written about this before.

One quote of mine: “The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008.”

EDITED TO ADD (12/13): Epigenetics and the human brain.

Posted on December 6, 2019 at 6:55 AM

NSA on the Future of National Cybersecurity

Glenn Gerstell, the General Counsel of the NSA, wrote a long and interesting op-ed for the New York Times where he outlined a long list of cyber risks facing the US.

There are four key implications of this revolution that policymakers in the national security sector will need to address:

The first is that the unprecedented scale and pace of technological change will outstrip our ability to effectively adapt to it. Second, we will be in a world of ceaseless and pervasive cyberinsecurity and cyberconflict against nation-states, businesses and individuals. Third, the flood of data about human and machine activity will put such extraordinary economic and political power in the hands of the private sector that it will transform the fundamental relationship, at least in the Western world, between government and the private sector. Finally, and perhaps most ominously, the digital revolution has the potential for a pernicious effect on the very legitimacy and thus stability of our governmental and societal structures.

He then goes on to explain these four implications. It’s all interesting, and it’s the sort of stuff you don’t generally hear from the NSA. He talks about technological changes causing social changes, and the need for people who understand that. (Hooray for public-interest technologists.) He talks about national security infrastructure in private hands, at least in the US. He talks about a massive geopolitical restructuring — a fundamental change in the relationship between private tech corporations and government. He talks about recalibrating the Fourth Amendment (of course).

The essay is more about the problems than the solutions, but there is a bit at the end:

The first imperative is that our national security agencies must quickly accept this forthcoming reality and embrace the need for significant changes to address these challenges. This will have to be done in short order, since the digital revolution’s pace will soon outstrip our ability to deal with it, and it will have to be done at a time when our national security agencies are confronted with complex new geopolitical threats.

Much of what needs to be done is easy to see — developing the requisite new technologies and attracting and retaining the expertise needed for that forthcoming reality. What is difficult is executing the solution to those challenges, most notably including whether our nation has the resources and political will to effect that solution. The roughly $60 billion our nation spends annually on the intelligence community might have to be significantly increased during a time of intense competition over the federal budget. Even if the amount is indeed so increased, spending additional vast sums to meet the challenges in an effective way will be a daunting undertaking. Fortunately, the same digital revolution that presents these novel challenges also sometimes provides the new tools (A.I., for example) to deal with them.

The second imperative is we must adapt to the unavoidable conclusion that the fundamental relationship between government and the private sector will be greatly altered. The national security agencies must have a vital role in reshaping that balance if they are to succeed in their mission to protect our democracy and keep our citizens safe. While there will be good reasons to increase the resources devoted to the intelligence community, other factors will suggest that an increasing portion of the mission should be handled by the private sector. In short, addressing the challenges will not necessarily mean that the national security sector will become massively large, with the associated risks of inefficiency, insufficient coordination and excessively intrusive surveillance and data retention.

A smarter approach would be to recognize that as the capabilities of the private sector increase, the scope of activities of the national security agencies could become significantly more focused, undertaking only those activities in which government either has a recognized advantage or must be the only actor. A greater burden would then be borne by the private sector.

It’s an extraordinary essay, less for its contents and more for the speaker. This is not the sort of thing the NSA publishes. The NSA doesn’t opine on broad technological trends and their social implications. It doesn’t publicly try to predict the future. It doesn’t philosophize for 6000 unclassified words. And, given how hard it would be to get something like this approved for public release, I am left to wonder what the purpose of the essay is. Is the NSA trying to lay the groundwork for some policy initiative? Some legislation? A budget request? What?

Charlie Warzel has a snarky response. His conclusion about the purpose:

He argues that the piece “is not in the spirit of forecasting doom, but rather to sound an alarm.” Translated: Congress, wake up. Pay attention. We’ve seen the future and it is a sweaty, pulsing cyber night terror. So please give us money (the word “money” doesn’t appear in the text, but the word “resources” appears eight times and “investment” shows up 11 times).

Susan Landau has a more considered response, which is well worth reading. She calls the essay a proposal for a moonshot (which is another way of saying “they want money”). And she has some important pushbacks on the specifics.

I don’t expect the general counsel and I will agree on what the answers to these questions should be. But I strongly concur on the importance of the questions and that the United States does not have time to waste in responding to them. And I thank him for raising these issues in so public a way.

I agree with Landau.

Slashdot thread.

Posted on October 1, 2019 at 6:54 AM

On Cybersecurity Insurance

Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:

Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)

The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart’s law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.

EDITED TO ADD (9/11): Boing Boing post.

Posted on September 10, 2019 at 6:23 AM
