Entries Tagged "risk assessment"


Confusing Security Risks with Moral Judgments

Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.

From an article about and interview with the researchers:

To get at this question experimentally, Thomas and her collaborators created a series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period. For example, in one vignette, a 10-month-old was left alone for 15 minutes, asleep in the car in a cool, underground parking garage. In another vignette, an 8-year-old was left for an hour at a Starbucks, one block away from her parent’s location.

To experimentally manipulate participants’ moral attitude toward the parent, the experimenters varied the reason the child was left unattended across a set of six experiments with over 1,300 online participants. In some cases, the child was left alone unintentionally (for example, in one case, a mother is hit by a car and knocked unconscious after buckling her child into her car seat, thereby leaving the child unattended in the car seat). In other cases, the child was left unattended so the parent could go to work, do some volunteering, relax or meet a lover.

Not surprisingly, the parent’s reason for leaving a child unattended affected participants’ judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.

The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same — that is, the age of the child, the duration and location of the unattended period, and so on — participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants’ factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent’s reason for leaving.

Posted on August 25, 2016 at 11:12 AM

Report on the Vulnerabilities Equities Process

I have written before on the vulnerabilities equities process (VEP): the system by which the US government decides whether to disclose and fix a computer vulnerability or keep it secret and use it offensively. Ari Schwartz and Rob Knake, both former Directors for Cybersecurity Policy at the White House National Security Council, have written a report describing the process as we know it, with policy recommendations for improving it.

Basically, their recommendations are focused on improving the transparency, oversight, and accountability (three things I repeatedly recommend) of the process. In summary:

  • The President should issue an Executive Order mandating government-wide compliance with the VEP.
  • Make the general criteria used to decide whether or not to disclose a vulnerability public.
  • Clearly define the VEP.
  • Make sure any undisclosed vulnerabilities are reviewed periodically.
  • Ensure that the government has the right to disclose any vulnerabilities it purchases.
  • Transfer oversight of the VEP from the NSA to the DHS.
  • Issue an annual report on the VEP.
  • Expand Congressional oversight of the VEP.
  • Mandate oversight by other independent bodies inside the Executive Branch.
  • Expand funding for both offensive and defensive vulnerability research.

These all seem like good ideas to me. This is a complex issue, one I wrote about in Data and Goliath (pages 146-50), and one that’s only going to get more important in the Internet of Things.

News article.

Posted on July 11, 2016 at 12:15 PM

Financial Cyber Risk Is Not Systemic Risk

This interesting essay argues that financial cyber risks are generally not systemic risks, and are instead generally much smaller. That’s certainly been our experience to date:

While systemic risk is frequently invoked as a key reason to be on guard for cyber risk, such a connection is quite tenuous. A cyber event might in extreme cases result in a systemic crisis, but to do so needs highly fortuitous timing.

From the point of view of policymaking, rather than simply asserting systemic consequences for cyber risks, it would be better if the cyber discussion were better integrated into the existing macroprudential dialogue. To us, the overall discussion of cyber and systemic risk seems to be too focused on IT considerations and not enough on economic consequences.

After all, if there are systemic consequences from cyber risk, the chain of causality will be found in the macroprudential domain.

Posted on June 10, 2016 at 12:56 PM

Breaking Semantic Image CAPTCHAs

Interesting research: Suphannee Sivakorn, Iasonas Polakis and Angelos D. Keromytis, “I Am Robot: (Deep) Learning to Break Semantic Image CAPTCHAs”:

Abstract: Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold: to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an “advanced risk analysis system” that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content.

In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications for impacting the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.
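The paper’s actual pipeline chains several off-the-shelf image-annotation services together with tag-similarity heuristics; the following is only a minimal sketch of the core idea, with a pretrained ImageNet classifier (torchvision’s ResNet-50) standing in for the annotation step. The challenge keyword and tile file names are hypothetical.

    # A sketch only: tag each candidate image with a pretrained classifier
    # and keep the tiles whose labels match the challenge keyword.
    import torch
    from PIL import Image
    from torchvision import models
    from torchvision.models import ResNet50_Weights

    weights = ResNet50_Weights.DEFAULT             # pretrained ImageNet weights
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()              # the matching input pipeline
    categories = weights.meta["categories"]        # ImageNet class names

    def top_labels(path, k=5):
        """Return the classifier's top-k labels for one candidate image."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = model(img).softmax(dim=1)[0]
        return [categories[i] for i in probs.topk(k).indices.tolist()]

    def solve_challenge(keyword, tile_paths):
        """Pick the grid tiles whose predicted labels mention the keyword."""
        return [p for p in tile_paths
                if any(keyword in label.lower() for label in top_labels(p))]

    # Hypothetical usage: select the "bus" tiles from a 3x3 challenge grid.
    # picks = solve_challenge("bus", [f"tile_{i}.jpg" for i in range(9)])

The paper’s reported 70.78% solve rate comes from a considerably more elaborate system than this, but the structure of the attack is the same: automated semantic annotation of each candidate image, followed by matching the resulting tags against the challenge keyword.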

News articles.

Posted on April 8, 2016 at 6:39 AM

Smart Essay on the Limitations of Anti-Terrorism Security

This is good:

Threats constantly change, yet our political discourse suggests that our vulnerabilities are simply for lack of resources, commitment or competence. Sometimes, that is true. But mostly we are vulnerable because we choose to be; because we’ve accepted, at least implicitly, that some risk is tolerable. A state that could stop every suicide bomber wouldn’t be a free or, let’s face it, fun one.

We will simply never get to maximum defensive posture. Regardless of political affiliation, Americans wouldn’t tolerate the delay or intrusion of an urban mass-transit system that required bag checks and pat-downs. After the 2013 Boston Marathon bombing, many wondered how to make the race safe the next year. A heavier police presence helps, but the only truly safe way to host a marathon is to not have one at all. The risks we tolerate, then, are not necessarily bad bargains simply because an enemy can exploit them.

No matter what promises are made on the campaign trail, terrorism will never be vanquished. There is no ideology, no surveillance, no wall that will definitely stop some 24-year-old from becoming radicalized on the Web, gaining access to guns and shooting a soft target. When we don’t admit this to ourselves, we often swing between the extremes of putting our heads in the sand or losing them entirely.

I am reminded of my own 2006 “Refuse to be Terrorized” essay.

Posted on April 3, 2016 at 7:42 PM

Terrifying Technologies

I’ve written about the difference between risk perception and risk reality. I thought about that when reading this list of Americans’ top technology fears:

  1. Cyberterrorism
  2. Corporate tracking of personal information
  3. Government tracking of personal information
  4. Robots replacing workforce
  5. Trusting artificial intelligence to do work
  6. Robots
  7. Artificial intelligence
  8. Technology I don’t understand

More at the link.

Posted on December 9, 2015 at 1:48 PM

Should Companies Do Most of Their Computing in the Cloud? (Part 3)

Cloud computing is the future of computing. Specialization and outsourcing make society more efficient and scalable, and computing isn’t any different.

But why aren’t we there yet? Why don’t we, in Simon Crosby’s words, “get on with it”? I have discussed some reasons: loss of control, new and unquantifiable security risks, and — above all — a lack of trust. It is not enough to simply discount them, as the number of companies not embracing the cloud shows. It is more useful to consider what we need to do to bridge the trust gap.

A variety of mechanisms can create trust. When I outsourced my food preparation to a restaurant last night, it never occurred to me to worry about food safety. That blind trust is largely created by government regulation. It ensures that our food is safe to eat, just as it ensures our paint will not kill us and our planes are safe to fly. It is all well and good for Mr. Crosby to write that cloud companies “will invest heavily to ensure that they can satisfy complex…regulations,” but this presupposes that we have comprehensive regulations. Right now, it is largely a free-for-all out there, and it can be impossible to see how security in the cloud works. When robust consumer-safety regulations underpin outsourcing, people can trust the systems.

This is true for any kind of outsourcing. Attorneys, tax preparers and doctors are licensed and highly regulated, by both governments and professional organizations. We trust our doctors to cut open our bodies because we know they are not just making it up. We need a similar professionalism in cloud computing.

Reputation is another big part of trust. We rely on both word-of-mouth and professional reviews to decide on a particular car or restaurant. But none of that works without considerable transparency. Security is an example. Mr. Crosby writes: “Cloud providers design security into their systems and dedicate enormous resources to protect their customers.” Maybe some do; many certainly do not. Without more transparency, as a cloud customer you cannot tell the difference. Try asking either Amazon Web Services or Salesforce.com to see the details of their security arrangements, or even to indemnify you for data breaches on their networks. It is even worse for free consumer cloud services like Gmail and iCloud.

We need to trust cloud computing’s performance, reliability and security. We need open standards, rules about being able to remove our data from cloud services, and the assurance that we can switch cloud services if we want to.

We also need to trust who has access to our data, and under what circumstances. One commenter wrote: “After Snowden, the idea of doing your computing in the cloud is preposterous.” He isn’t making a technical argument: a typical corporate data center isn’t any better defended than a cloud-computing one. He is making a legal argument. Under American law — and similar laws in other countries — the government can force your cloud provider to give up your data without your knowledge and consent. If your data is in your own data center, you at least get to see a copy of the court order.

Corporate surveillance matters, too. Many cloud companies mine and sell your data or use it to manipulate you into buying things. Blocking broad surveillance by both governments and corporations is critical to trusting the cloud, as is eliminating secret laws and orders regarding data access.

In the future, we will do all our computing in the cloud: both commodity computing and computing that requires personalized expertise. But this future will only come to pass when we manage to create trust in the cloud.

This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the third of three essays. Here are Parts 1 and 2. Visit the site for the other side of the debate and other commentary.

Posted on June 10, 2015 at 3:27 PM

Should Companies Do Most of Their Computing in the Cloud? (Part 2)

Let me start by describing two approaches to the cloud.

Most of the students I meet at Harvard University live their lives in the cloud. Their e-mail, documents, contacts, calendars, photos and everything else are stored on servers belonging to large internet companies in America and elsewhere. They use cloud services for everything. They converse and share on Facebook and Instagram and Twitter. They seamlessly switch among their laptops, tablets and phones. It wouldn’t be a stretch to say that they don’t really care where their computers end and the internet begins, and they are used to having immediate access to all of their data on the closest screen available.

In contrast, I personally use the cloud as little as possible. My e-mail is on my own computer — I am one of the last Eudora users — and not at a web service like Gmail or Hotmail. I don’t store my contacts or calendar in the cloud. I don’t use cloud backup. I don’t have personal accounts on social networking sites like Facebook or Twitter. (This makes me a freak, but highly productive.) And I don’t use many software and hardware products that I would otherwise really like, because they force you to keep your data in the cloud: Trello, Evernote, Fitbit.

Why don’t I embrace the cloud in the same way my younger colleagues do? There are three reasons, and they parallel the trade-offs that corporations facing the same decisions are going to have to make.

The first is control. I want to be in control of my data, and I don’t want to give it up. I have the ability to keep control by running my own services my way. Most of those students lack the technical expertise, and have no choice. They also want services that are only available on the cloud, and have no choice. I have deliberately made my life harder, simply to keep that control. Similarly, companies are going to decide whether or not they want to — or even can — keep control of their data.

The second is security. I talked about this at length in my opening statement. Suffice it to say that I am extremely paranoid about cloud security, and think I can do better. Lots of those students don’t care very much. Again, companies are going to have to make the same decision about who is going to do a better job, and depending on their own internal resources, they might make a different decision.

The third is the big one: trust. I simply don’t trust large corporations with my data. I know that, at least in America, they can sell my data at will and disclose it to whomever they want. It can be made public inadvertently by their lax security. My government can get access to it without a warrant. Again, lots of those students don’t care. And again, companies are going to have to make the same decisions.

Like any outsourcing relationship, cloud services are based on trust. If anything, that is what you should take away from this exchange. Try to do business only with trustworthy providers, and put contracts in place to ensure their trustworthiness. Push for government regulations that establish a baseline of trustworthiness for cases where you don’t have that negotiation power. Fight laws that give governments secret access to your data in the cloud. Cloud computing is the future of computing; we need to ensure that it is secure and reliable.

Despite my personal choices, my belief is that, in most cases, the benefits of cloud computing outweigh the risks. My company, Resilient Systems, uses cloud services both to run the business and to host our own products that we sell to other companies. For us it makes the most sense. But we spend a lot of effort ensuring that we use only trustworthy cloud providers, and that we are a trustworthy cloud provider to our own customers.

This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the second of three essays. Here are Parts 1 and 3. Visit the site for the other side of the debate and other commentary.

Posted on June 10, 2015 at 11:27 AM

Should Companies Do Most of Their Computing in the Cloud? (Part 1)

Yes. No. Yes. Maybe. Yes. Okay, it’s complicated.

The economics of cloud computing are compelling. For companies, the lower operating costs, the lack of capital expenditure, the ability to quickly scale and the ability to outsource maintenance are just some of the benefits. Computing is infrastructure, like cleaning, payroll, tax preparation and legal services. All of these are outsourced. And computing is becoming a utility, like power and water. Everyone does their power generation and water distribution “in the cloud.” Why should IT be any different?

Two reasons. The first is that IT is complicated: it is more like payroll services than like power generation. What this means is that you have to choose your cloud providers wisely, and make sure you have good contracts in place with them. You want to own your data, and be able to download that data at any time. You want assurances that your data will not disappear if the cloud provider goes out of business or discontinues your service. You want reliability and availability assurances, tech support assurances, whatever you need.

The downside is that you will have limited customization options. Cloud computing is cheaper because of economies of scale, and — like any outsourced task — you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want. Fewer options at a much cheaper price: it’s a feature, not a bug.

The second reason that cloud computing is different is security. This is not an idle concern. IT security is difficult under the best of circumstances, and security risks are one of the major reasons it has taken so long for companies to embrace the cloud. And here it really gets complicated.

On the pro-cloud side, cloud providers have the potential to be far more secure than the corporations whose data they are holding. It is the same economies of scale. For most companies, the cloud provider is likely to have better security than them — by a lot. All but the largest companies benefit from the concentration of security expertise at the cloud provider.

On the anti-cloud side, the cloud provider might not meet your legal needs. You might have regulatory requirements that the cloud provider cannot meet. Your data might be stored in a country with laws you do not like — or cannot legally use. Many foreign companies are thinking twice about putting their data inside America, because of laws allowing the government to get at that data in secret. Other countries around the world have even more draconian government-access rules.

Also on the anti-cloud side, a large cloud provider is a juicier target. Whether or not this matters depends on your threat profile. Criminals already steal far more credit card numbers than they can monetize; they are more likely to go after the smaller, less-defended networks. But a national intelligence agency will prefer the one-stop shop a cloud provider affords. That is why the NSA broke into Google’s data centers.

Finally, the loss of control is a security risk. Moving your data into the cloud means that someone else is controlling that data. This is fine if they do a good job, but terrible if they do not. And for free cloud services, that loss of control can be critical. The cloud provider can delete your data on a whim, if it believes you have violated some term of service that you never even knew existed. And you have no recourse.

As a business, you need to weigh the benefits against the risks. And that will depend on things like the type of cloud service you’re considering, the type of data that’s involved, how critical the service is, how easily you could do it in house, the size of your company and the regulatory environment, and so on.

This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the first of three essays. Here are Parts 2 and 3. Visit the site for the other side of the debate and other commentary.

Posted on June 10, 2015 at 6:43 AM
