I’m sure it pays less than the industry average, and the stakes are much higher than the average. But if you want to be a Director of Information Security that makes a difference, Human Rights Watch is hiring.
Entries Tagged "risk assessment"
Good article that crunches the data and shows that the press’s coverage of terrorism is disproportional to its comparative risk.
This isn’t new. I’ve written about it before, and wrote about it more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic. We tend to infer the probability of something by how easy it is to bring examples of the thing to mind. So if we can think of a lot of tiger attacks in our community, we infer that the risk is high. If we can’t think of many tiger attacks, we infer that the risk is low. But while this is a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It’s not the media’s fault. By definition, news is “something that hardly ever happens.” But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.
Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.
There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.
If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.
Novelty plus dread plus a good story equals overreaction.
It’s not just murders. It’s flying vs. driving: the former is much safer, but accidents are so much more spectacular when they occur.
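The mismatch between coverage and risk can be shown with a toy simulation. This is only a sketch of the availability heuristic as described above; all the numbers below are invented for illustration, not real statistics:

```python
# Toy model of the availability heuristic: perceived risk tracks media
# coverage (ease of recall), not the actual event rate.
# All numbers are invented for illustration.

actual_deaths_per_year = {
    "terrorism": 100,
    "homicide": 17_000,
    "car crash": 38_000,
}

# Stories per year each cause receives (invented): rare, spectacular
# events get coverage far out of proportion to their frequency.
media_stories_per_year = {
    "terrorism": 50_000,
    "homicide": 20_000,
    "car crash": 5_000,
}

def actual_ranking(rates):
    """Rank causes by how often they actually kill people."""
    return sorted(rates, key=rates.get, reverse=True)

def perceived_ranking(coverage):
    """Rank causes by how easily examples come to mind (coverage)."""
    return sorted(coverage, key=coverage.get, reverse=True)

print(actual_ranking(actual_deaths_per_year))     # car crash deadliest
print(perceived_ranking(media_stories_per_year))  # terrorism feels deadliest
```

The two rankings come out exactly inverted, which is the point: when recall is driven by coverage rather than frequency, the rarest risk feels like the biggest one.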
Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.
From an article about and interview with the researchers:
To get at this question experimentally, Thomas and her collaborators created a series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period. For example, in one vignette, a 10-month-old was left alone for 15 minutes, asleep in the car in a cool, underground parking garage. In another vignette, an 8-year-old was left for an hour at a Starbucks, one block away from her parent’s location.
To experimentally manipulate participants’ moral attitude toward the parent, the experimenters varied the reason the child was left unattended across a set of six experiments with over 1,300 online participants. In some cases, the child was left alone unintentionally (for example, in one case, a mother is hit by a car and knocked unconscious after buckling her child into her car seat, thereby leaving the child unattended in the car seat). In other cases, the child was left unattended so the parent could go to work, do some volunteering, relax or meet a lover.
Not surprisingly, the parent’s reason for leaving a child unattended affected participants’ judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.
The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same (the age of the child, the duration and location of the unattended period, and so on), participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants’ factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent’s reason for leaving.
I have written before on the vulnerabilities equities process (VEP): the system by which the US government decides whether to disclose and fix a computer vulnerability or keep it secret and use it offensively. Ari Schwartz and Rob Knake, both former Directors for Cybersecurity Policy at the White House National Security Council, have written a report describing the process as we know it, with policy recommendations for improving it.
Basically, their recommendations are focused on improving the transparency, oversight, and accountability (three things I repeatedly recommend) of the process. In summary:
- The President should issue an Executive Order mandating government-wide compliance with the VEP.
- Make the general criteria used to decide whether or not to disclose a vulnerability public.
- Clearly define the VEP.
- Make sure any undisclosed vulnerabilities are reviewed periodically.
- Ensure that the government has the right to disclose any vulnerabilities it purchases.
- Transfer oversight of the VEP from the NSA to the DHS.
- Issue an annual report on the VEP.
- Expand Congressional oversight of the VEP.
- Mandate oversight by other independent bodies inside the Executive Branch.
- Expand funding for both offensive and defensive vulnerability research.
These all seem like good ideas to me. This is a complex issue, one I wrote about in Data and Goliath (pages 146-50), and one that’s only going to get more important in the Internet of Things.
This interesting essay argues that cyber risks are generally not systemic risks, and are instead generally much smaller. That’s certainly been our experience to date:
While systemic risk is frequently invoked as a key reason to be on guard for cyber risk, such a connection is quite tenuous. A cyber event might in extreme cases result in a systemic crisis, but to do so needs highly fortuitous timing.
From the point of view of policymaking, rather than simply asserting systemic consequences for cyber risks, it would be better if the cyber discussion were better integrated into the existing macroprudential dialogue. To us, the overall discussion of cyber and systemic risk seems to be too focused on IT considerations and not enough on economic consequences.
After all, if there are systemic consequences from cyber risk, the chain of causality will be found in the macroprudential domain.
Interesting research: Suphannee Sivakorn, Iasonas Polakis and Angelos D. Keromytis, “I Am Robot: (Deep) Learning to Break Semantic Image CAPTCHAs“:
Abstract: Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold: to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an “advanced risk analysis system” that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content.
In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications for impacting the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.
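To make the attack concrete: the core of a semantic image-captcha solver is matching classifier labels against the challenge keyword. The sketch below shows only that final selection step; in the paper the labels come from a deep-learning annotation system, while here they are hard-coded stubs, and the synonym table and confidence values are invented for illustration:

```python
# Selection step of a semantic image-captcha solver: given (label,
# confidence) pairs for each candidate image, pick the images whose
# labels match the challenge keyword. Labels would normally come from
# an image classifier; everything below is a hypothetical stand-in.

SYNONYMS = {
    "soda": {"soda", "soft drink", "cola", "pop"},
    "wine": {"wine", "red wine", "wine bottle"},
}

def matches(challenge, labels, threshold=0.5):
    """True if any sufficiently confident label is a synonym of the challenge."""
    accepted = SYNONYMS.get(challenge, {challenge})
    return any(label in accepted and conf >= threshold
               for label, conf in labels)

# (label, confidence) pairs as a classifier might emit them.
candidates = [
    [("cola", 0.91), ("bottle", 0.40)],   # image 0
    [("dog", 0.88)],                      # image 1
    [("soft drink", 0.62)],               # image 2
]

solution = [i for i, labels in enumerate(candidates)
            if matches("soda", labels)]
print(solution)  # [0, 2]
```

The hard part of the paper is the annotation itself; once a classifier labels the images reliably, the matching logic that actually defeats the captcha is this trivial.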
This is good:
Threats constantly change, yet our political discourse suggests that our vulnerabilities are simply for lack of resources, commitment or competence. Sometimes, that is true. But mostly we are vulnerable because we choose to be; because we’ve accepted, at least implicitly, that some risk is tolerable. A state that could stop every suicide bomber wouldn’t be a free or, let’s face it, fun one.
We will simply never get to maximum defensive posture. Regardless of political affiliation, Americans wouldn’t tolerate the delay or intrusion of an urban mass-transit system that required bag checks and pat-downs. After the 2013 Boston Marathon bombing, many wondered how to make the race safe the next year. A heavier police presence helps, but the only truly safe way to host a marathon is to not have one at all. The risks we tolerate, then, are not necessarily bad bargains simply because an enemy can exploit them.
No matter what promises are made on the campaign trail, terrorism will never be vanquished. There is no ideology, no surveillance, no wall that will definitely stop some 24-year-old from becoming radicalized on the Web, gaining access to guns and shooting a soft target. When we don’t admit this to ourselves, we often swing between the extremes of putting our heads in the sand or losing them entirely.
I am reminded of my own 2006 “Refuse to be Terrorized” essay.
I’ve written about the difference between risk perception and risk reality. I thought about that when reading this list of Americans’ top technology fears:
- Corporate tracking of personal information
- Government tracking of personal information
- Robots replacing workforce
- Trusting artificial intelligence to do work
- Artificial intelligence
- Technology I don’t understand
More at the link.
This is an interesting story. Someone posts a photograph of herself holding a winning horse-race betting ticket, and someone else uses the data from the photograph to forge the ticket and claim the winnings.
I have been thinking a lot about how technology is messing with our intuitions about risk and security. This is a good example of that.
Cloud computing is the future of computing. Specialization and outsourcing make society more efficient and scalable, and computing isn’t any different.
But why aren’t we there yet? Why don’t we, in Simon Crosby’s words, “get on with it”? I have discussed some reasons: loss of control, new and unquantifiable security risks, and—above all—a lack of trust. It is not enough to simply dismiss these concerns, as the number of companies not embracing the cloud shows. It is more useful to consider what we need to do to bridge the trust gap.
A variety of mechanisms can create trust. When I outsourced my food preparation to a restaurant last night, it never occurred to me to worry about food safety. That blind trust is largely created by government regulation. It ensures that our food is safe to eat, just as it ensures our paint will not kill us and our planes are safe to fly. It is all well and good for Mr. Crosby to write that cloud companies “will invest heavily to ensure that they can satisfy complex…regulations,” but this presupposes that we have comprehensive regulations. Right now, it is largely a free-for-all out there, and it can be impossible to see how security in the cloud works. When robust consumer-safety regulations underpin outsourcing, people can trust the systems.
This is true for any kind of outsourcing. Attorneys, tax preparers and doctors are licensed and highly regulated, by both governments and professional organizations. We trust our doctors to cut open our bodies because we know they are not just making it up. We need a similar professionalism in cloud computing.
Reputation is another big part of trust. We rely on both word-of-mouth and professional reviews to decide on a particular car or restaurant. But none of that works without considerable transparency. Security is an example. Mr. Crosby writes: “Cloud providers design security into their systems and dedicate enormous resources to protect their customers.” Maybe some do; many certainly do not. Without more transparency, as a cloud customer you cannot tell the difference. Try asking either Amazon Web Services or Salesforce.com to see the details of their security arrangements, or even to indemnify you for data breaches on their networks. It is even worse for free consumer cloud services like Gmail and iCloud.
We need to trust cloud computing’s performance, reliability and security. We need open standards, rules about being able to remove our data from cloud services, and the assurance that we can switch cloud services if we want to.
We also need to trust who has access to our data, and under what circumstances. One commenter wrote: “After Snowden, the idea of doing your computing in the cloud is preposterous.” He isn’t making a technical argument: a typical corporate data center isn’t any better defended than a cloud-computing one. He is making a legal argument. Under American law—and similar laws in other countries—the government can force your cloud provider to give up your data without your knowledge and consent. If your data is in your own data center, you at least get to see a copy of the court order.
Corporate surveillance matters, too. Many cloud companies mine and sell your data or use it to manipulate you into buying things. Blocking broad surveillance by both governments and corporations is critical to trusting the cloud, as is eliminating secret laws and orders regarding data access.
In the future, we will do all our computing in the cloud: both commodity computing and computing that requires personalized expertise. But this future will only come to pass when we manage to create trust in the cloud.
This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the third of three essays. Here are Parts 1 and 2. Visit the site for the other side of the debate and other commentary.