Really interesting first-hand experience from Maciej Cegłowski.
Entries Tagged "risks"
This is the best analysis of the software causes of the Boeing 737 MAX disasters that I have read.
Technically this is safety and not security; there was no attacker. But the fields are closely related and there are a lot of lessons for IoT security — and the security of complex socio-technical systems in general — in here.
EDITED TO ADD (4/30): A rebuttal of sorts.
EDITED TO ADD (5/13): The comments to this blog post are of particularly high quality, and I recommend them to anyone interested in the topic.
I believe that, somewhere, there is a highly qualified security person who has had enough of corporate life and wants instead to make a difference in the world. If that’s you, please consider applying.
Cryptocurrencies, although a seemingly interesting idea, are simply not fit for purpose. They do not work as currencies, they are grossly inefficient, and they are not meaningfully distributed in terms of trust. Risks involving cryptocurrencies occur in four major areas: technical risks to participants, economic risks to participants, systemic risks to the cryptocurrency ecosystem, and societal risks.
I haven’t written much about cryptocurrencies, but I share Weaver’s skepticism.
EDITED TO ADD (8/2): Paul Krugman on cryptocurrencies.
Google has a new login service for high-risk users. It’s good, but unforgiving.
Logging in from a desktop will require a special USB key, while accessing your data from a mobile device will similarly require a Bluetooth dongle. All non-Google services and apps will be exiled from reaching into your Gmail or Google Drive. Google’s malware scanners will use a more intensive process to quarantine and analyze incoming documents. And if you forget your password, or lose your hardware login keys, you’ll have to jump through more hoops than ever to regain access, the better to foil any intruders who would abuse that process to circumvent all of Google’s other safeguards.
It’s called Advanced Protection.
I am part of this very interesting project:
For many users, blog posts on how to install Signal, massive guides to protecting your digital privacy, and broad statements like “use Tor” — all offered in good faith and with the best of intentions — can be hard to understand or act upon. If we want to truly secure civil society from digital attacks and empower communities in their fight to protect their rights, we’ve got to recognize that digital security is largely a human problem, not a technical one. Taking cues from the experiences of the deeply knowledgeable global digital security training community, the Digital Security Exchange will seek to make it easier for trainers and experts to connect directly to the communities in the U.S. — sharing expertise, documentation, and best practices — in order to increase capacity and security across the board.
Summer Fowler at CMU has invented a new word: prosilience:
I propose that we build operationally PROSILIENT organizations. If operational resilience, as we like to say, is risk management “all grown up,” then prosilience is resilience with consciousness of environment, self-awareness, and the capacity to evolve. It is not about being able to operate through disruption, it is about anticipating disruption and adapting before it even occurs–a proactive version of resilience. Nascent prosilient capabilities include exercises (tabletop or technical) that simulate how organizations would respond to a scenario. The goal, however, is to automate, expand, and perform continuous exercises based on real-world indicators rather than on scenarios.
I have long been a big fan of resilience as a security concept, and the property we should be aiming for. I’m not sure prosilience buys me anything new, but this is my first encounter with this new buzzword. It would certainly make for a best-selling business-book title.
Good article that crunches the data and shows that the press’s coverage of terrorism is disproportional to its comparative risk.
This isn’t new. I’ve written about it before, and wrote about it more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic. We tend to infer the probability of something by how easy it is to bring examples of the thing to mind. So if we can think of a lot of tiger attacks in our community, we infer that the risk is high. If we can’t think of many lion attacks, we infer that the risk is low. But while this was a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It’s not the media’s fault. By definition, news is “something that hardly ever happens.” But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.
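The distortion described above can be sketched numerically. In this toy model, all the counts and coverage rates are invented for illustration only; the point is the mechanism, not the figures: if perceived risk tracks media coverage rather than underlying frequency, a rare-but-heavily-covered cause of death can feel more common than a frequent-but-barely-covered one.

```python
# Toy illustration of the availability heuristic.
# All numbers here are invented for illustration; they are not real statistics.

# Hypothetical annual incident counts for two causes of death
true_counts = {"homicide": 16000, "terrorism": 100}

# Hypothetical media stories generated per incident: rare, dramatic
# events get far more coverage per occurrence than common ones
stories_per_incident = {"homicide": 0.1, "terrorism": 50}

# Someone judging risk by ease of recall effectively estimates
# frequency from coverage, not from the underlying counts
coverage = {k: true_counts[k] * stories_per_incident[k] for k in true_counts}
total_coverage = sum(coverage.values())
perceived = {k: coverage[k] / total_coverage for k in coverage}

total_actual = sum(true_counts.values())
actual = {k: true_counts[k] / total_actual for k in true_counts}

for k in true_counts:
    print(f"{k}: actual share {actual[k]:.1%}, perceived share {perceived[k]:.1%}")
```

Under these made-up numbers, terrorism accounts for well under 1% of actual deaths but the large majority of coverage, so a recall-based risk estimate wildly overweights it.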
Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.
There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.
If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.
Novelty plus dread plus a good story equals overreaction.
It’s not just murders. It’s flying vs. driving: the former is much safer, but accidents are so much more spectacular when they occur.
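A back-of-the-envelope calculation makes the flying-vs.-driving comparison concrete. The per-mile fatality rates below are illustrative assumptions chosen only to show the arithmetic, not sourced figures:

```python
# Back-of-the-envelope risk comparison for a hypothetical trip.
# The rates below are illustrative assumptions, not sourced statistics.

deaths_per_billion_miles = {
    "driving": 7.0,   # assumed rate for car travel
    "flying": 0.07,   # assumed rate for commercial aviation
}

trip_miles = 1000  # a hypothetical 1,000-mile trip

for mode, rate in deaths_per_billion_miles.items():
    p = rate * trip_miles / 1e9  # probability of death for this trip
    print(f"{mode}: ~{p:.2e} chance of death over {trip_miles} miles")

ratio = deaths_per_billion_miles["driving"] / deaths_per_billion_miles["flying"]
print(f"driving is ~{ratio:.0f}x riskier per mile under these assumptions")
```

Even with rates differing by two orders of magnitude, both per-trip probabilities are tiny, which is part of why intuition latches onto the vividness of a crash rather than the numbers.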