Looks interesting. Finnish residents can take it for credit.
Entries Tagged "cybersecurity"
Obama: Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we’ve got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.
What I spend a lot of time worrying about are things like pandemics. You can’t build walls in order to prevent the next airborne lethal flu from landing on our shores. Instead, what we need to be able to do is set up systems to create public health systems in all parts of the world, quick triggers that tell us when we see something emerging, and make sure we’ve got quick protocols and systems that allow us to make vaccines a lot smarter. So if you take a public health model, and you think about how we can deal with, you know, the problems of cybersecurity, a lot may end up being really helpful in thinking about the AI threats.
On today’s Internet, too much power is concentrated in too few hands. In the early days of the Internet, individuals were empowered. Now governments and corporations hold the balance of power. If we are to leave a better Internet for the next generations, governments need to rebalance Internet power more towards the individual. This means several things.
First, less surveillance. Surveillance has become the business model of the Internet, and an aspect that is appealing to governments worldwide. While computers make it easier to collect data, and networks to aggregate it, governments should do more to ensure that any surveillance is exceptional, transparent, regulated and targeted. It’s a tall order; governments such as that of the US need to overcome their own mass-surveillance desires, and at the same time implement regulations to fetter the ability of Internet companies to do the same.
Second, less censorship. The early days of the Internet were free of censorship, but no more. Many countries censor their Internet for a variety of political and moral reasons, and many large social networking platforms do the same thing for business reasons. Turkey censors anti-government political speech; many countries censor pornography. Facebook has censored both nudity and videos of police brutality. Governments need to commit to the free flow of information, and to make it harder for others to censor.
Third, less propaganda. One of the side-effects of free speech is erroneous speech. This naturally corrects itself when everybody can speak, but an Internet with centralized power is one that invites propaganda. For example, both China and Russia actively use propagandists to influence public opinion on social media. The more governments can do to counter propaganda in all forms, the better off we all are.
And fourth, less use control. Governments need to ensure that our Internet systems are open and not closed, that neither totalitarian governments nor large corporations can limit what we do on them. This includes limits on what apps you can run on your smartphone, on what you can do with the digital files you purchase, and on the data collected by the digital devices you own. Controls inhibit innovation: technical, business, and social.
Solutions require both corporate regulation and international cooperation. They require Internet governance to remain in the hands of the global community of engineers, companies, civil society groups, and Internet users. They require governments to be agile in the face of an ever-evolving Internet. And they’ll result in more power and control to the individual and less to powerful institutions. That’s how we built an Internet that enshrined the best of our societies, and that’s how we’ll keep it that way for future generations.
Interesting survey of the cybersecurity culture in Norway.
96% of all Norwegians are online, more than 90% embrace new technology, and 6 of 10 feel capable of judging what is safe to do online. Still, cybercrime costs Norway approximately 19 billion NOK annually. At the same time, 73.9% argue that the Internet will not be safer even if their personal computer is secure. We have also found that a majority of Norwegians accept that their online activities may be monitored by the authorities. But less than half the population believes the police are capable of helping them if they are subject to cybercrime, and 4 of 10 see cyber activists (e.g. Anonymous) as playing a role in the fight against cybercrime and cyberwar. 44% of the participants in this study say that they have refrained from using an online service after they have learned about threats or security incidents. This should obviously influence digitalization policy.
Lots of details in the report.
Interesting research from Sasha Romanosky at RAND:
Abstract: In 2013, the US President signed an executive order designed to help secure the nation’s critical infrastructure from cyberattacks. As part of that order, he directed the National Institute of Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12,000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. The findings suggest that public concerns regarding the increasing rates of breaches and legal actions may be excessive compared to the relatively modest financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200,000 (about the same as the firm’s annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.
The result is that it often makes business sense to underspend on cybersecurity and just pay the costs of breaches:
Romanosky analyzed 12,000 incident reports and found that typically they only account for 0.4 per cent of a company’s annual revenues. That compares to billing fraud, which averages at 5 per cent, or retail shrinkage (ie, shoplifting and insider theft), which accounts for 1.3 per cent of revenues.
As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems.
He also noted that the effects of a data incident typically don’t have many ramifications on the stock price of a company in the long term. Under the circumstances, it doesn’t make a lot of sense to invest too much in cyber security.
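The comparison here is simple arithmetic. A back-of-envelope sketch, using the loss rates quoted above; the $50 million annual revenue is not stated in the text, only implied by a $200,000 incident being 0.4% of revenue:

```python
# Back-of-envelope loss comparison. The percentage rates come from the
# quoted figures; the revenue is implied by $200,000 being 0.4% of it.
revenue = 200_000 / 0.004          # implied annual revenue: $50 million
cyber_incident = 0.004 * revenue   # typical cyber incident: 0.4% of revenue
billing_fraud = 0.05 * revenue     # billing fraud: 5% of revenue
shrinkage = 0.013 * revenue        # retail shrinkage: 1.3% of revenue
print(f"cyber: ${cyber_incident:,.0f}, fraud: ${billing_fraud:,.0f}, "
      f"shrinkage: ${shrinkage:,.0f}")
```

On these numbers, a typical firm loses over twelve times as much to billing fraud, and about three times as much to shrinkage, as to a cyber incident, which is the economic core of the underspending argument.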
What’s being left out of these costs are the externalities. Yes, the costs of a cyberattack to the company that suffers it are low, but there are often substantial additional costs borne by other people. The way to look at this is not to conclude that cybersecurity isn’t really a problem, but instead that there is a significant market failure that governments need to address.
Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.
Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
So far, Internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by—presumptively—China in 2015.
On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door—or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.
With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the Internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.
Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.
The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn:
Software Control. The Internet of Things is a result of everything turning into a computer. This gives us enormous power and flexibility, but it brings insecurities with it as well. As more things come under software control, they become vulnerable to all the attacks we’ve seen against computers. But because many of these things are both inexpensive and long-lasting, many of the patch and update systems that work with computers and smartphones won’t work. Right now, the only way to patch most home routers is to throw them away and buy new ones. And the security that comes from replacing your computer and phone every few years won’t work with your refrigerator and thermostat: on the average, you replace the former every 15 years, and the latter approximately never. A recent Princeton survey found 500,000 insecure devices on the Internet. That number is about to explode.
Interconnections. As these systems become interconnected, vulnerabilities in one lead to attacks against others. Already we’ve seen Gmail accounts compromised through vulnerabilities in Samsung smart refrigerators, hospital IT networks compromised through vulnerabilities in medical devices, and Target Corporation hacked through a vulnerability in its HVAC system. Systems are filled with externalities that affect other systems in unforeseen and potentially harmful ways. What might seem benign to the designers of a particular system becomes harmful when it’s combined with some other system. Vulnerabilities on one system cascade into other systems, and the result is a vulnerability that no one saw coming and no one bears responsibility for fixing. The Internet of Things will make exploitable vulnerabilities much more common. It’s simple mathematics. If 100 systems are all interacting with each other, that’s about 5,000 interactions and 5,000 potential vulnerabilities resulting from those interactions. If 300 systems are all interacting with each other, that’s 45,000 interactions. 5,000 systems: about 12.5 million interactions. Most of them will be benign or uninteresting, but some of them will be very damaging.
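The "simple mathematics" here is the count of distinct pairs among n mutually interacting systems, n(n-1)/2; a quick sketch:

```python
def pairwise_interactions(n):
    """Number of distinct pairs among n mutually interacting systems: n choose 2."""
    return n * (n - 1) // 2

assert pairwise_interactions(100) == 4_950         # ~5,000
assert pairwise_interactions(300) == 44_850        # ~45,000
assert pairwise_interactions(5_000) == 12_497_500  # ~12.5 million
```

Each pair is only a potential interaction, of course; the point is that the number of places a cross-system vulnerability can hide grows quadratically with the number of connected systems.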
Autonomy. Increasingly, our computer systems are autonomous. They buy and sell stocks, turn the furnace on and off, regulate electricity flow through the grid, and—in the case of driverless cars—automatically pilot multi-ton vehicles to their destinations. Autonomy is great for all sorts of reasons, but from a security perspective it means that attacks can take effect immediately, automatically, and ubiquitously. The more we remove humans from the loop, the faster attacks can do their damage and the more we lose our ability to rely on actual smarts to notice something is wrong before it’s too late.
We’re building systems that are increasingly powerful, and increasingly useful. The necessary side effect is that they are increasingly dangerous. A single vulnerability forced Chrysler to recall 1.4 million vehicles in 2015. We’re used to computers being attacked at scale—think of the large-scale virus infections from the last decade—but we’re not prepared for this happening to everything else in our world.
Governments are taking notice. Last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers testified before Congress, warning of these threats. They both believe we’re vulnerable.
This is how it was phrased in the DNI’s 2015 Worldwide Threat Assessment: “Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decision-making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”
The DNI 2016 threat assessment included something similar: “Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decision making, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI—in settings such as public utilities and healthcare—will only exacerbate these potential effects.”
Security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement. This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organizations to understand; companies are motivated to hide the insecurity of their own systems from their customers, their users, and the public; the interconnections can make it impossible to connect data breaches with resultant harms; and the interests of the companies often don’t match the interests of the people.
Governments need to play a larger role: setting standards, policing compliance, and implementing solutions across companies and networks. And while the White House Cybersecurity National Action Plan says some of the right things, it doesn’t nearly go far enough, because so many of us are phobic of any government-led solution to anything.
The next president will probably be forced to deal with a large-scale Internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can’t, and the political will to make it happen.
This essay previously appeared on Vice Motherboard.
EDITED TO ADD (8/11): An essay that agrees with me.
Lawfare is turning out to be the go-to blog for policy wonks about various government debates on cybersecurity. There are two good posts this week on the Going Dark debate.
The first is from those of us who wrote the “Keys Under Doormats” paper last year, criticizing the concept of backdoors and key escrow. We were responding to a half-baked proposal on how to give the government access without causing widespread insecurity, and we pointed out where almost all of these sorts of proposals fall short:
1. Watch for systems that rely on a single powerful key or a small set of them.
2. Watch for systems using high-value keys over and over and still claiming not to increase risk.
3. Watch for the claim that the abstract algorithm alone is the measure of system security.
4. Watch for the assumption that scaling anything on the global Internet is easy.
5. Watch for the assumption that national borders are not a factor.
6. Watch for the assumption that human rights and the rule of law prevail throughout the world.
The second is by Susan Landau, and is a response to the ODNI’s response to the “Don’t Panic” report. Our original report said basically that the FBI wasn’t going dark and that surveillance information is everywhere. At a Senate hearing, Sen. Wyden requested that the Office of the Director of National Intelligence respond to the report. It did—not very well, honestly—and Landau responded to that response. She pointed out that there really wasn’t much disagreement: that the points it claimed to have issue with were actually points we made and agreed with.
In the end, the ODNI’s response to our report leaves me somewhat confused. The reality is that the only strong disagreement seems to be with an exaggerated view of one finding. It almost appears as if ODNI is using the Harvard report as an opportunity to say, “Widespread use of encryption will make our work life more difficult.” Of course it will. Widespread use of encryption will also help prevent some of the cybersecurity exploits and attacks we have been experiencing over the last decade. The ODNI letter ignored that issue.
EDITED TO ADD: Related is this article where James Comey defends spending $1M+ on that iPhone vulnerability. There’s some good discussion of the vulnerabilities equities process, and the FBI’s technical lack of sophistication.
I’ll be participating in an end-of-year trends and predictions webinar on Thursday, December 17, at 1:00 PM EST. Join me here.
Interesting research: “Identifying patterns in informal sources of security information,” by Emilee Rader and Rick Wash, Journal of Cybersecurity, 1 Dec 2015.
Abstract: Computer users have access to computer security information from many different sources, but few people receive explicit computer security training. Despite this lack of formal education, users regularly make many important security decisions, such as “Should I click on this potentially shady link?” or “Should I enter my password into this form?” For these decisions, much knowledge comes from incidental and informal learning. To better understand differences in the security-related information available to users for such learning, we compared three informal sources of computer security information: news articles, web pages containing computer security advice, and stories about the experiences of friends and family. Using a Latent Dirichlet Allocation topic model, we found that security information from peers usually focuses on who conducts attacks, information containing expertise focuses instead on how attacks are conducted, and information from the news focuses on the consequences of attacks. These differences may prevent users from understanding the persistence and frequency of seemingly mundane threats (viruses, phishing), or from associating protective measures with the generalized threats the users are concerned about (hackers). Our findings highlight the potential for sources of informal security education to create patterns in user knowledge that affect their ability to make good security decisions.
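The method behind the abstract is Latent Dirichlet Allocation. As a minimal illustration of how LDA assigns tokens to topics, here is a toy collapsed Gibbs sampler; this is a sketch of the general technique, not the authors' actual pipeline, and the corpus and word ids are invented:

```python
import random

def lda_gibbs(docs, n_topics, vocab_size, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA; docs are lists of integer word ids."""
    rng = random.Random(seed)
    n_dk = [[0] * n_topics for _ in docs]               # doc-topic counts
    n_kw = [[0] * vocab_size for _ in range(n_topics)]  # topic-word counts
    n_k = [0] * n_topics                                # tokens per topic
    # Random initial topic assignment for every token.
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]  # remove this token's current assignment
                n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                # Full conditional: p(z=t) ~ (n_dk+alpha)*(n_kw+beta)/(n_k+V*beta)
                weights = [(n_dk[d][t] + alpha) * (n_kw[t][w] + beta)
                           / (n_k[t] + vocab_size * beta) for t in range(n_topics)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k  # resample and restore the counts
                n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    return z, n_kw

# Toy corpus: word ids 0-2 mimic an "attack" vocabulary, 3-5 an "advice" one.
docs = [[0, 1, 2, 0, 1], [0, 2, 1, 2], [3, 4, 5, 3, 4], [4, 5, 3, 5]]
z, n_kw = lda_gibbs(docs, n_topics=2, vocab_size=6)
assert sum(map(sum, n_kw)) == sum(len(d) for d in docs)  # one topic per token
```

In the paper, topics learned this way are what get compared across news articles, advice pages, and peer stories; the sampler above only demonstrates the mechanics.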