The report notes that:
- The HSE did not have a Chief Information Security Officer (CISO) or a “single responsible owner for cybersecurity at either senior executive or management level to provide leadership and direction”.
- It had no documented cyber incident response runbooks or IT recovery plans (apart from documented AD recovery plans) for recovering from a wide-scale ransomware event.
- Under-resourced Information Security Managers were not performing their business-as-usual role (including a NIST-based cybersecurity review of systems) but were instead evaluating security controls for the COVID-19 vaccination system. Antivirus software triggered numerous alerts after detecting Cobalt Strike activity, but these were not escalated. (The antivirus server was later encrypted in the attack.)
- There was no security monitoring capability that was able to effectively detect, investigate and respond to security alerts across HSE’s IT environment or the wider National Healthcare Network (NHN).
- There was a lack of effective patching (updates, bug fixes, etc.) across the IT estate, and reliance was placed on a single antivirus product that was not monitored or effectively maintained with updates across the estate. (The initial workstation attacked had not had its antivirus signatures updated for over a year.)
- Over 30,000 machines were running Windows 7 (out of support since January 2020).
- The initial breach came after an HSE staff member interacted with a malicious Microsoft Office Excel file attached to a phishing email; numerous subsequent alerts were not effectively investigated.
PwC’s crisp list of recommendations in the wake of the incident, along with its detail on the business impact of the HSE ransomware attack, may prove highly useful guidance on best practice for IT professionals looking to set up a security programme and get it funded.
Even before Apple made its announcement, law enforcement shifted their battle for backdoors to client-side scanning. The idea is that they wouldn’t touch the cryptography, but instead eavesdrop on communications and systems before encryption or after decryption. It’s not a cryptographic backdoor, but it’s still a backdoor—and brings with it all the insecurities of a backdoor.
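To make the mechanism concrete, here is a minimal sketch of the client-side-scanning pattern, in Python. It is a hypothetical illustration only, not Apple's or any vendor's actual design: the blocklist, the `report_match()` hook, and the use of a plain cryptographic hash in place of the perceptual hashing that real proposals use are all invented for the example.

```python
# Minimal sketch of the client-side-scanning pattern described above.
# Hypothetical illustration only, not any vendor's design: a SHA-256 hash
# stands in for the perceptual hashing real proposals use, and BLOCKLIST /
# report_match() are invented placeholders.
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BLOCKLIST = {hashlib.sha256(b"example forbidden content").hexdigest()}

def report_match(digest: str) -> None:
    # Stand-in for the out-of-band report to the provider or authorities.
    # This side channel is what makes the scheme a backdoor in effect.
    print(f"match reported: {digest}")

def scan_then_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The message is inspected *before* encryption; the cryptography
    # itself is untouched, but confidentiality is already compromised.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        report_match(digest)
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

key = AESGCM.generate_key(bit_length=256)
ciphertext = scan_then_encrypt(b"hello", key)  # scanned, then encrypted as usual
```

Even in this toy version the point is visible: the cipher is never weakened, yet the message is inspected, and reportable, before it is ever encrypted.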
I’m part of a group of cryptographers that has just published a paper discussing the security risks of such a system. (It’s substantially the same group that wrote a similar paper about key escrow in 1997, and other “exceptional access” proposals in 2015. We seem to have to do this every decade or so.) In our paper, we examine both the efficacy of such a system and its potential security failures, and conclude that it’s a really bad idea.
We had been working on the paper well before Apple’s announcement. And while we do talk about Apple’s system, our focus is really on the idea in general.
Ross Anderson wrote a blog post on the paper. (It’s always great when Ross writes something. It means I don’t have to.) So did Susan Landau. And there’s press coverage in the New York Times, the Guardian, Computer Weekly, the Financial Times, Forbes, El Pais (English translation), NRK (English translation), and—this is the best article of them all—the Register. See also this analysis of the law and politics of client-side scanning from last year.
Together with Nate Kim (former student) and Trey Herr (Atlantic Council Cyber Statecraft Initiative), I have written a paper on IoT supply chain security. The basic problem we try to solve is: How do you enforce IoT security regulations when most of the stuff is made in other countries? And our solution is: enforce the regulations on the domestic company that’s selling the stuff to consumers. There’s a lot of detail between here and there, though, and it’s all in the paper.
We also wrote a Lawfare post:
…we propose to leverage these supply chains as part of the solution. Selling to U.S. consumers generally requires that IoT manufacturers sell through a U.S. subsidiary or, more commonly, a domestic distributor like Best Buy or Amazon. The Federal Trade Commission can apply regulatory pressure to this distributor to sell only products that meet the requirements of a security framework developed by U.S. cybersecurity agencies. That would put pressure on manufacturers to make sure their products are compliant with the standards set out in this security framework, including pressuring their component vendors and original device manufacturers to make sure they supply parts that meet the recognized security framework.
Zoom was doing so well…. And now we have this:
Corporate clients will get access to Zoom’s end-to-end encryption service now being developed, but Yuan said free users won’t enjoy that level of privacy, which makes it impossible for third parties to decipher communications.
“Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose,” Yuan said on the call.
This is just dumb. Imagine the scene in the terrorist/drug kingpin/money launderer hideout: “I’m sorry, boss. We could have had strong encryption to secure our bad intentions from the FBI, but we can’t afford the $20.” This decision will only affect protesters and dissidents and human rights workers and journalists.
Here’s advisor Alex Stamos doing damage control:
Nico, it’s incorrect to say that free calls won’t be encrypted and this turns out to be a really difficult balancing act between different kinds of harms. More details here:
Some facts on Zoom’s current plans for E2E encryption, which are complicated by the product requirements for an enterprise conferencing product and some legitimate safety issues. The E2E design is available here: https://github.com/zoom/zoom-e2e-whitepaper/blob/master/zoom_e2e.pdf
I read that document, and it doesn’t explain why end-to-end encryption is only available to paying customers. And note that Stamos said “encrypted” and not “end-to-end encrypted.” He knows the difference.
Yuan sought to assuage users’ concerns Wednesday in his weekly webinar, saying the company was striving to “do the right thing” for vulnerable groups, including children and hate-crime victims, whose abuse is sometimes broadcast through Zoom’s platform.
“We plan to provide end-to-end encryption to users for whom we can verify identity, thereby limiting harm to vulnerable groups,” he said. “I wanted to clarify that Zoom does not monitor meeting content. We do not have backdoors where participants, including Zoom employees or law enforcement, can enter meetings without being visible to others. None of this will change.”
Notice that he specifically did not say that he was offering end-to-end encryption to users of the free platform. Only to “users we can verify identity,” which I’m guessing means users that give him a credit card number.
The Twitter feed was similarly sloppily evasive:
We are seeing some misunderstandings on Twitter today around our encryption. We want to provide these facts.
Zoom does not provide information to law enforcement except in circumstances such as child sexual abuse.
Zoom does not proactively monitor meeting content.
Zoom does not have backdoors where Zoom or others can enter meetings without being visible to participants.
AES 256 GCM encryption is turned on for all Zoom users—free and paid.
Those facts have nothing to do with any “misunderstanding.” That was about end-to-end encryption, which the statement very specifically left out of that last sentence. The corporate communications have been clear and consistent.
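To make the distinction concrete, here is a minimal sketch in Python (using the `cryptography` package) of the two claims being conflated. It illustrates the general models only, not Zoom's actual design; the key-exchange construction and all names are chosen for the example.

```python
# Minimal sketch (not Zoom's protocol) of link encryption vs. end-to-end encryption.
# Assumes the Python "cryptography" package; names are invented for illustration.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def encrypt(key: bytes, msg: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, msg, None)

# "Encrypted" (link/transport encryption): the provider generates and holds
# the AES-256-GCM meeting key, so it can decrypt anything passing through it.
server_key = AESGCM.generate_key(bit_length=256)
ct = encrypt(server_key, b"hello")
assert AESGCM(server_key).decrypt(ct[:12], ct[12:], None) == b"hello"  # server can read it

# "End-to-end encrypted": the participants derive the key from a key exchange
# (X25519 + HKDF here); the provider only relays public keys and ciphertext.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
shared = alice.exchange(bob.public_key())  # Bob computes the same value on his side
e2e_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"meeting key").derive(shared)
ct = encrypt(e2e_key, b"hello")  # opaque to the provider, readable only by endpoints
```

Turning on AES-256-GCM, as in Zoom's last bullet, corresponds to the first model; only in the second is the content opaque to the provider.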
Come on, Zoom. You were doing so well. Of course you should offer premium features to paying customers, but please don’t include security and privacy in those premium features. They should be available to everyone.
And, hey, this is kind of a dumb time to side with the police over protesters.
I have emailed the CEO, and will report back if I hear anything. But for now, assume that the free version of Zoom will not support end-to-end encryption.
EDITED TO ADD (6/4): Another article.
EDITED TO ADD (6/4): I understand that this is complicated, both technically and politically. (Note, though, that Jitsi is doing it.) And, yes, lots of people confused end-to-end encryption with link encryption. (My readers tend to be more sophisticated than that.) My worry is that the “we’ll offer end-to-end encryption only to paying customers we can verify, even though there’s plenty of evidence that ‘bad purpose’ people will just get paid accounts” story plays into the dangerous narrative that encryption itself is dangerous when widely available. And I disagree with the notion that the possibility of child exploitation is a valid reason to deny security to large groups of people.
Matthew Green on this issue. An excerpt:
Once the precedent is set that E2E encryption is too “dangerous” to hand to the masses, the genie is out of the bottle. And once corporate America accepts that private communications are too politically risky to deploy, it’s going to be hard to put it back.
Want to help us work on end-to-end encrypted group video calling functionality that will be free for everyone? Zoom on over to our careers page….
The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.
That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic—when countries struggle to manage not just the outbreak but its social, economic, and political fallout—is especially tempting for interference.
In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.
More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.
Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts—many of which had previously been tied to Russia—had been seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.
As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States’ poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.
Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC’s influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports as in the past, but have several layers of software to store and transmit the data.
Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab’s servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital’s network of digital devices. One such device connected to a single hospital’s network could, in theory, be used to hack into the CDC’s entire COVID-19 database.
In practice, hacking deep into a hospital’s systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital’s network via the public Wi-Fi system. Once inside, they could move through most of the hospital’s databases and diagnostic systems. Gaining control of the hospital’s unencrypted image database, the researchers inserted malware that altered healthy patients’ CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time—and only after being alerted that some of the CTs had been manipulated.
Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security’s BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.
Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine’s national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia’s oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.
Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country’s ability to respond.
Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information—if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.
In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country’s digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America’s health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States’, that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.
This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.
EDITED TO ADD: On last week’s squid post, there was a big conversation about COVID-19. Many of the comments straddled the line between what are and aren’t the core topics here. Yesterday I deleted a bunch for being off-topic. Then I reconsidered and republished some of what I deleted.
Going forward, comments about COVID-19 will be restricted to the security and risk implications of the virus. This includes cybersecurity, security, risk management, surveillance, and containment measures. Comments that stray off those topics will be removed. By clarifying this, I hope to keep the conversation on topic while also allowing discussion of the security implications of current events.
Thank you for your patience and forbearance on this.
This is crazy (and dangerous). West Virginia is allowing people to vote via a smartphone app. Even crazier, the app uses blockchain—presumably because they have no idea what the security issues with voting actually are.
I believe that, somewhere, there is a highly qualified security person who has had enough of corporate life and wants instead to make a difference in the world. If that’s you, please consider applying.
Last month, the White House released the “National Cyber Strategy of the United States of America.” I generally don’t have much to say about these sorts of documents. They’re filled with broad generalities. Who can argue with:
Defend the homeland by protecting networks, systems, functions, and data;
Promote American prosperity by nurturing a secure, thriving digital economy and fostering strong domestic innovation;
Preserve peace and security by strengthening the ability of the United States in concert with allies and partners to deter and, if necessary, punish those who use cyber tools for malicious purposes; and
Expand American influence abroad to extend the key tenets of an open, interoperable, reliable, and secure Internet.
The devil is in the details, of course. And the strategy includes no details.
In a New York Times op-ed, Josephine Wolff argues that this new strategy, together with the more-detailed Department of Defense cyber strategy and the classified National Security Presidential Memorandum 13, represent a dangerous shift of US cybersecurity posture from defensive to offensive:
…the National Cyber Strategy represents an abrupt and reckless shift in how the United States government engages with adversaries online. Instead of continuing to focus on strengthening defensive technologies and minimizing the impact of security breaches, the Trump administration plans to ramp up offensive cyberoperations. The new goal: deter adversaries through pre-emptive cyberattacks and make other nations fear our retaliatory powers.
The Trump administration’s shift to an offensive approach is designed to escalate cyber conflicts, and that escalation could be dangerous. Not only will it detract resources and attention from the more pressing issues of defense and risk management, but it will also encourage the government to act recklessly in directing cyberattacks at targets before they can be certain of who those targets are and what they are doing.
There is no evidence that pre-emptive cyberattacks will serve as effective deterrents to our adversaries in cyberspace. In fact, every time a country has initiated an unprompted cyberattack, it has invariably led to more conflict and has encouraged retaliatory breaches rather than deterring them. Nearly every major publicly known online intrusion that Russia or North Korea has perpetrated against the United States has had significant and unpleasant consequences.
Wolff is right; this is reckless. In Click Here to Kill Everybody, I argue for a “defense dominant” strategy: that while offense is essential for defense, when the two are in conflict, it should take a back seat to defense. It’s more complicated than that, of course, and I devote a whole chapter to its implications. But as computers and the Internet become more critical to our lives and society, keeping them secure becomes more important than using them to attack others.
The Five Eyes—the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand)—have issued a “Statement of Principles on Access to Evidence and Encryption” where they claim their needs for surveillance outweigh everyone’s needs for security and privacy.
…the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution.
Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority.
The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.
To put it bluntly, this is reckless and shortsighted. I’ve repeatedly written about why this can’t be done technically, and why trying results in insecurity. But there’s a greater principle at stake: we need to decide, as nations and as societies, to put defense first. We need a “defense dominant” strategy for securing the Internet and everything attached to it.
This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are critical to our society and—now that they can affect the world in a direct physical manner—to our lives and property as well.
This is what I just wrote, in Click Here to Kill Everybody:
There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There’s no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world.
This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It’s actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals. Fixing those locks would also mean that criminals’ safe houses would be more secure, but it’s pretty clear that this downside would be worth the trade-off of protecting everyone’s house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses.
Yes, increasing our security will make it harder for us to eavesdrop on, and attack, our enemies in cyberspace. (It won’t make it impossible for law enforcement to solve crimes; I’ll get to that later in this chapter.) Regardless, it’s worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We’ve got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.
We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions.
Cory Doctorow has a good reaction.