Friday Squid Blogging: The Effect of Noise on Squid
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Read my blog posting guidelines here.
One follow-on to the story of Crypto AG being owned by the CIA: this interview with a Washington Post reporter. The whole thing is worth reading or listening to, but I was struck by these two quotes at the end:
...in South America, for instance, many of the governments that were using Crypto machines were engaged in assassination campaigns. Thousands of people were being disappeared, killed. And I mean, they're using Crypto machines, which suggests that United States intelligence had a lot of insight into what was happening. And it's hard to look back at that history now and see a lot of evidence of the United States going to any real effort to stop it, or even to expose it.
[...]
To me, the history of the Crypto operation helps to explain how U.S. spy agencies became accustomed to, if not addicted to, global surveillance. This program went on for more than 50 years, monitoring the communications of more than 100 countries. I mean, the United States came to expect that kind of penetration, that kind of global surveillance capability. And as Crypto became less able to deliver it, the United States turned to other ways to replace that. And the Snowden documents tell us a lot about how they did that.
The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.
That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic -- when countries struggle to manage not just the outbreak but its social, economic, and political fallout -- is especially tempting for interference.
In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.
More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.
Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts -- many of which had previously been tied to Russia -- had been seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.
As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States' poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.
Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC's influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports manually as they did in the past, but instead rely on several layers of software to store and transmit the data.
Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab's servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital's network of digital devices. One such device connected to a single hospital's network could, in theory, be used to hack into the CDC's entire COVID-19 database.
In practice, hacking deep into a hospital's systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital's network via the public Wi-Fi system. Once inside, they could move through most of the hospital's databases and diagnostic systems. Gaining control of the hospital's unencrypted image database, the researchers inserted malware that altered healthy patients' CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time -- and only after being alerted that some of the CTs had been manipulated.
Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security's BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.
Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine's national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia's oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.
Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country's ability to respond.
Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information -- if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.
In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country's digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America's health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States', that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.
This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.
EDITED TO ADD: On last week's squid post, there was a big conversation regarding COVID-19. Many of the comments straddled the line between what are and aren't the core topics. Yesterday I deleted a bunch for being off-topic. Then I reconsidered and republished some of what I deleted.
Going forward, comments about COVID-19 will be restricted to the security and risk implications of the virus. This includes cybersecurity, security, risk management, surveillance, and containment measures. Comments that stray off those topics will be removed. By clarifying this, I hope to keep the conversation on-topic while also allowing discussion of the security implications of current events.
Thank you for your patience and forbearance on this.
The BBC is reporting a vulnerability in the Let's Encrypt certificate service:
In a notification email to its clients, the organisation said: "We recently discovered a bug in the Let's Encrypt certificate authority code.
"Unfortunately, this means we need to revoke the certificates that were affected by this bug, which includes one or more of your certificates. To avoid disruption, you'll need to renew and replace your affected certificate(s) by Wednesday, March 4, 2020. We sincerely apologise for the issue."
I am seeing nothing on the Let's Encrypt website. And no other details anywhere. I'll post more when I know more.
EDITED TO ADD: More from Ars Technica:
Let's Encrypt uses Certificate Authority software called Boulder. Typically, a Web server that services many separate domain names and uses Let's Encrypt to secure them receives a single LE certificate that covers all domain names used by the server rather than a separate cert for each individual domain.
The bug LE discovered is that, rather than checking each domain name separately for valid CAA records authorizing that domain to be renewed by that server, Boulder would check a single one of the domains on that server n times (where n is the number of LE-serviced domains on that server). Let's Encrypt typically considers domain validation results good for 30 days from the time of validation--but CAA records specifically must be checked no more than eight hours prior to certificate issuance.
The upshot is that a 30-day window is presented in which certificates might be issued to a particular Web server by Let's Encrypt despite the presence of CAA records in DNS that would prohibit that issuance.
Since Let's Encrypt finds itself in the unenviable position of possibly having issued certificates that it should not have, it is revoking all current certificates that might not have had proper CAA record checking on Wednesday, March 4. Users whose certificates are scheduled to be revoked will need to manually force a renewal before then.
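The flaw Ars describes can be sketched in a few lines. This is a simplified model of the logic, not Boulder's actual code (which is written in Go); the function and variable names are hypothetical, and the CAA lookup is stubbed out with a hard-coded policy:

```python
# Hard-coded stand-in for a real DNS CAA lookup: pretend example.org
# has a CAA record forbidding issuance by this CA.
FORBIDDEN_BY_CAA = {"example.org"}

def caa_allows(domain: str) -> bool:
    """Return True if the domain's CAA records permit issuance."""
    return domain not in FORBIDDEN_BY_CAA

def buggy_recheck(domains: list[str]) -> bool:
    """Models the Boulder bug: for a certificate covering n domains,
    recheck the *first* domain n times instead of each domain once."""
    return all(caa_allows(domains[0]) for _ in domains)

def fixed_recheck(domains: list[str]) -> bool:
    """Correct behavior: recheck CAA for every domain on the certificate."""
    return all(caa_allows(d) for d in domains)

domains = ["example.com", "example.org", "example.net"]
print(buggy_recheck(domains))  # True -- issuance wrongly allowed
print(fixed_recheck(domains))  # False -- example.org's CAA record blocks it
```

Because the buggy loop never looks at the second or third domain, a prohibiting CAA record on any domain other than the first goes unnoticed, which is exactly the 30-day window of improper issuance described above.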
And Let's Encrypt has a blog post about it.
EDITED TO ADD: Slashdot thread.
There's a vulnerability in Wi-Fi hardware that breaks the encryption:
The vulnerability exists in Wi-Fi chips made by Cypress Semiconductor and Broadcom, the latter of which sold its Wi-Fi business to Cypress in 2016. The affected devices include iPhones, iPads, Macs, Amazon Echos and Kindles, Android devices, and Wi-Fi routers from Asus and Huawei, as well as the Raspberry Pi 3. Eset, the security company that discovered the vulnerability, said the flaw primarily affects Cypress' and Broadcom's FullMAC WLAN chips, which are used in billions of devices. Eset has named the vulnerability Kr00k, and it is tracked as CVE-2019-15126.
Manufacturers have made patches available for most or all of the affected devices, but it's not clear how many devices have installed the patches. Of greatest concern are vulnerable wireless routers, which often go unpatched indefinitely.
That's the real problem. Many of these devices won't get patched -- ever.
Privacy International has the details:
Key facts:
- Despite Facebook's claims, "Download Your Information" doesn't provide users with a list of all advertisers who uploaded a list with their personal data.
- As a user this means you can't exercise your rights under GDPR because you don't know which companies have uploaded data to Facebook.
- Information provided about the advertisers is also very limited (just a name and no contact details), preventing users from effectively exercising their rights.
- The recently announced Off-Facebook Activity feature comes with similar issues, giving little insight into how advertisers collect your personal data and how to prevent such data collection.
When I teach cybersecurity tech and policy at the Harvard Kennedy School, one of the assignments is to download your Facebook and Google data and look at it. Many are surprised at what the companies know about them.
Cool photo.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Read my blog posting guidelines here.
EDITED TO ADD (3/4): I just deleted a slew of comments about COVID-19. I may reinstate some of them later; right now I want some time to think about what is relevant and what is not. Surely lots of things are relevant to this blog -- fear, risk management, surveillance, containment measures -- but most of the talk about the virus is not. I would like to suggest that those who wish to talk about the virus do so elsewhere, and those who want to talk specifically about the security/risk implications continue to do so, politely and respectfully.
For years, Humble Bundle has been selling great books at a "pay what you can afford" model. This month, they're featuring as many as nineteen cybersecurity books for as little as $1, including four of mine. These are digital copies, all DRM-free. Part of the money goes to support the EFF or Let's Encrypt. (The default is 15%, and you can change that.) As an EFF board member, I know that we've received a substantial amount from this program in previous years.
Google presented its system of using deep-learning techniques to identify malicious email attachments:
At the RSA security conference in San Francisco on Tuesday, Google's security and anti-abuse research lead Elie Bursztein will present findings on how the new deep-learning scanner for documents is faring against the 300 billion attachments it has to process each week. It's challenging to tell the difference between legitimate documents in all their infinite variations and those that have specifically been manipulated to conceal something dangerous. Google says that 63 percent of the malicious documents it blocks each day are different than the ones its systems flagged the day before. But this is exactly the type of pattern-recognition problem where deep learning can be helpful.
[...]
The document analyzer looks for common red flags, probes files if they have components that may have been purposefully obfuscated, and does other checks like examining macros -- the tool in Microsoft Word documents that chains commands together in a series and is often used in attacks. The volume of malicious documents that attackers send out varies widely day to day. Bursztein says that since its deployment, the document scanner has been particularly good at flagging suspicious documents sent in bursts by malicious botnets or through other mass distribution methods. He was also surprised to discover how effective the scanner is at analyzing Microsoft Excel documents, a complicated file format that can be difficult to assess.
This is the sort of thing that's pretty well optimized for machine-learning techniques.
This law journal article discusses the role of class-action litigation to secure the Internet of Things.
Basically, the article postulates that (1) market realities will produce insecure IoT devices, and (2) political failures will leave that industry unregulated. Result: insecure IoT. It proposes proactive class action litigation against manufacturers of unsafe and unsecured IoT devices before those devices cause unnecessary injury or death. It's a lot to read, but it's an interesting take on how to secure this otherwise disastrously insecure world.
And it was inspired by my book, Click Here to Kill Everybody.