Entries Tagged "cybersecurity"


Cybersecurity During COVID-19

Three weeks ago (could it possibly be that long already?), I wrote about the increased risks of working remotely during the COVID-19 pandemic.

One, employees are working from their home networks and sometimes from their home computers. These systems are more likely to be out of date, unpatched, and unprotected. They are more vulnerable to attack simply because they are less secure.

Two, sensitive organizational data will likely migrate outside of the network. Employees working from home are going to save data on their own computers, where they aren’t protected by the organization’s security systems. This makes the data more likely to be hacked and stolen.

Three, employees are more likely to access their organizational networks insecurely. If the organization is lucky, they will have already set up a VPN for remote access. If not, they’re either trying to get one quickly or not bothering at all. Handing people VPN software to install and use with zero training is a recipe for security mistakes, but not using a VPN is even worse.

Four, employees are being asked to use new and unfamiliar tools like Zoom to replace face-to-face meetings. Again, these hastily set-up systems are likely to be insecure.

Five, the general chaos of “doing things differently” is an opening for attack. Tricks like business email compromise, where an employee gets a fake email from a senior executive asking him to transfer money to some account, will be more successful when the employee can’t walk down the hall to confirm the email’s validity—and when everyone is distracted and so many other things are being done differently.

NASA is reporting an increase in cyberattacks. From an agency memo:

A new wave of cyber-attacks is targeting Federal Agency Personnel, required to telework from home, during the Novel Coronavirus (COVID-19) outbreak. During the past few weeks, NASA’s Security Operations Center (SOC) mitigation tools have prevented success of these attempts. Here are some examples of what’s been observed in the past few days:

  • Doubling of email phishing attempts
  • Exponential increase in malware attacks on NASA systems
  • Double the number of mitigation-blocking of NASA systems trying to access malicious sites (often unknowingly) due to users accessing the Internet

Here’s another article that makes basically the same points I did:

But the rapid shift to remote working will inevitably create or exacerbate gaps in security. Employees using unfamiliar software will get settings wrong and leave themselves open to breaches. Staff forced to use their own ageing laptops from home will find their data to be less secure than those using modern equipment.

That’s a big problem because the security issues are not going away. For the last couple of months coronavirus-themed malware and phishing scams have been on the rise. Business email compromise scams—where crooks impersonate a CEO or other senior staff member and then try to trick workers into sending money to their accounts—could be made easier if staff primarily rely on email to communicate while at home.

EDITED TO ADD: This post has been translated into Portuguese.

EDITED TO ADD (4/13): A three-part series about home-office cybersecurity.

EDITED TO ADD: This post has been translated into Spanish.

Posted on April 7, 2020 at 10:00 AM

Internet Voting in Puerto Rico

Puerto Rico is considering allowing Internet voting. I have joined a group of security experts in a letter opposing the bill.

Cybersecurity experts agree that under current technology, no practically proven method exists to securely, verifiably, or privately return voted materials over the internet. That means that votes could be manipulated or deleted on the voter’s computer without the voter’s knowledge, local elections officials cannot verify that the voter’s ballot reflects the voter’s intent, and the voter’s selections could be traceable back to the individual voter. Such a system could violate protections guaranteeing a secret ballot, as outlined in Section 2, Article II of the Puerto Rico Constitution.

The ACLU agrees.

Posted on March 24, 2020 at 6:01 AM

CIA Dirty Laundry Aired

Joshua Schulte, the CIA employee standing trial for leaking the Vault 7 CIA hacking tools to WikiLeaks, maintains his innocence. And during the trial, a lot of shoddy security and sysadmin practices are coming out:

All this raises a question, though: just how bad is the CIA’s security that it wasn’t able to keep Schulte out, even accounting for the fact that he is a hacking and computer specialist? And the answer is: absolutely terrible.

The password for the Confluence virtual machine that held all the hacking tools that were stolen and leaked? That’ll be 123ABCdef. And the root login for the main DevLAN server? mysweetsummer.

It actually gets worse than that. Those passwords were shared by the entire team and posted on the group’s intranet. IRC chats published during the trial even revealed team members talking about how terrible their infosec practices were, and joked that CIA internal security would go nuts if they knew. Their justification? The intranet was restricted to members of the Operational Support Branch (OSB): the elite programming unit that makes the CIA’s hacking tools.
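Passwords like these fail even the crudest strength checks. Here's a minimal sketch of such a check; the scoring heuristic, the 60-bit threshold, and the tiny wordlist are illustrative assumptions, not any real agency's policy:

```python
import math
import string

def charset_size(pw: str) -> int:
    """Estimate the size of the character pool the password draws from."""
    size = 0
    if any(c in string.ascii_lowercase for c in pw): size += 26
    if any(c in string.ascii_uppercase for c in pw): size += 26
    if any(c in string.digits for c in pw): size += 10
    if any(c in string.punctuation for c in pw): size += len(string.punctuation)
    return size or 1

def entropy_bits(pw: str) -> float:
    """Upper bound on entropy: length * log2(pool size). True entropy is
    lower for patterned or dictionary-based passwords."""
    return len(pw) * math.log2(charset_size(pw))

# Stand-in for a real wordlist with tens of thousands of entries.
COMMON_WORDS = {"summer", "password", "admin", "sweet"}

def is_weak(pw: str, min_bits: float = 60.0) -> bool:
    """Flag passwords that are short, low-entropy, or built from common words."""
    if entropy_bits(pw) < min_bits:
        return True
    lower = pw.lower()
    return any(word in lower for word in COMMON_WORDS)

for pw in ("123ABCdef", "mysweetsummer"):
    print(pw, round(entropy_bits(pw), 1), "weak" if is_weak(pw) else "ok")
```

Note that "mysweetsummer" actually clears the naive entropy bound; length alone doesn't help when the password is assembled from dictionary words, which is why real checkers test candidates against large wordlists rather than trusting a character-count formula.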

The jury convicted Schulte of contempt and of lying to the FBI, but returned no verdict on the more serious charges; the judge declared a mistrial on those.

Posted on March 10, 2020 at 6:18 AM

Security of Health Information

The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.

That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic­—when countries struggle to manage not just the outbreak but its social, economic, and political fallout­—is especially tempting for interference.

In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.

More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.

Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts­—many of which had previously been tied to Russia­—had been seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.

As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States’ poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.

Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC’s influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports as in the past, but have several layers of software to store and transmit the data.

Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab’s servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital’s network of digital devices. One such device connected to a single hospital’s network could, in theory, be used to hack into the CDC’s entire COVID-19 database.

In practice, hacking deep into a hospital’s systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital’s network via the public Wi-Fi system. Once inside, they could move through most of the hospital’s databases and diagnostic systems. Gaining control of the hospital’s unencrypted image database, the researchers inserted malware that altered healthy patients’ CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time­—and only after being alerted that some of the CTs had been manipulated.

Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security’s BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.

Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine’s national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia’s oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.

Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country’s ability to respond.

Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information­—if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.

In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country’s digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America’s health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States’, that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.

This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.

EDITED TO ADD: On last week’s squid post, there was a long conversation about COVID-19. Many of the comments straddled the line between what are and aren’t the core topics. Yesterday I deleted a bunch for being off-topic. Then I reconsidered and republished some of what I deleted.

Going forward, comments about COVID-19 will be restricted to the security and risk implications of the virus. This includes cybersecurity, security, risk management, surveillance, and containment measures. Comments that stray off those topics will be removed. By clarifying this, I hope to keep the conversation on-topic while also allowing discussion of the security implications of current events.

Thank you for your patience and forbearance on this.

Posted on March 5, 2020 at 6:10 AM

Humble Bundle's 2020 Cybersecurity Books

For years, Humble Bundle has been selling great books using a “pay what you can afford” model. This month, they’re featuring as many as nineteen cybersecurity books for as little as $1, including four of mine. These are digital copies, all DRM-free. Part of the money goes to support the EFF or Let’s Encrypt. (The default is 15%, and you can change that.) As an EFF board member, I know that we’ve received a substantial amount from this program in previous years.

Posted on February 28, 2020 at 1:53 PM

Policy vs. Technology

Sometime around 1993 or 1994, during the first Crypto Wars, I was part of a group of cryptography experts who went to Washington to advocate for strong encryption. Matt Blaze and Ron Rivest were with me; I don’t remember who else. We met with then-Massachusetts Representative Ed Markey. (He didn’t become a senator until 2013.) Back then, he and Vermont Senator Patrick Leahy were the most knowledgeable on this issue and our biggest supporters against government backdoors. They still are.

Markey was against forcing encrypted phone providers to implement the NSA’s Clipper Chip in their devices, but wanted us to reach a compromise with the FBI regardless. This completely startled us techies, who thought having the right answer was enough. It was at that moment that I learned an important difference between technologists and policy makers. Technologists want solutions; policy makers want consensus.

Since then, I have become more immersed in policy discussions. I have spent more time with legislators, advised advocacy organizations like EFF and EPIC, and worked with policy-minded think tanks in the United States and around the world. I teach cybersecurity policy and technology at the Harvard Kennedy School of Government. My most recent two books, Data and Goliath—about surveillance—and Click Here to Kill Everybody—about IoT security—are really about the policy implications of technology.

Over that time, I have observed many other differences between technologists and policy makers—differences that we in cybersecurity need to understand if we are to translate our technological solutions into viable policy outcomes.

Technologists don’t try to consider all of the use cases of a given technology. We tend to build something for the uses we envision, and hope that others can figure out new and innovative ways to extend what we created. We love it when there is a new use for a technology that we never considered and that changes the world. And while we might be good at security around the use cases we envision, we are regularly blindsided when it comes to new uses or edge cases. (Authentication risks surrounding someone’s intimate partner are a good example.)

Policy doesn’t work that way; it’s specifically focused on use. It focuses on people and what they do. Policy makers can’t create policy around a piece of technology without understanding how it is used—how all of it is used.

Policy is often driven by exceptional events, like the FBI’s desire to break the encryption on the San Bernardino shooter’s iPhone. (The PATRIOT Act is the most egregious example I can think of.) Technologists tend to look at more general use cases, like the overall value of strong encryption to societal security. Policy tends to focus on the past, making existing systems work or correcting wrongs that have happened. It’s hard to imagine policy makers creating laws around VR systems, because they don’t yet exist in any meaningful way. Technology is inherently future focused. Technologists try to imagine better systems, or future flaws in present systems, and work to improve things.

As technologists, we iterate. It’s how we write software. It’s how we field products. We know we can’t get it right the first time, so we have developed all sorts of agile systems to deal with that fact. Policy making is often the opposite. U.S. federal laws take months or years to negotiate and pass, and after that the issue doesn’t get addressed again for a decade or more. It is much more critical to get it right the first time, because the effects of getting it wrong are long lasting. (See, for example, parts of the GDPR.) Sometimes regulatory agencies can be more agile. The courts can also iterate policy, but it’s slower.

Along similar lines, the two groups work in very different time frames. Engineers, conditioned by Moore’s law, have long thought of 18 months as the maximum time to roll out a new product, and now think in terms of continuous deployment of new features. As I said previously, policy makers tend to think in terms of multiple years to get a law or regulation in place, and then more years as the case law builds up around it so everyone knows what it really means. It’s like tortoises and hummingbirds.

Technology is inherently global. It is often developed with local sensibilities according to local laws, but it necessarily has global reach. Policy is always jurisdictional. This difference is causing all sorts of problems for the global cloud services we use every day. The providers are unable to operate their global systems in compliance with more than 200 different—and sometimes conflicting—national requirements. Policy makers are often unimpressed with claims of inability; laws are laws, they say, and if Facebook can translate its website into French for the French, it can also implement their national laws.

Technology and policy both use concepts of trust, but differently. Technologists tend to think of trust in terms of controls on behavior. We’re getting better—NIST’s recent work on trust is a good example—but we have a long way to go. For example, Google’s Trust and Safety Department does a lot of AI and ethics work largely focused on technological controls. Policy makers think of trust in more holistic societal terms: trust in institutions, trust as the ability not to worry about adverse outcomes, consumer confidence. This dichotomy explains how techies can claim bitcoin is trusted because of the strong cryptography, but policy makers can’t imagine calling a system trustworthy when you lose all your money if you forget your encryption key.

Policy is how society mediates how individuals interact with society. Technology has the potential to change how individuals interact with society. The conflict between these two causes considerable friction, as technologists want policy makers to get out of the way and not stifle innovation, and policy makers want technologists to stop moving fast and breaking so many things.

Finally, techies know that code is law­—that the restrictions and limitations of a technology are more fundamental than any human-created legal anything. Policy makers know that law is law, and tech is just tech. We can see this in the tension between applying existing law to new technologies and creating new law specifically for those new technologies.

Yes, these are all generalizations and there are exceptions. It’s also not all either/or. Great technologists and policy makers can see the other perspectives. The best policy makers know that for all their work toward consensus, they won’t make progress by redefining pi as three. Thoughtful technologists look beyond the immediate user demands to the ways attackers might abuse their systems, and design against those adversaries as well. These aren’t two alien species engaging in first contact, but cohorts who can each learn and borrow tools from the other. Too often, though, neither party tries.

In October, I attended the first ACM Symposium on Computer Science and the Law. Google counsel Brian Carver talked about his experience with the few computer science grad students who would attend his Intellectual Property and Cyberlaw classes every year at UC Berkeley. One of the first things he would do was give the students two different cases to read. The cases had nearly identical facts, and the judges who’d ruled on them came to exactly opposite conclusions. The law students took this in stride; it’s the way the legal system works when it’s wrestling with a new concept or idea. But it shook the computer science students. They were appalled that there wasn’t a single correct answer.

But that’s not how law works, and that’s not how policy works. As the technologies we’re creating become more central to society, and as we in technology continue to move into the public sphere and become part of the increasingly important policy debates, it is essential that we learn these lessons. Gone are the days when we were creating purely technical systems and our work ended at the keyboard and screen. Now we’re building complex socio-technical systems that are literally creating a new world. And while it’s easy to dismiss policy makers as doing it wrong, it’s important to understand that they’re not. Policy making has been around a lot longer than the Internet or computers or any technology. And the essential challenges of this century will require both groups to work together.

This essay previously appeared in IEEE Security & Privacy.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on February 21, 2020 at 5:54 AM

Lousy IoT Security

DTEN makes smart screens and whiteboards for videoconferencing systems. Forescout found that their security is terrible:

In total, our researchers discovered five vulnerabilities of four different kinds:

  • Data exposure: PDF files of shared whiteboards (e.g. meeting notes) and other sensitive files (e.g., OTA—over-the-air updates) were stored in a publicly accessible AWS S3 bucket that also lacked TLS encryption (CVE-2019-16270, CVE-2019-16274).
  • Unauthenticated web server: a web server running Android OS on port 8080 discloses all whiteboards stored locally on the device (CVE-2019-16271).
  • Arbitrary code execution: unauthenticated root shell access through Android Debug Bridge (ADB) leads to arbitrary code execution and system administration (CVE-2019-16273).
  • Access to Factory Settings: provides full administrative access and thus a covert ability to capture Windows host data from Android, including the Zoom meeting content (audio, video, screenshare) (CVE-2019-16272).

These aren’t subtle vulnerabilities. These are stupid design decisions made by engineers who had no idea how to create a secure system. And this, in a nutshell, is the problem with the Internet of Things.
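The exposed-bucket flaw is a textbook insecure-direct-object-reference pattern: object keys are sequential numbers, so changing the number in your own URL yields someone else’s file. A minimal sketch of the problem and the standard mitigation — the URL scheme here is hypothetical, not DTEN’s actual layout:

```python
import secrets

# Vulnerable pattern: sequential numeric keys make every object guessable.
def sequential_url(doc_id: int) -> str:
    return f"https://storage.example.com/whiteboards/{doc_id}.pdf"

# An attacker who sees their own URL can simply enumerate everyone else's.
guessed = [sequential_url(i) for i in range(1000, 1003)]

# Mitigation: unguessable per-object tokens. (This only raises the bar;
# the real fix is server-side authentication and authorization on every
# object request, plus TLS on the transport.)
def token_url() -> str:
    return f"https://storage.example.com/whiteboards/{secrets.token_urlsafe(32)}.pdf"
```

The design lesson is that object names are not secrets and must never substitute for access control — exactly the mistake the Forescout researchers found.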

From a Wired article:

One issue that jumped out at the researchers: The DTEN system stored notes and annotations written through the whiteboard feature in an Amazon Web Services bucket that was exposed on the open internet. This means that customers could have accessed PDFs of each other’s slides, screenshots, and notes just by changing the numbers in the URL they used to view their own. Or anyone could have remotely nabbed the entire trove of customers’ data. Additionally, DTEN hadn’t set up HTTPS web encryption on the customer web server to protect connections from prying eyes. DTEN fixed both of these issues on October 7. A few weeks later, the company also fixed a similar whiteboard PDF access issue that would have allowed anyone on a company’s network to access all of its stored whiteboard data.

[…]

The researchers also discovered two ways that an attacker on the same network as DTEN devices could manipulate the video conferencing units to monitor all video and audio feeds and, in one case, to take full control. DTEN hardware runs Android primarily, but uses Microsoft Windows for Zoom. The researchers found that they can access a development tool known as “Android Debug Bridge,” either wirelessly or through USB ports or ethernet, to take over a unit. The other bug also relates to exposed Android factory settings. The researchers note that attempting to implement both operating systems creates more opportunities for misconfigurations and exposure. DTEN says that it will push patches for both bugs by the end of the year.

Boing Boing article.

Posted on December 19, 2019 at 6:31 AM

