Entries Tagged "national security policy"

US Government Exposes North Korean Malware

US Cyber Command has uploaded North Korean malware samples to the VirusTotal aggregation repository, adding to the malware samples it uploaded in February.

The first of the new malware variants, COPPERHEDGE, is described as a Remote Access Tool (RAT) “used by advanced persistent threat (APT) cyber actors in the targeting of cryptocurrency exchanges and related entities.”

This RAT is known for its capability to help the threat actors perform system reconnaissance, run arbitrary commands on compromised systems, and exfiltrate stolen data.

TAINTEDSCRIBE is a trojan that acts as a full-featured beaconing implant with command modules and is designed to disguise itself as Microsoft’s Narrator.

The trojan “downloads its command execution module from a command and control (C2) server and then has the capability to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration.”

Last but not least, PEBBLEDASH is yet another North Korean trojan acting like a full-featured beaconing implant and used by North Korean-backed hacking groups “to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration.”

It’s interesting to see the US government take a more aggressive stance on foreign malware. Making samples public, so all the antivirus companies can add them to their scanning systems, is a big deal — and probably required some complicated declassification maneuvering.
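Once the samples are on VirusTotal, anyone can query the aggregated scan results. As a hedged illustration — the API key and hash below are placeholders, and only VirusTotal’s public v3 `files` endpoint is assumed — a minimal Python sketch for checking a published sample hash might look like:

```python
# Hedged sketch: looking up a malware sample on VirusTotal's v3 API.
# The API key and SHA-256 hash are placeholders, not values from the
# government reports.
import urllib.request

VT_URL = "https://www.virustotal.com/api/v3/files/{}"

def build_request(sha256_hash: str, api_key: str) -> urllib.request.Request:
    """Build the GET request for a file report; the caller performs the I/O."""
    return urllib.request.Request(
        VT_URL.format(sha256_hash),
        headers={"x-apikey": api_key},
    )

def detection_count(report: dict) -> tuple[int, int]:
    """Return (malicious verdicts, total engines) from a v3 file report."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0), sum(stats.values())
```

A caller would pass the built request to `urllib.request.urlopen`, parse the JSON body, and feed the resulting dict to `detection_count` to see how many engines now flag the sample.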

Me, I like reading the codenames.

Lots more on the US-CERT website.

Posted on May 14, 2020 at 6:29 AM

Emergency Surveillance During COVID-19 Crisis

Israel is using emergency surveillance powers to track people who may have COVID-19, joining China and Iran in using mass surveillance in this way. I believe pressure will increase to leverage existing corporate surveillance infrastructure for these purposes in the US and other countries. With that in mind, the EFF has some good thinking on how to balance public safety with civil liberties:

Thus, any data collection and digital monitoring of potential carriers of COVID-19 should take into consideration and commit to these principles:

  • Privacy intrusions must be necessary and proportionate. A program that collects, en masse, identifiable information about people must be scientifically justified and deemed necessary by public health experts for the purpose of containment. And that data processing must be proportionate to the need. For example, maintenance of 10 years of travel history of all people would not be proportionate to the need to contain a disease like COVID-19, which has a two-week incubation period.
  • Data collection based on science, not bias. Given the global scope of communicable diseases, there is historical precedent for improper government containment efforts driven by bias based on nationality, ethnicity, religion, and race­ — rather than facts about a particular individual’s actual likelihood of contracting the virus, such as their travel history or contact with potentially infected people. Today, we must ensure that any automated data systems used to contain COVID-19 do not erroneously identify members of specific demographic groups as particularly susceptible to infection.
  • Expiration. As in other major emergencies in the past, there is a hazard that the data surveillance infrastructure we build to contain COVID-19 may long outlive the crisis it was intended to address. The government and its corporate cooperators must roll back any invasive programs created in the name of public health after the crisis has been contained.
  • Transparency. Any government use of “big data” to track virus spread must be clearly and quickly explained to the public. This includes publication of detailed information about the information being gathered, the retention period for the information, the tools used to process that information, the ways these tools guide public health decisions, and whether these tools have had any positive or negative outcomes.
  • Due Process. If the government seeks to limit a person’s rights based on this “big data” surveillance (for example, to quarantine them based on the system’s conclusions about their relationships or travel), then the person must have the opportunity to timely and fairly challenge these conclusions and limits.

Posted on March 20, 2020 at 6:25 AM

Security of Health Information

The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.

That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic­ — when countries struggle to manage not just the outbreak but its social, economic, and political fallout­ — is especially tempting for interference.

In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.

More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.

Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts­ — many of which had previously been tied to Russia­ — were seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.

As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States’ poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.

Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC’s influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports as they did in the past; instead, several layers of software store and transmit the data.

Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab’s servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital’s network of digital devices. One such device connected to a single hospital’s network could, in theory, be used to hack into the CDC’s entire COVID-19 database.
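One standard defense against tampering with reports in transit is message authentication. Nothing below reflects any actual CDC or lab software — the shared key and record format are illustrative — but a minimal sketch of how a lab could make its reports tamper-evident with a shared-key HMAC might look like:

```python
# Hedged sketch: making lab-to-agency reports tamper-evident with
# HMAC-SHA256. The shared key and record format are illustrative,
# not from any real reporting system.
import hashlib
import hmac
import json

def sign_report(record: dict, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the canonical JSON form of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_report(record: dict, tag: str, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_report(record, key), tag)
```

An attacker who alters a case count in transit, but does not hold the shared key, cannot produce a matching tag, so the receiving agency can detect the change before the data enters its database.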

In practice, hacking deep into a hospital’s systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital’s network via the public Wi-Fi system. Once inside, they could move through most of the hospital’s databases and diagnostic systems. Gaining control of the hospital’s unencrypted image database, the researchers inserted malware that altered healthy patients’ CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time­ — and only after being alerted that some of the CTs had been manipulated.

Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security’s BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.

Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine’s national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia’s oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.

Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country’s ability to respond.

Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information­ — if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.
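Automated sanity checks can help surface the kind of anomaly that would otherwise prompt that old-fashioned phone call. As a purely illustrative sketch — the window size and threshold are arbitrary, not from any real surveillance system — a simple statistical check that flags daily case counts deviating sharply from the recent trend might look like:

```python
# Hedged sketch: flag daily case counts that deviate sharply from the
# recent trend. Window size and z-threshold are illustrative only.
from statistics import mean, stdev

def flag_outliers(counts: list[int], window: int = 7, z: float = 3.0) -> list[int]:
    """Return indices whose count deviates more than z standard
    deviations from the mean of the preceding `window` days."""
    flagged = []
    for i in range(window, len(counts)):
        recent = counts[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(counts[i] - mu) > z * sigma:
            flagged.append(i)
    return flagged
```

A flagged index would not prove tampering — reporting backlogs cause real spikes — but it would tell an overloaded official which lab to call first.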

In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country’s digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America’s health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States’, that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.

This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.

EDITED TO ADD: On last week’s squid post, there was a big conversation about COVID-19. Many of the comments straddled the line between what are and aren’t the blog’s core topics. Yesterday I deleted a bunch for being off-topic. Then I reconsidered and republished some of what I deleted.

Going forward, comments about COVID-19 will be restricted to the security and risk implications of the virus. This includes cybersecurity, security, risk management, surveillance, and containment measures. Comments that stray off those topics will be removed. By clarifying this, I hope to keep the conversation on-topic while also allowing discussion of the security implications of current events.

Thank you for your patience and forbearance on this.

Posted on March 5, 2020 at 6:10 AM

US Department of Interior Grounding All Drones

The Department of Interior is grounding all non-emergency drones due to security concerns:

The order comes amid a spate of warnings and bans at multiple government agencies, including the Department of Defense, about possible vulnerabilities in Chinese-made drone systems that could be allowing Beijing to conduct espionage. The Army banned the use of Chinese-made DJI drones three years ago following warnings from the Navy about “highly vulnerable” drone systems.

One memo, drafted by the Navy & Marine Corps Small Tactical Unmanned Aircraft Systems Program Manager, warned that “images, video and flight records could be uploaded to unsecured servers in other countries via live streaming.” The Navy has also warned that adversaries may view video and metadata from drone systems even though the air vehicle is encrypted. The Department of Homeland Security previously warned the private sector that their data may be stolen if they use commercial drone systems made in China.

I’m actually not that worried about this risk. Data moving across the Internet is obvious — it’s too easy for a country that tries this to get caught. I am much more worried about remote kill switches in the equipment.

Posted on January 31, 2020 at 6:46 AM

Attacker Causes Epileptic Seizure over the Internet

This isn’t a first, but I think it will be the first conviction:

The GIF set off a highly unusual court battle that is expected to equip those in similar circumstances with a new tool for battling threatening trolls and cyberbullies. On Monday, the man who sent Eichenwald the moving image, John Rayne Rivello, was set to appear in a Dallas County district court. A last-minute rescheduling delayed the proceeding until Jan. 31, but Rivello is still expected to plead guilty to aggravated assault. And he may be the first of many.

The Epilepsy Foundation announced on Monday it lodged a sweeping slate of criminal complaints against a legion of copycats who targeted people with epilepsy and sent them an onslaught of strobe GIFs — a frightening phenomenon that unfolded in a short period of time during the organization’s marking of National Epilepsy Awareness Month in November.

[…]

Rivello’s supporters — among them, neo-Nazis and white nationalists, including Richard Spencer — have also argued that the issue is about freedom of speech. But in an amicus brief to the criminal case, the First Amendment Clinic at Duke University School of Law argued Rivello’s actions were not constitutionally protected.

“A brawler who tattoos a message onto his knuckles does not throw every punch with the weight of First Amendment protection behind him,” the brief stated. “Conduct like this does not constitute speech, nor should it. A deliberate attempt to cause physical injury to someone does not come close to the expression which the First Amendment is designed to protect.”

Another article.

EDITED TO ADD(12/19): More articles.

EDITED TO ADD (1/14): There was a similar case in Germany in 2012 — that attacker was convicted.

Posted on December 18, 2019 at 5:34 AM

Scaring People into Supporting Backdoors

Back in 1998, Tim May warned us of the “Four Horsemen of the Infocalypse”: “terrorists, pedophiles, drug dealers, and money launderers.” I tended to cast it slightly differently. This is me from 2005:

Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, kidnappers, and child pornographers. Seems like you can scare any public into allowing the government to do anything with those four.

Which particular horseman is in vogue depends on time and circumstance. Since the terrorist attacks of 9/11, the US government has been pushing the terrorist scare story. Recently, it seems to have switched to pedophiles and child exploitation. It began in September, with a long New York Times story on child sex abuse, which included this dig at encryption:

And when tech companies cooperate fully, encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Reports to the authorities typically contain more than one image, and last year encompassed the record 45 million photos and videos, according to the National Center for Missing and Exploited Children.

(That’s wrong, by the way. Facebook Messenger already has an encrypted option. It’s just not turned on by default, like it is in WhatsApp.)

That was followed up by a conference by the US Department of Justice: “Lawless Spaces: Warrant Proof Encryption and its Impact on Child Exploitation Cases.” US Attorney General William Barr gave a speech on the subject. Then came an open letter to Facebook from Barr and others from the UK and Australia, using “protecting children” as the basis for their demand that the company not implement strong end-to-end encryption. (I signed on to another open letter in response.) Then, the FBI tried to get Interpol to publish a statement denouncing end-to-end encryption.

This week, the Senate Judiciary Committee held a hearing on backdoors: “Encryption and Lawful Access: Evaluating Benefits and Risks to Public Safety and Privacy.” Video, and written testimonies, are available at the link. Erik Neuenschwander from Apple was there to support strong encryption, but the other witnesses were all against it. New York District Attorney Cyrus Vance was true to form:

In fact, we were never able to view the contents of his phone because of this gift to sex traffickers that came, not from God, but from Apple.

It was a disturbing hearing. The Senators asked technical questions to people who couldn’t answer them. The result was that an adjunct law professor was able to frame the issue of strong encryption as an externality caused by corporate liability dumping, and another example of Silicon Valley’s anti-regulation stance.

Let me be clear. None of us who favor strong encryption is saying that child exploitation isn’t a serious crime, or a worldwide problem. We’re not saying that about kidnapping, international drug cartels, money laundering, or terrorism. We are saying three things. One, that strong encryption is necessary for personal and national security. Two, that weakening encryption does more harm than good. And three, law enforcement has other avenues for criminal investigation besides eavesdropping on communications and stored devices. This is one example, where people unraveled a dark-web website and arrested hundreds by analyzing Bitcoin transactions. This is another, where police arrested members of a WhatsApp group.

So let’s have reasoned policy debates about encryption — debates that are informed by technology. And let’s stop it with the scare stories.

EDITED TO ADD (12/13): The DoD just said that strong encryption is essential for national security.

All DoD issued unclassified mobile devices are required to be password protected using strong passwords. The Department also requires that data-in-transit, on DoD issued mobile devices, be encrypted (e.g. VPN) to protect DoD information and resources. The importance of strong encryption and VPNs for our mobile workforce is imperative. Last October, the Department outlined its layered cybersecurity approach to protect DoD information and resources, including service men and women, when using mobile communications capabilities.

[…]

As the use of mobile devices continues to expand, it is imperative that innovative security techniques, such as advanced encryption algorithms, are constantly maintained and improved to protect DoD information and resources. The Department believes maintaining a domestic climate for state of the art security and encryption is critical to the protection of our national security.

Posted on December 12, 2019 at 6:11 AM

Reforming CDA 230

There’s a serious debate on reforming Section 230 of the Communications Decency Act. I am in the process of figuring out what I believe, and this is more a place to put resources and listen to people’s comments.

The EFF has written extensively on why it is so important and why dismantling it would be catastrophic for the Internet. Danielle Citron disagrees. (There’s also this law journal article by Citron and Ben Wittes.) Sarah Jeong’s op-ed. Another op-ed. Another paper.

Here are some good news articles.

Reading all of this, I am reminded of this decade-old quote by Dan Geer. He’s addressing Internet service providers:

Hello, Uncle Sam here.

You can charge whatever you like based on the contents of what you are carrying, but you are responsible for that content if it is illegal; inspecting brings with it a responsibility for what you learn.

-or-

You can enjoy common carrier protections at all times, but you can neither inspect nor act on the contents of what you are carrying and can only charge for carriage itself. Bits are bits.

Choose wisely. No refunds or exchanges at this window.

We can revise this choice for the social-media age:

Hi Facebook/Twitter/YouTube/everyone else:

You can build a communications business based on inspecting user content and presenting it as you want, but that business model also conveys responsibility for that content.

-or-

You can be a communications service and enjoy the protections of CDA 230, in which case you cannot inspect or control the content you deliver.

Facebook would be an example of the former. WhatsApp would be an example of the latter.

I am honestly undecided about all of this. I want CDA 230 to protect things like the commenting section of this blog. But I don’t think it should protect dating apps when they are used as a conduit for abuse. And I really don’t want society to pay the cost for all the externalities inherent in Facebook’s business model.

Posted on December 10, 2019 at 6:16 AM
