Four Microsoft Exchange Zero-Days Exploited by China
Microsoft has issued an emergency Microsoft Exchange patch to fix four zero-day vulnerabilities currently being exploited by China.
EDITED TO ADD (3/12): Exchange Online is not affected.
Researchers found, and Microsoft has patched, a vulnerability in Windows Defender that has been around for twelve years. There is no evidence that anyone has used the vulnerability during that time.
The flaw, discovered by researchers at the security firm SentinelOne, showed up in a driver that Windows Defender—renamed Microsoft Defender last year—uses to delete the invasive files and infrastructure that malware can create. When the driver removes a malicious file, it replaces it with a new, benign one as a sort of placeholder during remediation. But the researchers discovered that the system doesn’t specifically verify that new file. As a result, an attacker could insert strategic system links that direct the driver to overwrite the wrong file or even run malicious code.
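The root problem is a classic link-following bug: the remediation code trusts a path that an attacker can redirect. Here is a minimal sketch of the missing verification, written in POSIX-style Python purely for illustration (the real code is a Windows kernel-mode driver; the function name and behavior here are hypothetical, not Microsoft's implementation):

```python
import os
import stat

def write_placeholder(path: str, contents: bytes = b"") -> None:
    """Replace a removed malicious file with a benign placeholder,
    refusing to follow links so the write can't be redirected."""
    try:
        st = os.lstat(path)  # lstat() does not follow symlinks
    except FileNotFoundError:
        st = None
    if st is not None and not stat.S_ISREG(st.st_mode):
        raise RuntimeError(f"refusing to overwrite non-regular file: {path}")

    # O_NOFOLLOW makes the open fail if the final path component is a
    # symlink, closing the race between the check above and the write.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_NOFOLLOW, 0o600)
    try:
        os.write(fd, contents)
    finally:
        os.close(fd)
```

Without both the pre-check and the no-follow open, an attacker who controls the directory can point the placeholder path at any file the driver is privileged to write.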
It isn’t unusual that vulnerabilities lie around for this long. They can’t be fixed until someone finds them, and people aren’t always looking.
This report is six months old, and I don’t know anything about the organization that produced it, but it has some alarming data about router security.
Conclusion: Our analysis showed that Linux is the most used OS, running on more than 90% of the devices. However, many routers are powered by very old versions of Linux. Most devices still run a 2.6 Linux kernel, which has not been maintained for many years. This leads to a high number of critical and high-severity CVEs affecting these devices.
Since Linux is the most used OS, exploit mitigation techniques could be enabled very easily. However, they are rarely used by most vendors, with the exception of the NX feature.
A published private key provides no security at all. Nonetheless, all but one vendor distribute several private keys in almost all firmware images.
Mirai used hard-coded login credentials to infect thousands of embedded devices in recent years. However, hard-coded credentials can be found in many of the devices, and some of them are well known or at least easily crackable.
However, we can say for sure that the vendors prioritize security differently. AVM does a better job than the other vendors regarding most aspects. ASUS and Netgear do a better job in some aspects than D-Link, Linksys, TP-Link, and Zyxel.
Additionally, our evaluation showed that large-scale automated security analysis of embedded devices is possible today using just open-source software. To sum up, our analysis shows that there is no router without flaws, and there is no vendor who does a perfect job regarding all security aspects. Much more effort is needed to make home routers as secure as current desktop or server systems.
One comment on the report:
One-third ship with Linux kernel version 2.6.36, which was released in October 2010. You can walk into a store today and buy a brand-new router powered by software that's almost 10 years out of date! This outdated version of the Linux kernel has 233 known security vulnerabilities registered in the Common Vulnerabilities and Exposures (CVE) database. The average router contains 26 critically rated security vulnerabilities, according to the study.
We know the reasons for this. Most routers are designed offshore, by third parties, and then private labeled and sold by the vendors you’ve heard of. Engineering teams come together, design and build the router, and then disperse. There’s often no one around to write patches, and most of the time router firmware isn’t even patchable. The way to update your home router is to throw it away and buy a new one.
And this paper demonstrates that even the new ones aren’t likely to be secure.
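The report's observation that this kind of analysis can be automated entirely with open-source tools is worth dwelling on. As a rough illustration of the idea (assuming the firmware image has already been unpacked with a tool like binwalk; the directory name and patterns below are hypothetical, not taken from the report):

```python
import re
from pathlib import Path

# Illustrative signatures only -- a real pipeline would use far more of them.
PRIVATE_KEY_MARKER = b"-----BEGIN RSA PRIVATE KEY-----"
KERNEL_VERSION_RE = re.compile(rb"Linux version (\d+\.\d+\.\d+)")

def scan_firmware(root: str) -> None:
    """Walk an unpacked firmware tree, flagging embedded private keys
    and reporting any Linux kernel version string found."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        if PRIVATE_KEY_MARKER in data:
            print(f"[!] embedded private key: {path}")
        m = KERNEL_VERSION_RE.search(data)
        if m:
            print(f"[i] kernel version {m.group(1).decode()} in {path}")

if __name__ == "__main__":
    scan_firmware("unpacked_firmware/")  # hypothetical directory name
```

Nothing here is exotic; the point is that checking thousands of firmware images for decade-old kernels and baked-in keys is a scripting exercise, not a research project.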
At the virtual Enigma Conference, Google Project Zero’s Maddie Stone gave a talk about zero-day exploits in the wild. In it, she talked about how often vendors fix vulnerabilities only to have the attackers tweak their exploits to work again. From an MIT Technology Review article:
Soon after they were spotted, the researchers saw one exploit being used in the wild. Microsoft issued a patch and fixed the flaw, sort of. In September 2019, another similar vulnerability was found being exploited by the same hacking group.
More discoveries in November 2019, January 2020, and April 2020 added up to at least five zero-day vulnerabilities being exploited from the same bug class in short order. Microsoft issued multiple security updates: some failed to actually fix the vulnerability being targeted, while others required the hackers to change only a line or two of their exploit code to make it work again.
[…]
“What we saw cuts across the industry: Incomplete patches are making it easier for attackers to exploit users with zero-days,” Stone said on Tuesday at the security conference Enigma. “We’re not requiring attackers to come up with all new bug classes, develop brand new exploitation, look at code that has never been researched before. We’re allowing the reuse of lots of different vulnerabilities that we previously knew about.”
[…]
Why aren’t they being fixed? Most of the security teams working at software companies have limited time and resources, she suggests—and if their priorities and incentives are flawed, they only check that they’ve fixed the very specific vulnerability in front of them instead of addressing the bigger problems at the root of many vulnerabilities.
Another article on the talk.
This is an important insight. It’s not enough to patch existing vulnerabilities. We need to make it harder for attackers to find new vulnerabilities to exploit. Closing entire families of vulnerabilities, rather than individual vulnerabilities one at a time, is a good way to do that.
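To make the distinction concrete, here is a deliberately simplified, hypothetical illustration (not one of Stone's actual examples) of a patch that blocks one exploit versus a fix that closes the bug class:

```python
from pathlib import Path

BASE_DIR = Path("/srv/files")  # hypothetical document root

def read_file_narrow_patch(user_path: str) -> bytes:
    """The 'incomplete patch': blocks the one string seen in the wild.
    An attacker only needs a small tweak (an absolute path, an encoded
    separator) to make essentially the same exploit work again."""
    if "../" in user_path:
        raise ValueError("rejected")
    return (BASE_DIR / user_path).read_bytes()

def read_file_root_cause_fix(user_path: str) -> bytes:
    """The root-cause fix: resolve the final path and verify it stays
    inside the intended directory, which closes the whole bug class."""
    resolved = (BASE_DIR / user_path).resolve()
    if not resolved.is_relative_to(BASE_DIR.resolve()):
        raise ValueError("rejected")
    return resolved.read_bytes()
```

The first function fixes yesterday's exploit; the second fixes the reason the exploit existed. Stone's data suggests far too many shipped patches look like the first.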
This is bad:
More than 100,000 Zyxel firewalls, VPN gateways, and access point controllers contain a hardcoded admin-level backdoor account that can grant attackers root access to devices via either the SSH interface or the web administration panel.
[…]
Installing patches removes the backdoor account, which, according to Eye Control researchers, uses the “zyfwp” username and the “PrOw!aN_fXp” password.
“The plaintext password was visible in one of the binaries on the system,” the Dutch researchers said in a report published before the Christmas 2020 holiday.
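Finding credentials like these rarely requires anything more sophisticated than pulling printable strings out of a firmware binary and eyeballing the suspicious ones. A minimal sketch of that idea (the file name and heuristics are illustrative only):

```python
import re
import sys

def printable_strings(data: bytes, min_len: int = 6):
    """Yield runs of printable ASCII, much like the Unix `strings` tool."""
    for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield m.group().decode("ascii")

if __name__ == "__main__":
    # Usage: python strings_scan.py <firmware_binary>
    with open(sys.argv[1], "rb") as f:
        for s in printable_strings(f.read()):
            # Crude heuristic: flag anything that looks credential-like.
            if any(k in s.lower() for k in ("pass", "pwd", "admin", "zyfwp")):
                print(s)
```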
This is a scarily impressive vulnerability:
Earlier this year, Apple patched one of the most breathtaking iPhone vulnerabilities ever: a memory corruption bug in the iOS kernel that gave attackers remote access to the entire device—over Wi-Fi, with no user interaction required at all. Oh, and exploits were wormable—meaning radio-proximity exploits could spread from one nearby device to another, once again, with no user interaction needed.
[…]
Beer’s attack worked by exploiting a buffer overflow bug in a driver for AWDL, an Apple-proprietary mesh networking protocol that makes things like AirDrop work. Because drivers reside in the kernel—one of the most privileged parts of any operating system—the AWDL flaw had the potential for serious hacks. And because AWDL parses Wi-Fi packets, exploits can be transmitted over the air, with no indication that anything is amiss.
[…]
Beer developed several different exploits. The most advanced one installs an implant that has full access to the user’s personal data, including emails, photos, messages, and passwords and crypto keys stored in the keychain. The attack uses a laptop, a Raspberry Pi, and some off-the-shelf Wi-Fi adapters. It takes about two minutes to install the prototype implant, but Beer said that with more work a better written exploit could deliver it in a “handful of seconds.” Exploits work only on devices that are within Wi-Fi range of the attacker.
There is no evidence that this vulnerability was ever used in the wild.
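The underlying flaw is a familiar pattern: kernel code parsing attacker-controlled radio frames and trusting a length field. The real bug lives in C inside the AWDL driver; the following hypothetical TLV parser (not AWDL's actual wire format) just illustrates the bounds check whose absence turns a malformed packet into memory corruption:

```python
import struct

def parse_tlvs(packet: bytes):
    """Parse a sequence of type-length-value records from a radio frame.

    The attacker controls `length`. In C, copying `length` bytes into a
    fixed-size buffer without the check below is exactly the kind of
    kernel memory corruption described above.
    """
    records = []
    offset = 0
    while offset + 2 <= len(packet):
        tlv_type, length = struct.unpack_from("BB", packet, offset)
        offset += 2
        # The essential check: never trust a length field further than
        # the data that actually arrived.
        if offset + length > len(packet):
            raise ValueError("truncated or malformed TLV")
        records.append((tlv_type, packet[offset:offset + length]))
        offset += length
    return records
```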
EDITED TO ADD: Slashdot thread.
There’s a new unpatched Bluetooth vulnerability:
The issue is with a protocol called Cross-Transport Key Derivation (or CTKD, for short). When, say, an iPhone is getting ready to pair up with a Bluetooth-powered device, CTKD’s role is to set up two separate authentication keys for that phone: one for a “Bluetooth Low Energy” device, and one for a device using what’s known as the “Basic Rate/Enhanced Data Rate” standard. Different devices require different amounts of data—and battery power—from a phone. Being able to toggle between the standards needed for Bluetooth devices that take a ton of data (like a Chromecast), and those that require a bit less (like a smartwatch) is more efficient. Incidentally, it might also be less secure.
According to the researchers, if a phone supports both of those standards but doesn’t require some sort of authentication or permission on the user’s end, a hackery sort who’s within Bluetooth range can use its CTKD connection to derive its own competing key. With that connection, according to the researchers, this sort of ersatz authentication can also allow bad actors to weaken the encryption that these keys use in the first place—which can open its owner up to more attacks further down the road, or perform “man in the middle” style attacks that snoop on unprotected data being sent by the phone’s apps and services.
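Stripped of the Bluetooth specifics, the problem is a key store that lets a key derived over one transport silently replace the key already trusted on the other. A toy model of the downgrade, and of the kind of policy check the eventual fixes are expected to enforce (this is not the actual specification logic; the class and field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class PairingKey:
    value: bytes
    authenticated: bool   # was the user involved (PIN, confirmation)?
    strength: int         # effective key strength in bytes

class KeyStore:
    """Toy model of per-peer Bluetooth keys shared across BR/EDR and LE."""

    def __init__(self) -> None:
        self.keys: dict[str, PairingKey] = {}

    def store_via_ctkd(self, peer: str, new_key: PairingKey) -> None:
        # The BLURtooth-style mistake is overwriting unconditionally.
        # The check below refuses to replace an authenticated or stronger
        # key with a weaker, unauthenticated one derived cross-transport.
        existing = self.keys.get(peer)
        if existing and ((existing.authenticated and not new_key.authenticated)
                         or new_key.strength < existing.strength):
            raise PermissionError("refusing cross-transport key downgrade")
        self.keys[peer] = new_key
```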
Another article:
Patches are not immediately available at the time of writing. The only way to protect against BLURtooth attacks is to control the environment in which Bluetooth devices are paired, in order to prevent man-in-the-middle attacks, or pairings with rogue devices carried out via social engineering (tricking the human operator).
However, patches are expected to be available at some point. When they arrive, they will most likely be integrated as firmware or operating system updates for Bluetooth-capable devices.
The timeline for these updates is, for the moment, unclear, as device vendors and OS makers usually work on different schedules, and some may not prioritize security patches as much as others. The number of vulnerable devices is also unclear and hard to quantify.
Many Bluetooth devices can’t be patched.
Final note: this seems to be another example of simultaneous discovery:
According to the Bluetooth SIG, the BLURtooth attack was discovered independently by two groups of academics from the École Polytechnique Fédérale de Lausanne (EPFL) and Purdue University.
Last year, ZecOps discovered two iPhone zero-day exploits. They will be patched in the next iOS release:
Avraham declined to disclose many details about who the targets were, and did not say whether they lost any data as a result of the attacks, but said “we were a bit surprised about who was targeted.” He said some of the targets were an executive from a telephone carrier in Japan, a “VIP” from Germany, managed security service providers from Saudi Arabia and Israel, people who work for a Fortune 500 company in North America, and an executive from a Swiss company.
[…]
On the other hand, this is not as polished a hack as others, as it relies on sending an oversized email, which may get blocked by certain email providers. Moreover, Avraham said it only works on the default Apple Mail app, and not on Gmail or Outlook, for example.
There’s a vulnerability in Wi-Fi hardware that breaks the encryption:
The vulnerability exists in Wi-Fi chips made by Cypress Semiconductor and Broadcom, the latter of which sold its IoT Wi-Fi business to Cypress in 2016. The affected devices include iPhones, iPads, Macs, Amazon Echos and Kindles, Android devices, and Wi-Fi routers from Asus and Huawei, as well as the Raspberry Pi 3. Eset, the security company that discovered the vulnerability, said the flaw primarily affects Cypress’ and Broadcom’s FullMAC WLAN chips, which are used in billions of devices. Eset has named the vulnerability Kr00k, and it is tracked as CVE-2019-15126.
Manufacturers have made patches available for most or all of the affected devices, but it’s not clear how many devices have installed the patches. Of greatest concern are vulnerable wireless routers, which often go unpatched indefinitely.
That’s the real problem. Many of these devices won’t get patched—ever.
All of life is based on the coordinated action of genetic parts (genes and their controlling sequences) found in the genomes (the complete DNA sequence) of organisms.
Genes and genomes are based on code—just like the digital language of computers. But instead of zeros and ones, four DNA letters—A, C, T, G—encode all of life. (Life is messy, and there are actually all sorts of edge cases, but ignore that for now.) If you have the sequence that encodes an organism, in theory, you could recreate it. If you can write new working code, you can alter an existing organism or create a novel one.
If this sounds to you a lot like software coding, you’re right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we’re dealing with molecules—and sometimes actual forms of life—the risks can be much greater.
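For readers who want the code analogy made literal: just as a processor maps instructions to operations, the cell maps DNA triplets (codons) to amino acids. A toy sketch using a few entries of the standard codon table, heavily simplified since real gene expression involves far more machinery:

```python
# A tiny fragment of the standard codon table: DNA triplets -> amino acids.
CODON_TABLE = {
    "ATG": "Met",  # also the usual "start" signal
    "TGG": "Trp",
    "GGC": "Gly",
    "AAA": "Lys",
    "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a DNA string three letters at a time and 'execute' it into a
    protein sequence, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGGGCAAATGGTAA"))  # ['Met', 'Gly', 'Lys', 'Trp']
```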
Imagine a biological engineer trying to increase the expression of a gene that maintains normal gene function in blood cells. Even though it’s a relatively simple operation by today’s standards, it’ll almost certainly take multiple tries to get it right. Were this computer code, the only damage those failed tries would do is to crash the computer they’re running on. With a biological system, the code could instead increase the likelihood of multiple types of leukemias and wipe out cells important to the patient’s immune system.
We have known the mechanics of DNA for some 60-plus years. The field of modern biotechnology began in 1972 when Paul Berg joined one virus gene to another and produced the first “recombinant” virus. Synthetic biology arose in the early 2000s when biologists adopted the mindset of engineers; instead of moving single genes around, they designed complex genetic circuits.
In 2010, Craig Venter and his colleagues recreated the genome of a simple bacterium. More recently, researchers at the Medical Research Council Laboratory of Molecular Biology in Britain created a new, more streamlined version of E. coli. In both cases, the researchers created what could arguably be called new forms of life.
This is the new bioengineering, and it will only get more powerful. Today you can write DNA code in the same way a computer programmer writes computer code. Then you can use a DNA synthesizer or order DNA from a commercial vendor, and then use precision editing tools such as CRISPR to “run” it in an already existing organism, from a virus to a wheat plant to a person.
In the future, it may be possible to build an entire complex organism such as a dog or cat, or recreate an extinct mammoth (currently underway). Today, biotech companies are developing new gene therapies, and international consortia are addressing the feasibility and ethics of making changes to human genomes that could be passed down to succeeding generations.
Within the biological science community, urgent conversations are occurring about “cyberbiosecurity,” an admittedly contested term that exists between biological and information systems where vulnerabilities in one can affect the other. These can include the security of DNA databanks, the fidelity of transmission of those data, and information hazards associated with specific DNA sequences that could encode novel pathogens for which no cures exist.
These risks have occupied not only learned bodies—the National Academies of Sciences, Engineering, and Medicine published at least a half dozen reports on biosecurity risks and how to address them proactively—but have made it to mainstream media: genome editing was a major plot element in Netflix’s Season 3 of “Designated Survivor.”
Our worries are more prosaic. As synthetic biology “programming” reaches the complexity of traditional computer programming, the risks of computer systems will transfer to biological systems. The difference is that biological systems have the potential to cause much greater, and far more lasting, damage than computer systems.
Programmers write software through trial and error. Because computer systems are so complex and there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because the cost of getting it wrong is so low and trying again is so easy. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it happened again.
Even finished code still has problems. Again due to the complexity of modern software systems, “works properly” doesn’t mean that it’s perfectly correct. Modern software is full of bugs—thousands of software flaws—that occasionally affect performance or security. That’s why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.
Bioengineering will be largely the same: writing biological code will have these same reliability properties. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn’t work in biology.
In nature, a similar type of trial and error is handled by “the survival of the fittest” and occurs slowly over many generations. But human-generated code from scratch doesn’t have that kind of correction mechanism. Inadvertent or intentional release of these newly coded “programs” may result in pathogens of expanded host range (just think swine flu) or organisms that wreck delicate ecological balances.
Unlike computer software, there’s no way so far to “patch” biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to “patch” the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.
Opportunities for mischief and malfeasance often occur when expertise is siloed, fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn’t make its way into the larger body of practitioners who have important contributions to make.
Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed, in either the biological and digital spheres of influence, classified and solely within the military, or exchanged only among a very small set of investigators.
What we need is more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. Our digital and biological communities have the tools to identify and mitigate biological risks, and the tools to write and deploy secure computer systems.
Those opportunities will not occur without effort or financial support. Let’s find those resources, public, private, philanthropic, or any combination. And then let’s use those resources to set up some novel opportunities for digital geeks and bionerds—as well as ethicists and policy makers—to share experiences and concerns, and come up with creative, constructive solutions to these problems that are more than just patches.
These are overarching problems; let’s not let siloed thinking or funding get in the way of breaking down barriers between communities. And let’s not let technology of any kind get in the way of the public good.
This essay previously appeared on CNN.com.
EDITED TO ADD (9/23): Commentary.