Entries Tagged "Russia"


Details of the REvil Ransomware Attack

ArsTechnica has a good story on the REvil ransomware attack of last weekend, with technical details:

This weekend’s attack was carried out with almost surgical precision. According to Cybereason, the REvil affiliates first gained access to targeted environments and then used the zero-day in the Kaseya Agent Monitor to gain administrative control over the target’s network. After writing a base-64-encoded payload to a file named agent.crt, the dropper executed it.

[…]

The ransomware dropper Agent.exe is signed with a Windows-trusted certificate that uses the registrant name “PB03 TRANSPORT LTD.” By digitally signing their malware, attackers are able to suppress many security warnings that would otherwise appear when it’s being installed. Cybereason said that the certificate appears to have been used exclusively by REvil malware that was deployed during this attack.

To add stealth, the attackers used a technique called DLL Side-Loading, which places a spoofed malicious DLL file in the Windows WinSxS directory so that the operating system loads the spoof instead of the legitimate file. In this case, Agent.exe drops an outdated version of “msmpeng.exe,” the Windows Defender executable, that is vulnerable to DLL Side-Loading.

Once executed, the malware changes the firewall settings to allow local Windows systems to be discovered. Then, it starts to encrypt the files on the system….
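As an aside, the agent.crt trick is easy to spot after the fact. Here is a minimal sketch (mine, not from the Cybereason or Ars Technica reports) of how a defender might test whether a file with a certificate-style name actually base64-decodes to a Windows executable; the default filename is just an illustrative assumption:

```python
# Minimal sketch: does a "certificate" file actually base64-decode to a
# Windows executable?  A genuine certificate won't decode to data starting
# with the ASCII bytes "MZ", which is how PE executables begin.
import base64
import binascii
import sys
from pathlib import Path

def decodes_to_pe(path: Path) -> bool:
    """Return True if the file's contents base64-decode to a PE image."""
    try:
        decoded = base64.b64decode(path.read_bytes(), validate=False)
    except (binascii.Error, OSError):
        return False
    return decoded[:2] == b"MZ"

if __name__ == "__main__":
    # "agent.crt" is the filename reported in this attack; adjust as needed.
    target = Path(sys.argv[1] if len(sys.argv) > 1 else "agent.crt")
    if decodes_to_pe(target):
        print(f"{target}: decodes to a PE executable -- suspicious")
    else:
        print(f"{target}: does not decode to a PE executable")
```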

REvil is demanding $70 million for a universal decryptor that will recover the data from the 1,500 affected Kaseya customers.

More news.

Note that this is yet another supply-chain attack. Instead of infecting those 1,500 networks directly, REvil infected a single managed service provider. And it leveraged a zero-day vulnerability in that provider’s software.

EDITED TO ADD (7/13): Employees warned Kaseya’s management for years about critical security flaws, but they were ignored.

Posted on July 8, 2021 at 10:06 AM

More Russian Hacking

Two reports this week. The first is from Microsoft, which wrote:

As part of our investigation into this ongoing activity, we also detected information-stealing malware on a machine belonging to one of our customer support agents with access to basic account information for a small number of our customers. The actor used this information in some cases to launch highly-targeted attacks as part of their broader campaign.

The second is from the NSA, CISA, FBI, and the UK’s NCSC, which wrote that the GRU is continuing to conduct brute-force password guessing attacks around the world, and is in some cases successful. From the NSA press release:

Once valid credentials were discovered, the GTsSS combined them with various publicly known vulnerabilities to gain further access into victim networks. This, along with various techniques also detailed in the advisory, allowed the actors to evade defenses and collect and exfiltrate various information in the networks, including mailboxes.
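The underlying problem is mundane: password guessing at scale still works against accounts that are never throttled. As a rough sketch of the standard countermeasure (my illustration, not anything from the advisory), the thresholds and in-memory store below are assumptions:

```python
# Minimal sketch of per-account lockout: after too many recent failures,
# refuse further attempts for a cooling-off period.  Thresholds and the
# in-memory store are illustrative; production systems persist this state.
import time
from collections import defaultdict

MAX_FAILURES = 5            # failures tolerated within the window
LOCKOUT_SECONDS = 15 * 60   # length of the rolling window / lockout

_recent_failures = defaultdict(list)   # username -> failure timestamps

def is_locked_out(username: str) -> bool:
    now = time.time()
    window = [t for t in _recent_failures[username] if now - t < LOCKOUT_SECONDS]
    _recent_failures[username] = window
    return len(window) >= MAX_FAILURES

def record_failure(username: str) -> None:
    _recent_failures[username].append(time.time())

def record_success(username: str) -> None:
    _recent_failures[username].clear()

# Typical use inside an authentication handler:
#   if is_locked_out(user): reject()
#   elif not password_ok:   record_failure(user)
#   else:                   record_success(user)
```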

News article.

Posted on July 2, 2021 at 6:26 AM

The DarkSide Ransomware Gang

The New York Times has a long story on the DarkSide ransomware gang.

A glimpse into DarkSide’s secret communications in the months leading up to the Colonial Pipeline attack reveals a criminal operation on the rise, pulling in millions of dollars in ransom payments each month.

DarkSide offers what is known as “ransomware as a service,” in which a malware developer charges a user fee to so-called affiliates like Woris, who may not have the technical skills to actually create ransomware but are still capable of breaking into a victim’s computer systems.

DarkSide’s services include providing technical support for hackers, negotiating with targets like the publishing company, processing payments, and devising tailored pressure campaigns through blackmail and other means, such as secondary hacks to crash websites. DarkSide’s user fees operated on a sliding scale: 25 percent for any ransoms less than $500,000 down to 10 percent for ransoms over $5 million, according to the computer security firm FireEye.

Posted on June 2, 2021 at 9:09 AM

The Misaligned Incentives for Cloud Security

Russia’s Sunburst cyberespionage campaign, discovered late last year, impacted more than 100 large companies and US federal agencies, including the Treasury, Energy, Justice, and Homeland Security departments. A crucial part of the Russians’ success was their ability to move through these organizations by compromising cloud and local network identity systems to then access cloud accounts and pilfer emails and files.

Hackers said by the US government to have been working for the Kremlin targeted a widely used Microsoft cloud service that synchronizes user identities. The hackers stole security certificates to create their own identities, which allowed them to bypass safeguards such as multifactor authentication and gain access to Office 365 accounts, impacting thousands of users at the affected companies and government agencies.

It wasn’t the first time cloud services were the focus of a cyberattack, and it certainly won’t be the last. Cloud weaknesses were also critical in a 2019 breach at Capital One. There, an Amazon Web Services cloud vulnerability, compounded by Capital One’s own struggle to properly configure a complex cloud service, led to the disclosure of tens of millions of customer records, including credit card applications, Social Security numbers, and bank account information.

This trend of attacks on cloud services by criminals, hackers, and nation states is growing as cloud computing takes over worldwide as the default model for information technologies. Leaked data is bad enough, but disruption to the cloud, even an outage at a single provider, could quickly cost the global economy billions of dollars a day.

Cloud computing is an important source of risk both because it has quickly supplanted traditional IT and because it concentrates ownership of design choices at a very small number of companies. First, cloud is increasingly the default mode of computing for organizations, meaning ever more users and critical data from national intelligence and defense agencies ride on these technologies. Second, cloud computing services, especially those supplied by the world’s four largest providers—Amazon, Microsoft, Alibaba, and Google—concentrate key security and technology design choices inside a small number of organizations. The consequences of bad decisions or poorly made trade-offs can quickly scale to hundreds of millions of users.

The cloud is everywhere. Some cloud companies provide software as a service, support your Netflix habit, or carry your Slack chats. Others provide computing infrastructure like business databases and storage space. The largest cloud companies provide both.

The cloud can be deployed in several different ways, each of which shifts the balance of responsibility for the security of this technology. But the cloud provider plays an important role in every case. Choices the provider makes in how these technologies are designed, built, and deployed influence the user’s security—yet the user has very little influence over them. If Google or Amazon has a vulnerability in its servers—which you are unlikely to know about and have no control over—you suffer the consequences.

The problem is one of economics. On the surface, it might seem that competition between cloud companies gives them an incentive to invest in their users’ security. But several market failures get in the way of that ideal. First, security is largely an externality for these cloud companies, because the losses due to data breaches are largely borne by their users. As long as a cloud provider isn’t losing customers in droves—which generally doesn’t happen after a security incident—it is incentivized to underinvest in security. Additionally, data shows that investors don’t punish the cloud service companies either: stock price dips after a public security breach are both small and temporary.

Second, public information about cloud security generally doesn’t share the design trade-offs involved in building these cloud services or provide much transparency about the resulting risks. While cloud companies have to publicly disclose copious amounts of security design and operational information, it can be impossible for consumers to understand which threats the cloud services are taking into account, and how. This lack of understanding makes it hard to assess a cloud service’s overall security. As a result, customers and users aren’t able to differentiate between secure and insecure services, so they don’t base their buying and use decisions on it.

Third, cybersecurity is complex—and even more complex when the cloud is involved. For a customer like a company or government agency, the security dependencies of various cloud and on-premises network systems and services can be subtle and hard to map out. This means that users can’t adequately assess the security of cloud services or how they will interact with their own networks. This is a classic “lemons market” in economics, and the result is that cloud providers deliver widely variable levels of security, as documented by Dan Geer, the chief information security officer for In-Q-Tel, and Wade Baker, a professor at Virginia Tech’s College of Business, when they looked at the prevalence of severe security findings at the top 10 largest cloud providers. Yet most consumers are none the wiser.

The result is a market failure where cloud service providers don’t compete to provide the best security for their customers and users at the lowest cost. Instead, cloud companies take the chance that they won’t get hacked, and past experience tells them they can weather the storm if they do. This kind of decision-making and priority-setting takes place at the executive level, of course, and doesn’t reflect the dedication and technical skill of product engineers and security specialists. The effect of this underinvestment is pernicious, however, piling on risk that’s largely hidden from users. Widespread adoption of cloud computing carries that risk to an organization’s network, to its customers and users, and, in turn, to the wider internet.

This aggregation of cybersecurity risk creates a national security challenge. Policymakers can help address the challenge by setting clear expectations for the security of cloud services—and for making decisions and design trade-offs about that security transparent. The Biden administration, including newly nominated National Cyber Director Chris Inglis, should lead an interagency effort to work with cloud providers to review their threat models and evaluate the security architecture of their various offerings. This effort to require greater transparency from cloud providers and exert more scrutiny of their security engineering efforts should be accompanied by a push to modernize cybersecurity regulations for the cloud era.

The Federal Risk and Authorization Management Program (FedRAMP), which is the principal US government program for assessing the risk of cloud services and authorizing them for use by government agencies, would be a prime vehicle for these efforts. A recent executive order outlines several steps to make FedRAMP faster and more responsive. But the program is still focused largely on the security of individual services rather than the cloud vendors’ deeper architectural choices and threat models. Congressional action should reinforce and extend the executive order by adding new obligations for vendors to provide transparency about design trade-offs, threat models, and resulting risks. These changes could help transform FedRAMP into a more effective tool of security governance even as it becomes faster and more efficient.

Cloud providers have become important national infrastructure. Not since the heights of the mainframe era between the 1960s and early 1980s has the world witnessed computing systems of such complexity used by so many but designed and created by so few. The security of this infrastructure demands greater transparency and public accountability—if only to match the consequences of its failure.

This essay was written with Trey Herr, and previously appeared in Foreign Policy.

Posted on May 28, 2021 at 6:20 AM

Adding a Russian Keyboard to Protect against Ransomware

A lot of Russian malware—the malware that targeted the Colonial Pipeline, for example—won’t install on computers with a Cyrillic keyboard installed. Brian Krebs wonders if this could be a useful defense:

In Russia, for example, authorities there generally will not initiate a cybercrime investigation against one of their own unless a company or individual within the country’s borders files an official complaint as a victim. Ensuring that no affiliates can produce victims in their own countries is the easiest way for these criminals to stay off the radar of domestic law enforcement agencies.

[…]

DarkSide, like a great many other malware strains, has a hard-coded do-not-install list of countries which are the principal members of the Commonwealth of Independent States (CIS)—former Soviet satellites that mostly have favorable relations with the Kremlin.

[…]

Simply put, countless malware strains will check for the presence of one of these languages on the system, and if they’re detected, the malware will exit and fail to install.

[…]

Will installing one of these languages keep your Windows computer safe from all malware? Absolutely not. There is plenty of malware that doesn’t care where in the world you are. And there is no substitute for adopting a defense-in-depth posture, and avoiding risky behaviors online.

But is there really a downside to taking this simple, free, prophylactic approach? None that I can see, other than perhaps a sinking feeling of capitulation. The worst that could happen is that you accidentally toggle the language settings and all your menu options are in Russian.
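For the curious, the check Krebs describes is easy to picture. Here is a minimal sketch using the Win32 GetKeyboardLayoutList API via Python’s ctypes; the set of language IDs is illustrative, not the actual list any particular malware family uses:

```python
# Minimal sketch (Windows only): enumerate installed keyboard layouts and
# look for a CIS-language layout.  The low 16 bits of each layout handle
# are a Windows language identifier (0x0419 is Russian).
import ctypes
import sys

CIS_LANGUAGE_IDS = {
    0x0419,  # Russian
    0x0422,  # Ukrainian
    0x0423,  # Belarusian
}  # illustrative subset, not a complete CIS list

def installed_language_ids() -> set:
    if not sys.platform.startswith("win"):
        raise RuntimeError("keyboard-layout check only applies to Windows")
    user32 = ctypes.windll.user32
    count = user32.GetKeyboardLayoutList(0, None)   # ask how many layouts exist
    handles = (ctypes.c_void_p * count)()
    user32.GetKeyboardLayoutList(count, handles)    # fill the buffer
    return {(h or 0) & 0xFFFF for h in handles}

if __name__ == "__main__":
    if installed_language_ids() & CIS_LANGUAGE_IDS:
        print("A CIS-language keyboard layout is installed.")
    else:
        print("No CIS-language keyboard layout found.")
```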

EDITED TO ADD (6/14): According to some, this doesn’t work.

Posted on May 18, 2021 at 10:31 AM

Ransomware Shuts Down US Pipeline

This is a major story: a probably Russian cybercrime group called DarkSide shut down the Colonial Pipeline in a ransomware attack. The pipeline supplies much of the East Coast’s fuel. This is the new and improved ransomware attack: the hackers stole nearly 100 gigabytes of data, and are threatening to publish it. The White House has declared a state of emergency and has created a task force to deal with the problem, but it’s unclear what they can do. This is bad; our supply chains are so tightly coupled that this kind of thing can have disproportionate effects.

EDITED TO ADD (5/12): It seems that the billing system was attacked, and not the physical pipeline itself.

Posted on May 10, 2021 at 2:17 PM

Biden Administration Imposes Sanctions on Russia for SolarWinds

On April 15, the Biden administration both formally attributed the SolarWinds espionage campaign to the Russian Foreign Intelligence Service (SVR), and imposed a series of sanctions designed to punish the country for the attack and deter future attacks.

I will leave it to those with experience in foreign relations to convince me that the response is sufficient to deter future operations. To me, it feels like too little. The New York Times reports that “the sanctions will be among what President Biden’s aides say are ‘seen and unseen’ steps in response to the hacking,” which implies that there’s more we don’t know about. Also, that “the new measures are intended to have a noticeable effect on the Russian economy.” Honestly, I don’t know what the US should do. Anything that feels more proportional is also more escalatory. I’m sure that dilemma is part of the Russian calculus in all this.

Posted on April 20, 2021 at 6:19 AM

National Security Risks of Late-Stage Capitalism

Early in 2020, cyberspace attackers apparently working for the Russian government compromised a piece of widely used network management software made by a company called SolarWinds. The hack gave the attackers access to the computer networks of some 18,000 of SolarWinds’s customers, including US government agencies such as the Homeland Security Department and State Department, American nuclear research labs, government contractors, IT companies and nongovernmental agencies around the world.

It was a huge attack, with major implications for US national security. The Senate Intelligence Committee is scheduled to hold a hearing on the breach on Tuesday. Who is at fault?

The US government deserves considerable blame, of course, for its inadequate cyberdefense. But to see the problem only as a technical shortcoming is to miss the bigger picture. The modern market economy, which aggressively rewards corporations for short-term profits and aggressive cost-cutting, is also part of the problem: Its incentive structure all but ensures that successful tech companies will end up selling insecure products and services.

Like all for-profit corporations, SolarWinds aims to increase shareholder value by minimizing costs and maximizing profit. The company is owned in large part by Silver Lake and Thoma Bravo, private-equity firms known for extreme cost-cutting.

SolarWinds certainly seems to have underspent on security. The company outsourced much of its software engineering to cheaper programmers overseas, even though that typically increases the risk of security vulnerabilities. For a while, in 2019, the update server’s password for SolarWinds’s network management software was reported to be “solarwinds123.” Russian hackers were able to breach SolarWinds’s own email system and lurk there for months. Chinese hackers appear to have exploited a separate vulnerability in the company’s products to break into US government computers. A cybersecurity adviser for the company said that he quit after his recommendations to strengthen security were ignored.

There is no good reason to underspend on security other than to save money—especially when your clients include government agencies around the world and when the technology experts that you pay to advise you are telling you to do more.

As the economics writer Matt Stoller has suggested, cybersecurity is a natural area for a technology company to cut costs because its customers won’t notice unless they are hacked—and if they are, they will have already paid for the product. In other words, the risk of a cyberattack can be transferred to the customers. Doesn’t this strategy jeopardize the possibility of long-term, repeat customers? Sure, there’s a danger there—but investors are so focused on short-term gains that they’re too often willing to take that risk.

The market loves to reward corporations for risk-taking when those risks are largely borne by other parties, like taxpayers. This is known as “privatizing profits and socializing losses.” Standard examples include companies that are deemed “too big to fail,” which means that society as a whole pays for their bad luck or poor business decisions. When national security is compromised by high-flying technology companies that fob off cybersecurity risks onto their customers, something similar is at work.

Similar misaligned incentives affect your everyday cybersecurity, too. Your smartphone is vulnerable to something called SIM-swap fraud because phone companies want to make it easy for you to frequently get a new phone—and they know that the cost of fraud is largely borne by customers. Data brokers and credit bureaus that collect, use, and sell your personal data don’t spend a lot of money securing it because it’s your problem if someone hacks them and steals it. Social media companies too easily let hate speech and misinformation flourish on their platforms because it’s expensive and complicated to remove it, and they don’t suffer the immediate costs—indeed, they tend to profit from user engagement regardless of its nature.

There are two problems to solve. The first is information asymmetry: buyers can’t adequately judge the security of software products or company practices. The second is a perverse incentive structure: the market encourages companies to make decisions in their private interest, even if that imperils the broader interests of society. Together these two problems result in companies that save money by taking on greater risk and then pass off that risk to the rest of us, as individuals and as a nation.

The only way to force companies to provide safety and security features for customers and users is with government intervention. Companies need to pay the true costs of their insecurities, through a combination of laws, regulations, and legal liability. Governments routinely legislate safety—pollution standards, automobile seat belts, lead-free gasoline, food service regulations. We need to do the same with cybersecurity: the federal government should set minimum security standards for software and software development.

In today’s underregulated markets, it’s just too easy for software companies like SolarWinds to save money by skimping on security and to hope for the best. That’s a rational decision in today’s free-market world, and the only way to change that is to change the economic incentives.

This essay previously appeared in the New York Times.

Posted on March 1, 2021 at 6:12 AM

Another SolarWinds Orion Hack

At the same time the Russians were using a backdoored SolarWinds update to attack networks worldwide, another threat actor—believed to be Chinese in origin—was using an already existing vulnerability in Orion to penetrate networks:

Two people briefed on the case said FBI investigators recently found that the National Finance Center, a federal payroll agency inside the U.S. Department of Agriculture, was among the affected organizations, raising fears that data on thousands of government employees may have been compromised.

[…]

Reuters was not able to establish how many organizations were compromised by the suspected Chinese operation. The sources, who spoke on condition of anonymity to discuss ongoing investigations, said the attackers used computer infrastructure and hacking tools previously deployed by state-backed Chinese cyberspies.

[…]

While the alleged Russian hackers penetrated deep into SolarWinds network and hid a “back door” in Orion software updates which were then sent to customers, the suspected Chinese group exploited a separate bug in Orion’s code to help spread across networks they had already compromised, the sources said.

Two takeaways: One, we are learning about a lot of supply-chain attacks right now. Two, SolarWinds’ terrible security is the result of a conscious business decision to reduce costs in the name of short-term profits. Economist Matt Stoller writes about this:

These private equity-owned software firms torture professionals with bad user experiences and shitty customer support in everything from yoga studio software to car dealer IT to the nightmarish ‘core’ software that runs small banks and credit unions, as close as one gets to automating Office Space. But they also degrade product quality by firing or disrespecting good workers, under-investing in good security practices, or sending work abroad and paying badly, meaning their products are more prone to espionage. In other words, the same sloppy and corrupt practices that allowed this massive cybersecurity hack made Bravo a billionaire. In a sense, this hack, and many more like it, will continue to happen, as long as men like Bravo get rich creating security vulnerabilities for bad actors to exploit.

SolarWinds increased its profits by increasing its cybersecurity risk, and then transferred that risk to its customers without their knowledge or consent.

Posted on February 4, 2021 at 6:11 AM

More SolarWinds News

Microsoft analyzed details of the SolarWinds attack:

Microsoft and FireEye only detected the Sunburst or Solorigate malware in December, but CrowdStrike reported this month that another related piece of malware, Sunspot, was deployed in September 2019, at the time hackers breached SolarWinds’ internal network. Other related malware includes Teardrop, aka Raindrop.

Details are in the Microsoft blog:

We have published our in-depth analysis of the Solorigate backdoor malware (also referred to as SUNBURST by FireEye), the compromised DLL that was deployed on networks as part of SolarWinds products, that allowed attackers to gain backdoor access to affected devices. We have also detailed the hands-on-keyboard techniques that attackers employed on compromised endpoints using a powerful second-stage payload, one of several custom Cobalt Strike loaders, including the loader dubbed TEARDROP by FireEye and a variant named Raindrop by Symantec.

One missing link in the complex Solorigate attack chain is the handover from the Solorigate DLL backdoor to the Cobalt Strike loader. Our investigations show that the attackers went out of their way to ensure that these two components are separated as much as possible to evade detection. This blog provides details about this handover based on a limited number of cases where this process occurred. To uncover these cases, we used the powerful, cross-domain optics of Microsoft 365 Defender to gain visibility across the entire attack chain in one complete and consolidated view.

This is all important, because Malwarebytes was penetrated through Office 365, and not SolarWinds. New estimates are that 30% of the campaign’s victims didn’t use SolarWinds software at all:

Many of the attacks gained initial footholds by password spraying to compromise individual email accounts at targeted organizations. Once the attackers had that initial foothold, they used a variety of complex privilege escalation and authentication attacks to exploit flaws in Microsoft’s cloud services. Another of the Advanced Persistent Threat (APT)’s targets, security firm CrowdStrike, said the attacker tried unsuccessfully to read its email by leveraging a compromised account of a Microsoft reseller the firm had worked with.
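One note on the “password spraying” mentioned above: it has a recognizable signature in authentication logs, namely a single source failing against many distinct accounts. A minimal sketch of that detection idea (my illustration, not from the reporting; the input format and threshold are assumptions):

```python
# Minimal sketch: flag source IPs whose failed logins span many *distinct*
# accounts -- the signature of password spraying, as opposed to one user
# repeatedly mistyping a password.  Threshold and input format are assumptions.
from collections import defaultdict
from typing import Dict, Iterable, Set, Tuple

DISTINCT_ACCOUNT_THRESHOLD = 20   # illustrative; tune to your environment

def spraying_sources(
    failed_logins: Iterable[Tuple[str, str]],   # (source_ip, username) pairs
) -> Dict[str, int]:
    accounts_per_ip: Dict[str, Set[str]] = defaultdict(set)
    for source_ip, username in failed_logins:
        accounts_per_ip[source_ip].add(username)
    return {
        ip: len(users)
        for ip, users in accounts_per_ip.items()
        if len(users) >= DISTINCT_ACCOUNT_THRESHOLD
    }

# Example (parse_auth_log is a hypothetical helper that yields the pairs):
#   spraying_sources(parse_auth_log("auth.log"))  ->  {"203.0.113.9": 417}
```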

On attribution: Earlier this month, the US government stated that the attack is “likely Russian in origin.” This echoes what then-Secretary of State Mike Pompeo said in December, and the Washington Post’s reporting (both from December). (The New York Times has repeated this attribution in a good article that also discusses the magnitude of the attack.) More evidence comes from code forensics, which links the malware to Turla, another Russian threat actor.

And lastly, a long ProPublica story on an unused piece of government-developed tech that might have caught the supply-chain attack much earlier:

The in-toto system requires software vendors to map out their process for assembling computer code that will be sent to customers, and it records what’s done at each step along the way. It then verifies electronically that no hacker has inserted something in between steps. Immediately before installation, a pre-installed tool automatically runs a final check to make sure that what the customer received matches the final product the software vendor generated for delivery, confirming that it wasn’t tampered with in transit.

I don’t want to hype this defense too much without knowing a lot more, but I like the approach of verifying the software build process.
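The last step ProPublica describes, comparing what the customer received against what the vendor recorded for the final build, boils down to hash verification. A minimal sketch of that general idea (this illustrates the concept, not in-toto’s actual metadata format; the filename and expected digest are placeholders):

```python
# Minimal sketch: verify that a delivered artifact hashes to the value the
# vendor recorded for the final build.  This illustrates the general idea
# behind the final check, not in-toto's actual metadata format.
import hashlib
import hmac
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def artifact_matches_record(artifact: Path, recorded_sha256: str) -> bool:
    """Constant-time comparison against the vendor-published digest."""
    return hmac.compare_digest(sha256_of(artifact), recorded_sha256.lower())

# Placeholder usage; a real deployment also has to verify that the recorded
# digest itself is authentic (e.g., signed by the vendor's build process).
# artifact_matches_record(Path("installer.msi"), "e3b0c44298fc1c149afbf4c8996fb924...")
```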

Posted on February 3, 2021 at 6:10 AM
