Thousands of WordPress Websites Infected with Malware

The malware includes four separate backdoors:

Creating four backdoors facilitates the attackers having multiple points of re-entry should one be detected and removed. A unique case we haven’t seen before. This introduces another type of attack, made possible by abusing websites that don’t monitor third-party dependencies in the browsers of their users.

The functions of the four backdoors are explained below:

  • Backdoor 1, which uploads and installs a fake plugin named “Ultra SEO Processor,” which is then used to execute attacker-issued commands
  • Backdoor 2, which injects malicious JavaScript into wp-config.php
  • Backdoor 3, which adds an attacker-controlled SSH key to the ~/.ssh/authorized_keys file, allowing persistent remote access to the machine
  • Backdoor 4, which is designed to execute remote commands and fetches another payload from gsocket[.]io, likely to open a reverse shell.
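A defender could sweep an installation for the indicators described above. The following is a minimal sketch, not the researchers’ tooling: the plugin name, wp-config.php, and authorized_keys come from the report, while the directory layout and the exact patterns matched are assumptions to adapt per host.

```python
import re
from pathlib import Path

def find_indicators(wp_root: Path, ssh_dir: Path) -> list[str]:
    """Return human-readable findings matching the four backdoor types."""
    findings = []

    # Backdoor 1: fake plugin named after "Ultra SEO Processor".
    plugins = wp_root / "wp-content" / "plugins"
    if plugins.is_dir():
        for p in plugins.iterdir():
            if "ultra" in p.name.lower() and "seo" in p.name.lower():
                findings.append(f"suspicious plugin: {p.name}")

    # Backdoor 2: wp-config.php should hold only PHP configuration,
    # so script tags or obfuscation primitives are red flags.
    wp_config = wp_root / "wp-config.php"
    if wp_config.is_file():
        text = wp_config.read_text(errors="ignore")
        if re.search(r"<script|eval\s*\(|base64_decode", text, re.I):
            findings.append("wp-config.php contains suspicious code")

    # Backdoor 3: list every key in authorized_keys for manual review;
    # any entry the operator does not recognize is a compromise indicator.
    auth_keys = ssh_dir / "authorized_keys"
    if auth_keys.is_file():
        for line in auth_keys.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                findings.append(f"authorized_keys entry: {line[:40]}")

    return findings
```

Backdoor 4 (the gsocket[.]io fetcher) has no fixed file location, so it is better caught by egress monitoring than by a filesystem sweep.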

Posted on March 10, 2025 at 7:01 AM

Comments

Hugo March 10, 2025 9:32 AM

Just take a look at the WordPress source code and you’ll know why WordPress will never be trustworthy. What a horrible mess. I don’t understand why people still use it, while better and far more secure alternatives are available.

TimH March 10, 2025 10:38 AM

@Hugo:
“Just take a look at the WordPress source code and you’ll know why WordPress will never be trustworthy. What a horrible mess.” You give no supporting argument, so that’s just an ad hominem attack.

“I don’t understand why people still use it.” Come on. Why do people use Windows, which is insecure enough to be patched monthly at least, and performs massive user surveillance? Because it’s easy to use, and people are generally not technical, and to a lesser degree don’t care much.

“while better and far more secure alternatives are available.” Again, would it have been too much trouble to suggest one or two with a one sentence supporting reason?

wiredog March 10, 2025 10:43 AM

Backdoor 3 is really nasty. Once they have ssh access… Why is that file writable by the WordPress user?
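One way to answer that question on a given host is to check the mode bits directly. A rough sketch (owner and primary-group bits only; ACLs, supplementary groups, and capabilities are ignored), where the username is whatever account the web server runs PHP as, often `www-data`:

```python
import os
import pwd
import stat

def writable_by(path: str, username: str) -> bool:
    """Can `username` write `path` through owner, group, or world bits?
    Checks the primary group only and ignores ACLs; quick triage, not an audit."""
    st = os.stat(path)
    user = pwd.getpwnam(username)
    if st.st_uid == user.pw_uid and st.st_mode & stat.S_IWUSR:
        return True
    if st.st_gid == user.pw_gid and st.st_mode & stat.S_IWGRP:
        return True
    return bool(st.st_mode & stat.S_IWOTH)
```

If `writable_by(os.path.expanduser("~/.ssh/authorized_keys"), "www-data")` is ever true, the web application can grant itself shell access exactly as Backdoor 3 does.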

Clive Robinson March 10, 2025 11:34 AM

@ Bruce, ALL,

With regards the “c/side” article intro and,

“Creating four backdoors facilitates the attackers having multiple points of re-entry should one be detected and removed. A unique case we haven’t seen before.”

It raises an interesting set of thoughts…

Firstly, whilst this is not the first case of multiple attack code entry points being put into target systems, in the past even “State Sponsored APT” did not put in four.

This implies that whoever was behind it thought they would need them for some reason.

Which, secondly, raises the question of “Why so many?”

The three obvious answers are,

1, They wanted to maintain entrance over a considerable period of time.
2, They thought getting another exploit to get in would be difficult.
3, Creating a covert backdoor is usually a lot simpler than finding a vulnerability.

The last point there also applies to a group with “multiple skill sets”, which is why “State Sponsored” APT would be many people’s attribution assumption.

However it gives another potential answer,

4, They wanted to appear as different attackers.

That point might also be because they wanted to “rent the backdoors out” to multiple untrusted clients.

Or they wanted to act as multiple unrelated attackers.

Unfortunately there is no further information given to work from.

However the second article from “thehackernews” likewise gives no further information, but… then starts talking about a second attack in a way that might make people think the two separate attacks were the same or strongly related.

The second attack is used to,

‘”fully hijacks the user’s browser window” to redirect site visitors to Chinese-language gambling platforms.’

If the two are in fact related, this would suggest the attacker is “higher up the experience scale” and has a “campaign” they want to keep around a while.

If that is the case and they are of a higher level, it means they very probably have developed a lot more than four APT backdoors and are in effect “keeping their powder dry” on the others.

That is, as one backdoor gets nixed they can still get in and install a replacement backdoor that is new to defenders.

But the question that came to my mind first was,

“Why has it taken so long for State Sponsored APT style activities to be taken up in the more general criminal class?”

One implication would be that criminal attackers did not feel they needed multiple entry routes to individual targets, either because it was a “target rich environment” with a large number of easy-to-exploit vulnerabilities, or because the number of “easy targets” was so high that going after the same target multiple times had no real advantage.

I guess that now this multiple-entry-point notion is out there, and given that adding covert backdoors is easier than exploiting code vulnerabilities, we will see a lot more of it in future as the gap between criminal and state blurs further.

m7drc March 10, 2025 1:20 PM

It seems like these types of attacks would fail if people used proper access control methods (e.g. SELinux, AppArmor), especially for the ssh backdoor. Other intrusion detection measures (e.g. tripwire, snort, fail2ban) should also be used to detect unauthorized changes, especially when they occur outside expected update windows.

Given that these tools have been around for decades, is there any way to reasonably push people to do the bare minimum and use them? At what point is the blame put on the individual/organization operating the site, similar to how blame is placed with data breaches when basic security standards are not met?
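The tripwire-style piece of that advice is simple enough to sketch: record a cryptographic baseline of the document root during a known-good window, then alert on any drift outside expected updates. A minimal illustration of the idea (the real tools add signed databases, scheduling, and alerting):

```python
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Map each file under `root` to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Report files added, removed, or modified since the baseline."""
    return {
        "added": [f for f in current if f not in baseline],
        "removed": [f for f in baseline if f not in current],
        "modified": [f for f in current
                     if f in baseline and current[f] != baseline[f]],
    }
```

Run snapshot() right after a legitimate update, store the result somewhere the web-server user cannot write, and diff against a fresh snapshot on a schedule.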

Who? March 10, 2025 1:52 PM

In my humble opinion WordPress is overkill for most users. Large corporations have better approaches when running a web server, like hiring a team of developers to write what the corporation really needs; small users do not need what WordPress offers either, and would usually be happy writing their own simple, static websites using HTML and CSS.

WordPress has had a lot of ugly security vulnerabilities in recent years, and it is one of the reasons technologies like JavaScript will be required forever.

On the other hand, web servers should run in chroot(8)’d environments, something that is nearly impossible with WordPress. I never use technologies like WordPress on my own projects; I used it a few years ago to help an NGO, and that was enough to understand how dangerous this path can be.

We can say the same about browsers: too large to be auditable, yet a key element of current computer networks.

Keep things simple if you want to make them auditable; keep things auditable if you want to make them secure.

Who? March 10, 2025 2:04 PM

This post should go to the latest squid thread, however it seems impossible to post on those threads anymore, so it goes here…

Until now, I was happy with Mozilla Firefox. Writing some JSON policy files, and a mozilla.cfg to lock those settings that were not configurable through policies.json, was enough to have a mostly privacy-friendly browser.

Not anymore. It seems the Mozilla Foundation has chosen the route most corporations follow these days. I doubt telemetry can be blocked now (and, by default, it is much more intrusive in Firefox than we would expect from a supposedly privacy-friendly foundation). As I have said many times in recent years, governments and corporations play in the same league as users, but they are certainly not on the same team.

What can we do now? Right now, it seems ungoogled Chromium is the only choice we have.

I think this important policy change should have its own thread on this blog. I would love to hear from other experts.
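For readers unfamiliar with the mechanism described above: Firefox reads enterprise policies from a distribution/policies.json file in the installation directory, and DisableTelemetry and DisableFirefoxStudies are documented policy names. A minimal example of the kind of file meant here; whether such switches remain sufficient is exactly the question being raised:

```json
{
  "policies": {
    "DisableTelemetry": true,
    "DisableFirefoxStudies": true,
    "DisablePocket": true,
    "EnableTrackingProtection": {
      "Value": true,
      "Locked": true
    }
  }
}
```

Settings with no corresponding policy can be pinned in an AutoConfig file with e.g. `lockPref("toolkit.telemetry.enabled", false);`, which is the mozilla.cfg technique mentioned above.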

Who? March 10, 2025 2:14 PM

As I said, until now it was mostly possible to block tracking features, telemetry, and some unwanted features (even if just because they may increase the attack surface). Right now it is not clear how effective these measures will be after the Mozilla Foundation’s abandonment of its commitment to preserving user privacy.

It seems that configuring a reasonable policy, keeping some knobs locked with lockPref(), and following good practices like running isolated tabs by means of Firefox Multi-Account Containers may not be enough anymore. This is the reason I would like to see a thread about this matter, to hear experts’ advice.

Clive Robinson March 10, 2025 6:35 PM

@ m7drc, ALL,

Your comment of,

“It seems like these types of attacks would fail if people used proper access control methods (e.g. SELinux, AppArmor), especially for the ssh backdoor. Other intrusion detection measures (e.g. tripwire, snort, fail2ban) should also be used to detect unauthorized changes, especially when they occur outside expected update windows.”

Sorry but that is the wrong way to look at it.

What you are saying is,

1, Enlarge the attack surface.
2, Increase the complexity
3, Add increasing ambiguity

Etc, etc, etc.

At what point do you stop shoveling stuff on?

The correct response is,

1, How do you reduce the attack surface?
2, How do you decrease the complexity?
3, How do you remove the ambiguity?

Etc, etc, etc.

When the suggested solution makes the problem worse, and in this case it does, you really need to look at solutions that make things simpler to manage and run whilst, importantly, reducing needless resource issues.

What this tells us is two things,

1, The application is not fit for purpose.
2, Likewise the environment the application executes in is not fit to be put on the market.

Thus the priority is not to make problems worse by adding attack surface, complexity and ambiguity, but clean up the application and the environment so they are “fit for” both “purpose” and “market”.

The fact that the industry wants to go the other way, should tell you something…

Which is that, as currently set up, the industry itself is “Not Fit” and should be legislated and regulated until it is, as is done in nearly every other consumer and common commercial market.

Martin March 10, 2025 7:13 PM

@Leigh: Thank you for the mention of LibreWolf. I haven’t looked at this option for some time. Being an independent version of Firefox / Mozilla, this is worth a serious look. Appreciate your post.

lurker March 10, 2025 9:39 PM

This introduces another type of attack, made possible by abusing websites that don’t monitor third-party dependencies in the browsers of their users.

What does this mean?

I’m getting more than slightly annoyed with websites that force me to update fictitious UserAgent strings, or otherwise refuse to serve standards compliant browsers not on their whitelist …

ResearcherZero March 10, 2025 11:34 PM

@Clive Robinson

rubber hoses

Nation States do contract out work and those contractors also sometimes engage in criminal activity. There is likely to be a limit to resources and restrictions on capability.

Bob March 11, 2025 10:28 AM

@TimH “insecure enough to be patched monthly at least”

What OS are you using that doesn’t have patches at least monthly? I know it’s not anything based on RHEL, Debian, or OSX. Been a long time since I’ve used anything in the BSD family, is that where you’re finding your patchless OS?
