HAMMERTOSS: New Russian Malware

FireEye has a detailed report of a sophisticated piece of Russian malware: HAMMERTOSS. It uses some clever techniques to hide:

The Hammertoss backdoor malware looks for a different Twitter handle each day -- automatically prompted by a list generated by the tool -- to get its instructions. If the handle it's looking for is not registered that day, it merely returns the next day and checks for the Twitter handle designated for that day. If the account is active, Hammertoss searches for a tweet with a URL and hashtag, and then visits the URL.

That's where a legit-looking image is grabbed and then opened by Hammertoss: the image contains encrypted instructions, which Hammertoss decrypts. The commands, which include instructions for obtaining files from the victim's network, typically then lead the malware to send that stolen information to a cloud-based storage service.
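FireEye has not published the actual handle-generation algorithm, but the daily-handle mechanism described above can be sketched as a simple date-seeded derivation. Everything here (the seed, the hash, the handle format) is invented for illustration:

```python
# Hypothetical sketch of HAMMERTOSS's daily-handle scheme, based only on
# FireEye's public description. The seed, hash choice, and handle format
# are invented; the real algorithm is not public.
import datetime
import hashlib

SEED = b"not-the-real-seed"  # assumed shared secret baked into the implant

def handle_for(day: datetime.date) -> str:
    """Derive the Twitter handle the implant would check on a given day."""
    digest = hashlib.sha256(SEED + day.isoformat().encode()).hexdigest()
    return "user_" + digest[:12]  # short, innocuous-looking handle

# The implant checks one handle per day; if it's unregistered, it just
# sleeps and tries the next day's handle tomorrow.
print(handle_for(datetime.date(2015, 7, 31)))
```

Because the operator knows the same seed, they can register tomorrow's handle in advance; defenders who recover the seed from one sample can precompute every future handle.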

Another article. Reddit thread.

Posted on July 31, 2015 at 11:12 AM • 15 Comments


ick • July 31, 2015 1:14 PM

Makes me think about the asymmetry in effort between attackers (hiding and using their malware) and defenders (detecting, blocking, and reverse-engineering it). Attackers obviously want to tilt that asymmetry as far in their favor as they can, so they design for hundreds of potential C&C channels per day; if the defenders miss even one of them, the attackers can still control their malware. Defenders have to monitor or block all of them, a much higher level of effort.

It's a lot like what Blizzard did with the Warden anti-cheat system for World of Warcraft: they created a situation where cheat developers had to capture and reverse-engineer hundreds of different Warden code fragments each time Blizzard updated them, in order to safely use their cheats. So one developer spends a few hours a week changing the anti-cheat code, and creates dozens or hundreds of hours of work for the cheaters each time.

If APT malware authors can pull off the same trick, they can overwhelm the defenders' resources, and/or create enough noise for their attack to hide in. Defenders will have to get increasingly subtle and clever to overcome these kinds of asymmetric tactics. For example, they will have to find subversions that unmask all of the C&C channels simultaneously -- reliable ways of distinguishing them from legitimate user channels.

Cpragman • July 31, 2015 7:10 PM

If authorities wanted to shut down the C&C, they'd have to shut down Twitter.

That's a lot harder than shutting down some rogue servers.

Clive Robinson • July 31, 2015 8:50 PM


The Hammertoss backdoor malware looks for a different Twitter handle each day -- automatically prompted by a list generated by the tool -- to get its instructions.

The blackhats have finally made my point about headless command and control through a service that's too big to take down, using preselected search identifiers (though I used Google, blogs, and posters' names).

I'm really surprised it's taken this long, because to me at least it seems an obvious step to take, especially as I've written about it on this blog a number of times.

lala • July 31, 2015 11:54 PM


Nah, Twitter wouldn't be fazed. Lots of outfits track the traffic around a Twitter account if they care to. Assuming the encryption's good, it becomes a game of opsec. Budgets matter there.

moo • August 1, 2015 3:25 AM

After reading the FireEye report about how this thing works, I was disappointed. It sounds like standard botnet stuff, and I guess I was expecting more subtlety from APT malware authors. I thought they would try harder to stego their C&C traffic. Maybe there is no need, because no humans or software algorithms actually look at that traffic carefully enough to detect abnormal traffic patterns. But that seems to be a basic thing you have to do if you want to detect malware infections, so I admit I'm surprised. Maybe the state of the art on both sides is much clumsier than I had assumed.

A competent defender who was actually paying attention and looking out for unusual traffic patterns could spot this. A business that MITMs its employees' SSL connections would still be able to scan the Twitter traffic, and should be logging it anyway. Users who visit a lot of different Twitter accounts should stand out. If an admin looked at the list, he'd see a whole bunch of hexadecimal numbers and a little alarm bell would go off in his head. He'd then write a script to scan all of the Twitter traffic from the past few weeks, look for hexadecimal characters at the beginning and end of each handle, and print out any suspicious handles, when they were accessed, and from which machine.
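moo's hypothetical admin script might look something like this. The log format, field order, and the "hex run at both ends of the handle" heuristic are all assumptions for the sake of the sketch:

```python
# A minimal sketch of the detection idea above: scan proxy logs of Twitter
# handles that users visited and flag ones that look machine-generated
# (here, handles that begin and end with runs of hex characters).
# The 'timestamp host handle' log format is an assumption.
import re

SUSPICIOUS = re.compile(r"^[0-9a-f]{4,}.*[0-9a-f]{4,}$", re.IGNORECASE)

def flag_handles(log_lines):
    """Yield (timestamp, host, handle) for handles matching the hex pattern."""
    for line in log_lines:
        try:
            ts, host, handle = line.split()
        except ValueError:
            continue  # skip malformed lines
        if SUSPICIOUS.match(handle):
            yield ts, host, handle

logs = [
    "2015-07-30T09:14 ws12 1a9f3bqz77de",   # looks generated
    "2015-07-30T09:20 ws12 bruceschneier",  # looks human
]
for hit in flag_handles(logs):
    print(*hit)
```

Any single heuristic like this is easy for attackers to dodge once they know it's in use; the more durable signal is probably the traffic pattern itself (one previously unvisited account per day, per infected host).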

I suppose if the defenders tipped their hand, these APT authors would have risen to the occasion, so I can only conclude that this "Hammertoss" thing is already over the heads of most of the organizations they are attacking with it, and they haven't had any need to make something better. I guess mounting a vigilant defense takes time and money, and even the folks who are skilled can get lazy. Attackers only need one way in (or two if they want to keep going after you find and close off the first one).

Anura • August 1, 2015 4:42 AM

@Clive Robinson

I still like my idea of a random unique identifier posted somewhere on the internet that you can find through Google (or put on Freenet), rather than having to look in a specific place. My system isn't designed for that, though; it's designed as a minimal-metadata offline messaging service (providing semi-perfect [imperfect] forward secrecy) as an alternative to email. But the idea is the same.

Clive Robinson • August 1, 2015 6:34 AM

@ lala, moo,

Assuming that the encryption's good, it becomes a game of opsec. Budgets matter there.

If what has been quoted is correct, the malware authors have already failed the OpSec test...

It appears they are connecting to Twitter themselves, which means a history of past posts/checks will leave a trail of IP addresses and network packet times -- further information to filter on and look through.

It's why, when I did my version, I "decoupled it" by using Google but actually posting to random blogs Google scanned, and only using each blog once. Thus the IP and network-time information of both the "bot herder" and the "bots" was completely scattered across the globe, which would make investigators' jobs way, way harder.

Further, the search terms could be relaxed somewhat, because Google returns near-misses if you do it right, which makes investigators' jobs harder still.

Since then, newer features and behaviours of these "too big to take down" services have made different OpSec options and tracking available. And of course blogging itself has changed: blogs are moving onto other "too big to take down" systems, which makes my original method of "decoupling" less effective. But since then I've thought up many more methods, better than the one used in my POC.

And whilst I was, as far as I'm aware, "first to publish", the idea was triggered by odd posts on this blog. I even joked with Bruce that they were "secret messages". Interestingly, they have now stopped, or Bruce / the Moderator have thought up a way to filter them.

A further wriggle I mentioned about my POC was that the Google cache got around the problem of blog owners taking down spam posts. It worked on the race condition between Google's bot and the human moderator needing to sleep and work etc. However, what I did not mention, but should now be obvious, is that doing so would reduce the OpSec "decoupling". I'll leave it to others to work out why I did not mention it at the time ;-)

However, as I posted to @Figureitout on another page just yesterday, most people take an incremental approach to "raising the bar" in OpSec. I gave the example of breaking Enigma: the Germans used it in a way that was just breakable at the beginning of the war, thus it got broken. If, however, they had started with the system they used towards the end of the war, it would not have been broken as easily, if at all. That is, the OpSec bar was never raised enough, or fast enough, to stop earlier breaking techniques being refined and new ones being thought up.

This comes back to the "defence spending quandary": you never know when you are spending too much, but you find out too late when you aren't, because you have "incoming", and you mostly cannot play catch-up.

What many don't realise is that the same applies to both attack and defence when it comes to OpSec. The main difference on the attack side is that the defenders do not have to show their hand early on, so you can proceed with an attack blissfully unaware you are being tracked down, until either the door comes off its hinges or the taser makes you break-dance into the silver bracelets. As we now know, these attackers have not invested enough in their OpSec.

Oh, and there is another problem with defence spending: if you spend enough, those you are defending against will see it as a threat and build up their own defences, and things escalate easily. But even if your enemy does not ramp up, you keep going; after all, you can never have too much defence, can you? Unfortunately you can: defence spending is sunk cost in shiny toys, and at some point, rather than scrap them, someone will want an ROI and start to play with them. Hence Iraq, etc...

Clive Robinson • August 1, 2015 7:46 AM

@ Anura,

I still like my idea for a random unique identifier posted somewhere on the internet that you can find through google (or you can put it on Freenet)

I've had similar but different thoughts, using hidden stores in a mixnet-type anonymity network.

I posted them on this blog, but can't at the moment find the link.

The essential idea was randomised decoupling via the stores: you post an encrypted file on a file store, then send the link via a pubkey-encrypted message to the recipient's distributed hidden mailbox in the mixnet.

Usually @Wael would pop up with the link faster than I could search for it, but he's not been around recently.

So I'll get out my Google-Fu spade and start digging later this Bank Holiday weekend. I'm doing the caveman thing today: cremating food in the offering pit to the god of BBQ and food poisoning ;-)

Coyne Tibbets • August 1, 2015 11:43 PM

This should have been obvious; I think it was inevitable. The artificial URL or IP contact strategies were always lame and subject to tracking.

One of the most secure methods of communication between controller and spy was always the advertisement published openly in a newspaper: the controller only had to place the ad, and it was read by 10,000,000 people -- including the spy.

Even if you figure out that a message is being sent, the question is: to whom? It's not as if any feasible pool of agents can just run out and check on everyone who bought the newspaper.

Hammertoss: same strategy, different media.

rgaff • August 1, 2015 11:53 PM

@Coyne Tibbets

The ad-in-a-newspaper trick doesn't work so well in the modern electronic surveillance world, where you apparently really CAN monitor all 10,000,000 people reading every blog in your city... and if you know the message is being sent, there are far fewer than that looking at that particular message...

Jonathan Wilson • August 2, 2015 1:42 AM

I don't know about targeted malware, but for widespread botnets, why haven't the hackers gone for some sort of peer-to-peer command-and-control setup? The bots would contain a public key (something strong enough not to be cracked by AV vendors etc.), and the bots would relay C&C messages to each other. Messages would be signed by the malware author using the private key and then fed into the system through any infected machine.

With that, it would be pretty much impossible to bring down botnet C&C systems...
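Jonathan's scheme can be sketched with textbook RSA: the implant ships only the public key, so anyone (including AV vendors) can verify and read commands, but only the author can forge them. The tiny primes and raw unpadded signatures below are wildly insecure and purely illustrative:

```python
# Toy illustration of public-key-authenticated P2P C&C: bots accept any
# message whose signature verifies, no matter which peer relayed it.
# Textbook RSA with tiny primes -- utterly insecure, illustration only;
# anything real would use a vetted crypto library and proper padding.
import hashlib

# Key generation, done once by the botnet author; bots ship only n and e.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept by the author

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:          # author-side, needs d
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:  # bot-side, needs only n and e
    return pow(sig, e, n) == digest(msg)

cmd = b"fetch new payload from peer"
s = sign(cmd)
print(verify(cmd, s))                 # bots accept the signed command
print(verify(b"tampered command", s))  # and reject anything altered
```

Note that signing only solves authentication; the discovery problem Clive raises below (how bots find each other in the first place) is untouched by it.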

Clive Robinson • August 2, 2015 3:26 AM

@ Jonathan Wilson,

I don't know about targeted malware but for widespread botnets why haven't the hackers gone for some sort of peer-to-peer command & control setup.

They would if they could get it to work reliably in the sight of smart adversaries.

The first problem with such a network is "discovery": how does a new bot find existing bots, not just initially, but every time one or more bots or communication paths get blocked?

The second problem is the "broadcast model" of communication. The current model (routed via IP, with the service identified by header) favours those wishing to disrupt communication paths.

It also makes "easy mapping" of the botnet possible for observers at choke points, which is effectively what the NSA, GCHQ, et al. do.

This is why using "too big to shut down" services like Google, Twitter, Facebook etc. is the way to go, until anonymity mix networks with both client and server nodes "in the network" become everyday. And to be honest, I cannot see that being allowed to happen by the likes of Comey in LE, and those we don't know by name in the IC.

The big design flaw in Tor is that client and server nodes are not in the network, thus traffic analysis is possible for those with the eyes to see.

Now you could argue that it was a "necessary design choice", and in some respects that may once have been true for a very limited set of users. However, it's a mistake the Five Eyes militaries did not make after WWII, when the "Bletchley crew" invented TA.

If you want to know more about setting up ad hoc networks and resolving the discovery and communications issues, have a look at Ross J. Anderson's home page. He and others looked into how you develop such networks for dust-mote sensors for battlefield use, where disruption is to be expected the moment hostilities start.

Tw1tt3r-St0rm • August 2, 2015 4:11 AM

What if Twitter, given an image, re-encoded it with different compression (and erased the metadata)? The "secret information" would be broken... no need to take over the entire Twitter ecosystem...

r • August 2, 2015 10:29 PM


#1: That is called homogenization... I think there's another word for that kind of systemic scrubbing too, but I can't place it, sorry.
#2: Maybe Twitter doesn't want to? These games with companies being forced to divulge master keys run both ways, in my mind: maybe images aren't being re-encoded because the Five Eyes are exfiltrating using said methods too.
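Tw1tt3r-St0rm's point is easy to demonstrate in miniature: hide bits in pixel least-significant bits, then model re-encoding as coarse quantisation. The pixel values and quantisation step below are arbitrary, and real JPEG re-encoding is lossier still:

```python
# Toy model of why platform re-encoding breaks fragile steganography:
# a bit is hidden in each pixel's least-significant bit, and
# "recompression" is modelled as snapping values to a coarse grid.
# All values are invented for illustration.
def embed(pixels, bits):
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels):
    return [p & 1 for p in pixels]

def recompress(pixels, step=4):
    """Crude stand-in for lossy re-encoding: quantise each value."""
    return [min(255, round(p / step) * step) for p in pixels]

cover = [103, 52, 217, 88, 140, 9, 201, 64]
secret = [1, 0, 1, 1, 0, 1, 0, 0]

stego = embed(cover, secret)
print(extract(stego) == secret)               # survives a faithful copy
print(extract(recompress(stego)) == secret)   # destroyed by re-encoding
```

Robust schemes embed in compression-surviving features (e.g. DCT coefficients) rather than raw pixel LSBs, which is presumably why HAMMERTOSS's images survive Twitter at all.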



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.