Stealth Falcon: New Malware from (Probably) the UAE

Citizen Lab has the details:

This report describes a campaign of targeted spyware attacks carried out by a sophisticated operator, which we call Stealth Falcon. The attacks have been conducted from 2012 until the present, against Emirati journalists, activists, and dissidents. We discovered this campaign when an individual purporting to be from an apparently fictitious organization called "The Right to Fight" contacted Rori Donaghy. Donaghy, a UK-based journalist and founder of the Emirates Center for Human Rights, received a spyware-laden email in November 2015, purporting to offer him a position on a human rights panel. Donaghy has written critically of the United Arab Emirates (UAE) government in the past, and had recently published a series of articles based on leaked emails involving members of the UAE government.

Circumstantial evidence suggests a link between Stealth Falcon and the UAE government. We traced digital artifacts used in this campaign to links sent from an activist's Twitter account in December 2012, a period when it appears to have been under government control. We also identified other bait content employed by this threat actor. We found 31 public tweets sent by Stealth Falcon, 30 of which were directly targeted at one of 27 victims. Of the 27 targets, 24 were obviously linked to the UAE, based on their profile information (e.g., photos, "UAE" in account name, location), and at least six targets appeared to be operated by people who were arrested, sought for arrest, or convicted in absentia by the UAE government, in relation to their Twitter activity.

The attack on Donaghy -- and the Twitter attacks -- involved a malicious URL shortening site. When a user clicks on a URL shortened by Stealth Falcon operators, the site profiles the software on a user's computer, perhaps for future exploitation, before redirecting the user to a benign website containing bait content. We queried the URL shortener with every possible short URL, and identified 402 instances of bait content which we believe were sent by Stealth Falcon, 73% of which obviously referenced UAE issues. Of these URLs, only the one sent to Donaghy definitively contained spyware. However, we were able to trace the spyware Donaghy received to a network of 67 active command and control (C2) servers, suggesting broader use of the spyware, perhaps by the same or other operators.
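Querying a shortener "with every possible short URL" is feasible because its keyspace is tiny. A minimal sketch of the enumeration idea (the domain and the two-character `[a-z0-9]` key format are illustrative assumptions, not the actual Stealth Falcon infrastructure):

```python
import itertools
import string

def short_codes(alphabet=string.ascii_lowercase + string.digits, length=2):
    """Yield every possible short-URL key over the given alphabet."""
    for combo in itertools.product(alphabet, repeat=length):
        yield "".join(combo)

# A 2-character key over [a-z0-9] gives only 36**2 = 1296 candidates --
# small enough to request exhaustively and sort hits from 404s.
candidates = [f"https://shortener.example/{code}" for code in short_codes()]
```

With a real target, each candidate would then be fetched and non-404 responses collected as bait content; here the list is only generated, not requested.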

News story.

Posted on June 2, 2016 at 7:49 AM • 37 Comments

Comments

Spyware Inc. • June 2, 2016 8:11 AM

When a user clicks on a URL shortened by Stealth Falcon operators, the site profiles the software on a user's computer, perhaps for future exploitation, before redirecting the user to a benign website containing bait content. We queried the URL shortener with every possible short URL, and identified 402 instances of bait content which we believe were sent by Stealth Falcon, 73% of which obviously referenced UAE issues. Of these URLs, only the one sent to Donaghy definitively contained spyware.

Perhaps the smarter journalists and freedom fighters are running Qubes-Whonix and opening likely vicious bait-traps (shortened urls, poisoned pdfs and documents) with a disposable VM whilst running the entirety of their system across the Tor network with a Tor-ProxyVM and only using the hardened Tor Browser?

Or do they still run Windoze with a million proprietary applications and plug-ins, use the 'cutting edge' Microsoft Edge browser with all scripts enabled, in between logging in and out of GMail and updating their Facebook profiles/Twitter spam?

Who? • June 2, 2016 8:56 AM

@ Spyware Inc.

I wish the answer were the former, but I guess it really is the latter.

Avoiding government surveillance is extremely difficult, if it is possible at all. However, most people look down on even the most basic countermeasures, like running a reasonably secure operating system.

Your name says it all: "Spyware Inc." Why develop advanced malware when nearly all targets (ranging from activists to nuclear facilities) run Microsoft Windows or Apple's OS X?

People do not listen, people do not learn, even when their own lives are at risk.

Spyware is a great business because it is easy with some help from targets.

Who? • June 2, 2016 9:00 AM

Why develop advanced malware when nearly all targets (ranging from activists to nuclear facilities) run Microsoft Windows or Apple's OS X?

...or SCADA systems, or unmaintained software.

Sometimes it is the manufacturer's fault for failing to provide required security updates, but other times it is the target's fault for choosing the wrong tool for the task.

Clive Robinson • June 2, 2016 9:02 AM

It's unfortunately what you'd expect from people with too much money and a fear of losing their autocratic position.

Point the finger at the same sort of people in the US or UK and the results are a little more subtle, but no less effective or deadly.

I fully expect to see a lot more of this behaviour, which is why I generally urge people to take precautions about how they go about things.

There is, for instance, no reason why you should use modern email clients that download attachments and the like in the background; there are quite a few nasty tricks that can leverage that to your disadvantage.

Likewise, as I keep saying, you need to separate, where you can, your vulnerable "online" systems, where you browse and download email, from your "offline" systems, where you do your confidential work.

We need many more workable systems, such as data guards that malware cannot cross, to serve as intermediate systems acting as data drops/pumps between your offline and online machines. One such approach is a printer on the "online" machine and a scanner with OCR on the "offline" machine. Due to unknown "security" functions in modern printers (to stop you printing currency, etc.), it is not a solution that works safely the other way -- from the "offline" to the "online" machine -- with more modern equipment.

The thing is, it's not just journalists who need to take these precautions; anyone with any kind of trade secret, or other commercial or similar secret, needs to take the same precautions.

paul • June 2, 2016 9:20 AM

"Sometimes it is the manufacturer's fault for failing to provide required security updates, but other times it is the target's fault for choosing the wrong tool for the task."

I don't really buy this. Pretty much by definition, individual targets of state malware have a huge pile of other jobs to do, some of which they will be fired or shot for not doing. Yes, they should ideally be choosing better tools. But better tools should be easily available, selectable and configurable without becoming a part-time security expert on top of all the other jobs. (And, of course, even the best tools are ultimately not going to protect a target from a determined state-level attack.)

Daniel • June 2, 2016 10:10 AM

Likewise, as I keep saying, you need to separate, where you can, your vulnerable "online" systems, where you browse and download email, from your "offline" systems, where you do your confidential work.

Because this is difficult to do in practice. There is an inherent trade-off between security and usability. This is true not only in hardware and software but most vividly in operational security. Self-regulation/self-discipline is not a characteristic many people have. So whenever I read comments like Clive's I perceive not useful advice but someone railing at the human condition.

Step back and analyze for a moment: if humans were experts at compartmentalizing their behaviors modern society probably would have never evolved in the first place.

Mike D. • June 2, 2016 10:16 AM

I'm amused that people think journalists have any choice at all in the tools they use. Maybe if they're independent. But if they're part of an organization? They're using whatever the IT gods tell them to, and when it comes to office software, that's whatever the C-level folks told them to buy.

Glenn Greenwald is an outlier.

Exasperated Programmer • June 2, 2016 12:26 PM

@Daniel

"Because this is difficult to do in practice. There is an inherent trade-off between security and usability."

Yes... it's difficult... but it shouldn't have to be... it's POSSIBLE to design systems that aren't so difficult... why do engineers refuse to do so?

Yes, there are inherent tradeoffs... But it doesn't HAVE to be anywhere near as bad as it is... It's possible to make far, far more secure systems that are far, far easier to use than what currently exists... why do engineers refuse to create such systems? It's exasperating!

Look, I'm a programmer. I write software. I know what I'm talking about. I work in a company. It's like pulling teeth to get A SINGLE ONE of my colleagues to learn ANYTHING about security!!!!! It's soooo infuriating! What's the matter with everyone? sigh.

I mean, right now I'm working on a financial application for a customer who is FREAKING OUT that we're "TESTING TOO MUCH"!!! He wants us to leave large parts of it completely untested... Does he want a guarantee that his customers will have their money stolen? What's the matter with everyone? AAAARGH!

Babak • June 2, 2016 1:52 PM

If Rori Donaghy does not also write against the Iranian regime, then I bet 100% he is an Iranian agent or receives money from Iran.

Citizen Journalist • June 2, 2016 1:54 PM

@Mike D.

Glenn Greenwald is not an outlier, when you consider all the "citizen journalists" everywhere nowadays. Everyone with a cell phone camera can now become a journalist, tracking and recording police brutality for example, and exposing it by publishing it online for all to see...

When you consider this kind of "journalism" then everyone needs to take care of their security themselves, not just depend on some IT department for it. Since the average Joe can't learn about such things, security needs to become the default way things are set up in every system... instead of some kind of rare thing.

Jesse Thompson • June 2, 2016 2:28 PM

@Exasperated Programmer

While I feel your pain, you must at least concede that the "too much testing" you refer to does carry a cost (in your client's dimmest view that cost is minuscule, but even in that dim view it is nonzero), and that the software's primary functionality is then saddled with the responsibility of bearing that cost by pulling in sufficient revenue to counterbalance it, as well as keeping the rest of the client's operations afloat.

If this is something like one or more $100/hr engineers spending a fair portion of a work week dedicated to this testing phase, then the financial liability in question rivals the purchase of a new car. Not to mention the additional week of money left on the ground this would lead to since the software couldn't yet be introduced into production.

Security can never be flawless, it is always a balancing act between "value sacrificed to threat deterrence and recovery" and "value at risk multiplied by magnitude of risk". Your client's job is weighing costs like this, so they most likely aren't insane but are simply arriving at different figure estimates on each side of this equation. So, that's the best battleground on which to engage them: they understand the "what you are charging them" side so clarify the value at stake and the magnitude of risk to that value, including why the magnitude is what it is and how your tests relate to decimating that figure. :3

r • June 2, 2016 2:50 PM

@exasperated programmer,

"I mean, right now I'm working on a financial application for a customer who is FREAKING OUT that we're "TESTING TOO MUCH"!!! He wants us to leave large parts of it completely untested... Does he want a guarantee that his customers will have their money stolen? What's the matter with everyone? AAAARGH!"

that is something you will probably want to see your company lawyer about, and retain some sort of documentation of.

for your own benefit and sanity, in case you're forced to release the project abruptly due to management.

maybe even talk to a personal lawyer about it privately.

r • June 2, 2016 2:53 PM

@jesse thompson,

thank you for clarifying the other (financial/investment) side of that fence.

Exasperated Programmer • June 2, 2016 3:20 PM

@Jesse Thompson:

Yes... testing is a cost... in the same way that making sure anything actually works, instead of just pretending/lying that it works, is a cost... Lopping off the testing as "too expensive" is an absolute GUARANTEE that some part (maybe small, maybe major) of that untested part WILL NOT WORK. If the app touches people's money, this means their money will be stolen. You're guaranteeing failure. If you can't afford to write an app that works, perhaps you should not be writing one that touches people's money. Stay with apps that don't matter.

Now... there could be more efficient ways of performing testing that are less expensive... but that's a different issue than "too much testing" (which implies do less testing)... that's "the wrong kind of testing" (which implies do the testing differently, rather than leaving parts untested). You see the difference?

@r:

While your advice to seek a lawyer might be considered good advice, and what society has grown to accept... this is not the right way things should be. The right way is to try to fix things, rather than just CYA.

@Jesse Thompson, @r:

I noticed neither of you addressed the first issue I brought up, the one where none of my fellow programmers want to learn about security, they prefer to keep their heads in the sand, afraid of change... what do we do about this? I suspect it's an industry-wide problem, otherwise there would be a lot better security across the board... It's possible to have vastly better security! People who create things (hardware and software) just have to start getting a clue! I mean, at least try. At least read, and educate yourself about it. At least have an interest, instead of a resistance.

Richard • June 2, 2016 4:19 PM

This is interesting because it highlights a previously little discussed attack vector - using a malicious "URL Shortener" site to mount website impersonation attacks on the cheap for an attacker who lacks the resources and access needed to do a full-on DNS spoofing or router subversion attack.

Most folks don't have a clue just how easy it is to spoof web pages, and just how dangerous this type of attack can be.

The safe approach is to NEVER assume that any URL found on some random web page or email actually points to where it says it does whether it is 'shortened' or not.

To really understand why web pages can be so easily spoofed check out:

HTTPS://en.wikipedia.org/wiki/DNS_spoofing

K15 • June 2, 2016 4:58 PM

How's that govt web security org coming along, Bruce?
Does Citizen Lab autoreply to emails?

Sucker • June 2, 2016 5:23 PM

@Richard

Your link is excellent! I learned so much!! Thank you!!! ;)

Ergo Sum • June 2, 2016 5:48 PM

@Exasperated Programmer...

Yes... it's difficult... but it shouldn't have to be... it's POSSIBLE to design systems that aren't so difficult... why do engineers refuse to do so?

May I gently remind you that systems are nothing more than a collection of programs that are written by, well, programmers. While the engineers try to overcome the vulnerabilities of the software platform, it is not always possible. And even if they could, they would also need to protect against application vulnerabilities as well. It's a no-win situation for engineers until the programmers get their act together. I should've said never...


r • June 2, 2016 6:33 PM

@exasperated programmer,

I'm a security noob, but I'm interested in its implications and applications.

My opinion, as an industry outsider is that your owners, managers and coworkers will not change their mentality until liability or responsibility play larger roles in the commercial environment.

If they don't want to learn, they're just skating by.
When it comes to security, we're all just skating by.

Somebody around here, or maybe on HN, posted an article on code quality and 'superstar' programmers... about them not really being 'superstars' due to the introduction of 'easy/early-out' errors.

It costs $$$ to audit code, and code that isn't well structured or tested prior to release costs less money to produce.

The lawyer recommendation was due to the 'cyber insurance' issue; I would hate to see you in trouble for wanting to do the right thing and being held responsible for not discovering implementation errors in an immature release. We're kind of trapped between "don't jeopardize deadlines" and "don't introduce bugs/implementation errors."

Code is usually active for years before being retired, if cyber insurance policies change after your program is deployed are new liability issues incurred?

This is why I make that statement: which way is the wind blowing?
Who's responsible in @Bruce's previous malware thread? The employees? The Banks? The FED? Swift?? North Korea???

Insurance only covers so much, when do we go after the implementors?

Exasperated Programmer • June 2, 2016 7:44 PM

@Ergo Sum

There is a term "software engineer"... I was not intending to say that "programmers" create crap and a separate group called "engineers" has to clean it up... When I said it's possible to do better, I meant the full stack of every technology in existence, not just a software application. But so few of the very people who build everything electronic are interested, care, or are even trying; it's very disheartening...

Nate • June 2, 2016 9:32 PM

Exasperated Programmer: "Yes... it's difficult... but it shouldn't have to be... it's POSSIBLE to design systems that aren't so difficult... why do engineers refuse to do so?"

Ergo Sum: "May I gently remind you that systems are nothing more than a collection of programs that are written by, well, programmers. While the engineers try to overcome the vulnerabilities of the software platform, it is not always possible."

@Ergo: that's the surface answer, and I suppose it's correct as far as it goes. But it doesn't answer the original question.

@Exasperated: YES. YOU ARE ASKING THE RIGHT QUESTION! PLEASE KEEP ASKING THIS!

I am exactly as frustrated as you are with just how badly flawed our system architectures are. And with how difficult it is to make programmers and system engineers and language designers understand that they need to fix it.

If the answer to "why do our systems keep getting hacked?" is "well, programmers make mistakes, you can't stop that", then that's THE WRONG ANSWER! It is simply incorrect. No two ways about it. We CAN stop these mistakes from breaking our systems! We simply choose not to, because it requires redesigning our entire systems - hardware, languages and operating systems - from scratch.

The correct question is "why do programmer mistakes lead to fatal security compromises? Why isn't the system structured in such a way that these mistakes - which we KNOW will ALWAYS happen because programmers are human - CAN'T violate the predetermined system invariants established by a proof-checked, algebraically correct lower layer? Our software is based on maths - some of it 50 to 100 years old - why have we not actually deployed the maths we have?"

And the answer is one of: "Because we choose not to believe that our architecture is so damaged. Because we're lazy. Because we don't want to believe the cost of security breaches. Because it's someone else's problem. Because the OS-to-hardware layer is not owned by us and we're not allowed to touch it. Because we don't have the time or money and Market Forces (tm) want us to ship dangerous junk, fast, and make it our customer's problem. Which it will inevitably become."


We're in roughly the position of the 18th-19th century steam engine industry, with boilers exploding every other day and killing bystanders, but we haven't yet grasped that it's our responsibility to make boilers that DON'T explode.

We CAN design high assurance systems that don't explode, at least that don't explode in some of the extremely dumb ways our current software does.

We could START by, for example, applying the lessons of "functional programming" - every component is a pure function with no side effects - to the operating system itself. Remember in first-year programming class, you learned that "global namespaces are bad" and you should have local variables? And then over in introduction to OSes, what do we do? We put all the data on the computer into one giant global namespace, called the filesystem.

How long have we known that local namespaces are good? Since the 1970s, I think? And yet we haven't absorbed even that one lesson.
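The local-namespace lesson can be shown in miniature: code with ambient access to the filesystem can touch anything, while code handed an explicit handle can touch only that handle. A hedged sketch of the object-capability idea (the function names are invented for illustration):

```python
import io

# Global-namespace style: any code, anywhere, can reach any path.
def log_global(message):
    # Ambient authority: nothing stops this function from opening
    # any other file on the system instead of this one.
    with open("/var/log/app.log", "a") as f:
        f.write(message + "\n")

# Capability style: the callee can touch only the handle it was given.
def log_scoped(sink, message):
    sink.write(message + "\n")

sink = io.StringIO()  # the caller decides exactly what the callee may touch
log_scoped(sink, "hello")
```

In the second style, a bug (or compromise) in `log_scoped` is confined to the one object it was passed, which is the kind of invariant a lower layer could enforce.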

And that's why we will continue to fail at security, because for decades now the OS design people haven't been applying what the CS people know about programming. And the hardware people haven't caught up with the OS people. And the Internet of Things think the entire Internet is a private secure LAN.

"Hey, let's make a universal serial bus where, when you plug in some random piece of plastic you found in the car park, it can install a root kernel driver and write to the entire system RAM, bypassing all security."

"Sure, sounds fine to me."

"Great, now let's put this in a car and plug its brake system into the Internet. What's the worst that could happen?"


Ergo Sum • June 2, 2016 9:38 PM

@Exasperated Programmer...

You didn't intend to say so, but that was the outcome. The engineers aren't as innocent in this equation either, as I indicated in my response. The reality is somewhere between our opinions, give or take a couple of ticks, or tocks. We're good...

Nate • June 2, 2016 9:43 PM

And a footnote:

"We're in roughly the position of the 18th-19th century steam engine industry, with boilers exploding every other day and killing bystanders, but we haven't yet grasped that it's our responsibility to make boilers that DON'T explode."

This in itself would be bad enough. But a huge complicating factor is that every national government has decided that since there's now an explosive steam boiler in every company, every house and every pocket, on a hair trigger ready to blow - that this network can, should and must be weaponized.

For the good of the realm, to protect us from anarchists and Bolsheviks and Zeppelin pirates, of course. Only Her Majesty's most trusted lieutenants and generals will have the keys to explode the boilers. They will only use them in the most dire need. Of course we can't be told when and how they'll detonate them; such information is much too sensitive.

Why don't YOU want an explosive steam boiler in your pocket, sir? Did you wrap it in tape to prevent it exploding? That, sir, is treason! Are you a Zeppelin pirate in disguise? Or worse, a Martian invader? To the Tower with you!

And that's the deeper reason why we can't get our fundamental computing infrastructure fixed. What looks terribly dangerous and criminally negligent engineering to us is an asset to some very powerful people with very deep pockets.

Nate Fan • June 3, 2016 1:42 AM

@Nate

OMG that was awesome, you saved me some typing, thanks. What's worse is that Snowden, I think, knows the situation, and either chooses or is coerced into not addressing it quite as head-on as you did. Right now we are in a security-deprived environment, and it is going to take fundamental change at the NSA (beyond what Snowden was able to achieve in the last 3 years) to get us to greener pastures. The NSA pretty much salted the landscape... trying to be a gardener is a tough job sometimes.

Nate Fan • June 3, 2016 1:52 AM

@Nate

Actually I disagree with about half of what you said, but the other half was said so well I had to verbalize the mad props. Specifically, I disagree with the idea that there is a top down mathematically provable right way to fix things like you describe. What I believe will get us to the better situation is lowering the barriers to entry to competition. Focused efforts by a few groups can and will be nullified by NSA efforts that see them as a threat to the kind of (im)balance they have been enjoying. Of course they know this. It's their cyber world, not ours, they've made that clear enough.

Exasperated Programmer • June 3, 2016 8:36 AM

So how do we get our sleeping peers to wake up and give a crap about security? Frankly, I see education of all the people in the industry (hardware, software, the full stack of everything) as the solution, not just fixing the NSA... But so many are resisting education; that's what makes it extra hard and slow...

K15 • June 3, 2016 10:45 AM

Does anyone know if Citizen Lab sends an autoreply to acknowledge receiving an email?

Check Your Premises • June 3, 2016 1:06 PM

@Exasperated

I think you need to look at the situation with a little more nuance. When things are in bad enough shape vis-à-vis the balance of power between the individual and the state, incentives change in an often subtle way. It's sort of like Hugh Akston and the diner. Check your premises. Some people like to be able to feed and provide for their families. Taking a principled, unyielding stance alongside Snowden is probably not a very profitable thing to do under a Trump, or even a Hillary, presidency. In fact, I don't recall any remotely high-profile candidates that seemed to be anywhere near the right side of these important and nuanced issues. This is a relative dark age in the realm of cyber security. Get used to it. Hope for quick change died with Snowden.

Exasperated Programmer • June 3, 2016 4:28 PM

@Check Your Premises

You're suggesting that people don't want to learn how to do things more securely because they're all afraid they'll be thrown in the gulag or be stuck in Russia forever... I'm sure what you're saying does exist somewhere, but this has not been my experience so far... The friends and colleagues I know are not afraid; they're just not interested. We haven't had enough literal rivers of blood flowing down every street in every podunk town in the USA to produce real fear. It's just disinterest, from what I can see. I'm still baffled why, though. Nobody has the least bit of pride in what they do? They just live paycheck to paycheck and that's all that matters in their life? Do they really have to wait for the rivers before they'll notice anything amiss? Sigh.

Nancy Nuance • June 3, 2016 4:48 PM

@Exasperated

Nuance matters. Complexity yields subtlety. Disinterest in that which you cannot control is perhaps natural. Or rather disinterest in changing it. I suspect that natural survival traits lead those 'disinterested' people to actually serve their own self interests by mitigating the effects of that which they cannot change. If you get flooded too many times, you move a little further away from the rivers, or engineer some alternate solution. Those people may be disinterested in engineering global scale weather control to prevent the river flooding in the first place (upstream so to speak), but that is of course because such a task is beyond them. However they clearly are interested in the risks, and will adjust their lifestyles to mitigate those risks as the demands of a spectrum of rewards and punishments dictate.

@Moderator @sucker @richard, there is an opportunity for a little bit of engineering on this forum here. Though of course there are upsides in keeping it simple as well.

Exasperated Programmer • June 3, 2016 5:27 PM

@Nancy Nuance

You have a good point... except that you're sidestepping my main point: I'm frustrated that people refuse to have the least bit of interest in changing what they CAN control even... Even highly technical people that should know better.

The flash flood is coming down the river, there's plenty of warning, and they're busy playing in the water and ignoring the warning. They could swim to the side and get to higher ground... nope... they LIKE DROWNING it seems! So frustrating!

Just read a book, read a blog, show the slightest bit of interest in securing the very software you're writing a little bit better... No, "industry practices" are not enough, they're falling short... everything's getting hacked right left and center... Why don't you want to improve? Why why why? It's not even that hard, just open your eyes and read a little... You don't have to be perfect, I'm not asking for perfection, I'm asking for alertness and trying and small improvements...

Nancy Nuance • June 3, 2016 5:50 PM

@Exasperated

please be more specific in your prognostications about this flash flood coming down the river we have apparently been warned about. As best I can tell, the waters rose slowly over decades and then Snowden told us all that it was OK to say out loud that the emperor was skinny dipping in the deep waters.

Anti Virus Software and Windows Updates Will Not Save You

Exasperated Programmer • June 3, 2016 6:12 PM

@Nancy Nuance

"Well, the water just rose another 4 feet in the past 5 minutes 2 miles upstream... and it's not done rising, it's still accelerating... that water's going somewhere... you're downstream... you do the math, you were a straight A student in all your math and physics classes, after all..."

"Pffft, that's literally miles away, it has nothing to do with what I'm doing here..."

herman • September 10, 2016 4:12 AM

A little paranoia can go a long way to keep the dunderhead next door from raiding your bank account, but defending against a determined state actor is very hard.

Clive Robinson • September 12, 2016 2:24 PM

@ herman,

but defending against a determined state actor is very hard.

The hardness depends a lot on how easy you want to make it for them.

With a little thought you can make their problem one of physical rather than electronic access.

Mankind has spent a couple of thousand years or more developing physical access protection, and what you need to know in that respect is fairly well known.

Thus the trick is not to fight them on a battlefield of their choosing but on one of your choosing, and often you will find there is a natural home advantage, a little like putting a castle on top of a hill or other structure where they additionally have to fight gravity.

But as with all physical security, you have the advantage that security by obscurity is a valid mechanism. A study of Cold War fieldcraft should give those who can see between the lines ideas for drawing their own unique solutions.


Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.