Hacking Team's Purchasing of Zero-Day Vulnerabilities

This is an interesting article that looks at Hacking Team's purchasing of zero-day (0day) vulnerabilities from a variety of sources:

Hacking Team's relationships with 0day vendors date back to 2009 when they were still transitioning from their information security consultancy roots to becoming a surveillance business. They excitedly purchased exploit packs from D2Sec and VUPEN, but they didn't find the high-quality client-side oriented exploits they were looking for. Their relationship with VUPEN continued to frustrate them for years. Towards the end of 2012, CitizenLab released their first report on Hacking Team's software being used to repress activists in the United Arab Emirates. However, a continuing stream of negative reports about the use of Hacking Team's software did not materially impact their relationships. In fact, by raising their profile these reports served to actually bring Hacking Team direct business. In 2013 Hacking Team's CEO stated that they had a problem finding sources of new exploits and urgently needed to find new vendors and develop in-house talent. That same year they made multiple new contacts, including Netragard, Vitaliy Toropov, Vulnerabilities Brokerage International, and Rosario Valotta. Though Hacking Team's internal capabilities did not significantly improve, they continued to develop fruitful new relationships. In 2014 they began a close partnership with Qavar Security.

Lots of details in the article. This was made possible by the organizational doxing of Hacking Team by an unknown individual or group.

Posted on July 27, 2015 at 6:17 AM • 29 Comments

Comments

Who? • July 27, 2015 9:34 AM

@ Chris

It looks like defense contractors are not the only hyenas here: when they are not stoking their own real-world wars --ideally working both sides-- they fight other contractors. It seems cyberweapon manufacturers are no more honorable. Sad.

rgaff • July 27, 2015 10:01 AM

I wish people would stop using the term "cyberweapon"... all it does is put the general public in the mindset that computers should all be outlawed... as if cars and rocks should be outlawed because they too can be used for evil purposes...

1111111111111111 • July 27, 2015 10:41 AM

@ Chris, Who?

Thinning out the competition is probably good for the business sector… like gangs doing drive-by shootings of each other… done all the time.

rgaff • July 27, 2015 12:38 PM

"Finding mistakes" is not "manufacturing weapons"... if it is, then we "weaponize" each other every day... like if I correct something you say, welp, I just "manufactured" a weapon. Ridiculous, isn't it? So also, finding a mistake in a computer program is not manufacturing a weapon either, just because that mistake, once found, can be used for nefarious purposes. If you outlaw or regulate finding mistakes because they can be used for evil, then you effectively ban or squelch the fixing of them, because then they can't be looked for. So all people who use the term "cyberweapon" are either knowingly or unwittingly playing into the hands of those who are purposefully trying to make things less secure.

Coyne Tibbets • July 27, 2015 4:04 PM

So in this, it becomes even more clear: Hacking Team is either defense or a contractor so tightly associated with defense as to make no difference.

The only question that remains is: who doxxed them? If it was Gamma Group as suggested by @Chris, which is also defense or a defense contractor, UK-based, then this starts to look like internecine conflict, either retaliatory or competitive.

What a wonderful thought, that defense contractors might dox each other to prove they're more competitive. Such a positive benefit to national defense.

Dirk Praet • July 27, 2015 6:17 PM

@ Coyne Tibbets

If it was Gamma Group as suggested by @Chris, which is also defense or a defense contractor ...

It isn't. The @GammaGroupPR Twitter handle taking credit for both the Hacking Team and Gamma Group doxxing has (almost certainly) nothing to do with Gamma Group UK.

J af Silberg • July 27, 2015 8:13 PM

Rook Security has published a tool that scans the computer for binaries from the Hacking Team:


Milano: Hacking Team Malware Detection Utility
https://www.rooksecurity.com/hacking-team-malware-detection-utility/

"The Milano utility, which we, Rook Security, are sharing freely, scans for the presence of files associated with the recent Hacking Team breach. For this first iteration of our tool, we have conducted analysis on 93 Windows binaries released from the Hacked Team breach. These files were specific to the projects found on the Hacked Team git projects."
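A scanner of this kind boils down to hashing files on disk and comparing the digests against a list of indicators of compromise. As a rough sketch of the idea (this is a hypothetical illustration, not Milano's actual code; the hash list here is a stand-in, using the SHA-256 of an empty file so the example is self-checking):

```python
import hashlib
from pathlib import Path

# Hypothetical IOC list. Milano's real list came from analyzing the 93
# Windows binaries in the leaked git projects; here we use the SHA-256
# of an empty file purely as a placeholder.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: Path) -> list[Path]:
    """Walk `root` recursively and return files matching the IOC list."""
    hits = []
    for p in root.rglob("*"):
        if p.is_file():
            try:
                if sha256_of(p) in KNOWN_BAD_SHA256:
                    hits.append(p)
            except OSError:
                pass  # unreadable file: skip rather than abort the scan
    return hits
```

Chunked reading keeps memory flat on large binaries, and swallowing `OSError` lets the scan continue past locked or permission-restricted files, which is what you want from a triage tool.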


Tadashi Togo • July 27, 2015 8:35 PM

Yes, I've been following this story a bit. It is different from FinFisher and many similar stories for a number of reasons. For one important reason, the aid they gave to totalitarian countries against dissidents seems significantly less, and was curtailed. Not sure if a more recent story has proved otherwise. Another point is that the CEO actually had skill. He made a poor decision career-wise, but skill-wise he is not some brainless face who does not understand the technology.

From some emails leaked, as reported, they also paid hackers, who otherwise might have just sold their attacks on the black market. Lesser of two evils, but I do not reckon people in Russia and Eastern Europe and some other places have the same options as Americans and Western Europeans may have.

Probably a higher percentage of these attacks will hit real criminals, as opposed to indiscriminate attacks that hit everyone to extract cash from them. Probably relatively low percentage are innocent people caught up in some Kafkaesque nightmare.

They also are paying researchers much, much better than what major industries do, and what vulnerability-finding mills do. Maybe that will encourage major corporations to be more competitive.

Somewhat controversial statements, but real.

Tadashi Togo • July 27, 2015 8:51 PM

Reading from this version:
http://www.wired.com/2015/07/hacking-team-leak-shows-secretive-zero-day-exploit-sales-work/

They did a very poor job at advertising and soliciting hackers. SyScan was a good enough idea, but they should have used LinkedIn -- groups and individuals. They could have found plenty of poorly paid vulnerability analysts that way. How many people can you reach at a conference, anyway? Conferences are good for meeting people you have already contacted, or for meeting major speakers who have found useful vulnerabilities.

Soliciting the service companies was a very bad idea.

They could have contacted their employees, easily enough. Most of them are not even classified workers, but work under NDAs, at best.

Similar firms have performed much better at communicating such needs, and paid far less, e.g. ZDI, iDefense, etc.

INOC | NOC for Data Center • July 27, 2015 10:17 PM

The use of the term "cyberweapon" is both quaint and amusing. A powerful gaming rig or even an innocent soccer mom's laptop can be considered a cyberweapon? What next, Android smartphones too?

Dirk Praet • July 28, 2015 6:00 AM

@ Tadashi Togo

Probably relatively low percentage are innocent people caught up in some Kafkaesque nightmare.

From the 2015 Privacy International Report: "Hacking Team has a consistent track record of delivering its software, including the RCS, to government agencies with records of human rights abuse and unlawful surveillance, and its products have been repeatedly used to conduct unlawful surveillance of journalists, activists and human rights defenders."

Now imagine it was you or one of your loved ones being one of those deliberately targeted "innocent people". Would you still hold the same opinion?

Koke • July 28, 2015 8:21 AM

@ Dirk Praet

I doubt intended use is specifically worded in their sales contracts. They have no way of knowing what their customers are going to do with their products -- or so they claim at the point of sale. Guns don't kill people, people kill people.

"its products have been repeatedly used to conduct unlawful surveillance"

The lawfulness argument doesn't hold because totalitarian states can shape the law any way they want; thus the means don't necessarily justify the end.

PSPI • July 28, 2015 10:07 AM

@Koke "the lawfulness argument doesn't hold" This is the US propaganda line, continually repeated to try and get everybody nodding yes. The typical premise is, Everybody Does It. Skeptical parrots that obsessively. Here is why it's bullshit.

When states act covertly, that simply reinforces the existence of the prohibitions written into every country's law. Clandestine action does nothing to establish a legal rule or change its content. Rule of law presupposes that the rule be public and the UN Charter bans secret treaties, so secret practice can't establish a precedent or an agreement. Ask the International Law Association’s Committee on Formation of Customary (General) International Law: a “secret physical act (e.g. secretly “bugging” diplomatic premises) is probably not an example of the objective element. And if the act is discovered, it probably does not count as State practice unless the State tries to assert that its conduct was legally justified.” And in fact the USG defines CNE as armed attack, the most illegal thing.

Dirk Praet • July 28, 2015 7:42 PM

@ Koke

They have no way of knowing what their customers are going to do with their products or so they claim at point of sale.

I wonder if that would also hold up if they had been selling nuclear technology to certain clients of theirs.

Guns don't kill people, people kill people.

That argument doesn't fly in Europe, mate. And as to the lawfulness: if they have broken any international treaties, export controls or embargoes the exporting country is a signatory to, then there is a price to pay. That's what international law is about, irrespective of the domestic legislation in the buyer's country.

rgaff • July 28, 2015 7:53 PM

@Dirk Praet

Selling "mistakes in software" on the black market to criminals to hack people with, instead of fixing them... reprehensible... yes... but... do you really want to equate "mistakes in software" with "nuclear technology" and "guns"?? Do you really want to make it ILLEGAL to FIND a MISTAKE??? Think about the implications of that for a moment...

Dirk Praet • July 28, 2015 8:11 PM

@ rgaff

Do you really want to equate "mistakes in software" with "nuclear technology" and "guns"??

Excuse me, but I didn't bring up the comparison with guns. @Koke did. And yes, it's flawed. That's what I was trying to point out in the first place. The comparison with nuclear technology is equally flawed, and only relevant in the sense that this technology too can be used for either good or bad without the vendor knowing what the buyer is going to do with it.

And no, I am not advocating a ban on security research, quite to the contrary. But that's not the business Hacking Team & co. are in. They are in the business of buying, developing and selling weapons-grade exploits to government agencies, which I believe should indeed be subject to regulation the same way traditional weapons are.

Hank • July 29, 2015 3:57 AM

Selling found mistakes isn't the same as selling buggy software os, or placing intentional backdoors out of patriotic duty or not. Not the same as guns. Nope.

Clive Robinson • July 29, 2015 4:44 AM

@ Hank,

Selling found mistakes isn't the same as selling buggy software os, or placing intentional backdoors out of patriotic duty or not. Not the same as guns. Nope.

I don't know what part of the world you are from, but that's not how the law, judiciary, legislators and executive in many WASP jurisdictions see it...

In fact even the US government is trying to make not just the selling but also the researching and communication of computer vulnerabilities equivalent to Weapons of Mass Destruction via the Wassenaar Arrangement.

You sell a gun illegally in the US and in many cases even if caught you will not get prosecuted, unless there is a Federal-level or "political motivation" behind it, as the frequent "stings/setups" have shown, or it's related, or can be made to appear related, to other "hot-button political interest" issues which have been made crimes (such as writing poetry as a Muslim).

For the same "political hot-button interest" reasons -- in this case faux terrorism, whistleblowing, or embarrassing the Feds or the executive -- various prosecutors are driving people, for what would otherwise not even be misdemeanors if no ICT was involved, into suicide, or into plea bargains to become informants who set up others.

This "throw the toys out of the pram" behaviour from the political level is clear evidence of their impotent attempts to cover up their grievous incompetence and other failings, including not being able to face reality.

Which is why it's seen as not working in the world outside US borders and those of the other WASP Five Eyes nations. Citizens out there, if they care, are laughing at US President Obama, UK Prime Minister Cameron, the Australian PM... etc etc. But especially Obama and his "control freak" ways, hence the drive towards "WMD trading"-equivalent punishment for pointing out that US computer systems have more holes than third-hand string underwear...

So whilst most would agree more or less with your proposition, the people whose opinion does count, because they can lock you up forever without even the nicety of a show trial, think otherwise... if you are in their jurisdiction or can be dragged there, legally or otherwise. Which unfortunately appears to be the "new way of the world" post-9/11, for more than the Five Eyes WASP nations.

rgaff • July 29, 2015 9:39 AM

@Dirk Praet

"I am not advocating a ban on security research, quite to the contrary."

Ok, fair enough, and sorry about the wrong attribution.... But then you say:

"Hacking Team & co. ... are in the business of buying, developing and selling weapons-grade exploits"

Don't you see you just contradicted yourself? Simply calling those "mistakes" that Hacked Team was selling "weapons-grade exploits" is all by itself suggesting a ban on finding such mistakes, with your choice of terminology. Mistakes are NOT weapons. They're not guns. They're not nukes. They must be found to be fixed. We can't regulate or ban the "developing" (i.e. FINDING) of mistakes! That is equivalent to banning or regulating the FIXING of them too! Because they must be found first in order to be fixed.

But you don't stop there just with your terminology... then you go on:

"buying, developing and selling weapons-grade exploits to government agencies, and which I believe should indeed be subject to regulation the same way traditional weapons are"

Oh, so here you even make it explicit! You do explicitly want to regulate/ban "buying" (acquiring, learning about), "developing" (finding, researching), and "selling" (distributing, telling others about) mistakes! You are full of contradiction. Your suggestion is ABSOLUTELY regulating/banning security research, despite all your claims to the contrary.

And thanks, Clive, for pointing out that the government disagrees with my opinion on this, and really does want to lock up security researchers! We need bigger prisons, because having the largest per-capita prison population for over a decade is not enough.

Dirk Praet • July 29, 2015 7:13 PM

@ rgaff

Your suggestion is ABSOLUTELY regulating/banning security research, despite all your claims to the contrary.

You're barking up the wrong tree, mate. From what I can make of your rant, you're intellectually incapable of differentiating between a vulnerability ("mistake") and an exploit. Security research is the process of finding and analyzing vulnerabilities, with or without a PoC to abuse (or exploit) them. A bona fide security researcher may choose to publish his/her findings, with or without prior notice to the vendor responsible for the flaw. The rules for "responsible disclosure" can be debated, but the expected outcome is for the vendor to provide a timely patch to be applied by any private individual or entity affected. That's one part of the equation.

The second part is commercial entities or criminal groups buying non-published vulnerabilities from security researchers wanting to cash in on their work to subsequently turn them into abuse tools (or frameworks thereof). That's when your "software mistake" becomes a weaponised exploit that in the wrong hands can be used to wreak havoc in all sorts of ways ranging from extortion, surveillance and data theft to crippling critical infrastructure.

The former makes us all safer and should be encouraged, preferably with bug bounties to make it worth researchers' while. The latter, however, is making everybody more insecure, and that is why I strongly believe the activities of outfits like Hacking Team and Gamma Group UK should be appropriately regulated.

Have a nice day.

rgaff • July 29, 2015 8:17 PM

Whether I am intellectually capable of anything or not is always debatable. :)

So what exactly turns "vulnerabilities" into "weapons" by your definition then? Exchanging money for them? Or is it the motive for which they will be used? Or both? If it's the motive, do they become weapons at time of sale based on a possible future use and trying to prove someone's intent, or at the instant when they are used for some "bad" purpose? Are you proposing regulating money exchange or motive or use or all the above or something else? Maybe the building of all frameworks should be regulated instead? Or maybe they become weapons when vulnerabilities are found or when they're written even (sounds like you're saying "no" to this one, though I can see a prosecutor arguing for it)?

If you are not very clear, I fear what you are proposing will be overly broad in any kind of legislation. Nebulousness in regulation is always interpreted too broadly, hurting innocent use, because there will always be someone who doesn't like innocent use and has powerful lawyers to twist the law to fit their grievance. That's why I'm responding the way I am. That's why I'm against using the term "weapon" (or "gun" or "nuke") for any vulnerabilities-becoming-anything.

Thanks, I am having a good day, you too :)

Dirk Praet • July 30, 2015 5:56 AM

@ rgaff

So what exactly turns "vulnerabilities" into "weapons" by your definition then?

Re-read my previous post. The basic difference between a vulnerability and an exploit is not that hard to understand. Neither is its intended use.

And yes, any regulation carries the risk of being overly broad, open to interpretation or bias by stakeholders. That's why subject matter experts and other folks with a vested interest in the topic took time to examine and comment on the new BIS/Wassenaar proposals. I know I did and I'd like to believe that it is a much more relevant contribution to a balanced solution than merely shouting out black & white opinions on public fora.

rgaff • July 30, 2015 9:43 AM

When I ask you to expound further, your reply is to re-read? Ok, I guess that means "no, I don't want to expound"

Your description of how "vulnerabilities" become "weapons" is somewhat vague in the particulars, so also any legislation for such is likely to be too, thus hurting security research. This is why I'm against such terminology in the first place.

Dirk Praet • July 30, 2015 10:59 AM

@ rgaff

When I ask you to expound further, your reply is to re-read? Ok, I guess that means "no, I don't want to expound"

The prerequisite for any informed debate is that all parties engaging in a discussion possess a basic knowledge framework of the subject matter on the table. That doesn't seem to be the case here, and I really don't feel like elaborating any further on essential definitions and distinctions you can look up pretty much anywhere on the internet.

rgaff • July 30, 2015 3:13 PM

I've been reading what you say, I just disagree with it. I've explained why. Your refusal to explain further on grounds that I'm too stupid to understand looks suspiciously like you've been caught off guard, and you don't want to admit it. Any vagueness in legal definitions of things generally leads to overreach and/or unpredictability when it comes to the courts. We either need to be very specific, or we need to give up trying to define things legally. I hope you're not a lawyer.

Dirk Praet • July 30, 2015 4:48 PM

@ rgaff

Feel free to disagree. I stand by what I've said and I have nothing to add to it. Neither do I wish to continue this discussion for the reasons I mentioned earlier. What's not to understand?
