Phishing Has Gotten Very Good

This isn’t phishing; it’s not even spear phishing. It’s laser-guided precision phishing:

One of the leaked diplomatic cables referred to one attack via email on US officials who were on a trip in Copenhagen to debate issues surrounding climate change.

“The message had the subject line ‘China and Climate Change’ and was spoofed to appear as if it were from a legitimate international economics columnist at the National Journal.”

The cable continued: “In addition, the body of the email contained comments designed to appeal to the recipients as it was specifically aligned with their job function.”

[…]

One example which demonstrates the group’s approach is that of Coca-Cola, which towards the end of 2012 was revealed in media reports to have been the victim of a hack.

And not just any hack: it was a hack which industry experts said may have derailed an acquisition effort to the tune of $2.4bn (£1.5bn).

The US giant was looking into taking over China Huiyuan Juice Group, China’s largest soft drinks company—but a hack, believed to be by the Comment Group, left Coca-Cola exposed.

How was it done? Bloomberg reported that one executive—deputy president of Coca-Cola’s Pacific Group, Paul Etchells—opened an email he thought was from the company’s chief executive.

In it was a link which, when clicked, downloaded malware onto Mr Etchells’ machine. Once inside, hackers were able to snoop on the company’s activity for over a month.

Also, a new technique:

“It is known as waterholing,” he explained. “Which basically involves trying to second guess where the employees of the business might actually go on the web.

“If you can compromise a website they’re likely to go to, hide some malware on there, then when someone goes to that site, that malware will install on that person’s system.”

These sites could be anything from the website of an employee’s child’s school to a page showing league tables for the corporate five-a-side football team.

I wrote this over a decade ago: “Only amateurs attack machines; professionals target people.” And the professionals are getting better and better.

This is the problem. Against a sufficiently skilled, funded, and motivated adversary, no network is secure. Period. Attack is much easier than defense, and the reason we’ve been doing so well for so long is that most attackers are content to attack the most insecure networks and leave the rest alone.

It’s a matter of motive. To a criminal, all files of credit card numbers are equally good, so your security depends in part on how much better or worse you are than those around you. If the attacker wants you specifically—as in the examples above—relative security is irrelevant. What matters is whether or not your security is better than the attackers’ skill. And so often it’s not.

I am reminded of this great quote from former NSA Information Assurance Director Brian Snow: “Your cyber systems continue to function and serve you not due to the expertise of your security staff but solely due to the sufferance of your opponents.”

Actually, that whole essay is worth reading. It says much of what I’ve been saying, but it’s nice to read someone else say it.

One of the often unspoken truths of security is that large areas of it are currently unsolved problems. We don’t know how to write large applications securely yet. We don’t know how to secure entire organizations with reasonable, cost-effective measures yet. The honest answer to almost any security question is: “it’s complicated!” But there is no shortage of gung-ho salesmen in expensive suits peddling their security wares, and no shortage of clients willing to throw money at the problem (because doing something must be better than doing nothing, right?).

Wrong. Peddling hard in the wrong direction doesn’t help just because you want it to.

For a long time, antivirus vendors sold the idea that using their tools would keep users safe. Some pointed out that antivirus software could be described as “necessary but not sufficient” at best, and horribly ineffective snake oil at worst, but AV vendors have big PR budgets and customers need to feel like they are doing something. Examining the AV industry is a good proxy for the security industry in general. Good arguments can be made for the industry, and indulging it certainly seems safer than not, but the truth is that none of the solutions on offer from the AV industry give us any hope against a determined targeted attack.

While the AV companies all gave talks around the world dissecting recent publicly discovered attacks like Stuxnet or Flame, most glossed over the simple fact that none of them discovered the virus until after it had done its work. Finally, after many repeated public spankings, this truth is beginning to emerge, and even die-hards like the charismatic chief research officer of antivirus firm F-Secure (Mikko Hypponen) have to concede their utility (or lack thereof). In a recent post he wrote: “What this means is that all of us had missed detecting this malware for two years, or more. That’s a spectacular failure for our company, and for the antivirus industry in general… This story does not end with Flame. It’s highly likely there are other similar attacks already underway that we haven’t detected yet. Put simply, attacks like these work… Flame was a failure for the anti-virus industry. We really should have been able to do better. But we didn’t. We were out of our league, in our own game.”

Posted on March 1, 2013 at 5:05 AM

Comments

Zombie John March 1, 2013 5:54 AM

What about an email system that challenged all external links somehow? Perhaps combined with a secure browser that opened clicked links in a sandbox. Seems easy to do with company internal email. It wouldn’t stop everything, but it might help.
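A minimal sketch of the link-challenge idea, assuming a hypothetical internal sandbox gateway; the gateway URL and the trusted domain are made up for illustration:

    # Sketch: rewrite every external link in an HTML mail so it opens via a
    # sandboxing gateway rather than directly. GATEWAY and TRUSTED are
    # hypothetical values.
    import re
    from urllib.parse import quote, urlparse

    GATEWAY = "https://sandbox.corp.example/open?url="  # hypothetical service
    TRUSTED = (".corp.example",)                        # assumed-internal hosts

    def rewrite_links(html: str) -> str:
        def rewrite(m):
            url = m.group(1)
            host = urlparse(url).hostname or ""
            if host.endswith(TRUSTED):
                return m.group(0)                # leave internal links alone
            # External link: detour it through the sandbox gateway.
            return 'href="%s%s"' % (GATEWAY, quote(url, safe=""))
        return re.sub(r'href="(https?://[^"]+)"', rewrite, html)

    print(rewrite_links('<a href="http://evil.example/x">China and Climate Change</a>'))

It wouldn’t stop a user pasting a URL into a browser, but combined with a sandboxed viewer it raises the cost of the click.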

JeanF March 1, 2013 6:35 AM

Interesting. I am pretty sure these are only a couple of examples among dozens.

There is no certainty anything can be done: after the link in the e-mail, it will be in a nicely printed letter, then on a CD, then on a USB drive, then …

cpragman March 1, 2013 6:39 AM

Wait, so a Coca-Cola exec who is in secret merger talks regularly corresponds with his counterpart via plain-text e-mails? No encryption?

Richard Birenheide March 1, 2013 7:14 AM

@cpragman:

Encrypting communication does not help when your machine is compromised.

cpragman March 1, 2013 7:28 AM

True, but it does help you know that you are actually communicating with who you think you are.

Paul March 1, 2013 7:38 AM

This isn’t phishing, it’s out-and-out espionage. And this post very profoundly and succinctly demonstrates why AV tools, and even more advanced tools from that industry, can’t protect you against targeted espionage.

LinkTheValiant March 1, 2013 7:40 AM

@Zombie John:

“Hey! You! IT guy! I can’t see the dancing cats my friend sent me! Make it work!”

It’s hyperbolic, but not by much.

@cpragman:

This presupposes that either A) The executive in question fully understands PGP and key management, or B) The IT department can “work around it for him” on every device he might use. It’s not an impossible problem, but it’s nowhere near as simple as “just throw cryptography at it”.

andrewb March 1, 2013 7:44 AM

The only plausible way to combat these sorts of attacks is with a “security by isolation” approach. An operating system like Qubes OS which can automagically launch a non-persistent (“disposable”) VM to read email attachments is the surest defense. Even if such a disposable VM is compromised by a 0-day, isolation from the rest of the system ensures no user data is compromised. A disposable VM in this scenario needs no networking (the attachment can be copied between VMs with some simple code in Xen dom0), so the exploit does not offer the attacker any pivot points into the network. Finally, the non-persistence of the VM ensures that any access obtained is transient.

Even better, rather than a TCB which includes all client software for parsing and displaying PDFs, DOCs, etc., you end up with a very small TCB (only Xen and a few thousand lines of GUI glue).
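For concreteness, a minimal sketch of the hand-off andrewb describes, assuming a Qubes OS AppVM where the qvm-open-in-dvm helper is available; the attachment path is illustrative:

    # Sketch: open an untrusted attachment in a throwaway VM (Qubes OS).
    # qvm-open-in-dvm copies the file into a fresh disposable VM, opens it
    # there, and the VM is destroyed when the viewer exits.
    import subprocess
    import sys

    def open_in_disposable_vm(path: str) -> None:
        # Even if the document exploits the viewer with a 0-day, the
        # compromise dies with the disposable VM and never sees this
        # VM's data or network.
        subprocess.run(["qvm-open-in-dvm", path], check=True)

    if __name__ == "__main__":
        open_in_disposable_vm(sys.argv[1] if len(sys.argv) > 1 else "invoice.pdf")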

Tom March 1, 2013 8:41 AM

A start would be to disable the ability to click links in emails. Force users to type the URLs. Then disable Java: in the browser, in Acrobat, in the entire machine. That would make the attacker’s job much more difficult.

0day March 1, 2013 9:05 AM

No @Tom, APTs will use 0-day vulnerabilities in Firefox, Chrome, …

@andrewb: Xen and VMware have vulnerabilities on a regular basis.

William Payne March 1, 2013 9:25 AM

Writing general-purpose A/V software is a very hard (unsolved) problem. The systems that we currently have can defend very well against common “amateur” vandalism attacks, or attacks that use well-known techniques that have been used previously, but fall down when pitted against novel, obscure and tightly-targeted attacks, particularly if those attacks have been designed, written and tested with the specific intent of bypassing the detection algorithms of, say, the top 5 A/V vendors.

It is an arms race, and, as you rightly point out, the defender faces a much harder challenge than the attacker, mainly because the attacker has access to the A/V endpoint software to test against, whereas the A/V vendor does not have access to the virus until it has been detected in the wild at least once. As we have seen, the time delay between a piece of malware being released into the wild and it being picked up by the A/V industry can (in the case of highly targeted attacks) be on the order of years, maybe even decades (I suspect). However juvenile and disruptive the vandals may be, their actions serve to test, stretch and improve the A/V industry’s detection algorithms, harden the software industry’s products, and close off vulnerabilities and avenues of attack that the less-noisy-but-more-dangerous attackers might exploit. Personally, I am far more frightened of sophisticated criminals and unscrupulous nation states (you know who you are) than I am of juvenile vandals on an ego-trip.

This is one case where we might do well to step back, consider the ecosystem as a whole, and try to implement an “anti-fragile” solution to the problem. (And it is a very very serious problem).

Kyle March 1, 2013 9:28 AM

Ultimately the problem isn’t the particular software, it’s the system as a whole. Any particular piece can be secure, but if all the pieces aren’t secure then eventually the pieces that aren’t will be used to breach the system. This includes more than just the software; the interactions between the pieces of software must also be secure (e.g., SQL injection attacks).

The complexity of a modern OS instance (plus human nature) means that building a completely secure system is nearly impossible. Not only are truly secure applications mostly non-existent, but any system complex enough to be useful is going to be largely too complex to make any strong security guarantees about.

I don’t think the problem is entirely insurmountable, but it will definitely require some rethinking and re-evaluating of certain aspects of systems development and programming. To start with, we’re probably going to have to give up C to a large extent; leaving programmers in charge of memory allocation and keeping track of references is just too error-prone to do securely in any large system.

Paul Brian March 1, 2013 1:13 PM

@LinkTheValiant

Could a member of the CIA or NSA get dancing cats on their laptop? If not, then why should the guy doing the $2bn merger?

Almost all security problems are easy to fix if you get fired for a security lapse.

It’s a choice – and one I bet Warren Buffett is talking to the Coca-Cola board about right now.

NobodySpecial March 1, 2013 1:32 PM

Clicks on links in an email

An email client that installs applications without asking you

An OS that lets an email client install applications

An OS that lets an application installed by an email client have access to other programs data on the machine?

Have we learnt precisely nothing in the last 40 years of OS design?

  • smugly being typed from my Chromebook. At least now only Google are spying on me.

Nick P March 1, 2013 2:15 PM

@ andrewb

“Even better, rather than a TCB which includes all client software for parsing and displaying PDFs, DOCs, etc., you end up with a very small TCB (only Xen and a few thousand lines of GUI glue).”

Are you saying that Qubes OS doesn’t depend on dom0 to work for security? If it does, the TCB is quite a bit larger. I’ve covered Qubes OS before on this blog and had a nice little debate with the project leader. It’s 2013 and the points I made in the post below are still valid for the project. They’ve done good work, though.

http://www.schneier.com/blog/archives/2011/06/malware_in_goog.html#c552054

All that said, I still congratulated them on their 1.0 release.

http://theinvisiblethings.blogspot.com/2012/09/introducing-qubes-10.html

ElvenThreeTwo March 1, 2013 2:34 PM

@andrewb

Exercise for the weekend: do a search with the terms BluePill, Xen, and dom0.

wumpus March 1, 2013 2:44 PM

@Richard Birenheide

“Encrypting communication does not help when your machine is compromised.”

Oddly enough, this means that somebody had to “attack the machine” along with the person. A computer where security was ever a consideration (mostly just non-Windows) would be a start, but I suspect that would just mean that the laser-guided spear phishing would evolve until it was good enough to convince the user to turn on enough scripting to “0wn” his machine.

It looks like you can still do a lot to protect the machine, but only so much to protect it from a user who wants what the lure offers.

LinkTheValiant March 1, 2013 3:00 PM

@Paul Brian:

Could a member of the CIA or NSA get dancing cats on their laptop? If not, then why should the guy doing the $2bn merger?

Almost all security problems are easy to fix if you get fired for a security lapse.

It’s a choice

It is one thing to shoot the stablehand who lets the horses get stolen. It’s quite another to shoot the squire’s son.

We’ll see what they do to him. My guess is an internal reprimand. With at least a little justice, he’ll “retire for health reasons” or some similar bilgewater. But an actual firing, with concrete reasons? Not a chance. (I’ll be VERY happy if I’m wrong though.)

@Kyle:

I don’t think the problem is entirely insurmountable, but it will definitely require some rethinking and re-evaluating of certain aspects of systems development and programming. To start with, we’re probably going to have to give up C to a large extent; leaving programmers in charge of memory allocation and keeping track of references is just too error-prone to do securely in any large system.

That will be supremely difficult. Memory management in C is a price paid for faster software. Move to a managed-memory language and you lose a great deal of that speed advantage to overhead. TANSTAAFL. Never mind “TRAH-DISH-UHN”

Not that I don’t agree with you, of course.

TAL March 1, 2013 4:50 PM

As is shown on the page pointed to above, Brian Snow was the technical director, not the director. He actually understood technology and security problems.

Tracy Reed March 1, 2013 6:08 PM

I know technology can’t solve all problems but this really looks like a job for digital signatures. I’ve been digitally signing my emails for years in the hope that someone else would follow my example.

Sure, the exec will have to learn something about how to use GPG. But it is really pretty easy with modern email clients. Enigmail makes it easy for Thunderbird.

@LinkTheValiant: If you’re an executive responsible for a $2.4B transaction, surely you’re capable of figuring out how this email signing thing works and understanding why it is necessary. Establishing secure communications and even meeting face to face or at least sending trusted representatives such as lawyers to sign keys really doesn’t seem like an unreasonable measure given what is at stake.

@Richard Birenheide: His machine wasn’t compromised until he clicked the link, right? And he wouldn’t have clicked the link if the email had refused to display because the signature check failed.
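A minimal sketch of the policy Tracy describes, assuming the python-gnupg wrapper (pip install python-gnupg) and a local keyring that already holds the sender’s key; the file name is illustrative:

    # Sketch: refuse to render a message whose signature doesn't verify.
    import gnupg

    gpg = gnupg.GPG()                         # uses the default ~/.gnupg keyring

    with open("message.eml.asc", "rb") as f:  # a clearsigned/inline-signed mail
        result = gpg.verify_file(f)

    if result.valid:
        print("Good signature from", result.username, result.fingerprint)
    else:
        # The suggested policy: an unsigned or badly signed mail is never
        # displayed, so its link is never clicked.
        print("Signature check failed; refusing to display message.")

The hard part, as the thread notes, is not this check but the key management around it.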

Nick P March 2, 2013 1:14 AM

@ Tracy Reed

“If you’re an executive responsible for a $2.4B transaction, surely you’re capable of figuring out how this email signing thing works and understanding why it is necessary. Establishing secure communications and even meeting face to face or at least sending trusted representatives such as lawyers to sign keys really doesn’t seem like an unreasonable measure given what is at stake.”

Exactly. This is certainly the right mindset. As the value of assets increases, the strength of the risk mitigation should also increase. Signatures are a start to authenticating the transaction (and maybe the individual). Trusted paths for the signing mechanism, people’s behavior, and processes would be the next issues.

One of my employers put much stronger controls in place that were actually used (most of the time) by low-wage workers and executives alike, even those who didn’t like the scheme. The reason it worked is that work could still be done (albeit painfully), and violating security policy resulted in firing for some people. It’s amazing that we did all that for assets with very limited value, while these other companies can’t be bothered to put adequate protections in place for assets worth millions.

Nick P March 2, 2013 1:37 AM

@ William Payne

“This is one case where we might do well to step back, consider the ecosystem as a whole, and try to implement an “anti-fragile” solution to the problem. (And it is a very very serious problem).”

We might do better to consider the nature of the problem. If you go back far enough, the problem space is really about controlling/knowing the state of the system. Particularly, every possibility must be predictable at some point, control flow integrity must exist, and information must flow through the system in the correct way. Looking at it this way, it goes from being a nearly intractable problem to a difficult engineering problem. A few of us on this blog have discussed many attempts to deal with this.

I say solving those issues gets rid of most of the technical security issues (and some others). We have academic and commercial (semi)solutions for this in the areas of hardware, OS’s, drivers, low-level software, high level software, web apps, protocols (to a degree), and some specific constructs (e.g. state machines). We’ve built quite resilient stuff in the past. There are companies doing it in the present. The mainstream way of doing things simply doesn’t allow high security. We have building blocks, though, and if plenty of funding goes toward making more we will have even more assurance to inject into our systems.

@ Kyle

“The complexity of a modern OS instance (plus human nature) means that building a completely secure system is nearly impossible. Not only are truly secure applications mostly non-existent, but any system complex enough to be useful is going to be largely too complex to make any strong security guarantees about.”

“I don’t think the problem is entirely insurmountable, but it will definitely require some rethinking and re-evaluating of certain aspects of systems development and programming. ”

Now we’re getting somewhere. Fortunately, it’s only partly true. If we implement the goals I mentioned, then we might get somewhere. We naturally have to do decomposition, well-defined interfaces, total error handling, layering, etc. These design strategies actually make it easier to make security claims about a complex system as a whole. The whole security argument is broken into pieces, these are demonstrated individually, and they are linked together. Alternatively, schemes can be designed around few simple secure components and untrusted components relying on them.

“To start with, we’re probably going to have to give up C to a large extent; leaving programmers in charge of memory allocation and keeping track of references is just too error-prone to do securely in any large system.”

I’d note that there’s a difference between eliminating memory management and getting rid of C. There are systems-type languages that were used in the past that had fewer issues than C. There’s also work in subsetting, static analysis, modeling, etc. Combine them and we might be able to write plenty of low-level programs safely. But yes, C/C++ as a main implementation language is crazy from a quality or security point of view compared to competing languages. C/C++’s main strengths lie in available talent, libraries, tools, etc. That’s another problem entirely. 😉

Atavatistic Jones March 2, 2013 10:32 AM

This is the reason for layered security. If you face APT threats, then you design your security around that. Signature-based AV is known by everyone in anti-malware or pro-malware 🙂 to be weak against APTs.

Testing against all the AV systems is simply part of the test and release process.

It is true, a good APT is very much a planning situation. If the individual or corporate entity is very motivated to keep their system undiscovered, they do their homework. They plan out their mission, and they try to leave no stone unturned in their research.

The Mandiant report on the Chinese group, Flame, Stuxnet: these are failures, not successes. It is not that hard to make a dormant, undetected system.

One of the worst ways to do this is simply by intentionally putting some sophisticated system/root-level vulnerabilities… in software the target already has.

The most professional, and still very easy way to do this, is to have mole developers on the software project. Or target key developers and alter their code before it goes into production w/o their knowing.

Further up the supply chain, more stealth.

Another smart method to use is to first get in there with dumber systems made to look like they are street attacks. The intention there is to get into the networks to probe for AV and security systems — if detected, to appear to be “clearly” from “someone else”.

This kind of data can also potentially be obtained by finding out where and how they buy, create, or find security software.

Successful companies “target the individual” (or “target audience”)… I think we can be astonished when we see governments do this because they tend to be so incredibly “anti-open market competition” and more closed, so much more “dumb” at producing audience targeted material.

Probably why they contract this stuff out and have the companies creating it compete against each other. Internally, they are big, slow, and rigid. But by contracting out they can get the quickness and flexibility that open, strong competition can give.

Countries like Russia and China who coddle and befriend hackers succeed well at this model. Countries like the US tend not to do so well. They have a very fascist, legalistic leaning in their top secret cultures — apart from some very small and key areas (cowboy military, CIA, NSA red teams, Delta force types, and the like.)

Groups which beat them all: small and larger organized criminal groups after The Money. They have to adapt to survive. They have to not get caught, or they lose (or never get) the money, and they have a very strong personal motivation Not To Go To Jail.

Atavatistic Jones March 2, 2013 10:41 AM

Another, unrelated comment:

Wi-Fi and portable devices like Android phones are a current attack vector, and a burgeoning one. Wi-Fi attacks are used by hackers and the security there remains weak. Some researchers late last year got “monitor mode” working on some Android chipsets.

And last year also saw the rise of Android-centric hard-core MITM tools like dSploit, Backtrack on Android, Kismet, and so on. There has been the WiFi Pineapple for a few years now. Probably most security people know the rules, but they tend not to aggressively pen-test their networks.

An article in Ars Technica just a few weeks ago had the CTO of NT Objectives talking about his experience going to the mall with a portable rogue AP.

A lot of hard core enthusiasts have been on this stuff for years, likely government has as well.

Firesheep did not end the problem, but it helped change the way many bigger companies dealt with SSL and the like. They rightly freaked out, improved both the server and the clients, and made a strong move to SSL.

On top of all of this, while Wi-Fi and Wi-Fi hacking have been around for a long time (even on Android and other small apps and systems), more and more devices are going wireless. And they are repeating the “don’t think about security, race to production” mistakes made in the past.

I almost get the idea that forces in various governments wanted and pushed for continued weak Wi-Fi. (And Bluetooth! And other wireless protocols.)

Atavatistic Jones March 2, 2013 10:44 AM

(One last note, on how the Wi-Fi comments apply here: most Wi-Fi and other proximity-based attacks require personal targeting. You or your little team finds your target and goes on site. It is well worth the expenditure and risk to do so.)

Michael March 2, 2013 10:48 AM

Chinese did it 😉 Seriously, this is a great example of a targeted attack and of how precisely such campaigns can be created from beginning to end, from 007-style spying to rather sophisticated malware. General-purpose antivirus products are pretty much useless against targeted attacks. I’m just curious whether or not the criminals exploited a Java vulnerability; it wouldn’t be a surprise. By the way, the latest flaw was discovered by security firm FireEye on February 28th, and Oracle still doesn’t have a patch available.

andrewb March 2, 2013 12:02 PM

@0day:

Yes, Xen is still a large piece of software, and bugs will be found. The point is that in this architecture the amount of code which must be trusted is vastly reduced. It is surely much, much smaller than the TCB of a typical user’s machine.

@Nick P:

I don’t want to be drawn into a dispute here. I’m simply pointing out that whatever the circumstances, there is a surprisingly practical, inarguably-more-secure-than-standard OS architecture out there which is designed to combat exactly this type of threat. I’m making no claims about how well Qubes does it in practice, who came first, how novel it is, etc.

@ElvenThreeTwo:

No need; I am already familiar [at a high level] with all three. I’m not suggesting this is a panacea. I’m not suggesting Qubes is perfect. I’m suggesting that this increases the depth of defense. Keep in mind that the additional isolation barriers are all complementary to the standard security hardening measures.

Nick P March 2, 2013 1:34 PM

@ andrewb

“there is a surprisingly practical, inarguably-more-secure-than-standard OS architecture out there which is designed to combat exactly this type of threat.”

“The point is that in this architecture the amount of code which must be trusted is vastly reduced. It is surely much, much smaller than the TCB of a typical user’s machine.”

It certainly has these benefits. All this time working on low-TCB designs I’ve been wondering, “How much of this really matters?” Advanced attackers end up finding 0-days in code anyway. Compartmentalization can limit damage. However, most of these virtualization, MAC and sandboxing schemes really just stop non-targeted attacks. Realizing this led me to think that all non-high-assurance security schemes are merely obfuscation. If they are, then we can benefit by picking the easiest obfuscation to work with, as sophisticated attackers can break most of them anyway. Using NoScript, non-Intel processors, dedicated (cheap) machines for untrusted Web use, and occasional containment mechanisms has proven most effective. The Qubes architecture is a nice new tool for keeping out the riff-raff, as one previous blog reader would say.

Matthew X. Economou March 2, 2013 11:47 PM

I wonder how many of those suggesting that the Coca-Cola executive be fired for misfeasance have actually had to deploy, use, or support currently available digital encryption/signing technologies. My first thought wasn’t, “What a maroon! What an ignoramus!” It was, “There, but for the grace of God, goes Matthew Economou.” It was, “Oh no – I opened that PDF my sales rep sent me without thinking twice about it.” It was, “I can’t even get my smartcard to work properly on my laptop – how would I even begin to approach certificate enrollment for smartphones?”

This is not a small matter of engineering. These attacks have as their primary target the human brain. That they also affect a particular mix of hardware/software is incidental. This attack could happen to you all, too. There, but for the grace of God, go all of us.

wkwillis March 3, 2013 10:56 AM

This is an insoluble problem.
1. We have to have back doors installed in all government programs so the government can spy on us.
2. The cops can’t be fired or jailed or it demotivates them.
3. They sell access for money.
4. Which is why the NSA in effect works for your stalker ex-boyfriend.

Clive Robinson March 3, 2013 12:33 PM

@ Matthew,

It was, “there, but for the grace of God…”

Yes it applies to all of us, I’ve said before it’s “a numbers game”.

The reality is that the probability of any individual being attacked depends more on how many targets there are in their area than on the quality of their security defences. For plain simple theft and the like, there is a complete glut of low-hanging fruit, sufficient that they cannot all be attacked, as the number of attackers is very low by comparison to the number of effectively defenceless targets.

Even smart ITSec guys are getting successfully attacked and there is little that they can do about it with current commodity hardware, OS’s and Apps.

I used to think I was a lone voice when it came to app security, especially in the likes of browser memory, and used to bang on a bit about how OS security techniques needed to be applied to app development as standard. Then Google announced Chrome, and although it’s a significant improvement it’s by no means what it could be.

There is the old ‘security -v- usability’ maxim to consider, security is far from easy and in most cases is a losing game when made to have sufficient usability.

Whilst many of the people who say “they should have done X” are correct, that hides the bigger issue: a dedicated attacker will have the full gamut of attack vectors at their disposal, and it is just not possible to protect against them all. There will always be the one or three you miss and a couple of dozen you don’t know about at any one point in time.

Yes, there are ways to get any reasonable degree of security you like within your own ‘scope’, but once you start dealing with people outside that scope all bets are off.

Jon March 3, 2013 3:18 PM

@ Tracy Reed “… you’re an executive responsible for a $2.4B transaction, surely you’re capable of figuring out how this email signing thing works and understanding why it is necessary.”

Except, executives responsible for $2.4B transactions don’t spring fully formed out of the ground. They come from university, then various low-to-mid-to-high-level positions in a number of different organisations. And they’re busy (all that golf isn’t going to play itself, you know). Busy making money.

Then, suddenly, when they’re in the midst of a major deal, you want them to become experts on email signing and encryption and security theory, because they’ve passed some arbitrary transaction-dollar-value threshold?

I’m not saying you’re wrong to expect those things, but I do think the expectation is unrealistic.

Regards
JonS

Joe Curran March 3, 2013 6:55 PM

The fact that hackers who targeted a Bit9 customer found it easier to attack Bit9 (and their digital signing certificate) than the customer speaks volumes about the virtue of application whitelisting vs. near-useless blacklist perimeter/endpoint AV/malware solutions. I think (in a Windows-dominated world) we need to think about layering solutions like frequently rebooted non-persistent VMs booting off locked-down zero-footprint terminals, with process inspection tools like Bit9, AppLocker, SecureBoot, ESET and the like. Yes, things were in some ways better when text email ruled the world (actually you’ll still find it in daily use on some 200,000 Bloomberg terminals), but most of the world isn’t going to accept that kind of convenience trade-off, so it’s up to us to try to stay ahead of these targeted attacks.

Average (Technical-Savy) Joe March 3, 2013 7:28 PM

An angle that I think might be largely overlooked is not trying to prevent machines from being compromised, though that is certainly a good thing to do, but trying to prevent compromised machines from communicating undetected. I’m not a security expert or anything, so I apologize if any of my comments are misinformed, but it seems to me that every major piece of malware that has been discovered recently calls home on a regular basis.

Would it not make sense to have a server between the user’s computer and the network that monitors all the traffic that passes through it, and, depending on the security -v- convenience required, either reports suspicious traffic or requires all traffic to be confirmed by a human? What constitutes “suspicious traffic” is of course impossible to define perfectly, and would leave loopholes, but it might still catch many things. Having activity confirmed by a human would of course be cumbersome, but given a good interface (a second monitor/keyboard that makes you confirm URLs you are visiting, recipients of email, etc., before forwarding to the web) it might be workable and a significant addition to current security.

The go-between computer would of course be at risk of being hacked itself, but as it is a much more dedicated machine, there should be less code to be exploited, and as there would be no third-party programs, the code is probably much more secure on average (presuming it is written by security experts).
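A minimal host-based approximation of that idea, using the psutil library to poll this machine’s own outbound TCP connections and report the first contact with any new host; the allow-list is illustrative, and a real go-between box would watch forwarded traffic instead:

    # Sketch: flag the first connection to any destination not seen before.
    # May need elevated privileges to see other users' processes.
    import time
    import psutil

    known = {"10.0.0.5", "192.168.1.20"}        # assumed-good destinations

    while True:
        for c in psutil.net_connections(kind="tcp"):
            if c.status == psutil.CONN_ESTABLISHED and c.raddr:
                ip, port = c.raddr
                if ip not in known:
                    known.add(ip)               # report each destination once
                    try:
                        proc = psutil.Process(c.pid).name() if c.pid else "unknown"
                    except psutil.NoSuchProcess:
                        proc = "exited"
                    print("new outbound host %s:%d from %s" % (ip, port, proc))
        time.sleep(5)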

Figureitout March 4, 2013 12:04 AM

@Matthew X. Economou
–As you hinted & Clive R. summed up (pay attention when he addresses you), he rightly admits that even he’s vulnerable to attack. Anyone who digs into any modern electronics knows that: experienced developers can leave undefined variables that crash an entire program, OR leave some other gaping holes. As well as there being other “disgusting” areas in which someone can attack. I’ve managed to attack some pretty nice targets (that I’ve set up) w/ what I call luck but many others would call skill.

If I were an attacker in this minefield of today, I would be paralyzed to attack b/c I know just how serious a “little attack” can become.

Clive Robinson March 4, 2013 3:19 AM

@ Bruce,

Some years ago (an unlucky 13) you published a paper on software monocultures that you and Cory revisited here in Dec 2010 ( http://www.schneier.com/blog/archives/2010/12/software_monocu.html ). Perhaps it’s time to revisit it again.

Likewise, as has been pointed out by one or two people on your blog recently, the not-well-publicized attack on Bit9 puts the second of the two most widely used AV detection methods (blacklisting / whitelisting) into doubt. In Bit9’s case, they allowed one of their signing keys to be stolen by attackers, who it appears went on to use it against three of Bit9’s major customers, thanks to the “software monoculture” issues.

Daniel Martin March 4, 2013 9:14 AM

@ Average (Technical-Savy) Joe:

Fortunately, that angle (and other related ones) isn’t being completely overlooked by every part of the industry, though product development along those lines is still in its early stages:

About CrowdStrike Technology

(Disclaimer: vested interest – that’s a video posted by my employer, featuring the guy who signs off on my expense reports)

Nick P March 4, 2013 11:36 AM

@ Joe Curran

“The fact that hackers who targeted a Bit9 customer found it easier to attack Bit9 (and their digital signing certificate) than the customer speaks volumes about the virtue of application whitelisting vs. near-useless blacklist perimeter/endpoint AV/malware solutions.”

I partially agree. Here’s the thing: the attackers hitting Bit9 doesn’t actually prove what you say. It might mean the boxes protected by Bit9 were too hard to hit. Here’s the other side of the coin.

  1. They knew the boxes were running Bit9, which checks signatures for “known good” apps.
  2. They looked into Bit9 and found it was ridiculously insecure.
  3. They compromised Bit9 just in case it might make their job easier.
  4. They quite easily obtained the signing keys so as to bypass the HIPS software.

As people here often say, the attackers will hit the weakest link in the security chain. Bit9 keeps saying their solution was tough enough that the attackers hit them instead of their competitors. The alternative hypothesis, which is just as likely, is that Bit9’s security was weak enough that they were the easier target.

So, we have the same HIPS at several companies. HIPS bypasses mainly require a kernel exploit, OS exploit, or exploit of a highly privileged app. These all take time and money to produce. Top hacker groups have a stockpile, but they don’t like wasting them. The HIPS software has updates, and its verifier is trusted by the systems. The HIPS signing mechanism is weakly protected. Hit the HIPS, and three attacks can follow without wasting high-value exploits. This is both a good strategy and economical. Note that in my analysis the Bit9 software can be barely effective and hitting Bit9 still makes the most sense, due to the three-for-the-price-of-one deal. There’s also a possible stealth aspect, where coming through an update or using a verified file might delay detection.

I’m a fan of whitelisting, don’t get me wrong. I just don’t think the situation proves that Bit9’s solution was strong. It leads me to believe quite the opposite. Another claim they make is their systems were compromised b/c they weren’t running Bit9’s software. That assumes Bit9’s software alone would have stopped the attackers & they had no other way of compromising machines. That’s a big and arrogant assumption considering the wits of the attackers.

Joe Curran March 5, 2013 10:34 AM

@Nick P.

I get your point on how a smart hacker would nearly simultaneously try compromising Bit9 just in case it was easy (where I would simply have assumed that would be nearly impossible).

I wasn’t trying to make a case for Bit9 specifically but more for whitelisting in general and working with known good images. There are multiple options here, which I listed, including using frequently rebooted/patched non-persistent VMs. Whitelisting (even if only using AppLocker in an audit-only mode) can be safely layered on top to help detect probing, but as others have pointed out, most attacks need to phone home to get the goods, and this is another good area for research. Next-gen application firewalls and DRM solutions have started down this path, but there is still a long way to go.

Clive Robinson March 5, 2013 11:00 AM

@ Joe Curran,

I wasn’t trying to make a case for Bit9 specifically but more for whitelisting in general and working with known good images.

The problem in what you say is “known” when it should be “assumed”.

Which means neither whitelisting nor blacklisting covers what is “currently unknown to AV vendors, but known to attackers”.

Which means that they are both limited in what they can achieve, as is painfully obvious when you read of AV software tests having at best 70% coverage of malware.

They are reactive, not proactive, technology, and we knew many years ago that polymorphic malware would easily evade blacklist AV techniques. Further, we also know that whitelisting is at best only an indicator of the software revision, and has all sorts of issues with patching etc.

What is needed are proactive defense systems that monitor the activity of systems for abnormal behaviour. This is currently a very difficult problem due to the nature of the way we write applications and provide them with resources.

We could see orders of magnitude improvement in such systems if we actually spent a little time designing applications, and likewise their resources, to be monitored for abnormal behaviour.

Both Nick P and myself have had conversations about this in the past, and for some strange reason there are now academic papers investigating the various aspects (I’ll leave it up to Nick P to give a list).

Joe Curran March 5, 2013 7:15 PM

@Clive Robinson,

While I’d love to read the academic papers, as a practitioner I don’t think there is a better option than using a known (OK, assumed) good image made from trusted vendors’ media in a non-persistent mode. Layering on a whitelisting solution should guarantee notification when something attempts to introduce a change to the environment, while a reboot should guarantee anything newly introduced is destroyed. Is there a better option I’m not thinking about?

Nick P March 5, 2013 7:54 PM

@ Joe Curran

That’s the thing, though: that’s not what whitelisting does. In most mainstream OSs, the security software kind of hooks APIs, monitors certain kinds of state, etc. The attackers work at additional levels of the system that many schemes abstract away.

A good example is an attack on a control pointer. The attacker injects code disguised as data into an app that doesn’t handle it properly, resulting in unauthorized execution. In this (common) scenario, the app is whitelisted, it’s allowed to have data in memory, and the control pointer may be unprotected for legacy or functionality reasons. The attack will succeed.

Whitelisting mainly handles the “user accidentally causes program execution” and “only authorized programs are running” issues. Once a program is running, different security measures must be in place to stop further attacks. The best measures historically have been low-TCB managed runtimes, tagged architectures, or designs that prevent common errors. Modern examples are safety-critical Ada, crash-safe.org, and the JX Operating System, respectively. There are plenty more older ones.

The old defence in depth mantra applies to the system’s internals just as well as to networks and organizations.
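To make the distinction concrete, a minimal sketch of file-level whitelisting as described here (the digest is a placeholder): the check happens at launch, against the file on disk, so an in-memory control-pointer hijack of an already-running whitelisted process is never re-checked.

    # Sketch: allow a launch only if the binary's hash is on the allow-list.
    import hashlib

    ALLOWED_SHA256 = {
        # placeholder digest of an approved binary
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def may_execute(path: str) -> bool:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest() in ALLOWED_SHA256  # nothing here sees runtime state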

Nick P March 5, 2013 8:01 PM

@ Joe Curran

I just saw your Bit9 reply. Thanks for the clarification. I agree about the utility of whitelisting and improved network visibility. The real key, though, to putting system-level attacks in their place will be (and was) changing the fundamental design of hardware to help enforce isolation, integrity, and high-level development. The TIARA project, now called SAFE, is a good example of what might work. I linked to it in a previous post.
