The Business of Botnets

It can be lucrative:

Avanesov allegedly rented and sold part of his botnet, a common business model for those who run the networks. Other cybercriminals can rent the hacked machines for a specific time for their own purposes, such as sending a spam run or mining the PCs for personal details and files, among other nefarious actions.

Dutch prosecutors believe that Avanesov made up to €100,000 ($139,000) a month from renting and selling his botnet just for spam, said Wim De Bruin, spokesman for the Public Prosecution Service in Rotterdam. Avanesov was able to sell parts of the botnet off "because it was very easy for him to extend the botnet again," by infecting more PCs, he said.

EDITED TO ADD (11/11): Paper on the market price of bots.

Posted on November 4, 2010 at 7:04 AM • 45 Comments

Comments

Jeroen • November 4, 2010 7:25 AM

What's also interesting is that the cybercrime unit took over the botnet themselves, and used it to display warning messages on infected computers. This has caused a minor scandal, since in doing so the unit probably broke the law.

Clive Robinson • November 4, 2010 8:49 AM

I would treat the income figures with a bit of caution.

Other researchers have indicated in the past that income models are less than a tenth of the 100,000 euros given here.

It may be a case of the usual prosecutorial multiplication to get tougher sentences or justify the investigatory expenses.

Dave • November 4, 2010 8:59 AM

Would it be possible to cut off the C&C centre of a botnet legally ?

Presumably, the law allegedly broken here was accessing computers without permission. But cutting off the C&C servers would cause the botnet to stop sending spam. I doubt any law regarding unauthorised access of a computer allows stopping it from doing something while disallowing making it do something. So shutting down a C&C server breaks the same law that displaying warning messages to users does.

One would expect that a cybercrime unit would be quite well aware of the legal ramifications of their primary purpose for existence, although this is not a certainty. The general public and the media can still cause plenty of controversy while being wrong too.

Hopefully, sense will prevail.

Bringing Avanesov to justice may not be a simple matter. €100,000 per month for ten years (or somewhat less, because the article said "up to €100,000") is a lot of money. Assuming he hasn't spent it all and he can access enough of it, he will be able to hire a very good team of lawyers. Another thing to consider is that only a small portion of the botnet will be in any one country.

vwm • November 4, 2010 9:30 AM

I do not think that shutting down a botnet and issuing warnings via the botnet are against any law.

Seems to me that "negotiorum gestio", "defense of others" maybe even "self-defence" apply.

Clive Robinson • November 4, 2010 9:34 AM

@ Dave,

"Would it be possible to cut off the C&C centre of a botnet legally ?"

Yes, perfectly legally and without the requirement for a warrant or anything like that.

If you inspect any agreement to provide communications over a channel you will find that there is a clause about harming the network and another one for technical reasons.

If you go and dig out the EU R&TTE regs (which apply across the whole of Europe) you will see that they specifically have clauses for disconnection for harming / endangering the network and for technically justifiable reasons.

Removing the comms is by no means the same as gaining entry to a computer (which is at best questionable and very dangerous at worst).

However, as was seen with the Spanish-controlled botnet, chopping the head off does not necessarily stop the botnet operator from getting control back: if they can subvert a node on the botnet side of the comms break, they can simply redirect the botnet comms to another controller.

The thing that really makes me scratch my head with these botnet operators is that they really don't try to make the Command and Control channel effectively One Time through the likes of Google searches and random posts to blogs.

BF Skinner • November 4, 2010 10:32 AM

And renting, of course, to anyone who needs deniability...

DDoS attacks take out Asian nation
Myanmar fades to black
http://www.theregister.co.uk/2010/11/03/...

This last year of political campaigning saw the creation of a lot of groups for the purpose of delivering large quantities of cash to politicians. At least one individual (who lost) spent $140 million of her own money on her campaign.

The Indian firm Aiplex announced that they were launching Distributed Denial-of-Service (DDoS) attacks against The Pirate Bay.

When legitimate organizations use what most regard as illegitimate means to further their ends, we can soon expect to see criminal techniques used as part of the normal campaign process (especially in attempts to interfere with online fundraising).

One of the basic facts of cyber war would be to establish and maintain footholds in the enemy's networks (fill in your enemy here).

I've thought for years that it's easier to buy already compromised networks than build to suit. Are military and IC budgets already paying into botherder coffers?

Considering that; I'm wondering about the purpose, or secondary purposes, of Darkmarket. Only to identify and persecute criminals? It could easily be used to channel best of breed into other programs.

Winter • November 4, 2010 10:57 AM

"Is this legal"

In the EU, it mostly is. As I understand it, law enforcement stopping someone who is breaking the law is almost always legal. Especially if the crime is in progress.

I think there is a legal difference between entering a place to search for evidence, which requires a search warrant, and entering to stop a crime in progress, which should not.

Note that there might be problems with any evidence unearthed while interfering with the crime. But that is a completely different matter.

Winter • November 4, 2010 11:02 AM

For good (dated) empirical estimates on the market price of bots see also:

Franklin, J., Paxson, V., Perrig, A., and Savage, S. An inquiry into the nature and causes of the wealth of internet miscreants. In CCS '07: Proceedings of the 14th ACM conference on Computer and communications security (New York, NY, USA, 2007), ACM, pp. 375–388.

http://www.icsi.berkeley.edu/cgi-bin/pubs/...

Peter Maxwell • November 4, 2010 1:42 PM

@Clive Robinson at November 4, 2010 9:34 AM

"The thing that realy makes me scratch my head with these bot net operators is that they realy don't try to make the Command and Control channel effectivly One Time through the likes of google searches and random posts to blogs."

Have pondered along similar lines, specifically:

i) why do they use centralised "command and control" nodes when it is conceptually fairly easy to create a completely decentralised system then inject instruction messages to random nodes that then get propagated through the botnet;

ii) how on earth can authorities compromise a "command and control" server in a manner in which they can then use the botnet - surely something as simple as requiring all instructions to be signed with a private key and not storing that key on the server would suffice;

iii) why they do not use other communication vectors like you said - forums, usenet, free email accounts, mailing lists, blog comments, even public ldap keyservers would work.
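Point (ii) above is easy to sketch. The toy below uses textbook RSA with small, fixed primes (no padding, no real key sizes) purely to show the asymmetry Peter describes: the C&C server and the bots hold only the public values (N, E), so seizing them yields nothing that can mint valid commands. All names and values here are invented for illustration.

```python
import hashlib

# Toy textbook RSA -- illustrative only; real use needs a vetted
# library, proper padding, and ~2048-bit keys.
P, Q = 104729, 1299709             # small known primes (toy key)
N = P * Q
E = 65537                          # public exponent
D = pow(E, -1, (P - 1) * (Q - 1))  # private exponent (Python 3.8+)

def sign(cmd):
    """Herder side: sign a hash of the command with the private exponent D."""
    h = int.from_bytes(hashlib.sha256(cmd).digest(), "big") % N
    return pow(h, D, N)

def verify(cmd, sig):
    """Bot side: check the signature using only the public values (N, E)."""
    h = int.from_bytes(hashlib.sha256(cmd).digest(), "big") % N
    return pow(sig, E, N) == h

cmd = b"update http://c2.example/payload"
sig = sign(cmd)
print(verify(cmd, sig))            # genuine command accepted
print(verify(b"forged command", sig))  # forgery rejected
```

Because D never touches the distribution infrastructure, compromising a relay answers Peter's question (ii): the authorities could observe traffic but not issue instructions the bots would accept.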


Guess we can be thankful that most of the botnet folk are slightly inept.

Aye, gone are the days when spirits were brave, stakes were high, hackers were real hackers, women were real women and small furry creatures from alpha centauri were real small furry creatures from alpha centauri.


Nick P • November 4, 2010 3:15 PM

@ Clive Robinson

Yeah, the financial figures are usually unreliable. The only reliable financial figures in the underground are what something is selling for (it's posted) or what a guy just made off a crime. Even these can be inaccurate due to haggling or exaggeration. Does anyone really think a guy about to do 10 years for hacking is going to brag about making $14 a day?

Nick P • November 4, 2010 3:22 PM

@ Peter Maxwell

I don't think they are inept as much as just using readily available kits. Most of the botnet builders use kits made by a few hackers in their spare time. Others build extensions. But the botherders seem to mostly be non-technical as far as system-level programming goes. They are usually more like modern code monkeys: slapping pieces together and hoping something good happens.

As far as the features you mentioned, I designed a botnet last year I think that had all of those features. It was inspired by a certain, little-known P2P program with excellent security. The design extended a well-known kit with some protocol logic and crypto added in. It took an hour for the basic design. Full verification could take considerably longer, but I didn't intend to build so I didn't have to go that far.

From what I see, most of the hackers building the kits aren't very skilled cryptographers or security engineers. They are just good programmers with knowledge of obfuscation and evasion techniques. That may explain why they don't do some obvious stuff. Also, as we've repeatedly posted ideas here, we can also assume they don't read Schneier's blog. Always a sign of a non-professional in this field...

Davi Ottenheimer • November 4, 2010 3:53 PM

These botnet business discussions always remind me of PointCast and SETI clients. Wonder if anyone running a nefarious botnet is also trying to make it run legit computations and use it as a defense: "don't shut it down, it's searching for a cure for cancer"

Winter • November 4, 2010 4:14 PM

@Everybody:
"Guess we can be thankful that most of the botnet folk are slightly inept."

If you are really, really smart, brilliant, and only want to make a lot of money, the obvious route to riches is not crime. Especially not cracking other people's home computers. You go to work for a merchant bank on the stock exchange.

As Inspector Columbo said in one of his shows, "Murderers are not very smart. If they had been smart, they could have gotten what they wanted in a better way."

Thomas • November 4, 2010 5:52 PM

@Peter
"Guess we can be thankful that most of the botnet folk are slightly inept."

@Albert Einstein
"Make everything as simple as possible, but not simpler."

If it works, don't fix it.

John Hardin • November 4, 2010 7:01 PM

@ Peter:
"...requiring all instructions to be signed with a private key..."

Being found in possession of that private key would be pretty damning evidence...

James • November 4, 2010 7:04 PM

"specially-trained transit police officers would begin looking into passengers' bags"

Didn't know having a pair of eyes required special training. Then again, they'll use any justification to get training money and form those useless special task forces.

Maybe soon there'll be a need to inspect the toilets, so they'll go for special smell-resistant training.

MarkH • November 4, 2010 7:37 PM

@John Hardin:

"Being found in possession of that private key would be pretty damning evidence..."

True security-guy paranoid thinking! I salute you!

Nick P • November 4, 2010 10:45 PM

@ Winter

I think Columbo is wrong. Ignoring legality here, murder is often a viable and useful strategy for gaining riches if it accompanies a theft and the source is properly disguised. This has happened enough in real life that we should take it seriously. Most get caught, but a few don't or we can't do anything to them legally. Another application is when you want to create instability or losses that financially benefits you in an indirect way. Using put options in connection with a bombing comes to mind, although they are audited (need alibi).

I do agree that really smart people typically don't need murder or crime to make money: I've said it myself on occasion. Even ignoring stock exchanges, there are many grey areas where people can essentially defraud others or steal legally. Personally, I'd say what the patent trolls are doing is theft. Intellectual Ventures made like five billion on it, though.

The reality is that a person following pure pragmatism should do whatever produces the most money the quickest. You have to admit, Winter, that it's much easier to become a profitable botherder than a stock broker. It takes almost no brains to get one going and make money off it. Avoiding getting caught via the money is more difficult, but a smart guy will do the research and figure that out. Hence, crime is a viable option for a smart and amoral person, especially if they live in Northern Asia or Eastern Europe. The money issues disappear when the mob does the laundering or dirty politicians look the other way. ;)

Nick P • November 4, 2010 11:00 PM

@ Everyone

Hey, I just got an idea that I don't think anyone has posted about. Most writers have been talking about the kind of money people make botherding and the kind of time they do if caught. How about making money by busting botherders?

If you can find the Crimefighters Handbook online, as it used to be a free txt file, you will see that there are numerous bounty hunter laws civilian crimefighters can use. The basic one is up to $25,000 for info leading to the arrest (not conviction) of a felon, with another $25,000 if they had drugs on them. RICO lawsuits allow the crimefighter to collect up to 50% of all fines and forfeitures. So, if these guys had a huge bankroll, a crimefighter would make six digits off their bust.

I was going to do this in my city but I didn't like the risk of retribution. However, most botherders that have been arrested aren't that type and the risk is almost zero. A few ideas come to mind. Infiltration and gaining their trust. Possibly pretexts to get them to come into the United States to make more money, then busting them upon entry. The American ones would be especially easy.

So, my target is botherders, bank account thieves and CC fraud dudes. Most use Western Union, with many stores having video cameras. If we can't get them technically, then this could be a decent way to do it. Cost a bit, though. Should probably put a $25,000 limit on expenses. Then, sue the guys for the expenses and put a lien on their property. I could see this working for someone with professionalism. ;) What do you guys think? Should we spread the word about the reward laws to give incentives to people capable of busting these guys?

Nick P • November 4, 2010 11:05 PM

Hey, I just noticed something. Remember when people were panicking that Conficker and Storm may have controlled *a million* PCs? Check out the quote below:

"the Dutch High Tech Crime Team began taking over command-and-control servers used to issue instructions to the *29 million* infected computers"
(asterisks added)

Wow! That's a lot of computers under this guy's control. I figure reporters would be focusing a bit on how huge this thing was and how long it took to build. It could be the biggest I've heard of. For DDoS lovers, that thing is Excalibur. Any comments?

Winter • November 5, 2010 1:42 AM

@Nick P:
"It takes almost no brains to get one going and make money off it. Avoiding getting caught via the money is more difficult, but a smart guy will do the research and figure that out. Hence, crime is a viable option for a smart and amoral person, especially if they live in Northern Asia or Eastern Europe. The money issues disappear when the mob does the laundering or dirty politicians look the other way. ;)"

I agree with everything you wrote. And Columbo was only a TV show. But the question was: why are criminals not very bright?

And the point was that if they *were* very bright, they could earn their money with less risk. Those who remain in crime are those who (have to) choose the easier, and riskier, options. But indeed, keeping your own hands clean doesn't mean you cannot "incentivize" others to perform the dirty work.

Getting the mob to launder your money and politicians to back you and still get rich would be an achievement in itself.

I glossed over the obvious point that there are places where crime might indeed be the only option to an ambitious genius.

Clive Robinson • November 5, 2010 5:14 AM

@ John Hardin,

"Being found in possession of that private key would be pretty damning evidence..."

Except for the fact it would not be found...

Consider the simple case of a 2048-bit personal key and a 512-bit botnet key.

There are very easy ways to hide the 512-bit key in the upper third of the 2048-bit key. Proving it was there would be almost impossible unless you were caught in the act of using it.

The amount of redundancy in a two-prime multiple is astounding for any given bit length, so only very simple search algorithms are required...

There are very many other tricks that can be used to hide a private key with little difficulty; I've thought of a few myself.

I would commend the works of Adam Young and Moti Yung to those who want to see just how an inventive mind can be put to use.

Jenny Juno • November 5, 2010 6:02 AM

I keep waiting to hear about somebody turning botnets into Tor networks. At the very least they would be useful to any organization that controls the botnet directly, and somewhat useful to "little guys" renting usage - you still have to worry about the botherder snooping, since technically he does have control over all the nodes in the network. But as long as the botherder isn't the CIA or another TLA, it's probably good enough for most uses.

Chris Holland • November 5, 2010 10:02 AM

@ Nick P:

29 Million is indeed scary.

I've seen 25 bots do damage to a badly-tuned application. I've seen 1,000 bots take down a bad firewall.

I wonder what kind of bandwidth they were pushing when they threw those 225K computers against the Dutch authorities.


@ Jenny Juno:

I could indeed see those bots becoming Tor exit nodes.

@ Peter Maxwell:

They do use all sorts of decentralized C&C techniques, such as dynamically registering domain names. That 2006 paper covers a few of them; google for ndss_botax.pdf
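The "dynamically registering domain names" trick Chris mentions is a domain-generation algorithm (DGA): bot and herder both derive the same daily list of candidate rendezvous domains from a shared seed, so there is no fixed hostname to blacklist. A minimal sketch of the derivation side; the seed, label scheme, and ".example" suffix are all invented for illustration:

```python
import hashlib
from datetime import date

def dga_domains(seed, day, count=5):
    """Derive the day's candidate rendezvous domains from a shared seed.

    Both ends run this independently, so they agree on which domains
    to try today without any hard-coded C&C hostname.
    """
    domains = []
    for i in range(count):
        material = ("%s|%s|%d" % (seed, day.isoformat(), i)).encode()
        digest = hashlib.sha256(material).hexdigest()
        # map the first 12 hex chars to letters a-p for a plausible label
        label = "".join(chr(ord("a") + int(c, 16)) for c in digest[:12])
        domains.append(label + ".example")
    return domains

print(dga_domains("demo-seed", date(2010, 11, 4)))
```

The herder only has to register one of the day's candidates ahead of time; defenders have to predict or sinkhole all of them, which is the asymmetry that makes the technique attractive.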

Paeniteo • November 5, 2010 10:48 AM

@Clive: "...hide a private key with little difficulty..."

As with all steganography approaches, these are generally very hard to implement correctly.

E.g., hiding the key in another key would likely require some program to get it back out (unless you'd prefer to code/compile such a program for every access and destroy it afterwards), which would be pretty suspicious in itself (and could be used to find the key).

Peter Maxwell • November 5, 2010 11:42 AM

@John Hardin at November 4, 2010 7:01 PM

"Being found in possession of that private key would be pretty damning evidence..."

Given that any self-respecting "cyber criminal" should have said key residing on encrypted media and only used via a secure anonymous proxy, if law enforcement can find it - they bloody well deserve to nail the user.

-------------

@Winter at November 4, 2010 4:14 PM

"If you are really, really, smart, brilliant and only want to make a lot of money, the obvious route to riches is not crime. Especially not cracking other peoples home computers. You go working for a merchant bank on the stock exchange."

Unfortunately, merchant banks tend to be somewhat picky in who they employ. Shame really, otherwise we would have got on famously ;-)

------

@Winter at November 5, 2010 1:42 AM

"But the question was why criminals are not very bright?"

The bright ones are those we never hear about, or are running the country.

----------

@Nick P at November 4, 2010 11:00 PM

"The basic one is up to $25,000 for info leading to the arrest (not conviction) of a felon, with another $25,000 if they had drugs on them. RICO lawsuits allow the crimefighter to collect up to 50% of all fines and forfeitures."

To be honest, that really surprised me when I read it, but, yup, it's in there. I believe the 3059 provision was repealed in February this year, though; however, US legislation seems even more difficult to discern than UK statute law, so I could be wrong.

B. Taylor • November 5, 2010 8:01 PM

@Peter Maxwell

i) Centralized CnC provides resilience. P2P botnets were gaining in popularity until rival operators realized how easy it is to steal other botnet operators' nodes. Malicious distributors using "Pay Per Install" schemes will pay operators a certain amount of money for each machine they can infect. So the more machines you control, the more you get paid. Having machines stolen from you is literally having money stolen from you. Therefore some form of centralized CnC is preferred. These assets are easier to manage and retask on demand.

ii) Who can say? There are countless ways this can occur.

iii) This is a growing trend. We see botnet malware CnC posing as seemingly innocent posts to web forums, "image" downloads and the like. The use of filesharing domains is also popular. However, the criminals don't have a need to get too complex. Most use simple DNS and HTTP/HTTPS as the CnC infrastructure. The newer malware is proxy-aware, so it can move out of the network with or without a web proxy.

Most organizations either don't see botnet traffic moving through their networks, or they DO see it, but it looks benign or mundane enough not to earn further scrutiny.

Peter Maxwell • November 6, 2010 2:08 AM

@B. Taylor at November 5, 2010 8:01 PM

"Centralized CnC provide resilience. P2P botnets were gaining in popularity until rival operators realized how easy it is to steal other botnet operators' nodes."

Surely that's down to poor design, though: it is not difficult to design a distributed P2P system where a rival cannot steal nodes. Actually, that attribute should arise by default out of the requirement that any nodes compromised by law enforcement cannot subvert other nodes. (The design to do this is fairly trivial, although the implementation would require attention to detail.)


"Most use simple DNS and HTTP/HTTPS as the CnC infrastructure. The newer malware is proxy-aware, so it can move out of the network with or without a web proxy."

When considering a command-and-control type setup, I agree with you, it doesn't require too much complexity. For a stealthy P2P setup, it will probably need to be a bit, erm, smarter.

I like your file-sharing suggestion; I hadn't thought of that.


"Most organizations either don't see botnet traffic moving through their networks, or they DO see it, but it looks benign or mundane enough not to earn further scrutiny."

Most organisations do actually pick it up: either the NMS/SEM will flag something unusual, or it gets discovered when executing a payload, i.e. it's pretty obvious when your company's own systems start trying to send spam through your internal mail.

Clive Robinson • November 6, 2010 3:13 AM

@ Paeniteo,

"As with all steganography approaches, these are generally very hard to implement correctly."

Hiding something has a few basic aspects, the two most obvious being the object and the container. The security relies on an adversary not being able to prove the existence of the object in the container.

So the questions to ask, if you use a 2048-bit public key as the container and a 512-bit secret key as the object, are:

1, Is it possible to put the object in the container?
2, Is it possible for an adversary to find the object in your container?

To answer (1),

If you examine a "pq" public key you will find the most significant bit is 1, and if it is also a Blum integer the two least significant bits are defined by the constraint on both p & q of being "congruent 3 mod 4", i.e. of the form "4t+3" with t an integer. Thus the two least significant bits of p and q will each be set (binary 3, or '11'), and thus the two least significant bits of the p.q multiple, i.e. the key, are "01", giving you a public key format of "1... y ...01".

If you then consider y, the bits between the start and end bits, they are (ignoring the carries for now) the result of an A.B multiply where A and B are p and q right-shifted two places, which is just each prime's "t" from "4t+3".

The question then is: is the distribution of the t's any different from the distribution of ordinary integers?

The answer is obviously yes, as "4t+3" has to be prime for p and q to be prime, whereas for ordinary integers there is no such constraint.

The question then arises: does this matter?

The answer is no, because for the pq multiple to be a valid public key that is the expected distribution, not the distribution of A.B where A and B are any integers in the range 1 to 2^(y/2).

The next question arising is: can I find a bit pattern x in the available keys "1... y ...01", where x is a substring of y?

To which the answer is yes. That is, you make up your short key x and then go on a hunt using a fairly simple algorithm. If after a period of time a legitimate public key containing x has not been found, simply generate a new x and run the algorithm again.

On a PC, with the legitimate public key being 2048 bits, a secret short key x of 512 bits can usually be found in less than 24 hours. Also usually at an easily memorable point in the legitimate 2048-bit public key.

The important question, (2), is: can x, the short secret key, be recognised by an adversary in the public key "1... y ...01"?

The answer is simple:

Not until they have cracked the 512-bit short secret key x from the botnet traffic's public key, and thus know the bit pattern to look for.

So it is as hard as factoring any other RSA public key to get the p,q primes and build the private 512-bit secret key x.

So to use the secret short private key you have to remember:

1, where the 2048-bit public key is stored
2, the length of the secret short private key
3, the bit offset in the public key where the secret short key starts
4, any additional obfuscation used.

Or you could do the whole thing in reverse: find a legitimate public key everybody has on their PC and then find a secret short key to match it, with a little easily remembered obfuscation.

Which is easier than it sounds when you consider the number of CA public keys tucked away in the likes of the default install of Internet Explorer or other browsers.
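The core of the hunt Clive describes is just a substring search over the bit string of a candidate key. The sketch below scales the sizes down (an 8-bit pattern in 64-bit values standing in for 512 in 2048) so that a match actually turns up after a handful of random draws; the mechanics are the same at any size, only the number of candidate patterns and keys one must range over changes. The values are random stand-ins, not real moduli.

```python
import secrets

SHORT_BITS, LONG_BITS = 8, 64   # toy stand-ins for 512 and 2048

def find_offset(pattern, key):
    """Return the bit offset of `pattern` inside `key`'s bit string, or -1."""
    pat = format(pattern, "0%db" % SHORT_BITS)
    return format(key, "0%db" % LONG_BITS).find(pat)

# the hunt: draw random "keys" until one contains our chosen pattern
pattern = 0b10110010
tries, offset = 0, -1
while offset < 0:
    tries += 1
    # force the MSB to 1, as in a real modulus of this bit length
    key = secrets.randbits(LONG_BITS) | (1 << (LONG_BITS - 1))
    offset = find_offset(pattern, key)

print("found pattern at bit offset %d after %d keys" % (offset, tries))
```

Once a hit is found, all the user need remember is the host key, the offset, and the length: the pattern itself is never stored anywhere an adversary can point to.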

Clive Robinson • November 6, 2010 4:11 AM

@ B. Taylor, Peter Maxwell,

"Most organizations either don't see botnet traffic moving through their networks, or they DO see it but it looks benign or mundane enough not to earn further scrutiny."

I had an argument with Richard Clayton of the Cambridge Computer Lab over this point some time ago. His view appeared to be that bots are only used for spam and DDoS (which I am very sure they are not in some selected cases).

My view is that those botherders using botnets for spam or DDoS are wasting a resource. That is, they don't actually know how to capitalise on their assets any better.

Simply because when they start spamming or DDoSing, most people in SMEs and above cannot miss the increase in network traffic and thus take investigative steps to find and remove the cause. Also, some ISPs inform users that they are in violation of their T&Cs and will threaten to cut them off if they don't clean up their machine.

One of the reasons I suggested back then to use Google or other search engines for making a decentralised command structure was that it looks just like normal user traffic on the network, and provided it keeps the bandwidth low it's not going to be that easy to spot.

The reason I was thinking about it is that botnets are essentially "fire and forget" malware and can be used to, amongst other things, search out valuable information and find ways across air gaps.

That is, when used for intel gathering a botnet can enumerate network topology and also search for user documentation etc. Thus it becomes a significant and rather worrying APT tool, not just a resource-consuming annoyance.
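The receive side of the decoupled channel Clive describes boils down to scanning ordinary-looking text (a blog comment, a search result) for a token carrying an authenticated command. A minimal sketch: the "ref:" tag, the token format, and the hard-coded demo key are all invented for illustration, and a real design would derive the key rather than embed it.

```python
import base64
import hashlib
import hmac

KEY = b"demo shared key"   # stand-in; a real bot would derive this
TAG = "ref:"               # innocuous-looking marker, invented here

def embed(command):
    """Wrap a command as an unremarkable-looking token for a blog post."""
    mac = hmac.new(KEY, command.encode(), hashlib.sha256).hexdigest()[:16]
    token = base64.b32encode(command.encode()).decode().rstrip("=").lower()
    return TAG + token + "." + mac

def extract(text):
    """Bot side: scan ordinary text for an authenticated command."""
    for word in text.split():
        if word.startswith(TAG) and "." in word:
            token, mac = word[len(TAG):].split(".", 1)
            token = token.upper() + "=" * (-len(token) % 8)
            try:
                command = base64.b32decode(token).decode()
            except Exception:
                continue  # not one of ours; keep scanning
            good = hmac.new(KEY, command.encode(),
                            hashlib.sha256).hexdigest()[:16]
            if hmac.compare_digest(mac, good):
                return command
    return None

post = "great article, see " + embed("sleep 86400") + " for details"
print(extract(post))   # sleep 86400
```

To a network monitor this is one more low-bandwidth web fetch of a page of text, which is exactly Clive's point about it blending into normal user traffic.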

B. Taylor • November 6, 2010 7:44 AM

@ Clive Robinson,

"My view is that those botherders using botnets for spam or DDoS are wasting a resourse. That is they don't actually know how to capitalise on their assets any better."

That's an insanely modern and ultimately correct view, IMO. Or to put it in the words of a teenager... "Spam and DDoS bots are sooooo 2005." Depending on when you and Richard talked, he may have been right. However, the game has substantially changed over the last year.

Theft of corporate banking instructions/credentials, PPI, intellectual property and PII are far more lucrative and don't require that cozy criminal-to-victim interaction (as some of the DDoS-for-Ransom attacks do). Clive hit the nail on the head. Criminals are getting smarter and using their (stolen) resources more effectively.

"Most organisations do actually pick it up: either the NMS/SEM will flag something unusual, or it gets discovered when executing a payload, i.e. it's pretty obvious when you company's own systems start trying to send spam through your internal mail"

Again, spambots are giving way to theft-bots that primarily use HTTP or HTTPS. Most of the time, the CnC will be "seen" over the web proxy as a user attempting to visit a blocked site. That may be categorized as 'Spyware/Malware' but may also flag as Adult or even the ubiquitous 'Uncategorized'. With hundreds or thousands of these events coming in per day, the security teams say "Proxy blocking. Proxy working. OK, dudes... Nothing to see here, let's go work on our other 234 projects that are due on Friday..."

If the user is on a mobile device, they will leave the network and the bot will often update itself by downloading newly created domains that will slip past proxies and other blacklists. We have observed some of the Monkif and Zeus-based bots going to legit sites like Google just to check whether Internet access is allowed and whether anything is blocking outbound requests.

Google 'tdss rootkit analysis' and 'building botnet campaigns' for a look at common tactics.

PrometheeFeu • November 6, 2010 10:08 AM

These days, steganography has become a lot easier. Look at your <insert any utility here> bill. They all have QR codes. Print a set of fake bills, replacing the QR code with your key. All you need then is a way to convert the QR code into a long number in a text file. Nobody will ever bother scanning the QR code on your bills... Depending on your level of paranoia, you can iterate on the concept with secret sharing between different QR codes, replacing the QR code on different products, or, if you are really that paranoid, there is an age-old technique: memorize the key! I know it's a long string of hex. But seriously, if there were a string of hex which I could use to become a multi-millionaire or which could be used to send me to jail, I would spend however long it took to memorize the darn thing. It would be worth it!
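Memorizing the key gets much easier if the hex is mapped onto words first, since people recall word sequences far better than hex strings. A minimal sketch, one word per hex digit; the 16-word list is made up for illustration, and a serious scheme would use a much larger word list plus a checksum:

```python
# Tiny mnemonic scheme: each hex digit of the key maps to one word,
# so a 512-bit key (128 hex digits) becomes 128 common words.
WORDS = ("acid bird coal door echo fish gold hill "
         "iron jade kite lamp moon nail opal pine").split()

def key_to_words(hexkey):
    """Map each hex digit of the key to a word."""
    return " ".join(WORDS[int(c, 16)] for c in hexkey.lower())

def words_to_key(phrase):
    """Invert the mapping back to the hex key."""
    return "".join(format(WORDS.index(w), "x") for w in phrase.split())

print(key_to_words("3f9a"))   # door pine jade kite
```

The mapping is trivially reversible, so nothing but the phrase in your head needs to exist, which is PrometheeFeu's point about leaving no artifact to find.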

Clive Robinson • November 6, 2010 11:03 AM

@ B. Taylor,

"Depending on when you and Richard talked, he may have been right. However the game has substantially changed in over the last year."

It was on the 24th Oct 2009 Richard said,

"Well if the botnet doesn’t send spam or perform DDoS attacks, then the damage it does is really rather limited — and we’ve won anyway"

Lest I be accused of taking him out of context, you can read the postings at,

http://www.lightbluetouchpaper.org/2009/10/17/...

You can see from what I initially said at the time that I was aware of other activities bubbling up but was reluctant to say "out loud" what they were.

However, I found out that a writer on a security blog had almost coincidentally put out an article letting the cat out of the bag, so my later posting went into more detail on what a botnet could be used for in terms of enumerating honeynets etc., as well as how to make a decoupled control channel using search engines.

Peter Maxwell • November 6, 2010 12:18 PM

@B. Taylor at November 6, 2010 7:44 AM

"Again, spambots are giving way to theft-bots that primarily use HTTP or HTTPS. Most of the time, the CnC will be "seen" over the web proxy as a user attempting to visit a blocked site....Nothing to see here, let's go work on our other 234 projects that are due on Friday"

You've made one assumption here which invalidates your result: you've assumed the system in question is already compromised. Compromising a system within a security-conscious organisation is a difficult and often noisy business, with normal-case scenarios falling into at least one of the following:

i) a system within the organisation is infected with malware through a browser/email exploit which is not zero-day and is picked up by border scanners;

ii) a system within the organisation is infected with malware through a browser/email exploit which is zero-day, which then tries to replicate to other internal hosts and is picked up by the NMS/IDS/SEM;

iii) a system is infected via removable media and tries to replicate, being discovered by the NMS/IDS/SEM;

iv) the malware somehow doesn't trigger anything but activates when the user in question is not logged into the system, or violates some other base rule in the SEM.


Granted, when talking about what Clive suggested - specifically targeted attacks - then it is possible for something to go unnoticed but it would have to satisfy several criteria:

a) it either cannot infect other internal hosts, or any infection vector must evade IDS detection and not create unusual traffic patterns;

b) if infecting via a network vector, it must use zero day exploits to ensure vulnerability isn't patched and evade any IDS;

c) if (b) then it must be coded in a way to evade any IDS heuristics;

d) if infection is via removable media then it will likely involve user interaction, so you have to convince the user to do something and not be suspicious;

e) assuming the malware returns information via HTTP/HTTPS, it must be encoded in such a fashion that it is not obvious to any border scan (I'm fairly sure some proxies can re-sign HTTPS with a company certificate and thus scan the plaintext, although I could be wrong).


Not particularly easy, but again not impossible either.


Clive RobinsonNovember 7, 2010 3:24 AM

@ B. Taylor, Peter Maxwell,

"To compromise a system within a security conscious organisation is a difficult and often noisy business with normal case scenarios..."

The weasel words are "normal case scenarios"...

The reason being that even in security-conscious organisations there are resource limitations, and usually an ever-expanding demand for external connections over and above the increasing demand internal to the organisation.

It gives rise to the age-old philosophical question:

"If a tree falls where nobody can hear it, does it make a sound?"

That is, you can make as much noise as you like provided you know that it is not going to be heard, or, if heard, not understood for what it is.

There are three basic stages to exploiting a computer:

1, Find an exploit.
2, Find a computer on which the exploit will work.
3, Get the exploit onto the computer.

Preventing this is usually a case of "recognition by signature", and this is where it starts going wrong for the defender.

Signatures are "statements of the known", not "recognition of the unknown".

If you have a file that is never modified, taking a couple of hashes via different algorithms will be sufficient to detect any changes; likewise with filesystem directory and allocation tables. This is usually the case with most OS files. With one proviso, this is a working security strategy for detecting unauthorised changes to effectively static files and file systems.

It does not work as well for files or filesystems that get appended to. That is, it works up to the last snapshot point but not after; it can, however, immensely reduce the search/check space when making the next snapshot, provided it is coupled to an effective checking system. But as a security system it is only effective for the known, not the unknown, so it leaves windows of opportunity for an attacker between snapshots.

For files and filesystems that get random changes, hashes are not going to achieve anything unless you have a journal to track the changes. I'm not directly aware of journaling systems coupled with effective checking systems, but even if they exist that only effectively makes them equivalent to the appending systems above.

The more random and the more frequent the changes to files and file systems, the larger the effective window of opportunity becomes, and thus the less effective a signature-based system becomes.

The proviso to all these systems is of course the reliability of the reporting mechanism (i.e. that it does not report falsely, as in the case of rootkits).

The same argument applies to a computer system's memory and any executable code or data that affects a code's execution.

Likewise with network data: this is very much a dynamic system where, unlike a static file or file system, not matching a signature is the norm, not the exception.

Intrusion Detection / Prevention Systems (IDS/IPS) work by matching patterns of known attacks; however, they have some real limitations:

A, They have to have a usable signature.
B, They have to be able to check a packet against a signature.

The first (A) is a real problem in and of itself: a signature is only going to exist for a known issue. Likewise, the signature has to produce a reliable indication to be of use.

From an attacker's point of view, all they have to do is either come up with an unknown exploit or a method of changing a known exploit's signature beyond the point where it is recognised. There are tools out there (ENG++) that will take an exploit apart and then "fuzz" the basic components such that the signature is very much changed.

Likewise, we have seen in the past polymorphic virus code that does something similar with signature-avoiding encryption and compression engines.
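Why byte-level signature matching is so brittle can be shown with a toy example (not a real exploit; the "signature" bytes are invented): a trivial XOR encoding makes a known pattern invisible to a naive scanner, even though the original bytes are trivially recoverable.

```python
# Toy illustration: a naive byte-signature scanner misses a payload
# after a trivial XOR encoding, even though a small stub could
# restore the original bytes at run time.
SIGNATURE = b"\x90\x90\xcc\xcc"   # hypothetical known-bad byte pattern

def scanner_flags(blob):
    """Naive signature scan: exact byte match only."""
    return SIGNATURE in blob

def xor_encode(blob, key=0x5A):
    """Reversible single-byte XOR, the simplest polymorphic trick."""
    return bytes(b ^ key for b in blob)

payload = b"HEAD" + SIGNATURE + b"TAIL"
assert scanner_flags(payload)             # the original is detected
encoded = xor_encode(payload)
assert not scanner_flags(encoded)         # the encoded copy slips past
assert xor_encode(encoded) == payload     # yet it decodes perfectly
```

Real polymorphic engines vary the decoder stub as well, which is why scanners moved beyond pure byte matching.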

However, IDS/IPS have a "real time" problem: to be of use they have to be able to do packet inspection at all levels, in a very short time frame, on very large numbers of packets.

Needless to say, this is very resource-heavy to do even moderately well on complete packets. And this is where their second problem (B) arises: many IDS/IPSs cannot handle fragmented packets, let alone fragmented packets that are significantly out of sequence.

Part of the recent Advanced Evasion Techniques (AET) brouhaha appears to be over this very issue. They revolve mainly around the known packet-fragmentation and timestamp out-of-sequence techniques, but bring "fuzzing" to the party, thus taking the forty or so known techniques and multiplying them well beyond the capabilities of many IDS/IPS systems.
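The fragmentation problem can be sketched in miniature: a signature only appears once the inspection device has reordered the fragments by offset, so a scanner that looks at fragments in arrival order sees nothing. (The offsets and the "signature" string here are invented for illustration.)

```python
# Sketch of why out-of-order fragments defeat naive inspection: the
# signature only appears after segments are reordered by offset.
def reassemble(fragments):
    """fragments: list of (offset, bytes); returns the ordered stream."""
    return b"".join(data for _, data in sorted(fragments))

SIGNATURE = b"/etc/passwd"

# Attacker sends the request split up and out of order:
frags = [(9, b"passwd"), (0, b"GET /e"), (6, b"tc/")]

arrival_order = b"".join(d for _, d in frags)
assert SIGNATURE not in arrival_order        # naive scan misses it
assert SIGNATURE in reassemble(frags)        # reassembled stream matches
```

Real reassembly also has to handle overlaps, timeouts, and per-host reassembly policy differences, which is where the evasion techniques live.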

However, from some comments by H. D. Moore (who did some of the original evasion research and has had the details disclosed to him), there is a bit more to AET than just a reboiling of known issues (but we are going to have to wait until the end of this month to find out what).

The upshot is that "fuzzing" brings a whole new dimension to foiling signatures, allowing even years-old exploits to get past the detectors, not just of IDS/IPS but of Anti-Virus Systems (AVS) as well.

Further, there is the issue of patches: guess what, it appears that some are vulnerable to fuzzing as well...

That is, taking a known exploit, taking it apart and reordering it gets past some patches.

The reason for this is that sometimes there are fundamental vulnerabilities that cannot be fixed, just obfuscated. And importantly, there are way more of these than you would at first think.

Thus there are not just 0-day exploits but what some choose to call Z-day exploits, which are known exploits that are fuzzed into new forms.

So even very security-conscious organisations with reasonable resources are potentially going to get caught out by fuzzing.

And as has been seen more recently, even old, well-trusted security techniques that were once thought invincible (i.e. air gaps) are being breached due to the way we now work. Gone are the days of locked-down systems, not just in the OS and applications but in connectivity, access and data types.

Mobile devices don't have the resources to adequately protect themselves from known attacks.

Thus mobile devices have themselves become the new Achaean (Greek) wooden horses, wheeled through the defending ramparts into the modern Troy-like corporate citadel by lax Trojan executives, who, buoyed up by past imperviousness, view them as tributes to their own abilities rather than as hiding places for Achaean-cohort-like rich data formats that can spring forth to rampage around the citadel's network highways and pillage at will the treasure of information within its data stores, safe in the knowledge that any noise they make or alarms sounded will be drowned in the carousing of drunk-with-power executives who see no value in the caution of more battle-hardened and wary troops.

Peter MaxwellNovember 7, 2010 2:25 PM

@Clive Robinson at November 7, 2010 3:24 AM

"For files and filesystems that get random changes, hashes are not going to achieve anything unless you have a journal to track the changes. I'm not directly aware of journaling systems coupled with effective checking systems, but even if they exist that only effectively makes them equivalent to the above appending systems."

I don't mean to be rude, but what on earth do journaling file systems have to do with this discussion?


"Intrusion Detection / Prevention Systems (IDS/IPS) work by matching patterns of known attacks however they have some real limitations

A, They have to have a usable signature
B, They have to be able to check a packet against a signature."

Firstly, most decently sized organisations will have at least a border IDS; I also suspect the high end commercial solutions perform a lot better than you expect, and don't rely solely on signature detection. On normal traffic these solutions should perform at almost wire speed.


"The first (A) is a real problem in and of itself: a signature is only going to exist for a known issue. Likewise the signature has to produce a reliable indication to be of use."

Of course, an unknown exploit is unlikely to be covered (although I am aware of one IDS solution that partially deals with the problem of unknown vulnerabilities).


"From an attacker's point of view, all they have to do is either come up with an unknown exploit or a method of changing a known exploit's signature beyond the point where it is recognised."

Altering an exploit to evade a decent IDS can range from slightly tricky to impossible, depending on the IDS and vulnerability involved. New exploits against something you can attack from the outside are worth a lot of money - the value of the target will have to exceed the potential easy earnings of selling the exploit. And the "all you have to do" can invariably be rather involved :-)


"There are tools out there (ENG++) that will take an exploit apart and then "fuzz" the basic components such that the signature is very much changed."

You're assuming the IDS is similar to say, Snort, and is using a signature engine - that is not necessarily the case.


"Likewise, we have seen in the past polymorphic virus code that does something similar with signature-avoiding encryption and compression engines."

Again, this assumes a purely signature-based IDS. Encrypted packets where there shouldn't be any will almost certainly raise alarm bells. Exploits have an additional problem that virus code does not: often an exploit must have a certain structure to work, so while the payload can be easily obfuscated, the actual exploit usually has to fit a semi-rigid pattern.


"However IDS/IPS have a "real time" problem, to be of use they have to be able to do packet inspection at all levels in a very very short time frame on very large numbers of packets."

They seem to manage, as do high end firewall products like Checkpoint (the last time I read a spec they could punt 6Gb/s across a link and that was about three years ago).


"Needless to say, this is very resource-heavy to do even moderately well on complete packets. And this is where their second problem (B) arises: many IDS/IPSs cannot handle fragmented packets, let alone fragmented packets that are significantly out of sequence."

I thought packet reassembly was implemented even in open source IDS solutions some time ago? More to the point, how are highly fragmented packets getting past the firewall?


"Part of the recent Advanced Evasion Techniques (AET) brouhaha appears to be over this very issue."

Would be good to see how it performs against different commercial IDS solutions (too many fragmented packets alone should raise an alarm).


"That is, taking a known exploit, taking it apart and reordering it gets past some patches."

Now that, I can believe :-)


"So even very security-conscious organisations with reasonable resources are potentially going to get caught out by fuzzing."

Quite possibly.

However, this discussion is not looking at the whole context: an organisation's security does not come down only to an IDS and patching. Any potential attacker/malware will have to be able to breach other security measures:

i) at the rudimentary level, firewalls will partition security domains and dramatically lower any potential attack surface;

i) (a) an attacker will have to find a vulnerable service that is actually accessible, or rely on a client making an egress connection with a vulnerable client of some form;

i) (b) any exploit the attacker uses will have to bypass packet sanitisation by the firewall, and the albeit basic packet inspection performed (badly fragmented or otherwise malformed packets *WILL* get dropped here);

i) (c) an exploit will often have to reuse the existing socket, which will potentially increase the complexity of the exploit;

ii) if an organisation has a decent IDS, it is fairly likely to be piped into a SEM;

ii) (a) SEMs do not just collate data; they can determine odd behaviour based on data collected from devices throughout the organisation, so an exploit or attack cannot easily avoid triggering a rule in the SEM;

ii) (b) any further internal propagation will almost certainly cause an alert on the SEM (how do you compromise another host on the LAN?);

iii) there will be a minimum stealthiness requirement on any malware, as any sysadmin worth their salt will notice something obvious going on (new accounts, changing of permissions, unusual processes, etc.);

iv) the attacker will likely not be aware of any custom modifications or system hardening made to the target; this may not affect Windows systems as much, but with many Linux/Solaris/*nix-type hosts it really does matter;

iv) (a) it has been a while since I did much Windows sysadmin, but on most modern *nix systems a successful exploit may not actually get the attacker anywhere other than into a well-firewalled sandbox.


"Thus mobile devices have in themselves become the new Achaean (Greek) wooden horses. Wheeled through the defending ramparts into the modern Troy like corporate citadel by lax Trojan executives."

Yes, erm, I have heard of that happening in a number of companies. Excellent example of the weakest-link principle :-)


"Safe in the knowledge that any noise they make or alarms sounded will be drowned in the carousing of drunk-with-power executives who see no value in the caution of more battle-hardened and wary troops."

Business systems are usually watched fairly carefully; as for their own personal systems, well...

On the whole, I'm not disagreeing with you - it is quite possible and more than feasible to do "slow and low" type attacks. However, against a security conscious target it is not as easy as it may first appear.

Clive RobinsonNovember 8, 2010 3:04 AM

@ Peter Maxwell,

"I don't mean to be rude, but what on earth do journaling file systems have to do with this discussion"

Agh, the joy of communication problems with a limited vocabulary, where words have many similar meanings in any field of endeavour 8)

Any and all file systems can be regarded, from a 20,000 ft view, as bit buckets with memory, or containers. At an interface level there are four operations you can do on any data item stored in one: Create, Read, Update, Delete (CRUD); the same goes for any abstract container such as a database.

For databases and similar systems, the idea is that the create/write/delete operations should be atomic to ensure consistency. As this cannot be done in reality, the idea of rollback came in, where a record to be changed is first read into a "journal". The operation is then performed on the actual database, and the newly written record is checked against the intended modification; if it is OK, the original record is deleted from the journal. If the record is not OK for some reason, the journal record is written back to the database. A process that earned the name of the three-phase commit.

Journaling file systems work in the same way to ensure consistency in the file system. The original reason for this was to get around the Unix file system check at boot-up, as checking a 1 TByte file system for integrity can take a very long time. Checking the journals in the file system, however, can take less than the blink of an eye.

At a slightly lower than 20,000 ft view, a container can be said to consist of two parts: those where data is unchanging in any given time period, and those that have changed in that period. Unless the changes are stored in a journal, you have to go through the whole container when doing any kind of check or update.

Now, one such check is retrieving lost or mangled files caused at some higher level, such as by an application crashing or a user with finger trouble. This is often done with "backups"; however, like a file system consistency check, these can be unbelievably slow on large filesystems. So the idea of hierarchical backups came in to save time: you only do a full backup, say, once a month, and then do backups of only the "delta" files that have been modified in between the full backups. However, the old way of doing this, by looking at a file modification time or status bit, only works on inactive file systems, not on active ones. To do it reliably on active file systems you need a journaling process, such that a time point can be set to take an accurate picture of the file system, or a "snapshot".

Now, some network-based file systems automatically maintain "snapshots" of how the file system was at some previous point in time. That is, a record is kept of the "deltas" (all the changes) between snapshots, so you can roll a view of the file system backwards, or in some cases forwards, in time from any particular snapshot. This is also done by what are effectively "journals".

Now, checking for malware is a long, slow process, just like checking file system consistency or doing a backup, and if you can avoid it you don't want to be doing it on parts of the file system you have already checked; but, as when backing up a live file system, you don't want to miss any bits.

As I said briefly, I'm not aware of any security systems that use hashes to check file consistency and interface with journal systems such as snapshots. I'm not saying they don't exist; it's just that I'm not aware of them, as I've never gone looking to see.

Clive RobinsonNovember 8, 2010 3:52 AM

@ Peter Maxwell,

"I also suspect the high end commercial solutions perform a lot better than you expect, and don't rely solely on signature detection"

Same communications problem again with "signature".

I'm not using it in the "bit for bit" comparison sense I'm using it in the much broader sense.

That is, all recognition systems use some form of pattern matching, in which a particular activity has some recognisable traits. A recognisable trait, whatever it might be, is a signature. The evaluation of the signatures can be either deterministic, as in yes/no, or probabilistic, as for instance the weighted sum of several such signatures set against a threshold.

Obviously the simple bit-matching deterministic system is going to be the fastest, but the easiest to avoid. As the IDS/IPS becomes more probabilistic, the resources required go up.
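The weighted-sum evaluation described here can be sketched in a few lines. (The trait names, weights, and threshold are invented for illustration; a real product would have thousands of rules and tuned weights.)

```python
# Sketch of probabilistic signature evaluation: each matched trait
# contributes a weight, and the weighted sum is compared to a
# threshold. Trait names and weights are invented for illustration.
RULES = {
    "nop_sled":     (lambda p: b"\x90" * 8 in p, 0.5),
    "shell_string": (lambda p: b"/bin/sh" in p, 0.4),
    "long_input":   (lambda p: len(p) > 512, 0.3),
}
THRESHOLD = 0.6

def score(packet):
    """Weighted sum of all traits the packet matches."""
    return sum(weight for test, weight in RULES.values() if test(packet))

def suspicious(packet):
    return score(packet) >= THRESHOLD
```

No single trait here is damning on its own, which is exactly the point: the probabilistic scheme trades extra computation for resistance to evading any one rule.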

One problem with rule-based systems (which most AI systems are) is that changing the order of an event can make the rule fail. In general, rules are optimised for efficiency and thus are fairly strict.

There is a window of opportunity where the exploit, or the attack method used to deliver it, can be reorganised. If a rule is too strict it will fail to recognise either the exploit or the attack method.

There are two ways you can find a modification that will get you through. The first is the deterministic method of examining the rule set to see how it works. The second is the probabilistic method of getting hold of a system, fuzzing the input and seeing if it gets picked up or not.

The advantage the second method has is that it tests the whole IDS/IPS system, not just the particular rule set you are looking at. This means it will also find bugs in the system where some underlying aspect is deficient.

The fact that people are doing this demonstrates the second principle of "low-hanging fruit". That is, the visible low-hanging fruit have been eaten, so you either reach up higher for visible fruit (first principle) or you start looking for fruit low down that are not visible because something is obscuring them.

There are of course still the third and fourth principles of "find another tree" and "learn to climb a tree". We tend to hope that removing the low-hanging fruit will trigger the third principle, and with script kiddies it tends to. However, it has been obvious for some time that because we never raise the bar high enough, we are actually teaching our opponents not just to climb trees but mountains as well (which is a point I've made for some time now).

Then of course, for those with sufficient power, there are the fifth and sixth principles of "shake the tree" and "knock the whole tree down", so that even the highest fruit gets to ground level; but both tend to require considerable force and be quite noisy. We have seen with DDoS and worms, though, that in the information world "force multipliers" don't have physical-world limitations and are thus virtually costless to an attacker.

Peter MaxwellNovember 12, 2010 3:48 AM

@Clive Robinson at November 8, 2010 3:52 AM

I've just realised you've replied on this, so apologies as there is a fair chance you'll not see my response now :-(


"I'm not using it in the "bit for bit" comparison sense I'm using it in the much broader sense."

Yeah, I was assuming that too.


"That is, all recognition systems use some form of pattern matching, in which a particular activity has some recognisable traits. A recognisable trait, whatever it might be, is a signature. The evaluation of the signatures can be either deterministic, as in yes/no, or probabilistic, as for instance the weighted sum of several such signatures set against a threshold."

That paragraph seems to suggest the assertion that all IDS systems must work by pattern recognition and hence are all signature schemes? If so, I can show that assertion is incorrect by counter-example: I know of at least one IDS system that checks for non-protocol input. So, for example, if you have an input which expects a maximum of n characters and the input has n+m, then it will raise an alarm. Now, while that is certainly pattern matching, it is not a signature scheme, as it can potentially flag up unknown exploits, which would be by definition impossible to code a signature for.
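The n-versus-n+m check described here is protocol-anomaly detection, and it can be sketched very simply. (The field names and length limits are invented; a real system derives them from the protocol specification.)

```python
# Sketch of protocol-anomaly detection: alarm on any field whose
# value exceeds the length the protocol allows, which can flag
# exploits that no signature exists for. Limits here are invented.
FIELD_LIMITS = {"username": 32, "hostname": 255}

def violations(fields):
    """Return the names of fields whose values exceed the allowed maximum."""
    return [name for name, value in fields.items()
            if len(value) > FIELD_LIMITS.get(name, float("inf"))]
```

The key property is the one claimed in the comment: a 600-character username trips the alarm whether or not anyone has ever seen the exploit that sent it.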


"Obviously the simple bit-matching deterministic system is going to be the fastest, but the easiest to avoid. As the IDS/IPS becomes more probabilistic, the resources required go up."

Yes, but modern IDS products specify the resource requirements necessary to use their product, and most of the ones I am aware of essentially run at almost wire speed.

"There are two ways you can find a modification that will get you through. The first is the deterministic method of examining the rule set to see how it works. The second is the probabilistic method of getting hold of a system, fuzzing the input and seeing if it gets picked up or not."

You may try that, but for large classes of exploits that simply will not work. For example: an SQL injection attack *will* require certain special characters, and no amount of fuzzing will change that; a buffer overflow *will* require an input that is longer than the buffer, and no amount of fuzzing will change that. I would imagine the vulnerabilities this may be effective against are multi-variable logic flaws, where manipulating several inputs obtains the exploit, which is a narrow window to work within.


"The advantage the second method has is that it tests the whole IDS/IPS system, not just the particular rule set you are looking at. This means it will also find bugs in the system where some underlying aspect is deficient."

You also have to have the same IDS to test against. When these things cost in the hundreds of thousands, that puts it out the range for most attackers.


"The fact that people are doing this demonstrates the second principle of "low hanging fruit"."

I wouldn't call this low-hanging fruit. Low-hanging fruit is finding the target without the IDS ;-)


Going back to my original statement: it is usually a difficult and noisy affair to compromise a system in a security conscious organisation.

Yes, those companies/institutions that don't bother will be your "low hanging fruit" but even the most basic measures can make compromising a network difficult.


Clive RobinsonNovember 12, 2010 5:46 AM

@ Peter Maxwell,

"I've just realised you've replied on this, so apologies as there is a fair chance you'll not see my response now :-("

Oh, there's a certain probability I won't, based on my smiley-detection heuristic :-)

"That paragraph seems to suggest the assertion that all IDS systems must work by pattern recognition and hence are all signature schemes?"

It depends on your definition of signature. As far as I'm aware, all systems (apart from some learning systems) recognise activity that either matches or does not match a rule set of some kind.

That is, the system is programmed to recognise a characteristic and decide if it is a match or not, then by some process make a choice, via another rule set, as to whether the characteristic is acceptable or not.

I regard the characteristic (whatever it is) as being a signature. So your example of n+m characters is a signature to be picked up in the rule set as 'X equals n', 'X is less than n' or 'X is greater than n' etc.; this is then used as a go/no-go at the simplest level, or adds a weighted value into another rule set that might or might not include specific exceptions.

and I suspect from your comment of,

"Now while that is certainly pattern matching, it is not a signature scheme as it can potentially flag up unknown exploits which would be by definition impossible to code a signature for."

Your definition of signature is more restrictive than mine, which is fair enough.

(I'll leave the discussion about learning systems for another day, as they almost invariably involve feedback processes to get a rule set ;)

Likewise I was using fuzzing in a very general sense, your example of,

"For example: an SQL injection attack *will* require certain special characters and no amount of fuzzing will change that; a buffer overflow *will* require an input that is longer than the buffer and no amount of fuzzing will change that."

Yes, but even if the requirement for special characters or a required length is met, a given input may not work; thus you fuzz the unknown component. Fuzzing does not need to be across the whole input, and likewise it does not need to be just fuzzing by substitution, but also by transposition.

If you have a previously working attack that consists of parts arranged as ABCDEF, and it no longer works, you can fuzz the order of the parts to, say, FEABCD and find it does get through, simply because it follows a different path down the IDS rule set.

As I said, you have two basic ways to find a working order. The first is to examine a known rule set and find a path through by inspection; this works against publicly available rule sets. However, if the rule set is unavailable or very complex, you have to try different input orders, and this can be either fully deterministic ("brute force" every option) or probabilistic ("fuzzing"); in the absence of directing information (i.e. knowledge of the rule set), general fuzzing would be expected to find a working solution faster than brute force on average.

Obviously the fastest route for an attacker would be to use directed fuzzing, where the direction comes from partial knowledge of the system or feedback from changes in its responses.
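The transposition search described here can be sketched as permutation fuzzing against a black-box detector. (The detector below is a stand-in that flags only one exact ordering; a real attacker would also have to confirm each reordering still functions against the target, which the sketch ignores.)

```python
# Sketch of transposition fuzzing: permute the parts of a known
# attack until the (stand-in) detector's ordering rule no longer
# fires. The detector here is a toy with a single strict rule.
from itertools import permutations

def detector(parts):
    """Stand-in IDS rule: flags only the exact known ordering A, B, C."""
    return list(parts) == ["A", "B", "C"]

def find_evasion(parts):
    """Return the first reordering the detector misses, or None."""
    for perm in permutations(parts):
        if not detector(perm):
            return list(perm)
    return None
```

Because the toy rule is strict about ordering, the very first transposition already evades it, which is the point being made about overly strict rules.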

With regards,

"You also have to have the same IDS to test against."

True,

"When these things cost in the hundreds of thousands, that puts it out the range for most attackers"

Not true; you just have to know several otherwise unrelated sites that have the same IDS that you can see, and use them in round-robin fashion.

Which is one of the asymmetric advantages attackers have over defenders. Attackers can choose a point of attack that is advantageous to themselves; the defenders have three problems:

1, Stop the attack.
2, Avoid providing information about their defences in the process.
3, Determine what purpose the attack had, if any.

Most organisations are happy to stop at the first point; they simply don't have the resources to ensure the second (if they are even aware of the need). As for the last point, few if any organisations do this, and if they do, how many make their findings available to the wider community?

That is, although they have been attacked, they may actually be a "test site" used to refine an attack for a specific "target site"...

Although testing defences is not unknown in conventional warfare (probing attacks), such probes are usually very expensive in casualties for no gain, and serve to alert the enemy. Neither constraint really applies in the current information security setup we have.

Which brings me around to your point,

"Going back to my original statement: it is usually a difficult and noisy affair to compromise a system in a security conscious organisation"

Is true, but not if the attack has first been refined at another "test site", so that it either makes little noise, or makes noise where it is not heard at the "target site".

One of the almost forgotten requirements in the mainly nonsensical "Cyber-war Command" idea is the sharing of intel about all attacks. This is something that is virtually nonexistent currently, and its absence gives a huge advantage to the attacker and causes a vast, immeasurable cost to organisations, because of the defence spending maxim:

'You never know when you have spent too much on defence, only too little.'

I suspect you are aware of the old saying about football, "It's a funny old game"; well, it applies equally well to information security.

Peter MaxwellNovember 15, 2010 7:41 PM

@Clive Robinson at November 12, 2010 5:46 AM

"Not true; you just have to know several otherwise unrelated sites that have the same IDS that you can see, and use them in round-robin fashion."

That is prone to a number of problems and would be of doubtful reliability: most institutions will have differing network topologies, differing server configurations, different firewalls, and perhaps customised IDS configurations.


"Which is one of the asymmetric advantages attackers have over defenders. Attackers can chose a point of attack that is advantageous to themselves, the defenders have three problems,..."

"...Most organisations are happy to stop at the first point they simply don't have the resources to ensure the second (if they are even aware of the need). As for the last point few if any organisations do this and if they do how many make their findings available to the wider community?"

Most of the higher-end IDS or firewall solutions do this automatically, by feeding information back to the supplier for analysis. A trend should be noticed fairly quickly and attention diverted to that issue.


"Fuzzing does not need to be across the whole input, and likewise it does not need to be just fuzzing by substitution, but also by transposition.

If you have a previously working attack that consists of parts arranged as ABCDEF, and it no longer works, you can fuzz the order of the parts to, say, FEABCD and find it does get through, simply because it follows a different path down the IDS rule set."

Sorry, but that is missing the point. For an exploit to work it must conform to a certain structure for the associated vulnerability; if you scan for that structure, any further modifications are irrelevant and ineffectual.

In a weak form, take an IDS that scans for *anything* that will trigger a specific vulnerability: your exploit will either be detected, or it will go undetected and will not work. For example, if an input expects 5 characters and 6 characters cause a buffer overflow, it is trivial to check for more than 5 characters; changing the order of the first 5 is irrelevant.
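The weak-form check above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and the 5-byte field size are assumptions, not any real IDS product's API): the check keys purely on length, so reordering the payload bytes cannot evade it.

```python
# Hypothetical IDS-style check: flag any input longer than the field the
# target application expects. The 5-byte limit is the field size from the
# example above.

EXPECTED_LEN = 5

def is_suspicious(field: bytes) -> bool:
    """Flag any input that exceeds the expected field length."""
    return len(field) > EXPECTED_LEN

# Transposition-based fuzzing changes nothing here: the overflow attempt is
# caught by length alone, regardless of the order of the bytes.
assert not is_suspicious(b"admin")    # 5 bytes: within bounds, allowed
assert is_suspicious(b"adminX")       # 6 bytes: overflow attempt, flagged
assert is_suspicious(b"Xadmin")       # same 6 bytes reordered: still flagged
```

The point of the sketch is that the detection condition is a property of the vulnerability (the field length), not of any particular byte arrangement, which is why permuting the parts of the payload does not help the attacker against this class of check.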

In a stronger form: if your IDS scans for any input which violates expectations, it will catch the last example and also a more generalised class of exploit. I know of at least one well-known product that works on this basis.

The vulnerabilities that it is possible to "fuzz" for are generally the more complex logic-flaw types, where there are dependencies between multiple inputs. Again, in a trivial example, one input specifies how many characters to expect in the next input, and then more are sent.
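The dependent-input case can be sketched the same way. This is a hypothetical illustration (the 2-byte big-endian length prefix and the function name are assumptions made for the example): the check flags any message where the declared length and the actual payload length disagree.

```python
# Hypothetical check for the dependent-input example: the first field
# declares how many payload bytes follow; flag any message where the
# declared count and the actual payload length differ.

import struct

def violates_length_contract(msg: bytes) -> bool:
    """True if the 2-byte declared length does not match the payload."""
    if len(msg) < 2:
        return True  # too short to even carry the length field
    declared = struct.unpack(">H", msg[:2])[0]
    return declared != len(msg) - 2

# Declared 3, sent 3: consistent, passes.
assert not violates_length_contract(struct.pack(">H", 3) + b"abc")
# Declared 3, sent 6: the classic "more are sent" case, flagged.
assert violates_length_contract(struct.pack(">H", 3) + b"abcdef")
```

Detecting this class requires the IDS to track the relationship between fields rather than each field in isolation, which is why these logic-flaw vulnerabilities are the harder case for signature-style scanning.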


"One of the almost forgotten requirements in the mainly nonsense "Cyber-war-Command" idea is sharing of intel about all attacks. This is something that is virtually nonexistent currently, and its absence gives a huge advantage to the attacker and causes a vast, immeasurable cost to organisations because of the defence-spending maxim,"

I can say from experience that this is not entirely true: relevant information on security threats is exchanged; it is just not advertised or available to general staff.


In the end, I'm not really disagreeing with you; all I am saying is that it is much harder to successfully attack a security-aware organisation without detection than it may first appear.


