Schneier on Security
A blog covering security and security technology.
January 2, 2008
The Cybercrime Economy
While standard commercial software vendors sell software as a service, malware vendors sell malware as a service, which is advertised and distributed like standard software. Communicating via Internet Relay Chat (IRC) and forums, hackers advertise iframe exploits, pop-unders, click fraud, posting and spam. "If you don't have it, you can rent it here," boasts one post, which also offers online video tutorials. Prices for services vary by as much as 100-200 percent across sites, and prices on non-Russian sites are often higher: "If you want the discount rate, buy via Russian sites," says Genes.
In March the price quoted on malware sites for the Gozi Trojan, which steals data and sends it to hackers in an encrypted form, was between $1,000 (£500) and $2,000 for the basic version. Buyers could purchase add-on services at varying prices starting at $20.
This kind of thing is also discussed here.
Posted on January 2, 2008 at 7:21 AM
• 19 Comments
Things are getting worse and worse. Innovation is going on in all fields... maybe except security. Our best security measures are still the ones seen in airports, as you always write, Schneier...
I wonder when there will be meta-malware that lets you buy data stolen from malware sites.
DNS blackholes are a good way to start dropping this crap at your perimeter. And of course with good user education we can eliminate most of this stuff.
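The DNS-blackhole idea above can be sketched in a few lines: before resolving a name, check it against a blocklist and sinkhole anything known-bad. This is a minimal illustration, not a real resolver; the domain names in the blocklist are invented for the example.

```python
import socket

# Hypothetical blocklist of known malware domains (illustrative names only).
BLACKHOLE = {"evil-iframe-host.example", "gozi-dropzone.example"}

def resolve(hostname):
    """Resolve a hostname, but 'blackhole' known-bad domains to localhost
    so malware traffic dies at the perimeter instead of reaching home."""
    if hostname in BLACKHOLE:
        return "127.0.0.1"  # sinkhole address: traffic goes nowhere useful
    return socket.gethostbyname(hostname)
```

Production deployments would do the same thing inside the resolver itself (e.g., a response-policy zone) rather than in application code, but the principle is identical.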
"Good user education" is a chimera, so long as users continue to bear little of the cost of their compromised computers. Most computer-illiterate ("indigitate"?) owners of compromised computers aren't even aware of a problem. Even when their botnet is on the move and things are running slowly and weirdly, they have been trained to believe that sluggishness and unpredictability are expected from their computers, and think little of it.
Perhaps one day people will have to post a bond to their ISP for their connectivity, and that bond will be forfeit if the ISP detects malware activity at their host. Then, user education will be a realistic possibility. That will be a dread day for Microsoft.
Why should I have to post a bond to my ISP, just because MS and a hundred other SW vendors write crappy software? Maybe MS should have to post the bond, don't ya think? Jeebus, all I wanted was a black box that would let me check my email. Don't blame me for crummy SW that came pre-installed.
You may not be the author of what is on your computer, but you are the one who purchased it and decided which OS and applications to use. You are the one who visited the sites necessary to infect yourself, or read the latest chain email with a great attachment.
There has to be some personal responsibility here. Oh wait, this is America, no there doesn't!
@ ... Everyone so far
I don't need to be a car expert to secure my car from the vast majority of security threats to cars. Computers can only be called secure or "usable" when I don't need to be an expert to use them and keep them secure.
Note that crappy usability is also part of the problem. For example, Joe can't play a game online with the Windows firewall on. He doesn't configure the firewall; he just turns it off.
You -- that is, we -- should be required to post that bond because inept and uneducated management of a networked computer is a danger to the commons, analogous (say) to inept and uneducated management of an automobile, or an aircraft, or a weapon.
This fact is glossed over or even ignored in the effort to crank computer sales as high as possible. Those efforts benefit from the view that you express so concisely, that a networked PC is a harmless appliance, analogous to a toaster. Just like a toaster is expected to brown toast without burning down the neighborhood, you expect a networked computer to manage your mail without assisting international crime cartels.
It doesn't work that way. MS should certainly have its feet held to the fire, but even if MS suddenly and miraculously became a security-conscious good citizen, users with bad network security habits would still be a serious problem. And the thing is, so long as those users see no cost for their essentially sociopathic network computing habits, there can be no solution to this aspect of the problem.
On the other hand, shifting at least some of the costs associated with malware onto users could create a network of positive economic effects tending to ameliorate the malware problem, including serious legal pressure on software vendors not to sell crapware to Grandma.
This is an economic problem more than a technical issue -- it's the externality thing again. When costs and benefits are borne by different parties, it is difficult to arrive at some kind of sensible and efficient solution. If Grandma's computer is causing some fraction of the economic losses attributed to malware, the solution inevitably must route through forcing Grandma to assume some commensurate fraction of the responsibility for those losses.
Meanwhile, on the demand side of the equation...
Towards the beginning of last month, Chicago patent attorney Ray Niro publicly advertised a $5,000 bounty for the "identity" of an anonymous blogger. It's hard to understand how Ray Niro expects to acquire this information other than via a computer intrusion.
"A Bounty of $5,000 to Name Troll Tracker" (Dec 4, 2007)
But, even if Ray Niro did not think at all about how anyone was going to collect the bounty, it's worth pointing out that the news was quickly translated into Russian.
"Patent trolls offer $5,000 reward for information about blogger" (Google translation of Russian-language headline) (7 Dec 2007).
It probably doesn't help any that I'm repeating the news on this site, but otoh, the news spread pretty far, pretty quick. So it probably doesn't hurt that much either.
The bad guys probably already know that the "reward" for breaking into the anonymous blogger's computer has now been raised to $10,000.
There are problems with a bond, too. Too small and it'll have no effect; too large and it'll encourage corruption. It'd probably be a bad thing, for example, if forfeited bonds became a noticeable portion of ISP income — it's in the ISP's financial interest to encourage insecurity in their clients.
If it's like a security deposit, ISPs will have to either charge the same amount to everyone — which is unfair and just moves the externality around to people who run secure machines — or build a new credit reporting agency-like entity to give security scores, which would probably be expensive and incompatible with anonymity. Alternatively, they might give different rates for different OSes, but then good luck if you like OpenBSD.
And all this for an externality that, net per person, is probably much less than $100. Does anyone have actual numbers on that?
Why even bother with a bond? If a user's system is compromised, disconnect their service and require that it pass a screening before it's allowed back on the internet. Until they get certified as clean, just redirect any requests to a page that says something like "Your system has been compromised by a virus, trojan, or worm, and your internet connection has been disabled. Please call for more information."
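The "walled garden" quarantine described above reduces to a simple routing decision on the ISP side: flagged customers get the cleanup notice, everyone else gets their destination. A minimal sketch, with an invented notice URL and example customer IPs:

```python
# ISP-side quarantine sketch: requests from flagged customers are
# redirected to a cleanup notice instead of their destination.
# The notice URL and customer IPs here are purely illustrative.

QUARANTINED = set()  # customer IPs flagged for observed malware activity
NOTICE_URL = "http://isp.example/quarantine"  # hypothetical cleanup page

def flag_customer(ip):
    """Malware activity detected: move the customer into the walled garden."""
    QUARANTINED.add(ip)

def certify_clean(ip):
    """Customer passed screening: restore normal connectivity."""
    QUARANTINED.discard(ip)

def route(customer_ip, requested_url):
    """Return the URL the customer's request actually reaches."""
    if customer_ip in QUARANTINED:
        return NOTICE_URL
    return requested_url
```

In practice this is done with DNS or HTTP redirection at the access layer, but the state machine — flagged, screened, restored — is exactly this small.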
Personally I think the solution is a mix of social and technological. We need a new OS. Period. End of story. Everything out there fails to go far enough with security, while at the same time committing various other sins. A balance must be maintained between usability and security. And by security I mean real security, not just nagging the user constantly; all that does is lead people to click "Allow" without actually reading anything. The user should only get a security warning if something out of the ordinary is happening, not every time they do anything. At the same time, though, the system should not assume the user knows what they're doing.
I'm envisioning the next generation of OS, and I think one of the key things is going to be a variable interface. Essentially it asks you a series of questions when you install it, and then recommends one of several interfaces you can use based on your answers. The simplest idiot box setting would be the most paranoid in disabling everything, but would also require virtually no user interaction to remain secure. On the extreme other end would be the one that is totally configurable (would still default to locked down though) and doesn't hide any of the gory details from the user (there would also be several levels in between these two). I think this would be the best way to please everyone. Those that want a "toaster oven" experience for checking e-mail can have it, and the IT professionals that want to tinker in the guts of the OS are happy as well.
Ultimately our current problems come down to two things. First, it's far too easy to either directly get malware into the OS or indirectly trick users into installing malware. This is the failure of the OS to provide enough information and security checks to allow the user to distinguish actual risks from normal behaviors. Second, users need to be educated to recognize shady sites and e-mails. To a certain extent this goes back to the first point, but even then, as the phone scams you occasionally see show, users themselves are often the source of the problem (I particularly liked the article a little while ago where British office workers gave away their passwords for candy bars).
I don't want too many features removed to lock down security. I would be very unhappy if computers ceased being useful general-purpose devices to ensure total lockdown.
I don't think there's any question that significantly more robust systems could be developed; the challenge is to do it without simply cutting out features until it's easily debuggable.
One thing I would *really* love to see is serious sandboxing at a very fine-grained level. At some conceptual level, malware does something that the user is not expecting. Ignoring the fine-print based malware that actually tells you what it's doing, the challenge is to come up with a permissions scheme that is straightforward enough to work.
If you have an app that connects to numerous remote systems, it's going to be tough to stop it connecting to unexpected ones. However, if you have, say, a video codec, you don't expect it to be opening up your spreadsheet documents.
Really, what I'm envisioning is a set of pre-defined application behavioural profiles that limit an app to doing what you expect it to do. I think a set like that could be very powerful for stopping unexpected application behaviour.
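The behavioural-profile idea can be sketched as a deny-by-default permission check: each profile lists the operations an app is expected to perform, and anything not listed is refused. The profile names and operation labels below are invented for illustration, not drawn from any real system.

```python
# Deny-by-default behavioural profiles (all names illustrative).
# Each profile enumerates the operations that app is expected to perform.
PROFILES = {
    "video_codec": {"read_media", "write_frames"},
    "mail_client": {"network_smtp", "network_imap", "read_mailbox"},
}

def allowed(app, operation):
    """An operation is permitted only if the app's profile explicitly
    lists it; unknown apps and unlisted operations are denied."""
    return operation in PROFILES.get(app, set())
```

Under this scheme a video codec asking to read spreadsheet documents — the unexpected behaviour from the example above — is simply denied, with no user prompt needed. Real-world analogues of this approach include SELinux and AppArmor profiles.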
You're close to what I envision. I think as a default, though, particularly in the simplest brain-dead case, yes, the system should have features axed to protect the user. These sorts of people don't need the advanced features anyway; they only get in the way. Do you really need something like ActiveX when all you do is check your e-mail and browse eBay? At the same time, though, the system needs to be flexible enough to allow experienced users the control they want, which is why I think multiple interfaces are the key. The simple one protects the people who don't know what they're doing, and the advanced one lets you get your hands dirty, so to speak.
As to the fine-grained sandboxing, that would be part of it, of course. I envision several levels of this.

First, each installation of the OS would create a private key that's inaccessible outside the kernel (unless you access the HD from another system or OS, but physical security is beyond the power of the OS) and sign all user-approved applications with that key. If a program is modified (say, by a virus) the signature would no longer be valid and the OS could warn the user.

Second, companies (or individuals) would be expected to sign their applications, so you could verify that the copy of the application you have hadn't been tampered with at some point and that different applications came from the same author or company.

Lastly, applications would be sandboxed and different behaviors could be "trained" as normal, so that the first time an application accesses a network resource, for instance, the user is prompted and can choose to deny always, deny once, allow once, or allow always for that application. In addition it would need more technical security measures, of course, such as non-fixed memory locations and buffer checking, but from the user's standpoint it should be relatively simple to maintain strong control of all the applications. One other important feature would be strong sandboxing, so that even if a user didn't trust an application, it could still be run in a minimal-permissions environment without any danger of it doing anything dangerous (no or limited access to local resources, as permitted by the user).
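The tamper-detection part of the proposal can be sketched with a simple integrity manifest: record a digest of each approved binary at install time, re-check it before launch, and refuse to run anything that has changed. A real design would use kernel-held public-key signatures as described above; this sketch substitutes a plain SHA-256 manifest just to show the shape of the check, and the app names are invented.

```python
import hashlib

# Tamper-detection sketch: the OS records a hash of each binary when
# the user approves it, and re-hashes before launch. A modified binary
# (e.g., one infected by a virus) no longer matches and is refused.
manifest = {}  # app name -> SHA-256 digest recorded at approval time

def approve(app_name, binary):
    """Record the digest of a user-approved binary."""
    manifest[app_name] = hashlib.sha256(binary).hexdigest()

def verify(app_name, binary):
    """True only if the binary still matches its approved digest;
    unknown apps are rejected outright."""
    recorded = manifest.get(app_name)
    return recorded is not None and recorded == hashlib.sha256(binary).hexdigest()
```

This is essentially what application-whitelisting and signed-executable schemes do, with the important difference that real systems sign the digest with a private key so the manifest itself can't be silently rewritten by malware.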
I'd really really love to work on an OS (I recently ordered Andrew Tanenbaum's classic book on OS design [the one with the raccoons that talks about minix]), but as it stands now virtually all of my time goes towards my job that pays the bills, with my little extra time spent mostly on my family.
People who argue for more "secure by default" systems miss a big point. As long as my bank, my employer, and my HMO _require_ me to disable security to use their websites, that "default" is going to be pretty short-lived. Now, if we could institute a "safe for scripting" license for web-developers, with real teeth, maybe we'd get somewhere. :-)
("You used an ActiveX control just to make the dollar-sign sparkle, so you are responsible for the damage done by the 60K machine botnet that you enabled. Pay up or lose your license")
> Ray Niro publicly advertised a $5,000 bounty for the "identity" of an anonymous blogger.
It's my mother-in-law! I send all her contact info to you now, Ray.....
Really, how on earth could anyone actually claim this "bounty"? Show Ray records proving that you hacked the server and have the IP address info?
Which brings up an interesting question, and with it, an interesting issue. If you post a bounty which can _only_ be claimed by executing an illegal act, wouldn't that be a prosecutable crime in itself? It wouldn't apply in this case, because the operators of the server involved _can_ claim the bounty without criminal action (perhaps breaking their terms-of-use agreement with the blogger, but that's not criminal, as far as I understand the law).
This line of reasoning leads us to the interesting conclusion that a law making the exposure of personal information of anonymous users of websites illegal would have value. Ray Niro could then be prosecuted for inciting a crime....
It's not illegal to post a bounty for something illegal. Similarly, it's not illegal to make a contract for something illegal, but the contract itself is unenforceable (or at least the portion pertaining to the illegal act is). Seeing as a bounty is essentially a contract, all it would mean is that the contract is void and there's no obligation to pay it out. However, because the contract doesn't specify that you need to perform an illegal act to fulfill its requirements, it's not itself illegal; it's up to the person accepting the contract how they execute its requirements.
The system should be secure by default. That's a basic premise that everyone should be able to agree on. I should be able to take a fresh install of the OS, hook it up to the internet with no firewall, and be completely safe.
As for the scripting, there should be strong sandboxing, as has already been mentioned. Installing some script should not allow the attacker access to anything but the attacker's site. Cross-site scripting attacks are well understood and rely on existing flaws in the security of browser scripting; a ground-up implementation should be able to avoid those flaws. Furthermore, a compromise of a user's online accounts, although unfortunate, isn't really the OS's to prevent; its purpose is to protect the user's system. Beyond that, it should provide the tools the user needs to differentiate between legitimate content and malware content. If the user, even after being advised that something is dangerous, continues anyway, that's beyond the control of the software.

One of the key failures of Vista is that it doesn't do enough to determine the appropriate risk levels associated with different activities (this is partially due to wanting to maintain backwards compatibility with insecure XP applications), so the user is unable to gain any meaningful information from the warnings it does provide. Essentially the user is conditioned to treat any positives in Vista as false positives.
Perhaps one day people will have to post a bond to their ISP for their connectivity, and that bond will be forfeit if the ISP detects malware activity at their host. Then, user education will be a realistic possibility. That will be a dread day for Microsoft.
One step away from a "computer license", government control and supervision. Half a step away from the criminalization of anonymity. Guess who really wins.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.