Schneier on Security
A blog covering security and security technology.
September 26, 2006
The Hidden Benefits of Network Attack
An anonymous note in the Harvard Law Review argues that there is a significant benefit from Internet attacks:
This Note argues that computer networks, particularly the Internet, can be thought of as having immune systems that are strengthened by certain attacks. Exploitation of security holes prompts users and vendors to close those holes, vendors to emphasize security in system development, and users to adopt improved security practices. This constant strengthening of security reduces the likelihood of a catastrophic attack -- one that would threaten national or even global security. In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account.
Posted on September 26, 2006 at 6:42 AM
So... being occasionally reminded of the systems' weaknesses actually strengthens the overall system? Hmm... that works up to a point, but I believe that the depth of the underlying weaknesses, the ease with which they can be accessed, and the breadth to which they are distributed bode very badly for the overall functioning of the system. Just a single catastrophic failure in Windows, Cisco IOS, or Junos (for three handy examples) could cause a lot of heartburn on any given day. A pair of those problems happening simultaneously ... well, that would go a long way toward reminding everyone exactly how tenuous the systems we rely on every day really are.
The Darwinian analogy doesn't look good to me. In nature, diversity ensures constant competition to use resources. If the environment changes to the detriment of one species, other species move in to take over. We don't have much diversity in the Internet (Microsoft) and there are mechanisms such as patent law and proprietary data formats intended to reduce diversity.
Although we tend to think of parasites as inherently dangerous (e.g., flu and malaria), most communicable illnesses are actually pretty harmless to a healthy person. Parasites exist because they adapt to survive, whereas computer viruses are virtually all designed to do damage to their host.
I think jamgill has a good point as well; life is remarkably resilient whereas IT systems just keep letting us down.
No one forces you guys to use Windows. One should assume that Windows, under the guise of 'ease of use' and 'looks pretty', will be compromised on any given work day.
My problem is this: when will the government give corporations the proper tools needed to build up case files against hackers? We all know that current law helps only the government track down abuses against itself, but don't you think it disingenuous that the private sector does not have a proper form of recourse? What am I to do? Call the police only to have them laugh at me when I point out a Woot scan from some box in Arizona? Come on.
"whereas computer viruses are virtually all designed to do damage to their host."
That was before keeping the host alive, but remotely useful (zombie), became big business.
Even the analogy to your "pretty harmless" illness, a virus which does a one-time data collection and disappears, would not surprise me nowadays.
Tim: Going after the source of every single scan I see on one of my boxes would certainly be quite costly. And what would I want them (law enforcement) to do: imprison everyone who runs a scanner targeting my boxes? I believe that there are better ways to spend tax money ...
Not only would going after the source of every scan be costly, but imprisoning everyone who owns a machine that scans you might not be easily doable. The problem is that the scan *probably* came from some Joe Sixpack whose only real mistake was not updating IE with the not-yet-shipped patch for the vulnerability that was waiting for him at a website he surfed to.
Follow the money - what will pay off is figuring out who paid the guy who set up the malware, then you can get both hacker and employer. (Yes, I know, easier said than done...)
Uh....Has anyone ever heard of criminal intent ?
That was before keeping the host alive, but remotely useful (zombie), became big business. - ???
This idea has been discussed, on and off, on almost all security lists -- including the use of vaccines (if we keep on with the biological analogy): a worm/virus that closes the security hole.
Most of the time, the discussion would rapidly escalate into a flame war.
Recently, this was put forth in a workshop -- the New Security Paradigms Workshop, 2005 ("Internet instability and disturbance: goal or menace?", available at the ACM Digital Library -- subscription required -- http://portal.acm.org/toc.cfm?...)
I do not know of any other actual "formal" discussion so far. No matter if right or wrong, this should be discussed, since no clear consensus exists. And, of course, ideally without degenerating into a flame war.
I should stress that I am personally neither in favour of it nor against it: it is just an interesting idea.
You can't test security into a system.
Regardless of whether the analogy is right or wrong, or whether you believe it or not... it's true.
Anyone want to list the number of holes that are now closed that were wide open until someone exploited them? (WinNuke, anyone?) Does anyone actually believe that software writers are going to spend large amounts of time and money closing holes that no one knows about?
Of course not, they're going to add more features. New features sell software, not security fixes.
Attacks make us safer in the long run...hmmm...yes, defenses are usually made better in response to creative offensive actions. For instance, my cash (and yours) is much safer in the bank today because of the actions of Bonnie and Clyde, Pretty-boy Floyd, and George Nelson.
I suppose the position would be, then, that it's ok to lose a few battles and skirmishes, so long as you win the war. There are always externalities and associated collateral "costs"...this is the Risk Management model, right? Cover what you can afford and insure for the rest?
Or is it really "Risk Acceptance"?
Risk Management works well until you are the affected "externality". Those folks displaced in New Orleans took the brunt of someone else's decision that a Category 3+ storm was an unlikely enough event such that current levee protection was adequate during 99.9% of storm seasons.
It's ugly when this model fails. How effective is this when it's your IT systems being attacked by a new vector? Your clients data being posted to a website hosted overseas, and your company has to write letters to hundreds of thousands of formerly trusting clients explaining that they need to watch their credit reports daily for the next five years?
I suppose the "good of the many" argument trumps this, on a national/global view...just remember that woe is the "affected few".
27.5 million potential "few" were at risk from one such security lapse recently. I am sure that this is a record number which will soon be broken.
How long before the affected are not the "few"?
when reading the article, i first thought of the advice to let children play in the dirt. because keeping them too clean may weaken their immune system, so one day a harmless infection could be lethal.
but all in all i don't see the inherent wisdom in this. the article argues that lethal attacks would be a threat if defense mechanisms weren't being developed in response to minor ones.
so i say: "it's good that normal-sized people try to jump over fences, because of this, fences are built high enough that even big people can't jump over them." - nonsense!
maybe someone could come up with a situation where no minor accidents, but only critical ones happen - so the "wisdom" of the note could be applied. atomic power plants? airplanes? i can't think of one.
imho the article argues about a purely hypothetical situation.
@ Lazy Sumo
> Regardless of whether the analogy is right or wrong, or whether you believe it or not... it's true.
No, it's not.
First, a fundamental difference between a biological system and a technological one is biological defense adaption is continuous, and technological defense adaption is discrete.
A human immune system, developing antibodies against a flu strain, will have increased resistance to many/all flu strains. A technological immune system, developing "antibodies" against a worm strain, will have either total resistance to similar strains or no resistance to similar strains (depending upon the definition of similar).
If the technological immune system gets a software patch, it's immune to those worms that exploit that particular vulnerability, but it has no defense against other worms that may exploit a similar vulnerability.
Second, and more telling, a biological immune system is internally regulated. A technological immune system is externally regulated. Many, many worm viruses use the IE backend of Outlook to propagate (just one example, not picking on MS here). Turning off HTML mail on your mail client is a defensive adaption. In a biological system, the immune response would persist. In the technological one, the user (after some amount of time) gets annoyed that (s)he can't read HTML mail, and disables the immune response.
This is an oversimplified analysis, but IMO the anonymous author is incorrect.
In other words, "That which does not kill the Internet only makes it stronger." Great. So let's unleash a super cyber-plague that costs so much money in lost manpower, cleanup costs, and missed utilization of IT infrastructure that it's barely worth it to keep everything running. Then the world will be even better!
@ Rob Shein
My point exactly. Thank you for stating it so succinctly.
Actually, most computer malware these days is parasitic. The intent isn't simply to cause damage; rather, if a machine can be taken over it can be added to a bot network, and can be made to send spam, or launch denial-of-service attacks. That's why virus-infected machines still appear to work; it isn't in the interest of the bad guys to destroy the machines they infect, because a working machine is more useful to them.
After all, it would be a simple matter to wipe the entire disk and re-flash the BIOS so the machine will become a brick. But that doesn't happen (at least, it happens very rarely).
If someone purposely infects me with a cold virus, and then claims no pummeling is warranted as it was for my own good, I'm still going to take exception. However, I will concede that my immune system may be improved from the experience.
Point being, that I agree these events may lead to improved security, but the fool who claims he did it for our own good should receive no consideration during sentencing.
I would like to personally "thank" all those hackers, crackers, and network intruders. Where can I meet them? How do I extend them an invitation to a "thank you" party?
PS. where do I hide their bodies after I've finished "thanking" them?
Unfortunately, the analogy falls flat on its face when applied correctly. While healthy immune systems develop antibodies, unhealthy systems don't. We can consider home users and small businesses "stuck with" Microsoft software and without budgeted dollars for security or innate security knowledge to have untreatable AIDS or some other severe immune deficiency. These "patients", however, don't die. They significantly increase the scope of any pandemic outbreak by increasing the number of systems affected by orders of magnitude more than the attack ordinarily would hit.
In other words, attacks on the sick make more systems sick, which is the opposite of the article's premise. In fact, only the largest of companies have the knowledge and budget to immunize. While the number of immunized systems is significant, the number of "sick" systems is much larger.
Isn't it true, though, that the "immune system" of a network as a whole can be better than the immune systems of individual machines?
For example, my employer (fortune 500 tech company) was bitten badly by Code Red. Important machines were cleaned up relatively quickly. Unimportant, and basically unused, IIS servers wreaked havoc for weeks. Since then, my employer has adopted several technologies and policies that will mitigate the damage even if another worm were to show up.
- they've put together a rigorous patching system and patch policy. If the new worm is trying to exploit a patched vulnerability, it won't get a foothold in this organization.
- they've installed firewalls that require that end-users authenticate before ports are opened up. The firewalls grant individual IP addresses access to segments of the network for a limited time period, after someone or something at that IP address enters a username and password. These firewalls are annoying, and the system has its own set of security problems, but an unattended machine will not be able to spread a worm.
- they've increased their capacity to identify badly-behaving machines and disable the associated network ports. If a machine does get infected, they can shut it down remotely.
These are just the systems I've observed. I suspect IDS/IPS is being deployed as well. My point is that this network is more secure than it was before, and not just against known threats. In large part that happened because of the Code Red worm. I see the point the authors are trying to make.
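The third mechanism described above (identifying badly-behaving machines and cutting them off) can be sketched in miniature. This is not the commenter's actual system, just a minimal illustration of the idea: flag any host whose outbound connection fan-out in a monitoring window looks like worm scanning rather than normal traffic. The threshold and names are invented for the sketch.

```python
from collections import defaultdict

# Hypothetical quarantine policy: a worm scanning for victims contacts
# many distinct destinations in a short window; a normal host contacts few.
SCAN_THRESHOLD = 100  # distinct destinations per window (illustrative)

def find_hosts_to_quarantine(connection_log):
    """connection_log: iterable of (source_ip, dest_ip) pairs
    observed during one monitoring window. Returns sources to cut off."""
    destinations = defaultdict(set)
    for src, dst in connection_log:
        destinations[src].add(dst)
    return [src for src, dsts in destinations.items()
            if len(dsts) > SCAN_THRESHOLD]

# A worm-like host contacting 150 distinct addresses gets flagged;
# a normal host contacting two does not.
log = [("10.0.0.5", f"10.0.1.{i}") for i in range(150)]
log += [("10.0.0.9", "10.0.2.1"), ("10.0.0.9", "10.0.2.2")]
print(find_hosts_to_quarantine(log))  # -> ['10.0.0.5']
```

In a real deployment the flagged source would be mapped to a switch port and disabled, which is exactly the "shut it down remotely" response described above.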
Well, I did say I was oversimplifying. :)
Sure, a layered security policy on a network can produce a more robust "immune system", because there is going to be overlapping layers of security where a worm (for example) may successfully compromise a host, but may not be able to successfully bypass the network security policy.
That doesn't mean that the analogy isn't broken ;)
Of course, I haven't read TFA, but in one sense having policy take into account the benefits of hacking is tolerable. That sense, if I may steal Ed Felten's words, is by explicitly preserving "freedom to tinker".
Now, when I think of "cybercrime", I don't picture someone tinkering with their own gear, but because so much of our policy prohibits perfectly reasonable (IMNSHO) activities which do not involve trespass or anything like it, *perhaps* there is some sliver of value in the anonymous author's line of reasoning.
"In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account."
This also applies to copyright enforcement. Anti-piracy organizations like to claim that software copyright violation costs billions, but it's certain that some proportion of the pirates make more productive use of the money they save than the software vendor would.
It's quite possible that perfect software copyright enforcement would be a net loss to the economy (even ignoring the direct costs of better enforcement).
'In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account.'
This is the problem with the utilitarian ethic. The real question should be who benefits? and who pays the cost?
Otherwise it's like saying that the thief who steals the cash under your mattress is giving retail a much needed boost, therefore the benefits (many stores) outweigh the cost (the personal stash of money).
Quote: "In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account."
This is known as the "broken window fallacy" in economics. It is the mistake that by causing damage you can "stimulate" productive activity in the form of repairs and prevention. A hooligan breaking a shop window thereby "creates" business for the glass-maker, the window installer, and the security guard hired by the shopkeeper to prevent future vandalism.
The problem is, if the window had not been broken, the shopkeeper would have used all that money for other things -- perhaps to expand his business, or perhaps to install a security camera to catch the next vandal. By breaking the window, the hooligan deprives the shopkeeper not only of the funds that it costs to fix the window, but also of the opportunity to use these funds for something more desirable.
I don't think the author's argument relies on the broken window fallacy. The "benefit" they are thinking of doesn't accrue to the organizations selling better security services. It accrues to the organization that develops better security practices because of the broken window. That organization reduces the risk of more serious crimes in the future.
1 @ Lazy Sumo: I love your line, "Of course not, they're going to add more features. New features sell software, not security fixes."
This is very true, as corporations are, by and large, economic entities. But let's not get into that topic here...yikes!
2 @ Pat Cahalan and others:
I think you simplified your point so much that I don't end up buying it, so I'm curious about a more robust "debunk" (for lack of a better word at 4pm...) on the use of the biological immune system analogy. I want to warn you though, I think it is a bit outside the scope of any analogy to get too into detail, such as talking about exactly how a biological virus may attach to certain receptors and block chemical balances in bodies or simply take over other cells or destroy them on a really low level...let's not go there if we can help it. :)
However, I don't quite buy into not using a immune system analogy, yet. In your post, you mention that a biological system will develop immunity/resistance to many/all flu strains, where as a technological system is protected against just one. This may be true in a signature-based approach to malware prevention, but what about heuristic or behavioral approaches? Or approaches that happen to just improve security for other parts (someone else mentioned how Code Red responses improved many other areas of security) by coincidence? What about knowing a particular stack overflow is possible and patching the code up, or quite simply looking for the result of pretty much ANY method to exploit a specific vulnerability? If I know zxcvb.dll is vulnerable, but I can't patch it or remove it, I can, however, monitor it for any trigger and better react to it.
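The signature-versus-heuristic distinction in the comment above can be made concrete with a toy sketch. This is purely illustrative (the hashes, behavior names, and the `zxcvb.dll` monitoring idea borrowed from the comment are all hypothetical): a signature check only recognizes strains it has already fingerprinted, while a behavioral check can flag a brand-new strain that acts suspiciously.

```python
# Toy contrast between signature-based and behavior-based detection.
# All names and rules here are invented for the sketch.
KNOWN_SIGNATURES = {"deadbeef", "cafebabe"}  # fingerprints of known malware

def signature_match(sample_hash):
    # Catches only strains already seen and fingerprinted.
    return sample_hash in KNOWN_SIGNATURES

def heuristic_match(observed_behaviors):
    # Catches any strain exhibiting enough suspicious behaviors,
    # even one never seen before (e.g. tampering with the vulnerable
    # zxcvb.dll mentioned in the comment above).
    suspicious = {"writes_to_system_dir", "opens_mass_smtp",
                  "patches_zxcvb_dll"}
    return len(suspicious & set(observed_behaviors)) >= 2

# A brand-new worm variant: unknown hash, but familiar behavior.
print(signature_match("0badf00d"))                                # False
print(heuristic_match(["opens_mass_smtp", "patches_zxcvb_dll"]))  # True
```

The heuristic approach is what gives the "immune system" analogy some life: like a fever, it is a generic response that covers a class of attacks rather than one fingerprinted specimen, at the cost of possible false positives.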
All in all, nothing here has yet convinced me to put away the immune system analogy, yet, but I'd love to hear more. In the end, biology has rules, even if they seem chaotic and well beyond our grasp, but humans are based in biology and technological systems on both of them. It only is natural that technological systems will, at least in part, eventually mimic biological systems, whether on purpose or just because the design ideas are that...well...natural.
Many of the biological responses to attack are generic (increasing temperature) in an attempt to block the most with the least cost. The technological equivalent would be to make buffer overflows impossible to write.
The analogy also fails because God created immune systems and you guys are acting like evolution means something ;-)
Git yer DOCTER Huckster's Snake OIL! Symptomatic RELIEF from COLDS and grippe.
And be sure to tell all your friends about DOCTOR Huckster's while you are still contagious.
Dere odda be a law gainst all dose udder pipple.
This is simply a load of useless claptrap.
It seems there might be something in it until I see that all it brings is reduced clarity and confusion.
The opening premise is simply wrong and so the higher you build the argument the wonkier it looks.
I am not surprised it was anonymous.
Whoa, whoa, whoa! Sure, what they're saying is ridiculous if you only look at the software, and talk 'immune systems'.
But if you consider the entire system - including the human bits - it makes perfect sense.
How many people do you know who take regular backups? How many of those people are doing that because they got a virus at one stage?
As people become more aware that the computers can/will fail, they will be more interested in making the systems failure tolerant (paper ballots for e-voting, anyone?)
The assumption 'absence of cybercrime would have made us more vulnerable to cybercrime' doesn't explain the circumstances under which a world without cybercrime would even be thinkable.
The preconditions of a cybercrimeless world are left to our imagination, and the "catastrophic attack" too.
If the precondition of 'no cybercrime' is that all people are gentle and good, there wouldn't be a catastrophic attack.
If it is 'every cybercriminal is keeping his knowledge of vulnerabilities secret, to prepare for the global-malicious-day' - well ... useless speculations.
The likelihood of a catastrophic attack?
World domination by MS isn't that strong on the server side.
And even the different Windows variants aren't equally affected by the same malware. The diversity of applications (and versions) is even bigger.
What catastrophic attack is meant here?
Every Windows client worldwide has to be reinstalled?
@derf made a good point about today's infected machines being servers for tomorrow's malware.
@James: I make backups, because I heard of hardware failures and user mistakes. No malware involved.
I have to apologize for my comment.
I didn't mention that I didn't read the whole article - so perhaps I'm not fair in judging on it on the abstract.
This is a strawman or non-argument. A false dilemma, in fact.
We are not presented with the option of 'no network attacks' so it is senseless to argue whether it is better or worse to have network attacks.
"An anonymous note in the Harvard Law Review argues that there is a significant benefit from Internet attacks:"
And was the anonymous person connected to Microsoft in any way?
By the way, Ubuntu Linux is free and I don't need to waste money on bullshit scanners to scan for malware like people so often do on Windows. Never again will I use Windows or a Microsoft product, never again.
I'm not surprised it's an anonymous note; I'd be embarrassed to put my name on it.
"In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account." And each political assassination, and embezzlement from a large corporation, and terrorist attack also teaches us lessons...
While there's plenty of bad law regarding different types of computer-related crime, the "benefits" argument fails to persuade.
I know it may be non-intuitive, but the ISP the person uses may be better placed to do something about it than the police. I've got abuse contacts for a number of major ISPs in my address book, and if one of their users tries something funny, I mail away the logs.
The ISPs are usually pretty happy with that, so long as you emphasize that it's an infraction of the acceptable use policy (which is good, because infraction of the AUP is not necessarily illegal).
I'm pretty sure that if it were not for the last 10 years of constant worms, viruses and hacks/attacks against our infrastructure, we would not have fully patched servers, anti-virus software, firewalls, IDSs, border ACLs, null routes, bot-detectors, proxies & all the other things that make our system of computers and networks so much more robust than it was 10 years ago.
Think about all the changes to computing that are a direct result of worms/viruses and other attacks that did NOT break the internet. We have automatic patch management of desktops & servers, we have default OS and application installations that are far more hardened, firewalls protecting essentially all critical infrastructure, and any router you buy for $49 has a crude firewall built in, all corporate desktops have AV software, all because of viruses, worms & the ilk. I can't imagine any of this happening if it were not for the constant stream of annoying worms/hacks/viruses that we've had for so long.
Someone invents a new attack. The defenders have a bad day figuring out how to mitigate and block the attack. The defenders apply lessons learned to build a new defense that covers the broader class of attacks related to the specific attack, so that broader class of attacks is now ineffective.
Sounds like an immune system to me.
Windows has reached a point where successful attacks are more likely due to user error than vulnerabilities. There's no question the system is hard to use... hard for Grandma to know she should not run as an administrator and should set a password and change it every so often.
But seriously, you stop 99.9% of attacks merely by not running as admin and passwording your account. I haven't run a virus scanner for at least a decade, since no one attacks that way anymore (when simple email and web fraud will do) and run a spyware cleaner every so often when I've let in a tracking cookie.
Yes, there are occasional elevation-of-privilege vulnerabilities found... as there are on every OS. But that's a pretty narrow scope once you've lowered the privileges of IE and Office, which are the applications most people use 99.9% of the time.
Applications that have required elevated privileges in the past, like games, were poorly written by people who didn't care and didn't have a stake in it. "Well, we know you want our game more than we want to care about your security, so too bad." Blizzard was guilty of this (and may still be) for a long time. It was printed right on their packaging for Diablo 2! Bioware did it with Neverwinter Nights and may do it again if they are lazy. UbiSoft wrote right to the Program Files folder in Rainbow Six 3 and "needed Admin rights". They do need to be called out because they write games that stay connected for long periods. They should do better.
And no... the thesis in the article quoted is ridiculous. No one who takes computer security seriously will think simply because there haven't been any major attacks that the system is perfectly hardened. They'll believe it is hardened... for now... after they've thoroughly reviewed it.
I disagree Pat. Here's why:
"A human immune system, developing antibodies against a flu strain, will have increased resistance to many/all flu strains. A technological immune system, developing "antibodies" against a worm strain, will have either total resistance to similar strains or no resistance to similar strains (depending upon the definition of similar)."
Whether the immunity is broad or specific, the system is still gaining immunities. At the very least the system is no longer vulnerable to that contagion. Sounds to me like you are saying that if it isn't broad immunity then it's no immunity; that's incorrect.
"If the technological immune system gets a software patch, it's immune to those worms that exploit that particular vulnerability, but it has no defense against other worms that may exploit a similar vulnerability."
Yes, I agree. So what? It's still immunity of some sort, and for every hole/hack/exploit that is closed the system overall is (maybe incrementally) that much more secure.
One thing that I think many folks should take into consideration is that no one seriously recommends letting children play in raw human-waste sewage to strengthen their immune systems. Some environments are simply deadly no matter what. Likewise the author states "In essence, certain cybercrime can create more benefits than costs..." Certain cybercrimes, not all, but some, strengthen the system overall.
How about this test: *IF* we had a criminalistic bent, how much fun could the readers of this list have if we were transported back a mere ten years with what we know about computers and security now?
Yes, I do believe that there are more closed holes now than there were then and those holes are ONLY closed as a result of exposure, usually criminal.
This reminds me of forest fires. It's better to have some fires every year; if not, you are feeding the next big one.
That is why I don't like it. At first glance it looks like a sensible argument.
However, the logic is deeply assumptive and ignores all other variables outside of the argument.
If we assume all cats have four legs, and I demonstrate that my dog has four legs - then logically (we ignore all other variables) my dog is a cat.
Whilst we can appreciate that there are a handful of superficial similarities between a human immune system feedback loop and changes that may have been brought about in a feedback way to our computer systems, to leap to the conclusion that therefore people attacking other people's computers is a good thing is a ridiculous over simplification.
The same argument could be used to demonstrate the benefits of torture or genocide. The fact they happened was the cause of action against them.
Y'know, just acknowledging that more attacks can result in a more secure system doesn't mean we have to thank the attackers and start sending them Christmas cards. One of the ways computer security has improved over the last few years has to do with response: we have more law-enforcement types who are interested in prosecuting people for computer crime.
Admittedly, law-enforcement still doesn't have anything like the bandwidth they would need to prosecute every script kiddie on the planet. But arresting a bot-herder now and then is a good thing.
Many, many of these posters need to read the entire note! Most of the concrete recommendations come down to accounting for various factors in sentencing guidelines. "Taking into account" is *not* the same as 'giving a get out of jail free card'.
One of the examples works rather well, I think. It suggests that penalties should be lower for exploiting unknown vulnerabilities. If someone leverages an unknown vulnerability, that *does* benefit us all to some extent, so you get a minor penalty. But if you leverage a known vulnerability, you're just being nasty, and we'll throw the book at you.
This would be an incentive for vendors to announce their vulnerabilities with credit to the finder, since announcing them will keep the penalties higher. This should encourage people to legitimately look for new vulnerabilities, rather than illicitly try to leverage the old ones.
This is true... attacks harden the world's networks. But this is no different than anything else. Disease hardens the body against disease. Lift weights, and you induce tiny injuries which heal back stronger. Stress your bones, you do the same.
But tear a muscle or break a bone -- and don't let that bone heal back properly... and that is different.
Break the wrong bone, such as a key spinal bone... and you are in trouble.
If I beat you up... is that a good way to teach you lessons about learning self-defense?
Someone pointed out 'criminal intent'... very true. That is what it is all about.
But, let's go even further, on a philosophical level... what is any story without a "hero" and a "villain"... or what is a "hero" without adversity? A hero without adversity is some guy sitting in his chair watching television.
Adversity defines us. Without it we are nothing. But that doesn't mean adversity is the good guy... or that we welcome it.
Maybe we watch too many movies... and start to think, 'gee, what would this movie be without the bad guy and the adversity' -- but let's get real. That is a fictional character dealing with those problems. Put yourself in the same position... and you see what it is like.
Bad things define us, overcoming bad things makes us who we are. Adversity can be as inert as a mountain we climb... or it may be as aggressive as someone firing a gun at us.
Is it cool to have old ladies ripped off of their savings? Should we celebrate that? Should we celebrate the Chinese government hacking into dissidents' computers and using that intelligence to kill people who only want to help their own country?
I think not.
There are numerous attempts to apply the immune system model to computers, so the analogy is apt in a strictly literal sense. If people aren't availing themselves of it, or aren't aware of this, that's not the analogy's problem. Search for computer immune systems, Stephanie Forrest had an ACM or IEEE article on it years ago. Check out apparmor/immunix.
And the "defense in depth" metaphor is similar in many ways to the diversity defense, assuming that one is concerned about a certain fraction of the systems remaining operational (that is, if failures don't spread from the vulnerable hosts to the invulnerable hosts via some different mechanism that wasn't available directly, arguable whether this is a valid assumption on the whole).
Remember, the root cause of vulnerabilities is vulnerable software. It's not the script-kiddie, because the vulnerability exists whether or not someone exploits it. I suspect, like Ron Rivest, that we will see a gradual trend towards more secure software, and if the consumer can accurately judge security in the long run, then perhaps one day we will reach a point where the costs of exploitation are more than the returns, like with bank vault security today. It's not clear to me that the average consumer can judge the vulnerability of software at all, though, or that security is important enough to make a difference in their purchase decisions.
While it is probably true that you can't test in security (at the design level), finite-length code has a finite number of implementation errors, right? So if you only make bugfixes and don't introduce features, the bugs will get more and more expensive to find, until there are none.
Does the lack of pretexting make my personal information impossible to obtain by someone who is capable and ready to do so? It's a latent vuln, but a vuln nonetheless.
On the other hand, throwing a brick through my window does not teach me a useful lesson about "windows vulnerabilities" (pun intended). But physics and code are different. I can't build unbreakable objects in real life. I *can* eliminate all implementation errors in code, though I never know when I have. And even with design flaws, I can come up with validators that apply several libraries of attacks (for example, see SPIN and murphi for finite-state protocol analysis). We can develop more expressive languages, and we can learn which ones are a best fit for the human mind (probably it's not one-size-fits-all).
Is the ability of people to spam a design flaw in email? I don't feel comfortable saying that or its opposite, without qualification. Certainly not all abuse (or "new use idioms") are due to exploitable info-war style vulnerabilities. The malware - botnet - spam theme is more due to a new economic model than any inherent vulnerability methinks.
In the absence of obvious exploitation, security is invariably lax. This is a field day for people who know what they're doing. This was true of the 90's Internet, but I'm thinking this is less true every day. Exploitation of systems configured by security-conscious people is getting difficult. Certainly it's still true of passive attacks (surveillance/espionage); there's no feedback there, losses are speculative, so it's a treasure trove, more than likely.
I think one thing that is abundantly clear is that most exploit scenarios could be much more damaging. I have seen exploits that turn hardware into warm bricks. I'm just glad they aren't de rigueur as worm payloads.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.