Schneier on Security
A blog covering security and security technology.
March 11, 2013
Is Software Security a Waste of Money?
I worry that comments about the value of software security made at the RSA Conference last week will be taken out of context. John Viega did not say that software security wasn't important. He said:
For large software companies or major corporations such as banks or health care firms with large custom software bases, investing in software security can prove to be valuable and provide a measurable return on investment, but that's probably not the case for smaller enterprises, said John Viega, executive vice president of products, strategy and services at SilverSky and an authority on software security. Viega, who formerly worked on product security at McAfee and as a consultant at Cigital, said that when he was at McAfee he could not find a return on investment for software security.
I agree with that. For small companies, it's not worth worrying much about software security. But for large software companies, it's vital.
Posted on March 11, 2013 at 6:12 AM
The sad thing is that he's probably right. For your average small to medium size company, there is no reason at all to worry about how secure your software is. Hardly anyone chooses products based on software security. And as long as companies don't end up in court for security issues (the way car manufacturers would, for example), there's no indirect economic incentive either.
But how long will that remain true? Software is becoming a larger and larger part of our lives, and some of the large companies are (finally) taking security seriously. How soon before competing with the big players in a market requires a small startup to prove it takes security seriously? Maybe never...but maybe not.
So, what the article is saying is that the customer has only these options:
* do not use software at all,
* give up on security and privacy entirely,
* push for strong industry regulations mandating at least some basic security.
That is sad.
ee.. maybe you have a specific definition of software security and I am misunderstanding the nuances of the statement, but isn't it the case that _all_ should _worry_ about software security? The difference is in what one can _do_ about it. While small companies must more or less trust standard providers and rely on a basic level of hygiene, the big ones can afford big investments and more complex measures.. I do not see a fundamental difference from.. say.. physical security.. A locked door may be enough for a man-and-dog company, but large banks build vaults and sophisticated access control systems..
I think it depends on what the product is that the small company is developing. A small company can develop software for use by much larger organisations, though those larger organisations should question the security of the software they are using or purchasing.
I also think start-ups, especially of the web variety, need to be mindful of security, as they can quickly grow large user bases, and those of the social variety often collect personal information.
We've been through this "can't find a return on investment" argument before with Quality Control, yet QC is alive and spreading nicely, not only because it shows in the bottom line, but because it also improves all sorts of things without the faux balance-sheet benefits you find in the likes of so-called efficiency drives that are really death by a thousand cuts.
But first the big problem to get over is ROI...
As a tool, ROI is a very blunt instrument, designed only to answer highly specific and well-founded questions such as "What is the payback time on this new CNC tool?" where ALL the measurands are known or predictable (like the direction of energy costs).
The important thing is knowing ALL your measurands; if you don't, then as a predictive tool ROI is less useful than throwing darts at a dartboard while wearing a blindfold.
The other elephant-in-the-room problem is software code cutters and the way they are remunerated.
It does not encourage best-of-anything practices, and it reflects again on management and their inability to find and use measurands appropriate to what they are doing.
The one thing that can definitely be said about the process is that it is neither scientific nor even remotely like an engineering process. It's faddy, full of the latest must-haves, and as a result is at best artisanal.
As was once observed it does not matter how you shovel it, how high, large or to where, a pile of 5h1t is a pile of 5h1t, no matter how decorative, adding more to it just makes the problem worse.
There is a tale about an Emperor and his New Clothes, in which a couple of con artists sell the Emperor on their wonderful new process for making the finest of spun cloth, so light and so fine that only the best of minds can comprehend the finery of the result. So it is with many of the tools and methods in the software industry.
Quality control started in manufacturing for very good and proper reasons. It initially appeared to be an expense too far, but it worked. What it did was fix a broken process that had been optimised for efficiency to the point where it was killing companies. In the ROI terms of the time QC made no sense, but it produced big results in a clearly ailing system. Since then the methods have been shown to work in all sorts of other areas with different ROI and efficiency-optimisation issues.
As I've repeatedly said for well over a sixth of a century "Security is a Quality Process" and it's a process we need very badly in software and other design and production environments.
And a first good step would actually be to bring engineering practices into software production.
I forgot to mention why this is painfully obvious,
Viega, who formerly worked on product security at McAfee and as a consultant at Cigital, said that when he was at McAfee he could not find a return on investment for software security
Firstly ask yourself what it was that McAfee supplied as a product?
It's a faux product, designed to address deficiencies in another market (OS and app production).
It's like Victorian boiler makers: when a boiler exploded, you just riveted or bolted on another lump of metal. That's not how engineers work. You first analyse and then solve the original problem (the OS and apps), not chuck another dead weight (AV) on to stress and break the system elsewhere, which then requires yet another dead weight. If you carry on, eventually the entire system collapses and ceases to function under the dead weight.
The ROI on dead weight is exactly what you would expect it to be, and that is what he found.
No surprises there then.
I agree with that. For small companies, it's not worth worrying much about software security. But for large software companies, it's vital.
In engineering, small companies make component parts and large companies make complete systems such as bridges, aircraft, cars etc.
Do you really want the company that makes the brake cylinder components in your car not to bother with quality? How about the company that makes a fastener used in the engines of the planes you spend a disproportionate part of your life in?
Or perhaps you would like to drive across a bridge designed and built by a small company where engineering principles are not used, just the aesthetics of the artist's pencil?
Unless you find a way to measure the SDLC, yes, it is a waste of money for all organizations..
OpenBSD doesn't cost anything, comes pre-secured (:
I think it depends upon the value of one's digital properties, including IP. Just because a company is small doesn't mean that it has nothing worth protecting vigorously, or nothing that data/IP thieves would want to misappropriate. That is a decision for the company in question, although most CEOs think their cruft is the most valuable cruft in the world, even when it is not worth the time to abscond with. This is where an outside expert opinion would not be remiss to obtain.
Software security is important! If a small company, say a website, is hacked, the clean-up phase alone would be expensive!
Software normally doesn't interact physically with the user. Comparing it with engineering (and with car auto makers, no less!) is so far fetched it shouldn't be even mentioned.
Your computer freezing shouldn't put you at vital risk. If it does, you are using it for something it wasn't designed for.
Of course, there is such a thing as "mission critical software." That's pretty secure and it follows strict procedures. But it is a class of its own, and not something you would call "normal."
I also think he is wrong. As soon as you create or use software, software security is just as important as your protection goals. The company size has absolutely no impact on this.
For smaller companies, other than basic good software acquisition practices, and basic practices like webserver hardening, the greatest security control might be safety in numbers. If you can't sufficiently protect yourself, join a herd, er, I mean go into the cloud.
For all companies, focusing solely on ROI is a race to the bottom in terms of security. What's the ROI of an insurance policy?
Perhaps this is related to the relatively low cost of run-of-the-mill personal information such as credit-card numbers? Inside information from most small companies isn't really worth much -- they don't have a huge stream of credit-card numbers, they don't have particularly valuable intellectual property, and they don't have enough money to be worthwhile targets for DDoS or other attacks.
Sure, there will be exceptions, either in terms of companies or in terms of attacks -- such as wire-transfer fraud -- that are effective against pretty much anyone, but in general hacking small companies doesn't get you a lot of money. So if the average small company gets hacked once every 10 years for $10K, just to pull some numbers out of nowhere, then anything more than $1K a year (including employee time) for security would be a net loss.
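The back-of-the-envelope numbers above follow the standard annualized-loss-expectancy shape, which can be made explicit. A minimal sketch using the commenter's own made-up figures (the dollar amounts and rates are purely illustrative):

```python
# Annualized loss expectancy (ALE) = single loss expectancy * annual rate of occurrence.
# All figures are the commenter's illustrative numbers, not real data.
single_loss = 10_000          # cost of one breach, in dollars (hypothetical)
breaches_per_year = 1 / 10    # one breach every ten years (hypothetical)

ale = single_loss * breaches_per_year
print(ale)  # 1000.0 -- spending more than this per year on prevention is a net loss

def security_spend_is_net_win(annual_spend, risk_reduction):
    """True if spending `annual_spend` per year to avoid a fraction
    `risk_reduction` of the expected annual loss pays for itself."""
    return ale * risk_reduction > annual_spend

print(security_spend_is_net_win(500, 0.9))   # True: $500 to avoid $900 of expected loss
print(security_spend_is_net_win(2000, 0.9))  # False: $2000 to avoid $900
```

The same arithmetic also shows the commenter's closing point: if automation drops the attacker's cost and the breach rate rises, `ale` rises with it and the break-even spend moves up.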
Of course, as more interesting attacks get more automated, and information about possible buyers of stolen IP gets easier to find, the cost of hacking smaller businesses will decrease for criminals, and security might become a net win for them -- if it actually works.
@ Leonardo Herrera,
Software normally doesn't interact physically with the user
Oh brother, what a cockeyed statement...
Software does not have to physically interact with you to clean out your bank account, and most people would consider that a major harm and thus a major risk.
Comparing it [software] with engineering (and with car auto makers, no less!) is so far fetched it shouldn't be even mentioned
That's a really silly statement, and you should know it if you wish to be taken seriously. Nearly all systems these days, from your microwave oven upwards, contain computers, with software providing virtually all the functionality or purpose of the system.
Asking that "software" development be done in an engineering-like fashion is not a big ask, especially for, say, your ABS system.
After all, if I were to follow your argument, then because the fastener in the brake caliper does not physically interact with me the user, just the ABS, it should likewise be disregarded, so a bit of old string and sealing wax will do then...
Your computer freezing shouldn't put you in vital risk.
It should not put you at ANY involuntary risk, full stop. Go and take a look at product liability legislation, and then ask why it is not applied to software, which is a product in its own right and thus should be fit not just for sale but for purpose as well.
If it [software] does, you are using it for something it wasn't designed for.
Now there's an odd assumption if ever there was one. Tell me, why do they put safety features in products?
The way you describe it, a product does not need to be designed to be safe, so having the end of the steering column sticking out of the wheel hub at your chest, rather than an air bag, is an OK design by you?
Of course, there is such a thing as "mission critical software." That's pretty secure and it follows strict procedures. But it is a class of its own, and not something you would call "normal".
I do wish people would stop using "Mission Critical"; it's a management nonsense expression, as any experienced engineer will tell you (especially one with chartered status or equivalent).
If something affects the normal operation of a system, then the system fails to meet its requirements. It's that simple. There is nothing "normal" or acceptable about systems that fail; they are required to be "fit for purpose". Thinking that such failures are "normal" is the sign of a seriously disturbed mind.
I think software security is more important for small companies than for large ones. A large company will take a hit on the balance sheet due to a security breach. The same breach in a small company may result in bankruptcy and/or the business closing its doors.
To be clear, the author was talking about small commercial vendors that sell software. The economics just don't add up to spend a lot of money trying to prevent problems when most of the economic damage is paid for by your customers.
A great (but poorly-titled) book, Geekonomics by David Rice, goes into the economics of software security. His comparison to the evolution of automobile safety is particularly apt. When car manufacturers were made financially liable for safety issues, safety improved remarkably.
It's hard to see the ROI if you're not getting hacked. I would be curious to know if he meant all software security doesn't have value or does he just mean a "real deal" software security process. Implementing a full process a la BSIMM or OpenSAMM may not pass a cost-benefit analysis, but some training and simple checks for SQLI and XSS will go along way. We all know that's not enough, but every app should be doing at least some security. Sending a message that it's not worth it, especially coming from people with much influence, is troubling.
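The "simple checks for SQLI" mentioned above really are cheap: parameterized queries cost nothing extra at development time. A minimal sketch using Python's standard-library sqlite3 module (the table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern: string concatenation lets the payload rewrite the query.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe pattern: a bound parameter is treated strictly as data, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)  # [('admin',)] -- injection succeeded
print(safe)        # [] -- no user is literally named "alice' OR '1'='1"
```

The safe version requires no extra tooling or process, which is exactly the kind of low-cost control a small shop can adopt without anything like a full SDL.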
@aaaa, you lack imagination.
Consumers can choose to use product from large companies rather than smaller ones (but is that really safer given that those are also more attractive targets?).
Consumers can also choose products from small or medium companies who advertise extra security over their competitors (but is that feature worth the extra cost?).
The regulatory approach is similar to the second one, but with government decreeing that yes such extra cost is worthwhile and setting the feature requirements.
First, does that decrease competition?
Second, does that give a differential advantage to larger firms over smaller ones?
Last, does government have its own biases, misaligned incentives (alarmism, power capture) and susceptibility to regulatory capture?
All of those can be answered 'yes'.
There's some really good comments above. I was going to add something about all the big shots blaming users and now everything is the fault of some third-party. But there's something a lot bigger going on here: RESIGNATION.
I wonder how McAfee feels about that comment. Says a lot for the state of their product line, no?
Considering the continuing problems with McAfee's sites, its SiteAdvisor product, and cross-site scripting vulnerabilities, and the fact that this doesn't seem to faze the company's profitability or reputation at all, I guess that means he's right.
All he's saying is that security specific quality control would often be better replaced by general quality control, because generally the cost of security flaws is small. Large enough companies (and large really just refers to the size of the cost of a discovered flaw) are the exception to "generally".
I kinda agree with Viega's argument when it comes to small companies, as adding dedicated security reviews and proactive development to the software development cycle leads to costs that just don't add up for them.
The point of his argument is that time spent on security would be better used to develop functional requirements, and that security flaws are more cost-effective to fix when discovered later.
If a small company is selling a product, they are selling it mostly on its functionality and not necessarily its security features. I can buy the argument that it's better for most companies to spend resources improving functionality and design first before looking at building secure software.
Totally secure software is very hard to impossible to develop; an organization would be better off spending its money and resources on building new functionality, design, and quality, especially if it is competing for market share against large entrenched companies.
Unless that small company is a software product vendor...
To: Bruce Schneier and others
Counterpoint: Value of software security for small businesses
I agree with what I think Viega is *trying* to say, but not with what he said (I'll call that the broader claim). Let's put what he said in a nutshell: a process like Microsoft's SDL, with all its labor and costs, makes no sense for smaller businesses. This is entirely true. However, it would be false to state that SDL is representative of secure or reliable software development in general. Further, most users' security expectations are minimal: don't corrupt the data; basic confidentiality protection; good service availability; best practices for the rest. It doesn't take a huge SDL to accomplish this.
(Note at outset: my definition of security in this essay isn't absolute. Almost all software can be hacked or will be compromised if attackers really want in. I'm thinking more on the lines of reduce vulnerability, reduce risk, keep attackers from noticing you, make it harder, etc. The kind that is practical.)
Now, it's time to attack the broader claim. The mere existence of useful, secure software from smaller organizations, esp. open source, disputes any claim that smaller organizations can't do software security or that it's too costly. Turns out they can. They just have to prioritize and be lean when they do. Preferably, they have a development methodology that bakes quality in and a platform that deals with issues.
So, first, what are our security goals? See the first paragraph. Those security objectives show due diligence, reduce the problems the user sees, and establish a nice baseline for other activities. Second, how to cost-effectively reach these goals? A methodology such as Cleanroom goes a long way. It emphasizes clean functional design, it's modular, it's efficient, low defect, testing is usage-based, and the defect rate is certifiable. Empirical studies showed Cleanroom boosted productivity, reduced defects, took little training effort, and programs were often close to defect-free on first run. And that was the 1980s. ;) Combine that with a typesafe language or managed platform to prevent all sorts of issues without any real overhead at all.
There's other processes that do that. Fagan Software Inspection and PSP/TSP are good ones for low cost or easy adoption. More specialized skill and investment is required for Praxis' Correct by Construction, Galois, PHASE, etc. However, we've had for decades low defect methodologies that are usable and inexpensive. They save so much money in the maintenance cycle that there's little to justify, ROI-wise, not using them in some form.
The next step is their target deployment platform. The most common platforms are Windows/x86 and Linux/x86. If they use them, they need to look up security guides for them and follow best practices. The most cost-effective approach is to do the hard work once on a master image. If any patches are applied to it, the developers should check that no functionality is broken. In production, all unnecessary software should be removed from the machine. If mandatory controls or security-enhancing features are available, the application should take advantage of them. The minimum for that is non-root operation, the NX bit enabled, and Windows Integrity Control (or *NIX equivalent).
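Checks against a master-image baseline like the one described above can themselves be automated cheaply. A minimal, Linux-oriented sketch (the allowed-port list is invented; a real hardening guide covers far more than this):

```python
import os
import socket

def running_as_root():
    """True if the current process has effective UID 0 (Unix-like systems)."""
    return os.geteuid() == 0

def port_is_open(host, port, timeout=0.5):
    """True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Hypothetical allowlist for the hardened master image: SSH and HTTPS only.
ALLOWED_PORTS = {22, 443}

def unexpected_open_ports(ports_to_probe):
    """Ports that are listening locally but are not in the allowlist."""
    return [p for p in ports_to_probe
            if port_is_open("127.0.0.1", p) and p not in ALLOWED_PORTS]

print("running as root:", running_as_root())
print("unexpected listeners:", unexpected_open_ports(range(1, 200)))
```

Running a script like this after every patch cycle is one way to catch the "patch silently re-enabled a service" failure mode without any paid tooling.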
Using a more obscure yet high-quality platform can reduce the chance that an attack on a popular software component will result in system compromise. The obscurity can be in the middleware, managed language, operating system, or processor instruction set. For example, a well-configured nginx web server on OpenBSD/PPC will virtually never get compromised. IBM's i and POWER systems have a good security track record, among other benefits. Also, my experience showed that replacing TCP, HTTP, XML and other complex insecure middleware with simpler stacks can confound hackers. (Making the packets look like TCP or HTTP helps with firewall issues and frustrates attackers even more. It lets you reuse mature parsing code, too.)
There are the usual points regarding SCM. The system should have automated build and test processes where possible. Continuous integration is a good idea. Any changes should be integrated, built, and tested before the day is over. As said in Cleanroom, usage-centric testing takes priority over methods such as unit testing, because bugs the customer sees matter more than residual bugs. Builds should be signed. The computer that does all this had better be protected well and have no access to the main Internet. Failure to protect the SCM can result in press problems (*cough* Bit9 *cough*). There's free software, and there are good step-by-step guides on the net, for almost all of this. The hardware can be cheap, too.
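The "builds should be signed" step can be sketched minimally. A real pipeline would use asymmetric signatures (GPG keys or similar) so that verifiers never hold the signing secret, but an HMAC over the artifact shows the shape of the check (the key and artifact bytes here are invented):

```python
import hashlib
import hmac

# Hypothetical signing key: lives only on the protected build machine.
SIGNING_KEY = b"build-server-secret"

def sign_artifact(data: bytes) -> str:
    """Produce a hex MAC over the build artifact's bytes."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, signature: str) -> bool:
    """Constant-time check that the artifact matches its recorded signature."""
    return hmac.compare_digest(sign_artifact(data), signature)

artifact = b"\x7fELF...release build bytes..."  # stand-in for a real binary
sig = sign_artifact(artifact)

print(verify_artifact(artifact, sig))                # True: untampered build
print(verify_artifact(artifact + b"trojan", sig))    # False: modified build
```

The point of the check, as the Bit9 incident illustrated, is that signing is only as strong as the protection around the machine holding the key.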
So, here's my counterargument in premise/conclusion form:
a) There are small software groups and companies making software that is low defect, high security, etc.
b) There are companies that differentiate themselves on software quality or security that are succeeding.
c) Small businesses can inexpensively do most of the high gain stuff I mentioned, much of it free and easy
d) Many high gain activities I mentioned, like better dev processes or quality OS's, are already in use at many small businesses (and some nonprofits/govt's)
So, (e), these collectively refute the broader claim that software/system security is a waste of money for small businesses. Trying to do it like Microsoft or a large vendor is a waste of money. Doing the cheap basics with high security/quality ROI is smart. It can also get you market share or reputation if it's done right (priorities in order) and marketed as a differentiator.
@ Clive Robinson
I think your comparison to safety-critical, embedded systems is an apples-to-oranges comparison. Safety-critical and security-critical are certainly not the same thing. Many of the former couldn't make do for the latter. Further, the baseline quality standard for desktop and web applications in today's market is quite low. Users just want few crashes, they don't want to lose all their data, and breaches should be few.
Your comments, esp about ROI, mainly apply to companies in quality- or security-critical markets. In those, even small companies need to invest in quality and security to make it. There is also a ROI benefit in the generic software market for high integrity apps, but it's tougher to market that. There are a few companies that do that, though.
I see security as just another feature which is part of the investment and design trade-off. So I expect that this problem will be progressively solved by innovation and competition.
Expensive features become cheaper over time, which makes them affordable to more consumers and more companies.
There is no Pareto efficient way of accelerating the process (trying to artificially push more resources into this feature will cause other aspects to suffer).
it's vital for everyone -
lots of small infected systems can bring down major IT infrastructures through DDoS, spam campaigns etc
As a small business start-up with a focus on research and development, we have had to take a serious look at the market landscape. The components of risk, often underestimated in an ROI valuation, do not lead one forward with greater confidence. One: vendor dependencies are often inaccurate. For example, what are the true and complete risks associated with a CAD/CAM environment? Are your design systems plugged into a network? Are your CAM systems plugged into the same network? Two: what exposure does your software procurement process have to your production systems? I remember when the Deepwater Horizon platform went down in the Gulf. My immediate reaction was that the McAfee AV corruption from a bad update happened at about the same time, and I wondered whether gas-flow monitoring systems went off-line before the platform exploded.
My two cents
I think the remark that there is no ROI for a small software company investing in software security is just the expression of a point of view, even coming from a reputational authority in the field like John Viega. Does John perhaps have any metrics to share to back his point, such as those from Capers Jones here, which may actually prove the opposite, at least for large software organizations? https://www.owasp.org/index.php/File:Issue_SDLC_metrics.jpg
One of the many problems with the position that small/medium companies can get away without software security is that small and medium companies write a lot of the software that big companies use, whether as products or through online platforms. Big companies put their customers at risk by using insecure software built by small companies all of the time. It's a stupid situation.
Yes, Microsoft's SDL is too big and expensive for most companies - Microsoft doesn't recommend that anybody else even try to follow their model. We need to scale software security down so that it is affordable for small companies as well as big companies. This isn't easy, but to give up on the problem and pretend that it is just an enterprise problem is foolish - especially because there is no evidence that big companies can solve it either.
Rather than wasting more time on phony ROI arguments for software security, we should be thinking about how to make it simple and cheap to build good, secure software.
@ Leonardo Herrera:
I recommend looking up the definition of engineering! Here is the start of it from Wikipedia: "Engineering is the application of scientific, economic, social, and practical knowledge, in order to design, build, and maintain structures, machines, devices, systems, materials and processes." Now, if you think that does not apply to software, then you have a serious problem you should look at.
When it comes down to it, most software used as part of a technical or business (or medical, ...) process is mission critical. And basically all software is connected to the Internet in one way or another. Your statement just shows that you ignore reality.
This holds true if we also give up lawyers or just buy insurance that assumes all risk.
I think that the real problem is the use of the phrase "software security". It is just too broad. And the various categories are not all the same.
Here's a list of categories, arranged in descending order of severity that I use:
1. Remote root access that does NOT require human intervention or other app running.
2. Remote non-root access that does NOT require human intervention or other app running.
3. Local root access that does NOT require human intervention or other app running.
4. Local non-root access that does NOT require human intervention or other app running.
5. Remote root access that requires some human interaction or some combination of apps.
6. Remote non-root access that requires some human interaction or some combination of apps.
7. Local root access that requires some human interaction or some combination of apps.
8. Local non-root access that requires some human interaction or some combination of apps.
9. Remote OS crash.
10. Remote app crash.
11. Local OS crash.
12. Local app crash.
Now, is it financially feasible for a small software company to address items 1, 2 and 3? It had better be. If nothing else, by NOT leaving a port open for remote connections.
But a few cases of item #12? Deal with those as they show up. AS LONG AS THEY DON'T TURN INTO ITEM #3.
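The "don't leave a port open" advice maps directly onto how a service binds its listening socket, and it is what separates the remote categories (1-2) from the local ones (3-4) in the list above. A minimal sketch, Python stdlib only:

```python
import socket

# Exposed to every interface: remote peers can reach this listener,
# so any flaw in the service becomes a remote (category 1/2) problem.
exposed = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
exposed.bind(("0.0.0.0", 0))  # port 0: let the OS pick a free port

# Bound to loopback only: remote connections are impossible at the
# network layer, demoting any flaw to a local (3/4) problem at worst.
local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local_only.bind(("127.0.0.1", 0))

exposed_addr = exposed.getsockname()[0]
local_addr = local_only.getsockname()[0]
print(exposed_addr)     # 0.0.0.0
print(local_addr)       # 127.0.0.1
exposed.close()
local_only.close()
```

A one-character difference in the bind address moves a bug several rungs down the severity ladder, which is about as cheap as a security control gets.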
Comparisons of software with bolts are usually made by people who do not know about bolts. In engineering, the supposedly complete specification and substitutability of bolts is still a sometimes difficult issue 150 years after Whitworth. What would engineering look like if engineers had to reinvent thread profiles for every customer (Cadillac had their own thread profile, for example)? The problem is, clients want to specify that stuff. They expect to specify how fast the bolt will turn when the head is turned. You can complicate software development by adding more certifications, but how do you certify user expectations? In engineering, people do still design fasteners, and they expect to own the problem when those fasteners break. Software, and software security, is the same. Attempts at premature standardisation, like the early US adoption of Whitworth's thread forms, failed; early specification of software security engineering is likely to go the same way.
@ Nick P,
I think your comparison to safety-critical embedded systems is an apples to oranges comparison.
Only if you look from the direction you are looking from, try walking to a different spot and looking again.
Writing software for a CPU and system is a process that works in layers. It really does not matter what the system is; the process of writing the software is the same.
Look at it this way: take two people who speak different languages. They may not know how to convey information in words, but they can both speak, sing the same notes, and whistle the same tune. The mechanics of whistling, singing and speaking are the same; it is merely the words that are different.
If you are using a sound methodology for writing code, then provided there are sufficient resources on your chosen platform, it does not matter if the code runs on an embedded system or a mainframe.
Calling it "safety critical" or "mission critical" or "secure" are just words to say the code has been written to meet a certain standard.
Writing code to a certain standard is in the main a state of mind, an orderliness and understanding of the job in hand, and following well-tried, tested and proven techniques. In essence it's what engineering is about: at the base is science, which has tried, tested and quantified the characteristics of the basic materials and parts. By using correct procedures you then build the parts into other parts that go in turn towards the final system.
Provided the correct rules have been followed and the parts are kept within their specifications, you have an engineered solution that complies with the original specification. And provided the original specification was correct, you have a working system.
It's not magic, it's not artistic, it is science and it is what we call engineering.
To put it another way, why should I expect as a norm that my ABS software be written to a higher standard of reliability than my spreadsheet program?
I don't expect my vacuum cleaner to be any more or less reliable than my car, and nor, I suspect, do you or just about anybody else.
You've actually said it yourself above with your comment about small companies,
Turns out they can. They just have to prioritize and be lean when they do. Preferably, they have a development methodology that bakes quality in and a platform that deals with issues.
The real issue the "Elephant in the Room" is that we the customers have been trained by large companies marketing departments to take a large pile of 5h1t and call it quality, when it's anything but. And if we dare say the Emperor Has no clothes their litigation specialists will ensure that we don't say it twice in a way that will effect their bottom line.
There is an evil little myth you hear spouted about markets, that "they know what's best", with the assumption that this will be good for the market and its customers. Well, history says otherwise: history says if you don't set and enforce standards, the market will head to the bottom as fast as possible and will hurt, maim and kill along the way; on each iteration the products will become shoddier, the profit less, and eventually the market will wither and die.
We actually saw the start of this with Microsoft; then Bill stepped back in and decided to try to drive the quality back up. The problem MS has is the legacy code: they did not get a clean new start, and they have been running around ever since quietly pruning off the bad old stuff, and that sort of activity is expensive.
Further history shows that when standards are maintained from the start they actually tend to improve the market; the market becomes stronger and more viable, and the opportunity for niche suppliers opens up. People want to buy quality; they would actually prefer it to reduced prices.
Oddly, open source, even though free, can be a quality product, and in quite a few cases is actually of a higher standard than commercial offerings (the BSDs being one you mention fairly frequently). Economists cannot explain this by their standard rules, which says more about them than it does about open source.
The problem with software is that quality is difficult to spot, mainly because it's non-obvious: it sits almost hidden in the background, just doing what it's designed to do. We only notice it when it breaks. A classic example would be the code in your phone: not the "smart" software of the personal organiser, web browser and so on, but the background, highly standardised network software. It's reliable, and it would have been secure if those who paid the bills at the time (UK Gov) had not wanted it to be insecure.
Economists find this difficult to explain (or don't wish to), which is why they have trouble predicting how consumers behave, and why, despite their basic rules, certain markets survive and thrive whilst others that stick to their rules wither and die.
Put simply some people have pride in what they do and given the right incentives will do the best job they know how to do.
And that's the rub, that sneaky little "know" word: it's the not knowing how to do a better job that holds people back. And that's what quality processes are all about: putting the knowledge of how to do better in people's hands.
Security and stability are just two of many aspects of doing a better job; when you know how to do them you can then make a rational choice. Without the knowledge it's just a game of darts in the dark.
Unfortunately the same issues apply to specifications and standards, and if you don't know whether the specification or standard you are working to is good or bad, then the quality of the work you do is not what makes the final result good. Because:
Poor code + poor standard = poor result.
Good code + poor standard = poor result.
Poor code + good standard = poor result.
Good code + good standard = good result.
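The matrix above is just a logical AND over the two inputs. A trivial sketch (the function and names are mine, purely for illustration):

```python
def final_result(good_code: bool, good_standard: bool) -> str:
    """The outcome is only good when BOTH the code and the standard
    it was written to are good -- a logical AND, as the matrix shows."""
    return "good result" if good_code and good_standard else "poor result"

# Enumerate the four combinations from the matrix above.
for code_ok in (False, True):
    for standard_ok in (False, True):
        print(code_ok, standard_ok, final_result(code_ok, standard_ok))
```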
Comparisons of software with bolts are usually made by people who do not know about bolts.
And the opposite is also true, which is why history throws up some oddities.
For instance do you know what the connection is between Babbage and Whitworth?
It might cause a wry smile when you think on your other statement:
Attempts at premature standardisation, like the early US adoption of Whitworth's thread forms, failed; early specification of software security engineering is likely to go the same way.
Viega's comments should be taken with another grain of salt: McAfee already had code review in place well before he ever got there.
So he cannot really say, "We just had three security vulnerabilities that cost us a few tens of thousands, so why did we waste all of this money on making our apps secure?"
Any one of those security vulnerabilities could have been devastating for their company. They were fortunate, but they also did their work beforehand.
Another reason he could have such an attitude is that McAfee's everyday business largely consists of discovering attacks *after* they happen on people's systems.
Someone gets hit; it isn't them. They get paid more, in a sense, if they react quickly. And they get paid when the people hit are their customers.
So, what if there were a key remote security vulnerability in, say, McAfee's firewall products and someone used that to create a worm? It could have happened.
Smaller businesses than the Fortune 500... true, there is a point where the money invested and the security return really do start to climb together.
But before you get there, you are wisely investing money performing due diligence on behalf of your customers.
It really can depend what your customers are trusting you with.
If you are a small gaming company, or a POS terminal company (like Heartland Payment Systems), or a telecommunications hardware company whose buyer is, say, Verizon, or a small bank.
Not an impressive statement, and here's the translation from another perspective...
"People of America, for the high income earner security is important, but probably not for the minimum wage earner."
This is why people react badly and do not appreciate his speech.
He painted a picture of providing high-end solutions to the elite only, as if the commoners do not deserve them.
Security must be seen by an expert from a 360-degree angle, with zero loopholes (as far as possible). Limiting security to high-end individuals will soon cause serious trouble, and that will continue until someone understands that the masses are greater in number than the so-called enterprise.
He needs to understand that next time he should never make a comparison of security importance between the enterprise and small business. Every single individual should get the same quality of security, regardless of the cost, because there is always an alternative solution, whether in tools or process.
A point not made is that with low-assurance IT, and with what amounts to largely penetrate-and-patch security efforts, a lot of resource can be expended for only a little gain against high-end attackers. So rather than determining worth by considering the size of the enterprise, perhaps the worth of the investment is determined more by the nature of the attackers of concern and whether the type of security efforts consuming an organization's resources really impacts those attackers. Doubling down on ineffective (but perhaps feel-good) security will be, by definition, cost-ineffective.
Oh-oh, Bruce, that's a slippery track you are on. The public does not know nearly enough to be able to judge that kind of theater. Their message for the public, in or out of context, in pretense or for real, was clear: "security does not matter". Manipulation of public opinion in this direction is not going to do us any good and I hope you realize that.
> For small companies, it's not worth
> worrying much about software security.
My doctor is a simple partnership with a handful of employees. My accountant is a sole proprietor. Until recently, my bank probably didn't have more than a dozen employees.
Just because a business is small doesn't make it unimportant, or unworthy of security. Having my medical or financial information spread about would not only cost them my business, it would put them afoul of various Federal and state laws on privacy.
@Julien Couvreur: A company that advertises extra security over its competitors has not necessarily created a more secure product. Advertisements contain what the company thinks customers value, not what the company produces. You can stretch the truth quite far in advertising and still be OK.
Yes, the regulatory approach has disadvantages of course, and can give an advantage to big corporations. That may not be the most important consideration.
The linked article says that small software producers should not fix security holes until problems happen, because doing so would cost them more money for very little financial benefit. Bruce Schneier agrees. If they are both right, then I have a choice between not using software and giving up on software security.
What I'm saying is that since market forces will force my company to use the software (I would not be able to compete otherwise), it makes sense for me to push for regulations. Regulation can also have bad side effects, but if I have a choice between it and no security, I can decide that it is the lesser evil.
Unless those small software producers decide that security does matter, or the article is wrong, they are at risk of:
* new regulations over software,
* new liabilities over software bugs and vulnerabilities.
Having to deal with unpredictable juries and court costs may be game over for a small company, even if it did nothing wrong. If they become liable, they will likely have to settle, so regulation might still be the better solution even for them.
There is another aspect to this, which is: what is the relative cost between prevention (AV software etc.) and actual criminal gain?
Ross Anderson has pointed out in the past that in some cases the money spent on AV exceeds the value of the crimes committed, and would be better utilised on catching the criminals (although to be fair the context of the statement needs to be considered).
Anyway, I was reminded of this just recently due to reading this,
This is also true of physical security, where the turnover of the security industry just from selling domestic locks and alarms easily outweighs the actual value of goods stolen.
This is the value the insurance company assumes, allowing for fair wear and tear, not the replacement value or the value realised by the criminals when selling the stolen items.
The major cost of domestic crime such as burglary is not the value of the goods stolen but the costs of making good, administration and investigation. The latter of these is usually a hidden cost in taxation; the other costs are hidden in insurance premiums.
This is just wrong!
I manage a startup company with a patent-pending technology and proprietary algorithms. If my "small company" gets hacked and my code is exfiltrated, I'm doomed; my business will shut down.
I'm not Microsoft, which can 'shh' the incident and move on.
There is an interesting chapter in The Six Sigma Path to Leadership: Observations from the Trenches by David H. Treichler and Ronald D. Carmichael that talks about the foolishness of trying to measure ROI on cost-avoidance measures (have you measured the ROI on your insurance policies?). There was also an article in the Sloan Review that talks about the shortcomings of classical risk management in today's fast-paced world:
Although the Bayesian view is well accepted in some circles, it has not penetrated the risk management world. Traditional risk management has instead adopted the frequentist view, despite its three inherent, and major, shortcomings. First, it puts excessive reliance on historical data and performs poorly when addressing issues where historical data is lacking or misleading. Second, the frequentist view provides little room - and no formal or rigorous role - for judgment built upon experience and expertise. And third, it produces a false sense of security - indeed, sometimes a sense of complacency - because it encourages practitioners to believe that their actions reflect scientific truth. Many of a corporation's most important and riskiest decisions - which often do not fall into the narrow frequentist paradigm - are made without the help of the more sophisticated and comprehensive Bayesian approach. (Borison and Hamm 2010, 53)
I think that this is somewhat the heart of the matter: even if you set ROI aside as nonsense, you may still need to deal with classical risk managers who are more used to managing the risks of roof collapses due to snowfall patterns.
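For what it's worth, the first frequentist shortcoming the quote names (poor behaviour when historical data is sparse) can be sketched in a few lines. This is my own illustrative example, not from the article; the numbers and the uniform Beta(1, 1) prior are assumptions chosen purely for demonstration:

```python
def frequentist_estimate(incidents: int, years: int) -> float:
    """Relative frequency: incidents observed per year of history."""
    return incidents / years

def bayesian_estimate(incidents: int, years: int,
                      prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of a Beta(prior_a, prior_b) prior updated on
    `incidents` events in `years` Bernoulli trials (beta-binomial)."""
    return (incidents + prior_a) / (years + prior_a + prior_b)

# Three incident-free years of history: the frequentist estimate says
# "the risk is zero" (the false sense of security), while the Bayesian
# posterior still assigns the event meaningful probability.
print(frequentist_estimate(0, 3))   # 0.0
print(bayesian_estimate(0, 3))      # 0.2
```

The point is not the particular prior, but that the Bayesian machinery gives a formal place for judgment when the historical record is too short to rely on.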
@ Jim Moore,
... you may still need to deal with classical risk managers who are more used to managing the risks of roof collapses due to snowfall patterns
This is one of the major failings that is hidden away because of assumptions.
We are beings of the physical world, and our understanding of probability is usually based on our physical world view. Which unfortunately gives rise to our limited-view assumptions in effect becoming axioms, without the benefit of the usual scientific process.
In the physical world we have assumptions such as: you can only be in one place; your abilities are limited by the speed of light and by forces; and, importantly, an activity requires the use of force, thus you need force multipliers to go beyond your normal capabilities, and these involve physical objects and energy, which have cost implications.
Thus we don't get one person robbing one hundred thousand stores in the blink of an eye; that's what our physical world experience tells us, and it forms our gut reactions.
The problem with this is we know that in what you might describe as the information world, the costs of information duplication and transmission are as close to zero as makes no odds for an attacker. Also, because attackers co-opt other people's computers, there is no cost for force multipliers, and with a little forethought attacks can be timed such that they happen as near simultaneously as makes no odds.
Thus in the information world, robbing one hundred thousand stores in the blink of an eye is not just possible, it's relatively easy...
Thus nearly all the assumptions that lie behind our actuarial tables and tools from the physical world don't hold in the information world, and this can lead to quite spectacular and catastrophic failures.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.