Security vs. Business Flexibility

This article demonstrates that security is less important than functionality.

When asked about their preference if they needed to choose between IT security and business flexibility, 71 percent of respondents said that security should be equally or more important than business flexibility.

But show them the money and things change: when the same people were asked if they would take the risk of a potential security threat in order to achieve the biggest deal of their lives, 69 percent of respondents said they would take the risk.

The reactions I've read call this a sad commentary on security, but I think it's a perfectly reasonable result. Security is important, but when there's an immediate conflicting requirement, security takes a back seat. I don't think this is a problem of security literacy, or of awareness, or of training. It's a consequence of our natural proclivity to take risks when the rewards are great.

Given the option, I would choose the security threat, too.

In the IT world, we need to recognize this reality. We need to build security that's flexible and adaptable, that can respond to and mitigate security breaches, and can maintain security even in the face of business executives who would deliberately bypass security protection measures to achieve the biggest deal of their lives.

This essay previously appeared on Resilient Systems's blog.

Posted on December 2, 2015 at 6:14 AM • 37 Comments

Comments

blake • December 2, 2015 6:42 AM

> 69 percent of respondents say they would take the risk

> it's a consequence of our natural proclivity to take risks when the rewards are great.

That may be the common decision in the current business climate, and it might even be our psychological disposition as a species, but that doesn't mean it's a sustainable strategy in the long term.

If the benefit of the deal is less than the total risk of a possible security breach - including the costs externalised onto the customers who get stalked or have their identities stolen etc - then it's still a losing strategy.

Wicked Lad • December 2, 2015 7:28 AM

How many times a day do I make decisions that "take the risk of a potential security threat"? It's uncountable. Every time I use a credit card. Every time I visit a website. Every time I go outdoors. Every time I stay indoors.

Taking the risk of a potential security threat is mundane in the extreme. 69% of respondents said they would take the risk, but 100% would and do.

Clive Robinson • December 2, 2015 7:33 AM

@ Bruce,

Was there a "gender breakdown" in the findings?

Tests carried out over the years from young children through to pensioners show a clear gender difference on risk taking behaviour.

Women of any age take fewer risks in general; it's why the "female exception" is so surprising when it happens, more so than men "betting the farm".

There is also the question of psychopathic behaviour: executives score much more highly than the norm, by a long way. Which suggests you cannot prevent their behaviours, only mitigate them.

Execs take unacceptable risks for short-term gain over long-term existence. We have seen this time and time again with utility companies, where preventative maintenance is cut to improve the short-term bottom line.

Their view on risk is that it always happens to others, not them. The way they ensure this is by "jumping the gun".

I've mentioned before the tactic of starting a big project and getting out before the results come in. If it becomes a success you claim it as yours for laying the good groundwork; if it fails you blame those left behind, and those they brought in, for mismanaging your good work and making it fail. With that mentality and behaviour you are always a success, until there is no place left to "jump ship" to...

In such environments security will always be a cost, never of worth, so it is to be traded away or externalised in some way. With the proviso that if it does happen on your shift, you find a scapegoat "who is not one of us" and watch them jailed, even helping investigators to put them there...

Winter • December 2, 2015 8:14 AM

What might help is to relabel "Security" something that can earn you money, like "Robustness", "Fault tolerance", "Quality of Service". These are all good euphemisms for security.

Just as "rapid response emergency services" can be a label for anti-terrorist policies.

CJD • December 2, 2015 9:12 AM

As the saying goes in security, you can pick any 2 of the following 3:
-low cost
-easy to use
-secure

If it is low cost and easy to use, it won't be secure.
If it is secure and low cost, it won't be easy to use.
If it is secure and easy to use, it won't be low cost.

HJohn • December 2, 2015 9:25 AM

A lot of this can also be explained by the "framing effect."

In normal discussions about security, the alternative is naturally framed as a loss, so people choose security.

What happened here is the alternative was framed as a gain, so people were naturally more likely to pursue the gain.

For anyone who hasn't read Bruce's "Psychology of Security," I recommend it. I've used prospect theory and the framing effect a lot; it's unreasonable to expect people to behave one way when the incentives (real or perceived) are lined up otherwise.

In 2001, I wrote a paper and made the statement "Controls are like speed limits: too strict and you'll never get where you are going, too lax and you are more likely to get hurt on your way." Makes me chuckle to recall a friend of mine. He can make a political commentary out of anything. When the speed limit was raised from 65 mph to 70 mph, he fumed "they're going to get people killed to enrich the oil companies." Yet, he, too, drives faster now. He was concerned about safety (security) in theory, yet in practice he wants to get to his destination faster.

One of my favorite quotes from Yogi Berra, that is so true and applicable: "In theory, there is no difference between theory and practice. In practice there is."

Nicolas George • December 2, 2015 9:38 AM

IMHO, the question itself is biased: “potential security threat” versus “biggest deal of their life”. Ask them if they would play 5/6 Russian roulette to gain “a lot” of money, the answer would be the opposite.

Dirk Praet • December 2, 2015 9:49 AM

@ Bruce

> We need to build security that's flexible and adaptable, that can respond to and mitigate security breaches, and can maintain security even in the face of business executives who would deliberately bypass security protection measures to achieve the biggest deal of their lives.

Flexible and adaptable security: yes. Unconditional capitulation to the irresponsible idiots that brought us global warming, preventable data breaches and the financial crisis: no.
CxO's and SMB managers - just like anyone else - should be held responsible and fully accountable for their decisions, not just for the short term profits, but just as much for the long term effects. As long as this principle is not adequately enshrined in law and regulations as well as enforced by the judiciary, CISO's and IT security professionals in many companies will keep facing an uphill battle they can't win and unfortunately also often take the fall for the day something goes seriously wrong.

Gerard van Vooren • December 2, 2015 9:59 AM

@ Dirk Praet,

I completely agree. It's about liability. PHK wrote a blog post about it. https://queue.acm.org/detail.cfm?id=2030258

Today I heard that data from 124,730 Dutch children has been stolen because of a hack at VTech, a Chinese company that makes toys and gadgets. Liability is the only answer that I can think of to *really* deal with this kind of neglect.

Frank Wilhoit • December 2, 2015 10:44 AM

So the real problem is that people do not know how to assess or compare risks, or how to distinguish between consequences that they can take responsibility for and those that they cannot.

This affects every tradeoff that has risk on one side, not just those that have security on the other side.

Part of this is human nature, but the primary avoidable reason for it is information compartmentalization within organizations and the concomitant misalignment of responsibility and authority.


paul • December 2, 2015 10:55 AM

Among the many problems with the current system is that we don't get to make that kind of choice. The question about accepting a security risk in return for some business opportunity implies a limited-time deviation. So, for example, you might leave a door unlocked after hours when you know there's going to be a crucial delivery and no one is available for reception duty. But what we have now is more like "If you ever think you might need an after-hours delivery, you have to design your building with no locks on any external or internal door."

Daniel • December 2, 2015 3:27 PM

> The reactions I've read call this a sad commentary on security, but I think it's a perfectly reasonable result.

Said the person who is working for an incident response company writing on said company's blog.


blake • December 2, 2015 5:53 PM

Isn't this also how the Nigerian Prince Scam works? This could be the best financial opportunity of your life - are you prepared to facilitate it with some minor security concessions?


Let's define a game called "Symmetric CTO's dilemma" (which probably just ends up a variant of the regular Prisoner's Dilemma). There are M players who are the CTOs, and they each have to provide a service while also using the services of the others. (That is, the CTO of Amazon still has to do his online banking and internet dating, and the CTO of Ashley Madison still wants to order his books online.)

Each CTO can decide to implement their service either SAFE or RISKY; those are the choices of the game. Any RISKY service has an x chance of a major breach, 1>x>0, which will cost everyone F, and so the payoff matrix is:

*for SAFE moves: T - C - n*F

*for RISKY moves: T - n*F

where T>0 is the revenue for each service (assumed equal for simplicity), C>0 is the cost of implementing security (which RISKY moves don't pay), n is the number of failed services (with E(n)=Nx where N of the M players chose the RISKY move), and F>0 is the cost of the fallout of a single failed service (assumed to be even across all players).

So in these terms, the study suggests that T being "the biggest deal of their life" will encourage more RISKY moves regardless of the value of F? F and C are probably both also large, since it's the biggest deal of their life, but presumably x should matter?
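blake's payoff matrix above is easy to check numerically. A minimal Monte Carlo sketch of the game (all parameter values are invented for illustration):

```python
import random

def simulate(m_risky, T, C, F, x, trials=100_000):
    """Average payoffs for one SAFE and one RISKY player in blake's game.

    Each of the m_risky RISKY services suffers a breach with probability x;
    every player pays F per failed service, and SAFE players also pay the
    security cost C.
    """
    safe_total = risky_total = 0.0
    for _ in range(trials):
        n = sum(random.random() < x for _ in range(m_risky))  # failed services
        fallout = n * F
        safe_total += T - C - fallout
        risky_total += T - fallout
    return safe_total / trials, risky_total / trials

# Invented numbers: 4 of the M CTOs go RISKY.
safe_pay, risky_pay = simulate(m_risky=4, T=100, C=10, F=30, x=0.2)
```

Since both payoffs subtract the same fallout in every trial, RISKY beats SAFE by exactly C regardless of x and F, which is what pushes this toward a Prisoner's Dilemma: individually rational RISKY play leaves everyone paying the collective fallout.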

Dirk Praet • December 2, 2015 6:04 PM

@ Daniel

> Said the person who is working for an incident response company writing on said company's blog.

Trolling again, are we? This blog has been around from way before @Bruce joined Resilient.


tyr • December 2, 2015 7:09 PM


Having been around when Nevada had Resume Safe Speed signs on the highway (no speed limit) and if you had an accident they fried you for liability (you weren't at a safe speed, obviously), the arguments for speed limits are usually bogus. What higher-speed accidents do is cause more damage to drivers and passengers, while causing fewer accidents.

So you get a trade-off and have to strike a balance at some point. Security is just like that: the more you are exposed, the worse you get burned; the less exposed you are, the lower the probability of being burned by a breach. You can't make it go away, so you have to decide how much to risk.

The second part of this is that you might feel perfectly safe at 165 MPH; that isn't how your passengers feel about it. That's a lousy business model when you are exposing others to risks they can't control. That's where the legal penalties should come in, to promote reasonable security measures.

If you keep your reckless behaviors personal, evolution will apply reasonable curbs; society steps in when you start risking others.

Robert A. Schweizer • December 2, 2015 10:14 PM

I am a doctor. A solo practitioner. We are being forced to computerize.

I do not mind as the pros look greater than the cons.

But one of the cons of HIPAA (got Google?) is that security is favored over "business flexibility." Ease of use saves lives, while good security keeps Ukrainian hackers (or is it Argentinian this week, or is my name calling bigoted?) from knowing you have bunions. I fail to get the hospitalization reports for patients who show up at my office, which I regularly got in the past, because I am not registered with that hospital system. Or some such BS. Someone's going to die. Likely they have.

Having said that, I read this blog as part of my constant effort to protect the server I have bolted to the floor in a steel cage with cameras and motion detectors protecting the encrypted data in the encrypted drives within. Etc, etc. Shoot me. Thanks for reading my rant. Back to the ICD-10 conversion.

HJohn • December 3, 2015 8:55 AM

I *love* how in almost any conversation, a couple of people show up and take a reasonable statement and paint it in the most ridiculous extreme. Bruce has never said to ignore security for any opportunity, just that security isn't the only consideration and that calculated risks may be worthwhile if the potential payoff is high enough.

Note "calculated," not any, and certainly not some ridiculous bajillion-dollar scam (which is in no way a trade-off, since the odds of making even a dollar are nil).

We all do this every day. How many people drive 20 mph under the speed limit on the Interstate? If security is the only consideration, shouldn't they? After all, it will significantly decrease the risk that they will perish in an accident. If you say "but that would take too long to get there and wouldn't be worth it" you've just discarded some security (safety) in exchange for a benefit, and have in essence made an economic trade off.

We can always come up with some ridiculous extreme on either side (go 20 mph for safety, or raise it to 120 mph for efficiency), but that misses the point. Choosing 65 or 70 is balancing the tradeoff between safety and speed, just like making security decisions based on risk/rewards is making a trade off between cost and profit. Sure, either side could be painted in a ridiculous extreme as well, but that is not the point.

blake • December 3, 2015 10:33 AM

@HJohn

> Bruce has never said to ignore security for any opportunity, just that security isn't the only consideration and that calculated risks may be worthwhile if the potential payoff is high enough.

Yes, however this is where the survey falls short. The survey doesn't actually indicate whether the CTO decisions were calculated, or what the calculations were, or where the CTOs put the thresholds. There's nothing to tell whether they're making considered rational decisions or whether they're just chasing the big deal. You can give them the benefit of the doubt, but that's also an assumption.

Ruslan Kiianchuk • December 3, 2015 12:03 PM

This is just another demonstration of the cognitive bias called "loss aversion": "people's tendency to strongly prefer avoiding losses to acquiring gains".

The rest is a matter of phrasing. In the second question they changed it to "potential security threat", which made the loss feel less concrete to people. I'm quite sure that with "significant security risk" phrasing the results would be different.

mark • December 3, 2015 12:09 PM

There's another factor at work here, also: all of this is based on what *managers* perceive as the big gain, as opposed to end users. Give end users a way they're willing to work with to add security, and it's no big deal. Let managers design it without ever talking to end users, and *esp.* never let architects and developers talk to them, and you're guaranteed to get stuff where the end users' feelings range between dislike and hate.

I once worked for the Scummy Mortgage Co, and I think it was Collections who had the (mainframe) software designed that way, and the people working in that dept avoided using it until the end of the deal, when the data had to be entered.

mark

Edwin • December 3, 2015 5:26 PM

Lots of comments talk about risk, but almost every one of them does so in a way that sort of fuzzes over the precise meaning of the term. Specifically, people tend to equate risk with the cost or impact if something bad happens (like a data breach). The two are not the same, and if you make decisions based on this equivalence, you'll make really bad decisions.

A useful definition of risk, and one that is almost universal in my field, is:

Risk = Probability * Impact

In other words, risk is the probability of something bad happening, multiplied by the cost/impact of that something. So if the cost is very high (such as the kind of data breach suffered by Sony), but the probability is correspondingly low, the risk is only moderate.

Apply that to this case. Say there's a really profitable deal, but there's a chance of a security breach that would ultimately cost 10 times the amount of profit on the deal. If the chance of a breach is less than 10%, it makes sense to go ahead with the deal. The benefit is X (the profit) with a 100% chance; the risk is p *(10X), with p being the probability of a breach. If p is less than 0.1 (or 10%), the benefits outweigh the costs. Granted, this is an overly simplistic case, but it illustrates the basic point.

You can't make decisions based purely on trying to drive the probability of a security failure to be lower. You have to consider the costs of doing so, and the benefits you're giving up, as well as the current probability. Maybe a relatively high probability of a security failure (and 10% can be, in many situations, considered very high) is still worth accepting.
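Edwin's break-even argument can be written out in a few lines of arithmetic (a sketch; the dollar figures are invented):

```python
def expected_net(profit, breach_prob, breach_cost):
    """Expected value of taking the deal: certain profit minus expected breach cost."""
    return profit - breach_prob * breach_cost

X = 1_000_000  # invented profit on the deal
for p in (0.05, 0.10, 0.20):
    # With a breach cost of 10X, p = 0.10 is Edwin's break-even point:
    # below it the deal has positive expected value, above it negative.
    print(p, expected_net(X, p, 10 * X))
```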

LessThanObvious • December 3, 2015 5:50 PM

Long ago I worked for a company and I really cared about the welfare of the company. I had countless security arguments and lost 90% of them, because business priorities always trump potential security risks. After a long time in that role I stopped arguing the cases that I knew were losers. It's easier to give people what they want when you know it's what, in the end, the business wants. The side effect was that I no longer felt I served any meaningful purpose. The stress of contention and argument, no matter how fruitless, was better in the end than passive compliance, which is just demoralizing and purposeless.

Chris • December 4, 2015 8:11 AM

@ Edwin

"In other words, risk is the probability of something bad happening, multiplied by the cost/impact of that something. So if the cost is very high (such as a the kind of data breach suffered by Sony), but the probability is correspondingly low, the risk is only moderate."

This would be the case if security breaches were a random walk in the park, but the Sony breach, according to the good guys, was targeted.

Chris • December 4, 2015 8:26 AM

@ mark

"Let managers design it, without ever talking to, and *esp.* never let architects and developers talk to end users, and you're guaranteed to get stuff where the end users' feelings range between dislike and hate."

Sometimes it's best not for developers to talk to end users, especially when you're costing down on not only development but also labor. It's less about liking it than paying up when it's all said and done. The best architects, IMHO, project their ideals.

Cats Think You Are A Cat • December 5, 2015 12:28 AM

> When asked about their preference if they needed to choose between IT security and business flexibility, 71 percent of respondents said that security should be equally or more important than business flexibility.

> But show them the money and things change, when the same people were asked if they would take the risk of a potential security threat in order to achieve the biggest deal of their life, 69 percent of respondents say they would take the risk.

There are extensive, well replicated studies in the cognitive behavioral field of psychology showing some of these approaches people natively have to risk taking...

The fact remains: in computer security *risk continues to not be monetized*.

Slogan: monetize your risk.

What insurance companies have long made into a science needs to be performed routinely in IT Security departments as well. This is certainly the future.

Until then, teams are merely talking abstract, theoretical information to upper management and the board. Their budgets will not be in line with the threats they face.

And as Sony said, before they got lobotomized, "We are not going to spend ten million dollars to secure one million dollars of goods".

http://mashable.com/2014/12/05/sony-hack-infosec-comments/#Br3TMTx3Jiq6

In 2007, Sony's executive director of information security said in an interview with CIO that he wasn't willing to put up a lot of money to defend the company's sensitive information. He also talked about how he convinced a security auditor, a year before in 2006, that the company's use of very weak passwords wasn't such a big deal.
"It's a valid business decision to accept the risk," said Jason Spaltro, who is now Sony Pictures' senior vice president of information security, in the interview. "I will not invest $10 million to avoid a possible $1 million loss."

That is what the guy said in public. The reality is that ratio is far, far worse. And it is entirely a guessing game. One which is significantly biased towards wishful, unrealistic thinking.

And every organization - just about - does it.

Cats Think You Are A Cat • December 5, 2015 12:36 AM

@LessThanObvious

> Long ago I worked for a company and I really cared about the welfare of the company. I had countless security arguments and lost %90 of them because business priorities always trump potential security risks. After a long time in that role I stopped arguing the cases that I knew were losers. It's easier to give people what they want when you know, it's what in the end, the business wants. The side effect was that I no longer felt I served any meaningful purpose. The stress of contention and argument, no matter how fruitless, was better in the end than passive compliance which is just demoralizing and purposeless.

That is, more or less, the life story of countless IT Security department workers.

It is a near impossible problem. Getting into the field, you have to engage in very extensive training. And that is entirely different than what is required to convince the board to show you the money.

Which is, by itself, an entirely different field, and one just as obscure and difficult to master.

Very few products at this time provide monetization of risk for IT Security departments. This trend is changing, but slowly. The value is enormous, because it means much larger budgets.

And it means getting that security solution you need implemented.

Cats Think You Are A Cat • December 5, 2015 12:45 AM

> In the IT world, we need to recognize this reality. We need to build security that's flexible and adaptable, that can respond to and mitigate security breaches, and can maintain security even in the face of business executives who would deliberately bypass security protection measures to achieve the biggest deal of their lives.

There is a golden ratio between usability and security. Always been there, always will be there. Reaching it should be the goal of every product and service.

The lazy, inexperienced tendency is to throw this to the wind. It requires architecting and planning to come up with such solutions. Much easier to just mindlessly go, "Just cut off all access and iron up all doors".

This remains a problem in every aspect of security, including the justice and overall legal systems.

You can see it to the extreme where authoritarian powers are able to abuse their positions, passing laws that are static and incapable of handling extremely dynamic situations.

Cats Think You Are A Cat • December 5, 2015 12:57 AM

@Clive Robinson

You really have to treat everyone 'as if' they are psychopaths at these levels. But they all have pain points. And part of the job is well beyond the technical. You have to build rapport and speak in terms they will understand.

We mitigate and find vulnerabilities in computer systems, we are certainly capable of doing so with people. Very much of a solid plan involves the weaknesses inherent in humanity.

Cats Think You Are A Cat • December 5, 2015 12:58 AM

@Ruslan Kiianchuk

Yes, exactly. Cognitive behavioral psychology, FTW. (Also, to other posters who have brought this up.)

Cats Think You Are A Cat • December 5, 2015 1:04 AM

@Edwin

Yes, risk does equal probability times impact, and this is deeply ingrained in basic IT Security, as it is exactly how security vulnerabilities are measured in threat ratings.

e.g., https://en.wikipedia.org/wiki/DREAD_(risk_assessment_model)
https://en.wikipedia.org/wiki/Threat_model

etc

But, this is very far from actually getting to the insurance industry's level of monetizing risk.

A very good set of work is being done under this model:
https://en.wikipedia.org/wiki/Factor_analysis_of_information_risk
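As a rough sketch of what FAIR-style monetization looks like in practice (all figures invented; real FAIR work uses calibrated estimates and richer distributions):

```python
import random

def annualized_loss_exposure(events_per_year, loss_low, loss_high, trials=50_000):
    """Monte Carlo estimate of expected annual loss.

    The event count is approximated by its mean; loss magnitude is drawn
    uniformly from [loss_low, loss_high] in each trial.
    """
    total = 0.0
    for _ in range(trials):
        total += events_per_year * random.uniform(loss_low, loss_high)
    return total / trials

# Invented figures: roughly 0.3 breaches/year, each costing $0.5M-$2M.
ale = annualized_loss_exposure(0.3, 500_000, 2_000_000)
# A control costing less per year than the ALE reduction it buys is worth funding.
```

The point of the exercise is the last comment: once risk is expressed in dollars per year, a security budget request becomes an ordinary investment decision rather than an abstract appeal.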


Cats Think You Are A Cat • December 5, 2015 1:15 AM

@CJD

> As the saying goes in security, you can pick any 2 of the following 3:
> -low cost
> -easy to use
> -secure
> If it is low cost and easy to use, it wont be secure.
> If it is secure and low cost, it wont be easy to use.
> If it is secure and easy to use, it wont be low cost.

This is not the way to approach security product design. It is not the way to approach security architecture.

This can mean you have "mission impossible". But, not really. Your specifications from the beginning should be worked out.

Low cost, easy to use, and secure. It most certainly can be done. If you actually plan your product and do not skip that all important phase.

If you are going to be an artist, don't shoot low and for crap. Shoot to be a Michelangelo, a DaVinci.

Otherwise, why are you even in the game?

These things said, "cost" is really irrelevant. The real ratio is usability and security.

"Cost" is typically ephemeral. For instance, a good security architect is obviously someone of great value. They have strong experience and the capacity for exactly the result of strong usability and strong security. They can only be underpaid, if paid at all. But, they work for the product, for the art of it.

There are plenty of products out there which can be immediately raised which prove these points. For instance, open source, free products that provide strong security and literally no cost. Apache is an excellent example. Yes, they have had some security bugs, but overall, especially considering their vast threat landscape, they have produced an extremely solid product.

This is separate from the singular and critical issue of security departments tending to be deeply under resourced. What is very true in that perspective is you simply have to be able to obtain appropriate funding to the risk.

Many of the best products do require expenditure.

But from a design level, really, "cost" is not in the equation. The real difficulty is security which is seamless. Which works, regardless of how the user is going to operate.

The usability factor takes in the very serious reality of the limitations of human fallibility.

Wael • December 5, 2015 1:41 AM

@Cats Think You Are A Cat,

> And as Sony said, before they got lobotomized, "We are not going to spend ten million dollars to secure one million dollars of goods".

Yea, that was a remarkable thing to say in public! "Lobotomized" sounds like a probable diagnosis!

Cats Think You Are A Cat • December 5, 2015 2:20 AM

@Wael

> Yea, that was a remarkable thing to say in public! "Lobotomized" sounds like a probable diagnosis!

Hey, I like Sony. We use PS3's for most of our home video services, and have been loving Fallout 4 & Bloodborne on PS4. The Interview was a fantastic movie.

Sony is doing fine, and North Korea is still a laughable "worst case for nations" mess. *shrug*

Nevermind great fodder for deeply needed budgets across the board.

Free nations, from South Korea to Germany need to hunker down and focus on *defense*.

They are ahead, and will certainly remain so, as long as they do not screw it up with overly scoping surveillance programs against their own people. ;-)

Wael • December 5, 2015 2:42 AM

@Cats Think You Are A Cat,

> Hey, I like Sony.

And so do I. I know many of "them" personally! Some of the hardest working crew I encountered!

> The Interview was a fantastic movie.

Oh well, we can't agree on everything :)

Clive Robinson • December 5, 2015 10:42 AM

@ Cats...,

Whoa there, too many posts in one go :-)

Anyway, to reply to some points you raise,

> Exactly as insurance companies have long made into a science needs to be performed routinely in IT Security departments. This is certainly the future.

The way the "insurance companies" go about it is inappropriate for ITsec. Anyone who "borrows" from the insurance risk analysis field of endeavour is very likely to come unstuck if they try to apply it to information risk analysis, unless they have quite in-depth knowledge and know where the gotchas are, and importantly why.

One reason for this is a series of underlying assumptions that, whilst they appear to be true in our tangible world of physical events, do not hold even close to true in the intangible world of information events.

As I've said before, physical security is in effect a subset of information security and should be treated that way in our thinking. This is because not all the rules of physical security can be carried upwards into information security, and there are a whole load of information security rules that have no physical security analogues.

Various people have talked of "an army of one" when talking about information attacks but appear not to want to take it further, which will cause problems down the line.

Take one assumption in our physical security model that does not apply to information security: "locality". The physical risk argument is that one person can only be in one place at any one time, therefore the amount of damage they can do is limited by both time and distance. This thinking gives rise to the idea that risk is somehow uniform across an area and probabilistic in nature, which it is not, in a number of ways, one of which @Chris mentioned above. This is despite the "low hanging fruit principle"; targeted attacks do apply when calculating the risk for an individual (ie lower premiums for better locks etc).

With a little thought it is easy to see that one person can be in many places at the same time with information attacks. That is, they can develop "attack scripts" that can be deployed against multiple targets simultaneously. It's similar to a "cluster bomb attack"; however, it differs in many ways due to limitations on matter, energy and the duplication process. That is, it takes considerable inputs or cost of energy and matter in the physical world to make a cluster bomb, which is a significant limitation on what can be achieved. But in the information world, once the design phase is complete, the duplication and deployment phases have little energy input, and that which there is mainly comes from the defenders, not the attackers, because the attack subverts the defenders' resources to continue the attack.

I could go on at length about other issues, but I hope the above is sufficient to get the point across about the fundamental asymmetry of physical and information attacks, how it applies to risk analysis, and how the unknowing transference of physical risk analysis can go horribly wrong when formulating information risk analysis.

Moving on...

> There is a golden ratio between usability and security. Always been there, always will be there. Reaching it should be the goal of every product and service.

Whilst it is more a rule of thumb than a golden rule, it is not the only one of interest; I've said before on a number of occasions "Efficiency-v-Security". That is, the general case for software and communications, for a number of reasons, is that the more efficient you try to make something in any domain, the less secure it becomes in that domain. Hardware has similar issues, but the relationships are more complex.

For instance, with software, the faster you want it to execute, the less it can do, as you are generally giving up CPU cycles in some way; it also opens up the process's bandwidth and transparency. This has the effect of reducing various types of checking, which means that side channels open up and have more bandwidth to leak information. Often you will see that it opens up reverse channels, whereby the bandwidth of error/exception handling allows an attacker to inject errors that propagate backwards against the normal flow of information. Very few software developers are aware of this, thus the general case is that systems are very vulnerable to this attack vector.

Richard • December 6, 2015 3:40 PM

Practicality for the win. I don't really have a comment to give; I just feel the need to support what I believe is the best point of view. Business first, security second, all else down the track.



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.