Security and Function Creep

Security is rarely static. Technology changes both security systems and attackers. But there’s something else that changes security's cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose -- one it wasn't suited for in the first place. And then the security system has to play catch-up.

Take driver's licenses, for example. Originally designed to demonstrate a credential -- the ability to drive a car -- they looked like other credentials: medical licenses or elevator certificates of inspection. They were wallet-sized, of course, but they didn't have much security associated with them. Then, slowly, driver's licenses took on a second application: they became age-verification tokens in bars and liquor stores. Of course the security wasn't up to the task -- teenagers can be extraordinarily resourceful if they set their minds to it -- and over the decades driver's licenses got photographs, tamper-resistant features (once, it was easy to modify the birth year), and technologies that made counterfeiting harder. There was little value in counterfeiting a driver's license, but a lot of value in counterfeiting an age-verification token.

Today, US driver's licenses are taking on yet another function: security against terrorists. The Real ID Act -- the government's attempt to make driver's licenses even more secure -- has nothing to do with driving or even with buying alcohol, and everything to do with trying to make that piece of plastic an effective way to verify that someone is not on the terrorist watch list. Whether this is a good idea, or actually improves security, is another matter entirely.

You can see this kind of function creep everywhere. Internet security systems designed for informational Web sites are suddenly expected to provide security for banking Web sites. Security systems that are good enough to protect cheap commodities from being stolen are suddenly ineffective once the price of those commodities rises high enough. Application security systems, designed for locally owned networks, are expected to work even when the application is moved to a cloud computing environment. And cloud computing security, designed for the needs of corporations, is expected to be suitable for government applications as well -- maybe even military applications.

Sometimes it's obvious that security systems designed for one environment won't work in another. We don't arm our soldiers the same way we arm our policemen, and we can't take commercial vehicles and easily turn them into ones outfitted for the military. We understand that we might need to upgrade our home security system if we suddenly come into possession of a bag of diamonds. Yet many think the same security that protects our home computers will also protect voting machines, and the same operating systems that run our businesses are suitable for military uses.

But these are all conscious decisions, and we security professionals often know better. The real problems arise when the changes happen in the background, without any conscious thought. We build a network security system that's perfectly adequate for the threat and -- like a driver's license becoming an age-verification token -- the network accrues more and more functions. But because it has already been pronounced "secure," we can't get any budget to re-evaluate and improve the security until after the bad guys have figured out the vulnerabilities and exploited them.

I don't like having to play catch-up in security, but we seem doomed to keep doing so.

This essay originally appeared in the January/February 2010 issue of IEEE Security and Privacy.

Posted on February 4, 2010 at 6:35 AM • 40 Comments

Comments

EY • February 4, 2010 8:14 AM

I used to work for a business services provider and we developed many software tools for in-house and external use. As a member of the security team, I reviewed all of them while they were in development. One question I asked each time was "Will this ever be used offsite? Say at a customer's location?" Some were built for that purpose and had adequate controls in the design.
When a tool was designed for in-house use, I would ask "Are you 100% sure that we will *never* provide this outside our walls? Because for that to happen, controls A, B and C need to be built." The howling over what a stickler I was would get loud. I noticed that the more complaints I got, the shorter the timeframe before a client would say "Hey, can I have one of those here?" And when I was asked to re-review, I'd ask my question again...to more howling this time, because now I was "preventing revenue".
Catch-up and function creep...never ending.

Paul • February 4, 2010 8:19 AM

I've heard a very simple saying from another security researcher, Mr. Steve Gibson, who puts it like so: "Complexity is the enemy of security." Basically he says that as we add complexity to any system we increase vulnerability to attack, as there are more avenues available to try to exploit. Function creep adds complexity to any system, no matter how "useful" the new functions are.

Tangential • February 4, 2010 8:19 AM

Scope creep, or inadequate discovery before proceeding to develop a solution.
The reason security theater exists is that the intent is not to protect; security products are no better.

The last paragraph sums it up well, though I would add that the glaring gap is the Cartesian approach to problem solving. This blinders-on approach takes problems in isolation and tries to fix them. When the solution is applied in reality, the interaction with other systems makes it less than perfect. Adequate awareness of what the product is limited to, and under what perfect circumstances it will function properly, is lost/disguised by salespeople.

The varying degrees of perception, formed by attempts to comprehend sales pitches promising a solution to everything, are the root cause.

Very often a new product is sold to address problems with other systems and domains without an adequate understanding of what caused such systems to fail in the first place. This resonates with the user of the failed product, who needs a new product to achieve what they had hoped for in the first place, and the cycle starts.

First Timer • February 4, 2010 8:26 AM

@Cae

Like anywhere.

Security professionals know that security is not a path to a door, but a road that goes on forever. It's about risk management and process, and no system can be stamped "Secure" - you always have to take into account the environment. And anyone who says a product is secure is selling snake oil. Secure where? When? Under what conditions? For what users? Under what network systems?

You could set up a standalone PC in a block of concrete with a nuclear power source and drop it in the Marianas Trench. But if the data it holds is valuable enough, a resourceful enough sub driver with a jackhammer can still get at it.

Rob • February 4, 2010 8:28 AM

There's also the problem of one-size-fits-all within an organisation. A corporate computer system may have the same e-mail and network authentication system for everyone, from cleaners and clerks through to top management. This tends to mean that security is dumbed down a little bit for everybody in the organisation, when some people should have stricter security.

aikimark • February 4, 2010 8:31 AM

>>We don't arm our soldiers the same way we arm our policemen

Actually, we used to do exactly that. We 'converted' hunting rifles and shotguns into military and police firearms. I use 'converted' quite liberally, since there was often little to no modification of the civilian weapon.

Fortunately, that has changed...at least for the military.

Clive Robinson • February 4, 2010 9:09 AM

@ Bruce,

"We don't arm our soldiers the same way we arm our policemen, and we can't take commercial vehicles and easily turn them into ones outfitted for the military. We understand that we might need to upgrade our home security system if we suddenly come into possession of a bag of diamonds."

Don't conflate security for the tangible physical world and the objects within it with security for the intangible information world. The two are very, very different, due to constraints in the physical world that do not apply in the information world.

For example,

"We don't arm our soldiers the same way we arm our policemen"

The reason for this is the physical world constraints of locality and force multipliers.

In the tangible world there are only so many "force multipliers" a single person can have and control. That is, they cannot be an army of one. However, in the intangible world a single person with the right force multipliers can take on the world...

"Yet many think the same security that protects our home computers will also protect voting machines, and the same operating systems that run our businesses are suitable for military uses."

If they were any good, they would be.

But they are not; by and large they all have more holes in them than Swiss cheese (@ Nick P, I'll let you give more details on this ;)

Which brings me onto,

@ Paul,

Steve Gibson, who puts it like so,

"Complexity is the enemy of security."

Is only partly right. If you do it right, complexity can actually be handled very well.

The real cause of the problem is the drive for "efficiency" or "Bang for your Buck".

The more efficient you make a system the more potential there is for insecurity for a whole host of reasons.

If people would wake up to the fact that "Specmanship" by marketing does not improve your life to any real degree, and certainly not your security, then we might have a chance at getting security that works.

The sad thing is we actually know quite a lot about how to make all systems secure, and the majority of the applications that run on them. What we lack is the will to take an "engineering approach" to the problem.

Anonymous • February 4, 2010 9:09 AM

>>..we can't take commercial vehicles and easily turn them into ones outfitted for the military.

Except when we do. http://en.wikipedia.org/wiki/... --Modified, to be sure, but not really a custom design.
So SOMETIMES it is perfectly appropriate, and other times it's not. You really want to have stuff that is appropriate to the mission. Sometimes the "same as everybody else" is appropriate from a cost and performance standpoint.

col • February 4, 2010 9:54 AM

Cf. in the UK: the DVLA database, and the proposed ID card, which seems to garner new purposes by the week.

Dave Page • February 4, 2010 11:45 AM

col, I don't know what "proposed" UK ID card you're talking about - they're already rolling it out across large chunks of the country. It's not a proposal, it's a reality - better get involved with No2ID and help put a stop to it!

Matt from CT • February 4, 2010 12:14 PM

>There's also the problem of one-size-fits-all within an organisation. A corporate computer system may have the same e-mail and network authentication system for everyone, from cleaners and clerks through to top management. This tends to mean that security is dumbed down a little bit for everybody in the organisation, when some people should have stricter security.

Quite true.

How often have folks seen the clerks' security weakened because, while the clerks were OK with it, senior managers didn't like being told what to do and wanted something easier for themselves?

AppSec • February 4, 2010 12:28 PM

@Clive: "In the tangible world there are only so many 'force multipliers' a single person can have and control. That is, they cannot be an army of one"

Chuck Norris would disagree!

I am half torn to disagree with you on the idea that an "army of one" can take down the world in the intangible sense. I see where you are coming from, but part of me still says you need reliance on others' failures to be successful. If those failures are corrected before your success, then you won't be able to take down the world.

As I said, it's a partial disagreement and I see where you are coming from.

PackagedBlue • February 4, 2010 12:30 PM

Security and the function creep, ah, so that is what is up with Google and its new creepy partner, the NSA.

While we are at it, let's have the FBI, etc. have a poll-lice, "police only" web interface for ISPs, and data retention.

All this fun news, out today!

In other news, the stock market may drop. Fancy that.

I always knew that error that IBM ThinkPads display with older 2.5-inch hard drives meant something. 2010 ERROR.

Jim A. • February 4, 2010 12:49 PM

>>..and we can't take commercial vehicles and easily turn them into ones outfitted for the military.
Except when we do. http://en.wikipedia.org/wiki/... They were modified from the commercial versions with 24-volt electrical systems, but otherwise pretty similar. Sometimes it DOES make sense to use what's already out there, and sometimes it doesn't.

sidelobe • February 4, 2010 1:05 PM

So, when do I get to reclaim my Social Security Number as being nothing but a bank account number, and not something that can be used to open credit card accounts?

Shane • February 4, 2010 1:15 PM

"I don't like having to play catch-up in security, but we seem doomed to keep doing so."

It's also a terrible way to play Go. Sente is always best, but security, especially recently here in the US, does seem doomed to forever be in Gote.

That is why the terrorists always prevail (assuming their success is determined by the lunacy of our subsequent reactions to their attempted plots, successful or not): they are always in Sente. They always have the initiative, that 'free move' as it were, the one move that can be made after extensive review of all current board positions and prior reactionary moves by their opponent, knowing that their next move can again force another reaction out of their adversary, thus netting them another move in Sente.

Rinse, repeat.

Isn't the definition of insanity along the lines of: a mindset that attempts to solve a problem repeatedly using the same methods while each time expecting different results?

Sounds like the DHS/TSA mission statement.

You can't win a war on anything when the opponent gets to choose every move you make.

Shane • February 4, 2010 1:44 PM

@Clive: "[...] we actually know quite a lot about how to make all systems secure and the majority of the applications that run on them. What we lack is the will to take an 'engineering approach' to the problem."

Seems to me what we lack here is the positive financial motive to do so, rather than the 'will'. Nearly every security-conscious developer I know (myself included) would love the opportunity to close the myriad vulnerabilities we see in all of the systems we not only use but administer. Unfortunately, the US (at least) is run mainly under the principle that every citizen is unimaginably greedy, and that that very trait trumps everything else (think of how we have to sell government regulations by making adherence to them economically appealing). The corporations are always running from the wolves in terms of getting things to market as quickly as possible, so there is nearly zero incentive from a corporate standpoint to waste extra money on retroactively re-engineering everything they've built, and every foundation they've built it on top of, simply to appease a minority (because face it, if security were really the deciding factor in a software purchase for the majority of users, we wouldn't be having this discussion).

Take Al Capone, for example. He spent most of his life building up a criminal empire. What took him down? Tax evasion. Moral? You can cause all the trouble you want, but don't you dare f*ck with a company's profit margins. Our government? A company like any other, more so now than ever before. A publicly traded commodity with a publicly elected board.

Since the corporations and their profit margins are the real driving force behind US policy, especially after the recent Supreme Court ruling (*waits for the four horsemen*), it doesn't seem to me that an engineering approach is required; seems to me an economic one is.

Having said all this, though, I'm reminded of how zealously I've contested Bruce's idea of pushing to force the market to figure end-user security out, as my idea of 'educating all the users' seems to be overly idealistic, but I stand by it. I still think that ultimately the corporations will always win the government's heart over the will and/or well-being of the populace, and federal regulations aren't written in stone (i.e., again, the recent Supreme Court ruling), so I believe, again, the most effective way to force the overhaul of an entire industry is to get the consumer to demand it.

It seems to be working (albeit very slowly, hopefully not deathly slowly) for the 'Green' movement, as it were, here in the US. There has been a real paradigm shift over here for the middle class, finally willing to acknowledge and confront how wasteful we are as individuals, which is translating into companies moving towards 'greener' tech simply to drive sales ('cause we all know how much big daddy business cares about mother earth after the divorce was finalized). If we ever kill our dependence on oil, I don't think we'll have our government, or the corporations, to thank for it. If that were the case, methinks we'd have killed that dependence 30 years ago, with the advent of all manner of patents for alternatively powered vehicles, most of which were bought and filed away by the biggest oil and automotive companies at the time and never heard from again.

BF Skinner • February 4, 2010 2:01 PM

@Clive: "Is this a school for trainee 'hit persons'?"

Nah...just a personal philosophy ain't it?

If you gonna do a thing make it solid.

Dave Eger • February 4, 2010 2:17 PM

"Far too often we build security for one purpose, only to find it being used for another purpose"

I was worrying about this same thing as I read about the budget perils of Boeing's virtual border security system this morning.

David • February 4, 2010 3:23 PM

@Clive: There are good reasons why home security might not be good enough for voting machine security.

There is no such thing as perfect security, as long as you want to retain the option of actually doing something with, or referring to, what's being protected. Security is about imposing a cost on the attacker in some form or another (or many) that will slow and/or deter the attacker. It involves cost to the defender.

Therefore, the amount of security appropriate depends on the value of what's being protected. It makes no sense for me to spend ten thousand dollars securing my laptop, when there's little personal or un-backed-up data on it, because I can replace it for maybe a thousand dollars in cash and inconvenience. If I can impose a cost of a thousand dollars on a would-be attacker, that's more than the value of my laptop to him or her.

Fundamentally, a voting machine is much more valuable than my laptop. It's far more valuable to an attacker, since it can change the result of an election. It's far more valuable to the owner. Therefore, the proper security measures are likely more cost (in money and hassle) than I'd want to use on my laptop.

It would be nice if I could get total security for my laptop at a reasonable price, and hence the same security would work for voting machine and laptop, but that isn't realistic.
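David's trade-off can be put in rough arithmetic. A minimal sketch, with all numbers hypothetical: a defence is only rational if it costs the defender less than the asset is worth to them, and it only deters a rational attacker if the cost it imposes exceeds the asset's value to the attacker.

```python
# Illustrative cost/benefit check for a security measure.
# All dollar figures below are invented for illustration only.

def worthwhile(defender_cost, asset_value_to_owner,
               attacker_cost_imposed, asset_value_to_attacker):
    """A defence is affordable if it costs less than the asset is worth
    to its owner, and deterrent if the cost it imposes on the attacker
    exceeds what the asset is worth to them. Both must hold."""
    affordable = defender_cost < asset_value_to_owner
    deterrent = attacker_cost_imposed > asset_value_to_attacker
    return affordable and deterrent

# A $10,000 defence on a $1,000 laptop fails the affordability test:
print(worthwhile(10_000, 1_000, 1_000, 500))            # False
# The same spend on a voting machine, far more valuable to both sides,
# can pass both tests:
print(worthwhile(10_000, 1_000_000, 50_000, 20_000))    # True
```

The point of the sketch is that the right-hand arguments change with the asset, so the same security spend can be irrational for a laptop and cheap for a voting machine.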

mcb • February 4, 2010 4:35 PM

@ clive robinson

"Hmm a lot of 'Double Tapping' Is this a school for trainee 'hit persons' ;)"

Instead of letting artificial gender neutrality muddle the meaning of the phrase "hit man" I much prefer the terms "assassin" and "assassinette". Besides, "double taps" are for ninnies who can't manage a "hammer" - two trigger presses for one sight picture. ;)

BF Skinner • February 4, 2010 6:03 PM

@mcb: "assassinette"

Is where I keep my murderous child.

Double tap is rule 2 and we've got to have rules otherwise there's anarchy.

Nobody • February 4, 2010 6:04 PM

Surely the security of the card isn't the point? Even if the card is perfect encrypted biometric magic smoke, the idea is to compare it to a secret list, which seems to consist mostly of babies, 8-year-olds, dead senators and people from Nixon's enemies list.

At the moment matching one of these names means a bit more hassle at the airport - what's going to happen when it means you can no longer use a credit card, enter a government building, pass a traffic cop or open a bank account?


Stephanie • February 4, 2010 8:40 PM

Thanks Bruce, great post.

I'm guessing it's gonna depend on how we define terrorists. Elderly Dominican nuns in Maryland ring a bell?


So is the purpose of the chip in the new DL that Homeland and law enforcement promote to keep track of us when our citizen observations are farmed out by the FBI to a federal contractor? Then the 9-1-1 fellowship, the NIMS group, and the FEMA Sprint/cellphone network folks can more easily keep tabs on peace activists?

Clive Robinson • February 4, 2010 10:53 PM

@ AppSec,

"As I said, it's a partial disagreement and I see where you are coming from."

Have a little more of a think on it; pretend you are a "wily cracker", and you never know ;)

For those that,

"Don't quite see what the issue is"

And we nearly all have a "Pointy Haired Boss" who doesn't, in the "management train" above us, most easily characterised by their mantra,

"I want solutions not problems!"

You sometimes "have to line the ducks up for them"... (but please don't give them a machine gun, they'll only trip with the safety off ;)

The first trick they have to realise is that in the ICT world it's very much all bits and bytes (yes, we are talking about that Pointy Haired Boss).

The cost of "bits and bytes" once you have them is all in:

1, Storage,
2, Communication,
3, Processing (CPU cycles).

And in most cases it is a tiny cost at best.

For an attacker the cost is even less, as they use other people's resources (which I'll come back to later).

The three basic constraints of the tangible world that really don't apply to the intangible world, and ruin our day (after day after day, ad nauseam), are:

1, Locality.
2, Near-zero-cost "force multipliers".
3, Near-zero-cost "duplication".

In reverse order,

Most of us are now aware that the cost of duplicating "bits and bytes" is negligible. So small we generally only measure the time aspect in irritation, as in "why is it taking so long".

For a "wily cracker" the time to copy is not normally one they are overly worried about; it's "staying below the radar" that's important. As in the intangible world it's possible to steal without the owner realising, and the value is usually higher the longer the owner remains in ignorance.

In the tangible world we humans as individuals are very, very weak. That is why we only started to appear different from the other primates when we started to use "tools", or "force multipliers".

Your Pointy Haired Boss probably understands the "30lb lump hammer principle" (especially when it comes to nuts ;)

The military have an expression for it, "Shock and Awe" (which ended up in management speak).

Or more correctly, "the projection of overwhelming force" ('walk softly and carry a big stick' ;)

Now it is not normally just "overwhelming numbers" that count but the tools those numbers have at their disposal (the so-called "nuke option", which led to the MAD doctrine of the cold war).

In the tangible world "tools" have design, production and use costs. However, in the intangible information world, in general it is only the design cost that is non-negligible.

For the "wily cracker" this is even more so, as the production (duplication) and use (CPU cycles) costs are usually paid for by the target of their attentions (or a previous target etc), not by them.

Then there is the issue of locality. In the tangible world, to misappropriate an object you have to be able to get your hands on it.

This can be either directly or through others but they need to be there to do it.

Thus the amount of damage you can inflict is limited to where you or your expensive force multipliers are located.

For the "wily cracker" there is no distance/cost metric to worry about. They can be at any point in the world, likewise the target. The only issue is communications, to get the force multiplier to the target (and, if required, the bits and bytes back again).

A high-visibility example of this is botnets engaged in a DoS attack. Usually there is little or no warning that an attack is about to happen, and when it does there is generally nothing that can be done (without the considerable help of others).

Now a "wily cracker" can also use this "fire and forget" mode in a covert way to enumerate a target or find a way around a "communications block".

One way is a targeted covert virus to get past an "air gap" defence. The only difficulty is arranging a covert return path (and yes, I've a solution for this that will scare the likes of Google).

Thus it can be seen that the usual hidden assumptions that affect the way we think about tangible goods (gold bars etc) in the ordinary way do not apply to intangible goods (bits and bytes).

It's also why the proper security maxims for the intangible information world apply to the tangible physical world, but not the other way around...

This incorrect idea of trying to "borrow and translate" tangible-world security into the intangible world is one fundamental reason why information security generally fails.

@ Shane,

You highlight another fundamental reason with,

"Unfortunately, the [world] is run mainly under the principle that every citizen is unimaginably greedy, and that that very trait trumps everything else... ...The corporations are always running from the wolves in terms of getting things to market as quickly as possible"

Greed is the root cause of "Marketing", which gives us "Specmanship", and of "Shareholder Return", which gives us "efficiency".

And yes you are almost spot on with,

"I still think that ultimately the corporations will always win the government's heart over the will and/or well-being of the populace... ...so I believe, again, the most effective way to force the overhaul of an entire industry is to get the consumer to demand it."

The question is how...

I have said in the past that a look at the motor car industry is a good place to start.

In the US you had "lemon laws" but an analysis shows that it is actually "safety ratings" that do the job. The European New Car Assessment Programme (EuroNCAP) is one example, where a series of independent tests are carried out and a "star" rating is applied.

It is very clear that these ratings have driven the development of car safety systems over the years as the tests evolve the industry responds.

And, apparently oddly, the result has usually been a saving for the manufacturers. The reason for this is the same as with Quality Assurance: once the process is built in across the entire production process the benefits accrue rapidly (however, occasionally it does fail; see today's news about the 2 billion cost of a recall on cars simply due to a tiny shim being left out of the brake systems).

Which is why I keep banging on about how, from the management perspective, the Security Assurance process is the same as the Quality Assurance process (which I'm waiting to see if others will pick up on). However, this means we need usable "metrics" at all levels (as is found in manufacturing, where feedstock-rate metrics on the shop floor get converted from level to level and become a "cost/time" metric at board level).
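The level-to-level metrics conversion Clive describes can be sketched in miniature. All team names, counts, and cost figures below are invented for illustration: shop-floor security findings roll up into a single cost/time figure a board can act on.

```python
# Hypothetical roll-up of per-team security findings into a board-level
# cost/time metric, analogous to QA metrics in manufacturing.
# Every name and number here is invented for illustration.

teams = {
    # team: (open findings, average remediation hours per finding)
    "web app": (12, 6),
    "network": (4, 10),
    "desktop": (25, 2),
}

HOURLY_COST = 90  # assumed fully loaded engineering cost per hour

def board_level_metric(teams, hourly_cost):
    """Convert per-team finding counts into total remediation hours
    and a single dollar cost figure for board-level reporting."""
    total_hours = sum(count * hours for count, hours in teams.values())
    return total_hours, total_hours * hourly_cost

hours, cost = board_level_metric(teams, HOURLY_COST)
print(f"{hours} engineer-hours, ${cost:,} to clear the backlog")
# → 162 engineer-hours, $14,580 to clear the backlog
```

The value of the roll-up is exactly the one Clive claims for QA: once the low-level counts are convertible into cost and time, security competes for budget in the same language as everything else on the board's agenda.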

However there is a fly in the ointment...

I noted that there were three basic issues,

1, Locality.
2, Near-zero-cost "force multipliers".
3, Near-zero-cost "duplication".

That differentiate the intangible and tangible worlds. Well, it applies to the manufacturing world of intangible and tangible "goods" as well.

I noted above that,

In the tangible world "tools" have design, production and use costs. However, in the intangible information world, in general it is only the design cost that is non-negligible.

And this applies to manufacturing.

The reason that QA gained traction was "return/rework costs": they are very asymmetrical. That is, the cost of manufacture and distribution is optimised towards the consumer and not the other way.

In fact the cost of a single warranty "full return" on FMCE (fast-moving consumer electronics) can easily be as high as the profit on fifty items... (which is why you generally get replacement and local landfill, which is what the various WEEE legislation around the world is trying to cure).

Not so the software industry. They have "Patch Tuesday", where the only cost to the manufacturer is the rework, design, test and server farm. The rest of the cost and risk is borne by the consumer...

Which is one of the reasons why I have in the past gone out on a limb and suggested that outbound (only) data costs on the Internet would be good for security...

Yes I know I can hear the burners being lit for a flame war even as I type this but rather than accuse me of wanting to kill the dream, think of it as a suggestion to save it from exploitation.

If an end user had to pay for all the spam etc. from their "owned" PC at the end of the month, it would encourage them to reduce the cost. This would knock up the line fairly quickly, and security would suddenly be a very real cost issue for the consumer. Likewise, "Patch Tuesday" would have a very, very high cost for the software manufacturers, putting them back into the asymmetric cost model and forcing them to spend more time on "engineering" their product, not "artisan"-bolting another patch on (have a look at the history of Victorian boiler explosions and what they did for making mankind move from artisan production to science, and thus engineering manufacturing).
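Clive's outbound-metering idea can be illustrated with toy numbers (the tariff and traffic volumes below are entirely hypothetical): once outbound traffic is billed, a botnet infection stops being invisible and shows up directly on the owner's monthly bill.

```python
# Toy model of metered outbound data. The rate and the gigabyte
# volumes are invented purely to illustrate the incentive effect.

RATE_PER_GB = 0.50  # hypothetical outbound tariff, dollars per GB

def monthly_bill(legit_gb, spam_gb, rate=RATE_PER_GB):
    """Total outbound charge for the month; the spam share is the
    visible cost of the infection to the machine's owner."""
    return (legit_gb + spam_gb) * rate

clean = monthly_bill(legit_gb=2.0, spam_gb=0.0)      # $1.00
infected = monthly_bill(legit_gb=2.0, spam_gb=60.0)  # $31.00
print(f"clean: ${clean:.2f}, infected: ${infected:.2f}")
```

The design point is the feedback loop, not the numbers: a thirty-fold jump in a bill gets noticed, whereas today the same spam traffic costs its sender's victim nothing they can see.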

It would also kill off spam and a whole load of other nasties. As well as quite a few marketing models.

However, it would also kill off free software and a whole host of other benefits. It would also have the downside of protecting other entrenched and parasitic monopolistic marketing models.

So my viewpoint is that we have to find another way of both getting the consumer to demand security and getting the market to offer secure alternatives.

Which brings me back to the likes of EuroNCAP etc.

Which means that, unlike US legislation that is "You will do this, this way", we need EU-type legislation whereby there is a requirement to meet the tests of an independent standards body (CENELEC, ETSI etc.) who bring in test requirements as and when required.

Clive Robinson • February 5, 2010 12:04 AM

@ David,

"Security is about imposing a cost on the attacker in some form or another (or many) that will slow and/or deter the attacker. It involves cost to the defender."

I covered a bit of this in my reply to Shane above.

There is another issue with the three constraints on the tangible world I noted,

1, Locality.
2, Near-zero-cost "force multipliers".
3, Near-zero-cost "duplication".

That differentiate the intangible and tangible worlds.

It is the aspect of "mutual security" and "liability to others".

In many parts of the world you require not just a "licence to operate" but the machine must be "certified as compliant" to be used.

It is one of the reasons we have zoning laws and to a degree Environmental Protection laws.

For instance, an aircraft and a ship must both have "worthiness certificates" before they can be used.

Your laptop should, by "communal rights", not cause a "nuisance or harm to others". Likewise you should, again by communal rights, be sufficiently proficient in the machine's operation not to operate it in a way that is liable to cause a nuisance or harm to others.

It is accepted legal procedure that if a machine can potentially cause "nuisance or harm" and it is not "intrinsically safe" to use, then the operator has to be licensed.

An example is radio communications: in the case of most radio transmitters you need a licence to operate approved equipment. The licence is issued to "any person legal or natural" who has proved themselves competent to operate the equipment in an appropriate manner. However "mobile phones", by dint of the way they are designed, are considered to be "intrinsically safe" to operate and thus do not require a licence.

There used to be an exception to the licence requirement (as there is with cars) in that, if the equipment was operated within the bounds of a property and thus could not be a nuisance or a harm outside the property, then no licence was required.

A computer connected to a public network is operating in a way that could cause a nuisance or a harm to others. The fact that we do not currently have a legislative framework for this does not change the issue.

Therefore you need to consider a good deal more when you say,

"...the amount of security appropriate depends on the value of what's being protected."

Than the very limited viewpoint of what the laptop is worth to the owner.

Oddly, currently the tangible part of your laptop is actually certified in many ways (turn it over and look for the FCC, CE, and VDE conformance marks) but the OS and apps are not...

That is, there is currently no legislation in place requiring "hosts" not to cause a nuisance or a harm. Which is why you can currently say,

"It makes no sense for me to spend ten thousand dollars securing my laptop, when there's little personal or un-backed-up data on it, because I can replace it for maybe a thousand dollars in cash and inconvenience."

However, if your laptop were a car, then in a number of places worldwide (not sure about the US) it would need to be certified as "roadworthy" every year. Which, if applied to "Internet hosts", would negate,

"If I can impose a cost of a thousand dollars on a would-be attacker, that's more than the value of my laptop to him or her."

As you would be liable to all other "hosts" which are attacked from your "host".

Which means,

"Fundamentally, a voting machine is much more valuable than my laptop."

Would not be true, as you would be liable for its cost if it was attacked from your "host".

That is irrespective of whether,

"It's far more valuable to an attacker, since it can change the result of an election. It's far more valuable to the owner."

You would for the "common good" both be required to put in place,

"The proper security measures"

Irrespective of what the
"cost (in money and hassle)"

Is.

However, if you did not connect your laptop to the Internet or to any other host in any other way, then its security would be a moot point.

With regards,

"It would be nice if I could get total security for my laptop at a reasonable price, and hence the same security would work for voting machine and laptop, but that isn't realistic."

If we required by law the intangible goods of the OS and applications to conform to certain standards, like we do for tangible goods, and thus required the producers to have appropriate "product liability" insurance, then it might very well be "realistic".

However that opens a whole new "Pandora's Box" of issues that quite frankly I'd prefer to be left closed, until the market becomes a lot less monopolistic.

charlesFebruary 5, 2010 6:59 AM

Thanks for this post full of great insight!

This seems to me tied to the limits of expressing systems as sums of functions and the proper evaluation of "non-functional requirements" (btw, do you consider security a functional or a non-functional requirement?)

DavidFebruary 5, 2010 4:31 PM

@Clive: I'm still failing to see the relevance here. My laptop is a far less valuable target than a voting machine, by any measure. Nor will stealing or hacking into my laptop automatically crack a voting machine. It would be of use in a botnet, but I doubt it would do all that much individual harm.

How much harm does a botnet do per bot? Further, if the authorities can trace a botnet to me, they can notify me early on, when damage is limited. (Actually, my ISP will do that. I had an incident last year where somebody with a Romanian IP address took advantage of two things I'd done, neither of which was actually that dumb separately.)

Therefore, if there was proper notification and liability assignment for botnet members, I really don't think it would be all that expensive to me, and a thousand dollars for security would probably still be way excessive.

I do understand that a script kiddie in Korea can use a technique developed in Belarus to hit my computers in Minneapolis, whereas somebody trying to pick my front door lock would have to be a competent picker and present on my porch. That changes the modes of attack, the likelihood of compromise, and the nature of the security needed, but only those. The fact that my computer can be used to hurt others does raise the cost of insecurity some, but not to the point where a security breach would be excessively expensive, even if properly accounted for.

So, my original point remains. The optimal level of security is the one that minimizes the cost of security plus the expected loss given that security. The expected loss is the sum of problem likelihood times problem cost. The problem-likelihood curve is not constant over machines; it's not worth anybody's time to target my laptop specifically, but it is worthwhile to go to great efforts to hit specific targets.
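David's rule can be made concrete: pick the security spend that minimizes spend plus expected loss at that spend. A minimal Python sketch, with entirely made-up numbers for the likelihood and cost figures:

```python
def expected_loss(breach_probability: float, breach_cost: float) -> float:
    """Expected loss = likelihood of a problem times the cost of that problem."""
    return breach_probability * breach_cost

def optimal_spend(options):
    """Given (security_spend, breach_probability, breach_cost) tuples,
    return the option minimising total cost: spend + expected loss."""
    return min(options, key=lambda o: o[0] + expected_loss(o[1], o[2]))

# Hypothetical figures for a personal laptop worth about $1000 to replace:
laptop_options = [
    (0,      0.50, 1000),  # no precautions: coin-flip odds of losing it
    (100,    0.10, 1000),  # basic precautions
    (10_000, 0.01, 1000),  # "total security" costs far more than it saves
]
best = optimal_spend(laptop_options)  # the $100 option wins here
```

With these assumed numbers the $10,000 option is never rational for a $1000 laptop, which is exactly the point being argued; a voting machine's much larger breach cost shifts the minimum toward heavier spending.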

And so I maintain that the appropriate amount of security is less for my laptop than a voting machine.

RonFebruary 5, 2010 7:00 PM

I'd like to toss another idea on the pile.

EY mentioned apps designed for "in-house" use had lower security requirements than ones for "out-house" ;-) use.

I would propose that idea is no longer valid. The days of the "hard" perimeter are past. Rarely can we now say that the "internal" perimeter is separate from the internet.

With the move to SAAS and (external) cloud computing, more and more of what is considered "internal" is being moved outside of corporate walls. Internal and external clients are pushing IT to provide "web" access to formerly in-house data and applications. The "cost effective" approach is "Fast and dirty, simple webifying" of formerly internal apps. It ignores all of the new exposure being introduced.

Clive RobinsonFebruary 6, 2010 2:31 PM

@ David,

"I'm still failing to see the relevance here. My laptop is a far less valuable target than a voting machine, by any measure."

So you say, but I only have your word on it. And thus I have to take several views on it,

1 - You do not put anything of value on it.

2 - You do not knowingly put anything of value on it.

3 - You may put things on that are of personal value.

4 - You may put things on that are of value to others but not you.

5 - You may put things on that are valuable to you and to others.

A - You don't connect your laptop up to others either directly (comms) or indirectly (via removable media).

B - You do connect up your laptop to others indirectly via removable media.

C - You do connect up your laptop via communications.

Now you are claiming that your machine is a very cheap old machine of type 1A with your statement,

"My laptop is a far less valuable target than a voting machine, by any measure".

Which to be honest I don't believe, because you go on to say,

"It would be of use in a botnet"

Which makes it of type 1C as a minimum. Thus making your argument a bit moot.

Because most voting machines are of type 1B, and whilst being expensive to buy they actually have very low value theft-wise, and spend most of their time unpowered in a cupboard that may or may not be locked.

With regards your statement,

"Nor will stealing or hacking into my laptop automatically crack a voting machine."

Hmm, I don't know you from Adam or Eve...

So for instance you might just be working for a voting machine company as an on-the-road service tech, which would actually make your laptop more valuable than any one voting machine....

Also we tend to know that the chances are that a lot of on-the-road techs / sales persons just hook their laptop up via mobile broadband or whatever and do a bit of surfing over lunch or between jobs, irrespective of what the company rules may be. Which tends to make them a combination of types 3, 4, or 5 with B or C, but most likely 5C.

Thus the broader security view is that most laptops are capable of very considerable damage.

Which is why many attackers work on the "fire and forget" principle: they launch a self-replicating agent and wait for it to spread around.

Some agents are for access, some for predetermined searching, but many are general purpose and await instructions. Some of the latter may be used to look for bits of information such as CC numbers, PK files, and email addresses or other user-enumerating information.

And the botnet controller may get lucky, and get onto a machine like that one which was used by a CIA Director to work at home and also by his kids to browse the Internet...

It is this same "significant but low probability risk with a common item" which is the reason why you have car insurance. In any one year only a few percent of drivers have accidents, and some smaller percentage involve serious injury or worse, but they do happen, and the cost of such an accident usually outweighs the cost of the car hundreds of times over (i.e. a ten- or fifteen-year-old "beat up" car worth 250USD in scrappage causing 1millionUSD+ in medical bills and injury payments).

Thus if you are unfortunate enough to be a person walking down the pavement/sidewalk who gets hit by a car that did not stop before hitting you hard enough to knock you over, and you end up with a broken spine, you at least get some degree of compensation from the insurance company. They get the money to do that from their pool of premium payments.
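The insurance arithmetic being gestured at here is just expected value over the premium pool: a small per-driver probability of a huge payout, funded by many small premiums. A back-of-envelope sketch with assumed rates and payouts (only the $1M order of magnitude comes from the comment above):

```python
def break_even_premium(minor_rate: float, minor_payout: float,
                       serious_rate: float, serious_payout: float) -> float:
    """Per-driver annual premium needed to cover expected claims,
    ignoring overheads and profit: the sum of (probability * payout)."""
    return minor_rate * minor_payout + serious_rate * serious_payout

# Assumed: 3% of drivers make a $5k minor claim, 0.1% a $1M serious one.
premium = break_even_premium(0.03, 5_000, 0.001, 1_000_000)  # about $1150/driver
```

Note that the value of the insured car never appears: it is the potential liability to others, not the $250 scrap value, that sets the premium, which is exactly the analogy being drawn to Internet hosts.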

Thus to be a common good there is a shared or common commitment mandated by law.

Yes you hear lots of car drivers complaining about how much car insurance is and how they never have an accident or get broken into etc etc etc.

Your argument is the same, and it fails to the "broken bone" response,

"The fact that you currently don't have a broken bone does not mean you have never had one, nor does it mean you never will; taken en masse there is a well defined probability that you will."

Thus taken en masse laptops are not of type 1A and thus do represent a very real risk.

Further, currently the Internet is like the wild west supposedly once was: there is no common good, only self interest; no shared commitment, just look after number one (or as many would put it, "the free-market way").

Now I would say, contrary to your assertion, a voting machine is of little worth in the general scheme of things and probably a lot less useful than your laptop.

That is, a vote only happens maybe once a year or so, and the machine is not very useful as an attack vector due to virtually zero connectivity (type 1B).

Thus it is going to be subjected to a highly directed attack and not a general attack which your laptop would be.

Which brings me around to your point,

"It would be of use in a botnet but I doubt it would do all that much individual harm."

You cannot make that statement safely. Botnets are getting more and more sophisticated, and some are very covert these days; their purpose is to identify attack vectors to high value machines that may not be accessible in any other way (i.e. there's an air gap).

You may not use your laptop for very much, and you might not plug in a thumb drive to do a bit of work at home etc. But if that is the case you are the exception.

Oh and when you get a new laptop what are you going to do with your current one?

Clive RobinsonFebruary 7, 2010 3:59 AM

@ David,

Further to your comment

"How much harm does a botnet do per bot?"

Have a look at a very recently reported attack,

http://www.washingtonpost.com/wp-dyn/content/...

Basically, somebody from a supposedly Russian site sent emails to US .mil and .gov users that pretended to have a copy of a legitimate but small-circulation report.

Instead it had a modified copy of the old Zeus virus/bot, which although primarily used to steal passwords can also pull PDFs and other files, as well as being updated for other tasks.

Oh, and only some virus scanners pick it up....

So with regards your statement,

"Further, if the authorities can trace a botnet to me, they can notify me early on, when damage is limited."

This is a bit of a problem.

That is, virus protection software is at best unreliable against old attacks (otherwise the above attack would not have succeeded in the way it did) and at worst useless against a new attack agent.

So detecting the "infective agent" early on is a bit of a non-starter as far as reliability goes.

Which brings me onto your comment,

"Actually, my ISP will do that. I had an incident last year..."

I suspect the ISP picked up on your machine being used for "spamming", "DoSing" or some other high-bandwidth activity.

Currently this is how "sacrificial botnets" are discovered, not "covert botnets".

Sacrificial botnets are really the only ones in the news currently, which is why people think about their machine as,

"It would be of use in a botnet but I doubt it would do all that much individual harm."

If they even have reason to think about it at all as you did when your ISP notified you.

The thing is, it is way too easy for even a careful user to get caught. As you know from,

"where somebody with a Romanian IP address took advantage of two things I'd done, neither of which was actually that dumb separately."

I've even seen very computer-savvy and security-conscious people get caught by "zero day" attacks. So the odds are very much in favour of the attacker by a very very very long margin...

So much so that I'm happy to stick my neck out and say that the majority of people who use commercial software and carry out everyday computer activities (i.e. some 95% of IP hosts) are going to get hit well within the lifetime of their machine.

As a proof of concept for somebody, I built a "net bridge" traffic logger to record net activity on a new machine (a single IP host) freshly connected to the Internet via a very well known UK comms supplier. The "goat" machine had a freshly installed copy of a well known and widely-used-in-business OS, with the relevant updates (supplied on CD) from the OS company, and similar office work products.

Even I was surprised to see the goat was "owned" before the machine was fully connected up to the ISP email etc...

So if the attack vector is "zero day" it is more than likely to get into the vast majority of machines. Even if it's not "zero day", the chances are that over a quarter of virus checking software is not going to pick up on a slightly modified old virus.

And if the attack vector is covert and does not announce its presence by blowing out vast amounts of network traffic etc., how is an "owned machine" going to be detected?

So I think you can see that your statement,

"Therefore, if there was proper notification"

Has a significant issue in that notification can only happen when the bot is detected.

So of the five main methods of bot net detection,

1, Detect the vector.
2, Detect the output.
3, Detect the rootkit.
4, Detect the damage.
5, Detect the control channel.

1-3 are already unreliable against even known attacks.

With ordinary users and the way they use their machines, detecting a well designed rootkit is problematic in the extreme. Once the machine is booted the rootkit is in place and operating, masking the files etc. So the only reliable way is to boot into a secondary OS and scan from there... But this again has a whole heap of issues (which I'm not going to go into here) which make it far from 100%, and some would argue (correctly) make the machine more vulnerable to removable-media attacks.

It can be taken as read that a covert vector is not going to be doing direct damage (SPAM, DoS etc), thus it will not show up under (4).

Which leaves looking for the control channel (5), which unfortunately is very easy to get around, and with current Internet activities impossible to stop.

Basically you ruin Google's day and co-opt them into the "crime" simply via their primary usage as a search engine.

As the bot controller you do the following,

1, Write your vector (bot code) such that it detects which search engine the user uses.

2, The code occasionally sends a search for a "control message identifier".

3, The code, on getting a hit, follows the link to get the control message.

So far this is not that much different to other methods, and would result in the control host being identified and taken down etc.

However this is easy to avoid: you don't have a control host...

4, As the controller you post your control message to an open blog indexed by the search engine (of which there are very very many).

This of course relies on the control message not getting taken down by the blog owner before it is searched and cached by the search engine.

5, As the bot controller you post to multiple blogs, in a way that is relevant to each blog, so it does not stand out.

Which leaves the problem of the search string: if this is determined, then the search engine can trap the requests etc. So,

6, As the controller you write your bot code to have an evolving search string, and depending on the bot usage possibly a unique one as well.
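Step 6, the evolving search string, works the same way as the domain-generation algorithms later seen in the wild: both sides derive the day's token from a shared secret, so nothing identifying ever crosses the wire. A minimal, hypothetical Python sketch of the idea only (the seed, token length, and letter mapping are all arbitrary choices of mine):

```python
import hashlib
from datetime import date

def control_query(seed: str, day: date, bot_id: str = "") -> str:
    """Derive a search token from a shared seed and the current date.

    Controller and bot compute the same token independently, so the
    search string evolves daily with no prior communication. An optional
    per-bot identifier makes the token unique to one machine."""
    material = f"{seed}:{day.isoformat()}:{bot_id}".encode()
    digest = hashlib.sha256(material).digest()
    # Map the first 12 digest bytes onto lowercase letters so the result
    # looks like an ordinary (if nonsensical) search term.
    return "".join(chr(ord("a") + b % 26) for b in digest[:12])
```

The bot searches for that token; the controller, having computed the same token, plants it in a blog comment somewhere the search engine will index. Since the token changes every day and there is no fixed control host, neither blocking a search string nor taking down any one blog breaks the channel.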

This is one of those things that is causing a few sleepless nights for the likes of the search engine operators because of people thinking the way you do with,

"Therefore, if there was proper notification and liability assignment for botnet members,"

In such a system, why go after the botnet members? Any sensible lawyer would go after those with the ability to pay their fees and their clients' damages. Which would be the likes of Google.

Worse, Google would be open to lawsuits not just from those who had been attacked by the botnet but also from those whose machines were in the botnet (a nice little "class action").

The second line of attack would be the blog owners or the hosting services they use...

It is also the sort of system those with any interest in "Cyber-warfare" are scared by as well.

This is simply because a botnet might be covert only until a pre-arranged signal and then go very overt without any kind of warning. And as it's all in the "home network" of a country or region, pulling the plug won't work.

There is also the issue that the bot controller makes the infected machines "suicide warriors": that is, the botnet encrypts the hard drive with a random key and then makes the machine boot up only as a "DoS machine".

Just work out the cost of 1 million PCs in the US becoming nothing more than network jammers with their hard disks locked away from their owners. The country's e-commerce stopped dead. All those accountant-led moves to use the Internet to save money on out-of-hours command and control of infrastructure preventing control at the worst possible time (say a hard winter in New York and W-DC). No e-billing / banking etc etc...
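Even the crudest back-of-envelope shows the scale of that scenario; every figure below is an assumption of mine, not from the comment:

```python
def outage_cost(machines: int, rebuild_cost: float,
                days_down: int, lost_work_per_day: float) -> float:
    """Direct rebuild cost plus lost productivity while machines are down.
    Deliberately ignores knock-on e-commerce and infrastructure losses."""
    return machines * (rebuild_cost + days_down * lost_work_per_day)

# Assumed: 1M machines, $800 each to rebuild, 5 days down at $300/day lost work.
total = outage_cost(1_000_000, 800, 5, 300)  # $2.3 billion, as a floor
```

And that floor excludes exactly the losses the comment emphasises: stopped e-commerce, blocked infrastructure control, and liability claims, which is why no insurer's existing model covers it.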

What is the liability then?

It all depends on your view point but I do know this, where there's a harm and there are assets a lawyer is going to be interested...

And that scares the insurance industry, because the problem is all their business models are based on systems that have the "tangible constraints" of the physical world. Not the intangible information world, where the constraints of the tangible world do not apply.

And it frightens all Governments because at the end of the day they are the "insurer of last resort"...

As the ancient Chinese curse has it,

"May you live in interesting times"

I know Google are beginning to think they are living in them, and oh boy are they running scared...

Clive RobinsonFebruary 7, 2010 10:25 AM

@ Ron,

"EY mentioned apps designed for "in-house" use had lower security requirements than ones for "out-house" ;-) use."

I think the issue was not so much for entirely new apps but for older apps and those that re-use an old code base.

The traditional way programmers did security was as an afterthought (at best).

The way an app was built was to build the program logic, then interface it to the back-end storage etc. Then, and only then, bolt the security etc. on at the front, as and if needed...

Well when apps moved to the web we saw a lot of the security code put into the client code...

People wised up to that but the cloud is going to do the same thing.

The question is where and how do you put in the security in the cloud?

The first question should be "do we trust our cloud supplier..."

If not, and the answer in many cases should be no, then the cloud is a bit of a non-starter for many people's applications. NOT that this will stop them going there and losing whatever data they put there...

DavidFebruary 8, 2010 9:27 AM

@Clive:
Okay, to describe my laptop more fully: it's frequently hooked up to the net, although it won't get incoming connections unless somebody reprograms my router from outside. It has stuff of personal value on it, which I've got backed up in a few places.

In intrinsic value, it's a functioning laptop with some additional computer programs and not-publishable-quality fiction on it. As an addition to a botnet, it isn't particularly vulnerable, since it runs Ubuntu Linux and I normally surf with NoScript, although neither of those is a guarantee of safety.

Moreover, there's a difference in value between part of a collective and an individual unit. The works of Shakespeare and Milton and Keats are cultural treasures of great value, and the marginal value of another copy is pretty small. A botnet can do great harm, but whether it has one fewer bot or not is insignificant, and there is nothing particularly significant about my laptop.

Therefore, my laptop is not inherently very valuable, and its marginal value to bad guys as a part of a botnet is low. Therefore, I believe that the harm done by cracking into it would be relatively minor, which means that relatively low-cost security measures would generally deter an attacker, and high-cost security measures are just not worth it to me.

I contrast that with a voting machine, which can do a good deal of harm when compromised, and which is likely to be specifically targeted. I conclude that the security measures adequate for my laptop are not adequate for a voting machine.

Here!February 21, 2010 1:35 PM

Perfect! Great! This helped a bunch! I've seen a few rather confusing websites lately; this cleared up a lot of confusion I had.
