Comments

Richard Steven Hack February 25, 2011 4:08 PM

Just listened to it. Nice interview more or less.

One point of interest was Bruce’s comment that Anonymous are “human beings” and could be stopped by regular police work.

The latest Ars Technica article on the HBGary case touched on that. HBGary is working with the FBI to track down the hackers who took them down. Considering that one of the reasons for the attack was Barr’s attempts to identify members of Anonymous, I would expect that at the very least someone will be arrested and some sort of trumped-up case will be made that they were directly involved in the hack – whether they were or not.

I don’t think the FBI can afford to let this case fall by the wayside. Since they are intent on destroying Wikileaks and anyone who aids Wikileaks, I would guess Anonymous, which was already under major investigation for the DoS attacks in favor of Wikileaks, will now become one of the top entries on the “FBI Most Wanted” list.

In addition, the fact that a hacker group was able not only to penetrate an IT security company – that has happened before, several times – but also to release information, “mini-Wikileaks” style, about a major Washington law firm and a major US bank conducting “dirty tricks” campaigns – well, that’s just beyond the pale. There is no way the US government can allow this sort of thing.

So Anonymous can count on being pursued until the cows come home.

It will be interesting to see how they fight back. In my opinion, their best bet would be to penetrate someone or something serious and then hold back that information as a “doomsday device”, much as Wikileaks has withheld a large data cache against the day when it might be taken down.

If there’s such a thing as “cyberwar”, this might be it.

mw February 26, 2011 6:22 AM

My takeaway from the Bruce Schneier CHOMP.FM 008 interview is that in a given security system, the human factor is the weakest link, not the security technology itself.

If you look at the continuous stream of new security patches and updates that purport to close critical flaws in software, I’m not sure I agree – the security patches represent flaws in the technology itself. When a web site or a database is hacked, you may jump to the conclusion that the web site, network, or database administrator failed to take adequate preventive steps, but for mission-critical applications the technology, whether it’s an OS, browser, web application server, database, or router, should be built to be foolproof.

Sure, Wikileaks is an example of a web site that depends on the human factor. However, when I read day after day of new security breaches, I principally blame the poorly designed and implemented technology.

Clive Robinson February 26, 2011 10:58 AM

@ mw,

“… is that in a given security system, the human factor is the weakest link, not the security technology itself.”

The glib answer for this is that ‘it’s humans that specify and build security technology’…

However, there is more than a germ of truth in it. After all:

Why do people make purchasing choices which do not take security into the equation?

Why do users take actions that are known to be of high security risk?

Why do systems developers and implementers give low priority to security?

All these are “human” not “technological” choices.

mw February 26, 2011 12:27 PM

@ Clive

A product should be designed to be foolproof so it can be easily and correctly used. Developers give high, not low, priority to security issues, but typically release products with huge design flaws and long lists of known unfixed bugs. Too often a “release early, iterate often” philosophy turns consumers into beta testers while the inevitable security patches and updates roll in.

However, I’m not arguing against the proposition that the human factor is a weak link.

moo February 27, 2011 1:25 PM

@mw:

It would be nice, but it doesn’t happen because (1) designing and building highly secure products is very time-consuming and expensive, (2) the customers buying the products are often not willing to pay the high cost or wait that long for the product, and (3) even if they were, building highly secure products is intrinsically difficult, and probably beyond the skills of all but a small fraction of developers.

The world is run on insecure and badly-designed software because writing all software the way the space shuttle software was written would be utterly impractical.

Clive Robinson February 27, 2011 6:15 PM

@ moo,

“The world is run on insecure and badly-designed software because writing all software the way the space shuttle software was written would be utterly impractical.”

And if you remember, even NASA’s Space Shuttle software has had a hiccup or two…

@ mw,

The world is an imperfect place where the only absolutes we thought we had appear to be based on the “roll of God’s dice”, which is why Albert got so upset.

We either live with it or, as Albert did, “paint ourselves into a corner”.

We knew from before the first electronic computer was ever built that we were going to have a fun time with them. Kurt Gödel showed that any system of logic that was effectively practical to use could not prove its own consistency (incompleteness). Church and Turing (halting problem) slightly later showed that similar limits hold even for deterministic systems.

And if you take it forward you will realise the implication is that any moderately complex system cannot really be known. That is, if the flow of the program can be controlled by the input data, the program itself becomes as unknown in its behaviour as the input data. Further, if this data can be stored within the program in some way, and the data can make the program access other data, you are basically back to the Turing machine…

Only the simplest of programs can be “proved” in any way, and generally only within several constraints, with the proof itself often resting on the assumptions we call axioms.

You can take these ideas forward to show that no useful program can be 100% error-free, and thus none can in turn be 100% secure.

Is this a problem? Well, actually, no, I don’t think it is.

Back when they were designing the atom bomb at Los Alamos, they ran into a problem with some of the mathematics involved. In effect, they could not get answers deterministically in a meaningful way. So they used probability, and ended up with what we now call Monte Carlo methods.

I think the same reasoning can be applied to the functioning of all moderately complex programs, and thus in turn to security.
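
For anyone unfamiliar with the technique, here is a classic toy illustration in Python of the Monte Carlo idea: where a deterministic answer is impractical, random sampling gives a probabilistic one. The example (estimating pi from random points) and the sample count are arbitrary illustrations, nothing more:

```python
# Estimate pi by sampling random points in the unit square and counting
# how many fall inside the quarter circle of radius 1.
import random

def estimate_pi(samples: int) -> float:
    inside = sum(
        1
        for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))  # ~3.14; accuracy improves with more samples
```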

I won’t go into the details but consider this.

We have a very limited number of programmers who can write secure code. Is this an issue?

Currently yes, but does it need to be?

I think not. Consider the following proposition.

We get programmers to use what are effectively scripting languages, but these are slightly different from normal scripting languages.

They actually have two parts: the conventional subfunction to be used in the program, and also a security function that gets run in a hypervisor system. This has a signature of how the subfunction behaves and of the level of resources, such as memory and CPU cycles, that the subfunction needs.

If the subfunction tries to exceed the limits, or its execution signature becomes abnormal, the hypervisor stops the function in its tracks and then goes in to perform a sanity check on the subfunction’s current data and memory contents.

If things are not as they should be, it kills the subfunction and chucks it up the hypervisor stack for human analysis.

If the subfunctions are written in the correct way, then both the execution signature and the limits can be set by the scripting engine as it compiles down to its version of byte code.

But this “right way” has many advantages, in that each subfunction can, just like those in a Unix shell script, run on any available CPU core, so you get implicit parallel processing with just a few other constraints on how the scripts are written.

Further, the script subfunctions need have no knowledge of the outside world: their input data is “drip fed” to them by the hypervisor stack, and their output is fed back a drip at a time to the hypervisor stack. This critically limits the need to store data in any function, and thus the ability to subvert the function by any input into it.

But as the subfunction is decoupled from the world and from any other subfunction, the hypervisor can halt it randomly and sanity check its code and data memory. If it has become suspect, it can be stopped and kicked back up the hypervisor stack.
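
To make this concrete, here is a minimal in-process Python sketch of the idea. The `Hypervisor` class, the declared limits, and the spot-check probability are all hypothetical illustrations of the scheme described above, not a real hypervisor:

```python
# A subfunction ships with a declared "execution signature": a hash of
# its code plus resource limits, enforced by a supervising object.
import hashlib
import random
import time
import tracemalloc

class SignatureViolation(Exception):
    pass

class Hypervisor:
    def __init__(self):
        self.signatures = {}  # name -> (code hash, max seconds, max bytes)

    def register(self, func, max_seconds, max_bytes):
        # Record the subfunction's signature at "compile" time.
        code_hash = hashlib.sha256(func.__code__.co_code).hexdigest()
        self.signatures[func.__name__] = (code_hash, max_seconds, max_bytes)

    def run(self, func, *args):
        code_hash, max_seconds, max_bytes = self.signatures[func.__name__]
        # Random spot check: has the subfunction's code been tampered with?
        if random.random() < 0.25:
            if hashlib.sha256(func.__code__.co_code).hexdigest() != code_hash:
                raise SignatureViolation(f"{func.__name__}: code changed")
        # Run the subfunction while measuring its time and memory use.
        tracemalloc.start()
        start = time.perf_counter()
        result = func(*args)
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        # Enforce the declared limits; a real hypervisor would kill the
        # subfunction mid-flight rather than checking afterwards.
        if elapsed > max_seconds or peak > max_bytes:
            raise SignatureViolation(f"{func.__name__}: limits exceeded")
        return result

hv = Hypervisor()
def double_all(xs):
    return [2 * x for x in xs]
hv.register(double_all, max_seconds=0.1, max_bytes=1_000_000)
print(hv.run(double_all, [1, 2, 3]))  # [2, 4, 6]
```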

Now have a think about each subfunction being written by three different teams, with the hypervisor randomly using one of the three versions to execute each iteration of the subfunction.

If a signature or limit exception is raised, the hypervisor can present the same data to all three versions of the subfunction, take a vote, and decide what to do.

Further, the hypervisor can randomly send one small data set to all three versions; if they all agree in the way they behave, then the chances are the program is behaving functionally correctly.
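
As a toy Python sketch of the three-team idea, the `sum_v*` functions below are hypothetical stand-ins for independently written implementations of one subfunction, and the vote is a simple majority:

```python
# Run every version on the same input and take the majority answer.
from collections import Counter

def vote(versions, data):
    results = [impl(data) for impl in versions]
    answer, count = Counter(results).most_common(1)[0]
    if count < 2:
        # No two versions agree: kick it up the stack for human analysis.
        raise RuntimeError(f"versions disagree: {results}")
    return answer

# Three hypothetical teams' implementations of the same subfunction:
def sum_v1(xs):
    return sum(xs)

def sum_v2(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_v3(xs):
    return 0 if not xs else xs[0] + sum_v3(xs[1:])

print(vote([sum_v1, sum_v2, sum_v3], (1, 2, 3)))  # -> 6
```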

There are a whole load of other things the security hypervisor can do, such as out-of-order execution, adding random time delays, and running through nonce data.

Effectively the script programmer is blissfully unaware of any of this background activity by the hypervisor. However, any malware writer wishing to put data into the script to subvert it in a meaningful way is going to have a much, much harder time.

In this way the scarce resource of those capable of writing the subfunctions securely can do just that; the everyday jobbing programmer need not get involved with the actual security at this level.

Thus you end up with a script that is not only inherently more secure but is also probabilistically checked for incorrect function by the hypervisor.

Such a system would not be perfect but it would certainly be orders of magnitude more secure from external malware attacks than current methods.

Have a think on it and see if you can come up with further benefits.

I’m aware that it has certain issues as well, such as not being “CPU cycle” efficient, but then neither is Java, Perl, or just about any other post-Java language.

Oh, and one other benefit: you can make the subfunctions of the scripting language much more high-level. It is known that the number of “bugs” a programmer introduces into any program is related more to the number of lines of code than to just about any other metric, which is why the likes of LISP programmers are generally more productive than C++ programmers, and they in turn more than assembler-level programmers.

Jay February 27, 2011 7:25 PM

@Clive:

That would work quite well for a well-specified problem.

Unfortunately, anything with humans involved becomes ill-specified quite quickly…

(Is your word processor supposed to write to /etc/passwd? Well, it is if you’re editing it… but did you mean to? Is this email signed by “Legitimate Bank”? Yes, but it’s not the “LegitimateBank” you usually deal with. Security flaws are all about when our problem model didn’t match the domain…)

w February 27, 2011 10:39 PM

“Kurt Gödel showed that any system of logic that was effectively practical to use could not prove its own consistency (incompleteness).”
If a program like “hello world” had a function that could memcpy, couldn’t it check itself for equality? If that comes back true, it could then try to change “world” to “world1”, and if that came back true, logically it would know (not just have a gut feeling) that it is the program and that it is changing itself.

w February 28, 2011 12:32 AM

@Clive: having any function check another function still relies on the first function not being compromised.
Give the second function the knowledge that 1+1=2; an oracle like the first function can corrupt that, and then set out to find how the first function can pass data into it (one way). The first function passes a hash into the second, of code or memory that is duplicated (say xor reg,reg), which the second checks. If the second tries to change the hash, it will think it has, but the first stopped it, or passed back data that was the original minus the change.
The second will always see that the hash is correct, but it knows that the first can change it, so it resets itself if the hash is wrong, or if the hash doesn’t match one that the oracle can’t input data into.

Is there an edit button?

Dirk Praet February 28, 2011 8:21 AM

Even in a perfect world where designers and programmers alike build in security from the ground up and practice decent project and release management, version control and the like, in most companies it is not the engineering crews that are in control of life-cycle management, but product management and marketing. Far too often, products and their updates are released before they have reached sufficient maturity, under the permanent pressure of marketing and sales teams in desperate need of new stuff so they can make their targets.

I also firmly believe that most, if not all, software benefits from code scrutiny by independent third parties, which adds to the quality and security of many open source products, but which is far less the case with typical closed software.

Clive Robinson February 28, 2011 11:20 AM

@ Jay,

“Security flaws are all about when our problem model didn’t match the domain…”

Yes, they are, and this is a major part of the problem.

As a general rule of thumb, large complex models are not that useful (for many reasons) and can effectively be as close to useless as makes no practical difference.

So we have evolved ways of breaking the model into parts and then dealing with those as separate, less complex, and thus more useful pieces.

Likewise we break the inputs and outputs into types and deal with those separately.

We end up with a multilayer model, with each layer consisting of multiple parts.

Likewise with a dwelling such as a house, which has foundations, possibly a cellar, walls, floors, ceilings, lofts, eaves and roof tiles.

The building design and techniques for foundations are generally different from those for walls, although they may look the same at the damp-course interface. Thus the layers have distinct and different purposes, which usually have different design requirements and rules.

Likewise the internal and external walls, which also have doors and windows, the doors internal to the building being very different from those in external walls. Thus at the various layers there are multiple parts that may be conceptually similar (i.e. doors and windows) yet still have very different design requirements (internal doors to keep noise down, external doors to keep intruders etc. out).

A mistake we often make with ICT is a “one size fits all for all things” mentality, thus building huge and completely unnecessary complexity, usually with insufficient internal compartmentalisation to deal with it.

We normally don’t see this in any other walk of life. After all, how many cars have you seen with beds and kitchens and swimming pools in them? And where you have, would you really want to drive it, or pay for the upkeep and the cost of keeping it on the road?

No? So why do we do this with software?

It’s partly because we can’t see software as a physical object, and thus the distortions and inconsistencies these “features” create are mainly hidden, not just from view but from perception as well.

Worse, the desire for additional “features” to be “bolted on”, either post-design or post-build, is driven by “marketing”. And as we cannot perceive the ugliness that has been created, we don’t reject it as not fit for function.

Thus the complexity at any level is getting completely out of hand, and completely unmanageable.

But it gets worse: applications are not only feature-bloated sideways within a layer, they are also, with the advent of the web app, taking on (badly) the roles of other layers as well.

One such example is the web browser: as an application it sits above the OS and effectively runs many applications within it.

In many traditional models the OS supplied much of the security by ensuring that independent functions (originally apps) did not have access to each other’s memory space etc. Thus the OS security model was “separation”, both vertically and horizontally, with data communications “mediated” and “controlled” by the OS.

Most apps, especially the likes of web browsers, are designed to offer ease of use and easy data sharing etc. in a single “efficient” memory model. Thus the web browser bypasses the majority of the security that used to be provided by the OS, and few of the “apps” are now in any way realistically secure from each other (the same applies to MS products like Office, where individual apps at some level have free access to each other’s data).

One of the ideas behind the hypervisor is that it builds the more traditional OS security model into the application by default. Thus external malware etc. finds it difficult to get in.

However, the bottom end of the hypervisor stack, where the scripted app runs, does not address the aspects of policy.

That is, if a user decides to do something stupid, it is not at this low (anti-malware) level that the hypervisor provides security.

Without going into the details of the upper layers of the hypervisor (partly because I’m still working through them), you have layers that deal with “policy”.

There are a number of ways of doing policy, but in general the more traditional models fail because they deal with “entities” that “own” or have rights over objects.

Humans don’t work as “entities”; they have “roles”. Thus at the higher policy layers the hypervisor deals with “entities” that have “roles”, where “access” across “objects” is controlled by the matrix of the particular policy for the role the user is carrying out.

Thus, oversimplistically, the administrator who owned the password file could not edit it unless they had switched into an appropriate role.

Thus a web browser could have several windows open but would only allow access between them based on a particular role. So it could have a window open to “Myloot Bank” and one to “robmeblind music downloads”; as the role for one is “banking” and the role for the other is “download MP3s”, the security hypervisor would stop access between the two.

Unless, of course, the policy of either allowed it, which is a failure of the human setting the policy, not of the hypervisor script. That is, a hammer cannot be held responsible for having beaten in your brains because some nutter took it along to do just that.
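
A simplified Python sketch of that role-based policy layer might look like the following; the roles, objects, and `POLICY` table are made-up illustrations of the idea, not a proposed format:

```python
# Access between objects is decided by the entity's current role,
# not by ownership.
POLICY = {
    # (role, object) -> allowed operations
    ("banking", "Myloot Bank window"): {"read", "write"},
    ("download MP3s", "robmeblind downloads window"): {"read"},
    ("admin", "/etc/passwd"): {"read", "write"},
    ("user", "/etc/passwd"): {"read"},
}

def allowed(role, obj, op):
    return op in POLICY.get((role, obj), set())

# The administrator "owns" the password file but cannot edit it until
# they switch into the appropriate role:
print(allowed("user", "/etc/passwd", "write"))   # False
print(allowed("admin", "/etc/passwd", "write"))  # True
# And the banking window is unreachable from the downloads role:
print(allowed("download MP3s", "Myloot Bank window", "read"))  # False
```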

Clive Robinson February 28, 2011 11:55 AM

@ w,

If you have a function which copies memory from one location to another (non-overlapping) area, then unless the function goes on to specifically check the data, it cannot tell that it has succeeded.

That is, the copy function calls a comparison function.

Now if the comparison function could not act on the data it is checking, and it had not been modified by malware, then yes, it would report back correctly that Loc2 == Loc1.

However, let us assume, as has been shown over and over again, that the original comparison function gets changed or replaced by malware.

How does the program know whether this has occurred or not?

Let’s say it treats the memory where the comparison function resides as a string and checks it against another string…

That is, the process cannot itself check whether it is self-consistent or not. Only something outside the process, which the process cannot see, be aware of, and thus change, can do this by looking into the process space.

What the hypervisor does (and I’m looking at doing this in separate hardware) is halt the process and, running in an entirely separate context, run a checksum or other signature over the process memory space and data.

Obviously this cannot happen all the time, hence the “probabilistic” or “Monte Carlo” security. The probability of the hypervisor catching such an illicit change is based on what percentage of real-world time it halts the process and checks it.
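
A rough Python sketch of that probabilistic checking, assuming the checker holds a known-good baseline hash; the byte-string “process image” and the per-pass check probability are illustrative assumptions only:

```python
# An outside checker (standing in for the hypervisor) hashes a process
# image at random intervals and compares against a known-good baseline.
import hashlib
import random

def checksum(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def spot_check(image: bytes, baseline: str, check_probability: float) -> bool:
    """Return False if an illicit change is caught on this pass."""
    if random.random() < check_probability:
        return checksum(image) == baseline
    return True  # not checked this pass, so tampering goes unnoticed

process_image = b"original code and static data"
baseline = checksum(process_image)
tampered = b"original code and static dataX"

# With a 30% per-pass check rate, the chance of catching a persistent
# change after n passes is 1 - 0.7**n, which approaches certainty.
caught = sum(not spot_check(tampered, baseline, 0.3) for _ in range(20))
print(f"caught on {caught} of 20 passes")
```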

w February 28, 2011 4:29 PM

@Clive
Would the kernel have access to the computer hardware and your device, or just the computer?
If the latter, what stops your device affecting the kernel?

It is well known that 802.3 has a 56-bit preamble (to infinity), with a high chance you can send commands to hardware without the kernel (fully) knowing.
