Harris is far smarter than the average layperson, better trained and more inclined to accept conclusions from abstract academic arguments that contradict common sense, and above all willing to devote a large amount of time and effort to personally arguing with someone he knows to be a world-respected security expert. If you can’t convince someone under those conditions, what hope do you have of convincing the general populace, who will give the issue only a few minutes’ thought, will hear it in sound-bite form mangled by the media, and are much more resistant to overturning their preconceptions on the basis of abstract analysis?

As you keep emphasizing on this very blog, our intuitive analysis of probability is very bad. Having had to teach probability to college kids, I can assure you that we are unlikely (prior to the Singularity) to ever reach a point where most adults have a sophisticated grasp of probability (even those who learn it mostly forget it) and are willing to trust it over their intuitions.
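The classic birthday problem shows just how badly intuition fares: most people guess the probability far too low. A few lines of code (a standard computation, added purely for illustration) give the exact answer:

```python
# Probability that at least two of n people share a birthday,
# computed via the complement (all n birthdays distinct).
def p_shared_birthday(n, days=365):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 3))  # ≈ 0.507: better than even with only 23 people
```

Asked to guess where the crossover to "better than even" sits, most people put it far above 23 people.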

Unfortunately, terrorism is too big an attention draw to expect people to just overlook the matter and leave it to experts.

This seems like a nice dream but an unlikely one.

And others. Some things are skills that need a unique mindset. Amateurs – even very smart ones – fail because they ask smart but wrong questions.

But people want certification in silver bullet manufacture, not thought-wars. They feel safe based on inconvenience or cost. Worse, they fear those who shatter the illusions.

I’d consider it slightly differently: a generalist considers the entire whole of a system of security, while the specialist may well ALSO be a generalist (as you said) AND perform specialist functions.

The generalist will also, by nature of being a generalist, call in specialists to achieve the goal of the level of security demanded of the organization.

Now, when addressing a target audience, one must adapt one’s address.

When speaking to end users, “security speak” isn’t a good idea. It alienates them, it confuses them and the confusion tends to lose the message. In short, they ignore what confuses them.

So, when I address end users, I’ll speak (I specialize in systems and network security and generalize in physical security of the NOC/server room) in terms of “not letting evil spirits into the network”, a colorful term that gets attention precisely because it is ridiculous, especially coming from a professional. I then move into the harms caused by simple errors of procedure, such as the proverbial thumb drive plugged into a networked system (I lived through one such debacle in 2008, though my installation was unimpacted). The distraction of a non-professional term tends to reinforce the lesson.

When dealing with middle management, I’ll employ a bit of the “evil spirits” again (it reinforces the message), along with more detail and specifics.

When dealing with upper management, I rarely use the “evil spirits”, unless I encounter a genuine lack of knowledge; then I step it down a bit so the target audience can comprehend what is going on and why. But I’ll also include metrics in detail, as requested (I insist on two-way meetings, with questions from senior people allowed to interrupt sessions, within reason), including monetary ones.

I’ve addressed some corporate audiences, with good feedback. I’ve addressed far more governmental audiences, due to my career path.

Originally, I lived “at the sharp tip of the sword”. After I retired, I went into NA/SA admin positions, but I was always security-vigilant. Later, I graduated into information security (in the DoD, it’s Information Assurance), as I was essentially already doing that job, but getting paid less.

I’ve always had the ethic that if I’m doing a job, I should ALWAYS be the expert. Steep learning curves are trivial to me; indeed, I rather love them. I end up “the Shell answer man” in rather short order. Continued learning is expected of an expert; the expert only learns more as the expert progresses. Otherwise, said expert isn’t an expert, but has stagnated.

To this very day, when I walk into a corporate office or branch, I evaluate everything, from the approach through the parking lot to the entrance; then the entrance itself and making entry; then the various doors, whether marked or merely hinted at, suggested, or discussed. For the average worker will happily discuss ANYTHING unless trained NOT to discuss certain items.

It all comes down to teaching concepts to a target audience. It then comes down to REACHING the target audience.

Those who don’t know about security overall, and the majority of the populace does not, won’t have a clue. So you need to educate them.

From the CEO/CIO/COO to the end user, for ALL are a point of failure.

*I’m also skeptical that being an expert in one aspect of security, say cryptography, means that you’re an expert in overall computer security.*

The way you have worded it, the answer most people would expect is no.

However, in any field of endeavor you have two extremes of expert, “the Generalist” and “the Specialist”; somewhere in between is where you will find the likes of the “Renaissance Man” and the polymath.

So there is no reason why a generalist should not also be a specialist in certain subjects within the field of endeavor. Nor is there any reason why the skill sets of a specialist in one domain are not transferable to other domains, or to the entirety of the field of endeavor.

So you should really be talking about the skill sets and outlook of individuals, and showing where a skill set or outlook is not transferable.

Oh, and it should be noted that whilst the lower levels of university education are designed to give a broad foundation, the higher-level “research” qualifications encourage specialism, not generalism, which might account for why we have few true generalists at the higher levels.

Another consequence of this is that generalists tend to “break new ground” whilst specialists refine the quality of a particular area of endeavor and in effect improve the methods. Also, as has been pointed out to me, generalists trend towards “experimental” research and specialists towards “theoretical”.

It wasn’t my intention to dwell on a non-security-related definition. I was only giving an example relating to Clive Robinson’s analysis:

*Well for a start split the term into its two parts, “security” and “engineer”*

where decomposing a two-word expression into its components may not produce the expected meaning. I originally thought of “eggplant”, “pineapple”, “grapefruit”, “American Indian”, but I chose something different. Anyway, I was going to drop the subject, until my friend came over. We had a cup of tea together (the mints did not look good, Clive, so we didn’t add them; many Middle Easterners put mint in their tea – Egypt, Morocco, Tunisia). Where my friend comes from (another Middle Eastern country: Syria, Jordan, Palestine), they often add sage to their tea, but I’m not too keen on that flavor. Try adding fresh cardamom to your tea next time – it’s good for your heart, and tastes good, too…

Anyway, I talked to him about three subjects relating to this blog:

This is an example where Wikipedia gets the wording of the first few paragraphs wrong. Here is the formal mathematical definition:

Weisstein, Eric W. “Random Variable.” MathWorld – A Wolfram Web Resource. http://mathworld.wolfram.com/RandomVariable.html

*Random Variable
A random variable is a measurable function from a probability space into a measurable space known as the state space (Doob 1996). Papoulis (1984, p. 88) gives the slightly different definition of a random variable X as a real function whose domain is the probability space and such that:
1. The set {X ≤ x} is an event for any real number x.
2. The probability of the events {X = +infinity} and {X = –infinity} equals zero.
The abbreviation “r.v.” is sometimes used to denote a random variable.*
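The measurable-function definition can be made concrete with a toy example (mine, not MathWorld’s): the sum of two fair dice, viewed as a function from the sample space to the reals.

```python
import itertools
from fractions import Fraction

# Sample space: all 36 ordered outcomes of rolling two fair dice.
omega = list(itertools.product(range(1, 7), repeat=2))

# A random variable is a function from the sample space to the reals;
# here X maps each outcome to the sum of the two dice.
def X(outcome):
    return outcome[0] + outcome[1]

# The induced distribution: P(X = x) for each attainable value x.
dist = {}
for o in omega:
    dist[X(o)] = dist.get(X(o), Fraction(0)) + Fraction(1, 36)

print(dist[7])  # 1/6, since six of the 36 outcomes sum to 7
```

The “measurable” part is vacuous here because the sample space is finite; it only starts to bite with uncountable spaces.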

Incidentally, my friend’s perception of what “Security Engineer” means is the following:

*“A person whose job is to make sure breaking a system is a difficult task for the opponent”*. Keep in mind that he is not a “security person”…

“Don’t you know you’re always supposed to change doors when given the opportunity.”

I told him I understand the theory and the justification, but in my view it’s incorrect. It was a long discussion, but we both agreed it makes no difference whether you change the doors or not. It is still justifiable with conditional-probability reasoning, but in our view that is incorrect (unless there is a long previous history sample and the problem is treated as a Markov chain) – details left out 😉

I was basically asking him to look at the problem from a probabilistic point of view, assign probabilities, and calculate expectations. I was too lazy to do that myself. He was also reluctant, and said the best answer is: “Do what works for you”…
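For what it’s worth, the expectation calculation neither of us wanted to do by hand is only a few lines of simulation. This sketch assumes the standard formulation, in which the host always knowingly opens a non-winning door; the disagreement above is precisely about whether such assumptions hold.

```python
import random

def monty_trial(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)   # the winning door
    pick = rng.choice(doors)    # the contestant's first choice
    # Host opens a door that is neither the pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
n = 100_000
wins_switch = sum(monty_trial(True, rng) for _ in range(n)) / n
wins_stay = sum(monty_trial(False, rng) for _ in range(n)) / n
print(wins_switch, wins_stay)  # ≈ 0.667 vs ≈ 0.333 under these assumptions
```

Drop the assumption that the host knows where the prize is, and the advantage of switching evaporates, which is perhaps closer to my friend’s “do what works for you”.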

Can you both indicate which “flavour” of “random variable” you are talking about? It has different meanings to different people at different times; sometimes they prefix the term with another word to indicate the flavour they are using, and sometimes it’s clear from the context in which the term is being used (neither case holding here).

Very loosely, all flavours of random variable are “elements” whose “state” on any particular experiment/try is arrived at purely by chance. The outcome of the state over many experiments/tries has a probability distribution over a set or range of states.

The reason for not using “values” in the above is that “random variables” need not be numeric in nature; they could be physical states, or elements of spoken or written language, etc. These non-numeric elements are usually amenable to enumeration in some way, even if just by an indexed list and associated probabilities.

As I’ve indicated in the past, it’s very, very important not to confuse the number of states with the probability each state has.

For instance, a 747 has four engines, and the state of each engine could be the binary choice of “fully functional” or “not fully functional” [1], in which case, as there are four engines, the total number of states for the 747 in this discussion is 2^4 = 16 discrete states. However, each of these states has a probability of occurring. This way of looking at the 747 treats it as a “discrete random variable”.
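Enumerating those 16 states and attaching a probability to each is straightforward (the per-engine figure below is an arbitrary placeholder, not a real reliability number):

```python
import itertools

p_ok = 0.999  # assumed probability an engine is "fully functional" (illustrative only)

# Each engine is 1 (fully functional) or 0 (not fully functional):
# 2**4 = 16 joint states for the four engines.
states = list(itertools.product([0, 1], repeat=4))

# Probability of each joint state, assuming the engines fail independently.
def p_state(state):
    prob = 1.0
    for engine in state:
        prob *= p_ok if engine else (1 - p_ok)
    return prob

total = sum(p_state(s) for s in states)
print(len(states), round(total, 12))  # 16 states, probabilities summing to 1
```

Which makes the point concrete: the count of states (16) and the probabilities attached to them are entirely separate things.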

Now, you could decide that, as an engine may still be usable whilst not fully functional, the binary choice of engine state is inappropriate, so you assume each engine has a range of states from fully functional down to fully not functional [1]. Thus the state of each engine lies on a curve and is a “continuous random variable”.

Now the problem with the “continuous” view is that our 747 has a very complex state that is a product of the continuous states of the four engines, which might be directly comparable individually, but which, due to their positions on the wing, contribute different probabilities to the final state of the 747.

Back when the 747 was designed we did not have the computing power to model the complex state based on the continuous random variables attributable to each engine, and the initial design process would have used the 16-state model of the discrete binary view of the engines.

However, as the design process progressed, the 16-state model would have been augmented by using multiple states for each engine, selected via a lookup table indexed not by the number of a state but by the probability of the state. And fairly quickly the number of states would become so large that other (Monte Carlo) methods would be used…
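A minimal sketch of the Monte Carlo alternative might look like this; the thrust distribution and the thrust threshold are purely illustrative assumptions, not aviation data:

```python
import random

rng = random.Random(42)

def engine_thrust():
    # Hypothetical continuous engine state: usually near fully functional,
    # occasionally degraded (triangular distribution chosen only for illustration).
    return rng.triangular(0.0, 1.0, 0.98)

def trial(required=2.0, engines=4):
    # One sampled joint state: does total thrust meet the assumed requirement?
    return sum(engine_thrust() for _ in range(engines)) >= required

n = 50_000
p_enough_thrust = sum(trial() for _ in range(n)) / n
print(p_enough_thrust)
```

Rather than enumerating an intractable number of discrete joint states, you sample the continuous ones and let the sample frequencies stand in for the probabilities.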

Now, the function that maps probability to state has a “distribution”, and this is sometimes used as part of the name of the element; there appears to be only limited agreement on how to do this (I’ve heard people say “a normal continuous random variable” and “a continuous random variable of normal distribution” and mean the same thing, and sometimes not…).

Now there is a problem: a continuous random variable’s function could quite easily (in fact, with physical objects, often is) be discontinuous in nature, the simple example being a load on a chain: as the load is increased, the chain starts to stretch, until it breaks. At that point the simple nomenclature for “random variables” becomes difficult at best, or you could say “it jumps the rails”.

But… this has real-world knock-on effects. Not all discontinuous functions need have catastrophic real-world effects, so the system used to model them needs to allow for the region around the change to be dealt with at increased sensitivity. Thus you need to be careful you don’t “run out of bits”.

But there are further problems with real-world mapping, namely “hysteresis” and “lag”. The normal assumption with “random variables” is that they are “memoryless”: it does not matter in which direction, or how fast, you traverse the probability curve; the mapping remains the same… This is almost completely at variance with the real world, and for various reasons both engineers and mathematicians “pretend” these effects don’t exist by limiting the scope of the model in some way… Maths says an infinitely thin beam can be infinitely stiff, but reality has very different ideas, and we get oddities.

Both hysteresis and lag will give rise to frequency dependencies, and thus oscillatory conditions in the real world, which is why engineers have “stability criteria for operation”, or filter the inputs/feedback in some way so you get “unconditional stability in operation”…

Then you have to remember that physical objects do have “memory”: a beam will bend under load and, if the load is not excessive, will return to its original state. However, above a certain load you exceed the “plastic limit” and the beam does not return to its original state. Your “random variable” then becomes a “random active variable”, and you can see this in “Catastrophe Theory”, which encompasses such terms as “tipping point”, “avalanche effect”, “domino effect”, “snowball effect”, “butterfly effect”, etc. [2].

But whilst complex mathematical models can take some of this into account, it becomes problematic at best. So other theories have arisen to deal with it (to a limited extent). One such is “chaos theory”. In pop culture it is normally talked of as being the result of the butterfly effect, or high sensitivity to input conditions. However, there are many systems that have high input sensitivity but are not chaotic in behaviour. Thus other conditions are required, one of which is “topological mixing”, which uses “strange attractors” or Julia-set repulsors.

However, this in and of itself is insufficient to guarantee chaos. You can view those sloped nail boards at funfairs as a field of static repulsors into which you roll your ball; however, you usually get 50/50 odds on movement left or right, so the output is in effect a normal distribution, because the ball is in effect memoryless and is just a random variable, not a random active variable. Real chaos requires either or both the “particle” or the “attractors”/“repulsors” to have memory and change actively; in effect we can never see it, because it also requires the absence of external influence. For instance, Brownian motion would be an ideal candidate were it not for the external effects of gravity causing the more active, and thus less dense, areas of liquid to be less attracted towards the gravitational source…
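The “high sensitivity to input conditions” is easy to demonstrate with the logistic map, a textbook chaotic system (my example, not one from the discussion above):

```python
# Logistic map x -> r*x*(1-x); at r = 4 it is in a chaotic regime.
def orbit(x0, r=4.0, steps=60):
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = orbit(0.2)
b = orbit(0.2 + 1e-9)  # perturb the input by one part in a billion
divergence = max(abs(x - y) for x, y in zip(a[-10:], b[-10:]))
print(divergence)  # the two trajectories end up far apart
```

Note the map itself is fully deterministic; the apparent unpredictability comes entirely from that sensitivity to the input.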

Anyway, Brownian motion reminds me, via Douglas Adams, that my breakfast cup of tea is cooling, and thus improbability is reducing to the normality that requires it be drunk or microwaved 😉

So, to recap: “random variable” is often used in a way that makes its meaning almost incomprehensible outside of a given context. At its simplest it means that “chance” makes foreknowledge of the outcome of the probability function on any given try/experiment unknowable, just as you would expect with a perfect coin or die.

[1] It is important to note the difference between “not fully functional” and “fully not functional” in the two examples as this small difference in wording has significant effects on the number and type of states to be considered.

[2] It is important to remember that these terms also have different meanings to different people. For instance, cryptography has borrowed “avalanche effect” from engineering, and information theory has borrowed “entropy” from thermodynamics. In each case the implications of the terms, especially around edge cases, are very different.

understanding is that a ‘random variable’ is a dependent variable with

The mathematical definition of a “random variable” is a mapping from a set of outcomes to the real numbers. So I would say, in your style, that a random variable **has** dependent and independent variables. My applied-math PhD friend is visiting me this weekend, and I ran this by him…