## TSA Uses Monte Carlo Simulations to Weigh Airplane Risks

Does this make sense to anyone?

TSA said Boeing would use its Monte Carlo simulation model “to identify U.S. commercial aviation system vulnerabilities against a wide variety of attack scenarios.”

The Monte Carlo method refers to several ways of using randomly generated numbers fed into a computer simulation many times to estimate the likelihood of an event, specialists in the field say.

The Monte Carlo method plays an important role in many statistical techniques used to characterize risks, such as the probabilistic risk analysis approach used to evaluate possible problems at a nuclear power plant and their consequences.

Boeing engineers have pushed the mathematical usefulness of the Monte Carlo method forward largely by applying the technique to evaluating the risks and consequences of aircraft component failures.

A DHS source said the work of the U.S. Commercial Aviation Partnership, a group of government and industry organizations, had made TSA officials aware of the potential applicability of the Monte Carlo method to building an RMAT for the air travel system.

A paper by four Boeing technologists and a TSA official describing the RMAT model appeared recently in Interfaces, a scholarly journal covering operations research.

I can’t imagine how random simulations are going to be all that useful in evaluating airplane threats, as the adversary we’re worried about isn’t particularly random—and, in fact, is motivated to target his attacks directly at the weak points in any security measures.

Maybe “chatter” has tipped the TSA off to a Muta al-Stochastic.

From the abstract, it seems they are attempting to perform a cost-benefit analysis of proposed security measures:

===========ABSTRACT============

The United States Commercial Aviation Partnership (USCAP), a group of government and industry stakeholders, combined several operations research methods in an analytical process and model that encompasses the US commercial aviation industry, including travelers, airlines, airports, airline and airport suppliers, government agencies, and travel and tourism entities. With input from these stakeholders, the model, combining system dynamics and econometrics, evaluates the effects of proposed security measures over 25 years. It enables all stakeholders to share a common understanding of these effects and helps government decision makers to improve security without undue and unforeseen operational and economic impact. The model uses linear and nonlinear programming, single and multivariate regression, system dynamics, econometrics, and Monte Carlo simulation. Since 2004, the government has considered the model results in determining policies for screening and credentialing airport employees, screening passengers and cargo, determining security staffing levels, and charging security fees. All participating stakeholders reviewed each analysis for acceptability. Based on the model’s success, they envision extending its use to include nonsecurity policy issues.

I don’t see how this can work. Sure, simulations like this work when dealing with complex but static systems like a jet (as the Boeing engineers do), but how can they work when the enemy is not static, can change tactics quickly, or can call off an attack quickly? The enemy has intelligence of its own that the Monte Carlo simulation can’t incorporate when running its scenarios.

One thing I would like to see, though: does the Monte Carlo simulation show that what happened on Sept. 11, 2001, would happen (the way companies use actual sales history to validate a forecast of the future)? If they can show that the MC simulation would have predicted the atrocities of Sept. 11, then I might believe this simulation can predict future terrorist attacks and airplane mishaps.

The key point is identifying the vulnerabilities before the enemy does. One way of doing so is to model the scenario mathematically, in order to do risk assessment.

The problem with such models is that they tend to get very complex and difficult to evaluate (at least if you want them to be somewhat close to reality). Monte Carlo techniques are a way of overcoming these limitations, sacrificing analytical simplicity for computational effort.

Therefore – while the rhetorical question about randomness may raise a few laughs – I think Monte Carlo methods can clearly be useful for attempting to model an attack scenario.

“From the abstract, it seems they are attempting to perform a cost-benefit analysis of proposed security measures”

I’m all for performing cost-benefit analyses of proposed security measures, but I don’t think Monte Carlo simulations are the way to do it. What’s the random variable here? Terrorists are not random. The London liquid bombers didn’t choose liquids at random, they chose liquids because the airplane screeners ignored liquids. And next time they won’t choose liquids because the TSA screeners confiscate liquids. Or maybe they will choose them because the TSA is pretty incompetent about confiscating liquids. In either case, it won’t be a random decision that you can simulate with a Monte Carlo model.

anon123 June 22, 2007 1:59 PM

Wow, I don’t know if that last comment really came from Bruce or not, but it shows a very weak understanding of Monte Carlo methods. They have nothing to do in general with random variables, etc. They are just basically an approximation technique for solving many kinds of mathematical problems. Yow.

dragonfrog June 22, 2007 2:02 PM

I can see some applications – specifically, the work on the likelihood of random aircraft failure could probably be extended to predicting the likelihood of a bomb of a particular size and power taking down a plane of a particular model and wear characteristics.

Suppose you could figure out that, for example, a bomb with less than X grams of nitroglycerine, placed in the hold of a 747 that is under 8 years old, is less than 5% likely to cause the plane to crash. Then you could tune your screening equipment to minimize costly false positives due to people’s heart medication, while being reasonably certain to find any dynamite bombs that would stand a serious chance of taking out a plane.

When the next flight is a more vulnerable plane, you turn up the sensitivity, and have to absorb the cost of more false positives to get the same degree of certainty.
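
That threshold idea can be sketched as a toy Monte Carlo estimate. Everything here (the failure model, the numbers, the function names) is invented purely for illustration, not taken from any real aircraft data:

```python
import random

def plane_crashes(charge_g, aircraft_age_yr, rng):
    # Toy structural model (numbers invented): crash probability grows
    # with charge mass and airframe age.
    p = min(1.0, (charge_g / 500.0) * (1.0 + 0.05 * aircraft_age_yr))
    return rng.random() < p

def crash_probability(charge_g, aircraft_age_yr, trials=100_000, seed=0):
    # Monte Carlo estimate: simulate many bombings, count crashes.
    rng = random.Random(seed)
    hits = sum(plane_crashes(charge_g, aircraft_age_yr, rng)
               for _ in range(trials))
    return hits / trials

young = crash_probability(20, 2)    # roughly 0.044 under this toy model
old = crash_probability(20, 30)     # roughly 0.10 under this toy model
```

With estimates like these in hand, a screener could in principle pick a detection threshold per aircraft, trading false positives against the chance of missing a charge large enough to matter.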

dragonfrog June 22, 2007 2:05 PM

OK, I guess that particular example is lame – baggage gets checked in once, and transferred over connections.

If you did what I proposed, you could make sure your first flight was on a big tough plane, and get your bags transferred to a feeder flight to some small regional airport, which would be on a weaker plane…

“Wow, I don’t know if that last comment really came from Bruce or not, but it shows a very weak understanding of Monte Carlo methods. They have nothing to do in general with random variables, etc. They are just basically an approximation technique for solving many kinds of mathematical problems. Yow.”

It came from me. And I admit a weak understanding of Monte Carlo methods. If you have the time and inclination, please explain.

They may well know what they are doing.

Monte Carlo methods are a well-known and well-studied technique for solving difficult integration problems that arise in the analysis of Bayesian inference networks ( http://en.wikipedia.org/wiki/Bayesian_network ). The Monte Carlo aspect of it isn’t modeling the actions of people involved, it’s just providing methods for analyzing how probabilities propagate around the network.

http://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo

Also, in a network large enough to be meaningful, there are bound to be a lot of inputs that can be well-predicted. For example, if attackers use a certain attack like blowing the door off a plane (the probability of which you probably don’t know very well at all), then perhaps the pilot has a certain probability of being able to bring the plane down safely (the probability of which might be very well known). The benefit of putting all these things into a network and solving for aspects of that network is that you can see the cumulative, mutual effects of very uncertain things and very certain things at the same time, within the same framework.
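
A minimal sketch of that mixing of certain and uncertain inputs, with made-up probabilities for both: forward-sample the little network many times and read off the outcome frequencies.

```python
import random

def simulate_incident(rng):
    # Forward-sample one incident through a tiny hypothetical network.
    breach = rng.random() < 0.30        # poorly known input: attack succeeds
    if not breach:
        return "attack fails"
    recovered = rng.random() < 0.85     # well-studied input: pilot recovers
    return "safe landing" if recovered else "loss of aircraft"

def outcome_frequencies(trials=200_000, seed=1):
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        outcome = simulate_incident(rng)
        counts[outcome] = counts.get(outcome, 0) + 1
    return {k: v / trials for k, v in counts.items()}
```

The point is that the very uncertain number (0.30) and the well-known one (0.85) propagate through the same framework, and you can later vary the uncertain one to see how much the conclusions depend on it.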

Aaron Shelmire June 22, 2007 2:20 PM

Many Risk Assessment methods, especially quantitative methods, will fail in the face of rare but devastating attacks. Risk Assessments seem best used for deciding how to respond to the “typical” or “common” threats.

I could see them doing something similar to what is done in the RROI framework proposed by LBNL and by Ashish Arora and Rahul Telang at CMU.

At this link, there is an Excel worksheet that uses Monte Carlo simulation to perform quantitative risk assessments; I was given it while taking Ashish Arora’s Risk Analysis course.

From the abstract, I believe the modelling is at a much coarser grain than evaluating attacks – I suspect it is more to evaluate the possible costs of implementing a particular screening strategy (manpower etc.) given various approximations of how the industry will grow over the model’s time span. This is much more suitable for Monte Carlo techniques. There may be more on the economic impact of a successful attack (the second paragraph – not included above – discusses the economic effects of the 9/11 attacks), but I do not have easy access to the article at the moment.

With respect to using Monte Carlo techniques to evaluate attack vectors – I could believe that there are models that would be amenable to this, for example ones with a very large number of parameters whose phase space is difficult to explore by other means. However, in those cases you are really only solving a model problem, and the “real” problem is likely to have avenues of attack you have not predicted. In that sense, I agree with the skepticism expressed here…

Andy Vaught June 22, 2007 2:32 PM

It appears that Monte Carlo methods are being used to “sum over all possible scenarios”. The idea is that since there are too many scenarios to examine, you pick a random subset and hope (or prove) that the average over your subset matches the average over all the scenarios.
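
In code, that “random subset of scenarios” idea looks something like this; the scenario space and the cost function are purely illustrative stand-ins:

```python
import random

def scenario_cost(scenario_id):
    # Stand-in for an expensive per-scenario evaluation: deterministic per
    # scenario, uniformly spread over [0, 100] across the huge scenario space.
    return random.Random(scenario_id).uniform(0, 100)

def estimated_mean_cost(n_scenarios=10**9, sample_size=50_000, seed=2):
    # Average a random subset of scenarios and rely on the law of large
    # numbers: the sample mean converges to the mean over all scenarios.
    rng = random.Random(seed)
    picks = (rng.randrange(n_scenarios) for _ in range(sample_size))
    return sum(scenario_cost(s) for s in picks) / sample_size
```

Here a billion scenarios could never be enumerated, but 50,000 random ones pin the average cost down to within a fraction of a percent.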

dragonfrog, the conclusion may be a little lame but I think the concept is valid. Instead of treating the attackers as random variables, the effectiveness of each component of an attack can be treated as one. For example, the effectiveness of screening for explosives is a random variable depending on the type of explosive, its mass, and how well it is sealed and concealed. Another variable in the chain is the effectiveness of the explosive given that it is placed in the cargo hold, essentially at random, by a baggage handler. Another is the effectiveness of baggage-handler screening: what are the odds that explosive-laden baggage will be handled by someone who will deliberately place it where it will do the most damage? If a particular event requires a large number of variables to all be close to 1.0 in their likelihood before serious loss of life or property occurs, then perhaps that threat should not be placed particularly high on the list of threats to be addressed.

Carlo Graziani June 22, 2007 2:38 PM

To expand a bit on what Ken Williams wrote:

There are several variants on what Monte Carlo techniques are used for. These variants can be so different as to make one deplore the fact that they are described by the same term.

What appears to be the case here is not “Simulation” — it’s not like an N-body simulation to attempt to follow system dynamics.

Rather, this appears to be more along the lines of the Integration/Probability Density exploration techniques, the most common and popular and useful of which fall under the rubric of Markov Chain Monte Carlo (MCMC).

One typically resorts to MCMC in order to get a handle on probability distributions in very high-dimensional spaces. Since the typical thing one wishes to do with a distribution is integrate it with respect to some parameters, this class of techniques also comes up in connection with “How the hell am I supposed to numerically integrate this function over a 100-dimensional space?”

My guess (I have not been able to locate the paper) is that they have a probability model for bad, terrorist-related things to happen. That model in effect yields a probability distribution over an enormously multi-variate space. The only game in town for extracting information from such a probability distribution is MCMC.
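
A minimal random-walk Metropolis sketch, with a standard Gaussian standing in for whatever high-dimensional distribution the real model actually produces (the dimensions, step size, and burn-in length are arbitrary choices for illustration):

```python
import math
import random

def log_density(x):
    # Unnormalized log-density of a standard Gaussian in len(x) dimensions.
    return -0.5 * sum(v * v for v in x)

def metropolis(dim=20, steps=30_000, step=0.5, seed=3):
    # Random-walk Metropolis: the workhorse MCMC sampler for distributions
    # too high-dimensional to integrate on a grid.
    rng = random.Random(seed)
    x = [0.0] * dim
    lp = log_density(x)
    trace = []
    for _ in range(steps):
        proposal = [v + rng.gauss(0.0, step) for v in x]
        lp_prop = log_density(proposal)
        # Accept with probability min(1, density ratio).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = proposal, lp_prop
        trace.append(x[0])      # track one coordinate of the chain
    return trace[5_000:]        # discard burn-in
```

Averages over the retained trace approximate integrals against the target distribution, which is exactly the “how do I integrate over a 100-dimensional space” problem mentioned above.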

Aaron Shelmire June 22, 2007 2:39 PM

As per Bruce’s comments about the attackers using different means to their ends, couldn’t the process ignore the path to the end and still model what the outcomes will be?

By this I mean, couldn’t you say that X% of attacks will end with the attackers being caught in the screening lines; Y% will result in a plane being hijacked, but the plane is shot down before a larger attack occurs; Z% will result in the plane being used for larger ends; A% will be purely hijackings for ransom; B% will be destruction attacks against the airplane; C% will be destruction attacks against the airport; and so forth…

You could do this at a higher level of detail as well, such as figuring out the cost of human casualties (always an easy subject) times the number of people in each type of plane and how many planes of that type are typically around, which would give you the average human-casualty cost, at least for plane incidents. You would have to figure out something else for incidents in the airport, and then for incidents in the parking lot, etc…

Then you would run the Monte Carlo simulation to see how the results hold up across many runs, at whatever your confidence level is (typically 95%), to lend greater confidence to the results.

It’s still based upon the creativity of the “end results of the attack,” but I believe you could ignore the path of attack and focus on the possible results to make things easier. There are lots of paths, but not so many results (human death, capital loss, reputation loss, damage to external entities, business lost to cancelled and delayed flights, etc.).
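
A sketch of that outcome-level approach. The categories, probabilities, and costs below are all invented; the point is only the mechanics of sampling end results rather than attack paths:

```python
import random

# Hypothetical outcome categories with (probability, cost) in arbitrary
# units, ignoring the attack path and sampling only end results.
OUTCOMES = [
    ("caught at screening",      0.60,      1),
    ("hijacking for ransom",     0.15,    100),
    ("aircraft destroyed",       0.15,  5_000),
    ("aircraft used as weapon",  0.10, 50_000),
]

def expected_cost(trials=200_000, seed=4):
    # Sample many attacks by outcome category and average the cost.
    rng = random.Random(seed)
    _labels, probs, costs = zip(*OUTCOMES)
    draws = rng.choices(costs, weights=probs, k=trials)
    return sum(draws) / trials
```

Running this many times with perturbed inputs would show how sensitive the average cost is to the invented percentages, which is where the real argument about such models lives.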

So, basically this press release is saying that Boeing has developed a statistical model and they will analyze said model using statistical techniques. Monte Carlo sounds cool, but it really just refers to a generalized set of sampling techniques used to perform inference on a model without a tractable closed form (i.e. most of the interesting ones). It’s like saying you will use a calculator to do arithmetic.

Monte Carlo simulation could indeed be a very useful tool for risk analysis, if used properly. I would guess that the intention is not to predict human decisions — that would be pointless to even try — but rather to model particular failure modes.

Put another way, Monte Carlo can’t tell you to X-Ray everyone’s shoes. It can, however, be used to figure out things like what effect a given set of broad scenarios (explosive on a plane, for example) may have. For example, that simulation may tell us that a fairly small amount of explosive in a particular part of a plane would be extremely damaging, and so that part may be inspected more closely in the future.

The idea would be that a human designs a potential event model, and variations on that event are tested via Monte Carlo to determine an estimated impact (and thereby help with a risk analysis). This is much more efficient than building in-depth simulations for each event.

Ideally, the results from the Monte Carlo simulations would be used to identify events and failure points that merit further analysis.

JackG't June 22, 2007 3:37 PM

I don’t see how the RMAT could productively be applied to the behaviors of humans. Which humans? Some that come to mind are plane crews, airport personnel of all kinds, passengers, visitors, vendors, government agency personnel, computer and communication device users in many places, attackers, and attack planners.

If the RMAT is to be a smokescreen, it’s unlikely to repel attackers or to reassure the public. Something basic, such as successfully repelling tiger teams, would come much nearer to reassuring the public.

Andy Dingley June 22, 2007 3:39 PM

I suggest looking at the history of Monte Carlo in Operational Research, particularly by the Royal Navy back in WW2 developing convoy tactics against U-boats. Although it’s horribly easy to mis-use these techniques, they can work out usefully — especially in fields where your concrete information is particularly sparse.

Here is one example of a use case for the problem at hand: How often will false positives occur for a possible new screening test? Let’s say we’re testing checked luggage for traces of volatile chemicals that could be from explosives but also from certain everyday items.

Start with a large population of travellers, all innocent. Use existing data to model how many travellers are in each group, along with their ages and sexes. Generate one or more random suitcases of belongings for each, based on what people usually take on trips. If there are any that leak the signature gases, estimate the leak rate based on the (randomized) source size, the (randomized) position within the suitcase, and the reduction due to absorption by the rest of the suitcase contents.

This kind of model allows you to ask questions like: How many false alarms will we get per day on average? Are there particular types of traveller that rarely produce false alarms (such that we should be extra-suspicious if we do see an alarm)? We can look at the travel patterns of different populations — maybe the false positive rate would be too high in one airport but not in another. Or in the international vs the domestic terminal. And so on. And once you have your passenger luggage model set up, you can do similar studies for other scenarios. The model is only as good as its assumptions, of course, but the nice thing is that you can validate the assumptions more or less piecewise, and carry out comparisons to real passenger data.

Now, the converse problem — how often does a security system correctly identify a real threat — is a lot harder because there we’re dealing with small statistics and an adversary who’s trying to game the system. That may need a different set of tools. (Maybe they have a way to use MC for that, too.) But if the cost of your security system is driven by false positives, this kind of approach can definitely help you understand the cost half of cost-benefit analysis better.
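
A sketch of that kind of false-positive study, with every number invented (carry rates, leak physics, and the detector threshold are all placeholders for what a real model would calibrate against data):

```python
import random

def triggers_alarm(rng):
    # One innocent traveller: does their randomized luggage leak enough
    # signature vapour to trip the detector?
    if rng.random() >= 0.02:             # most people carry nothing volatile
        return False
    source_size = rng.uniform(0.1, 1.0)  # relative amount of the substance
    absorption = rng.uniform(0.2, 0.9)   # fraction soaked up by the contents
    return source_size * (1.0 - absorption) > 0.25   # detector threshold

def false_alarms_per_day(passengers=30_000, seed=5):
    # Simulate a day's worth of innocent travellers through the detector.
    rng = random.Random(seed)
    return sum(triggers_alarm(rng) for _ in range(passengers))
```

Sweeping the threshold or the passenger mix then answers the questions above: alarms per day, and which airports or terminals would drown in false positives.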

RSaunders June 22, 2007 3:53 PM

A lot of cost-benefit analysis can be done efficiently with stochastic methods like Monte Carlo simulation. Fortunately, terrorist attacks are not among them. These models rely on things that happen frequently, with some random variable like timing. They are great for telling you whether adding another drive-up lane will increase taco sales enough to pay for the construction. You do hundreds of taco transactions per day, and some folks leave because the queue is too long. The time between cars is treated as random, even though the traffic signal near you no doubt provides a strong bias. Another queue means faster service, and more customers. A model can tell you whether it’s enough more customers to justify the expense.

The abstract is talking about a model of the air transport system. That is a lot like my taco analogy, and you might be able to understand how many more TSA lines you will need if you decide to spend 15 seconds more on each traveler.

However, it would be wrong to say this is modeling terrorists. Terrorists are not like taco customers. They are very few, and anything with a small number of samples will be very hard to assess with this sort of a model. You could model the time delay caused by a power failure in the TSA screening area, but lightning or electrician error are much higher probability causes than terrorism.
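
The taco analogy is easy to simulate. Here is a toy drive-through model with invented parameters (arrival rate, service time, and the queue length at which customers give up and drive away):

```python
import random

def lost_sales(lanes, hours=12, arrivals_per_hr=40, service_min=3.0,
               balk_len=4, seed=6):
    # Toy drive-through: memoryless arrival times, fixed service time, and
    # customers who drive off when every lane already holds balk_len cars.
    rng = random.Random(seed)
    t, lost = 0.0, 0
    cars = [[] for _ in range(lanes)]        # per-lane departure times
    while t < hours * 60:
        t += rng.expovariate(arrivals_per_hr / 60.0)   # next arrival
        for lane in cars:                    # drop cars already served
            lane[:] = [d for d in lane if d > t]
        shortest = min(cars, key=len)
        if len(shortest) >= balk_len:
            lost += 1                        # queue too long: a lost sale
        else:
            start = max([t] + shortest[-1:]) # wait for the car ahead
            shortest.append(start + service_min)
    return lost
```

Compare `lost_sales(1)` with `lost_sales(2)`: the drop in lost customers, times margin per taco, against construction cost, is exactly the kind of question this class of model answers well.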

From looking at the paper, it appears that they developed a detailed model of the entire air transport system and are using it to evaluate the “operational and economic effects” of various security proposals.

For example, they model passenger demand as a function of GDP, ticket price, travel time, fear, and hassle. The “hassle” parameter is used to quantify the decrease in travel due to annoying security measures.

They used Monte Carlo simulation while quantifying passenger screening throughput.

I do think it would be a real good idea for people to read the paper before bloviating too much.

dave tweed June 22, 2007 5:33 PM

I haven’t found enough info to identify the paper and hence haven’t read it. However, one thing Monte Carlo approximation might be useful for is estimating the economic/social effects of various terrorist acts (using other knowledge to judge whether they succeed). E.g., supposing a dirty bomb gets set off in coastal oil terminal X, requiring k months to decontaminate, what’s the likely knock-on effect on refining, hence fuel prices, hence prices of transported goods, etc.? That is, you could model the effects assuming the terrorists succeed in their initial goal. Because these involve the interaction of lots of people and objects, a broad statistical model is much more likely to be informative. Of course, even with this information you’re still probably in movie-plot-threat land.

Any modeling involves assigning probabilities. If the team makes assumptions like “Nobody would be so stupid as to let this happen” — and they’re wrong — then the model is irrelevant.

Carl Grayson June 22, 2007 8:36 PM

As part of Risk Management you’re obviously looking for the best bang for your buck in risk reduction and this appears to be the intent (to quote from the abstract “evaluates the effects of proposed security measures”). With MC you can enter the relative reduction for defences to specific attack vectors to see what the overall minimisation could potentially be.

Although I agree that some outcomes will likely be sensitive to initial values (say hello to chaos theory), others will likely fall out in a stable way. By adjusting multiple input variables and testing the output’s sensitivity to them, you can also determine confidence levels in the outputs (much as weather prediction is run today). It isn’t precise all the time, but it can still be useful.
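
A sketch of that sensitivity test, under a deliberately simple invented model: attack attempts arrive at random through the year, and each is independently caught with some detection probability. Perturbing that probability and watching the output move is the whole exercise:

```python
import random

def undetected_per_year(p_detect, attacks_per_yr=5.0, trials=100_000, seed=9):
    # Hypothetical model: memoryless attack attempts over one year, each
    # independently detected with probability p_detect.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t = rng.expovariate(attacks_per_yr)
        while t < 1.0:
            if rng.random() > p_detect:
                total += 1
            t += rng.expovariate(attacks_per_yr)
    return total / trials

# Sensitivity check: perturb the detection rate and watch the output move.
base = undetected_per_year(0.90)     # analytically 5 * 0.10 = 0.5
worse = undetected_per_year(0.80)    # analytically 5 * 0.20 = 1.0
```

A 10-point drop in detection doubles the expected undetected attacks here; in a model whose outputs swing far more wildly than its inputs, you have found one of the chaotic regions the comment warns about.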

Neighborcat June 22, 2007 8:57 PM

Take a step back and it makes perfect sense; someone from the “industry” side of the Commercial Aviation Partnership has a service to sell, so they sell it to a government agency desperate to justify its continued existence. Whether it works or not is irrelevant. The term “non-disprovable security measure” comes to mind.

Anonymous June 23, 2007 6:22 AM

For example, they model passenger demand as a function of GDP,
ticket price, travel time, fear, and hassle. The “hassle” parameter
is used to quantify the decrease in travel due to annoying
security measures.

Typical American. Economy, convenience, service, blah, blah, blah…

Maybe someone can do some Monte Carlo simulation on starting bullshit wars based on lies, killing hundreds of thousands of innocent civilians, pissing in everyone’s cornflakes, and the resulting economic ramifications?
And how come that, despite all this economic expertise and countless Nobel Prize winners, the country is so deep in debt that every yank owes hundreds of thousands of dollars?
I’d rather spend all that money in Monte Carlo/Monaco if I were you!

Maybe you yanks should start to use some COMMON SENSE again instead of all that incessant “benefitting the economy” hogwash?

Well, you got a few explanations of what MCMC is. I’m currently teaching it, and I can assure you that there are lots of abuses of the general class of methods. Lots of people use it because they think they should, not because they know they should… Anyway, there are some good explanations above, I think.

One interesting point is the source of randomness. Most of the time linear generators are not as bad as some think (for this task), but the MT (Mersenne Twister) is common. Sometimes you get better performance (convergence) with m-sequences (maximal-length LFSRs), especially in classic integration problems. I have used AES in counter mode to check that my simulations don’t change with the RNG.

It’s all lots of fun.

OK, so now really OT: talking of fun.
Do you have a response to “Differential-Linear Attacks against the Stream Cipher Phelix”? I would call it a chosen-nonce attack rather than just a chosen-plaintext attack. I assume there has been discussion about this and its validity, but I cannot find it.

Also, because Phelix did not go to round 3, I was hoping that this general approach to a stream cipher will not be abandoned; i.e., no S-boxes and a MAC at the same time is great (Rabbit has no S-boxes, but patents and no MAC). Stream ciphers that also need HMAC or whatever don’t really seem to solve the problem better than AES in an appropriate set of modes, IMO.

SteveJ June 23, 2007 7:31 AM

@Bruce: “it won’t be a random decision that you can simulate with a Monte Carlo model.”

Monte Carlo is useful for more than just truly random inputs. Weather forecasting uses Monte Carlo simulations, not because the current state of the atmosphere is “random” (it isn’t – it has exactly one state), but because our knowledge of it is incomplete. The crude idea is that you run a squillion scenarios with different inputs within the possibilities dictated by the measurements you can make. If particular outcomes occur a lot, then that’s your “most likely” prediction.

Since we don’t know if, when or how the terrorists will attack, it might make sense to estimate probabilities of different combinations (strictly speaking, these would be Bayesian reverse probabilities conditional on what we do know about terrorists), run them against our defenses in some kind of simulation, and observe which outcomes are most common. That might help tell you what the cost is likely to be of an attack, and what mitigation and recovery responses are likely to be of benefit. Or it might not – hard to say since I don’t know how much the modellers know about terrorist behaviour.

In fact, the next terrorist attack could be a truly random event, if there’s some terrorist mastermind with an undergraduate course in Game Theory who thinks he has calculated the optimal mixed strategy for getting a bomb past some set of security measures. But it’s actually irrelevant whether attacks are truly random: we can’t perfectly predict them, so the best we can do may be to model them as random.

John Scholes June 23, 2007 8:09 AM

Many good points above. I agree that MC is good on costs (false positives etc), not so good on catching terrorists. But you have to put up effective barriers that the terrorists will not bother to challenge (because they can see they are effective). That is the nature of defense.
But I liked the bit at the end of the referenced website – hinting that this kind of thing could be a smokescreen for more effective techniques!
More fundamentally, I agree you have to pander to silly public fears, but there is only one serious terrorist threat (fission, and even fusion, weapons). The safeguards issue is less worrying than it was, but the wretched things are still far too easy to make. I am amazed no terrorist group has done it yet.
I think much more attention ought to go on that.

Schmoe June 23, 2007 11:20 PM

Monte Carlo simulation is fine if you have well-understood random distributions, like weather, production systems, etc., but for terrorist acts the probability distributions are pretty well unknown. What are the odds of snakes on a plane? The data on terrorist activity is pretty sparse, and since terrorists are adaptive, even good distributions of past results will underestimate the unseen extremes of the distributions. For example, if the MC simulation assumes a 0.0001% chance of snakes on a plane, it won’t ever think to simulate rabid wolverines on a plane.

Ben Kaduk June 24, 2007 2:34 AM

As mentioned above, Monte Carlo methods are essentially the only game in town for getting useful information from high-dimensional probability distributions. For the record, I have not read the paper in question, so I don’t know what their exact approach is, but there seem to be a few obvious ways to apply MC to planes and failure modes:

(1) model the (presumably random) failures of airplane components due to wear and fatigue — this makes sense, since airplanes are immensely complex, and their dependency graphs are convoluted. MC is good for determining just how vital a particular airplane component is.

(2) model the entire air-traffic system. I do not think this makes much sense, as the interactions between different planes (through air-traffic control, the FAA, and the like) are not very well understood (and subject to the fluctuations of human whim). Thus a mathematical model of them is difficult to formulate (random may be your best bet, if you can figure out what random means…), so the results of the MC are not likely to be useful.

(3) model terrorist attacks on the aviation system. This is an attempt to extend both (1) and (2) to include the human factors and motivations of groups of individuals who do not behave in ways that are easy to understand rationally. Again, this is difficult to model mathematically, so the results of the MC are of questionable utility.

In summary, MC simulations get practical information from mathematical models (usually models involving many components or dimensions, like the failure modes for all the parts in an airplane). However, the results are only as good as the mathematical model used. For things like parts failures on an airplane, this works quite well, but accurate mathematical models of human behaviour are quite difficult to come by — just ask your favourite psychologist.

MC is good for engineering; I don’t see it helping in the war against terror.
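
A sketch of point (1) above, with invented per-flight failure probabilities and a two-function dependency graph (each critical function is lost only if its whole redundant pair fails):

```python
import random

# Hypothetical failure probabilities per flight for a few components.
FAIL_P = {"pump_a": 0.01, "pump_b": 0.01, "sensor_a": 0.05, "sensor_b": 0.05}

def critical_failure(rng):
    # Sample independent part failures, then propagate them through the
    # dependency graph: redundancy means both halves of a pair must fail.
    failed = {part for part, p in FAIL_P.items() if rng.random() < p}
    hydraulics_lost = {"pump_a", "pump_b"} <= failed
    attitude_lost = {"sensor_a", "sensor_b"} <= failed
    return hydraulics_lost or attitude_lost

def failure_rate(trials=1_000_000, seed=8):
    rng = random.Random(seed)
    return sum(critical_failure(rng) for _ in range(trials)) / trials
```

In a real airplane the graph has thousands of nodes and correlated failures, which is exactly why simulation beats hand analysis; but the model quality, not the sampling, is still the limiting factor.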

J. Robertson June 24, 2007 7:57 AM

Many problems in math involve the evaluation of complex integrals and, as others pointed out, the Monte Carlo method gives you a good way to evaluate difficult integrals in multiple dimensions.

Try it out: if you want to compute the area of a circle of radius 1 centered at (0,0), generate many random points in the square [-1:1, -1:1] and count those that fall inside the circle. The number of points in the circle divided by the total number of points converges to the ratio of the circle’s area to the square’s area.

In many cases, this silly little algorithm gives brilliant results and works regardless of the number of dimensions of the problem. Of course it has some uses in probabilistic problems as well.
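
That hit-or-miss algorithm in a few lines of Python, using the unit circle so the answer is known (the area is pi, so four times the hit fraction converges to pi):

```python
import random

def estimate_pi(points=1_000_000, seed=7):
    # Hit-or-miss integration: the fraction of random points in the square
    # [-1, 1] x [-1, 1] that land inside the unit circle, times the
    # square's area (4), converges to the circle's area (pi).
    rng = random.Random(seed)
    inside = sum(1 for _ in range(points)
                 if rng.uniform(-1, 1) ** 2 + rng.uniform(-1, 1) ** 2 <= 1.0)
    return 4.0 * inside / points
```

Nothing changes structurally if the square becomes a 100-dimensional box, which is why the method scales where grid-based integration cannot.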

It seems to me that this article points the finger at a very small detail of an overly broad method: that you can use math “for screening and credentialing airport employees”, for instance.

Brian June 24, 2007 11:07 AM

I love the idea of using stochastic models to estimate the costs of security mechanisms, but I’m terrified that it’s been almost six years since 9/11 and we are still stuck obsessing about planes and airports.

Paul Johnson June 24, 2007 11:53 AM

Looks to me like a useful but pedestrian piece of work being trumpeted by the PR department to show how their boffins are really on the job. Lots of acronyms and jargon that the PR puff writer didn’t understand, in the hope that the average reader will be impressed.

As far as I can tell from Google “RMAT” is about reliability and maintainability. Aviation engineers appear to develop these models routinely. They predict the likely maintenance costs and downtime for aircraft, and as others have noted, Monte Carlo methods are generally used in the evaluation engines of these tools.

So it looks like someone at the TSA has taken an RMAT tool and tried to build a model of terrorist success. As others have noted, terrorists can apply intelligence, but if you consider a certain attack mode (e.g., 50 g of a certain explosive) then you can use a model like that to predict the likely impact: does the airliner break in half, or does it just get a hole in the side? Would overpressure release mechanisms help? And so on.

So yes, it makes sense. The only sad thing is that the press release makes it sound a lot more important than it is. But hey, that’s what we employ PR people to do.

Paul.

Weigh vulnerabilities? Yes.

Identify vulnerabilities? No; the potential attacks and defenses, as well as all the components involved, must already be well known to run a Monte Carlo simulation.

adam smith June 25, 2007 8:17 AM

It looks to me like they’re modelling the impact on the industry of proposed security methods, not the effectiveness of the methods nor the failure modes, but “will Johnny fly?” and how to deal with the answers.

This is likely in response to the “next time I’m driving” sentiment and the recent news stories about the diminished utility of air travel for time and money savings. By predicting the wait time more accurately and having pre-baked impact plans, they’ll allow people to spend less time idle at the airport – they can diminish the security exposure of the massive campgrounds at the terminal, and improve passenger throughput (and presumably revenue).

guvn'r June 25, 2007 12:01 PM

I love it!

All this discussion, and still no one has mentioned the last paragraph of the article referenced by Bruce’s link:

“As for operations research’s role in the Battle of the Atlantic, the Navy used it not only to help find submarines but also as a public relations smokescreen to hide the more effective mathematical tool used to sink subs: Cryptanalysis of German naval radio traffic.”

duh.

On your nightly news, how many aircraft have been brought down lately?

Now, how many car bombs have you seen detonated?

If you were the TSA, which would you defend against – a suicide bomber on a plane or a suicide bomber in the security line in the airport leading to the plane?

Today’s TSA doesn’t even functionally prevent the suicide bomber on the plane scenario – they just steal toddlers’ sippy cup liquids. So where does that leave us? Without rights or security. Go Ben.

“Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.” – Ben Franklin

Monte Carlo methods have been used successfully in audits. You don’t have time to audit everything, so you let a computer decide which items to audit. The advantage of letting the computer decide is that it removes potential bias.

I can imagine this working in a safety context. You don’t have the resources to check the security of every part of an airport. Instead you do a thorough review of those areas that a random number generator told you to.

The advantage? Again, it removes bias. In particular it forces you to review a certain number of areas where you can’t imagine a vulnerability.
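A minimal sketch of that unbiased selection, assuming a simple list of audit areas (the area names and sample size below are illustrative only):

```python
import random

def pick_audit_targets(areas, k, seed=None):
    """Choose k areas to review, uniformly at random and without replacement.

    Because every area is equally likely to be drawn, the selection cannot
    be steered toward (or away from) anyone's preconceptions.
    """
    rng = random.Random(seed)
    return rng.sample(list(areas), k)

# e.g. pick_audit_targets(["cargo bay", "perimeter fence", "baggage belt",
#                          "catering", "fuel depot"], 2)
```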

Anonymous June 28, 2007 6:03 AM

@Bruce, Schmoe and others:

Monte Carlo methods can be useful even in problems that have no randomness in the original problem at all. By transforming the problem into a randomized one and then solving it by Monte Carlo methods, an approximate solution can be obtained with vastly less computation (and hence, lower cost).

The classic example of this is finding the value of pi by Monte Carlo. Obviously, pi is not random. But what we do is generate random points (pairs of random numbers) where x is in (-1,1) and y is in (-1,1). We then find the distance of (x,y) from origin, it is sqrt(x^2+y^2). If the distance is less than 1 then it is in the unit circle, so we count it “in”. In the long run the fraction of points counted “in” is pi/4, so we can find pi.

This is actually not a very good way to calculate pi (the number of random points required rises exponentially with the number of digits of precision wanted) but it shows that the domain of Monte Carlo methods goes far beyond “randomised”, “probabilistic” and “statistical” problems and includes even such things as complex, nonlinear but strictly analytic problems.
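That precision cost is easy to check numerically: the standard error of the hit-fraction estimator shrinks only as 1/sqrt(N), so each extra decimal digit of pi costs roughly 100x more samples. A small Python experiment (sample sizes and trial counts are arbitrary choices of mine):

```python
import math
import random

def pi_estimate(n: int, rng: random.Random) -> float:
    """One Monte Carlo estimate of pi from n points in the unit quarter-square."""
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def mean_abs_error(n: int, trials: int, seed: int = 42) -> float:
    """Average |estimate - pi| over several independent trials."""
    rng = random.Random(seed)
    return sum(abs(pi_estimate(n, rng) - math.pi)
               for _ in range(trials)) / trials

# Each 10x increase in samples shrinks the error by only ~sqrt(10) ~ 3.2x:
for n in (1_000, 10_000, 100_000):
    print(n, round(mean_abs_error(n, trials=10), 4))
```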

So is the Boeing team approach sensible? It is impossible to say from this article. It might as well say they are using calculus. All it tells us is that they know some mathematics; it doesn’t tell us anything about whether or not their proposed method is correct.

Hello all… lots of inaccuracies in the GCN coverage of RMAT (they never actually talked to us). Also, the USCAP abstract cited in the first comment is for a separate but related project. USCAP is an econometric model, while RMAT is an agent-based simulation model to assess risks.

If folks are interested in coming to work on RMAT and can pass a security clearance, send me your resume. I am hiring.

tsa_rmat@hotmail.com

• TSA Director of Risk Mgmt

Julie July 23, 2007 10:17 AM

Has no one here read The Black Swan by Nassim Taleb? Monte Carlo simulations may work well in situations that are driven by physical laws, such as reliability, where the outliers cannot significantly impact the aggregate if the sample is adequately large (think of the effect of including the tallest man in the calculation of the average heights of a sample of people). But social, political, and cultural systems exhibit randomness in a very different way than physical systems. The magnitude and frequency of outliers in social systems has a HUGE impact on the aggregate (think of including Carlos Slim in the calculation of the average salary of a sample of people).
And since the tails of Monte Carlo risk distributions, which would be of the most interest here, are very sensitive to the inputs, they are pretty useless as forecasting devices for phenomena such as terrorism, even though they work quite well for phenomena such as failure rates of machinery.
