Schneier on Security
A blog covering security and security technology.
December 9, 2008
Who Worries About Terrorism?
The paper, "Terrorism-Related Fear and Avoidance Behavior in a Multiethnic Urban Population," is for subscribers only.
Objectives. We sought to determine whether groups traditionally most vulnerable to disasters would be more likely than others to perceive population-level risk as high (as measured by the estimated color-coded alert level), would worry more about terrorism, and would avoid activities because of terrorism concerns.
Methods. We conducted a random digit dial survey of the Los Angeles County population October 2004 through January 2005 in 6 languages. We asked respondents what color alert level the country was under, how often they worry about terrorist attacks, and how often they avoid activities because of terrorism. Multivariate regression modeled correlates of worry and avoidance, including mental illness, disability, demographic factors, and estimated color-coded alert level.
Results. Persons who are mentally ill, those who are disabled, African Americans, Latinos, Chinese Americans, Korean Americans, and non-US citizens were more likely to perceive population-level risk as high, as measured by the estimated color-coded alert level. These groups also reported more worry and avoidance behaviors because of concerns about terrorism.
Conclusions. Vulnerable populations experience a disproportionate burden of the psychosocial impact of terrorism threats and our national response. Further studies should investigate the specific behaviors affected and further elucidate disparities in the disaster burden associated with terrorism and terrorism policies.
This is certainly related. As people search for health-related information on the Internet, a common result of their newfound "knowledge" is more stress and anxiety, which can manifest itself in new symptoms.
Posted on December 9, 2008 at 12:58 PM
There's a fault in the study, or at least the abstract: "...perceive population-level risk as high, as measured by the estimated color-coded alert level."
Asking someone to estimate what the current alert color (what the *government* believes the risk is) is does not tell you what *they* believe the risk to be.
Without checking, I would guess it to be orange, but I also believe the color thing is just overblown fear-mongering and that the actual risk is low.
@Melissa: "Without checking, I would guess it to be orange, but I also believe the color thing is just overblown fear-mongering and that the actual risk is low."
It probably is orange, and it is probably low, but I don't see where there has been widespread fearmongering. Of course, the ones who make the fearmongering charge will likely be the first to ask why the dots weren't connected and people weren't warned afterward.
Hindsight is pretty convenient: If nothing happens, say there was fearmongering and exaggeration. If something happens, demand to know why it wasn't prevented and why we weren't warned (but if they had prevented it, they'd have been accused of fearmongering and exaggeration).
Not saying they are always right (far from it), I'm just glad I don't have to deal with the "damned if you do, damned if you don't" scenarios that they do.
Isn't it always orange?
My beef with the color system is that even if you know it is orange (aside from the fact that it's always orange) you still have no idea what YOU are supposed to do.
The original intent of a signaling system is to pass on meaningful data to people watching so they can act accordingly. Red means stop, green go, etc.
These days just about everyone wants to put up a "threat-o-meter" as though they are measuring something tangible, but who knows what is supposed to happen when it changes.
I was directly involved in the security meter when I worked at Y! (security.yahoo.com) and would not let it go online until they could get Symantec to include some kind of useful action information. OK, I settled for action information, but perhaps someone finds it useful to know that they should scan their PC for viruses when the light is yellow.
You don't see where there has been widespread fearmongering? Politicians and journalists are constantly talking about terrorism as though it were the biggest threat we face, when in reality it's somewhere between shark attacks and lightning strikes in terms of its deadliness.
Years ago I was a Paramedic. In school, one of my clinical instructors always taught us: "If you hear hoof beats in the night, look out your window and expect to see horses, not zebras".
The media-driven, fear-mongering society we live in leads many people to always expect the worst, to look for the worst possible outcome for a given concern--to always look for zebras...
That's an interesting app on Yahoo!, but I'm a bit confused as to what it means in practice.
If I keep my PC up to date with patches for the OS and my anti-virus stuff and run regular scans (i.e. use the software as it's intended), am I tangibly more likely to have a problem if the threat level is red than I am if it's green? If everything I use is up to date and working as expected, I'd expect that I'm only at risk from an "unknown unknown", and I guess the threat level can't really quantify the unknown unknown, can it? In which case I'm more or less equally vulnerable at all threat levels?
I can see that if I had an unprotected PC the threat level might measure something useful, but then if the average life of an unprotected Windows PC is 20 seconds or whatever the headline quote is, even then the threat level won't make much difference, I'll be owned in 15 seconds for "high" levels and 30 seconds for "low"?
Or am I missing something?
@Davi Ottenheimer: "My beef with the color system is that even if you know it is orange (aside from the fact that it's always orange) you still have no idea what YOU are supposed to do."
I agree with that. On the flip side, it is a way for them to communicate if intelligence indicates a higher risk than normal, even if that risk is still low, and let people decide for themselves.
@Michael Ash: "You don't see where there has been widespread fearmongering?"
Perhaps I don't take their comments seriously enough. But I don't really think so. Before 9/11, there was very flimsy intelligence, and they put the airlines on notice and didn't tell the public. Afterwards, this came back as "They Knew" and didn't warn us or connect the dots. And remember, when that information came in, people felt about it the way some of us feel about it now--it wasn't much. After the fact, however, given the general reaction about who knew what, what they didn't do, who wasn't warned, etc., we shouldn't be surprised that they adjust color schemes and warn people when they have some intelligence.
Look, just to make sure the point is not lost, I am not saying the government is doing everything right; they are far from it. I'm simply making the point that, had they published the pre-9/11 intelligence and it didn't happen, they would have been accused of fearmongering even for that.
As stated on another thread, it is the "reduce risk" vs "prevent" argument. If they reduce the time between attacks from 8 years to 20, no one will be saying--this is happening less than before, good job. No, the focus will be on the failure to prevent--a near impossible task.
The point is not publishing the intelligence, but acting on it. Publishing warnings does little good and much harm if the warnings are vague in time and place. Most of the warnings we have gotten after 9/11 have been exactly of that type.
Telling people to beware because there have been robberies in their neighborhood is a reasonable warning. Telling people to beware because there was a spate of robberies 20 miles away is much less useful and more likely to instill fear for no reason. Or perhaps for a self-serving reason (selling burglar alarms, pushing up news ratings, the list is endless).
You're right, things can be done much better. Ulterior motives are commonly a factor as with anything. In this case, the motive is likely CYA (the consequences of not doing a little theater are far worse than the consequences of some useless warnings).
In any case, I hope everyone is enjoying this holiday season.
As I have said before, with untrained personnel these systems are either "a self denial of service attack" or "they don't work".
The reasoning is simple. If you are untrained, you have no real knowledge of what each colour is supposed to tell you. Therefore the chances are you either over- or under-react.
If you under-react and don't report something "hinky", then an "event" happens that you might have otherwise stopped.
If you over-react and report everything, then the responders are overwhelmed by the level of perceived threat and end up chasing their tails (people from Boston, show your hands).
So what are the odds of the colour system working just right? Well, how rare are the real "events"? (Very.) Have there yet been enough to statistically measure with any confidence? (No.)
So the odds of the colour system working correctly are too small to measure...
So, as they are for "untrained personnel", as I have noted they do not work either way.
So what about trained personnel?
I used to be such a person once (though it never felt like it) and the whole purpose of them was to alert you that there was information you should be aware of... So you were supposed to look the information up "in orders".
Now unless the "information reporting" part of the procedure is so carefully hidden that only personnel with a "Captain Crunch Decoder Ring" can read it, I would say that there is "no information reporting" system in place. So the colour system does not work for trained personnel either, due to lack of communication of essential information.
Which means the colour system has a different purpose altogether.
I can think of two obvious possibilities: as a "palliative pacifier", or for those in authority to "Cover Your A55".
As a CYA device it's not credible, as there is no information component and its status (orange) apparently never changes.
As a pacifier it would have to spend most of its time in the "comfort zone" of "no immediate threat" to work, and as it never gets there and there have been no "events", the explanation of "we catch them all" seems a little too good to be true (we are talking about humans who work in federal agencies, after all).
So is there another possible purpose behind the colour system?
Well, as it obviously does not work for trained personnel (no information communication channel and status does not change) and it is for public consumption, how about it being the opposite of a pacifier?
That is, it is designed to make you feel that you are living in dangerous times (ie always on orange), but as there are no "events" then the authorities are doing a perfect job...
Is this view supported by other evidence?
Unfortunately yes, we hear the TSA giving figures about how many people on the no-fly list are identified (is it ten per week per major airport? I forget) and about all sorts of other successes (incidents responded to etc).
But no "event" slips through the Government's "ever present eye".
Hmmm, if you look at the claims made by the TSA and DHS etc, they really do not have any substance. That is, how many people have appeared in court so far? Err, it's as close to zero as makes no difference, and for those few it's for stupid or unrelated things...
I guess George Orwell would be proud his predictions made during the second world war have finally come of age some sixty years later.
The colour system is just associated "spin" or "newspeak" meant to keep people in some sense of both dread and awe of the omnipotent and ever present "Government that does not fail", like any good "big brother"...
Ho hum, I guess as "citizen Smith" I'm due a visit to "room 101" any time now. Is that "the clock striking thirteen" I hear...
@ Nobby Nuts
"Or am I missing something?"
No, you got it. The Defense Readiness Condition (known as DEFCON) model is perhaps most famous for being a functional example. Each level has a corresponding action not only known to participants but actually practiced/tested and measured.
The problem is that it is really hard to define action outside a discrete area, yet it is trivial to flash colors at people and win kudos for "doing something".
When a light turns red (stop) in traffic it is easy to manage people, but figuring out how to issue actions for DHS warning level orange, or Symantec Internet threat level medium...who has the cycles (let alone authority) to really figure it out, let alone run tests and measure progress? In that context, "scan your PC for viruses" is as good a start as any.
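The principle in the DEFCON comparison above — that an alert level is only useful if each level maps to a concrete, rehearsed action — can be sketched in a few lines. This is a purely illustrative sketch; the level names and actions below are hypothetical, not the real DEFCON or DHS playbooks:

```python
# Illustrative sketch: an alert system is actionable only if every level
# maps to a defined, rehearsed response. Levels/actions are hypothetical.

ALERT_ACTIONS = {
    "green":  "normal operations; routine patching and scans",
    "yellow": "run an on-demand malware scan; verify backups",
    "orange": "apply pending patches now; restrict new software installs",
    "red":    "disconnect non-essential systems; invoke incident response",
}

def action_for(level: str) -> str:
    """Return the rehearsed action for a level; fail loudly for unknown ones.

    A level with no defined action is exactly the 'flash colors at people'
    failure mode described above.
    """
    try:
        return ALERT_ACTIONS[level]
    except KeyError:
        raise ValueError(f"alert level {level!r} has no defined action")

print(action_for("orange"))
```

The point of failing loudly is the design choice: a signaling system that silently accepts a level nobody has planned for is just a colored light.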
Studies like the one Bruce points to hint at the urge in some people to give others a placebo instead of what is actually believed to help them. I have seen similar data already, but from a different perspective. Surveys we did found that users responded "more positively" to bright colors on a web page. It didn't matter that no one could understand what it meant -- it was pretty. Apparently people even like hearing "DHS level orange" although we have no idea what to do about it, as if we imagine someone somewhere must know something. Cynical? Check out the data.
I only sometimes have a nightmare about it all backfiring and everything grinding to a halt as people ask "WTF do you mean we're at level orange? Now what do we do?" In fact, I often see that response at traffic lights that go haywire. Seems to me yellow should be the highest level allowed without a clear playbook for all those exposed to the system.
"...let people decide for themselves"
That's not really the best way to prepare for disasters/incidents. In fact, I would call that potentially worse than only communicating the threat level to trained response teams and letting them disperse the word, as was the plan in WWII.
...Rolla Missouri. That's who.
@ Davi Ottenheimer
"Apparently people even like hearing "DHS level orange" although we have no idea what to do about it, as if we imagine someone somewhere must know something."
Actually, I can see that. Even I, although cynical about the meaning and purpose of these internet threat-level gauges, feel a little bit of comfort deep down knowing that someone else thinks there is some degree of threat and is (presumably) doing something about it. Even though I'm quite happy to acknowledge that it's purely a marketing tool.
Symantec keeps changing the name of the threat warning thing on their website, but it always seems to hover around the middle of the scale. It's currently ThreatCon, bringing a reassuring image of NORAD under Cheyenne Mountain protecting us from that nasty red threat.
"Results. Persons who are mentally ill, those who are disabled, African Americans, Latinos, Chinese Americans, Korean Americans, and non-US citizens were more likely to perceive population-level risk as high, as measured by the estimated color-coded alert level. These groups also reported more worry and avoidance behaviors because of concerns about terrorism."
I get the mental image of Michael Moore saying "poor whitey".
If you look at the groups mentioned from a little distance, it boils down to Africans / Europeans / Asians / South Americans / and those not born in the US, which leaves only Michael Moore's "good old whitey" (which you always feel means Republican-voting redneck types and other undesirable types such as motor car company executives ;)
Oh and those with depression or other mental illnesses that are likely to lift the rose-tinted spectacles...
So just the self-deluding white Americans of middle America...
I really, really do get the feeling that this adds weight to the notion that the whole colour threat jig is a political confidence trick to help get votes...
s/Oh and those/Oh and also exclude those/
It's one in the morning here and I hit post, not preview, on this little mobile phone I use.
Tired eyes and tired hands never a good job did 8(
wrt the color code:
I don't know the current color. But as I do know that it will never be green, I don't care either.
"We conducted a random digit dial survey ... Multivariate regression modeled correlates of worry and avoidance, including mental illness, ..."
Huh?! When randomly phoning up strangers at home, how did they determine whether respondents were suffering from a mental illness? Was it:
a) Over-the-phone psychiatric diagnosis. A method not approved by the American Psychiatric Association.
b) Asked them.
If option b), thus getting responses only from:
i) Extremely naive people who were happy to discuss their medical problems with an unauthenticated random stranger on the phone; and
ii) People who yanked the survey-takers' chains for the whole darn call.
Come to think of it, uninvited telephone interruptions have become so irksome that most telephone-owning inhabitants of at least the western world would just hang up on them. Of those people who do still take surveys, I suspect there must be a very high percentage who do so solely in order to give misleading answers as a form of revenge.
threat-o-meters are a rational veneer to an irrational problem. Or 'total bollocks' to use the local vernacular.
Their very creation signals that terrorists have *by definition* succeeded in part.
@: Davi Ottenheimer: "That's not really the best way to prepare for disasters/incidents. In fact, I would call that potentially worse than only communicating the threat level to trained response teams and letting them disperse the word, as was the plan in WWII."
I don't mean let the people decide for themselves how to respond, that would be a disaster. I meant let people decide for themselves whether or not to even incur the risk of travel.
I'm not saying it's the best way. The threat levels, while vague and fairly useless, are much milder in the 'fear mongering' category than what would be said about more direct warnings, yet it calms the "you knew and didn't tell us" hindsight finger-pointing (based on vague info) that will follow an incident.
Remember, not everyone in the public is as security conscious or rational about risks as those of us here.
"likley to lift the rose tinted specticals..."
Maybe it should be untinted skepticals.
"I don't mean let the people decide for themselves how to respond, that would be a disaster. I meant let people decide for themselves whether or not to even incur the risk of travel."
That doesn't justify the insane color codes we have now. That reasoning would only work if they would actually tell us the true risk of travel that they have estimated. Of course they would never do this, because even at the most dire threat level, the risk is still lower than that of winning the lottery.
It's interesting that the most vulnerable population groups, who presumably have more real threats to worry about, would be disproportionately worried about terrorism.
I suppose this follows the theme of people disproportionately worrying about remote, dramatic threats that they are powerless to prevent while ignoring mundane, likely ones over which they have some control.
As others have pointed out, people's guess at the "Terrorist Flavour of the Week" is an indication of what they think the government would like them to think of the state of terrorist danger, not of the actual state of that danger.
So, there are two possible interpretations of the finding that e.g. black and latino people thought the government's flavour was set to a more alarming choice:
(1) Blacks and latinos tend to suspect empty fear-mongering from their government more than others do.
(2) Blacks and latinos suspect empty fear-mongering from the government no more than average, and think the real threat is higher than others do.
I would guess there's some of both, but rather more of (1) than the authors account for.
Both sides have better talking points than the other side cares to admit. That said, if we're attacked again, it will be interesting to see how the comments of some people after the fact compare to their comments now. I suspect that many who play the "fearmongering" card now will play the "they knew" and/or "dots were there and not connected" cards later.
In any case, both sides have good ideas to contribute. Other than that, I will kindly bow out because I don't want to go in circles, or worse, trigger an argument that would be counter-productive.
Happy Holidays, fellow bloggers!
'I suspect that many who play the "fearmongering" card now will play the "they knew" and/or "dots were there and not connected" cards later.'
You act as though these ideas are mutually exclusive. In fact they support each other. The current climate of fearmongering hurts efforts to actually discover legitimate attack plans. Every dollar spent making Americans take off their shoes is a dollar that doesn't go toward discovering and preventing attacks.
>>In fact they support each other.
Exactly! If every day, we are told "Orange", then when a real "Orange" comes around, how will we distinguish it? This is the "False Positive" problem with many tests. When the False Positive rate is too high (orange every day for 6 years is too high), then the signal is ignored.
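The false-positive point can be made concrete with Bayes' rule. A quick sketch (all numbers below are hypothetical, chosen purely for illustration): if the alert is "on" almost every day regardless of whether anything is actually happening, seeing it barely shifts your estimate of the risk, whereas a rarely raised alert would shift it substantially:

```python
# Bayes' rule sketch: a warning that is almost always "on" carries almost
# no information. All probabilities here are hypothetical.

def posterior_attack(prior, p_alert_given_attack, p_alert_given_no_attack):
    """P(attack | alert seen), via Bayes' rule."""
    p_alert = (p_alert_given_attack * prior
               + p_alert_given_no_attack * (1 - prior))
    return p_alert_given_attack * prior / p_alert

prior = 1e-6  # hypothetical chance of an attack on any given day

# Alert raised ~every day ("orange for 6 years"): fires on 99% of attack
# days but also on 95% of ordinary days.
always_orange = posterior_attack(prior, 0.99, 0.95)

# Alert raised rarely: fires on 99% of attack days, only 1% of ordinary days.
rare_alert = posterior_attack(prior, 0.99, 0.01)

print(f"always-orange alert: {always_orange:.2e}")  # barely above the prior
print(f"rarely-raised alert: {rare_alert:.2e}")     # ~100x the prior
```

With these illustrative numbers the permanently-on alert moves the posterior from one in a million to roughly 1.04 in a million, while the rarely raised alert moves it by about two orders of magnitude — which is exactly why a constant "Orange" ends up being ignored.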
How many dollars of productivity have been wasted forcing everyone to take off shoes when only a single case of a shoe bomb has ever been seen?
Economically, the Sept. 11 attackers are still attacking.
No, I did not nor will I play the "They knew" card unless there is very credible evidence that "they knew" very specific things like who/what/when and then failed to act. Likely, they failed to act because of the high false positive rate. (Back to Michael Ash's point.)
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.