Schneier on Security
A blog covering security and security technology.
January 16, 2007
Security Theater and a Secure Data Center
Posted on January 16, 2007 at 6:23 AM
This blog advocates for the removal of human agents in perimeter defense of secured facilities.
Sometimes the humans are there for reasons other than improving security. In a relatively unpublicized event a few years ago, an employee unable to exit a secure facility during a security staff shift change tried to attract attention by pulling the fire alarm, which led to the employee's death by asphyxiation in Halon. Now the security post is always staffed, notwithstanding the risk of social engineering.
What sort of facility floods the place with halon gas without allowing the person who pulled the alarm to exit first?
I'd say that saving lives, including the lives of employees, is a reasonable part of security. Yes, that person pulled the fire alarm when there wasn't a fire, but it sounds as though pulling the fire alarm during a fire would have been just as fatal. That's bad design, to put it mildly.
It's like pilots. They cause far more accidents than they ever save. The plane's computer says "Pull up, it's the ground!" and everyone knows that humans know best. BOOM.
The irony is that people wouldn't feel safe if the aircraft were in the hands of a computer. But if the computer systems do fail... well, you are already trusting the computer and just putting a human there to get in the way.
Also, if it's the pilot's fault and he/she is dead, you can't sue him/her. What about the software engineer?
What would be better is not what people feel is better. The road toll is enlightening.
"used to get very hot" ... "retinal scanner also had problems" ... "Seems the card readers weren't functioning properly"
So what exactly does social engineering have to do with this?
"So what exactly does social engineering have to do with this?"
Perhaps you should ask why they don't work...
Perhaps somebody "social engineered" the guards into believing they were the maintenance bod.
After all, if you plan to attack a fortress it is often best to attack at the weakest point, and if it's not weak enough, weaken it first.
I am totally perplexed at the different IT security concepts companies use. Some of them are so wrong that you have trouble figuring out where to start. And they are not just wrong from a security standpoint; they put business-crippling measures in place. I can remember basic security processes (firewall port openings) that took longer to get approved than the overall development time of the respective app. By the time the port was opened, the market had changed so much that you could almost forget about launching that app. How do business people/developers/non-IT-security people in these organisations react to such environments? They adapt. They try to use every little leak or oversight of the cerberi simply to _survive_ from a business standpoint. And those leaks exist. After one year, what you're left with is a grotesque landscape.
Of course I mean _some_ companies.
I don't mean to generalize.
As someone who does data center security on occasion, thank you for the laugh.
The guards generally have reported the issues -over and over again- and gotten nowhere, because who listens to the lowly security guards?
Biometric systems are flaky and unreliable, sensors misbehave, card readers go on the fritz.
What's a human to do?
Use their best judgment. In this case, since the guy was an authorized party, all was good. If he wasn't authorized, they'd pick up the phone and start making calls . . . and Our Hero would be on the wrong side of a different set of bars.
"So what exactly does social engineering have to do with this?"
The author wasn't saying that this was an example of a social engineering attack. Merely that the existence of the guards allowed for social engineering attacks, which would presumably not be possible without the guards.
Of course, he was wrong.
Having no guards would still allow social engineering attacks - especially in the environment presented. A desperate late-night call to the support center, claiming "the Palm-Scanner is broken, again!" would get some sort of sleepy service tech down there to promptly bypass security for you and open the door. The guy might even be friendly enough to "save you further trouble" by opening *all* the doors... After all, the service tech isn't really responsible for site security, just maintenance.
The problem here seems to be basic auditing. It is often the case that a perfectly good system starts to degrade over time due to maintenance and other issues. However, a simple auditing and testing schedule would have caught most if not all of these problems (depending on whether the audit was secret or pre-announced). It's true of any process-oriented system, especially one that includes people, that auditing (and retraining) are the ante to play. If the system is broken (the guard gets hot), the process can be changed (better A/C).
I believe people serve an important judgment role, but the training and systems need to be in place to limit and control that judgment. For instance, if some guy walked into the automated system with a severed hand and an eyeball on ice, he could circumvent the automated security, but not a reasonably observant guard.
@greg: "It's like pilots. They cause far more accidents than they ever save."
I don't have any statistics pro or con. However, my father (a commercial airline pilot) was often forced to make landings in the Canary Islands under circumstances that exceeded the design constraints of the aircraft (45 mph wind-shear). He planned for contingencies (do I drive off the cliff-edge towards the sea, and try to recover - or do I drive into the cliff-face guaranteeing a stop - if the landing starts to go bad). He always succeeded in landing safely.
I doubt that any current automated systems could have done the same. They wouldn't be programmed to handle conditions beyond the manufacturer's design limitations, for one thing.
My dad always said that the reason you had a live pilot was to deal with the situation when things went wrong. Under ordinary circumstances, the modern pilot doesn't really do anything. In unforeseen circumstances, automated equipment simply hasn't been designed to handle the (unforeseen) circumstance. Under those conditions, humans can still make bad decisions - but they have an experientially-driven chance to make good decisions that pure automation would never allow.
Discriminating, well-trained humans are the most cost-effective backup for automated systems, when conditions fall outside of expected parameters. They always will be: when computers get good-enough at handling chaotic conditions and playing hunches to replace us, they will effectively have become "human" minds, anyway.
@Andrew - presumably, the guards aren't notified of personnel changes, especially if this guy's working for a third-party contractor. So there's the scenario where the guards all know this person, but don't know he was fired this morning, or that he has a bulk eraser/pipe bomb/gallon of salt water in his satchel.
I wouldn't say you should remove the human element entirely - stuff does go wrong from time to time - but the writer is correct that removing the guard from behind the palm scanner would have actually improved security in this case.
"Under those conditions, humans can still make bad decisions - but they have an experientially-driven chance to make good decisions that pure automation would never allow."
I would argue that the autopilot has more potential for "experientially-driven" learned behavior in extreme situations, as data has a higher chance of surviving a crash than a human.
Of course, on the other hand, any designed system will typically perform better in the circumstances for which it was designed. I've no clue when the one factor will overtake the other, or whether it has already or ever will.
The human guard is incredibly effective at certain tasks, recognition of individuals being one of them. The problem with this system is that they overlapped things so that you could play their weaknesses off on one another. (I.E. If mom says no ask dad.)
If I were going to design a system like this I'd have two doors - for the first, the guard would look at you and say, "Oh, that's Bob, and he's allowed in there." For the second, you'd have to enter a password or insert an ID or some such. Most of the system described is like making you unlock three locks all with the same combination. It might resist a physical attack better, but the first lock establishes your knowledge of the combination.
Concerning the halon, I've been wondering about that stuff. I looked it up on wikipedia (http://en.wikipedia.org/wiki/Haloalkane#Fire_extinguishing), and excerpt the following info for you:
"...Halon 1301 total flooding systems are typically used at concentrations no higher than 7% v/v in air, and can suppress many fires at 2.9% v/v."
"Halon 1301 causes only slight giddiness at its effective concentration of 5%, and even at 15% persons remain conscious but impaired and suffer no long term effects."
So maybe that story about dying from it isn't true.
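Those concentration figures are exactly what drives system sizing. As a rough back-of-the-envelope sketch (this uses the NFPA 12A total-flooding quantity equation; the specific-vapor-volume constants are commonly cited approximations and the function name is my own, so treat it as illustrative, not a design tool):

```python
# Sketch of Halon 1301 total-flooding agent sizing (per NFPA 12A):
#   W = (V / S) * (C / (100 - C))
# where W is agent weight (lb), V is protected volume (ft^3),
# S is the specific vapor volume of Halon 1301 (ft^3/lb) at
# temperature T (deg F), and C is the design concentration (% v/v).

def halon1301_weight(volume_ft3: float, concentration_pct: float,
                     temp_f: float = 70.0) -> float:
    """Approximate agent weight (lb) to reach the given %v/v concentration."""
    # Commonly cited linear fit for Halon 1301 specific vapor volume.
    s = 0.7997 + 0.00239 * temp_f  # ft^3/lb
    return (volume_ft3 / s) * (concentration_pct / (100.0 - concentration_pct))

# A 10,000 ft^3 room at the typical 5% design concentration
# works out to roughly 540-550 lb of agent with these constants.
print(f"{halon1301_weight(10_000, 5.0):.0f} lb")
```

The C/(100-C) term is why the 7% upper design limit matters: the agent quantity (and cost) grows faster than linearly as the target concentration rises.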
"What sort of facility floods the place with halon gas without allowing the person who pulled the alarm to exit first?"
The systems are designed and installed by different people for different purposes. Should the fire suppression system choose not to respond if it cannot sense that the door is unlocked? Making the decision to configure it to not respond in some cases might kill a firefighter with a breathing apparatus.
The mistake is made at a (probably unexamined) higher level, where the interactions between systems becomes important.
The "Halon" story might refer to a CO2 oxygen-displacement fire suppression system instead. Those are quite deadly.
It may have been co2, the report said "fire suppressant gas" and I assumed Halon. Whatever it was, was definitely fatal.
Yes of course it's "bad design" that's the point.
In this case the system is based on the reasonable assumption that if no one is in the facility, you don't have to worry about fatalities. The guards who were leaving certified that everyone who had entered had left. Elaborate "man-trap" entry and exit points made it "impossible" for anyone to enter or leave undetected. Meanwhile, the victim had entered the site via an access passage that had subsequently been added between two separately secured sites. No one realized that the guards could no longer accurately assess whether the facility was empty.
What's the point? Humans are still better than algorithms at responding to the totally unexpected, -even- in security matters.
First off, Halons (of which there are a few) in general act as combustion inhibitors, not as oxygen displacers.
There has been some limited research by Huntingdon Life Sciences (whom the Animal Lib people hate for many reasons), who found that exposure above 9% killed 3 out of 4 dogs within 48 hours of being exposed to octafluoro-2-butene (reason apparently unknown).
One of the known problems with Halon is that it is a bit like water.
If you get it in your lungs, the only way to get it out is to cough it up, or you pass out.
Coughing it up usually involves bending over or getting on hands and knees.
Like water, it is a lot heavier than air and sinks to the lowest level, so if you bend over or collapse, your head ends up back in the Halon, which pools in a layer a couple of feet above floor level.
Apparently the result is, just like drowning, invariably fatal...
It is not for this reason, however, that its use is not allowed in new systems in Europe, but for its ozone-depletion properties, which are apparently also quite outstanding...
> In a relatively unpublicized event a few years ago, an employee unable to exit a secure facility during a security staff shift change tried to attract attention by pulling the fire alarm, which led to the employee's death by asphyxiation in Halon. Now the security post is always staffed notwithstanding the risk of social engineering.
Calling Urban Legend on that unless a citation can be produced. Comments below are based on U.S. codes and the U.S. legal system.
1) Who on earth pulls a fire alarm due to the momentary inconvenience of a shift change?
2) Fire codes -- at least the model NFPA 1 Fire Code and NFPA 101 Life Safety Code, on which most local and government fire codes are based -- have for at least 30 years, and probably much longer, required panic hardware for all but specialized situations like correctional facilities (which have other mitigating controls).
Panic hardware doesn't breach any security practices: first, the person was properly authorized to have access; second, the panic hardware can be set to trip an alarm and delay 15 seconds before unlocking the door; and third, super-high-security facilities only need to let someone out of the fire compartment -- they can be confined in an area of refuge within the building, or in a secured courtyard or fenced area outside.
3) If a facility spent the money on a suppression system, would they really not have spent the money on panic hardware?
4) A Fire Protection Engineer who put his PE stamp on plans that didn't include panic hardware deserves to have the PE yanked.
5) Even if the company/organization met a weaker fire code than NFPA 1 & 101, it would still find itself in deep trouble in court in a situation like this -- common sense, backed by the model codes, says you don't allow a situation where someone can't self-evacuate.
What the original post says to me is that the author was relatively new to data-center security and had some high, perhaps unrealistic, expectations. The longer someone spends evaluating security systems (i.e., gains experience/skill), the more granularity and related flaws they may stumble onto. This is hardly different from the process of discovery in any field, no? I would even say that the author's conclusion is correct, but not in a negative sense.
And rather than concluding that a control (e.g. humans) is fundamentally flawed, the author could have found ways to compensate with more depth of control. For example, if a guard props a door open to stay cool, the temperature controls need to be addressed. That seems like the more logical approach than removing the guard. It's layers of complementary systems that will secure the data center, along with good management, not simply dispensing with every control that shows a weakness.
After all, no control is perfect.
Cost of the employee: $100K / year
Cost of the computer system: $10K / year
kill the employee...
$100K vs. $10K
You are comparing apples with oranges.
You can have both the employee and computer system. Computers without people are useless. People without computers get things done.
I believe the real issue with that data center was not the human factor of the security guards, but that when security measures are excessive or badly implemented to the point that they prevent the work from being done, people work around them. People have to get their work done.
Hoping to clarify the Halon question (worked in fire protection engineering years ago):
@Clive Robinson, the toxic effects cited in the NIST report are NOT from Halon, but rather a non-Halon suppressant proposed as an alternative to Halon. Halons are chlorofluorocarbons, chemically very different from the toxic materials in the report.
Chlorofluorocarbons are a family including Freon that were used as refrigerants, solvents, etc. They came into their important commercial use as refrigerants precisely because they are non-toxic; previously, people sometimes died in their homes because of leaks in kitchen refrigerators.
Apart from their potency as agents that deplete ozone in the upper atmosphere, chlorofluorocarbons are to my knowledge very safe to handle, and I know of no case of injury due to toxic effects.
That being said, I offer one caveat: Halons suppress fire by breaking down into very unstable compounds where the Halon gas contacts the flame front (boundary between fuel vapor and air at which combustion is taking place). Most of these compounds immediately oxidize, depleting the oxygen needed to sustain the flame. These products of Halon breakdown include compounds that are extremely toxic and corrosive. Because they are only created in the minute volume typically occupied by the flame front, their quantities are generally very small, but it is still prudent to immediately exit a Halon discharge area IF A FIRE WAS PRESENT. (Because a Halon discharge often is accompanied by a helluva lot of noise and loose objects flying around, most people would probably be inclined to scram anyway.)
My colleague who specialized in Halon systems told me that phosgene (a well-known chemical-warfare choking agent) is among these suppression byproducts, and noted that in the company's Halon test facility there was visible corrosion on some metal fixtures (I emphasize, not from the presence of Halon itself, but from the action of Halon in extinguishing test fires). Notwithstanding that, he ran the test fires without showing any worry, the toxic concentrations being very low.
The computer makes stealing so much easier once the computer is responsible for security. Google is helpful for breaking security with brute-force attacks. It's that powerful.
If you really want to see a bad "secure location" example, that doesn't require days or weeks of visits to uncover all the flaws, take a look at this site:
Side rant -- Why would anyone consider using Wikipedia as an authoritative source of information?
Have you ever heard a HALON system discharge? It could have easily frightened the person and caused them to collapse, in which case they'd be inhaling the more highly concentrated fumes closer to floor level rather than the lower concentration at standing height. If they had any form of respiratory problems, this would add to the difficulty.
- The computer still can't see out the window to avoid other planes
- 90% of the time when it says "pull up" it's not an emergency
- there are 15 other alarms/lights going off at the same time, mostly for non-emergency events
There are two problems here.
One, as has already been pointed out, the security system obviously had not undergone an audit since the dawn of time, and no security system can be regarded as useful if it has no routine audit built into its design. So I wouldn't call this a "secure" data center at all, just a fantastically annoying one.
Two, the human employees are able to circumvent the security processes, which both represents a major failure in the system itself (leaving a door propped open for some length of time ought to trigger an alarm in and of itself) and is indicative of poor handling of security personnel.
Injecting human beings into a security process is both good and bad -> good because automated systems behave poorly when exposed to unforeseen circumstances, and bad because human beings that aren't properly trained are easily socially engineered. If you're trying to solve the unforeseen circumstances problem by injecting people, it behooves you to properly train (and motivate) the humans. Cheap humans reduce security just like cheap systems do... or to use the old saw, "You get what you pay for."
Maybe what they really need is some of those fancy glass security doors:
This reminds me of a security presentation I attended some time around 1980. Don Parker talked about a similar waning of alertness by security personnel as he spent more time at a business. About five months later, we were getting a site visit from Pansophic when I recognized Don as one of our visitors. Sure enough, Don had repositioned his visitor badge into his inside suit-jacket pocket (out of sight). He'd done this during the one-floor elevator ride from the lobby down to the data center.
I introduced myself, noted that I'd seen his security presentation, and would appreciate him displaying his visitor badge at ALL times during his visit. I didn't want our company to be added to his list of companies with lax security. We all got a laugh, but it served as a lesson to the data center staff present and a lesson I retain to this day.
@greg: "It's like pilots. They cause far more accidents than they ever save."
This is complete rubbish. Pilots routinely sort out problems with the computer or other aircraft systems which, if left alone, would probably cause an accident. This does not necessarily reflect badly on the computer systems as they are not designed for autonomous operation.
@X the Unknown
"Discriminating, well-trained humans are the most cost-effective backup for automated systems, when conditions fall outside of expected parameters."
Hence our wonderful experiences with the barely minimum wage earning TSA employees. I doubt the TSA as a whole could be considered discriminating, well-trained, or cost effective.
I have lived through a Halon 1301 dump, and I can tell you that the *most* important thing is NOT to lean down. Halon sinks, and if you go down, there will be no getting up. Your average Halon system is designed to dump enough agent to reach a 5% concentration in the room, which is enough to kill the fire but not enough to kill people. If you breathe in too much, though, you will lose consciousness. And the average person will IMMEDIATELY panic and will be disoriented by the extremely loud shrieking noise caused by the Halon discharge.
@X the Unknown,
I guess I'd like to know a bit more about why your dad the airline pilot was "often forced" to land in conditions that exceeded the design limits of the aircraft. If there's a reasonable chance that the conditions at the destination airport might not allow for a safe landing, then there should have been enough fuel onboard to make it to an alternate airport.
>> @Andrew - presumably, the guards aren't notified of personnel changes, especially if this guy's working for a third-party contractor.
Bad presumption. In high-security environments, such notification is a contract term with severe penalties for noncompliance. Like losing your contract. The techs -- and their managers -- know it.
>> So there's the scenario where the guards all know this person, but don't know he was fired this morning, or that he has a bulk eraser/pipe bomb/gallon of salt water in his satchel.
The former is accomplished through database checks, with higher levels of security through verification, exactly for the reason you name. "Hi, Mr. VP? This is Andrew in Security Services . . . is Doug still on the approval list for White Two? He's still in the directory and badging database, but he seems a bit out of sorts today . . . oh, _really_ . . . shall we escort him from the premises or call the police?"
The second is a combination of strong people skills and cursory search procedures. Most people who go through security checks every day are bored with the concept. Anyone nervous rings alarm bells for us. Add a casual search where people are required to show us the bag, and it evens out.
We're not going to stop the pipe bomb.
We are going to gather enough data about the person (authorized or not) who carried it in, that they will be far too busy running from the FBI for the rest of their short, miserable lives to either enjoy their ill-gotten gains and/or the ego trip.
I fear devices that can be carried in pockets, such as Web servers built into what look like Ethernet plugs and the infamous USB keychain drive masquerading as a pen, sushi, AA battery, etc... but not much I can do about it, either.
>> I wouldn't say you should remove the human element entirely - stuff does go wrong from time to time - but the writer is correct that removing the guard from behind the palm scanner would have actually improved security in this case.
I hate palm scanners. They break a lot, and can be defeated with "high-tech" measures ranging from shorting the DC power supply (!) to placing a cut-out photocopy of the offending hand on the sensor (!!!).
If the device breaks a lot, no one relies on it and the security measure may as well not exist.
Data centers are accessed by technical people. Relying on electronic security systems is like operating a hotel where all the customers are locksmiths . . . and this is one reason why the most secure sites rely heavily on what appears to be the good old-fashioned padlock.
I think it just goes to show that:
Security is not something you buy over-the-counter. You think through processes of all kinds... human interfaces and automated interfaces... to identify threats and tradeoffs.
Security is not something you apply and be done with. It requires constant review and considerable resources.
Security theater is what you get when an entity isn't interested in spending resources. Instead, they install something that makes them feel better.
I wrote the original message linked to in this blog.
I don't advocate removing 100% of the people. For instance, the first set of security guards were rather helpful, and doubled as monitors watching over the NOC. But the security guard behind the palm scanner was completely unnecessary. The retinal scanner, while flaky, would eventually work. I think the money spent on the security guard would have been better spent on maintaining the scanner. Or, alternatively, get rid of the biometric devices and increase the pay/training of the security guard. The point is that the redundant security was reduced to the weakest link. This is not dissimilar to what Schneier advises regarding layering multiple encryption technologies together (it often reduces an attacker to only having to circumvent the weakest link).
As for the non-working smart card readers? I don't know what to say. Tough security ought to be matched with tough testing and auditing.
Our auditing procedures for "patches" as opposed to code releases? Simply inexcusable, especially when one can bundle in a mail server as part of a "patch".
The whole situation was rather laughable. By attempting to make unbreakable security, they ended up reducing everything to the lowest common denominator. I have seen far more effective security where ambitions were far lower and each stage of security was implemented better.
A contractor at an Adelaide government computing site accidentally triggered the Halon system.
He thought that the fire exit from the computer room was an alternative exit, and the great big red mushroom button was the door open button.
No one died, but much embarrassment ensued.
I do not quite see the problem. This looks like a practical application of the "smart profiling" approach that Bruce generally advocates.
After all, 'The Mayor' seems to have been required to do the whole process a few times, as long as there was no other way to assure his identity. (btw, were the doors closed extra for him at the beginning?)
Once he was properly identified, a quicker process would be established.
The only problem I see is how to revoke access privileges when the guards bypass a central database where authorization data is stored. But as the auditing was still in place, unauthorized access would at least be detected.
Two more issues:
Might be interesting to see how the guards react to actual (pretended) social engineering attacks (ever seen "Faceman" in the "A-Team" 80's TV show?).
The non-working double card-reader is of course not excused.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.