Schneier on Security
A blog covering security and security technology.
December 10, 2008
As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt's immigration status. And in November, Verizon employees peeked at his cell phone records.
What these three incidents illustrate is not that computerized databases are vulnerable to hacking -- we already knew that, and anyway the perpetrators all had legitimate access to the systems they used -- but how important audit is as a security measure.
When we think about security, we commonly think about preventive measures: locks to keep burglars out of our homes, bank safes to keep thieves from our money, and airport screeners to keep guns and bombs off airplanes. We might also think of detection and response measures: alarms that go off when burglars pick our locks or dynamite open bank safes, sky marshals on airplanes who respond when a hijacker manages to sneak a gun through airport security. But audit, figuring out who did what after the fact, is often far more important than any of those other three.
Most security against crime comes from audit. Of course we use locks and alarms, but we don't wear bulletproof vests. The police provide for our safety by investigating crimes after the fact and prosecuting the guilty: that's audit.
Audit helps ensure that people don't abuse positions of trust. The cash register, for example, is basically an audit system. Cashiers have to handle the store's money. To ensure they don't skim from the till, the cash register keeps an audit trail of every transaction. The store owner can look at the register totals at the end of the day and make sure the amount of money in the register is the amount that should be there.
The same idea secures us from police abuse, too. The police have enormous power, including the ability to intrude into very intimate aspects of our life in order to solve crimes and keep the peace. This is generally a good thing, but to ensure that the police don't abuse this power, we put in place systems of audit like the warrant process.
The whole NSA warrantless eavesdropping scandal was about this. Some misleadingly painted it as allowing the government to eavesdrop on foreign terrorists, but the government always had that authority. What the government wanted was to not have to submit a warrant, even after the fact, to a secret FISA court. What they wanted was to not be subject to audit.
That would be an incredibly bad idea. Law enforcement systems that don't have good audit features designed in, or are exempt from this sort of audit-based oversight, are much more prone to abuse by those in power -- because they can abuse the system without the risk of getting caught. Audit is essential as the NSA increases its domestic spying. And large police databases, like the FBI Next Generation Identification System, need to have strong audit features built in.
For computerized database systems like that -- systems entrusted with other people's information -- audit is a very important security mechanism. Hospitals need to keep databases of very personal health information, and doctors and nurses need to be able to access that information quickly and easily. A good audit record of who accessed what when is the best way to ensure that those trusted with our medical information don't abuse that trust. It's the same with IRS records, credit reports, police databases, telephone records -- anything personal that someone might want to peek at during the course of his job.
Which brings us back to President Obama. In each of those three examples, someone in a position of trust inappropriately accessed personal information. The difference between how they played out is due to differences in audit. The State Department's audit worked best; they had alarm systems in place that alerted superiors when Obama's passport files were accessed and who accessed them. Verizon's audit mechanisms worked less well; they discovered the inappropriate account access and have narrowed the culprits down to a few people. Audit at Immigration and Customs Enforcement was far less effective; they still don't know who accessed the information.
Large databases filled with personal information, whether managed by governments or corporations, are an essential aspect of the information age. And they each need to be accessed, for legitimate purposes, by thousands or tens of thousands of people. The only way to ensure those people don't abuse the power they're entrusted with is through audit. Without it, we will simply never know who's peeking at what.
This essay first appeared on the Wall Street Journal website.
Posted on December 10, 2008 at 2:21 PM
• 39 Comments
Whilst "audit" is essential, you need to step back a bit and ask some questions.
The first and most obvious is "what are we auditing" and "why are we auditing it".
The second is not as obvious, and is the real issue with regards to privacy: "how do we ensure accurate traceability of access".
Which brings us back to the thorny issue of digital-v-real identities.
As long as an individual can misrepresent themselves to the audit process, the process is not achieving its primary goal (traceability of actions).
Which gives rise to the third question, how do you ensure that a request is being made by the person who is logged on and not by a third party hijacking the connection etc?
For example, In the UK the National Health System (NHS) has the NHS Spine which is a centralised database of patient records.
Access is supposed to be by secure tokens (smart cards). Each person gets issued with one, and it appears that the default password is their birthday (which obviously is easy to find when you have access, as it is one of several secondary keys used to find individuals' records).
However due to the way the system is implemented it is possible to get around the token in several ways.
For instance, it appears that users' workstations are not sufficiently protected from malware and other rogue software; therefore it would be possible for an adversary to inject requests into the system under the ID of a person logged on at that workstation.
These are problems that are very difficult to solve in comparison with the actual audit process and without them being resolved the audit trail is at best unreliable...
Going off of Clive's argument that audit is defined as "ensuring accurate traceability of access," there's another part of the equation to worry about. Audits cannot prevent a user from abusing the system. They happen too late.
What audits do is prepare for retribution. A government audit prepares prosecutors with evidence needed to put someone away for a few years. A cash register prepares a store owner with enough evidence to fire an employee.
Since the power of an audit is tied to the power of the punishment system, those auditing should always ensure that those who would abuse the system are under threat of sufficient punishment to deter the abuse.
As an example, a system which revokes the visa of a foreign terrorist might stop someone from vandalizing a store front, but will do nothing at all against a suicide bomber.
Thinking about it further, an audit also has one other purpose - limit the time frame of an accidental vulnerability. If a system audits every month, and the audit is sufficient to catch a vulnerability, then it cannot be abused for more than a month's worth of damages.
Further to Clive and RH, audits provide a deterrent to certain types of behaviour. That needs to be considered in the design of the control.
But security has always been about more than just preventative and detective controls. It's about how they are designed to mitigate risk.
There is another use for audits (although you may want to call it something else), especially for audits of positive access. You can use this to measure the impact of proposed preventative measures.
Let's not forget how "Joe the Plumber" was "researched" by various factions of the Ohio state government.
For governmental agencies, the audit trail must be as open as (or perhaps even more so than) the information is itself. Thus the FOIA should cover audit trail requests.
Traceability of access will be addressed in Bruce's next op-ed, after everyone is on board with audits and the headlines fill with stories of people wrongly accused of accessing information they claim they didn't -- due to their account being hacked, malware, etc.
The MEHARI risk assessment method categorizes security controls in 6 categories:
3 that act on the probability of an incident:
- Natural exposure
3 that act on the impact of an incident:
- Palliative measures
- Recovery (insurance)
Audit would fall essentially in the palliative category, because audits allow someone to correct something after the fact. As someone said, it also acts as a deterrent, if people know it is there.
@RH: It does prevent abuses. If I go into the office and say: "OK people, I told you how it works. Now, just to dissuade the curious, each query you make must have a rationale, and the more prominent the subject, the better your rationale must be. If you try to access a personality's file, I'm going to systematically check the rationale, and if your rationale doesn't hold I'm going to personally drag you to jail. The system automatically flags some queries for my review, so you have no chance of escaping it."
You're basically saying that, one, the sanction is heavy, and two, it's automatic and unavoidable.
Criminals nearly never strike when they are certain they are going to be caught.
Nice. I love it when security is discussed in terms of political science and philosophy, but I can't believe you wrote the whole thing without any mention at all of video surveillance.
BTW, the other day I found a comedic interpretation by Mitch Hedberg of the register example you use.
"I bought a doughnut and they gave me a receipt for the doughnut. I don’t need a receipt for a doughnut. I just give you the money, you give me the doughnut. End of transaction. We don’t need to bring ink and paper into this. I just can not imagine a scenario where I had to prove I bought a doughnut. Some skeptical friend. Don’t even act like I didn’t get that doughnut. I got the documentation right here. Oh, wait it’s back home in the file. Under D."
When I worked for the State of CA, in the criminal records division of the DOJ, there was an auditing feature for each record accessed. Records for celebrities and politicians were flagged. Anyone trying to pull a record on a celebrity would have to be a high-level non-supervisorial employee, lower-level employees had to go to their supervisors to handle those forms. Supervisors would then handle those disposition forms themselves or hand them off to a high-level employee. This system would make sure that the person accessing the information had some skin in the game. It took a few years to get to each level, so the person who had access had more to lose than an entry-level employee.
However, it didn't stop a low-level employee from accessing the record of an average citizen personally known to them (friends, acquaintances, neighbors) other than relations, if they knew enough about them. They couldn't alter the records, but they could get criminal records and other dirt on them. Since it could easily happen in the normal course of events, it was hard to prove that access was for any intent other than work.
The filing section was much worse. Even though filed by number and not name, the numbers of the notorious got around. They hired college students who cared little whether they stayed at that job (at
> Audits can not prevent a user from abusing the system. They happen too late.
Aside from the deterrence factor (which audits do provide)... the purpose of the audit is *not* to prevent abuse.
Over time, abuse is inevitable. You cannot create a system with perfect security. Proper audit means that you have a good chance of noticing a breach after it occurs.
The real question is: what will happen to RealID after January '09?
You don't need to prove you bought a doughnut. The clerk, however, needs to prove to the shopowner that the payment for the doughnut went in the till, and not into his pocket. If he cannot provide a receipt, perhaps he didn't ring it up, and will pocket the price you paid.
"Audit helps ensure that people don't abuse positions of trust."
"Criminals nearly never strike when they are certain they are going to be caught."
We know people commit murder and get away with it, as Pat Cahalan notes,
"Over time, abuse is inevitable. You cannot create a system with perfect security."
There will always be somebody who either knows how to get away with it, or knows they can get away before the consequences of their actions occur.
Then there are people who do not (for whatever reason) care about the consequences.
For instance, those who commit espionage know that in all probability they will be caught and punished (by death in a lot of cases), yet they still do it.
This "I'm doing it for the greater good" attitude is impossible to stop with "consequences", and some (suicide bombers and terrorists) actually relish the consequences for the "greater glory" of martyrdom, possibly seeing it as either their only way to immortality or of protecting "the children" of their clan or belief system.
There is a (fairly long) paper that discusses this mindset and preventative measures within the context of terroristic acts on US soil,
It shows that prevention is at best an illusion within a resource limited open environment.
Off topic: Here's a textbook case of security through disclosure:
A security hole existed in Facebook for four months after they were privately informed of the fact. Then The Register publishes a story with links to a demonstration of the exploit, and Facebook fix it within 3 hours. (Assuming you believe The Register, of course.)
"The State Department's audit worked best; they had alarm systems in place that alerted superiors when Obama's passport files were accessed"
It wouldn't have worked if the superiors had an interest in keeping mum about the unauthorized access. So who guards the guards?
Applying FOIA to audit trail requests is only part of the solution. Every access to digital information creates a copy of the data. Are those copies also subject to audit?
Facebook are seriously lazy about security. You could until recently access any photo you liked through URL munging, and they only fixed that after a few people were seriously embarrassed. You can still get a lot of private-only info through munging, and for more serious attacks just grab a few pics of a cute girl and set up a profile. Nobody will refuse your friend requests...
Regarding the cash register as an audited system - it's stronger when coupled with cashiers who know how to make change. The traditional method of counting back to the bill tendered is a double-check/self-audit for the cashier; the end-of-day receipts only tells you that someone screwed up somewhere.
That's why many new stores, especially ones that have large ticket items like department stores, have cameras focussed on the cash drawer.
Similarly, nine employees of the Illinois Secretary of State's Office face suspensions for improperly looking up Obama's driving record.
It's nothing new for people to abuse their authorization to information. There's just more information to look at now, and more controls to catch perpetrators.
@wsinda: "It wouldn't have worked if the superiors had an interest in keeping mum about the unauthorized access. So who guards the guards?"
You'll never create a situation where collusion to cover it up is impossible. The key is to have enough of a trail and enough review to reduce it to an acceptable level, which may vary based on the situation. Contrary to what many of my colleagues (in the audit profession) think, there is such a thing as too much control -- not because controls don't work, but because they are just too costly.
Echoing another comment, I'd be interested in what you (Bruce) think about video surveillance cameras as an audit tool, especially since you have argued against them in the past.
It would create additional overhead and would not always be feasible, but there are some of these situations where they may be able to reduce the "act alone risk." Granted, there is always the risk of collusion, but it still reduces risk.
Perhaps have one person check the customer in, and have another look up their records.
Example: You walk into a driver's license facility. One person checks you in, takes your information and flags your record, but cannot see the information. You then take your number and go to the service staff, and when it is your turn, they can see your record, but only because you checked into the facility.
Not perfect, but better for situations that aren't life and death, like health records.
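The check-in scheme above can be sketched in a few lines. Everything here (the role split, record IDs, and in-memory stores) is hypothetical and invented for illustration; it is just a separation-of-duties sketch, not a description of any real facility's system:

```python
checked_in = set()  # records flagged by the front desk (hypothetical store)

def check_in(record_id):
    """Front desk role: flag the record without being able to read it."""
    checked_in.add(record_id)

def view_record(record_id, records):
    """Service staff role: may read a record only after a check-in flag."""
    if record_id not in checked_in:
        raise PermissionError(f"{record_id} has not been checked in")
    return records[record_id]

records = {"lic:1234": "driving record for J. Doe"}  # invented data store
check_in("lic:1234")
print(view_record("lic:1234", records))  # allowed: the customer is present
```

The point of the split is that neither role alone can browse records at will: the front desk cannot read, and the service staff cannot read without a customer-triggered flag.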
@wsinda: "So who guards the guards?"
Marines with automatic weapons. Said Marines also roam around after hours checking for passwords under keyboards, unlocked safes, etc. Woe to you if they find a security violation.
But you didn't mean physical security, I suppose. I'll tell you who guards the guards. The freakin' Secretary of State her/himself. The Secretary DOES NOT like to go sit on the hot seat in front of Congress to answer for security breaches.
Everyone who gets access to any of State's systems has to attend a security briefing. A few things are made very clear: First, the aforementioned aversion of the SOS to being grilled by Congress, which means that security violations including unauthorized access will be dealt with severely; second, they WILL find out about unauthorized access because of the audits.
I know of employees getting a phone call within minutes of accessing certain people's records. The access was legitimate so there was no problem, but it removed any doubt about whether queries were being monitored in real time.
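Real-time flagging of that sort can be sketched very simply. The user names, record IDs, and in-memory log below are all invented for the example; a real system would write to durable, protected storage and page a supervisor rather than append to a list:

```python
import datetime

# Records whose access should trigger an immediate alert (hypothetical IDs).
FLAGGED_RECORDS = {"passport:obama-b", "passport:clinton-h"}

audit_log = []  # append-only trail: who accessed what, and when
alerts = []     # notifications that would go to supervisors

def access_record(user, record_id):
    """Log every access; raise an alarm when a flagged record is touched."""
    entry = {
        "user": user,
        "record": record_id,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    if record_id in FLAGGED_RECORDS:
        alerts.append(f"ALERT: {user} accessed flagged record {record_id}")
    return entry

access_record("contractor42", "passport:smith-j")  # routine: logged only
access_record("contractor42", "passport:obama-b")  # flagged: logged and alerted
```

The essential property is that logging is unconditional while alerting is selective, which is roughly the difference between the State Department's outcome and ICE's in the essay above.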
No one has mentioned collusion, another way of saying who audits the auditor.
Can audit logs be tampered with in a way that will not, itself, be audited?
It would be far more damaging to be able to create false audits implicating someone else.
A payoff here, a "favor" there, might give you access to the backend to directly influence the audit trail.
At some point, you just have to "trust" someone to be honest.
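One partial answer to "can audit logs be tampered with undetectably" is a hash chain: each entry commits to the hash of the previous one, so altering any past entry breaks every later link. The sketch below is a minimal illustration of the idea, not a production secure-logging design (real schemes add signatures, trusted timestamps, or write-once storage on top):

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every link; any edit to a past entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "alice read record 17")
append_entry(log, "bob read record 42")
assert verify(log)
log[0]["event"] = "nobody read anything"  # tamper with history...
assert not verify(log)                    # ...and verification fails
```

This shifts the trust problem rather than eliminating it: an insider who controls the whole log can rewrite the entire chain, which is why real designs anchor the chain's head somewhere the insider cannot reach.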
I don't understand health privacy issues. Could someone explain what the main threat of disclosing health records is?
Dear Mr. Health Security,
We regret to inform you that you were not selected for the well-paying job you applied for. We chose a slightly less qualified applicant because his extended family has far fewer health problems. Particularly, we don't want to have your children in our health plan.
Thanks for applying and good luck finding work anywhere.
"Could someone explain what the main threat of disclosing health records is"
- a health care worker may accidentally disclose your HIV+ test results to your husband and the world ... where your husband is not the source of your infection
- a server vulnerability may be exploited, and your recent, multiple hospital stays may end up on a blog read by a manager considering hiring you ... but the firm is reluctant to hire on 'yet another sicko'.
- a hospital may discard records on your hospital stay for treatment for erectile dysfunction. The records become public knowledge and become cafeteria fodder.
Everyone has something to hide. Medical records are potentially the most embarrassing and, socially, the most crippling of all.
@ HJohn, James
> Key is to have enough of a trail and enough review to reduce
> it to an acceptable level, which may vary based on situation.
> No one has mentioned collusion, another way of saying who
> audits the auditor.
This isn't exactly rocket science (although in practice a huge percentage of audit systems are designed really badly); it's just a matter of aligning the incentives properly. Given the scale of the auditors' responsibility, you need an appropriate method of ensuring their autonomy.
For example, if you're worried about security at a military installation, you should have the organization responsible for auditing security have a hierarchy that is independent from the command structure of the installation. So the base commander may be responsible for the day-to-day actions on the base, but when you audit the security policies, the lowliest member of the audit team answers to a command structure that bypasses the normal chain of command (Private Schmoe on the audit team, in this context, outranks General Schmee on the base).
Sure, this causes additional problems in and of itself (just ask any police officer what they think of Internal Affairs), but those are problems that should be managed by other mechanisms.
One of the problems with SOX (as a tangent) is that they try to bypass this problem by front-loading the burden of the audit on the entity being audited (which is just foolish). They do this because creating a proper autonomous auditing mechanism for major corporations would require a huge investment in whatever organization performs the audit.
@ Bruce "Which brings us back to President Obama..."
He's still President-elect at this time.
He won't become "President Obama" until he takes the oath of office on his inauguration day.
@ Health Security,
"I don't understand health privacy issues. Could someone explain what the main threat of disclosing health records is?"
Another reason is the change of drug usage over time.
For instance, a well known drug by a company called Pfizer was originally developed as a vasodilator, and one of its side effects discovered during testing became a much more lucrative use.
Likewise, certain early antidepressant drugs (tricyclic antidepressants) have been found to be much more useful when dealing with neurological pain in nerve endings in a mild disorder you might know as "shingles". Further, new research shows there may well be a crossover between antiviral and antipsychotic drugs for the same thing.
An HR manager, on finding just one of these drugs and knowing little or nothing about their usage, might well look them up on the Internet and draw adverse (HIV positive, severely depressed or "psychotic") conclusions about you simply based on reading the original drug data sheet.
One thing not known too widely is that there are "private HR databases" where data on high-flying or specialised candidates is held. Access to the data is partly on a quid pro quo basis, so adding new data earns more equitable access to existing data.
Needless to say, there is a market for medical information on all sorts of people, and you or your relatives do not have to (yet) be in the public domain to come under the microscope.
Which is why all medical records should be maintained at a high level of confidentiality for at least the equivalent of three lifetimes (say >250 years), not generations (30 years), as we are now realising adverse medical data can hurt your great-grandchildren's children...
And a few other typos in my above post 8(
Tricyclic antidepressants, like amitriptyline (called Elavil then)? A doctor gave me that drug for a separated shoulder when I was in my 20s. I was in a total stupor for two days after taking it (I couldn't even get out of a chair without assistance), and my mother (a longtime nurse) called the doctor (one of those useless workmen's comp doctors) who prescribed it and gave her a telephone beat-down I bet she doesn't forget to this day. I flushed the rest of the prescription down the toilet when I regained my ability to think straight.
I used this article as a reference for some advice regarding a recent exploit that came to light in the online MMORPG, EVE-Online. A major exploit had been in the wild for 4 years before a whistle-blower brought it to the attention of the developers.
CCP, the maker of EVE-Online, was able to use their auditing to determine who had been utilizing the exploit and where the "money" had gone, resulting in bans and items being removed from the game. However, they didn't have the type of prophylactic warning triggers that you spoke of for when data goes out of normal.
Link to exploit notification:
Link to my reference:
There was a dangerous assumption made in the essay itself that I haven't seen mentioned yet: "The police provide for our safety by investigating crimes after the fact and prosecuting the guilty: that's audit."
This is incorrect. The police prosecute the ACCUSED. Once they're found guilty, the prosecution is over. It is very very dangerous to assume that everyone haled before the beak is guilty.
Re auditing: you know this, but a point you didn't make is that knowledge of auditing probably also helps security in reducing folks' willingness to share access or take risks. If I know that there's a security camera on a door, I'm less likely to let someone piggyback. If I know that company computer systems are audited, I'm less likely to walk away and leave my PC unattended, knowing that the jerk across the way could drop in and do something that will get ME an unpleasant interview with the security folks.
No, it won't happen immediately; but once a few folks *do* get slapped or fired, it will start to seep into folks' consciousness.
ObAnecdote: I was consulting at a large automobile manufacturer in 1999, visiting every other week or so. One week I arrived and one of the cubicles was empty. "Where's 'Howard?'" I asked. "Um...he was disappeared" was the answer I got. Seems Howard didn't take the auditing seriously, and spent some time at naughty websites. Security showed up and spirited him away and he was never seen (on-premises, anyway) again. Even his manager didn't know until sometime later.
The rest of the folks in that group were VERY careful from that point on...
(Mind you, this was the same place where I had to sign my laptop in -- but if I waited until after 5, the guard went home, and I could walk out without signing it out. Or, presumably, with anything else I might have wanted to take with me.)
Re your "Audit" story from the December newsletter, two thoughts:
1) With respect to the NSA mention below... largely because of our litigious society, we have merged audit with precursor legal review, and the resistance by NSA and others is driven primarily by the audit function being shifted to pre-approval requirements. Lawyer/auditors will never move quickly enough for operational needs, hence legitimate and even critical surveillance events are lost. Better (in my opinion) to give operating organizations authority to act based on approved guidelines, and audit the hell out of them after the fact. The system will settle out on a reasonable line between type 1 and type 2 errors.
2) It might also be worth trying out "peer auditing" approaches, where (for example) CIA auditors grilled NSA operational records, NSA auditors led the review of FBI operational records and practices, etc. Might lead to a stronger system overall if one's peers graded out practices rather than
Was just reading this month's Crypto-Gram. Interestingly, the question of policies for dealing with trusted employees was also discussed on the SAGE mailing list just this past week.
As for the Fannie Mae story and removing network access as soon as an employee is fired - I suspect no companies of appreciable size can even accomplish that feat.
And for the more cynical folks - the obvious thing for any paranoid employee to do is to install such logic bombs well in advance, with a deadman switch / watchdog timer. E.g., "if I don't issue this reset command every week, trigger within X days." There's no reason to assume that terminating Makwana's network access immediately would have made any difference; the script may have been installed long before. I suppose it's a bit ironic to use such a standard technique of reliable systems (watchdogs) to intentionally destabilize a system, but it's all the same, really - making sure the code you want to run actually runs. Writing effective malware is not much different from any other type of software...
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.