Schneier on Security
A blog covering security and security technology.
November 9, 2009
Laissez-Faire Access Control
Recently I wrote about the difficulty of making role-based access control work, and how research at Dartmouth showed that it was better to let people take the access they need to do their jobs, and audit the results. This interesting paper, "Laissez-Faire File Sharing," tries to formalize this sort of access control.
Abstract: When organizations deploy file systems with access control mechanisms that prevent users from reliably sharing files with others, these users will inevitably find alternative means to share. Alas, these alternatives rarely provide the same level of confidentiality, integrity, or auditability provided by the prescribed file systems. Thus, the imposition of restrictive mechanisms and policies by system designers and administrators may actually reduce the system's security.
We observe that the failure modes of file systems that enforce centrally-imposed access control policies are similar to the failure modes of centrally-planned economies: individuals either learn to circumvent these restrictions as matters of necessity or desert the system entirely, subverting the goals behind the central policy.
We formalize requirements for laissez-faire sharing, which parallel the requirements of free market economies, to better address the file sharing needs of information workers. Because individuals are less likely to feel compelled to circumvent systems that meet these laissez-faire requirements, such systems have the potential to increase both productivity and security.
Think of Wikipedia as the ultimate example of this. Everybody has access to everything, but there are audit mechanisms in place to prevent abuse.
Posted on November 9, 2009 at 6:59 AM
"Think of Wikipedia as the ultimate example of this. ... there are audit mechanisms in place to prevent abuse."
Which kinda, sorta work, most of the time, if you don't mind spending so much effort repairing damage ...
"...there are audit mechanisms in place to prevent abuse."
Do they prevent it? Wikipedia could be full of abuse and who would know? I'm always running into messages to the effect that "The content on this page is completely unsubstantiated"
If the users can circumvent the central role-based control, why can't they circumvent the auditing? If you attempt to audit users, they will find a way around the system, so we need laissez-faire auditing to go with it.
And you can see how well they're working in the German Wikipedia, which now has a huge number of self-appointed censors with admin rights playing "Blockwart 2.0"... there's a huge discussion about that in Germany right now.
Users could circumvent the auditing. The assumption is that they don't want to. The point here is, people only circumvent the restrictions because the restrictions are a problem, and maybe the auditing wouldn't be a problem, so people wouldn't want to circumvent it.
It's a sensible enough idea. It's also one of those "interesting, if true" things; I'd like to see somebody try it, and hear about how well it works out.
I've always thought of this as optimistic security, in the same vein as databases use optimistic locking: assume that things will work, and handle the errors as they occur instead of trying to prevent them as in pessimistic locking.
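The analogy can be made concrete. Here is a minimal sketch of optimistic locking (all names are hypothetical, not from any particular database): each record carries a version number, no lock is ever taken, and a write simply fails at commit time if someone else got there first.

```python
class ConflictError(Exception):
    """Raised when another writer modified the record first."""

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0  # bumped on every successful write

def update(record, seen_version, new_value):
    # Optimistic: no lock is held while we work. We only check at
    # commit time whether the record changed since we read it.
    if record.version != seen_version:
        raise ConflictError("record modified concurrently; re-read and retry")
    record.value = new_value
    record.version += 1

r = Record("draft")
v = r.version          # read the record (and remember its version)
update(r, v, "final")  # commit succeeds: version unchanged since the read
try:
    update(r, v, "stale write")  # commit fails: version has moved on
except ConflictError:
    pass  # the "handle errors as they occur" step: caller re-reads and retries
```

The pessimistic equivalent would lock the record before the read; the optimistic version trades that up-front prevention for cheap after-the-fact detection, which is exactly the trade the paper proposes for access control.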
I strongly disagree. These are two different concepts: prevent, or detect & punish.
The decisive question is: Which mistakes can your company allow to happen? These mistakes are best dealt with by audits and subsequent correction/fining.
But for mistakes that you cannot let happen in the first place, you have to go for prevent.
To take the Wikipedia example: Wikipedia does not allow users to edit anything other than the articles.
There is a growing number of articles that Wikipedia does not allow everyone to edit. Eventually they got fed up with having to undo vandalism every second on controversial or very public articles.
And if laissez-faire file-sharing is modeled after laissez-faire capitalism then I fear for the abuse it will lead to. Extremes are rarely a good thing.
Wikipedia style editing works very well in a corporate environment with changes being tracked and everyone being able to see who wrote what.
Yes, someone can write completely untrue crap, but in our case it's not really a problem.
It's completely internal so the public doesn't see it. Bad information isn't making it to our customers.
The staff knows enough so unless someone was putting in technical information which looks good, it's easily caught.
And there's a strong incentive to not be stupid: Your name is on your changes and you will be held accountable.
Put in great work and you get the respect of peers and gratitude of bosses. An occasional minor mistake is understandable and easily corrected, but anyone who majorly or deliberately messes up can clear out their desk.
Exactly -- except Wikipedia is a bad example.
Wikipedia is now HUGE, making auditing and cooperative methods less and less efficacious. But since the wiki is bigger than almost any other organization, its failure modes are fairly irrelevant except as a boundary condition.
In most private organizations, almost all communication is within "tribal" boundaries -- most folks are talking with fewer than 500 people, making cooperative efforts (and "auditing" in general) work well.
The reason it's not used more often isn't rational, but political -- folks want power, and auditing methods diffuse power.
As much as it's about power (every tribe wants more power/prestige than the next one), it's also about face saving... people want assurance that they aren't going to send out a report with every third word changed to "penis" or worse (something I found in an article when my kid was looking something up on Wikipedia last week).
And while it might be the stock boys playing a prank... if your paychecks all go away because someone got smart and transferred the money to Antigua before anyone caught it in an audit... people begin to want central control.
And it's more likely going to be some geeky person who just wants to show how chaotic it could be because people aren't paying attention to every access 100% of the time. In either case, people don't get paid, people get miffed and want stronger controls... and eventually you end up where you began.
IT security truly is a thankless job. When things are secure, people grumble about operational problems and inconveniences, and accuse IT security of being an impediment and/or alarmist. However, those same people will demand to know why IT didn't do more of this, more of that, detect this, detect that, predict this or that, the minute something happens.
I guess it is a Murphy's Law for IT security, which is basically hindsight bias. People demand that IT explain why they are wasting resources on certain controls until an incident happens, then those same people demand to know why IT wasn't doing more (of the very thing that was called a waste before).
I think the fancy book-learnin' phrase here is: throw everything at a wall and see what sticks.
That approach might work well with a large sample size, but in a company small enough that control measures such as segregation of duties aren't physically possible, you're virtually guaranteeing fraud.
One other thing- after the audit, it would have been nice to see what steps were taken to implement corrective measures and the efficacy of those measures.
Obviously it doesn't work everywhere. You can't undo a data leak. Once the data is gone, it's gone. If you have sensitive information that might be high value to an outsider, then you probably don't want your janitor to have access, or the janitor might be out of the country with a USB stick by the time your audit catches it.
@HJohn: "IT security truly is a thankless job."
I've often said, at my company, "IT security is about spending huge sums of time/money in big bursts to save little bits of time/money over long periods of time, and spending little bits of time/money over long periods of time to save big bursts of time/money fixing broken things. No wonder it's so hard to get money for it - impedance mismatch."
This reminds me about a statement from Google about Wave, saying that they intentionally don't provide a way to block others from editing your messages, in order to get people to use it differently from normal email.
This has been thoroughly misunderstood by almost everyone commenting on the subject, but it makes a lot of sense to me. Anybody who edits a message clearly appears as a coauthor and you can play back all changes so there is accountability, and it only applies to people invited to the wave, so you don't have the Wikipedia effect (hordes of anonymous assholes).
Malicious editing is prevented by accountability, and it greatly reduces the need for having various independent copies of the same content, and thus losing control of it. You also can invite additional people to "subthreads" of a wave to share some of the information contained in it. If somebody wants to "steal" a copy, that's a whole different problem anyway, but I feel (from the demos I have seen) that it gives a lot of flexibility, with accountability, and can greatly reduce the necessity to work around the system.
@RH at November 9, 2009 11:09 AM
Part of the problem is many people don't realize the value unless something bad happens. If something bad is prevented, the value isn't realized and there is no material gain (just loss).
For example, buying a new firewall or a better antivirus solution doesn't generate revenue, it prevents loss of resources. I'm sure you know this, but this is why security is a tough sell.
Interesting, I've noticed a paradox of this sort when it comes to encryption. Many people who don't understand the value of a good firewall or IDS seem to really feel like they are doing something if they strengthen their encryption level from X-bits to 2X-bits or 4X-bits, etc. Probably because, unlike other defenses that are largely just words to them, they can measure how good they feel the encryption is in terms of bits.
Like most things, though, when something happens the decision makers miss the mark in terms of their response. I.e., someone uses a keystroke logger to bypass encryption: blame the encryption level instead of the implementation. Someone gets through a poor firewall, undetected because there is no IDS: start encrypting. Someone breaks the back window and steals stuff: arm your security guard at the front door. Etc.
I think it is human nature for people to want to strengthen what is already strong so they can tout its strength rather than strengthen the weakest links. After all, who wants to report to the board that they get a B in security when one can wow the board with an A in cryptography (never mind the D that can cause it to be circumvented).
Audit mechanisms do not prevent abuse any more than laws prevent crime. All they do is provide a tool with which to, hopefully, a) correct the situation and, b) punish the offender.
@KingSnake: "Audit mechanisms do not prevent abuse any more than laws prevent crime. All they do is provide a tool with which to, hopefully, a) correct the situation and, b) punish the offender."
You are correct, there is no preventative value. But there is a deterrent value.
Likewise, audit may not make it tougher to do something wrong, but it does make it tougher to get away with it.
@HJohn: There is only deterrent value if the person wishes to stay at that organization after the theft or breach. If the person steals some data that is vital and disappears, what good is the audit trail if you can't find the person? This type of access control only applies to situations where the organization is looking to prevent casual/accidental access to prohibited data.
Wikipedia increasingly does not allow free access to everything. It's a good example of a small free-market system transitioning to a regulated-market system as it grows.
@Mike: "There is only deterrent value if the person wishes to stay at that organization after the theft or breach. If the person steals some data that is vital and disappears, what good is the audit trail if you can't find the person? This type of access control only applies to situations where the organization is looking to prevent casual/accidental access to prohibited data."
Which is why, law or no law, you need them to sign some kind of solid disclosure agreement to provide a basis for legal action against them.
It's not just Wikipedia. In the past 10 years we've also seen how badly laissez-faire-with-auditing works in the financial markets. In most setups, auditing is once again a cost center, and the interests of the auditors are apparently opposed to those of the people "getting stuff done". There may be ways to fix this with crowdsourcing or large incentives for auditors who discover something screwy, but that takes the same kind of top-echelon commitment as any other good security practice.
In addition, you pretty much have to make all communications go through the system, or else there will be a huge incentive for auditors who find something questionable to use a side channel to talk to the person who did the questionable thing. But in an organization where a large majority were actually committed to the organization's goals, sure.
@paul and Audit as a cost-center.
That makes perfect sense. What happens when you fix all of the issues discovered and auditors have nothing meaningful to find and report? They lose budget, and that won't do.
There is no easy solution.
@Jason: "That makes perfect sense. What happens when you fix all of the issues discovered and auditors have nothing meaningful to find and report? They lose budget, and that won't do."
That's not really how it works, at least not in a good audit shop. I've issued countless reports with clean opinions (yes, in case you haven't guessed, I'm an auditor). The value of an audit function is not in the number of findings, but in the level of accuracy: both the accuracy of reports and the accuracy we foster in an organization. After all, an audit with a clean opinion does not necessarily reflect what the circumstances would have been and how people would have performed had there not been an audit function. (It's like how many people don't speed because speed is radar enforced, versus how many would speed if they were certain they wouldn't get a ticket.)
This is also to say nothing of the consulting engagements an auditor performs, which are about implementing systems and controls right, not detecting what is wrong afterward.
That said, there are no doubt bad auditors who will exaggerate issues and cook up findings for budgetary or other reasons. I think there are fewer than one may think, but we call them "gotcha auditors" in my shop. And they do more harm than good.
HJohn, CIA, CISA
Wikipedia is such a clueless example. You are talking about the mechanism, not the efficacy.
Another attack route is to hijack someone else's identity, then do malicious things masquerading as them. If hijacking identities is easier (or less auditable), then auditing hasn't really solved anything.
@Harley Quinn: "Another attack route is to hijack someone else's identity, then do malicious things masquerading as them. If hijacking identities is easier (or less auditable), then auditing hasn't really solved anything."
If stealing an ID is easier, then it is a weakest link problem, and it is a problem with authentication, not with auditing.
Of course, auditing should involve determining if the authentication is suitable. But if a user writes their password on a sticky on their monitor, against corporate policy, and someone uses their ID to perform unauthorized actions, that isn't a problem with auditing.
Auditing, in the sense we are using it, is to determine if the actions taken by the authenticated user is appropriate.
Hmm, laissez-faire policy and audit are unrelated to each other, and thus should be viewed separately.
Laissez-faire works in a world without empires; everywhere else it eventually fails.
It is important to realise that laissez-faire does not scale beyond the point where all the players are well known to each other and, importantly, do not wish to compete with each other.
This has actually been put into practice by some "small business clubs" in the UK, in that each club can only have a single member in a particular area of business.
It removes internal competition and increases mutual co-operation, which importantly adds value by not wasting the significant effort and resources required to be "competitive" within the club.
In essence laissez-faire needs a community of peers, not a hierarchy of tyranny and patronage.
As has been noted above, "audit" merely records the events for later analysis.
It does not prevent wrongdoing, any more than it improves business processes.
Its purpose is to provide key data to those that need it in a way that can then be used by an organisation to assess its operation.
Audit is the sensing part of a feedback process. It produces a signal (raw data) that is conditioned according to a set of criteria and made available to affect the functioning of a system.
Audit used for the purpose of "punishment" is a complete waste of resources; all it achieves is fear and tyranny, which give rise to patronage and empire building.
Audit used to provide key data is almost the exact opposite: the resources are used towards organisational improvement in a timely fashion.
Where it is used for key data and people see the benefits, they co-operate with the process and the organisation starts to become more flexible and thus more resilient to change by being responsive to it.
Just from the abstract, they're talking about file systems.
Seriously, what sorts of files are there for which laissez-faire access control is acceptable? The same set as digital signatures are useful for: non-executable documents. Which rules out .doc, .pdf, .html...
Otherwise, to generate your audit trail - how do you know *what's* been changed?
What about a trojan accessing data with credentials of an employee? Surely, the bad guy will get all the data he needs from the system, framing the poor guy who got pwned.
@ "impersonation comenters"
Impersonation is one of the reasons "audit" should not be used to "punish".
And "impersonation" happens due to "authentication" failing.
And as we see every day, we do not have reliable ways of authenticating users, let alone their actions.
Imagine having to enter a "CAPTCHA" every time you hit the "enter key".
It was the driving force behind "trusted path systems". However these are always susceptible to "end run" attacks.
Hence multi-factor authentication where one thing is "something you have", i.e. a token.
As we are finding, "something you know" is vulnerable to "end run" attacks.
And "something you are" in the sense of bio-metrics has so many problems it is at best a "curiosity" in its own right, and as such cannot be used on its own.
The preference for tokens rests on the assumption that the valid user can only be "impersonated" if they "break the rules" by allowing a third party access to it.
However we know from the likes of credit cards and chip-n-spin that tokens have their own "protocol" problems that allow either MITM or "end run" attacks.
An example of this is the recent problem with a protocol error in TLS/SSL where a MITM attack can use what is effectively a "replay" attack to slip in through the authentication process...
As has been observed many times, "security is a hard problem", but we have to ask ourselves at what point the costs outweigh the benefits.
In the human world we have "two person" and "voting" protocols where actions only happen where two or more different "entities" agree it should.
Perhaps if the "authenticated actions" problem cannot be "cost-effectively" solved then we may need to go to "soft AI" systems to check an action is "reasonable"?
Probably not because there is then the question of "perspective", an action may or may not be reasonable depending on other contributory factors.
And that is the real issue: we are trying to put "human attributes" into systems that only understand "rule sets" and "exceptions to rule sets", which just cannot be built into working systems.
My vote would be for a system that is not "secure" but "bandwidth limiting": it limits what can be taken.
If you have an allowable action rule set and an exception occurs, you pause the system and wait for one or more parties to agree or disagree with the action.
It does not prevent security breaches but it does help limit the damage that can be done.
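The pause-and-agree scheme described above can be sketched in a few lines. This is only an illustration under stated assumptions: a simple whitelist stands in for the rule set, and the "parties" are just callables returning yes or no (all names hypothetical).

```python
# The allowable rule set: actions that proceed without question.
ALLOWED_ACTIONS = {"read_report", "edit_own_files"}

pending = []  # exceptions are paused here while humans decide

def request(user, action, approvers):
    """Allow rule-set actions; pause anything else until the parties agree."""
    if action in ALLOWED_ACTIONS:
        return "allowed"
    # Exception to the rule set: pause and ask one or more parties to decide.
    pending.append((user, action))
    votes = [approve(user, action) for approve in approvers]
    if votes and all(votes):  # e.g. a two-person rule: everyone asked must agree
        return "allowed after review"
    return "denied"

# Two stand-in reviewers, one who always agrees and one who always objects.
says_yes = lambda user, action: True
says_no = lambda user, action: False

routine = request("alice", "read_report", [])                 # within the rule set
reviewed = request("bob", "export_db", [says_yes, says_yes])  # two-person approval
blocked = request("eve", "export_db", [says_yes, says_no])    # one objection stops it
```

A real system would queue the exception asynchronously rather than block, but the shape is the same: the rule set handles the routine cases, and anything unusual is damage-limited by requiring agreement before it proceeds.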
I agree with Clive Robinson, and would like to add that there is a fundamental issue in that traditional access control systems are modeled on hierarchical organizations where secrecy is considered important (need to know). These traditional systems are therefore not intended to promote collaboration, but instead to maximize control. Any system that inherently promotes collaboration decreases the amount of control over data and its use. IMHO it therefore doesn't make much sense to try to have both sharing and extensive control. Personally, as a sysadmin, I feel that in an ordinary organization there is little that really needs to be kept secret internally, or even externally. But that is not the default view. The default is to compartmentalize, and create secrets. I would say, you either trust your colleagues and employees or you don't.
"Laissez-faire works in a world without Empires, every where else it eventually fails."
...except that Empires fail too - spectacularly, and with lots of damage to both their own and outside populations.
In fact, historically the laissez-faire societies tended to live much longer than empires. I wonder why. (And, no, you don't generally get to read about them in history textbooks because there isn't much to write about - no major wars, no bigger-than-life chieftains, no nothing.) Meanwhile, there's a growing mountain of evidence that the glory of the empires was mostly propaganda (check, for example, "Barbarians" by Terry Jones).
I don't know if it prevents abuse, but it certainly reduces utility. I have some very nice photos on several of the subjects in Wiki. I wanted to upload them, but you can only upload a photo if you have already uploaded a bunch of photos.
It's just like getting a helicopter license. You can only rent a helicopter if you already have 1,200 hours flying helicopters...
@Clive: "And that is the real issue we are trying to put "human atributes" into systems that only understand "rule sets" and "exceptions to rule sets" that just cannot be built into working systems."
I think the problem is less about how to represent the legitimate access requirements of an authenticated user in a certain situation, and more about how to know what those requirements actually are. The problem is that users often don't *know* what they need to do their jobs, until the moment they need it --- and then if they have to go through some process to request the access to it, inefficiency results and users quickly start looking for a way to circumvent it. And even if they do usually know in advance, explicitly managing fine-grained access control takes too much time and effort and is too difficult to get right. And sooner or later some sort of "really urgent" situation will come up that isn't covered by the current permissions, and then the security mechanism will literally be the one thing preventing the person from doing their job in that urgent situation.
I see this "laissez-faire" idea as saying, "remove all of the inconvenience and let the users do whatever they think they need to do". But in an auditable fashion of course, so that you can detect misuses after the fact.
One really nice property of this, is that it adapts to unusual situations where a user suddenly has a legitimate access requirement and needs immediate access. You can flag it as "unusual" and follow up later to make sure it was actually legitimate; meanwhile, the user has the access necessary to address the situation.
By removing all of the inconvenience, you get rid of most of the motivation users might have to try and circumvent the system. (I think only users who are trying to avoid the audit logging would still have a reason to try and subvert it).
It does open you up to some kind of attacks (such as data theft) but at least you have the audit trail, which you wouldn't have if the users circumvent the system entirely.
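The allow-by-default, flag-the-unusual behavior described in the last few comments can be sketched very simply. This is a toy illustration, not any real product's API; the audit log and per-user access history are assumed structures, and the "unusual" test (never touched before) stands in for whatever anomaly heuristic a real system would use.

```python
import time
from collections import defaultdict

audit_log = []                   # every access is recorded; none is blocked
usual_access = defaultdict(set)  # what each user has touched before

def access(user, resource):
    """Laissez-faire check: always grant, always log, flag the unusual."""
    unusual = resource not in usual_access[user]
    audit_log.append({
        "time": time.time(),
        "user": user,
        "resource": resource,
        "flagged": unusual,      # follow up later to confirm it was legitimate
    })
    usual_access[user].add(resource)
    return True                  # access is never denied up front

access("alice", "q3_budget.xls")  # first touch: granted, logged, and flagged
access("alice", "q3_budget.xls")  # now routine: granted and logged, not flagged
flagged = [entry for entry in audit_log if entry["flagged"]]
```

Note how this captures both halves of the trade-off under discussion: the urgent, unanticipated access succeeds immediately (no inconvenience to circumvent), while the flag queue gives auditors exactly the "unusual events" to review after the fact.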
Unfortunately, you need real-time audit control of these permissions. Otherwise, one rogue employee assigns themselves full rights, downloads your entire SAN to a few terabyte disks, and sells them to your competitor in all of a few hours, with you none the wiser until you find your data on rentacoder.com.
@derf: "Unfortunately, you need real time audit control of these permissions."
Not always. Real time audit control is expensive and not always feasible, but that does not mean you are helpless.
We make employees sign disclosure agreements, and we cite applicable laws (or policies if no laws exist), penalties, and consequences. We also do some other withholdings on departure and make sure we have firm grounds for lawsuits.
Granted, after the fact does nothing to make data harder to take, but it does deter people, knowing there will be consequences.
Years ago we tried to manage the sites people visited by having the IT team audit based on policy; that failed to scale. Later we had individual managers audit, mostly to manage the volume of data, and different managers had different impacts on behavior. Some cared too much and others did not; that was a failure because groups compared themselves to the relaxed managers.
What worked best was having groups audit themselves: everyone within a given group could review, block, and unblock URLs. This worked very well, issues could be worked out, and the audit trail for access control was viewable. What was interesting: in groups of all men, illicit viewing was allowed on occasion but self-corrected most of the time.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.