Privacy Violations by Facebook Employees

I don't know if this is real, but it seems perfectly reasonable that all of Facebook is stored in a huge database that someone with the proper permissions can access and modify. And it also makes sense that developers and others would need the ability to assume anyone's identity.

Rumpus: You've previously mentioned a master password, which you no longer use.

Employee: I'm not sure when exactly it was deprecated, but we did have a master password at one point where you could type in any user's user ID, and then the password. I'm not going to give you the exact password, but with upper and lower case, symbols, numbers, all of the above, it spelled out 'Chuck Norris,' more or less. It was pretty fantastic.

Rumpus: This was accessible by any Facebook employee?

Employee: Technically, yes. But it was pretty much limited to the original engineers, who were basically the only people who knew about it. It wasn't as if random people in Human Resources were using this password to log into profiles. It was made and designed for engineering reasons. But it was there, and any employee could find it if they knew where to look.

I should also say that it was only available internally. If I were to log in from a high school or library, I couldn't use it. You had to be in the Facebook office, using the Facebook ISP.

Rumpus: Do you think Facebook employees ever abused the privilege of having universal access?

Employee: I know it has happened in the past, because at least two people have been fired for it that I know of.

[...]

Employee: See, the thing is -- and I don't know how much you know about it -- it's all stored in a database on the backend. Literally everything. Your messages are stored in a database, whether deleted or not. So we can just query the database, and easily look at it without ever logging into your account. That's what most people don't understand.

Rumpus: So the master password is basically irrelevant.

Employee: Yeah.

Rumpus: It's just for style.

Employee: Right. But it's no longer in use. Like I alluded to, we've cracked down on this lately, but it has been replaced by a pretty cool tool. If I visited your profile, for example, on our closed network, there's a 'switch login' button. I literally just click it, explain why I'm logging in as you, click 'OK,' and I'm you. You can do it as long as you have an explanation, because you'd better be able to back it up. For example, if you're investigating a compromised account, you have to actually be able to log into that account.
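The employee doesn't describe the tool's internals, but the flow as described — impersonation gated on a recorded justification — might look something like this minimal Python sketch (all names, the token format, and the audit store are hypothetical, not Facebook's actual implementation):

```python
import datetime

AUDIT_LOG = []  # stand-in for an append-only audit store

def switch_login(admin_id, target_user_id, reason):
    """Impersonate a user, but only with a recorded justification."""
    if not reason.strip():
        raise PermissionError("a justification is required to impersonate a user")
    AUDIT_LOG.append({
        "admin": admin_id,
        "target": target_user_id,
        "reason": reason,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # The session acts as the target account but is attributed to the admin.
    return "session:%s:impersonated-by:%s" % (target_user_id, admin_id)
```

The point of the design is that the justification is captured *before* access is granted, so "you'd better be able to back it up" has a record to check against.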

Rumpus: Are your managers really on your ass about it every time you log in as someone else?

Employee: No, but if it comes up, you'd better be able to justify it. Or you will be fired.

Rumpus: What did they do?

Employee: I know one of them went in and manipulated some other person's data, changed their religious views or something like that. I don't remember exactly what it was, but he got reported, got found out, got fired.

Posted on January 19, 2010 at 11:25 AM • 33 Comments

Comments

Odalchini • January 19, 2010 12:02 PM

No surprise here.  *Every* service, system, database, whatever, has administrators with privilege to do anything or change anything anywhere.  You have to have such people to run the service.  You have to trust them.  Sometimes they go bad.  In a good company they should be vetted and their activities monitored, but sometimes they go bad despite all the precautions that anyone can think up.  It’s a risk that goes with using computers.  It’s a manageable risk, but still a non-zero risk.

wiredog • January 19, 2010 12:19 PM

As you say, makes sense. That's how Slashdot, for example, works.

Interesting (and good) that they fire people who access the db without a good reason.

mtinberg • January 19, 2010 12:22 PM

This is a good example of using audit rather than draconian restrictions to enforce policy.

moo • January 19, 2010 12:22 PM

@Odalchini:
It's possible to build systems where no one person has such godlike access. Consider banks, for example: normally it takes at least two people to do anything godlike with their systems, and ideally *everything* is logged for audit purposes. But even then, your high-ranking administrators can probably come up with lots of clever ways to screw you. They must be people who can be trusted. Anyway, your comment is correct for probably 99% of systems everywhere: someone has the root password, literally holds the keys to the whole castle of your business in their hands, and that person better be someone you trust.

mike • January 19, 2010 12:39 PM

This isn't just a problem with computers and databases. Wherever there is restricted information, there is always someone with unfettered access to it. Think about university or college registrars before everything was converted to electronic records: anyone with access to the records room could change grades. Or, in the military, Page 13 disciplinary records disappear all the time if you know the right yeoman or clerk.

Roy • January 19, 2010 12:44 PM

We have the same thing at Pwned.com. Like Odalchini said, it's something that's required by the developers. Only two of us have actual access to the database, though, and we are the two owners of the site.

You have to have a healthy balance of trust for your admins and privacy for your users.

RonK • January 19, 2010 1:02 PM

@ moo
> and that person better be someone you trust.

1. It's obviously stupid to rely on someone you don't trust. Did you mean instead "better be someone who you are correct to trust"? (brought to you by the duke of pedantry)

2. It seems to me that the amount that an ordinary trustworthy person can be trusted has a lot to do with the perceived damage he does by betraying your trust. For example, I might consider revealing the content of a seemingly harmless Facebook message or two if someone had made a realistic death threat against my family. OTOH, I would be suspicious that someone would be willing to go so far to get what seems to be useless (and harmless) information, so I'm not _totally_ sure what I would do in that situation.

Mace Moneta • January 19, 2010 1:59 PM

@RonK > It's obviously stupid to rely on someone you don't trust

Well, do you honestly think that every technician in every central office or network data center is a personally trusted individual?

They have the knowledge and tools to troubleshoot telecom and/or network problems. But sometimes there are long, boring, nights - and physical access doesn't leave an audit trail.

I always assume a bored tech is listening, no matter what the communications medium. If that makes you uncomfortable, make sure everything you do is encrypted.

SteveR-TTH • January 19, 2010 2:14 PM

FWIW

Facebook disputed the claims in a statement, noting that "This piece contains the kind of inaccuracies and misrepresentations you would expect from something sourced anonymously".

Steve

Clive Robinson • January 19, 2010 2:25 PM

@ Bruce,

Since the medical profession has chosen to stick a needle in the back of my hand, put in a plastic pipe or two, and bandage it all up, my typing on the mobile is not what I would wish for.

However, I could not fail to notice the "Chuck Norris" reference 8)

(Oh and how are the lookalike "li'l Bruces" coming along?)

On a more serious note.

Why does the "super user" concept cause such wonder in people? It's been around for over a third of a century with little change.

It has been recognised as a security hole for almost the same length of time the concept has existed.

I guess one always needs to check that "one's gods" are not "fallen angels" in "sheep's clothing". But even the most honest of people have been known to fall foul of what appear with hindsight to be trivial inducements.

Thus the assumption that nobody can be trusted all the time would be a sound design choice, and putting in "two man" operating/control and full auditing would appear to mitigate "individual" transgressions.

But I guess the "root" concept, like that of "passwords", is one that will continue to outlive the usefulness that gave rise to its birth in less resourced times.

Mark • January 19, 2010 2:46 PM

@Bruce, I believe that this tool does exist, though some specifics may not be accurate.

Facebook provides a limited version of the tool to every user.

In the privacy settings for your profile, you can view your profile (and only your profile) through the eyes of anyone on your friend list or the public.

Honestly, it's one of the best UX tools I've seen for verifying privacy settings.

It's a small leap of faith to imagine that a limited number of Facebook staff can access the same PHP script w/o the "friends list" restriction.

Slonob • January 19, 2010 2:49 PM

They could add a kind of just-in-time provisioning for something like this. You can try to log in as someone else, but a secondary must approve it. Even if the secondary is only notified and does not participate in the runtime authorization, they could very quickly catch troublesome requests. For example, send the secondary an email with two links: expected and not expected. "Not expected" goes into a manual review by security.
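Slonob's two-link scheme is easy to make concrete. A minimal Python sketch (the function names, routes, and escalation states are all hypothetical illustrations of the idea, not any real system):

```python
import secrets

pending = {}  # request_id -> details, awaiting secondary review

def request_access(requester, target, reason, notify):
    """File an impersonation request and notify a secondary reviewer."""
    request_id = secrets.token_hex(8)
    pending[request_id] = {"requester": requester, "target": target, "reason": reason}
    # The reviewer gets two one-click choices; "not expected" triggers review.
    notify("%s wants access to %s: %s\n"
           "[expected] /review/%s/expected\n"
           "[not-expected] /review/%s/flag"
           % (requester, target, reason, request_id, request_id))
    return request_id

def review(request_id, expected):
    """Resolve a pending request; an unexpected one is escalated to security."""
    details = pending.pop(request_id)
    if not expected:
        return ("escalated", details)
    return ("cleared", details)
```

Note that access is not blocked while the reviewer decides; as Slonob says, even after-the-fact notification catches trouble quickly, because the requester knows every request lands in someone else's inbox.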

I could imagine this addressing a lot of the risk. Even as a white hat hacker, my boss would not give me a pass if he got such an email indicating that I requested access to a system with sensitive data where that activity was not expected. Because of my unusual role, I might get away with one or two but eventually it would start to smell bad.

Further you could graph these requests and watch for outliers. If someone is requesting a lot and his boss is approving a lot or more than ever before, maybe they both need to have an interview on the subject.

If FB really works as described, it's just plain irresponsible of them. I'm sure few employees need this capability.

ted • January 19, 2010 6:19 PM

Fired? It would be nice if the same standards applied when the FBI illegally accessed phone records.

VoodooTrucker • January 19, 2010 7:08 PM

That is just plain irresponsible. On every system I code, I make sure users' passwords are one-way hashed so that I don't even know them. Soon I will be going to OpenID. Either way, however, there is no super-user password. A secure(r) system would allow administrators to *impersonate* users, but the admin would still log in with *his* password, and it would show up that way in the logs. If an admin posts terrorist propaganda on someone's page, at least you have an audit trail of who did it. Not so if everyone shares a password.
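The one-way hashing VoodooTrucker describes is standard practice: store only a salted hash, so even the site operator cannot recover the password. A minimal Python sketch using the standard library's PBKDF2 (the iteration count is an illustrative choice, not a prescription):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only the salted one-way hash is ever stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

A unique random salt per user means two users with the same password get different digests, and precomputed rainbow tables are useless.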

Dominic White • January 19, 2010 10:32 PM

The part about messages never getting deleted is true. If you delete messages and a follow up is sent, there is an option to view the 'deleted conversation' and no option to make the delete permanent. So at least one part of it is true.

BF Skinner • January 20, 2010 7:01 AM

@Odalchini " has administrators with privilege...risk that goes with using computers. It’s a manageable risk..."

Just came back from a brief about the 2009 Verizon breach report. http://www.verizonbusiness.com/resources/... - 20% of the breaches were from internal causes. The breaches from internal sources disclosed more records (unsurprising). Most internal breaches were caused by end users and IT admins/super users (~50/50).

The risk may be manageable, but it doesn't look like it was being managed for those that got breached.


Bruce talks about the 2008 report here - http://www.schneier.com/blog/archives/2008/06/...

Can't wait for the 2010 report which I'm told is going to be released at RSA in March.

Nobody • January 20, 2010 10:03 AM

>VoodooTrucker
They aren't saying anything about how passwords are stored - the login system simply has to accept either the user's password or the master password.

It is needed (rather than a root login and su) in order to exercise the login process as a given user. And this can be logged in the same way.
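Nobody's point, and the flaw MarcT and others raise, can be shown in a few lines. The master password is just an extra OR branch in the login check, which is exactly why it can't be attributed to a person or revoked for one person without changing it for everyone. A Python sketch (plaintext comparison is used only to keep the illustration short; real storage would be hashed as discussed above):

```python
import hmac

def login(user_id, password, user_passwords, master_password):
    """Accept either the user's own password or the shared master password.

    The 'master' branch is anonymous: the log can record that the master
    password was used, but not which of its holders used it.
    """
    if hmac.compare_digest(password, user_passwords.get(user_id, "")):
        return ("user", user_id)
    if hmac.compare_digest(password, master_password):
        return ("master", user_id)  # who used it? the log can't say
    return None
```

The "switch login" tool fixes precisely this: the admin authenticates as himself first, so the impersonation is attributable.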

MarcT • January 20, 2010 11:11 AM

The backdoor password, while unsurprising, is clearly a problem. The replacement system, with built-in auditing, is the only practical solution.

In my first day orientation as a programmer at a large Internet retailer, we were told, "Yes, Al Gore shops here. And Keanu Reeves. And Bruce Schneier. And you will have access to look up their records in the db. And we will fire you if you do." Perfectly clear, and the only way to get the job done.

Credit Cards were a completely different matter - one-way hashes, 3-headed dogs, etc. But we didn't need to compare your CC number against other people's to guess what you'd like to buy.

* No, they didn't really mention Bruce. He only ever buys his own books anyway...

Moe • January 20, 2010 3:45 PM

Chuck Norris doesn't *need* a master password, he just asks nicely and the website lets him in.

Nor does Bruce Schneier. He has rainbow tables memorized for every encryption scheme, so he can brute force your password in his head.

Nobody • January 20, 2010 6:08 PM

> They aren't saying anything about how passwords are stored - the login system simply has to accept either the user's password or the master password.

Granted, I strayed from the point.

> It is necessary (rather than a root login and su) to check the login process for a given user. And this can be logged in the same way.

Really? Are you saying that the login process differs by user?

The concept of a super-user password is a security flaw. You can't log who is using it, and you can't blacklist someone who knows it.

Okay, so they say it only works internally. Great, so hackers are only one AirSnort away...

Nick P • January 20, 2010 6:42 PM

The Facebook thing shouldn't be a surprise. I wouldn't be surprised if providers of other service types, including "secure/private" email or collaboration vendors, were doing this. The traditional approach is to break up superuser into distinct roles, like network administrator and auditor. Unfortunately, one person usually ends up with access to both.

Having at least admin and auditor done by separate people is a plus. They'd have to be very different people who collaborate little, though. Why? Well, I'd say two geeks on the receiving end of headaches by "lay" management are pretty likely to form a clique and team up. This increases the odds of collusion in criminal or otherwise unethical activity. Also, if any *real* security is desired, then both must be present for any hardware changes from opening a shipped package, to installing, to configuring and backing up the software. This is because it's easy to compromise a system with a physical attack, even easier for an admin, and at that point access controls and logging don't matter much.

The main defense against malicious users doing physical attacks would be schemes with a TPM, IOMMU and secure OS (maybe Turaya or INTEGRITY Desktop). Would this beat a clever sysadmin with plenty of time to goof with the hardware? I wouldn't count on it, as TPMs are mainly designed to beat software attacks. In some applications, a tamper-resistant processor like the IBM crypto-processor or the recent Curtiss-Wright rugged board may do. However, I'd say a trustworthy centralized control scheme with an untrusted administrator with physical access is currently low assurance. Splitting it up between geographically separated people, auditing by external firms, etc. are better options.

blue92 • January 21, 2010 12:15 AM

It's just as much about how the app & database are structured as it is the login page. You can have all the security in the world on your front-end password hash, but if your DB has an open ODBC service on your weak in-house intranet or VPN, any J. Random Hacker can lift a programmer's physical Blackberry and grab his intranet's static cookie value... and once he's in he can touch all the data he wants. I've had to patch these types of vulnerabilities. Let's face it: people get lazy, and programmers are indeed people too.

Worse is the trade-off between secure data and support people being able to do their jobs. Even if they are logged and traceable, the damage can still be done.

At some point you have to trust a certain set of people, but there are no absolute guarantees. The tale in question may or may not be true, but it's not factually improbable. Back doors are not merely the imaginings of comp/sci-fi hacks; they do indeed exist on occasion -- sometimes intentionally and sometimes not.

elven wine butler • January 21, 2010 1:12 AM

"Joy is a weakness" - Dukat, ST:DS9

Oh, you silly humans and your unencrypted, advertisement burdened internet. When will you ever learn? All of the cool cats are chatting on Tor hidden services discussion forums, not putting their lives up online on fauxbook.

Mark • January 21, 2010 7:41 AM

@Clive Robinson
Why does the "super user" concept cause such wonder in people. It's been around for over a third of a century with little change.

Considerably longer than that. The issue of telecommunications being intercepted has been around for as long as telecommunications have existed. N.B. all that's needed for effective telecommunications is a written language. Encryption methods were developed thousands of years ago to address the issue.
Fairly soon after the invention of the telephone, Almon Strowger invented a method of automatic telephone switching, because he suspected that human operators might be untrustworthy. (No doubt before the end of the 19th century other people were working out how to "tap" telephones connected to machines of his design.)

Eric • January 21, 2010 2:21 PM

I'm pretty sure gmail admins CAN'T log into my account, much like myspace or hotmail admins...

I mean, come on, the data is sitting in their laps. What is the big surprise here? The master password? OK, not very smart, but it's not there anymore.

Nick P • January 21, 2010 6:15 PM

@ Eric

No doubt, but it doesn't have to be that way. Services like Facebook or Gmail could be set up where you only have to trust a small set of people. Mainly, one or more administrators and a group of testers or auditors. Regular web developers don't need access to the customer data. They can use mock data for most testing purposes. This setup is better than everyone having total access to all the users' data.

Ben • January 22, 2010 2:11 AM

Why not role-based authorization? Sure, you can have a superuser group, but obviously you limit who is in it, and still log which ID performs which action. Not that hard.
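The role-based scheme Ben suggests is indeed simple at its core: map actors to roles, map actions to the roles that grant them, and log every decision under the actor's own ID. A minimal Python sketch (the role and permission names are made up for illustration):

```python
audit = []  # every decision is logged under the actor's own ID

ROLES = {"alice": {"superuser"}, "bob": {"support"}}
PERMISSIONS = {
    "read_messages": {"superuser"},
    "reset_password": {"superuser", "support"},
}

def authorize(actor, action):
    """Allow the action only if one of the actor's roles grants it."""
    allowed = bool(ROLES.get(actor, set()) & PERMISSIONS.get(action, set()))
    audit.append((actor, action, allowed))
    return allowed
```

Unlike a shared master password, a denied or suspicious action here points at a specific person, which is what makes the fire-on-abuse policy enforceable.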

Nick P • January 22, 2010 12:23 PM

@ Ben

The thing to worry about is physical access. If a person has physical access to a server and a seemingly legitimate reason to mess with it, they can own it. RAM attacks, USB drives that load rootkits before the OS, virtualization-based form of the same, firewire-based malware planting... the list goes on. A guy may just have an Auditor role, but if he has physical access then he must be trusted to have total access. Because, if he wants to, he will have total access. You see why simply dividing it all up doesn't work without tamper-proof hardware or geographic separation?

Xett • January 22, 2010 10:55 PM

I work for a mobile phone provider and I can tell you that privacy is not about access, it's about culture. Let me explain.

In a locked-down system, the average user is unable to access anything without filing a request with the admin. This puts time pressure on the admin and generates a lot of extra work. The admin also has very little time to investigate the reasons for any access. However, this system does not stop people with the skills and motivation to crack it; it actually allows them more time to crack the system unnoticed while the admin is taking care of the requests from the users.

The other end of the spectrum is a free-access system with the admin randomly auditing access attempts. This means the unwelcome intruder always has a good chance of getting caught, so they will spend more time trying to eliminate evidence of their presence. As any admin will know, this actually makes them easier to spot: a single point of access with no other data for the session is a much better indicator of something wrong than a mass of mostly OK sessions. An admin can also apply a simple "append only" attribute to the log file and remove the ability to change that attribute even from the root user, making even a full-access profile unable to remove all trace of its presence.

All up, a balance is usually found by determining whether the number of sessions expected per day is better matched by a pre-approval system or an audit system.

ben • January 23, 2010 5:52 AM

@Nick P -- if they have physical access you are p0wned. I suppose that's the ultimate, unavoidable super-user. Kind of like "ED," Descartes's "Evil Demon."

But assuming the hardware is situated such that you'd need to suborn a *lot* of people (and that routers have some similar protections, though end-to-end encryption has made even man-in-the-middle attacks pretty tough), role-based authorization would have eliminated the problem at Facebook, and it has been a standard for a long, long time.

Nick P • January 23, 2010 1:25 PM

@ ben

It's not so simple. RBAC often creates as many problems as it tries to solve. Read Kevin Mitnick's book for a steady stream of exploits, even against two-factor systems, that often require nothing more than a con. Access control in environments like these is simply too restrictive, kills productivity, and is fought by users until a broad form of access is allowed.

The best that can be done at a place like facebook is accountability. Similar to what they have, but with few trusted individuals. For instance, a hardened database with strong auditing that only the designated Auditor or administrator can access. The guy would regularly audit and investigate security-relevant events to determine wrongdoing. Both admins and anyone with physical access are still ED-style superusers and should be minimized in number and trustworthy in character.

3lee • November 28, 2010 10:04 PM

I know someone who works for Facebook and he can see anyone's private messages.

I reeaalllly wish I knew that earlier

Leave a comment

Allowed HTML: <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre>

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.