Firesheep is a new Firefox plugin that makes it easy for you to hijack other people’s social network connections. Basically, Facebook authenticates clients with cookies. If someone is using a public WiFi connection, the cookies are sniffable. Firesheep uses WinPcap to capture and display the authentication information for accounts it sees, allowing you to hijack the connection.

Slides from the Toorcon talk.

Protect yourself by forcing the authentication to happen over TLS. Or stop logging in to Facebook from public networks.

EDITED TO ADD (10/27): To protect against this attack, you have to encrypt the entire session—not just the initial authentication.

EDITED TO ADD (11/4): Foiling Firesheep.

EDITED TO ADD (11/10): More info.

EDITED TO ADD (11/17): Blacksheep detects Firesheep.

Posted on October 27, 2010 at 7:53 AM • 74 Comments


Larry Seltzer October 27, 2010 8:18 AM

It’s not so much public networks as open, unencrypted Wifi. If you log in to AT&T Wifi from a Starbucks for example, you shouldn’t be vulnerable to this.

BTW, there’s nothing at all surprising about this. HTTP session hijacking is not news.

Also, you can force Facebook and many of these other services to use HTTPS, but things aren’t quite right. For instance, you won’t have Facebook chat and just about anything you click on will try to go back to HTTP.

bob October 27, 2010 8:28 AM

It’s not so much public networks as any network.

I’ve just run this on my work LAN. Bizarrely, all my colleagues have posted “Facebook sucks” on their walls almost simultaneously.

Of course, there’s nothing new here but it’s a very simple, easy interface.

Why does encrypted wifi make a difference? Both the attacker and I have authenticated against the router.

Yvan Boily October 27, 2010 8:37 AM

There are some design considerations to think about there. It is reasonable to generate a session and allow a user to interact with a site unauthenticated; then, when the user elects to authenticate, switch to HTTPS and regenerate your session key. Once you step a user from unauthenticated to authenticated, it becomes critical to protect the session variable and other state information via HTTPS.
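
Yvan’s upgrade step can be sketched in a few lines. This is an illustrative, hypothetical session store (all names invented, no particular framework’s API):

```python
import secrets

# Hypothetical in-memory session store: session id -> session data.
sessions = {}

def start_anonymous_session():
    """Issue a session id for an unauthenticated visitor (plain HTTP is fine here)."""
    sid = secrets.token_hex(16)
    sessions[sid] = {"authenticated": False}
    return sid

def login(old_sid, user):
    """On login (over HTTPS), discard the old id and issue a fresh one.

    Reusing the pre-login id would let anyone who sniffed it before the
    HTTPS switch ride along into the authenticated session."""
    data = sessions.pop(old_sid, {})
    new_sid = secrets.token_hex(16)
    data.update(authenticated=True, user=user)
    sessions[new_sid] = data
    return new_sid
```

The key point is that `login` throws away the pre-login id: anyone who sniffed it over plain HTTP now holds a dead token.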

Clive Robinson October 27, 2010 8:58 AM

If only Facebook were the only guilty party…

Some of the people that make authentication errors that are exploitable also want to sell you other services (including cloud)… Why on earth should I use them?

Sadly, as many find, they have little choice. In their “social world” it will be due to peer pressure; in the workplace, management will say use service X (as it’s cheap etc. etc.) and will justify the risk by saying something along the lines of “it will not involve sensitive data” or some such.

The problem is that, in time, sensitive data will go on a poorly authenticated service almost as surely as the sun will come up tomorrow. And people will find, as many Facebook users have found too late (often at their cost), that once sensitive data is out of your grasp you have no control over it; in fact it can end up controlling them once it is in others’ grasp.

I am reminded of various (so called primitive) cultures that had beliefs about the power that could be gained over them if their name, image or some part of them was made available to others…

thecoldspy October 27, 2010 9:07 AM

He demands that we demand SSL everywhere we go, yet he links us to a site that has zero SSL attached to it for his plugin. Granted, there isn’t much on the site but the plugin and the slideshow showing us various issues. But if he is going to demand that we demand SSL, then he had better be sure he uses SSL on his own site to sell us on his plugin and slideshow, for we may just get penetrated if we don’t, lol.

n October 27, 2010 9:08 AM

Bruce, you got it wrong. You have to encrypt the ENTIRE SESSION to protect from this, not just authentication. Otherwise, anyone can sniff the cookies.

MarCon October 27, 2010 9:16 AM

“people will find as many face book users have found to late…once sensitive data is out of your grasp you have no control on it”

How true. So why would anyone post anything “sensitive” on a social networking site in the first place?

kiwano October 27, 2010 9:35 AM

@thecoldspy: He doesn’t demand SSL everywhere, just everywhere that you log in to. His website doesn’t have a login, so there’s nothing to protect with SSL.

GregW October 27, 2010 9:51 AM

@kiwano: If his website hosts code and the code isn’t digitally signed, then the transmission channel for the code should at least be protected via SSL or it is subject to a (malicious) MITM substitution of some kind. Right?

I have confirmed that the .xpi plugin is not digitally signed. However, I have also confirmed that one can manually change the download links to HTTPS.

So a secure download of Firesheep is possible, but since the whole point of the plugin is to complain about sloppy security defaults, I have to agree with @thecoldspy that there seems to be a bit of irony/hypocrisy going on here.

calandale October 27, 2010 10:20 AM

This is simply highlighting that a login page provides a form of negative security – it gives an illusion, but unless there are sufficient protections to enforce that perception, it’s nothing more.

Since many sites use the credentials paradigm to just collect valid contact information, they don’t bother serving up real protection.

I think there should be a push on browsers to support warnings to users, when they log into a site which is not providing adequate controls to meet those expectations. While a plugin could do this, pretty much anyone concerned enough to install one is likely aware enough to look at whether the site is served up via SSL (though perhaps not enough to make sure the cookies are handled securely).

The problem is that a browser has no way of knowing which cookies need to be treated securely, in order to provide the warning. It can make some pretty good guesses though. Users faced with a warning every time they provide credentials to access a page, and then have an apparent session token exposed, might well start putting pressure on the sites that they visit.
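
One way a browser (or plugin) could make those “pretty good guesses” is to remember which cookies were first set over HTTPS and warn before sending one over plain HTTP. A rough sketch, with every name invented for illustration:

```python
# Hypothetical heuristic: track where each cookie was born, warn on downgrade.
https_born = set()  # (host, cookie_name) pairs first set over HTTPS

def record_set_cookie(host, name, over_https):
    """Call whenever a Set-Cookie header arrives."""
    if over_https:
        https_born.add((host, name))

def should_warn(host, name, request_is_https):
    """Warn when a cookie that was set over HTTPS is about to leave over
    plain HTTP: it is probably a session token the site forgot to mark
    Secure, i.e. exactly the kind of thing Firesheep harvests."""
    return (host, name) in https_born and not request_is_https
```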

Shane October 27, 2010 10:31 AM

@calandale Re: “a browser has no way of knowing which cookies need to be treated securely”

Well, despite it not being in the original RFC (iirc), there is a ‘Secure’ flag for the cookie header which, when set, instructs the browser to send the cookie only over HTTPS (or another TLS-protected connection); the separate ‘HttpOnly’ flag keeps it away from scripts.

Frankly, this is just Facebook sucking. Nothing new here. The security of their marketing data is about the only thing they’re concerned with. I’ve never built a site using authentication that didn’t specify both HttpOnly when setting cookies and the Secure flag, with the added bonus of using SSLRequireSSL directives for any sensitive paths. It’s certainly not foolproof, but it’s the common-sense first step toward making script-kiddy-shit like Firesheep useless.
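
For the record, Shane’s two flags look like this on the wire. A minimal sketch using Python’s standard-library cookie class (the cookie value is made up; the `SSLRequireSSL` directive he mentions is separate Apache configuration):

```python
from http.cookies import SimpleCookie

# Build a session cookie that the browser will only send over HTTPS
# (Secure) and will hide from page scripts (HttpOnly).
cookie = SimpleCookie()
cookie["session_id"] = "d41d8cd98f00b204"   # illustrative value only
cookie["session_id"]["secure"] = True
cookie["session_id"]["httponly"] = True

header = cookie["session_id"].OutputString()
print("Set-Cookie:", header)
# With both flags set, a Firesheep-style sniffer on an open network
# never sees the cookie in the first place.
```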

Bad Santa October 27, 2010 10:51 AM


hoodathunkit October 27, 2010 10:53 AM

@Coldspy, GregW – It is your problem, not his. If you are reading his site, then there is no need for security: you are not —and should not be— sending or receiving sensitive information to his site.

If you download the plugin, that’s your problem too. He signed on/authenticated to his server (which may be over TLS/SSL) and he uploaded a known valid program. No cookies or details are needed to download; you merely have to trust the plugin. OTOH, even over a secure channel you still have to trust the plugin.

The breach is against the server, which is accessed by capturing ‘keys’ that clients openly broadcast.
@BF Skinner – If 3rd party is enabled, Facebook serves personal pages past login.

Nick P October 27, 2010 11:00 AM

So, where can this trend of doing too much on insecure web 2.0 sites take us? Let’s see… wait until they start doing banking via Facebook. Then Firesheep and all the hijacked accounts it produces will be worth a lot of money online, esp. to vertically integrated criminal organizations.

I also agree with two claims by other commenters: 1) the author not protecting against MITM makes him a hypocrite; 2) the attack is nothing new, and session hijacking via cookies has been a problem for a long time. That these easily beaten problems keep recurring in web applications shows why I have little confidence in web 2.0 from a privacy or security standpoint.

winter October 27, 2010 11:01 AM

“Force TLS” is a Firefox AddOn being promoted as a workaround for Firesheep. It seems you have to configure Force TLS to work on the sites you want to visit. I’ve been using the AddOn “HTTPS Everywhere” (I think Bruce mentioned it a few months back) for a while now, which does the same thing but all the time and for every site you visit (even Google) without having to configure anything. Which is good for lazy people like me.

NoScript Options October 27, 2010 11:13 AM


Force the following sites to use secure (HTTPS) connections:

Shane October 27, 2010 11:35 AM

I think some ppl are really missing the point here. If you’re an end-user who doesn’t even realize what’s so important about sending your authentication credentials across HTTPS vs. HTTP, do you really think it’s going to help to install even more things you don’t understand, and then be responsible for configuring them properly? And all that is under the shaky assumption that said end-user even uses Firefox.

Subsequently, if you do understand why HTTP vs. HTTPS matters vis-à-vis authentication, then just type the damn ‘s’ after the protocol. I mean c’mon, ppl. Forcing a whole domain to be sent over HTTPS is silly IMHO, and probably breaks all over the place unless it drops the requirement for areas not secured by HTTPS (making it pretty useless anyhow, wouldn’t you think?). Just my 2 cents plus tax.

jgreco October 27, 2010 11:40 AM

@thecoldspy at October 27, 2010 9:07 AM

He doesn’t demand SSL, and his objective doesn’t even seem to be demanding SSL.

Rather, his objective seems to be making you, and everyone else, demand SSL. Security people have been pointing out how insecure so many of our daily online activities are, but up until now nobody has quite managed to figure out how to make the general population care.

That’s why this is so clever. Session hijacking is pretty old hat, but the idea that the best way to make the general population care is to make the attack accessible to the general public (not just security researchers who happen to read your paper) has never really been explored this effectively, as far as I can remember.

NoScript Options October 27, 2010 11:58 AM


It’s the only way to be sure…

You’re missing the point that this is a cookie interception problem and the fault lies with Facebook. If Facebook won’t require the authentication cookie/token be sent over secure connections, it’s up to the user – like it or not.

It also counters attacks like Moxie’s ARP-spoof then proxy then strip the ‘s’ from ‘https’ attacks (sslstrip).
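
The server-side counterpart to a force-HTTPS plugin is the Strict-Transport-Security header (still an IETF draft at the time). A toy sketch of how a client could honor it; the store and function names are invented:

```python
from urllib.parse import urlparse, urlunparse

# Hosts we have previously seen send a Strict-Transport-Security header.
hsts_hosts = set()

def remember_hsts(host, response_headers):
    """Call on each HTTPS response; record hosts that opt in to HSTS."""
    if "strict-transport-security" in {k.lower() for k in response_headers}:
        hsts_hosts.add(host)

def upgrade(url):
    """Rewrite http:// to https:// for known-HSTS hosts before any request
    leaves the machine, which is what defeats sslstrip's 's'-stripping."""
    parts = urlparse(url)
    if parts.scheme == "http" and parts.hostname in hsts_hosts:
        return urlunparse(("https",) + tuple(parts[1:]))
    return url
```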

Shane October 27, 2010 12:04 PM

@NoScript: “fault lies with Facebook”.

Yea, it does, which is exactly what I said in my first comment. 😛

nobodyspecial October 27, 2010 12:23 PM

Because unless there is a flaw in the router, both being logged onto the same encrypted router doesn’t let you sniff other Wifi users’ traffic.

Joel October 27, 2010 1:10 PM

Protect yourself by forcing the authentication to happen over TLS. Or stop logging in to Facebook from public networks.

OR, do the sensible thing and don’t use Facebook at all.

jgreco October 27, 2010 1:10 PM

@nobodyspecial at October 27, 2010 12:23 PM

You certainly can for definitions of “encrypted” that include WEP, and (I was under the impression, could be wrong..) WPA-PSK/TKIP.

kats October 27, 2010 1:19 PM

@Bad Santa: That code doesn’t help. It redirects you to the https version after you’ve already loaded the http version, at which point it’s too late.

Badder Santa October 27, 2010 2:04 PM


You’ve misunderstood “Bad Santa’s” purpose in posting the simple script. The “crowbar” isn’t supposed to help against Firesheep. It shows that wrapping secure cookies and HTTPS logins inside insecure pages doesn’t protect against anything: if your outer layer is insecure, you can’t guarantee that your inner layers are secure (because someone can just modify your outer layer to exclude the inner layer’s security).

Namely, Bad Santa is making a specific point about the futility of secure authentication if the entire session isn’t encrypted.
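
The script itself didn’t survive in this thread, but the general point can be shown with a toy rewrite. This is only an illustration of the principle, not attack tooling:

```python
import re

# An insecure outer page that embeds a "secure" login form.
page = '<form action="https://example.com/login" method="post">'

# Anything delivered over plain HTTP can be edited in transit: downgrading
# every https link makes the browser submit credentials in the clear, and
# no Secure cookie or TLS login form *inside* the page helps.
tampered = re.sub(r"https://", "http://", page)
print(tampered)
```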

Vishal October 27, 2010 2:59 PM

Any user connecting to a secure (including WPA2 protected) WiFi access point can later attack other users by creating a rogue access point with matching SSID and shared key. Other clients next time will auto-connect to this rogue access point due to same SSID and key.

Problem: Clients do not authenticate Servers.

Solution: SSID should be like an HTTPS URL (uniquely identifiable by human and secured using Verisign certificates that clients will verify before connecting).

Richard October 27, 2010 3:08 PM

Given the typically ‘hardcore’ technical nature of this blog’s content, I’m really surprised by how many people are failing to understand exactly what this issue is (including, shamefully, Bruce himself!).

Anything unencrypted during transport is susceptible to interception; that has always been true. Firesheep just wrapped a simple GUI around this that lets anyone grab your session cookie and impersonate you on that site (likely people you don’t know at a coffee shop, or people you do know when you’re at work).

The answer is to be outraged that these sites don’t enforce HTTPS at all times; then they will start doing so, or a competitor will in their place.

btw… I hear FB are working on HTTPS for all pages, but you have to appreciate that given their scale it’s not just a flip of a switch for them.

Shane October 27, 2010 3:17 PM

@Richard Re: “I hear FB are working on HTTPS for all pages, but you have to appreciate that given their scale it’s not just a flip of a switch for them.”

Are you kidding me? No one should ‘appreciate’ the time it takes a giant megacorp to retroactively implement something that should have been in place from day 1 of going public.

Shane October 27, 2010 3:21 PM

The sad fact is that, since it takes 50 op-eds, 50,000,000 angry users, and apparently a statue of the Virgin Mary crying tears of blood to get Facebook to implement even basic privacy controls, Firesheep is basically a PSA illustrating the fallacy of trusting some kid’$ data-mining megacorp to handle all of your most sensitive personal information with integrity and transparency.

Damien October 27, 2010 3:24 PM

It goes far beyond Facebook !

If one was running a packet sniffer on any public network, one could get POP3 passwords just as well — and then also use the victim’s credentials for SMTP authentication. With the ability to read and send a victim’s email, not only can I totally take control of his entire Facebook account, but also Twitter, Hotmail, Gmail, Yahoo, and a variety of other sites offering “I forgot my password, reset and send me a new password”.

kangaroo October 27, 2010 4:03 PM

Richard: The answer is to be outraged that these sites don’t enforce HTTPS at all times and they will start doing so or a competitor will in their place.

HAHAHAHAHAHAHAHAHAHAHAHA! Yeah — that’s how the market works, because the end-consumer is their customer! HAHAHAHAHA!

Oh, were you serious?

Brian October 27, 2010 5:35 PM

When I was a TA at university, one of my responsibilities was to create new lab assignments for the class. As it was a security-related degree, I created a lab that was expressly about stealing one’s session cookies from across the lab. It was, by far, the most popular and interesting lab of them all. Needless to say, not a single student in that lab ever logged into Facebook from the public computers again.

BF Skinner October 27, 2010 5:59 PM

@Richard ” hear FB are working on HTTPS for all pages, but you have to appreciate that given their scale it’s not just a flip of a switch for them.”

I had a client that had 37 major applications (multi-level architecture with 10 to 15 webservers each, both production and disaster recovery) that required TLS encryption between the client and the webserver. The data center enabled it in a day (well, 2 weeks to get the change approved and 2 weeks to test, but a day to implement) because all the systems sat behind an F5 BIG-IP load balancer. The load balancer was establishing the HTTPS sessions. The protocol was a setting on the LB. Done.

Zynga is considered a success because they can spin up an average of a couple hundred machines a day to cope with their demand growth. There’s load balancing going on there.

Jay October 27, 2010 7:39 PM

I guess we should be relieved that Gmail no longer does the “encrypt the login but not the session” thing that they used to… thanks, China!

And to all those who keep saying “use https”… well, Facebook redirects you to http after the login if you try that. I do it anyway, although I don’t know why I worry about login page MitM when there’s this 🙂

Nick P October 27, 2010 10:07 PM

@ BF Skinner on SSL deployment

Good points. Many overstate the cost of SSL deployments. It’s usually only a small percentage of the total cost and the obfuscation is worth it. There’s quite a few hardware appliances, cheap FPGA’s with cores, and processors with crypto primitives on the market. Many of these crypto accelerators have enough performance in transactions per second that they can do the work for more than one web server. Of course, if end-to-end only means the public IP, then having it at the load-balancer as you pointed out can be enough.

So, these companies just need to quit screwing around. Btw, on your network example, did you mean each security level had 10-15 web servers or each of the 37 apps had 10-15 web servers? And were the levels separated by network hardware or did they use a MLS OS for this? Just curious.

RonK October 28, 2010 2:25 AM

@ Clive : … I am reminded of various (so called primitive) cultures that had beliefs about the power that could be gained over them if their name, image or some part of them was made available to others.

Wow, Clive, I really love this comparison. You’ve outdone your (usually impressive) self.

Kossi Yetongnon October 28, 2010 5:11 AM

Easy Hotmail Session Hijacking on secure Wi-Fi.

Tested Firesheep over my home WPA-secured connection and was able to access any Windows Live account even after logging out from Hotmail, thanks to this add-on.
This security breach allows anyone with access to your computer (friend, relative, wife, …) to have unrestricted access to any Hotmail session you’d have started from your PC. This only worked with Windows Live; it didn’t work for Facebook, Yahoo Mail, Twitter, …

Public Wi-Fi spots aren’t the only concern here but MSN cookie-based authentication definitely needs a revamp!

Baddest Santa October 28, 2010 9:25 AM

@ Badder Santa: Thank you.

SSL (Secure Sockets Layer) “protection” is USELESS. It is layered ABOVE the connection protocol (TCP/IP), but BENEATH the application protocols (HTTP, SMTP, etc.)—and the latter include the incredibly powerful combination of the javascript: pseudo-protocol and RegExp.

Adding 2-factor authentication (a “2-way handshake”) to SSL does NOT make it any more secure. It wasn’t secure TO BEGIN WITH, remember? And now, since the “secure” layer isn’t there anymore, neither are the hands.

Shane October 28, 2010 10:50 AM

@Baddest Santa: Um, what?

SSL is, yes, usually implemented above the transport-layer protocols, but it encapsulates the application-specific protocols (HTTP/SMTP/et al). It is not ‘beneath’ them at all.

Flawless? No. Useless? Hardly.

hmmm October 28, 2010 11:58 AM

@Clive: “I am reminded of various (so called primitave) cultures that had beliefs about the power that could be gained over them if their name, image or some part of them was made available to others…”

Hmmm, it seems that those “primitives” have had it right all along.

Nick P October 28, 2010 3:14 PM

@ BF Skinner

Thanks for the info. It seems that separation at the network level rather than OS or app level is the most common strategy. I guess the MLS operating systems are just that much of a pain in the ass to the users. Whatever became of the GEMSOS/Blacker VPN’s and XTS-400 servers? Have you seen anything like that recently or is everyone going low assurance with SELinux or Solaris 10 Trusted Extensions?

Black Panther October 28, 2010 7:00 PM

Surely authentication is carried out on every page load/refresh, so every page would need to be viewed over an SSL connection to ensure the cookie is secured?

I just cleared out my cache whilst viewing a page, then loaded another page – I was kicked to the login page as the authentication data was now trashed.

I think a few people need to go back and review exactly how these systems are working, and realize very quickly indeed that if you need SSL, USE IT EVERYWHERE ON THE SITE.

To do otherwise is to lock your front door and leave the key in the lock – totally pointless!

George Ou October 28, 2010 11:32 PM

Forcing SSL only works on a few sites. A lot of sites will automatically dump you back to the HTTP site, or they flat out don’t support it period.

Richard October 29, 2010 6:46 PM


Do you all seriously think that it’s easy for FB to enable SSL overnight? You are all CRAZY!

I ran what was the busiest website on the Internet, so believe me, it’s not just a matter of cost to enable SSL… there’s a little bit more complexity to it than that. SSL overhead may not seem like much, but at FB’s scale it’s massive; assuming they are scaled efficiently, it will mean buying a crap ton more servers and/or F5s even for only a few percentage points of overhead.

To compare them to any other site in complexity of enabling SSL is laughable. Only Google and their switch to SSL on Gmail comes close to resembling the scale, and Google are so embarrassingly inefficient by comparison that for them it was not such a big deal (just adding a few thousand more of their under-utilized servers is virtually a flip of a switch there).

jsecure October 30, 2010 8:20 AM

@Baylink “switching to WPA does indeed fix this problem, as WPA AP’s assign a separate session key to each association.”

Actually, switching to WPA does not necessarily solve the problem.

Consider the following: anyone who knows the pre-shared key (PSK) can set up a rogue access point with the SSID of the real access point and use the same PSK to lock it.

So the rogue access point acts as a proxy or relay and forwards the traffic to and from the real access point. This is pretty much a variation of the man-in-the-middle attack.

The attacker can see the unencrypted traffic of the people who connect to the rogue access point. Hence, Firesheep will work in this setup.

Tom Anderson October 30, 2010 10:51 AM

The really sad thing is that this attack is (correct me if i’m wrong!) completely ineffective against a connection secured with RFC 2617 digest authentication, a mechanism built right into HTTP and supported by every browser going (IE’s is buggy before 7, but tolerably so). That uses a broadly cookie-like mechanism, but the cookie is computed with every request, and includes a request counter to defeat replay attacks.

It’s also a mechanism that nobody uses. Why not? Because it involves using the browser’s crummy password entry UI, rather than being able to present a nice snazzy login form. The W3C needs to come up with a way to drive HTTP authentication from HTML or javascript, and then issue a fatwa against form login.
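
For reference, the RFC 2617 response Tom describes is a chain of MD5s over the credentials, the server nonce, and a per-request counter. A minimal sketch (parameter values are illustrative only):

```python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    """RFC 2617 'auth'-qop response. The nonce count (nc) changes on every
    request, so a sniffed response cannot simply be replayed."""
    ha1 = md5_hex(f"{user}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# Illustrative values (loosely modeled on the RFC 2617 example):
resp = digest_response("Mufasa", "testrealm@host.com", "Circle Of Life",
                       "GET", "/dir/index.html",
                       "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                       "00000001", "0a4f113b")
print(resp)
```

Note the password itself never crosses the wire, only the digest; that is why a passive sniffer gets nothing reusable.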

John Reynolds November 1, 2010 7:24 AM

Current web security is very much all or nothing: as noted in the article you need to use TLS for the whole session to properly secure it, and such sessions can’t easily include non-TLS secured content.

However, using TLS for all content causes problems. In particular, since it encrypts the content, the content becomes uncacheable by the network infrastructure. This is fine for sites where the content is personalised (e.g. email), but really painful for shared-content sites where much of the size is taken up by images and video.

The flaw in web protocols is that there is no way to have content that is authenticated, but not encrypted.

You could write a javascript HMAC system to do this, but the browser same-origin policy means you can’t securely bootstrap the session: a securely (TLS) loaded script can’t load non-TLS content to be able to validate it.
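
What John is asking for, integrity without confidentiality, is exactly what an HMAC provides at the message level; the missing piece is a standard, bootstrappable way to deliver and check it in the browser. A sketch of the primitive itself (the key and content are invented):

```python
import hashlib
import hmac

key = b"per-session-key-delivered-over-TLS"   # hypothetical bootstrap step
content = b"<html>... big cacheable page ...</html>"

# The cache/CDN can store and serve `content` in the clear; the tag lets
# the client detect tampering without the content being encrypted.
tag = hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(received_content, received_tag):
    """Recompute the tag over what actually arrived and compare in
    constant time; any in-transit modification changes the tag."""
    expected = hmac.new(key, received_content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)
```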

Julian Evans November 2, 2010 8:19 AM

Seems this sniffer doesn’t work on Macs, which is pleasing for Mac users and not so pleasing for white-hat hackers. We tested this with a MacBook Pro running Safari on Mac OS X 10.6.4 & 10.5.8. / Julian

Stefan Fouant November 2, 2010 11:57 AM

There seem to be so many misconceptions about this particular vulnerability. Surprisingly enough, even Bruce himself has fallen victim to these fallacies.

@Larry Seltzer – it’s not just unencrypted wireless networks that are vulnerable. Encrypted networks, including networks encrypted via WPA2, are susceptible to this form of attack. In addition, wired networks are vulnerable as well.

@jsecure – On WPA2 you don’t necessarily need to set up a rogue access point to exploit this problem. An ARP spoofing attack could be mounted on the WPA2 wireless network to force all traffic through the attacker’s machine, allowing the attacker to sniff for the HTTP session keys.

I covered this in a blog article at:

David Schwartz November 3, 2010 4:19 PM

@John Reynolds

This is just a sign of a poorly-designed caching setup. The cache can be the SSL endpoint. The web site can tell the cache what content is cacheable and what is private. The cache can honor these settings.

The cached content can only be used, SSL or no SSL, for two clients that (directly or indirectly) use the same cache anyway.

However, I do agree that while all this is possible, it’s not very practical. I proposed a set of HTTP extensions to solve these problems some time ago.

The main impetus was seeing windows updates not being cached because they were sent over SSL and Linux updates not being cached because each machine chose to retrieve them from a different server.

The mechanisms provided the ability to request an object only if the hash didn’t match one of a given set of hashes, the ability to query the hash of an object, the ability for a cache to cache encrypted data that it cannot decrypt, and so on. They also permitted using SSL only to transfer an encryption key and then transmitting a pre-encrypted file, saving the web server the effort of encrypting the file for each client that downloads it.
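
One of those proposed extensions, “fetch only if the hash doesn’t match one I already have”, can be sketched server-side as follows; the conditional mechanism and all names here are hypothetical, not any deployed HTTP feature:

```python
import hashlib

# Server-side sketch of a hypothetical hash-conditional GET.
objects = {"/update.bin": b"update payload v1"}

def handle_get(path, known_hashes):
    """Return (status, content_hash, body). If the client already holds a
    copy whose hash matches, answer 304-style with no body, so the object
    travels at most once per cache, SSL or no SSL."""
    body = objects[path]
    digest = hashlib.sha256(body).hexdigest()
    if digest in known_hashes:
        return 304, digest, b""
    return 200, digest, body
```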

Sadly, the world rarely works the way we all know it should.

The lesson of Firesheep is what we all already knew — pretty much everything should be encrypted. If nothing else, it makes it harder to know what to attack.

Antago November 8, 2010 12:14 PM

During the last weeks more and more users of Firesheep were looking for the possibility of using it in a switched environment. So the next step in supporting the goal of Firesheep (showing how dangerous the use of plain HTTP really is) was to find an easy way to perform HTTP session hijacking in a switched network environment by combining Firesheep with ARP spoofing.
Our paper describes how to achieve this goal with a user-friendly interface.

You can find it here

bigrotor November 8, 2010 2:12 PM

Firesheep is a Firefox extension, not a plugin. If you try to install it in the plugin folder, it won’t work; it goes into the extension folder.

Longpoke November 16, 2010 8:03 AM

Everyone talks like they are surprised about the existence of this plugin or beat the dead horse about how to “fix” it.


Facebook could easily be secured via TLS if they wanted to (cryptography isn’t expensive; proof: Google mail). AES-NI, etc.

But none of this matters because:

X.509 with public CAs is insecure: it relies on trusting every public CA in existence, and I’ve seen a few hacked already. Plus browsers don’t care if someone MITMs and unencapsulates the TLS session; the browser will just not show the lock, and you probably won’t notice. Something like HSTS can fix this, but it still doesn’t lock the site to a single certificate, so all you need to do is hack one public CA and you can MITM anyone. It would be better if CAs only existed to vouch that a certificate belongs to a certain site, and let the site’s owners take care of managing it from then on. This way security at bootstrap is increased by obscurity, but if the bootstrap didn’t get tampered with, you have a trusted route to that site forever.

But none of this matters because:

The web wasn’t made to be secure. Basic things like authentication don’t even work; you need anti-CSRF tokens just to make a request on behalf of yourself. No browser or web server follows the specs properly, and most also violate them for legacy compatibility, or to be compatible with some other app that violates the spec, leading to undefined behavior. Every undefined behavior is a bug, which means a potential security vulnerability. Then you get programs that can’t integrate properly with the web (such as PHP, (any web framework?), Flash, etc.), which cause more vulnerabilities by not integrating properly (typically both cause XSS-related issues: PHP due to lack of strong typing, so there’s no way to display data without running it through some sanitization process every time; Flash because of its crappy integration).

I doubt any of the problems with the web will ever be fixed…

These problems are also insignificant next to the fact that every browser is riddled with 0day remote-code-execution issues (once again due to lack of strong typing; read: C/C++. SQLi also exists due to lack of strong typing, BTW).

It all comes down to the fact that everyone is so obsessed with their precious legacy, and too incompetent to just ditch it and make something really secure. We could have capability-secure OSs now, which would be faster, easier to use, and more secure (they wouldn’t really have any security problems other than trusting TCB developers and one or two logic flaws per century) than current OSs; all you have to do is ditch the legacy (including the web).

Clive Robinson November 16, 2010 12:00 PM

@ Longpoke,

“… due to lack of strong typing… …because of crappy integration…”

And the real reasons for this are “poor programming practice”, “code reuse” and “kitchen sink features”.

Most “programmers” are not “software engineers”; they are “journeyman code cutters” who are learning their trade on the job.

Once upon a time being an apprentice was a recognised position, and you had a “master craftsman” teaching you your trade.

The problem is that now you leave college/university with skills that are not fundamental to your trade but what employers have told your educators they want (either directly or indirectly through hiring practice). That is knowing how to use certain highlevel tools without understanding the why or the wherefore of the tools you use.

This unfortunately has a big impact when it comes to things like off-the-shelf code libraries or existing code re-use, usually required because they appear to be a way to shorten development time (which they usually aren’t).

Often the worst are the paid-for code libraries for certain arcane or bespoke functionality. Those I have had the misfortune to come across have almost invariably not been worth the money, as you tend to pay for them five times:

1, to get the library
2, to use the code in your products
3, to debug the library for the producer
4, to debug the “in the field” issues
5, to strip it out and do it properly.

The first two you kind of expect, and they are oh so cheap compared to the last three. And that’s not including such things as ongoing support when the company supplying the library goes belly up, or gets taken over, or any one of a thousand contractual ills that arise.

Even when you get to formal standards, these days you have to ask yourself “what the heck were they thinking”: they appear to have tried to be all things to all men and ended up making a midden of the whole thing.

“Small and simple” was once the way, back when programmers actually understood what they were doing. Now it’s “strongly typed and as high-level as possible” for the code cutters.

Why as high level as possible?

Well, one thing that appears to be almost constant in programming is the average number of bugs per line of code, irrespective of language and tools…

So you will tend to get fewer bugs the shorter the program, which means either simple programs or high-level languages…

Longpoke November 16, 2010 10:19 PM

@Clive Robinson:

I can’t agree more that the average quality of programmers (and therefore reusable components) is miserable. Code reuse can be very messy and anyone serious about quality is forced to reinvent the wheel over and over.

Of course I also agree that not all standards are good, as I don’t consider the web’s standards good.

However, there are fake high-level languages and real high-level languages. You cannot really build a complex system in a low-level language, and even if you did, it would be full of abstractions, effectively making it a high-level language via design patterns.

Look at PHP: it’s a horrible mess, a big ball of crap thrown together to make a “framework” that “eases” web development. The developers don’t have a clue what they’re doing, and there are lots of new vulns in it every month, at both the high and low levels. The built-in functions are completely inconsistent: some return magic values and some raise exceptions. PHP just takes random features from other languages and piles them together incoherently. PHP is weakly typed (which is always useless and causes more bugs). PHP is configurable, and different libraries depend on different configurations of it. PHP has a garbage optional security model called safe_mode, which doesn’t work, as it’s just more crap incoherently thrown together.
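The weak-typing complaint can be made concrete. A minimal sketch in Python (a strongly typed language standing in for the contrast; the PHP coercion behavior described in the comments is the assumption here): mixing types fails loudly instead of silently producing a wrong value.

```python
# Hedged illustration: Python refuses the silent string-to-number
# coercions that weakly typed languages perform, so type confusion
# fails loudly instead of yielding a wrong answer.

result = None
try:
    result = "1" + 1        # a weakly typed language would coerce; Python raises
except TypeError as exc:
    print("caught:", exc)

assert result is None       # the bad operation never produced a value

# Conversions must be explicit, so the intent is visible in the code:
assert int("1") + 1 == 2
assert "1" + str(1) == "11"
```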

Now look at a real high-level language, like Haskell: everything is immutable, which immediately mitigates one of the big classes of bugs. Lazy evaluation means everything is more efficient most of the time (most problems are best solved lazily), but it can be turned off when needed. The type system tells you all the information you’d ever want to know about the architecture of a given program, and catches lots of bugs up front. Functions can easily be reasoned about thanks to type safety and immutability. This is a language that was actually thought out, unlike PHP.
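The two Haskell properties named here, immutability and laziness, can be sketched in Python terms (an analogy, not Haskell itself): immutable data cannot be corrupted in place, and generator-based laziness computes only what is actually consumed.

```python
from itertools import count, islice

# Immutability: a tuple cannot be mutated in place.
point = (3, 4)
try:
    point[0] = 99
except TypeError:
    print("immutability enforced")
assert point == (3, 4)

# Laziness: an infinite stream of squares, of which only the five
# actually demanded are ever computed.
squares = (n * n for n in count())
first_five = list(islice(squares, 5))
assert first_five == [0, 1, 4, 9, 16]
```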

Also, there are capability-secure languages/OSs, such as E (the language), where security is orthogonal and the principle of least authority is forced on the entire architecture from the ground up. They have already been proven easy to use (obviously much easier than any *nix crap, because they have strong typing and aren’t C) and exponentially more secure. E basically reifies one of the things computing is meant to have but currently doesn’t: you can run any code without caring whether it’s evil or crappy. Sort of like the web, but not completely flawed/unusable/insecure. The capability model also stops programs from dumping random garbage all over your OS, which is what happens on *nix and Windows.
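The capability idea can be sketched briefly. In an object-capability system, authority travels only via unforgeable object references, and “attenuation” hands untrusted code a deliberately narrower object. A rough Python analogy (Python is not actually capability-safe, since introspection can break the confinement; the `Log` class is invented for illustration):

```python
# Object-capability sketch: the only authority code has is the
# references it was handed. Attenuation derives a weaker capability
# (append-only) from a fuller object.
class Log:
    def __init__(self):
        self._entries = []

    def append(self, line):
        self._entries.append(line)

    def read(self):
        return list(self._entries)

def append_only(log):
    """Attenuate: expose only the append authority, not read."""
    return log.append

log = Log()
cap = append_only(log)

cap("hello")                      # code holding cap can append...
assert log.read() == ["hello"]    # ...but was never given a read reference
```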

Did I mention POLA is important? Why anyone would ever want to run random code off the internet with full privileges is beyond me. Java has a half-decent security/object model, BTW, but it was ruined by pragmatism; now everyone just signs their applets, and users can’t run them unless they modify their JVM or allow the code to run with full privileges.

Heck, even QNX is vastly superior to *nix, just because they put a few old academic ideas into practice.

But what I really want to get down to is that strong typing is mandatory (strong typing in the data sense as well as the language sense).

What I mean by strong typing at the data level is that you don’t take a bunch of random text and try to interpret it. You don’t take a bunch of machine words off the stack/heap and try to interpret them. Data must be organized mechanically; never trust the human.

Stack smashing happens because of a lack of strong typing: arrays don’t really exist, you just get pointers to memory and are free to do anything with them; integers don’t exist either, you just have machine words. Actually nothing exists, because C is just one crappy abstraction with a ton of undefined behavior. Strongly typed arrays cannot be violated, nor can ints, etc. C is another example of this nonsense reusability notion: to make it “fast” as well as portable, it has to have tons of undefined behavior. A secure system would have a high-level language, with the underlying low-level components written in a way specialized to the architecture, in which case you get performance as well as no undefined behavior.
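The contrast with C’s raw pointers can be illustrated: a bounds-checked array type turns the out-of-range write (the basis of stack smashing) into an immediate, catchable error instead of silent corruption of adjacent memory. A sketch in Python, whose lists carry their own length:

```python
# A fixed-size "buffer" of 8 slots; the list type checks every index.
buf = [0] * 8

def copy_into(dst, src):
    """Like a naive strcpy, except the array type enforces bounds."""
    for i, value in enumerate(src):
        dst[i] = value            # raises IndexError past the end

copy_into(buf, range(8))          # fits exactly: fine
assert buf == [0, 1, 2, 3, 4, 5, 6, 7]

try:
    copy_into(buf, range(16))     # the "overflow" attempt
except IndexError:
    print("overflow caught; no adjacent memory touched")
```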

XSS is also due to a lack of strong typing: you take a bunch of text and blindly treat it as code, or inject it into text that is meant to be code. This is about as unreliable as it gets. Strongly typed declarative data structures prevent it.
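The point about structured data preventing XSS can be sketched with Python’s standard library: building markup through a typed tree API keeps untrusted text as data, because the serializer escapes it, whereas string splicing turns it into live markup.

```python
import xml.etree.ElementTree as ET

evil = '<script>alert(1)</script>'

# Naive string splicing: the payload becomes live markup.
unsafe = "<p>Hello " + evil + "</p>"
assert "<script>" in unsafe

# Structured construction: the serializer escapes text nodes, so the
# payload stays data no matter what it contains.
p = ET.Element("p")
p.text = "Hello " + evil
safe = ET.tostring(p, encoding="unicode")
assert "<script>" not in safe
assert "&lt;script&gt;" in safe
print(safe)
```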

SQL injection? Ditto. But it has been solved by A) ORMs that aren’t vulnerable, B) object databases (which are on par with, and sometimes better than, RDBMSes in performance, BTW), C) HaskellDB 😉
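The same “data stays data” discipline is what parameterized queries provide, which is how ORMs avoid the problem underneath. A minimal sketch with Python’s `sqlite3` (table and payload invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "nobody' OR '1'='1"

# String splicing: the quote in the payload rewrites the query,
# which then matches every row.
spliced = "SELECT * FROM users WHERE name = '%s'" % payload
assert len(conn.execute(spliced).fetchall()) == 1   # leaked alice's row

# Placeholder: the driver ships the payload as a value, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()
assert rows == []                                   # no such user
```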

HTTP injection? That doesn’t happen as much, because one typically uses a safe API to build the HTTP request rather than editing and parameterizing it by hand. This is analogous to using a safe API to manipulate a strongly typed binary data structure.

And hey, binary is compact and efficient, so why do we have text protocols and formats? Oh yes: someone thought it would be a good idea to have plaintext so that we could edit it by hand or something. BS. Do they expect us to edit 20 MB XML documents by hand? You should be using a graph editor that provides a set of idiomatic transforms and queries (does this exist for XML? It should). The only time you can edit a plaintext document by hand is when it’s tiny, and even that doesn’t really work, because you don’t know which character-set encoding to use…

XML sucks: it’s not a cyclic graph, so you have to retype things instead of just putting in a reference to them, or invent an ad-hoc way of doing so for your schema. This is clearly propagated by the “simple” design of plaintext XML; if it were binary, they probably would have just made a real cyclic graph.

This also reminds me of another major issue with *nix: everything is a file (which is nonsense; everything should be an object, or maybe there is something better, I don’t know, but objects are already superior to files in every way; want streams? read: lazy evaluation). For some reason that means everything is text with an arbitrary grammar, so you have to write a ton of regular expressions to get anything done in *nix. Oh, and you have magic numbers as return values from programs. Both break all the time as new versions of the programs come out, and they are a pain to fix, because with text you have to rewrite the parsers instead of just using the new fields of an object.
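The earlier point about safe request-building APIs can be illustrated: encoding parameters through the library keeps protocol-significant characters (here CRLF, the header separator) from being interpreted as protocol syntax. A sketch using Python’s standard library (the URL and header name are invented):

```python
from urllib.parse import urlencode

user_input = "abc\r\nX-Injected: evil"

# Hand-splicing: the CRLF in the value splits the request line and
# smuggles an extra header into the protocol stream.
raw = "GET /search?q=" + user_input + " HTTP/1.1"
assert "\r\nX-Injected" in raw

# API-built query string: CR and LF are percent-encoded, so the
# value stays inert data.
query = urlencode({"q": user_input})
assert "%0D%0A" in query
assert "\r" not in query and "\n" not in query
print(query)
```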
Even the input to any *nix program is braindead and violates strong-typing principles, so programs can’t be composed without creating new vulnerabilities. Even if you escape the input to make it safe for bash or whatever shell you are using, the data within a parameter might still need further escaping, because it gets passed on as an SQL string or into some other weakly typed system.

I don’t know why the “command line” mantra even exists anymore (now that most of us no longer use ancient terminals). Every program with an interactive shell is different and crappy, has no multiline input, and a lot of them overwrite your input with output as you’re typing. Command-line apps shouldn’t exist; there should just be objects that you invoke methods on, with a real (bash isn’t a real language) statically, strongly typed language, getting back other objects. Static typing gives you rich auto-completion for free: the shell can infer which variables in the local namespace match the parameter you are on and suggest them all. And the shell should actually be a decent thing, not a stupid pair of I/O streams.

*nix terminals suck hard in other ways too. If you copy and paste some text from a web page, it might contain malicious code followed by a newline (a page can make you copy something other than what is displayed and highlighted), and you get owned. And X11 is broken as well: it doesn’t let non-you users display to a constricted area, so you have to run random crap programs off the internet as you, and they get to steal all your non-root stuff (the stuff that actually matters). They can then get root by ptracing your bash shell and escalating the next time you sudo or su.
Qubes OS tries to fix some of this, but fails miserably, because it’s not even as good as the Object Capability Model, which was invented decades ago. I guess this is because they want compatibility with current software, which means they have to diminish security. Maybe they would be better off joining Tunes.

Computing sucks hard for many other reasons, but I’m getting tired of writing for now and am hungry.

What does this all have to do with Firesheep? Dunno, but it’s not surprising what Firesheep can do, and it’s about time someone stepped up and tried to expose more loudly one of the many critical flaws that have been exploited maliciously for decades. I predict the next move is for someone to write a nice metamorphic virus and take over hundreds of millions of boxes on the internet, along with all those crappy embedded devices. (It’s not that hard to pull off.)

Anyone interested in creating a secure computing paradigm and ditching all this other nonsense? Call me.

Chuck Lin November 29, 2010 9:23 PM

Whenever I use public WiFi, I proxy my web traffic through Privoxy. The problem for the average user is that they need a server to relay the proxy. I’m trying to set up something where I can provide a free proxy service to users.

BornCurious December 1, 2010 12:11 AM

Naturally, Blacksheep fails if the attacker is using a VPN. I suspect that Facebook will use HTTPS more as they roll out their new messaging service.
