Entries Tagged "browsers"

Firesheep

Firesheep is a new Firefox plugin that makes it easy for you to hijack other people’s social network connections. Basically, Facebook authenticates clients with cookies. If someone is using a public WiFi connection, the cookies are sniffable. Firesheep uses WinPcap to capture and display the authentication information for accounts it sees, allowing you to hijack the connection.

Slides from the Toorcon talk.

Protect yourself by forcing the authentication to happen over TLS. Or stop logging in to Facebook from public networks.

EDITED TO ADD (10/27): To protect against this attack, you have to encrypt the entire session—not just the initial authentication.
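
A minimal sketch of that fix, assuming a Node.js/TypeScript server: the session cookie is marked Secure and HttpOnly, an HSTS header keeps the browser on TLS, and anything arriving over plain HTTP is redirected before a cookie can leak. The certificate paths, ports, and cookie value are placeholders, not anything from Firesheep or Facebook.

```typescript
// Sketch of forcing the whole session onto TLS (Node.js). The certificate
// paths, cookie value, and ports are placeholders for illustration.
import * as https from "https";
import * as http from "http";
import * as fs from "fs";

const tlsOptions = {
  key: fs.readFileSync("server-key.pem"),   // hypothetical key file
  cert: fs.readFileSync("server-cert.pem"), // hypothetical certificate file
};

// All application traffic, not just the login, is served over TLS.
https
  .createServer(tlsOptions, (req, res) => {
    // HSTS tells the browser to refuse plain-HTTP connections to this host.
    res.setHeader("Strict-Transport-Security", "max-age=31536000");
    // "Secure" keeps the session cookie off plain HTTP, where a sniffer like
    // Firesheep could read it; "HttpOnly" keeps it away from page scripts.
    res.setHeader("Set-Cookie", "session=opaque-token; Secure; HttpOnly; Path=/");
    res.end("hello over TLS\n");
  })
  .listen(443);

// Anything that does arrive over plain HTTP is redirected before any
// application cookie can be exposed on the wire.
http
  .createServer((req, res) => {
    res.writeHead(301, { Location: `https://${req.headers.host}${req.url}` });
    res.end();
  })
  .listen(80);
```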

EDITED TO ADD (11/4): Foiling Firesheep.

EDITED TO ADD (11/10): More info.

EDITED TO ADD (11/17): Blacksheep detects Firesheep.

Posted on October 27, 2010 at 7:53 AM

Evercookies

Extremely persistent browser cookies:

evercookie is a javascript API that produces extremely persistent cookies in a browser. Its goal is to identify a client even after they’ve removed standard cookies, Flash cookies (Local Shared Objects or LSOs), and others.

evercookie accomplishes this by storing the cookie data in several types of storage mechanisms that are available on the local browser. Additionally, if evercookie has found the user has removed any of the types of cookies in question, it recreates them using each mechanism available.

Specifically, when creating a new cookie, it uses the following storage mechanisms when available:

  • Standard HTTP Cookies
  • Local Shared Objects (Flash Cookies)
  • Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out
  • Storing cookies in Web History (seriously. see FAQ)
  • HTML5 Session Storage
  • HTML5 Local Storage
  • HTML5 Global Storage
  • HTML5 Database Storage via SQLite

And the arms race continues….
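
To make the mechanism concrete, here is a minimal browser-side TypeScript sketch of the redundancy idea: write the same identifier into a few of the storage mechanisms listed above and, on every read, repopulate whichever copies the user has cleared. It illustrates the concept only; it is not the evercookie library, and the storage key is hypothetical.

```typescript
// Minimal browser-side sketch of the redundancy idea: store one identifier
// in several mechanisms and, on read, repopulate any copy the user cleared.
// This is an illustration of the concept, not the evercookie library; the
// storage key is hypothetical.
const KEY = "uid";

function writeEverywhere(id: string): void {
  document.cookie = `${KEY}=${id}; path=/; max-age=${60 * 60 * 24 * 365}`;
  localStorage.setItem(KEY, id);
  sessionStorage.setItem(KEY, id);
}

function readCookie(): string | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${KEY}=([^;]*)`));
  return match ? match[1] : null;
}

function getPersistentId(): string {
  // Take the first copy that survived whatever the user deleted...
  const id =
    readCookie() ??
    localStorage.getItem(KEY) ??
    sessionStorage.getItem(KEY) ??
    crypto.randomUUID(); // genuinely new visitor: mint a fresh identifier
  // ...and write it back into every mechanism, undoing a partial cleanup.
  writeEverywhere(id);
  return id;
}

console.log("persistent id:", getPersistentId());
```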

EDITED TO ADD (9/24): WARNING—When you visit this site, it stores an evercookie on your machine.

Posted on September 23, 2010 at 11:48 AM

UAE Man-in-the-Middle Attack Against SSL

Interesting:

Who are these certificate authorities? At the beginning of Web history, there were only a handful of companies, like Verisign, Equifax, and Thawte, that made near-monopoly profits from being the only providers trusted by Internet Explorer or Netscape Navigator. But over time, browsers have trusted more and more organizations to verify Web sites. Safari and Firefox now trust more than 60 separate certificate authorities by default. Microsoft’s software trusts more than 100 private and government institutions.

Disturbingly, some of these trusted certificate authorities have decided to delegate their powers to yet more organizations, which aren’t tracked or audited by browser companies. By scouring the Net for certificates, security researchers have uncovered more than 600 groups who, through such delegation, are now also automatically trusted by most browsers, including the Department of Homeland Security, Google, Ford Motors, and a UAE mobile phone company called Etisalat.

In 2005, a company called CyberTrust—which has since been purchased by Verizon—gave Etisalat, the government-connected mobile company in the UAE, the right to verify that a site is valid. Here’s why this is trouble: Since browsers now automatically trust Etisalat to confirm a site’s identity, the company has the potential ability to fake a secure connection to any site Etisalat subscribers might visit using a man-in-the-middle scheme.
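
A small sketch of why this matters, using Node’s TLS API in TypeScript: any client can ask which certificate authority is vouching for a site, but it has no way to tell whether that CA, or any of the hundreds of others it trusts (or their delegates), should have been the one to issue the certificate. The hostname below is an example, not taken from the article.

```typescript
// Sketch: report which certificate authority is vouching for an HTTPS site
// (Node.js). The hostname is an example; any of the hundreds of CAs (or
// their delegates) trusted by the client could have issued an equally
// "valid" certificate for it, which is the risk described above.
import * as tls from "tls";

const host = "www.example.com"; // illustrative target

const socket = tls.connect({ host, port: 443, servername: host }, () => {
  const cert = socket.getPeerCertificate(true); // include issuer details
  console.log("subject:", cert.subject?.CN);
  console.log("issuer: ", cert.issuer?.O, "/", cert.issuer?.CN);
  console.log("valid:  ", cert.valid_from, "to", cert.valid_to);
  socket.end();
});

socket.on("error", (err) => console.error("TLS error:", err.message));
```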

EDITED TO ADD (9/14): EFF has gotten involved.

Posted on September 3, 2010 at 6:27 AM

Detecting Browser History

Interesting research.

Main results:

[…]

  • We analyzed the results from over a quarter of a million people who ran our tests in the last few months, and found that we can detect browsing histories for over 76% of them. All major browsers allow their users’ history to be detected, but it seems that users of the more modern browsers such as Safari and Chrome are more affected; we detected visited sites for 82% of Safari users and 94% of Chrome users.

    […]

  • While our tests were quite limited, for our test of the 5,000 most popular websites, we detected an average of 63 visited locations (13 sites and 50 subpages on those sites); the medians were 8 and 17 respectively.
  • Almost 10% of our visitors had over 30 visited sites and 120 subpages detected—heavy Internet users who don’t protect themselves are more affected than others.

    […]

  • The ability to detect visitors’ browsing history requires just a few lines of code. Armed with a list of websites to check for, a malicious webmaster can scan over 25 thousand links per second (1.5 million links per minute) in almost every recent browser.
  • Most websites and pages you view in your browser can be detected as long as they are kept in your history. Almost every address that was in your browser’s address bar can be detected (this includes most pages, including those retrieved using https and some forms with potentially private information such as your zipcode or search query). Pages won’t be detected when they expire from your history (usually after a month or two), or if you manually clear it.

For now, the only way to fix the issue is to constantly clear browsing history or use private browsing modes. The first browser to prevent this trick in a default installation (Firefox 4.0) is supposed to come out in October.
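
For reference, this is roughly what the “few lines of code” looked like in 2010-era browsers: give visited links a distinctive color with a :visited rule, then read the computed style back. The URLs and marker color below are illustrative, and the Firefox 4 change mentioned above (since adopted by other browsers) deliberately breaks the trick by lying to scripts about :visited styling.

```typescript
// Rough sketch of the classic :visited history-detection trick (browser
// code, 2010-era). The rule below gives visited links a distinctive color;
// reading the computed style back reveals whether a URL is in the history.
// URLs and the marker color are illustrative.
const VISITED_COLOR = "rgb(255, 0, 0)";

const style = document.createElement("style");
style.textContent = `a:visited { color: ${VISITED_COLOR}; }`;
document.head.appendChild(style);

function wasVisited(url: string): boolean {
  const link = document.createElement("a");
  link.href = url;
  document.body.appendChild(link);
  // In unpatched browsers, the computed color leaks the :visited state.
  const visited = getComputedStyle(link).color === VISITED_COLOR;
  link.remove();
  return visited;
}

const candidates = ["https://www.example.com/", "https://example.org/login"];
for (const url of candidates) {
  console.log(url, wasVisited(url) ? "visited" : "not visited");
}
```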

Here’s a link to the paper.

Posted on May 20, 2010 at 1:28 PM

Tracking your Browser Without Cookies

How unique is your browser? Can you be tracked simply by its characteristics? The EFF is trying to find out. Their site Panopticlick will measure the characteristics of your browser setup and tell you how unique it is.

I just ran the test on myself, and my browser is unique amongst the 120,000 browsers tested so far. It’s my browser plugin details; no one else has the exact configuration I do. My list of system fonts is almost unique; only one other person has the same configuration. (This seems odd to me; I have a week-old Sony laptop running Windows 7, and I haven’t done anything with the fonts.)
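
As a rough illustration of what the test measures, here is a browser-side TypeScript sketch that combines a few characteristics and reduces them to a single fingerprint. Panopticlick itself uses more signals (system fonts, full plugin details, and so on) and different hashing; this component list and the FNV-1a hash are just stand-ins.

```typescript
// Minimal sketch of browser fingerprinting: combine a handful of browser
// characteristics and reduce them to one identifier. The component list is
// illustrative; real fingerprinting tests collect many more signals.
function fingerprintComponents(): string[] {
  return [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    String(new Date().getTimezoneOffset()),
    Array.from(navigator.plugins, (p) => p.name).join(","),
  ];
}

// FNV-1a: a tiny non-cryptographic hash, enough to turn the component
// list into a compact identifier that stays stable across visits.
function fnv1a(input: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

console.log("fingerprint:", fnv1a(fingerprintComponents().join("||")));
```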

EFF has some suggestions for self-defense, none of them very satisfactory. And here’s a news story.

EDITED TO ADD (1/29): There’s a lot in the comments leading me to question the accuracy of this test. I’ll post more when I know more.

EDITED TO ADD (2/12): Comments from one of the project developers.

Posted on January 29, 2010 at 7:06 AM

Flash Cookies

Flash has the equivalent of cookies, and they’re hard to delete:

Unlike traditional browser cookies, Flash cookies are relatively unknown to web users, and they are not controlled through the cookie privacy controls in a browser. That means even if a user thinks they have cleared their computer of tracking objects, they most likely have not.

What’s even sneakier?

Several services even use the surreptitious data storage to reinstate traditional cookies that a user deleted, which is called ‘re-spawning’ in homage to video games where zombies come back to life even after being “killed,” the report found. So even if a user gets rid of a website’s tracking cookie, that cookie’s unique ID will be assigned back to a new cookie again using the Flash data as the “backup.”

Posted on August 17, 2009 at 6:36 AM

Too Many Security Warnings Result in Complacency

Research that proves what we already knew:

Crying Wolf: An Empirical Study of SSL Warning Effectiveness

Abstract. Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100-participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.

Posted on August 4, 2009 at 10:01 AM

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disney World, at international borders. The government of Canada is evaluating using iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that, to defend ourselves against this proliferation of biometric authentication, individuals should publish their own biometrics. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do, embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: phishing is a mismatch problem between what’s in the user’s head and where the URL is actually going. SSL doesn’t work, but how should websites authenticate themselves to users? Her solution is protected links: a set of secure bookmarks in protected browsers.) She went on to describe a prototype and tests run with user subjects.

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since hazards are generally not very hazardous, most people just ignore them. “Often, software asks the user and provides little or no information to help the user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM

Worldwide Browser Patch Rates

Interesting research:

Abstract:

Although there is an increasing trend for attacks against popular Web browsers, little is known about the actual patch level of daily used Web browsers on a global scale. We conjecture that users in large part do not actually patch their Web browsers based on recommendations, perceived threats, or any security warnings. Based on HTTP User-Agent header information stored in anonymized logs from Google’s web servers, we measured the patch dynamics of about 75% of the world’s Internet users for over a year. Our focus was on the Web browsers Firefox and Opera. We found that the patch level achieved is mainly determined by the ergonomics and default settings of built-in auto-update mechanisms. Firefox’s auto-update is very effective: most users installed a new version within three days. However, the maximum share of the latest, most secure version never exceeded 80% for Firefox users and 46% for Opera users on any day in 2007. This makes about 50 million Firefox users with outdated browsers an easy target for attacks. Our study is the result of the first global-scale measurement of the patch dynamics of a popular browser.
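
The measurement idea can be sketched in a few lines of TypeScript: pull the Firefox version out of each User-Agent string in a request log and compute what share is on the latest release. The log lines and the “latest” version below are made up for illustration; the study itself ran over anonymized Google server logs.

```typescript
// Sketch of patch-level measurement from User-Agent strings. The sample
// log lines and the "latest" version are illustrative placeholders.
const LATEST_FIREFOX = "2.0.0.11"; // hypothetical most-secure release

const userAgents = [
  "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11",
  "Mozilla/5.0 (X11; U; Linux i686; de; rv:1.8.1.4) Gecko/20070515 Firefox/2.0.0.4",
  "Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/522.11 (KHTML, like Gecko) Safari/3.0",
];

// Extract the Firefox version, or null for other browsers.
function firefoxVersion(ua: string): string | null {
  const m = ua.match(/Firefox\/([\d.]+)/);
  return m ? m[1] : null;
}

const versions = userAgents
  .map(firefoxVersion)
  .filter((v): v is string => v !== null);

const upToDate = versions.filter((v) => v === LATEST_FIREFOX).length;
console.log(`Firefox UAs: ${versions.length}, on latest release: ${upToDate}`);
console.log(`share up to date: ${((100 * upToDate) / versions.length).toFixed(1)}%`);
```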

Posted on February 13, 2009 at 6:27 AM

Interview with an Adware Developer

Fascinating:

I should probably first speak about how adware works. Most adware targets Internet Explorer (IE) users because obviously they’re the biggest share of the market. In addition, they tend to be the less-savvy chunk of the market. If you’re using IE, then either you don’t care or you don’t know about all the vulnerabilities that IE has.

IE has a mechanism called a Browser Helper Object (BHO) which is basically a gob of executable code that gets informed of web requests as they’re going. It runs in the actual browser process, which means it can do anything the browser can do—which means basically anything. We would have a Browser Helper Object that actually served the ads, and then we made it so that you had to kill all the instances of the browser to be able to delete the thing. That’s a little bit of persistence right there.

If you also have an installer, a little executable, you can make a Registry entry and every time this thing reboots, the installer will check to make sure the BHO is there. If it is, great. If it isn’t, then it will install it. That’s fine until somebody goes and deletes the executable.

The next thing that Direct Revenue did—actually I should say what I did, because I was pretty heavily involved in this—was make a poller which continuously polls about every 10 seconds or so to see if the BHO was there and alive. If it was, great. If it wasn’t, [ the poller would ] install it. To make sure the poller was less likely to be detected, we developed this algorithm (a really trivial one) for making a random-looking filename that was consistent per machine but was not easy to guess. I think it was the first 6 or 8 characters of the DES-encoded MAC address. You take the MAC address, encode it with DES, take the first six characters and that was it. That was pretty good, except the file itself would be the same binary. If you md5-summed the file it would always be the same everywhere, and it was always in the same location.

Next we made a function shuffler, which would go into an executable, take the functions and randomly shuffle them. Once you do that, then of course the signature’s all messed up. [ We also shuffled ] a lot of the pointers within each actual function. It completely changed the shape of the executable.

We then made a bootstrapper, which was a tiny tiny piece of code written in Assembler which would decrypt the executable in memory, and then just run it. At the same time, we also made a virtual process executable. I’ve never heard of anybody else doing this before. Windows has this thing called Create Remote Thread. Basically, the semantics of Create Remote Thread are: You’re a process, I’m a different process. I call you and say “Hey! I have this bit of code. I’d really like it if you’d run this.” You’d say, “Sure,” because you’re a Windows process—you’re all hippie-like and free love. Windows processes, by the way, are insanely promiscuous. So! We would call a bunch of processes, hand them all a gob of code, and they would all run it. Each process would all know about two of the other ones. This allowed them to set up a ring…mutual support, right?

So we’ve progressed now from having just a Registry key entry, to having an executable, to having a randomly-named executable, to having an executable which is shuffled around a little bit on each machine, to one that’s encrypted—really more just obfuscated—to an executable that doesn’t even run as an executable. It runs merely as a series of threads. Now, those threads can communicate with one another, they would check to make sure that the BHO was there and up, and that the whatever other software we had was also up.

There was one further step that we were going to take but didn’t end up doing, and that is we were going to get rid of threads entirely, and just use interrupt handlers. It turns out that in Windows, you can get access to the interrupt handler pretty easily. In fact, you can register with the OS a chunk of code to handle a given interrupt. Then all you have to do is arrange for an interrupt to happen, and every time that interrupt happens, you wake up, do your stuff and go away. We never got to actually do that, but it was something we were thinking we’d do.
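
The per-machine filename trick he describes is easy to sketch: feed a stable hardware identifier through a fixed transform and keep the first few characters, so the name looks random but comes out identical on every run on the same machine. The interview says they used DES over the MAC address; the TypeScript sketch below substitutes a SHA-256 digest, since single DES is not reliably available in modern crypto libraries, and the ".exe" suffix is just illustrative.

```typescript
// Sketch of a "random-looking but per-machine-stable" filename: hash a
// hardware identifier and keep the first few characters. The interview
// describes DES over the MAC address; SHA-256 stands in for DES here.
import { createHash } from "crypto";
import { networkInterfaces } from "os";

// Pick the first non-internal MAC address as the stable machine identifier.
function firstMacAddress(): string {
  for (const entries of Object.values(networkInterfaces())) {
    for (const entry of entries ?? []) {
      if (!entry.internal && entry.mac !== "00:00:00:00:00:00") return entry.mac;
    }
  }
  return "00:00:00:00:00:00";
}

function perMachineFilename(): string {
  const digest = createHash("sha256").update(firstMacAddress()).digest("hex");
  return digest.slice(0, 8) + ".exe"; // same machine, same "random" name
}

console.log(perMachineFilename());
```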

EDITED TO ADD (1/30): Good commentary on the interview, showing how it whitewashes history.

EDITED TO ADD (2/13): Some more commentary.

Posted on January 30, 2009 at 6:19 AM
