Schneier on Security
A blog covering security and security technology.
July 24, 2006
Hacked MySpace Server Infects a Million Computers with Malware
According to The Washington Post:
An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows....
EDITED TO ADD (7/27): It wasn't MySpace that was hacked, but a server belonging to the third-party advertising service that MySpace uses. The ad probably appeared on other websites as well, but MySpace seems to have been the biggest one.
EDITED TO ADD (8/5): Ed Felten comments.
Posted on July 24, 2006 at 6:46 AM
The last line is provocative (deliberately?)
Was the attack really that clever?
1. The attacker relied on machines not patched for months.
2. They got caught.
I think a really 'clever' attack would avoid 1 and 2.
Wait..what server got hacked exactly?
The headline is misleading. An ad banner box was serving up the .wmf exploit, and the ad network was a customer of MySpace and Webshots, among (presumably) others.
However, as long as MySpace allows users to build their pages piecemeal from content hosted anywhere in the world, there's no reason this can't happen again and again. A poorly secured affiliate site is a poorly secured affiliate site, whether you're ChoicePoint or News Corp.
(I've been wary of MySpace ever since my wife told me that visiting friends' pages on that network makes her "really glad we run firefox on mac," since apparently several friends of ours + my sister-in-law have gotten burned, or nearly so, with similar bad-news linking and scripting)
@Pat Sutlaw: I'd say a clever attack is one that works, and relying on your victim target group not keeping their systems patched properly is perfectly acceptable (from an attacker's point of view) if they actually *are* reasonably likely to do that. This attack would've been stupid on a Mac site, for example, but on MySpace, it was clever, and the fact that they got a million machines proves that.
Also, what do you mean, "they got caught"? People found out about the attack, but that was pretty much inevitable, so as long as they owned a significant number of systems, I'd think that they consider the attack successful.
This is such a case where the actors that can do the most to prevent an attack have the least incentive to do anything about it, because they don't feel the damages.
Who gets burned: the computer user
Who could avoid the problems: Microsoft, MySpace, the ad agent.
So, should an advertising-supported service be responsible / liable for the security practices of the advertisers whose ads they post?
It was a "shooting fish in a barrel" attack. Given you are targeting a bunch of young teenagers whose parents haven't raised them with a sense of self-preservation, and that they're probably the most tech-savvy people in the family running the computer, it's a no-brainer that a lot of the computers will be unpatched. It's also an indictment of allowing third-party content to be served up on your website. I have always felt that if you are going to serve ads, you had better host the server and either employ or hire someone with extremely good security knowledge and practice to vet content before serving it to your captive audience, who has no choice but to accept its trojan-like inclusion with content they want.
From the article, part of Myspace.com's response:
"At the same time we strongly urge all Internet users to follow basic Internet security practices such as running the latest version of the Windows operating system, installing the latest Windows security patches, and running the latest anti-spyware and anti-adware software."
I wonder why he didn't mention using ad blocking software?
"So, should an advertising-supported service be responsible / liable for the security practices of the advertisers whose ads they post?"
Absolutely. Ads serving up malware are no different from those advertising p0rn, drugs, or other bad/illegal things. In this case, if MySpace doesn't allow p0rn ads, then why allow malware ads?
The advertising-supported service hosting ads should review ALL ads placed on their site, _before_ they are posted. Those ads that don't "check out" should not be shown. This is just basic common sense, and the responsibility of the advertising-supported service allowing ads to be shown on their site.
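The pre-publication vetting these comments call for can be sketched in a few lines. This is a hypothetical illustration, not anything MySpace or its ad network actually ran; the function name and the allowed-format table are assumptions. Since the exploit in this incident traveled in a Windows metafile, one cheap server-side check is content sniffing: accept a creative only if its bytes match the image format it claims to be, and reject anything carrying the well-known placeable-WMF magic key.

```python
# Hypothetical sketch of pre-publication ad screening. The allowed-format
# table and function name are assumptions for illustration only.

ALLOWED_MAGIC = {
    "gif": (b"GIF87a", b"GIF89a"),
    "png": (b"\x89PNG\r\n\x1a\n",),
    "jpg": (b"\xff\xd8\xff",),
}
# A placeable Windows metafile begins with this well-known magic key.
WMF_PLACEABLE_KEY = b"\xd7\xcd\xc6\x9a"

def screen_creative(data: bytes, declared_type: str) -> bool:
    """Accept an ad creative only if its bytes match the declared image
    format and do not look like a WMF smuggled in under an image name."""
    if data.startswith(WMF_PLACEABLE_KEY):
        return False  # a metafile masquerading as an ad image
    return any(data.startswith(magic)
               for magic in ALLOWED_MAGIC.get(declared_type, ()))
```

A check like this wouldn't catch everything (a determined attacker can find formats the filter trusts), but it shows that "review ALL ads before they are posted" doesn't have to mean a human eyeballing each one.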
The real mistake is using Internet Explorer.
I only use IE with all the features turned on when browsing sites without third-party advertising owned by Fortune 500 companies.
Virus and spyware free for over a year, barring tracking cookies...
1.07 million machines sounds like a successful attack to me.
myspace security chief hemanshu nigam sounds like a douche:
"we are working to have these ad networks remove this ad so that they do not appear on our site."
wtf? why not just cut them off at the knees, or is the ad revenue more important than security for (mostly teenage) customers?
Gosh, it would be simply awful if the NSA datamining operation that's slurping up Myspace data became infected.
I don't see how this is MySpace's fault. Surfing web pages should not be a security hazard. I suppose things have changed in this era where it's expected that users will want to run foreign code from their browser. Personally, I want to use the web to read text and see pictures. Lynx or Dillo do quite nicely.
I think that the implication in the article was that the image was placed neither by MySpace nor the third-party advertiser, but by a "fourth-party" hacker who either put it on the advertiser's servers or changed the link on MySpace's servers.
Though MySpace did nothing to cause this, they may bear responsibility in that they should do more to prevent it. Also, they are the only party through whom the public can exert pressure on the advertisers.
On the other hand, making them responsible is a "chilling effect" on Internet innovation.
Microsoft also did nothing directly to cause this, yet it is a feature in their software that was used to perpetrate it.
We could talk further about the inventors of HTML, HTTP, TCP/IP, etc., as having liability. We have to draw a line somewhere. Does it belong only with the criminal hacker, or does civil liability rest with people further up the chain?
I wish I knew.
One more example, then I'll shut up.
Suppose I post a blog entry on LiveJournal that references an image on Flickr. A hacker replaces my Flickr image with a malicious one, such as the one in question here. Which parties bear responsibility?
Suppose I post an image URL on this blog which is then replaced by a bit of nastyware?
If I bear any responsibility, then I'll be closing out ALL of my blog and e-mail accounts today and withdrawing from the Internet. This kind of thing could prevent anyone from being able to provide services.
We absolutely did not get hacked. That headline is not just misleading, it's a complete lie. Bruce, I'm personally very disappointed in you for even suggesting this. You should change the headline at the very least.
Hmmm, I thought this was old news for a sec, but I guess it's just the rate of incidents recently makes it seem like a month has passed since last Weds.
The real attack vector is on trust between a host and whomever creates the content. Some are extremely aware of "offensive" content and the need to filter it (since it's readily visible), but code is often obscure enough that some companies are unintentionally allowing malware through.
Things are further complicated when partners outsource to a company that outsources portions of the design and code, as is often the case, so that no one really takes responsibility for quality control. I have seen incidents where a complete lack of a contract, let alone trust, exists between the originator of the code and the company hosting it because there are so many intermediaries...
Thus, MySpace gives an example of how important it is to have strict test and validation controls in place prior to allowing content to go live. It might seem like an affront to those who want to publish their work unfettered, and/or those who want to push costs out to the development partners, but unfortunately trust only goes so far and criminals love opportunity.
What's in the best interests of users might be hard to define, but that should not be used as an excuse to prevent basic input validation and tests for known malware and other criminal-related activity.
"We absolutely did not get hacked."
How so? You became a distribution channel for something that your own CSO describes as "a criminal act". If that was not a result of hack, what was it?
Figures: right as MySpace rises to the #1 most popular site on the Internet, hackers finally get smart and start utilizing it. If MySpace wants to stay at the top and remain secure for its members, it has a lot of work to do. It has to be one of the most dangerous websites out there right now, with all the sketchy individuals and poor security systems.
Also, for a major site to go down for 12 hours without any major backup systems is rather juvenile. MySpace needs help if it wants to remain as popular as it is.
You are spot on with the need for strict testing and validation controls.
My concern here is that this is yet another example where the party in the best position to fix the problem (MySpace), is also the one to gain the most (financially) by not fixing it.
The simple fix, MySpace just removes the banner ad. In addition, besides performing their own testing and validation, MySpace doesn't allow any ads to be placed on their website by advertisers that don't strictly follow a SLA (Service Level Agreement), whereby the SLA requires the advertiser to perform strict testing and validation of all ads.
Why won't this happen? MySpace is making (likely lots of) money by putting these ads on their website and any "problems" caused by malware serving ads do not directly affect MySpace (an externality, as Bruce would put it). Right now, they can simply "point the finger" at someone else (i.e. the third party advertiser) and say "Not I". Until MySpace and other hosting websites are held accountable and made responsible, the problem won't go away.
Hm. I agree wih the Myspace employee. This doesn't sound like Myspace got hacked. It sounds like they just pipe in remote content like a million other web sites. They shouldn't take the brunt of this.
Even if MySpace came up with a technological solution that could strip out all Windows exploits that pass through their servers (if such a technology were even possible), it still wouldn't solve the problem for the next web link you click on. It's infeasible to expect every web developer to solve this problem of Microsoft's creation. (We've been doing that since the browser wars!) Web authors should be able to pipe in third-party web content from other sources without fear of killing their visitors' machines.
The appropriate place to lay blame is at Microsoft's feet. Your browser should not allow your computer to become infected no matter WHAT content you're browsing. It should be strictly contained in a sandbox. I understand that this was a patched vulnerability, but the Microsoft approach to security probably played a large role in the vulnerability being there in the first place; if you start with the premise that nothing should affect the system, you're more likely to succeed at protecting the system than if you go adding a bunch of feature-driven exceptions ("but wouldn't it be cool if you could just go to a web page and have the software installed FOR YOU?").
This is a failure of Microsoft, not MySpace, and until Microsoft changes their approach to security (and the quality of their software), we're going to continue to see problems like this.
> Web authors should be able to pipe in third-party web content from other sources
> without fear of killing their visitors' machines.
You're saying that you should be able to allow a third party to send you content, and you should be able to forward that content, and you have no obligation to check that content?
Microsoft's level of culpability in this isn't really relevant. If you're providing a service to a customer base (providing content) and you outsource part of that service to another vendor by allowing them to serve content out through your service, you're at least partially responsible to your customers for forwarding bad content to them.
cow, sheep, people?
listen to the roar of the mindless millions who devote their finite energy to writing about themselves in piss poor html design:
"look what I ate today!"
"look what I like!"
"life sucks! why was I born?"
The greatest thing about places like myspace, in my opinion, is that all this noise is concentrated online in keystrokes, rather than polluting the outside world with all these ego projections.
I hope in the future more and more people who love to talk about themselves spend more and more time in their little prisons.. I mean rooms, typing about themselves so I never have to hear about it or read it.
What did the news have to say about Unix, Linux, and Mac OS X MySpace users? I find it interesting that whenever there's a news story (especially on TV) they use the blanket term PC or just mention Windows, but almost never mention other operating systems that aren't vulnerable to all these viruses and trojans. WHY?
I'd rather enjoy silence or reading a good book (or blog like Schneier's) to listening to the cow and sheep people or reading their cow sheep person's moos and baahs.
"It's infeasible to expect every web developer to solve this problem of Microsoft's creation. (We've been doing that since the browser wars!) Web authors should be able to pipe in third-party web content from other sources without fear of killing their visitors' machines."
I hate this line of reasoning because it actually contradicts itself. If you don't hold a site accountable, then how do you plan to push quality/controls through to the source of the problem? Each consumer makes their case individually? That would take forever and historically doesn't work very well, if at all.
When car companies recall vehicles to replace parts in/on their cars made by other companies (e.g. Firestone tires), I do not see anyone saying "car companies should be held blameless". Sure it's a tragedy when you get a dangerous product, but it's completely absurd to say the car company doesn't have responsibility to their consumers to correct their quality control failure and prevent it happening again.
That's the whole idea of buying a Ford instead of a hodge-podge of pieces to assemble yourself, and the idea of going to a MySpace instead of your own social-networking site that you manage on your own. And that's why I say this was an attack on trust that needs to be handled specifically by MySpace or they will lose consumer confidence...
Something like this happened to "The Register" (www.theregister.co.uk) a while ago (about a year?) They immediately severed connections with the ad serving company which had allowed this through.
MySpace's "We are working to have these ad networks remove this ad" doesn't cut it. MySpace should follow The Register's example and immediately ban that ad network, and not let them back until they convince MySpace that they have adequate quality control.
Myspace is an abomination.
@Davi, Pat, WhoIsResponsible,
I respectfully disagree.
My first thought, too, was that MySpace should be responsible for ensuring that their advertisers had appropriate controls in place. But the more I think about it, the more I realize that this is impractical and wrong.
The automobile tire analogy is illustrative, but we have to be careful to not stretch these analogies too thin. The Web is not a car. In a way, it is actually "assembled" at the browser, rather than at the factory, or server.
It's all well to require a large company to monitor 3rd party service providers. But what if I, as a shareware author, use Google Adsense? Am I to monitor Google's practices? How many small, innovative companies will find this impossible to manage, leaving us with only the behemoths?
I hope Bruce has thoroughly checked the Site Meter practices and procedures, since their image appears below. He should also check the practices and procedures of the companies from whom Site Meter accepts ads, since he links to Site Meter. I would hate to see Bruce close his blog because someone hacked into Site Meter.
I agree that it is impractical for someone to check third party content that they may be serving up with their web page. That's not entirely the issue, however.
MySpace has two sets of customers. The first is the people who use their service. The product is their network. The service is free. If you look at just this side of the equation, this is a net loss, obviously. The second set of customers is the advertisers that MySpace allows to portal ads into their content. The _product_ that MySpace is selling is *the other set of customers*.
There are obvious conflicts of interest here. Since MySpace gets their actual revenue from the second set of customers, they are going to make business decisions that err on the side of the money (this can ultimately be a self-defeating policy, if you damage the network of users enough that you no longer have anything to sell to the advertisers).
I'm not saying that MySpace has *full* culpability here. My response was to Nephilim, who was saying essentially that this isn't MySpace's problem. It *is* MySpace's problem, and their responsibility to act - the responsibility is to their first set of customers.
If you, as a shareware author, use Google Adsense, and you started getting flooded with complaints that Google Adsense was serving up malware infected files, it is your responsibility to halt the ads. You shouldn't pick up the phone and call Google and ask them to pretty please remove the infected files and continue to allow Google's Adsense to run through your site... (from the Washington Post article, this is what it sounds like MySpace was doing).
If your business model relies upon you serving up some other third party's content, you're putting yourself in a tricky position. If you choose not to verify that content, that's certainly your prerogative, but the flip side is that you have to accept the consequences when your site passes along something objectionable, whatever it is.
I agree with Pat, but just wanted to respond to a couple specific items:
"The Web is not a car. In a way, it is actually "assembled" at the browser, rather than at the factory, or server."
Interesting perspective, but that seems like saying food from the restaurant is not really assembled on your plate, it's assembled in your stomach. The point is (at least so far) you are consuming a product that is packaged for you, and that gives you little/no control of the quality of the input. You might choose not to eat at the restaurant again, or complain to the staff there, but you aren't likely to go to the farmer and demand that they change their practices. The restaurant/manufacturer/website company has an incentive to present you with something you will consume, and they have far more sway over their partners than the individual consumers, unless some sort of class-action effort is undertaken. Thus, it doesn't make sense for a restaurant to say "don't blame us for a bad meal, we just pass on everything the farmers want to put on your table". There are handling and cleanliness regulations that apply, meant to filter out the bad food from the table.
"But what if I, as a shareware author, use Google Adsense? Am I to monitor Google's practices?"
Actually, yes, but not alone. Trust but verify, as they say. If you believe that Google can be trusted not to push malware to your consumers, how they react to malware will probably influence whether you will continue with the risk or choose a new and higher-quality relationship.
In factoring risk (R), as long as the vulnerabilities (V) remain significant and largely out of your control, and the asset value (A) is high and also out of your control (whatever people have on their computers), companies have little choice but to ensure they reduce the threats (T) to their consumers: R = V × A × T.
That doesn't mean V is irrelevant. Quite the opposite, companies should also be pressuring those responsible for the V to decline as well so that T is less of a factor (as some have pointed out above), but that does not mean they are entirely off the hook for T.
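The commenter's R = V × A × T framing can be made concrete with a toy calculation. The numbers below are made up purely for illustration; the point is only that with V and A held fixed (out of your control), any reduction in T cuts R by the same factor.

```python
# Toy illustration of the R = V x A x T framing from the comment above.
# Values are arbitrary; only the ratio matters.

def risk(v: float, a: float, t: float) -> float:
    """Multiplicative risk model: vulnerability x asset value x threat."""
    return v * a * t

before = risk(v=0.8, a=1.0, t=0.5)  # exploit ads flowing freely
after = risk(v=0.8, a=1.0, t=0.1)   # after the bad ad network is cut off
print(after / before)  # about 0.2: a 5x threat cut is a 5x risk cut
```

Which is exactly the comment's claim: even when the browser vulnerability (V) stays put, a site that filters or bans a malicious ad network still lowers its users' overall risk.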
You have to check out this article from the "CTO of a packet shaping company". It shows that expertise in one part of IT doesn't mean you understand crypto worth a hoot (IMHO)...
"Huh? You're saying that you should be able to allow a third party to send you content, and you should be able to forward that content, and you have no obligation to check that content?"
Yes, I'm saying exactly that. Consider technologies like RSS feed aggregators, for example. Or services like Blogger. Or web forums. Or freely-postable discussion threads like the one you're reading right now. YOU are a third party, who sent this web site content in the form of a discussion post. Should Schneier have to hold up posting every comment before actually visiting every URL that gets mentioned to make sure said URL doesn't have a malware-infected file at the other end? No. That would be ludicrous. If one comes to his attention, he might want to remove the link as a courtesy, but it is certainly not his responsibility to police content on servers he does not control. (Especially considering the remote content could change at any time, so that even if he did check it and deem it safe, it could easily be changed after being "blessed.")
And none of this useful stuff causes any problem as long as the web browser keeps everything in a sandbox and doesn't, say, let a JPG file give a hacker root access to your computer.
"Microsoft's level of culpability in this isn't really relevant. If you're providing a service to a customer base (providing content) and you outsource part of that service to another vendor by allowing them to serve content out through your service, you're at least partially responsible to your customers for forwarding bad content to them."
So if the newspaper prints something slanderous about me, I can sue the paperboy? Or a clipping service? Or the corner newsstand?
Microsoft is eminently culpable here. The bottom line is that they released a browser that, if you visit the wrong web page, your web browser executes code that can turn your entire OS over to an attacker. It doesn't matter what server the web page came from, because hackers can drop that code anywhere - a MySpace page, a blog, a forum post, wherever. They could post a link to it on this discussion page if they wanted to, and people would get infected. Even if every single web service that pipes third-party content out there conscientiously removes all references to known malware-infected files, somehow doing it before anyone gets infected, the hackers could still run their own servers. The MySpaces of the world can't permanently stop this sort of attack. Only fixing the web browser can provide any lasting protection.
"Microsoft is eminently culpable here."
Not for what MySpace decides to publish to their users, or the quality of code/content on the MySpace site.
"The bottom line is that they released a browser that, if you visit the wrong web page, your web browser executes code that can turn your entire OS over to an attacker."
A bit dramatic, but yes, vulnerabilities exist in the browser.
"It doesn't matter what server the web page came from, because hackers can drop that code anywhere - a MySpace page, a blog, a forum post, wherever."
No, it does matter. Some sites will establish something called "trust" with their users and practice safer content than other sites. Trust and reputation matter.
"They could post a link to it on this discussion page if they wanted to, and people would get infected."
Um, no, you can't compare a service like MySpace to every link everywhere to everything. You're mixing vulnerabilities in the client with the threats from malicious sites. Both should be addressed, but they are not the same thing.
"Even if every single web service that pipes third-party content out there conscientiously removes all references to known malware-infected files, somehow doing it before anyone gets infected, the hackers could still run their own servers."
Which they would have to get people to visit, unless they can somehow get a green-light from MySpace to push people to them.
"The MySpaces of the world can't permanently stop this sort of attack. Only fixing the web browser can provide any lasting protection."
Not so. You can reduce the threat and get an overall risk reduction as a result. Vulnerabilities are important, but different. MySpace can *reduce* the likelihood of the attack by deciding not to allow it to flow through their servers. It's not much different from mail servers that run anti-spam/anti-virus software.
The culpability argument is a futile exercise in passing the buck. While all parties involved are probably partially to blame, none of them are completely culpable.
I think that looking at incentives might help us with getting a clearer picture.
As Pat said:
"There are obvious conflicts of interest here. Since MySpace gets their actual revenue from the second set of customers, they are going to make business decisions that err on the side of the money (this can ultimately be a self-defeating policy, if you damage the network of users enough that you no longer have anything to sell to the advertisers)."
Service providers like MySpace are essentially balancing acts of interests: do we give preference to customer set 1 (the users) or to customer set 2 (the advertisers)? These interests are linked, however, as Pat states above (to paraphrase): you earn money by promising the advertisers a large user base, but if you lose your user base through constant malware and security issues on your site, you will no longer be interesting to the advertisers.
Unfortunately, a lot of businesses seem to operate on the principle of the 'quick buck' or perhaps the 'path of least resistance'. For that matter, so do users. They are all slightly culpable when such an exploit succeeds.
A company like Microsoft is partially responsible for creating an OS and software which often, especially out-of-the-box, contravenes most accepted common-sense security practices. But, they've managed to sell their software to millions and millions of users...why fix it (a successful business model) if it ain't broke?
Websites such as MySpace (or even the third parties that pipe advertisements to them) are partially responsible (or entirely, if they choose to accept that responsibility) for the content they are serving up (obviously, not content that is linked to by users on their service...but if they are consenting to serve up ad banners, they should have a strict security policy on it). It's unrealistic to expect them to catch everything, but every effort should be taken, and it should be made clear to content providers that continued laxity in security/safety practices will not be tolerated. If they encounter parties that prove untrustworthy, they should sever connections with them (on the business front as well as the digital front). For any service provider with long-term vision, it should be clear that a vibrant, loyal, ever-growing user base is a much more attractive prospect than making a financial 'quick killing'. With such a user base, advertisers (and associated revenue) are a dime a dozen.
Which brings us to the poor, defenseless users. Here also, we see the same laxity that characterizes the big businesses (which goes a long way to proving this to be a universal human trait): users can step up to the plate and take precautions to prevent their computers from being exploited (or at least, do the best they can). What does this mean? Patch your OS and software when patches are released. Use more secure browsers, anti-spy/virus/spam/ad ware, firewalls, common sense, etc. Unfortunately, most users find computers to be too complex to figure all this stuff out for themselves. Or often, they are too lazy. The former is partly a failing of the developers of their software, who should actually be taking into account this human frailty and ensuring that the necessary precautions are built (insofar as possible) into the software they sell.
In the end, who really suffers? Most users are blissfully oblivious to the hacked state of their computers, and probably wouldn't even care if they were told. It's only when they get embroiled in a legal issue due to the way their computer has been used, or when they feel the effects in their personal lives of computer-related crimes (ie. identity theft, financial fraud, blackmail, etc.) that they suddenly wake up to the dangers.
Basically, the people who are actually /aware/ of the problems are in the minority (and largely on this blog ;) ). For both businesses and individuals, incentive to change the status quo only arises when they are burned by it. If Microsoft, or MySpace, or their advertisers were to start losing their customer base due to their current practices, they would have to start working on a new business plan. Perhaps they would then consider better practice.
If a person gets into trouble due to a hacked computer, this might also give him enough of a push to actually take the time to do his homework and figure out how to run a tight ship.
Rather than culpability, which gets passed around, it's incentive that makes the world go round (in this context at least).
A facetious hacker could probably argue that he/she is providing a public service! Hackers/Crackers/Spyware/Spam/Virus authors are the ones who will ultimately provide all the involved parties with the incentive to improve.
*disclaimer*Which is not to say that I endorse what they do.*disclaimer*
I generally agree with the comments from Pat and Davi.
MySpace has created a _business_ to provide content from their website and to make money in the process. As a result, I expect them to carry most of the responsibility for that content.
In any product/service offering that combines services from multiple providers, each "component" provider also carries some responsibility for their part. In the case of MySpace, the product is content, so the authors are responsible to create good content, the advertisers are responsible to provide good content, etc.
The general purpose tools (i.e. web browser, computer OS, etc.) used by the consumer to view this content, while they still carry some responsibility in the "big picture", are not really relevant in this specific case, as any tools (OS, browser, etc.) can have vulnerabilities, regardless of where they came from (Microsoft, Apple, wherever).
However, in the end, MySpace has chosen (they don't have to, no one is forcing them) to bring all these content pieces together to provide a content "package", making money in the process. This makes them ultimately responsible for that content. If they didn't build their business with the resources to properly manage that content (i.e. ensure it is free from malware, p0rn, etc.), then maybe they shouldn't be in business.
So, as the argument goes, MySpace and any other system that serves up advertising don't have to do any policing of the advertising appearing on their websites.
I have always been told that blocking advertising was either stealing services or denying the website owners the right to make an income or at the least, to pay for keeping it up and running.
So basically, the actual people who attempt to foist the above guilt are now giving me full justification to block all advertising. After all, they tell me it's my responsibility to protect myself from having my system invaded, not theirs if they serve up a compromised ad. Since they deny the basic responsibility of vetting any of this stuff they send to my web browser, they don't get my trust and therefore get denied the right to serve ads on my system for security reasons.
Thank you for verifying that there is no trust relationship and therefore no contract that obligates me to view ads served by your website.
I'm quite shocked that so many people are even considering that MySpace is at fault. I'm no fan of MySpace--I think sharing one's private information in such an easily datamined form is sheer stupidity. However, I don't see how MySpace is in any way to blame for what happened. Browsing web content should not be a security risk!
"I don't see how MySpace is in any way to blame for what happened. Browsing web content should not be a security risk!"
Those two sentences contradict each other.
> So if the newspaper prints something slanderous about me, I can sue the
> paperboy? Or a clipping service? Or the corner newsstand?
Er, no, this is a *totally* broken analogy. The paperboy does not have the ability to affect the newspaper's advertising policy. If the newspaper chooses to put a slanderous ad in the paper, the paperboy can't suspend *or* revoke that decision. At best, he has a very small degree of control over the 100 or so papers that he distributes on his morning route -> he certainly can't affect the entire circulation.
Compare this to MySpace (or any other website that portals third party content). MySpace has the ability to either suspend the ad service until the offending ad is removed, or revoke their relationship with the ad service entirely.
If a newspaper carries an ad that is slanderous about you, you can sue the person who took out the ad *and* the newspaper. The newspaper is responsible for distributing the slanderous content -> it certainly can't argue in court that it has no responsibility to examine the advertisements that people want to put in the paper.
Taking that to a further extreme, the newspaper, for example, can't carry an advertisement from Al Qaeda that offers a reward for successful suicide bombings, or from the American Nazi Party that offers a reward for successful lynchings.
You are arguing that since web sites (like MySpace) can't appropriately examine ads coming from third parties, they're absolved from any responsibility. That's absurd. I'm saying they have to have policies in place that permit the speedy revocation of malware-infected ads, and if they choose (for business reasons) not to examine all the ads, they have to accept the consequences of that decision.
Look at it this way. Let us assume instead that the ad in question did not have malware attached. Let's instead examine the possibility that the ad service in question inserted a sexually explicit porn site advertisement or an advertisement for something criminally prohibited, like child pornography. Now Microsoft's (or their customers') lack of security practices is no longer an issue.
MySpace's handling of their ad service customers remains a major issue, in this case.
@ David Ottenheimer
> Those two sentences contradict each other.
Yes, I see now that you're right because.. because.. er, hold on, I have to say your assertion did not convince me. Explain to me why web browsers should execute foreign code (except perhaps in a heavily sandboxed area, but even then I am doubtful). I suppose you think that running attached programs from an e-mail is appropriate for mail programs, too?
I stand by my philosophy that any web content should be safe to visit. The security is up to the people who design web browsers, and implementing web browser security may involve leaving out "features" which are highly exploitable.
I want to add that I think system upgrade tools should be completely separate from the web browser. I do not approve of Microsoft's method of integrating the system updates with IE. In my opinion, the web browsing tool should be for web browsing, and the system upgrade tool should be a standalone program which is only used for that one purpose.
I don't suppose anyone has a URL of the malicious ads that i could check against my proxy server logs?
"The Web is not a car."
It's a series of tubes.
"I don't suppose anyone has a URL of the malicious ads that i could check against my proxy server logs?"
I don't, but I don't visit MySpace. :)
However, even if I were to visit MySpace, most of the ads are blocked by the hosts file I use, google for MVPS hosts file and enjoy.
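For context, a hosts-file blocker like the MVPS list works by mapping known ad-serving hostnames to a non-routable address, so the browser's lookups for those hosts resolve locally and the ad content is never fetched. A minimal sketch of the format (these hostnames are placeholders, not entries from the actual MVPS list):

```
# /etc/hosts (or %SystemRoot%\system32\drivers\etc\hosts on Windows)
# Requests to these hosts fail locally instead of reaching the network.
0.0.0.0  ads.example-adnetwork.com
0.0.0.0  banners.example-adnetwork.com
```

One nice property of this approach is that it works for every browser and application on the machine at once, since name resolution happens below the application layer.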
"Explain to me why web browsers should execute foreign code"
Good one. I suppose you also want me to explain to you why the personal computer exists. Oh, dumb-terminal, where art thou...
The idea that a secure web can be achieved by using a completely secure client is like the old saying that the best way to keep your PC safe from network attacks is to disconnect the network cable. Simple, right? But you've solved the wrong problem. Another analogy might be the small towns that thought they would have less "riff-raff" and crime if they pushed the interstate system many miles away. That's true, by the way.
For those people who want a "richer" experience while still being able to interoperate within a diverse web of input types, threats as well as vulnerabilities have to be considered. From a risk management perspective, even if you get really good at reducing all the vulnerabilities you can find, if you aren't considering threats you haven't really factored risk properly. In other words, many people would rather live in a house without bars on the windows in a safe neighborhood than inside a bunker in a war-zone...big windows are nice to have, as long as they are reasonable risks to take.
"Explain to me why web browsers should execute foreign code"
I guess a simpler answer would just be that at some point a browser needs to *trust* code, and foreign code is the very definition of the networked computer.
It's not a broken analogy. It's not even an analogy. I took what you said and applied it to a real-world case.
Here's what you asserted: "If you're providing a service to a customer base (providing content) and you outsource part of that service to another vendor by allowing them to serve content out through your service, you're at least partially responsible to your customers for forwarding bad content to them."
A freelance paperboy provides a service to a customer base (his neighborhood), and he outsources part (really, all) of that service to another vendor (the newspaper) which serves content out through his service (paper delivery). By your statement, the paperboy is therefore partially responsible for forwarding any bad content the newspaper publishes.
Now, if that "analogy" seems broken, it's because your assertion is broken. It has nothing to do with the sphere of influence, as you tried to suggest in a later post (MySpace has more control and sway over the ad content than a paperboy), and even if it did, that would only mean that trying to solve this problem at the delivery / aggregation level is not scalable if we can only expect large corporations to be able to police the content they deliver.
The key concept here is the ROLE each entity plays in the chain from original author to reader. Expecting the paperboy to be responsible for newspaper content is ludicrous because his role - his expertise, and his position in that chain - is solely delivery. He gets content from point A to point B, and he doesn't tinker with it. It's a closed package as far as he's concerned. That's the service he provides. And that's how we want it - we don't want our paperboy editing, censoring, and adding to our newspaper.
Now consider the chain from the MySpace example. There are actually two chains:
Myspace sends its own content to the web client. Part of that content is a blank area which references the ad supplier, but no content selection is being made, and that content doesn't pass through their servers. In fact, the ad supplier probably actively prevents MySpace from knowing who gets sent what ad, because that's valuable IP. When the content gets to the computer, it is displayed as expected on the computer screen. So we have:
myspace -> web client -> computer screen
Anyway, once the web client receives the MySpace page, it follows normal HTML rules and loads the content from the ad supplier. Which ad to serve is selected by the ad supplier's server - MySpace has no way of knowing or controlling which ad the ad supplier sends back. Normally, the ad supplier is the original source (for purposes here) of the content, and it gets displayed normally like any other image. That chain looks like this:
ad supplier -> web client -> computer screen
But in this case, a malware hacker was the original author of content, and that content ended up someplace it wasn't supposed to: outside the sandbox in the computer's OS. So we have:
hacker -> ad supplier -> web client -> computer OS
Notice that MySpace is not even in the compromised chain. Also notice that there were TWO points of failure for this to happen. First, the hacker had to be able to insert content into the ad supplier. This means there was a security failure on the part of the ad supplier. Second, the content ended up someplace where it wasn't supposed to: outside the sandbox. This means there was a security failure in the web client. There was no security failure by MySpace - the best they can do is help clean up after someone else's security failure, and they have no way of controlling or detecting any similar failures in the future. They are powerless to solve this problem unless they are willing to stop using an ad supplier altogether.
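The two-chain structure described above is visible in the page markup itself: the HTML that MySpace serves contains only a *reference* to the ad server, and the client fetches the banner directly from that server. A minimal Python sketch (with hypothetical hostnames) that lists which hosts a browser would contact for a given page:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ResourceHosts(HTMLParser):
    """Collect the external hosts a browser would fetch resources from."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        # img/script/iframe tags pull in content from wherever src points,
        # regardless of which server served the enclosing page.
        if tag in ("img", "script", "iframe"):
            for name, value in attrs:
                if name == "src" and value:
                    host = urlparse(value).netloc
                    if host:
                        self.hosts.add(host)

# Hypothetical page: the site serves the markup, but the banner itself
# is fetched by the client directly from the ad network's server.
page = """
<html><body>
  <img src="http://myspace.example/logo.gif">
  <img src="http://ads.example-adnetwork.com/banner123.gif">
</body></html>
"""

parser = ResourceHosts()
parser.feed(page)
print(sorted(parser.hosts))
# → ['ads.example-adnetwork.com', 'myspace.example']
```

The point of the sketch is that the banner bytes never touch the page author's servers: the second request goes straight from the client to the ad network, which is why a compromise there can reach users without any breach of the embedding site.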
So, if you don't want this to happen again, you have to fix two things: you have to not let hackers add content, and you have to have the web client not let content go where it's not supposed to go. Right?
Wrong. The fact that the malware came through an ad supplier was irrelevant. Consider this chain: a legitimate user of a web forum (knowingly or unknowingly) posts the URL to a malware-infected piece of content. Here's the chain:
legitimate user -> forum page -> web client -> computer OS
Because anyone is given the ability to post images to the forum, there is no security breach when the malware is introduced to the chain. The only security problem is in the web client allowing the content to go to the wrong place.
Moreover, if we were to conceptually forgive the web client for this security flaw, and say things like Davi does, likening having a web browser in a sandbox to being relegated to a backwoods small town, there is no way to have the functionality of this forum. You must treat all content contributors as untrusted hackers that can damage computers, so you cannot allow untrusted content contributors.
THIS is why web browsers should never execute foreign code. Not only is it a security risk for the user, but it also drastically precludes the sorts of social networking and site functionalities that you gain if you can reasonably assume that the content stays inert once it gets to the client. If anything, it is ALLOWING untrusted code to run which will turn the web into a backwoods, because it will force web developers to erect walls between content, between functionalities, and between people, because no piece of content can be trusted.
"A freelance paperboy provides a service to a customer base (his neighborhood), and he outsources part (really, all) of that service to another vendor (the newspaper) which serves content out through his service (paper delivery). By your statement, the paperboy is therefore partially responsible for forwarding any bad content the newspaper publishes."
I don't buy that at all. You have the analogy backwards since a newspaper outsources delivery to the paperboy. They are arguably liable for his behavior because the reader has contracted with the paper for delivery, and the paper has outsourced delivery to the paperboy.
And if the readers do contract directly with a delivery service, it is highly likely that the service will be reasonably seen as limited to physical delivery only, since they have *nothing* to do with the content of the paper, including any way to control what/how things are published. MySpace certainly not only has influence but full editorial control and the ability to decide what appears on their site. That's a far cry from a digital paperboy...ooh, hey, do we get to debate net neutrality now?
"if we were to conceptually forgive the web client for this security flaw, and say things like Davi does, likening having a web browser in a sandbox to being relegated to a backwoods small town"
Really? I'm not sure what you're trying to get at since I never said backwoods, and I'm not forgiving the client. Quite the opposite, I'm saying you can not completely forgive the server of any liabilities and yet have a trust relationship with your providers.
Seems obvious to me that not forgiving the server != forgive the web client.
Anyway, I was likening the idea of trying to live in an "invulnerable" space to trying to browse with an "invulnerable" browser. You're basically arguing for cost and sacrifice, while still never really achieving the absolute security that you argued was justification for the cost and sacrifice in the first place. And that's because you're not dealing with the issue of threats.
Instead of advocating for an extreme, I am thus saying the risks are best controlled at the client as well as the server. That means MySpace needs to filter or they will lose the trust of their users.
Balance is best.
> A freelance paperboy provides a service to a customer base (his neighborhood), and he
> outsources part (really, all) of that service to another vendor (the newspaper) which serves
> content out through his service (paper delivery).
Incorrect. The paperboy is not contracted by the neighborhood to go out, write articles, get advertising contracts, edit articles, compose page layouts (which includes putting the ads in the paper), publish the paper, and then deliver it to the customer base.
The paperboy is instead providing a single service -> take a preassembled content (the paper) and deliver it - content that is bundled. He is not responsible for, and has no control over, the content. He cannot choose to deliver only part of the content, and he cannot excise parts of the content. He also has no relationship with those who may provide content to the newspaper -> he has no link to the advertisers whatsoever. If the userbase goes to another delivery method (say, the newspaper stand or the leftover newspaper bucket at their local coffee shop), they are getting the same content regardless of the delivery mechanism.
Compare this to MySpace. They are providing two interacting services simultaneously, to two customer bases. The first service involves providing content to one userbase (the users); the second involves delivering to that first userbase the content provided by the second userbase (the advertisers).
Do you not see that these two instances are different, and that MySpace, due to its business relationships with the two userbases, has obligations to them both?
> Also note that MySpace, in this case, is a lot more like a paperboy than a newspaper editor with respect to the
> ad. It's a hands-off delivery, much like what a paperboy does.
I don't dispute that. That is, in fact, the current situation. However, what I'm saying is that this is a business decision on MySpace's part. They are choosing not to perform editorial functions, and attempting to do business as a paperboy. I'm not arguing against the business logic - there are certainly good reasons for them to try and do business this way. I'm just saying that whether or not they choose to perform the functions of editors, that is in fact what they are, and they are abrogating their responsibility to check the content. They are paid, by the ad provider, *explicitly* to introduce the ad provider's content to their userbase, just as a newspaper is paid *explicitly* to introduce their advertisers' content to their userbase.
> They are powerless to solve this problem unless they are willing to stop using an
> ad supplier altogether.
First, I don't believe this is true. They certainly could solve this problem, or at the very least mitigate this problem, while using an ad supplier. Technical solutions aside, they can require the ad supplier to check their own content as part of a contract. Second, there is no reason why MySpace cannot have their own advertising department and editorial staff and have static advertisers.
You seem to be arguing that advertising-driven web sites can't do business unless they are relieved of their responsibility to check the content. I'm saying, no, they can't relieve themselves of the responsibility, and if they can't find a way to make their business model work, then their model is broken.
It's like saying, "Chemical plant startups can't compete with large companies because they can't afford to dispose of their waste properly, so we should enable them to dump their crap in the river so that they can survive. Besides, people that swim in the river should be wearing chemical proof swimwear and they should be testing the water for dangerous substances before going for a swim."
> Part of the content is a blank area in their content which references the ad supplier, but no content
> selection is being made, and that content doesn't pass through their servers.
> So we have: hacker -> ad supplier -> web client -> computer OS
> Notice that MySpace is not even in the compromised chain.
I notice your post included no reply to:
PC> Look at it this way. Let us assume instead that the ad in question did not
PC> have malware attached. Let's instead examine the possibility that the ad
PC> service in question inserted a sexually explicit porn site advertisement or
PC> an advertisement for something criminally prohibited, like child
PC> pornography. Now Microsoft's (or their customers') lack of security practices
PC> is no longer an issue.
I'm not disputing that the client security is bad. Obviously, this is the case. There are a million threads on this blog about why IE is bad, and if you'll browse through them I think you'll see I agree with you 100%.
What I'm trying to illustrate (using the porn example) is that regardless of the technical security issues involved in the bits of software passing data back and forth, the issue at hand is that MySpace is handing untrusted content to their userbase.
> Consider this chain: a legitimate user of a web forum (knowingly or unknowingly)
> posts the URL to a malware-infected piece of content. Here's the chain:
> legitimate user -> forum page -> web client -> computer OS
Yes, and that does indeed indicate that there are security problems with the OS and with forums.
However, legitimate forum users are not equivalent to MySpace.
Your paperboy analogy to MySpace is not correct. The other posters have it correct. From your description, the paperboy is more like TCP/IP or the web protocols, just a delivery channel. The correct analogy is that MySpace is like a newspaper, and like a newspaper, MySpace needs to be held responsible for the content they are putting together and providing to visitors at their site.
On the topic of expecting a web browser to be invulnerable, I tend to think of the web (and Internet) along the lines of a radio receiver and transmitter. While one wants a radio receiver that can detect and ascertain a signal from any and all noise, under any circumstance, it becomes increasingly difficult if the noise in the channel increases, especially if a lot of the noise is generated by the transmitter sending the signal. One way to help in this equation is to eliminate noise sources. The less noise in the channel, the easier it is for the receiver. The receiver still has to have the basic ability to filter noise, but it is also the responsibility of the transmitter to ensure it is not generating unnecessary noise.
So, back to the Internet. If we expect the web browser to be the only part in content delivery to filter out the noise, we will get into an impossible situation. The content services, like MySpace, putting the content into the channel, need to bear responsibility for ensuring they are not introducing unnecessary noise.
Similarly, in the case of spam, while the email receivers need to have proper filters, the only real solution to reduce spam noise is to focus energies on the transmission sources, by not letting spam noise into the channel in the first place.
I think my paperboy "analogy" is not being understood in the manner I meant it.
I'm not making a paperboy analogy to MySpace. The purpose of bringing up paperboys was to point out that there are a wide variety of levels and roles of content delivery services. The blanket statement, as was made, that if you deliver third-party content, then you are (partially) responsible for any "bad content" you forward on, is belied by the simple-case delivery role of a paperboy. (Or a newsstand, or a clipping service.)
That doesn't mean I'm saying MySpace is like a paperboy. It obviously does more with content than a paperboy. I just used that as an example to demonstrate that there are existing content delivery roles that don't have content culpability associated with them, so it can't be as simple as that blanket statement. In other words, I'm talking about a spectrum of roles of content delivery that involve varying degrees of ownership and responsibility from "newspaper-ish" to "paperboy-ish". Probably not the best terms to describe what I'm getting at - maybe someone else can suggest better terms.
Anyway, to my mind, the context and original source has to be taken into account. I make a distinction between MySpace's responsibility for content it authors, for content its users author, and for content merely piped in from some ad service. This is the spectrum I was trying to get at with the "newspaper to paperboy" range. Content authored by MySpace employees (say, promoting particular MySpace pages or users), is more on the "newspaper" end of the spectrum, and they should bear heavy responsibility for that content. Using an ad service to drop a banner ad on a page is more on the "paperboy" end of the spectrum (not all the way to paperboy, mind you), which should, in turn, cause them to bear far less of the responsibility burden for that content than the ad provider, which is on the "newspaper" end of the spectrum for those banner ads. That's the distinction I'm trying to describe: responsibility should be tied to role, not who's closest to the reader in the publishing chain. This is especially true the more automated the chain gets.
The reason I bring up this spectrum is that I don't buy the premise that if you aggregate content, you should automatically become legally beholden to it at a level where you're at fault *if you merely forward it on*. The internet is increasingly using automated routing of third-party-generated content, such as RSS feeds or instant-add public comments to articles (like this one). By their very nature, these things get forwarded on to others without any review, and I think the value of these communications constructs is evident.
But if we make web developers legally responsible for content that gets forwarded from various syndicated sources, even when it is clear from context that it's just an automated pull with no human review, certain social networking constructs will be fraught with legal peril (or modified to be less useful) because you're basically aggregating nothing but untrusted content.
> That's the distinction I'm trying to describe: responsibility should be tied to role,
> not who's closest to the reader in the publishing chain. This is especially
> true the more automated the chain gets.
I understand the distinction you're trying to make, but I disagree entirely. In fact, I think you have it absolutely backwards -> the more automated the chain is, the more responsibility falls upon the entity that's closest to the reader, since the security (or policy) EVENT occurs at that point. That is the weakest link. And in security, we're supposed to be strengthening the weak links, right? Reading the rest of your post, I'm thinking you're confusing my idea of "responsibility" with my idea of "liability", but I'll get to that later.
I think you're over-emphasizing one side of the relationship. You're saying that the ad service, channeled by MySpace, is closer to the role of a MySpace user than the role of a MySpace employee. You're effectively trying to say that MySpace is an uninterested third party who facilitates communications between the ad provider and the MySpace community, but that's clearly not the case -> MySpace gets compensation from the ad provider. MySpace has very complex relationships with both the users and the advertisers.
> The reason I bring up this spectrum is that I don't buy the premise that if you
> aggregate content, you should automatically become legally beholden
> to it at a level where you're at fault *if you merely forward it on*.
Ah. I think I sense a communication break here. I don't know that making MySpace (or in general any entity that aggregates content) legally liable for third party content is the correct solution. I'm a pretty big proponent of liability, but I don't know that it's the correct solution here. (It may be). As Bruce likes to point out, liability works best when it's imposed upon the entity with the control and the money. In this particular case, actual legal liability (if any) probably should fall upon the ad service, although MySpace can't be completely absolved. They both have some measure of control, although the ad service quite obviously has more control over the particulars of the content.
However, legality aside and from a pure business standpoint, if you want to be in the business of providing content from multiple sources, your business model ought to include "disaster" procedures for recovering from bad content coming from a provider. You need to have relationships with the other entities involved that allow you to recover from this sort of event. MySpace has these sorts of relationships with the user community -> users can report abuse. Users can have their sites removed and accounts revoked. MySpace watches over the user community (for obvious reasons -> a healthy user community is the commodity that enables it to have the relationship with the ad service!).
MySpace obviously does *not* have this sort of relationship with the ad provider, at least in their current incarnation. This is evident from the story. This is a flaw in the business model that they need to correct.
If you're intentionally or unintentionally allowing the ad service to harm the community, you're in a self-defeating business model. You might gain short term profit, but ultimately the community is going to distrust you, and you're going to lose your resource.
> The internet is increasingly using automated routing of
> third-party-generated content, such as RSS feeds or instant-add public comments
> to articles (like this one). By their very nature, these things get forwarded
> on to others without any review, and I think the value of these communications
> constructs is evident.
I agree, but you're forgetting a few things. First, the primary topics are vetted, they aren't passed without review (Bruce writes the initial posts). Second, although the posts are *passed* without review, there is an audit entity in place. I've seen lots of spam posters to this blog (that damn Nokia ad that crops up every other week or so) but if the posts get intrusive, the moderator removes them. There have been a few instances of posters being restricted or having posts removed (astonishingly infrequent given the general state of internet blogs, but that just goes to show you that most of the people who frequent here are civil, if sometimes annoying or oddball).
Well said Pat.
Regarding the last point about the content available on this blog, I think there are differing levels of "responsibility" when it comes to hosting content.
Basically, if the "hoster" is making money providing the content, I would hold them to a greater expectation of responsibility for that content.
In the case of MySpace and other such content "hosters" that are operating their content service as a business (with the intent to be profitable, via paid ads or paid subscriptions), I would hold them to a much greater expectation of "responsibility" for the content they are providing. That business needs to have proper resources (technology, people, etc.) to properly manage the content they are providing; if not, perhaps they should not be in business.
In the case of Bruce's blog (and other "not for profit" content providers), I would expect a lesser degree of responsibility (but still diligence regarding the quality of the content), since he is not running his blog as a business (to make a profit).
Exactly. This gives MySpace a _whole lot_ of $$$ to invest in people, process, and technology to ensure the content "package" they are providing is free from malware, p0rn, etc.
I personally am not very pleased with the way that myspace is working. I have worked very hard on mine and now it will not even let me get on. There has never been this much difficulty with anything. I believe that if it was going to be this much trouble...people should have never made it. My question is...why is it always down and you can never get on?
Account was hacked 7/28/2006 on Myspace and I'm genuinely pissed off...Unable to retrieve password, not receiving the password via my email, and no response from Myspace. Deleted all my images from image server separately so that whoever uses my site is shit out of luck - so ...
I need to get on my page and can not at all. Help!!!
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.