Schneier on Security
A blog covering security and security technology.
August 14, 2006
Good essay on "faux disclosure": disclosing a vulnerability without really disclosing it.
You've probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among
researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I'll call it faux disclosure. Here's why.
Security researchers Dave Maynor of ISS and Johnny Cache -- a.k.a. Jon Ellch -- demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they'd used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook's internal Wi-Fi card. But they said that the exploit would work whether the third-party card -- which they declined to identify -- was inserted
in a Mac, Windows, or Linux laptop.
How is that for murky and non-transparent? The whole world is at risk -- if the exploit is real -- whenever the unidentified card is used. But they won't say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.
It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story to report that Maynor said, "Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers -- mainly because Apple had not fixed the problem yet."
That's part of what is meant by full disclosure these days -- giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor's image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor's customers.
Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this,
some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.
Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.
Posted on August 14, 2006 at 1:41 PM
• 47 Comments
When the public content of a disclosure is basically, "we can break this, we told the vendor how, but we won't tell you," what's the real information content for the general public? Why tell us? Is there anything we can change to fix the problem? Is there at least a workaround?
The other main purpose of public disclosure in most scientific fields is to allow independent verification. That's obviously not possible here (and maybe it's not even that important when discussing vulnerabilities).
In the end, is a low-content vulnerability announcement actually useful to the security community, or is the real message just, "Look how cool we are?"
Same thing different time.
Why do you think that boycotts are so heavily regulated?
Publicly traded corporations should only be concerned with making profits (short, medium, long time spans), but they get a free pass on consumer satisfaction thanks to papa government. The legal order is full of methods for trampling on your right to disclose, complain, boycott.
Well, the same thing applies here. I assure you that corporations will lobby for years to come to make full disclosure illegal.
To be perfectly honest, I'm not sure what to believe about this exploit. It's pretty clear that the security researchers are getting a lot of publicity for it. But... Apple was specifically chosen for the demonstration because of smug and complacent Mac users, when the exploit works just as well on other OSes. It appears that the fault lies in the wireless chipset/driver/specs/something, and is either the responsibility of that vendor or of every single OS out there. The default setting for Apple's wireless (not to join untrusted networks) would prevent the exploit. It is so unclear what is happening that I am not sure this is a valid entry into the "full disclosure" debate: how long has the vendor had to review it? _Which_ vendor is failing to respond? Did Apple really lean on the researchers? The article has been updated to note, "Maynor recently left ISS and is now at SecureWorks. As a matter of fact, SecureWorks is trumpeting the faux disclosure as a major news event, listing 29 different sites reporting on it. You can even watch the tape of the video on their site."
Maynor was also quoted as saying, "If you watch those ‘Get a Mac’ commercials enough, it eventually makes you want to stab one of those users in the eye with a lit cigarette or something." This does not sound like someone who's operating with objectivity.
If I were Apple, and a security researcher said they were going to demonstrate an exploit on one of my computers, specifically set up with a stupid non-default setting (silently joining any available network is dumb, though I frequently do it), specifically in order to puncture the smug superiority of my customers, when the insecure device is not mine and not specific to my computers, I might lean on the researchers not to present things in that way, too. Maybe Apple did do something wrong. But most of what I see here is the researchers and blogger being so eager to make a public point about Mac user complacency that they have managed to confuse any value that point had.
Yeah, some Mac users are complacent. But not all of them are. "More secure" does not mean "absolutely secure." And, to be honest -- if an operating system is generally less exploited than any other, and an exploit comes around that affects all operating systems, then that operating system is still arguably more secure.
(Yes, I love my Mac. Sosumi. And give me a real exploit, rather than just trying to make me feel bad.)
I'm not aware of any regulations on boycotts per se. In what time and place are or were there such things?
Daedala: What made the demonstration so popular was the fact that the exploit _does_ work with the default settings on OS X. i.e. it is exploitable even if you do not join a specific network. Short of turning off your airport card, the only way to resolve this is for vendors and the companies that produce drivers for them to fix the drivers.
OpenBSD's "no binary blobs" policy against binary-only drivers now looks brilliant.
Considering OSX is based on BSD, you would think they would have gone for this...
I'm not sure if "responsible" disclosure exists as such, or if the concept is only a way to distract attention. I think the main problem with "responsible" disclosure is the failure to set a date for full disclosure.
If a deadline for full disclosure is given, companies will be pressed to issue patches within a reasonable time. For example, critical exploits (remote root) could be given, say, one month's grace before full disclosure.
On the other hand, the question is whether it really matters much as long as people don't keep their PCs updated.
During the demonstration, the MacBook did in fact join the network provided by the Dell AP. The researcher made a point of that, and noted the IP the MacBook was given. The claims beyond what was actually demonstrated in the video (even presuming the video was straight) are part of the reason I suspect that this is a publicity stunt.
I know that Macs have their problems; I'm not saying they're invulnerable or something stupid like that. But I am saying that the researchers have been sufficiently unclear to cast severe doubt on their claims, and it appears to me that the "faux disclosure" is less due to vendor pressure than to researchers trying to create more publicity.
If a bounty system is used, vendors have a problem trying to silence security testers. You can run it, you just can't hide a flaw forever. This could be a good business opportunity for smart programmers who uncover exploits, which should be encouraged and rewarded. Instead of breaking stuff, you can do better finding the broken stuff. It's out there.
Security & Wifi engineer Jim Thompson has an interesting discussion on this at .
It comments on some of the contradictions in the video, Apple "pressuring them" and why he thinks these guys wrote their own drivers, rather than using the standard drivers.
Sorry, the url didn't come through: www.smallworks.com/archives/00000455.htm
Full disclosure is great, but timing is important. Partial disclosure with, for example, a 30-day fix window has a lot of value to the user community.
"I can break X, and I'll tell you how next month," with a private message describing the problem in detail to X's maintainer serves many purposes: it deservedly sullies the reputation of X, it puts X's users on notice to expect a patch soon and an exploit in 30 days, and it puts the fear of god in X's maintainers, because if they don't fix the issue quick, they'll have angry users.
Without the time limit, X's maintainer can drag his heels with the fix. Without the window, X's users are vulnerable to the zero-day exploit. A reasonable time delay between announcement and details/exploit is the best way to protect users.
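The announce-then-disclose window described above is mechanical enough to sketch in a few lines. This is only an illustration of the 30-day scheme the comment proposes; the notification date is an invented example.

```python
from datetime import date, timedelta

# The 30-day fix window between private vendor notification
# and public release of full details, as proposed above.
FIX_WINDOW = timedelta(days=30)

def details_release_date(vendor_notified: date) -> date:
    """Date on which full details (and likely a working exploit) go public."""
    return vendor_notified + FIX_WINDOW

notified = date(2006, 8, 14)   # vendor privately given full details (made-up date)
public = details_release_date(notified)
print(public)  # 2006-09-13
```

The value of the fixed deadline is precisely that it is this predictable: users know when to expect an exploit, and the vendor cannot stall indefinitely.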
How do you define "full disclosure"? If it means letting the vendor know, giving them time to fix the flaw, then disclosing if the flaw is not fixed after an appropriate amount of time (2 - 4 weeks or more, depending on the flaw), then I am in agreement.
If it means spouting the problem to the world before the vendor knows of it, making the vendor race the baddies before they create an exploit, then it is dead wrong. That does nothing but make the world less secure and cause security admins and managers headaches and possible breaches. Not a good idea.
Vendors need to have a chance to fix their flaws. If they don't fix them, then release it. They will get bad press, and they will learn a lesson.
Apple's problem is that they try to make PCs and Microsoft look bad (even though they would barely be afloat if not for a few million from MSFT) by touting how much more secure they are; then they get hit harder when they start gaining traction and become a bigger target. Stupid. Very stupid.
I guess paying the bounty would buy silence. Everything is for sale. Silence is golden. Any idea how much is being paid? I counted 24 flaws in one program. There could be more. The program is proprietary, so there is an incentive to hide the flaws and an incentive to exploit the flaws. I guess open source is more secure or at least more easily fixed since everybody knows or has some idea about the flaw in real time. Open source bounties lead toward innovation, not just bought silence. From what I can determine, at least. You give the encryption code away and say I will pay $1,000 if you can do X with it or find a way to do X that works. Doing X takes a week. It's unlimited really. Corporation X says we will give you $1,000 to play Sgt. Schultz, say nothing, do nothing and know nothing. Talk about dumbing down an industry. No wonder so much stuff is broken.
With the bounty system you either get silence, which can be good or you get conversation, which can be good. Word of mouth is the best advertising. Companies market crappy products that aren't secure and test them on consumers. If somebody gets injured, wins a settlement and signs a non-disclosure agreement you never know a thing. You only know when a bunch of people get injured, possibly you included. Defective software can get people killed. Vendors seem to be like so what, what do you want us to do. Did you patch everything? Then the patch is defective, so you are worried about patches all the time. It's now a national security issue. Blame everything on the users, for not patching the code that's a mess. It usually depends on where you start.
Maynor replied today to several of the errors in the smallworks post.
Aside from that, the author of the post Bruce showed also didn't seem to do their research: Maynor is no longer with ISS, something he said several times in the video. You have to wonder if the author even watched the video.
What is the legal situation like? I tell BigCorp about a vulnerability and promise to release the information publicly in a month. What if they respond with an injunction to prevent the release? What is my legal situation if I anticipated the injunction and sent a copy of the exploit to a friend in a different jurisdiction? Or if I've programmed a computer somewhere to post the exploit automatically at some later date? Does an injunction require me to take all steps in my power to prevent the disclosure, including ratting on my friend who has a copy or revealing the computer which will auto-release it? What if I've taken steps so that even I can't halt the release? (E.g. the release will come from some bot somewhere, and I've deleted the information which would allow me to find that computer again.) Have I committed an illegal act by attempting to circumvent the injunction, even though the act was before the injunction was in place? Can BigCorp serve an injunction against me if I've maintained anonymity?
If finding security holes is a good idea (see Eric Rescorla, "Is Finding Security Holes a Good Idea?"), and if full disclosure really reduces our risk, it seems to me there is a need for anonymous full disclosure.
"I'm not aware of any regulations on boycotts per se. In what time and place are or were there such things?"
There is no single law against boycotting, but there is a whole category of laws specifically aimed at reducing this free market method of voicing complaints.
The category of 'commercial speech' as opposed to treatment under 'free speech' has been a magic creation of the courts in the early '40s. It literally was created with no precedent and no good reason.
Commercial speech is heavily regulated today. Let's see: FCC, FTC, SEC all control how people can communicate. There is a strict ethical code to follow, that in reality is actually unethical and immoral.
"What if they respond with an injunction to prevent the release?"
Exactly! Here is an example of preventing free speech. But it is not recognized as free speech, but is discussed as commercial speech, as if there is an actual difference.
Might I suggest an editorial describing what you think full disclosure means, how it should be implemented, and how it helps security?
"There is no single law against boycotting, but there is a whole category of laws specifically aimed at reducing this free market method of voicing complaints."
I'm not convinced. I would like some specific information on how the government can even try to force people to *not* stop buying a product in something vaguely resembling a free market.
Anti-boycott laws: the easiest option is to allow companies to sue the organisers of the boycott for losses caused by it. So a fictional entity is given rights under the law to sue a real person over something that might not even happen as a result of the real person's actions. In parallel with that, you can allow media to operate a closed market for advertising or program space, and vary the prices charged at will. So instead of "ads at this time cost X", they can refuse to sell, or charge different prices based on who the buyer is. Thus publicising the boycott becomes harder (the internet makes that less true today).
All perfectly legal, by definition.
Moz: it's their airtime. Are you saying that they shouldn't be able to sell it how they want, to who they want? Why?
You still haven't given an example of a case where a corporation successfully sued a "natural person" for losses caused by a boycott, without proving that person lied in order to cause the boycott. (Which is illegal for good reason, and just as illegal when a corporation does it.)
I'm not saying that making corporations people wasn't stupid, or that differentiating classes of speech isn't stupid, or that corporations can't afford to sue people into bankruptcy without winning. It seems that you could make a lot of your points without resorting to this vagueness.
> The other main purpose of public disclosure [...] is to allow independent verification. That's obviously not possible here (and maybe it's not even that important when discussing vulnerabilities).
Well, it's important for vulnerabilities too. If you can't prove whether the vulnerability really exists, you have to trust the announcer. That can a) lead to several kinds of DoS for the company with the vulnerability, including, but not limited to, bankruptcy, and b) if you do not trust the announcer despite the correctness of the finding, you may get a problem or even have one already, with no chance to know until it's too late.
> A reasonable time delay between announcement and details/exploit is the best way to protect users.
It is not safe to assume that nobody but you and the company knows about the vulnerability.
BTW: what is the length of a "reasonable time delay"? If it is a couple of hours or less (the delay time of most reputable OpenSource projects and even some of the companies with closed sources) it might be OK if the vulnerability merely results in a crash of an insignificant little application. But who is able to define "insignificant" in _all_ cases?
And then there are some of the bigger software companies which react so slowly that even a tectonic plate could easily dodge their hits.
And then there are hefty legal threats as mentioned already by another poster.
What if the vulnerability is in a significant application that allows your competitor to steal or even worse: manipulate your data? You do not know if your competitor doesn't already have an exploit as shown above. Do you want that competitor to silently try to ruin you another month or two or do you want a chance to switch the application off or at least calculate the risks?
I am involved in both sides of it and decided that full disclosure is the best for me as both a consumer and a producer; but your mileage may vary of course, especially if you run a publicly traded company.
> Full disclosure is great, but timing is important. Partial disclosure with, for example, a 30-day fix window has a lot of value to the user community.
Full disclosure without an opportunity to fix is morally equivalent to publishing a real-time list of houses with broken door locks, with the purported aim of getting manufacturers to make locks more reliable.
With open-source projects, the only option short of immediate full disclosure is privately contacting the maintainers, who will probably announce the flaw immediately and get a patch out within a few days at the latest. Oftentimes, though, if you find a flaw in OSS, you can identify the offending line of code and provide a patch to the maintainers.
In general, about the possibility of existing exploits, the initial partial disclosure should be detailed enough for you to assess the risks for your company and possibly will include a workaround. Full and immediate disclosure means my competitors have a month to develop and deploy exploits before I get patched.
Really, the main issue is legal injunctions against revealing the exploit.
I have found and responsibly disclosed 4 vulnerabilities in 4 different products from 4 different companies. Not much, but enough to get responses ranging from "We don't care" to "Thank you, here is the fix we are publishing tomorrow".
The problem with responsible disclosure is that it doesn't work with irresponsible companies.
And what if you are personally using faulty software and can't protect yourself? Nobody wants to be their own target.
And what if the vulnerability is discovered while working for a client / employer? They usually own everything you do for them, so involving them (by using their license of the product, for instance) is just begging for trouble.
"I'm not convinced. I would like some specific information on how the government can even try to force people to *not* stop buying a product in something vaguely resembling a free market."
It's not that the gov can prevent anyone from not buying - that would be totalitarianism.
The legal structure LIMITS the ability to mount an EFFECTIVE boycott.
Here are some examples off the top of my head:
- Public sector unions are forbidden to strike. (Not to say that unions themselves do not have privileges)
- There is regulation for government whistleblowers.
- A TV commercial comparing product X to Y, can not state that Y is bad, but instead has to be "fair" and say that X simply performs better than the 'other' leading brands.
- Because of gov ownership of the streets, one cannot stage a spontaneous (obviously non-violent, and non-disruptive) boycott close to the company.
- Boycotters must tread carefully not to be sued by the ridiculous libel, slander, bribery, blackmail laws.
- Gov nationalization of the radio spectrum has for decades cartelized broadcasts and has made effective boycotts too expensive for most. Of course this has slowly changed with cable & satellite.
- News broadcasts are regulated by the FCC to have 'balanced' news. Which creates a bias of balanced. In fact, unless there is an opposing view, you may not have any view at all. Newspapers for decades were not allowed to have editorials & op-eds.
- As previously noted, a court can place an injunction on anything that is magically deemed not to be in favor of the 'general welfare', whatever that means.
Again, one has to stress that there is a legal difference between 'commercial speech' and regular free speech. You are not free to have a negative opinion of a commercial nature and have ALL the media channels completely open for your complaint.
"Full disclosure without an opportunity to fix is morally equivalent to publishing a real-time list of houses with broken door locks, with the purported aim of getting manufacturers to make locks more reliable."
This analogy is terrible!
It is nothing like that at all. To be a proper software analogy, one would have to be a customer of the houses.
The more proper analogy would be a storage facility disclosing that some of their storage units are open, and that customers should come and make sure that it's not their unit.
Of course it would never do it that badly. It would obviously try to inform customer X & Y that their units are not locked. Which would be a partial disclosure. The potential still exists for someone to intercept their mail, phone, or e-mail.
I think you are confusing "protest" with "boycott".
Protest: "the act of protesting; a public (often organized) manifestation of dissent"
Boycott: "refusing to deal with a person, group, nation, or group of nations so as to punish or show disapproval"
There are government actions that restrict effective protest, but that is very different from government actions that restrict effective organized boycott. They are two different issues to be taken up separately.
@Anonymoose: Don't forget that a boycott is usually done as a protest though, and for it to be effective it has to be coordinated to get as many people as possible to participate.
But this is really a derailment :)
The door lock analogy is funny. Security is cultural, not only technical. You lock the doors and the intruder breaks a window to get in. If you had left the door unlocked, you wouldn't need to replace the broken window after being robbed. What if all the windows are broken? Do you keep the doors locked or just protest? People have brass in their pockets. When you own a big chunk of the bloody third world, the babies just come with the scenery. It's corporate.
As someone who maintains a cluster, I would actually prefer (from an individual standpoint) completely full disclosure at the moment the vulnerability is discovered. I realize this is not the case for everyone who maintains a cluster (and indeed, would not be the case for me, necessarily, if I was maintaining a different cluster).
If software is compromisable, I (currently) want to know about it immediately. I realize that this means that the black hat community gets this information as well. I can accept that, because I would rather TURN OFF or block a vulnerable service until a patch is released. People will argue that this means that mission critical software will be disabled to the detriment of those who need it. I understand that, but the counterpoint is that you're driving around on Firestones hoping the tires don't blow. Since my current definition of "mission critical" doesn't include "we lose millions of dollars a day if this software doesn't stay up", I'd rather just turn the broken stuff off. Again, I realize this is not a universal sentiment.
Delaying public notification for any amount of time keeps script kiddies in the dark, yes. However, the black hat community can still exploit the software *if* they are aware of the vulnerability. The delayed notification gives me a defense against the "tool using" hacker crowd, but much less defense from the "tool inventing" hacker crowd.
Obviously from a risk assessment standpoint you're weighing the probability that (someone is skilled enough to discover this vulnerability on their own + they're targeting your machines) vs. the probability that (disclosure will result in exploit code going around before the vendor can patch the software + some worm/script kiddie will hack your machines unless you turn the software off).
The problem with the whole "disclosure" debate is that there is no meaningful discussion along the lines of risk analysis - everyone is arguing one side or another based upon their own comfort level when performing that analysis, instead of talking about how to accommodate the entire community of "software users" and the fact that risk analysis of this issue is NOT UNIVERSAL.
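The weighing described above can be sketched as a toy expected-loss comparison. Every probability and dollar figure below is invented purely for illustration; the point is that the decision flips depending on the numbers each site plugs in, which is exactly why the analysis is not universal.

```python
# Toy expected-loss comparison for the disclosure decision described above.
# All probabilities and costs are made-up illustrative numbers, not data.

def expected_loss(p_attack, cost_breach, cost_downtime):
    """Expected loss = chance of a successful attack times breach cost,
    plus the cost of any mitigation downtime."""
    return p_attack * cost_breach + cost_downtime

# Case 1: vulnerability kept quiet; a skilled attacker may rediscover it,
# and we take no mitigation downtime because we don't know about it.
quiet = expected_loss(p_attack=0.02, cost_breach=500_000, cost_downtime=0)

# Case 2: full disclosure; exploit code circulates, but we turn the
# vulnerable service off until a patch ships, eating some downtime cost.
disclosed = expected_loss(p_attack=0.001, cost_breach=500_000,
                          cost_downtime=5_000)

print(quiet, disclosed)  # 10000.0 5500.0 with these invented inputs
```

With these particular made-up numbers, disclosure plus downtime is cheaper; a site whose downtime costs millions per hour would reach the opposite conclusion from the same formula.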
I think that software developers (open source and closed source) should absolutely have a disclosure policy attached to the packages of software they are distributing.
This way, when you install something, you *know* (as a customer) what the maintainers will do in the event someone notifies them of a vulnerability. You can make an assessment of which software package to deploy in your enterprise while taking the disclosure policy into account.
Moreover, security researchers then have an established policy to follow. Discover a hole in sshd? According to F-Secure SSH's policy, you should inform the developer crew and keep silent for two weeks. According to some other (OpenSSH) SSH, you should publish immediately. According to a third (BlackBox SSH) you'll get sued if you publish any results.
Now cluster maintainers can pick which SSH they want to run, based upon the mission-critical level of SSH in their enterprise, and they know what the vulnerability policy is. Nobody (except perhaps the very foolish) will use BlackBox SSH, because nobody wants to run that risk. Security researchers know they don't have to worry about getting sued by F-Secure or jailed for DMCA violations if they follow F-Secure's policy. Everybody wins.
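The per-package policy idea above could be made concrete as machine-readable metadata. This is purely a hypothetical sketch: the field names are invented, and the three SSH variants and their policies are taken from the comment's own examples, not from any real packaging standard.

```python
# Hypothetical machine-readable disclosure policies, one per package,
# mirroring the three SSH examples in the comment above. Field names
# ("notify", "embargo_days") are invented for illustration.
DISCLOSURE_POLICIES = {
    "F-Secure SSH": {"notify": "vendor-first", "embargo_days": 14},
    "OpenSSH":      {"notify": "public",       "embargo_days": 0},
    "BlackBox SSH": {"notify": "vendor-only",  "embargo_days": None},  # disclosure forbidden
}

def acceptable(policy, max_embargo_days=30):
    """A cluster maintainer's simple filter: reject packages whose policy
    forbids disclosure outright or embargoes longer than we tolerate."""
    days = policy["embargo_days"]
    return days is not None and days <= max_embargo_days

usable = [name for name, p in DISCLOSURE_POLICIES.items() if acceptable(p)]
print(usable)  # F-Secure SSH and OpenSSH pass; BlackBox SSH is rejected
```

The design point is the one the comment makes: once the policy is an explicit, comparable artifact, both deployers and researchers can act on it mechanically instead of guessing.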
Correction: When you own a big chunk of the bloody third world...
The tone of this article seems to be in opposition to that of the Rescorla paper that was posted here earlier in the month, wherein it was said that "we find little evidence that vulnerability disclosure is worthwhile". (Section 7.3)
I found that paper pretty hard to digest for these kinds of reasons.
I was driving on Firestones last night and never thought twice about it until Pat posted his thoughts here. The car came with them preloaded.
I think you might find it difficult to boycott the IRS, though the government already takes way more than its fair share for the myriad of expensive useless services it provides to most citizens.
Back on topic, though, what's the answer?
A full disclosure tells criminals how to attack. At that point it's a race between the criminals and the company that created the vulnerable product to see who can get to the users first. Company and user have the advantage, because it's public.
A partial disclosure tells the attacker what to look for or where to focus efforts, but doesn't have the same incentive for the company to raise the priority of a fix. At that point, the attacker will likely win.
Nondisclosure means the user has little to no clue about the problem, the company has zero incentive to fix the problem, and the attacker may or may not be able to exploit. Attacker has the advantage.
Of the 3 possibilities, I'll take full disclosure. If the vulnerability is bad enough and very public, third party solutions and workarounds tend to show up to assist until the company can fix the vulnerability.
If I may say so, that's not very nuanced: "faux disclosure" can be a good thing sometimes. Given enough time, it can bridge the gap until a workaround for the flaw is ready. On the other side, nothing goes unspotted forever; someone will find it.
> In general, about the possibility of existing exploits, the initial partial disclosure should be detailed enough for you to assess the risks for your company and possibly will include a workaround.
Who is able and willing to determine the exact quantity of "detailed enough"?
Who do you trust to tell you the truth? If you are not able to verify the existence of the vulnerability, you can't do anything but trust the hearsay. Especially interesting if the company involved disputes the very existence of the vulnerability.
> Full and immediate disclosure means my competitors have a month to develop and deploy exploits before I get patched.
The cynic in me wants to say: that's the cost of doing business, dude. But seriously:
You are fully able to decide whether you can live with it, or switch the application off if you can't.
Switching off may get very expensive. I know of some services where every hour of downtime costs several million US$, and several clients will(!) jump ship, which costs even more. This is the situation where the difference between "trust" and "verifiable knowledge" hits the purse directly and is an attack vector (DoS) in and of itself.
Non-verifiable partial disclosure (a tautology?) does not work; even the offered workaround is not testable, because the exact vulnerability is not known, so the workaround cannot be tested against the vulnerability and it is therefore not known if the workaround works at all.
There is no alternative to full and immediate disclosure of all vulnerabilities someone gets knowledge of.
A big problem is legality, of course. There are a lot of examples where the discloser has been threatened with lawsuits, been sued, or even arrested. Thus it is highly necessary to implement a facility to post such vulnerabilities anonymously, and that's a very hard thing to do if you want sufficiently high values of "anonymous".
I'm a bit more pragmatic: the one or two hours you need to word your publication can be used by the producers to write a patch, if you send them a little note beforehand. But don't wait for an answer; the programmer might be on the other side of the world and asleep.
But beware! Even that one or two hours might be way too long! There have been some worms and other varmints in the history of networked computers which were able to infect large numbers of nodes in mere minutes.
Anybody can sue anybody for anything, so a big corporation can sue you if you call for a boycott. They lose $100,000 and you lose $100,000. Guess who feels it worse?
Slander laws are not boycott laws; they just work the same, because boycotts happen for a reason, so companies sue you for advocating a boycott on the grounds that you are slandering them. The laws are typified by the veggie-libel laws that went after Oprah. She won, but the legal fees cost her a fortune.
Veggie-libel laws cover both biotech plants and meat packers. If you complain that meatpackers are unsanitary, the meatpackers can sue you, and they do just that. Truth is not a defence in state court, but it is a defence on Federal appeal, if you have the money for an appeal.
Some boycotts are illegal. Boycotts against Israel are specifically illegal (the legal proceedings can be found via Google), but there are others.
That is in some countries, not all.
For that reason, IMHO, I'm often glad I do not live in a country like, say, the U.S.
``Gov nationalization of the radio spectrum has for decades cartelized broadcasts and has made effective boycotts too expensive for most. Of course this has slowly changed with cable & satellite.''
Hmm. It seems like the FCC auctions off the airwaves, which are a public resource and vulnerable to a tragedy of the commons. If we had a completely free market for spectrum, it might look a bit like CB radio; anyone remember those? Anyone you know in the market for one?
It seems to me that a lot of people don't realize that vulnerabilities exist whether or not anyone finds them and announces them. If one person found it and partially disclosed, that means that another person may already know about it. These exploits are carefully guarded, but occasionally sold or traded, without any public disclosure at all. Partial disclosure means the clock is ticking and they will try to make the most use of it before time runs out. If you think this makes you safer, think again.
It is perfectly possible to write code that is provably correct and therefore contains no implementation errors (note that design errors and misfeatures are still possible). I don't know of many groups doing anything like this, except I think Java bytecode can come with a proof that it doesn't over- or underflow the stack. Companies just don't care enough to put that level of effort into writing correct code when good code suffices, or when consumers can't tell the difference. A premise of free markets being a good thing is complete knowledge on the part of the consumer, but most software users just have no clue about software quality beyond the UI.
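The stack-safety check mentioned above can be sketched for a toy straight-line instruction set. This is only an illustration of the idea, not the real JVM verifier; the opcodes and their stack effects here are hypothetical:

```python
# Toy stack-safety verifier: each opcode declares how many values it
# pops and pushes, and we simulate the stack depth across the sequence.
# (Hypothetical opcodes, not real JVM bytecode.)

EFFECTS = {           # opcode: (pops, pushes)
    "push": (0, 1),   # push a constant
    "pop":  (1, 0),   # discard top of stack
    "add":  (2, 1),   # pop two operands, push their sum
    "dup":  (1, 2),   # duplicate top of stack
}

def verify(code, max_stack):
    """Return True iff the sequence never under- or overflows the stack."""
    depth = 0
    for op in code:
        pops, pushes = EFFECTS[op]
        if depth < pops:
            return False          # stack underflow
        depth = depth - pops + pushes
        if depth > max_stack:
            return False          # stack overflow
    return True

# verify(["push", "push", "add"], max_stack=2) accepts the sequence;
# verify(["pop"], max_stack=1) rejects it for underflowing an empty stack.
```

The real JVM verifier does a more elaborate version of this (tracking types and merging control-flow paths), but the core idea is the same: a simple static pass can prove the absence of a whole class of runtime errors.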
'Hmm. It seems like the FCC auctions off the airwaves, which are a public resource, and vulnerable to a tragedy of the commons. If we had a completely free market for spectrum, it might look a bit like CB radio; anyone remember those?'
No. Actually, radio spectrum can be exclusively privately owned.
In fact, the courts had already applied the common law (the homesteading principle) to the radio spectrum in 1926, allowing property rights in a given spectrum bandwidth and radial distance.
It was precisely because it could become private property that Herbert Hoover rushed to nationalize the whole thing.
The fact is that the government auctions off property that never belonged to it in the first place. It is no different from the government using taxpayer money to acquire land (the Louisiana Purchase, the Gadsden Purchase, lands from war) and then selling it back to the people!
It is a classic scheme.
'Anybody can sue anybody for anything, so a big corporation can sue you if you call for a boycott. They lose 100,000$ and you lose 100,000$. Guess who feels it worse?'
Exactly. The law is perverse, and changed drastically by repeatedly violating the constitution.
Slander laws are illegitimate, despite the fact that they are enshrined in so called 'public law'.
Redress for genuine slander (which is really fraud, an actual crime) and boycotts are effective free-market tools that had to take a back seat to corporatist demands.
"Reasonableness is determined by balancing privacy against the government's need."
I had a thought when I read this sentence, and I don't want to come off sounding strange, but since when have we begun to blindly accept that the government has greater or more important needs than the people? Isn't it a government of the people and for the people? So aren't the needs of the government the needs of the people? In my mind this means that the government's need is to protect the people and their privacy. Perhaps if the people who govern were to think more in terms of protecting the people's needs, including the need for security, as opposed to an institution's needs, then they would make wiser decisions.
What type of government is it that puts the needs of the state above the needs of the people? Any guesses? Communist! Are we slipping away from our democratic ideals towards a communist state ourselves?
"What type of government is it that puts the needs of the state above the needs of the people?"
That's a silly question: ALL OF THEM!
"Are we slipping away from our democratic ideals towards a communist state ourselves?"
All states are socialistic. And no state has ever been limited by merely written laws, when the power to decide upon those laws and their legality rests with the same institution.
It's much worse than you think. The USB card isn't part of the "hack" here; the internal Apple AirPort card has the IP address indicated.
Details here: http://www.smallworks.com/archives/00000461.htm
I agree with Bruce, "full disclosure" is the only path.
When I try to get on Yahoo or some sites, a message comes across my screen saying I have committed an illegal act, and it shuts me down. I have never put anything illegal on Yahoo; I just like to answer the questions for points and to pass the time. Could someone please help me? Someone gave me this PC over a year ago with no problems, but for the last couple of weeks this keeps happening. I know the PC is old, as it runs Windows 95, and I don't have a lot of money to buy a new or even used one unless it was really cheap. As I am disabled and cannot go out a lot, this helps me get through the day. Please tell me what I am doing wrong or how I can fix the problem. Thank you and God bless....Sandy
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.