Schneier on Security
A blog covering security and security technology.
September 11, 2009
Schneier on "The Future of the Security Industry"
Here's a video of a talk I gave at an OWASP meeting in August.
Posted on September 11, 2009 at 12:29 PM
Is there a transcript available?
You knew someone was going to ask that, right?
It's Friday. Where's the Squid??
I tend to disagree with the premise that regulation should play a bigger role in security. I think that would be a disaster because it would increase the cost of IT and not actually improve security by a whole lot.
I think the reason that automobiles have security and safety features is that people want them, not that the government mandates them. We want locks on the doors and air bags because the idea of dying in a car accident is not a pleasant one.
We could discuss the merits of regulation all day, but I see the problem as a fundamental flaw of regulation itself. Regulation, like everything, is subject to the law of unintended consequences. We often regulate where we feel people are not risk-averse enough, but the laws themselves, through perverse incentives, create other problems which are even more invisible and harder to gauge the risk of. This leads to further regulation, further unintended consequences, and therefore even more difficult-to-understand risks.
I think that, eventually, the risks become so esoteric and the downsides of regulation so invisible that "progress" in regulation becomes difficult to attain, because too much expertise is required to navigate the entire legal structure and the risks are so much a product of the system that created them. This makes new regulation a tough sell: the people who made it to begin with can't understand the issues well enough to explain them, and those of us who listen to the explanations are confused, skeptical, misled by widespread disinformation campaigns, or simply not well-versed enough in the subject matter to buy into the solutions proposed.
This, eventually, leads to apathy until the problem becomes too big to ignore. A huge, much-debated example of this is health care. I think it's exceedingly clear that much of the problems that sector faces are actually artifacts of regulation - unintended consequences of otherwise well-intentioned regulation. Certainly a lot of the laws are well-meaning, but some of them don't even meet that standard (if a specific corporation or lobbying effort is for the legislation one must be wary because it's almost certain to benefit them to the detriment of everyone else). Worse yet, the eventual outcome is a major power grab by government itself. The mess gets so big and people so sick of it that they allow themselves to become convinced that it would all be better off if the control were centralized, which leads to its own set of unintended consequences.
This spiral of regulation leading to unintended consequences, leading to further regulation, leading to public apathy until the problem gets so huge it can't be ignored any more, leading to the eventual massive power grab, is difficult for the common mind to grasp, let alone be convinced of without appearing to have ulterior motives. It's akin to asking a layperson to provide advice in a match between two chess masters, each of whom thinks out layer upon layer of moves and counter-moves further ahead of their opponent than most of us can imagine, let alone match. The layperson can't even make an informed decision as to who will win the match based on understanding, so they resort to statistics, intuition, or even appeals to emotion instead.
We could break this cycle which kills entire industries and cripples our economy by stopping it when it's small and saying no to the beginnings of the regulatory nightmare. No matter how well-intentioned we are, we can't provide any insight to the masters of the game, and it's high time that we admit it and let the market decide who wins and who loses.
I often hear the same argument advocating for the dissolution of the FAA. "But airlines don't want their aircraft to crash", the proponents say, "They don't need regulation to keep their airplanes in good repair".
Sorry, but this Utopian view of the world and the innate goodness of corporations doesn't correlate with the real world. The "regulatory nightmare" that you refer to helps to level the playing field so that [in theory at least] the rights of a single citizen have the same weight (and the same pull) as those of the multi-national corporate behemoth that has already assigned a dollar value to the "life of just one child" (and parent, young adult traveler, etc).
Please, someone weigh in with scholarly discourse on the studies of acceptable risk and loss by businesses. And governments.
>Sorry, but this Utopian view
This is the kind of statement that ends conversations (or starts their decline into insults/derails/etc.) fast. Avoid this language, because it's clearly not a dismissal based upon facts or logic but rather your personal, emotional attachment to your political worldview.
>innate goodness of corporations
This is not a position being advanced by anything I said at all. My point, since you seem to have missed it, is that economics isn't easy to understand. We can't pretend that there are no unintended consequences to regulation. One of the very few times in history when we, collectively, were smart enough to realize a massive mistake created by a lack of foresight and correct it was Prohibition. It was repealed only after some pretty significant damage was done to our society and a lot of crime and violence were created.
Why is it that we allow ourselves to become so blind to the downsides of our ideological positions, especially when our positions are based upon a desire to do good? Good intentions can have VERY bad outcomes, and most regulation comes from good intentions. Unfortunately, the beast which is our massive federal government rarely admits error, incompetence, or defeat to better judgement. It would rather blunder on, growing more powerful and more damaging to everything around it. Is this what we really want?
All of this talk of "level[ing] the playing field" is enablement, nothing more. It's the moral equivalent of a wife allowing a husband to continue to beat her because at least he works to support the household and the beatings just aren't that severe in retrospect. I, for one, reject political Stockholm syndrome.
Remember that everything you advocate under the leadership of your favorite political party will be meddled with, perverted, twisted or destroyed by your political foes when they come into power. One must think pragmatically and think ahead and not with their emotions. We have to, sooner or later, admit there are certain ends which no amount of regulation will ever accomplish, no amount of tax dollars will ever solve.
From my understanding, if you don't like direct regulation of a market, there are pretty much two options:
1) Modify the market pressures so it selects for what you want.
2) Wait until the market chooses the (nearly invariably single) winner, then deal with the headaches of the monopoly's effects, at which time you no longer have an effective market.
Personally, in the realm of security, I tend to prefer #1. Liability for breaches of claimed security would be an example of this. Note that in itself this requires government intervention to create a stilted market (for example, one biased against monopolies, or one that requires evidence to support claims made).
However, if you can come up with a third option, and point to some economic theory to back it up, I'd be happy to look it over.
Sorry... I forgot the obvious additional option. Have the government take over the market, so that it is no longer a functional market.
Thanks for the comments.
I think #2 is flawed simply because it doesn't account for what creates a monopoly to begin with. People do not willingly suffer monopolies that abuse their workers, their customers, the environment, etc. So where do they come from?
Regulation. Consider some of the biggest and most powerful monopolies in history. All have come about as a result of regulation of one form or another. The railroad monopolies were created by well-intentioned government decisions to allow access to a resource to only the first comer (or selecting a winner) - that resource was land use rights. Because individuals who owned property could easily have stood in the way of the progress of western expansion, government felt it had no other choice but to step in and use eminent domain law to take land from some and distribute it to others for the public good.
Because only one company controlled the tracks (and owned them, since they paid the money to make them) that company became very powerful. We had not yet envisioned an interstate highway system, so it created railway monopolies. Competition had not even been considered.
Look at all of the most regulated industries and you'll find that many of them are considered "utilities" which were originally created by eminent domain takings. People don't want their land taken from them to allow five different telephone companies to bury lines or erect telephone poles, and they definitely don't want to see five different companies show up to maintain or repair that equipment on a regular basis. The same is true for sewers, water, electrical providers, natural gas, and so on. If we can agree that these monopolies are created by government regulation, and deserve to be heavily regulated until competition arises (the market force which drives prices down and establishes an equilibrium - what we like to think of as "fair"), we can move on to the next type of "monopoly."
This is the "monopoly" created by consumer choice. Microsoft and Google are examples of these. There are other search providers - lots of them, in fact. The quality of the Google search engine is readily apparent to its users, and the users reward it by being Google's customers. The same is true of Microsoft (though regulation in the form of patent and copyright laws does have a role to play in establishing Microsoft's dominance as well). These kinds of "monopolies" are healthy, so long as government regulation doesn't enable them to maintain their market dominance. If others are free to come along and dethrone them, nobody has anything to fear from them. Google can't show up at our door and, under threat of force, make us use their search engine over Yahoo, for example. Microsoft doesn't make us buy their product when we could easily select an Apple or *nix system of whatever flavor we desire.
Have I sufficiently illustrated why it is that there really is no such thing as a monopoly (at least of the variety which actually do damage to consumers) except through government intervention in the market?
@Ward S. Denker
"Unfortunately, the beast which is our massive federal government rarely admits error, incompetence, or defeat to better judgement. It would rather blunder on, growing more powerful and more damaging to everything around it."
'The War on Drugs' in a nutshell.
(Sorry, haha, I'm still stuck hammering about 7 posts ago, everything looks like a nail)
If the "bad" outcome is low probablility and expensive to mitigate, then a potential business strategy is to not take the mitigating action, earn a bigger profit as long as nothing bad happens and go bankrupt (or let the government eat the loss) when the bad thing happens.
For current examples consider AIG and CDOs.
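The strategy described above is really an expected-value calculation. A minimal sketch, with entirely made-up illustrative figures (not AIG's actual numbers):

```python
# Expected annual cost of mitigating vs. skipping mitigation.
# All figures are hypothetical, for illustration only.

p_bad = 0.02            # annual probability of the catastrophic event
mitigation_cost = 5.0   # annual cost of mitigating, in $M
loss_if_bad = 200.0     # loss if the event happens, in $M
liability_cap = 20.0    # most the firm actually pays (bankruptcy / bailout), in $M

# A firm bearing the full loss compares these expected annual costs:
ev_mitigate = mitigation_cost          # 5.0: pay for mitigation every year
ev_skip_full = p_bad * loss_if_bad     # 4.0: gamble, bearing the full loss

# If bankruptcy or a government rescue caps the downside, the firm only
# ever pays the capped amount, so skipping looks even better:
ev_skip_capped = p_bad * min(loss_if_bad, liability_cap)  # 0.4

print(f"mitigate: {ev_mitigate}, skip (full loss): {ev_skip_full}, "
      f"skip (capped loss): {ev_skip_capped}")
```

With the downside capped, skipping mitigation dominates by a wide margin even in a case where full-loss accounting makes it a close call - which is exactly the moral-hazard problem the comment describes.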
"I think the reason that automobiles have security and safety features because people want them, not because the government mandates them."
First, why do you differentiate "people want them" from "government mandates them"? The two are not mutually exclusive.
It seems that a concept of representative governance is absent from your posts -- a government designed to represent the wants of people.
With that in mind the difference between security and compliance is really just that the latter has more than one party involved.
Ford or GM may think something is safe, but it can never say it is in compliance until another group agrees with its assessment of safety. That is why there is an obvious problem with "we'll build it if they come" argument. High-risk and low-profit safety features will not be delivered by industry due to want alone; there have to be groups that agree on a standard, and that is the basis of regulation.
Note that the auto industry is notorious for selling safety below what many people want. They opposed seat-belts in the 1960s, opposed airbags in the 1980s, are still opposed to 30mph crash tests, etc.
Airbags are actually an example of this "what we want" problem. Reagan set about reversing emission regulations and safety regulations in 1981. The move to stop airbags (and other safety regulations) was billed as a way to help car manufacturers increase their profits. Thus, airbags entered the market *despite* attempts during this time by industry to influence government and use deregulation to kill them off -- regulation is what delivered them and forced the innovation in security.
1981: Under the anti-regulatory Reagan administration, NHTSA announces one-year delay of passive-restraint rule, proposes that it be rescinded altogether. [Transportation Secy: Elizabeth Dole]
1983: The Supreme Court rules against the Reagan administration and directs NHTSA to review the case for air bags.
1984: Chrysler CEO, Iacocca lambastes air bags as example of "solution being worse than the problem."
1988: In a dramatic turnaround from CEO Iacocca's previous anti-bag position, Chrysler becomes the first U.S. automaker to install driver air bags as standard equipment in all its domestic-made cars.
1989: Ford announces driver air bags will be standard equipment in nine car lines.
Also read "Corporate power, American democracy, and the automobile industry"
by Stan Luger
The problem with regulatory requirements is usually that only the largest corporations can afford to implement them. This absolutely tilts the playing field toward big business. In some instances there are balancing factors, such as the accounting requirements that fall on publicly listed companies traded on the OTCBB vs. the companies traded on the New York Stock Exchange. The smaller companies don't have to comply with the accounting requirements for listing on the NYSE, but they can still raise capital by listing on the OTCBB. If all publicly traded companies faced the same accounting requirements as those listed on the NYSE, then many small publicly traded businesses would go out of business, or would have to rely solely on bank funding or independent investors to raise capital until their profits were sufficient to meet the requirements for listing on the more regulated stock exchanges. Although I'm certain that these listing requirements provide a significant amount of protection for would-be investors, they are only as effective as the integrity of those who enforce them; after all, Enron was traded on a major exchange.
Where does Gary McKinnon fit?
Regulations and laws will, probably, incarcerate him... do you feel safer?
Have the US Navy and NASA learnt lessons and improved their security?
Is Gary the criminal for getting in or are officers at the various organisations guilty, negligent, for allowing him in?
Can we think of Gary as a whistle blower - effectively showing us when US government security officers didn't do their job well enough?
If securing my customers' data is expensive, and the government doesn't require me to do so, I just won't do it. On the off chance I get sued over that, the worst case scenario is that I have to fold my corp and start a new one. Big whoop!
Your argument is that complex systems are hard to comprehend (for the common mind - whatever that is), so no action can have merit. Regulation will have unintended consequences, so it must be avoided. No regulation will also have unintended consequences - what do we do about those? Simply put, you don't trust gov't (made up of inadequate people) but do trust corporations (drawn from the same pool of inadequate people). One may argue that the minds in corporations are of a higher quality than gov't. That same argument could be made for academia over corporations, but I bet you would not let college professors make our decisions.
>Sorry, but this Utopian view of the world and the innate goodness of
>corporations doesn't correlate with the real world. The "regulatory
>nightmare" that you refer to helps to level the playing field so that
>[in theory at least] the rights of a single citizen has the same weight
>(and the same pull) as that of the multi-national corporate behemoth
>that has already assigned a dollar value to the "life of just one
>child" (and parent, young adult traveler, etc).
Actually, the consumer v. provider is NOT the playing field Government exists in large part to level.
It levels the playing field between providers of goods and services, establishing the minimum acceptable standards - so that one corporation that values life at what society in general considers an acceptable level isn't underbid by a corporation that chooses to value it less.
Those companies can then compete with a baseline of standards that they all must meet.
The balancing act is to assure that regulations are not so onerous as to prevent new competitors from entering the market place, nor lock us into a single technology or stale idea of how to accomplish a goal.
Good regulations, well enforced, are a key part of having a vibrant marketplace that creates better ways of doing things while simultaneously reducing costs.
Thanks for a link to this speech. It was great to hear your thoughts rather than just read them in an essay.
>but do trust corporations (drawn from the same pool of inadequate people).
Again, this is putting words in my mouth. I do not trust corporations any more than I trust government. What I do trust is that most people are, in general, good people and that they can make decisions which benefit all of us on their own.
A lot of people acting in their own interest tends to be a benefit to more than just them. Would Jonas Salk have developed the polio vaccine if there were no jobs in research which provided for his basic needs? Note that government expenditures are not required in order to provide jobs like those - pharmaceutical companies employ many people who search for treatments and cures for a lot of things. They're self-interested, since they seek a profit, but they benefit all of us by extending our lives.
If big pharma weren't so heavily regulated, it would not be so difficult for the little guy to get started and compete with the behemoths. Small operations that do big things, such as inventing cures for ailments, end up having to sell their inventions to the big guys because they simply cannot afford the lawyers or the bureaucratic machinery to navigate the complex legal system and get their drug to the people who need it. The big guys turn around and sell that drug for a much higher price than it might otherwise be offered at, partially because they have a huge bureaucratic machine to feed (lawyers and specialists) but also because they don't have anyone to compete with.
Big business, contrary to what people believe, loves regulation - so long as it insulates them from competition. Lack of competition can make a very wealthy company.
What is the libertarian solution to a lemons market?
>What is the libertarian solution to a lemons market?
Consumer reporting solutions show up spontaneously when a market isn't producing the quality people expect. Many people ask around to find out what other people are doing before they make decisions on big or important purchases.
Occasionally some people end up with a lemon on a home (because they didn't avail themselves of the copious housing inspection services) or car (because they didn't take it to a mechanic before completing the sale).
Consider how you make a decision about purchasing a car. What things are important to you? Do you go to the lot and simply buy whatever it is you see first, or do you spend a lot of time considering options you want, a price you have in mind, and the best quality you can afford at your price point? How about how you select a doctor? Do you call up the first doctor listed in the phone book, or do you ask your family (if they live near enough to use their practitioner) or friends who they prefer and why?
Anyway, the distinction to be made here is that libertarians, in general, support laws which protect people from two things: force and fraud. This definitely falls under the category of fraud, since it's clear that the intent is to deceive people on a massive scale in order to reap higher profits. Laws against force are generally the obvious ones: everyone wants police protection from muggers, murderers, rapists, etc.
I think the word 'utopian' applies here - underlying your arguments seems to be the assumption that if everyone would just do things the libertarian way, and act in their own informed and balanced self-interest, it would work out for all of us. While possibly true, it's incredibly unlikely outside of some utopian society.
As a group, physicists, economists, and political idealists tend to ignore friction and rounding errors. Just like in physics, ignoring some parts of the problem makes the problem appear solvable. You just hope the solution is a decent approximation of the real-world answer.
But in political problems, getting caught in the margin of error is a real problem.
Let's use big pharma again as an example. Merck got caught ignoring cardiac issues with Vioxx (in spite of regulations, not because of them). People died.
Now Merck will lose a bunch of lawsuits, and maybe they won't do it again. But that won't help the dead people. They're just a rounding error.
And maybe monopolies will eventually correct themselves - but Standard Oil of Ohio could have taken decades to fall apart. The friction is such that entire generations could live under the thumb of an abusive monopoly.
And to try to bring this back on topic - maybe some day companies will start to treat security and privacy issues as something that customers need to have corrected. But right now they get to treat these kinds of things as externalities that they get to deflect to consumers or just disclaim any involvement in ... and that's not likely to change without some sort of regulation.
>I think the word 'utopian' applies here
You might think so, but only because you've not been paying attention to what I've been saying.
>Let's use big pharma again as an example. Merck got caught ignoring cardiac issues with Vioxx (in spite of regulations, not because of them). People died.
This is an example of fraud - intentionally withholding information about the safety of a product for profit. It's the kind of thing many libertarians still support regulation of.
>And maybe monopolies will eventually correct themselves
Monopolies created by government deserve to be heavily regulated - it's the price an industry very well should pay when they receive preferential treatment from government - until competitive forces arise which will protect the interests of consumers. The other type of "monopoly" like Microsoft and Google should be left alone, so long as no laws are made to keep them in their position of market dominance. If they're a "monopoly" due to consumer choice, it's because they play the game well and succeed at delivering exactly what people want. There isn't any harm done since people can always choose the other guy when they disagree with the price or the product isn't competitive anymore.
You see, you're arguing against straw men - positions I, and many libertarians, do not hold.
As for getting back on topic, I still hold that regulation is not what the industry needs. In fact, the law currently upholds the click-through and printed EULAs, which is what these corporations are shielding themselves with. I think the price will be painful, but I do think that Bruce is right that security will eventually come routinely included in packages and we'll cease to care who delivers it or how, so long as they don't screw up.
I'm a software developer, among many things I've done, and I can tell you for a fact that all of the regulation in the world won't improve security all that much. It would be difficult to even draft the regulation and not actually create a vehicle for abuse. The problem is that, all too often, most computer software is built with many components and not all of them are designed on site. Black box controls/widgets/libraries/etc. are commonly in use. To regulate development of software would dramatically increase costs (security ain't easy) and the benefit would actually not be that substantive (i.e. look at how many years some very subtle DNS vulnerabilities have been kicking around, unnoticed).
It's all layer upon layer upon layer of software. If you're not a programmer, it's probably extremely difficult to picture just how much is going on behind the scenes just to type in this box and click "post." Every single point along the chain is a potential security vulnerability. It seems such a mundane action when you're used to using software, but just the action of making this post on Bruce's blog has probably passed through half a hundred different routines, ranging from the keyboard driver to the textbox control in the browser, to a routine probably in a re-usable library which handles URL encoding, through your computer's firewall software layer, through the Windows API and the TCP/IP protocol stack, out through code running on your gateway, through code running on maybe a dozen or so other routers and firewalls, all to be interpreted by the server, checked for security weaknesses, and stored in a database. After all of that, it passes through a bunch more code in order to be served up worldwide in a (presumably) secure fashion.
There's certainly more complexity to it than that, simply because many routines are built to be re-usable, so data passes through a bunch of them before it moves on its way. All of that is done, more or less, transparently. Most people don't even consider any of it at all - until it stops working.
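To make one link in that chain concrete: the "checked for security weaknesses and stored in a database" step is where a classic injection bug lives. A minimal sketch, using Python's sqlite3 (the function names here are hypothetical, chosen just for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")

def store_comment_unsafe(author, body):
    # Vulnerable: user-controlled text is spliced directly into the SQL,
    # so a stray quote (or a crafted payload) changes the statement itself.
    conn.execute(f"INSERT INTO comments VALUES ('{author}', '{body}')")

def store_comment_safe(author, body):
    # Parameterized query: the driver keeps the data separate from the SQL.
    conn.execute("INSERT INTO comments VALUES (?, ?)", (author, body))

store_comment_safe("reader", "Where's the squid?")

# The unsafe version can't even survive an ordinary apostrophe:
try:
    store_comment_unsafe("reader", "Where's the squid?")
except sqlite3.OperationalError:
    print("unsafe insert failed on an apostrophe")
```

The point isn't this one bug; it's that every one of the dozens of layers described above has its own equivalent of this quoting rule, and any regulation mandating secure software would somehow have to get all of them right.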
Some regulation would probably tend to remove the low-hanging fruit by making certain persons in authority accountable for incompetence or irresponsible decisions, such as non-action when needed. The creation of some minimum standards would certainly balance the needs of individuals versus industry.
You do paint a vivid picture of why the economic basis for software security and possible regulation is suspect.
Perhaps economically a better approach would be to fix systems so that they are not subject to the effects of software vulnerabilities, or as Ranum would say, correct the inherent design flaw which is the underlying root cause of our computer insecurity today.
>You might think so, but only because you've not been
>paying attention to what I've been saying.
You're a good enough writer - you could have found a better way to argue your point here.
Anyway, a couple of thoughts to leave with:
- What would this system that purports to protect against fraud in the big pharma world look like? My guess is it would be similar to, and as expensive as, the one we have today. E.g., not performing a double-blind placebo-controlled trial would be considered fraudulent and/or negligent, even if the methodology isn't mandated. Your goal of letting the little guy compete on that field is unlikely to be reached ...
- I've actually developed and implemented fairly secure software systems. It can be done when the incentives are there (in our case, it was HIPAA). I think that's basically Bruce's point - not that technologies should be mandated, but that externalities and incentives should be set up to encourage secure systems - and regulations can get you there.
>You're a good enough writer - you could have found a better way to argue your point here.
Thank you for the compliment. I've learned that it's best to not waste words on ideologues who simply won't pay attention to what has already been said. It's a Sisyphean effort - any progress made ends up at the bottom of the hill again anyway. It's just too damned hard to argue against deeply-ingrained prejudices.
>What would this system that purports to protect against fraud in the big pharma world look like?
I think this is all a matter of goals. Let's say that our goal is to reduce fraud and still allow the market to be agile enough that it can deliver drugs without the mountain of costs added by bureaucratic process. Why can't a system like the FDA be well funded to hire medical professionals, chemists, biologists, etc. to investigate the claims made by drug manufacturers, and only have the power to step in when there's considerable evidence that a drug is harming more people than it's helping? If its role is mostly advisory and it is relatively toothless when it comes to enforcement, it won't be as susceptible to the kind of political game-playing it currently experiences.
With such a re-focus, it would be more difficult for companies to use the FDA to harm their competition through traditional corruption mechanisms (bribery, inside men, etc.) since the FDA wouldn't have the power to do anything about it. That power should probably be reserved by Congress anyway - and their votes for or against legislation are public. Take the CDC as an example of a relatively toothless organization, as far as enforcement is concerned (it takes a lot for them to be able to wield their power to call for quarantines), but they're definitely an excellent advisory body.
>I've actually developed and implemented fairly secure software systems. It can be done when the incentives are there (in our case, it was HIPAA)
I, too, have had experience developing software under the nightmare of regulation that is HIPAA. One organization I worked for pretty much only existed because of that legislation, and that makes it a pretty clear drain on the system. If legislation actually creates new businesses just to deal with the complexity added by the legislation, something was probably done wrong.
What guarantees can you make that no security flaws exist within the systems you developed? I've developed systems for the criminal justice system, and I can't say that I feel that the software itself is 100% free of security vulnerabilities, since the systems I developed relied upon other systems which I never possessed the source code for. I can make no claims that there aren't subtle vulnerabilities in the code I wrote which can't be exploited by a determined attacker, which is why a big part of the security provided to those systems was access control.
Don't get me wrong, I do like the concept of using externalities to our advantage and I do agree that regulation could be used in this fashion. That takes some extremely smart people to accomplish, and I'm just not convinced that the people we elect to public office are up to that challenge (we don't tend to elect literal geniuses because society prefers that such people be put to use propelling our science and technology sectors - for obvious reasons).
@ Ward S. Denker,
"... and only have the power to step in when there's considerable evidence that a drug is harming more people than it's helping?"
I understand your point that there needs to be a test, but certainly not one that is as arbitrary as "harming more than it helps".
There are two major issues with this,
If you had a single test, you would have to apply it equally to a run-of-the-mill painkilling drug; the number of "acceptable harmings" would be immense.
Likewise there are other drugs where the opposite would be true, as in a "poison" that will eventually kill you but prolong your life or improve your quality of life when you have a fatal condition (think certain cancer treatments, for instance).
So the first issue is that there need to be different criteria for different types of drugs. Which unfortunately gives rise to the second problem:
Who sets and maintains the tests and who reviews those choices.
It is a case of "who watches the watchers," and as has been seen in the case of the "smoking related industries," all sorts of tricks can be used to get quite dangerous chemicals past the FDA.
Likewise it is no secret that a number of substances that would be useful are deliberately restricted for "political reasons".
Unfortunately, all the systems humans appear to need are hierarchical and suffer from this problem.
Arguing about how to make a clearly broken process possibly slightly better is only productive in the short term.
We need to look at long-term solutions that both work and do not involve the potential for lobbying / favours / politics / corruption.
I would be one of the first to put my hand up to not knowing how to do it.
Likewise I would also be deeply suspicious of anyone who does without properly tested proof...
It may not be possible to come up with a realistic solution but just tickling the margins of an already broken process just strikes me as "spitting on a burning town"...
>Likewise there are other drugs where the opposite would be true, as in a "poison" that will eventually kill you but prolong your life or improve your quality of life when you have a fatal condition (think certain cancer treatments, for instance).
That's actually the truth for just about all drugs. An awful lot of people aren't aware that most over-the-counter pain medications can do very serious damage to your liver (that's why they list such long periods of time between doses and very explicit dose sizes - usually two tablets).
Sometimes people take a bottle of Tylenol to try to commit suicide, then wake up the next morning wondering why it didn't work.
A couple of weeks or so later they wake up orange as a pumpkin (severe jaundice) and by that time it's too late, even though most of the time they've gotten over their suicidal urge.
I guess the measuring stick of harm vs. help is mostly between a doctor and a patient. There are drugs which would shorten one's life, but which would improve the quality of that life immeasurably. I think that people should be able to discuss risks like this with their doctor and still take drugs which harm them, so long as they understand what it is they're doing and their doctor agrees with the patient.
"Sometimes people take a bottle of Tylenol to try and commit suicide, wake up the next morning wondering why it didn't work."
It's the acetaminophen (paracetamol) that does the liver damage, and it gets put into all sorts of over the counter cold and flu medications.
Unfortunately, a lot of people die from it not because of suicidal tendencies but due to accidental overdose.
They read the bottle and take the maximum recommended dose, then they take a couple of sachets of lemon-flavoured cough relief, etc. Three or four days later they go back to work feeling better and, as you say, a week or so later the liver failure comes to their attention.
I was once told by somebody who worked for a drug company that the annoying thing is there is an enzyme that can be added to paracetamol medications to help protect the liver; however, it is not allowed because it is "not of pharmacological benefit"...
Kind of says it all really...