Security in a World of Physically Capable Computers

It's no secret that computers are insecure. Stories like the recent Facebook hack, the Equifax hack and the hacking of government agencies are remarkable for how unremarkable they really are. They might make headlines for a few days, but they're just the newsworthy tip of a very large iceberg.

The risks are about to get worse, because computers are being embedded into physical devices and will affect lives, not just our data. Security is not a problem the market will solve. The government needs to step in and regulate this increasingly dangerous space.

The primary reason computers are insecure is that most buyers aren't willing to pay -- in money, features, or time to market -- for security to be built into the products and services they want. As a result, we are stuck with hackable internet protocols, computers that are riddled with vulnerabilities and networks that are easily penetrated.

We have accepted this tenuous situation because, for a very long time, computer security has mostly been about data. Banking data stored by financial institutions might be important, but nobody dies when it's stolen. Facebook account data might be important, but again, nobody dies when it's stolen. Regardless of how bad these hacks are, it has historically been cheaper to accept the results than to fix the problems. But the nature of how we use computers is changing, and that comes with greater security risks.

Many of today's new computers are not just screens that we stare at, but objects in our world with which we interact. A refrigerator is now a computer that keeps things cold; a car is now a computer with four wheels and an engine. These computers sense us and our environment, and they affect us and our environment. They talk to each other over networks, they are autonomous, and they have physical agency. They drive our cars, pilot our planes, and run our power plants. They control traffic, administer drugs into our bodies, and dispatch emergency services. These connected computers and the network that connects them -- collectively known as "the internet of things" -- affect the world in a direct physical manner.

We've already seen hacks against robot vacuum cleaners, ransomware that shut down hospitals and denied care to patients, and malware that shut down cars and power plants. These attacks will become more common, and more catastrophic. Computers fail differently than most other machines: It's not just that they can be attacked remotely -- they can be attacked all at once. It's impossible to take an old refrigerator and infect it with a virus or recruit it into a denial-of-service botnet, and a car without an internet connection simply can't be hacked remotely. But that computer with four wheels and an engine? It -- along with all other cars of the same make and model -- can be made to run off the road, all at the same time.

As the threats increase, our longstanding assumptions about security no longer work. The practice of patching a security vulnerability is a good example of this. Traditionally, we respond to the never-ending stream of computer vulnerabilities by regularly patching our systems, applying updates that fix the insecurities. This fails in low-cost devices, whose manufacturers don't have security teams to write the patches: if you want to update your DVR or webcam for security reasons, you have to throw your old one away and buy a new one. Patching also fails in more expensive devices, and can be quite dangerous. Do we want to allow vulnerable automobiles on the streets and highways during the weeks before a new security patch is written, tested, and distributed?

Another failing assumption is the security of our supply chains. We've started to see political battles about government-placed vulnerabilities in computers and software from Russia and China. But supply chain security is about more than where the suspect company is located: we need to be concerned about where the chips are made, where the software is written, who the programmers are, and everything else.

Last week, Bloomberg reported that China inserted eavesdropping chips into hardware made for American companies like Amazon and Apple. The tech companies all denied the accuracy of this report, which precisely illustrates the problem. Everyone involved in the production of a computer must be trusted, because any one of them can subvert the security. As everything becomes a computer and those computers become embedded in national-security applications, supply-chain corruption will be impossible to ignore.

These are problems that the market will not fix. Buyers can't differentiate between secure and insecure products, so sellers prefer to spend their money on features that buyers can see. The complexity of the internet and of our supply chains make it difficult to trace a particular vulnerability to a corresponding harm. The courts have traditionally not held software manufacturers liable for vulnerabilities. And, for most companies, it has generally been good business to skimp on security, rather than sell a product that costs more, does less, and is on the market a year later.

The solution is complicated, and it's one I devoted my latest book to answering. There are technological challenges, but they're not insurmountable -- the policy issues are far more difficult. We must engage with the future of internet security as a policy issue. Doing so requires a multifaceted approach, one that requires government involvement at every step.

First, we need standards to ensure that unsafe products don't harm others. We need to accept that the internet is global and regulations are local, and design accordingly. These standards will include some prescriptive rules for minimal acceptable security. California just enacted an Internet of Things security law that prohibits default passwords. This is just one of many security holes that need to be closed, but it's a good start.

We also need our standards to be flexible and easy to adapt to the needs of various companies, organizations, and industries. The National Institute of Standards and Technology's Cybersecurity Framework is an excellent example of this, because its recommendations can be tailored to suit the individual needs and risks of organizations. The Cybersecurity Framework -- which contains guidance on how to identify, prevent, respond to, and recover from security risks -- is voluntary at this point, which means nobody follows it. Making it mandatory for critical industries would be a great first step. An appropriate next step would be to implement more specific standards for industries like automobiles, medical devices, consumer goods, and critical infrastructure.

Second, we need regulatory agencies to penalize companies with bad security, and a robust liability regime. The Federal Trade Commission is starting to do this, but it can do much more. It needs to make the cost of insecurity greater than the cost of security, which means that fines have to be substantial. The European Union is leading the way in this regard: it has passed a comprehensive privacy law and is now turning to security and safety. The United States can and should do the same.

We need to ensure that companies are held accountable for their products and services, and that those affected by insecurity can recover damages. Traditionally, United States courts have declined to enforce liabilities for software vulnerabilities, and those affected by data breaches have been unable to prove specific harm. Here, we need statutory damages -- harms spelled out in the law that don't require any further proof.

Finally, we need to make it an overarching policy that security takes precedence over everything else. The internet is used globally, by everyone, and any improvements we make to security will necessarily help those we might prefer remain insecure: criminals, terrorists, rival governments. Here, we have no choice. The security we gain from making our computers less vulnerable far outweighs any security we might gain from leaving insecurities that we can exploit.

Regulation is inevitable. Our choice is no longer between government regulation and no government regulation, but between smart government regulation and ill-advised government regulation. Government regulation is not something to fear. Regulation doesn't stifle innovation, and I suspect that well-written regulation will spur innovation by creating a market for security technologies.

No industry has significantly improved the security or safety of its products without the government stepping in to help. Cars, airplanes, pharmaceuticals, consumer goods, food, medical devices, workplaces, restaurants, and, most recently, financial products -- all needed government regulation in order to become safe and secure.

Getting internet safety and security right will depend on people: people who are willing to take the time and expense to do the right things; people who are determined to put the best possible law and policy into place. The internet is constantly growing and evolving; we still have time for our security to adapt, but we need to act quickly, before the next disaster strikes. It's time for the government to jump in and help. Not tomorrow, not next week, not next year, not when the next big technology company or government agency is hacked, but now.

This essay previously appeared in the New York Times. It's basically a summary of what I talk about in my new book.

Posted on October 12, 2018 at 8:14 AM • 32 Comments


Clive Robinson • October 12, 2018 9:12 AM

@ Bruce,

Security is not a problem the market will solve. The government needs to step in and regulate this increasingly dangerous space.

I'm of the view that the market will solve by far the majority of the problems it is both "required" and "allowed" to solve[1].

Thus the question is "How best to set the requirements?" before the legislators and lobbyists get involved...

Further, there is the question of "honestly meeting" not just the tests but the aim of the requirement[2], and of how we ensure the market remains honest.

[1] We have seen this with safety standards: an agency or entity sets a standard test and the industry then finds an honest way to meet it[2], often reducing the cost of manufacture in the process. It also often stops the normal unregulated market's downward spiral, and encourages innovation in the market to gain a competitive edge.

[2] There are a couple of well-known examples of what could be considered "cheating to meet the test". The first is the diesel engine emissions test, where Volkswagen, amongst others, made changes to engine management software. The second, less well known but of the same level of (dis)honesty, is the use of spread spectrum / whitening techniques to smear the energy emissions from computer motherboards and the like to get under the level mask in the EMC tests. One has been deemed illegal practice, the other acceptable practice.

Brandon • October 12, 2018 9:31 AM

Here's the problem: everything the politicians touch, they bring their corruption with them. Or, if one party manages to come up with a decent framework, the next time they get voted out it'll get dismantled or sabotaged. (and no: planning to keep your favourite party in power forever is never a good plan, however tempting it is)

george • October 12, 2018 9:41 AM

nice write up. please do something about it. you have the name recognition and respect of peers. please do something about the iot security risks.

JG4 • October 12, 2018 9:52 AM

---------- Forwarded message ---------
From: JG4
Date: Fri, Nov 18, 2011 at 3:47 PM
Subject: Doug Casey on terror, an excellent treatise
To: Bruce Schneier

Well written and insightful. Not yet for public distribution. Please treat this leaked advance copy with great discretion.

I'd like to coin the term "projected intent" for systems that do dirty work remotely. They are going to be a major criminal, regulatory and legislative problem.

The leading edge is the FAA's attempt to ban model airplanes that fly over 60 miles per hour. Any idiot can go to x* and buy y* that will power a modest UAV to nearly 200 miles per hour.

*redacted, and 200 mph probably is low

Impossibly Stupid • October 12, 2018 10:18 AM

security takes precedence over everything else

While I agree with most of your points, Bruce, this takes things to a dangerous extreme. This is how freedoms die. As important as security is, I should not have to repeat Franklin's liberty/safety quote to point out that what you secure should be the guiding principle for the exercise.

Our choice is no longer between government regulation and no government regulation

That's a false dichotomy. Regardless of who is involved, we need smart solutions. As I look to government currently, especially in the US, I'm just not seeing a lot of those smarts. Consequently, I have zero expectation that your suggestions would do anything other than fall on deaf ears.

Until that changes, the reality is that civilization will remain in decline. We'll keep getting companies supplying our modern "bread and circuses" until one day our roads filled with "ultra-safe" self-driving cars all go tits up (either because of hackers or some other inherent software bug) and kill 100 million people in a single instant. Click here to watch Rome burn.

Willard • October 12, 2018 10:30 AM

Bruce, how about a Federal Reserve type organization that is politically independent that develops security policy based on merit rather than winning the next election? Thanks.

wiredog • October 12, 2018 11:01 AM

The insurance industry can certainly have a part to play. When I worked in industrial automation, building actual Things, those things had to be UL certified before our customers would take possession. Absent that certification they couldn't get insurance. Since the insurance industry doesn't want to pay out, it has an incentive to be sure the machines are actually safe to operate.

I suspect insurance will do more to drive the acceptance of self-driving cars than the legislators will. Once self-driving cars are cheaper to insure than manually operated ones, the manual cars will go away.

VRK • October 12, 2018 12:21 PM

**smoke rolling** I'm almost speechless about this.

nobody dies when [the data is] stolen

But rather than stew and shoot my mouth off until dead, I decided to invest my very divergent programming capabilities (learnt as a heavy truck mechanic :) and focus a bit of time on "skript kiddy" soft-war-fare by tapping into the groundwork laid in "computational linguistics":

Namely, to shrink and obfuscate input text by replacing multi-word n-grams with single UTF-8 glyphs. It's a nice ratio of common phrases and words to the glyphs available. Using "web workers", it even runs on the clunker here at the library. (I CAN smell dust burning.)

Voila: smaller; lower bandwidth, harder to crack when shipped by https, safer on the midpoint servers, continuously morphing.
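For the curious, the substitution scheme VRK describes can be sketched in a few lines. This is my own toy illustration, not VRK's code: the phrase list, the use of Private Use Area codepoints, and the function names are all assumptions made for the example.

```python
# Toy sketch of the n-gram -> glyph substitution idea.
# NGRAMS is a stand-in for a real phrase list (e.g. drawn from an n-gram corpus).
NGRAMS = ["as a result", "in order to", "more or less"]

# Map each phrase to a single codepoint in the Unicode Private Use Area (U+E000 up).
TABLE = {phrase: chr(0xE000 + i) for i, phrase in enumerate(NGRAMS)}
REVERSE = {glyph: phrase for phrase, glyph in TABLE.items()}

def shrink(text):
    # Longest phrases first, so overlapping n-grams don't clobber each other.
    for phrase in sorted(TABLE, key=len, reverse=True):
        text = text.replace(phrase, TABLE[phrase])
    return text

def expand(text):
    for glyph, phrase in REVERSE.items():
        text = text.replace(glyph, phrase)
    return text
```

A round trip (`expand(shrink(text))`) returns the original text, while the shrunk form is shorter in characters and harder to eyeball; a real implementation would want proper tokenization rather than naive `str.replace`, and a much larger phrase table.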

However, since I can NEITHER afford, nor open, the mammoth NLC "Web 1T 5-gram" lists, and since the "internet track" corpus they used is grotesquely "web" it's no surprise that I hereby beg our very talented SOS moderator to publish a disc on Amazon containing a zip of this website, for "crack pots" who speak the local vernacular, and have 15 bucks, "for personal consumption only", blah blah blah. :p Thanks!

"Shall we not fell this thief to earth?"

Theo • October 12, 2018 12:23 PM


Most of the dangers are due to networking, not to the computers themselves. Confusing the two means the obvious and easiest solution is overlooked, and effort is wasted.

I spent much of 1999 writing fifty-page reports saying over and over, in many different ways, "It doesn't have an f***ing calendar". If Bruce has his way, in 2020 I'll have to write 500-page reports saying the same devices do not have network connections.

Wes Reynolds • October 12, 2018 12:36 PM

Bruce, what about Underwriters Labs? History seems to show that UL has successfully mitigated safety issues for a century without government interference. I don't know about you, but I wouldn't even buy an extension cord without a UL endorsement (no logo, no purchase).

I'd like to see a UL for the cybersecurity industry. Maybe the market can't solve these issues, but it can sure help...and in the security world there's no such thing as perfect security. Let's mitigate what threats we can with something that we can get going right away instead of waiting for the wheels of government to turn.

Mike Pieru • October 12, 2018 12:59 PM

> Security is not a problem the market will solve. The
> government needs to step in and regulate this
> increasingly dangerous space.

If all computers are controlled by the government, who is offering security against the government itself?

What if I decide to install the security-pack from an enemy-government. Am I secure then? What if a citizen of the enemy country installs the security-pack from my government. Is he secured?

Today the greatest threat comes from the government. And before we can decide which government is "the safe government", they need to wage a (WW III) war to make the decision. -- Maybe I anyhow prefer an insecure computer!

vas pup • October 12, 2018 3:34 PM

@Bruce"Our choice is no longer between government regulation and no government regulation, but between smart government regulation and ill-advised government regulation."
My best guess is that any smart government regulation should be developed by a team of professionals in the subject being regulated, together with legal experts who know how to map technical requirements into legal requirements.
The first step should be to answer the question: "Who is the target of the regulation?" The answer will require a corresponding level of understanding of the regulation. It is like a business analyst sitting between the user and the IT/programmer, but working in the reverse direction.
Regarding the objections in this respected blog which derive from the wrong assumption that government involvement is always bad: they come from the experience of observing how dysfunctional governments work. Look around the globe at how functional governments work, e.g. financial regulation in Canada -- no crisis in the banking sector for decades. Or Singapore. You know what I am talking about. Vague regulations are even more dangerous than no regulation at all, because you are in a legal gambling situation. So: clarity, consistency, uniformity. Amen.

CPC • October 12, 2018 4:04 PM

Hey Bruce,

Great post, as usual. But let me disagree with a small bit:

> The primary reason computers are insecure is that most buyers aren't willing to pay -- in money, features, or time to market -- for security to be built into the products and services they want. As a result, we are stuck with hackable internet protocols, computers that are riddled with vulnerabilities and networks that are easily penetrated.

There's another factor: it's hard for consumers to judge a product's security. Even if you're willing to pay more, pricier products aren't always safer.

I'll take a very concrete example: wifi thermostats. Say I want something that doesn't talk to a cloud (I want it only on my LAN; I can VPN in from outside if needed). My only option is, more or less, a single product (Nest, Honeywell, etc. all require going through a remote command server).

Now, that product is horribly insecure. Horribly as in "any website you visit while on the wifi can take complete control over your heating and AC". This has been known since 2013, and the company has done nothing (this is a trivial, well-known bug; it would likely not have cost more or taken more time to design the product properly).
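CPC's point that a proper design "would likely not have cost more" is easy to believe: browsers label cross-site requests with an Origin header, so the device could reject drive-by commands with a one-line check. A minimal sketch under stated assumptions -- the hostname `thermostat.local` and the function name are my inventions, not the actual product's code:

```python
from typing import Optional

# Hypothetical origins for the device's own web UI ("thermostat.local" is a
# made-up example hostname, not the real product's).
ALLOWED_ORIGINS = {"http://thermostat.local", "https://thermostat.local"}

def allow_request(origin: Optional[str]) -> bool:
    """Reject requests forged by third-party web pages.

    Browsers attach the requesting page's origin to cross-site requests,
    so a command triggered by a malicious website arrives labeled as such.
    """
    if origin is None:
        return True  # non-browser client (curl, the vendor's app): no Origin header
    return origin in ALLOWED_ORIGINS
```

This alone doesn't make the device secure (it still wants real authentication), but it blocks the "any website you visit can control your heating" class of attack.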

The problem is, how do regular consumers find out that it's so bad? You need significant technical chops to find the vulnerability and get a sense of what it means, and the product isn't much cheaper than other presumably more secure competitors.

In other words, even if customers are ready to pay, higher prices (today) don't correlate with security. It's just hard to determine what's safer, what's less safe, and what's recklessly unsafe.

Hopefully government regulation will help.

vas pup • October 12, 2018 4:09 PM

Related to the subject:
Sky battles: Fighting back against rogue drones:
"A series of sensors around the perimeter of the prison identify any incoming drones. Once alerted to an intruder, the system fires up multiple radio transmitters that emit a signal designed to overwhelm the drone's radio transmissions.
This interrupts the connection with the operator and stops the drone proceeding any further.

And as most drones are programmed to return to their last point of control if the signal is lost, it gives law enforcement a chance to track the drone and trace the operator.

But one problem with all these drone detection and neutralization systems is that they require a high level of technical competence on the part of the operator.
The answer, he says, is to train the anti-drone system using artificial intelligence (AI).

"The AI is machine learning, training the computer to learn rather like a human does," he says.

And hacking drones relates to the subject directly. Do you recall how the Iranians hacked a US drone and forced it to land, getting their hands on it for reverse engineering?

Martin • October 12, 2018 4:42 PM

The Underwriters Laboratories (UL) certification model is an interesting approach that might successfully be applied to this problem. It should be carefully considered, as it has been effective in ensuring that electrical devices with this certification can be purchased and used safely. UL does do on-site inspections at manufacturers' sites to ensure compliance with approved specifications.

This would keep government involvement to a minimum; and that's almost always desirable.

tz • October 12, 2018 4:43 PM

When you say "regulation" it normally conjures up visions of ninny-nanny government and crony capitalism.

But there are full on Libertarian approaches.

A "dumb" toaster is supposed to produce toast and not burn my house down because of defects.

Simply extend that to "smart" toasters, where the manufacturer is liable for damages. If IoT botnet vendors had to pay when their devices caused damage, the incentives would change: think of a reverse class action, so that a website that was shut down could then go and sue all the vendors of the offending devices for damages, at least after they were aware of the problem. I would give them a pass on a new zero-day, but after two weeks it should be patched.

Also, I would adjust the law given that Apple and Google and Samsung etc. claims they still own the device and are just renting or leasing it to you with their ToS and user agreements. Fine. Apple says it owns the evil device, then Apple, not I, is responsible for bad behavior. There is an old EULA decision where financial data as destroyed just before an IRS audit and the company was held liable because they technically retained ownership.

This would require more subtlety than the usual command-and-control "regulation". Just give rights to the damaged, remove arbitration clauses (sans a notarized wet signature like on mortgages), and a few other things, and it would no longer be in the economic interest of companies to sell DEFECTIVE products.

The economist Coase noted that when no one owns something it goes to waste, because everyone exploits it but no one conserves or maintains it -- the tragedy of the commons. The lack of liability is related (especially with limited liability corpsorations animated by Dr. Frankenstate). If no one is held liable, then anarchy reigns.

Clive Robinson • October 12, 2018 4:51 PM

@ CPC,

Hopefully government regulation will help.

You left the word "good" out before government.

And the "what is good" issue is the elephant in the room -- so large the room cannot be made with the materials we currently have...

Even security experts are like "the three blind men describing an elephant"; they just do not have the breadth to see all that is involved, and I very much doubt any one individual can, even with a lifetime of experience. It's one of those "the more you know, the more you realise there is still to know" problems.

Worse still, as people are now finding out, both the government and corporations regard the citizen as the enemy, with legislators being told what to legislate for from behind closed doors by unelected individuals in government entities and corporations...

It's not something that is going to end well unless we get ahead and stay ahead of their games.

Originally "Standards Bodies" were supposed to do this, but we now know they are as easy to manipulate as the legislators are, and often at much lower risk and cost.

Thus the only advantage to the citizen is standards bodies can revoke bad standards more easily than legislators can revoke bad laws.

And before people ask: no, I don't know how to fix the malign influence / finessing these behind-closed-doors entities perform. But unless we discuss it we never will.

Karl Lembke • October 12, 2018 5:26 PM

This is actually a plot point in The Dark Forest by Cixin Liu. A character wakes up after spending a couple of centuries in hibernation. Soon after he's revived, "accidents" start happening to him. These "accidents" are due to hardware misbehaving, and but for extreme good luck, any one of them would have killed him.

After the first two or three, he's told that he will get an automatic settlement payment, implying that liability is automatically assessed when a physically capable computer malfunctions. (This may also be the result of an interesting take on U.S. culture from afar.) After one or two more, investigators discover that the character is the target of a virus targeted to kill one specific person. Most of those viruses had been cleaned from the Internet of Things, but because that particular one had been inactive until its target reappeared, it was never detected and never removed.

TE901 • October 12, 2018 6:20 PM

Software companies invariably disclaim all liability for all consequences of the operation of "their" software. A car is more than "...a computer with wheels"; it is a potentially lethal device with a long history of class action lawsuits for safety defects.

Every tender document I have ever seen contains phrasing similar to
"...in accordance with current good trade practices and exercising all practicable care, diligence and professional skill."

It seems simple enough; the history of Hartford Boiler is instructive. With meaningful legal sanctions for gross carelessness, no underwriter equals no sale.

Tony H. • October 12, 2018 9:07 PM

The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. (As we shall see later, this very tractability has its own problems.)

Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.

Frederick P. Brooks, Jr.
The Mythical Man Month, 1975

Douglas L Coulter • October 13, 2018 12:30 PM

I like the "hoist on their own petard" approach suggested by tz here.
Rights two wrongs.
What is this BS about how I don't own what I bought?
OK, so the manufacturer or sales outfit still owns it.
So they own the liability - you break my stuff, you have to buy it again for me.

Now I'm not so silly as to believe that this would go on. In the extremely unlikely event that such an idea reaches law despite rampant corruption, it seems likely that they'd just change the EULA -- and at least now you'd own your stuff.

Baby step, but it's a step. If I own it, I can fix it, I can demand all kinds of ownership related rights...

I suspect some or all of the big boys would find some reason to build up a spin campaign to rival the telecoms' claims that they compete and that X would be bad for the consumer, but you know, we consumers are starting to see through this junk.
People need to get off this crazy partisan whole-platform-package worship and vote on real issues again - a few tossed out would change some things about listening to constituents vs lobbyists. Or is that hopelessly naive?

MikeA • October 13, 2018 12:51 PM

1) I have seen what appeared to be poor-quality counterfeit UL and CE tags on stuff offered for sale in the U.S. Having a requirement and an organization is not enough.

2) Last night's local news included a story about would-be skimmer-installers, who had in their possession seals presumably to replace the ones they broke opening the victim gas pumps. Not stated whether these were also counterfeit, or somehow diverted from Weights and Measures.

3) CA can pass all the consumer-friendly laws it wants. With all three branches of the Federal government controlled by one party, they are hell-bent on squashing the existing CA net neutrality and emissions standards already. I'm sure this will be added to the list. I'm also sure ALEC is on it as we speak. (Those disturbed by my political tone can rest assured I am also no fan of many things favored by the other party. It's just that they can't do anything right now. Hell or high water: choose one.)

MarkH • October 13, 2018 1:10 PM

a propos of Underwriters Laboratories, as a model

In the 1960s and 70s, there was a notorious spate of sometimes deadly house fires that started in television sets. These failures were a predictable result of marginal designs.

As far as I am aware, all of those dangerous models had UL certification. Ouch!

UL does safety-only certification for a wide variety of consumer products, without regard to whether they function properly or reliably. They're allowed to break, but not to hurt people.

However, for certain categories of products whose purpose is to ensure safety, UL has a much more intensive process intended to ensure that they function as intended. The example of which I am aware is fire alarm systems (my information dates back a decade or more, so things have quite possibly evolved in different directions).

For alarm systems, the UL process was paralyzingly strict in some areas (for example, control of hardware configuration), while almost completely neglecting other areas responsible for almost all system failures (for example, control of any/all software processes)...

That being said, it seems to me that UL generally does a fine job, and has helped to save countless lives.

I also expect that they learned from their mistakes (like the color TV fire scandal) and grew all the better for it.

Martin • October 13, 2018 1:30 PM

If the government gets involved, and perhaps it should, its involvement should be patterned after the FAA or FDA and NOT the FCC, IRS, EPA and most other government organizations.

Little Lamb • October 13, 2018 2:22 PM

Security is not a problem the market will solve.

Security breaches are expensive. The market will solve them, because the market wants to make money and keep money, which is impossible without adequate security. People with money do not like thieves.

The government needs to step in and regulate this increasingly dangerous space.

The government that continues to mandate proprietary chip-level "back doors" for "law enforcement" and "copyright protection" in our personal computers and mobile devices has no interest whatsoever in allowing us as "consumers" to make the choices that constitute a free market rather than a prison commissary.

The money just isn't there for all the extra regulation, enforcement, judgment, and punishment for violations of some arcane petty rule or another, while serious white collar crime continues to be tolerated, ignored, and given a wink and a nod by the good old boys.

Impossibly Stupid • October 14, 2018 12:23 PM

@Little Lamb

Security breaches are expensive.

To whom? The CEO of Uber doesn't go to jail when their self-driving car kills people, and neither do the engineers who built it or any of the other employees involved. The common, perverse "market solution" practice is to externalize expenses whenever possible. That means there is no incentive for industries to improve security, because everyone simply passes the "cost of doing business" for breaches on to you.

People with money do not like thieves.

Bull. They very often are thieves; you don't amass a fortune in the billions by treating people fairly and being civic minded. They are the influence that corrupts governments to the point that citizens are no longer being properly represented.

The money just isn't there

Nonsense. There's plenty of money out there to make the world a better place. It's just that people in power, both in the government sector and in the commercial space, don't really care about doing that. You don't need a rising tide to lift all ships when you can purchase an aircraft carrier or your own yacht.

Winter • October 14, 2018 12:34 PM

"Security breaches are expensive. The market will solve them, because the market wants to make money and keep money, which is impossible without adequate security. "

History shows us this has never been true. The market had no problem with thousands of deaths yearly, be it at work, in cars, from bad wiring, from pollution, or from toxic drugs and food.

Every regulation was installed to solve a problem where companies treated maiming and killing people as an acceptable cost of making a profit.

In the end there proved to be enough money to protect consumers and workers from danger.

Keith Bellairs • October 15, 2018 3:44 PM

Consider highways and automobiles. The market cannot solve the safety problem. We accept construction standards for roads, speed limits, drivers licenses, safety tests on vehicles before they are permitted on the road. And so on. Failure to have regulations like this would endanger the life of every user of the public roads.

Eventually we will need a similar regime for the internet. Unlicensed computers will not be allowed. Intrusive activities will be illegal.

And the equivalent of the highway patrol will be required. Government controlled servers will scour the net in order to seek and kill servers and processes that try to go outside the rules. The rules cannot be optional.

The hard part is the lack of a model for highways that span international borders. Can a US internet cop shut down a Russian black site? If the law of the sea were not so 18th century it might provide a model. The obvious choice is the UN but that raises fears of blue helmets and black helicopters.

The whole idea of licensing use of the internet is so stunningly anti-libertarian that it is unthinkable. But if the alternative is living in fear of our refrigerators ...


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.