Regulation of the Internet of Things

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, were as much a failure of market and policy as they were of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the "Internet of Things" and increased regulation of what are now critical and life-threatening technologies. It's no longer a question of if, it's a question of when.

First, the facts. Those websites went down because their domain name provider — a company named Dyn — was forced offline. We don't know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers — possibly millions — of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.
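The dependency is easy to demonstrate: if a site's authoritative DNS is unreachable, name resolution fails before a browser ever opens a connection. A minimal sketch in Python (the hostnames are placeholders):

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if `hostname` resolves to at least one address."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        # This is what visitors to Dyn's customers saw: resolution failure,
        # indistinguishable from the site itself being gone.
        return False
```

During the attack, sites behind Dyn failed exactly this early, at the lookup step, even though their own web servers were healthy.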

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you've never heard of to consumers who don't care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they're things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don't have the security expertise we've come to expect from the major computer and smartphone manufacturers, simply because the market won't stand for the additional costs that would require. These devices don't get security updates like our more expensive computers, and many don't even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don't care. They wanted a webcam — or thermostat, or refrigerator — with nice features at a good price. Even after they were recruited into this botnet, they still work fine — you can't even tell they were used in the attack. The sellers of those devices don't care: They've already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It's a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don't care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It's true that this is a domestic solution to an international problem and that there's no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They've demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They've hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart light bulbs could be used to start a chain reaction, resulting in them all being controlled by the attackers — that's every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we've been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it's unlikely to be well-considered and thoughtful action. Our choice isn't between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex — and they're coming. We can't afford to ignore these issues until it's too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn't matter — it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won't get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.

Posted on November 10, 2016 at 6:06 AM • 62 Comments


Conan • November 10, 2016 6:25 AM

How to have regulation / liability without giving vendors an excuse to lock out owners from modifying their devices under the pretence of "security"? (OpenWRT, CyanogenMod, GNU/Linux, etc)

Cpragman • November 10, 2016 6:27 AM

Either IEEE or ICANN could do this. Seems like it fits under their existing purview without any real stretches.

Vince • November 10, 2016 6:56 AM

@Conan Probably fewer than 5% of owners modify their devices; the other 95% just use them as intended. We can find solutions on many sides, from vendors to telecommunication operators. The main focus should be on the majority of people and solutions that cover the majority of cases.

Conan • November 10, 2016 7:09 AM


I'd argue that's short-sighted, and that long-term security depends on the free/libre movement, plus the other societal advantages of

Sasha • November 10, 2016 7:30 AM

FYI, the em-dashes from the WaPo article are missing here, making certain sentences look odd.

I wonder which government organization would be the most appropriate one to be tasked with such a responsibility. The FTC perhaps?

Chicago IT Consulting • November 10, 2016 7:45 AM

We've been concerned about this for some time. The mass use of zombie 'things' could prove to be a substantial weapon. I still find it disappointing, though, that there weren't any substantial traffic safeguards in place.

Alan • November 10, 2016 8:34 AM

If the people/organizations harmed by the insecure devices (such as the DDoS victims) could sue the device manufacturers for damages, that might lead to the problems getting solved.

Miksa • November 10, 2016 8:38 AM


That is certainly a concern, but the regulation doesn't have to have that effect, and in practice it probably shouldn't. Third-party firmware can provide a competitive advantage with some customers, and not all manufacturers would want to relinquish that.

At minimum we need to regulate two things: the devices need to be updated easily/automatically, and the manufacturer will need to provide security patches for an extended time period. Third-party firmware would in no way be against this kind of regulation; it might even receive a boost from it. Why would a manufacturer provide updates for their device when they could just build it on top of OpenWRT with slight rebranding? The upstream could provide the necessary updates, just as Dell and HP let Microsoft handle OS updates and sell the surrounding hardware.
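Those two requirements imply some standard machinery on the device: periodically fetch an update manifest, verify the image, apply it. A minimal Python sketch of the fetch-and-verify half (the URL and manifest fields are hypothetical, and a digest alone gives only integrity; a real scheme also needs a vendor signature):

```python
import hashlib
import json
import urllib.request

# Hypothetical endpoint; a real device would also pin the server certificate.
MANIFEST_URL = "https://updates.example.com/firmware/manifest.json"

def fetch_manifest(url: str = MANIFEST_URL) -> dict:
    """Fetch e.g. {"version": "1.2", "sha256": "...", "image_url": "..."}."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def verify_image(image: bytes, expected_sha256: str) -> bool:
    """Integrity check only; authenticity requires a real signature scheme."""
    return hashlib.sha256(image).hexdigest() == expected_sha256
```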

Sok Puppette • November 10, 2016 9:06 AM

Possible approach one (liability):

  1. A manufacturer (or other seller?) is liable for any exploitation of any bugs in whatever it sells. The liability extends to third parties and cannot be disclaimed through a EULA or whatever. You're going to need insurance, and you can expect your insurance company to put some pretty stringent conditions on it.
  2. A manufacturer must define a product lifetime, at least two years. The lifetime must be clearly disclosed in advance of any sale. The manufacturer must provide security updates for at least this lifetime. Security updates must not reduce the functionality of the product (could be tricky to formulate). When the manufacturer does in fact stop providing updates, the product must PERMANENTLY DISABLE ITSELF. And as long as we have a committed lifetime, let's say that any cloud service associated with the product must remain available for that long.
  3. A manufacturer may be released from (1) and (2) if an end user intentionally chooses to install firmware significantly different from what the manufacturer provided... but only if the manufacturer has erected no technical (or legal?) barriers that would prevent that from being done in a secure way.
  4. You as an end user are liable for what you INSTALL. If your thermostat participates in a DDoS, you personally can (in theory) be sued. However, you have safe harbor if you use only manufacturer-provided firmware and you promptly install updates from the manufacturer.
  5. Third-party firmware providers are treated as manufacturers, and take on similar liability if they sell their code. If they give the modified firmware away, they take on no liability.

Possible approach two (self-help securability):

  1. If you offer anything for sale, you must publicly disclose every detail of its entire design. Source code, hardware designs, everything. This obligation extends to your upstream suppliers. It also applies to any cloud service associated with the product. Any agreement to keep any of this secret is void as a matter of public policy. Yes, that means you and I get to see Qualcomm's and Intel's VHDL and chip masks. Secret designs are dangerous. If they want "IPR", then they can go file for patents.
  2. There is never any criminal or civil penalty for mere testing, inspection, or other searches for security bugs, nor to public disclosure of security issues. Actual exploitation to cause damage to other people is of course still criminal.
  3. Sale of a physical device implies a license to modify the software (or hardware) of that specific unit, including in ways derivative of the original design. Agreements to the contrary are void.

Possible approach three (design regulations):

  1. It must be possible to upgrade the main firmware in any product.
  2. Upgrades must be done automatically by default. The user may be allowed to disable this.
  3. At the same time, it must be possible for a person with physical access to the product to recover to a known state. This may imply a small ROM loader that cannot be modified and is used only for recovery. Physical access must be verified using a physical switch.
  4. Specified cryptographic means (not weird ad-hoc protocols dreamed up by manufacturers or non-specialist standards bodies) must be used to secure manufacturer updates. Any change in the keys trusted for update must require physical access to the device. Note that if the "securability" group were adopted, it would have to be possible to change these keys.
  5. Similarly, all network communication must use similarly standardized cryptography. No roll-your-own command and control.
  6. [Your idea here...]
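Point 4's "specified cryptographic means" would in practice be a public-key signature (Ed25519 is a common choice). Python's standard library has no asymmetric signing, so this sketch uses HMAC-SHA256 purely as a stand-in to show the verify-before-flash flow; the constant-time comparison is the part that carries over directly:

```python
import hashlib
import hmac

# Stand-in for key material. A real device would hold only the vendor's
# *public* key, provisioned at manufacture; changing it would require the
# physical-access step described above.
UPDATE_KEY = b"example-provisioned-key"

def sign_update(image: bytes, key: bytes = UPDATE_KEY) -> bytes:
    """Vendor side: produce an authentication tag over the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_update(image: bytes, tag: bytes, key: bytes = UPDATE_KEY) -> bool:
    """Device side: constant-time check before the image is ever flashed."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```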

Gk • November 10, 2016 9:07 AM

Regulation will have little to no effect because the 'nasty', cheap, part of the IoT market is largely made by many small companies that act as outlets for largely faceless manufacturers.

The US could impose any number of regulations on these small 'front' companies; the end result is that they would just fold and another would open in its place, still leaving all previous devices at risk.

Any established, reputable company would not need regulation because they already have a brand to uphold and are an easy target for public name and shame - they can't be seen associated with insecure software.

Sok Puppette • November 10, 2016 9:32 AM

Regulation will have little to no effect because the 'nasty', cheap, part of the IoT market is largely made by many small companies that act as outlets for largely faceless manufacturers.

So push the liability back on those "largely faceless manufacturers". It's not impossible for an enforcement agency to find out who actually designed the "reference board" and "reference software" that went into some toy. You introduced the bug, you own it. It might take some international cooperation, but that might not be an impossibility since every country is at risk.

The actually trickier part is what you do when a bug is traced back to an open source project, like, say, Linux, which had issued no patch before things went sideways and was therefore clearly responsible. The open source project isn't going to be able to pay and can just fold... assuming it's even a legal entity in the first place. Liability on individual developers seems unlikely to work other than to maybe shut down open source completely (and that's unlikely to be a net security win even with rules that prevent secret commercial development).

I don't know how to solve that one.

Any established, reputable company would not need regulation because they already have a brand to uphold and are an easy target for public name and shame - they can't be seen associated with insecure software.

So the constant stream of rooting bugs from Google, Apple, Microsoft, and whoever else is just an illusion? The only really critical place they're ahead of the IOT stuff seems to be in the ability to get patches out.

I'm with Her • November 10, 2016 9:35 AM

You're about a day late. More regulation is not going to happen in the next four years. Sure, the EU could start some regulation hearings, but the EU will move at a glacial pace. In any event, the US is the major market and will drive the behavior of the manufacturers.

The only remaining option is to out and boycott the inferior manufacturers. Name and shame. This won't be very effective, but it is the only option.

ATS • November 10, 2016 9:45 AM

@Sok Puppette

#2 is a pure non-starter. Not the least because it does jack all to fix the actual problem.

Of your three options, #1 is the only viable path.

Chelloveck • November 10, 2016 9:52 AM

I don't think there's a workable way to make manufacturers add sufficient security to their devices. Legislate a minimum standard? Okay, but what about attacks that exploit bugs in implementations of those standards? What about when today's super-secure standard is found to have an exploitable weakness in a year or two or ten?

Mandating automatic updates is undesirable. Computers are already seeing this problem. I have a friend who's had overnight 3D print jobs fouled because of automatic Windows updates. I don't want my lights or thermostat to go off just because the manufacturer decided it's time for a security update.

Any fix based on regulating or penalizing the manufacturers' behavior is about as likely to work as stopping spam by regulating or penalizing the spammers' behavior. I hate to say it but I think the only fix is going to be self-defense on the part of vulnerable parties. It's not going to be possible to fix all the networked devices without legislating them right out of existence. Companies like Dyn are going to have to be able to protect themselves against attack. Yeah, it sucks. It's a financial burden that they shouldn't have to bear. But like all crime, you can't make it go away by outlawing it.

(Disclosure: I work for a company which manufactures DDoS protection equipment. I don't think that's factoring into my opinion on this, but take it as you will. And obviously, I don't speak for the company.)

Econ • November 10, 2016 10:07 AM

The battle between security of products and bad actors is economic. How much time and effort does it take to hack? What value can be gained by that time and effort? And, conversely, how much does the company gain by spending more on security? If you simply legislate security, you put small startups out of business before they get off the ground, stifling innovation. Maybe the NSA should provide free security consulting services, to shore up the IoT.

Sok Puppette • November 10, 2016 10:07 AM

Of your three options, #1 is the only viable path.

They're not meant to be mutually exclusive.

#2 is a pure non-starter.

Probably... but only because entrenched corporate interests can't imagine any change to the way they compete. On the whole it would be good for everybody, but that doesn't mean it could be done.

Not the least because it does jack all to fix the actual problem.

... but not for that reason. It does in fact address the problem. And it's more effective if combined with the others.

People get away with these bugs because they're relatively hard to find. You can release a product that's swiss cheese, and one of the many holes MAY get caught in the lifetime of your product. If you have to release the whole design, it's harder to get away with that, and it's even HARDER to get away with releasing a half-assed "fix" that doesn't solve the underlying cause of a discovered problem. Which happens All The Time (TM).

It also keeps you honest about your own back doors. That matters because those back doors can be and are used by malware.

And self-help is sometimes a useful escape hatch.

It also has a lot of good effects that aren't related to security. Basically markets don't work as well with asymmetric information.

AJWM • November 10, 2016 11:12 AM

If it's an electronic device, it already has to get some kind of FCC approval that it doesn't interfere with legitimate frequency band use.

Just extend that to requiring that it be secure against spamming the internet, too. Details of how are left as an exercise to the manufacturer, who is not liable if an end-user subsequently modifies the equipment (hardware or software).

Just how FCC goes about verifying that is another question, but it's the kind of thing they do already (either in their own labs or via third-party certified testing labs).

Clive Robinson • November 10, 2016 11:29 AM

@ Bruce,

The technical reason these devices are insecure is complicated, but there is a market failure at work

More than one. The underlying cause of this problem was not so much the DDoS, annoying as it was, but the "single point of failure" of the downstream customers.

As I've said before, figures show that the top two hundred or so websites use only one or two of a half-dozen DNS providers.

This lack of providers is a serious market failure, and the "race to the bottom" that has caused it has resulted in excessive fragility in these websites.

Whilst regulation might under some circumstances limit DDoS by IoT, it is not in any way fixing the real problem. That is, if attackers cannot use IoT devices they will find other methods to launch DDoS attacks.

Thus any regulation should address the real "broken bone" by splinting it, not the "slap a band aid on" attempts to regulate what is near impossible to solve due to Black Swan / unknown unknowns issues.

Sok Puppette • November 10, 2016 11:37 AM

AJWM, the testing the FCC does is trivially simple rote stuff compared to what you'd have to do to have any confidence in the security of anything. They're not comparable.

For FCC testing, you put the device through its paces in a well-defined set of environments, and you measure RF levels in a well-defined way at a well-defined set of points. The tests are the same for every device you do. Even for devices like WiFi nodes that have relatively complex software controlled deliberate emission profiles, you at least fully understand the boundaries of what you can check and can be sure when you're done. And there's a lot of commonality in how different devices are expected to behave at the WiFi layer.

Real security testing is not like that at all. Yes, you can start with a scan (and a full scan can take weeks, and the results you get will usually depend on how you configure the device before you start scanning it). But after that you have to look for device-specific problems. You have to have an idea of how the device works inside, then you have to probe it creatively.
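That first-pass scan can be as crude as a TCP connect sweep; a minimal sketch (real scanners add service and version fingerprinting, UDP, and much more):

```python
import socket

def connect_scan(host: str, ports, timeout: float = 0.5) -> list:
    """Return the subset of `ports` that accept TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Even this crude sweep finds exposed management ports; the device-specific probing is where the real work starts.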

And then you can be sure you haven't found everything. It's just not possible to test for every possible security hole, even the already known types.

If you wanted to truly understand the full set of security issues in anything, you'd have to resort to formal methods plus clean room software engineering plus probably some other things. Testing from outside could never do it. And even then there might be whole new classes of problems you hadn't thought to guard against.

I'm not saying you couldn't do a scan on every device at FCC approval time. Scans do find common errors, especially in devices that reuse lots of existing buggy software, which is almost all of them. It's goodness. It's just not enough. You'll end up with 100,000 or 100,000,000 units in the field with some bug you didn't check for. And those units won't just mess up whatever's within transmitter range; they could potentially mess up anything on the Internet.

Which is why it's important to be able to take software updates and to be able to recover from compromised systems reliably.

Clive Robinson • November 10, 2016 11:39 AM

@ Miksa,

At minimum we need to regulate two things: the devices need to be updated easily/automatically, and the manufacturer will need to provide security patches for an extended time period.

NO, to both.

The first is an open invitation to the likes of the FBI or any other government entity to force "front doors".

The second is as daft as legislating that sufficient rain will fall for an extended time. Put simply, manufacturers, embedded software houses and most other low-margin commercial organisations can be gone in the blink of an eye.

The traditional way to deal with this has been "code escrow", but due to the requirements of "code signing" any such escrow will become a target of opportunity for covert entities, be they state or criminal in nature. Thus we would end up with the same or similar problems we have already seen with CAs negligently or otherwise signing false certificates.

albert • November 10, 2016 11:41 AM

That's the forceful, no-holds-barred kinda stuff I like to see from Bruce.

As for solutions:

1. 'The market' is the problem, it will -never- be a solution to itself.

2. 'Regulation' often requires dragging manufacturers kicking and screaming into court. When it works, it's a good thing. It's not a perfect example, but look at the airline industry. Between the FAA and the NTSB, airlines and aircraft manufacturers have managed to develop a reasonably safe transportation system. (yes, I see the irony; Bruce's last statement explains why)

3. IoT devices cover a wide range of embedded system types, from dip-shit-simple to full OSes. I suspect small companies might have one person assigned to the firmware, and often the f/w is designed by an outside contractor. Back in the day, hardware costs were paramount. S/w existed to complement or even replace hardware.

4. There are 2 classes of IoT, monitor and control. One would think the monitor class would be safe, but the DDOS hacks say "no". How does a 'read only' device (like a baby cam) do this? It's absurd to think that a baby cam needs the capability to be 'updated' for 'security' reasons. A forced strong password and well designed firmware should be all that's necessary.

Eliminate the ability to -change the code- and most problems disappear.

5. The 'control' class can be the dangerous one. They only need user-level access, and they can do whatever you can do. Even protected code won't help. I don't feel for the folks who really need to control their refrigerator from Aruba. They can afford to replace a fridge full of food. The door locks and alarm system might mean a lot more expense, but they DO have insurance...right? Same applies to HVAC.

This sort of remote access is a foolish fad, and I don't feel for those folks at all.

Anytime you put a full COTS OS in an embedded, Internet-connected system, you are asking for trouble. Even in systems like autos, where remote control isn't provided. I guess if you can't learn to parallel park, or avoid hitting something, then you'll have to take a chance.

Keep throwing the dice, everyone's got a chance to lose.

. .. . .. --- ....

Clive Robinson • November 10, 2016 11:48 AM

@ Sok Puppette,

When the manufacturer does in fact stop providing updates, the product must PERMANENTLY DISABLE ITSELF

Don't be daft, that is not just a ludicrous suggestion, it goes against all sane reasoning about property ownership, as well as WEEE and other environmental legislation. Should your garage door be "PERMANENTLY DISABLED" simply because the manufacturer decides to EOL it?

Think about these IoT devices as "attractive nuisances" and look at the legislative framework already in existence. Then reason out why that would be undesirable as well.

Ted • November 10, 2016 11:57 AM

In 2013, the FTC settled one of its first liability suits on the basis of inadequate software security. The settlement required HTC America, Inc, a mobile device manufacturer, to release patches to fix software vulnerabilities for millions of devices, establish a program to address security risks, and undergo an independent security assessment every two years for the next 20 years.[1]

In 2016, the FTC reached a similar settlement with router manufacturer ASUSTeK Computer Inc. regarding critical software security flaws in its routers.[2]

The publication “CrossTalk: The Journal of Defense Software Engineering” covers the topic of ‘Supply Chain Risks in Critical Infrastructure’ in its September/October 2016 issue and has some great articles on software, liability, and engineering.

[1] (press release)
[2] (press release)

Clive Robinson • November 10, 2016 12:08 PM

@ AJWM,

Just how FCC goes about verifying that is another question, but it's the kind of thing they do already (either in their own labs or via third-party certified testing labs).

It would not verify as such, just lay down requirements, such that on notification of a failure of a product to comply, it would launch the punitive action it already has the statutory right to take.

The problem as we have recently seen with the "FCC fear of SDR" is that they will go overboard or cause the manufacturers to do so out of fear of litigation.

The net result would be the cessation of FOSS developments, as they could not be loaded. Thus it would very quickly and significantly harm the market, as most IoT devices are effectively FOSS based.

So it's a "walking on egg shells" type of approach which needs extreme caution to have even a small chance of being effective.

Tess • November 10, 2016 12:43 PM

A manufacturer is definitely liable for such bugs. Their service shouldn't simply stop once they deliver the product.

TJ • November 10, 2016 1:21 PM

Devices with firmware built for the absolute cheapest in South Asia are being hacked with simple configuration back-doors? You don't say?

Let's regulate. How do we make South Asia have better QA and development standards?

MikeA • November 10, 2016 2:43 PM

If the FCC takes the same approach to IoT security as they do to PC RFI, it will be at best pointless. "self certification" of PC-related devices, or the use of "certified" (nudge nudge wink wink) testing facilities, already make VW look like amateur cheaters.

Even before they allowed such self certification, they drastically weakened the standards because one manufacturer told them (and a judge) that the existing standards were impossible to meet, despite them being met by devices on retailer shelves at the time.

Markus Ottela • November 10, 2016 6:12 PM

2013-15: "TCB in networked devices is fine for end-to-end encryption; Exploitation doesn't scale."

2016: "Record-breaking DDoS reportedly delivered by >145k hacked cameras"

Maybe we can finally have a debate about our mono-culture of systems.

thesaucymugwump • November 10, 2016 7:17 PM

I'm with Her wrote "More regulation is not going to happen in the next four years"

IoT was essentially born during the Obama administration, yet nothing was done to regulate it. But that's not surprising given how many Wall Street criminals BHO prosecuted (a handful), not to mention the over 250 people who traveled back-and-forth between Google and the Obama administration. You have your villains confused.

Figureitout • November 10, 2016 11:59 PM

we need more government involvement
--So stubborn when you get going on something, like political solutions to technical problems which will be a total fail, NO. It won't help. Were you not here for the past 3 years? This is where mandated backdoors will be inserted, you've heard this story before. They cannot be trusted, even though you can be sure they're breaking the law to insert some anyway. And besides, the horse has left the barn. Give the market more time unless you want a duopoly of backdoored products in this market space too. Want to bet that won't happen? That would suck so much. I mean, I like embedded offline (RF and otherwise) space more, but I'd feel sorry for people that *love* the internet so much they want to connect all aspects of their house to it. If customers are scared, reach out to someone who would love to help you! We love to help and educate and set you on your way. Time to learn and take control of your own network. Don't buy products that won't let you reprogram them at least.

How long did it take all the vendors involved w/ internet to get their sh*t together? Some would say it's still not so (me! It's an impossible problem to solve, secure network w/ malicious humans on other end)...Why not regulate CA's more? Why not force the DNS providers to be able to handle larger traffic? The original purpose of the internet was to route around failed connections to be robust, now w/ the centralization of it, it fails? Huh, no sh*t. Why not incident response companies too? Don't leave them out. They need lots of regulations too.

Your security on the Internet depends on the security of millions of Internet-enabled devices
--Holy cow, so over dramatic. I know we get dramatic here w/ rare attacks we've experienced first hand and whatnot, but...No it doesn't. My real security doesn't depend on that. I don't really care about DDOS attacks, can easily get around those if need be (I won't die, my education could be ruined b/c they put curriculum online; email goes down I have other channels). What's the longest time a DDOS attack has ever lasted? Like a few days right? I didn't even notice the last "record breaking" ones, just went right along w/ my day as normal. Nothing's changed besides lots of garbage traffic in logs and waste of power. Otherwise, it's just another boring day.

DDoS isn't the attack that should worry us; it's totally in the open. Remote reprogramming methods (normally the chip needs certain voltages at certain times to put it into programming mode; remotely hacking this when it's designed not to allow it is actually scary) and malware that hides well should worry us; basically, the creation of the botnets, not the DDoS attacks from them. B/c that's what will infect everyone, and you won't even know it. Framing you for a crime is way more scary than using your bandwidth for a DDoS.

And, like pollution, the only solution is to regulate
--This is a last resort when we've tried other options like education from young age. And it'll likely have backdoors.

The government could impose minimum security standards on IoT manufacturers
--Backdoored standards.

RE: only US regulations
--Fake chips w/ false FCC stamps and the like make their way into the US all the time. It will take real money to inspect all these imports and actually enforce these regulations; inspecting all those mail shipments is bordering on impossible (AKA security theater). But hey, let's bankrupt ourselves over DDoS attacks.

We need to proactively discuss good regulatory solutions
--So long as they don't contain backdoors, and this needs to be independently verified (I won't trust it unless I can understand the analysis as well), I'm ok w/ regulations for solid crypto implementations, and that's about it. Why even be in that industry then? There'll be like 2 designs that will pass standards and that's it. Move on to where we engineers can design things w/ some freedom; that's all we want to frickin' do. Consumers need to stop buying, but they keep buying them. What can we do? It'll be blame-central ("my hands are tied!", "waiting on approval!").

Just calm your t*ts and wait for an actual bad attack to happen. Otherwise, let's plan for every other possible attack, eh? Spread those cheeks, you got a bomb in there?

ATS • November 11, 2016 2:08 AM

@Sok Puppette

#2 is a non-starter because it doesn't do jack. You don't need source code to exploit something, and source code doesn't make anything more secure. One only needs to look at the MANY exploits that have happened in OpenSSL for that. And the OpenSSL code is infinitely simpler than a SOC/CPU design in Verilog/VHDL. As far as masks, no one is ever going to be able to figure out jack from masks, literally never. Sorry, but that shit isn't going to help security.

And the masks and VHDL are only going to help counterfeiters not improve security.

And if you are worried about back doors, then someone handing you Verilog/VHDL isn't going to do jack shit. You can just as easily remove lines in Verilog/VHDL as anything else. And good luck trying to back track from masks to Verilog/VHDL in any modern design.

The ONLY viable solutions are those that hold the manufacturers responsible for security. Anything else is smoke and mirrors.

Robert.Walter • November 11, 2016 6:44 AM

"...There is no market solution because the insecurity primarily affects other people...."

Such is the Tragedy of the Commons.

Gk • November 11, 2016 7:43 AM

Since the problem is especially with *connected* devices, surely any enforcement should simply be done at the connection level, i.e. by ISPs.

Device acting badly? Shut off the connection to it.

Then people faced with having no connection will quickly learn not to buy devices from shoddy manufacturers who don't support them in a timely manner.

ISP doesn't care? Shut off its connection too.
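
A minimal sketch of what that connection-level enforcement could look like, in Python (the class name, thresholds, and device IDs are all hypothetical, not any real ISP's mechanism): count each subscriber device's outbound packets over a sliding window and cut off any device whose rate exceeds a ceiling.

```python
from collections import defaultdict, deque

class EdgeEnforcer:
    """Hypothetical ISP-edge monitor: suspend any subscriber device
    whose outbound packet rate exceeds a ceiling (illustrative only)."""

    def __init__(self, max_pkts_per_sec=1000, window_secs=10):
        self.max_rate = max_pkts_per_sec
        self.window = window_secs
        self.events = defaultdict(deque)   # device -> (timestamp, count) pairs
        self.suspended = set()

    def observe(self, device_id, timestamp, pkt_count=1):
        """Record traffic from a device; returns False once it is cut off."""
        if device_id in self.suspended:
            return False
        q = self.events[device_id]
        q.append((timestamp, pkt_count))
        # drop observations that have fallen out of the sliding window
        while q and q[0][0] < timestamp - self.window:
            q.popleft()
        rate = sum(n for _, n in q) / self.window
        if rate > self.max_rate:
            self.suspended.add(device_id)  # "shut off the connection to it"
            return False
        return True
```

A real deployment would sit in the access network and act on flow records rather than individual packets, but the shut-off logic is the same.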

I'm with Her • November 11, 2016 9:48 AM

@thesaucymugwump -
Your argument hinges on prosecution based on IoT laws that do not exist. Further, your argument hinges on President Obama using these non-existent laws to prosecute Wall Street types for products created by people/corporations in the Far East. I see you have selected your desired villain, but I am totally unable to follow your reasoning.

Looking forward, the incoming administration is highly unlikely to create new regulations or to prosecute American businesses for any reason.

Not True At All • November 11, 2016 12:20 PM

"Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you've never heard of to consumers who don't care about your security."

This is where Bruce Schneier's line of thinking needs a course correction.

Of course I'm still waiting to hear Snowden elaborate a little further on what he thought was so problematic with Hillary's use of a home email server...

David Thornley • November 11, 2016 1:20 PM

@Tess: Manufacturer liability, as we've known it, is not going to work the way we expect.

Normally, it's clear what problems a manufacturer might be liable for. A car component might cause the car to crash or spontaneously catch fire. An old-style thermostat might turn off heat or air conditioning when it's needed, or perhaps start an electrical fire. In the connected case, the thermostat can cause untold harm.

There are usually ways to keep track of the more dangerous stuff. If it turns out that 2008 Civics have a problem, there's enough centralized information that I'll get the recall notice, and if I don't, if I hear that Civics made from 2006 to 2010 are affected, I know mine is likely included. Nobody has a good idea of what thermostats I have in my house (it used to be a duplex, so I have two), including me. I think at least one of them was made by Honeywell, and that's not very useful by itself in case of a recall.

If I get a recall, I know where to take my Civic and how to get it fixed. My thermostats are significantly less mobile, and in these cases there's no way to update them where they are. If there were, it would solve some potential security problems and introduce others.

As Bruce pointed out, as a thermostat owner I care about how it controls the heating and air conditioning, and that it doesn't cause fires or anything on its own. I don't generally care about it possibly serving as part of a cyberattack. I was once part of a cyberattack, and knew about it because my ISP told me. Eventually, I managed to get enough information out of them to track down which system it was, using Wireshark, and remove the offending program.

The thermostat company faces essentially unlimited liability from devices that it designed and sold an indefinite number of years ago; it can't necessarily find where the devices are (it might be able to find them on the net, but that's not necessarily possible or useful), and it can't do anything about them even if it can find them. As the guy with two thermostats installed, I face an unknown amount of liability just for having the things in my house, and I won't find out the damage until it's too late to do anything about it. As pointed out above, it isn't possible to evaluate or limit the destructive power of the mighty thermostat beforehand.

There is no technological solution in sight, since we can't come up with absolutely secure devices on the Internet. Government regulation can reduce the problem but not eliminate it. Unless there are liability lawsuits, the market will do nothing to stop it, and liability lawsuits aren't going to work unless we develop reverse class action suits (Facebook vs. a class of defendants, say) or we can solve the problem of fly-by-night companies.

Anyone else reminded of Battlestar Galactica, where the only reason the Galactica survived the initial Cylon attack in the first episode was that Commander Adama refused to allow its systems to be networked?

Sancho_P • November 11, 2016 5:46 PM

I’m a bit concerned with crying for regulation, because often it turns out we don’t understand what we’re trying to regulate.

So what makes an IoT device an IoT device?
Until that’s clear, steer clear of regulating IoT devices.

IoT is a marketing buzzword, not useful in a serious discussion.


”Eliminate the ability to -change the code- and most problems disappear.”

Valid point, however it’s not fully true in case the code was bad from the beginning.

But here I’m with you: Auto-Update, e.g. like TR-069, is a no go.

Also the distinction between foolish and useful or monitor and control doesn’t change the inherent problem:
Any device connected to a (our) vulnerable (Inter-) network poses a potential threat.

The (TCP/IP) Net was designed to push data through, not to mitigate malice.
This is the point we should think about: How to mitigate (in this case) DDoS?
Is it better at the target (bigger pipes, no, much bigger pipes, no, I mean much …)
or at the beginning, where the attack comes from? (But not at the devices!)

The article was written for publicity (IoT, DDoS - but where is cyber?).
Unfortunately @Bruce didn’t mention the technical part (OK, I see the audience).

Tam • November 11, 2016 7:58 PM

End result of these types of regulation is usually a new tertiary market for insurers. They drive up costs and do very little to ensure our safety. Once we do it, every other country is going to slap their own version on it, sans some trade agreeements hated by all the little people of the world.

ab praeceptis • November 11, 2016 8:14 PM

Not True At All

If that statement from Bruce Schneier can be criticized, then only for the missing last part -> "In fact, they do not even care about their own security".

It seems to me that you should think somewhat more about what Bruce Schneier's statement implies before contradicting it.

Clive Robinson • November 12, 2016 1:01 AM

@ Markus Ottela,

Maybe we can finally have a debate about our mono-culture of systems.

No, not yet, because it's still not too late to do something...

The earliest ICT mono-culture most get told about was the computers controlling radio-therapy devices. However that was not the first --a telco-switch was earlier-- and arguably all mass produced devices can be "hacked" in the more traditional sense. That is, just like motorbikes, radios, planes and other "surplus" equipment after wars (arguably personal weapons like the Winchester etc, even swords, armour and cross-bows being the first).

Usually such hacking/personalization does end up having some harmful downsides for society, but the previous scope for this has been limited due to the combined issues of "force multiplication" and "directing mind". That is, there was only so much force multiplication possible for a single individual and only a limited number of minds that could be manipulated by any given directing mind not seen as "lawful authority".

Since industrialization the problems arising from force multipliers / directing minds have risen. The most significant change has been that of communication enabling widespread control, which has then been augmented by programmable control systems. Thus it is now possible for a single individual to become "An army of one".

Thus our usual tangible physical world assumptions about limitations have been shown to fail when intangible information world ideas are put to use (which I've mentioned a few times before on this blog ;)

Thus these DDoS attacks via IoT have been predictable for many many years (at least over half a century). The problem is that most people do not want to think that way due to the cognitive dissonance it causes them, thus they choose to self-deny on it even though it is provably happening and just as provably getting worse, and will continue to do so. A clear side effect of which is the historical "demonization process" of people who "can think and act that way", by those whose power positions become threatened.

But... there is an aspect that few consider and really should. That is, there are those that "can think and act that way" who quite deliberately prevent discussion of "what to do" whilst "anything can be done". Either because they fear the demonization or because they see significant advantages for them in building a new power base.

Thus I predict that nothing will be done about such mono-cultures until long after the point where it is too painful / expensive to do anything about it... Thus certain people get their new "power base" to exploit and depose older power bases.

They say there are "Three sign posts to disaster: the first is invisible to all but a few, the second is obvious with hindsight, the third is obvious to all but a few at the time." I would say we are in the transition period between the second and third sign posts. I guess people will have to wait a while for the historians to make judgment. But either way the horse has disappeared over the horizon and that unlocked stable door is banging in the wind...

P.S. My son, who has just looked over my shoulder and read the paragraph above, has said "Are you blogging about Trump?". I found it difficult to keep a straight face.

Jan Ceuleers • November 12, 2016 8:40 AM

Another way to look at this is that the DNS infrastructure is overly vulnerable to DDoS attacks.

As I understand it, what happened is that the domains in question became unreachable because their authoritative name servers were under attack. But these are normally not consulted by end-users: they generally use their ISPs' caching resolvers, which in turn only consult the authoritative servers when the requested information is not (or no longer) in their cache.

The likelihood of popular sites' DNS records being in ISPs' resolver caches is quite high. The problem is that cache entries are supposed to be evicted at the end of a specified time-to-live (TTL). The next query that comes along from an end user then causes the caching resolver to pull the record back into its cache by re-requesting it from an authoritative server. If this fails (due to unreachability of the authoritative servers) the end-user's query also fails, leading to the outage.

So what if resolver cache entries (or at least those of popular sites, i.e. the ones that are queried "often" or that were last queried "recently") were not discarded at the end of their TTL period? The resolver would try and refresh the entry as usual, but if that failed it would continue to serve the previously cached entry.

Of course a sensible alternative cache eviction policy would need to be implemented in order to avoid the caches becoming overloaded themselves.

Another point is that such an approach would not work for ephemeral records (i.e. dynamic DNS), which is one of the services Dyn is well-known for. Not sure that particular problem is solvable.
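
The serve-stale behaviour proposed above can be sketched in a few lines of Python (an illustration only, not any real resolver's code; the idea was later standardized for DNS as "serve stale", RFC 8767). The `refresh_fn` here stands in for a real query to the authoritative servers.

```python
import time

class StaleTolerantCache:
    """Sketch of a resolver cache that serves expired ("stale") records
    when the authoritative servers are unreachable, instead of failing."""

    def __init__(self, refresh_fn):
        self.refresh = refresh_fn   # name -> (record, ttl); raises OSError on failure
        self.cache = {}             # name -> (record, expiry_time)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(name)
        if entry and now < entry[1]:       # fresh: normal TTL behaviour
            return entry[0]
        try:
            record, ttl = self.refresh(name)
            self.cache[name] = (record, now + ttl)
            return record
        except OSError:
            if entry:                      # expired but present: serve stale
                return entry[0]
            raise                          # nothing cached: the query fails
```

A real deployment would still cap how long stale entries may be served, per the cache-eviction caveat above.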

Clive Robinson • November 12, 2016 9:42 AM

@ Jan Ceuleers,

The problem is that cache entries are supposed to be evicted at the end of a specified time-to-live (TTL).

Arguably the TTL is kept way too short in many caches, which artificially increases the traffic and thus the load on the servers. But this resolves what until recently had been other, more important problems. So it's partly a "swings and roundabouts" issue, where the operational sweet spot moves to meet new challenges.

But at the end of the day the current DNS architecture was not designed for the usage it currently has. That is, it's from a "more innocent time" when malicious behaviour was not a consideration compared to the resources of just getting it up and running. Now the problems are the other way around, in that the resources to support DNS used correctly are small compared to those needed to defend it against malicious behaviour. Also the resources for malicious behaviour are really quite minimal.

Because of this significant resource imbalance it can now be fairly said that investigating ways to change the way DNS works would be money well spent, because in its current form DNS is not really a sufficiently scalable solution when faults occur or are induced by malicious behaviour.

One method might be to have very local caches that are easy to access with very simple, low resource usage, but where communicating upwards gets progressively more expensive in an asymmetric way: making a query is resource intensive whilst answering it is not. This would make DDoS attacks considerably harder to mount and less difficult to mitigate.

Sancho_P • November 12, 2016 7:00 PM

@Jan Ceuleers, Clive Robinson

In your eyes, what makes the difference between
- asking the DNS server for a reply / address
- connecting to the target’s server directly?

Both are stupid if done 20 times (or more) per second,
but in the end, isn’t it only a question of who will break down first?

If an ISP does it for >2k of its private, dynamic $16.-/month customer lines,
clearly the ISP is trying to bring down the target.

My dynamic IP doesn’t have the power to DoS my own servers, let alone Dyn’s.

Btw., a good ISP would tell me that I’ve been making stupid requests for hours.

Figureitout • November 12, 2016 10:21 PM

--Went a little off the deep end (again); sorry, but I don't want yet another industry ruined, one that is really close to where I want to be. My living, my livelihood, seems at stake. I got a taste of what it's like designing something that's heavily standardized: you can barely do any design yourself, end products look the same, or there's 1 company that'll completely own the market. Not fun. I don't care about gimmicky IoT toasters and the like; it's the knock-on effects in embedded space that worry me. But these points/questions/concerns remain:

1) How to assure gov't standards don't mandate exploitable backdoors? Defeats its purpose.

2) How many viruses/worms happened in the 90's before we got a handle on them, and now they're mostly spyware? It's very hard to write malware today from scratch, compared to before.

3) Is this worth it over just a DDoS attack? How similar is this to creating a TSA or DHS for electronics after the 9/11 attack? There's already a ton of regulations for critical infrastructure that you couldn't even read in your lifetime (you spend a ton of time referring to other sections all over the place, like how laws are written, to deliberately confuse and waste time), and tons of soft targets that remain untouched.

Jan Ceuleers • November 13, 2016 5:12 AM


I don't disagree that it is technically possible for ISPs to play a greater role in DDoS prevention. Doing so, however, requires deep packet inspection. Fact is that they don't, because it costs money. Perhaps also because doing it only makes sense if they intend to act on the patterns thus learned, which goes against the idea that ISPs are supposed to just be providers of dumb pipes, an idea further enshrined in law and regulation under the heading of net neutrality.

Clive Robinson • November 13, 2016 1:40 PM

@ Jan Ceuleers,

Fact is that they don't, because it costs money.

It may not be only money; it may endanger their "common carrier" status, which in turn would turn them into "publishers", which in turn brings a whole load of legal liability they neither need nor want.

Whilst you might think only US legislation applies in the US, it can become problematic when wealthy people "go libel shopping", claiming that Internet connectivity is "worldwide publication"; thus the question of the "common carrier exemption" in another jurisdiction becomes relevant...

Sancho_P • November 13, 2016 6:13 PM

@Jan Ceuleers

”Doing so however requires deep packet inspection.”

Could you please explain what you mean by “deep packet inspection”?

This term, often together with “net neutrality”, is sometimes used by laymen (no offense intended, I’m myself a layman in networking technology, this is why I’m asking) to stop further discussion, as naming all the world’s evils (Putin, Kim Jong-un, Ali Khamenei, Assad, Xi Jinping, …) at once in US foreign politics.

I’d like to discuss what the ISP has to read from my camera’s DoS attempt to find out that it’s bad.


@Clive Robinson

”… it may endanger their "common carrier" status, which in turn would turn them into "publishers" …”

Nope, they already have to read the header to deliver the request (on the customer side, probably not at their outgoing side where it would make sense to stop their DDoS attack).
They also “publish” your emails and forum comments without having publisher status.

In fact they already do much more than most would expect, e.g. to fight spam, viruses and logical loops.

Money definitely is an argument because until now they don’t fight DDoS.
But is money really an argument when someone attacks e.g. Wall Street?

Clive Robinson • November 14, 2016 2:01 AM

@ Sancho_P,

Nope, they already have to read the header to deliver the request...

Exactly the same as the first ever "common carrier", the flat-rate postal service.

As for "publishing", ISPs tend to "deliver", not "publish"; the legal arguments boil down to defining the difference. Contrary to what you might think, it hinges not on the number of people/places that "content" gets delivered to, but on whether there is any "editorial" duty, express or implied, from having viewed the content. Again this came about from the postal system, where journals that are publications are delivered by the common carrier postal service, which only examines the delivery address and some physical properties (size/mass) to determine the rate.

Interestingly, radio stations that carry adverts are usually considered "common carrier" even though they do have a limited requirement to check content. In this respect they are like "printers", not publishers, as they have no "editorial" input, just a choice of whether to reproduce the content in its entirety or not.

What ISPs do not want to get into, just like many social media distributors, is "editorial" behaviour, either expressly or in whatever form some smart-suited mouthpiece with high hourly rates might be able to convince a tribunal is editorial behaviour.

Not True At All • November 14, 2016 3:47 AM

@ab praeceptis

I totally disagree. Tell me how exactly a bog-standard iptables firewall doesn't contradict BS. Or rather, if you and a group of friends use the internet with bittorrent to share news videos, how in that situation is your security dependent in any way on these millions of crap devices on the net? The fact is that this particular threat model does not trivially destroy the entire ability of the internet to basically function as a global news and interpersonal communication medium. It may well destroy those who lack the skills to mitigate the damage. But there are enough people who know how things work well enough that that is not a real concern. To the extent that hackers succeed in DoSing various subsets of the internet for various temporary timeframes, the end result will just be a large complex organism that evolves more effective defenses. That's just the way it works.

Take a deep breath, step back, and try to look at the big picture. Sum up all the biggest, highest-profile DoS attacks that have ever occurred. Now, on the other side of the balance, look at how much corresponding 'uptime' the entirety of the internet has had.

Now entertain the theory that journalists like Bruce Schneier benefit from the way that alarmism over issues (here, cyber security) increases readership.

The real danger here is in people who don't know better thinking that something is a far bigger deal relative to all the other issues in their life than it really is.

I for one lose no sleep fearing how crap IoT devices may cause troubles for the world. Trump having the ability to start WW3 on a whim --that I need medication to cope with.

ab praeceptis • November 14, 2016 4:41 AM

Not True At All

I see your point but I disagree. Your POV looks valid and to a degree is, but looked at from the ITsec perspective there *is* solid reason to be very worried.

One major problem is the fact that we basically are empty handed against DDOS. Many are not worried as we *seem* to have protection/mitigation available (cloudflare and the like) - but actually we do not.

With all due respect for cloudflare and the like (and some of their services *are* useful) their DDOS approach is futile; it pretty much boils down to ever fatter pipes and more of them.

Besides the fact that purely quantitative approaches are rarely the smart way to go, it is bound to fail. For one, the number of dumb-thingies is increasing faster than the network capacity. Secondly, the attacking side is evolving, too. Thirdly, even if the "protectors" succeeded in always having large enough pipes, there would still be major collateral damage.

To make it worse, the vast majority of politicians are on the low end of tech. knowledge and strongly prefer inadequate approaches, like legal or political ones.

Last but not least, Bruce Schneier is right because there is a hierarchy involved, namely of ignorance. While (at the top) sensitive systems or servers are managed (at all, and often by not incapable people), on the desktop the situation gets worse. Yet another level deeper it's again worse, and finally on the IoT level, the vast majority of users not only do not care but do not even consider themselves users (in the IT sense); and the sellers sell light bulbs or thermo-controllers after all (and not "IT equipment"), hence those users can even justifiably claim that their ignorance is perfectly alright.

We fucked up and we fucked up big time and on multiple levels.

jay • November 14, 2016 2:26 PM

There is a difference between these potential regulations and, for example, fire or automobile regulations.

Fire is predictable. Good fire safety practices (both prevention and mitigation of risk) are well established and work well year after year.

Devices, and software for that matter, do not have the advantage of a stable adversary. Indeed, the adversary is very cunning and may have deep pockets (organized crime or nation states). Imagine trying to prevent fire deaths if fire were highly intelligent and were constantly looking for new ways to bypass your protections.

Ross Snider • November 14, 2016 3:16 PM

Regulation isn't going to do much of anything for the security of the internet of things, and it's surprising to hear you recommend it. Nor will regulation solve the underlying problem (software is easy to compromise at scale, and at scale compromised networks of resources can be used for DDoS). A number of measures should be taken to mitigate potential DDoS of core internet infrastructure (name resolution) that have nothing to do with IoT devices. Focusing on IoT is meaningless for these cases, and cannot be brought up as a justification for regulation.

In addition, regulation is likely to provide an opportunity for national security interests to influence the designs of devices so that they can eavesdrop and compromise them at scale for maintaining control over populations.

A better approach would be to provide funding to the Better Business Bureau / Department of Commerce / etc. so that it can assess the security of digital products, which should be labeled with security assessments like food is labeled with nutrition facts.

The label could be as easy as a few numbers or charts and a QR code pointing to more information.

This puts the security assessment (actual hackers looking at actual deployed products) in front of consumers when they make decisions, forcing companies to compete on the security of their devices but without introducing lengthy, arbitrary, and politicized processes and gates in front of companies.

Jan Ceuleers • November 15, 2016 12:51 PM


What I mean by deep packet inspection is any inspection of a packet beyond what is needed to forward that packet to the next node towards its destination. This implies:

(1) that any access to an IP packet outside of the IP header would be classified as DPI.
(2) that if state is kept, this would be classified as DPI.

So according to this definition, firewalling and NAT are examples of DPI. So is traffic shaping.

Note, though, that I would argue in favour of ISPs implementing certain forms of stateless DPI that can be done in a router's fast path, such as RFC 2827 filtering (which is a form of firewalling).
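
For concreteness, RFC 2827 (BCP 38) ingress filtering amounts to a stateless membership test: drop any packet whose source address falls outside the prefix assigned to the link it arrived on. A Python sketch using the standard `ipaddress` module (the prefixes below are documentation ranges, not real assignments):

```python
import ipaddress

class IngressFilter:
    """Sketch of RFC 2827 (BCP 38) filtering: a stateless check that a
    packet's source address belongs to the prefix(es) assigned to the
    customer link it arrived on. Spoofed sources are dropped."""

    def __init__(self, assigned_prefixes):
        self.prefixes = [ipaddress.ip_network(p) for p in assigned_prefixes]

    def permits(self, src_addr: str) -> bool:
        # A legitimate packet must originate inside an assigned prefix.
        src = ipaddress.ip_address(src_addr)
        return any(src in net for net in self.prefixes)
```

Because the check looks only at the source address in the IP header and keeps no state, it fits the "fast path" requirement: no content inspection is needed to stop spoofed-source floods at the edge.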

Sancho_P • November 16, 2016 5:54 PM

@Jan Ceuleers

Um, so I think we disagree about that term (DPI); to me it means analyzing the content, not the header.
Similar to email or chat,
I think the header is theirs (TLAs, including global surveillance),
but the content is mine.

We can’t hide metadata, and they need metadata to do their work ("to protect us").
But the content is ours, and I’ll fight till my end that it remains ours.
Not their business.
(OK, they may ask and I may tell them, but they can’t have it in the dark)

gordo • November 16, 2016 8:55 PM

U.S. House of Representatives
Committee on Energy and Commerce
Subcommittee on Communications and Technology joint committee hearing with the Subcommittee on Commerce, Manufacturing and Trade
Understanding the Role of Connected Devices in Recent Cyber Attacks
Wednesday, November 16, 2016

- Video length: [02:58:58]
- Hearing starts at: [00:45:27]


Mr. Dale Drew [01:05:45 for opening statement]
Senior Vice President, Chief Security Officer, Level 3 Communications

Mr. Bruce Schneier [01:10:10 for opening statement]
Adjunct Lecturer, Kennedy School of Government, Harvard University, and Fellow, Berkman Klein Center, Harvard University

Dr. Kevin Fu [01:17:39 for opening statement]
CEO, Virta Labs, and Associate Professor, Department of Electrical Engineering and Computer Science, University of Michigan



Nick P • November 25, 2016 12:10 PM

I’m with Bruce on bringing in regulations or liability. I know it brings a ton of speculation and counterpoints when brought up. I think the ones I see on Hacker News are worth addressing collectively since they turn up constantly on any forum. They reflect a consensus of concerns by IT & business people in general. Here goes, with concerns paraphrased.

“I don’t know if security for software could be encoded into regulations.” “If it could be, I’m not sure it could lead to more secure software.”

It was done successfully before, and resulted in the most secure products ever to exist. A few still use such methods with excellent results during pentests. It also preempted major vulnerabilities in popular stuff. Such methods would’ve also prevented a good chunk of the Snowden leaks and TAO catalog. Bell, of the Bell-LaPadula security model, describes it here:

Examples included Boeing SNS server (going strong 20+ years), BAE XTS-400, Aesec GEMSOS, esp KeyKOS, a secure VMM, an embedded OS in Ada, GUI’s immune to keylogging/spoofing, databases immune to external leaks, and so on. CompSci projects with practical bent had even more stuff. Such research continues today but is a trickle compared to, say, Java extensions or machine learning stuff. Same thing happened with DO-178B, etc for safety-critical markets: tons of high-quality components showed up with many reusable.

“So then I guess you also want a Personal Computer or Home Security System to be $1,000,000.”

“You’re mentioning an industry funded by about 1/6th[0] of the Federal Budget related to war and consequences if things go wrong. Just cuz it worked for the DOD doesn’t mean it will work for anything else.”

If we’re increasing the baseline, there are companies the size of startups that do it all the time while remaining lean in expenses, with speed in development. An example of a low-defect methodology was Cleanroom, which varied from reducing costs, to neutral, to slightly increasing them. Complexity could still be high, as Cleanroom just forced better structuring. Cost and speed were overall unaffected on average, since up-front quality reduced debugging and maintenance phase costs so much. Finally, for high-assurance, applying it to well-understood problem areas… such as TCB’s, VPN’s, or compilers… ranged from a 35-50% premium on top of the normal development process. A nearly unhackable Windows at $150 instead of $100 sounds like a steal.

Main drawback of the highest security is loss of development speed. The amount of rigor on problems with unknowns means you are going to spend quite a bit of time on modeling them, analyzing them, prototyping them, pentesting that, and so on. Lipner, when leading VAX VMM for high-assurance, said it took “two to three quarters” to implement a major change in the product. Probably weeks to a month with the crud approach common in industry. They also had fewer features due to (a) the need to figure out how to secure them and (b) especially the complexity & inherent insecurity of many standard features. High-assurance systems would need to pick ZeroMQ over OLE or CORBA, JSON over XML, native apps over web apps, Modula-3 over C++, musl over whatever GNU builds, and so on. Throwing stuff together with no thinking about the effect of their architecture, coding style, or implementation tooling will go down in a huge way for products with high assurance requirements. This would be opposed by a lot of people. Even “INFOSEC” people that I see. ;)

Quick note on pricing. Remember that the extra cost gets spread out among large numbers of users. If one survives the initial development, then the resulting software gets cheaper per customer as adoption grows. The idea that we’re spending a million for Windows or Oracle is ridiculous. On a per-customer basis, it would probably have about the same price it has now if built to medium assurance under Cleanroom with a safe language.
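The amortization point above is just division; here is an illustrative sketch with invented numbers (a 50% assurance premium on a hypothetical $1B development, spread over 15 million copies — none of these figures come from a real product):

```c
#include <assert.h>

/* Illustrative amortization of an assurance premium across the user base.
   All numbers are hypothetical; the division is the whole point. */
static double per_copy_price(double base_dev_cost, double premium_fraction,
                             double copies_sold) {
    double total_cost = base_dev_cost * (1.0 + premium_fraction);
    return total_cost / copies_sold;
}
```

With enough copies sold, even a 50% premium on total development cost vanishes into a modest per-copy price difference.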

“The free market crowd would argue that the surviving companies will bake in the right security if consumers demand it. If companies don’t take it seriously, either their customers aren’t demanding it, or they will be replaced by companies that do a good job at it.”

I argue this myself. However, let’s include the caveats of cartel and otherwise malicious behavior against consumers that reduces the impact of that preference:

Companies lie to customers about how necessary these vulnerabilities are. They condition them to expect it. They also charge them for fixes. It takes almost no effort to knock out the common ones, with only a 30-50% premium for high assurance of specific components. Even premium producers often don’t do either, and those that do are so rare most consumers or businesses have never heard of them.

Years of lock-in via legacy code, API’s, formats, patents, etc. means consumers often don’t have a choice, or only have a few if they want the modern experience. Many times specific choices will even be mandated by groups like colleges. The market created the problem that now lets it milk a captive audience for money. It won’t solve a problem it profits from, no matter what consumers want.

First Mover advantage, lock-in techniques like obscure formats, and patents combine by themselves to give rise to the situation we’re in. Two of those are protected by government. They will need to be solved at government level. Or just force the incumbents to provide increased security in what they lock us into.

“I mean the only reason lawmakers and regulators are not all over this issue is because they don’t realise how bad things are.”

They realize it. Impenetrable systems are also impenetrable to the FBI and NSA, which advise against the good stuff being mandated. The bribes they take from COTS vendors also brought in a preference for insecure solutions from said vendors. Everyone paying them wants to maximize profit, too. Costly rewrites will cut into that.

So, they’re willingly covering their ears while sitting on their asses. At least those on major committees.

“What would a better version of the situation look like that would preserve benefits of the Internet while dealing with events like massive DDOS’s?”

A combo of per-customer authentication at the packet level, DDOS monitoring, and rate limiting (or termination) of specific connections upon DDOS or malicious activity. That by itself would stop a lot of these right at the Tier 3 ISP level. Trickle those suckers down to dialup speeds with a notice telling them their computer is being used in a crime, with a link to helpful ways of dealing with it (or a support number).
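The throttling half of that combo can be sketched as a per-customer token bucket that DDoS monitoring trickles down to dialup speed once a customer is flagged. All names and rate numbers below are hypothetical, not any real ISP's implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-customer token bucket: full rate until flagged by DDoS
   monitoring, then trickled down to ~56 kbit/s "dialup" as suggested above. */
typedef struct {
    double tokens;        /* bytes currently allowed to pass      */
    double rate;          /* refill rate, in bytes per second     */
    double capacity;      /* burst ceiling, in bytes              */
    bool   flagged;       /* set when malicious activity detected */
} bucket_t;

static void flag_as_malicious(bucket_t *b) {
    b->flagged  = true;
    b->rate     = 56000.0 / 8.0;  /* 56 kbit/s = 7000 bytes/s */
    b->capacity = b->rate;        /* no bursting while flagged */
    if (b->tokens > b->capacity) b->tokens = b->capacity;
}

/* Called per packet: elapsed seconds since the last packet, packet size.
   Returns true if the packet may pass, false if it should be dropped. */
static bool allow_packet(bucket_t *b, double elapsed_sec, double bytes) {
    b->tokens += b->rate * elapsed_sec;
    if (b->tokens > b->capacity) b->tokens = b->capacity;
    if (b->tokens >= bytes) { b->tokens -= bytes; return true; }
    return false;
}
```

A flagged customer still gets through at a crawl rather than being cut off entirely, which matches the "notice plus support link" idea: the connection stays alive enough to read the warning.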

Far as design, they could put a cheap knockoff of an INFOSEC guard in their modems, with CPU’s resistant to code injection. Include accelerators for networking functions and/or some DDOS detection (esp low-layer flooding) right at that device.

Old one from the high-assurance field, albeit with a medium rating, that did what I’m describing in an Ethernet card computer:

Modern implementation could probably be done in a cheap clone and security-enhanced mod of this product:

ab praeceptis • November 25, 2016 12:59 PM

Nick P

"Main drawback of highest security is loss of development speed."

Not really, at least not necessarily. If we cut away all the idiot projects and look at those ranging from "We need some reasonable level of safety, reliability, security" up to "safety, reliability, security is an absolute must", which cover quite a large range, then the above statement doesn't hold true.

Main reason: Development doesn't end when the compiler accepts it without errors and when the thing seems to do what it's supposed to do.

For any kind of professional and/or long-term project, maintenance enters the game.

Moreover, plenty of experience has shown over and again that the coding part is about the cheapest one. Oftentimes, for instance, a bug discovered after the product is shipped costs 10++ times more to fix than during development.

There are other factors, too, like new versions: how cheap or expensive is it to create them?

At this point I'd like to draw a separation line:

One need not fully formally specify and verify every line of code. In many contexts (typ. commercial ones) "more solid, reliable, safe" is damn good enough. The difference between poor hacking in C++ and a much more solid product can be as simple as doing the project in e.g. Pascal (Modula-3, which you mentioned, is even better, but Pascal has way more "batteries", a modern IDE, etc.)

The next step up in quality and cost typically is *some* formal spec/modelling, typically of critical parts. Usually at that level some other requirements enter the game: proper docs and basic unit tests, for instance, plus some basic but stringent rules regarding naming, proc sizes, etc.

The cost added is rather modest, maybe around the 10% - 20% range, and that amortises extremely quickly through much less debugging, a much easier-to-maintain codebase, etc.

Only at the even higher levels does that change, i.e. it gets considerably more expensive - but again: on the code-creation side. From what I heard as well as what I personally experienced, that is usually balanced out by the advantages.

Somewhat simplifying, one might almost say that (except for extreme cases) the costs stay about the same for the full life cycle of a sw product. With typical C/C++/Java projects w/o formal spec/modelling or formal rules, it's cheaper during development but from then on it gets expensive. With, say, Ada, formal spec/modelling (of critical and sensitive parts) and some Spark'ing, costs during development may be around 1.5 to 2.5 times higher but sharply drop from then on. Maintenance, for instance, isn't a nightmare but a quite easy job. Over the product's lifetime one quite probably actually saves money.
Based on my experience, about the only problem to be expected is on the hiring side.

Finally, one should not just look at a project's costs. With Wirth languages and derivates one almost automatically builds lots of reusable - and reliably so - code thanks to proper modularisation, easy readability, and other factors.

So when asking the question on a larger scale, for instance, whether a sw company in a professional field will be better off using, say, Pascal (or even Ada) plus proper spec/modelling - or, say, Java the classical "let's hack stuff" way - the former will almost always be the better - and cheaper - way.

As for the entry question of gov. regulation, my answer is: Theoretically I'm with you and B. Schneier. Regulation would be desirable and helpful. Practically though, looking at the "secretary, make me a printout of google!" guys sitting in gov. agencies, I have grave doubts.

Keep in mind that regulating a few areas where money isn't a strangling factor anyway, is one thing. Regulating, say, "internet programs" is a rather different beast.

Nick P • November 25, 2016 1:49 PM

re OpenBSD and you

I'm tempted to demolish it right now on Lobsters. I'm going to hold off given I'm trying to relax a bit today & that's a prime hangout for OpenBSD devs. Had jcs not disabled downvotes, they'd have made the shit disappear for sure haha. I'll submit the rebuttal as a story later. Let's prototype and discuss it here, though. :)

"OpenBSD has been around for more than 20 years (October 1995)"

I'll let them have this claim. It's probably true.

"OpenBSD is proactively secure with only 2 remote holes in default install in 20+ years." (my emphasis added)

Now, this is a pile of bullshit. We start here. First, the competition includes whatever services and apps people might need. OpenBSD, in the same presentation, comes with all services, even OpenSSH, disabled by default. In other words, anyone who didn't use any services or apps on OpenBSD only had 2 remote holes by their count. Is that how it's deployed in the real world, though? I imagine there were more problems than that when people used it in combination with actual software.

Next is the foundational problem here. They find and fix bugs in their C language code all the time. Sometimes, they find a new class or pattern of bugs they then look for all over their code base. What they don't do is stop to assess whether the bug could be turned into a vulnerability. That is, they count what they find as bugs, not vulnerabilities. Except for those 2, for some reason. Whereas most bug hunters for Linux or Windows try to write an exploit for what they find, then call them vulnerabilities. It's really dishonest to claim only two remote holes in the default install if there's been a crapload of bugs found in the default install with them just not assessing whether they were vulnerabilities.

I rate that common claim by OpenBSD proponents as a straight-up lie. Pure propaganda. The common case for bugs vs vulnerabilities predicts at least one of those bugs they found was a vulnerability. Probably a lot more.

"OpenBSD pioneered and is still leading in code audit."

Code auditing actually started with the people who independently invented software engineering around the same period of time:

1. Bob Barton's 1950's proposal called for higher-level CPU's, safer languages like ALGOL60 for OS code, and teams of people from development to verification. They used that in the Burroughs B5000.

2. Dijkstra (and/or Hoare, due to fuzzy memory) came up with the concept of making software modular, putting checks at the interface of each module, carefully scrutinizing code for correctness, and using formal specs for them. The resulting THE multiprogramming system was one of the most reliable OS's ever built at the time.

3. Margaret Hamilton at Apollo ran a team specializing in writing software correctly, with at least two modules handed off to them. She and her people focused on correct specs (esp with formal notations), interface checks, careful writing of code, reviewing of all code, and testing everything. She coined the term "software engineering" to describe such activities. She also used asynchronous execution with monitors and human-in-the-loop recovery in case the software broke due to hardware problems or just unforeseen software errors. Her team's code ran error-free for the duration of the mission, plus saved it when another party screwed up.

That's where correct software came from. Later, in the 1970's, Fagan at IBM pioneered code auditing as a formal process via his Software Inspection Process. Like Hamilton, he noticed many types of errors were recurring patterns. His SIP put them on a checklist, had teams dedicate time to finding those, fixed them by priority, and surprised management with apps that performed correctly more often. He instituted that at parts of IBM plus lots of other companies through his consulting career.

So far, a select group of proprietary, academic, and government-funded software was the highest in quality, with strong review processes. The high-assurance security field took this to the whole lifecycle of software by the late 70's to early 80's, with the first system (SCOMP) finished in the mid-80's. So, before 1990, the only secure OS's were STOP OS in SCOMP (1986), Karger's VAX VMM, GEMSOS in the BLACKER VPN, and Boeing's MLS LAN (aka SNS Server) in 1989, with LOCK's kernel w/ UNIX layer following around 1992. These methods kept proving to work well during penetration tests with numerous 3rd-party projects built on them. None of these was ever compromised in the field, from what data I've found. Although, I'm sure experts could find something if source was released. ;)

The high-assurance security community also tried those methods on UNIX. The goal was to see if UNIX could even be secured given its complexity ("dozens of system calls" lol) and backward compatibility with how UNIX apps used it. UCLA Secure UNIX (1980) got the earliest treatment, with it decomposed carefully, a formally-specified security kernel at bottom, and an attempt to reduce information leaks by covert channels. It wasn't promising: architecture, API's, and complexity inherently caused problems that could only be closed by breaking backward compatibility & rewriting apps. The commercial sector tried anyway with a medium-assurance solution, Secure Xenix, at IBM in 1987 that didn't go anywhere. Around 1990 or so, Trusted Information Systems (TIS) built & got evaluated a medium-assurance version called Trusted Xenix that implemented MAC, added a trusted path, tried some covert-channel suppression, and at least got rid of setuid root. It did all this while basically remaining compatible with Xenix apps (including setuid root ones). It kept getting updated and evaluated until version 4 in 1994 IIRC.

OpenBSD shows up in 1995. They fork an insecure UNIX called NetBSD. Their security effort is essentially looking for source code problems and fixing them. They leave the insecure architecture, have no MAC, no trusted path, no covert-channel analysis, and so on. The product is weaker than Trusted Xenix, which itself, due to UNIX-level issues, was weaker than A1-class products like XTS-400, GEMSOS, or LOCK. The OpenBSD team does add some code- and OS-level mitigations, ranging from provably eliminating a problem to probabilistic (aka hope and pray). They also replace insecure services with ones offering better protection. More importantly, they release their code for free with source available to hit price points and adoption levels prior solutions could never reach.

I think I've thoroughly debunked the idea that OpenBSD pioneered source auditing, was the first secure UNIX, or any of that crap. Their assurance of correctness of kernel code might still be less than UCLA Secure UNIX's was in 1980. Their architectural security is hit and miss compared to Trusted Xenix of 1990. The ones copycatting A1 products, especially separation-kernel vendors, are ahead by virtualizing things like Linux in user mode with all critical stuff running in isolated partitions directly on a 4-12Kloc kernel with mediated communications. That OpenBSD refuses such architectures means they'll never be as secure as a product done that way. However, I think the user-mode layers could strongly benefit from adding OpenBSD-level mitigations or code quality. I've encouraged using a slimmed-down version of OpenBSD itself in those layers for that reason.

"OpenBSD has all security enhancements enabled by default; hard to disable."

This is true. It's a good practice. It's what medium- and high-assurance proprietary systems were required to do under TCSEC & Orange Book.

"OpenBSD is open source, free software, and enabled independent verification"

This is true. It's a benefit but often overstated. The private stuff I quoted got stronger evaluation because people were paid to do that. Some also distributed the source code and evaluation evidence to paying users so their engineers could evaluate it themselves. This claim basically comes off as neutral on security, where it *can* help but might not (see OpenSSL) and isn't strictly necessary given private parties can evaluate closed-source w/ hash comparisons on binaries. I still prefer it, though, due to FOSS's benefits outside hunting vulnerabilities plus unknown classes of attack.

"OpenBSD has a high-profile, quality image based on code quality and real world use."

High-profile among people that love OpenBSD or a tiny niche of users. That barely constitutes "high profile" to me. I find this deceptive. Their attention to quality of code and documentation can't be refuted by anyone that knows their work. However, OpenBSD is one of the least-used OS's in existence, virtually no ordinary user will have heard of it, many sysadmins outside of UNIX don't know about it, and its community was famous for essentially telling people to fuck off if their expectations didn't match OpenBSD peoples' and/or they weren't coding it themselves. By using a different approach, FreeBSD and Linux had tremendous impact powering much of the web, parts of Mac OS X, Linux desktops in corporate/consumer spaces, Android, etc. *Those* are high-profile and high-impact. OpenBSD is low-impact and low-profile.

It would be different if they said OpenSSH instead. That's high-profile and high-uptake among server market. Notice how its design and marketing was handled a bit differently than OpenBSD itself. Probably had something to do with it. ;)

"OpenBSD is upstream (origin) for several widely used pieces of software... innovation list"

This is true. They deserve credit for this.

"OpenBSD has been 'growing up in public' via public, anonymous CVS (the first of its kind) since 1995 - transparent process, development discussions on public tech @ mailing list."

This is true but neutral. It doesn't matter whether a product's development was public from the start or not if it's eventually OSS with plenty of review. The strongest ever built were closed-source, with some really strong ones done closed with the cathedral model followed by FOSSing them. seL4 is one. Bernstein's might be another good example depending on whether he developed in private initially. This will happen less now that things like Github are kind of the norm for personal projects. My memory problems are preventing me from evaluating whether it was the first to do anonymous CVS for development during or prior to 1995. May or may not be true. Old timers want to chip in on that?


OpenBSD continues to present itself as a legend. That is, a mix of truth and fairy tales. It was not the first secure UNIX or even the first project with code auditing. That goes to proprietary, academic, and government groups dating to the early 1960's. The first secure UNIX's were done around 15 years before it started, with the first products proprietary & a few years ahead of it. It's still weaker in design and architecture although probably stronger in code quality and probabilistic mitigations. They did develop it transparently with a secure default configuration. They did create a number of better, real-world protocols/apps to layer on top. They deliberately underreport vulnerabilities to support their claim about 2 holes, which is likely false. They do overall set a positive example for others in terms of code and documentation quality.

A conclusion like this, written as bullet points on a presentation, will probably look different from their own for some time into the future. They love their own legend too much. :)

Nick P • November 25, 2016 2:05 PM

@ ab praeceptis

"Not really, at least not necessarily."

Every development has reported this if the result involved imperative code on popular platforms. I'm not aware of a single exception. The reason is fundamental: they have to take time to understand, document, spec, etc. the unknowns up front. They also take time carefully specing and testing changes. It always takes more time to engineer software than to throw it together with a minimal amount of testing.

Now, if the baseline goes up (eg Cleanroom w/ safe languages), then the reduction in debugging and maintenance problems will show little productivity difference. There might even be a boost. This is just initial, though, as a more reasonable baseline is established where quality is actually a factor vs the current situation of none. Once that happens, you'll start to see the productivity hit of increased assurance as the developments get slower and slower. I mean, the brightest people in academia spent over a decade trying to formalize the C language. Compiler authors got the first compilers done using low-quality methods in a few years. This gap will always be there, with the market currently fine with what the latter produces.

"One must not fully formally specify and verify every line of code. In many contexts (typ. commercial ones) "more solid, reliable, safe" is damn good enough. "

I agree. It's also Gerard van Vooren's position, where he pushes tools like the ones you follow up with to increase the baseline with almost no work. That's what I allude to above with the current situation having basically no quality.

"The next step up in quality and cost typically is *some* formal spec/modelling... cost added is rather modest"

"Only on the even higher levels that changes, i.e. gets considerably more expensive"

We agree, given I promoted the EAL's and you just described them. Almost word-for-word equivalent to a brief summary someone did of going from the lower EAL's to EAL5, where shit begins to get real at a small cost, then to EAL6/7 where verification gets real... *expensive*... with the best results. ;)

Medium assurance, which I promote as a baseline, just takes a few activities that don't add much to cost, as you said. That's clear specs, limited formality for the most-critical components, safer languages, interface checks, code review, and thorough testing optionally supported by specs. That small teams and startups do this on a regular basis means that the masses of developers and big companies claiming it's impossible or too costly are Full of Shit (TM).

"With, say, Ada, formal spec/modelling (of critical and sensitive parts) and some Spark'ing costs during development may be around 1.5 to 2.5 times higher but sharply drop from then on. Maintenance, for instance isn't a nightmare but a quite easy job. "

Totally agree. Integration and maintenance costs on Ada were historically low since it was designed to make that easy. Interesting how a language's design and customers' needs regarding quality can make quality easier in practice. I'm still convincing C people. C++ people are getting there.

"Practically though, looking at the "secretary, make me an printout of google!" guys sitting in gov. agencies I have grave doubts."

I'm working very hard on these write-ups to focus on just what will have results. Those people inspire a lot of doubt in me, too. Just look at what they did with Common Criteria: watered down the assurance aspect into a pile of feature requests, paperwork, and lucrative evaluations. All bureaucrats doing it. The methods will have to be real and applied for real.

@ All

EDIT: I forgot something on the essay. That was whether startups will be killed by such criteria given they don't have the budget for evaluations. I proposed an alternative where the evaluations could be applied *after a harm* is caused by the software, *in court*. So, the products don't get evaluated up front. They just perform assurance activities plus document them. If a lawsuit happens, a private party that's licensed or certified to be capable of security review can compare the product's source & design against sane criteria to determine how lax it is. The punishment goes up the more the product deviated from secure practice, if the deviation caused the harm.

ab praeceptis • November 25, 2016 3:25 PM

Nick P

For a start, it seems you mistook me. I didn't contradict you (for the major part) but actually solidified the wall on which you built.

"on popular platforms" - there you have the hook. I read that as idiot projects (the 90+% crap on sourceforge or github or ...) and I expressly put up a frame starting above those for what I said. *Of course*, projects on popular platforms give different results! But to me they are completely meaningless.

Pardon my french but, to name an example: OpenSSL isn't rotten because spec/modelling or using an adequate language or verif. is, oh, so expensive in time or whatever. Nope, it's crap because we could call ourselves lucky if even 1 in 10 "developers" were not complete and utter unprofessional losers. Moreover, those guys act like members of a religious sect and not like rational grown-ups.

I still remember when my most important and beloved tool was CodeView (a *very* cool debugger then) and not the C compiler. I also remember colleagues and pals considering me a weird, probably paranoid safety freak because I did not consider my debugger a luxury tool or a tool to avoid if at all possible.

So, I'm comparing the time and effort needed to create at least reasonably well designed and built software. And looking at it that way, over the whole lifecycle, proper design, proper spec & modelling of at least the critical parts, using an adequate language like Modula-2 or 3, and running the best available verif. tools *is cheaper* than hacking away in C. Period.

Just look at the first problem in the curl report. Clear case: lack of proper design, spec. and modelling. Et voila, there you have your fgets being used *assuming* something (which can be painfully wrong).
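A hypothetical miniature of that curl-style defect (not curl's actual code): the buggy shape bakes the buffer size into the callee as an unchecked assumption, while the fixed shape makes the contract travel with the pointer:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The size the callee *assumes* its caller allocated - the unwritten spec. */
#define ASSUMED_LEN 5120

/* Buggy shape: reads up to 5 KiB into buf no matter how big buf really is.
   Works fine until some caller passes a smaller buffer - then it's an overflow. */
static char *read_line_assuming(char *buf, FILE *fp) {
    return fgets(buf, ASSUMED_LEN, fp);
}

/* Spec-driven fix: the buffer length is part of the interface, so the
   assumption becomes an explicit, checkable contract. */
static char *read_line_checked(char *buf, size_t buflen, FILE *fp) {
    if (buf == NULL || buflen == 0) return NULL;
    return fgets(buf, (int)buflen, fp);
}
```

The fix is trivial; the point is that without a written spec of who owns the "5 KiB" assumption, nothing forces the two halves of the program to agree.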

Which leads us to another *major* problem: *Unseen* bugs.

So, what are you comparing in community projects with funny, completely unknown bugs - vs - properly designed and built software? Pardon me, but that comes down to saying "bug-ridden sw with not-so-funny surprises built in is cheap (in time or whatever)". I don't think that's a point you want to make.

Which leads us back to the matter, regulation.

Let's assume there was regulation. Assume moreover that it is enforced and ignoring it is punishable or opens expensive liabilities. Let me tell you about one effect that is virtually certain: The whole foss thing breaks down in no time.
To make it worse, it's easier to go against foss than against closed-ware because the source code is easily available. Probably even a whole new "sue the shit out of all them projects" industry would spring up.

As for commercial software: Uhm, we do have regulatory agencies, we even have limited proof that regulations can help to create a better situation - but for some strange reason that has always been mostly limited to niches like air, rail, medical.

Now, what could the reason be? Maybe the gazillion $ behemoths, politics and certain well oiled mechanisms?

From what I see we have no technical issue, nor a regulatory one, but one with crooked politicians who don't care sh*t about the average Joe and Harry. You feel your smartphone is shitty? Well, buy a new one.
Oh and: Considerably better software would make it much harder for the state to eavesdrop on us.

It's a tough reality, but from what I see our software will be exactly as good as we ourselves make it. It's not Trump or Clinton or this or that agency that will bring us a safer and more secure world. It's us - if we wake up, understand, and act properly.

ab praeceptis • November 25, 2016 6:55 PM

Nick P

2 more points re "regulation and formal methods":

"they have to take time to understand, document, spec, etc unknowns up front."

Well, not necessarily. Again, let's divide into "not shitty standard sw" and "highly sensitive/critical sw". For the latter: tough luck; it's worth it and it's required, period. But for the former: No, not really. Because those (Wirth(-like)) languages have "built-ins". "some_var : range 3..42" nails it down. "for x in y'Range loop" nails it down. In C or Java one would need "gibberish" (as you called them *g) annotations. Basic spec is built in, too. Unlike "unsigned int /* start and end who knows where */", "range 3..42" *is* a basic spec.

Of course, that requires thinking up front ("why from [3 to 42]?"), but some thinking up front is required anyway. That's one of the major differences between hacking away and engineering.

Granted, that's rather rudimentary, but that would still kill quite many - and frequent! - bugs before they were born.
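For readers without a Wirth-language background, here is roughly what that built-in buys, hand-rolled in C. The `small_t` type and its helper are invented for illustration; in Ada, `subtype Small is Integer range 3 .. 42;` gets the compiler to enforce all of this for you:

```c
#include <assert.h>
#include <stdbool.h>

/* Hand-rolled equivalent of Ada's "range 3..42": a wrapper type whose
   invariant (3 <= value <= 42) must be maintained by every setter. */
typedef struct { int value; } small_t;

static bool small_set(small_t *s, int v) {
    if (v < 3 || v > 42) return false;  /* the "basic spec", checked by hand */
    s->value = v;
    return true;
}
```

The C version works, but the spec lives in one function that every writer must remember to call; in the Ada version the spec lives in the type, so it cannot be bypassed.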

Secondly, I **want** them to think up front.

Let me bring up curl/fgets/5KiB buffer again. That's exactly the kind of error that I made, too, when I was still programming in C. It's also an attitude issue. To summarize it somewhat brutally, the typical C developer thinks in terms of "how do I get it done?" - that's not good enough; that's the very mindset that is such fertile ground for heartbleeds.

I *want* them to think, to act like engineers, to not just ask how they get it done but to reflect on the context, too, the design, the data involved, etc. Many Pascal programmers approach things more like that. Most Modula programmers did it, and pretty much every Ada programmer does it.
Just a hint: Modula or Ada beginners often complain about things like needing to expressly decide whether a parameter is writable or const. Even worse, in C "const" is the special case; you need to write an extra qualifier. Et voila, few C programmers are in the habit of using "const" in parameters. If they use it, the meaning usually is "this must really, really, seriously not be changed or else ...".

My position: const should be the bloody default and a developer should take extra steps to have a writable parameter. Would save us from a ton of bugs. In critical stuff I even go so far as to write (comment) a full justification for a non-const parameter (no extra pain; writing specs - not from a committee perspective but as the guy who knows bloody well what that translates to for the programmers! - is a good education).
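A minimal C sketch of that discipline (the function names are invented): read-only parameters take `const` explicitly since the language won't default to it, and the one writable parameter carries its justification as a comment:

```c
#include <assert.h>
#include <string.h>

/* Read-only by contract: const documents and enforces that this
   function only inspects the string. */
static size_t count_spaces(const char *s) {
    size_t n = 0;
    for (; *s; s++)
        if (*s == ' ') n++;
    return n;
}

/* Non-const justified: this function's entire purpose is an in-place
   edit, replacing each space with an underscore. */
static void squash_spaces(char *s) {
    for (; *s; s++)
        if (*s == ' ') *s = '_';
}
```

The compiler now rejects any accidental write inside `count_spaces`, which is exactly the "const by default" safety net being argued for, minus the ergonomics of having the language assume it.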

It's not just languages. It's the mindset, too, that's needed for reliable, safe, secure sw. And good, well thought out formal methods will guide and support you (if in a sometimes demanding and strict way ...).

Now, to some first statements re OpenBSD

I agree with much. And I'm still undecided between your hard view (which has merits) and a softer one.
You see, the OpenBSD people never said (well, to my knowledge) that they were out to build a really secure OS (the way you or Clive or I understand that). Nope. They split from NetBSD and had the goal to create a secure BSD/Unix.

I'm feeling that you are judging them too hard. Yes, one might say they have a big mouth, but you see, when they say "secure OS" they clearly mean "secure Unix", and even that is relative, as in "more secure than the others".

One might call it funny or sad, but looking closer shows that they usually created quite good stuff when they themselves created it (granted, still with a limited Unix mindset, but anyway). The lousy stuff is mostly the stuff they "inherited" and worked hard to make better.

Another point that makes me a little soft in my judgement is that they are the small BSD and in a niche. The FreeBSD or NetBSD guys may come up with lots of arguments, but the OpenBSD guys live or die with their "secure" niche. So, yes, it's bending things a little saying "just 2 bugs ..." but hey, what's the alternative? Should they say "We'll throw away 15 years of work because we now know what we didn't understand when we were youngsters"? Because that's the tough truth. It was an understandable and laudable idea but also a stupid one to make something secure out of NetBSD: After all, unless they were ready to write a full userland, too, they were stuck in a quite tight frame.

And btw, I'd install OpenBSD over Qubes OS every day and Sundays twice.

I personally think that a slap on their hands might be adequate and maybe even helpful, but slamming them seems undeserved in my mind's eye (and heart).

And hey, how many volunteer groups of a dozen or two people do you know who did better? All them L4 "OSs" are quite naked although they were pampered with millions of taxpayer money. Same with pretty much all other "secure" or secure OSs. Either lots of public money or fat corp budgets. One of the few that come to mind is Minix 3, which, however, doesn't call itself a secure OS (although it offers more than many). It also burned millions of public money, but it's at least in a kind of usable state.

The three biggest rocks are out of the way. We have a) learned much about OS's and made quite some progress (as well as a couple of steps back ...), we have b) access to lots of information for drivers (which as you know was maybe the toughest rock of all), and we have c) much, much better tools.

It seems it's the programmers and the mindset that's still lacking.

Let us not be too hard on the OpenBSD people.



Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient, an IBM Company.