Rhys • March 10, 2017 5:04 PM

Speaking only for me, it always makes my warning signals light up when someone says "government".

Instead of more anointed regulators and policing, could this whole issue be seen more as a civil (tort law) problem?

Or even as self-organizing insurance (a pooling of risk and liability), with behavior modification through "experience modification"?

Statutes and administrative rulings only serve to codify 'what was', not 'what is' or 'what will be'.

The IoT for life- or safety-critical systems needs a practice rigor different from that of a refrigerator or thermostat.

I am just saying: with the unnecessary impedance of innovation by anxieties this broad, haven't we seen that we only license an evil greater than the disease?

Ross Snider • March 10, 2017 6:32 PM

Wonderful breakdown of the regulation toolbox. Breaking these down further (as you did when discussing liabilities and Open Source) and trying to find the right kind of regulation is absolutely critical.

Nice, succinct, and informative.

I feel like we ought to go for consumer labeling first, because of its compatibility with open source and because it incentivizes customer purchase decisions that include some real assessment of the security posture of various products. The costs are probably fairly minimal and scale with the size of the organization producing the code (with larger organizations being able to bear larger costs).

Licensure is also pretty interesting. I wouldn't mind being liable (or losing a license) in the case that I write code that causes death; a good example might be the 2003 Northeast blackout, which a software bug helped cause and which has been linked to a number of deaths.

Nick P • March 10, 2017 7:11 PM

@ Rhys

Regulation has already worked for software safety and security. The court-and-insurance model, by contrast, has so far led to companies just throwing lawyers at their problems: they do just enough to convince a lay judge or jury that they made a solid attempt. Krebs reported that a bank even argued in court (successfully, in that case) that requiring a username and password on potentially-infected computers constituted "reasonable security" for online banking. The private sector, on both the supplier and insurance sides, can only be trusted to do what's good for its bottom line. So far, that hasn't been real safety or security in unregulated environments.

As far as INFOSEC goes, the first regulations were done under the TCSEC, with good results. The good results went away the second the regulations were canceled in favor of acquiring commercial (COTS) products. At that point, only a few niche suppliers remained, and they dropped their assurance over time as it was no longer a competitive advantage. Suppliers in general conspired to leave so many vulnerabilities in software that users got the false impression that software was inherently like that. They didn't even expect better. That made it even harder for companies to justify high-security alternatives to investors.

The next model was safety-critical development in aerospace, rail, and so on. The main one is DO-178B (now DO-178C), which forced certain assurance activities to happen to show that software did what it was supposed to do and *only* what it was supposed to do. These activities increased as the criticality of the system increased. The cost of recertification after failures led the industry as a whole to invest in (a) tech and methods that dramatically improve software quality and (b) reusable, pre-certified components sold at significantly lower cost than in-house development.

Tools supporting these efforts included Esterel's SCADE, AdaCore's Ada/SPARK tools, stack-usage analyzers, WCET tools, the CompCert compiler, and so on. Components included RTOS's (e.g. INTEGRITY-178B), graphics stacks (especially ALT Software's), and partitioning networking or filesystems that isolate failures into domains. The graphics drivers were a big piece of evidence for me, given that the private companies NVIDIA and ATI *never* made robust drivers, even when customers paying $1,000+ a unit for their products wanted robust drivers to come with them. As an oligopoly, they mutually ignored that demand to boost profits by a tiny amount, until regulations forced them to produce robust drivers for their products to be used in aerospace.

So the only thing that's worked is sound regulation. It's happened twice. It should happen again, pervasively, across the industry. It can be as simple as requiring memory safety and full input validation on every function; those by themselves would knock out the vast majority of vulnerabilities. We could go further, such as requiring that every feature have at least one pass-or-fail test to ensure it works as advertised; lots of testing strategies are available. Maybe require at least one code review of each critical module. I'd say the regulations should apply only to software used commercially or by government, especially if anyone profits from it. They also need to be simple, with minimal time required for maximum gain, like the ones I just mentioned.
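As a concrete illustration of the kind of rule described above, here is a minimal Python sketch of full input validation on one function plus a pass-and-fail test for its one feature. The function, its limits, and the test cases are illustrative assumptions, not text from any actual regulation:

```python
def parse_port(text: str) -> int:
    """Parse a TCP port number from untrusted text, rejecting
    anything that is not a decimal integer in 1..65535."""
    if not text.isdigit():                 # rejects "", "-1", "80; rm -rf /"
        raise ValueError(f"not a decimal integer: {text!r}")
    port = int(text)
    if not 1 <= port <= 65535:             # rejects out-of-range values
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port() -> None:
    """One pass case and several fail cases for the feature."""
    assert parse_port("443") == 443        # pass: valid input accepted
    for bad in ("", "-1", "70000", "80; rm -rf /"):
        try:
            parse_port(bad)
        except ValueError:
            continue                       # fail case correctly rejected
        raise AssertionError(f"accepted invalid input {bad!r}")

test_parse_port()
```

The point is not these specific checks but that both the validation and the test are mechanical enough to mandate and to audit.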

Anyone interested in a full list can see my comment here, which lists methods shown with strong evidence to increase the assurance of software. Any combination of those might do some good in regulations.

glenn everhart • March 11, 2017 9:17 AM

At first blush, the idea that government might regulate software practices to force security sounds appealing, given that private efforts get little traction.

However, it is also apparent that government is neither infallible nor of constant good will. Regulations would be devised by people who are often completely ignorant of what they are supposed to be guarding, and enforced in ways directed more at "gaining scalps" for prosecutors than at mitigating social risks or losses. Expecting regulation to fix software vulnerability is like expecting your garbage man to treat a ruptured appendix. He might have some idea, at some level, of the effect he wants to achieve, but most of the time has no experience in how to achieve that effect without killing the patient. (We'll leave aside the thought that he might also legislate that nobody but a garbage man may perform this operation.)

Let us drop back a bit, though, and consider the way malware operates. At a high level, it runs by doing things that are not authorized by whoever is running the computer, and which are generally not known to the computer's owner. This gets us into trouble because computer functions are authorized by asking "is this person allowed to perform this operation?", where the "person" is generally assumed to be the machine's owner and the operation is usually constrained only very coarsely (due to the cost of administering very fine-grained permissions).
This thought suggests that a practice saying "when you publish software you must indicate what it will do" might be helpful. Commercial vendors might claim they don't want customers knowing what their products do, but disclosure, even at the high level of which areas of a computer are touched, could be useful in allowing a knowledgeable user to decide whether to run a piece of code. Malware would be tripped up immediately. Open-source software would be less affected, since its source code is a fully detailed description of its function, but a higher-level description of what is being done should be encouraged there too.
Anything implementing an interpreter might, of course, spoil such a description. If there exists a command telling a program "open file X for read" or "write file Y", the program could read or write anything not externally blocked, even though it might almost never be given such a command. Trying to make the disclosure exhaustive will quickly get you into the weeds and lead to useless, probably boilerplate documents that help nobody.
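A minimal sketch of what such a high-level disclosure might look like, assuming a hypothetical manifest format (the category names and the checking function below are my invention, not an existing standard):

```python
# Hypothetical published manifest: a high-level statement of which
# areas of the machine this program touches.
MANIFEST = {
    "reads_files":      ["./config/", "./data/"],
    "writes_files":     ["./output/"],
    "network_hosts":    [],        # declares: no network use at all
    "spawns_processes": False,
}

def write_is_declared(path: str) -> bool:
    """Would a write to `path` fall inside the declared areas?"""
    return any(path.startswith(prefix) for prefix in MANIFEST["writes_files"])

# A knowledgeable user, or an enforcing wrapper, can then reject
# behavior the publisher never disclosed:
assert write_is_declared("./output/report.txt")   # declared, allowed
assert not write_is_declared("/etc/passwd")       # undeclared, suspicious
```

Open-source projects could publish the same kind of manifest alongside the code as the encouraged higher-level description.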

Let us think about what offering a public service on the internet means, though. It has long been standard practice that anyone might attempt an ftp connection to some site using the username "anonymous", and, if that is accepted, be given file access to some storage, without any accusation of wrongdoing. Using the username "guest" or "user" would also generally be viewed as benign. Accepting connections on port 80 or 443 for http or https functions is likewise an indication that the site accepting them has authorized such use and offers it to anyone. In general, we can find what services are offered on the internet by attempting to connect to them and finding them open to us.
An ambiguity arises as soon as user credentials are requested, and while common names like "anonymous" or "guest" sometimes indicate harmless intent, some sites want a mail address for a username and don't really care what address is given. It is thus reasonable to think of a public service on the internet as a service that can be obtained with "little" effort by connecting to a server. So then, is access to a machine using, say, the username "admin" and a default password access to a public service? I am inclined to think it is, though the operator of that machine might not have intended to offer it as such.
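The discovery idea above, that we learn what services are offered by attempting to connect, can be sketched as follows. The probe targets a throwaway local listener rather than a real site; the host, port, and timeout are illustrative:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    i.e. if some service is being offered there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand up a throwaway local listener to play the role of a public server.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(port_open("127.0.0.1", port))      # True: the "service" is offered
listener.close()
print(port_open("127.0.0.1", port))      # nothing listening any more
```

Whether actually using a service found this way counts as using a public service is exactly the ambiguity described above.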
The next question is: should there be liability for actions your machine takes as part of a public service you run?

Now, if you buy a router and it offers some services you don't know about, because the vendor did not tell you about them prominently enough, then the one running the service is arguably the vendor. If, on the other hand, you set up a service yourself and don't control what is done with it, you are the one running it. Folks might remember when ftp sites were ubiquitous and some got in trouble for allowing everyone read/write access, so that the sites became drop sites for stolen software or documents. The implementors did not intend their sites to be places to quickly replicate files that should not have gotten out, but this kind of abuse meant that just setting up an ftp site and letting everyone write to it with little or no supervision was a problem. Likewise, some service that can be misused to, say, send out millions of mails might be set up with innocent intent and might run a long time without trouble, but become a problem due to its users.

If one sets liability rules for services offered to the public which cause trouble, there are lots of ways those rules could themselves cause trouble; but if they are instituted with exemptions for the innocent (a prime example being the not-too-technically-savvy individual who runs some machine that gets connected to a network and behaves in ways he is not informed of), these considerations would at least give some rational direction to deciding what might be done to whom. There needs to be a clear showing of responsibility, and also clear statements about apportionment of blame, which occur in civil damages cases but not, AFAIK, in criminal proceedings.

I don't think government has any special wisdom about liability or about setting up regulations that can be relied on. However, whatever limitations may have prevented liability issues from being addressed might be lifted, and perhaps case law might come up with results that could help with automata that become dangerous partly through emergent effects. Thinking of these functions as "public services", as above, may help in that evolution.

Emma Lilliestam • March 20, 2017 9:05 AM

About "Regulating the Internet of Things": you mention in the NYMag essay that

"As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval."

However, I have not seen any other medical IoT researchers make this claim, and in his Black Hat 2013 talk, Jay Radcliffe even claims the opposite (32 minutes in):

Nymag essay:
Blackhat talk:

Do you have any more insights into this?

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.