Hacking Drug Pumps

When you connect hospital drug pumps to the Internet, they're hackable. This is only surprising to people who aren't paying attention.

Rios says when he first told Hospira a year ago that hackers could update the firmware on its pumps, the company "didn't believe it could be done." Hospira insisted there was "separation" between the communications module and the circuit board that would make this impossible. Rios says technically there is physical separation between the two. But the serial cable provides a bridge to jump from one to the other.

"From an architecture standpoint, it looks like these two modules are separated," he says. "But when you open the device up, you can see they're actually connected with a serial cable, and they"re connected in a way that you can actually change the core software on the pump."

An attacker wouldn't need physical access to the pump. The communication modules are connected to hospital networks, which are in turn connected to the Internet. "You can talk to that communication module over the network or over a wireless network," Rios warns.

Hospira knows this, he says, because this is how it delivers firmware updates to its pumps. Yet despite this, he says, the company insists that "the separation makes it so you can't hurt someone. So we're going to develop a proof-of-concept that proves that's not true."

One of the biggest conceptual problems we have is that something is believed secure until demonstrated otherwise. We need to reverse that: everything should be believed insecure until demonstrated otherwise.

Posted on June 17, 2015 at 2:02 PM • 65 Comments

Comments

keiner • June 17, 2015 2:48 PM

Windows has to prove it's secure? COOOL!

Check statistics: proving something is correct (safe) is NOT possible. You can only disprove a hypothesis. Check your Popper...

Alan Kaminsky • June 17, 2015 2:49 PM

Patient in hospital is on a Hospira drug pump. Hacker reprograms pump to deliver fatal overdose of drug. Patient's grieving and outraged family sues hospital, doctors, nurses, Hospira, etc. for millions. Scenario repeats until Hospira makes their drug pumps secure.

"Yes, and how many deaths will it take 'till he knows
That too many people have died?
The answer, my friend, is blowin' in the wind
The answer is blowin' in the wind"
-- Bob Dylan

Name • June 17, 2015 2:53 PM

"everything should be believed insecure until demonstrated otherwise"

I didn't think anything could be demonstrated to be secure beyond the present location in space-time, which isn't terribly useful (unless you have a Stasis Gun, which I believe you don't).

I prescribe Disconnection, because I don't know anything about network security, except that disconnection should be very effective. If that's inconvenient, well, hospitals are inconvenient to begin with!

Andrey Fedorov • June 17, 2015 3:00 PM

Can it be connected to the internet and still be secure? I'll believe these claims when I see the proof of concept.

> One of the biggest conceptual problems we have is that something is believed secure until demonstrated otherwise. We need to reverse that: everything should be believed insecure until demonstrated otherwise.

Oh, right. Interesting. But why?

I would argue that whether we believe something to be secure is a function of the resources of the company making it, the risk to them of it being insecure, the complexity of the product, and probably some other things. Saying "we need proof" is a bit silly, even without going to the epistemological extremes.

Guy • June 17, 2015 3:42 PM

> One of the biggest conceptual problems we have is that something is believed secure until demonstrated otherwise. We need to reverse that: everything should be believed insecure until demonstrated otherwise.

Succinct and brilliantly said. You should sell signed portraits with that quote printed on them. I'd hang one in my office and point to it whenever the boss comes in to ask why we're spending so much time and effort on vulnerability testing and mitigation.

Of course, as we know, unless every possible input and output of a system is known and can be mapped in advance, that system can never be declared 100% proved secure. Try explaining something like that to a bean counter, though.

Thunderbird • June 17, 2015 3:45 PM

> I would argue that whether we believe something to be secure is a function of the resources of the company making it, the risk to them of it being insecure, the complexity of the product, and probably some other things. Saying "we need proof" is a bit silly, even without going to the epistemological extremes.

Bruce didn't say "proof," he said "everything should be believed insecure until demonstrated otherwise." I think "proof" crept in in the first comment.

I think that the original statement is more prudent, for a number of reasons. For instance: even a large company with a great deal of resources and a lot on the line is not a single organism--there are likely pockets of competence and of incompetence, and "ISO 9000" or "Six Sigma" or whatever the methodology du jour is won't ensure that sufficient care was taken in any given case. For another instance: I don't know what the risks to the manufacturer are. Perhaps they're teetering on the brink of bankruptcy because of cash flow issues and they perceive their biggest risk is not shipping their insecure product, vs. a possible suit for damages in the indefinite future (and that after the management team has cashed in their options and moved to Hawaii).

David Mays • June 17, 2015 3:48 PM

> One of the biggest conceptual problems we have is that something is believed secure until demonstrated otherwise.

Security isn't binary. It exists along a spectrum. Something is not "either secure or insecure"; there are merely differing levels of security. Of course I know Bruce knows this.

But in the case of a medical device such as this, I am stunned that there is not even a cryptographic signature required on the drug library. Untrusted user data is such an obvious attack vector that I have to wonder who is designing these systems, and if the FDA even contemplates security in its certification process.
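
A cryptographic check on the drug library is cheap to sketch. Here is a minimal illustration in Python using a symmetric HMAC from the standard library; a real device would want an asymmetric signature so the pump never holds a signing secret, and the key and library format below are invented for the example:

```python
import hashlib
import hmac
import json

# Hypothetical shared key provisioned into the pump at manufacture.
SHARED_KEY = b"example-key-not-for-production"

def sign_library(library: dict) -> bytes:
    """Compute an authentication tag over a canonical encoding of the library."""
    payload = json.dumps(library, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def accept_library(library: dict, tag: bytes) -> bool:
    """Accept an uploaded drug library only if its tag verifies."""
    expected = sign_library(library)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

library = {"drug": "heparin", "max_rate_ml_h": 50}
tag = sign_library(library)
assert accept_library(library, tag)      # untampered upload: accepted
library["max_rate_ml_h"] = 5000          # attacker raises the limit
assert not accept_library(library, tag)  # tampered upload: rejected
```

Even this much would stop the "untrusted user data" attack vector: a library edited in transit no longer verifies.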

I also posit that a physical interlock that engages while the device is in use would be a good minimal measure, to prevent firmware or drug library updates when there is a medicine vial inserted. Ground a certain pin on the SOC to instruct it to ignore serial communication, or even more simply, pull down the TX or RX pin on the serial port to prevent that bridge from being used while the device is in use on a patient.
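
As a toy software model of that interlock (the real mitigation is electrical, as described above; the class and method names here are invented for illustration):

```python
class PumpController:
    """Toy model: firmware updates are refused while a vial is loaded."""

    def __init__(self):
        self.vial_inserted = False
        self.firmware = "v1.0"

    def load_vial(self):
        self.vial_inserted = True

    def remove_vial(self):
        self.vial_inserted = False

    def update_firmware(self, image: str) -> bool:
        # Interlock engaged: behave as if the serial line were pulled down.
        if self.vial_inserted:
            return False
        self.firmware = image
        return True

pump = PumpController()
pump.load_vial()
assert pump.update_firmware("evil-image") is False  # blocked while in use
assert pump.firmware == "v1.0"
pump.remove_vial()
assert pump.update_firmware("v1.1") is True         # allowed when idle
```

The hardware version is strictly stronger, since a software-only check can itself be reprogrammed.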

tyr • June 17, 2015 3:48 PM

I'm just a dumb old country boy so this occurs to me.

Why not disconnect the serial link if the pump is in use on a patient? And I mean a physical disconnection. Any upgrades should have to be verified by the company and then the pump should be ready for usage again. Imperfect, because you can always do a workaround if sufficiently nefarious and dedicated, but that should remove most of the random dangers of the current lunatic model pumps.

The IoT model is broken if we discard all security for ease of usage when adding a network off switch will prevent the worst of the problems.

You can then charge extra for the feature... : ^ )

Nick P • June 17, 2015 3:51 PM

@ keiner

You can prove safety and security: google formal verification of software. The method is creating an accurate model of both the actions of the system and the safety/security properties. Then you show one embodies the other. Must ensure all abnormal states fail-safe. So you're really proving a positive in the sense that it possesses one or more properties.

Systems done this way have had incredible reliability and security. Many exhibited no failures during pentesting or field use.
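
As a miniature illustration of proving a property over all states: real formal verification uses theorem provers or model checkers over a model of the system, but for a tiny dosing model one can simply enumerate the entire (finite) input space and check the safety property exhaustively. The limits and function below are invented for the sketch:

```python
MAX_RATE = 10        # ml/h  -- hypothetical hard safety limit
MAX_DURATION = 24    # hours -- hypothetical maximum programmable duration

def program_pump(rate, duration):
    """Return the total dose for a request, or None (fail-safe) if invalid."""
    if not (0 <= rate <= MAX_RATE and 0 <= duration <= MAX_DURATION):
        return None  # every abnormal state fails safe
    return rate * duration

# Enumerate the whole input space, including out-of-range values, and check
# the safety property: no accepted program exceeds the dose bound.
for rate in range(-5, MAX_RATE + 6):
    for duration in range(-5, MAX_DURATION + 6):
        dose = program_pump(rate, duration)
        assert dose is None or dose <= MAX_RATE * MAX_DURATION
```

Exhaustive checking only works because the state space here is tiny; the formal methods Nick P describes scale the same idea to real systems by reasoning symbolically instead of enumerating.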

albert • June 17, 2015 4:13 PM

@Alan Kaminsky,
.
This is true of most industries. Accountants can calculate lawsuit costs, damage payouts, and claims on their own liability insurance. They can predict the effect of implementing safety vs. profits. This stuff doesn't ever happen in a vacuum.
.
There's a trend in even medium and large companies to hire contractors for programming and design work. Aside from the comm module (which is probably supplied by vendors), the control firmware has to be comparatively simple. There has to be data, at least dosage and duration, which probably goes into flash. So you can mess with the data or the firmware; either way, you are screwed.
.
Advantages:
.
1. Remote update of firmware
2. Remote change of data
3. Remote monitoring (this can be done safely if it's read only)
.
3 appears essential; 1 & 2 are strictly for cost savings.
.
The whole remote firmware update thing is becoming ubiquitous. Don't we have anyone who can program embedded systems that won't need to be 'updated' (bug fixed) so frequently that a permanent remote connection is needed? There appears to be a serious lack of expertise and programming standards in the field of embedded systems. The easier it is to change the firmware, the less folks worry if it's buggy. Add to that a total lack of security considerations.
.
...

65535 • June 17, 2015 4:17 PM

I had a horrible thought. Would the US Navy or the NSA buy this type of zero-day [or n-day] exploit?

If so, it would stimulate a cottage industry of zero-day and n-day exploits for all sorts of medical, automobile, power plant, and gas transmission pipeline systems, with fatal consequences.

This behavior gets into the “moral hazard” territory where the government infuses a huge amount of money into an industry for a specified outcome that could prove very dangerous to society as a whole.

There are thousands upon thousands of coders who have heard “success stories” of selling zero days to governments for large sums of money [Vupen, the Hacking Team and so on]. They are champing at the bit for a slice of this zero-day market.

At some point these coders will be motivated to make successful exploits which will overdose people, crash cars, melt down power plants and blow up gas pipelines, with fatal results.

Humans make code with mistakes. I am sure the vast majority of code can be hacked [software and hardware alike]. I have a feeling that encouraging hackers to do that for a monetary reward will have unexpected and harmful consequences.

Sure, the Chinese and Russians are working on cyber exploits to explode munitions on our ships and crash our drone aircraft – but attacking the civilian medical sector seems extreme.

Once you start a cyber war it will be difficult to stop.

Karl Lembke • June 17, 2015 4:18 PM

Prediction: In an upcoming episode of "CSI Cyber", a wave of deaths will occur when hackers hack drug pumps.

David Mays • June 17, 2015 4:21 PM

@albert

The underlying problem is that software development is not a true Engineering discipline yet. If software that controls systems that implicate "life safety" required a Professional Engineer to sign off on the design, I think things would be a bit different.

Unfortunately, too many people still think that a Computer Science degree makes you a computer programmer, when in fact what it mostly does is make you a mathematician with a light dusting of programming in whatever last year's popular language was.

Make a true "Software Engineering" curriculum, and we might get somewhere.

Justin • June 17, 2015 4:27 PM

@ Nick P

I have to agree with you here. My knowledge of high assurance methods is extremely limited, but this is definitely a case where formal verification seems appropriate.

Key (it seems to me) would be keeping the whole system simple enough so it can be formally verified with reasonable resources. We're not building anything so complicated as a jetliner here, and all the same, it is a life-critical application.

@ David Mays

True "Software Engineers" have to be able to do the math and get it right, too. Especially when methods like formal verification are in use.

David Mays • June 17, 2015 4:33 PM

@Justin

I don't disagree. I would characterize the math aspect as "necessary but insufficient". I would also say that training in general Engineering principles would go a long way.

You know what else they apparently don't teach in Computer Science curricula? Formal testing. I am blown away in this day and age to still get software developer applicants who don't know what a unit test is. It's simply inexcusable at this point.
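
For anyone who hasn't met one, a unit test is just a small program that checks one piece of code against its expected behaviour. A minimal example with Python's built-in unittest module (the dose function is invented for illustration):

```python
import unittest

def dose_per_interval(total_ml, hours, intervals_per_hour):
    """Split a total dose evenly across timed intervals."""
    if hours <= 0 or intervals_per_hour <= 0:
        raise ValueError("duration and interval rate must be positive")
    return total_ml / (hours * intervals_per_hour)

class TestDose(unittest.TestCase):
    def test_even_split(self):
        self.assertAlmostEqual(dose_per_interval(48, 24, 2), 1.0)

    def test_rejects_zero_duration(self):
        with self.assertRaises(ValueError):
            dose_per_interval(48, 0, 2)

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDose)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Note the second test: checking that bad input is *rejected* is exactly the habit that matters for a drug pump.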

Matt • June 17, 2015 4:50 PM

I would propose that proving something secure is much like proving something true in science: it's impossible. All you can do in science is test a hypothesis and fail to disprove it; in security, all you can do is fail to breach it. Once the security has held up enough times, you can call it "secure" in the same way that a hypothesis becomes a theory becomes a law, but even the "laws" of physics *potentially* can be disproved. Any "secure" mechanism can potentially be broken.

All you can do is demonstrate a set of things that a security mechanism hasn't been defeated by yet.

TimH • June 17, 2015 5:12 PM

@Matt: to agree with you, I'd suggest that the word 'proof' is too often used when the context really means 'strong evidence'. Proof is a categorical concept, and really should only be used with mathematical structures.

Justin • June 17, 2015 5:46 PM

@ TimH, Matt

Computer programs are mathematical structures, and they can be proven to correctly satisfy certain properties in a mathematical sense. That is what formal verification is all about.
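
As a tiny concrete instance, the property "a clamped dose never exceeds the limit" can be stated and machine-checked. A sketch in Lean 4 (the function and names are invented for the example):

```lean
-- A toy safety property: a clamped dose can never exceed the limit.
def clampDose (requested limit : Nat) : Nat :=
  if requested ≤ limit then requested else limit

theorem clampDose_le (requested limit : Nat) :
    clampDose requested limit ≤ limit := by
  unfold clampDose
  split
  · assumption
  · exact Nat.le_refl limit
```

Once the theorem compiles, the property holds for *every* input, not just the ones a tester happened to try.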

albert • June 17, 2015 7:07 PM

@65535,
"...Sure, the Chinese and Russians are working on cyber exploits to explode munitions on our ships and crashing our drone aircraft – but attacking the civilian medical sector seems extreme...."
This is probably more likely on the US side. Let's just say that _everyone_ is working on cyber-attack/defense strategies, both military and civilian. As you said, hacking for dollars opens the whole world to danger. Attacks on a country's infrastructure could come from anywhere, even from within that country, and accurate attribution is a bitch to prove. Major military powers won't start cyber-wars against each other. It's just stupid.

@David Mays,
In control systems programming, the first requirement for a software engineer is to understand the system. Not a general overview, but the details. Anyone can learn to write code. You don't need to be a mathematician. I hope programmers aren't reinventing math functions when there are tested math libraries available. If you don't know how a product works, you can't write software to control it. For programmers, I look for general knowledge of mechanical systems, electrical controls, basic electronics, creativity, and the ability to assess concepts from top to bottom. EKG machine experience won't help much in programming a drug dispenser.
Testing: W.E. Deming said a long time ago: "You can't inspect quality into a product." Similarly, you can't 'test' quality into a program. Every line of code has to be 'tested' as it is written. I'd prefer code reviews by several different people. 'What if' scenarios: Is there a possibility of a race condition? Is there a question of a variable's value being incorrect when it's used? What about interrupts? I'm extremely uncomfortable simply testing for these issues. Sometimes a rewrite is the best solution.

.
...

Robert.Walter • June 17, 2015 7:36 PM

I think that many companies claim their product is secure because they think they are above average in their security implementation; the rest of them claim this because they are trying to dismiss the liability implications with smoke and mirrors.

Justin • June 17, 2015 7:44 PM

@ albert

> Anyone can learn to write code. You don't need to be a mathematician. I hope programmers aren't reinventing math functions when there are tested math libraries available.

Nobody said programmers should reinvent math functions. But programmers need the same kind of reasoning a mathematician uses to prove a theorem in order to reason why the code they are writing does what it is supposed to do, correctly.

And if anyone can learn to write code without learning to reason correctly about it, I hope that code isn't dispensing drugs I need in the correct dosage at the correct time to stay alive.

(@Tom: sorry to hear about your posts disappearing. One of my posts on a different thread ended up owned by someone else, by the way. I suspect the forum may be hacked.)

@ Robert.Walter

That is probably true, although in no way does that justify "average" security implementation.

Nick P • June 17, 2015 8:20 PM

@ albert

Good point. Older high security work, along with every DEFCON & Black Hat conference, showed the very best method to find issues was having experts in domain and in systems just sit down to think about what can go wrong. That's requirements, design, interface, implementation and environment. Such reviews should always happen to every part of the lifecycle of a safety- or security-critical system. I agree with you to have several different minds looking at it too.

@ Justin

Good news is there's a lot of academic and even some commercial work in verifying medical devices. Here are some examples of formal methods in use: an infusion pump, a pacemaker, another pacemaker, a medical protocol, a monitoring system, and a user interface. There's also an example of a lightweight method. Regulators are also looking at these things and are wisely leaning in favor of outcome-based regulations: use whatever tool you want so long as you can show it works and the product is safe.

Hopefully, the situation becomes more like the DO-178B market: a whole ecosystem of companies making reusable, certified components with high quality. Those vendors already market to the medical industry but more adoption would lead to more development.

Nate • June 17, 2015 9:32 PM

@Nick P: "Hopefully, the situation becomes more like the DO-178B market: a whole ecosystem of companies making reusable, certified components with high quality. "


I wish we could have that, yes. But that's something that the NSA has stolen from us: we now have no plausible basis for trust in any component that's more complicated than -- perhaps -- a single transistor.

We know intelligence agencies lie to us. We know they require our manufacturers to lie to us. We know they are EXTREMELY interested in getting subverted components into the consumer and corporate supply chain (as well as their rivals' defense supply chain), and have essentially infinite budgets to do so and the desire to do it in bulk to make their lives easier.

We also know that they (as well as commercial interests) have subverted the highest standards committees, and that our highest governments have bought deeply into the idea that they are fighting a long multi-generational war against 'terrorists' who can be any person, anywhere, in any civilian environment.

Given this - how DO we certify the behaviour of ANY component which could become part of a shooting cyberwar? Which means everything?

Just asking the manufacturer to say 'yes, it does X' at a high level doesn't suffice. Nor does asking an 'independent' inspection body to agree.

Didn't we do just that with Certificate Authorities? And yet, other than making a few overnight millionaires, it hasn't worked. We basically now have no trust that any CA won't provide fake signing keys to any sufficiently evil agency.

It seems that the only approach that can work is
1) make our components as simple and basic as possible. Complexity hides backdoors.
2) make their production TOTALLY transparent. Full source disclosure, mass deployment of reverse-engineering tools. 'Trust but verify' every component, at every level, at random. Even better, ship EVERYTHING as high-level definitions and source code and run your own compilers and 3D printers.
3) REQUIRE multiple physical manufacturing sources for EVERYTHING. Nothing which exists only as a single company's pet black box product. Randomly swap suppliers at will to at least minimise the chance of subversion.
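
Point 3 is essentially the reproducible-builds idea: if independent suppliers produce the same design, their artifacts should match bit-for-bit, and any mismatch flags possible subversion. A toy cross-check in Python (supplier names and payloads are invented):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Fingerprint one build artifact."""
    return hashlib.sha256(artifact).hexdigest()

def cross_check(builds: dict) -> bool:
    """Accept a component only if every independent build hashes identically."""
    digests = {digest(blob) for blob in builds.values()}
    return len(digests) == 1

honest = b"\x7fELF...firmware-image"  # stand-in for a real build artifact
assert cross_check({"fab_a": honest, "fab_b": honest, "fab_c": honest})
assert not cross_check({"fab_a": honest, "fab_b": honest + b"\xff"})  # one subverted
```

This only detects subversion if the suppliers don't collude, which is exactly why the randomized, multi-source procurement above matters.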


Figureitout • June 17, 2015 9:52 PM

David Mays
--Yeah, the problem is these kinds of defenses (preventing re-programming; well... the cheeky git Clive Robinson has mentioned a way of still getting serial *data* onto a micro (he doesn't say what memory) via what I assume are pins separate from the standard programming pins, via an LED...) are chip-specific, and you also NEED to know all the connections on the board (tracing w/ an ohm-meter is the best one can do "on the outside") b/c pins often share the same buses. For physical attacks (at the extreme end, i.e. someone's trying to kill you), one can do pin obfuscation and also physically remove programming ports. Suppose one could add a bit of firmware (or just not initialize pins) as well and lock it w/ security bits, but those can be flipped if someone goes the extra mile and knows precisely how to reprogram it. These protections need to start from chip manufacturers.

albert
--I still call myself a "junior" dev but I learn some new trick in each firmware and what looks like a dirty hack (which, ok it probably is) is most likely due to hardware changes dealing w/ simply "strange effects" (no other words) of physics (having electronics work outside under unknown stresses...makes life so much harder...).

The phenomenon of remote firmware updating needs to die now before it gets too easy (like our routers lol, f*ck) unless we don't want to take security seriously. Who wants to go back to OTP-ROM's and UV-EPROMs? Need to be pretty confident for OTP-memory lol. Remote monitoring is fine so long as it's not super critical data, just monitoring data.

Regardless, it's unbelievable that some companies can be so careless w/ what is *medical* equipment saving lives; it needs to be reliable AND secure, period.

Nate RE: lack of trust
--Ugh..amen...We need to take back our computers...

Nate • June 17, 2015 9:58 PM

And now I see that DO-178B basically is what I'm asking for - https://en.wikipedia.org/wiki/DO-178B - lots of documentation at every step, so yeah, we need that to be the norm for consumer/corporate tech, please.

I feel like I pretty much understood the 8-bit 6502 and Z-80 machines of the 80s. The hardware was simple and well understood, we had full dumps of the ROMs, there was really nowhere anything nasty could hide.

Similarly with the early IBM 8088 PCs.

The mid-90s is when things started to shift, and I'll arbitrarily point my finger at the Intel Pentium. That, I think, is when we stopped being able to _know_ our CPU, because it was actually an opaque microcode blob just emulating a real CPU. From that point on we lost control.

I believe at a minimum for a trustworthy system we need to get rid of all proprietary firmware blobs, and that includes CPU microcode. Richard Stallman has been banging on about this for decades now, and sadly he's once again been proved right.

What I would like to see is have almost all of our motherboard chipset replaced with something very, very dumb and simple: a regular FPGA array, totally programmable by software. Then have a tiny known-good bootloader (that could be maybe verified by physical or X-ray inspection) and EVERYTHING ELSE programmed into it by software after it leaves the factory.

We would need a better systems language than C: something more like Verilog that could compile right down to silicon or at least FPGA elements. I'm sure this would give us much better performance too.

We need our system hardware to be SO TINY AND SO SIMPLE that anyone can prove that the core is correct; and then the rest (the software) has to be completely open to inspection by anyone at any point. Then we need to produce it in bulk from multiple sources.

And the next time any manufacturer talks about 'innovating' we say 'sure, you can write any program you like, but you do it here in front of us, in source code, with your hands in the open, so we can follow every move.'

Not many tech companies are going to like that, but we'd all be safe.

Nate • June 17, 2015 10:20 PM

This, incidentally, is why I feel queasy about strongly typed functional programming languages like Haskell: because the very idea of 'typing' basically argues 'the user cannot write their own systems language in the standard systems language'. And that's not going to work in a fully open, distributed, never-rebooting system where the OS _must_ be under the high-level control of the users at all times.

I can't put it completely into words (because it's so fundamentally self-evident to me, yet completely against the mainstream of CS and mathematics), but I feel very strongly that there is something inherently wrong about the computer-science concept of 'type'. It's not how the human mind works and it doesn't permit the kind of systems we need desperately to be building now. To me, a type declaration is a kind of logical assertion; that's all it is. It's not an undefined kind of mathematical object floating above logic, as we treat it today.

For example: What is the type of the statement 'X is a variable of type Y'? You'd better have an answer, because your interpreter/compiler is going to be parsing a LOT of those statements, and reading them from the Internet at runtime, and creating new types dynamically - and while 'it's a string of letters' is true, it's missing all of the important points.

I feel like a grumpy old man shouting at clouds about this, but then I look at some of the really smart people - for instance Alan Kay and Chuck Moore - and they seem to have been on a very similar wavelength. I believe we're hard-coding far too much complexity into our CPUs, motherboards and compilers - even into our formal languages - and this is the wrong place to put it.

Why is this important? Because if we don't even have formal languages that can talk about formal languages that talk about formal languages.... while our Internet is MADE OF nothing but layers of formal languages (protocols) that talk about other formal languages (software and data)... then we can't make any kind of assertions, let alone proofs, about applications that span the Internet.

And we need to, because we're running them.

Nate • June 17, 2015 10:50 PM

I guess I'll add that we DO currently have various partial formal models of Internet-connected computers but they are NOT sufficient because:

* the core model is both extremely low-level ('a set of CPUs with registers, instruction pointers and RAM, plus an OS, linked by the DoD TCP/IP packet network protocols') and too vague (how TCP/IP is implemented at the CPU level is undefined)

* C is an abstraction of the 'CPU/RAM machine' which assumes only one processor; the specification of multiprocessing is left up to the individual hardware

* our general CPU/RAM C+TCP/IP Machine model doesn't specify some of the very important parts of the hardware: north bridge, south bridge, virtualization, memory management, interrupts, USB, LAN controller. All these completely violate even the C specification but are left up to the device driver writers. Who are the ones who are breaking our security.

Above the bit/packet level, we've got utterly incompatible high-level paradigms. Functional, OOP, relational/SQL - they can't even express the same fundamental formal concepts, let alone describe how to interoperate in a machine-readable manner.

We don't have - as far as I'm aware - any single language which can describe the operation of BOTH the CPU (and non-CPU hardware like SANs and disk and network controllers and GPUs and memory-mapped devices) down to the bit level, as required for software proof-of-correctness - AND can describe how compositions of millions, if necessary, of these systems can be chunked into Internet-connected application clouds of dynamically user-created data types and objects.

Many programmers look at me strangely when I talk about this and they say 'but you don't NEED such a language and it's probably impossible to HAVE such a language. Use the right tool for the job.'

To which I say: Look, imagine a world where you could only describe your desktop applications in English (without nouns), your web server in French (without verbs), and your mobile phone in Japanese (without adjectives). And not only are there no dictionaries or translation services between each language, it's strictly forbidden for any to be written. And everyone thinks you're crazy to ask for one anyway 'because you should just use the right language for the job'.

Now imagine you're asked to write an application that works across all three platforms, correctly - but you can't even describe its behaviour or what 'correctness' actually means, because that would require linking the three languages. Your development teams each speak a different language and you manage to get a few things done by pointing, playing charades and using a kind of cartoon pidgin with boxes and lines that you scrawl on a whiteboard.

When you try to convey (in your pidgin) the idea that 'we need to prove correctness', each of your development teams breaks down into tears of laughter. They think it's the most hilariously stupid idea they've ever heard.

This is the situation we've created for ourselves in Internet application development in 2015. I don't understand why we tolerate it and think it's normal.

Figureitout • June 17, 2015 11:05 PM

Nate
--Interesting thoughts...

"Real" Z80's aren't really made anymore w/ 80's manufacturing techniques, you'd have to pull them from older electronics (this is a losing strategy as they eventually decay); there was still "undocumented instructions" on the Z80 which you can find on the big Z80 fanatic sites.

Agree on when our CPU's firstly got overtaken by monopolies (...Intel) and squeezed out other alternatives to death and get all the geniuses to make extremely opaque microcode and questionable hacks to push addressable memory up and up and up...Now we've got x86 "WINTEL" monopolies pushing prices down so low and complexity so high that's fricking...just not re-assuring at all. It's insane it even works! No one knows the whole system and can trace it.

We've had our little run here on getting secure chips, reality is a harsh b*tch. There's too many holes where attackers can easily poison (I said physically take over a fab lab (none will agree to that w/o some serious $$$) and begin behaving like an intel agency vetting and surveilling employees trying to sneak a fast one on a run of chips...makes you question yourself slipping to "that level"...), too much knowledge needed, and too much $$$ at stake.

RE: C's replacement
--Have a go (no not that language lol), keep the C syntax (it's too good) and I'm game, or just make the compiler better to detect non-standard C (no "I'm a macho-man" or "I'm a psychopath" C-code allowed). The longer we let C rein king, the harder it'll be to change though.

RE: chuck moore
--No doubt a genius, I just can't bring myself to code Forth. I got into "4th" too, cool guy behind it, ran all the example programs, played w/ it, made a function or 2; just wasn't for me. It's not how *I* think, I think in C for computers. You have to think in cold logic to keep complexity down or you get javascript...I don't know, maybe there's another way...

Anyway, should move this to squid thread; but good b/c we need more people thinking on something better...

Nick P • June 17, 2015 11:09 PM

@ Nate

"because the very idea of 'typing' basically argues 'the user cannot write their own systems language in the standard systems language'."

Do some more research on the benefits and kinds of type systems. I'm saying that straight up rather than sarcastically. I've seen types applied from high-level stuff such as concurrency to classes to scalar types for variables to typed assembly language. Types are orthogonal to everything else. They're just a category you put stuff in with rules for how they're used. Then, they have effects on your code.

" It's not how the human mind works and it doesn't permit the kind of systems we need desperately to be building now."

It actually *is* how the human mind works. The human mind works by associating a thing with other things. This is why some argued for OOP: a model where we start with abstractions that are fleshed out over time and applied to so many real-world situations in many specific ways. Each object is a type. Each data value with its constraints and properties is a type. Each abstract thing you're thinking about with its own situation and applications is a *type* of thing. You're always thinking of types whether you know it or not. And types in programming are similarly useful abstractions.

"What is the type of the statement 'X is a variable of type Y'? You'd better have an answer, because your interpreter/compiler is going to be parsing a LOT of those statements"

That's not a statement against types so much as a statement saying your interpreter/compiler better understand what it's working with and what it's doing with those. There's even type systems for metaprogramming: the art of doing exactly the types of things you're talking about. Whether people apply them is another matter.

"Why is this important? Because if we don't even have formal languages that can talk about formal languages that talk about formal languages.... while our Internet is MADE OF nothing but layers of formal languages (protocols) that talk about other formal languages (software and data)"

Depending on your definition of formal... Actually, the Internet is a bunch of kludges and band-aids put together that works way better than it should. About every time I've seen a protocol formalized and analyzed, it sucked in ways they didn't know. Hell, BGP was written on a napkin or whatever. Still with us.

"then we can't make any kind of assertions, let alone proofs, about applications that span the Internet. And we need to, because we're running them."

I totally agree. We need to do much better than we've been doing. There's groups trying to do that. We need a better design that actually meets our goals. It must be rigorously designed, implemented, and verified. Preferably, it can run side-by-side with the existing Internet so it can be gradually adopted. Meanwhile, people and companies will continue to be smashed over the Internet because its nature is to do so.

NateJune 18, 2015 12:26 AM

@Nick P: "Types are orthogonal to everything else. They're just a category you put stuff in with rules for how they're used. Then, they have effects on your code.

...It actually *is* how the human mind works. The human mind works by associating a thing with other things."


I agree. But - and this is my point, which is perhaps a little subtle - an association between two things is mathematically a relation, not a type.

The idea I'm drawn toward is that 'X is of type Y' is nothing more and nothing less than a logical assertion and that a set of such assertions form nothing more and nothing less than a relation. To which the standard tools of logic and relational calculus can be applied. Data structures, functions and types then are just different arrangements and usages of relations, and sets of inference rules for analyzing those relations.

A language which has relations and assertions as its fundamental primitives - rather than data structures, functions and types - exposed directly to the user, can then make statements ABOUT type relations, or any other kind of relation, and could compute all the reasoning about 'types' used in 'type systems' of whatever complexity one desires.

But in such a language the concept of 'type' itself as a special, fundamental built-in mathematical object distinct from a relation would not exist in its own right. It would not be a fundamental but a derived object.

This kind of thinking is what led to Prolog, but that line of development was mostly abandoned in the early 1990s in favour of compiled systems with large non-user-accessible type systems in the compiler.

"Can you construct a new type, and a new type system, at runtime, put it a data structure and return it from a function? And then apply it to arbitrary user-supplied data?" is the sort of question I would ask of any language which wants to be used on a dynamically interactive Internet.

Outside of logic programming, prototype-based OOP (such as JavaScript) does come closest to this idea, since you _can_ dynamically create a new object, mark it as a prototype and return it from a method. JavaScript has other major flaws; Lua and Io are better. But it seems to me we can still get a little simpler.
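To make that concrete, here's a minimal C sketch (all names made up for illustration) of prototype-style delegation: an object whose property lookups fall through to its prototype, where any object can be created and linked as a prototype at runtime:

```c
#include <string.h>

/* Hypothetical minimal prototype-chain object: properties are
 * name/value pairs, and lookups that miss locally are delegated
 * to the prototype object, as in JavaScript/Lua/Io. */
#define MAX_PROPS 8

struct obj {
    struct obj *proto;               /* delegation target; may be NULL */
    const char *names[MAX_PROPS];
    int         values[MAX_PROPS];
    int         nprops;
};

/* Set (or shadow) a property on this object only. */
static void obj_set(struct obj *o, const char *name, int value) {
    for (int i = 0; i < o->nprops; i++)
        if (strcmp(o->names[i], name) == 0) { o->values[i] = value; return; }
    if (o->nprops == MAX_PROPS) return;  /* sketch: silently ignore overflow */
    o->names[o->nprops]  = name;
    o->values[o->nprops] = value;
    o->nprops++;
}

/* Look up a property, walking the prototype chain on a miss. */
static int obj_get(const struct obj *o, const char *name, int *out) {
    for (; o != NULL; o = o->proto)
        for (int i = 0; i < o->nprops; i++)
            if (strcmp(o->names[i], name) == 0) { *out = o->values[i]; return 1; }
    return 0;                            /* not found anywhere in the chain */
}
```

Creating a fresh `struct obj`, pointing its `proto` at an existing object, and returning it from a function is exactly the "dynamically create a new object, mark it as a prototype and return it" move described above.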

The point is to encode only the bare minimum of necessary assumptions into the core language, and then allow the language itself to construct the rest by self-reference. Type systems strike me as very complex pieces of computing machinery which are obviously built out of smaller parts. Therefore, we should find out what those smaller parts are, and build them first.


NateJune 18, 2015 12:27 AM

@Jack: "why should you hack that thing if there is a minus/plus button on the front accessible for everyone?"

Because one person can only push one button, but could hack 1,000,000 of them simultaneously over the Internet.

AnuraJune 18, 2015 1:04 AM

@Figureitout

"keep the C syntax (it's too good) and I'm game"

I have problems with C syntax. Optional brackets which can get hard to match with a lot of nesting, for one, and also kind of a bitch to parse sometimes since it's not a context-free grammar. I have a C-replacement language in my head and I am going with "if ... end if" syntax (although I'm not good enough to actually write it) and then repurpose curly braces for type casts.
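The context-sensitivity point fits in one line: `(A) * b` cannot be parsed without knowing whether `A` names a type or a variable, so the parser needs symbol-table feedback. A small sketch (helper names are made up):

```c
/* The same token sequence "(A) * b" parses two different ways
 * depending on what A is -- this is why C is not context-free. */

/* Here A is a variable, so "(A) * b" is a multiplication. */
static int as_multiplication(void) {
    int A = 6, b = 7;
    return (A) * b;          /* parenthesized variable times b: 42 */
}

/* Here A is a typedef name, so "(A) * b" is a cast of the
 * dereference *b to type A. */
static int as_cast_of_deref(void) {
    typedef long A;
    int x = 42, *b = &x;
    return (int)((A) * b);   /* (long)(*b), then narrowed back: 42 */
}
```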

David HendeersonJune 18, 2015 2:23 AM

I'm retired from a prestigious university. I still use the premises.

The point is, when I plugged my Mac laptop into the campus-wide ethernet, I took a look at the log file and was astounded at the transcontinental range of port sniffers stepping through the ports on the laptop. (You can reproduce this by turning on the OS X stealth-mode firewall and looking at the log file.)

When I transitioned to Linux, I made sure that only ssh packets could originate from outside through the Linux firewall.

Maybe I'm missing something, but in my experience it's a really hostile world when seen through an internet port. I know it can't be this simple, because there could be vulnerabilities in ssh, but can't the makers of infusion pumps just start with some sort of ssh authentication for their firmware updates?

That at least would put the problem more under obvious sysadmin control rather than relying on secret ports and services. Who writes the requirements? Security through obscurity has long been shown to fail.

keinerJune 18, 2015 2:53 AM

@Nick P

"Systems done this way have had incredible reliability and security. Many exhibited no failures during pentesting or field use."

As you wrote: "many", but definitely not all...

Please: Software is not something completely different from other technical systems; it is not outside our established philosophical framework (except for the pricing, and that companies can put a "best before" label on it, although it does not age).

Read Popper, it's the best we have at this time.

PantherJune 18, 2015 3:52 AM

Putting aside security aspects in terms of programming, I have one single question:

Why on earth does everything nowadays have to be connected to the internet?

Power plants, traffic lights, hospital equipment and loads of other stuff do NOT need access to the internet - they will never have to google something or read the current newsfeed or whatever. If you feel the need to link that stuff to some kind of network for whatever purpose, go ahead - but do not link that network to the internet! NEVER EVER! If that means you'll have to put 2 PCs (oh my god the costs!!1!) at the controller's desk - one for your internal network, one for Google - then so be it.

Using the internet for critical infrastructure would be like running through the streets of the city dressed only in thousand-dollar notes - you might get into some trouble or other, and who would be surprised?

PS: and in the rare cases you might still want to use the internet-infrastructure, would you please at least read a little about VPN?

65535June 18, 2015 4:16 AM

@ albert

“Let's just say that _everyone_ is working on cyber-attack/defense strategies, both military and civilian. As you said, hacking for dollars opens the whole world to danger. Attacks on a country's infrastructure could come from anywhere…”

Yes, it is becoming a growth industry. But where will it end?

“Major military powers won't start cyber-wars against each other. It's just stupid.” –albert

Maybe and maybe not. I remember something about Gas Line explosion in Russia:

“The Trans-Siberian Pipeline, as planned, would have a level of complexity that would require advanced automated control software, Supervisory Control And Data Acquisition (SCADA). The pipeline used plans for a sophisticated control system and its software that had been stolen from a Canadian firm by the KGB. The CIA allegedly had the company insert a logic bomb in the program for sabotage purposes, eventually resulting in an explosion with the power of three kilotons of TNT” -Wikipedia

https://en.wikipedia.org/wiki/Siberian_pipeline_sabotage

Sure, there was some help from the CIA, but there was probably also some negligence by the Russians in securing the pipeline… Not unlike the current negligence behind the OPM hack.

[OPM hack aided by neglect]

“He asked Seymour pointedly about the legacy systems that had not been adequately protected or upgraded. Seymour replied that some of them were over 20 years old and written in COBOL, and they could not easily be upgraded or replaced. These systems would be difficult to update to include encryption or multi-factor authentication because of their aging code base, and they would require a full rewrite…A consultant… told Ars that he found the Unix systems administrator for the project "was in Argentina and his co-worker was physically located in the [People's Republic of China]. Both had direct access to every row of data in every database: they were root.” -Ars Technica

http://arstechnica.com/security/2015/06/encryption-would-not-have-helped-at-opm-says-dhs-official/

https://www.schneier.com/blog/archives/2015/06/friday_squid_bl_480.html#c6698844

These are two cases of cyber warfare in one form or another. The question is will this cyber warfare escalate?

DavidJune 18, 2015 6:34 AM

The simplest way for the company to prove its level of comfort with the security of the device would be for the CEO to be 'plugged in' and then challenge the researchers to try to kill him from their desks.

Peter A.June 18, 2015 7:05 AM

@Nate:

"We would need a better systems language than C: something more like Verilog that could compile right down to silicon or at least FPGA elements. I'm sure this would give us much better performance too."

Doesn't really work.

Have you ever programmed a modern FPGA? OK, you can create your pretty logic circuits in Verilog, VHDL, or even draw it all in MATLAB and generate Verilog from that - fine. But then you have to run the "place and route" process using the FPGA manufacturer's binary blob to convert your pretty code to another binary blob, which you load onto the FPGA chip using a manufacturer-defined procedure. What goes into that blob? Can you trust it? Well, today most manufacturers don't even document the internal structure of their FPGA chips; they just say how many "gates" there are, how many memory elements, how many I/O pins, etc. The full internal structure of the network of logic elements is not known to the developer, and the format of the blob loaded onto the chip is unknown as well, so there's no way to verify that it corresponds to your pretty Verilog app.

One could imagine a company creating an "open-spec" FPGA - but can you trust there aren't any hidden "taps" in the structure?

As for performance - FPGAs won't ever be faster or cheaper than ASICs. FPGAs are perfect for things that are still evolving and may easily change in the future, are sold in not-too-large volumes yet have a long expected product life, like software radios for various comms systems, control systems of many kinds, etc. Implementing a mass-market, short-lived device (like a smartphone or a toy) on an FPGA is too expensive; you'll be beaten by ASIC-using competitors in no time. I don't think the uber-geek niche of FPGA-based, fully-programmable, completely-open-source personal devices is big enough for a business to survive.
Come on, even geeky-nerdy Raspberrys and the like use custom chips with binary-blob drivers - or they won't sell.

JPJune 18, 2015 9:44 AM

@Jack, @Nate:
"why should you hack that thing if there is a minus/plus button on the front accessible for everyone?"

"Because one person can only push one button, but could hack 1,000,000 of them simultaneously over the Internet."

Not only that, but it could be:
- Done from anywhere in the world, which might add a layer of jurisdiction to complicate things for law enforcement. If I kill someone in the U.S. over the internet and I'm comfortably living in China, even if the U.S. pinpoints me there's not much they can (legally) do.
- Much harder to detect. Something went wrong with the pump, but it could be just a bug in the device, not the result of malicious intent. If not overused, an attack like this could take years to be discovered, or never be found out at all.
- Very easy to conceal. It won't leave any fingerprints, video evidence, witnesses, nothing at all. There could be a log trail back to the perp, but it's not hard to cover your tracks in the digital world (unless your adversary is Big Brother).
- Hard to prove in court, even if the police come to the conclusion that the perpetrator must be a certain someone. Judges and juries probably will not be acquainted with the technological lingo.

Nick PJune 18, 2015 10:13 AM

@ Nate

It's an interesting idea but quite vague and impractical. Your scheme essentially demands that the programmer build an entire type system or determine a suitable one to use like a library. Further, everyone's code will suddenly mean totally different things. This would be a nightmare for integration. Such a language would be a toy for academics or explorers at best.

If you want it, though, then LISP seems closest to what you describe. About every paradigm, and even type systems, have been implemented in LISP. Specifically, Shen uses the sequent calculus as its type system. This lets the programmer define new types using a form of first-order logic. It even lets one customize the type system per function. That's the closest thing I know to what you want.

I think most people with an interest in something like this just go straight to formal verification of software using a theorem prover such as Coq or Isabelle/HOL. Sounds like about the same amount of mathematical work. Chlipala's Certified Programming with Dependent Types is somewhere in the middle in difficulty. Then there's languages like ATS, Idris and Agda.

@ Peter A, Nate

"We would need a better systems language than C: something more like Verilog that could compile right down to silicon"

It's called a C-to-silicon compiler or high-level synthesis. If it's any good, then it's expensive as hell: EDA tools often start at six digits a seat. More commercial ones and immature free ones are described here. Rather than eliminating issues, it introduces a whole series of them that virtually no programmer is going to know how to handle. Rather than eliminating trust, one has now increased one's trust in a bunch of hardware people, tool builders, mask makers, fab operators, packaging firms, and shipping companies. Your algorithm will at least be faster and non-modifiable if on an anti-fuse FPGA or ASIC.

@ keiner

"As you rote: "many", but definitely not all..."

Ok. The point still stands. Also remember that the ones that made it were being modelled in a field that had been invented just a few decades before, to have properties invented within that decade, using methodologies of a similar age. Everything in use was in its infancy. So, while a few failures reduce your confidence, I have the opposite reaction: the majority of them achieving their objectives gives me great confidence in the approach. From then on, it's done what it's supposed to do wherever it was applied the right way. Give it time to mature like all the other engineering fields have had and we'll see even better results.

Of course, the market and majority are working very hard to ignore as much high quality tech as possible. There's probably only a few hundred people contributing anything serious to this field. Those numbers need to increase. That so few are working on strong verification tech is the main reason we've seen little of it. The only industries different are DO-178B and ASIC design. Their tool builders have performed near-miracles given about every solution requires getting one or more NP-hard problems in check.

"Read Popper, it's the best we have at that time."

I've been reading papers on real-world use of verification technology, successes and failures. Taught me plenty about what works in practice. If Popper has experience in this, then give me a link. If he's a theoretician or speculator, then I'll pass as they haven't been helpful at all. Even Godel's work was just used as an excuse not to try by many while others cheated around it in many ways to obtain real-world results.

AnuraJune 18, 2015 11:15 AM

In a systems language, I think you would want something a bit higher-level than C, not lower-level. The problems with C tend to be that it is a bit too low-level and a bit lacking in features: wild/dangling pointers due to a lack of proper memory management, unsafe type casts allowing integers to be cast to pointers, lack of bounds checking on arrays/pointers. Object-oriented programming paradigms can eliminate the need for passing void pointers and casting back to a specific type through the use of interfaces. I particularly like C#'s model of multiple inheritance of interfaces and single inheritance of implementations, although I would tweak it a little to make it more flexible*.

I think you need to place safety over performance, and only accept a loss in safety where safety would cause a large loss in performance - and checking bounds does not cause an unacceptable loss in performance. A new systems language needs some sort of error handling, but I would not give it full C++/C#/Java-style exception handling. A simple way to do it, and the way my language would handle it, would be to force the developer to handle errors by creating two return paths: one path for if the function is successful, and one for if it fails. Another thing to do is have the compiler enforce that variables are initialized before they are accessed (C# does this, as do some other higher-level languages), even in classes and arrays. If that means defaulting integers to zero in classes and arrays, take the performance hit; it's better than having something that randomly fails 1% of the time - it also prevents you from accidentally leaking potentially sensitive information a la Heartbleed.
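One rough C approximation of the two-return-path idea (names are hypothetical) is a tagged result struct, so the failure path is part of the returned type rather than an exception or a sentinel value:

```c
/* A tagged result type: the caller cannot sensibly use the value
 * without first inspecting the ok flag, which approximates having
 * separate success and failure return paths in plain C. */
struct int_result {
    int ok;       /* 1 = success path, 0 = failure path */
    int value;    /* meaningful only when ok == 1 */
    int error;    /* meaningful only when ok == 0 */
};

static struct int_result checked_div(int a, int b) {
    if (b == 0)
        return (struct int_result){ .ok = 0, .error = -1 };  /* failure path */
    return (struct int_result){ .ok = 1, .value = a / b };   /* success path */
}
```

A language could go further and refuse to compile code that reads `value` without checking `ok`; in plain C this remains a convention.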

For memory management, I would use RAII with memory being freed as soon as the variable goes out of scope, with some special types of references with rules:

1) "Owners" would own memory, must be initialized before use, cannot be destroyed or resized, ownership can be transferred but only to variables with the same or a longer lifetime

2) "References" would reference but not own memory, and can only reference owners that have a longer lifetime than it. These references would only be allowed to live within functions, not classes, and cannot point to uninitialized owners

3) "Shared Owners" would use reference counting for memory management, and there would be no restrictions on what can share ownership. Can be cast to a reference by implicitly creating a local shared owner and incrementing the reference count

4) "Weak references" would point to shared owners

With the case of 1 and 2, the rules (combined with bounds checking) are sufficient so that you can guarantee you never have memory errors: no null pointers (although I would allow that if you explicitly say so), no wild pointers, no dangling pointers, no buffer overflows, and no memory leaks.
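Rule 3 is essentially reference counting. A bare-bones C sketch (names hypothetical) of a shared owner that only frees, and scrubs, the block when the last owner releases it:

```c
#include <stdlib.h>
#include <string.h>

/* A minimal "shared owner": a refcount header placed in front of
 * the payload. share() bumps the count; release() scrubs and frees
 * only when the count drops to zero. */
struct shared {
    int    refs;
    size_t size;
    /* payload follows immediately after the header */
};

static void *shared_alloc(size_t size) {
    struct shared *h = malloc(sizeof *h + size);
    if (!h) return NULL;
    h->refs = 1;
    h->size = size;
    return h + 1;                          /* hand out the payload pointer */
}

static struct shared *header(void *p) { return (struct shared *)p - 1; }

static void *shared_share(void *p) { header(p)->refs++; return p; }

/* Returns 1 if this release actually freed the block. */
static int shared_release(void *p) {
    struct shared *h = header(p);
    if (--h->refs > 0) return 0;
    memset(h + 1, 0, h->size);             /* scrub payload before freeing */
    free(h);
    return 1;
}
```

A compiler enforcing rules 1 and 2 would insert the `share`/`release` calls itself, which is what removes the dangling-pointer and leak classes by construction.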


These are the main points I think any systems language should address.


*If you want to see notes on what my imaginary language has for interfaces, I uploaded here:

http://pastebin.com/7EBdwUDU

Frank WilhoitJune 18, 2015 12:01 PM

@Nate:

Strong typing is your friend. Code that won't compile can't be deployed.

Your enemy, on the other hand, is not microcode: it is concurrency.

Finally, don't blame languages for software written by people who have only half learned them.

Thanks,
FW
.

rgaffJune 18, 2015 1:32 PM

@Peter A

"Come on, even geeky-nerdy Raspberrys and the like use custom chips with binary-blob drivers - or they won't sell."

All very true. But we can always hope for kind of a better future someday maybe... look at Novena... :)

Nick PJune 18, 2015 1:40 PM

@ Anura

Ada's restrictions and structures are a good start. It's been doing safe, systems programming in embedded space for decades. I'd say also check out Cyclone.

@ keiner

Oh, yeah, I remember him now! Got a poor memory these days, esp with names, so it didn't ring a bell at first. Still, I'm not sure what you're getting at with the reference. Since "read Popper" is pretty vague, I have to guess at it. The verification claims are falsifiable, the weaker ones were falsified, and the believable ones were supported by proof or argument. As for his "supported by facts" vs "absolute truth" distinction, the empirical work applying verification to all sorts of problems with success is building quite a case for it. Popper also liked differentiating the level of confidence each claim carried. While many proofs use logic, they tend to say what they've proven, to what degree, and what is trusted. It's as honest an approach as one can get.

So, with a quick perusal, I don't think Popper's claims matter much. If anything, his philosophy was to make sure the claim was testable, collect evidence to back it, and be clear on how much to trust each piece of evidence plus the model/theory itself. The formal verification community, outside the zealots, is doing exactly that to demonstrate the value. I think the empirical evidence is more important than any theoretical work. It's working in our favor so far.

@ Frank Wilhoit

People often look at type systems like they're just a limitation. In reality, properly designed type systems empower the users by letting them put their designs into practice and having them actually work.

rgaffJune 18, 2015 1:52 PM

@ Anura

Also, make it so unused memory is always zeroed: both as soon as accessible at boot time (yes!) and whenever freed. This is a performance hit, but I think totally worth it when whole classes of exploits go away.
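One wrinkle worth noting: a plain memset() right before free() can be removed by the optimizer as a "dead store." A common workaround, sketched here (the function name is made up), is to write through a volatile pointer so the wipe survives optimization; C11's memset_s and BSD/glibc explicit_bzero are the purpose-built alternatives where available:

```c
#include <stddef.h>

/* Zero a buffer in a way the compiler cannot elide: each store goes
 * through a volatile-qualified pointer, so the writes are treated as
 * observable side effects rather than dead stores. */
static void secure_memzero(void *p, size_t n) {
    volatile unsigned char *vp = p;
    while (n--)
        *vp++ = 0;
}
```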

@Frank Wilhoit

"Finally, don't blame languages for software written by people who have only half learned them.'

Right... but can I blame people for still promoting old hard-to-do-right languages when easier-to-use-safely alternatives are around?

keinerJune 18, 2015 2:58 PM

@Nick P

" I don't think Popper's claims matter much."

Funny. Rest of the world thinks that he is the basis of ALL modern science. But what does software have in common with science? :-D


Whenever a theory appears to you as the only possible one,
take this as a sign that you have neither understood the theory
nor the problem which it was intended to solve. Karl R. Popper

AnuraJune 18, 2015 3:13 PM

Making sure all memory is set to a value before reading is definitely worth it. On freeing I'm not as certain. I would definitely have a keyword to mark memory as "sensitive" so it is definitely zeroed on free, as well as pinned to memory so it doesn't get written to a swap file.
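A sketch of what such a "sensitive" keyword might lower to on a POSIX system (function names are hypothetical): pin the pages with mlock() so they can never be written to swap, and wipe them before releasing. Note mlock() can fail under RLIMIT_MEMLOCK, so callers must check:

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Allocate memory for sensitive data and pin it so the kernel
 * never writes it to a swap file. Returns NULL on failure. */
static void *sensitive_alloc(size_t n) {
    void *p = malloc(n);
    if (p && mlock(p, n) != 0) {   /* pinning failed: don't hand it out */
        free(p);
        return NULL;
    }
    return p;
}

/* Wipe, unpin, then free. (Use explicit_bzero/memset_s where
 * available; the intervening munlock() call also keeps this
 * memset from being optimized away as a dead store.) */
static void sensitive_free(void *p, size_t n) {
    if (!p) return;
    memset(p, 0, n);
    munlock(p, n);
    free(p);
}
```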

rgaffJune 18, 2015 3:28 PM

@ Anura

Haven't you seen those things where you force a reboot and then scan the graphics memory looking for what the screen looked like before the reboot? Making sure free memory never has "leftovers" to snoop in the first place is pretty important to security, when you can't foresee every possible way that stuff could leak when something goes wrong... When it's integral enough and used often enough, it should be sped up in hardware...

AnuraJune 18, 2015 4:05 PM

@rgaff

Zeroing memory on free doesn't solve that, since on a forced reboot the free functions aren't called (also, they won't be called when programs terminate unexpectedly). On boot, that's really not something that is up to the language itself, it's up to the operating system.

Nick PJune 18, 2015 4:34 PM

@ keiner

What specifically about Popper's claims means software can't be verified to have certain properties?

@ rgaff

I've always thought the hardware should come with a function for doing that on initialization. If not, then we put it in the OS or the graphics interface.

rgaffJune 18, 2015 5:07 PM

@ Anura, Nick P

Basically, I'm not really looking for just a secure language, I'm looking for secure hardware and firmware and operating system and language all working together to make things secure using the same principles. Boundaries and separations between them might be drawn differently if we did that. It seems to me that overall, things should always be clear when empty, and things should always contain something useful when in use... whatever we need to do to get there, including dealing with unexpected things like crashes and reboots.

AnuraJune 18, 2015 5:59 PM

So basically:

1) Hardware should clear memory on power off and/or power on
2) OS should clear memory when a process exits or when freed memory is reallocated for another process
3) Process (via the language) should clear memory on allocation

Nick PJune 18, 2015 9:00 PM

@ rgaff, Anura

Don't forget partitioning. It was an old trick to avoid this problem used in MLS systems and some secure processor schemes in recent times. The idea is that the TCB divides up the shared resources in some way with a strong isolation mechanism enforcing the boundaries. So, this keeps you from having to clear it every use. This is particularly helpful with caches since cache flushes hurt performance. Registers still have to be flushed, though.

FigureitoutJune 18, 2015 9:07 PM

Anura
--That's why, unless you use generic comparisons (eg: if(i==1) etc.) I thought it's standard professional practice to label endings ( }//end if(i==1) }//end if(true) etc.) or many IDE's simply highlight brackets (Vim colors them unless you go beyond a page and have to jump to it). I had to "repair" a decent sized program adding this in and the code got mucked up w/ different editors, so yeah that was really annoying. Programmer not doing that will probably comment sh*tty in any language and generally be sloppy (compiler will catch it anyway, but give errors way off from where it is lol). Socially shun people who write like sociopaths, it's malicious. This starts in school making people feel bad.

The excessive verbiage is *really* annoying to me, just more typing that isn't needed and looks worse.

Pretty ticky tacky if you ask me, didn't you write some of your crypto you posted here in C or something very C-like?

If you have no freedom in a language then it's not very fun writing programs (for instance, some old games, knowing the tricks and hacks made the game in some respects), might as well not program them if the computer is going to basically do it all for you...The crappy auto-complete in Android Studio pissed me off so much I didn't feel like writing an app anymore lol.

AnuraJune 18, 2015 9:30 PM

@Figureitout

I write most of the small projects I do at home in C or Python, but for any large project I would rather go with C++ or something like C# (which I primarily use for work) if it's a bit more involved. I use these because they are what I have the most experience in, and going with other languages would take longer because I can't do it without looking up the library functions every few minutes.

As for the practice of following up braces with //end if... I don't see it very often in real-world code, and if you are going to do that, then why not just build it into the language? In a large company, when you have a program that's been worked on by hundreds of different contractors or volunteers over a decade, I think you end up with better results with a language that is slightly more verbose than hoping that all the developers follow good practices. In C-like languages, you can have problems like the "goto fail" bug if you omit the braces, and the syntax I propose is immune to that while also providing better readability.
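For reference, the "goto fail" bug has exactly this shape (a simplified model of the pattern, not Apple's actual code): with the braces omitted, a duplicated goto runs unconditionally and skips the remaining checks, so failed input is reported as success:

```c
/* Model of Apple's "goto fail" bug: because the if has no braces,
 * the second goto is NOT part of the conditional -- it always runs,
 * skipping every later check and returning err == 0 (success). */
static int verify_buggy(int check1, int check2) {
    int err = 0;
    if ((err = check1) != 0)
        goto fail;
        goto fail;              /* duplicated line: executed unconditionally */
    if ((err = check2) != 0)    /* never reached */
        goto fail;
fail:
    return err;
}
```

With mandatory braces (or "if ... end if" block syntax), the duplicated line would either be harmlessly inside the block or a compile error.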

I used to be a fan of brevity, but I rarely come across situations where it is advantageous for anything but laziness and sometimes come across situations where it is a disadvantage, and the time to type it is actually not that much. In fact, if you use Visual Studio, and you are coding in VB.Net and you type "If x = 1" and hit enter, it will automatically add the "Then" and "End If" for you (note that I'm personally against the mixed-case language keywords). I'm sure there are Vim/Emacs plugins for this when coding in relevant languages.

FigureitoutJune 18, 2015 10:32 PM

Anura
--Yeah, I've never worked on super large code bases or companies so I wouldn't know what's that like. I prefer this space though (larger embedded is a goal but it would be another step for me). Fun, get to piece together everyone's style and hacks... But me and another employee are bringing new standards where I'm at now (original owner and older engineer used to write schematics on napkins and schematics used to be all hand-drawn on paper, which is a cool/legendary story, but not acceptable engineering practice or professional these days w/ part numbers, precision needed and tracking down bugs from the field).

And I can't refer to library functions or types sometimes they haven't been ported/defined, it's weird having some of these easy functions I used to call taken away from me. And if people can't follow basic, very reasonable standards then call them out like Linus b/c it wastes so much time.

RE: goto fail
--That's an obvious bug if someone actually double-checked and read the code; says something about QC and testing of code. That said, no programmer has any room to speak b/c we all know we've made really dumb errors or overlooked obvious problems...

RE: laziness
--Potayto potahto, maybe by not being lazy you're wasting a bunch of time on something not needed? Took me a summer to really get the idea of "working smart" and focus on things that matter and step back if you start going down a path that's a time-suck if you hit a roadblock and need to come at it a different angle. You can add it if you want (which would take not that long), but not forced.

RE: auto-complete
--I'm just generally not a fan, even on my smartphone it's dumb getting the wrong words and saying things I don't mean. I like to type websites in browser, not autocomplete; only thing is on search engines if I forget something and need a hint. *IF* it works good in IDE and I can customize it easily, then yes, otherwise it's just someone else's style I don't like being forced on me.

AnuraJune 18, 2015 11:36 PM

@Figureitout

"That said, no programmer has any room to speak b/c we all know we've made really dumb errors or overlooked obvious problems..."

Exactly my point; every single person on this earth is flawed. The goal in any new language should be to accept that fact, attempt to minimize the number of flaws that enter the code, and maximize the ease of detecting them. I find that this little bit of verbosity goes a long way toward avoiding those problems.

scrambled eggs and gravyJune 19, 2015 12:52 AM

@ keiner
"Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve." (Karl R. Popper)

That sounds too open ended for the field of computers. Engineers and programmers set bounds in the software world. There is a finite end and closure. Even infinite loops can be interrupted. The realm of possibility there is not unbound. Just pull the plug. :D

@ Anura
"The goal in any new language should be to accept that fact and attempt to minimize the number of flaws that enter the code and the ease of detecting them."

In my view, the concept of verification attempts to go beyond correcting linguistic flaws. This is demonstrated in spoken languages, where a grammatically correct sentence does not necessarily mean its contents are politically correct. When we communicate, we analyze not only what is said but how it is said, and from that we form our understanding of context. A well-formed program can contain malicious code, whether by accident or by design. That understanding of context is what is really useful.

thevoidJune 19, 2015 1:36 AM

@Figureitout, Anura

i've often used vi's '%' command:

% - Move to the parenthesis, square bracket or curly brace matching the one found at the cursor position or the closest to the right of it.

so if i want to get to the end of a code block, i go to the '{' and press '%' and that takes me to the end of the block, or vice versa. sometimes it's also quite useful for finding those bugs where you left a brace out, as the matching brace will usually be at a different level of indentation. it's served me well at least. i suspect that may be why the command exists. it works well with nesting, at least of the same type of brace.

keinerJune 19, 2015 6:57 AM

Last year Mr. Schneier proposed the use of "password aggregators"; the logic behind them:

1. You cannot remember a safe password.

2. So use something to put all your passwords together in ONE safe.

3. Protect this with one password - but: think of step 1.


I said this is utter nonsense: as I cannot protect even a single account with a secure password, I now protect all my accounts with one single (secure? insecure?) password.

Only one laughing: the NSA, as they get all your passwords in one place.

Now let's have a look here:

https://blog.agilebits.com/2015/06/17/1password-inter-process-communication-discussion/

I'm not convinced this is a "security blog", even considering the discussion on Windows/Bitlocker, Truecrypt and related software this week....

Clive RobinsonJune 19, 2015 11:05 AM

@ keiner,

I'm not convinced this is a "security blog", even considering the discussion on Windows/Bitlocker, Truecrypt and related software this week....

Hmm, a rather limited and out-of-date perspective...

The discussion on the other blog is about detailed technical aspects of a single instance of what is a password manager program.

If you care to look such has been discussed here several times in the past with regards password managers, and our host has in the past written one and made it available for use by the community.

The problem with deeply technical comments on security issues originally discussed in times long past is that they are effectively "known/solved" issues that the designers of new systems should have been aware of with just a little research.

Likewise, if you do a little research you will find that this blog has discussed all the general aspects of password systems, and the very many security models and their issues. You will find that the consensus is that there is no really secure system for password management: all have the issue that the ultimate limitation is the human mind and its inability to store sufficient entropy. Further, what you might call password "guessing/finding" attack systems have now progressed way, way past "brute force searching" and actually model, to a certain extent, the way human memory works.

The old method that Bruce used to recommend against external hackers was: generate a password with a real random process, write it down on a piece of paper, and keep it along with the other valuable pieces of paper in your wallet. However, these days, whilst the external hacker threat is still there, there are now "insider" and "state level" threats that are of more concern.

An insider will fairly quickly get to know you are keeping your password on a piece of paper in your wallet. And judging from the number of thefts in workplaces, most human beings don't take enough care of their wallets to prevent an insider attack.

As regards state-level attackers, just about any uniformed person in any jurisdiction has the "state given right" to search you and your possessions, so it won't take them long to get that piece of paper.

However, as you might or might not know, there is a more modern version of "shoulder surfing" that involves the use of medium- to high-resolution CCTV systems, especially if they are IP based. Thus you get the quaint situation of one security system making another security system much, much weaker at best. It's just one of many security issues I christened "end run attacks" many years ago, which even now most ICTsec people fail to consider in their models.

So, nothing personal, but think as soldiers are taught: get a bit of situational awareness and scope out the lie of the land, and remember the old saw that "no battle plan survives first contact with the enemy".

Password systems cannot be made secure except in very, very limited cases that almost always don't exist these days. The problem is that nobody wants to take this on board, for a whole variety of reasons, not least because nobody has come up with a scalable solution to the problem. So if you think you have "a better mouse trap", feel free to make it known, but expect the usual hail of critiques and criticisms, and be prepared to defend it / surrender gracefully.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.