Friday Squid Blogging: How to Fish for Squid

The Washington Department of Fish and Wildlife explains how to fish for squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on February 6, 2015 at 5:57 PM • 135 Comments


mike~acker February 6, 2015 6:40 PM

Document Authentication in the Digital Age

Are you who you say you are?

How can I verify that you are who you say you are?

Is this important?

In some cases, authentication is not only important; it is vital.

Let us consider federal tax Form 1040.

We must all file our tax forms, every year, under penalty of
law. Unfortunately we have some crooks around who like to
file a phony tax return so they can scam some money from the
IRS. They might use YOUR Social Security number for this,
and if they do, the IRS will reject your 1040, stating that
you have already filed. Then, about three years from now,
they will send you a nasty letter demanding settlement of
the discrepancies. They are not nice about doing this.

How might you authenticate your 1040 in such a way that
crooks will fail should they attempt to file a phony return
using your Social Security number?

The IRS will need a means by which it can AUTHENTICATE your
return. You can’t just say “I’m Jones”; anyone can do that.

I would like to refer you now to the testimony of Whitfield
Diffie, given in November 2013 at a patent lawsuit in
Marshall, Texas, in defense of Newegg. This was reported
by Ars Technica on 2013-11-25.

The relevant part is under the heading: A brief history of
public-key crypto

In part: There was one other big need: proving authenticity.
“The receiver of the document can come into court with the
signed document and prove to a judge that the document is
legitimate,” he said. “That person can recognize the
signature but could not have created the signature.”

Read the above very carefully: the problem we are solving
here is the need to produce a digital signature which can be
recognized (i.e., verified, or authenticated) but which
cannot be created by an intruder, scam artist, crook, or
hacker.

Doing this is a mathematical problem, and a difficult one.
Fortunately for most of us that work has already been done,
and we are all free to make use of the mechanism. It can
be obtained at no cost through the GNU Privacy Guard (GnuPG),
packaged for Windows as Gpg4win. Alternatively, PGP Desktop
could be used.


Given that you have either GnuPG or PGP Desktop installed,
your tax software could be programmed to SIGN your tax
return for you using either tool.

Needless to say, the IRS would have to be notified that this
would be an optional procedure. Implementing it would not be
particularly difficult: a DETACHED signature could be used.
The tax return, unchanged from the format in use today,
would then be zipped together with the detached signature
and the resulting .ZIP container sent to the IRS.
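The detached-signature workflow above can be sketched with GnuPG’s command-line tools. The filenames, the user ID, and the throwaway keyring are invented for illustration; a real deployment would use the filer’s existing key:

```shell
# Throwaway keyring so this demo doesn't touch any real keys.
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key "Jane Filer <filer@example.com>" default default never

# Stand-in for the completed tax return.
echo "Form 1040 contents" > return1040.txt

# DETACHED signature: the return itself is left untouched and the
# signature travels in a separate .asc file.
gpg --batch --detach-sign --armor --output return1040.txt.asc return1040.txt

# The recipient verifies that the return was not altered after signing.
gpg --verify return1040.txt.asc return1040.txt
```

The two files would then be zipped together for submission, and the IRS could fetch the filer’s public key from a keyserver (e.g. with `gpg --recv-keys`) before verifying.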

The IRS would then observe that they had received a
PGP-signed return. They would simply download the
required public key from one of the commonly used
keyservers; this way they would obtain your public key.
To authenticate your public key, you would need to have
taken your key to your local credit union and obtained an
authenticating signature for it before you uploaded it to
the keyserver.

One more thing: The IRS would need to note that you would
be using a PGP signature henceforth. They don’t have any
problem doing this sort of thing: If you file 1040ES you
will get new 1040ES forms every year after that.

Once the digital signature protocol were established, the IRS
would then reject any return from you that was not signed or
that had an invalid signature. An invalid signature would
indicate that someone other than the proper person signed the
return, or that someone had altered the return after it was
signed.

We need to start implementing effective security procedures
for all our electronic commerce. If we continue doing things
as we have up until now, we are going to get more and more
of this fraud.

Buck February 6, 2015 7:01 PM


No offense, but that’s an incredibly short-sighted implementation… If a cryptographic signature is your only layer of defense, how would you then propose to secure these new numbers any better than the standard SSN?

Thoth February 6, 2015 7:09 PM

I remember that recent market rumours suggest corporations are currently interested in digitally signing and securing the tax documents of American employees in corporations (to be sent back to the USA for tax reporting) and are currently calling for reviews. In essence, the corporations would sign and encrypt the tax documents of every American employee in their company stationed overseas before sending them back.

The encryption and signing are based on XML signature and encryption schemes (from what I heard). I have a feeling the country-based signature schemes would favour XML plus SSL-certificate-like (S/MIME?) standardization over PGP, but it’s still worth a look.

tyr February 6, 2015 7:21 PM

In ye goode olde dayes, you had a paper with a signature on it. That closed off the easy way for a random stranger to send in a fake return and get the IRS to accept it: they had to be able to forge a credible signature. You could then show your own signature and explain that the fake return wasn’t yours.

E-filing, like so much of the world’s rush to use comps for every odd job imaginable, has stripped away that form of proof and, like many other things, exposed them to shiny new vulnerabilities. Couple this with the idiotic policy of collection of unneeded data by commercial and govt. low-level workers who haven’t a clue about how to reasonably protect what they collect, and you have today’s nightmare. Possession of any official-looking paper confirms your identity; without the current type of plastic, you have ceased to exist.

I’m more inclined to think Venter might come up with a
reasonable method of connectible proof tied to biometrics
that might work. Short of a guaranteed repository for
key enquiry that isn’t a huge honeypot the comp isn’t
an overly viable solution to the problem.

The recent examples of retina and fingerprint hacking are
a bad sign about how to prove who is who.

With a government who has demonstrated that they cannot
be trusted any attempt to convince them that you should
be trusted runs into a real problem. The IRS can’t even
provide documentation of the law that lets them collect
taxes on income in court, so expecting them to believe
you’re not as crooked will be a lot of fun.

Maybe we need to open Bluffdale to public access that
would provide enough data to allow a defense in court
and a lot of juicy details about public figures as a
nice bonus.

sena kavote February 6, 2015 9:06 PM

Minix or Hurd?

Which is better, Debian GNU/Hurd or NetBSD/Minix?

Easier to call them Hurd Linux and minixBSD or Minix3.

Which microkernel architecture is more promising, which has better ideas?

Which distro is better? The Linux distro with Hurd kernel instead of Linux kernel or the netBSD distro with minix kernel?

Which one would you put:

on your machine

on grandma’s machine

on a spacecraft

on a drone

on a TV

on a web server

on a computer in a bar

on virtualbox

on XEN

Terry Cloth February 6, 2015 9:24 PM

RFID implants for humans are here

[Spotted in /.’s Firehose.]

A Computerworld article reports that a few Swedes at a high-tech office building have volunteered to have RFID implants which will allow them to open secure doors, use office copiers, and pay for lunch.

How long before someone snoops the transaction and builds a duplicate? To say nothing of the fact that, as one person points out, anyone with an RFID reader can read your ID, removing whatever anonymity you had. Unless you wear a lead glove or the like.

Tangentially: I just bought a Toyota Prius, which uses an RFID fob to enable unlocking the doors and starting the engine—no key needed or available. I’m not happy. Slightly worse, if the key is in the vicinity of the car, it will unlock to anyone’s touch. I at least want to push the key fob button, so I know for certain whether it’s locked or not. That aspect can be disabled by changing a setting, but my SO already likes it, and wants to keep it.

Nick P February 6, 2015 9:28 PM

@ sena

Minix 3 because it’s getting more development than Hurd. Monolithic would be FreeBSD (features), OpenBSD (code quality), or NetBSD (easy kernel changes).

just sayin' February 6, 2015 9:53 PM

  • Crunchbang Linux development ceased by developer –

“The End” post from the developer “corenominal”

“I have decided to stop developing CrunchBang. This has not been an easy decision to make and I’ve been putting it off for months. It’s hard to let go of something you love.”

  • The listing at Distrowatch now shows:

“Status: Discontinued”

  • CrunchBang Linux Halts Development – Slashdot


Thoth February 6, 2015 10:22 PM

@Terry Cloth
RFID car key fobs are a bad security idea. Most vehicles now have a feature where the owner only needs to carry the fob within close range of the car for the doors to unlock. Imagine you park next to the street and close the door (but stay within range of the RFID signal) while buying some food and a newspaper from the street-side store, when someone dashes into your vehicle (it unlocks because the fob is nearby), opens the door, and drives away with it.

Wael February 6, 2015 10:24 PM

@Terry Cloth,

How long before someone snoops the transaction and builds a duplicate?

How long before newborns get the implant before they leave the hospital?

Fenrir February 6, 2015 11:05 PM


Written signatures are worthless as a third-hand proof of identity if an attacker has any trivial amount of skill and the time to practice. They’re worth a little bit when they’re produced under observation, and they do protect against impersonation by the totally unskilled, but the current tendency to rely on them as a proof of identity is just the unfortunate residue of a custom that originated when there was no general literacy to worry about.

Biometric schemes are equally terrible at third hand. They don’t offer any way to recover from successful identity theft, especially if it was only done as a joe job, and most can’t cope with intentional mutilations either. If some guy takes your right hand, say, and uses it to produce thumbprint ID – and you can’t because you don’t have a right thumb anymore – is the correct protocol to take the word of the guy who does have your hand, or to take the word of anyone with a missing hand who claims to be you with a missing hand?

Nick P February 6, 2015 11:09 PM

@ Wael

One of the early opinion pieces I remember on the topic was by a guy from the NSA suggesting we should put trackers on anyone arrested for a crime. The crooks would have tracking devices following their location, and future tech might even track their emotional state. The cost would be covered by the wearers (“subscribers”).

Today, we do track criminals with devices, and expansive law-enforcement power might make this a way to track more of the general population. He was closer to the mark than most who speculated.

gordo February 6, 2015 11:33 PM

@ SnoopingKickback:

A glimmer of hope (maybe)?

It’s on appeal (see second item, below).


‘Trust us’ mantra undermined by GCHQ tribunal judgment
Judgment rules that secret intelligence sharing arrangements between Britain and the US did not comply with human rights laws
Alan Travis, home affairs editor | The Guardian | 6 February 2015

The 12-page tribunal judgment in the case brought by Liberty and Privacy International does not rule that the British GCHQ bulk interception programmes were unlawful. But it has ruled that the secret intelligence sharing arrangements between Britain and the US, known as Prism and Upstream, did not comply with human rights laws for seven years because the internal rules and safeguards supposed to guarantee our privacy have themselves been kept secret.

It was only public disclosure of those rules for the first time as part of the first of two IPT [Investigatory Powers Tribunal] rulings in December that brought the intelligence-sharing regime into compliance with human rights law in general, and article 8 of the European convention on human rights on the right to privacy in particular.

The declaration by the tribunal judges is quite clear that until that public disclosure was made on 5 December, the Prism and Upstream programmes under which the private personal data of people living in the UK was obtained by the American authorities contravened human rights laws.

The judges go on to add, however, that the arrangements now comply with human rights laws.

[para. 3-6]


British Court Says Spying on Data Was Illegal
Mark Scott | The New York Times | February 6, 2015

The privacy groups have appealed the British court’s December ruling, which allowed for American and British intelligence agencies to continue sharing surveillance materials.

The appeal, which will be heard by the end of the year at the earliest, has been sent to the European Court of Human Rights in Strasbourg, France. The ruling means that individuals, both in Britain and potentially abroad, who believe they may have been targeted before December 2014 can petition the court to see what information the British intelligence agency may have collected on them.

[last two paragraphs]


Mass surveillance is fundamental threat to human rights, says European report
Europe’s top rights body says scale of NSA spying is ‘stunning’ and suggests UK powers may be at odds with rights convention
Luke Harding | The Guardian | 26 January 2015

Though the recommendations are not binding on governments, the European court of human rights looks to the assembly [the Parliamentary Assembly of the Council of Europe (PACE)] for broad inspiration, and occasionally cites it in its rulings.

[para. 7]


Investigatory Powers Tribunal (IPT) documents:

Operation – List of Judgments
February 2015 (06/02/15) judgment
[Paragraph 4 references disclosures made in paragraph 47 of 05/12/14 judgment]
December 2014 (05/12/14) judgment
[Disclosures referenced in 06/02/15 judgement are in paragraph 47]


Government Communications Headquarters (GCHQ) News & features:

IPT Ruling on Interception
News Article – 6 Feb 2015
IPT rejects assertions of mass surveillance
News Article – 05 Dec 2014
GCHQ is not acting within an unlawful regime or seeking to carry out ‘mass’ or ‘bulk’ surveillance.

Wael February 6, 2015 11:52 PM

@Nick P,

I envision a future where various signals emitted by the human body are either actively or passively detected and analyzed remotely, without the need for any implants. Too far-fetched? I know anything I think of has a link somewhere in your logs 🙂

sena kavote February 7, 2015 1:25 AM

re:Authentication with official papers

One option would be to write in a short authentication code received previously, possibly even 10 years ago. That works for both paper and the internet.

Another electronic option besides public keys would be to append the authentication code to a text file and generate a SHA-256 hash, to prove knowledge of the code. All the possible “wrong” ways to do that would be accounted for by the system, so no worry about character encoding etc.

Hashing is simpler than using gpg. Every Linux distribution ships the sha256sum, sha512sum, and GNU Privacy Guard console programs by default.

With gpg, it is simpler to use a symmetric encryption key received via snail mail than to fiddle with public keys.
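The hash-based option can be sketched with coreutils. The authentication code and filenames here are invented; note that a bare hash over a short code is brute-forceable, so a real system would want something like an HMAC, but the idea is the same:

```shell
# Stand-in for the form being submitted.
echo "Form contents" > form.txt

# The filer prepends the secret authentication code to the document and
# hashes the result; only the hash is attached to the submission.
{ printf '%s' "AUTH-CODE-1234"; cat form.txt; } | sha256sum | cut -d' ' -f1 > proof.txt

# The agency, which also knows the code, recomputes the hash over the
# received document; a match proves the sender knew the code and that
# the document was not altered in transit.
recomputed="$({ printf '%s' "AUTH-CODE-1234"; cat form.txt; } | sha256sum | cut -d' ' -f1)"
[ "$recomputed" = "$(cat proof.txt)" ] && echo "code verified"
```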

Schemes like this should start with Linux users: since a smaller percentage of citizens in most countries use Linux than Windows (Android excluded), Linux users are a more manageable test group, and problems with servers can be handled better. Once the system is tested, fixed, and optimized with Linux users, make the Windows and Mac OS X versions, or start encouraging, advertising, and advising Windows and Mac users on how to find and install some existing Windows or Mac version of gpg.

In some situations, one authentication option among others could be a stone. With paper, the stone can be used as a stamp. Maybe it would be possible to use fingerprint-reading hardware to recognize stones instead; if not, common camera hardware is good enough. Faking a specific glittering stone would require very special manufacturing, or might be completely impossible.

sena kavote February 7, 2015 2:06 AM

Quick and dirty fix for some buffer overflow vulnerabilities

This explanation is itself quick and dirty.

Make a C/C++ source-code auto-editor that finds every use of a buffer and replaces it with a function that does the same thing but with bounds checking, other checks, and an error-message feature. Also add functions that calculate those bounds, and global variables that contain the sizes. Put an #include at the top that provides those extra functions.

This would mean that most things are checked twice, which reduces performance. Let the users and distro maintainers balance performance and security per program and library.

Wesley Parish February 7, 2015 2:43 AM


If the car door automatically locks because the key fob isn’t handy, stealing a car with such a feature might be a long slow suicide, particularly if it also locks the windows and shuts off the air circulation. Asphyxiation from breathing your own carbon dioxide is not a very pleasant way to go, particularly if you understand what is happening, and particularly if the windows are toughened to resist the hammers of inept car thieves.

A Nonny Bunny February 7, 2015 4:09 AM

@ Wesley Parish

If the car door automatically locks because the key fob isn’t handy, stealing a car with such a feature might be a long slow suicide, particularly if it also locks the windows and shuts off the air circulation.

I wouldn’t want to have a car with such a feature, because there is always a chance your key fob breaks, for whatever technical reason.

BoppingAround February 7, 2015 9:19 AM


Too far fetched?
Just be sure to eat a lot of cabbage and drink some milk. Your surveillors would sure appreciate that.

Bong-smoking Primitive Monkey-Brained Sockpuppet February 7, 2015 10:25 AM


Phorgive me for jumping in — this is, for no particular reason, an area of interest for me. In fact, you can tell by my name that I am a domain expert… Leave @Wael out of it, he knows diddly sh*t about this subject — goes without saying, pun not intended…

Just be sure to eat a lot of cabbage and drink some milk. Your surveillors would sure appreciate that.

@Clive Robinson’s recommendation:

So a diet of Russian Cabbage Soup and Steak tarta might give you the desired results.

Will impress this sorry bastard. They’ll need to adjust his cannabis attenuator to reduce the effect to human-tolerable levels. In addition, they’ll need to add some circuitry to optimize the frequency response for the effect at hand, loosely speaking.

Not sure this weapon can be added to the arsenal. This is dangerous stuff! It’s a mysterious weapon of Mass Destruction, so be careful! In thrust we trust!

SnoopingKickback February 7, 2015 10:59 AM

From NobodySpecial

@snoopingkickback – although it is now legal because the Snowden revelations mean it is public, so it is no longer a secret government spying operation, so it is no longer illegal!

If the snooping from the time period before the Snowden revelations was deemed illegal, that snooping is STILL ILLEGAL. The revelations don’t make something legal (that would take a change in the law).

That is, talking about a formerly unknown crime does not make that activity “not a crime”. Making a secret government operation “not a secret” does not mean that said illegal activity is no longer illegal.

Your statement reminds me of the whining criminal that says “I didn’t do anything wrong, I just got caught”.

MikeA February 7, 2015 11:15 AM

@A Nonny Bunny

The keyfob doesn’t even have to break. A while back we went to dinner with another couple. The husband was driving and we dropped off the wives in front of the restaurant to get on the list, while we went to find parking. Once we drove off we realized that the fob had gone with one of the wives, and if the car stalled, we would be unable to restart it. Also, if we left the car and the doors locked, we would not be able to get back in. We quickly called the fob-carrier, only to hear her phone ringing from inside the car…

Yeah, we handled it, but this situation was created in the first place by the sort of muddled thinking about security that is all too common.
It seems that “security” is like a brightly colored cluster-bomb, so cheerful and pretty.

AlanS February 7, 2015 11:47 AM

Brennan Center and Defense One: The U.S. Intelligence Community is Bigger Than Ever, But is It Worth It?

“The nonpartisan Project on Government Oversight and the Columbia Journalism Review back up Friedman’s estimate that the U.S. now spends roughly $1 trillion a year for national security. This figure dwarfs the combined defense budgets of all possible contenders, combined.

Friedman argues that the threats we face today don’t justify such profligate spending. Protected by oceans and bordered by friendly nations, there’s little risk of a foreign invasion. Deaths from wars and other political violence abroad have sharply decreased as well. Terrorism and violent crime in the U.S. are at historically low levels.

Yet despite the relative safety our nation enjoys and the enormous effort and expense dedicated toward strengthening U.S. security, Americans feel less safe than any time since the 9/11 terrorist attacks. So the question isn’t just whether our national security measures are necessary, but whether they work. Do our intelligence agencies actually improve U.S. security and give policy makers the best available information to make wise policy decisions?”

AlanS February 7, 2015 12:14 PM

Links to transcript and video segments of Brennan Center interview with Ben Friedman, a Defense and Homeland Security policy research fellow at the CATO Institute here.

tyr February 7, 2015 2:29 PM


The magic formula is half-baked leek and albacore steak. Not only should it be banned as a weapon of inhumane violence, it contaminates the area for long periods after the initial offense.


Dedicated attackers cannot be defeated with any elaborate scheme; they can always use forgery, body parts, etc. Fending off the random stranger is what the security business is all about. That isolates the dedicated criminal and makes it an enforcement problem instead of a probability. Just as Spandam Alexander led the NSA in the wrong direction with mass collection, any comp scheme that extends trustability to most people is a solution that makes the problem easy to fix instead of impossible. Coupling a level of trust into the mechanism is a good start; extending the economy to include the disenfranchised is a second area we need in order to build a better world. Of course, if you think this is the best humans are capable of, nothing will save you from your own illusions.

Does anyone have an opinion on the recent
shakeout by Big Blue (IBM)?

65535 February 7, 2015 4:37 PM

@ Rob

That is a timely article.

I notice that the data being sold is very cheap. This could be interpreted to mean that multiple hacking groups have access to it [and the NSA via the IRS]. A large number of sellers tends to bring down the price.

The problem with tax-prep software and any consumer accounting software is that these software companies have all of your information [and possibly your customers’]. They have your name, DOB, Social Security number, marital status, age, address, internet address, credit card data, income and deductions – the whole shebang.

Although Intuit has a privacy policy, it has holes. The biggest is that the policy differs from country to country. Given that the NSA has been known to route your communications around the world, this leaves a huge opportunity for interception.

“We comply with applicable laws and security standards.” – intuit

[This would include the infamous CALEA law and various FISA court laws]


“…Intuit is a global company and may access or store personal information in multiple countries, including countries outside of your own country to the extent permitted by applicable law.” – intuit

“5.4 Intuit may monitor your Content.”

“…We may disclose any information necessary to satisfy our legal obligations, protect Intuit or its customers, or operate the Software properly. Intuit, in its sole discretion, may refuse to post, remove, or refuse to remove, any content, in whole or in part, alleged to be unacceptable, undesirable, inappropriate, or in violation of this Agreement.” –intuit

“6.3 Communications.”

“Intuit may be required by law to send you communications about the Software or Third Party Products. You agree that Intuit may send these communications to you via email or by posting them on our websites.” – intuit



“Electronic Filing Services”

“…to file your return electronically, your tax return will be forwarded to Intuit’s Electronic Filing Center, where Intuit will transmit it to the applicable federal and/or state taxing authority. You are responsible for verifying the status of your return to confirm that it has been received and accepted by the applicable taxing authority and, if necessary, for filing it manually in the event that the taxing authority rejects your electronically filed return (e.g., if the taxpayer name and SSN don’t match). The Internal Revenue Service (“IRS”) requires Intuit to notify it, in connection with the electronic filing of your tax return, of the Internet Protocol (“IP”) address of the computer from which the return originated and whether the email address of the person electronically filing the return has been collected. By using this electronic filing service to prepare and submit your tax return, you consent to the disclosure to the IRS and any other tax or revenue authority of all information relating to your use of the Electronic Filing Services.” -intuit

You can read all of the details, but the bottom line is that you are giving ALL your information to Intuit and the IRS [and by extension various law enforcement agencies; most likely the NSA and all of its tentacles]. Further, I would guess that there is a government public certificate embedded in the software and placed on your computer that can strip SSL/TLS encryption. Be forewarned.

Nick P February 7, 2015 5:05 PM

re homebrew systems

This page has the description, software source, and hardware source of a modern Oberon system port. There’s also a Windows emulator. People coding non-Oberon systems might find the description and source code for Oberon routines useful for more quickly making their systems usable. Of course, porting the Oberon System to the new hardware is the ideal bootstrap route.

Riceteeth February 7, 2015 6:29 PM

@Alan S, re 11:47: in the civilized world, national security itself is an obsolete concept, supplanted by human security. Elite indoctrination will keep US foreign-policy apparatchiks ignorant of human security, because it’s got lots of confusing concepts that don’t involve blowing shit up. The national security industry can’t make money that way. Arguably, the last strong impetus for reorienting national security was the interval between der Mauerfall and the 1991 US armed attack on Iraq. At that time, favored cadres in the officer corps were enthusiastically exploring protective roles.

Naturally, it was Fletcher, not Harvard, that showed how intelligence could support human security. “Digital Humanitarians are BURYING the secret world.” Only in terms of being useful and saving lives. NSA still has a bright future targeting civilians and protected persons for wilful killing, disappearance, and torture.

Wesley Parish February 7, 2015 6:32 PM

@A Nonny Bunny, @MikeA

Believe you me, the scenario I painted seems to have happened with a number of people in a number of cars. There was a case in the early 2000s where a Malaysian senior official was locked inside his car because of some bug with the automation; relatively recently there was something on the news in New Zealand about a couple who likewise got locked inside their car with the windows up and they spent an uncomfortable time before they got rescued.

There was a joke going round in the seventies: I heard it at school.

“Hello and welcome aboard the latest, most up-to-date airliner. Everything is fully automated and nothing can go wrong … go wrong … go wrong … go wrong …”

Bob S. February 7, 2015 8:37 PM

The government needs a way to authenticate tax payers?

So long as the governments of all nations hire thousands of bright people to hack, crack and destroy privacy and secure data transfer they will never get a viable authentication method. The government(s) won’t allow it. Also, they don’t deserve absolute authentication. Worse yet they “share” whatever they get with virtually any corporation or government agency that wants it.

Another way to look at it is the tail-wagging-the-dog syndrome: about 1% (or less) of users are criminals intent on stealing data and money via the internet. Why should the 99% lose all their civil and humanitarian rights* as a sacrifice to “security”? (*by, for example, “volunteering” to be chipped)

The way to approach this is to focus on the criminals and on ways to make their lives very difficult… and insecure. Even government criminals. Detection and termination of communication by criminals is the correct route.

Thoth February 7, 2015 9:25 PM

@Bob S.
The Governments’ views (especially those of the Five Eyes warhawks) seem to indicate that only Government business (tax, military, national security, state “policing”, etc.) would enjoy the privileges (and the blatant abuse of privileges) of high-assurance ITSec/InfoSec/ComSec or any form thereof. So I would guess you are expected to secure Government-mandated communications ONLY and no others (as I read their interpretations of the leaks and debates). You are fine signing and encrypting to the Govt, but if you use it elsewhere or for foreign matters, you might be in trouble. The very reason is the architecture of our selfish Govts, which are only interested in the self-profiteering businesses of the elites…

If you look at history, the foundations of most countries are based on two things: terrorizing the people and worshipping the elite. The elites would engage in self-glorification (God-Kings and that sort), though there have been more rational and “democratic” countries that sought counsel; those are rare in history. Most of the time you have some guy who “leads” the people, and over time it degrades and tyranny slowly appears.

Thoth February 7, 2015 10:29 PM

@Nick P, Clive Robinson, Wael & Markus Ottela
Kleptography has been shown to be a PKI-based backdoor virology technique and a very potent and formidable means of key and data exfiltration. It was used in the early days of PKI’s advent, and now it is not uncommon. Simply put, a backdoor public key lying dormant in a compromised system (even one air-gapped with data diodes, serial cables, IR transmissions and guards) can be devastating.

Assuming the Linux/BSD OS or the RPi hardware of the TFC is unclean, the TxM module can be affected by kleptographic functions while the user of the TFC remains oblivious. Personally compiling and inspecting the thousands of lines of code in the OS kernel might help to the extent of detecting pretty obvious bugs, and can be very time-consuming, while something hardware-related might slip past code inspection.

Clive Robinson’s “Prison” functions would definitely be rather expensive or very expensive to implement, and Wael’s “Castle” would assume a trusted platform module, which TFC’s RPi might not have and cannot be assumed to have.

I would imagine the Prison model would link up a ton of Arduinos, RPis, Adafruit boards and similar hobbyist mini computing boards via their GPIO cables, to vote and to encrypt messages before sending them out via the TxM module. That would inherently increase the complexity, which is a problem for common adoption and for DIY hackers building a TFC, and also for keeping some honest machines in the protocol.

For the Castle model, the RPi was never made to be a Castle of its own, let alone to be trusted or easily assumed an honest party.

What would be the likely way to secure the TFC’s TxM module against exfiltration from within, with a modest amount of complexity and resources, so that it can be much more easily adopted?

Nick P February 7, 2015 10:55 PM

@ Skeptical

re QubesOS vs UNIX w/ mandatory controls

QubesOS is a Xen-based virtualization scheme for security. Such schemes simulate entire OS’s with running software, attempt to isolate them, share hardware between them in a controlled way, and provide a means of moving data to/from virtual machines. QubesOS does this on a smaller-than-usual hypervisor (Xen), adds some nice GUI support, isolates a few critical components, and makes launching new instances easy. The drawbacks are heavy resource use compared to OS processes, less hardware support, and the little attention the code has had from pentesters or blackhats.

The others are OS’s that share hardware, isolate more fine-grained processes, optionally provide OS-level virtualization, provide means of sharing/storage, and implement mandatory controls on that sharing/storage. This scheme is easier to set up, supports many security technologies, and is faster. The OS’s I mention get more code review. Drawbacks are a larger attack surface in highly-privileged kernel mode and more likely persistence of malicious data (depending on how the system is set up).

Both are low assurance in that they use vanilla methods attackers can hit. Both focus on containing damage from attacks. Both have strengths and weaknesses. The main issue is going to be how well you personally can use each type of system. If I’m forced to choose a winner, I’d give it to QubesOS over mandatory controls for one reason: QubesOS is currently not targeted by black hats. Like Macs when their market share was low, the fact that almost every skilled hacker is focusing on other security schemes will greatly reduce the number of QubesOS users hit. You’ll get the most benefit from it if you split your stuff between VM’s with classifications like confidential, limited connectivity (eg banking, corporate VPN), and unrestricted Internet. The latter category should probably include email given its risk, random web surfing, and definitely porn. Not accusing: I warn everyone.

Just make sure that, once you’ve come up with the split, you take extra care about how you move data between them. Still do backups of files, critical ones on read-only media (DVD-R’s). Such a system is so new that it could (in theory) corrupt your stuff accidentally, not to mention software errors in the VM’s themselves. So, definitely keep a good backup regimen.

Hope that helps.

Nick P February 7, 2015 11:16 PM

@ Thoth

That kleptography seems to require the trusted components to be a black box makes the solution obvious: a transparent box. That’s subversion-resistant development. The features map to the specs, which map to the modules/functions, which are compiled by trusted software into a binary. This has been the model for high assurance going way back. Even DO-178B, a safety certification, requires this traceability to ensure the resulting system does exactly what it says it will. Security is a super-set of safety and so requires at least as much rigor.

If it’s a black box, then we have a lot more than kleptography to worry about. It could be subverted in any number of ways. Although, this new type of subversion is way too clever for my comfort.

re Castle

I do appreciate Wael’s attempt at giving credit. To clarify, though: (a) I push rigorously implemented designs with proofs of immunity to major attack classes, (b) Clive created the Castle vs Prison metaphor to compare/contrast our work, and (c) Wael promoted TPM use at various times because he participated in the process of making them mass market (i.e. he knows their strengths well). Hope that clears things up for you a bit.

Nick P February 7, 2015 11:54 PM

@ Clive Robinson

I appreciate the link. I plan to read through it soon. I’ll also try to send it to some cypherpunks and such that might give them solid advice.

sena kavote February 8, 2015 12:04 AM

re: Minix 3 vs Hurd

I think the most important question now is, which project to join / help?

Maybe they have parts that are compatible with both?

Have those parts in separate source code files?

Have a c header named microkernel.h? Library that both could use?

Clive Robinson February 8, 2015 12:05 AM

@ Thoth,

Kleptography (Adam Young and Moti Yung) is part of a series of interesting things, including crypto counters, that are of use to some types of attacker. It is a subset of attacks that work by using redundancy “in the channel”; thus removing redundancy is one way to prevent it. Block ciphers of reasonable design generally don’t have the sort of redundancy required, and stream ciphers can have quite a bit of redundancy depending on the underlying data format, but the worst offenders by far are the “mathematical” ciphers such as those used in Public Key systems, which can have very large amounts of redundancy; worst of all are randomly selected numbers such as nonces. The covert channel that “steals away the secret” uses the redundancy for cover, so eliminating the redundancy, or controlling it, is the primary preventative measure.
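As a toy illustration of nonce redundancy used as cover (all names and sizes here are invented; no real protocol is modelled):

```python
import os

KEY = bytes(range(16))  # the secret a subverted implementation wants to leak

def honest_nonce() -> bytes:
    """An honest 16-byte nonce: pure randomness, i.e. pure redundancy."""
    return os.urandom(16)

def leaky_nonce(counter: int) -> bytes:
    """A subverted nonce: 14 random bytes plus 2 bytes of the secret key.
    To a protocol peer it is indistinguishable from an honest nonce."""
    chunk = KEY[(2 * counter) % len(KEY):][:2]
    return os.urandom(14) + chunk

# After 8 messages, an observer who knows the scheme has the whole key.
leaked = b"".join(leaky_nonce(i)[14:] for i in range(8))
assert leaked == KEY
```

(In practice the leaked chunk would also be encrypted to the attacker, as Clive describes further down, so even someone who suspects the channel cannot read it.)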

One attack in particular that you cannot find, no matter how hard you look at the result, is a backdoored PKI public key, where as part of the key generation process you hide a pointer / shortcut to one of the primes (I came up with a practical example of this some years ago, which I’ve mentioned before). Thus you should never use the likes of GoDaddy etc. who make your PKcert for you… Further, you should not trust “black box” software generating the PKcert for you either (so don’t use all those “Windows Apps” either).

However for those who have tried to manually make a PKcert it’s a nightmare you only want to attempt once in your life “to understand the process”, so you do need software tools to do it.

Thus the trick is knowing how to split the PKcert generation up into parts that can not hide Kleptography from you, whilst also not making the task overly burdensome.

Thus I tend to regard kleptography as an “insider” attack, in that it can be avoided with care. However, it’s a subject that needs a lot of consideration, because every bit in a random number is a redundant bit that can be used for information hiding, yet at the same time random numbers are essential to the security of many protocols.

Nick P February 8, 2015 12:21 AM

re “Nick P”‘s Guinness and refried beans

Good to know the fake doesn’t have me under surveillance. I surely wouldn’t torture my date’s nostrils like that.

John February 8, 2015 3:36 AM


The US’s mad-tech military boffin unit is developing a form of biometric measurement based on how a user handles a mouse.

Behaviour-based biometrics, for example how a computer user handles their mouse or crafts an email, would add to the existing repertoire of authentication techniques. Existing authentication techniques include something you know (such as a password or PIN), something you have (such as a number from an RSA token key-fob) and conventional biometrics (such as your fingerprints).

Researchers at the US military’s West Point academy have been given a multi-million dollar grant, after getting the green light from DARPA (the Defence Advanced Research Projects Agency). The award is part of DARPA’s active authentication programme.


Clive Robinson February 8, 2015 4:22 AM

Raspberry Pi 2 suffers flash light induced seizures.

When taking pictures of a running Pi 2 with a flash gun, a user noticed it effectively died. Others soon reported the fault as well, and a good bit of old-fashioned “Blu-Tack” fault finding indicates it’s probably the light (not EMC) getting into the switch-mode power supply chip.

What actually happens is currently unknown, but if it causes an “over-volt” on the digital supply lines it’s not going to do the chips any good, and an early “End of Life” failure could result. Thus until further news of the actual fault mode becomes known, I would not take photos of the Raspberry Pi 2 or operate it in bright light.

sena kavote February 8, 2015 5:50 AM

re: The “prison” metaphor of computer security

One thing that stuck from what little I have found about that “prison” thing was an idea about a collection of software components / a scripting language made or inspected by security experts. What kind of components would those be? If the list is too long, what are the categories? Should some of those components ship with a microkernel?

Radio keys

Those are a strange invention: a tiny improvement in convenience in exchange for safety, privacy and security problems. It looks slightly cool when people magically wake up their car.

What next? Opening doors, computers and bicycles with those?

Linux distro vote in a mostly fluffy top 10 site

It would be nice if people voted and suggested here:

Nick P February 8, 2015 11:05 AM

@ Clive Robinson

That’s interesting. Now the Feds can defeat RP2-based security schemes by executing a search warrant and taking pictures with enhanced cameras. 😉 The guy who discovered it changed his signature to “Discoverer of the PI2 XENON DEATH FLASH !” Haha.

@ sena

re prison

The components would likely be equivalent to the standard libraries of most languages. On top of that, people would contribute components to solve various problems on sites like SourceForge or CPAN. The components would likely be written by people of various skill levels. This means the probable implementation of Clive’s model would need two programming languages: a highly productive and safe scripting language for the average developer, and a safer systems language amenable to strong verification for the native components. Optionally, metaprogramming as in Racket Scheme for ease of integration and for auto-generating/optimizing code for various targets.

re distro

Probably Ubuntu and Mint because they’re the most usable desktop experience in Linux. Usable plus reliable and plenty hardware support. The trouble I’ve gone through in some distros for wireless is a good example of how far ahead Canonical is.

re distro February 8, 2015 1:02 PM

Ubuntu: What Nick P. said, consistently usable, but to avoid Unity’s privacy groping, stick to Xubuntu, Kubuntu or Backbox

Debian: more of a sporting proposition but generally easier to secure and harden than Ubuntu

Pentoo: nice idea but implementation is rough around the edges

Tails (or Whonix): indispensable

Honeydrive: very slick and usable but requires some care because of its specialized purpose

Nick P February 8, 2015 3:43 PM

High Assurance News: Update on my recent reading
(Focus is on NICTA and ETH in this one.)


Building high assurance secure applications using security patterns for capability-based platforms

Towards a verified component platform (applies above)

Termite: Automatic synthesis of device drivers

Note: I’ve posted original Termite paper before. They’ve revamped it (Termite-2) into human guided synthesis with software available online. In “Publications,” see the paper “User-guided device driver synthesis.”

AutoCorres: Automatic specification abstraction [from C source code]

Note: Based on tech used in seL4 verification that went down to C code. The source of the tool is BSD licensed.

Trustworthy File Systems

Note: Source isn’t available yet as it’s probably in design phase. Meanwhile, we can just use clustered file systems with trusted front end components that make the file systems untrusted. These components usually include encryption, integrity protection, and splitting across nodes.

From ETH Zurich:

ARPKI: Attack Resilient Public-Key Infrastructure

Note: Distributed PKI with strong security properties in presence of compromised nodes and deterrence effect by making malice obvious. Haven’t read it yet but the concept is awesome. Abstract description reminds me of my multinational, distributed SCM security scheme.

Secure Remote Execution of Sequential Computations

Pay as you Browse: Microcomputations as Micropayments in Web-based Services

Note: Interesting alternative to advertising. Many Web users’ PC’s have a surplus of spare CPU cycles that cost them nothing extra to use. Using them as payment is clever and might be viable. Further, if more automation is used in development, this might be useful for the build systems of open source software: each view of the page contributing a static check, unit test, code gen for specific ISA, and so on.

ETH’s Network Security Group Publications List

Note: So much interesting and potentially useful stuff on one page I might as well let you decide yourself on what you want.

Conclusion: NICTA and ETH Zurich are doing great work with plenty of practical value. Best wishes to their teams on current and future work. Hope good stuff gets adopted.

Clive Robinson February 8, 2015 4:44 PM

@ sena kavote,

Re: The “prison” model of computer security and the scripting language.

Firstly you need to accept that there are not enough people who can write secure code to go around, and that it would take too long to train them up anyway, which means that the modern “information society” economy would be “negatively impacted” if we tried.

So how best to use those who can write secure code? Well, in times past the solution was that they wrote operating-system and below code, and the OS took care of the areas where security mattered. Further, the old Unix Rapid Prototyping model was to use script tools to build prototypes and then migrate over to C or another lower-level language and code it more “efficiently”. Well, the honest truth is that in the last ten or twenty years of the last century there was actually not much need to rewrite prototyped shell code in a lower-level language just to make things “more efficient”, because the time taken was about the same as it took hardware performance to increase to the point it was unnecessary; so what happened instead was “Wirth’s Law” with “mission creep”.

More important is defect rectification: most studies have shown that human errors in writing code happen at a rate proportional to the number of lines of code, irrespective of the power of the language in use. Therefore, the higher-level the language you use, the less defect rectification is required for any given program. A properly designed shell scripting language is about the highest level you can get.

Therefore there are two immediate payoffs. Firstly, due to the reduction in defect-correction time and the higher level of the scripting language, application programmers should be more productive. Secondly, as secure-systems developers develop the shell tasklets the application programmers use, the overall security of the shell-scripted code will be better than that of an equivalent application coded in a low-level, insecure language by application developers without a secure-systems background.

These two points are known to be correct on any kind of underlying hardware system as various studies have shown from time to time. Thus the question is does the Prison architecture give you a further advantage over a more accepted sequential architecture?

Well, the answer is yes. By breaking down applications into standard tasklets you can run each on a separate CPU that has a strongly mediated interface monitored by its accompanying hypervisor, which can do two things: firstly, ensure that what transitions across the interface is within range etc.; secondly, and more importantly, monitor the tasklet execution signature in various ways, such that unexpected behaviour can be caught and an exception raised.
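A minimal software sketch of such a mediating hypervisor (the class, the range check and the “operation count” signature are all invented for illustration):

```python
class SignatureViolation(Exception):
    pass

class TaskletMonitor:
    """Toy stand-in for the per-tasklet hypervisor: it mediates the
    interface, range-checks everything crossing it, and tracks a crude
    execution signature (here just an operation count per call)."""

    def __init__(self, tasklet, out_range, max_ops):
        self.tasklet = tasklet
        self.out_range = out_range
        self.max_ops = max_ops

    def run(self, value):
        result, ops = self.tasklet(value)
        lo, hi = self.out_range
        if not lo <= result <= hi:          # interface (range) check
            raise SignatureViolation(f"out-of-range result {result}")
        if ops > self.max_ops:              # execution-signature check
            raise SignatureViolation(f"unexpected work: {ops} ops")
        return result

def double(x):
    return 2 * x, 1  # (result, operations performed)

mon = TaskletMonitor(double, out_range=(0, 100), max_ops=2)
print(mon.run(21))  # prints 42
```

A subverted or buggy tasklet that does extra work, or emits an out-of-range value, trips the monitor instead of silently propagating.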

This has a desirable effect on application programmers in that it will force them to stop certain bad habits, such as pushing exceptions and the required handling code (if they can be bothered to write it) to the left, where it becomes exceptionally brittle and thus extremely difficult, and therefore expensive, to maintain, as well as potentially opening a multitude of security problems.

I hope that fills in some of the gaps for you.

Skeptical February 8, 2015 5:22 PM

@Nick: Sounds like excellent advice. I have enough trouble with Windows XP (the darn thing refuses to update, and I may revert to Windows ME, an OS in keeping with a security principle I’ve dubbed “security by frustration”, i.e. any uninvited guest deals with the same unworkable system that I do), so I wouldn’t ordinarily consider anything else. But Qubes is supposedly highly intuitive to anyone familiar with Linux and with at least the metaphor of virtualization, and given the security-by-isolation theme you began with, plus what I pick up from time to time on the evening news, it seemed like an even stronger fit than your alternatives.

Re black-hats – no doubt Qubes as an OS receives less attention, but (with my usual disclaimer of technical ignorance) Xen is, I hear, rather broadly used on systems that would be of some interest to many. There may be something about Qubes that carries vulnerabilities, but to the extent it relies appropriately on Xen for much of the work of isolation, it would be relying on something that must receive a fair amount of ongoing testing, no?

From a 94,000 foot perspective without my eyeglasses on, I’m a little mystified as to why the use of virtualization would necessarily imply better security than a monolithic kernel. In principle it seems as though either could achieve, or fail to achieve, separation of certain defined domains from one another.

Perhaps in more practical terms, though, the relative simplicity of a type 1 hypervisor (or whatever those things are called) allows easier verification of that achievement, or failure, than a monolithic kernel?

It occurs to me that one of the neat things about “air-gapping” is how easy (seemingly) it is to verify. Mark-1 eyeball, leaving aside certain things that might require an additional tool or spec or two. Even including those complications, it sounds like something even someone like me could accomplish if given a few directions.

And if the root of all this is transparency of isolation, I saw a concept referenced offhand somewhere called “Transparent Computing”. It sounds like an oxymoron to me of course, but I’ve heard that various folks have an interest in it, and given the name, I wondered whether it might not be at least tangentially related to this issue.

Re distros: wouldn’t certain Red Hat flavours offer equal if not greater reliability and desktop experience? I’m told they push security fixes with a certain amount of alacrity. For that matter, if one doesn’t mind a little extra set-up, doesn’t Arch fit well with any desktop environment while being extremely well maintained? There are disadvantages to bleeding edge, of course, but bleeding edge is not the same as testing, and I’ve heard Arch manages to be quite stable. This is all fourth-hand information, obviously. Personally, I tried Mint once but could not find the “Start” button and IE wasn’t in its usual place either.

Re distros February 8, 2015 6:31 PM

Red Hat is tainted by NSA fondling. Unlike Arch, Gentoo variants and Slackware avoid systemd, NSA’s newest planned vector of infection. Astra linux is NSA-resistant out of the box, but tricky to customize. Linux is linux is linux, actually, and it all comes down to individually-customized precautions.

65535 February 8, 2015 7:14 PM

@ Alan S 11:47

“The U.S. spends nearly $1 trillion on national security programs and agencies annually…” –brennancenter

It appears that the “spy industrial complex” is larger than previously thought. If those $1 trillion figures are half-way correct it’s a huge industry with a lot of people feeding at the trough.

If the same ratio of spending occurs in the other “Five-eyes” countries, the number of people making a profit from this “Spy Industry” is uncomfortably large [those people would include virus makers, zero-day vendors, stingray makers, telecoms and up to high ranking politicians and military persons].

Worse, dismantling and weaning these people from the money trough could be very difficult.

Markus Ottela February 8, 2015 9:28 PM

@ Thoth:

This is the main issue of TFC.

I’ve included a warning about the attack and introduced ways to detect covert leaks. So if the user bothers to read the white paper and manual, he or she will not be oblivious to the issue. Users who can weigh the complexity and probability of the attack against their threat model are somewhat safe.

I think the probability that an unintentional programming error would introduce a covert channel is negligible, but as an attack vector it is more than probable: pre-compromise of TxM HW / OS is probably one of the only ways to compromise the system at mass scale. Throwing money at combating TFC is unlikely, as it’s not a cost-effective method given the adoption rate. Of course, the NSA hasn’t exactly had profit responsibility, so they might still do it.

Avoiding pre-compromised OS:
The first step is to ensure we have a strong way of authenticating the hashes of ISOs. Take the case of Tails. We can’t trust PKI: with a single subpoena to the Gandi CA you can MITM the Tails ISO download process, effortlessly and on a massive scale. We need to utilize the Web of Trust. The cypherpunk community needs to meet, organize and sign each other’s keys. We need to protect the keys with airgapped computers. This is the only way to truly trust the ISOs. There are still many issues, including undercover agents in the organisations developing the tools, etc. Open source, a formal explanation of each changed LoC, and peer review are going to be necessary. It’s going to be tediously slow and hard. But at least we don’t need a gazillion new features for the ideal TxM OS:

TxM can use any OS that allows it to perform the needed tasks of encryption and serial output. The community could use anything from Minix to OpenBSD to Linux From Scratch. Once we have a trustworthy distro, the developers should create and distribute something strong, a 796-bit Keccak hash of the ISO for example. The functional, non-backdoored OS needs only a single audit. The OS could even pre-date TFC, since it doesn’t have to be 0-day immune; it only needs to be malware free. Alternatively, by ensuring a wide variety of OSs, every one of them would need to be compromised. I’ve given this some thought and I’m leaning towards an OS with a smaller codebase, such as Minix. It’s going to be yet another learning process for me, so I encourage those who agree and who have the skills to put their minds to it.
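A sketch of the verification step itself, using SHA3-512 from Python’s standard library as a stand-in for whatever Keccak parameters the community settles on:

```python
import hashlib

def iso_digest(path: str, algo=hashlib.sha3_512) -> str:
    """Hash a downloaded ISO in chunks so arbitrarily large images
    never need to fit in memory at once."""
    h = algo()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_iso(path: str, published: str) -> bool:
    # 'published' must arrive out of band -- e.g. a WoT-signed
    # announcement -- not over the same channel the ISO came from.
    return iso_digest(path) == published
```

The hashing is the easy part; as the comment says, all the real difficulty is in authenticating the published digest via the Web of Trust.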

Detecting pre-compromised OS / Hardware
The transmitter LED of the data diode has a minimum forward voltage, usually around 2V, so any measuring equipment can detect untimely transmission: even a flip-flop circuit works. Additionally, you can use “dumb” devices that analyse the content of transmission, anything from another computer and OS with an RS-232 tap to spectrum analysers. Since the TxM malware could theoretically also compromise these devices, the community probably needs something analogue.
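The “untimely transmission” check can be modelled in a few lines of software (a toy stand-in for what the measuring hardware would do; the window mechanics are invented for the sketch):

```python
import time

class SerialTap:
    """Toy model of a dumb RS-232 tap: it flags any transmission that
    happens outside a window the user explicitly opened by sending."""

    def __init__(self):
        self.window_open_until = 0.0
        self.alarms = []

    def user_sends(self, duration=1.0):
        self.window_open_until = time.monotonic() + duration

    def observe(self, nbytes):
        if time.monotonic() > self.window_open_until:
            self.alarms.append(f"untimely transmission: {nbytes} bytes")

tap = SerialTap()
tap.observe(32)      # malware transmits with no user action -> alarm
tap.user_sends()
tap.observe(32)      # legitimate traffic inside the window -> silent
assert len(tap.alarms) == 1
```

The real device would of course watch voltage on the line, not function calls, which is exactly why an analogue implementation is harder to subvert.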

Adding Guards
HSA doesn’t necessarily know what’s receiving the ciphertext from TxM. It might be an auditor device, it might be a guard, it might be printer + scanner with OCR. While I’m not sure what the best approach is, I’m sure deliberate and accidental leaks can be made much harder to get through the TxM data diode.

Thoth February 8, 2015 10:12 PM

@Markus Ottela
One of the very simple defenses against hidden functionality is a functionality check at runtime using deterministic crypto values. Use multiple devices to generate randoms and then hash them into a final random, securely (within your knowledge), as your secret key. Set up the software to have a monitor, and don’t use the built-in crypto functions on the chip (if there are any, e.g. AES-NI). When you encrypt in a deterministic way, the results are predictable, and the “secure and forgetful monitor” would ring alarms if something is out of place.
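A minimal sketch of such a deterministic cross-check, using an HMAC-derived keystream as a stand-in “cipher” (the construction and names are purely illustrative):

```python
import hashlib, hmac

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    """Deterministic toy 'cipher': XOR against an HMAC-derived keystream.
    Stands in for whatever deterministic mode the real system uses."""
    stream = hmac.new(key, b"keystream", hashlib.sha256).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

class ForgetfulMonitor:
    """Replays each encryption on an independently built implementation
    and rings an alarm on any divergence from the expected ciphertext."""
    def __init__(self, reference_impl):
        self.reference = reference_impl

    def check(self, key, msg, observed_ct):
        if self.reference(key, msg) != observed_ct:
            raise RuntimeError("monitor alarm: ciphertext diverges")

mon = ForgetfulMonitor(toy_encrypt)
ct = toy_encrypt(b"k" * 32, b"attack at dawn")   # what the device emitted
mon.check(b"k" * 32, b"attack at dawn", ct)      # passes silently
```

Because the construction is deterministic, any hidden functionality that tampers with the output (rather than a side channel) is caught immediately.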

The kleptographic attack (exfiltration and interruption) is a high-level attack view which motivates the need for CvP concepts, and the TFC is a creation that expresses them in some form. Almost everything the Prison model states is about functionality checks and deterministic functions, via checks and mechanisms like voting. Almost every other security technique Clive Robinson, Nick P, Wael etc. propose is geared towards preventing the above from happening, from a higher-level view.

sena kavote February 8, 2015 10:40 PM

re: The “prison” metaphor of computer security

To use all the existing source code, it seems that the scripting languages should be low-level languages interpreted instead of compiled. It would be possible to run C and C++ interpreted, but slower.

Trying to read outside of a buffer would trigger a detailed error message and exit. This would have stopped the Heartbleed bug from becoming a vulnerability. Trying to write outside of a buffer could cause one of two things, depending on settings: an immediate error message and exit, or logging of the written data to help debugging and/or figuring out the attacker’s intent.
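A Python sketch of the kind of checked buffer such an interpreting runtime could maintain (class and behaviour invented for illustration):

```python
class CheckedBuffer:
    """What an interpreting C runtime could do: every read/write is
    bounds-checked, and freed memory is scrubbed before release."""

    def __init__(self, size):
        self.data = bytearray(size)
        self.freed = False

    def _check(self, index):
        if self.freed:
            raise RuntimeError("use after free")
        if not 0 <= index < len(self.data):
            raise IndexError(f"access at {index} outside {len(self.data)}-byte buffer")

    def read(self, index):
        self._check(index)
        return self.data[index]

    def write(self, index, value):
        self._check(index)
        self.data[index] = value

    def free(self):
        self.data[:] = bytes(len(self.data))  # overwrite before release
        self.freed = True

buf = CheckedBuffer(64)
buf.write(0, 0x41)
try:
    buf.read(64 * 1024)   # a Heartbleed-style over-read
except IndexError as e:
    print("caught:", e)
```

A real over-read like Heartbleed’s would stop at the `_check` call with a precise error, instead of silently returning adjacent memory.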

The C interpreter could even store all buffer data encrypted, if that somehow helps in the context of the wider operating system. The encryption could be just very light and weak. All released / freed memory can be overwritten.

All kinds of tricks besides security could be done.

One “contiguous RAM area” could actually be spread over several different machines connected by 10-gigabit LAN. “Saving to disk” could actually be writing to RAM or sending to another computer. The “clock” could be slowed down flexibly to make timings look normal. This is similar to what can or could be done with virtualization of binary machine code.

Function calls at every level could have counters recording how often they are used, and other stats. This information could be used to improve performance, or to detect something abnormal / anomalous for safety and security purposes. It would also help spot hanging loops precisely.
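In Python terms the per-function counters might look like this (a sketch; a real interpreter would keep such stats below the language level):

```python
from collections import Counter
import functools

call_stats = Counter()

def counted(fn):
    """Wrap a function so every call is tallied; a watchdog can compare
    the tallies against a baseline of normal behaviour."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_stats[fn.__name__] += 1
        return fn(*args, **kwargs)
    return wrapper

@counted
def parse_request(data):
    return data.strip()

for _ in range(3):
    parse_request(" x ")
assert call_stats["parse_request"] == 3
```

A sudden spike in one counter relative to the baseline is exactly the kind of anomaly signal the comment describes.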

To lessen the performance reduction, some parts of c source code could be marked for compilation so those parts get compiled before launch and therefore run faster.

Some of the extra work during interpretation could be done on another core, so it doesn’t reduce speed so much.

Markus Ottela February 8, 2015 10:50 PM

@ Thoth

You should assume the malware is in control of the TxM. It knows when you’re inputting test vectors. When you send a proper message, the rootkit could access the key directly from the SD-card / memory, and it would then be leaked prepended/appended around the original packet; the key would then be stripped at the NH before the packet is passed on. Somehow you need to analyse the packet content while it’s being transmitted. One of the most useful ways would be analysing the byte count: the length of packets is always static. So if there were a hardware device that could display this information in a trustworthy way, the user would immediately know if additional information is included inside any packet. Replacing bits is not an option for the attacker, since users can immediately detect if the MAC fails at the receiving end: just ensure communication is turn-based and avoid long messages, to minimise the amount of plaintext that could be leaked.
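The byte-count check itself is trivial, which is what makes a “dumb” trustworthy display device plausible. A sketch with an assumed fixed packet length:

```python
PACKET_LEN = 256  # assumed fixed packet length, for illustration only

def length_guard(packet: bytes) -> int:
    """A 'dumb' device only needs to show the byte count; any deviation
    from the fixed length means data was smuggled alongside the packet."""
    if len(packet) != PACKET_LEN:
        raise ValueError(f"alarm: {len(packet)} bytes, expected {PACKET_LEN}")
    return len(packet)

assert length_guard(bytes(256)) == 256
```

A key prepended or appended to the packet changes the count, so even this single comparison defeats that particular leak path.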

I don’t think the TxM OS is capable of detecting malware in a way that would count as high assurance, but I’d like to read more on the CvP concept anyway: any papers or URLs I could look at?

Clive Robinson February 8, 2015 11:10 PM

@ Nick P,

I’ve just been rereading your comment above, and the following part of it needs some comment,

If it’s a black box, then we have a lot more than kleptography to worry about. It could be subverted in any number of ways. Although, this new type of subversion is way too clever for my comfort.

Kleptography does not need a “black box”, just code that is not obvious to those doing a code review, which, given the general lack of crypto training, means the odds of getting it through are very high indeed.

As I’ve said before the easiest way to get something past a code review is to be bold and upfront and to find a real reason to have a piece of code there.

Kleptography needs a trapdoored one-way algorithm, for which a PubKey crypto system is ideal. The question is how to make it unobvious. Step up to the plate the BBS secure pseudo-random bit generator, and a little underhand abuse of C’s heap memory allocator.

The BBS CS-PRNG is considered quite highly for its properties, which it gets from being based on Michael O. Rabin’s oblivious transfer, which in turn is based on RSA. It is easy to justify BBS’s inclusion in the code, or to modify existing RSA code to make a BBS generator. Either way it’s used to “hide in plain sight” a short public key, which encrypts a secret that can then be used by the person who holds the corresponding private key.

That is, by apparently generating secure random numbers you are actually RSA-encrypting a secret that acts as a backdoor. To hide the fact you are doing this, you can use the side effects of malloc, whereby you allocate memory for a temporary buffer of size X, use it, then “free” it, but then create a new buffer of size X, to which malloc hands you the buffer you just freed with its contents intact.

Unless the code reviewers are not just smart, but also suspicious and clued up on crypto, you have a better-than-average chance of getting this through the code review. And even if a reviewer does catch it, the use of the side effects of the BBS generator and malloc gives you very good deniability, such that even post the Ed Snowden revelations you stand a very good chance of getting away with it.

The hard part is then getting this RSA-encrypted secret into the public key certificate you are generating; if you read the Adam Young and Moti Yung book (of which there are PDFs on the web) you will see a simple version of how to do this.

The result is that the code to generate a public key certificate will embed, in the generated PubKey, the backdoor secret to one of its P/Q primes, RSA-encrypted under the short PubKey hidden in the BBS CS-PRNG. Only the person(s) who hold the corresponding private key of the BBS short public key will be able to use it…
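The two building blocks can be sketched at toy scale: an honest BBS generator, and the kleptographic substitute whose “random” output is really an RSA ciphertext. All numbers below are absurdly small and purely illustrative:

```python
# Honest BBS CS-PRNG: x -> x^2 mod n, emit the low bit of each state.
def bbs_bytes(seed: int, n: int, nbytes: int) -> bytes:
    x, out = seed, 0
    for i in range(8 * nbytes):
        x = (x * x) % n
        out |= (x & 1) << i
    return out.to_bytes(nbytes, "little")

# Kleptographic swap: the "random" output is really RSA(secret) under
# the attacker's short public key, and still looks random to a reviewer.
ATT_E, ATT_N = 65537, 59 * 101   # toy attacker key; a real "short" key is far larger

def subverted_bytes(secret: int, nbytes: int) -> bytes:
    return pow(secret, ATT_E, ATT_N).to_bytes(nbytes, "little")

# Only the attacker, holding the private exponent, recovers the secret.
d = pow(ATT_E, -1, 58 * 100)     # phi of the toy modulus
leak = int.from_bytes(subverted_bytes(1234, 2), "little")
assert pow(leak, d, ATT_N) == 1234
```

Both functions emit byte strings a reviewer would struggle to tell apart, which is the whole point of hiding the channel inside a PRNG.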

Although this might sound like a “golden key” solution to some, it is in fact a very bad idea to use it as such. Basically, whilst long public keys cannot be factored, short public keys, if short enough, can.

Thus when news of this backdoor secret leaks (and it will, if known to more than a couple of people) the hunt for the method, and then the factoring of the short public key, will begin, and with quite high probability it will be discovered by those the “golden key” originators would not want knowing it.

Which, loosely put, means such a “golden key” system is only secure as long as it is not used, as using it reveals its existence and thus leads to its disclosure…

Markus Ottela February 8, 2015 11:14 PM

@ Thoth:

I realized I’m forgetting the fact that you can hide information in more subtle details of the signal: slight differences in high/low signal lengths, etc. There should additionally be some sort of dumb device that “cleans” the signal (ensures steady voltage levels, sharp edges and static lengths) before it’s pushed to the NH.

Nick P February 8, 2015 11:46 PM

@ Skeptical
(from the real Nick P)

“Obviously I lack all technical expertise, but from a position of great ignorance, I must ask ”

“but to the extent it relies appropriately on Xen for much of the work of isolation, it would be relying on something that must receive a fair amount of ongoing testing, no?”

I think you’re a tad more technically proficient than you let on. 😉 Good catch on a flaw in my hasty post. The Xen isolation component does receive a lot of attention, which increases the review aspect for that component. Then there are their own components, which are sometimes privileged (bypass the isolation property). These still need more review. One other issue is that (I think) you can’t run Qubes on the variety of processor architectures and hardware that you can run BSD/Linux on. This negates a useful obfuscation whereby you use hardware that nobody, even pros throwing a wide net, is targeting. Not to mention diverse suppliers to counter subversion. I got plenty of mileage out of those tricks in the past.

” I have enough trouble with Windows XP (darn thing refuses to update, and I may revert to Windows ME, an OS in keeping with a security principle I’ve dubbed “security by frustration”, i.e. any uninvited guest deals with the same unworkable system that I do) so I wouldn’t consider anything else”

That’s hilarious. Same strategy as the military’s nuclear C&C computers, too. Windows ME was Microsoft’s greatest achievement in user frustration until Windows Vista. That you’re using WinXP is a problem: it’s pretty much EOL and unsupported as far as security is concerned. The best option for you is Windows 7: the security and functional enhancements of Vista with only 300-400MB of RAM. IIRC, WinXP typically uses around 300MB in the background. So, Win7 with unneeded stuff turned off, appearance set to vanilla Windows (no Aero BS), and so on is quite efficient. Way more secure than WinXP and around $80 on eBay. Apply a hardening guide or two after you install it, along with Firefox + NoScript and/or Firefox + Sandboxie. Do backups, etc.

“I’m a little mystified as to why the use of virtualization would necessarily imply better security than a monolithic kernel. In principle it seems as though either could achieve, or fail to achieve, separation of certain defined domains from one another. ”

They absolutely can. The problem is that OSes have a ton of complex code running in kernel mode doing all kinds of things, including separation of user-mode processes. A flaw in any of that can compromise the whole system due to kernel mode’s unrestricted memory and CPU access. Better-architected OSes largely faded from the market due to incompatibility with legacy stuff, lower performance, less eye candy, and so on.

A compromise was wrapping a legacy OS in a container running on a much simpler [in theory] piece of software (i.e. a hypervisor). The OS is less privileged, or deprivileged. Any software using the OS interface can run on it, maintaining prior investment. The hypervisor might abstract away the hardware while providing guests functions to call to emulate hardware operations. This allows the hypervisor to mediate access to resources and to multiplex them among many simulated machines that all think they’re the real thing. So, long story short, a hypervisor like Xen at 50,000 lines of code is much easier to assure than a kernel like Linux at 100,000-1mil+ lines of code. It provides benefits outside of security, too. Icing on the cake.
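The mediation point can be sketched in a few lines (a toy model, purely illustrative; the guest names and page numbers are made up): the security-critical logic is one small choke-point function, regardless of how big and buggy the guests themselves are.

```python
# Toy model of why a small hypervisor is easier to reason about:
# every guest access to a resource funnels through one mediation
# function, so the code you must assure stays tiny even though the
# guests are huge legacy OSes.

class TinyHypervisor:
    def __init__(self):
        # which memory pages each simulated guest may touch
        self.page_table = {"guest_a": {0, 1}, "guest_b": {2, 3}}

    def mediate(self, guest, page):
        """The single choke point: deny unless the page is assigned."""
        return page in self.page_table.get(guest, set())

hv = TinyHypervisor()
print(hv.mediate("guest_a", 1))  # True  - its own page
print(hv.mediate("guest_a", 2))  # False - guest_b's page, denied
```

Everything else a real hypervisor does (trap-and-emulate, scheduling, device emulation) hangs off checks of exactly this shape, which is why 50K lines can carry the whole isolation argument.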

“Perhaps in more practical terms, though, the relative simplicity of a type 1 hypervisor (or whatever those things are called) allows easier verification of that achievement, or failure, than a monolithic kernel? ”

I should’ve read your whole post before I wrote the reply. You figured out a huge chunk on your own it seems.

“It occurs to me that one of the neat things about “air-gapping” is how easy (seemingly) it is to verify. Mark-1 eyeball, leaving aside certain things that might require an additional tool or spec or two. Even including those complications, it sounds like something even someone like me could accomplish if given a few directions.”

Somewhat. Against malware, I’d say it’s pretty true. Targeted attacks by people with resources still need expertise. The trick is that data is likely to be shared somehow; that’s where the attacks and/or specialist techniques come in. I swore I had an essay or two here on air gapping, yet I can’t find it. Really weird. If I do, I’ll link to it in a future comment.

” It sounds like an oxymoron to me of course, but I’ve heard that various folks have an interest in it, and given the name, I wondered whether it might not be at least tangentially related to this issue.”

Never heard of it. There’s plenty of tech and research centered on mutually distrusting parties trying to work together on something. Most computing relies on overprivileged black boxes at many layers of the system.

“wouldn’t certain Red Hat flavours offer equal if not greater reliability and desktop experience? ”

The “enterprise” distribution aims at high reliability. The Fedora desktop distribution aims to be cutting edge, which counters reliability a bit. The most important metric is “easy enough to use that you will keep using it.” Fedora falls behind a bit there; I’ve heard SUSE does a bit better, and Ubuntu-based distros lead in that category. Your Mint comment is strange because it’s designed to look a lot like Windows: a “Menu” button in the bottom left simulating the Start menu, quick-launch icons to the right of it, tabs of running apps to the right of that, and on the far right app + configuration icons like in Windows. Some Linux lovers dodge it for this reason. Ubuntu seems to have a bit better performance and quality, though.

Thoth February 9, 2015 12:06 AM

@Clive Robinson, Markus Ottela
Clive Robinson, it would be nice if you could do a quick run-down on CvP concept so that Markus Ottela could have a better understanding (and also for me to get a much better picture).

I would also like to suggest that experts on CvP maybe write an article/pub papers so I can publish them and use them as a future link on my website, which would be useful.

Markus Ottela, I would also like to point out that I did mention a “security monitor” derived from Clive Robinson’s concept of the Prison model, with some form of monitoring machine that is a little more honest than the rest. Maybe Clive Robinson can correct me, but my proposal works by “covertly” monitoring the crypto process via another machine (considering the keys and algorithms are deterministic). Maybe this is not applicable on the RPi as of now, as it lacks a monitoring feature in the core by default (actually the ARMv6/7 core may have a TPM, which now you have to wonder if you can trust – I am not familiar with that ???).

On a logical basis, here’s what’s assumed:
– Malware knows that it is being monitored and cannot do blatant data corruption.
– Malware has access to RNG and crypto functions.
– Malware cannot overwrite or interrupt monitor.

What can be done to keep the malware in place:
- A quorum of monitors uses votes to decide if a function call is acceptable (correctness).
- RNG functions are performed over multiple RNG machines and fed into an honest CSPRNG machine to derive a bunch of random bits.
- Whenever a random key is necessary, the random key is fed to the monitor and the crypto function at the same time.
- Whenever the key is used, the monitor uses its copy of the key to calculate the same output the crypto function should derive. If the results do not match, a new random key is loaded from the CSPRNG machine and the process repeated.

These functions are more complex and require more machines, thus they become harder to handle (just like the Prison model being complex), but they at least give you the ability to monitor the activities of the malware or to discover the existence of one.
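The quorum idea above can be sketched in software (a hedged toy, not the actual proposal: HMAC-SHA256 stands in for “the crypto function”, and the in-process loop stands in for physically separate monitor machines). Several independent monitors recompute the output from their own copy of the key, and the worker’s result is accepted only if it matches the majority:

```python
# Sketch of the quorum-of-monitors check: independent monitors
# recompute the deterministic crypto output and vote; a worker
# result that disagrees with the majority is rejected.

import hashlib
import hmac
from collections import Counter

def crypto_fn(key, msg):
    """Stand-in for the deterministic crypto function being watched."""
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def quorum_accept(worker_out, key, msg, n_monitors=3):
    # each monitor independently recomputes from its own key copy
    votes = [crypto_fn(key, msg) for _ in range(n_monitors)]
    majority, count = Counter(votes).most_common(1)[0]
    return count > n_monitors // 2 and worker_out == majority

key, msg = b"k" * 32, b"payload"
honest = crypto_fn(key, msg)
print(quorum_accept(honest, key, msg))       # True  - worker agrees
print(quorum_accept("corrupted", key, msg))  # False - mismatch caught
```

The determinism assumption is what makes this work: if the malware tampers with the computation it must produce an output the monitors did not predict, and the vote catches it.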

Wael February 9, 2015 12:29 AM

@Thoth, @Markus Ottela, @sena kavote, @Usual suspects (and unusual ones, too),

More thoughts and clarifications on the subject…

This was an early and rather naïve attempt at understanding the characteristics of C-v-P. While @Nick P and @Clive Robinson are each defending their own approach, I see them as complementary — both are needed.

The Castle includes OPSEC and high assurance; it’s a tomb-to-womb thing. The Prison, I think, doesn’t depend heavily on assurances — quite the contrary, it operates under the premise of lack of trust, which means (in C-v-P parlance) that the castle will be breached one way or another — you can’t trust any code or component. Although a TCB of some sort is needed for both, a Prison may require a TCB with different capabilities than a Castle’s TCB, which we haven’t discussed yet.

This was part of my perspective leading to the C-v-P! It hasn’t changed significantly since. But this is still very high level, and a little tangential to the discussion. I frequently make this goal clear to show that my objective isn’t to rank which is superior — that is irrelevant in my mind.

Both Castles and Prisons must necessarily be guided by a set of security principles. Here is a list of some “quick and dirty Security Principles”:

  1. Least Privilege, POLA, or principle of least authority
  2. Economy of Mechanism; KISS (Keep It Simple, Silly). See the related discussions about Security-v-Complexity
  3. Trust no one, also trust but verify
  4. Check at the gate — Check at any boundary or interface crossing (don’t delay the check)
  5. Default deny
  6. Separation of Duties
  7. Segregation of Roles
  8. Reduction of the Surface of Attack — more of a philosophy than a principle
  9. Fail hard and fail fast
  10. Fail consistently. This, for example, is a mitigation control against side channel and timing attacks, see related Security-v-Efficiency discussions
  11. Expansion of search space
  12. Show me a 100-foot wall, and I’ll show you a 101-foot ladder. This also means that Castles will be breached by a 101-foot siege tower. Pardon the analogy, @Nick P
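Two of the principles above, “default deny” and “check at the gate”, are concrete enough to sketch in a few lines (an illustrative toy; the users and actions are invented): the check runs at the boundary, before any work happens, and anything not explicitly allowed is refused.

```python
# Sketch of "default deny" + "check at the gate": an explicit
# allow-list is consulted at the boundary, and everything not on
# it falls through to denial, including unknown users.

ALLOWED = {("alice", "read"), ("alice", "write"), ("bob", "read")}

def gate(user, action):
    """Boundary check: explicit allow-list, everything else denied."""
    if (user, action) not in ALLOWED:   # default deny
        return "denied"
    return "allowed"

print(gate("bob", "read"))      # allowed - explicitly listed
print(gate("bob", "write"))     # denied  - never granted
print(gate("mallory", "read"))  # denied  - unknown user falls through
```

Note that the failure mode of a forgotten rule is a denial (fail hard, principle 9) rather than a silent grant, which is exactly what these principles buy you.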

Principles are basic rules that serve as a guide. But Castles and Prisons aren’t built from “principles” per se! They are built from “mechanisms” and “controls” that achieve the functionality needed for such constructs by applying these principles. Both may need all the principles, in different proportions or with different emphasis. While security controls also (hopefully intentionally) implement these security principles, the principles also apply to OPSEC, coding implementation, and other areas. Thus:

A set of mechanisms + a set of pure principles = a design pattern (I suspect this will be corrected and will evolve to something more useful)

A Castle is, at the conceptual level, a compartmentalization structure that protects the assets inside it; its purpose is intrusion prevention. A Prison, by contrast, is primarily concerned with intrusion detection and mitigation. (Funny, I just noticed that Castle starts with a ‘C’ just like @Clive Robinson’s name, and Prison starts with a ‘P’ just as @Nick P’s name ends, and it should be the reverse, but whatever…) The construction and arrangement of Castles and Prisons aims to achieve a “nested defense in depth” at a higher level. Not all Castles will be the same, and neither will Prisons; they’ll take different forms suitable for the design goals.

Back to C-v-P proper… A Castle’s boundary is more concrete than a Prison’s boundary when operating at the construction level; the boundaries are more conceptual at the OPSEC and assurance levels. So a Castle has “protocols” to be adhered to and concrete “boundaries” to be defended. While Castles have rooms and Prisons have cells, the comparison and contrast between the function of a room and the function of a cell hasn’t been discussed. And just as Castles contain both concrete and conceptual boundaries, Prisons also contain concrete boundaries, plus other mechanisms that don’t fit the “boundary” understanding. The imprisonment of a CPU by a hypervisor, the distribution of applications to different CPUs or cores as subtasks, the voting mechanisms, and the prediction of proper outputs and detection of anomalous behavior at the instruction-set level aren’t “concrete boundaries” per se, but “conceptual” ones.

As far as complaints that this is an analogy that serves little purpose, I say:
Analogies are abundant: firewalls and honeypots, viruses and memory, etc. These are also accepted analogies. Is there another construct (not an analogy but a model, or a security design pattern) needed? “Honeypots” can be a “security construct” that serves as a disinformation vector to aid in “adaptive security measurement”, fed into the Castle and the Prison for dynamic tweaking of their parameters by collecting the attacker’s areas of interest and vectors or methods of attack…

Wael February 9, 2015 12:32 AM


it would be nice if you could do a quick run-down on CvP

Fingers too tired to type a couple of hyphens, or is your keyboard missing a key? It’s C-v-P 🙂

Wael February 9, 2015 2:28 AM


Too much abstract talk is confusing. Perhaps a concrete application of C-v-P will clarify the potential: take this set of instructions that aims to mitigate a threat. How could C-v-P be applied to this set of instructions?

  1. Abstract them to a Castle and a Prison
  2. Define the interactions between the Castle and the Prison
  3. Bring up other ideas that wouldn’t necessarily be clear without a C-v-P

Hint: You’ll need to add at least one more step and mechanism to enhance the “Prison” part of this list.

PS: Normally one would start from the “Threat”, not from a defined solution as above, but this is an illustration, so it’s ok 🙂

Figureitout February 9, 2015 2:36 AM

sena kavote RE: radio keys
–Likely just 2 transceivers w/ an output line to a relay which can then open doors (not sure what you mean by opening a bike lol). What’s next? Severe security vulnerabilities when the protocols get hacked (if you don’t have the protocol, you’ve got nothing), and SDR keeps chugging along making “reconnaissance” easier and easier.

There’s a replay attack on your wireless keyfob or garage door opener, for example; while highly unlikely, it can still be implemented w/ ~$50 in components.

The only reason I recommend them for security is as a separate authentication path that bypasses your hacked router. A real-time attack is less likely than an exposed radio that’s open all the time.
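The replay point can be sketched quickly (a hedged toy, not how any real fob works: real systems like KeeLoq use encrypted counters, while this stand-in uses an HMAC over a counter). A fixed code validates forever once recorded, whereas a rolling-code receiver refuses any counter it has already seen:

```python
# Sketch of why fixed-code fobs are replayable and rolling codes
# are not: the receiver tracks the last counter and rejects anything
# stale, so a recorded transmission is useless the second time.

import hashlib
import hmac

SECRET = b"shared-fob-secret"  # assumed shared fob/receiver secret

def rolling_code(counter):
    """Code for a given counter value (HMAC stand-in, illustrative)."""
    return hmac.new(SECRET, counter.to_bytes(4, "big"),
                    hashlib.sha256).hexdigest()

class Receiver:
    def __init__(self):
        self.last_counter = -1

    def accept(self, counter, code):
        if counter <= self.last_counter:   # replayed or stale
            return False
        if code != rolling_code(counter):  # forged
            return False
        self.last_counter = counter
        return True

rx = Receiver()
captured = (5, rolling_code(5))      # attacker records this in the air
print(rx.accept(*captured))          # True  - first, legitimate use
print(rx.accept(*captured))          # False - replay rejected
```

The $50 attack works precisely against devices that skip the counter, so the recorded bits stay valid forever.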

Markus Ottela
–I was going to suggest FreeRTOS again (after getting it working, moving to SafeRTOS) because the kernel is so small (7,938 LoC not including library files). It’s definitely do-able for an individual to actually go thru it line-by-line (again and again and again) and really know the code, at least starting off, until you start adding your application. I know it can run a webserver and has been ported to the RasPi. Definitely not easy at all though (some uncharted territory for sure, and the size constraint can definitely be a security issue in other ways, like keeping all its “tasks” in a single memory area… then again that makes it easier to monitor…). There are some Python tools for it too that would be handy:

Also some work on porting it to Xen:

But there’s going to be severe headaches getting it to work for sure, and may turn out to be a bad option:

Getting Minix to work, or using another small OS like Angstrom, or making a tiny one w/ buildroot or yocto w/ just OTR, ethernet, and python libraries etc., may turn out to be the more realistic option (but it’s still a decent amount of code you have to review and trust since your goal is high assurance, and that’s not including the tool chain, as always…). I haven’t played w/ Minix yet nor tried to build my own distro yet. It’s easy to say to just use a minimal OS w/ the programs you need, but a feature (a very handy one) is also file exchange, which, while it could ruin security, is definitely very useful, as I believe it currently is.

Neat on the RS232 tap, using one of those in combination w/ an ethernet tap on 2 separate monitoring PC’s would be a cool setup (messy, but cool).

Just got components for that HWRNG you used; should be neat. If I get it working sometime (not a priority, but I want to) I’ll try to house it in a shielded case and take a pic, and then maybe shield the cable out, but I think it’d still leak as it goes into the PC.

Extreme Privilege Escalation on Win8/UEFI

Scary paper. It’s probably outside most people’s threat model as it’s below the OS and hard to understand, but it’s a really terrible prospect. There’s more and more work being done on UEFI vulnerabilities (an integer overflow in it… allows writing garbage, which is guaranteed bricking and a hard problem to diagnose), and while the paper mentions it’s very non-trivial to exploit and they only tested on 1 HP system, it’s still nervewracking as this is game-over. UEFI implementations probably don’t differ that much.

AlanS February 9, 2015 11:09 AM


I believe the $1 trillion figure is for all national security costs, not just intelligence-related costs. There are links in the Brennan article that better explain how the figure adds up, e.g. CJR’s breakdown: The true cost of national security. However, note that it is hard to decode the budget figures –see the discussion at the end of the CJR link. There’s a different analysis at POGO. Information on the intelligence budget is here, but see the discussion at the bottom of the first link for costs this doesn’t include.

But, clearly the money trough has gotten very much bigger and more problematic since people started worrying about such things after WWII.

Nick P February 9, 2015 11:12 AM

@ Clive Robinson

That’s actually a good counterexample. I should amend my subversion-resistant development process to include domain experts where esoteric stuff like cryptography is used.

@ Thoth

re C v P

Clive came up with the metaphor. I hate it. It’s only semi-accurate and makes one put a lot of energy into the metaphor rather than the systems. The conversation was really about comparing my design approach to his: mine focused on prevention, his on monitoring and detection. Here are simplified versions of the actual designs to illustrate.

My original method corresponded to EAL6-7. You capture requirements, create a matching design, a corresponding implementation, bake security into all that, testing, covert channel analysis and so on. You do this to the TCB in a way that establishes security properties for the rest of the software. This might mean memory safety, control flow safety, prevent leaks of secrets, isolation of malicious code, and so on. System as a whole is decomposed into easy to analyze modules with a security policy. By design and implementation choices, the security failure should be low probability and hopefully impossible in practice.

Clive’s design is more complex to describe. He envisions a CPU with hundreds of little execution cores with MMU’s controlled by a hypervisor. The hypervisor determines what memory or I/O the core can access. The system (apps and OS) is broken into very small functions, each function’s execution properties are profiled, each function executes on a tiny core with POLA, and misbehaving functions register an exception. He also recommended that apps be scripted combinations of individual functions that were written by experts.

The reason the metaphor sucks is that it would mislead you. Failures include that my model doesn’t treat everyone equally (e.g. capabilities), will put processes in cells (e.g. segments), and has the equivalent of the Prison’s warden in exception handling by apps and admins. The metaphor takes pages to fail to describe approaches I summed up in two paragraphs. So, best to think of the real things while using C v P as a reference to them: C is prevention by a high-assurance TCB; P is monitoring and mitigation at the function level via execution signatures. I critiqued it as impractical here.

Meanwhile, research into prevention, detection, and combinations moves on. Researchers aren’t [mostly] building whole high-assurance systems. The focus isn’t on monitoring the behavior of functions. The best research on both sides focuses on modifying or extending processors. Prevention might enforce controls per type, key, or ACL over a number of properties. Detection systems architect things to raise an exception when unusual behavior occurs based on simple criteria (e.g. an unusual source & destination on a jump). The better of these methods enforce a great many protections in the system while not slowing it down much and being easy to implement. As Clive predicted, the future will have pieces of both models. Fortunately, the new stuff is also easier on developers than both. 😉

Grauhut February 9, 2015 2:48 PM

TPTB are scared as hell about the “new internet-induced uncertainty in our societies”. We don’t buy their fabricated propaganda truth anymore. All those Russian sock puppets are so evil…

Well, why should we continue to buy war propaganda after the funny Curveball PowerPoint Iraq weapons of mass disinformation? 🙂

From the munich security conference:

SoWhatDidYouExpect February 9, 2015 3:20 PM

Samsung SmartTV Customers Warned Personal Conversations May Be Recorded

While voice commands can be used to control the TV set, they should ONLY record the commands (the result of the voice-to-command translation), not the voice itself. There is something more insidious going on here than just voice commands. I believe it is eavesdropping, and for a purpose.

Remember, some of these so called “smart TV sets” also have cameras, so there may be video recording as well.

I wonder if these “voice commands” are sent out to a server on the network to be translated to commands for the TV.

I recall from some time in the past couple of years that someone was developing a TV screen (or computer monitor) that could record video using the pixels from the display itself as micro-camera inputs. Has anybody followed that concept?

Anura February 9, 2015 3:54 PM


I suspect it’s just because they are using generalized computing hardware. It’s essentially one computer running a microphone, with an internet connection. The firmware both processes the voice and handles the internet connection. Just like your phone, PC, or laptop, if it can be accessed remotely then someone can turn on your mic and webcam.

Luckily, with the tight controls on firmware, they should be locked down by design and receive high assurance auditing and frequent updates in the case of a security issue with some of the software. I mean, I suspect security is such a forethought for the average consumer that they wouldn’t put security on the backburner in an attempt to get their products to the market as quickly as possible for the lowest cost. Right?



BoppingAround February 9, 2015 4:52 PM


Well, why should we continue to buy war propaganda after the funny Curveball PowerPoint Iraq weapons of mass disinformation? 🙂
We should not. There are tonnes of more pleasant propaganda pieces 🙂

Clive Robinson February 9, 2015 4:53 PM

@ Anura, SoWhatDidYouExpect,

They are simply following the specification George Orwell set out in “1984”, written two generations back…

And it’s a racing certainty that the security of this “audio channel” will be sufficiently low that, like CarrierIQ’s product and “mobile phone keyboard presses”, the likes of the NSA will not need to infiltrate the devices, just “listen in” as it goes by on the Internet…

What amazes me is that, with so many real examples of why this sort of thing is so bad, we keep turning out worse, in what appears to be a never-ceasing race to the bottom.

But a question for you: in these days of ever-increasing death by lack of exercise and other bad habits, why does such technology not come with a Government “health warning” like those found on the sides of packets of cigarettes and other tobacco / alcohol / high-fat products?

Anura February 9, 2015 5:53 PM

@Clive Robinson

My biggest question is “Why?” – what is the point of voice commands for a TV? I mean, I can understand it in your car, where your hands are occupied, but when I am sitting on my couch it’s easier to type on my phone than to hope the system recognizes that there are silent letters in the English language (seriously, what is up with the spelling of “phthisis”?).

In my car, I find that half the time when I use the GPS it can’t figure out what street I am saying, and I just have to enter it manually anyway.

Anura February 9, 2015 6:18 PM


Whoops, I didn’t fully read the article. I take back my earlier comment; it sounds like they are either using a third-party service to process the voice commands or they are collecting them for the sake of collecting them (easy to test – do voice commands work when the internet is unplugged?).

Clive Robinson February 9, 2015 7:10 PM

@ Thoth,

When you follow Nick P’s link, read down further. Nick P made a lot of assumptions that made his analysis “inadvertently biased”, as I pointed out at the time; he was not comparing apples with apples, but apples with lemons. It’s why I said,

    As for your other comments I think you are still missing points I’ve already highlighted and explained, so in the meantime go back and read them.

The subject area is both complex and subtle. You raised in that thread a point about malware and the voting protocols, which I explained to you could be trapped by the “Prison”. However, your assumption is correct for the Castle approach, in that it cannot stop that sort of malware getting in.

I don’t want to turn the conversation about a work in progress into a pissing contest, as those generally force people to take up entrenched positions, and thus have effects on their judgment which in turn adversely influence the views of others. What I want is for people to think, ask questions and make their own evaluations; otherwise they are in danger of running down blind alleys and wasting their time.

As I’ve indicated before, neither C nor P will be the ultimate destination for the industry; hopefully that is clear to everybody?

We know the monolithic CPU systems used in the current Castles are an evolutionary dead end that can no longer be financially justified. The chip industry is acutely aware of this and has already been moving to multi-CPU chips for quite some time now. However, the current cores are “fat” with their attempts to fight off the inevitable with single-core systems. This fat unfortunately currently needs to be kept in for the “all-hallowed backward compatibility”, which is just one of the reasons why there are issues with current CISC multi-core systems.

One assumption Nick P makes is that highly parallel systems will use wide-bus –ie 64-bit– cores; they won’t. This can be seen in the journey through Flynn’s 1966 taxonomy[1] that started with the likes of the Intel SIMD extensions, has currently arrived at MIMD, and has hit the non-local-memory issue big style.

This is because wide data busses are really only used for a limited number of mathematical functions and to get the bulk memory bandwidth across PCBs etc. closer to the bandwidth of the CPU core. There are other, more efficient solutions to these problems, of which distributed memory with primary local memory is one.

If, as I had asked Nick P to do, he had reread previous comments, he would have found that, like Robert T, I was not talking of 64-bit CPUs but predominantly 8-bit CPUs in a MIMD array similar to the 8051 that Robert T mentioned, but reduced to a Harvard RISC design with local high-speed memory, not caches, where in effect the tasklets are “microcoded” using efficient RTL rather than inefficient CISC-style instructions that only really exist to try to get around part of the non-local-memory issues.

Thus the journey from sequential to parallel systems will first involve a lot of “unwinding” of the existing “go faster kludges” that have given rise to a lot of fat in CPU cores. Then at some point even the ALU will be split up into a limited number of wide-bus arithmetic units and narrow-bus logical units, or effectively units that are SPMD with their own private local memory and the SPMD sub-core shared core memory.

The obvious difference the Prison offers is the extra security the core MMUs provide for access to memory. Like IOMMUs, these are not controlled from the point from which malware will arrive. Thus they are not controlled by the core they are protecting, but by a lower-layer hypervisor unit.
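A minimal sketch of that arrangement (illustrative only; the addresses and class names are invented) is that the MMU’s window is writable only from the hypervisor layer, never from the core it constrains, so malware on the core cannot widen its own memory view:

```python
# Sketch of a per-core MMU whose window is programmed only by a
# lower-layer hypervisor: the core can ask whether an access is
# allowed, but it has no path to reprogram its own window.

class CoreMMU:
    def __init__(self):
        self.window = (0, 0)        # (base, limit); set only by the hypervisor

    def check(self, addr):
        base, limit = self.window
        return base <= addr < limit

class Hypervisor:
    def grant(self, mmu, base, limit):
        self.last_grant = (base, limit)
        mmu.window = (base, limit)  # only this layer reprograms the MMU

mmu = CoreMMU()
Hypervisor().grant(mmu, 0x1000, 0x2000)
print(mmu.check(0x1800))  # True  - inside the granted window
print(mmu.check(0x3000))  # False - outside, trapped by the MMU
```

In hardware the “only this layer” property is enforced by wiring, not convention: the MMU’s control registers simply aren’t addressable from the core’s side of the bus.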

Will future parallel-core chips be designed “exactly this way”? No, they probably won’t, because continuing development will find ways to optimise things for various desired attributes that we don’t currently know. Which is just one reason we should be thinking and talking about it.

For instance, not many people are aware that you can design chips to be placed on other, larger chips; this technology is currently used by the largest manufacturer of FPGAs. Thus you could design a carrier chip that acts like a “switching backplane” – including voting circuitry, non-local memory and upper-layer hypervisors – on which MIMD arrays with local memory from various manufacturers can be “flip-chip” “ball/bump grid array” connected prior to encapsulation. This sort of technology will be developed by the likes of the US DoD for a whole set of reasons over and above those of increased security.

The main downside of such technology is “heat”; however, even on single chips this is currently a problem, especially with CISC cores. The solution in current use is “dark silicon”, and for many reasons memory is very close to this. Thus RISC cores with local memory are a way the industry is progressing. There is no reason why this local memory cannot be “dual ported” in some way, nor that the MMU logic cannot be built into the memory-selection circuitry.

Thus research into finding an optimal arrangement is likely to happen, which is why we really don’t currently know how things will pan out. It’s this reason why I know that neither pure Castle nor pure Prison will be the long-term future. It’s also why I know MPMD distributed-memory systems of varying scale, from current supercomputers down to those on single chips, are the future; the trick will be dealing with the memory.


Dirk Praet February 9, 2015 8:14 PM

@ Clive

But a question for you: in these days of ever-increasing death by lack of exercise and other bad habits, why does such technology not come with a Government “health warning” like those found on the sides of packets of cigarettes and other tobacco / alcohol / high-fat products?

This usually only tends to happen when there is sufficient scientific proof, and/or a risk of massive litigation established by one or more judicial precedents, that the product or service in question represents a major health hazard.

@ Anura

My biggest question is “Why?” – what is the point of voice commands for a TV?

I doubt that it is to accommodate those of us who have grown either too fat or too lazy to operate the remote. It’s just another cheap selling argument for slick sales reps and marketeers to impress lo-techs with. Or a practical joke by the engineering team that wanted to take credit for getting the first Orwell-compatible TV sets out.

Nathanael February 9, 2015 8:24 PM

I’ve been reading a lot about what military historians call “grand strategy”, which overlaps into geopolitics. Recently I’ve been looking at Chuck Spinney’s blog. His description of Boyd’s criteria for grand strategy is relevant:

What is security, really? The best security you can have is for the vast majority of people around you to WANT to help you. If everyone in a city (or a nation) loves and cares about you, do you need to secure your business in any way? Probably not. Anyone stealing from or vandalizing it will face the wrath of the entire population of the city.

Famously, in the US Civil War, the Confederate Army was riddled with Union spies, while the Union army had practically no Confederate spies. This is because the Union soldiers were loyal and attached to principles, while the Confederate principle was slavery — something only a few rich men were willing to die for.

The entire military-industrial complex in the US, including but not limited to the surveillance state, fails this basic test of security. Practically nobody trusts it, least of all the people who work for it. It will inherently be full of leaks, moles, defectors, and saboteurs because it is awful and will therefore generate such people. It is durable and powerful as a money-generating machine, but it is already a hollow, corrupt thing riddled by self-interested factions — which makes it weak. Certainly too weak to defend the US from any outside attack of any sort whatsoever.

This leads to the following fairly radical conclusions. In order to provide national security, we must:
(1) liquidate the entire “national security” establishment, which is a threat to national security;
(2) establish an entirely new US defense force, from scratch (this will be opposed by the establishment, obviously)

A President with the right attitude would just do this — the titular powers attached to the position make it quite possible to do it given strong support from civil society (everything from NGOs to Google & Amazon), which would be immediately forthcoming. But we haven’t had a President who would even consider it since FDR.

So instead we’ll probably muddle along until some other crisis gives the “national security” establishment a clear-cut military defeat. It’s unfortunate that that will probably require the military defeat of the US. But the Tsar had to be defeated in multiple foreign wars before it was possible for his government to be overthrown and replaced by a competent one. So that’s history as I see it.

Nick P February 9, 2015 11:21 PM

@ Clive Robinson

“I don’t want to turn the conversation about a work in progress into a pissing contest, as those generally force people to take up entrenched positions, and thus have effects on their judgment which in turn adversely influance the views of others.”

I’m trying to ignore that myself, and instead to understand and evaluate the attributes of your design.

“This fat unfortunatly needs to be currently kept in for the “all hallowed backward compatability”, which is just one of the reasons why there are issues with current CISC multi-core systems.”

My evaluation didn’t consider CISC at all. It considered the most resource-efficient RISC (32-64 bit) network-on-a-chip designs in existence. Some ISAs had backwards compatibility but were efficient nonetheless. Others were clean slate. The best clean slate topped out at 128 cores for an architecture with no brains or security built in. A smarter on-chip network or security enhancements use extra space, so I dropped the expected core count based on what you described in your post.

“If as I had asked Nick P to do, he had reread previous comments he would have found, like Robert T, I was not talking of 64 Bit CPUs but predominantly 8 bit CPUs in a MIMD array similar to the 8051 that Robert T mentioned,”

I read and ignored that: nobody is using 8-bit processors for desktops, servers, or even high performance embedded systems. Far from a “limited number of mathematical functions,” the higher bit processors have more register storage, throughput, and a larger address space. I’d rather not go back to only working a byte at a time with virtually no memory and tons of indirection to run relatively simple programs. The few MPP systems that have tried simplified processors ended up performing like crap on most workloads because few are parallel and simple enough to use such CPU’s (eg Amdahl’s Law). Likewise, people writing efficient network and compute stacks for cheaper embedded CPU’s always experience a huge drop in performance.

So, it’s safe to assume the design uses the 32-64 bit cores that defeat lower bit competition in the real world. I’ll remove that assumption if someone invents an 8- or 16-bit desktop NOC that can run workloads modern RISC chips do using same silicon with similar performance. If people think this can be done, I encourage them to prove it with actual chips: I’ll benefit from the cost savings and parallelism too.

“chips placed on other larger chips and this technology is used by the largest manufacturer of FPGAs currently. Thus you could design a carrier chip that acts like a “switching backplane” including voting circuitry and non local memory and upper layer hypervisors on which MIMD arrays with local memory from various manufactures can be “flip chip” “ball/bump grid array” connected prior to encapsulation.”

There’s definitely potential with 3D chip designs. I also agree DOD (specifically DARPA) will likely fund the initial work. They funded much of the best stuff we use today through their Strategic Computing Initiative. They’re also funding a lot of great stuff right now.

” It’s also why I know MPMD distributed memory systems of varying scale from current Super Computers down to those on single chips are the future, the trick will be dealing with the memory.”

That’s the present: petascale computing schemes already did that. The 128-core chip using memory fab technology being the most brilliant implementation. Most of the exascale approaches are working to improve on this as well. So, we’ll definitely see more of it. Hopefully, some of these tricks the exascale research funds will spill over into our own work and make it more efficient.

gordo February 9, 2015 11:52 PM

How the NSA Spying Programs Have Changed Since Snowden
Sarah Childress | FRONTLINE | February 9, 2015

The government says it’s made reforms to its surveillance programs. But how much is really different?


FRONTLINE sifted through the various reports and recommendations to understand what the government has changed post-Snowden — and just how much is still exactly the same.

Thoth February 10, 2015 12:34 AM

@Nick P
Regarding 8-16 bit RISC processors, a lot of crypto-chips use those; there are 8-bit ECC crypto-chips, for example. What I sense from Clive Robinson’s 8-bit or 16-bit processors can be translated into a bunch of security processors in slave mode, and his aim is ultimately security applications requiring high assurance, just as current SIM cards and smartcards run 8- or 16-bit RISC-type security processors, a bunch of them from NXP. One good example is NXP’s current flagship crypto-chip, the SmartMX series, which has a 16-bit RISC core.

I can imagine that if a bunch (say 30) of those 8-bit smartcard chips were bunched together and compacted into a SoC “mothership” chip with a trusted supervisor, probably a 16-bit or even 32-bit RISC processor for higher throughput, it might actually work.

Of course most such security-critical applications cannot expect both speed and security (a tradeoff). Most daily low/no-assurance desktop use would still continue, but the branch-off would be in the security industry, where the debate over monolithic cores or “Prison” cores would be decided as the mainstream.

I think if there’s someone willing, 128 NXP SmartMX series cores could be linked to form a clustered and prisoned core system, but who’s going to trust them for security applications?

I am guessing consumer electronics would stay the same, and the difference is mostly in the security electronics (which have already become very cheap).

Bong-smoking Primitive Monkey-Brained Sockpuppet February 10, 2015 12:38 AM


How the NSA Spying Programs Have Changed Since Snowden

My guess is they will never hire anyone whose name has the dead giveaway anagrammatical sentence:

Wed Word End NSA

On Wed, June 5th, timeline!
June 5, 2013 – The Guardian reports that the U.S. government has obtained a secret court order that requires Verizon to turn over the telephone records of millions of Americans to the NSA.

It’s written all over the guy! They should have seen it coming, no?

Wael February 10, 2015 1:01 AM


I can imagine if a bunch (say 30 of those) 8bit smartcard chips were to be bunched together and compact into a SoC “mothership”

If I had control over cores, I would consider going the Tesla K40 route, with its 2880 CUDA cores.

Grauhut February 10, 2015 2:02 AM

@Wael: A $100 Maxwell 512 low-power CUDA core GTX750 should be able to handle most sec jobs fast enough. 😉

Clive Robinson February 10, 2015 6:00 AM

@ Nick P,

You are making a mistake in thinking,

Far from a “limited number of mathematical functions,” the higher bit processors have more register storage, throughput, and a larger address space. I’d rather not go back to only working a byte at a time with virtually no memory and tons of indirection to run relatively simple programs

One of the biggest improvements in computer performance is down to the MMU and its “virtual memory”. In its purest form the CPU only knows the physical location of data held internally, such as its register file. On the other side of the MMU it has no clue whatsoever; it just has a logical address, which the MMU and other parts of the system convert from logical to physical.

The question thus arises: do the logical address and the physical address have to be the same size? Well, the answer is most definitely no.

If you look at program memory and data memory separately you start to see things from a different viewpoint. Arguably the von Neumann architecture was the biggest brake put on computer development, which is why most modern CPUs are Harvard architecture internally.

One thing that becomes clear is that both executable code and the data it works on are not randomly placed in the CPU’s logical view of memory, but, within reason, sequentially. It’s why so much of the “fat” around the ALU works. In reality the CPU has a “window” on both the program memory and the data memory. In most cases these windows are really very narrow, and can be as little as four memory address bits wide for sequential execution and seven bits wide for data (assuming byte addressing). Even when branching, the jumps are usually within a very short distance, hence in many CPU designs branching is only eight bits wide, for +/- 127 addresses either side of the current executable memory pointer (ie the PC or program counter). When considering data, programs very rarely work on the data memory directly; instead, data is moved to the memory most local to the ALU, the “register file”, and worked on there. Even when dealing with memory less local to the ALU, the narrow window still exists with “stack frames”. Thus, internally to the CPU, the address busses do not actually need to be 32 or 64 bits, but eight bits or less.

Which gives rise to other questions, such as: what if we make the dual-ported register file much larger? This is what we have seen happen in the history of supercomputers, where up to 128 8-byte registers were used for significant performance gains. Other suggestions have come forward as the pinch points change on the “memory wall” issue.

One such was to use short virtual addresses via a translation table to the full logical addresses, which through the MMU and other virtual memory systems eventually translate to the actual physical address. The problem, which is making short virtual addresses more desirable, is the memory address range limits. 32 bit addresses were once assumed to be more than sufficient, now even 64bit addresses are looking a bit tight around the “neck”, the simple fact is the more distributed the total memory the worse the problem will get. Which means bounded address sizes are going to become an ever tightening collar, which short virtual addresses will resolve acceptably giving plenty of breathing room now and into the future, where 20 terabyte memory blocks –sufficient to store a similar amount to the human brain– will be looking small.
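A toy sketch of that short-virtual-address idea may help. All the sizes and names here are assumptions made for illustration (an 8-bit short address split into a 4-bit window tag and a 4-bit offset, a 16-entry per-tasklet table holding 64-bit logical bases); this is not the actual scheme being described, just the translation mechanism in miniature:

```python
# Hypothetical illustration: an 8-bit "short virtual address" splits into
# a 4-bit window tag and a 4-bit offset. A tiny per-tasklet table expands
# the tag into a full 64-bit logical base address; the MMU would then map
# that logical address to a physical one. All names and sizes are invented.

class ShortAddressTable:
    def __init__(self):
        self.windows = [0] * 16          # one 64-bit logical base per tag

    def map_window(self, tag, logical_base):
        self.windows[tag & 0xF] = logical_base

    def translate(self, short_addr):
        tag = (short_addr >> 4) & 0xF    # which window the address selects
        offset = short_addr & 0xF        # position inside that window
        return self.windows[tag] + offset

t = ShortAddressTable()
t.map_window(0x2, 0x7FFF_0000_0000)      # window 2 points into a 64-bit space
print(hex(t.translate(0x25)))            # tag 2, offset 5 -> 0x7fff00000005
```

The point of the sketch is only that code in a cell never handles more than 8 address bits; growing the machine’s total memory changes the table entries, not the instruction encoding.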

We briefly discussed such short virtual address systems a while ago; the major problem was memory ordering, which nicely splits in the Harvard architecture such that what is required for code is separated from what is required by data. In a prison architecture the tasklets assigned to each cell are sufficiently small that code address size is not going to be an issue; further, the “letter box” buffers and stream communications likewise render data address size issues irrelevant, as the MMU/address decoding logic translates short virtual addresses to the required sized logical and/or physical addresses. This in effect makes each prison more like a vector processor than a sequential scalar processor, which moves many of the addressing issues from the low-level code to the data scheduler and compiler.

Further, the short virtual memory addresses make the generation of the RISC microcode/instructions much more efficient and tailored to the tasklet in each prison, allowing more to be done with less because of the tailored and optimised approach. Which also allows the cell CPU to run at a higher clock rate.

Thus the question boils down to how long before address issues cease to be of actual relevance to the majority of the ALUs, and further, whether the ALU can efficiently work with local memory as fast as registers but not constrained by register size issues. That is, if the data worked on is 8-bit then that area of local memory is treated as a vector or collection of 8-bit registers; if the data is 128-bit then that area of local memory is treated as a vector or collection of 128-bit registers. This is done in part by multiple-part ALUs of some conveniently small size, such as 8 bits. The advantage is that coding up the task as RTL microcode and custom instructions gives a high degree of flexibility and efficiency as well as speed. One future of parallel computing will be not by using multiple traditional “general purpose X bit CPUs” but by using highly configurable CPUs.

Nick P February 10, 2015 12:43 PM

@ Clive Robinson

re range

You’ve addressed the address range problem with quite a number of creative solutions. I can see that working for especially simple or embedded machines. Not as sure for complex desktops or servers where there’d be lots of indirection and windows. Maybe on that one.

The throughput and storage benefits of higher bits remain. Most numbers and values I see in real-world applications can’t be held in 8 bits. This means a simple instruction must become a number of them executed sequentially. That kills efficiency and is one of the reasons supercomputers settled on a minimum of 32 bits for data. This was true for DSM’s, vector processors, NUMA, and clusters. Mainframes as well. Dropping to representations that your data can’t fit in kills throughput. Do you have examples of 8-bit CPU’s working on 32-bit numbers as efficiently as 32-bit CPU’s? It’s necessary because much of the work is inherently sequential.

re “Further the short virtual memory addresses makes the generation of the RISC microcode/instructions much more efficient and tailored to the tasklet in each prison, alowing more to be done with less because of the tailored and optomised approach. ”

That can definitely improve performance on many algorithms. I’ve advocated the microcode aspect myself.

“The advantage is that coding up the task as RTL microcode and custom instructions gives a high degree of flexibility and efficiency as well as speed. One future of parallel computing will be not by using multiple traditional “general purpose X bit CPUs” but by using highly configerable CPUs.”

One issue we have in this debate is that your thinking has evolved based on our discussions here, I reviewed your original statements, and your rebuttal differs from them significantly. For instance, what you just described is the Tensilica approach I gave you in 2012 in response to your post referencing whole “CPU blocks” mediated by a state machine hypervisor. (Tensilica is in comment below.) It’s also similar to NISC architecture I referenced here a number of times. So, your original architecture references simple, CPU cores while your current reply is talking about Tensilica/NISC-style systems with mediation.

Doing that is one of the quirks in your writing. Nothing wrong with evolving views; my own have changed as I learned during our discussions. My MPP security architecture was obviously influenced by your Prison proposals, for instance. It might help, though, if you sit down and come up with a clear picture of your Prison architecture as you see it today. I critiqued Prison v1 after digging through lots of comments where you gave specific, technical details (I cited a few later on in that thread). Your current description, Prison v2, is more efficient and has fewer implementation issues than the original. The confusion arises from the fact that we are debating effectively two different designs as if they’re the same, when you’ve clearly upgraded it with the results of our prior discussions and your own research.

It might be worth detailing the concept, ensuring its consistency, describing how it might be implemented with methods that exist today, where it might be improved with future R&D, and putting it all in one post on a Squid thread for easy reference/reading. Then everyone (esp Wael and Thoth) will all be on the same page as to what exactly you’re describing and evaluate it fairly. I’ve done this myself and plan to do it again for my framework in near future: nobody is going to (or should be expected to) comb through thousands of my posts trying to understand what I describe in a comment. I provide indexes and revisions instead.

The other benefit to you doing this is that it might benefit the work of those thinking in similar directions as you. Quite a number of them I’ve seen. Plus, ideas like the window scheme for working with large amounts of memory in tiny CPU’s might be beneficial by themselves for those building homebrew systems.

gordo February 10, 2015 1:55 PM

@ Bong-smoking Primitive Monkey-Brained Sockpuppet

You too may have troubles getting hired:


Bong-smoking Primitive Monkey-Brained Sockpuppet February 10, 2015 2:10 PM


Lol! My name needs no anagrams to inhibit my employment prospects 😉 Not sure which is worse: My name or the anagram you shared 🙂

Clive Robinson February 10, 2015 4:16 PM

@ Nick P,

Yes, the ideas behind the Prison are evolving, oddly perhaps more to keep it up to date with what technology can do in general.

With regard to the 8-bit ALUs and 32-bit or larger numbers, there are two basic choices. The first is the simple sequential repeat you mention, which takes about five times as long on an add. The second and much more interesting way is with four ALUs that work as either a 32-bit adder, or as SIMD or MIMD for other instructions, depending on how you generate the RTL. Due to the “carry” issues the ALUs would have to be slowed to about 60% of the speed logical instructions would run at, thus making the ADD/SUB/CMP etc arithmetic instructions take two clock cycles, which would be acceptable.
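As a rough illustration of the carry problem, here is a toy model (invented for this sketch, not taken from the Prison design) of a 32-bit add assembled from four 8-bit ALU slices chained through a carry bit; the serial carry chain is exactly why the arithmetic path cannot be clocked as fast as plain logic:

```python
# Toy model of a 32-bit add built from four 8-bit ALU slices. The carry
# must "ripple" from the low slice to the high one, which is why such a
# composite adder runs slower than the purely logical instructions.

def alu8_add(a, b, carry_in):
    """One 8-bit ALU slice: returns (8-bit sum, carry out)."""
    total = (a & 0xFF) + (b & 0xFF) + carry_in
    return total & 0xFF, total >> 8

def add32_from_slices(x, y):
    result, carry = 0, 0
    for byte in range(4):                # least significant byte first
        a = (x >> (8 * byte)) & 0xFF
        b = (y >> (8 * byte)) & 0xFF
        s, carry = alu8_add(a, b, carry)
        result |= s << (8 * byte)
    return result, carry                 # final carry = 32-bit overflow

print(add32_from_slices(0xFFFF_FFFF, 1))   # wraps to (0, 1)
```

The sequential-repeat choice is this same loop run over several instruction cycles instead of in parallel hardware, which is where the roughly five-fold slowdown on an add comes from.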

An effective 8-bit ALU is only 150-200 simple gates, which is very small compared to other logic you will find in even old-style 8-bit processors. Surprisingly, examinations of the Z80 CPU show that its ALU was actually 4 bits, not 8, which shows various “behind the scenes” tricks with ALU bus widths have been going on almost from the “get go” with single-chip CPUs.

The main takeaway is that I want the CPU to be as simple and flexible as possible, whilst adding the security into the “local memory” decode and the non-local memory MMU. I was going to get a Spartan FPGA and code it up, but the chips are now deprecated and the prototype boards for other FPGAs of similar spec are a lot more pricey.

Tomorrow I’ll have a look at the Tensilica; to be honest the name does not “click in my head”, which probably means I’ve either not looked, or for some reason forgotten about it (old age creeps up on you no matter how hard you might try to stop it 🙁

Nick P February 10, 2015 4:49 PM

@ Clive Robinson

That’s exactly the degradation I was talking about. I’m sure there might be ALU tricks and such that reduce it. I look forward to seeing what engineers come up with in that area. Meanwhile, here are links to sites that have many inexpensive FPGA boards. Far as Tensilica, this article along with the Xtensa link in it explains what they do. NISC is an open source toolset doing a subset of that work and is targeted to cheap (Xilinx?) FPGA’s. Also allows you to prototype in C with synthesis to HDL.

Nick P February 10, 2015 5:30 PM

@ All re FPGA’s

A lot of interesting stuff on eBay as usual. However, this seems more interesting than usual because of what it comes with: Xilinx Vivado Design/Synthesis tools, tutorials, lectures, and game building examples. Vanilla Spartan-6 dev boards from Xilinx are around $250 while this is $350. Might be a decent buy.

Wael February 10, 2015 5:54 PM

@Nick “The C-v-P party pooper” P,

A lot of interesting stuff on eBay as usual.

I wish I had more time and money to play with all these toys and read all these links!

Then everyone (esp Wael and Thoth) will all be on the same page as to what exactly you’re describing and evaluate it fairly.

I read every scattered thread on the subject. As for collecting everything in one spot, I’ll add another request to @Moderator to create a “room” for special interest groups. I’ll pay a moderate “subscription” fee, too!

Wael February 10, 2015 6:42 PM


Maxwell 512 low power cuda core GTX750 should be able to handle most sec jobs fast enough.

Probably so, if you’re willing to use OpenCL (and maybe C11 too.) I don’t think we have the level of control on the GPU cores and “glue logic circuitry” needed to implement an elementary proof of concept “Prison”, though. I haven’t checked…

Wael February 10, 2015 6:48 PM


I never viewed them as debates. I wouldn’t have the patience to last in a debate longer than a few iterations.

Do a search, Thoth! Besides, did you figure out the problem or you need more hints? Time’s running out… Tick-tock, tick-tock,…

Thoth February 10, 2015 7:03 PM

I don’t have many time constraints on my side here 🙂 . I’ll probably create an index over this weekend or something for easier referencing on my side.

Nick P February 10, 2015 7:38 PM

@ Sancho_P

It’s an interesting chip, for sure. I wouldn’t mind seeing it used in things if only for the value of diversity against subversion and attacks. Especially considering black hats probably have never heard of it, much less target it. And 500MIPS is enough for plenty of workloads so long as the problem is parallel enough to use it. I’d use it as a coprocessor for application code while a more traditional processor handles I/O, configuration, and management.

@ Wael, Thoth

Yeah, the old ones were discussions and the recent one leaned toward debate. They’re easy enough to look up: filtering out references that didn’t include discussions, and organizing the actual discussions, is the hard part. One can find the pieces by typing a Google query that restricts results to here, followed by combinations of the keywords Nick P, Wael, Clive, Castle, Prison, C v P. Use quotes on Clive’s name, as he’s always in the discussion, with quotes also on the topic keyword you are using in a given search. That will limit the search to the most likely candidates.

Examples:
“Clive” “Nick P” “castle” “prison” will match our early conversations where he spells out the meaning.
“Clive” “Wael” “castle” “prison” will catch those I wasn’t in.
“Clive” “castle” “prison” will match about all of them, with many false alarms and repeats.
“Clive” “prison” Nick P Wael castle c v p casts the broadest net with no precision at all.

Use the date operator optionally to focus the search on small periods of time starting years in the past. That can let you see the conversations as they appear. Starting time should be February 2010 as that’s the oldest link Google has. Not sure if that was the first or if Google lost the first.

Wael February 10, 2015 11:07 PM

@Nick P,

Yeah, the old one’s were discussions and the recent one leaned toward debate.

Yeah! Those were the days, weren’t they? 🙂 By the way, didn’t you say:

Oh, this again? Fun, fun. Wael, don’t let him think what he described is what WE came up with. […] It was his idea I contributed to a bit. His is the Prison, mine was the Castle. course, both academics and I have refined the idea over the years to reduce work required & make good tradeoffs. — Nick P, June 3rd 2012

But now you say, and I quote:

Clive came up with the metaphor. I hate it. It’s only semi-accurate and makes one put a lot of energy into the metaphor rather than systems […] The reason the metaphor sucks is that it would mislead you.– Nick P, Feb 9th 2015

Holy sh*t! Are you flip-flopping on me again @Nick P? Didn’t I caution you before, man? You want me to “craft” another limerick for you, or would you rather have a talk with my sockpuppet, huh? I hear he’s got a bong with your name on it!

Wael February 10, 2015 11:41 PM

@Nick P,

Come to think of it, @Clive Robinson was right on the mark, too 😉

A limerick, followed by a poem! It’s gotta be true man!

Clive Robinson February 10, 2015 11:48 PM

@ Wael,

That little limerick you started for Nick P: there used to be a topical quiz programme with comedians as contestants, in which, in one round, each team was given a topical item to make up a limerick about. BUT…. they had to do it by taking turns, each giving a line…

So as you have given the first line, I can’t help feeling the second line should be along the lines of,

    Who had an enormous bit bucket

To “keep it within subject and decency” 😉

So your turn again….

Nick P February 10, 2015 11:55 PM

@ Wael

Ahh, that’s entertaining in a sinister way. Minus the selective highlighting, the first and second quote don’t contradict at all. Nice try Wael. Yet, you two’s poetry is still hilarious.

Wael February 11, 2015 12:05 AM

@Clive Robinson,

To “keep it within subject and decency” 😉

You led me down this path, then you ask me to make it “decent”? Hmmm, challenging. Give me some time…

So your turn again….

That’s ingenious! I really thought about starting something like this, but with the movie plot contest @Schneier runs every year. Someone posts a paragraph, another continues, then we have a collective story…

Wael February 11, 2015 12:37 AM

@Clive Robinson,

Nick P was a Security Schizophrenic from Nantucket
Who had an enormous bit bucket
He saw a castle of brass

I made it equally difficult for you to keep decent! Guess what “brass” rhymes with? 🙂

Clive Robinson February 11, 2015 3:21 AM

@ Wael,

I made it equally difficult for you to keep decent! Guess what “brass” rhymes with?

Err “crass”, “glass”, “class” so the line could be any one of,

    And said “What no dungeon? don’t be crass!”.
    So he polished it, till it shone like glass
    And said “Brass!!!, do you think I’ve no class”

So still on topic 😉

The question is of course will Nick P get “all Victorian” on us, or finally reply in kind.

Andrew_K February 11, 2015 4:49 AM

Due to lack of time I haven’t read all preceding comments yet, so please forgive me if this topic is already covered and I have missed it.

The Linux kernel is about to become live-patchable. To fix security issues, the kernel development folks say. To create new security issues, I say.
Anyhow, considering the Linux kernel not secure isn’t much of a new idea around here, but this “feature” makes it even worse. And another great victory of p… contest mentality among OSS developers.

Wael February 11, 2015 5:35 AM

@Clive Robinson,

I choose the second one: “So he polished it, till it shone like glass”. It becomes more difficult as you progress…

Nick P was a Security Schizophrenic from Nantucket
Who had an enormous bit bucket
He saw a castle of brass
So he polished it, till it shone like glass
But t’was a prison with enough bit-muppets to fill his bucket

Hate to use the word “bucket” twice!

Wael February 11, 2015 5:43 AM

I give you two choices! The previous and this:

It turned into a prison, and the bucket… mop’et

Wael February 11, 2015 5:56 AM

@Clive Robinson,

And here is your third choice:

It became a prison, as for his bucket…. he used to mop it

Nick P February 11, 2015 10:59 AM

@ Andrew_K

I assume it’s to come into feature parity with IBM’s AIX: the reigning king of UNIX OS uptime. One of its proponents bragged they could live-update their kernels to avoid downtime. While it has a bit of security risk, it’s actually a good idea where downtime is considered a greater security threat. Mission-critical enterprise apps, mainly.

Clive Robinson February 11, 2015 11:19 AM

@ Wael,

I chose your “bucket” option and raise you with,

    And on trying to stop, was told “up you must suck it”

Clive Robinson February 11, 2015 5:21 PM

@ Andrew_K, Nick P,

There are good sides and bad sides to live patching. As Nick P notes, it can make uptimes look really good, and thus the “availability” that the likes of the telcos are always looking for.

Importantly, it suggests that the kernel code is being “unentwined” and possibly getting a sensible “framework”. All of which is good from one perspective, but as Nick P notes, this more modular approach will make malware injection easier for attackers (in the same way it’s easier to get a pair of socks out of the drawer if the drawer is neatly ordered rather than an entwined mess).

My concern though is how many layers of patches it will be OK to run with before common sense says it’s time to “lift the Molly Guard” and hit the “shutdown process” switch to get to some kind of stable base. In the machine/engine world you do this as part of “Preventative Maintenance” procedures.

Which brings up the question of poorly written applications. For a system to be truly “resilient” the apps need to not just fail gracefully, they need to be able to pass over processing to the same app running on another platform. If the app is not written in a way to facilitate this, then preventative maintenance will not be possible and the system is guaranteed to fail at some point with the loss of data not just up time.

Like security, writing apps that have the required properties is beyond the understanding/abilities of most current “shop programmers”. However, availability is an easier sell to management than security…

tbd February 11, 2015 7:54 PM

Wael is a lad with sharp wits
He knows how to protect his bits
But castle? Or prison…
He shows indecision
And buries his data in pits

AlanS February 11, 2015 9:25 PM

Links to commentary on the partial summary judgement in Jewel v. NSA: Ars, EFF, EmptyWheel, and Stanford CIS.

Basically, because it’s all secret plaintiffs can’t prove standing and if they did have standing the judge would throw it out anyway because of the government’s state secrets privilege. Defending itself would be a threat to national security, or so the judge claims, but he can’t show us how he came to this decision because it’s all secret. So, the programs could very well be illegal but whether they are illegal or not is irrelevant. And citizens have no recourse in the law to challenge such programs. Ridiculous.

Gerard van Vooren February 12, 2015 3:35 AM

@ Clive Robinson – about live updates

My concern though is how many layers of patches will it be OK to run with before common sense says it’s time to “lift the Molly Guard” and hit the “shutdown process” switch to get to some kind of stable base. In the machine/engine world you do this as part of “Preventative Maintenance” proceadures.

Which brings up the question of poorly written applications.

In Windows you get to see that nasty little bugger when an update arrives. It usually means a couple of restarts. But you can configure how the updates are installed (fully automatic, fully manual, and in between). In Ubuntu it’s the same. I guess that Linux kernel ‘live updates’ can be configured the same way.

About poorly written applications: you are probably aware that Minix3 has a reincarnation server that checks for ‘dead’ servers [1]. If it finds one, it automatically restarts that server. Any running app can crash (no matter how well written), but with a monolithic kernel such an error can crash the entire system.
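A user-space caricature of that reincarnation idea might look like the following; the command, restart limit, and polling loop are all invented for illustration (Minix3’s real reincarnation server supervises OS servers, not shell commands):

```python
# Sketch of a supervisor that "reincarnates" a worker process when it dies.
# Purely illustrative: the restart limit and polling interval are made up.

import subprocess
import time

def supervise(cmd, max_restarts=3, poll_interval=0.1):
    restarts = 0
    proc = subprocess.Popen(cmd)
    while True:
        if proc.poll() is not None:        # worker exited (crash or normal)
            if restarts >= max_restarts:
                return restarts            # give up reincarnating
            restarts += 1
            proc = subprocess.Popen(cmd)   # restart the worker
        time.sleep(poll_interval)

# e.g. supervise(["python3", "worker.py"]) would restart a hypothetical
# worker.py up to three times before giving up.
```

The microkernel version wins precisely because the supervisor and the supervised service live in separate protection domains, so one crashing cannot take the other down with it.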

Which brings me to auto ‘live’ updating of a Linux kernel. The kernel guys know what they are doing, and I think that when they introduce live update it will be well tested and the side effects well known. But still, updating one server (of dozens) in a microkernel OS feels less ‘massive’ to me.

Now with systemd (systemctl monitoring daemons) and the massive Linux kernel they are heading into microkernel territory, yet this combo is still a massive workaround. With microkernels it is all there, and a lot more. That said, the Linux kernel momentum will most likely end up the winner and we will have to deal with it.

About machine maintenance: most machines that I know of are polluting and have built-in wear parts which need to be replaced from time to time. A shutdown between shifts is common. Computers are not really polluting, nor do they carry short-lived wear parts. Besides that, their startup time is still measured in minutes (in office settings). Also, from the moment a user is logged in, the machine carries a lot of state (login names, passwords, open apps in the right place, the right sites open, etc.). Because of all that, it makes sense that computers are ‘always on’.


Andrew_K February 13, 2015 12:52 AM

@ Nick P, Clive Robinson, Gerard van Vooren — Regarding kernel live updates

Nick P brought up mission-critical enterprise apps and Clive the point of what is easier to sell. Fits my experience: writing reliable applications (which are able to live-migrate to another instance on a different machine) is expensive and not very sexy from the sales people’s perspective. So is having a dedicated backup machine in hot standby for every critical application. Heck, there are many applications out there which, when analyzed carefully, make one wonder how they could have worked without great failures in the first place.

Thus I’m much more troubled by systems ending up running kernels which have no representation in persistent memory aside from an old kernel image and half a dozen live patches (of which, by accident or a false sense of cleaning up, some are deleted after being patched into the running kernel instance). Once again proving the old saying that UNIX won’t prevent its users from shooting themselves in the feet. They just got great new ammo.

Igor November 13, 2017 5:03 AM

Riceteeth • February 7, 2015 6:29 PM

@Alan S, re 11:47, in the civilized world national security itself is an obsolete concept supplanted by human security. Elite indoctrination will keep US foreign-policy apparatchiks ignorant of human security, because it’s got lots of confusing concepts that don’t involve blowing shit up. The national security industry can’t make money that way. Arguably, the last strong impetus for reorienting national security was the interval between Der Mauerfall and the 1991 US armed attack on Iraq. At that time favored cadres in the officer corps were enthusiastically exploring protective roles.

This useful comment has an expired link embedded in it. Here is the one that works.
