Friday Squid Blogging: Penguins Fight over Squid

Watch this video of gentoo penguins fighting over a large squid.

This underwater brawl was captured on a video camera taped to the back of the second penguin, revealing this unexpected foraging behaviour for the first time. “This is completely new behaviour, not just for gentoo penguins but for penguins in general,” says Jonathan Handley, a doctoral student at Nelson Mandela Metropolitan University in Port Elizabeth, South Africa.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on December 18, 2015 at 4:11 PM • 130 Comments


Going Yard December 18, 2015 4:54 PM

Interesting crypto problem posted by Fred Reed on his website. It’s beautiful in its simplicity. I don’t see an obvious reason why this wouldn’t work, and be very tough to crack. But maybe you will. (The following is from Fred’s website.)

The following is an unrelated boring question that nobody needs to read except masochistic techies.

In a recent column dealing slightly with secret communication among terrorists, I inserted an off-the-top-of-my-head highly-amateurish blind stab at a steganographic technique for secret communication: “things like indexing by the digits of an irrational number into low-order bits of the RGB fields of a chromatically chaotic photo.”

Few people are dumber than wannabe cryptographers and message-hiders who think they are clever: If some mutt in Mexico can come up with an idea in a couple of minutes, it is either a bad idea or was thought of by NSA just after the Big Bang. It has probably been studied to death by people who know what they are doing. But I would like to know exactly why the following would not work. Somebody smart help me.

Suppose that Alice and Bob are planning an Evil Deed, and need to communicate fairly extensively in secret. As a steganographic key, Alice gives Bob an irrational number, such as “the cube root of nineteen.” The virtue is that it provides an endless string of digits which both can generate without the need of risky passing of pads of random numbers. A random number generator, more awkward, would do the same, but randomness is not necessary, only unguessability.

Alice, who is active on the internet and social media, wants to send Bob instructions on where to place the hydrogen bomb they have stolen from a submarine base. The message is 500 words, which by a standard newspaper count is 2500 characters or, at 8 bits per ASCII character, 20,000 bits. She has on her computer photos of the kind people normally post on the internet, “Cute things my cat does,” “Shaun’s and Vicki’s new baby,” and so on. One is a 6-megapixel .bmp of a field of mountain flowers.

Six megapixels is about 18 million bytes, each pixel consisting of three bytes for R, G, and B, for about 144 million bits total. Using an easily written algorithm, she indexes into the photo to hide the bits of the secret message. For example, if the digits of the irrational number are 3476 8906 4277…, she goes to byte 3476 and changes the low-order bit, if it needs changing, to the first bit of the secret message. She then goes 8906 bytes further and changes the low-order bit to the second bit of the secret message, and so on. Since on average half of the bits will already be of the desired value, she is hiding ten thousand changed bits among 144 million. Since on average the changes she does make will be about evenly divided between zero-to-one and one-to-zero, there will be no change in the ratio of zeros and ones, which in any event (I think) would fall within the noise level.

By using the low-order bit of each byte, she avoids any visually unusual effect (#ffdbe1 and #ffdbe0 are, I think, visually indistinguishable), and the result in no way stands out from the surrounding pixels.

She then posts the photo in some ordinary place on the web. Bob reverses the process and knows how to put the hydrogen bomb in the phone booth at Fourth and Main.

Call me stupid, but I do not see how NSA could (a) by mass screening of photos determine that the photo contained a secret message—so much for traffic analysis, (b) examine the specific photo and determine whether it contained a message, or (c) find the message even if it knew for certain that it was contained in the photo, especially if it were also encrypted. Or even if NSA knew that the key was an irrational number. Keyspace exhaustion doesn’t work well with an uncountable infinity of keys.

There has to be some flaw. I’m just not bright enough to know what it is.
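Fred’s scheme is concrete enough to sketch. Here is a toy Python version (my own sketch, not Fred’s code): the key digits come from the cube root of nineteen as in his example, four-digit groups are read as jump distances, and a bytearray stands in for the raw RGB bytes of a .bmp.

```python
from decimal import Decimal, getcontext

def key_digits(n=200):
    # Digits of the shared irrational key -- Fred's example is
    # "the cube root of nineteen".
    getcontext().prec = n + 10
    return str(Decimal(19) ** (Decimal(1) / Decimal(3))).replace(".", "")

def jumps(digits, count, group=4):
    # Read the digit stream four at a time as jump distances,
    # as in the "3476 8906 4277..." example.
    return [int(digits[i * group:(i + 1) * group]) or 1 for i in range(count)]

def embed(pixels, bits, hops):
    # pixels: raw RGB bytes (a .bmp body, say); set each target
    # byte's low-order bit to the next message bit.
    out = bytearray(pixels)
    pos = 0
    for bit, hop in zip(bits, hops):
        pos += hop
        out[pos] = (out[pos] & 0xFE) | bit
    return out

def extract(pixels, nbits, hops):
    # Bob reverses the process with the same key-derived jumps.
    bits, pos = [], 0
    for hop in hops[:nbits]:
        pos += hop
        bits.append(pixels[pos] & 1)
    return bits
```

Roundtripping a few bits through a blank cover confirms the arithmetic works; whether it survives steganalysis is the question the rest of the thread takes up.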

tyr December 18, 2015 5:48 PM

@Going Yard

You have to make sure that the transmitted photo is unique. In other words, take it yourself and destroy the non-steganography version. That eliminates discovery of the message by comparison. It also implies you have a non-compromised machine to create the messaged version on before you move to a netted machine for transmission. If you do your setup by the communicators beforehand, then you can send a list of irrationals to choose from by a previously agreed-on method, plug that one in, and read the message.

Notice the paranoia levels involved: face-to-face meets, key-material details exchange, multiple machines to communicate. This also exposes that you have a relationship by a real-world meet.

Tactically your message would be safer with a cheap and dirty encryption that can’t be broken before the action takes place. Really big pictures are far beyond human ability to resolve differences in, so your only adversary has to be machine stuff akin to diff.

Infinity People December 18, 2015 8:44 PM

@Going Yard

Call me stupid, but I do not see how NSA could (a) by mass screening of photos determine that the photo contained a secret message—so much for traffic analysis, (b) examine the specific photo and determine whether it contained a message, or (c) find the message even if it knew for certain that it was contained in the photo, especially if it were also encrypted. Or even if NSA knew that the key was an irrational number. Keyspace exhaustion doesn’t work well with an uncountable infinity of keys.

Steganography is really a different animal than cryptography. Crazy key sizes are not what matters.

What matters:

  1. That the method for inserting the information is reliable, which includes factors like: is there enough entropy in certain parts of the medium to be able to induce non-entropic patterns in a “studied” entropic manner. So, for instance, image files have been one good medium for this, because of the massive amount of entropic data required for conveying images. Music and video files are even better. But, if the steganographic technique is solid, then all this really means is that the larger the data the message is put in, the more data it has to convey, the more securely the message is hidden.
  2. That the method used for inserting and retrieving the message is unknown. [Optional.]
  3. Generally, underneath even all of that, the message will be encrypted with strong cryptography.
  4. The transferring and receiving of messages is unknown. Typically, this requires considerable finesse.

I am not aware of a steganography user being caught in recent memory, except in the recent US-Russia spy case.

They were not caught because of their steganography, however.

They were caught because the North American head of the division went to the US and gave them up.

Which is almost always how such individuals are caught.

This is true, historically, as well. Spies are caught, and then their methods of hiding systems are disclosed if the ensuing investigation after they are identified is remotely competent.

Few people are dumber than wannabe cryptographers and message-hiders who think they are clever: If some mutt in Mexico can come up with an idea in a couple of minutes, it is either a bad idea or was thought of by NSA just after the Big Bang. It has probably been studied to death by people who know what they are doing. But I would like to know exactly why the following would not work. Somebody smart help me.

Fact is, it isn’t so much lack of intelligence that separates security people who can devise sophisticated, efficient, elegant security systems from ‘everyone else’.

Some level of higher intelligence is required just to ‘get in the door’.

Drive and exceptional patience are the more rare ingredients.

The same is true in most fields.

The impossible and the infinite do not exist in the finite world. Nor has the finite world we exist in come from an infinite world. Far less true is this finite world being ruled by some realm of the infinite.

Dreams are delusions, madness, which sometimes come true. Usually, it is the substance of meaningless diversion. Death and taxes alone are what are assured.

Infinity People December 18, 2015 9:19 PM

@Tyr, Going Yard

Notice the paranoia levels involved, face to face meets, key material details exchange, multiple machines to communicate. This also exposes that you have a relationship by a real world meet.

This is what I was referring to in bullet point 4:

4. The transferring and receiving of messages is unknown. Typically, this requires considerable finesse.

Once an initial system of communication is established, that very system is most reliable if it is used to consistently change methodologies.

Steganographical communication, if done properly, is damned near well impossible to detect unless one of the members of the network has been compromised.

Which is what happened in the Russian spy example. And what happened during the Cold War and WWII. It is also what happened in other recent examples that were not using technical computer steganography, per se, but were using modern technical tools to hide messaging such as utilizing a ‘hidden in a park’ bluetooth system.

The major flaw, then, I would argue, is having everyone communicate in similar ways. Further, in having a non-cellular system. So, for instance, with the recent Russian case, the North American head knew of a lot of the agents in the US — and in Europe. That was incredibly stupid. So stupid one should wonder if the whole compromise was not a diversion.

You have to make sure that the transmitted photo is unique. In other words, take it yourself and destroy the non-steganography version.


Really big pictures are far beyond human ability to resolve differences in, so your only adversary has to be machine stuff akin to diff.

The internet is full of constantly newly produced video and pictures. So, there is a wealth of material to hide in. But, as for ‘using unique pictures’, because of compression systems, many pictures that were unique are changed when they are simply formatted to fit different sizes and levels of compression.

What you are saying may suggest a decent, working system to detect systematic steganographical use, however, and so detect a previously unknown network:

Catalogue stock images en masse, and do diff comparisons for anomalies not explainable by formatting software.
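That cataloguing idea can be sketched simply: hash each known stock image once, then flag copies whose only differences from the canonical copy sit in low-order bits. Reformatting or recompression changes many bits per byte; LSB embedding flips only the bottom one. A toy illustration (my sketch; the function names and return strings are hypothetical):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Catalogue step: hash each canonical stock image once.
    return hashlib.sha256(data).hexdigest()

def lsb_only_delta(original: bytes, suspect: bytes) -> bool:
    # Reformatting/recompression changes many bits per byte;
    # LSB embedding flips only the bottom bit of a few bytes.
    if len(original) != len(suspect):
        return False
    return all(a == b or (a ^ b) == 1 for a, b in zip(original, suspect))

def screen(catalogue: dict, name: str, suspect: bytes, original: bytes) -> str:
    if fingerprint(suspect) == catalogue.get(name):
        return "identical"
    if lsb_only_delta(original, suspect):
        return "suspicious: LSB-only delta"
    return "modified (possibly reformatting)"
```

The catch, as noted above, is that this only works against images whose originals you have catalogued, which is exactly why the chokepoint approach below scales better.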

Such a task is possible, it is true. But, typically, the far better way of detecting such networks is simply looking at their chokepoints. In the case of looking for foreign spy networks:

  1. watch key industry of highest interest to nations known to have such networks
  2. watch key centers of communication for such areas of highest interest, ie, have ground coverage on the conferences, online and off
  3. watch central foreign control points, such as embassies and consulates where the upper layer of these networks tend to exist with loose cover
  4. create IRL and online honeypot systems designed to attract such networks’ interests
  5. dig into known intelligence infrastructure, ascertain where deep cover divisions are, and apply maximum pressure to those areas

Similar possibilities exist for terrorist [human] networks. Terrorist networks are far worse, however, as these systems usually involve public messaging as part of their substance. Spy systems, or even worse, nation-based sabotage systems, are stealth from the root of the tree to the very ends of the branches.

Blockchaining tech from bitcoin is being improved on to create networks where individuals can operate anonymously from start to finish. The key problem there is that a singular choke point is usually already revealed: any user of the technology. That they are taking such enormous pains for communication makes them a target.

They are a target because they are effectively creating islands of significantly interesting chokepoints. Maybe they are dealing drugs. Maybe pedophilia porn. Maybe they are spies. Maybe terrorists. Or maybe just enthusiasts who try too hard. It is fascinating technology.

Typically, though, the ‘message is the means’. The more sophisticated the means, the more daunting the capacity of the [potential] adversary.

And like many security systems, this manner of system is going mainstream.

Terrorists may like it, but spies would not. Nor would nation-based sabotage networks. Because it is a chokepoint, and it forces behavior which stinks of suspicion.

The impossible and the infinite do not exist in the finite world. Nor has the finite world we exist in come from an infinite world. Far less true is this finite world being ruled by some realm of the infinite.

Dreams are delusions, madness, which sometimes come true. Usually, it is the substance of meaningless diversion. Death and taxes alone are what are assured.

L. W. Smiley December 19, 2015 3:19 AM

Log into most any Linux system by hitting backspace 28 times

I hope opensuse has a patch also.

@ Going Yard

I thought it would be interesting to use the singular value decomposition (SVD) of an image for steganography, or to dither the image. But SVD is a floating-point algorithm, whereas pixel color values are integers for each of the RGB channels, so rounding noise would be introduced when reconstructing the image from the SVD. Say the image is processed in n×n blocks, with SVD applied to each block; keep the first m singular values and principal components of the image, and replace the remaining principal components (corresponding to small singular values) with n×n blocks of noise, or blocks representing characters (but looking like noise), at the same magnitude as the remaining n−m least significant components.

ImageMatrix = UΣVᵀ

A Singularly Valuable Decomposition: The SVD of a Matrix

I’m sure it’s been done this way somehow.
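The truncation step described above can be sketched with NumPy (my own sketch of the idea, not a published algorithm; it shows only the keep-m-components step and the integer rounding the comment worries about, not the payload substitution):

```python
import numpy as np

def block_svd_truncate(block, m):
    # Keep the first m singular components of a square block; the
    # discarded tail is the "noise floor" where payload blocks that
    # merely look like noise could be substituted.
    U, s, Vt = np.linalg.svd(block.astype(float), full_matrices=False)
    s[m:] = 0.0
    approx = (U * s) @ Vt
    # Pixel values are integers, so reconstruction introduces the
    # rounding noise the comment points out.
    return np.clip(np.rint(approx), 0, 255).astype(np.uint8)
```

With m equal to the block size nothing is discarded and the integer block survives rounding intact; smaller m discards detail that payload could replace.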

ianf December 19, 2015 3:43 AM

Call me gentoo-penguin-averse if you like, but all I see in that penguins-fighting-over-squid video is proof of hiring incompetence and the direct unsuitability of penguin #2 as an underwater camera operator. When you’re filming, esp. in such low-light conditions as here, you go for a [<—-wide shot—->], so that future film watchers are presented with an overview of the mêlée, something instantly understandable sans verbose description of what one is supposed to be watching, nor underlined by some dramatic music. Also, forget about getting yourself a piece of the action; YOU ARE THERE TO DO A JOB, not to gorge yourself on squid. Do it well, and you can bet on henceforth being fed tasty, even prechewed squid morsels by Pingvinettes beak-to-beak style… that’s my advice to budding pingvin videomakers.

L. W. Smiley December 19, 2015 3:50 AM

Also it seems like the image entropy for a given RGB channel would be

H = −C Σₖ pₖ log(pₖ)

pₖ = λₖ² / SS

That’s a summation over k, with C some constant and SS the sum of squares of the singular values λₖ.
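In code, the same entropy (my sketch, taking C = 1 and the natural log):

```python
import numpy as np

def svd_entropy(block, C=1.0):
    # H = -C * sum_k p_k * log(p_k), with p_k = lambda_k^2 / SS,
    # SS = sum of squared singular values.
    s = np.linalg.svd(block.astype(float), compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-15]          # treat 0*log(0) as 0
    return -C * float(np.sum(p * np.log(p)))
```

A block with k equal singular values gives H = log k, and a rank-1 block gives H = 0, which matches the intuition that flat spectra mean high image entropy.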

Winter December 19, 2015 5:06 AM

CNN: FBI is investigating the Juniper Networks security hole

Hmmm. It took @foxit 6 hours to find the password for the ssh/telnet backdoor in the vulnerable Juniper firewalls. Patch now

— Ronald Prins (@cryptoron) December 18, 2015

It’s also affecting discussions where some government officials insist on backdoor access to secure networks and services for law enforcement, even though security experts insist that inserting such vulnerabilities actually weakens security for everyone.

Google CEO Craves Privacy, applies 'To Be Forgotten' December 19, 2015 5:40 AM

EU’s ‘Data Protection Reform’ Brings Stronger Privacy Rights For Its Citizens:

Google under scrutiny over lobbying influence on Congress and White House. So corrupt, the USA Congress allows Google to build dossiers of elementary students while at school!
No wonder Google CEO Eric Schmidt camped out at the White House (when he’s not living next door to the swing’in Playboy mansion):
He craves HIS privacy:
yet demanded his ex-mistress take down her blog:
Now this ruling elite moral authority wants to censor free speech

Clive Robinson December 19, 2015 8:47 AM

With regard to the Juniper Networks backdoors, one of this blog’s regular readers/posters has this to say on one of them,

    “The weakness in the VPN itself that enables passive decryption is only of benefit to a national surveillance agency like the British, the US, the Chinese, or the Israelis,” says Nicholas Weaver

Hmm not the Russians or French or Germans…

Interesting that he put the British first, because that’s where I’d have put a small side bet (being British, one has to wave the flag you know 😉)

Anyway people can read more at,

Importantly this shows the problems with,

1, Insider / Implant agents.
2, Backdoors of any kind.
3, Not using proper “end to end” encryption.
4, Trusting proprietary solution providers.
5, Closed source “code review” processes.

And one or two others besides…

The only solution to this type of issue is “to instrument and mitigate”, and don’t forget you don’t really control your hardware unless you take proper segregation steps…

Who? December 19, 2015 10:01 AM

@Clive Robinson

Not every operating system survives an attempt to implant a backdoor as successfully as OpenBSD did. 🙂

Not Saving It For Fox December 19, 2015 10:41 AM

I note with some amusement that certain posters try to separate computer security related matters from the discussion of politics, education and other matters in the current climate.

Is a new encryption method gonna solve the passage of the CISA bill?

Can the root causes of our security woes (the three letter agencies) be addressed when the voting public is so thick that 30% of Republican primary voters support the bombing of the fictional kingdom of Agrabah?

Clearly if voters intend to keep the authoritarians in power through their wilful ignorance, this is a major problem to ever addressing these many problems wholesale.

While the many readers coming here respect the technical skill and abilities of Clive, Bruce et al. and try to understand what we can, pretending these issues are separate from a much larger context is simply foolish.

On another note, I dispute the contention that unbreakable communication is a near impossible task. Even if your computer is back-doored to hell, with a direct link to the Stasi, they cannot defeat properly implemented OTP methods.

For instance, this One-Time-Pad method will work:

1) Two (or more) six-sided dice OR 1 to 5 x ten-sided dice.*

*We can create unique keys using six or ten-sided dice for our one-time pad, so long as the dice are not loaded / biased.

2) A pencil and single sheets of paper to write unique keys on.

3) An agreed-upon ‘checkerboard variation’ for conversion of plain text into digits e.g. CT-37c conversion table for modulo 10

4) The mental ability to calculate simple addition/subtraction to encrypt/decrypt messages (modulo 10 for numerical keys; modulo 26 for alphabetical keys).

5) A computer and pre-agreed communication channel to relay the encrypted message.

So long as the usual pitfalls are avoided (secure key management, no re-use of pads, enough key material available for potentially years in advance, calculations done by hand, and so on), this should work fine.
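The modular arithmetic in steps 3 and 4 is easy to sketch. A toy Python version for alphabetic keys mod 26 (my sketch; the post rightly insists on physical dice and pencil-and-paper, so `secrets.choice` here merely stands in for an unbiased die roll):

```python
import secrets

ABC = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def roll_key(length):
    # Stand-in for physical dice: one unbiased letter per key position.
    return "".join(secrets.choice(ABC) for _ in range(length))

def encrypt(plain, key):
    # ciphertext = (plaintext + key) mod 26, letter by letter
    return "".join(ABC[(ABC.index(p) + ABC.index(k)) % 26]
                   for p, k in zip(plain, key))

def decrypt(cipher, key):
    # plaintext = (ciphertext - key) mod 26
    return "".join(ABC[(ABC.index(c) - ABC.index(k)) % 26]
                   for c, k in zip(cipher, key))
```

With the textbook key XMCKL, HELLO encrypts to EQNVZ; the same key subtracted mod 26 recovers the plaintext, which is exactly the by-hand calculation in step 4.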

Re: a suitable steganography option to combine with the above, the Words-Per-Sentence (WPS) system can be used to hide the cipher-text in a normal looking email / blog post (like this one) etc and have full deniability.

If those communicating do not have access to 6- or 10-sided dice, then they could also use Bruce’s Solitaire (deck of cards) encryption method, outlined on this very website:

“Solitaire generates its keystream using a deck of cards. You can think of a 54-card deck (remember the two jokers) as a 54-element permutation. There are 54!, or about 2.31 * 10^71, possible different orderings of a deck. Even better, there are 52 cards in a deck (without the jokers), and 26 letters in the alphabet. That kind of coincidence is just too good to pass up.

To be used for Solitaire, a deck needs a full set of 52 cards and two jokers. The jokers must be different in some way. (This is common. The deck I’m looking at as I write this has stars on its jokers: one has a little star and the other has a big star.) Call one joker A and the other B. Generally, there is a graphical element on the jokers that is the same, but different size. Make the “B” joker the one that is “bigger.” If it’s easier, you can write a big “A” and “B” on the two jokers, but remember that you will have to explain that to the secret police if you ever get caught.”

So, given this would be lost on the Republican Fox crowd who want to bomb Agrabah, perhaps this info may be of some interest to those out there who need proper and secure comms, and who are willing to go to extreme lengths to get it.


Diner Eavesdropping December 19, 2015 11:44 AM

I had to dig to find just one accurate article on tabletop camera-equipped tablet computers at restaurants. Waiters are typically truly ignorant. Management plays dumb, lying and claiming there are no cameras or microphones while they build their eavesdropping network.
Red Robin says the camera function is not being used RIGHT NOW and that, “no one can listen to or watch anyone from the tablet.”
Technically there is NOTHING to stop them from eavesdropping, given our Hedge-Fund, Big-Data-controlled US Congress.
Is this a joke? These devices are expensive to maintain and must generate increased profits for TWO businesses. Following a familiar pattern, the CUSTOMER becomes the product, not the food being served. Targeted advertising follows photo identity verification. Then facial-expression analysis monitors every mouthful. Complain and the cooks can get instant revenge. Your open mouth is analyzed for cavities. Your skin is checked for sun damage and wrinkles, and the results are added to your authorized public medical records.

Big-Data adds jokes, political and religious conversations to your dossier. Your mood, temperament and personality are analyzed for character defects then sold to hiring managers. Your appearance, meal selections and total caloric intake are matched to your weight/fat and sold to insurance companies. The number of alcoholic drinks are noted for possible criminal proceedings.

Already new Stingray derivatives are under development to put diners under remote mass surveillance. The new CISA law allows these creepy restaurants to legally deny recording yet share live feeds with Big-Data, police, the Dept. of Homeland Security, and the FBI. No probable cause required. With corporatized America, your ‘security comes first’. LOL!
While these techniques are useless for the War on Terror, they are essential for Wall St start-ups to become billionaires.

The sad part is the lazy, clueless smartphone addicts who without question accept these spying devices, being too preoccupied to think critically. They should order from the children’s menu!

Forget the business lunch too guys.

There are no privacy policies or laws specifically for these in-your-face kiosks:

Just One Example

Here Applebee’s corporate owner DineEquity disowns responsibility for diner data collection. It’s here if you know EXACTLY what to look for:
“If you take a survey, apply for a job, contact us, or interact with us in various other ways, demographic information, DINING PREFERENCES , and other information you choose to provide, such as resume information, may be collected.
DineEquity is not responsible for any personal information or other data that franchisees of DineEquity or its affiliates choose to collect.”

Applebee’s diners need to ask the hostess for their particular privacy policy. Choose between ordering your food or reading the fine print! Personally I can’t enjoy a meal knowing some creep is watching.

This brings up a MAJOR point of only doing business with corporations and businesses that don’t share your data for non-essential services. Without exception I’ve received better human-to-human service and a lower price by eliminating this impersonal, lurking, exploitative Wall St middleman.

MrC December 19, 2015 11:47 AM

@Going Yard

I’m no expert, but my understanding is that steganography via image files has been done to death, and the current state of the art has the steganalysts far ahead of the steganographers. The bottom line is that an attacker who cares to look can easily tell if a given image contains a steganographic message, and the steganographer’s only hope is that the NSA lacks the processing power — and will continue to lack the processing power — to look at every image on the web.

Why is it so easy to tell if an image contains a steganographic message?

For starters, you cannot use .bmp files, since no one uses .bmp files for the web and thus doing so would arouse immediate suspicion.

Accordingly, most of the steganographic work has been done on .jpeg images, since they have long been the de facto standard for images on the web. Since .jpeg is a lossy compression format, you can’t muck with the pixels directly. Instead, you have to muck with the DCT coefficients. And here’s where you run face first into a huge problem. The distribution of 1s and 0s in the low bits of the DCT coefficients should form a predictable histogram — and you’ve completely messed it up by twiddling bits, so much so that anyone can see it. AFAIK, the best algorithm available is J4, which does first- and second-order compensation to try to put the histogram back into shape. Unfortunately, this can be defeated by (a) approximating the cover image by converting to .bmp, cropping, and reconverting to .jpeg, then observing that the DCT coefficients are all wrong; (b) examining the degree of similarity between neighboring DCT blocks, and observing that they aren’t as similar as they should be; and (c) probably other methods I don’t know about. Let’s say, then, that absent a new algorithm that really advances the state of the art, hiding steganographic messages in .jpeg images is a hopeless endeavor.

Since .bmp and .jpeg are both out, what about .png? I suppose that maybe .png has become common enough in the last few years that it’s not automatically suspicious like .bmp. Since .png is lossless, you could fiddle with pixels directly before converting to .png and get those same pixels back on the other end. That avoids steganalytic attacks specific to .jpeg and its DCT coefficients. But, unfortunately, it’s still open to a whole host of attacks revolving around statistical properties that an unaltered image should have, but won’t if you hide a steganographic message. I’m not aware of any work that’s been done to develop a steganographic algorithm that compensates for these statistical changes (which was tried and failed in the .jpeg arena), so maybe there’s hope here.
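The statistical attacks alluded to above can be illustrated with a toy version of the pair-of-values (chi-square) idea from Westfeld–Pfitzmann-style steganalysis: LSB embedding pushes the counts of each value pair (2i, 2i+1) toward equality, which natural images rarely show. A sketch (mine, not any specific steganalysis tool):

```python
from collections import Counter

def pair_chi_square(samples):
    # Toy pair-of-values test: for each pair (2i, 2i+1), compare the
    # observed counts against their common mean. LSB embedding levels
    # the pairs, so a SMALL chi-square statistic is the suspicious case.
    counts = Counter(samples)
    chi = 0.0
    for i in range(128):
        a, b = counts[2 * i], counts[2 * i + 1]
        expected = (a + b) / 2
        if expected > 0:
            chi += (a - expected) ** 2 / expected + (b - expected) ** 2 / expected
    return chi
```

Run over byte values from a natural image versus an LSB-stuffed copy, the natural image’s uneven pairs produce a large statistic and the stego image a small one, which is the asymmetry a mass screener would look for.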

Anura December 19, 2015 12:02 PM


PNG has poor compression for things like photographs (using it might draw attention from anyone looking), so it’s usually used for things with large blocks of solid colors and the like, which is useless for steganography.

MrC December 19, 2015 12:10 PM

Thanks Anura! I did not know that.

Steganography aside, what’s a good lossless format for photographs?

Who? December 19, 2015 12:36 PM


There is no such thing as a lossless format for photographs. What you are looking for may be the TIFF format.

Infinity People December 19, 2015 4:52 PM

@all, regarding

My what a time for backdoors. Unfortunately, the FBI have their hands full. As the culprit is most likely: the FBI themselves, NSA, or CIA.

Juniper has been known for a long time for hiring top-tier hackers, even ones whispered to actually be government. Including the late, great Barnaby Jack (his real name), who discovered multiple ways to remotely kill people via hacking so that it would look like a ‘death by accidental or natural causes’. Who, himself, died of natural causes right before a major show where such technology was going to be exposed.

(Of course, fiction aside, such as the recent ‘feathermen’ movie and book, or the ‘John Rain’ series, IRL a roving band of contracting assassins who are accomplished at assassinations which look like accidental or natural causes is patently absurd. This was tried in Dubai in 2010, and it clearly did not work. All of the assassins were promptly rounded up and went to jail.)

(They were detected via global airport facial-recognition technology. What silly people, to look like the people from whom they stole the passports!)

(To argue that the fictional ‘ice bullet’ story came true, amongst hackers most of all, is certainly preposterous.)

(Sarcasm or sarcastic sarcasm, hidden somewhere in this message.)

Question: can the government investigate itself? I think if you can’t even get software you need without getting through impossible ‘red tape’ barriers, that would probably be a resounding ‘no’.

Who here would be surprised if the government ever investigated itself and came to actual conclusions, especially on such a core case as backdoors being covertly implanted in code? Consider: for decades now, the government has been performing code reviews on all sensitive software, including conditions where this has meant sending out covert teams of auditors known only to one or two upper-level executives. Or none at all, because it is hardly difficult to fake job papers so as to get access to code.

Sure, one could point out ‘internal affairs’, or “Monica-gate” (which was not a thing, but an A+ for trying)… or the mother of them all, “Watergate”. But Watergate’s forward momentum was only gained because of top-level FBI dissension. And reporters. And CIA guys so out of practice they made the most trivial of mistakes. Which has probably been well hammered into everyone’s head since then, never to repeat again.

(Never mind, the public sentiment at the time. The public was strongly ready for supporting such criticism, the political environment was right. Many argue, on both sides of the fence, that the Vietnam war was lost because of waning public sentiment, for instance.)

Reality is: if anything truly wonky is going on, then its strongest defense is the ‘rabbit hole’ itself. That is, the implausibility of it. Investigators learn from day one, through experience and study, that the reality is incompetence even in the darkest of conspiracies. Not scary competence.

Philby was an incompetent drunk who had a few key successes motivated largely by his own survival. Like other moles, who try and find and turn over moles in their own group as soon as they can. Watergate was a circus of fools. The ‘crown jewels’ of the Pike-Church affairs were insanity — exploding cigars, hah. Regime overturning backfires. Directorate S spies exposed, they were all entirely ineffectual, even if their stories are interesting. The Dubai assassins were entirely caught on video tape, and that by the local yokel Dubai police force. And so on and so on.

What is more career-destroying than mouthing impossible conspiracy stories that are best known amongst the tinfoil-hat wearers? Certainly, such jobs are very stressful, and mental illness is common.

the infinite filters down from heaven as if through a great funnel, or timepiece of sand… granting wishes and dreams, by the littlest pieces, and ever so painfully slowly…if your imagination was infinite and the substance of what those of the finite world call ‘reality’, is it not inevitable that, with your relative omniscience and omnipotent powers you would be bound to create the ‘finite’… as it was so often argued, ‘how many angels can fit on the head of a pin’, or, ‘can God create a rock so heavy, even He can not lift it’?

Infinity People December 19, 2015 5:19 PM

@MrC, etc, on ‘steganographical weaknesses’

No small part of keeping secrets is ‘security by obscurity’ (which steganography is). This refers to your statement about the NSA being kept busy by massive steganalysis systems.

Steganalysis makes news, especially when it is discovered some criminal or spy was using such a system. But, again, that is ‘what makes the news’. People can come away from such news stories with the mistaken impression that the criminal or spy was caught because of steganalysis. Not so, except when the steganographic system is exposed post-mortem. For instance, Penguin X is caught because her boss turns state’s evidence. She is found to be using steganographic program Y. It is discovered via the investigation that she visits sites A, B, C, and indeed, she has left information on her computer about where she has been getting her image files. Not ‘burning after reading’, as she should have. Because she is so deeply used to being unseen, and her fears are not unfounded. Never mind that much of her work is really just about having fun and living an interesting life; she isn’t actually doing anything very serious. And even if caught, she would be traded like a pot-smoking linebacker working for the Dallas Cowboys.

Compressed formats are poor carriers for steganography — in their compressed form. The original material is where the steganography goes, and then that material is compressed. The embedding can certainly be designed to anticipate the lossy compression formats popular with image software.

In general, however, pictures as a medium were and are limited not just by these factors – though a picture can tell a million words – but by their usually small size.
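For the curious, the kind of scheme Fred sketches up top (indexing by the digits of an irrational number into the low-order bits of RGB data) can be mocked up in a few lines. This is purely an illustrative sketch, assuming a raw, uncompressed pixel buffer; the function names and the choice of the cube root of nineteen as the shared ‘key’ are for demonstration only, not anyone’s actual system:

```python
from decimal import Decimal, getcontext

def keystream(n):
    """First n digits of the cube root of 19, the shared
    irrational 'key' from Fred's example."""
    getcontext().prec = n + 10
    x = Decimal(19) ** (Decimal(1) / Decimal(3))
    return [int(d) for d in str(x).replace('.', '')[:n]]

def embed(pixels, message):
    """Hide message bits in the low-order bit of key-selected bytes
    of a raw (uncompressed) RGB buffer."""
    bits = [(ord(c) >> (7 - i)) & 1 for c in message for i in range(8)]
    out = bytearray(pixels)
    pos = 0
    for step, bit in zip(keystream(len(bits)), bits):
        pos += step + 1                   # key digit decides the stride
        out[pos] = (out[pos] & 0xFE) | bit
    return out

def extract(pixels, nchars):
    """Walk the same key-derived positions and read the bits back."""
    bits, pos = [], 0
    for step in keystream(nchars * 8):
        pos += step + 1
        bits.append(pixels[pos] & 1)
    return ''.join(chr(int(''.join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, len(bits), 8))
```

The obvious catch, per the compression point above: this only survives in lossless formats, since recompression shreds the low-order bits, and the statistical fingerprint of doctored LSBs is exactly what steganalysis looks for.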

Youtube videos, however, are as commonplace these days as mud.

Spies, terrorists, and criminals alike often practice code speaking, however. The human mind naturally works this way. When people know each other and want to say something others who are not ‘in the know’ won’t understand, they can easily do so using metaphor, and other means of ‘on the fly’ obfuscation.

It may seem this could include notable deviations exposable by profound NLP analysis systems, but speech itself is innovative and therefore full of deviations by its very nature. NLP can only poorly understand context, and context is sublime.

For instance, jihadis use the Koran and related documents – many of which are deeply arcane – for their code talking. Literally, they use their own religious texts, historical texts, and so on as a codebook. So, they can be very hard to understand even when one does have their plaintext.

Twins, like husbands and wives, or extremely insular societies and cults, naturally do this. We all do it, in fact, as certainly as we all have sex. It is usually unconscious, but any necessity of speaking on private matters in public can raise the capacity to do so consciously.

Contrary to his frequent appearances at UFO conferences, in his actual book, ‘Communion’, Whitley Strieber posited that ‘he doesn’t know if these are really aliens, or not something else such as from Heaven or beings from the future’… pointing out how he was put in strange positions, such as being shown how there was a deep lack of his own knowledge regarding the 19th century opium trade in China. And how the primary aliens look remarkably like human fetuses, perhaps being symbolic of beings from the future. The sons of humankind, as it were. Never mind his past as a truly illuminated fiction writer. “The Hunger” ranks on a level of symbolic fiction difficult to fathom. At least, in the movie format. Paving the way later for Anne Rice, through the cheerleader vampire killer, Angel, Blade, and even that weird vampire love story, ‘team edward’, or ‘Christian Grey’. Not to add credence where credence is not due.

Infinity People December 19, 2015 5:36 PM

@Diner Eavesdropping

I had to dig to find just one accurate article on tabletop camera-equipped tablet computers at restaurants. Waiters are typically truly ignorant. Management plays dumb, lying and claiming no cameras or microphones while they build their eavesdropping network. Red Robin says the camera function is not being used RIGHT NOW and that, “no one can listen to or watch anyone from the tablet.”

In ‘Person of Interest’, the lead “hacker” is able to trivially hack anyone’s phone via NFC or bluetooth so as to make it a remote eavesdropping tool for his investigations. Just because it can be done does not mean it is being done.

In ‘Agent 47’, fully automated devices shoot devices onto target cars that pass by. It jammed the internal electronic systems and was tied to ‘further up the road’ devices which triggered when driven past. It could be GPS. It could be a surveillance device with the capacity to record in-car conversation, like lasers on a window. Real world, or fake? IRL, the government employed RFID on embassy cars, and at chokepoints leaving DC. It worked. It helped them keep track of likely spies who were posing as otherwise ordinary embassy employees. Low false positive rate.

The movie also depicted fully automated sniper systems utilizing long-distance facial recognition and a robotic turret.

While utilizing such technology for guns and bombs is unlikely, utilizing it for gun-like devices which accurately and remotely implant surveillance systems from afar is quite plausible.

Reality is typically much more simple. Stranger than fiction, but less elegant and efficient.

‘Confirmation bias’, like related forms of intimate biases, is your closest friend and your closest enemy. ‘Revolver’: to get smarter, you have to game your own self. The more you know your own bias, the less you see of it in your own self, and the more you see of it in others. Because that speck in the eyes of others is always so much more visible than the log blinding yourself. And your own mind uses your observations and close friendship of your own biases against you. It becomes all the more clever, diabolical, even. Watch your dreams when you are without sleep; they reveal your deepest, secret desires. Chinese fortune cookie says, ‘you have just eaten the poison’. Cthulhu is metaphoric forerunner of Cybermen, the Borg, and Walmart. Food for thought. Tentalizing to the stomach; the conscious mind hates it. Too complex and impossible. Veers left on madness, or genius. Japanese hentai tentacle porn is more than just another meme, it is a way of life.

you can lead a horse to FEEDTROUGH.... December 19, 2015 5:47 PM

@∞people Ha ha you funny guy, FBI is on the case, you crack me up. FBI will scramble their elite anencephalic retard task force and catch everybody except the guys who said they did it. Some Clouseau will blame FEEDTROUGH on the Chinese, and believe it. This is a case for Vincent Lisi, the guy who couldn’t catch the government at germ warfare when the anthrax was sitting in a government freezer. Or a case for Spike Bowman, who destroyed the anthrax when somebody told him where it obviously was. Or a case for James Kallstrom, who couldn’t find No Man’s Island when TWA 800 corkscrewed from their missile. As with all US government crimes, they’ll just assign whoever is now the Special Agent in Charge of Barking Up the Wrong Tree.

Terry Cloth December 19, 2015 6:09 PM

Catalog of cellphone-surveillance
The Intercept has an article purporting to be a catalogue (including prices) for local, state, and federal governments wishing to track people. It includes tracking, cellphone interception and jamming—even the odd drone. The dateline is Thursday.

albert December 19, 2015 7:22 PM

@Diner Eavesdropping,
from the cited article: “…A spokesperson for Ziosk, the technology Chili’s uses says on its tablet, “the camera does not save information and does not share any information without the permission of the user.”…”. When you deal with weasels, you get weasel-wording. Of course the camera (or microphone) doesn’t save information; the software in the server does. A piece of tape takes care of the camera; the mike is a little more difficult, as they can be behind a 1 mm hole, or any of the myriad vent holes. If you know where it is, some honey or peanut butter will do the trick. A handy device would be a pen that blasts the camera with UV, but those electret mikes are very rugged indeed. High heat might neutralize the charge, or maybe a high-SPL signal around 15kHz might mask your conversation.
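On the masking idea: purely as a hypothetical sketch, generating a test tone of that sort takes only the standard library. Whether a 15 kHz tone at high SPL actually masks speech for a given microphone is an assumption here, not a tested claim, and the filename is arbitrary:

```python
import math
import struct
import wave

RATE = 44100   # samples per second
FREQ = 15000   # masking-tone frequency in Hz, per the suggestion above

# Write one second of a 16-bit mono 15 kHz sine tone to a WAV file.
with wave.open('mask.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(RATE)
    frames = b''.join(
        struct.pack('<h', int(32767 * math.sin(2 * math.pi * FREQ * i / RATE)))
        for i in range(RATE))
    w.writeframes(frames)
```

At 15 kHz many adults can’t even hear the tone, which is presumably the point; the mic’s electronics still see it.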
. .. . .. _ _ _ ….

Grauhut December 19, 2015 7:56 PM

@Clive, Winter “Juniper Back Doors”

IMHO the Juniper Backdoor story stinks a little like a cheap marketing campaign:

  • If you can place a hidden shell in their code you don’t let the logging log these accesses
  • If you publish something after an internal audit, you want to see it discussed in public
  • If you buy the story, these weaknesses only exist in “old systems” Juniper wants to see vanish from the used-system markets, like the NS-5000s (still good for 5-6 Gigs on a 10g)

If it is a marketing story it could try to transport these messages:

  • We are the good guys, fighting against those sinister back doors (worked for Apple, why not?)
  • Only our old crap is insecure, buy new ones or pay for support

  • We need your money now and we don’t want you to invest in open stuff like SDN, that becomes more and more usable and frightens us, since we can’t make enough money in the highly competitive SDN switch market

“Ethernet Switch Market Grows in Q3; Router Sales Flat
Zacks By Zacks Equity Research
December 14, 2015 1:00 PM”

Maybe someone wanted to unflat router sales numbers…

Means the best investment in JNPR now could be to sell them.

L. W. Smiley December 19, 2015 10:34 PM

A database search for OPM contracts yields files


then from 2012 contracts opened in LibreOffice.calc
filter column AF for Juniper yields


then juniper webpage

is the SRX3400 running backdoored unauthorized code?

maybe a little more digging

L. W. Smiley December 20, 2015 12:22 AM

2013 OPM contracts file


2014 OPM Contracts file


2015 OPM Contracts file


vendor name is in column AR

Infinity People December 20, 2015 1:15 AM

@you can lead a horse to FEEDTROUGH….

Heheheh. Your namez erz confusez mez.

Some people on this forum are just a little too goodz with their word ninjutsu.

Ha ha you funny guy, FBI is on the case, you crack me up. FBI will scramble their elite anencephalic retard task force and catch everybody except the guys who said they did it. Some Clouseau will blame FEEDTROUGH on the Chinese, and believe it. This is a case for Vincent Lisi, the guy who couldn’t catch the government at germ warfare when the anthrax was sitting in a government freezer. Or a case for Spike Bowman, who destroyed the anthrax when somebody told him where it obviously was. Or a case for James Kallstrom, who couldn’t find No Man’s Island when TWA 800 corkscrewed from their missile. As with all US government crimes, they’ll just assign whoever is now the Special Agent in Charge of Barking Up the Wrong Tree.

Well, I try to be funny. But my sense of humor is a mixture of Bob & David, the Tim & Eric show, and Monty Python. So that is a circle of madness.

looking up… “feedthrough”…

(TS//SI//REL) FEEDTROUGH operates every time the particular Juniper firewall boots. The first hook takes it to the code which checks to see if the OS is in the database, if it is, then a chain of events ensures the installation of either one or both implants. Otherwise the firewall boots normally. If the OS is one modified by DNT, it is not recognized, which gives the customer freedom to field new software.

Any relation, I do not know.

Only read one article on this so far. Did, in all that time since these exposures, Juniper never perform an independent code review? For that matter, is Juniper not performing code reviews at all?

Now, I realize that hiring top-gun researchers who may – or may not – have sketchy & spooky gov connections certainly should not lead to code compromise. After all, they are busy doing stuff like hacking ATM machines for conference demos. Zero contact with the people developing the code.

But, why, in all this time, did Juniper never do their own code reviews of their own systems? Could it have been they trusted those NSA code reviews which the government mandates??

Companies, here is a clue, if you develop critical infrastructure systems: please create a secure SDLC. Get the code review systems you so sorely need and use them. VeraCode (backed by In-Q-Tel, the CIA’s venture arm, with BBN defense contractor top engineers and execs), IBM (old Ounce Labs, all ex-BBN), CheckMarx (ex-Israeli mil & intel), HP Fortify (totally clean, I am sure)….

… poster…

To be fair, I can’t speak of Republican conspiracies, without also mentioning Democratic conspiracies. Likewise, one can speak of airplane crash mysteries and US conspiracies, but what about Russia? Did that plane full of Polish dignitaries a few years ago really just happen to crash for no reason when they were on their way to mourn the anniversary of the death of many Polish military war heroes murdered by Stalin?

Never mind the recent Ukrainian shoot-down… which echoed that Polish crash, the investigation of which Russia insisted on controlling.

China is actually the odd man out in that game. They lead the list on locking up journalists, but it isn’t in their culture to be murderously crazy*.

(*Except in exceptional conditions, such as journalists, or difficult to control domestic political leaders.)

I, for one, find the existence of really dark US infrastructure far more interesting than Russian, however. One expects that from Russia, with their dastardly past, from the Czarist intel to their progeny. But, the US? These are choir boys. Unlike China or Russia, they don’t even have a deep cover program – much less an assassination system – on record. The kids go from Quantico or the Farm, straight to the working fields. They are Mormon and Evangelical. Their lie detector tests look for pot usage and refusal to attend Sunday church meetings.

How could these sorts of folks ever create gigantic swordfish?? How?? Hehehheheh. 😉

∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞∞… etc, etc, ad infinitum. May we get but the tiniest morsel of your infinity, enough to cure disease and crime and war, and have a really good old time. 😉 ‘Oh what a trick the gods have played on me, I am but a cosmic clown’ — paraphrase of Jim Morrison, son of Admiral Morrison; ‘He Was True to His Own Self’ is the inscription his father put on his tombstone. Five to One Baby, One in Five. No One Here Gets Out Alive. That, and Voodoo Chile, are the kick-ass songs of the beginning of this Horrible Millennium. 😉

Infinity People December 20, 2015 2:44 AM


I wonder if the OPM (Office of Personnel Management) hack involved Juniper backdoors?

Well, then, that seals it. As both doctors and scientists will testify, and already have, China hacked OPM*. Therefore, Clouseau, of the Keystone Cops Division of the FBI can rest their heads, and go and shoot some ducks, or whatever it is they do for fun.

Case closed.

No career suicide necessary. The mad house and early retirement is off the books. There are terrorists to find, and they are potentially everywhere.

*Disclaimer: I actually do not like China and am all rooting for a fast and dirty regime change there. ( see, for instance, )

The question is not who hacked OPM, but when. And what records did they put in. Rest assured if your boss and coworkers do not appear to be Asian, you can Trust Them (TM).

Infinity People December 20, 2015 3:07 AM

@’Steganography’, after thoughts

I stated that steganography actually has little to nothing to do with cryptography, but this was a fast & dirty lie. In reality, what is true steganography but the simplest cipher of them all, the substitution cipher?

The problem is: what is the letter ‘a’, and what is the letter ‘b’? It can be the slightest shade in a picture of a tree. It can be positional: the top of the picture of the tree is ‘a’, the bottom is ‘b’. And so on.

If you shift your bit by one degree, it is ‘a’. If you shift it by two degrees, it is ‘b’. What, herein, is meant by “degree” is the problem. Can it be discerned that the color ‘blue’ in a photo has its bits shifted at the first place by ‘one degree’? And what, exactly, is the ‘color blue’? Or ‘black’? Or ‘brown’?

Who decides what is the color, and what is just one degree up? Or two degrees up? Or the very definition of ‘degree’? And is it not just a little too simple that one degree up is ‘a’, while two degrees up is ‘b’, and so on? When that whole shifting up of degrees can be totally backwards and a mixed-up soup??

Yes, I am forcing upon you a dastardly education on the mysteries of steganography. You did not ask, nor pay for this course. But here it is.

What about sounds? Sounds are amazing. Some people’s voices are low. Some are high. Certainly, one must have the original to know what might be the ever most minute of shifts, right?

Can, that is, a substitution cipher be secure?

Can ‘security by obscurity’ ever truly provide the sort of security nasty motherfuckers who are doing really bad things require, for their convenience?
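To make the substitution-cipher point concrete: a toy illustration (a Caesar shift, the simplest substitution of all; everything here is invented for demonstration) falls to nothing more than counting a common word across all 26 shifts:

```python
def caesar(text, k):
    """Shift lowercase letters by k positions; everything else
    (spaces, punctuation) passes through unchanged."""
    return ''.join(
        chr((ord(c) - ord('a') + k) % 26 + ord('a')) if 'a' <= c <= 'z' else c
        for c in text)

def crack(ciphertext):
    """Try all 26 shifts and keep the one whose decryption contains
    the word 'the' most often. Crude, but enough for this toy."""
    best = max(range(26), key=lambda k: caesar(ciphertext, -k).count('the'))
    return caesar(ciphertext, -best), best
```

A general monoalphabetic substitution takes slightly more work (letter-frequency scoring instead of word counting), but falls just the same; the ‘degrees’ scheme above only moves the same weakness into a different alphabet.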

The sorts that play to AC/DC, ‘dirty deeds done cheap’. iZombie put it well in their episode on the psychopathic killer: he was the winner of all contests involving unnecessary to know facts, because his mind was completely uncluttered with emotional details. Or, the recent Agent 47 movie, where they finally did Agent 47 right. No emotions, no conscience, no hesitation. Genetic alteration for the superior human being.

Like the “six million dollar man/woman”, they were, by birth, designed as if machines from the future to destroy modern human kind.

Smarter, faster, stronger, wittier.

Unlike your average serial killer, they do not flaunt their god like powers, or leave puzzling signatures that ask, “can you ever catch me”.

They do not mock citizen and investigator alike, simply to get high off the fumes.

Nor do they ride this machine of government into the ground, to demonstrate any manner of show of horror and catastrophe, for the raised eyebrows and dropped jaws of human kind.

Noooo. They are machines, not human beings. And machines from the future. Uninterested in everyday mechanizations of local yokel fools. They flatline economies and undermine regimes with a cold ruthlessness and brutality that makes people wonder, “Why did I not kill myself when I had a chance”.

let us all mourn the passing of the Day onto the Night, for this is the Age of the Totalitarianist, of which American debt assures us all of DOOM.
If you think I am joking, well. Fuck you. I am.

Smile: Can You Say Backdoor? December 20, 2015 5:49 AM

Microsoft extends China link with government version of Windows 10

Of course any government has the right to keep their employees data secure. That is not the issue.
What is an issue is Microsoft (once again) assisting governments in the mass surveillance of ordinary citizens. This is why Chinese President Xi Jinping went straight to Microsoft campus in Seattle to solidify the new customized/spyware Windows 10 deal.

Just like anyone else, Chinese citizens have a legitimate right to expose corruption or practice religion.
Now they can’t if they use Windows 10. Even VPN’s become compromised as the operating system transmits the data before encryption. The Communist Party Police will be arresting anyone within minutes of typing – nipping their ‘problems’ in the bud.

Where is all the previous opposition from the USA high-tech? They claimed to never assist any repressive government.
Now American Hi-tech are falling over themselves for Chinese business:

However ruthless, you have to admire Microsoft’s winning strategy in that they respect the laws of other countries. (Google hires lobbyists to fight, and loses.) They’ve been busy for years building local data centers throughout Europe. Operating under local data authority control, they are now beyond the reach of the USA government. So when Microsoft is served National Security Letters, they cannot comply with what they do not ultimately control. The same process is now being implemented, but with customized backdoors demanded by the Chinese government.

Being dependent upon advertising dollars from American Big-Data, the American press choose not to report our monumental loss of fundamental human rights and freedoms.
In the USA, all citizen data protections were removed from the final draconian CISA law.
With this turn for the worse, there is little difference between Chinese and US repression, as both pay American high-tech agents. Everyone needs to be more careful in their monitored communications. Practice self-censorship while smiling constantly as the Beast assumes control.

Curious December 20, 2015 6:36 AM

Google has a pop up on Youtube these days, calling it a “privacy reminder”, basically telling the viewer that:

• Google ‘process’ ‘data’
• Google is ‘combining data’
• Google effectively collects data for purposes that are supposedly described in their policy

As I mentioned some time ago regarding Microsoft, Google now does the same: Google refers to “data” in a vague manner, and not concretely as “the data”, even though the whole point of the pop-up is apparently to be pointed about how Google explicitly handles information as such, aka data, from any one user of Youtube. The data Google alludes to is not the idea of “data as such” but specific data, even though it might be whitewashed through a process and simply end up as metadata that may or may not be linked back to any one user.

The word “collection” is never used; instead Google apparently treats the collection of information as vaguely as possible, in a metaphorical way, with this notion of having purposely processed data on the users of Youtube.

The word “EULA” is never used; instead Google appears to be calling what seems to be some kind of End User License Agreement, or some agreement of sorts, a “privacy reminder”. What sense does it make that anyone should have to agree to, or even join an agreement about, Google’s “privacy policy”?

Although the word “information” is used three times, I think it is probably intended to mean three different things, and worse, maybe never really intended to mean ‘information’ as such, or to mean the same one thing. Thus ‘information’ is talked about in a metaphorical way, alluding to concepts that strictly aren’t information as such. Google also does not refer to “information” as “the information”, which is bizarre in that the notion of “information” is paradoxically neither anything concrete, yet is said to be something specific (presumably being various ways of simply referring back to the word “information”, with the word “information” having no clear meaning in the end). Presumably, Google intentionally used the word ‘information’ instead of ‘metadata’ to avoid being clear, leaving the notions of “information” and “metadata” absurdities in their “privacy reminder”.

The word “agreement” is never used. Instead there is a button with “I agree”, and instead of even alluding to an agreement as such, the written text Google shows and calls a “privacy reminder” has the form of being authoritative, with Google simply alluding to the idea of there being an agreement between Google and any one user, as if Google simply presupposed a demand to agree with Google.

This “privacy reminder” that Google is showing is imo: user-unfriendly given the vague language; insincere and dishonest because of how Google seems to both want to inform the user yet apparently demands an agreement of the user; and possibly utterly deceitful (depending on whether or not Google is aiding or abetting, say, US or other governments in monitoring, information sharing or surveillance on people).

At the end of the text in the pop-up, there is an “agree” button. Because this ‘privacy reminder’ is written in such a vague manner, I am inclined to argue that any agreement that Google demands of a Youtube user is basically invalid if the user couldn’t possibly understand the agreement itself.

Presumably, Google treats the concept of “collection” (of information/data) the same way the US government does: as if you didn’t collect data (information) if you don’t want to admit to using such data (information). So I guess what everyone ends up with is “boilerplate” lingo, and thus language about data processing in a merely performative way (making points about making points, about making points, etc., with no true explanation).

Curious December 20, 2015 6:38 AM

To add to what I wrote:

The notion of making points about making points is really a philosophical thing, and not something logical, I’d say.

Grauhut December 20, 2015 8:04 AM

@Smiley: “is SRX3400 running backdoored unautherized code?”

Nope, the SRXs run Junos 12, not ScreenOS.

ScreenOS runs on these platforms.

I don’t know if Junos was bugged the same way as Screenos. Juniper says no.

“Q: What devices does this issue impact?

All NetScreen devices using ScreenOS 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20 are affected by these issues and require patching. We strongly recommend that all customers update their systems and apply these patched releases with the highest priority.

Q: Is the SRX or any other Junos®-based system affected by these issues?

These vulnerabilities are specific to ScreenOS. We have no evidence that the SRX or other devices running Junos are impacted at this time.”
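Given that advisory text, a quick self-check is easy to script. A hedged sketch: the function name is invented, the version-string shape (like “6.3.0r17”) follows the ScreenOS convention used in the quote, and the ranges are exactly the ones quoted above:

```python
import re

def screenos_affected(version):
    """True if a ScreenOS version string falls inside the ranges the
    quoted advisory lists: 6.2.0r15 through r18, 6.3.0r12 through r20."""
    m = re.match(r'(\d+)\.(\d+)\.(\d+)r(\d+)$', version)
    if not m:
        raise ValueError('unrecognized ScreenOS version: %r' % version)
    major, minor, patch, rev = map(int, m.groups())
    if (major, minor, patch) == (6, 2, 0):
        return 15 <= rev <= 18
    if (major, minor, patch) == (6, 3, 0):
        return 12 <= rev <= 20
    return False
```

So ‘6.3.0r17’ comes back affected, while ‘6.2.0r14’ and anything Junos-based fall outside the quoted ranges.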

This could well be a clever “Buy new boxes now or pay that support fee!” marketing campaign.

It’s good that the FBI is looking at it (and if the FBI looks at it, this doesn’t smell much like involvement). If they don’t find evidence for a hack, they may find something else.

albert December 20, 2015 10:49 AM

What do you expect from weasels, but weasel words?
This reminds me of my own involvement in weasel-wording. Many years ago my team leader was writing some product information and said: “I’m writing about bugs but I don’t want to use the word ‘bug’; do you have an alternative?”. I said: “Sure, ‘software anomaly’.” Of course we took it as a joke, back then, but now….
@Smile: Can You Say Backdoor?,
Microsoft was never one to let laws, ethics, or morality stand in the way of making money. ‘Whatever it takes’ is the mantra of the capitalist classes everywhere. BTW, while the US insists on trading partners’ ‘alignment’, China doesn’t give a RSA, as long as they get what they want. ‘We’ consider China an enemy, but they still trade with us… a lot.
@RE: Steganography. Most arguments against this have nothing to do with the process, but are really workarounds. Are you a target because you exchange bmps instead of jpgs? wav instead of mp3s? Maybe you prefer a raw image format (my favorite if you’re really into photography).

Compression is great for the 80% or so of the BS that permeates cyberspace. Who needs dynamic range for ‘music’ that’s processed at 0 VU?
. .. . .. _ _ _ ….

Clive Robinson December 20, 2015 10:49 AM

@ Grauhut,

IMHO the Juniper Backdoor story stinks a little like a cheap marketing campaign:

Not just that, its timing is somewhat convenient with regard to the Gov backdoor nonsense and the tacking of twice-failed legislation onto the US finance legislation. Which is basically just blackmail by any other name.

I just wish that one day the US Democrats would grow a pair and face off against the GOP idiots… Otherwise you have to believe they are either complicit or impotent…

As for the UK side of the puddle, what can I say; you have the infinitely evil Theresa May MP, Home Office Minister, and her snoopers’ charter. Ably supported by the vacillating PM David “piggy chops” Cameron, and laugh-a-minute Chancellor George “gidiot” Osborne, who draws slow hand claps and boos wherever he goes… I’m sure there must be a rewording of a children’s nursery rhyme that suits the twits…

Clive Robinson December 20, 2015 5:04 PM

@ Figureitout, Nick P, Thoth, Wael,

You all might find this of interest,

I’m really not surprised about the highly degraded read-whilst-writing performance. The flash NAND write cycle is measured in significant fractions of a second. Worse, it’s often a 2K-byte block-size write, not the usual 0.5K-byte, and the in-drive controllers have only limited buffering and interleaving capability.

But even in near read only drives there are problems inside the chips with the way they are structured.

What I did not see was anything on “hidden data retention” due to things like wear leveling and interleaving, often done to try to improve write performance, where the chips can get treated as a “round robin” or “circular buffer”.

Worse, although there is some OS support, getting it right for any particular usage can be both a leap in the dark and made in faith, neither of which is really the way you want to go with multiple-access memory systems.
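The retention point can be shown with a toy model. A sketch (class and names invented purely for illustration) of a wear-leveling flash translation layer: a logical overwrite lands on a fresh physical page, so the “deleted” data physically remains until garbage collection or secure erase gets to it:

```python
class ToyFTL:
    """Toy flash translation layer. Real controllers remap writes for
    wear leveling; this model keeps only that one behavior."""
    def __init__(self, npages):
        self.flash = [None] * npages   # physical pages
        self.mapping = {}              # logical block -> physical page
        self.next_free = 0

    def write(self, logical, data):
        p = self.next_free             # always take a fresh page
        self.next_free += 1
        self.flash[p] = data
        self.mapping[logical] = p      # old page is remapped, NOT erased

    def read(self, logical):
        return self.flash[self.mapping[logical]]

ftl = ToyFTL(8)
ftl.write(0, b'secret')
ftl.write(0, b'public')            # logical overwrite
assert ftl.read(0) == b'public'    # the OS sees only the new data...
assert b'secret' in ftl.flash      # ...but the old data is still in flash
```

Which is why TRIM and secure-erase support matter, and why getting it right from the OS side really is a leap of faith.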

Nick P December 20, 2015 5:30 PM

@ fr0sty

Coincidentally, I burned the new TAILS, Ubuntu, and Mint a few hours ago. System needed a cleanse anyway. On Mint now. Briefly had Haiku on it but couldn’t figure out how to turn on wireless in the 10-15 min allotted. Getting my apps and data back on now. Will play with Haiku and A2 Oberon some more when Virtualbox is reinstalled. Darn, should’ve tried GenodeOS between installs now that I think about it. (shrugs)

Grauhut December 20, 2015 7:43 PM

@Nick: If you are playing with Linux, give Elementary OS a try.

Btw: Desktop Virtualization… I ordered some parts for my regular holiday private hackathon; refurbished E5-2670s are around 250 bucks now. They make a great workstation with an X79 board. 🙂

@Clive: “Flash Nand write cycle is measured in significant fractions of a second”

How many years ago? SSDs are not that bad, 200us per write? If you don’t kill them with higher RAID levels they perform great. I know some early Intel consumer models running as MySQL data disks (> 95% read) for over five years now in RAID 1 mode.

Nice lecture on flash:

And I am really happy that modern SSDs throttle down before overheating. Of course, if you are setting up “hot” cold zones in DCs (> 25-30 °C) in order to save energy costs, then you’d better use conventional disks.

Wael December 20, 2015 10:47 PM

@Clive Robinson,

Re: SSD…

I tend to agree with the article if we’re talking about servers in data centers. For personal computing, I’d go with an SSD, especially on laptops.

Thoth December 20, 2015 11:45 PM

@Nick P
Linux Mint does have some networking issues, especially on newer computers. Regarding GenodeOS, I would prefer to let it mature a little bit more, and it should be treated as a framework instead of an OS, as that’s their goal.

Makers of the Librem laptop, which is a FOSS-friendly laptop with open source / FOSS-friendly hardware, have a “hardened” version of Linux Mint called PureOS, which is pre-installed on each Librem laptop.

Clive Robinson December 21, 2015 4:04 AM

@ Grauhut,

How many years ago? SSDs are not that bad, 200us write?

I forgot to add “compared to read cycles” due to trying to type quickly…[1]

Anyway, I’ll be going down for the operation in a few minutes…

[1] Some nurses don’t find it amusing that you use “mobile devices” at any time, when there are perfectly adequate “pay through the nose” “bed side entertainment systems” at only €10 a day, with no internet, and calls (receive and make) at €0.70/min…

Dirk Praet December 21, 2015 8:05 AM

@ Grauhut, @Nick P

Btw: Desktop Virtualization… I ordered some parts for my regular holyday private hackaton, refurbished e5-2670’s are around 250 bucks now.

Or you can go for a real cheap Raspberry Pi 2 VDI solution with Citrix HDX and ThinLinx.

@ Nick P

Briefly had Haiku on it but couldn’t figure out how to turn on wireless in the 10-15 min allotted

A known pitfall. From the FAQ: “Some wireless network adapters require installation of a firmware file, which must be installed by running the install-wifi-firmwares script (see the guide Connecting to wireless networks).” As much as I like it, I see very little use for it.

Coincidentally, I burned the new TAILS, Ubuntu, and Mint a few hours ago

I have given up on Ubuntu ever since Canonical started putting spyware in it. And I hate the Unity interface too. I've been on OpenSuSE as a general-purpose OS for years now, and I'm pretty happy with it. Next to that, I'm also using TAILS, Kali, PC-BSD, Qubes/Whonix and some others for more specific purposes.

@ fr0sty

Tails 1.8.1 is out

Fixes the Grub2 CVE among other things. In-line upgrades on USB/SSD have become seriously slow, though.

@ Thoth, @Nick P

Regarding GenodeOS, I would prefer to let it mature a little bit more and it should be treated as a framework instead of an OS as that’s their goal.

I second that emotion.

Re. Mint, Elementary OS, PureOS etc.

I really wish all of the fine folks behind these would join forces and focus on one or two distributions. Unless one is getting paid for reviews, it has become impossible to keep up with dozens of different distributions that all have their own strengths and weaknesses.

Sent December 21, 2015 12:40 PM

Which Linux variant offers a mature, stable distribution with good security, privacy and encryption?

I guess Tails is the de facto option for a live distribution, but for something you want to install and use as a ‘regular’ OS, what are the options?

Maybe Qubes? Gentoo? Debian? Fedora? Mageia? Arch?

Grauhut December 21, 2015 2:05 PM

@Clive: Good luck!

@Dirk: “Or you can go for a real cheap Raspberry Pi2 DVI solution with Citrix HDX and ThinLinx.”

That's the other side of the virtualization equation! 🙂

“I really wish all of the fine folks behind these would join forces and focus on one or two distributions.”

There are also more than two sorts of cars in the world…

And I like Elementary OS because of the happy feedback I get from people who ask which Linux variant to try first. And it's based on Ubuntu 14.04, something I know, in case they get in trouble…

Nick P December 21, 2015 3:21 PM

@ Grauhut

Might try it. Meanwhile, VirtualBox dependencies are corrupted because of Linux's take on dependency hell. I clicked install on the wrong icon, and the problems apparently remain after I uninstalled it, with gibberish error messages. Whatever. I miss how uninstalling and reinstalling Windows apps almost always worked.

I think I have KVM and now have a front-end for it. Might try to work with that instead. Once I figure out where the front-end went. Always this shit on Linux, lol…

@ Dirk

re Haiku

Haiku is nice because it tries to preserve the superior [for desktops] BeOS architecture. Described nicely here. That’s how it did awesome stuff on old, Win98-era hardware like this. Skip to 4:45 to see how responsive it is with multitasking overload around 17:14 or so. I was envious when I watched that long ago.

Now, Haiku itself isn’t quite delivering yet as a knockoff by a small team. However, it and MorphOS are keeping alive the tradition of easy-to-use, lightweight, consistent, highly-responsive desktops. Good parallel tradition and knowledge base to have going on for diversity. Plus, when more robust, might make a good OS for cheap, air-gapped systems leveraging whatever security tech they can. Would be much easier with a small, consistent OS vs a UNIX. So, I’m just interested in it for architecture and long-term potential rather than day-to-day.

re Linux

I got Ubuntu to use for hardware support, especially when fixing machines or getting files off them. It usually runs on about anything with little work. Mint is my day-to-day desktop because it's pretty reliable, super easy, and less scheming than Ubuntu as far as I can tell. I plan to try Fedora, SUSE, and the BSDs again in the near future, but had to reinstall immediately to deal with cruft (or spyware) that was bogging my system down. Moving FAST now. 🙂

@ Thoth, Dirk

I wanted to try GenodeOS just to see where they’re at. I agree on pushing it as a framework to build on and turn into something useful. Especially purpose-built systems or appliances. I’ve been promoting that for a while.

@ Clive

Um, are those spoilers or unrelated to actual content of the movie? I haven’t seen it yet. So, if spoilers, I’ll read them later.

Thoth December 21, 2015 6:08 PM

@Nick P
Hopefully @Clive Robinson is out of the ops theater fine and well and resting. Four hours after his last post before entering surgery and he's out posting again. He seems to recover very fast and seems good 🙂 .

Wael December 21, 2015 6:35 PM

@Thoth, @Nick P,

4 hours after his last post to enter surgery and out posting again.

Oh, no! He posted during surgery after he hacked the EKG machine in his sleep. Rumor has it the EKG machine was sitting behind some sort of a back-doored firewall.

Nick P December 21, 2015 8:52 PM

@ Thoth, Wael

He likely both hacked their network with an undersized smartphone keyboard, then installed a chatterbot to respond on his behalf if he wasn't awake. His main AI is trained using a huge knowledge base, integration with Watson, and samples of his posts fed to a GPU-accelerated deep-learning machine. Notice how the chatterbot merely used a pile of comedy links rather than a long, detailed post. That's a cheat to seem realistic. I bet soon… maybe within hours to a day… we'll see the real capabilities of his Watson and DNN hybrid.

Nick P December 21, 2015 8:59 PM

@ Jonathan

Congratulations. Those of us writing here expected plenty of attacks on QKD as it's (a) weird and (b) eventually relies on regular stuff. I've always been a fan of solid stuff you can argue for with basic physics and math. Hence liking physical separation, data diodes, separation kernels, TRNGs, etc.

So, given you smashed quantum stuff, what do you think about the KLJN scheme? It was the only QKD-style scheme that interested me because it relied on physics that were well-understood and theoretically more verifiable. Not to mention maybe cheaper. 🙂

Buck December 21, 2015 9:47 PM

@Nick P

If I understood correctly, it's even more impressive than that, as they didn't destroy just the ‘regular stuff’ – rather a flawed implementation of some of the ‘weird’ stuff (Bell's Inequality?)!


Please correct me if I’m wrong here! I have a peculiar fascination with the quantum world, but absolutely no experience in the laboratory. As counterintuitive as it is, I imagine there are some things that one would just have to see to truly believe…

Also, I must commend you on your patience and willingness to explain yourself, if that was you fielding the replies on /. I can only imagine the frustration you must feel sometimes when trying to explain your experiments… 😉 Keep up the great work!

tyr December 21, 2015 10:45 PM


Dirac Uncertainty Principle

‘An anecdote recounted in a review of the 2009 biography tells of Werner Heisenberg and Dirac sailing on an ocean liner to a conference in Japan in August 1929. “Both still in their twenties, and unmarried, they made an odd couple. Heisenberg was a ladies’ man who constantly flirted and danced, while Dirac—’an Edwardian geek’, as biographer Graham Farmelo puts it—suffered agonies if forced into any kind of socialising or small talk. ‘Why do you dance?’ Dirac asked his companion. ‘When there are nice girls, it is a pleasure,’ Heisenberg replied. Dirac pondered this notion, then blurted out: ‘But, Heisenberg, how do you know beforehand that the girls are nice?'”‘

Peanuts December 21, 2015 11:16 PM

Most hospital systems in place today have no firewalls, no authentication, no connection success/failure logging. The “normal” is: if you can reach it, it's assumed you own it and are supposed to be able to reach it, without auth or encryption.

It is in fact, a usually undocumented design requirement.

Which is why, before the next time you plan on nodding off to dreamland for some procedure, you might try to: 1) Plan a walk-through with staff to identify relevant systems to disconnect from the network.

Now you only have to worry about time bombs or viruses which trigger due to extended network disconnects.

By the way, none of the systems have antivirus or an established vendor patching program.

Happy dreams, best recovery!


Clive Robinson December 22, 2015 2:55 AM

@ To all,

Yes I’m alive, but not quite kicking yet. Thank you to all who expressed concern for my wellbeing.

@ Peanuts,

Sadly it is true that the WiFi in hospitals is not as secure as it should be, but there are other things, medically, in a lot worse state that would cause me considerably more concern.

As some regular readers will know, I've been banging on about the lack of standards, let alone security, in the communications to implanted medical electronics. The direction of surgery these days is less and less “cut-n-shut”; keyhole surgery is now routinely carried out by doctors, as is angio clearing of blood vessels around the heart etc. The result is surgery is becoming more about “replacement” or “assistance” of failing parts, or in some cases “augmenting” parts. Microelectronics is affecting these implanted devices in many ways, but the desire to get the best out of them has meant that a control interface is required… And there appears little in the way of standards over the base physical layers, and none further up the stack. Whilst this might give a little security by obscurity, it has a very significant opportunity cost when you need it the most (think being rushed into resus in ER / A&E Depts). It is knowing of the issues that made one of Bush Junior's advisors have the communications interface removed from his little box of heart electronics.

@ tyr,

+10, that is most unfair. I've just had my head operated on, and laughing is likely to cause bleeding and worse (it might make a nurse smile 😉).

That said it was well appreciated.

@ Nick P,

The fast recovery is due to my issues with GA/gas/painkillers: they and I don't get on, so after a brief discussion it was agreed by all that going under local would be a safer option. Thus I also got to know what was going on during the op. Further, as I'm an infection risk, they don't want me hanging around other people…

They were worried about me not using painkillers, but I've had the first peaceful night's sleep for several weeks, so “no worries” there. Though I suspect things will start to hurt as I “regress to the mean”.

The question now is “will I be able to play the piano”[1]… Or more correctly still whistle and sing in tune.

[1] From the old joke… A man is being wheeled into the operating theatre and on the way says “Doc, after the Op will I be able to play the piano?” to which the anesthetist replied whilst injecting the premed “I don’t see why not” to which the now sleepy man replies “Oh good I always wanted to play”.

Clive Robinson December 22, 2015 5:19 AM

@ Nick P,

Not having seen the film, I can not rule out a little bit of spoilage, but I don't think they are intended as such. They read more as a “hang, draw and quarter the mutts for daring to spoil our enjoyment of ourselves, and if they survive that, burn them as witches or heretics and stick their bits on spikes over the tower gateway”…

Basically a p155-take on the usual “outraged of Basildon” Twitter idiots whose only claim to fame is to be more vile than the preceding commenter, and often less inventively…

BoppingAround December 22, 2015 9:27 AM

Something from DJB:

As a boring platform for the portable parts of boring crypto software,
I’d like to see a free C compiler that clearly defines, and permanently
commits to, carefully designed semantics for everything that’s labeled
“undefined” or “unspecified” or “implementation-defined” in the C
“standard”. This compiler will provide a comprehensible foundation for
people writing C code, for people auditing C code, and for people
formally verifying C code.

For comparison, gcc and clang both feel entitled to arbitrarily change
the behavior of “undefined” programs. Pretty much every real-world C
program is “undefined” according to the C “standard”, and new compiler
“optimizations” often produce new security holes in the resulting object
code.

Full text:!msg/boring-crypto/48qa1kWignU/o8GGp2K1DAAJ

Nick P might be interested.

Markus Ottela December 22, 2015 10:02 AM

@Nick P (DEC 10th)
“Neat demo. The image transfer particularly brought back memories of Usenet and BBS’s haha. Hopefully combining the popular NaCl with your endpoint approach will improve uptake.”

Hah, wish I was old enough to have such memories. I hope that too.

@Figureitout (DEC 10th)
Nice, would like to see it running for a longer time (I thought I saw it change the timing of sending).

That's intentional. I'll make a detailed response so I can refer to it later: has an input process and a sender process. The input process generates messages and outputs them to a queue. The sender process is as simple as this:

It'll first record the system time in milliseconds. It then checks two queues for data. As long as both are empty, it'll flip a coin (using /dev/urandom as entropy source) and output a noise message/command based on the coin result. If the message queue has data, it'll send that; if not, it'll check the file queue. Since you can constantly add data to queues during a trickle connection, you can “interrupt” file transmission to send messages instead: you don't have to wait for the slow file transfer. The /cf and /cm commands will stop the transmission of a long message / file.

Based on message queue content, the function enters one of two sub-loops where heads sends the intended type and breaks from the subloop, and tails sends a noise packet of the opposite type, generates a new time stamp and restarts the subloop. The subloop is restarted until the coin is heads and the message/command is actually sent. This means the chance your message is not sent for x outputs is 1/2^x. I wanted it to be turn-based (m,c,m,c…) but I couldn't prevent it from occasionally sending two+ messages/commands in a row when the user writes messages etc.: random output closes this metadata leak. Once the actual/noise packet has been sent, the timestamp will be passed to the function trickle_delay().
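The coin-flip selection described above can be sketched roughly like this (a minimal stand-in, not TFC's actual code; the queue names and packet labels are my own):

```python
import os
import queue

def coin() -> int:
    """Fair coin from the OS CSPRNG (stands in for /dev/urandom)."""
    return os.urandom(1)[0] & 1

def next_packet(msg_q: queue.Queue, file_q: queue.Queue):
    """Decide what the sender loop emits this round.

    Messages take priority over file packets; when both queues are
    empty, a random-type noise packet is sent so an observer can't
    distinguish traffic from silence. When real data is queued, tails
    emits a noise packet of the opposite type instead, so the chance
    a message waits x rounds is 1/2^x, as described in the post.
    """
    if not msg_q.empty():
        return ("message", msg_q.get()) if coin() else ("noise-file", None)
    if not file_q.empty():
        return ("file", file_q.get()) if coin() else ("noise-message", None)
    return ("noise-message", None) if coin() else ("noise-file", None)
```

Either way something is always transmitted each round, which is the point of the trickle scheme.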


message = mq.get() will take a variable amount of time, especially if the input process is sending large files: it's in RAM so it shouldn't be too long, but still, it'll add to the overall time between coin tosses, making timing attacks easier. The constant delay is achieved within the trickle_delay() function:

The trickle_delay function immediately loads a timestamp in milliseconds and compares it to the start of the previous sender_loop. It'll calculate the offset and sleep that time. So it zeroes out the differences in encryption time (XSalsa-Poly1305 is constant time but the PyNaCl binding implementation might not be perfect). After the constant sleep, the function loads a random number from /dev/urandom (no longer Python's math.random) between 0 and trickle_r_delay (1.0 seconds by default). This obfuscates the tiny variances between packets. I included a boolean setting print_ct_stats that displays statistics about how long the constant-time sleep actually is:

Compared to queue loading + encryption time, which has more than 40ms of variance, the variance after CT-delay is only a fraction: 1-2ms. CT-delay is practical to have since it prevents spamming the IM server and key offsets from getting out of hand if the contact's RxM goes offline for some time. You can also see the random delay time on the third line. The high CT-delay suits a large variety of endpoint devices, from my netbooks that take 600 to 800ms to faster computers that take only ~150ms to load messages, encrypt them and re-iterate keys (20k rounds of PBKDF2-HMAC-SHA256 between every message; talk about forward secrecy).

The final option in the trickle_delay function shows the l_t_delay for long transmissions. The purpose of this is to make packets look more human-like to the IM server. This isn't exactly fooling intelligence services since the delay times are uniform, but it might fool some anti-spam filters. I wonder whether Gaussian or some other type of randomness would resemble human typing more.

The anti-spam delay might not be necessary. I inquired with DuckDuckGo about their XMPP policy if users enable the trickle connection — their response was “This should be fine as long as it is low volume.” So you should be fine if you keep the trickle_c_delay reasonably high.

“And have you thought about RasPi Zero for TFC”

It looks like an interesting piece of hardware, but there are even more accessories you have to buy. I'm more in favor of users buying themselves cheap netbooks that combine battery, display and keyboard into one hassle-free package. The beauty of TFC is that it's not hardware-dependent.

Markus Ottela December 22, 2015 10:25 AM

Thoth (DEC 10th)
“Neat to see yet another crypto library (NaCl) working on TFC. Do you have a protocol for the key exchange and all that? Are the packets sent at fixed length as they look pretty uniform?

I find it a pain that most modern crypto messaging apps are really bulky. This hinders very compact implementation onto hardware implementation. I have looked at OTR, TextSecure/Signal … they are just too bulky and requires multiple runs to get a message across.”

I'll make a protocol description later, but briefly:
1. TxM generates a Curve25519 ECDHE keypair.
2. TxM outputs public key and sends it over network to contact’s RxM.
3. Recipient sends back their public key.
4. User types received public key manually to TxM’s input screen.
5. User then verifies the key checksum from RxM to ensure there weren’t any typos.
6. User calls contact over Signal and verifies integrity of public key and if there’s MITM, enters new one.
7. User accepts public key and TxM generates ECDHE shared secret, runs it with both public keys separately through PBKDF2-HMAC-SHA256 (250k iterations) to generate two symmetric keys.
8. TxM outputs contact account, nick and symmetric keys over direct data diode connection to RxM.
9. TxM and RxM store the keys and contact to database. Key exchange is now complete.
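Step 7 can be sketched like this, covering only the key derivation (the Curve25519 exchange itself needs a library such as PyNaCl, so the shared secret is taken as input; using each public key as the PBKDF2 salt is my assumption about how the two directional keys differ):

```python
import hashlib

def derive_session_keys(shared_secret: bytes, pub_a: bytes, pub_b: bytes):
    """Run the ECDHE shared secret through PBKDF2-HMAC-SHA256
    (250k iterations) once per public key, yielding two 32-byte
    symmetric keys as described in step 7."""
    key_a = hashlib.pbkdf2_hmac("sha256", shared_secret, pub_a, 250_000)
    key_b = hashlib.pbkdf2_hmac("sha256", shared_secret, pub_b, 250_000)
    return key_a, key_b
```

Deriving two distinct keys from one shared secret gives each direction of the conversation its own key material.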

“Are the packets sent at fixed length as they look pretty uniform ?”

Yes. All plaintext packets are padded to 254 bytes before encryption. This is mandatory. The padding function has a guard that kills the program unless the packet is exactly 254 bytes long. Since TFC uses UTF-8, messages with special characters such as €ÅÄæøö etc. take more bytes, so you can't necessarily send 254 chars per message.

Ciphertext contains a 24-byte nonce, 254-byte ciphertext and 16-byte (128-bit) MAC. Base64 encoding extends the 294-byte message to a 4*(294/3) = 392 char string. So it's that plus the header containing information about TFC model, version, packet type, key ID etc. Since the keyID slowly grows, it's not going to be a static length, but I think the most important part is the padding of plaintext before encryption to ensure ciphertext length doesn't leak data about plaintext.
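A sketch of the padding guard and the length math (the byte-repeat padding scheme here is my own illustrative choice, not necessarily TFC's):

```python
PACKET_LEN = 254  # fixed plaintext length before encryption

def pad(plaintext: bytes) -> bytes:
    """Pad to exactly PACKET_LEN bytes, PKCS#7-style (illustrative).
    The guard mirrors the one described: anything that isn't exactly
    254 bytes after padding aborts the send."""
    need = PACKET_LEN - len(plaintext)
    if need < 1:
        raise ValueError("plaintext too long for a single packet")
    padded = plaintext + bytes([need]) * need
    assert len(padded) == PACKET_LEN  # the guard described above
    return padded

# Length math from the post: 24-byte nonce + 254-byte ciphertext
# + 16-byte MAC = 294 bytes; Base64 grows that by 4/3 to 392 chars.
assert 24 + PACKET_LEN + 16 == 294
assert 4 * (294 // 3) == 392
```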

As for Axolotl and OTR, yes, the protocol has much more overhead, but I feel it's quite necessary as you want to have as much future secrecy as possible when you're encrypting with weak endpoints. I hope TFC can at some point be squeezed into a small C program.


Other comments about the project:

I added logs for the TxM side so cross-comparison can detect an exploited RxM showing forged messages. I can't believe how long it took me to figure out how easy it was to close down such a big issue in security.

I changed CRC32 checksums to truncated SHA256 hashes so the user can tweak the error detection rate.
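That tweakable error-detection checksum might look like this (a sketch; the function name and default truncation length are hypothetical):

```python
import hashlib

def checksum(data: bytes, length: int = 8) -> bytes:
    """Truncated SHA-256 over the packet: unlike a fixed-width CRC32,
    `length` lets the user trade bandwidth for detection rate."""
    return hashlib.sha256(data).digest()[:length]
```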

I completely rewrote, added proper termination for the multiprocessing processes, and hard-coded packet length to 254 (it was tweakable, but only for OTP, where key material can be wasted if messages are shorter on average).

I added my second bit of “eye candy” in the form of data diode simulators (visible in the last link in my post above) that help illustrate the concept when I demo the system.

File transmission now sends a header with the name, extension, estimated packet count and delivery time based on sender’s packet output delay settings. File decoding can now be automated. I did not use subprocess for decoding this time to prevent shell-injection with custom filenames.

I fixed a dozen bugs, every single PEP8 style error, and wrote more than 300 unittests (2000 LoC).

I finally added command-line arguments you can start the program with to control many settings.

That's a long list but probably not everything worth mentioning. There's a lot more work to do: installer, whitepaper, manual… Also, the OTP/CEV versions need updating; they're not going anywhere.

Happy holidays everyone!

Nick P December 22, 2015 11:16 AM

@ Clive Robinson

re links

Those were pretty funny. Putin link being my favorite. Even Vader wasn’t that villainous: he waited two movies. 🙂 I believe they threw in a Citizen Kane reference on the last one.

re hospital

” Thus I also got to know what was going on during the op. ”

Ahh that makes sense. Are you saying you were actually awake during the operation? I hear this happens but haven’t known anyone personally. What was that like?

“They were worried about me not using painkillers but I’ve had the first peacefull nights sleep for several weeks, so “no worries” there. ”

That's how I do it. They always scoff at me avoiding painkillers when I'm sick or the few times I had surgery. Took a tiny hit of them once after my appendix was removed because the pain was unbearable. Plus, how else am I going to sleep in an American hospital with interruptions every 5-10 min? (Still didn't: not enough morphine…) Regardless, I usually avoid them so I can mentally track my progress in healing & be used to that sort of thing. One day I might not have medical backup…

On other end was a friend of mine. We’ll call him Jake. He’s normally a tough redneck but terribly afraid of needles. Wanted a tattoo done by my brother, an experienced artist. They got him set up, reassured him, put the tattoo gun on him once… mofo’ screamed and passed out instantly. Had to check his pulse and shit lol. “Still alive? Aight, let’s finish the tattoo.” He woke up later to find he got what he paid for even if he didn’t remember the specifics. Hope he didn’t have that experience in anything else he might have paid for. 😉

” “will I be able to play the piano”[1]…”

I could imagine him grinning with a bit of drool coming down his mouth with that line coming out as his head slowly fell back. Then the doctors smirk at each other quipping, “The stoners always say the damned things…”

@ BoppingAround

I was. My main haunt these days is Hacker News as the better technical discussions happen there most often. Quite a few of the old guard there, too, like John Nagle and people that worked LISP machines. Anyway, you’ll see my comments here. Also done things like slamming them on not leveraging prior work in safe programming and small, safe/secure runtimes for unikernels. However, the essay I put most time and thought into on where we need to go for robust, secure, day-to-day programming was this one. It focuses on imperative languages because I assume no major shift to functional programming. I commented more on how to build it and other tools with low-subversion in this thread.

@ Markus Ottela

You don't have to grow up with it: here's a great article on the various services that existed before the Web and their contributions. Written much more interestingly & with more context than various timelines. Hopefully, you'll get to share a similar one with your great-grandkids, talking about how we used to use controllers to play game consoles on non-optical computers instead of the stuff they snort nowadays.

Czerno December 22, 2015 11:22 AM

@Clive : relieved to learn that, unlike Schrödinger’s cat,
you’ve escaped out of the uncertainty box ! Hope your
wave function will be sorted out very soon !

Bob Paddock December 22, 2015 11:46 AM

@Clive Robinson
“Further as I’m an infection risk they don’t want me hanging around other people…”

DO NOT UNDER ANY CIRCUMSTANCES TAKE Fluoroquinolone antibiotics (Levaquin, Cipro etc.; 23 different names worldwide) unless you are about to die from Anthrax or the Plague (one lady who has been crippled by these drugs told me she will take her chances with the Anthrax, with its nearly fatal death rate, next time). is my Cleveland/Akron TV interview. 3,000 dead. 200,000 injured by these drugs per the FDA. That is estimated to be only one percent of the real numbers!

Please see: Our Win at the FDA hearing on Fluoroquinolone Antibiotics for details and videos of the hearing at the FDA on November 5th 2015.

The nearly unanimous conclusion of the FDA advisory panel states that the current labeling to support Fluoroquinolone antibiotics use for sinusitis, bronchitis and uUTI is NOT justified.

Eyes and ears were not considered. In the opening remarks the FDA Chair said that detached retinas due to these drugs would not be considered today [Nov 5th 2015]. When will they? How many people think about how taking an antibiotic might make them go blind? 🙁 …

In the medical device design contest, where my design came in 3rd place, I found that no one is interested in spending the $100,000,000 to get FDA approval for something that only about 3,000,000 people (the known number; it is higher, I'm sure) suffer from daily. 🙁

Rounding out the winners’ circle in third place is the Intracranial Cerebrospinal Fluid Pressure Regulator. The entry, submitted by Bob Paddock, was inspired by a personal experience; his wife committed suicide after an excruciating headache that resulted from cerebrospinal fluid leakage.

‘The inspiration for this dream device obviously came from a very personal place,’ said Jamie Hartford, managing editor of MD+DI. ‘The Intracranial Cerebrospinal Fluid Pressure Regulator is a novel improvement over the standard of care for intracranial hypotension caused by cerebrospinal fluid leaks, a condition that can cause devastating headaches. I like that it uses wireless power transfer, which is really the next step for implantable devices.’


“And there appears,little in the way of standards over the base physical layers and none further up the stack. … have the communications interface removed from his little box of heart electronics…”

It is not that those doing medical implant designs don't know how to do things such as encrypted communications; it is the power required to carry out the operations. What works on a desktop rarely scales well to the embedded space, running on a coin cell implanted in the body, with current technology.

What good encryption would you use in a pacemaker, insulin pump or a spinal implant? Ideally it must have no impact on battery life, as well as run at only a 1 MHz clock or, if lucky, a 24 MHz clock for a few milliseconds.

John Galt IV (soon to be JG4) December 22, 2015 12:16 PM

@Clive – Glad that you pulled through. I opted for the morphine pump for the ruptured appendix. It led to some meditation on a nearby cemetery, where the residents didn't have access to modern healthcare. I have no idea what was in the 24/7 rounds of antibiotics, but the bags were prominently labeled “Contains DEHP”, which they no longer use on premature kids. It causes testicular atrophy among other wonderful side effects. Not to worry, it's in everything in your house anyway. When I was a kid, it was in teething rings.

I had seen this general news before, and much later realized that this can be done using beamforming with multiple hacked WiFi and/or cell phone devices to extend range and/or resolution:

I might find some time over the holidays to post the bill of materials and concept for a robust powerline filter.

I think that it is possible to use a two-machine cascade at each end to provide secure channels using a variety of compromised services. The general premise is to run the encryption on a machine that is thoroughly air-gapped/energy-gapped/filtered/etc., then transfer that through some clean protocol, up to and including printed paper, to an internet-connected machine. This doesn’t defeat traffic analysis, but that isn’t the main threat model, which is theft of IP/trade secrets. My concern with traffic analysis is that it allows adversaries to reverse engineer a business by discovering the trade secrets of the supply chain. I was shocked to find that by default, most vendors supply Level 3 credit card data to banks. That is a gaping 700-foot long information leak for any business.

Anura December 22, 2015 12:49 PM

@Bob Paddock

Salsa20/12 (the 12-round variant of Salsa20) is a very small and pretty fast stream cipher; while it's not broken, it doesn't offer a high security margin (but still, the best attack is against 8 rounds and is not feasible). For a low-power device that doesn't transmit a lot of data it may be a good choice, but I have no clue what the actual cost is – it depends on how much data you need to transmit. ChaCha12 should in theory be stronger, but hasn't been around as long.

The other question is what you are communicating with. If you are talking to a remote server, key exchange becomes a problem. If you have two fixed components of a device that communicate wirelessly but only with each other, then you may get away with using a static key. Also, especially when using a stream cipher, you need to ensure that you perform message authentication, which also adds to the cost.
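The encrypt-then-MAC structure that last point implies can be sketched like this. The standard library has no Salsa20, so the keystream below is an HMAC-SHA256 counter-mode stand-in; on a real implant you'd swap in Salsa20/12 and a cheaper MAC, and all names here are hypothetical:

```python
import hashlib
import hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Stand-in keystream (HMAC-SHA256 in counter mode), playing the
    role a real stream cipher such as Salsa20/12 would fill."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, nonce: bytes, msg: bytes) -> bytes:
    """Encrypt-then-MAC: XOR with the keystream, then tag nonce+ct."""
    ct = bytes(a ^ b for a, b in zip(msg, keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()[:16]
    return nonce + ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the MAC before decrypting; reject any tampering."""
    nonce, ct, tag = blob[:8], blob[8:-16], blob[-16:]
    good = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, good):
        raise ValueError("bad MAC")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))
```

Even with the static key the comment suggests for two fixed components, the nonce must never repeat, e.g. by deriving it from a persistent counter.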

Gerard van Vooren December 22, 2015 1:30 PM

About boringcc,

The simple fact is that Dennis Ritchie didn't improve C when he could and had the momentum. That train left a long time ago. Although I respect DJB a lot, the mindset is just wrong. You can polish a turd, but I still wouldn't eat it. Having a compiler that lacks unspecified behavior doesn't make C a reasonable language. The Underhanded C Contest should be “boring” for that to happen. There are even “improved” C languages that break compatibility, but they aren't in use very much. There are also countless languages that are safe, even without GC, such as Ada and now Rust, but systemd for instance is still written in C, and even OpenBSD is written in C. I admire the attempt of DJB and would even like it to become real, but I am very skeptical whether it would change the security of tomorrow's systems. The view is too narrow for that. Better to throw away the prototype and burn the ships, and that also won't happen. C is here and here to stay.

Nick P December 22, 2015 11:07 PM

@ Gerard

“and that also won’t happen. C is here and here to stay.”

Hence all the work on improving C codebases, compilers, etc. You’re probably using C code to type your posts. It’s not going away and it’s too critical. So, gotta do something.

Figureitout December 22, 2015 11:17 PM

Markus Ottela
That’s intentional
–Ok, I thought you were aiming towards absolutely no timing differences (and I understand difficulties implementing that, not to mention getting access to untampered real timers w/ quartz, not software-based ones from some filtered digital signal or questionable API (maybe..), in larger chips in RasPi or likely x86-based PC’s). You can’t realize full-system operation w/ these chips so it’s always an annoying question. Timers can be tricky, to initialize, and do what you want (various modes…), I know. :p

Noted on some of the system operation and various tech. details, makes sense mostly, some things not though. I’ve only scanned thru quickly the code on github, I should read it better but am overflowing w/ projects and work (even on my “break” lol)..haven’t built or tested the system. I’ll get to it eventually, more so when I’m done w/ school…

I mention RasPi Zero b/c if you consider building such a system as TFC, w/ basic OPSEC practices you want hardware you don’t care about throwing away. 3 laptops (6 if I want to test whole system myself) is a bit of a price for me, I could get real good use out of those, if they have at least 4GB ram. Sad but true. When you have a huge community like the RasPi one, new hardware comes out and gets hacked up immediately. Already some quick hacks to get mostly regular Pi functionality. What I’m envisioning is the same system just shrunk w/ a cheaper cost, would just be a bit of a hassle to build nicely and fit in nice case.

Oh if you have time, you mentioned “MAC’s”, I don’t think I’ll get it on first rev. but something I really want is AES-MAC for my little physical detect system. Was that difficult or easy to get working? As of right now I’m planning on appending a microsecond sample to a pre-set message to XTEA so ciphertext isn’t the same….(w/ pre-set freq. hopping, as I don’t know how to keep both ends synced). I ultimately want AES but am having little trouble right now (w/ RF portion, implementation is long done just on micro). Naturally, design decisions pull on each other, for instance I want to easily generate new keys w/ just buttons and microsecond timers, but they have to be transmitted then to other end (which I also want the receiver to not talk back to transmitter to make more difficult to track down and tamper w/ logs). Alternative is just changing keys out every few months which makes this really unusable except it may be nice to force that for changing out keys…

Anywho, happy holidays too mate!

Nick P December 22, 2015 11:42 PM

@ Gerard

Just got some good news. Well, I’ll just paste the comment:

“For anyone interested in Modula-2 as a potential replacement for C as a systems language, there is a small but hopefully growing revival of the language occurring on the Gnu Modula-2 mailing list and (especially) on comp.lang.modula2 . Right now there is a call on the newsgroup for testers of the Modula-2 to C (M2C) compiler. This is part of a larger effort for the Modula-2 Revision 2010 (M2 R10) language definition and implementation by Benjamin Kowarsch and Rick Sutcliffe.”

I still have the old Modula2-to-C compiler and Lilith report just in case. Might get some new tools. Hell, I might even do some alpha tests on that for them. A form of Modula-2 is one of the few alternatives to C that could get traction if benefits were explained. Anything C people can do, Modula-2 can do as well. You know, outside supported backends or popular libraries. A revival of that brings us a step closer to Modula-3: the best for industrial use. Need to ditch uppercase requirement and add macros for zero-overhead abstractions, though.

Anyway, just posting it in case you wanted to get on the mailing list or comp.lang.modula2 to get in on the revival or test out their tools.

Grauhut December 23, 2015 3:13 AM

@Nick P: “I think I have KVM and now have a front-end for it.”

For a decent virtualization host you could give Centos 7.x a try.

If you want it easy, install the program groups “server with gui” and “basic virtualization host” and add the package “virt-manager” after basic setup. Before installing VMs, configure the bridges on the NICs you want to offer to these VMs manually. For a “backbone bridge” with no internet connection, set up a tap device and a bridge on top of it. Wherever possible use virtio drivers; they offer 10G internal speed. For BSD installs, use the e1000 KVM NIC emulation or disable checksum offload on virtio NICs inside BSD after install. If you need a fast “invalid packet” generator, leave offload on! 🙂

Nick P December 23, 2015 12:41 PM

@ Grauhut

Appreciate the tips! 🙂 CentOS also has the advantage of strong SELinux support IIRC. There are guides out there on using that with KVM, etc.

@ All

Great paper on usable PKI implementation

Paper here. Even I was shocked that getting on a secure wireless network with cert setup was a twenty-something-step process even Ph.D.s couldn’t understand. They narrowed it down to a few intuitive steps with the rest automated. Excellent example of improving security in a way people can use. More thinking like this needs to be applied to each product category and the apps that manage them.

Michael Belless December 23, 2015 2:42 PM

I do not have a news story. Someone I know claims that they cannot get money from France (to the US) from a relative because the French Government, in response to the recent bombings, has made transferring money out of France to anywhere nearly impossible.

Anyone know anything about this?

Czerno December 23, 2015 3:32 PM

@Michael Belless :

I’m not aware of any such restrictions on money transfers abroad from France, whether the senders are French residents or visitors here.

The only vaguely related news I’m aware of are rather drastic restrictions on the amount of purchases paid in cash, in order to thwart money laundering in general and possibly terrorism-related financing (as if terrorists and those who supply them with weapons are going to declare every transaction to the authorities). Even so, the limit for payments in cash (1,000 € iirc) applies to French citizens only, not foreign visitors/tourists, afaik.

Czerno December 23, 2015 3:40 PM

Sorry for my bad English. To clarify, the 1,000 € limit is per individual invoiced item (or service) paid in cash. Above that ceiling, payments must use non-anonymous means such as cheques or credit cards (anonymous pre-paid CCs are in practice not available in France and will be banned altogether).

Nick P December 23, 2015 4:30 PM

@ Clive, Wael, Thoth, Grauhut

My position on knocking C out in favor of a safer system language was that it had to be simple and efficient. My favorite contender, which I’ve been pushing, was a Modula-2 revamp or (ideally) Modula-3, as we can subset it. Modula-2 was nice because it was simple (65-page specification), readable, safe in all kinds of ways, easy to compile, efficient in production, maps easily to stack architectures, and has no GC. A lot of nice traits. I wanted to drag it toward Ada some more in safety features, albeit not complexity.

Anyway, there’s a Modula-2 revival going on with a revision and a new Modula-2-to-C compiler. Main site is here. However, you might want to start with the FAQ. It is a series of very rational statements that I can’t argue against, outside maybe the uppercasing. Rare when I see that in promotional material for a programming language. 😉 Might be worth testing out and contributing to.

Plus, remember I’m thinking of it in terms of my overall vision for imperative language overhaul. And next step after that, once I got functional down, is automated conversion of best FP language I can find to the safe 3GL to leverage its compiler. If necessary or beneficial. I do know the LISP’s and ML’s are easier to verify plus there’s refinement strategies into stateful, imperative code.

Clive Robinson December 23, 2015 6:19 PM

@ Nick P,

With regards to Modula-2 R10, it looks interesting, and I’m glad they are taking an “engineering” approach rather than the more common “Let’s throw it against the wall, see what sticks and sell that; whatever does not stick but people want, we will nail up in the maintenance or next candidate release”…

However I still have an attachment to Modula’s predecessor Pascal and the P-code engine, which, being mainly stack based, had a number of advantages on lower-powered architectures.

Thoth December 23, 2015 6:27 PM

@Nick P
Is there a Haskell/Ada compiler that goes direct to assembly (x86, ARM, etc.)? It would be useful to skip the middle-man (C code) that most higher-level languages rely on. I can imagine running a kernel or drivers built in a safer language (Haskell/Ada low-level code) that spins out more assured environments and usage.

Thoth December 23, 2015 6:55 PM

@Nick P
re: PKI deployment
My job of deploying HSMs and integrating them (especially with PKI infra like CAs) is a very painful and interesting affair. I cannot describe the amount of horror involved in deploying an HSM cluster alongside something as simple as a CA.

I have helped deploy the following enterprise CAs with HSMs:
– Microsoft CA (obviously :P)
– RSA Keon
– EJBCA (Open Source Enterprise Java built CA)

I can tell you all of them have very bad and unintuitive GUIs. Even I, who have been deploying this crypto infrastructure for my clients, still find them awkward, and I always have to flip through hundreds of pages of manuals to refresh my memory and catch up where I left off, on-site, in front of customers who are eagerly waiting for me.

I should say the “open source” CAs are the worst kind (EJBCA). The GUI isn’t very intuitive; you have to figure your way around the commands …etc… Maybe some of you have deployed EJBCA via a software keystore, but once you add the complexity of an HSM cluster, good luck 🙂 .

RSA Keon has a Windows 2003 style GUI (web GUI) with too many words all over, and you really get lost in the ancient web-based manual. It has a ton of internal CAs to set up just to get a usable public CA. There is the Admin CA (for registering Admins) and the System CA and a whole lot more internal CAs before you get to the public CA …damn… How many internal CA keys do I need to manage and do Cert Revoke checking on every week… No wonder my customers are scared of bringing this beast to a halt when they need some maintenance on their HSMs.

MS CA (Microsoft) is surprisingly the most friendly once you get used to it .. a few clicks left and right and you are done. You can easily get an HSM cluster connected to the MS CA(s) if you want, but the problem is with the concept of Microsoft’s Cert Store, with its multitude of Cert Store types as shown in the research paper. Importing and exporting keys can be a b _ _ _ _ . The headache comes when a customer wants to migrate from SHA1 to SHA256 (the recent migration rush) and most legacy systems are running on Windows 2003 (banks and huge corporations). You have to understand the stupidly unwieldy Microsoft CAPI and CNG Crypto Provider infra to get it working. The setup of the HSM + MS services is quite straightforward (if the MS CAPI/CNG already supports them or the HSM drivers give you a point-and-click install), but when it comes to migration and reloading of HSM-protected keys, I wish you all the best. Been there, done that, part of my rice bowl, and it HURTS daily.

There’s a recent case where even the HSM manufacturer didn’t know what was up.

Sorry Thales, I don’t mean to make you look bad, but you gotta clean up your boys on the ground (make them more knowledgeable), keep them up to date, and stop telling customers to engage your tactical units (because of the nice yummy cash flow once the tactical units get deployed … I know…). Oh, and your nShield Connect 6000+ HSMs suck hard because the battery blocks and PSUs keep going down, and recently those units have been crashing more frequently with system faults.

For any Crypto Deployers lurking in the background, do heed my 2 cents of advice … MS 2003 migration from SHA1 to SHA2 only works for Microsoft’s own CAPI mechanisms and does not work for other CAPI mechanisms (due to MS hardcoding it to only allow that migration). Migrating from 2003 CAPI (SHA1 to SHA2 algo) means upgrading the damn OS to anything above Win 2003 that supports the CNG providers.

The creators of EJBCA (PrimeKey Solutions AB) acknowledge the difficulty of setting up CAs + HSMs and came out with an all-in-one package called the PrimeKey PKI Appliance, a server running a pre-installed EJBCA image with a Utimaco PCI-card HSM attached to it, in the hope of easier PKI deployment.

Of course PKI deployment doesn’t revolve around just HSMs and CAs, but these are an integral part of any PKI deployment (especially the CA portion).

Anura December 23, 2015 7:03 PM


Not sure about Haskell, but GNAT does not generate C code. AFAIK, all GCC compilers (which includes GNAT) compile the code to GENERIC, which then generates assembly.

Markus Ottela December 23, 2015 11:40 PM

@ Figureitout:

It’s tricky alright. The current random-delay setting of 1.0 seconds means the random delay exceeds 998 ms with probability 1/500. It would however appear that there is no difference in timing error whether I place messages into the queue or whether it outputs noise messages.

I’ll see what I can do about tightening the timing. I’ll move the padding before queue loading, so padding won’t take time, and constant-length plaintext is always loaded from the queue for encryption. In case this makes no notable difference, you can still mitigate by increasing the random delay after the constant-time delay: having 10 seconds of random delay means only one in 5k packets will exceed the randomness, and the average output delay is increased by only 4.5 seconds.
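A minimal Python sketch of that output scheme (illustrative only, not TFC’s actual code; the packet length and delay constants are placeholder values):

```python
import random
import time

PACKET_LEN = 255        # constant plaintext length (placeholder value)
CONSTANT_DELAY = 2.0    # fixed delay that swallows encryption-time jitter
RANDOM_DELAY = 1.0      # uniform random delay on top of the constant one

def pad(plaintext: bytes) -> bytes:
    # Pad before queue loading so padding time never shows on the output side.
    return plaintext + b"\x00" * (PACKET_LEN - len(plaintext))

def output(packet: bytes) -> None:
    start = time.monotonic()
    # ... encrypt and transmit `packet` here ...
    elapsed = time.monotonic() - start
    # Sleep until the constant deadline, then add a uniform random delay.
    time.sleep(max(0.0, CONSTANT_DELAY - elapsed) + random.uniform(0, RANDOM_DELAY))
```

With a 1.0 s uniform delay, a 2 ms timing difference only shows when the random draw lands within 2 ms of its maximum, i.e. roughly 1 packet in 500; stretching the random delay to 10 s pushes that to 1 in 5,000 while raising the average delay by 4.5 s relative to the 1 s setting.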

“I’ve only scanned thru quickly the code on github”

The timing implementation in current CEV/OTP isn’t nearly as good. Encryption time isn’t evened out at all, there’s only static delay. So I suggest you wait with the code until I can get the latest version out. NaCl version will probably be first, and CEV/OTP will hopefully follow within the next two months.

TFC isn’t really the thing you need to be throwing away. If you can’t ensure the physical security or anonymity of your physical location, it’s unlikely it offers any additional protection against HSA. So if Tor works, it’s unlikely anyone will break into your home and your devices are safe. If you want physical security, get a Caucasian Shepherd. It’s tamper evident in the sense it’s either the dog or the intruder who’s in pieces on the floor when you come home. Joking aside–

It’s a nice idea that you could carry keys, software and OS in a medallion, and that the TCB can fit in your pocket. The issue is, however, that you need to protect your power supply, display unit, and keyboard from physical tampering as well. Two small netbooks can fit most backpacks without effort and are much less of a hassle. You don’t need to throw away the NH computers; they’re assumed to be compromised at all times.

You can test entire TFC suite on a single computer: Just run one local testing instance on host OS, one on virtual guest OS. Key generation program must be run with -k or -K flag as you can’t use HWRNG during testing.

Unless I can figure out how to mix entropy from the HWRNG into the ECDHE private key, I’m not going to support it. TFC-NaCl will however support pre-shared keys, and there will be for that. The genKey will take entropy from the HWRNG, the kernel, and user keyboard input. Since HWRNG sampling is set to take 15 minutes, I’ll adjust the number of rounds keyboard entropy is derived with through PBKDF2-HMAC-SHA256 to match the sampling time. RasPi is slow, and the way I see it, this is the safest and most convenient arrangement; it’ll be tweakable, naturally.
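The round-count tuning can be sketched with the standard library’s PBKDF2 (a rough illustration assuming roughly linear scaling of iteration time; TFC’s actual genKey may do this differently):

```python
import hashlib
import os
import time

TARGET_SECONDS = 15 * 60   # match the 15-minute HWRNG sampling window

def calibrate_rounds(target: float = TARGET_SECONDS, probe: int = 50_000) -> int:
    # Time a probe run, then scale the round count linearly to the target.
    salt = os.urandom(32)
    start = time.monotonic()
    hashlib.pbkdf2_hmac("sha256", b"probe passphrase", salt, probe)
    elapsed = time.monotonic() - start
    return max(probe, int(probe * target / elapsed))

def derive_key(passphrase: bytes, salt: bytes, rounds: int) -> bytes:
    # PBKDF2-HMAC-SHA256 with the calibrated round count, 256-bit output.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, rounds, dklen=32)
```

On a slow board like the RasPi the calibrated round count comes out much lower than on a desktop, which is the point: the keyboard-entropy derivation takes the same wall-clock time as the HWRNG sampling either way.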

The MAC used in TFC-NaCl is Poly1305 (originally specified as Poly1305-AES; NaCl’s secretbox pairs it with Salsa20 rather than AES). It’s part of the library so I really can’t say how easy it is to get working.

“As of right now I’m planning on appending a microsecond sample to a pre-set message to XTEA so ciphertext isn’t the same…”

This deviates from standard practices. Exercise caution! If you’re looping through just the 6 least significant digits — microseconds — and not the entire POSIX timestamp, you might repeat the IV, and that will compromise the security entirely. Also, I highly recommend you use a nonce that has a random section, say, 256 bits, concatenated with the timestamp. AES-GCM takes as long a nonce as you want; for reference, TFC-CEV uses a 512-bit nonce.
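A nonce of the recommended shape, 256 random bits concatenated with the full timestamp, might look like this in Python (the exact layout is illustrative):

```python
import os
import struct
import time

def make_nonce() -> bytes:
    # 256 random bits followed by the full 64-bit POSIX timestamp in
    # microseconds: unlike a bare microsecond counter, which wraps every
    # second, this construction cannot realistically repeat.
    return os.urandom(32) + struct.pack(">Q", int(time.time() * 1_000_000))
```

Even if the system clock is wrong or reset, the 256-bit random section alone makes a nonce collision negligible.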

I’m not sure whether the wireless system you’re looking into is like TFC. If it is, make sure the RF receiver and sender communicate with the TCB through data diodes: it’s much harder to ensure wireless communication isn’t bidirectional. Also, it’s best practice to re-derive the key after every message to obtain forward secrecy.

If it’s not like TFC but uses TCBs that have bidirectional connectivity, look into Axolotl-like communication where the computationally expensive DHE key exchange is periodic (hourly/daily/weekly) and the computationally cheap key derivation still happens after EVERY message. Just don’t go the OTR route, where you’re using the same key for multiple messages until the periodic DH ratchet is complete.

Also, yesterday I came up with a nice idea for TFC-NaCl in case you’re doing ECDHE key exchange over a TFC-style system: there are no identity keys, but you can log both public keys on TxM during key exchange. That way you have a reliable way to detect any past MITM attacks against the key exchange once you have the possibility to compare fingerprints face to face.
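A sketch of that fingerprint idea (a hypothetical helper, not TFC code): hash both public keys as logged during the exchange, and later compare the digests face to face.

```python
import hashlib

def fingerprint(tx_public_key: bytes, rx_public_key: bytes) -> str:
    # Hash both public keys logged on TxM during the key exchange. If a MITM
    # substituted either key, the fingerprints the two parties compute will
    # not match when compared face to face.
    digest = hashlib.sha256(tx_public_key + rx_public_key).hexdigest()
    # Group the hex digest into blocks for easier verbal comparison.
    return " ".join(digest[i:i + 8] for i in range(0, len(digest), 8))
```

Both parties must hash the keys in the same agreed order (e.g. initiator’s key first) for the comparison to be meaningful.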

For a moment I thought about chaining the public keys so a single hash can detect a MITM during any previous key exchange. This is however not wise: if one of the parties doesn’t complete the key exchange because the public key wasn’t received, or because they cancelled/crashed, the updated hash will get out of sync. Also, it might be harder to tell which of the messages were compromised during the MITM.

Thoth December 24, 2015 12:08 AM

@Markus Ottela, Figureitout, Nick P, Clive Robinson, Wael.
It seems like someone attempted to use cash to get the Raspberry Pi Foundation to install malware on their boards, but apparently the RPi guys rejected the offer. I am pretty sure such approaches are very common across the IT (especially the ITSec) community on a daily basis.

Do we continue to put some trust in the RPi? I don’t think we should put full trust in it; better to assume the RPi is untrusted from the start.

What is the likelihood of clustering a few more RPis to do @Clive Robinson’s prison architecture on top of Markus’s TFC? The idea is that a few RPis behind the sending data diode do split computations, while a few RPis sitting behind the receiving data diode do their own split computations as well for the decryption and encryption of messages, thus ramping up the security of TFC even higher.

Another thought is to use a bunch of STM32 ARM chips to form a clustered sender and receiver module using a Prison model. Of course this requires $$$$$$$ and expertise to get it up and running.

A less secure but still useful software-based measure is dynamically generated obfuscating code (some level of security via obscurity): when the user compiles the program, redundant functions are generated pseudo-randomly and inserted into the compiled code, so that each build looks different. With different redundant functions wrapped around the encryption/decryption routines and other sensitive operations, even a powerful CPU backdoor with algorithms for reading the internal state of the state machine would have a harder time deciding which data is important to exfiltrate or how to make sense of it. The backdoor must either exfiltrate the machine state wholesale (which becomes far too obvious) or be smart enough to understand the obfuscated, randomized running state and caches. If this is coupled with splitting the functions across multiple cores/CPUs along with the random redundancy, it becomes even harder for a backdoor in any one core/CPU to make sense of things; even if the entire set of chips/cores/CPUs is infected, it still poses a pretty darn big obstacle to cross.


Thoth December 24, 2015 12:25 AM

@Markus Ottela, Figureitout, Nick P, Clive Robinson, Wael.
A Software Based Obfuscation With Random Redundancy Functions

Define a list of functions {A, B, C, D, E, F, G, H, I, J … Z, -1 }. Let’s assume the total list is 27 possible functions. Function -1 is the random function that is used across all software to randomly execute a function either in parallel or sequence.

During compilation, define Compile(function[] probableFunc, int Min, int Max, function[] requiredFunc). The Compile function randomly selects between Min and Max functions from probableFunc to load into the bytecode. Let’s say Min = 3 and Max = 7: we will have at least 3 and at most 7 randomly selected functions.

The end result for a bytecode file is a bunch of randomly selected functions. After the functions are selected, they are also renamed randomly before finally being output as bytecode or an executable. Once the executable runs, function -1 takes over and randomly executes the renamed list of functions.

An attacker trying to build a binary tree or some form of analytics tree would have to cross the hurdle of understanding the renamed functions, the common functions, the whole list of all functions, and how they are interconnected.

Under such circumstances, the attacker/malware has two options: understand which functions are the correct ones (unlikely, since the converter performs random obfuscation and renaming, assuming the converter itself is trusted and secure), or simply export all the data wholesale, making its activities very obvious.
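A toy Python sketch of the selection, renaming, and random-dispatch steps (names and structure are invented for illustration; a real implementation would operate on bytecode or object files rather than a dict of callables):

```python
import random
import string

def _rand_name(rng: random.Random) -> str:
    # Random identifier, standing in for the randomized symbol renaming.
    return "f_" + "".join(rng.choices(string.ascii_lowercase, k=8))

def build_obfuscated(required, decoy_pool, min_n, max_n, seed=None):
    # Select between min_n and max_n decoy functions, mix them with the
    # required ones, and give every function a random name -- a stand-in
    # for the randomized symbol table of a compiled binary.
    rng = random.Random(seed)
    decoys = rng.sample(decoy_pool, rng.randint(min_n, max_n))
    return {_rand_name(rng): func for func in required + decoys}

def run_random(table, rng=None):
    # Function "-1": execute the renamed functions in random order.
    rng = rng or random.Random()
    names = list(table)
    rng.shuffle(names)
    for name in names:
        table[name]()
```

An attacker profiling the resulting binary now has to map the randomized names and mixed-in decoys before knowing which function actually matters.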

Wael December 24, 2015 1:10 AM

@Nick P,

My position on knocking C out in favor of a safer, system language was it had to be simple and efficient

Two things: first, simple is good; second, efficient is also good. The two requirements may be at odds with each other: S-v-E, as we discussed a few times in the past, as in efficiency vs. security. I read a paper some time ago about the three horizons, which I’ll adapt to this problem:

Horizon 1, current languages used, especially C, C++ and their derivatives: This is where current developers need to use these languages to build whatever projects they are working on. What needs to happen is to use the languages “safely”, have security in mind from the start, and adopt a solid SSDLC. People working in this “horizon” aren’t interested in anything more.

Horizon 2, modify existing languages and the corresponding toolchains to achieve better security: The Modula-2 example you’re citing fits this horizon. I would recommend you list out a more granular set of requirements that achieve the “efficiency” and “simplicity” you noted. These two requirements, in your view, indirectly lead to a more robust language. For each requirement you list, you’ll need to state its direct influence on security as well as its relation, conflicts, and tradeoffs with other requirements.

Horizon 3, New paradigms: A research topic. I think this is what you need to look into. What should programming look like in 50 years? We can throw some predictions there.

You need to decide which horizon you’re operating in. As for the Modula-2 FAQ, I read it. The arguments are mostly rational, but also superficial. Like you, I am not fond of the capital letters.

Wael December 24, 2015 1:20 AM


@Markus Ottela, Figureitout, Nick P, Clive Robinson, Wael.
A Software Based Obfuscation With Random Redundancy Functions

This technique is already in use as a building block in some commercial obfuscation and white-box cryptography tools. I have also used similar techniques in the past, with a few more steps and algorithms. You may want to add additional details…

Anura December 24, 2015 1:31 AM


I don’t think secure, efficient, and simple are mutually exclusive, but efficient doesn’t mean that you don’t accept some trade offs. Bounds checking adds a small hit, but not that much and the compiler can often optimize out bounds checks (e.g. the compiler can determine that bounds can’t be exceeded when looping through an array in many instances since you are doing the bounds check in the loop condition anyway). A lot of safety can be done at compile time, like ensuring variables are initialized before use. With a simple set of rules, you can provide built-in reference types that guarantee no wild pointers, dangling pointers, or memory leaks without any run-time overhead (although there would be limits to their use, it could cover the bulk of it).

Wael December 24, 2015 1:49 AM


I don’t think secure, efficient, and simple are mutually exclusive

Not mutually exclusive, but have a dependency relationship. Increasing one comes at the cost of decreasing another. That has been the traditional empirical observation of current implementations.

A lot of safety can be done at compile time

Compile-time operations weren’t included in the above. I agree with you otherwise.

For a reference, Security vs efficiency was briefly discussed here

Gerard van Vooren December 24, 2015 2:26 AM

@ Anura,

I agree. The language Modula-2 is quite simple and easy to use, but most of all easy to read. It is much harder to sneak something into a Modula program than into C. Modula also invites you to write clear programs, where C is rather liberal. In my opinion they should indeed use case-independent or, better, lowercase keywords. The only question is whether they can position the language in an area that is occupied by C and C++. That depends on advertising and the ecosystem (libraries, tools, documentation), and they may find fierce competition with Rust, which has lots of “modern” (think complex) features.

Clive Robinson December 24, 2015 2:51 AM

Xmas & Air Gap Crossing Malware

Be careful what you wish for this year… It appears a South Asian company is recruiting spyware and application developers to increase its product range.

It already has a malicious Android app in the Google store, a Santa game, as well as air-gap-crossing desktop malware…

So along with laying off the third helping of Xmas Pud with heavy cream, take care with what you download from even official app stores this year and into the future.

Clive Robinson December 24, 2015 4:41 AM

@ Wael,

Hmm where do you find the time to find all these links…

With regards the “bed pan” it does kind of look like a prototype for those very low drag helmets used by down hill ski jumpers and indoor bike riders.

As for the stock of weapons, I’ve discovered a new one…

As is my custom around this time of year I get a tin of Chinese Phoenix rolls to dip into chocolate etc. –to ensure enough of a sugar hit to go direct to Type II without passing ER– and the 454g tin was recognisable by its inset round lid (kind of like a paint-tin fit).

Well I could not spot one, so I assumed that the manufacturer had gone for a more traditional sweet tin lid.

However on getting it home and removing the square lid, I was greeted by a round lid… TWO LIDs, both tight enough fitting to stop RF… And with just enough clearance to get “100 Ohm Foam” between the two to act as a broadband attenuator 😀

Come the dull week after Boxing day unless I go “sun bathing” (I kid you not it’s been around 16C and very sunny in London) I’ve a little project in mind for that box using capped N-type connectors 😉

As for the odd weather in the UK, “Global Warming” has been in most news programs. In Cumbria they have had two “two hundred year floods” in a month, and the Gov has had to trot out “Environment Agency” spokespersons to waffle on about how the Gov has got it all under control. Despite film footage of some kid’s teddy bear floating in two foot of turd-coloured water in the front room of a house, and other shots of a little poppet in her mum’s arms looking very sad.

I guess it will mean that we are going to get a foot or so of snow this Easter or some such, so time to stock up on canned goods, bottled water, instant coffee, milk powder and fuel for both the generator and camping stove 🙁

Thoth December 24, 2015 5:00 AM

@Clive Robinson
You should have tried chinese pineapple tarts if you intend to raise your blood sugar 😀 .

About tin cans, have you thought of using tea canisters ?

If you are doing RF proofing without needing to finish a bunch of diabetes bringing snacks, you can just buy some cheap tea canisters from some local chinese tea shops or get someone to buy them off ebay for you and you meet them in person to pay them the actual cash.

They come in different sizes and some of those can be small enough to fit into your pockets (talk about pocket size RF resistance).


Thoth December 24, 2015 5:14 AM

@Clive Robinson
If I have a two-lidded tin can (assuming tight lids) and I want to mount an external keypad and an LCD display with the electronics hosted inside the RF-resistant tin can, how should I go about RF-proofing the electronics while still allowing the keypad and screen to be mounted and linked to the electronics inside the RF-resistant environment?

Wael December 24, 2015 10:04 AM

@Clive Robinson,

Hmm where do you find the time to find all these links…

Doesn’t take me a long time. My entire post, links and all, took me less than 5 minutes. I just remember some keywords. It’s all about metadata 🙂

I kid you not it’s been around 16C and very sunny in London

I guess that’s warmer than San Diego!

I’ve a little project in mind for that box using capped N-type connectors 😉

Now you have me wondering! Project with or without the foam?

Nick P December 24, 2015 11:22 AM

@ Thoth

re PKI deployment

The troubles you describe plus what was in paper seem to suggest this market is ripe for Silicon Valley-style “disruption.” An easy-to-deploy, trustworthy solution might sell like hotcakes. Would be a high, initial investment getting it going, though. Lots of lawsuits to follow. So, probably best to operate that one in stealth mode as long as possible to get lawyer money together. And invest in sales over engineering as we both know. 🙂

re languages

Ada has always been its own language, compiling direct to assembler. The main compiler uses the GCC intermediate representation, as Anura said. There are commercial compilers that don’t. There’s everything from zero to minimal to significant runtime depending on what safety/features you want. Vendors of microkernel RTOSes often have Ada runtimes that sit right on the microkernel. So, it’s about as independent as a system language can get. It integrates with C well, out of necessity.

Haskell, while even safer in theory, is a whole different situation. It uses a large, less-portable runtime that’s written in C I believe. The compiler itself is quite complex to the point that certified compiler work focuses on ML. I’ve seen one port of a Haskell backend to Forth to run on Forth chips. So, it’s conceivable. Plus, there’s my de facto solution of using subset Ada or SPARK to do the runtime of a similarly-safe language plus any native code or libraries. Mix and match. It would be a lot of work, though, that needed experts in compilers & runtimes for functional languages.

Back to shit I can comprehend… 😉

@ Wael

Your categories miss the popular option of creating a new language that solves the problems. It’s not a new paradigm or just a mod to an existing language. Not usually. It’s close to Horizon 2 but benefits from ability to ditch fundamental problems in existing languages. Design problems rather than just syntax or whatever.

Modula-2R10 falls somewhere in that category or Horizon 2 in your classification: original was an alternative language for new projects while this revises it to aim for same goal. Fits partly into Horizon 1 of legacy code if we’re talking about porting. There’s a significant mismatch between C and Modula-2 that means you want someone who understands the codebase doing the porting. It won’t be automatic. However, they’re pretty close to each other in what they can do if you think of expressions, control flow, structuring, and so on. Going from C to Modula-2 is a gap rather than the gulf that going to Java or C# would be. That’s where I see adoption potential.

” I would recommend you list out a more granular set of requirements that achieve the “efficiency” and “simplicity” you noted. These two requirements, in your view, indirectly lead to a more robust language. For each requirement you list, you’ll need to state its direct influence on security as well as its relation, conflicts, and tradeoffs with other requirements.”

That’s a lot of work and we discussed it. I could simplify here a bit to show the extremes along with how to get the middle ground. The left side has the highest efficiency outside ASM. That would be SSA form, for its portability and optimization potential. The drawback to SSA is that it’s hard to read for sizeable programs and it’s not safe at all. The right-hand side is max safety in an imperative language. That would be Ada, for readability (to pros) and safety features covering all the common errors. The drawbacks to Ada center on complexity: hard to learn, parse, and generate efficient code for.

So, this leads to some guiding principles:

  1. Needs to be more readable than SSA or C.
  2. Needs to have safety and “programming in the large” features of Ada where possible.
  3. Needs to avoid Ada’s compilation difficulties, esp parsing. Side-effect of boosted productivity & QA time if compile-test cycles are fast.
  4. Needs to aim for efficiency of SSA or C.
  5. Should be easy to learn to get many eyes in FOSS and cheaper QA in proprietary. What? The latter is a realistic requirement haha.

Modula-2 meets 1, 3, 4, and 5 without any further modification. The original language had some of Ada’s safety features, with more that are easy to add without trouble. Design-by-contract or SPARK-style proving would be difficult but is conceivable because No. 3 is solved to an extreme. So, there’s much more movement in the direction of No. 2 than asm, SSA, or C offer. By criteria 1-5, Modula-2 is already stronger than C for robust, productive, and efficient system programming. An update can make it more so.

“The arguments are mostly rational, but also superficial.”

What did you find superficial outside the capitalization given 1-5 above? The safety features they mentioned eliminated problems in practice. The OOP subset lets people do that if they want while eliminating complexity found in C++ compilers. Just two examples I recall that weren’t superficial.

@ Anura
(@ Wael)

Good points on some safety kicking in without loss of performance. The general rule is there’s going to be a performance hit. The high end, seen when retrofitting safety onto C, is around 40-70%. That high end probably has more to do with C’s design than with safe programming in general. Plus, many programs are I/O-bound, where some CPU overhead is invisible in the overall scheme of things if the language is otherwise efficient. The last thing to remember for languages like Modula-2 is that Unsafe modules can be used for critical-path functions if the safety checks make performance unacceptable. I could see that happening in video decoding or a JIT.

Far as the balance, Go is probably our best evidence here as people keep migrating to it from Python and Java for speed boosts with success. I’ve seen huge speed boosts. Nobody in application space is complaining about the performance of a GC’d, Wirth-style language. Modula-2R10 should be faster outside compiler optimizations because it’s the simpler, non-GC style of Wirth language. Semantics-aware analysis to eliminate checks would be the next step as they do in Ada and the Midori research I’ve been reading. The former demonstrated acceptable tradeoffs in practice but the latter is showing it can be pushed further.

Note: This doesn’t even count the potential of using static analysis tools a la Astree or SPARK to justify eliminating checks in modules by showing the error can’t occur.

@ Gerard

“The only question is whether they can position the language in an area that is occupied with C and C++. That question depends on advertising and eco system (libraries, tools, documentation), and they may find a fierce competition with Rust which has lots of “modern” (think complex) features.”

Good point. Momentum killed everything else off so far, with Go and Rust being the few exceptions gaining ground due to the momentum of who backs them. So, outside of education or a niche that favors quality, it might be an uphill battle. That Astrobe is still selling Oberon for embedded use, and Blackbox is still doing Component Pascal (really Oberon-ish) in Russia, gives a glimmer of hope.

Thoth December 24, 2015 4:02 PM

@Nick P
re: Silicon Valley style disruption
That’s what most of those “Security Products” companies do. Many of them create a vague webpage with a ton of misleading information, misused jargon, and a bunch of FIPS or CC certification badges and claims. Note that there is a play on words between “Validated for FIPS/CC” and “Certified for FIPS/CC”. The former has only been validated against the standard but may not be certified to it, while the latter is certified; and some of them don’t state which security level of the standard they meet and try to worm their way through. Most customers would see the “Badge of Honour” and buy, until a difficult customer like me comes along poking at the details.

Take a classic example: Blackberry Secusmart’s SecuVOICE, a rack-mounted hardware voice-encryption server which “contains 64 pieces of Secusmart smartcard chips to provide voice encryption for 64 concurrent users rated for Govt use”. Nice marketing… 64 smartcard chips that become an HSM, whether literally or otherwise. It isn’t wrong by any means, but hey, you can get away with it you know 🙂 and dear Angela Merkel loves the Secusmart guys and they are nearly honoured as national heroes of Germany. I have nothing against them or the country, but what has felt weird all this while is how easy it is to get away with “Security Product marketing” by waving your hands around and being friends with everyone, and how little consequence there is if things don’t work out right.

Look at the amount of jargon and nonsense in the Security market and we can tell there is no such thing as “not possible to sell”. I can get a smartcard and write a meta binary language for customers to program their own crypto algorithms, and once NSA scripts a meta algo for a Type 1 crypto algo and loads it into a smartcard… behold… a Type 1 TOP SECRET/SCI capable device… lol… By the way, the meta-language custom algo is an experiment I am intending to do once I have the time. Kind of a “software FPGA” for embedded non-malleable environments where you can’t dynamically load code (just for the fun of it). And… again… Secusmart also hints at introducing customer-customizable algos for their Secusmart smartcard HSM products, which I suspect would lean along the line of meta languages, due to the fact that most smartcards do not support changing or customizing their crypto algo, let alone dynamically loading and using a custom algo, which is what they hint at doing. It should have sounded alarms in the heads of those who are in the field, but they have the charm to make the sales pitch 😉 .

If you set up a vague product page and force customers to phone you up to enquire about the products, while you run background checks and blacklists on those calls, with NDAs, you can effectively weed out possible competitors doing their legal-war-mongering scouting. Security as a sales business for the big bucks isn’t over. You just need lots of friends high up there, lots of parties behind closed doors, and very good sales pitches, just like Crypto AG a.k.a. Hagelin, which should have closed shop after the Hagelin cipher machine incident but survived and is doing so well up to this day, with a continued presence in the Middle East!!!

Security like any business is all about the hype and smoke bombs 🙂 .

Clive Robinson December 24, 2015 5:04 PM

@ Nick P,

Back to shit I can comprehend… 😉

You might want to have a flick through this,

Or read these two,

The trick is understanding what happens on each pass through. Modern compilers try to do everything in just one or two heavily overloaded passes, which is confusing. Whereas logically there could be a hundred or so simple passes when you unravel the complexity.

Figureitout December 25, 2015 2:39 AM

Markus Ottela
I’ll move the padding before queue loading, so padding won’t take time
–Probably smart, isolate the protocol as much as possible, and definitely constant length plaintext (easier said than done). I just need to dig in the code more and get it flowing in my head so I can say more meaningful things, which I will when I can build (money, time, and brain limits killing me as always). Looks like you’ve got some nice debug tests set up to see this too. If you can see just by eye that a human is transmitting, then there’s still work to be done probably…

If you can’t ensure the physical security or anonymity of your physical location
–I’m working on an addition to the 1st part (guns and knives are good here too, but that’s a little… messy :p), and the 2nd part is OPSEC w/ experience (you need someone trying to find you..). Based on my experiences, there won’t be much evidence of tampering unless you’re really looking for it, or you get under an attacker’s skin and get them to act out of carnal emotions (like kicking your dog b/c it barked at them, or leaving other evidence behind).

All I’m saying is it’s a limitation if you’re anchored to one spot w/ TFC, as setting up in a coffee shop or library w/ 3 laptops is a bit cumbersome and attracts unwanted attention. You can counter saying parts become less verifiable the smaller they get. Personally I enjoy having my sh*t in one spot too though, w/ AC power so I don’t need to keep checking battery life, not constantly moving, and most systems do that.

Power supplies can be built relatively easily from many sources; it just takes a minor bit of hacking. Display units are insecure no matter what, as they display the plaintext; you need an external shield protecting that info blasting out behind you. Keyboards can be checked pretty easily except for the firmware in the little controller… get paranoid, whip out the snippers, cut off the connector and make a new one from scratch… and feel the cable for any irregularities, like checking a body for cancer lumps…

And netbooks now, w/ Windows 8 or Windows 10, have some gnarly f*cky hacked-up implementations; can’t even encrypt them w/ open source or load in Linux on some…

This deviates from standard practices. Exercise caution!
–I know, it won’t be the most secure, especially starting off. Won’t be hella crypto running on 8-bit MCUs (I’m shocked even AES-128 works on it…); being relatively secure from rootkits bypassing any crypto is more important to me. I would be happy w/ some of the newer versions of “KeeLoq”, w/ more secured rolling codes; but it’s closed source. Having each message authenticated would be nice too. I can add in additional samples or some other “entropy” to a max of 32-byte packets from another clever on-board source (timer jitter). But the message is the same every single time; it doesn’t matter if it’s encrypted, I just need to keep the integrity of the RF (going wired, a physical attacker could trace the wire straight to your logger and tamper w/ logs much easier; probably best to have both though, to protect from RF issues). But having crypto is always nice, and maybe someone will really want/need it, so better to have it ready. Making sure activations get logged and remain untampered is the most important; if someone finds the board, any blow-joe can erase or read the EEPROM, so it needs to be hidden (placed while not under surveillance).

So from high level standpoint I just want receiver to receive and not reply (to malicious pentesters), and I want some kind of good spread spectrum implementation, using pseudo-random freqs, not pre-set ones…All about making that initial attacker approach a mega fail, no recon, or anti-recon (seeking it out).

Describing the chip/protocol: the RF24 library is a really nice implementation a lot of people use, and it matches the datasheet nicely. You really have to read most of it to get a feel for it (like any system). So that library, the nRF24L01+ datasheet, and this is nice too on the ShockBurst protocol: I wouldn’t read it unless you really want to know, just more information.

They say you can only receive or transmit at any one time (via FIFO queues you can switch fast, but not do both at the same time); I still wonder about info-leakage in the protocol.

Do we continue to put some trust in RPi ?
–It’s not completely open, and I don’t know if you can inject malware somewhere inside the chip w/o putting it in programming mode, so… depends on your skill level and what tools you have to verify. But seeing rejection of malware is at least good from an ideology standpoint. Be better to identify the scum.

Wael December 25, 2015 12:07 PM

@Nick P,

Back to shit I can comprehend… 😉

Unfortunately, that’s the “stuff” I don’t comprehend. At least we can complement each other.

What did you find superficial outside the capitalization given 1-5 above?

What I find superficial is the idea of designing yet another general purpose programming language that will eventually suffer from the same “weaknesses” C/C++ have. The OO support is minimal and not complete. They still have pointers, I think. I’m not sure how much success or popularity they’ll achieve.

Don’t you think there should be a close relationship between a programming language and the operating system it sits on? What good is it to use Modula-2 when the OS is written in “C”?

Nick P December 25, 2015 2:10 PM

@ Thoth

“contains 64 pieces of Secusmart smartcard chips to provide voice encryption for 64 concurrent users rated for Govt use”

Hey, that’s the idea I posted here years ago in a minor form. I said we could get high-assurance SSL, etc. on the cheap if we put thousands of smartcards in racks with an untrusted load-balancer splitting the connections across them. The chips could be EAL6+ for security-critical hardware and software, with plenty of ease vs Intel/Windows/UNIX. Nice to see it play out in industry.

“) and dear Angela Merkel loves the Secusmart guys and they are nearly honoured as national heroes of Germany. I have nothing against them or the country but what I feel weird all these while is how easy it is to get away with “Security Product marketing” by waving your hands around and being friends with everyone and there is so little consequence if things don’t work out right.”

Yeah, they go pretty far with it. Extra funny given it runs on a platform Snowden leaks say NSA can bypass. Maybe they just did that for the intellectual challenge to show their secure methods work regardless of where applied. Yeah, that’s what they did. 🙂

“once NSA scripts a meta algo for a Type 1 crypto algo and loads into a smartcard.. behold.. a Type 1 TOP SECRET/SCI capable”

I get your point but it doesn’t apply to Type 1. That certifies the whole product in a certain configuration with pentests. The difficulties the secure products impose are why many companies are bragging about using Blackberry etc to “reduce or eliminate use of Type 1 products.” What you’re describing happens in EAL4-or-below, FIPS 140-2, etc all the time, though.

“you run background checks and blacklists on those calls with NDAs which you can effectively weed out possible competitors doing their legal war mongering scouting. Security as a sales business for the big bucks isn’t over.”

Yeah, that problem is still big. Despite good engineering, I repeatedly called out Green Hills for that crap. Their INTEGRITY Global Security page is a perfect example of it. I can’t tell what the hell the details or level of security are on anything they offer. At least there are salespeople standing by to assure me it’s worth a lot of money.

@ Clive Robinson

I have the first but might not have the others. Those types of easy-to-follow links are priceless in this field. Thanks for those. I think you’re missing what I meant, though. Functional programming, and especially its compilers, is quite a lot more complex than the tricks you use in a simple Scheme compiler. Especially if one wants a verifiable language or toolchain.

Here are two links for you to see what I’m talking about. The first is a nice summary of all the concepts in the field that might come into play. There are certainly much better works available for learning, but the reference nicely shows what mastery will have to understand and integrate.

The second is more important as it’s the Dragon Book of functional compilers. Haskell people on HN say this old work is still worth reading for all the stuff it teaches, even if a good chunk of it has been superseded. Skimming through it from an imperative or non-mathematical background gives me layers and layers of “Huh!?” Anyway, I’m keeping it for future projects by myself or others where people need a collection of tactics for certified, functional compilation. You should skim it and add it to your collection as it’s enlightening, esp if you know some FP. I also have a feeling it will come in handy on the non-Turing language side of security, given many such languages boil down to a System F (non-Turing) with recursion added. I’m guessing it would be straightforward to make the same thing work without recursion for a lot of problems.

Note: Work like the above always makes me feel functional programming is more scientific and engineered than imperative usually is. You can argue, even easily prove, many claims about programs or structuring that are guesswork in imperative styles. Although I went imperative, I keep feeling the whole field went in the wrong direction. More so, that its best stuff (eg Cleanroom methodology, Praxis Correct by Construction) would’ve been better in whatever ML or Haskell would look like today with a C/C++ level of investment.

Nick P December 25, 2015 2:30 PM

@ Wael

“The OO support is minimal and not complete.”

That assumes OOP is the right way to do things. Lots of good software is written without it and is easier to understand. Likewise, there’s all kinds of good software written with OOP that they claim is easier to understand. I’d love to see some science decide the issue one way or another, but meanwhile there are at least two styles to cater to.

An alternative with OOP style plus safety/performance is the Eiffel method and language. They popularized Design-by-Contract & safe concurrency (i.e. SCOOP model). Modula-3 & other Wirth languages show Modula-2 could be extended for OOP but generics are more popular among target group.

“What I find superficial is the idea of designing yet another general purpose programming language that will eventually suffer from the same “weaknesses” C/C++ have… They still have pointers”

In Modula languages, the things you do are robust and safe by default with acceptable performance. You have to go out of your way to do unsafe stuff, even explicitly marking it as Unsafe. Anyone doing Modula idiomatically will have many fewer problems and a faster development pace than idiomatic C or C++. Plus it’s easier for compiler writers to extend or handle due to simplicity.

Only worry I have is the standard library. Approaches to concurrency, string handling, etc need to be decided well ahead of time with safe, effective defaults. Those problems of C/C++, where everyone reinvents it poorly, might kick in if one is not careful. A new language lets one get a good start, though, whereas older ones must keep baggage.

“Don’t you think there should be a close relationship between a programming language and the operating system it sits on? What good is it to use Modula-2 when the OS is written in “C”? ”

Ever seen Perlix? It’s a basic UNIX user-land with shell and utilities written in Perl. They work better, are a bit more robust (individually haha), easier to modify, and so on. That’s because the language is better suited to those things than C. The FFI lets it interoperate with C libraries or the OS where necessary. Same with Modula-2 languages, where the OS might be C but apps can benefit from safety. There is residual risk in the interface: even Ada had a vulnerability that came from the abstraction gap at the FFI level. So we have to watch out for that, but it’s still interesting to note it’s always C dragging down the safety of what’s calling it rather than vice versa. A good enough reason for me to ditch it.

“I’m not sure how much success or popularity they’ll achieve. ”

Probably minimal. It would still be worthwhile if enough community got together to maintain/optimize the compiler, IDE’s, key libraries, and so on. Safe-FFI techniques, ZeroMQ-style component models, or semi-automatic translation from C to Modula-2 can do the rest. Truth be told, we should’ve matured that kind of stuff by now anyway as we keep needing it every time a language gets popular.

You could say I hope it becomes another Ada, Eiffel, Common LISP, Ocaml, or Haskell. It’s there for those wise enough to use it for its advantages while the rest of the market ignores it & suffers greatly for that choice. Another secret weapon for elite shops differentiating on pace, quality or security. 🙂

Clive Robinson December 25, 2015 6:23 PM

@ Wael,

The OO support is minimal and not complete.

Is OOP support really that important these days?

It’s already a programming paradigm that’s gone beyond its Best Before Date. As I’ve indicated in the past, “the future is parallel”, because the single CPU/core stopped following Moore’s Law some time ago, and the overly complex CISC tricks are not cutting it any more.

Whilst OOP can be extended, its imperative roots are holding it back; process-oriented programming is the way of the not-too-distant future. Functional programming has some significant advantages when it comes to parallel computing.

Hopefully I’ll still be around in ten to twenty years to see how it pans out.

@ Nick P,

I think you’re missing what I meant, though. Functional programming and especially compilers is quite a lot more complex than the tricks you use in a simple scheme compiler.

I don’t know what your learning style is, but few can go from quite esoteric theory to well-founded code.

As my father once pointed out to me nearly half a century ago, “If you are going to build anything well you have to get the foundations right, and to do that you first have to know how to dig the hole they go in, and that’s a lot harder than it looks”.

As for lambda calculus, it’s one of several ways you can get functional programming to work. Another problem is that functional programming is increasingly problematic as you go down the stack towards the CPU and below, which is traditionally the domain of the OS. Another issue is that even functionally correct and verified code is of little use against “bubbling up attacks” from compromised hardware, and it’s an issue that until recently had slipped out of sight for the majority of people in academia and other places. That TAO catalog has been a quite rude awakening for some, and many are trying to play “catch up with the 60’s”, only to find that you can no longer “verify” or “trust” COTS suppliers… Oops… It’s why I keep talking about “mitigation” not “verification” as the solution to “bubbling up attacks”.

Don’t get me wrong, I’m not saying tool chains etc. should not be verified; they should. It’s just that verification is far from sufficient. When you look at the computing stack, the tool chains occupy a little bit in the center ground. In the same way they cannot protect against malicious hardware, they cannot protect against malicious or incompetent programmers, management, legislators and politicians.

Nick P December 25, 2015 8:38 PM

@ Clive Robinson

“I don’t know what your learning style is, but few can go from quite esoteric theory to well-founded code.”

“As for lambda calculus, it’s one of several ways you can get functional programming to work.”

I learn the piece-by-piece, useful-stuff-first method like in the examples you linked. My goal isn’t to learn anything directly from theory. As I said, the books with theory are used as a reference to support advanced work of making the compiler.

Far as lambda calculus, the major ones derive from different versions of it with sophisticated type systems. For the strongly typed languages, the type systems and many compiler techniques are grounded in theory and need it to work. The imperative stuff I can just throw tactics at, often learned in isolation w/ little to no theory. Not so for the functional languages, as they’re highly mathematical in nature. Anyone who doubts that can try to explain monads, their sound implementation, and correctness-preserving transformations to amateur functional programmers without confusing them. I already know that’s not going to work out from reading so many comments (by FPers) on FP sites. And that’s just one sub-topic among many that might crop up.

So, the point of referencing landmark works in the field is only to support the work. I’d recommend each topic start with tutorials and more plain-English material that gradually builds one thing on or alongside another, paying attention to foundations as you stated. It’s just that you’ll need plenty of theory and complex techniques for an FP compiler if you want it to work correctly and fast, hence the supporting works. How much, I’m still not sure, and all of it is currently out of scope for me past skimming and collecting until I can at least use a functional language as simple as ML. 😉

Note: Things like GHC are also said to be poorly documented with lots of magic in them to outsiders. So, there’s a documentation benefit to tricks it used ending up in books.

“Another issue is that even functionaly correct and verified code is of little use against “bubbling up attacks” from compromised hardware”

True, true. There’s lots of ways to handle that, mostly in logic that corrects itself + Correct-by-Construction HW methods. They’re already most of the way there in EDA tooling and methodology given the complexity of HW vs number of errata found. The main failure I see is, outside elite shops on cutting edge nodes, the HW vendors often don’t handle local failure post-manufacturing as well as they should. I’m sure there’s tricks yet to be discovered here but our old trick of voters works best even if inefficient. Recent learnings taught me voter logic itself can be implemented in self-correcting logic gates rather than triplicated. Research into decentralized voting hardware is still worth pursuing.

“It’s why I keep talking about “mitigation” not “verification” as the solution to “bubbling up attacks”.”

Best to do both in parallel. That’s what NASA always did and Galois is doing with Copilot DSL for embedded. They assume the failures will happen somehow. They come up with how they want to react ahead of time. They then use a combo of robust implementation plus monitors and recovery techniques. Do it on every component. Works wonders.

So, I say we keep doing both. Let’s not forget recovery-oriented architectures, where they just keep testing for or assuming failures, with availability-preserving restarts. Lots of restart- and reset-based approaches catch lots of things. Might be worthwhile to do, similar to your Prison or Tilera’s TILE chips, a grid-like approach that’s somewhat self-organizing (leader/master election) and can work around bad nodes on boot. Then part or the whole ASIC can be reset to activate that process when problems are bubbling up, or even before, to detect them.

Buck December 25, 2015 10:24 PM

@Clive (et al.)

I have absolutely no desire to manufacture yet another general-purpose tool-building factory factory factory… Parallel programming is the future, you say, yet it’s quite difficult to grasp? I can easily enough understand how to parallelize stack recursion for a particular well-defined problem set… I suppose my next question should be something about how to effectively learn/teach this approach at a systems level..?

Tangentially related, what do you foresee as being the near-future of important encryption? End-to-end everywhere, homomorphic distributed, onion-layered, quantum-assisted one-time pads, some sort of Shamir’s secret sharing, something else entirely? Or, unlike the inevitable distribution of processes, will the proper cryptographic solution depend solely on the specific situation at hand..?

Clive Robinson December 26, 2015 8:56 AM

@ Buck,

what do you foresee as being the near-future of important encryption?

I know this will sound odd but I view encryption the way I view insulation around electrical circuits.

The idea behind insulation is two fold, firstly to stop power leaking away, the second is to stop any harm –from power leaking away– happening.

If you replace “power” in the preceding sentence with “information” and “insulation” with “encryption”, the similarity becomes clear.

The problem with “end to end” encryption is how you define where the end is… That is, it does not matter if you are using the most perfect encryption possible in the universe; if an attacker can get to the points beyond what it protects, then it is just a waste of resources deployed as “window dressing”.

Thus with these “end run attacks”, not only can attackers see what you can see, but worse, they can get between you and the encryption end point via driver shims etc. and change what you type and see.

There are ways to deal with this, as I’ve explained for around two decades with securing “transactions” in financial systems. But importantly it hinges on two things,

1, A reliable side channel.
2, The authentication must work through the user.

For instance, if you use paper and pencil and an OTP to encrypt a message and type it into a secure messaging app, it does not matter if the attacker can get at the raw screen and keyboard information; they don’t have the OTP, so what they see is garbage. The fact that when you receive a message you then take the ciphertext away and write it out on another piece of paper takes it beyond “electronic eavesdropping in the channel”. The OTP is the reliable side channel, and you applying it to the ciphertext in your head provides the first step of the authentication. Providing you don’t show the OTP or the paper with the decrypted text to a camera, or read the deciphered message out loud for a microphone, then the chances are the attacker has been foiled in their attempts (there are other necessary steps however).

Whilst people have gone, and still do go, to these lengths, it is not exactly user friendly. Thus most people will not bother, which is a problem, because anything less is not secure, as the nation-state IC is more than well aware…

The two people who most notably let the cat out of the bag over this were Harry Hinsley and Peter Wright with their books back in the 1980’s. The fact that few took note since should tell you something about human nature.

Even with the Ed Snowden document trove effectively bludgeoning people over the head with more explicit details, the human proclivity to close the eyes, cover the ears and loudly say “Nagh nagh nagh not listening” repeatedly, rather than take the facts on board and act on them, should make the point even clearer: “humans are their own worst enemy”… Thus virtually anything they do involving encryption will be “window dressing” not “action this day”.

So the first step is not to fix the technology but the humans…

Wael December 26, 2015 8:57 AM

@Clive Robinson, Nick P, all,

Is OOP support really that important these days?

How else would one maintain code composed of several million lines? It’s one of the main purposes of OOP. For a general-purpose language, OO is currently necessary unless there is a new paradigm which achieves the same plus security, of course. Seems you agree with “Horizon-3”.

“future is parallel”

Parallel is likely, and so is heterogeneous with the use of specialized CPUs such as DSP and GPU.

Hopefully I’ll still be around in ten to twenty years to see how it pans out.

Or be part of the change 🙂

Nick P December 26, 2015 9:41 AM

@ Wael

“How else would one maintain code composed of several millions lines of code? ”

The way people maintained hundreds of thousands of lines of code before OOP became a big thing: information hiding with modules, procedures/functions, interfaces, and complex data types. Layered, esp hierarchical design and decomposition help too.

“Seems you agree with “Horizon-3″.”

That’s what we said about OOP in the 90’s. Now it’s technically Horizon 1 although its users still all disagree about how to use it to achieve stated benefits. The only things they agree on, abstract concepts like information hiding or reuse, already existed in prior paradigms. So, I still consider the full promise of OOP to be Horizon-3.

“Parallel is likely, and so is heterogeneous with the use of specialized CPUs such as DSP and GPU. ”

For parallel, we’re seeing a lot of uptake in functional programming again. Actually, it seems to be better at that and OOP’s goals than imperative. Back to imperative, there’s been many attempts at parallel languages. I think ParaSail might be most worthy of copying.

re OOP approach

Turns out, my poor memory did me in again. A guy named Mayson showed up in another discussion on Oberon and Modula-2R10. Told me how he had to rescue a distributed OS in the last 6 months of a 5-6 year plan that had basically no working code. Got it done with Modula-2 using a compiler from Logitech, who it turns out was cranking out good dev tools back then. Paper is on the backlog: keeping it for historical reasons.

Anyway, he pointed out that the best one was Blackbox’s Component Pascal. That was an industrialized, improved version of Oberon-2. It had the readability, safety, compilation speed, and efficiency of the Modula/Oberon lines. It also had OOP, automated GUI’s, and a component architecture for plug-and-play style development. Developed a reputation for “if it compiles, it probably will work.” Great presentation in slide form on it here. So, there’s the start if one wanted an OOP, GUI answer to Modula style.

Note: It wasn’t a huge commercial success. So, they open-sourced it. However, it attracts a certain zeal in comments on forums that I rarely see. Nothing but praise from its users. Had significant uptake in Russia, too, on clean-slate projects. So, adopted by C crowd or not, it’s probably worth looking into, playing with, improving. To me, it’s like an industrial Pascal meets Visual Basic.

@ Wael, Clive

re Humor

Discussion about writing compilers popped up on Hacker News. One person asked how they’d do a Visual Basic for Applications compiler. A response said something like “I can’t wait to see what comments this gets.” I was happy to oblige but had to avoid tripping over moderation. Result was better than I expected. Here was the exchange:

Nick: “You have to start with the top-tier talent that delivers enterprise apps in Visual Basic 6. They need to know VBA language inside and out. They should implement templates for what every primitive does in terms of stacks, registers, and/or control flow (esp jumps). This includes common representations, type system rules, binary interfaces… you name it. Everything one needs to know to parse and implement VBA files along with reference implementations in VBA.

Then, you pay a LISP consultancy to implement that as a DSL in Racket or Common LISP that runs during development and outputs portable C for production w/ frameworks like WxWidgets and/or NPRS runtime. They should be able to throw the whole thing together in under a year. :)”

Critic: “I am not quite sure what you are making a parody of?”

Nick: “Domain experts, executable spec both parties understand, source-to-source compiler using DSL’s, and targeting C to reuse backends? Is that a parody or a recipe for VBA compiler small team can handle?”

Neutral: “It’s Poe’s law for software.”

(I DuckDuckGo’d Poe’s Law.)

Nick: “Lmao. That’s perfect.”

(Silence from critics.)

Nick P December 26, 2015 10:00 AM

@ Buck

That is a classic. I love it. Yet the Wirth line of things is the polar opposite of that situation: add just enough features to simple languages to express complex problems without getting bogged down. The only industrial version of it is Component Pascal. What little I’ve read on Eiffel suggests they similarly try to balance things, but with more complex tools and language. Both are still better than Java plus its frameworks and classes.

Most promising development, though, is the addition of functional programming features to existing imperative languages. Functional programming already has ways to make shape-shifting hammers instead of factory, factory, factories. It will probably be Transformers-style, nano-factories by the time it hits imperative side. That still gives us do-everything customization with only one layer of abstraction and a factory small enough to fit into one page. 🙂

Nick P December 26, 2015 11:44 AM

@ Clive

Another Jungian synchronicity at work, it seems. As we’re discussing gradual works like your link vs compendiums like Simon Peyton Jones’, someone posts a gradual FP-compilation text with evolving source code examples by… Simon Peyton Jones. Book here. Probably a nice start on the way to the other one. 🙂

Note: On mobile, so haven’t read it yet. Hence the word “probably”.

Clive Robinson December 26, 2015 11:36 PM

@ Nick P,

Another Jungian synchronicity at work it seems.

Odd you should mention that…

I’ve been keeping my eye on one or two other sites and this blog for a couple of years. Hacker News in particular shows quite a high correlation with subjects in the Friday Squid. Sometimes HN leads sometimes FS leads, so it’s not a game of “follow the leader” as it often is with ITSec and more general News sites.

Of course now I’ve mentioned it, it will all go to pot 😉

Nick P December 27, 2015 11:36 AM

@ Clive Robinson

The other thing it does is maintain, on many threads, the high quality of discussion and diverse array of commenters we saw here years ago. Hence, me spending more time there than other places.

