Jacob October 7, 2016 5:10 PM

This has some major implications:

There are primes and there are trapdoored primes. One can never tell which one is which, unless one is privy to the generation methodology.

There are many standards in actual use that use primes without indicating how they were generated.

A research group showed how, with a modest investment in CPU time, they could carry out a 1024-bit discrete logarithm computation.

This is not as bad as the Dual-EC, but it is close.

Daniel October 7, 2016 5:37 PM

From the above link:

If you run a server, use elliptic-curve cryptography or primes of at least 2048 bits.
If you are a developer or standards committee member, use verifiable randomness to generate any fixed cryptographic parameters, and publicly document your seeds.

And if you are an end user, pray to the God of your choice.
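The "verifiable randomness, documented seeds" advice quoted above can be made concrete: derive parameter candidates deterministically from a published seed so anyone can re-run the search. A toy sketch only — the seed string, the derivation scheme, and the deliberately weak Fermat check are all made up for illustration; real standards specify the exact derivation and use proper Miller-Rabin testing:

```python
import hashlib

def candidate_from_seed(seed: str, bits: int, counter: int) -> int:
    """Deterministically expand a public seed into a candidate of the
    requested size, so anyone can re-derive the same parameter."""
    out = b""
    i = 0
    while len(out) * 8 < bits:
        out += hashlib.sha256(f"{seed}|{counter}|{i}".encode()).digest()
        i += 1
    n = int.from_bytes(out[:(bits + 7) // 8], "big")
    n |= (1 << (bits - 1)) | 1   # force full bit length and oddness
    return n

def first_prime_from_seed(seed: str, bits: int) -> tuple[int, int]:
    """Walk counters until a candidate passes a (weak, illustrative)
    Fermat check; return (prime, counter) so the search is auditable."""
    counter = 0
    while True:
        n = candidate_from_seed(seed, bits, counter)
        if all(pow(a, n - 1, n) == 1 for a in (2, 3, 5, 7, 11)):
            return n, counter
        counter += 1

p, c = first_prime_from_seed("my-protocol-v1 2016-10-07", 256)
print(c, p.bit_length())   # publish both the seed and the counter
```

The point is auditability: given the seed and counter, anyone can verify the parameter was not cherry-picked from a trapdoored family.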

Tatütata October 7, 2016 8:29 PM

Re: Organizational Doxing and Disinformation

A real-life example from today’s Graun:

Over the past four months, websites including media outlets and WikiLeaks have widely distributed information stolen not just from the campaigns of US Democrats but of the World Anti-Doping Agency (Wada) and of the ruling party of the Turkish government.

The Wada hack was perceived to have been launched in revenge against whistleblowing athletes who revealed corruption among Russian anti-doping officials. An internal investigation by Wada itself this week found that the leaked information had been partially falsified before it was distributed.

AlanS October 7, 2016 10:45 PM

Now Theresa May has graduated from Home Secretary to British PM, there’s a whole lot more to worry about than the Snoopers’ Charter and the scrapping of the Human Rights Act. This week, at her party’s annual conference, her party veered further to the right, spluttering out all sorts of dangerous xenophobic nonsense. And the PM renewed her attack on the Human Rights Act by attacking activist left wing human rights lawyers who harangue and harass Britain’s armed forces. Onward to hard Brexit and the Union coming apart.

Finl October 7, 2016 11:00 PM

If AMD had something on their side, it was the lack of vPro backdoors. Now they have lost even that last advantage of theirs:

AMD’s Pro chips are comparable to Intel’s vPro chips, which are popular in business desktops. The Pro chips have remote management and security features based on the DASH (Desktop and Mobile Architecture for System Hardware) standard, which is widely used in servers. DASH shares many features with vPro, such as the ability to wipe out or shut down remote PCs that may have been stolen. But DASH isn’t widely used yet, with Intel’s vPro dominating the business PC market.

So now every CPU on the market will have a full range of NSA-approved backdoors like Intel Identity “Protection” and ME, or PSP and similar “features”, and AMD believes it will be able to compete with the better Intel offerings by establishing similar backdoors.

If someone manages to produce a CPU equivalent to a Celeron or an A4 without any backdoors and even put it in a nice laptop, it will become an instant hit.

briny October 7, 2016 11:24 PM


Thank you! Useful here as well. Very nice chip, but I’ve always liked TI’s work on the embedded side.

J. October 8, 2016 1:24 AM

“have evolved spectral tuning”

So the squid are not really color-blind, they are just limited to seeing one color at a time.

Wesley Parish October 8, 2016 2:30 AM


What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution.

Wouldn’t this constitute a conflict of interest with the way the NSA, FBI and whatnot plan to make use of Zero-Days, etc? (Mind you, when more than one of the Grand Panjandrums of US State Security can declare that there is a conflict of interest between US State Security and individual citizens’ privacy ergo security and nobody notices, I don’t think that anybody’s noticed this either …)

ab praeceptis October 8, 2016 3:52 AM


Indeed, and not at all funny: many if not most (large) primes used in crypto are only “quite probably” primes. The problem being that, to be certain, one basically needs to do what the opponent would do, namely check extensively and fully for primality.

On the other hand, that’s less of a problem than it might seem, and for most purposes a “very likely prime” is good enough, because to exploit a number that is not actually prime the opponent would have to fully and extensively search somewhat less than 1/2 of the space for any given prime(?) number, which again would mean that our prime(?) had already served its purpose.

But still, as e.g. Bernstein/Lange demonstrated, a good level of primality testing is required and is unfortunately often not done. There are quite a few out there who limit themselves to rather superficial checking.

It is therefore reasonable to assume that most of the “cyber protection agencies”, some of which (e.g. Germany’s) even openly state that they also want to crack and eavesdrop, will build or already have systems with both large lookup tables and very elaborate primality checking. I guess that their success rate will be quite shocking.
(Note on lookup tables: this is indeed feasible because the vast majority of primes(?) used in crypto fall within certain ranges.)

What shocked me personally most is that the (at least open source) crypto community doesn’t seem to care too much about a widely implemented good level of primality testing.
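For concreteness, the “good level of primality testing” at issue here is usually iterated Miller-Rabin. A minimal sketch — the round count and the small-prime pre-sieve are illustrative choices, not any particular library’s defaults:

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality test: a composite n survives one round
    with probability at most 1/4, so 40 rounds leave the chance of a
    false 'prime' below 2**-80."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness: n is definitely composite
    return True            # "very likely prime", in the sense above

print(miller_rabin(2**127 - 1))   # Mersenne prime -> True
print(miller_rabin(2**127 + 1))   # divisible by 3 -> False
```

The “superficial checking” complaint above corresponds to running too few rounds, or only trial division, on attacker-supplied numbers.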

Grauhut October 8, 2016 4:09 AM

Let the games begin! 🙂

“Russian server co. head on DNC hack:
‘No idea’ why FBI still has not contacted us

Published time: 2 Oct, 2016 15:17

Blaming Russia because servers are from here is “absurd,” Vladimir Fomenko, owner of the Russian server company implicated in the DNC hack, told RT adding that they are ready to help any special service in investigating the attack.”


October 07, 2016

Joint Statement from the Department of Homeland Security
and Office of the Director of National Intelligence
on Election Security

The U.S. Intelligence Community (USIC) is confident that the Russian Government directed the recent compromises of e-mails from US persons and institutions, including from US political organizations. The recent disclosures of alleged hacked e-mails on sites like WikiLeaks and by the Guccifer 2.0 online persona are consistent with the methods and motivations of Russian-directed efforts. These thefts and disclosures are intended to interfere with the US election process.”

“Cory Gardner

I plan to introduce legislation mandating the Administration sanction Russia’s bad actors who are responsible for malicious cyber activities

14:38 – 7 Oct 2016”

Sancho_P October 8, 2016 8:06 AM


Another libel, now to distract from Trump’s tape?
I’m a bit confused, what’s the agenda, or is it plain stupidity?
Who asks for such a statement?

Beltway Bob October 8, 2016 8:39 AM

@Grauhut, In other news, the U.S. intelligence community confirms that Gaddafy gave his troops Viagra (because everybody knows raping lots o’ girls is how you win a civil war.)

ODV October 8, 2016 10:11 AM

Dutch MEP proposes export controls on surveillance software used to facilitate torture. ‘Youtube’ censors her.

On whose orders?

Goes to show that when the Tor project defends human rights, it can’t limit its scope to a single right like privacy. All human rights are interrelated, so Tor also defends ICCPR Article 7 and the Convention Against Torture. That makes Tor a threat to the US government, which confers impunity for continuing crimes against humanity of systematic and widespread disappearance and torture, domestically and internationally. Tor is helping to expose grave US crimes. Soon the Tor project board will have to choose sides between the humans or the state. If you side with the humans, as we hope you do, you better batten down for heavy weather.

Dodo October 8, 2016 4:56 PM

@yik yak

Hopefully you’re still here; in last Friday’s Squid thread you asked Clive about books for self-learning about security and programming etc.
A little while ago someone calling themselves ‘Dumber Than Clive Robinson’ asked a similar question, or at least ‘where/how to start’.
There are a few posts in the link below from Nick P and Thoth that directly answer this question. Very detailed, very helpful.
This will be useful to many.

As a general education you may wish to work backwards through this blog, focusing exclusively on the Friday Squid post comments (just those, as they are the broadest; otherwise it’s just overload). You could focus on a few a week, with a text document open to capture the relevant bits. That should keep you busy for the next few years 😉

jl October 8, 2016 5:28 PM

Looks like Google Drive spam is a thing. Anyone can share a folder with you containing potentially malicious files. There does not appear to be any way to block someone sharing with you, or to report spam or abuse. Some have speculated that this could make you vulnerable to exploits if you tap on picture files.

I have received one or two of these and deleted the folders without opening. Interesting discussion here:!topic/drive/XXExyRX4CoM

hawk October 8, 2016 5:47 PM


The Russian Gov’t can’t begin to do the damage to US elections that the popular media has.

Nick P October 8, 2016 6:16 PM

Saw another post on “Countering Trusting Trust” on HN. Decided to redo the article I had posted in response to the reproducible-builds craziness for this one. D. Wheeler himself showed up to discuss it with me. He’s a pleasant guy to disagree with. I did discover the Core Infrastructure Best Practices Badge project in the process. Pretty cool. Link to discussion here. My initial post and a more specific one are below:

“This again. A perfect example of solving the wrong problems in a clever way. To his credit, Wheeler at least gives credit to the brilliant engineer (Karger) who invented the attack, points out it took 10 years before that knowledge reached anyone via Thompson (recurring problem in high-security), and did the reference essays on the two solutions to the actual problem (high-assurance FLOSS & SCM’s). That’s what you’re better off reading.

Here’s a quick enumeration of the problems in case people wonder why I gripe about this and reproducible builds fad:

  1. What the compiler does needs to be fully specified and correct to ensure security.
  2. The implementation of it in the language should conform to that spec or simply be correct itself.
  3. No backdoors are in the compiler, the compilation process, etc. This must be easy to show.
  4. The optimizations used don’t break security/correctness.
  5. The compiler can parse malicious input without code injection resulting.

  6. The compilation of the compiler itself follows all of the above.

  7. The resulting binary that everyone has is the same one matching the source with same correct or malicious function but no malicious stuff added that’s not in the source code already. This equivalence is what everyone in mainstream is focusing on. I already made an exception for Wheeler himself given he did this and root cause work.

  8. The resulting binary will then be used on systems developed without mitigating problems above to compile other apps not mitigating problems above.

So, that’s a big pile of problems. The Thompson attack, countering the Thompson attack, or reproducible builds collectively address the tiniest problem vs all the problems people actually encounter with compilers and compiler distribution. There’s teams working on the latter that have produced nice solutions to a bunch of them. VLISP, FLINT, the assembly-to-LISP-to-HLL project & CakeML-to-ASM come to mind. There’s commercial products, like CompCert, available as well. Very little by mainstream in FOSS or proprietary.

The “easy” approach to solve most of the real problem is a certifying compiler in a safe language bootstrapped on a simple, local one whose source is distributed via secure SCM. In this case, you do not have a reproducible build in vast majority of cases since you’ve verified source itself and have a verifying compiler to ASM. You’ll even benefit from no binary where your compiler can optimize the source for your machine or even add extra security to it (a la Softbound+CETS). Alternatively, you can get the binary that everyone can check via signatures on the secure SCM. You can even do reproducible builds on top of my scheme for the added assurance you get in reproducing bugs or correctness of specific compilations. Core assurance… 80/20 rule… comes from doing a compiler that’s correct-by-construction much as possible, easy for humans to review for backdoors, and on secure repo & distribution system.

Meanwhile, the big problems are ignored and these little, tactical solutions to smaller problems keep getting lots of attention. The same thing happened in the time frame between Karger and Thompson with Karger et al’s other recommendations for building secure systems. We saw where that went in terms of the baseline of INFOSEC we had for decades. 😉

Note: I can provide links on request to definitive works on subversion, SCM, compiler correctness, whatever. I think the summary in this comment should be clear. Hopefully.

Note 2: Anyone that doubts I’m right can try an empirical approach of looking at bugs, vulnerabilities and compromises published for both GCC and things compiled with it. Look for number of times they said, “We were owned by the damned Thompson attack. If only we countered it with diverse, double compilation or reproducible builds.” Compare that to failures in other areas on my list. How unimportant this stuff is vs higher-priority criteria should be self-evident at that point. And empirically proven.” (me)

“You are right, but reproducible builds are still very useful, not for high-assurance though.” (zzzcpan)

“They’re barely useful for low assurance. Just read the Csmith paper testing compilers to see the scope of the problem. The solution to what they’re really worried about will require (a) a correct compiler, (b) it written in cleanly-separated passes that are human-inspectable (aka probably not C language), (c) implemented with correctness checks to catch logical errors, (d) implemented in a safe language to stop or just catch language-level errors, (e) stored in a build system hackers can’t undetectably sabotage, (f) trusted distribution to users, and (g) compiled initially with a toolchain people trust, with an optional, second representation for that toolchain.

Following Wirth’s Oberon and VLISP Scheme, the easiest route is to leverage one of those in a layered process. Scheme, esp PreScheme, is easiest but I know imperative programmers hate LISP’s no matter how simple. So, I include a simple, imperative option.

So, here’s the LISP example. You build initial interpreter or AOT compiler with basic elements, macro’s, and assembly code. Easy to verify by eye or testing. You piece-by-piece build other features on top of it in isolated chunks using original representation until you get a real language. You rewrite each chunk in real-language and integrate them. That’s first, real compiler that was compiled with the one you built piece by piece starting with a root of trust that was a tiny, static LISP with matching ASM. You can use first, real compiler for everything else.

Wirth did something similar out of necessity in P-code and Lilith. In P-code, people needed compilers and standard libraries but couldn’t write them. They could write basic system code on their OS’s. So, he devised an idealized assembly that could be implemented by anyone in almost no code and just with some OS hooks for I/O etc. Then, he modified his Pascal compiler to turn everything into P-code. So, ports & bootstrapping just required implementing one thing. It got ported to 70+ architectures/platforms in 2 years as a result.

The imperative strategy for anti-subversion is similar. Start with idealized, safe, abstract machine along lines of P-code with ASM implementations. Initial language might be Oberon subset with LISP or similar syntax just for effortless parsing. Initial compiler done in high-level language for human inspection with code side-by-side in subset language for that idealized ASM. It’s designed to match high-level language, too. Create initial compiler that way then extend, check, compile, repeat just like Scheme version.

The simple, easy code of the initial compilers and high-level language for final compilers means anyone can knock them off in about any language. That will increase diversity across the board as many languages, runtimes, stdlibs, etc are implemented quite differently. Reproducible build techniques can be used on the source code and initial process of compilation if one likes. The real security, though, will be that many people reviewed the bootstrapping code, the ZIP file is hashed/signed, and users can check that source ZIP they acquired and what was reviewed match. Then they just compile and install it.” (me)

“I’m not disagreeing: yes, reproducible builds don’t make a lot of sense from a security point of view. But they do from an infrastructural/package management point of view and could make some things easier, more manageable, more reliable.” (zzzcpan)

Ok, that might be true. No argument there. 🙂
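The LISP bootstrap route Nick P describes — an initial interpreter with basic elements, small enough to verify by eye, on which everything else is built piece by piece — can be illustrated with a toy. This is a rough sketch only: Python stands in for the hand-checked root of trust, and the mini-language and its forms are made up for the example.

```python
def tokenize(src):
    """Split a parenthesized expression into tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one expression: a nested list of ints and symbol strings."""
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)
        return lst
    try:
        return int(tok)
    except ValueError:
        return tok

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "<": lambda a, b: a < b}

def evaluate(x, env):
    if isinstance(x, str):        # variable reference
        return env[x]
    if isinstance(x, int):        # literal
        return x
    head = x[0]
    if head == "if":              # (if test then else)
        _, test, then, alt = x
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == "define":          # (define name expr)
        env[x[1]] = evaluate(x[2], env)
        return env[x[1]]
    if head == "lambda":          # (lambda (params...) body)
        params, body = x[1], x[2]
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    if isinstance(head, str) and head in OPS:
        f = OPS[head]
    else:
        f = evaluate(head, env)
    args = [evaluate(a, env) for a in x[1:]]
    return f(*args)

env = {}
program = [
    "(define fact (lambda (n) (if (< n 2) 1 (* n (fact (- n 1))))))",
    "(fact 6)",
]
for src in program:
    result = evaluate(parse(tokenize(src)), env)
print(result)   # 720
```

The anti-subversion argument is that a core this small can be audited line by line, then used to compile the next, richer layer, so trust never rests on an opaque binary.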

anony October 8, 2016 6:20 PM

This looks like a pretty robust and anonymous way to disperse information.

a serverless web page you can update with a bitcoin transaction

For most websites the servers and domain names are the most vulnerable aspects. Both can be easily seized and are far from anonymous. With Web2Web, however, people can run a website without any of the above.

Cortez October 8, 2016 7:36 PM

@jl on October 8, 2016 5:28 PM…

thanks for that info on Google Drive spam. Definitely useful.

Mega Lister October 8, 2016 8:44 PM

For those of you stuck with Windows 10 for whatever reason, here is a good start at OUTBOUND IP ranges to block with your firewall. Windows firewall will do the job quite well. There are many stories around that W10 overrides the hosts file for much of the telemetry, apps and update URLs. I think that’s at least partially true. I also think block rules in the Windows firewall DO override hard-coded allow rules.

This list is a start. Blocking will probably cut out hundreds of connections per day.

Also, a couple of sites expanding on tactics to quiet MS are offered.

A short defense of MS: Some of the outbound data connections are quite innocent and non-personal. Engineers need to know what their users are doing to make Windows better or at least work with millions of diverse computer setups.

If anyone ever finds the Ultimate Mega Windows 10 Blocklist, please post the link here.,, Update has an alternate in another range. #Start menu searches. (, ( (Start menu searches.) ( (

65535 October 8, 2016 10:17 PM

@ albert

Yes, the Yahoo breach is starting to stink to high heaven, with the fingerprints of the NSA/FBI on it, not to mention the cozy sale of Yahoo to Verizon.


“It definitely contained something that did not look like anything Yahoo mail would have installed,” the source added. “This backdoor was installed in a way that endangered all of Yahoo users.” …source, who also requested anonymity and was familiar with what happened, confirmed that describing the tool as a “buggy” “rootkit” is accurate.

[Misleading reporting by Reuters and NYT]

“[Reuters] article is misleading,” the statement read, referring to the original report by Reuters. “We narrowly interpret every government request for user data to minimize disclosure. The mail scanning described in the article does not exist on our systems.”

[Rootkit discovered and then Yahoo chief security guy gets a pink slip]

“After the Yahoo security team discovered the spy tool and opened a high severity security issues within an internal tracking system, according to the source, the warning moved up the ranks. But when the head of security at the time, Alex Stamos, found out it was installed on purpose, he spoke with management; afterward, “somehow they covered it up and closed the issue fast enough that most of the [security] team didn’t find out,“ …In other words, the incident was an “extremely well kept” secret, the source said. Stamos, a well-respected veteran of the security industry who now works at Facebook, declined to comment. Reuters reported that this incident was one of the reasons that led to his departure”- Motherboard

@ AlanS

This is bad news for human rights activists and journalists. Exactly what parliamentary procedures allowed May to slide into the PM’s seat?

@ Finl

The second largest maker of chips now backdoor’d. The clammy grip of the TLAs is getting much tighter.

@ Nick P

You mention on HN that nobody paid you for your secure compiler work. It certainly merits more funding. How about the EFF?

@ Cortez and jl

Wow, that is a bad problem with google drive. Any fix in sight?

@ Mega Lister

Hat tip to you for the ports to block. I will bookmark it.

Thoth October 8, 2016 10:49 PM

@Nick P, Figureitout

So, I got my GroggyBox Java client to finally work, encrypting a 345 KB file within 2 minutes. Is it reasonable for AES-256-CBC-PKCS5 done within a smart card to encrypt a 345 KB PDF file in 2 minutes, or as an end-user would you expect it to be faster?

The document I am trying to encrypt is an Infineon e-wallet PDF document (linked below) as a sample document while testing my GroggyBox GUI + GroggyBox smart card applet.
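For what it’s worth, the reported figure works out as below. The 255-byte chunk size is just the classic short-APDU data field limit, used here as an illustrative assumption about the transport; GroggyBox’s actual chunking may well differ.

```python
# Back-of-envelope throughput for the reported smart card encryption run.
size_bytes = 345 * 1024          # 345 KB file
elapsed_s = 120                  # 2 minutes
rate = size_bytes / elapsed_s
print(f"{rate:.0f} bytes/s")     # about 2.9 KB/s overall

# APDU transport, not the AES engine, usually dominates at these sizes:
# each chunk carries command/response overhead on a slow serial link.
apdu_chunk = 255                 # classic short-APDU data field limit
exchanges = size_bytes // apdu_chunk + 1
print(exchanges, "APDU exchanges at 255-byte chunks")
```

At roughly 3 KB/s, the time is plausibly dominated by the card interface rather than the cipher itself, which is consistent with the 2-minute figure.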


Figureitout October 9, 2016 12:00 AM

–Cool chip, but gah, I’ve got too many chips on the backburner! :p There’s so much cool stuff going on in small RF chips; the main things are low power, then range, and more sensitive receivers. Just got a YARD Stick One, which has a TI CC1111 with a built-in encryption processor that I want to use for something. Now I’m busy w/ school and work…sucks.

Good work. And yeah that’s reasonable, how much can you store on the smart card? What about decrypt, same speed?

David H October 9, 2016 12:08 AM

@Finl • October 7, 2016 11:00 PM

I just bought an ARM based chromebook; not because I want ChromeOS but because I want a laptop with long battery life that has Linux under the hood.

What is popular is the “crouton” application, running an Ubuntu chroot on top of the ChromeOS base OS. This keeps many of the security features of ChromeOS, like forbidding execute access from the home directories.

I’m still playing with the thing, but I think I’m going to install either Arch Linux or Debian. I have built open source ChromiumOS and some related Linux applications on my home server. The freshly built ChromiumOS and its Linux applications run on the laptop. ChromiumOS does not have the application availability that either of the other two Linux distros has (example: stuck at Python 3.3).

Normal chromebook applications are web based through the Chrome browser, and are not really executed on the laptop platform. Note well – ChromeOS is an Alphabet/Google product and the Chromium browser will report home to Google.

If implanting my own Linux distro works out as I hope it does, then ARM chromebooks and chromeboxes become widely available personal computers without the Intel or AMD management engines.

Thoth October 9, 2016 12:11 AM

Not sure about the decrypt speed, as I am currently coding the decrypt function interface on the Java desktop client side to talk to the smart card for decryption. Can’t store much on a smart card; it has about 2 KB RAM and maybe 80 KB EEPROM?

Thoth October 9, 2016 1:05 AM

@David H
Beware of the ChromeBook’s TPM and ARM’s TrustZone. ChromeBook is just as untrusted as Intel’s AMT and AMD PSP.

65535 October 9, 2016 2:01 AM

@ Wesley Parish

“Wouldn’t this constitute a conflict of interest with the way the NSA, FBI and whatnot plan to make use of Zero-Days, etc?”

I would think it would. But who is to stop them?

Along those same lines Krebs is starting to name names of IoT makers who hard code admin passwords.

“One of those default passwords — username: r@@t and p@ssword: xxxx? — is in a broad array of white-labeled DVR and IP camera electronics boards made by a company called Xixxxx Technologies.” –Krebs on Security

[Unsafe internet devices]

I think the most insidious coding in said IoT devices is the inclusion of an undocumented internet-facing shell, such as an SSH shell or the like.

This problem was bubbling up on Bruce’s blog when a well-known publication indicated that all English citizens who had broadband routers from said ISP had an internet-facing shell that GCHQ could probably manipulate to capture internet traffic.


“A paper released earlier this month by a group of security researchers has outlined the technical details behind a potential Computer Network Exploitation (CNE) program likely used by the U.K. Government Communications Headquarters (GCHQ)… According to the paper, a secondary hidden network and IP address is assigned to a BT user’s modem, which enables the attacker (in this case the NSA or GCHQ) direct access to their modem, and the systems on their LAN from the Internet….researchers tested BT Open Reach modems Huawei EchoLife HG612 and ECI B-FOCuS VDSL2. In a side note, they point out that BT developed the firmware, so claims of Huawei being responsible for the backdoors are false.“ -CS on line.


The same is suspected of American routers. The problem of the “hidden shell” could be much larger than DVRs and the like.

Worse, there is the Universal Plug and Play code in most consumers’ routers, which can nullify the NAT safety feature when games or gaming consoles, TVs and other internet devices need a direct connection to the internet.


“…several readers already commented in my previous story on the Mirai source code leak, many IoT devices will use a technology called Universal Plug and Play (UPnP) that will automatically open specific virtual portholes or “ports,” essentially poking a hole in the router’s shield for that device that allows it to be communicated with from the wider Internet. Anyone looking for an easy way to tell whether any of network ports may be open and listening for incoming external connections could do worse than to run Steve Gibson‘s “Shields Up” UPnP exposure test.”
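Besides Gibson’s external test quoted above, one can also probe the LAN side for UPnP devices directly with an SSDP M-SEARCH. A minimal sketch — the multicast address, port, and header format are the standard SSDP ones; everything else here is illustrative:

```python
import socket

# SSDP discovery: multicast an M-SEARCH and print whoever answers.
# Any device that responds is advertising UPnP services on your LAN.
MCAST = ("", 1900)
msearch = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST:",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: upnp:rootdevice",
    "", "",
])

def discover(timeout: float = 2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(msearch.encode(), MCAST)
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            first_line = data.decode(errors="replace").splitlines()[0]
            found.append((addr[0], first_line))
    except socket.timeout:
        pass
    return found

if __name__ == "__main__":
    try:
        for ip, status in discover():
            print(ip, status)
    except OSError as e:
        print("network unavailable:", e)
```

An empty result does not prove safety (devices may ignore M-SEARCH), but any response is a device worth checking for forwarded ports.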

One of KoS’s posters dryly noted that there are no economic consequences to deter consumers from buying the cheapest IoT devices, because they simply don’t care if KoS is knocked offline by “booters”, be they script kiddies, government censorship and so on. Consumers don’t care about security.

David H October 9, 2016 2:34 AM

@Thoth • October 9, 2016 1:05 AM

The TPM appears benign. If ChromeOS is replaced by Debian or another Linux distro it won’t even be accessed.

The particular chromebook I bought has an NVIDIA kepler graphics chip in it. The proprietary driver in ChromeOS is from Nvidia and is not open source. The nouveau open source driver almost matches the features of the proprietary nvidia driver. I’m not so lucky with the Marvell wifi blob. I’m either stuck with the blob or use a USB dongle for wifi.

I’m pretty sure I need to use the nouveau driver with whatever Linux version I use as the interfaces evolve together.

There is an under-the-keyboard write-protect device that allows the boot firmware to be updated. Libreboot is an open source version of coreboot that eliminates proprietary blobs.
It’s available for a Samsung ARM chromebook, but not the model I purchased.

This chromebook/netbook exercise is an experiment in installing open source Linux and firmware. It does not have to succeed for me to consider myself more knowledgeable.

What is more interesting is that there are 64 bit ARM single board computers intended for nonportable devices that will boot any garden variety Linux distro. Debian has complete builds for various 32 bit and 64 bit ARM architectures.

My ultimate goal is to get to open source firmware and drivers. It looks do-able.

Who? October 9, 2016 3:04 AM

@ab praeceptis

What shocked me personally most is that the (at least open source) crypto community doesn’t seem to care too much about a widely implemented good level of primality testing.

Not surprising at all. As an open source developer working on a security-related project, I can confirm that most of my “colleagues” are a bunch of self-regarding egotists. Working on something as difficult and mathematically challenging as primality testing does not provide the reward most of them want.

If something good exists in commercial software, it is that when a corporation hires a developer, he must work on anything that needs improvement, including not-fun but important areas like the one you describe.

My experience with open source is that people do the minimum required to get something usable, then they drop it and at most care about bug fixing so people do not perceive low quality in the product.

ab praeceptis October 9, 2016 3:14 AM

Nick P

In the discussion itself I was torn between the two of you. Both of you have good points and thoughts, and both of you came from somewhat different POVs (and focuses of interest), Wheeler being more modest and pragmatic and you being more concerned about the big picture.

Funnily, both of you either lauded open source and saw closed source critically, or didn’t spend much effort on the distinction. I am, however, more and more reaching the point of begging to strongly differentiate, up to the point where I posit that there is no such thing as open source; there is but a zoo of open source, ranging from the desirable to the despicable. Which, of course, also translates to open source often being the problem rather than the solution.

Here’s what I mean: there is OS in the form of excellent teams of very well educated researchers and seasoned engineers, e.g. an Inria. But there are also bunches of cool teenagers without even an acceptable minimum of responsibility, without much education and with little and/or extremely limited experience, producing gazillions of lines of problems and bugs. So about whom are we talking when we say “open source”, let alone when we claim open source is better than closed source?

Actually the situation is so bad that the differences between you and Wheeler are practically insignificant. Out there, in particular in the open source scene, the thoughts of both of you are blissfully and grinningly ignored. Those a**holes just go ahead and put layer on top of layer and language on top of language.

Let me confess something: if I see Perl being used in a project, I’m gone. Won’t touch it; the chances are just way too big that it’s thoughtlessly kludged crap by losers who didn’t hear the shot.
Second, one of the possibly most important things I’ve learned over many years is to very carefully select my tools and to expect ignorance or even idiocy everywhere. Example? Evil corp: VCC, something I’d love to enforce on some colleagues, runs only on Windows and demands Visual Studio. Dafny, something I’d also like to urge some colleagues to look at, produces but worthless .NET crap.

Both of you are right, each in his own way. You are right, for instance, in pointing at Wirth and Oberon and P-code, and he is right in being modest and walking a path of small steps.

In the end I see only one salvation chance: formal methods. Example: I would like a compiler that not only is formally verified but that also runs massive test suites, possibly with the help of another particular back end, to verify everything completely, down to binary code production.
Five years ago that would have been a pipe dream. Today it would be feasible, but a major endeavour and a painful one. One reason being that we have, it seems, a plethora of formal tools, few of which, however, are theoretically and conceptually sound and practically usable in the real world.

On the other hand, all those governments have billions to pour into hacking and eavesdropping. Which to my mind’s eye clearly demonstrates that the governments, incl. the military, are plain stupid and evil. They have a strong bias for aggressive capabilities and little concern for defensive capabilities (other than bla bla and some pocket money for researchers).

Who? October 9, 2016 3:40 AM

@Mega Lister

A short defense of MS: Some of the outbound data connections are quite innocent and non-personal. Engineers need to know what their users are doing to make Windows better or at least work with millions of diverse computer setups.

No, engineers need to know what their users say when filing a bug report. No one at a big corporation needs to know what their users are doing. Period.

Nick P October 9, 2016 11:34 AM

@ 65535

“You mention on HN that nobody paid you for your secure compiler work. It certainly merits more funding. How about the EFF?”

It was a joke that I was sure Wheeler would get, given what he said. He knew I built some half-assed compiler for my personal needs. He knew it wasn’t close to GCC or even TCC. He knew this because it’s what all compiler amateurs do. He asked how many people were using it or something like that. I told him he got me: “nobody wanted to pay for it or anything haha.”

Sorry for the confusion. In high-assurance compilers, there are several projects being funded by large organizations that might fund another one. They usually want someone to push the cutting edge a bit rather than polish some project into a product. That, plus a preference for academic institutions, would put me at a disadvantage.

Good news is some recipients keep open-sourcing their work. CompCert getting locked up was a setback. Then there’s the CakeML team’s “translation validation” to source, the FLINT certifying compiler for ML, KCC, which implements C in the K framework, and the C0 compiler for a C subset from the Verisoft project. So, with ML or C, there’s stuff people can start on for high-assurance compilers.

The best one for the informal route I’ve seen is this paper, given it’s so incremental and needs minimal background knowledge. I might try it myself at some point given most of what’s out there sucks. Alternatively, what’s in SICP or one of the older papers. Let some paranoid argue those were subverted in print haha.

@ Thoth

“So, I got my GroggyBox Java client to finally work on encrypting a file of 345 KB within 2 minutes. Is it reasonable for a AES-256-CBC-PKCS5 done within a smart card to encrypt a 345 KB PDF file in 2 minutes or as an end-user do you expect it to be faster ?”

Now you’re running into that problem I warned you about. Doing it all in the smartcard would be slow as hell. That’s too slow for most users to wait on a file. If it’s that slow, then the endpoint security might be necessary for the crypto after all. You might be better off doing a split system where:

  1. Each file is encrypted and decrypted by the host CPU. That’s because the host can already see it at either point. Threat model with host being malicious usually already screws you there. So why not.
  2. Each file gets separate key generated by smartcard and exported encrypted to the same storage as the files.
  3. User starting new session on same or new machine authenticates to the smartcard.
  4. For just files user needs, smartcard decrypts their associated keys so the host can decrypt file itself.

Let’s recap. Decrypting files with the smartcard itself is ultra-slow and results in the same level of security if the host still sees decrypted files. Your goal seems to be to limit what untrusted hosts see if they’re not the same host, at session or hardware level, as the one the user was on when originally encrypting files. The scheme above lets the host CPU do fast crypto on the files, which are large. The small keys for that are generated and protected by the smartcard. So, now you’re looking at the overhead of calls to the smartcard, smartcard decryption of 256bit of data, and CPU decryption of 345,000bit of data vs the prior overhead of smartcard calls with smartcard decryption of 345,000bit. I can’t prove this will be faster but I have a feeling. Especially if the host has AES instructions. 😉
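A minimal sketch of that split scheme, with the card simulated in-process and a SHA-256 counter-mode keystream standing in for AES so it runs with only the standard library. All names here are made up for illustration; real code would use a proper AES implementation and an actual APDU interface to the card.

```python
# Sketch of the split scheme above: the "card" only wraps/unwraps small
# per-file keys; the host does the bulk encryption. A SHA-256
# counter-mode keystream stands in for AES here — it is NOT real AES.
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric stream cipher: XOR with a SHA-256 counter keystream."""
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

class SimulatedCard:
    """Stands in for the smartcard: holds a master key, never exports it."""
    def __init__(self):
        self._master = os.urandom(32)           # lives only inside the card
    def new_wrapped_key(self) -> tuple:
        file_key = os.urandom(32)               # step 2: per-file key
        return file_key, keystream_xor(self._master, file_key)
    def unwrap(self, wrapped: bytes) -> bytes:  # step 4, after step-3 auth
        return keystream_xor(self._master, wrapped)

card = SimulatedCard()
plaintext = b"345 KB of PDF would go here" * 100

# Encrypt: the card makes a key, the host does the bulk work (step 1).
file_key, wrapped = card.new_wrapped_key()
ciphertext = keystream_xor(file_key, plaintext)
# Store ciphertext + wrapped together; the bare file_key is discarded.

# Later session: the card unwraps the small key, the host decrypts.
recovered = keystream_xor(card.unwrap(wrapped), ciphertext)
assert recovered == plaintext
```

The card only ever processes 32-byte keys, so the slow APDU round-trips are constant per file regardless of file size.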

@ David H

“The TPM appears benign. If ChromeOS is replaced by Debian or other Linux distro it wont even be accessed.”

Never assume that with chips on the boards. There’s been more focus on firmware-level attacks on things you don’t use, plus subversion of ASICs. If it’s a TPM outside the main SoC, they tend to be dumb chips that don’t do much but come from subversive organizations. If inside, there’s no telling what it might do. Best it be trustworthy or not there at all. If an IOMMU is onboard, see if you can use it to block the TPM. If it’s on the actual board, see if ripping it out leaves the board functioning. 🙂

@ ab praeceptis

“wheeler being more modest and pragmatic and you being more concerned about the big picture.”

Fair characterization.

“Funnily both of you either lauded open source and saw closed source critically or didn’t spend much effort on it.”

It’s impossible to assure a closed-source product against subversion right now. The Obfuscated C Contest shows that even at source level, which is a level above the assembly I’d get with closed-source. Open-source at least has a chance. Plus the usual benefits where you can fix its problems, freely distribute it, fork it, not get patent suits, and so on. The open-source model is always safer because the owner can’t tell you to screw off & deep-six the product like they did to many great things in closed-source. A recent example that 180’d was OpenVMS. An older one was Convergent CTOS: the closest thing to Plan 9 that made it to production & was killed after Unisys acquired it. The open model is always more powerful since the users can tell the owner to screw off and fork the source into something better. Many precedents for this, including the most secure UNIX (OpenBSD).

Now, those are big picture risks. The quality of a specific project varies tremendously. The data from the last time they studied it showed open-source projects on average had fewer defects and fixed them faster than proprietary ones. I think they studied popular ones, though, as community-driven development often needs a critical mass before that happens. Anyway, a given OSS or proprietary project can be shit or good. It depends on the responsibility & skill of the team building it. Yet open-source always has less risk than proprietary in the long term, as you can review, fix, and fork it. Without lawsuits, too.

The MS Research techs are a good example. I’ll get sued if I implement their verified Verve OS or try to fix their toolchain for non-MS stuff. Maybe they won’t, but they might, given the patents they filed on it all. Whereas with the open-source provers, VCC generators, and compilers like CakeML… I can do whatever I want with them so long as I share it per the license. If VCC locks me in or has issues, what the hell am I going to do? If CakeML does, I submit pull requests or fork it to fix it myself. That’s the key difference. It’s why it’s always better to put effort into open solutions if they’re available, or to release any critical tooling under an open license. Especially with patent provisions! I think we’ll eventually see a BSD project get sued over patents since their license is technically just a copyright license.

“Actually the situation is so bad that the differences between you and wheeler are practically insignificant. Out there, in particular in the open source scene, the thoughts of both of you are blissfully and grinningly ignored.”

I agree with the first part, but our conversation happened because the second isn’t true. Wheeler’s paper introduced the concept of reproducible builds. There’s all kinds of work in major projects right now making that happen. Whole little communities. There have also been quite a few works on improving repos. Actually, the pace of work on that stuff, from Git to containers, is so fast it hurts quality. But people are all over this area, with many working on security or reproducibility.

Problem: They ignored Wheeler’s articles on making your FOSS secure and on secure SCM. That SCM page of his needs to be integrated into the DVCSs and other work. It’s comprehensive. Much more than what I’m seeing get built that’s “secure.” The closest thing on his page was Aegis, but it needs work and barely had any attention. OpenCM is dead. I think this problem will have to be solved with a niche offering that people who know they need it voluntarily use to compare against a reference implementation in Git, etc. FOSS just doesn’t care enough. Proprietary shops are using SharePoint. I can’t tell if that’s an improvement on FOSS or not. Situation is bad, as you said.

“Example: I would like a compiler that not only is formally verified but that also runs massive test suites, possibly with the help of another particular back end, to completely verify down to binary code production.”

That would be fine. I’m leaning toward the proof-carrying code or verified-intermediate styles. In the first, the untrusted transformation produces evidence that a simple checker in a prover can verify to confirm it did its job. In the second, a formalization of the intermediate language in a compiler lets one check equivalence of the pre- and post-transformation states of a given step. Again, the transformation/optimization can be untrusted. My implementation would have a cluster going through the files using optimized ML programs (e.g. MLton) to do untrusted processing producing evidence chains for each module. Then verified checkers compiled through CakeML would go through the data to validate and verify it all. Also on a cluster, cuz “ain’t nobody got time for that.”
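A toy illustration of that check-the-untrusted-transformation idea, with exhaustive testing over a small domain standing in for the prover a real system (Z3, Coq, etc.) would use; the expression language and all names are made up:

```python
# Toy "translation validation": an untrusted optimizer rewrites a tiny
# expression AST; a small trusted checker independently confirms the
# result is equivalent before accepting it. Expressions are
# int | "x" | ("+", e1, e2) | ("*", e1, e2).
def evaluate(e, x):
    if e == "x":
        return x
    if isinstance(e, int):
        return e
    op, a, b = e
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == "+" else va * vb

def fold(e):
    """Untrusted optimizer: constant-fold subtrees where possible."""
    if isinstance(e, tuple):
        op, a, b = e
        a, b = fold(a), fold(b)
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == "+" else a * b
        return (op, a, b)
    return e

def validated_fold(e, domain=range(-100, 100)):
    """Trusted checker: accept the optimized form only with evidence."""
    opt = fold(e)
    assert all(evaluate(e, x) == evaluate(opt, x) for x in domain)
    return opt

# (2 * 3) + x  →  6 + x, accepted by the checker
assert validated_fold(("+", ("*", 2, 3), "x")) == ("+", 6, "x")
```

The checker stays tiny and trustworthy while the optimizer can be arbitrarily complex, which is the whole point of the style.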

As linked above, I think that this paper offers a nice route to gradually building the initial compiler. It has to be understandable to amateurs, gradual, and useful. A guy named sklogic, whose tools are here, demonstrated one could simply embed Standard ML in a LISP to get its benefits while keeping LISP’s. That Scheme project has most of what’s needed for an SML already. One could just make an SML that’s actually a Scheme with functional equivalence. Formal tools only use a subset anyway. That gives you a LISP and SML to bootstrap everything else. And it’s well known that about everything has already been implemented in LISP at one point. 😉

The C0 compiler is probably written in an ML or extracts to one. I’m on my way out the door, so I didn’t have time to double-check the link; just throwing it in there real quick. C0 is a candidate for a first C compiler given it handles a subset, or something similar used in OS projects. Then a real compiler could be modified into C0 by removing unsupported pieces. Maybe TCC, since it’s small. That solves the chicken-and-egg problem until a real, verifying, open-source compiler comes along. Optionally run the C0 compiler through CakeML to have a verified-to-machine-code compiler for C0. Or build on KCC, which did the whole C semantics in just a few thousand lines of code in their platform. I think the K framework isn’t getting enough attention given all they’re doing with it.

All for now. Be back in a few hours.

albert October 9, 2016 11:46 AM

@Grauhut, Sancho_P, et al,

The US isn’t alone as one of the more retributive governments in the world. If you look at the US responses to various hacking incidents, you see that they are proportional to the sinfulness of the actions exposed. This is why Assange, Snowden, Russia, etc*. are vilified world-wide by the USG official mouthpieces (AKA the MSM), and the odd Congress-critter, or assholy pundit.

It’s all political theater. Exposing corruption in politics is everyone’s business.

If you keep getting caught with your pants down, stop wearing pants.

*the Bogeyman-du-jour.
. .. . .. — ….

ab praeceptis October 9, 2016 12:36 PM

Nick P

“open source” – sorry, but bluntly “No”.

For one because: what is FOSS? It’s everything from excellent professionals to drugged 14 year old losers in romania piling one crap layer on top of another crap layer.

But also “No”, because closed source != not verifiable. Maybe in the usa that doesn’t mean a lot (don’t know, just wildly guessing) but in many other countries (e.g. pretty much all of europe) companies are liable and bound to what they promise. If, as say a french company, I assert that my software is fully verified then it’d better be, because if not I’m in serious trouble.

Moreover, there are grey zones. A company can, for instance, have its software (incl. source) externally checked and confirmed. Or it can offer its source under nda for purpose of checking, etc. Have seen that, have done that, not too unusual.

Back to open source and somewhat related: In pretty much every OS software they tell you in big fat letters “No responsibility, no liability, no nothing whatsoever. Fuck yourself!” That is, in fact, a lot less than what I get from evil closed software (well, some anyway). Also, while an OS project can – and often will – not care a rat’s ass about its reputation, most companies can tolerate only so much bad reputation. (Well noted, this doesn’t mean that I’m against OS software. I’m just against evangelizing and preaching the “holy” automagic wonderfulness of OS.)

That aside, the point that really drives me mad and angry is this: We do have researchers, we do have thousands of talented people in universities and hundreds of projects, and (at least in most countries) these are paid with tax money – and should hence, by default and by law, make all of their work open source! But alas, they don’t. They rather open a company to monetize what they did while being paid by the public. Yuck!
And yuck again, because much of that stuff is of at least modest and sometimes even good quality.

As for verified compilers: Yes, *ML is one good approach, far better than lisp anyway.

I’m currently looking at a “closed loop” system, where each stage of a compiler is double-checked, once by itself and once through Z3, and where each stage also generates tests for itself. The dirty part obviously being the backend.
An interesting and promising approach, btw, is to focus not only on the AST or code generation but also on verification (incl. self-verification).

In the end there is only one way: proper formal spec … through the diverse stages … to formally verified code generation. With each stage also creating information for verification one can get pretty close to trustworthiness.
Funny side remark: I’m just rediscovering a Prolog-based CLP system as a very useful tool to create lots of helpful information such as range checks etc. Funny also because I played a lot with it and learned to value it highly, yet was dumb enough to not notice for a long time what it could do in my main area of interest.

Another issue I meet again and again is that we must stop clumsy annotation games (à la ACSL et al.) and have formal specs in the language itself. Writing functions without at the very minimum Hoare triples, or fumbling with vaguely “specified” vars (like “int”) rather than properly spec’d vars (like “x : int 1 … 366”), is just asking for bugs (and making the life of a well-meaning compiler hard).

One thing that really begs for that (and is often beastly for formal verification) is loops. I think we should very seriously change our habits and languages; we should, for instance, specify a loop not with a processor in mind but with an algorithm in mind. Specifying a loop invariant should obviously be a standard part, not an exception for paranoid developers writing a comment annotation.
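To make the point concrete, here is the spec-heavy style mimicked with executable assertions in Python — a stand-in only, since real languages for this are SPARK, Dafny, and friends; the ranged-var example “1 … 366” from above shows up in the test values:

```python
# Division by repeated subtraction, with the contract and loop invariant
# written out as executable assertions — a stand-in for language-level
# specs like "x : int 1 .. 366" or SPARK loop invariants.
def divmod_spec(a: int, b: int) -> tuple:
    assert a >= 0 and b > 0                  # precondition (Hoare "P")
    q, r = 0, a
    while r >= b:
        assert a == b * q + r and r >= 0     # loop invariant
        q, r = q + 1, r - b
    assert a == b * q + r and 0 <= r < b     # postcondition (Hoare "Q")
    return q, r

assert divmod_spec(366, 31) == (11, 25)
```

The invariant is exactly what a verifier would need to discharge the postcondition mechanically; writing it as part of the loop rather than as a comment is the habit change being argued for.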

r October 9, 2016 1:23 PM


Vs. Avoiding Lawsuits & YOU

Have you reviewed the GPL compatibility list? Not all OSL are ==; BSD and MIT technically have a vulnerability GPL doesn’t have, imo – that is, reissuing altered source under the same license.

It shouldn’t be too hard to sanction a bsd or mit based company and purloin their efforts as a front company – some foundation huh?

Maybe I’m just one of the paranoids you speak of, but it seems to me one almost needs a lawyer for mixing licenses.

Gerard van Vooren October 9, 2016 2:32 PM

@ ab praeceptis,

Sorry for being picky but…

Another issue I meet again and again is that we must stop clumsy annotation games (à la ACSL et al.) and have formal specs in the language itself. Writing functions without at the very minimum Hoare triples, or fumbling with vaguely “specified” vars (like “int”) rather than properly spec’d vars (like “x : int 1 … 366”), is just asking for bugs (and making the life of a well-meaning compiler hard).

AFAIK the only language in “wide scale” production use that is able to do this is Ada. Agree? Are you saying that the whole C and C++ world should switch to Ada? That is what I read here. Let me be clear: if it could be done I would be quite happy, but I just don’t see that happening.

AlanS October 9, 2016 3:50 PM


The British PM is elected by the governing party from its MPs. When Cameron resigned after the Brexit vote there was an election and she won (she was one of two candidates on the final shortlist and the other person dropped out). At the time she appeared to be the sane pick which says more about the other options than her.

Czerno October 9, 2016 4:23 PM

Dear colleagues and praised experts !
I’ve had a semi-urgent :=) need to check a dozen RSA public moduli for known weak common factors against public databases (by Euclid’s algorithm); there used to be online services to this end, after Lenstra et al. reported the problem with weak random generators used in key generation a few years ago; however, just now a (quick) Googling returned no such (working) online testing service in the first pages of results… :=(

Rather than go on through Google (non-)results, I am coming to Bruce’s place here – surely if there is a publicly accessible (and free) service still in activity, several of the regulars will kindly point me to it?
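For a dozen keys, the check itself is a few lines in any language with big-integer gcd; a sketch with made-up toy moduli (real moduli would be 1024-bit-plus integers, and large surveys use product trees instead of the naive pairwise loop):

```python
# Batch common-factor check in the spirit of Lenstra et al.: any pair of
# RSA moduli sharing a prime factor is broken by a single gcd.
from itertools import combinations
from math import gcd

def weak_pairs(moduli):
    """Return (i, j, shared_factor) for every pair with gcd > 1."""
    hits = []
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = gcd(n1, n2)
        if g > 1:
            hits.append((i, j, g))
    return hits

# Example: moduli 0 and 2 share the prime 101 — both are factored at once.
moduli = [101 * 103, 107 * 109, 101 * 113]
assert weak_pairs(moduli) == [(0, 2, 101)]
```

Once a shared factor is found, dividing it out of each modulus yields the other prime, i.e. the full private key for both victims.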

ab praeceptis October 9, 2016 5:17 PM

Gerard van Vooren

a) that already has been done, albeit with some ugly sidenotes (VCC) or too little (e.g. Ivy)

b) the way it has been done isn’t that far away from something like, say, “Secure C”. C parsers are beasts but it’s not like it couldn’t be done.

c) While I agree that C has probably the most urgent need, there are more or less widespread and sufficiently powerful languages that could profit from such an undertaking.

[d) btw, I guess you mean Spark, not Ada; but no problem, I understood you].

ab praeceptis October 9, 2016 5:26 PM


Sorry, I can’t help you with a site, but I can tell you that I do it with Sage, and I seem to vaguely remember that Bernstein/Lange once provided a Python script for that purpose.


Thoth October 9, 2016 8:16 PM

@Nick P, Figureitout
Letting the host computer do the symmetric encryption of the files and the smart card handling the keys would be as good as the OpenPGP card. In fact, for the OpenPGP card, it does not even handle symmetric crypto at all (except for the optional Secure Messaging protocol). All that the OpenPGP card does is RSA Private Key Decrypt and RSA Private Key Sign. The signing is a dumb sign where you feed it a small data (max 256 bytes) and it will not think much and apply signature without asking any further. It wouldn’t even bother to hash and sign as per the normal practice so you have to generate the cryptographic hash and feed it into the card’s signing call. Similarly, the Private Key encrypt is a generic RSA Private Key encrypt which will take in any data less than the modulus size of the RSA key and simply do a dumb encrypt.

I have talked to a smart card developer friend and we ran algorithm speed tests on the Infineon card I am using. The nice thing is it has one of the fastest AES hardware engines amongst all the different smart card variants that exist; the problem is with the transfer speed from the PC client to the reader rather than from the reader to the card and back.

The reason GroggyBox exists is to give an infected host computer control only over the plaintext selection and not the symmetric key selection, like what OpenPGP did for the OpenPGP card. Multiple layers of key-encrypting keys can be used, but that is still not good enough, since a symmetric key generated on any general-purpose host computer is considered compromised.

The goal of GroggyBox is to move end-point security out of the generic untrusted Intel/AMD CPU. Although I can’t say smart cards are fully trustworthy either, they have such a small attack surface that they are easier to guard than traditional computers. I don’t see any way to allow any encryption/decryption algorithm to take place safely in a host computer, since the threat model is to consider the host computer as already compromised; giving away the plaintext is already bad enough, but that’s what it is in most cases.

r October 9, 2016 8:51 PM


New provider?

Programmable in BASIC, not Java like you’re aiming for, but 2–32 KB of space.


Sorry, just stumbled onto it looking at smart cards for the western hemisphere; wasn’t sure I saw it in your list previously.

Thoth October 9, 2016 9:13 PM

Building A Tamper-Resistant Backdoored System With ARM TrustZone and TPM

What you need:
– Full control over the TrustZone ROM image and loader images
– Some control over the TrustZone TEE OS
– Full control over the TPM chip’s PP mode (Administrator login mode)

What to note:
– The TPM’s PP mode is disabled when put into developer mode (a.k.a. install-any-non-Google-ChromeOS mode). The implication is that anyone installing their own OS (a.k.a. dev mode) will not have the TPM’s administration and control available, and that is one of the prerequisites for the tamper-resistant side of this secure backdoor scheme.

How to do with a 20K feet view (Google or some Govt Agency view):
– Load whichever certificates and keys you want (including backdoor public keys) into the TPM since the PP mode is effectively in your control or you have access to the factory.
– Typically the TrustZone root certificate will be burnt into OTP ROM, but this does not provide enough tamper resistance. Google/Govt wants to crap the user’s system up if the user attempts to detach the TPM from the TrustZone. A portion of the ROM bootloader for the TrustZone can be encrypted with a private key stored in the TPM, thus making it even harder to separate the specific binding of ARM SoC and TPM.
– TrustZone would boot into the Secure World before it continues to boot into the Insecure World (userspace OS – Ubuntu/Archlinux …etc…). Since the control of the TrustZone “Trusted Boot” sequence is within Google/Govt control, you can intercept calls within the userspace and inspect (and alter them if needed) 🙂 .
– To prevent the user from replacing their userspace browser or OS certificate chain, all certificates in userspace must be signed with the TPM-protected Google/Govt certificates and keys; otherwise the Google/Govt-controlled TPM can be used to overwrite all userspace certificates and keys anytime it is necessary.
– To force a Linux kernel update even if the user does not want one, the TrustZone Secure World (under Google/Govt control) can be used to communicate with Google/NSA servers to download “approved” Linux kernel images and replace whatever exists in userspace.
– To install a remote kill switch into a user’s TrustZone/TPM setup controlled by Google/Govt, communication between the TrustZone Secure World and Google/NSA servers can be protected by TPM keys and the server side can handle the secure communications with HSMs loaded with specialized “Secure Execution Environment” codelets to ensure Google/Govt employees are not able to meddle with the remote kill switch codelets except for the “Powers That Be” that control the farm of HSMs containing the hardware remote kill switches with their secure HSM tokens.
– Now you have a triple security enhanced backdoor that only the intended custodians can take over whatever they want to take over 🙂 .


Thoth October 9, 2016 9:21 PM

Basic cards are actually very old stuff 🙂 . Been around for a while although JavaCards are old as well. It is the MULTOS and JavaCards that are the grandfather generation and are still doing very well.

About the 23K EEPROM: ouch, it’s too little. No card programmer would want to use anything less than 64K of space, and the latest cards have flash memory of 1.5 MB, which is much better than 23K. I would personally prefer 80K and above since I have more space to get more things done.

From the website:
“In the end, the most important difference between a BasicCard® and a Java® or MultOS®card is not the programming language – it´s the price. And the formula here is simple: the bigger the smart card chip, the higher the price. Java® and MultOS® are resource-hungry, to run a simple application they need expensive smart card chips (i.e. 1 kByte RAM, 64 kByte ROM and 32 kByte E²Prom). Using the Enhanced BasicCard (256 bytes RAM, 17 kByte ROM and 8 kByte E²Prom) costs 1/3 as much.”

Indeed, JavaCard and MULTOS are resource hungry, but you have to have a minimal RAM space to do encryption. An RSA-2048 modulus is actually 256 bytes, and if you want to do encryption with RSA you may need more than 256 bytes; otherwise the 256 bytes would only fit your RSA modulus, with no RAM left for other parameters like the messages to encrypt and process. 256 bytes of RAM for cryptographic purposes is simply too small.
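A two-line check of that sizing argument (the 256-byte RAM figure comes from the BasicCard spec quoted above):

```python
# An RSA-2048 modulus alone is 256 bytes, exactly the Enhanced
# BasicCard's whole RAM, so nothing is left for the message buffer,
# padding, or session state.
RSA_2048_MODULUS_BYTES = 2048 // 8   # = 256
BASICCARD_RAM_BYTES = 256            # from the quoted spec sheet
assert RSA_2048_MODULUS_BYTES == 256
assert BASICCARD_RAM_BYTES - RSA_2048_MODULUS_BYTES == 0
```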

The GroggyBox applet itself consumes more than 256 bytes of RAM and includes its own internal session state machine to ensure correct execution sequences of commands, which BasicCard would not be able to handle properly with only 256 bytes of RAM.

Figureitout October 9, 2016 9:28 PM

Not sure about the decrypt speed
–Ok, take your time and double check it, and maybe cover your bases by saying it’s experimental software. I put up code here that was wrong (the encryption part, one of the most important) lol; fixed, but still. Embarrassing.
Can’t store much on smartcard
–Ok, I’m still misunderstanding. You do crypto on the card, in blocks (stream cipher is just a small block cipher eh), and re-construct file on host PC. Not like using Veracrypt on a SD card where primary crypto operations are taking place on PC and files stored on card.

Getting into endpoint security, I wouldn’t even trust an OS, it’s simply not needed w/ our threat models and if you are separating functions and putting sufficient space such that an attacker would need a close range physical or RF attack to breach the gaps, you’re going to beat like 99.99% of attacks. Doing manual ciphers on pencil/paper at random places, virtually untraceable if not under surveillance already. Just micro’s w/ small looping code and even those could be backdoored in a few ways.

The big problem I run into is, of course programming the chips from one of the big 3 OS’s, not getting off those and flashing from there. And homebrewing makes it hard to have usable files on the big 3 OS’s, which are the most pleasurable to use and get the most support. W/ Arduino and SD card libraries, you can generate several file types and open w/ no trouble on the usual 3 OS’s. So you could do all encryption (AES-CBC, easier since it’s not transmitting it via RF channels or otherwise), like your passwords (back them up multiple places, better to get stolen but you can change than lost!) then store on SD card. All on a ATmega. Sure these libraries could be ported to other MCU’s.

Nick P
345,000bit of data
–345 KB is 345 × 1024 × 8 = 2,826,240 bits, just FYI.

Thoth October 9, 2016 10:00 PM


The scheme is for the GroggyBox file format to be done on the smart card in its entirety. The host computer simply reads every 200+ bytes of a target file and hands it over to the smart card with the GroggyBox applet, and the applet will encrypt the 200+ bytes of file it was given and return the ciphertext. The host computer will keep sending pieces of the file (200+ byte chunks) until reaching the end of the file on the host computer. Due to the smart card’s communication protocol (APDU), it can only handle a total packet size of 256 bytes of payload. For decryption, the ciphertext file or text would be sent and plaintext chunks would be received.

I have given the GroggyBox applet a scratch-pad RAM size of 1031 bytes (the card I am using now has 1.8K userspace RAM – Infineon SLE78 type). If I were to import as much data as possible into the card’s RAM and then encrypt it, I would still have to split the encrypted or decrypted chunks into 256-byte chunks and send them back out. The amount of data entering and exiting the card may be the same even if the data is buffered to be encrypted in one shot. The card is capable of 50KB/sec AES-256-CBC-PKCS5 speed for plain encryption or decryption, without counting the lag time for sending and receiving data from/to the card. Compare that against my current estimate of 3KB/sec on the current GroggyBox implementation 🙂 . Now you can see the huge difference, due to the time taken to transfer data between the PC and the card reader and between the card reader and the card.
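Those figures line up with the two-minute number from earlier; a quick sanity check of the arithmetic (the ~200-byte APDU payload per command is taken from the description above):

```python
# Rough timing from the figures above: at ~3 KB/s end-to-end (APDU
# transfer dominating), a 345 KB file takes about two minutes; at the
# card's raw 50 KB/s AES engine speed it would be about seven seconds.
from math import ceil

FILE_KB = 345
APDU_PAYLOAD = 200                  # bytes of file per command, as above

chunks = ceil(FILE_KB * 1024 / APDU_PAYLOAD)
t_observed = FILE_KB / 3            # seconds at 3 KB/s
t_raw_engine = FILE_KB / 50         # seconds at 50 KB/s

assert chunks == 1767
assert round(t_observed) == 115     # ≈ 2 minutes, matching the report
assert round(t_raw_engine) == 7
```

So roughly 1,767 APDU round-trips per file, and the per-command transfer overhead, not the AES engine, accounts for almost all of the two minutes.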

I have also thought of some “out-of-band” entry and display of sensitive messages via external devices (an RPi over an encrypted network to a host PC with the card and reader).

Maybe a PCB with one of those DIY touch screens and a hobbyist MCU (ATmega, PIC, STM32) with a separate WiFi SoC can be used as the remote secure display and input to log in to the card over a network. Since it uses an ATmega, PIC or STM32, it would be easier to create an open source micro-OS with very limited functions. If a WiFi SoC is uncomfortable, probably a SIM card or smart card reader can be integrated into the PCB (e.g. mobile POS card reader style).

r October 9, 2016 10:29 PM

2 final questions about smart cards:

I looked at some infineons, I know encryption happens on the card some of them state that even in transit data is protected, has anyone looked at dongles in a chamber+sdr?

Second, what’s the military use? Any .mil. 🙂

Thanks for the RAM nudge Thoth.

Thoth October 9, 2016 10:45 PM


If Infineon is talking about data-in-transit protection, it depends on the hardware technology. They have their “Integrity Guard” technology that somehow performs proprietary CPU “encrypted calculations without needing to decrypt into plaintext”; the data stored in their Infineon IC chips is also “encrypted”, and when memory is transferred from the Flash/EEPROM to the CPU and back it is likewise “encrypted in transit” via their proprietary means. Most of these are simply XORs or mostly obfuscation, but I have no idea what’s inside since I am just an end-user, so it’s mostly guesswork here from whatever technology is already known from other IC chip spec sheets.

NXP chips have their “Secure Fetch” technology that “securely transfers data between EEPROM/Flash to CPU and back”. Most likely also obfuscation and XORs.

Dongles in a chamber + sdr ? Not sure what you meant. Need to be more specific.

What’s the use of smart cards in the military (your other question), which I presume is you asking about the smart card’s place in military settings. They are used to protect access to buildings, emails, files and so on for not-so-secret systems. If it’s for a higher secrecy clearance level, the cards will only be used for identity attestation with biometrics and PIN, and they will use a specialized cryptosystem of their own design to handle the rest of the cryptography and electronic security (including EMSEC for the sensitive device, or even a SCIF room or building).

Wael October 9, 2016 10:45 PM


To bet Ok! H

That’s just: 2,826,240 in hex 🙂

2B2000h -> Two Be Two 1000 hex
-> To Be To K; K=1000 (not 1024, in this case, hence the question mark at the end.)
-> To bet ok h 🙂
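For anyone who wants to verify the pun:

```python
# Checking the wordplay: 345 KB in bits, and its hex rendering.
bits = 345 * 1024 * 8
assert bits == 2_826_240
assert hex(bits) == "0x2b2000"   # i.e. 2B2000h
```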

Sockpuppets, don’t mess with them, especially ones that smoke. He speaks in more obfuscated code 😉

Curious October 10, 2016 6:03 AM

Q1: Pardon my ignorance, but am I correct in thinking that website connections encrypted with TLS are nothing like ‘end to end encryption’?

Q2: About TLS, is it possible to have a connection to a website set up with TLS, yet there still being a place in the middle that turns off TLS without me ever noticing it?

Q3: How can I ever simply trust a website for downloading software from it, if there is no end to end encryption?

Thoth October 10, 2016 7:36 AM

You just need to trust the CAs and the browser makers to be on your side 😛 .

TLS was never supposed to be used as an E2E connection.

Having some parts of the website not using TLS will degrade your "green lock" icon into a warning sign (if you trust your browser). Can a partial-TLS site slip past the browser? Yes, but it would be too obvious, so effectively no (unless it's such a huge blunder on the browser maker's side that it will be in the news the next day).

There is also PGP signing, or some other form of software code signing, but as @Clive Robinson loves to rant about code signing and software signing, it's pretty much not useful for anything beyond attributing the code to its authors and (hopefully) detecting tampering while it is being transferred to you. The problem is: do you trust the public key certificate when it's just lying around on the Internet (with or without HTTPS, it doesn't matter)?

The only way to ensure the code is proper is to inspect it yourself and do your own compiles from open source, and that raises the question of whether you can trust your compiler and toolchain, and then the integrity of the low-level code, and then it goes down to chip-level trust … oh my … too much to trust and too much to consider 😀 .

Have fun with the chicken-and-egg chase which is like the Serpent of Infinity, Ouroboros.

I would personally trust what I see and do rather than trust a bunch of compilers and verifiers, but the fact is that if you go down to the ASM level there's still the microcode, and you can keep drilling down. Just keep your codebase small and compact and hope for the best that the Big Bros ain't after you 😀 .

ab praeceptis October 10, 2016 7:49 AM


Just a quick in-between wondering/question: can your Java-whatever make use of the chip's security accel? While this is typically rather primitive in smartcard chips, there are usually at least some accel blocks such as Montgomery multiplication, etc.

I’m asking because the numbers you published seem to strongly suggest that you do not make use of those facilities (and because I’m dumb re. java).

Thoth October 10, 2016 8:50 AM

@ab praeceptis

I have been discussing this with another smart card developer friend of mine and used his algorithm testing tool. What we found is that the chip's HW crypto engine has already been accelerated (probably clocked close to the limit?).

For JavaCard technology, you don't get to choose how the crypto is implemented. You are given a Cipher class and you simply have to hope the card OS authors have made the effort to push the HW crypto operations close to their limits. In my case I am using an FT-A22CR card, which contains an Infineon SLE78 chip. It is in fact the fastest HW AES on any known smart card out there, at only 4+ ms per 256-byte block of AES-256-CBC-PKCS5, whereas other cards take a significantly slower 30 to 40+ ms per 256 bytes with AES-256-CBC-PKCS5. That suggests the other OS authors chose more moderate clocking, while my FT-A22CR has had its Infineon SLE78 HW AES pushed really hard.
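The per-block timings quoted here translate into roughly the following throughputs (a back-of-the-envelope sketch; the 4 ms and 35 ms figures are the ones from this thread):

```python
# Rough AES-256-CBC throughput from the per-256-byte-block timings quoted above.
BLOCK = 256  # bytes per crypto operation

def throughput_kbps(ms_per_block: float) -> float:
    """Kilobytes per second given milliseconds per 256-byte block."""
    return BLOCK / (ms_per_block / 1000.0) / 1024.0

fast_card = throughput_kbps(4.0)   # FT-A22CR / SLE78: ~62.5 KB/s
slow_card = throughput_kbps(35.0)  # typical other cards: ~7.1 KB/s
assert fast_card / slow_card > 8   # roughly an order of magnitude apart
```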

You can see the comparison table below that he made with published algorithm test results for the different smart cards. Look under the AES-256 column and compare against other cards 🙂 .

JavaCard 3.0.4 API linked below as well in case you have an interest or anyone having an interest in it.


r October 10, 2016 9:28 AM


Sharp eye/mind, good catch.


Sorry I'm vague/cryptic; I was asking about dongles (readers/programmers). Are there any that have been put through an isolation chamber with an SDR listening? Are there quality manufacturers vs. cheap manufacturers?
That's what the run-of-the-.mil question was.

r October 10, 2016 9:34 AM

@Curious, Thoth


just hope that big bro ain’t after you

This mad dash you see away from what we know as insecure is partially the result of “big bro being before you”.

ab praeceptis October 10, 2016 9:42 AM


I might be badly mistaken, but: you talked about 345 KB in ca. 2 minutes, which is a little less than 2.9 KB/s. On the other hand you say (and I don't doubt it) that your Infineon chip is capable of doing 256 bytes in 4 ms, i.e. 345 KB in ca. 5.5 s.

Yet you need ca. 120 s.

I see two major candidates to completely f*ck up your times: a) the java in “javacard” (incl. your assumptions on the OS) and b) PBKDF2.

As you seem to stick to the "standard" (rather arbitrary and doubtful stuff based on RSA's musings) you have >= 1,000 iterations of SHA-1 with some mumbo jumbo around it. It goes without saying that a KDF is expensive; after all, that's its raison d'être.

To get something like a vague grip on your situation, I assumed that your premise (that the JavaCard implementation makes good use of the HW crypto accel) is wrong, and that encrypting 256 bytes actually takes 10 * 4 ms = 40 ms. This brings us into the ballpark of 55 s for symmetric encryption alone and leaves another minute for the KDF (which seems plausible), during which the setup/PBKDF2/SHA-1/RSA mumbo jumbo runs.

Again, I'm quite clueless and dumb re. Java and friends, but I know a thing or two about crypto (incl. implementation). From what I see, that whole thing is ridiculous. Not your efforts! Nor the chip, but the JavaCard blah thingy. I have reason to trust that you know your stuff, and I happen to know that Infineon are quite smart in the smartcard chip field. So, from my POV, Java* is the culprit and slows you down by a factor of almost 20. This roughly matches what I would expect from a non-accelerated, pure-software implementation on that class of chip.

I can’t help you a lot due to my java ignorance but from what I see, you should have a close and tough look at your premises, in particular the javacard OS.
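The numbers in this exchange can be reconciled as follows (a sketch using only the figures quoted in the thread: a 345 KB payload, ~120 s observed, and the claimed 4 ms per 256-byte block):

```python
# Observed vs. theoretical time for 345 KB at 4 ms per 256-byte block.
payload = 345 * 1024   # bytes
observed_s = 120.0     # ~2 minutes, as reported in the thread
ms_per_block = 4.0     # claimed SLE78 HW AES speed per 256 bytes

theoretical_s = (payload / 256) * ms_per_block / 1000.0
slowdown = observed_s / theoretical_s

print(f"theoretical: {theoretical_s:.1f} s, slowdown: {slowdown:.1f}x")
# ~5.5 s theoretical vs. 120 s observed -> roughly a 20x slowdown,
# matching the "factor of almost 20" attributed to the JavaCard layer.
```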

Thoth October 10, 2016 9:56 AM

@ab praeceptis

“As you seem to stick to the “standard” (rather arbitrary and doubtful stuff based on rsa’s musings) you have >= 1.000 iterations of sha-1 with some mumbo jumbo around. It goes without saying that a KDF is expensive; after all that’s it’s raison d’etre”

I have not yet implemented PBKDF2-SHA1/2. I am doing just AES-256 for now over a hardware key as part of an alpha stage implementation. PBKDF2 would be done later once things have stabilized.

Thoth October 10, 2016 9:59 AM


“This mad dash you see away from what we know as insecure is partially the result of “big bro being before you”.”

That is the sad truth that Big Bros are way ahead in the security game and have rigged their traps in many corners for us to fall into.

Whether they have been tested for energy emissions (card implementations), I doubt it.

ab praeceptis October 10, 2016 10:19 AM


Oh no. Those lousy results are with AES-256 alone? Unless you or Infineon are grossly lying – neither of which I assume – it seems blindingly obvious to me that your software doesn't use any HW accel at all.
120 s vs. 5.5 s for AES-256 hardly leaves any other explanation.

I just hope that you find a way to make the underpinning java-crap use the chips hw accel.

Good luck! (friendly and honest)

JG4 October 10, 2016 10:39 AM

There are a couple of Arduino “compatible” boards with FPGA out there. This one looks interesting. This is not an endorsement, although I like the concept.

What is XLR8?
•Arduino compatible development board
•Embedded 8-bit AVR instruction set compatible microcontroller
•Programmable with Arduino IDE

It seems that there are more back doors than front doors. Your mileage will vary.

It will be ironic if you were a rabid libertarian and find yourself looking to the government for protection from the idiots, psychotics, criminals and psychopaths in the private sector.

see also:

ab praeceptis October 10, 2016 10:41 AM


P.S. I'd like to suggest that you build and run a quick and dirty test in which you simply run SHA-1 over 1,000 iterations (feeding its output into the next cycle's input, starting with an arbitrary string) to get a rough ballpark figure of what to expect from PBKDF2. To extend that guesstimate you might want to swap SHA-1 for SHA-2 and run 1,000 cycles again.

Such you would at least have a rough idea of what we’re talking about in terms of runtime and where some optimization is needed most.

BTW: AES-256 is a ridiculously fast algo (not the fastest, but very fast anyway). Looking at a typical SLE78 spec, the timing numbers you report would suggest that the chip needs over 11,000 cycles per byte for AES-256. This is way out of any reasonable expectation for a cipher like AES-256. Well noted, this is way out even for a clumsy software-only implementation.
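The quick-and-dirty test suggested above is easy to sketch on a PC (on the card itself it would have to be a JavaCard applet, but the structure is the same):

```python
# Quick-and-dirty ballpark test: 1,000 chained hash iterations,
# feeding each digest into the next round, as suggested above.
import hashlib
import time

def chained_hash(algo: str, iterations: int, seed: bytes = b"arbitrary") -> bytes:
    """Run `iterations` rounds of the named hash, chaining digest -> input."""
    state = seed
    for _ in range(iterations):
        state = hashlib.new(algo, state).digest()
    return state

for algo in ("sha1", "sha256"):
    start = time.perf_counter()
    chained_hash(algo, 1000)
    elapsed = time.perf_counter() - start
    print(f"{algo}: 1000 iterations in {elapsed * 1000:.2f} ms")
```

On a card, multiplying the measured per-iteration cost by the PBKDF2 iteration count gives the runtime budget the KDF will consume.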

Sancho_P October 10, 2016 5:24 PM

Btw, is Markus Ottela around here?
I have a working proposal for a fast, simple USB data diode to discuss in case you are still interested (or someone else).

Markus Ottela October 10, 2016 6:22 PM

@ Sancho_P

I’ve been focusing on the next version of TFC so I’ve been somewhat inactive here. The data diode could really benefit from a speedup, what do you propose?

Nick P October 10, 2016 8:41 PM

@ ab praeceptis

“for one because: what is FOSS? It’s everything from excellent professionals to drugged 14 year old losers in romania piling one crap layer on top of another crap layer.”

Coverity's report shows open source is ahead of proprietary on average, although not by far. Prior studies by the likes of MITRE found open-source software had fewer vulnerabilities, which were patched faster on average. Not to mention flexibility, given many projects are designed to integrate easily with other stuff, whereas lock-in was proprietary's default. The real benefit, though, is protecting critical tooling from the evils that come from proprietary reliance. I've seen in my time the following happen with great technology that was free and open only by promise rather than by licensing law:

  1. Get contracts and lock-in by pretending they're open, but with changes to the standards that defeat that benefit. Or charge an arm and a leg for the docs, like The Open Group.
  2. Company is acquired then product is turned to garbage for some goal of parent company. Microsoft and IBM are legendary for this. Each have ruined great techs that their opponents depended on.
  3. Company is acquired, then product is vanquished along with the whole ecosystem put into it. This happened to one of the only venture companies I recommended, FoundationDB. Only app similar to Google's F1 RDBMS. Apple bought it, killed all contracts with customers, and probably uses it internally.
  4. Company creates shared-source or plugin-driven community around proprietary product with reasonable licensing terms. Source is not under an open license, though. Users contribute tons of code and apps. Company turns around to lock it all back up again while charging lots of money for it. QNX pulled that one.
  5. Company gets patents on stuff, releases the code for free without a patent provision, lets ecosystem develop, and sues anyone who built a product on it. Or just patents stuff in those without patent protection clauses to similarly sue companies. Microsoft’s patent bullshit against Android nets it several hundred million a year in licensing fees despite them not contributing to Android.

  6. Last, simplest, and most common risk is that the company starts making decisions you don't agree with on the product you put a lot of time and effort into. Unlike OSS, you can't just fork it to make it the way you want. You're stuck with it, or you begin a potentially expensive exit.

These risks exist only in proprietary apps. Open-source apps are, by licensing and source availability, immune to these except for the patent suit risk where only a few cover it. The ability to guarantee continuity, fixes, extensions, and so on are strongest reasons to use OSS for anything that critical. I’ve gotten to the point, having seen so many stunts by OS and database vendors, that I recommend the use of open-source tech by default for anything mission-critical. The best of the key apps are usually better quality than the proprietary ones anyway. The problems are easy to work around. Microsoft particularly is becoming an exception on quality but still scheming bastards. Recently, adding all kinds of surveillance features into Windows 10 that are hard to turn off is just icing on the cake. People are already forking Android to deal with Google’s BS but you can’t fork Windows 10.

“because closed source != not verifiable.”

I'm the one who wrote the definitive essay on that topic, since I kept seeing the false distinction of open = reviewed and closed = non-reviewable. The real requirements are qualified reviewers who review the whole product against its security claims, build it themselves with a trustworthy toolchain, publish a signed hash of that binary, and keep doing so for updates. You are then down to trusting the reviewers' character and time. Multiple, mutually distrusting reviewers help. Open source has the advantage that there's no restriction on the number of reviewers, or on visibility into how the program functions vs. binary RE.

If subversion is likely, again it's open source that's preferable, since there have actually been more closed-source subversions. Examples from industry include Borland's InterBase having a backdoor discovered as soon as it was open-sourced, many firewalls with remote access (i.e. FTP) enabled by default with no notification, and Microsoft developers hiding entire games in Word and Excel. There are about as many instances of open-source maintainers rejecting stuff that's obviously a backdoor or a sabotage attempt. It got to the point that attackers don't even try that much any more. These days, they'll try to slip something in that looks like an accidental catastrophe. There's still enough of that in FOSS code that it's deniable so long as it happens rarely. 😉

So, ideal model is Cathedral-style development by team dedicated to quality-security with result licensed as open-source w/ patent, suit protections. This result hits all the right spots for high quality or security.

“”No responsability, no liability, no nothing whatsowever. Fuck yourself!” That is, in fact, a lot less then what I get from evil closed software (well, some anyway). ”

Most closed-source software comes with a shrink-wrap license that basically says the software is provided AS IS. You not only pay for it and forgo open-source benefits, you also get the same lack of guarantees as open source. There's a small number of products in each category with more responsibility taken by the developers, but not the market leaders. Altran-Praxis, who do the Correct-by-Construction method, offer warranties on their stuff. Some Cleanroom vendors used to. I know one local. Rare, but they exist.

“at least most companies can tolerate only so much bad reputation. ”

They all had terrible reputations. They all did terrible things. In desktops, only two survived, one through monopoly tactics and the other through appeal to the self-indulgent luxury market plus use of open source (FreeBSD & Mach in Darwin). In UNIX land, UNIX quality stayed terrible for years, with the open-source systems eventually overtaking them in overall benefits and experience. My data indicates AIX and Solaris 10 are still the highest-quality of the full-featured ones, with the highest overall being OpenBSD. The former two have less quality but make more money via… lock-in again. OS quality and commercial backing or success have no connection for companies doing marketing and lock-in well.

” are payed by tax money – and should hence by default and law make all of their work open source! ”

I’m with you there. It’s ridiculous. So much great work paid for by us but not available to us unless we… pay again.

“As for verified compilers: Yes, *ML is one good approach, far better than lisp anyway.”

On a site about tiny C compiler, I found a tiny Ocaml compiler you might like from Japan. It’s for educational purposes. The code is beautiful in simplicity, structure, and readability compared to about any compiler project I’ve seen in C or LISP. An ML-based bootstrap compiler for other compilers might take a fraction of that code.

“proper formal spec … through the diverse stages … to formally verified code generation”

That's the ideal. Not happening so far. One can get a long way with something like Haskell/OCaml, a functional style for most of the compiler, Design-by-Contract, QuickCheck, thorough testing based on the spec, and so on. The OCaml compiler itself was already good enough in traceability and quality that Esterel barely had to modify it for a DO-178B code generator that needed source-to-object verification.

” I’m just rediscovering a Prolog based CLP system as a very useful tool to create lots of helping information such a range checks etc.”

Glad you discovered this. I figured it might be true given people on Hacker News keep telling me FOL and Prolog are good for type systems. CLP on top makes sense.

“have formal specs. in the language itself. ”

I agree. It’s why I always reference Design-by-Contract that Eiffel put in their language followed by SPARK and then Ada 2012. The quality difference they’ve made is huge. Plus, it gives the automated methods something to constrain the state explosion that happens during analysis. Makes automated feasible.

@ r

“Maybe I’m just one of the paranoids you speak of, but it seems to me one almost needs a lawyer for mixing licenses.”

Oh no, I agree and so do FOSS experts. It’s why they have a certified set of licenses to choose for the needs of your product. Plus, most keep incoming code in the license they standardize on by default. Copyright assignment is an even bigger problem as it often should happen but didn’t across thousands of contributors. All kinds of bullshit. Some more people definitely need to get on analyzing common combinations.

@ Gerard van Vooren

“AFAIK the only language which is “wide scale” in production that is able to do this is Ada. Agree?”

Don't forget the language that invented the mass-adoption version, embedded it into the whole stack of a language, got it deployed into significant industry use, and inspired Ada's. That's Eiffel. It also had void safety and concurrency safety long before languages like Rust showed up. Ada 2012 w/ SPARK 2014 is undeniably the safest of the system languages, but Eiffel invented Design-by-Contract and is still around. Meyer et al. should always get credit.

@ Thoth

“has one of the fastest AES hardware engine amongst all the other different smart card variants that exist and the problem is with the transferring speed from the PC client to the reader than from the reader to the card and back.”

That’s good to know. It’s another knock on the smartcard scheme but at least the chip itself can handle the job.

“Multiple layers of key encrypting keys can be used but that is still not good enough since the symmetric key generated on any host general purpose computer is considered compromised.”

In my scheme, the keys can be generated wherever you want. The reason I put it on the host is that the host already has the plaintext. You're saying that using the per-file key on the host would compromise the key. But the key is used to protect the plaintext, and the host already has the plaintext itself, so it's equivalent. This is true both when encryption is originally applied to something on the host and when that something is decrypted later. Since that's true, I decided to improve on the situation by offloading some crypto to the host for just the file being accessed. The keys of all other files are still encrypted/signed by the smartcard. They're unavailable, as you intend, unless whatever host encrypted them was already compromised.

“I don’t see any way to allow any encryption/decryption algorithm take places safely in a host computer since the threat model is to consider the host computer as already compromised ”

I get that idea. I'm saying that if the host is already going to get a specific plaintext, you can go ahead and decrypt it on the host, as there's no difference. You just have to protect every other file. This is why my proposal combined per-file keys with smartcard-based generation and encrypted export of them to the host. A second host sees nothing unless your smartcard allows it, even if the host assists with bulk decryption.
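Structurally, the per-file key scheme being discussed is envelope encryption: each file gets its own key, the card wraps every key under a master key it never exports, and only the key for the file currently being accessed is ever unwrapped for the host. A toy sketch of that structure (HMAC-SHA256 is used as a stand-in for the card's real key-wrap cipher; this is my illustration, not GroggyBox's actual code):

```python
# Toy structural sketch of the per-file-key scheme discussed above.
# The "card" wraps per-file keys under a master key it never exports;
# the host only ever sees the one unwrapped key it needs.
# HMAC-SHA256 here is a stand-in for the card's real AES key-wrap.
import hmac, hashlib, secrets

MASTER_KEY = secrets.token_bytes(32)  # lives on the smartcard only

def card_wrap(file_key: bytes, file_id: bytes) -> bytes:
    """Wrap (XOR-mask) a 32-byte file key with a pad derived per file_id."""
    pad = hmac.new(MASTER_KEY, b"wrap:" + file_id, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(file_key, pad))

card_unwrap = card_wrap  # the XOR wrap is its own inverse

# Host side: generate a fresh key per file, ask the card to wrap it.
file_id = b"report.txt"
file_key = secrets.token_bytes(32)
wrapped = card_wrap(file_key, file_id)        # safe to store on the host

# Later, the card unwraps ONLY this file's key for the host;
# every other file's key stays wrapped under the master key.
assert card_unwrap(wrapped, file_id) == file_key
assert card_unwrap(wrapped, b"other.txt") != file_key
```

The design point being argued: compromising the host during one file's session exposes that one unwrapped key (and its plaintext, which the host had anyway), while all other wrapped keys remain useless without the card.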

r October 10, 2016 8:48 PM

@All, CC: Nick P (no sarcasm, you just posted a nice response not trying to detract but we have but 1 white wall.)

BBC is reporting that the TV5Monde attack is now being attributed post hoc to Russia, not to ISI[S|L]. I know those tapes are always rolling, but if they're going to roll them back, shouldn't it be done in a public manner, like what some said about the GOP hacks? It smells like mission creep to me at this point, but who knows – it could fit into an AD&D campaign nicely.

Nick P October 10, 2016 9:25 PM

@ r

It seems like this might have something to do with the US vs Russia stuff going on right now. It’s actually NATO vs Russia. So, all these stories blaming Russian hackers for any common thing… including in US, UK, and French products/services… seems like it might be political posturing. I know Russian intelligence is in top 3 of espionage, hacking, etc. Yet, the kinds of claims we’re seeing seem reflexive and herd-minded. I don’t buy it for now.

I also don’t care what’s true about who attacked who in a cyberspace all sides deliberately leave insecure. I’ll call them all out for scumbags they are. 🙂

Rollo May October 11, 2016 1:42 AM

@ Nick P

  1. Company is acquired then product is turned to garbage for some goal of parent company. Microsoft and IBM are legendary for this


Curious October 11, 2016 4:31 AM

Off topic:

A few weeks ago I had an idea here about maybe creating power from fluctuations in what I thought could be similar to a Bose–Einstein condensate. In this recent video, apparently linked to topological condensed matter and the recent Nobel Prize in physics, there is a presentation about 'emergent gauge fields' that to me seems to be about fluctuations in a vacuum. I wonder if the subject matter in the presentation might perhaps be linked to some broader idea of harvesting power from a vacuum (maybe because of how space-time is not solid and is forever emergent). At some point, answering a question, the presenter states that he "doesn't know what the model is equivalent to", which I thought was a little odd, but maybe I misunderstood his answer. ("Some Boundary States for Bosons – Edward Witten")

Curious October 11, 2016 4:42 AM

“Yahoo disables automatic email forwarding feature: AP”

Ah, I think I understand this one. If you decide to use a different email provider, you would want to temporarily forward emails from your old email account to the new one. As I understand it, Yahoo is disabling this feature now, presumably making a transition to a new email provider awkward if you abruptly were to stop using the old email account.

“Yahoo Inc disabled automatic email forwarding at the beginning of the month, the Associated Press reported, citing several users.”

“The company’s website says that the “automatic email forwarding” feature is under development and has been temporarily disabled.”

Curious October 11, 2016 4:48 AM

To add to what I wrote:

Hm, I wonder if maybe I got that story a little wrong. I mean, if I were to simply terminate the use of an old email account of mine, then maybe the emails won't be forwarded after all, as I initially thought. Or perhaps the email forwarding is a service provided to people even with their account terminated (for some period of time at least).

Curious October 11, 2016 10:10 AM

More on trapdoored primes:

“NSA could put undetectable “trapdoors” in millions of crypto keys”

“Researchers have devised a way to place undetectable backdoors in the cryptographic keys that protect websites, virtual private networks, and Internet servers. The feat allows hackers to passively decrypt hundreds of millions of encrypted communications as well as cryptographically impersonate key owners.”

Curious October 11, 2016 10:16 AM

To add to what I wrote:

As somebody that doesn’t know much about crypto, I can’t help but think the internet tech and infrastructure is a perpetual shitshow.

Thoth October 11, 2016 10:21 AM

re: NSA backdoor

The answer is to use One-Time Pads. They are the most cumbersome, the most controversial, and the most well studied. They break down most easily with even the slightest human error, yet when used correctly they cannot be broken (thus called the ideal cipher). Flammable OTP paper would make a nice key exchange mechanism.

r October 11, 2016 10:38 AM

@Thoth, Curious,

OTP with some sort of salt or whitening, until the PRNG/etc could be actually verified.

Ted October 11, 2016 2:47 PM

‘Center to coordinate private, public sector for tech future’

“The World Economic Forum is dedicating a new center in San Francisco to connecting government officials and tech companies with the goal of more legislative policies to foster tech development. The center will focus on the collective decision-making of consumers, communities, businesses and government to bring about a more prosperous and sustainable future, said forum founder Klaus Schwab.”

CallMeLateForSupper October 11, 2016 4:51 PM

“The answer is to use One-Time Pads.”

Please explain how that would work for comms of “websites, virtual private networks, and Internet servers”.

Sancho_P October 11, 2016 5:48 PM

@Markus Ottela

Good to hear you are still in the battle.
I was asked to connect a medical device (with a serial port) to a PC (USB). For safety, and to protect from ground loops and spikes, optocoupler isolation was mandatory.
Now it's easy to modify this bi-directional design into a replacement for your original data diode. The improvement is the coupler, and it doesn't use RS-232 or batteries.
OK, I’ll clean up my documentation, is your email at still good?

Thoth October 11, 2016 7:32 PM


You can start by reading up on military link encryptors as a starter. I am sure they have a good deal of information on how to handle pre-shared keys (a.k.a. key-fill loading from EKMS systems).

Hmmm …. I remember the blueprints are labeled as classified and cryptographic controlled items so it would be a little harder to find those blueprints though.

The easier, unclassified route would be to look at @Markus Ottela's TFC design and modify its OTP-based variant (that should be the old version) to work as a link-layer encryptor, where you tunnel highly sensitive material through it with limited bandwidth. You would not use it for anything except the most sensitive data, so as not to quickly burn through all your OTP pads and have to re-key from your HQ again. Probably a 1 TB HDD worth of OTP would be useful for such a modified TFC link-layer classified encryptor.

The TFC project has a very good base design anyone can use freely and openly to implement all sorts of interesting security stuff that requires very high security assurance.

Figureitout October 12, 2016 1:10 AM

–Got it. A few seconds (or minutes to an hour for more data, like GBs) isn't a big deal. Anytime I need to encrypt something, I'm ready to do something else for a bit. One won't be doing this all the time normally; there's data that needs this protection and other data that just needs to be backed up 3+ ways. It would also need a reason to even be digitized if it's that important. I'd stay focused on just working with the PC, but I'd use the "touch screens" for RasPi or Arduino just for better graphics and displaying more text; the touch aspect of them isn't that good. Otherwise I'd be looking at what LG is doing with their cap-touch algorithms and such. Compared to a lesser Samsung phone, it's night and day.

It's more code, potentially more controllers, and of course more memory spots for malware. For instance, I can connect an LCD to an Arduino using only 4 pins plus 2 for power; it saves on pins, but there's a separate driver chip and I don't know what all is in there.

And I don’t get why an OS is needed for such a task, but there’s a few RTOS’s available for all those chips.

Thoth October 12, 2016 1:23 AM


If a person is willing to use a complex scheme like GroggyBox, which includes mechanisms that aim to provide deniability of possession of the cryptographic keys, that person likely has something highly valuable to protect and, as you mentioned, is likely to use it only for small text documents containing highly sensitive details.

The best bet would still be the Ledger devices which provides both screen and inputs with almost open hardware.

Despite the fact that the smart card version of GroggyBox is currently running at a slow speed, I will be playing around with making it faster without changing the security mechanisms. Things like the applet logic can be improved, and the Java desktop client can be improved by using native I/O to read files (it currently uses Java's I/O via FileInputStream/FileOutputStream). Something like a random-access file class (e.g. Java's RandomAccessFile) giving block-level access would be much more advantageous (and also more dangerous), as would a file cache when reading/writing files for higher efficiency.

Rollo May October 12, 2016 1:52 AM

@ Thoth

> “The answer is to use One-Time Pads. They are the most cumbersome and yet the most controversial and the most well studied. They break down the easiest with even the slightest human errors but they can be a pain to break (thus called the ideal cipher). Flammable OTP papers would be a nice key exchange mechanism.”

Thoth, as always, appreciate your level headed & insightful sharing.
RE flammable: I recall you referring to methods for this – quick combustion using a certain kind of pencil on a certain type of paper – potassium-coated paper, I recall.

Question: above you describe two attributes of the OTP, with the result that they are 'called the ideal cipher'. I get the second attribute making them ideal – "a pain to break." But the first attribute, 'they break down the easiest with even the slightest human errors', I don't understand.

In the context of crypto such an insight can be quite educational

Is it an error on the part of the third-party interloper, or an error by the recipient / the party doing the encrypting, that you refer to?

Thoth October 12, 2016 3:35 AM

@Rollo May
OTP has been discussed very frequently in the past, and @Clive Robinson has given a lot of explanations.

Regarding the first attribute, where OTP can be very difficult to use: if you have a 1 MB message, you need 1 MB of keymat for the OTP, and you have to find a way to transfer it securely. On top of that, both ends must find a way to synchronize its use and never re-use OTP keymat once it has been exposed (already used, or captured/tampered with). Human error is the problem: a sender re-using OTP keymat, or using keymat suspected of having been captured or tampered with by an external party, can break the entire security scheme of the OTP.
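The re-use failure described here is concrete: XOR-ing two ciphertexts that share a pad cancels the pad entirely, leaving the XOR of the two plaintexts, which classical cryptanalysis (crib dragging) can often unravel. A minimal sketch:

```python
# One-time pad: XOR with truly random keymat, used exactly once.
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "keymat must be at least message length"
    return bytes(d ^ p for d, p in zip(data, pad))

pad = secrets.token_bytes(32)
c1 = otp(b"ATTACK AT DAWN", pad)
assert otp(c1, pad) == b"ATTACK AT DAWN"  # decryption works

# The human-error case: re-using the same pad for a second message.
c2 = otp(b"RETREAT AT TEN", pad)
leak = bytes(a ^ b for a, b in zip(c1, c2))
# The pad cancels out: leak == plaintext1 XOR plaintext2.
assert leak == bytes(a ^ b for a, b in zip(b"ATTACK AT DAWN", b"RETREAT AT TEN"))
```

This is exactly the failure that broke VENONA-era Soviet traffic: the math is perfect, but one re-used page of keymat collapses the whole guarantee.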

Clive Robinson October 12, 2016 5:37 AM

@ Rollo May,

… on a certain type of paper – potassium coated paper i recall.

Not quite; it is washed/soaked in potassium permanganate (KMnO4), which is a very strong oxidizer. Just dropping a "simple alcohol" on it, such as the glycerin used in cake making, will cause it to spontaneously ignite.

It has a downside other than its flammability: over relatively short time periods it oxidizes the paper, so it's not something you can archive (not that you would want to archive an OTP).

You can also use nitrates for the same purpose, which have the advantage of not turning the paper a distinctive brown colour. Two of which are relatively easily available sodium nitrate and potassium nitrate as they are used in meat based food products to prevent the likes of botulism, preserve colour and add flavour (with a side order of coronary heart disease and colon cancer).

Although they can be purchased as "pink curing salts" from barbecue and home-bake shops for hams and turkeys in the festive seasons, those are only around 6% nitrate. Thus it is better to go to a charcuterie supplier and get the nitrates in much more concentrated form for making your own curing salt mixtures (the salt in salt beef is the "saltpeter" nitrate, not table salt).

ab praeceptis October 12, 2016 6:13 AM

CallMeLateForSupper, Thoth

I've seen some "quasi-OTP" in use. It came down basically to: use a really good PRNG with a very long cycle, etc. Then exchange just two seeds, one of which is the start seed, while the other is used to seed a channel selector (a way of reseeding).

From then on the parties exchange only short numbers, which tell the other side how many cycles to run PRNG 2 through to define the channel on PRNG 1, which then produces the key for symmetric en/decryption of the message.

Disadvantage: No reservoir of OTP keys
Advantage: No reservoir of OTP keys

In other words, you basically exclude the danger of having your OTP being exhausted and/or found but on the other hand it’s just a quasi OTP.

Some dislike that, saying that it's not the real thing. I, however, do like it a lot, mainly for three reasons:
a) after all, an OTP is but the output of high-quality randomness.
b) it solves another problem, too, namely the whole (non-pq) PK issue. Assuming one has physical contact from time to time, which is a very reasonable assumption for companies, banks, etc., the parties have a "new" and random key for each and every contact/transmission/session without the need to rely on PKE (SSL/TLS …).
c) it can be arranged such that neither party holds the "OTP", which is attractive because an OTP list/CD-ROM/* can become a liability very quickly. Moreover, an OTP means one has to put trust in those holding the list.

In fact, I happen to know at least one company that has something like this in place for day X, when PK gets brutally weakened in a post-quantum scenario. (Which, of course, isn't perfect, in that it relies on the fact that Shor's algorithm successfully running on an actually usable Q-system is not necessarily published right away. While a corporate player is very likely to brag immediately to conquer the market, state players might prefer to stay in the shadows.)

Clive Robinson October 12, 2016 6:28 AM

@ ab praeceptis,

But still, as i.e. Bernstein/Lange demonstrated, a good level of primeness testing is required and unfortunately often not done. There are quite a few out there who limit themselves to rather superficial checking.

This is especially true of embedded devices such as routers and the like that generate their own PK certs at initial boot-up. This lack of checking is also a feature –or lack thereof– on top of the very poor entropy these devices have. Thus, to the likes of the NSA and other major SigInt organisations, it is the gift of a near backdoor.

Markus Ottela October 12, 2016 6:35 AM


Yes, the email is good, but is there a reason to use mail? Note that I've revoked my PGP keys until I can find a better process for private key management. The lack of forward secrecy really puts stress on the security process.


It looks like the OTP is getting deprecated. Cascading crypto won’t be part of next release but maybe in the future. The current version 0.16.05 is decent but starts to feel quite unusable and insecure compared to what’s coming.

As for the discussion of OTP, it’s not the way to go for all encrypted communication. AES256, XSalsa20 etc. have shown very little signs of weakness so they’re not the part that’s going to fail. With that being said, PSK is more usable and it’s already standardized in TLS:

Were discrete log and semiprime factoring to become trivial, end-to-end encryption would still work with PSKs, with no need to use OTPs. After DES (which was known to be trivial to break), the news has revolved around public-key crypto: 512-bit DHE/RSA export-grade stuff, and the fact that some primes with 1024-bit DHE can be backdoored (as per the latest Ars Technica article). Then there's the lost trust in NIST and their P-curves. But I think these are all a good thing: moving to 2048/4096-bit DHE is what we were supposed to be doing in the first place. Curve25519 seems to be fine too.

The split-TCB design isn't going to replace the client-server model, because that's where the convenient consumer products are. Therefore that's where the money is, where business security practices that allocate for some risk are, and where the race between exploiting and patching vulnerabilities is run. Scalable exploits are the threat; breaking all crypto isn't. While TFC can't handle it, these systems will work just fine with post-quantum crypto such as McEliece with Goppa codes.

TFC is for environments where failure of end point security might cost the user their life. The architecture does not exactly forbid the user from typing a 125kB McEliece public key, but I think at the moment PSKs maintain the sanity of the user seeking post-quantum solution.

1 TB at the theoretical max speed of the current data diode, 19200 baud, means the pad lasts 16.5 years of non-stop transmission (assuming 10 baud = 1 byte):
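The 16.5-year figure checks out with simple arithmetic (a sketch; the 1 TB pad size and 19200 baud throughput are the figures quoted above):

```python
# Back-of-the-envelope check of the pad lifetime figure above.
# Assumptions (from the comment): 19200 baud, 10 baud per byte, 1 TB pad.
bytes_per_second = 19200 // 10        # 1920 B/s through the data diode
pad_bytes = 10**12                    # 1 TB (decimal)
seconds = pad_bytes / bytes_per_second
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))                # → 16.5
```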

PSKs with hash-ratchet-based forward secrecy have a lifetime of 2^128 transmissions/sessions, depending on implementation.

Markus Ottela October 12, 2016 6:49 AM

@ab praeceptis

The seed is essentially the root key of a symmetric-key hash ratchet. The CSPRNG must alter the state of the hash ratchet to ensure forward secrecy; is this the way it's done? Or are the initial seed values stored indefinitely? It's a great mistake to confuse such schemes with OTP, which has a very strict definition. Any keystream expanded from a seed (=key) with a CSPRNG is either a stream cipher or a sub-keyed block cipher.

Thoth October 12, 2016 7:27 AM

@Markus Ottela, Clive Robinson, ab praeceptis
“The current version 0.16.05 is decent but starts to feel quite unusable and insecure compared to what’s coming”

Security of OTP lies in the discipline of keymat usage and the method of keystream generation. These two are the most difficult to maintain properly, hence OTP's perceived insecurity and difficulty of use.

“As for the discussion of OTP, it’s not the way to go for all encrypted communication. AES256, XSalsa20 etc. have shown very little signs of weakness so they’re not the part that’s going to fail.”

Although there are no known further attacks on the mathematical side of AES, I steer clear of it whenever the opportunity arises. AES is known to have quite a few side-channels in and of itself, and I would prefer Salsa and ChaCha alone if possible, since they are designed to mitigate a few classes of side-channels. How about Twofish or Serpent? I wouldn't know, as I am unaware of them being deliberately designed (like Salsa and ChaCha) to be side-channel resistant from the start. I am coming to think of simply using ChaCha or Salsa alone for symmetric encryption, since no other cipher is known to have had that much "safety harness" put into its very inception, as DJB has done.

I am still using AES in my smart card projects, since I have no choice due to the speed constraints that exist on smart card chips; otherwise I would have moved away from AES to ChaCha20. I have recently written a smart card library for ChaCha20, but its speed is too slow and leaves much to be desired.

OTP, in my opinion, is still highly reliable and relevant in the years ahead, as many nations have started to discuss limitations on and backdoors in cryptography and personal digital/electronic schemes. It is unlikely that such legislation would advance internationally or in national communities, but there is always a crazy spark of opportunity somewhere that could allow a world-wide ban on secure communication via known cryptographic methods. The simplest and still unbreakable cipher, which cannot be prohibited in any form when used correctly, would be OTP: all it does is a XOR operation, and it would be absurd to outlaw something like XOR.

PQC is still in its infancy, and we should wait for it to mature a little more before starting to use it. DHE, RSA and ECC (including SafeCurves) will still be a staple of asymmetric cryptography for anything that is not secret and sensitive. Anything that is truly sensitive must use known symmetric cryptography (PSK), as you mentioned, to avoid catastrophic security failures.

I would be very reluctant to call Curve25519, or any DJB-made curve, safer than any other ECC, since the ECC maths were pioneered and pushed by the NSA. Although DJB et al. went a long way to prove the safety of different curves and to check for backdoors, I would be particularly cautious, since it is an NSA-driven field. To be on the safer side, all my schemes are based on RSA and DHE; only in the exceedingly rare case where there is a real necessity would I resort to any sort of ECC (including SafeCurves) in schemes I design, as I see the curves as long tainted from inception, despite the lack of substantial claims that the maths of ECC is tainted.

In a world where an international ban of civilian digital/electronic cryptography at a personal level were to take place, I think it's a safe bet to put OTP in front for the really sensitive cryptographic communication, to be used only when necessary; in my view the second cipher, usable on a daily basis, would be ChaCha20. I have implemented the ChaCha20 cipher from scratch (in Java, using basic bitwise operations without JCE or other libraries); it is a very easy cipher to implement, and a strong cipher designed with "safety harnesses" that other ciphers did not add to their design. It should be easy to transmit the ChaCha20 cipher design in the form of printed T-shirts, some form of visual codes, etc., to defeat cryptography blockades like what happened in the 1990s (Crypto War v1.0).
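For illustration, the core of that implementation simplicity is the quarter-round, which the whole cipher is built from. A minimal Python sketch (the test vector is the one published in RFC 7539, section 2.1.1; everything else is a straight transcription of the public design):

```python
# ChaCha quarter-round: four adds, four XORs, four rotates on 32-bit words.
MASK = 0xFFFFFFFF

def rotl32(x, n):
    # Rotate a 32-bit word left by n bits.
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 7539, section 2.1.1:
print([hex(x) for x in quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)])
# → ['0xea2a92f4', '0xcb1cf8ce', '0x4581472e', '0x5881c4bb']
```

The full cipher applies this operation to a fixed 16-word state layout over 20 rounds, which is why it fits on a T-shirt.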

Using self-synchronizing PRNGs for OTP is a nice idea, but the trouble is the complexity of the self-synchronizing mechanisms. They should be implementable by mere mortal developers, so that they don't make too many mistakes and turn OTP into a death trap.

Thoth October 12, 2016 7:41 AM

@Clive Robinson, ab praeceptis
A useful way to self-synchronize without using a clock would be to use a counter. Taking a page out of the HOTP standard (HMAC-based One-Time Password) used for financial security tokens: it uses a counter and a secret key. Whenever the counter is out of sync, there are "jump forward" points where you simply fast-forward to a demarcated counter value to sync up. An example is to set every 10th counter value as a syncing point, so that if the self-synchronizing counters desync, the parties can skip forward to a pre-arranged counter value, which does not need to be secret, as specified in the RFC standard for HOTP token calculation.

For such a self-synchronizing keystream via counters, a secret 256-bit key can be used to derive a keystream together with a 256-bit counter: hash the secret key, hash the current counter value, then concatenate the two hashes and hash them a final time to derive a 256-bit keymat. When out of sync, simply skip the counter forward to a pre-arranged sync value (i.e. every 10th count); the other party keeps an eye out for possible skip-forward scenarios, or a pre-defined "re-sync" secret message can be used to indicate a need for re-syncing.
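A minimal sketch of the derivation just described, assuming SHA-256 as the hash and an illustrative 32-byte big-endian counter encoding (the function names and the encoding are made up for this example):

```python
import hashlib

def keymat(secret: bytes, counter: int) -> bytes:
    # H(H(secret) || H(counter)) as described above: one 256-bit keymat
    # per counter value, derivable independently by both parties.
    h_key = hashlib.sha256(secret).digest()
    h_ctr = hashlib.sha256(counter.to_bytes(32, "big")).digest()
    return hashlib.sha256(h_key + h_ctr).digest()

def resync_counter(counter: int, step: int = 10) -> int:
    # Skip forward to the next pre-arranged sync point (every 10th value).
    return ((counter // step) + 1) * step

# Both parties derive the same keymat from (secret, counter) alone:
k = keymat(b"shared 256-bit secret", 7)
assert len(k) == 32
assert resync_counter(7) == 10 and resync_counter(10) == 20
```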

Clive Robinson October 12, 2016 7:42 AM

@ ab praeceptis,

Some dislike that, saying that it’s not the real thing. I, however, do like it a lot…

Ahh the “cola consumer test” for crypto 😉

Obviously it's not the "real thing", as we are talking "Deterministic -v- Truly Random". But does that matter?

First there is the philosophical aspect of "Truly Random" and what it might be; we glibly talk of "entropy in noise" but cannot say how to tell the difference between what is entropy and what is noise. A definition of noise is a signal that we cannot determine in advance.

Thus the key point is not the use of a deterministic system to generate noise, but whether an observer can tell the difference…

If the observer cannot tell the difference, then the PRNG -v- TRNG argument would appear moot.

Unfortunately it’s not. Most TRNG output is strongly biased in some way, it’s why the care and feeding of such beasts is of utmost importance. It is also not bounded which is quite problematical for OTPs. Thus making your own OTP from your own TRNG is problematic at best (something they tend not to tell you on courses etc).

But PRNGs whilst not needing much in the way of care and feeding have other problems, their output has an unnaturally flat distribution and is very much bounded.

Thus there is a gap between the two that is for most an unknown area, however I suspect it is rather better known to those in the major SigInt entities (and carefully guarded).

You thus have the prospect that the likes of the NSA and GCHQ are running a more modern “VENONA” project. A consequence of this is that although they might not be able to break a CSPRNG they may well be able to recognise it as such thus be able to use other techniques such as EmSec, black bag or even wet work to find the underlying generation mechanism.

You then have a problem, which I can tell you about from real life. Back towards the end of the last century, I needed an entropy source I could use for testing. I had a number of choices for the generator, such as storing and reusing the output of a TRNG, or developing a PRNG with appropriate output. I settled on modifying ARC4, and used a BBS generator to do the modification. For obvious reasons it passed the statistical tests then in use, and could produce as much test data as I needed at very high data rates that I could not have got from a TRNG and storage. As we now know, ARC4 has a number of problems that were not known back in the late 1990s. Thus, had I used my generator for communications, there is a likelihood it could now be broken. It's why the NSA is known to have a "store everything" policy. For obvious reasons this future-insecurity issue does not apply to a "real thing" OTP.

Realistically, there are legal requirements that have 999-year lives; none of our current deterministic systems look like they are going to make 50 years, let alone a millennium. Thus for some people the use of the OTP, or the other "provably secure" systems, is going to remain the attractive if not only option.

I don't know how old you are, but 50 years is easily within an adult lifetime these days. And there are people in their teens, twenties and thirties doing the "while you are young" thing that could easily come back to haunt them in later life. Unlike previous generations, much of this is getting sent/stored electronically, and thus is ending up in various people's data vaults; encrypted or not, it becomes a hostage to future technology one way or another… Thus I can understand the "real thing" differentiation that some people make.

That said, I'm a long way from saying that CSPRNG stream generator systems should not be used, just that they should be used with care, not just in their initial design but in actual use. One criterion I mention when discussing this is that of "modified standards": use several differing types of cipher algorithm in a chain, or in other ways mixed together.

For instance, most block ciphers have two entirely separate parts: the actual rounds, and the key expansion algorithm that generates the round sub-keys from the actual crypto key. There is nothing to stop you –other than testing– from replacing the key expansion algorithm with another one that is equal or better in security. You could arguably replace it with a stream cipher that changes the round sub-keys on every encryption or decryption, etc. Further, as you are effectively using it to generate an OTP, you can add one-way functions to the output, as there is no need to reverse the process. The choice is wide, as long as people remember the usual warnings about not designing your own crypto 😉
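As a toy illustration of the idea (not a real cipher design): derive the round sub-keys from a keystream rather than from a fixed key schedule, here with SHA-256 in counter mode standing in for the sub-key generator, so that no two blocks are processed under the same round keys:

```python
import hashlib

def subkey_stream(master_key: bytes, n_rounds: int, block_index: int):
    # Toy replacement for a block cipher's key schedule: each block gets
    # fresh round sub-keys drawn from a keystream keyed by the master key.
    for r in range(n_rounds):
        ctr = block_index.to_bytes(8, "big") + r.to_bytes(4, "big")
        yield hashlib.sha256(master_key + ctr).digest()[:16]

keys_block0 = list(subkey_stream(b"k" * 32, 10, 0))
keys_block1 = list(subkey_stream(b"k" * 32, 10, 1))
assert keys_block0 != keys_block1 and len(keys_block0) == 10
```

The one-way hash here also gives the property mentioned above: the round keys cannot be run backwards to recover the master key.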

ab praeceptis October 12, 2016 7:49 AM

Markus Ottela

Sorry if I expressed myself clumsily.

There are, in effect, two PRNGs used (which can also be one and the same used serially). P1 has a starting seed which has been transmitted physically (e.g. hand to hand) and creates a series of seeds (and possibly channels) for the second one, P2, which is then used to create keys for symmetric crypto. One major practical advantage is that rather small numbers can be transmitted traditionally (say, by phone) to seed P1 to get a seed for P2. Additionally, there can be variations of this mechanism where, for instance, one more factor is transmitted traditionally.

In a case I know of, they have their own “courier” who transports diverse stuff (mainly documents, DVDs, backup tapes, etc.) between HQ and the diverse factories, offices, etc. This courier is also used to transport an envelope with a sheet of paper with some (iirc) 32 byte random string that is then used as “basis” for some days (till the next one arrives) upon which simple operations are made.

Example: the paper for 3 days says [some random 32 bytes]. A simple algo then performs some operation based on another bit of information transmitted by phone; say, for the sake of an example, it adds some 8-digit number to the 3-day base, the result of which then serves as the seed for P1.

They like this also for being multifactor and very pragmatic (well, they're industry people). If the paper gets stolen from the courier, no bad things can happen. To attack the mechanism one would need to know a) the PRNGs, b) the 3-day "base code", and c) the per-transmission code.
Seen from an external intruder's perspective (which they assumed to be the major danger to defend against), who only sees bits on a network, one would see quite the same as if OTP were used.
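A rough sketch of the mechanism as described, with iterated SHA-256 standing in for the two PRNGs; the combining step and the cycle count derived from the phone code are illustrative assumptions, not the company's actual algorithm:

```python
import hashlib

def prng(seed: bytes, cycles: int) -> bytes:
    # Stand-in PRNG: iterate SHA-256 'cycles' times over the seed.
    state = seed
    for _ in range(cycles):
        state = hashlib.sha256(state).digest()
    return state

# Factors: courier-delivered 3-day base + short code read over the phone.
base_3day = bytes(32)                 # placeholder for the courier's 32 random bytes
phone_code = 12345678                 # per-transmission 8-digit number (example)

# Combine both factors to seed P1; P1's output seeds P2; P2 is cycled
# forward by a number derived from the phone code to pick the "channel".
seed_p1 = hashlib.sha256(base_3day + phone_code.to_bytes(4, "big")).digest()
seed_p2 = prng(seed_p1, cycles=1)
session_key = prng(seed_p2, cycles=phone_code % 1000)
assert len(session_key) == 32
```

An attacker holding only the courier's envelope, or only the phone code, gets neither seed, which is the multifactor property noted above.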

The reason I mentioned the whole thing, and why I'm interested in it, is that I do trust the better sym. algos (like, as you mentioned, salsa-x/y and some others) but I do not trust PKE.
Most primes are doubtful, the problem classes are known not to be pq-secure, the code quality is lousy, etc.

This to me strongly suggests avoiding PKE wherever possible. And in many somewhat closed settings, like intra-company, I consider a quasi-OTP a very attractive and provably viable solution, certainly more attractive than PK.

Full salsa20, for instance, can be trusted and is very fast, too. The crux is the keys (and, with some algos, nonces). The above-mentioned mechanism solves that issue in a simple, provable and elegant way, and delivers some desirable side products too, for instance nonce generation. The potential weak point is obviously that one must choose a high-quality PRNG.

ab praeceptis October 12, 2016 8:08 AM

Oh hell, what did I start there *g

TRNG or PRNG is a non-issue in my eyes, provided one has a PRNG of high enough quality. In the end it's simple: an OTP is RNG-based, too; so it comes down to a classical choice of priorities, and both have their disadvantages and advantages. A true OTP mechanism is not deterministic, hence both sides must have the current material; this, however, opens a problem class. A deterministic PRNG-based mechanism avoids most of those problems but opens some of its own, most of which come down to PRNG quality.

Clive Robinson is, of course, right when he points out TRNG vs PRNG, the latter supposedly being usually worse on some parameters. BUT: random distribution and some other factors are not the decisive point here, because we do not use the RNG output for encryption; we use it as the key for sym. crypto. So the relevant factors like random distribution, etc. are anyway defined by the sym. crypto.

Let's look at it properly. Say our PRNG is 64-bit. As P1 drives P2, this already opens a hefty set of variations. Now feed that into a good, say, 256-bit hash whose output you then use as the key for, say, salsa20. (Btw, Thoth, I agree: the mere fact of AES being NIST makes me consider it tainted.)

Good luck attacking that. Also good luck proving that the net security of that is relevantly worse than OTP.

Bonus: That mechanism is multifactor, non PK and pq-secure.

Would I use that for the highest level state secrets? Maybe but probably not (I’ve learned to be conservative). Would I use it for some company? Every day of the week and smilingly.

r October 12, 2016 9:14 AM

@Thoth, ab,

My concern with ChaCha, as a layman (both with and without), is the speed.

I don't think the speed or structures are likely weaknesses for side-channels, but it concerns me just how much this algorithm may be able to be optimized or run in parallel. I wonder if the same type of scalability wall applies to ChaCha as applies to things like bcrypt, though: that one might just need to add more rounds from year to year to keep ahead of advances.

Sorry, musing. 🙂

I do think that the simplicity is a strength as it allows portability.

ab praeceptis October 12, 2016 9:49 AM


Somewhat simplifying, one can say that neither sym. crypto nor PRNGs are a significant speed problem, even on embedded CPUs (for which one might anyway prefer small HW-optimized algos such as e.g. Rabbit).

The performance problem is with PK and with KDFs and similar. bcrypt, which you mention, is an example. Those are expressly designed to be expensive, so as to make the lives of crackers harder.

Side note: The thing those algos strive to defend against is the case where some opponent already has, say, your pw database; they strive to immensely drive up the necessary effort, for instance by requiring thousands of hashing rounds, which also makes precomputed rainbow tables all but meaningless.
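Python's standard library exposes this cost dial directly via PBKDF2; a small sketch showing how the iteration count scales the per-guess work (the password and salt are illustrative, and timings vary by machine, so none are asserted):

```python
import hashlib, time

# The round count is the cost dial: each extra factor of 10 in iterations
# multiplies an attacker's per-guess work by roughly the same factor.
pw, salt = b"correct horse", b"per-user random salt"
for rounds in (1_000, 100_000):
    t0 = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", pw, salt, rounds)
    print(rounds, len(key), f"{time.perf_counter() - t0:.4f}s")
```

The per-user salt is what defeats precomputed rainbow tables; the rounds are what slow down online and offline guessing.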

Side note 2: The quality (sometimes according the target) is quite different. The better KDFs address multiple factors, such as computing power as well as memory, which, if well designed, can limit the usefulness of massive parallel approaches.

Side note 3 (not making friends, I guess): Regarding PKE, your "simplicity" argument is quite relevant. One of the problems of not-simple is that modelling, verification and mathematical proving get exponentially more complex.

That's quite valuable in simple designs: you can consider the algos (e.g. PRNG 1 and 2, SHA-256, salsa20) as elements of a serial mechanism, which also allows you to apply rather simple and strong logic to each element.
I find it an interesting property that one can simplify it to the point of saying that there are only two approaches for the opponent. He can either attack the encrypted message, in which case he has to break a strong (but cheap) sym. algo, say AES-256, or he can try it from the front. And that's where the mechanism I talked about shines again: being multifactor and based on a high-quality PRNG, he will be out of luck even if, which is a classic nightmare assumption, he were in full control of the network and managed to copy the courier's envelope.

I think that is actually the most promising approach: to rely on simple, well-proven "building blocks". And that is the problem of PK, which, one must say for the sake of fairness, addresses a vastly more difficult problem class and has to work with a rather limited set of well-understood one-way functions.

(In case your post was just meant as a joke: OK, you got me. I’m an idiot *g)

r October 12, 2016 10:13 AM


Not making friends :p

But no joke, I appreciate the input as those and that are my concerns and perspective. I’ve got to go back and read what I missed in this thread.

Markus Ottela October 12, 2016 10:32 AM

@ Thoth

“Security of OTP lies in the discipline in the keymat usage and the method of keystream generation.”

The generation is slow, but that can be mitigated. Proper use of the pad is manageable by software, so that a human can't screw it up. The pad only needs an offset header and some sort of ID that tells who it belongs to. The hard part is ensuring the pad is destroyed; a SoC with flash memory isn't able to do that, so you'll have to encrypt it with symmetric encryption on the fly. If you can guarantee no one will ever compromise you physically, it should be fine though.
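A minimal sketch of what such software management could look like: an offset header, a refusal to reuse or over-run keymat, and best-effort destruction of used pad bytes (illustrative only; as noted above, flash wear-levelling defeats a simple overwrite, which is why on-the-fly encryption of the pad is needed in practice):

```python
import os

class Pad:
    # Toy one-time-pad handle: 'offset' would live in the pad's on-disk
    # header so keymat can never be consumed twice.
    def __init__(self, keymat: bytearray):
        self.keymat = keymat
        self.offset = 0

    def encrypt(self, plaintext: bytes) -> bytes:
        n = len(plaintext)
        if self.offset + n > len(self.keymat):
            raise ValueError("pad exhausted: never reuse keymat")
        chunk = self.keymat[self.offset:self.offset + n]
        ct = bytes(p ^ k for p, k in zip(plaintext, chunk))
        # Best-effort destruction of used keymat before advancing the offset.
        self.keymat[self.offset:self.offset + n] = b"\x00" * n
        self.offset += n
        return ct

pad = Pad(bytearray(os.urandom(64)))
ct = pad.encrypt(b"attack at dawn")
assert pad.offset == 14
```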

“How about Twofish or Serpent ?”
IANAC so can’t say for sure but sure, Salsa and ChaCha seem like great designs.

“I am still using AES on my smart card projects”
Is this a secure messaging project, code signing or..?

“PQC is still in it’s infancy and we should wait for it to mature a little more before starting to use it.”

The security proofs seem to be good; it's the implementations that are scarce.

Bernstein et al are also working on code-based crypto

I’m pretty sure we’ll see a usable library before quantum Turing machines of nation states reach required number of qubits.

“In a world where an international ban of civilian digital/electronic cryptography on a personal level were to take place, I think it’s a safe bet to put OTP in front to do the really sensitive cryptographic communication”

If these laws can be enforced, they will tackle whatever channels OTP software is distributed through. If the rule of law one day allows citizens only graphite and dead tree for encryption, our discussion here can do very little to help such a situation — pen-and-paper MAC design aside.

@Clive Robinson

The RC4-as-CSPRF story is a great cautionary tale about why the choice of algorithm matters. Most modern 256-bit algorithms are deemed secure until the unforeseeable future. I don't think any crypto in the 90s enjoyed that for very long: Schneier et al. were talking about 80..90-bit security as the absolute minimum before DES broke.

For OTP the HWRNG is a strict requirement, but a good-quality CSPRF can be used to compress the pad to remove things like complex bias and auto-correlation — things the Von Neumann algorithm can't.
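For reference, the Von Neumann algorithm mentioned is small enough to state in a few lines; it removes simple per-bit bias but, as said, not auto-correlation between pairs:

```python
def von_neumann(bits):
    # Classic Von Neumann debiasing: read bits in non-overlapping pairs,
    # emit 0 for "01", 1 for "10", discard "00" and "11". The output is
    # unbiased if pairs are independent, at the cost of >= 75% of the input.
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann([0, 1, 1, 1, 1, 0, 0, 0]))  # → [0, 1]
```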

I think it’s easier, more safe and raises less eyebrows to cascade algorithms instead of creating custom implementations of key scheduling.

@ ab praeceptis

I see, so there's an initial hash ratchet that is used to generate domain-separated symmetric keys. I'm not sure it's practical to mix in entropy over insecure channels manually. I see the practice as quite problematic: forward secrecy assumes the previous state of the key is forgotten, so if three couriers deliver three spoofed values, the states of sender and recipient will desynchronize. Assuming the CSPRNG is secret is in conflict with Shannon's maxim (the enemy knows the system). With that being said, I unfortunately fail to see the security benefits, or anything that makes it comparable with OTP.

If you want to do a future-secret hash ratchet with couriers, you start with a pair of symmetric keys (one for each direction of data transmission). You then, on a daily basis, deliver a HWRNG-generated chunk of entropy via the courier. This entropy must be encrypted with authenticated encryption (ChaCha20/XSalsa20-Poly1305 recommended), where the key is domain-separated from the previous state of the hash ratchet. If the Poly1305 authentication succeeds, the block was valid. In the incredible scenario of an existential forgery, the only downside is that no physically delivered entropy gets mixed in that day. Remember that you can never hurt entropy by mixing in Diffie-Hellman shared secrets, so even if you have your doubts, it's worth it: the entire rest of the world relies mainly on DH and they're doing fine. Even Snowden did fine with PGP and RSA keys.
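A sketch of the ratchet step just described, with SHA-256 and made-up domain-separation labels; the authenticated-encryption wrapping of the courier chunk (XSalsa20-Poly1305) is omitted for brevity:

```python
import hashlib

def ratchet_step(state: bytes, courier_entropy: bytes = b"") -> tuple:
    # Hash-ratchet sketch: the next state is derived one-way from the
    # previous one (forward secrecy: old states can be forgotten), and a
    # domain-separated message key is split off at each step. Courier-
    # delivered entropy, when present and authenticated, is mixed in.
    next_state = hashlib.sha256(b"state" + state + courier_entropy).digest()
    message_key = hashlib.sha256(b"mkey" + state).digest()
    return next_state, message_key

state = b"\x00" * 32                       # initial pre-shared symmetric key
state, k1 = ratchet_step(state)
state, k2 = ratchet_step(state, courier_entropy=b"32 fresh bytes from the HWRNG...")
assert k1 != k2 and len(k1) == 32
```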

Salsa20 is an excellent PRNG, or to be more precise CSPRF, a cryptographically secure pseudorandom function that, with some cryptographically random key, can encrypt, say, 32 null bytes into a random string. By encrypting the resulting ciphertext again with the key, you get another chunk of random bits, which you again encrypt using the key and the cipher.

RE quasi OTP:

Unless you are using a high-quality HWRNG to generate pads that you use exactly once and destroy immediately after use, you're doing no one any favors by talking about OTPs, especially "quasi" ones; that is outside the professional discourse. Stream cipher, not OTP — keystream, not pad. Even if you generate all symmetric keys with a HWRNG, unless you limit message length to 32 bytes per AES256 key (again, used exactly once), you can't talk about OTP. I don't think arguing about semantics is important, but to be taken seriously you should avoid the discourse that makes up 99% of what OTP snake oil is about.

“Let’s look at it properly. Say our PRNG is 64-bit.”

The internal state of a CSPRNG must be a lot more than 64 bits; 256-512 is good. /dev/urandom actually has a 4096-bit internal state that gets squeezed through SHA-1..?

If the 64-bit PRNG feeds a 256-bit SHA256, you only get 2^64 different 256-bit hashes for keys. You'll want to make sure the input to the hash function carries at least 256 bits of entropy. So, assuming the tool 'Ent' evaluates the entropy of your HWRNG's output at 4 bits/byte, just feed 512 bits from the HWRNG to SHA256 to get strong keys. This is of course an over-simplification, as the HWRNG output needs to be deskewed etc. first: you don't want to risk the HWRNG feeding 512 zeroes to the hash function.
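A sketch of that conditioning step, with the 4 bits/byte figure treated as an assumed measurement rather than a constant:

```python
import hashlib

# If 'Ent' rates the raw stream at ~4 bits of entropy per byte, collect at
# least 2x the raw bits you need before hashing down to a 256-bit key.
ENTROPY_BITS_PER_BYTE = 4           # assumed measurement, not a constant

def key_from_hwrng(raw: bytes) -> bytes:
    collected = len(raw) * ENTROPY_BITS_PER_BYTE
    if collected < 256:
        raise ValueError(f"only ~{collected} bits of entropy collected, need 256")
    return hashlib.sha256(raw).digest()

key = key_from_hwrng(b"\x01" * 64)  # 64 raw bytes -> ~256 bits in -> 256-bit key out
assert len(key) == 32
```

Refusing to produce a key from too little raw material is the software analogue of the "don't hash 512 zeroes" warning above.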

r October 12, 2016 10:33 AM


The other concern I have with ChaCha is its superficial relationship to Speck. A while ago I saw something on wiki I wanted to bring up here (in reference to it not being mul-based), but I haven't been able to find what I saw again. So it's on my watch list, thanks to @Thoth.

r October 12, 2016 10:40 AM


Goppa/ECC(not elliptic) makes me nervous when quantum is referenced. But then again, I can’t even read Shamir’s.

I really like the hash stuff, and since we are talking about 0 and 100% TRNGs, a hash should have zero weakness (excepting side channels) introduced through relatively small amounts of dilution.

Clive Robinson October 12, 2016 10:53 AM

@ r,

… but it concerns me just how much this algorythm may be able to be optimized or ran in parallel.

As a rule of thumb, all deterministic stream ciphers are complete generators, running from a start point in their state array. Thus they are all capable of being run in parallel to any depth you can make hardware function at.

The downside is that not all state arrays have a linear progression; therefore determining the individual start points may be difficult.

To see why this might be the case, consider a block cipher with a counter as its input. The counter acts as the state array, which is mapped to a new value by the block cipher. Now consider the output of such a generator as the input to another block cipher. From the second block cipher's perspective, the first block cipher is its state array, but it, like an observer, has no idea what the next value will be.
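The counter-driven construction makes the "run in parallel from any start point" property easy to see; a sketch with SHA-256 standing in for the block cipher:

```python
import hashlib

def keystream_block(key: bytes, i: int) -> bytes:
    # Counter-mode construction: block i depends only on (key, i), so any
    # block can be computed independently of the others, and therefore in
    # parallel across as many cores or hardware units as you have.
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

key = b"k" * 32
serial = [keystream_block(key, i) for i in range(4)]
# Computing block 2 out of order gives the same result as the serial run:
assert keystream_block(key, 2) == serial[2]
```

A chained construction, by contrast, forces block i+1 to wait for block i, which is Clive's point about state arrays without linear progression.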

ab praeceptis October 12, 2016 12:22 PM

Markus Ottela

I get your point and you are right, if one looks from a stringent and theoretical perspective.

However, the mechanism I talked about (and called quasi-OTP) doesn't strive to replace OTP or to be an alternative real OTP (let alone in the strict sense). Rather it strives to get rid of PKEX, and it does so by using some principles of OTP.

"64-bit PRNG not good enough" – oh well, then we should forget about quite a few nice HWRNGs that happen to deliver single bits. More seriously, we can apply the same mechanism, i.e. we can grab more than one chunk of output; where some HWRNGs grab 256 1-bit outputs, we can grab 4 64-bit outputs. In the given case it was decided to use a 256-bit hash as a spreader. Yes, you are right, that means we still use only 2^64 of the available 2^256 states/outputs. So what? If that were a problem, then we should stop using 256 (or more) bit hashes right away, because chances are we'll not use their full space anytime soon. Obviously, however, that's not a problem.

Finally and most importantly, I’m talking about keys, not about encryption, which is done using some well established sym. crypto algo.

As for alternatives (again, note that the mechanism I talked about targets keeping PKEX out, not replacing OTP), I'd like to share an interesting point: I know of 1 (in words, "one") algorithm that generates provable primes. Iirc, openssl is not even applying reasonably state-of-the-art primality probability(!) tests, nor does gmp. In other words: very many PKEX out there are running with lousily checked prime probability, and they are, well noted, using algos that very much rely on primes being prime.

I get your points but the context must be considered. We are rarely in the situation to freely choose from diverse perfect alternatives; this is particularly true in key exchange.
You may judge differently and I respect that, but for me a proven 64-bit random key space, which for most opponents is de facto a 256-bit key space, is much more useful than a bazillion-bit keyspace relying on lousily checked probable primes and implemented in questionable (and actually proven lousy) code (not to talk about the protocol).

As for your 3 courier envelopes stolen/copied: So what? One can comfortably and easily prepare solutions for that case, and one could also easily and simply introduce backup and/or alternative channels. That doesn't worry me (nor said company). And btw, when you introduce weaknesses with the courier (which, well noted, can exist) you should also introduce an Eve that steals/copies OTP material (and thus breaks/poisons your complete scheme).

Last but not least: All I did was talk about a mechanism I found interesting. You don't like it? No problem; I don't want to convince or evangelize you. You feel only real OTP based on real HWRNG-generated streams should be used? Fine, no problem. Actually I know cases where I would prefer/choose that myself. But I also know cases where I wouldn't, and where I'd consider a (mathematically) weaker solution to be actually stronger in practical terms.

Sancho_P October 12, 2016 2:07 PM

@Markus Ottela

So what would you prefer / propose to transmit e.g. images? For the text, some details (links) would sound like advertisement, which the moderator here may not be happy with.
Where is the risk?
The “design” is trivial + open, but you’d never know whether you can trust me or not. You’d have to use your own judgement + the public.

Re “ban of encryption”
Encryption can’t be banned, because that term is too broad. Unicode is kinda encryption: only a special group can read the ciphertext.

ab praeceptis October 12, 2016 2:43 PM

@whoever feels like it:

I’m thinking of a number between [0 and 18,446,744,073,709,551,615].

The number was randomly chosen using /dev/urandom feeding a PRNG (say, Twofish-based Fortuna in honour of our host).

It is assumed that you have full access to 30 GB of data that have been encrypted with AES-128, using as key the SHA-256 hash of the number I’m thinking of.

Tell me which number I’m thinking of. You have 3 days.


“I think OTP is more robust than we’re giving it credit for.”

??? I never argued otherwise. All I said (in that regard) was that OTP crypto – like everything – has advantages and disadvantages, i.e. that for some tasks it is more useful (or even just perfect) while for other tasks it is less appropriate.

In particular I mentioned that in quite some scenarios where perfect crypto is not required (which is the vast majority of actual real world scenarios) and where not having a paper or disk or whatever with the Pad at both sides is desirable, another mechanism with deterministic “quasi OTP” elements might be more attractive (and still easily sufficiently secure).

Let me put it like this: It is doubtlessly strongly desirable to own the whole process from design to production for one’s CPU (and actually some countries worked towards that goal). At the same time, it is also desirable to find a solution one can afford (very few of us can afford to have our own chip design and fab facilities plus the capability to tightly control the fabs, processes and humans).

Hence it is perfectly reasonable to work on good crypto, reachable alternatives (say FPGA), good and reliable software implementation, solid math behind our work, etc. using widely available (quite certainly not really trustworthy) processors (see “chip bugs”, “CPU microcode”, et al.)

Moreover, to bring the two together again: Assuming we talk about OTP crypto mainly in the context of digital communication, OTP isn’t used in the theoretically perfect realm but with problem-ridden computers, problem-ridden buggy OSs and libraries and software, and on hardware very few can really fully understand, let alone verify (and into which intel and the likes have put a slew of questionable hacks on top of other hacks).

Finally, in many cases we (mere mortals) aim for an unreasonable goal when “good enough” would be sufficient. Let me give a hint: even quite some subpar algos have rarely ever been hacked. Why? Hackers virtually always attack the implementation, the OS, the software around it and what not – but not the crypto.

I’m perfectly fine with, and in fact glad, that we do have researchers who go ever farther and develop ever more (sometimes absurdly) secure crypto and algos. I seriously am.

But at the same time the vast majority of people and companies act in a world where users can’t be asked to use even creative 8-digit/char passwords or to not put them on their monitor, where many other pragmatic concerns are to be considered, where 99.99% of the enemies are 2nd-rate attackers (at best; often it’s just script kiddies), and where the enemy is, at worst, a competitor or a disgruntled ex-employee.
In that environment I need to give my client software they run twice a week that spits out some dozen magic 32-byte chunks, which are then printed and handed over to some dozen outposts or factories, etc.

Why? Because “quite nice” security actually applied is much, much, much better than perfect security ignored.

a October 12, 2016 3:21 PM

Whisper Systems (maker of Signal Messenger) had user data requested by a grand jury and no message metadata to turn over. I guess this pretty much proves their assertion that they don’t keep logs. I imagine the same cannot be said of other services that have implemented the Signal protocol, like WhatsApp.

Clearly, you have the protocol on the one hand and how you manage your servers on the other, regardless of message content being encrypted. Whisper Systems seem to be the only ones so far to have proven their commitment to best behavior on both counts.

Pretty amazing to think that only a few years ago, to have an even lower standard of security than this, you needed to pay at least $3100 (and then convince someone else to do the same so that you had someone to talk to!).

Thoth October 12, 2016 8:05 PM

@Markus Ottela
re: AES projects

They are for file encryption and secure communication. One example is my GroggyBox file encryption project linked below, which is still in its infancy.

re: PQC
It’s just a leap of faith, like how RSA and ECC cannot be proven secure under the P vs. NP problem, but we all still happily took it and used it across the Internet. Only time and dead people (dead from broken security) will tell if this stuff is secure … if they could talk 😀 .


re: ChaCha20
It is designed as a security improvement and speed-up compared to Salsa. If you have not read the technical paper, you should begin there. A speed test against Salsa was run in the paper: ChaCha20 is almost always faster, and ChaCha improves security by giving each word more diffusion per round than Salsa. Both ChaCha and Salsa are from the same author, and the ChaCha design is not similar to Speck from a round-function perspective or even a key-generation perspective.

ChaCha20’s quarter round does:

a += b; d ^= a; d <<<= 16;
c += d; b ^= c; b <<<= 12;
a += b; d ^= a; d <<<= 8;
c += d; b ^= c; b <<<= 7;

Speck does:

ROR(x, r) ((x >> r) | (x << (64 - r)))
ROL(x, r) ((x << r) | (x >> (64 - r)))
R(x, y, k) (x = ROR(x, 8), x += y, x ^= k, y = ROL(y, 3), y ^= x)

You can clearly see there is a good amount of difference. In fact, I prefer ChaCha20 since its implementation is way simpler and much cleaner than Speck’s.

Speck also generates keys as it runs (keygen -> encrypt/decrypt -> keygen -> encrypt/decrypt …) and every round affects the keygen process.

ChaCha20 is vastly different, as it generates a fixed-size 64-byte keystream block up front, after which you must run it again with an incremented counter (or a different IV/key) for another 64 bytes of keymat. The obvious difference is that Speck is a block cipher while ChaCha20 is a stream cipher, so the two are vastly different apart from the ARX constructs, which are a common round-function building block anyway.

Thus, it can be concluded that ChaCha20 is a faster variant of Salsa, is different from Speck, and is a higher-security variant when compared against Salsa.

I would highly recommend using ChaCha20 and learning to code-cut some ChaCha20 for the sake of its simple yet secure design. It ain’t all too hard to code-cut the ChaCha20 algorithm and to learn it. The tricky part I encountered was simply the 32-bit to 8-bit conversion when writing a smart card variant, which was a pain in the bottoms.


Thoth October 12, 2016 8:08 PM


Ouch .. the blog security mechanism deleted part of the Speck and ChaCha20 round functions I intended to put here. You have to read the paper and visit the links now.

@Bruce Schneier, Moderator
Could you include a code tag for us to insert some short technical codes here ?

r October 12, 2016 8:55 PM


I appreciate your time there; I was not trying to defame DJB’s(?) good name, I’ve merely been trying to wrap my head around these “simpler” routines.

anonymous October 12, 2016 9:34 PM

@Markus Ottela

I’m sure you could have found this for yourself, but it seems like you are plenty busy enough as it is.

(&@Thoth: your link doesn’t go anywhere compared to this):

“I have been working on creating a specification in the background to unify Truecrypt’s plausible deniability, PGP’s and miniLock’s capability of sending over the web with multiple recipients, and also a simple cryptographic keystore capability for secure key storage, all designed to be implemented on smart card. Instead of Truecrypt’s plausibly deniable volume, it does plausible deniability of crypto keys by allowing splitting keys, and does not have explicit checksums to indicate correct decryption, thus making any decryption, even with errors, look plausible. The creation of a ubiquitous single format with a multiple-use-case specification is still in the making and still actively being edited by me.”


This open blog has the most liberal formatting whitelist that I’ve seen since MySpace, and hopefully, your change request can be nullified by explaining some of the little tips and tricks!

Inputting “&amp;gt;” outputs the encoded: “&gt;”, while inputting that gives: “>” which should render the same as the literal ‘>’ char.
Using a raw &lt; [<] followed by a &gt; [>], implies X/HTML syntax that is rightly dropped by a sensible filter.

The <code> tag is strictly used for custom styling purposes. For this comment section, you can equivalently use the <pre> tag along with almost every other accepted formatting tag you would need.

See: Character encodings in HTML

Thoth October 12, 2016 11:07 PM

@Clive Robinson
Scroll down the website you have given me and look for the following:

“JavaCard: jChaCha20 — Java based ChaCha20 stream cipher according to RFC7539
JavaCard: jChaCha20 — JavaCard based ChaCha20 stream cipher optimized for JavaCard (16-bit) environment”

Well, well … isn’t this the JavaCard ChaCha20 I have made and ranted about in the past. Seems like someone found both of my works and included them 😀 .

I am thinking of upgrading the 8-bit math library to 16-bit math (gonna be slow as usual but hopefully a little faster, maybe) when I am tired of developing GroggyBox during my free time and feel like switching it up a little. I have a few open source projects running simultaneously, but there is only one @Thoth.

Markus Ottela October 12, 2016 11:25 PM


Imgur or should be fine for the images. Text with commercial links can be published on pastebin if nothing else.

A free hardware design isn’t going to pose problems with trust. If I understand and trust the design I will endorse it, and vice versa. BTW make sure to publish under GNU Free Documentation License v1.3 or similar to ensure ethical redistribution that’s compatible with TFC documentation.

Skeptical October 13, 2016 9:02 AM


I think you’re underestimating the significance of the official attribution.

It changes the meaning of what US action, or inaction, communicates to the Russian Government and to others.

It removes the element of uncertainty as to who the US Government considers responsible.

It commits the US to a response – though who knows what the chronological ordering of official attribution and response might be.

Before the official attribution, silence reduced the costs to the US of not responding. Silence preserved a measure of uncertainty as to whether, or how, the US would respond.

US defense policy actually is structured around the long game. They are loath to expend certain capabilities or resources that are better preserved for more dire exigencies.

But I think we’re witnessing in part the implementation of a policy decision that makes the setting of certain norms in cyberspace an important component of the long game – and with the higher valuation placed on such an endeavor, one might expect greater commitment of US capabilities and resources to that endeavor.

My concern is that the Russian Government does not understand the nature of its escalation, nor does it understand the image that its internal developments present to the United States and others. The US military has never stopped – even at the height of its counterinsurgency efforts – preparing for a strategic conflict. It would be a grave mistake to place any confidence in a strategy that depends on the assumption that the US cannot outmatch and defeat escalation when core interests of the US are at stake.

Putin needs to find a way to preserve Russian nationalism in a manner that does not predicate it upon confrontation with the United States nor upon laying claim to prior imperial possessions. He may be surprised at how few will be willing to march down such an unnecessary, and obvious, road to ruin. Opposition would come, and is mounting, from every direction.

He may also be surprised at the extent to which his undoubted fidelity to Russian nationalism would allow him to cooperate with the US, and reduce tensions, without harming his domestic reputation. Just as only Nixon could go to China…

Dirk Praet October 13, 2016 5:33 PM

@ Rollo May, @ Nick P

2. Company is acquired then product is turned to garbage for some goal of parent company. Microsoft and IBM are legendary for this.

Sun Microsystems was really good at this game too.

@ Grauhut, @ Skeptical

Joint Statement from the Department of Homeland Security and Office of the Director of National Intelligence on Election Security

This begs for the introduction of a Colin Powell or James Clapper-scale of certainty or truth telling.

gordo October 13, 2016 6:19 PM

It’s Too Complicated: How the Internet Upends Katz, Smith, and Electronic Surveillance Law

For more than forty years, electronic surveillance law in the United States developed under constitutional and statutory regimes that, given the technology of the day, distinguished content from metadata with ease and certainty. The stability of these legal regimes and the distinctions they facilitated was enabled by the relative stability of these types of data in the traditional telephone network and their obviousness to users. But what happens to these legal frameworks when they confront the Internet? The Internet’s complex architecture creates a communication environment where any given individual unit of data may change its status — from content to non-content or vice versa — as it progresses through the Internet’s layered network stack while traveling from sender to recipient. The unstable, transient status of data traversing the Internet is compounded by the fact that the content or non-content status of any individual unit of data may also depend upon where in the network that unit resides when the question is asked. In this IP-based communications environment, the once-stable legal distinction between content and non-content has steadily eroded to the point of collapse, destroying in its wake any meaningful application of the third party doctrine. Simply put, the world of Katz and Smith and the corresponding statutes that codify the content/non-content distinction and the third party doctrine are no longer capable of accounting for and regulating law enforcement access to data in an IP-mediated communications environment. Building on a deep technical analysis of the Internet architecture, we define new terms, communicative content, architectural content, and architectural metadata, that better reflect the structure of the Internet, and use them to explain why and how we now find ourselves bereft of the once reliable support these foundational legal structures provided.
Ultimately, we demonstrate the urgent need for development of new rules and principles capable of regulating law enforcement access to IP-based communications data.

Bellovin, Steven M. and Blaze, Matt and Landau, Susan and Pell, Stephanie K., It’s Too Complicated: How the Internet Upends Katz, Smith, and Electronic Surveillance Law (June 7, 2016). Harvard Journal of Law and Technology, Forthcoming. Available at SSRN:

Markus Ottela October 14, 2016 3:03 AM

Got some critique for TFC-CEV’s non-standard CTR-nonces a while back. It would so appear that the same style is used in TLS 1.3. Glad to have been correct about how it should be done.

Also glad to see my proposal for combined fingerprints and an integrated QR code scanner for Signal finally implemented. Too bad it still lacks a feature that remembers when a fingerprint has been scanned and verified. Most of my peers already use it, so it’s getting harder to remember who I’ve done the verification with.

Skeptical October 14, 2016 6:25 AM

@Dirk: This begs for the introduction of a Colin Powell or James Clapper-scale of certainty or truth telling.

Unless you think the official statement by the ODNI is deceptive about the US Government’s actual attribution of responsibility, Clapper isn’t the appropriate point of comparison (among other reasons).

Powell is appropriate if you think that the US Government is misreading the evidence, and is incorrect or not sufficiently justified in its attribution of responsibility.

Attributions like this are rare; it’s not in the USG’s interest to make them unless they have strong evidence; it’s widely believed by those who know something about the USG’s capabilities that they would be able to collect evidence that would show the Russian Government’s complicity (or lack thereof); and credible independent sources have made the same attribution.

So, while nothing in this world is certain…

But in any event, it’s the act of official attribution itself that I think is most interesting here. My own, back of the envelope, analysis is that the US – despite some arguments to the contrary and despite some deliberate efforts to persuade the US otherwise – does have escalation dominance here. I suspect the USG has come to the same conclusion, and the hard part about this is calibrating the response, not whether there will be one (if not already).

Clive Robinson October 14, 2016 7:26 AM

@ Skeptical,

Unless you think the official statement by the ODNI is deceptive about the US Government’s actual attribution of responsibility…

To believe such a statement you need either hard testable evidence or sufficient trust in the source.

The US IC is not known for supplying hard testable evidence currently, in fact the opposite appears to be the case, they hide behind the twin doctrines of “no comment” and confidentiality of “methods and sources”. In fact history tends to show that they only say anything publicly when under direct “political” compulsion of the US President or again politically when under direct threat. Even then they stick to attribution with evidence held behind the methods and sources blackout curtains.

Further I think it is fairly safe to say that an impartial observer looking over the history of most of the US IC will say that they are not trustworthy with oversight let alone any other scrutiny.

So no trust and no evidence, makes it just another release of hot gases that smell a lot worse than those you would expect from a flatulent male bovine.

Clive Robinson October 14, 2016 2:54 PM

@ JG4,

I’d like to move somewhere that won’t require martial law to keep or restore order.

That used to be easy you went up a mountain and became a guru or hermit (gurus near the summit, hermits in the foothills).

But… The US energy companies have made that sort of “off grid” lifestyle effectively illegal, so you get martial law whether or not you want it, because the energy companies have profits that must be protected at all taxpayer cost…

r October 14, 2016 4:31 PM


Those idiots, wasting all this time and hype aggrandizing problems that needed to be fixed 10 years ago, are going to be all geeked up and ready to go when the looting begins.

Such a waste.

Did they include remote controlled ford cars?

Maybe bad GPS signals on the highway causes pileups outside of MA

Maybe someone hacked that train last week

They want to play AD&D(&D) 3rd edition is the problem.

Screw it, I know where I can get a whole palette of bricks for the occasion.

r October 14, 2016 4:43 PM

Maybe it’s not even like that, (@ALL)

@Sancho_P, you see the DoJ heard my complaints right? (@”Grab your gun”)

3d Printers manufacturing fintech for drones and mortars.

fin tech

ready to print
ready to go
Launched from your friendly neighborhood pickup trucks none the wiser

Distributed blueprints for mayhem; they’re stamping it out now, and I’m willing to bet the long game isn’t “long” guns.

Fin Tech.


ISIS is piloting drones into your children’s faces; first they try it overseas, now your kid designs one for his neighbor.

Oh the utility.

The next administration has a lot on their plate; hopefully they’ve got their eye on the ball and aren’t being dismissive about the repercussions of “I’ll do it later”.

You’ll do it now, or lives will be lost.

What could go awry?

I hope the 8 supreme court justices feel a little bit more “gung ho” about shooting drones out of the sky now that the weaponization is upon us.

Sancho_P October 14, 2016 5:44 PM

@Markus Ottela, re “USB data diode”

To keep everything together I’ve uploaded images + text to , please scroll down for the text part.
Because it’s now public I’ve added kinda preamble.
Check esp. whether the license is OK (it wasn’t intended to publish like that but I understand you may need it).

If anything is missing / wrong / bad / … / do not hesitate to tell me.

Sancho_P October 14, 2016 5:56 PM


”My own, back of the envelope, analysis is that the US …
does have escalation dominance here.”

Yes, they have the dominance to destroy our kid’s future.

May I suggest retiring old and paranoid politicians?
Their clock will stop soon, anyway.
They do not care for the future as they have none.
Don’t listen to them.

Youngsters value their future and would realize that business with the Russians is a better opportunity than war.

Btw, this was called capitalism, but the oldies have forgotten the tactic.
Alzheimer is a serious disease.
50 is a limit, not only in speed.

Figureitout October 15, 2016 7:57 AM

–Nice, like it. So you can have two devices connected via this, and send something to the receiver and it can’t send back? What I’m thinking of is just like a PuTTY session on 2 computers where you can send text to the receiver but it can’t reply? One potential way to check that it was received properly is to have a program that checks a checksum on the other end (right next to you), and it just prompts you to manually compare the checksums. Then you’ve protected the PC used to generate encrypted files or what have you. Even though I’m wary of a company attacking users that don’t know if they have a fake chip or not, FTDI is still one of the best options for USB chips. And those drivers will be in the OSs too. I was surprised that Kali Linux had a CH340 driver included already; some knock-off Arduinos use that instead for USB->serial.

I feel like these should be modules one can purchase, like the lan tap throwing star. Would be handy. Think this would help out additionally w/ some isolation?:

Sancho_P October 15, 2016 6:16 PM


Thanks, I think it’s what Markus had but without batteries / RS232.
The issue with USB com (UART) speed may be the buffer on the receiving side and USB priority. The devices’ USB ports are asynchronous, and speed / latency somewhat depends on the tree structure / attached devices / actual load.
Continuous error correction might be mandatory for files, and the only feedback (success / fail) here could be the operator. Imagine waiting 3 minutes only to see “Failed, checksum error” on the receiving side.
Wouldn’t be a good idea.
Better: an error rate indicator (but I know of only one commercial product that has it included in the transmission, respectively continuously available at the receiver).

A USB-USB isolator is extremely important for USB-attached test equipment like a USB scope, otherwise all kinds of bad things may happen.
It seems the Adafruit isolator will only supply 100 mA to the downstream USB (device) side, which isn’t much (my scope needs 500 mA at least), and the Adafruit doesn’t have an external supply option, so better check:

Figureitout October 16, 2016 1:26 PM

–Yeah, a USB one w/o batteries would get used a lot more IMO. Has to be a nice little module w/ a nice case to get used. Would using an isolator be as good isolation as using batteries? Don’t want to change batteries out much. I want to make the original one in the U of Iowa paper w/ discrete components and spin a board; shocked there isn’t a board of these yet.

Yeah, continuous error checking done manually by the user would be tedious and not ideal…but something like FEC, there’s a couple of implementations out there ( , ), I’m not sure how to implement. There are a few kinds. The underlying concept (I think) is: TX generates redundant copies of the payload convolved w/ error-checking codes ( Y(s) = X(s)H(s) ), and the receiver can thus do error checking on each copy (get the original signal w/ X(s) = Y(s)/H(s) ) and flag a copy as containing an error if it can’t extract the error codes it compares against a LUT of them. Question is, how many copies is good enough? Maybe 3 or more? And TX and RX would have to have the same error-code matrix; I assume it could be any arbitrary bytes, not a specific matrix of values.

Grr…if it’s going such a short distance, maybe we can get away w/ either a simple FEC implementation or nothing. Have you experienced any errors yet?

Ok, yeah that olimex isolator is probably better for more protection.

Sancho_P October 16, 2016 6:17 PM


I’m not sure I understood your first part about the “isolator”.
A pure USB isolator is not only bidirectional, it must also support full USB functionality – which most technicians probably don’t fully understand.
To securely “isolate” data flow between devices, steer clear of these.

The other isolator, the simple optocoupler, doesn’t know about bits, bytes, commands or data. On is on, off is off [1].
Batteries are only necessary because of the RS232 signals, which mean (min.) -6 V for “on” and (min.) +6 V for “off”. Unfortunately a USB-RS232 converter doesn’t provide those voltages, hence the batteries (and 2 couplers for the diode).
But RS232 also has a huge disadvantage in speed, as the transition between 0 and 1 is (minimum) 12 V. Imagine a 1 MHz sharp square signal at 12 V, what a waste of energy (and a pleasure for your neighbors), let alone the effort to drive it.

FEC is OK but nearly useless when you don’t know how close to disaster you already are. Everything in electronics is analog; making bits out of it is clumsy.
It’s important to know when there are, say, 8% corrected errors in the communication so one can check before final failure.
FEC is not about copies of data, that would waste too many bits, see:
On bus-based systems errors are unavoidable, let alone with plug and play.

Nah, the Olimex (a pure USB isolator) isn’t for more protection, but for more load. It supports the device side with up to 350 mA from the host’s supply in contrast to the 100 mA. Additionally, one can plug in an external power supply to raise that device supply to 1 A.

[1] Don’t do it – don’t do it – don’t do it – don’t do it – I did:
Shortcut pins 1 – 4, and 5 – 8 of the coupler IC.
Connect it in series with an incandescent light bulb (60W) to mains (here 230 VAC, 325 V peak) and wait 1 – 2 minutes. If the chip didn’t explode shortcut input and output of the chip, the lamp must go on, now it’s safe to call the chip isolated.

Figureitout October 17, 2016 1:05 AM

–Typo on my part, didn’t say the “wired” part. I was wondering, if the 5V line is from the same source, whether that could potentially break the isolation in a crazy way, whereas a battery would break that power-line connection, but never mind, you answered the question of why the batteries were needed. It appeared that somehow the isolation was being breached by coupling of some kind when I used the Adafruit USB isolators, trying to debug wireless operation. The grounds should’ve been isolated, all lines should’ve; don’t know why we observed evidence of the AC ground line coupling to the wireless unit. I admittedly didn’t dig into the AD chip much and could’ve tried some more things.

Yeah, that’s the risk of FEC, and I’ve read that wiki link a few times; the bit about “redundancy” and the example of sending each bit 3 times as a “repetition code”, where the receiver then votes on which bits are correct based on a 2-1 or 3-0 vote, was why I was thinking bits would be sent multiple times, w/ error codes. Perhaps if you accept a slow comms rate by sending the same bits 10-20 times or more, the errors would be minimal and the vote info would be more trustworthy.

I keep thinking about some kind of feedback unit that’s been mentioned places, and keeping track of the bit error rate there, that then would send a one-way signal back to original unit if error detected (being 99% sure that couldn’t be exploited)…but still seems flawed.

My terms were wrong, I get it lets more current flow if need be.

Your “error rate indicator” doesn’t return any google results, so not sure what you have in mind. Also, have you used it yet to do file transfer or any kind of serial transfer? Might’ve missed that. I’m gonna order some of those converters and 7723 chips and try it myself, so I’ll find out otherwise.

Sancho_P October 17, 2016 6:02 PM


Don’t know if your “Nvm” means “Not very much” (“e”), but think of:
– What does my proposal change in regard to the actual TFC data diode?
– What was the purpose of both the scope’s screenshots with one character and only the start bit?
– Why did I use the character “e” and not e.g. “a”?
– Where were channel A and B attached to?
– What would the scope show with, say, 1 MB text?

Remember, the original data diode concept was USB-XXX-USB, so the USB problems (if any) were there from the beginning. I don’t know if @Markus Ottela ran into speed issues because of the converters, the coupler or whatever (USB, OS, TFC SW). It seems he didn’t see the proposal or doesn’t find the time to acknowledge.

If all lines are isolated / separated (you wouldn’t need 230 VAC to test, just a multimeter) AND you’re working with AC (esp. high frequencies) always think of capacitive coupling, it’s a bummer.

I guess any automated feedback would be a no-go for a data diode, as it would constitute an information channel back to the source. Even without my tin foil hat I wouldn’t want that.

Yes, likely you won’t find that “error rate indicator” in public.
Btw file transfer or serial transfer, what would make the difference in your eyes?

An example would be a music CD (granted, somewhat old school):
The system is similar to a data diode, and read errors are part of the conceptual standard.
Therefore error correction is paramount.
However, your player doesn’t show you the actual quality of the reading process or status of error correction, you won’t realize “Oh, it was green, now it’s yellow, what’s going on?”.
No, you will suddenly realize that some errors couldn’t be corrected any more.
Now you can check connectors, clean the CD and the player lens, try a different player, but probably it’s too late to make a copy of your CD.
Why don’t they have that indicator? Yep, because nobody would care [1].
But in security applications …
Btw, check your “terms”, this is paramount, too!

[1] Only a fool would assume they want you to buy a new CD 😉

r October 17, 2016 11:28 PM

Sancho, nevermind. (commonly, I don’t think I’ve ever seen it as not very much? I could be taking the dismissive/exclusionary route though – nice abv)

Figureitout October 17, 2016 11:48 PM

Clive Robinson
–Ugh, vast majority of that looks like no fun (only some of the algorithms and concepts and the math; the vast majority of it probably hasn’t changed since the 1700s or earlier…). Know it’s important but it bores the living hell out of me; CPU architecture and embedded programming on the other hand, can’t get enough. At least the guy was cool enough to post the book online and I can click to chapters in the contents page. Simply don’t like probability theory, I’ll gladly let someone else who enjoys it do it. For now I’m getting forced to do it and it’s pointless b/c I’m not going to do this when I get thru this stupid class. :/ Guess my understanding will remain hazy of the more complex FEC schemes. :/

–Nvm is text-speak for “nevermind”, trying to limit my bandwidth. Will avoid that for ESL reasons. Yeah I’m thinking of your questions. Kind of a weird baud rate. And you can see a small delay from input to output (propagation delay from going thru the optocoupler), the output matches the input, and there should be no response. I’m wondering if the errors are going to be too bad, or if it’s just being robust for noisy environments and overly cautious; that’s why I want to see a non-trivial file transfer (I want to eventually transfer GBs, and I’m betting there’s no way in hell it remains completely error free), which is what I’m going to try using this for. Certainly shouldn’t be as bad as RF, where one-way schemes are basically unheard of.

So you sent char ‘e’, which is 0x65 in the ASCII table, which is 01100101 in binary (101 in decimal); but sent little-endian style (LSB first), on the wire it reads 10100110 (0xA6), with a low start bit and a high stop bit. Not sure why char ‘e’ matters for your test.
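The framing above can be sketched in a few lines; this is a generic 8N1 UART frame builder for illustration, not anything from TFC itself:

```python
# Sketch: 8N1 UART framing of ASCII 'e' (0x65), LSB first.
# The line idles high; a low start bit opens the frame and a
# high stop bit closes it, so the bits on the wire for 'e'
# read 0 10100110 1.

def uart_frame(byte):
    """Return the wire-order bit sequence for one 8N1 frame."""
    bits = [0]                                   # start bit (low)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit (high)
    return bits

print(uart_frame(ord('e')))  # [0, 1, 0, 1, 0, 0, 1, 1, 0, 1]
```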

I’m sure Markus is reading, just evaluating and deciding on what to say.

Yes I’d just use a multimeter continuity test, not some dangerous test w/ 230VAC.

RE: my unit under test, isolating it.
–Bah, I won’t get much into it b/c I flap my lips enough as is, but when it comes to debugging wireless products you simply need to wirelessly transmit debug info to a dongle on your PC. I had the option to make a custom morse-code debug protocol from one chip, transmitted over a single line to the RF chip by pulling the line high or low, but that was going to be too crazy and not worth the time. I made do, still got a lot of useful info, but holy sh*t this made me so paranoid w/ coupling and injected noise.

RE: the feedback unit
–yeah it’s broken if we assume the device we’re sending info to is network attached and could have the most insidious malware in the world, one of its features being to send continuous error bits back to the TX, resulting in a never-ending DoS attack. If there was no malware, it’d work.

RE: your cd example
–Ok, same could be said of any flash ROM. Except all the ones I’ve used have a verify procedure. Changing just 2 lines of trivial code resulted in a vastly different binary; I was surprised how much (the functions would be reaching down into some guts though, that must be why so much changes). So yeah, error correction is so critical for all kinds of things. And don’t worry about my terms! :p

Clive Robinson October 18, 2016 7:51 AM

@ Figureitout,

Simply don’t like probability theory, I’ll gladly let someone else who enjoys it do it. For now getting forced to do it and it’s pointless b/c I’m not going to do this when I get thru this stupid class. :/ Guess my understanding will remain hazy of the more complex FEC schemes. :/

Well, probability maths does fall off the straight and narrow and is thus harder to see in your head. However, it’s finding where the curves give you the best sort of sweet spot that is where the pain pays off.

The trick as I said a little while ago is to assume that even with EC codes things are going to fail thus make your life easier at a higher level in the stack.

One such is not to use “one time” commands but delta commands; that is, instead of sending “move forward three feet” you send “move forward for X ms” and repeat as often as is required to get from point A to point B. Yes, it’s a grossly inefficient use of the comms channel, but it tends towards fail-safe, and will get you there eventually if –and only if– you have the time to spare.
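A toy simulation of the delta-command idea (the step size, loss rate, and distance are all made-up numbers for illustration):

```python
# Toy simulation of delta commands over a lossy channel: each
# "move for X ms" pulse advances the vehicle a small step, and a
# lost pulse only delays arrival. Silence fails safe -- no pulses,
# no motion.

import random

def drive(distance_ft, step_ft=0.1, loss_rate=0.3, max_pulses=1000):
    """Send short move pulses until the feedback shows arrival."""
    position = 0.0
    for _ in range(max_pulses):
        if position >= distance_ft:      # feedback path: we're there
            break
        if random.random() > loss_rate:  # this pulse got through
            position += step_ft
    return position

random.seed(0)
print(drive(3.0))  # reaches the 3 ft target despite 30% loss
```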

Likewise with random data that has to be 100% correct on reception, keep the packet length short and the repeat window long, because most times comms fails due to a signal burst, not a continuous rise in the noise floor (provided you’ve got your link budget right).
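A back-of-envelope model for why short packets help: treating error bursts as a Poisson process is an assumption made purely for illustration, but it captures the effect.

```python
# Model error bursts as a Poisson process at `bursts_per_second`.
# A packet lasting t seconds then survives untouched with
# probability exp(-rate * t), so shorter packets give each repeat
# a much better chance of arriving clean.

import math

def p_clean(packet_seconds, bursts_per_second):
    """Probability that no burst lands during the packet."""
    return math.exp(-bursts_per_second * packet_seconds)

print(f"1.0 s packet: {p_clean(1.0, 0.5):.2f}")  # ~0.61
print(f"0.1 s packet: {p_clean(0.1, 0.5):.2f}")  # ~0.95
```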

The big prob with most EC is it does not fail gracefully; that is, it hangs onto the edge of a precipice by its fingernails before dropping out of sight. Thus you can go from near full comms to no comms in much less than a heartbeat. There are ways to deal with this if you know it’s about to happen.

You use other tricks such as reducing transmission bandwidth, thus gaining on the noise-per-bit curve. Which in turn means you have to prioritise data at a higher stack point, as well as coordinating the drop from a complex multi-level multi-phase modulation scheme down to simple BPSK and switching back up again. It can and does work wonders, but you pay for it with complexity up the stack. Which might do wonders for your power budget, but can make the software a right royal pain in the proverbials to test, let alone get right. For instance the switch up/down takes time, and you could end up in yoyo mode where the link budget gets entirely used up by the switching.

All of that can be done before you get into MIMO territory, which has its own interesting predictive problems. That said, dual receivers can make one heck of a lot of difference quite cheaply these days, without hitting the power budgets too hard (but might have size constraints instead).

I find FEC most important when round trip time is long and power budgets at one end are low. Consider the case of a satellite using just a few milliwatts of TX power, you have a long and variable path delay measured in multiple packet times. The sat does not have the power budget to send loads of ACK/NACKs and other tagged packet requests thus you have to front load at the ground station with FEC etc.
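For flavor, the simplest possible FEC is a repetition code. Real satellite links use far stronger codes (Reed-Solomon, convolutional, LDPC), but the front-loading principle is the same: redundancy goes out with the data so the far end can correct errors without a round trip.

```python
# Toy forward error correction: a (3,1) repetition code with
# majority-vote decoding. Any single flipped bit per triple is
# corrected on the receive side, no ACK/NACK needed.

def encode(bits):
    return [b for b in bits for _ in range(3)]   # send each bit 3x

def decode(coded):
    return [int(sum(coded[i:i+3]) >= 2)          # majority vote
            for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
coded = encode(msg)
coded[4] ^= 1                  # flip one channel bit in transit
assert decode(coded) == msg    # single error per triple corrected
```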

It’s why reliable comms is a game of “horses for courses”: a fast goer on the flat may be no good on the steeplechase, and vice versa.

However you might want to look at the likes of AMSAT; they have a history of “things tried” and what are now effectively “off the shelf” design solutions, along with code etc. Most times your requirements will be a lot less demanding.

Oh, another thing to watch out for is “shared bandwidth”; consider a cordless phone. You have two requirements, audio and signalling. Because voltages add but power goes with the square, neither gets optimal use of the comms circuit. Further, you have to keep digital signalling out of the analog audio, yet give the signalling a greater distance/noise capability than the audio. It makes for interesting design choices.

I once got such a design badly wrong, approvals-wise. We used a sub-audio-frequency data rate with its fifth harmonic just below the 350 Hz audio cut-off; thus Manchester-coded 60 Hz signalling. It worked fine, giving everything we were looking for, till somebody put the base unit within a foot of a computer display… The monitor frame rate was 60 Hz (not the UK standard 50 Hz) and the stray magnetic field was getting into the data path by magnetic loop coupling on the PCB… Worse, I had made a tiny but fatal mistake in my coding scheme, and the 60 Hz interference made it look as though the handset was sending a valid “keep alive” to the base, so the base failed to drop the line when the handset went out of range… Which would have been an instant approvals failure. A trivial software change resolved the issue, but it was only by luck that the error was found in time, so that the cost was only half a morning of testing and five minutes of coding.

It’s why I feel sorry for the designers of the Galaxy Note; apparently that product recall is going to cost around 10 billion to sort out, by some estimates… From what has leaked out in trade gossip, it’s a battery fault caused by management pushing for a couple of extra percent of battery capacity and not giving enough test time.

Which is a valid reason to tell management that “test time” is not something you can ever cut back on, due to the risk.

Oh, the mobile phone I designed got both the company and the customer company a quite prestigious award (a Which? #1) from a high-profile UK consumer test organisation. So sometimes a line or two of code is the fractional difference between winning gold, praise / a pay rise, and looking for a new employer… Because the buck almost always drops to the bottom of the hierarchy.

Sancho_P October 18, 2016 6:05 PM


I think you’re getting closer to the point / term 😉
What my proposal does is change everything between the USB receptacles of sender and receiver to simple “state of the art” products, within Markus’ proposed budget. This HW will send up to 1 Mb/s [1], optically isolated, far away from critical noise or timing. The screenshot was to demonstrate the performance of the FT232R module and the coupler; the “e” was used to show single and double data bits at the 1 and 0 levels.
Sending shouldn’t be a problem.

However, I don’t have knowledge and equipment to investigate at various receiving PCs (HW, OS, driver, USB bus topology) at which point in time / speed / bus action data may be discarded (let alone which kind of error correction would make sense).
Of course, it would be best to stay clear of any limit, but where is it?

Again, just reading “Checksum Error” after minutes of transmitting would be useless in a security environment.

Not really happy with your “flash rom” equivalent, because there is neither an inherent data error nor FEC involved, just a dull checksum at the end (I assume you are talking about hex data transfer from an IDE into the chip). It may be “only” one bit wrong or hundreds; you wouldn’t know the difference.

The baud rate timing depends on the internal 12 MHz oscillator (multiplied by 4 to 48 MHz) of the FT232R. With higher baud rate the relative error will increase, different chip types on both sides of the coupler would make things worse. It seems reasonable to limit the baud rate below 1 Mb/s.
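The baud-rate granularity can be estimated numerically. This assumes the FT232R’s documented scheme of a 48 MHz clock divided down as 3,000,000 / (integer + eighth-fraction); the exact divisor rules and the special 3 M / 2 M baud cases are assumptions here, so check FTDI’s app notes before relying on it:

```python
# Estimate FT232R baud-rate error: baud = 3,000,000 / divisor,
# where the divisor is an integer (2..16384) plus an eighth
# fraction. Searches all divisors for the closest achievable rate.

def best_baud(target):
    """Return (achievable baud, relative error) for a target rate."""
    fractions = (0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875)
    best = None
    for n in range(2, 16385):
        for f in fractions:
            actual = 3_000_000 / (n + f)
            err = abs(actual - target) / target
            if best is None or err < best[1]:
                best = (actual, err)
    return best

actual, err = best_baud(921600)
print(f"{actual:.0f} baud, {err:.2%} error")  # ~923077 baud, ~0.16%
```

Note this is only the generator’s quantisation error; drift of the internal oscillator itself, and a different chip on the far side of the coupler, add on top of it, which supports staying well below 1 Mb/s.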

Figureitout October 19, 2016 1:41 AM

Clive Robinson
you send “move forward for X mS” and repeat as often as is required to get from point A to point B.
–I can basically do that now, I know timing of reTX, how many times I specify will match a time period. Not seeing the difference. Really can’t spare much though.

Got the packets short, speed slow. I’m ready to move on to newer designs though, really want to see their performance. We want to port code and probably use same protocol if possible. Likely can’t afford 2 [trans]receivers, and probably won’t have peoplepower to make a custom modulation scheme, we’re up to our eyeballs in work as is. We’re pretty much stuck.

Yeah I’ll look at Amsat some time, next project is I’ll have to make some kind of mesh network…at least these will be w/ powered units. Not sure how to code it right away, there’s just a couple hard things and it’ll be mostly easy after. I just want a powered non-RF project, no crazy coupling, small assembly project.

Yeah feel sorry too, lots of the smartphone guys doing RF keep getting squeezed and squeezed into a smaller chunk of the phone they can use, get less space and management expects more. Until it’s basically impossible. Our asshole (biggest most aggressive ass I’ve ever worked around) sales guy wants 1) cheap, 2) robust, and 3) developed quick lol…

Again, just reading “Checksum Error” after minutes of transmitting would be useless in a security environment.
–Uhh, excuse me, not sure what kind of security environment you’re in, but that would be useful info to me and would warrant further investigation. You’d prefer silent failure?

And my flash ROM example, jesus, fight over everything, even when it’s pointless. What is a CD-R when you load a bootable .iso image on it? It’s like a ROM, eh? There are now discs that can get written to multiple times and retain info thru power cycles; that’s like a flash ROM or EEPROM, eh? I don’t care which bit is off; as long as a bit is off I want a completely fresh reflash. If it’s a one-off then it’s not worth investigating usually. If it turns out to be persistent (or malicious), then yes I’d want to know which bit[s] and why. And that can be found out so long as you have a clean one.
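The verify-then-reflash policy above can be sketched as a whole-image hash compare. This is a generic illustration; in practice the readback bytes come from whatever verify/readback command your programmer provides.

```python
# Sketch: hash the known-good image against what's read back from
# the part; any mismatch (even one flipped bit) means reflash.

import hashlib

def image_ok(good_image, readback):
    """True only if the readback matches the image bit for bit."""
    return (hashlib.sha256(good_image).digest()
            == hashlib.sha256(readback).digest())

good = bytes([0x00, 0x01, 0x02, 0x03])
assert image_ok(good, good)
assert not image_ok(good, bytes([0x00, 0x01, 0x02, 0x07]))  # flip caught
```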

Ok thanks for info. I got parts on the way and going to try it as soon as I can.

Clive Robinson October 19, 2016 3:58 AM

@ Figureitout,

Not seeing the difference

The difference is you send as many “move for X ms” commands as it takes for the ROV to move the distance you get to see on the feedback path. If some of the commands get corrupted and lost it does not matter; eventually enough “move for X ms” signals will get through, you will see the ROV reach its target position in the feedback, and you stop sending them. However, if you only send a “start moving” signal, you might not be able to send a “stop moving” signal, in which case you will watch the ROV disappear over the control horizon.

Radio-controlled plane hobbyists used to have this “disappear out of sight and crash hard” problem, and eventually they developed LOS devices that would put the plane into a gentle downward circle until either command was restored or the plane bumped more gently into the ground. It was probably a very similar LOS device in the US drone the Iranians claimed to have captured / brought down by some method, possibly just jamming the control channel and GPS in some way.

Figureitout October 20, 2016 10:30 PM

Clive Robinson
–Well that’s a whole different problem set. W/ a drone, you generally will have a flight life of what, anywhere from a few hours to maybe a day at most. You don’t need multi-year battery life. Once the mission is done you can do a full service on the vehicle, check things, and fix any small issues that may arise. Not sure if they actually have gas motors; there are different types, it looks like. A gas motor w/ an alternator, that’s a nice source of power. That would be nice. Large backup batteries too. You will know in advance where you’re sending the drone. Higher probability of active attacks on comms, so you want multiple different kinds of backup modes ready to get triggered if, say, an error flag is set. For the commands to work, there needs to be an independent module keeping track of location (GPS), which means that system needs to be functioning well too for the commands to even work properly.

I have to squeeze everything into about a 4X4X1 inch box, won’t be serviced for its lifetime, expected to “set it and forget it”. I don’t know where it will be deployed (I know there’s been RF issues w/ UV-tinted windows for some reason), and most likely people won’t be trying to jam comms intentionally.

Different problem sets. Different solutions.

Clive Robinson October 21, 2016 4:46 AM

@ Figureitout,

Well that’s a whole different problem set. W/ a drone, you generally will have a flight life of what, anywhere from a few hours to maybe a day at most. You don’t need multi-year battery life.

It depends on the drone, quite a few are “handheld” size and have very small and light batteries and their comms power budget still has the same problems that your device does.

Even larger drones have comms power budget issues because they have more than a hundred times the range to cover that you do so they need N^2 times as much ERP as you do…
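The N² rule is just the inverse-square law; a two-line check:

```python
# With inverse-square (free-space) propagation, received power goes
# as ERP / d^2, so holding the received power constant at N times
# the range takes N^2 times the ERP.

import math

def erp_multiplier(range_ratio):
    """ERP factor needed to cover range_ratio times the distance."""
    return range_ratio ** 2

print(erp_multiplier(100))                   # 100x range -> 10000x ERP
print(10 * math.log10(erp_multiplier(100)))  # i.e. +40.0 dB
```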

So the power budget and Loss Of Signal (LOS) issues are, when all is said and done, comparable.

Where drones have an advantage on you is that they have the ability to get a larger ERP for free due to having room for larger antennas, but that introduces other problems due to directionality etc 🙁

With regards,

I know there’s been RF issues w/ UV-tinted windows for some reason

One way to “UV-tint” is to deposit a very fine layer of an appropriate metal –gold is one– onto the glass, in a similar way to how the tin-oxide layer is deposited on LCD glass to make the digits/chars/dots.

The tint is thus quite conductive and would distort the EM fields around it acting as either a screening material or worse still an antenna of some kind.
