Friday Squid Blogging: Which Squid Can I Eat?

Interesting article listing the squid species that can still be ethically eaten.

The problem, of course, is that on a restaurant menu it’s just labeled “squid.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

EDITED TO ADD: By “ethically,” I meant that the article discusses which species can be sustainably caught. The article does not address the moral issues of eating squid—and other cephalopods—in the first place.

Posted on October 21, 2016 at 4:00 PM • 182 Comments

Comments

Tarquin October 21, 2016 4:14 PM

The Dirty Cow vulnerability has been present in Linux for nearly 9 years. Here’s more proof, if needed, that open-source software isn’t necessarily more secure. Not patching it for such a long period is inexcusable.

http://arstechnica.com/security/2016/10/most-serious-linux-privilege-escalation-bug-ever-is-under-active-exploit/

VeraCrypt, the successor to TrueCrypt, has been audited by QuarksLab. They found:
8 Critical Vulnerabilities
3 Medium Vulnerabilities
15 Low or Informational Vulnerabilities / Concerns

“This public disclosure of these vulnerabilities coincides with the release of VeraCrypt 1.19 which fixes the vast majority of these high priority concerns. Some of these issues have not been fixed due to high complexity for the proposed fixes, but workarounds have been presented in the documentation for VeraCrypt.”

https://ostif.org/the-veracrypt-audit-results/
https://ostif.org/wp-content/uploads/2016/10/VeraCrypt-Audit-Final-for-Public-Release.pdf

How private is YOUR messaging app? Charity rates WhatsApp as the most secure – but experts aren’t so sure

http://www.dailymail.co.uk/sciencetech/article-3859412/How-private-messaging-app-Charity-rates-WhatsApp-secure-experts-aren-t-sure.html

Markus Ottela October 21, 2016 4:25 PM

@Nick P @Thoth @Sancho_P @Clive Robinson et al.

It’s been six months and the next version of TFC, 0.16.10, is now ready.
https://github.com/maqp/tfc


@NickP IIRC you talked about locally encrypting TFC’s keys a long time ago. This is now featured. Previously, keyfiles were unencrypted files containing the keys updated by the hash ratchet. Group files contained a plaintext list of group members, and log files contained the plaintext log.

This is now all different. There’s a contact database with a simple structure that contains accounts, user accounts, nicks, forward secret symmetric message keys, static symmetric hash ratchet counter (harac) header keys, public keys used during key exchange (private keys are ephemeral and discarded after symmetric keys are derived) and some settings like logging (TxM/RxM), file reception and window notification privacy (RxM).
I put extra care into separately padding every data field per contact. There’s a setting that defines the maximum number of contacts; it is used to pad the number of entries in the database, so the size of the database never changes. The entire database is re-encrypted with a new nonce after every message. This works fast enough even on a gen1 RPi under default settings.
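To illustrate the padding idea, here is a minimal sketch. This is not TFC’s actual code: the field length, the number of fields per contact, and all function names are invented for illustration. The point is only that every field is padded to a fixed length and dummy entries pad the contact count up to the maximum, so the serialized blob is always the same size:

```python
import os

FIELD_LEN = 255      # assumed fixed size of every padded field
MAX_CONTACTS = 20    # the setting that bounds database size

def pad_field(data: bytes) -> bytes:
    """Pad one field to FIELD_LEN bytes: length byte + data + zeros."""
    if len(data) > FIELD_LEN - 1:
        raise ValueError("field too long")
    return bytes([len(data)]) + data + bytes(FIELD_LEN - 1 - len(data))

def unpad_field(padded: bytes) -> bytes:
    """Recover the original field from its padded form."""
    return padded[1:1 + padded[0]]

def serialize_contacts(contacts: list, fields_per_contact: int = 4) -> bytes:
    """Serialize contacts so the blob size never changes."""
    if len(contacts) > MAX_CONTACTS:
        raise ValueError("too many contacts")
    blob = b""
    for contact in contacts:
        for field in contact:
            blob += pad_field(field)
    # Dummy (random) entries pad the entry count up to MAX_CONTACTS;
    # after encryption, dummies are indistinguishable from real contacts.
    for _ in range(MAX_CONTACTS - len(contacts)):
        for _ in range(fields_per_contact):
            blob += pad_field(os.urandom(32))
    return blob
```

Whether one or nineteen contacts are stored, the blob that gets encrypted is identical in size, so an observer of the ciphertext file learns nothing about the number of contacts.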

This database padding also applies to groups, which now contain the group’s name, logging setting and list of members. There are two settings for groups: the maximum number of groups and the maximum number of members per group.

These databases can be scaled upwards in size when the user adds more contacts. There’s protective logic that prevents down-scaling the database size if it holds more contacts than would fit after the down-scale.

The message log database works a bit differently. Each entry contains a padded date, contact information and the message. The number of entries (sent messages) is not padded. On the other hand, all logged messages are written to the same database, so only the maximum number of sent messages is revealed. Furthermore, the logged data is not the message itself, but the assembly packet delivered to the contact. This means the number of logged entries reveals no more metadata than what communication over the XMPP server reveals. There’s also an option (disabled by default) to log noise packets during a trickle connection (constant transmission). This hides even the maximum number of actual messages sent. Sent files might be more sensitive than the discussion, so the file itself is never logged, only placeholder data.

The SSH password used to connect to the RPi for HWRNG sampling is no longer stored in plaintext, but is padded and encrypted in a separate file.

All this data is encrypted with a master key derived from a master password (created when the software first starts up). The password is combined with a 256-bit salt from urandom and derived through PBKDF2-HMAC-SHA256 over at least 65536 iterations; if the hardware is fast enough, the iteration count doubles until derivation takes at least 2000 ms. The master password is entered into a new animated login screen written with curses.
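A rough sketch of that iteration-doubling idea, using only the standard library. The function name and parameters are hypothetical; TFC’s actual implementation may differ in details:

```python
import hashlib
import time

def derive_master_key(password: str, salt: bytes,
                      min_iters: int = 65536, min_time_s: float = 2.0):
    """Derive a 256-bit master key with PBKDF2-HMAC-SHA256, doubling the
    iteration count until a single derivation takes at least min_time_s."""
    iters = min_iters
    while True:
        start = time.monotonic()
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iters)
        if time.monotonic() - start >= min_time_s:
            return key, iters  # store iters with the salt for later logins
        iters *= 2
```

The chosen iteration count has to be stored alongside the salt, so later logins re-derive the same key instead of re-benchmarking.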

The encryption adds some protection against physical attacker, protects from impersonation, mitigates the wear leveling problem and makes data exfiltration harder.

PSKs are also password+salt protected over 65536 iterations.


The trickle connection design is much better: Tx.py uses constant-time list lookups to determine the highest-priority queue to load data from, plus a new constant-time context manager. Together they should hide runtime much better. All data given to the sender process via queues is pre-padded. The trickle connection works for groups, and I finally got messages and commands to take turns reliably.


TFC can now use multiple accounts in Pidgin. This hides the network of communicating accounts from servers to some extent, since each user can route messages via multiple servers.


Previously, the hash ratchet counter used for forward secrecy was an unauthenticated public value. Now it’s encrypted with a static key that’s either a separate PSK or a key domain-separated from the ECDHE shared secret.


The data structure of transferred data has much less overhead (e.g. base64 encoding over serial was unnecessary).


The protocol is now much cleaner and there are clearer abstractions between layers.


TFC no longer uses one window to display all sent and received messages. Selecting a contact/group now sends an encrypted command to RxM to display the chat log for that person.

Messages per session are ephemeral. Logged messages can be viewed with command /history or exported in plaintext with /export. Screen can still be cleared with /clear and returned with /msg Alice. There’s a new command called /reset that resets each terminal and removes the ephemeral message history (it’s essentially shoulder surfing protection on steroids) from Rx.py.


The public keys and the local key’s key decryption key that are typed manually now use Bitcoin’s Wallet Import Format, which is Base58, convenient for manual typing, and comes with an integrated SHA-256-based checksum.
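For reference, the Base58Check encoding behind Bitcoin’s WIF can be sketched in a few lines of standard-library Python. This is an illustrative implementation, not TFC’s code; the point is how the appended double-SHA256 checksum catches typos during manual entry:

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_encode(payload: bytes) -> str:
    """Append a 4-byte double-SHA256 checksum, then Base58 encode."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    pad = len(data) - len(data.lstrip(b"\x00"))  # leading zero bytes -> '1's
    return "1" * pad + out

def b58check_decode(encoded: str) -> bytes:
    """Base58 decode and verify the checksum; raise on any typo."""
    n = 0
    for ch in encoded:
        n = n * 58 + B58_ALPHABET.index(ch)
    data = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(encoded) - len(encoded.lstrip("1"))
    data = b"\x00" * pad + data
    payload, checksum = data[:-4], data[-4:]
    if hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] != checksum:
        raise ValueError("checksum mismatch: probable typo")
    return payload
```

Any single-character typo changes the decoded checksum with overwhelming probability, so the software can reject the entry and ask the user to re-type it.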


Whenever a user manages a group (creates it, adds/removes members, or deletes it), Tx.py asks whether to send a message about it to the related contacts so that they can manage their groups as well. As spoofing a message about leaving the group could be used nefariously, the recipients’ software displays a warning that the contact is still able to receive messages sent to the group.


The local testing feature is completely redesigned. It now uses Terminator as a mux to launch all the software (aside from the IM client) adjacent to one another. The installer creates two launchers (TFC and TFC DD), where the latter launches the dd.py data diode visualizer proxy software to relay messages between Tx.py/Rx.py and NH.py.

The /names command now shows the key exchange method and logging setting, also when it’s printed during startup. Contacts are no longer selected with an ID number, but with account or nick (tab completion supported). Groups can finally be selected from the main menu.


I fixed the screen clearing issue with Ubuntu 16 by moving from os.system("clear") to VT100 codes, which allowed a much more interactive feel for many prompts.


The code quality is much, much better and there are significantly more unit tests. (A month’s worth of effort in one sentence.)


The serial interfacing is much better. I noticed 19200 baud was stable with the current data diode design, so I doubled the speed. In case there are any problems, there’s now a user-adjustable Reed-Solomon erasure code used during serial transmissions.
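To picture what the erasure coding adds to each serial packet: a minimal systematic Reed-Solomon encoder over GF(2^8). This is an illustrative sketch, not TFC’s actual code, and the decoding side (error/erasure correction) is omitted for brevity:

```python
# Build GF(256) exp/log tables for the field polynomial 0x11d,
# the one conventionally used by Reed-Solomon implementations.
GF_EXP = [0] * 512
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(256) elements via the log/exp tables."""
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def gf_poly_mul(p: list, q: list) -> list:
    """Multiply two polynomials with GF(256) coefficients."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator_poly(nsym: int) -> list:
    """Generator polynomial: product of (x - alpha^i) for i in 0..nsym-1."""
    g = [1]
    for i in range(nsym):
        g = gf_poly_mul(g, [1, GF_EXP[i]])
    return g

def rs_encode(msg: bytes, nsym: int) -> bytes:
    """Append nsym Reed-Solomon parity bytes to msg (systematic encoding)."""
    gen = rs_generator_poly(nsym)
    res = list(msg) + [0] * nsym
    for i in range(len(msg)):          # polynomial long division by gen
        coef = res[i]
        if coef != 0:
            for j in range(1, len(gen)):
                res[i + j] ^= gf_mul(gen[j], coef)
    return bytes(msg) + bytes(res[len(msg):])
```

The encoding is systematic: the original packet is transmitted unchanged, followed by nsym parity bytes, and a decoder can correct up to nsym/2 corrupted bytes (or nsym known erasures) per block. Making nsym user-adjustable trades serial throughput against robustness.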

The removal of serial interfaces no longer crashes the programs (they now scan for the adapter instead), and any random device name remappings (e.g. /dev/ttyUSB0 > /dev/ttyUSB2) are handled automatically. NH.py can even cope with the user moving serial cables to opposing ends, provided the user sends a message from TxM so NH can learn about the change.

The RPi 3 moved the serial interface from /dev/ttyAMA0 to /dev/serial0; that’s fixed too. Raspbian’s kernel maps this file correctly to the serial port regardless of the underlying RPi version (AMA0 is now Bluetooth on the RPi 3).


Tx.py now shows HWRNG sampling progress in real time over SSH.


The installer is a lot easier to use. Raspbian configurations are no longer displayed when launched on *ubuntu/Mint. There’s a set of interactive prompts.

Installer randomizes the order in which dependencies are downloaded.

Apt-get was changed to the newer apt.

Installer can now automatically configure static IP for TxM and HWRNG configurations so HWRNG use is easy. Local testing has interactive setup for IP and user name of HWRNG (settings are stored to Tx.py).


Most (if not all) of the 3D models in documentation were updated to cleaner, minimalistic style.

There’s now a brand new Wiki with articles on the threat model, security design, protocol, HW configurations, installation and how to use the software. There’s a FAQ, and a brand new set of step-by-step tutorials on how to build the HWRNG on perfboard and breadboard, and how to build the data diode either point-to-point, or on a perfboard so that it uses two DC power supplies. The security design article is the most detailed one and combines thoughts I expressed in my blog and in some discussions here. It still needs some work, but maybe we can now start discussing the terminology and overall design. The protocol needs some attention too.

The whitepaper is something that’s going to take time to write, so that’ll have to wait. It’ll most likely summarize the more detailed articles in the Wiki, so there’s no need to wait for it before getting to work.


This is just to summarize the key features. The rest is in the update log:

https://github.com/maqp/tfc/wiki/Update-Log


I hope you’ll find the time to test the software. I made it particularly easy for this purpose. Just run

sudo echo && wget https://raw.githubusercontent.com/maqp/tfc/master/setup.py && sudo python setup.py 4

under Ubuntu virtual machine to get started with the local testing software. Let me know what you think.

Annoyed Users October 21, 2016 4:34 PM

@Moderator

Is there something that can be done about excessively long “comments”? The post above is not a comment, it is a blog post at almost 1500 words. It will make it difficult to read all the other comments because it requires so much scrolling.

TIA

Annoyed User

Pedro October 21, 2016 4:36 PM

In Argentina, the Chamber of Deputies has approved electronic voting (e-vote); now it only needs approval by the Senate to become law. They want to implement it in 2017.

It was a promise of the current President of the Nation, who is resistant to any criticism, putting at stake the constitutional guarantee of ballot secrecy for all citizens (and probably a good business for a few).

They completely ignored the experts’ observations: they want to implement a system where the vote is printed on a ballot and also recorded on an RFID chip embedded in it.

The bill (which was approved at 4 AM on the floor of the Chamber of Deputies by a large majority, pushed by the ruling party) incorporates PRISON PENALTIES for researchers who try to understand how the electronic voting machine works.

A year ago, a researcher (Joaquín Sorianello) had his house raided and his computers seized, and he faced criminal proceedings, for having found THE PUBLIC SSL CERTIFICATES ON THE INTERNET AND REPORTING THEM TO THE COMPANY. The courts finally determined that his conduct, far from being a crime, was that of an alert citizen.

Multiple people have found vulnerabilities in the system, some even demonstrated at security conferences (for example, Ekoparty 2015), such as the MultiVoto vulnerability, tested and verified, which allowed multiple votes to be recorded on a single ballot (discovered by Dr. Alfredo Ortega, who, in a presentation before the Chamber of Deputies, also showed how he accessed the database of the Chamber’s website through SQL injection).

Independent research (excellent paper): http://ivan.barreraoro.com.ar/vot-ar-una-mala-eleccion/

MultiVoto: http://www.elladodelmal.com/2015/07/como-votar-multiples-veces-con-el.html

Subsystems of the electronic voting system deployed in 2015 in the Autonomous City of Buenos Aires (a system very similar to the one they now want to implement nationwide): https://blog.smaldone.com.ar/2015/07/15/el-sistema-oculto-en-las-maquinas-de-vot-ar/

TODAY’s news article with demonstration videos: http://www.lanacion.com.ar/1948796-la-boleta-unica-electronica-implica-riesgos-para-el-secreto-del-voto

Please, whoever wishes to, translate this comment, copy it, comment on it, give it visibility. Thank you very much.

Ram October 21, 2016 4:54 PM

No squid can be ethically eaten – they are smart and should not be eaten unless necessary for life.

Moderator October 21, 2016 5:04 PM

@Annoyed: The Friday squid post is for open discussion of security-related matters. Markus’s post qualifies; it is the continuation of an extended conversation (involving the individuals noted at the head of the comment, and others) about Tinfoil Chat, an encrypted messaging system. It’s long, but it’s relevant, and it’s in the right place. If you’re not interested in reading it, please keep scrolling.

Markus Ottela October 21, 2016 5:23 PM

@Annoyed Users

The longish post summarizes more than a thousand hours of work I’ve put in since May to give endpoint secure messaging free to everyone. Like the moderator said, it’s been discussed extensively with many people who post here. I hope you’ll find it useful, take part in the discussion and maybe even contribute to the project.

Annoyed User October 21, 2016 5:26 PM

@Moderator

Bruce has many readers, but as of this date I will no longer be one of them. Whether you like it or not, the effective impact of these excessively long posts is to drown out and silence other voices. I come here to listen, not to make myself heard, but if all I hear is the same person babbling, there is no point in my coming. I can listen elsewhere. Good luck,

Annoyed User

ab praeceptis October 21, 2016 5:58 PM

Annoyed User

In the couple of weeks I’ve been here, Markus Ottela has posted fewer than a dozen times, I think even fewer than 5, and only 1 of his posts was longer than what’s usual here.

Moreover, he hasn’t blabbered but reported on actual work in our general field of interest here. How could that be an annoyance? In the universe I live in actual work and technical reports on it are welcome.

The annoyance, pardon me, are you. If people like Markus Ottela stopped writing here in any length they deem adequate, I’d take that as a loss.

As for you: bon voyage.

Harold Thomas Martin October 21, 2016 6:02 PM

For the How Stupid Do They Think We Are archive:

The RUSSIAN HACKER!!!!1! turns out to be a hacker who’s Russian. Because there’s nobody less obvious the FSB could pay to hack defenseless Democrat chumps. And because nobody would hack helpless incompetent Democrats just for shits & grins. And because FSB couldn’t wait ten seconds for 30 thousand stoned adolescents to bust the DNC’s tender maidenhead. Because what Eugene got busted for is undisclosed, but obviously that.

http://theins.ru/news/34012

C October 21, 2016 6:08 PM

Hey Bruce,

The post you linked to doesn’t talk about ethics: it talks about conservation status. There is hardly a connection. It’s upsetting enough that you alternate between “squids are so smart, they can do xyz” and “5 favorite squid recipes” in your Friday posts; please don’t start mistaking environmental protection for ethics. It’s not ethical to slaughter sentient beings for food when other options are available.

In any case, it’s easy to avoid eating the endangered ones: simply stop eating squid 🙂

Clive Robinson October 21, 2016 6:15 PM

@ Ram,

… they are smart and should not be eaten unless necessary for life.

Some species such as the Humboldt, have become ecological disasters. Short of just ignoring the problem, fishing them for food has kept their numbers under some kind of control.

In more recent times, the increase in dissolved carbon dioxide in the water due to climate change has made this voracious, always-hungry predator slower and thus more vulnerable. However, they are pack hunters that form shoals of up to a thousand individuals and devour entire shoals of fish. Diminishing fish stocks and increasing numbers have caused the Humboldt to broaden its food sources, and they are thus destroying entire ecologies. Traditional hand-line fishing of these six-foot monsters, known locally as diablo rojo (red devils), is an extremely risky business, and fishermen regularly get injured or killed. Likewise, divers in the vicinity of Humboldts have been seriously injured even though they did not provoke them.

But Humboldt are just one of very many species, many of which are still in balance within their environment, so not all need to be predated as a human food source.

ab praeceptis October 21, 2016 6:22 PM

Markus Ottela

Some remarks.

  • While I consider NaCl an excellent choice it might be desirable to provide a fallback and to not put all eggs into one basket.
  • sampling (i.a.) randomness through SSH over Ethernet seems an inconsistency that actually weakens your design. It might be worthwhile to use NaCl there, too.
  • PBKDF2-*? From what I understand that is mainly used in contexts that require it (e.g. smartcards). Wouldn’t there be more attractive KDFs/Hash ratchets (e.g. Argon2 (PHC winner))?
  • You might want to look into mypy (http://mypy-lang.org) which allows “lite type annotations”.

Compliments for your work.

Clive Robinson October 21, 2016 6:39 PM

@ Marcus,

The longish post summarizes more than thousand hours of work I’ve put [in] since May to give endpoint secure messaging free to everyone.

That is over “fulltime employment” hours, and is much appreciated by a number of us here (some of whom would like to be able to put in the same level of effort in our own non-employment coding projects).

Saying something like “keep up the good work” etc does not sound right, so hopefully a “well done” will sound better.

Thoth October 21, 2016 8:07 PM

@all
It is quite sad to see some readers putting themselves at the center, thinking that saying they won’t read @Bruce Schneier’s blog, because we posted some long posts that require scrolling, is somehow meaningful or helpful.

These people have no idea how much time is spent drafting a long post jam-packed with useful insights and important development notes, released by some of us who are doing practically free and open source security development that not many would be willing to burn their weekends and pull their hair out over.

Please be more empathetic to how we feel putting in the effort to write highly informative content that you won’t find lying around on the Internet, and to include progress notes of practical security development we do in our free time, versus those few seconds of frustration while scrolling through the long pages. I believe a few of us use smartphones to access the content and have to scroll through the long run of posts, and some of us have to type replies on tiny little smartphone screens (sometimes in a long-winded fashion out of necessity).

If scrolling is too troublesome, then this is the wrong place to be. Expect tonnes of scrolling, and be more empathetic to those of us who have lots to say and important content to publish.

Drone October 21, 2016 8:09 PM

“…listing the squid species that can still be ethically eaten.”

How ’bout a list of squid species that can ethically eat humans?

Thoth October 22, 2016 12:30 AM

@Markus Ottela
Regarding Yubikeys which you mentioned in your change log:

“No magical crypto dust exists to protect from this so make sure to use strong passphrases and possibly 2FA with something like Yubikey, that remembers a static part of the passphrase”

Nice way of putting it by the way 🙂 .

Now for the Yubikey part, I have an idea how to make some magical crypto dust via some magical crypto token 🙂 … or more of an illusionary one that looks magical …

If you remember how the Android OS does its Full Disk Encryption with Qualcomm’s QSEE hardware-backed Keymaster: the hardware-backed Keymaster stores an RSA private key which is used to sign a user-entered PIN or password, and the signature is then put through a process to shorten it to whatever key length the Android OS FDE system wants. Essentially, the signature is hashed down to the key length, to put it very simply.

What are the features Yubikeys have:
– HOTP
– PIV (Smart card login)
– FIDO U2F 2FA
– OpenPGP smart card

It now becomes very apparent that there are 3 options that enable smart card based secure key storage of an RSA 2048-bit key inside the smart card chip of a Yubikey. Those are the PIV, FIDO and OpenPGP applets.

The Android FDE technique can be emulated in this context by using any one of these three smart card applets found in a Yubikey token to store an RSA private key that signs the user-derived PBKDF hash; the 2048-bit signature would then be hashed to derive a 256-bit key for use.

It can be represented as:

CryptoKey = Hash(RSA2048Sign(HardwarePrivateKey, PBKDF2(Hash, UserPassword)))

This ensures that a “What You Have” and “What You Know” exist at the same time to provide a strong cryptographic protection. Even if the password is weak, the attacker will need a tamper resistant hardware protected private key to derive the decryption keys.
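To make the construction concrete, here is a runnable sketch of the pipeline. Heavy hedging applies: stdlib Python has no RSA, so HMAC-SHA256 stands in for the token’s non-exportable RSA-2048 signing operation, and every function name and parameter here is hypothetical, not Yubikey or TFC API:

```python
import hashlib
import hmac

def derive_crypto_key(user_password: str, salt: bytes,
                      device_secret: bytes) -> bytes:
    """CryptoKey = Hash(Sign(HardwareKey, PBKDF2(Hash, UserPassword)))

    device_secret stands in for the token's non-exportable private key;
    HMAC-SHA256 stands in for the on-token RSA-2048 signature operation.
    The iteration count (100_000) is an arbitrary placeholder.
    """
    # "What You Know": stretch the (possibly weak) password
    pw_hash = hashlib.pbkdf2_hmac("sha256", user_password.encode(), salt, 100_000)
    # "What You Have": keyed operation only the hardware token can perform
    signature = hmac.new(device_secret, pw_hash, hashlib.sha256).digest()
    # Compress the signature down to a 256-bit working key
    return hashlib.sha256(signature).digest()
```

The security argument survives the stand-in: without the device-held secret, an offline attacker cannot even begin a dictionary attack on the password, because every candidate key requires an operation only the token can perform.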

There are three applets in the Yubikey; which of them is the most suitable to invoke to get the job done?

PIV applet:
– It is used in Windows login and governmental stuff, but it is rarely used in a normal user setting and is complex to work with. It requires reading thousands of pages of PIV documentation (NIST FIPS 201 and the accompanying standards linked below) to understand how to get it working.

OpenPGP applet:
– This is one of the better choices, but the problem is that the OpenPGP specification only allows a single keyset in its applet. Users might want to segregate their email security keys from their TFC security keys.

Due to how OpenPGP works, an OpenPGP keyset is made up of three keys: one for decryption, one for signing and one for authentication, but all of them are simply RSA 2048-bit keys at the end of the day. You can specifically decide to use only one of the keys to perform an RSA operation if you want, but then another problem appears: the smart card PIN. In order to execute a security operation, the user has to submit a smart card PIN to operate the RSA keys, so now the user not only has to know his weak password, he also needs to remember his PIN.

Methods can be used to derive a smart card PIN from the hashed password, but that means when a user wants to operate the OpenPGP applet on the Yubikey independently, the user must know how to derive the weak password into the hashed password and then into the smart card PIN, which is a pain; still, this surely provides much better security compared to the next method I describe below.

FIDO U2F 2FA applet:
– You do not need a PIN to operate the FIDO U2F applet, because that’s part of the specification. All you need to do is press the golden button found on the Yubikey to activate the key operation. The reason PINs are not required is its primary role as a “What You Have” factor complementing the already existing “What You Know”, and FIDO U2F does exactly what it was intended to do very well.

Another plus point is that the FIDO U2F applet is allowed to create unlimited amounts of cryptographic keys, and the FIDO keys are NOT RSA-2048 but NIST/NSA Suite B ECDSA P-256 keys, as per the FIDO standards. Now, someone might be shouting loudly “BACKDOOR SPOTTED”; before yelling that, chill and listen. You would have to figure out the P-256 key and be able to reverse the hash function on the signature for it to be of any use at all against the algorithm I have given above.

The FIDO U2F applet can allow unlimited keys because the U2F specification has a provision for derived or wrapped key offloading: the smart card / Yubikey houses a master secret key used to derive or wrap each P-256 key, and the wrapped P-256 key material can then be exported since it is no longer a live key. Yubikeys take the path of combining an internal HMAC-SHA256 algorithm with unique nonces to derive each unique P-256 key, which is (in my opinion) more secure than using AES to encrypt the P-256 key. Linked below is Yubikey’s U2F key derivation method. What is offloaded during the creation of each unique P-256 key is actually a unique nonce, which by itself is not sensitive unless the Yubikey’s master HMAC key for the FIDO applet can somehow be extracted from the tamper-resistant smart card chip. The master HMAC key for each Yubikey device is uniquely generated, so the compromise of one Yubikey will not affect another, even if a card’s tamper-resistant barriers and sensors are somehow breached and reverse engineered successfully.
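That derive-from-master-key idea can be modeled in a few lines. This is a simplified sketch of my reading of Yubico’s blog post, not their actual scheme: the real device also MACs the application ID into the key handle, and the derived keys are ECDSA P-256 scalars rather than raw HMAC outputs. All names here are hypothetical:

```python
import hashlib
import hmac
import os

def u2f_register(master_key: bytes, app_id: bytes):
    """Derive a fresh per-registration private key from the device master
    key and a random nonce. Only the (non-sensitive) nonce leaves the
    device, as part of the key handle stored by the relying party."""
    nonce = os.urandom(32)
    priv_key = hmac.new(master_key, app_id + nonce, hashlib.sha256).digest()
    key_handle = nonce  # real devices also MAC the handle; omitted here
    return key_handle, priv_key

def u2f_rederive(master_key: bytes, app_id: bytes, key_handle: bytes) -> bytes:
    """Re-derive the same private key when the relying party presents the
    key handle during a later authentication request."""
    return hmac.new(master_key, app_id + key_handle, hashlib.sha256).digest()
```

The device stores nothing per registration: the handle alone is useless without the master HMAC key, yet the device can reconstruct any of its keys on demand, which is how a tiny token supports unlimited registrations.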

Without needing to remember an additional PIN (just a press of the golden button), the user can use the FIDO applet to create a unique P-256 signing key, which during the process will offload its unique nonce and public key certificate. You can store the public information together with the encrypted database, and if needed, you can sign the encrypted database and the public cryptographic information itself with the P-256 key to detect tampering, with the ease of a golden-button press on the Yubikey token.

The FIDO U2F smart card APDU protocol (used to connect to the Yubikey applets) is an open standard, and the Yubikey, being a FIDO-certified device, must implement it. I have linked the FIDO U2F APDU protocol below; it’s very simple, with only 3 request message formats (look for XXXXXX Request XXX under Chapter 6): the Registration type (key generation operation), the Authentication type (signing operation) and the Version querying type (theoretically 🙂 ).

Comparing all the options above, FIDO U2F is the best choice for creating a hardware-secured private key as part of the “magic sauce” that protects a weak user password from being brute-forced, as long as the Yubikey token is secured.

On top of that, I have a couple of spare Yubikeys sitting around, so if you want help for the Yubikey/FIDO portion, you can drop me a message.

Links:
http://csrc.nist.gov/groups/SNS/piv/standards.html
https://www.yubico.com/2014/11/yubicos-u2f-key-wrapping/
https://fidoalliance.org/specs/fido-u2f-usb-framing-of-apdus-v1.0-rd-20140209.pdf

Thoth October 22, 2016 12:31 AM

@Markus Ottela
To add to the above: if using the FIDO U2F applet for the private key, the algorithm I offered has to be updated to:

CryptoKey = Hash(ECDSA-P256-Sign(HardwarePrivateKey, PBKDF2(Hash, UserPassword)))

Curious October 22, 2016 12:48 AM

Google, or maybe Alphabet, is less concerned with respecting people’s privacy, as I understand it:

“Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking”
https://www.propublica.org/article/google-has-quietly-dropped-ban-on-personally-identifiable-web-tracking

“But this summer, Google quietly erased that last privacy line in the sand – literally crossing out the lines in its privacy policy that promised to keep the two pots of data separate by default. In its place, Google substituted new language that says browsing habits “may be” combined with what the company learns from the use of Gmail and other tools.”

“The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on your name and other information Google knows about you. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct.”

I stopped having anything to do with Google a long time ago: No Youtube account, no Gmail account, not using Google Chrome browser. I hope I haven’t missed anything.

Hrm, I must try to set up some Linux machine before Microsoft screws me over. This costs money and I don’t want to repurpose my desktop PC (maybe a laptop with a big screen could be interesting); it is a little bit confusing to me (though I have done it before) and I don’t really trust the Linux OS that much either.

Thoth October 22, 2016 1:35 AM

@Curious
Privacy and tracking are two opposing business models. They cannot be put together, and this is what happens. The business of offense/tracking has a higher short-term payout, so that is unsurprisingly where Google will focus.

You should not wait for such situations to set up Linux. It should have been done a long time ago as part of privacy- and security-enhancing routines.

Volt October 22, 2016 3:20 AM

@Curious
If not trusting Linux is a reason for you to stick with Microsoft, you’re either lacking in knowledge about either or both, or there’s an opportunity for you to recognize you’ve made up an excuse.

Curious October 22, 2016 3:37 AM

@Volt

Haha, good point. I just don’t feel fulfilled in desiring Linux is all I wanted to say I guess.

Thoth October 22, 2016 5:34 AM

@Markus Ottela
I was looking through the FIDO documentation and, despite the commands being simple, it is difficult to repurpose its signing feature as a hardware-protected brute-force prevention mechanism for the “What You Have” factor in 2FA.

I decided to spin up my own Smart Card Root Of Trust (SCROT) scheme below, which is easily workable on many smart cards available on the market as long as you buy one with AES-256-CBC.

SCROT operates as a hardware-based Root of Trust where it is used in an authentication scheme that requires a What You Know and What You Have. The SCROT contains an AES-256 bit key that is corresponding to the What You Have factor in the form of a Smart Card or Secure Element running JavaCard.

SCROT works with a What You Know factor in the form of a user-supplied secret byte string (i.e. hashed passwords or keys). During the registration phase, the user-supplied secret byte string, with a limit of 200 bytes, would be hashed with SHA-256 and stored as a PIN. A randomly generated AES 256-bit key would be created and stored as non-exportable.

When a user intends to authenticate, to perform the Root of Trust key-mixing operation, the user has to supply the same secret byte string (max 200 bytes) used when creating the ROT-derived key. During the creation of the ROT master key, the user may set flags to specify whether there will be a limit on failed authentications, which will result in CSP wiping, or whether unlimited authentication tries are allowed. If the authentication limitation flag is set to TRUE, the user may set the number of retries allowed in a single connected session before the CSP wipes itself when failure is detected.

The hardware-secured AES 256-bit key would be used as an encryption key to encrypt the user-supplied secret, padded with zero bytes until it satisfies the AES block length (16 bytes), and the resulting ciphertext would be SHA-256 hashed to derive a final 256-bit key. The AES cipher mode will be CBC with 16 bytes of zeroes as the IV.

Thoth October 22, 2016 6:11 AM

@Figureitout
I have managed to glue the decryption command handling on the Java desktop client side up and now I have the decryption timing.

A 345 KB file decrypted in 137,582 ms, about 2.3 minutes, which is a little slower than encryption since I include an "implicit checksum" flag, an internal security mechanism in the GroggyBox scheme. The flag must be enabled for decryption if encryption used it and vice versa, but that should not be too costly speed-wise. I guess we can safely settle on a speed of about 2+ minutes for something around 345 KB.

Very delighted to see the decryption and encryption logic working after all the effort spent integrating the desktop client and the JavaCard applet.

Clive Robinson October 22, 2016 6:16 AM

@ Thoth,

I decided to spin up my own Smart Card Root Of Trust (SCROT)

A word to the wise, “SCROT” is not a good name… It’s a slang word to refer to a male delinquent implying worse behavior than a “Yob”.

Yob is derived from “a backwards boy” and “scrot” is derived from various derisive comments about a part of the male anatomy…

Curious October 22, 2016 6:42 AM

“Indiscreet Logs: Persistent Diffie-Hellman Backdoors in TLS”
https://eprint.iacr.org/2016/999

“We conducted an investigation of discrete logarithm domain parameters in use across the Internet and discovered evidence of a multitude of potentially backdoored moduli of unknown order in TLS and STARTTLS spanning numerous countries, organizations, and protocols. Although our disclosures resulted in a number of organizations taking down suspicious parameters, we argue the potential for TLS backdoors is systematic and will persist until either until better parameter hygiene is taken up by the community, or finite field based cryptography is eliminated altogether.”

Not sure if anyone linked to this in the previous days on this website.

Clive Robinson October 22, 2016 6:43 AM

@ Bruce,

With regard to the Grauniad article on squid, I'm not sure they have the correct view of things (but I don't have the time to do a full research-document triage).

This is from a few years ago,

http://www.pnas.org/content/104/31/12948.full

It indicates that, despite increased fishing, there had been an increase in range and habitat damage by the Humboldt squid over the preceding decade and a half, and that it was actively destroying the hake stocks that were the primary economic resource in the areas being invaded.

I know the Humboldt is short-lived and the research document is several of its generations old, but it would require a significant event or change to reverse that growth trend.

Thoth October 22, 2016 6:46 AM

@Clive Robinson
I didn’t even think about that… lol.

Was just using the boring developer convention of turning a concept's acronym into a name.

Oh well, give me some ideas for a name that would sound nice and simple.

Thoth October 22, 2016 7:19 AM

@Curious
re: Indiscreet Logs: Persistent Diffie-Hellman Backdoors in TLS

Stick to the tried and tested DH groups (MODP/RFC 3526 and Oakley) for now. The main problem is getting your browser to detect weak or problematic groups and weed them out. You don't really have fine-grained control of your own browser, which makes the problem even worse.

The way forward would be to standardize a set of tried and true DH groups for those who are still using DH.

Another method is to have a long-term RSA signing key and, for each session, generate a random pre-master RSA session key and sign it with the long-term RSA signing key. The pre-master RSA key is used to encrypt 24 bytes of nonce sent to the destination (assuming the destination likewise sends its own signed pre-master RSA session public key), and the destination encrypts its own 24-byte nonce under its pre-master RSA key in return.

Both sides combine the two 24-byte nonces into a complete 48-byte nonce: the first 32 bytes are used as the 256-bit session encryption key and the last 16 bytes as the 128-bit session HMAC key. Once the secure session is finally established, the pre-master RSA keys are discarded (memory buffers wiped) and you have a sort of RSA-based forward secrecy scheme (kinda like DH, but not DH).
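A minimal Python sketch of just the nonce-combining step described above (the RSA signing and encrypted transport are omitted; only the combine-and-split is shown, with `secrets` used for nonce generation):

```python
import secrets


def session_keys(nonce_a: bytes, nonce_b: bytes):
    """Combine the two 24-byte nonces (one contributed by each side) into a
    48-byte value, then split it: first 32 bytes -> 256-bit session encryption
    key, last 16 bytes -> 128-bit session HMAC key."""
    if len(nonce_a) != 24 or len(nonce_b) != 24:
        raise ValueError("each side must contribute exactly 24 bytes")
    combined = nonce_a + nonce_b
    return combined[:32], combined[32:]


# Each side contributes its half; in the scheme above these would travel
# encrypted under the peer's signed pre-master RSA key.
enc_key, mac_key = session_keys(secrets.token_bytes(24), secrets.token_bytes(24))
```

Note that with this split the encryption key straddles both contributions (24 bytes from one side, 8 from the other), so neither side alone determines it.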

Europe vs American Models October 22, 2016 7:27 AM

EU
Crusader Margrethe Vestager, the Danish head of the EU’s competition division, has been dropping hints about a new front against Silicon Valley. She’s scrutinizing how companies’ stockpiles of data might breach antitrust or merger rules. Meanwhile, privacy rules the EU passed this year will give authorities across the region broad powers to investigate and fine companies that don’t seek consent before collecting user data. “Europeans are much more sensitive to the idea of exploitation of their personal data and much less inclined to buy into the social contract where we get a service for ‘free’ by paying for it with our personal data,” says Van Someren, the investor.

The U.S. approach, on the other hand, is to let tech companies go wild, with (as WikiLeaks reveals) Google's Eric Schmidt and Facebook writing White House tech policy.
http://www.bloomberg.com/news/articles/2016-10-20/silicon-valley-s-miserable-euro-trip-is-just-getting-started

America
People with conscience and morals used to run America. Now it's tax-dodging, profit-minded, Wall St-backed corporations using big data to crush competition. As Curious states, they are becoming ever more intrusive and literally push or force themselves upon clueless, addicted, lazy Americans:
http://www.theatlantic.com/technology/archive/2016/10/incessant-consumer-surveillance-is-leaking-into-physical-stores/504821/

Grauhut October 22, 2016 8:50 AM

In Germany, a new intelligence law for the BND has been passed.

They did it like GCHQ: just formally legalized all the snooping, knowing that the German constitutional court will not accept it as is. But that will take years to fix. 🙂

“Reporters Without Borders (RSF) is appalled by the adoption of the reform bill on the German foreign intelligence agency (BND) in the German Bundestag today. With the reform bill, the ruling coalition wants to allow surveillance of foreign journalists abroad by the BND and thus to legalize a severe breach of the fundamental rights to freedom of expression and freedom of press.

Today the German Bundestag has passed the law without any noteworthy amendments. As a consequence, the BND has the explicit right to spy without restrictions on non-EU journalists, as long as this is deemed to serve Germany’s political interests.

The ruling coalition thus not only defies the unanimous criticism of media associations and human rights organizations, three UN Special Rapporteurs (http://t1p.de/gwdl), the OSCE Representative on Freedom of the Media (http://t1p.de/iut5) and the legal committee of the German Bundesrat (http://t1p.de/1ian), but also technical objections (http://t1p.de/ut2d; http://t1p.de/4xx5).”

https://rsf.org/en/news/bnd-law-german-bundestag-ignores-criticism-civil-society-and-breaches-constitution

Thoth October 22, 2016 9:24 AM

@Grauhut
Any nation that waves the flag of respecting privacy and personal security is talking total nonsense. I don't buy into it, and neither, I believe, would the vast majority of the regulars here.

Germany used to take pride in its "IT Security - Made in Germany", and I think this would simply tarnish that brand as well. When it comes to spying and personal security, everyone and everything is simply a bunch of data and metadata asking to be collected, as long as it exists somewhere.

I am wary of that "Made in __________" nonsense, and I find it appalling that people actually buy into it; the next thing that hits them is the stark reality.

What we need are much more robust personal security and privacy that we create on our own as much as possible and not dictated to us by someone else.

hawk October 22, 2016 10:03 AM

The more technical a solution, the less likely anyone will use it, and often, the more unfriendly it is to use.
Just listen to yourself; it's like a kitchen appliance with a ten-ton user manual. Oh, and you had better not make a mistake or the whole thing breaks, and worse, you won't even know it until it's too late. But don't come back here expecting any sympathy.

Here, let me save you the time in responding:
@hawk
Hey man, you sure are dumb. No problem here typing with my left foot on the jagger502b while holding down three space bars with both hands while I do the retinal thingy with the crypto key masher version 9.02 on one certain Linux flavor only. No problems here. Why doesn’t everyone do this? They must be dumb.

Figureitout October 22, 2016 12:47 PM

Markus Ottela
–Looks great. I like the new data diode w/ DC power plugs instead. There could maybe be some kind of crazy attack there over powerlines thru a powersupply but not worth it IMO to fully prevent that. Being able to leave it on your bench w/o changing batteries is nice.

Thoth
–Ok, so it looks like that Freescale/NXP chip uses an external crystal, probably 32.768 kHz, not clocked internally. Works w/ an HCS08 lol. Wonder what J3 is used for. U5, U6 and U3 I'd guess are some kind of regulator. Not sure about U2 (probably another regulator) or U4. Three of those regulators must be for the 1.8, 3, and 5 V operating voltages you can switch between. Looks like it supports class A, B, & C cards w/ T=0 and T=1, and the PPS protocol.

Anyway, so USB (2.0, 12Mbps) comes in, via CCID protocol it can communicate directly w/ card over USB. Can’t really comment on hardware design, I’d have to look over code to have any clue for optimizing. Probably not much.

So, you have 2,826,240 bits you can transmit at a rate of 12,000,000 bits per second thanks to CCID protocol. So you should be able to send that payload in 0.23552 seconds to the card, times 2 would be 0.47104 seconds ideally for full comms trip. Out of total time of 137.582 seconds, the comms would take 0.34% of the time. Based off that rate, you could encrypt 2.5 KB per second. Word files, you just put a space in them and it’s a KB lol, but text files, well…it’s much better but still will add up quickly if you have a lot.
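That arithmetic can be sanity-checked in a few lines of Python (same figures as above, nothing new assumed):

```python
# 345 KB file expressed in bits
payload_bits = 345 * 1024 * 8            # -> 2,826,240 bits

usb_rate = 12_000_000                    # USB 2.0 full speed, bits per second
one_way = payload_bits / usb_rate        # ~0.23552 s to send the payload once
round_trip = 2 * one_way                 # ~0.47104 s for the full comms trip

total = 137.582                          # measured decryption time, seconds
comms_share = round_trip / total         # comms as a fraction of total time
effective_kbps = 345 / total             # end-to-end throughput in KB/s

print(round(comms_share * 100, 2), round(effective_kbps, 1))  # 0.34 2.5
```

So the transport really is a rounding error; the other ~99.7% of the time is spent elsewhere, which supports the "it's not the comms" conclusion below.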

The bottleneck isn't the comms, I think. It's Java on the PC and the encryption on the card, going from Java down to actually executing code. The easiest way to speed that up is probably the specs of the computer and the specs of the smart card, eh? Well, as long as you're satisfied, I'll look at other smart card readers and cards and go back to all my other projects, thanks. :p

Definitely would be plenty to probe on that board though.

ab praeceptis October 22, 2016 1:11 PM

Slime Mold with Mustard

For the sake of fairness: we Europeans sometimes tend to smirk at US Americans, and frankly, there is plenty of reason for that in my mind's eye.

HOWEVER: looking around, I fail to see how "our" EU-rope is any better. Your politicians are but a bunch of corrupt crooks, and so are ours. Your politicians don't care a rat's a** about normal people; neither do "ours".

To a certain part, one might excuse ourselves as being hardly more than vassals of the united states of a part of a part of America. But still, we fell for the same traps as you did over there. We, just like you, lousily failed to enforce the holy rule of “we the people” and allowed the crooks to pervert the whole system.

Leaving politics – at least it would seem so – I see very similar patterns repeating in our field:

The googles and mozillas and facebooks make a lot of PR and theater but in the end they bend the rules, shift vast amounts of $ to clinton, abuse their power to censor people, etc.

Getting technical, one might look again at SSL and lots of other nightmares and less than perfect hw and sw and ask “Are those really real problems that just couldn’t be avoided or is it just another cycle of the ‘oh well, the world is bad but we try very hard’ bla bla we know from politicians?”

Fact is that there are plenty of corps which do have billions; and we do have ways and tools to do considerably better. Yet, the whole stack is rotten.

Example: letsencrypt. Sounds great (if brain deactivated). BUT: In my books “you install some crappy sw from us on your machine to then enjoy our super-duper-security magic” isn’t anything like secure. To me that sounds simply like “Let us put some hooks right into your heart and brain. It will be wonderful, promised”.

As for open source: It wouldn’t be the first time that the crooked found ways to make well meaning people do their bidding.

Just thinking.

Clive Robinson October 22, 2016 1:36 PM

@ Thoth, Curious and many others,

You don’t really have fine grain control of your own browser and this makes the problem even worse.

And there is no excuse for it either, it’s not as though people have not been complaining about web browsers and this lack since before this century… But it appears to have always fallen on –deliberately?– deaf ears with the developers of the major browsers.

Personally, I'm thoroughly disgusted with most of them, but those that came via Mozilla and Chrome have now become completely untrustworthy in this respect. As they were not originally that bad, you have to ask what incentives the developers had to make their products that way…

As they say, “Answers on a postcard, so all the world can read it…”.

Clive Robinson October 22, 2016 1:50 PM

@ Europe

People with conscience and morals used to run America. Now it's tax-dodging, profit-minded, Wall St-backed corporations using big data to crush competition.

One thing we do know is that this will only get considerably worse if Hillary "Wall St bought and paid for" Clinton gets tenure for the next four years. Not that most Americans actually understand this, or probably even care…

As for the other side of "the non-divide", it's far from clear what Donald Trump would do; some of his early utterances suggested that on this aspect he would be the better option, now however…

The one thing most Europeans who think about it tend to agree on, is that the US will probably pass “Global Scope” legislation for the likes of the big spying orgs like Google, Twitter, Facebook et al so they can not be legally or financially penalised in any way. Or as one friend put it “American exceptionalism supported at the point of a gun”.

VinnyG October 22, 2016 2:19 PM

@SlimeMoldWithMustard re “privacy policy”. The application of the term is precisely correct: it is a “policy on privacy”. Anyone so naive, obtuse, or lazy as to assume that any policy is what he or she would like it to be without actually reading it is imo highly unlikely to benefit from a mere change in document title. If any top-down regulatory change would be of benefit (I’m highly skeptical of that proposition generally) I think it would be more along the lines of prohibiting legalese mumbo jumbo and double talk, and restricting policy descriptions to combinations drawn from a dictionary of simple phrases and terms, and/or some kind of common/standard graphical representation of the features of privacy policies. I would add that those changes, like any others in this domain, could be forced by a knowledgeable, concerned public without recourse to government regulation. Unfortunately no such entity exists, which probably dooms any effort at improving the situation via either venue to long term failure.

Grauhut October 22, 2016 2:33 PM

@Thoth: “What we need are much more robust personal security and privacy that we create on our own”

The crazier the snooping gets, the more talented people will work against it… 🙂

Thoth October 22, 2016 6:19 PM

@Figureitout
Probably once I have finished implementing the native desktop I/O to bulk read/write the files, it will be more conclusive about where the problem lies.

I sometimes wonder if the actual speed lag might be from the JVM in the card but that is hard to conclude.

The method I am using to measure the milliseconds spent does not include native file I/O by the way. What the timing measures is the total transaction time between the card and the desktop. For every transaction, it would add up to a total transaction timer I set in place.

r October 22, 2016 6:25 PM

@Grauhut,

Not that I’m talented, but:

They have lists upon lists of us starting from grade school; one can only finish so much when running against the grain. It's overbearing at times. Are they trying to use the weight of reality to wear us down?

ab praeceptis October 22, 2016 7:21 PM

Thoth

Again, I lack any significant experience in Smartcards and I’m following your project rather superficially, but those problems intrigue me.

I suppose the problems are rooted mainly in two areas: a) industry blurb and b) Java.

Those thingies are made to infrequently do some crypto and to transmit a low amount of data. Things like 2048 key bits. I assume that a timeframe of about a quarter to half a second is considered acceptable.

So, from my (possibly ridiculously wrong) perspective, the story goes roughly like this: transmitting about half a KB is already a rather major job in those scenarios (e.g. a 4096-bit key). Supposedly a line speed of 33 Kb/s is considered a reasonable minimum (simply because that's still used and once was the "standard speed" in the typical scenarios). So transmitting 2 Kb (plus some protocol overhead) takes about 1/10 of a second. If the smartcard chip could deliver, say, a 2 Kb key within another 30 to 50 or so ms, the whole process could be comfortably handled in much less than 1/4 of a second. Case closed (meaning: everyone would be happy and those specs would be fine).

Which comes down to everyone having had even some reserve. The Infineon chip we talked about would actually be a speed demon for those typical scenarios (like signing a 32 Byte hash).

All of this also translates to the industry not being under any pressure. Bloody java was fast enough. In fact, even lousy and poor jvm implementations were good enough. And anyway the hardware would get faster, so what.

When they added USB capability they didn’t do that because they needed more speed but because USB became an omnipresent standard and end customers wanted something to simply plug into their PC.

In the end, I suppose, (part of) your problem is that you assumed that USB means at least 1.5 Mb/s, and hey, the chip could do the crypto in 5 or so ms. I don't doubt that Infineon has a reference implementation that actually delivers that. But I strongly doubt that's what you get out of a lousily implemented JVM.

Your usage is just far out of industry bounds, I assume.

Which need not be a major problem, unless you do way too much on those thingies. Maybe you should redistribute your workload and use those thingies just for what they were really made for (rather than what the blurb numbers say).

You have some not fast but “safe” crypto and some “safe” on chip memory and some security gadgets available. That’s a nice basis which quite well supports quite nice approaches. Example: Seeding a good PRNG, drawing some key out of a “magic hat” (the smartcard thingy), etc.

I’ve worked on a vaguely related project where we also definitely wanted some decisive and vital stuff outside of the box. Reason: one of the major problems turned out to be the question of trust, meaning “We can run stuff on a system but how can we make sure that stuff hasn’t been poisoned?”. We solved it with an elaborate verification chain whose pivot element was a tiny hash routine that was pulled from a “magic outside box” so as to make sure that the further verification routines and self checks were performed by unpoisoned/unaltered software.

In case I got the whole thing completely wrong and just blabbered stupid things, forgive me and accept my good intentions (and curiosity).

Thoth October 22, 2016 8:07 PM

@all
Looking past the flaws of medical devices, the MedSec security bug-hunting team are more market speculators than genuinely concerned security researchers wishing to improve embedded security.

Medical devices are flawed, and that is a known thing, but the really disturbing part is someone or some organisation using those flaws to short-sell stocks, meddle with markets, hoard vulnerabilities, and take other actions that do nothing positive for the situation.

Link: http://www.theregister.co.uk/2016/10/22/st_jude_new_security_claims/

Thoth October 22, 2016 8:25 PM

@ab praeceptis

“Those thingies are made to infrequently do some crypto and to transmit a low amount of data”

Indeed. Those shouldn't take too long, though, as most of the memory crypto is usually XORs, but that's just my guess. Each chip uses different methods to encrypt its internal RAM and EEPROM, and they are kept under multiple layers of red tape.

“The Infineon chip we talked about would actually be a speed demon for those typical scenarios (like signing a 32 Byte hash)”

Yes. This is indeed true as Infineon chips are one of the fastest I have ever worked with (NXP, Infineon, Samsung). NXP gives pretty decent speeds too.

“When they added USB capability they didn’t do that because they needed more speed but because USB became an omnipresent standard and end customers wanted something to simply plug into their PC.”

That's correct, as most people are ditching the CCID card reader for USB FIDO/PKI tokens, and what better way than to simply add additional USB firmware to the chip. I recently bought a small batch of JavaCard USB tokens, and they simply outperform any smart card reader + smart card (traditional setup) in speed tests. I have not uploaded my GroggyBox applet since the USB smart card token does not support PKCS 5 padding mode natively, and that is a major headache, so I am giving myself some time to think about how to handle the situation. I could either implement zero padding and drop PKCS 5 padding, or implement PKCS 5 padding in software. I am leaning towards zero padding, but breaking compatibility with PKCS 5 is questionable when it is the industry standard.
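For what it's worth, the software-PKCS 5 option is cheap to do on the host side; here's a minimal sketch of the padding rule itself (PKCS#5/#7 style: append N bytes each of value N), not GroggyBox code:

```python
def pkcs7_pad(data: bytes, block: int = 16) -> bytes:
    """PKCS#5/#7 padding: always append 1..block bytes, each equal to the
    number of padding bytes, so unpadding is unambiguous."""
    n = block - (len(data) % block)
    return data + bytes([n]) * n


def pkcs7_unpad(padded: bytes, block: int = 16) -> bytes:
    """Strip and validate PKCS#5/#7 padding."""
    if not padded or len(padded) % block:
        raise ValueError("input is not block-aligned")
    n = padded[-1]
    if not 1 <= n <= block or padded[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return padded[:-n]
```

The self-describing last byte is exactly what zero padding loses: with zero pads you cannot tell trailing `\x00` bytes of the plaintext from the padding, which is why dropping PKCS 5 compatibility is a real trade-off.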

“Your usage is just far out of industry bounds, I assume.”

That is correct. Smart cards were never built for bulk encryption like what I have done. The most they were expected to handle were small transactions and wrapping of cryptographic keys and leave the bulk encryption to the host (insecure) computer.

“You have some not fast but “safe” crypto and some “safe” on chip memory and some security gadgets available. That’s a nice basis which quite well supports quite nice approaches. Example: Seeding a good PRNG, drawing some key out of a “magic hat” (the smartcard thingy), etc.”

I guess that sums it up. Pretty much, I have to lower the user's expectation from a fast and safe smart card file encryptor to a slow, safe, and friendly one. To compensate for the slow speed, the only thing I can think of to keep users happy is to make the GUI much more friendly and usable, so that the UX makes up for the slow crypto speed of these tiny security devices.

Nick P October 22, 2016 8:29 PM

@ All

Quickly dropping this paper on a type-safe linking and information-hiding model for C programs. It's based on 4 rules they say are compatible with how C programs are written when authors have sense. They claim that applying it to 30 OSS programs found over 1,000 problems that could have turned into bigger ones over time. Linker-level correctness and security gets little research, plus a ton of stuff is written in C. So, I like this work even though I only skimmed it for a few seconds. 😉

Additionally, the F* team doing the verified HTTPS just released a compiler from a subset of F* to C. It originally targeted F# and OCaml. So this is a nice addition to the COGENT-to-C system from the Australians.

Finally, best for last: I can't recall if I already shared this article on the Cleanroom Software Methodology. Unlike many others, it's a very approachable intro that combines the standard fare with what the author learned at an IBM training session, including details that overviews leave off. Additionally, the author has been teaching students to use the method and recording their results. The results have been impressive. I still think it should be combined with functional programming à la Haskell w/ QuickCheck, or OCaml in the style of MirageOS TLS.

Bonus: on the discussion about DDoS dominating the Internet, I pointed out that the people using 1970s tech (dial-up over POTS, dedicated point-to-point links, BBSes, and intranet email) are doing just fine. That's right: the '70s are still more resilient than the modern Internet circa 2016. Mwahahaha.

ab praeceptis October 22, 2016 10:41 PM

Nick P

Re CMOD (your 1st item): oh well, one of many, many academic efforts that will hardly ever take off. There are tough practical issues, the major one probably being that one must use yet another wrapper. Very few developers will accept that.

Sure, the matter is important, and it would be highly desirable to have something at least vaguely like what a whole zoo of Wirth languages (and some others) have had for decades. But C programmers are a mercilessly pragmatic bunch, and they will hardly accept another wrapper.

However, the paper has a nice side too, namely the math behind CMOD. That certainly deserves positive mention and some attention.

Re “F*”

I'm enchanted. Having looked at diverse evil-corp and INRIA cooperations, I have, as you probably know, found quite some good stuff that, however, wasn't practically usable/acceptable for me due to one or another microsoftism (.NET, needs Windows, …).

The way the F* people chose isn't really new (but it is quite rare). I mentioned LEON (EPFL, the "sister" of ETH) earlier. Much of the thinking is quite similar: LEON is built on a Scala base and also generates C code for a (reasonably large and useful) subset of Scala. And, interestingly, it also looks at and uses C as an omnipresent meta-assembler (which it ideally is).

I'm not really a fan of Scala; I feel it has been brutally hampered by its Java heritage and closeness. F* looks quite good: stringent, consistent, with verification not as an add-on but fully and seriously designed in from the start. And while I've not yet found hard proof, considering its surroundings I would bet that the (left vague) "SMT" backend mentioned translates to, or at least includes/allows for, Z3. Plus F* is more hardcore about safety and more consistent than Scala, it seems to me.

The minus I see is that F* is rather young and not yet mature, let alone endowed with good tool support, libraries, etc. But that can be forgiven considering its strong background (i.a. OCaml) and the fact that often just the sensitive parts of a project will be done in F*.

Excellent, and finally, it seems, not crippled by an evil-corp license or OS requirement. Kudos to evil corp. They move, and they move in a good direction (at least in the field I care about).

As you might feel between the lines, I’m very pleased and excited. You see, determined stout weirdos like myself somehow had our toolbox and our ways to do things the way they should be done anyway (if clumsy and burdensome), but I always felt lost when trying to convince others. Usually it ended with them telling me that the entry barrier is way too high, that it’s not well portable, etc, etc.

Finally it seems that there is a way every serious professional can take. Wonderful. I hereby forgive evil corp quite some of their ugly sins.

ab praeceptis October 22, 2016 11:16 PM

Thoth

“Pretty much I have to lower the user’s expectation …”

Maybe not. First, let's not mix things up. Your endeavour bore fruit in that you considerably extended your knowledge and gained insight and know-how in a valuable field.

The other thing is that you want to build something new and useful. That can still be done, just not the way you first chose, I guess.

Maybe it would be worthwhile to look at and consider my earlier hint. Let me elaborate a bit:

You could summarize the context as: a) can the user trust his (insecure, yet powerful) system, and b) can the user trust that my software on his system has not been poisoned?

a) is not something one needs to ascertain fully each and every time. Hence it may take much more (run)time; who cares. There your smartcard does more resource- and time-consuming things (basically coming down to a host OS check).

b) can be done very well with very low (smartcard) resources by following a staging approach. A very small routine, maybe 1% or 2% of the size you have been experimenting with (about 345 KB), and hence within the time comfort zone (1 s or 2 s), is loaded from the smartcard; it then verifies that the larger and costlier verification routines on the host system are not poisoned. Those in turn verify the "big" program (crypto or whatever), and iff all tests pass, e.g. a hash computed along the way (showing that all verification stages passed) is sent to the smartcard, which then and only then hands out some keymat or whatever.
It could be simple and modest; e.g. your smartcard generates a random value and transmits it to the verifier downloaded before, then computes internally what the result should look like if the host is OK and clean, and checks whether what the host verification routine computed matches that internally computed hash.

From that point on your users can be, say, 99.99% sure that the crypto or whatever stuff done by your main program on the host OS is actually safe and secure. That's not 100%, but it's much, much more than what one can usually get with small resources, and it's also a good starting point for more services (like transferring HIDS keys to your smartcard or similar).
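A toy Python sketch of that challenge/response stage (all names hypothetical; the "card" functions would of course run on-card, and `VERIFIER_IMAGE` stands in for the host verifier binary whose known-good digest was stored at provisioning time):

```python
import hashlib
import hmac
import secrets

# Known-good digest of the host-side verifier, stored on the card at
# provisioning time (the byte string is an illustrative placeholder).
VERIFIER_IMAGE = b"...host verification routine bytes..."
GOOD_DIGEST = hashlib.sha256(VERIFIER_IMAGE).digest()


def card_challenge() -> bytes:
    """Card generates a fresh random challenge so responses can't be replayed."""
    return secrets.token_bytes(16)


def host_response(image: bytes, challenge: bytes) -> bytes:
    """What the (hopefully unpoisoned) host verifier computes over itself."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(challenge, digest, hashlib.sha256).digest()


def card_check(challenge: bytes, response: bytes) -> bool:
    """Card computes the expected value from its stored known-good digest and
    only releases keymat if the host's answer matches."""
    expected = hmac.new(challenge, GOOD_DIGEST, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A poisoned verifier changes the image digest, so its HMAC over the fresh challenge no longer matches what the card computes, and no key material is handed out.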

(And I have a question in mind that haunts me, which, however, I don't know enough about the smartcard industry to answer: are you really limited to Java? I would assume that Infineon also supports e.g. C, but maybe only for large customers.)

Sorry that I can’t offer more and more useful input but again, my skills in your field are very limited.

Ratio October 23, 2016 12:55 AM

@Markus Ottela,

I haven’t played with TFC, but I’ve read bits and pieces of the code. Some comments on the code:

  • setup: you may want to check out pip (and virtualenv) and the way you can specify SHA-256 hashes for packages in requirements files. Ideally someone should be able to install all the Python bits using pip install -r requirements.txt (once you’ve got whatever OS packages you need using apt install … or what have you).
  • In hwrng: the docstring on main doesn’t match the code: ent_size can be 768 bits
  • In NH.input_validation: isinstance(param_tuples, tuple) is always true. You want a tuple (or any sequence, really) of 2-tuples or something along those lines.
  • Maybe for n, (param, exp_type) in enumerate(param_tuples, start=1) is a nicer way to do what you’re doing in NH.input_validation; these variables correspond to your n, t[0], t[1]. (You could also make nth a list or a tuple and drop the start=1.)
  • In NH.phase: sum('\n' in c for c in string) is just string.count('\n')
  • Rx.padding is basically no more than evaluating string.ljust(255, chr(255 - len(string)))
  • There seems to be quite a bit of duplication between NH, Rx, and Tx?
  • The globals, they burn. 😉

Anyway, that’s as far as I got in half an hour or so. Hope that helps.
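Two of those equivalences can be checked directly; a quick sketch (`msg` is just an illustrative stand-in for a TFC packet string):

```python
s = "one\ntwo\nthree\n"
# '\n' in c tests each single character, so summing it is just a newline count
assert sum('\n' in c for c in s) == s.count('\n') == 3

# The Rx padding: right-pad to 255 characters, where the pad character itself
# encodes 255 minus the original length.
msg = "hello"
padded = msg.ljust(255, chr(255 - len(msg)))
assert len(padded) == 255 and padded.startswith(msg)
assert padded[-1] == chr(250)  # 255 - len("hello")
```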

ego October 23, 2016 1:46 AM

World largest glorified advertising company Google has dropped ban on personally identifiable web tracking

https://www.msn.com/en-my/news/techandscience/google-has-quietly-dropped-ban-on-personally-identifiable-web-tracking/ar-AAjfJBr


When Google bought the advertising network DoubleClick in 2007, Google founder Sergey Brin said that privacy would be the company’s “number one priority when we contemplate new kinds of advertising products.”

But this summer, Google quietly erased that last privacy line in the sand – literally crossing out the lines in its privacy policy that promised to keep the two pots of data separate by default. In its place, Google substituted new language that says browsing habits "may be" combined with what the company learns from the use of Gmail and other tools.

The change is enabled by default for new Google accounts. Existing users were prompted to opt in to the change this summer.

The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on their name and other information Google knows about them. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct.

The move is a sea change for Google and a further blow to the online ad industry’s longstanding contention that web tracking is mostly anonymous.

Of course Google needs to “show growth” in one way or another and since they don’t really manufacture much of anything that brings ongoing revenue, except adverts, their only resource to mine are their users…

Thoth October 23, 2016 1:56 AM

@ab praeceptis
If I were to do native Infineon blobs or any native smart card blobs, I (and my applet) would be subject to:
– NDAs … tonnes of them
– Create the Card OS from scratch which is a pain
– Not able to open source anything

All of these would doom the FOSS nature of my project and add unnecessary cost, legal exposure, and wasted resources and time.

I would prefer to use a standardised JavaCard, which is a fully open platform, so my applet can be comfortably open sourced without any of the NDA, legal, cost, resource or other problems that a native applet on a from-scratch Card OS brings.

So no, I would not consider another solution that will impact the FOSS nature of my project and which would prevent the ease of running a single applet on multiple platforms (NXP, Infineon, Samsung ..etc..).

Sadly, only JavaCard fits the bill. It is the single most widely adopted Card OS out there, followed only by MULTOS. MULTOS uses C, but it still has the overhead of a VM like JavaCard, and there is only one supplier of MULTOS, while JavaCard suppliers can be found across all major smart card vendors. Almost 99% of card development is JavaCard, save for proprietary card development encumbered by red tape and NDAs.

So the short answer is no, I will stick to JavaCard for now as it's the single most widely adopted smart card platform, with the most market share and also the most support and tools.

Maybe you could take some time to pick up JavaCard and buy a small batch of JavaCard-enabled smart cards online. It isn't too hard to pick up JavaCard since the syntax is mostly Java-style; you just need to remember that your code cannot do dynamic calls and there is only one software applet package per applet. I picked up most of my JavaCard and smart card APDU skills and settled in within about a week.

Krista Well Socialised Geekogirly October 23, 2016 2:03 AM

@ Clive Robinson

first time caller long time admirer

You have so much wise, worldly, practical advice that can be of great benefit to the 99% of people on this planet who don't understand infosec and are not likely to come up to speed with the complexity of topics on this blog (mostly). Your advice includes infosec for the laywoman but goes beyond computers into old school tradecraft, among many other things.

Do you have a legacy? Have you written any books or papers? I am just wondering if your great knowledge can be available for the world in the years and decades after you have ascended into light and continue to guide us mortals from your Valhalla?

So far my only thoughts are that all of your posts on this blog over the years can be scraped and compiled into a book, curated by Bruce.
My question is, is that the best we can expect for your posterity?

a warm embrace and a hot mug of motion generation your way xoxo

PS your post about surviving a power outage in a cold climate. You said coffee fared well with unsalted butter. Well, did you know it's a whole phenomenon known as bulletproof coffee? But ignore the marketing and the book and website and blah. Just DIY – it has many health benefits; the fat fuels the thermogenesis of the coffee and really boosts cognitive function. A good two tablespoons of grass-fed organic unsalted butter, with coffee, and blend into a delicious frothy mass of delight. The recipe calls for coconut oil and butter, but that makes one very sluggish and is heavy on the liver. Just butter is best.

Sancho_P October 23, 2016 9:29 AM

@ab praeceptis re: Smart Card Inspector (TM)

Um, I don’t know if I understood your proposal / idea in full, but:
I’m afraid there’s a basic issue with that approach, it’s called authentication.

Imagine sitting in a cardboard box, say one from a washing machine.
The only connection to the outside world would be a small slit where you can send / receive a piece of paper with some data.
How would you know what’s going on outside?
You may read ‘I’m @Bruce, trust me’, verify the Mi$o signature or check if “they” can compute your secret, but that’s all, you’d never know how many black hats or guns are involved out there.

Generally it may (!) be possible to inspect from the top down, but never in the other direction.

But the real issue is with your 99.99%.
History has shown that all NOBUS isn’t true for tomorrow, their tools are ours 😉

Nick P October 23, 2016 11:14 AM

@ tyr

I was so confused at first since it was written like this involved the simulator for the actual A-10. Then I realized it was a game but still confused since I played A-10 Tank Killer and its sequel. I especially loved trying to come in low through the rivers then napalm or maverick their asses just as I came up. Not that I was a great aim. In real life, I had a training round for the Avenger cannon from my grandpa. Something about holding a bullet that’s nearly a foot tall helps people understand how this plane could shred tanks. I thought the real one was awesome since it was a beast that flew right into gunfire tearing through the enemies. A big, ugly heaping machine of Get Her Done philosophy. 🙂

Ok, back to this game. So, it was a realism-oriented sim. Looks like a neat project. The culprit is obvious to any Dilbert reader: horrifically bad management combined with short-term thinking and no real understanding of the market. They were seriously about to mix a realistic flight sim with the MMORPG concept, which had just been invented, because point-and-click spell casters liked MMORPG's? Huuuuh? How would they even do all that on old hardware is what I'd have asked. And forced to do a dynamic 3D game in a throwaway 2D engine? It was just FUBAR all the way. It ends with a warning that has been true to this day: games, even genres, will disappear if the margin gets too low for big companies. It happened to space sims despite Wing Commander, Privateer, Freelancer, etc showing there's definitely a market. The companies just won't build them since it's not a profitable market if there's competition.

They need to start doing enterprise software on the side. Even acquire something with a decent codebase that's already selling. Then, you have game coders moonlight as optimizers for the enterprise software. Just make sure they mostly do gaming to keep them from quitting. The enterprise software becomes bread-and-butter to cover the business if the gaming side isn't doing so well. The gaming side has potential for moonshots or bursts of profit. Keeps high-grade talent around to do occasional extensions of the enterprise software, especially if there's overlap. Storage, networking, or chat come to mind.

ab praeceptis October 23, 2016 12:13 PM

Sancho_P

It is presumably known what’s in the box (Smartcard) and it can be checked and verified what’s in the host box.

The point in my approach is that one can hardly verify a host system (on which some sensitive code should run) without a trusted reference. In a way, an analogue of Gödel's incompleteness theorems.
Second, as we learned from Thoth, both the performance (at least with java) and the transmission capabilities of smartcard systems are very limited. But – and that’s the pivot point – one can use it as a reference (possibly even as a balanced mutual one) with low performance and transmission requirements.

Think, as an example, of a classical HIDS problem. A HIDS is nice and dandy but of little worth if its reference (the last known good state) is on the very same system to be tested. Accordingly one is typically advised to mount the reference read only or similar.

A smartcard would nicely lend itself to that problem. One could have a) a key (say 32 bytes), b) last good hash, c) some first stage verif. element on the smart card and then
1) transfer the first stage to host
2) make it check/verify second stage (on host)
3) have 2nd stage hash check HIDS ref.
4) decrypt HIDS ref
5) run HIDS
6) take current HIDS hash to smartcard
7) clean up, etc.

Sure, there are still problems that this does not solve; the host verification, for instance, is either limited or very, very time-expensive. But then, it was the very premise that there is an untrusted host system involved.
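A rough sketch of the verification flow described above, with the smartcard simulated in software purely for illustration (a real card would keep the key and hash internal, and the class and names here are hypothetical):

```python
# Simulated smartcard-as-trusted-reference for a HIDS, per the steps above.
import hmac
import hashlib

class SimulatedCard:
    """Stands in for the smartcard: holds a) a 32-byte key and b) the
    last known good hash, and never reveals either to the host."""
    def __init__(self, key: bytes, good_hash: bytes):
        self._key = key
        self._good_hash = good_hash

    def verify(self, data: bytes) -> bool:
        # steps 2-3: check the host-side HIDS reference against the card
        mac = hmac.new(self._key, data, hashlib.sha256).digest()
        return hmac.compare_digest(mac, self._good_hash)

    def update(self, data: bytes) -> None:
        # step 6: take the current HIDS hash onto the card as new reference
        self._good_hash = hmac.new(self._key, data, hashlib.sha256).digest()

key = b"\x01" * 32                      # a) the key held on the card
hids_ref = b"known-good system state"   # the HIDS reference on the host
card = SimulatedCard(key, hmac.new(key, hids_ref, hashlib.sha256).digest())

assert card.verify(hids_ref)            # untampered reference checks out
assert not card.verify(b"tampered")     # a modified reference is rejected
```

The point of the sketch is only the trust split: the reference hash lives off-host, so malware on the host cannot silently rewrite both the HIDS reference and its checksum.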

But anyway, Thoth seems not to be interested in, or to dislike, what I wrote, so the case is closed anyway for me. And I said from the beginning that I have good intentions but very limited knowledge in the smartcard field. So I'm neither surprised nor pissed off by Thoth rejecting my thoughts.

ab praeceptis October 23, 2016 12:36 PM

Clive Robinson

Thanks.

frustration stems from the idea that static typing will solve all our problems, or even one specific problem: The ubiquity of broken software

So what? Static typing isn't meant to solve any problems. It's one requirement next to others which, properly met, will help to create software that is trouble-free (oh well, at least better than what we had).

And the value and power of static typing (at least for what languages and understanding we have today) is easy to demonstrate (as has been amply shown).

It seems that McIver mixes up quite diverse elements, some rather technical, some more social, etc. Btw, I largely agree with his non-technical/compsci arguments.

Much of the discussions going on come down to a simple point: Some argue in terms of usage, often mentioning arguments like “the computer is better at it anyway”, while others, like myself, strongly oppose.

Simple reason: No, the computer isn't better at it. In fact, it cannot possibly deduce what the programmer wanted; it can deduce what type matches what the programmer typed. It can deduce that “day = 15; month = 11” are integers – but it cannot know what they mean. Moreover, and more importantly, those variables are but elements of an attempt to properly implement an algorithm.
It just so happens that days > 31 or months > 12 are nonsensical for a date-related algorithm and quite certain to introduce serious trouble. How would the compiler know that?
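To illustrate the point: a type checker happily accepts a nonsensical day as long as it's an int, while a hand-rolled constrained type (purely illustrative, not from any codebase under discussion) pushes that domain knowledge into code:

```python
# A type checker verifies that day and month are ints, not that they make
# sense; encoding the domain constraint in a type catches what it cannot.

class Day:
    """Hypothetical constrained type: an int restricted to 1..31."""
    def __init__(self, value: int):
        if not 1 <= value <= 31:
            raise ValueError("day out of range: %d" % value)
        self.value = value

day = 45          # type-checks fine as an int: the compiler cannot object
Day(15)           # accepted: within the encoded domain
try:
    Day(45)       # rejected at runtime by the domain constraint
except ValueError:
    pass
```

Languages with range types (Ada, or refinement-typed systems) can even push such checks back to compile time, which is arguably the stronger version of the argument above.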

Another view: We don’t say in math “let x be some kind of number thing and let A be a function that does [whatever]”. No. We properly specify the function and its domain and codomain.

Unless some people seriously want to dispute that code is the implementation of algorithms (thus disqualifying themselves), that whole type deduction, inference, duck typing and whatnot discussion is moot, if only because we are discussing quite different matters. I'm looking at proper algorithm implementation while they look at funny transformation magic, ease of use, etc.

Clive Robinson October 23, 2016 12:39 PM

@ Krista,

Welcome to the fold of commentors.

I will have a look into “bullet proof coffee”, thank you for bringing it to my attention, though a quick glance at Wikipedia suggests there is a strange extra added which I will certainly treat with caution 😉

@ tyr, Nick P,

I once had an unfortunate run in with an A10 and it still wakes me at night from time to time…

Put simply, back when I was wearing the green, we were erecting some Clark Masts with HF antennas in a field on UK MOD property on a cliff edge early one summer morning, with a little mist still in the air. Not too far away was an airbase where A10s were resident for training at the time. Due to one of life's little cockups, nobody told the fly boys they were not supposed to come in off the sea there, as there was a hazard to low-level air navigation… Well, they did, at very low level, and at the last moment the pilot realised that there was a sizable antenna in front and thus stood the A10 on its tail pipe. The first I knew about it was physical pain from the power of the jet wash and noise, and I looked up into a hell on earth that was the engines giving it full welly…

Luckily I did not have to change my clothes, unlike the person standing a few yards from me, who found the stomach-churning vibration too much after a night on the beer and let loose from their tail pipe. Thus they had to ride back to camp in the back of the Landy due to the overpowering aroma of distressed digestion that Theakstan's Old Peculiar[1] can induce.

[1] Apart from Old Peculiar, Theakston's make a whole range of specialty beers that get shipped all over the world. A more recent one, “Christmas Ale”, owes its existence to NCIS,

http://www.theakstons.co.uk/m/Tasting-Notes/Ale-Information/%3FBeerDetails%3D2

Ted October 23, 2016 12:40 PM

On October 19, the Nat’l Telecommunications and Information Administration (NTIA) convened its first multistakeholder process meeting in Austin to discuss Internet of Things Security Upgradability and Patching.

The Federal Register notice announcing the meeting is here. (Bruce is cited.)

The notes from the discussion indicate that the next meetings are scheduled for December (IIC meeting in San Diego and IEEE World Forum on IoT in Reston, Virginia).

https://ntia.doc.gov/other-publication/2016/multistakeholder-process-iot-security

CallMeLateForSupper October 23, 2016 12:51 PM

Europe vs American Models
shared a link to story at The Intercept:
incessant-consumer-surveillance-is-leaking-into-physical-stores

It is instructive to consider that 1) such surveillance cannot happen in the absence of certain physical things and 2) that the surveilled control these things.

This is one example of your being empowered to metaphorically poke a thumb into surveillors’ eyes. Don’t waste the opportunity. Turn Off that expensive tracking device.

The dark humor part of me often opines that, any day now, I will walk into a store and immediately be swept up and dumped into a dust bin…. because I carry no tracking beacon, therefore cannot possibly be human.

CallMeLateForSupper October 23, 2016 1:21 PM

Re: my earlier post
“shared a link to story at The Intercept”
should have been
“shared a link to story at The Atlantic”

JG4 October 23, 2016 4:30 PM

I am a big fan of grass-fed butter and milk products in general. Coffee in moderation is one of the many natural health tonics.

@Sancho P

You offered some time ago to provide further details on a data diode. I really like the idea of open-source hardware as a complement to high-assurance software and firmware. I’d like to see more information about your design.

If I weren’t mired in profound and continuing dysfunction, I would have posted my designs for a Faraday cage and power supply energy gap. That and the missing groundswell of enthusiasm for the blindingly obvious.

Speaking of health tonics, one of the reasons you have to go easy is that some of them upset the gut microbiome.

https://www.scientificamerican.com/article/drinking-causes-gut-microbe-imbalance-linked-to-liver-disease/

This may also explain the cardioprotective effect of stout beers, which provide fodder and shelter for gut microbes.

Thoth October 23, 2016 6:24 PM

@ab praeceptis

I have no idea how to do a verifier in a smart card so I have to spend time reading up before I can decide. Don’t want to introduce too much complexity into it.

Mike Barno October 23, 2016 7:23 PM

@Clive Robinson,

Theakstan’s Old Peculiar

Isn’t it “Old Peculier”, with the last vowel an E, named after some community officer in your peculiar nation? I thought I remembered it being an official position, but Wikipedia’s Theakston Brewery article says not quite:

It is named after the peculier of Masham, a peculier being a parish outside the jurisdiction of a diocese.

tyr October 24, 2016 12:57 AM

@Nick P.. Clive

A10s are devastating beasties, they can do a
low level pass so fast you can’t react to it.
Keeps infantry humbler that way. If you can hear
the gun then you're alive; if you're the target
you never hear it.

I always wanted the Wing Commander bunch to do
a game called Renegade so I could fly the enemy
ships from Privateer. As far as I know the whole
team fell over in the Star Citizen fiasco. The
tendency to chase bigger better faster with more
pretty pictures has ruined what would have been
a pretty good industry. I haven’t seen much to
expect the staying power of chess from in comp
gaming.

If you get the core engine that reacts to the
player right the rest is just fancy artwork but
too often the AI and sequencing take a back seat
to dazzley eyecandy. That only masks off the
deficiencies for a short time.

I’d like to see a browser that wasn’t an
about:config nightmare of obscure settings that
supposedly do something. When they said Chrome
was an interpreter I immediately fled from it.
An interpreter opened to the Net attack surface
struck me as a horrible idea. Between the feaping
creaturism and gee whizz involved I’m amazed
that everyone hasn’t been owned big time in one
big happy botnet.

Clive Robinson October 24, 2016 2:08 AM

@ tyr,

I’m amazed that everyone hasn’t been owned big time in one big happy botnet.

That made me think of the old joke,

An old boy had a business in the rag trade that was not doing so well. It did not matter what he tried, he was not getting the work. So he had a confidential word with some of his old friends about what to do, and the general consensus was he was in such dire straits that his best option was to pray for a bolt of lightning to hit the business and burn it down so he could break even on the fire insurance. Anyway he goes back to work and things get worse, and he decides it’s time to be decisive, and put a plan in place. A few days later he’s walking down the street thinking further on the matter, when the wife of one of his old friends stops him, and starts wailing on about his misfortune. In the process she says “such a terrible fire to lose everything”, at which point he says “Shush, that’s not till next week”.

So on the same basis with your prediction of universal “owned’,

    Shush, that’s not till next week

😉

Clive Robinson October 24, 2016 2:27 AM

@ tyr, and the usuall suspects,

Speaking of universal ownership, especially by the likes of corporates with Android…

It appears it’s also got under others’ skin, and they have come up with a version of Rowhammer to “root” Android devices,

http://arstechnica.com/security/2016/10/using-rowhammer-bitflips-to-root-android-phones-is-now-a-thing/

I’m not sure how to feel about this… From my egalitarian streak, anything that allows you as an individual to take ownership of the hardware you have purchased is a good thing, and allows you to exert the traditional sense of ownership over tangible objects and “The Freedom to Tinker”.

However another part of me knows that there are sufficient people of “evil intent” who will use such a development to act as pariahs to society in general.

Clive Robinson October 24, 2016 2:47 AM

@ All,

You might not have heard of “Black Mirror”. It started in the UK as the brainchild of Charlie Brooker, who started his working life as a computer games reviewer.

Anyway, what started as cult noir viewing on the UK's Channel 4 has moved to Netflix, where it will find a larger audience.

Put simply, it's a near-future techno-dystopian look at what could well happen with the technology of today (an earlier episode eerily parallels the Trump Presidential run).

http://www.nytimes.com/2016/10/21/arts/television/review-black-mirror-finds-terror-and-soul-in-the-machine.html

Give it a watch you might not get nightmares, but you should enjoy it.

Wael October 24, 2016 2:47 AM

@Clive Robinson,

The head sockets aren’t in very good shape these days… sight and sound, I guess! May I use your timeshare? 🙂

However another part of me knows that there are sufficient people of “evil intent” …

Keeps us busy. I think there is a solution to this rooting problem. Having flags or single bits to control access is a fundamental weakness…

r October 24, 2016 7:38 AM

@Clive, CC: tyr

In response to your ‘walk of paths’,

A certain level, or a certain quality of outliers is a good thing. Everyone on this site is an outlier when you shine the light on the center of society, we might seem to be a random sample economically etc but more than likely there’s a different center point we could all be hidentified from outside of here… genetic, psychological, etc.

Outliers are envelope pushers, destructive or not. Yes there are malicious ones with malignant habits but there’s others that are Hark Tamils – ones that should be allowed to play and to push – sometimes maybe under supervision [and super-vision] but none-the-less allowed to participate for the greater good.

Thoth October 24, 2016 7:41 AM

@JG4

It’s the usual Military-Industrial-Government complex, which focuses on profit over the trampled rights of its own citizens and the notion of humanity, in favour of a highly controlled world order where citizens are simply economic generators to feed the elites.

It’s all about profit for the elites, where the rich get richer and the poor get poorer.

Skeptical October 24, 2016 9:13 AM

@Clive: Surprised your eardrums were intact. Though I’d be more concerned if I were on the receiving end from the 30mm gun on the other side. I believe it’s going to be retired in the near future as more F-35s become operational.

@Nick: Probably some overlap between development of certain simulations, visualizations, and game development, but I am a little skeptical as to whether it’s enough for a company to split resources between the two. The market moves fast, and the economics often weight speed to shipment more heavily than final quality. Although I suppose if certain components were abstracted in just the right way…

Re recent IoT-driven ddos, autonomous/semi-autonomous vehicle hacking, and the like… the price of failures of security, and the likely liability that will attach to companies involved, will be clearer and more dramatic than much of what we’ve seen thus far. Over the long run, I think the prospects are good that the incentives will line up appropriately. The recent attacks have been on the lowest of the lowest hanging fruit – and seem to rely as much on certain servers being beyond the easy reach of Western law enforcement as anything else.

And, I think this particular vector will lose currency rather quickly. The attack last week essentially put every company on notice that selling internet-accessible devices with non-random, non-unique login credentials poses a serious danger to both customers and others in society.

Indeed, so easily were these devices harnessed for harm that frankly I think companies harmed by the events of last Friday have a plausible cause of action against the merchants of the goods utilized in the attack, though the bar for attaching liability to a person for the criminal actions of a third party is rather high. However, given recent FTC enforcement actions, and the number of people who predicted such an event, I’d venture that such a suit would likely be viable.

Moreover, though I haven’t looked closely, I suspect there is room on federal and certain state levels for regulatory and law enforcement agencies to look hard at the question of liability or penalty for companies whose products were utilized.

And those companies whose products were not on Mirai’s list (but was it the full list?) but have vulnerabilities of the same class had better be very, very proactive.

JG4 October 24, 2016 11:22 AM

the mention of the F-35 disaster reminded me of this excellent discussion, which is spot on the topic of nation-state security

https://thearchdruidreport.blogspot.cz/2016/10/the-future-hiding-in-plain-sight.html

the substance of Boyd’s work is that simple systems often outperform complex systems, besides offering the insiders less opportunity to feather their nests

@Thoth – your point is well taken. the system has been perverted – or, adapted, if you prefer, to serve the needs of the few, while claiming to serve the needs of the many. I’ve done a bad job of pointing out that many systems are adaptive and respond to feedback. Very little of it is accidental, as wikileaks are showing.

Markus Ottela October 24, 2016 10:35 PM

@ab praeceptis

“While I consider NaCl an excellent choice it might be desirable to provide a fallback and to not put all eggs into one basket.”

Curve25519 is the weakest link, not XSalsa20. I’ve given this some thought since I deprecated OTP and CEV versions. With the current version, 2-3 algorithms could be used in cascade for PSKs. It wouldn’t be too hard to implement either, most of the work was done when writing CEV version: The options are Twofish, AES and SHA3-256-CTR.

The issue is that the local key’s key decryption key would have to be 100 or 150 chars instead of the current 50 to ensure there are no weak links in security (if the rest of the keys are delivered over just XSalsa20, there’s no added security if the adversary compromises the networked computer and breaks the encryption).

“Sampling (i.a.) random through SSH over ethernet seems to be an inconsistency actually weakening your design. It might be worthwhile to use NaCl there, too.”

NSA likes to talk about red/black concept. According to this jargon, TxM that outputs ciphertexts is what’s called a blacker. In theory it would be possible to have an entire isolated network of systems that use the TxM as a gateway; The HWRNG is an element in such network: it sits behind data diode so it can not be exploited:

https://www.cs.helsinki.fi/u/oottela/tfcwiki/hwconf/2.png

The TCB is of course wider, but the installer is just for convenience. It would take the user 10 minutes to type the sampler program on Raspbian without it ever having been connected to the Internet. Otherwise, the Ethernet-connected HWRNG sampler RPi stands pretty close to the threat model of TxM.

“PBKDF2-*? From what I understand that is mainly used in contexts that require it (e.g. smartcards). Wouldn’t there be more attractive KDFs/Hash ratchets (e.g. Argon2 (PHC winner))?”

This is definitely something that needs improving. Most of the attacks are done with parallel CPUs. I’m going to have to look into the differences between PBKDF2, bcrypt, scrypt and Argon2, find a library and test vectors, and run some performance tests, but yes, it will be done.
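For what it's worth, Python's standard library already ships two of these; a minimal comparison sketch (parameters are illustrative only, not hardening recommendations; Argon2 itself needs a third-party package such as argon2-cffi, and hashlib.scrypt needs Python 3.6+ built against OpenSSL 1.1):

```python
# Deriving a 32-byte key with two stdlib KDFs from the same password/salt.
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# PBKDF2-HMAC-SHA256: CPU-bound but cheap in memory, so friendlier to
# the parallel GPU/ASIC attacks mentioned above
k1 = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=100_000)

# scrypt: memory-hard (here 128 * r * n = 16 MiB per guess), which raises
# the cost of massively parallel attacks
k2 = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                    maxmem=64 * 1024 * 1024, dklen=32)

assert len(k1) == 32 and len(k2) == 32
```

The memory-hardness knob (n, r) is the practical difference when benchmarking: PBKDF2 iterations can be scaled up cheaply by an attacker's hardware, scrypt's memory footprint much less so.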

I’ll take a look at the mypy when I find some time.


@Clive Robinson

“some of [us] would like to be able to put in the same level of effort in our own non-employment coding projects”

It’s come with a cost on everything else in my life but I feel it’s worth it.

“hopefully a “well done” will sound better.”

You made my day! (:


@Thoth

TFC works as long as there’s no pre-compromise of TxM (malware outputs keys to network). If this attack doesn’t happen, data diodes provide security until the adversary does their next optimal move: Exploit serial stack and inject malware to RxM that has access to RAM and a key logger. This way it can access master password, PIN and ultimately master key, that can decrypt persistent data. The sensitive keys or plaintext data stored by malware is then exfiltrated by attacker that compromises the end point physically.

If the smart card returns the static master decryption key (hash of signed password) to RxM, the malware can grab it from the memory and store it in plaintext. Even if the smart card would yield a new symmetric encryption key every time, malware could still store all keys or all displayed plaintext messages separately.

TFC’s current persistent data encryption works against medium strength adversaries that haven’t compromised RxM but are accessing TFC files when user left screen unlocked. In this context Yubikey’s static password provides similar protection against brute force to that of a smart card with one significant drawback: If user leaves Yubikey on table, the entropy takes a few seconds to copy to personal device. Smart cards on the other hand require the PIN before they yield the key. So user can make a lot of effort to ensure future mistakes won’t have dire consequences. This is worth looking into, but then again, it’s the high strength adversaries and physical compromise by HSAs we’re worried about.

If smart cards were used, the best application would be ephemeral conversations, where the forward secret symmetric keys are generated based on public keys, then stored and used inside the smart card. Similar to what the Google Project Vault messenger does. That way the malware on RxM would have yet another layer up against it. Malware could still log plaintext messages displayed from the point of compromise onwards, but impersonation would be an infeasibly hard problem. A prerequisite for this would be a capacitive numpad integrated into the smart card surface. That way an infected RxM would have no access to the PIN.

I have no expertise with smart card programming and use, so I can’t say how much effort this all would add for end users, but if it’s doable, then any usable guide would be worth the trouble of integrating into TFC.


@Sancho_P

“Have you seen [data diode designs and discussion from last week]?”

Apologies, getting the release out took most of my time. The data diode design and article is fantastic. All the necessary information is there. Rx side is powered by the RxM so that’s a huge plus. I have a USB-TTL converter lying around somewhere but local stores do not carry the optocoupler model, so testing is going to have to wait. The material looks great: I’d love to add the article to TFC wiki (you’d get the credits naturally), if you don’t mind that is.

As for trust in the hardware, COTS has its risks, and the functionality of the optocoupler isn’t verifiable in any way. I would imagine, though, that if the local electronics store that carries these components is infiltrated and the adversary hands out an IC with malicious logic, the user is targeted to the extent that one close access operation or another will get them in the end.

The only way to mitigate this is to make the original design work with a phototransistor that replaces the PNA1801LS. A list of compatible LED-phototransistor pairs is really needed.

The paper by Jones et. al. gives following specs to phototransistor:

handles 20mA forward current at 15 volts
collector current of at least 3mA at 500 Lux
peak sensitivity near 800nm
4μs response time

the LED would have to have

maximum current rating of 20mA
2.2V nominal forward voltage drop
Non-diffusing package

I’m not sure if the panasonic LN28RCPP and LN28CPP LEDs are still in stock somewhere.

The great thing about these simple components is it’s much harder to hide logic inside them so ordering from online store would be almost risk-free.

“Imagine to wait 3 minutes only to see “Failed, checksum error” on the receiving side.”

The latest version of TFC features Reed-Solomon erasure codes (used e.g. in CDs) so that should fix most of the transmission errors:
https://github.com/maqp/tfc/blob/master/Rx.py#L4342

“I don’t know if @Markus Ottela ran into speed issues because of the converters, the coupler or whatever (USB, OS, TFC SW). It seems he didn’t see the proposal or doesn’t find the time to acknowledge.”

The issues were with optocoupler. The simple wire-based data diodes (left-most) have handled RS232’s 115200 bd/s fine.

“I guess any automated feedback would be a no go for a data diode as it would constitute an information channel back to the source”

I agree. I’d rather make all the retransmissions in the world than risk covert return channel.


@Figureitout

“There could maybe be some kind of crazy attack there over powerlines thru a powersupply but not worth it IMO to fully prevent that.”

I pondered this for some time. My threat model is mainly batteries dying on me while I’m demoing the system, so it’s fine for me. Some users might consider the side-channel risk too big. A DC connector leaves room for options, from power supplies to large battery packs.

RE: https://www.adafruit.com/product/2107 I wonder if a shared ground loop was the reason I got such terrible readings with my scope when trying to analyze the bare output of the USB-TTL adapter. Will have to investigate.

“Have you experienced any errors yet?”

This question was directed at Sancho_P, but I’ll just say that outside what bad Python code has caused, not really. The CNY75A works very reliably at 9600 bd/s, and during testing where I sent packets together with their checksums, even 19200 bd/s had no issues (hence the speedup in the latest version). The reason I didn’t use a faster speed from the beginning was that my scope displayed slight latency in rising edges. Turns out it wasn’t a problem.

“You’d prefer silent fail?”

Unrecoverable transmission errors display a warning to the user: https://github.com/maqp/tfc/blob/master/Rx.py#L4343


@Ratio

IIRC it was Frederic Jacobs who talked about pip doing no authentication on downloaded packages.
The SHA256 hashes might do the trick, so I’ll have to re-investigate.

The hwrng.py docstring is incorrect; 768 is a valid length for entropy queries. (It’s really confusing when docstrings are out of date. This will be fixed.)

I’ll have to dive deeper into the input validation check but huge thanks for pointing this out!

Duplicate code is a big issue. At some point the code will have to be rewritten from scratch into smaller packages. “Simple is better than complex”, but having slightly different implementations of each function for different programs might just be bad design.

Global variables are a problem too; I’ve tried to keep their number to an absolute minimum.

Again, thank you so much for taking the time to review the code! I’ve much to learn but it’ll get there.

Clive Robinson October 25, 2016 12:00 AM

@ Joshua Pritikin,

Squid is loaded with cholesterol. Eat it at your peril

It’s a complex, ill-understood subject at best… Not all lipids are the same; if you ask, most doctors will give you the “HDL good, LDL bad” advice. The problem is that the evidence behind the mantra is not as clear-cut as many portray it. There is good LDL as well as bad, but the cost of the tests to differentiate them is almost eye-wateringly high.

Further have you ever asked how much dietary LDL cholesterol makes it across the gut barrier to directly become LDL blood cholesterol?

The answer is nowhere near as much as many have tried to make you believe in the past (or currently). It turns out, on investigation, that the liver is responsible for most of the LDL in your body.

The liver manufactures and secretes LDL into the bloodstream, and does not require any dietary cholesterol to do so. What is not clear is what all the LDLs it produces are for…

Part of this is that there are receptors on your liver cells that can “monitor” and try to adjust the various LDL levels. If however, you have fewer liver cells (which some people do due to genetics), or if they do not function effectively, the various LDL levels may rise. But there may be other reasons for the rise in certain LDL production.

For reasons of “doctrine” and “funding sources” [1], few experiments were carried out in the past into what effects sugar has on the body (the Keys “Sugar good, Fat bad” mantra). This is now changing, more balanced research is being carried out, and steadily it is being found that whilst some LDL causes plaques in the arteries, it appears to be a defence mechanism against other effects, some of which are caused by sugar…

We by no means have all the answers, but unlike the last forty to fifty years, we are now starting to do the research… So time will hopefully produce more answers. But one thing that is hard not to notice is that the epidemic of heart attacks, strokes, TIAs, stones, etc. that has hit the western world appears historically to be a delayed consequence of our “sweet tooth” and “salty tongue” and excess calorific intake from simple carbohydrates.

One thing that is becoming recognised is that “sugar” triggers a response in the brain that, unlike proteins, does not sate our hunger / appetite. It’s been argued that this is a survival mechanism. The reason is that simple carbohydrates like sugars are only plentiful in late summer and early autumn in non-equatorial regions. This is the time when many animals lay down fat stores to see them through the winter months when all forms of plant food are scarce; some even hibernate. It therefore appears logical to assume similar mechanisms are part of human survival.

[1] See the history of “Pure, White and Deadly” and other articles about Keys’ deliberate misreading of his own study, the funding sources from the “corn syrup” industry, and the effects they had on what research was done.

ab praeceptis October 25, 2016 1:59 AM

Markus Ottela

“Curve25519 is the weakest link…” – Of course, as is PK crypto generally. For one, the quality of the numbers used, in particular of the primes, often isn’t exactly ideal. Plus, of course, the sword of PQ (I took the liberty of handing Damocles a post-quantum sword *g) is hanging over our PK crypto.

I’ve played with a “bandaid” idea for a while; “bandaid” because it doesn’t avoid the PQ problem, but it makes it relatively cheap to use a “PK ratchet” (pardon my English).

The logic is this: when choosing the private key for Curve25519 ECDH, why not choose one that is prime and can serve as input for stage 2 (RSA)? If the price isn’t too high, we can at least protect ourselves if either one of them is broken. The effectively used SK for symmetric crypto could then be, say, hashed(SK 1) xor hashed(SK 2) (or whatever mangling recipe you like).
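A minimal sketch of that combiner, assuming two already-established shared secrets (the domain-separation prefixes are my own addition, not part of the proposal):

```python
import hashlib

def combine_secrets(sk_ecdh: bytes, sk_rsa: bytes) -> bytes:
    """Derive one 256-bit symmetric key from two independent shared
    secrets; an attacker must break BOTH exchanges to learn it."""
    h1 = hashlib.sha256(b'stage1-ecdh' + sk_ecdh).digest()
    h2 = hashlib.sha256(b'stage2-rsa' + sk_rsa).digest()
    return bytes(a ^ b for a, b in zip(h1, h2))
```

As long as either hashed secret stays unpredictable, the XOR output does too, which is the point of the bandaid.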

“Twofish, AES and SHA3-256-CTR” – luxury for later, not at all urgent. As you correctly stated, PK is the hot zone. The symmetric algos are damn well analyzed and tested, and deservedly well established.

“bcrypt, scrypt and argon2” – Argon2 was the winner of the contest, and for good reason. If you want alternatives, have a look at the other finalists. Interesting and worth a good look anyway.

“I’ll take a look at the mypy when I find some time.” – Do yourself (and your project) a favour and find that time soon. MyPy is a very simple and comfortable way, with next to zero learning curve, to have your Python code (pseudo- but checked) statically typed. Doing crypto with dynamic typing is just inviting trouble, it seems to me.

Thoth October 25, 2016 3:31 AM

@Markus Ottela
In my original Root of Trust design above, I mentioned that keys are not exportable. Once the user enters the correct secrets into my smart card scheme, it turns into an encryptor, assuming you don’t have a ton of things to encrypt; I assume the encrypted database would not be more than 300 KB?

The smart card can also double as a message encryptor, like Project Vault, which it can be adapted to. The drawback is that smart cards are rather slow encryptors, so they are mostly suited for signature checking and unwrapping the KEK.

Wesley Parish October 25, 2016 3:50 AM

@Europe vs American Models

People with conscience and morals used to run America. Now its tax dodging, profit minded Wall St backed corporations using big-data to crush competition.

That’s rather an oversimplification, and plays right into the hands of the US. It’s interesting to examine what the others think of that:
http://www.republicoflakotah.com/steps-to-sovereignty/158-year-stuggle-for-justice/

The simple truth is that the US government never intruded on the rights of anyone who could fight back, either in the courts or on the battlefield, unless it had an overwhelming advantage; likewise US big business. You should read Norbert Wiener. In one of his books, in the preface or epilog (I’ve forgotten which one: I thought it was God & Golem, but I’ve misplaced my copy), the person giving the biography tells of his former opposition to workers’ rights and the change of mind occasioned by discovering just how petty, low and stupid the bosses actually were compared with the dignity of the workers he met and came to like. A bit of US union history might work wonders: everybody’s taught that the unions established themselves through violence; what nobody ever acknowledges is that the bosses hit first and bloodiest. They only conceded when they were outfought.

Norbert Wiener?
ht tps://archive.org/stream/NorbertWienerHumanUseOfHumanBeings/NorbertWienerHuman_use_of_human_beings_djvu.txt

Clive Robinson October 25, 2016 10:48 AM

Another root to an Android’s heart

It’s not been a good past few days for Android, with the variation on Rowhammer and now Dirty COW:

http://arstechnica.com/security/2016/10/android-phones-rooted-by-most-serious-linux-escalation-bug-ever/

As I’ve already indicated above, I have mixed feelings about such exploits, but when all is said and done I come out on the side of “freedom to tinker”, because that is what moves society out of the maws of the unproductive “rent seeking” wastrels.

Markus Ottela October 25, 2016 2:49 PM

@ab praeceptis

“When chosing the private key for Curve25519 ECDH, why not chose one that is prime”

Curve25519 ECDHE is nice in that it takes any 256-bit value as a valid private key. I’d rather not reduce the keyspace to just the 256-bit primes (I wonder what the ratio is). The sad truth is that the equivalent amount of security for classical DHE requires keys large enough to drive the user typing them insane. And while the QC qubit requirements are higher, they’re not outrageous. IANAC, but if djb is fine with Curve25519, I think it’s good enough until QC arrives. If QC is part of the user’s threat model, PSK is the way to go.
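On the ratio being wondered about: by the prime number theorem, the density of primes near 2^256 is roughly 1/ln(2^256), so restricting private keys to primes gives up only a handful of bits of keyspace. A quick back-of-envelope check:

```python
import math

# Prime number theorem: density of primes near n is approximately 1/ln(n).
n = 2 ** 256
density = 1 / math.log(n)        # about 1/177, i.e. roughly 0.56%
bits_lost = -math.log2(density)  # about 7.5 bits of keyspace given up
```

So the count of primes isn’t the real cost; losing Curve25519’s any-256-bit-string-is-a-key property (and its resistance to invalid-key bugs) arguably is.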

RE: PHC finalists
Good point. It would be interesting to see whether performance-over-security tradeoffs were made there, as in the case of Rijndael vs. Serpent during the AES competition.

@Thoth

After the 256-bit local key is stored on the smart card, the rest is free for the contacts. The RxM-side smart card needs to store two 256-bit keys, for sending and receiving. The TxM side basically needs only 256-bit keys for sending, but I think it’s worth storing a hash that can verify the identities of the parties during the initial key exchange, e.g. SHA256(“fingerprint” + public key with smaller value + public key with larger value). The string “fingerprint” is just for domain separation. You said keys are not exportable. Is this by design, or is it possible to control exportability so that only the hash of the public keys is exportable and not the symmetric keys?

In addition to keys, each contact needs the hash ratchet counters (64 bits) plus a UID; 12 bits suffice for this purpose.

(300 * 1000 * 8 - 256) / (512 + 64 + 64 + 12) ≈ 3680 contacts, which is slightly less than the 12-bit UID space (4096).
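A sketch of the fingerprint just described, with the two public keys sorted so both parties compute the same value regardless of who initiates (this is my reading of the description, not TFC code):

```python
import hashlib

def key_fingerprint(pub_a: bytes, pub_b: bytes) -> bytes:
    """Order-independent fingerprint of an initial key exchange.
    Sorting makes the hash symmetric; the leading string is the
    domain-separation tag mentioned above."""
    smaller, larger = sorted((pub_a, pub_b))
    return hashlib.sha256(b'fingerprint' + smaller + larger).digest()
```

Both sides can read this 32-byte value aloud (or compare a short encoding of it) to detect a MITM during the exchange.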

The XMPP account is somewhat public knowledge, so storing mappings of UIDs to XMPP accounts on TxM/RxM isn’t a huge problem. As long as conversations are not logged on the smart card, I don’t think the limited space is a problem.

Smart cards may be slow, but unlike TxM/RxM currently, they don’t have to re-encrypt the entire key database after every message. Just encrypt + sign/auth + decrypt a 255-byte string, run the key once through SHA256, and update the hash ratchet counter. That’s all. Any idea what the performance numbers are for your ChaCha20(-Poly1305?) implementation, or for AES?

ab praeceptis October 25, 2016 3:41 PM

Markus Ottela

You are right (prime size), but: it’s just a (reasonable) recommendation to use two roughly equal primes for modulus generation. It makes sense, no question, but unless your opponent (a) is extremely potent and (b) has already cracked the ECC part, this is of rather little concern with my mechanism.

Again, my bandaid idea is an addition. The question of how (in)secure an (effectively, it is to be assumed) 512-bit RSA modulus is, is not that trivial, because even 512-bit RSA moduli aren’t factored in seconds (it is a reasonable assumption that today a mid-size academic cooperation would need some weeks to do that; the NSA might do it in days, maybe even hours, if your stuff is important enough for them to throw all their resources at it). But again, that’s an additional, or, worst case (ECC broken), an emergency security layer (512 bits is much less than 1K or 2K bits, but so much better than nothing).

Moreover, “use roughly same-size primes” does not suggest that an RSA modulus made of, say, a 256-bit and a 768-bit prime is as trivial to factor as a 512-bit modulus; it merely suggests that it’s simpler than a 512/512 modulus, which brings us into the grey area of the not-yet-feasible or extremely expensive.

So, I still think that my bandaid mechanism might be useful, at least as an option.

But, of course, I’m not a vendor. You don’t like my mechanism, no problem. I merely like to think of ways for a poor man (compared to nsa or even a university) to defend himself well.

Sancho_P October 25, 2016 4:38 PM

@Markus Ottela

You are welcome, do whatever you want with that stuff, tell me in case you need more / something else.
HCPL7723:
In small quantities you will have to order that optocoupler from Mouser, Digikey, TME or the like; even Amazon has it listed (SMD only, shocking price).
Have your electronics store (if there is one) order it for you; that’s easy with respect to tax and customs, plus there is another “isolator” between source and “target”.

”As for the trust on hardware, COTS has it’s risks, and functionality of optocoupler isn’t verifiable any way. I would imagine though that if the local electronics store that carries these components is infiltrated and adversary hands out IC with malicious logic, user is targeted to the extent one or another close access operation will get them in the end.” (my emph.)

I’m not sure I understood your concerns. First, they’d have either to infect ALL devices (very costly for a mass product, plus with a very high risk of unintended side effects and detection; do not underestimate competitors’ curiosity) or to have an agent at your local store.
The latter is only feasible if you are a very valuable target.
However, there is still hope:
A simple optocoupler can also be tested (230 VAC, hint @Figureitout), and you will connect only one transmission line to the FT232.
On the other hand, any USB / TTL converter is a soft target, so better use the RasPi’s UART lines directly.

Forget about the phototransistor if you want to transmit more than Morse code using your flashlight. There is no way to get reliable, homemade kbps out of it.

Re your discussion with @Figureitout:
[adafruit adapter] ”I wonder if shared ground loop …”
This is a USB-to-USB isolator, not USB-TTL, a completely different device.
It’s intended to be completely transparent to all USB negotiations, commands and data. Of course you’d be free to connect the GND of input and output, no problem.

Re: Reed-Solomon erasure code: Good!
I love it (however, it’s probably over my head).

But:
”Unrecoverable transmission errors display warning to user: https://github.com/maqp/tfc/blob/master/Rx.py#L4343

Exactly what I was talking about; this is crap, totally unacceptable in a critical mission or in security.
Sorry.

I’ll try another simple analogy, this time from the as-famous-as-stupid car domain:
Driving home, you have to hit the brake in an emergency, but nada, zilch;
a simple but true message comes up: “Brake failure!”.
Waking up after 3 months in a coma, you are told “the brake fluid was gone”.
Soooo…

Shouldn’t we have a warning before failure? In case someone has touched …?

But be aware: Real, true security isn’t digital, isn’t true or false.
This is also why my example (the simple brake fluid level switch) is crap.
That’s not security, that’s a farce.

pass …
Good you didn’t invest time here yet 😉

Markus Ottela October 25, 2016 8:04 PM

@Sancho_P

RE HCPL7723:

Will have to take a look. Ordering ICs online is asking for trouble, but using an electronics store as a proxy might indeed help.

“On the other hand, any USB / TTL converter is a soft target, so better use the RasPi’s UART lines directly.”

This assumes all computers are RasPis.

“Forget about the phototransistor if you want to transmit more than Morse code using your flashlight. No way for reliable + homemade kBps.”

The paper by Jones et al. says 1200 bd/s works reliably. That might be slow, but it doesn’t mean it isn’t in the interest of the many users who would rather have the assurance than the speed. Let the user decide.

“Exactly what I was talking about, this is crap, totally unacceptable in critical mission or security. Sorry.”

This is instant messaging we’re talking about, not delivering nuclear launch codes. As I said, I haven’t encountered any transmission errors with the data diodes so far. Reed–Solomon wasn’t added because it was needed, but because it’s good protocol design. Even if you had to re-send one message in a thousand, it’s like having to restart a car on a winter morning. Having an ACK channel leak the private keys from the RxM: that is having the brakes fail, and there’s no warning light.

If you have a trivial, provably secure way to verify transmissions, I’m willing to listen. Otherwise I see no reason to endanger security over a nonexistent problem.

Thoth October 25, 2016 9:35 PM

@Markus Ottela

“but I think it’s worth it to store a hash that can verify identities of parties during initial key exchange, e.g. SHA256(“fingerprint” + public key with smaller value + public key with larger value)”

This can be done. Pretty easy.

“You said keys are not exportable. Is this by design or is it possible to control the exportability so that only the hash of public keys is exportable and not the symmetric keys.”

This depends on whoever writes the applet code. If the developer does not include a mechanism to export, there is no way to extract it. If you include a method to export, it can be exported in a secure or insecure fashion, whichever the developer chooses.

“As long as conversations are not logged on the smart card I don’t think there’s a problem with the limited space.”

Most smart cards have 80 KB of EEPROM, and that includes the executable. Better to store non-cryptographic stuff, or things that do not need smart card protection, off the card, and use a card key to wrap them if necessary.

“Smart cards may be slow but unlike TxM/RxM currently, they don’t have to re-encrypt the entire key database after every message. Just encrypt+sign/auth+decrypt 255-byte string, run the key once through SHA256 and update the hash ratchet counter. That’s all.”

Indeed, there is no need to re-encrypt the key database, since the smart card already is a secure environment that is also tamper resistant by itself. Hmmm… if you have to re-encrypt your database after every message, wouldn’t that mean that your TFC has significant overhead every time you send a message, since you must re-encrypt the key database each time?

“Any idea what the performance numbers are on your ChaCha20(-Poly1305?) implementation or AES?”

ChaCha/Poly1305 will be slower than a snail crawling. Better not to go in that direction. For AES-256 encryption, including I/O to the smart card, over a 200-byte string, that will take somewhere between 70 and 90 ms. The ciphering is not the lagging part; it is the I/O that slows down that portion of the scheme. Most ciphering takes from 2 ms to 40 ms at most: for Infineon chips via SLCOS-type cards (COS are the different types of Card Operating System) it takes around 2+ ms, and for FTJCOS-type cards around 4+ ms, to cipher a 200+ byte string.

What kind of speed you are looking at, and how many bytes you need to encrypt/decrypt/sign/auth/hash, depends on the use cases.

Just to give an idea of the timing: 1 second is 1000 ms, and let’s assume the smart card I/O (both ways, altogether) + AES ciphering + SHA256 hashing takes 90 ms, which is 0.09 seconds, over a byte string of 200 bytes.

A cautionary note on getting smart cards to encrypt something like a PDF file, images or multimedia: the user might not be patient enough to wait 2.5 minutes, twiddling their thumbs, for a 345 KB PDF to de/encrypt and hash via an AES-256-CBC-PKCS5-SHA256 scheme. Anything done on a smart card should usually be completed within 3 I/O cycles (256 bytes * 3 cycles of I/O); otherwise the user will probably pull their hair out.
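The 2.5-minute figure follows directly from the numbers stated above; a back-of-envelope check, assuming 90 ms per 200-byte cycle as in Thoth’s estimate:

```python
MS_PER_CYCLE = 90           # card I/O + AES + SHA256 per cycle (estimate above)
BYTES_PER_CYCLE = 200
FILE_BYTES = 345 * 1000     # the 345 KB PDF example

cycles = -(-FILE_BYTES // BYTES_PER_CYCLE)  # ceiling division: 1725 cycles
seconds = cycles * MS_PER_CYCLE / 1000      # about 155 s, i.e. roughly 2.5 minutes
```

The same arithmetic shows why a single 255-byte message (one or two cycles, well under 200 ms) is perfectly tolerable while bulk files are not.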

Do note that smart cards don’t support Curve25519, since it’s not NIST-standardized. These cards usually support RSA-2048, and for the lucky few, RSA-4096 would be supported too, if users are willing to pay cash for that kind of security. NIST ECC curves are to be expected on most modern cards, with key sizes usually up to 256 bits.

My recommendation is to use smart cards for their AES/SHA hardware, and to use them to secure critical code and data (i.e. integrity checksum comparisons for key databases) that you are worried malware might attempt to alter, deny or exfiltrate, since the only practical way into a smart card, other than physical decapping and physical attacks, is via its APDU logical protocol, which is easily audited, an open standard, and controlled by the developer of the applet, if you know where and what to look for.

0xc4 October 25, 2016 10:42 PM

Going cold is a way the average user can “air gap” their records, in a way that minimally interferes with day-to-day rhythms, while treating special, sweeping requests as the sensitive events that they are.

Strong dark archives: people or institutions to stand guard over our lives’ data

(Shamir’s secret sharing?)

Imagine if instead of simply storing digital documents in a manner which was readily recoverable, sensitive data at rest were encrypted with a key that is then broken into fragments, with each “horcrux” held by a fiduciary in a different jurisdiction.

No one trustee—nor the donor—could jump the gun and get to the data—but the right group of them can. This would also, in some cases, help mitigate the duress that a particular person could be put under to turn over a key, since only part of a key is held.

All this talk of security and non-repudiation without any mention of kleptography or the vulnerability of endpoints? Uh-oh!

Clive Robinson October 25, 2016 11:05 PM

@ Markus Ottela, Sancho_P,

If you have a trivial, provably secure way to verify transmissions, I’m willing to listen. Otherwise I see no reason to endanger security over nonexistent problem.

As Claude Shannon formalised, and others before him demonstrated, no physical communications path is “100% reliable”. It boils down to:

1, What Bit Error Rate (BER) you can get on average.

2, How you detect any errors.

3, The higher level protection protocols you use to resolve them.

So lets assume the BER is low(ish) on average(1) and take a look at what methods there are to detecting errors(2).

The normal way to detect errors these days is with some kind of mathematical function, ranging from simple parity checks upwards. Each of them adds bits to the data sent, and Richard Hamming was one of the first to show that some functions can also be used to correct errors.

This in turn gave rise to the notion of Forward Error Correction (FEC) codes. However, all FECs have limits based on the BER of a given channel, and other methods have to be used in high-BER conditions, all of which use a feedback channel that is itself subject to errors… So it is always possible for errors not to be detected or corrected.
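The Hamming(7,4) code is the classic small example of what Hamming showed: three parity bits protect four data bits, and the syndrome directly names any single flipped bit. A compact illustrative sketch (not TFC code):

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into 7 bits (positions 1..7),
    with parity bits at positions 1, 2 and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct at most one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Codes like Reed–Solomon are the heavy-duty descendants of this idea, trading more parity symbols for the ability to correct bursts rather than single bits.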

The use of a simple feedback loop is one way to detect errors, but it is as prone to giving false errors as it is to detecting real ones, due to having two equal-length channels in the same environment. However, it provides a logical starting place from which to work.

One thing a simple feedback loop can do is detect errors without leaking any other information from the receiver, which is desirable from a security perspective.

The way you do this is to chain the transmit path back to the transmitter’s receive path at the receiver, and the transmitter checks what you could call the “echo”.

So the total path would be: from the transmitter’s transmit pin to the TX opto-coupler input, from the TX opto-coupler output to the receiver’s receive pin, which is then tee’d off to the RX opto-coupler input, and from the output of the RX opto-coupler to the receive pin of the transmitter.

Provided the transmitter receives what it sent, either the bit was correctly delivered to the receiver’s receive input or there were an even number of coincidental errors in the total path. Over short data paths, such errors most usually happen when a single element common to more than one segment of the path glitches. The usual suspects are the likes of power supplies under electrical noise, or common connectors/cables under mechanical noise.
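In software, the transmitter-side check reduces to comparing each sent byte with its echo. The `LoopbackChannel` below is a toy stand-in for the wired path described above, not real hardware:

```python
class LoopbackChannel:
    """Toy model of the echo path; `flip` injects a forward-segment error."""
    def __init__(self, flip=0):
        self.flip = flip
        self.echoed = None

    def write(self, byte):
        self.echoed = byte ^ self.flip  # what comes back via the RX coupler

    def read_echo(self):
        return self.echoed

def send_with_echo_check(byte, channel):
    """Send one byte and verify the echo. A match means correct delivery,
    or an even number of coincidental errors on the loop (the caveat above)."""
    channel.write(byte)
    return channel.read_echo() == byte
```

Note the check runs entirely on the transmitter, so a mismatch triggers a local retransmission without the receiver ever signalling anything back.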

The question then arises of how you deal with errors that arise in that path, as well as errors beyond the receiver’s receive pin. These are standard questions with standard answers under various assumptions based on path reliability and the nature of the actual transmission, such as half or full duplex. When security of the transmitter is the major concern, the usual option is not to have feedback from the subvertable parts of the receiver; thus half-duplex operation is assumed. Which in turn means FEC, and either accepting that there will be errors, or, at some much higher layer of the system, a method of requesting retransmission at some later point in time, in full or in part. Such higher-level signalling protocols were addressed, in an insecure way, by the likes of Kermit and Z-Modem some thirty years ago, and on very unreliable radio paths with very high BER by Automatic Repeat reQuest (ARQ) protocol systems like SITOR and AMTOR some forty years ago, with additional error detecting and correcting methods arriving later in PACTOR. These basic protocols can be augmented by better FEC and other codes, but the gain is not likely to be seen in anything other than unreliable and noisy channels.

I’ll stop at this point, because all feedback from susceptible parts is a potential covert channel, and that has a whole load of additional issues that require other measures to detect and correct, such as voting protocols to detect “traitors”, long time delays on errors to reduce the covert channel bandwidth, and other isolation techniques.

Oh, one last point, speaking of isolation techniques: if the receiver’s receive pin is not a true input pin but a general-purpose, software-selectable I/O pin (GPIO), then it is subvertable. You can solve this by putting in a buffer to provide isolation between the tee point of the path and the receiver’s GPIO receive pin (nobody said “thinking hinky” was easy 😉

Markus Ottela October 25, 2016 11:12 PM

“Better to store non-cryptographic stuff or things that do not need smart card protection off the card and use a card’s key to wrap them if necessary.”

That’s actually a much better idea: store all keys on TxM/RxM and store one key-encryption key on the smart card. The smart card authenticates whatever keys are input before decrypting and using them. More I/O, but significantly smaller space requirements.

“wouldn’t that mean that your TFC have a significant overhead everytime you send a message since you must re-encrypt key database after every message ?”

At the moment yes, but by default we’re talking about less than 75 kB of data for the contact database; it’s so fast you barely notice it on a gen-1 RPi, and on anything faster (e.g. a $200 netbook) it’s completely unnoticeable.

“What kind of speed are you looking at and how much bytes you need to encrypt/decrypt/sign/auth/hash depends on the use cases.”

This depends on how much data should leak to a physical attacker compromising TxM/RxM. Each entry in the RxM database has 11 fields that are all padded to 255 bytes prior to encryption, so if contacts are separated into independent files, it’s 2.8 kB per contact. If the number of entries is hidden by padding the entire database (by default 20 contacts), the database size is always 56 kB.
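The arithmetic above (11 fields × 255 bytes ≈ 2.8 kB per contact, 20 contacts ≈ 56 kB) follows from fixed-length padding before encryption. A sketch, using 0x80-then-zeros padding as a stand-in for whatever scheme TFC actually applies:

```python
FIELD_LEN = 255

def pad_field(field: bytes, length: int = FIELD_LEN) -> bytes:
    """Pad a field to a fixed length so ciphertext size leaks nothing.
    The 0x80-marker scheme here is illustrative, not necessarily TFC's."""
    if len(field) >= length:
        raise ValueError('field too long to pad')
    return field + b'\x80' + b'\x00' * (length - len(field) - 1)

def unpad_field(padded: bytes) -> bytes:
    """Strip the trailing zeros and the 0x80 marker."""
    return padded.rstrip(b'\x00')[:-1]

per_contact = 11 * FIELD_LEN  # 2805 bytes, i.e. ~2.8 kB per contact
database = 20 * per_contact   # 56100 bytes, i.e. ~56 kB for 20 contacts
```

Because every field is the same length on disk, an attacker holding the encrypted database learns neither field contents nor their sizes, only the (padded) contact count.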

So ~100 ms for a 255-byte message isn’t a problem, but the databases might be.

Is the AES authenticated encryption (e.g. GCM), or is a separate SHA256 MAC required?

RE: curves

This is a big problem. Typing in 2k/4k RSA public keys isn’t practical, and like Bruce, I don’t think the P-curves are trustworthy. But threat models vary, and not every nation state has access to the backdoor. Maybe. So it might be a useful addition under some threat models. Hopefully Yubico etc. will build devices that use Curve25519 for signing and key exchange.

“My recommendation is to use smart cards for their AES/SHA hardware and to use it to secure critical codes and data”

It is useful, but if the RxM is compromised, whatever native software is authenticating e.g. the TFC software with the help of the smartcard might give a false-positive answer. I think the best use for a smart card under the threat model is to protect the keys and prevent impersonation. But again, the PIN code needs a secure input channel, and I’m not sure the tech exists yet.

One option would be to run TxM and RxM with the Tails live distro from DVD, and store all persistent TFC files on something like this, which has an alphanumeric password and overwriting features. With a $400 price tag, it’s not for the average user.

A less expensive alternative would be
https://istorage-uk.com/product/datashur/

Clive Robinson October 26, 2016 12:13 AM

@ Markus, Bruce,

On the same page you link to Bruce also says,

    Clearly I need to write an essay about how to figure out what to trust in a world where you can’t trust anything.

I don’t remember whether that essay ever appeared.

RE: Porpoise October 26, 2016 12:37 AM

In sum, we’ve replaced judge and executioner with robots. What’s next, the jury too? In all likelihood, global warming will offset the chilling effect the reality of 1984 has had on us all, and anyone even flirting with holding a picket sign will be identified, flagged and tagged for removal through remote sensing of their mood swings, facial expressions and expletives. This will enable them [not us] to intervene on behalf of the greater good.

ab praeceptis October 26, 2016 12:54 AM

Clive Robinson

Slightly extending on your mentioning Shannon (hitting the nail), if I may.

I’m amazed over and over again at the grey zone where philosophy and physics touch each other. Following the rabbit hole (that’s a proper English phrase, no?) examined and researched by Shannon deeper, one finally(?) arrives at quanta (and quarks, too, btw) which, if we haven’t gotten that completely wrong, prove and necessitate entropy and, more importantly, the absence of any reference (at least over any not insignificant amount of time (which opens another rabbit hole, but I’ll stay on path)).

In other words:

a) Security is provably not achievable. What is achievable, and what all security is bound to be limited to, is an approximation. Neither is “justified trust”, i.e. the certainty of not being failed.

b) Security is indeed, as many (like Bruce Schneier, you, and quite some of us) have so boringly (and correctly) repeated, not a state but a process (which I would like to elaborate on, because it is intimately and causally related to what we see in sw and hw, but I’ll refrain).

c) We ask many questions around security, such as “against what?”, but we tend to ignore an immensely important question: “over what amount of time, and when?”.

Gladly, and that’s where my approach differs from e.g. yours, we can achieve quite satisfyingly high levels of security, due in part to the fact that our opponents meet similar barriers as we do. Maybe now I can show better why I have such a pragmatic attitude (see also (d)).

d) The cost of security increases exponentially or near-exponentially (which is funny because, on the other hand, we perceive as secure those mechanisms that drive our opponents’ costs up exponentially *g).

Praised be Shannon. That man’s work was indeed important and profound.

Thoth October 26, 2016 12:55 AM

@Markus Ottela, Figureitout, Clive Robinson, Nick P
Please do not mention dataShur as a secure device. I remember I busted dataShur as an INSECURE device in one of my posts somewhere.

You should start reading the dataShur Common Criteria paper and you will be appalled at how insecure it is. If you give the dataShur to @Figureitout, the dataShur becomes dataUnshur !!!

You have some technical background yourself, and you should consider experimenting with defeating dataUnshur if you have the time and money. The problem with dataUnshur’s security is that it has no tamper resistance at all. None. It relies on its USB microcontroller to accept a PIN and do the security operations, and that USB microcontroller can be a PIC chip; there are tonnes of materials in the wild on how PIC chips are NOT security chips. All dataUnshur relies on for security is a bunch of tamper-evident epoxy, and that’s all.

Too bad I don’t have access to strong acids and the like, as you need a chemist’s license to operate a lab in Singapore; otherwise I would have decapped that thing a long time ago. How dataUnshur works is that the PIN and crypto are all bunched into the PIC chip. If you enter the wrong PIN, the PIC chip acts as a hardware router that simply denies USB mounting. If you enter the correct PIN, the PIC chip allows USB mounting and the AES decryption takes place. The worst thing is that the PIC chip is not even a security chip of any sort.

In fact, dataUnshur does not even own the technology. They simply licensed the patented DataLock technology from ClevX, which has many other licensees (not just dataUnshur; Toshiba has a similar spin-off). All products licensing the DataLock patent are vulnerable in the sense that they are not truly tamper resistant yet are marketed as tamper resistant, and who knows what other vulnerabilities lie inside, including attacks via the JTAG and whatnot.

So what if dataUnshur gets FIPS 140-2 Level 3? It means nothing, because the current FIPS 140-2 standards are considered outdated and easily bypassed. I think Ross Anderson could easily bypass many FIPS 140-2 “HSMs” that are not HSMs in any form or way. That is simply dishonest marketing, and a security certification system that has been busted by many experts and shown to be faulty. Heck, even CC EAL certification is pointless, as most of the CC process is not spent actually decapping and micro-scoping the chips sent for “evaluation” but mostly on reading piles of reports that have to conform to CC certification.

If you want CC EAL 5+ chips, you put a bunch of metal shields on the chip, enclose it in epoxy, write a nice report, and there you have it: a “smart card” or “secure element” chip. Yes, I am simplifying a ton of stuff here, but I just want to put the point across that most secure elements, HSMs and smart cards (including TPMs) are only mildly to moderately tamper resistant, if tamper resistant at all.

I made popular the idea of using smart cards because it separates actual state attackers from script kiddies. Smart cards are mildly tamper resistant, but not strong enough if you are up against nation states. I have been developing a bunch of methodologies, which I am finding time to write about and release in future, to compensate for the mild tamper resistance of these smart card chips. They simply make attacking harder on the remote level and somewhat harder on the physical level; compared to dataUnshur, which does not use any tamper-resistant chips (even mildly so), dataUnshur is a big door left wide open.

One other highly important reason I push for smart card adoption is programmability. You can script your JavaCard, BasicCard, .NET Card, MULTOS C card and so on, and in that instance you can obfuscate your code and executables. For a smart card chip with limited RAM and EEPROM, it will be difficult to hide malware that de-obfuscates and predicts your execution, as the lag would be too apparent and easily detected. Smart cards are also highly available on the market, and almost every hardware security researcher will decap a smart card at least once; if stuff were hidden, someone would have caught something. Because of this high availability, hiding backdoors that can be easily measured by researchers becomes harder. These smart cards are also commodity products used for virtually anything from payment to transit to government use; backdooring would require too much effort and too large a scope, and would also introduce the risk of the backdooring government having the same vulnerability as everyone else. Thus my view that backdooring all the world’s smart cards is implausible in whatsoever form, and thus my push for using smart cards as Secure Execution Environments for very critical and weird use cases that the industry never dreamt of, to catch them off guard.

DataUnshur is in no form or way suitable even for HIPAA, FIPS, CC, HSPD, DOD, CESG, EMV, PCI DSS or whatever government, financial or critical-infrastructure use, let alone for protection of data security in my opinion, due to its insecure chip design.

If they want to improve, they must move away from an insecure chip and use something like a smart card controller or a dedicated HSM chip, even if it costs money.

Yubikey once used an insecure microcontroller and was shown to be insecure (in one of those old Black Hat or DEF CON videos somewhere in YouTube land); after the exposure, Yubikeys are now equipped with NXP’s JCOP smart card controller. The Yubikey uses a JavaCard-capable smart card chip with a CC EAL 5+ rating, and technically you could program a Yubikey like any smart card using the JavaCard language, but they decided not to release the ISD keys (Card Manager keys) required to authenticate and authorize programming of the NXP JCOP chip. I have not heard of any more problems with Yubikeys since they switched to using NXP’s JCOP smart card as the heart of their Yubikey products.

Side Note: Do note that trying to bruteforce the ISD keys (2-key TripleDES) with more than three consecutive failed authentications into the Card Manager applet of any smart card may cause the card to permanently brick itself, which is a good way of protecting against unauthorized card programming 🙂 . You need to request the 16 bytes of TripleDES keymat for the ISD domain from whoever issued you the card in order to authenticate and load executables.

Links:
http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp1873.pdf
http://www.clevx.com/datalock-fips.html

Curious October 26, 2016 12:59 AM

“French surveillance law is unconstitutional after all, highest court says”
http://www.pcworld.com/article/3134476/french-surveillance-law-is-unconstitutional-after-all-highest-court-says.html

“A key clause of last year’s Surveillance Law essentially allowed security agencies to monitor and control wireless communications without the usual oversight applied to wiretapping operations.”

“This is unconstitutional as the lack of oversight is likely to result in a disproportionate invasion of privacy, the council ruled Friday.”

Markus Ottela October 26, 2016 1:02 AM

@Clive Robinson

One idea I’ve been playing with in my head is an electromechanical system where the ACK signal is stored in a simple latch circuit, and the connectors for reading and writing are controlled by relay switches operated by TxM. The logic would prevent a galvanic connection from NH to TxM at all times. Since only a single bit gets transferred, compromise of TxM should be practically impossible.

This of course does not work with NH delivering data to RxM, because an adversary can in theory control both devices. So I don’t see a way to ensure transmissions over the recipient’s (the more troublesome) data diode. One mitigation would be to change per-packet forward secrecy to a per-session one. Outside long trickle-connection based ones, the sessions in TFC are relatively short lived. (Moxie was right to wonder why OTR used round-trip Diffie-Hellman ratchets.) With per-session keys, NH could cache ciphertexts and replay them at the user’s request, based on some ID of a packet RxM complains it didn’t receive. The command is easiest to send from TxM, as NH doesn’t have an input function.

Alternatively, RxM could cache previous keys and partial packets in RAM for a limited time and wait for missing ciphertexts. But this is, again, a risky solution to a seemingly non-existent problem.

Figureitout October 26, 2016 1:03 AM

Thoth
–Not conclusive at all; I’d need one of those boards on my desk, smart card and all, be able to program the firmware of the interface chip and the Java application at will, and probably a month or two working full-time on it to fully investigate. Like I said, I wouldn’t trust timing measurements that are purely software-based on non-DSP chips, due to getting burned a bit in the past by software timing issues. The scope sorted it out real quick, and I was able to zoom in on the problem real quick (you have to know the code base pretty well to do that).

What the timing measures is the total transaction time between the card and the desktop
–That’s good, that’s part of what we want, but it would take a bit of memory on a scope. If anything, there are likely tiny delays interspersed in the 200- or 256-byte payloads constantly sent out and back. I don’t see that adding up to much above 2% of the time.

Regardless, I’d be willing to bet the problem isn’t the comm speeds; I’m just not sure precisely where the main delay is happening. You could put flags all over the timing test code, like when encryption starts and finishes, and time that period; not sure. It may be too much effort to speed it up. You could spin the performance issues as dealing w/ some timing attacks, but still, if someone has a smaller doc to encrypt they’re going to want it done faster.

Markus Ottela
My threat model is mainly batteries dying on me when I’m demoing the system
–Ha, yeah, I’m not insane enough yet to run all my computers off a generator (or even better, a windmill) in the backyard. Makes my eyes bleed thinking how to eliminate that threat; anything electrical is ill-advised at that point. But if you power both your PCs from within your home, that’s a connection; isolated by a bunch of transformers, but still, ultimately it’s a connection between TX and RX. There was a senior project at my school on a comms channel over powerlines.

RE: usb isolator
–Man, we gotta use a TTL-RS232, then an RS232-USB adapter; that’s the way to go! But yeah, I was thinking this would be nice before going into the PC. I know it passes USB data fine; I even used my USB sticks to test that some. It’s just that I got compelling evidence that the ground was changing when I plugged/unplugged the other side of the isolator. Can’t expand on it.

The CNY75A works very reliably at 9600 bd/s and during testing where I sent packets together with their checksums
–Ok, good to hear; hopefully it’s comparable to the HCPL7723. Have you done testing on the Reed-Solomon ECC, like intentionally inducing errors to see that it works, or have you never seen errors?

Sancho_P
Also a simple optocoupler can be tested (230 VAC hint @Figureitout)
–OR a simple continuity test. A continuity test is one of the simplest, most powerful tests in electronics, I swear. Play around w/ voltage high enough (current around 1-2 amps?) to hurt you, or do a completely safe, harmless test to check whether something is electrically connected or not. Hmm, tough choice.

Shouldn’t we have a warning before failure? In case someone has touched …?
–So you want a warning of something before it’s happened? Uhh, are you listening to yourself? You like crystal balls and fortune tellers and stuff like that? Unless you can identify specific, repeatable conditions for an error.

I think you’ll be going down a rabbit hole of insanity looking for what you describe; it doesn’t even look remotely technically possible.

Thoth October 26, 2016 1:26 AM

@Markus Ottela

“That’s actually a lot better idea. To store all keys on TxM/RxM and to store one key encryption key on the smart card. The smart card authenticates whatever keys are input before decrypting and using them. More I/O but significantly less space requirements.”

This is how most smart card applets do it. They handle the tiny KEK, and the rest is left to the host computer.

“So ~100ms for 255 byte message isn’t a problem but databases might.”

Good to know.

“Is the AES authenticated encryption (e.g. GCM), or is SHA256 MAC required?”

GCM is a very new cipher mode and thus not supported. That is not surprising, considering most smart cards are simply tamper-resisting legacy CPUs and were never meant to be all too up to date.

Current Cryptographic Algorithm list for you to select:
– AES_CBC_NOPAD (no padding literally)
– AES_ECB_NOPAD
– AES_CBC_PKCS5 (depends if supplier has PKCS5, currently working on PKCS5 to be hand coded by myself in case PKCS5 is not around)
– AES_ECB_PKCS5 (depends if supplier has PKCS5, currently working on PKCS5 to be hand coded by myself in case PKCS5 is not around)
– SHA_256
– HMAC-SHA_256 (depends on supplier but I have a working HMAC-SHA2 library I hand coded and suppose to work from SHA1 to SHA2 of any variant).
– RSA_NOPAD
– RSA_PKCS1 in Cipher.MODE_ENCRYPT and Cipher.MODE_DECRYPT
– RSA_SHA1_PKCS1 in Signature.MODE_SIGN and Signature.MODE_VERIFY
– RSA_SHA256_PKCS1 in Signature.MODE_SIGN and Signature.MODE_VERIFY (depends on supplier if the SHA256 based PKCS1 is supported but I too have a working hand coded variant)
– ECDH_NIST_P_PLAIN with DHC and DH mode (Allows flexibility of hashing of negotiated shared secret after ECDH event)
– ECDH_NIST_P_KDF_SHA1 with DHC and DH mode (Performs mandatory SHA1 of shared secret as the final secret material after ECDH event thus resulting in 20 byte secret material presented without flexibility).
– ECDSA_NIST_P with SHA1, SHA224 and SHA256 hash as typical offering. SHA384 and above hashes are more rare.

“This is a big problem. Typing 2k/4k RSA public keys isn’t practical, and like Bruce, I don’t think the P-curves are trustworthy. But threat models vary and not every nation state has access to the backdoor. Maybe. So it might be useful addition under some threat models. Hopefully Yubico etc. will build devices that use Curve 25519 for signing and key exchange”

Yubico uses NXP JCOP smart cards at its heart. The smart card industry is driven by NIST directives. Sadly, non-NIST curves are in high demand, but for now the industry is still crapped up. Better to use symmetric crypto (AES) whenever possible.

One good thing is that NXP’s JCOP includes its proprietary ECC curve library, with which anyone signing an NDA with NXP can implement their own ECC curves (including Curve25519); the side effect is that you won’t be able to port the code to another platform, and NDAs are horrible stuff. It would be nice if Yubico could use its relationship with NXP to take the ECC curve library NXP supplies to its partners and create an open Curve25519 API, but so far I have not heard any news of such an initiative. They need to be nudged 😀 .

“It is useful but if RxM is compromised, it might be that whatever native software is authenticating e.g. TFC software with the help of smartcard, might give false-positive answer. I think the best use for smart card under the threat model is to protect the keys and prevent impersonation. But again, the pin code needs a secure input channel and I’m not sure the tech exists yet.”

That would require a dedicated MCU if you are worried. More work, but also more security. You could get a bunch of buttons, an OLED display and an STM32 to work on, but that’s gonna take a huge amount of time. If you are going to commit to TFC as your main project for many years to come, and even spin off a marketable variant for the sake of getting some income, you could consider this route.

You won’t need to spend USD$400 on dubious security stuff that you cannot audit and control yourself. I would advise you to grab an Arduino, mount an OLED display, and get a keypad or something to hook to the Arduino, then do some comms between the Arduino-built secure PINpad and your RPis. The Arduino can also be equipped with an ISO7816 card reader module; I was just browsing a while ago and found a smart card module from Adafruit. You could use a USB module on the Arduino for a USB-CCID interface to the smart card as well, which solves the problem of needing a dedicated ISO7816 stack, since there are tonnes of USB-CCID card readers on the market out there.

Links:
https://www.adafruit.com/products/101

Markus Ottela October 26, 2016 3:35 AM

@Figureitout

“Have you done testing on the Reed-Solomon ECC’s, like intentionally inducing errors”

The unittest class for Reed-Solomon encoding adds random errors to the beginning of the transmission and ensures the corrected string matches the original:

https://github.com/maqp/tfc/blob/master/unittests/test_tx.py#L3229

I should probably make the error placement random but, other than that, the correction seems to work provided the e_correction_ratio value can handle the number of errors.
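The idea of exercising the corrector with injected errors can be sketched with a toy single-error-correcting code. This is purely illustrative (Hamming(7,4) here, not the Reed-Solomon code TFC actually uses, and all function names are my own): flip any one bit of a codeword and verify the decoder repairs it.

```python
# Toy error-injection test: Hamming(7,4) stands in for Reed-Solomon.
# Codeword positions 1..7 hold [p1, p2, d0, p4, d1, d2, d3].

def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit single-error-correcting codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]   # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                     # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                     # parity over positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                     # parity over positions 4,5,6,7
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def hamming74_decode(code):
    """Correct up to one flipped bit, then extract the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4             # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1                    # repair the flipped bit
    return c[2] | (c[4] << 1) | (c[5] << 2) | (c[6] << 3)

# Inject an error at every position of every codeword, verify full recovery.
for nibble in range(16):
    for pos in range(7):
        corrupted = hamming74_encode(nibble)
        corrupted[pos] ^= 1                     # the deliberately induced error
        assert hamming74_decode(corrupted) == nibble
print("all single-bit errors corrected")
```

The same pattern (encode, corrupt at a chosen position, assert the decode matches the original) generalizes to the Reed-Solomon unittest, with the number of injected errors bounded by the correction ratio.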

“So you want a warning of something before it’s happened?”

I think what was meant here was that an error would be detected before a large packet is fully received. The packets in TFC are ~500 bytes long (it depends mainly on the length of the tx/rx XMPP addresses delivered to NH), so there won’t be long delays.

@Thoth

Re: Current Cryptographic Algorithm list

AES_CBC_NOPAD and HMAC-SHA256 are probably fine, provided the plaintext is padded prior to uploading it to the smart card.

“–unless you are going to commit to TFC as your main project for many years to come and even spin a marketable variant for the sake of getting some income, you could consider this route.”

That’s unlikely to happen. Interdiction makes selling the system commercially too risky. I’d rather not deal with whatever NSLs are handed out in this industry, or with what compromised subcontractors deliver to storage. A decentralized project is much safer for all users, even with all the inconveniences.

RE: Arduino.

The assurance depends on whether it can be reprogrammed for PIN stealing by RxM. Bidirectional I/O is required for key handling. I’d imagine the isolation of input is handled better by the proprietary, FIPS-certified security products.

Markus Ottela October 26, 2016 3:59 AM

@ab praeceptis

Cost for security increases exponentially or near exponentially (which is funny because, on the other hand, we perceive as secure those mechanisms that drive our opponents’ costs up exponentially *g).

I’d imagine the costs are something like the running cost of fundamental research into security, math and exploits. Then there’s the deployment cost of massive systems, with maintenance costs (small compared to the number of targets). As end-to-end encryption becomes the norm, governments move towards automated exploits; the running cost increases but the deployment costs go down (currently existing infrastructure like Quantum, FoxAcid etc. appears to work fine).

What troubles me is that there’s no exponential, or possibly even linear, increase in cost while an exploit remains undiscovered. Research by Symantec showed the window of exposure for zero-days is 312 days on average. This number goes down as the number of targets goes up, but the probability of losing the exploit isn’t linear. As lower-value targets are added, their security practices go down and the risk increases logarithmically. If security experts and dissidents with infosec contacts/background can be filtered out, the chances of getting caught are very low. Average citizens are not running Snort. Average citizens don’t blame the NSA when their computer crashes. Average users don’t even understand the concepts of vulnerability and exploit. They won’t report their suspicions.

To me it feels like TFC is the first to add a roughly linear increase in cost per user to break the system. Were I wrong, that would mean it’s cheap to compromise any system whose user obtains security software online; at that point the protection computers can provide against the adversary is negligible.

@Thoth

RE: dataShur

Good to know they are not worth their salt. It saddens me to hear that the certifications aren’t obtained through active testing by a third party (especially when it comes to CC EAL).

Thoth October 26, 2016 5:16 AM

@Markus Ottela
Padding wouldn’t be a concern, as I am already working on PKCS5 pads and I can make them work even with NOPAD modes. The HMAC I wrote should work across multiple platforms, and so far the big providers work fine with the crypto library I created for smart cards, so that should be fine.

Regarding Arduino-based secure PINpads: you can control the I/O and support only whichever external ports you want. You could even program the Arduino with an infra-red interface, making exfiltration much harder.

Wondering October 26, 2016 5:20 AM

Why doesn’t someone develop and market things like a ~$50 mobile/landline snap-on hardware encryption device for the common people, to be sold on ordinary store shelves? So much mass and targeted control; why so few effective and practical countermeasures?

Clive Robinson October 26, 2016 5:38 AM

@ Figureitout,

OR a simple continuity test. Continuity test is one of the simplest most powerful tests in electronics, I swear.

Yes and no; many components do not conduct until a certain voltage is reached. Think of a Zener diode: it is designed to have minimal to no conduction below its knee voltage, and quite a low impedance afterwards.

It’s not just semiconductor devices but “gas tubes” as well; hence such nice test kit as the Megger, which will make your eyes light up if you hold the wrong things during testing 😉

I’ve got Hi-Pot test gear for testing insulation and gap breakdown that pushes a nice 1500-volt pulse to wake up the inattentive…

The thing Sancho_P left out of his primitive Hi-Pot test was a current-limiting resistance. Various scientific bodies (including medical ones) point out that the old saying “It’s the volts that jolt, but the mils that kill” is nearly right… Thus you need a series resistance that at the peak-to-peak voltage allows only 30mA maximum.
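The minimum series resistance follows directly from Ohm’s law; a quick sanity check for the numbers above (an illustrative sketch, the function name is my own):

```python
# Minimum series resistance to cap fault current via Ohm's law (R = V / I).
# Illustrative only: real Hi-Pot gear needs properly voltage-rated resistors.
def min_series_resistance(peak_volts, max_amps=0.030):
    """Smallest resistance that limits current to max_amps at peak_volts."""
    return peak_volts / max_amps

r = min_series_resistance(1500.0)   # the 1500 V pulse mentioned above
print(f"{r:.0f} ohms minimum")      # prints: 50000 ohms minimum
```

So even the 1500 V pulse stays under the 30 mA limit with a 50 kΩ (or larger) series resistor in the loop.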

Thoth October 26, 2016 6:01 AM

@Wondering
You are referring to JackPair (linked below) ?

You do realize that cryptography and personal security/privacy are frowned upon by “The Powers That Be”?

Security appliances and equipment have a ton of red tape on them, including both import and export controls. Possessing security equipment can mean nasty sentences in certain places like the Middle East, Pakistan, India and so forth.

Elites of society cling to power, and these days we live in a world of digital communication, which is a double-edged sword. These elites would not want to yield power, and so, by means of force and resources, they attempt to deny its use to the commoners.

In the digital world, attacks are easier than defense (quoting @Bruce Schneier), so the defenders have to defend all corners while the attackers attack wherever and whenever they want, which means you have to spread your defenses equally. Unlike physical battlefields where you have weather, terrain, morale of troops and so on, digital resources are abundant and you can simply DDoS someone whenever and wherever they are, as an example.

Link: https://www.kickstarter.com/projects/620001568/jackpair-safeguard-your-phone-conversation/updates

Thoth October 26, 2016 9:38 AM

@Markus Ottela
Basically your integration with smart card would need:
– KEK wrapping of keys
– Storing some hashes and comparing

For initializing the smart card state, users would first “register” a setup by supplying a hash of their password, and the smart card would randomly generate a 256-bit key. That 256-bit smart card key would encrypt the password hash, thus forming the master key, via the card’s AES-256-CBC-NOPAD ciphering operation.

The SHA-256 hash of the master key would be stored in a secure PIN slot that is actively wiped when tamper is detected by the smart card, like all PIN and Key type objects, which are tamper protected. The hash of the master key is also used as the authentication PIN for the smart card. The IV used will be 16 zeroes. The master key is never stored permanently but always derived anew in every logged-in user session. The master key is stored in a temporary key slot, a secured portion of the RAM that very quickly loses keys and sensitive temporary objects when violations are detected. The master key is used for key wrapping and key sealing (a fanciful term for the HMAC integrity operation).

After the registration stage, a flag would be set in the hardware state preventing reverting to the registration state unless the user authenticates and sets the flag to a self-destruct/wipe state, which essentially securely destroys the internal CSPs via mechanisms provided by the smart card (including random byte overwriting and zeroizing on a hardware level).

Other finer details, like enabling or disabling self-destruct if too many failed attempts are detected and setting the hardware-protected retry counters, are done at the registration stage too. Once all this information is set, it is permanent until the card is wiped and reset again.

Op_HW_WRAP_AND_SEAL_SENSITIVE :: Key wrapping is done by the following procedure:

  1. Keymat or any sensitive material of a size smaller than or equal to 127 bytes is wrapped.
  2. 16 bytes of random IV and a 32-byte random AES-256 DEK are generated.
  3. The sensitive material is padded with zeroes until it reaches 127 bytes, if it is shorter than 127 bytes.
  4. 1 byte indicating the length of the sensitive material is placed in front of the 127 bytes.
  5. AES-CBC-NOPAD is applied to the padded material, including the 1-byte length indicator (128 bytes in total), with the random IV.
  6. The random IV is hashed and concatenated to derive another 16-byte IV, which the Master Key uses to encrypt the DEK.
  7. The original random IV, the encrypted DEK and the encrypted sensitive object are sealed with HMAC-SHA-256 under the same Master Key; the result, 200 bytes in all, is exported.
  8. When unwrapping, the 200 bytes of encrypted and sealed sensitive material are imported, authenticated and unwrapped, releasing the sensitive material that was wrapped and sealed inside.
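A host-side sketch of that wrap-and-seal framing may help. This is purely illustrative: the AES-256-CBC step is replaced by a keyed SHA-256 keystream so the example runs with only the Python standard library (the stand-in is NOT a real cipher), the IV derivation uses truncation as my own assumption, and the exact on-card field layout and total size may differ from the sketch.

```python
import hashlib, hmac, os

def _toy_keystream_xor(key, iv, data):
    """Stand-in for AES-256-CBC: XOR with a SHA-256-derived keystream.
    NOT a real cipher; only here so the framing round-trips."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def wrap(master_key, sensitive):
    assert len(sensitive) <= 127                           # step 1: size limit
    iv, dek = os.urandom(16), os.urandom(32)               # step 2: IV + DEK
    padded = sensitive + b"\x00" * (127 - len(sensitive))  # step 3: zero-pad
    block = bytes([len(sensitive)]) + padded               # step 4: length byte
    ct = _toy_keystream_xor(dek, iv, block)                # step 5: 128-byte ct
    iv2 = hashlib.sha256(iv).digest()[:16]                 # step 6: derived IV
    enc_dek = _toy_keystream_xor(master_key, iv2, dek)     # master key wraps DEK
    body = iv + enc_dek + ct
    seal = hmac.new(master_key, body, hashlib.sha256).digest()  # step 7: seal
    return body + seal

def unwrap(master_key, blob):                              # step 8
    body, seal = blob[:-32], blob[-32:]
    expect = hmac.new(master_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(seal, expect):
        raise ValueError("seal mismatch")
    iv, enc_dek, ct = body[:16], body[16:48], body[48:]
    iv2 = hashlib.sha256(iv).digest()[:16]
    dek = _toy_keystream_xor(master_key, iv2, enc_dek)
    block = _toy_keystream_xor(dek, iv, ct)
    return block[1:1 + block[0]]                           # strip length + pad

mk = os.urandom(32)
assert unwrap(mk, wrap(mk, b"secret keymat")) == b"secret keymat"
```

Note the sketch’s blob is 16 + 32 + 128 + 32 = 208 bytes; the 200-byte figure above presumably packs the fields differently on-card.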

Op_HW_STORE_COMPARISON_OBJECT :: You store the attestation result of a plaintext object you are targeting. The comparison algorithm methods are RSA2048_SHA1_PKCS1, RSA2048_SHA256_PKCS1, SHA1, SHA224, SHA256, HMAC-SHA1, HMAC-SHA224 and HMAC-SHA256. Since you are unlikely to use an RSA-based comparison method, that leaves the SHA hashing methods and the HMAC verification methods. HMAC methods use the master key to perform a “signing” operation over the plaintext object and store only the MAC result, while the SHA hash methods hash and then store the hash result. A 2-byte OID is assigned as the object handle and returned. Typically you could store as many objects as a 2-byte short type allows, but the card will not have that much space. It would be prudent to store only as much as necessary.

Op_HW_COMPARE_COMPARISON_OBJECT :: You simply keep feeding the plaintext until the operation is finalized with the OID that was given at storage time. The verification is done on-card and a boolean result returned.

Op_HW_MANAGE_COMPARISON_OBJECT :: The Remove and List operations for objects stored for comparison purposes. The OID is required for Remove operations. List outputs how many objects there are in total, along with all the OIDs and the accompanying MAC/hash results in plaintext.

If you need any more, you can add to or remove from those above.

ab praeceptis October 26, 2016 12:46 PM

Markus Ottela

That’s a difficult one. Of course, our minds like to play with what we know and to eternally enhance it, which in security leads to projects like yours. They are valuable from a research perspective (and a human one); “security out there”, however, might be quite different in terms of threat scenarios and needs.

I think it’s high time to return to planet earth and to understand that the enemy who really and daily brings us into danger is not the nsa; it’s us, the developers who design and code carelessly, and, of course, the greasy profit-driven management bots in companies who prefer to sell blunt crap in order to make 3 cents more profit.

Can mere mortals drive up costs exponentially for attackers? Certainly; but in 97 or 99 out of 100 cases they won’t. They’d rather click on a funny icon (opening yet another hole in their system).

And then there are the politicians, and society and its handlers. Some of us say “We can’t educate the user; we must rather create more secure systems”, and they are right. And not; because, just look: if the politicians (calling themselves “we” and “society”) really want something driven into John’s and Jane’s heads, then they can do it.

That’s one of the points that drives me nuts. Lies. Lots and lots of lies. Politicians, and while we’re at it, whole legions of marketing people, can “educate” people; in fact they can even bend their minds and make them do crazy things. If, however, say, Bruce Schneier wants to tell them that there is a big fat security monster right behind them, he usually fails. They just don’t care.

Why don’t they care? Because no “authority” told them to. That’s the ugly point. Users are told to care when it makes profits for corporations like Symantec. They are, however, not told to care about whether those corporations sell crap and snake oil.

The way it is, the nsa is happy, the corps are happy, the users are happy (they installed CrapSec for 29.99! and got cute new icons) and the politicians are happy; for one thing they have something to blabber threateningly about, and moreover the whole INsecure situation is a useful (and plentifully used!) stage to justify more spending on cyber as well as on classical military.

Which, just btw, also explains why costs don’t grow, let alone exponentially, for undiscovered exploits.

Sancho_P October 26, 2016 6:09 PM

@Markus Ottela

”This assumes all computers are RasPis.”

No. It assumes Tx and Rx are part of the TCB, not general consumer COTS machines. RasPi & Co are good because of their diversity.
But my point is:
Be careful with USB in the TCB (which is close to saying USB can’t be part of the TCB).

”Let the user decide.” Good, love it.

”This is instant messaging we’re talking about, not delivering nuclear launch codes.”
So you are devoting your time to a complex “high assurance” messaging system, limited by design to chatting with your mother? 😉

”As I said, I haven’t encountered any transmission errors with data diodes so far.”
So what should I take from that? That you are good at soldering? Never had a damaged phototransistor, a broken cable, a loose connector? SW or timing problems in a complex, probably already compromised system?
– Or that you have never searched for transmission problems at runtime?
You know the saying: ”The absence of proof is not proof of absence.”

”The Reed-Solomon *was added* because it’s good protocol design.”

That’s what I wanted to read, thanks!

”Even if you had to re-send one message in a thousand, it’s like having to restart a car on a winter’s morning.”

Now this is too close to @Figureitout; that’s not you.
Nope, it’s not; on the contrary.
Hopefully it’s not a problem to resend, or to restart your car once (if it does start at all).
The problem is:
You should have known beforehand that your communication was on the brink of collapse, that something bad was going on in your system; likewise, your car computer’s error log had already been full of “low pressure from gas pump” or “low battery power” warnings for weeks (nice that your repair shop can read OBD logs, but usually you can’t).

I think that’s what @Clive Robinson hinted at when saying: ”The big prob with most EC is it does not fail gracefully, that is it hangs onto the edge of a precipice by it’s finger nail before droping out of sight.”

No, we don’t need (and here can’t have) any machine feedback [1].

That said, there is a “best praxis” solution, it’s trivial and simple, called “live zero”:

  • On the Tx machine(s), after preparing the bitstream, flip some single bits depending on packet length – say in 10 bytes, or about 1% of the packet, whatever (at random or in fixed, known positions; what would you suggest?).
  • On the receiving side, add a counter plus a display for corrected bits (on the status / settings screen).
    Rx has to “know” how many bits would normally be flipped (as a preset, or transmitted by Tx, but that’s complicated; check it’s above a minimum, …).
    The fixed error bit positions could be checked, but ….

If the correction rate is below the “known” rate (threshold: 1 bit), display a critical warning: “Receiving data error rate below minimum, system probably compromised”.
If it is above it (threshold: about +2%) but below 10%, give a warning: “Receiving data error rate extremely high, check connection / data diode”.
Above 10%: “Receiving data error rate critically high, check connection / data diode”.
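A rough Python sketch of what this “live zero” scheme could look like – the function names, the flip count, and the exact thresholds are all illustrative assumptions, not TFC code:

```python
import random

# Hypothetical sketch of the "live zero" idea: TxM deliberately flips a
# known number of bits so RxM can verify that error correction is really
# running. Thresholds follow the three warning levels proposed above.

EXPECTED_FLIPS = 10  # e.g. ~1% of a 1000-byte packet


def inject_known_errors(packet: bytearray, n_flips: int, seed: int) -> bytearray:
    """Flip n_flips bits at pseudo-random positions derived from a shared seed.

    Applying the same function twice with the same seed restores the packet,
    since each position is XOR-toggled an even number of times in total.
    """
    rng = random.Random(seed)
    out = bytearray(packet)
    for _ in range(n_flips):
        pos = rng.randrange(len(out) * 8)
        out[pos // 8] ^= 1 << (pos % 8)
    return out


def classify_error_rate(corrected_bits: int, expected_bits: int) -> str:
    """Map the observed corrected-bit count onto the three warnings above."""
    if corrected_bits < expected_bits:
        # Fewer corrections than deliberately injected: the error-correction
        # path is not doing its job, or something tampered with the stream.
        return "CRITICAL: error rate below minimum, system probably compromised"
    excess = corrected_bits - expected_bits
    if excess > expected_bits * 10:   # roughly >10% of the packet
        return "CRITICAL: error rate critically high, check data diode"
    if excess > expected_bits * 2:    # roughly >+2% of the packet
        return "WARNING: error rate extremely high, check connection"
    return "OK"
```

The “below minimum” branch is the interesting one: a genuinely error-free channel would be indistinguishable from a bypassed decoder, which is exactly the blind spot the injected errors are meant to remove.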

The data diodes are your concept’s firewalls.
Hope you are “willing” not only to listen 😉

[1]
What would be the purpose of any feedback to the sender?
Imagine a solar powered transmitter periodically sending data (RF) home. Why should I send back that I have trouble understanding what it says? What would it do with that information? Apologize for snowfall?
There’s a point in time where it needs a human action.

Sancho_P October 26, 2016 6:12 PM

@Markus Ottela

At the end of your discussion with Thoth (25, 11:12 PM)
you wrote ”It is useful but if RxM is compromised, …”
and finally to store all persistent TFC files (???) on something like …

Later you wrote ”The assurance depends on whether it can be reprogrammed for pin stealing by RxM. Bidirectional I/O is required for key handling. I’d imagine the isolation of input is handled better by the proprietary security products that are FIPS certified.”

I’m a bit slow in thinking so I didn’t fully understand, but I’m interested in the problem; I’ll probably come back to that topic.

Sancho_P October 26, 2016 6:16 PM

@Clive Robinson re high pot test

I wasn’t talking about testing live objects – you know, a cow can be killed by as little as 10 mA (they are known to be more sensitive than humans), while some people have survived several amperes through their limbs (seriously injured, of course).

My current limiter is the incandescent bulb, with relatively low cold resistance but acting as short-circuit protection just in case. The voltage should be held for 1 minute – enough to heat up any “leaks” in the IC package 😉

Markus Ottela October 26, 2016 9:51 PM

@ab praeceptis

“I think it’s high time to return to planet earth and to understand that the enemy who really and everyday bring us into danger is not nsa – it’s us, it’s the developers who design and code carelessly and it’s, of course, the greasy profit driven management bots in companies who prefer to sell blunt crap in order to make 3 cents more profit.”

HSAs like the NSA represent a mortal danger to whoever is labelled a terrorist by the political winds. I don’t sympathize with ISIS, but there was a time when MLK was deemed the highest threat to national security. That’s what’s at stake here. Like Snowden said, “We are building the greatest weapon for oppression in the history of man, yet its directors exempt themselves from accountability.”

I don’t want to live in a society like that.

“Can mere mortals drive up costs exponentially for attackers? Certainly – but in 97 or 99 out of 100 cases they won’t.”

True. This isn’t about forcing everyone to change their way of life. It’s about giving those who care about their human rights a fighting chance.

“Some of us say “We can’t educate the user. We rather must create more secure systems”; and they are right.”

We need both. Unless we have tech for strong end-to-end encryption, there won’t be conversations with a realistic expectation of privacy. Unless users are educated to the point where they check public key fingerprints, it’s the same. Users can’t replace encryption with code words – and tech can’t solve the issue of MITM without the user verifying key authenticity.

“Users are told to care, when it makes profits for corporations like symantec”

What sickens me more is having proprietary security vendors lobby their products at crypto parties: VPNs for those who need Tor. Really? Not understanding the threat models of your audience is, in some contexts, risking their lives.

@Sancho_P

“So you are devoting your time to a complex “high assurance” messaging system, limited by design to chat with your mother? ;-)”

So you’re saying high assurance means we accept a high risk of key exfiltration via an ACK channel in order to guarantee the following theoretical inconvenience never happens: (1) Alice sends “I got a transmission error with the last message” and (2) Bob presses up arrow and enter to retransmit it.

“Or that you have never searched for transmission problems in runtime?”

I already mentioned I ran a testing period searching for transmission errors when upgrading the baudrate to 19200. That testing wasn’t anything formal, so it only lasted a few hours.

“You know the saying ‘The absence of proof is not proof of absence.’”

Yet the world of information security lives by computational hardness assumptions. Not even companies like OWL, who build certified data diodes, have dared to put such features into production. If you can do this, I’m betting you have a ludicrous paycheck waiting for you there.

“That said, there is a “best praxis” solution, it’s trivial and simple, called “live zero”–”

So warn the user about TFC having to correct errors? Sure. It’ll take me a while to figure out where in the Reed-Solomon class I can see the number of errors, but that’ll be done. It won’t fix catastrophic transmission errors, like a loose connector – but then again, nothing will.

“Receiving data error rate below minimum, system probably compromised”

I think a simple “Data diode is giving errors. Please check batteries and connections” is enough. Errors do not occur under normal use, so even a single byte error should raise the warning.

“What would be the purpose of any feedback to the sender?”

I understood you to mean delivery should be guaranteed, and that the sender should be able to retransmit until the message goes through. This requires a dangerous ACK channel that confirms no errors occurred. It can not be done, thus other mitigations are needed.

“I’m a bit slow in thinking so I didn’t understand fully but I’m interested in the problem”

The security design doesn’t differentiate between exploit capabilities. I’m assuming the adversary has an almost unlimited budget for exploit design. The capabilities of the malware are

  1. Any path for data transmission can carry any type of malware and whatever information it has collected along the way
  2. The malware has access to RAM and can run arbitrary code and commands as root
  3. The malware can operate hardware freely to turn it into covert transmitter

The axioms security is built on are

  1. Malware can’t cross data diode in opposite direction
  2. Malware can’t decrypt symmetric encryption without having acquired access to keys

And now as per discussion with @Thoth:

  3. Malware can’t exfiltrate data from the smart card without decapping it

I admit the last one is a bit of a stretch but if any additional security is desired, it’s the best bet.

As the networked computer (NH) is connected to the receiver computer (RxM) with a data diode, malware can propagate from the network to RxM. Malware on RxM can log every displayed message, display arbitrary messages, access keys in memory etc. But it can’t send anything back, as per security axiom 1. So at this point we’ve already achieved security against remote exfiltration.

The next challenge is to provide as much protection from a physical attacker as possible. Since FDE on RxM can be defeated with a clever evil maid attack, the “evil servant” malware on RxM can hand the master key it collected from memory to the physical attacker. Note that the physical attack also succeeds against TxM, but unlike RxM, that computer can not be remotely compromised. Compromising RxM’s FDE takes maybe two trips; TxM is a trickier one and takes maybe three visits.

The “evil servant” on RxM can hand back all the logs it has covertly recorded. But it can also hand over keys that can be used to impersonate the user. If security axiom 3 holds true, this can be prevented by storing a non-exfiltratable master key on the smart card, which then also runs all cryptographic decryption operations on behalf of TxM/RxM. The smart card can possibly be obtained by the adversary, but as long as guessing the PIN wrong thrice might destroy the master key, the adversary won’t attempt gaining access to the encrypted keys stored on TxM/RxM (the smart card has very little memory).

This is where the PIN comes in. Where is it typed? If you use the RxM keyboard, the “evil servant” can record it. If you use a USB smart card interface with a PIN pad, the security design is hard to audit and the evil servant might be able to propagate. If you use an Arduino with built-in tools, the entire MCU might be compromised. Whatever device relays ciphertexts and plaintexts from RxM to the smart card must be assumed to be compromised by malware. Therefore I’d rather have a secure, alternate input path to the smart card’s crypto processor, and a smart card with an integrated PIN pad would be ideal. An easy analogue is encrypted external drives: unlike with any TrueCrypt volume I mount on RxM, the password I type into the drive itself isn’t visible to the evil servant.

Clive Robinson October 26, 2016 10:01 PM

@ Sancho_P,

I wasn’t talking about testing live objects…

I’m sufficiently knowledgeable “in the art” to infer that you were not.

However, in the UK there are sufficient “accidents” to indicate that there are a number of people who will “do things for fun”…

One classic of recent times was a manufacturer of battery powered electric razors. An advert showed the razor being semi-immersed in a sink full of water and a person starting to shave with it immediately. Unfortunately it became clear that some people using other shaver products that ran off the mains started immersing them, because the advert indicated it was OK… As far as I’m aware nobody got hurt, but it did trip circuit breakers etc. The company pulled the advert shortly after being made aware of the problem.

The problem with this blog, as with television adverts, is that “you don’t know who’s watching” or “what message they will choose to take away” from it.

It’s one of the reasons I’m a little more cautious about some of the things I say technically since “stuxnet”, as I don’t want second-guessing lawyers breathing down my neck. Especially with the UK legislation on terrorism – it’s worse than section 1201(c) of the DMCA for chilling research publication etc.

ab praeceptis October 26, 2016 11:18 PM

Markus Ottela

“HSAs like NSA etc. represents mortal danger to whoever is labelled a terrorist…”

You are frighteningly right but we are not in contradiction. Because what I described is the basis for what you describe.

If there were some safe full stacks – a solid and actually practically usable OS, safe libs and languages and … – then we might also succeed at stopping nsa and accomplices.

I remember wondering when linux became stronger and a usable alternative, and neither the state, nor the deep state, nor the corporate world went against it. Somewhat later, when first tens of millions and then the first billions were thrown at linux, mysql, etc., the picture became clear. They wanted the open source movement (obviously for their own purposes). It was just perfect for them; they could control, manipulate, and poison it, and at the same time they could play the nice guys.

The final bang for me came with heartbleed. Not that I needed another wake up call, but it came anyway and there was little room for interpretation. Not only was the thousand-eyes principle a lie, or at best a pipe dream – worse, the second pair of eyes treated the jewels of security like any other worthless project; it wasn’t worth his attention, so he simply clicked it O.K.

The us agencies as well as the european agencies have spent hundreds of millions upon hundreds of millions on academic research, yet we – who pay for all that – rarely see the fruits. In cases where something actually useful is produced, typically a company is created and the results, paid for by us, are monetized – and not for small money at that. And what little is publicly available must usually be found in cumbersome searches, and is rarely easily installable and usable.

In short: someone (and someone very, very mighty) does not want us to have safe systems; not even a reasonable chance of having them.

Another hint: whenever something very fast and reliable comes up, quite usually one will find banksters behind it. Just have a look at languages supporting the actor model. In former times you would have found big industry names (ibm, etc.); nowadays you’ll find fintech corps connected.

Finally, let’s not dream and hallucinate. We know, we bloody know, that the risks are enormous when creating important sw in C/C++ or Java. We know that sw reliability would increase immensely if we started to use better languages. But we don’t.

Recently we had a discussion, based on a blog post of our host here, re. “educating the users”. Lots of discussion – yet we failed to understand that the very same problem concerns ourselves. Just try to preach proper software development …
The vast majority of developers are no different from John and Jane, who just do what everyone does (buy a cheap modem/router box and happily click whatever comes up on the screen).

I don’t see any tangible chance for us to achieve a more secure society (in terms of IT sec). The optimistic maximum is that far less than 1 per mille of the population can achieve at least a minimal level.
Just look at your own project. There are optocouplers that easily do 100 MHz, but you use plain diodes and sensors. Why? Because you don’t trust them chips.

In other words: unless we, the people, take the power back (if we ever had it), unless managers in corps must be seriously afraid of f*cking us over, nothing will change. You will continue to use old stuff because you can reasonably trust it, and I will continue to build safe sw. Drops in the ocean.

gordo October 27, 2016 12:36 AM

Variation on ‘The Democratization of Cyberattack’ theme:

[Free]World’s Largest Net:Mirai Botnet, Client, Echo Loader, CNC Source Code Release

https://krebsonsecurity.com/wp-content/uploads/2016/10/mirai-hf-940×377.png


Mirai (given name)
Mirai (未来) is a Japanese given name, meaning “the future”.
Also a Shona name (Zimbabwe) meaning “wait”.

https://en.wikipedia.org/wiki/Mirai_(given_name)


Mirai (acronyms)
– Malicious internet routing attack infrastructure
– Mainstream internet routing attack infrastructure
– Militarized internet routing attack infrastructure

– All of the above [?]

IoT malware exploits DVRs, home cameras via default passwords
Boing Boing / CORY DOCTOROW / MON SEP 12, 2016

Linux/Mirai is an ELF trojan targeting IoT devices, which Malware Must Die describes as the most successful ELF trojan. … .

[quotes Malware Must Die, from 8/31/2016:]

The moral of the story of this threat ; this is the example, on how a group of bad hackers can improve themselves.. if we let them still be freely doing their vandalism act out there. They keep on improving their threat and they have no care to aim anything that can be infected to expand their “botnets”.

https://boingboing.net/2016/09/12/iot-malware-exploits-dvrs-hom.html


Attackers are now abusing exposed LDAP servers to amplify DDoS attacks
LDAP adds to the existing arsenal of DDoS reflection and amplification techniques that can generate massive attacks
By Lucian Constantin | IDG News Service | Oct 26, 2016

Corero’s Larson believes that increasing numbers of insecure IoT devices combined with new amplification vectors could lead to multiterabit attacks over the next year and even attacks that reach 10Tbps in the future.

http://www.csoonline.com/article/3135686/security/attackers-are-now-abusing-exposed-ldap-servers-to-amplify-ddos-attacks.html


Massive cyberattack poses policy dilemma, Stanford scholar says
Stanford cybersecurity expert Herb Lin says the Oct. 21 cyberattack that snarled traffic on major websites reveals weaknesses in the Internet of Things that need to be addressed. But stricter security requirements could slow innovation, cost more and be difficult to enforce.
CLIFTON B. PARKER | Stanford News | OCTOBER 24, 2016

[Lin:] The primary policy recommendation is that we need policy that encourages – or mandates, depending on how strong you want to be about it – at least minimal security measures for devices that connect to the internet, even Internet of Things devices. How you actually promote, encourage or incentivize that without a legal mandate is problematic, however, because nobody quite knows what the market will accept. … .

http://news.stanford.edu/2016/10/24/massive-cyberattack-poses-policy-dilemma-stanford-scholar-says/

See also:

FORD HAD A BETTER IDEA IN 1956, BUT IT FOUND THAT SAFETY DIDN’T SELL
June 26, 1996 @ 12:01 am
Diana T. Kurylko, Automotive News

http://www.autonews.com/article/19960626/ANA/606260836/ford-had-a-better-idea-in-1956-but-it-found-that-safety-didnt-sell


Regarding innovation:

https://en.wikipedia.org/wiki/Innovation#Future_of_innovation

and …

Innovation: A Conceptual History of an Anonymous Concept
Benoît Godin
Project on the Intellectual History of Innovation
Working Paper No. 21
2015

Ironically, these developments led to the transformation of the concept from a means to an end to an end in itself. Some words, Lewis suggests again, have nothing but a halo, a “mystique by which a whole society lives” (Lewis, 1960: 282). The word seeps into almost every sentence. Over the twentieth century, innovation has become quite a valuable buzzword, a magic word. Innovation is the panacea to every socioeconomic problem. One need not inquire into the society’s problems. Innovation is the a priori solution (p. 17).

http://www.csiic.ca/PDF/WorkingPaper21.pdf


Innovation as we know it is not what’s called for; repairing the infrastructures is – whether of liberty, i.e., trust; utility, i.e., reliability; workmanship, i.e., craft; and so on. The current transformation is effectively global. The last time a basic communication technology radically changed the West, a reformation followed and led to an enlightenment. May we be so lucky.

Figureitout October 27, 2016 1:24 AM

Thoth RE: datashur
–First thought was “Argh! Why buttons on a USB stick?! Why do people keep doing this?!” If you have to push them while it’s plugged in, it’ll break your USB port. I hate ports that get loose or wiggly; need them tight and crisp, just my OCD. The best is VGA or DB9s, where you can screw them in nice and tight for a secured connection. The worst is the latest mini/micro USB that can wiggle and lose connection, so flimsy… Then it has a battery in it and you push the pin in while it’s not plugged in. Ok, better, but now you have code executing while the USB stick isn’t plugged in. That’s a big ‘nope’ for me. When I unplug the USB stick, I want it to die, with no more code execution as soon as the last bit of charge dissipates from any power cap on board. (Had a crazy, well, small hardware bug I guess: even surface mount caps can hold enough potential for a low power chip to operate longer than you’d think, a lot longer… so you kill power and expect the chip to be off almost immediately? Not so in some cases… huge security risk if your vulnerable RAM remains powered and ripe for picking. Good to clear it; best to kill the power and lose its contents for sure.)

And just b/c it’s a PIC chip doesn’t mean throw it out. Your smart card reader had a very common HCS08 8-bit MCU (they’re a bit old now, but still all over the place), with a plastic case that didn’t destroy the circuit board when you took it off. And no conformal coating, at least, to make it a slight pain to probe (usually you just pierce through it, but you have to press a little). Can you actually easily exploit an MCU if I set lock bits and encrypt data in an internal eeprom? Without researching existing exploits, if there are any easy ones documented. Decap the epoxy, then what? You need a good microscope that can take pictures, and to know what components look like in silicon.

Double-edged sword, designing tamper proofing for physical access (usually a death sentence in the security field, eh?). There could be a nice hidden backdoor underneath all the tamper shielding that I can’t get at before it destroys itself, and that backdoor remains lurking.

Markus Ottela
–It’s sounding like there are few to no errors being experienced. I’ll try some tests when I build Sancho’s USB data diode. Something to try might be an electric drill held close to the wire, best when you can see the sparks in it. It generates a lot of crap RF. Another common one is when smartphones do the initial negotiation for a 3G/4G data connection or wifi connection – a noticeable burst of RF from that. If it can hold up, basically unshielded, to most annoying RF noise sources, that’s good.

“I think what was meant here was error would be detected before large packet is received”
–I think he doesn’t know what he wants and is just kinda throwing things out there? He wants warning of errors beforehand, but even things like sensors telling you to check your brakes and such can be attacked and disabled or reset…

“If you use Arduino with built in tools, the entire MCU might be compromised.”
–And a raspi (which actually needs a binary blob we can’t see to run) or a regular laptop PC with at least 5-6 separate MCUs (chips within chips is normal now) running python won’t be? MCUs win here in terms of attack space for usability, sorry. And if MCUs are compromised, then all PCs, with their huge CPUs and microcode and all the OSs and higher level languages running on top of them, are as well. No one here had an attack ready for my little nRF_Detekt, and that’s attaching a radio to a SPI port; take that away and it’s 99% to 100% esoteric attacks that aren’t really used, or physical attacks that generally won’t happen. Sounds like every smart card reader is going to have an MCU (probably even 32-bit, with a ton of unused flash) routing the comms to the smart cards. Something has to read the PIN pad; it doesn’t just magically happen. The cards themselves are usually MCUs, even…

Clive Robinson
–True, but is there a zener diode in the optocoupler? That would be a bit of a design flaw, eh? Why put a zener diode in an optocoupler? I’ve burned enough protection diodes when I wasn’t meaning to… I suppose if you want to blow sh*t up like a crazy Spaniard (eh Sancho? :p), sure, why not, lol.

Thoth October 27, 2016 1:51 AM

@Markus Ottela

“The smart card can possibly be obtained by adversary but as long as guessing the PIN wrong thrice might destroy the master key, adversary won’t attempt gaining access to encrypted keys stored on TxM/RxM (smart card has very little memory).”

Remember I previously said smart cards are mildly tamper resistant? That is why I designed it so that the user’s password hash (hashing should be done on the host computer to take load off the smart card) is encrypted by a hardware AES key. This method solves two problems (and is also widely deployed in industry to strengthen the security of hardware).

The first thing it prevents is decapping. Imagine an attacker decaps the chip and finds an AES key that needs to be combined with a user password hash: he/she has already spent one week to one month analyzing the targeted chip, and what comes out is a close-to-unusable AES key without the password hash.
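A minimal sketch of this two-factor idea, with HMAC-SHA256 standing in for the card’s AES operation and every name being a hypothetical illustration rather than Thoth’s actual design:

```python
import hashlib
import hmac

# Illustrative two-factor key scheme: the working key only exists when the
# card's internal hardware key is combined with the user's password hash.
# Decapping the card yields CARD_HW_KEY alone, which is useless without
# the password; keylogging the password is useless without the card.

CARD_HW_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # lives inside the card


def password_hash(password: str) -> bytes:
    # Hashing is done on the host, as suggested, to take load off the card.
    return hashlib.sha256(password.encode()).digest()


def derive_working_key(hw_key: bytes, pw_hash: bytes) -> bytes:
    # Neither input alone determines the output; HMAC here models the
    # card encrypting/mixing the password hash with its hardware key.
    return hmac.new(hw_key, pw_hash, hashlib.sha256).digest()
```

An attacker with the decapped hardware key still faces a brute-force search over passwords, which is the “close to unusable AES key” property described above.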

The second thing it attempts to prevent, to a certain extent, is backdoors. Imagine I have 20 AES keys and give them very weird names; now a backdoor in the card has to work through them all. Also, including a backdoor inside such a confined space is going to be very challenging without seriously impacting the card’s performance, and it will with very high probability be caught even by casual home hobbyists with their acid baths and cheap USB microscopes, since the cards use legacy 200+ nm processes and may be easier to view under affordable USB microscopes.

My current design is three consecutive wrong retries and there goes the card: it resets and wipes itself. There is no “admin” or “backup”, for the sake of someone who actually wants to “self-destruct” the damn thing in a pinch without any chance of recovery.
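The retry logic is simple enough to sketch in a few lines. This is a purely illustrative software model (a real card enforces this in hardware); the class and method names are made up:

```python
import hashlib
import hmac
import secrets

# Toy model of the three-strikes PIN policy: three consecutive wrong PINs
# wipe the master key irreversibly, and a correct PIN resets the counter.

class CardSim:
    MAX_TRIES = 3

    def __init__(self, pin: str):
        self._pin_digest = hashlib.sha256(pin.encode()).digest()
        self._master_key = secrets.token_bytes(32)
        self._tries_left = self.MAX_TRIES

    def unlock(self, pin: str):
        """Return the master key on a correct PIN, None on a wrong one."""
        if self._master_key is None:
            raise RuntimeError("card wiped")
        attempt = hashlib.sha256(pin.encode()).digest()
        if hmac.compare_digest(attempt, self._pin_digest):
            self._tries_left = self.MAX_TRIES   # counter resets on success
            return self._master_key
        self._tries_left -= 1
        if self._tries_left == 0:
            self._master_key = None             # irreversible self-destruct
        return None
```

Note there is deliberately no admin or recovery path, matching the “self-destruct in a pinch” requirement.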

“This is where the PIN comes in. Where is it typed? If you use RxM keyboard, the “evil servant” can record it. If you use USB smart card interface with PIN pad, the security design is hard to audit, the evil servant might be able to propagate. If you use Arduino with built in tools, the entire MCU might be compromised. Whatever device is relaying ciphertexts and plaintexts from RxM to smart card, must be assumed to be compromised by malware. So therefore I’d rather have a secure, alternate input path to smart card’s crypto processor, and a smart card with integrated PIN pad would be ideal. An easy analogue would be the encrypted external drives: Unlike any TrueCrypt volume I mount on RxM, the password I type into the drive itself isn’t visible to the evil servant.”

You would need to verify the PIN pad’s design, and that will take more time than using commodity Arduino hardware. The reason is that Arduinos are general purpose hobbyist stuff and would be the last on any spy agency’s list to backdoor. Not to forget that adding backdoors breaks a chip’s design or adds overhead. The best way is to choose a chip just fast and light enough, with as many features excluded and as few resources as the job needs, so that a backdoor becomes even more apparent due to the high chance of performance degradation and weirdness.

There are the Ledger Nano S and Blue, which are essentially “smart card based hardware with secure entry and display”, but I figured you would call their design into question despite the entire Ledger design being almost open hardware, including the firmware, OS and everything save for the ST31 smart card’s internal proprietary blobs (linked below).

Knowing that you would not want to go near these devices – which in my opinion are more secure than most things out there (they are literally pocket HSMs) – you being more conservative, the best choices are things like the $9 CHIP, Raspberry Pi, BananaPi, Arduino and so on, which are easy to build around and whose designs are mostly open. The exception is the Raspberry Pi, which surprisingly keeps its Broadcom firmware blobs proprietary, while the AllWinner (Chinese made) boards have open firmware that can be found on the Chinese manufacturer’s website.

It seems the Chinese are surprisingly more open in this respect.

Otherwise … another really bad choice is fully proprietary PINpads 😀 .

Links:
https://github.com/LedgerHQ/
https://www.ledgerwallet.com/products/12-ledger-nano-s
https://www.ledgerwallet.com/products/9-ledger-blue
http://www.acs.com.hk/en/products/34/acr83-pineasy-smart-card-reader/

Thoth October 27, 2016 2:19 AM

@Figureitout

re: OCD
Same here. I can’t stand ports that go loose.

re: VGA or DB9
Those are deprecated but still used. Have you seen the Safenet HSM? You will love the looks of the secure PINpad card reader. It uses screw-in ports. Tight stuff. Linked below. The nasty thing is it’s heavy and bulky and looks so squarish and uncool. Designers need to make these Safenet Luna HSM PINpads more stylish as well as functional, tight and secure. Not to forget, the idea behind the Safenet Luna HSM is to assume a hostile environment; it uses a lot of military comms equipment and high-durability methods to make the HSM extremely durable and resilient against environmental and human factors, partly because Safenet used to work (and may still work) with the NSA to provide their security stuff. One famous relationship resulted in the famous Clipper Chip that Safenet built for the NSA.

I do prefer Thales’ build of their secure PINpad, as Thales (well, maybe I am biased) makes it more cool and stylish looking but also functional: a nice looking PINpad for smart cards with a USB connection cable. No more loose USB ports – but these are under export controls, by the way. Nasty paperwork.

I almost forgot one thing: anything related to security is locked down for export in the UK. As an example, if we asked @Clive Robinson to send us a PIC chip with his custom coded firmware for DES encryption, he would need to seek UK Customs’ agreement to release even something as insecure as a PIC chip with a lousy DES cipher.

re: PIC chips
Note that those “secure” USB sticks are targeted at important deployments, from NATO/UK/US Govt usage to diplomatic, healthcare, banking, finance … all the sensitive organs of society. If they were just selling it as personal security, I wouldn’t be all too critical – but maybe I would.

How would you protect a NATO or government official with something not even tamper resistant ?

Linked below is NATO’s IA portal classifying the dataShur in its deserved UNCLASSIFIED category, which means it can only transport unimportant stuff that wouldn’t be vital to NATO/US/UK decision making plans. To reach at least a RESTRICTED category in NATO, a smart card chip needs to be used, and even then more stringent handling is required.

If a product is positioned for handling important documents, including security classified documents, and it is labelled tamper proof when it is only tamper evident, that is false marketing. Companies like these can easily get away with it – and we always wondered why security is so lax in governments and around the world. The reason, as @ab praeceptis mentioned a few posts before, is greedy and dishonest entrepreneurship.

If someone markets a tamper resistant device, they had better make sure it lives up to it; otherwise, when someone really uses one of these mislabeled devices for sensitive stuff and gets into trouble, both the user and the seller are in for serious consequences, legal and otherwise.

A better way for dataShur to promote themselves would be as a malware-resistant solution rather than a tamper proof one, since the sticks are more malware resistant and not tamper resistant by any means. This would allow users to make better purchase decisions.

Basically, do not pretend something is tamper resistant when it is only tamper evident, just for the sake of marketing and cash. Dishonest businesses are bad businesses which will cost their unknowing customers not only trust but their lives, in cases where lives matter (e.g. diplomats, journalists, military personnel, Govt officials … etc.).

Smart card readers are not meant to be secure anyway. In any threat model the reader is always considered compromised, thus any card developer must ensure card-to-host communication has a layer of logical encryption, unless the card reader is provably secure and trusted. That is the reason I asked @Markus Ottela to use an Arduino to build a secure PINpad for card reading and PIN entry: it is much more verifiable – not the highest security, but better than the proprietary stuff out there, which is a total black box.

Links:
http://cloudhsm-safenet-docs.s3.amazonaws.com/007-011136-002_lunasa_5-1_webhelp_rev-a/Content/Resources/Images/dock2_connect_ped_600x367.png
https://www.thales-esecurity.com/products-and-services/products-and-services/hardware-security-modules/general-purpose-hsms/nshield-remote-administration
https://www.thales-esecurity.com/products-and-services/products-and-services/hardware-security-modules/general-purpose-hsms/~/media/Images/Thales%20e-Security/Global/Products/Remote%20HSM%20NEW/nShieldfrontUSB1JPG.JPG?h=133&w=200
https://www.ia.nato.int/niapc/Product/iStorage-datAshur_462

Clive Robinson October 27, 2016 5:02 AM

Another acoustic side channel

This time one that works at any earth spanning distance,

https://arxiv.org/pdf/1609.09359.pdf

Put simply “don’t type while Skyping” unless you want your key strokes decoded along the way…

Acoustic side channels are so numerous that finding a new one is like “fishing for shopping trolleys in the Birmingham canals”[1]: you just dip your hook and wriggle. The real problem has always been how to exploit them remotely; in the past this was done by exploiting the likes of on-hook telephones with “infinity bugs” or RF tricks[2].

[1] Birmingham UK has claimed to have more miles of canal than Venice, more trees than Paris and one or two others such as the Rotunda… https://www.theguardian.com/commentisfree/2006/jun/23/comment.stuartjeffries

[2] The infinity or harmonica bug was reputedly invented by the Mob for use against law enforcement. The use of RF to jump the hook switch in old-style carbon-granule microphone phones is described in the first half of Peter Wright’s “Spy Catcher”, a book that intensely irritated the UK PM of the time, Margaret Thatcher, so all in all it was probably a good thing 😉

Clive Robinson October 27, 2016 10:22 AM

@ Markus Ottela,

One idea I’ve been playing in my head is an electromechanical system where the ACK signal is stored into a simple latch circuit, and the connectors reading and writing would be controlled by relay switches operated by TxM. The logic would prevent galvanic connection from NH to TxM at all times. Since there is only single bit that gets transferred, compromise of TxM should be practically impossible.

Unfortunately, that idea / dream can keep waking you at night in a cold sweat…

The reason it could is that it increases the attack surface significantly.

I will explain using a simplified device as it saves getting into too many specifics. So in effect what you have is a battery powered token which has the following IO,

1, A display (LCD).
2, A keypad (4×4 matrix).
3, Three galvanically isolated contacts,
3.a, An isolated common ground.
3.b, An isolated TX output.
3.c, An isolated RX/DSR input.

All run by a single chip MCU.

In a normal embedded design you would have an inbuilt timer used to create background / foreground ticks to do the low-level IO “bit banging”, “key debounce” and data buffering for the display and serial IO. The LCD data control lines would be fed to an interrupt, likewise the keypad lines would go to another interrupt, as would the MCU input from the isolated RX/DSR line, plus a way to put the device into shutdown to conserve the battery.

Thus you would write a mini user-space OS with “real time” features, possibly including some level of multitasking. This could be done in assembler, C, Forth, etc. For overall simplicity within the code, a stack-based approach would be the simplest to get running effectively, especially if trying to multitask.

Now let’s view it from an attacker’s point of view. The only external connectivity is via the isolated TX and RX/DSR pins, which are interrupt driven. The RX/DSR connector is the only input the attacker can control.

However, the software in the MCU is only written to read the RX/DSR “status”, not to decode serial data. There are two ways that status could be pulled into the main body of the code: by connecting it to an interrupt or by polling it. In the general run of things it would be connected to an interrupt line and an interrupt function.

Thus what advantage could an attacker gain from this status input?

Well, if the state of the status line changes, then in the ordinary run of events this would cause changes in the IO routines in the interrupt functions. In turn this would give rise to timing differences in other parts of the software, which would be a first step in getting the device to leak data of one form or another, unless considerable care has been taken to reduce or eliminate the possibility. One way is to have a time-based interrupt poll the line and copy its status to an internal buffer, or update a couple of additive counters (c1 = c1 + status, c2 = c2 + 1), which gives constant time irrespective of the status line value.
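The additive-counter poll can be sketched as follows. This is a minimal, hypothetical Python simulation of the timer-interrupt idea (real firmware would be C or assembler on the MCU); the point is that each tick does identical work whether the status line reads 0 or 1:

```python
# Hypothetical sketch simulating an MCU timer interrupt in plain Python.
# Every tick performs the same two additions regardless of the status
# line's value: no branch, hence no data-dependent timing in the ISR.

class StatusPoller:
    def __init__(self):
        self.c1 = 0  # running sum of sampled status values
        self.c2 = 0  # number of samples taken

    def tick(self, status_line):
        # Identical work for status_line == 0 and status_line == 1.
        self.c1 += status_line
        self.c2 += 1

    def consume(self):
        # Foreground code later reads the accumulated duty cycle
        # and resets the counters for the next polling window.
        ratio = self.c1 / self.c2 if self.c2 else 0.0
        self.c1 = self.c2 = 0
        return ratio

poller = StatusPoller()
for sample in [0, 1, 1, 0]:   # samples a timer ISR might have taken
    poller.tick(sample)
print(poller.consume())        # -> 0.5
```

The foreground code sees only the averaged line state, so the moment-to-moment value of RX/DSR never changes execution paths.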

The second issue for an attacker is how to monitor any changes in timing characteristics. Back in the very late 1990s Differential Power Analysis hit the smart card world with a bit of a thump. However, in the early 1980s I discovered just how susceptible a lot of digital electronics is to EM fields, and how the state of individual logic lines on a PCB could be detected by the way its resonant impedance changed. Thus serial data would cross-modulate an RF carrier in a way that could be picked up at a sizable distance. As became known at a later date, the Russians had a similar system designed by Theremin, which got called “The Great Seal Bug” and had a range of at least fifty meters. Which means that a handheld device such as the token described above could be targeted from well outside the building it is being used in.

So if the RX/DSR status line is not properly mitigated by the embedded software, it could be used to cause the token to leak either plaintext or keymat.

Czerno October 27, 2016 11:35 AM

Did Google try to kill the Swiss encrypted webmail provider, Protonmail ?

https://protonmail.com/blog/search-risk-google/

“… for nearly a year, Google was hiding ProtonMail from search results for queries such as ‘secure email’ and ‘encrypted email’. This was highly suspicious because ProtonMail has long been the world’s largest encrypted email provider.”

Curious October 27, 2016 12:11 PM

“FCC imposes ISP privacy rules and takes aim at mandatory arbitration”
http://arstechnica.com/information-technology/2016/10/isps-will-soon-have-to-ask-you-before-sharing-private-data-with-advertisers/

“The Federal Communications Commission today imposed new privacy rules on Internet service providers, and the Commission said it has begun working on rules that could limit the use of mandatory arbitration clauses in the contracts customers sign with ISPs.”

“It is the consumer’s information, it is not the information of the network the consumer hires to deliver that information,” Wheeler said. “What this item does is to say that the consumer has the right to make a decision about how her or his information is used.”

Curious October 27, 2016 12:20 PM

I am not familiar with this website, but supposedly, they want to make use of facial recognition in Germany.

“Germany planning facial recognition surveillance”
http://www.dw.com/en/germany-planning-facial-recognition-surveillance/a-36163150

“Interior Minister Thomas de Maiziere wants to test video surveillance with facial recognition software in Germany’s train stations, a leaked parliamentary document has revealed. The technology is part of a wider plan to expand video surveillance across public spaces in Germany that was revealed on Wednesday in a draft law that the cabinet intends to rubberstamp in November.”

Curious October 27, 2016 12:23 PM

Complementary article to the one from Ars Technica above:

“The FCC just passed sweeping new rules to protect your online privacy”
https://www.washingtonpost.com/news/the-switch/wp/2016/10/27/the-fcc-just-passed-sweeping-new-rules-to-protect-your-online-privacy/

“Federal regulators have approved unprecedented new rules to ensure broadband providers do not abuse their customers’ app usage and browsing history, mobile location data and other sensitive personal information generated while using the Internet.”

“With its move, the FCC is seeking to bring Internet providers’ conduct in line with that of traditional telephone companies that have historically obeyed strict prohibitions on the unauthorized use or sale of call data.”

Sancho_P October 27, 2016 4:52 PM

@Markus Ottela

Re my “Receiving data error rate below minimum, system probably compromised” warning:

Ouch!
Your simple substitution:
”Data diode is giving errors. Please check batteries and connections”
is completely wrong and misleading, a very bad conclusion / text.

When you deliberately add 5 error bits at the sender (called “Live Zero”) and the receiver only finds 3 (or fewer), then there is no way the data diode or its battery could have improved the process / corrected those errors.
Obviously something else is wrong, either
– at the sender’s software (changed? Compromised?)
or
– at the receiver’s FEC software (changed? Compromised?)
or
– the receiver didn’t receive the legit sender’s telegram (connection compromised?)
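A minimal sketch of the Live Zero principle, assuming a toy 3x-repetition FEC (the code, function names, and frame layout here are illustrative only, not TFC’s actual FEC): the receiver must observe at least the deliberately injected number of corrections, and a lower count signals tampering rather than improvement:

```python
# Hypothetical sketch of the "Live Zero" idea with a trivial
# 3x-repetition FEC: the sender deliberately flips LIVE_ZERO bits in
# the encoded stream, so a healthy link must show *at least* that many
# corrections at the receiver.  Fewer corrections cannot mean the link
# "got better" -- it means sender, receiver, or channel changed.

LIVE_ZERO = 5  # deliberate error bits injected by the sender

def encode(bits):
    return [b for b in bits for _ in range(3)]  # repeat each bit 3x

def inject_live_zero(coded):
    out = list(coded)
    step = max(1, len(out) // LIVE_ZERO)
    for i in range(0, step * LIVE_ZERO, step):
        out[i] ^= 1                  # flip bits at spread-out positions
    return out

def decode(coded):
    bits, corrected = [], 0
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        majority = 1 if sum(triple) >= 2 else 0
        corrected += sum(1 for b in triple if b != majority)
        bits.append(majority)
    return bits, corrected

payload = [1, 0, 1, 1, 0] * 4
received, corrected = decode(inject_live_zero(encode(payload)))
assert received == payload
# Integrity check, not a battery check:
if corrected < LIVE_ZERO:
    raise RuntimeError("corrections below Live Zero floor: suspect tampering")
```

With the repetition spacing used here, each injected error lands in its own triple, so the receiver counts exactly five corrections on an untampered link.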

And please do not hint at checking the batteries; probably there are none.
– With all that tin foil around, if you do not consider malice, why think of TFC?

Re my “What would be the purpose of any feedback to the sender?”
You wrote: ”I understood you meant delivery should be guaranteed, and that sender should be able to retransmit until it goes through. This requires a dangerous ACK channel that confirms no errors occurred. It can not be done, thus other ways to mitigate are needed.”

What you understood is wrong in various respects, but probably you were talking about the first (mis)understanding, so let me clarify:

”… you meant delivery should be guaranteed …”
No, never, the diode makes it a one-way broadcast, it doesn’t matter if or who listens, the sender can’t care.

”… and … retransmit until it goes through.”
No, on the contrary. You may have taken that from my example: the sensor (here: accumulated water in snow) sends data periodically to the base station, but ‘periodically’ does not mean ‘retransmit’; each transmission is a new data point with a different timestamp.

”… requires a dangerous ACK channel …”
No, my example’s sender / sensor can neither receive nor has the “intelligence” to repeat a missed data point; it doesn’t matter, because a new data point is calculated and sent every thirty minutes. Plotting the received error rate over time, the SW concludes which of the 20 to 30 sensors needs maintenance before it completely fails (it’s hard, expensive and dangerous to get out there, so if one sensor fails you’d better also visit its neighbor if it already shows decreasing transmission quality).

Thanks for trying to explain the RxM trouble; it will take time, as some of your arguments still confuse me, but I’ll sort that out.

Sancho_P October 27, 2016 5:00 PM

@Clive Robinson

Yep, regarding (UK’s and elsewhere’s) “terror legislation” (pun intended) I’m a bit concerned about my discussion with @Markus re TFC.
It’s sad that even we old boyz (nothing to lose) feel the chilling effect.

Krista Well Socialised Geekgirl October 27, 2016 6:20 PM

OT
@ Clive Robinson
The quality of kindness is expressed through your writings. While you are regularly and rightly commended for your OpSec contributions, I feel this tangential observation is valuable and appropriate. As an aside, now that I think about it, a few of the most admirable minds here have the same quality – Nick P, Thoth, Wael. Not that others don’t; it’s just clearly apparent with some. Perhaps intellectual sophistication can, on occasion, foster emotional intelligence too. Or vice versa.

re: Coffee OT, but you were also making some correct and uncommon observations about cholesterol and erroneous mainstream perceptions of LDL, HDL, etc.

The bulletproof coffee idea is essentially caffeine plus fat. The fat acts as a buffer to the insulin response to caffeine. So instead of a high followed by a slump 30 minutes later, blood levels of caffeine remain steady for as much as 8 hours. The fat also acts as a crude drug-delivery system for the caffeine and as a fuel for its thermogenic qualities. Apparently it’s hugely popular in Silicon Valley, as cognitive function in all its manifestations is said to be vastly enhanced. It also greatly assists in weight loss, especially when paired with the ketogenic diet. I suppose the caffeine also delivers the high levels of quality fatty acids to where they’re needed.

The originator followed the very Californian method of creating superfluous commercial products around his concept for maximum return, including special ‘mould free’, ‘high altitude’ coffee, a semi-toxic MCT (medium-chain triglyceride) oil combo, and obnoxious infomercial ‘literature’. But as mentioned, the DIY method is easy: quality grass-fed butter (quality meaning pesticide free, so chemicals are not stored in the fat), with or without quality virgin cold-pressed organic coconut oil. Some find the coconut oil too heavy, but it definitely tastes much better than one might imagine. Be bold. Start with, say, one tablespoon total fat (butter with/without coconut oil) and slowly increase. Blend or shake well into a lovely latte; it doesn’t disperse with a spoon.
Or hey – have it with a squid.

The FATNess Known as KIM October 27, 2016 7:23 PM

Canadian police get cell-site data to text thousands near murder scene
http://arstechnica.com/tech-policy/2016/10/canadian-police-get-cell-site-data-to-text-thousands-near-murder-scene/

Texting, police say, “is an evolution” of old-school, door-to-door canvassing.

The Ontario Provincial Police in Canada are planning to text about 7,500 mobile phones that were in the area where the body of a murdered man was discovered in December—all in a bid to find somebody who may have information about the crime.

Welcome to the modern, digital-age version of door-to-door police canvassing.

Thoth October 27, 2016 7:53 PM

@Sancho_P, Markus Ottela

“So you are devoting your time to a complex “high assurance” messaging system, limited by design to chat with your mother? ;-)”

It is quite troubling to see a lack of respect for all the excellent and hard work @Markus Ottela have put into TFC. There might be disagreements on technical designs but not in a personally insulting manner like this.

@Markus Ottela delivers a real life design of high security assurance that actually works and exist in real life instead of just paper theories and thus deserves respects for his hardwork and willingness to spend time and money on a problem everyone of us here except him is willing to take a step out and make it a reality.

Clive Robinson October 27, 2016 7:59 PM

@ Thoth, Markus,

It looks like the rumour of late Sept / early Oct is true: Qualcomm is taking over NXP,

http://www.reuters.com/article/us-nxpsemiconductors-m-a-qualcomm-idUSKCN12R1AW

It’s known that Qualcomm wants to broaden its product base beyond smartphone chips, as the market for smartphones is known to be shrinking. Thus Qualcomm is most likely after the currently expanding automotive and IoT chip markets.

As with all such mergers and acquisitions there will be “streamlining”. Which product areas it will fall on is currently not stated, only the half billion in expected savings.

NXP has some eclectic chip ranges, as well as ranges where market share or profit is below what it could be; these are the likely areas where the axe may fall.

I’m not sure if this will affect smart card production, but it is an area which could easily be spun off or closed.

Clive Robinson October 27, 2016 9:13 PM

@ Krista Well Socialised Geekgirl,

With regards to MCTs, I’ve done a little reading on them. They are found naturally in some milks (mainly from grass feeders) and in both coconut and palm oils. I’m not sure which source the Bullet Proof Coffee uses.

It would appear MCTs have an interesting history within the medical industry, and are used in non-oral feeding systems, including by catheter line directly into the venous system outside of the hepatic portal. They do however tend to cause Non-Alcoholic Fatty Liver Disease (NAFLD) when used this way, which is in effect similar to cirrhosis of the liver with its attached problems. However, this does not occur when they are taken orally, provided there is not a moderate or high level of simple carbohydrates in the diet.

In normal oral consumption they can cross the GI barrier and go through the liver without requiring energy input, thus oxygen consumption, and provide natural “brain food”. The fact that they do not require the liver to use oxygen in theory means there is more oxygen available to the brain, so you would not get the “post heavy meal fatigue”. However, it is doubtful they would improve sports performance. A further upside is that MCTs are also known to have certain “protective” abilities against various harmful processes caused by simple carbohydrates.

What is currently not clear is whether MCTs play any part in arterial plaque formation. For LCTs, however, there is evidence, so reducing LCTs in your diet would be advisable. Coconut oil does have around 5% LCTs, so the –not necessarily correct– standard advice is to use it sparingly, especially as it has no vitamin E content. The reason for the “not necessarily correct” is that the standard advice is changing: things like butter and cream are now being recommended again, and they have even higher LCT content.

With regards to ketogenic diets, it’s hard to find evidence about them, as little research has been done outside of epilepsy control in children. They are known medically to be harmful in certain conditions, but otherwise evidence is scant one way or the other, although anecdotally a percentage of those on the Atkins diet did complain of headaches (which might as easily have been caused by hydration issues). More recently there has been renewed interest in research, as high-carb diets statistically show a correlation with early-onset dementia and other neurological diseases, as well as diabetes and the inflammatory diseases now known to be a primary cause of CVD.

Thoth October 27, 2016 10:18 PM

@Clive Robinson
I was planning to talk about this later in the day once I had gathered more information from my sources and friends in the industry, but since you posted it ahead of me, I will just be direct: I am getting more worried about NXP if this deal goes through. The reason is that there will now be fewer manufacturers of different types of IC chips, and we are getting to a point where, if this continues, there will be very few chip makers left and backdoors become easier due to the smaller variety available; my scheme uses a variety of cards and chips as part of its security.

Smart cards are still somewhat hard to backdoor even after a merger of NXP and Qualcomm, but it moves a step closer, theoretically, though this is not provable as of yet.

As long as there is still variety, and smart cards are being used as a “commodity security device” of sorts, it remains somewhat hard to implement blanket backdoors without running into both technical and practical issues.

There are still chips from Sony (Japan), Samsung (S.K.), Infineon (Germany), ST (France) and other small manufacturers like the Spyrus Rosetta (US), Atmel (US) and so on.

Markus Ottela October 28, 2016 2:59 AM

@ab praeceptis

“You are frighteningly right but we are not in contradiction.”

Agreed.

“second pair of eyes just treated the jewels of security like just any other worthless project”

I’d imagine maintaining a library of legacy ciphers and legacy protocols is quite meaningless, especially when you consider TLS mostly secure against everyday hackers, and fundamentally insecure against nation states with CA resources etc. I can’t be sure what happened with Heartbleed, but I would imagine the project isn’t maintained out of passion. That’s when last-minute pushes before the new year’s party happen, when you need to clear that table to relax.

“Short, someone (and someone very very mighty) does not want for us to have safe systems; not even a reasonable chance to have them.”

This is an interesting subject. I often wonder if the game is rigged and, if so, to what extent.

@Figureitout

“-I think he doesn’t know what he wants and is just kinda throwing things out there?”

I finally got it. It’s about displaying warnings when the first errors start happening, e.g. when batteries (or one or more components) are dying. The forward error correction library should be able to tell how many errors were corrected. Receiver-side software can then keep track of changes in performance.

So it’s not warnings of errors before they happen, but warnings when the first recoverable errors start to occur.
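One hedged way to implement such tracking on the Rx side, assuming the FEC layer reports a corrected-error count per packet (the class name and threshold here are illustrative, not part of TFC):

```python
# Illustrative sketch: RxM keeps a sliding window of per-packet
# corrected-error counts reported by a (hypothetical) FEC layer and
# warns when the rate drifts above a configured baseline, i.e. when
# the first recoverable errors start showing up.

from collections import deque

class ErrorRateMonitor:
    def __init__(self, window=100, warn_above=0.5):
        self.window = deque(maxlen=window)    # recent corrected counts
        self.warn_above = warn_above          # mean errors/packet limit

    def record(self, corrected_errors):
        self.window.append(corrected_errors)
        mean = sum(self.window) / len(self.window)
        return mean > self.warn_above         # True -> show warning

monitor = ErrorRateMonitor(window=10, warn_above=0.5)
alerts = [monitor.record(n) for n in [0, 0, 0, 0, 1, 1, 2, 2, 3, 3]]
# Early packets are clean; the alert fires once the mean passes 0.5.
```

A deviation below the expected Live Zero floor could be flagged the same way, as an integrity rather than hardware warning.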

“And a raspi or regular laptop PC w/ at least 5-6 separate MCUs using python won’t be?”

It absolutely can. That’s why the idea is to enter the PIN into the smart card directly, not using the RxM keyboard.

“Something has to read the PIN pad, doesn’t just magically happen.”

I’m not sure how safe a standard USB smart card reader with a pad is. I’d rather have an external channel.

@Thoth:

Requiring a password doesn’t prevent decapping, but rather expands the scale of the required attack.

“My current design is three wrong retries consecutively and there goes the card. It resets and wipes itself out.”

That’s great to hear!

“The reason is Arduinos are general purpose hobbyist stuff and would be the least on any spy agencies list to backdoor it”

The question is not whether it is backdoored in advance. It’s whether you can use whatever interface RxM would use to deliver encrypted keys via the Arduino to the smart card, to infect the Arduino.

“Otherwise … another really bad choice is fully proprietary PINpads”

Agreed. The only good thing I see about them is that, since RxM can’t leak information back to the network, the choice of COTS hardware bought with cash won’t give the adversary information about the hardware model used. Proprietary designs add variance. Recommending the use of systems featured in the TFC Wiki reduces the complexity of malware that’s fed from NH to RxM.

@Clive Robinson
EMSEC is out of scope here, so if the adversary is in close proximity with an antenna, there’s very little that can be done against monitoring of keyboard/display cables. With no background in EE it’s really hard to follow along, sorry! If we can dumb it down for me a bit:

Let’s say TxM is a RasPi. The device first uses GPIO_1 to control relay #1, which disconnects the latch memory circuit from TxM (its state is read by GPIO_2). GPIO_3 controls relay #2 and closes the circuit that allows NH to write reception status to the latch.

TxM then uses the UART/RS232 to send the ciphertext over the data diode. NH receives the data, but FEC fails to correct errors in the packet. So NH (also an RPi) stores bit ‘1’ in the latch circuit using its own GPIO pin. TxM times out, uses GPIO_3 to control relay #2, which disconnects the latch from NH. It then uses GPIO_1 to connect the latch to TxM. GPIO_2 then reads the ‘1’ bit indicating an error. The process is repeated over the data diode until the latch carries ‘0’ to indicate success.

Tx.py would loop the ciphertext output and GPIO operation until delivery is successful. NH gets to time ciphertext output delays anyway (although the constant-time delays in the trickle connection attempt to defeat that).
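The transaction-drawer loop above can be sketched roughly as follows. Everything here is a software simulation with hypothetical names: the Latch class stands in for the electromechanical latch plus relays, and the relay interlock (NH and TxM never connected at once) is modeled as an ownership check:

```python
# Hypothetical simulation of the relay-isolated ACK latch: TxM
# retransmits the ciphertext until the single latched bit reads 0
# (success).  Relay switching guarantees NH and TxM are never
# galvanically connected to the latch at the same time.

class Latch:
    def __init__(self):
        self.bit = 1          # 1 = "FEC failed, resend", 0 = success
        self.owner = None     # which side the relays currently connect

    def connect(self, side):  # relay #1 / relay #2 in the description
        self.owner = side

    def write(self, side, bit):
        assert self.owner == side, "relay logic forbids this connection"
        self.bit = bit

    def read(self, side):
        assert self.owner == side, "relay logic forbids this connection"
        return self.bit

def send_until_acked(latch, transmit, max_tries=10):
    for attempt in range(1, max_tries + 1):
        transmit()                     # ciphertext out over the data diode
        latch.connect("NH")            # relay #2: NH may write status
        simulate_nh_reception(latch, attempt)
        latch.connect("TxM")           # relay #1: TxM reads, NH cut off
        if latch.read("TxM") == 0:
            return attempt
    raise RuntimeError("delivery failed")

def simulate_nh_reception(latch, attempt):
    # Pretend FEC fails on the first two packets, succeeds on the third.
    latch.write("NH", 0 if attempt >= 3 else 1)

tries = send_until_acked(Latch(), transmit=lambda: None)
print(tries)  # -> 3
```

The interesting security question is then exactly the one below: what can an attacker do with a single bit written into this interlocked drawer?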

So what’s the risk here when NH gets to input single bit through this tiny transaction drawer only TxM has a handle for?

There’s been some research on covert channels that turn e.g. USB cables into transmitters. I wonder if it’s possible to receive data in a similar fashion.

@Sancho_P

“There is no way the data diode or it’s battery could have improved the process / corrected”

Remember Shannon’s maxim: the enemy knows the system. NH can be compromised at any point, thus it can adjust functionality on the Rx side. That would however be pointless, as the attacker most likely wants to stay hidden. The easiest path to DoS is not “packet transmission error” messages but preventing message delivery from the XMPP server.

If TxM is compromised by malware during setup, it’s easy to set output to whatever user configures on that side when leaking keys.

The covert key exfiltration channel is going to be more clever than faulty bits: it’s most likely going to operate on a lower level than TFC’s Python code, or use tiny differences in timings / voltage levels (the data diode’s output current depends on the brightness of the LED).

FEC is for user’s convenience. It’s not a security feature.

“And please do not hint to check the batteries, probably there are none.”

That might be the case for data diode designs like yours. Not for the previous ones.

“- By al that tin foil around, if you do not consider malice, why think of TFC?”

I do consider it. I’m just not assuming the adversary would choose a different error rate for their output packets than what TFC would normally use.

“No, never, the diode makes it a one-way broadcast — sender can’t care.”

Great, we’re on the same page.

“part of your arguments are still confusing me, but I’ll sort that out.”

The language barrier is one problem. Also, many of us here often explain complex things only partially, because the detailed schema in our minds fills in the missing parts. Also, there’s a lot of variance in tacit and explicit knowledge here. All this with non-real-time discussion. The only thing that could make things harder would be a 140-character limit for comments. Anyway, let me know if there’s anything you’d like me to clarify.

@Moderator

The possibility to edit posts would often be very useful. Has implementing such a feature been discussed?

@Nick P

If you can find the time, I’d like to hear your thoughts on progress of TFC.

Thoth October 28, 2016 3:56 AM

@Markus Ottela

“Requiring a password doesn’t prevent decap, but rather expands the scale of required attack.”

Indeed. It is to make someone who has decapped the chip really frustrated to learn that what they have on hand is mostly useless as well.

“The question is not is it backdoored in advance. It’s, can you use whatever interface RxM would deliver encrypted keys via Arduino to smartcard, to infect the Arduino.”

You need control over the hardware via its firmware as the starting point. It acts like a data guard with very small and tight attack surfaces to minimize the possibility of infection. Find kits without any wireless interfaces of any sort, then move to creating an open and custom firmware image that makes infection very difficult by having only essential functions and being easily inspected.

One example is having your own ATmega or STM32 and creating a very specific OS to handle only card reading and PIN entry. If malware tries to attack, it has to find a flaw in your restrictive OS image, and if you code it very tightly and make the OS very small and easy to debug (~2-3K LOC), you should be fine.

Ratio October 28, 2016 5:08 AM

@Markus Ottela,

Instead of if some_key in some_dict.keys(), use if some_key in some_dict. The former checks whether some_key is an element of the list of keys that some_dict.keys() returns, while the latter performs a simple hash table lookup. It’s also slightly prettier. What’s not to like? 🙂
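A quick demonstration of the idiom (under Python 2, which TFC used at the time, dict.keys() builds a list, so that membership test is a linear scan; in Python 3 it returns a view, but the bare in some_dict form remains the idiomatic constant-time lookup):

```python
# Demonstration of the dict membership idiom discussed above.
# In Python 2, d.keys() materializes a list, making `in d.keys()` O(n);
# `in d` is a direct hash-table lookup either way.

some_dict = {"alice": 1, "bob": 2}

# Preferred: direct hash-table membership test.
assert "alice" in some_dict
assert "carol" not in some_dict

# Equivalent result, but noisier (and O(n) under Python 2):
assert "alice" in some_dict.keys()
```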

Anyway, I still haven’t read all of the code, but from what I’ve seen the issue in the large is that things are intertwined in all sorts of ways. Functions “do too much” and can’t be easily plugged together in some other way without changing their internals. The mutable global state doesn’t really help either. Fixing this, and getting rid of the code duplication, would be my highest priority on the software side of things.

Markus Ottela October 28, 2016 5:09 AM

@Thoth

That makes sense. So: minimize the attack surface on PIN and password input, and store a hash of the master key on the smart card. The smart card’s private key acts as a salt, so there’s no way to obtain the master password without physical access to the smart card.
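A rough illustration of the idea, with all names hypothetical and card_secret merely standing in for the key held on the card (a real card would perform this derivation internally rather than ever expose its secret):

```python
# Illustrative sketch only: derive the stored verifier from the master
# password with the smart card's secret as the salt, so the verifier
# is useless without physical access to the card.

import hashlib
import hmac

def make_verifier(master_password, card_secret):
    # PBKDF2-HMAC-SHA256 with the card's secret as the salt.
    return hashlib.pbkdf2_hmac("sha256",
                               master_password.encode(),
                               card_secret, 100000)

def check(master_password, card_secret, stored):
    candidate = make_verifier(master_password, card_secret)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

card_secret = b"\x13" * 32               # stand-in for the card's key
stored = make_verifier("correct horse", card_secret)
assert check("correct horse", card_secret, stored)
assert not check("wrong pass", card_secret, stored)
assert not check("correct horse", b"\x00" * 32, stored)  # wrong card
```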

However, as stated, all this provides diminishing returns. Preventing impersonation is important under some threat models, but I simply can’t find the time to take on such a complex concept at this point. I hope the community can use the existing code and expand on it.

Thoth October 28, 2016 5:23 AM

@Markus Ottela
No problem.

I guess if you really want to add a smart card to the security model, the best way is to use password entry, derived into a password hash to be fed into a smart card on the Raspberry Pi computers themselves, and not to adopt a secure PINpad via an Arduino board until that becomes feasible with surplus time and resources on hand.

Markus Ottela October 28, 2016 8:27 AM

@Ratio

RE: dict.keys()

That’s true, and it works as long as the dictionary isn’t being edited while iterating. That is the case in some of them, but I’ll have to check through whether there are some that use .keys() unnecessarily.
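The caveat is easy to demonstrate. In Python 3, mutating a dict while iterating over it raises RuntimeError, and the standard fix is to iterate over a snapshot such as list(d):

```python
# Sketch of the caveat above: in Python 3, deleting a dict entry while
# iterating the live key view raises RuntimeError; snapshotting the
# keys first (list(d)) makes deletion during the loop safe.

d = {"a": 1, "b": 2, "c": 3}

try:
    for k in d:          # live view of the keys
        if k == "b":
            del d[k]     # mutation during iteration
except RuntimeError:
    d["b"] = 2           # restore the entry for the safe version below

for k in list(d):        # snapshot: safe to delete while looping
    if k == "b":
        del d[k]

assert "b" not in d and sorted(d) == ["a", "c"]
```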

“Fixing this, and getting rid of the code duplication, would be my highest priority on the software side of things.”

@All, Ratio

Now that the project is approximately feature complete, I can start looking at rewriting everything. Python 3 has a better module structure, so the language version will probably be bumped at the same time. My studies take more and more of my time, so I don’t think there will be another period like the past summer that I can dedicate. If you have an interest in contributing to the code, I welcome all audits, pull requests and discussion of any issues.

Sancho_P October 28, 2016 7:03 PM

@Markus Ottela

You wrote @Figureitout:
”The forward error correction library should be able to tell how many errors where corrected. Receiver side software can then keep track of changes in performance.”
Yup, only I’m not quite happy with the term “performance” here.
The mentioned principle of “Live Zero” [1], which is deliberately adding error bits, is not about performance but system integrity.

Your next sentence, ending with:
”… but warnings when first recoverable errors start to occur.” also doesn’t fit exactly (it is the deviation from “normal” in recoverable errors that would be the concern), so I don’t know if we “finally got it”?

Yes, I’m extremely fixated on the wording, but the reason is experience, so please forgive me; I spent years of my life repairing what bad design papers / contracts / ideas / misunderstandings before coding had caused.

Your paragraph starting with the language barrier is true; discussion in text is difficult, but verbal is worse, believe me, because most details get lost.
E.g., from your posting I can read your reply to me but I can’t understand it; there are too many inconsistencies I’d have to “swallow” (which I probably would if we sat together over a beer 😉).
I just don’t know if you want (and have the time) to hear them, and if this forum is the right place to discuss them without overburdening other users and our host’s hospitality.
Please tell me what you think.

[1]
The adding of “errors” could be done in a pseudo-random manner to “pair” a certain set of devices at commissioning. This fingerprint might “forever” identify the sending device to the receiver, to detect a device change by an “evil servant”. To further obscure this practice I could go on and explain more details here in public, but … 😉
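One speculative way to realize this pseudo-random pairing (every name and parameter here is hypothetical, not Sancho_P's actual scheme): derive the error-bit positions from a per-pair secret and a frame counter, so only the legitimate paired sender produces the expected fingerprint:

```python
# Speculative sketch of the pairing idea: a per-pair secret set at
# commissioning seeds which bit positions the sender deliberately
# flips, so the receiver can verify the deliberate-error
# "fingerprint" and notice a swapped device.

import hashlib
import hmac
import random

def error_positions(pair_secret, counter, frame_bits=240, n_errors=5):
    # HMAC binds the positions to both the shared secret and the
    # frame counter; the digest deterministically seeds the PRNG.
    digest = hmac.new(pair_secret, counter.to_bytes(8, "big"),
                      hashlib.sha256).digest()
    rng = random.Random(digest)          # deterministic per (secret, counter)
    return sorted(rng.sample(range(frame_bits), n_errors))

secret = b"pair-secret-set-at-commissioning"
# Sender and receiver derive the same positions for frame #7 ...
assert error_positions(secret, 7) == error_positions(secret, 7)
# ... while a device with the wrong secret almost surely will not.
assert error_positions(secret, 7) != error_positions(b"wrong secret", 7)
```

The receiver would compare the positions of the corrected bits against its own derivation and flag any frame whose fingerprint doesn’t match.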
Plus:
Exactly here, at the overall design, the threat model, the adversary, the purpose (to say it bluntly), the proposed HW / SW model and open source (love it), I’m still chewing on several (still diffuse) questions (keyboard, PIN pad, smart card, who has access, …).

Hopefully I can check your documents this weekend.
If there is more than what’s on GitHub, please tell me, too.

Figureitout October 30, 2016 10:04 AM

Thoth
–Yeah, looks good. I like square things, lol. It’d be good for workplace security, but for my personal stuff I’d still prefer making my own keypad etc. A reduced attack surface and being able to see things more easily (rather than reverse-engineering the HSM keypad) is just personal preference.

RE: crypto export laws
Legally one may need to do that, but that horse has left the barn and it’s impossible to enforce.

Smart card readers are not meant to be secure anyway
–…What? Well, that’s where the attacks will come from. Doesn’t have to be an Arduino, can be any 8-bit micro, but as you saw w/ Qualcomm buying NXP, which just acquired Freescale, there soon won’t be much choice, I bet.

Markus Ottela
I’d rather have external channel.
–Yeah, me too; that path is always going to have an MCU though. How else you going to communicate w/ the card? Not quite sure how a smartcard would be used with TFC anyway.

RE: feedback unit
–You wouldn’t need the latch and relay for most MCUs (I heard something about the chip the Pi uses, that you need to set bits to read/write GPIO); you can just poll a GPIO pin via an interrupt service routine. Pretty standard thing to implement in the embedded world, so you’d have some choices. Recently I learned that some chips will advertise more GPIO, but it’s not normal GPIO and we had to do something real funky to toggle the lines. So watch out for things like that, in case you start running out of pins to use. I was thinking a similar thing: how malware could attack an I/O pin that just goes high or low; firmware reading it would only repeat a packet if high, else just continue. The only practical attack is a DoS, constantly sending errors. Unlikely (best way to stop it would be to harden NH as much as possible), but if it happened it would be pretty terrible and basically shut down your chat session. Clive was saying the read needs to take the same time whether or not we read an error. So each read would need enough delay to cover how long it would take to read an error, probably right before it retransmits. Easier said than done to get those timings.
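The fixed-window read described above can be sketched in a few lines. This is a minimal sketch, not TFC code: `poll_pin` is a hypothetical callable standing in for the real GPIO read, and `READ_WINDOW` is a made-up budget that must be chosen to cover the worst-case error-handling path:

```python
import time

READ_WINDOW = 0.05  # seconds; hypothetical fixed budget covering the worst-case error read


def read_feedback(poll_pin):
    """Poll the feedback line inside a fixed wall-clock window.

    `poll_pin` returns True when the receiver signals a recoverable error.
    The function always consumes the same wall-clock time whether or not an
    error was seen, so nothing leaks through the timing of the feedback path.
    """
    start = time.monotonic()
    error_seen = poll_pin()  # the actual read; cheap either way
    # Burn the remainder of the window so the total duration is constant.
    remaining = READ_WINDOW - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
    return error_seen
```

The point is that the caller’s total time is the same whether or not an error arrived, which is exactly the property that makes the timings hard to get right in firmware.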

r October 30, 2016 6:53 PM

Just because the horse has left the building doesn’t mean that it can’t or won’t be enforced. If they left the reins on the horse, or maybe installed GPS on it, you never know what the future will or will not bring.

Figureitout October 31, 2016 1:00 AM

When’s the last time you heard of export law for crypto being brought up? The horse that left the barn in the 90’s is dead now, and was beaten to death by people who have no experience w/ things moving around the globe.

:Addendum:

Markus Ottela
–Actually, just a “DoS” attack on an error pin would matter depending on how you handle errors. Looks like you give a little text warning, and an attacker could fill your screen up w/ error messages, but your conversation could still get thru (could externally verify via phone or radio). Either way, bringing down NH is the surest DoS attack.

Clive Robinson October 31, 2016 7:58 AM

@ Figureitout,

When’s the last time you heard of export law for crypto being brought up?

Not that long ago, depending on how you view May ’16. The US were trying to get the Wassenaar Arrangement on “Export Controls for Conventional Arms and Dual-Use Goods and Technologies” upgraded back in 2014, supposedly to prevent the likes of Syria’s Assad and other ME tyrants spying on “their people”. However, what got amended and how –post the Ed Snowden revelations turning the US into a worldwide pariah– was a little too rich for US exporters, who were imagining an end to their lucrative export market. Thus the US ended up arguing against their own argument…

http://arstechnica.com/tech-policy/2016/03/us-to-renegotiate-rules-on-exporting-intrusion-software-under-wassenaar-arrangement/

Apparently the thing is, as the Wassenaar is an “arrangement” not an agreement or treaty, it has no legal binding (unlike the old ITAR) for either control or reporting of such technologies (crypto being “Dual-Use”).

So it’s backfired on the US “self image”, especially when BIS, rather than just implementing the rules, “published them for comment” and just about all sides raised complaints…

However the “Corporate Elite” that act as a “shadow government” in the US –via the neo-cons / war-hawks– have other options available to protect their interests, such as NSLs etc. on any US based entities (think about Microsoft’s protracted argument about the independence of their Irish data center). It’s even been said by some that the US corporates / government could use TTP tribunals to kill off any foreign company exporting where the US or other aligned Eyes do not want it to go (not sure how, but as they sit in secret…)…

r October 31, 2016 8:41 AM

@FigureItOut,

Global or not, what happens at home has international ramifications over the means. NIST was a hollowed-out corpse; OpenSSL is, was, and will always be a work of heart. We might not be able to stop the linear evolution of something leaking and spreading, but complexity and resources are finite: who’s going to release >256bit block/stream ciphers? Who’s going to vet them? Who’s going to implement them?

Some of the giants we stand on aren’t giants at all, but trojan horses towering over us in all their welded glory.

64bit cyphers only lasted so long, where are we at with 128’s ? And the 256 ones?

Certain fields of science are fields of diminishing returns, theoretical mathematics is one of them.

Moderator October 31, 2016 11:30 AM

@Markus: Enabling comment editing would necessitate registration, which is not required for visitors to participate in discussions here. If you need to make a correction, you can either post a corrected comment, with a preliminary request to @Moderator to delete the incorrect one; or post a request to correct the error.

ab praeceptis October 31, 2016 11:45 AM

r

“256bit block/stream ciphers? Who’s going to vet them? Who’s going to implement them?”

The usual suspects. They do both: they implement their algorithms and then they fight each other’s algorithms.
Plus quite a few others out there.

“64bit cyphers only lasted so long, where are we at with 128’s ? And the 256 ones?

Certain fields of science are fields of diminishing returns, theoretical mathematics is one of them.”

Well, we have them. They have arrived (and not just yesterday). 256 bit is a decisive marker because there is a rough logic that says “post-quantum will halve the bits, so 256 bit algos will have only the effective strength of 128 bit algos. For the sake of being on the safe side, 128 bits are desirable because that gives us, give or take some bits due to successful weakening, the 80 bits we need for our current definition of mid-term security”.
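The “halving” rule of thumb comes from Grover’s search algorithm: a quantum attacker can search a k-bit keyspace in roughly the square root of the classical effort, so the effective strength of a symmetric cipher halves:

```latex
\text{Grover: } \quad 2^{k} \;\longrightarrow\; O\!\left(\sqrt{2^{k}}\right) = O\!\left(2^{k/2}\right)
```

Hence a 256-bit cipher retains about 128 bits of post-quantum strength, while a 128-bit cipher drops to about 64.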

As for your doubts: Sure, some crypto people have sold out, no doubt. A tenure here a problem solved there … et voila, you have tainted work. We know that nist is to be considered a department of nsa.

But, and I like that example, while I never trusted the rsa guys (which turned out to be well justified mistrust), the algorithm can be checked by everybody (well, by everybody with some knowledge and inclination).

I see no reason to mistrust all the crypto people and mathematicians. Btw, there are practical reasons, too. Assume someone found an algorithm that considerably sped up, say, factoring. Would he keep that secret? Could he? After all, those people usually don’t work in solitude; they use, for instance, computer arrays, they have secretaries, they communicate with each other. In other words, the chances are very, very small for such a development to stay secret (people working within nsa are another game, of course).

I feel fine with current algorithms. Even with rsa. If there is pain, then it’s in the PK corner: the known and widely used algos won’t survive PQ, and the new ones aren’t properly tested, or are way too big to be practically and economically usable, or …

r October 31, 2016 12:04 PM

@ab,

I don’t; if I did, I wouldn’t be here gleaning and prepping, would I?

I just like to think of those giant over-arching bay watching hercules’ as salt… in the wound… in the future… It’s a shell game of trust isn’t it?

Where are the 512-bit transactions? 256 is “good enough” now with the Certificate Authorities, I’m assuming, and if the Wassenaar is truly a gentleman’s agreement then it’s known to be politically unprosecutable (excluding proles, snivvies ofc). Which makes the whole encryption ordeal a non-issue, as there’s something else behind the incomplete switch-case block.

I really do believe that even if global export of crypto is allowed at this point, it doesn’t matter, because even without the info the meta is still controlled and available. Make no mistake, THEY WILL know when you’re studying how to build an 8192-bit public key transform. Make no mistake, if you speak math like some people speak database or admin, they’ve got a pigeon trained on you.

Sure, “we” can check it. But will we be “allowed” to deploy it? As-is? Vis-à-vis? Likely implementation errors will be introduced, evoked; maybe the RNG will be poisoned, side-channels made available.

You pushing the whole hrc email thing should make that very clear: if they were encrypted [properly], things would be fine. There’s resistance to meeting these goals not only from the public, but from the unpublic, the government and other entities. If encryption were a reality… scratch that, secure comms. If secure comms were a reality then you mathematicians could sleep at night, but then again, if god gave us math, and geometricians gave us the bomba, then who would want that?

You guys are the pride and joy of export restrictions, and we’re a country of porous borders. Give us your sick, your disenfranchised; we’ll put them to work building an economic and political weapon unmatched by any fully-closed-source dictatorship. We will change the world a bit at a time.

I guess I trailed off. Anyways, you guys all know you should remain guarded… maybe you are… maybe you REALLY are guarded, I can’t check your originating hosts yanno? But it’s an open timeline and anyone can be subverted or put down. It’s a scary thought when the pressure from the POTUS is concerned with preventative economic pressure to remain stable, boiling its contents instead of figuring out how to properly cook the contents without charring the insides. They’re on fire and it’s not going to stop any time soon.

ab praeceptis October 31, 2016 12:39 PM

r

I’m not sure I can follow you, but from what I understand you are mixing up some quite different, if more or less loosely connected, matters.

Can we use, say, 8192-bit rsa? Sure; besides a very unattractive cost-benefit ratio I don’t see any reason why we couldn’t.
Can we export good crypto? Well, that’s somewhat more complicated, and I’m strongly limited in my understanding as I lack legal training. From what I see, it’s two-sided: On one hand, some laws in some countries could probably prohibit it; it seems questionable, however, whether that could actually be enforced. On the other hand, that cat is out of the bag and in the wild anyway. Maybe, if I happened to work for a major enterprise where the legal department has to be asked even before farting, I couldn’t; but then the major problem wouldn’t be the state anyway (but rather my employer wanting to have everything). If, on the other hand, I happened to be a math or compsci prof who had found and developed something great, I could almost certainly put it into the wild, and even if my state prohibited it, they could hardly enforce that. After all, there would be papers, discussions with colleagues all over the earth, etc.

So, yes, my take is that we can both check and develop/deploy crypto quite freely.

My worries concern mainly two issues: a) implementation and b) the dark zone (nsa, …).

a) is urgent and important. Virtually all “broken crypto” (besides ridiculous old shit) has not been broken crypto but broken implementation. To make it worse, there are probably hundreds more heartbleeds lurking out there.

Another major pain in terms of implementation is how PK is actually implemented; particularly the CA problem is very, very frightening. It basically comes down to being secure where you do not really need it (shopping at ebay or sending love letters, etc.) and not being secure where you really need it (protection from gov. eavesdropping, etc.).

Yet another – widely ignored – PK implementation problem is that all those SSL/TLS servers (httpS, et al.) are basically attack vectors. Anyone can ask my httpS server to do a LOT more work (than http), hence even a relatively small attacker can abuse SSL/TLS as a (D)DOS attack vector.
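The asymmetry is easy to show in miniature: a handshake request costs the client almost nothing to send, while the server’s side of an RSA key exchange is a large modular exponentiation. A toy Python measurement follows; the modulus and exponent below are arbitrary 2048-bit numbers, not a real key, and pure-Python big-int math is far slower than a real TLS stack, but the client/server cost ratio is the point:

```python
import time

# Toy stand-in for an RSA-2048 private-key operation: one modular
# exponentiation with 2048-bit operands.  Real TLS stacks use optimized C,
# but the asymmetry (client sends bytes, server does math) remains.
BITS = 2048
n = (1 << BITS) - 159    # arbitrary 2048-bit odd modulus (NOT a real key)
d = (1 << BITS) - 1173   # arbitrary 2048-bit "private exponent"


def server_handshake_work(ciphertext):
    # The expensive step each client request forces the server to perform.
    return pow(ciphertext, d, n)


start = time.monotonic()
for c in range(2, 12):   # ten cheap "ClientHello"-sized requests
    server_handshake_work(c)
elapsed = time.monotonic() - start
print(f"10 handshakes cost the server {elapsed:.3f}s of CPU")
```

Since the attacker pays only bytes while the server pays CPU, even a modest number of clients can use handshakes as an amplification-style DoS.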

The other factor (b), the “dark zone”, is not merely nsa next-to-certainly working on ways to crack crypto using mathematics, but mainly that they also try more sinister approaches. Having every byte on major IXs is an ugly example of that.

However, what Snowden has shown us is clearly not to be addressed by better crypto alone. The really big issue there is politics and law (ignoring politicians and agencies).

r October 31, 2016 2:04 PM

I meant, ignoring both the politicians and the politik[s] that we are all on somebody’s payroll.

Figureitout October 31, 2016 4:03 PM

r
who’s going to release >256bit block/stream ciphers? Who’s going to vet them? Who’s going to implement them?
–Probably djb or the anointed few, and they’ll require lots of care and nurturing to get the security. I’m looking forward to crypto made for 100+ cores, will be sick.

Clive Robinson
–That was intrusion software, not crypto, and what’s come from those rules, if they’ve even made it out of committee? It’s never been easier to get all kinds of exploits online. Notice this quote too: “The same sort of rules once restricted the export of commercial-grade encryption”. There isn’t even that much new crypto today, lots of similar algorithms.

And your other link returned error or corrupted (from you I’m guessing corrupted :p), so a broken link doesn’t help the case much either.

r October 31, 2016 4:23 PM

@FigureItOut,

I look forward to ChaCha etc. on ultrascale stuff, but it won’t be the consumers who obtain it first. We tend to ‘buy the [server] farm’ before we’re ever out of the gate.

There really are an anointed few; if we don’t get this education and awareness stuff down, we will lose the race condition of “securely” creating mathematicians and cryptographers.

Can we create them faster than others can identify and exploit them?

It’s tragic. I was raised with such an idealistic view of humanity, I can’t believe how thick I have been. It’s all cloak and dagger; to not be operating in such extremes is to leave a vacuum for others to fill.

Clive Robinson October 31, 2016 5:27 PM

@ Figureitout,

And your other link returned error or corrupted (from you I’m guessing corrupted :p), so a broken link doesn’t help the case much either.

Try a google of,

And it should be in the top results (and people wonder why I don’t like giving anything other than simple links).

Clive Robinson October 31, 2016 10:37 PM

Assange’s view on Google’s head

As some of you may know, during his spare time cooling his heels in London, Julian Assange has been writing a book[1]. In essence it’s “beware the holder of the olive branch” on Google’s head Eric Schmidt and his less well known associates.

And there is a “taster” up on the wikileaks site,

https://wikileaks.org/google-is-not-what-it-seems/

Whatever people may think of Julian as an individual or as an aid to whistleblowers, the piece is quite readable (if a little long for a single web page). As far as I can see the facts given are correct, but they are but the threads of a wider whole cloth Julian has woven with them.

The taster is worth a read over a cup of coffee.

[1] http://www.orbooks.com/catalog/when-google-met-wikileaks/?utm_source=newsweek&utm_medium=serial&utm_campaign=google_met_wikileaks

Figureitout October 31, 2016 11:19 PM

Clive Robinson
–Yeah, “BIS Updates the EAR to Implement Wassenaar Changes to Encryption Controls and Others” on Oct. 20. Had to do a little searching, eh? Looks like “Notably, the rule eliminates Encryption Registration requirements”. Not sure what exactly that means, don’t really care, and it won’t affect my purposes.

Clive Robinson November 1, 2016 12:33 AM

@ Figureitout,

Not sure what exactly that means…

It’s something I’m still looking into, but from what has been said by others, it looks like a change for the better but is in fact a change for the worse.

That is, the old system had a reporting mechanism back to a state entity. The new system requires open self reporting available to all, which puts a whole lot more information into the public domain than the old system ever did.

Markus Ottela November 1, 2016 8:00 AM

@Sancho_P

“I only don’t know if you want (and have the time) to hear them and if this forum is the right place to discuss”

The design needs to be formalized eventually. The wording is quite important, so I’d rather split the hairs before finalizing the design and restarting the writing of the white paper.

“This fingerprint might “forever” identify the sending device to the receiver.”

NH can be compromised from the network. You can’t trust NH to verify that TxM hasn’t got malware in it. TxM needs to be trusted implicitly. If that can be done, integrity checking isn’t necessary. TxM can perhaps be audited with external spectrum/logic analyzers, but I think that’s out of scope for most users.

RxM can be compromised from network through the data diode (via NH). Therefore both of these devices can lie about anything that gets displayed to the user. You can’t trust any integrity verification mechanism in these systems. The only thing trust is built on is that it’s risky to show arbitrary messages on RxM if they’re out of context with what user and contact have previously sent from their trusted TxMs. It’s risky to compromise the TxM installation procedure, as it’s a finite set of network traffic and code that needs to be audited, once.

“to detect a device change by an “evil servant””

You can’t detect it on any device. I’ve designed TFC so that as long as the security premises I listed previously hold true, it’s secure against remote attacks.

Also, I apologize for the rant about insecure ACK channels. Reading the imgur data diode description showed me you had agreed on that since the beginning.

“Hopefully I can check your documents this weekend.”

Yes. Definitely read the Threat model, FAQ, Security design and maybe the protocol description. It’ll be much easier to understand why I designed the system the way I did after that. Plus I’d love to get feedback on what remains unclear to a new pair of eyes.

“If there is there more than at github please tell me, too.”

I did my best to ensure all relevant data was there. Unfortunately I haven’t been able to go through all the notes and posts and discussions here and on Reddit, so I might be missing something.

@Figureitout

“How else you going to communicate w/ the card? Not quite sure how a smartcard would be used with TFC anyway.”

It’s probably easier to design a secure firewall configuration system through an external management port than to add a device to the network that logs in through the WAN port of that firewall.

“How else you going to communicate w/ the card? Not quite sure how a smartcard would be used with TFC anyway.”

So the point is to protect the master key from a physical attacker. This doesn’t prevent malware from storing messages on RxM, but it prevents impersonation when the keys cannot be accessed in their decrypted state.
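The idea of keys being unusable to a physical attacker can be sketched with stdlib primitives alone. This is a hedged sketch, not how TFC or any smart card actually does it: `hashlib.scrypt` stands in for a PIN-stretching KDF, and XOR wrapping merely hides the key at rest with no integrity protection (a real design would use an authenticated cipher inside tamper-resistant hardware):

```python
import hashlib
import os
import secrets


def wrap_key(master_key: bytes, pin: str, salt: bytes) -> bytes:
    """XOR the master key with a pad stretched from the PIN.

    Only a sketch: the key is unreadable at rest without the PIN, but
    XOR provides no integrity; a real design would use an AEAD.
    """
    pad = hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1,
                         dklen=len(master_key))
    return bytes(a ^ b for a, b in zip(master_key, pad))


unwrap_key = wrap_key  # XOR with the same pad is its own inverse

salt = os.urandom(16)
master = secrets.token_bytes(32)
stored = wrap_key(master, "491722", salt)  # what sits on disk / card
assert unwrap_key(stored, "491722", salt) == master
assert unwrap_key(stored, "000000", salt) != master
```

A physical attacker who images the storage gets only `stored` and `salt`; without the PIN (rate-limited by the scrypt cost here, or by a retry counter in real hardware) the keys stay in their encrypted state.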

“Either way, bringing down NH is the surest DoS attack.”

You’d have to masquerade it as a bug in NH.py. I think it’s much easier to DoS the connection to the XMPP server. But outside bricking Syria’s backbone routers, entities like NSA prefer to remain covert in their jobs.

@ Moderator
That’s very kind of you. Some image boards etc. have solved editing of anonymous posts with cookies, but I’m actually happier with the site not setting any. Keep up the good work!

Figureitout November 2, 2016 1:20 AM

Clive Robinson
new system requires open self reporting available to all
–Ah, self-regulation lol. Bet they don’t have enough resources to check every product in a timely manner (how could they know if they just have pre-made firmware made for them?). And it’s good anyway to make the info public; it’s a hint, but just knowing the crypto algo/scheme shouldn’t break your security.

Markus Ottela
design secure firewall configuration system
–Using what? Another large PC? Larger MCUs w/ more code? Attack surface expansion right there. So you want the smartcard to have its own connection too.

So the point is to prevent master key from physical attacker.
–How do you detect a physical attack? From my messed-up life experiences, I can tell you that if they don’t want you to know they’ve been there, it won’t be easy to detect. My particular attackers messed up big time getting emotional, and gave me all kinds of intel so I could study their methods. Physical violation is the worst, and it’s where most of the security industry throws up its arms. That was the point of my nRF_Detekt project (which is ongoing; I want timestamps logged to an SD card so you can plug that into any Windows/Linux/Mac PC and store that data for evidence later. Super simple and can be implemented on any MCU; I was just trying to make something tricky for attackers that would reset dataloggers, by hiding them, forcing them to search more and further risk detection).

You’d have to masquerade it as a bug in NH.py
–Not so sure about that; I think they can blaze right on thru as long as the PC is connected to the internet (hence I think completely separate channels other than the internet are needed; RF is the only practical one in use today; we need more, but I doubt a usable global channel that isn’t internet or RF will come in my lifetime). But it depends on your OPSEC and cleanliness.

I’ve seen some highly suspected attacks, like remote rebooting: imagine you’re really in the zone, finishing up an assignment or writing code, and you get a reboot that deletes some of the code you just wrote. That happened to me at school, on the school network, which is usually pretty secure; they take it very seriously, and I believe they even run VMware virtual machines on the computers so whatever students do is eliminated when you kill the VM (there are different IT departments across the different schools, weird how they do it; I do know which one is best and which PCs start acting a little strange…). Yet it didn’t happen to anyone else in my school lab, odd eh? I still think somehow destroying a SATA controller via software, to the point technicians recommended a new motherboard, was the most epic attack I’ve ever seen, if that’s what it was; before that it was cats meowing on every mouse click, so a pretty good sign of silly malware there lol. Other than that, there’s a potential microcode exploit I’m loosely looking for on this same PC, which I use as a live system for internet surfing, that can infect my favorite fresh live systems (it loads microcode “updates” on each boot… doesn’t do that on other PCs). I can’t beat that besides doing my usual, if needed: getting on other PCs completely unrelated to me and different connections. But I find that morally repulsive and weak, running away from attacks which I want to defeat head-on.

Recently I’m not sure if I’m witnessing the same attack vector on another PC, or just coincidence and/or a benign error: the computer reboots and is unable to find the HDD at times, usually while directly connected to the internet (last time it was one of my gmail accounts). All I do is replug the SATA cable and it boots fine again, finds the HDD, all my files uncorrupted; leading me to believe it’s more likely a benign error, since it’s been much more repeatable and has ever so slightly started forming a pattern.

Other attacks: I’ve done tests trying to see if there’s just eavesdropping malware trying to remain completely hidden. I don’t mind that as much b/c it doesn’t really matter, watching me study or whatever, when I open source much of what I do. It’s the destructive malware that really leaves a sour taste in the mouth (and gets you more interested in security).

So I know you know all this: getting malware pre-loaded in PCs at the fab (possible, though more likely just bugs in firmware), or when you put a new OS on a computer (you likely get that image from the internet), is the most likely way of infecting RxM or TxM especially. But NH is just so uncomfortably vulnerable compared to the rest of the TFC system; it’s irritating to think about when I’m happy with the rest of the system, and I’m sure you are too.

Sancho_P November 2, 2016 6:57 PM

@Markus Ottela

”The wording is quite important so I’d …”
OK, I’m on track, just having less time this weekend than I thought, as always …
I understood your NH being vulnerable from the network; that’s unfortunately a basic property of standard computers / PCs for common use. We should keep that in mind for later discussion.
But I’d challenge your ”RxM can be compromised from network through the data diode (via NH).”
Please give me some time, I’ll be back with several points, hints and (unfortunately) some questions 😉

Again, I’m not happy with our communication channel because that’s far beyond the scope of this forum.
