Comments

K15 February 2, 2018 5:40 PM

Simple question, that I’d posted too late for last week’s Friday crowd:

How can you find out whether a company thinks it is using a signed (not self-signed) security certificate on its website?

If all I see in my browser when I go there is “Secure” in the address bar – but the company is actually using a signed certificate – obviously something would be wrong. How can I find out whether that is the case, i.e. that they are using one?

(yes, I should already know how to do this.)

MK February 2, 2018 5:48 PM

Click on “Secure”, then click on “valid” under the Certificate icon, then look at the certification path. [at least, using Chrome]

About that "bigger than watergate" memo, lol? February 2, 2018 7:24 PM

https://theintercept.com/2018/02/02/nunes-memo-fisa-trump-russia/

Despite rhetoric that could help to undermine Mueller’s investigation, the Nunes memo specifically says that George Papadopoulos sparked the counterintelligence investigation that ultimately led to the resignation of National Security Adviser Michael Flynn, the firing of FBI Director James Comey, and the appointment of Mueller as special counsel. Papadopoulos, a former Trump foreign policy advisor, pleaded guilty in October to making false statements to the FBI.

So it’s settled! The investigation was legitimately engendered and the GOP just proved so in a banana-handed attempt to discredit it, released over the strident bipartisan objections despite all the talk about cracking down on leaks.

What’s next, sending Nunes to pull the fire alarms at FBI HQ? They’re running out of distractions.

k15 February 2, 2018 7:28 PM

All other things being equal, a signed certificate is more trustworthy than a self-signed one, right? And there have been cases of self-signed certificates that were not trustworthy? So the more important it is for the information on the website to be trusted as safe and accurate, the more worthwhile it is for the company to use a signed certificate?

Or is this reasoning not correct?

k15 February 2, 2018 7:34 PM

To clarify, the second part of a two-part question: if there were bad intent somewhere along the route, could a company’s use of a signed certificate look, to the recipient’s browser, as if the company site were using a legitimate, ordinary, garden-variety self-signed one?

Thoth February 2, 2018 9:56 PM

@AlanS

re: How not to use Signal

Well, bad luck if the user got caught by someone sneaking up from behind; in fact, that is a common tactic. I have a habit of checking the surroundings for CCTV cameras before I discuss sensitive information with my clients.

The elevators in the housing apartments in Singapore are now all equipped with two CCTV cameras, and this is very, very spooky. I make a point of never answering calls, unlocking the phone, or even showing the phone screen while entering or inside the lift. I have tried to explain to relatives and friends the dangers of using the phone, or even just unlocking it for a call, but I have been labeled as paranoid. Sure enough, officials have released statements that the CCTVs in the elevators and public areas are all networked and run through “filters” and “programs” for “Anti-Terrorism” efforts.

If you have someone important to message, think along the lines of the US President but on an easier-to-replicate scale. Excuse yourself, get out of range of any visual device (i.e. go to the toilet), then decrypt/unlock your phone and send the message. Of course, that’s for lower-importance messages. If you have something of higher importance (i.e. keymat), then you need higher OPSEC with less compromise on security.

Major February 2, 2018 10:57 PM

K15 – Certificates signed by a reputable authority and issued to the named company/domain provide substantial verification that you are connecting to that company/domain, and not somebody else. For example, they protect against man-in-the-middle attacks.

Self-signed certificates don’t provide this assurance of identity and therefore are not worth much security-wise unless you have personally verified that a self-signed certificate was created by the entity that you intend.

For that reason, self-signed certs should ideally be used only in testing, or in situations where you yourself can verify their authenticity.

Major February 2, 2018 11:04 PM

K15 – Self-signed certificates should bring up a stern warning in your browser unless you have unwisely turned this off.

If the cert is issued by a large company, like Google, to itself – AND no warning is displayed – I understand this to mean that the company is a recognized certificate authority itself, and so the cert is OK.

Most certs these days are countersigned by another authority for further authentication of their validity.
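If you want to see this for yourself rather than trust the padlock, the whole exchange can be inspected from a short script. A minimal sketch, assuming Python 3 and outbound access on port 443; the hostname in the example comment is just an illustration:

```python
import socket
import ssl

def peer_cert(hostname, port=443):
    """Fetch the certificate the server actually presents, after the normal
    chain validation against the OS trust store. A self-signed or otherwise
    untrusted cert makes wrap_socket() raise ssl.SSLCertVerificationError."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

def summarize(cert):
    """Flatten the subject/issuer RDN tuples from getpeercert() into
    readable strings, so you can see who issued the cert and to whom."""
    flat = lambda rdns: ", ".join("%s=%s" % kv for rdn in rdns for kv in rdn)
    return {"subject": flat(cert["subject"]),
            "issuer": flat(cert["issuer"]),
            "notAfter": cert["notAfter"]}

# Example (requires network): print(summarize(peer_cert("www.duckduckgo.com")))
```

If the issuer line names a public CA rather than the site itself, the cert is CA-signed; the expiry date and subject are right there too.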

Carrots February 3, 2018 4:32 AM

@Anders

Quoting Moxie, the PKI is “total ripoff and mostly worthless”. It’s weird how we strongly condemn the idea of backdoors and golden keys, yet accept the golden keys provided by CAs. Equally bad are the large Internet companies that handle massive amounts of user data without providing additional security (like E2EE) from them.

Attempts to actually lock out themselves from user data are laughable considering how much money these companies have. We see deviations from high quality standards like Signal protocol, bad defaults like non-E2EE by default (Hangouts, FB Messenger) and disabled non-blocking fingerprint warnings (WhatsApp), bad key lengths (iMessage), lack of fingerprints (iMessage, Confide), lack of forward secrecy (iMessage), proprietary software (all of the above), lack of metadata protection (all of the above).

This isn’t something these companies really want to solve. Considering the attention to detail in other parts of their software, not understanding technical nuances isn’t a credible explanation. Apple provides PGP key fingerprints on their site. WhatsApp has intentionally deviated from what Moxie offered them.

“I’ve read studies and heard speeches in academic circles that theorize that concept, but we never would issue a ‘fake’ SSL certificate,” Jones said, arguing that would violate the SSL auditing standards and put them at risk of losing their certification. “Theoretically it would work, but the thing is we get requests from law enforcement every day, and in the entire time we have been doing this, we have never had a single instance where law enforcement asked us to do something inappropriate.”

What service exactly is a Certificate Authority such as GoDaddy providing to LEAs, if not certificate signing? Plus, that statement leaves so much room for interpretation. Maybe they don’t think it’s inappropriate to comply with lawful requests. LEAs don’t ask; they hand over court orders: Lavabit wasn’t exactly “asked” for its SSL keys. The certificates are not fake; they are real, valid certificates whose private key is not fake, just a different one. Maybe the CA does something weird and hands the private signing key to the LEA every time, so they can generate all the certificates they want themselves. Maybe CAs lease out Packet Forensics’ boxes with pre-installed CA keys in “exfiltration proof” smartcards. And finally, these companies don’t even have to play word games; they are outright forbidden by law from telling the truth.

Such an attack, though, could be detected with a little digging, and the NSA would never know if they’d been found out.

That only applies to rogue certificates signed with a CA key, in situations where the public key of the certificate is pinned in the browser in a way that leaves no way to replace it with a newer one (how is this handled in browsers?). Also, if the NSA requests / hacks Facebook’s TLS private key, there is absolutely no way to detect the attack.

So why bother? It doesn’t hurt to use TLS to get rid of low-budget attackers, banana dictatorships etc., but unless you assume the big players are routinely decrypting TLS (which, even if we don’t know the methods, the Bullrun program suggests they can), you’re not protecting yourself in a meaningful way. Use FOSS E2EE software and make the WoT a reality; nobody else has an economic incentive to fix this issue for you.

Gunter Königsmann February 3, 2018 7:34 AM

In a corporate environment, chances are that your HTTPS connection goes to a middlebox, not to the other end. All you can hope for then is that the middlebox checked whether the certificate of the target you want to communicate with is trustworthy, and that the private key the middlebox is using isn’t widely known, since your computer will most likely accept it as a certificate authority.

What I don’t know is: if the communication partner’s certificate isn’t trustworthy, how does the middlebox signal that to your browser, as many middleboxes do?

Anders February 3, 2018 7:52 AM

I remember the times when Russian banks used self-signed certificates, and the certificate fingerprint was written on the bank’s web page so that clients could check it.

Current blind CA trust is so broken…
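That out-of-band check is still easy to do by hand. A minimal sketch of computing the fingerprint yourself, assuming Python 3 and a certificate exported from the browser to a PEM file (the `bank.pem` filename is just an example):

```python
import hashlib
import ssl

def fingerprint(pem_cert: str) -> str:
    """SHA-256 fingerprint of a PEM certificate as colon-separated hex,
    computed over the DER bytes (the form browsers usually display)."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)   # strip the PEM armor -> DER
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# pem = open("bank.pem").read()   # certificate saved from the browser
# print(fingerprint(pem))         # compare against the published value
```

If the value you compute matches the one published out-of-band, no CA needs to be trusted at all for that check.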

bttb February 3, 2018 8:34 AM

I listened to c-span radio for a few minutes this morning.

Whackos and non-Whackos on multiple sides of the issues were chattering away after calling in.

Are members of Congress dumb enough to be jammed by Trump and Nunes?

Where might the Congress, the Courts, the Medical Community, the Intelligence Community, the Law Enforcement Community, and the Defense Department be should Trump start making some desperate moves to save his sorry a$$?

I assume people in those communities or organizations are game playing various scenarios. What might those scenarios be?

Bo Diddley or Stravinsky anyone; along with some popcorn?

http://blueslyrics.tripod.com/lyrics/bo_diddley/who_do_you_love.htm
https://www.youtube.com/watch?v=MAGoqMZRLB4

https://en.wikipedia.org/wiki/Right_of_spring
https://www.youtube.com/watch?v=jm5wXERBzhw

OT
http://knoppix.net/forum/threads/29100-Command-Line-to-wipe-a-hard-drive

bttb February 3, 2018 9:18 AM

tl;dr

https://www.youtube.com/watch?v=G9tenSy-vzo George Thorogood, Who Do You Love, ~ 5 minutes

https://www.youtube.com/watch?v=Z0xNo2894Fw Rite of Spring ballet, ~ 5 minutes

and from above, with a question: Is this cookbook entry any good?

“ok, here goes, if you type wrong drive number you will wipe wrong drive beyond easy repair, so take care.
first you become “superuser” or more correctly root.
Code:
sudo su
everything you do from now is dangerous.
first you find name of drive by typing
Code:
sudo fdisk -l
on a commandline.
the outout wil be something like

Disk /dev/sdb: 7948 MB, 7948206080 byte
Enhet start end Block Id System
/dev/sda1 * 11 19166 7757824 83 Linux
(yeah I am using a pen-drive just now)

this tell me that I have only one drive called sda and only one partition called 1

a ATA drive would show up as hda for first drive, hdb for second, hdc for third…
a SCSI or SATA drive would show up as sda for first drive, sdb for second, sdc for third…

partitions are merely numbers 1, 2, 3…

wiping the disk I write
Code:
dd if=/dev/urandom of=/dev/sda&&dd if=/dev/zero of=/dev/sda
this wil first write “random” data to the entire drive bit by bit, in the process wrecking any partitiontable, filesystem… then it will write zero’s to entire drive bit by bit (yeah, will take a while).

if you just wanted data to be fairly well destroyed so you could start anew, here you go.

if not enough, now for second step.
make disk one large partition and encrypt entire drive, that is a bit more than I can write here, but here you have one link (search web for more, there is a lot out there).

http://www.tldp.org/HOWTO/html_singl…ryption-HOWTO/

Actually you could likely do it using RSA/SHA like if it was a textfile or E-mail.
use sha or rsa on /dev/XYZ as if it was a file

if you want a “deeper wipe” repeat ALL steps above 25 times/disk

if you want data 100% gone you are out of luck as to simpel computer code, according to authorities only way to be 100% certain is if you melt them down and almost have to stir the thing according to some (not likely needed imho as the magnetic properties will be gone past curie point)
Last edited by OErjan; 03-12-2011 at 07:58 AM.”
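For what it’s worth, the two `dd` passes above (random, then zeros) are easy to reproduce and test from a script before pointing them at a real device. A rough sketch, assuming Python 3; as the quoted post warns, targeting the wrong device is unrecoverable, and (as noted further down this thread) no overwrite pass touches HPA/DCO hidden areas:

```python
import os

def overwrite(path: str, block: int = 1 << 20) -> None:
    """One pass of pseudo-random bytes, then one pass of zeros, mirroring
    the two dd invocations above. Point it at an ordinary file to try it
    out; pointing it at /dev/sdX (as root) overwrites the whole device.
    The size is taken from the file; a raw block device reports size
    differently, so you would need to query it another way there."""
    size = os.path.getsize(path)
    for fill in (os.urandom, lambda n: b"\x00" * n):
        with open(path, "r+b") as f:
            remaining = size
            while remaining:
                n = min(block, remaining)
                f.write(fill(n))      # overwrite in place, block by block
                remaining -= n
            f.flush()
            os.fsync(f.fileno())      # force the bytes out of the page cache
```

The `fsync` matters: without it the “wipe” may sit in the page cache while the old bytes are still on the platters.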

keiner February 3, 2018 9:28 AM

I use a screwdriver and a hammer for end-of-life HDDs. Unscrew the controller and remove the platters (normally glass, sometimes metal). Put the controller and the platters in two layers of plastic bag and make small pieces with the hammer. End of story.

Anders February 3, 2018 9:53 AM

@bttb

“and from above, with a question: Is this cookbook entry any good?”

This misses any possible hidden areas – HPA and DCO.

Physical destruction is the best – take out the platters and bend them into pieces.

A lot of hard drives have glass platters; these are easy, but beware. Aluminium platters are also very easy to bend into pieces with bare hands.

And yes, olympics are coming!!!

https://en.wikipedia.org/wiki/File:IBM75GXP_Failed_Disks.png

Gunter Königsmann February 3, 2018 10:38 AM

I don’t believe in Bitcoin. But I own a 1000000000 Reichsmark bill from the inflation time on which one can read that the bank will be happy to exchange it with 1000000000 golden Reichsmark coins.

This wouldn’t be the 1st currency with no real intrinsic value.

Wael February 3, 2018 11:24 AM

@Gunter Königsmann,

I don’t believe in Bitcoin…

Regardless, there are those that made tons of money from ‘crypto-currency’. At one point, about two years ago, I could have invested $10,000 in one of them when it was 60 cents a ‘coin’. Now it has surpassed $1000. I could have retired, but instead, in my infinite wisdom, I chose to invest in a couple of companies. One became insolvent (a nicer word for ‘went bankrupt’) and the other lost over 90% of its value. The way I see it, I’ll have to work until I ‘shuffle off this mortal coil.’ In other words: retire from this life.

But I own a 1000000000 Reichsmark bill from the inflation time…

Like this one? It’s gotta be worth something!

This wouldn’t be the 1st currency with no real intrinsic value.

No, it would not be, although it’s debatable whether anything (including gold and precious metals) has a universally agreed-on ‘intrinsic value’. Digital / virtual currency is likely the next evolution in ‘medium’ asset representations. It still remains to be seen whether blockchain is going to be the underlying technology.

Who? February 3, 2018 12:18 PM

Have you noticed the huge number of dangerous URLs Google returns when you type “Intel microcode” and restrict the search results to the last few days? Worrying at best.

While here… why is the load on our processors so high, up to 40%, when you stay at http://www.google.com? (Of course without typing anything in the search field.) It does not happen on search engines like duckduckgo.com, where the load is usually less than 5%.

What is Google running in the background on our computers?

albert February 3, 2018 1:26 PM

“…“Overall, JASON finds that AI is beginning to play a growing role in transformative changes now underway in both health and health care, in and out of the clinical setting.”…”

https://fas.org/blogs/secrecy/2018/02/ai-health-care/

“…Fundamentally, the JASONs said, the future of AI in health care depends on access to private health data….”

. .. . .. — ….

hmm February 3, 2018 1:29 PM

“Someone has been given a patent to geotag IP packets.”

Yeah? Who gets the patent for spoofing them?

Crickets.

Tatütata February 3, 2018 1:30 PM

There is a new installment in the Russian wee-wee file…

Not the one you’re thinking about, but this one, about defeating tamper-resistant urine sample bottles.

The northern German TV network affiliate broadcast a new two-part documentary, timed to coincide with the upcoming winter games in Korea, which was simultaneously made available in English.

part 1, part 2

The doping controls will use a new model of bottle, in which the metal locking ring is replaced by a plastic device. But this model is also flawed, as it cannot absolutely guarantee that the bottles are unopenable. After refrigeration, more than a few of them can be opened and closed at will, instantly destroying confidence in the system.

Furthermore, practical methods of circumventing the system are suggested (which may not be the ones actually used in Sochi).

It is shown how the old-style glass bottle could simply be broken off and separated from the cap, and replaced by a new one bought off the shelf. A determined laboratory technician could distinguish the bottle types only with a close examination, and presumably only after being told to be on his guard.

The bottles carried a label, but these are apparently very easy to imitate. Even though the journalist doesn’t want to provide details, it seems obvious to me that the laser labeling machine can also be bought off the shelf, and the only difficult element is to find an adhesive markable film of the proper size and color. My bet is that these are also a commercially available type.

It is also stated that the new bottles and caps can be copied lock, stock, and barrel, inclusive of the “security” hologram, and made to custom order very cheaply in Shenzhen ($0.14 for the bottle at 150k pieces, and €6 for the entire thing). (Hopefully the manufacturer will destroy the tooling and not start selling bottles on his own. That is a frequent problem in China.)

k15 February 3, 2018 1:44 PM

Sorry, I’m still not clear on this.

  1. When I go to sites like duckduckgo.com the address bar is prefaced with the padlock icon and “Duck Duck Go, Inc. [US]”
  2. When I go to sites like google.com, the address bar is prefaced with the padlock icon and “Secure”.

How can I, sitting looking at my browser, be assured that sites in the latter category really & truly aren’t, on their real websites on servers hundreds of miles away, actually using the same type of certificate as those in the former category?

Tatütata February 3, 2018 1:54 PM

The World Anti-Doping Agency (WADA) response:

https://www.wada-ama.org/en/media/news/2018-01/wada-update-regarding-new-generation-bereg-kit-geneva-security-bottles

The problem with the 2017 model bottle is acknowledged. It will be immediately pulled, and the 2016 model will be used instead.

WADA states it had been informed by an accredited laboratory in Cologne on 19 January. I wonder what would have happened (or not) if the NDR hadn’t made this information public…

maqp February 3, 2018 3:18 PM

Last “quick” update on TFC’s experimental version:

This became a longer post than intended. If you’re on mobile, you can jump over it by searching for “MAKEITSTOP” on the page.

Back in the day when TFC communicated via Pidgin, there was no functional API for sending files exported from TFC. This limitation was bypassed by encoding files on the TxM side and sending them inside packets that are indistinguishable from messages. This became really useful when traffic masking was added: even in the upcoming version, this feature remains a slow but covert way to send files in the background without leaking metadata about the quantity/schedule of communication, or about the fact that a file is being sent.

However, when traffic masking is disabled, this method is very slow, and it introduces a lot of overhead when there’s a separate nonce and tag for every assembly packet. TFC should decide on the threat model between Eve (a global backbone tapper) and Mallory (an Eve who has also hacked the NH), whom I described earlier. TFC’s main focus has been endpoint security, so the claims it makes about the security of content and metadata should assume Mallory is observing the NH. When traffic masking is enabled, metadata is protected from both Eve and Mallory; when it’s disabled, the question becomes: should TFC keep protecting metadata against Eve at the cost of performance, when a) Tor and Onion Services already offer quite a bit of protection, and b) this protection fails anyway if the NH is hacked? Mallory can see the TxM send e.g. five hundred messages to the NH in a brief time, so she’s able to determine the output data was a file simply because nobody types that fast. So without traffic masking, it’s impossible to hide the fact that the user sends a file in this scenario. Therefore, making the NH display a message about it is less misleading to the user, and allows us to improve file transmission speed quite a bit.

To speed up multicasting of files, I introduced earlier the file export (/fe) command, which compresses and encrypts the file and outputs the ciphertext to the NH, from where it can be shared with contacts via e.g. OnionShare. Contacts can download the file with Tor Browser to their NH, request the decryption key via TFC, and, after the /fi import command and upload of the CT to the RxM, input the file decryption key and let the RxM store the file. This isn’t easy to use, and in some cases it’s not even secure: suppose Alice uses /fe to send a file to Bob and Mallory (who has infiltrated the group), and Mallory receives it first (say Bob is AFK for a moment). Since /fe uses the same symmetric key for both Bob and Mallory, Mallory could change the content of Alice’s file on Bob’s NH and alter what Bob receives on his RxM.

The way to fix this is to also share the hash of the CT embedded in the decryption key, so that the recipients’ RxMs verify the CT before decryption. Since Alice can deliver the embedded key+hash all the way to Bob’s RxM, where the file is imported, using a longer key does not increase the amount of manual labor. The current version has another problem: if Bob forgets to ask for the key from Alice before importing, not only can’t he decrypt the file, he’s also unable to see incoming messages from Alice containing the key. The solution is to automatically share the decryption key beforehand. And if we’re going to implement that, why not also automate file CT delivery to Bob’s RxM, now that the users have their own Flask server and client with no API limitations like in Pidgin. This would get rid of the need to publish files over OnionShare. It’s not even going to be slower, since Tor is the bottleneck here.

The upcoming version removes both the /fe and /fi export and import commands. When traffic masking is disabled, /file will compress and encrypt the selected file with a symmetric key, and output F|R1|R2|..|Rn|CT|H(CT) from the TxM, where F is the file packet header, R# is a recipient’s account, H is a hash function and | denotes concatenation. Flask publishes the file to each recipient, who will download the CT from the server in one Base85-encoded chunk. The recipient’s NH outputs the decoded CT to their RxM; the RxM receives the CT, but since the file decryption key has not yet arrived, the RxM caches the CT in a dictionary whose keys consist of the CT’s hash and the purported account of the sender:

file_buf[ct_hash + account.encode()] = file_ct

The way group member Mallory could attempt an existential forgery here is: she could use the symmetric key of a file she received, produce CT’|H(CT), and replace CT|H(CT) on some other group member’s NH before it’s output to their RxM. To detect this, the hash in file_buf’s key is calculated from the received ciphertext. The bundled hash is included by the NH only because the comparison improves error detection at no computational cost; using that value directly for CT identification is not safe.

On the sender’s side, once the file CT has been output to the NH, the TxM will multicast the file key delivery message, containing the original CT hash plus the decryption key of the file, to the active contact or the members of the active group. When a contact’s RxM receives this special message, the key is stored in the file_keys dictionary the following way:

file_keys[ct_hash + account.encode()] = file_key

Unlike in the case of a received file, the account is not parsed from the content of the file key delivery message. Instead, it is the contextual account: in order to be able to deliver this message to the RxM, the sender needs to be a contact with whom the TFC key exchange has been completed. The message’s CT must be delivered from the NH to the RxM with correct account data. Trying to spoof the origin of the message makes the RxM load the keys for the wrong contact, and the ciphertext MAC will fail. If the decryption of the file key delivery message succeeded, the message could only have come from the sender the datagram’s header metadata implies it came from.

The line of code above is the only place where the file_keys dict can be assigned new key-value pairs. Therefore, Mallory is unable to add file keys to this dictionary for anyone else except herself, provided that she’s a contact in the first place.

In other words, when Alice sends a file to Bob and Mallory, Mallory, who receives the file first, is unable to deliver a different file CT to Bob using the same symmetric key. The file key delivered by Alice that could decrypt Mallory’s CT will not be loaded from the file_keys dict, because the hash of Mallory’s CT does not match the hash delivered in Alice’s file key delivery message. If Mallory tries to spoof the file key delivery message, she’s unable to forge the account data part, because that would require sending the message from Alice’s TxM, and if she could do that, no separate attack would be necessary. Therefore she’s unable to produce an existential forgery.
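The caching logic above can be sketched as a toy model. This follows the `file_buf` / `file_keys` snippets earlier in the post; using SHA-256 here, and the exact function boundaries, are my assumptions for illustration, not TFC’s actual primitives:

```python
import hashlib

# Buffers on the RxM, keyed by H(ciphertext) || account, per the post above.
file_buf: dict = {}
file_keys: dict = {}

def receive_ct(account: bytes, ct: bytes) -> None:
    """Cache a received file ciphertext under its *locally computed* hash;
    the hash bundled by the NH is never trusted for identification."""
    ct_hash = hashlib.sha256(ct).digest()
    file_buf[ct_hash + account] = ct

def receive_key(account: bytes, ct_hash: bytes, key: bytes) -> None:
    """Store a file key from an authenticated key delivery message. The
    account is contextual (whichever contact's MAC key verified the
    message), not parsed from the message body."""
    file_keys[ct_hash + account] = key

def try_decrypt(account: bytes, ct_hash: bytes):
    """Pair ciphertext and key only when both refer to the same bytes from
    the same contact; a substituted CT therefore stays undecryptable."""
    ident = ct_hash + account
    if ident in file_buf and ident in file_keys:
        return file_buf[ident], file_keys[ident]
    return None
```

If Mallory swaps in CT’, its locally computed hash no longer matches the hash in Alice’s key delivery message, so the lookup fails and the forged file is never decrypted.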

To summarize, the new /file command is

  • as fast as export with /fe command: File only needs to be output from TxM once.
  • more secure: There is no risk of existential forgery by frenemies
  • more convenient: Sender doesn’t need to manually type OnionShare URL from NH to TxM
  • more convenient: Recipient doesn’t have to type OnionShare URL from RxM to NH
  • more convenient: Sender doesn’t have to send the decryption key over e.g. /whisper message
  • more convenient: Recipient doesn’t have to worry about remembering to ask for the key before importing
  • more convenient: Recipient doesn’t have to manually copy-paste received key to file import dialog

The only downside compared to /fe is if Alice wants to also send the file to new contact(s): she can’t just share the OnionShare URL, file hash and decryption key. Instead she’ll have to resend the file. However, if there are multiple contacts, she can create a temporary group, not send a group invitation to its members, and then use that group to multicast the file to each recipient over a single file CT transmission.


This feature is already included in the experimental release for Ubuntu 17.10. The only known bug (currently unfixable) is that the user needs to remove the user_data directory from $HOME/tfc/ between restarts and start over. This is because Stem is unfortunately currently unable to reliably create a new v3 Onion Service with the same signing key, so there’s no guarantee the server is reachable after the session ends. This stagnates the development process until approximately April or May, or possibly as late as July. A great way to help out is by donating to the good folks at the Tor Project. (Note that neither I nor TFC is affiliated with the Tor Project in any way.)

Finally, I’ll warn that while everything except src.nh.onion is well tested, it’s not yet safe for production use. The Tor version in use is an alpha, and there’s a lot I need to understand about the new Onion Services, from key blinding to subkeys, so endpoints might not be as anonymous as Tor’s spec would imply.

post skip tag: MAKEITSTOP

Grauhut February 3, 2018 3:34 PM

@keiner: For retirement of old disks that don’t contain sensitive data I prefer a scabbling pick and some brute force. 🙂

albert February 3, 2018 4:02 PM

@Tatütata,
What’s really sad is that doping is even an issue. I have little doubt that someone, somewhere, is working on a device that can quickly test each athlete before each event.

The solution I propose is to simply legalize doping. Problem solved.

. .. . .. — ….

Sancho_P February 3, 2018 5:09 PM

@k15 (@Anders)

(Disclaimer: I’m not an expert at all!)
Probably I didn’t understand your question, but with most browsers a click at the padlock will show the certificate itself. A certificate is a certificate, and a self signed may be more valid than a CA’s. There are different types of certificates, that may be confusing, but presumedly it’s intended. Duck Duck holds an EV cert from DigiCert and google is it’s own authority, but whom would you trust really, your wife? However, a certificate is better than none, but to know more one would need the history of certs of a certain entity and if someone else in an other jurisdiction would see the same cert …

Also, imagine sitting in a cardboard box with only one small slit to the outside world, no other communication. The paper you’ve sent out with “Help, need money” is answered by a signed paper saying “Hi, I’m a good guy, tell me your credentials” – would you trust it?

Sancho_P February 3, 2018 5:19 PM

@maqp

The onion and encryptional details of TFC are way above my head, I’m lost, but I’m glad to hear you’re still making progress.

Anyway, my first comment is meant more than half seriously: make your adversaries male, e.g. Eddie and Mick, so as not to trigger a shitstorm in the near future. We’re living in a crazy time.

Apologize when I now state the obvious:
The threat model is the most important decision, it’s late now, but not too late to give it a closer look!!!
The second is where you go from here. I’ve never understood the basic chat idea (effectively it is text sent as a file snippet), because the nearly real-time send and receive requirement (similar to a standing connection) is diametrically opposed to security considerations – the worst case for hidden communication. A significant random delay would help to defeat timing analyses.
So traffic masking is mandatory, at least in the molecular (packet) view.

Personally I’d never prioritize on speed but on obfuscation, in the rare case where someone has to transfer a secret video-file via TFC even several minutes more would not matter (and upload speed might be the limit anyway).
My hope always was to have TFC transmit data like (text) email or key files from a (dumb) secure device to the recipient’s (dumb) secure device.

Didn’t understand your ”Therefore making NH display message about it is less misleading …”.

Multicast (group?) sending using the same key for a group is something I wouldn’t even want to think about; it sounds crazy, unneeded complexity, I wouldn’t use it – dangerous.

Sorry, that threw me off track, will chew on the rest later, too much for me today.

Sancho_P February 3, 2018 5:25 PM

@albert

Great suggestion, would improve in all directions, from science to business, probably also slightly reduce population, bingo!
Go and tell the crocodile, he will be pleased!

(required) February 3, 2018 5:56 PM

@

“The solution I propose is to simply legalize doping. Problem solved.”

Absolutely, and the solution to nuclear holocaust is to build 10x more weapons.

I like how you think, it’s very simple. Would you like to be the next leader of the world?
It’s a very simple job if you don’t think about it.

Alyer Babtu February 3, 2018 5:58 PM

In re doping –

Alternative solution: return Olympic sport to its original purpose, honor and worship of the gods. No one would dare cheat.

65535 February 3, 2018 8:46 PM

@ Anders, Carrots, Gunter Königsmann, k15, Sancho_P and other posters

Regarding the SSL/TLS stripping:

This is a dangerous and misunderstood subject because it affects all of us, all of our banking transactions and all SSL/TLS communications.

Let me first say that I am not a PKI expert, but general SSL stripping is done by several methods; the most straightforward is Avast’s CA certificate and its stripping engine.

Basically, Avast places its root certificate in the Microsoft root certificate store in the OS, and then uses its stripping engine to intercept a website’s SSL certificate and replace it with Avast’s certificate, reading the entire session while “checking for malware” at the same time. Avast claims it has a white list of about 600 banks which it doesn’t impersonate and strip SSL encryption from [the obvious problem with that is Avast could lift the white list and read all conversations]. Avast does this legally when you accept their complex Terms of Service.

[Avast SSL stripping conversation]

‘But turns out that yes, in fact it is replacing web certificates with its own root CA certificate and then using that in place instead of the website’s certificate. This is how Man in the Middle (MitM) attacks are carried out.’-Stackexchange commenter

https://security.stackexchange.com/questions/148402/should-antivirus-https-scanning-be-left-on-is-it-secure

The Avast method is just one of many methods to strip SSL. The Kaspersky SSL stripping process caused NSA digital weapons to land on Kaspersky servers – which Kaspersky blamed on the user disabling its AV program to allow a keygen (key generator) to be used to install Microsoft Office.

https://en.wikipedia.org/wiki/Kaspersky_Lab#Malware_discovery

and

https://en.wikipedia.org/wiki/Kaspersky_Lab#Bans_and_allegations_of_Russian_government_ties

and

https://www.theregister.co.uk/2017/10/25/kaspersky_nsa_keygen_backdoor_office/

This says a bit about the effectiveness of SSL strippers’ whitelists and their actual usefulness.

Anders points to the Wired article, which previews the SSL-stripping box for law enforcement and possibly everyday private investigators.

[Wired]

“…The boxes were designed to intercept those communications – without breaking the encryption – by using forged security certificates, instead of the real ones…Packet Forensics, which advertised its new man-in-the-middle capabilities in a brochure…According to the flyer: “Users have the ability to import a copy of any legitimate key they obtain (potentially by court order) or they can generate ‘look-alike’ keys designed to give the subject a false sense of confidence in its authenticity.” The product is recommended to government investigators, saying “IP communication dictates the need to examine encrypted traffic at will.” And, “Your investigative staff will collect its best evidence while users are lulled into a false sense of security afforded by web, e-mail or VOIP encryption.”-wired

https://www.wired.com/2010/03/packet-forensics/

I will say that “import a copy of any legitimate key they obtain (potentially by court order) or they can generate ‘look-alike’ keys” slides over the wiretap laws in the USA with little explanation. I doubt they even import keys; they probably just use the ISP’s cert keys – but that is just a guess.

[Wired cont.]

“SSL authenticates that your browser is talking to the website you think it is….browser makers trust a large number of Certificate Authorities – companies that promise to check a website operator’s credentials and ownership before issuing a certificate. A basic certificate costs less than $50 today, and it sits on a website’s server, guaranteeing that the BankofAmerica.com website is actually owned by Bank of America. Browser makers have accredited more than 100 Certificate Authorities from around the world, so any certificate issued by any one of those companies is accepted as valid… Firefox has its own list of 144 root authorities. Other browsers rely on a list supplied by the operating system manufacturers, which comes to 264 for Microsoft and 166 for Apple. Those root authorities can also certify secondary authorities, who can certify still more – all of which are equally trusted by the browser. … Technologists at the Electronic Frontier Foundation, who are working on a proposal to fix this whole problem, say hackers can use similar techniques to steal your money or your passwords… two security researchers demonstrated how they could get certificates for any domain on the internet simply by using a special character in a domain name… Seth Schoen, an EFF staff technologist. “There is software that is being published for free… The researcher published a paper on the risks (.pdf) Wednesday, and promises he will soon release a Firefox add-on to notify users when a site’s certificate is issued from an authority in a different country than the last certificate the user’s browser accepted from the site [Good luck with trying to determine the IP location using TLD country list – unless you have a verified private IP list and locations –see TLD post below]

https://www.schneier.com/blog/archives/2018/01/skygofree_new_g.html#c6768724

“EFF suggests a regime that relies on a second level of independent notaries to certify each certificate…”- wired

I believe that is referencing the old and defunct Perspectives notaries project:

“Perspectives by perspectives-cmu, dschaefer… Connect securely to https websites by checking certificates with network notaries. See http://www.perspectives-project.org”-perspectives for Firefox

https://addons.mozilla.org/en-US/firefox/addon/perspectives/

The actual method of SSL stripping and the “stripping engine” has never been thoroughly explained to me, but Bruce S. has discussed it for years, starting with the MD5 collision scam.

https://www.schneier.com/blog/archives/2008/12/forging_ssl_cer.html

Next, the entire topic of SSL stripping and the SSL stripping engine spanned several posts, ending with this one:

https://www.schneier.com/blog/archives/2017/12/friday_squid_bl_606.html#c6766664

A vexing question about SSL stripping by AV vendors moved over from the “GCHQ Found and Disclosed Vulnerabilities” thread.

@ all SSL/TLS experts and the like
Is anti-virus software that strips SSL/TLS, runs with a high level of privilege, and sends files off the local machine a danger? Is this SSL/TLS stripping done by hiding the site’s cert behind the root cert of the AV vendor, and is that dangerous or exploitable? Could this common method of SSL stripping undermine the PKI system and banking transactions? How exactly is SSL stripping done by AV vendors?

Here are those questions by Wael, Clive R. and 65535.

Wael,
“Root cause: violation of ‘Least Privilege’. Anti malware processes need not have write privilege. Also a violation of ‘Separation of Domains’: Anti malware processes need to be containerized so vulnerabilities are local to their container. New architectures need to be explored. On the subject proper: disclosing one vulnerability doesn’t imply all were or will be disclosed.“-Wael

https://www.schneier.com/blog/archives/2017/12/gchq_found_–_a.html#c6766348

In summary, can anybody explain how the SSL stripping engine or SSL certificate-forging engine works?

[Please excuse all of the grammar and other errors. I had to bang this out]

maqp February 3, 2018 9:22 PM

@ Sancho_P

“The onion and encryptional details of TFC are way above my head”

I’ll update the documentation once the design is finalized. Version 3 Onion Services might still cause major changes to the current design. If you need to catch up and haven’t already, I’ve posted about the Onion Service’s path derivation with X25519 in an earlier FSB (possibly two weeks ago).

“Transform your adversaries to male”

Absolutely not. But I’m willing to be more careful about using the proper names for commonly associated roles. The English language is a blessing with its gendered personal pronouns, which make it easy to distinguish who is in question. For example:

“Alex sends Message to Bob and Chuck (who performs an existential forgery) and when Bob receives the message from him…” Whose message did Bob receive?

The Finnish language only has “hän”, which refers to either gender, so you have to keep using names, and I don’t think that serves the purpose better. This isn’t about upholding patriarchy through sexism; it’s about math and being understood. Also, women have as much right to be evil as men.

“it’s late now, but not too late to give it a closer look!!!”

It’s never too late, and in the end things aren’t going to change for the worse for existing users. Lack of quantity/schedule metadata protection by default was already an issue with Pidgin. What I didn’t do here was create a non-traffic-masking connection with a small amount of added metadata protection against Eve that was useless against Mallory. Instead I created a much more convenient environment to use traffic masking in, and at the same time I was able to significantly increase file CT size, making transfer much faster and less error-prone on non-traffic-masking connections.

As for the changes in threat model through upcoming drastic edits: I’m not writing TFC’s next version for its current users, because the hardware architecture doesn’t allow updates for TxM. There will be no backwards compatibility, because not only does it introduce the possibility of downgrade attacks, it also unnecessarily complicates the codebase, which I’m trying very hard to keep minimal. The next version is going to change the entire messaging backend. This is the first and hopefully the only time this will happen. It’s unfortunate that users will have to upgrade, but it’s the lesser of two evils. All new users deserve the best I can provide, and the better the features, the more users might adopt TFC. The earlier I make these major changes, the smaller the userbase I’m going to piss off in the long run. And I feel this is where the project shines. How many messaging tools use v3 Onion Services, X25519, XChaCha20-Poly1305, Argon2, or support traffic masking? TFC will feature a protocol that’s ideal enough very soon. My conjecture is v3 Onion Services and X448 (and perhaps a PQ key exchange if one with reasonable key lengths is found) are the only things missing. Both are under way upstream.

“I’ve never understood the basic chat idea”

I recommend you take a look at this GIF to see how X25519 key exchange and basic chat works. Have you read the security design in GitHub’s wiki?
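For intuition, the exchange in that GIF reduces to Diffie-Hellman: each side combines its own private key with the other’s public key, and both derive the same shared secret. A toy modular-arithmetic sketch (TFC actually uses X25519 over Curve25519; these small parameters are purely illustrative and not secure):

```python
# Toy Diffie-Hellman; real TFC uses X25519, not modular exponentiation.
p, g = 0xFFFFFFFB, 5                    # small prime and generator, NOT secure
alice_priv, bob_priv = 271828, 314159   # arbitrary example private keys

alice_pub = pow(g, alice_priv, p)       # sent over the insecure channel
bob_pub = pow(g, bob_priv, p)

shared_a = pow(bob_pub, alice_priv, p)  # Alice's view of the secret
shared_b = pow(alice_pub, bob_priv, p)  # Bob's view: identical
assert shared_a == shared_b
```

The shared value would then be fed to a KDF to produce the actual message keys.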

“effectively it is text sent as a file snippet”

Text is read from the user, compressed, encrypted, and output as a packet to NH, where the packet is published as a B85-encoded text string under the Onion site. The contact’s NH loads that string, decodes it, and uploads it to RxM for decryption and display. It doesn’t turn into a file in any phase. Sent files turn into files on the recipient’s RxM once transfer is complete; before that they don’t touch any disk.
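A stdlib-only sketch of that packet framing; the encryption step (XChaCha20-Poly1305 in TFC) is deliberately omitted because it has no standard-library implementation, so this shows only the compress-and-B85-encode shape:

```python
import base64
import zlib

def pack(text: str) -> str:
    """Compress and B85-encode; real TFC encrypts between these two steps."""
    return base64.b85encode(zlib.compress(text.encode())).decode()

def unpack(packet: str) -> str:
    """Reverse of pack(): decode, decompress, return the plaintext."""
    return zlib.decompress(base64.b85decode(packet)).decode()

assert unpack(pack("hello from TxM")) == "hello from TxM"
```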

“Personally I’d never prioritize on speed but on obfuscation”

The problem with traffic masking is that it’s not novice-friendly. If I now changed that to default behavior and launched the new version, then after creating a master password, exchanging the local key, and adding the first contact, TFC would not allow me to, e.g., add another contact. This is because of a built-in active protection that prevents sending a public key to a new contact, because doing so would reveal when TFC is being used. So new users would have to first learn how to disable the feature. This is unacceptable IMO. Another issue is TFC would fill NH’s RAM while contacts are offline.

“My hope always was to have TFC transmit data like (text) email or key files from a (dumb) secure device to the recipient’s (dumb) secure device.”

You’ve been able to do that for a long time: my post above was about making it even more convenient. I’ll see if I can make something like a screen-cast about how to use TFC for the next release.

“Didn’t understand your ”Therefore making NH display message about it is less misleading …””

If you look at the GIF above, when user (Bob) replies to Alice, NH will display the following message:

08-29 / 23:17:28 – message TxM > bob@jabbim.pl > alice@jabbim.pl

This is a sort of visualization of what Mallory can deduce from what happens on NH. But what the user might not realize is that a burst like

08-29 / 23:17:29 – message TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:29 – message TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:29 – message TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:29 – message TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:29 – message TxM > bob@jabbim.pl > alice@jabbim.pl

with five packets in a single second indicates a file transfer to Mallory (it might also indicate a very long message, but the more packets output consecutively, the smaller the likelihood). However, under traffic masking, TxM outputs packets to NH at a fixed rate regardless of whether anything is actually being transferred.

08-29 / 23:17:30 – message TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:31 – message TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:32 – message TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:33 – message TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:34 – message TxM > bob@jabbim.pl > alice@jabbim.pl

These go all the way to the contact’s RxM, where they will be discarded as empty, parsed as a file (which is stored to disk), or displayed as a message. Therefore, Mallory can’t deduce anything from data that comes from TxM. However, in the first case (the burst) you can see why it could mislead a user who might think “nobody can guess I sent a file because it says ‘message’”.

08-29 / 23:17:35 – file TxM > bob@jabbim.pl > alice@jabbim.pl
08-29 / 23:17:35 – message TxM > bob@jabbim.pl > alice@jabbim.pl

This fixes the issue: it indicates a file was delivered to Alice, and doesn’t mislead Bob about what Mallory is able to deduce.
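The fixed-rate output under traffic masking can be sketched as a queue polled once per tick, emitting a real packet when one is waiting and an equal-size dummy otherwise (the 255-byte packet length is an assumption for illustration):

```python
from collections import deque

PACKET_LEN = 255                 # assumed constant packet size
DUMMY = b"\x00" * PACKET_LEN     # indistinguishable from real data after encryption

def next_packet(outbox: deque) -> bytes:
    """Called at a fixed rate: Mallory sees one packet per tick either way."""
    return outbox.popleft() if outbox else DUMMY

outbox = deque([b"\x01" * PACKET_LEN])   # one real packet queued
real, padding = next_packet(outbox), next_packet(outbox)
assert len(real) == len(padding)         # nothing to distinguish by size or timing
```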

By multicast I mean TxM outputs multiple copies of the same plaintext to contacts, or the same encrypted file goes from NH to the contacts’ NHs.

“sounds crazy, unneeded complexity, wouldn’t use it, dangerous.”

Signal has very successfully encrypted files to contacts using the same key. The only thing you need to ensure is that you decrypt the original ciphertext. The hash of the original CT and the decryption key are delivered through secure, authenticated means. It doesn’t add complexity, as you would in any case need some way to identify the CT whose decryption key you send separately (suppose the user sends multiple files).

Initially I did encrypt files to contacts with a separate key every time. To identify the file I had to add a separate random file_id next to the CT, which I realized was pointless because that’s what hashes are for: identifying files and checking their integrity. Using the hash allowed sending the same CT to multiple contacts.

Also, were I to use a different key for every contact, Mallory on NH could see consecutive file transmissions of the same size output to multiple contacts (it’s pointless to pad files because no effective padding scheme that hides large files exists), and deduce they were the same file. A different key per contact wouldn’t help: if she’s able to decrypt one of them by being a contact, she can still deduce she has effectively decrypted all of them. RxM already prevents existential forgeries, as I explained, so I don’t see what the problem is here. Let me know if I’m missing something.
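The hash-as-identifier point in the last two paragraphs, as a minimal sketch (SHA-256 is chosen here for illustration):

```python
import hashlib

def file_id(ciphertext: bytes) -> str:
    # The hash both identifies the CT and checks its integrity, so no
    # separate random file_id is needed; every recipient derives the same ID.
    return hashlib.sha256(ciphertext).hexdigest()

ct = b"one ciphertext, multicast to every contact"
assert file_id(ct) == file_id(ct)  # deterministic: one CT, one ID for everyone
```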

k15 February 3, 2018 10:55 PM

Certificates – fbi.gov and wellsfargo.com certificates are self-signed, cia.gov and bankofamerica.com’s are countersigned?
(That’s what I am seeing.)

Practical Internet Encryption February 4, 2018 5:16 AM

Incredibly ironic: the majority of USA VPN services allow the granddaddy of spyware (G**gle Analytics) to eavesdrop.
Most notably at the log-in screen where the user enters their username and password.
Worse, you must disable ad blockers/user-agent spoofers so Google can accurately fingerprint your system.
Rather than just complain, here is a whole-network AES-256 Swedish solution using a fast DD-WRT modem/router (serially chained behind the ISP modem). Servers are located in many international cities.
https://www.mullvad.net/en/guides/dd-wrt-routers-and-mullvad-vpn/

Going to the extreme, you can now weigh the pros and cons of running Tor locally.

Alyer Babtu February 4, 2018 12:42 PM

@ All ye wise

My poor understanding doesn’t get why no one has coded a network-facing anonymizer so that fingerprinting methods return to the evil querier basically nothing more than a random smiley face.

Point and Shoot February 4, 2018 1:27 PM

“The largest supercomputers in the world are inside of two companies — Google and Facebook — and where are we pointing them?” Mr. Harris said. “We’re pointing them at people’s brains, at children.

“We were on the inside,” said Tristan Harris, a former in-house ethicist at Google who is heading the new group. “We know what the companies measure. We know how they talk, and we know how the engineering works.”
https://www.nytimes.com/2018/02/04/technology/early-facebook-google-employees-fight-tech.html

Bauke Jan Douma February 4, 2018 1:49 PM

@JG4 In Roger Penrose’s Conformal Cyclic Cosmology model (CCC), entropy goes down at the end of an aeon, providing conditions for a new big bang and a new aeon. See a.o. his lecture for the Canadian Perimeter Institute.

k15 February 4, 2018 3:58 PM

Is anyone else seeing that the Wells F. website certificate is self-signed yet the B of A one is countersigned?

James Joyce February 4, 2018 4:26 PM

The Spanish intelligence agency CNI has decrypted a number of letters that were written at the beginning of the 16th century by king Fernando el Católico, using 325 different characters and symbols. Nobody had been able to read these letters for more than 500 years: Newspaper article

echo February 4, 2018 6:30 PM

I am left speechless by the UK government’s simultaneous eroding of security capability and picking on the weakest.

Royal Marines in danger of being sacrificed to ‘short-term bookkeeping’, says Defence Committee report
http://www.independent.co.uk/news/uk/home-news/royal-marine-cuts-commons-defence-committee-report-royal-navy-a8193026.html

Government spending on private firms carrying out ‘brutal’ disability benefit assessments soared by £40m in one year
http://www.independent.co.uk/news/uk/politics/personal-independence-payment-spending-government-disability-payment-assessments-pip-dwp-atos-capita-a8189291.html

C U Anon February 4, 2018 9:39 PM

@James Joyce:

The Spanish intelligence agency CNI has decrypted a number of letters that were written at the beginning of the 16th century by king Fernando el Católico

Hmmm, I guess it is obviously easier for the Spanish intelligence agency CNI to waste Spanish taxpayer money doing that than sticking a camera over a Catalan politician’s shoulder to capture Signal messages from the Catalonian political leader Carles Puigdemont, whom the Spanish government in Madrid has deposed by military force and forced into exile, and against whom it is now trying to bring in new legislation.

http://www.telecinco.es/elprogramadeanarosa/mensajes-Puigdemont-declarando-Cataluna-republicana_4_2508510004.html?type=listing

https://edition.cnn.com/2018/01/31/europe/catalonia-former-leader-carles-puigdemont-messages-intl/index.html

James Joyce February 5, 2018 2:56 AM

@C U Anon:

I mostly agree with what you say, except that (1) the history of cryptography is clearly important enough to deserve more research to be done and more taxpayers’ money to be spent on it, and (2) I understand that the leak of Puigdemont’s Signal messages was an accident and not the Spanish government’s fault. You’ll find plenty of material in this blog about how dangerous it is to use cellphones in the presence of TV cameras.

I wasn’t meaning to speak up for the Spanish government, you know….

65535 February 5, 2018 5:47 AM

@ k15

“Certificates – fbi.gov and wellsfargo.com certificates are self-signed, cia.gov and bankofamerica.com’s are countersigned?
(That’s what I am seeing.)”

I see that with the CIA and just about any DoD or .gov site. I don’t have accounts at Wells or Bank of America, so I cannot get past their respective sign-on pages.

Wells cert:

Issued to
CN: connect.secure.wellsfargo.com
Organization: Wells Fargo and company
OU: DCG-PSG
Serial Number 6A:AD:B9:14XXXXXXXXXXXXXXXX

Issued by
CN Symantec Class 3 Secure Server CA –G4
Organization Symantec Corporation
OU Symantec Trust Network
Period of Validity
Begins On Wednesday, October 12, 2016
Expires on Saturday, October 13, 2018
SHA1 Fingerprint F1:01:C2:D0XXXXXXXXXX

[Details]
Certificate Hierarchy
VeriSign Class 3 Public Primary Certification Authority – G3
Symantec Class 3 Secure Server CA –G4
Connect.secure.wellsfargo.com

About the same for BofAmer.

Issued to:
CN secure.bankofamerica.com
O Bank of America Corporation
OU eCommerce Network Infrastructure
Serial Number: 17:66:48:0F:5DXXXXXXXXXX

Issued By
CN Symantec Class 3 EV SSL CA –G3
O Symantec Corporation
OU Symantec Trust Network
Period of Validity
Begins on Sunday, August 06, 2017
Expires on Monday, October 22, 2018
SHA1 Fingerprint 00:1B:4C:C1:D7XXXXXXXXXX

[Details]
Certificate Hierarchy
VeriSign Class 3 Public Primary Certification Authority – G5
Symantec Class 3 EV SSL CA –G3

I don’t know if typing out the entire SHA1 fingerprint helps much, since a change should drastically alter the whole SHA1 fingerprint in theory. I really cannot confirm whether it has been SSL stripped by a good SSL stripping engine.
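That “drastically alter” intuition is the hash avalanche effect; a quick check with a single changed byte (stand-in bytes, not a real certificate):

```python
import hashlib

a = hashlib.sha1(b"certificate DER bytes").hexdigest()
b = hashlib.sha1(b"certificate DER bytez").hexdigest()  # last byte changed
# The digests share essentially nothing, so even the first few hex
# groups of a fingerprint expose a naively swapped certificate.
assert a != b
```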

If others can help please speak up.

k15 February 5, 2018 9:22 AM

Thanks bigno, but that’s way above and beyond; I just need an explanation at the “what does it show in the ‘security’ field of the address bar, and why should they be different” level of sophistication.

I do not understand, just looking at that field of the address bar, why Wells F. should do security differently from B of A, or likewise the FBI from the CIA. The fact that they appear to do it differently, if there’s no good business case for doing so, suggests something might not be right. Just looking at homepages here.

echo February 5, 2018 12:26 PM

The US–UK extradition treaty is one-sided. I have no idea what motivated the government to sign this off. Notably, this case heard evidence from a US psychiatrist who questioned the adequacy of safeguarding procedures in US prisons. In the UK, many psychiatrists have a bureaucratic-monopoly vested interest in propping up the system, and it is notable how quiet UK psychiatrists are when issues of miscarriages of justice and abuse of executive power crop up. The US is much more candid in this regard, and this was a welcome intervention.

https://www.theguardian.com/law/2018/feb/05/hacking-suspect-lauri-love-wins-appeal-against-extradition-to-us

A high court ruling blocking extradition to the US of Lauri Love, a student accused of breaking into US government websites, has been welcomed by lawyers and human rights groups as a precedent for trying hacking suspects in the UK in future.

echo February 5, 2018 12:31 PM

Oops. In spite of my opposition to Brexit, I noticed this article also explains that the case succeeded in large part because of the legal test devised by Prime Minister Theresa May (yes, she of the snoopers’ charter!), which prevented Love’s removal.

Timm February 5, 2018 1:42 PM

Thanks for the heads-up, @echo. The UK had no choice but to block extradition. The CAT is binding on the UK and overrides US extradition by prohibiting refoulement to a torture state. Bradley Manning’s CIDT is overwhelmingly relevant in establishing the US as a torture threat. And since the US is still in the throes of CAT follow-up on multiple urgent issues, extradition would only get Britain more entangled with the formal legal crime against humanity of systematic and widespread US torture. That would compound the problem of British CIDT in Afghanistan, currently under investigation by the International Criminal Court.

echo February 5, 2018 2:49 PM

For clarification did you mean “CAT” means ‘Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment’ and “CIDT” means ‘Centre for International Development and Training’?

I note US psychologists were eventually instrumental in questioning the US torture regime.

https://www.theguardian.com/law/2015/jul/13/psychologist-torture-doctors-collusion-jean-maria-arrigo

There are decent people about even if we all sometimes take the long way getting there.

Ratio February 5, 2018 4:12 PM

@James Joyce,

Nobody had been able to read these letters for more than 500 years

Not even Gustave Bergenroth?

@C U Anon,

[Carles Puigdemont, who] the Spanish Government in Madrid have deposed by military force and currently has forced into exile, and are now trying to bring in new legislation against.

Suuuuure, this isn’t BS at all. Nope, definitely not.

aenomymous February 5, 2018 4:24 PM

https://www.nytimes.com/2018/02/05/nyregion/cyber-crimes-unreported.html

Many of the offenses are not even counted when major crimes around the nation are tallied. Among them: identity theft; sexual exploitation; ransomware attacks; Fentanyl purchases over the Dark Web; human trafficking for sex or labor; revenge porn; credit-card fraud; child exploitation; and gift or credit-card schemes that gangs use to raise cash for their traditional operations or vendettas.

In a sense, technology has created an extraordinary moment for industrious criminals, increasing profits without the risk of street violence. Digital villainy can be launched from faraway states, or countries, eliminating physical threats the police traditionally confront. Cyber perpetrators remain unknown. Law enforcement officials, meanwhile, ask themselves: Who owns their crimes? Who must investigate them? What are the specific violations? Who are the victims? How can we prevent it?

Sancho_P February 5, 2018 6:02 PM

@ maqp

Re role names: no problem, you may stick to the old-school (*) naming conventions, but be aware that language isn’t an excuse; it’s a tool, and clearness depends on its use.

(*) Back in the good old days it was fun to name adversaries as female; however, time goes by.

The difference in our understanding may not be captured by a traditional threat model; I think it is based on the different purposes we each imagine the system serving. This spectrum is both very broad and very specific: from embarrassment to existential threat, with adversaries ranging from a brother to curious customers, from criminals up to state actors.

I had to cope (not technically) with international industrial espionage, insurance fraud, and stock-market speculation. It was always about money and, at times, against state-level actors, but not life-threatening.

We would never chat or send two encrypted messages to the same receiver/address. So to me, the idea of a group chat and a real contact list is impossible.
But the basics are similar: A secure box between NH and a kind of terminal.
Only that complexity in that secure box causes me headaches.

WetSuit February 5, 2018 6:15 PM

keiner • February 3, 2018 9:28 AM
I use a screwdriver and a hammer for end-of-life HDDs. Unscrew the controller and remove the slices (normally glass, sometimes metal). Put the controller and the slices in two layers of plastic bag and make small pieces with the hammer. End of story.

Not really effective. Some people like to play with jigsaw puzzles. Back in the day, when hard drives really were the only feasible storage, I’d take a high-temperature torch outside the building and make sure the flame was downwind (there are nasty chems on those platters). The warping and melting really does a number on the bits. Pretty much gone then. Nowadays, use a uSD and eat it. (KIDDING – nasty chems in those too.)

Thoth February 5, 2018 11:05 PM

@aenomymous

Good luck on finding a robust (2/M)FA solution. Most of them are problematic in their approach.

You have SMS OTP, which is weak to SS7 attacks; furthermore, malware on your phone can copy your SMS and even delete it.

If you use hardware OTP tokens, the endpoint (typically a web browser and the OS) is a weak link, and the attacker can listen for you typing the OTP code into the browser or app and redirect it for their own use.

If you are using email codes, you are also pretty much a gone case, as it’s trivial to access them.

If you use FIDO tokens, there is a known weakness in the FIDO protocol: session hijacking. You may press the button or authenticate to the FIDO token, but a weakness in the browser or OS (or both) allows the session data to be swapped. FIDO does have what is known as TLS channel binding, but they have wiped their hands clean and said very specifically that channel binding over TLS can still be subverted (think of MiTM of the SSL/TLS channel via some MiTM hardware or certificate injection).

Similarly, smart card logins have their problems in that they have no secure display and entry, and the PC, which is the endpoint, is insecure and thus already a weak link.

What you need is an end-to-end secured channel (not just cryptographically but in secure execution) where the endpoint hardware device and the entire stack above and below it is secured.

Only by having two endpoints that are secure (both cryptographically and in execution environment) can a secure session be established. If either has a vulnerability, the entire secure environment will immediately leak and break down.

Think of it along the line of sending secure messages in a diplomatic/military sense.

If you want to encrypt a message and send it to your other team on the field, you use a specialized manpack set and put it into secure mode on both sides with proper keying. You wouldn’t be using the Signal app on smartphones to call your other team on the field, as your phone is not built for security, whereas lots of money and time have been put into the manpack set to secure it against tampering, including a clear switch for you to activate to destroy the keymat on the manpack set if you are about to get captured. Maybe @Clive Robinson can explain it more clearly, since it was part of his job in the past.

(required) February 6, 2018 1:36 AM

Physical OTP paper sheets, in order of use, stored in safes, burned per message, this thread is over.

:Drops enigma machine:

65535 February 6, 2018 2:00 AM

@ k15

These certificate-forging engines could fool the lock symbol when SSL stripping happens, and see the real conversation in both directions. That is the real threat.

The front facing log on page certificates of Wells and BofAmerica looked legit.

The Common Name or CN looked like it belonged to Wells => connect.secure.wellsfargo.com

This matched the Organization (O) => Wells Fargo and company

The Organizational Unit or OU is probably just the initials of the business unit of Wells that acquired the certificate => DCG-PSG

The serial number should be positive and match Symantec’s records of the purchase of the certificate by Wells, but how to tell is a different story.

The Issued by => Common Name or CN is Symantec, Class 3 “Secure Server CA” [So, what happened to the Class 2 certificate server? It is probably a policy server set to issue only certain types of certificates, or a revoking server, or both. The policy server can be offline… not much of a help]

The Period of Validity is important.

In Wells case :
Begins On Wednesday, October 12, 2016
Expires on Saturday, October 13, 2018

Hence it is still valid.

Next, go to Gibson Research Corporation page below

https://www.grc.com/fingerprints.htm

copy and paste the Wells Common Name or CN:

connect.secure.wellsfargo.com

Paste the CN into the blank part of Gibson’s red box, which has the HTTPS:// prefix before it. Give it a go with the above Wells Common Name and get the fingerprint calculated by Gibson:

F1:01:C2:D0:D1:A8:39:8C:23:0E:31:F7:76:DC:E3:C0:F0:C2:1F:18

Compare to

SHA1 Fingerprint F1:01:C2:D0XXXXXXXXXX

They appear to match. That is one of the main indicators of a real certificate from Wells.

Supposedly, if intercepted, the fingerprint should be drastically different [the fingerprint is a SHA1 hash of the certificate’s DER encoding].

So, in theory you can just look at the first 3 to 4 sets of hexadecimal digits of the fingerprint and quickly tell if the certificate has been altered… well, not if it was properly altered, as by a good certificate-forging engine and interception stack.
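For reference, the fingerprint the browser (and Gibson’s page) shows is just SHA-1 over the certificate’s raw DER bytes, rendered as colon-separated hex; a minimal sketch (the input here is a hypothetical stand-in, not a real certificate):

```python
import hashlib

def sha1_fingerprint(der_bytes: bytes) -> str:
    """Colon-separated SHA-1 fingerprint, as browsers display it."""
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Stand-in bytes; a real check would hash the DER certificate from the site.
fp = sha1_fingerprint(b"\x30\x82\x01\x0a" + b"\x00" * 16)
assert len(fp) == 59 and fp.count(":") == 19  # 40 hex chars + 19 colons
```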

Problem [Explained by Wikipedia]:

“Weaknesses
“A web browser will give no warning to the user if a web site suddenly presents a different certificate, even if that certificate has a lower number of key bits, even if it has a different provider, and even if the previous certificate had an expiry date far into the future.[citation needed] However a change from an EV certificate to a non-EV certificate will be apparent as the green bar will no longer be displayed. Where certificate providers are under the jurisdiction of governments, those governments may have the freedom to order the provider to generate any certificate, such as for the purposes of law enforcement. Subsidiary wholesale certificate providers also have the freedom to generate any certificate. All web browsers come with an extensive built-in list of trusted root certificates, many of which are controlled by organizations that may be unfamiliar to the user.[4] Each of these organizations is free to issue any certificate for any web site and have the guarantee that web browsers that include its root certificates will accept it as genuine. In this instance, end users must rely on the developer of the browser software to manage its built-in list of certificates and on the certificate providers to behave correctly and to inform the browser developer of problematic certificates. While uncommon, there have been incidents in which fraudulent certificates have been issued…”-Wikipedia

https://en.wikipedia.org/wiki/Public_key_certificate#Weaknesses

But notice Wikipedia does not go into properly forged certificates and how to identify them – Gibson Research does – but take that with a grain of salt.

That is as far as I can go. Other PKI experts can be of more help.

maqp February 6, 2018 3:26 AM

@Nick P, @Thoth, @Sancho_P, @Clive Robinson et. al.

I’m thinking TFC’s HW units and the SW running on them need better naming. TxM, RxM, and NH are not mature, and things didn’t get easier after I unified the Tx.py and Rx.py launchers into tfc.py. Do you have any ideas regarding this?

I’ve pondered Clive’s Castle and Prison for the TCB halves, “Protocol Converter” as an abbreviation for NH (alas, PC is not a good term), or Nick P’s suggestion of “networker”. “Blacker” would be a nice term for TxM as per NSA jargon; however, I’m not aware of an equivalent for RxM. I’ve considered easily understandable pairs like Mouth/Ear, encryptor/decryptor, sender/receiver. The challenge is to be able to refer easily to either the software or the hardware without confusing the listener about which is in question, so self-explanatory names are preferable.

12th Arrondissement l337 cr3w February 6, 2018 6:38 AM

Found this mildly interesting:

http://idlewords.com/talks/ancient_web.htm

Google had a conference last week called Google I/O, where they showed off some truly amazing technology. The theme of it was AI in everything, both directly in their products, and by improving the tools they make available for everyone else. The subtext was that nothing but good could come out of moving these techniques into the real world as fast as possible.

Left unsaid was the fact that to train the AI, and to fund the whole enterprise, you need a program of mass surveillance. It’s significant to me that none of the companies that make their money from such surveillance are comfortable talking about their business model.

But it’s that business model that’s enabling authoritarians.

If you can’t be arsed to read it, the major part of the article is him comparing the making of radio to that of the Net.

22519 February 6, 2018 8:04 AM

@Alan S

Isn’t it astonishing? When will politicians finally learn that they need to use encryption so that it actually works? So, Mr. Catalan Independence has NOBODY around him to advise and assist on information security?

By the way, can you imagine what the next US presidential election is going to be like from the viewpoint of collection? It’s going to be incredible.

End-to-end encryption is not very robust when the ends are compromised.

CallMeLateForSupper February 6, 2018 8:49 AM

Interesting report from The Citizen Lab on a 19-month-long phishing attack against the Tibetan community.

“The operation was simplistic and inexpensive, yet achieved some successes. We estimate the infrastructure used in the operation cost slightly over 1,000 USD to setup and required only basic system administration and web development skills to maintain.

“The operation illustrates that the continued low adoption rates for digital security features, such as two factor authentication, contribute to the low bar to entry for digital espionage through basic phishing.”

https://citizenlab.ca/2018/01/spying-on-a-budget-inside-a-phishing-operation-with-targets-in-the-tibetan-community/

The discussion of 2FA most interested me, because I am frustrated by a disconnect between the most common means of 2FA (SMS) and my chosen way of being (eschewing cellphone ownership). (Also, SMS is insecure.) There is also Yubi for 2FA, which some users have reported works great with Gmail, but Google’s writeup about Yubi on one of its Security pages leads me to believe that my use case (PC; Linux; Firefox) disqualifies me. The offending passage seems to inject Chrome browser as a prerequisite; I don’t want that s/w.

So my options for getting 2FA with Gmail seem to be:
1) pay $hundreds for a cellphone to use for email (and cough up my PHONE# to Google)
2) pay $40 to Yubi and install Google’s browser
Those are not viable options here, so just leave me in the “no adopt” column.
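For what it’s worth, the TOTP flavour of 2FA (the rotating six-digit codes of RFC 6238) needs no phone at all; the whole algorithm fits in a few lines of shell. A sketch, assuming openssl and xxd are installed (the hex key below is the RFC 6238 test secret, not a real credential):

```shell
# RFC 6238 TOTP: derive a 6-digit code from a shared secret (hex) and the clock
totp() {
  key_hex=$1
  step=$(( ${2:-$(date +%s)} / 30 ))      # 30-second time step
  msg=$(printf '%016x' "$step")           # 8-byte big-endian counter
  # HMAC-SHA1 of the counter under the key
  mac=$(printf '%s' "$msg" | xxd -r -p |
        openssl dgst -sha1 -mac HMAC -macopt "hexkey:$key_hex" |
        awk '{print $NF}')
  last=${mac#"${mac%?}"}                  # final hex nibble of the MAC
  off=$(( 0x$last * 2 ))                  # dynamic truncation offset
  frag=$(printf '%s' "$mac" | cut -c $((off + 1))-$((off + 8)))
  printf '%06d\n' $(( (0x$frag & 0x7fffffff) % 1000000 ))
}

# RFC 6238 test vector: ASCII secret "12345678901234567890" (hex below), t=59s
totp 3132333435363738393031323334353637383930 59   # prints 287082
```

Google’s enrollment secrets are base32 rather than hex, so in practice you would decode the base32 secret to hex first; the point is only that a plain Linux PC can stand in for the phone.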

I’d never heard of “OAuth”.
https://en.wikipedia.org/wiki/OAuth
“A day in which you learn nothing new is a wasted day.” 🙂

Clive Robinson February 6, 2018 9:36 AM

@ CallMeLate…,

“I’d never heard of ‘OAuth’.”

I’d advise you to “not learn it” it’s basically bad news…

Firstly it relies on a hierarchical system, much favoured by “Enterprises that Spy on you” which alone should send you running for the hills.

Secondly it’s repeatedly been found to be insecure in various ways.

Thirdly it’s not trivial to implement, which is a very bad sign as this will lead to insecure implementations. Further, the use of “code libraries” will make the insecurities not just available as attack vectors but very widely distributed.

Fourthly it relies for its security on the security of TLS, which as we know has been pretty poor in oh so many ways…

The thing is, secure authentication needs a secure side channel, not just to set up securely but to maintain securely over time. As we know from the many and varied Key Management (KeyMan) ideas that have been shot down over the years, such systems are hard if not impossible without an initial “personal” secure “Face to Face” (F2F) meeting.

Thus it’s not something that is going to work with “Modern Enterprise” solutions in a safe or secure way.

We know PubKey CA’s are not secure in any way you should trust, so they and TLS etc are a bust.

We know that CA’s who also supply “tokens” (RSA) do not do things securely, so they are a bust.

Bank tokens etc likewise have not proved to be secure.

National ID card systems, likewise have not proved to be secure.

And the list goes on, all having failed in some way…

Thus good old-fashioned Spy-to-Agent “field craft” methods appear to be the route to go, but 99.99…% of the population cannot do OpSec reliably, if at all…

Such is life…

CallMeLateForSupper February 6, 2018 11:45 AM

@Clive
“I’d advise you to “not learn [OAuth] …”

Too late. I read up on it (and gagged) yesterday. I mentioned it here just “to spread the joy ’round.”[1]

[1] Words of a “Covey” (OV-10) Forward Air Controller. Riveting story but very OT here.

Sancho_P February 6, 2018 6:10 PM

@maqp re new name for TxM, RxM

Um, your reasoning for the separation and the use of a dedicated HW data diode did not convince me (but I agree on the mandatory use of galvanic isolation to protect / connect sensitive devices). It may be a “chat” requirement, but it increases complexity and attack surface (key mat on both devices).
I can’t really help here, sorry.

65535 February 6, 2018 6:24 PM

@ maqp and Thoth

“I’m thinking TFC’s HW units and SW running on them need better naming.”-maqp

I agree. Simple names will help in wider adoption of your neat project.

1+ for Thoth’s naming scheme [It is simple and understandable].

TxM: Outbox Machine or TxM = Outbox Machine
RxM: Inbox Machine or RxM = Inbox Machine
NH : Internet Relay Machine or NH = Internet Relay Machine

That is fairly straight forward. I second the idea.

@ CallMeLateForSupper and Clive R.

“The operation was simplistic and inexpensive, yet achieved some successes. We estimate the infrastructure used in the operation cost slightly over 1,000 USD to setup and required only basic system administration and web development skills to maintain.”- Citizen Lab

Yes, those PRC Chinese are very thrifty with their spy work. I had a relative try to visit Tibet from Beijing and it took two years to get the correct travel documents. I will say there have been some ugly groups come from Tibet and do very nasty things. I really don’t think the PRC top party members will ever condone Muslims in their country – but who knows.

I notice Citizen Lab shows a certificate details box with the Certificate Hierarchy blank, which I thought was an uncommon occurrence or a scam.
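A blank hierarchy generally means the viewer could not build a chain from the leaf certificate up to a trusted root. The same check can be reproduced offline with openssl; a sketch, where the throwaway CA, subject names, and file paths are made up for the demo:

```shell
d=$(mktemp -d)

# Throwaway root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$d/ca.key" \
  -out "$d/ca.pem" -days 1 -subj "/CN=Demo Root CA" 2>/dev/null

# Leaf key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes -keyout "$d/leaf.key" \
  -out "$d/leaf.csr" -subj "/CN=leaf.test" 2>/dev/null
openssl x509 -req -in "$d/leaf.csr" -CA "$d/ca.pem" -CAkey "$d/ca.key" \
  -CAcreateserial -days 1 -out "$d/leaf.pem" 2>/dev/null

# With the root available the chain builds: prints "<path>: OK"
openssl verify -CAfile "$d/ca.pem" "$d/leaf.pem"

# Without it, verification fails, which is roughly what a blank
# hierarchy in a certificate viewer corresponds to
openssl verify "$d/leaf.pem" || echo "chain could not be built"
```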

“The change to Comodo certificates is correlated with the infrastructure moving from an Ubuntu server to a Windows 7 server (115.126.39[.]107) in February 2017. The operators may have moved from Let’s Encrypt to Comodo, because Certbot (the most common tool used to deploy Let’s Encrypt certificates) is not available on Windows systems… Exploiting a Certificate Registration Bug” –Citizen lab about a third down the page

https://citizenlab.ca/2018/01/spying-on-a-budget-inside-a-phishing-operation-with-targets-in-the-tibetan-community/

I think Clive R. is correct. Trusting Giggle or Alphabet is not wise.

“I’d advise you to “not learn it” it’s basically bad news [OAuth scheme]… Firstly it relies on a hierarchical system, much favoured by “Enterprises that Spy on you” which alone should send you running for the hills.”-Clive R.

That is my general reaction to OAuth scheme also – run away as fast as you can.

CallMeLateForSupper, I agree with your options:

“…options for getting 2FA with Gmail seem to be:
1) pay $hundreds for a cellphone to use for email (and cough up my PHONE# to Google)
2) pay $40 to Yubi and install Google’s browser
Those are not viable options here, so just leave me in the “no adopt” column.”- CallMeLateFor…

I find that SMS messaging blows your OpSec, as does any biometric identification such as fingerprints or facial recognition. Biometric identifiers can be spoofed or, worse, taken by force, as when a thief cuts off your fingertips to empty your bank account.

CallMeLateForSupper, the link to the description of OAuth in Wikipedia really is a non-description. Or a good example of legal con-games.

‘…OAuth provides to clients a “secure delegated access” to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.[3]…’ –Wikipedia

https://en.wikipedia.org/wiki/OAuth

Notice the tricky construction of the sentence “…OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.” –Wikipedia referenced above

The sentence cleverly slides over exactly who the third-party authorization server is owned by and/or connected with. That could mean just about anything or anybody, including Giggle, the NSA/CIA/FBI and so on.

This of course occurs with “your consent” buried in the tiny print of the Terms of Service contract and a healthy dose of confusion.
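As an aside on what those access tokens often look like in practice: many OAuth deployments issue JWTs, whose header and payload are just base64url-encoded JSON that anyone holding the token can read, which is one reason bearer tokens deserve password-level protection. A sketch, where the token below is fabricated for the demo and is not a real credential:

```shell
# A made-up JWT: header.payload.signature, each part base64url-encoded.
# These particular samples contain no '-' or '_' and need no padding,
# so plain base64 -d decodes them directly.
jwt='eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhbGljZSJ9.fake-signature'

printf '%s\n' "$jwt" | cut -d. -f1 | base64 -d; echo   # {"alg":"HS256"}
printf '%s\n' "$jwt" | cut -d. -f2 | base64 -d; echo   # {"sub":"alice"}
```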

“A day in which you learn nothing new is a wasted day.” 🙂 –callmelatefor…

I agree.

JG4 February 7, 2018 7:15 AM

As always, appreciate the good discussion. My cognitive abilities are coming back.

https://www.nakedcapitalism.com/2018/02/links-2718.html

AI May Have Just Decoded a Mystical 600-Year-Old Manuscript That Baffled Humans for Decades Art Net (Chuck L)

…[the transition of economic leadership may be very soon]

First step towards flying cars: Incredible footage shows driverless drone flying people around China Thai Tech (furzy)

…[I’m interested in business models wrapped around open innovation]

Meet India’s women Open Source warriors FactorDaily (Chuck L)

…[magnesium and potassium, plus other good things]

Eating Leafy Greens Each Day Tied to Sharper Memory, Slower Decline NPR (David L)

…[it doesn’t mention that the FBI were in on the scam]

In Baltimore, Brazen Officers Took Every Chance to Rob and Cheat New York Times (resilc)

…[Pentagon Wars]

Imperial Collapse Watch

Streamlined MV-22 Maintenance: From 70 Osprey Types Down to 5 Breaking Defence. Kevin W: “You read that right. There are 129 of these planes which come in 70 variations!”

…[just another day on the blue marble of unintended consequences]

Why Ethical Robots Might Not Be Such a Good Idea After All IEEE (David L)

CallMeLateForSupper February 7, 2018 7:43 AM

Riana Pfefferkorn tweeta, “I’ve just published a whitepaper called “The Risks of ‘Responsible Encryption,'” critiquing recent key-escrow proposals by DAG Rosenstein and FBI Director Wray.”

PDF is here:
https://cyberlaw.stanford.edu/blog/2018/02/new-paper-risks-responsible-encryption

I don’t tire of reading rebuttals of “responsible encryption” (though I abhor that term). They are necessary. I think the way in which proponents lobby for it – dictating a vision in great detail while giving wide berth to technical discussions – makes them look silly. They might as well lobby for technical solutions for building chimneys from the top down.

Clive Robinson February 7, 2018 12:43 PM

@ Wesley Parish,

British suffragettes: early UK ‘terrorists’?

Depends on your definition at the time.

When you consider the “Cat and Mouse” game the British Politicians and Legislature forced onto the women, and the prevailing definition of terrorism back then, then it was the British Government that were actually the terrorists not the women who were protesting.

Yes, some of the women committed illegal acts, but they were not the ones using the brute force of guard labour with cudgels and horses to maim and kill their opponents. Nor were the women using what even then would be regarded as torture techniques on their opponents.

As has often been observed in history, power tends not to cede power gracefully or quietly, and frequently employs violence to cling to power…

la reina de la colmena February 7, 2018 1:51 PM

https://www.reuters.com/article/us-usa-cybercrime/u-s-shuts-down-cyber-crime-ring-launched-by-ukrainian-idUSKBN1FR2M7

The cyber crime network, operating as an online discussion forum known as “Infraud,” ran a sophisticated scheme that facilitated the purchase and sale of Social Security numbers, birthdays and passwords that had been stolen from around the world, the department said.

Birthdays are worth money nowadays?

Years ago people had birthday parties, and everyone knew how old they were on what day, and they had cake and presents and all that good stuff.

You had to have a signature, like real ink on an original real piece of paper, that at least resembled the one on the signature card on file at the bank, and if a man had a date of birth on his I.D., it was considered more than redundant to suffix Jr., Sr., II, III, or IV or whatever to his name.

Furthermore, a woman is completely out of luck in regards to I.D. theft because she gets to take her husband’s name, (or the name of some guy who claims he’s her husband,) and it’s rude to ask a lady her age, and ladies are allowed to write checks oftentimes when men are not because it isn’t considered safe for them to carry cash, and blah blah blah.

And now it’s worth money to know someone’s birthday and you can forget about that cake, because apparently a birthday is enough to open a bank account or take out a loan in someone’s name, because everything’s all electronic nowadays.

None of this quite makes sense to me. Whose fault is this?

VinnyG February 7, 2018 3:16 PM

@ colmena – in the days when most financial transactions were conducted face-to-face, knowing someone else’s birthday would probably have been of little value in perpetrating fraud on that person. Today, however, knowing someone’s birthday might nicely narrow the space occupied by the additional security factor required to get on-line access to an account.
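To put rough numbers on that narrowing: a full date of birth drawn from, say, a 100-year span is one of about 36,525 values, i.e. roughly 15 bits the attacker no longer has to guess. A quick back-of-envelope check (the 100-year span is an assumption for illustration):

```shell
# Approximate entropy of a full date of birth over a ~100-year span
days=36525                                   # 365.25 * 100
bits=$(awk -v n="$days" 'BEGIN { printf "%.1f", log(n)/log(2) }')
echo "a known DOB hands the attacker about $bits bits"   # ~15.2 bits
```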

tyr February 8, 2018 12:34 AM

@Clive

Barlow is dead. He was the instigator of
independent cyberspace and the EFF.

When the Suffragets were told that what
they wanted would end civilization you
knew the opposition was completely full
of it.

We hear the same thing about transparency
in government affairs with lots of scare
talk about things going ‘dark’. Mostly
from folks who are playing CYA games with
the legal system while yapping about rule
of law.

Most of what is classified could be easily
de-classified without any effects at all.
By reducing the mass of crap it would become
a lot easier to keep important material in
house. Once you allow a junior grade nitwit
access to a rubber stamp you get exactly
what you described: highly classified
toilet paper.

On the popcorn circuit apparently the FBI
is unaware of their own track record as a
less than law abiding entity. The history
is freely available to anyone interested.

Comey probably is delusional enough to
feel he is a ‘good’ guy, but he seems to
be unaware that his delusions cost him his
last job for playing dirty politics. They are
all running around doing PR instead of the
much needed housecleanings that would get
them some trust from the citizens.

Wesley Parish February 8, 2018 2:43 AM

@Clive

Quite. One of Conan Doyle’s Sherlock Holmes novels (The Valley of Fear) was a piece of propaganda portraying mining labour activists as terrorists; the seventies film The Molly Maguires gives much the same interpretation, iirc.

Power arrogates to itself the “right to defame, slander and libel” all and any opponents.

@usual suspects:

Just when you thought it was safe to dive into the shark and piranha pool again –

Leaked NSA hacking tools can target all Windows versions from the past two decades
ht tps://www.theinquirer.net/inquirer/news/3026129/leaked-nsa-hacking-tools-can-target-all-windows-versions-from-the-past-two-decades

Interesting reading.

Cassandra February 8, 2018 4:54 AM

Re: (Browser) certificates.

Many, if not most, people who use standard web-browsers are not aware of who they are trusting when they use the browser. A fraction of the people who use standard browsers take the care to check that a locked padlock symbol shows when accessing a ‘secure’ website. Still fewer review the certificate. Still fewer undertake the long and not-very-well-documented process that would be needed to remove all pre-trusted root authorities from the browser* and trust only a select few certificate authorities.

I had a bank that, at one point, issued individual self-signed certificates for performing Internet banking. They no longer do. I suspect their customers found the process of using them too difficult.

The current PKI system generally used is not ideal, however, no-one that I know of** has come up with a practical better approach, and there may well be interested parties with both strong incentives and the power to ensure that the current system is not improved.

Cassandra

*Depending on the system, there could well be a system-wide list of trusted Certificate Authorities, used by the browser(s) and other applications across all users, and a separate application-local and/or user-local list of trusted Certificate Authorities. Here is the ‘simple’*** inelegant hack one-liner for providing me with the human-readable X.509 certificates used by the browser on one of my systems:

for file in /usr/share/ca-certificates/mozilla/*; do openssl x509 -in "$file" -noout -text; done | less

**I am always willing to admit ignorance, and certainly will do if I realise the alternative is to admit stupidity.

***Simplicity can be in the eye of the beholder. There is much to recommend the UNIX philosophy that includes the idea that configuration files should be human readable text-files. People can trust their own Mark I eyeball and innate text-processing abilities more than say, the openSSL code-base. It is surprisingly difficult to trust code that manipulates binary data files to be doing what you expect it to do, and not something different, either by mistake or by design. This difficulty is amply described in Ken Thompson, “Reflections on Trusting Trust“, Communications of the ACM, Vol. 27, No. 8, August 1984.

Ratio February 8, 2018 8:15 AM

Gang storms hospital in Spain to release arrested drug suspect:

About 20 people stormed a hospital in southern Spain and freed a suspected drug trafficker who had been injured in a motorcycle crash as he tried to escape arrest, officials said on Wednesday.

The two policemen who were guarding the suspect at the hospital in La L[í]nea de la Concepción did not use their guns “to prevent the situation from turning into a drama”, the town hall of La L[í]n[e]a said in a statement.

[…]

During an interview last year [La Línea mayor Juan Franco] said there was a feeling of “total impunity” in the city as drug traffickers operate almost openly and do not hesitate to challenge police, who say they are under-equipped.

Nick P February 8, 2018 11:59 AM

@ Clive Robinson

Damn, Clive, you beat me to it by minutes! Haha. I also have a Reddit thread and Hacker News thread for anyone who discusses those places. So, here’s the link plus contextual comment:

Security of Software, Distribution Models: It’s Not Just Open vs Closed!

“A prior work on this was Spafford’s “Proprietary vs Open Source” in 2006. The SCOMP and GEMSOS systems he references are described here. SCOMP was first system certified under TCSEC A1 class in 1985 after five years of analysis and pentesting by NSA evaluators. I think they spent two years on GEMSOS at cost of $50 million they said. Obviously, most security-focused FOSS has had nowhere near that amount of review. There’s also never been a high-assurance, secure system done under FOSS development model: FOSS examples were cathedral-style developments by experts FOSS’d either as they went or after the fact.

This difference between the high potential of FOSS for security vs fact that all strongest systems came from private sector led me to investigate whether the models could be combined. Also, what impact if any did sharing source with everyone have on security? Almost none I found given a strong development and review process will leave almost no defects in system to begin with. The people building or reviewing, esp their skill or time allotted, were the crucial aspect in determining system security. This is also why some of us almost reflexively trust security of code produced by certain people or teams: their mindset, skill, and efforts regularly result in systems or code that does what they claim. Next one probably will, too. Probably. ;)”

Nick P February 8, 2018 12:29 PM

Quick Advice to Embedded, C Developer Selling Bosses on Formal Methods (draft) (2018)

The best methods in terms of cost-effectiveness were Cleanroom, Design-by-Contract, Ada with code reviews, and (for increased assurance) SPARK Ada. I’ve got some links on selling a project manager on those minus Cleanroom. Despite Stavely’s great writeup, I haven’t read his book or seen results from third parties I’d trust to vet the effectiveness of his version of Cleanroom. I’ll go through others one by one.

The first link is written from or near a manager’s perspective by a Lockheed-Martin employee. It has the kinds of things project managers might like to see to justify using formal methods. There are a few ways to do it in both C and OOP languages. Interface checks catch most errors, though. I’ll throw in an example in game development since there’s some overlap between it and real-time applications. Note that property-based testing can generate tests from specs to save time on tedium (i.e. boost productivity). EiffelStudio already does that since Bertrand Meyer was a brilliant, practical researcher. 😉 Example tools for Python and C. I also thought about verifying whatever turns specs into tests at some point since they’re expressed in simple logic and language. Note that Hillel Wayne did a nice write-up comparing different languages’ ways of doing contracts, with Eiffel still the winner IIRC. He also investigated my idea of mixing contracts, PBT, and fuzzing. Lionel Matias similarly combined Ada with AFL.

Ada was systematically designed to find errors as Barnes’ book shows. This study confirms it’s good at it with best apples-to-apples, empirical comparison I’ve ever seen between languages. SPARK has had around two decades of industrial experience trying to bring proof to affordable, usable levels for specific kinds of errors. I’ve included two projects that were recent showing it applied in safety-critical fields: a spacecraft and a glider. In both cases, it did really well with manageable caveats. Do remember that one can always be selective about where they use SPARK if it performs poorly on something where developers fall back to runtime checks or older language (eg C) with its assurance-increasing technologies like static analyzers. Although a project recently added pointers, the champion of doing safe code with dynamic allocation and concurrency is currently Rust since its mechanism is ready and widely used.

Rust is so new, though, that it’s hard to say if it can be applied to safety-critical work at the moment. Like Ada and SPARK in their early days, I think translating it to equivalent C might net some of its benefits. Anyone who knows C, Rust, and assembly should be able to confirm in various scenarios that the borrow-checked Rust is equal to the C version (or not). One could probably derive templates or macros to fix discrepancies that they check by eye. Although traditionally escalating to Ada, SPARK developers might escalate to Rust instead for code SPARK can’t handle (e.g. dynamic or expressive concurrency). Likewise, instead of relying on C, the SPARK user might rely on Rust they hand-convert to C after it borrow-checks, to maybe knock out dynamic errors. Due to its multi-paradigm style, Rust might also make bolting on contracts easier, so that analysis can be done in Rust versus whatever boilerplate C might require. I mean, Ada supports it naturally, but I’m talking about where Rust is the fall-back.

I’m still shocked I’m the only one talking about doing this I’ve been able to find easily in Google if others exist at all. The linking analysis desperately needs expert attention since so much low-hanging fruit can be handled mixing these languages with their toolchains. There are people working on the formal verification side of that. I’m sure the build-and-exhaustively-test side could get plenty done without that rare expertise, though.

Long story short, I’m including the TockOS papers since their work is the cutting edge of what can be done in embedded Rust. RedoxOS might have some tricks in it but TockOS people keep describing theirs in detailed papers. The Cell/TakeCell concept was good example. If they handled LinkedLists, how complicated could the rest be? 😉

On the C side, the people who did the formal semantics for C in a GCC-like compiler have started a company selling a static analysis tool that claims to find tons of errors in C programs with no false positives. Well, you know how marketing claims go but I’m hoping they’re being honest since it would be great if true. They’ve had quite a few blog posts on stuff from the Toyota benchmark to a VM for smart contracts. This would be interesting to further evaluate for effectiveness on known-buggy code from past projects to see what it would or wouldn’t have caught. One possibility would be moyix using his defect-seeding software (lava?) to thoroughly test their tool among others like Astree and Saturn. Our confidence in the software goes up as we vet it with tools that themselves were vetted by thousands of bugs inserted and caught. A constrained, development style like what’s common in embedded would probably boost their effectiveness on average as well.

Ratio February 8, 2018 1:13 PM

How a Tiny Startup Became the Most Important Hacking Shop You’ve Never Heard Of:

Besides its researchers’ talent, which multiple sources said is top-quality, what separates Azimuth from other players in the exploit industry is its client rolodex. Three sources familiar with the company said Azimuth—through its partner firm—provides exploits to members of the so-called Five Eyes, a global intelligence sharing group made up of the United States, United Kingdom, Canada, Australia, and New Zealand. The partner firm is Linchpin Labs, a software company founded by former Five Eyes intelligence officials.

“Azimuth provides Australia essentially all their offensive cyber capability,” a fourth source familiar with the company told Motherboard, referring specifically to the Australian Signals Directorate (ASD), the country’s version of the NSA. One of the sources, as well as confirming the ASD as a client, said the UK and Canada are Azimuth customers.

[…]

At this high tier of exploit development, there is something of a circular door between Azimuth, intelligence agencies, and Silicon Valley, which are all looking to attract hires from the limited pool of people who can hack up-to-date devices.

[…]

Prices for zero days have risen with the increasing difficulty of breaking into sophisticated software and hardware. According to one source, a full, remote exploit chain for iOS 11 devices (currently the latest operating system for iPhones) and which requires no interaction from the target goes for well over $2 million today. And those prices have risen every year, the source added.

For comparison, a remote exploit for Firefox can go for $200,000, one for the Tor Browser can be worth $150,000 or $250,000, and one for Chrome that allows an attacker to escape the program’s sandbox can go for between $500,000 and $1 million, according to people familiar with the market.

(Directions from down under.)

Clive Robinson February 8, 2018 5:38 PM

@ Nick P,

Also, what impact if any did sharing source with everyone have on security? Almost none I found given a strong development and review process will leave almost no defects in system to begin with.

Which in essence is the main problem… Just about every closed source shop where I’ve had an opportunity to “see behind the veil” the big difference as far as security goes is their “Quality Processes”. Part of which is a development process not just with strong review but strong remediation. Also the documentation process, in particular the carrying forward of “learnt measures” via history files and the like.

It’s why in the past I’ve indicated that in development the security and quality processes are to most extents the same.

All that “Closed Source” gives over “Open Source” in the design and development process is the ability to hide junk code from most enquiring eyes (Obviously not to those who know how to reverse engineer the code backwards).

That’s not to say that Open Source is going to be of any higher quality[1]. The source just makes the code available to a larger group of observers, who might or might not look at the code.

The big example of how badly that could go wrong was the OpenSSL code that was “used by all but reviewed by none”. An earlier example might be the libraries of AES code that all used the “fast” version of the competition code, which was, to be polite, not good news when time-based side channels were finally considered (a real win for the NSA).

Which brings us onto the question of “Formal Methods”. As I’ve pointed out in the past, in the main they are too far up the computing stack, on the wrong side of the High Level Language / ISA “Great Abstraction Gap”. Thus they are prone to quite predictable “bubbling up” attacks, or any other attack that can reach down through the Virtual Memory (Rowhammer) or around it (Spectre, Meltdown, etc.) to get direct core memory access/control. Such attacks were quite predictable, and had been for some years, which is why just about all higher-end CPUs released in the past decade or so have these low-level vulnerabilities.

As I said, they were predictable, which is why it’s something I’ve been banging on about for a number of years prior to Rowhammer, Spectre or Meltdown, and why I have discussed alternative architectures to mitigate such hardware issues.

As our host @Bruce has recently indicated it is quite unlikely we have heard the last about hardware susceptibility attack vectors. In part because those that have been seen recently are in effect “class attacks” that can easily be realised other ways when the trick is known. Now that it is known you can expect to see many “me too” type attacks appearing from academia in what is to them a new field to play in (but some have foreseen for years). So a number of early wins are to be expected, one of which is an attack against Intel’s SGX “secure enclave” technology, which is likely to be true of other CPU design “secure enclaves” as well…

I’m not saying that there is anything wrong with the use of formal methods, or that people should not use them. On the contrary, I would encourage their use, but by those who are aware of their limitations, so that they can put other measures in place as and when they become available/viable. One such measure is encrypted core memory on a per process basis (think individual file encryption -v- full disk encryption to see why it has to be on a per process basis).

[1] Actually a large number of Open Source projects are not of higher quality than Closed Source, for a most important reason: they never really make it to “release quality”. The reason for this is that many Open Source projects are “homer projects”, often started by those who have time on their hands for one reason or another, or who are getting to grips with a new programming language or tool set/chain. Thus there is no “Commercial Imperative” to “go to market”.

Nick P February 8, 2018 9:48 PM

@ Clive Robinson

Far as gap between verification and hardware reality, I recently found this gem by the lady who busted out the Viper team and overzealous folks in formal methods way before that big-time paper by Guttman on it. That’s in Section 6.

In CompSci, there’s a lot of people working on formal verification of hardware for things like information flow. It’s been done down to the gates. There’s increasing work done on analog and RF modeling of attacks at least. I figure that stuff will stay informal for now. There’s at least people doing it.

65535 February 9, 2018 3:32 AM

@ Cassandra

+1 That was two very helpful posts.

Good links.

@ tyr

John Perry Barlow was a writer for the Grateful Dead band.

“He was also a former lyricist for the Grateful Dead and a founding member of the Electronic Frontier Foundation and Freedom of the Press Foundation.”- Wikipedia

https://en.wikipedia.org/wiki/John_Perry_Barlow

He had a long and interesting career.

Anders February 9, 2018 6:39 AM

@Cassandra

“I had a bank that, at one point, issued individual self-signed certificates for performing Internet banking. They no longer do. I suspect their customers found the process of using them too difficult.”

The problem here is with browsers. Once upon a time browsers were just happy with self-signed certificates. For people who knew what they were doing this was easy. Fast forward – everything changed – browsers started to give warnings about self-signed certs, users had to go through multiple confirmations that indeed they know all the risks and yes, they DO want to connect to that site and yes, they DO want to create the exception etc. This effectively killed the use of self-signed certs.

Clive Robinson February 9, 2018 7:17 AM

@ Nick P,

Clive’s energy-gapping theory gets corroborated even more. As if it needed more corroboration after all the leaks published so far over so many mediums.

Did you notice they missed a couple in their list of known methods?

1, Mechanical.
2, Gravitational.

So there is scope for at least another four papers there 😉

I wonder how long it will take them to think further on it now I’ve mentioned the channels again, then devise an experiment to demonstrate it. After all, generating pure gravitational waves is known to be a hard problem (you need a quadrupole radiator), but there are short cuts: changing gravitational vectors has a measurable, though small, effect.

With regards to physics, as any 12 year old should know there are three basic types of energy transmission,

1, Radiation
2, Conduction
3, Convection

What is not immediately obvious is that they all require an “efficient medium” to transmit the energy. Which allows for a fourth transmission effect, which is “kinetic”. It would after all be hard to argue that somebody was not trying to send you a message when they point a machine gun in your direction and expend a few rounds.

The point is that not all of these channels are equal. Radio waves and light are assumed to originate omnidirectionally from a “point radiator”, and thus the energy to communicate a signal drops in proportion to the increasing surface of a sphere of expanding radius, ie as 1/(r^2). Near-field magnetic coupling is worse, in that it is in effect volumetric, thus 1/(r^3).

Thus many people fall into the trap of assuming all “side channels” fall off at a rate of at worst 1/(r^2), which also gets taught as such in EMC and TEMPEST/EmSec courses without much talk about the exceptions… Which is unfortunate, because it’s those exceptions that show you cannot generalize in that way.

For instance, think about how energy moves by conduction in certain types of channel. The simplest way to think on this is with a simple electrical signal in a power supply line in a national grid. The power engineers will tell you that they have two main losses: I^2R “heating” and leakage currents through insulation. Thus the energy transmission limits are based on resistance more than distance; as a consequence the loss per unit length is the same at any distance, which gives an exponential decrease with distance rather than a geometric one. So the lower the resistance of the conductor and the higher the resistance of the insulation, the less the loss and the further the energy will travel. The same is true for any transmission line, which includes mechanical movement and vibration in mechanical structures.
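
To put rough numbers on the difference, here is a small sketch (the 0.01 dB/m line loss is an illustrative figure, not a measured one):

```python
def radiated(r):
    """Point-radiator power density: falls off geometrically as 1/r^2."""
    return 1.0 / r ** 2

def near_field_magnetic(r):
    """Near-field magnetic coupling: volumetric, falls off roughly as 1/r^3."""
    return 1.0 / r ** 3

def conducted(r, loss_db_per_m=0.01):
    """Low-loss line: a fixed loss per metre, so the surviving fraction is
    exponential in distance -- set by resistance, not by geometry."""
    return 10.0 ** (-loss_db_per_m * r / 10.0)

# At 100 m the radiated signal is down by a factor of 10^4,
# while the conducted one has barely attenuated at all.
for r in (1, 10, 100):
    print(f"r={r:>3} m  radiated={radiated(r):.1e}  "
          f"magnetic={near_field_magnetic(r):.1e}  conducted={conducted(r):.3f}")
```

Which is exactly why the 1/(r^2) rule of thumb fails for conducted channels: distance alone buys you very little protection on a good transmission line.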

So if you bolt your computer to your Faraday cage and bolt that to the floor or a support member, then the vibration from the fans will “conduct out” to any number of places. And you cannot make the assumption that other vibrations will swamp the signal…

Finally, Sir Isaac Newton had a few comments about “matter and motion”, and Elon Musk’s “cherry red” electric sports car has been in the news as SpaceX’s demonstration of these and Kepler’s laws of orbits.

Newton pointed out in the Principia that there were certain fundamental principles[1]. Of which the first two are normally stated as,

    Every object persists in its state of rest or uniform motion in a straight line, unless it is compelled to change that state by forces impressed upon it.

And,

    Force is equal to the change in momentum per change in time. For a constant mass, force equals mass multiplied by acceleration (F=ma).

Which describes the “ballistic” behaviour of bullets and cherry red sports cars. Hence the original pre-launch press comments about the car going for a billion years or more. The kinetic energy they have will stay with them till they meet a force of some kind[2], which in the case of the car is likely to be a long time and an immense distance covered[3]. Thus any information encoded on them will likewise remain until that time, at whatever the distance is.

But energy behaves in some seemingly odd ways. As the paper notes, a Faraday shield will not stop a low frequency magnetic field, which is to be expected. That is, a Faraday shield is an electrically conductive medium, not one with any significant magnetic permeability. The reason it can keep out higher frequency magnetic fields is due to eddy currents, the skin effect[4] and back EMF, which via the law of induction generates a magnetic field in opposition to the original magnetic field.

Thus, as I’ve pointed out before, shields should be made of materials that not only have good conductance but also high magnetic permeability. Aluminium foil has good conductance but low permeability. Iron wire, such as chicken wire, has fairly bad conductance above low frequencies but has over ten thousand times the permeability of aluminium. Which helps cut down those magnetic fields quite a bit, likewise their frequency and thus their information-carrying bandwidth. As early radio engineers knew, the likes of mu-metal plated in copper or silver had very good shielding properties right down into the low audio frequencies, which is why many EMC mains filters are housed in plated iron-based metals (a point I’ve made before when talking about designing shielding for TRNGs). Modern slab and ring ferrite materials have even higher permeability; however they are often heavy and brittle. But you can now get plasticised materials, with micro-sized particles of ferrite uniformly mixed in, that can be used for TEMPEST-style shielding in SCIFs in layers with high-conductance films[5].

Oh, which brings us around to “gravity”, or its dynamic side effect of “acceleration”; as a side channel it is actually quite fascinating. We’ve already seen “key press” information being leaked this way. The usual way to measure it is via a force-balance system, often in the form of a bridge or pendulum built on or out of a compressible medium. Which is then read out indirectly via strain gauges, variable capacitance, or speaker-coil type magnetic-displacement systems. Whilst gravitational waves are difficult to generate, changing gravitational vectors is not that hard: you just need to move another mass into the physical arrangement. In essence that is what mountains do as you climb down them: they rotate your effective downwards point of reference fractionally from the vertical towards the centre of mass of the mountain. Whilst not entirely trivial to do, people have made pendulum systems capable of measuring the effect of tides far inland by the “tilt”. However they also have issues with the likes of security guards walking around and tilting seven-foot-thick reinforced concrete floors…

But there is one channel I’ve not talked about, which is “convection”. Whilst it generally has a very low bandwidth, it has been used in the past to leak security information. It caused heat from the CPU to change the temperature of the system clock crystal (Xtal) by fractional but measurable amounts. I first mentioned this issue of “load” and delta-F on timing, which is common to all processes on the same system, on this blog some time before it became the focus of a researcher at the UK’s Cambridge Computer Labs… Whilst the bandwidth from a convection channel is low, it has certain advantages. It is visible from considerable distance and can go around corners and through the mesh of EmSec screens with little difficulty. It is also a real problem because it is due to “inefficiency of work” and is thus the ultimate form of pollution: thermal energy. You cannot “screen” it, nor can you cancel it without doing greater work… In practice all you can do is use large “thermal masses” to act as storage devices, thus integrating the signal and effectively lowering the signal bandwidth. However the likes of personal/portable equipment and “large masses” tend not to go together well 😉
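
The thermal-mass trick can be sketched with a first-order lag model (all constants here are illustrative; this is a toy model, not a calibrated thermal simulation):

```python
def thermal_response(heat_input, tau, dt=1.0):
    """First-order lag: a thermal mass integrates its heat input, strongly
    attenuating any modulation faster than roughly 1/tau (a toy model)."""
    temp, out = 0.0, []
    a = dt / (tau + dt)
    for p in heat_input:
        temp += a * (p - temp)
        out.append(temp)
    return out

n = 1000
fast = [float((i // 2) % 2 == 0) for i in range(n)]    # square wave, period 4
slow = [float((i // 200) % 2 == 0) for i in range(n)]  # square wave, period 400

def swing(x):
    """Peak-to-peak amplitude over the second half, after settling."""
    tail = x[len(x) // 2:]
    return max(tail) - min(tail)

print(swing(thermal_response(fast, tau=50)))  # small: fast signal smoothed away
print(swing(thermal_response(slow, tau=50)))  # large: slow signal gets through
```

Note the mass does not remove the channel, it only low-passes it: a patient attacker simply signals more slowly.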

Which means even though you might hide in your cubicle, that hot air will rise like a smoke signal, causing at a minimum turbulence that can be seen with appropriate equipment. More importantly, so can the heat rising out of a computer in a Faraday screen that out of necessity has to be ventilated, and as such may easily end up outside of the “secure area” (I’ve indirectly mentioned these issues when talking about putting computers in safes on this blog some years ago). That said, the convection problem moves backwards up the power supply chain and also will end up outside of the “secure area”… It’s one of the reasons that designing effective SCIF tents / rooms / buildings can be not just difficult but regarded as “secret”[5] in some places.

I can fairly confidently predict we will hear more on why “air gapping” is no longer effective, which is why some time ago now I coined the term “energy gapping”. For many people it will require a shift in their “paradigm” thinking, but it actually is not that difficult. The essentials to gather are,

1, Type of channel.
2, The channel limitations.
3, Energy type and levels.
4, Information coding/modulation.
5, Channel bandwidth.
6, Channel losses.

But also the difference between passive and active EmSec attacks. Which means you really have to understand “transducers” and how they convert energy from one form to another across multiple steps, and the fact that most are actually bi-directional. Which in turn brings up the thorny issues of “transparency” and “error detection and correction”.
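
The energy levels, channel bandwidth and losses in that list come together in the Shannon-Hartley formula, which bounds what any single channel can leak; a quick sketch:

```python
import math

def channel_capacity(bandwidth_hz, snr):
    """Shannon-Hartley limit, C = B * log2(1 + S/N), in bits per second:
    an upper bound on what any single channel can leak."""
    return bandwidth_hz * math.log2(1 + snr)

# Even a 0.1 Hz convection channel at a signal-to-noise ratio of 10
# leaks about 0.35 bits/s -- a 256-bit key in under 15 minutes.
print(channel_capacity(0.1, 10))
```

So mitigation is about driving either the bandwidth or the signal-to-noise ratio towards zero; neither ever quite gets there, which is why "gapping" has to consider every channel.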

Oh that Cambridge Computer Labs paper won an IEEE award, funnily the same happened with another paper from there on “active EM EmSec” techniques on a TRNG… As I’ve gently half joked before you always get to read it first on this blog ;-).

[1] The following year Newton proved –via what we now call calculus– that his laws give rise to Kepler’s laws of planetary motion. That is, the elliptical orbits that fairly accurately describe the motions of the planets. Which makes life a bit awkward for modern astronomers, as Einstein’s laws are not as amenable.

[2] A point to note is that we have reason to believe that the universe is not continuous but discrete. Thus there are “smallest” indivisible objects of not just matter but energy as well. Thus Newton’s laws apply to them in a similar way, with a few extra tweaks from Einstein’s thinking.

[3] It appears the car got a little bit more of a shove than intended. Thus it has overshot the intended Mars orbit. But not by as much as originally thought: it won’t be heading, as first thought, for the asteroid belt, but into a more interesting orbit which will bring it within a few million miles of both Earth and Mars for some time to come. It has in effect taken a –lack of– scenic route 😉

https://www.theverge.com/2018/2/6/16983744/spacex-tesla-falcon-heavy-roadster-orbit-asteroid-belt-elon-musk-mars

[4] The skin effect is not an easy thing to initially get your head around, which is why I’m not even going to attempt to describe it. For those that want to know more,

https://en.m.wikipedia.org/wiki/Skin_effect

Importantly it contains a lot of important information that can be easily understood without having to sweat the mathematics, so it is well worth the read.

[5] In the past the US classified such information at “codeword” level secrecy, hence “TEMPEST”. However this met “head on” with Electromagnetic Compatibility (EMC), which in the 1980s Europe did not regard as being secret (despite the UK representatives’ attempts). Which means that I’m entirely uncertain as to just how much “US Citizens” are currently allowed to know, if anything, about the subject of shielding, TEMPEST, EmSec or SCIF design in general or specific. However, as I independently discovered some of the more interesting “active” EmSec attacks back in the 1980s, I look at it this way: “Stuff’m and their needy wants”…

Ratio February 9, 2018 8:47 AM

@Nick P,

Clive’s energy-gapping theory gets corroborated even more.

The theory that says that, unless no information can be transmitted, information can be transmitted?

Always nice to see tautologies get corroborated.

Clive Robinson February 9, 2018 8:54 AM

@ Nick P,

Do you actually know the date that draft report on Viper issues was written? I looked at Viper for various reasons in the late 80’s as I had a tie in with RSRE and it was getting pushed. I was not happy with Viper for various –cost/power– reasons.

Judging by the reference dates it was late 80s / early 90s; thus it has in effect lain unknown for over a quarter of a century, which is a shame because so far it looks very interesting.

Moving on to,

In CompSci, there’s a lot of people working on formal verification of hardware for things like information flow. It’s been done down to the gates.

Whilst it’s a good idea, it is, well, at best late to the party… Plus it’s only likely to solve what is now in effect a diminishing series of problems. Like it or not the world of computers is moving to parallel processing and has been for some time. In effect going multi-core was the only way that Intel could hang in with Gordon Moore’s observation of market needs (it ain’t no law[1] 😉)

But that aside, parallel is the way we are moving, not just on chip but motherboard as well. With an interesting array of “RAID of RAID” systems using high speed solid state drives and high speed networks as well, “core memory” is becoming just another layer of “cache memory”, using VM techniques to pull tiny fractional parts of very large data sets in for processing.

Thus the problem is that “core memory” has become all things to all processes, and thus from a security aspect the biggest single point of failure, as more and more “Efficiency-v-Security” hardware bugs come to light over the next few months to years.

The only way we can keep up with the speed needs is by being efficient at the expense of security in certain areas…

If you accept that as a “given” then you have to look at alternative security mechanisms. Luckily we have sort of been here before with mechanical hard drives. Thus one method we know is to use “encryption” to protect the data. Not the Full Disk Encryption type protection that only gives security for “Data at rest” but the equivalent of “File level Encryption” with different keys for each.

That is, core memory needs to be encrypted not just under one key to keep external memory –DMA / IO style– attacks out, but at a much finer granular level to keep insiders off of each other’s grass or peering over the VM fence.

That is, each process and shared memory resource –such as IPC– needs its own individual key. But further, the storage and use of keys etc needs to be held in non-core memory. In effect in registers when a process is running, but otherwise held in an entirely separate, non-cached key store that also holds the adjoining process page tables and read/write/execute and similar attributes.
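
A minimal sketch of the per-process key idea (the `KeyStore` name and the keyed-BLAKE2 keystream are illustrative inventions for demonstration, not any real CPU’s memory-encryption mechanism):

```python
import hashlib, os

class KeyStore:
    """Hypothetical key store held outside core memory: each process (and
    each shared resource such as an IPC region) gets its own key."""
    def __init__(self):
        self._keys = {}
    def key_for(self, owner_id):
        if owner_id not in self._keys:
            self._keys[owner_id] = os.urandom(32)
        return self._keys[owner_id]

def xcrypt(key, page_addr, data):
    """XOR with a keyed-BLAKE2 keystream bound to the page address.
    A toy stand-in for real memory-encryption hardware; the same call
    both encrypts and decrypts."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        block = page_addr.to_bytes(8, "big") + counter.to_bytes(8, "big")
        stream += hashlib.blake2b(block, key=key).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

ks = KeyStore()
page = b"per-process secret state"
ct = xcrypt(ks.key_for(1234), 0x7F000, page)
assert xcrypt(ks.key_for(1234), 0x7F000, ct) == page  # owner key decrypts
assert xcrypt(ks.key_for(4321), 0x7F000, ct) != page  # another process's doesn't
```

The point of the sketch is the granularity: a DMA attacker, or a sibling process, reading another process’s pages gets only ciphertext under a key it never holds.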

The future, if we are going to remain “secure”, is going to be quite different architectures, which the academic community is only just showing signs of thinking about…

[1] Tricks aside, Gordon Moore has tried to kill off “Moore’s Law” several times. The last time he went for it was back in 2015, when even Intel admitted the curve was changing. However the share marketers insisted otherwise… But last year Intel made it official in their SEC filing. Which might be another reason why the CEO selling those shares was not a good idea, insider trading wise. What did amuse me was that what kind of brought Intel back on line was the purchase of Altera, with their high end FPGA weighing in at 30 billion transistor-equivalents on a chip.

Clive Robinson February 9, 2018 9:27 AM

@ Ratio,

The theory that says that, unless no information can be transmitted, information can be transmitted?

Oh dear there you are again trying to stupidly put words into other peoples mouths.

Do you have some kind of aberrant behavioural mode that you lack the will or ability to resist displaying publicly?

I notice other people have complained at you very recently for similar comments aimed at posters, not the subject under discussion.

If I remember correctly the last time you went down this path the Moderator did not look kindly on it and what followed as a consequence.

Anyway I ask that you only address topic related questions to me in future. As I don’t want this thread or other threads being railroaded yet again.

Ratio February 9, 2018 10:02 AM

@Clive Robinson,

The (rhetorical) question was about the theory, not its originator.

The endless ad hominem is still boring, by the way.

maqp February 10, 2018 9:38 AM

@Thoth, @65535, all

I like the Internet Relay Machine, but I worry it’s too similar to “Relay node” inside a Tor chain between client and Rendezvous node. But Relay Program sounds really good, and @Nick P’s suggestion of Networked Computer (which has stuck with me) is distinct from it. Many thanks for these to both of you.

As for inbox and outbox, the terms are strongly associated with email, which is problematic. I don’t want the user to think TxM caches outgoing messages to an “outbox” before sending them. I reviewed industry terminology, and many companies seem to use the terms “Source Computer” and “Destination Computer”. These reflect the unidirectional nature, don’t reinvent the wheel, and are almost as simple. (Also, src and dst work as abbreviations in code.) Your thoughts?

As for TCB programs, Receiver program is self-explanatory, but I can’t decide between Transmitter Program and Sender Program: your input on this is more than appreciated.

So

Transmitter (or Sender) Program on Source Computer
Relay Program on Networked Computer
Receiver Program on Destination Computer

@Sancho_P

(See the text above on updated terminology)

“language isn’t an excuse, it’s a tool and the clearness depends on it’s use.”

My rationale is not to name adversaries with female names. Just to have names of male and female mixed. I have absolutely no problem having “Alex, Betty and Eric” instead of “Alice, Bob and Eve”. It’s just uncommon and I feel I should spend additional time defining role of each name. I consider myself a feminist, and neither I nor my peers have found this naming offensive (I double-checked just to be sure and they felt since it’s common practice, there’s nothing sexist about it). Should this become a problem, I would switch without hesitation. On a side note, you should pay attention to the reasons behind strong reactions in social media: It has more to do with negative reaction to being called out, than what injustice originally took place.

RE: broad threat model.

Siblings are most likely defeated by FDE, provided they’re not doing evil maid attacks. Criminals should be deterred by any physical security measures, and as for state actors, the limitation is that protection only covers remote compromise, post-setup, assuming the user has followed precautions. Entire books are written on this topic, and including a large amount of information regarding threat model assessment would not be beneficial. Links to good articles and books on the topic are worth including as “optional” reading.

Regarding unlinked accounts, I’m considering things like unique user accounts, so that you can pre-generate multiple accounts for yourself and refer to the next account during chat, so you know which account to add for that contact the next time. This depends a lot on how Tor implements v3 Onion Services.

“Only that complexity in that secure box causes me headaches.”

We should tackle this. Could you share your current scheme for how the system operates and where the additional complexity lies, and I’ll try to explain the rationale or fix what’s broken.

“your reasoning for the separation and the use of a dedicated HW – data diode did not convince me”

Could you please elaborate on this?

“It may be a “chat” requirement, but increases complexity and attack surface (key mat on both devices).”

The “keymat” seems slightly confusing terminology since TFC no longer uses OTP; to me “key pad” and “key mat” are synonymous. I’m assuming you refer to the key database in this context. So just to confirm: you’re aware that message keys on the local Destination Computer for outgoing messages are encrypted with a master key derived from the master password, and that those message keys are forward secret and practically immediately erased when a message is sent and the copy of it is received? Sent messages are not logged by default. With this in mind, why do you think it’s less dangerous to have opt-in logs just on the local Source Computer than it is to have opt-in logs on both Source and Destination Computers? I can of course add a setting that disables delivery of decryption keys of outgoing messages to the Receiver Program, and does not deliver outgoing ciphertexts to the local Destination Computer. But that would require you to correlate time-stamped outgoing messages in the terminal log of the Transmitter Program with time-stamped incoming messages of the Receiver Program. I think this would make following the conversation much harder, which would hurt usability a great deal. What’s the major security benefit that justifies the inconvenience?
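
The forward-secrecy property described above can be sketched as a plain hash ratchet (a simplification for illustration; TFC’s actual key management differs in detail):

```python
import hashlib

def next_key(key):
    """Ratchet forward: new key = hash(old key). Because the hash can't be
    run backwards, erasing the old key after each message means a later
    compromise can't decrypt earlier traffic."""
    return hashlib.blake2b(key, digest_size=32).digest()

key = b"\x00" * 32            # placeholder initial message key
for message in (b"hi", b"bye"):
    # ... encrypt `message` under `key` here ...
    key = next_key(key)       # then immediately erase the old key
```

Both sides step the same deterministic chain, so the receiver always holds the key for the next incoming message while neither side can recover keys already erased.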

Nick P February 10, 2018 11:25 AM

@ Clive Robinson

Thanks for another detailed write-up on energy gapping. I’ll send that to some people that are asking me about what I mean by that term. On hardware verification, she wrote that in 1988. The other critique came from the CLI group that made the first, truly-verified stack. They were competition, though, whereas she was someone trying to check Viper’s proofs. Probably due to realizing the gaps, she eventually just got bored with the project, moving on and saying it wasn’t worth further time. Ouch. 😉

@ maqp

I’ve been collecting what I saw of your stuff for a long-overdue project of trying to implement it. I’ve just been slammed over past year with high workload and life issues. So, sorry I’ve been away. My research and writing has been sort of a break from it all to recharge. I do tweak your design into new stuff I see, esp in hardware or embedded, to try to figure out how best to implement it. Whatever is done will need to be dirt cheap, easy to assemble, mitigate backdoor possibility, aid securing software level at least for Receiver/Network, and amenable to FOSS. That’s a tricky combo even though it shouldn’t be. I’ve mentally redone the original TFC probably a 100 times since we last talked even though not publishing anything.

I have been making progress on numerous fronts for anyone curious about that stuff. My main one is a high-level, fire-and-forget summary of several decades’ worth of assurance techniques that I’ve been studying. It’s here. Also, I did write-ups on obfuscation and hardware subversion. These three have the kinds of things I keep in mind when brainstorming design/implementation strategies for your work. Well, for everything, but NSA-resistant chat doesn’t show up every day. It has to be built at some point. 🙂

Work on it February 10, 2018 4:13 PM

The endless ad hominem is still boring, by the way.

Seconded. @Clive you need to watch how you come across when browbeating people on the internet.
You know it’s beneath you.

Sancho_P February 10, 2018 5:22 PM

@maqp

I nearly missed your reply because I’m mostly a “100 Latest Comments” reader who then searches for my nick. It would be good practice to have all nicks in the first line (or use different posts).

Re gender, I think I wrote “more than half”, might be less than 60%, but I see you understood. Mostly there is no original injustice but nearly always a crazy reaction.

My ISP has had a complete outage since Friday; I’m on mobile if it’s not overloaded. I’ll get back to your other points later.
Only I’m not sure if we wouldn’t overly stress @Bruce’s hospitality – last year you suddenly forgot to look at your email?

Ratio February 10, 2018 5:38 PM

@Work on it,

FYI, when you spew ad hominem in response to someone daring to, shall we say, question your profundities, that isn’t browbeating. It’s just sad.

Clive Robinson February 11, 2018 10:46 AM

@ Nick P,

On hardware verification, she wrote that in 1988

Thanks, I thought it might have been the later half of that decade from the dates in the references[1].

I remember Viper because it was being written up in journals as though it was the eleventh tablet for Moses.

As you have no doubt found by now, energy-gapping is more in-depth than the name suggests, in that you can only mitigate energy channels you not just identify but understand well enough. Further, there are darn few people who understand sufficient areas of science to see more than the obvious channels, or how knowledge from one channel domain can be transferred to another domain…

Oh and it does not help when certain people decide they are going to run their next “sad attack”. Which is why I’m going to go off this thread before talking further about the subject.

One of the reasons I stopped with personal email was a similar reason, the discovery that the signal to noise ratio had gone negative and trying to address the issues of the ratio were just a waste of time…

[1] Dating an actual contribution to a field of endeavour is hard enough at the best of times due to publishers, but they are now making it worse as they try different money making models. You could come up with an idea, write it up and send it to a journal. They would sit on it based on editorial selection before sending it out for peer review, which could be quick or slow. The more revolutionary the idea, the longer it would take. Thus there was a balance between incremental, new and original type ideas. So you could submit an original idea and not see it published for many months, if at all. Meanwhile somebody else could write three incremental papers, get all of them published in less time, and end up in the same place as your yet to be published paper. They get the credit and you don’t even get a historical footprint. Now some publishers are developing new models: high end, high priced journals that few will ever see, then a lower priced “best of the past six months” compendium that will not include all papers, and then a “dribble out” phase to the Internet that might or might not be free, but papers will no doubt be selected on their “free-click to premium-pay” conversion potential rather than their submission date. I suspect that is why some academics “self publish drafts/preprints” to try and draw a line in the sand. I’m guessing we are slowly seeing the end of the journal mafia, but it’s very certain that they are going to fight tooth and nail to keep their money machine rolling. But some of the supposedly egalitarian web sites are anything but, in that you have to be invited to join by an existing member who will vouch for you…

JG4 February 11, 2018 5:04 PM

@Clive – I thought that the (maverick) physicists had called bu115hit on the high-priced private publishers and now have their own real-time journals where your submission is published instantly and reviewed officially and unofficially while it is being revised as needed. I was pleased to see last week that Thomas A. Bass has a university position. He wrote two books that were important to my intellectual development. I read the first in the early 90’s. I’m sure that I’ve mentioned “The Eudaemonic Pie” before. Can’t recall exactly when I read the second, but I’m equally certain that I mentioned “The Predictors.” There is a great quote in the book from the holiday skit put on by the scientists, “Just say no to bu115hit.” That is the lite version of withdrawing consent. When I read the Shannon book last fall, I realized that it was Ed Thorpe’s work with Shannon that set the stage for The Eudaemonic Pie. Always and everywhere on the old blue marble of entropy maximization, it is an endless struggle over every fount of delta G.

maqp February 12, 2018 4:12 PM

@Nick P

“I’ve been collecting what I saw of your stuff for a long-overdue project of trying to implement it.”

If the project is for communication, the Python version could function as a rapid development prototype. I hope you find the time to try the local testing version on an Ubuntu 17.10 virtual machine. Feedback on usability is important, so I’ve put a lot of effort into making installation as easy as a single copy-paste command, even now when the Onion Service back-end is incomplete. I hope you’ll see what I’ve considered best practice for each smaller problem, and that it gives answers or ideas about how to implement the low level version, whether it’s the networking side, cryptographic protocol design, or something as simple as encoding choices.

“I’ve just been slammed over past year with high workload and life issues. So, sorry I’ve been away.”

That’s okay, my personal life has gotten, and will get, in the project’s way. Related to that, I can’t promise anything except that the project isn’t going to be abandoned. My conjecture is more and more “this is the theoretical limit of how HW/SW architecture can protect communication”. However, the implementation has tons of things to improve, from algorithms (X448, or a PQ key exchange if one with short enough keys is found) to usability (a decent GUI). So I feel I will be coming back to this project for a long time, even if I need to take breaks.

“I do tweak your design into new stuff I see, esp in hardware or embedded, to try to figure out how best to implement it.”

I’m glad the design is in the back of your mind and like I said above, I hope the details can give you even more ideas.

“My main one is a high-level, fire-and-forget summary of several decades worth of assurance techniques that I’ve been studying.”

I’ll have to reserve some time in my weekly schedule to dive into these. Related to subversion, an interesting thesis on covert channels was circling on Twitter recently: I’ve never seen anything as extensive on the matter as this, I hope you’ll find it interesting.

“NSA-resistant chat doesn’t show up every day.”

Not sure if it’s going to be more positive surprises about the attention to detail I’ve gone through to achieve this, or if it’s going to shock you (“No, no, why would he do that”). I don’t know which one I’m more enthusiastic to hear feedback about.

@Sancho_P

“My ISP has a complete outage since Friday, I’m on mobile if it‘s not overloaded, I‘ll be later back to your other polnts.”

Sure, I’ll wait and monitor this plus newer FSBs. As for emails, it happens to me practically always; things get lost amidst mailing lists, and I delay replies. Also, a big problem was the numbering scheme that didn’t really work: tackling too many issues, from design to my personal home page, at the same time, while trying to compartmentalize things that are usually related, felt difficult, especially with everything else going on in my life at that time. I apologize for that. I hope we can keep the conversation here and thus stick to the point, to avoid, like you said, stressing @Bruce’s hospitality.

I’m hoping the conversation can stick around existing source code and documentation, as well as any diffs to those I might add here. Additionally all improvements to SW/HW, feature requests etc are worth tackling.
