Comments

Jim November 16, 2017 8:56 AM

When I go to the Frontline Defenders site I get a warning in IE (yeah, I know – I am on a corporate box) saying “Do you want to allow this webpage to access your Clipboard?” Doesn’t seem very “privacy friendly” to me!

Inwoods November 16, 2017 9:41 AM

So I just went down quite the Bitlocker rabbit hole, and it turns out that the TPM cards/chips for my Asus mobo are compromised, and Asus has not acknowledged that they need to put in Infineon’s fix.

Windows itself will block an Asus chip as useless if I buy one, and I need one of those 14-pin chips.

What should I do?

MikeA November 16, 2017 10:44 AM

So, an article that recommends ditching, or at least blocking, Flash is on a site that tries to run Flash content. (and stacked up half a dozen instances of itself in my back-button history, from one visit)

It’s also ironic (in a “We’re all gonna DIE!” way) that they praise Apple and recommend not putting critical stuff on iCloud. It feels like every frikkin’ update tries to trick me into enabling iCloud backup.

wumpus November 16, 2017 11:29 AM

@Ander’s “restore privacy” guide appears to be aimed more at the level seen at this site.

Note that while it clearly points out the dangers of “Windows 10”, it seems to sweep aside the issue of Windows surveillance presumably being backported into Win7 and Win8. So you get to choose between unpatched editions of Windows with years of known vulnerabilities, or downloading the updates and inheriting most of the issues of Win10.

It also ignores the “threat model”. While I’d expect anyone going to this level of security to have already considered such issues, I suspect they are missing the point. There are two elephants in the security room (the first is a common refrain here, the second doesn’t seem to be mentioned as often as it should be):

  1. Everybody else is leaking your data all over the place. There’s almost nothing you can do about Equifax leaking your data; it isn’t like you can find financial institutions that won’t hand your data over to Equifax and similar jokers. I’d be very curious how well you can track people without Facebook accounts simply by what people who use Facebook often enough mention about them.
  2. Physical access. While the “evil maid” is a famous attack, I’d recommend the “crazy ex” as a worse danger. This is compounded because you would typically expect to share critical information, and you may not be aware of a domestic crisis while it is starting. There was an amusing example recently of a wife using a sleeping husband’s thumb to unlock his phone and discover his adultery, but more extreme examples should be obvious and well known. Fortunately “protecting the kids from strangers” at least motivates some to protect against “attacks” by the kids (most likely posting the wrong things on Facebook, but other dangers as well).

Before following any advice in these guides that may seem inconvenient (presumably tossing Windows would be high on the list for anyone not already using Linux or OSX), you should at least figure out which threats you can’t or aren’t willing to protect against, and make sure that any actions you take (at least the inconvenient or costly ones) actually reduce the real danger. Anything sufficiently below the unavoidable noise level is probably not worth defending against.

Jim November 16, 2017 11:47 AM

@wumpus nails it with his second point. I think for many, the highest risk probably comes from having teens or a non-tech-savvy significant other in the house. “You installed WHAT on your laptop/phone/tablet? And you’re running it on MY network?” Have seen some discussions on segmenting home LANs, etc., on security forums, but little mention of it in less “techy” places, including many of these “How to” sites. How many of us are conducting security awareness training, pen tests and self-phishing exercises on our families? 🙂

Jonathan Gunter November 16, 2017 1:52 PM

The next frontier, given these fine collections of simple tools & techniques, is how to walk “word people” with no affinity for numbers or technology through the time and tedium of implementing this level of security.

Spreading FUD is not the answer ….

hmm November 16, 2017 3:19 PM

“it seems to sweep aside the issue of Windows surveillance presumably being backported into Win7 and Win8. So you get to choose between unpatched editions of Windows with years of known vulnerabilities, or downloading the updates and inheriting most of the issues of Win10.”

You can remove the telemetry KBs; there’s even a script available to do that.

https://www.ghacks.net/2017/02/11/blocking-telemetry-in-windows-7-and-8-1/

Simply “hide” those updates in the control panel to prevent them from being reinstalled on the next sweep.
The issue of MS discontinuing ALL security patching for anything but W10 looms, but it’s not here yet.

As for your “physical access” vulnerability fears, if you are living in a house with someone you do not fully 100% trust and sharing your data and devices with them, whose fault is that? At least that vector is manageable. If you’ve got random sketchballs in your house accessing your machines, you have a security issue!

JPA November 16, 2017 7:54 PM

@hmm

It’s not random sketchballs in the house. It’s children and spouses who are not IT savvy and who get irritated at being constrained because they just don’t understand how badly things can go wrong. Also there are the schools that have children signing up for all sorts of free accounts that then track the dickens out of everything the child does. Or the site requires Flash to run. But I can’t stop my child from accessing these because otherwise my child can’t do the assignments. I think that having two networks at home is the only way to go: one for those who are going to practice decent infosec and the other for everyone else. Maybe three networks, with the third being for guests.

hmm November 16, 2017 8:51 PM

Most routers these days have VLANs that are separated. It’s not trivial to set up properly, but once set up you can have your kids fully sandboxed from whatever you’re doing, unless you deviate from that yourself. I think the main culprits would be central file sharing or streaming 3rd-party boxes, and of course that $100 printer with its own wifi and no security updates in 5 years.

“It’s children and spouses who are not IT savvy and who get irritated at being constrained because they just don’t understand how badly things can go wrong.”

= People who don’t get to use my machines.

There are rules in life. Those who expect to be let off from them are dangerous.
You either deal with that up front, or you deal with that on the back end.
Kids can be made to understand rules, wives too. No quarter, no carrier!

” Also there are the schools that have children signing up for all sorts of free accounts that then track the dickens out of everything the child does. ”

You’re right, there needs to be more oversight/enforcement and stronger regulations.
Tell that to the “deregulate it all” Congress as they set fire to precedent for money.

“But I can’t stop my child from accessing these because otherwise my child can’t do the assignments”

= Then you need to find a new school, or even a new district. That’s just unacceptable. Let them know vociferously.

Sometimes you simply have to be willing to walk away from an untenable situation.
This is something we tend to forget in consumerism. We need to get it back.

itgrrl November 16, 2017 9:05 PM

@wumpus:

“Fortunately “protecting the kids from strangers” at least motivates some to protect against “attacks” by the kids (most likely posting the wrong things on Facebook, but other dangers as well).”

And sadly, “protecting the kids from strangers” as a primary focus is poor threat modelling itself. By far the biggest danger to kids, statistically, is family, friends, and adults known to them [1], [2], rather than adults they don’t know.

  1. Who Abuses Children?
  2. Child Maltreatment, Facts at a Glance

225 November 17, 2017 3:49 AM

Oooph, a guide that includes notes on cryptography and then states twice that there is ‘no such thing as “perfect security”’.

Why did Shannon even bother writing “Communication Theory of Secrecy Systems”?

Dorothy November 17, 2017 9:25 AM

These articles on personal computer security are fascinating, and I do implement some of their suggestions. My question is about social media bars. One used to be able to block them using Adblock Plus and/or uBlock Origin, but on many sites a person can’t block them any more. Are these new social media bars with Facebook, Twitter, etc. buttons tracking me while I am reading the web page on which they are floating, even though I don’t click on them? I don’t belong to any of these social media companies.

Clive Robinson November 17, 2017 3:48 PM

@ hmm,

… if you are living in a house with someone you do not fully 100% trust

There is nobody born who is 100% trustworthy; we all cheat and lie, even to ourselves. We might call them white lies, or the glue that enables society to work, but they are still lies, even when you do not say anything. Then there is gossip etc…

… and sharing your data and devices with them, whose fault is that?

The statistics on divorce and spousal abuse should make you wary of making comments like those.

You could (and many have) write books about trust and the human failings that lead to its failure. Some even claim (with some experimental evidence) that the reason humans “over trust” is down to chemicals, others that it is the desire for mates or social acceptance.

Put simply, societal expectations set the average human up to over-trust and then suffer the consequences. Some get lucky; most don’t, but only suffer occasionally; a few, however, end up in purgatory or worse, which is what you would expect for a normal population distribution.

hmm November 18, 2017 6:02 AM

“The statistics on divorce and spousal abuse should make you wary of making comments like those.”

Divorce is common, but you can see it coming, and if you don’t, you’ve got other security issues.

The topic was having people in your house, accessing your machines. I assume your ex-wife doesn’t still have access to your machines, Clive? If so then all bets are off, yes.

You should be able to trust people you share physical proximity with. If that’s not true, evaluate that situation.

If you’ve allowed untrustworthy people into your life, why is that?

Clueless in Seattle November 18, 2017 10:05 AM

Does anyone have improvements to these blog posts?
https://blog.filippo.io/securing-a-travel-iphone/
https://blog.torproject.org/mission-impossible-hardening-android-security-and-privacy

Dirk Praet used to say something like: for opsec, read stuff from “thegrugq”.

The Motherboard article talks about threat modelling, as does ssd.eff.org. If you don’t know where you’re going …

On the other hand, I remember this post from Daniel regarding Reality Winner, the NSA, and election hacking
https://www.schneier.com/blog/archives/2017/06/nsa_document_ou.html

Clive Robinson November 18, 2017 11:12 AM

@ hmm,

I assume your ex-wife doesn’t still have access to your machines, Clive?

I don’t have an “ex-wife”[1], but a lot of people I used to know do. They married young and regretted it early. From what I remember, most of them did not see it coming (both men and women). In fact they thought their relationships had been through a rocky patch and were now improving, right up until they found themselves out in the cold with a handful of legal papers and mind-numbing debt.

Which brings us back to,

The topic was having people in your house, accessing your machines.

The first question you should be asking is not actually “Why are people accessing your machines?” but “Why are people in your house?” in the first place.

Aside from workmen, burglars and first responders[2], the evolution of most people is:

1, Live with parents (relatives).
2, Live in school/uni dorms.
3, Share accommodation with students / flatmates etc.
4, Cohabitation with a significant other.
5, Add children to sig other.
6, Add Grandchildren.
7, Retirement home etc.

From 4 to 7 it’s your mate and offspring for the better part of your life, hence divorce, separation and spousal abuse are very much a risk for half your life or more.

As for children, they are not leaving home as early as they used to thanks to “rent seekers” raising property prices to 10 times average income or worse (I remember it being less than three to one). And we are now seeing rising levels of “Elder Abuse” not just by relatives but low income carers.

So only having people you can trust sharing where you live is something very very few can do for all sorts of reasons.

Therefore the only sensible policy is not to trust anybody… But that does not work out very well, as normally lack of trust by one person leads to suspicion in the other person and things get poisonous very quickly… The problem is people “demand trust”, and if you give it you stand a chance of it being betrayed in oh so many ways.

Look at it this way: if your significant other says “You do trust me don’t you?”, what do you think will happen if you say “No” or “I don’t need to as I’ve mitigated the risk”? Or if you say “Of course I do”, what are you going to say when the next question is “Shall we have a joint bank account?” or “joint legal liability on debt” etc etc…

I’ll let others judge what they think your “relationship status” is but my guess is you’ve got a lot to learn.

[1] As a friend from my days of wearing the green used to say, “Marriage is the prelude to divorce… If you don’t get married you can’t get divorced.” UK law has changed somewhat since then but the point is still valid. As very few marriages are “until death us do part” these days, a little forward planning would be advised, way over and above a pre-nup, which often gets challenged in court at ruinous expense.

[2] Let’s ignore FBI SWAT teams and IC / SigInt operatives doing black bag jobs, even though they do appear with depressingly increasing frequency since the phony “War on Terror” gave them peckerhead room to be wanton home invaders.

Tor, Tails, & Tor Browser w/o Tor November 18, 2017 4:30 PM

@Anura

“For those of you that use Firefox, be aware that with the latest update many extensions, including NoScript, will no longer work and will be silently disabled.”
https://www.schneier.com/blog/archives/2017/11/friday_squid_bl_600.html#c6764184

I found this, but haven’t tried it, w/ or w/o a VPN.

“11. Use the Tor browser with a VPN (instead of the Tor network)

Tor (which stands for The Onion Router) is a free software and open network that has been around since 2002 and is primarily funded by the US government (even today). While some recommend using the Tor network for privacy reasons, it has proven to be compromised, and some would argue broken. From a usability standpoint, the Tor network can be too slow for everyday use (1-4 Mbps).

But while the Tor network has some issues (just like many VPNs), using the Tor browser with Tor disabled is an excellent option that will protect you against browser fingerprinting. The Tor browser is simply a hardened and protected version of the Firefox browser. It gives you the following benefits:

protects you against browser fingerprinting due to the browser’s default (protected) settings
helps you blend in with all the other Tor browser users
can be utilized with a good VPN service for maximum protection

Here’s how you can implement this secure browser solution:

Download the Tor browser for your operating system.
In the Tor browser go to the Options button (three lines in the top right corner) and select the Preferences icon.
Select Advanced > Network > Settings
Select No proxy > OK
Type about:config into the URL bar and hit the enter/return key
In the search box enter network.proxy.socks_remote_dns and then double-click to disable
To completely disable the Tor network, go to the search box again and enter extensions.torlauncher.start_tor and then double-click to disable

Now you can start using the Tor browser to protect yourself against browser fingerprinting. But remember to combine this with a good VPN service to hide your IP address and geolocation, otherwise this option is pointless.”

from https://restoreprivacy.com/simple-privacy-guide/

I did attempt the above with Tails, but with no success.

MarkH November 19, 2017 11:54 AM

@225:

secrecy ≠ security

“Perfect secrecy” is a term Shannon coined, to refer to an abstract property of a cipher considered in isolation.
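
In symbols (the standard Shannon definition, added here for reference rather than taken from the article): a cipher has perfect secrecy when seeing the ciphertext tells an attacker nothing new about the message, i.e.

    \Pr[M = m \mid C = c] = \Pr[M = m] \quad \text{for every message } m \text{ and ciphertext } c

or, equivalently, the ciphertext is statistically independent of the plaintext.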

Security — as used in the discipline of security engineering — can be defined as the cost (amount of resources needed) for an adversary to cause an unwanted outcome.

In this sense “perfect security” could be understood as a condition in which the attacker’s cost is greater than any attacker can afford.

Attackers often don’t “play by the rules.” Usually, they don’t crack your public key encryption by solving an enormously costly math problem; they guess your passphrase, or steal it using a keylogger, or exploit some weakness in your hardware and software to extract the key.

Security is never determined by an abstract property of a single defense considered in isolation.

Shannon’s “perfect secrecy” was attained by a cipher usually called a one-time pad. In theory, it’s perfect. In practice, as a security measure, it’s absolute crap.

One-time pad replaces the problem “how do I distribute X amount of data so nobody can read it?” with the problem “how do I distribute X amount of key material so nobody can read it?”

Perhaps the closest we can come to “perfect security” in real life is keeping certain things inaccessible to toddlers, because their available resources are so limited … but still it’s shocking how often they break through.

When it comes to information security on networked computers, however, there is no defense that is known to be impervious to affordable-cost attacks. Actual usable information systems don’t exist abstractly in isolation … they function in environments in which no aspect of information theory can provide “perfect security.”

Clive Robinson November 19, 2017 4:07 PM

@ MarkH, 225,

One-time pad replaces the problem “how do I distribute X amount of data so nobody can read it?” with the problem “how do I distribute X amount of key material so nobody can read it?”

Whilst that is true on the downside, you did not mention what is also true on the upside…

There is a reason we have both codes and ciphers; most people think they are the same, but they are not.

A code can be considered like an English-to-French dictionary with a French-to-English section in the back. What the front half of the code does is replace words or entire sentences with, say, a 4-digit number. Obviously the back half of the code is the list of used numbers and the word or sentence they are to be replaced with. As long as the numbers assigned to words are truly random, then the code has “Perfect Secrecy” if and only if it is used once. Which is a bit pointless, but nevertheless codes were in use for hundreds of years.

However, aside from the single-use issue (which an OTP cipher shares), the big problem with codes was “How do you say something that is not in the code book?”… To which the answer is, at best, messy.

Thus the problem is making a comprehensive code book long before you know what you are going to say…

The One Time Pad (OTP) cipher solves this, as you use an “alphabet” not a “dictionary”, and what boils down to as many Caesar ciphers as there are letters in the alphabet. Its security rests on the truly random ordering of those Caesar Ciphers.

Thus with the One Time Pad you gain the significant advantage of being able to say anything that can be said with the alphabet “AT THE TIME OF CIPHERING”. But the downside is that you get a much longer message as a result, which has usually carried a significant cost penalty.
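
As a concrete illustration of the “one Caesar cipher per key letter” view (a minimal Python sketch under the usual A-Z convention; the function names are mine, not from any particular pad system):

    import secrets

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def make_pad(length):
        # One random key letter per plaintext letter. secrets is a CSPRNG,
        # standing in here for the hardware TRNG a real pad would use.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    def otp(text, pad, decrypt=False):
        # Each key letter selects one of the 26 Caesar shifts: modular
        # addition to encrypt, modular subtraction to decrypt.
        sign = -1 if decrypt else 1
        out = []
        for p, k in zip(text, pad):
            shift = sign * ALPHABET.index(k)
            out.append(ALPHABET[(ALPHABET.index(p) + shift) % 26])
        return "".join(out)

    plaintext = "ATTACKATDAWN"
    pad = make_pad(len(plaintext))
    ciphertext = otp(plaintext, pad)
    assert otp(ciphertext, pad, decrypt=True) == plaintext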

Thus for years the “one time use” problem of a code book was solved by using it as a method of “compression”, then “ciphering” the code numbers, in a process we now call “super encipherment”. However the strength of the whole system rested on the strength of the cipher system used and the number of messages required to break the code book.

The thing is, “Perfect Secrecy” does not mean what it sounds like. All it really means is “that all messages of equal length are equiprobable”. But it also ignores another issue with One Time Pads, which is that they cannot be truly random in all ways; there are bounds you have to observe to avoid meaningful leakage of plaintext in the cipher text.

Put more simply, let’s assume that your True Random Number Generator (TRNG) is unbounded in its run length. That means there is a small probability it will put out two of the same characters in succession, that is a 1/(m^2) chance where m is the size of the alphabet in use; likewise a smaller chance of three of the same characters, 1/(m^3), etc. Now an attacker, not knowing that they are observing a One Time Pad in use, will use a statistical tool to check the randomness of the text. Let’s say your TRNG put out five of the same characters; that is sufficient to break the use of a single Caesar Cipher, thus a plaintext word or part of one would be recovered by the attacker. The probability is that such a rare fragment will not leak anything of importance, but… Which is why the run length in a One Time Pad needs to have hard limits put on it.

But for all that, the level of security a One Time Pad gives with the use of just a pencil and paper is really quite astounding. Which brings me to,

When it comes to information security on networked computers, however, there is no defense that is known to be impervious to affordable-cost attacks.

Whilst nearly true, it is not quite true. Networked computers are, in the general PC or server case, part of the communications system in their entirety. Which means an attacker from the network can reach around any application running on them to see the User Interface (UI). If your encryption/decryption is done by an application on that computer then it is very definitely “Game Over”, because the attacker will see the “plaintext” at the UI along with the user.

But if what is presented to the user is an OTP-encrypted message, which they then transcribe onto the pad with a pencil to decrypt the message back to “plaintext”, there is little an attacker can do (unless there is a camera that overlooks the pad).

Thus the real argument is about where the “security end point” is with respect to the communications end point. In the case of the encryption application running on a networked PC, the comms end point is beyond the security end point and the whole system is thus insecure, as indicated. However with the use of the OTP and pencil, the security end point is beyond the comms end point and the system is as secure as the OTP usage is.

Whilst I might sound quite pedantic at this point, I am doing it for a very valid reason. Which is that it is proof that, no matter how many backdoors the FBI or any other LE or IC agency requires by law,

    If the encryption system you use is secure, and your security end points are beyond the communications end points at both ends then your message content is secure from security end point to security end point.

The NSA actually admits this in its own brochures and catalogues when talking about the use of AES. If you look at their Inline Media Encryptor it clearly states that it can be used for “secret” classified data, BUT… it is only secure for “Data At Rest”, that is, not during use.

As Ed Snowden has been known to observe “trust the math” that underlies modern crypto like AES.

But you must be aware of the system pitfalls; it’s why I say that apps like Signal and WhatsApp cannot be secure in their designed use case… It’s also why I keep talking about not just old-style “air gaps” but the fact you have to “energy gap” systems. Where one is used for communications (official jargon “online use”) and one is used as the security end point (“offline use”), and they should be fully separated such that there is no energy path, be it EM, electrical, acoustic or mechanical, between the two systems.

Due to the proliferation of Smart Devices that are without doubt “backdoored by design”, the use of One Time Codes and One Time Ciphers needs to be dusted off and reappraised for those with higher than normal security requirements.

225 November 20, 2017 4:32 AM

@MarkH

Perfect secrecy == Perfect security

“The second part of the paper deals with the problem of “theoretical secrecy.”
How secure is a system against cryptanalysis when the enemy has
unlimited time and manpower available for the analysis of intercepted cryptograms?”

I don’t know when the language changed, but Wikipedia now calls this information-theoretic security (not secrecy). I think the safe way forward linguistically is just to accept that the two phrases are synonymous.

@Clive Robinson I think by ‘bounds’ and your example you are suggesting that OTPs have some size limit; this isn’t true. If I were holding down a key on an OTP machine, and told you that I was doing this, then from the cryptograms you would not be able to tell which key was held down, ever, even if you waited for a million of the same characters in a row.

Wael November 20, 2017 4:54 AM

@225,

you are suggesting that OTP have some size limit, this isn’t true.

In that case choose an OTP of one character length and encrypt your next post with it. Let’s see how long it remains secret.

Oh, make sure your post is one sentence composed of meaningful English words. Still think there’s no size limits on OTP?

Clive Robinson November 20, 2017 1:50 PM

@ 225,

I think by ‘bounds’ and your example you are suggesting that OTPs have some size limit

No, what I am saying is that to avoid leakage of plaintext the run length of a single character in the OTP needs to be limited.

The OTP is the equivalent of a number of Caesar Ciphers. The number of Caesar Ciphers is the same size as the alphabet in use.

The theoretical strength of the OTP is gained from not knowing which of the Caesar Ciphers is used next. That is, an observer without the plain text should not be able to determine which Caesar Cipher was used.

The problem is that in reality the OTP does not change the message statistics before it is used for encryption. Thus if the OTP has more than just a couple of the same Caesar cipher in a row, the output statistics change from random towards what you would expect of plaintext. The longer the run length of the same Caesar Cipher, the easier it is not just to detect the change in statistics but to actually determine the plaintext over that run of the same Caesar Cipher.

Depending on whose figures you use, the unicity length of a 26-character alphabet Caesar Cipher is around five characters. Thus you need to prevent this from happening.
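
For reference, the standard Shannon formula behind “unicity distance” (added for context, not taken from the comment) is

    U = \frac{H(K)}{D}

where H(K) is the key entropy in bits and D is the per-character redundancy of the plaintext language; the number you get depends heavily on which redundancy figure you plug in, which is why quoted values vary.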

There are a number of ways to do this; the easiest is just to throw away output from the TRNG. So you have a simple algorithm:

1, Get a new character from the TRNG.
2, Check if the new character is the same as the last four characters accepted from the TRNG.
3, If the new character matches the last four characters, discard it and go to 1.
4, If the new character does not match, output it.
5, Use the new character to update the last-four-character store.
6, Go to 1.

That algorithm will stop more than four successive Caesar Ciphers in the OTP from being the same.
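
A minimal Python sketch of that filter (illustrative only; trng_char() here is just a stand-in for whatever hardware TRNG actually feeds the pad):

    from collections import deque
    import secrets

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def trng_char():
        # Stand-in for one character of hardware TRNG output.
        return secrets.choice(ALPHABET)

    def filtered_pad(length, max_run=4):
        # Discard a candidate character whenever it matches the last
        # max_run accepted characters, capping the run length at max_run.
        last = deque(maxlen=max_run)
        out = []
        while len(out) < length:
            c = trng_char()
            if len(last) == max_run and all(x == c for x in last):
                continue  # discard and fetch another character
            out.append(c)   # accept the character
            last.append(c)  # update the last-four store
        return "".join(out)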

Whilst it is simple, it fails to stop simple patterns like ACACACAC… which again have recognisable statistics (it’s effectively the equivalent of a Vernam cipher with a two-character key…).

However the TRNG, because it is unbounded, could produce a sequence of one hundred characters or more the same. Which means that in theory, even though the TRNG is working, a character might never be output due to the unbounded run length.

Another algorithm uses an alphabet that is effectively five TRNG output characters in size. However, these form a subset of the maximum possible alphabet. That is, they are such that no three alphabet characters can form a string of five or more TRNG characters in length.

However this still leaves a problem…

Claude Shannon proved, whilst at Bell Labs during WWII, that the one-time pad, properly implemented, is unbreakable. His research was later published in the Bell System Technical Journal.

He also proved that any unbreakable system must have essentially the same characteristics as the one-time pad, which are,

1, The key must be kept secret.
2, The key must be truly random.
3, The key must be at least as large as the plaintext.
4, The key must never be reused in whole or in part.

The problem is (4), with “never be reused in whole or in part”. The logical consequence of this is that you cannot make a true OTP key stream…

Because it will always repeat at some point “in part” if it has a constrained alphabet. Look at it another way: let us assume the alphabet is the numbers 0..9. Pi is an irrational number, which means that it cannot be written as the ratio of two integers. The consequence of this is that its length has to be unbounded or infinite. However every digit 0-9 gets reused an infinite number of times, every two-digit pair 00-99 gets reused an infinite number of times, the same for three digits, etc. The same logic applies to any unbounded sequence where the alphabet used is bounded. Thus (4) above cannot actually be achieved with the “in part” constraint.

But it gets more interesting. For any given message length with a constrained alphabet there is a maximum number of combinations (write all the numbers from 000-999 to see this).

Some of these combinations will be identical to plaintext strings. Thus under the normally quoted rules of the OTP you would be encrypting plaintext with plaintext under these combinations, and what would still be a Vernam Cipher if there were any repeats, which there would be (but irregularly). However it is as well the equivalent of a “Book Cipher” that can be broken by use of the “sawbuck method”…

All of which means you have to massage the OTP key stream to keep any of these issues below their usable threshold, otherwise plaintext will leak. In most cases it will not really matter, but in a few it might reveal something of importance.

In most cases the rules used to massage an OTP are “proprietary” thus also “secret”. Also, in practice the plaintext that is to be encrypted under an OTP should first have its statistics flattened in some way. If you see an automated OTP system that does not do this then it has been designed by rank amateurs.

In professional circles using automated systems, the plaintext would first be compressed, then encrypted using a conventional cipher, then that resulting cipher text would be super-encrypted by the OTP system.

Even manual OTP systems would normally have a weak manual compression/encryption system prior to the use of the OTP for messages that were beyond a sentence or two. The same with manual stream ciphers like VIC,

https://en.m.wikipedia.org/wiki/VIC_cipher

The thing is, as far as the US was concerned, this compression/encryption prior to the use of the OTP was highly classified, and it was hidden away from the cipher clerks etc by “Standard Operating Procedure Rules” until those in the open community made not just the practice but the reason for it known…

But even now it is very rarely specifically stated as to why; it is just given as a rule to be followed, and thus ends up mainly being ignored by the likes of software writers re-inventing the wheel and ending up with a square, or worse 😉

225 November 21, 2017 1:51 AM

@Clive Robinson “Thus if the OTP has more than just a couple of the same Caesar cipher in a row, the output statistics change from random towards what you would expect of plaintext.” That’s not true: if you XOR any information with random data of uniform distribution that you have no prior knowledge of, the result is also uniformly distributed.

What you are doing is like flipping a fair coin and saying “It has landed heads 4 times in a row, we will now not allow the next sample to be heads”.
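
A quick way to sanity-check the quoted claim (a throwaway Python simulation, not something from the thread): XOR a worst-case, completely non-random plaintext with uniform random key bytes and look at the ciphertext byte frequencies.

    import secrets
    from collections import Counter

    # Worst-case plaintext: the same byte repeated, like holding one key down.
    plaintext = b"A" * 100_000
    key = secrets.token_bytes(len(plaintext))  # uniform random key bytes
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

    counts = Counter(ciphertext)
    # With a uniform key, each of the 256 byte values should show up
    # roughly 100000/256 ~ 390 times, give or take sampling noise.
    print(len(counts), min(counts.values()), max(counts.values()))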

Clive Robinson November 21, 2017 4:57 AM

@ 225,

That’s not true: if you XOR any information with random data of uniform distribution that you have no prior knowledge of, the result is also uniformly distributed.

Firstly, let’s disambiguate the manual and automatic use of the OTP. The (Frank) Miller Cipher was the first known manual OTP cipher. The Vernam Cipher was the first patent to use the XOR function; the (Joseph) Mauborgne variation was the first automatic OTP.

Secondly, whilst talking about the mixing function of the various OTP systems, you normally do not have to distinguish between XOR and simple modular addition, because XOR is addition mod 2 on a bit-by-bit basis, thus both systems are additive. The implication is that both equate to a simple substitution cipher, broadly equivalent to a Caesar Cipher when the pad value is the same.
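
Written out (standard notation, not quoted from the comment), both mixers are the same operation at different moduli:

    c_i \equiv p_i + k_i \pmod{m}

with m = 2 per bit for XOR and m = 26 for an A-Z pad, so a fixed key value k gives a fixed substitution, i.e. a Caesar-style shift.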

Now back to the subject in hand, let me put it this way,

Your random TRNG, because it is unbounded, could put out one hundred “A” characters in succession, which would, when seen as part of the whole output from the TRNG, be of “uniform distribution”.

A sentence has certain statistical properties that are dependent on the language and its redundancy.

If you shift all the letters up in that sentence by one letter (a Caesar Cipher), the statistics of that sentence do not in any way change (the same if you are using modular addition).

There are well-known statistical tests that work on as little as two characters in a string. By five characters used in a sliding window you can be reasonably certain of telling the difference between random statistics and language statistics over the size of the window. Obviously the bigger the window, the better the distinction. Also, as the window gets larger, the more informative the contents become.
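
One well-known test of this kind is the index of coincidence; here is a small hypothetical Python sketch (the window size and reference values are illustrative, not prescriptive):

    def index_of_coincidence(text):
        # Probability that two characters drawn from the text are the same.
        # English-like text scores roughly 0.065-0.07; uniformly random A-Z
        # text scores roughly 1/26, about 0.038.
        n = len(text)
        if n < 2:
            return 0.0
        counts = {}
        for ch in text:
            counts[ch] = counts.get(ch, 0) + 1
        return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    def sliding_ic(text, window=20):
        # Slide a window along the ciphertext; windows that score near the
        # English value are candidates for a constant (Caesar-like) shift,
        # since a fixed shift does not change these statistics.
        return [index_of_coincidence(text[i:i + window])
                for i in range(len(text) - window + 1)]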

If you do not get that then you are having some problems in understanding the basics of simple substitution ciphers.

So if you were presented with an encrypted message of a hundred or so characters, until fairly recently it would be assumed by a cryptanalyst to be a hand cipher.

Thus you would automatically do a series of basic statistical tests, starting with a simple character count, then bigrams, trigrams, etc. and their placement. If these showed language statistics, you would start looking for “key length” indicators using them.

Even if a key length indicator was not found, the areas with language-like statistics would be further examined. One test would be for simple substitution ciphers, of which the simplest is the Caesar Cipher. If recognisable fragments of language text came out (and they would with the simplistic use of the OTP) then other methods of looking for key length indication would be used.

Even if the result was that an OTP was probably used, the plaintext fragments might well reveal other information. Thus if a similar or same-length message was found but using a different key, they would be compared to see if it was a retransmission of plaintext (common in both military and diplomatic communications). There are a number of reasons to do this; one is to recover key streams for analysis. Which is what happened during WWII with the high-level German machine cipher those at Bletchley called “Fish”.

Thus you could “fill in the blanks” to rebuild the original message by using the partial plaintext “messages in depth” without having to worry about finding the various OTP key streams.

Such methods were used as part of Project VENONA; you might want to read up on it. It was in part how the Russian OTP re-use was discovered.

A mistake many people make with OTPs is not realising that an additive mixing function cares not a jot which input has the key stream and which the plaintext. There is of course a logical implication of this, in that if the keystream is not checked it can be seen as another plaintext, thus the OTP is no longer an OTP but a “Book Cipher” or other stream cipher. Hence, as I said, the compression then randomisation by encryption of the plaintext before super encipherment with an automatic OTP system.

225 November 21, 2017 6:07 AM

“Your random TRNG, because it is unbounded, could put out one hundred “A” characters in succession, which would, when seen as part of the whole output from the TRNG, be of “uniform distribution”.”

No fix needs to be put in to stop this happening. No reasonable literature backs up your claim that you need to use some strange RNG that has hard-coded limits on how many times in a row a character can come up. If you have a TRNG and you get 100 A’s in a row, then your message length is much larger than 100; you don’t get to fish for 100 A’s in a row and send a 100-character message starting the key from there.

Clive Robinson November 21, 2017 6:33 AM

@ 225,

No reasonable literature backs up your claim that you need to use some strange RNG that has hard-coded limits on how many times in a row a character can come up

Really?

Have you read up on debiasing algorithms for TRNGs?

Or how about reading up on how you would use a 5-bit output TRNG to build a truly random output with a 26-character alphabet?

I think that what you are trying to say is that you’ve not read, or consciously remembered, any such documentation that says it in quite the way you want to argue.

But just to make the point, what’s the unicity distance of a Caesar Cipher with a shift of one?

And how does that compare to the simple use of the OTP with one hundred “A” chars in sequence?

Clive Robinson November 21, 2017 6:59 AM

@ 225,

On re-reading the above comments I notice you’ve neglected to reply to @Wael’s request,

    In that case choose an OTP of one character length and encrypt your next post with it. Let’s see how long it remains secret.

    Oh, make sure your post is one sentence composed of meaningful English words. Still think there’s no size limits on OTP?

Oh, then go and actually look up the works of Benjamin deForest “Pat” Bayly, both during and shortly after WWII. You can find not just a circuit diagram but a block diagram and description of one of his works online; it might give you some things to think about…

Whilst I don’t mind giving you pointers to improve your education, I am not going to spoon-feed you handouts.

Also remember, when you talk about “reasonable literature”, that what I’ve been talking about was still classified information into the 1990s, and I think it may well still be so in various countries.

225 November 21, 2017 7:07 AM

You keep on bringing up other stuff, but on this bit, “Which is why the run length in a One Time Pad needs to have hard limits put on it,” you are wrong. But since you can’t accept being wrong, this conversation and your learning are at an impasse.

Wael November 21, 2017 7:15 AM

@Clive Robinson,

I notice you’ve neglected to reply to […]’s request

Methinks he didn’t “neglect”. He tried, and it looked awfully close to a Caesar Cipher, so he changed the subject 😉

andy July 12, 2018 12:28 PM

@Laurent

Thanks for this. I used to use a proxy but soon realised how unsafe they are from a privacy point of view. DNS leaks were one of the issues when using a proxy. Have used VPNs since then as their service is much better.

Clive Robinson July 12, 2018 10:34 PM

@ Moderator,

The “andy” above is a repeat offender for unsolicited link advertising.
