Roughly, any encryption scheme can be assigned to one of three classes of message secrecy, with respect to “data at rest”:

• Weak — an attacker could perform cryptanalysis (key discovery) within a feasible time frame and budget

• Strong — while cryptanalysis is possible in principle, the required time and resources make it infeasible to achieve … there is no *practical* attack

• Perfect — cryptanalysis is impossible, even with unlimited resources

Shannon proved that the Vernam cipher — and *only* the Vernam cipher — achieves perfect message secrecy.

Every cipher with a key smaller than the message — there are no exceptions to this! — can be cryptanalyzed (if all else fails) by exhaustive search of the keyspace. For modern ciphers exhaustive search is infeasible, but not theoretically *impossible*. Only the Vernam cipher can never be cryptanalyzed, even in theory.
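To illustrate what exhaustive search of a small keyspace looks like, here is a toy sketch (my own illustration, not any cipher from this discussion): a repeating one-byte XOR key is recovered by trying all 256 keys against a known crib.

```python
def xor1(data: bytes, key: int) -> bytes:
    # repeating single-byte XOR: the same function encrypts and decrypts
    return bytes(b ^ key for b in data)

plaintext = b"HELLO WORLD"
key = 0x5A
ciphertext = xor1(plaintext, key)

# exhaustive search: with only 256 possible keys, a short crib
# (a known plaintext prefix) pins the key down immediately
candidates = [k for k in range(256)
              if xor1(ciphertext, k).startswith(b"HELLO")]
print(candidates)  # [90]
```

With a 128-bit key the same loop would need 2^128 iterations, which is the sense in which modern ciphers are infeasible but not impossible to search.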

Therefore, a cipher requiring the “exchange of a small secret” can never, never, never achieve perfect secrecy if the message volume is larger than that small secret. Insisting that such a scheme has perfect secrecy is proof of failing to understand Shannon’s paper.
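A toy counting argument shows why (a sketch of the pigeonhole reasoning, not any particular cipher): with a 2-bit key and 3-bit messages, any observed ciphertext is consistent with only 4 of the 8 possible plaintexts, so the ciphertext necessarily leaks information.

```python
KEYS = range(4)      # 2-bit key: 4 possibilities
MESSAGES = range(8)  # 3-bit message: 8 possibilities

def enc(m, k):
    # toy cipher: addition over Z_8
    return (m + k) % 8

c = enc(5, 2)  # attacker observes ciphertext c
# plaintexts consistent with c under SOME key
possible = {(c - k) % 8 for k in KEYS}
print(len(possible), len(MESSAGES))  # 4 8 -> half the message space ruled out
```

Perfect secrecy would require every plaintext to remain possible, which forces at least as many keys as messages.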

Having established that any scheme like the one you have outlined cannot be in the Perfect category, it must be either Weak or Strong.

Most Strong ciphers in broad use (like AES) have been broken in the narrow technical sense that researchers have found attacks less costly than exhaustive search … but for Strong ciphers, these attacks are still many orders of magnitude beyond any imaginable effort, and often require preconditions which would be impossible to achieve in most usage scenarios.

I am most confident that the scheme you have in mind is Weak — if it were used for any large message volume, it could be cryptanalyzed at a cost of perhaps a few dollars, or even a few pennies.

People who *design* strong ciphers (Bruce is the only person here I know to have done so) start by intensive study of the art and science of cryptanalysis, which is deep and difficult.

Whoever would create a strong cipher had best begin by breaking as many ciphers by other people as they can manage.

Whoever would encourage people to use a newly invented cipher had best make an extremely solid and convincing case why it’s better for them than what is already in use. That case *must* include assessment by experts in cryptanalysis who have made a large investment of time studying various attacks, and have assessed that the best practical attack is at least billions of times more costly than even a nation state could afford.

If I understand correctly: if you have a 1-billion-character sequence instead of 2000 but the plaintext is 1 million characters, then the plaintext will leave a signature on the random cipher input, allowing both to be detected, whether by multiple plaintexts, by multiple cipher random bits, or by filtering the non-random plaintext out of the cipher by throwing random strings at the combined cipher output.

Yes, it will leak.

Why does ‘fue’ repeat with each iteration, just shifted to the left?

“which means each plaintext character will have a unique permutation”

Is that each ‘a’ in the alphabet set, or each ‘m’ in the message?

The importance is different.

If it’s each ‘a’ the cipher is polyalphabetic and thus repeats, allowing the equivalent of a messages-in-depth attack.

If it’s each ‘Mi’ then the permutations never repeat, and that reduces mathematically to M simple substitution ciphers.

If the latter, the question then moves to how the permutations are selected.

If selected deterministically, then if M is large the patterns will eventually show, which enables prediction.

If random, then there are no patterns to show, thus no prediction is possible regardless of length.

What you appear to go on to describe is a deterministic process, not a random one, which means it is breakable if M is long enough.

Does that answer your question?

The table got a bit out of shape, so here it is again:

0 1 2 3 4 5 6 7 8 9 to 25

**A B C D E F G H I J K L M N O P Q R S T U V W X Y Z**

X H G N V A Q T L Z P R Y W B F U E I M K C D J S O

L Z P R Y W B F U E I M K C D J S O T Q A V N G H X

W B F U E I M K C D J S O T Q A V N G H X Y R P Z L

O T Q A V N G H X Y R P Z L S J D C K M I E U F B W

Z L S J D C K M I E U F B W P R Y X H G N V A Q T O

R Y X H G N V A Q .

Thanks for the lecture. It is well received, your suggestions are taken on board, and you are correct in your statement that more information is needed to evaluate an encryption system. However, that was not the question I had for you and for @weather, or was it? The question was based on a mathematical problem I put to you: whether it is possible to determine a 2000-character random string which had been transformed into a cipher, using the 26 characters of the English alphabet. For @weather I reduced that to a problem using 4 bits and, like you, he failed to give a simple answer. The answer should have been ‘No’ or ‘Yes’, and in the case of the latter a mathematical proof would have been nice. That should have been followed up by the question of how the system works, not with a conclusion from your end that it doesn’t work. I hope you follow the (patronizing) comments you made and open your mind to absorb much knowledge. If you can’t, then find some knowledgeable people who can help you, as we did during the course of the last five years (not the 5 or 10 minutes you suggested).

The dead-end path, as you called it, became our road. It started at the Caesar cipher and led to Leon Battista Alberti in the 15th century, who invented the Alberti Disk. Kahn credits him with the invention of polyalphabetic substitution, and in the 16th century multiple alphabets and code preceding encryption were added. Until the 19th century this offered strong protection for ciphers. At the end of the 19th century it was Frank Miller, and at the start of the 20th century Gilbert Vernam, who independently invented the OTP.

Let’s have an example based on the Alberti Disk, but instead of having a second alphabet going from A to Z we use a permutation. Using only this permutation and reading ‘HELLO’ from the outer ring to the inner, our cipher becomes TVRRB. Obviously that is not safe, so we change the modus by changing the permutation after each character encryption. From the 16th century we know that people used a character code to move the alphabet for each encryption step, but this code was limited, and at the end of it one would start with the first permutation again if the plaintext length required it.

Our change uses the plaintext itself to create permutations, which means each plaintext character will have a unique permutation, and during the process the plaintext is transformed into a random character string. The modus here is: move to the position of the first plaintext character on the alphabet, take the permutation characters up to that position, and place them in reversed order at the end, creating the next permutation. The character at the start of this next permutation becomes the first transformed plaintext character. H is replaced by L, and modular arithmetic is applied with the first character of the first permutation. We move to the second plaintext character, E, and repeat the process; W becomes the next transformed plaintext character and is paired with the second character of the first permutation. Once all plaintext characters have been transformed and modular arithmetic has been applied, our cipher is IDUMM.

An adversary knowing the system also knows that prior to encryption the first permutation is generated via a character code, which can be 0 to 4 characters in length. Let’s assume in our case these have been two characters, U and K, and the data we transmit is UKIDUMM.

0 1 2 3 4 5 6 7 8 9 … 25

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

X H G N V A Q T L Z P R Y W B F U E I M K C D J S O H

L Z P R Y W B F U E I M K C D J S O T Q A V N G H X E

W B F U E I M K C D J S O T Q A V N G H X Y R P Z L L

O T Q A V N G H X Y R P Z L S J D C K M I E U F B W L

Z L S J D C K M I E U F B W P R Y X H G N V A Q T O O

R Y X H G N V A Q T O …

TVRRB – single permutation mapping

LWOZR – multiple permutations; transformed plaintext + modular arithmetic = cipher

H.. | L + X = I

E.. | W + H = D

L.. | O + G = U

L.. | Z + N = M

O.. | R + V = M
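For what it’s worth, the modular-arithmetic column of the worked example can be checked mechanically; a minimal sketch, assuming A=0 … Z=25 and addition mod 26:

```python
A = ord('A')

def add_mod26(x: str, y: str) -> str:
    # combine two letters by modular addition, with A=0 ... Z=25
    return chr((ord(x) - A + ord(y) - A) % 26 + A)

transformed = "LWOZR"   # transformed plaintext, per the table above
perm_prefix = "XHGNV"   # first five characters of the first permutation
cipher = "".join(add_mod26(t, p) for t, p in zip(transformed, perm_prefix))
print(cipher)  # IDUMM
```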

Certainly you are a knowledgeable person as far as cryptography goes, and I trust you have no problem presenting us with a mathematical solution for the dilemma we face.

Before you answer (big if), keep in mind we can use the hexadecimal system or binary system for the initial secret we share (an option for users of our system). So all written languages and data formats could be possible solutions, be that ASCII or Unicode.

Here is a slightly different permutation at the start, resulting in the same transformed character string as in the example above. The plaintext here is WORLD.

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

X R G N V A Q T J P W Y H Z B F U E O M K C D L S I

LWOZR

“Your definition of a stream cipher might be your personal view on it, but is not a universal view shared by others and you might look that up by googling it. The OTP is a stream cipher.”

The basic definition of the stream cipher superset is that:

1, It has a simple mixing function of a limited alphabet / width.

2, The mixing function has two inputs one for a character A of message M the other for a key K.

3, The key K remains the same for encryption and decryption.

4, The message M is plaintext P for encryption, and ciphertext C for decryption.

5, The mixing function output is ciphertext C for encryption and plaintext P for decryption.

6, The mixing function f is reversible, thus C=f(P), P=f(C), C=f(f(C)), or P=f(f(P)).

Note that the primitive used in the mixing function f is not limited to XOR; it can be ADD, but then the overall mixing function would need to complement one of the inputs, usually the key K to K’, with respect to the cardinality of the alphabet (i.e. two’s complement for binary-power alphabets). Thus C = P ADD K, and P = C ADD K’.
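The ADD-with-complemented-key construction can be sketched in a few lines (a minimal illustration over an alphabet of size 26; K’ is the modular complement of K):

```python
def mix(msg, key, n=26):
    # elementwise modular addition: C = M ADD K over an alphabet of size n
    return [(m + k) % n for m, k in zip(msg, key)]

def complement(key, n=26):
    # K' such that ADDing K and then K' is the identity
    return [(n - k) % n for k in key]

P = [7, 4, 11, 11, 14]   # "HELLO" as numbers
K = [3, 19, 0, 8, 25]    # key of the same width
C = mix(P, K)
assert mix(C, complement(K)) == P   # P = C ADD K'
```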

Where opinions diverge is with regards the key K and how it is generated.

For the OTP and similar provably secure systems, each element of K, Ki, is randomly selected and fully independent of every other Ki. That is, “You DO put the ball back in the urn after drawing it”.

For other Stream Ciphers the generation process “has memory”, thus the individual Ki, whilst they may be randomly selected, ARE NOT independent of each other. That is, “You DO NOT put the ball back in the urn after drawing it; you wait until the urn is empty, then put all the balls back at the same time and start again.”

Using this more restricted “With Memory” versus “Without Memory” stream cipher distinction, the OTP is not a Stream Cipher, even though the mixing mechanics are the same.

Importantly, with the “With Memory” key generation method, the fundamental size of the memory defines the “Key Space” size, not the “message length”. The opposite is true for the OTP, where the generation method is unbounded and thus the message length defines the key space size for each and every message.
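The memory-bounds-the-keyspace point can be seen with a toy generator (my own sketch, not any real cipher): 4 bits of state can visit at most 16 states before the keystream must repeat, by the pigeonhole principle.

```python
def cycle_length(seed: int, state_bits: int = 4) -> int:
    # iterate a deterministic state update until a state repeats;
    # with state_bits of memory there are only 2**state_bits states available
    mask = (1 << state_bits) - 1
    seen = {}
    s, i = seed & mask, 0
    while s not in seen:
        seen[s] = i
        s = (5 * s + 3) & mask   # arbitrary deterministic update
        i += 1
    return i - seen[s]

print(cycle_length(9))  # never more than 16 for 4 bits of state
```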

The system you described has, you say, a “small secret” that both parties know; well, that is what defines your system’s keyspace size, thus it is not an OTP.

Look at it this way: there are deterministic algorithms that can be used to generate some irrational numbers, and they have memory requirements that grow with the length of the output. In effect they generate fractions where the memory required to hold the fractions grows with each iteration. So with pi it’s effectively 2n, where n is the number of digits you HAVE output, so 3/1, 31/10 …. 31415926/10000000…
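The growing-fraction point can be shown directly (digits taken from the example above):

```python
digits = "31415926"  # leading digits of pi, as in the example above
# successive approximations 3/1, 31/10, 314/100, ...: numerator and
# denominator both grow with every digit emitted, so memory grows too
fracs = [(int(digits[:n + 1]), 10 ** n) for n in range(len(digits))]
print(fracs[0], fracs[1], fracs[-1])
```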

Now you could use such a generator to create an endless stream of digits, but the number of such generators is limited. Thus you could use a second function to mix the last two or more digits output with a short secret.

But… based on “the enemy knows the system”, an attacker would know what your generator algorithm is, thus at worst they would only have to brute force the “short secret” space to have the plaintext. Thus the issue moves from one of infinite-length key generation to one of determining that you have a valid plaintext.

This depends on the statistics not of the key, which is now irrelevant, but of the plaintext. Have a look at unicity distance and how it is calculated.
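As a concrete sketch of the unicity distance calculation (U = H(K)/D, where D is the per-character redundancy of the plaintext language; the ~3.2 bits/char figure for English is a commonly quoted approximation):

```python
import math

def unicity_distance(key_entropy_bits: float, redundancy: float) -> float:
    # U = H(K) / D : ciphertext length beyond which, on average,
    # only one meaningful plaintext remains consistent with the ciphertext
    return key_entropy_bits / redundancy

# classic textbook example: simple substitution over English
HK = math.log2(math.factorial(26))   # key entropy, about 88.4 bits
U = unicity_distance(HK, 3.2)        # about 28 characters
print(round(U))
```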

It’s why you should always “flatten the statistics” of your plaintext to reduce them as much as possible.

The usually quoted, but actually not very good, way to do it is to use “dynamic lossless compression”.

There are two basic types of lossless compression, dynamic and non-dynamic. They both have advantages and disadvantages. Whilst dynamic compression will result in shorter messages in most cases, the way it works requires a dictionary unique to each message be built and sent with or within the compressed plaintext. This dictionary “adds structure”, which is an even bigger peg for a cryptanalyst to hang their hat on than ordinary language statistics, and such structure is easier to detect mechanically…

Thus if you are going to use compression to reduce message length, you need to still follow it with a way to “flatten the statistics” this time of the structure not the language. There are ways to do this such as some forms of fractionation.

Oh and remember that ALL forms of Error Correction “add structure” so should only be done on the ciphertext never on the plaintext.

Some of your English was indecipherable to me.

I didn’t mean to suggest that lossless data compression can’t be done … I use it all the time!

However, the average reduction for a *truly random* bit sequence is very nearly zero. This is always true for *lossless* compression algorithms, no matter how clever (or computation intensive).
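This is easy to demonstrate (a quick sketch using zlib; any lossless compressor behaves the same way on incompressible input):

```python
import os
import zlib

data = os.urandom(100_000)           # truly random bytes
packed = zlib.compress(data, level=9)
# on random input, DEFLATE falls back to "stored" blocks, so the
# output is slightly LARGER than the input, never meaningfully smaller
print(len(packed) - len(data))
```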

I know your viewpoint on the matter, but you can compress a 700 MB file down to 1000 bytes and expand it; probability comes into it. With extreme compression you’re looking at maybe 60% chance of data success, with 1-byte compression you’re looking at 90%. I posted the general gist earlier, but wrapping it up in a package, yes, that won’t work. But wetware can tell: if it looks like a book it expanded correctly, or if a PE runs without crashing it is correct.

If @mod doesn’t mind the spam I’ll post C code for compression and expansion, and give you a BMP picture compressed, but warning: it will probably take a year to expand. It’s serial instruction; you can’t really parallelize it.

Bending the laws, not breaking them, but I know your view on the matter.

I don’t understand what you’re driving at.

I recommend looking at the Wikipedia pages for ciphers such as RC4 or ElGamal, as examples of how to make a comprehensible and unambiguous description of a cryptosystem.

If you want insightful criticism of your invention, the best way to seek that out is to write it up clearly and succinctly so that knowledgeable people can read it in 10 minutes (or much better, 5 minutes).

Then if you’re lucky, someone who understands cryptography will be patient and generous enough to explain why the invention doesn’t achieve what you seem to believe it does. The open mind can absorb much knowledge!

Every claim for a perpetual motion machine (whether “zero point energy” or anything else) from which useful work can be extracted, has been false.

Every claim for objects or signals moving faster than the speed of light in vacuum, has been false.

Every claim that a cryptosystem achieves Shannon’s perfect secrecy without the need to communicate a volume of key material at least as large as the message traffic, has been false.

When a young friend of mine was in his teens, sometimes he would tell me about a tech project he and his friends had cooked up, which I was confident couldn’t succeed. They were all very bright, and were simply wandering into territory they didn’t know much about.

Rather than pour cold water onto youthful enthusiasm, I would respond to the ideas by saying, “son, you’re going to learn a lot!”
