Comments

Moo April 1, 2016 6:23 PM

This story has been around for a couple of weeks, but given that the Tour of Flanders cycle race is on this weekend, I thought I’d mention it to the good readers of this blog. So …. backpacks and cooler boxes are being banned from ‘secure areas’, which are basically prime spots at different parts of the race that require people to buy tickets.

So … backpacks and cooler boxes now instantly mark you as a terrorist threat, but apparently it’s absolutely fine if you are carrying a handbag or any other type of ‘container’. Keep in mind that this is the biggest sporting event in Belgium and the vast majority of folks can see the race for free.

http://road.cc/content/news/182756-backpack-ban-fans-tour-flanders-due-security-alert

Lawrence D’Oliveiro April 1, 2016 6:35 PM

I have come up with an idea for making brute-force decryption even harder to do.

Normally, when you enter an incorrect decryption key, the algorithm can tell you so pretty quickly. But what if it can’t? What if the procedure for deciding that the key is wrong can never terminate?

(Of course, decryption with the correct key will terminate after a finite time. But this time could be chosen to be very long.)

I have written up a description, along with a proof-of-concept Python script, here https://github.com/ldo/wildgoose.
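
For concreteness, here is a toy sketch of one way such a scheme can work. This is an illustration only, not the actual wildgoose code (see the repo for that), and the XOR keystream is a stand-in for a real cipher:

import hashlib, itertools

def collatz_step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

def derive(user_key, steps):
    # hash the Collatz trajectory of user_key, truncated after `steps` steps
    n, h = user_key, hashlib.sha256()
    for _ in range(steps):
        n = collatz_step(n)
        h.update(str(n).encode())
    return h.digest()

def toy_xor(data, key):
    # placeholder keystream; a real implementation would use AES
    stream = b"".join(hashlib.sha256(key + i.to_bytes(4, "big")).digest()
                      for i in range(len(data) // 32 + 1))
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt(plaintext, user_key, steps):
    # `steps` is the hidden truncation point of the trajectory
    return toy_xor(plaintext, derive(user_key, steps)), hashlib.sha256(plaintext).digest()

def decrypt(ciphertext, check, user_key):
    # unbounded search: the right key stops at the hidden step count,
    # a wrong key loops forever
    for steps in itertools.count(1):
        p = toy_xor(ciphertext, derive(user_key, steps))
        if hashlib.sha256(p).digest() == check:
            return p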

Have others thought of this before?

Critiques welcome.

Wael April 1, 2016 7:38 PM

@Lawrence D’Oliveiro,

Critiques welcome.

A few comments:

  • The use of such an algorithm may raise suspicion.
  • An attacker may do the following: measure the throughput of decryption with a correct key and correlate it with data size, then start the brute-force trials. If an operation doesn’t terminate within the estimated time frame, kill it and move on to the next iteration (a toy harness for this is sketched below).
  • You are only adding a constant delta of time to each round of decryption attempts.
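
A toy harness for the attack in the second bullet, assuming (as the profile.sh script further down the thread suggests) that wildgoose.py takes the key as an argument and reads the ciphertext on stdin; the ciphertext file name is made up:

import subprocess

def try_key(key, deadline_s):
    """Run one trial decryption; a timeout is treated as 'wrong key'."""
    with open("message.json", "rb") as f:   # hypothetical ciphertext file
        try:
            out = subprocess.run(
                ["python3", "wildgoose.py", "decrypt", str(key)],
                stdin=f, capture_output=True, timeout=deadline_s)
            return out.stdout               # finished within budget: a candidate
        except subprocess.TimeoutExpired:
            return None                     # over budget: assume wrong key, move on

for key in range(1, 100):                   # sweep a small keyspace
    plain = try_key(key, deadline_s=5.0)
    if plain is not None:
        print("candidate key:", key)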

    Thoth April 1, 2016 8:15 PM

    @Lawrence D’Oliveiro
    As @Wael mentioned, there are techniques for measuring execution time that can be used to tell whether a decryption is correct. These are commonly used in smartcard and SIM card hacking and are quite easy to pull off.

    I would rather the decryption yield a result even when the wrong key is used. Without explicit checksums or MAC mechanisms, you can make a decryption with a wrong key look as though it decrypted correctly, giving you a good chance of denying possession of the secret key.

    Lawrence D’Oliveiro April 1, 2016 10:03 PM

    @Wael re: “within the estimated time frames”

    It’s not clear to me what you mean by “estimated time frames”. How do you estimate how long it will take?

    @Thoth

    Just in case it isn’t clear: if you put in the wrong key, the decryption attempt never terminates.

    Wael April 1, 2016 10:35 PM

    @Lawrence D’Oliveiro,

    Suppose someone gets hold of the algorithm and uses it to encrypt and decrypt some data with a good key. Additionally, they construct a table with a few columns and enough rows to be statistically significant. The columns are: data size, data type, time to encrypt, time to decrypt. Now they have profiled the algorithm. When they are presented with a ciphertext of known size and they don’t know the key, they can estimate from the table how long the operation should take, and add a “safety factor” constant delta of time to that. Suppose they measured that decryption takes 1 ms for 1 KB of binary data; then they’ll let the decryption run for 1 ms + 10% or so before concluding this is the wrong key and moving to the next key in the brute-force sequence.

    If they have no access to the algorithm and no ability to experiment, then this becomes a different discussion about “security through obscurity”.

    If the data size isn’t known beforehand, as with full-disk encryption, then the attacker will need to use the full size of the disk as the gauging parameter for the expected time to decrypt.

    This also assumes the decryption is done offline, meaning they have removed any anti-cracking controls. Terminating a decryption operation could then be as simple as killing the process and restarting it with a new key.

    Have I missed something? I did read the text you had on github. If I have, then please clarify the intended initial setup.

    Lawrence D’Oliveiro April 1, 2016 10:54 PM

    @Wael

    Your table will need more than 2 dimensions. The user can choose any value for the limit parameter at encryption time, to make encryption and decryption take any amount of time.

    Wael April 1, 2016 11:09 PM

    @Lawrence D’Oliveiro,

    Your table will need more than 2 dimensions.

    OK, it’s more complex… then a third dimension will need to be added (with an estimate as well, probably a worst-case scenario). What’s a typical setup you have in mind? A user being coerced to enter a key to decrypt a file or an email? Or is it full-disk encryption with a TrueCrypt “plugin” that uses your algorithm?

    Lawrence D’Oliveiro April 1, 2016 11:23 PM

    Here’s an example encrypted message:

    {
    “crypt_alg”: “AES-128”,
    “hash_alg”: “SHA-256”,
    “iv”: [200, 145, 100, 229, 106, 97, 160, 11, 252, 7, 194, 202, 16, 32, 245, 116],
    “crypt”: [171, 58, 146, 33, 215, 54, 119, 123, 182, 187, 239, 108, 223, 108, 13, 45, 206, 42, 253, 189, 187, 167, 12, 135, 88, 136, 136, 174, 249, 177, 254, 45, 44, 105, 56, 33, 9, 98, 97, 70, 236, 23, 27, 166, 141, 76, 255, 156, 49, 150, 172, 193, 218, 138, 129, 239, 126, 202, 213, 86, 168, 204, 247, 14, 15, 5, 5, 62, 238, 254, 221, 97, 28, 18, 133, 85, 13, 248, 116, 5, 233, 255, 208, 237, 106, 51, 42, 161, 33, 125, 85, 250, 17, 203, 24, 175, 235, 144, 223, 35, 99, 60, 72, 69, 20, 216, 22, 1, 73, 53, 253, 112, 36, 199, 130, 112, 124, 146, 181, 58, 85, 142, 250, 102, 91, 101, 169, 109, 227, 63, 45, 231, 187, 118, 109, 236, 166, 180, 229, 249, 179, 5, 234, 15, 95, 147, 85, 187, 132, 59, 144, 18, 147, 4, 207, 45, 53, 172, 43, 189, 133, 82, 211, 175, 110, 185, 44, 216, 133, 21, 46, 144, 195, 60, 15, 141, 34, 209, 34, 22, 170, 129, 207, 16, 93, 240, 114, 82, 242, 100, 189, 202, 105, 226, 84, 71, 231, 128, 106, 165, 68, 106, 122, 138, 34, 27, 254, 97, 220, 88, 5, 109, 30, 131, 174, 160, 70, 194, 101, 168, 180, 227, 0, 185, 221, 179, 106, 38, 139, 37, 93, 148, 141, 177, 248, 92, 137, 42, 106, 191, 213, 166, 250, 207, 255, 108, 105, 154, 82, 234, 196, 116, 218, 124, 106, 19, 154, 171, 199, 252, 133, 11, 7, 144, 21, 95, 235, 56, 116, 21, 138, 160, 139, 104, 148, 58, 195, 24, 213, 49, 97, 170, 9, 121, 185, 118, 204, 74, 204, 28, 127, 121, 236, 212, 224, 27, 38, 47, 138, 59, 45, 163, 152, 214, 233, 103, 82, 165, 111, 222, 57, 29, 124, 165, 87, 85, 234, 223, 53, 195, 245, 10, 161, 79, 250, 65, 112, 216, 143, 0, 127, 57, 132, 60, 90, 162],
    “hash”: [83, 95, 218, 183, 93, 132, 11, 85, 90, 118, 204, 64, 28, 31, 213, 175, 250, 17, 182, 51, 21, 126, 211, 103, 15, 118, 142, 235, 2, 26, 250, 74]
    }

    The key is a positive integer less than 100.

    How long will it take you to brute-force that?

    Wael April 1, 2016 11:38 PM

    @Lawrence D’Oliveiro,

    Here’s an example encrypted message:

    I don’t know. Give me some time.

    How long will it take you to brute-force that?

    You are asking me to find the work factor (which I presume is close to the AES-128 work factor) plus the time delay, which should be a constant multiplier over and above AES-128! Correct? I’ll also have to play with the algorithm you have. I bet @Anura is already working on it — you probably changed his weekend plans!

    Niko April 1, 2016 11:57 PM

    @assume nothing

    That link seems like a journalist searching for click bait. If John Ehrlichman really told a reporter that, why wait 17 years after his death to publish the story?

    WhiskersInMenlo April 2, 2016 12:26 AM

    @Lawrence D’Oliveiro,
    Critiques welcome

    A bad key should return a block or stream of rubbish in the same time that it takes to return the correct answer.

    With previously compressed and/or encrypted data, identifying a correct answer requires yet another near-impossibly long computation.

    Double encryption is not necessarily more difficult to analyse, but brute force could be blunted. Even ROT13 complicates the validation of a correct key.

    Brute force suffers from a need to validate each try. Analysis should reduce the number of tries to validate.

    Additional critiques welcome.
    </novice comment>

    Lawrence D’Oliveiro April 2, 2016 12:36 AM

    @WhiskersInMenlo

    How do you tell how long it is going to take to return the correct answer?

    Thoth April 2, 2016 1:56 AM

    @Lawrence D’Oliveiro
    The opening statement of your README claims that by presenting a cryptographic key to a cipher, you can quickly tell whether the result produced from that key and cipher is right.

    How do you come to this conclusion? What mechanisms allow you to determine that the key presented is correct? Let’s assume the plaintext content and nature are unknown and it’s the first time you are using a supposed key.

    Usually message integrity is checked via a cryptographic checksum that accompanies the message, but let’s assume no checksums whatsoever were present in the first place.

    What enables you to distinguish whether the plaintext a key produces from a ciphertext is real or false?

    Clive Robinson April 2, 2016 6:29 AM

    @ Jacob,

    When you are a target, there is no “going dark”

    Yup, did you notice the bit about the “alleged” use of encryption?

    As most here know, either there is encrypted traffic that can be identified or there is not. I could allege in court that Bruce has a psychic connection to the British physicist Brian Cox, and thus has the inside track on all things quantum, but unless I offered credible evidence I would quite rightly be laughed out of court…

    Clive Robinson April 2, 2016 7:18 AM

    @ Lawrence D’Oliveiro,

    How long will it take you to brute-force that?

    It rather depends on whether you can “short cut” the search or not; your clue that “the key is a positive integer less than 100” might help considerably.

    As we know from password-cracking competitions on other systems, the “brute-force” metric is a high-water mark that real attacks fall way, way short of.

    I don’t know the exact algorithms they use, but I suspect they are based on “known human failings”. It is the weakness of the human brain that makes such attacks possible, so you would have to take that into consideration in your analysis.

    Jacob April 2, 2016 7:52 AM

    Mr. Clapper is very concerned about the public’s lack of trust and its misunderstanding of how the intelligence community supports the US Constitution and citizens’ freedom while preserving the privacy of the American people. This cognitive dissonance must have come about for the following reasons:

    • “New and persistent public narratives about intelligence activities based on unauthorized disclosures that often lack context and reflect an incomplete or erroneous understanding of the IC and its governance framework.”
    • “Many of the documents that the IC releases to the public are highly technical and lack the context necessary for clarity and broader public understanding.”

    To counter such ignorant attitudes among the public, DNI Clapper has established a transparency program. For details, please head over to

    http://icontherecord.tumblr.com/transparency/implementation-plan-2015

    and please try not to laugh.

    Clapper's clacker April 2, 2016 8:47 AM

    @Jacob

    Everything became crystal clear (and their agenda very transparent) when we witnessed the “Collect It All” slide. That beautiful image captured everything we needed to know in a nutshell – the whistleblower transparency program is the only thing required. Not more useless reports, committees, ‘privacy’ officers and PR campaigns to try and undo massive damage to the NSA’s image.

    One brave man provided more honesty than decades of drivel from those in the crapper mold ever could.

    The IC plan you linked has lots of pretty words – ‘principles, transparency, oversight, responsibilities, public interest, open government, compliance’ etc. In other words, it is propaganda writ large given the light shining on nefarious IC activities.

    Don’t forget Crapper is the same man who, when asked by Oregon Democratic Senator Ron Wyden whether the NSA collected “any type of data at all on millions or hundreds of millions of Americans?”, replied “no.”

    This is Crapper’s hallmark characteristic – classic doublethink.
    He routinely holds two mutually opposing views at the same time (secrecy = transparency; collecting data in bulk on millions of private citizens = respecting privacy).

    The only words we should be hearing from this shit-spouter are whether he is going to plead guilty or not guilty to misleading Congress.

    Tinker, tinker April 2, 2016 9:33 AM

    @Jacob, Clapper’s increasing transparency all right. The “intelligence community” is propagating its CIDT methods down into local law enforcement, where perv cops perform them in public for extra degradation and mental suffering.

    https://www.washingtonpost.com/news/the-watch/wp/2016/04/01/video-shows-white-cops-performing-roadside-cavity-search-of-black-man/

    Security analysis is pointless in this country unless it’s premised on human security, because the real threat is this state. This state is a criminal enterprise shored up by a purpose-built Mukhabarat. It has to be demolished and replaced. Recourse to rebellion and R2P Pillar 3 apply.

    LarryG April 2, 2016 11:17 AM

    @Clapper’s clacker said, “Everything became crystal clear (and their agenda very transparent) when we witnessed the “Collect It All” slide.”

    Thus it became a red/blue question, as prescribed by the doctor. In our universally proxied healthcare system, the patients can choose not to take the pill but never the color of the pill, because vested interests lie in the hands of professionals who know better.

    After doublethink came doublespeak, as the wise teach “answerthink” and random people on the internet call bullsh*t.

    WhiskersInMenlo April 2, 2016 1:00 PM

    @Lawrence D’Oliveiro • April 2, 2016 12:36 AM

    Q: How do you tell how long it is going to take to return the correct answer?

    A: by test, implementation and design.

    In general, the time to operate on a block of data should not be sensitive to the data itself. If timing changes with the data, then there is an information leak worthy of being exploited, and of being plugged. An attacker would modify his tools to be sensitive to such leaks; a designer would make leaky timing difficult to exploit.

    An attacker would test to discover a normal decode time. Anything longer than that is a notable failure, and would simplify inspection of the output.

    Attaching a validating checksum is a courtesy for friendly secure communication, but it would be omitted when courtesy or integrity is not needed. Integrity is interesting: if I automate the transmission of data, I need a way to validate that the decrypted data is good data worth handing to the next process. Functions, machines and people all have a choice of being cautious with what they accept and send. Too many on the internet do not fact-check their input and thus make bad decisions. Lack of attribution is rampant. Programs have the same problem that people do: garbage in, garbage out…
    One value of strong encryption has to do with attribution and garbage.
    Consider 500 messages: One if by land two if by sea.
    and 500 more messages: Two if by land one if by sea.
    All sent anonymously and encrypted with your public key.

    Wael April 2, 2016 1:24 PM

    @Lawrence D’Oliveiro,

    The key is a positive integer less than 100.
    How long will it take you to brute-force that?

    Hmm. I tried the dumb approach of running a brute-force decryption and got nowhere, because I don’t have adequate computing power at the moment. So instead I estimated the bounds relative to a pure AES cipher.

    The table constructed with a correct key showed a maximum latency of 3.4 seconds, for a limit of 2000. The following is a sample of the output:

    limit = 1
    Success at 1
    Time taken: 0.00014s
    This is a new test from Schneier.com\n

    trying 1865
    Success at 1920
    Time taken: 3.2s
    This is a new test from Schneier.com\n

    limit = 2000
    Success at 1940
    Time taken: 3.4s
    This is a new test from Schneier.com\n

    If a single round of guessed-key decryption on a pure AES algorithm takes x seconds, a round of your algorithm will take x * y seconds, where y is the maximum expected “delay” based on the table. This “loop” delay is usually ignored when we evaluate the complexity of an algorithm — the Big “O” thingy. So for a limit of 2000, each brute-force decryption attempt would take about 3.4 seconds longer than with pure AES. If the limit is chosen high enough that a decryption takes 30 seconds, for example, then a usability issue arises.
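
    As a back-of-envelope check (the per-trial AES cost below is a made-up placeholder; the 3.4 s comes from the table above):

    aes_trial_s = 0.001       # hypothetical: one plain AES-128 trial
    wildgoose_delay_s = 3.4   # measured worst case at limit = 2000
    keys = 99                 # the stated keyspace
    print(keys * (aes_trial_s + wildgoose_delay_s))   # ~337 s for a full sweep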

    Then there is the two-step attack, where one measures the time required to generate the limit value for a large number of limit values — though I don’t know what the theoretical “practical” upper limit is. That value can be used to terminate a decryption attempt once the time threshold has elapsed. It still adds a hurdle, but I would be suspicious of a small file taking 30 minutes to decrypt! I would bust the guy on the spot 😉

    I used this script (I called it profile.sh) so as not to mess with your code (short of turning verbose on rather than leaving it as an option):

    #!/bin/bash
    # profile encryption/decryption times for limits 1..2000
    for i in $(seq 1 2000); do
        python3 wildgoose.py encrypt 90 $i < $1 | tee out-$i
        python3 wildgoose.py decrypt 90 < out-$i >> result 2>&1
        printf "\n" >> result
    done

    The input file (I called it input) contained only: “This is a new test from Schneier.com”

    and the command I used to run profiling was: ./profile.sh input

    This is by no means anything close to cryptanalysis, but rather an “attack” method, albeit a pretty rudimentary one. As @Clive Robinson mentioned, brute-force figures are theoretical maximums; more efficient methods usually exist, which would require proper cryptanalysis and digging deep into the implementation to find weaknesses. This… I don’t have the time for 😉

    More observations:

    Usability is a factor: One wouldn’t want to wait 5 minutes to get a file decrypted.

    Playing with the source code, which we expect the adversary to have access to, may reveal more information, such as lower time thresholds than the rudimentary profiling table shows.

    As for plausible deniability, I don’t think this is a robust approach, given that the adversary is expected to be aware of the algorithm. You may want to consider other use cases, such as protecting a key in an environment where hardware crypto support is lacking, perhaps as a hybrid white-box/black-box cryptography solution. You may find other applications as well.

    One more thing: generating the key in this fashion doesn’t necessarily protect against the inadvertent creation of weak keys, though I am not sure how likely that is.

    Pretty interesting work, thanks for sharing! I am not aware of having seen an idea like this before (which doesn’t mean much, as I haven’t seen a zillion other existing ideas) 🙂

    Parker April 2, 2016 1:32 PM

    Choosing a strong key would deliver far greater returns than trying to compensate for weak keys by encrypting multiple times, etc.

    Relatively little development work is being done on block ciphers anymore, for several reasons.

    Almost all cryptanalysis of block cipher applications focuses on the key. Intuitively, this seems easier. But there are weaknesses in the ciphertext itself that can be exploited. Use a common cipher like AES to encrypt large blocks of true random data and the measured entropy will actually fall. It has to.

    Here is a decent source: http://www.idquantique.com
    Use strong random data for the key as well as the IV. Don’t mess with small files; you can’t reliably measure their entropy. Use a good test suite like TestU01. You can’t just tally the distribution, but you can show that the entropy actually falls after encryption.

    There are numerous weaknesses in ciphertext. For example, keep in mind that the first half of the output is not influenced by the second half of the input.

    r April 2, 2016 2:07 PM

    @Parker,

    “Choosing a strong key would deliver far greater returns than trying to compensate for weak keys by encrypting multiple times, etc.”

    Choosing a strong key for a weak or backdoored algorithm is a good reason to choose at least two ciphers and two keys. I wouldn’t go overboard, but I liken it to insurance, especially given the points Wael and Thoth are making about incoherent output.

    I’m by no means trained or educated like these guys, though.

    Clive Robinson April 2, 2016 4:45 PM

    UK’s FBI gagging RIPA key case

    The UK’s National Crime Agency, the equivalent of the US FBI, is using RIPA to try to force a defendant facing extradition to the US to reveal encryption keys.

    Like the FBI, the NCA is trying to get a court precedent, but mindful of the backlash the FBI got from going public, the NCA is gagging the defence and denying them the right to solicit public opinion to help their case.

    https://theintercept.com/2016/04/01/british-authorities-demand-encryption-keys-in-closely-watched-case/

    Lawrence D’Oliveiro April 2, 2016 5:05 PM

    @Clive Robinson

    All of encryption is based on problems which are not provably difficult to solve, only ones that, as far as we know, are difficult to solve.

    In the example wildgoose implementation, I use the Collatz function to generate arbitrarily long sequences, which are hashed to produce the inner encryption/decryption key. I am not aware that anyone has come up with a way of short-cutting evaluation of that function.
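
    Evaluating the function is trivial; it is predicting where a trajectory ends, without walking it, that has no known shortcut. A minimal illustration:

    def collatz_length(n):
        # number of steps to reach 1; no closed form is known
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_length(27))   # 111: even small inputs wander a long way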

    As you point out, the weakest link in any security system is always the human factor. What I am trying to suggest is that wildgoose could offer another way to obfuscate that weakness.

    Lawrence D’Oliveiro April 2, 2016 5:15 PM

    @Wael

    One wouldn’t want to wait 5 minutes to get a file decrypted.

    Depending on the secret, maybe you would. You could start the decryption when you come in to work in the morning, get a cup of coffee, and when you got back to your desk, it would be done.

    I would be suspicious that a decryption operation of a small file takes 30 minutes! I would bust the guy on the spot 😉

    Guess how long it took me to decrypt the example I posted…

    tyr April 2, 2016 5:41 PM

    @Jacob

    A wholly transparent IC community would be quite a sight. Given the historical track record of what has been seen, I’m sure the concept is welcomed with open arms by the world’s IC folk.

    Just think of how many Nuremberg-style indictments it would generate, along with the odd treasonous-behavior case.

    It is highly unlikely it would affect many of the rank-and-file clerking types, but the movers and shakers would have to do some heavy interpretation to avoid prison time.

    “I vas only following orders” is no longer a valid defense option.

    Wael April 2, 2016 5:51 PM

    @Lawrence D’Oliveiro,

    Guess how long it took me to decrypt the example I posted…

    I tried all iterations from 1 to 99, though not in order. I ran about 30 concurrent terminals on an Intel i5 quad core — not the most powerful, but it’s what’s available at the moment. So the timing may not be accurate. If I were to guess, based on the scripts I ran, it must have taken you around 20 minutes? If it took you significantly less time, then I did something wrong. What’s the key you chose? I can then check how long it took.

    I left the first set of iterations running overnight (that’s from 3:00 AM to 7:00 AM for me). The last two sets I let run for about 45 minutes, then gave up, as I thought there must be something wrong. I expected the correct key to decrypt the sample ciphertext in less than a minute.

    Wael April 2, 2016 6:03 PM

    @Lawrence D’Oliveiro,

    Or you can tell me the time it took to decrypt and I’ll adjust the script and see where it takes me… Just in case the text you encrypted was “naughty” 😉

    Camp No means Yes April 2, 2016 6:26 PM

    @tyr, it so happens the Convention Against Torture treaty body meets starting April 18 and the Committee Against Torture follows up on selected severe issues in US compliance:

    12(a): Carry out prompt, impartial and effective investigations wherever there is reasonable ground to believe that an act of torture and ill-treatment has been committed in any territory under its jurisdiction, especially in those cases resulting in death in custody;

    14(c): Investigate allegations of detainee abuse, including torture and ill-treatment, appropriately prosecute those responsible, and ensure effective redress for victims;

    17: The State party should ensure that interrogation methods contrary to the provisions of the Convention are not used under any circumstances. The Committee urges the State party to review Appendix M of Army Field Manual No. 2-22.3 in the light of its obligations under the Convention. In particular, the State party should abolish the provision regarding the “physical separation technique” which states that “use of separation must not preclude the detainee getting four hours of continued sleep every 24 hours”. Such provision, applicable over an initial period of 30 days, which may be extended upon due approval, amounts to sleep deprivation — a form of ill-treatment — and is unrelated to the aim of the “physical separation technique”, which is preventing communication among detainees. The State party should ensure the needs of detainees in terms of sleep time and sleeping accommodation provided for prisoners are in conformity with the requirements of rule 10 of the Standard Minimum Rules for the Treatment of Prisoners.

    Equally, the State party should abolish sensory deprivation under the “field expedient separation technique”, which is aimed at prolonging the shock of capture, by using goggles or blindfolds and earmuffs on detainees in order to generate the perception of separation. Based on recent scientific findings, sensory deprivation for long durations has a high probability of creating a psychotic-like state in the detainee, which raises concerns of torture and ill-treatment.

    26(c): Provide effective remedies and rehabilitation to the [Chicago torture] victims;

    26(d): Provide redress for Chicago Police Department torture survivors by supporting the passage of the ordinance entitled Reparations for the Chicago Police Torture Survivors.

    A president with balls could decimate the CIA with that. Like Carter did.

    Thoth April 2, 2016 6:37 PM

    @Wael
    I don’t think many of those interrogators have the patience for games. Once they suspect the slightest trick to delay them (long key-stretching rounds and schemes designed to make their life difficult) while it is very obvious that their target knows the decryption key and is playing mind games, they would be more than happy to play ball and use rubber-hose cryptanalysis, and soon their target would start chirping out his/her darkest secrets 😀.

    A better idea is to split the key between hardware (assuming properly tamper-resistant hardware like a smart card or TPM) and a user key. After use, the hardware key can be zeroized; then even if the user starts to sing out his user PIN and key, without the destroyed hardware key it would be pointless. The interrogators might still proceed with the same rubber-hosing to vent their frustration, but the secrets remain safely destroyed (without the HW keys).
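
    A minimal sketch of that split (names are illustrative; a real smart card does the combination internally). The working key needs both halves, so wiping the hardware half makes any confessed PIN useless:

    import hashlib

    def working_key(hw_secret: bytes, user_pin: str) -> bytes:
        # PBKDF2 stands in for whatever the card does internally
        return hashlib.pbkdf2_hmac("sha256", user_pin.encode(), hw_secret, 100_000)

    def zeroize(buf: bytearray) -> None:
        for i in range(len(buf)):
            buf[i] = 0   # after this, no PIN can recreate the working key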

    The excuse that HW is expensive and hard to use is quite untrue, as smart cards with security functions are readily available; I have bought myself quite a nice stash of them, with different types of processors, for the fun of playing with them. The issue of backdoors in these smart cards can be somewhat mitigated by obfuscating the applet code to a certain extent.

    Once the decision to rubber-hose you is made, it is hard to escape ;P

    CallMeLateForSupper April 2, 2016 6:50 PM

    Gee, ya think?

    “Rise of Ad Blocking Is the Ad Industry’s Fault, Says Outgoing FTC Commissioner”

    “In addition to privacy, ads have more recently become a major security issue. So-called malvertising campaigns now strike with alarming frequency, exposing tens of millions of users to malware by infecting the very networks advertisers use to convince us to buy shit.”

    https://motherboard.vice.com/read/ad-industry-is-to-blame-for-ad-blockers-outgoing-ftc-commissioner-says-julie-brill

    r April 2, 2016 7:05 PM

    @Lawrence,

    “Depending on the secret, maybe you would. You could start the decryption when you come in to work in the morning, get a cup of coffee, and when you got back to your desk, it would be done.”

    Good luck selling that to someone holding shares of Reynolds Metals.

    Wael April 2, 2016 7:10 PM

    @Thoth,

    use rubber-hose cryptanalysis, and soon their target would start chirping out his/her darkest secrets 😀.

    Sounds fair! Get dark on us and we’ll extract your darkest secrets.

    A better idea is to …

    There is no better “technical” idea. You need a good lawyer at this point or you’re screwed.

    Lawrence D’Oliveiro April 2, 2016 7:10 PM

    @Wael (and everybody else)

    The key is the integer 9. And it took 2000 seconds to decrypt on my Core i7.

    So @Wael, you tried all integers from 1 to 99? So you went right past the correct key without noticing…

    Lawrence D’Oliveiro April 2, 2016 7:15 PM

    @Thoth re rubber-hose decryption

    But surely most of the point of having encrypted files on a computer is that you cannot keep all those secrets in your head. So the attacker needs those decryption keys, no way around that.

    If it takes half an hour to decrypt a secret, no amount of application of the rubber hose is going to change that.

    r April 2, 2016 7:24 PM

    @Lawrence,

    Now you’re kind of making me think of bcrypt, is it?

    Maybe you have a point about brute forcing, but I still don’t think walking away from a secure workstation with sensitive decryptions pending is something I would advocate.

    Buck April 2, 2016 7:28 PM

    @Thoth

    The excuse of HW being expensive

    Is that really an excuse though..?

    Encryption Is a Luxury

    The people that most need privacy often can’t afford the smartphones that provide it.

    “When encryption remains a luxury feature, those who are the most surveilled in our society are using devices that protect them the least from that surveillance,” said Christopher Soghoian, the principal technologist at the American Civil Liberties Union. He calls this the “digital-security divide.”

    Maybe… @Thoth (continued):

    and hard to use is quite untrue

    Maybe not, but time is money, and properly learning encryption + OPSEC is almost sure to cost you a good bit of your available time!

    Wael April 2, 2016 7:29 PM

    @Lawrence D’Oliveiro,

    you tried all integers from 1 to 99? So you went right past the correct key without noticing…

    It was running concurrently with 30+ terminals; it’s the one set I stopped after 45 minutes. Let’s say the 2000 seconds on your i7 translates to 3000 seconds on my i5. Multiply that by a load factor of 10, and the expected correct decryption should have taken around 30,000 seconds on the “loaded” i5. That’s roughly 8 hours!!! I never had a chance to catch it, because I never expected you to set such a high limit. What limit did you use, by the way?

    Wael April 2, 2016 7:35 PM

    @Thoth,

    There is no better “technical” idea.

    I take that back. There are better technical/operational ideas. I discussed them with @Dirk Praet back in 2012 or ’13…

    Lawrence D’Oliveiro April 2, 2016 7:44 PM

    @Wael

    The limit was 50000.

    One property of my algorithm is that decryption is always going to be slower than encryption. Of course, the encryption took only a few minutes…

    I have updated the README to try to give a better explanation. I have introduced the term “encryption with incomplete-key decryption”, which I think makes more sense.

    Lawrence D’Oliveiro April 2, 2016 8:19 PM

    @r re bcrypt:

    bcrypt is designed to be equally slow with both correct and incorrect inputs. A better comparison would be a password function that was slow with correct input, but even slower with incorrect input. To the point where it could never conclusively say that the input was incorrect.
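
    A toy contrast, with hashlib stand-ins rather than real bcrypt: the fixed-cost check answers yes or no in the same time either way, while the unbounded check can only ever answer “yes”.

    import hashlib, itertools

    def kdf(pw, rounds):
        h = pw.encode()
        for _ in range(rounds):
            h = hashlib.sha256(h).digest()
        return h

    def check_fixed(pw, stored, rounds=100_000):
        return kdf(pw, rounds) == stored   # same cost for right and wrong pw

    def check_unbounded(pw, stored):
        h = pw.encode()
        for _ in itertools.count():        # right pw: stops at its hidden round count
            h = hashlib.sha256(h).digest()
            if h == stored:
                return True                # wrong pw: never returns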

    Wael April 2, 2016 8:28 PM

    @Lawrence D’Oliveiro,

    you tried all integers from 1 to 99? So you went right past the correct key without noticing…

    As you can see from the script I shared, I maxed it out at 2000, and I thought that was excessive. I was off by a factor of 25.

    Now suppose you’re in an airport in a foreign country and you were asked to decrypt this sample file. This is the expected conversation:

    Officer: We need to see the content of this file.
    Lawrence: Ummm, it’s encrypted

    Officer: You’ll need to decrypt it before we let you go
    Lawrence: Ummm I’m not sure if I remember the key

    Officer: Give it a try
    Lawrence: Sure thing, officer… clickity clickity click click… There, I typed it

    Officer: It’s been 15 minutes and it’s still churning! Should have finished by now!
    Lawrence: Well, it’s a lousy crypto algorithm. It takes forever!

    Officer: No worries! We got all the time!

    Two hours pass by…

    Officer: This is a 520 byte file! Two hours to decrypt?
    Lawrence: Perhaps I put the wrong key. Lemme try another one

    Officer: I’m running out of patience…
    Lawrence: I feel your pain, officer! Clickity click click. There. This must be it

    Another hour passes by…

    Officer talking to a colleague: OK, bring the frickin’ rubber hose to jog his memory.
    Lawrence: Now let’s be civilized, officer! I honestly don’t know it…
    …..

    How do you see this conversation concluding in your favor? All you have done is buy time in custody. Had you used a plain vanilla AES algorithm, the officer might have found out sooner that the key entered was incorrect. How are you going to use the time you bought to your advantage? It’s possible this extra time can hurt you, especially if they decide to waterboard you until the thing decrypts! You just bought yourself an extra 20 minutes of agony!

    Thoth April 2, 2016 8:49 PM

    @Wael
    A lawyer is a luxury 🙂. We cannot assume all governments will be kind enough to give you access to a lawyer when you ask for one. In places around my region, the Middle East, or even the Far East, calling for a lawyer might not be a very good idea, as it might piss off your interrogators even further.

    Singapore’s law (my region) allows detention under the Internal Security Act without reason or warrant for about 72 hours (if my memory is not faulty), within which time you may be subjected to whatever they throw at you (go imagine). Oh, and anyone possessing a “warrant card” issued by the Homeland Department/Police may detain, search and remove items and people at will, without needing written approval or a warrant, because the “warrant card” is a wildcard warrant, issued widely even among security contractors 🙂. Good luck trying to worm your way out here.

    The Constitution of a country is one thing; the practical situation on the ground is another.

    @Lawrence D’Oliveiro
    Rubber-hosing does make a difference if the victim knows the key. A little more torture would make them start talking. Once they spit out the correct keys and settings, the decryption will likely go much more smoothly for the interrogators, and once the victim’s usefulness is over, they can dispose of him/her. If he/she doesn’t talk, just ramp up the torture, unless he/she is damn willing to die with the secrets.

    @Buck
    I am talking about smart cards, not iPhones. OPSEC and security must be learnt to some degree; they don’t come free.

    Lawrence D’Oliveiro April 2, 2016 9:27 PM

    @Wael

    How do you see this conversation concluding in your favor?

    They didn’t get the key.

    All you have done is buy time in custody.

    That may be enough. And if they lose patience and resort to more extreme measures, they will never get it.

    Sometimes it is helpful if the “authorities” know this in advance.

    Lawrence D’Oliveiro April 2, 2016 9:29 PM

    @Thoth

    Rubber-hosing does make a difference if the victim knows the key.

    Only if the interrogator can be sure the victim knows the key. This is an algorithm that will only decrypt properly if the user has patience, which those prone to “enhanced” interrogation techniques are not known for.

    Wael April 2, 2016 10:05 PM

    @Lawrence D’Oliveiro, @Thoth, @Figureitout,

    Only if the interrogator can be sure the victim knows the key.

    The inverse of this is the key to the solution. Just replace “knows the key” with “doesn’t know the key” and find an implementation. Then you can avoid the agony and protect the data.

    Lawrence D’Oliveiro April 2, 2016 10:24 PM

    @Wael

    Solution to what? Beating up the wrong suspect will not get them any closer to decrypting the secret data. Which is the whole point of encryption, isn’t it—keeping that data secret?

    Bad Cops April 2, 2016 11:08 PM

    @niko – open secret?

    Smoke and Mirrors
    The War on Drugs and the Politics of Failure
    By Dan Baum

    Chapter One: A Question of Discrimination

    A Practical Matter: 1969
    [President Nixon] emphasized that you have to face the fact that the whole problem is really the blacks. The key is to devise a system that recognizes this while not appearing to. — H. R. Haldeman to his diary

    the story continues –

    http://www.washingtonpost.com/wp-srv/style/longterm/books/chap1/smoke.htm

    not_a_spook April 2, 2016 11:12 PM

    Wael droppin knowledge! Keep at it @Lawrence, good ideas, good job getting expert feedback.

    Lawrence D’Oliveiro April 2, 2016 11:28 PM

    By the way, I had to make another backward-incompatible fix to the bit-packing code. This means the current Git version will not be able to decrypt files encrypted with previous versions.

    I have added tags to the Git repository to try to mark the points where such changes have been made.

    Wael April 3, 2016 12:55 AM

    Gentlemen, ianf,

    I’m by no means trained or educated.

    That reminds me! Haven’t heard from you in a while @ianf! Hope all is well, bud! 🙂

    @r,

    …trained or educated like these guys though.

    Oh, don’t be too humble now! You made me blush and @Thoth flush!

    Thoth April 3, 2016 3:05 AM

    @Wael
    Flushed by the worries of constant insecurity in supposedly secure products…

    Recently I was looking into the security processors in my self-encrypting thumb drives (Apricorn Aegis, iStorage datAshur, Kingston DataTraveler, LOK-IT Secure Flash…). Surprise! They all use the same design, and their FIPS documents are almost identical, because they all share ClevX’s DataLock technology, which uses a non-tamper-resistant crypto processor and still achieves FIPS 140-2 Level 3. I was set to purchase one of them for a test run, but when I read about the use of a non-secure PIC microcontroller (@Figureitout’s and @Clive Robinson’s favourite PIC microcontroller?) as the crypto security, I dropped the idea and decided to proceed with the design of my GroggyBox project.

    Cryptanalysis conundrum April 3, 2016 6:37 AM

    @experts & Lawrence

    Isn’t the answer not to rely on a silver-bullet encryption method – which only delays eventual brute-forcing via the rubber-hose method – but instead to use the enhanced security arising from crypto + steganography, giving plausible deniability?

    We know multiple jurisdictions can now compel the encryption keys to your foolproof TrueCrypt vault (the UK and Australia among others), but there is nothing on the books stating that you must identify the relevant container (or better, hybrid container) files for a hidden steg message/data. Nor are you required to admit that something is hidden in the first place (compelled speech and all that).

    So, spooks could compel the encryption key – potentially beating you into submission for the initial passphrase – but then be faced with:

    1. Distributed steganographic data storage hidden among tens of thousands of ‘junk’ image, audio and data files. Dummy files are included as a plausible excuse for the encryption – helping to bring a swift end to the electroshocks 😉

    E.g. https://csit.am/2009/proceedings/3ITCT/20.pdf

    Or

    2. Hide encrypted text with steg methods among hundreds of thousands of random audio, image and other files. Instead of hiding the complete encrypted text in one solitary image or other file, we hide only a part of the encrypted message. Further, they can’t be sure whether audio, image or other methods have been used (if at all).

    You can plausibly claim you didn’t want BitTorrent evidence of hundreds of thousands of illegal MP3s, for example.

    http://arxiv.org/ftp/arxiv/papers/1009/1009.2826.pdf

    Thus:

    The unhidden part of the encrypted message is converted into two secret keys. They are faced with a huge number of combinations to work through if they suspect steg has been used. In this system, to get the original message one must know, along with the keys for the cryptography and the steganography, two extra keys and the reverse of the key-generation process.

    Or

    3. Simple use of decoy data, e.g. thousands of spreadsheets full of figures. The data could be the winning powerball/horse-race numbers going back decades, exhibiting your propensity to gamble, or something similar.

    The critical numbers referring to the secret message/account/code etc. are buried in a handful of row/column locations known only to the target and memorized with a simple mnemonic. The sheer volume of data prevents brute-forcing. I know one person who used this method to bury their bank account passphrase. (A toy version of this trick is sketched below.)
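
    A toy version of the trick (coordinates, digits and the file name are all invented for illustration):

    import csv, random

    SECRET_CELLS = [(17, 3), (244, 1), (908, 5)]   # memorized via a mnemonic
    SECRET_PARTS = ["4711", "0815", "2342"]

    # ~8,000 cells of plausible noise, three of which carry the secret
    rows = [["%04d" % random.randint(0, 9999) for _ in range(8)]
            for _ in range(1000)]
    for (r, c), part in zip(SECRET_CELLS, SECRET_PARTS):
        rows[r][c] = part

    with open("powerball_history.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)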

    Anyhow, I bow to your superior knowledge on this issue. No doubt superior steg methods have arisen in recent years(?). I don’t see how the Feds can ever beat this method, given they cannot claim there is hidden data, or feasibly prosecute, if the data is:

    a) in decrypted plain view and
    b) you have reasonable grounds to hold the said data

    Cloudflare April 3, 2016 6:55 AM

    Cloudflare = enemy of Tor, and the cause of a second-rate browsing experience for many.

    Tor users should note, though, that the solution is simply to use proxies (Startpage and similar), which will get around most captchas without too many problems.

    https://blog.torproject.org/blog/trouble-cloudflare

    Wednesday, CloudFlare blogged that 94% of the requests it sees from Tor are “malicious.” We find that unlikely, and we’ve asked CloudFlare to provide justification to back up this claim. We suspect this figure is based on a flawed methodology by which CloudFlare labels all traffic from an IP address that has ever sent spam as “malicious.” Tor IP addresses are conduits for millions of people who are then blocked from reaching websites under CloudFlare’s system.

    Wael April 3, 2016 8:11 AM

    @Cryptanalysis conundrum,

    Isn’t the answer not relying on a silver bullet encryption method…

    Use the key-custodian method: I have part of the key, and two other people who never travel with me have the other parts. Additionally, there is an onsite server that has to grant me access based on some metrics, including verified geo-location. “Here is my third of the key; do what you like with it.” Your software needs to present the right GUI to corroborate your story. Other methods exist. As for steganography, it’s sometimes useful, but tools to detect steganography do exist. (A toy version of the key split is sketched below.)
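
    A toy three-custodian split along those lines (XOR shares: any two are useless without the third):

    import os

    def split3(key: bytes):
        a, b = os.urandom(len(key)), os.urandom(len(key))
        c = bytes(x ^ y ^ z for x, y, z in zip(key, a, b))
        return a, b, c

    def join3(a: bytes, b: bytes, c: bytes) -> bytes:
        return bytes(x ^ y ^ z for x, y, z in zip(a, b, c))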

    I know one person that used this method to bury their bank account passphrase.

    That person wouldn’t happen to be you? I used similar tactics to obfuscate some passwords, as I don’t trust password wallets, especially the ones that require us to log in to the cloud just so our passwords are available on all devices. Yup! That’s a nice centralized location which one may be forced to decrypt under pressure — a central point of failure, so to speak.

    Thoth April 3, 2016 8:29 AM

    @Cryptanalysis conundrum
    As long as, under pressure and pain, you do not spill the beans and reveal the existence of some hidden volume or whatever secret security measures you used, all good 🙂.

    Marcos El Malo April 3, 2016 9:00 AM

    Rubber-hose Cryptanalysis Countermeasures

    Why couldn’t safeguards be built into the hardware to lose/scramble the sought-after data in response to an attempt at rubber-hose cryptanalysis (somewhat like the iPhone passcode safeguard)? A large capacitor could be embedded in the hardware and set to trigger on the right signal – perhaps pain reaching a certain level. If not a capacitor, some mechanism to deliver sufficient kinetic energy?

    An added benefit of this scheme is that, in a business setting, it might motivate employees to follow security protocols/policies.

    Figureitout April 3, 2016 11:14 AM

    Lawrence D’Oliveiro
    –Cool idea mate (and not just a proposal lol, something we can build and see). The first obvious concern jumping out at me is the delay being some kind of clue as to the key being right or wrong when confronted with ciphertext. The solution would be for crypto to always be slow, but that’s annoying and would get dismissed outright if everything had a ton of lag added on… I think it’s a user decision then. Still neat.

    Thoth
    –PIC isn’t my fave; it would either be AVR or MC68k-based chips (their asm is very nice). When you’ve got chips with easily accessible datasheets, documentation galore, multiple toolchains and ways to program them, and tons of existing code that’s been somewhat battle-tested for bugs, so that hobbyists can tweak it and have a functioning project in a snap, that factors into personal security decisions. When it’s impossible not to short out pins just to read a voltage on an epoxied-up black-box tiny chip, that’s a major personal concern.

    And as we can see here: http://www.bunniestudios.com/blog/?p=3554 there’s just another MCU inside the smartcards (if it’s an 8051 CPU then there’s not much separation between code and data, just pointers). I’m failing to see a significant difference, and I find it amusing that you write off an MCU as insecure yet use one anyway to point to where the encrypted data goes… error: does not compute. Looking for another pointless argument?

    albert April 3, 2016 11:17 AM

    @Desmond Brennan,

    I’ve been using country-specific searches for years (e.g.: letras bossa nova site:br), but I never realized the depth of possibilities in dorking.

    Thanks for the info, scary though it may be. A powerful tool, for good ….or evil….
    . .. . .. — ….

    Thoth April 3, 2016 11:41 AM

    @Marcos El Malo
    The description of using power-backed SRAM memory is exactly what secure chips do for more robust security applications like HSMs, which may cost quite a bit and put a hole in your pocket.

    Most tamper-resistant (not just tamper-evident) hardware security chips have a tamper passivation mesh on the outermost metal layer of the IC, connected to the security monitor and usually linked to the SRAM containing the crypto keys, so that in the event of a breach the chip loses the keys and wipes them. There are attack methods, like using FIB workstations to cut into the tamper mesh and whatnot, and this is where additional light sensors and proprietary mechanisms are placed into a black-box IC chip to protect it from tampering and to zeroize the keys as quickly as possible upon detecting a tamper event.

    Most HSMs have power-backed crypto key slots: if the power is disconnected, the tamper-protected SRAM loses the keys almost immediately. Most power-backed secure key-storage SRAMs are rather small (some are only 256 bits of flip-flops, storing a single 256-bit symmetric master key); due to the small size they wipe very easily and quickly, and the small surface area makes it easy to stack layers of passivation mesh and electrical and light-spectrum sensors, and to scramble the wiring.

    There are many manufacturers of tamper-resistant chips, and recently I dug out quite a few suppliers. Some have the usual FIPS 140-2 certification (though the certification doesn’t mean they are very secure or good, beyond meeting some minimal baseline industry requirements). These chips (from NXP/Freescale, Maxim Integrated, etc.) provide a good amount of battery-backed, tamper-protected SRAM secure key store, plus the usual tamper-resistance mechanisms and traps suitable for the security processor of an HSM, but be prepared to sign tonnes of NDAs and pay up just for that higher level of security assurance. Tamper resistance and scrambled wiring are not their only defenses: some of these chips implement the ARM TrustZone architecture as well, to protect code execution.

    So far I have not heard of anyone decapping an Apple iPhone chip, and I’d be very interested to know whether Apple actually took steps to incorporate tamper resistance into its chips (which I would encourage). A possible guess is that by adding more security to the chip you effectively attract the attention of the authorities, who are evidently very unhappy with the rampant spread of secure technologies; the classification of cryptographic equipment under export controls is still strongly enforced globally, as strong security is considered a sort of “dangerous munition” by authorities worldwide.

    The open-hardware USB Armory (the Freescale ARM chip is not open source; just the PCB layout is open) is the closest you can get to an HSM, as the Freescale chip contains TrustZone and is also tamper-resistant. The ARM chip on board also has a secure RTC clock to detect power-line glitching and other glitching attacks.

    The poor man’s option would be smart cards, as they are the cheapest to obtain, with some going for a couple of dollars per card with AES-256 and RSA-4096 keys and programmable to the well-known JavaCard standard. The drawback is that most smart cards are expected to draw power from a host computer and carry no power supply of their own, which prevents them from implementing secure power-backed SRAM key storage, although they do have the usual tamper passivation shields, more basic sensors, and logic scrambling. Some have more advanced proprietary features, like self-encrypting memory and encrypted calculation on instructions.

    The problem is that most open-source projects and products meant for the general public are created with a mindset of backwards compatibility, or of the convenience of not needing to carry a security token (e.g. a smart card) to do your security stuff. It is much simpler and cheaper for anybody to code up some Java or Python encryption script and post it online, whereas if you want to do more serious security with tamper-resistant hardware, you need to spend time understanding what you are trying to do, and the cost and effort are something most code-cutters are not willing to invest.

    There are more projects being created to integrate hardware security and make it cheaper and easier for the masses to accept (e.g. Mooltipass, Ledger Blue and JackPair), but the poisonous mindset that government agencies try to spread (that encryption and security are only for crooks and criminals) and the tightening of encryption and security regulation may have their impact.

    It is paramount for projects to start moving toward integrating secure hardware and software to increase security assurance. The software-only projects we know today are all vulnerable to many secret-key exfiltration attack vectors, exploitable by more skilled crooks and criminals, not just well-funded nation states.

    Links:
    https://www.indiegogo.com/projects/mooltipass-open-source-offline-password-keeper#/
    https://www.kickstarter.com/projects/620001568/jackpair-safeguard-your-phone-conversation
    https://www.ledgerwallet.com/products/9
    http://cache.nxp.com/files/32bit/doc/data_sheet/IMX53CEC.pdf?pspll=1
    https://para.maximintegrated.com/en/search.mvp?fam=micros&1351=Yes

    Thoth April 3, 2016 11:55 AM

    @Figureitout
    Isn’t that an SD card, and not a smart card, that you linked? I think essentially the architectures available out there are either some Intel variant, an ARM/RISC variant or, in rare cases, a PowerPC variant of sorts.

    The bottom line is that relying on a single chip or technology by itself (if you want very high assurance security, CC EAL 7) isn’t, at this point in time, going to give you the security you need at that very high assurance level. The most probable method would be a mix of the Data Path security from @Nick P (e.g. data diodes and guards), the Prison technique from @Clive Robinson and the Castle technique from @Wael, all somehow combined nicely in one.

    I would think a gradual, step-by-step approach that ramps up overall security while improving ease of use and cost-effectiveness (and integrating hardware and software security along the way) is the best approach. Taking big jumps in security assurance without improving usability and acceptability on a bigger scale might end up setting personal security on a global scale a few steps back, because people are not willing to take the extra steps when new security paradigms are hard to use and cost more than expected.

    edw April 3, 2016 11:58 AM

    Hard mathematical problems as basis for new cryptographic techniques
    http://phys.org/news/2016-04-hard-mathematical-problems-basis-cryptographic.html

    RUB researchers develop new cryptographic algorithms that are based on particularly hard mathematical problems. They would be virtually unbreakable.
    […]
    “If somebody succeeded in breaking those algorithms, he would be able to solve a mathematical problem that the greatest minds in the world have been poring over for 100 or 200 years,” compares Kiltz. The mathematicians make the algorithms so efficient that they can be implemented into microdevices, such as electric garage openers.
    […]
    The algorithms are based, for example, on the hardness of the following lattice problem: imagine a lattice to have a zero point in one specific location. The challenge is to find the point where two lattice lines intersect and that is closest to zero point. In a lattice with approx. 500 dimensions, it is impossible to solve this problem efficiently.

    Nick P April 3, 2016 12:00 PM

    @ Programmers

    It’s impossible to validate an email address

    For anyone who didn’t know the real scope of the problem. I’d love to see a contest between people pushing various parsing strategies or languages to show a concise, efficient solution to this. Might be interesting.
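
    One pragmatic stance from that discussion: don’t try to fully parse RFC 5322; check the bare structure you are sure of, then confirm by actually sending mail. A sketch, not a validator:

    def plausible_email(addr: str) -> bool:
        local, at, domain = addr.rpartition("@")
        return bool(local) and bool(at) and "." in domain and " " not in addr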

    @ Figureitout

    re smartcard MCU’s

    The difference between regular MCUs and security-focused MCUs is substantial if it’s one of the major vendors; with small-time smartcards, who knows. I’d be skeptical and investigate their specific details. As far as EAL5-6 vendors go, here’s the document with their security considerations:

    https://www.commoncriteriaportal.org/files/ppfiles/ssvgpp01.pdf

    Filter out the purely red-tape stuff as you skim it. You’ll find they consider a lot of attack points in software, hardware, protocols, fabrication, and so on. Then they design their MCUs and chips to counter what they can in a given cost bracket. One of the better ones is Infineon’s SLE:

    https://media.digikey.com/pdf/Data%20Sheets/Infineon%20PDFs/SLE%2088CF4000P.pdf

    It cites some specific features for security-critical development that your average MCU won’t have. The memory management tends to be a differentiator, as the smartcard MCUs (a) have it and (b) ditch complex MMUs in favor of simple, rigorously built MPUs. They also have TRNGs, and some runtimes built to be as bulletproof as possible. So there are considerable differences. Stuff worth copying in future designs.

    Parker April 3, 2016 12:07 PM

    The Infineon SLE is used in (physical) access control terminals, card readers, etc. These systems all have weak links. For example, see HID in the news last week.

    r April 3, 2016 12:44 PM

    @nick p,

    Holy sh**, talk about feature cramming.

    Please tell me that address line feature set was only in the public RFC and not a finalized STD.

    JG4 April 3, 2016 1:06 PM

    @edw

    Isn’t that lattice problem precisely the type that quantum computing will solve with unprecedented efficiency?

    There has been some traffic in recent months suggesting that quantum computing either is here or will be here soon enough to make a difference.

    Clive Robinson April 3, 2016 1:44 PM

    @ Buck,

    Encryption Is a Luxury – The people that most need privacy often can’t afford the smartphones that provide it.

    Whilst the price of equipment is always a concern, the problem is a bit more subtle, as it’s KeyMan that’s the real problem, not encryption.

    A prime example is the One Time Pad. It’s about as secure as you can get if used correctly, and needs at most pencil, paper and a match/fire to use securely. The reason it’s not used is threefold: the generation of the KeyMat, the quantity of the KeyMat, and the storage and transportation of the KeyMat.

    So with the OTP it’s KeyGen (key generation) and KeyMan (key management) issues, not encryption issues. Of the two, KeyMan is by far the hardest to do (although with OTPs KeyGen can be problematic as well, due to various issues).
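
    For readers who haven’t seen one, a minimal OTP sketch in Python makes the KeyMat problem obvious: the pad must be truly random, at least as long as the message, used once and then destroyed. (os.urandom stands in here for a true hardware RNG, which is itself one of the KeyGen issues mentioned above.)

    import os

    def otp_keygen(n):
        # KeyGen: one pad byte per message byte. This is the KeyMat that
        # must then be stored, transported and eventually burned.
        return os.urandom(n)

    def otp_crypt(data, pad):
        # XOR both encrypts and decrypts; never reuse or "stretch" the pad.
        assert len(pad) >= len(data), "pad too short"
        return bytes(d ^ p for d, p in zip(data, pad))

    msg = b"ATTACK AT DAWN"
    pad = otp_keygen(len(msg))
    ct = otp_crypt(msg, pad)
    assert otp_crypt(ct, pad) == msg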

    A solution to the KeyMan issues came about with certain mathematical ciphers and the thinking of a couple of individuals. But as we now know, Diffie-Hellman Key Exchange, along with all other PubKey algorithms, has problems we are only just getting our heads around. Those issues aside, D-H KE is not something most people can do reliably or quickly with pencil and paper; you need what was once considered a high-end computer. There is also a big set of “ifs” over mathematical ciphers: not only are they fairly trivial to “backdoor” in various ways, but computing power is rising at a rate that makes the length of the integers needed rise just as quickly, which tends to make the likes of KE obvious to spot. Then there is the unknown horse of Quantum Computing to throw into the race. All of which means bigger, faster and more powerful CPUs etc., which comes back to a “Red Queen’s Race” cost-wise.

    Thus it’s the KeyMan that is the luxury not the encryption.

    Further, it should now be clear to just about everybody who cares to think about it that Smart Phones are no longer secure in any way, shape or form. It’s not just the US FBI; it’s the UK NCA and many governments that have repressive views of some kind (all) or are tyrannical or despotic in nature (many). Thus no matter what the Smart Phone has in the way of “secure apps”, as long as the “Eves of this world” can get to the screen and keyboard drivers and “shim” them, they can do an end run around any kind of secrecy the Apps can offer.

    Whilst there are technical solutions to this Smart Phone issue, they add significantly to the price of a smart phone.

    Thus encryption systems beyond the exploitable hardware are becoming of interest again, especially those that don’t need expensive and very obvious electronics, which are a major OpSec no-no. Hence paper-and-pencil “hand ciphers” have become of interest again, which brings back the KeyMan issues…

    It’s one of the reasons that the likes of Card Shuffling Algorithms (CSAs) for stream ciphers keep bubbling up. CSAs are fairly simple (ARC4 is one of very many examples), and most programmers can remember not just the algorithm but the code in their favourite programming language. CSAs can also be done with the likes of a pack of cards; thus, with a little patience and practice, a manual PAD/stream cipher can be implemented. The fly in the ointment with CSAs, as with other ciphers, is just how secure they actually are in use.
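
    As an illustration of just how memorable such a cipher is, here is ARC4 written out from its well-known public description (shown only for size; ARC4 has known keystream biases and shouldn’t be used for anything serious):

    def arc4(key):
        # Key scheduling: the 256-entry state S ends up a keyed permutation,
        # the electronic analogue of a shuffled deck of cards.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Keystream generation: keep swapping and emit one byte per step.
        i = j = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            yield S[(S[i] + S[j]) % 256]

    ks = arc4(b"secret key")
    ciphertext = bytes(b ^ next(ks) for b in b"HELLO")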

    One thing that is known is that the number of ways you can permute a pack of cards (52! ≈ 2^226) borders on the impossible to enumerate in any meaningful way. Thus the question arises of the state transformations and how much information about the pack permutation they leak.

    The general feeling is that for relatively short messages and low traffic usage they are secure if used correctly.

    There are also ways to use them that would make many people pause for thought. Nearly all mechanical ciphers are based on rotors, which in reality can be represented quite effectively by strips of paper. The use of a stream cipher can give you a new set of rotors/strips quite quickly. Likewise, the stream cipher can be used to step/move the rotors/strips. All of which can be done relatively simply by a human with a little patience, and can likewise be fairly easily memorised. Further, the strips of paper can be easily destroyed with a match, especially if pre-prepared with the likes of potassium permanganate or other chemicals.

    Expect such systems to be given thought to over the next year or so…

    Clive Robinson April 3, 2016 1:52 PM

    @ Wael, Thoth,

    I take that back. There are better technical / operational ideas.

    There have been many conversations over the years, which methods had you in mind?

    My preferred method is multiple out-of-jurisdiction “shares” held by entities that can be trusted not to be turned, for various reasons.

    Young @ Heart April 3, 2016 1:59 PM

    Whilst the price of equipment is always a concern, the problem is a bit more subtle as it’s KeyMan[Key Material?] that’s the real problem not encryption.

    I’m in a minority camp on this one, but I think the real problem is the price of equipment+development_environment+barriers_to_spread_of_enhancements.

    Imagine a world, if you will, where every 14-year-old clock-soldering child has a phone in front of them. That phone is 100% open source. All source for the phone is on the phone’s primary system disk/storage. The child can plug the phone via micro-HDMI and USB ports into a big-screen TV and a USB keyboard and mouse. The child can literally see a gitorious-like tree of every line of source running on the system. The child can relatively easily (especially if given a bit of guidance by a freshman engineer) find the relevant line of code for any feature, including the various security aspects, such as key material handling. The child can make an improvement, press a button, and now have the modified OS running native on the same phone. All code is recompilable on the phone itself. The child can also enter a single word or image, and have their logo replace android/cyanogenmod/debian. The child can then make that git fork available to the entire world via the internet. Now any child in the world can take that enhancement, and repeat the process.

    Once the barriers to people being able to improve the software features of their mobile phones get reduced to that level, this whole “key material handling is cumbersome” shit will disappear. Kids are fucking smarter than we are. Believe it.

    Tear Down The Garden Walls Already April 3, 2016 2:07 PM

    Of course, everything I just said as Young @ Heart doubly applies to console video game development. If kids had that level of free development kits, many establishment game companies would go out of business, and the first-party titles from the console manufacturers would make a hell of a lot less money for them. And then people would realize those first-party titles were subsidizing some razors/blades, printers/ink shit, and be glad they had moved forward with technological progress beyond that level of stifling innovation.

    Wael April 3, 2016 2:48 PM

    @Clive Robinson, @Thoth,

    There have been many conversations over the years, which methods had you in mind?

    Any method that shows a believable story that the person under pressure doesn’t possess all the information needed for decryption. This way, the information is protected, and the “victim” need not be harmed. This is but one variant of a slew of possible implementations.

    @Lawrence D’Oliveiro,

    That may be enough. And if they lose patience and resort to more extreme measures, they will never get it.

    I admire your steadfastness and forecasted future perseverance! However, you may change your mind under state of the art narco-interrogation. You may still resist it, but not everyone will be able to.

    Keep in mind that the brief “threat modeling” discussion we had covered a small fraction of the technical possibilities, which include subversion, side channels, endpoint weaknesses and data-at-runtime weaknesses, as well as bypassing the delay factor in the algorithm by running the decryption on a specialized supercomputer that can run a day’s worth of your i7’s computing power in a few seconds. Suddenly your 30-minute decryption delay becomes a few tens of milliseconds!

    Like I said: it’s good work, but you may need to consider finding other applications for it.

    @Nick P,

    It’s impossible to validate an email address

    Programmers who like to get cutesy by nesting, concatenating, and piping regular expressions to come up with a stunt like that should be sent to a security concentration boot camp. This is a prime example of what not to do!

    Clive Robinson April 3, 2016 3:01 PM

    @ Cryptanalysis conundrum,

    I don’t see how the Feds can ever beat this method, given they cannot claim there is hidden data or feasibly prosecute if…

    The Feds etc lie like cheap watches in all FiveEye countries.

    Courts in these countries work on the “Trial by Strength of Arms” method arising from heraldic behaviour in England over a thousand years ago. Each side appoints a champion to slug it out before God and a judge; the loser is guilty. Thus the champion has to carry the burden of truth before God. The jury system came about for “serfs, villeins and freemen in the hundreds” via the assizes, where you would be judged by a tribunal of truth of your peers, with a tribunal of law (the judge) responsible for ensuring both parties follow the rules of law.

    Which means these days you have an expert in law (and little else) looking at papers that represent findings of fact from both champions, and deciding from that and oral argument which matters are admissible under law and which are not, such as common hearsay. Champions may call upon the services of domain experts to be “officers of the court” and present “opinion” which, whilst being hearsay, is backed by other experts in the domain. Sadly, as you will see, few “expert witnesses” are independent in their views these days, and judges really do not like having two experts at loggerheads, so they seldom allow them to be properly examined.

    Thus, as the rules relax, you will have prosecution witnesses from LEOs stating in witness boxes that they have been told XY&Z by a domain expert who, for various reasons, is not available to give either oral testimony or a written statement. Such behaviour is iniquitous, but they get away with it. Thus you get perjury via an unknown third party, which judges allow…

    In such a system it is difficult for an honest person to prosper…

    Lawrence D’Oliveiro April 3, 2016 3:11 PM

    @Wael

    I don’t think anyone has to be “steadfast” at all. The simple fact is, the more stress you put your victim under, the more likely they are to make mistakes. And my partial-key encryption proposal is not tolerant of mistakes, or demonstrations of impatience.

    Dan April 3, 2016 4:09 PM

    @JG4,
    Currently, it is believed that quantum computers cannot efficiently solve NP-Hard and NP-Complete problems. The complexity class quantum computers solve efficiently (with a small chance of failure) is called BQP. The Discrete Logarithm Problem and the Elliptic Curve Discrete Logarithm Problem are known to be in the complexity class BQP.

    @edw,
    Reading the article, I believe the person writing it does not understand cryptography. Most (but not all) asymmetric-key algorithms have security reductions to (assumed, at least at one point in time) hard problems. Security proofs are not new to cryptography. Judging from the writing style, the author of this article is not a nerd and does not understand technology on the technical level (I don’t know exactly how I decided the author wasn’t a nerd; my instincts just told me he wasn’t).

    Dan April 3, 2016 4:13 PM

    @Bruce Schneier,
    I like the on-off switches for the social media buttons. I note that the social media buttons in the “subscribe” section of your website do not have on-off switches. Do those buttons not track users?

    Dan April 3, 2016 4:28 PM

    @Wael, @Thoth,
    I believe the solution to rubberhose cryptanalysis is something called a “duress code”. It is used to indicate that someone was forced to enter a code against their will. In an alarm system, entering the duress code would disarm it, but silently alert the authorities. In a secure storage device, the duress code should instantly wipe the key storage (the storage has to be tamper-resistant). The duress code should be indistinguishable from the correct password (assuming the correct password is unknown) until it is too late.
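
    As a toy illustration of the control flow (my own sketch, not any particular product; a real device would do the wipe in tamper-resistant hardware and keep every code path constant-time):

    import hashlib, hmac, os

    SALT = os.urandom(16)

    def code_hash(code):
        return hashlib.pbkdf2_hmac("sha256", code.encode(), SALT, 100_000)

    REAL_HASH = code_hash("correct-horse")
    DURESS_HASH = code_hash("correct-horse9")    # plausible "slip of the finger"
    key_storage = bytearray(os.urandom(32))      # the secret being protected

    def unlock(code):
        digest = code_hash(code)
        if hmac.compare_digest(digest, DURESS_HASH):
            for i in range(len(key_storage)):    # silent wipe, then behave
                key_storage[i] = 0               # exactly like a wrong code
            return None
        if hmac.compare_digest(digest, REAL_HASH):
            return bytes(key_storage)
        return None                              # ordinary wrong code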

    Wael April 3, 2016 5:11 PM

    @Lawrence D’Oliveiro,

    And my partial-key encryption proposal is not tolerant of mistakes, or demonstrations of impatience.

    That’s the best description you gave of how to use the algorithm! Perhaps you should indirectly mention this “act” in the user’s guide. Time to field test it 😉

    @Dan, @Thoth,

    I believe the solution to rubberhose cryptanalysis is something called a “duress code”.

    It works under different circumstances. Consider a situation that often arises in military settings: if a soldier is captured with sensitive equipment, a Duress Code could be used to initiate subtle self-destruct functionality. These self-destruct capabilities could be initiated by various triggers, including, but not limited to, Duress Codes. The thing to keep in mind, though, is that the enemy is certainly aware of these mechanisms.

    Duress codes will not work under some circumstances, for example if the encrypted material is saved on media such as a DVD or a USB disk (like the ones you find in parking lots[1]). There are variants of the so-called Duress Codes that behave differently: they decrypt bogus files and corrupt other confidential files, or do “other things” with them.

    Duress codes will not work against a sophisticated adversary. One of the first things an attacker will do is air-gap the system and either clone it or extract all data out as a backup. There are counter air-gap-attack defense mechanisms, too! An example would be the loss of a beacon transmission signal in certain geo-locations or outside certain geo-locations (there are mechanisms to spoof GPS signals as well). There are counter-counter-counter …

    [1] On the way to the airport, inconspicuously drop the USB disk on the street and make sure you are under camera surveillance. Then pick up the disk and examine it. If they ask you to decrypt stuff on it, say: Gee, officer, I just found it in the parking lot and I was going to clean it and reuse it; check the camera, man. I wouldn’t recommend you, ahem, stick it in your computer. I’m telling you this so your superiors don’t “wisdomise” you!

    We don't want no stinkin security April 3, 2016 5:13 PM

    Why is it that people keep repeating the old tired “we don’t want good security because then someone will kill me instead of stealing my stuff”… (or chop off my finger, or torture me, or whatever, same meme)

    Really? Well, gosh, why don’t you keep your doors unlocked? Never lock your car and keep the key in the ignition, so that people can steal it without damaging it or hurting you… in fact, store all your furniture and belongings on your front lawn all the time, and keep your house totally empty, just so people won’t kick in the door to steal it, or bother to use guns to hold you up… I mean, that’s “SAFER” right??? What’s the matter with people!

    Dirk Praet April 3, 2016 6:56 PM

    @ Wael, @ Clive, @ Thoth, @ Nick P, @ Lawrence d’Oliveiro

    Any method that shows a believable story that the person under pressure doesn’t possess all the information needed for decryption. This way, the information is protected, and the “victim” need not be harmed. This is but one variant of a slew of possible implementations.

    Partial key escrow with Tomb + SSSS. I think I have referenced it before on this blog. It has the advantage that you can even afford to lose one or more keyholders because you don’t necessarily need all keys to decrypt. You can elaborate on the method by splitting the key not just over different individuals, but also over hardware security modules, smart cards or other such devices. For those familiar with QubesOS, check out SplitGPG.
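
    For anyone unfamiliar with the splitting step, here is a bare-bones Shamir k-of-n sketch over a prime field (illustrative only; for real use stick with the audited ssss tooling used with Tomb; the modular inverse via pow needs Python 3.8+):

    import secrets

    P = 2**127 - 1                                # prime modulus

    def split(secret, k, n):
        # Random degree-(k-1) polynomial with f(0) = secret; shares are f(1..n).
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        # Lagrange interpolation at x = 0 recovers the secret from any k shares.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(123456789, k=3, n=5)
    assert combine(shares[:3]) == 123456789       # any 3 of the 5 suffice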

    @ Dan, @ Cryptanalysis conundrum

    I believe the solution to rubberhose cryptanalysis is something called a “duress code”

    Unless you come up with a homebrew device that self-destructs with a small thermite charge when tampered with, there are too many ways to work around it, as @Wael already mentioned. But you can also reverse the process by having the other (partial) keyholders automatically destroy their keys if, for whatever reason, a previously agreed-upon protocol is not followed.

    Nick P April 3, 2016 7:22 PM

    @ Dirk

    Re Split GPG

    Lmao if that’s from Joanna’s people, cuz it’s yet another recommendation from my Qubes rebuttal she’s adopted. I mentioned the Dresden microkernel work, which split email signing between trusted and untrusted domains. What she rejected and raged about is again showing up on the site. Good that they’re learning, though.

    Meanwhile, Rust’s compiler lead told me they’re adding incremental, per-function compilation to speed development. Only LISP has had that so far. Getting it in a safe systems language would be awesome!

    Niko April 3, 2016 7:46 PM

    @we don’t need security

    I’ll answer your straw man. Most home locks aren’t really designed to prevent someone from getting into your home. They exist to prevent people from breaking into your home without leaving some sign of forced entry. Without some sign of forced entry, your insurance claim is likely to get denied. Home locks exist less to prevent theft than to be able to prove after the fact that a theft occurred.

    Buck April 3, 2016 8:24 PM

    @Nick P

    Challenge accepted! Although, given the time I can feasibly devote towards the task, I am almost certain to get it (at least) slightly wrong. I think this is a great example from Elliott Chance:

    Furthermore the local-part can contain any characters, including an @ sign, if they are enclosed within double quotes. These are also perfectly valid:

    • “dream.within@a.dream”@inception.movie
    • bob.”@”.smith@mywebsite.com

    Now, where to find that in RFC 822…? Here’s a clue in section 3.4.5, QUOTED-STRINGS:

    Where permitted (i.e., in words in structured fields) quoted-strings are treated as a single symbol. That is, a quoted-string is equivalent to an atom, syntactically.

    Luckily for me, the list of definitions is fairly comprehensive…

    word = atom / quoted-string

    atom = 1*<any CHAR except specials, SPACE and CTLs>

    CHAR = <any ASCII character>

    specials = “(” / “)” / “<” / “>” / “@” / “,” / “;” / “:” / “\” / <“> / “.” / “[” / “]”

    <“> = <ASCII quote mark>

    SPACE = <ASCII SP, space>

    CTL = <any ASCII control character and DEL>

    quoted-string = <“> *(qtext/quoted-pair) <“>

    qtext = <any CHAR excepting <“>, “\” & CR, and including linear-white-space>

    quoted-pair = “\” CHAR

    CR = <ASCII CR, carriage return>

    LWSP-char = SPACE / HTAB

    HTAB = <ASCII HT, horizontal-tab>

    Whoa there, when we have to thoroughly define what the word ‘word’ means, you already know we’re well into the twilight zone! By my (admittedly brief) reading of the RFC, a <quoted-string> (<“><qtext><“>) containing only the @ CHAR (i.e. the <word> represented by the following three <CHAR>s: “@”) is syntactically equivalent to the <@> <atom>.

    I have no clue, from 822 alone, what “syntactically” (among a litany of other phrases) actually means here… Some of the spec is clearly outdated, and so much more of it has been basically ignored for so long as to be practically meaningless at this point in time.

    If only there were some way to more conveniently cross-reference terms and phrases that have already been defined… Maybe some sort of hyper-textual markup language? Ehhh, nevermind, that’d probably be far too complicated to see any widespread adoption!
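
    To see how quickly the usual shortcut goes wrong, here’s a quick illustrative check (my own sketch) of a typical one-regex validator against Chance’s quoted-local-part examples:

    import re

    # The kind of regex most validation tutorials and libraries reach for.
    NAIVE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

    for addr in ['"dream.within@a.dream"@inception.movie',
                 'bob."@".smith@mywebsite.com']:
        print(addr, "->", bool(NAIVE.match(addr)))

    # Both print False despite being RFC-822-legal, which is why "store it,
    # send a confirmation mail" beats trying to validate syntactically.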

    edw April 3, 2016 8:26 PM

    JG4, Dan

    Yeah, that phys.org site has some authors who are better at reporting on some subjects than on others ;-P

    The article does, however, point to the German university Ruhr-Universität Bochum as the source of the news, and I found some articles on their site about this same subject.

    The articles are all linked to on this “Lattice-Based Cryptography” page:
    http://www.sha.rub.de/research/projects/lattice/

    Figureitout April 3, 2016 8:54 PM

    Thoth
    –Yeah, old-time smart cards aren’t made anymore, right? So it’s going to be a mostly dead memory form in the next decade, right? If you just wanna do your own thing w/ a stash of cards, that’s fine; I want to get it in the public’s hands. If you don’t upgrade you get left behind and eventually screwed (I don’t agree w/ it, but it’s true).

    Some of these MCU’s have been made for probably 20 years and will likely continue being made 20 years from now; there’s too much money to be made not to. Z80’s are still made; not exactly the same, but compatible.

    I think essentially the architecture
    –You think? Kind of a big detail to mull over eh? Just the CPU…

    And who said anything about relying on one chip? Maybe someone else did. My security strategy is to have lots of chips doing specific tasks, with most of their memory unused. Compromise one, good job, you’ve got probably 10 more to go. I’ve yet to try the MSP430, STM32, and PIC much. My little sensor project will be able to connect to many, many sensors, so an attacker would potentially need external attacks on all of them to avoid getting logged.

    The only person here putting up a data diode we could build was M. Ottela, and that was mostly the work of a researcher in Iowa. There are pretty much no other open products for data diodes, just black-box products. The prison/castle debate was a debate, and there’s no usable product from it, or even proof that the designs are feasible.

    Nick P
    –A lot of MCU’s seem to have security features that are pretty good if you use them: OTP/internal EEPROM memory to mark your chips internally for authentication or as a PRNG seed (say you ship a batch of 1000 to a security-focused customer, keep the serial numbers heavily encrypted at the factory, then send them over for the recipient to verify via an at-least-2FA comms channel; an attacker getting those numbers out to program in an infected batch would add a pretty noticeable delay), an analog comparator for checking a voltage level, a watchdog timer (a timer w/ a high-priority interrupt), non-maskable interrupts to yank control away from whatever (which may promptly yank it back, but it’ll be annoying for both parties regardless), lock bits (been hacked, still nontrivial), verified bootloaders that only get overwritten if you have the key, temperature detection (add a check for a change), which is apparently pretty cheap and easy and why you see it on so many chips, light-detection receiver IC’s which can detect regular light bulbs, sunlight and IR (on at least one I work w/, sunlight reads “FFFF”, so epoxy the chip and set a check for anything over 0 to trigger reset/self-destruct), etc. etc. These are non-trivial to hack in practice, on an actual target (i.e. not having the chip on your bench).

    The Common Criteria thing could’ve been reduced to like 5-10 pages; its main value is organizing, and it is valuable to have someone on your team do that, but it didn’t offer real countermeasures (i.e. “you shall do this ideal thing”; uh, OK).

    The “data sheet” (more like a sales doc; what a sham: 9 damn pages, doesn’t tell me jack, go shove it) on the Infineon thing is exactly what I’m talking about. I’m not signing my life away to get a damn datasheet. And oh boy(!) a DES accelerator(!) Yippee! All those features exist in regular MCU’s all over the place. I wouldn’t trust any embedded TRNG if whatever you’re protecting really matters; just tack on your own. The only thing really different would be the MPU (which this link just calls an MMU…), but you can usually control channels in embedded pretty well; they’re everywhere and they haven’t all been hacked yet. I mean, memory management isn’t super critical for a lot of small embedded (just moving pointers and overwriting, not actually zeroing data, which would just reduce the lifetime of the memory) like it is for a networked PC w/ a big OS, which absolutely needs it; and you’d want to limit it anyway for security applications.

    C’mon man, surely you have a better document in your link farm that supports your points. I didn’t see it in these (in fact I think you posted them before, and I said a similar thing; we’re repeating ourselves now).

    Buck April 3, 2016 9:54 PM

    Oh, never mind… There it is in section 3.1.4, STRUCTURED FIELD BODIES:

    The analyzer provides an interpretation of the unfolded text composing the body of the field as a sequence of lexical symbols.

    These symbols are:

    • individual special characters
    • quoted-strings
    • domain-literals
    • comments
    • atoms

    The first four of these symbols are self-delimiting. Atoms are not; they are delimited by the self-delimiting symbols and by linear-white-space.

    So it seems that, sometimes, the <CHAR> otherwise known as <@> isn’t actually an <atom> so much as it is a <special> (sic?). In fact, the very next example shows that Chance was right! Not only can you use more than one @; it wouldn’t even have to be enclosed by double quotes!

    ":sysmail"@ Some-Group. Some-Org,
    Muhammed.(I am the greatest) Ali @(the)Vegas.WBA

    It even looks like (at least, unintuitively) <CR>s and noncontiguous <SPACE>s are also allowed in some parts of the address… Regardless, I still think that this is the wrong RFC to be poking through for anything that appears to the right of the final (only?) <@> <atom>

    Don't want no stinkin security... April 3, 2016 10:25 PM

    @Niko

    Since it’s a straw man… why don’t you make it easier for people to steal your stuff, you know, so that they can get to it more easily, instead of risking your life in a robbery… Go on, do it. That’s exactly what some people are proposing about computer security, though!

    Notice how I purposefully worded it to avoid mentioning locks on houses, to avoid your example… It’s a general question. Whose answer is the straw man now? The guy who thinks computer security is a bad thing and we shouldn’t have it, perhaps?

    Thoth April 3, 2016 10:55 PM

    @Dirk Praet
    Split keys in HSMs and smart cards are quite common. Especially when managing an HSM module, the administrative key is usually split by the HSM and loaded onto different administrative USB crypto tokens or smart card tokens, with the crypto tokens further securing it with their own proprietary techniques to conform to international M-of-N quorum requirements.

    @Figureitout
    Some older smart card chips are surprisingly still in strong demand to this day, and some of these chip manufacturers have deliberately extended their lifetime because the banks and financial institutions are very slow at moving to newer smart card chips. We have to understand how their memory works. A smartcard typically has at least 2 sets of memory, namely a ROM and an EEPROM. You usually load the Card OS into the ROM, which is separate from the EEPROM, and the rest of the applets and data go to the EEPROM. The more modern versions coming out just these few years attempt to simply have a huge bunch of Flash memory with no ROM/EEPROM/Flash separation, so that the more security-critical OS code gets mixed into the Flash containing user data and code. I would say I prefer the ROM/EEPROM split over the more modern all-Flash version, since the ROM/EEPROM split is a Harvard architecture (and yes, the documentation for certain chip designs will tell you whether a given smart card chip is a Harvard architecture or not).

    @Clive Robinson did once mention that the separation of executable and user data is becoming less relevant, because if your executables allow some form of arbitrary loading point, that would effectively allow user data to turn into a mini executable VM of sorts. One feature the JavaCard variant of the smartcard architecture defines is that code classes cannot be sideloaded, and this somewhat prevents user data from casually turning into a mini VM and executing commands on its own, unless you are willing to write some code blobs to act as a parser of sorts for some higher-level language; but that would hurt the tiny MCU’s performance.

    The better idea might be, as you say, to split the execution over multiple MCUs (@Clive Robinson’s Prison concept, after all). We just need to take steps and define the acceptable level of security and usage for the projects that we are building. Almost all of those chips are black boxes; their pin-outs and ASM codes for external interfacing might sit on top of hidden microcode.

    If you want quick and easy-to-develop security, and also a good way to blend into the crowd of available security products, the norm would be smart cards, ARM with TrustZone and the like, where you have publicly known documentation for JavaCard APIs, TEE APIs and whatnot.

    If you prefer the less-traveled road of writing the entire system OS and making your entire circuitry on your own, because you feel a need for very high levels of security assurance and don’t want to blend into the crowd, you can go that road.

    Security is much bigger than just implementing something super secure. In this era, where Govts simply want to find all means and excuses to weaken, or probably even remove, security for civilians, they may go on a witch hunt for anything that looks weird and stands out.

    I don’t think you want to lug a PCB around for mobile communication? That would definitely attract attention. But if your goal is to bunker up inside your home-made SCIF-style compartmented room, with all kinds of protections against RF leakage, physical intrusion and spying methods, then having multiple pieces of electronics, data diodes and rather cumbersome security devices that provide very high levels of security would be suitable.

    It really depends on how you want to use security, where and when you want to use it, in what settings, and how much attention it should draw to you if used in public. These are the softer skills of Security Engineering, and part of OPSEC.

    If the goal is being really paranoid, to the point where communicating securely in public is a problem, then probably the best option is still good old OTP crypto with drop points and the usual spy tradecraft that is still commonly in use to this day; electronic security would not provide you that protection.

    The recent discussions on ISIS encryption, regarding the higher echelons using out-of-sight cell towers in not-so-obvious jurisdictions for a limited amount of usage, describe a good idea, but it wouldn’t be all that secure if the security model assumes too much.

    What it all boils down to is: know your users, know your attackers, and finally know your comfort zone for Security Engineering in general. Then you can devise a suitable security model, pick the objects and parts (including trusted people for key splitting/sharing or courier duty) required in your security model, and start to devise schemes, implement and execute. Such are the rigours required for security modelling, development, deployment and, when necessary, ending the lifecycle.

    Nick P April 3, 2016 11:33 PM

    @ Buck

    Nice breakdown and work on it. Before I say more…

    “If only there were some way to more conveniently cross-reference terms and phrases that have already been defined… Maybe some sort of hyper-textual markup language? Ehhh, nevermind, that’d probably be far too complicated to see any widespread adoption!”

    …BNF, logic, specification, and functional languages have all handled this stuff in the past. In high assurance, or just CompSci, people often found problems in the requirements or design just by formalizing them in such precise notations. A great example is the work that found problems in Eiffel’s SCOOP model for concurrency: the problems weren’t evident in English, but were straightforward to find even with an amateur using Maude specs. The question is how messy stuff like what you wrote will get when put into such tools.

    Not to mention denial-of-service possibilities during catch-all validation. I mean, this stuff is ridiculously complex, to the point that I totally ignored it in the past: I just assumed the email was real, stored it as text, sent a confirmation message, and dealt with any weird results. Still, I’d like to see these champs at readable parsers write something up for this.

    @ Figureitout

    The discussion was whether security-focused MCU’s and products using them (esp. smartcards) are better than regular MCU’s. That an insecure MCU has been around 20 years doesn’t help you. That you might have to sign an NDA for a datasheet, or not have the Verilog, is orthogonal. The mitigations you mention are basically non-existent on common MCU’s, outside those that accidentally fit the use case, and they are certainly not implemented in robust ways either. There’s a whole subfield dedicated to all the failures they can bring, failures that high-robustness MCU’s or CPU’s counter. So, features or assurance, there’s quite a bit more in security-critical MCU’s than regular ones.

    “The common criteria thing could’ve been reduced to like 5-10 pages; main value is organizing, and it is valuable to have someone on your team do that, but didn’t offer real countermeasures (ie you shall do this ideal thing, uh ok).”

    Maybe. I think its real value was covering the attacks on many levels. It’s about requirements more than specific countermeasures, because countermeasures are a cat-and-mouse game. The non-security MCU’s didn’t meet the requirements in this list; evaluated smartcard IC’s at least try to. Hence me bringing it up as a differentiator.

    “The “data sheet” more like sales doc (what a sham, 9 damn pages, doesn’t tell me jack, go shove it)”

    It’s a sales doc that includes a list of features not found on most MCU’s. They included some tamper-resistance, side-channel mitigations, a TRNG, fast crypto for apps, special hardware for compartmentalizing software beyond the average MMU, and encryption/integrity protection for what’s outside the CPU. This is orders of magnitude above the average MCU in ability to protect your apps. It even provides some protection against hackers with physical access to the device who lack the hardware or specialized knowledge to attack that.

    Btw, show me a “regular MCU” that has all those features that isn’t a higher-priced, security-focused MCU like what I’m talking about. Really, dude, because I looked at a lot of MCU’s in my embedded research, finding most of them didn’t have shit for anything. I settled on recommending one MCU per peripheral function, just high-end enough to implement anything, w/ the necessary HW interface onboard. That adds $100-200 to the price of a computer, because those aren’t cheap.

    ” I mean memory management isn’t super critical for a lot of small embedded (just moving pointers and overwriting, not actually zeroing data, it would just reduce lifetime of memory) like it is for a networked PC w/ a big OS which absolutely needs it; and you’d want to limit it anyway for security applications.”

    The MPU’s are both the first and the last-ditch defense against issues that arise when environmental faults (e.g. SEU’s) or malicious input turn your precious C or ASM code into the attacker’s tool of choice. They contain further damage while notifying monitoring code that something is up. Even your simple devices should have memory protection; even the PDP-1 had some memory segmentation for exactly this reason.

    “C’mon man, surely you have a better document in your link farm that supports your points. I didn’t see it in these (in fact I think you posted them before, and I said similar thing; we’re repeating ourselves now).”

    Yes, I did. They modeled precisely what the design and security techniques were, with code that mapped to that, tests, analysis, some proofs, and pentesting. You didn’t agree with that stuff because you didn’t have the wires, or preferred C, or something. I didn’t bother digging it out again since the subject was simpler: whether security-focused IC’s offer advantages over common MCU’s. They do. Evidence above. Much simpler discussion.

    Clive Robinson April 4, 2016 7:02 AM

    @ Nick P, Figureitout

    Firstly, MPU is an ambiguous TLA, meaning either “MicroProcessor Unit” or “Memory Protection Unit”. It’s not always clear from context, even with their less ambiguous companion TLA’s MCU and MMU around. Sometimes you might see something akin to “The MPU of the XXXX/e MPU is a simplified and faster alternative to the XXXX/s MPU’s MMU…”. Whilst the author had it clear in their mind from the get-go, it’s not till your brain gets to the MMU that the sentence has a chance to make sense, except by luck. Think how much harder it is for others not so embedded in the technology to follow along.

    Worse, whilst the base definition of MMU was historically established, and thus in general enhancements are easy to recognise, the same is far from true of an MPU. To some an MPU is a simplified MMU; to others it is an enhanced MMU. Thus it’s best to clarify up front the meaning you are ascribing to MPU, so it does not clash with some readers’ mental models.

    Secondly, I cannot remember if you ever mentioned looking at Altera’s Nios II/e for their range of FPGA’s. It can be pulled together from as little as 700 LEs, and the last time I looked (pre Intel takeover) it was royalty- and licence-free and, if memory serves, also unencumbered by NDA, with development boards well south of 500 USD, some below 100 USD (less with academic discount). Depending on whether you went for the basic MPU or the more complex MMU, you could load up Linux and other *nix OS’s.

    http://www.altera.com/products/ip/processors/nios2/ni2-index.html

    The last time I looked, there was a fairly comprehensive online series of course notes etc. on it,

    http://instruct1.cit.cornell.edu/courses/ece576/

    Thus the question of use comes down to development and deployment costs.

    Clive Robinson April 4, 2016 8:28 AM

    @ Keiner,

    Not Erdo-gone’s idea of “Turkish Delight”…

    To be honest, I’m not really surprised. Around 16 years ago I had quite a long chat with the person responsible for doing Cisco training for TurkGov IT droids…

    What they had to say would make your eyelids roll back and your jaw drop below your feet…

    It did have a funny side: on one TurkGov system they found the bulk of traffic was Russian, going to a malware P2P server/exchange that had been put up on it… Apparently the unknown infiltrators had tidied it up, patched it, etc. to make it more secure…

    Mike Amling April 4, 2016 11:20 AM

    @Lawrence D’Oliveiro

    I missed the part where you explain how the legitimate user recognizes the correct value of k. Granted, I haven’t read the Python code.

    This type of scheme would be useful when the decryption is infrequent. E.g., whole disk encryption where key determination is only done at startup or login, or maybe backup files that are unlikely ever to be decrypted. I wouldn’t want to incorporate it into, say, 100 new TLS connections per second.

    Correct me if I’m wrong, but it seems to me that, like standard key-stretching techniques, your proposal increases the effort of the legitimate user by the same factor that it increases the effort of the attacker.
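
    Mike’s point in miniature, as a generic iterated-hash sketch (standard key stretching, not Lawrence’s scheme): the legitimate user pays N hashes once per unlock, and the brute-forcer pays the same N hashes per guess, so the work ratio between them is unchanged; only the absolute cost rises.

    import hashlib

    def stretch(password, salt, n=200_000):
        d = salt + password
        for _ in range(n):            # N sequential hashes per attempt
            d = hashlib.sha256(d).digest()
        return d

    key = stretch(b"hunter2", b"salt")    # user: one call at unlock time
    # attacker: one identical call per candidate password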

    CallMeLateForSupper April 4, 2016 12:10 PM

    From the I’m-in-the-wrong-business File:

    “You may have seen the TSA Randomizer on your last flight. A TSA agent holds an iPad. The agent taps the iPad, a large arrow points right or left, and you follow it into a given lane.

    “How much does the TSA pay for an app that a beginner could build in a day? It turns out the TSA paid IBM $1.4 million dollars for it.”

    https://kev.inburke.com/kevin/tsa-randomizer-app-cost-336000/

    1970’s musician/singer Don McLean explained:
    “The more you pay/the more it’s worth.”

    Nick P April 4, 2016 12:18 PM

    @ CallMeLateForSupper

    It’s a good deal. The politicians took bribes from IBM to get elected. TSA was created for political reasons rather than utility. TSA heads get their jobs and money from those politicians. TSA gives IBM our tax dollars as payment for their bribes to politicians. Government and defense contracts as usual.

    Just shows IBM’s investments in politicians are A Good Thing for them.

    Nick P April 4, 2016 12:40 PM

    @ Clive Robinson

    re MPU

    I meant Memory Protection Unit. You’re right that the meaning is ambiguous as to whether it’s simplified or enhanced. Either way, it’s usually better in design or implementation security than typical MMU’s, because security was a design consideration in the first place. That security mattered in the design probably had the most important impact on the results they usually get. As in, hopefully the joker isn’t so complex it breaks static analysis, or so unreliable it turns off randomly under undisclosed circumstances (cough Intel cough).

    re FPGA’s

    Well, assuming you trust FPGA’s, an MCU SOC on one of them is a decent way to go. It allows obfuscation and diversity opportunities. I ignored NIOS because, like Lattice’s and MicroBlaze, it’s designed to lock you into one FPGA vendor. It may have been ported to others later. Yet I push using a vendor-neutral core on there as a start. There’s Plasma MIPS, Cambridge’s BERI MIPS, Amber ARM, the RISC-V cores, Gaisler’s Leon3 SPARC under GPL, OpenRISC, and numerous FPGA-proven clones of popular MCU’s. Gaisler’s are designed to be easy to modify, with several CompSci papers doing that for security, details published. SAFE is doing Alpha for some reason, but their papers are detailed enough to imitate. CHERI modified BERI MIPS with their stuff, including a FreeBSD port, open-sourced for a Terasic FPGA board.

    So, these are the things I tend to recommend. RISC-V and SPARC most of all given others have I.P. risks due to an owner that’s lawsuit happy. Gotta future-proof our work. 😉

    Bruce Schneier April 4, 2016 3:26 PM

    “I like the on-off switches for the social media buttons. I note that the social media buttons in the “subscribe” section of your website do not have on-off switches. Do those buttons not track users?”

    Yes, they do not track users.

    Wael April 4, 2016 4:48 PM

    Waaaay off-topic:

    Question:

    Do those buttons not track users?”

    Answer:

    Yes, they do not track users.

    Shouldn’t the answer be:[1]

    No! They do not track users?

    Because the question can be stated as: “Don’t those buttons track users?” The answer needs to negate — not affirm the question. The actual meaning of the question posted is: “Do those buttons track users?”.

    It’s just like when a prosecutor asks a defendant in court: did you not order a pizza on that night? He really is emphasizing the fact that the guy ordered a pizza. The “not” in this question isn’t for “negation”; it’s for emphasis.

    Any grammarians here? This question has bothered me for some time.

    [1] I’m using a similar format in the question.

    Wael April 4, 2016 4:50 PM

    @Dirk Praet,

    Partial key escrow with Tomb + SSSS. I think I have referenced it before on this blog.

    That rings a bell. It should work as well. It’s more or less the same idea (more on the less side) 🙂

    Wael April 4, 2016 5:12 PM

    More on the OT comment…

    Heh! I used the same question format to someone here:

    Didn’t we cover this topic here?. And did you not say:

    My question actually means: I know you said so, didn’t you?

    Dan April 4, 2016 6:22 PM

    @Wael,
    That question has bothered me too. I think the best solution is to answer the question in full, instead of a yes or no answer. Regardless of the official rule, this seems to be the clearest way of answering a question formatted like that.

    Wael April 4, 2016 7:02 PM

    @Dan,

    One more thing: I didn’t realize you were the one who posted the question!

    Do those buttons not track users?

    Possible answers:

    1. Yes, they do not track users.
    2. Yes, they do track users.
    3. No, they do not track users.
    4. No, they do track users.

    Given the (intent of the) question, only 2) and 3) make sense to me… I forgot to ask what the intent of your question was, as it may make a difference! Wasn’t it your intent to say: I know these buttons track users, is that the case!? Lol… I’ll stop now before I get my yellow card 😉

    Wael April 4, 2016 7:50 PM

    @r,

    Who is they?

    Lol! I don’t know! Security is a complex domain. Buttons these days have a mind of their own! A small JavaScript booboo and …

    Dan April 4, 2016 9:34 PM

    @Wael,
    1) and 4) are the answers someone would give from a literal interpretation of the question. I have seen so many people use 2) and 3) (meaning the answers that fit the pattern of assertion and negation of the answers you numbered). Given how the meanings of “yes” and “no” in this case are not well agreed on, it appears best to give the answer as “They do not track users”, rather than “Yes, they do not track users” or “No, they do not track users” (if the buttons don’t track users, of course). Until an official definition of what “yes” and “no” mean in this context is created and widely accepted, the method I use is best.

    r April 4, 2016 9:40 PM

    @wael,

    We recently went over contractions with our 8-year-old. Seeing you split the responses up reminded me of that, and of something my grandmother used to say about English being the most difficult language for her to learn, with all its nuances. Then… I realized that HTML buttons don’t do tracking at all anyway… They’re front end, and that’s the backend’s job. “Tracking” is usually done with cross-domain requests/images and cookies, I believe.

    🙂

    So even if the buttons directly lead to a tracking page, answering affirmatively that buttons don’t track would not be a fallacy in and of itself.

    r April 4, 2016 9:46 PM

    @wael,

    You also missed…

    “Yes? They do not…”

    ‘They’ could also be tracking connections and hosts while ignoring differences between individual users of the same device technically.

    English is fun, this must be what the guys in D.C. are doing with our laws and Constitution, no?

    Wael April 4, 2016 10:39 PM

    @Dan,

    it appears to be best to give the answer as “They do not track users”, rather than “Yes, they do not track users”, or “No, they do not track users”

    You are absolutely correct. This is the conclusion I came to as well. It’s the least path of resistance. I believe the difficulty arises from the fact that the answer is a compound answer that needs to answer two implicit questions: one question is about the tracking, and the other question is about confirming the tracking. It’s a question about a question… If that makes sense!

    @r,

    We recently went over contractions with our 8yr old.

    I’m curious how your 8-year old would answer this question about the tea![1]

    That reminds me…

    @Rolf Weber,

    Earn some credibility credit points that you may redeem on a dark day… Would you please translate this to German?[2]:

    “My cat has a tail, but I don’t have a tail”

    @r,

    “Yes? They do not…”

    What, are you a grammar Nazi? Did I infect you or somethin’ ? 🙂

    English is fun, this must be what the guys in D.C. are doing with our laws and Constitution, no?

    Oh, language is their game! They can write one sentence that bears ten different meanings and several loopholes, yes? 😉

    [1] I think @Clive Robinson just popped a vein!

    [2] Not the best way to ask. He’ll probably ignore me 🙂

    Wael April 4, 2016 11:14 PM

    Crud!

    It’s the least path of resistance –> It’s the path of least resistance;

    @Dan, @r,

    I guess that’s what they mean when they say: it’s not a ‘yes’ or ‘no’ answer! So if @Bruce answers the next question by saying: it’s not a yes or no answer, then don’t make a fuss about his canary.

    Figureitout April 5, 2016 12:27 AM

    Thoth
    –One chip I’m working w/ doesn’t have an actual internal EEPROM; they emulate it in flash (which just sounds like bug city to me…). So I can see that trend happening too (having even a small separate internal EEPROM is so handy; having to emulate it kinda sucks, even though they’re similar memories (I haven’t needed it yet, so I haven’t messed w/ it)). But you can of course still just put an external one on board; very common.

    Having separate memories is done all over industry; it’s not a really special security feature. Not going to argue that anymore.

    Don’t have time to go over the other points. They’ve either been discussed or mentioned already.

    Nick P
    –Pretty nice security systems, ones that require physical access or fancy emsec attacks to defeat, can be done w/ regular MCU’s. That’s my main point. That’s about the best we can do as security people, short of getting to the point of people looking at you funny…

    I’m not seeing all these trivial attacks on MCU’s, besides the backdoors forced on the biggest vendors (my dad randomly told me they had to give the NSA a key for encryption used for a satellite receiver to operate in the US; he agrees w/ it btw lol…); the attacks are usually quite involved, w/ physical access at the workbench.

    Tried and true. Isn’t that your motto? That’s what 20 years of life in the commercial electronics industry means. Look how many things rely on commercial MCU’s day in, day out; how often do you hear of a stoplight failing? Things fail (still rarely) when they add more comms and fancy features, like the entertainment systems in cars.

    show me a “regular MCU” that has all those features
    –If a chip had exactly the same feature set, someone would be suing someone… however, individually the features are common:
    http://www.ti.com/ww/en/embedded/security/index.shtml
    http://www.ti.com/lsds/ti/microcontrollers_16-bit_32-bit/msp/peripherals.page#security

    http://www.atmel.com/products/security-ics/

    Check page 20 here: http://www.st.com/web/en/resource/technical/document/datasheet/CD00237391.pdf
    Check page 178 here: http://www.st.com/web/en/resource/technical/document/programming_manual/DM00046982.pdf

    Thoth can tell you all about NXP: http://www.nxp.com/products/identification-and-security/authentication:MC_71548

    http://www.microchip.com/design-centers/embedded-security/reference-designs/keeloq

    There’s more, but you can flex your google-fu (these were the 1st search terms for me, i.e. I didn’t spend more than a minute on this) to harvest for your “precious” link farm. Better keep doing your research, b/c I’m going to keep pushing security in the embedded space and leave my mark. It’s going to get better w/ the other big-time projects on the way.

    Clive Robinson
    –I see the power and potential of FPGA’s, but I don’t really trust them or their toolchains right now, and I don’t enjoy either of the major HDL’s. MCU dev boards are my bread-n-butter right now; I don’t really need recommendations of boards to look at anymore lol.

    Clive Robinson April 5, 2016 1:23 AM

    @ Wael,

    So if @Bruce answers the next question by saying: it’s not a yes or no answer, then don’t make a fuss about his canary.

    Are you saying “The Cat has got the canary”?

    I guess I should ask first if it was you “who let the cat out of the box”?

    PS I will leave out the feline to canine transition that would enable a “Baha Men” joke such as “Meha Men”, I think even you would not get it 😉

    Wael April 5, 2016 1:56 AM

    @Clive Robinson,

    Are you saying “The Cat has got the canary”?

    I hope so 🙂

    I guess I should ask first if it was you “who let the cat out of the box”?

    Nope!

    I think even you would not get it 😉

    I don’t get it 🙁 Hopefully it’s not encoded like the yellow card one. Can never tell how much you can encode and cram into one short sentence. Frankly, I don’t get the other two questions either (the ones I answered above); I thought I’d get partial credit, though!

    Dontcha already know that I’m dense and can’t read between the freakin’ lines? Perhaps it’s because sometimes you operate at a different wavelength and my internal PLL doesn’t lock on as quickly! Lower your VCO a bit 😉

    Clive Robinson April 5, 2016 6:56 AM

    @ Wael,

    You said “it’s not a yes or no answer”, which means it has a degree of “Uncertainty”, as did Erwin Rudolf Josef Alexander Schrödinger’s Cat, which for his purposes “he kept in a box, not a bag”. So it would not quite be “letting the cat out of the bag”.

    A group called the Baha Men, a decade and a half ago, had an appalling and therefore very successful record with a chorus of “Who let the dogs out, woof woof woof”. To go from the canine to the feline would be “who let the cat out, meow meow meow”, and the first two letters of Baha and Bark are the same, so they would be the Meha Men. If I remember rightly, somebody used to use “meha” as a comment here.

    Anyway, it must have been stressful, so try some supposedly relaxing sounds,

    http://m.youtube.com/watch?v=-QUJfVIGaRA

    Thoth April 5, 2016 8:31 AM

    @Figureitout
    There are some chips with emulated EEPROM on Flash memory, and whether there are any bugs or not we don’t know; there is a possibility, though. If emulated EEPROM is a worry, just export your stuff somewhere external; otherwise find a chip that has only ROM and EEPROM in the specifications, and load your critical stuff in ROM and non-critical stuff in EEPROM. Usually if there is Flash-emulated EEPROM, they tend to be very proud to announce it outright and won’t bother trying to hide it, as they see it as an advancement on their part.

    The reason these chip makers are told not to put much comprehensive design material into their technical booklets and whatnot is that the Common Criteria PP for smart cards specifically prevents them from doing so. If they were to publish any details, they would have their CC EAL removed, or fail on the spot. Please refer to the smart card CC EAL PP @Nick P previously posted and look under points #35, 36, 41 and 42. It simply gags any smart card chip maker from exposing anything, as a requirement for passing their smart card CC EAL PP. Smart card IC chips all require NDAs as part of their “security measure” by obscurity.

    Looking through the bunch of Security ICs, here are my inputs:
    • Texas Instruments’ design relies on a tamper RTC clock, FRAM and an external trigger sent to the chip. It does not seem to include more robust physical protections like wire scrambling, DPA and SPA protection built into the crypto accelerators, tamper passivation shielding via the outer metal layers, or tamper intrusion sensors against light-spectrum attacks. Whether it internally uses memory encryption of its contents is hard to tell from the specs.

    • The STM32 chips are not security chips: they have no capability of handling logical, power-line or physical tampering, and should not be used for security-critical applications. Detecting clock and power-line faults, and resisting analysis (DPA, SPA) that peers into cryptographic operation execution, as well as glitching attacks, are what make smart card chips more secure than other chips sprinkled with some form of logical or rudimentary security. A smart card chip is designed from the outset to function as a security environment from day one.
    • Regarding the NXP chip you referenced, it is a smart card IC with a USB interface, configured as a USB security token. This is not surprising, as more smart card ICs are finding themselves utilized as USB-capable HSMs. Due to market demand, some flavours of common smart card ICs can operate not only with the standard ISO 7816 interface but also with the USB CCID standard baked right into the smart card IC. One example of USB-CCID-capable smart card chips is the Ledger products linked below (Ledger Nano and Ledger HW.1). I am in contact with the guys at Ledger, and I have enquired about the smart card chip with USB capability before; I was told they are using native chip programming methods instead of the JavaCard standards, which means you cannot load your own applet onto these Ledger Nano and HW.1 smart card USB devices. I particularly like the HW.1 form factor because it comes in the shape of a standard ID-1 smart card with break-out portions, like a SIM card where you break the SIM out of the larger card.
    • Atmel’s Security ICs have been known to be used as smart card chips and it isn’t a surprise either because they contain the usual smart card security including passivation shields on outer metal layers, glitch and power line analysis protection and what not. There are smart cards running with Atmel’s chip as it’s smart card IC.
    • Microchip’s Keeloq page have some broken links if you try to open some of their PDFs. Crypto accelerator does not mean it is secure. To meet the standards of a smart card or a HSM, tamper resistance is required and that includes protection from trivial listening into crypto operations via power line and so on. It is unclear whether Microchip’s Keeloq has the capability to meet the security requirements that smart cards have to meet.

    Link:
    https://www.ledgerwallet.com/products

    Thoth April 5, 2016 10:08 AM

    @all
    Defeating iPhone crypto security not by planting backdoors but by disguising themselves as company managers, tricking the target into unlocking the phone, then arresting the target and changing the password on the unlocked iPhone. Who needs any backdoors except those who are incompetent at HUMINT and social engineering 😀 .

    The British Police also made claims that encryption was not a problem for them in that case.

    Link: http://arstechnica.com/tech-policy/2016/04/iphone-terror-crypto-uk-police/

    Clive Robinson April 5, 2016 10:23 AM

    @ Thoth, Figureitout

    It is unclear whether Microchip’s Keeloq has the capability to meet the security requirements that smart cards have to meet.

    Even though I’ve used their chips for some time, you have to look at the history of what is known. Microchip, like NXP, have had published attacks against their crypto…

    Thus you have to ask if the faults occurred due to fixable implementation issues or probably-not-fixable architectural issues.

    If the latter, any fixes would be a “slap a bandaid on it” fix, and as most of us know bandaids have a habit of coming unstuck when you least want them to…

    Meh April 5, 2016 11:13 AM

    The Hubble Space Telescope is capable of detecting objects billions of light years away. The Large Hadron Collider is capable of detecting subatomic particles. If you could somehow put the Large Hadron Collider INSIDE the Hubble Space Telescope, you STILL wouldn’t be capable of detecting how little I give a shit whether some “social media” button is tracking the celebritard-wannabes who plaster their lives all over “social media”.

    Wael April 5, 2016 11:14 AM

    @Clive Robinson,

    Oh, ain’t that grand! Security people speak in obfuscated words, even when joking 😉

    “Uncertainty” as did Erwin Rudolf Josef Alexander Schrödinger’s Cat…

    Makes sense, I should have caught that one. Wasn’t in a physics mode last night!

    A group called the Baha Men a decade and …

    Never heard of them. I did search and found the group, but couldn’t correlate it to the right context. I had to recognize Dr. Schrödinger first! Which brings us to the last “clue”:

    Are you saying “The Cat has got the canary”?

    I thought that referred to the expression meaning “happy”, relating to the acquisition of Resilient. This is what threw me off. You didn’t explain this part, yet!

    Anyway it must have been stressful, so try some supposedly relaxing sounds,

    No stress relief for me today…

    Our systems have detected unusual traffic from your computer network. Please try your request again later. Why did this happen?

    This page appears when Google automatically detects requests coming from your computer network which appear to be in violation of the Terms of Service. The block will expire shortly after those requests stop.

    This traffic may have been sent by malicious software, a browser plug-in, or a script that sends automated requests. If you share your network connection, ask your administrator for help — a different computer using the same IP address may be responsible. Learn more

    Sometimes you may see this page if you are using advanced terms that robots are known to use, or sending requests very quickly.

    Either 50+% of the readers of this blog needed stress relief and hit the URL, or my super sophisticated controls stopped me!

    Gerard van Vooren April 5, 2016 2:37 PM

    @ Nick P,

    You keep talking about Rust. Have you used it, and if so, what are your experiences? Can you talk about the benefits and drawbacks, the learning curve, concurrency, Cargo etc. from a user POV?

    Nick P April 5, 2016 4:57 PM

    @ Gerard

    Not yet. The reason is I’m applying a concept I came up with called documentation dogfooding, where I try to learn it exclusively through the main docs to find problems with them. People are having a really hard time learning its memory management. So, I thought dogfooding those sections of the docs, finding problems, and sending feedback to Steve K is the best contribution I could make right now.

    From what I’ve seen, it’s pretty effective at catching many of the worst offenders in memory and concurrency errors. Similar to Ada, it basically forces you to express the program in an explicit and restricted way amenable to analysis. The techniques are inspired by the Cyclone language (a safer C) and ML, among others.

    I actually do plan on learning and using Rust given that it’s the best mainstream language for safe systems. The compiler team also takes bugs seriously and even started with OCaml, not C, before the bootstrap. The community is cranking out a lot, including a microkernel OS (Redox). They’re one of the only examples of The Right Thing approach to languages that’s been accepted.

    That’s my two cents on it. Btw, it’s gotten stable and mature enough that Dropbox is already using it for their low-level storage layer. Meanwhile, I’m fighting with myself over whether to go ahead and start using it or keep on with the current activity to avoid inaccuracy in assessing the docs.

    Note: I had such a fucking headache on Ownership, Borrowing, and Lifetimes that I can’t wait for the Concurrency sections. (Sarcasm)

    Buck April 5, 2016 9:33 PM

    @Nick P

    I’m still working on your challenge… My current impression is that it should actually be quite simple to validate an email address. The only hard part is in parsing the requirements. (I’m ignoring the practicality/utility aspects of it for now, but I will make a note of those).

    Haha, funny thing just happened – I was about to point you to item?id=11434630, but after less than twenty minutes of brief research, I refreshed and saw your 11436040 entry. You’re really fast yo, and incredibly concise while still remaining thorough to boot! 😉

    r April 5, 2016 10:30 PM

    Oh and more ADHD bs for you guys, on the subject of tracking buttons: lapel cameras.

    And on the subject of tea standing up…
    Water can stand, but other beverages sit.

    Wael April 5, 2016 11:15 PM

    @r,

    lapel cameras.

    I see your camera and raise you a Belly button NanEye camera!

    Water can stand, but other beverages sit.

    Yep!
    Standing water: a pool of water of any size that does not flow
    Beverages sit: a coffee / drink shop (I guess)

    You’re the sort that would enjoy this thread. Don’t bother posting there! The thread is closed because yours truly exploited a VVV (Vulgarity Vulnerability Vector) in it. I still remember the joke and laugh at it — lol.

    Figureitout April 5, 2016 11:59 PM

    Thoth
    –The scary part about the emulated eeprom is that they’re not physically separating the memories, just a pointer. Just a pointer away from memory corruption (or getting lost in never-never land). Imagine malware that can create a space outside of your firmware on the fly, use it, then delete it, w/o having to deal w/ protocols to the other memories (avoiding that exposure point). I don’t trust it and won’t use it if I can avoid it.

    RE: cc eal
    –That’s unfortunate, sounds a little political: not evaluating the tech only, but what access you give others to info. Guess it explains why the reports I’ve read are so boring and practically worthless. I’m not convinced anything they’re doing is all that different from other chips. Guess I need to sign an NDA, and potentially take the downfall if someone hacks the PC I store docs on or gets access some other way, to find out.

    RE: ti
    –Wire scrambling… uhh, does a simple continuity check w/ a $2 multimeter crack that? lol. Seems pointless unless you can isolate lines, which would seem like a big pain in the a$$. You can add on DPA/SPA protection if you need it (again, as has been said a million times here, you have to physically protect your assets w/ humans and guns and surveillance if these are legit threats to your IP. An attacker getting unfettered physical access is generally considered game over all across the security field.), same w/ shielding (put sheet metal over chips and board, wow so hard). At the end of the day, somewhere has to be unencrypted (when you’re running instructions on a processor, even if you’re doing something crazy like encrypting instructions to the various parts of the CPU, somewhere it gets decrypted). Encrypt a hard drive and you still need a plaintext bootloader to boot, or you’ve got an encrypted brick.

    RE: stm
    –I haven’t worked w/ them, but I’m sure I or anyone else could make a product (say a simple SD card encryptor) that very likely wouldn’t get hacked, or would be way more effort to hack than it’s worth, so a win in my book.

    RE: nxp
    –Do they verify the USB interface chips? What’s that process like?

    RE: atmel (now microchip basically)
    –Yeah they’ve got some neat chips, want to try them.

    RE: microchip
    –Yeah broken links galore, page is terrible w/ no script. Meh, not allowing. Hey can you read what’s on this page? http://www.microchip.com/design-centers/embedded-security/reference-designs/trustspan :p

    Clive Robinson
    –Yeah that’s what happens when people use your product and expose it to conditions you could’ve never foreseen or tested for. We can ask ourselves all day if we architected our implementation correctly (you yourself said you can never be sure you “did it right”, so you can keep thinking about that if you want; it’s a dead end after a few years…), but if you can’t design & fab your own chips you’re at the mercy of people who can. And instead of slapping a bandaid on, to continue the metaphor, you could pay for invasive surgery (which may do more damage than it solves) that would bankrupt you and your family too (or work w/o deadlines). No guarantee you’ll live much longer either. And the boogie man can still get you.

    Thoth April 6, 2016 1:40 AM

    @Figureitout
    If you don’t like Flash-related chips, just use NXP since they are still promoting 144KB EEPROM while ST and Infineon are already moving to Flash. And… NXP is from the Netherlands. And the Netherlands Govt just funded a round of cash to support the Netherlands’ crypto community, so they are the good guys unlike their American counterparts, right 😛 ? Who knows …

    NDAs are common in the IT industry. They are usually for monopoly and political means.

    Regarding physical, power line and logical tamper resistance: it buys you time and makes life much harder for attackers. It is rather easy to say “decap this chip” or “do this physical attack” to defeat the chip, but what is seldom mentioned is what happens if that is the only chip with the valuable details. You could practise your attack techniques on some similar chips, but when it comes to the target chip it’s a 50-50 of sorts. Either you control the decap or physical attack process very well and extract the secrets, or you blunder and have the acid eat off vital parts of the gates and lines, accidentally tripping some tamper trigger or whatnot and having it do a wipe right under your nose. Every additional defense makes it that much better.

    If the consideration is trusted boot in a tamper-resistant chip like some of the NXP i.MX chips or something similar, you would need to bypass the physical tamper sensors and metal meshes via extended soaking in acid or ion lasers, and hopefully not have the acid eat away or the laser burn too deep and damage the chip, before you can defeat a hardware-protected boot and full disk crypto. That assumes the whole thing takes place within a tamper-resistant, trusted-boot piece of hardware, something that can be done with ARM TrustZone implemented in a tamper-resistant chip with hardware crypto, as in the NXP i.MX case.

    An STM encryptor can be done, and I think the Ledger Blue hardware wallet combines an STM32 chip as a physically separate “Insecure Zone” with an ST31 smart card chip as the “Secure Zone”. Note that “smart card chip” is actually not an accurate name, because these chips can be converted to a TPM as long as the pins allow I2C. What determines whether a secure chip can be used as a smart card is the interface with the ISO 7816 protocol for smart cards. Besides including the ISO 7816 smart card protocol, if you buy one that allows I2C you become even more flexible, and on top of that some secure chip ICs include USB. The USB support is actually USB CCID and not USB Mass Storage, so you can’t actually use it for storage. The USB CCID is to implement the PCSC standard for connecting a smart card to a USB port. The Ledger Blue hardware is partially open source (the OS in the STM32 is targeted for open source) but the OS in the ST31 smart card chip is not, due to the usual NDA requirements. The devs told me that during boot, the STM32 and its environment would be secure-booted via the ST31.

    So if you want a hardware encryptor with an STM32, it can be done, but it would be highly advisable to add the usual tamper-resistant meshes and sensors to encase the STM32 for additional protection, and to wipe keys when tamper is detected. A software-implemented dynamic whitebox crypto to frustrate power line analysis of crypto operations would also be nice on the STM32. The tamper protection is to protect that sensitive plain area from being probed and defeated too quickly, to buy time. Similarly, crypto is about buying time by using a mathematically hard problem to scramble messages as a deterrence. It does not remove the message but simply changes its form. If you want simpler development access, search for a USB Armory chipset and buy one of those. It comes with a tamper-resistant i.MX chip with ARM TrustZone and Secure Boot ready, and you can find technical documents online as well by searching for the specific NXP i.MX chip.

    Regarding the Microchip TrustSpan, it redirects me to the Embedded Security page and does not provide further info.

    Figureitout April 6, 2016 2:32 AM

    Thoth
    –Oh god, trust the Dutch….hmmm :p (all Europeans have stereotypes for each other, it’s hilarious..the Dutch are said to have a balloon in their stomach, b/c they are business-oriented and “branch out” into other areas a lot, that’s the stereotype (well there’s others but I’ll let it be))

    I basically can’t trust anyone but the tech and the physics, and you have to trust teachers, who are humans subject to bribes and severe threats if they don’t comply (don’t teach students secure electronics… just a nightmare of mine..)…

    I don’t mind flash chips (that’s mostly all I’ve known); it’s the lack of separate memories (Atmel for instance: the ATtiny has the handy internal eeprom, and it’s 4 lines of C code to write to it, and like 2 to read (total after me just wrapping it is like 20 lines, hell ya), but their latest ARM chip can’t even spare 512 bytes of real eeprom). Honestly I’ll take a couple millimeters larger chip to get any kind of real eeprom.

    RE: bypassing metal meshes w/ acid
    –Can anyone here make a simple tutorial on taking apart a frickin’ RSA dongle? They’ve sold like millions of those, right? They don’t have screw holes and they epoxy the thing. I had to use pliers to crack it open like a hard nut lol. Otherwise I would need a fume hood to burn open the plastic. That could be designed to break any memory chip etc. Then components were designed to be hard to get at, and no juicy chip right away. Basically you had to completely destroy one to learn how to have a chance at the next one. If you have one board completely personalized… you would likely be safe from this initial threat (unless they eavesdrop on you making it, which is pure ownage anyway). Doing these attacks takes TIME and SKILL and most importantly MONEY.

    RE: redirect
    –Lol, alright everyone may want to avoid that page now as it’s serving different things. It had some separate weirdness on the site which I didn’t check for before. All the chip companies have terribly insecure websites for their products, seriously all of them lol.

    Thoth April 6, 2016 4:33 AM

    @Figureitout
    Trust the Europeans … probably trust anyone 😀 . That was just a remark, but the main point is: use whatever tools are most convenient.

    The RSA dongle has a hard outer nut casing making it very hard to pry open, and besides that, the chip it uses is a smart card chip, so even if you manage not to destroy the chip while cracking the hard nut, you still have to attempt to defeat the smart card chip (a JavaCard chip to be more specific). Follow the link below and then click on the Offerings tab. The combination of all the security features ends up being nice and handy at making life much harder for attackers, in every aspect designed and implemented for security.

    Link: http://www.tokenguard.com/RSA-SecurID-SID800.asp

    Clive Robinson April 6, 2016 4:47 AM

    @ Buck, Nick P, et al,

    I am away from the dead tree cave at the moment, but there is an O’Reilly –light blue– book on RegExp of various forms, and it has an email address format validator in it, and it’s a long scary bit of code.

    I read it several years ago and it had a bit of history as to why email addresses could be so complicated; apparently there are earlier systems that had to be accommodated, which IIRC amongst other things had the domain order the other way around and used shriek symbols (exclamation marks) rather than dots, etc etc. The thought occurs that maybe quite a few of the oddities have actually passed their “end of life” date.

    Oh and of course it’s actually not possible to validate an email address, only its format, without going online and asking the actual server, which might not know or admit it. But for low usage it might be easiest to just ask the server to validate the whole thing, which in practice is what many services currently do by sending you an email to which you have to reply with the correct information.

    Thus sometimes the workable solution is, as Douglas Adams so adroitly put it, to make it “somebody else’s problem”.
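
    For illustration, a minimal sketch in Python of that pragmatic split (my own illustration, not the O’Reilly code): check only the rough shape of the address in code, and leave real validation to the confirmation email.

        import re

        # Deliberately loose format check: something@something.something,
        # no spaces or extra "@" signs. Anything stricter is the mail
        # server's problem (send a confirmation email, wait for the reply).
        _EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

        def looks_like_email(address: str) -> bool:
            return _EMAIL_RE.match(address) is not None

        print(looks_like_email("user@example.com"))  # True
        print(looks_like_email("not-an-address"))    # False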

    Clive Robinson April 6, 2016 5:08 AM

    @ Wael,

    I still remember the joke and laugh at it — lol.

    Not that old chestnut about “You only need eyes to see an enzyme but you need ears to hear a….”

    There are many such jokes; I can think of four immediately, including one based on a famous English novelist’s name, and a line of much more tedious work…

    Clive Robinson April 6, 2016 9:31 AM

    Yet another reason to hate Google

    As many know, Google reformed itself and became Alphabet. In the process it moved various parts around.

    One such part is NEST, which is a home device –computer ‘white goods’– design manufacturer and supplier.

    Well they have decided to prove just what a bad idea it is to give money to the Chocolate Factory, as they have in effect decreed you don’t own the hardware you pay for…

    The EFF have a write up on this,

    https://www.eff.org/deeplinks/2016/04/nest-reminds-customers-ownership-isnt-what-it-used-be

    Their final advice is effectively “Don’t buy from the shysters now or ever”… Which in the circumstances appears to be a good idea.

    Nick P April 6, 2016 10:55 AM

    @ Onetime

    It implies the regex is unreadable and horrible. If it fails, it also doesn’t explain why. The point is, though, that such a horrible way of doing the job means it’s way too hard to see why it fails.

    @ Buck

    “Haha, funny thing just happened – I was about to point you to item?id=11434630, but after less than twenty minutes of brief research, I refreshed and saw your 11436040 entry. You’re really fast yo, and incredibly concise while still remaining thorough to boot! ;-)”

    I appreciate the compliment but have no idea what those items are. The Schneier comment id’s start with c6-something. So, I’m a bit puzzled on that one.

    @ Buck, Clive

    I have a better challenge. In the near future, I’m going to call out Thomas Ptacek on Hacker News yet again about some bullshit he peddles. Not just him, though: many in reverse engineering communities believe the same thing. That’s that source availability is not important because you can assess the security in assembly just as well. I countered that, aside from taking more time, the assembly lacks stuff connected to requirements, operational assumptions, and context that affect security. They disagree. Thought about cramming a ton of esoteric vulnerabilities into one assembly file to test them, but that’s too time-consuming. A brilliant idea came to me yesterday.

    A $125 million software flaw

    Essentially, the test recreates that flaw in simplified form in assembly. It’s expressed as a control system that reads two inputs, does conditional checks, and calls one of two functions as a result. In assembly, the numbers will all look the same if no debug information or whatever is there. You won’t see an error present. Then, when I show the source, you’ll see one variable clearly labeled miles and another kilometers. Or comments above the functions doing inputs that say what units they’re using.

    So, what do you think? Excellent counter to the assembly-only verification that some believe in?
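
    To make the contrast concrete, here’s a toy sketch of the flaw in source form (my illustration, not the actual test): compiled and stripped, nothing in the wrong call flags the mismatch, but the source-level names carry the units.

        KM_PER_MILE = 1.609344

        def distance_reported_miles():
            # One subsystem reports distance in miles...
            return 120.0

        def burn_time_for(distance_km):
            # ...while the other expects kilometers.
            return distance_km / 3.0

        # The bug: same machine code shape as the correct call below,
        # but obvious once the names say what units they carry.
        wrong = burn_time_for(distance_reported_miles())
        right = burn_time_for(distance_reported_miles() * KM_PER_MILE)
        print(wrong, right)  # 40.0 vs ~64.37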

    Nick P April 6, 2016 1:30 PM

    @ Clive Robinson

    A conversation came up in another forum on key signing. People were talking about protecting the main key in a certificate authority or whatever. These increasingly rely on black boxes. Ideally, the system would do only what it’s supposed to, with easy 3rd-party inspection and few side channels. Just got a wild idea: do the key entry and encryption part with analog circuits using widely available components.

    The analog, special- and general-purpose, computers I looked up basically implemented specific mathematical operations using basic components. Usually some supporting circuitry to counter noise and such. I wonder if one could implement primitive ops in RSA or ECC in analog. If so, any digital portions would basically be I/O and could be done with any cheap process subject to visual inspection. That there’s no RF in the design and key components are analog means EMSEC should be very basic.

    What do you think on the concept in general? And I’m also interested if you’ve already heard whether RSA/ECC primitives can be represented that way.

    Gerard van Vooren April 6, 2016 1:43 PM

    @ Nick P,

    I’ve written a couple of thousand lines in Rust, just translating some C in a Rusty way. Rust certainly has some nice spots, like immutability and the memory management. Do I like it? No. Rust has way too much compiler-error stuff in it. I think a C++ programmer loves it, but a C++ programmer is already used to piles of crap. Why don’t I like it? Well, they have too many loose ends, too many ways of doing things where one way would be enough, and they keep adding new stuff each release. Just creating a cargo is not easy. There are things you have to keep in mind, and the documentation, although quite good, is scattered. And literally each thing that you write results in compiler errors. Also the one-liners spanning many LOC can be hard to read and modify. The lifetime stuff in it, although very well thought out and quite unique, is just a pain. I think the problem that it solves just isn’t worth the burden.

    And finally, where with Ada you can specify each bit and even create new types that are, for instance, 6 bits, with Rust there is nothing like that. Serialization is a lot harder in Rust than in a Pascal-family language.

    For me, the hype around Rust is just advertisement…

    Wael April 6, 2016 3:15 PM

    @Clive Robinson,

    Not that old chestnut about…

    Not that one! The moderator for some reason removed the one I posted. It was a “Confucius say” joke. I’m guessing it was removed because he viewed Confucianism as a religion rather than a philosophical system of ethics. Which, in that case, would be the appropriate thing to do.

    Nick P April 6, 2016 7:27 PM

    @ Curious

    “Keith Chu, a spokesman for Wyden, said that although the senator is a member of the Intelligence Committee, he has not yet been briefed on how the FBI hacked the device.”

    The most interesting part. I wonder if they’re keeping him in the dark due to his opposition to surveillance state or if they just haven’t gotten around to it.

    @ Gerard

    Thanks for the writeup. I was imagining plenty of difficulty but that sounds horrific. Now I’m really not looking forward to it, especially the frequency of compiler errors. Funny about C++ people being used to it. I do wonder how many issues you ran into were from how it merges ideas from imperative and functional languages vs just regular issues. Might be interesting to have someone with an OCaml background try it and write comparisons. Its main objectives are memory safety, concurrency safety, performance, and community. Everything else seems to take a back seat.

    Far as Ada goes, I’ve countered the Rust community and compiler leads enough with Ada citations that they’re more careful with their claims now. Might even learn something from it haha. I still promote Ada on Hacker News and other places, plus reference work done in it like IRONSIDES DNS and the Muen microkernel. The consensus among those of us there who know about safe system languages is that Modula-3 was the best one. That with some minimal extensions would kick ass. Or Component Pascal w/ BlackBox. Ada w/ SPARK are the reigning champs on the technical side for safety. Yet the only thing getting adoption that’s not another scripting language or heavyweight VM is Rust. Still watching Julia, Nim, and RED, though, as there are good attributes in them plus active communities.

    One idea I had for an Ada programmer questioning Rust was re-coding IRONSIDES DNS in Rust. That would give us a security-oriented DNS… one thing on list of John Nagle’s must-have apps… coded in a safe, mainstream language. Might bring in code contributions. More importantly, doing both side-by-side might show problems in the app, languages, type systems, or compilers that we’d otherwise not see. Not to mention see how well Rust stacks up against reigning champ. Thing is, it would be best for an Ada programmer to do the port and they have no intention of switching that I’ve seen. 🙂

    Nick P April 6, 2016 7:36 PM

    @ Wael, Anura

    I see you on. Feel free to jump in on the question I have to Clive about implementing key signing (or verification) in analog hardware to make it easier to inspect. The thing I have to know is if the RSA or ECC operations can be implemented with the primitives that analog computers have. If they can, then an analog machine could be built. Otherwise, maybe not.

    Here are the operations an analog computer supports:

    1. Addition
    2. Integration with respect to time
    3. Inversion
    4. Multiplication
    5. Exponentiation
    6. Logarithm
    7. Division

    It will likely pipeline them as well since analog systems run continuously.
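
    For concreteness, the digital operation RSA would need those primitives to reproduce is modular exponentiation. A toy sketch (textbook RSA with toy primes, no padding, purely illustrative): multiplication, exponentials and logs map onto the list above, but the discrete “mod n” is exactly what continuous circuits lack.

        # Textbook RSA core -- illustrative only, toy-sized primes.
        p, q = 61, 53
        n = p * q                          # public modulus
        e = 17                             # public exponent
        d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

        m = 42                             # message
        c = pow(m, e, n)                   # encrypt: c = m^e mod n
        assert pow(c, d, n) == m           # decrypt: m = c^d mod n
        print(c)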

    Wael April 6, 2016 8:41 PM

    @Nick P, @Anura, @Clive Robinson,

    Just got a wild idea: do the key entry and encryption part with analog circuits using widely available components.

    You’ll have to think of different paradigms of encryption, I believe. How will you feed a file to an analog computer for encryption? The file is digital to start with, and the analog computer doesn’t speak digital.

    You might want to look at this paper, especially table 1. You’ll also need to brush up on Malvino’s book, remember him?

    I thought about this idea from a confidentiality perspective looong ago. Went nowhere with it, although I have some ideas in mind. Are you starting to have the same nightmares I had?

    Nick P April 6, 2016 10:42 PM

    @ Wael

    “How will you feed a file to an analog computer for encryption?”

    You can do that with a digital system for I/O on easier-to-verify components, or straight up patch cables and shit lol. Far as implementation goes, it was actually this demo that probably led to the current idea. Going from the math to the analog with EE experience seems straightforward if the math operations fit the ones I listed.

    “You might want to look at this paper, especially table 1.”

    Hell yeah, thanks for the link! My collection on analog computers is tiny compared to the rest, as I focus on the most practical and general-purpose info. This fits in nicely. Might even be a bit foundational, from my skimming.

    “You’ll also need to brush up on Malvino’s book, remember him?”

    Malvino? Is that an Italian coffee or chocolate bar? Not sure what the significance here is for an amateur getting high-level view.

    Note: Same response applies if Malvino was EE-related. I’ll leave it to you to ponder the interconnection between coffee, chocolate, and EE. (pauses) Shit, that wasn’t that hard actually…

    “I thought about this idea from a confidentiality perspective looong ago. Went nowhere with it, although I have some ideas in mind. Are you starting to have the same nightmares I had?”

    “What are you getting at anyway?”

    Still applies to linked statement anyway. Saying further, your delay’s existence and specific value would represent the secret. Remember that NSA’s interception technique was an attack on delay where they responded faster than the site you tried to access. The delay was in essence the identifier. Easily gamed with sufficient resources. Another form of it was in fake signals from radios like spoofed basestations, although those typically use power I think. Delay would be another attack.

    I think what you were actually trying to do was authenticate one device against another using the electrical and logical properties of the line in search of a new trade secret, patentable invention, or immediate solution to a project problem on low end. This has been done in academia in papers I’ve seen, maybe including with delay. I can’t remember the variable off the top of my head. I do recall thinking such approaches should only be used for obfuscation combined with monitoring rather than real security.

    So, there’s your answer. Sleep deprived as I produce it but hopefully it’s helpful. 🙂

    Clive Robinson April 7, 2016 1:33 AM

    @ Nick P,

    That’s that source availability is not important because you can assess the security in assembly just as well.

    The Imperial / SI argument is not a good example to pick, because it’s a “non-functional metadata” argument.

    The actual bug in this case was one in the specification, not the code, and would only have been found by functional integration tests external to the subassembly the code was in. In effect what was missing or wrong was the value of a multiplicative constant.

    Yes, someone examining the specification correctly would have seen the error, but from that point downwards it’s a crap shoot of what people decide to name things. It’s most likely that a software engineer would have functional naming for the inputs, not the units they were in. The only place I would expect it to be found would be in a conversion subroutine or conversion constant. Even then the programmer could have named it “input scaling conversion” and not mentioned the actual conversion formula.

    To see why, tell me what these do,

    A=BxC, V=IxR, P=IxV, P=VxT, O=I1xI2

    Thus if the code writer was one who actually believes in “full commenting” / “documenting in the source” –something I try to do on the “least surprise” principle– then it could have been picked up on. If however it was written by a “Macho Code Cutter” who eschews comments as being for wimps, then no, it would not have been in the source code…

    Thus your argument is a powerful one for “engineering level documentation” in source code, but a fairly irrelevant one for reverse engineering.

    To be a powerful RE argument you would have to show that a bug would be obvious in the source if all variable names had been replaced by sequentially numbered names not in any way reflecting their function, or for that matter their type.
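
    To illustrate, rename everything in the earlier miles/kilometers toy sequentially (my illustration, same numbers as above) and even the source tells you nothing:

        # Same computation, names anonymized: is f2(f1()) a bug? The source
        # no longer says -- which is roughly what stripped assembly gives you.
        C1 = 1.609344

        def f1():
            return 120.0

        def f2(v1):
            return v1 / 3.0

        v2 = f2(f1())        # the units mismatch is now invisible
        v3 = f2(f1() * C1)   # and so is the fix
        print(v2, v3)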

    I’m not arguing against your point that having the source code is better than just an executable dump; I’m arguing that you have picked an example that would be at best difficult to defend.

    I’d let others who have worked on hardware control code have their say, before you think about progressing further with it as an argument.

    Clive Robinson April 7, 2016 2:27 AM

    @ Nick P,

    Time to rain on your parade again or at least add the “noise” sound effect.

    The first and important thing to keep in mind is,

      Digital computers are composed entirely of analog circuits

    The second is,

      All circuits have noise issues

    Mostly what you call “analogue computers” were “differential analysers” and could be made with motors and gears, and prior to WWII they were made just that way. They multiplied or divided numbers by having a flat disk driven by a motor; on this disk was a wheel with its own little rubber tire that drove a second shaft as the output. The difference in speed between the two shafts was the result of the diameter of the wheel and how close to the center of the disk it was. Obviously the output was slow when close in and greater when far out.

    Thus you had, in theory, an “infinitely adjustable gear” or multiplier. In practice you had nothing of the sort, due to mechanical slop, variable friction, and other mechanical nasties putting a lot of noise on the output shaft… So your multiplication was randomly variable over a small but fixed range. Thus you could only divide or multiply by amounts that were small.

    All analog systems suffer from this limited range issue due to fixed output noise ranges. Thus the third thing to remember is,

      All analog systems add noise thus each stage has a Signal to Noise range that limits its usable gain and range.

    What a “digital circuit” does is try to make the noise a very small fraction of the output range, by only having two output levels, as far above the input bias level as it can get and as far below it as it can get. That is, you drive the outputs into saturation to get over the additive output noise. Early CMOS gates had their input biased to approximately half way between the supply rails; you could with care use them as amplifiers with a gain of up to around ten by adding two resistors in the same way as an inverting OpAmp circuit. But they were noisy, not particularly linear, and thus had a limited input range.

    You can make a digital gate with an OpAmp: you simply bias it up around half the rail voltage with a gain of around ten. You then have multiple input resistors; the result, in positive logic, is a NOR gate, which like the NAND gate is considered a universal gate. Thus you can see that you can build a computer with OpAmps if you so desire. Which in effect all digital computers are.

    You can also pick values of resistors around your OpAmp to have analog addition and multiplication, but in a way where the output range is limited to, say, four levels. But you would have to divide the input levels up into a smaller range, which brings you quickly into the noise… Which is why we generally don’t do that unless there is a significant reason to do so (like having a communications channel of fixed bandwidth where using both phase and amplitude modulation gives a greater bit rate whilst staying within the channel baud rate).

    The best signal-to-noise ratio you are reliably going to get is around one part in four thousand, which in an adder would need the inputs limited to 1 in 2000, and for a multiplier 1 in 63. Neither of which is going to give you much on the maths front. The issue of correctly scaling an output to an input quickly becomes a complete nightmare. Unless you find a way around it, the easiest of which is to not need it, which brings you back to binary level signaling….
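
    A quick back-of-envelope check of those figures (my arithmetic, on the stated one-part-in-4000 range): two added inputs share the output range, so each gets half of it; two multiplied inputs multiply their ranges, so each gets roughly the square root.

        import math

        snr_levels = 4000                     # ~1 part in 4000 usable output range
        adder_inputs = snr_levels // 2        # 2000 levels per input of an adder
        mult_inputs = math.isqrt(snr_levels)  # 63 levels per input of a multiplier
        print(adder_inputs, mult_inputs)      # 2000 63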

    So whilst it looks like a nice idea from a 20,000 ft view, on the ground it’s a very thorny briar patch, best worked around, not through.

    Clive Robinson April 7, 2016 6:38 AM

    Is ITSec as bad as Nutrition?

    The UK newspaper The Guardian has published an interesting article.

    On the surface it appears to be about the very shabby behaviour of eminent nutritionists (Keys in particular) over the past 60 years.

    However, read the story not so much as LowFat-v-LowCarb but for the how and the way that politics dictated against scientific evidence.

    Then think about our current CryptoWars II situation…

    http://www.theguardian.com/society/2016/apr/07/the-sugar-conspiracy-robert-lustig-john-yudkin

    Wael April 7, 2016 8:17 AM

    @Clive Robinson,

    This might be of interest to one or two people,…

    Took a quick glance at it. It does seem interesting, but I don’t see the innovative part yet.

    Nick P April 7, 2016 1:11 PM

    @ Clive Robinson

    re assembler argument

    Well damn, I’m back to the drawing board. Appreciate the review. 🙂

    re analog

    I’m not sure about the focus on mechanical stuff, as I’m talking about electronic ones. They did a lot better than the mechanical ones. Your posts on noise and precision of the data look accurate: most writing on analog indicates that will bite me in the ass. The only question is how much. I’ll take your word that it’s too much for now.

    Still can use older process nodes or hand-wired constructions for security-critical stuff. 🙂

    re RISC-V

    Unfortunately, I don’t have an account. Could you summarize key observations and predictions?

    ianf April 7, 2016 3:15 PM

    OT: The face of American fascism: veneration of punitive physical violence.

    [As retribution for losing a verbal argument in his courtroom, a judge ordered a bailiff to punish a defendant in the dock by way of activation of] a 50,000 volt-capable stun cuff attached to said defendant’s ankle for about five seconds via a remote control held by a courtroom deputy. The defendant fell to the ground and screamed in pain. […] For that, the judge subsequently received a sentence of a year’s probation and was ordered to attend anger management classes. Checks and balances…

    http://arstechnica.com/tech-policy/2016/03/judge-who-ordered-man-to-be-shocked-must-take-anger-management-classes/

    Nick P April 7, 2016 10:45 PM

    @ tyr

    I missed that one. Seems pretty accurate. Actually, some of that was already in the Snowden leaks. I think that’s part of the intent, though.

    Thoth April 7, 2016 11:23 PM

    @Anura, Nick P, Clive Robinson
    Are there any security risks (other than brute force) in stretching a PIN or password into a 256-bit encryption key by doing HMAC(K, m), where K is the PIN or password (zero-padded as per HMAC requirements) and m is a 64-byte zero-filled array, and then using the HMAC output as the 256-bit encryption key?
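
    A minimal sketch of that construction as I read it (the SHA-256 choice and the names are my assumptions; note HMAC zero-pads the key to the block size internally anyway):

        import hmac, hashlib

        def pin_to_key(pin):
            # K = the PIN/password (HMAC zero-pads it to the block size),
            # m = a 64-byte zero-filled array, output = 256-bit key.
            return hmac.new(pin.encode("utf-8"), bytes(64),
                            hashlib.sha256).digest()

        key = pin_to_key("123456")
        print(len(key) * 8, key.hex())  # 256 bits
        # Note: a single HMAC call stretches the length, not the work factor;
        # a brute force still only has to search the PIN space.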

    Thoth April 7, 2016 11:25 PM

    @Anura, Nick P, Clive Robinson
    I forgot to mention that the functions for the key stretching must be available and easily implemented in hardware like a smart card, hence my suggestion of a smart card supporting a crypto hash or even HMAC itself natively. SCRYPT and BCRYPT are not supported and are expensive to use on smart cards.

    Clive Robinson April 8, 2016 1:26 AM

    Pressure on Apple From Turkey

    Reported via a French newspaper[1], it is said that a father who apparently lives in Istanbul, Turkey but is originally from Rome has written to Tim Cook over the death of his son.

    The father wants Apple to unlock the phone to get at two months’ worth of photos and notes made by his terminally ill son. According to the reports the son had allowed his father access, but this no longer works,

    http://arstechnica.com/tech-policy/2016/03/father-begs-apple-ceo-to-help-unlock-his-dead-13-year-old-sons-iphone/

    To be honest I would expect more stories like this to emerge in the future as the press, amongst others, try to put politically inspired pressure on Apple.

    [1] There are so many different languages and countries involved in this story, and the father could not be contacted, so I’m far from certain that all the details are correctly reported or that the story is real.

    ianf April 8, 2016 2:39 AM

    OT: Memorable, AND THEN Impossible To Wipe Out Thought Department:

      […] a lot of people don’t think lesbians (that is, real human lesbians, not the male porn fantasies) have any fun. “People think we just sit at home in sensible shoes reading feminist theory to our cats” […]

    Found in “20,000 lesbians in the desert: welcome to the Dinah Festival, a world without men”

    http://www.theguardian.com/lifeandstyle/2016/apr/07/dinah-lesbian-festival-women-palm-springs

    Clive Robinson April 8, 2016 2:56 AM

    Possible cure for arterial plaque is sweet

    There is a common food additive, called cyclodextrin (CD), that is also used in drugs and is the trick behind “powdered alcohol”.

    Due to research by parents of children with a rare genetic disorder, it was found that CD was a plaque buster and thus a potential life saver for the children. Apparently the effect of reducing the killer cholesterol plaque on the inside of arteries, which causes arteriosclerosis, had been seen before by scientists, but they had not picked up on it in the past.

    Well the parents have kickstarted new research that appears to confirm the findings. Importantly, as CD is already an approved substance for injecting into the human body, human trials should be able to start fairly quickly…

    If you want to see more of the current research,

    http://stm.sciencemag.org/lookup/doi/10.1126/scitranslmed.aad6100

    Clive Robinson April 8, 2016 3:13 AM

    @ ianf,

    Did you ever watch the Robin Williams film “Good Morning, Vietnam”?

    It had a bit in there about lesbians and their footwear and its use as a euphemism.

    ianf April 8, 2016 3:29 AM

    It does/ it did? Strange, I missed that bit. The only threads I remember from that movie were the Williams character’s unrequited romantic pursuit of a Central Casting Vietnamese local female, and some conflict with the generals for whom war was a serious business, not meant to be made entertaining (or something). There was no place for a lesbians-in-sensible-shoes argument. Could it have been some other comic and/or movie that you misremembered?

    Thoth April 8, 2016 5:26 AM

    @Anura, Clive Robinson
    I wonder if the following dynamic whitebox would work on an XOR-based scheme, in regards to HMAC requiring the HMAC key to be XOR-ed with the fixed values 0x5C and 0x36, so that the code can be open source without compromising the security of the dynamic whitebox. The possible side channel in HMAC might be the XOR it uses, which may show itself in a power analysis.

    During device setup, 2 sets of fixed randoms, FR1 and FR2, are to be generated, where FR1 and FR2 each contain 2 elements (2 bytes). A random byte would be generated and XOR-ed with 0x5C to get a result, and the random byte and the result for 0x5C are placed into FR1. Similarly, a random byte is generated and XOR-ed with 0x36, and both the random and the result are stored into FR2. Both the FR1 and FR2 results can be stored inside a key slot if there is hardware key protection, simply for the sake of tamper resistance and nothing else.

    During the HMAC operation, for the XOR portion of the algorithm the HMAC key is required to be XOR-ed with 0x5C and 0x36, but this is too direct and might show up in a power analysis, so instead it will be XOR-ed against FR1 and FR2 by taking ((HMACKey XOR FR1[0]) XOR (HMACKey XOR FR1[1])); then along the way 2 random bytes are also generated and XOR-ed with the HMACKey bytes as dummy rounds. Besides these, the dummy and real rounds of XOR operations can be shuffled randomly, with the intention of providing as much noise as possible against power analysis.

    What are your thoughts on such a dynamic whitebox method that doesn’t require any hard-coding of the fixed values?
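
    One nit first: as written, ((HMACKey XOR FR1[0]) XOR (HMACKey XOR FR1[1])) cancels the key and always yields 0x5C, so I read the intent as (HMACKey XOR FR1[0]) XOR FR1[1], which unmasks to HMACKey XOR 0x5C. A minimal sketch of that reading (names mine; Python only demonstrates the logic, not the power behaviour of a card implementation):

        import os, random

        def make_masked_pair(const):
            # Setup: store (r, const ^ r) instead of the pad constant itself.
            r = os.urandom(1)[0]
            return (r, const ^ r)

        FR1 = make_masked_pair(0x5C)  # opad byte, masked
        FR2 = make_masked_pair(0x36)  # ipad byte, masked

        def masked_pad_byte(key_byte, pair):
            # One real round -- (K ^ r) ^ (const ^ r) == K ^ const, so the
            # constant never sits directly on the data path -- hidden among
            # dummy rounds XORing fresh randoms, executed in shuffled order.
            def real():
                return (key_byte ^ pair[0]) ^ pair[1]
            def dummy():
                return key_byte ^ os.urandom(1)[0]
            rounds = [(True, real), (False, dummy), (False, dummy)]
            random.shuffle(rounds)
            result = None
            for is_real, op in rounds:
                value = op()
                if is_real:
                    result = value
            return result

        assert masked_pad_byte(0xAB, FR1) == 0xAB ^ 0x5C  # unmasks correctly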

    ianf April 8, 2016 5:59 AM

    @ Clive,

    A single pedestrian in-movie/ on-air “can’t-say-lesbian-no-more” unfunny joke amounts in your mind to a full-fledged memorable thread from that movie??? Also: the cat component is missing, as critical an element in this context as any footwear. NO CATS WILL EVER COME TO HARM OR BE SUPPRESSED IN MY PRESENCE.

    PS. I noted how you promote the old curmudgeonly means of no-hypertextual referencing by not using the overt granular link to the item in question, but the default one to the page + verbose location-in-page instructions, as if this were still an analog medium.

    Dirk Praet April 8, 2016 6:00 AM

    @ Clive, @ Bruce

    Re. Pressure on Apple From Turkey

    Reported via a French newspaper[1], it is said that a father who apparently lives in Istanbul, Turkey but is originally from Rome has written to Tim Cook over the death of his son.

    Wouldn’t this be a great idea for a new contest on this blog? I mean, @Bruce handing out a copy of one of his books to whoever comes up with the saddest story or most compelling business case for Tim Cook to burst into tears and immediately unlock that phone by any means necessary.

    ianf April 8, 2016 7:22 AM

    THE GUARDIAN: San Bernardino iPhone hack won’t work on newer models, says FBI

    […] The hack is also “perishable”, according to FBI director Comey, because at any moment Apple could update iOS for the iPhone 5C and render the hack inoperable[…] FBI could help local and state law enforcement by simply unlocking the older iPhones for them, but that evidence gained this way could not be used in court.

      (Innate humility and propriety forbids me from uttering the ack-so-apt-here “told you so” phrase.)

    http://gu.com/p/4t6qt

    @ Dirk‘s proposal for @Bruce to “hand out one of his books to whomever comes up with the saddest-sack story or most compelling business case for Tim Cook to burst into tears […]”

    I’m all for iCook to go a-crying, but why reward that with Bruce’s oeuvre? Wouldn’t “The Pet Goat” be a more suitable consolation book choice?

    CallMeLateForSupper April 8, 2016 9:55 AM

    “Richard Burr’s Encryption (AKA Cuckoo) Bill, Working Thread
    Published April 8, 2016 | By emptywheel

    “A version of Richard Burr and Dianne Feinstein’s ill-considered encryption bill has been released here[1]. They’re calling it the “Compliance with Court Orders Act of 2016,” but I think I’ll refer to it as the Cuckoo bill.”

    [1] In case you care, the “discussion draft” is a PDF
    https://www.justsecurity.org/wp-content/uploads/2016/04/Burr-Feinstein-Encryption-Bill-Discussion-Draft-The-Hill.pdf

    ianf April 8, 2016 1:05 PM

      [Compound replies ahead. Assume the praying-mantis position.]

    @ Wael: “Gentlemen, ianf,”

    First of all, I VEHEMENTLY OBJECT to being singled out AND not counted in by default among, ergo excluded from Gentlemen. You wanna I should call ya some ungentlemanly names? (pace Elmore Leonard).

    Furthermore, I see you have difficulties letting go of your Once Hour of Delusional Glory, a.k.a. the Waelian slip instance. Perhaps, let the sleeping dogs lie?

            […]

    […] “Programmers that like to get cutsie by nesting, concatenating, and pipping regular expressions (…) should be sent to a security concentration boot camp.

    Hey! Easy on that KZ-dispatching fever. Also, don’t knock (recursively nested & invoked) regular expressions… stuff that keeps one outwardly concentrated and sane when obliged to attend departmental conferences with endlessly droning PowerPoints. Besides, I’ve never had so much fun as when I scripted text-processing apps with HyperTalk transparently calling on services of the grep- and repXCFN.

    From: Temporarily Waylaid Response Dept.

    @ Wael: Since when do Irish talk like this?

    @ albert “has been using country-specific searches for years (e.g.: letras bossa nova site:br), but never realized the depth of possibilities in [wtf?] dorking. (cc @Desmond Brennan)

    If using openly published granular search syntax is considered an instance of abuse-inviting or malevolent hacking, then there’s something wrong with

    1. the parties that say so, plus their chains of supervisors; and
    2. the accessible records-keeping entities that put them up for ocular and mechanically-assisted perusal in the first place.

    I.e. blame the submitters and maintainers of these records, not the tool to extract the intel.

    Wael April 8, 2016 1:53 PM

    @ianf,

    [Compound replies ahead. Assume the praying-mantis position.]

    Yeah, right. When I saw my name associated with this warning, I assumed the fetal position.

    First of all, I VEHEMENTLY OBJECT to being singled out AND not counted in by default among…

    lmao! That was good 🙂 No worries, though! You’re an “Officer and a gentleman” so long as you stop harassing others without valid justifications.

    I see you have difficulties letting go of your…

    I’m like an elephant or a camel. I never, ever let go until you explicitly ask me to stop[1]. Ask @Figureitout, @Buck, @Nick P, @Dirk Praet, and his majesty @Clive Robinson. Crap, you might as well ask @Bruce, too. But he’ll likely ignore you 😉 One slip, and you’re pigeon-holed 🙂

    Si può benissimo pensare così… (“One can very well think so…”)

    Interesting story. Read: Ho-fricken-hum … 🙂

    [1] As you can attest to that. I no longer correct your broken-ass English and sentence construction and strange formatting, per your request. Remember, gentle-punk? Lol, hmmm was the last sentence fragmented?

    Clive Robinson April 8, 2016 6:50 PM

    @ Wael,

    I’m like an elephant or a camel. I never, ever let go until you explicitly ask me to stop

    I thought I had told you how the English once bred bulldogs?

    Put simply, they were bred for their ability to bite and hang on regardless of pain or other distractions. One selection process was to teach them to hang onto a wooden bar; if they were deemed not to have promise they were slaughtered. Those with promise were further trained on a pivoted wooden bar with one dog on either side; again those with promise survived. Apparently the last test for breeding was that when the dog had jumped up and got a good bite on the bar, they would chop its hind paws off and see if it maintained its bite. If it did it was finally allowed to breed. Though without its hind paws you have to wonder how it managed…

    Oh and remember it was said around that time that “An English Gentleman treats his animals better than his servants”…

    But generally they did not actually get their hands dirty; they employed the “bowler hat brigade” of really nasty pieces of work, their apprenticeships not too dissimilar to the bulldog breeding program, just nastier. Which is why the term “Gentleman’s Gentleman” has a double meaning; usually they had worked their way up from being “footmen” who ran alongside the coach to deal with vagabonds and other nuisances that got in the way.

    Ever heard of “last man standing”? It referred to a selection process for such Gentlemen, but it was a bit more brutal than just being the last on your feet after the melee. Not only were they expected to be standing, but kicking as well, so the fallen learnt an additional lesson about who “the best man” was. Oh and over in the molasses, rum and sugar plantations, “Bucks” –large slaves– were used for bare knuckle fights to amuse their Gentleman owners, and large sums would be wagered; as a result the victor was often encouraged to mutilate the fallen, sometimes by manual gelding and gouging (castration and pulling out of the eyes, which was once also used on poachers when caught)…

    @ ianf,

    So being called a Gentleman might be considered the mark of a psychopath…

    Which, speaking of films as we did earlier, might be why the droogs in A Clockwork Orange wore bowler hats…

    P.S. The use of “bowler hat” here is generic; I know that they actually had other names, but few would even recognise the names as hats, especially the more regional ones.

    Wael April 8, 2016 8:56 PM

    @Clive Robinson,

    I thought I had told you how the English once bred bulldogs?

    I vaguely remember something like that. Either way, you just told me. Fascinating story. Would you call it intelligence-driven evolution? I heard that during a period in the middle ages in Europe, a newly-born child was dropped in a tub of alcohol for a few seconds. If it survived, then it was “strong” and a good breed. If it didn’t make it, oh well… It’s not fit.
