Comments

Ted August 19, 2016 4:34 PM

“The Security Advisor Alliance was founded in 2013 by a group of 35 dedicated information security leaders. The organization was built on a foundation of helping each other, growing the space and giving back to their working and living communities. Today the Alliance has over 400 members on three continents.”

Most recent podcasts (including show notes)

Episode #35 Finding, Recruiting and Developing Top Talent
Episode #34 The Challenge of Cybersecurity Education
Episode #33 The Transition from IT Exec to Security Exec
Episode #32 What’s Up with That Email Tag?

Daniel August 19, 2016 4:40 PM

So far the most comprehensive overview of the recent NSA leaks I’ve discovered is being covered at the following link, with daily updates.

https://www.riskbasedsecurity.com/2016/08/the-shadow-brokers-lifting-the-shadows-of-the-nsas-equation-group/

CallMeLateForSupper August 19, 2016 5:41 PM

Didn’t NIST recently nix txt 2FA, and didn’t IRS then shutter its still-warm 2FA program? Or did I dream all this?

“Seeking to enhance online protections, [Social Security Administration] required ‘my Social Security’ account holders to use a password sent to them via text message.

That was a problem for some older folks who don’t text message and don’t plan to.

“’Who’s the youngster who dreamed up the idea of text messaging for senior citizens?’ Franklin, 73, and Janice Moses, 70, of Arlington, asked in an email to the Federal Insider.

“SSA officials got the message, not sent by text, and reversed course. Text messaging is no longer required.”

https://www.washingtonpost.com/news/powerpost/wp/2016/08/17/seniors-balk-and-social-security-backs-off-text-messaging-requirement/

So, SSA hears its customers but does not hear NIST. In this particular case, I guess that’s progress.

r August 19, 2016 5:53 PM

@CallMeLateForSupper,

I guess it’s one thing when all the youngin’s stand up and picket the banks, but if the geezers get all riled up it might raise insurance rates across the board… heart attacks, blood pressure medicine, viagra for those late night meetings with new faces. It could be absolutely dire to the currently frail American economic situation to have all that subsidized mobility being requested and exploited donchaknow.

Chad Walker August 19, 2016 6:14 PM

Come on, gang, it’s the weekend! You all should be playing CRYPTOMANCER, a tabletop fantasy role-playing game about hacking, informed by real-life cryptography and networking fundamentals.

http://cryptorpg.com

Bruce has a copy, why don’t you?*

*Granted, I sort of just insisted on mailing him one, but that’s beside the point.

Clive Robinson August 19, 2016 7:10 PM

@ Daniel,

So far the most comprehensive overview of the recent NSA leaks I’ve discovered is…

Do they have an answer for the Bitcoin problem?

As some will know, a million BTC is a significant fraction of the entire number of BTC mined so far.

For anyone to try to get/buy a million bitcoin will either drive the BTC price through the ceiling and/or make it absolutely clear who is buying them. It’s kind of like asking for a million half-carat diamonds: not something you can do without people noticing.
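As a rough sanity check on that fraction (the circulating-supply figure below is an approximation; roughly 15.8 million BTC had been mined by August 2016):

```python
# Assumption: roughly 15.8 million BTC mined as of August 2016
total_mined = 15_800_000
ransom_ask = 1_000_000

share = ransom_ask / total_mined
print(f"The ask is about {share:.1%} of all BTC in existence")  # about 6.3%
```

Accumulating anything close to that share on open exchanges could not be done quietly, which is exactly the point about the diamonds.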

Thus the reason I think the “ransom demand” is pointless, and whoever posted the files knows this. They deliberately ask for what is in effect an impossible task… Which makes me think it is either another nation’s IC or, more likely, a whistleblower, setting the NSA et al up to fail, thus giving them a plausible excuse for their next actions.

r August 19, 2016 7:23 PM

@Clive,

Ah Clive, the proof is in the pudding; if the NSA was into framing other countries then the bidding would be easy-peasy. Just jackpot someone you’ve been monitoring like, forever (and ever) and frame someone else. Maybe do it a couple times.

They obviously can’t re-appropriate the coins from the current auction; it’s a shame, had this happened sooner that would’ve been a nice chunk to allocate.

r August 19, 2016 7:29 PM

@Nicaragua,

This whole bitcoin auction thing is more your style, you know – all quid pro quo.

r August 19, 2016 7:33 PM

@ScottD,

That’s one way to avoid giving out one’s password.

Without even so much as a smile, mind you.

r August 19, 2016 7:38 PM

@Clive,

What would be really funny (think fish), is if one of the major underground drug marketplaces spent out of their coffers. Maybe even if any vendor-attached addresses got involved.

Somebody stands to make some bitcoin sales between here and there.

ScottD August 19, 2016 7:52 PM

@r

That’s one way to avoid giving out one’s password.

Without even so much as a smile, mind you.

I am a bit confuzzled what that means.

Thoth August 19, 2016 7:59 PM

@Figureitout, Clive Robinson

While I was contemplating how to make my ChaCha20 implementation faster, I recalled your question in the previous Squid section asking why I couldn’t simply replace ChaCha20’s cryptographic counter instead of taking the approach of re-initializing the entire cipher.

After much thought, the answer is that it is impossible to replace only the 32-bit counter. The reason is that the keystream and the matrices (lookup tables) have already been permuted, and the original keymat is no longer present in the keystream or the matrices after a ChaCha20 cryptographic function finishes; thus, naively replacing the 32-bit counter would produce a very different (and wrong) next 64 bytes of keystream.

One example below taken from the RFC document.

When you load the keymat into the matrices it should look like:

61707865 3320646e 79622d32 6b206574 [Fixed Constant]
03020100 07060504 0b0a0908 0f0e0d0c [ChaCha20 key in little endian]
13121110 17161514 1b1a1918 1f1e1d1c [ChaCha20 key in little endian]
00000001 09000000 4a000000 00000000
[Counter] [——Nonce in little endian——]

After you do all the permutation rounds on the matrix above, the matrix should look like:

837778ab e238d763 a67ae21e 5950bb2f
c4f2d0c7 fc62bb2f 8fa018fc 3f5ec7b7
335271c2 f29489f3 eabda8fc 82e46ebd
d19c12b4 b04e16de 9e83d0cb 4e3c50a2

Now that the original ChaCha20 key, nonce, constants and counter are all mixed beyond recognition, simply substituting the 32-bit counter for the next 64-byte message and intending to re-use the matrix state would be a very bad idea.

Here’s what happens if you simply re-use all matrices and only change the 32-bit counter:

837778ab e238d763 a67ae21e 5950bb2f
c4f2d0c7 fc62bb2f 8fa018fc 3f5ec7b7
335271c2 f29489f3 eabda8fc 82e46ebd
00000002 b04e16de 9e83d0cb 4e3c50a2

This would produce the wrong result. The correct way is to re-populate the matrix again:

61707865 3320646e 79622d32 6b206574 [Fixed Constant]
03020100 07060504 0b0a0908 0f0e0d0c [ChaCha20 key in little endian]
13121110 17161514 1b1a1918 1f1e1d1c [ChaCha20 key in little endian]
00000002 09000000 4a000000 00000000
[Counter] [——Nonce in little endian——]

Increment the counter and then re-mix the matrix; that gives you the next 64 bytes worth of state.

The nice thing is that the ChaCha20 quarter rounds are ARX-based and fast, as long as you don’t do 32-bit to 8-bit translation like I did (to adapt to smart card CPUs).

The troublesome, and probably time- and resource-consuming, part is that every time 64 bytes of keystream are used up, you have to re-populate the entire matrix and set up your keystream again by re-mixing the newly populated matrix.
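This full re-setup matches how the RFC 7539 block function is defined: every 64-byte block starts again from the key, nonce and incremented counter, never from the mixed matrix. A minimal pure-Python sketch, using the RFC’s test-vector key and nonce (the same values as the matrix shown above):

```python
import struct

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xffffffff

def quarter_round(s, a, b, c, d):
    # ARX quarter round: add, xor, rotate by 16/12/8/7
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl32(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl32(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl32(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl32(s[b] ^ s[c], 7)

def chacha20_block(key, nonce, counter):
    # Re-populate the whole 16-word matrix from scratch for every block:
    # constants, key, counter, nonce -- never from the previous mixed state.
    state = [0x61707865, 0x3320646e, 0x79622d32, 0x6b206574]
    state += struct.unpack('<8L', key)            # 256-bit key, little endian
    state += [counter] + list(struct.unpack('<3L', nonce))
    work = list(state)
    for _ in range(10):                           # 10 double rounds = 20 rounds
        quarter_round(work, 0, 4, 8, 12)          # column rounds
        quarter_round(work, 1, 5, 9, 13)
        quarter_round(work, 2, 6, 10, 14)
        quarter_round(work, 3, 7, 11, 15)
        quarter_round(work, 0, 5, 10, 15)         # diagonal rounds
        quarter_round(work, 1, 6, 11, 12)
        quarter_round(work, 2, 7, 8, 13)
        quarter_round(work, 3, 4, 9, 14)
    # Final add of the original state, then serialize little endian
    return struct.pack('<16L', *((w + s) & 0xffffffff
                                 for w, s in zip(work, state)))

key = bytes(range(32))                            # RFC 7539 test key 00..1f
nonce = bytes.fromhex('000000090000004a00000000')
print(chacha20_block(key, nonce, 1)[:8].hex())    # 10f1e7e4d13b5915
print(chacha20_block(key, nonce, 2)[:8].hex())    # a fresh, unrelated block
```

On an 8/16-bit smart card CPU, this per-block re-population and re-mix is exactly the cost being discussed; swapping the counter word alone is never sufficient, for the reason given above.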

Some stream ciphers handle this problem by continuously taking processed output and feeding it back into their matrices or keystreams without needing much re-setup, but I suspect that feeding old results back into the cipher’s state, instead of re-initializing the entire stream cipher every time the keystream runs out, might pose its own problems and lead to cryptanalysis based on attacking previous cipher state.

Maybe @Clive Robinson can chime in regarding the operation and security of a stream cipher’s keystream and internal state?

r August 19, 2016 8:03 PM

@ScottD,

It makes things sudo psecure, eg. only a little bit. But it invites deniability with complex transforms (where memory is concerned, not in ownership).

The other reasonable method is something like “did you find me a lawyer?”

ianf August 19, 2016 9:05 PM

@ Chad Walker: Come on, gang, it’s the weekend! You all should be playing this my tabletop fantasy role-playing game about hacking, informed by real-life cryptography and networking fundamentals.

Don’t be silly. This is the place where we play Me Mom Is A Saint game, Teaching Moderator To Sit Pretty game, You Noise Me Signal game, the Denunciation Method game, and I’m Bored By You game.

So what would yours supply that we haven’t tried already? Learn from the pros, come up with something that ups the ante, not side-channel-lines it onto some, er, side channel.

anony: Deep-sea squid cannibals battle it out in a fight to the death. In Monterey Bay

It’s a whole ocean out there, but NO!, THEY HAVE TO HAVE A GO AT IT within sights of always profit-hungry Hollywood agents looking for new talent fodder. Figures.

@ ScottD “A web tool that uses English Orthography to create passwords… randomly selects from a list of 174 consonant elements and 157 vowel elements from English Orthography to create the passwords.” […]
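The quoted scheme can be sketched roughly like so; the element lists here are illustrative stand-ins (the real tool draws from 174 consonant and 157 vowel elements), and the simple alternating consonant/vowel pattern is an assumption about how it combines them:

```python
import secrets

# Illustrative stand-ins for the tool's orthographic element lists
CONSONANTS = ["b", "ch", "d", "f", "gh", "kn", "l", "m", "n", "ph",
              "qu", "r", "s", "sh", "t", "th", "v", "w", "wr", "z"]
VOWELS = ["a", "ai", "au", "e", "ea", "ee", "i", "ie", "o", "oa",
          "oo", "ou", "u", "ue", "y"]

def orthographic_password(n_pairs=5):
    """Concatenate randomly chosen consonant+vowel element pairs."""
    return "".join(secrets.choice(CONSONANTS) + secrets.choice(VOWELS)
                   for _ in range(n_pairs))

print(orthographic_password())
```

A French or German version would then only need its own pair of element lists, consistent with ScottD’s later remark that the underlying code “just sees numbers and strings.”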

Interesting. Could you follow it up with either a Francophone, or a Romanesque version, perhaps one based on Interlingua? (not the brain dead Esperanto)

[ Spinning on, there ought to be an app[*] that’d accept input of a few times spoken passphrase, perhaps a nursery rhyme/ equiv., analyze the samples for distinct phonemes, normalize them, and use them to construct a “sufficient strength” password that would also provide a mnemonic reminder of user’s initial passphrase. ]

[*] It could also be a webapp, but then of course we’d have to worry about browser fingerprinting and surreptitious saving of generated passwords in hostile associative_arrays[“couldn\’t”,”have”,”that”;]

Winston Smith August 19, 2016 9:10 PM

I hope others find this interview with an NSA whistleblower as interesting as I did. Having read and explored this blog, and having followed Snowden’s story since the summer of 2013, I find a certain “ring of truth” to her story, which depicts intrigue and abuse after she filed a complaint with management and internal ‘watch dog’ committees.

She paints a picture that indeed there is the NSA that is doing the work of the USA’s patriotic interests, and then, there is the other NSA that is doing the bidding of the ‘dark lord’ from a Star Wars film. Or Satan. Or pick your poison.

Some of what she declares amounts to hyperbole given her frame of mind and her personal bias, but then again, some of what she states is absolutely spot on to my perception… so, sift the truthful nuggets for yourself.


https://everydayconcerned.net/tag/zersetzung/

I found this quote from the article interesting. I disagree with some points, but interesting nonetheless:

Karen Stewart: The Protect America Act (PAA) and the NDAA I have always thought were horrific ploys more than mere mistakes. You do not take away the rights of the citizens to protect them from the enemies whom you are purposely allowing access to them. This is outrageous illogic. But people are acceding, losing their freedoms and quite willingly because their fear is overwhelming their ability to think.

Maybe it’s the fluoride in our water making us sheeple, maybe it’s the lack of modern America’s former generally homogeneous national Judeo-Christian social philosophy and morals that have been lost in the Right versus Left discourse of the last 50 years, which had kept us united as a strong, cohesive culture and a benevolent civilization (mostly).

But we have lost our humanity and moral compass to the point where we have created leadership by psychopaths, and they are indulging in the worst that humanity can imagine in the melding of science and depravity, i.e. electronic harassment.

r August 19, 2016 9:44 PM

@WinstonSmith,

She thinks it’s the fluoride; I think it’s all the peanut butter and jelly.

Grauhut August 19, 2016 10:00 PM

Max F. Acepalm News…

“WikiLeaks uploads 300+ pieces of malware among email dumps
Freedom. Justice. Openness. And some entirely avoidable p0wnage for good luck
19 Aug 2016 at 07:02, Darren Pauli

WikiLeaks is hosting 324 confirmed instances of malware among its caches of dumped emails, a top Bulgarian anti-malware veteran says.”

theregister.co.uk/2016/08/19/wikileaks_uploads_324_bits_of_malware_in_munted_document_dump/

@JulianA: Toooo much pr, not enough hackin!

ScottD August 19, 2016 10:13 PM

@ianf

Interesting. Could you follow it up with either a Francophone, or a Romanesque version, perhaps one based on Interlingua? (not the brain dead Esperanto)

From the start, I plan to make a French/German/Spanish maybe even Russian version. English was the easiest (for me anyway) but my underlying code does not care about language. It just sees numbers and strings.

Since it has been many decades since I took French in high school, and I only tried to self-learn German a couple of decades ago, my fluency in the rules of other languages is not high. Russian is another beast, with a completely different alphabet. Doable, but I would need some assistance working out some language issues.

Not sure what you mean about Romanesque. A quick google yields architecture, unless you mean using the roman alphabet.

r August 19, 2016 10:20 PM

@Daniel,

Thank you for posting the risk link, I may have seen it already but I clicked into it this time.

“Perhaps the most amusing discovery yet, is that the Equation Group’s “noclient” (version 3.0.5.3) is vulnerable to a pre-authentication buffer overflow. Apparently, ‘Rawsend’ mode trusts user input and it shouldn’t. We won’t hold our breaths for MITRE to assign a CVE identifier.”

THAT is interesting, I wonder if that’s a message.

ianf August 19, 2016 10:53 PM

@ ScottD “Not sure what you mean about Romanesque

Should have written Romance languages instead; I used Romanesque as a shortcut for “of Romance origins.” You really shouldn’t have to do more than one such version for all “Romancephones”, which is why I suggested Interlingua, with its streamlined grammar.

Forget about Russian edition until you’ve found a motivated AND linguistically knowledgeable someone to do that portion.

Clive Robinson August 20, 2016 12:32 AM

Has MS broken your WebCam?

Apparently the MS Win10 anniversary update has “broken” the functioning of millions of webcams, and MS has decided they are not going to sort the mess out with a workaround, but might fix it in Sept, but then again might not…

https://www.thurrott.com/windows/windows-10/76719/microsoft-broken-millions-webcams-windows-10-anniversary-update

However something else nasty slipped in as well, read the link about not being able to back out of updates after ten days…

It’s reading stuff like this which makes me glad I don’t have any skin in the MS Win10 game, as rope burns hurt real bad.

r August 20, 2016 2:46 AM

  1. Home ownership and type.

Really pushes me further into the whole anonymous LLC property holding company thing.

tyr August 20, 2016 3:32 AM

I’m surprised the malware count is that low since
Wikileaks has made so many government friends in
our lovely world.

@Clive

That’s M$ business model, you can get a working
camera with the Win 11 upgrade, which may have
a few minor flaws that they will fix RSN. : ^ )
My favourite episode was when they introduced
input buffer overflow system crashes as a new
feature of MSbasic in one version. The fix was
to toss it out and use the previous or later
release along with the use of appropriate bad
words directed at the genius who added that
feature to something that wasn’t broken before.

@ianf

Monterey has the whole area set up to do aquatic
voyeurism while they wait for the stars to be
right.

ianf August 20, 2016 4:36 AM

11. Home ownership and type.

Where did that come from, rrrrrrrr? And can you really set up a wholly anonymous LLC property-holding thing? Or did you mean opaque & obfuscated.

I looked into it some years back, and discovered that, short of setting up a semi-criminal postbox company in one of the “secrecy havens” like Switzerland or Liechtenstein, and buying into the market in this fashion, while needing to rely on expensive & shady lawyers there (minimum yearly retainer charge is €15k), there’s no legal way to do it in the EU. Primarily because of taxes, capital transfer, and property levies. No can do.

In fact, outside of apparent cyclical boom-bust property development/ investment areas in e.g. southern Spain, where they’ll take and fritter away anyone’s money, in truly attractive places, even some whole countries, there are restrictions against non-industrial/ non-office property ownership by corporations (to prevent “bleaching” dirty money). Besides, business owners are trying to shed owned properties to, you guessed it, specialized property-holding corporations. So to set up a wayside little nest for less than €1M, that’s a tall order. Maybe not in the laissez-fairytale USA though?

Gerard van Vooren August 20, 2016 5:47 AM

@ Clive Robinson,

“It’s reading stuff like this which makes me glad I don’t have any skin in the MS Win10 game, as rope burns hurt real bad.”

I prefer paper cuts too 😉

MS … sigh, they always keep pushing the envelope. It’s the habit of all the big players btw, and lawmakers are nowhere to be seen when it comes to proper ICT regulation.

@ Wael,

“it will be hard to design a language that programmers two or three generations later don’t find to be bad.”

About making programming languages “future proof”, that’s indeed an interesting and of course utterly unsolvable issue. You know, “programming languages are designed, not discovered, and it shows”.

I share with Nick P the opinion that Pascal languages are still today up to the task when it comes to low level programming. Rust is not my favorite because it has the taste of design by committee all over it. I can’t comment on Swift.

Grauhut August 20, 2016 6:13 AM

@all: Is the human resource situation for sniff.govs so bad they asked Microsoft for medicine? 🙂

“Microsoft has open-sourced PowerShell for Linux, Macs. Repeat, Microsoft has open-sourced PowerShell
OpenSSH remoting will be baked in, too”

theregister.co.uk/2016/08/18/microsoft_brings_powershell_to_linux_and_mac_publishes_as_open_source/

Ergo Sum August 20, 2016 7:58 AM

@Clive

Has MS broken your WebCam?

It’s not really broken, just a programming error in the QoS settings. Maybe MS reserved too much bandwidth for LEOs? Semi-kidding…

Not having skin in Windows 10 is good, but…

Starting in October, 2016, Windows 7 and 8.1 will get the Windows 10 style updates:

https://blogs.technet.microsoft.com/windowsitpro/2016/08/15/further-simplifying-servicing-model-for-windows-7-and-windows-8-1/

From October 2016 onwards, Windows will release a single Monthly Rollup that addresses both security issues and reliability issues in a single update.

Enterprises will have the choice to install security package only:

The Security-only update will be available to download and deploy from WSUS, SCCM, and the Microsoft Update Catalog. Windows Update will publish only the Monthly Rollup – the Security-only update will not be published to Windows Update. The security-only update will allow enterprises to download as small of an update as possible while still maintaining more secure devices.

Non-enterprise end users will just have to fall in line. It won’t be long before Windows 7 and 8.1 break the webcam for them. Of course, that’s in addition to not being able to disable telemetry in these OSs.

Vista is not impacted by this change of service…

Tor Project RFI response August 20, 2016 8:28 AM

Attn: Our host

Alison Macrina asked about Global South outreach prospects, presumably for Tor. Wonderful idea.

  • Xiaojun Grace Wang is the UNDP lead advisor on South-South cooperation
  • The UNOSSC is the UN office for South-South Cooperation

easiest access to the above is probably through the G-77

  • Volunteer Services Organization (VSO) recruits worldwide to NGO specs

ianf August 20, 2016 9:40 AM

So, Ted, are we now entering the Age of Teenage Smartphone Moral Panic[k]? As, btw., with practically all previous technologies, beginning with allowing “the youths” free listen to 78-rpm records of ragtime and other way too seditious, toe-tapping, limb-twitch-inducing music.

I thought we already were past that after the net.furore over this famous Dear Gregory letter, a mother’s written iPhone usage contract to her then 13 (now 17) year old son:

http://m.huffpost.com/us/entry/2372493

vas pup August 20, 2016 10:02 AM

http://www.bbc.com/news/uk-politics-37130455
Internet spying powers backed by review:
“He backed three kinds of bulk data acquisition:
◾Bulk interception: The tapping of internet cables by GCHQ to target suspects outside the UK. The review says this is of “vital utility” to the security and intelligence agencies, citing the case of a kidnapping in Afghanistan that would have led to the killing of hostages, if spies had not used these powers.
◾Bulk acquisition of communications data: The gathering of data about communications but not the content of it. Only disclosed publicly in November last year, for MI5 it has “contributed significantly” to the disruption of terrorist operations, Mr Anderson’s report said.
◾Bulk personal data sets: Databases of personal information, which could include everything from the electoral register to supermarket loyalty schemes, which the security services acquire openly or covertly.

But he expressed some reservations about a fourth practice – bulk equipment interference, which involves hacking into smart phones or computers over “a large geographical area”, saying there was “a distinct (though not yet proven) operational case” for it.

Unlike the other bulk powers described in the review, this one has yet to be used, it says.

Jiminy Cricket August 20, 2016 11:50 AM

@Dearest Gerard van Vooren,

MS … sigh, they always keep pushing the envelope. It’s the habit of all the big players btw, and lawmakers are nowhere to be seen when it comes to proper ICT regulation.

That’s because you’re looking in the wrong places, and at the wrong people. Do you not realize that while you’re looking for them in their offices, they’re out on the town spending lobbyists’ money before you can catch them with it? And when they’re actually at the office, they’re making calls co-ordinating how family members and “business” partners are spending it.

http://www.pbs.org/wgbh/pages/frontline/shows/prescription/hazard/independent.html
http://www.pbs.org/newshour/rundown/mishaps-and-deaths-caused-by-surgical-robots-going-underreported-to-fda/

Welcome to the future of drugs, drones and death.

https://www.drugwatch.com/manufacturer/
https://business.illinois.edu/accountancy/wp-content/uploads/sites/12/2015/09/Tax-2015-Jiang-Robinson-Wang.pdf
http://rense.com/general33/fd.htm
https://www.salon.com/2004/05/01/muck_epa/
https://www.bloomberg.com/view/articles/2015-11-20/what-s-worse-than-the-sec-s-revolving-door-

http://www.slate.com/blogs/moneybox/2013/06/20/silicon_valley_nsa_revolving_door_deeper_than_you_think.html

Gerard van Vooren August 20, 2016 12:59 PM

@ Jiminy Cricket,

Do you really want me to read all that stuff? I get the outline of it but I am not that interested. It stinks, but I am not interested in the flavors of the smells.

Gerard van Vooren August 20, 2016 2:23 PM

For those who want to see Jeremy Corbyn in action, look no further than this video. It’s about disassembling NATO. The list of facts that he presents is mostly already mentioned in The Untold History of The United States, but it’s an impressive speech!

I’ve said it before, I am not a pacifist, but NATO should be a thing of the past; I entirely agree with Jeremy Corbyn on this subject.

Locke/Rousseau/...Schneier? August 20, 2016 2:53 PM

CIA torture cowards classify their crimes TS/SCI:

https://prod01-cdn07.cdn.firstlook.org/wp-uploads/sites/1/2016/08/C06579388-2.jpg

https://theintercept.com/2016/08/15/documents-confirm-cia-censorship-of-guantanamo-trials/

DoJ mob lips instruct the court to exclude evidence of US government aggression:

http://witnessiraq.com/wp-content/uploads/2016/08/Opp-to-Motion-for-Judicial-Notice.pdf

For the US government, crime doesn’t matter if it’s within the scope of your government employment. What thinking human being is going to go to work for these scumbags? What decent human being is going to do anything but knock this mafia state over?

Bruce is doing gods’ work defending the right to privacy with Tor, and Tor’s social contract is right to invoke human rights. But the legal scope of privacy is conditioned on ordre public, which is all human rights including freedom from torture and your right to peace. Government murderers and torturers don’t deserve privacy, and Tor’s social contract cannot advance human rights unless it explicitly supports use in opposition to state impunity. It’s not enough to never harm the users. You have to help the users expose US government crime. Otherwise the state will kill and torture them and get away with it. Whose side is the Tor Project on? Us or the government criminals? Are you with us or against us? That’s the question everyone’s still asking.

Daniel August 20, 2016 3:11 PM

@r writes, @Nicaragua, This whole bitcoin auction thing is more your style, you know – all quid pro quo.

Don’t you mean squid pro quo?

r August 20, 2016 3:42 PM

@Daniel,

🙂 We can blame that one on in person haters. I missed that one for sure, good catch. Maybe it’s related to not being my native language like some of the others are referencing?

ianf August 20, 2016 4:25 PM

ADMINISTRIVIA @ Ted

I supplied a corroborating anecdote to your off-topic, with a 2-para lead to a granular URL, total scanning time at most 2 minutes. In response you refer me to an entire website of “some good articles” on like subjects. I suppose I should feel honored at being entrusted with a chance to glance at all that wisdom, but, eternal grump that I am, am not. You’re an educated fella, so I’m curious: hasn’t it struck you that your methodology might be directly counterproductive?

    After all, it’s essentially the same as, say, when Clive Robinson makes a disputable [non-techie] claim followed by “you can look it up.” No offense intended to either party, but how does that raise the level of—so dear to you—common knowledge?

ADMINISTRIVIA @ Gerard van Vooren

        HOW DARE YOU be so unappreciative of another poster’s palpable sweat and toil of assembling all those links to what must be hundreds of screenfuls of BEYOND INTERESTING INTEL—which you then so summarily dismiss in one meek para? Hasn’t your mother taught you basic blog forum manners? IF NOT: in such cases you thank the poster for its concern, and then lie about having gotten the gist of it. Everybody happy.

Later,

[…] video of Jeremy Corbyn calling for disassembling of NATO… which should be a thing of the past.

I’ll come back to this soon in another post of political nature. Couldn’t but note the absence of any idea what to replace NATO with – or perhaps we simply should scrap it now that we’re so kissy-huggy with the Rooskies. More later.

r August 20, 2016 5:13 PM

@tyr,

I blame alcoholism and direct orders, the good thing is we can solve it with more looting.

Isn’t there a House ReAppropriations Committee? Should that be read like ‘the house’ as in the casino, where the house rules?

Or else it gets the hose again.

@Gerard,

I posted those, saw the “death by medical instrument written off as simple ‘injury'” on pbs last night – thought it might be cute to reinforce your assertion with links I only skimmed. (documentation is important, as the word reference is oddly similar to reverence.)

Don August 20, 2016 5:27 PM

@ Schadenfreude

anyone have any news on the status of Hacking Team and their hopeful demise after their massive exfil & breach by the Master?
[so brilliantly explained in a release a few months back]

come on, we need some more good news

By the way, in the film Citizenfour by Laura Poitras, towards the end there is a discreet mention of a potential second whistleblower in the NSA on the heels of Snowden. This was only some short weeks after his departure from Hawaii. The timing may coincide with the recent Equation Group hoo-ha.

ooof August 20, 2016 6:09 PM

@ianf, you don’t need to replace NATO, NATO is tits on a bull. NATO in its current form is a failed attempt to supplant (actually, end-run) the UNSC. NATO’s Charter subordinates the alliance to the UN Charter in the preamble, Article 1, Article 5, and Article 7 – because it has to, legally. UN Charter Articles 51 and 53 require that.

In the Balkans and the so-called War on Terror, the US tried to use NATO to undermine UNSC authority. The idea was, stampede a bunch of countries off to war and leave the UN in the dust to discredit it. But instead of discrediting the UN the illegal wars discredited the USG, caused a backroom mutiny in Europe, defibrillated Russia, and accelerated NATO’s degeneration. The US government rationale for everything was SELF DEFENSE!!!1! But now, with the SCO as enforcer, the UNSC just rubs the USG’s nose in UN Charter Article 51, which says that self-defense is also subject to UNSC authority.

https://nationalinterest.org/commentary/mccains-un-charter-confusion-8862

Turkey is the poster child for NATO victims: you get Playmobil aircraft with crippled IFF so Israel can attack you and you can’t fight back, and then the troops that fly the worthless crap around overthrow you on US orders. Who needs that?

Who? August 20, 2016 6:32 PM

Like Hacking Team dump, but worse:

This week someone auctioning hacking tools obtained from the NSA-based hacking group “Equation Group” released a dump of around 250 megabytes of “free” files for proof alongside the auction.

The dump contains a set of exploits, implants and tools for hacking firewalls (“Firewall Operations”). This post aims to be a comprehensive list of all the tools contained or referenced in the dump.

https://musalbas.com/2016/08/16/equation-group-firewall-operations-catalogue.html

Thoth August 20, 2016 6:39 PM

@vas pup, all

re: Bulk Anti-Citizenry Powers

“Bulk interception, Bulk acquisition, Bulk hacking”

Properly encrypt data and protect it in secured devices that resist tampering. Most encryption and data security tools and practices are rather careless, relying on vulnerable system setups.

“Bulk personal data sets”

Social media discipline: do not agree to hand over personal information to sales and marketing people just for that extra loyalty point or for the nice credit card bonus. Short-term bonuses will bring long-term problems. Ever had some random people call your cell phone when you least expect it, to push sales? That means someone leaked your information to another company.

“But he expressed some reservations about a fourth practice – bulk equipment interference, which involves hacking into smart phones or computers”

Sounds more like a lie. The latest Equation Group leak should include some network device exploits. Once the network routers are compromised, they can deliver their MITM attacks and hijack the end points in the network; and once the hijacked end points move to another network, what is going to contain further spreading of the malware to other networks by infected end points?

r August 20, 2016 7:08 PM

@Curious,

http://www.sizecoding.org/wiki/Main_Page
https://news.ycombinator.com/item?id=12328375

This is kind of the objective to strive for in smart cards like Thoth is asking about (different CPU though): fewer bytes generally means more room for functionality in resource-starved environments. Competitions like this can be very educational and entertaining; it’s something I think WinNT destroyed for a large part by removing the bare-metal aspect of computing from home users. It might seem stupid, silly and useless to some, but it’s still a fun and educational hobby; you can learn assembler, raw hardware interfaces (ATA/ATAPI/graphics), practically everything.

The 256-byte limit is admittedly more of an exposé, but one can get A LOT done with that kind of space on an x86 (or x64).

Figureitout August 20, 2016 7:48 PM

Thoth
–Want it faster? Write it in C. :p But yeah, I'd have to study the cipher a bit more (the RFC was actually fairly nicely written compared to some of the docs on small ICs, which had missing parts; I discovered on my own some things about the chip that should've been documented) to comment on how to initialize. To automate that you could use some RNG functionality to regenerate keys, nonces and constants (and the counter would simply be set back to 0?).

And by changing the 32-bit counter, only one 4-byte chunk was different (d19c12b4 -> 00000002); odd, no?
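The re-keying idea above could look something like this in plain Java (a hypothetical sketch; the class and field names are mine, not from Thoth's library, and note that per RFC 7539 the constants stay fixed and are never regenerated):

```java
// Hypothetical re-keying helper: draw a fresh key and nonce from the
// platform RNG and reset the 32-bit block counter to zero. The four
// ChaCha20 constants are NOT regenerated; the RFC fixes them.
import java.security.SecureRandom;

public class Rekey {
    public byte[] key   = new byte[32]; // 256-bit ChaCha20 key
    public byte[] nonce = new byte[12]; // 96-bit nonce per RFC 7539
    public int counter;                 // 32-bit block counter

    public void rekey(SecureRandom rng) {
        rng.nextBytes(key);    // fresh key ...
        rng.nextBytes(nonce);  // ... and fresh nonce
        counter = 0;           // counter restarts with the new key/nonce
    }
}
```

On a real smart card one would use the card's hardware RNG rather than `java.security.SecureRandom`, but the shape of the operation is the same.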

ianf August 20, 2016 9:00 PM

ooof: […] “Turkey is the poster child for NATO victims: you get Playmobil aircraft with crippled IFF so Israel can attack you and you can’t fight back, and then the troops that fly the worthless crap around overthrow you on US orders.

So that’s what happened there? Thanks for enlightening me, couldn’t make heads or tails of it. I just heard a forceful denial from the Foreign Ministry of Romania as to the rumor that NATO (or was it merely its member USA) recently transferred ~50 nukes from Turkey there, so that’s probably true—otherwise they wouldn’t have to deny it. Or piss off the fellow NATO member next-door neighbour Bulgaria for not having been chosen in its stead (or even symbolically partaken in these regional… A-sweepstakes). So in fairness—a rare commodity these days—we should not exclude the potential behind-the-scenes machinations therein of well-known Balkan perfidy.

But do tell us more about Israel’s IAF standing by to attack (preferably state who/ which state(s)) on U.S. orders, I’m all ears with (alas, can’t be helped) an Israeli-make hearing aid in one.

Thoth August 20, 2016 9:10 PM

@Figureitout

It’s not really about the programming language here. It’s about architecture, where you have to degrade 32-bit word execution to 8-bit words. Even if you were to implement in C a 32-bit-to-8-bit math library for a C-based smart card environment (i.e. MULTOS), the 8/16-bit CPU is one old legacy b**** that the smart card industry, and even embedded security co-processors, are reluctant to move away from due to its proven architecture.

Keys, counters and nonces are for the user to feed into the system. This is a cipher library, not a full suite that includes key generation so the user has to do due diligence in reading the RFC and knowing how my ChaCha20 library works.

Constants cannot be generated and must be fixed to the defined 128-bit constants laid out by the RFC and by DJB himself.

What I meant by re-using the matrices is: if you take the lazy approach and simply replace the 32-bit counter, the initial state would be wrong, because the initial state’s constants would be 837778ab e238d763 a67ae21e 5950bb2f instead of the dictated 61707865 3320646e 79622d32 6b206574.

The alternate scenario I showed is what happens if you initialize all the matrices by borrowing the end state of the previous computation, and then lazily refuse to reload all the parameters and simply swap the counters.

The correct initialization state for the second counter 0x00000002 should be:

61707865 3320646e 79622d32 6b206574
03020100 07060504 0b0a0908 0f0e0d0c
13121110 17161514 1b1a1918 1f1e1d1c
00000002 09000000 4a000000 00000000

After the previous state of:

61707865 3320646e 79622d32 6b206574
03020100 07060504 0b0a0908 0f0e0d0c
13121110 17161514 1b1a1918 1f1e1d1c
00000001 09000000 4a000000 00000000

My alternate scenario presumes your suggestion of only replacing the counter value from 00000001 to 00000002 and re-using the end-state matrices of the previous calculation.

Thus the alternate initialization state following your suggestion, trivially replacing only the counter from 00000001 to 00000002 and reusing the previous state, would end up being:

837778ab e238d763 a67ae21e 5950bb2f
c4f2d0c7 fc62bb2f 8fa018fc 3f5ec7b7
335271c2 f29489f3 eabda8fc 82e46ebd
00000002 b04e16de 9e83d0cb 4e3c50a2

When the actual correct state, with due diligence at the initialization stage, would be:

61707865 3320646e 79622d32 6b206574
03020100 07060504 0b0a0908 0f0e0d0c
13121110 17161514 1b1a1918 1f1e1d1c
00000001 09000000 4a000000 00000000

I do agree it’s a bit confusing reading what I said, and it’s better to find time to read the RFC. Also, do not try to skip steps for shortcuts when implementing, no matter how tempting it is to make execution faster, because who knows what kind of side channels these shortcuts and misinterpretations may introduce outside the intended procedures and parameters.
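For concreteness, the correct per-block initialization can be sketched in plain Java (an illustrative sketch, not Thoth's actual jChaCha20 code; the key and nonce are given as the little-endian 32-bit words shown in the matrices above):

```java
// Illustrative RFC 7539 ChaCha20 state setup: the four constants are
// fixed, the key and nonce words are reloaded for every block, and
// only the 32-bit counter changes between blocks.
public class ChaChaState {
    static int[] initState(int[] keyWords, int counter, int[] nonceWords) {
        int[] s = new int[16];
        // Constants dictated by the RFC ("expand 32-byte k")
        s[0] = 0x61707865; s[1] = 0x3320646e;
        s[2] = 0x79622d32; s[3] = 0x6b206574;
        System.arraycopy(keyWords, 0, s, 4, 8);     // 256-bit key: words 4..11
        s[12] = counter;                            // 32-bit block counter
        System.arraycopy(nonceWords, 0, s, 13, 3);  // 96-bit nonce: words 13..15
        return s;
    }
}
```

With the RFC's test key and nonce, counter value 2 yields exactly the "correct initialization state" matrix shown above, never the end state of the previous block.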

I also would like to point out that the 32-bit-to-8-bit math library has not been checked or tested for side channels, even though the ChaCha20 cipher itself is supposed to be immune to timing-based side channels. Due to needing if-else control flows within the 8-bit math library I created, who knows what timing side channels might be introduced by the necessity of downgrading 32-bit word processing to 8-bit words.

To put it in simple words: if the crypto says use 32-bit words, stick to 32-bit words and don’t degrade to 8/16 bits unless absolutely necessary, due to the possible introduction of unintended consequences.

And crypto algorithm designers have to take into consideration the widespread existence of 8/16-bit CPUs still out there, and design their crypto without being too myopic about powerful 32/64-bit desktop, smartphone and laptop CPUs.

Most crypto designs I have seen while looking through IACR are in fact rather myopic, sadly, because they put the emphasis on common consumer hardware (Intel/ARM/AMD 32/64-bit CPUs) and never gave much consideration to smart cards and 8/16-bit CPUs. We still carry these tiny 8/16-bit CPUs to this day, in RFID tags, smart cards, door-access cards and payment systems, sitting in our wallets or hung around our necks as badges, yet they are widely ignored. That is why legacy ciphers (DES and 3DES) are still so widespread in embedded crypto, why SSLv3 is still found in legacy security systems (especially those paired with the payment and financial industry's embedded plastic cards, i.e. smartcard chips), and why moving to TLS 1.3, or even TLS 1.2, is so difficult for banks, financial services and other public- and private-sector organizations.

Maybe I'll write an article on how NOT to design crypto systems and ciphers, from the perspective of a security engineer, for crypto-system and cipher designers to read when time avails 😀 .

ianf August 20, 2016 9:29 PM

Always ready to fill in gaps of my ignorance, I set out to discover the security-topical contributions of one “Don” who recently(?) graced our forum with “its” sophisticated presence. The quest has not been a success due to the crappy duckduckgo search engine, and much-too-blandness of the 3-letter-keyword (I figured out that if Bruce Schneier wanted us to find granular/ conditional way back stuff, he’d provide us with tools to do it).

So now for the Plan B: do you, yes, YOU reading this, by any chance have any saved pointers to said handle’s archived brain-effluvial output? I’d so want to drink from that Crystal Knowledge Chalice, honest. Thanks much in advance.

[THIS REQUEST WILL BE REPOSTED PERIODICALLY UNTIL GAPS FILLED].

Wael August 20, 2016 9:36 PM

@ianf,

due to the crappy duckduckgo search engine

You may also try this on Google:
”SearchString site:schneier.com”

r August 20, 2016 9:47 PM

@Thoth,

The use of Java itself may introduce side channels, considering the abstraction layer present. Just putting that out there; I think Java may be the right choice from my 100% complete outsider's stance, though. EMSEC is EMSEC no matter what, and when things are finally brought up to a higher circuit density you can drop some of those extra dependencies.

By side channels I mean side channels, not implemented backdoors; so anyone wanting to poke me for that, I'm aware.

Wael August 20, 2016 9:48 PM

Or if you’re concerned about the sensitivity of your search strings, then cascade your google search through at least three anonymizers.

A[1]->A[2]->A[3]->Google.com

It’s also good to have a closed loop in the anonymizer list you choose, for example, end the top one with A[1] or A[2] in addition to the above.

Choose three different countries for the anonymizer proxies, and stay away from 5-eyes.
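One concrete way to realize such an A[1]->A[2]->A[3] chain (a sketch only; the tool choice, proxychains, is mine, not Wael's, and the addresses below are placeholders from reserved documentation ranges):

```text
# proxychains.conf (illustrative; replace placeholder addresses)
strict_chain        # traverse A[1] -> A[2] -> A[3] strictly in order
proxy_dns           # resolve hostnames through the chain, not locally

[ProxyList]
socks5 198.51.100.1 1080   # A[1], country 1
socks5 203.0.113.7  1080   # A[2], country 2
socks5 192.0.2.42   1080   # A[3], country 3
```

Note that strict chaining fails closed if any hop is down, whereas the "closed loop" variant Wael describes (re-entering A[1] or A[2] at the end) would need the chain list itself to repeat entries.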

Wael August 20, 2016 10:02 PM

@ianf,

A1 A2 A3 A4 A3 A4 Google. The loop can add confusion or break badly designed deanonymization engines 😉 And if you’re clever, then you can have the list of anonymizers chosen from a file, scripted through various techniques. When you go to your search engine, the rest will be automatic (part of it could be hard coding some IP addresses in your hosts file.) Now I haven’t tried this, but it should work.

On a phone, this may need an application, not sure one exists. The phone however isn’t to be trusted at all.

ianf August 20, 2016 10:04 PM

Fri Aug 19, 2016 | 5:34 PM EDT
U.S. Army fudged its accounts by trillions of dollars, auditor finds

    tyr: A mere few trillions in accounting adjustments.

ObLitRef: “Seven Days in May” 1962 novel by Fletcher Knebel, 1964 movie by John Frankenheimer. Army-staged military coup in the USA financed from the Joint Chief of Staff’s emergency budget of $100M. These were the days when a hundred mil was nothing to sneeze at (fortunately Burt Lancaster was there).

@ Wael “You may also try this on Google:
”SearchString site:schneier.com”

Since you’re so quick on the draw with solutions (same as, but, judging by timestamps, far quicker than Clive Robinson), perhaps you’d care to supply me a clickable search string that GUARANTEED will return those my sought-after Don’s security-topical contributions here. Pretty please.

Waiting…

Wael August 20, 2016 10:18 PM

@ianf,

Try this: ^.*(\don\b)?.*$ site:schneier.com on Google

You’ll get two or three links. I didn’t validate the correctness of the regular expression, but it gives you a place to start.

Gary August 20, 2016 10:49 PM

@ Don, “By the way , in the film Citizen Four by Laura Poitras, towards the end there is a discrete mention of a potential second whistleblower in nsa on the heels of Snowden. This was only some short weeks after his departure from Hawaii. The timing may coincide with the recent Equation Group hoo-ha”

If that’s the case, wouldn’t Russians know it?

ianf August 20, 2016 10:59 PM

BTW., Wael,

So, acc. to your understanding of my English, that your reg.exp

^.*(\don\b)?.*$ site:schneier.com

should deliver any post or comment from here with the string (\don\b) in it AS A FULFILLMENT for my RFSTC (=request for security-topical contributions from aforementioned “Don”) for me? If so, I truly didn’t know before that that keyword was so saturated with magick reg.exp qualities, but now I do (know). Still no nearer those mythical “contributions” though.

ianf August 20, 2016 11:09 PM

@ Gary,
              in the same movie, there’s also a scene that features prominent use of soy sauce on a napkin. You know where that H-K soy sauce was made? In CHINA. Ask the expert diviner Don to explain the significance of THAT.

r August 20, 2016 11:16 PM

@Wael,

“Choose three different countries for the anonymizer proxies, and stay away from 5-eyes.”

A couple things,

#1 stay away from 5-eyes: yeah, that’ll happen.
#2 stay away from 5-eyes: you remember the other day talking about hops? cheers! (smile!)
#3 choose 3 different countries for the anonymizers: did you see the new octopus logo? we need more secure options than a simple “take the long way” recommendation. FreeNET or some sort of mixnet like i2p may be a good solution, but they may invite as much scrutiny as any of those basic anonymizing proxies. At least in the case of something like i2p/FreeNET/Tor, actually using those tools (disregarding current public exploits) cuts out large swathes of modeled adversarial threats. Those exploits are at least 3 years old, and the ‘company’ that had them was capable of creating replacement firmware for 8(?) HDD vendors – you think your simplistic anonymizers are clean? I WOULD: pick ones that are very new spin-ups, minimum. OR, to take a page out of your playbook -> a throwaway device and a short-path search, e.g. just do it, don’t worry about filters/evasion/tracking – fire and forget.

#4 Google supports regex? Man, I must’ve been sleeping under a bridge. (rock and roll!)

#5 Wael, “i am not friendly” is trying to trick you into narrowing down your location through the use of selectors. Don’t fall for him trying to instigate you into modulating a very narrow-band signal for him and the fleyes.

@ianf,

You don’t remember hearing that as part of the speculation over some of the papers that came out?

Wael August 20, 2016 11:17 PM

@ianf,

I was trying to give you an example as a pointer. I don’t think Google supports full regular expressions. You may try to play with advanced search, though.

Back to your point: I guess you are trying to make the point that @Don didn’t make previous security contributions, therefore he doesn’t have the right to say “something”. I think that’s an invalid statement. For one, I’m not going to do the search for you — it’s not worth my time. Two: Even if the premise is true, it doesn’t lead to your implied conclusion.

If you are interested in a subject, we can give you pointers or exact answers if that doesn’t require us to do the work on your behalf. I did this once for you with the 3GPP specification, but can’t do it every time. Therefore, I choose to send you “no links”. I tried to help, though!

Wael August 20, 2016 11:24 PM

@r,

trick you into narrowing…

He’ll only know what I allow him to know. Besides, if I were trying to hide my details, I would have chosen a nickname. Then again, @Figureitout, @Clive Robinson, @Nick P (especially Nick P), and @Mike the goat know a lot about my whereabouts, and more…

Wael August 20, 2016 11:37 PM

@r,

Don’t fall for him trying to instigate you modulating a very narrow band signal for him and the fleyes.

Oh, I’m aware of that 😉 I obfuscate things before I send them, ask @Clive Robinson. And @ianf uses google analytics to see who watched the video links he posts and tries to match the location with comments here 🙂 I don’t do that because I don’t care who is who or where. I don’t have this sort of curiosity.

Marcos Malo August 20, 2016 11:57 PM

@ianf

[THIS REQUEST WILL BE REPOSTED PERIODICALLY UNTIL GAPS FILLED].

Is this a canary? What would you like us to do when your gaps are filled, notify next of kin? 😀

r August 21, 2016 12:05 AM

Wael I just thought[r] I might be trademarking it.

Just playin with ya, no significance at all.

@Marcos,

You’ve just brought tears of manic joy to my life, thank you. There are so many gaps to be filled in… but his next of kin is killer, absolutely wonderful.

Wael August 21, 2016 12:17 AM

Let’s not talk about relatives!

Yo’ mama so fat, they hired her at the box office to spit butter on the popcorn 🙂

Wael August 21, 2016 12:33 AM

@r,

From your link,

How SSL/TLS Encryption Hides Malware

That’s one reason to use tiered network zones: terminate the TLS session in a safe zone, inspect the packets, then establish a different TLS session with the destination. Additional steps include a change of protocols.

ianf August 21, 2016 1:10 AM

You are dissembling and playing Alfred E. Neuman, too, dear Wael. I didn’t ask specifically you, but everyone, and I stated clearly what I was looking for—as the aforementioned gent thought that my contributions were crap, presumably in comparison with his own (security-topical or otherwise AND of presumed high signal-to-noise ratio). So what were they/ where are they? AWOL.

Don didn’t make previous security contributions, therefore he doesn’t have the right to say “something”. I think that’s an invalid statement.

Nice use of strawman: ascribing your own pedestrian conclusions to me, then demolishing them. You should hold classes. Maybe you do.

BTW. I once posted an inquiry whether no-SIM device advertised its presence to the towers UNTIL the time for emergency call. I never got a clear answer to that, but I got yours in-depth 3GPP spec instead, so I’m not complaining—not that I’d know what to do with it. But wasn’t this just what you expected to be asked for, indeed invited such queries, yet now you are making a fuss over that, claiming that “you did this once FOR ME with the 3GPP specification“??? Shame be upon you.

Wael: ianf uses google analytics to see who watched the video links he posts and tries to match the location with comments here

    @ rrrrrrrr that explains the goo.gl links

Don’t be silly, why is using a URL shortener (one that promises not to become stale like other such services in the past) suddenly ALL EVIL? The rest is another kindergarten strawman courtesy of Wael®. And, yes, goo.gl does provide rudimentary analytics – so what of it? Very useful for pinpointing that one of, say, 12 viewers from the USA could be… someone… I’ll just feed that intel into my ICBM launcher.

These in no particular order:

Google has its own, fairly precise (though not reg.exp) search syntax, most useful for Gmail.

What “don’t I remember hearing of as part of which speculation”? I remember all sorts of stuff, but don’t fall for unlabeled speculations.

And, no, I am not trying to have Wael disclose his location. I already know where he’s at, and I like it that way. If anything, I’m only trying to make it harder for him to escape from the logical corner that he put himself into. That’s the kind of games adults play in this forum.

@ Marcos Malo: read it as a preview of coming rhetorical distractions. You’re making quite a leap of faith that those gaps will be filled with (especially said “Don’s”) knowledge, I should be so lucky.

Anyone who mentions vas pup‘s mother gets the hose, remember that.

Y’all happy now, or should we have another go at discussing the missing oeuvre of that new friend of Wael’s, the literary critic Don?

Sam August 21, 2016 1:34 AM

@ Wael, “And @ianf uses google analytics to see who watched the video links he posts and tries to match the location with comments here 🙂 I don’t do that because I don’t care who is who or where. I don’t have this sort of curiosity.”

I’m surprised that ianf is onto some analytics, because most of his posts appear too cryptic for the general consensus. After all, the internet is a link farm full of analytics and metadata, so the choices we make reflect the paths we traverse. It’s like spamming threads with bullshit to see who responds to whom, what, when.

Wael August 21, 2016 1:39 AM

@Sam,

My mistake. This is too much noise. In the past I felt bad ignoring anyone, but I’ll have to make exceptions now.

Figureitout August 21, 2016 3:08 AM

Thoth
–Was a joke, and I know, but I guess we’re gonna have an argument now b/c you’re feeling a bit frisky, I guess lol. Have you tested that or are you just assuming? What about other 16-bit chips that still get good support and that I can code/flash w/ fat IDEs? Why limit it to just smart cards?

I know it’s not a full long-term implementation; I’m merely saying that it sucks that we have to do all that re-initialization when the counter reaches FFFFFFFF. I mean, even a relatively weak cipher, if you give it as much tender loving care as ChaCha20 needs, would be a b*tch to crack. I know the constants are somewhat “magical”. And I most definitely didn’t suggest just replacing the counter; again, I was saying it sucked that we couldn’t just do that. Attributing false words to me pisses me off a bit, please don’t do that; maybe it got lost in some “singlish” misinterpretation lol.

There were quite a few “if’s”, like “if…if…if…if…if”, at least 5 nested. Adding all those branches definitely adds noticeable time that one could likely observe w/ an oscilloscope; I know b/c I’ve tested this. I also know that in the ATtiny chips we use, the delay_ms() function does not actually delay for the specified time period; it’s a falsehood (can’t say too much…), so you cannot trust the code alone for timing. You’ve got to verify the timing w/ a scope at the least, unless you want a race to the bottom (I’m not getting into the possibility of corrupted test/measurement equipment… unless it’s a cheap multimeter, can’t trust…).

I haven’t actually executed your code and tested it myself, so I’m not sure if it’s broken either, the bit conversions in particular. I know in C you can do quite a bit of int-to-string and 32-bit-to-8-bit hex conversion in a surprisingly small amount of code (I’ve got some of these, generally for LCD display stuff). I use 16-bit-to-8-bit conversion and vice versa quite a bit when dealing w/ something like two 8-bit timers combined into a 16-bit one, or a 12- or 10-bit ADC. Gotta use a 16-bit variable in code.

Wael
–As I’ve said, “I come in peace”. I don’t care anymore (turns out computers and the flow of information are more interesting in every way imaginable to me); I don’t and haven’t said anything about your whereabouts or other PII to anyone; I keep a lot of things to myself, for my personal protection only. I don’t play the games anymore, I don’t care; the only winning move is not to play. I do know the same cannot be said for me.

Wael August 21, 2016 3:15 AM

@Figureitout,

I know, bud! I’m just upset with myself I got dragged into this load of horse sh*t.

Clive Robinson August 21, 2016 4:20 AM

@ r, Curious, Thoth, et al,

If you are still reading this thread after the “toys out of pram” above.

less bytes generally means more room for functionality in resource starved environments.

Whilst it does, it also increases complexity at a rate greater than the byte savings… and complexity is very definitely a place you don’t want to go if you want the desirable side of your efforts to show a profit. Thus code maintainability, upgradability, reuse etc. become too expensive to consider, and your work becomes a pebble tossed into a black hole.

That’s not to say “coding small” is a bad idea; it’s actually a good idea if you can do it clearly, or minimise the impact to just a few low-level points that are very platform-specific. That is, your aim is to turn the “Coding Pyramid” into a “Coding Diamond”.

Sometimes life throws you a banana and you still have to catch it, even though it flies like a lead boomerang.

One from back in the days when I was a young turk was making a CPU-neutral cross assembler, by writing a pre-processor on a mainframe that converted a neutral instruction set to a CPU-specific instruction set. That is, all 8-bit CPUs had common instructions that could be translated… or so you would think.

Take the example of the (A)ccumulator register that ALU results get put in, or the secondary e(X)tra register that gets used for incrementing etc. as a loop counter. You would expect a simple and obvious one-byte instruction for setting the register value to zero, such as,

    CLRA CLRX etc

Not a load immediate two byte instruction such as,

    LDA 0 or STA 0 etc

But if there was no CLRA on the CPU, would you expect it to be more efficient in CPU cycles to instead use,

    XOR A, A

Which was the way you did a clear on at least one popular CPU…

The thing is I’ve written the same code for so many different CPUs in my time that I almost “program backwards” compared to others. That is I will write instruction level comments and then find the appropriate instruction to fill in. In effect I cut and paste the comments not the code…

Look on it as like writing a story in one major language, say French, and commenting each word in Latin, then later taking the Latin and converting it to another language, say English. It will work, and provided you use a certain style it will work well.

It produces a slightly stilted coding form that does not wring the best out of any individual CPU type, but does allow cross-platform use of code more efficiently in terms of code storage than you will get almost any other way (except for writing in a stack-based threaded interpreter, which is generally so stilted your brain has to work on the Z axis, not just the X and Y).

As a general rule when writing any non-trivial code for a “new to you” microcontroller, you should first write a “stock” BIOS, then add a simple scheduler RT/OS, before you write any application code.

Part of this is developing a clean, understandable API as your software backplane: application code sits above it, hardware code sits below it. You should write IO code wherever possible to be double- or triple-buffered and interrupt-driven. That is, the hardware interrupts have fast handlers that copy to/from short circular buffers and set flags, and the main RT/OS timer interrupt loop acts on these flags to copy data to/from linear buffers of appropriate size.
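The circular-buffer half of that pattern can be sketched as follows (plain Java for illustration; on a real MCU this would be C with volatile indices, and `put` would run in the interrupt handler while `drain` runs in the timer loop):

```java
// Sketch of the fast-handler / timer-loop split described above: a
// short power-of-two circular buffer filled by the "ISR" side and
// drained into a linear buffer by the "RT/OS timer loop" side.
public class RingBuffer {
    private final byte[] ring = new byte[16]; // power of two, so & masks wrap
    private int head = 0, tail = 0;

    // "ISR" side: must be fast, just copy one byte and advance
    public boolean put(byte b) {
        int next = (head + 1) & (ring.length - 1);
        if (next == tail) return false;   // full: drop, or flag an overrun
        ring[head] = b;
        head = next;
        return true;
    }

    // timer-loop side: drain pending bytes into a linear buffer
    public int drain(byte[] out) {
        int n = 0;
        while (tail != head && n < out.length) {
            out[n++] = ring[tail];
            tail = (tail + 1) & (ring.length - 1);
        }
        return n;
    }
}
```

Keeping the ring short bounds the time spent at interrupt priority; the larger linear buffers are handled at the lower-priority timer level.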

Whilst you can write code for microcontrollers in other ways, you tend to quickly end up wishing you had not.

One main advantage to this is that if you get the API right, application-side code becomes not just much easier to write, it can also be more easily made maintainable, upgradable and reusable. Further, you can get multiple developers involved without them tripping over each other, or for that matter even having to talk to each other 🙂

As time goes on, both your BIOS and RT/OS will grow, and you will realise what you are ending up with is like a stripped-down *nix kernel with RTOS extensions but without a lot of the bloat. However, most SoC systems have more RAM/ROM on chip than most mini-computers ever had, so bloat on its own is not really a consideration any more. Which might account for why many embedded developers these days just go directly to a stock Linux or BSD base and save themselves time.

In essence there are now three levels of embedded system. The first, of very dedicated and often limited function, you might find in your electric toaster or combi-boiler; these are “close to the metal” and have their own human-level UI built in, in both hardware and software. The next layer is similar, but has a standardised comms interface instead of a human-level hardware UI; these generally have storage only for configuration, not logging, and are not generally designed to run general-purpose user code. The highest level is basically general-purpose computers with specialised hardware control; such systems used to be made by designing interface cards and driver code for PCs.

In the future we will see less and less of the middle tier, and the top tier will become more like nodes in a communications network supporting parallel computing. Standard OSs will get streamlined, with more attention on throughput. This will mean microkernels to set up comms circuits, and userland control of the circuits once established, with applications in effect having distributed processes. Traditional OS services will be carried out by application-specific nodes, not local services, with the result that the “real time” element of the RT/OS will become dominant; thus the old-style OS will need to be more of a lightweight RTOS.

Thoth August 21, 2016 4:42 AM

@Figureitout

“Was a joke and I know but I guess we’re gonna have an argument now b/c you’re feeling a bit frisky I guess”

Don’t worry, it’s all good. I am trying not to trip over myself by misrepresenting the words of DJB, Adam Langley and Nir (the authors of either RFC 7539 or the original ChaCha paper). I am repeating myself again and again, hoping very hard not to confuse you and not to trip myself up badly 🙂 .

So … don’t mind me if I get long-winded when it comes to ChaCha20, because I was bitten by it a number of times while banging my head trying to figure out RFC 7539 and the original ChaCha20 paper, and I try hard not to mess up and mislead you.

I feel I have burnt a good amount of brain cells writing the Java(Card) code and math library.

“What about other 16bit chips that still get good support and I can code/flash w/ fat IDE’s? Why limit to just smart cards”

The only 16-bit stuff I have around me are smart cards 🙂 . I’d be glad if there were something else I could get my hands on without too much of a problem. Technically the IDEs can load the code and run it, especially jChaCha20, which can run as long as there is a standard JVM.

“There were quite a bit of “if’s”, like “if…if…if…if…if”, like at least 5 nested if’s. Adding all those branches definitely adds noticeable time that one could likely observe w/ oscilloscope, I know b/c I’ve tested this.”

It’s sadly a necessary bloat and evil, until somehow there is a better way to write the conversion code without the if-else, e.g. doing the 32-bit ROTL by 7, 8, 12 and 16 with only bitwise operations.
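For what it's worth, one branch-free way to do those rotations is to split each into a whole-byte shuffle plus a fixed sub-byte shift. The sketch below is desktop Java (my own illustrative `rotl32`, untested on actual JavaCard hardware, where the `int` temporaries would have to become masked `short`/`byte` arithmetic), but the structure itself contains no data-dependent if-else:

```java
// Branch-free 32-bit rotate-left over four big-endian bytes:
// rotate whole bytes first, then shift the remaining 0..7 bits.
public class Rotl {
    static void rotl32(byte[] w, int r) {
        int byteShift = (r >>> 3) & 3;  // whole bytes to rotate by
        int bitShift  = r & 7;          // remaining bits to rotate by
        byte[] t = new byte[4];
        for (int i = 0; i < 4; i++)     // byte-level rotation first
            t[i] = w[(i + byteShift) & 3];
        for (int i = 0; i < 4; i++)     // then the sub-byte rotation
            w[i] = (byte) (((t[i] & 0xff) << bitShift)
                         | ((t[(i + 1) & 3] & 0xff) >>> (8 - bitShift)));
    }
}
```

When bitShift is 0 (the ROTL 8 and ROTL 16 cases), the second term shifts an 8-bit value right by 8 and so contributes nothing, which is exactly what a pure byte swap needs; no special-casing required.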

“I know in C you can do quite a bit of int-to-string, 32bit to 8bit hex conversions in a surprisingly small amount of code (got some of these generally for LCD displaying stuff). I use a 16bit to 8bit conversion and vice-versa quite a bit when dealing w/ something like 2 8bit timers combined into a 16bit one or you got like a 12 or 10bit ADC. Gotta use 16bit variable in code.”

If you notice any room for improvement, do give me a heads-up. I would be very glad to see ChaCha20 running in a 16-bit environment in less than 1 second, since right now encrypting a 112-byte message takes around 4.30 seconds. Imagine the user having to wait that long to encrypt a document 😀 .

By the way on the GroggyBox project, I have developed a GUI and it’s starting to take shape in Java since Java provides rather good support for smart card access (better than Python or C in my opinion).

Once I have the full GUI mockup running, I would post some screenshots on my website. The GUI is geared for simplicity and usability for everyone besides techies.

Thoth August 21, 2016 4:57 AM

@Clive Robinson, r, Curious, et al,

“Whilst it does it also increases complexity at a rate greater than the byte savings… and complexity is very definitely a place you don’t want to go if you want the desirable side of your efforts to show profit. Thus code maintainability, upgradability, reuse etc become to expensive to consider, thus your work becomes a pebble tossed into a black hole.

That’s not to say “coding small” is a bad idea, it’s actually a good idea if you can do it clearly or minimise the impact to just a few low level points that are very platform specific.”

Complexity is always troublesome to handle. It is less about how much you can shrink the code than about how much bloat you can cut while still keeping it clean and not complex, I guess?

“That is I will write instruction level comments and then find the appropriate instruction to fill in. In effect I cut and paste the comments not the code…”

Hmmm … I have found someone who cuts code similarly to how I do my code cutting 🙂 .

Clive Robinson August 21, 2016 6:38 AM

@ The usual suspects,

As you are aware, the search for usable entropy is potentially never-ending. The major impediment to exploiting new sources is cost.

Well, now there is the possibility of safely and fairly cheaply adding fast-decaying subatomic particles to the TRNG list, with this muon detector,

http://www.symmetrymagazine.org/article/the-100-muon-detector

The rate might be slow, but it’s enough to agitate a CS-DPRNG effectively.
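That agitation could be as simple as folding each rare detector event into a hash-based pool that periodically reseeds the deterministic generator. A minimal sketch (my own illustration, with SHA-256 as the pool function; Clive names no particular construction):

```java
// Hypothetical entropy pool: slow physical events (e.g. muon-detector
// timestamps) are accumulated into a SHA-256 state, and the pool is
// periodically squeezed to reseed a CS-DPRNG.
import java.security.MessageDigest;

public class EntropyPool {
    private final MessageDigest pool;

    public EntropyPool() {
        try {
            pool = MessageDigest.getInstance("SHA-256");
        } catch (Exception e) {
            throw new RuntimeException(e); // SHA-256 is mandatory in the JRE
        }
    }

    // Called on each rare detector event; even a low event rate adds
    // unpredictability over time because the pool accumulates.
    public void addEvent(long nanoTimestamp) {
        for (int i = 0; i < 8; i++)
            pool.update((byte) (nanoTimestamp >>> (8 * i)));
    }

    // Produce a 32-byte seed for the CS-DPRNG; chain the old output
    // back into the pool so state is never simply reset.
    public byte[] extractSeed() {
        byte[] seed = pool.digest(); // digest() also resets the pool
        pool.update(seed);           // carry forward the old pool state
        return seed;
    }
}
```

The point is exactly Clive's: the source rate can be tiny, because the pool only needs to accumulate enough unpredictability between reseeds, not supply output bandwidth itself.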

Clive Robinson August 21, 2016 7:08 AM

It would appear that the Canadian Association of Chiefs of Police (CACP) officers want Canada to create a law compelling people to hand over their computer passwords,

http://motherboard.vice.com/read/canadian-cops-want-a-law-that-forces-people-to-hand-over-encryption-passwords

Whilst the CACP, like all ACPOs, is a trade-union-style lobbying organization, politicians and legislators have a habit of giving it more credence than it deserves.

The CACP has members from across Canada, who met earlier this month for their annual conference. Prior to this, the members were sent quite biased background reading by the CACP, and the result was that a “resolution” was passed mandating that the group advocate for such legislation.

It is unclear whether such a law would actually be possible under Canada’s present legal framework. Which gives rise to the situation of Canada’s senior police officers being put in the position of lobbying for an illegal act…

ianf August 21, 2016 7:40 AM

@ Clive,

interesting, but I have grave doubts that such a law could pass, simply because it’d be unenforceable. For one, it’d offer a ready focal point around which the various human rights groups there could easily rally. How do you manage 500 protesters all claiming to have forgotten their passwords BUT pointing out who else might have (also forgotten) it?

Also, the first time some political figure is arrested AND it transpires later in court that the police never asked for said now-celebraliability‘s password, the Parliament will move to strike it down. I don’t know more about Canada than I read in the news, but that proposal sounds very much like a shot across the bow of legislators: grant the Police some other, lesser bills, or the union will make noise about this one. One of the legal democratic methods of exerting pressure in such scenarios.

Clive Robinson August 21, 2016 7:40 AM

When Evidence is not Evidence.

The dump of the –supposed– Equation Group hacking tools is getting more interesting as time goes on.

Kaspersky, who analyzed it, have claimed that the use of a constant in the RC6 code is “evidence” that it must be the Equation Group, because they have used the same constant in that particular form before.

As I’ve already mentioned, the fact that the constant is the two’s complement of the original subtractive constant given by Ron Rivest is not really proof of anything, as your choice as a programmer is 50:50 as to which to use: x = y - constant and x = y + (two’s complement of constant) produce the same result.
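The arithmetic is easy to check: in 32-bit two’s-complement arithmetic, subtracting the RC6 additive constant 0x9E3779B9 is exactly the same operation as adding its negation 0x61C88647. A minimal Java demonstration (Java ints wrap at 32 bits, so they model the machine arithmetic directly):

```java
// Subtracting a 32-bit constant and adding its two's complement are the
// same operation, so the "unusual" negated RC6 constant proves nothing
// about who wrote the source code.
public class ConstantEquivalence {
    static final int Q32 = 0x9E3779B9;      // RC6 "magic" additive constant
    static final int NEG_Q32 = 0x61C88647;  // its 32-bit two's complement

    static int viaSubtract(int y) { return y - Q32; }
    static int viaAdd(int y)      { return y + NEG_Q32; }

    public static void main(String[] args) {
        System.out.println(-Q32 == NEG_Q32);                                // true
        System.out.println(viaSubtract(0xCAFEBABE) == viaAdd(0xCAFEBABE)); // true
    }
}
```

So the negated constant can appear in a binary without the programmer ever typing it; either the programmer or the compiler may have chosen the additive form.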

Well, it turns out I am wrong in one aspect… It may not be only your own choice; your compiler might make that choice for you instead…

https://www.cs.uic.edu/~s/musings/equation-group-rc6/

This is just another reason I get annoyed about “Attribution” in what passes for “computer forensics” when it comes to supposed state-level participants.

It’s not just that there appears to be a knee-jerk urge to blame the current favourite baddy and then look only for what you think supports your choice, which is not science. It’s that countries are queuing up to pass legislation saying that such god-damn-awful nonsense can be used as an argument that the current favourite baddy has committed a “first act of war”, so they can go hot with kinetic weapons and WMD against them…

Thoth August 21, 2016 7:46 AM

@Clive Robinson

re: Forceful coercion by legal and illegal means to hand-over personal secrets

Regarding this matter, I have updated the OpenPGP smart card applet with a new feature (my favourite feature): a self-destruct PIN code.

I have sent a “pull request” to the maintainers of the JavaCard variant of the OpenPGP smart card applet (the supplier of the Yubikey USB hardware token), containing my modified version of the applet with a specialized self-destruct trigger in the form of a specially provisioned PIN code.

My version of the code, when comparing login PINs, randomly decides whether to compare the self-destruct PIN or the actual user PIN first, to confuse attackers attempting to mount a power-glitch attack on the card chip against the modified logic.

What I can improve on in future for the self-destruct code is to add a tamper-detection flag in the form of a byte: the comparison logic would write ahead in advance to update the tamper flag, and if a power glitch occurs the flag would fail to update properly and would trip the self-destruct event (wiping all the PGP keys in the smart card). For now I have not implemented this additional tamper-reaction function and rely on confusion alone.
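A rough plain-Java sketch of the comparison idea (illustrative only, not the actual JavaCard applet code; all class and method names here are mine):

```java
import java.security.SecureRandom;
import java.util.Arrays;

// Sketch: compare the candidate PIN against the user PIN and the
// self-destruct ("duress") PIN in a random order, so a power-glitch
// attacker cannot know which comparison is being skipped or forced.
public class DuressPin {
    private final byte[] userPin;
    private final byte[] wipePin;   // the self-destruct PIN
    private boolean wiped = false;
    private final SecureRandom rng = new SecureRandom();

    DuressPin(byte[] userPin, byte[] wipePin) {
        this.userPin = userPin.clone();
        this.wipePin = wipePin.clone();
    }

    boolean isWiped() { return wiped; }

    // Returns true only for the real user PIN; entering the duress PIN
    // silently destroys the key material.
    boolean check(byte[] candidate) {
        boolean userFirst = rng.nextBoolean();  // randomize comparison order
        boolean userMatch, wipeMatch;
        if (userFirst) {
            userMatch = Arrays.equals(candidate, userPin);
            wipeMatch = Arrays.equals(candidate, wipePin);
        } else {
            wipeMatch = Arrays.equals(candidate, wipePin);
            userMatch = Arrays.equals(candidate, userPin);
        }
        if (wipeMatch) wipe();
        return userMatch && !wiped;
    }

    private void wipe() {   // stands in for erasing the on-card PGP keys
        Arrays.fill(userPin, (byte) 0);
        Arrays.fill(wipePin, (byte) 0);
        wiped = true;
    }
}
```

On a real card the wipe would erase the PGP key material in persistent memory; here it just zeroes the stored PINs to mark the state.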

If the maintainers of the JavaCard variant of the OpenPGP smart card applet do not want to accept my updated version of the software, it can be downloaded manually from my Github repository.

It’s about time that card developers and other designers of hardware-secured applications start thinking about how to ensure the security of their systems in the event their users are coerced by aggressors, by sprinkling in some tamper detection and reaction triggers that the user can employ with a good chance of getting away with it.

For readers who are strongly against tampering with evidence: we are living in an era where governments have turned their backs on the citizens who gave them power, and the big corporates and companies sell out their customers as though they were tradeable human products, with very little feeling or thought for how their customers would feel. Maybe we should reconsider our own personal security and privacy models and not rely on our “Lords and Ladies” for shelter anymore, as they have proven to be untrustworthy.

Links:
https://github.com/Yubico/ykneo-openpgp/pull/43
https://github.com/thotheolh/ykneo-openpgp

ianf August 21, 2016 9:07 AM

OT COMPUTER ADVICE COLUMN: Should I replace a MacBook Air with a Windows laptop?

Google launches cross-platform answer to FaceTime

    The Duo app for Android and iOS aims to be a simple, friction-free alternative to Skype, using your phone number to quickly place free video calls. Does not require a Google account, as Hangouts does. https://gu.com/p/5vvhe

The Guardian’s resident mad feminista-fashionista on Men’s Cargo Shorts – practical clothing or man-shaming stupid pants?

    “These multi-pocketed monstrosities have provoked an unlikely summer fashion war. But do they have merit beyond kneecap-high mobile phone storage?” [She also expects them to be ironed—NO JOKE]. http://gu.com/p/5vv4n

Who? August 21, 2016 10:27 AM

@Thoth

What is the advantage of a self-destruct PIN, like the one you suggest, compared to a more traditional short-lived digital certificate?

After reading about your work for months, I am seriously considering buying an OpenPGP smart card (I think version 2.1 is the one for sale right now) and perhaps a smart card reader with keyboard. A device that allows secure storage of private keys is a good choice even when using an operating system like OpenBSD on all computers here (some permanently airgapped). It should be enough to securely store a set of sign/encrypt/authentication certificates. Are JavaCards better for these purposes? What advantages may they have (apart from their programmability)? My primary goal is authentication using OpenSSH, but it would be great to allow signature/encryption of files and email too.

Another question, if you do not mind answering it. What would you choose? RSA (either 2048, 3072 or 4096 bits) for all three certificates or RSA-whatever for encryption and DSA for signature and authentication? I understand Ed25519 is out of the scope of current smart cards.

Right now RSA may look “older,” but I do not think DSA is better as it is sponsored by the wrong third party (NIST) and it has to be 1024 bits long to be compliant with NIST’s FIPS 186-2:

$ ssh-keygen -b 2048 -t dsa
DSA keys must be 1024 bits

Thanks for your work on JavaCard technology!

Who? August 21, 2016 10:57 AM

@Thoth

Being more specific… as I understand it a smart card will lock after three failures and will become unlocked only after typing the “admin PIN” (in GnuPG terms).

I supposed modern smart cards were protected against tampering, clock-signal glitches and so on.

Thoth August 21, 2016 11:10 AM

@Wael

re: Self-Destruct PIN vs Short Lived Certs

What if a person were captured while a short-lived certificate (say, a 1-month certificate) is still valid? The user could be forced to divulge the PIN to access the private keys, and a lot of bad things can be done within that month, like signing bogus or tampered software files, especially for security products and projects. Most smart card applets enforce a retry limit, but an adversary would not be so foolish as to assume you haven’t set a retry limit on the card.

The better way is to force the aggressor either to brute-force the PIN code, trip the PIN retry limit and permanently lock both themselves and you out, or (Plan A) to give them a self-destruct PIN and make them think they have you in their control so they relax their guard a little, though you should expect them to be extremely furious once they realise they tripped the self-destruct trigger. Either way the aggressors face tough decisions thanks to hardware-backed security in a tamper-resistant package, and it is the captive’s job to make life tough, or at least go down with a nice boom (Mutually Assured Destruction), if you know what I mean.

The fact that the normal OpenPGP card is fully closed source makes it less trustworthy. I personally prefer to do things myself, so I will choose the JavaCard variant and load the applet myself, as I trust and am confident in my own work.

The GlobalPlatform specifications explicitly use the JavaCard platform as a model, so JavaCard is the “golden standard” of sorts when it comes to GP-compliant smart cards. JavaCard products are more rigidly designed and vetted according to the GP standards, from the JCVM’s secure operations and Card Life Cycle down to what is expected of the API and functions, while the official OpenPGP card supplied by g10code uses its own raw, proprietary architecture; no one except the designer knows what is in their non-GP-compliant Card OS and OpenPGP applet. I suspect they created the OS and applet as a tight-knit design suiting their use case, but the fact that they did not use a well-known architecture is not something I would want to touch.

Not to be too mean to g10code, whose sales actually sponsor the activity of the GPG project and also support GPG’s lead, Werner Koch: if anyone wants to sponsor the GPG project, head over and purchase the card they produce, despite the closed-source and proprietary system, as it funds the GPG project.

But … from a security perspective, I have to scrutinize my options and be mean at times, so I have to be practical and take the JavaCard variant for the sake of the code signing I need to do on my own code.

If you are using a JavaCard capable of v2.2.2, you are likely only going to have access to 2048-bit RSA. JavaCard v3.0.4 comes with 4096-bit RSA, but this requires the card supplier’s hardware and Card OS to enable the support. 3072-bit RSA is rather rare, as the JavaCard standards don’t support that key length but do support odd lengths like 1984 and 1536 bits. I think 2048-bit RSA would be sufficient and is the most widely recognized standard.

If you are using smart-card PGP keys, only RSA is an option. The ECC curves DJB recommends are not NIST curves, which makes them unavailable on smart cards. It isn’t really much of a choice: once you use smart-card-backed PGP keys, the only option is PGP using RSA, as per the specifications for the OpenPGP card found on the website of g10code (maintainer of the OpenPGP card standard). The best keying option, sadly, would be RSA 2048, since that is an almost universally recognized PK algorithm and key length.

Link: https://g10code.com/p-card.html

Thoth August 21, 2016 11:22 AM

@Who?

The truth about how a smartcard blocks after 3 wrong tries: there is a hardware counter which is checked on any attempt to use card resources. It is the card developer’s due diligence to implement the API so that it checks whether the retry counter has maxed out and then immediately kicks the connection by sending an error indicating a security fault.

The card doesn’t magically block access; the code has to call the API and then kick the offending connection.

The GPG Admin PIN is like the SIM card PUK code, for those who have played with SIM security before. The Admin PIN also has a retry limit of 3, just like the Reset Code and the User PIN. If the Admin PIN exceeds its retry limit and gets blocked, it’s pretty much game over, as nothing more powerful than the Admin PIN exists to reset the other PIN codes.
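The retry-counter pattern described above looks roughly like this in plain Java (illustrative only; a real card keeps the counter in persistent hardware-backed memory, and JavaCard exposes it through the OwnerPIN class):

```java
import java.util.Arrays;

// Sketch of a PIN with a retry counter: the counter is decremented
// BEFORE the comparison, so a glitch during the compare cannot
// preserve the attempt, and a blocked PIN refuses all further checks.
public class RetryPin {
    private final byte[] pin;
    private final int maxTries;
    private int triesRemaining;

    RetryPin(byte[] pin, int maxTries) {
        this.pin = pin.clone();
        this.maxTries = maxTries;
        this.triesRemaining = maxTries;
    }

    int triesRemaining() { return triesRemaining; }

    boolean check(byte[] candidate) {
        if (triesRemaining == 0) return false;  // card is blocked
        triesRemaining--;                        // decrement first
        if (Arrays.equals(candidate, pin)) {
            triesRemaining = maxTries;           // correct PIN resets counter
            return true;
        }
        return false;
    }
}
```

The decrement-before-compare ordering is the same write-ahead discipline mentioned earlier for the tamper flag.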

Modern smart cards are equipped with a metal mesh to detect physical IC chip intrusion, plus detection of power and clock glitching, hardware-supported resistance to power analysis of crypto operations, and more, depending on the IC maker and designer. ARM also has its line of smart card soft IP, sold as ARM SecurCore, with tamper-resistant design baked into the package; chip makers like ST, G&D, NXP, Samsung and many more purchase the SecurCore IP to implement in their smart card chips.

Wael August 21, 2016 11:30 AM

@Thoth,

re: Self-Destruct PIN vs Short Lived Certs

Is that a question or is it a comment addressed to me? If it’s a question, then please rephrase it more concisely.

Who? August 21, 2016 11:35 AM

Thanks @Thoth!

(It was me, not @Wael, who asked this question.)

It is clearer now. Thanks! I see JavaCards and OpenPGP smart cards are truly different technologies. I did not know OpenPGP smart cards were a closed technology, as they are designed by an open/free-source-friendly corporation (as you say, g10code) and have a full set of specifications available. Nice to know! It makes the decision between a Neo and an OpenPGP smart card even more challenging.

However, I am not sure we should trust JavaCards, as these are small computers with a Java interpreter. You know much better than I do whether we can trust that technology.

I think recent OpenPGP smart cards (at least v2.1 ones) support both RSA (up to 4096 bits) and 1024-bit DSA (no elliptic curve cryptography variants). I would certainly avoid any ECC based on NIST curves, like ECDSA, which is the reason I asked about Ed25519. Don’t know what to say about DSA itself. It is modern cryptography, but sponsored by NIST.

For an OpenPGP smart card I think I will stay with RSA-2048 or RSA-4096 (as it is supported since v2.1) instead of DSA. We will see if a future standard supports Ed25519.

Who? August 21, 2016 11:59 AM

@Thoth

I know what “PIN” and “Admin PIN” (as you say, some sort of PUK code) are… however, what is the “reset code” you mentioned? I have seen it in the GnuPG documentation and have looked for information about this option for weeks. It is the fourth option in the “passwd” command available in GnuPG when editing smart cards. Is it the “reset PIN” you are using in your updated JavaCard code? (i.e., some sort of smart card self-destruct code.)

I think that, even if an “Admin PIN” is mistyped three times, there is a way to recover the card, but not the certificates it contains:

https://lists.gnupg.org/pipermail/gnupg-users/2009-September/037414.html

(to me it is much easier making a new set of subkeys than buying a new card.)

Petter August 21, 2016 12:20 PM

Social media photos can be used to create a 3D face model, with the photos then mapped onto the model, to bypass facial recognition security systems.

https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/xu
https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_xu.pdf

Abstract:
In this paper, we introduce a novel approach to bypass modern face authentication systems. More specifically, by leveraging a handful of pictures of the target user taken from social media, we show how to create realistic, textured, 3D facial models that undermine the security of widely used face authentication solutions. Our framework makes use of virtual reality (VR) systems, incorporating along the way the ability to perform animations (e.g., raising an eyebrow or smiling) of the facial model, in order to trick liveness detectors into believing that the 3D model is a real human face. The synthetic face of the user is displayed on the screen of the VR device, and as the device rotates and translates in the real world, the 3D face moves accordingly. To an observing face authentication system, the depth and motion cues of the display match what would be expected for a human face.

We argue that such VR-based spoofing attacks constitute a fundamentally new class of attacks that point to serious weaknesses in camera-based authentication systems: Unless they incorporate other sources of verifiable data, systems relying on color image data and camera motion are prone to attacks via virtual realism. To demonstrate the practical nature of this threat, we conduct thorough experiments using an end-to-end implementation of our approach and show how it undermines the security of several face authentication solutions that include both motion-based and liveness detectors.

Kemp Ensor's weeping rectal chancres August 21, 2016 12:37 PM

@Clive re Tor strike. Funny how we’ve got this organization dealing with problems that happen everywhere every day, but with the Tor project, the Internet is full of helpful assholes pushing actions that, if carried out, would just happen to interfere with the work of the organization.

This is the most honest testament to Tor. It’s been obvious from the beginning of Applebaum’s kitchen-sink vilification that NATO governments are randomly trying things to foment conflict, any conflict, among anybody working on Tor. It’s standard COINTELPRO, get people fighting on any pretext. They think you’re all too stupid to know who your enemy is. This one smells like the JTRIG pedos but the US is undoubtedly involved too, since NSA’s a laughingstock now and the Internet is starting to undo NSA sabotage.

Marcos Malo August 21, 2016 1:45 PM

@r
I’m good for the occasional smart ass comment in the spirit of fun. I can’t really contribute to the signal here, so hopefully there are redeeming qualities to my noise. I do get a kick from the humorous back and forth between the regulars, and sometimes I can even follow a bit of the technical discussion.

ooof August 21, 2016 2:04 PM

@ianf (re 9:00) what I mean is, Turkey’s NATO weaponry has crippled IFF (Identify Friend-or-Foe). By design, NATO weapons platforms can’t lock on other NATO allies. In Turkey’s case, they can’t lock on Israeli planes either. It’s demeaning, like giving them Baby’s First Fighter Jet. Turkey is quite aggrieved about this, since Israel has killed their nationals and assaulted their humanitarian Gaza flotilla. Turkey has in the past threatened to source their arms to less controlling suppliers (like, say, Russia.) So Turkey’s de facto NATO denunciation is not unexpected. The botched CIA coup probably just tore it.

JG4 August 21, 2016 2:07 PM

I don’t think that I mentioned this threat model before, and I don’t recall seeing it articulated. “They” have sufficient information to reliably duplicate your voice, speech patterns, email “return address,” writing style and vocabulary. I haven’t yet seen (or don’t recall seeing) the argument made that one of the most compelling reasons for using robust encryption is to prevent people/groups/agencies from impersonating you, in particular to disrupt your network. The threat model was at least implied by the post a couple/few months ago of a newsclip where Stanford published video of real-time transfer of facial expressions to a computer-model animated mannequin.

I love the Snowden quote about the first amendment and not abandoning it just because you have nothing to say. The bit about Jacob Applebaum and the diligent efforts of various parties to create conflict in the TOR community reminded me of this gem, which I haven’t seen posted/mentioned often enough:

“Read the CIA’s Simple Sabotage Field Manual: A Timeless, Kafkaesque Guide to Subverting Any Organization with ‘Purposeful Stupidity’ (1944)” [Open Culture]. Weaponized agnotology. How nice.
http://www.openculture.com/2015/12/simple-sabotage-field-manual.html

—bonus information from the day that it was posted—

http://www.nakedcapitalism.com/2016/08/200pm-water-cooler-892016.html

Imperial Collapse Watch
“Coming to Terms with Secret Law” (PDF) [Harvard National Security Journal]. From the abstract:
https://cryptome.org/2016/08/rudesill-secret-law.pdf
The allegation that the U.S. government is producing secret law has become increasingly common. This article evaluates this claim, examining the available evidence in all three federal branches. In particular, Congress’s governance of national security programs via classified addenda to legislative reports is here given focused scholarly treatment, including empirical analysis that shows references in Public Law to these classified documents spiking in recent years. Having determined that the secret law allegation is well founded in all three branches, the article argues that secret law is importantly different from secrecy generally: the constitutional norm against secret law is stronger than the constitutional norm against secret fact. Three normative options are constructed and compared: live with secret law as it exists, abolish it, or reform it. The article concludes by proposing rules of the road for governing secret law, starting with the cardinal rule of public law’s supremacy over secret law. Other principles and proposals posited here include an Anti-Kafka Principle (no criminal secret law), public notification of secret law’s creation, presumptive sunset and publication dates, and plurality of review within the government (including internal executive branch review, availability of all secret law to Congress, and presumptive access by a cadre of senior non-partisan lawyers in all three branches).
The horse may already have left the barn; after all, Obama’s set up a system where U.S. citizens can be whacked on the say-so of the executive branch alone, which seems to violate the Anti-Kafka principle. “Sovereign is he who decides on the exception,” cackles Nazi legal theorist Carl Schmitt.

News of the Wired
“Library anxiety is real” [Atlas Obscura]. “Some students in Mellon’s study did their best to avoid the library altogether. ‘I know that nothing in here will hurt me,’ wrote one freshman, ‘but it all seems so vast and overpowering.’ Another first-year student described the library as ‘a huge monster that gulps you up after you enter it.’”
http://www.atlasobscura.com/articles/the-strange-affliction-of-library-anxiety-and-what-librarians-do-to-help
“Researchers crack open unusually advanced malware that hid for 5 years” [Ars Technica]. Tick tick tick..
http://arstechnica.com/security/2016/08/researchers-crack-open-unusually-advanced-malware-that-hid-for-5-years/

r August 21, 2016 2:51 PM

To: Gerard van Vooren, All (OT)
CC: Thoth

Subject: Institutional Corruption and CO-OPT’ed behaviour.

More to Gerard’s statement the other day, picked this up from “full measure” this morning.

“Pharmaceutical Industry–Sponsored Meals and Physician Prescribing Patterns for Medicare Beneficiaries”
https://archinte.jamanetwork.com/article.aspx?articleid=2528290&linkId=25731487

@ianf,

Still want to trust the doctor that says “it gets the scalpel” as opposed to the @Clive Robinson styled ones?

This is why and where the American legal requirement to have medical insurance has botchulism currently; to mandate the requirement of paying these just-write-it-off thieves is contemptible.

SEE: trust your mechanic, after all – mechanics get to play under the house rules.

ScottD August 21, 2016 3:33 PM

@r

The other reasonable method is something like “did you find me a lawyer?”

I assume you mean by this that a pass phrase is better. Yes and no. A pass phrase is easier to remember, and statistically has many more possible permutations from a straight numerical perspective (i.e. a ridiculously large set for brute forcing). Except that most people will choose a common phrase, which is probably susceptible to a dictionary-type attack just as a single word would be.

Most password systems limit the number of characters to significantly fewer than the character count of most pass phrases and do not allow spaces. A two or three word password, where the words are capitalized and separated by an underscore (or dash), meets the requirements of most password systems that require an upper case, lower case, and a symbol. Much easier to remember with just two or three words (three recommended).

Many of the online websites that create a word based password, use a dictionary of around 7000 common words. So consider the security of a two or three word password. (disregarding the joining character to simplify the argument)

Assuming a 7000 word dictionary, a brute force attack on a single word would see 7000 possible combinations. 2 words 49,000,000 (trivial to attempt 49 million with even a cheap computer). 3 words 343,000,000,000. 343 billion. Still not much for today’s computers. Consider your hard drive is probably much bigger than that.

Now consider my Word Password Generator which uses a dictionary of 109462 words. 1 word is obviously 109462 combinations. 2 words 11,981,929,444. Now consider just three words, easy to memorize, has 1,311,565,960,799,128 combinations. Your desktop computer would take quite a while to try all those. Granted a massively parallel system, such as the NSA uses, could still crack that (everything is crack-able given enough horsepower). Seems from a security point of view, compared with the status quo, that for an average person, a three word password from a near unabridged dictionary (some offensive words and words shorter than 3 letters removed) becomes both easily memorable and much more secure than what is currently in use. By using a random generator, such as what I made that uses a very good random number generator, the problem of people using their pets/kids/spouse name and such goes away.
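The arithmetic above is just D^N for a dictionary of D words and N words per password (still ignoring the joining character); a few lines of Java confirm the figures:

```java
import java.math.BigInteger;

// Keyspace check: N-word passwords drawn from a D-word dictionary
// give D^N combinations.
public class WordKeyspace {
    static BigInteger combos(int dictSize, int words) {
        return BigInteger.valueOf(dictSize).pow(words);
    }

    public static void main(String[] args) {
        System.out.println(combos(7000, 2));    // 49000000
        System.out.println(combos(7000, 3));    // 343000000000
        System.out.println(combos(109462, 3));  // 1311565960799128
    }
}
```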

Interesting is some of the combinations of words that come up. The best way to pick a password is to generate a bunch and simply look at the list. Within a dozen or so combinations, there should be a word combination that simply clicks with your brain making it all that easier to memorize. As it was chosen at random, the security of that password remains high while being very easy to memorize.

I think using a password generator is better than making up your own password. Everyone has biases toward specific words and types of phrases. A randomly generated password overcomes that limitation while greatly increasing the security, avoiding re-use, and making the three word combination easy to remember, which seems to be the biggest issue in any type of password based anything.

r August 21, 2016 3:52 PM

@ScottD,

I think you should have your webapp print the word behind the transformations also. It would help people map the mutations visually. What I meant with my original response is that using passwords like this is a good way to provide rudimentary deniability… one can simply state “I think it is ‘could you help me please’.” I think you’re on the right path for extending the concept behind diceware and bringing any background processes available for hardening such processes into the foreground. I think you and I both believe the same thing: that there is no reason for what we think of as ‘insane’ passwords to be hard to remember. But I can’t fit my perspective into everyone’s shoes. Certainly using a password like this is a good way to have a semi-robust method of getting into a password manager, as it gives its keys to your memory and understanding.

ScottD August 21, 2016 3:54 PM

Just to add a little more to my brain dump (it is cathartic to cleanse the brain cells occasionally). Along with the argument for using three randomly chosen words: by using Orthographically Assembled Words, which most likely do not exist in any dictionary while remaining pronounceable but nonsensical, the combinations become ridiculously huge.

Consider a 3-orthographic-word password. Assume all lower case, and assume two syllables (a consonant-vowel pair, in random order, is a syllable) per word. The total number of combinations using these short, pronounceable orthographic pairs is 415,617,096,595,784,117,870,850,624 (174 consonants, 157 vowels, two of each per word, three words). That is why I think it is better to use orthographic words than dictionary words for higher-security needs.
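That figure is (174 × 157)^6: one consonant and one vowel orthography per syllable, two syllables per word, three words. A one-line check in Java:

```java
import java.math.BigInteger;

// 174 consonant and 157 vowel orthographies, each syllable one
// consonant-vowel pair, two syllables per word, three words:
// (174 * 157)^(2*3) total combinations.
public class OrthoKeyspace {
    static BigInteger total() {
        return BigInteger.valueOf(174L * 157L).pow(6);
    }

    public static void main(String[] args) {
        System.out.println(total());  // 415617096595784117870850624
    }
}
```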

Vinnie Gambino August 21, 2016 4:03 PM

@Don re post-Snowden NSA leaks: Greenwald of the Intercept had reached the (more than tentative; less than definitive) conclusion that there was an additional leaker while he was still receiving the Snowden documents a few years ago. So a new leaker would be no novelty.

V Gambino August 21, 2016 4:06 PM

@Don re post-Snowden NSA leaks: Greenwald of the Intercept had reached the (more than tentative; less than definitive) conclusion that there was an additional leaker while he was still receiving the Snowden documents a few years ago. So a new leaker would be no novelty.
OOPS – looks like the “less than” char is part of an escape sequence on this blog…

-VinnyG

r August 21, 2016 4:09 PM

@ScottD,

Of course your idea can be used to salt and strengthen the concept behind diceware, but your current application of it (outside of it temporarily being broken; I just tested again) is extremely thick, like l33t speak. I think printing the source key next to the transformations may be a good idea.

Two things make me nervous about passwords: is the shift key/caps lock prone to louder/more distinctive EM leakage (this applies to both capitalizations and certain characters), and likewise the inverse, caps lock plus randomized shift to get lowercase? I try to shy away from randomized capitalizations and randomly positioned numbers/characters because of this.

r August 21, 2016 4:10 PM

“I think printing the source key next to the transformations may be a good idea.”

Or lowering the permutation rate from 100% to 30-50% to make things more recognizable and thus memorable.

r August 21, 2016 4:15 PM

@ScottD,

Wasn’t borked, I clicked on the wrong link. It’s still very thick to read though so it makes it more like a randomly generated password and less of a memorable one.

ScottD August 21, 2016 4:16 PM

@r

Broken? What is happening? It is working when I try it. You can contact me off list if desired. My email is on the contact page of the website.

ScottD August 21, 2016 4:19 PM

@r

It’s still very thick to read though so it makes it more like a randomly generated password and less of a memorable one.

are you referring to the Word based generator or the Orthographic generator?

ScottD August 21, 2016 4:31 PM

@r

I think you should have your webapp print the word behind the transformations also.

Assuming you are talking about the orthographic words: at the bottom of the web page is a complete list of the vowel and consonant orthographies used to create the words. A word is generated as follows:

Assume the default setting of 1 to 3 syllables per word. A random number is chosen with the range 1 to 3 to choose how many syllables to use for that word. Assume 2 for this example.

Another random number is chosen as a coin flip to decide whether to start the word with a vowel or a consonant. Assume consonant.

As there are 174 possible consonant orthographies, a random number from 0 to 173 is chosen to select a consonant.

The last character of that consonant is examined to determine if that letter is a vowel or consonant. If it is a consonant, then a vowel orthography is chosen next. If it is a vowel, then a consonant orthography is chosen next. Assume it is a consonant.

As there are 157 possible vowel orthographies, a random number from 0 to 156 is chosen to select a vowel.

repeat to make the second syllable.

That is the process for generating a word from the orthographies.
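The steps above sketch out roughly as follows in Java. This is a condensed illustration under stated assumptions: the orthography pools are abbreviated to a handful of sample entries (the real generator uses 174 consonant and 157 vowel orthographies), and SecureRandom stands in for whatever RNG the site uses.

```java
import java.security.SecureRandom;

// Condensed sketch of the described generation procedure, using short
// sample subsets of the orthography lists.
public class OrthoWordSketch {
    static final String[] VOWELS = {"a", "ai", "ea", "ee", "igh", "oo", "ou", "uye"};
    static final String[] CONSONANTS = {"b", "ch", "ck", "gn", "ph", "sh", "th", "tch"};
    static final String VOWEL_LETTERS = "aeiouy";
    static final SecureRandom RNG = new SecureRandom();

    static String word() {
        int syllables = 1 + RNG.nextInt(3);     // 1 to 3 syllables
        boolean vowelNext = RNG.nextBoolean();  // coin flip: start with vowel or consonant
        StringBuilder w = new StringBuilder();
        for (int i = 0; i < syllables * 2; i++) {   // each syllable is two orthographies
            String[] pool = vowelNext ? VOWELS : CONSONANTS;
            String part = pool[RNG.nextInt(pool.length)];
            w.append(part);
            // Examine the LAST letter just appended: if it is a consonant,
            // pick a vowel orthography next, and vice versa.
            char last = part.charAt(part.length() - 1);
            vowelNext = VOWEL_LETTERS.indexOf(last) < 0;
        }
        return w.toString();
    }
}
```

Note the alternation keys off the final letter of the orthography, not its category, exactly as described, so an orthography like “igh” (a vowel sound ending in the letter ‘h’) is followed by another vowel orthography.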

r August 21, 2016 4:54 PM

@ScottD,

It IS less random than 100% randomly generated strings. IF the idea is to be memorable but obscure, then the orthographic substitution at this point may be too thick for some people to connect the dots for memory purposes, and simple misspellings provide the same level of protection where diceware is concerned when you’re not presenting a direct mapping of the source word in the context of the permutation. Nobody (see @ianf) – nobody here anyway – is going to use those passwords in situ(?) as presented by your site (no offense). My opinion is that if you want to promote orthographic substitution you should lower (or scale) the mutation rate and present a directly co-relatable example of the source word you’ve transformed.

ScottD August 21, 2016 5:08 PM

@r

…present a directly co-relatable example of the source word you've transformed.

Yeah, I agree they can be a bit thick. The site is in BETA TESTING, so I am looking for this kind of feedback. I guess one option would be to pare down the lists, eliminating some of the more complex constructs.

The process is not grabbing a word from the dictionary and transforming it (see previous comment). It assembles words from the following orthographies:

Vowel Orthographies:
a aa aae aah aahe aar ach ae aer ah ai aie aig aigh aille air aire ais al ao aoh aor aow aowe ar are arr arre arrh au aue augh aughe aur aure aw awe awy ay aye ayer ayor ayre e ea eah ear eare eau eaue ee eer eere eg eh ei eie eig eigh eighe eir eo eor eou er ere err erre ers ert es et ete eu eur eure ew ewe ey eye eyre ez gh i ia ic ie iee ier iere ieu iew ig igh ighe ii ioux ir irr irre is o oa oar oare oe oea oeh oeu oh oi oll olo oo ooe oor oore or ore ot ou ough ougha oughe oul oup our oure ow owar owe oy oye u ua uar ue uet ueue ueur ui uo uoy uoye ur ure urr urre ut uy uye w wo y ye yr yrrh

Consonant Orthographies:
b bb be bh bt c cc ce ch che chi chm chs chsi cht chth ci ck cks cn cq cqu cque cques cs ct cu cz d dd de dg dge dh di dj dn ed ed f fe ff ffe g ge gg gge gh ght gi gm gn gne gu gue h i j k ke kes kh kk kn ks l ld le lf lh lk lks ll lle lm ln lve m mb mbe me mh mm mme mn mn mp n nc nd ne ng ngh ngue nh nn nne o ou p pb pe ph phe phth pn pp ppe pph ps pt q qu que ques r re rh rr rre rrh rt s sc sce sch sci se sh she shi si sne ss sse ssi st sth sw t tch tche te th the ti ts tsch tsh tt tte tth tw tz u v ve vv w we wh wr x xc xe xs xsc xsw y z ze zh zi zz

Thoth August 21, 2016 6:29 PM

@Wael, Who?

Oops… I mistook @Who?'s question and addressed it to @Wael. I was answering the question before I turned in for the night 🙂 .

Don August 21, 2016 6:54 PM

as in life, as in Security, so too on this blog

don’t let the poison in.
be vigilant, discerning, rigorous, disciplined

remember who you are. remember what serves your life force and which wishes to steal it

Don’t feed the troll. (s)

Nick P August 21, 2016 7:07 PM

@ ScottD

The longest-running one I know is random.org. What’s your pitch on differentiators in terms of features, cost, RNG quality, whatever?

r August 21, 2016 7:07 PM

@Don,

Humor increases one’s life expectancy. Humor at the expense of others is kind’ve parasitic. But! by all means: don’t expect every last one of us to want to run around all over town down with a frown – we’ll drown. Clowns will sharpen both one’s dull life and wit.

eg. don’t be a mopy d***. $m:)e for the cameras (and logs++).

@ScottD,

EaaS from NISTwits? Let’s go spam that thread.

Thoth August 21, 2016 7:08 PM

@Who?

The Reset Code in the OpenPGP card specification is not a self-destruct PIN. Imagine a scenario where a journalist organisation issues an OpenPGP-compliant card to a journalist: they would supply a means for the journalist to reset their own PIN codes in the event they lock themselves out by accident while they are travelling. The Reset Code is kind of like the Admin PIN but less powerful, used only for resetting the user PIN when the user accidentally locks themselves out.

Regarding the link on recovering the card, it is more a matter of destroying the old PGP keys and starting anew. Version 1 of OpenPGP would block the card permanently, but Version 2 would not block the PIN; it would wipe the card, similar to my self-destruct feature, with the difference that you need to block the user PIN and admin PIN before it wipes. More accurately, such behaviour is called a factory reset when you wipe the card. JavaCard OpenPGP development prefers to permanently block out the entire card instead of resetting it to the factory state. I can modify the JavaCard code to enable a factory reset after blocking the user and admin PIN if anyone wants this feature, but whether the original JavaCard OpenPGP maintainers want such a patch is up to them to decide. If you want the code update, do let me know so that I can update my version of the JavaCard OpenPGP repository.

Regarding choosing the JavaCard variant or the original variant of the OpenPGP card for use in a security setting: due to the closed nature of the original OpenPGP card, it presents a total black box, whereas the JavaCard system has documentation on expected behaviours and card states, plus standards governing what is defined as a JavaCard and what conditions must be met for a card to be certified as GP- and JavaCard-compliant before such an approval is given.

I have mentioned about code obfuscation making a VM based card platform that contains backdoors harder to inspect applet codes and how to constrain platform resources so that a VM-based card platform would not have enough resource to host an AI that sniff the applet execution as well in older posts.

Another alternative is to wait for the Ledger Blue and Nano S hardware to launch. The Ledger Blue and Nano S are mostly open source hardware and firmware; the closed source part is mostly the embedded ST31 smart card chip's driver firmware, which the Ledger Blue and Nano S use to communicate with the embedded ST31 smart card chip. The rest of the BOLOS OS that the Ledger Blue and Nano S use is mostly open source, and they already have a GitHub repository containing the open source code for the BOLOS OS as well as the closed source driver, so that developers of the Ledger Blue and Nano S hardware security device can load the OS and code by themselves.

Another plus point is that developers would have full 32-bit CPU access to the embedded smart card chip in the Ledger products, because they will not be using any JVM of any sort; the code would be bare-metal style. When coding for the ST31 chip you can go close to bare metal with the full 32-bit ARM, with the exception of the closed source blobs, for which the Ledger team have provided a HAL API layer so developers can access the closed source binary without needing any NDA while keeping their development open source.

That would mean someone wanting to code those DJB curves for a Ledger product could go down to the bare metal and do their own DJB ECC curves and 32-bit ChaCha20 if they want, without any constraint. I am waiting for the Ledger Blue and Nano S to launch before porting my security applets, once used for traditional smart cards, onto the Ledger devices, which provide more security and development flexibility than any smart card can match 😀 .

ScottD August 21, 2016 8:46 PM

@Nick P

What’s your pitch on differentiators in terms of features, cost, RNG quality, whatever?

Regarding cost:
Absolutely free. I see this as an interesting infrastructure project, especially regarding the needs of IoT. The past couple of years I poked at various projects; this one, for some reason, I just find to be an interesting problem. As I state on the front page of the website, there is no tracking, no cookies, no advertising, and access is anonymous. Currently I am paying for everything. My plan is to develop the site into something useful and then hopefully get a grant to pay the operating costs to keep it running. I am not intending this to be a business. There are some real costs involved, but it is not that expensive.

Regarding security:
I am Beta Testing its functionality. Obviously much work is needed to make it a truly secure service. First step is to test and verify that the numbers and utilities are working properly in a useful form. For the service to be truly secure, I will need fully dedicated, wholly owned and operated, hardware with controlled access located in a secure location like Switch. When I get to that point, the costs will go up, but it is still not too expensive that a modest grant from some organization could easily cover. I am open to suggestions what and how it should all work.

Regarding Verification:
Assuming I get the dedicated hardware and a secure place to park the server cabinet, I plan to have all of the operating code verified/certified by whoever it is that does that (NIST? EFF? a committee of volunteers?). I realize the issues of MITM attacks and such. What I plan to have, for which a button already exists on the front page, is a series of open source programs written in C (with compiled versions also available) that implement some of the features of the website on the client side. For instance, the password generator. The server side routines, which can be used with any web browser or with a single GET through the API, can be implemented on your local machine. Grab a block of random numbers from the server, run a quick confidence test on the numbers, then generate the passwords on your own system. The real issue is the generation of the random numbers. The various features are simply providing useful forms of the random numbers.

Much literature is available about the problems and issues with generating very high quality, high resolution, uniformly distributed random numbers. My technique is very brute force and numerically intensive, but the results are very good. I spent two years developing the technique. Basically the website offloads the intensive numerical crunch, giving instant access to large blocks of fresh uniformly distributed random numbers. For instance, when your computer first boots up (especially a Linux system) the entropy pool is very shallow. You can fill the pool, or get a big random number to use as a seed, or similar, thereby avoiding issues with poor entropy at power-up.
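On Linux, one way to use a fetched block at boot is to write it into the kernel pool (a hypothetical sketch of that step only; bytes written to /dev/urandom are mixed into the pool but do not credit the kernel's entropy estimate):

```python
def mix_into_pool(block, pool_path="/dev/urandom"):
    """Mix a fetched block of random bytes into the kernel pool.

    Writing to /dev/urandom stirs the bytes into the pool without
    crediting the entropy counter; crediting requires the privileged
    RNDADDENTROPY ioctl. pool_path is a parameter so this sketch can
    also be exercised against an ordinary file.
    """
    with open(pool_path, "wb") as pool:
        pool.write(block)
```

Even without an entropy credit, mixing in a fresh block means the pool's output depends on data an attacker observing only the cold-boot state would not know.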

RNG Quality:
Better quality and resolution than the other techniques, including QRNG. I am working on a paper that does a comparative analysis with QRNG with interesting results. Just not quite done with it yet (so much to do at the same time). My paper (pdf) describes how the numbers are generated.

Differentiation with Random.org et al:
Aside from higher resolution and quality with my technique, random.org limits a pull of numbers to 10,000. My limit is 1,048,576. A limit is necessary else someone could request a terabyte and crash the system. My site has no daily limit whereas they have a limit of 1,000,000 bits (that is bits not bytes). As demand increases, I can add additional generators working in parallel to easily scale up as needed. The API on their site, and on many other sites requires registration and returns results in a JSON array that requires parsing. Other sites require registration to simply get a block of numbers. Although I give the option of getting the numbers in a JSON array, I return the numbers directly, or as CSV, JSON, or as a downloadable text or binary file. Their API requires submitting a JSON structure with commands, requires registration to get a key, and such. My API works with a single GET with a query string to select various options. This is very easy to implement in an IOT processor with very limited resources. As the numbers can be returned with minimal or no formatting, processing by the application is very simple, even for very large blocks of numbers.
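A single-GET client along those lines might look like the following (hedged: the query parameter names `n` and `format` are invented for illustration, not the service's actual API):

```python
import urllib.parse
import urllib.request

def parse_csv_block(body):
    """Parse a CSV body of numbers; trivial even on a constrained IoT host."""
    return [float(x) for x in body.strip().split(",")]

def fetch_random_block(base_url, count=1024, fmt="csv"):
    """Pull a block of random numbers with one GET and a query string.

    The parameter names here are placeholders, not the real endpoint's.
    """
    query = urllib.parse.urlencode({"n": count, "format": fmt})
    with urllib.request.urlopen(f"{base_url}?{query}") as resp:
        return parse_csv_block(resp.read().decode("utf-8"))
```

Because the response is bare CSV rather than a wrapped JSON structure, the parsing step is a single split, which is what makes the approach practical on very limited processors.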

Not every application is about security. Consider an engineer or researcher who needs to generate very high quality white noise through a D/A, or is simulating white noise for analysis of an analog system. They can download a very large block of non-repeating floating point numbers that can be gigabytes in length (this may require appending several sets, but that is trivial).

Phew, this is getting long. I will gladly answer any questions, discuss any issues, debate the philosophies and concepts.

Wael August 21, 2016 8:48 PM

@Don,

Those words are like bells of wisdom that toll in a world of foolishness.

Although the bells tolled for all, I heard them most loud and clear [1]

Thank you for those majestic words, my friend! I needed them.

[1] Could be my tinnitus 😉

Wael August 21, 2016 9:48 PM

@ScottD,

I read parts of the paper. You put a lot of work into this, and I hope it works out well for you. Have a couple of inquisitive questions, though:

Spectral analysis is usually used to pull structure from disorder, such as extracting a signal from a noisy source. We want the opposite. Given some initial condition or seed, we want noise without any perceptible structures or repetition.

So in spectral analysis, you use Fourier transforms, not series. Why have you chosen the Fourier series (with prime coefficients), and can you really call that a "Fourier series"? Why did you use that over a Taylor or Maclaurin series? I am just curious.

Another question: why did NIST only approve predictable random number generation for cryptography, but didn't accept non-predictable? I may have read that in the past, but it's probably fresher in your mind. Is it a "conspiracy" thing, although there may be a "mathematical" and "formal proof" side to it?

ScottD August 21, 2016 10:49 PM

@Wael

So in spectral analysis, you use Fourier transforms, not series. Why have you chosen the Fourier series (with prime coefficients), and can you really call that a "Fourier series"? Why did you use that over a Taylor or Maclaurin series? I am just curious.

Fourier was the starting point. Other than using a prime table instead of incremental integers, the raw summation process is identical to summing the Fourier series. But the raw summation is only the start. Much conditioning must be done to the numbers to create a uniform distribution, which is what the bulk of my paper discusses.

The concept is a sum of a very large number of orthonormal functions, or in my implementation, a sum of oscillators. Sine and cosine are infinitely cyclical; the Taylor series is not, as it is a polynomial series. I needed transcendental functions that are easily calculable with high-order multiples.

Normal usage of Fourier series is to sum incremental integer values of n. Using incremental integer multiples results in harmonic oscillators which produce perfect multiples of previous terms. By using primes instead, since prime numbers are by definition never a multiple of each other, no oscillator is a perfect multiple of any other oscillator. The goal is to create white noise which is a summation of all frequencies. So each oscillator is contributing a unique characteristic to the summation.
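The raw summation step, before any of the conditioning the paper describes, can be sketched as a sum of sine oscillators at prime frequencies (a toy illustration only; the real generator's term count, phases, and conditioning stages are not shown):

```python
import math

# A short prime table; the real implementation uses far more terms.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def raw_sample(t, phases=None):
    """Sum sine oscillators whose frequencies are prime multiples of a base.

    Because no prime is an integer multiple of another, no oscillator is a
    perfect harmonic of any other. This is only the raw sum: as noted above,
    heavy conditioning is still required to reach a uniform distribution.
    """
    if phases is None:
        phases = [0.0] * len(PRIMES)
    return sum(math.sin(2 * math.pi * p * t + ph)
               for p, ph in zip(PRIMES, phases))
```

Swapping the prime table for consecutive integers would recover an ordinary harmonic Fourier sum, which is exactly what the prime choice is meant to avoid.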

Another question: why did NIST only approve predictable random number generation for cryptography, but didn’t accept non-predictable?

Deterministic random numbers means that given the exact same initial conditions, you can calculate the exact same set of random numbers. This is important in many applications. I do not have the appropriate NIST citation at hand, but a conclusion by NIST is that deterministic random numbers are actually more random than non-deterministic (natural or true) random numbers. I too found that while developing my technique.

ianf August 21, 2016 11:02 PM

@ Thoth, Re: Forceful legal coercion to hand-over personal digital keys (self-destruct PIN)

I’m not competent to evaluate the technical niceties of your design, but from a purely ergonomic point of view (ease of deployment), unless runtime use of it is IOTTMCO straightforward, it risks staying in the drawer rather than in the pockets of suitable users.

For that reason alone, when thinking of ways to nullify legal confiscation of digital keys (to one’s private kingdoms, etc), I keep coming back to a solution of the “dead man’s grip” type (“cold dead hands” to ye Yanks ;-)). I.e. any device that may forcibly be taken away from us, with us compelled to disclose its entry key, needs to be preset to a self-destruct/ brick-itself mode conditional on the action being arrested after logging in. That bricking needn’t be wholly destructive (=making the device unusable in all perpetuity); optionally it should be made reversible with a special recovery pass key THAT ONE NEVER KEEPS WITHIN REACH/ or in close proximity. If a SIM can discern between PIN and PUK codes, so could the RAM-based bricking routine.

Thus when approaching a checkpoint etc where the risk of confiscation is heightened, we preset the device to that conditional self-bricking mode and then, smiling, “Officer, here’s your password, enjoy it for the next 3 minutes” (in reality 45 seconds, but who’s counting). After all, devices may be expensive, but they are replaceable. Privacy transgressions can never be rolled back.

ianf August 21, 2016 11:15 PM

@ Marcos Malo hopes “there are redeeming qualities to his noise”

Listen, don’t buy into this stupid “signal-to-noise” concept. It is usually defined along the lines of “my complaining about the noise is SIGNAL,” whereas what others think of as their signal, is (by any to us objective measure) pure NOISE. This forum is what we all make it be.

There are some here who’d like it to consist of but meandering ACCRNM-filled intellectually impenetrable accounts of the übertechy kind… but, if it weren’t for the rest of us, they’d bore themselves to death (there are only so many sexy aspects of algorithms, and ASICs, and data diodes). So fire away, and never you mind occasional fly-by-night trolls admonishing us to “beware of the trolls!” (akin to a thief running down the street shouting “catch the thief!”).

@ ooof,
            the way you present it, that allegedly crippled IFF functionality of American-built Turkish planes MUST BE the linchpin to peace in the region, or at least a proper balance of power. Whereas in reality the Turks were fully briefed and on board as to what they’d be getting for their US military aid. Or do you think that Obama suddenly issued an emergency “Withhold AT ONCE the IFF function from the Turks” Executive Order?

As for that Gaza flotilla project with the deadly outcome, the Israelis were the first to admit to their heavy handedness that led to the death of 9 Turkish nationals onboard. One guy who was there told me that they expected to be drowned with sea water from approaching tugboats, never that they’d be boarded by IDF special forces from helicopters (he did not understand how the victims died either, given that his large group was merely ordered at gunpoint to sit down and wait until the melee was over). Moreover, it pays to keep the objective in mind: the Israelis want to prevent import of weapons etc to Gaza, so they direct all ships (also, ultimately, all the flotillas’ own) to the port of… Ashdod? There is no port in the Gaza strip itself. Even then, a sizable portion of building materials that pass through end up being used by Hamas for construction of infiltration tunnels, etc., hardly peace-abetting reconstruction projects… because only with gun in hand can Palestine be taken back (good luck with that).

The rest of your Turkey-in-NATO-in-Meddle-East (no typo!) reads like another argument in line with the claim that Israel’s PM Netanyahu had Stuxnet code rewritten to make the payload act more like a hammer than as a scalpel. Because of the WHO’S THE BIGGEST MACHO thing.

@ rrrrrrrr
               I regret to inform you, that, while I understand all the words in this your missive, and could s.p.e.l.l them s.d.r.a.w.k.c.a.b while doing flip-flops, none of the sentences that they make up make sense to me (the doctor that I mentioned simply could not diagnose my leaky tear duct, scalpels were not on the menu, etc). Which is why I can not comment upon it.

Wael August 21, 2016 11:16 PM

@ScottD,

Got it.

By using primes instead, since prime numbers are by definition never a multiple of each other, no oscillator is a perfect multiple of any other oscillator.

Ok, so a square wave can be represented by a Fourier series summation: sine, cosine, or exponentials (they are related through Euler’s formula). So the sum of your series represents what, “white noise”?

Now about the use cases, and this relates to certifications you inquired about: PEN testing reports would be required by some customers or clients. They would probably ask for proper threat modeling results as well. They don’t accept “self-certified” reports, they’d want an independent third party specialized test house to conduct the test. I say this because you may have to consider creating a “sandbox” and expose some special APIs for testers, when the time comes. Better be ahead of the game.

Now, for me as a user, what assurance would I have that you, god forbid, aren’t you know… manipulating the numbers based on my IP address, for example? Would you consider splitting the randomness control between the client and server so that a client wouldn’t need to implicitly trust your service? I mean, me as a client would like to have a say in the matter, more granular say and control.
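One standard way to get that kind of split control is for the client to XOR the server’s bytes with locally generated bytes, so the output is unpredictable as long as either source is; a minimal sketch (my suggestion for illustration, not a feature of the service):

```python
import os

def mix_with_local(server_bytes):
    """XOR server-supplied randomness with local OS randomness.

    If either source is truly unpredictable, the XOR of the two is as
    well, so the client no longer has to implicitly trust the remote
    service (or its view of the client's IP address).
    """
    local = os.urandom(len(server_bytes))
    return bytes(s ^ l for s, l in zip(server_bytes, local))
```

With this construction a manipulated server block can bias the result no more than the local OS randomness allows.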

that deterministic random numbers are actually more random than non-deterministic (natural or true) random numbers.

Now that’s interesting! Could you shed some light on the tests you ran and what models you used to test for randomness? A while back (and randomness gets discussed often here) @Clive Robinson shared a document (in German, and my German sux) about such tests. Are there any papers you can point us to? This may be outside the scope of comments that you were hoping to get from us…

Regarding NIST: perhaps this isn’t the most appropriate time to make jokes, so I’ll leave it for now.

ScottD August 21, 2016 11:23 PM

@Wael

Well, now that my brain is in overdrive, an additional comment. When I first started my project, I considered using Legendre polynomials and some other series. What is interesting is that the theory behind quantum random number generators is similar to what I am doing. QRNG plays off the uncertainty at the moment of measurement, which is actually just catching a finite-well sum of oscillators in a specific state. Mathematically a finite well is often described with Legendre polynomials. So, in a sort of sideways look at what I am doing, I made a QRNG simulator, but with better results. A problem with using Legendre, and many other series, is that the very high number of terms needed is not practical: the coefficients explode beyond the first dozen terms.

r August 21, 2016 11:30 PM

@ianf, Wael,

Thar be trolls afoot(ahead).

@ianf,

“(the doctor that I mentioned simply could not diagnose my leaky tear duct, scalpels were not on the menu, etc).”

And yet I found out about this doctor’s long-winded explanation; you really don’t wonder what the alternative was?

@Wael,

I’m glad you’re feeling better. I don’t quite understand what got ya down – but I’ll submit: lots of you guys are probably a lot deeper than I.

But! I have a question, in relation to your question to ScottD:

“Now, for me as a user, what assurance would I have that you, god forbid, aren’t you know… manipulating the numbers based on my IP address, for example? Would you consider splitting the randomness control between the client and server so that a client wouldn’t need to implicitly trust your service? I mean, me as a client would like to have a say in the matter, more granular say and control.”

There was a link to a doi paper here last month(?) that made the assertion that one could create provably random information out of pseudo-randomness, I believe. I don’t know if you’ve looked at it or thought about it, but it may be worth a look (considering).

ScottD August 21, 2016 11:34 PM

@Wael

A square wave is approximated by summing odd harmonics. The result looks like a sort of sloped square wave with ringing.
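That odd-harmonic sum is easy to see numerically; the sketch below computes the standard Fourier partial sum of a unit square wave, which exhibits the sloped edges and the ringing (Gibbs overshoot) near each transition:

```python
import math

def square_wave_partial(t, n_terms=5):
    """Fourier partial sum of a unit square wave:
    (4/pi) * sum over odd k of sin(k*t)/k.

    More terms sharpen the edges, but the ~9% Gibbs overshoot near
    each transition never goes away.
    """
    return (4 / math.pi) * sum(
        math.sin(k * t) / k for k in range(1, 2 * n_terms, 2))
```

Evaluating it on a grid of `t` values and plotting shows the ringing directly; at the plateau midpoint the sum converges toward 1.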

Would you consider splitting the randomness control between the client and server so that a client wouldn’t need to implicitly trust your service?

Due to the numbers being calculated in blocks, which can be timed with a stopwatch, blocks are calculated by completely separate computers (currently three used HP boxes running Linux dedicated to just calculating numbers). A small buffer of numbers sits ready to be sent. Current settings generate about 16k blocks of floating point numbers at a time.

As I mentioned in a previous post, I am still beta testing. I want to setup dedicated hardware and have all of the code (server, system, utilities…) inspected by whoever it is that does such, or whom others deem trustworthy. To get to that point will take some work, but first I need to get everything working in a way that is useful.

Wael August 21, 2016 11:45 PM

@ScottD,

inspected by whoever it is that does such, or whom others deem trustworthy

I was alluding to another form of client control: on-prem setup as opposed to the hosted one. You specify the hardware requirements and deliver the software solution, help with deployment. But the client will be in full control — everything sits inside the customer’s firewall. You may want to think about that, too.

Wael August 21, 2016 11:50 PM

@r,

There was a link to a doi paper here last month…

I probably missed it. I’ll track it down, thanks!

Thoth August 21, 2016 11:59 PM

@ianf

re: IOTTMCO

It uses standard OpenPGP commands and doesn’t require many additional tools to set up. That means anyone with an OpenPGP console like the GnuPG software can insert my modified version of the applet and just use the GUI or the command line for GPG to set it up.

The only difference is that now the user has to manage PW4 (a.k.a. the self-destruct PIN). I am looking into making a very specific and easy to use console for the sole purpose of initializing and managing the PW4 PIN; after initializing the PW4 PIN, the user can go straight to normal OpenPGP smart card use with the GnuPG GUI or command line. The reason for needing that one additional special console for managing PW4 is that PW4 is not a recognized standard PIN, and the card command only needs a bit of tweaking to work via the special console.

re: conditional on the action being arrested after logging in

Every time you sign a document, you have to log in to the card. That means when you are signing document A and get arrested, another login is necessary to do another signature, which means a new opportunity to feed in the PW4 self-destruct PIN, or another new chance to brick the card by typing in too many wrong PIN tries.

re: Non-disruptive bricking

This will only complicate the security and codebase. I prefer not to go into temporary bricking or destruction of the PGP keys.

If what you mean is reloading the backup PGP keys after the tamper and destruction occur within the card, once your surroundings are clear and safe, this is a valid option: during the setup phase of the OpenPGP card (be it the official or the modified variant), you have a chance to back up the PGP keys onto your hard disk, encrypted with a password. Assuming during creation of the card you back up the keys and then split-share the backup keys with your own secret sharing algorithm, it is considered doable, but the effort is huge and it must be worth the trouble, as the PGP GUI or command line will not do secret sharing and all that for you; you have to do it manually.

Assuming you are released by your captors and manage to get back to your news reporting HQ, you can reassemble your PGP keys and import them back into the card, as the card includes an import command you can use when the card is set into factory mode.

re: enjoy it for the next 3 minutes

Smart cards usually don’t carry an internal crystal clock and have no concept of time unless it is fed in by an external clock signal, which can be corrupted.

In fact, when you are approaching the checkpoint and are found to have an OpenPGP card, you could simply divulge the PW4 self-destruct PIN, which would wipe your keys and show a blank card without keys, and you can simply state that you have a newly reset card and have not had time to set up your PGP keys on it. They will not be able to tell from the card whether your statement is true, as it is a known fact that you can send individual card commands to manually load your PINs but not load your keys whenever you wish.

You could even reset the card to factory state by sending the TERMINATE DF command, then use the PUT DATA command to load your PINs and not load your keys, which will show up via command line or GUI as a card with PINs but no keys, a perfectly plausible state.

Considering the above possibility, there is no reason to suspect a card without keys as being totally out of line, just a little weird. The journalist could say that the news organisation provisioned the card but is doing key rotation for journalists, or that some maintenance is ongoing.
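The PIN-dispatch behaviour described in this thread amounts to something like the following (illustrative Python, not JavaCard; the class and constant names are invented, and the real applet’s logic is more involved):

```python
MAX_TRIES = 3  # invented retry limit for illustration

class OpenPGPCardSketch:
    """Toy model of the PIN dispatch: a correct user PIN unlocks, the
    PW4 self-destruct PIN silently wipes the keys, and exhausting the
    retry counter blocks the card."""

    def __init__(self, user_pin, destruct_pin, keys):
        self.user_pin = user_pin
        self.destruct_pin = destruct_pin  # PW4
        self.keys = keys
        self.tries_left = MAX_TRIES
        self.blocked = False

    def verify(self, pin):
        if self.blocked:
            return "blocked"
        if pin == self.destruct_pin:
            self.keys = None   # wipe: the card now looks freshly reset
            return "ok"        # indistinguishable from a normal login
        if pin == self.user_pin:
            self.tries_left = MAX_TRIES
            return "ok"
        self.tries_left -= 1
        if self.tries_left == 0:
            self.blocked = True
        return "wrong pin"
```

The key design point mirrored here is that entering PW4 succeeds from the verifier’s point of view while destroying the keys, which is what makes the “newly reset card” cover story hold up.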

ianf August 22, 2016 1:11 AM

  This post is all SIGNAL in the style of my correspondents’ signal. YMMV

@ Sam,
             who complains that “most of [my, ianf’s] posts appear too cryptic for the general census”

Given that I post in English, using full sentences made up of a beginning, a middle, and an end (in that order), what EXACTLY did you find “too cryptic” there?

Perhaps if you could give me a few pointers, quotes of mine that you had trouble understanding (and so elected to hide behind the generalizing “general census” [no such General in no Army I know, but apparently you do]), I could improve my act, and even try to repost them in simplified, all upper case form, JUST FOR YOU?

As for your “surprise” at my glancing at goo.gl built-in analytics, ask Wael who told him about it (“disclosed the existence of” may indeed sound “too cryptic”), which he and you now try to blow up to some über-spying proportions; by you additionally salted with some alleged link farming and metadata that is the Internet (which “reflects the choices we make on the paths we traverse” – now it’s my turn to ask WTF DOES THAT MEAN?)

It’s like spamming threads with bullshit to see who responds to who what when.

If only you could apply that sharp-edged wit to your own submission.

So let me see some links. Otherwise you just #talkthetalk.

@ Wael
              I hate to barge in like that into that your New! Promising! Friendship, but maybe you, who were so protective of that fly-by-nighter’s non-existent track record of security-oriented SIGNAL, could ask it the same question that you put to me

Tell me, @ianf: what areas of security do you see important? Share with us some of your pain points, so that we perhaps can suggest something — or discuss something technical.

… which I answered a few hours later.

I mean, given the absence of ANY intellectual track record, ask it this now, BEFORE it turns out to be a complete idiot, say, and you end up with a face palm—even though it once fed you all those touchy-feely “words like bells of wisdom that toll in a world of foolishness” which you found so full of promise.

[Preemptively: the above paragraph has been written in a special cryptic way for one Sam not to grok.]

Ratio August 22, 2016 3:36 AM

@ianf,

to hide behind the generalizing “general census” [no such General in no Army I know, but apparently you do]

General census is a con.

Carry on!

Clive Robinson August 22, 2016 6:15 AM

@ Wael,

With regards,

  that deterministic random numbers are actually more random than non-deterministic (natural or true) random numbers.

With a little thought you can see why that would be the case up to a point, and then the opposite would be true.

All CS-DPRNGs can be thought of as a counter of a given size (based on the size of the internal state) followed by a random map (or sum of random maps). That is, all values in the counter’s range appear just once, up to the point the counter rolls over, at which point they repeat. Thus you have a bounded range of input that contains all values in the range. The quality of the randomness is determined by the qualities of the map (see the random oracle cipher model [1]).
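That counter-plus-random-map model can be sketched with a hash standing in for the random map (an illustration of the model only, not any particular approved DRBG):

```python
import hashlib

def drbg_blocks(key, n_blocks):
    """Counter-mode sketch of the model: an incrementing counter fed
    through a fixed 'random map' (SHA-256 here). Every counter value in
    the range appears exactly once before rollover, so outputs cannot
    repeat until the counter wraps."""
    return [hashlib.sha256(key + c.to_bytes(8, "big")).digest()
            for c in range(n_blocks)]
```

The counter guarantees the no-repeat property; the map alone determines the apparent randomness, which is exactly the point being made about where the quality comes from.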

    A TRNG input however is not bounded thus logicaly it’s input set is infinite and in any meaningful –to humans– time frame a very sparse data set. That due to imperfect sampling / quantization can and will repeate some of it’s numbers.

    Thus you need to think about what were once called Chinese Counters or clocks when using oscillators with unrelated frequencies. That is, if they all start at zero, the highest-frequency oscillator gets back to zero first, and the others get there one by one in order of decreasing frequency. However, since the oscillators continue, they will not all arrive back at zero in phase at the same time for a very long time.

    To see how long, let’s have two oscillators of period 3 and 5 and sketch them out: they only cross zero together, from the same direction, every 15 time periods. Thus it’s clear that the total time is the product of the individual periods (strictly, their least common multiple, which equals the product when the periods are coprime).
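    A quick sketch of that realignment arithmetic (my own illustration): brute-force the first instant at which both oscillators are back at phase zero together.

```java
// Find the first time t > 0 at which two oscillators, both starting at
// phase zero, cross zero together again. For coprime periods this is the
// product of the periods; in general it is their least common multiple.
public class OscillatorRealign {
    static int firstRealign(int p1, int p2) {
        for (int t = 1; ; t++) {
            if (t % p1 == 0 && t % p2 == 0) return t;
        }
    }

    public static void main(String[] args) {
        System.out.println(firstRealign(3, 5)); // 15 = 3 * 5 (coprime periods)
        System.out.println(firstRealign(4, 6)); // 12 = lcm(4, 6), not 24
    }
}
```

The second case shows why non-coprime periods shorten the combined cycle: shared factors collapse the product down to the LCM.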

    There is however an issue with most oscillators: when you examine their output, the level is not uniformly distributed in time unless it is either a sawtooth or triangular wave. That is, with the likes of a sine wave sampled in time, the values are biased towards 1 and -1. Such bias is usually not desirable in any kind of RNG. Further, you have to consider what effect this has when you add the outputs of oscillators together. If you draw this out for the period-three and period-five oscillators, it’s clear to see that such bias produces more bias bands.
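    The bias toward the extremes is easy to see numerically; this small sketch (my own) histograms a sine wave sampled uniformly in time:

```java
// Sample one full cycle of a sine wave at uniform time steps and histogram
// the levels. The samples pile up near the extremes +-1 (the arcsine-shaped
// level distribution), illustrating the bias described above.
public class SineBias {
    static int[] histogram(int samples, int bins) {
        int[] h = new int[bins];
        for (int i = 0; i < samples; i++) {
            double v = Math.sin(2 * Math.PI * i / samples); // one full cycle
            int b = (int) ((v + 1.0) / 2.0 * bins);         // map [-1,1] onto bins
            if (b == bins) b = bins - 1;                    // clamp v == 1.0
            h[b]++;
        }
        return h;
    }

    public static void main(String[] args) {
        // The edge bins (levels near -1 and +1) collect far more samples
        // than the middle bins (levels near 0).
        System.out.println(java.util.Arrays.toString(histogram(10000, 10)));
    }
}
```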

    But this is before you sample or quantize the levels. It’s fairly easy to see that when you do so, the coarser the step size the more likely you are to get multiple readings of the same value.

    Thus the numbers most certainly are not going to look at all random, as they dwell at just one or two unique values in each bias band with few if any readings at values in between.

    Previous work has shown that if you want a good source of pseudo white noise you are better off using a long Digital Linear Feedback Shift Register (DLFSR), feeding it through an integrator, and then putting that through a low-pass filter set at half the clock rate, as this makes it very close to Gaussian white noise in its properties.
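    As a concrete illustration of the first stage only (a bare maximal-length LFSR, without the integrator and low-pass stages described above):

```java
// A maximal-length 16-bit Galois LFSR (toggle mask 0xB400, i.e. taps at
// bits 16, 14, 13, 11). It steps through all 2^16 - 1 nonzero states before
// repeating, which is what makes long LFSRs useful as pseudo-white sources.
public class Lfsr16 {
    static int step(int state) {
        int bit = state & 1;
        state >>>= 1;                 // shift right, zero-filling
        if (bit != 0) state ^= 0xB400; // apply the tap mask on a 1 output bit
        return state;
    }

    public static void main(String[] args) {
        int seed = 0xACE1, state = seed;
        long period = 0;
        do { state = step(state); period++; } while (state != seed);
        System.out.println(period); // 65535 = 2^16 - 1
    }
}
```

A hardware DLFSR would then feed these bits into an integrator and low-pass filter to shape the flat bit stream toward Gaussian noise.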

    [1] https://eprint.iacr.org/2008/246.pdf

    Curious August 22, 2016 8:37 AM

    I am curious about something that has to do with coding, though I am not sure my question makes the best of sense, but here goes:

    So I am thinking, that if a general idea of compartmentalization is creating physical separation of “things” and links between hardware on a motherboard, I am wondering: are there ways to create compartmentalization of code that are guaranteed in ways to stay separate?

    Thoth August 22, 2016 8:54 AM

    @Curious, Clive Robinson

    @Clive Robinson, do excuse me if I get the below wrong and supplement my comments 🙂 . About time to try and recall what I learnt from you in the past.

    There is no way, from a strict point of view, to compartmentalize code to keep it apart. Essentially there needs to be some form of soft logic between the application and execution codes acting as a sort of glue.

    There are Harvard and Von Neumann architectures, where Harvard attempts to keep execution code separate from application code and data, but if you think about it, there will be times when you need both to be used together (during execution).

    In the Von Neumann architecture, both application data and execution code are mixed, and it’s much harder to segregate code by logical means.

    Also, it is known that you can create your own execution environment on top of the application data, so the line to be drawn for segregating code is rather blurry.

    ab praeceptis August 22, 2016 9:13 AM

    @Thoth

    You will massively lose in terms of performance. Moreover, you will have a lot of algorithm proving to do (I spent considerable amounts of time on finding usable tools, only to be gravely disappointed by most of them). And let me guess: you love ACSL and Frama-C? – because that’s the hard part of your endeavour. The math has been done, thanks to djb, but now you are confronted by Dijkstra (“software is the implementation of algorithms”), and in the hardcore version, where the term “100% correct” enters the game.

    I’d reconsider. There are special contests for small hardware-optimized crypto. Considering what needs to be adapted for 8 bits (e.g. when doing 13-bit shifts after ARX), chances are you’ll end up with less than satisfying performance (and lots of chances for ugly venomous micro-squids (this is the squid thread, after all)).

    @Gerard van Vooren

    I’m pleased to see that there are still some people around who see the significance and beauty of Wirth’s brilliant work. And yes, I agree, Rust is just yet another attempt to get something like a C kind of language with Pascal qualities. I myself came back to it after decades because I’m working in a rather sensitive field and just couldn’t afford to play catch-me with C anymore. That said, I ended up doing some things in C anyway, albeit heavily guarded by ACSL and after a meticulous formal specification (and validation), mainly for 2 reasons: a) we don’t live on an island, and many libs, and of course the OS itself, make C all but unavoidable. b) While the basis itself is rotten beyond hope, there exist very useful things for C such as Frama-C and ACSL, along with some quite useful tools like code generators for formally specified state machines. (And a tongue-in-cheek c): Borland acted as irresponsibly and ignorantly as was at all possible, with a not too beautiful result.)

    “dictionary syllable based passwords et al”:

    I find it a never-ending source of astonishment (and amusement?) what we do not have. We do have a gazillion superdupersmart approaches – which unfortunately don’t work, for one reason (bad concept) or another (users simply perverting them). What we do not seem to have is an actually working approach, where by “actually working” I mean working over the necessary set of criteria – incl. the user (no matter how stupid we consider him/her/it(+60 sexes))!

    Funny: we do have the needed bits. Suggestion: abolish the password (oh well, make it look like it anyway). How? Make the URL the password – kind of. Let the user have 1 (in words: one) password and let’s play with that to create a gazillion passwords for a gazillion sites and services. Why? Because a) that’s how users tick. Ask them to remember 1 password and they will; ask them to remember 3 and they’ll play tricks or refuse. b) Math and CPUs are gladly purring when tasked to create variety in a manner that meets the necessary criteria. Humans, however, are not.

    Something like: user input (his password) fed into some (“some”, not “a”) hash algo, url (or service name or …) fed into some hash algo, encrypt it, et voilà, dinner is served. What does “some hash algo” mean? It means a combination of a) some very human-digestible additional selector (“What’s your favourite animal and your favourite singer?”) being mumble-jumbled into either b) an algorithm selector or, more sensibly, a seed or channel.

    Result: even a simple password (plus some very simple human-digestible variator) and one gets an automagically created, no need to be remembered, password that is different for each and every site or service and that is at least reasonably hardened against dictionary and statistics based attacks.

    Finally, on the practical side we’d need but a standardized mechanism (like a pipe) to feed the password into the browser etc.
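    A minimal sketch of the derivation idea above (SHA-256, Base64 and the field layout are my illustrative choices, not a vetted construction; a real design would want a slow, salted KDF such as scrypt or Argon2):

```java
import java.security.MessageDigest;
import java.util.Base64;

// One master password plus the site's URL plus a simple human "variator"
// ("favourite animal / favourite singer"), hashed into a distinct
// per-site password that never needs to be remembered or stored.
public class SitePassword {
    static String derive(String master, String url, String variator) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(master.getBytes("UTF-8"));
        md.update((byte) 0);                 // separator between fields
        md.update(url.getBytes("UTF-8"));
        md.update((byte) 0);
        md.update(variator.getBytes("UTF-8"));
        // Encode and truncate to a usable password length.
        return Base64.getEncoder().encodeToString(md.digest()).substring(0, 16);
    }

    public static void main(String[] args) throws Exception {
        // Same master and variator, different sites -> unrelated passwords.
        System.out.println(derive("correct horse", "example.com", "cats/elvis"));
        System.out.println(derive("correct horse", "example.org", "cats/elvis"));
    }
}
```

The separator bytes matter: without them, ("ab", "c") and ("a", "bc") would hash identically.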

    Kindly note that this is not answering the question of how to create a highly secure password worthy of locking the nuke arsenal of a country. It rather is the answer to the question of how to create a mechanism that keeps the 95% away from all the security sins committed by overwhelmed humans, and that addresses the 95%’s rather primitive but also rather frequent and painful problems out there.

    More generally: the problem isn’t that we can’t secure nukes properly (although, chances are, we can’t…). The problem, just look at an ssh server, is a gazillion stupid scriptkiddie attacks (plus obscenely perverse SSL configs that hardly an admin gets right), another gazillion dictionary attacks, etc.

    In other words: we have a strong tendency to strive to optimize a 99.7% good solution for 0.3% of problem situations (nukes, state DBs, etc.), while actually >99% of the – real and frighteningly often successful – attacks are dangerous and successful not because AES-128 is too weak and, oh god, had we only used AES-256; they are successful because admins keep RC4 enabled, because users use passwords like “secret[month of birth]” when being asked for 100 passwords for 100 sites, etc.

    Our problem is not the peak. It’s not that our good crypto isn’t good enough. Our problem is that the big fat base on the floor – >90% of software, databases, websites, etc. – is lousily, ignorantly, and mindlessly made and used.
    We find it worthwhile and sensible to again and again bring together the best crypto people – but we do not find it worthwhile to address the real albeit primitive problems.

    Thoth August 22, 2016 9:27 AM

    @ab praeceptis

    The jcChaCha20 is just a proof of concept for getting ChaCha20 from a 32-bit to an 8-bit environment. I was thinking of adding more cryptographic algorithms to a smart card than those supported by hardware, and I managed to get ChaCha20 running, which was a rather nice surprise, albeit with poor performance and a rather ugly math library.

    One quirk Java has is that its bitwise right shift retains the MSB (sign bit), despite the convention that bitwise shifts don’t need to retain sign bits. This is where the ton of ugly if-else “normalizers” comes in, to undo the retention of sign bits during bitwise shift operations, which leads to ugly code.
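    A desktop-Java illustration of that sign-extension quirk and the masking fix (Java Card’s type rules differ in detail, but its “normalizers” exist for the same reason):

```java
// Java's >> is an arithmetic shift (it copies the sign bit), and short/byte
// operands are promoted to int before shifting, so a 16-bit word with its
// top bit set drags sign bits into the result. Masking the operand down to
// 16 bits before an unsigned shift (>>>) normalizes this.
public class ShiftQuirk {
    // Naive: sign extension pollutes the high bits.
    static int naiveShr(short x, int n) {
        return x >> n;
    }

    // Normalized: treat x as an unsigned 16-bit value.
    static int maskedShr(short x, int n) {
        return (x & 0xFFFF) >>> n;
    }

    public static void main(String[] args) {
        short x = (short) 0x8001;                    // "negative" 16-bit word
        System.out.printf("%08X%n", naiveShr(x, 1)); // FFFFC000: sign bits dragged in
        System.out.printf("%08X%n", maskedShr(x, 1)); // 00004000: the intended result
    }
}
```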

    The good old rationale and taboo for smart card developers to never develop their own software algorithms holds true, as it is always wiser to use hardware accelerators than to do them in an awkward fashion in these constrained environments.

    vas pup August 22, 2016 10:11 AM

    http://www.bbc.com/news/uk-england-london-37152665
    Private drones used for criminal activity again.
    Are we (in the US and Western Europe) ready for a huge increase in drone (aerial and submarine) applications for criminal/terrorist/smuggling activities?
    Do the feds/Europol have any proactive plan for how to address and minimize this real, high-level threat, or are we still in reactive mode?

    Nick P August 22, 2016 10:37 AM

    @ ab praeceptis

    “And yes, I agree, Rust is just yet another attempt to get something like a C kind of language with Pascal qualities.”

    That’s an oversimplification. Rust’s history is actually one of experimentation toward a safe, concurrent, fast systems language. The inconsistencies that Gerard gripes about came from that experimental process: constantly adding techniques from other languages, including functional ones, then dropping what didn’t work to effectively solve problems. Its semantics, worked out over many person-years, were about the opposite of C’s approach. Wisely, they wrote the initial compiler in OCaml to get its benefits in robustness & extensions. Most fools write one in C despite it being probably least suited to that job.

    So, knock their inconsistency or feature set all you want. They’re not another C, though. Far from it. Actually, it’s hard for me to say most languages are “another C” given it’s defined by having almost no features, unsafety in core operations, and raw performance. Even C++, using its proper style and runtime, greatly improves safety and programming in the large. I’d probably limit “another C” to C modifications or attempts to re-create its properties. Aside from safer C’s, like Cyclone or Popcorn (C0), I just don’t know any…

    @ Gerard

    Every now and then I see a hobbyist project that attempts to focus on good tooling all around. This GitHub project attempted an Oberon compiler, probably as a learning experience. The neat thing is Luke wrote it in Ada 2012. It is interesting to see what it looks like in Ada with its strong types & specs. The implemented steps are still small in source form despite Ada’s verbosity. A testament to Wirth’s ability to keep a practical language simple.

    @ All

    re stuff in safer languages

    Meanwhile, work to re-write critical software in these safer languages continues. BlueJekyll did a nice write-up of his long haul clean-slating a DNS system in Rust. He also worked on DNSSEC extensions. Unlike most of them, he’s a fan of the C language who noticed it caused about 50% of BIND’s vulnerabilities, and decided to take action. Prior work in this area is IRONSIDES DNS in SPARK Ada.

    I suggested to BlueJekyll that the best thing he could do is post a unified view of the RFC specs and coding modules. The process of going through years’ worth of RFCs to slowly piece together a protocol is ridiculous. A single specification, citing where each requirement came from, would let newcomers get a good start implementing the DNS protocol. He thought it was a nice idea and might try it. Another commenter said their project marked the RFCs in the code, pull requests, and so on. I haven’t looked at the code, but that also sounds like a good start on the spec side.

    re LANGSEC

    One I missed before was the Nail parser generator. It’s the successor to the HAMMER tool I previously posted about.

    ab praeceptis August 22, 2016 11:18 AM

    Nick P

    We look from different angles with different criteria weights. To me even Java is “some kind of C”. Why? Because it uses braces and many other sins of C. And because it is, at least in considerable part (openly confessed or not), an answer to a question that has “C” in it.

    Well noted, this is not meant to be language flaming. May everyone use whatever he pleases and, yes, there are many valid perspectives and views. I’m talking about mine.

    Ada looked soooo promising to me. Designed by a Frenchman (pardon me, but a) there is some cultural and intellectual foundation that is simply lacking across the ocean, and b) the French have kept some sane level of education and some healthy corners in science intact, among them math), albeit rather quickly perverted over there (the usppa (united states of a part of a part of america)) and soon sent into the realms of insanity. Don’t get me wrong. There are fine people over there, Tucker being one of them, and they actually did some sensible evolution (DbC, to name one in Ada 2012); this, however, can’t counterbalance the bureaucratic insanity done to Ada. Just look how voluminous the standard has become. That a) destroys one of the holy principles of Ichbiah (simplicity and elegance) and b) makes Ada less secure. Why? Classical case: they left the human factor out of the equation. In other words: there are reasons for Wirth and others along that path (e.g. Modula-3) to almost anally keep standards tight and consistent. The human factor is an important one.

    Simple rule of thumb: any language standard with more than max 100 pages is getting exponentially close to a lottery generator.

    “nail” -> Funny and interesting: you linked the paper and I was googling the code. Not implying that one approach is better than the other, but noting that we come from rather different corners.

    I need tools, and my interest in scientific (and “scientific”) papers is strongly limited by the question of whether there is something for me to learn and to widen my horizon. In the end, however, I need tools. Reliable tools. Tools that are logically sound, tools with a solid scientific background – but tools.

    Why? Because unlike most I do not think that adding better crypto to SSL/TLS and doing some code cleanup will help us. It will mainly create new bugs and problems.

    From my point of view (which is just one of many, many of them valid) we need a solid basis and we need to properly think (and work). Not meaning to insult anyone, but my impression is that our primary problem is that there is way too much “it’s fun!”, “guru”, “bang bang” crap out there. The sad fact is that we still stand here with a very limited set of reliable, well designed and conceived tools. How can we build good safes with lousy hammers?

    To put it somewhat bluntly: the greatest progress in security we can possibly achieve has nothing to do with crypto or Linux vs. Windows (or CapROS or …). It has to do with avoiding the 90% of gross and shameful errors and malfeasance.

    Rust is a step, possibly a useful one and definitely a needed one. But the real answer is in the Wirth corner. And in math.

    Clive Robinson August 22, 2016 11:42 AM

    @ Curious, Thoth,

    I am wondering: are there ways to create compartmentalization of code that are guaranteed in ways to stay separate?

    It all depends on what you mean by “guaranteed”… The bottom line is that 100% security is mathematically impossible to achieve. But getting better than a couple of nines –99%– is fairly easy; getting each successive nine costs about ten times as much as the previous nine.

    There is a secondary issue that has to be considered, which is, as @Thoth mentioned, the computer architecture. If you look at the computing stack from device physics up to international law, each level can only give a security improvement if the layers below allow it. That is, you cannot have effective memory tagging if the memory architecture is not designed for it. Likewise you cannot have effective memory isolation without appropriate hardware support for memory segmentation.

    But even if the lower layers do provide what appears to be effective support, this will be conditional on the assumptions behind it. A recent example of this is a RowHammer attack on the page tables used for segmentation. RowHammer works at a very low level in the computing stack, down at the device physics level… The reason it’s possible is twofold: firstly, you have to know such an attack is possible before you design at that level of the stack; secondly, there has to be an effective solution to stop the higher levels of the stack reaching down to exploit the fault.

    But you cannot stop, at the higher levels, a “bubbling up” attack that originates from a lower level. One such attack is the Direct Memory Access (DMA) attack. If the DMA controller is activated, then, as its function is below the CPU level but above the memory level, any device connected to the memory bus via the controller can change whatever memory locations it likes. The segmentation hardware above the controller cannot stop it, nor will the CPU get any signals to say it’s occurring. Whilst the CPU nominally controls the DMA controller, it will not see a signal at the logic layer changing the configuration state of the DMA controller…

    But in most cases such attacks are currently too specialised for all but very technically skilled attackers with what is in effect “front panel” access, as an insider might have.

    Most modern OSs are fairly good at isolating processes from each other whilst also giving many ways for them to communicate through mediated channels such as Inter-Process Communication (IPC), networking and shared I/O devices.

    Thus if processes use these methods and securely manage them, then you can get a five or six nines level of security.

    The problem with this is that such levels of security are not liked by users, as they very much get in the way of the “make it so” workstyle that management has encouraged since the 90s in the name of the very short-term thinking of “increasing shareholder value”. What senior management do not yet appear to grasp is that such short-term thinking will almost always go wrong, and any marginal gains made in the short term will get wiped out entirely –possibly along with the whole organisation– when the inevitable happens.

    But worse, such thinking makes it through to application design. The classic example is web browsers: they effectively run numerous processes in the same memory space the OS assigned to the app at start-up. Thus web apps only have the protection of their own program logic… Such apps effectively short-circuit the security the OS has available…

    But if it’s your computer and you set it up correctly and write your own applications so they do not short-circuit the OS-level protections, you can get the five or six nines level of security. If you also spend the money, you can get more secure hardware using the likes of memory tagging, which will allow you to push up another level on the security side.

    I hope that goes some way to answering your question.

    Gerard van Vooren August 22, 2016 12:29 PM

    @ Nick P,

    “The implemented steps are still small in source form despite Ada’s verbosity. A testament to Wirth’s ability to keep a practical language simple.”

    The famous Ada verbosity isn’t really that bad. It makes sense when you realize what Ada was designed for. It met all the required goals. It’s good for small and large projects, it’s memory safe, and it’s really readable. Okay, it has its faults (the module system is too complex), but it also has excellent bit manipulation and custom floating-point types (both nowhere to be seen in Rust), growable vectors on the stack, and you name it. But most of all, it’s productive.

    Rust is a 2016 language, and although they solved some incredibly hard issues, it’s just a pain. The verbosity just doesn’t make sense. And seriously, I bet you could write a Master’s thesis on satisfying the Rust compiler alone and get a straight A.

    @ r,

    I will read the link later on. I was too busy ranting (which is too much fun to ignore).

    @ ianf,

    “HOW DARE YOU”

    Yes, I love you too. Dick.

    Who? August 22, 2016 12:32 PM

    Hi Thoth

    Thanks a lot for the detailed explanation of what a “Reset Code” is in the smart cards world. Now I have a clear idea of how “PIN”, “Admin PIN” and “Reset Code” work together. Thanks for the time you spent answering my—to be honest not very clever—questions.

    I think any PC/SC reader supports JavaCards, so I will look for an SCM PIN pad reader (SPR-332). It should support both smart cards and JavaCards, and this one has a chance of getting signed firmware updates in the future. Another possibility would be an SCR-3310v2, for those who would prefer non-upgradeable firmware. Now it is time to look for a JavaCard seller (for smart cards the logical step would be contacting kernel concepts).

    Indeed, I know old smart cards are bricked while unlocking them. If I choose the smart card route I will go for version 2.1 cards, but JavaCards have big advantages. I have seen two scripts to unlock v2 smart cards (shown in an earlier link):

    /hex
    scd serialno
    [verify commands to block PW1]
    [verify commands to block PW3]
    scd apdu 00 e6 00 00 (“terminate df”)
    scd apdu 00 44 00 00 (“activate file”)
    /bye

    and a slightly different one, that seems to include some protection against unintended blocking (http://briankhuu.com/blog/self/2015/02/28/openpgp-card-v2.0-factory-reset.html):

    scd reset
    scd serialno undefined
    scd apdu 00 A4 04 00 06 D2 76 00 01 24 01 (“select file”)
    [verify commands to block PW1]
    [verify commands to block PW3]
    scd apdu 00 e6 00 00 (“terminate df”)
    scd reset
    scd serialno undefined
    scd apdu 00 A4 04 00 06 D2 76 00 01 24 01
    scd apdu 00 44 00 00 (“activate file”)
    /bye

    where “verify commands to block PW1/PW3” are, as I understand them, not only strings like

    scd apdu 00 20 00 81 08 40 40 40 40 40 40 40 40

    but also something like

    scd apdu 00 20 00 81 10 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20

    (if I understand the smart card specification). If permanently bricking v2.1 cards is difficult, I may play with other combinations, like replacing the 0x40 (“@”)/0x20 (” “) bytes with 0x00. I will look carefully at this code to understand whether this one is better and why. I love learning something new!

    I understand this code will not be really useful if I buy a JavaCard, but it is a technology I want to understand and use. (Is it crazy to buy both smart cards and JavaCards?)

    I agree with you, something that is a black box should not be trusted so smart cards may not be the best choice.

    I did not know about the Ledger Blue and Nano S devices. They look great. I think these devices are just cryptocurrency wallets, so they may be less useful for my goals, but they are good designs.

    Bernstein’s curves must be implemented in a future standard. I cannot imagine a good reason for not having them in smart cards v4 (perhaps it is too late for v3) or as a standard feature on JavaCards. Not implementing them will increase our distrust of smart cards.

    The only issue I will have with JavaCards is my complete lack of understanding of the Java programming language. JavaCards are a good reason for learning this language, which I have been avoiding since the 90s.

    I cannot say anything about your repository. I think the features you wrote are useful; do whatever you think is better with your code, but certainly a factory reset after blocking seems reasonable to me. Why would JavaCard OpenPGP developers prefer permanently blocking the entire card to a factory reset? Are they in the business of selling cards and bugging users? If I understand what you say, a factory reset sounds much better than permanent blocking.

    Curious August 22, 2016 12:37 PM

    @Clive Robinson

    Yes it does, or rather, I think it makes it more interesting. Thank you. 🙂

    ianf August 22, 2016 12:54 PM

    @ rrrrrrrr

    […] And yet I found out about this doctors long winded explanation, you really don’t wonder what the alternative was?

    Are you an eye M. D.? Or merely one Self-Licensed To Remotely Diagnose Ailments And Suggest Remedies On The Basis of Analogies in “ER” or “Holby City?”

    I ask, because nearly all my female acquaintances ARE that, self-licensed diagnosticians who at various times have suggested alternative treatments for my specific ills. I liked the “have a Thai masseuse lick off the eczema” best, even though it was purely theoretical and would have been grossly revolting, sexist and whatnot.

    PS. as for allowing scalpels in proximity to my eyes, I adhere to a rigidly enforced version of the Talmudic commandment “May no iron touch your face” – iron as in sword, but you get the drift (also why male Orthodox Jews do not shave or cut their hair).

    @ Thoth, re: IOTTMCO et al.,

    fair enough, though I guess we’re in two different frames of mind. I think mainly of pocket devices, because it is there, in the field, that the risks of sudden confiscation of keys are the greatest… when you are raided in a static place, at home or at the office, there usually are several other lines of defense.

    But to come back to ease of use: counting each key and mouse press separately, how many moments does it take to enable or disable your self-destruct PIN (mode)? In my worldview that should be a Yes/No (max 2 moments) decision.

      For an illustration of the dead man’s grip, I keep thinking about how little was needed for #DPR (Dread Pirate Roberts) to prevent being captured with the disk unencrypted. Putting all his other stupid missteps aside, had he e.g. made HDD decryption and usage dependent on the constant presence of a key on a coded USB dongle (physically tethered with a thin metal chain to his wrist, encrypting on USB removal), no FBI agent would have dared snatch the laptop out of his reach to secure the evidence [yes, I know it’s theoretical, they’d have tasered him then].

    So I guess what I’m trying to say is that as your OpenPGP self-destruct PIN setup has a sizable learning AND deployment curve, it risks being of use only to a handful of security experts such as yourself, and probably even fewer – because the others have probably rolled out their own, too. Whereas what security-conscious ordinary end-lusers would require is something far simpler IN OPERATION than that.

    r August 22, 2016 1:00 PM

    @ianf,

    I was merely alluding to people who opt for doctors that cater to hypochondriacs. We see those plastic surgery disasters on an almost daily basis and it seems that not too many of us realize the problems run considerably deeper than mere electives.

    r August 22, 2016 1:03 PM

    @ianf, CC: Thoth

    Subject: Reactive security.

    Content: Voice activation, heart rate monitoring, ETC

    Bluetooth has certain incomspicuous available tools, we just need a backend to connect the dots for our ram-wiping functionality. (maybe?)

    r August 22, 2016 1:05 PM

    @ianf,

    We can call it Re: Activity instead, believe it or not incomspicuous was accidental; maybe freudian?

    Marcos Malo August 22, 2016 1:16 PM

    Re: Password generation for dummies (@scott @r @all)

    A couple of questions regarding easier-to-memorize passwords using three-word combinations, and how to harden them against simple dictionary attacks.

    Would substituting numbers for letters in one of the words help complicate a dictionary attack? Let’s say that my three words are Able baker charlie (I wouldn’t use that common combination, but let’s use it for purposes of illustration). A simple substitution would give us Able_2111518_charlie. That’s a substitution the user can easily do in their head if they don’t bother to memorize the number, although they might eventually memorize it through repeated usage. Could it be further complicated by using a simple ROT substitution?

    Next question: how much harder does it become to crack with a dictionary attack if we take that password and repeat it? The new password is Able_2111518_charlieAble_2111518_charlie. How much harder does it become to attack with brute force? That’s now a length of 40 characters, albeit weaker than 40 characters generated randomly.

    What I’m asking is will these fairly easy to remember complications buy the dummy anything against dictionary or brute force attacks?

    ab praeceptis August 22, 2016 1:47 PM

    Marcos Malo

    I expect only rather insignificant progress.

    Above I talked about proper reasoning. Let me give you an example: Humans tend to think in patterns (as opposed to arbitrary series of symbols, number, etc.).

    If we ask a human to a) come up with and b) remember any password – which in the end is a string of symbols – he/she will process that in a way that is radically different from a computer. He/she will come up with something that makes sense to him/her, which is another way of saying “pattern based”, while to a computer a password is just a string; to a computer ‘mom_and_dad’ is no different from ‘1$1-&#7-p3p’. Funny sidenote: humans actually tend to assume the latter to be somehow more secure, because it a) uses symbols not perceived as ‘common’ and b) is a comparatively rare combination (as seen by a human).

    On the other hand, one of the major potentials for making more powerful cracking tools is based on attempts to “recognize” and use patterns, i.e. combinations of elements of any available symbol set that are more likely to be “human”. Dictionary-based attacks are a crude example.
    However – and this is important to note – the motivation behind that is merely to enhance statistical hit probability; those “human” symbol combinations promise a higher hit rate.

    In the end it’s always those two factors: the purely mathematical ones, for instance the combinations possible based on a symbol set size (e.g. 23 characters), and probability enhancements.

    Way more important, however – and you address that somewhat – are the human factors.

    Let me offer you a striking example:

    As most humans dislike (or are hardly capable of) remembering more than very few passwords, they create a situation in which cracking any service they use gives you their access credentials to many if not all of the services they use.

    This is often not seen, but it is a very major factor for crackers, because it translates to “choose the service that is the simplest to hack, and gain access credentials for services you might be unable to hack at all”.

    And there is another factor: to a machine it makes virtually no difference whether it stores 2 or 2 billion strings. For a human, however, that is an insurmountable difference.

    So, even if we succeeded in enhancing password variety by 100%, the net effect would be virtually insignificant.
    Moreover, inertia plays a shockingly major role with humans. They dislike coming up with, let alone remembering, multiple long strings and, to make it worse (in their eyes), changing them frequently.

    There is no way around it: password-based security, particularly over the wire, comes down to symbol set cardinality (alphabet size) and password length. Even “random” passwords vs. “human” passwords (-> dict. attack) isn’t a major factor but rather a modest enhancement.

    Which means: long passwords drawn from large symbol sets, which directly translates to “forget it, won’t work with humans” (except, of course, in certain rather exotic settings where that can be rudely enforced, opening security gaps of its own).

    It comes down to a classical man vs machine problem. The solution is to use a machine for assistance on the humans side (as I roughly drew earlier).

    In other words: a) have high quality passwords and b) keep the human out of the loop for the most part.

    Who? August 22, 2016 1:53 PM

    @Thoth

    I will certainly buy a good PC/SC reader and play with it! My goal is creating the certificates on one of the airgapped OpenBSD computers here and using them to authenticate on all the OpenBSD laptops/desktops/servers.

    Want to get both smart cards and JavaCards. It is good to know both technologies.

    Without access to the hardware (yet!) I think something like this gpg-connect-agent script will be a good way to reset a smart card to factory settings if something goes wrong with card management. It selects the OpenPGP applet, exhausts the PW1 retry counter (VERIFY with P2=81) and the Admin PW3 retry counter (P2=83) with deliberately wrong PINs, issues TERMINATE DF (INS E6), then re-selects the applet and issues ACTIVATE FILE (INS 44):

    scd reset
    scd serialno undefined
    scd apdu 00 a4 04 00 06 d2 76 00 01 24 01
    scd apdu 00 20 00 81 06 00 00 00 00 00 00
    scd apdu 00 20 00 81 06 00 00 00 00 00 00
    scd apdu 00 20 00 81 06 00 00 00 00 00 00
    scd apdu 00 20 00 81 06 00 00 00 00 00 00
    scd apdu 00 20 00 83 08 00 00 00 00 00 00 00 00
    scd apdu 00 20 00 83 08 00 00 00 00 00 00 00 00
    scd apdu 00 20 00 83 08 00 00 00 00 00 00 00 00
    scd apdu 00 20 00 83 08 00 00 00 00 00 00 00 00
    scd apdu 00 e6 00 00
    scd reset
    scd serialno undefined
    scd apdu 00 a4 04 00 06 d2 76 00 01 24 01
    scd apdu 00 44 00 00
    /echo smart card has been reset to factory defaults
    /bye

    We cannot learn without breaking (sorry, bricking) some smart/Java cards!

    r August 22, 2016 2:06 PM

    @ad precepice,

    Dear John,
      THC-Hydra rears its ugly head once again,

    ‘On the other hand, one of the major potentials to make more powerful cracking stuff is based on an attempt to “recognize” and use patterns, i.e. combinatons of elements of any available set of symbols that are more likely to be “human”. dictionary based attacks are a crude example.

    This is often not seen, but it is a very major factor for crackers’

    That’s it, I’m taking it back.

    Are you insinuating that white people can’t hack it?

    Punt.

    t& August 22, 2016 2:20 PM

    &*TSU-irate-applebaum 4 2nddate to make banana republic? could be epic or ant happening cause buzz

    Gerard van Vooren August 22, 2016 2:21 PM

    @ Nick P, and (sorry to forget you) ab praeceptis,

    About writing a Master’s thesis on satisfying the Rust compiler alone and getting a straight A with it: thinking a bit more about it, if I were still at the university, that would probably be it. I would name it “Satisfying the Rust compiler”.

    “this, however, can’t counterbalance the bureaucratic insanity done to Ada. Just look how voluminous the Standard has become. That a) destroys one of the holy principles of Ichbiah (simplicity and elegance) and b) makes Ada less secure. Why? Classical case: They left the human factor out of the equation.”

    Personally I don’t consider Ada that bad. But I do agree that the mental model of Ada is pretty heavy. That counts even more for Rust. The only problems with Modula-2/3 are the upper-case keywords and that it lacks curly braces. If that could be fixed (and please don’t fix more) it would be one of the best languages there is TODAY. I like the worded operators much more than the cryptic characters, and the module system is second to none (and not existing in the UNIX world).

    ab praeceptis August 22, 2016 2:22 PM

    r

    The name is “ab praeceptis”. If that is too hard to type for you, copy/paste may be helpful.

    Are you insinuating that white people can’t hack it

    I don’t have the slightest idea what you are talking about.

    ab praeceptis August 22, 2016 2:48 PM

    Gerard van Vooren

    It wasn’t about the mental model. It’s about a standard of many, many hundreds of pages. That leads to a situation where a language might theoretically allow for safe, highly reliable programs while practically being no less a bug generator than, say, C++.

    “Modula and capitals” – that is an editor problem, not a language problem. Though, to be fair, the very raison d’être of the capitals hardly exists anymore nowadays with syntax highlighters. And begin/end rather than curly braces really is somewhere between a minor annoyance and an editor problem.

    As for the operators (I did crypto in Modula-3), the issue isn’t so much one of ‘^’ over ‘XOR’, but it gets quickly annoying and disturbs the flow to write ‘XOR(a, b)’ rather than ‘a ^ b’ or at least ‘a XOR b’. And it gets much worse with nested binary operations.

    What was really the showstopper is that the (currently only “major”) implementation of Modula-3 has no true cardinals! Incredible. Just try crypto with cardinals that magically turn into (signed) integers whenever the MSB is 1…

    More generally speaking, I see a problem in that Wirth in a way went the opposite direction of (mostly US-American) language designers; where the latter usually cared little about scientific principles and soundness but rather went “hands-on” for pragmatic use (which does have its own advantages), Wirth seems to have been largely ignorant of pragmatic issues. He did, however, very much care about simplicity, purity and elegance, which happen to be existential for a good and safe language.
    Sad sidenote: Modula-3 was one of the very few languages (I think even the only one for a long time) which had a formally specified and verified standard lib. Unfortunately Larch (the system used) has been all but forgotten and shelved it seems.

    But anyway, if tomorrow I had millions thrown at me and at the same time a revolver at, and a calendar near, my head to create a reliable and sound language for safe programming, I’d base it on Modula-3 and I would change precious little, very carefully and respectfully. Meyer’s DbC would be the single major add-on that would strike my mind right away. And maybe, but that would require more time, a good interface to formal spec. and verification. And, of course, but that’s nothing in terms of workload, I’d introduce Ada’s ‘--’ comments (and multiline and doc variations thereof).

    r August 22, 2016 3:15 PM

    Do you think reverse engineers prefer being grouped with malicious skiddies? Do you think those that had the ingenuity to apply statistics to hashes like to think of themselves as any less than innovative? Do you think the (non thinking?) landless whites of the world enjoy being reminded that they are relegated to a category of only using the tools of others in predictive ways? Do you think? Or – are you allied with the PC (politically correct) crowd on the grounds that the sissy white hat crowd can’t exist in a polarized environment? The world is full of shades of grey; the only way to wear that white dress is to stay home and play home (house?). Let me emphasize that I firmly believe they are black-hearted donning a white dress, that onus is on you – because like a Muslim or a Jew or a devout Christian I want that verified.

    You won’t learn a thing reading those books in school, well maybe a couple things, but as for the rest (and the rest) chances are everyone out there who knows anything has had some sort of real world experience, be it delving into something protected by copyright or DRM or by invading the privacy of their coworkers to do the company job. Where do these lines end? They end when we stop relegating unwanted members of our communities into puffin (pidgin) holes instead of addressing the deficiencies involved with their creation.

    I posit: nobody who Robin Hood’d IDA or SoftICE wants your children. Nobody who freely liberated the tools to an education or an awareness of vulnerable complexity wants your simple thieves and shitheads. Nobody. Crackers have been operating as an idealistic constant since the days of Apple and Amiga; if you have members of your community that can’t hack it, don’t hack the definitions to the point that you give somebody else your problems. Crackers feed innovation, feed the security industry; p2p mechanisms were employed in reinforcement of both the anti-DRM and anti-copyright communities. Both of those communities consisted of crackers and reverse engineers, poor white people I’d wager to a large extent. You’d deny the legacy of piqued curiosity? You would seek to muddy THOSE waters?

    I’ve said too much. Please, if you can’t raise your children to respect others, don’t expect my community to do so – all we can do is point [them] in the write direction.

    ab praeceptis August 22, 2016 3:24 PM

    r

    Feel free to talk about whatever you like. I, however, take this to be an IT/security related place and intend to stay away from politics here.

    As for crackers: I consider them an enemy. Doing that I do not imply that they are all and only driven by evil motives. Certainly many of them are playful, curious, inventive. Some I might even like as a private person.

    Professionally, however, they are enemies and it is part of my job to make their endeavours harder and less often successful.
    That’s about all I have to say on that point here. Again, I’m not here for politics.

    Have a nice day/night

    anony August 22, 2016 5:29 PM

    World’s most efficient AES crypto processing technology for IoT devices developed

    a new technique for compressing the computations of encryption and decryption operations known as Galois field arithmetic operations, and has succeeded in developing the world’s most efficient Advanced Encryption Standard (AES) cryptographic processing circuit, whose energy consumption is reduced by more than 50% of the current level.

    http://www.tohoku.ac.jp/en/press/most_efficient_aes_crypto_processing_technology.html

    Journal Reference:

    Rei Ueno, Sumio Morioka, Naofumi Homma, Takafumi Aoki. A High Throughput/Gate AES Hardware Architecture by Compressing Encryption and Decryption Datapaths. Conference on Cryptographic Hardware and Embedded Systems 2016, 2016 DOI: 10.1007/978-3-662-53140-2_26
    

    Thoth August 22, 2016 6:33 PM

    @Who?
    Indeed, the TERMINATE DF command will wipe all the files (factory reset); afterwards you do ACTIVATE FILE, then PUT DATA and so on to load your PINs and PGP keys.

    That means, whether JavaCard or official OpenPGP card, if you brick your card by entering all the wrong PINs and maxing out the PIN retry counters, your last option is a factory reset, for those cards supporting the TERMINATE DF command.

    My self-destruct PIN takes a more “timid” approach of not fully destroying the Admin PIN and RC, since that would be overkill, but if Achim Pietig, the maintainer of the OpenPGP card standard, gives his nod, it can become official. Currently, the guys at Yubikey like my self-destruct design and have taken my code to meet with Achim and discuss adding my self-destruct PIN idea to the next OpenPGP card standard 🙂 .

    So, no more permanently bricked cards if you run the TERMINATE DF command, be it on JavaCard or official variants. TERMINATE DF undoes the permanent brick by resetting the card back to factory mode.

    Also, OpenPGP standards do not support PINPad PIN entry into smart card (JavaCard or Official) for now.

    And yes, JavaCard application developers, or should I say smart card developers, need a way to make some cash. Bricking a card and then needing to buy another is one way to sell more cards, but in the OpenPGP case, if you do not know the TERMINATE DF command, the card stays permanently bricked until you use it to reset to factory mode.

    The reason DJB curves are not implemented is that they are not part of NIST/NSA Suite B and most chip makers only follow the path of “what the standards say”. They only make chips with NSA Suite B algorithms, and even ECC curves may not be commonplace, because smart card chips are rather old and many are legacy architectures, so RSA is the most natural PK algorithm to use.

    There won’t be DJB algorithms in the near future or the far future because the industry is plagued and ill. They just want to make money and use the bare minimum (default standards: NSA Suite B). Implementing DJB curves in the tiny smart card software is close to impossible if you use platforms like JavaCard or the MULTOS card platform, as they have a VM layer.

    If you use the card raw (official version), you may have a chance of accessing the raw crypto primitives and may actually succeed in putting in DJB’s crypto, because raw card development has better access to the bare metal than JavaCard or MULTOS card development, which have a security VM layer in between.

    That being said, again, I will point back to the almost fully open Ledger Nano S and Blue as they will allow close to bare metal use of their hardware and that might increase the chance of success for implementing things like your custom crypto or someone’s crypto besides the default NSA Suite B.

    To be compatible with multiple variants of smart cards and security devices, it is hard for OpenPGP standards to choose anything other than RSA because compatibility amongst different platforms must be considered.

    Regarding hardware wallets for cryptocurrency, they are the best place to turn to for hardware-backed security, as many of these hardware wallets are essentially more resource-abundant versions of smartcards. These hardware wallets have to use a smartcard chip as their secure crypto processor and then have different processors to handle BLE, NFC, LCD screen, OLED screen, keypad, battery management and more. You can imagine hardware wallets as more powerful versions of smartcards (both essentially using the same crypto chip). Hardware wallets that have their own battery packs can run independently in “offline mode”, while smartcards rely on a host for power supply, data and clock. Hardware wallets like Ledger’s come with secure input and display built in and connected to the smart card chip, whereas a smart card relies on your PC’s probably bugged keyboard and screen; you may have a problem trusting a dumb smartcard without a display or keypad built right into the card.

    Thoth August 22, 2016 6:52 PM

    @ianf

    re:DPR

    When I heard how those greedy and corrupt agents successfully pulled off such tactics to grab DPR’s laptop while it was running, it made me think about how to prevent it. One way is another device that communicates over encrypted wireless or Bluetooth with the main laptop and constantly pings it; once either side detects a loss of connection, it may lock itself.

    For traditional smartcards, it is not possible to do so, as they rely on an external power and clock source to keep them up. It would be nice if you could make the contents appear like factory state while crossing borders and then, on another button press without much hassle, make the contents appear again, but that requires something more like a hardware wallet such as the Ledger Blue and Nano S devices, which are essentially smart cards with a touchscreen. They have most parts open source on Github if you have time to head over and inspect their hardware board design and source code. It’s all C code, so it should be quite straightforward to validate.

    Considering a cross-platform security technique (for dumb smart cards and interactive smart cards with their own screen and keypad), the current option would be PIN entry per attempted use of a card resource (which opens a chance for entry of a self-destruct PIN). If the desire is to be able to open a menu and lock the device for X minutes, you really need a platform-specific approach, and in this case you need to lean on the Ledger Blue and Nano S hardware security wallets, as they contain their own battery and quartz clock. In fact, you can even time-lock a self-destruct certificate, PIN or environment (i.e. false certificates and environments) for X hours (considering crossing borders is a long process with all the useless checks and inspections), or, if you want something fancy, you can use a plausibly deniable PIN code instead, which reveals a factory-empty partition or a fake partition, as a direct answer to your requirements for border-crossing anti-harassment security.

    For the scenario that happened to DPR, you need a pair of devices that constantly communicate with each other over encrypted channels, send alerts and enter tamper-response mode when they are separated. This is an active security measure. As a passive measure, forcing every security operation to reload keys by requiring PIN entry would be more practical.

    The Ledger Blue device has BLE capability which you can link to another Bluetooth device (a laptop’s Bluetooth); once they separate, it immediately shuts down both sides or, as a higher-security reaction, wipes the keys. I am waiting for the Ledger team to deliver production-ready devices before starting to scrutinize their Github source code and designs and then port my smart card code to the Ledger Blue, due to its higher security benefits: internal clock, touchscreen and battery pack connected to its board-embedded ST31 smart card chip. All of this provides a capability for proactive defense against coercion and capture. So, once Ledger is out and the apps are ported, I would highly recommend moving to it for more proactive security.

    Daniel August 22, 2016 7:21 PM

    Regarding DPR.

    In my view the best way to make these things work is not by attaching a USB to a chain of some sort but to design them so they hook up with heart beat monitors. There are very thin heart rate monitors that fitness buffs use that unobtrusively fit across the chest and under the shirt. So the LEA wouldn’t even know that you possessed a dead man’s switch.

    When the police grab you, your heart rate is going to go up and, boom, your computer shuts down.

    r August 22, 2016 8:10 PM

    @Daniel,

    There may be a different way to go about the whole heart-rate monitor thing, maybe it can be done with a mic?

    Thoth August 22, 2016 8:16 PM

    @Daniel

    re: Snatch and go of electronics

    Biometrics are not the best way in my opinion, as there are too many triggers for the lock down (false positives), plus the fact that it is your very personal data. Your heart rate can climb under many circumstances, not just from being cornered. Being embarrassed (talking to girls :)) might raise the heart rate as well, and poof … time to login again.

    The better idea is to tether via encrypted wireless link to something like a secondary device that they would not expect.

    There is also the case where the computer shuts down, the aggressors get mad, and they start to do some rubber-hose cryptanalysis.

    A whole suite of active and passive defense must be in place to counter differing threat models.

    Again, the technique of restricting a cryptographic operation to a single login backed by secure hardware and a self-destruct PIN code would come in handy, as would tethering the computer to another discreet piece of hardware that you keep very close to you.

    Most snatch-and-go scenarios are based on the distance between the user and the device. When the device detects that it has been separated from the user by a distance gap, it would lock itself (via Bluetooth).
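As a minimal sketch of that tether idea (the pre-shared key, timeout, and class names are all illustrative assumptions; the actual Bluetooth/wireless transport is left out), the paired device sends authenticated heartbeats and the laptop trips once they stop arriving:

```python
import hmac
import hashlib
import time
from typing import Optional

SECRET = b"shared-pairing-key"  # illustrative pre-shared key established at pairing

def make_beat(counter: int) -> bytes:
    """Heartbeat packet: 8-byte counter plus HMAC-SHA256 tag (forgery-resistant)."""
    msg = counter.to_bytes(8, "big")
    return msg + hmac.new(SECRET, msg, hashlib.sha256).digest()

def check_beat(packet: bytes, last_counter: int) -> Optional[int]:
    """Return the new counter if the beat is authentic and strictly newer, else None."""
    msg, tag = packet[:8], packet[8:]
    if not hmac.compare_digest(tag, hmac.new(SECRET, msg, hashlib.sha256).digest()):
        return None  # forged or corrupted packet
    counter = int.from_bytes(msg, "big")
    return counter if counter > last_counter else None  # reject replays

class DeadManSwitch:
    """Trips (lock/wipe) once the paired device has been silent for too long."""

    def __init__(self, timeout_s: float):
        self.timeout = timeout_s
        self.last_ok = time.monotonic()
        self.counter = 0

    def on_packet(self, packet: bytes) -> None:
        new = check_beat(packet, self.counter)
        if new is not None:
            self.counter = new
            self.last_ok = time.monotonic()

    def tripped(self) -> bool:
        return time.monotonic() - self.last_ok > self.timeout
```

The monotonically increasing counter is what stops an attacker from recording a beat and replaying it after snatching the laptop.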

    Another method is to use the webcam on the laptop to actively track the user via facial recognition software. Once the software detects that the face no longer matches or has moved away, it triggers a lock down.

    There needs to be more research in this area.

    r August 22, 2016 9:19 PM

    @Thoth,

    The other day one of the shyguys said we were lucky if the fleyes could spare man power, I nearly asked if the same idea could be applied to:

    economic power
    electric power
    and I nearly found myself asking

    if they could find it in their heart to spare some girl power.

    ab praeceptis August 22, 2016 9:55 PM

    Thoth

    NSA Suite B does support ECC, albeit not the djb curve. And for quite some time now it has not accepted RSA.

    Maybe the misunderstanding arises from nsa announcing Suite B at the RSA conference (back when it also accepted rsa). Or maybe it arises from the fact that pretty much any smartcard chip in use did and still does support rsa. Simple reason: they won’t take out their useful hw multipliers and modulo ops, because that’s pretty much what “rsa support” cooks down to.

    It should also be noted that at least a major part of currently (say, the last 3 years) available smartcard chips are 16 bit cores.

    Typical times for rsa 1k are in the lower tens of ms, rsa 2k is in the mid to higher hundreds of ms, ecc is in the hundreds of ms but somewhat lower than rsa 2k (all for a 64-bit message), and PK exchange is in the some-hundreds-of-ms range, too. Also note that aes (128 and 256) hw support is quite common, too.

    Note: this is based on a rather old but still widely used Infineon chip.

    You’ll have a damn hard time beating aes (or implementing chacha20 with hw acceleration in java).

    If you just love experimenting and implementing an algo of your own choice, why not something quite different, rarely used but nice, say sosemanuk or blowfish? Why? Among other things because it gives you an edge by using an algo nobody (desiring to fu** around with you) would expect.

    As for PK, you might want to think about something quite different. nsa declaring rsa unacceptable in /B, seen from the perspective of paranoia, translates to “they expect factorization to have a short life expectancy”. On the other hand, one can hardly go wrong assuming that nsa recommending (or even demanding) ECC translates to “stay away from that!” (particularly using nsa/nist curves). Moreover, from the mathematical perspective it seems reasonable to assume that broken factorization (NP -> P) would very soon be followed by discrete log becoming P, too.

    Problems will next to certainly await you, though, working on alternatives (say, code-based PK), as they tend to be considerably or even vastly more memory hungry.

    Whatever you choose, good luck.

    Wael August 22, 2016 11:40 PM

    @Clive Robinson,

    To see how long lets have two oscilators of period 3 and 5

    You’d want to choose two frequencies whose ratio is an irrational number, otherwise the sum will be periodic. The answer depends on whether you are a mathematician or an engineer: http://math.stackexchange.com/questions/681750/sum-of-two-periodic-functions-is-periodic ! Lol 🙂

    With a little thought you can see why that would be the case up to a point and then the opposite would be true.

    Sounds like an engineer talking 🙂 I’ll need to think about this and find out where that point is.
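A quick numeric check of the commensurate case Clive picked (a minimal sketch; the sample spacing is arbitrary): two sinusoids with periods 3 and 5 sum to a waveform with period lcm(3, 5) = 15, whereas an irrational period ratio would leave no common period at all.

```python
import math

def combined(t: float) -> float:
    """Sum of two sinusoids with periods 3 and 5 (ratio 3/5 is rational)."""
    return math.sin(2 * math.pi * t / 3) + math.sin(2 * math.pi * t / 5)

period = math.lcm(3, 5)  # 15

# Shifting by 15 reproduces the waveform exactly (up to float error)...
for i in range(150):
    t = i / 10
    assert abs(combined(t) - combined(t + period)) < 1e-9

# ...but shifting by either component period alone does not.
assert abs(combined(1.0) - combined(1.0 + 3)) > 0.1
```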

    PS: perhaps I should abolish the use of href tags and provide raw links to save readers from the need to validate links before they follow them. Security vs usability, I guess…

    Thoth August 23, 2016 12:31 AM

    @ab praeceptis

    “NSA/B does support ECC, albeit not the djb curve. And since quite some time now it does not accept RSA.”

    NSA Suite B indeed has ECC, but the fact is that old smartcards made a long time ago support only partial Suite B, namely 3DES and DES with SHA, and they lack a Suite B PK algorithm, so they defaulted to RSA, which is not Suite B.

    Only recent ones have started to introduce at least the Suite B ECC P-* curves and AES.

    “It should also be noted that at least a major part of currently (say, the last 3 years) available smartcard chips are 16 bit cores.”

    Indeed, and thus my POC ChaCha20 operates in 8-bit to fit into 16-bit cards. Recently the ARM SecurCore introduced 32-bit smartcard chips, but the layers above, namely JavaCard in this context, fix the operations to 16-bit maths, although a 32-bit math extension is allowed via the optional JCInt API, that is, if the card OS developers included the JCInt API.

    “This is based on a rather old but stil widely used Infineon chip.”

    SLE78, or which SLE series?

    I usually use SLE78 series for my development.

    Software ciphers simply get nothing done, if that ChaCha20 experiment hasn’t proven enough. And the ciphers selected should not only be strong but well studied and still recommended for use. In fact @Bruce Schneier does not recommend Blowfish anymore and recommends Twofish. But anyway, it seems like software ciphers are getting nowhere on these cards and it’s best to stick to tried and true hardware crypto.

    If you mistook my intention in looking into ChaCha20 as seeking some thrills: it is more of a necessity, as it is known that AES has a good amount of side-channels, and a secondary strong, easy-to-implement algorithm that is still recommended by its author and well studied would be a nice addition, not just for my open source smart card projects but also to give an additional option for other open or closed source smart card projects wishing to find something other than AES and DES, in case there is ever a need to switch ciphers. It is not simply thrill-seeking but serves a largely practical reason, and I believe many people would be interested in putting DJB’s work on embedded smart cards.

    For now I am working mostly on designing protocols with algorithm agility instead of fixating on a specific algorithm (which can hinder future progress) for my open source card projects.

    Clive Robinson August 23, 2016 1:39 AM

    @ Wael,

    Sounds like an engineer talking 🙂 I’ll need to think about this and find out where that point is.

    In the case of a CS-DPRNG it is at a point before the state rolls over to its starting value. Bruce wrote about it when writing about the design of RNGs.

    However how far before that rollover depends on not just the state “counter” but the mapping as well.

    If you have ever designed a Direct Digital Synthesis (DDS) oscillator – counter, followed by a map, followed by a D-to-A converter – you will know that you can cut the size of the ROM “map” by four simply by using XOR gates on the address lines in and on the data lines out. You might also know that by carrying the MSB of the counter out you effectively get an extra bit of data out. That would intuitively tell you that the upper bits of the counter state “leak” information through the map at some point.
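A toy model of that quarter-wave folding (the table size and DAC scaling are illustrative; real hardware does the “negate” with XOR gates plus an offset, and the mirror is off by one table step, which is exactly the sort of boundary artifact through which the upper counter bits leak):

```python
import math

N = 10                       # 10-bit phase accumulator: 1024 points per cycle
QUARTER = 1 << (N - 2)       # 256-entry quarter-wave ROM instead of 1024

# ROM holds only the first quarter of the sine wave.
ROM = [round(127 * math.sin(math.pi / 2 * i / QUARTER)) for i in range(QUARTER)]

def dds_sample(phase: int) -> int:
    msb    = (phase >> (N - 1)) & 1   # top bit: which half-cycle (sign)
    mirror = (phase >> (N - 2)) & 1   # next bit: time-reverse within the half
    addr   = phase & (QUARTER - 1)
    if mirror:
        addr ^= QUARTER - 1           # the "XOR gates on the address lines"
    value = ROM[addr]
    return -value if msb else value   # hardware: XOR on the data lines + offset

# The folded output tracks the full-table sine to within the fold's off-by-one.
for p in range(1 << N):
    ideal = 127 * math.sin(2 * math.pi * p / (1 << N))
    assert abs(dds_sample(p) - ideal) <= 2
```

The two counter MSBs never touch the ROM contents directly; they only steer the address complement and the sign, which is why their influence shows up at the fold boundaries.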

    Oh, and as another thing to think on: consider why the Random Oracle Model is potentially fine for “hash maps” but not for “cipher maps”, and thus why you would need to add another step to the RO model to make it suitable, and what the implication of that extra step is for CS-DPRNGs.

    Wael August 23, 2016 1:53 AM

    @Clive Robinson,

    Bruce when writing about the design of RNGs wrote about it.

    Must’ve missed it!

    Figureitout August 23, 2016 2:00 AM

    Thoth
    –Ended up “quick” reading the rest of the RFC. Getting a better grasp of the algorithm (still pretty hazy though). Would definitely have to study it more.

    I’ll try to get it working on an 8-bit chip in C and time it eventually (likely next summer, but I may squeeze it in sometime) b/c 4.3s seems long.

    And a GUI on what platform, standalone?

    Clive Robinson
    –How would the muon detector seed a prng? Would it be a counter that repeats?

    ianf August 23, 2016 3:35 AM

    @ Gerard van Vooren

    Mission accomplished: a propensity to see insults where obvious hyperbole should signal ironic, friendly intent pinpoints your mental age at around 20, when you first became AWED by computer syntax and never left that stage.

    @ Thoth RE: “Snatch-n-go electronics”

    Good title. You have a future in the advertising trades (“Me? I’ll just set my BlueRibbon to Snatch-No-Approach and Go!“)

    There needs to be more research in this area.

    I’d like to see a funding request for that within the Academia. Dressed in medical, care of demented patients mumbo-jumbo presumably, never mind the real reason.

    As for #DPR, he’s no hero of mine, more like a poster boy for patent naïveté—which is why he’s now doing 30, and I’m not. I read a lot about how he got uncovered, and the two impressions that remain with me are that (1) he was stingy & small minded; (2) he couldn’t get a date (same as with adolescent Zuck, which is why now we’re stuck with his AmIhotOrNot copycat).

    BONUS: Stealing bitcoins with badges: How Silk Road’s dirty cops got caught

    paintitblack August 23, 2016 3:50 AM

    ianf

    Where does the propensity for unwarranted self-importance, vacuous rambling and daft ad hominems pinpoint one to?

    LightWorker August 23, 2016 4:05 AM

    @ianf

    it’s no secret you’re really hurting. [Remainder of faux-new-age sarcasm and veiled threat to dox REMOVED by moderator. Sir or Madam LightWhatever, you are out of line.]

    Robert Gu August 23, 2016 4:05 AM

    I sought to download the paper discussed in our host’s recent “Research on the Timing of Security Warnings” post here:

    https://www.schneier.com/blog/archives/2016/08/research_on_the_2.html

    That blog post links to this page, with an abstract:

    http://pubsonline.informs.org/doi/abs/10.1287/isre.2016.0644

    There, I found this link to the paper:

    http://pubsonline.informs.org/doi/pdf/10.1287/isre.2016.0644

    Downloaded the paper three times. Once via http, and twice via https.

    Obtained three distinct pdf files, of slightly different sizes.

    I find this curious. I wonder whether others are able to confirm.

    [I post this here in Friday Squid, rather than on the discussion page for the article itself, to avoid derailing on-topic discussion there with my idle curiosity.]

    tyr August 23, 2016 4:54 AM

    @Curious

    You can go to a multi-processor comp arrangement
    with custom software and have the separate CPUs
    pass data using a DMA bus. That isolates the
    code in each CPU. You still have to load the code
    into each one and using multiple loaders would be
    a bit pricey. This would be overkill for general
    purpose computing but works fine for real time
    process controls. Other than the loader once it is
    finished you only have one attack surface which is
    the DMA channel hardware.

    Restricting the DMA channel to specific memory that
    is not shared with the code gives you the isolation
    via hardware in each processor.

    ianf August 23, 2016 5:53 AM

    @ paintitblack

    If only you could be a bit more specific, and list the exact instances, sentences? of the manifestations of my alleged “unwarranted self-importance” (IS there also a “warranted” kind?); “vacuous rambling,” and “daft ad hominem(s)” that made you abandon your usual lurking mode, and emit the above? Perhaps show me your own contributions without any of the alleged such. Otherwise you’re just as vacuous as what you accuse me of. And how am I EVER to better myself.

    @ LightWeight

Very droll you are, indeed. You write so well that I’d like to read more of your past submissions (where?), as I will be reading them—I trust—in the future. I also want to thank both of you for making me AGAIN! the center of everybody’s ATTENSHUN. You also alluded, without any specifics, to my ?somehow? lacking, and I quote “level 1 literacy and comprehension in the (UK) english language.”

      Only have you for a moment considered that maybe, just maybe, it could be your inability to comprehend same that’s at the core of your displeasure? I mean, philosophically you have to admit to the possibility of that happening.

    […] “inevitably pushing everyones patience too far and being doxed on paste bin. ( It’s reprehensible but sadly it seems to happen ).

Do you have a real, written mandate to speak for said “everyone” here, or is it one more of an imaginary, rhetorical kind (“I feel so, and since I’m normal, others also must feel so.“) You do realize that hiding behind the backs of others is characteristic of cowards? Much as you may consider yourself to be The Voice of the Silent Majority, if it had a voice, it wouldn’t be silent.

    As for your “reminder,” but really a threat of doxing me elsewhere – I’ve already been threatened with physical force here once, and diagnosed as “mental” countless times. This is not my blog, but were I the moderator, I’d ban your IP and handle right away. Because, obviously words (of Level 1 literacy etc) fail you, so you see no option but take to veiled threats. Was that clear enough for your elevated English (UK) comprehension rhetorical q.

    Thoth August 23, 2016 6:22 AM

    @ianf

    re: You have a future in the advertising trades

    Thanks 🙂 . Good for a technical person like myself to at least be able to do a little advertising and save some cost.

    Clive Robinson August 23, 2016 6:34 AM

    @ Thoth,

    Software ciphers simply gets nothing done if that ChaCha20 experiment hasn’t proven enough.

My experience with the modern equivalent of the 6502 and 8051 eight bit micros tells me the real problem is with the high level languages you use on them. For various reasons like execution speed and time based side channels, you are better off writing crypto code directly in assembler. Especially if you hit the second big speed problem and pick a cipher where you have to use things like add and rotate across multibyte values a lot.

    Which brings us onto,

    [T]he ciphers selected should not only be strong but well studied and still recommended for use.

Unfortunately we are moving out of the land of 8 and even 32bit processors in all but the likes of smart cards and low power embedded systems when it comes to data communication. The result is that ciphers are going to get aimed at using 64bit or above data widths. Not for extra security but more bytes/cycle performance on processors with those data widths. We saw this in the NIST/NSA AES competition and the SHA-3 competition. Oddly less so in the EU “New European Schemes for Signatures, Integrity, and Encryption” (NESSIE).

The point being that you additionally need a cipher designed to work both effectively and securely with the narrower data widths. Because theoretically secure ciphers can be anything but when it comes to practical implementation, where key leakage from the power utilisation spectrum is the biggest threat.

Whilst many may draw in breath about the use of RC4, card shuffling algorithms and 8bit S-boxes are what you are looking for on these narrow bit width CPUs. Which are not going to go away anytime soon, and thus need to be secure for the next thirty to fifty years of “in service” life, after they drop into the “deprecated for design” category…

    Clive Robinson August 23, 2016 6:54 AM

    @ Wael,

    Must’ve missed it!

If my memory serves…it’s in the book he wrote with Niels Ferguson https://www.schneier.com/books/practical_cryptography/ I’ve not read the second edition with Tadayoshi Kohno added to the list of authors so I don’t know if it made it into there.

    The upshot of the various bits of information around is, “If you are looking towards using a CS-DPRNG don’t use it in a way where the three MSBits of the ‘effective’ counter change state”.

    Clive Robinson August 23, 2016 7:13 AM

    @ Figureitout,

    –How would the muon detector seed a prng? Would it be a counter that repeats?

    You would “stir it into the entropy pool”, which would with the likes of a CS-DPRNG be to modify the “state array” in a way that lasts longer than a single operation.

With the likes of block ciphers the state array would be the counter of CTR mode. What you would do with a low speed TRNG is give the counter increment a “drunkard’s walk”.

    Think of the TRNG driving a “leaky” digital integrator. Where the TRNG puts a large negative (for 0) or positive (for 1) into it. This then decays back to zero over a time period. Take the digital value shift it so the result will always be greater than one and use this as the update value for the counter driving the block cipher.

Alternatively just use CTR mode with an increment of one, and use the fact that the TRNG has output a bit to increment the key value instead.

    Thoth August 23, 2016 7:58 AM

    @Clive Robinson

    “My experience with the modern equivalent of the 6502 and 8051 eight bit micros tells me the real problem is with the high level languages you use on them.”

Yes, this is indeed the problem with JavaCard and things like Basic Card and also .NET card, which host a stripped-down (Java/Basic/C#) VM above the OS for the applets to run. That is the reason I am looking into Ledger Blue, since it gives the most direct access to the 32-bit ARM chips (both the ST31 smart card and the STM32 processor) and this would give me more control, even to the point of doing ARM assembly if I need surgical precision.

    “The point being that you additionally need a cipher designed to work both effectively and securely with the narrower data widths.”

The interesting thing is that most designs these days assume the de facto 32-bit and 64-bit word sizes, which makes such wider-width ciphers tricky to implement on narrower data paths.

    Thoth August 23, 2016 8:03 AM

    @all

ECC or not to ECC? Below is the abstract of an IACR ePrint report investigating claims of whether ECC is still safe for use. Have fun 🙂 .

    A RIDDLE WRAPPED IN AN ENIGMA

    By NEAL KOBLITZ AND ALFRED J. MENEZES

    Abstract. In August 2015 the U.S. National Security Agency (NSA) released a major policy statement on the need for post-quantum cryptography (PQC). This announcement will be a great stimulus to the development, standardization, and commercialization of new quantum safe algorithms. However, certain peculiarities in the wording and timing of the statement have puzzled many people and given rise to much speculation concerning the NSA, elliptic curve cryptography (ECC), and quantum-safe cryptography. Our purpose is to attempt to evaluate some of the theories that have been proposed.

    Link: http://eprint.iacr.org/2015/1018.pdf

    ab praeceptis August 23, 2016 9:17 AM

    @Thoth

    I was talking about the 66 series.

Sidenote (without having any depth re. smartcards): Surprisingly much crypto uses surprisingly few operation types over and over again. If you know the math well enough and if you know enough about the chip (e.g. instruction set incl. hw ops) I’d see some light at the end of the tunnel. It might be doable and useful to use those hw capabilities, no matter whether they’re called “des accel” or “aes accel” or “rsa accel”, as hardly any (to my knowledge) do, say, rsa magically in a single rsa op; they “merely” offer hw accel for some ops that are typically heavily used and hence worthy of accel.

    @Figureitout

    Sidenote/hint (ad “muon detector”): driving a simple cheap zener diode within a certain range will do the trick, too.

    @Clive Robinson

Yes and no. MULs and DIVs (which MOD usually is in disguise) are among the most expensive ops on pretty much any architecture. Going ASM rather than, say, C doesn’t cut out much there. Plus: unless one knows a given system very well (and ASM development, of course), chances are that a decent compiler backend’s developers are people who know that architecture very well. In other words: for the vast majority of Java, C, etc. developers, chances are that them doing crypto in ASM will actually create slower code.
On the other hand you are right insofar as only tight control over code, allocation, etc. provides the necessary means to get a grip on timing issues and other side-channel beasts. Which quite certainly is at least one of the reasons for djb to create his own intermediate language/representation.

    If the chip designers did their job well, using the hw accels will even get you some sidechannel protection, too.

Gist: ‘Don’t think twice but twice to the cube before going to “making it faster/better/more secure” yourself’ is a piece of very solid advice to all but very very few.

Unfortunately we are moving out of the land of 8 and even 32bit processors in all but the likes of smart cards and low power embedded systems when it comes to data communication.

I dare to contradict. There is a reason for NESSIE and ECRYPT being concerned about small fish. The amount of 32 and 64 bit processors in use is dwarfed by that of 16 and even 8 bit MCUs. This, together with a strong trend of “add on security layers” for protocols many would consider passé or niche (like CAN and lots of 232 derivates and cousins), demonstrates a powerful necessity.

    Keep in mind that quite some of those busses are end terminated and it’s well feasible to loop in. You’d certainly want a layer of crypto in, say, your ET’d serial busses running your ship or your power station.

Which lands you in ugly water. It’s rarely worthwhile for chip producers to add e.g. hw ARX support to decades old chips. Usually those endeavours end up with software developers scratching their heads. Maybe RABITs (attention: pun warning! *g) aren’t attractive animals on 32-bit ST ARMs (w/crypto accel) and MBs of memory, but they are much much more attractive than nothing between your power station or chem. factory and some evil guys when you happen to have serial controller networks with 8 bit CPUs dealing with KBs of memory.

Ad RC4: Yes! I agree. There is a (somewhat useful) perception disease going around, giving people the impression that using RC4 and the likes is like carrying bundles of cash loosely attached to your suit and walking through a bad neighbourhood of Mexico City. Far from true.

Security doesn’t start around 80 bits. It seems quite many have forgotten the very definition of security measures. I’ll help out: they serve to make attackers need much more time. A burglar opening your door in 3 s is much more dangerous than one needing to fumble (and to make noise) for 3 or even 30 min. Getting at your saved money by simply opening a kitchen drawer is less secure than needing equipment and 30 min or 3 hrs. to open your safe.
The dimensions we think in in the crypto world are in a way out of this world; they are almost perfect ideals, because the time factors are so absurd (like 10¹⁵ years) that we humans perceive that as “unbreakable”/”perfectly secure”.

Evidently *each and every* bit of time gained (or lost on the attacker’s side) enhances security. An algo that offers, say, “lousy” 2⁵⁰ security will certainly not withstand an NSA TAO team, but it will certainly keep 99% of evildoers out.

    Moreover, again, the human factor: The production manager of a production facility is almost guaranteed to say “No. No way!” when confronted with “oh well, sometimes losing a cycle in the feedback loop” as the price tag for chacha20 in his CAN network. End result: 0% security gain.
The same guy, however, will nod a somewhat mistrusting “OK, you have your chance, but you better don’t disturb my control loops!” when implementing RC4 (“Don’t worry, that’s cheap and won’t burden your system”). End result? Security enhanced 2 to the power 50 (give or take 10 or even 20). The nonce? Once a week (or a day) some manager changes it by hand. Ridiculous, I know, but much enhancing security.

    The world looks quite different in an industry shop floor than in a university office.

    Moderator August 23, 2016 9:49 AM

    @ianf: “As for your “reminder,” but really a threat of doxing me elsewhere… were I the moderator, I’d ban your IP and handle right away.” Thank you for calling my attention to this. The threat has been removed and the commenter warned.

    Who? August 23, 2016 10:22 AM

    @Thoth

It is nice to know that YubiCo staff like your approach. To me it makes sense providing a self-destruct PIN to someone trying to break a card’s secret. It is a clever approach and the only line of defense against a high-tech adversary that has the means, determination, knowledge and time to recover a PIN from it. Just remember you are trying to set a standard. Standards are easy for “tie people” with money and power (e.g. think of how easily the NSA subverts some committees), not so easy for technical people who have knowledge, ability and common sense like you.

    Whatever card I buy it will support TERMINATE DF! I do not want to buy new cards just because I was able to brick one, nor want to wait until they arrive.

There are powerful algorithms in NSA Suite B, at least algorithms that look strong, but a few important ones are missing too. DJB’s curve-based ones are a significant example.

I do not care about standards but about the real world. Reality is what matters!

I had been working on projects where using OpenSSH to connect to remote servers was prohibited “because it was not certified,” but tools available on Windows, Android, iOS and OS X were allowed. That’s crazy. People should stop listening to the tie guys and start listening to knowledgeable ones.

    To be compatible with multiple variants of smart cards and security devices, it is hard for OpenPGP standards to choose anything other than RSA because compatibility amongst different platforms must be considered.

    I should be able to choose DSA, as I plan to connect to computers running OpenBSD only (now either 6.0 or -current). Wide compatibility is not an issue to me.

I appreciate your comments about hardware wallets. I thought of them as specialized smart cards restricted to a single function (i.e., managing wallets for cryptocurrency like BTC, ETH or XMR). I fear these devices may be useless for other purposes. What you say sounds great.

I do not like these wallets having their own power sources either. What happens when the battery stops working? Can it be replaced or do we need to buy a new wallet? Cryptographic tokens usually need to be replaced when the battery dies, and that is bad for our “non-cryptocurrency wallet”.

For now I will certainly buy a good reader, I think one with a PIN pad (as you say it is not supported by OpenPGP, but it seems GnuPG and, I hope, OpenSSH, have extensions that support it). I do not think there are many keyloggers on OpenBSD computers, but it is better being safe if possible. Will play with a few smart and Java cards too and, if possible, use them to log into my computers.

    Hope it does not require a lot of additional software. I understand I will need GnuPG (and possibly OpenSC) on the airgapped OpenBSD computer to transfer subkeys to the cards. Let us see what software is required on the other ones. At least OpenSSH has a pkcs11 helper on /usr/libexec, so there is a chance ssh-agent(1) will support it. I will enjoy playing with these new toys and learning from them!

    I wish you the best luck making your idea a standard! I will buy new cards if your proposal is implemented.

    Who? August 23, 2016 10:50 AM

    What I will say now may be unpopular and, perhaps, even completely wrong.

    Now that a second “hacker” has the stolen NSA hacking tools on sale for $8,000 USD:

http://bgr.com/2016/08/22/nsa-hacking-tools-1x0123/

    I think the FSF, WikiLeaks or whatever should buy these tools and release them to the public so these bugs and backdoors can be identified and, whenever possible, fixed forever.

The way the NSA has managed this issue shows that NOBUS vulnerabilities do not exist, either because adversaries may have enough computing power or because they may steal the keys to exploit these vulnerabilities.

Now that these powerful hacking tools are out of the NSA’s control, what can we do? Let cyberterrorists and wrongdoers use them against multiple targets? The NSA must act responsibly if “national (or “international”, who cares!) security” is one of their goals.

    r August 23, 2016 12:53 PM

    @Who?

    The longer this goes on the more I am finding myself inclined to believe it’s part of the Snowden archive, or at least – a subset of it that was fleshed out early.

If you look at the article I posted yesterday from the Reuters commentary, there’s some fishy-sounding stuff.

There are a lot of minor coincidences surrounding this, I wouldn’t be surprised if there’s more than one disinformation campaign making the rounds.

    Who? August 23, 2016 1:14 PM

Well… I have read somewhere that the dates on the stolen NSA hacking tools match the time Snowden left the NSA. I don’t remember the source; too much has been published in the last few days about this incident. It is an odd fact that supports your idea of these tools being part of the Snowden archive.

I really hope these tools are not in the hands of black hats (Snowden or the people with access to the archive would not do that, right?). If not, the code should be released to the public or, at least, to security experts and manufacturers so these bugs are finally fixed. I do not trust, however, the way small subsets of the community manage security incidents, sometimes hiding fundamental information from the right targets (e.g., OpenSSL hiding bugs from LibreSSL in the past).

    Sorry, I missed your article. Will look for it right now.

    Gerard van Vooren August 23, 2016 1:36 PM

    @ ab praeceptis,

    Your post got me triggered. I think I am gonna write a 5 page blog about what I call the Hydra problem and why fixing one issue doesn’t solve anything.

    r August 23, 2016 2:17 PM

    @Who?

No apologies, my noise level does me a disservice sometimes.

    http://www.reuters.com/article/us-intelligence-nsa-commentary-idUSKCN10X01P

    There are a couple fishy things being stated in that article,

The first being that Snowden’s current copy in Russia is 100% ANT-free.

The second being that Snowden claimed he ditched all his data before traveling past HK; this may be minor as it could’ve been reconstituted, but how then does he have an archive in Russia?

    The third, curl http://nsa.gov

Was something embedded on that website so deeply that it still needs to have its front page kept down?

Sometimes the best place to hide something is in plain sight.

    Nick P August 23, 2016 2:20 PM

    @ Wael

    Yo man, check out what they did in the “baby” language I was using a while back: Frost OS. Looking at kernel, spinlock, and some other files was interesting. The prototype nicely mixes high-level, readable BASIC with occasional, inline ASM for lowest-level stuff. Also wraps the latter where possible to make the calls type-safe. The cool thing is I understand most of the non-ASM parts without remembering the syntax. Something I have trouble with when looking at arbitrary C or C++ apps after being away from those languages for so long. 😉

    Nick P August 23, 2016 3:18 PM

    @ Gerard

I found another Oberon compiler I didn’t know about: Vishap Oberon Compiler. Github here. It compiles to C as the usual cheat for portability & efficiency. A nice chart shows unmodified source can already run on quite a few ISAs & OSes with enough claimed efficiency for use on 8-bitters. Through it, I discovered another interesting page with some books I didn’t know about worth following up on. Finally, I also found out Wirth has a language specifically for PICs: PICL. It has clear advantages in readability & error prevention over assembly, with the compiler being about 6 pages of Oberon in 2 files. Per its news section, the VOC compiler above was used to port PICL to Linux desktop.

    So, some interesting stuff altogether.

    @ All

The essay, Oberon – The Overlooked Jewel, does a great job explaining the motivations and accomplishments of various projects in that sphere. Just over 10 pages so a quick read. It was clearly ahead a number of times, with many key technologies of the modern era re-inventing it without attribution. I particularly think both the JVM and WebAssembly are garbage compared to the Slim Binaries of the Juice project referenced in this article. It was a perfect-for-the-time foundation for mobile code with all the right tradeoffs. Also, I knew the school ran on Oberon but not that even their printers used it.

    ab praeceptis August 23, 2016 3:23 PM

    @ Gerard van Vooren

    Your post got me triggered. I think I am gonna write a 5 page blog about what I call the Hydra problem and why fixing one issue doesn’t solve anything.

    Oh, thanks so much for the compliment; and it is a very major one because there isn’t much one could strive for more than to trigger the professional brain of a colleague (in a forum like this one). Thank you.

    The only grain of salt: If only I knew which of my postings you’re referring to *g

In case you are interested (and willing to hint me to it) I’ll gladly read and/or comment and/or engage in a discussion on the matter on your blog.

    @ Nick P.

Funny. When I was a “cool” greenhorn I always smiled at an older colleague who worked in Basic, which, of course, I considered uncool.
Later I was hinted that that man had created major pieces of software for airline management and that they loved him because his software (unlike most) had almost no flaws.

While I still think that writing an OS in Basic is a very poor choice, I have learned some important lessons with that man, among others that arrogance should at the very minimum be based on professional merits and standing rather than on “I’m a C hacker, ergo I’m cool and Basic guys are but hobbyists”.

    Today, I have to confess, that I was the idiot, not him.

It took me many years to not take for granted that image (of a language) and PR do translate to quality and that a “baby language” can offer a damn grown up compiler while some “adult pro languages” can be condemned to have lousy compilers, if alone for vague “standards”, leaving much to the understanding or even the will of any compiler builder.

    But there is more to it; that hole goes way deeper.

    We tend to consider a compiler to be a tool that transforms a human grokable language (representation of cpu instructions) into binary cpu instructions.

    Looking closer one notices (or not …) that the crux is built into the definition: 2 times cpu lingo, one “human grokable”.

    Problem: We humans are not computers. Besides the fact that we still know regrettably little about how we do think, we have strong reason to assume that we think very differently from computers. It is us around whom the whole model should circle, us – not the cpu.
    Because it is us who think about problems and solutions, it is us to formulate needs, often quite IT ignorant (e.g. client talks to IT architect), it is us to ponder paradigms and formulae.

    Code is but the last output. The real process takes place between humans and within human brains. Hence, we must rethink, we must understand that a programming language is not a human grokable version of what the cpu wants. It is, or must be, a dual, a two-faced transformation mechanism/tool. First – and way more important than the cpu end – it must be an interface to humans. It must allow to formulate as a minimum solutions in a way humans tick. Only at a later step that gets transformed into cpu lingo.

Concrete example: I’ve yet to see a programming tool that allows us to mathematically formulate an idea and to then spit out code.

We have sage and the like or tla/tla+ and we have a plethora of programming languages – but we don’t have a human-centric tool that allows us to specify a problem, to formulate ideas, to tinker with them and, once we are satisfied, to push the “generate code” button.

    The way I see it today is that we are not that much of a distance away from people flipping switches on a hex console. Almost all those superduper languages are but glorified, rubber coated, funnily painted switches on a hex console. Funnily, one still wide spread criterion for “professional” and “geek” actually is to have rather primitive console switches (e.g. C) while the few attempts to create real programming tools (albeit in the form of a language) are either all but dead or frowned upon.

    (That btw. is also one of the reasons why I again and again mention the cultural and intellectual foundation that seems to be much stronger (and more rigid) in europe)

    ianf August 23, 2016 3:26 PM

    @ Robert Gu,

    I’m not interested in this particular paper, but in basic computer forensics. As you’ve already downloaded 3 versions of it of slightly differing lengths, did you

    1. check the files’ CREATION dates
    2. cat * | strings | diff (symbolic syntax)

    to gain an insight into what you were served? It’s a mystery to say the least.

    At the risk of causing another “you suck all air out of this forum” T’ACCUSE!, I was confronted with a somewhat analogous situation lately. While browsing through an education QUANGO’s website, I discovered 3 versions(?) of the same ~13MB downloadable file. They were named:

    …/media/document.pdf
    …/media/document_1.pdf
    …/resources/files/document_0.pdf

    I downloaded all 3, then checked their sizes and creation dates. File 1 and 3 were the same size, different dates. File 2 different size, close-enough date to 1, seemingly an alt. version. The size diff was on the order of a kilobyte.

I decided to notify the NGO of it, and EXPLICITLY stated that IF they remove the superfluous duplicates, THEN they should create symbolic links to the remaining file with the current filenames SO AS NOT TO BREAK ANY BOOKMARKS for these elsewhere. I even provided them with the correct syntax for that on both OSX and CLI Unix.

    I receive a thank you note from a secretary informing me of her forwarding my note to the correct party (not listed on the web).

    I thank the secretary and invite her to keep me informed of everything that happens at the office, “no issue is too small,” very polite et al. Last I hear from them, I think.

    I am wrong. A week later I get a letter from a titled systems support person explaining that they used to maintain a tiered website, etc excuses. I do not respond.

    Sunday, on a hunch, I again try all three files. The first is there, the other 2 are 404’d. Symbolic links apparently are Satan’s gift to Mankind.

Y’all can breathe in now.

    @ Thoth RE: advertising

    To paraphrase an advert that once was all over the MTV:

      been there
      done that
      NOT doing it again

I don’t think your product(s) are suited to be promoted by advertising. Besides, clever slogans are a dime a dozen… the real work is in identifying and reaching the correct target group. A tall order in your narrowly specialized game, and, HEAR THAT, due to confirmation bias, it’s not certain that you are the one most suitable to focus on it.

    (Not quite in context, but I recall this marketing case example: a small factory made thin profiles out of metal foil. Then all such was “outsourced” to Taiwan, bankruptcy loomed. The owner went for advice to an ad/ marketing agency, he only had a miniscule budget for that. Let’s do this, they said: fold your gold foil profile into a picture frame. We’ll put in a Jubilee picture of Our Queen Beatrice, and pay for a full page ad in a family weekly a percentage based on incoming orders. This single ad generated so much business that he was able to prosper and then some.)

    Nick P August 23, 2016 3:55 PM

    @ ab

    “We humans are not computers. Besides the fact that we still know regrettably little about how we do think, we have strong reason to assume that we think very differently from computers. It is us around whom the whole model should circle, us – not the cpu. ”

    This is true. It’s why high-assurance systems almost always use a combo of English specs, formal specs that are machine-checked, and matching implementation also machine-checked. The combo has done the best in real-world examples. It’s especially important to constrain the design and implementation to what’s easily analyzed. It was hierarchical layers of finite-state machines in the past with statically-typed, functional stuff w/ side-effects in one FSM these days.

    “Concrete example: I’ve yet to see an programming tool that allow us to mathematically formulate an idea and to then spit out code.”

There’s a ton of those. The two that heavyweights are using the most are Coq and HOL. Both have extraction mechanisms that produce code equivalent to the mathematical expressions in them. The CakeML team have expressed a HOL prover, first-order logic, ML variant, LISP 1.5, machine code of several ISA’s, and a compiler to those from HOL. Before that, Prolog was used to express problems in first-order logic with heuristic search solving them. I recently discovered a compiler done with a mix of Z specs and equivalent Prolog statements. Mercury added functional concepts to improve on it. Long before all this, the woman (Margaret Hamilton) who invented software engineering did something similar (001 Toolkit at htius) with a logical, specification language that semi-generated design then autogenerated code, tests, and traces. It’s been done many times over in hardware with ACL2, DDD toolkit, and recently in HOL via CakeML people.

Advances have been increasing the past few years due to a combo of great provers and powerful desktops. Yet, mainstream engineers just aren’t interested in messing with them. The result is investment into lightweight methods that enhance functional programming for easier verification and efficient compilation. COGENT is the best example of the past year or two, with the ext2 filesystem redone in it already. Haskell with QuickCheck and QuickSpec are kicking serious ass in terms of productivity, correctness, and performance. Combining the Haskell ecosystem with Cleanroom development methodology is probably the closest thing we’ll get to your vision for most programmers. For the average ones anyway.

    “(That btw. is also one of the reasons why I again and again mention the cultural and intellectual foundation that seems to be much stronger (and more rigid) in europe)”

    Europeans tried this here before. I pointed out we invented high-assurance, software engineering (see Hamilton et al in Apollo & Burroughs) and INFOSEC (Anderson, Burroughs, and Schell). The best demonstrators were created in America with good work in UK and Europe in parallel. Tons of them in kernels, crypto, compilers, language models, protocols… you name it. U.S. (esp separation kernels/crypto/CPU’s/langsec), U.K. (esp Cambridge CHERI), France (esp Gallium CompCert/Coq), and Australia (mainly NICTA seL4) did the top-tier results recently. As of today, the same groups that did the previous ones are still doing good ones with a few, scattered results popped up in more places. Medium assurance is all over the place in more scattered form. There’s no clear winner on nationality.

So, I call BS on the whole prejudiced claim. I have thousands of papers on the topic. The only thing I see consistently is: (a) quality of academic center involved, (b) desire to focus on high-robustness developments, and (c) having teams that follow through on that. It can happen anywhere so long as they start with the right theory and worked examples with help from a specialist. Occasionally, some smart person in an environment without (b) or (c) gets amazing stuff done. Occasionally, that creates (b) or (c). Rarely, someone without (a) just dreams it could be better and does what they can. That’s it. There’s no European advantage given the above. It’s just bright people taking on hard challenges where neither the brightness nor hardness represent most of what goes on in that country.

    Clive Robinson August 23, 2016 4:59 PM

    @ Nick P, Wael,

    The cool thing is I understand most of the non-ASM parts without remembering the syntax.

    The thing with “old style” BASIC is it’s like your first bicycle, that had training wheels on it. It was designed so it would stay up and you along with it, even though you did daft things as you got over confident.

    Actually writing an old-style command-line BASIC interpreter is very easy –certainly easier than writing a full-screen editor– the hard part is really the memory management and, if you implement it, the garbage collection.

    The rot set in with “objectification” and Object based BASICs became just one of a plethora of object based languages on which the object model did not sit comfortably. Also with objects came serious bloat and slowness and a myriad of other issues.

    gordo August 23, 2016 5:08 PM

    @r

    …a couple fishy things

    The reference to “going through this archive” is probably from 2014:

    And there’s another prospect that further complicates matters: Some of the revelations attributed to Snowden may not in fact have come from him but from another leaker spilling secrets under Snowden’s name. Snowden himself adamantly refuses to address this possibility on the record. But independent of my visit to Snowden, I was given unrestricted access to his cache of documents in various locations. And going through this archive using a sophisticated digital search tool, I could not find some of the documents that have made their way into public view, leading me to conclude that there must be a second leaker somewhere. I’m not alone in reaching that conclusion. Both Greenwald and security expert Bruce Schneier—who have had extensive access to the cache—have publicly stated that they believe another whistle-blower is releasing secret documents to the media.

    The Most Wanted Man in the World
    By James Bamford | WIRED | August 22, 2014
    https://www.wired.com/2014/08/edward-snowden/

    ab praeceptis August 23, 2016 5:09 PM

    Nick P

    I’m not surprised by that answer. But the real world is quite different from a sofa with 1.000 papers around it. You can and do provide lists upon lists of projects and at times that’s useful. However: How much code have you produced in, say, Mercury (which you mention)? Have you searched for and finally hacked yourself syntax highlighting for your favourite editor? Have you used and tested the seemingly nitty gritty details which, however, can make ones day miserable? Have you created bindings for C libraries with it? Have you also analyzed what Mercurys weaknesses are?

    HOL, for example, is a PITA for developers to work with – and that’s why very, very few do. Don’t get me wrong, HOL is a fine tool for some things, but there is a reason why programmers rarely touch it. Btw, most not only never touch CompCert (a fine verified compiler) but have hardly even heard of it.

    Another example: The creator of TLA went the distance to create TLA+, a “much friendlier” incarnation; and while he mentions some major corps. as reference, it’s rarely used out there in the wild. If you are Amazon you can afford to get a lecture and support (and the prof will gladly hurry to your HQ); unfortunately, very few of us are Amazon.

    Part of that problem is that software developers are educated (and practically forced to) think in terms of language and compiler. It goes even further: There is a language standard and there is what ones compiler accepts. Guess which one is binding for most programmers.

    As for across the ocean and europe, my point isn’t that across the ocean they are stupid and have no culture (no matter whether true or false; it’s simply not my point). My point is the significance of proper reasoning and a solid foundation (where europe happens to be in a much better position. If it were the lila-lulu polynesian islands I’d say the same). Actually, I’m expecting quite a lot from Russia in the next decade; reason: excellent education, rigorous academia, solid cultural and intellectual foundation. I’d even go so far as seeing Russia as the “new europe” because we (west)europeans have lost very much and have reached a very sad academic state, except maybe france and, to a limited degree, the uk.
    The Russians seem to have kept well (over a long winter) what across the ocean hardly exists and what we (west)europeans stupidly and arrogantly let go.

    You fail to see factors not to your taste (or experience or self-view or …). To give some examples:

    The decisive factor sadly often is not the quality of a concept (or language) but the “market”. If large corps. push sht then sht is the widespread normal. Had Ichbiah created Ada as an academic experiment or for a portuguese government agency, it would be dead by now and but a few experts would have heard of it. But he created it for the DoD and so it had weight and clout.

    Or look at Pascal. Hardly anyone would know about it, let alone use it, if Kahn/Borland hadn’t marketed and pushed it (and at an affordable price at that, which probably was the decisive factor for its success).

    Another issue you underestimate or judge wrong (imo) is when you name all the projects done across the ocean. So, does the fact, that a given country has a massive community in pharmaceutical research indicate that its population is healthy?

    Many of the projects you seem to see as a wonderful example of ingenuity (which is a valid way to see them but not the only valid one) can as well be looked at as late insight after fu**ing up big time.

    But in that fat argument block of yours is a hidden very major factor, namely when you mention ISAs. Why? Well, there has been an (understandable) tendency for languages to go along with CPUs. As someone so aptly and funnily put it: “At those times we considered a language something to come with a new architecture or processor”. In fact, C, or more precisely its ancestors, came into existence for that very reason! While a new hw architecture (or a significantly evolved or changed one) was worked on, a new language for that arch. was also worked on.

    So, in a way you answer the question why there were/are so many projects across the ocean. In the end: Because IT was widely their technology (and to a large degree still is).

    But that was not my point. My point was how to create safe and reliable software from the beginning (rather than starting many projects to cure problems that should have been avoided in the first place).

    Extracting it brutally down we arrive at: There is but us and mathematics and proper reasoning (at the other end, let’s not forget about that, there are exploding rockets and humans killed by radioactive poisoning … or maybe soon a hijacked and melted down nuclear reactor).

    That also explains why I’m somewhat hard on you and the 1.000 papers.

    So, to get back: we have us, humans, and we have mathematics and proper reasoning. That is what it boils down to – and that is what we must build on. To do that, we need tools – and, very important, a healthy perspective and understanding of our field of profession.

    And, if I may remind ourselves, albeit somewhat bluntly: That’s what makes us engineers rather than hobbyists. It’s about time to understand – and to understand as non-negotiable and as binding, as the law of our profession – that software must be built no less carefully and no less professionally than bridges or airplanes.

    As long as this is not the accepted and implemented professional standard, we may enjoy fumbling but we’ll have to worry about this week’s 10 mio stolen passwords or, sooner or later, about a hijacked nuclear reactor.

    Clive Robinson August 23, 2016 5:47 PM

    @ ab praeceptis,

    MULs and DIVs … are among the most expensive ops on pretty [much ]any architecture. Going ASM rather than, say C, doesn’t cut out much there

    Yes and no. I suspect that I did not make it as clear as I could have. What I was talking about –and Thoth as well– is doing, say, 64-bit maths on an 8- or 16-bit computer. Most high level languages do not give you access to the carry bit, whereas assembler always does. Thus an “add with carry in” is a standard assembler instruction and very very fast compared to doing it in a high level language.

    Thus to do a 32-bit add on a 16-bit computer requires, in assembler, four reads, one 16-bit add followed by a second 16-bit add-with-carry, and a couple of writes. However, in a high level language on the same hardware: eight reads and masks with four shifts to get the two 32-bit values into 8-bit format in the low byte, then four adds, four masks, three branches and potentially three additional adds, followed by more masks and writes.

    That’s why I said the math or word width instructions are very slow in a high level language when the data width you are working with is greater than the underlying CPU data width.
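    The ADD/ADC pair Clive describes can be sketched in C along these lines (a minimal sketch; the helper name add32_16 and the pointer-out interface are mine, purely for illustration — on a real 16-bit target this is two instructions and the carry flag is free):

```c
#include <stdint.h>

/* 32-bit add on a 16-bit machine, sketched in C: one plain 16-bit add,
   then a second 16-bit add that consumes the carry -- the ADD/ADC pair
   assembler gives you directly. */
static void add32_16(uint16_t a_lo, uint16_t a_hi,
                     uint16_t b_lo, uint16_t b_hi,
                     uint16_t *r_lo, uint16_t *r_hi)
{
    uint16_t lo = (uint16_t)(a_lo + b_lo);
    uint16_t carry = lo < a_lo;               /* carry out of the low word */
    *r_lo = lo;
    *r_hi = (uint16_t)(a_hi + b_hi + carry);  /* "add with carry in" */
}
```

    Note that C has to reconstruct the carry from a comparison, which is exactly the overhead the paragraph above is pointing at.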

    Nick P August 23, 2016 6:00 PM

    @ ab

    ” But the real world is quite different from a sofa with 1.000 papers around it. You can and do provide lists upon lists of projects and at times that’s useful. However: How much ”

    What’s that paragraph have to do with anything? You claimed there needs to be a way to use math to make software. I pointed out there was a ton, with varying degrees of practicality. You also re-iterated a prejudicial statement about Europe’s superiority in this field. Having seen thousands of papers and tools, I easily refuted that by showing there was no consistency in where the best or even mediocre work was coming from. All I need is those papers, some of which had prototypes or commercializations, to refute those two claims.

    “HOL, for example, is a PITA for developers to work with – and that’s why very, very few do.”

    I should’ve been more clear: I meant Isabelle/HOL. I’ve seen countless stuff done with it. The one that’s easier for average developers to learn is use of Dependent Types with Coq. Chlipala has a nice book on that. Heavier stuff can be learned with this one. These topics don’t apply to the average developer because they want to throw code together with minimal thought, rarely do any QA, and often work under time/focus constraints. The latter force them to use 3rd-party stuff as much as possible. The former’s implications mean that 3rd-party stuff is mostly written in languages that are not helpful to your goal.

    What the average developer wants is immaterial to this conversation except the Haskell or medium-assurance parts. The high-assurance, mathematical stuff you seek has to be done by skilled people with appropriate training and background. Isabelle/HOL, Coq, ACL2, and others have worked fine for those types of people with problems that can be fixed over time if enough people cared.

    “to create TLA+, a “much friendlier” incarnation; If you are Amazon you can afford to get a lecture and support (and the prof will gladly hurry to your HQ); unfortunately, very few of us are Amazon.”

    It was a great example of improving a model-checker to help people with less time or talent. You don’t need to be Amazon to follow the guidance on its website. Or Butler Lampson’s free book. Or form a group of such people OSS-style building educational resources, example programs, etc.

    “So, does the fact, that a given country has a massive community in pharmaceutical research indicate that its population is healthy”

    We are talking about practices or tools for creating robust systems. Your claim was a European advantage. That means I need to look at the best tools and most robust systems created to see what their nationality was. They mostly weren’t European. That’s consistent over long periods of time, too. The heaviest hitters were probably the likes of Dijkstra, Hansen, Wirth etc in terms of correctness vs practical effect, where 60’s to early 70’s European output was huge. It was mostly language-level stuff. Yet, we had the Burroughs HW/SW architecture, McCarthy’s LISP, & Hamilton’s correct-by-construction systems for the Apollo program before 1970. Almost no comparison in capabilities or what areas they were successfully applied to. I could easily brag on American superiority there but it’s dishonest given most of America wasn’t doing that. Most of Europe wasn’t either. It was the dynamics of individual and group-level activity in action, where a few brave, bright people set out to conquer the hardest problems imaginable. The rest actively opposed high-integrity systems and still do. So these prejudiced statements about nationality conferring advantages are exactly that and only seek to divide us. Good news is great things are currently coming out of the U.S., U.K., Europe, Australia, and so on working together.

    As we should.

    “It’s about time to understand – and to understand as non-negotiable and as binding, as the law of our profession – that software must be built no less carefull and no less professional than bridges or airplanes.”

    Far as that, software is usually allowed to be crap quality either in general or as long as you looked like you put in good effort. This is what markets want because they almost always vote against the high-quality offering in favor of faster, cheaper, prettier, etc. Same with management, owners, and lawmakers taking money from them. So, engineers wanting to succeed in the market should make whatever the market wants no matter what the cost to society. They’re not responsible for the damage in democracies and markets that refuse to push proper liability legislation and standards. If they choose, for personal principles, they can try to make something with higher quality. There are those who succeed differentiating that way. It’s just a big chance with something like a 90+% failure rate. You need fast development, high quality, and integration of 3rd-party components. Hence me mentioning things like Haskell, Ada/SPARK, or Rust that can do that.

    Wael August 23, 2016 6:47 PM

    @ab praeceptis,

    But the real world is quite different from a sofa with 1.000 papers around it. You can and do provide lists upon lists of projects and at times that’s useful. However: How much

    Every so often, someone directly or indirectly disses a poster. What we need to realize is that posters here have a variety of interests. There are those who are interested in programming language research and history in relation to security. There are those who are interested in implementing their own projects, be they on smart cards, microcontrollers or web applications. There are those who are more inclined towards politics and cultural differences. There are those who possess depth and breadth in all of those subjects, and more. Then there are those who are here to learn from everyone: a good paper, a new crypto algorithm or weakness, a new tool or browser extension, etc…

    There are the humorous ones, the ones with axes to grind, the grouchy bastards, the PR types, etc… a small model of the real world.

    Then again, there are those who know a lot about an area but can’t say much on the subject because of NDA, work contract restrictions, fear of assassinating one’s own character, etc… There are those who are very competent coders, but can’t share, discuss, or even comment on code that’s shared here, for a variety of reasons.

    @Nick P, @Clive Robinson,

    Yo man, check out what they did in the “baby”

    goddamit, Nick! How many times did I tell you I drink straight from the cow’s[1] … ? Huh? I got your basic right here, pal. 🙂 I’ll look at it later… Incredible work load at the moment.

    [1] https://www.theguardian.com/technology/1999/mar/04/onlinesupplement3

    Nick P August 23, 2016 6:59 PM

    @ Wael

    Hilarious shit. I’m guessing the Pentium thing is a reference to the infamous recall that made formal verification standard practice in that industry. Far as BASIC, I’m just saying it is an OS and looked pretty clean vs C or ASM. Far as from the cow, I think that analogy is going to get pretty disgusting once you hit digital, analog, and esp RF. RF would probably be whatever moves through it in waves. I hear you drink from that, too. 😛

    ab praeceptis August 23, 2016 7:21 PM

    @ Clive Robinson

    You are right. But hl languages that do support inline asm aren’t that rare anymore; moreover one can always link in asm routines.

    The point I was focusing on, however, was a different one, but maybe I misunderstood Thoth. I was under the impression that the assumption that ASM is faster than, say, C is not necessarily correct per se. I did quite some ASM stuff myself, but back when I did that it still made sense to quickly glance over any C compiler’s output (today’s compilers are dimensionally better).

    Just recently I had to “print” (not really, but it’s good enough as an explanation) some numbers in a very time-sensitive aio network context and sprintf just didn’t cut it; it was way too slow. So I did my own, and evidently divide-by-10 was an ugly spot. After handcrafting some SHL, ADD, then SHL ASM I got curious (way too late) and looked at what the C compiler made out of a stupid / 10. Surprise (on my side): it knew and used the same algorithm.
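    For the curious, a shift-and-add divide-by-10 of the kind being described looks roughly like this (a sketch of the well-known Hacker’s Delight routine, not necessarily the exact code in question; the name divu10 is mine, and a modern compiler will often emit a multiply-high by a magic constant instead):

```c
#include <stdint.h>

/* Unsigned divide by 10 using only shifts and adds: build
   q ~= n * 0.8 from the repeating binary fraction 0.11001100...,
   shift down by 3 to get ~n/10, then fix a possible off-by-one
   using the remainder. */
static uint32_t divu10(uint32_t n)
{
    uint32_t q = (n >> 1) + (n >> 2);        /* q = n * 0.11 (binary) */
    q += q >> 4;                             /* spread the 0011 pattern ... */
    q += q >> 8;
    q += q >> 16;                            /* ... across all 32 bits */
    q >>= 3;                                 /* q ~= n * 0.8 / 8 = n / 10 */
    uint32_t r = n - (((q << 2) + q) << 1);  /* r = n - q * 10 */
    return q + (r > 9);                      /* correct when q was 1 low */
}
```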

    That’s why I warned that most programmers will usually be better served just sticking to their language. But again, for things like carry flags, you are of course right.

    @ Ni (we are saving bits, right?)

    You also re-iterated a prejudicial statement about Europe’s superiority in this field.

    If I may offer a piece of advice: Don’t allow emotions to take over in an intellectual discourse. I did not say that. I talked about the foundation, not about the tools or the quantity thereof.

    “All I need is those papers” … [a while later] … “TLA+” … “model-checker to help people with less time or talent. … follow the guidance on its website. Or … book.”

    The 1.000 papers effect again. Reality: TLA+, at least for a while, simply didn’t work on linux; another version did, but didn’t on Windows. It was, it seemed, down to some java idiosyncrasy.

    Which leads to another point: a verifier, model checker, or similar in java. In java! Of course, in academia they love antlr, and it’s certainly no coincidence that since antlr got popular a whole slew of tools (usually created in academia) are built in java. I’ve learned from bloody experience to stay away from them, although, from time to time, I’m curious (and stupid) enough to try again (as I did with TLA+).
    In fact, it was only later that I tried to reason about that and to find out why a whole class of tools has a strong tendency not to deserve my trust or a place in my toolbox.

    Sir, it seems to me that you never used TLA+. But you know papers and books about it. You might want to seriously consider having no less respect for actual engineers than you expect from them.
    Again, you are evidently a man who knows a gazillion things from a gazillion papers (I say that respectfully and honestly); and that is valuable and useful and I respect you for that, and I read with attention what you write here. But knowing a thousand cities from books and pictures is very different from living there. Each one of us has his place and use. The 1.000 papers people are a valuable resource, but so are the engineers, even though some 1.000 papers people consider them, I quote, “people with less time or talent”, not “skilled”, or “immaterial to this conversation”.

    For your information: I’m actually working with formal specifications every day; I formally verify anything sensitive or non-trivial. My “1.000 papers foundation” is 100 papers and 900++ hrs of actually doing what we talk about here. I dare to conjecture that this foundation is no less reliable and solid than 1.000 papers.

    Being at that: it is exactly those “immaterial” people with mediocre talent whom we must wake up, teach, and provide with tools they can use. Because it is those people who write the vast majority of software – the very software that spills our credentials and has plenty of backdoors.

    “Your claim was a European advantage.” – Again: No.

    That was what you, in a hurry and with a quick glance, took it to be – and erred. I claimed a european advantage in the foundation, i.e. in the cultural and intellectual basis. If it pleases you (which seems probable) I’m ready to say that across the ocean much, much more software (incl. tools) has been created.
    Not only would I be ready to assume, but I actually did assume, that elsewhere (namely Russia) is a society that will very soon overtake us europeans.

    My interest isn’t in glorifying one country and blaming another and, if that soothes your feelings about the felt “attack”, I’m certainly not proud to be west-european (ask them; I’m telling them often enough how stupidized we have become, how we allowed our very foundation to rot).

    My interest is in finding reasons and the hope that finding them will lead to solutions. That is what I’m interested in.

    Let me offer a concrete example. Something that looks very innocent and unimportant, yet is extremely powerful: ranges. The point I’m interested in is: How must one think to come to that approach – as opposed to the typical and very widespread signed and unsigned integers (and floats, and …)?

    The signed, unsigned, 8-, 16-, 32-, 64-bit integer approach is one that sees a machine and asks “How to make use of that?”. Wirth’s (et al.) range approach comes from a quite different direction, namely from mathematics. It’s an approach that says “well, that machine can do operations very quickly and we have to keep some of its properties in mind (such as word sizes) but – a very important but – foremost we must properly define the domains and codomains of the functions.”

    The C approach is to say that the machine offers a word size, while Wirth’s approach is to ask how to properly use the provided technology, and he keeps the rules of mathematics in mind. Whatever we put in those registers are but values along an algorithm, and for an algorithm, which can be considered a function, we need to properly specify domain and codomain. For him, so to say, an integer is a box of a given capacity and he acknowledges that, but moreover he keeps in mind that we use those boxes for a purpose and that not the tool but the concept and the laws of reasoning are of primary importance.
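    The contrast can be made concrete (identifiers percent_t and make_percent are mine, purely illustrative): in Ada one writes `subtype Percent is Integer range 0 .. 100` and the compiler enforces the domain; the closest C gets is writing the check out by hand:

```c
#include <assert.h>

/* C has no range types, so the domain check that an Ada
   "range 0 .. 100" performs automatically has to be written
   by hand at every construction site. */
typedef int percent_t;   /* intended domain: 0 .. 100 */

static percent_t make_percent(int v)
{
    assert(v >= 0 && v <= 100);  /* the check Ada's type system does for free */
    return (percent_t)v;
}
```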

    Seen from this perspective we can easily understand Ada to be an evolution of that paradigm, one that is even more rigorous.

    But we can also understand other things, for instance that it’s insane and irresponsible to use tools (incl. languages) that do not support and make it easy to discern between algorithmic and implementation errors.

    There are other examples that look innocent yet have much depth, for instance Pascal’s High and Low. While a loop in C is often parameterized in a descriptive way (what’s the lowest index, typ. 0; what’s the highest index, typ. sizeof – 1), behind Pascal’s loop lies the mathematical constructive approach (apologies, english is not my 1st language).
    All that is needed for trouble in the C approach is to add some elements to the array (changing its size). Using the other approach won’t spill the beans. Low is still Low and High is still High (~ Ada ‘First and ‘Last).
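    The closest C idiom to Low/High derives the bound from the array itself rather than hard-coding it (a minimal sketch; the identifiers table, TABLE_LEN and table_sum are mine, for illustration):

```c
#include <stddef.h>

static const int table[] = {1, 2, 3, 4, 5};

/* Derive the bound from the object, much as Pascal's Low/High (or
   Ada's 'First/'Last) derive it from the type: adding elements to
   table later cannot leave the loop bound stale. */
enum { TABLE_LEN = sizeof table / sizeof table[0] };

static int table_sum(void)
{
    int s = 0;
    for (size_t i = 0; i < TABLE_LEN; i++)  /* 0 .. 'Last, in effect */
        s += table[i];
    return s;
}
```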

    One can hardly overestimate that point. Wirth (et al.) come from a mathematical view and are almost necessarily more robust – and btw. much easier to correlate with formal specs. Not by coincidence.

    But there is even more to it, because one approach makes it easy to make implementation errors while the other makes it easy to avoid them. That is also important because unfortunately many developers don’t care too much (as you correctly hinted). I call that the human factor – and we’d better keep it in mind until we find a planet with natural programmers, because if we want more reliable code we will need to produce it with the programmers we have.

    ab praeceptis August 23, 2016 7:41 PM

    @ Wael

    I had no intention whatsoever to diss or insult a poster. As some might have noticed with a certain poster, I politely reply and when I reach the point where I think “stupid a**hole”, I simply turn around and ignore that poster.

    In fact, I respect Nick P, and I have clearly stated that now. We do need the people who know 1.000 papers. But we must not forget that in the end we also need code, preferably properly working code.

    I’m, however, not willing to “politely” ignore issues. I’m willing to address them as politely as I can but I will address them.

    How the hell should I address a bluntly obvious problem without someone feeling dissed? It is, in my mind’s eye (and I’m ready to learn I clumsily overlooked something), a simple fact that pretty much everything across the ocean is quite different. Similarly, it’s bloody obvious that we in europe reach new records of stupidity every decade, it seems. We observe, to be concrete, that nowadays fresh students are considerably less educated than high schoolers were some decades ago. And, pardon me (and I really, really try to avoid politics here), need I spell out the factor that changed europe since about the fifties and that replaced baguettes with burgers and relatively sophisticated TV with primitive soaps?
    Yet, well noted, my interest is not to point at the bloodily obvious culprit – my interest is to avoid us dying in millions next to melting reactors. Or, less dramatically: “how do we enhance software quality?”

    I heard that phrase probably more often than my own name -> “Nowadays IT is the nerve system of modern society, of governments, of industry, and economy”

    To me, our way of dealing with that looks like saying “rattle snakes can kill you” and reaching with your bare hand for one. It’s sheer insanity!

    The epicenter of IT is across the ocean. May they be blessed and earn gazillions with it. But that also means that we must somehow address that problem and, as politely as we possibly can, ask them: “You fu**ed up, and big time. No week without millions of credentials stolen, probably billions of $ robbed, and sooner or later a max. disaster like a nuclear reactor melting. OBVIOUSLY you must think again, and think hard, about your approach rather than happily spitting out new band-aids every month.”

    Wael August 23, 2016 8:01 PM

    @ab praeceptis,

    I had no intention whatsoever to diss or insult a poster.

    Got it. I wasn’t sure, but there are others who do, and I wanted to share that with them as well.

    How the hell should I address a bluntly obvious problem without someone feeling dissed

    The way it came across to me is this: You read a lot of papers, you send thousands of links, have you ever coded a thing in your life? You don’t know what you are talking about — are you sure you read the papers, or do you just look at the pictures? I bet you hold the papers upside down, too…

    But that’s clear now.

    I simply turn around and ignore that poster.

    I noticed 🙂 The rest of your text isn’t contentious — I agree. You are a scholar and a gentleman (or lady — I don’t know)

    Wael August 23, 2016 8:21 PM

    @ab praeceptis,

    And, Pardon me (and I *really, really try to avoid politics here), need i spell out the factor that changed europe since about the fifties and that replaced baguettes with burgers

    Well, that’s unfortunate! I wish you hadn’t said that, because you just got yourself (temporarily) involved in politics (and food.)

    See, when I was younger, I was impressed with German stuff. And I thought: what is it that made the Germans so smart? I found the answer! They eat a lot of cabbage — it looks like a brain, right? Logical, makes sense, must be right.

    The baguettes? I love them, but I doubt they contribute to intelligence. The burgers may be the culprit, I tell you 🙂

    r August 23, 2016 8:25 PM

    I’m here for the pictures; pdf’s and analytical fourplay can paint a great image for those of us who struggle with the (not so 2bit expressions of) MathML.

    Chad Walker August 23, 2016 8:39 PM

    @ Chad Walker: Come on, gang, it’s the weekend! You all should be playing CRYPTOMANCER, a tabletop fantasy role-playing game about hacking, informed by real-life cryptography and networking fundamentals.
    Don’t be silly. This is the place where we play the Me Mom Is A Saint game, Teaching Moderator To Sit Pretty game, You Noise Me Signal game, the Denunciation Method game, and I’m Bored By You game.

    So what would yours supply that we haven’t tried already? Learn from the pros, come up with something that ups the ante, not side-channel-lines it onto some, er, side channel.

    @ianf

    My game is about rolling d20’s, killing orcs, and compromising pretend networks. It teaches non-technical folks the basics of crypto, networking, and privacy literacy. It’s outreach to people who might otherwise proliferate the “I have nothing to hide” argument and makes them allies, and it does so in a fun and silly way. If you are in Utah in October, come see me present and defend the game’s conceptual fantasy architecture to a bunch of security heads at SaintCon!

    r August 23, 2016 8:49 PM

    @Chad Walker,

    We didn’t always have the luxury of pretend networks.

    (+++; some of us never left our home network to begin with, pretend is irrelevant there.)

    That wool sweater?

    It’s a white dress over a black heart; get to know her before you invite her in.

    @by the rules,

    Cracker is racist and derogatory; even the way you use it is dismissive on its own.

    In case you didn’t catch the point I was trying to make: Wael points out that you’re still knee-deep in politics whether you realize it or not.

    I would hate for you to come off as some sort of racist.

    Nick P August 23, 2016 8:54 PM

    @ ab

    EDIT: The good news is I saw the last exchange before submitting. Edited to be more civilized. 😉

    “If I may offer a piece of advice: Don’t allow emotions to take over in an intellectual discourse. I did not say that. I talked about the foundation, not about the tools or the quantity thereof.”

    I tested your claim logically and empirically by looking at results in academia and marketplaces of Europe vs everywhere else. Something I’ve been doing for a long time. I found nothing supporting your unsubstantiated claim of Europe’s superiority in robust systems at any level. I instead found specific groups in various countries were doing it themselves against the status quo. So, I rejected your false hypothesis then pointed out that mere prejudice, in you or your sources, was all that was left in it. I’d bet the technology stacks, CVE’s, and so on in Europe have as many problems as in U.S.. I know they ask for the same stuff on their hiring pages. The good ones are outliers everywhere.

    “Reality: TLA+, at least for a while, simply didn’t work on linux, another version did, but didn’t on Windows. Was, it seemed, to do with some java idiosynkrasy.”

    Reality: most good tools didn’t work for a while on most platforms. They sucked. Demand for the product, or a corporation pushing it, got them into good enough shape to be useful. Then there were tools like TLA+, Coq, ML, and so on that could do great things but were in shoddy condition. Almost no effort by the OSS or commercial sectors, while they put tons of effort into C, Java, etc. Even when those projects were failing, they still put tons of effort into them. It’s a social, not technical, thing causing such problems. Not enough people care.

    “Sir, it seems to me that you never used TLA+. But you know papers and books about it.”

    The papers point was a pile of work produced from all over the world. I brought it up to counter your claim of European superiority. They were also produced by a combination of researchers, professional engineers, and elite combinations of both. A significant chunk are experience reports where they applied specific methods to real-world problems. They then report what worked, what didn’t, and so on. TLA+ was weaker than some of those results so I ignored it. It was other engineers that used it, talking in places like Hacker News, that told me its usability was greatly improved. They gave me the Amazon case study and the Lampson book as evidence. In any case, I thought it was strange you’d ask me to dismiss what thousands of researchers and engineers taught me as a show of “respect” to one anonymous engineer claiming something else. I’m still listening, as you’ll see.

    “If it pleases you (which seems probable) I’m ready to say that across the ocean many, many more software (incl. tools) have been created. Not only would I be ready to assume but I actually did assume that elsewhere (namely Russia) is a society that will very soon overtake us europeans.”

    I know that many top firms in tech, pushing low-assurance stuff, have research centers in Russia where Russians solve their hard problems. I know they’re smart. Yet, all the best stuff is non-Russian. Examples: MLton or CakeML vs Moscow ML; their little Pascal/Oberon compilers vs Modula-3, Ada, or Rust; Elbrus CPUs vs Oracle SPARCs or POWER8; (90nm?) fabs vs 14-28nm among non-Russians; a total lack of equivalents to results like the SP architecture, Cambridge’s CHERI, dependent types, and so on. The published evidence plus the capabilities of their commercial systems indicate they’re behind the state of the art in these sub-fields. Their I.P.-theft-clone-improve cycle does amazing things, like with that Itanium variant Intel was forced to build. Their clean-slate stuff is weak, though. I’m more concerned about China if we’re talking well-educated, innovative copycats with cheap labor. Shenzhen’s innovation, the Loongson processor, and increased patent trolling are a sign of things to come.

    So, let’s summarize. The evidence says the best results are independent of the countries or continents involved. Evidence indicates better education, especially in science and math, in countries outside America. Evidence indicates that’s had little impact on high-integrity software demand or production. That implies social factors on the demand and supply sides dictate what will get uptake. Evidence in the marketplace shows the highest demand for low-assurance systems and the lowest for anything high-assurance. So, that’s the reality of the situation. The foundations are laid by social and economic phenomena that are hostile to robust software development.

    “Something that looks very innocent and unimportant, yet is extremely powerful: ranges. The point I’m interested in is: How must one think to come to that approach – as opposed to the typical and very widespread signed and unsigned integers (and floats, and …)?”

    Now, I like how you think here. Clearly, one group is operating on reason and one is operating only on the machine. I agree with you that Ada’s designers tried to keep the two together. Wirth, too, but much less than Ada, due to simplicity being his highest priority. We also see his hardware work shows a knowledge of math and electrical engineering. The stuff he creates is designed to work with both easily. No surprise an offshoot of his work, Modula-3, was one of the first to have its standard library formally verified. His Lola language and an Oberon variant were also used to synthesize hardware.
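    For anyone who hasn’t used a language with ranges, here’s a minimal sketch (in Python, not Ada; the `RangedInt` helper and `Day` type are made up for illustration) of the idea: the declared domain travels with the type, so an out-of-range value is an immediate, local error instead of a silent wraparound somewhere downstream.

    ```python
    # Sketch of the idea behind Ada-style range types: values are checked
    # against the declared domain at construction, roughly like Ada raising
    # Constraint_Error, instead of silently wrapping or truncating.

    class RangedInt:
        def __init__(self, low, high):
            self.low, self.high = low, high

        def __call__(self, value):
            # Reject anything outside the declared range.
            if not (self.low <= value <= self.high):
                raise ValueError(f"{value} not in {self.low}..{self.high}")
            return value

    # Analogous to Ada's: type Day is range 1 .. 31;
    Day = RangedInt(1, 31)

    print(Day(17))   # fine: 17 is in 1 .. 31
    try:
        Day(42)      # rejected: 42 is outside 1 .. 31
    except ValueError as e:
        print("rejected:", e)
    ```

    The point is that the programmer states the problem-domain constraint once, and every use of the type inherits the check.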

    “And we’d better keep that in mind until we find a planet with natural programmers, because if we want more reliable code we will need to produce it with the programmers we have.”

    Haha. Nicely put. Yeah, that’s going to be tricky. I still don’t know what the solution is going to be. I just know it needs to be high-level, strongly typed (whether static or dynamic), support some kind of Design-by-Contract for interface checks, be safe by design, simpler than Ada, and produce efficient machine code. I’m with Gerard that an improved Modula-3 with nicer syntax would get us pretty far. I’d add some of Ada’s restrictions, Rust’s safety features, and SPARK contracts into it while keeping the language itself simple. Amateur developers can use the simplest version with many forms of safety built into the language and libraries. As they get better, they can add contracts, advanced types, automated tests, or anything else. It just needs to be incremental learning, with the whole language not too damned big.
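    As a rough illustration of the incremental-contracts idea, here is a Python sketch loosely in the spirit of SPARK/Eiffel-style contracts (the `contract` decorator and the `isqrt` example are invented for this sketch, not any real library): the function works without contracts, and the checks can be layered on later without touching the implementation.

    ```python
    # A decorator that checks a precondition and postcondition at runtime,
    # loosely in the spirit of Design-by-Contract. The contract documents
    # the interface independently of the implementation behind it.

    def contract(pre=None, post=None):
        def wrap(fn):
            def checked(*args):
                if pre is not None:
                    assert pre(*args), f"precondition failed: {fn.__name__}{args}"
                result = fn(*args)
                if post is not None:
                    assert post(result, *args), f"postcondition failed: {fn.__name__}"
                return result
            return checked
        return wrap

    @contract(pre=lambda x: x >= 0,
              post=lambda r, x: r * r <= x < (r + 1) * (r + 1))
    def isqrt(x):
        # Integer square root by simple search; any correct implementation
        # satisfies the same contract.
        r = 0
        while (r + 1) * (r + 1) <= x:
            r += 1
        return r

    print(isqrt(10))  # 3
    ```

    A beginner writes `isqrt` alone; a more advanced user adds the `@contract` line; a verifier could later try to discharge the same conditions statically.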

    So, that was my idea on it a while back.

    Wael August 23, 2016 9:11 PM

    @Clive Robinson, @Nick P,

    Also with objects came serious bloat and slowness and a myriad of other issues.

    The idea was, in the early days of .NET, to allow developers with different language skills to work on the same project. One can write in C#, another in Basic, another in ASP.net, etc… Good concept, has some drawbacks.

    As for OOP, I believe it’s a good paradigm for large projects. It may not be the best choice for device drivers or embedded, resource-constrained systems because of bloat, large libraries, and “things that happen behind your back”.

    Wael August 23, 2016 9:16 PM

    @ScottD,

    I am rethinking the orthographic password algorithm.

    An algorithm is one thing, and a commercial solution is another. I suggest you look at use cases and how they can be secured for an enterprise environment — you know, with a two or three tier network where TLS sessions terminate in the perimeter zone (internet facing and easily reachable by attackers.)

    ab praeceptis August 23, 2016 9:18 PM

    @ Wael

    The way it came across to me is this …have you ever coded a thing in your life … I bet you hold the papers upside down, too..

    In fact, I’m proceeding on the assumption that he has already coded in his life.
    No, I do not at all think that he is plain stupid (“upside down”), absolutely not.

    For the rest, let me turn it around: I am not capable of being a halfway professional engineer and a professional researcher at the same time. Hence I assume that at some point during his professional life he did code, and that for some years now he has done research.

    Whatever the details (I just politely answered), I’m not someone to say “I respect you” when I don’t. I did say that to him and I meant it. Period.

    FYI: I’m male.

    For your other post:

    Due to my self-imposed rule of avoiding politics as far as at all possible, I can’t respond to all of your (not consistently fair) statements. But I can say something:

    My self-imposed rule also concerns what others may likely take to be political.

    It, however, wasn’t. And I certainly don’t believe that eating cabbage makes you a smart engineer while eating burger makes you stupid (I like burgers myself).

    What I did was hint at culture, or the lack thereof. And again: my point is not to say “nation x has no culture” (or to consider myself superior based on where I was born). When I mention those issues I do it for a reason: those factors play a role.

    There is a reason some countries create well-educated people while others create high-school graduates, 75% of whom have grave weaknesses in reading, writing, and basic math (according to their own data).

    Actually, I usually do not even care. For most countries I wouldn’t even have a clue what their “culture and education position” is. I think, however, that I have a right to make some remarks when a certain country more or less rules major parts of the world and, more importantly for me, (still) is the epicenter of IT.

    Assume we lived in grave and serious risk of perishing in millions (or of losing our possessions, etc.) because a certain Pacific island, say, liked to carelessly play with coconuts, simply ignoring that this might create serious damage.
    Would I be asked to shut up and not dare to mention the problems in that coconut epicenter of the globe? Certainly not.
    Could we even solve the coconut problem without talking about it and its relation to that island? Almost certainly not.

    Unless we are willing to assume that it’s mere coincidence that the field of IT is controlled/influenced to a very considerable degree by its epicenter across the ocean, and that, hey, tomorrow morning about 9 am the epicenter just flips over to, say, Italy, and from then on Italy basically controls major parts of IT; unless we are willing to assume that nonsense, we can’t but ask the question: “How come we are plagued by unreliable, insecure, and often makeshift software? What in that epicenter country might be the cause?”

    Not to smear that country but to find solutions.

    Soothing side note: in europe we have a “hatespeech regulation” wave (“hatespeech” meaning whatever the governments happen to dislike). There are already people who have been sent to prison for “hatespeech”, and thousands of social media accounts have been blocked or deleted for it. Based on that and on terrorism paranoia, the first european countries are seriously creating “cyber security centers”, and they are even openly talking about creating agencies for the purpose of breaking encrypted communication.

    Can they do that? After all, we have crypto. Yes, they can, because we also have openssl and a plethora of insecure, shoddily built software. Is that OK for you? Is that threatening and important enough to think about why, in the IT epicenter country, things have been going badly wrong for many years?
    How much damage is needed to make those questions and thoughts OK, and not “anti-[country x]” or “too political”?

    If you still have doubts, I recommend a good deep look at SCADA security.

    And, no this was not political.

    Wael August 23, 2016 9:30 PM

    @ab praeceptis,

    Thank you for the elaboration. Ummm… See, security and politics are like peas and carrots. You can try to avoid speaking about politics, but the moment you mention a geographical object (country, sea, ocean), you are inviting political discussions. I’m not into politics, but if I get cornered, I’ll reply.

    The cabbage, baguettes, burgers were all jokes.

    I will not talk about politics with you 🙂

    Thoth August 23, 2016 9:33 PM

    @Clive Robinson

    If I have a circuit (copper or metal ribbon) and I were to change its physical properties by taking scissors and cutting a slit into the metal ribbon without breaking the circuit, what electrical properties could have changed even though the circuit has not been broken?

    Nick P August 23, 2016 9:33 PM

    @ Clive, Wael

    “Also with objects came serious bloat and slowness and a myriad of other issues.” (Clive)

    Someone recently told me on HN that the original Smalltalk machines had specs with about the same memory as the PDP-11 that C was built on. There are also ways to do OOP resolution at compile time with little to no performance impact. I think the problems we see with it in many of these languages have to do with their implementation of it. Similarly, I found out the first LISP ran on an IBM 704. So much for the concern that it can’t handle constrained environments. 😉

    r August 23, 2016 9:40 PM

    @Thoth,

    You trying to unique-ify something?

    @Wael,

    I did not realize that about .NET, that’s an excellent quality.

    Wael August 23, 2016 9:56 PM

    @r,

    You may need to refer to online documentation. I last touched .NET when it first came out.

    Nick P August 23, 2016 10:14 PM

    @ Wael

    It largely became what it hoped to be. You had Visual Basic, C++, Java, C#, X# (assembly), F# (OCaml-ish), and so on. The platform was even used to make several OS’s in a cross-compiler sort of way. They did one-up Java on the cross-language thing. OpenVMS is still the winner, given it did it at the native-code level. The closest thing was all the (good lang)-to-C compilers. 😉

    Figureitout August 23, 2016 10:16 PM

    Thoth
    –It’ll be portable but still either Windows, Linux or Mac OS right? Just an attack surface weakness, that’s all.

    RE: slit ribbon
    –I would guess some stray capacitance (that may affect surrounding areas) and increased resistance from the frayed strip (leading to more heat, maybe melting something). Stray inductance too…

    http://diy.stackexchange.com/questions/11561/what-happens-if-i-shave-a-little-sliver-off-electrical-wire-with-a-utility-knife
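    A quick back-of-envelope for the resistance part of that guess, using R = ρL/A for a copper ribbon (a Python sketch; all dimensions are made up for illustration): a slit that removes part of the cross-section raises the resistance, and hence the local heating, of that segment.

    ```python
    # DC resistance of a copper ribbon: R = rho * L / A.
    # A slit that leaves half the width over 1 cm of a 10 cm ribbon raises
    # that segment's resistance; the rest of the ribbon is unchanged.

    RHO_CU = 1.68e-8            # resistivity of copper, ohm*m

    def ribbon_resistance(length_m, width_m, thickness_m):
        return RHO_CU * length_m / (width_m * thickness_m)

    full = ribbon_resistance(0.10, 5e-3, 0.1e-3)    # intact 10 cm ribbon

    # Untouched 9 cm plus a 1 cm slit segment with half the width left:
    cut = (ribbon_resistance(0.09, 5e-3, 0.1e-3)
           + ribbon_resistance(0.01, 2.5e-3, 0.1e-3))

    print(f"intact: {full*1e3:.3f} mOhm, with slit: {cut*1e3:.3f} mOhm")
    ```

    The DC change is small; as the surrounding discussion notes, the parasitic capacitance and inductance changes matter more at high frequency.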

    Clive Robinson
    –Well, your first method doesn’t make much sense as you described it (why a large negative, not small?), but for the 2nd counter method, won’t that mean occasionally some blocks get the same counter value? Why not use the time elapsed between each sample instead of waiting to increment by 1?

    ab praeceptis
    –Yeah, I haven’t tried using just a zener diode for entropy; I’ll build one someday. Any thoughts on using timing jitter as an entropy source?

    r August 23, 2016 10:20 PM

    @Figureitout,

    Don’t forget Android, native JVM capability basically – good throw away device if it has USB OTG capabilities.

    ab praeceptis August 23, 2016 10:39 PM

    @ Nick P

    Pardon me, but there seems to be a mental barrier problem.

    “I tested your claim logically and empirically by looking at results in academia and marketplaces of Europe vs everywhere else. Something I’ve been doing for a long time. I found nothing supporting your unsubstantiated claim of Europe’s superiority in robust systems at any level.”

    No, Sir, you tested your mistaken understanding of my alleged claim. Would you kindly take note of the fact that I – I repeat – did not claim that europe has more or better tools, nor did I claim that europe has more robust systems.

    You may repeat that again and again but it will not contribute to the discussion nor support my desire to accept you as a well reasoned partner in this discussion.

    As I like to find out reasons, I will gladly offer you what I consider the reason: europe has a better foundation, which, however, is next to worthless, as it simply looks across the ocean and with little thinking just follows.

    But then, again, my claim was not that europe is somehow better or produces better tools or more robust software.

    “Reality: most good tools didn’t work for a while on most platforms. They sucked”

    I waited for that one and honestly hoped you wouldn’t offer that. To clear things up: I talked about a production version, not about an early alpha. I’m not trying to find something bad, I was putting hope on TLA+, I wanted it to work and I wouldn’t even have looked closer if it had not created the impression of being a useable tool.

    “I brought it up to counter your claim of European superiority.”

    Well noted, and I say this with a friendly grin: what is needed to make you stop riding that very dead horse? Smashing coconuts on your head? Please, pretty please, finally take note of the fact that I assumed (and still assume) european superiority in the foundations, in the premises, in the underlying basis – the result, the outcome in terms of products or robustness, is – kindly listen carefully – non-existent or insignificant imo.

    “Russia and China”

    I think you are gravely mistaken there, and you apply a completely wrong measure. Your use of the term “copycat” underlines that. Russia is not at all about copycat. But they are pragmatic, and, it should be kept in mind, they first were in communism and then for a decade were more dead than alive. Should they have reinvented the wheel? Would that have been smart? Hardly.

    So, they took what could serve as a basis: partly foreign (e.g. the open-sourced SPARC), partly their own, e.g. the Elbrus CPU (which btw is no worse than Loongson), which was quite promising and was originally developed in communist times, i.e. under lousy circumstances. To put it bluntly: they had to throw brains at a problem others could throw money at. You yourself mention Modula and Oberon. Isn’t it a sign of intellectual quality to choose those as a base?

    Why am I so convinced, and why did I say what I said above? I’ve talked with them and I’ve seen their work. And they are rigorously and excellently trained.

    But anyway, as I said, my interest is neither in smearing nor in glorifying this or that country. I brought the Russia point up to show you that my position is absolutely not “we europeans are smarter and make better software, too”.

    “So, let’s summarize. The evidence says the best results are independent of the countries or continents involved. Evidence indicates better education, especially in science and math, in countries outside America. Evidence indicates that’s had little impact on high-integrity software demand or production. That implies social factors on the demand and supply sides dictate what will get uptake.”

    Widely accepted.

    But, as I said, while a certain country is the IT epicenter and the major producer of crap (incl. in academia), my point was not pro or anti this or that country.

    So, let me pick up your statement and ask the question that really interests and drives me: How come? Why? Or, more pragmatically, “do the good results (no matter where) have something in common and if yes, what?”

    That is the kind of question I’m interested in and that brings us forward.

    I conject (is that the verb for conjecture?) the following:

    The good results indeed have some factors in common, namely i.a.:

    • a “philosophical” layer of properly thinking about the problem
    • strong math orientation and foundations
    • sound reasoning (sound particularly as in “roughly transposable to and guided by mathematical reasoning”)

    where both the first and the last point also touch issues like simplicity or even beauty.

    That provocation, as some may see it, was intentional. Because it opens an interesting and promising question: What is beauty – in relation to the field at hand – and how are beauty, simplicity and mathematics related?

    And I’d like to add a factor that I already mentioned and that I think is grossly underrated: the human. Both as in “how do we tick?” (hence, how must a good human-machine interface be structured?) and as in “in the end, humans must be capable of designing and producing good software”.

    Which invites me to quickly link in one of your closing remarks:

    (me: we will need to produce it with the programmers we have.)
    (you:) Haha. Nicely put. Yeah, that’s going to be tricky

    Ergo: we must either change the humans (small likelihood of success, particularly if needed quickly) or we must change the tools.

    As for the latter, beauty raises its head -> What is a (programming) library? It is (among other things) a “decoupling” that abstracts complications away and offers a simpler interface.
    So, to write to a disk, all I need to do is say open(), write(), close(). The library will take care of many ugly, complicated details.

    That repeats in the OS. Thanks to the OS, a library need not know or be concerned about I/O control, how to make a SATA controller accept and write bytes to a device, etc.

    That, I think, is an extremely promising paradigm – and the most important step is easy: It’s to understand and to transpose that logic of decoupling and making simple.

    To put it poetically: We need programming languages that render us the same service that libraries provide. Even better: we have thousands of smart minds who have thought about that, albeit from another perspective, and we already understand much of the implicated problems, mechanisms, laws etc.

    That, I think, is the most promising answer to your “tricky” remark.

    Behind the “comfortable friendliness” we do, however, need a very rigid and sound basis, which can be provided by paradigmata along the lines of Pascal, Modula, Oberon (which again lend themselves very well to mathematical rigor).

    Finally, we must care and invest less in the spot-and-repair approach and care and invest more in avoiding problems in the first place. I conject that for this we need a) basic formal capabilities and b) an interface to formal spec. and verif. from the beginning, as an important part of the design.

    One important pragmatic subgoal must be to rigorously discern between algorithmic and implementation problems/errors. One very important step in that direction – also addressing the careless average Joe programmer – would be to no longer think in terms of “a language” but in terms of a “software engineering chain”, where a formal tool is designed from the start to produce, as one product, a formally annotated (and compiler-grokable) code skeleton.

    From a logical perspective that would be a triad. One side would be algo specification and verification, the other side would be the final output (compiling), and smack in the middle – where his rightful and proper place is – would be the human developer. This could also, and often probably would, be split between algo designers, possibly even as a purchased service or another group, and programmers who could rely on the skeleton they get (or have elaborated themselves).
    Finally, as the compiler had not only the code input but could also look at the spec, it could do a far better job and make sure that spec and code were congruent.

    Side remark: we should pick up an idea hidden in some of Wirth’s work and have the whole chain offer 2 modes, “strict” and “normal”, with certain rules and requirements. This would enhance the range of use and also ease and speed up uptake.

    “Normal” would be roughly in a ballpark between Pascal and Ada, while “strict” would be beyond and above Ada. Possibly later one could add a “lite” mode for quick-and-dirty stuff as well as an entry point for beginners. That mode could, for instance, be generous with domains/codomains, not insist on formal spec., etc.
    Obviously certain critical components, such as libraries, would be required to be in strict mode only, so as to offer a warm fuzzy feeling of safety when using them.

    And (tongue in cheek) we must stay away from java.

    r August 23, 2016 10:50 PM

    @Figureitout,

    No more of an attack surface than anything else, just in different areas. Certainly a phone that’s only been unboxed and never used can’t present a super threat, where connecting to a smart card USB/serial dongle for an emergency message to be signed isn’t any more dangerous than a laptop that can have parts removed at a border?

    @by the rules,

    I hear you on the Java. C or LISP are likely the best candidates, but with C you need to carry around multiple tool chains… LISP maybe not(?), I have to look into it (personally) considering what I used to do with gas/nasm.

    r August 23, 2016 10:53 PM

    @Figureitout,

    My chromebook is capable of booting off of SD cards; it’s also a full-SMT device. With a little epoxy you should be able to block off most any JTAG ports too.

    Thoth August 23, 2016 10:59 PM

    @Figureitout

    Yup, it will be cross-platform Java. Anything that runs Java 7 or 8 would do, and yes, it presents a vulnerability if security operations are done there, thus the use of the smartcard. The Groggybox logic only exists within the card.

    @r

    Most tamper-resistant chips have a metal tamper mesh, and I am wondering if the mesh can be used for PUF, so if someone uses an FIB laser to cut through the mesh, the mesh could be used for detection via its PUF property to detect laser cuts.

    r August 23, 2016 11:09 PM

    @Thoth,

    That’s what I figured you were on to, I figured something akin to physically altering an antenna. But with how close to the metal cough that mesh sits, do you have the granularity to detect minor breaches?

    I was a while ago wondering if you could print one of those etched and traced antenna arrays over the top (or bottom) of a chip. It’ll be interesting to see @Clive’s response.

    Thoth August 23, 2016 11:53 PM

    @r

    Yes it will be interesting.

    Most PUF algorithms assume that the challenge-response function is done between an external client and the PUF secure processor to check for tampering. My idea is the opposite, which is to host an integrity engine within the mesh of the crypto processor and use the PUF of the metal mesh to generate a “wrapping key”. The parameters for selecting where to start sampling the PUF are considered a security parameter, which the authenticator and the authenticatee have to keep secret (symmetric keyed type), according to the concept of the AEGIS PUF-based crypto processor that @Nick P pointed out before.

    Assuming that the integrity engine consists of an internal authenticator and authenticatee engine, that both contain the security parameters for sampling the PUF, and that both are housed within the tamper mesh itself, both can independently sample the mesh to generate the PUF “wrapping key” (after some hashing and processing of enough PUF material). This PUF-generated wrapping key would be used to unwrap a wrapped master symmetric key, which can be used to authenticate each other and subsequently unwrap other application keys.

    The security is based on the fact that if someone attempts to drill into the metal mesh via FIB laser beams to extract the security parameters, they will consequently destroy the PUF, because the laser would have altered the structure of the metallic mesh. Thus, even if the security parameters needed for sampling the metal mesh’s PUF electrical characteristics were obtained, the fact that the mesh now has its characteristics altered, due to using FIB lasers to dig through it, would make it impossible to recreate the “wrapping key”, and thus the master key would never be unwrapped successfully.

    All this assumes that the parameters are randomly generated within the mesh and not loaded from the factory (where they could be handed over to who knows whom under coercion), and that the algorithm for sampling the PUF characteristic yields enough entropy (plus sprinkling a little magic dust via hashing the entropy for more pseudo-randomness).

    Due to the fact that only the crypto processor knows its own randomly generated sampling secret parameters, only an unaltered crypto processor could calculate its own “wrapping key”.

    For the part where the external world (e.g. normal users or some integrity-checking software) needs to verify the crypto processor, a 2048-bit RSA keypair (the usual technique for TEE, TPM and smartcard environments), either generated on the crypto processor in the factory with its public key extracted, or loaded in the factory by the supplier’s crypto officer, can be used to validate whether the PUF-based crypto processor is still intact and untampered.

    The “wrapping key” generated by the PUF protects the master secret key, which in turn protects the master RSA keypair, so that when the tamper mesh is breached, the correct PUF-based “wrapping key” cannot be reconstructed. All the other keys and secrets down the chain then remain encrypted, which indicates that the tamper mesh has been broken with a high degree of accuracy, in my opinion.
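    A toy sketch of that key chain (Python, stdlib only; the PUF response is simulated with fixed bytes, and XOR-with-a-hash stands in for a real key wrap, so this is an illustration of the dependency chain, not a design — a real device would need a fuzzy extractor for noisy PUF output and an authenticated cipher for wrapping):

    ```python
    # Sketch: wrapping key derived from (secret sampling params, PUF response);
    # master key stored only in wrapped form. Altering the mesh changes the
    # PUF response, so the wrapping key, and everything below it, is lost.
    import hashlib, hmac, secrets

    def derive_wrapping_key(puf_response: bytes, sampling_params: bytes) -> bytes:
        # The "sampling parameters" condition the PUF measurement.
        return hashlib.sha256(sampling_params + puf_response).digest()

    def wrap(key: bytes, data: bytes) -> bytes:
        pad = hashlib.sha256(key).digest()
        return bytes(a ^ b for a, b in zip(data, pad))

    unwrap = wrap  # XOR wrapping is its own inverse

    params = secrets.token_bytes(16)        # device-internal secret
    master_key = secrets.token_bytes(32)

    intact_puf = b"\x11" * 32               # simulated response, mesh intact
    blob = wrap(derive_wrapping_key(intact_puf, params), master_key)

    # Intact mesh: same response, master key recovers.
    ok = unwrap(derive_wrapping_key(intact_puf, params), blob) == master_key

    # A cut alters the mesh, hence the response, hence the derived key:
    damaged_puf = b"\x12" + b"\x11" * 31
    recovered = unwrap(derive_wrapping_key(damaged_puf, params), blob)
    print("intact unwrap ok:", ok)
    print("tamper detected:", not hmac.compare_digest(recovered, master_key))
    ```

    The chain mirrors the description above: PUF → wrapping key → master key → everything else, with no key material at rest in the clear.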

    All of this assumes that nobody managed to spike the production chain of the crypto processors 🙂 .

    I actually emailed and talked to Devadas, the author of the AEGIS processor, some time back, but I felt his answers were kind of vague, as he pointed me away from AEGIS to another of his secure computation projects.

    Wael August 23, 2016 11:57 PM

    @r, @Thoth,

    It’ll be interesting to see @Clive’s response.

    Bullsh*t. I can’t wait for him. I want to have ze pleasure 🙂

    I got news for both of you.

    I am wondering if the mesh can be used for PUF so if someone uses an FIB

    PUF, eh?

    You need to look for papers on FIB (Focused Ion Beam – it’s not a LASER.) You also need to look at transmission line theory (for trace cutting effects.) Then you need to look at PUF (Physical Unclonable Functions.) All of these were discussed to some extent (very lightly) on this blog.

    Yes, you can print antennas. Yes, a cut in the traces will have electrical/electromagnetic effects (parasitic capacitance, inductance, impedance effects, radiation effects, etc.) that will be a function of the frequency and duty cycle of the current on the trace, the layers of the board, etc. At high enough frequencies, you’ll be looking at the “circuit” as a distributed element rather than as discrete lumped components. In other words, you’ll be looking at E and H fields and Maxwell’s equations rather than at V and I and Ohm’s and Kirchhoff’s laws.
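    For a feel of where that crossover sits, a common rule of thumb says lumped (Kirchhoff-style) analysis is reasonable while the trace is shorter than roughly a tenth of the signal wavelength; beyond that, transmission-line (field) treatment is needed. A small sketch, assuming free-space propagation (a real board’s dielectric shortens the wavelength, so the crossover comes even sooner):

    ```python
    # Rule-of-thumb check: treat a trace as a transmission line once its
    # length exceeds about lambda/10 at the signal frequency.

    C = 3.0e8  # speed of light in vacuum, m/s

    def needs_transmission_line(freq_hz, trace_len_m, fraction=0.1):
        wavelength = C / freq_hz
        return trace_len_m > fraction * wavelength

    # The same 5 cm trace: lumped analysis is fine at 10 MHz
    # (lambda = 30 m), but not at 2.4 GHz (lambda = 12.5 cm).
    print(needs_transmission_line(10e6, 0.05))    # False
    print(needs_transmission_line(2.4e9, 0.05))   # True
    ```

    The 1/10 fraction is a convention, not a law; stricter designs use 1/20 or smaller.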

    I don’t see how the mesh will act as a PUF. With a FIB, and a scanning electron microscope / X-ray one can extract the layout of the device (including the mesh) and “clone it”.

    Are you trying to protect against a state agent? Hmmm… A couple of sips of water (on a waterboard) and you’ll tell them everything, and more. Besides, you won’t be able to build what you are describing (even if it were to work, and it won’t.) Hate to be the bearer of bad news, but I’m just trying to save you the time…

    This was just an introduction to pave the way for the steamroller @Clive Robinson will come at you with. I’m sure he’ll be more, aaaaammmm, “diplomatic” about it 🙂

    @Thoth… Did they legalize marijuana in Singapore yet? I’m wondering if you took a couple of PUFs when you came up with this idea 🙂 To summarize: put the bong down and work on the smart card crypto. LOL

    Thoth August 23, 2016 11:58 PM

    @Figureitout, r

    re: Android as attack surface via USB OTG

    It can be done the Ledger Blue way (uses USB OTG), where the Ledger Blue is the HSM: you key in and encrypt your plaintext and decrypt your ciphertext on the Blue device, and the Android phone or desktop simply acts as a transmitter or network interface for the Blue.

    Once I get Ledger Blue’s version of GroggyBox up, I would add an option for the Java GUI to enable “Bypass Mode” which is to simply act as a gateway and leave the Blue to handle the rest of entering sensitive text messages and decrypting sensitive text messages.

    r August 24, 2016 12:05 AM

    @Thoth, forgive me if I’m mistaken but…

    @Wael,

    I don’t think they’ve even legalized bubble gum yet. 😛

    r August 24, 2016 12:07 AM

    @Wael,

    If you know the altered impedance, then doesn’t that give you a bias for further detection?

    Thoth August 24, 2016 12:16 AM

    @Wael

    You can try to help improve open source security here as there is always a need for more practical improvements besides the papers and pens 🙂 .

    If those projects don’t suit your taste, choose your own open source security projects to contribute to.

    Links:
    https://github.com/thotheolh/jcChaCha20
    https://github.com/Yubico/ykneo-openpgp
    https://github.com/thotheolh/groggybox
    https://github.com/maqp?tab=repositories
    https://github.com/LedgerHQ
    https://github.com/open-keychain
    https://git.gnupg.org
    https://github.com/genodelabs/genode
    https://github.com/redox-os/redox

    Wael August 24, 2016 12:24 AM

    @r,

    If you know the altered impedance, then doesn’t that give you a bias for further detection?

    “They” will destroy several during reverse engineering the target devices and then be able to build an identical one.

    I don’t think they’ve even legalized bubble gum yet. 😛

    +1 you’re a genius. Lol

    @Thoth,

    You can try to help improve open source security here as there is always a need for more practical improvements besides the papers and pens 🙂 .

    I paid my dues, baby. Pen and paper suit me just fine now. Maybe in the future when I’m a free agent. Right now, I need to pay the bills.

    r August 24, 2016 1:53 AM

    @All,

    Don’t buy mid-range Dell laptops, I don’t care how cheap they are. My (cheaper) HP can power my USB 1.44; the Dell cannot.

    r August 24, 2016 2:12 AM

    Well, it turns out I was misinformed. Poor millennials, there’s so much cool stuff that they missed.

    From wikipedia:

    “In the mid 1990s, Singapore’s laws began to receive international press coverage. For example, the U.S. media paid great attention to the case of Michael P. Fay, an American teenager sentenced in 1994 to caning in Singapore for vandalism (for using spray paint, not chewing gum). They also drew attention to some of Singapore’s other laws, including the “mandatory flushing of public toilets” rule.[4] Confused reporting about these issues has led to worldwide propagation of the myth that the use or importation of chewing gum is itself punishable with caning. In fact, this has never been a caning offence, and the only penalties provided under Chapter 57 are fines and imprisonment.[5]”

    Thomas_H August 24, 2016 2:27 AM

    So this is going on:

    EU ministers debating tightening up surveillance laws

    …and just this morning I read that both the French and German governments want to ban end-to-end encryption in messaging apps (can’t find an English-language source yet).

    Besides the technical hurdles and the unfortunate fact that this will break certain secure services, I am left wondering whether there’s any organization keeping a detailed tally of the continued assaults on freedom by so-called “democratic” governments, as opposed to the successes of Islamic terrorism in reining in freedom in democracies. I estimate the former will vastly overshadow the latter.

    No URL shorteners in eventual replies, please.

    RE: The NSA tool leak: An alternate theory.
    One thing that has been bugging me since the NSA scandal revealed by Snowden is how the agency keeps being in the news, not in the least by actions of its own director and additional “revelations” on its activities. The reason it bugs me is that this is not the expected course of action for a person or organization caught up in a scandal that very likely is very damaging to them and that they would prefer not to have happened. The expected course of action for such things is that the person or organization will try to shift the attention of the public to something else, as to quieten down the agitation caused by the scandal both within the inner circle and outside of it. The way the NSA case is handled doesn’t strike me as typical, there are some distractions (very noticeable important news overshadowing certain tidbits of information – it’s noticeably absent in the case of this leak, by the way), but not enough, as the topic of the NSA keeps floating back up in new “scandals”. It’s especially grating because the NSA is supposed to be a security organization that thrives on a certain invisibility.
    This makes me believe some of these leaks, including the latest one, may be part of a deliberate strategy aiming at allowing the unimpeded creation of a new organization that can replace the NSA over time, or a reorganization of the NSA itself. This deliberate strategy would consist of revealing small parts of the (past) NSA tool sets and/or activities that are deemed to be “inoffensive” for the NSA’s (or an eventual new organization’s) current and future operations. Of course, just dumping the information, as Snowden did, would not work, so the release method is crafted in such a way that it firstly seems to implicate another big player, while later, closer examination reveals it was “constructed”, further obfuscating the source. Basically, the latest leak doesn’t strike me as a “leak”. It is more akin to a “puzzle” that will keep interested persons busy for a while without really revealing anything important.

    The only assertion we have for an insider leak is that the information release was constructed in such a way that it implies such a thing. There’s nothing that tells us the person doing so is actually doing anything illegal, it’s just that based on previous events we assume that is the case.

    (maybe I’m just being overly paranoid…on the other hand, the pattern of releases really tingles my “look for what is not there (not obvious)”-sense – which I am aware can be totally fooled into finding patterns that are not there 😀 )

    Thomas_H August 24, 2016 2:33 AM

    Oh, and as a follow-up to myself:

    Why are we seeing these kind of leaks only (majorly, in case I missed anything…) with the NSA, and not with other TLAs?

    r August 24, 2016 2:37 AM

    @Thomas_H,

    That is exactly the problem at hand: even if the NSA came out and gave us everything they’ve got, their image is tainted now. There is absolutely no chance for the MIC to win back our trust (or even the world’s) completely unless something unforeseen happens – the current cadence is doing nothing to clean up their image or undo the damage that’s been done to our entire country’s image, for that matter. Something’s amiss. The only thing that could reverse it is a ‘grand slam’ on their part, like getting a tip about a dirty bomb or something horrendous, god help us. But then there’d be all this speculation, like with 9/11, that it was engineered by the CIA or something. I do not look forward to tomorrow (or the next).

    There’s really no turning back, not for them – not for us – not for anyone.

    How many husbands think their wife will get over their cheating?
    Does it ever happen?
    Do they ever get over it?
    How many husbands die at their spurned wives’ hands?

    Clive Robinson August 24, 2016 2:39 AM

    @ Nick P,

    I think the problems we see with it in many of these languages have to do with their implementation of it.

    Yup, it makes you wonder if the people building the implementation ever actually use it in anger, or just go on to the next implementation spec.

    Often when the implementation of a language is “tight” and has some rough user edges (terse CLI commands etc) it’s because the people implementing the language are using the language for real.

    However, when those implementing are not going to use it for real, what you get is a loose and flabby implementation. Superficially the user experience is smooth, with fancy IDEs etc. etc., but the output is, like the implementation, loose and flabby.

    Much though we malign it, C was written by people who used it in very, very constrained environments, and that’s where some of its strengths and weaknesses come from. It’s noticeable how some of the weaknesses got turned into strengths as time went on, and then later became weaknesses again as the constraints changed.

    One such is the block structured code issue. Part of it was that the compiler’s “everything in one file” requirement was not going to work due to core memory constraints. Thus we got partial compilation into object files that got linked together later. The downside was not being able to correctly check that separate compilations were consistent with each other. Now that we have much less constraint on memory, “everything in one file” is not an issue in most cases. But because you had to split things up, people developed techniques that changed the way the industry worked, and even though we could go back to the old way, we’ve progressed well beyond it making any sense to do so. Unfortunately for those new to the game and fresh out of school/college/etc., the problems are still there, and they don’t like the fact that their hand is not being held the way it is in later languages.

    Sometimes the price of freedom is a territory without fences. Thus you risk getting shot for a whole host of reasons before you get to learn about them. Whilst fences make you safer, if you put them up you then have to build paths around and between them and get constrained to follow the way they lead, which may well not be the way you want to go.

    Clive Robinson August 24, 2016 4:37 AM

    @ Figureitout,

    –Well your first method doesn’t make much sense how you described (why a large negative, not small?), but the 2nd counter method, won’t that mean occasionally some blocks get same counter value? Why not use the time elapsed between each sample instead or waiting to increment by 1?

    Hmm, lots of questions in one paragraph, where to start 😉

    Well, firstly, the size of the number fed into the leaky integrator defines how long its effects will be present at the output. That is, it decays away in some manner over time. The larger it is, the longer the effect. However, don’t make it so large that the TRNG output rate swamps the integrator by effectively “driving it into the rails”. Unless of course you decide a “wrap around” is OK and going from .9 to -.9 is acceptable (which in most cases it will be, unless you get your chosen language complaining). However, if you do, it’s technically no longer a drunkard’s walk, which might affect your theoretical proofs.

    As for the difference between negative and positive values, there should not be one, unless you need to correct for bias in the integrator. What I originally said was,

      Where the TRNG puts a large negative (for 0) or positive (for 1) into it.

    I left out the comma after large 🙁

    What I meant was that you come up with some suitably large value X. Then if the TRNG outputs a 0, subtract the value X from the current integrator value. If the TRNG outputs a 1, then add the value X.

    What is important is that you then deliberately bias the output from the leaky integrator such that its effect on the increment is always positive. That is, you make the increment range over 0 to n, where n is positive and a quite small fraction of the total counter range.

    Which brings us to your question about the counter having the same value. It’s not a value in isolation that is important, but a repeatable sequence of values. No matter how big the counter, it will in some infinite future roll over to its start value. If the increment were always the same, then the sequence of the output from then on would be entirely predictable. With the TRNG and leaky integrator the roll-over will still happen, but the sequence will get broken by the changing increment value. Thus with the first method you enter a game between your rate of increment change and an attacker’s ability to analyse it to reduce the range of output values. If you want to dig into that, it would be worth a few PhDs, or a lifetime’s employment in a SigInt agency.

    Which reminds me, try to keep the increments odd 😉
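    A minimal sketch of that first method, with the leak factor, the injection size X, and the increment range all picked purely for illustration (Python standing in for whatever the real implementation would use):

    ```python
    import random

    DECAY = 0.9    # leak factor: past injections fade away geometrically
    STEP = 0.5     # the "suitably large" value X (illustrative choice)
    MAX_INC = 15   # increments stay a small fraction of the counter range

    def leaky_integrator(state, bit):
        """Add +STEP for a 1 bit, -STEP for a 0 bit, then leak toward zero."""
        state = state * DECAY + (STEP if bit else -STEP)
        # Clip instead of wrapping, so the walk never "drives into the rails"
        return max(-1.0, min(1.0, state))

    def next_increment(state):
        """Bias the integrator output so the increment is always positive
        (and odd), mapping [-1, 1] onto 1..MAX_INC."""
        scaled = int((state + 1.0) / 2.0 * MAX_INC)  # 0 .. MAX_INC
        return scaled | 1                            # keep the increments odd

    # Drive the counter with TRNG-perturbed odd increments.
    state, counter = 0.0, 0
    for _ in range(1000):
        bit = random.getrandbits(1)  # stand-in for a real TRNG bit
        state = leaky_integrator(state, bit)
        counter += next_increment(state)
    ```

    Even when the counter eventually rolls over, the increment sequence at that point depends on the recent TRNG history held in the integrator, so the output sequence does not repeat.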

    In the case of the second generator by changing the key you should –if the crypto’s any good– change the sequence. Even if you do end up with the same counter and key the sequence will change when the key does, thus you need to change the key at a rate that will not allow any given sync on key and counter to last long.
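    A toy sketch of that second method, with HMAC-SHA256 standing in for whatever keyed primitive is really used, and a 64-block rekey interval chosen arbitrarily:

    ```python
    import hmac, hashlib, itertools

    REKEY_INTERVAL = 64  # blocks between key changes (arbitrary choice)

    def prng_blocks(seed_key, n_blocks, trng_material):
        """Counter-mode generator with periodic rekeying.
        HMAC-SHA256 is a stand-in for the real keyed cipher."""
        key, counter, out = seed_key, 0, []
        for i in range(n_blocks):
            block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
            out.append(block)
            counter += 1
            if (i + 1) % REKEY_INTERVAL == 0:
                # Fold fresh TRNG output into the next key, so even a
                # repeated counter value never reproduces an old block.
                key = hmac.new(key, next(trng_material), hashlib.sha256).digest()
        return out

    # Demo run with placeholder seed and TRNG material.
    blocks = prng_blocks(b"example-seed", 130, itertools.cycle([b"fresh-trng"]))
    ```

    The point of the rekey step is exactly as described above: any sync an attacker gains on a (key, counter) pair is invalidated at the next key change.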

    Which brings me to your question about “time”. My implicit assumption about TRNGs is that they do not produce output bits like clockwork, especially those with a low output rate. Within reason the time gap will be likewise random within a certain range. It’s what the computer hardware and OS/app do downstream of that which makes the difference. Obviously, the higher the output rate of the TRNG, the smaller that time gap becomes with respect to the deterministic sampling of the computer; thus at some rate the time will be effectively deterministic for the majority of TRNG outputs. At which point things fall into “lock step” and only the state of the bit is random.

    If you are in a lockstep position, then you need to use the bit value as a “stop and go” on incrementing the key counter in some way. The discussion of which would be long.

    Clive Robinson August 24, 2016 5:08 AM

    @ Wael, r, Thoth,

    Bullsh*t. I can’t wait for him. I want to have ze pleasure 🙂

    Happy?

    But what about the things you missed?

    Less happy now?

    Don’t worry they are not that important 🙂

    Oh, on the “gum” issue, many countries –quite rightly– have legislation against the filthy stuff and the careless behaviour it engenders. However, it’s the punishment that varies, from a gentle slap on the wrist to chopping it off at the neck…

    But don’t fear, all you who wish to chew and drop: the US is coming to your rescue with those oh-so-secret rules in their TTP trade treaties… So Coke and Wrigley’s will be able to open new markets, bringing tooth rot and type II wherever they go.

    Just wait for TTP2; it will no doubt have a section mandating the opening of veins in children such that GM corn syrup can be mainlined to rot their brains, livers and other organs but leave them their smiles. So that US Pharma can step in with billion-year patents on all medications to make them pay, pay, pay until they die untimely deaths.

    P.S. I’m only half joking about TTP2.

    Ratio August 24, 2016 6:15 AM

    @Nick P,

    Butler Lampson’s free book [on TLA+]

    TLA, TLA+, this book and (probably most notably) LaTeX (and a book on it) were done by Leslie Lamport.

    (Lampson and Lamport are both at MSR and were both at DEC in the eighties, but they are not the same person.)

    Bruce Schneier August 24, 2016 6:17 AM

    “Does this forum have user policies for safety and other general terms of service, similar to those of other public social collaboration sites?”

    No. At least, no formal policies. This is my blog, and you’re expected to be both on topic and respectful. If you’re not, the moderator will delete your post. If you continue, the moderator will ban you.

    Clive Robinson August 24, 2016 6:24 AM

    @ Thoth,

    Any thoughts on,

    https://www.theguardian.com/technology/2016/aug/24/singapore-to-cut-off-public-servants-from-the-internet

    Personally I think it’s a sensible idea; there is one heck of a load of technical debt involved with “connect everything”. Such behaviour makes for a huge attack surface, before you even start to think about what extra nasties the interconnection complexity brings into the game.

    Also there is the age-old question of “Why?” or “What benefit?” that comes with having the majority of employees connected to the Internet…

    Thoth August 24, 2016 9:53 AM

    @Clive Robinson

    re: Singapore Government employees cut off from Internet at work

    I think this news has been reported in this forum in the past, if my memory serves right. It was a “knee jerk” reaction to some hacking cases against the SG Govt, but it seems this “knee jerk” reaction can be helpful and harmful at the same time, depending on how it’s implemented.

    One note: civil servants not working in sensitive fields can bring their smartphones and use them for network tethering, and that would immediately sidestep the effort of trying to “protect Government infrastructures from the Wild Wild Web”. It is interesting to note that not all Government computers are hardened in a uniform manner, although the governing authority is the Infocomm Development Authority (IDA), which sets the basic rules and regulations for all Government electronic usage (including the military). It is the SG version of NIST of sorts, but more powerful, with teeth and fangs, unlike the US NIST, which can only advise but has no authoritative or punitive powers against Govt civil servants or orgs breaking the rules it lays down.

    Some organisations do block the USB port and that includes tethering and some don’t. It really depends on the situation and the policies on the ground.

    Dennis August 24, 2016 10:17 AM

    @ Thomas_H wrote, “This makes me believe some of these leaks, including the latest one, may be part of a deliberate strategy aiming at allowing the unimpeded creation of a new organization that can replace the NSA over time, or a reorganization of the NSA itself.”

    That’s a very keen observation. Who benefits? It’s safe to guesstimate that there has been a net outflow of talent from gov into the private sector ever since we saw the Snowden revelations. Not referring to gov contractors as beneficiaries, of course, but the private sector doing non-government work. It’s safe to assume most secret work is highly segregated, meaning one does not know what the other is doing down the hall. The Snowden revelation is naturally a revelation to those inside too, if only in giving them a glimpse of the bigger picture.

    It’s easy to dismiss conspiracy theories as the work of nut jobs when no proof is at hand. Various government programs that concern the privacy nut have long been speculated about, and revealed by several whistleblowers, since the days when theories of Echelon surfaced in the digital underground. There was no proof, but deep down we knew they could exist. Then came the whistleblowers and WikiLeaks, who uprooted the moral high ground of nationalism and gave patriotism a different slant. There has been somewhat of a shift, and the internet has made the world somewhat borderless, if you know how to surf it.

    JG4 August 24, 2016 11:17 AM

    @Bruce – Thanks for your comments on etiquette. I always try to color inside those lines.

    https://www.bloomberg.com/features/2016-baltimore-secret-surveillance/

    He visited the ACLU’s headquarters in Washington, and in the office of Jay Stanley, a senior policy analyst and privacy expert, McNutt explained why his cameras weren’t a threat. The aerial images couldn’t identify specific people, because the target resolution would be limited to one pixel per person. The analysts zoomed in on specific areas only in response to specific crimes reported to the police. To further ensure that his employees weren’t spying on random people or addresses, everything they did was logged and saved—every keystroke and every address they zoomed in to for a closer look. Vehicles would be tracked only over public roads in areas where people have no expectation of privacy.

    McNutt cited a couple of U.S. Supreme Court cases to show Persistent Surveillance wasn’t in the business of wanton intrusion. In 1986 a case from California hinged on whether police had the right to fly over a man’s property to see inside a fence in his backyard and then bust him for growing marijuana. The court backed the police, saying that “any member of the public flying in this airspace who glanced down could have seen everything that these officers observed.” Three years later, the court similarly upheld the arrest of a man busted for growing marijuana in a greenhouse after police in a helicopter spotted the plants through the roof, which was missing two panels.

    Stanley heard McNutt out and thanked him for taking the initiative to seek the ACLU’s feedback. But McNutt’s presentation shocked him to the core. As he listened to his visitor describe the type of surveillance the company was capable of doing, Stanley felt as if he were witnessing America’s privacy-vs.-security debate move into uncharted territory.

    “My reaction was ‘OK, this is it,’ ” Stanley recalls. “I said to myself, ‘This is where the rubber hits the road. The technology has finally arrived, and Big Brother, which everyone has always talked about, is finally here.’ ”

    Nick P August 24, 2016 11:51 AM

    @ Ratio

    Definitely not the same person. Just a memory slip on the name. Yes, Leslie Lamport at Microsoft Research. Thanks for the catch!

    @ Clive Robinson

    re languages

    “Yup, it makes you wonder if the people building the implementation ever actually use it in anger, or just go on to the next implementation spec.”

    I heard it happened once.

    “However when those implementing are not going to use it for real, what you get is a loose and flabby implementation but superficialy the user experience is smoth with fancy IDEs etc etc,”

    I think that’s overstating it. They often design them ideologically, thinking certain features are going to be worth it. Moore’s Law also had an impact, where it smoothed over the performance issues. Until bloatware and pay-per-CPU/RAM/packet clouds made them pay for that. It’s why I advise, if performance is important, to test the platform on minimal hardware. I’m talking Pentium 2’s and shit. Or whatever frequency FPGA’s run a single core on. If it’s efficient there, it might be efficient on the lowest common denominator of whatever modern stuff is out there. Especially with safety or security checks on. 🙂

    “not going to work due to core memory constraints. Thus we got partial compilation into object files that got linked together later. ”

    I knew linking was an issue but not that it came from their hardware constraints. Remember that Wirth made Modula-2, named after its module support, on the same hardware. It could’ve been the C inventors’ reason, with them just not trying to come up with a better approach. I’ll have to look into it.

    “and they don’t like the fact that their hand is not being held the way it is in later languages.”

    Sort of. I think it’s a difference in perspective. You see two things they won’t like: (a) holdovers from what they perceive as bad decisions in stuff like C that are due to hardware constraints that no longer exist; (b) the low-level thinking itself. Languages as far back as the 70’s-80’s showed we don’t need the nonsense C did. Showed it on the same PDP-11’s. So, they’re right to think that. As for (b), you’d mock them on this, but they’re right to at least want something better. Our goal in making programs is not micromanaging an instance of electrical circuits. We want to solve a problem in a high-level way that works because it’s closer to how we think about it. So, the best route is to continue to invest in compilers, specifications, annotations, functional, logic-based… anything that helps us work at a high level with good enough efficiency. Haskell’s little community shows you can practically code in math itself and still get good efficiency + concurrency.

    So, I’m not hard on the latter group. I encourage work to continue on efficiency in high-level languages. I also encourage work to continue on analyzing low-level stuff for when we absolutely need the performance. There’s tools to support typed, memory-safe assembly these days. So, we’re already there for anyone who puts in effort.

    “Sometimes the price of freedom is a territory without fences…”

    Interesting way to put it. It’s why I continue looking into ways to both unify programming paradigms and incrementally teach the skill from high-level to low-level. Just need to make sure that, as challenges & learning progress, they don’t get overwhelmed at any one step. Each step builds on one before it. Btw, found a nice description of type systems on Lobste.rs security site. It has nice breakdown for new people to get the differences. Particularly, I like the simplicity of how “concrete examples” section illustrates from basic to better to dependent types. That plus an IDRIS paper that illustrated dependent types with a checkers game and vending machine. You could see how the types could encode the rules where you simply couldn’t express an incorrect solution. Interesting stuff.
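    Real dependent types need a language like IDRIS, but the flavor of that vending-machine example – rules encoded in the types so an incorrect sequence of operations can’t even be expressed – can be roughly approximated even in Python by giving each machine state its own class. The state and method names here are invented for illustration:

    ```python
    class CoinInserted:
        """State after payment: vending is the only available operation."""
        def vend(self) -> "Idle":
            return Idle()

    class Idle:
        """Initial state: inserting a coin is the only available operation."""
        def insert_coin(self) -> CoinInserted:
            return CoinInserted()

    # A legal sequence type-checks and runs:
    machine = Idle().insert_coin().vend()

    # An illegal sequence such as Idle().vend() is rejected by a static
    # type checker (and fails at runtime), because the transition simply
    # does not exist on the Idle type: the rules live in the types, not
    # in runtime if-statements.
    ```

    Dependent types go much further – encoding values, not just states, in the types – but the basic idea of making invalid programs unrepresentable is the same.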

    re OpenPiton

    Awesome find! I didn’t know about it! It’s good that it’s building on the OpenSPARC work as I often tell people to do. One of the best, too. The paper is so detailed I’ll have to hold off on commenting until I have time to dig into it. I do like that they, like Gaisler’s Leon, make it easy to modify. Also, over a dozen cores ASIC-proven at 1GHz on 32nm is already valuable. That’s enough juice to run servers, do gaming, run EDA suite for designing the 28nm one, etc. Bootstrapping. 😉

    r August 24, 2016 12:31 PM

    @Nick P,

    I look at it differently, I think logic and assembly should be taught in school not the reverse of the HLL first. I think it’s a crutch and a blindspot, maybe I’ve got blinders on myself but being able to ctrl+apple+esc and boot trace through single stepping WAS huge for me.

    I don’t think we should hide the revolution that both was and IS from our children.

    r August 24, 2016 1:03 PM

    @Nick P,

    C wasn’t invented by people versed in HTML and BASIC; it was invented by people who already knew how dangerous programming mistakes were – and how powerful asm was. They just made the mistake of letting people start off in something with zero rubber walls and the ability to emit 100kloc of assembly in a week. It was egregious in retrospect, but necessary.

    ianf August 24, 2016 1:55 PM

    rrrrrrrr: […] logic and assembly should be taught in school not the reverse of the HLL first

    Spoken like a true elitist who thinks only of teaching small clutches of mathematically gifted, introvert children, those who can be counted on to keep their attention from one day to the next. Only most Western kids are not like that; they need outlets for shedding their abundant energy, and have learned (yes, from the adults around them) that it’s OK not to perform at the top of their abilities.

    Teaching abstract subjects such as maths is particularly taxing. And even when computers are in the classroom, the focus is on exercises that visibly perform something to judge or admire within the same day – which seldom can be achieved with low-level tools like assembly language… at best like watching Fibonacci numbers multiply in combinatorial nets. Keeping the attention of children is a tall order, and even the most ambitious thinkers/educators (Alan Kay’s Vivarium project; Seymour Papert’s Logo) learn that theories do not easily translate into palpable results.

    When I was in school, our Physics classes consisted largely of stripping down and refurbishing our teacher’s vintage motorcycle (which was not on the curriculum). He had our full attention, the girls’ too, and was clever enough to weave in things like the II law of thermodynamics and other science components into it. How combustion happens, will there ever be A-bomb powered car engines, the works. Most of that I forgot, but I was once stranded with a hired moped on a mountaintop in Sicily, where I managed to strip down il moto using a pocket knife, a screwdriver and some hex keys, blow dry, and restart, so that I could return to civilization and, in time, haunt this forum.

    ianf August 24, 2016 2:00 PM

    @ Chad Walker,

    I did not ask what your game DID, I asked what it could supply (in line of education or entertainment) that we don’t already have. Perhaps you, too, have problems with Level 1 English (UK) comprehension that I’ve been accused of?

    PS. As for computer games in general, I go by this shallow and probably bigoted theory that it’s an activity for people who find thinking all on their own too painful and chaotic, so they flock with like-mindless ones to do their leisure-time strategizing by game makers’ rules.

    […] conceptual fantasy architecture lecture to a bunch of security heads at SaintCon in Utah

    Head cases more like it, but it sounds like you’ve found just the right place to be.

    Gerard van Vooren August 24, 2016 2:20 PM

    @ ab praeceptis,

    It was posts. A typo.

    @ Nick P / ab praeceptis,

    “I just know it needs to be high-level, strongly typed whether static or dynamic, support some kind of Design-by-Contract for interface checks, safe-by-design, simpler than Ada, and produce efficient machine code. I’m with Gerard that an improved Modula-3 with nicer syntax would get us pretty far. I’d add some of Ada’s restrictions, Rust’s safety features, and SPARK contracts into it while keeping language itself simple.”

    No, please no. Stick with what I said before. Design-by-Contract can and should be dealt with outside of the language, with the right IPC. See etypes. Adding Ada’s restrictions also doesn’t make sense when the language itself is well defined and simple, and the same goes for Rust’s safety features. AFAIK there are only two safety features in Rust and the rest is mental masturbation. These two features are lifetimes (which make sense in a weak language such as C but not so much in Modula) and functional PL features that are poorly implemented compared with Haskell’s. The functional PL features are IMO more a burden than a feature in a low-level environment.
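    Gerard’s point is about checks enforced outside the language via IPC; as a toy illustration of the general idea – contracts attached at the call boundary rather than woven into the language – here is a hypothetical Python wrapper. Both `contract` and `isqrt` are invented for the example:

    ```python
    def contract(pre=None, post=None):
        """Attach pre/postconditions at the interface, leaving the
        wrapped function itself contract-free."""
        def wrap(fn):
            def checked(*args, **kwargs):
                if pre is not None:
                    assert pre(*args, **kwargs), "precondition violated"
                result = fn(*args, **kwargs)
                if post is not None:
                    assert post(result), "postcondition violated"
                return result
            return checked
        return wrap

    @contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
    def isqrt(x):
        """Integer square root by simple search."""
        r = 0
        while (r + 1) * (r + 1) <= x:
            r += 1
        return r
    ```

    The same boundary could just as well sit at an IPC interface, with the checks living in the message layer rather than the implementation language.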

    @ Clive Robinson,

    “One such is the block structured code issue. Part of it was “everything in one file” for the compiler requirment was not going to work due to core memory constraints. Thus we got partial compilation into object files that got linked together later.”

    I love your comments! That linking never made sense to me. It was a required step indeed, in a time when memory was expensive. That is also my problem with the modules in Rust: it’s too complicated. If a PL simply concatenates files and compiles them, based on a recipe, the compiler can do a much better job of deleting unused code from the AST, and the resulting executable only contains the code it needs. The name-spacing would be at the file or directory level, but preferably only one level deep. Shared libraries need a lot of code to deal with all the possibilities, but with compilation in one step this is a thing of the past. It probably doesn’t work very well for large monoliths, but for simple tools I don’t see why it couldn’t, especially in development mode where optimization isn’t needed.

    r August 24, 2016 2:24 PM

    @ianf,

    While I haven’t read your whole response let me point out that Assembly is the original basic, functionally interactive OS code (think applications) isn’t math heavy. Trust me, I’m no Bruce. Do you know how easy assembly is? The only thing standing between it and basic are a) libraries (which Nick ever so slightly addresses) and b) time (which HLA addresses).

    Trust me, unless you want the children to be squeaking by with OOPs all day long. Is that what you want? People who don’t understand the issues at play? There’s less risk in RISC when it comes to your children’s brains, huh?

    Mine have already suffered enough with the public education system.

    r August 24, 2016 2:53 PM

    @ianf,

    Actually, when it comes to your children’s brains, there is less risk in risc (reduced instruction set comprehension) when it comes to public education: I posit that if public education were better, your kids would have a harder time meeting the status quo and would find much more and much harder competition in their peers and fellow students. There would be mass failure; instead we have mass failure under cover of being an A+ student in a D+ school at best. Good for the rest of the world – you should thank us, Europe, China… But horribly detrimental to the people back home.

    Clive Robinson August 24, 2016 3:39 PM

    @ r, Nick P,

    I look at it differently, I think logic and assembly should be taught in school not the reverse of the HLL first.

    I must first admit to being biased, before I say I agree with you.

    At school we initially had a 300 baud acoustic-coupler modem dialing into a time-share system in Oxford. It was expensive to use in terms of the phone bill, so we had to write and debug our code the real hard way. OK, it was easier because it was BASIC, but even so, as a scruffy twelve-year-old it was a great morale booster to get code running. However, the following year, with funds from the PTA, the physics master got in a very simple single-chip 4-bit micro with 1Kbit –yup, not byte– of RAM, a 2K PROM, and a number of switches and lamps as the IO. So we dived into some very primitive assembler. A further PTA grant enabled us to buy chips and build a couple of computers using perf board and wire-wrap sockets.

    Little did I realise that it would stand me in good stead just a few years later to teach college students how to build Z80-based systems with “high speed” –for the time– 9600 baud serial interfaces and multiple A-D & D-A converters with other instrument-level IO, so the students could learn to build their own test systems or industrial controller prototypes.

    Which in turn stood me in good stead for a job actually designing HPC systems for the body scanner industry. So, working with bit-slices, I got to design my own assembler by writing the microcode.

    Happy days long gone, but yes, learning right from the bottom of the stack and working up taught me much that few ever get to know; importantly, it taught me where the powers and pitfalls of such freedom lay.

    It’s something many CS grads don’t get taught, yet wander over to the other science and engineering faculties and the students learn it in their own time. They have to, to get their non-computer projects working.

    So as I’ve indicated before I tend to favour graduates from other hard sciences over CS grads.

    Nick P August 24, 2016 3:47 PM

    @ All

    If interested in type systems, this thread on Hacker News has proven to be as interesting as I thought it would be. The original article is a good description of type systems I already linked to Clive. The sheer number of people from different backgrounds in language design on this thread, all citing examples, leads to some interesting finds. As I’ve been digging up first-order verifications, I’m particularly interested in catnaroek’s proposal of a dependently-typed language that uses decidable, first-order logic instead of stuff requiring provers. I didn’t know such dependent types could exist. Point being, you can express common properties real-world programmers would be interested in then automate the checks on their annotated code. They don’t have to do much work or no work if it’s a library. Sounds great to me. 🙂

    @ r

    “I look at it differently, I think logic and assembly should be taught in school not the reverse of the HLL first.”

    They tried to do that for years. Very few people took those classes because they’re bogged in boring, technical stuff. Exciting to some of us but not most that just want to create things. Further, most people go to college to learn job skills. The jobs were C++, Java, and C# with latter two starting to dominate. Schools, being training programs for markets, transitioned to things like Java and C#. People can use those effectively using heuristics that guide the compilers to produce fast code. Also, prebuilt libraries that native-level thinkers optimized the heck out of. The majority rarely needs to know about the underlying machines. Hence, me trying to come up with an approach that brings in tons of candidates, produces tons of workers, and also gets more people to the metal.

    Note: If sufficient interest exists, a school can always do it both ways where you can go straight to assembly or C if you choose. Nothing stopping that. I’m talking a better baseline than only having ASM/C classes almost nobody gets or Java classes that poorly teach new programmers.

    “C wasn’t invented by people versed in html and basic, it was invented by people who already knew how dangerous programming mistakes were – and how powerful asm was. They just made mistakes letting people start off in something with zero rubber walls and the ability to emit 100kloc of assembly in a week. It was egregious in retrospect, but necessary.”

    That’s a myth. They certainly knew how to code. Their choices were almost totally due to personal preference, the ancient hardware they had, etc. Others using the same hardware did better. So, it’s undeniable that they didn’t have to put in the garbage they did. They just wanted to for personal reasons, including max speed. I’ve worked to debunk this myth by tracing how each step happened using the papers written by the inventors themselves. Amazed me how much C proponents’ claims about why it does things certain ways contradicted the statements of its inventors at the points they invented it. Here’s the brief history.

    It got slammed in a few ways on Hacker News when someone posted it. Not any counter-evidence to my points. They all stood. There were a lot of people that (a) didn’t know amazing stuff like Modula-2, Smalltalk, and LISP was done on PDP-11-equivalent machines and (b) thought I was indicating they should’ve done a full ALGOL instead. I’m going to rewrite it in the future to emphasize I meant they could use a safe, efficient subset of ALGOL like Wirth’s Modula-2 & other languages. Thompson and Ritchie just loved BCPL’s raw style, didn’t care about safety/modularity, and actually only added some typing & structs to get UNIX to compile on a PDP-11. Personal preference, not engineering superiority within constraints.

    Here’s what the other two, Wirth and Jurg, did in Lilith project with same machine to work with:

    “let’s focus on language design the main two went with: the Modula-2 language. It’s very simple, compiles faster than C due to cleaner grammar, is safe-by-default in a few ways, lets you turn that off where necessary, modular, supported concurrency, and runs fast. This approach got repeatedly iterated into progressively safer and faster systems by amateurs & pro’s alike, sometimes just 1-2 students, that crashed less than C apps. Best part relevant to your comment: they built first Modula-2 compiler on a PDP-11. 😛

    Most of C’s advantages, a few more on top of it, fewer problems, and developed on same architecture. Sounds like a clear winner from design standpoint. If you want simpler & faster, the Edison system is a good, extreme example of how ALGOL practitioner might do a C-like project. Too simple vs Modula-2 imho but beats C at its own game of fast minimalism with some Wirth-style benefits. Whole system, with source, got published in one paper.

    General, Modula FAQ

    Modula-2 Wikipedia

    Lilith computer

    Hansen’s ultra-minimalist, Edison System

    Modula-3: industrial version of Wirth’s languages from DEC as answer to C++ & Java that Gerard and I think makes excellent tradeoffs.

    @ Gerard

    re Ethos paper. I’ll read it and consider its approach. I note its opening statement is wrong. One can enforce types or policies across process boundaries and with different languages by using a common specification or proving language. CertiKOS did that to verify a whole hypervisor written in a bunch of different DSL’s. Verisoft did that before to verify a stack from email client to OS to compiler to CPU. So, it can be done & best work is doing it. I’ll still consider etypes in case it’s an easier method or has other benefits.

    re Ada’s restrictions. The restrictions, some similar to Modula/Oberon’s, are methods of expressing things in a way that is both type-safe and unlikely to fail. Still valuable in a Modula/Oberon. It just counters Wirth’s “simplicity above everything, even safety!” mantra, which goes too far. Remember, his core metric is just how hard it is to compile a compiler. That’s neat. Usefulness in diverse, real-world programs with productivity & maintenance in mind is also important. Ada was designed for that. So, I’m saying cherry-pick some of its benefits that can be added in a simple, seamless fashion. Otherwise, leave it out of the core language.

    re Rust. The main benefits are safety from temporal errors (eg use-after-free) and concurrency errors. They’re worth copying. The analysis isn’t that hard, either. Just enforces disciplined use of lifetimes. Still matters in a Modula-3 as exception-free code is better than code that raises exceptions. Not to mention modules that don’t use the GC. Rust can protect those. Hell, its safety features helped in implementing a GC in one project.

    ab praeceptis August 24, 2016 4:25 PM

    Gerard van Vooren

    I think, DbC is often misunderstood, probably mainly because most of us think too much in terms of code and processor.

    At the risk of boring you, Dijkstra again (I’d nail this to the forehead of CS graduates): “software is the implementation of algorithms”.

    DbC isn’t a pimped parameter assert(). DbC is (already from a rather pragmatic developer view) about the domain and codomain of algorithms. It’s about “given you feed me data matching the domain, I will produce data in the codomain and according to spec” (plus invariants, which are another story).

    The domain may or may not contain, say, the NULL (pointer) element. I sometimes use a nice example from number theory: most people would tend to agree that even numbers can’t be prime as they are, by definition, divisible by 2. A trap that is easy to walk into. The correct definition, however, is that a prime number is any number greater than 1 that is not divisible by anything but itself and 1 – which 2 satisfies – and hence the above statement is false.

    True, this is a simple and evident example (which, however, traps surprisingly many grown up software developers).

    Now, assume your function is checking for primeness. You bet that many developers would tend to create a condition ladder with “((n % 2) != 0)” ranking high (less experienced developers might even very early on try to get it cheap with “if (n < 3) return false;”).
    Similarly, many would define the parameter ‘n’ as “int n” – and fail to properly deal with negative numbers.
    DbC isn’t about “make sure the array index is within bounds”. It is about making sure and expressing that the implementation matches the algorithm.

    Whether the array index i for a is outside Low(a) .. High(a) is a job for the compiler.

    Actually and reasonably, some (very few) of the better formal algo tools for software development have DbC, too. And that’s a place where I welcome it, because it will help the developer realize that the correct type for n is “unsigned int”.

    I’m experienced but I’m a human and both formal tools and DbC have saved me sometimes from doing stupid things.

    I conjecture that any safety-conscious language must have DbC and definitely should have a good interface for formal specs. Such a language would know and accept that its job is to implement algorithms and would assume that those algorithms have been properly developed, specified, tested, and verified. Seen from that perspective, it would look insane to ignore domain and codomain.
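The primality trap discussed above can be made concrete. A minimal Python sketch (not tied to any particular DbC framework; the assert stands in for a real precondition mechanism) that states the domain explicitly and deliberately does not special-case 2 away:

```python
def is_prime(n: int) -> bool:
    """Primality test. Precondition (DbC-style): n is a non-negative integer."""
    assert isinstance(n, int) and n >= 0, "precondition violated: domain is n >= 0"
    if n < 2:
        return False                 # 0 and 1 are outside the primes by definition
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False             # found a divisor other than 1 and n
        d += 1
    return True                      # note: for n = 2 the loop body never runs

print(is_prime(2), is_prime(9))
```

The point of the sketch is exactly the one made above: the contract (domain: non-negative integers) lives at the boundary, while the body implements the algorithm without cheap shortcuts like `if n < 3: return False`, which would wrongly reject 2.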

    “linking”

    I’d like to offer a different point of view (and I have generics on my side *g)

    Files are too big a unit. I’d like to have micro libraries with many small thingies, very basic algorithms, mini generics so to say, that I build and collect over time.

    But, on the other hand, yes, files are the wrong approach. They are again an implementation of the “hands on” approach. We have way too many houses in the software world that have been built without architects, guided by “so, well, let’s see what we find at the local builders’ market”.

    ab praeceptis August 24, 2016 4:40 PM

    Grrrr! Apropos traps: stupid me. I fell into the html trap and had part of my “code” broken and maimed. Apologies.

    ab praeceptis August 24, 2016 5:53 PM

    “OpenPiton”

    I’d be a strange CS creature if I were not excited about it.

    But, looking closer, I’m not set on fire. It’s of course currently fashionable to go manycore, and 550k or even 500M cores sounds impressive. But it’s largely theory.

    Looking from a purely academic perspective, I’d be impressed and consider OP very important and impressive. Which can mean “less desirable” in the real world. Case in point: academics are crazy for FPUs. In the real world they are far less important and attractive. While I don’t think that Sun’s ratio of 1:8 was the best conceivable one, I don’t think that 1:1 is healthy and reasonable either.

    Security ops run into a trap and are done in software? With an effective bandwidth of roughly 7 or 8 GB/s at 1 GHz? Thanks, no. IMO they took the wrong turn (but then, I’m neither in a CS university lab nor in a physics center greedy for up to 500M FPUs …).
    Sidenote: why 7 to 8 GB/s? Yes, there are 3 NoCs, each with roughly 7.5 GB/s (-> 2.8 GB at 350 MHz), but they can’t be counted as 3 × 7.5 GB/s because they serve different routes/directions/jobs. And don’t forget that those same NoCs also carry a lot more than user payload.
    Of course, if you happen to be a large corp capable of spending a couple million $ on your own ASIC, you can change things and even, e.g., have hw crypto support.

    The LEON, btw, isn’t that lousy in comparison and it’s a real-world proven design. After all it was designed for ESA (“the european nasa”). Neither are the current Russian Sparc based cpus lousy.

    From what I see (granted, with a view that doesn’t favour the mindset and target), OP is conceptually a big and attractive step, but in practical terms I don’t see industries holding their breath.

    Would I love to have a devel board with that thingy? Sure. Otherwise I should see a psychologist, I think. Would I professionally care much? No. It doesn’t deliver anything that I consider important (which, I confess, comes down to safety and proper concepts and doesn’t care at all about thousands of manycores).

    My priority interest is in finally having a reliable processor, based on a healthy concept, caring a lot about doing things the right way and correctly. That’s why I look towards Russia (on a larger time scale) or China (with lower expectations). Europe? Nah. We will continue to eat the crumbs from across the ocean, I think.

    ab praeceptis August 24, 2016 6:17 PM

    Funny(?) Addendum:

    I think we are focussed too much on cpu performance. That doesn’t make sense for 1 and a half reasons (and possibly others but this is what I’m looking at, now):

    A very major part of our software, starting with the OS, is big, ugly, fat bloat. An average Linux desktop PC devours between 300 and 1000 MB before you even start, say, a word processor. Much of that is bloat. (I assume Windows is similar, but I lack data.)
    Plus the (well, my) “law of pixels”: programmers who do graphics (on average) create considerably worse and more bloated code than console people. Touch “design” or “video” or the like and it gets a dimension worse. And the worst of all are web people. The performance per MB of memory is miserable, but the trouble-per-LOC rate is at the conceivable peak.

    And: Performance isn’t just hardware. Slim, well designed sw on mediocre hw will easily outperform the usual bloatware on fast hw. In case you are interested, have a look at Oberon (which isn’t simply a language but rather a “language coming with its own OS” (or vice versa)). Probably Nick P can tell you quite more examples.

    BTW: recently I had to (oh well, I stubbornly insisted I had to) create a small special editor using Object Pascal. A primitive thingy, but with some special features needed.

    Result: about a 5.5 MB executable! Damn. How had I failed so lousily? But: that thingy, without any hand tuning, was fast – way faster than even the minimalist editors (e.g. xfwrite, based on a quite modest lib). And it also loaded instantaneously (faster than the others).

    And then it hit me: statically linked with everything incl. the kitchen sink. The C++-coded alternatives (also GUI) looked tiny in terms of executable size – but dynamically linked. Their net size? A multiple of my Lazarus thingy.

    Small things one easily forgets about.

    Chad Walker August 24, 2016 7:19 PM

    @ ianf

    “I did not ask what your game DID, I asked what it could supply (in line of education or entertainment) that we don’t already have. Perhaps you, too, have problems with Level 1 English (UK) comprehension that I’ve been accused of?”

    To my knowledge, there is no traditional Dungeons & Dragons-style game that, in addition to classic gamer stuff (e.g. killing orcs and getting loot), demonstrates how characters in a fantasy setting would use asymmetric crypto to securely exchange shared keys across a hostile network full of adversaries, for example. There are games that abstract “hacking” down to a dice roll… Cryptomancer compels the players to decide what key they’ll use to protect what payload, across what channel, at what time, and presents many, many ways in which things can go terribly awry. Playtests yielded really fun and teachable moments, even with folks with IT backgrounds. The concepts are elementary to folks who enjoy cryptanalysis, but for many folks in infosec, it’s a fun way to bridge their profession with a pastime they like. It weaves infosec concepts into a shared narrative, rewarding cleverness, sound opsec, and general deviousness. If such a thing already exists, please point me in that direction, because I want it!

    “[…] conceptual fantasy architecture lecture to a bunch of security heads at SaintCon in Utah. Head cases more like it, but it sounds like you’ve found just the right place to be.”

    I’m totally OK with you trolling me (I think it’s funny, even), but there are some absolutely legit people in the Utah security scene.

    Hank August 24, 2016 8:36 PM

    @ Chad Walker, ianf,

    “If such a thing already exists, please point me in that direction, because I want it!”

    This sounds like a fantastic idea! I mean, combining traditional D&D character development factors with the loot & lure elements of 4Square, Ingress, and Pokemon Go, under an urbanized high-tech fantasy parallel universe. It’s definitely a Go.

    Hank August 24, 2016 9:04 PM

    @ Dennis,

    “There has been somewhat of a shift, and the internet made the world somewhat borderless, if you know how to surf it.”

    It’s a rough surface on many fronts, both national (governments) and private sector (service providers). The internet is very much divided, and using the elephant analogy you can be looking at noses and ears from different elephants rather than the same one (what a way to fool the already blind). On the national front, there is content segregation due to censorship, spam-fighting, and national security. In addition to that, service providers localize content based on geography and demography, in the name of optimization. It’s akin to a ripple effect of sorts. Speaking of which, the surf’s up and I gotta get some exercise done.

    Figureitout August 24, 2016 11:33 PM

    r
    No more of an attack surface than anything else
    –Uh..lol. Yes it is THE device w/ the biggest attack surface today and the hardest to dig in and remove troublesome components. Remove the radios, do an Apple-style crypto implementation (along w/ filesystem encryption) and it’d be a lot better but low feature phones aren’t in the market anymore. I’ll stick w/ my MCU’s.

    Thoth
    –All I wanna know is there is no undocumented communications into smart card or to areas which may serve as bounce off points. I’m not sure how to setup an experiment to test that. Simple serial comms have better tools to look for that crap. For instance, I had a USB stick that had a separate filesystem that would be hidden when plugged into windows/OSX. I only saw it when I plugged it into an OpenBSD setup. Didn’t want to test it on a phone after I infected enough PC’s (also wanna keep the suspected malware to study when I feel I can safely dump the USB contents, I’m worried it’d look for system time on a large OS/desktop PC and delete itself).

    That kind of crap makes me really queasy, so I’d crawl back to my MCU’s. The Ledger Blue looks good, just the bluetooth comms (I have a bad history w/ that too, so I’m biased. At least it’s encrypted. Also look at all the difficulties people have w/ bluetooth when they try to use it a lot; with something buggy like that, like won’t connect, would a little glitch be benign or a sign of an attack..?), NFC, etc. Just a damn serial port and a serial converter if the other device needs USB. Physically connected so you can more easily narrow down problems; you can also wire power to turn on the chip with a button press, so a physical attack is 99.999% needed to compromise. The only reason I used wireless in my pet project was so attackers would have to search your whole house for the logging device; but for this it’s assumed you aren’t about to get raided and can use it w/o worrying about shoulder surfing, so might as well limit problems and use a physical cable.

    Clive Robinson
    –Well you know you can just ignore me and I’ll shutup and go back to my cave… 😛 I’d need to have one of these devices in my hands to try to clear things up a little from a drunkard’s walk and leaky integrators lol, but makes sense kinda. I’m still not sure how some of this doesn’t amount to you just doing personal compression-like functions to obfuscate things. Don’t need further explanation though, nor a PhD in it lol, thanks. At ease soldier :p

    r August 25, 2016 12:30 AM

    @Figureitout,

    Believe it or not, consumer grade devices do make me nervous – I’m aware of their potential, but what choice do you have? Carry a pwnie plug over the border?

    Speaking of which, I just know if I keep runnin’ my mouth, next time I go to Windsor I’m going to have to take some scuba gear.

    I could just send some balloons ahead, or maybe a message in a bottle.

    r August 25, 2016 12:34 AM

    @Figureitout,

    If you’re having that kind of trouble, you should probably purchase a Ledger Blue directly from their offices.

    Thoth August 25, 2016 12:58 AM

    @Figureitout

    “All I wanna know is there is no undocumented communications into smart card or to areas which may serve as bounce off points. I’m not sure how to setup an experiment to test that. Simple serial comms have better tools to look for that crap. For instance, I had a USB stick that had a separate filesystem that would be hidden when plugged into windows/OSX. I only saw it when I plugged it into an OpenBSD setup. Didn’t want to test it on a phone after I infected enough PC’s (also wanna keep the suspected malware to study when I feel I can safely dump the USB contents, I’m worried it’d look for system time on a large OS/desktop PC and delete itself).”

    Smart cards have ISO-7816 for their specifications and then ISO-14443 to extend into the wireless type (NFC). A smart card has a physical protocol and a logical protocol, which are both described in ISO-7816, including how a smart card should send its data “pulses” (those 10101010 encoded in pulses) for the physical protocol; the logical protocol is called the APDU.

    The APDU is the core of how you communicate with the card in this pattern:

    [CLA][INS][P1][P2][LC][—–DATA—–][LE]

    CLA, INS, P1, P2, LC form a sequential 5-byte header, where CLA is the class type, INS the instruction type, P1 Parameter 1, P2 Parameter 2, and LC the length of the command data; DATA is the data, and LE is the Length Expected (the expected length of the reply from the card, which is optional). All logical commands are sent in the above sequential format from the card reader to the card.

    You can imagine the card as the server and the reader as the client, the inverse of the usual network roles: the card, being a “server”, only responds to requests from the reader, in the sense that, from a logical standpoint, a ServerSocket only responds while a ClientSocket sends. If you detect the card sending without the reader requesting, something must be very wrong, as it violates the traditional client-server model that smart card systems are modeled after 😀 .

    The ISO-7816 does describe a logical filesystem but in actual fact, a chip does not care about the “logical filesystem” and simply gives you whatever memory it has assigned. JavaCard takes this a step further by turning it into a JVM programmable environment where there is no concept of filesystem at all. Some card developers attempt to implement the logical filesystem on top of the VM but that is their choice. In simple terms, there is no filesystem from the physical standpoint but the OS implemented onto the card will determine if it has a logical filesystem. Even if it has a logical filesystem, the only avenue of communication is via the APDU command and response (traditional client-server) and if you see commands out of place, it is something worth investigating.

    You can imagine your card reader being a browser and your card being a server; whatever this server has in its storage – be it plain objects, byte arrays, or filesystems – depends on how the card’s (server’s) OS is implemented.

    [ Card Reader (Client) ] —– APDU/Physical contact channel/Physical contactless 13.56 MHz wave channel ——- [ Card (Server) … Contents (Type:Objects/Bytes/VFS) ]
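The command format described above can be sketched as a small parser. A minimal Python sketch (short-form APDUs only; the SELECT command bytes in the example are purely illustrative, not from any particular card):

```python
def parse_apdu(apdu: bytes) -> dict:
    """Split a short-form command APDU into CLA INS P1 P2 [Lc DATA] [Le]."""
    if len(apdu) < 4:
        raise ValueError("APDU header needs at least CLA, INS, P1, P2")
    cla, ins, p1, p2 = apdu[:4]
    body = apdu[4:]
    data, le = b"", None
    if len(body) == 1:              # case 2: only Le present
        le = body[0]
    elif len(body) > 1:             # case 3 or 4: Lc + data (+ optional Le)
        lc = body[0]
        data = body[1:1 + lc]
        if len(body) == lc + 2:     # one trailing byte left over -> Le
            le = body[-1]
    return {"CLA": cla, "INS": ins, "P1": p1, "P2": p2, "data": data, "Le": le}

# Example: a SELECT-style command (INS 0xA4) carrying a 4-byte payload
cmd = bytes([0x00, 0xA4, 0x04, 0x00, 0x04]) + b"\xA0\x00\x00\x01"
parsed = parse_apdu(cmd)
```

This mirrors the client-server point above: everything a monitoring setup should ever see on the I/O line is commands of this shape going reader-to-card, with responses coming back only in reply.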

    A standard smart card complies with ISO-7816, and the wireless ones should also comply with ISO-14443. If those are complied with, you then have to ensure the smart card reader is honest and not quietly editing the logical APDU protocol.

    The linked image of a smart card (below) shows the physical interfaces. Data are usually sent via the I/O line, so if a card is contact-only, you would only expect a data channel via the I/O line into the chip.

    If the card includes a contactless interface, it will use an antenna and you have to sniff the NFC signal wirelessly.

    If I want to catch a side-channel, what I will do for a contact-only chip is to monitor and regulate the 6 contact points as shown in the image. If the card contains a contactless feature, I would just isolate it from other devices and monitor on the 13.56 MHz band to inspect the packet.

    You would want to monitor the card reader as well for any suspicious behaviour.

    To summarize, an ISO-compliant smart card must follow ISO-7816 (physical and logical properties) and, for contactless cards, ISO-14443; thus your monitoring points are the card reader and its behaviour (for ISO-7816 compliance) and, for contactless cards, the 13.56 MHz band for packet inspection. That means you have only 2 entry points (contact or contactless) to guard against.

    Regarding the Ledger Blue, I have been pushing rather hard for their team to bring in a physical disable button for the contactless/wireless feature when necessary. It will not be available in the current version, but for future versions it is still under discussion between me and their team. For now there is only a soft-disable for wireless/contactless from the touchscreen menu. The source code is all on their GitHub page.

    If you want to process sensitive stuff, it is best to energy isolate yourself and your devices (SCIF-like room) and then use the USB cable and soft-disable the wireless/contactless in a secure SCIF-like room.

    I have been actively bothering the Ledger team, suggesting higher-assurance methods and tactics to them and also enquiring about a ton of details, and now my email conversation count is about 30 in total 😀 . Talk about persistence from my end – and about them having pretty good patience to listen and talk to me over so many emails. When I open up my email thread, I have to scroll a ton (not as bad as the current blog comments here) just to get to the current message to read and reply 🙂 .

    Link: http://imgur.com/a/l3al0

    r August 25, 2016 2:00 AM

    There, finally got my quartback running.

    @Thoth,

    I know I’m crazy, but if you’re implementing ChaCha20 from an RFC – how do you know the RFC isn’t a stand-in?

    Do you guys have AND recommend verified copies? Or are you sourcing the specs from existing source implementing it like OpenSSL?

    I’m running into that problem with a) having to update my BIOS, b) requiring FreeDOS or MS-DOS 6.22 to do it, and c) DBAN.

    I think that covers my list for the moment. I finally got the IOMMU working, but can I trust the box it’s running under now that it’s been updated? The internet is not verified enough imo.

    Thoth August 25, 2016 2:10 AM

    @r

    “I know I’m crazy, but if you’re implementing ChaCha20 from an RFC – how do you know the RFC isn’t a stand-in?”

    Below is a link showing what an Internet-Draft of an RFC looks like; it will say:

    Network Working Group
    Internet-Draft
    Intended status: Informational
    Expires: August 24, 2015

    It will have an expiry and says it’s an Internet-Draft.

    “Do you guys have AND recommend verified copies? Or are you sourcing the specs from existing source implementing it like OpenSSL?”

    Always read the RFC and follow it. You are “never wrong” if you simply “follow the book” 😀 . If the standard says A, do A. I know it sounds logically weird and wrong, because there are the what-ifs where a standard might be wrong – and that has happened before – but in the end you are implementing someone else’s standard, not your own, so you need to follow the book.
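One way to “follow the book” concretely is to check an implementation against the RFC’s own test vectors, so a tampered or misread spec fails loudly. A minimal Python sketch of the ChaCha20 block function from RFC 7539, checked against the test vector in its section 2.3.2:

```python
import struct

def _rotl(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def _quarter_round(s: list, a: int, b: int, c: int, d: int) -> None:
    # The ChaCha quarter round per RFC 7539 section 2.1
    s[a] = (s[a] + s[b]) & 0xFFFFFFFF; s[d] = _rotl(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xFFFFFFFF; s[b] = _rotl(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xFFFFFFFF; s[d] = _rotl(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xFFFFFFFF; s[b] = _rotl(s[b] ^ s[c], 7)

def chacha20_block(key: bytes, counter: int, nonce: bytes) -> bytes:
    """One 64-byte keystream block (RFC 7539 section 2.3)."""
    # State: 4 constants, 8 key words, 1 counter word, 3 nonce words (little-endian)
    state = [0x61707865, 0x3320646E, 0x79622D32, 0x6B206574]
    state += list(struct.unpack("<8I", key))
    state += [counter] + list(struct.unpack("<3I", nonce))
    working = state[:]
    for _ in range(10):  # 20 rounds = 10 column + diagonal double rounds
        _quarter_round(working, 0, 4, 8, 12)
        _quarter_round(working, 1, 5, 9, 13)
        _quarter_round(working, 2, 6, 10, 14)
        _quarter_round(working, 3, 7, 11, 15)
        _quarter_round(working, 0, 5, 10, 15)
        _quarter_round(working, 1, 6, 11, 12)
        _quarter_round(working, 2, 7, 8, 13)
        _quarter_round(working, 3, 4, 9, 14)
    out = [(w + s) & 0xFFFFFFFF for w, s in zip(working, state)]
    return struct.pack("<16I", *out)

# RFC 7539 section 2.3.2 test vector: key 00..1f, counter 1
key = bytes(range(32))
nonce = bytes.fromhex("000000090000004a00000000")
block = chacha20_block(key, 1, nonce)
assert block[:16] == bytes.fromhex("10f1e7e4d13b5915500fdd1fa32071c4")
```

If the code and the spec agree, the assert passes; if either was garbled, it fails immediately. The same habit works for any RFC that publishes vectors.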

    “I think that covers my list for the moment, finally got the iommu working but can I trust the box it’s running under now that it’s been updated? The internet is not verified enough imb.”

    You need to think of your own security threat models and how you want to secure yourself.

    Links:
    https://tools.ietf.org/html/draft-irtf-cfrg-chacha20-poly1305-10

    Don August 25, 2016 2:20 AM

    @r

    “Humor increases one’s life expectancy. Humor at the expense of others is kind’ve parasitic. But! by all means: don’t expect every last one of us to want to run around all over town down with a frown – we’ll drown. Clowns will sharpen both one’s dull life and wit. don’t be a mopy d***. ”

    Was I asking anyone to run around frowning? Sorry if it was interpreted that way. I trust everyone here eats well and spends more than ample time doing yoga and at the gym to counter all their time sitting in front of a computer 😉 so no need to run! Or to burn up extra kilojoules by choosing the frown.
    Humour – yeah, great you mentioned it, excellent. Always essential. But we all need an immune system. Ask Thoth: is he being ‘rigorous, disciplined, discerning’ in his brilliant smart card R&D? Humour is not mutually exclusive with being vigilant about what we let in. And, of course, humour is often the best response to a mind parasite or meme.
    I wrote those few words in a sudden moment of clarity in the hope others may appreciate the insight also – also knowing it was utterly and totally relevant to the essence of security and Security, and indeed all facets of life. Let alone self-development and spiritual empowerment. Isn’t the latter the highest level of Security achievement?
    Also: we are all so besieged with degrees of FUD and paranoia – and if you’re not, spend more than 5 minutes on this blog and you will be. My suggestion about being discerning also applies to this –
    we need our energetic immune system to filter all the trump/killary/russia/nsa/microshaft/mcdonalds/“Paul McCartney wasn’t replaced with a double in 1963”/ noise and nonsense we are force-fed all the time.
    So – feel free to disagree.
    Oh, and on the subject of doubles – has anyone ever suspected Tony Blair was swapped/replaced when he visited the US on the eve of the Iraq invasion? His support was essential for the US to really push it through. They needed an agreeable Blair 2.0.
    I’ve seen about 3 different Blairs in a variety of photos spanning years – I swear.

    @ Wael

    “Those words are like bells of wisdom that toll in a world of foolishness. Although the bells tolled for all, I heard them most loud and clear. Thank you for those majestic words, my friend! I needed them.”

    You are welcome! So very, very, very happy you benefited. To be honest, there was a tiny quiet part of my awareness that was writing to you – just subliminally; you were, indirectly and not deliberately, an intended recipient.
    Forgive me for being bold, but I also assume you embrace a ‘Way’ – of which there are so many – internalising your Path beyond the temporal shariat, which you are empowered enough to discard beyond its limited role, in a manner we can be sure Idries Shah certainly did [hence feeling my penultimate sentiments may be relevant to you].
    Incidentally, I’d greatly welcome your feelings on Idries Shah, or more specifically his works, if you were so charitably inclined.

    Wael August 25, 2016 2:51 AM

    @Don,

    i’d greatly welcome your feelings on Idries Shah or more specifically his works, if you were so charitably inclined

    I honestly don’t know much about “Sufism” — relatively speaking. I know the word comes from the Arabic word “Soof”, meaning wool. Sufis, according to what I learned, used to wear wool garments directly on the skin as a sign of humility or a reminder that this world is not “enjoyable” (Muslim males are prohibited from wearing silk or gold, but females can.) I’m sure there is a lot more to it, but I never went to depth in this area. And this is the first time I hear of Idries Shah.

    Me? I have my way. I don’t take anything blindly without references, proofs, or the consensus of knowledgeable subject matter experts (with explanations that make sense, too.)

    Further discussions on the topic would take us way out of the blog’s charter. I also limit my comments on Islam to corrections (how arrogant, eh?) of what I perceive (there, recovered from my arrogance) to be incorrect. I don’t comment on opinions. What I question is the logic, and I try to limit it to that. I give the answers to the best of my knowledge, and if I find I said something inaccurate, I go back and fix it.

    I read some of Omar Khayyam’s work (pen name: Saqi, if I remember). Basically, I’m not qualified to comment on Sufism.

    Clive Robinson August 25, 2016 3:24 AM

    @ Figureitout,

    I’d need to have one of these devices in my hands to try to clear things up a little from a drunkard’s walk and leaky integrators lol, but makes sense kinda.

    The simplest way of looking at it is that you are converting the digital output of the TRNG back into –the digital equivalent of– a very low frequency –compared to the sample rate– white noise source. You then use this waveform to add the noise in a suitable way to “stir the entropy pool”.

    It’s a similar idea to using the time latencies from a hard drive to generate entropy. But without the grief of having to remove the signal from the noise (such is the joy of using certain types of particle detectors).

    Oh if you do use a zener diode as a noise source, remember it’s not a true white noise source, you do need to filter etc. It’s why some people use reverse biased transistors to generate RF noise and then use a Direct Conversion Receiver to get a DC-1MHz etc noise source. You can do it quite easily with a SA/NE 612 chip and a crystal oscillator. You will find designs for such DC receivers in either the ARRL or RSGB “Radio Communications Handbooks” for portable QRP operation in the high HF or low VHF bands.
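As an aside on the caveat above that a zener diode is not a true white noise source: one classic software-side fix for a *biased* (but independent) bit stream is the von Neumann corrector. The sketch below is purely illustrative, in Python with a simulated biased source, and is not drawn from any specific design mentioned in the thread; a real TRNG front end would do this in hardware or a driver, and it does not fix correlation, only bias.

```python
import random

def von_neumann_debias(bits):
    """Von Neumann corrector: consume raw bits in non-overlapping pairs.

    01 -> emit 0, 10 -> emit 1, 00/11 -> discard. For independent but
    biased bits this yields unbiased output, at a large throughput cost.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Simulate a heavily biased raw source (~80% ones), standing in for an
# unfiltered diode noise sampler.
rng = random.Random(42)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
clean = von_neumann_debias(raw)

print(sum(raw) / len(raw))      # close to 0.8: biased input
print(sum(clean) / len(clean))  # close to 0.5: debiased output
```

Note the cost: with this bias only about a third of the input pairs produce an output bit, which is one reason hardware designs prefer to fix the analogue front end first.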

    Thomas_H August 25, 2016 3:49 AM

    @ r, Dennis:

    The main problem is that the NSA-related events following the Snowden scandal keep on revealing further “problems”. This is strange because it does not fit into a typical containment scenario after a major security breach, which would aim at hardening security to prevent similar events from happening in the future (yet similar events seem to be exactly what keeps happening).

    Option #1: the NSA is really a mess. Now I can imagine that that was the case at the time of the Snowden revelations, but it strikes me as curious that such a situation would be allowed to continue, especially when the institution involved is a government agency with a really large budget (on the other hand, bad behavioral habits are difficult to eradicate). What also strikes me as curious is that one agency would not be able to get the kind of experts needed to contain leakages while the others apparently do have access to such experts, since they don’t suffer the kind of security failures that keep happening to the NSA (alternate explanation: they have a different work culture/don’t use contractors/better security checks/different infrastructure/catch rogue elements in time and silence them). It just…doesn’t seem to compute.

    So option #2: it’s strategy. Good strategy, IMHO, exploits weaknesses as tactical advantages, not only those of the enemy but also your own (i.e. draw the enemy to a ‘hole’ in your defenses). Your agency has been compromised and its image tarnished? Make use of that. The public now has a certain image of your agency that basically confirms what the conspiracy nuts have been saying all the time? Make use of that. The part of the public that will now keep focusing on you tends to pay more attention to certain subjects than to others, and tends to get caught up in the technical side of things? Make use of that. At this point, what you have to think of is a ‘feeding mechanism’. This aims at steering people in a certain direction and keeping their prying eyes occupied. It also raises three additional questions: Why (choose this strategy above a regular containment scenario)? What (is actually being released)? Who (are the target)?

    The ‘why’-question I’ve covered almost enough, I think. The ‘what’-question has two levels: the one we can extract from the released data, and an additional one that could only be answered truthfully if we had access to the NSA’s complete pool of exploits. The ‘who’-question also has two levels: the public (“us”), and an additional one, which is directly related to the second answer to the ‘what’-question. Now before I continue speculating on those two second questions, I want to point out one thing: the NSA-related reveals have always concerned older exploits and data until now. This probably indicates that the source(s) are databases of older material, accessible with less restrictions (confirmed for Snowden).

    So if these “leaks” are post-Snowden strategy (and not the result of the NSA being an awful mess), then I’d expect that the actual aim is to show off the level of knowledge of the NSA to third-parties and nation-states, while concurrently closing down decommissioned NSA exploits/back doors that could be used (or are actually known) by those same third-parties and nation-states. At the same time, those independent curious people that could potentially be an unexpected nuisance are kept busy. Seems like a win-win (+ free publicity to those that matter*) situation, no?

    *Hint: the public doesn’t.

    Of course I am just speculating here…

    (let’s see if my internet connection goes down the crapper again after posting this…:P )

    Who? August 25, 2016 4:26 AM

    @Thomas_H

    Option #2 does not make a lot of sense to me. It is too risky.

    There is an easier explanation for option #1. Not only did the NSA have no time to react after the Snowden event, but there is also a risk that cannot be contained: the human factor. The NSA may have the best technological protections, but nothing stops anyone with access to these tools from making a copy and releasing it.

    Thoth August 25, 2016 4:44 AM

    @all

    Triple DES, Blowfish and other 64-bit block ciphers are going to take a downgrade in security protocols.

    Fact is, one of the most powerful industries that drives cryptography (financial institutions and banking) is still stuck on 2DES and 3DES on their HSMs, smartcards and many more.

    Your smartcards usually use 2DES to authorize new installation of applets and card management keys. Credit cards process financial transactions using 2DES and 3DES as part of the EMV standards. A huge number of smartcards out in the open which people are buying and using also use the standard 2DES card management keys defined in the legacy GlobalPlatform smart card standards. Newly manufactured smartcards still use the old legacy protocols because they haven’t proven broken enough yet to warrant a switch-up, and the banking and financial sectors hate to move away from 2/3DES because their legacy systems cannot be touched and use 2/3DES for transaction cryptography.
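The reason 64-bit block ciphers like 3DES and Blowfish are being downgraded is the birthday bound, which the Sweet32 attack in the linked article exploits: ciphertext-block collisions become likely after roughly 2^(n/2) blocks for an n-bit block cipher. A back-of-envelope sketch of what that means in captured-traffic terms:

```python
def birthday_bound_bytes(block_bits: int) -> int:
    """Approximate data volume after which block collisions become likely.

    By the birthday paradox, an n-bit block cipher (in a mode like CBC)
    starts leaking via colliding ciphertext blocks after ~2^(n/2) blocks.
    """
    blocks = 2 ** (block_bits // 2)
    return blocks * (block_bits // 8)  # blocks times bytes-per-block

# 64-bit blocks (DES/3DES/Blowfish): 2^32 blocks x 8 bytes = 32 GiB,
# a feasible amount of HTTPS/VPN traffic to capture.
print(birthday_bound_bytes(64) // 2**30, "GiB")   # 32 GiB
# 128-bit blocks (AES): 2^64 blocks x 16 bytes = 256 EiB, impractical.
print(birthday_bound_bytes(128) // 2**60, "EiB")  # 256 EiB
```

So the cipher itself need not be “broken” in the key-recovery sense; the small block size alone caps how much data one key may safely encrypt.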

    The “export grade” cipher era really messes up crypto security and up till now when there are good quality ciphers, we are still stuck with badly messed up “export grade” ciphers that the banks and financial institutions are unwilling to move away from and that means the crypto hardware and software suppliers and makers will still have to support “export grade” ciphers until every single bank and financial institution in the world have fully migrated away from these weak ciphers.

    Cipher designers need to design ciphers that can scale from a tiny 8-bit CPU to a 64-bit CPU, and standards bodies have to pick a capable and secure selection of ciphers for a cipher suite that supports a wide variety of applications, from securing 8-bit CPUs to 64-bit CPUs and even beyond.

    Link: http://arstechnica.com/security/2016/08/new-attack-can-pluck-secrets-from-1-of-https-traffic-affects-top-sites/

    ianf August 25, 2016 5:10 AM

    Chad Walker: no traditional Dungeons & Dragons-style game that, in addition to classic killing orcs and getting loot, demonstrates how characters in a fantasy setting would use asymmetric crypto to securely exchange shared keys across a hostile network full of adversaries, for example.

    Got it, fantasy crypto in a fantasy setting, just what the Man in the White Labcoat ordered. Very becalming et al.

    […] I’m totally OK with you trolling me (I think it’s funny, even)

    If anything mildly critical qualifies as “trolling,” then you do need to spend some time in Persecution Complex Be Gone Boot Camp. Even more so now that I delivered a second dollop of “that.” Or maybe just name me a Verbose Orc and try to kill it?

    Hank: a fantastic idea! combining traditional D&D character development factors with the loot & lure elements of 4Square, Ingress, Pokemon Go, under an urbanized high-tech fantasy parallel universe. It’s definitely a Go.

    I realize that by addressing this partly to me, you demonstrate that you don’t have the necessary attention span to distinguish friend from foe of gaming. I am one of the latter. I could change sides, however, if Chad and other handheld Pokemon Go-like game developers went the whole hog, and led their players IRL along e.g. high-speed train tracks, and then had the elusive quarry veer off course right before and in front of approaching locomotives, dovetailed with the very train’s real-time position on the tracks. Call this Post-Darwinian Augmented Player Deselection, abetting the weeding out of such not-quite-proficient ones. I mean, this already happened IRL in a few cases, but here it’d be formalized and accelerated. You game?

    @ Clive Robinson agrees with that “logic and assembly should be taught in school not the reverse of the HLL first.”

    Weren’t you quite recently outed as a savant, i.e. an elite within an elite? This is secondary education we’re talking about, I guess, below undergraduate level. I’d like to be a fly on that typical classroom’s wall where such logic-and-assembly language FIRST education is undertaken.

    Clive Robinson August 25, 2016 10:14 AM

    @ ianf,

    This is secondary education we’re talking about, I guess, below undergraduate level. I’d like to be a fly on that typical classroom’s wall where such logic-and-assembly language FIRST education is undertaken.

    If you are in my part of the UK this coming Autumn Term, you can come and sit in on an after-school PIC programming course I teach. The kids are 12-15.

    Thoth August 25, 2016 10:26 AM

    @Clive Robinson

    Would be nice if you can teach these kids defensive programming and of course implementing security and cryptography inside those MCUs.

    We need more security engineers that can actually get the job done, who not just program securely but also know some cryptography. I have met a little too many security engineers who understand neither cryptography nor defensive programming, and some of the financial and banking apps out there are relying on this stuff.

    A look at Cisco, Juniper et al., which are supposed to be “secure”, shows they are not. That is what is going on.

    ab praeceptis August 25, 2016 10:51 AM

    Thoth

    Pardon me, while you are right I take this to be yet another case of “Let’s ignore the gun pointing at us as well as the knife at our throat and let’s focus on the buttons on our shirt which are of regrettable quality!”.

    Why? DES was good enough to protect state secrets for many years. And it’s not as if any Joe or Harry could crack it within the blink of an eye.

    NO I do not recommend DES and I am very seriously concerned about security.

    But: From what I know, most banks still use a 4 digit PIN to “protect” accounts. Moreover I could tell you banks (yes, in the year 2016) that happen to use the same method to “protect” their online banking websites. A 4 digit PIN!

    Now, put 2 to the 56 security next to a 4 digit PIN and everyone should clearly see why I think our problem is not to enhance our indecent but not outright ridiculous door locks. It seems to me that DES is a far superior shield than a 4 digit PIN to protect us from heaven falling on our head.
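The gap being pointed at here can be made concrete with a little keyspace arithmetic (an illustrative sketch only, and ignoring lockout policies and other defenses that PIN systems lean on):

```python
import math

def keyspace_bits(n_choices: int) -> float:
    """Entropy in bits of a uniformly chosen secret with n_choices values."""
    return math.log2(n_choices)

# A 4-digit PIN: 10^4 possibilities, ~13.3 bits.
print(round(keyspace_bits(10 ** 4), 1))
# A 6-digit PIN: 10^6 possibilities, ~19.9 bits.
print(round(keyspace_bits(10 ** 6), 1))
# Single DES: a 56-bit key, ~7.2e16 possibilities.
print(keyspace_bits(2 ** 56))
# Even "broken" single DES offers roughly 7.2 trillion times the
# search space of a 4-digit PIN.
print(2 ** 56 // 10 ** 4)
```

Of course, PINs survive online because attempts are rate-limited and cards lock after a few failures, whereas a captured ciphertext can be attacked offline; the numbers just show why the weakest-link argument bites.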

    That said, we should properly analyze the situation, that is, we should see that the evil banks use a combined protection mechanism, namely [insert mediocre crypto algo] plus legal means. Wherever they are forced to rely on proper technology they do get the best and most expensive (e.g. HSMs).

    Plus, there is a social factor: The legal part of their construct, coming down to “any problem is the customers problem”, also creates a social problem (which maybe shouldn’t be discussed here as it directly leads to politics).

    Funnily, the financial institutes in a way use the same approach we do, albeit they actually use it smarter. Like us they use the doorlock approach where their interests are touched and legal instruments alone can’t protect them. And where we apply blissful ignorance (everything except ever better doorlocks), they have hordes of lawyers (plus excellent political contacts).

    Cipher designers needs to design ciphers that can scale from a tiny 8-bit CPU to a 64-bit CPU

    They did and they do.

    standards bodies have to pick a capable and secure selection of ciphers for a cipher suite that supports a wide variety of application from securing 8-bit CPUs to 64-bit CPUs and even beyond 64-bit CPUs

    Absolutely. Unfortunately, there is little reason to support the assumption that will happen. You yourself named one reason indirectly: The clout of 5 banks and 10 corporations is way bigger than that of 5 or 50 million ordinary people.

    Plus: Those millions are humming away happily using openssl (a.k.a. how not to implement crypto properly, or a serious trouble generator) and paying plenty of bucks to have some corp’s machines auto-certify that they are John Doe – unless, of course, they’d like to be known as “trustworthy banking inc” for the same “low” price.

    With PKI having more to do with a money-making machine (and the occasional man in the middle for some government) than with security, and with banks today using 4 digit PINs, my worries about “DES is dangerously insecure!!!” are very limited and modest.

    Mitch August 25, 2016 11:17 AM

    @ Thoth, ianf, Clive Robinson,

    “If you are in my part of the UK this coming Autumn Term, you can come and sit in on an after-school PIC programming course I teach. The kids are 12-15.”

    “Would be nice if you can teach these kids defensive programming and of course implementing security and cryptography inside those MCUs.”

    I’m sure some kids, and grownups, can pick it up on their own if you were so nice to post the course online, like on a blog. That way you’ll have more reach.

    @ Who?

    “Not only the NSA had no time to react after Snowden’s event but also there is a risk that cannot be contained: the human factor. NSA may have the best technological protections, but nothing stops anyone with access to these tools to make a copy and release it.”

    It’s easy to keep track of who had access to what tools and when. I would not be surprised if they insert some sort of watermarking into operational binaries. The fact these guys were bold enough to release the tools means they stripped it, there is no watermarking and they know it, or they stole someone else’s tools either from the inside or as outsiders. Whichever way, it demonstrates a high level of technical savvy, IMHO, so it’s very likely the second whistleblower in Snowden’s film. 😀

    ianf August 25, 2016 11:26 AM

    Deep inside the longform ArsTechnica article on “Stealing bitcoins with badges: How Silk Road’s dirty cops got caught” there’s this conclusion, which always made me distrust the alleged anonymity of transactions with that crypto currency:

    […] Bitcoin’s greatest feature was also its greatest liability for would-be criminals: everything was on the record, forever. The blockchain was a giant public ledger.

    I suppose that total (if nominally anonymized) accountability is what makes the banks, and other legit financial institutions, interested in it DESPITE the popular image of Bitcoin being the currency of choice for criminals.

    Another conclusion from reading this piece is that the behaviour of Secret Service agent Bridges, a member of the top federal “Bitcoin task force” but privately rogue, who thought himself beyond the reach of law enforcement, is a prime illustration of the ageless adage

        Absolute power
        corrupts absolutely.

      Thoth August 25, 2016 11:46 AM

      @ab praeceptis

      “Why? DES was good enough to protect state secrets for many years. And it’s not as if any Joe or Harry could crack it within the blink of an eye.

      NO I do not recommend DES and I am very seriously concerned about security.”

      I have helped organisations, a known certificate authority, banks and FIs do HSM support and installations, especially for the Thales HSMs, and from my observation, if the financial institutions and banks are allowed to drag on indefinitely, it would take too long to upgrade their security and they would mostly be more attentive to marketing their new products. Who knows, when AES expires and some other cipher suite takes over, we might still be using DES for financial security.

      “But: From what I know, most banks still use a 4 digit PIN to “protect” accounts. Moreover I could tell you banks (yes, in the year 2016) that happen to use the same method to “protect” their online banking websites. A 4 digit PIN!”

      That is not the case for my country (Singapore). The MAS regulation is very strict and dishes out heavy-handed punishments to FIs and banks that do not comply, be they local or foreign. A six digit PIN is the common practice here, and we have no problems with local Singaporean banks and FIs obeying Singapore’s laws to upgrade whenever the MAS says upgrade security. It’s the foreign banks and FIs that are dragging their feet, but eventually they are all obedient enough to follow the minimal security guidelines set by the MAS for 6 digit PINs. So thankfully, we don’t have problems here with local or foreign FIs and banks using weak security that does not comply with the strict MAS regulations on cyber-security they must meet if they want to operate in Singapore.

      If we really want fewer of these huge breaches in the news, FIs, banks, companies, organisations and state agencies should do less feet-dragging and at least have a starting point for upgrading as quickly as possible, since it takes a while to upgrade from a DES-based cipher suite to AES, and that is especially true for the Thales nCipher HSM.

      There is absolutely nothing wrong with DES-based ciphers in the sense that no one has ever managed to break the cipher itself, but the general attitude of resisting improvements and change when it is time to do so (especially when regulations and standards ask for it) is rather troubling.

      I don’t think anyone wants their personal data to be guarded by someone incompetent at doing so. Similarly, these banks, FIs, organisations and agencies are hoarding a lot of data and valuable assets that belong to their customers, and those customers trust whoever is protecting their assets. But it seems this kind of trust falls short when the news headline of some million-record database being hacked into, or something breaking into SWIFT, appears on the front pages of newspapers.

      Essentially it’s about building and maintaining this trust via security, but due to the weakened security this trust is also weak, and the root of this weakened or broken trust (and security) is the problematic attitudes of the custodians of their customers’ valuable assets and trust.

      “With PKI being to do more with a money making machine (and the occasional man in the middle of some gov.) than with security and with banks today using 4 digits pins my worries about “DES is dangerously insecure!!!” are very limited and modest.”

      Both are very insecure regardless and need to be quickly changed to at least a 6 to 7 digit PIN and at least AES-128 or even AES-256.

      An interesting question is, if the Singapore Government can make local and foreign banks and FIs obey the local MAS regulation for security, why can’t the US/UK do so when their Governments are more influential and powerful than ours?

      Dirk Praet August 25, 2016 12:27 PM

      @ ianf

      @ Clive Robinson agrees with that “logic and assembly should be taught in school not the reverse of the HLL first.”

      So do I. It’s not any different than learning some decent solfège basics before picking up your actual instrument(s). Admittedly, it’s horribly boring, but it does make you a better musician and no one who’s ever attended music school will say otherwise. With the exception of savants who intuitively understand whatever instrument they get in their hands.

      ab praeceptis August 25, 2016 12:30 PM

      Thoth

      why can’t US/UK do so when their Governments are more influential and powerful than ours ?

      Maybe you are gravely mistaken? Maybe it’s not “when their Governments” but rather “because their Governments” are more …?

      Trying to avoid politics and staying on a clean course: the US and UK are big and democratic countries. Singapore, from what (little) I know, however, could be described as a benevolent dictatorship. Even if I were wrong there, Singapore is also way, way smaller, and hence many things work quite differently (some of them in a positive way).

      “6 digit pins” – Pardon me, but while that’s a little better it’s largely insignificant. That’s akin to a 3 mm caliber gun vs a 2 mm caliber gun when actually a .45 caliber gun is needed, so to speak.

      May I – and must I – remind us of one of the holy laws of security? Et voilà: The security of a chain is equal to the security of its weakest link.

      There is a reason for me to beat on that drum again and again (and so getting myself disliked …).

      We are completely misdirected. Our problem is not that we need to make our existing and very good algos better. Our problem is many ridiculously weak links in the security chain.

      We aren’t hacked because aes128 isn’t good enough and should urgently be replaced by magic-512. We are hacked because e.g. FIs use php, funny javascript sh*t (and 4 or 6 digit pins) or because major sites do not even sha-2 their password db.

      Another example: We believe religiously in the “1000 eyes” principle, no matter how often it has been demonstrated practically nonsensical.

      And we even believe in “obscurity is not security” blabla, which is bloody obviously nonsensical. Crypto IS obscurity; yes, mathematically sound and well engineered obscurity, but anyway obscurity.

      Before we get a religious war here: Yes, I know that there is a perspective, looked at from which the “not obscurity” statement makes sense, namely when we rely on crypto which everyone can know in all its details – which is a good thing. But a): Do Joe and Mary understand that? I have doubts. And b) The fact that an algorithm is completely open and visible doesn’t change the basic observation that crypto IS about obscurity. After all, we use ever changing seeds (obscurity), (P)RNGs (obscurity), keys (obscurity), etc.

      Actually, one might well say: good crypto is something that reliably and mathematically provably creates security by obscurity.

      The holy foundation still is obscurity. If you think I’m wrong, send me your private keys and tell me why you don’t have your passwords on post its on your monitor.

      Wael August 25, 2016 12:55 PM

      @ab praeceptis,

      The holy foundation still is obscurity. If you think I’m wrong, send me your private keys

      Obscurity means the strength of the algorithm depends mainly on the secrecy of the algorithm design and implementations. Don’t confuse that with protecting the private key.

      If you search this blog, you’ll find this topic discussed at various scattered occasions.

      Dirk Praet August 25, 2016 1:07 PM

      @ Nick P, @ Wael, @ Thoth, @ Figureitout

      This morning I finally received an invitation for the keybase.io alpha I registered about half a year ago for. I can now invite 4 others as well, allowing them to bypass the current waiting queue of about 25k people. If you’re interested, let me know.

      @ Clive

      I just assumed that you’re not interested in this type of mundane implementation. And I didn’t ask @Bruce because he can probably just send these guys a simple mail to get an invite.

      ianf August 25, 2016 1:30 PM

      Thanks, Clive, for the invite. It so happens that I have a date in Cambridge a week from now, but even were the Autumn term already under way, I couldn’t come by to observe you in action, as that’d mean losing my anon mystique… too steep a price to pay for the pleasure of your acquaintance. Also wouldn’t want to destroy the—now up to 14 points, I believe—long list that you and Wael keep on me offline [ref. on request].

        To meet you halfway, I can divulge that I intend to see the Babbage exhibit, and, if time permits, track down and see Alison Lapper works by Marc Quinn. Yes, not only am I a culture vulture, but also sculpture vulture.

      That said, the course you teach is not part of the general curriculum, but an after school activity for kids who came to you already, ehmm… pre-programmed to become post-programmed. So we’re back to square 0.

      @ Thoth,
                    please, don’t ask Clive to do the impossible, “teach these kids defensive programming and implementing security and cryptography inside those MCUs.” I’m sure he does the basics in such a way as to provide a stable stepping stone for their future (if) development, but starting at this higher plane would only confuse them now.

      If indoctrinating school-age children in security “think” right from the start means that much to you, you should’ve thought about it 12-15 years ago, and proceeded perhaps like that Nagel guy to assemble a football-team-strength clutch of future potential dedicated listeners. Or something. Perhaps it’s still not too late for that, with first dividends in ~ 2030?

      @ Mitch,
                    be realistic. Clive’s course is hands-on interactive, possibly conducted from his notes, but hardly suitable for publishing as-is at large. Nor is this the forum for such.

      @ Dirk Praet – without specifics, this discussion of what /not/to/ teach in secondary schools FIRST is purely academic & leading us astray. It is also divorced from reality. Any educator will tell you that most ambitious plans fall flat not because of students’ inability to learn or teachers’ incompetence, but for other, secondary, tertiary reasons.

      Sometimes, as happened to me once, for something as trivial as the need to repair a leaking roof, for which the only funds available were in that school term’s extra-curricular activities – so our artsy excursion was scrapped for an ad-hoc hands-on building restoration course that just happened to take place in the attic (and was quite popular, too, as I recall). Educators have learned to weave alternative course plans. You should too.

      Wael August 25, 2016 2:11 PM

      @ianf,

      I believe—long list that you and Wael keep on me offline [ref. on request].

      What do you mean? Speak en-clair, as you like to say…

      r August 25, 2016 2:12 PM

      @ianf,

      “If indoctrinating school-age children in security “think” right from the start means that much to you, you should’ve thought about it 12-15 years ago, and proceeded perhaps like that Nagel guy to assemble a football-team-strength clutch of future potential dedicated listeners. Or something. Perhaps it’s still not too late for that, with first dividends in ~ 2030?”

      That’s quite the race condition you’re drawing up for us there; economics, education, genetics… Good thing it’s being shored up by 13 year olds having 13 year olds having 13 year olds.

      r August 25, 2016 2:23 PM

      @All,

      ianf’s logic is far more abstract than our programming minds, he programs and disassembles English and other languages – he debugs your head.

      Think about it.

      ab praeceptis August 25, 2016 2:42 PM

      Wael

      My stupidity is limited. Of course I know what you said.

      The algorithm, however, is but a means, a mechanism. The final goal can very well be subsumed under “obscurity”. More bluntly, crypto serves to create obscurity in a controlled, well understood, and reliable manner.

      2 to the x security directly translates to “more time needed to defuse the obscurity”. When we say, aes-256 is more secure than aes-128, we mean that (statistically) way more time/resources are needed to defuse obscurity and to get at the cleartext.

      Funnily this is exactly analogous to the friendly police officer explaining that doorlock Y (US$ 100) will buy more time (~ security) than doorlock X (US$ 20).

      The private key is protected because it’s the means to defuse the obscurity immediately, rather than in years and with systems worth millions. A PRNG creates obscurity so as to seed an algorithm to provide better obscurity.

      Concrete: We introduced salt so as to enhance obscurity when e.g. having our OS store password hashes. Kindly follow me for a moment. Hashes would be worthless if they weren’t properly repeatable, i.e. if input X would not always and guaranteed create output Y. Signatures, for instance, are based on that very fact. So, what is a salt? It’s increased obscurity so as to make an attack on a given hash algo harder.
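The salt point can be shown in a few lines. A minimal illustrative sketch (plain salted SHA-256 for clarity only; real password storage should use a slow KDF such as `hashlib.pbkdf2_hmac` or scrypt, and `hash_password` here is a made-up helper name):

```python
import hashlib
import os

def hash_password(password: bytes, salt: bytes) -> str:
    # Hashes are deliberately repeatable: a fixed input always yields
    # the same digest. A per-user random salt exploits that by making
    # the effective input differ even for identical passwords.
    return hashlib.sha256(salt + password).hexdigest()

pw = b"hunter2"
salt_a, salt_b = os.urandom(16), os.urandom(16)

# Repeatable for a fixed salt (the property signatures also rely on)...
assert hash_password(pw, salt_a) == hash_password(pw, salt_a)
# ...yet two users with the same password store different hashes, so a
# precomputed table over unsalted SHA-256 is useless against the DB.
assert hash_password(pw, salt_a) != hash_password(pw, salt_b)
print("ok")
```

The salt is stored in the clear next to the hash; it adds no secrecy per user, it only forces the attacker to attack each hash individually.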

      A private key plus public key in PKE create obscurity (the sym. key) which then creates obscurity again (the ciphertext).

      I don’t think we serve ourselves and others well when we simply belittle obscurity as the idiot’s understanding of security.

      The decisive difference between ourselves and the Romans tattooing messages (maybe shifted) on a messenger’s head and relying on his hair growing back is that we have a far better understanding of obscurity and far better methods and algorithms to create and work with it.

      r August 25, 2016 2:56 PM

      @by the rules,

      I’m not sure obscurity is the word to apply; I liken encryption to opacity and translation. BTW who’s trolling now? 🙂 A language is obscured by individual meanings, a language is obscured by time OR the opacity of that language to known quantities of deduction. When we encrypt something we pass it through a lens of diffraction – breaking up its bitterness – into its component parts and then remixing them.

      Conceptually, yes it is obscurity – but obscurity is also the inability to tangibly grasp something – with public encryption (not public key, public as in mutual) I liken it much more to translation to an inaccessible language than one that’s inaccessible due to archaisms.

      Hope that helps. If you’re not a native English speaker, also remember something else that could interfere with this impasse: #1 being either one of our grasps of English.

      Wael August 25, 2016 2:59 PM

      ab praeceptis,

      Of course I know what you said.

      I know you know! Really.

      2 to the x security directly translates to “more time needed to defuse the obscurity”. When we say, aes-256 is more secure than aes-128, we mean that (statistically) way more time/resources are needed to defuse obscurity and to get at the cleartext.

      You’re redefining terminology… What is it you are trying to get at, what’s your point? That all crypto algorithms depend on some secret shared (or PKI) entity?

      I suggest you skip the semantics and tell us exactly what you’re trying to improve.

      r August 25, 2016 3:05 PM

      @All,

      About the W10+Kindle BSOD: it’s a scam, Microsoft and Amazon are colluding – they will blame it on Amazon and you will soon receive an update to fix your device. 😛 Because it couldn’t be Microsoft’s fault right?

      ab praeceptis August 25, 2016 4:17 PM

      Wael

      You’re redefining terminology… What is it you are trying to get at, what’s your point? That all crypto algorithms depend on some secret shared (or PKI) entity?

      Yes, partly your “redefining” accusation is correct. But I don’t do it for the fun of it. Allow me an analogy to show my point:

      Some experts argue that our rifles need a better MOA (“more repeatable precision”) to win in the war in which we are plenty attacked. They say that for some cases we need rifles that can hit within a 3 inch circle over a distance of 750 m. They are right – somewhat. And not – somewhat.

      I argue that we shall not lose sight of the reason for the war, of what we want to achieve and what the enemy wants to achieve. And I argue that while we should work on even better guns, we should also realize that most of our people have no gun at all but stones and possibly knives. I argue that we should care about everyone having a half-way decent handgun.

      And I don’t want something like bishops to declare from church thrones what the terminology (and in fact the universe) is. I want a profound understanding of terminology, too. I don’t want dogmas (“obscurity is not security”), I want profoundly and properly understood facts and a solid basis.

      We may turn it any way we want, but we can’t help seeing where our paradigm has led us. It led us to a situation where we do not urgently need even better crypto. What we need is reasonable – and reliable – implementation of “security” and safety for everyone.

      I’m not worried about crypto. We do have excellent and sufficient crypto thanks to some excellent cryptologists incl. our host here. We can – as far as crypto is concerned – protect our secrets, we have the necessary crypto with the exception of post-quantum scenarios.

      And we have plenty of experience that again and again demonstrates that (real world) attacks very, very rarely give us reason to doubt our crypto. What it does show, however, is that the implementation frighteningly often is attackable and attacked.

      “terminology, part 2”

      When people preach dogmas and create lousy software (incl. crypto implementations) while bluntly ignoring scientific facts that are observable, then we did something wrong. Two factors (of possibly more) that I can identify are a) an unhealthy distance between science (crypto) and everyday developers and b) terminology that seems to serve the experts well but that frightens Joe and Harry and makes them think that security is something to be consumed (because it’s too complicated anyway).

      How can I educate Joe and Harry, how can I make them learn more and apply better standards, when I’m not ready to talk their language? How can I succeed when, in sad fact, I myself have not understood what it is all about (obscurity)?

      Joe and Harry may have the good intention but reality shows that their servers still happily accept SSL2 and lousy or even dangerous algos. Obviously we have created a scenario in which Joe and Harry are not capable, no matter their good will, to create a reasonable config.

      “Obscurity is not security” is also elitism, and it’s a major obstacle to Joe and Harry understanding security. In the end they think security is “weird damn complicated math stuff” one better stays away from. So they buy certificates and run the (idiotically dangerous) standard config.

      I will not elaborate much on that because it would bore most and because I’m not allowed to. But I’m involved with security every day, and I was profoundly shocked when I had to realize that we do have excellent crypto – but not much more, other than major conceptual flaws (plus, of course, massive implementation problems). OpenSSH, to name a concrete example, is living proof of a lack of proper thinking. It was basically treated as just another internet service, although even a quick look would clearly show the very major differences and that SSL is a completely mindless approach for that purpose (while, in theory, it is well suited for web servers and the like).

      Apologies, it seems I’m annoying many here. I’ll try my best to become a nice friendly little link and sec. gossip drone.

      ab praeceptis August 25, 2016 4:29 PM

      r

      We can liken it to whatever we please and many analogies might even be good and matching ones.

      But in the end we must reach a situation where “3 million credentials stolen” (almost every other week) is the exception and not the rule.

      I doubt that we can reach that with bishops and sages. I think we will need blacksmiths and ordinary school teachers, too.

      Wael August 25, 2016 4:32 PM

      @ab praeceptis,

      Allow me an analogy to show my point:

      Can you show your point without the analogy?

      Nick P August 25, 2016 4:44 PM

      @ ab

      “Obviously we have created a scenario in which Joe and Harry are not capable, no matter their good will, to create a reasonable config.”

      You don’t. You use one of several alternatives:

      • Physical appliance that handles it all for them like HYDRA or Secure64. Advantage is this is on-premise and uses physical isolation.
      • Virtual appliance that does something similar with less isolation but easier deployment and updates in theory.
      • Cloud account where every aspect of their operation is managed by the provider in Platform-as-a-Service model.
      • IT or INFOSEC management outsourcing where you set up and maintain all their shit the way you want to, to make sure most of the risks are knocked out.

      What you don’t do is try to teach them these things. They shouldn’t have to learn in the first place. At worst, they should have to “download this bundle” then “say yes to automatic updates.” They’re not going to get better than that.

      ab praeceptis August 25, 2016 4:47 PM

      Wael

      Can you offer me any solid statistics that show aes-128 or even old blowfish (“not cutting edge crypto”) being the reason for even 3% of security incidents?

      As I mentioned my stupidity is quite limited. But I can offer “ugly knees” as a reason to dislike me. Thank you.

      ab praeceptis August 25, 2016 5:06 PM

      Nick P

      Physical appliance …

      Forget it. In quite some companies, yes. But otherwise, no way. Partly thanks to open source (“there are free solutions! Let’s use one of those”). And due to not understanding security.
      Besides, problems with them boxen aren’t exactly rare …

      Virtual appliance …

      Same issue, different color. Some advantages, some disadvantages but basically the same as the above.

      Cloud account …

      Thank you! Some humour is often helpful to rebalance the emotions in a discussion.

      IT or INFOSEC management outsourcing

      Will often not happen. Too expensive and also requires an understanding of the situation.

      They shouldn’t have to learn in the first place.

      I would love that. In fact, it’s something I mentioned as a requirement for software development. Well, at least as far as feasible.

      But see above. INFOSEC management outsourcing, for instance, requires (other than a budget which is not always available) some more understanding than blissful ignorance (or fear of “complicated math”).

      “obscurity is not security”

      Well, we tried that dogma extensively. Reality? Plenty enough security problems to fill a weekly gazette: “Weekly Security News. Today: how 8 million credentials were stolen. Plus: the up-to-date list of the 1000 vulnerabilities!”

      “1000 eyes”

      OpenSSL – need I say more? We failed to even have 4 eyes to look properly according to their own club rules. Maybe we should begin to ask for 500 knowledgeable brains behind those 1000 eyes …

      We may turn that any way we want but we will have to create some better understanding. A reasonable minimal basis at least. Plus a little more for admins and developers (“and management” I don’t dare to say).

      Clive Robinson August 25, 2016 5:30 PM

      @ Bruce,

      I think you might want to read this “investment report” (even though it’s from a notorious “short seller”),

      http://d.muddywatersresearch.com/wp-content/uploads/2016/08/MW_STJ_08252016.pdf

      Put simply, they claim to have identified a US manufacturer of Implantable Medical Electronics (IME), such as pacemakers, whose devices have significant and easily exploited attack vectors[1].

      The report indicates that nearly half the company’s income is derived from the manufacture of these IME devices. Further, based on the evidence they say they have to hand, the analysts think that there should be a product recall (Ouch!), and that class action litigation is more likely than not…

      The company concerned has, for obvious reasons, said that the report is not true/accurate[1].

      As you are probably aware I’ve been rather more concerned about the security of IME devices than of IoT devices. For the simple reason the risks presented by IoT are more likely to be in the PII/Privacy domains than in causing the onset of life threatening conditions that could easily kill you before First Responders could get to you.

      The sort of IME devices covered by the report have been fitted at the behest of medical insurance companies in the US for just about any cardiac anomaly, with the result that they are rapidly becoming an “everyday operation”…

      [1] https://www.google.co.uk/search?q=%22St+jude+medical%22+inc+devices+hacked

      r August 25, 2016 5:44 PM

      @Clive,

      I saw that, avoided linking it cuz I’m green – I was going to call it “Don’t have a heart attack”…

      BUT please see my previous post directed at Gerard about those companies adding “insult to injury” by labeling malfunctions not as device related deaths but malfunction and “injury”.

      It’s scary how cozy the FDA is with these companies now that it isn’t funded by US tax dollars; PBS just ran an exposé on these manufacturers.

      r August 25, 2016 5:53 PM

      @Clive,

      References

      https://www.schneier.com/blog/archives/2016/08/friday_squid_bl_540.html#c6732069
      https://www.schneier.com/blog/archives/2016/08/friday_squid_bl_540.html#c6732187

      The top link includes some of the current medical community bs but also some other sections of the USD cozying up, the last one is more specifically about kickbacks and prescriptions.

      Like somebody said on the “cops blow up sniper” thread, the medical community is hopefully next – consumer protections are out the window where profits are concerned.

      It stinks, right @Gerard?

      r August 25, 2016 5:57 PM

      @Clive,

      I forgot, it’s not just the heart IMEs. There are ones for spinal and nerve problems too that are killing people or can kill people with indignation. I think it partially has its links in the 90’s, when Congress unblocked the strict drug-testing requirements and allowed more early trials on the public.

      A couple accountants != accountability.

      Nick P August 25, 2016 6:19 PM

      @ ab

      Do note that I wasn’t involved in the conversation so much as responding to that specific claim. The first three on my list are collectively billions in revenues. Maybe tens of billions. I imagine they’re easier to sell than you’re indicating. The last one was supposed to be the comic relief. Cloud might actually improve the security of companies without IT budget or security teams. The joke was the environment around that solution rather than the solution itself. 😉

      Far as the other points, I believe you left off an @ r or @ Wael in there somewhere, depending on who made them. My belief on obfuscation’s value is well-known. As far as that crypto goes, Blowfish was recently hit in practice by the SWEET32 attack – a consequence of its small 64-bit block size and bad implementation choices, but not possible with some other algorithms. Additionally, you have to remember that the baseline is to make sure your crypto will survive for decades. At least a decade. That’s the statute of limitations for any legal attacks that might come at you. The military’s rule of thumb is 40 years until declassification, so any damaging information should by then be close to inconsequential.
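For context on why the 64-bit block size matters: SWEET32 is a birthday attack, and the standard birthday-bound approximation shows the difference between 64-bit and 128-bit blocks. A rough sketch (textbook estimate, not taken from the attack paper):

```python
import math

def collision_probability(n_blocks: int, block_bits: int) -> float:
    """Birthday-bound approximation: probability that at least two
    ciphertext blocks collide after encrypting n_blocks blocks in CBC mode.
    p ~= 1 - exp(-n^2 / 2^(b+1))."""
    x = (n_blocks ** 2) / (2.0 * 2.0 ** block_bits)
    return -math.expm1(-x)  # expm1 stays accurate even when x is tiny

n = 2 ** 32  # roughly 32 GiB of traffic at 8-byte blocks
p64 = collision_probability(n, 64)    # 64-bit blocks (Blowfish, 3DES)
p128 = collision_probability(n, 128)  # 128-bit blocks (AES)
```

With 64-bit blocks a collision is already likely (~39%) after a few tens of gigabytes, leaking XORs of plaintext blocks in CBC; with 128-bit blocks the same traffic gives a negligible probability.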

      So, it’s advisable to use the strongest stuff you can for your secrets unless you know for sure they can’t bite you in public eye, civil court, or criminal investigations in next 10 years. Also, that this is true even if LEO’s have, for whatever reason, misinformation leading them to suspect you of something and see everything you do in a negative light. Happens to a lot of people so I added it. The initial condition itself is so hard to guarantee that we have the Fourth and Fifth Amendments as a result.

      Dirk Praet August 25, 2016 6:30 PM

      @ Wael

      I wouldn’t mind trying it for testing.

      Where do I send it? Mention that fish we once argued about so I know it’s you and not someone trying to hijack the invite 😎

      Nick P August 25, 2016 6:30 PM

      @ r, Clive

      The errors in the radiation machines were among the scariest to me. Last I checked, the software was despicable, with beams going too wide and sometimes in the wrong direction. Both types, radiation and heart, can kill you with software defects. Radiation just does it slowly, painfully, and (considering family impact) cruelly.

      Nick P August 25, 2016 6:32 PM

      @ Dirk

      I’m sure one or both of you have access to a social networking site that might make that easier. 😉

      r August 25, 2016 6:42 PM

      @Nick P,

      Re-read 1-8.

      http://www.pbs.org/wnet/supremecourt/democracy/sources_document3.html

      I don’t think only 5 & 6 apply to such a concept.

      About the radiological machines, thank you for bringing that to light – I wasn’t aware.
      I was injured several years ago and I had to take pause to consult the doctor: they ordered 5 x-rays and 2 cat scans of my head. I was horrified at the amount of radiation that implied – they told me that if I was 18 they wouldn’t even dare consider dosages that high. :\

      r August 25, 2016 6:45 PM

      @Nick P,

      Good diversification link, thank you. The 2nd level in the ‘hackme’ link covers hardening practices too.

      Nick P August 25, 2016 6:51 PM

      @ r

      re links. You’re welcome. Glad to help. 🙂

      re Amendments. The idea is that in America anything you say or possess might be used against you in a current or future investigation. There could be any number of circumstances that could come up, with you or with whatever the cops think, that could make a specific thing look guilty. There’s no way you can consider all that in one moment. Hence, things being secret by default without probable cause. Explained in this great video that I encourage every American (or interested foreigner) to watch. It includes examples of how you get trapped even if innocent. A cop does a matching segment that begins by agreeing that everything the first speaker said was true, then shows how, with specific tactics.

      Dirk Praet August 25, 2016 6:59 PM

      @ Nick P

      I’m sure one or both of you have access to a social, networking site that might make that easier.

      I’m on Twitter and LinkedIn. Checked your mail yet ?

      ab praeceptis August 25, 2016 7:08 PM

      Nick P

      Up front: I do not know much about the system across the ocean, amendments and such.

      First, thank you. I highly value people who are capable and willing to change perspective from time to time.

      And perspectives might explain some differences. I assume you are right in thinking that billions are made with diverse “security solutions”, but to me (from my perspective) that means nothing and proves nothing (relevant). One might also mention the CAs, a pure snakeoil business; I assume they make billions, too.

      Bruce Schneier – and I liked that a lot – was among the first to differentiate and to talk about “security theater” (vs. security). It’s a powerful and true image.

      Blowfish was recently hit in practice by the SWEET32 attack. Bad implementation …

      Thanks for that confirmation. In other words: Blowfish is still fine but implementations are often problematic.

      As for your “long term” argument I agree to the degree possible (I’m not panicked by post-quantum but I take it seriously and see it as the single most potent long-term risk).

      The red thread that runs through all the problems, vulnerabilities, cracks, hacks, etc. is two things, one of which can to a large degree be reduced to the other: lousy software, and very basic and foundational problems. In other words: we thought hard and well about whatever we thought about (e.g. crypto algorithms), but we thought too little about the foundation of everything.
      In a way (hence my military analogy above) we behaved like officers who pondered many detail questions and did that quite well, but we forgot to ask the underlying basic questions, like why we are at war and what the purpose is.

      I agree with your “Forget about teaching them” – theoretically. Practically though we’re way too deep in trouble to afford that. We can’t just let all the attacks happen for a decade or two until we have developed the necessary basis (like languages that make it hard to create bugs and easy to use techniques to avoid them).

      I agree with you; that’s why I brought up (a way back) my library analogy. It’s about complexity shielding and abstraction. That’s the only way I see that is logically sound and not contradicting the way humans are. And it’s an excellent knowledge multiplier.
      A very few high-end professionals can put their knowledge into their layer, and all the layers above can build on that without knowing much about the details, risks and dangers.

      Unfortunately that’s also the hard part. To do that it’s, for instance, not sufficient to know math well enough to formulate and verify one algorithm. One must also find a way to make it easy and comfortable for others to make use of that.

      Concrete case: I know of no professional, supported and alive language that makes it easy to verify that any code matches the formal specification (and adds nothing). What I know of is e.g. ACSL, i.e. a more or less attached layer on top of the code that, frankly, is more of a good will gesture. Funnily, one that seems to work quite well, albeit based more on psychological reasons. To make it short, it seems that developers making the effort to use ACSL seem to be concerned enough to produce better code.

      Or seen from another angle: in the beginning, as youngsters, we took the compiler as an enemy; it was tormenting us and putting barriers in our way to success. It needed some ripening time to understand that a strict and rigorous compiler actually is our friend, hinting us at problems that could be ironed out cheaply now rather than dimensionally more expensively later.
      I’d like to suggest we add more to that, namely formal spec info to enable the compiler to check even more rigorously.

      BTW, I have escaped some crypto problems by diligently (and stubbornly) cleaning and tightening data types. Stuff like “uint16_t” rather than just “int”. Well, crypto guys are typically mathematicians and not programmers; that’s OK. But I’ve learned that I must not simply consume their work, but understand that for them code is often an ugly necessity. No problem; once that’s understood I can, with a friendly and grateful smile, work over their code and bring my part to the table.
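To illustrate the uint16_t-vs-int point with a hypothetical example (not from his code): crypto code full of shifts and rotates silently goes wrong when intermediate values widen past the intended word size. Even in Python, where integers are unbounded, the width has to be spelled out as a mask:

```python
def rotl16(x: int, r: int) -> int:
    """Rotate a 16-bit word left by r bits.
    The & 0xFFFF masks are the Python analogue of declaring uint16_t:
    drop the final mask and the value silently widens past 16 bits."""
    x &= 0xFFFF
    r %= 16
    return ((x << r) | (x >> (16 - r))) & 0xFFFF
```

For instance, rotl16(0x8001, 1) gives 0x0003; without the final mask the result would be 0x10003, and a bug like that can survive unnoticed until the ciphertext fails to interoperate with another implementation.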

      Which brings me to a final point: I can do that because I don’t hate math (I’m a pervert, I know *g*). Not that I’m particularly good at it, but I understand enough to have some common basic ground with the math crypto people. And I think that’s of high importance. I’m very worried about the many, many programmers our universities produce who dislike or even fear math and who, for whatever reason (comfort?), assume that the crypto people are both experts in math and in programming.
      So it comes up again: we *must* teach many people more, at least more solid basics.

      r August 25, 2016 7:30 PM

      @by the rules,

      “In a way (hence my military analogy above) we behaved like officers who pondered many detail questions and did that quite well, but we forgot to ask the underlying basic questions, like why we are at war and what the purpose is.”

      You’re not here to ask questions, grunt.

      Also, about the proof-ability – while I still have to check into common lisp – I thought that was pretty much directly verifiable? Or are you saying it’s not?

      Anyways, easy: you’re right – it’s not. I have to go through assembly byte by byte, comparing what came out of my printer (both the disassembly and the bytecode) to various reference sheets, calculating some hex on paper or a calculator. I assume that’s the type of verification that was done on punch cards, but certainly from a certain point all of that can bootstrap the next step; it’s up to us.

      r August 25, 2016 7:34 PM

      @by the rules,

      At which point, assembly being verified “infection/coercion” free can be re-entered onto a different system for production/release responsibly.

      Maybe not bug audited, but manipulation audited through paper and eyes.

      r August 25, 2016 7:42 PM

      Living out of a debugger/emulator can be a good thing too: your code doesn’t run until it nears completion – unlike the reverse, where you only hit compile and test after you’ve reached a point where you think it’ll work.

      Don August 25, 2016 7:44 PM

      @ Wael

      Thank you. Out of respect for Bruce’s tenets, which we have in writing –
      1. Be respectful, 2. Stay on topic – which of course we didn’t need to be told, but now it’s there, indelibly imprinted and seared on our eyes, flesh, brain and buttocks – I’ll end this thread with you by suggesting two books all readers on this blog will be expanded, stimulated, thrilled, engrossed, entertained and relieved by.
      FYI, I am not religious. And I have never founded a religion.

      Tales of the Dervishes by Idries Shah (oral teaching stories collected personally by Shah, some as old as 1000 years). @Clive, you would love it! 😉

      The Way of The Sufi by Idries Shah
      explains in great academic detail how so much of everything we know actually originates from the Sufis. Further – Sufism is WAY older than Islam and certainly doesn’t belong to it.

      Don August 25, 2016 7:46 PM

      @ALL @ Wael
      PS: the titles (and their contents) should be easily found online; I believe at least one of them is in the public domain now.

      ab praeceptis August 25, 2016 7:54 PM

      r

      Misunderstanding. While code verification is an important part it was not the issue I addressed. That was a) algo verification and, more importantly, b) verification that the code actually matches the algorithm.

      Some think that certain languages like setl or lisp provide that. I’m not content, however. Let me show you a simple example:

      Suppose you have an algorithm that takes as input, or produces as output (for whatever reason – don’t care, this is an example), any positive integer matching two criteria: it’s at most 100, and it’s divisible by 4. In (some of the better) tools it’s feasible and even simple to specify that.

      But what about the code? Most would probably say “well, use an assert()”. Nope. That’s a (rather poor) solution to a different problem, namely to “how can I check that a parameter or the return value matches certain criteria?”. My question, however, is how one can assure that the code is indeed a proper implementation of the given algorithm.

      Add to that that an assert is the wrong tool anyway for the “max. 100” limit. That should be expressible in the code itself; which e.g. in Ada or Pascal it is (as a range type), but in many languages it is not.

      Again, I think that the basic problem is that most languages try to answer the question “How can one create code for a CPU?”. The correct question to answer (or to ask at all in the first place) would be “How does one express an algorithm in a manner that is digestible both for a human and (after compiling) for a machine?” … and in extension “and how to do that in a way that makes sure that the language code does indeed fully and precisely express and match the algorithm?”

      Why? One answer: one of the famous Clay problems -> P vs NP. Or, in other words (and slightly differently), the Turing problem, i.e. that any non-trivial program cannot be checked for correctness (actually Turing’s statement is somewhat different, but this is the practically relevant version of it).

      Which boils down to: it seems we simply can’t check a program for correctness.

      What we can do, however, is check algorithms and, I conjecture, check whether a given implementation (code) matches the algorithm. (Reason: that’s an entirely different problem class, namely transformation correctness.)
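His max-100, divisible-by-4 example can at least be made concrete in a toy way (a Python sketch with hypothetical names; this is exhaustive checking, not formal verification): write the specification itself as an executable predicate, then compare the implementation against it over the whole domain – feasible here precisely because the spec bounds the range to 100.

```python
def spec_valid(x: int) -> bool:
    """Executable form of the specification: a positive integer,
    at most 100, divisible by 4."""
    return 0 < x <= 100 and x % 4 == 0

def implementation():
    """Candidate implementation that should produce exactly
    the values the specification admits."""
    return [4 * k for k in range(1, 26)]

def code_matches_spec() -> bool:
    """Exhaustively compare the code's output against the spec
    over the finite domain the spec itself bounds."""
    return set(implementation()) == {x for x in range(1, 101) if spec_valid(x)}
```

An assert() inside the implementation would only check individual values at run time; the comparison above checks the implementation as a whole against the stated algorithm, which is the different problem class being pointed at.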

      Nick P August 25, 2016 7:58 PM

      @ Dirk

      I got it. Now I need to get a Twitter feed up just because many mainstream INFOSEC types exclusively use it to communicate these days. Also, I recall Keybase used stuff like that last I looked at it.

      Don August 25, 2016 8:09 PM

      @ Nick P @ All

      thanks Nick for referencing the very good and very important video, I’d like to spell it out in more detail so no one misses it

      It’s called Don’t Talk To The Cops, it has millions of views. It’s a detective explaining all the ways you incriminate yourself and how the cops are trained to play on that.

      And how the law actually says “what you say cannot be used to help you”. So, if you think you’re helping by talking – you are not.
      Say NOTHING. The stats are something like 65 percent of convictions are based on confessions! Without which the cops would’ve had nothing!
      After you watch this film you’ll never look at a cop interviewing a suspect on TV or in a film the same way again – you’ll always be thinking “SHUT UP! Why are you telling them anything?! You’re doing their job for them!”

      It’s just as relevant for non-yankees as it is for yankees (Sepo’s)

      It’s about 45 minutes long but is virtually just talking heads so you can just rely on the audio if you’re too busy

      https://www.youtube.com/watch?v=6wXkI4t7nuc

      r August 25, 2016 8:30 PM

      @by the rules,

      You’re going to have to give me a little bit, I’m currently trying to validate your input.

      Wael August 25, 2016 8:44 PM

      @Don,

      I give the answers to the best of my knowledge, and if I find I said something inaccurate, I go back and fix it. […] I read some of Omar khaiam’s work (pen name: Saqi, if I remember.)

      I mixed up Omar Khayyam and Rumi. Rumi is the one with the pen name “Saki”. Should have checked… My memory isn’t as good as it once was.

      r August 25, 2016 8:55 PM

      @Wael,

      No worries, while accuracy and speed may have taken a hit I’m sure that quality and quantity have replaced them.

      Wael August 25, 2016 10:00 PM

      @Dirk Praet,

      Thanks, got it. Do I have to put a picture in my profile? I’m not as cute as our host here, you know…

      Figureitout August 26, 2016 12:45 AM

      Thoth
      –Looks like you gotta buy the specification (though with lots of chip companies you can get MCU datasheets for free).

      Looks like it’s serial comms on that I/O port: half-duplex, asynchronous, 8 data bits, 1 start bit, 1 parity bit, and a guard time at the end. There’s a sequence for activating the chip and deactivating it. But that’s for T=0? APDU is for the T=1 protocol, right? A serial data diode could potentially be set up for T=0, I think…

      Looks like it has a 2-byte CRC available, which I would use.
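For what it’s worth, the T=1 epilogue field in smart card comms is either a 1-byte LRC or a 2-byte CRC. A bitwise sketch of the common CRC-16/CCITT-FALSE variant (polynomial 0x1021, initial value 0xFFFF – whether a given card uses exactly this variant is an assumption; check the spec):

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE: polynomial 0x1021, MSB-first,
    initial value 0xFFFF, no final XOR and no bit reflection."""
    for byte in data:
        crc ^= byte << 8          # fold the next byte into the high bits
        for _ in range(8):
            if crc & 0x8000:      # top bit set: shift and apply polynomial
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

The standard check value for this variant is crc16_ccitt(b"123456789") == 0x29B1, which is a quick way to confirm you picked the same parameters as the card.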

      Anyway these comms get transformed by a USB chip in the card reader, correct?

      You can imagine it as your card reader being a browser
      –Yikes, that’s a huge attack surface usually.

      your card being a server
      –And those get hacked everyday.

      I have been pushing rather hard with their team to bring in a physical disable button
      Enable – I’m talking about an Enable button that has to be held down the entire time the smart card is in use. Meaning even an external attack (like remotely illuminating the NFC part of these smartcards – unlikely, but hey, it may exist) would have to have enough current at the chip to power it on. This is a hardware change. I know it works b/c I use it for a product. There must be a pretty quick boot time or this won’t work. Also, what happens if you lose power at some inopportune time?

      Also, always following the book can be wrong. I’ve found errors in basically every textbook, and in some datasheets (one of the errors got corrected at least). Always needs more checking.

      But yeah it’s good you’re pestering them, will keep them on their toes more. And the smartcards are not bad, just different.

      Here’s my source you may have posted before as I can’t go on much more: http://www.smartcard.co.uk/tutorials/sct-itsc.pdf

      Clive Robinson
      –Ok. I feel queasy using an RF receiver to get my entropy, even shielded.

      Dirk Praet
      –Ah yeah, sure, thanks mate. Let me get an email to send out; may take a day or 2. So you have to link up an account with it, right? Shall I use the magic word of a popular Belgian beverage that starts with a jay to authenticate?

      Dirk Praet August 26, 2016 1:06 AM

      @ Wael

      Do I have to put a picture in my profile?

      No, you don’t. It takes it automatically from the picture in your PGP public key.

      @ Nick P

      Now I need to get a Twitter feed up just because many mainstream INFOSEC types exclusively use it to communicate these days.

      That’s optional. Twitter is just one of the media you can use to confirm your identity with, just like Github, Reddit and Hacker News. I find the exchanges between prolific tweeters like @thegrugq, Matthew Green, Nicholas Weaver, Chris Soghoian, Vesselin Bontchev, Alec Muffett (to name just a few) usually quite informing. And the public feud between Jake Appelbaum (@ioerror) and Nadim Kobeissi (@kaepora) was nothing short of hilarious.

      @ Figureitout

      So you have to link up an account w/ it right? Shall I use the magic word of a popular belgium beverage that starts with a jay to authenticate?

      By all means.

      Clive Robinson August 26, 2016 1:10 AM

      I don’t know if this has already been posted but it sure made me smile,

      https://medium.com/@thegrugq/completely-wrong-a300246ad316#.v112fv666

      Spot the fact that whilst making his analysis of what a journalist has said wrong, he also makes major mistakes based on assumptions. Not the least of which is the assumption that the original (now known to be flawed analysis) is correct.

      Now don’t get me wrong, I’m not castigating either side for what they have assumed and said (but it does make good Edutainment). What it points out is that trying to make arguments about cyber attacks is kind of pointless.

      Thus attribution is pointless as well. Which might be OK if we were talking about entertainment only, but we are not. The US, amongst others, is talking about making what they view as Cyber-Attacks “the first act of war”, so they can then use it as an excuse to go kinetic with their own WMD…

      I just wish a few people would take that onboard and be more constrained with what they “claim is the truth” especially the MSM talking heads of CNN et al.

      ianf August 26, 2016 1:22 AM

      ADMINISTRIVIA @ rrrrrrrr,

      this is not the first time that you have issued a general warning(?), or advance elaboration(?) notifying “ALL,” me included, of some alleged je-ne-sais-quoi-impenetrability, or perhaps otherworldly logick behind my words. I realize that this must be how you experience them, and, in your otherwise commendable quest to be of service to others (so unlike e.g. Gweihir’s), you proceed to notify us of how my posts should be handled: with a magnifying glass and utmost care (=a poetic metaphor).

      Only, dear rrrrrrrr, have you for a moment asked yourself, whether your worries on behalf of others’ understanding might be unfounded? Or apply mostly to some hitherto little explored, innermost nook of your frontal lobe? (I wouldn’t like to be handled by backyard parts of your brain anyway).

        MOREOVER, though it is an off-topic for another occasion, do you somehow feel you have a mandate from others, the by-and-large silent readership, to act as their translator of—if not my words, then at least—my ALIEN logic, as it sounds to you? #FTR I rely on Aristotle, with some additions from Korzybski’s general semantics; both are well known and accepted in academia. Aristotelian logic is actually that used by the majority of people, the very same ones who go through life oblivious to the fact that they speak in prose.

      In any event, this being a friendly forum, could I lob a friendly cease-and-desist request in this respect in your direction (in plain English: you quit doing this—only I phrased it like that not to reuse some words that I previously used).

      Clive Robinson August 26, 2016 2:03 AM

      @ r,

      A couple accountants != accountability.

      I’ve seen more good engineering firms driven to the wall by accountants than I have by any other type of mismanagement.

      Look at it this way: accountants are like financial thermometers; they tell you the state of things now and can at best make limited predictions (going up / going down). If you were ill, you would read a thermometer, but you would not take medical advice from it outside of its very, very limited domain.

      Speaking of mismanagement and medicine, on the BBC Radio 4 news this morning… Apparently a bunch of accountants have to make the equivalent of ~100 billion USD in “efficiency savings” in the UK National Health Service. The statement basically was that they were going to make significant cuts to services to “improve patient services”… Cuts that have been proven time and again to kill people, especially among the “economically active”, which is a serious concern…

      Oh, every time you hear the expression “efficiency savings”, remember that what it really means is a high-cost project “to move the deckchairs on the Titanic”. The only winners in that game are those smart enough to be seniors making wonderful claims, then cutting and running before the chickens come home to roost. That way, if it fails, it’s the fault of those who took over the project, not that of the senior; if against all odds those left behind manage to make it work, then the senior claims it as a success for their good, insightful groundwork… Either way the senior wins and most others lose, frequently very badly.

      Just about every time a UK Government of the current incumbents’ political stripe gets in, this old chestnut comes up. Basically they think –due to lobbying– that the older US health care model will be better… Despite the fact that it costs more than twice that of the current NHS per patient, and the outcomes for more serious cases are considerably worse (i.e. the US insurance companies dump patients with chronic or recurring illness).

      The real problem with the UK NHS is the lack of funding in other areas of social care, giving rise to what is known as “bed blockers”. One significant cause of this is “care homes”, which find every opportunity they can to send those in their care to hospitals, as, surprise surprise, it saves the care home significant sums on staffing etc.

      And the underlying cause of this lack of spending in social care is twofold. The first is that those in care rarely get to vote, so they are not worth bribing (whilst other pensioners are). The second is the “virtualisation of companies” that has decimated the tax take…

      Nick P August 26, 2016 6:48 AM

      @ Dirk Praet

      Ok. Then I can do it sooner as I already have both HN and Github accounts. Far as Applebaum vs Nadim, is that recent? I’ll look it up if so.

      Thoth August 26, 2016 8:17 AM

      @Figureitout

      “Looks like it’s serial comms on that I/O port, it’s half-duplex, asynchronous 8 data bit, 1 start bit, 1 parity bit, and a guard time at the end. There’s a sequence for activating the chip and deactivating. But that’s for T=0? APDU is for T=1 protocol right? A serial data diode could potentially be setup for T=0 I think…”

      There are T=0 and T=1 formats for contact cards, and T=CL for contactless like NFC. Just like TCP/IP, there is a physical layer and a logical layer. On the physical layer it will be T=0/1/CL, and then on top of the T=* layer you have the APDU. The APDU is simply the logical representation of the data, and T=* is the physical transmission. In real life, a card reader deconstructs an APDU into its binary representation and then applies the T=* format to send the bits over physical wires or wirelessly, depending on which physical T=* protocol you select.
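      To make that logical layer concrete, here is a minimal sketch (Python, with a hypothetical AID) of the command-APDU byte layout — CLA, INS, P1, P2, plus an optional Lc and data field — that the reader then frames into T=0 or T=1 transport:

```python
def build_apdu(cla: int, ins: int, p1: int, p2: int, data: bytes = b"") -> bytes:
    """Serialize a command APDU: CLA INS P1 P2 [Lc data]."""
    apdu = bytes([cla, ins, p1, p2])      # 4-byte header
    if data:
        apdu += bytes([len(data)]) + data  # Lc = length of command data
    return apdu

# SELECT (INS=0xA4) an applet by AID (hypothetical AID bytes)
aid = bytes.fromhex("A000000003000000")
apdu = build_apdu(0x00, 0xA4, 0x04, 0x00, aid)
print(apdu.hex().upper())  # 00A4040008A000000003000000
```

      The reader’s processor chip takes those bytes and handles the T=* framing (character timing and parity on T=0, block framing on T=1) transparently, which is why software only ever sees the APDU.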

      “Anyway these comms get transformed by a USB chip in the card reader, correct?”

      Yes indeed. The card reader must have a processor chip to understand APDU and T=* protocols. That’s why I mentioned that you have to take the card reader into suspicion when investigating as well.

      For GlobalPlatform-compliant cards (i.e. those cards with management capabilities according to GP standards), the specs describe using security domains and VMs (i.e. JavaCard, which was the base for the GP standard on smartcards). Of course the debate is how robust the VM and the OS are, and I am very interested to find out 🙂 .

      “Enable, I’m talking an Enable button that has to be held down entire time of using smart card. Meaning even an external attack (like remotely illuminating the NFC part of these smartcards, unlikely but hey, may exist) would have to have enough current at the chip to power it on. This is a hardware change. I know it works b/c I use that for a product. There must be a pretty quick boot time or this won’t work. Also, what happens if you lose power at some inopportune time?”

      I would say a physical “slider” sort of switch, where you slide to switch on and off, to break the circuit directly to the Bluetooth and NFC setups inside the device. Instead of needing to press a button all the time to enable NFC or Bluetooth, you could slide the switch and it would either break the circuit or connect it, and you don’t have to worry about accidentally lifting your finger and breaking the circuit when you are using the NFC or Bluetooth signaling system.

      “But yeah it’s good you’re pestering them, will keep them on their toes more”

      I don’t simply keep them on their toes. I dare say I have already supplied them with a good amount of design and security ideas. Some ideas they find to be overkill as well 😀 . I understand they consider some of my ideas overkill and are unlikely to move in those directions, but I won’t let up my pressure, since I personally see their products as worth my time, resources and effort to help and advise them personally.

      Especially in the “department” of the touchscreen for the Ledger Blue devices, I have proposed to them my ideas for secure input and display, which include encrypting data transferred between displaying MCUs and processing MCUs, memory encryption, data flows and sequencing, data-flow protection and all that, over the course of many emails.

      More smart card documents are linked below for the ISO-7816 standards, to help you create your mental picture of how the physical and logical protocols work together.

      Link:
      http://www.cardwerk.com/smartcards/smartcard_standard_ISO7816.aspx
      http://www.win.tue.nl/pinpasjc/docs/GPCardSpec_v2.2.pdf
      http://www.win.tue.nl/pinpasjc/docs/

      Dirk Praet August 26, 2016 10:53 AM

      @ Nick P

      Far as Applebaum vs Nadim, is that recent? I’ll look it up if so.

      More than a year ago. At some point, Nadim had totally had it with Jake’s cocky attitude, which rapidly devolved into a flame war with both accusing each other of douchebaggery and writing crappy code.

      Figureitout August 27, 2016 3:22 PM

      Thoth
      –Ultimately the physical layer matters most right?

      That’s why I mentioned that you have to take the card reader into suspicion when investigating as well.
      –How would you do that though?

      I would say a physical “slider” sort of switch
      –Any kind of switch would work. It’d have to be connected directly to the Vcc pin of the smartcard; I think it’d be better suited on a smartcard reader lying flat on a surface, with a USB extender cord so there’s no tension on the connections. It’s a very simple yet strong hardware change for security. A lot of people wouldn’t want it, but it’d be nice to have that choice.

      RE: encrypting between displays
      –Just shifts the problem; unencrypted info would still be going to the chips doing the encrypting. A homomorphic encryption solution is needed, but I don’t have a clue how to implement that.
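      For what it’s worth, the core idea can be illustrated with textbook RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product, so computation happens without decrypting. This is a toy, deliberately insecure sketch with tiny parameters, not a usable scheme:

```python
# Toy illustration (NOT secure): textbook RSA satisfies
# Enc(a) * Enc(b) mod n == Enc(a*b mod n), the kind of property
# that fully homomorphic schemes generalize to arbitrary computation.
p, q = 61, 53
n = p * q                            # modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 9
c = (enc(a) * enc(b)) % n            # multiply ciphertexts only
assert dec(c) == (a * b) % n         # decrypts to the product: 63
```

      Real homomorphic schemes (Paillier for addition, BGV/CKKS for general circuits) build on the same principle but with proper padding and noise management, which is exactly where the implementation difficulty lives.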

      Dirk Praet
      –Check your “god of the sky” email, and your key id BA8E1E8C, and “authenticated” subject line. I attached my pub key as a text file, hope that’s easy enough to import. Thanks again mate.

      r August 28, 2016 7:33 PM

      @Figureitout,

      Did you miss the short-hash is a no-no article?

      https://news.ycombinator.com/item?id=12296974
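      For readers who missed that article: an OpenPGP v4 key ID is just the tail of the SHA-1 fingerprint, so the 8-hex-digit “short” ID is only 32 bits and can be brute-force collided. A rough sketch (real fingerprints hash a specific key-packet serialization, 0x99 plus length plus key packet; this simplifies to arbitrary bytes):

```python
import hashlib

def fingerprint_and_ids(key_material: bytes):
    """Derive a fingerprint and its long (64-bit) and short (32-bit) key IDs.

    Simplified for illustration: only the truncation step matches the
    real OpenPGP v4 derivation.
    """
    fpr = hashlib.sha1(key_material).hexdigest().upper()  # 160-bit fingerprint
    long_id = fpr[-16:]   # low 64 bits
    short_id = fpr[-8:]   # low 32 bits: only ~4 billion possibilities
    return fpr, long_id, short_id
```

      Because the short ID is 32 bits, generating keys until one matches a chosen target takes only billions of tries on commodity hardware, which is why full fingerprints (or at least long IDs) are the recommended way to identify keys.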

      Linus and Kernel.org are the targets discovered thus far; how do you know those hactors aren’t after anyone here? There’d certainly be less chance of detection, right?

      Take ianf for example, who won’t even meet an old man in a public place… What does hen have to hide? 😛

      Figureitout August 29, 2016 12:26 AM

      r
      –No, didn’t miss it. I sent it w/ the actual pub key; didn’t feel like typing out the fingerprint or mucking up the thread w/ the pub key lol.

      And trust me I’ve been thru much worse attacks. It’s not the end of the world if I can’t get to Dirk securely via our current solutions.

      Figureitout August 29, 2016 12:59 AM

      r
      –Sorry, just fixed the dumbest bug[s] I’ve ever written (couldn’t sleep/do other things until I found it), and it just happened to be in my pet project :(. It’s fixed now though; I simply deleted the wrong code. Lesson learned: don’t rush and release.

      r August 30, 2016 9:00 PM

      @Figureitout,

      Blame the bug on a) God making you do it, or b) God doing it. Either way, chalk it up as a lesson learned before it ultimately ends in a hard fail.

      Do eat again. 🙂
