Friday Squid Blogging: Peruvian Squid Fishermen Are Trying to Diversify

Squid catch is down, so fishermen are trying to sell more processed product.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on November 18, 2016 at 4:10 PM • 198 Comments

Comments

Ted November 18, 2016 4:41 PM

IBM opened a new security headquarters in Cambridge on Wednesday. The facility will house a network security testing environment called X-Force Command, the first commercial cyber range of its kind. According to several recent articles, security personnel, C-suite executives, and others will be able to experience a simulated breach and all that it entails in order to better understand their level of preparedness and adjust accordingly. IBM will also be able to help companies manage a real breach once it is discovered.

https://techcrunch.com/2016/11/16/ibm-opens-new-cambridge-ma-security-headquarters-with-massive-cyber-range/

Republic Rat November 18, 2016 4:43 PM

In political news….

The U.K. finally passes its “snoopers’ charter”

http://www.zdnet.com/article/snoopers-charter-expansive-new-spying-powers-becomes-law/

and President-elect Trump nominates Jeff Sessions to be the chief law enforcement officer…

http://www.nytimes.com/2016/11/19/us/politics/jeff-sessions-donald-trump-attorney-general.html?_r=0

I have no sense of how Sessions is on privacy issues; immigration seems to be his thing. However, I expect a Trump administration to be hostile to privacy interests generally.

Secrets of TPP Finally Revealed November 18, 2016 8:17 PM

We now know why the Trans-Pacific Partnership (TPP) was kept top-secret by the Obama administration! It also explains why Google’s Eric Schmidt camped out at the White House for years.
Hordes of American technology and media companies supported the trade deal.
Google was pro-TPP, as were Microsoft, Apple, and Facebook. The Motion Picture Association of America supported it too.
1) It barred governments from blocking how companies share data across national borders.

2) It also banned “forced localization,” or laws that require a company to keep its citizens’ user data stored within its borders.

Of course Russia and China (Brazil? India?) insist on local storage for national security reasons, and Europe for privacy reasons.
Another loser is the NSA, which won’t have access to anywhere near the quantity of foreign data. With this reduced scope, Trump should be able to trim the huge black budgets for spying increasingly turned upon innocent American citizens.
Would the USA’s National Security benefit from local data storage?

American hi-tech should respect the laws of sovereign governments and their citizens. They simply cannot be allowed to forever push the eavesdropping envelope, as it degrades society in countless ways: dumbing down young people, distracted drivers, poor social skills, lack of critical thinking, gullibility to believe anything posted, blurring of news into advertising, surrendering control of their lives, divorce & breakdown of the family. This is the wrong track, allowing robots to control humans. Can we save humanity?

http://www.recode.net/2016/11/18/13669196/google-facebook-apple-trump-win-killed-trans-pacific-partnership-tpp

Ratio November 18, 2016 10:12 PM

@Republic Rat,

The U.K. finally passes its “snoopers’ charter”

The Investigatory Powers Bill (a.k.a. Snooper’s Charter) was always popular with the Tories, so for it not to become law somebody had to actually oppose it.

Realistically, that somebody would have to include Labour, otherwise it’s game over numerically. But Labour first abstains (fine, some crowd-funding can get them the spine they so desperately need) and then votes for the Bill (forget about that spine, they’re going to need a moral compass instead).

But, hey, at least the two guys and their intern from the Lib Dems still seem to care about civil liberties.

Clive Robinson November 19, 2016 12:06 AM

@ r,

Eluding, hiding from? Eliding?

The meaning of “eliding” varies from dictionary to dictionary.

Some take the narrow view of purposefully omitting –thus hiding– a sound or syllable when speaking, or of joining two parts of a “word” together. Other dictionaries[1] include a more general meaning of purposeful omission from other objects such as lists.

Thus the more general case of “eliding” covers the function of hiding the malware presence by “purposeful omission” in both ps and top.

[1] http://www.thefreedictionary.com/eliding

Clive Robinson November 19, 2016 3:33 AM

@ Wesley Parish,

[T]he usual suspects, any comments on this?

I started reading it, then I got to,

    In order to hack this platform a cyber-baddie would need to break the digital signature, which – any time before the introduction of quantum computers – would be exorbitantly expensive.

And thought, “there is someone who, even after Stuxnet, still just does not get it”.

Placing a contractor, turning an employee, or a black-bag job, and the digital signature is out the door.

Likewise a little illicit code injection before the code gets signed, etc…

Ho hum…

Wael November 19, 2016 4:12 AM

@Clive Robinson,

Yes, I know this used to be called “BYOD” and was supposed to be a good thing but apparently it is now bad again. It turns out that bosses have now decided that forcing their employees to buy their own kit and plug it straight into their steaming cesspit of IT insecurity might risk compromising their non-existent data protection safeguards. Who knew, eh?

Lol 🙂

Clive Robinson November 19, 2016 4:42 AM

@ Wael,

Did it improve your BadFri (TM, copyleft etc etc) or your OMG-SatMorn?

Back “when I wore a younger man’s clothes” [4] we used to call Friday POETS day [1] even for the RHINO [2] and TIM [3] types.

[1] Push Off Early, Tomorrow’s Saturday.

[2] Really Here In Name Only.

[3] The Invisible Man.

[4] From the song “Piano Man”.

Thoth November 19, 2016 5:38 AM

@Clive Robinson, Wesley Parish

Has anyone noticed the obvious huge fan holes in that Layer 3 “Snake Oil” box, which make security probing and disturbing of the hardware much easier? Maybe they are not interested in FIPS 140-2, but such huge holes, and a total of 4 fans with big holes, would immediately fail any FIPS certification right off without even needing to sift through the finer details. This kind of quality is called “designed for networks with extreme requirements for data security”? Does anyone realize that network switches also have their own profile for FIPS 140, and that most of them have a huge No-No regarding such big fan holes in “secure boxes” in FIPS’s eyes?

Ok, maybe they are not interested in FIPS, or maybe some of our readers (including myself) think that FIPS and CC are not the best indicators of security levels, as I and a few others have pointed out, but those huge glaring holes are already yummy for hardware tampering. Imagine: you can slide a metallic probe into the gaps of the fan holes and then glitch the bus lines 😀 .

Yup … secure microkernel with separation and all that jazz, and then a simple bus tap would be able to forge the messages and bring it down. Nice work – whoever did the secure box design 🙂 .

Oh, and we haven’t even gotten started on analyzing the chips themselves, seeing if the ROM, FLASH, (S/D)RAM can be dumped out, and the JTAG … or maybe they are really betting their cash and reputation only on the software side to beat “The Game”?

This box is a joke in itself, because I can already start nitpicking, and considering that the boxes are built for “extreme requirements for data security”, they never actually thought of covering everything from the hardware, to the casing, to the software, to all the usual corners a secure hardware design is expected to face at the very least.

I don’t see any of the epoxy coating, metal tin can and tamper mesh of most secure boxes. Honestly, ORWL does so much better considering it is a Kickstarter project, though personally I feel ORWL’s tamper mesh has a problem in that the gaps are a little too big, but it’s better than nothing.

Hey Eugene Kaspersky, why don’t you fund the ORWL project, purchase in bulk (which would also help the ORWL team via funding), use that as a proper hardware foundation, and then run an Intel version of Kaspersky OS in a modified ORWL box? That would be much, much better.

Hoo Mee November 19, 2016 6:57 AM

Re: Secrets of TPP

  1. Trump may be quickly brainwashed by the extraordinarily powerful corporate-military defense lobby. I am sure they are working on him at this minute. In short, don’t be surprised if Trump is the one to sign TPP. Soon.
  2. Even if Trump doesn’t, time is on the side of our globalist rulers. In four years Trump may well be gone, and they can send in one of their operatives, like Hillary for example, to make sure TPP passes.
  3. NSA/Five Eyes cyber access and budget cuts by Trump regime? I doubt it. It’s like a game, and they play to win every time. And, they make the rules.

Basically, TPP is the corporate globalist rule book carving up the world markets to make it easier for them to fleece the sheep. Think: NAFTA on steroids. Baa!

Clive Robinson November 19, 2016 7:20 AM

@ Wesley Parish, Thoth and the usual suspects,

After a little more trawling about you get to an earlier blog post from Eugene Kaspersky[1] that shows a snippet of C-like code, quite a way down says “There are really just two methods”, and further down says of their OS,

    This is the important bit: the impossibility of executing third-party code, or of breaking into the system or running unauthorized applications on our OS; and this is both provable and testable.

Hmm the likes of Google with Android spring to mind as well as quite a few DRM systems, anyone else remember Microsoft’s X-Box getting owned or other set-top / games boxes?

The problem with that comment is,

    … this is both provable and testable.

Turn it around and you realise that to be provable it needs to be testable. But more importantly, your tests have to cover 100% of the possibilities to be actually “provably secure”, and as far as we know –the unknown unknowns issue– that’s not possible beyond very, very simple systems of logic, and systems where every possible state and transition is fully specified (Kurt Gödel proved that point, as did Turing and Church).

We’ve come across this problem before with security and crypto proofs. They carefully tailor a small subset of tests and what they cover, then prove those fractions of the use cases. The classic example of this is “proving an algorithm but not an implementation”.

As I’ve said before, you need to remember the issue of “Security -v- Efficiency” and except under special very limited circumstances you can have one or the other but not both…

The NSA used this “prove the security of the algorithm but not the implementation” trick during the NIST AES competition. The algorithms were examined in detail for flaws, but the accompanying freely downloadable implementations were designed for speed or efficiency on a given hardware platform, NOT security.

The result was the opening up of various implementation side channels, typically time or power based, that leaked key material across the network or freespace…
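
To make that concrete, here is a minimal sketch (mine, purely illustrative, and not taken from any of the AES reference implementations) of the simplest member of that class of leak, a secret-dependent early exit, as opposed to the cache-timing issues in table-driven AES itself:

    #include <stddef.h>
    #include <stdint.h>

    /* Leaky: returns as soon as a byte differs, so the running time tells
       an attacker how many leading bytes of the secret they guessed right. */
    int compare_leaky(const uint8_t *secret, const uint8_t *guess, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (secret[i] != guess[i])
                return 0;
        }
        return 1;
    }

    /* Constant-time: always touches every byte and has no secret-dependent
       branches, so the timing carries no information about the secret. */
    int compare_ct(const uint8_t *secret, const uint8_t *guess, size_t n)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= (uint8_t)(secret[i] ^ guess[i]);
        return diff == 0;
    }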

NIST’s technical advisors for the AES competition were the NSA, who had the most to lose from a secure crypto system becoming standard. The NSA must have known beyond all doubt that optimising for speed/efficiency would have exactly this side-channel-opening effect in implementations.

The result, as we know, was that the speed/efficiency code was downloaded and put directly into code libraries, which in turn became part of many, many applications. Some of these are embedded in infrastructure systems with an expected service life of more than a quarter of a century, with quite a few still out there, and unfortunately some still being built with those insecure libraries…

[1] https://eugene.kaspersky.com/2012/10/16/kl-developing-its-own-operating-system-we-confirm-the-rumors-and-end-the-speculation/

ab praeceptis November 19, 2016 7:20 AM

Nick P, Clive, Thoth (et al.)

I did a little digging on Kaspersky OS.
The Russian Wikipedia article didn’t tell a lot but contained hints that put me on the trail.

They are partnering with SYSGO, the guys who commercialized the Fiasco microkernel (and the Kaspersky thingy just so happens to support exactly the same architectures). They are talking about verifiability and about verifiable code.
And they have learned some tricks from some others, like, for instance, Minix 3.

All in all we’re talking, it seems, about yet another L4 based approach (to avoid using the term “copy”), possibly based on seL4 with some resilience à la Minix 3 thrown in and some hash or signature magic (I personally tend to think it’s hashes).

I’m somewhat split. For a start it’s certainly not a secure OS (unless compared to Ubuntu), but that’s not my issue. What I mean is that it’s not secure as, for instance, Nick P. or Clive or I would define it. But I think that a considerably safer OS than the common widespread ones (like Linux 2.6 in plastic boxes) actually is quite some progress and damn good enough for many low- to mid-level scenarios. So that’s not my issue.

Looking at kaspersky os from that angle, yes it certainly has some value and can make bazillions of infrastructure boxes much more secure than the usual linux crapware.

What strongly disturbs me, though, is that from what I see so far kaspersky OS is just a “get/buy some blocks and then use your big name and marketing power” product. Frankly, I take kaspersky os to be more of a quick $ marketing hype rip-off than a serious contender in its field.

I’m under the impression that the driving force in Kaspersky’s thinking was not security but that he had that marketing idea, and as soon as all the major blocks were available (seL4, Minix 3, some “code verifiers”) he just mangled those blocks together, added some magic sauce – based on consumer-impressing buzzwords, not on real security – like “signatures” that can only be broken by Q (so he says), and started his sales tour.

From what very little code I saw, unlike Joe or Jane, I’m not at all impressed. First, it’s C (and quite probably C++ on some higher layers). That’s a very strong counter-indication. Next, and more importantly, I saw only some typedefs – but no functions.
I may be wrong, but to me that tells me something: that he is avoiding actually showing anything that would allow at least a guesstimate of their real approach; they are merely targeting Joe and Jane and brain-dead, golden-sticker-driven middle management.

Why am I saying that? Because with a function I could see ACSL or at least Deputy/Ivy annotations (if they existed). With typedefs I can’t. All I could see is that his coders seem to be disciplined. So what, that’s non-news in Russia.

But he talks about “verified” and “verifiable”. My take: marketing blabla.

His “verification” is – at best – running his stuff through clang’s static analyzer (which is but a funny, modestly useful toy as of now). That’s laudable for the average C developer drone working on, say, some GUI code, but in the context of verifiable security it’s ridiculous.
There is a very simple law: if you use C/C++ (which you should not do in areas like a secure OS) then you damn well use ACSL/Frama-C or separation logic/VeriFast (which is much harder to do). Everything else is not acceptable. Simple as that.

Well noted, I’m not in any way anti-Kaspersky. From what I know they make some good products (well, as far as AV can be a good product), but this Kaspersky OS is IMO little more than marketing hype. Kaspersky certainly knows how to make a ton of money, but I see no reason to believe he also knows how to create a safe and secure operating system.

Sorry. 4 out of 10 is the best I can offer for that attempt.

JG4 November 19, 2016 8:35 AM

been busy, or you’d hear from me more often

my top choice for head of EPA

Pollution kills more people every year
https://www.youtube.com/watch?v=qOMLEJN7dg4

Clean air is a form of security denied to people around the world. I have probably said before that it is long past time to stop burning coal and instead use it as electrode material for grid-scale sodium-ion batteries.

Wael November 19, 2016 10:26 AM

@Clive Robinson,

Did it improve your BadFri (TM, copyleft etc etc) or your OMG-SatMorn?

It definitely did! Thank you 🙂

Matthew November 19, 2016 11:46 AM

@Thoth

Regarding the ORWL computer, one feature that interests me is that when the user walks away with the electronic key, the computer locks itself and disables the USB ports.

Firstly, I wonder how easy it is to implement this with an Android watch for Windows and Linux boxes. An app is needed to pair the watch with the computer.
Apple seems to be working on a similar feature for their watches and Macs.

Secondly, how does ORWL disable the USB ports when the computer locks? Can we do likewise for Windows and other OSes? Maybe Microsoft, Apple, and Linux could make this the default setting to defend against the recent PoisonTap attack.

FrUgl November 19, 2016 11:56 AM

External-facing system interfaces can be accessed remotely. Subnets and protocols are little understood, and mistakes involving common systems are already frequent. Throw a few obscure technical areas in and you have a lot of room for errors and unknown exploits. Leaving systems and databases exposed to the web is already common, as are known weak passwords and out-of-date, vulnerable, unpatched software and operating systems.

Even hardware signing then has a problem, as there is no way of telling where the data actually originated or who is in control of the system. The hardware may be signed, but it could be communicating from an operating system that is already compromised.

Automation steadily increases the risks, and the fast development pace of connected appliances provides a ready supply of devices to be manipulated. Compromised networks of internet-capable devices running botnets can be remotely controlled and used to profile systems running infrastructure for weaknesses to exploit.

Clive Robinson November 19, 2016 12:26 PM

@ Wael,

You have an interest in various things involving physics.

You might find this of considerable interest,

http://arc.aiaa.org/doi/10.2514/1.B36120

Especially section 10, “Discussion”, which brings up the issue of “Pilot Waves” underlying quantum effects. If the Pilot Wave conjecture is found to be correct –and evidence is mounting– it might suggest a reason why QC is not really moving forward.

Clive Robinson November 19, 2016 1:13 PM

Two women computer pioneers get Presidential Medal of Freedom.

Grace Hopper and Margaret Hamilton are to be awarded the Presidential Medal of Freedom on Tuesday for their outstanding services to computing advances.

They also both coined terms so commonly used they are in the normal everyday lexicon: Grace Hopper coined “debugging” and Margaret Hamilton “software engineering”.

Grace Hopper was also known for carrying lengths of wire (~1 ft long) and giving them to people so they could see what a nanosecond was in terms of the distance light travels. She is also reputed to have given out grains of sharp sand as picoseconds.
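
The arithmetic behind the props, for anyone who wants to check it:

    $1\,\mathrm{ns} \times c \approx 10^{-9}\,\mathrm{s} \times 3\times10^{8}\,\mathrm{m/s} = 0.3\,\mathrm{m} \approx 11.8\,\mathrm{in}$
    $1\,\mathrm{ps} \times c \approx 0.3\,\mathrm{mm}$ (roughly grain-of-sand sized)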

Margaret Hamilton joins another NASA mathematician, Katherine Johnson, who was awarded the medal last year and who also made fundamental contributions to computing.

https://techcrunch.com/2016/11/17/grace-hopper-and-margaret-hamilton-awarded-presidential-medal-of-freedom-for-computing-advances/

Oh and another pair of recipients will be Melinda and William Gates for Philanthropy, due to the work of their foundation and other activities.

Nick P November 19, 2016 2:05 PM

@ ab praeceptis

Well, that’s kind of good news. SYSGO was one of the companies I used to post about when discussing separation kernels. They’re an also-ran in that department with what looks like a good product: PikeOS w/ user-mode virtualization of common stuff. It had less assurance than INTEGRITY-178B or VxWorks MILS, with fewer features than LynxSecure. The market went with OKL4 in mobile. PikeOS was part of European verification efforts, though, with at least a prototype in secure phones. Not sure what deployment it had, since it was already too low on the tiers for me to study.

It’s worth looking at their page. As I fix NoScript rendering, I see the bottom first with this line:

“See how we work with T-Systems, Airbus, Thales and others in the EURO-MILS project to establish PikeOS as a European security platform for embedded systems certified up to level EAL5+. ”

Two of its competitors say DO-178B Level A and EAL6+ w/ High Robustness extras. SYSGO setting their sights high, eh? 😉 Can’t rag them too much as it’s still useful as a stronger baseline if the product requires development pace too rapid for high assurance or similarly for complex features. Even Turaya & GenodeOS would probably go for EAL5+ which I know one firm was working on for Turaya.

Let’s look at the page again ignoring that angle. 🙂 Well, it’s mostly just same old same old that it was when I started promoting these things. They’ve expanded it to multicore, added some new VM’s, and put more work into graphics. Same as competition. Then I see Euro-MILS effort aiming at EAL5 target. That’s friggin embarrassing for Europe given talent there. That questionable French evaluator certified PikeOS in some private way. The French agency certified Mandrake Linux at EAL5 one time. No mainstream Linux distribution is EAL5 by the covert channel requirement alone. Lol. Whereas, the French engineers are also playing catchup to U.S. and Australian microkernels with their ProvenCore from ProveAndRun. Looks a bit like the Microsoft VCC and SPARK approaches. That’s what they should start evaluating.

What of the code verification for PikeOS? They’re just using VCC like in the Verisoft project. Still at it. Meanwhile, Microsoft did VerveOS, NICTA did seL4, and FLINT at Yale did mCertiKOS. I can say with confidence that whatever Kaspersky and SYSGO come up with will be less secure than what they could currently buy or build on.

Bong-Smoking Primitive Monkey-Brained Spook November 19, 2016 2:21 PM

@Nick P,

Makes complete sense. That’s the reason some women think a nanosecond is 12″ long 😉

ab praeceptis November 19, 2016 2:59 PM

Nick P.

Maybe I got you wrong, but if you are really smirking at ProvenCore, you are mistaken, I think.

For a start, they don’t play with EAL 5, they are going for 7.

I’m not surprised, btw. Neither by their approach nor by the somewhat typically French way of building strongly on math. The former looks reasonable; they seem basically to build along the line of “Minix 3 but better and with a strong security focus” (which is a quite reasonable basis). As for the latter, oh well, Leroy and some others have created something serious there, with one of the side effects being that they are a little playful with formal tools. Reminds me somewhat of the old days when every new system also got its own language. Now, with the French, every other project seems to get its own formal toolset *g
The French have managed to always keep the math skills of their students (at least at their better universities) sharp, and if they now reap the benefits of that wisdom I can easily look past some waste and toying.

That Kaspersky thingy is on a quite different (and dimensionally lower) level. Note, in particular, that Kaspersky seems to carefully avoid using the word “formal” when marketing-droiding about “verified”.

Btw, I personally couldn’t care less about EAL, Integrity and whatnot. I want to see good engineering and proofs, and the French, and to a degree the Swiss and some others (funnily, ones one wouldn’t think of), deliver (e.g. Muen). Nice to see.
On a sidenote, I’m bewildered how the Germans, who had strong cards both with, for instance, L3 and L4 (or Alice) and with a rich and strong industrial base, could get so weak. Once the leaders, they have fallen behind nowadays.

On another sidenote, I perceive the situation as the French going for know-how and real solutions while the Germans seem to go for political and economic muscle (and hence for yet another fiasco (haha)). I found PikeOS interesting but in no way leading. But with enough market muscle the Germans might succeed in force-feeding it as the “European security solution”.

Clive Robinson November 19, 2016 3:48 PM

What, or whom, is “freespace” free from..?

To do the “unmentionable” and quote Wikipedia…

    In electrical engineering, free-space means air (as opposed to a material, transmission line, fiber-optic cable, etc.)

It is a medium in which EM and acoustic signals radiate sufficiently well to be picked up at a moderate distance[1], a process often called “freespace radiation” by communications engineers.

Thus any alternating signal can transport energy outwards from an electrical / electromechanical / electronic device to a listening device, and it may well have confidential information impressed upon it by direct or cross modulation…

It’s one of the reasons “air-gap” really is an outmoded name, and why I now tend to use “energy-gap” instead.

One advantage of freespace radiation is that it tends to be reasonably predictable. Thus you can fairly easily work out when a signal will be weaker than the “noise floor”. The same is not true for signals “conducted” down transmission lines[2], because they have wildly varying rates of attenuation and frequency response, as well as coupling factors.
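
As a rough illustration of that predictability, the textbook free-space path loss formula (a standard result, nothing specific to any particular kit):

    $\mathrm{FSPL}(\mathrm{dB}) = 20\log_{10}(d) + 20\log_{10}(f) + 20\log_{10}\!\left(\tfrac{4\pi}{c}\right) \approx 20\log_{10}(d\,[\mathrm{m}]) + 20\log_{10}(f\,[\mathrm{Hz}]) - 147.55\,\mathrm{dB}$

Received power falls off as the square of distance, so estimating the range at which a radiated signal drops below the noise floor is straightforward. There is no comparably simple law for conducted paths.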

[1] The mode of energy transportation through air is by “radiation” not “conduction”[2].

[2] However, when it comes to leaking information, conduction along a continuous transmission line can take the energy considerably further than radiation through air. An example of this is a distant moving train: when you are standing next to the rails –which act as lossy transmission lines– you will hear the rails “sing” long before you hear the noise of the train through the air.

albert November 19, 2016 4:14 PM

@Clive,
Kudos to Hopper and Hamilton. They did something significant and useful to society.

The Gates Foundation is a total joke. Don’t even get me started on that.

Then again, didn’t Obama get a Nobel Peace Prize? Rumor has it that the Nobel Committee took so much shit for that award, that they are now considering eliminating the Peace Prize, and renaming it the Nobel Trying To Stop War Prize. P.S. Obama shouldn’t get that one, either.

Could it be that most of these awards are bullshit?

I propose an Award Award, to be presented to awards that actually mean something.

@Bong-Smoking Primitive Monkey-Brained Spook,
Make sure you’re packing at least 500 picoseconds!

. .. . .. — ….

C U Anon November 19, 2016 4:19 PM

@Bong-Smoking Primitive Monkey-Brained Spook :-

Careful you are moving into Trump Territory with 12″ jokes. Next you will be asking “the parking question”…

anonymous November 19, 2016 4:57 PM

Not strictly limited to squid but related to all of the Internet:
The PATRIOT Act (Persecuting Americans That Read I.T. Oriented Tabloids Act) violates the 1st and 4th Amendments of the Bill of Rights:
https://www.linuxjournal.com/content/nsa-linux-journal-extremist-forum-and-its-readers-get-flagged-extra-surveillance
https://daserste.ndr.de/panorama/aktuell/NSA-targets-the-privacy-conscious,nsa230.html
Violating the Bill of Rights is insurrection, which is a form of treason. The death penalty is often advocated for traitors. The Rule 41 changes set for Dec. 1, 2016 are far more radical and militant than the human rights abuses of the PATRIOT Act.
Call your senator and leave a message asking him or her to support the “Stop Mass Hacking Act”.

Nick P November 19, 2016 5:24 PM

@ Bong-smoking Spook

The real problem for those guys is that they’re delay-insensitive like those asynchronous circuits I posted. The solution is similar to that stock exchange combating HFT: increase the delay. The tactic of increasing the length of wire can also help.

Thoth November 19, 2016 7:02 PM

@anonymous

Sadly, the ones who hold military and government power are the truly powerful people, who can bend, break and create rules as they wish. Rules without fangs are not rules unless fangs can be added. The military-industrial-intel-civilian-governmental complex are all bunched tightly together and are the ones who truly have the fangs.

If the rules truly had fangs, how could a ton of government officials simply walk away scot-free from all the tyranny and treason they have committed without a single repercussion? To put it in plain words, rules are useless without enforcement…

@Nick P

There are more ways to bring down an EAL 5 – 7 system, and much of the time the security is only evaluated on a per-system or per-component basis. I am now seeing a ton of Smart Card OSes rated very highly, 5+ – 7+, across the board from NXP, Infineon and so on. If FIPS and CC certificates are so easily obtained (and I did mention in the past that most of the work is report-reading), I wonder what the true significance of these (rubber-stamp) standards is.

Most people get away with calling their systems CC EAL 7 evaluated when in fact what was EAL 7 was the MPU and firmware (the above examples are from the Infineon, Sony and NXP CC validations), and all of a sudden they are all over the place sending out the good news that their system is CC EAL 7 when in fact, if you read again, the overall security is EAL 4+ or 5+ at best (Sony FeliCa as an example). Maybe it’s about time to use the overall system’s CC EAL as a baseline for whether they have met the lowest of the low security requirements, and not put too much emphasis on these broken standards being used alone anymore.

The reason most IoT crap happens is on multiple levels that give the nasties chances to slip in, and most are purely human factors, including cost-cutting on security systems for factories when provisioning IoT devices, static passwords … it’s all the simple low-level stuff that kills, not the higher-powered nation-state stuff: sophisticated side-channel probing, chip decaps or using some supercomputer to get at the IoT chips.

In fact, most IoT chips are pretty beefy when it comes to security. Most of these chips come with stuff like hardware security engines, TrustZone backdoors, MPUs and so on. One instance is the bunch of Qualcomm Snapdragon 1100 and 2100 wearable processors that come with backdoored TrustZone (a.k.a Qualcomm’s QSEE/SecureMSM), hardware crypto and hardware key storage. Some of the ARM devices for IoT like the Cortex M0+, M3 and M7 used in embedded all come with an MPU, and the M3 and above usually come with some hardware crypto and other goodies.

The problems are usually on the manufacturer side, where they do not have the robust security process models used for smart card chip fabbing, in which the fabbing and personalization processes are also taken into consideration when issuing the smart card chip’s CC EAL certification. If the hardware security existing in most IoT chips could be enabled and securely preloaded with unique activation keys, and the distribution of activation keys could be done in a secure manner (this is the problematic part), then IoT would be much more secure and less vulnerable to Mirai-like attacks, where a cache of 60+ default passwords was all that was needed to get access to an IoT device. Of course the cost would be higher, but that’s the money required to be more secure.

In short, IoT security can be improved not via the CC EAL of some chips or OSes but via validation of the security processes and practices that take place when these chips are manufactured, transported, handled and personalized, just as was done for smart card chips.

Ted November 19, 2016 8:10 PM

Qualcomm has announced the first bug bounty program offered by a major semiconductor vendor. [1] The program will be administered by HackerOne and will pay up to $15,000 per vulnerability. Initially, around 40 security researchers who have made previous vulnerability disclosures will be invited to participate. [2]

[1] https://www.qualcomm.com/news/releases/2016/11/17/qualcomm-announces-launch-bounty-program-offering-15000-usd-discovery

[2] https://hackerone.com/blog/Qualcomm-launches-bug-bounty-program

Anonymous Coward November 19, 2016 8:36 PM

Is project BULLRUN limited to mobile devices and desktops, or does it apply to self-driving automobiles like the hazardous material transport in Die Hard 4? Is BULLRUN being applied to the 787 Dreamliner or the Airbus A350? Will the NSA make it so we see more airplanes flying into buildings? What about the Three Mile Island accident? Was it an accident, or did BULLRUN make it easy for terrorists to make their own Stuxnet-like nuclear-reactor-targeting malware? Isn’t it treason for any US agency to sabotage US national security by going out of its way to make insecure algorithms like the Dual EC one into national (NIST) standards that critical US infrastructure depends on? In short: is the NSA allied to ISIS and/or al-Qaeda?

Wael November 19, 2016 10:02 PM

@Clive Robinson,

If the Pilot Wave conjecture is found to be correct –and evidence is mounting– it might suggest a reason why QC is not really moving forward.

Pretty interesting. I used to follow this area for some time (a side effect of watching Star Trek: impulse and warp engines.) But how did you link Pilot waves to QC not moving forward?

Bong-Smoking Primitive Monkey-Brained Spook November 19, 2016 11:01 PM

@albert,

Make sure you’re packing at least 500 picoseconds!

Are you nuts? That translates to 5.9″ in free space — no can do! Now if you’re talking slow light then I’m game 😉

Nick P November 19, 2016 11:01 PM

@ ab praeceptis

re countries and “Maybe I got you wrong, but if you are really smirking at ProvenCore, you are mistaken, I think.”

You got me wrong. I offered it as a positive counter-example to PikeOS. It looks good. If successful, it will be their first, high-assurance, secure system. U.S. and U.K. seem to hold record for most of those (with the know how) since they have more people focusing on them. A niche of people in U.S. also invented INFOSEC and high-assurance security followed by a DOD policy mandating its use (temporarily). Gave a head start. Far as high-assurance safety/reliability, same in academia and corporate suppliers. French have the notable INRIA group doing things like CompCert and Coq but it’s all from the same group. Clearly an exception. Another is Esterel which is kicking ass in practical, high-assurance embedded. In safety-critical industry, they have many case studies in things like B method. Might lead in industry adoption of formal specs but I’d need to look at survey data again to be sure. That sort of ranking is a moving target.

Germany does a lot of medium assurance stuff. They did participate in Verisoft project which had high-assurance deliverables. They mostly do good medium stuff like Dresden’s TUDOS team, Turaya, and Genode. I agree they have a more pragmatic, product-oriented focus with Sirrix being good example. Unsurprising given their status in exports. They never were that strong outside of Verisoft as they just have a different focus. GenodeOS and Sirrix are both examples of their prior work going strong. GenodeOS is incorporating best-of-breed components like Nitpicker and seL4 regardless of where they come from.

“Note, in particular, that Kaspersky seems to carefully avoid using the word “formal” when marketing-droiding about “verified”.”

There are many forms of verification. Kaspersky leaves it vague intentionally for later.

“Btw, I personally couldn’t care less about EAL, Integrity and whatnot. I want to see good engineering and proofs, and the French, and to a degree the Swiss and some others (funnily, ones one wouldn’t think of), deliver (e.g. Muen).”

You should care if they actually conform to it without political bullshit. Here’s an archive of Cygnacom’s excellent summary of what each provides in increasing order. The whole point is verifying the features, engineering quality, etc. That really starts at EAL5 when they get the source code, start pentesting it, etc. Anything less is purely paperwork bullshit. Far as INTEGRITY-178B, it’s a small kernel with EAL6 assurance requirements that was analyzed and pentested for around 2 years. That’s much better argument for its engineering than “someone claimed to use formal methods on their microkernel.” There’s so much more to security than that some of which higher EAL’s cover.

That’s not the end all to it. It’s usually just a start to establish a baseline. Far as the kernel, look at its features yourself here. Disregard the marketing team’s work (wink) to skip down to their architectural details then what comes with the certification package. Notice how wise the architectural decisions are in how they promote determinism, easy analysis, and reduced covert channels. I particularly love how each app must donate its own CPU time and memory to run a kernel call whose internal stack is still isolated from it. Every OS designer should consider doing that. Far as the cert package, there’s a lot of useful data there for anyone trying to verify the kernel’s security or use it in a secure way. Certification mandates that be included which certainly isn’t the case for most commercial or FOSS even if formal methods are used. Hell, even seL4 still needs a covert channel analysis & some other stuff in EAL6-7 that’s ongoing at the Australian organization.

@ Thoth

“There are more ways to bring down an EAL 5 – 7 system, and much of the time the security is only evaluated on a per-system or per-component basis. I am now seeing a ton of Smart Card OSes rated very highly, 5+ – 7+, across the board from NXP, Infineon and so on.”

Careful about this. I told you a few things in the past about the EAL’s that I still stand by: (a) the EAL, not the overall cert, is about what assurance activities you did, with upper levels being more trustworthy; (b) the evaluation, esp at high EAL’s, is usually for a specific component called the Target of Evaluation, with everything else being out of scope; (c) the process in general often involves a lot of hand-wavy bullshit, so we should assume a partial evaluation of just what was in the TOE w/ half-assed work on the rest. For the smartcards, they usually test if they have the features, do some basic checks on them, and put extra care into the TOE. The EAL6+ cards usually focus on a small TOE that’s significant, but you can’t really say much past that.

They still have a benefit if done by a decent evaluator on a TOE that makes sense. Who knows for most products. It’s why I recommend private recommendations against similar criteria by groups that give you what you pay for in terms of effort.

“Maybe it’s about time to use the overall system’s CC EAL as a baseline for whether they have met the lowest of the low security requirements, and not put too much emphasis on these broken standards being used alone anymore.”

I did write that exact recommendation in my essay on how to redo security certifications. Also, default to Low for overall if anything in the system isn’t proven High. 🙂

“most are purely human factors, including cost-cutting on security systems for factories when provisioning IoT devices, static passwords”

Yes. There’s no liability legislation. Without liability, they don’t give a shit. Money talks, security walks.

“IoT security can be improved not via the CC EAL of some chips or OSes but via validation of the security processes and practices that take place when these chips are manufactured, transported, handled and personalized, just as was done for smart card chips.”

It could be done with both. I’d prefer lifecycle evaluation from system description to implementation to the hardware to its manufacturing. If not that, then standard guidance on things from hardware assurance to credential storage, with evaluators that check it carefully, could benefit us too. Remember that the CC sprang from the TCSEC criteria that actually worked in making high-assurance products happen. The reduced version came from less regulation and market demand for insecure stuff that’s cheap and shiny.

Bong-Smoking Primitive Monkey-Brained Spook November 19, 2016 11:14 PM

@Nick P,

The tactic of increasing the length of wire can also help.

To increase the wire length is to change the characteristic impedance of the circuit, and that could be a premature solution to the “problem”.

@C U Anon,

asking “the parking question”…

Good thing I don’t know that question!

Thoth November 20, 2016 12:31 AM

@Nick P

In simple terms, CC EAL and FIPS should be used as an indicator of whether the guys who built the system conformed to “base requirements” and did “minimal due diligence according to standards” of sorts. Other than that, the buyer should still be skeptical and investigate whenever possible.

I have read portions of the EAL documents (from the CC portals) regarding the PPs and the evaluation methods and guidelines, and they contain clauses pushing responsibility away: in the event that someone finds that the implementation of the PP-recommended security measures is incorrect or insufficient, it is not the evaluators’ fault, or something along that line. I can’t remember which document, since there were quite a few I sifted through, but it sounds like the CC committee also wants to keep its hands clean (as per usual).

Bottom line is to have defense in-depth and consider additional backups in case something fails along the way.

Regarding IoT security, most chips these days are ARM A-series with TrustZone, which includes a hardware keystore in TrustZone via some form of keymaster trustlet, while many embedded devices use at least an ARM M0+ (which already includes an MPU); the more fanciful and powerful ones like the M3 or M7 have a hardware crypto engine, some have hardware keystores as well, and some may include tamper reaction or resistance features. Despite all these cool features for hardware memory safety (MPU/MMU or TrustZone) and hardware key storage, they are not frequently used in many IoT devices. I mentioned hacking an IP security camera in 98 seconds above. Imagine if the IoT device used a one-time access key (i.e. a uniquely generated and securely provisioned AES-128 access key) printed onto a sticker stuck on the device itself. It would have prevented much of the password brute-forcing that relies on a database of 61 default passwords, since the device would now use a randomly generated and securely provisioned 128-bit or even 256-bit key to HMAC and encrypt the initial setup commands when the user receives the product package, and the only way to figure out the security keymat would be to open the box, which is considered a physical attack.
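
A rough sketch of what that first-setup check might look like on the device side (hypothetical names throughout; hmac_sha256() stands in for whatever MAC primitive the chip’s crypto engine actually exposes, and device_setup_key is the per-device key printed on the sticker):

    #include <stddef.h>
    #include <stdint.h>

    #define KEY_LEN 16   /* per-device 128-bit key from the sticker */
    #define TAG_LEN 32

    /* Hypothetical primitive, assumed to be backed by the SoC's crypto engine. */
    void hmac_sha256(const uint8_t *key, size_t key_len,
                     const uint8_t *msg, size_t msg_len,
                     uint8_t tag[TAG_LEN]);

    /* Device-unique key provisioned at the factory and held in the hardware
       keystore, instead of a global default password table. */
    extern const uint8_t device_setup_key[KEY_LEN];

    /* Accept an initial setup command only if its MAC verifies under the
       per-device key; the comparison is constant-time. */
    int setup_command_ok(const uint8_t *cmd, size_t cmd_len,
                         const uint8_t tag[TAG_LEN])
    {
        uint8_t expect[TAG_LEN];
        uint8_t diff = 0;

        hmac_sha256(device_setup_key, KEY_LEN, cmd, cmd_len, expect);
        for (size_t i = 0; i < TAG_LEN; i++)
            diff |= (uint8_t)(expect[i] ^ tag[i]);
        return diff == 0;
    }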

Nick P November 20, 2016 1:38 AM

@ Bong-Smoking Spook

Lol. Nah, they need the pill that failed to treat high blood pressure as its inventors envisioned. I heard it had a side effect that could help here.

Jennifer November 20, 2016 3:42 AM

Thanks for the physics Clive, interesting stuff indeed.

They should have a bit better security on printers, web cams, and routers. Phones could have a security standard, or a better update system. Maybe they could even put the sticker for devices in a little opaque wrapper inside the manual. I just noticed that’s what you actually said, Thoth.

I don’t know why they can’t put a more secure setup on install discs for devices like routers and networked printers etc. A little wizard with more secure options, then a password entry that tests the security and tells you to write it down when you come up with a secure one. The setup disc could walk people through a more secure setup, not solve all problems but steer people in the right direction so they learn something on the way. You could even get things working with encrypted DNS if you hold people’s hand and walk them through it.

I figure sometimes they think that by adding advanced options there’s more to go wrong and more irate customers to deal with. There is also a school of thought that a bit of forced education and hand-holding is a good idea, perhaps better than treating everyone as idiots. Only 1 of the consumer routers I looked at locally on sale had an option to put in hex characters for the Wi-Fi pass, and none were on the support list of alternative firmware.

Though I’d say there are a lot of dudes in offices making decisions about these things who still like wire, old bits of fencing wire, and find light and transistors a baffling idea. A good straight fence is important; you don’t get a dentist for that, though it is possible that the dentist might knock up some firmware for you in his spare time. The guy who likes fencing wire might be capable of supplying the dentist with the food to make his brain and body function. His grandson might do both fences and firmware, mainly firmware for his fencing robots.

Clive Robinson November 20, 2016 4:21 AM

@ Wael,

But how did you link Pilot waves to QC not moving forward?

Have a think about non-locality, hidden variables, and what boils down to a deterministic process even though it might look otherwise (like a flower petal floating on a pond).

Clive Robinson November 20, 2016 5:14 AM

@ Nick P,

they need the pill that failed to treat high blood pressure as its inventors envisioned

Sildenafil citrate actually does work very well as its inventors envisioned, as a vasodilator…

It’s just that the side effects found during testing are way way more profitable, so they switched what it was being approved for with the FDA etc.

There is a recorded case of a US Dr prescribing it “Off Book” to a woman who could not take other vasodilators and the insurance company and others getting quite shirty about it.

There are other side effects of Sildenafil citrate that “party animals” should be aware of, such as it does not play well with the likes of cocaine and various nitrates that are “party drugs”.

Also anyone on other heart meds such as Isosorbide mononitrate (ISMN) for angina etc usually can not use Sildenafil citrate.

Oh, another “off book” use mooted for Sildenafil citrate is Acute Mountain Sickness (AMS), believed to be caused by lack of oxygen etc. However, the Lake Louise Consensus (LLC) symptom score method used to assess AMS is sensitive to headaches, which are an early key indicator of rapid-onset HAPE. Headaches are a known side effect of Sildenafil citrate and most nitrates (people working with explosives used to call it a “nitro G headache”). In the case of nitrates, where the headache can be as bad as full-on whiplash/migraine, the headache is known to diminish in most cases within a week or two. So the jury is still out on Sildenafil for prophylactic use against AMS.

ab praeceptis November 20, 2016 6:39 AM

Nick P

All those committee rules and golden stickers are bound to fail. On a more superficial level, one might describe the reason by saying that they are descriptive to a large degree rather than constructive. Pardon my English; by descriptive I mean “the set of prime numbers is the numbers 2, 3, 5, … ad infinitum”. By constructive I mean “the prime numbers are those x in N+ (greater than 1) for which there are no i and j in N+ (neither being 1 or x) such that i * j = x”.

But my “let’s not care about perfection but about not producing lousy smelling sh*t” goes way deeper. So …

OK, round 2 of our original confrontation (which only became “hard” due to misunderstandings. Actually I immensely value you elder guys, historians, and lexica. I learned way more from guys like you or Clive than from my profs).

Digital is but a rather wanton and artificial construct on top of analogue. There is no such thing as digital. We merely make-shifted it. We decided that any analog voltage below x shall be considered 0 and any voltage above y shall be considered 1. The obvious problem being that, well, the universe isn’t digital, it’s analog.

Look at the very best processor factories. It starts with sand and cleaning it. At the other end there are wafers with a very much non-negligible part of the chips on them being unusable junk.
What does “properly working chip” mean? It means that those chips passed a test station with needles and wires and did not obviously and lousily fail. That’s about it.

The reason I’m picking on that is that the underlying problem stays with “digital” through everything. Any chip beyond, say, a TTL part with a couple of NORs or the like very, very quickly reaches a complexity that we simply can’t master.
Let me be nice and not talk of Xeons or even just SPARCs; let’s talk about an 8051 kind of thingy. Does anyone here really seriously think that we will anytime soon, let alone today, be able to really fully test and verify that thingy for any conceivable environment it may work in? I don’t think so.

Result: with any not utterly trivial digital circuitry we are bound and condemned to stay, in good cases, on the grounds of “sufficiently high probability”. And hey, our planes fly, our trains roll, the internet works (kind of), so we are good, right?

Now add software to that. With its own complexity, quirks, and gotchas.

On top of that we have to add humans. Humans, for instance, who say “it seems to work. Let’s not spend more time, money, work on it”. Or how about a young human in Finland who says “I love Unix, I’ll build one while I’m here at the university” … and 20 years later finds his toy running major parts of major infrastructure …

As a result of all those factors we have some 99% or so of rather questionable software and circuitry, plus a percentage I’d prefer not to guess-quantify of bloody crap.
Talking about it: this very text is put into, transmitted, and viewed with crap² (called a “browser”).

And – quelle surprise! – why are hackers annoyingly successful? Because they attack lousy crap.

The problem isn’t that, say, seL4 is not fully (over all domains) verified. The problem is that software crap sits on top of sw crap on top of sw crap on top of a crap OS on top of a rather crappy conglomerate of bandaid-extended “designed” crap junkyard boards with not exactly fully tested processors.

Don’t get me wrong. Yes, we need OSs that are fully verified over all relevant domains (which we may hope to know). Definitely. You are right.

But first – and way more urgently – we need to replace layers upon layers of crap and junk with at least somewhat reasonably built designs and code.

Nick P November 20, 2016 10:55 AM

@ ab praeceptis

re analog

There’s certainly limits to verification. Strange you picked digital vs analog, though, given that one is basically solved with the most widespread deployment of formal verification. The mess of analog is constrained by turning it into standard, digital cells with narrow properties corresponding to boolean equations and design rules to counter physical effects. Then, they verify the equations against the specs. Correct-by-construction tools have existed for digital since the 90’s with field use showing no errors for about everything that got full verification. The buggy ones are usually custom jobs doing risky optimizations or SoC’s with pre-built components tossed together without enough integration verification.

re prescriptive

Well, let’s put your claim to the test. I’ll list some EAL requirements in English. Then, you tell me which of these shouldn’t be on the list for secure software.

  1. Clear, precise description of functional requirements.
  2. Clear, precise description of security goals.
  3. Clear, precise description of the abstract design or solution.
  4. Clear, precise argument that the design meets the requirements and embeds the security policy/goals.
  5. Code, HDL, etc. that is shown to faithfully implement the design.
  6. Testing of all paths in the system, shown to work correctly or fail safe per the design.
  7. Covert channel analysis to show any leaks where secrets can get out without explicit communication channels.
  8. Pentesting by professional hackers who are independent of the original organization.
  9. Repository system that controls and tracks who can modify these documents. Also, a secure form of distribution such as digital signatures.
  10. Guidance on secure installation, configuration, and maintenance of the software.

Alright, which of the above shouldn’t be required for high-security software? We’ll start there.

Wael November 20, 2016 11:25 AM

@Nick P,

Alright, which of the above shouldn’t be required for high-security software? We’ll start there.

I like this structured, methodical disposition. Should take us somewhere 🙂

Clive Robinson November 20, 2016 11:31 AM

@ Nick P, ab praeceptis,

The mess of analog is constrained by turning it into standard, digital cells with narrow properties corresponding to boolean equations and design rules to counter physical effects.

Whilst that can deal with “one dimension” of the analog problem, it by no means deals with them all. For instance look at the way clocked latches work, then investigate the metastability issue that comes with it. Whilst there are mitigations they tend to have significant limitations.

Also analog has other interesting issues: digital waveforms are the sum of various harmonically related sine waves. Each has its own energy, phase and response to a transmission line. Thus what might start as an approximation to a squarewave can quickly end up entirely different, such as a sawtooth, or worse, have a significant dip in the middle of the top/bottom flats of the squarewave…
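
For reference, the decomposition being described: an ideal squarewave of frequency $f$ is the sum of its odd harmonics,

    $x(t) = \frac{4}{\pi}\sum_{k=1,3,5,\dots}\frac{1}{k}\sin(2\pi k f t)$

so a channel that attenuates or phase-shifts the higher harmonics differently from the fundamental hands back something that is no longer square, which is exactly the sawtooth/dip effect above.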

Wael November 20, 2016 11:34 AM

@Clive Robinson,

Have a think about non-locality, hidden variables, and what boils down to a deterministic process even though it might look otherwise …

So there is a correlation between QC and the true nature of randomness? If that’s the case, then I agree since I don’t believe in “randomness” 😉

ab praeceptis November 20, 2016 11:44 AM

Nick P

1 to 4 can be widely interpreted. Why not “formally spec’d, modelled, proven”?
5, too. Why “shown” and not “proven”?
6, 7 is rarely really feasible. Funnily, I could live with that; it’s you (among others) who brings up, let’s say “not everyday”, situations like intentional EM use against a target.
8 is but lottery with good intention. Similar to the unit-testing religion.
10 is funny because experience shows that many weaknesses are created by admins or developers who just can’t be bothered to actually read, let alone follow good guidance.

Again: I agree with you that we should strive towards perfection. I partly disagree, though, re the means and in particular re the order of things. Before I try to make sure that seL4 is not just simply code verified (“perfection”) I strongly suggest to get rid of a couple of bazillion crap plastic boxes out there.

Noted my little hit on “unit-testing religion”? It nicely shows a point (and has something in common with fuzzy testing): I’ve seen loads of TDD evangelists preaching their religion. Then I took a closer look – and found the usual case to be “Formulate the requirement in terms of tests. Then implement till all tests pass”. Done.

Sounds nice. And is BS. What do I really want to find in proper unit tests? Attacks, evil-spirited cases, maximally insane user input. I do use unit tests, and I use them for exactly that “evil” purpose; I use them against myself to see whether I really nailed it down tight.

Let’s look at an innocent case: an IPv4 address plus optional mask, string conversion to integer plus integer mask.
Typical procedure: “that thing consists of 4 pieces, each up to 3 digits, separated by dots, plus optionally a slash followed by 1 or 2 digits” (rarely is even a simple EBNF formal spec done). Next step: build unit tests for “ip4str2ip4struct(char * ip4str, IP4STRUCT * res)”, i.e. throw some valid and some grossly invalid strings at it. Works? Done, great.
Along comes a stupid user who enters the IP address with commas. Fail. Oh, and btw, there is no “const” on ip4str, and the unit test doesn’t care about that. Maybe it doesn’t care about “2000” (in an IPv4 octet) either, or about “351”. The mask var (I’m being generous here) is declared as “char” (never mind a user entering “/33” …) and chances are that the ip var is declared “unsigned int” (rather than “uint32_t”), etc, etc.
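
For contrast, a sketch of what a tighter version of that hypothetical ip4str2ip4struct() might look like, rejecting the cases above (commas, octets over 255, masks over 32, trailing junk) instead of trusting the caller; even so, it only covers the failure modes its author happened to think of, which is rather the point:

    #include <stdint.h>

    /* Parse "a.b.c.d" with optional "/m"; returns 0 on success, -1 on any
       malformed input (wrong separators, octet > 255, mask > 32, junk). */
    int ip4str2ip4(const char *s, uint32_t *addr, uint8_t *mask)
    {
        uint32_t a = 0;
        for (int octet = 0; octet < 4; octet++) {
            unsigned v = 0;
            int digits = 0;
            while (*s >= '0' && *s <= '9' && digits < 3) {
                v = v * 10 + (unsigned)(*s - '0');
                s++;
                digits++;
            }
            if (digits == 0 || v > 255)     /* "", "351", "2000", ... */
                return -1;
            a = (a << 8) | v;
            if (octet < 3) {
                if (*s != '.')              /* commas et al. rejected */
                    return -1;
                s++;
            }
        }
        unsigned m = 32;                     /* default: host mask */
        if (*s == '/') {
            s++;
            int digits = 0;
            m = 0;
            while (*s >= '0' && *s <= '9' && digits < 2) {
                m = m * 10 + (unsigned)(*s - '0');
                s++;
                digits++;
            }
            if (digits == 0 || m > 32)      /* "/", "/33" rejected */
                return -1;
        }
        if (*s != '\0')                      /* trailing junk rejected */
            return -1;
        *addr = a;
        *mask = (uint8_t)m;
        return 0;
    }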

How do you “Clear[ly and] precise[ly]” nail that down? Write a book for that function? Forget it. There will be users coming up with idiocies you’re not even capable of dreaming up in a nightmare. Plus hackers, of course, who do the same but more or less knowledgeably.

There is just one way to properly nail that down: a formal spec. Any EAL or Integrity or sec-whatever asking for less is of little worth.

Nick P November 20, 2016 12:58 PM

@ ab praeceptis

“1 to 4 can be widely interpreted. Why not “formally spec’d, modelled, proven”?
5, too. Why “shown” and not “proven”?”

Because I had to see if you’d vet the fundamental requirement before making it more specific. Also, EAL5-7 vary in increasing mathematical rigor. One of the reasons for that is prior high-assurance work showed formal tools simply weren’t up to the task of modeling and proving many real-world systems. Whereas, there were informal methods that seemed to make such systems work as intended. DJB’s work and OpenVMS are examples where they basically never fail to do their job despite no application of formal methods. Just smart humans reviewing design & code plus extensive testing. Hence, standards provided a range of formality for any given component to use.

The standard prefers higher levels obviously. Prove as much as you can. It’s just there’s a cutoff point for that right now.

“6, 7 is rarely really feasible.”

It takes almost no effort if you've done the other stuff with formal specs and/or proofs. Even in Orange Book days, there was computer-assisted tooling for these. Today, there are tools that can generate the tests for you from specs. Also prototypes for automating covert channel analysis. It's always feasible on high-assurance systems. It's just rarely done.

“it’s you (among others) who brings up, let’s say “not everyday”, situations like intentional EM use against a target.”

I also say put it in an EMSEC safe and/or bunker with power filter if you’re worried about that stuff. It’s irrelevant to whether these assurance activities improve the security of software.

“8 is but lottery with good intention. Similar to the unit-testing religion.”

8 is how just about all vulnerabilities were caught in closed-source software, along with many in open-source software. It produces the vast majority of empirical results. That's not a lottery even though chance is involved. It's a probabilistic activity where results go up with the skill of the developer and/or reviewer. That red teams almost always succeed shows how necessary it is, too.

“10 is funny because experience shows that many weaknesses are created by admins ”

Which means it’s necessary, not funny. A criteria mandating both its creation and use where applicable with liability for failures would probably reduce the problem you describe a bit.

“Again: I agree with you that we should strive towards perfection.”

A platitude. You've failed to reject with evidence the specific recommendations of the Orange Book and later the EAL's. Their application to high-assurance systems in commercial and academic sectors showed each one caught problems. This was consistent for decades. The only tenuous one, with extremely mixed results and high cost, was your favorite: formal proofs. Its cost-benefit is only recently becoming understood and worthwhile due to better tooling. The scientific evidence is in favor of these being good methods for boosting the security baseline, and they should be in regulations. Status quo stands.

“Noted my little hit on “unit-testing religion”? It nicely shows a point (and has something in common with fuzzy testing): I’ve seen loads of TDD evangelists preaching their religion. ”

You’re conflating a number of things which have little to do with this discussion of specific methods. I oppose TDD as a combo of English, visual, and/or formal specs are provably better by achieving same goal plus let test generation happen automatically. Testers then focus on stuff computers can’t guess. On other side, fuzz testing caught shit-loads of vulnerabilities by forcing unlikely paths and states to happen human testers never thought about. It’s valuable as an extra for that reason. Likewise, stochastic methods in synthesis and optimization are outperforming the clever, deterministic algorithms compiler writers are using in quite a few cases also occasionally beating hand-coded assembly. There’s a general principle here of using guided randomness to spot problems or solutions our preconceptions prevent us from seeing. All tangential to the list of methods I gave you shown to work in project after project. Basing things on evidence is opposite of religion.

“How do you “Clear[ly and] precise[ly]” nail that down?”

Real-world implementations aren't clear and precise outside their own source code. I'd just use a grammar fed to a secure parser generator. Nail handles IPv4. Galois does things like converters in CRYPTOL. I'd also isolate the parsing activity from the main app or system. The old scheme was putting it on a separation kernel in its own address space, with message passing carrying the input or output and input validation of anything important received. This shit is way easier than people make it out to be. You can't guarantee correctness without formal specs and such but you can reduce damage from incorrectness. Untrusted, isolated activity followed by a trusted validity checker that's usually way easier to write. That's the security pattern. Also the de facto standard in theorem proving.
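
A minimal sketch of that untrusted-parser / trusted-checker pattern, with ordinary POSIX process isolation standing in for the separation-kernel partition (names like parse_untrusted() are illustrative only):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

struct result { uint32_t addr; uint32_t mask; };      /* fixed-size message */

/* Untrusted: arbitrarily messy parsing; runs only inside the child. */
static struct result parse_untrusted(const char *input)
{
    struct result r = { 0, 32 };
    unsigned a = 0, b = 0, c = 0, d = 0, m = 32;
    if (sscanf(input, "%u.%u.%u.%u/%u", &a, &b, &c, &d, &m) >= 4) {
        r.addr = ((uint32_t)a << 24) | (b << 16) | (c << 8) | d;
        r.mask = m;
    }
    return r;
}

/* Trusted: small, easy-to-review check on the fixed-size result only. */
static int result_is_valid(const struct result *r)
{
    return r->mask <= 32;                /* extend with whatever policy applies */
}

int main(void)
{
    const char *input = "192.168.0.1/24";
    int fd[2];
    if (pipe(fd) != 0) return 1;

    pid_t pid = fork();
    if (pid < 0) return 1;

    if (pid == 0) {                      /* child: the untrusted partition */
        close(fd[0]);
        struct result r = parse_untrusted(input);
        if (write(fd[1], &r, sizeof r) != (ssize_t)sizeof r) _exit(1);
        _exit(0);
    }

    close(fd[1]);                        /* parent: the trusted partition */
    struct result r = { 0, 0 };
    ssize_t n = read(fd[0], &r, sizeof r);
    waitpid(pid, NULL, 0);

    if (n != (ssize_t)sizeof r || !result_is_valid(&r)) {
        fprintf(stderr, "rejected\n");
        return 1;
    }
    printf("accepted: 0x%08" PRIx32 " /%" PRIu32 "\n", r.addr, r.mask);
    return 0;
}

The trusted side never touches the raw input; it only sees a fixed-size struct and applies a small, easy-to-review validity check.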

Wael November 20, 2016 1:15 PM

@Nick P,

The right approach to get through “beating around the bush” and bring it to a quick closure.

Orange book again? Lol, welcome back 🙂

ab praeceptis November 20, 2016 1:47 PM

Nick P

a) I’m not interested in pissing contests. Nor am I interested in sockets to put someone on (or to keep that place).
I suggest you return to a civilized discussion.

b) You can tell a lot, i.a. examples, counter-examples, etc. I don't care. All that is simple to cut off: all of that failed to a large degree. We are not in today's shitty situation because integrity, eal, etc. did so greatly.

Formal method tools relatively new?
Hardly. Granted, it got lots easier and better but, to name just one example, Modula-3 had its runtime lib. formally spec'd and checked (Larch) decades ago. Z and other spec. tools are also decades old and so is DbC (a pretty good and well proven basis).
A considerable part of my toolbox is decades old (albeit most with new editions/versions). My specs, for instance, are done using a tool that has existed for 3+ decades.
Pascal, Modula and others had "first" and "last" for decades, allowing one to avoid many, many overflow errors with "for First(x) to Last(x) do …". Or Ada's 'First and 'Last, or (somewhat newer) "for x in whatever'Range loop …".

You talk about blabla committee successes and fail to mention that those are actually largely due to solid, decades-old tools.

Oh and btw: What I wrote and what you felt compelled to rip apart has also been written down in newer deliberations and recommendations by eal, the dod and the like (funnily, I think it was you who provided a link to such a paper).
But sure, they are just stupid like me and should learn more from reading hacker news and the gurus there …

Finally, what are the results? What have eal and the like achieved besides "cost is of little importance" fields? One answer has been given by Kaspersky and his wannabe security OS. Obviously even that is much more secure than what's on the vast majority of infrastructure boxes.

Our computers are bandaid and glue circuitry junkyards with backdoors thrown in for free – what did eal, integrity and co. do about that?
Our operating systems (incl. many in core infrastructure equipment) are insecure and unsafe – what did eal, integrity and co. do about that?
I will not even mention SSL/TLS …
Microsoft shells out millions and millions for formal tools for a reason. But I guess they are stupid, too, and should read the gurus at HN …

Btw your “parser” solution is quite typical. You know about some project that is (supposedly or really) well done, safe, and secure. There is just a little problem: software development isn’t hodgepodge lego. One can’t reasonably collect dozens of pieces in diverse languages and written with diverse contexts and use cases in mind. Transforming them creates new issues. There’s probably good reasons for the good people in charge of Modula or Ada to have created a solid (and in the case of Ada also large) library of good quality. Have a look. Some of Spark/Ada 2014+ offers e.g. formally verified collections.

I’d probably lousily loose against you in a war stories and “I know a paper” discussion. But here in the real world where we are working on real solutions I’m less easy prey than you seem to think. Don’t worry, I will be kinder to you in my world than you have been to me in yours.

Clive Robinson November 20, 2016 2:24 PM

@ Wael,

Back to the old question of “What is random?” and more importantly does the definition make sense in real terms.

To look at it one way you could take the view that "everything is deterministic" but either "there is too much complexity" or "there is too much information" or both.

Oh, I also forgot to mention the issue with the "Many Worlds" idea and what effect entropy implies, but you've probably worked that one out for yourself…

ab praeceptis November 20, 2016 2:50 PM

r

“porcupiny”? I guess that’s something in the area of “irritable”?

A hug? How sweet, thanks, but thanks, no. That is very well taken care of by my wife.

And: Yes, kind of (irritable and fed up). Not even because of 1000 papers people (who, after all, do make some valid points) but rather because I see us in a real mess and I feel this just isn’t the situation to strive for perfection and more papers.

More than anyone here disturbing me, it's the professional environment that just happily adds crap layer upon crap layer and doesn't seem to be much disturbed by the occasional nuclear reactor explosion (metaphor).

Maybe I suffer from a Cassandra infection but I can’t get rid of the impression that we need results. Soon. Seriously.

r November 20, 2016 7:50 PM

Believe me, the future we're all so blindly walking into doesn't exactly make me excited either. There are a lot of competing interests and arguments to and fro, pro and con. And I agree, non-contributing idealist members of society like myself need to give back in one form or another. We're doing the same thing with the gnu codebase we do with windows and x86 effectively, politely handing the reins to others we assume are serving a public good. What is a public good? Where can we apply protections without being enablers?

I do read both you and Nick’s suggestions, I realize I may have been derisive in my response but I meant it – you sounded like you needed a hug.

Where do we start?
Where do we sign-up?

Your stance is that even OpenBSD isn't secure enough, and I hear you loud and clear – I say that because I find myself curious about the vmm they're building. Or how Qubes and Xen seem to be the only real IOMMU use I can see within the public space currently – seL4 says their IOMMU support isn't yet up to spec. Nick would tell me that the problem lies in the hardware, and that any movement to a more secure subsystem still wouldn't be secure enough (see @Clive).

You were right when you came here, there's good information to be both picked and piqued here. @Thoth is investigating tamper resistance and zero-knowledge architectures, @Figureitout is playing with IoT, ARM and pi.

I see things like the multicore cache timing attacks and start thinking about per-core isolation of an OS that is riddled with land mines and landmarks. So that you're aware – you have seriously pushed me into investigating Modula as opposed to C or assembler. There are valid arguments everywhere to be found within the blog and its participants; anyone questioning my curiosity and ideologies should be aware that researchers pose a danger to themselves and the society around them if they don't protect themselves with a properly set up and controlled environment. You can't trust your scientific results if you can't trust your equipment, you can't trust your measurements, you can't trust your judgements. Worse, if you're a reverse engineer you can't trust that what you investigate can't be stolen (and repurposed, misused, redistributed, reverse engineered) or just used against you or the public.

We are faced with a multitude of ethical dilemmas which beset us on all sides.

Is it ethical to investigate nuclear power when your neighbor is North Korea?
Is it ethical to allow yourself to investigate deterrence when you deny someone else the same?
Is it ethical to investigate deterrence at all?
What about weapons? For defense? For understanding capabilities?

I see from time to time, certain unidentified individuals speaking against privacy and true security. But those who are fine with shoulder surfing need to be aware that without effective stopgaps technology can escape, it can escape a responsible developer and it can be exfiltrated from a responsible scientist without having a secure environment.

I thank everyone who is not afraid of open conversation where humanity is concerned, because no matter how us-versus-them I paint it – we cannot dismiss that many of the ideas and technologies we work on are, or can be, or will become dual use.

Responsibility and security are paramount.

Come Play Me: Facebook Dangerously Easy to Weaponize November 20, 2016 8:19 PM

Because the United States lacks European-style restrictions on second- or thirdhand use of our data, and because our freedom-of-information laws give data brokers broad access to the intimate records kept by local and state governments, our lives are open books even without social media or personality quizzes….

One recent advertising product on Facebook is the so-called “dark post”: A newsfeed message seen by no one aside from the users being targeted…

While Hillary Clinton spent more than $140 million on television spots, old-media experts scoffed at Trump’s lack of old-media ad buys. Instead, his campaign pumped its money into digital, especially Facebook. One day in August, it flooded the social network with 100,000 ad variations, so-called A/B testing on a biblical scale, surely more ads than could easily be vetted by human eyes for compliance with Facebook’s “community standards…

Facebook is no longer just a social network. It’s an advertising medium that’s now dangerously easy to weaponize!
http://www.nytimes.com/2016/11/20/opinion/the-secret-agenda-of-a-facebook-quiz.html?_r=0
Homework: Why does Trump luv key board member Stephen K. Bannon?

Nick P November 20, 2016 8:39 PM

@ Wael

“The right approach to get through “beating around the bush” and bring it to a quick closure. ”

I say let’s get to the evidence. If you think it works, show me all the times it has in stuff of value. If you think it doesn’t, explain why it was the only thing working so many times. Cut through speculation to actual practice.

“Orange book again? Lol, welcome back :)”

Haha. We’re past that homie. Just lessons about QA from there that still pay off. Also, small, lean TCB’s that compose cleanly. MLS policy and such? Not so much.

@ ab praeceptis

“not interested in pissing contests”

There’s no pissing contest: it’s simply discussion and debate of methods for making secure systems. Certain methods were proposed, put to the test, worked, worked again, worked again, and so on. Reproducibility and consistency. Others worked in some context but not others with lessons to be learned. Status quo is based on such evidence. You appeared to disagree so I simplified the claims to let you present evidence for rejecting them. You didn’t have it as was case with the others. Status quo stands until R&D and field evidence disproves it.

“counter-examples”

There were none in industrial high-assurance. The use of formal specification, human review, and thorough testing always found problems. In high-assurance security, pentesting and covert channel analysis always found problems. The SCM thing was useful when the Karger attack compromised compilers to inject executables. There were even more like need to generate the system from source but I’m keeping it simple.

“formal method relatively new?”

Larch isn’t countering my statement. It was a method they made for use with a language nobody was using to verify just a few properties of its standard library and other toy programs. It greatly expanded over time but couldn’t do high-assurance in isolation like other methods: just the code or low-level step in limited ways. Nor was it committees coming up with these assurance activities or testing them. It was teams of individuals in defense contractors, private companies, CompSci, government organizations, and safety-critical industry.

The methods they used were all immature, which they griped about, constantly having to fight the tools. These included VDM, Ina Jo, the Gypsy Verification Environment, Z, etc. The languages were unsafe ones most of the time, or for part of the software, just because that's all the compilers they had for the target hardware. Ada compilers were too expensive and buggy for the early ones, with just one production kernel using it, and with quite the RAM requirements. GEMSOS, maybe one other, used Pascal for the critical part. Clearly the successes were due to the people involved and the principles of their methods since the tools were so immature. Same even with Caernarvon, seL4 and CertiKOS, where they had to roll their own tools on top of mature ones because the mature ones still weren't good enough for straight-forward application at that level.

“just stupid like me”

You’re just operating on incomplete information like the rest of us with things to learn, contemplate, or teach. Not stupid. 🙂

“what did eal, integrity, and co do about that?”

Since the early days, they showed both how to build systems highly resistant to penetration plus supplied them. The high-assurance market and practical CompSci produced secure kernels, VPN's, databases, virtualization, backup, etc. The demand side, esp. greedy companies, consistently rejected them. The high-assurance people did their part, though. Now, most are dropping down to medium assurance combined with virtualization of the insecure garbage people prefer, like the L4 work or even Microsoft's on Hyper-V. These are getting almost no adoption either outside of the fad for cloud containers. Those aren't being done with security baked in well. Apathy prevails.

“I will not even mention TLS”

You’d be wise not to given the adoption of crap solutions like OpenSSL. Even AdaCore just wrapped it in Ada for their AWS server IIRC. It was academics from medium to high assurance sectors that started fixing that from Guttman’s Cryptlib w/ built-in security kernel to DJB’s libraries to MirageOS TLS to LibreSSL. Outliers mostly using careful coding, review, interface checks, and optionally safer language/libraries. Eventually, after much fanfare in media, INRIA and Microsoft teamed up for a mathematically verified spec. Good they finally got around to it with their millions of dollars. Overall, though, TLS situation is a disaster on all fronts until recently on a few.

“btw your parser solution is quite typical”

People writing parsers say otherwise. LANGSEC has been doing secure parser generators for only a few years. The first verified parsers came around the same time or just a bit earlier. Had you been here years ago, you'd find me introducing and defending separation kernels with fine-grained decomposition with references to arguments from all kinds of people in INFOSEC who (a) never heard of them or (b) couldn't imagine how a TCB of 4-12kloc could improve security. My method, along with the MILS architecture, are just re-adaptations of the techniques developed for security kernels in the 80's and early 90's. I don't claim they're innovative, but they sure as hell aren't standard if nobody can tell me what a separation kernel is in about any INFOSEC forum. Or the language-theoretic version where you use an environment like JX or a language like SPARK, which again almost nobody uses or has heard of.

“software isn’t hodgepodge lego”

Smalltalk showed otherwise, with a few building blocks even kids were able to learn, scaling to apps like the million-LOC simulation the military built. Lots of mission-critical apps with high throughput done in commercial Smalltalks, too. LISP and Haskell delivered similar capability for high composability, with Wirth and Meyer doing a form of it imperatively (esp. DbC w/ OOP). Scratch literally turned programming into lego blocks by making them look like lego blocks that could only fit into the right other blocks. Before all that, Hamilton et al with the 001 Toolkit were synthesizing whole systems of code from a simple notation capturing logical & resource requirements that easily composed together in pieces. What we can do vs common practice are often different things. I do plan to look at those SPARK collections in the near future, though.

Most won’t get on the bandwagon with the programming styles as easy as lego blocks. There still are people doing it in various tools. The trick is they export it to something mainstream languages can call or hide it in the app/service they’re offering as competitive advantage. Commercial LISP users have been doing latter for quite a while.

“but here in the real world”

Everything I cite came from real-world use. Modula-3, VCC, etc. have never formally verified anything for total correctness or had something run through professional pentesting. Not a single exemplar came from that. Whereas real-world products were created with the methods I'm discussing, passed pentesting/evaluation, and are in field use right now. The papers that I cite as evidence, rather than interesting reading, are literally reports from real-world deployments. Those supported the gist of my list, with the only thing varying being what to apply in what context to what degree. Suitability of each tool to each job within a budget. Evidence in favor of them working has been solid over five decades even if we only get a handful of efforts every 1-3 years to study.

@ r

“Nick would tell me that the problem lays in the hardware, and that any movement to a more secure subsystem still wouldn’t be secure enough (see @Clive).”

Actually, I got blocked from Joanna's blog telling her the problem was in the software. I told her to use the above strategies, citing things like the Dresden TUDOS work. That is a high-performance microkernel w/ security focus, user-mode drivers, a trusted path for the GUI, and a user-mode Linux layer. I told her to definitely avoid Xen as it simply wasn't designed for, or built to prioritize, security. She flipped out, dismissing all of it. These days, she's added a trusted path and blasted Xen on their mailing list for their insecurity. I similarly told the OpenBSD people what worked in the past along with what methods led to the highest-security VMM's (esp. the VAX VMM by the legendary Karger). They rejected it, with Ted Unangst joking their covert channel team was understaffed, among other things.

The root foundation of the problem is certainly the hardware. Yet, there are software methods that got a lot done, especially if one just isolates security-critical stuff on a microkernel from an otherwise normal OS image. Interestingly, there was even an OpenBSD port to L4 by somebody. They had a starting point. They just reject methods that work in favor of monoliths in C with mitigation techs they hope will stop attackers due to the maze they work in. It's a cultural thing as usual given they're not economically motivated.

“I see things like the multicore cache timing attacks and start thinking about per-core isolation of an OS ”

That was one of the things I told them a covert channel analysis would've found a decade ago. Karger's VMM did one, although that was more than a decade ago, before most were thinking about hardware. Gotta do it on everything or just use a lot of physical separation of stuff. Lots of mini-boxes with careful I/O. My old gimmick. 🙂

Note: Redox is the latest one doing something close to at least the architecture and tactics of medium-high assurance. They're doing a microkernel, user-mode drivers, a clean-slate rewrite of stuff with baggage, and all in a safer language. GenodeOS is kicking butt on architecture and some components but bringing in baggage, and in an unsafe language, for adoption purposes. Tradeoffs, tradeoffs… (sighs) At least stuff is happening.

Anonymous Coward November 20, 2016 8:58 PM

Asking here since this seems the most likely place to post something and have NSA read and reply to it (and possibly some operative might have enough freedom to answer honestly if he believes his organization is doing something bad to America): Is project BULLRUN limited to mobile devices and desktops or does it apply to self-driving automobiles like the hazardous material transport in Die Hard 4? Is BULLRUN being applied to the 787 Dreamliner or Airbus 350? Will the NSA make it so we see more airplanes flying into buildings? What about the Three Mile Island accident? Was it an accident or did BULLRUN make it easy for terrorists to make their own STUXNET-like nuclear-reactor-targeting malware? Isn't it treason for any US agency to sabotage US national security by going out of its way to put insecure algorithms like Dual EC into national (NIST) standards that critical US infrastructure depends on? In short: is NSA helping (on purpose or by mistake) ISIS more than it's hurting them? How many actual bombings that were stopped couldn't have been stopped without the PATRIOT Act? How many that were allowed to happen were allowed because of government screwups rather than due to not enough mass surveillance?
Is decreasing everyone’s security really making America safer?
More on topic: how many people needed to use their smartphones in life-threatening situations, such as to call 911, and were unable to because of malware that messed up their phones, malware that wouldn't have been possible if security weren't being deliberately undermined by those paid to protect it?

r November 20, 2016 9:08 PM

@AC,

I find it funny you zero in on 9/11, but nobody has asked a single question about Malaysia Airlines' double hitter.

These are serious concerns far bigger than you're letting on; I wouldn't expect a serious response.

ab praeceptis November 20, 2016 9:43 PM

Nick P

I’ll start at the end.

“Everything I cite came from real-world use” – yes, but there’s a difference between reading and citing and real world development.

Example: “hodgepodge lego”. You completely mistook that and immediately answered from a theoretical view. I, however, was talking about a case that pulled something you said (~ “I’d take this or that parser”) down to reality. Actually it’s touching an issue of great importance. You know what is probably the single most relevant criterion for language selection? -> availability of tons of libraries.

Now, assume (to pull something down to reality) you had to do your project in Pascal. But that damn super parser you mentioned happens to be written in ML. Sure, can be done, but is it efficient? Do you have the developers around who can do it? How will you make sure the binding isn’t spoiling something? Will the linker handle it? Can your static analyser deal with it? Etc, etc.

As for integrity, eal and similar (which, for the second time now, you just assert I somehow failed to properly counter …). Let me put it to you in an ad absurdum/ad contrarium:

According to you we should then forget them cumbersome mathematical notations and proofs. Let’s instead use – I quote you – “Clear, precise description[s]”.

Do you finally see how untenable that line of argumentation is?

On top of that it’s practically useless. What am I supposed to do as a developer? Write the whole Do178 shebang in a comment prefixed to my function and then write another comment for each item on the list in my code?

What the hell does “this subprogram fully meets the specification” mean when the specification is “Clear, precise description”? Do you think more than one generation of compsci people have worked on formal methods and tools because they were bored?

Read Prof. B Meyer and others. There is but one “Clear, precise description” – and that is a formal one. And there is but one “Clear, precise and credible” assertion “routine ABC meets specification” and that is a formal one. Everything else is at best well intended words.

And while we're talking about it: let's say I have a formal spec in [insert favourite spec. method] and that I have, say, a Pascal implementation – now what? That's no idle question! That's where we still have an itch. Unless my, say B tool can speak with my (non existing) Pascal analyser and vice versa there will again have to be a person – and a knowledgable one at that – to compare one to the other and to see whether implementation meets spec.

Let’s look at heartbleed. The problem there wasn’t simply a buffer overflow (that could be caught by a boundary check). The problem was the content of “unused array elements”. In other words: To check that no “Clear, precise description” will be of any use. You’ll need separation logic. Now, how many units, how many condition lines have you done in Verifast? Trust me, that’s not something you’d give to just the first developer around the corner nor to just some student.
Multiply the pain by cross checking that against the formal spec (which only very few methods will allow in the first place). Multiply that again by not having any analyser for Pascal, let alone one understanding sep. logic.
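
A stripped-down sketch of that Heartbleed shape (illustrative names, not the OpenSSL code): the copy stays inside the array, so a plain boundary check is satisfied, yet everything between the bytes actually received and the claimed length is stale "unused" content that leaks to the peer.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct heartbeat {
    uint16_t claimed_len;        /* attacker-controlled length field          */
    uint8_t  payload[65536];     /* only the first actual_len bytes were set  */
};

size_t build_reply(const struct heartbeat *hb, uint8_t *reply, size_t actual_len)
{
    (void)actual_len;                            /* the bug: ignored entirely  */
    memcpy(reply, hb->payload, hb->claimed_len); /* in bounds, but leaks stale
                                                    buffer content to the peer */
    return hb->claimed_len;
}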

“Clear, precise description” is cheap – and but a lousy substitute for formal methods. But, to be fair, it was the best that was feasible for many years and to a degree still is (e.g. with Pascal).
And, quelle surprise, what are the eal & co more and more go toward? Formal methods.

Finally, you might want to extend your research to the practical realm. Let me tell you why: In a paper things look great. Did those guys ever produce more than a POC or a toy for them to test their ideas? Chances are: No. In case they did, is that tool still alive (let alone maintained)? Chances are: No. In case it is, does that tool run on a halfway current OS version and on an OS you happen to use and know? Chances are: No. In case it is, is it also available for both x86 and amd64? Does it ask for reasonably modest lib dependencies, etc., i.e. is it practically usable with what you have and have to work with? Chances are: No.
Plus many, many other factors, some of which we occasionally mentioned, like: Is it java or .net only, or windows only, or in a weird language without even a reasonable runtime lib, or does it ask for your soul to be sold (license), etc.

You’ll find out that very, very, very, few of those thingies in papers are actually of practical use in the real world.

And that IMO is one of the major reasons why we have so much crap out there. do178, eal? Changed next to nothing except for some "we don't care about budget" (or "damn, it's mandatory and they are serious about it") niche projects.

I find it btw. disgusting how you belittle the work done with Larch for the Modula library as if it was but some weekend job by students. What those people did at that time was a warp leap and we wouldn’t have MANY problems if similar “toy afternoon” libraries were available.

Anonymous Coward November 20, 2016 9:44 PM

@NSA
Do you guys still spend hundreds of millions of American taxpayer dollars, every year, for the sole purpose of sabotaging American IT companies, like you were doing up until 2013 (https://web.archive.org/web/20160205010411/http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security), or have you stopped?
How many hundreds of millions of dollars a year in taxes does the GCHQ spend sabotaging European IT companies?
Have you or the GCHQ exceeded a billion dollars a year towards sabotaging your taxpayers yet? Will you be satisfied until you have destroyed the entire Internet and brought all of your citizens back to the dark ages?

I am curious if 9/11 resulted from lack of dragnet surveillance laws or just from government incompetence, but I wasn’t zeroing in on that. I was actually referring to the Boston bombing.

Anonymous Coward November 20, 2016 10:01 PM

@NSA
Is your next top-secret multi-hundred-million taxpayer dollar move going to be banning LANGSEC, e.g. arresting everyone who uses the Rust language? Is that your idea of protecting America? Making as many ways as you can for Americans to be hacked while terrorists can't even afford computers and just plot their evil offline without even being affected? Great job. You sure saved lots of lives from the evil terrorists who catch their underwear on fire. I really believe you had to gut the bill of rights to stop the unabomber. Just as much as I believed Bush about Iraq hiding nuclear missile silos under palm trees, and Obama about repealing the PATRIOT Act (Persecuting Americans That Read I.T. Oriented Tabloids Act).
America must sleep safer at night knowing their great leaders are watching their backs. 24/7. Including in the bathroom.
Almost as diligently as the GCHQ (https://www.theguardian.com/world/2014/feb/27/gchq-nsa-webcam-images-internet-yahoo). Thank God for all you brave national security types protecting our children from peeping toms. Oh wait.

Anonymous Coward November 20, 2016 10:54 PM

@NSA
I know that you all swear an oath to uphold the constitution, and that you claim the ability to create magical golden keys with magical front doors that respond to your innate goodness while refusing entry to people who hate all the great American freedoms, but when I read about bad guys using the magical golden front doors that you put in routers, and your magical golden keys from Equation Group being sold on the black market, it makes me wonder if even though "pilot manual override system would prevent someone from successfully commandeering its planes in this way"(https://www.wired.com/2015/04/hackers-commandeer-new-planes-passenger-wi-fi/), some terrorist might use your magical front door to tilt a plane just a few degrees to the left or right, during a crucial moment while landing, where even if the pilot immediately sets it to manual override it will be too late to prevent it from turning into a thousand-ton bomb and crashing into the airport killing thousands. But then again, you guys have shiny badges, so it should be fine, right?

And definitely no danger in backdooring self-driving cars, because it’s not like any of that will ever be networked to commercial vehicles, transporting hazardous materials, like jet fuel and nuclear waste, right? It will stop at class C vehicles and maybe a few buses, but big rigs and tankers will never be networked, so it should be fine to just make magical golden keys for everything, right? The chips will never end up anywhere important, like the developer backdoor accidentally ending up in those missiles(https://www.schneier.com/blog/archives/2012/05/backdoor_found.html), right? That can never happen again, not by the Equation Group, right(the Shadow Brokers never happened, right)? Because liberty&justice4all?

Anonymous Coward November 20, 2016 11:04 PM

@NSA
Are you going to deny all connection you have to Equation Group, say you didn’t know your magical golden keys were stolen until after the Shadow Brokers sold them, and deny making the huge front door in Juniper Systems firewalls?

Thoth November 20, 2016 11:15 PM

Current State of Security Desktop and Server Microkernels and Separation Kernels

I prefer practical numbers and artifacts I can touch and handle. I will just list my current experiments I have done since the first time I got onto this blog forum.

What I Have Looked For:
– Usability
– Setup
– Features
– Object Separation / Memory Protection of Programs (do not ask me about how much verification has been done)
– Hardware Flexibility
– Open Source
– Easily Obtainable Off-the-Shelf by civilians
– Successful installation and booting

Genode FRAMEWORK:
– Usability comes with basic GUI and mostly command line and XML file editing.
– Setup requires XML editing to configure and build your images yourself and your compiler toolsets. Mostly for those who are technically inclined.
– Features are very wide in scope and as a FRAMEWORK (NOT AN OS), it is like a pluggable system where you can load different OSes/Microkernels/Hypervisors/FS modules …etc… and it has a huge amount of varieties of microkernels, hypervisors, modified Oracle VMM running on top of microkernels and micro-hypervisors to make it extremely flexible.
- Separation of objects and memory is available as part of its framework, since it uses a microkernel or micro-hypervisor built for exactly that purpose.
– Supports a whole ton of hardware from ARM development boards to bare metal
– FOSS and easily downloadable
- Only managed to get a basic Fiasco.OC with L4Linux running, and in a very limited mode. Failed to get the Turmvilla scenario working (run the NOVA hypervisor as the security hypervisor, then a stripped-down and vetted version of Oracle VirtualBox as the VMM, and then run a Windows 7 ISO as the userspace) due to some quirks on my laptop, so I simply dropped out of the experiment, created the GroggyBox project, and am also helping to add a duress PIN and other security functionality and improvements to the OpenPGP smart card project maintained by Yubico.

Redox OS:
– Have GUI. Quite usable.
– Build yourself or use ISO disk image.
- Close to a full-fledged OS, including FS and drivers.
– It is a microkernel so it should have object and memory separation.
– All I have tried is on Intel
– FOSS
– Successfully installed but not a smooth experience and glitchy. Questions in the Redox forum left unanswered for a long time. Not gonna look forward to it.

Turaya:
– Failed installation as it required TPM to exist.
– Available ISO disk image but not maintained anymore.
– FOSS
- Hardware flexibility is unknown as the installation failed due to lack of a TPM. Inclined to rate Turaya as rigid in its hardware requirements.
– It is a microkernel so it should have object and memory separation.

There we have a breakdown of three well-known FOSS microkernels / a framework (in Genode's case). Excluding the Genode Framework, up until now neither of the two microkernels (Redox and Turaya) is going to be your daily driver, nor your air-gapped secure system. The state of development leaves much to be desired. In fact, the only option of the two FOSS microkernels is Redox, considering the lack of maintenance of Turaya.

Some might think that I am just plain talking and pointing fingers at their problems, but don't forget I am currently busy digging around in smart cards, coordinating other smart card research (documenting Secure Element/Card capabilities in the market) and also digging around @Clive Robinson's "Fleet Broadcast" concept.

Nick P November 20, 2016 11:36 PM

@ ab praeceptis

“According to you we should then forget them cumbersome mathematical notations and proofs. Let’s instead use – I quote you – “Clear, precise description[s]”.”

You may not have encountered the conversational technique I’m using. I’ve previously promoted formal verification plus EAL6/7 that require it. They’re part of the discussion. My method is to start with an abstract version of the claim in dispute: “Do you value clear specifications of X, Y, and Z?” If there’s a consensus on that, then one gets more specific such as “Do you value informal, semi-formal, or mathematical specifications of X, Y, and Z?” You didn’t agree with the first set. So, no sense in making them even more specific.

“Write the whole Do178 shebang in a comment prefixed to my function and then write another comment for each item on the list in my code?”

You write an English description of your requirements or design along with the code with comments referencing that. This is what’s normally done. There’s also groups doing formal specifications or model-driven development with code generation from the models. Esterel does the latter with SCADE.

“here is but one “Clear, precise description” – and that is a formal one. ”

The evidence said otherwise after years of trying that. The reports indicated that both academics and industry professionals couldn’t understand what the formal specs meant by themselves. A pile of math to someone not in the domain or knowing what it represents is pure gibberish. That’s why the certifications required a combination of English and Formal specs.

“Unless my, say B tool can speak with my (non existing) Pascal analyser and vice versa there will again have to be a person – and a knowledgable one at that – to compare one to the other and to see whether implementation meets spec.”

That’s what they did back when formal verification wasn’t good enough for code. They made the final, formal spec very low level in terms of a state machine with the source code corresponding nearly line by line or module by module. These days, one can tie the two together formally. The originals had good results, though. Kesterel even synthesized code from such specs.

“In other words: To check that no “Clear, precise description” will be of any use.”

OpenBSD and other non-formal projects catch stuff like this with code review all the time. Static analysis can also catch it if one’s coding standard isn’t utter garbage. No need for formal verification.

“Let me tell you why: In a paper things look great. Did those guys ever produce more than a POC or a toy for them to test their ideas? ”

I just told you that. Commercial products, things applied in defense/military, safety-critical projects, practical demonstrators for academia, and so on. Extremely practical given they were sold, fielded, and/or did the real-world job. The closest that the best implementation of separation logic came to that is a partial application of partial correctness to Hyper-V. Kind of strange that you demand your high standard of full, formal specification and verification of all my case studies and projects, but not of the lesser accomplishments of that.

“Does that tool…”

Varies tool by tool. A number look good by the points you bring up. Others are high-assurance products running on bare metal on mainstream hardware for desktops or embedded. The oldest ones that aren't commercially supported I assume are dead.

“You’ll find out that very, very, very, few of those thingies in papers are actually of practical use in the real world.”

That’s the same observation I made when starting research. It’s why I only cite or reference the practical ones for conversations like these. Especially with real-world deployments.

” find it btw. disgusting how you belittle the work done with Larch for the Modula library as if it was but some weekend job by students.”

Did I? Or did you try to use a great contribution to formal methods… which never resulted in a high-assurance system… in a discussion on what resulted in high-assurance systems? You treated it like a bluff hand in Poker instead of the respectable thing it was. You’d have gotten better results citing a tool that actually led to a high-assurance product or project. I gave due credit by citing all of them that did that I recalled off the top of my head.

Figureitout November 20, 2016 11:52 PM

r
–Thoth is doing hella smartcard stuff, java card and the like. Seemed to port chacha20 to run on javacards, which is pretty cool.

I’m not messing w/ IoT really (I don’t agree w/ the regulations Bruce wants), but a lot of MCU’s. Dev boards of all kinds, from every company, Silabs, Freescale/NXP/Qualcomm, Microchip/Atmel, TI, Nordic Semiconductor. My TI84/3 calc. Android phones. Little bit of SDR work, some digital radio. Big desktop PC’s (Intel and AMD). Arduino. Some crypto, some hardware design, some coding on all those. Really want this data diode to work, if it does I want to make a board for it.

I used to think we were going to do some project together (new secure chip), but that’s not happening; just way too many engineering challenges to even start, too many easy attacks to ruin it, and oh yeah, the $millions bare minimum we need to even be taken seriously. One of us would have to rob a bank or something lol. We’re all doing our own thing now.

So I start w/ the next best thing we have for now: MCU’s. Even the smallest ones are still amazing feats of engineering. They’re so handy and useful, they’re everywhere now.

There’s a bunch of projects to join, but I’m doing my own thing for now and sharing what small nibbles I can make (unpaid in my freetime). I wouldn’t be too worried about the mega-complainers here (I was one of those, can still be from time-to-time), or ultra-idealists. They’ll still be around 10 years from now saying same thing, complaining for others to do the work for them then try to take credit for it in some deluded way. I’d avoid that whole scene entirely, mentally and otherwise.

Clive Robinson November 21, 2016 12:11 AM

@ ab praeceptis,

Not even because of 1000 papers people (who, after all, do make some valid points) but rather because I see us in a real mess and I feel this just isn’t the situation to strive for perfection and more papers.

Part of the problem is academia has gone to hell in a handcart these days.

As has been noted, US Unis have become hedge funds, with those at the very top drawing seven figure packages. Paid for by destroying tenure and outsourcing the actual education side of the university to the "lowest bidder". Who in turn forces even lower remuneration onto others. Hence first year students get taught and appraised by second year students and so on, all under the supposed supervision of those holding research positions. However those research positions are more or less "self funding", thus their time is spent chasing grants, networking and kissing arse just to stay where they are. Part of the networking involves "being a face", which means being seen at conferences etc. To get to go you often need to present a paper or two for each conference, so churning out a paper a month and putting them forward to "the committee" of various conferences is normal. Likewise getting papers published even online means "paying your dues" to some "committee". Like any tyranny, promotion is not on merit but on who the current incumbents allow to step up…

One route up is by being "noisy" and churning out papers on the "Spray-n-Pray" basis. So that you get two or more published a year to keep your "Publish or be damned" number high. The result is "progress papers", where five or six papers now contain the content of what would once have been in one paper. The papers get padded with minutia to the correct length.

Another part of becoming a face is to be "topical", that is to "trend spot" and use the "paper light" technique to get what is in effect a buzzword paper in before everyone else, which then gets quoted by subsequent papers… Thus your quote rank goes up and you get treated more as an insider. Then you start "logrolling", where you quote others with political influence… Thus the thousand papers where one or two would do…

Is it a "racket", yes. Is it a "tyranny", yes. Is it "self serving", yes. Is it "productive", not really. Is it "pushing the boundaries", again not really. Is it "effective", not really. Is it a "faux market", yes. Is it "in a race to the bottom", yes. Is it a "financial market", definitely these days. Is it a "rigged market", yes definitely.

Oh one last question “Would you be happy to see your children enter such a tyranny?”…

Clive Robinson November 21, 2016 2:48 AM

@ Nick P, Wael,

With regards to lobste.rs (207.158.15.114), have you seen a change of HTTPS cert recently for them?

Wael November 21, 2016 3:04 AM

@Clive Robinson,

DST root CA:
30-Sep-2000 14:12:19 PT
30-Sep-2021 07:01:15 PT

Let’s encrypt authority:
01-Nov-2016 07:00:00 PT
30-Jan-2017 06:00:00 PT

Lobste.rs certificate info:
17-Mar-2016 09:40:46 PT
17-Mar-2021 09:40:46 PT

There was a recent change as you can see.

yoshii November 21, 2016 3:05 AM

Article topics of important interest:

Please, discuss and comprehend the validities of these concepts posed as rhetorical questions stated as concepts to consider.

No web links provided. Sorry, you will need to accomplish your own searches. And yet I provided these ideas because I consider them to be relevant to people seeking information. Please use it responsibly.

1) Why the “alpha male / alpha dog / alpha wolf” concepts have been debunked and are pseudoscience based upon historical fascism and totalitarianism from the early 1900s is worth comprehending. And yet, perhaps the so-called “omega dog” really is not so bad at all, if “the meek shall inherit the earth”. But civilization cannot be successfully maintained via cliches and oversimplifications and arbitrary assignments of roles. Choice and aptitude and ethics and freedom of thought are rather important, correct? (Rhetorical questions, of course).

2) Why and how even so-called “true carnivores” (such as felines) are capable of surviving without suffering on a vegan diet (as long as toxins and allergic contaminants are prevented from entering the food supply) is worth comprehending.

3) There seem to be several direct correlations between the unfortunate and tragic mistreatments of domesticated animals and the history of the aerospace industry. For example, the launching of animals into outer space beyond the stratosphere to test aerospace systems was and is a tragic part of history. This needs to be acknowledged occasionally, no?

4) Coping with the peculiarities of compound conflicts of interest and multiple domains of unknowns, and how to respond or not respond AFTER the incidents have occurred, is an important concept. This has implications similar to books written about how to respond prudently and appropriately to security breaches rather than just simply trying to prevent them.

5) Dynamic (real-time) prioritization of personal resources in ways that do not compromise ethics, biology, psychological stamina, nor environmental stability and sustainability is worth comprehending.

6) The inherent drawbacks of technological innovations are worth comprehending, along with the Precautionary Principle.
7) Please comprehend why lots of high-tech tools do NOT need to be used most of the time, and perhaps not at all.
8) Please comprehend that addiction to information is a modern-day problem.
9) Please comprehend that the Scientific Method is flawed by a tragic caveat because if the experimentation stage involves or is motivated by or results in suffering or tragedy, then the scientific pursuit may not actually be scientific at all.

For example, if a person or persons are trying to invent a better good or service and, via experimentation or its results, they cause most existing goods and services to become obsolete, then that scientific pursuit is not free of logical fallacies. Thus perhaps in a wider context, the pursuit of technological excellence is flawed and haunted by both the means and results of technological innovation. And yet reverting to medieval primitiveness is not a solution either.

Another example…

If a test and/or repeated testing harms the student or test-taker, then the validity of the test itself as a form of information-acquisition needs to be questioned and the tests need to be halted immediately. For those who question the value of ethics and bioethics, consider the longevity and stamina of civilizational elements worldwide that value ethics and bioethics and that consider torture and science without compassion to be criminal. Do we really need to get into all of that?

Bonus concepts to consider within yourself: What was the internet? What is the internet now? What is its correlation to your livelihood as an individual within your habitat?

EOT.

Clive Robinson November 21, 2016 3:54 AM

@ Wael,

I think that is the root cause of the problem I’m having on this smart phone…

A Little Bit Louder Now November 21, 2016 5:07 AM

Advertising Dollar fed Great Delusion:
https://www.washingtonpost.com/national/for-the-new-yellow-journalists-opportunity-comes-in-clicks-and-bucks/2016/11/20/d58d036c-adbf-11e6-8b45-f8e493f06fcd_story.html#comments

Fairness note: The Washington Post, like Business Insider is owned by Jeff Bezos. BI also removed reader commenting to silence those who disagree. Both sites practice all-consuming Trump yellow journalism.

Summary: both the alt-left and alt-right are out-of-control. It’s only going to get worse. Everyone sing Hallelujah to destroyer-of-society Mark Zuckerberg!

Thoth November 21, 2016 7:18 AM

@Nick P

re: BackdoorCore/”Proven”Core

Not sure if you did read the marketing on the website but it sounds rather problematic. It’s encumbered by patents, closed source and such. Not recommended at all and not even to be approached with a 100 mile stick !!!

Oh, and it even bundles an Over-The-Air backdoor, just for the extra measure that NSA et al. want for their NOBUS ways, with this line: "Over-the-Air Firmware Update: Remotely updating the firmware of the Rich OSs." Wow … so they actually admit that they could reach into the running application/OS and edit its firmware.

Compare it to GlobalPlatform-certified smart card OSes: I would say the smart card OSes have even stronger privacy guarantees, by ensuring that there is no "root" in a GP-SCOS (i.e. MULTOS and JavaCard) and that the applet loader (informally called the CM/Card Manager) has no rights to read or modify any data (i.e. keys, PINs …) in any individual smart card applet existing in a GP-SCOS.

How a GP-SCOS updates is: there is no such thing as an OTA update in its dictionary. The CM is invoked with the correct CM keys and the card applet has only one option, namely to destroy the applet (and all its data, using the hardware's secure wiping method) before allowing the same applet to be uploaded again under the same Applet ID. The applet would have to have its own software-defined backup mechanism, implemented by the card developer (due diligence), to transfer the data that needs to be backed up out of the card before the old applet is destroyed and a new applet with the old keys, user data and PINs is reloaded.

Compare those constructs of "security" microkernels, hypervisors and so on with the GP-SCOS scheme: the GP-SCOS scheme wins hands down (in my opinion), as it takes the approach of ensuring that no applet can effectively spy on or sabotage another, with no concept of OTA updates, no root permissions and so on, where every applet domain is treated as an equal. I don't need to worry, in theory, about the card having what I have been ranting about recently: secure backdoors. Regarding real-world implementations, this is beyond GP standards to control, but from a theoretical point the GP-SCOS provides a highly robust architecture preventing those problems I mentioned.

Remember me mentioning that most of the card manufacturers are getting high EAL ratings for their card OS, with some of them hitting EAL 6 or even 7 for the SCOS? That's one reason: the GP-SCOS ensures that, in theory, sabotage from external and internal circumstances is mostly mitigated.

Why can’t the rest of those code cutters who create “security” microkernels take a page out of GP-SCOS architecture to ensure that “secure backdoor” on a theoretical level cannot be done unless the implementors decides there is a need to deviate from the theoretical aspect in real world scenarios. Comparing GP-SCOS architecture and other microkernel/hypervisor architecture, I think my preference for highly sensitive deployment would still be along the GP-SCOS style.

Link:
http://www.provenrun.com/products/provencore/

ab praeceptis November 21, 2016 8:55 AM

Nick P

Paper based theory, again.

For a start you are arguing with vague definitions and views. Example: OpenBSD.
Those guys are great, no doubt, they try seriously hard and they are experienced and good at it. But: That’s a very different goal post from the kind of security we talk about here.

To put it bluntly, OpenBSD’s goalpost could be described as “not creating crap” (Anyone who has read anything from me knows that I consider that very important and laudable).

But high safety? Forget it. So, fumbling with OpenBSD in this context here, and then in the next paragraph with, say, seL4 is rather, uhm, let’s call it inconsistent.

Btw, let me tell you a funny way to get quite decent code: Create it with Sather or, more modern, with Parasail or Leon.
In other words: One of the few things we should have solidly learned is that C is a very useful meta-assembler but not a PL per se. One can, of course, abuse it as a “high level” PL and that has been done a bazillion times – but we all know the crappy result.
Bernstein coding his chacha reference in C is one thing. Joe and John corp. developer coding an OS in C is a very different thing.

Back to the matter. We use formal notation to capture and convey meanings, qualities, quantities, etc. in a complete, unambiguous and precise manner. Human language, no matter how it is bent for a purpose, cannot provide that, at least not with any bearable efficiency. Simple as that. Plus: I as a non-native speaker might easily mistake some do178 language but I will certainly not mistake a formal spec.

What it can provide, though, is understandability for humans.

Obvious Conclusion: formal spec plus human oriented annotations. ** not the other way around and f*cking certainly not without the formal part **.

I don’t care about all your papers and studies and “experience shows”. I have my own decades of experience and those are speaking a very clear language. Don’t get me wrong, I’m not hating C, quite the contrary; I’ve taught C, I’ve done uncountable loc in C and I would still urgently recommend it to any young developer – the same way I would recommend high-schools to teach Latin (and for very limited use).

C itself is not properly nailed down. Way too many ambiguities and things left to the implementors. Ergo: No matter how smart your wordy do178 or eal specs are, no matter how good your developers are, no matter how smart your spec to code generators are, there will always be some lottery left.

Microsoft is pimping up hundreds of thousands of loc with Hoare triples, proper boundary specs and other means. Certainly not for the fun of it. And it would seem to me that they, having eal and do178 and whatnot golden stickers after all, certainly weren’t easy on “precise” blabla specs. Let me tell you the funny part: Doing that they found (to put it extremely diplomatically) many errors.

So, kindly stop telling me “experience” from papers and studies.

Or look at all them post factum analysers thrown at major code bases. Although these cannot possibly perform properly, as they have no formal “skeleton” to work along but rather must somehow try to make sense out of C code, they still found thousands upon thousands of bugs. Well noted, some of the software tested was eal’d, well word-spec’d, etc.

Now, look at Pascal code (not that I want to recommend Pascal for safe and secure, but it’s well known, simple, about the same age as C, etc.). Way fewer problems. Reason? Less ambiguous, better nailed down and, let me call it, “fragments of formal spec”. Sounds big but is true. Look at ranges. Those are some kind of (primitive – but very, very valuable) formal spec. Or look at “First” and “Last” loops; some kind of formal spec, too (a very primitive but powerful loop invariant).
And, as an important side note: Pascal simply doesn’t allow certain kinds of human errors thanks to that.

Finally, a do178, eal and whatnot spec’d C library may tell me – the human – a lot in its spec. But not my compiler. It may or may not do what it says it does and the way it says it does.
Proper pre- and postconditions, on the other hand, tell both me and the compiler a lot more. Granted, I give you that, those conditions may look like gibberish to many and aren’t easy for the uninitiated, but they serve the job very well – and we all want core infrastructure devices safe, more than anything, right? Moreover the gibberish can be commented. Plus it depends on the tool. The Spark stuff, for instance, is rather human-friendly. B (which I don’t use but like) is another example from the spec corner; quite friendly and not too hard to use.
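For the uninitiated, here is the shape of such a contract, written as runtime asserts in Python purely for illustration; the whole point of SPARK/B-style tooling is that these conditions are discharged statically, before the code ever runs.

    def isqrt(n: int) -> int:
        """Integer square root with an explicit pre/postcondition."""
        # precondition: caller must supply a non-negative integer
        assert isinstance(n, int) and n >= 0, "pre: n >= 0"
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        # postcondition: r is the largest integer whose square does not exceed n
        assert r * r <= n < (r + 1) * (r + 1), "post: r = floor(sqrt(n))"
        return r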

ab praeceptis November 21, 2016 9:22 AM

Thoth

I agree to a large extent. Genode is a pita to use and always seems to be in a floating, half-usable state, and many others are in an even more lamentable state. Lots of nice noise, often from academic labs, but little to actually and practically use.

But (as you indicate) the problem goes deeper. I’m sometimes smiling (it’s better than crying …) at the current “microkernel equals secure” hype. That’s, of course, BS to a large degree.
Yes, microkernels can be made safe much, much more easily, up to the point that one might reasonably say that a given microkernel is safe while a given monolith cannot possibly be. And, as a rule of thumb, today’s microkernels actually are rather safe (at least as compared to monoliths. And some really are).

The reason is very simple: To create 5 or 10 thousand lines of safe code is feasible; to make some million existing loc safe isn’t (at least not today).
But a) one still needs drivers, servers, etc. Splitting the critical core from all the stuff around is a good thing to do but it doesn’t change the whole game.
And b) Then putting linux on top of it is just hilarious. The best possible net value one can get out of that (given the microkernel really is flawless) is a not-worse situation than simply running linux by itself.

The trick they play is to put everything into a virtualization context. So, 1 microkernel plus on top N linuxes. That’s indeed some advantage but it still doesn’t provide one a safe OS. Linux still is linux.

Plus, even if they don’t put linux on top of it, they run into problems. Either, like e.g. Minix 3, they bring in the package world of NetBSD et al., or, like some others (e.g. PikeOS), they basically start from scratch and give you a rather useless system (in terms of what mortals mean by an OS, i.e. something usable right away).

I personally feel that the latter approach is the better one because it gives us at least something solid to build upon.

As for your Javacard universe I wouldn’t be too trusting. It may not seem so but there is still a lot between your code and the mcu. Frankly, I would be surprised if we didn’t find some very ugly things sooner or later.

All that makes me finally come back again to the Russians and, somewhat less, the Chinese and hopefully one day all of us with open RISC projects.
Simple reason: a) clean slate (rather than the not at all funny x86 hodgepodge zoo), b) they (the Russians) have their own processors (the Chinese soon, too), c) microkernels for those. Plus, sooner or later, trustworthy OSs (or, more precisely, publicly available ones. I’m quite sure that e.g. the Russian military already has some).

Thoth November 21, 2016 9:55 AM

@ab praeceptis

Indeed, when I was talking about GP-SCOS (not just JavaCard but also the C-based MULTOS and other card OSes) I did mention that the GP-SCOS architecture was nice on paper, and I did follow up by pointing out that real-world implementations also vary.

There have been attacks on the implementations in the past which worked so I wouldn’t be all too trusting either. The one thing I did promote about the GP-SCOS is that all the “domains” are theoretically treated equally whereas other schemes like TrustZone puts too much power into the hands of the “Secure World” which can turn dangerous if misused.

It would be nice if we can one day have the RISC-V project succeed without yielding to commercial and governmental pressure, since RISC-V promises to be open and secure. With something like the open and secure chip that RISC-V provides, the community can start creating microkernels and secure OSes for a whole range of projects, including security-critical ones, without needing to dish out tonnes of NDAs, since a high-quality open source and open hardware project is much more desirable than one filled to the top with NDAs and closed source… that is, if the RISC-V project survives commercial and governmental pressures 😀 .

Clive Robinson November 21, 2016 10:40 AM

@ Alittlebit louder Now,

Everyone sing Hallelujah to destroyer-of-society Mark Zuckerberg!

An analogy for you,

    Everyone knows where the local house of ill repute is, but only those who don’t care get seen going in there…

Those who live by social media should expect to die socially by such media. Nobody forced them to take part in that echo chamber orgy of self-flagellating aggrandizement…

Nick P November 21, 2016 11:04 AM

@ Clive Robinson

The site is now using a Let’s Encrypt certificate. The encryption on my end is TLS_ECDHE_RSA with ChaCha20, Poly1305, and SHA256. SHA-2 fingerprint begins with 1F:32:3F:9E. It’s a Rails app so I wasn’t really worried about the security aspect. I assume it and this box are compromised. 😉

Nick P November 21, 2016 12:19 PM

@ Thoth

re ProvenCore

Remember it’s intended to be a commercial product marketed at the types that buy those. They’ll want things like OTA. They might take it out for customers if those customers ask. They might allow the source to be reviewed by high-paying customers. Who knows. Point was they finally had a high-assurance OS in the works. Way better than that Bertin crap I saw a long time ago.

“I would say the smart card OSes have even stronger privacy guarantees ”

They do. They’re too simple for most embedded systems, though. The I/O and real-time requirements alone start having effects on how the OS can be designed. I’m for trying to use them or their design style more often where possible. I did that crazy concept once where I used a pile of smartcards with MULTOS for web, signing infrastructure. Also considered using them in container VM’s for per-app TCB but Ada or Java runtimes in RTOS’s were good enough. Even the formal methods like Kestrel’s for JavaCard that were so great have been superseded by the likes of mCertiKOS.

So, those are a nice, interim solution for certain problems. Clean-slate efforts should use modern methods. They can’t help on embedded where the RTOS and separation kernels are still superior. Or doing something like real-time variant of JX done with MULTOS-like architecture. Just brain-storming here, though.

“the GP-SCOS scheme wins hands down (in my opinion) as it takes the approach of ensuring that no applet can effectively spy on or sabotage another, and there are no concepts of OTA updates, no root permissions and so on; every applet domain is treated as an equal.”

That’s what separation kernels & ARINC schemes in safety-critical do. They brick-wall all partitions from each other. They take the extra step of covert channel mitigation. A partition is often designated as master for administrative code (e.g. schedulers, loaders) for convenience. Other features like OTA are optionally built on top. All of these families of security schemes are built on the same principles going back to the original, high-assurance methods. Hence the similarities. The smartcards may actually be riskier in that they use less HW protection at OS and app level due to the constraints of their hardware. They’re the prior model watered down a bit, then with a stronger implementation at the code level.

“That’s one reason: the GP-SCOS design ensures that, in theory, sabotage from external and internal circumstances is mostly mitigated.”

They all have this in common. The EAL6-7 process is designed with malicious developers in mind. The idea is that every piece of code is justified against requirements, everything is simple, every state is accounted for, every leak measured, and the binary embodies all that. Regardless of whether certifying or not, such principles should be in any high-assurance product.

Clive Robinson November 21, 2016 12:21 PM

@ ab praeceptis, Thoth,

And b) Then putting linux on top of it is just hilarious. The best possible net value one can get out of that (given the microkernel really is flawless) is a not-worse situation than simply running linux by itself.

The example I tend to use is that of modern applications on top of secure kernels.

Take a web browser: it can be viewed as multiple terminal programs connecting to multiple servers. In the old model each connection would have its own OS-controlled and secured process. With a web browser it’s one process, with no OS control or security between the connections. The secure OS walls have been torn down, and thus the only security between the connections is whatever the code cutters hacking the web browser together can somehow cobble together whilst concentrating on bells, whistles and go-faster stripes.

Thus you had the choice of OS security put together by well-seasoned and knowledgeable engineers, and now that of an application slapped together with string and chewing gum by accident-prone boy racers…

As I said some time ago when talking about Castles-v-Prisons, your everyday programmer has neither the knowledge nor the skill to write code securely, and more importantly management will invariably not give them the time to do so even if they could write secure code, so they rarely if ever gain the skills and knowledge (Catch-22). My proposal was that application coders should use a very high level approach using “tasklets” bolted together in a way similar to *nix shell scripting. The “tasklets” would be written by those who could code securely, to an API that was not just strongly mandated but also interfaced to a security hypervisor. Thus rogue behaviour could be detected and halted via, amongst other things, “code execution signatures”.
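A minimal sketch of what that might look like to the application coder, in Python; everything here is hypothetical, the hypervisor hook is reduced to a comment, and the point is only the shell-script-like composition of pre-vetted parts.

    # Hypothetical tasklet API: the registry stands in for the strongly mandated,
    # hypervisor-backed interface; application coders only compose what is in it.

    APPROVED_TASKLETS = {}      # name -> callable, written by the security-skilled few

    def tasklet(name):
        def register(fn):
            APPROVED_TASKLETS[name] = fn
            return fn
        return register

    @tasklet("read_lines")
    def read_lines(path):
        with open(path) as f:
            return [line.rstrip("\n") for line in f]

    @tasklet("grep")
    def grep(lines, needle):
        return [l for l in lines if needle in l]

    def run_pipeline(steps):
        # a real system would have the security hypervisor check each tasklet's
        # execution signature here and halt the pipeline on rogue behaviour
        data = None
        for name, *args in steps:
            fn = APPROVED_TASKLETS[name]     # only vetted tasklets are reachable
            data = fn(*args) if data is None else fn(data, *args)
        return data

    # failures = run_pipeline([("read_lines", "/var/log/auth.log"), ("grep", "Failed")])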

I’m fairly sure Nick P will have a counter point to this, and Wael will point out other advantages. But my main point still stands that it would leverage the crucial security skills of the few to the masses.

But also –as a side effect– for management it would not just decrease development time, it would also significantly reduce the maintenance burden (what some call “technical debt”).

ab praeceptis November 21, 2016 1:27 PM

Clive Robinson

I for one agree with your “tasklet” approach.

Partly because I perceive it to be an extension to the library approach (which, unfortunately, hasn’t been properly understood, used, and exhausted by far).

As usually, one must look a little deeper to see the beauty (in this case) or the ugliness (in regrettably many other cases) of something.

The real positive bang with libraries or tasklets is not even in comfort/time saving but rather in quality and abstraction, often in a closely related manner, btw.

The problem is that IT is a gigantic field with hundreds of corners and niches each of which requiring a solid amount of know-how and expertise. Using libraries/tasklets developers can, if the library people understood and did their job well, enjoy a high level of abstraction and at the same time a high level of quality without having any significant detail knowledge (another reason to like verifiable containers, to offer an example).

And a short sidenote re. virtualization: I’m cautious and, while I see use cases and advantages, I also see a lot of buzz, marketing hype, plain ignorance and stupidity, and other not so nice factors.

For a start, virtualization adds considerable complexity. Complexity, however, is an arch foe of safety. I’d think that almost always any given processor without VT is more secure than one with it.
Moreover, it seems, one didn’t ask “what for?”, but one should have. Many, many loudly praised (with profit in mind, of course) VT use cases could have been achieved with less effort and risk using jails and the like. An example is all those “save costs and gain flexibility by spreading multiple virtual machines over a few hardware machines”. For the vast majority of those cases I fail to see why VT would be better than jails or the like.
And btw. trying to squeeze more performance or profit out of something (in IT) usually isn’t exactly what safety-minded people would want. Somewhere there’s always a price to be paid; regrettably it often seems to be in the area of safety.

As for all those microkernel-based OSs, I’m smirking, but I can understand the drive behind it. It’s an easy way to have a “secure product” and/or to show something for your research grants. Just create yet another variant of e.g. L4 and glue some linux on top of it, et voilà, a new OS is born and the marketing droids can go on about a “secure OS”. Sometimes even with astonishing success; the German Kanzlerphone is an example.

Clive Robinson November 21, 2016 4:19 PM

@ Nick P, Wael,

The site is now using a Let’s Encrypt certificate.

With a –according to Shodan– key length of 8K bits, which I suspect is the cause of the problem, as could be ChaCha…

What I need is a sacrificial goat Linux box with an up-to-date browser (Oh for a Knoppix DVD when you need one 😉 ) that I can then use with this phone as a broadband modem. It’ll have to wait until tomorrow when I can download an ISO.

Oh Nick P have you looked at the certs for this server… Bruce may have a problem if Microsoft / Google keep their promise.

ab praeceptis November 21, 2016 5:15 PM

let’s encrypt?

Not with me!

I don’t trust them. Mainly 2 reasons:

  • no golden sticker. But that might be forgiveable as they have many many colourful company logos on their site.
  • not systemd’d. I don’t trust people who just ask me to have some crap running in my crond. If let’s encrypt would be an honest and good and whoooaa secure endeavour they would be systemd’d plus in the nvidia driver.

Moreover I don’t trust 90 day certificates. That’s too long. I want 10 second certs. Actually I’d like one off certs, a new one for each session but 10 seconds seems acceptable.

P.S: Do I need a server to use those certs? And, if so, is my iphone good enough or do I need windows 10 for that?

Thoth November 21, 2016 5:42 PM

@Nick P

“The smartcards may actually be riskier in that they use less HW protection at OS and app level due to constraints of their hardware.”

That is not true as almost every smart card I have handled or seen has hardware memory protection built in, from NXP, Infineon, ST, Sony and other card makers. I remember @Figureitout recently exclaiming at how much memory is actually present in these cards despite their tiny size.

“I did that crazy concept once where I used a pile of smartcards with MULTOS for web, signing infrastructure.”

I have managed to create a scalable, load-balanced and distributed smart card environment recently, which I have been experimenting with on my own. The idea is a highly scalable SEE environment using as many smart cards as one can stick into one’s computer (multiple USB card readers) and then executing business-critical code of limited size on all the smart cards. One smart card may not be all that fast at handling critical code, but with a swarm of them and the load-balancing logic I created for this smart card cluster, I am able to make them perform faster as more cards are added to scale the performance. With this design, I am able to simply attach 64 smart cards, or as many as the host computer’s USB controller can handle, and run them with M1cr0$0ft Windows as their host OS (since realistically most financial and commercial enterprises love Windows and I have to emulate the scenario as realistically as possible). The smart cards essentially act as multiple hardware-secured external computers, with their higher-assurance OS and HW as trusted compute bases, while Windows simply supplies power and a communication gateway between the USB smart card readers and their cards and, if necessary, acts as an Internet interface as needed.
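A toy sketch of the dispatch logic, with a stand-in CardReader class in place of whatever PC/SC wrapper is actually used; none of this is the real code, it only shows the queue-based “whichever card is idle” idea.

    import queue

    class CardReader:
        """Stand-in for one USB reader plus card; send_apdu would wrap a PC/SC call."""
        def __init__(self, name):
            self.name = name
        def send_apdu(self, apdu: bytes) -> bytes:
            return b"\x90\x00"      # placeholder for the card's real response

    class CardSwarm:
        """Dispatch work to whichever card in the pool is currently idle."""
        def __init__(self, readers):
            self.idle = queue.Queue()
            for r in readers:
                self.idle.put(r)

        def submit(self, apdu: bytes) -> bytes:
            reader = self.idle.get()         # blocks until some card is free
            try:
                return reader.send_apdu(apdu)
            finally:
                self.idle.put(reader)        # card returns to the idle pool

    # swarm = CardSwarm([CardReader("reader%d" % i) for i in range(64)])
    # worker threads call swarm.submit(...); throughput scales with the card count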

V.S.T. (Volunteer Surveillance Team) November 21, 2016 6:39 PM

@Markham

http://www.lapdonline.org/mission_community_police_station/content_basic_view/9066

What is the Volunteer Surveillance Team (VST)?

VST is a group of community volunteers living in the LAPD Mission Area that are specially trained and supervised by LAPD officers to observe and report criminal activity.

How does a surveillance detail operate?

When a crime pattern is identified, the LAPD officers in charge plan how a surveillance detail would be safely and effectively conducted. Volunteers are assembled at a roll call and assigned predesignated observation posts, typically located in cars, vans, buildings or rooftops. The VST members who observe criminal activity in the surveillance area report this via police radio to patrol officers assigned to the surveillance detail. These officers respond to the call, stop and question the individual(s) suspected of the criminal activity and make an arrest, if warranted….

r November 21, 2016 7:03 PM

I wonder how long it will take people to start asking if @Clive and @bytherules are advocating DRM.

Not really, but along similar lines: virtual processors and live code (a la battle.net’s Warden).

dennis November 21, 2016 7:26 PM

@ yoshii said,

Some intrestin stuff,

“If a test and/or repeated testing harms the student or test-taker, then the validity of the test itself as a form of information-acquisition needs to be questioned and the tests need to be halted immediately. For those who question the value of ethics and bioethics, consider the longevity and stamina of civilizational elements worldwide that value ethics and bioethics and that consider torture and science without compassion to be criminal. Do we really need to get into all of that?”

There’s a fine line between propaganda and reality, and propaganda is a manifestation of indoctrination, as is academia. In order to manifest the power of a herd, there needs to be an adequate degree of conformity. Thus, we have students that are test-takers, and they are trained for it specifically to conform to a set of predetermined provisions, sometimes on a merit system, which was carefully designed and imposed.

“Bonus concepts to consider within yourself: What was the internet? What is the internet now? What is it’s correlation to your livelihood as an individual within your habitat?”

The internet is a set of metrics, on all layers.

Nick P November 21, 2016 8:07 PM

@ Clive Robinson

Oh shit, it’s Comodo. Yeah, he better switch to Let’s Encrypt. And how do you still not have a sacrificial box? About the only way you can have a real web experience these days without screwing yourself up.

@ Thoth

“That is not true as almost every smart card I have handled or seen has hardware memory protection ”

I was probably unclear. Compared to what’s on desktops or well-equipped embedded, the stuff on smartcards is dropping down a notch in quite a few ways. They usually have MPU’s instead of MMU’s. The old ones I looked at didn’t have IO/MMU’s. The CPU’s were slower, usually not 32-64 bit, and had fewer accelerated instructions. The drop was significant enough that neither portable libraries nor certified kernels could be ported to them. Either not easily or not at all.

So, the software and OS’s for them had to be watered down a bit, with different security models and fewer protections. They did good with a few given the constraints. Not the same as you’d get with a high-end ARM, POWER, or Intel on embedded with the same extras in the hardware. The context-switching alone makes a world of difference if you’re doing microkernels. The penalty was too high on the older hardware that MCU’s perform similarly to, whereas desktop chips a few years old can do 100,000+ context switches a second with the CPU 90+% idle.

“I have managed to create a scalable, load-balanced and distributed smart card environment recently, which I have been experimenting with on my own.”

Well, that’s pretty awesome. Keep at it. 🙂

“with a swarm of them and the load-balancing logic I created for this smart card cluster, I am able to make them perform faster as more cards are added to scale the performance.”

That was my concept. Enough of them crammed into a box or rack with good load balancing should cover for the speed of any one of them. I also hoped the connectors & chassis should be pretty cheap, to the point that all of that together should cost around a few servers or a rack of them at worst. Older, high-assurance products got up to $100-250k per unit. It would be a great improvement.

“while Windows simply supplies power and a communication gateway between the USB smart card readers and their cards and, if necessary, acts as an Internet interface as needed.”

Next step in such a design is to do the communications in hardware, esp. FPGA, so the PC handling external input can’t interfere much with the smartcards’ I/O or main software. As in, what goes to them only does so in the safest way, with the onboard logic handling the rest. Output can be modulated in time to suppress timing channels within the encryption routines, especially if the crypto is custom, where mistakes are likely to be made.
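The time-modulation part can be as simple as only releasing results on a fixed tick, so the observable latency no longer depends on what the card was computing. A hedged Python sketch, with an arbitrary interval:

    import time

    RELEASE_INTERVAL = 0.05   # seconds; illustrative value only

    def timed_release(compute, request):
        """Run the sensitive computation but hand back the result only on the
        next fixed tick, masking how long the computation really took."""
        start = time.monotonic()
        result = compute(request)
        elapsed = time.monotonic() - start
        # sleep to the next multiple of RELEASE_INTERVAL
        time.sleep(RELEASE_INTERVAL - (elapsed % RELEASE_INTERVAL))
        return result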

liaklh. November 21, 2016 8:44 PM

One of the largest security flaws is how easily people flip for a bit of money/power/influence.
You can get a rather large bonus if you cut the company’s security and maintenance budget, fire everyone who doesn’t reinforce your position, and move on to the next company after showing how you turned the books around for 12 months.

phil November 21, 2016 8:51 PM

Microsoft’s security certificate system. Don’t play with it; it might break, then start working again in an unexpected or expected way, depending on your faith.

Noted November 21, 2016 8:56 PM

Note for Tails users.

Riseup.net has not updated their warrant canary on time and is responding to questions about it with vague, indirect language. They seem to be avoiding confirmation or denial of the canary issue.

This is potentially relevant because they recently pulled certificate fingerprints for several subdomains, including labs. Labs.riseup.net is the location of the Tails git repo.

Proceed with caution.

Thoth November 21, 2016 8:57 PM

@Nick P

re: Secure Element swarm

“Enough of them crammed into a box or rack with good load balancing should cover for the speed of any of them. I also hoped the connectors & chasses should be pretty cheap to the point that all of that together should cost around a few servers or a rack of them at worst.”

I know of a secure element maker who offered to sell me a server containing 64 pieces of smart card chip crammed inside via daughter boards and I have been talking with the manufacturer. Also, the manufacturer showed interest in my concept of load-balancing and turning chip cards into swarms; I sent them a load of screenshots from my experiment, which piqued their interest, and I was also offered the opportunity to supply my pet technology for possible commercialization.

I will speculate that the 64-chip smart card server should not cost more than USD $5000, but it is really up to the manufacturer to set the initial pricing. I talked to another manufacturer I am in contact with (the Ledger guys), who told me I could do 64 smart card chips in a single box for around USD $600, which is an extremely low price for that many chips in a single box to do load balancing.

If you have been reading my recent emails, I think I had some information I sent in the past but I can’t remember which one it was.

“Next step in such a design is to do the communications in hardware, esp FPGA, so the PC handling external input can’t interfere much with the smartcards’ I/O or main software.”

I am thinking of collaborating with the manufacturer: I contribute my scalable, load-balanced smart card chip technology while he supplies the server with 64 smart card chips embedded. I have not seen the server nor the specs yet since it’s still an initial discussion with my manufacturer. Maybe I can try to influence the manufacturer to use an FPGA for the load-balance signaling as you mentioned, but it’s still really up to how my manufacturer wants to create the hardware, since I am mostly the code cutter and the one who cooks up strange ideas for Secure Execution settings.

If all goes well, I might want to commercialize the technology for load-balancing and scaling multiple 16-bit smart card CPUs, since this kind of weird and unusual technique is rarely found in the legacy world of smart card development, where most people are content with the card being a payment or ID card at best and rarely step outside their comfort zone to push these tiny 16-bit CPUs to their limits at a very cheap price.

Thoth November 21, 2016 9:05 PM

@Nick P

re: Comodo CA

The entire TLS backbone revolves around a few “Super CAs” that control the entire CA community and possibly even the standards. Comodo, GlobalSign and Symantec/VeriSign are the few “Super CAs”. For better or worse, having a bottleneck with these CAs can be both useful and harmful at the same time.

Clive Robinson November 22, 2016 12:21 AM

@ r,

I wonder how long it will take people to start asking if @Clive and @bytherules are advocating DRM.

What a curious thing to say… What on earth triggered that thought?

As I’ve repeatedly said, “technology is agnostic to use” and it’s all about the “directing mind”. That is, the gun is not to blame for the bullet hole in you, nor was the finger that pulled the trigger, but the mind that directed it to.

Some but not all forms of the underlying technology used in DRM systems can also be used for security. A case in point being “code signing” of binaries; the underlying signature system works as well for any file, be it executable code or a media file.

But “to answer your charges” no I don’t advocate DRM because “Mainly it does not do what it says on the tin”[1]. It’s a similar reason I say “Code signing is no indicator of safe code, or any code quality at all”. But it’s also the same problem as the far endpoint trust issue with encrypted communications.

Such are the problems with “Trust” in an “untrusted environment”.

[1] More specifically, “off line” DRM systems are very fragile due to the use of “security by obscurity”: once the secret becomes known (and it almost always does) the DRM is a bust (see the history of OSS DVD players). In that manner it’s analogous to the reuse of KeyMat in One Time Pad/Tape systems. Thus the only way the IP-controlling entity’s DRM system has a chance of working is by trying to protect the secret at all costs –to you– on your computer, by denying you the rights and privileges pertaining to ownership (which is in implementation the same as theft).

Clive Robinson November 22, 2016 1:06 AM

@ Nick P,

Oh shit, it’s Comodo. Yeah, he better switch to Let’s Encrypt.

It was not the CA I was thinking about; Shodan indicates the certificate uses SHA1. Microsoft, Google and others have indicated they will no longer support the use of SHA1 by 2017. If Microsoft say issue an unavoidable patch to Win10 that removed SHA1 usage…

And how do you still not have a sacrificial box?

The phone is my sacrificial box. With the exception of an old netbook and a pad, none of my other PC’s have ever been connected to anything other than my internal secure test network. I’ve also mentioned that for security reasons I do not have an Internet connection at the dead tree cave location, nor am I ever likely to have cause to do so[1]… I can use the old netbook as a goat but I need an up-to-date OS/browser as a lot has happened in the past few months, as you’ve probably noticed.

[1] I actually shocked a rather nice young lady in a bank today. I’m looking to open another savings account with a better rate of interest. She mentioned non-branch banking (Internet/phone) and I casually mentioned I don’t use them. She asked why and I said in respect of wired comms I’m “off grid” and I don’t trust mobile and neither should anyone else. She then made the mistake of asking why, so I showed her the article about smart phones doing an ET to China with every keystroke. I rarely see a young lady give me the “surprised jaw drop look” these days and she did look “kinda cute” with the “business mask” gone.

tyr November 22, 2016 3:11 AM

@Yoshii

The go-to source for educational explanation
is John Taylor Gatto, particularly in re the
use of testing.

For science the best is Feynman, “if you’re
not doing the experiments, it isn’t science.”

@Clive

The best argument against DRM is that it might
shut the legitimate owner out of his own work
later. Something that has already happened a
few times. With the mad scheme to make it a
100 year wait for restrictions to drop on the
knowledge of previous humans, there is no
way to guarantee that the machinery to even
play unrestricted materials will still exist.

The silly assumption that adding extra problems
into an ephemeral medium is a solution to any
problem is ridiculous. Short-term thinking will
be the death of the whole race because not
everything is measured in quarterly increments.

@Wael

One thing I noticed when encountering ‘random’
in comp usage was the idea that you could get
the same stream from the same start seeding.
That was a completely weird definition of the
word random since the previous definition had
non-repeatability built-in. I’ve seen the same
thing occur in chaos theory. It’s a bit more
sophisticated but the strange attractors seem
to imply that what is being examined isn’t
really chaos but some pseudo version of the
idea.

This all might just be Wittgensteinian, as
language can’t be stretched to speak about the
unspeakable things that exist.

ab praeceptis November 22, 2016 3:52 AM

r

Advocating drm, me? Unfortunately, I can’t respond to that because I didn’t pay the license fee for the words I would need.

r November 22, 2016 4:07 AM

The DRM snipe was insincere, I sat on it knowing the parallels and dual use aspects for a day before it came out formulated by (a) Little Miss Information. @tyr covers the positive and negative overlap I was thinking about.

I definitely recognize that there are major uses for blobs regarding DRM that include both code and data; I call them blobs specifically because of that. A virtual processor, something reality-agnostic like Java and others, even lands at the compiler and intermediate language level. Which I think is sort of where something like LISP sits?

So, ‘DRM’ technologies predate and are of major use in each of these layers, in my mind.

Sorry for painting you incorrectly. 😉

Wael November 22, 2016 7:34 AM

So here I am minding my own business, trying to have a nice cup of tea (Tetley) to start the day. Then I come across this by @Clive Robinson:

I rarely see a young lady give me the “surprised jaw drop look” these days

Almost lost my cup of tea. I guess you paid me back with interest 😉

Wael November 22, 2016 7:38 AM

@tyr,

language can’t be stretched to speak about the unspeakable things that exist.

Politicians and attorneys are good at that! The words we use keep mutating!

65535 November 22, 2016 9:54 AM

@ Republic Rat and

U.K finally passes their “snoops charter”

I assume this applies to US citizens traveling to the UK. It seems to target journalists with overseas sources. What can we do about it?

@ Clive

You have talked about it in the past. But, now that it is on paper what is the worst that can happen with this new law? What is the best that can happen? What can be done about it?

[FaceCrook spying]

@ Come Play Me: Facebook Dangerously Easy to Weaponize and r

I would guess FB, Google, Twitter and LinkedIn are all about data mining their users. Their real customers are the government, credit rating agencies, and advertisers. Those social media services will surely be fully weaponized at some point.

But, think about the possibilities of disinformation. Say, your résumé is not good. Why not yell over Facecrook that you give your time to charity, kittens and so on. I can imagine that the CIA moles do that all the time.

I don’t use FaceCrook. I agree with the NYT passage below.

“If you’re serious about making an impact in the world, power down your smartphone, close your browser tabs, roll up your sleeves and get to work.” -NYT

http://mobile.nytimes.com/2016/11/20/jobs/quit-social-media-your-career-may-depend-on-it.html

@ Thoth

“Protect the endpoints, not just the transmission.”

[and]

“The problems are usually on the manufacturer side where they do not have the same robust security process models used for smart card chip fabbing where the fabbing and personalization processes are also taken into consideration when issuing the smart card chip’s CC EAL certification. If the hardware security existing in most IoT chips can be enabled and securely preloaded with unique activation keys and the distribution of activation keys can be done in a secure manner (this is the problematic part), then IoT would be much more secure and less vulnerable to Mirai-like attacks where a cache of 60+ default passwords were all that’s needed to get access to an IoT device [to weaponize DVRs for DDoS attacks]. Of course the cost would be more expensive but that’s the necessary money required to be more secure.” -Thoth

That is a good solution. How do you distribute the activation keys in a secure manner? It probably can be done but I cannot think of a way.

It appears to me that the main components of these 620 Gbps to 1 Tbps attacks are a C&C server hidden behind a shady hoster like CloudBlair [sp?], Universal plugNpray, a large dynamic DNS company such as Dyn, and a strain of Mirai code.

http://www.theregister.co.uk/2016/10/21/dns_devastation_as_dyn_dies_under_denialofservice_attack/

Is there a way to break this combination at the routing level or some other level? Why don’t these Cams and DVRs have Antivirus protection? Or, is AV just another useless piece of software? I will say that I have never seen any AV software for IP DVRs or Cams. Could this be a business opportunity for AV makers?

@ r

“…nobody has asked a single question about Malaysia Airline’s double hitter.”

Yes, on the MH370 downing in the Indian Ocean: there were a number of Freescale/NXP workers aboard, and there has been speculation about that.

Who knows, it could have been a pilot gone bananas, some lithium battery fire, or the CIA or Chinese terrorists trying to take control of the airplane – which is now scattered on the bottom of the ocean. We will have to wait until the wreckage is found.

http://www.ibtimes.co.uk/malaysia-airlines-plane-mh370-latest-conspiracy-theory-were-freescale-semiconductor-top-employees-1440097

@ V.S.T. (Volunteer Surveillance Team) • November 21, 2016 6:39 PM

“VST is a group of community volunteers living in the LAPD Mission Area that are specially trained and supervised by LAPD officers to observe and report criminal activity.”

This sounds troubling. The LAPD is enlisting citizens to snitch on one another under the guise of a “Community Program.”

To all legal experts: How do citizens “monitor and report crime” to the LAPD without becoming an extension of the LAPD [or a victim of a crime]?

Are these “citizen detectives” trained in observing crime and all of the related legal statutes? Can these citizens prove probable cause? Do these citizens need a warrant? Are these citizen detectives going to get a Stingray to eavesdrop on cell phone conversations? How far will it go? Will this turn into a set-your-competitor-up, self-feeding situation? I think this will end badly.

Thoth November 22, 2016 10:11 AM

@65535

The thing is to make the activation key or password not only difficult to acquire but also one-time use only: once used, it becomes useless.

Assume there are two modes, administrator mode and user mode, where the administrator mode is backed by a master RSA keypair created inside an HSM at the factory (just assume it’s a secure environment). The factory’s HSM injects the factory public key into the IoT chip, and the factory master key is used to reset the IoT device to a fresh state in case users want a reset of the device. Some may argue it can be used as a backdoor, but there are many ways to enable a backdoor, not just a master reset keypair.

In the factory, the HSM also generates a symmetric activation key which is injected into the IoT device’s memory, and a counter is set so the activation key can only be used once. A PIN mailer feature allows the HSM to drive a printer to print the activation keys onto stickers. The stickers are stuck onto the IoT devices, which are shipped off. When the user unboxes the IoT device, they use whichever access method the IoT manufacturer provides, be it a built-in display and keypad or an HTTPS connection to the device’s internal embedded web server. The activation key is entered and, at the same time, an administrator PIN/password for the device is set.

Once the device is successfully activated, its counter is set to disallow using the activation key again. Only the factory’s master keypair can sign a message that reverts the IoT device to factory state, at which point a new activation key is provisioned from the factory to the device, whether by offline copying of hexadecimal bytes, a certificate format, or OTA updates, whichever the IoT device maker chooses.

The above is not claimed to be fully robust or a high-assurance design; the main goal is simply not to have static activation keys or passwords, where the compromise of a single IoT device would compromise thousands or millions more. Goals like preventing backdoors and all that can be discussed later, once the basic problem of static keys and passwords has been removed.
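A minimal Python sketch of that flow, under the stated assumptions; the names and the signature check are placeholders, and a real device would keep the hash and counter in tamper-resistant storage.

    import hashlib, hmac

    def verify_factory_signature(factory_pubkey, signed_reset_msg) -> bool:
        # placeholder: a real device verifies an RSA/ECDSA signature here
        raise NotImplementedError

    class IoTDevice:
        def __init__(self, activation_key: bytes, factory_pubkey):
            # only a hash of the key lives on the device; the sticker carries the key
            self._act_hash = hashlib.sha256(activation_key).digest()
            self._uses_left = 1                  # counter: the key works exactly once
            self._factory_pubkey = factory_pubkey
            self.admin_pin_hash = None

        def activate(self, presented_key: bytes, new_admin_pin: str) -> bool:
            if self._uses_left == 0:
                return False                     # key already consumed
            ok = hmac.compare_digest(hashlib.sha256(presented_key).digest(),
                                     self._act_hash)
            if not ok:
                return False
            self._uses_left = 0                  # burn the activation key
            self.admin_pin_hash = hashlib.sha256(new_admin_pin.encode()).digest()
            return True

        def factory_reset(self, signed_reset_msg, new_activation_key: bytes):
            if not verify_factory_signature(self._factory_pubkey, signed_reset_msg):
                raise PermissionError("reset not signed by the factory master key")
            self._act_hash = hashlib.sha256(new_activation_key).digest()
            self._uses_left = 1
            self.admin_pin_hash = None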

r November 22, 2016 12:24 PM

@65535,

Believe it or not, I had completely forgotten about that aspect of the questions. Maybe it dislodged from my subconscious? Who knows, I was more concerned about the implications of two planes from the same company dropping out of the air within a couple months by the same operators.

I suppose on the surface it looks like MA was just falling apart economically, and the twofer could’ve been down to equipment/repairs. (Well, one was shot down with a BUK (maybe, here’s looking at you, trolls) but w/e).

I guess I should keep my crazy mouth shut, I wouldn’t want to become part of one of the ongoing disinformation campaigns or have a plane land on my head or home.

Freezing_in_Brazil November 22, 2016 12:29 PM

@Secrets of TPP

My dear. The Brazilian requirement for local data storage stems from the Snowden disclosures. I support it. It was meant primarily to keep some measure of privacy for the citizenry [or for the ruling class, you may argue, but that point is moot]. The decision is not grounded in national security alone. There are real privacy concerns, since Brazil is a real democracy.

Internet protections down here are strong. There’s an entire body of law outlining the rights of netizens [see Marco Civil da Internet].

Nick P November 22, 2016 1:51 PM

@ Thoth

“The entire TLS backbone revolves around a few “Super CAs” that control the entire CA community and possibly even standards. ”

Comodo is worse. They had issues in the past. Most recently, they tried to trademark “Let’s Encrypt” while having nothing to do with the initiative. They retracted that after much community pressure. Bruce would probably be better off switching away from Comodo regardless of which major company or nonprofit he switches to.

“I know of a secure element maker who offered to sell me a server containing 64 pieces of smart card chip crammed inside via daughter boards and I have been talking with the manufacturer. ”

I’d be careful about talking to them about specifics. If you don’t have a patent, the HW makers will rip you off to steal your market-share. When talking to them, just say you need (components here) in a board while keeping quiet about how you plan to use them. Tell them you can’t give too many details since it’s covered by NDA’s surrounding development of a product.

In any case, that someone is already making hardware for it sounds really cool. That it’s $600 is better than I thought it would be. That’s looking at close to $10 a chip. Assuming a chip per transaction, processing a Facebook level of updates (30,000/s) would take around $300,000 worth of boxes plus networking gear. That’s not bad at all for the assurance you get.
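For what it’s worth, the back-of-envelope math behind that figure, assuming roughly one transaction per second per chip (my reading of “a chip per transaction”):

    chips_per_box = 64
    price_per_box = 600            # USD, the quoted Ledger figure
    target_tx_per_sec = 30_000     # "Facebook level of updates"
    tx_per_chip_per_sec = 1        # assumption, not a measured number

    chips_needed = target_tx_per_sec // tx_per_chip_per_sec        # 30,000 chips
    boxes_needed = -(-chips_needed // chips_per_box)               # ceiling -> 469 boxes
    print(boxes_needed * price_per_box)                            # 281,400 USD, call it ~$300k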

Far as load balancer, don’t worry about the FPGA for now. It’s somewhat untrusted in the design anyway. Just see whatever they’d offer to get what the whole price is for a specific model plus idea of what it costs as you add more units. Additionally, I’m curious what the performance is in signatures per second or seconds per signature for the ones running MULTOS or strong JavaCard implementation.

@ Clive Robinson

re SHA 1

Yeah, it could cause problems. He needs to get off Comodo anyway, though. Let’s Encrypt is free. 🙂

re sacrificial box

Your phone? You’ve griped about the pain you have typing on those for years. A laptop might even prevent you from getting arthritis of the thumbs or other digits. In any case, I’m sure you can rig up a way to sync them with some kind of transfer over infrared w/ simple protocol. Just remember to make the whole thing static/deterministic with dirt-cheap, dead-simple components.

@ ab praeceptis

“Advocating drm, me? Unfortunately, I can’t respond to that because I didn’t pay the license fee for the words I would need.”

Great reply. Haha.

Clive Robinson November 22, 2016 4:34 PM

@ TheRegister,

Bruce, hackers have built a device to move door knobs from the side of the door not having a door knob: they generate huge magnetic fields.

It’s not “huge magnetic fields”, it’s very high EM fields, just like those of a hundred-year-old spark gap transmitter. It’s also not “a device to move door knobs” but something that interferes with circuits connected to electronic buttons.

From what is said, they generate a very large amount of electronic noise that gets into the electronic button’s circuit. This causes the likes of an integrator inside to form a false zero reference point. When they briefly stop the noise and start it again, the circuit, due to the imbalance, sees this as a valid button press and triggers the timer circuit that interrupts / activates the door mechanism.

As such this is a very crude “EM Fault Injection Attack” and I’ve been talking about them for quite some time on this blog, having independently discovered EM Fault Injection back in the early 1980s.

Interestingly with a bit of work they could probably develop the idea into a low grade EMP attack capable of damaging personal radios and entertainment systems, even mobile phones.

EFF fan November 22, 2016 4:37 PM

https://blog.torproject.org/blog/mission-improbable-hardening-android-security-and-privacy

On November 19th, 2016 Anonymous said:

With Rule 41 on its way fast, goodness knows we need it now more than ever.

Plus one. Also: a last ditch bipartisan effort to delay the changes (which go into effect Thu 1 Dec 2016 unless Congress acts immediately) has been introduced in the Senate (with a counterpart bill in the House, I believe):

https://www.eff.org/deeplinks/2016/11/give-congress-time-debate-new-government-hacking-rule
Give Congress Time to Debate New Government Hacking Rule
Kate Tummarello
17 Nov 2016

If Congress doesn’t act soon, federal investigators will have access to new, sweeping hacking powers due to a rule change set to go into effect on Dec. 1.

That’s why Sens. Chris Coons, Ron Wyden, Mike Lee, and others introduced a bipartisan bill today, the Review the Rule Act, which would push that rule change back to July 1. That would give our elected officials more time to debate whether law enforcement should be able to, with one warrant from one judge, hack into an untold number of computers and devices wherever they’re located.

On November 22nd, 2016 Anonymous said:
Senator Wyden’s office said it doesn’t have a Senate bill number yet, but the house version (by Ted Poe, with Wyden et al. as cosponsors) is H.R. 6341. So call your state’s federal House representatives’ Washington DC offices and tell them you support H.R. 6341 “Review the Rule Act”.

Clive Robinson November 22, 2016 5:01 PM

@ 65535,

I assume this applies to US citizens traveling to the UK. It seems to target journalists with overseas sources. What can we do about it?

Guess again…

The preceding act of Parliament, known as RIPA, defined the extent of its powers as any point reachable from a UK network, thus most of the world. This act effectively extends that: anywhere, any time, any how, against anyone is now legal… So a couple of rice farmers in a Chinese paddy field using the walkie-talkie equivalent of two cans and a bit of damp string are legitimate targets. As are goat herders in sub-Saharan Africa, etc etc etc… As for US citizens, rejoice in the fact that it’s not just your own government panty-sniffing your communications; the UK will be doing it Oh so much better…

Clive Robinson November 22, 2016 5:12 PM

@ r,

Their malware uses a little-known feature of RealTek audio codec chips to silently “retask” the computer’s output channel as an input channel.

If you had been reading this blog around the time of the speculation about BadBIOS you would have read RobertT and myself discussing this feature, which makes the sending / receiving of ultrasound easier.

It really surprises me it’s taken this long for what was an “open secret” to be exploited. After all I also mentioned the loading and protecting of software from “BIOS ROMS” at the same time –which is a mechanism Microsoft still “respects”– and Lenovo used to have “permanent malware” for advertising etc on their consumer-grade laptops…

Wael November 22, 2016 7:50 PM

@r, CC: @Clive Robinson,

Gosh! It’s been a long time since we spoke to @RobertT. We used to have some pretty interesting discussions back in the day when men were men, and women were ribs.

speculation about BadBIOS you would have read RobertT

Yep, we miss him alright. Such a talented person… But…I hope he made it!

@Clive Robinson,

You may have been using a knockoff memory enhancement medicine. Join the club[1]: here… This is the right one. Seventeen tablespoons every 3 hours, or you can take the easy route: one cup of Dr. Morton’s once in a lifetime. Potent stuff made from a forgotten recipe!

[1] Me, @Bruce, @name.withheld.for.obvious.reasons, and … hold on: let me take a spoonful or two… Oh, yea, it’s coming to me now… and @Mike the goat.

65535 November 22, 2016 9:26 PM

@ Thoth

“In the factory, the HSM also generates a symmetric activation key which is injected into the IoT device’s memory, and a counter is set so the activation key can only be used once. A PIN mailer feature allows the HSM to drive a printer to print the activation keys onto stickers. The stickers are stuck onto the IoT devices, which are shipped off. When the user unboxes the IoT device, they use whichever access method the IoT manufacturer provides, be it a built-in display and keypad or an HTTPS connection to the device’s internal embedded web server. The activation key is entered and, at the same time, an administrator PIN/password for the device is set. Once the device is successfully activated, its counter is set to disallow using the activation key again. Only the factory’s master keypair can sign a message that reverts the IoT device to factory state, at which point a new activation key is provisioned from the factory to the device, whether by offline copying of hexadecimal bytes, a certificate format, or OTA updates, whichever the IoT device maker chooses.”

That sounds like a reasonable solution. It’s a little complex but it looks like it would indeed work. Thanks.

@ r

“I was more concerned about the implications of two planes from the same company dropping out of the air within a couple months by the same operators.”

We know one was certainly shot down. But there are still a lot of questions about MH370, which is now verified to be in pieces somewhere on the bottom of the Indian Ocean.

@ Clive Robinson

“As for US citizens, rejoice in the fact that it’s not just your own government panty-sniffing your communications; the UK will be doing it Oh so much better…”

The future of digital communications looks very dark – even more so after reading Bruce’s post about governments using social-media platforms for propaganda and surveillance:

https://www.schneier.com/blog/archives/2016/11/government_prop.html

It looks like every police officer will become one of a million point extensions of the NSA, GCHQ, FBS, and so on.

We are now living in a state of semi-martial law where everything we say will have to be restricted or converted to code… So much for freedom of speech and the Fourth Amendment. I would also guess this will eventually cause a flight of capital from Facecrook, Twitter, YouTube, Instagram and many other digital social sites [for those who cannot encrypt].

Clive Robinson November 23, 2016 12:28 AM

@ 65535,

The future of digital communications looks very dark – even more so after reading Bruce’s post about governments using social-media platforms for propaganda and surveillance.

The sad thing is I remember the run-up to the first act, RIPA, and how everyone and their dog was making statements about how bad it was, and coming up with scenarios to show the then Government was lying about RIPA’s reach. This time, nearly but not quite nothing…

Are we now “so accepting”, like lambs to the slaughter, or are we “too scared to voice our concerns” that we allow ourselves to be railroaded? Or is it that we have an incompetent, unelectable Prime Minister who has the worst aspects of Tony Blair and Margaret Thatcher, but the “street sense smarts” of neither?

There is of course one other possibility that comes to mind: that there is so much bad legislation coming through that there is cognitive overload and the “snooper’s charter” appears as a mere “fly speck” on the agenda…

I remember David Cameron standing up in the US next to Obama going on about banning encryption etc, and the look on Obama’s face was like that of a child in a candy store. And the damn idiot press lapping it all up and applauding, not realising they were first on the hit list; thus they were like turkeys voting for both Thanksgiving and Xmas…

Clive Robinson November 23, 2016 12:50 AM

Researchers Claim Computer Speech Recognition is now better than humans

Researchers at Microsoft say of the NIST 2000 test sets for “switchboard” and “Call Home” conversations,

In both cases, our automated system establishes a new state-of-the-art, and edges past the human benchmark. This marks the first time that human parity has been reported for conversational speech. The key to our system’s performance is the systematic use of convolutional and LSTM neural networks, combined with a novel spatial smoothing method and lattice-free MMI acoustic training.

https://arxiv.org/abs/1610.05256

Oh and on the “Soldier 2020” aspect of AI for “Foot Drones”: there have been developments in FPS-type tournament scenarios that might well be a bit of an indicator of where things are likely to move,

https://arxiv.org/abs/1609.05521

Anonymous Coward November 23, 2016 1:07 AM

Since some “wouldn’t expect a serious response” to my claims without me citing sources (which I did for almost all of my claims already).

Since some people can’t believe that an agency paid over a billion dollars a year to protect their computers is doing everything it can to make their computers as insecure and unreliable as possible.

Since the NSA is too dishonest to defend itself against any of the charges that I leveled against it, despite Snowden leaks showing that they spend a large portion of their budget trolling boards like this one and LinuxJournal.

Since people won’t believe any of the awful accusations I made despite it all being easy to look up, and being found in documents that former and current officials say are real.

Since Googling is too hard for most people and I didn’t give links for every single (widely known) scandal that I mentioned earlier, here are sources with regards to Equation Group/Shadow Brokers scandal and Juniper OS backdoor; https://wikipedia.org/wiki/The_Shadow_Brokers http://www.forbes.com/sites/thomasbrewster/2016/08/15/nsa-hacked-shadow-brokers-equation-group-leak/ http://www.dailydot.com/layer8/shadow-brokers-nsa-equation-group-hack/ https://www.wired.com/2015/12/juniper-networks-hidden-backdoors-show-the-risk-of-government-backdoors/ https://www.engadget.com/2015/12/17/juniper-networks-finds-backdoor-code-in-its-firewalls/ http://www.truthdig.com/eartotheground/item/former_fbi_agent_confirms_the_surveillance_state_is_real_20130504

Curious November 23, 2016 2:37 AM

Off topic: Re. people’s notion of ‘privacy’

Seeing stuff on Twitter, I’d like to point out to everyone that simply discussing a “right to privacy” is not good enough; it is all too vague. And if one doesn’t already have an understanding of what privacy really IS, then I think “privacy” as a term risks being all very relative, and practically meaningless, if the name privacy is associated with mere regulations and traditions and not really about real people at all.

What would be important, if you care about privacy (people’s privacy that is), is people’s ‘need’ for privacy. I’d argue that this personal need is really the only meaningful way of portraying ‘privacy’ as a problem regarding personal things. Whether or not regulations of sorts are affecting people’s lives, for society to not be deemed obscene, unfair, or disproportional, is really secondary, and worse, it is imo basically nonsense if not reflecting an understanding of the idea of people’s ‘need for privacy’.

There would imo be the risk of ‘privacy’ working like a synonym (being a substitute word), like arguing for shielding people from attention in public after the fact, something that probably then wouldn’t be a true privacy issue anymore, because wanting privacy isn’t the same as needing privacy, two wildly different things when you think about it. The ‘need’ would be relevant as a timeless thing, while the ‘want’ would be pragmatic.

I think if society so to speak, would come to dare grapple with the idea of people’s ‘need for privacy’ and take that seriously, “society” then ought not just pretend being simply pragmatic anymore, as if the problem of ‘privacy’ was about tradition, bureaucratic issues, or about policing policies alone.

Somewhat related. I am greatly annoyed by how there is apparently no way for me to turn off auto completion in the url field, or for deleting any stored entries in my Opera browser with regard to that feature. I am curious to see if Opera actually answers the email I sent them. As with every terrible corporation, they don’t seem to bother having an official email address. It is as if they just don’t care, and it’s been like that for quite a while.

Curious November 23, 2016 2:53 AM

To add to what I wrote:

I think it might be wise to consider the notion of ‘privacy’ in the sense of it being ‘freedom’, but NOT a right. Thus, rights do NOT as such ever imply ‘freedom’. Because, if you have to beg for it, that isn’t freedom.

Who? November 23, 2016 3:05 AM

I understand the UK citizens will have a chance to avoid this surveillance beast by setting up VPNs to other countries.

Clive Robinson November 23, 2016 5:10 AM

@ Curious,

I think if society so to speak, would come to dare grapple with the idea of people’s ‘need for privacy’ and take that seriously, “society” then ought not just pretend being simply pragmatic anymore, as if the problem of ‘privacy’ was about tradition, bureaucratic issues, or about policing policies alone.

Your first problem is that “society” is a collection of “herds”, each with its customs, practices and norms.

As a rough indicator, societal norms can move forwards, backwards, schism and diversify with time, based on the same behaviour in the herds. Backwards movement is associated with “conservative” dominance, forwards with “liberal” dominance. As long as there is sufficient overlap between conservative and liberal views, neither society nor its herds are likely to schism, but they will diversify.

As a general rule societal movement tends towards liberal behaviour unless harms are involved. When this is the case the conservative viewpoint tends to get codified into moral codes and legislation. However the harms of oppression –often from the conservative camp– tend to codify the liberal viewpoint. That is, conservative legislation is generally against “actions” causing “discriminations”, while liberal legislation tends to be against “reactions” or “discriminations”. Thus you get a pendulum effect on top of the general trend towards more liberal views.

Thus one definition of privacy would be the freedom to practice liberal behaviour without the discrimination of the conservative view, where such liberal behaviour has not yet been codified or accepted as a societal norm.

Clive Robinson November 23, 2016 5:25 AM

@ Who?,

I understand the UK citizens will have a chance to avoid this surveillance beast by setting up VPNs to other countries.

Err no, not in the slightest; if anything, VPN use will be turned into a weapon by the authorities against those using them.

To see how this works: if there is not a “backdoor” that is easy for the approved authorities to use, then at some point you will be challenged with your “back traffic” and a letter, and told to give up the keys… Failure to do so is subject to effectively mandatory jail terms. Oh, and providing the plaintext does not appear to be a valid defence…

The reason is the old “send a faux message” game. You get sent a valid looking but not decipherable by your key(s) message. You can not provide plaintext, therefore you have to prove a negative of not ever having had the valid key… Go directly to jail, do not pass go, do not collect £200… Any attempt at repudiating such a faux message at the time or later will be presented as “trying to provide overt cover for covert communications” against such arguments you can not defend yourself…
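
To make the asymmetry concrete, here is a minimal sketch in Python (using the third-party “cryptography” package; the key and the “message” are invented purely for illustration). A random blob that was never encrypted under the key you actually hold simply fails authenticated decryption, and to anyone demanding plaintext that failure looks exactly like a refusal to decrypt:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

# The only key you actually hold.
my_key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(my_key)

# The "message" you are ordered to decrypt: in reality just random bytes.
faux_nonce = os.urandom(12)
faux_ciphertext = os.urandom(64)

try:
    aead.decrypt(faux_nonce, faux_ciphertext, None)
except InvalidTag:
    # All you can demonstrate is a failed authentication check, which is
    # exactly what "refusing to hand over the right key" would also produce.
    print("Cannot decrypt with the key I hold; no way to prove no other key exists.")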

65535 November 23, 2016 6:29 AM

@ Clive

“I remember David Cameron standing up in the US next to Obama going on about banning encryption etc., and the look on Obama’s face was like that of a child in a candy store. And the damn idiot press lapped it all up and applauded, not realising they were first on the hit list…”

Yes, it is a sad commentary on the press: they hand the rope to the hangman, only to find they are the first to be hanged.

On the VPN issue in the UK, I believe you are correct: VPN use will be a keyword or selector that puts the user on a list – XKeyscore and so on.

@ Anon Coward

We hear you. But if you glance back through this blog, most of the issues you brought up have been discussed. We are just trying to determine a way out of the mess via both technical and political means.

@ Curious

The “privacy” issue is real. The problem is there is a one-way mirror between those inside the government who eavesdrop and those outside it. It’s a hard nut to crack.

The day when attorneys realize that their private calls and emails are being intercepted and used to kill their clients’ cases will be an interesting day [I would guess dirty police with stingrays are probably already getting the upper hand in some legal cases]. Once the lawyer/client confidentiality privilege is proven to be gone, lawyers are going to make a lot of noise.

ab praeceptis November 23, 2016 7:08 AM

65535, Clive et al.

You are too easily impressed. And you ain’t seen nut’in. Now, ze dschörmans are coming!

You see, years ago, on a planet called germany in a solar system called europe, the mighty darth nsa overlords had the germans install what would later become the largest IX of all. It looked nice and shiny and, as was the habit then in the europe colonies, “democracy”, “freedom of opinion and speech”, and “privacy for everyone!” stickers were generously smacked on everything all over the place. And some extra cables …

But alas, all sorts of evil rebels (Trademark of the empire) abused the wonderful freedoms, speaking critically about unelected powers, about corrupt politicians, about wanton mass murdering by the “good and very lighthouse democracy”, the empire, and many other abuses of the rights given to the people were observed.

So it came to be that first the empire itself, and later many colol^h^h^h, uhm, its democratic partners, felt the need for cyber defenses. Of course, the outpost of germany, having the super large IX, had to build up a “cyber defense center” too. Well, kind of. The concept of a cyber defense center was liked so extremely well by the mighty internet printout masters that they wanted more of them.

But, of course, once you have three or so cyber defense centers, none of which being even remotely useful, a question arises.

No, not the question “why do we build all that, well knowing it’s useless, us not even having the professionals to staff the centers?”. The other one. The “Why just ‘defense’? Shouldn’t we have a cyber attack center, too?”.

Of course, they answered “Yes!!!” And they had urgent reasons. Because it had turned out that some freedom of speech and freedom of opinion abusing evil citizens had devilishly discovered encrypted communication! (The exclamation mark is but a very poor representation of the evilness of those citizens).

And so it came to pass that the beloved, highly capable, and extremely democratic etc blabla regent bots of the outer colony of germany decided to build a “cyber center” to crack encrypted communications.

Right now, I don’t mean to frighten you, but let’s call a spade a spade: you are all but hacked using rotl13. I repeat: the germany cyber center (#3 or #4, it’s hard to stay up to date as the regent bots are in somewhat of a cyber center frenzy) will very soon be capable of unwrapping rotl13-encrypted communication!

So, you better watch your step. Trust me, you don’t want to know about the ugly things that’ll happen once researchers in germans cyber center #12 find out about 32-bit RSA!

ab praeceptis November 23, 2016 7:18 AM

moz

“…voting machines, specifically those without paper records, may have been hacked or otherwise abused during the US election”

I just refuse to believe that. That’s so in-cre-di-ble!

I happen to know through diligent study of PR material from major voting machines manufacturers like “bend and adapt inc” and “desired results corp” as well as from the governments “super duper extreme totally awesomely fair elections” PR agency that the allegations you make are absolutely not possible.

In fact, I happen to know that a poll was made using those very machines that clearly proves that 134% of the population are “perfectly happy and fully convinced” (trademark) with their voting machines working perfectly well and unbiased.

P.S. Kindly refrain from using the term “cyber-100%-trust” as I intend to trademark it and to earn lots of money with and to then buy my lovely auntie a governors post or a congress seat.

Ratio November 23, 2016 7:31 AM

@Clive Robinson,

Are we now “so accepting”, like lambs to the slaughter, or are we “too scared to voice our concerns” that we allow ourselves to be railroaded? Or is it we have an incompetent, unelectable Prime Minister who has the worst aspects of Tony Blair and Margaret Thatcher, but the “street sense smarts” of neither?

There is of course one other possibility that comes to mind: that there is so much bad legislation coming through that there is cognitive overload and the “snooper’s charter” appears as a mere “fly speck” on the agenda…

Or maybe, just maybe, Labour gave the Tories what they wanted. Game over, the end. (As alluded to before.)

Oh, what am I saying? They would never do that, my ideology doesn’t allow it! It clearly has to be something else. Let’s start with the usual boogeymen and go from there…

@Who?,

I am glad the United Kingdom left the EU months ago

The UK hasn’t even formally started the process, much less completed it.

Thoth November 23, 2016 8:21 AM

@Clive Robinson

A Quick And Dirty HTTPS-VPN

Write a PHP or Node.js script and run it on a server instance. Use a web browser over HTTPS to log in to the intermediary web server running the server-side script, which will make connections to other networks on your behalf.

Script the server to rotate pre-shared keys every hour, and the server’s OS (assuming it’s Linux) to use the ‘shred’ command to wipe the old keys and logs on an hourly (or even quarter-hourly) basis.

If possible, set up the web service instance so that the Linux OS (too impractical to ask for anything more than Linux) uses Full Disk Encryption, with the OS randomly generating an FDE key that is beyond the user’s control, so that on shutdown or power-down the entire partition is effectively lost (probably scrubbing the FDE key as well).

This is for a more casual web browsing and remote access scenario. Anything more serious requiring higher assurance wouldn’t be doing all these VPN in the first place.
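
As a rough illustration of the hourly key-rotation and log-scrubbing step suggested above, here is a minimal Python sketch. It assumes a Linux host with GNU coreutils’ shred available; the file paths, key length and interval are illustrative guesses, not part of the recipe, and shred’s overwriting is of limited value on journalling or copy-on-write filesystems:

import os
import secrets
import subprocess
import time

KEY_PATH = "/etc/qd-vpn/psk.key"          # assumed location of the pre-shared key
LOG_PATH = "/var/log/qd-vpn/access.log"   # assumed log file to scrub

def shred_file(path: str) -> None:
    # Overwrite three times, then unlink; see shred(1).
    if os.path.exists(path):
        subprocess.run(["shred", "-u", "-n", "3", path], check=True)

def rotate_key(path: str, nbytes: int = 32) -> None:
    """Shred the old pre-shared key and write a fresh random one."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    shred_file(path)
    with open(path, "wb") as f:
        f.write(secrets.token_bytes(nbytes))
    os.chmod(path, 0o600)

if __name__ == "__main__":
    while True:
        rotate_key(KEY_PATH)
        shred_file(LOG_PATH)   # scrub the old logs as well
        time.sleep(3600)       # hourly, per the suggestion above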

C U Anon November 23, 2016 8:31 AM

@ab praeceptis :

I just refuse to believe that. That’s so in-cre-di-ble!

Should that not be “in-cre-die-bold”, for that “Premier Election Solution”?

CallMeLateForSupper November 23, 2016 9:22 AM

@all Brits

Belated congrats on the Investigatory Powers Bill. Politicians of every color worked as one to push it through, putting the lie to claims by some that nothing could be worse than Brexit. This legislation makes even American spies and their enablers look like pikers.

John J Foelster November 23, 2016 8:06 PM

I was in here a few weeks ago asking advice on my research on elections integrity. Several posters were very generous in giving advice, but pointed out that this forum was more one where IT security is discussed, and that you could not comment on election machine hacking specifically.

I have since learned that my apparent inability to engage the web’s intelligentsia via email was the result of my writing long letters that were read by Spam Filters.

Embarrassing.

I’m now starting to engage with the relevant people who are also figuring out what happened. And I have a specific IT Security scenario I want the good folks here to pick to pieces and see if it survives scrutiny. See here:

https://medium.com/@jhalderm/want-to-know-if-the-election-was-hacked-look-at-the-ballots-c61a6113b0ba#.ev78wg9jn

Dr. Halderman’s working hypothesis seems to be that agents of Russian intelligence infiltrated the DIEBOLD/Premier GEMS machines at multiple county offices in several states. I respectfully think that this would not have been feasible for a variety of reasons.

I think the most likely scenario for hacking multiple county GEMS machines, which are supposed to be NEVER connected to the Internet, is to infiltrate the machine vendor’s travelling tech and customer support staff.

First you bribe a manager at the Premier Elections Systems subdivision of Dominion into your employ, and then you replace the tech support staff with your own people gradually, maybe over as much as three years. Terminate some staff over normally trivial offenses that will hold up in labor court and leave staff short for the workload. Drive honest employees away with overwork, and no raises or bonuses, or at least transfer the ones you can’t get rid of to irrelevant clients outside swing states. Replace the staff with your own overqualified agents, who will underbid any legitimate candidates on salary.

I think what you would then have them do is make custom wifi dongles that the antiquated plug and play versions of Windows on the GEMS PCs will automatically recognize and install, and that these can broadcast access to the internal HD of the GEMS PC. Wifi hotspots are now so ubiquitous that one more unidentified one emanating from a county elections office might easily go undetected indefinitely.

I have strong reason to believe the Johnson County, KS, GEMS machine to have been compromised for the 2014 election. The county officials at this office have explicitly listed the countermeasures they have in place at this website.

https://www.jocoelection.org/voters/VotingSystems.htm

The most relevant ones from the standpoint of a person installing such a hypothesized dongle on the Johnson County GEMS PC would be:

  • The election software computer is freestanding. It is not networked within the office or connected to the Internet.
  • Physically, the vendor’s election software and each individual election database are secured on a computer that is not accessible by our office staff or the vendor’s staff.
  • This computer is installed in a secure room with controlled access. Key card scans by two authorized staff members are required for access during election cycles. Office policy is that at least two people are in the room at any given time.
  • A video camera also records all activity in this room.
  • Individual election database files are backed up at designated milestones and secured in a fireproof safe. The Election Commissioner maintains control of the keys to the fireproof safe.

John J Foelster November 23, 2016 8:24 PM

OK, here’s the scenario:

The corrupt DIEBOLD/Premier vendor support technician and a Johnson County elections official enter and are recorded on camera.

The technician contrives to make the official believe that the keyboard is no longer working. Simplest way I can think of is holding the control key while attempting to enter a Windows login, but there may be others.

The technician reaches to the back of the PC to check the wire connection for the keyboard (it’s almost unthinkable that they would use wireless peripherals based on all the other security, but one must not underestimate the power of human stupidity), places palmed dongle on open USB port.

Election database files are then overwritten between scheduled backups via the hacker’s remote access to the computer. (You can accept as a given that if a new AccuBasic interpreter hack has been uncovered and the hacker has unlimited unsupervised access to the GEMS PC, then all AV-OS and AV-TSX voting machines in a county are going to be hacked. I can provide citations if interested.)

So what say ye, security geeks? Feasible or bullshit?

Most counties probably do not have security anywhere near as robust as Johnson.

ab praeceptis November 23, 2016 9:05 PM

John J Foelster

Dr. Halderman’s working hypothesis seems to be that agents of Russian intelligence infiltrated …

That’s where I would normally stop it. Obviously halderman won his PhD in a lottery or has completely lost any professionalism he may ever have had.

And I don’t say that because I like Russians. I’d say exactly the same thing if it was about the french, the Chinese or the good people from lila lula island.

Forget what that guy blabbers. Here are some reasons:

  • “would”, “could”, “might have”. Translation: mindless blabbering.
  • Attribution is a very tough problem. Anyone with any professional standing would certainly provide solid evidence for such a hypothesis, if alone because he knew that virtually all professionals would have strong doubts.
  • He very clearly and openly is pro-clinton and anti-Trump. This alone completely disqualifies him (and makes me think, he’s an idiot). Every not completely retarded academic prepared to serve as an expert would very strongly try to at least look impartial.
  • diebold (“desired results inc”) is known for its crappy and pretty much proven to be crooked “election machines”. Also there have been quite some cases reported of people who voted for one candidate and found their vote to be counted for the other one.

Bribing/corrupting Diebold’s staff – without so much as a single one saying a word (and earning himself a handsome million for talking)? Not credible.

I don’t know (or care) about your specific state and yes, maybe or even quite certainly there have been manipulations. But anyone pointing to evil Russia should just go straight to the mental asylum.

John J Foelster November 23, 2016 10:22 PM

@ab:

Yes, thank you for your input.

I don’t actually care what you think about the broader political situation in the slightest, or the technical merits of Halderman’s case, or his competency, or any portion of my previous posts, or the price of tea in China.

I am not presently recruiting people to my point of view here.

I had a specific question regarding the feasibility of gaining access to an allegedly secure system whose security parameters I laid out in as exact a detail as possible.

I simply wished to know if the wifi dongle sleight of hand trick I described could actually give an outside person access to the Johnson County GEMS PC, assuming the security is setup exactly as described in the above link.

If this trivial exercise in the theory of secure IT practices does not interest you, no one is forcing you to read or respond. If you feel my post is abusing the Terms of Use for the community, please point out which portion or report me to a moderator. If I or they find that I am in such violation, I can leave or be expelled.

Otherwise, if you do not have a relevant response, don’t respond.

hopefully im just paranoid and all the reporters are wrong November 24, 2016 12:42 AM

Please post something wrong about the below, because I really want to be wrong about at least some of it.

Regarding flashing CopperheadOS and Replicant onto Google Nexus/Google Pixel/certain Samsung Galaxy variants/the 2 or 3 other phones supporting CopperheadOS/Replicant(the only OSs for phones that don’t phone-home);

New phones from Google cost over $400 up front with no option for contracts, and with all the vulnerabilities in stock browsers (newest Google Pixel’s Chrome browser hacked in 90 seconds), it’s not safe to assume that a used phone hasn’t been infected, unless you know the seller and greatly trust his prudence.

There are known instances of the NSA infecting read-only CDs in transit, not because the sender or receiver was suspected of any crime, but simply because they showed evidence of being intellectual. Source: https://www.wired.com/2015/02/kapersky-discovers-equation-group/
It would be easier to do this with phones (they’re writable with normal tools), and anyone buying phones that have fewer permanent, unremovable backdoors than most phones is going to look even more “suspicious” than those poor scientists. The only known permanent backdoor in the Nexus/Pixel is the baseband firmware.

Still, with new Chinese phones shipping with spyware as bad as Carrier IQ, a used Nexus might be safer than a new Xiao. But no phone is reasonably safe.

GregW November 24, 2016 2:24 AM

Usability of good OpSec for full-disk-encrypted laptops is a pain, right?

An improved approach encrypts RAM as part of the sleep process and then requests a password on waking to decrypt it:
http://phys.org/news/2016-11-laptopeven-asleep.html#nRlv

Original ACM paper (paywall): http://dl.acm.org/citation.cfm?id=2978372

ISOC paper by same authors – don’t just redirect to innocuous partition given coerced password, delete data + redirect when given yet another slightly different coerced password: http://www.internetsociety.org/doc/gracewipe-secure-and-verifiable-deletion-under-coercion
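
Conceptually (and only conceptually; the actual system operates far below userspace, which a Python sketch obviously cannot), the approach amounts to something like the following: wrap the in-memory secrets under a key derived from a passphrase when going to sleep, and require the passphrase again to unwrap them on wake. The sketch uses the third-party “cryptography” package, and every name and parameter here is illustrative rather than taken from the paper:

import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Derive a 256-bit wrapping key from the passphrase (parameters illustrative).
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def seal_on_sleep(secrets_blob: bytes, passphrase: str) -> dict:
    """Encrypt sensitive memory before suspend; the plaintext can then be zeroed."""
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(passphrase, salt)
    return {"salt": salt, "nonce": nonce,
            "ct": AESGCM(key).encrypt(nonce, secrets_blob, None)}

def unseal_on_wake(sealed: dict, passphrase: str) -> bytes:
    """Ask for the passphrase again on wake; a wrong passphrase raises InvalidTag."""
    key = derive_key(passphrase, sealed["salt"])
    return AESGCM(key).decrypt(sealed["nonce"], sealed["ct"], None)

if __name__ == "__main__":
    blob = b"disk encryption master key, session keys, ..."
    sealed = seal_on_sleep(blob, "correct horse battery staple")
    assert unseal_on_wake(sealed, "correct horse battery staple") == blob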

Wesley Parish November 24, 2016 3:34 AM

A year ago a sort-of friend living next to me died of a respiratory illness. The forms were all filled in, his family did their best to get everything tied up properly, and for over six months various government agencies kept posting him mail. I even had to explain to a couple of police officers that I knew nothing about his family, as he had never discussed them with me and I had not felt it right to pry, so I knew nothing about his child or children; and he was now dead, so the people to talk to would be his next of kin.

Now there was nothing hidden about his passing. His family were scrupulous in sending him off. There would’ve been a death notice in the papers, though I never get the papers so I never saw it.

The fools who demand ever more data-vacuuming powers for the law enforcement agencies must answer me this question: if the cops and various other government agencies can’t read the publicly available newspapers, how the **** are they going to be literate enough to handle the data deluge? And if they can’t collate publicly available information concerning deaths with the likes of the coroner, how are they going to collate information that is supposed to be private, like phone calls, etc.?

That demands a level of intelligence (of the IQ sort) they have been shown not to have.

Wael November 24, 2016 4:14 AM

@Clive Robinson,

Oh I also forgot to mention the issue with the “Many Worlds” idea and what effect entropy implies, but you’ve probably worked that one out for yourself…

This one fell through the cracks…

I have not. That article confused me beyond description. I have read other “articles” from unexpected sources that deal with this issue, so I have an idea, but it’s not fully developed.

ab praeceptis November 24, 2016 6:09 AM

John J Foelster

Well, then you shouldn’t have mentioned and linked that man in a prominent “authority” position for your “case”.

As for the rest: We are not your personal free IT security/forensics crew, nor are we in the ghostbusters for free business.

You might want to contact the dnc for any “XYZ is behind it! Can you help me prove it?!” cases. They have quite some experience in that. Be warned, though, that they are also experienced in the field of evidence shredding and vanishing.

Have a nice day

ab praeceptis November 24, 2016 8:34 AM

Reuters (and others) “Personal data for more than 130,000 sailors hacked: U.S. Navy”

Bad enough? No, the navy can do “better”.

“It said a laptop used by [a 3rd party industry] employee working on a U.S. Navy contract was hacked.”

What job would that be, that requires a third-party employee to have 100K+ personnel files on a f*cking notebook?

Of course, some lametta admiral gave a statement. I’ll translate that for you:

“The Navy takes this incident extremely seriously” – Translation: “blah”

“this is a matter of trust for our sailors,” – translation: “blah” plus “frankly, we shit on our sailors’ privacy and security. Not just because we don’t care anyway – as has been amply demonstrated – but also because we do not even know what we are talking about. Oh, and btw: We need 17 billion $ more to defend our beloved valuable sailors against cyber attacks! Cyber! Alert red! Cyber!”

The bad news is: washington doesn’t need to care too much about people like Snowden. They have damn enough at their hands with sheer incompetence and ignorance.

Tomorrow: The navy cyber defense investigation team found a vodka bottle plus a CD-ROM with the Russian national anthem on it and “KGB – cyber attackers use only!” printed on it.

“The best of the best!”

Curious November 24, 2016 10:00 AM

@Clive Robinson

“Your first problem is that (…)”

I can honestly say that I don’t fancy the kind of reasoning you use beyond your opening sentence. Not sure if you will want me to talk about it, because I don’t think there is an agreement to be had with me to support your conclusions, at least not in relation to what I wrote. I can ofc understand what you wrote from your point of view, but I don’t like this type of reasoning, mainly because this notion or idea, if you will, of anything liberal really ought NOT to be an enterprise of populism (as if oppressive conservative views swayed toward something liberal with time). And so, the way I see it, any argument that appears to incorporate a spirit of the time (I suspect historians in general like to refer to this ‘zeitgeist’, as if it were a phenomenon) is something I find highly suspicious and not something I will want to try to argue against (because the merit of the argument imo isn’t real; it is maybe best understood as being on the level of hearsay).

I suppose my view of this, my purported importance of understanding what privacy is in a larger sense, as I wrote it, mainly relies on the highly subjective view that any attempt at determining what is or isn’t an essential privacy concern on a purely public-interest level is basically insufficient because of the non-personal actors in such matters. I’d say that another way of thinking about my revulsion at any public-interest discussion (usually corporate or state actors, or possibly various interest groups) ought to be understood as being fundamentally skeptical of the level of sympathy and goodwill that would have to be shown by non-personal actors in matters specifically about privacy issues.

“Thus one definition of privacy would be the freedom to practice liberal behaviour without the discrimination of the conservative view, where such liberal behaviour has not yet been codified or accepted as a societal norm.”

Well, yes in a way, but… and I will sort of hammer my earlier point home here, so to speak. I don’t fancy working with what I suspect would be similar to pandering to academia here, if you will (basically abiding by the notion of “society” being an institution, which is not a good idea for various reasons), when the big questions/debates about ‘privacy’ in an academic or public setting are so likely to be discriminated against, for the obvious reasons in various situations. Obvious example: prisoners in the US and UK don’t have the right to vote. Less interesting, but still of importance: video monitoring of public space.

Curious November 24, 2016 10:06 AM

To add to what I wrote:

I guess I could have written that there is in any case likely to be an obvious conflict of interest between “society” as an institution and people (individuals). One perhaps unconvincing way to characterize that type of conflict of interest would be that “society” is in some ways evil, as if might makes right, and more than willing to exact its authority on individuals or even groups of people. This is ofc not about individualism, or anything like that, just a form of human predicament I’d say.

r November 24, 2016 10:57 AM

ab,

why do I get the feeling you’ve adopted a whole, “if you can’t beat ’em”…

join em mantra?

Could you maybe, not share that koolaid?

I’m beginning to think you’ve been abducted and replaced with a rwbot.

r November 24, 2016 11:02 AM

@ab,

The US Navy owns and operates subs. IMO that leak is more important than the OPM leak. I’m not happy with the Navy and I feel for the sailors. I feel for their families back home, where the personnel files are pointing.

ab praeceptis November 24, 2016 11:24 AM

r

Of course, the f*cked ones are the sailors.

And no, no “join ’em mantra” whatsoever. I’m pissed over the edge at how nonchalantly the lamettas don’t give a sh*t about their sailors or, being at that, about OpSec.

Plus, unlike sailors they are protected; the victims will once more be the common man, in this case the sailors.

Finally, I think they should stop making lots of noise about “evil Snowden” and begin to care about their job and their people.

CSX November 24, 2016 11:43 AM

Some of us will be signing off in the near future, but do keep the following in mind:

https://tech.slashdot.org/story/16/11/24/143214/google-sends-state-sponsored-hack-warnings-to-journalists-and-professors

Encryption
Speech
Press
Guns

What’s left?

Your lives, your families, your hearts and minds.

Democratic values are freedom of the press, freedom of speech, the right to be free from harassment, the right to privacy. We have always had the Enquirer, so don’t let a little AD&D get twisted up into a nightmare by some lame-ass GM; all he does is narrate, it’s the dice and your decisions that matter most.

moz November 24, 2016 1:54 PM

Interesting election fraud related stuff:

The recount fundraising page, now at $3,790,096.00 against a $4,500,000.00 target with $6M probably needed for all costs of a full recount in all three states.

Halderman: The presidential election was ‘probably not’ hacked — but the votes should be checked (my emphasis)

https://www.washingtonpost.com/news/monkey-cage/wp/2016/11/24/did-russian-hackers-elect-the-u-s-president-dont-believe-the-hype/ (Washington Post) – note that disbelief tends to correspond to right wing newspapers.

Closer Look Punches Holes in Swing-State Election Hacking Report (Scientific American)

Compare with yesterday’s article with the title “Computer scientists say they have strong evidence election was rigged against Clinton in three key states” from the Independent

@ab praeceptis

Normally I would largely agree with your comments on Halderman; however, if I were giving him the benefit of the doubt, he’s working against a deadline. He has to make the strongest case now, because if more evidence comes out later it won’t do any good. Basically, it’s not his job to prove what he says is true. It’s the election officials’ job to prove it isn’t.

@John J Foelster

Physical hacking is impractical and dangerous compared to using malware. If it were to be done, you would likely target the software production, not the individual machines. @ab praeceptis’s answer was a bit curt but on point. Further, you should read the articles more carefully. Halderman is not suggesting that hacking happened, just that it could have happened.

ab praeceptis November 24, 2016 2:33 PM

moz

I’m rather cold-blooded and tough on that whole election issue. Well noted, I’m not for either candidate; besides not being us-american anyway, if I would have a position at all, it’d be anti both of them.

Of course there was fraud! Probably even massive fraud.

We know that the dnc acted unacceptably, we have known for at least 16 years that your elections are a scam, we have known for many years that the machines of “desired results inc.” are biased, bent, and probably not even election machines but devices to remotely generate the desired outcome while running a pure “elections” show like a computer game. People love clicking and people love the feeling that they have anything to decide …

Same story with your ridiculous “peoples vote” vs. electoral system.

Well noted, I have no position on that; do as ever you please. BUT: One doesn’t complain after the election show! If the us-americans had any real interest whatsoever, they had decades to take care of that.
It’s really simple. If I think someone has severed my car’s brakes, and I know that, then it’s wise to take care of it before driving every fourth year rather than complaining right after crashing.

Finally: What do they want? What do they expect? That “counting” again the fraudulent numbers of “desired result inc” machines will lead to another result? And I also find that whole thing funny because from clinton we know that they played games while from Trump we don’t; it’s just lots and lots of noise. Chances are that a “recount” would actually go against clinton. Then what? A re-re-count?

From what I as an outsider see that’s exactly the message by the people. They have damn enough of the large city crowds and the polit mafia (of both sides). From what I see the usually unheard (as in “ignored”) part of the population had enough and now the part that always makes lots of noises and demands safe spaces is doing what they always do: they’re making lots of noise.

That’s also why I see halderman so negatively. Where has that “expert” been all those years? Why doesn’t he have tangible, credible, and actionable professionally processed data rather than making lots of “could” and “might” noise?

Yes, that whole election circus system is rotten. But it’s your system and I suggest that you use the next 3.5 years to repair your brakes which are known to be rotten before driving again on election road in 4 years.

Trump is your president. May he be a good one for you (and us).

Curious November 24, 2016 3:13 PM

Re. Bruce Testifies on IoT

I haven’t listened to the entire video yet, but I think it was potentially very unfortunate that the discussion ends up having an emphasis on how security mustn’t stifle innovation.

I think it makes a lot of sense, and is the only wise thing to do, that the only sensible way for congress to proffer a sentiment in this area would be to say that it is important to allow for both good security and good innovation, not letting either one of the two be the main motivation for coming up with regulation. Though I have some difficulty seeing how ‘innovation’ as a subject matter can even be a point of debate in any case, which is also unfortunate I think; unless the discussion is about obvious problems regarding technology. However, if one were to suggest or imply that increased manufacturing and R&D costs are stifling innovation, then innovation as a term is TOTALLY meaningless even as a conceptual metaphor, because IF you CAN work on something, that by merit of itself rules out the idea that there isn’t any innovation going on. I think it would then be natural to bring attention to the aspect of ‘strategic’ aspirations. Presumably there are either ulterior motives, or one is happy with the present state of things. To whimsically say or imply that “progress” is a goal in itself doesn’t make good sense, I think.

Thus… there is the possible problem, for whoever has to be concerned with congress, if ‘innovation’ means increased profit, or even longevity, two things that have basically NOTHING to do with innovation as such.

So I would like to know what the US congress or anyone else could mean by the word ‘innovation’. I am sure representatives of other countries would like to know too, because I think other members of the international community wouldn’t like there being a special agenda for the US alone, giving US industry or the US market some kind of self-entitled existence or living off subsidies, invoking special protections or rights for US industry and the actors in the US markets.

So what could congress have meant with this notion of “innovation”? I don’t know. Does congress know? Do any of you guys know?

I suppose “innovation” within a select area of technology would make good sense, like, allowing innovation in software over time, or how computer networks eh work over time. Perhaps it was something like that Bruce and congressman Walden had in mind.

I will suggest that one stop using the “innovation” label without also elaborating just how doing so would be relevant. Otherwise lobbyists, as I can imagine, will easily sway the opinions of politicians and regulators by tossing feel-good buzzwords around to have things remain the same, which paradoxically might itself become an example of stifled innovation.

In the video (1h:23min), congressman Walden solicited ideas for how to create a “national framework”[sic] in which to work on securing the internet of things, as I understood it. I think it would be honest for the US congress to try to think strategically (or rather to understand what thinking strategically would mean) and not imply that pragmatism is a good plan of action, because those two things (pragmatism and strategy) are sort of antitheses of each other, I would say. Not understanding one or the other wouldn’t make good sense, or rather it would be unfortunate and probably confusing, or even tragic (i.e. people talking past each other), and it would be disingenuous I think if one were to entertain the notion or the idea that a periodical evaluation of legislation could ever be a good substitute for having a desired framework on which to act in the first place.

There is a short discussion then following, in which the idea of having ‘standards’ is apparently deemed preferable to regulation, with regulation possibly coming after that, though I don’t see how there would be a national framework from that if there aren’t any regulations; otherwise, surely the “stakeholders” would just run the show as they please, wouldn’t they?

Since “innovation” is so important, I suggest everyone try figuring out what that really means.

John J Foelster November 24, 2016 3:24 PM

@moz

Thank you. I actually have been reading the articles fairly carefully and there is a distinction between what Halderman is saying publicly and what he and others are lobbying the Clinton (and possibly the Stein) campaign to do privately.

http://nymag.com/daily/intelligencer/2016/11/activists-urge-hillary-clinton-to-challenge-election-results.html

I assume they’re also doing what I’m doing, which is deliberately withholding information from the public that might indicate to the hackers how badly they’ve been compromised.

Normally I would agree that attacking the software rather than the individual machines would be a better explanation, BUT, the procedures for what software is installed on the machines and the verification that it IS the software expected are rather stringent. This is a detail that I hadn’t thought to include, but an important one and I apologize.

I have a list of the installed software and the procedure for verifying that the software from the manufacturer isn’t malicious used in the state of Alaska, which is about as secure as Johnson County. Contrary to popular belief, the Federal Election Commission and the Election Assistance Commission aren’t staffed entirely by imbeciles.

This comes from PDF p. 142 (internal pagination 72 of 110) of the State of Alaska Elections Security Report (Phase 2).

http://www.elections.alaska.gov/doc/hava/SOA_UAA_Election_Security_Project_Phase%202_Report.pdf

3.8.3 GEMS Software Hash Verification
The GEMS.exe application should be validated by calculating both MD5 (Message-Digest 5) and SHA (Secure Hash Algorithm) hash functions. These hash codes should be compared with those registered with the National Software Reference Library (http://www.nrsl.nist.gov/votedata.html). Known vulnerabilities exist with the MD5 hash function and as a result both the MD5 and SHA hash functions should be calculated (Premier’s Windows Configuration Guide, Revision 3.0, Section 10, 2007).

3.8.4 Loaded Software Confirmation
The GEMS server should be checked to ensure that only Winzip v11, Adobe Acrobat 8, Adobe Audition 2.0 or Sony SoundForge 8 and Nero Burning ROM 8 are the only applications loaded on the server.

These all correspond to the manufacturer and EAC guidelines. They actually have to use an older version of Acrobat because the precinct reporting system uses a PDF option that has since been deprecated. IF a malicious version of the GEMS.exe file were created at Premier, it would have to survive testing by the staff of the EAC, and potentially experimental scrutiny by a number of academic labs in public Universities, most especially the University of Connecticut.
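
For what it’s worth, the dual-hash check described in section 3.8.3 above amounts to something like the following sketch. The GEMS path and the reference digests are placeholders (the real values would come from the NSRL), and SHA-256 stands in for whichever SHA variant the guideline actually intends:

import hashlib

GEMS_PATH = "C:/GEMS/GEMS.exe"    # hypothetical install path
REFERENCE = {                      # placeholder digests, not real NSRL values
    "md5":    "0123456789abcdef0123456789abcdef",
    "sha256": "00" * 32,
}

def file_digests(path: str) -> dict:
    """Compute MD5 and SHA-256 of a file in one pass."""
    md5, sha = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
            sha.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha.hexdigest()}

if __name__ == "__main__":
    got = file_digests(GEMS_PATH)
    print("MATCH" if got == REFERENCE else "MISMATCH", got)

Of course, as the comments further down point out, a check like this only means anything if the machine running it, and the checking tool itself, have not already been tampered with.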

The old bugs you guys are thinking of actually were accidental and were supposed to have been patched.

Even still, I just sent an email to the EAC asking if any revisions to the software WERE submitted and approved since the above report was written. Never hurts to make absolutely certain.

I’m realizing again this may not be the best forum. As it happens, I actually HAVE been more or less living in a cave for most of the last decade, so please excuse the lack of knowledge. Are there other forums focused on the practical aspects of physical or remote hacking I might try and hit up?

Reddit and I have deep seated antipathy for one another. (Or at least I assume it dislikes me because it wouldn’t let me post anywhere owing to rules and I don’t have time for its bullshit.)

ab praeceptis November 24, 2016 3:54 PM

John J Foelster

Congrats. You have a really funny sense of humor.

Contrary to popular belief, the Federal Election Commission and the Election Assistance Commission aren’t staffed entirely by imbeciles.

The GEMS server should be checked to ensure that only Winzip v11, Adobe Acrobat 8, Adobe Audition 2.0 or Sony SoundForge 8 and Nero Burning ROM 8 are the only applications loaded on the server.

Now I see the light. Those are indeed indicators for an impressively high level of competence! Even the toughest of us would have a hard time to come up with more rigorous security demands.

Not meaning to spoil the highest-security-evar! fest, I would like to draw attention to a little detail (emphasis mine):

both the MD5 and SHA hash functions should be calculated

Wake up!

All that shit-bingo is worth exactly nothing; actually it can work against you.

Watch: “desired result inc” machines arrive. They are “checked”. Early in the morning on election day I change the software. The same evening I reinstall the official version.
Next day I look you straight in the eyes and say “Everything must be correct. We did everything according to guidelines”.

Or maybe I change your md5 and sha2 checkers and then install leisure larry, of course, after “verifying” its checksums are OK. We wouldn’t want to play leisure larry on an unchecked election machine, would we.

Or maybe I don’t care about all that super-duper-totally-checked election software at all and simply change the votes directly at the data level.

Have fun with your undertaking. I certainly had.

ab praeceptis November 24, 2016 4:42 PM

Thanks to certain contacts in the alaska secret service I happened to get my hands on the fec/eac election machine checksum verification code. Here it is:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
printf(“FEC/EAC official checksum checker. Tampering strictly forbidden!\n”);
printf(“V. 42. Selfverified. XR Code: ff42beef \n”);
if(argc < 2)
printf(“Checking all sensitive files.”);
else
printf(“Checking file ‘%s’.”, argv[1]);
printf(“Setting up verification engine …\n\n”);
sleep(1);
/* Show them we’re dead serious and very, very competent /
printf(“Cross verification (Mussorgsky-Schiller muon collision cross verification)\n”);
sleep(2);
printf(“Result: .\n\nChecksums match: 8d3c886d2c5a5a78a29e4ba86eae298e\n”);
/
Now, let’s finally play leisure larry */
return 0;
}

ab praeceptis November 24, 2016 4:44 PM

Sorry. HTML stupid me …

Here you go

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    printf("FEC/EAC official checksum checker. Tampering strictly forbidden!\n");
    printf("V. 42. Selfverified. XR Code: ff42beef \n");
    if(argc < 2)
        printf("Checking all sensitive files.");
    else 
        printf("Checking file '%s'.", argv[1]);
    printf("Setting up verification engine ...\n\n");
    sleep(1);
    /* Show them we're dead serious and very, very competent */
    printf("Cross verification (Mussorgsky-Schiller muon collision cross verification)\n");
    sleep(2);
    printf("Result: .\n\nChecksums match: 8d3c886d2c5a5a78a29e4ba86eae298e\n");
    /* Now, let's finally play leisure larry */
    return 0;
}

Md5 AND sha1 OMG WOOT SECURITY November 24, 2016 7:18 PM

There you have it folks, unless the tin foil hatters who make baseless claims about DES, Skipjack, md5 and sha1 being broken are right, there’s no way anyone could hack the election.

But please try to keep in mind that republican=democrat=labour=tory=hates your liberty, your freedom, your inalienable human rights, and that they all want to kill every single person who dares to publish non-backdoored software implementations of FDE, TLS, VPNs, and Tor.

Unless there’s a libertarian candidate who gives a damn if it’s rigged or not? You can choose between pure evil and pure evil. Welcome to black and black morality (calling it black and gray is way too optimistic).

ab praeceptis November 24, 2016 7:33 PM

How can you say such evil things?

They have adobe acrobat security!

What I find a little suspicious, though, is that there is neither mandatory AV nor 32-bit RSA.

Ratio November 25, 2016 1:20 AM

@John J Foelster,

Is he always like this?

No, I don’t think he’s posted code that doesn’t compile before.

The rest is more or less par for the course though.

Thoth November 25, 2016 3:48 AM

@all

I have found another snake oil and possibly NOBUS backdoored product linked below. For your safety sake, please avoid this possibly NOBUS backdoored product especially for US and UK citizens.

The reason is that it has two firmware versions: an international version supporting weak encryption (export grade, I presume), with other security features like plausible deniability and self-destruct codes excluded, while the US version has all the “goodies”. Due to the inclusion of the “goodies” in the US version, I want to point out that it might be a trap for US citizens.

When are stupid things like putting in “secure backdoors” and using “export grade ciphers and keys” ever going to be removed from the mentality of some security engineers who love to kowtow to the corrupted “Powers That Be”? Export grade and secure backdoors have done us no good at all, and the Clipper Chip is a prime example that some people don’t seem to learn from.

Link: https://crp.to

ab praeceptis November 25, 2016 3:51 AM

Ratio

My special friend (who for a reason knows much about perfidious snakes) is rearing his head again.

Yes, the code doesn’t compile. That is, the code as seen here. And I openly stated the reason: I’m stupid in html and didn’t think of using html codes for <> needed for the #includes as well as for the if.

Being at that: Kindly provide a link to any code that you posted here and that is yours.

Frankly, one must be quite malevolent and a major a* to pick on my funny little feceac checker knowing quite well that the fault isn’t in the code but in html – and that after I myself immediately told that I got the html wrong because I’m an html idiot.

Thoth November 25, 2016 3:53 AM

@ab praeceptis

Wow, MD5 checksum. Why not use ROT13 for “encryption” and CRC16 for checksum since no one could get close to the CCTV surveilled “secure room” and pass it off as security checksum and ciphering.

ab praeceptis November 25, 2016 4:00 AM

Thoth

Tz, didn’t you read what I wrote about the dschörman crypto hacking cyber center?

I warned all of you right there that soon rotl13 is going to be cracked by the dschörman cyber center!

Not with me! I’ll throw 32-bit RSA at them. That’ll keep us secure and them busy for some decades.
As for CRC16 you are right, thank you. That will deny them any possibility to fumble with our data without us noticing. Great idea.

Wael November 25, 2016 4:13 AM

@ab praeceptis,

My special friend (who for a reason knows much about perfidious snakes) is rearing his head again.

That means your friend is “fronting his rear”. In other words, he is mooning you! With friends like that …

ab praeceptis November 25, 2016 4:33 AM

Wael

Your word games are lost on me. My english is simply not good enough.

(Disclaimer: This is a neutral statement with a friendly tendency).

As for serpent, lucifer, and similar. Indeed, I often notice that there seems to be a preference for choosing terminology from that corner. Daemons (and daemon logos), devils and devil forks.

Is there more behind that? Me not know. Not my field. But it’s hard to not notice it.

Wesley Parish November 25, 2016 4:39 AM

And (grand orchestral flourish – trumpet roll and blare of drums, sheep barking and meowing in fright followed by the terrified bleating of a thoroughly terrified and confused dog!) Now For Something Completely Different!

For those worried about economic security, you might like to consider the probability that the Danes, the Finns and others know something that US President-Elect Donald Trump doesn’t.

To wit, said President-Elect is a climate-change denier and is set to undo whatever little the Obama administration did on climate change.

You can see the re-gearing needed in the various internal climate change treaties as either a threat or an opportunity. When a significant part of the world sees it as an opportunity, and you see it as a threat, they’ll take the opportunity, and you won’t.

All that the PRC, for example, needs to do to get its revenge on President-Elect Donald Trump’s slanders about China and climate change hoaxes, is to up its research and development of “greener” technology – less polluting, more efficient – and corner the market. They do have some very strong incentives, after all – Beijing’s not exactly the poster boy for smog-free skies.

It’s hard being the only buggy-whip manufacturer in town when there aren’t any horse-drawn buggies anywhere within a ten-thousand-mile radius.

Wael November 25, 2016 4:58 AM

@ab praeceptis,

Your word games are lost on me.

You must have a hell of a time with @Clive Robinson, then 🙂

My english is simply not good enough.

Neither is mine.

Is there more behind that?

Sometimes, yes. Sometimes we’re just being goofy. Serpent, Lucifer: Crypto algorithms. You have to follow the thread of conversation to see how the discussion evolved (or deteriorated) to this. Daemon, Daemon fork: The FreeBSD mascot; a word play on demon, daemon being a background process… you can find the details on a wiki or something.

“rearing his head”. If the head is on a person, then if he rears his head to you, which can mean turning his face away from you, then his butt faces you. Look up what “mooning” means.

Sometimes we want to say something that would otherwise be flagged by the host as inappropriate, so we have to “disguise it” a bit.

Ratio November 25, 2016 5:03 AM

@ab praeceptis,

Kindly provide a link to any code that you posted here and that is yours.

Hmm, don’t think it’s been more than these tiny bits of Python. Should I have been posting more code? Anything in particular? A language or paradigm?

Frankly, one must be quite malevolent and a major a* to pick on my funny little feceac checker knowing quite well that the fault isn’t in the code but in html – and that after I myself immediately told that I got the html wrong because I’m an html idiot.

Pick on it? I was just surprised that it didn’t compile.

I guess you should have taken that code a bit more seriously. Next time please also include a formal specification and do some model checking before posting. Can’t be playing around with the electoral process like that!

Wael November 25, 2016 5:14 AM

@Ratio,

Anything in particular? A language or paradigm?

I’d like to see an object-oriented Dos batch file “program” please 🙂

ab praeceptis November 25, 2016 5:36 AM

Ratio

I’m – positively – surprised. It seems there was a humorous undertone towards the end.

The problem though is that I don’t know of a good static verifier and spec tools for html. The one I currently use is this blog’s comment section, but it’s a little short on error messages, limiting itself to telling me “You html idiot!”

Ratio November 25, 2016 6:44 AM

@Wael,

Sometimes we’re just being goofy. Serpent, Lucifer: Crypto algorithms.

You missed The Camel. If you didn’t get that one… 🙁

You have to follow the thread of conversation to see how the discussion evolved (or deteriorated) to this.

The discussion didn’t evolve, it was designed! Can you imagine the odds..? (I’ll shut up now.)

Daemon, Daemon fork: The FreeBSD mascot; a word play on demon, daemon being a background process… you can find the details on a wiki or something.

In Greek mythology daemons are (good or evil) spirits. Think “jinn” and you’ve got the idea.

I’d like to see an object-oriented Dos batch file “program” please 🙂

Sure. First make sure to set COMSPEC to… 😉

@ab praeceptis,

It seems there was a humorous undertone towards the end.

Nein! Doch!

The problem though is that I don’t know of a good static verifier and spec tools for html.

Betcha Nick P’s already on it.

ab praeceptis November 25, 2016 6:55 AM

Ratio

“Nick P is already at it”

And you? Where are you? It seems to me that Wael’s task for you and Nick P’s work would match perfectly. Object-oriented DOS batch scripting is all but screaming to be used for that static analyser and formal spec tool for html.

Oh and: No exceptions, please. (In case I failed: This was an attempt at an insider pun: OO … exceptions)

Wael November 25, 2016 7:17 AM

@Ratio,

You missed The Camel. If you didn’t get that one… 🙁

No, I got it. Just didn’t list it!

The discussion didn’t evolve, it was designed! Can you imagine the odds..? (I’ll shut up now.)

I can imagine that, yes 🙂 If I don’t comment on the COMSPEC, it doesn’t imply I didn’t get it.

@ab praeceptis,

Oh and: No exceptions, please. (In case I failed: This was an attempt at an insider pun: OO … exceptions)

We tried to catch what you meant. But thanks for the clarifying comment, otherwise it would have unwound our stacks.

Nick P November 25, 2016 12:18 PM

@ Wael

“Betcha Nick P’s already on it.”

The MySpace amateurs got HTML right more often than wrong. It doesn’t really need a static verifier when WYSIWYG editors with preview do just fine. I used Dreamweaver back in the day, but there’s FOSS these days. Plus, basic HTML with CSS is Turing-incomplete. That leaves you with fewer vulnerabilities if you’re sticking to those over Web 2.0 with JS. People really wanting it could use Understand or Yasca, given they support HTML. CompSci occasionally does something with it, like this. I just read the abstract as it’s not my thing.

@ All

Bruce’s essay on regulating the IoT got a lot of popularity (and criticism) recently on Hacker News and Lobste.rs. I wrote a post on Lobsters summarizing my counters to all the important criticisms on the Hacker News thread. I reposted it here since it seems to be the relevant thread for this blog:

https://www.schneier.com/blog/archives/2016/11/regulation_of_t.html#c6739118

Wael November 25, 2016 12:23 PM

@Nick P,

Betcha Nick P’s already on it.

Good guess, I would normally say something like that. This time, it wasn’t me!

Good last link, by the way. A lot to sift through!

Nick P November 25, 2016 12:39 PM

@ Wael

Oops. You caught it just before I posted to correct it. Yes, it was certainly something you would say. Why I assumed it was you.

@ Ratio

My bad. Decent call. I did try static analysis on HTML but ultimately found it unnecessary. 😉

Simone Thomas November 25, 2016 6:30 PM

During one of those rare gems of a Thanksgiving dinner yesterday, at which the conversation was as sparkling as the wine, in a discussion of Internet security and private-industry personal surveillance, an IT professional argued an idea that may be new to more interested people than myself. His point of view was that, while, yes, it would be better if the Internet were anonymous, that time, if it ever existed, has passed, and the laws of entropy say anonymity is no longer achievable. With the pervasive use of device fingerprinting, and there’s always something new, there is no anonymity and the war for it is irretrievably lost.

What we have now is total identification on the side of government and industry, both with interests irreconcilably in conflict with personal privacy and democracy, and near-total public ignorance of the peril to civil rights.

He says, and it took me a while to come around to this idea but I am now convinced of it, that the only solution is to eliminate the illusion of anonymity and make everyone’s Internet activity completely, publicly, and intentionally personally identifiable.

He points out that this would not only warn people of their risk, it would create an incentive for the kinds of laws that once limited such things as tapping telephones without judicial approval and barred prosecutors from collecting the lists of books a suspect had checked out of the library, two protections that from today’s perspective seem to belong to some remote, halcyon time. It would also undermine the business of corporations that use hidden mass surveillance to extract profits from personal information not willingly made available to them, inordinately concentrating the wealth of modern societies into a few hands and undermining democracy as a direct result.

This strikes me as a radical, game-changing solution to what has become an intractable and dangerous public problem, and legislation to that end is something Senators Ron Wyden, Patrick Leahy, and other congressional champions of civil liberties might find allies across the aisle to advance, if for another set of reasons.

Tom Sayer November 25, 2016 7:30 PM

@Simone Thomas,

How are ‘they’ going to implement authentication?

One of those plug-like things from The Matrix?

‘They’ would need to have just a teensy bit more omnipresence than they do over Philadelphia.

Ratio November 26, 2016 5:10 AM

@ab praeceptis,

It seems to me that Waels task for you and Nick P’s work would perfectly match. Object oriented DOS batch scripting is all but screaming to be used for that static analyser and formal spec tool for html.

In this instance, given the properties of this class of problem, I thought I’d delegate to @Nick P. Apparently he sees no need for this. Says it has no value. Who’d have thunk? Some people might object to his message, but he won’t get any argument from me.

@Nick P,

CompSci occasionally does something with it like this. I just read abstract as it’s not my thing.

It feels like they’re solving the wrong problem? You’d want to know if, for some template language, “filling in” a given template with arbitrary data always results in valid HTML. Or how invalid HTML could be generated.
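One cheap approximation of that property, short of anything formal, is property-based testing: feed the template arbitrary data and assert the output keeps the intended structure. Here is a sketch using the third-party hypothesis library and Python’s html module; render_profile and the test name are made up for illustration.

    # Property-based sketch: does filling the (made-up) template with arbitrary data
    # always yield the intended tag structure?  Requires: pip install hypothesis
    import html
    from html.parser import HTMLParser
    from hypothesis import given, strategies as st

    def render_profile(name, bio):
        # The html.escape() calls at the fill-in points are what make the property hold;
        # drop one and an input containing markup (e.g. "</h1>") changes the structure.
        return (f'<div class="profile"><h1>{html.escape(name)}</h1>'
                f'<p>{html.escape(bio)}</p></div>')

    class TagCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.tags = []

        def handle_starttag(self, tag, attrs):
            self.tags.append(tag)

    @given(st.text(), st.text())
    def test_structure_is_independent_of_data(name, bio):
        collector = TagCollector()
        collector.feed(render_profile(name, bio))
        collector.close()
        assert collector.tags == ["div", "h1", "p"]

    if __name__ == "__main__":
        test_structure_is_independent_of_data()   # hypothesis runs many random cases

Template engines with contextual auto-escaping are, in effect, trying to make that property hold by construction rather than by testing.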

@P2P,

Why copy-paste your comment all over the place?

Wael November 26, 2016 12:01 PM

@Ratio,

…he sees no need for this…

Clever. I hate to pee in your cereal, but you forgot to italicize “for” 😉

WhiskersInMenlo December 16, 2016 2:22 PM

V2V….
https://www.wirelessdesignmag.com/blog/2016/12/dot-proposal-requires-v2v-tech-all-new-cars
“On Tuesday, the U.S. Department of Transportation (DOT) announced a proposed rule, requiring the inclusion of vehicle-to-vehicle (V2V) communication technology in new cars. This Notice of Proposed Rulemaking would enable a “multitude of new crash-avoidance systems that, once fully deployed, could prevent hundreds of thousands of crashes every year,” according to a statement.
…..
“The rule would also integrate extensive privacy and security controls, preventing the technology, which operates on a 75 MHz band of the 5.9 GHz spectrum, from linking any information to individuals. The current proposed design employs a 128-bit encryption, compliant with the National Institute of Standards and Technology (NIST).

“Also on the docket is the NHTSA’s plan to issue guidance for vehicle-to-infrastructure (V2I) communications, which would allow vehicles to “talk” to traffic lights, stop signs, and other roadway infrastructure, reducing congestion and improving safety.”

====
I am not fully convinced that this press release is well informed. In many ways it does not pass the sniff test.

Since roadway infrastructure already has cameras, this seems to have been authored in a state where wacky tobaccy is sold.
Crash avoidance commonly involves stopping, so any vehicle can be stopped, including police vehicles.
Of the hundreds of thousands of accidents this would avoid, how many of 2015’s 35,092 fatalities would it have prevented?
How many of the nearly six million vehicle accidents involve multiple vehicles?

In 2009: of the 7,945 people who had died over the previous five years in Virginia, Maryland and the District, 58.9 percent were in single-vehicle crashes.
http://www.washingtonpost.com/wp-dyn/content/article/2009/10/13/AR2009101301973.html
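Putting those two figures together, very roughly (this only applies the regional 2009 single-vehicle share to the national 2015 total, which is a stand-in for illustration, not an estimate):

    # Illustrative arithmetic only: regional 2009 share applied to the national 2015 count.
    fatalities_2015 = 35_092            # national traffic deaths, 2015 (figure cited above)
    single_vehicle_share = 0.589        # VA/MD/DC single-vehicle share from the WaPo article

    multi_vehicle_fatalities = fatalities_2015 * (1 - single_vehicle_share)
    print(round(multi_vehicle_fatalities))   # ~14,423 deaths that V2V could even plausibly touch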

If “Stuxnet” taught us anything, it’s that industrial-control hacks are real and can jump air gaps. Deploying RF-connected tech carries risks that can spread vastly faster than interstate highway speed. The EPA discovered far too late the deliberate tomfoolery to game emissions in millions of imported vehicles.

In isolation, I like much of the good intent here. Taken as a set, the risks are serious and ill considered.

On the positive side, I would recommend that passive corner-cube radar reflectors and IR reflectors be moulded into rear bumpers and side panels. Corner-cube reflectors make wood and fiberglass sailing boats visible at a distance to the radar of large commercial ships as well as rescue helicopters and ships. Such passive reflectors can allow driver-assist devices to see traffic better. Retrofits on truck bumpers and replacement vehicle bumpers could be routine. Tuned arrays of wire on bumper stickers would be inexpensive and could also be part of license plates. Such passive signal enhancements might help a lot more for less money. Tuned wires can be woven into clothing and canes for pedestrians too.

I have already seen evidence that law enforcement ignores electronic emissions from vehicles.
Headlights out of alignment can be spotted visually in a rear-view mirror. (Yes, light is an electromagnetic wave… 😉
Deployed high beams are obvious. Lamps bumped out of alignment are obvious.
Fog lights reflecting off wet pavement up into oncoming traffic are obvious, yet the glare endangers pedestrians and oncoming traffic much as high beams do.

SAC February 24, 2017 6:11 PM

@WhiskersInMenlo

The rule would also integrate extensive privacy and security controls, preventing the technology, which operates on a 75 MHz band of the 5.9 GHz spectrum, from linking any information to individuals. The current proposed design employs a 128-bit encryption, compliant with the National Institute of Standards and Technology (NIST).

But nothing short of a social security number is considered personally identifiable information anymore.
128-bit encryption effectively becomes 64-bit against a quantum attacker running Grover’s algorithm, and quantum computers keep getting more capable.
SHA-1 was broken without quantum computers, and that’s a 160-bit hash.
NIST recommended the backdoored Dual_EC_DRBG random-number generator, so how does “NIST approved” make anything safe?
That episode of GITS (Ghost in the Shell) where all the people in their smart cars are held hostage by a virus that turns the cars into a botnet will become reality if the only thing preventing it is a 128-bit NIST standard.
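For rough numbers behind the quantum and SHA-1 points, a back-of-the-envelope sketch only (the function is mine; the 2^63 figure is the published estimate for the SHAttered attack):

    # Back-of-the-envelope numbers only (not a security analysis).
    def grover_effective_bits(key_bits):
        # Grover's algorithm searches 2^n keys in roughly 2^(n/2) quantum iterations,
        # so the effective symmetric-key security is about halved.
        return key_bits // 2

    print(grover_effective_bits(128))   # 64  -- the "128 becomes 64" point above
    print(grover_effective_bits(256))   # 128 -- why AES-256 is the usual post-quantum hedge

    # SHA-1 has a 160-bit digest, so a generic birthday collision costs about 2^80 work.
    # The SHAttered collision (February 2017) reportedly took roughly 2^63 SHA-1 computations,
    # i.e. about 2^(80 - 63) = 131072 times less work than the generic bound.
    print(2 ** (80 - 63))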
