Friday Squid Blogging: Giant Squid Video from the Gulf of Mexico

Fantastic video:

Scientists had used a specialized camera system developed by Widder called the Medusa, which uses red light undetectable to deep sea creatures and has allowed scientists to discover species and observe elusive ones.

The probe was outfitted with a fake jellyfish that mimicked the invertebrates’ bioluminescent defense mechanism, which can signal to larger predators that a meal may be nearby, to lure the squid and other animals to the camera.

With days to go until the end of the two-week expedition, 100 miles (160 kilometers) southeast of New Orleans, a giant squid took the bait.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on January 3, 2020 at 4:25 PM • 120 Comments

Comments

Lax January 3, 2020 5:59 PM

https://www.snopes.com/fact-check/trump-iran-tweets-obama/

Someone who we know is not trustworthy as a source of independent information says they’re relying on secrets no one can legally confirm (because he blocked oversight) about a “pending” attack (never mind that they attack all the time anyway) of such “grave seriousness” that it would require quasi-illegal (Iraq = sovereign) imminent military action (never mind without consulting Congress either), assassinating the VERY TOP government / military leadership of a country we’re already on the brink of war with.

As if that would, plausibly, lower the threat of imminent, serious attacks?

Then let us review the video record of that person’s historical comments on the topic, shall we?

Note in passing that he’s being impeached and faces political and legal oblivion of a seriousness and impending nature “the likes of which no one has ever seen.”

The predicated conditions of this tension include a previous war, one based on verifiably false information presented as fact and “imminent” threat.

There has been no independent verification of this particular casus belli either.

“The full list of Trump’s proffered rationales for an Obama-led attack on Iran eventually encompassed all of the following:”

“to get elected” (i.e., to win re-election)
“because of his inability to negotiate properly”
“to show how tough he is”
“to boost his poll numbers”
“to save face”

General Suleimani was walking around for decades with everyone in the western intelligence apparatus knowing exactly what he was up to, what he was in fact doing, where he was, and what he would be planning to do in the future. Timing is everything, I guess.

Side note, speaking of timing, look at the imminence of the threat right now?
Less threat? Less serious, less imminent or targeted at soft targets?

War, me worry?

name.withheld.for.obvious.reasons January 4, 2020 12:14 AM

To avoid hypocrisy the U.S. must stop funding the National Science Foundation, educational institutions, and all future-looking technological fields (biology, chemistry, physics, engineering, and basic research). My call is that the BSD movement (Botched State Decisions) be enjoined to divest from these industries and sectors of the market. We are suffering from an intellectual apartheid; the Neo-fascists have a disdain for deliberated thought and cognizant behaviors.

The U.S. now operates like the oligarchies of 16th and 17th century Europe, not unlike the near-east powers of the 14th and 15th centuries or the theocratic states of that era.

Wonder how the U.S. compares to, say, intelligent life somewhere in the universe.

My bet is that there is a form of intelligent life that has a planetary presence somewhere in our galaxy or within the local group.

My hopeful observation:
[it is obvious to me] there is no intelligent life on the planet known as the “Earth”.

Squid-indifferent January 4, 2020 2:44 AM

Mr Schneier, could you introduce another RSS feed on your blog, one which excludes articles about seafood?

JG4 January 4, 2020 6:52 AM

Wishing everyone a Happy New Year, irrespective of their beliefs about when it starts.

https://www.nakedcapitalism.com/2020/01/links-1-4-2020.html

How to Make a Tree With Fractals Wired (David L)

A new mathematical model predicts a knot’s stability PhysOrg (Kevin W)

On-chip integrated laser-driven particle accelerator Science (Chuck L)

Terrorists could make a ‘dirty bomb’ from this common medical device; why regulators won’t act PhysOrg (Robert M).

Big Brother is Watching You Watch

#MeTooBots that will scan your personal emails for ‘harassment’ are an Orwellian misuse of AI RT (Kevin W)

Company shuts down because of ransomware, leaves 300 without jobs just before holidays ZDNet (Chuck L)

Google cuts off Xiaomi smart camera access after bug showed photos of strangers’ homes CNET (BC)

Y2K20 Parking Meter Software Glitch Causes Citywide SNAFU Gothamist. Chuck L: “The Internet of Shit.”

Clive Robinson January 4, 2020 10:00 AM

@ Thoth, maqp,

You and the TFC have been mentioned

You missed mentioning “cwtch.im”; the design is an apparent filch of the “Fleet Broadcast” methodology.

Which means one or both of two things,

1, They read the idea here, liked it and incorporated it.

2, They came up with the idea independently, liked it and incorporated it.

Either way it’s a vote of confidence in the methodology 😉

Oh, and another idea, of XORing a number of messages together from two different servers and then XORing the two privately to get the actual message you want, is not new at all. I’ve discussed it here in the past some time ago. The basic idea behind it is an extension of what used to be an extension of a “book code”.

Put simply it’s a stream cipher, no different to an OTP, but how you make the pad is based on a shared book collection. To make the keystream / pad you have a method by which you agree a subset of at least four books in the shared collection, as well as, for each book in that subset, a page, paragraph, sentence and character offset that gives the start of the text-stream to use. You then apply a simple statistics-flattening / compression algorithm to each of the text-streams, then XOR the resulting compressed texts together to make the keystream / pad.

It has been hypothesized in the past that as long as the shared book collection remains secret, the system is effectively unbreakable in any meaningful time frame.
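A minimal sketch of that pad construction, with zlib standing in for the statistics-flattening step and the page / paragraph / sentence lookup reduced to a simple character offset (an illustration of the idea only, not a recommendation):

    import zlib
    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def flattened_stream(text, offset, length):
        # Stand-in for the "statistics flattening / compression" step: compress a
        # generous slice of the agreed text and keep the first `length` bytes.
        out = zlib.compress(text[offset:offset + 8 * length].encode())
        assert len(out) >= length, "agreed slice too short to flatten"
        return out[:length]

    def keystream(books_and_offsets, length):
        # XOR one flattened stream per shared book together to form the pad.
        return reduce(xor_bytes, (flattened_stream(text, off, length)
                                  for text, off in books_and_offsets))

    # usage: ciphertext = xor_bytes(plaintext, keystream(agreed_subset, len(plaintext)))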

What the method proposed in the talk does is, instead of using books, use messages and send the keystream as a combined cipher text. You pick a group of messages from one server and a slightly different group from another server, the difference being the unknown message you want to download.

What the talk did not mention is that, as you have previous messages stored locally, you can use those as cancelling texts. That is, if you already have message 1024 you can request it again as one of the messages to include in one of the streams from one of the servers. Thus if someone gets to see the messages you select from each server (say admin collusion), rather than there being just one difference between the two streams, you could have as many differences as you have already-downloaded messages. Thus you and the person you are communicating with can use multiple sets of servers which can have different owners, admins and locations, thus making collusion very difficult if not impossible.
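To make the two-server XOR trick concrete, a toy sketch with fixed-length messages and a made-up in-memory store (not the cwtch.im code, just the cancellation idea):

    import secrets
    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    MSG_LEN = 64
    store = {i: secrets.token_bytes(MSG_LEN) for i in range(16)}   # hypothetical message store

    def server_combine(indices):
        # What each server returns: the XOR of every message it was asked for.
        return reduce(xor_bytes, (store[i] for i in indices))

    # The client wants message 7. The two request sets differ only by that index,
    # so every other message cancels out when the two combined replies are XORed.
    cover = [1, 3, 9, 12]                    # messages already held, used as cancelling texts
    reply_a = server_combine(cover + [7])    # request sent to server A
    reply_b = server_combine(cover)          # request sent to server B
    assert xor_bytes(reply_a, reply_b) == store[7]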

SpaceLifeForm January 4, 2020 1:50 PM

@ Clive

“Thus you and the person you are communicating with can use multiple sets of servers which can have different owners, admins and locations, thus making collusion very difficult if not impossible.”

The key point is the multiple sets.

If the users randomly rotate thru different sets, it will really help to defeat traffic analysis.

Random traffic, but not constant traffic.

To conserve server bandwidth, which is where the cost is.

SpaceLifeForm January 4, 2020 3:24 PM

“It seemed the tendency of hearing from like-minded people trapped them in their own opinions.”

hxxps://arstechnica.com/science/2020/01/its-the-network-stupid-study-offers-fresh-insight-into-why-were-so-divided/

[This is why I avoid insane people, BTW]

Give me Silence, or Give me Insanity!

(Give me Liberty, or Give me Death!)

Anders January 4, 2020 3:35 PM

news.microsoft.com/transform/hackers-hit-norsk-hydro-ransomware-company-responded-transparency/

SpaceLifeForm January 4, 2020 4:34 PM

@ Clive

When you awake, please revisit the drone article, and educate Anders and myself.

I feel a disconnect in the force.

I think we may both be wrong.

But, you know RF, so please elaborate.

Thank You. Happy New Year.

SpaceLifeForm January 4, 2020 5:32 PM

AWS again. Are IT people really this stupid?

“Apart from listing systems and users, adversaries could also take control of the Amazon Web Services (AWS) account, execute commands on systems, and add or remove users with access to the internal systems.”

hxxps://www.bleepingcomputer.com/news/security/starbucks-devs-leave-api-key-in-github-public-repo/

Seriously, you can not be this stupid.

You just can’t. If this is not fake news, then whoever did this should not be allowed near a PC. Probably should not have a cell phone either.

SpaceLifeForm January 4, 2020 6:11 PM

@ Clive

Thank you for explaining.

As I suspected, both Anders and myself did not understand the entire EMF picture.

Thoth January 4, 2020 11:23 PM

@Clive Robinson, all

“You missed mentioning “cwtch.im”; the design is an apparent filch of the “Fleet Broadcast” …”

I got curious about Cwtch, but was busy with restarting my year’s work again, so will look at it when I have the time to do so.

Good to know that somebody is possibly actively listening, or chanced upon our ideas, or somehow thought of it themselves; whatever the case, at least there is now an experimental practical implementation of the Fleet Broadcast method.

Would be interesting if @maqp could integrate a Cwtch client into the TFC setup, besides the usual OTR chat, as an additional option for robust one-way data-diode-protected secure comms too.

Also, a scary thought, and I believe somebody brought up this topic here before: the person behind Signal (not gonna name them specifically) said in a 36C3 talk that he wants to centralize everything.

I am starting to think that anybody wanting real Anti Data Peeping Tom / ADPT (as you suggested, to stop using the words “Data Security”) should be able to implement secure protocols that are inter-operable across different clients and devices, instead of being led by somebody and dictated to on how it should be done.

Projects like TFC and hopefully Cwtch would provide ADPT protection while running on user-chosen platforms and devices, and also on the intermediary services (servers), without discrimination or bias (i.e. Signal, WhatsApp, Apple iMessage, Skype et al.).

For those wanting to implement secure comms, here are some guidelines I would like to suggest:
– Use proven cryptographic primitives that are easy to implement and can be implemented in embedded devices (i.e. smart card and TPM operational perimeters for very low SRAM and EEPROM storage)
– Use routing methods that are deniable
– Use peer discovery and routing protocol frames that are simple and can be communicated in packets not exceeding 256 to 512 bytes per packet if possible, for purposes of compatibility with smart card and embedded implementations
– Use cryptographic algorithm suites that are simple (i.e. ChaCha20-HMACSHA256, AES256-CTR-HMACSHA256 ….) without entering the ciphersuite complexities of SSL/TLS, SSH and the rest …
– HMAC is chosen above due to it being a rather common crypto primitive that is easily available.

Combine the above suggestions with a data diode, with an attached secure crypto processor unit per data diode, and it would be a pretty good setup, although it would be better if EM shielding could be added so the whole thing fits into a tin can small enough for soft drinks. A rough sketch of the kind of simple encrypt-then-MAC suite suggested above follows.
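A minimal sketch, assuming the Python cryptography package, with made-up function names (seal / open_sealed) and with key management, framing and nonce strategy all left out; an illustration of the ChaCha20 + HMAC-SHA256 encrypt-then-MAC idea, not TFC or Cwtch code:

    import os, hmac, hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    def seal(enc_key, mac_key, plaintext):
        nonce = os.urandom(16)    # this ChaCha20 API takes a 16-byte counter||nonce block
        enc = Cipher(algorithms.ChaCha20(enc_key, nonce), mode=None).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag   # encrypt-then-MAC: the MAC covers nonce and ciphertext

    def open_sealed(enc_key, mac_key, blob):
        nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
        if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
            raise ValueError("MAC check failed")
        dec = Cipher(algorithms.ChaCha20(enc_key, nonce), mode=None).decryptor()
        return dec.update(ct) + dec.finalize()

    # usage: enc_key and mac_key are two independent 32-byte keys
    # blob = seal(enc_key, mac_key, b"hello"); assert open_sealed(enc_key, mac_key, blob) == b"hello"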

RUSure January 4, 2020 11:50 PM

“Good to know that somebody is possibly actively listening or chanced upon our ideas or somehow thought of it themselves”

An idea that you didn’t claim to come up with? Do they exist?

Gunter Königsmann January 5, 2020 4:30 AM

The XOR method sounds like storing an OTP in a half-trusted place. There are rumors that the Austrian empire tried something like that in WW1, by using the 2nd volume of an unsuspicious but not widely distributed book as an OTP. Unluckily, the office that bought the books for the decryption side, not knowing (for security reasons) what they were to be used for, determined that volume 1 would be more useful and provided that one instead.

Chris January 5, 2020 7:43 AM

Reflections on my New Year resolutions: so far so good.
– Realized that not using JavaScript is probably too hard,
so I have a default JavaScript blocker and some exceptions;
the exception list is now at 19 entries, so not too bad all in all.
I ended up using the Firefox plugin Disable JavaScript, since it does that well without being too complex.

  • Cloudflare: well, what I’ve done so far is to block the IP ranges for Cloudflare on the outgoing firewall; there are official and unofficial lists.
    From what I can see the official ones should be OK; the unofficial ones don’t seem to understand how subnet bitmasks work.
    Then I use 2 different plugins, one called Detect Cloudflare and one called Block Cloudflare MITM Attack, which seems to redirect to an archived version of the page if one exists. If I still badly want to see what’s on a Cloudflare-hosted site, which sometimes does happen, there are 2 more options:
    I can use the Startpage search cache or Tor Browser; sometimes it works, sometimes it won’t, but all in all Cloudflare is gone. I hope the Searx project still makes a plugin for this someday, since the best option would be to not see the Cloudflare-hosted sites in the search results at all.
  • Manjaro: well, easy, and very happy with Manjaro so far, but I take it slowly,
    since there is some learning curve, mostly with systemctl, journalctl and systemd in general, crontab etc. It’s fun, and I have turned from hostile towards systemd to actually liking systemd. Also the pacman thing is new
    to me, and I like it, now that I don’t have to think about apt-get and PPAs.
    I wonder why I haven’t known about this earlier, and AUR rocks, it truly does.

Yeah, totally off-topic ranting, but that’s the progress. Have a good year and
keep it safe

//Chris

Anders January 5, 2020 9:22 AM

@Chris

OK, understood.

But just blocking JS is not a good direction either.
We need to start a movement and achieve a paradigm change
towards a JS-less web 🙂

Chris January 5, 2020 9:31 AM

@Anders
We need to start a movement and achieve a paradigm change towards a JS-less web 🙂

Indeed, I am not into politics. However, I might set up a blog or webpage
with some howtos and whynots, but apart from that I don’t know what else to do.

PS: It’s a pity that NoScript is what it is; I guess it went downhill with FF Quantum.
Dunno, the GUI is too complicated, and I could rant all day long about why I don’t like NoScript,
but I’ll leave it at that

Clive Robinson January 5, 2020 10:40 AM

@ RUSure,

An idea that you didn’t claim to come up with? Do they exist?

If you were a regular reader, not a sniper, then you would know the answer to that without needing to ask.

We are after all discussing things we’ve previously discussed on this blog some time before and you can search for if you wish, as we are open about our thoughts quite often.

As for the answer to your specific question, I’m sure there are. After all, I did not come up with the “book code” idea; that happened long before any of us were alive. But somebody else is trying to claim a variation on it as an apparently original idea, which I’ve been able to show it clearly is not.

Thus the question arises as to whether the idea is useful or not; most times it’s a matter of timing. The book code had its moment in history, and technology found a way to surpass its then usage. Now someone has come up with a different use for what is essentially the same idea, now that we have “idiot savants” so cheaply available in the form of computers to take out the drudge and make old ways practical for all again. So while the use is new the method is not, and as I have experience with ways to use the book code and similar, I can add other knowledge to it. That would make the new usage rather more secure. That, as they say, is how the cookie crumbles.

Several times I’ve come up with new-to-me ideas, then done research and found that others had started down the same route. Some went different ways or into different problem domains, others not as far, others went further. It’s the luck of the draw: as technology reaches a certain point some things become obvious, at first to a few, then to many, and the world moves on as new ideas improve on old methods.

As I’ve pointed out on a number of occasions, one of the problems with high tech is that we are really bad with our history. We forget things we have discovered or learnt as new technology finds better ways to do some subset of things. Thus we end up rediscovering them, or being reminded of them, as new uses of old methods are thought up.

Other than a good memory, having lived through quite a bit of tech history, having researched other tech history, and having a little of what our host @Bruce calls “thinking hinky”, I don’t really have any other special advantages over others, including you.

So you too could be the same if you put the work in, in the right places, but please don’t ask me what they are, because I’ve no idea, other than to be broadly interested in everything.

Clive Robinson January 5, 2020 10:51 AM

@ Gunter Königsmann,

The story is very similar to one I have documented somewhere in my “dead tree cave”.

From what I vaguely remember, the “agent” posed as a traveling salesman in the field of endeavour the reference book was related to.

Though these days we would not use books of tables of numbers, because of Benford’s law,

https://en.wikipedia.org/wiki/Benford%27s_law

Gamgee January 5, 2020 12:49 PM

“1, They read the idea here, liked it and incorporated it.

2, They came up with the idea independently, liked it and incorporated it.”

As a hierarchy of all the considered possibilities, it shows exactly what it does.

Anethics January 5, 2020 1:26 PM

The threat is from the inside.

“Richard Painter, a University of Minnesota law professor who was a White House ethics official under President George W. Bush, said such disclosure rules exist so that the public knows just what financial stake someone advising the president or performing government business has. The point is to keep presidential or other government advisers from secretly enriching themselves through their service, he said.

Painter said Giuliani should have been designated as a formal adviser, to bring his activity under ethics laws.

“The bottom line is, you can’t just delegate any U.S. government function to somebody and simply because they’re not getting a salary from the government, they get to ignore all the conflict-of-interest rules,” Painter said. “That’s a nonstarter in terms of ethics. It’s a disaster.”

There are several federal laws governing his private sector work that would have applied to Giuliani had he held a formal administration position. For example, career federal ethics officials would have reviewed his financial holdings and connections for possible conflicts of interest. In Giuliani’s case, because he didn’t hold an official position, no such reviews were conducted, and there is no documentation revealing how much he may have made — if anything — by being promoted as a White House cybersecurity adviser.”

Clive Robinson January 5, 2020 1:35 PM

@ Gamgee of the large furry foot.

It’s actually a hierarchy of time events.

If you know of the idea being put on the Internet, or in a publication, or in a public recommendation for a standard prior to that, then sing out, but I suspect you won’t, as we’ve been here before.

Interestingly, the article you point to expressly mentions problems with the research,

    A vast majority of the literature on illusory superiority originates from studies on participants in the United States. However, research that only investigates the effects in one specific population is severely limited as this may not be a true representation of human psychology. More recent research investigating self-esteem in other countries suggests that illusory superiority depends on culture.

Put simply, it was found that the issue you refer to is mainly American and due to their social customs / conditions…

Neither I nor @Thoth lives in America or under its social customs.

Which reminds me: your outsized furry foot, is that natural, or because of an outsized sock needed to allow for something else to be shoved in, like a ham fist?

Gamgee January 5, 2020 2:29 PM

“It’s actualy a heirarchy of time events.”

It’s a simple cognitive bias to assume that without actually looking as deeply as would be required to find what is being sought, and it’s actually not the first time you’ve claimed to “invent” entire (fleets of) concepts and taken credit as if it were so.

In the next breath, dismissing cognitive bias because a broad link referred to US-based psychological studies… and no other reason… is pure gold, and a perfect illustration.

Now if you’ll excuse me, I’m inventing the “data triode” because I mentioned it first, according to me. Pip pip cheerio, mr Fraudo.

Thoth January 5, 2020 3:00 PM

@Clive Robinson

I am afraid I don’t have the patience and free time for these online trolls these days; as you know I don’t give up many ideas here anymore.

I prefer to spend the better part of my time on something more constructive (i.e. tinkering with actual practical constructions, interesting Anti Data Peeping Tom creations); I am weeks away from finalizing something interesting.

Clive Robinson January 5, 2020 3:20 PM

@ Gamgee the hamfisted sock puppet,

It’s a simple cognitive bias to assume…

As normal, your entire lack of ability to do research, in favour of just blurting out irrelevance, has come to the fore.

Do you know who William of Ockham (a small village in Surrey England) was?

I suspect not; you don’t seem the type to understand basic principles, research or history. So perhaps I had better tell you, to save you the effort of failing to google correctly as you have in the past.

He was a quite thoughtful Franciscan friar, credited with being a scholastic philosopher and theologian. Historians, scientists and philosophers considered him to be one of the most important figures of medieval thought in the 14th century. However, most important to your education, he is now mostly known for the principle of “Occam’s razor”.

What say you is “Occam’s razor”? As you obviously do not know.

Well it’s the methodological principle of “lex parsimoniae” (law of parsimony). There are various ways you can word it but usually it’s given as,

    Entities should not be multiplied without necessity.

Or informally “The simplest hypothesis is most likely correct”.

Thus “time ordered” is the simplest hypothesis and therefore considered the most likely when weighed against your evidence-free argument.

People who would consider otherwise, as you appear to do, might have caused others to misjudge you by giving you more credit than you are actually due. Thus be advised to give thought to how the Dunning-Kruger effect applies to you, as seen by others who at least know how to form an evidence-based argument.

What you have done though is confirm who you are, and no doubt others will check. I would have thought the ratio of your failures (all) to successes (none) would have stopped you trying by now, but obviously not; you just repeat the same sad process hoping for a different outcome. Perhaps you should also look at Einstein’s definition of madness as well.

So as you have taken on the pseudonym of a Hobbit as your latest sock puppet, I guess the best thing to say to you is “go back in your hole”.

Gamgee January 5, 2020 4:29 PM

“Do you know who William of Ockham (a small village in Surrey England) was?”

Everyone does. You didn’t invent him either.

Anders January 5, 2020 5:19 PM

Something is going on in Austria.

securityaffairs.co/wordpress/96022/hacking/austrias-foreign-ministry-cyberattack.html

Electron 007 January 5, 2020 5:24 PM

@Wrong End of the Gun Barrel

I am afraid I don’t have the patience and free time for these online trolls these days; as you know I don’t give up many ideas here anymore.

I prefer to spend the better part of my time on something more constructive (i.e. tinkering with actual practical constructions, interesting Anti Data Peeping Tom creations); I am weeks away from finalizing something interesting.

That’s right. Guns are banned and we hide under bridges since those university frat boy hackers SWATted us out of our homes, jobs, vehicles, and bank accounts. And how many years are we supposed to serve in prison just because of some stupid prank or another, which some frat boy or sorry girl chose to play on us?

https://www.wired.com/story/how-to-stop-swatting-before-it-happens-seattle/

Anti Data Peeping Tom/ ADPT

That does not require an acronym to sound smart to the uninitiated.

some guidelines I would like to suggest:
– Use proven cryptographic primitives that are easy to implement and can be implemented in embedded devices (i.e. smart card and TPM operational perimeters for very low SRAM and EEPROM storage)

Someone hands me a “smart card” — which will attract a lot of unnecessary suspicion from FBI and local cops etc. — and I am supposed to “trust” that it is not compromised and that it does what it is supposed to …

– Use routing methods that are deniable

Such as https://tools.ietf.org/html/rfc4941 Privacy Extensions for Stateless Address Autoconfiguration in IPv6?

– Use peer discovery and routing protocol frames that are simple and can be communicated in packets not exceeding 256 to 512 bytes per packet if possible, for purposes of compatibility with smart card and embedded implementations

That might be appropriate if one is actually communicating with smart cards or other embedded applications that are incapable of larger packet sizes, but otherwise standard Ethernet frames of 1500 bytes or so do not seem excessive.

– Use cryptographic algorithm suites that are simple (i.e. ChaCha20-HMACSHA256, AES256-CTR-HMACSHA256 ….) without entering the ciphersuite complexities of SSL/TLS, SSH and the rest …
– HMAC is chosen above due to it being a rather common crypto primitive that is easily available.

“Easily available” is not necessarily just because it’s on djb’s website.

Combine the above suggestions with a data diode, with an attached secure crypto processor unit per data diode, and it would be a pretty good setup

Sneakernet. Right. Wipe that thumb drive before you let it out of your highly secured man cave. I really do not like “setups” when frat boy cops are playing stupid internet games with “suspect” citizens.

Clive Robinson January 5, 2020 5:58 PM

@ Electro,

The key paragraph in the EMP article is,

    Without making relatively inexpensive fixes to the electric grid and military bases to protect against an EMP attack, Pry said that the end could come fast.

I’ve been going on about putting in place these supposedly “relatively inexpensive fixes” for quite some time now, as a search back through this blog will show. Not just for man-made EMP attacks that might or might not happen, but also solar Coronal Mass Ejections (CMEs) that do happen; eventually one will at some point create another Carrington Event[1]. If that were to happen today the costs would be enormous, but probably would not be spent ever in our lifetimes… Because our modern “Just in Time” First World would be thrown back to at best the early 1800s, with a little non-electrical technology from even up to the 1980s surviving. Needless to say, Stalin’s maxim of “One death is a tragedy, a million a statistic” would apply hundreds of times over in major cities and surrounding conurbations. Third-world agrarian-based countries and economies, however, might not even be significantly affected.

There is also going to be a major difference between continental America and continental Europe. Put simply, Europe puts much of its cables underground and still has stricter regulations about preventative maintenance etc. as part of national security policy.

But don’t think “fiber optics” will be unaffected; whilst the data does indeed travel down a very fine glass filament, it is protected in armoured cables that are usually armoured with metal, thus conductive wire, which will melt if sufficient current flows through it, just like any fuse does.

Why are these supposedly “relatively inexpensive fixes” not in place? Blame the “Free Market” mantra where “Profit is king, and ruthless short-term cost cutting essential” to meet “shareholder expectations”. Thus the only solution is “regulation in the name of National Security”; that way all energy companies, including PG&E, will be forced to do the maintenance required for long-term survivability.

If you want to know what is likely to happen with an EMP or Carrington event[2] you could ask people in California who found out very recently. Or you could ask people in New Zealand or parts of Canada. In all cases they only survived because the power outages were “localised” and help could be “brought in” relatively easily. If you believe the report about nuclear EMP[3], or the evidence of the Carrington Event, you will know that “localised” will not be the case. Therefore help will not be on the way any time soon, and driving a 20-mile round trip to the next town where they still have power to get a Mucky D’s takeaway –because you can not cook the food now spoiling in your fridge and freezer and your local stores are either closed or sold out– won’t be happening either. If you are lucky the event might happen in late winter / early spring, maximising your short-term survival chances.

[1] As far as we know, the first natural EMP effect to noticeably affect mankind was in 1859. Although solar storms and direct coronal mass ejection hits had almost certainly happened before that, and probably quite frequently, mankind’s level of technical development that could be affected by such an event was at that time less than thirty years old. It’s called a Carrington Event after London-based English amateur astronomer Richard Carrington, who on the morning of September the first did a solar observation and started recording a very large cluster of black sun spots, during which he observed two massive white flashes. Within a few minutes the fireballs he observed had faded away. But not so the charged particles that were now racing through space to strike the earth. That night most telegraphers discovered that their simple equipment made of brass, coils of wire and pieces of iron failed, sometimes dramatically so, causing not just the equipment to spark, melt and catch fire, but also the cables outside, strung off of poles, to briefly glow like the filaments of a light bulb before bursting into flames. All with the night sky lit up with brilliant multi-coloured Northern Lights that were visible in places like Boston and Rome and most other places in the northern half of the Northern Hemisphere,

https://www.history.com/news/a-perfect-solar-superstorm-the-1859-carrington-event

[2] EMP events happen all the time; even a spark from the “back EMF” of a coil that causes a spark or arc is a nano or micro EMP event, as is lightning. Usually both the energy in the arc or corona discharge and its rise time are insufficient to be much more than an annoying Radio Frequency Interference (RFI) source. Even though local, some EMP events can be heard hundreds of miles away due to their low-frequency radiation from slow rise times. Ham and maritime radio operators refer to man-made interference as QRM and natural interference as QRN, with lightning from tropical storms being heard on low frequencies as “crashes” as far as a thousand miles away, depending on the relative times of day. If you are unlucky a local lightning strike will take out all the electronic equipment in your home, if very close without it even being plugged in, with 50-100 meters taking out equipment that is plugged into the house wiring, even though it might be an off-grid house. Likewise if you are grid connected and it earths out on your side of the substation, which in rural US areas can be very long runs of overhead cables, if the local “transformer on a pole” has not been installed correctly or has failed in some way. A Carrington event has very large amounts of energy over a very wide area and for very long periods of time, which creates a lot of low-frequency signals that can be filtered out in various ways. It is, however, a very different profile to a nuclear EMP, where the rise time and energy are very high but the duration very short. This gives very high energies not just at low frequencies but right through the microwave and centimetric bands. As I’ve mentioned before, slots in metal plates act like resonant antennas, thus equipment ventilation slots and case edges become microwave antennas funneling very high E fields inside equipment. It’s the E field that does the damage to semiconductors, as it blows out PN diode junctions in traditional transistors and burns out gate insulation in Field Effect Transistors (FETs), which are now the most common type found in integrated circuits or microelectronic chips.

[3] The way the report was portrayed in the article gives me pause, because based on publicly available information it rather sounds like “the pudding is being over-egged” somewhere. For instance, to get an effective EMP a nuclear device needs to be of a particular type and up in the stratosphere, as well as be effectively line of sight to electronic devices, thus “Radio Horizon Distance” comes into play. I’m not sure how terrorists, even if they had a suitable nuclear device, could get it not just up that high but also in the right position geographically. Whilst all nuclear devices will by radiation transport produce some form of EMP signal, the “yield varies considerably” with the way the physics package is designed to work, and in theory that information is still classified by the US Government and some others, but as we know “information leaks”, especially when governments effectively collapse.

Clive Robinson January 5, 2020 6:19 PM

@ Thoth,

I am afraid I don’t have the patience and free time for these online trolls these days; as you know I don’t give up many ideas here anymore.

It’s one person who comes back every so often on some kind of personal mission.

Because they break @Moderator’s “sock puppet rules” they are not immediately identifiable. However, as they have, shall we say, certain cognitive deficiencies, it generally takes a reply or two to identify them.

Why they keep embarrassing themselves in this way I really do not know; it kind of defies logic, as do their ludicrous claims[1].

As for “not giving much”, no, I don’t much anymore either. However I do have strong feelings about how lack of knowledge means people’s “privacy” can be so easily invaded. So yes, I do still give information, but in a more abstract or 20,000ft way, so that more people can get to grips with the ideas.

Speaking of ideas, I hope your project will quickly become profitable, from the hints you’ve dropped in the past it sounds interesting.

[1] It was once suggested by a previous regular poster early on that they might earn their income from a government budget. Whilst it is possible, you would expect a way better response if it was anything other than personal.

Clive Robinson January 5, 2020 6:40 PM

@ Electron 007,

Slow down and be less forceful.

Your writing sometimes comes across as both aggressive and angry; whilst I can understand why, it’s obscuring your message, and it’s not going to win you the friends that a lighter touch will.

Not being funny, but try to have a happy smile on your face when you write your posts; daft as it might sound, experiments have shown it will probably lighten up your writing style. Which in turn will encourage people to read more of what you have to say.

Also try to find an occasional ICT story that is daft or silly to post, as that will make people smile or laugh, and they will lighten up a bit in the drudge of the daily grind (the mod even lets sort-of on-topic jokes get through occasionally).

Electron 007 January 5, 2020 10:30 PM

@Clive Robinson

Slow down and be less forceful.

It is good to be mindful of that, but probably not for reasons of peace and goodwill.

Your writing sometimes comes across as both aggressive and angry; whilst I can understand why, it’s obscuring your message, and it’s not going to win you the friends that a lighter touch will.

Those people destroyed my life, and I have suffered year after year from their vicious depredations of life, liberty and property and from their arbitrary and malicious infringements of my rights.

Along with many others, I have been grievously wronged and I still seek redress, reparations and vengeance, as I realize that nothing I hope to gain or keep in this life will be earned other than as the spoils of war.

I do not need or want fair-weather friends. The coming war on Iran and on Iran’s sleeper agents and sympathizers in the mainstream media and within the U.S. government is not a light matter.

JimT January 6, 2020 1:20 PM

@SpaceLifeForm

“”It seemed the tendency of hearing from like-minded people trapped them in their own opinions.””

“I force myself to contradict myself, so as to avoid conforming to my own taste. (Marcel Duchamp)”

SpaceLifeForm January 6, 2020 2:03 PM

@ JimT

Which is why insane people can not stop talking. They just can not.

They are addicted to their own echo chamber.

Avoid people that can not shut up and never want to listen to alternative viewpoints.

You are wasting your time.

You are talking to a wall.

hxxps://www.rollingstone.com/music/music-features/donald-trump-pink-floyd-the-wall-915657/

MarkH January 6, 2020 2:29 PM

@Clive:

It’s interesting that you were writing about EMP on Sunday, because a few hours before that my thoughts were drifting in no particular direction, and lit upon the U.S. “Trestle” test facility (formally known as ATLAS-I), which I had read about in the 1980s.

I was wondering, “did Clive know about that? Probably it would interest him.”

It was an EMP test stand for military aircraft, and built to a scale sufficient for the largest of them. What most arrested my attention was its bizarre appearance: a titanic wooden platform centered over a bowl-shaped “crater” depression in the earth.

The platform, and the bridge-like structure leading to it, were made almost entirely of wood and glue (according to wikipedia, there was very sparing use of metal for reinforcement at critical points), so as not to distort the electromagnetic fields near the aircraft under test. Even the bolts and nuts were made from wood. It’s claimed to be the largest all-wood structure in existence.

But right next to this giant wood structure was a comparably huge metal one, for EMP generation.

Here’s another article with some high-quality photos.

I doubt whether any other country made the extravagant expenditure to do such a stunt. Anyway, if the Soviet Union had a counterpart, I haven’t learned about it.

Clive Robinson January 6, 2020 3:43 PM

@ MarkH,

It’s claimed to be the largest all-wood structure in existence.

I wonder if they could get the “Spruce Goose” in 😉

I was aware of the tests and some of the technical details, but not the site.

What I also know is that the USAF have effectively made “no changes” to their aircraft with regard to E1 EMP events, even though their test results were uniformly bad even on quite recent aircraft (apparently performance and stealth are higher priorities, for reasons I suspect are entirely rational).

One of the issues with people talking about E1 EMP results is “the aircraft will fall out of the sky” claims.

As you know, until recently most civilian aircraft would have been unaffected; it was electronic engine management and full fly-by-wire that changed that. Even so, civilian aircraft would be put in a glide slope, not “drop out of the sky”, because the airframe design was basically “flight stable”. Military aircraft like fighters etc. are not “flight stable” by design, because you get higher performance; the problem is all the control surfaces twitch under computer control at around two hundred twitches a second. When their computers stop, gliding is likely not to be an option, which is why ejector seats that are seriously prejudicial to your long-term health are generally fitted as standard, though I’m told some have “electronic firing” circuits (it’s not an area I get into these days as my interests are more geared to “one way unmanned” vehicles).

SpaceLifeForm January 6, 2020 3:50 PM

@ Thoth, Clive

No on Cwtch. No on tor.

Just, no.

Not an angle that is trustable. Just no.

Do not go that route. Just no.

Seriously. Not that route. Please. Just no.

SpaceLifeForm January 6, 2020 4:41 PM

@ Clive

Last I checked, “Spruce Goose” had no wire.

But, 737 MAX has a bit.

And, here may be a cover story.

hxxps://www.businesstraveller.com/business-travel/2020/01/06/boeing-uncovers-new-potential-design-flaw-with-its-737-max-that-could-lead-to-crash-report/

MarkH January 6, 2020 6:13 PM

@SpaceLifeForm:

This 737 story — unlike the MCAS scandal — looks to be quite minor.

  1. It’s not yet clear whether the potential failure mode might occur in reality; engineering analysis is underway to assess this.
  2. If there is an appreciable risk, the fix is quick and cheap. That being said, Boeing prefers not to make the fix if there’s no actual risk, because any reconfiguration carries some risk of accidental damage to the wiring during the modification process.

The discovery of this potential hazard resulted from a “fine-tooth comb” review of systems, which flowed from the MCAS crashes in an interesting way: analysis of the accidents (and perhaps other MCAS failures which were handled safely) showed that cockpit crews took longer than expected to respond to flight system failures.

This led to a nose-to-tail reevaluation of safety margins, assuming a slower time frame for corrective action by pilots.

MarkH January 6, 2020 6:20 PM

@Clive:

Are those one-way unmanned vehicles trebuchet payloads?

Or something a little more high-tech? 😉

For collectors of engineering-themed gallows humor:

Q. What’s the difference between mechanical engineering and civil engineering?

A. Mechanical engineers design missiles; civil engineers design targets.

Clive Robinson January 6, 2020 7:55 PM

@ Sed Contra,

Over half a decade ago I had reason to look into “Imagination Technologies”; it was far larger and apparently thriving back then, and included “Pure”. For various reasons I decided not to move forward, as little alarm bells rang in the back of my head. Something was going on, and it looked like someone was taking hostile action against them. Thus being somewhere else appeared the sensible option to me.

Part of this unease was Apple setting up an office in St Albans (north west of London) UK. This office was “odd” at first, then effectively indicated it was going into chip design, which, as it was figuratively speaking “just around the corner” from Imagination Technologies’ headquarters, caused concern in some parts of the UK industry. Especially when it became clear to a number of people that Apple were doing a hostile technology grab by poaching Imagination Technologies staff, specifically select and key members of both the design and management teams…

In the US people would have been screaming “technology/IP theft” and reaching for highly paid lawyers and “beggar thy neighbour” court cases (which Apple had a taste for, if people think back). The civil legal system is somewhat different in the UK, so other methods would have to be used. Unfortunately the problems for Imagination had become stacked against them. Firstly, they had “sleep walked” into a dangerous position of being overly dependent on the licencing deal with Apple. Secondly, somebody was successfully shorting their stock (which I’m told upset Intel). Apple continued with the brain drain, and around 2016 it appeared that Apple were going to buy Imagination. This did not happen; Apple continued the brain drain and Imagination had to downsize, eventually putting themselves up for sale at a “fire sale” price. Due to the stupidity of a policy set by the then political incumbent George “gidiot / White lines” Osborne when he was UK Chancellor, the sale of Imagination to a “Chinese predatory organisation” of a “hostile foreign power”[1] was “green lighted” in the UK.

However, it had also become clear that Apple had at the very least significantly encroached on Imagination’s patented technologies, and thus Apple were in a quite awkward position legally speaking. Which should have given the Chinese owners sufficient leverage to get equitable compensation for Apple’s infringement.

Then US President Trump really kicked his “anti-China campaign” up several notches… This gave Apple a counterweight argument, in that Imagination would lose everything if Apple persuaded certain US government entities to “investigate” Imagination’s new owners etc. In effect Apple had been awarded a successful “Hail Mary pass” for their problems from the White House.

In effect Imagination’s new owners had a choice: lose everything, including their acquired patented technology, or take a “whipped dog offer”, which it looks like they have decided to accept. Apple gets off the hook for potentially gross IP infringement, they also get more access to the patented IP of Imagination, and the Chinese owners become even more dependent, thus vulnerable to Apple’s whims down the line. So Apple have been able to “whip them to heel”, getting two significant gains for just a handful of floor sweepings from under their banquet table tossed towards the Chinese owners’ rice bowls.

As the old song has it,

Nice work if you can get it… And if you get it… Won’t you tell me how?

[1] Just a couple of names that get bandied about for Chinese investors, technology firms etc., irrespective of any relationship they have, or more importantly have not, to the Chinese Government. Obviously it’s just as ludicrous as calling CISCO and Juniper the same thing, especially as there is public evidence their products were subverted to the benefit of their national government…

Clive Robinson January 6, 2020 8:18 PM

@ SpaceLifeForm, MarkH,

Last I checked, “Spruce Goose” had no wire.

But… Like the test rig described, it was almost entirely made from wood, and was for quite some time the largest plane in the world. In fact as far as “Sea Planes” are concerned it was the largest ever made, even though it technically did not fly, even though it got airborne from and over water at low height and back again.

So “symbolically” if it had been popped in the test rig… It would have been the worlds largest wooden plane in the worlds largest wooden test rig…

So just a little symbolism that for some reason piques my whimsy.

Speaking of the world’s largest odd planes, the Russians made something similar, in that it flew the same way as the Spruce Goose. That is, from and over water at low height and back to water again; it was the amazing “Lun Ekranoplan”. Whilst technically it was not a plane but a “Ground Effect Vehicle” (GEV) ship, I think if you see it as the CIA did, on the “duck principle”, it’s a sea plane,

https://en.m.wikipedia.org/wiki/Lun-class_ekranoplan

Clive Robinson January 6, 2020 8:39 PM

@ MarkH,

Or something a little more high-tech? 😉

Let’s just say to go on the top of “An Engineer’s” design on steroids, and quite a bit smaller than “a Musky smelling cherry red Tesla doing a grand tour” 😉

You know the scary thing about that Tesla? It will have Musk DNA in it that might well outlive the rest of the human race; now that is one heck of an immortality statement, to beat even Pharaoh Ramesses II, memorialized in Percy Bysshe Shelley’s “Ozymandias, King of Kings”…

Clive Robinson January 7, 2020 10:50 AM

@ Sed Contra,

Apparently, from the article, people who should know better are still using SHA1

Maybe, maybe not…

It breaks not the privacy of PGP but the authentication of PGP.

Which opens up “deniability”.

Thus if you present to a court a message you received that was apparently signed by me, I can now counter with evidence that it’s possible to forge my signature.

As long as I took care to cover other aspects, you have a problem: you will either fail at that point, or it will fall to your credibility and how you got the message. Thus if I took care, you won’t be able to get any kind of independent evidence that I sent the message. If I can then show evidence exists of me sending you other –harmless– messages, then your credibility sinks.

I can make your credibility worse by using “one time phrases” in messages that are otherwise harmless or meaningless to others.

So if I normally start messages with “Hi” not “Hello” or “How are you” etc or I use multiple variations but only use “How are you doing” as the One Time Phrase and you say that to the court, your credability is not going to look good.

Of course now neither of us can use this deniability trick, because publicly we are now both aware of it, so “my bad”, your good 😉

Weather January 7, 2020 11:29 AM

@Sed Contra
So is SHA-2, unless you use the 512-bit variant with the hash truncated to 256 bits. There was an update about 8 years ago along those lines.

SpaceLifeForm January 7, 2020 12:10 PM

@ Sed Contra, Clive

I find the TLS downgrade attack angle more interesting than the PGP/GPG problem.

And Git using SHA1 is even worse.

Oh please. January 7, 2020 12:45 PM

“So if I normally start messages with “Hi” not “Hello” or “How are you” etc or I use multiple variations but only use “How are you doing” as the One Time Phrase and you say that to the court, your credability is not going to look good.”

Nonsense. The word is credibility. Nobody is taking anyone to court based on a single message interception and nothing else. The pseudo-analogy is hilarious.

Think Boris Johnson claiming he’d save $350 million by switching to Geico.

Electron0x63 January 7, 2020 1:31 PM

@Clive Robinson

Which opens up “deniability”.

Denialism abounds.

@Oh please.

… you say that to the court, your credability is not going to look good.”

Nonsense. The word is credibility. Nobody is taking anyone to court …

How is a defendant (always male, male, male! no matter what) supposed to get out of the handcuffs and orange jumpsuit then?

It might be a long time in that federal prison cell, might not get out alive, but in a sense that’s right, nobody’s taking anybody to court on that.

Why is it that MtF transgender such as Chelsea Manning are so highly desired on the male side of the prison system?

Clive Robinson January 7, 2020 1:35 PM

@ Oh please,

A new sockpuppet name? Is it less itchy?

Oh and of course another new strawman you’ve tried to weave together but have left full of holes,

Nobody is taking anyone to court based on a single message interception and nothing else.

Nobody claimed that, so it’s your invention, yet again.

Which begs the question “when are you actually going to do something useful with your life?”

As others have noted you contribute nothing to the subject matter of this blog. I guess it’s all rather SAD for you.

But you appear happy to behave only like a waspish shrew, so maybe you should spend more time with the Bard, as your barbs are really neither penetrating nor holding up to anything of use. So back in your hole, to Brush Up Your Shakespeare.

Tatütata January 7, 2020 2:19 PM

I’m back!

Bruce Schneier, The Atlantic, 7 January 2020: “Bots Are Destroying Political Discourse As We Know It”

“They’re mouthpieces for foreign actors, domestic political groups, even the candidates themselves. And soon you won’t be able to tell they’re bots.”

Any bots around here yet? Or is “Fill in […] the name of this blog” still a strong enough Turing test? 🙂

Clive Robinson January 7, 2020 2:28 PM

@ SpaceLifeForm,

I find the TLS downgrade attack angle more interesting

Fallback attacks are often the most insidious, and secure communications software really should not fall back without warning users, big time.

But few software developers get to put meaningful warnings in the software, or any warning at all that it’s happening. For some strange reason “management think” is that users should always get communications, and also that users would be upset if their software said something along the lines of,

    Can not negotiate AES256, with remote device. Only common encryption type available is ASCII plaintext.

So the software switches to it without informing the user, and the man in the middle is happy to watch…

But we really should know by now that true one-way deterministic functions are unlikely, thus secure hashes are always going to be problematic at the best of times.

It’s one of the reasons I’ve suggested in the past that NIST come up with proper “Plug-in Frameworks” for this lower-layer functionality, so that someone could set up ways to safely and quickly unplug dangerous functions and drop in more secure replacements.
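On the “no silent fallback” point, a minimal sketch of a client simply refusing to downgrade, using Python’s standard ssl module (the check_tls helper and the host name are just placeholders):

    import socket, ssl

    ctx = ssl.create_default_context()             # certificate and hostname checks stay on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything weaker than TLS 1.2

    def check_tls(host, port=443):
        # If the server (or a man in the middle) tries to negotiate something weaker,
        # the handshake fails loudly instead of silently "falling back".
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version(), tls.cipher()

    # usage: print(check_tls("example.com"))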

Electron0x63 January 7, 2020 3:56 PM

https://www.forbes.com/sites/thomasbrewster/2020/01/07/the-us-just-launched-a-fresh-assault-on-apple-and-warrant-proof-encryption/?ss=cybersecurity

U.S. Launches Fresh Assault On Apple’s ‘Warrant-Proof Encryption’

Thomas Brewster, Forbes Staff, Cybersecurity, Associate editor at Forbes, covering cybercrime, privacy, security and surveillance.

The U.S. government has launched fresh attempts to try to stop Apple and other tech companies locking up user data with encryption.

Very, very slick on the part of the mainstream media.

They very neatly conflate two opposing principles:

  1. the user’s ability to access his own data and “unlock” the phone from a carrier; and
  2. the cops’ ability to access the user’s data on a felony copyright, child porn, or DMCA circumvention warrant.

I’m sorry if the cops are sexist.

SpaceLifeForm January 7, 2020 6:08 PM

@ Clive

“But we really should know by now that true one-way deterministic functions are unlikely, thus secure hashes are always going to be problematic at the best of times.”

So, just asking for a friend…

Why are signatures based upon a hash?

Why?

Why can’t the signature be based upon the ‘payload’ ?

I do not want to hear any BS that it is INEFFICIENT (Which I know you will not spout)

I will expound on this point later.

Weather January 7, 2020 8:03 PM

@Spacelifeform
I think it’s because something in the payload is meant to be secret, and a hash is one-way; if the signature were based on the payload directly, it could allow the attacker to know the data.
Most hashes aren’t fully one-way, but you trade off between things like sensitivity to small input differences and non-repeating output characters.

Clive Robinson January 7, 2020 11:49 PM

@ SpaceLifeForm,

So, just asking for a friend…

😉

The answer is actually very dull.

In the real physical world nothing is perfect. In the information world we have imperfect knowledge of attacks at any given point in time. That is, proofs of security are based on the known, not the unknown. The classic example is that multiplying two primes is fairly trivial, but factoring the product back is “assumed to be hard” currently. That is, nobody has come up with a way to do it, but likewise nobody has come up with a proof that says there is no easy way to do it, in either the specific case of two primes or the more general case of any two natural numbers. But it’s also possible that whilst no easy way exists for the general case of factoring the product of two natural numbers, there may be patterns relating to the special case of two primes that make factoring a product of just two primes easier.

The point is we generally are happy to use things that are “hard” rather than “impossible” because proving impossible may not be possible in any acceptable time frame.

Thus there is actually no reason in some cases to stop using the hash even though most would now consider it broken (admittedly I can’t think of any off the top of my head right now, as it’s “oh my god it’s morning” in the UK and I’ve not yet slept[1]) 🙁

But, with respect to,

Why can’t the signature be based upon the ‘payload’ ?

They can be, depending on what you mean by payload and signature.

The normal use is you take a marked block of plaintext, run it through the hash function and then append the signature at the end, because this is a fairly easy thing to do.

You can also fairly easily take a marked block of plaintext, run it through a hash and check that the result matches some position in the marked block of plaintext. However, making that block of plaintext would be quite time consuming, unless there was some kind of trap door you could exploit, and if there was it would not be considered a secure hash.
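A minimal sketch of that hash-then-sign flow, assuming Python’s `cryptography` package; the library hashes the message internally before the RSA operation:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"block of plaintext to be signed"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sign: the message is hashed (SHA-256 here) and the digest is signed.
signature = key.sign(message, pss, hashes.SHA256())

# Verify: recompute the hash and check it against the signature;
# raises InvalidSignature if either the message or signature was altered.
key.public_key().verify(signature, message, pss, hashes.SHA256())
```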

[1] It’s @Wael’s “harsh mistress” having a go at me as well, due to the equivalent of time zone shifting.

SpaceLifeForm January 8, 2020 4:49 PM

@ Weather, Clive

I understand the reason for signing a hash of a long payload. Totally understand that.

But what if the payload is small? Say less than 64 bytes?

Then the signature is small.

As I noted before, I may have to transmit say, 256 bytes of data, just to securely communicate one single bit.

Think this flow:

E(S(E(S(payload))))

MarkH January 8, 2020 4:49 PM

@SpaceLifeForm:

Why can’t the signature be based upon the ‘payload’ ?

I don’t recall anybody saying that it can’t. It seems to me that it can.

I presume that by “signature” you mean the usual cryptographic understanding: only the author can sign, but anyone can verify. To my knowledge, every digital signature scheme uses the kind of one-way computations also used for public-key encryption schemes. In almost every case, this is some form of modular exponentiation.

Deployed systems don’t use very large exponentiations, because they consume significant time and electrical energy. In practice, almost all cryptography in use today has a maximum modular exponentiation size of 4096 bits (512 bytes). My old Pentium takes about 2.5 seconds to make one such computation.

The bit-string to be signed cannot be greater in length than the modulus (in practice, it should be at least a little shorter). So using RSA as an example, I could sign a 500 byte file in 2.5 seconds.

A nice thing about RSA signatures is that verification (checking the signature) can be made very much quicker than signing, so the time to verify would only be milliseconds.

Although the size of these computations is usually kept small, as far as I can see there’s no great obstacle to dramatically scaling them up. A file rather larger than 100K bytes could be signed (as the full payload) using 1 megabit RSA. This could be done with fairly simple software, and doesn’t need much RAM. Using the same optimized but elementary algorithm, it would need (I estimate) about a year to compute the signature on my Pentium.

However, with a multiplication algorithm suitable for very large integers (using the Fast Fourier Transform algorithm), and more modern hardware, a million-bit signature could be computed in a couple of hours. For a one megabyte file, the whole-file RSA signature could be computed in a week or so.

The extra cost of whole-file signatures would be the computation time required, and that the signature would be at least as large as the file.


If for any reason such whole-file signing was considered too costly, the file could be divided into blocks, and a signature computed for each block. This would be a lot faster!

However, this would have to be done with care in order to maintain security. A naive block-by-block signature would be vulnerable to re-sequencing of the blocks, or “replay” of blocks from a different file, by an adversary.

With suitable safeguards, a one megabyte file could be signed block-by-block using 2048-bit RSA in less than 15 minutes even on my ancient computer, and perhaps a minute or less on modern hardware (for this computation, the speedup factor is not quite so dramatic).

It would also work to “chain” the block signing process, yielding a single signature of more manageable size.

A great advantage of block-by-block signing is that the cost is proportional to file size: making the file 10 times bigger increases the cost by only a factor of 10.
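A toy sketch of one such safeguard: binding a file identifier and block index into each signed block so blocks cannot be re-sequenced or replayed from another file. Textbook RSA with no padding, purely illustrative:

```python
def sign_blocks(data: bytes, file_id: int, d: int, n: int, block_size: int = 128):
    """Toy example only: textbook RSA, no padding, not secure as written."""
    signatures = []
    for index, offset in enumerate(range(0, len(data), block_size)):
        block = data[offset:offset + block_size]
        # Prefix each block with the file ID and its position, so a signature
        # lifted from another file or another position will not verify.
        tagged = file_id.to_bytes(8, "big") + index.to_bytes(8, "big") + block
        m = int.from_bytes(tagged, "big")
        assert m < n, "tagged block must stay smaller than the modulus"
        signatures.append(pow(m, d, n))  # textbook RSA signature: m^d mod n
    return signatures
```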


So, unless I got something wrong (entirely possible), it’s surely feasible — for some applications, at least — to avoid the use of hash functions for digital signing.

MarkH January 8, 2020 5:13 PM

@SpaceLifeForm:

Our comments crossed in the ether, they have the same timestamp 🙂

Does the notation E() mean encryption? Encrypting a signature is rather unusual … I’m guessing that the application needs both confidentiality and protection against forgery.

I think we can offer some good suggestions, if you’ll explain more fully the kind of security you want to achieve, including the “threat model,” i.e. the intentions and capabilities of presumed attackers, and how many years this security must endure.

The bad news: if what you need is really a signature that anyone can verify, then the signature should be at least 1024 bits (and preferably 2048) to maintain robust security … even if the “file” is only one bit long.

However, if the party who must verify the authenticity can be entrusted with a secret, then there are ways to keep the communication shorter.

Looking forward to a fuller picture …

SpaceLifeForm January 8, 2020 5:23 PM

@ MarkH

“The bit-string to be signed cannot be greater in length than the modulus (in practice, it should be at least a little shorter).”

hxxps://www.johndcook.com/blog/2019/03/06/rsa-exponent-3/

Good luck.

Just say no to RSA.

SpaceLifeForm January 8, 2020 5:43 PM

@ MarkH, Clive

S() – Sign
E() – Encrypt

To be secure, must sign, encrypt, sign again, encrypt again.

Using two different signing keys, and two different encryption keys.

Just call me insane. It’s all good.

Weather January 8, 2020 6:42 PM

@Spacelifeform
S(E(Payload))
E and then S are sent; signing the ciphertext stops an attacker changing the data in the hope that some random data might get decoded to something dangerous.
If the S is over the raw payload data and it has a weakness, it can tell the attacker what the data is under the E.
WPA has an HMAC, where the S is taken directly from the hash output and tacked onto the end.

I think that’s how it works?

MarkH January 8, 2020 7:01 PM

@SpaceLifeForm:

Just say no to RSA.

You’re free to say no, as you choose! There are numerous ways to misuse RSA, which are well documented … as are procedures for using RSA securely.

The exponent 3 vulnerability is one of the “oldest in the book,” and quite obvious to those who understand the underlying math. I’ve known it for about 18 years or so.

RSA is still quite secure for signatures, when used according to good practice. But I offered RSA only as an example; all of the public-key signature schemes entail high computational cost, which grows dramatically at increasing bit-string lengths.


I certainly won’t call you insane, though there are others here I wouldn’t exempt 😉 But the intentions are still quite obscure.

I understand that you want to protect confidentiality of your application or customer. However, I am sure you can be MUCH more clear, using abstract descriptions (for example, with Alice & Bob etc.), without compromising any secrets.

For example, signatures are not usually encrypted. Their intended property is that anyone can verify the signature; it can be published to the whole world (sometimes, that’s the whole idea) without any compromise of the protection against forgery.

I can imagine a case for two signatures; the author makes signature S1, and transmits the signed message to a second party. This second party might verify S1, and then sign the result. In this case, recipients could verify that both the author and the second party have accepted/authorized the message.

But what do the encryptions do? Are they public-key encryptions? Which parties hold decryption keys? If you won’t shed light, I don’t see how anybody can help.

SpaceLifeForm January 8, 2020 7:59 PM

@ Weather

It’s not S(E(payload))

For starters, it must be signed first.

But just doing a E(S(payload)) is not enough.

@ MarkH noted:

“Encrypting a signature is rather unusual … I’m guessing that the application needs both confidentiality and protection against forgery.”

That is exactly why. And replay attacks.

Neither S(E(payload)) nor E(S(payload)) is sufficient by itself.

It really needs to happen twice with a pair of public keys and a pair of private keys.

That just happen to change randomly.

In a secure manner just between Alice and Bob.

Clive Robinson January 8, 2020 8:15 PM

@ SpaceLifeForm,

Then the signature is small.

Do you mean in “bit size” or “range of values”?

All hash algorithms should put out a digest that is the same length in bits irrespective of the size of the input, even if it is just one effective bit. Thus signing the hash digest output, providing the hash is smaller than the signature modulus, should likewise produce a result about the same bit size as the modulus.

The thing about both hash digests and digital signature outputs is that, for inputs shorter than their outputs, they are “simple substitution ciphers”. So the number of different outputs from them is the same as the input range or “alphabet size”, even though the bit lengths of the outputs are unaffected.
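A minimal sketch of that point: with only one effective bit of input there are only two possible digests, so anyone can enumerate them, however long the digest is.

```python
import hashlib

# One effective bit of input gives only two possible 256-bit digests,
# so an observer can enumerate both and read the "plaintext" straight off.
candidates = {hashlib.sha256(bytes([b])).hexdigest(): b for b in (0, 1)}

observed = hashlib.sha256(bytes([1])).hexdigest()
print(candidates[observed])  # -> 1
```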

Hence my reason for checking what you mean.

MarkH January 8, 2020 8:53 PM

@SpaceLifeForm:

It seems to me that in most applications, S(E(payload), sequenceNumber) would be sufficient.

sequenceNumber must be incremented for each message.

Umberto (an unauthorized party) can’t learn the contents of the message, because it’s beyond his power to decrypt the payload. He can’t forge such a message, because generating a valid signature would require enormous resources. Umberto also can’t use the message for a successful replay attack, because the sequenceNumber value will never be accepted again.
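A minimal sketch of the receiver-side replay check being assumed here; the class name is illustrative only.

```python
class ReplayGuard:
    """Accept a sequence number only if strictly greater than the last one accepted."""

    def __init__(self) -> None:
        self.last_accepted = -1

    def accept(self, sequence_number: int) -> bool:
        if sequence_number <= self.last_accepted:
            return False  # replayed or re-ordered message: reject
        self.last_accepted = sequence_number
        return True
```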

  1. What form of attack do you envision, that would cause the protections described immediately above to fail, or otherwise be insufficient?
  2. If the only authorized parties are Alice and Bob, in the double-signature scheme you outlined are both signatures made by the same party?
  3. Is it practical for Alice and Bob to use a shared secret that could be applied to a sequence of these messages (limited forward secrecy)? If so, how often would the shared secret need to be changed?

If Alice and Bob can use shared secrets, and only they need to be able to verify message authenticity, then the size of each transmission might be kept as small as 16 bytes. [Bigger traffic would of course be needed for each shared-secret update.]

If it’s necessary that anyone be able to verify signatures (not just Alice and Bob), then a high level of security might be maintained with 36-byte transmissions and the use of shared secrets.

If everything must be strictly public-key (no shared secret usable for more than one message), then I don’t see how to do it without roughly 150 bytes per transmission (for a theoretical attack cost of 2^80). For a higher margin of security (2^80 is approaching practicality for highly resourced attackers), then total transmission size would grow to 250+ bytes.

Sherman Jay January 8, 2020 10:07 PM

For the purpose of inspiration regarding more secure computing you might want to look at:
h t tp://www.sunrise-ev.com/vip2k.htm
The VIP2K — A Tribute to the 1977 RCA VIP Home Computer

All the info is offered freely by true hobbyist, Lee Hart.
Specs:
1802 microprocessor running at 4 MHz
32K of RAM
32K of ROM with monitor, BASIC, and CHIP8
NTSC or PAL video displays 24 lines of 24 characters, 192×192 pixel graphics
43-key full ASCII keyboard
TTL serial I/O at 9600 baud
built entirely with common vintage parts and thru-hole technology
…and it all fits in a 3.5″ x 2″ x 0.75″ Altoids tin!

it is powered by 4 “AA” batts

It is both air-gapped and secure (just disconnect the serial port)
I guess to store it safely, it could be wrapped in insulating plastic and kept in a cookie tin faraday cage.

Weather January 8, 2020 10:20 PM

@Mark
The KEK wrapper from OpenSSL that’s used with WPA2 does that; it covers a lot of attack vectors.
I think SLF was thinking it’s easier to attack the signature to get the payload data than to go after the key and the encryption route.

lurker January 9, 2020 12:01 AM

@Electron0x63 it seems someone may have tried the leadpipe method of passkey extraction…

And there are some problems with the integrity of the device: a round was fired into the device, according to the FBI.

1+1~=Umm January 9, 2020 2:13 AM

@Sherman Jay:

“A Tribute to the 1977 RCA VIP Home Computer”

That uses the 1802 CPU, perhaps the most venerable of CMOS CPUs and possibly the furthest-travelled 8-bit CPU chip so far (but TTL RTL has left the Solar System with the meter still running).

But don’t mention the ‘S word’ instruction; your mother would not like it…

https://www.atarimagazines.com/computeii/issue3/page52.php

Anonymous January 9, 2020 3:34 AM

Yet Another Squid-Related Post

What Scientists Learned by Putting 3-D Glasses on Cuttlefish
https://www.theatlantic.com/science/archive/2020/01/why-scientists-put-3-d-glasses-cuttlefish/604555/

Together with his colleagues Paloma Gonzalez-Bellido and Rachael Feord, Wardill used the glasses to show different images to each of a cuttlefish’s eyes. By doing that, they proved that these animals have stereopsis—that is, their brains can work out how far away objects are by comparing the slightly divergent images perceived by each of their eyes. It is an ability that humans and a few other animals share. But, as is the norm with cuttlefish, they manage the task in an odd and surprising way.

Not surprising. Every predator that I can think of has some form of judging distance. Prey species tend not to have that – they’re more focused on early warning to get out of the way of predators.

Clive Robinson January 9, 2020 10:02 AM

@ ALL,

For some time now, some longer than others, I’ve warned against allowing JavaScript, HTML5 and WebASM to be active in your web browser.

With regards to WebASM, it appears it’s now being used more rather than less for malware,

https://www.zdnet.com/article/half-of-the-websites-using-webassembly-use-it-for-malicious-purposes/

As the article notes, certain entities who shall we say really do not have your best interests at heart, especially your privacy interests, came up with the WebASM idea, specification and complete security disaster area to replace another now much-deprecated security disaster area, Adobe’s Flash.

maqp January 9, 2020 11:40 AM

@Thoth

“Would be interesting if @maqp could integrate a Cwtch client into the TFC setup besides the usual OTR chat as an additional option for robust one way data diode protected secure comms too.”

TFC switched to a Cwtch-like v3 Onion Service backend one year ago. Pidgin and OTR have been deprecated as a backend. TFC is now a more or less standalone messaging system (it of course relies on Tor, the stem Tor bindings, a Flask server, the requests client etc., but still — it’s now a p2p messaging system, not a plugin for another chat app).

“the person behind Signal (not gonna name specifically) wants to centralize everything in a 36C3 talk.”

I’m not following you. Moxie’s talk was mostly about (security) agility of development, and centralized systems are much better at it than decentralized stuff. A good example is PGP fingerprints. SHAppening happened five years ago in 2015. Just this week there was finally a break, existential forgery based on full size fingerprint collision (https://eprint.iacr.org/2020/014.pdf). The guys at OpenPGP workgroup have been bikeshedding over the issue for five years. They could’ve chosen any 256-bit hash function and it would’ve been fine. But alas, AFAIK, no decision has yet been made.

I hacked around the issue in TFC in a few days back in 2018: TFC’s RSA signature verification key has since been authenticated using the SHA256 hash of the signature key’s file — the check is done by the one-liner used to install the SW. This solution was “centralized”; AFAIK no other system uses a similar script. This gets me back to what Moxie said: decentralized systems are not agile enough, they can be dangerously slow. I mean cryptography moves really slowly. SHA-1 is 25 years old. There’s been a better alternative (SHA256) they could’ve updated to for 19 years.
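A minimal sketch of that kind of pinned-hash check; the file name and expected digest below are placeholders, not TFC’s actual values:

```python
import hashlib

EXPECTED = "0" * 64  # placeholder for the pinned SHA256 hex digest

with open("signing_key.pub", "rb") as f:  # placeholder file name
    actual = hashlib.sha256(f.read()).hexdigest()

assert actual == EXPECTED, "verification key file does not match the pinned hash"
```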

Also, at the end when questioned about the use of phone numbers, Moxie talked about wanting to switch to a Tor backend over time. Matt Green raised a good point on Twitter about Tor’s limited capacity in a situation where the entire Signal userbase would be moved to Tor. Tor is currently too high latency and has too many bottlenecks to allow (video) calls, so I don’t think it’s a smart idea to move mainstream messaging to p2p just yet.

“–secure protocols that are inter-operable across different clients and devices instead of being led by somebody and dictated on how it should be.”

There’s always the question of optimization: message headers are application specific. If you want inter-operable stuff, you immediately stumble on bikeshedding. TFC needs to be very compact because serial bandwidth is very small (it’s gotten better over the years with each revision of the data diode, but still).

“while running on user chosen platforms and devices and also the intermediatory services (servers) without discrimination or biasness”

I don’t see why TFC should be interoperable with Cwtch. Cwtch uses a networked TCB so it can do a lot of things TFC can never do, such as have automatically managed groups, use the double ratchet mechanism and future secrecy. TFC can do stuff Cwtch can never do, such as provide endpoint security. Whatever ADPT encompasses, you can rest assured neither messaging platform will ever collect your private data for profit or any other purpose. You run your own server, you only connect through the Tor network, there are no exit nodes, no centralized server (anonymous update checks might happen in Cwtch) collecting metadata or app use data. Both are using authenticated E2EE by definition thanks to the onion service based design. (Plus TFC has the additional endpoint secure E2EE layer. Cwtch doesn’t need it, and I very much doubt SJL would be interested in implementing TFC’s messaging protocol. Suddenly everything would depend on the other’s opinion, stuff would have to be scheduled and released at roughly the same time.) I think both of us would decline such a proposal.

Also just to mention, Onion Service based messaging systems don’t federate with anything, they’re p2p, not decentralized.

Re: Ciphersuites.

XChaCha20-Poly1305 is by far superior to the ones you mentioned. It’s much easier to implement. AES-GCM is another AEAD scheme but much more difficult to get right. Not saying anyone should be implementing it in the first place, just that it has been easier for experts to do that for OpenSSL, libsodium, what have you, and that it’s easier for other experts to verify. Always use existing library with known authors who understand how to implement side-channel proof code.

HMAC is bolt-on tech and less efficient than AEAD schemes (e.g. it uses two calls to the hash function by definition). There’s also the possibility of screwing things up with authentication (i.e. thinking that you’re applying a MAC when in reality you’re not, I’ve seen that sometimes when people pick crappy libraries assuming they’re secure by default).
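For comparison, a minimal AEAD sketch using the IETF ChaCha20-Poly1305 construction from Python’s `cryptography` package; that library exposes ChaCha20-Poly1305 rather than the XChaCha20 variant, so treat it as a stand-in:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # 256-bit key
aead = ChaCha20Poly1305(key)

nonce = os.urandom(12)  # 96-bit nonce; must never repeat under the same key
ciphertext = aead.encrypt(nonce, b"payload", b"associated data")

# Decryption verifies the Poly1305 tag and raises InvalidTag on any tampering.
plaintext = aead.decrypt(nonce, ciphertext, b"associated data")
```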

SpaceLifeForm January 9, 2020 2:41 PM

@ Clive

Bitsize is what I was referring to.

If my payload is 64 bytes, why should I sign a 32 byte digest? Why can’t I just create a 64 byte signature?

“Thus signing the hash digest output providing the hash is smaller than the signiture modulus… ”

Kinda rules out RSA modulus 3, no?

hxxps://www.johndcook.com/blog/2019/03/06/rsa-exponent-3/

Security, Privacy, Speed. Pick two.

SpaceLifeForm January 9, 2020 3:41 PM

@ MarkH, Weather, Clive

I’ve thought about this for many years.

There is no third party signature validation involved in this application.

This is just between Alice and Bob.

And it is not email.

Read link below. Explains why S(E(payload)) OR E(S(payload)) are not sufficient.

Which is why I’m thinking E(S(E(S(payload)))). Well, that is the Cliff Notes explanation. There is more involved.

For example, the public and private keys used for the comm are ephemeral.

And it is random traffic, not constant traffic, to conserve server bandwidth costs.

Comms disappear after a day.

If Alice or Bob do not get a response, they may have to try again the next day.

The ephemeral keys may expire that quickly too.

Note that this link is nearly 2 decades old.

hxxps://www.drdobbs.com/defective-sign-and-encrypt/184404841
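One of the repairs discussed in that article is to name the intended recipient inside what the sender signs. A minimal sketch of the idea, assuming Python’s `cryptography` package and Ed25519; the identities are placeholders, and the outer encryption step is only indicated in a comment:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

alice_signing_key = Ed25519PrivateKey.generate()

payload = b"one single bit: 1"
# Naming the recipient inside the signed data means Bob cannot strip the
# signature and re-encrypt the payload to Charlie as if Alice had sent it to him.
signed_blob = b"to: Bob\nfrom: Alice\n" + payload
signature = alice_signing_key.sign(signed_blob)

# An encryption of (signature + signed_blob) to Bob's public key would follow;
# Bob verifies Alice's signature *and* checks that the named recipient is himself.
```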

SpaceLifeForm January 9, 2020 4:14 PM

@ MarkH

“If everything must be strictly public-key (no shared secret usable for more than one message), then I don’t see how to do it without roughly 150 bytes per transmission (for a theoretical attack cost of 2^80). For a higher margin of security (2^80 is approaching practicality for highly resourced attackers), then total transmission size would grow to 250+ bytes.”

I’m thinking 512 because of the E(S(E(S(payload))))

Plus some other bits.

PKI. No OTP (Shared Secret)

Just to securely, and privately, transmit just one single bit.

Just ONE SINGLE BIT.

Clive Robinson January 9, 2020 4:22 PM

@ SpaceLifeForm,

Kinda rules out RSA modulus 3, no?

John D. Cook is talking about the “exponent of 3”[1], not the modulus, which is large and should be unique to each user (i.e. it’s hopefully the unique part of their public key).

That is, e = the exponent
Ctxt = Ciphertext
Ptxt = Plaintext
Nusr = Modulus of user.

Ctxt = Ptxt^e mod Nusr

It should be clear that the modulus Nusr defines the number of possible members in the output set, thus the range.

Further it should be clear that if a plaintext Ptxt is very small, after it has been raised to the exponent it might still be smaller than the modulus Nusr, which can be problematic. Hence padding is required to make the plaintext message sufficiently large, and ideally the padding should be nondeterministic to any potential attacker.

So without padding, if you want to send a 256-bit AES key with an exponent of 3, the modulus needs to be greater than 257 bits but less than 767 bits. A modulus that small is considered factorable in a reasonable time frame these days.

[1] An exponent of 3 in binary is 0b011, which is very fast to do. Because of problems with 3, the next most common exponent in use is 65537, which is 0x10001, binary 0b10000000000000001, which is almost as fast as 3 as the shift is effectively two bytes.
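A toy sketch of the exponent-3 hazard the footnote alludes to: with no padding and a message small enough that m^3 is still less than the modulus, the “ciphertext” never wraps and an integer cube root recovers the plaintext.

```python
def integer_cube_root(x: int) -> int:
    # Binary search for the integer cube root of a non-negative integer.
    low, high = 0, 1 << (x.bit_length() // 3 + 2)
    while low < high:
        mid = (low + high + 1) // 2
        if mid ** 3 <= x:
            low = mid
        else:
            high = mid - 1
    return low

e = 3
n = (1 << 2048) + 981  # stand-in for a 2048-bit modulus; only its size matters here
m = int.from_bytes(b"a short unpadded secret, far smaller than n", "big")
c = pow(m, e, n)       # m**3 < n, so no modular reduction ever takes place

assert integer_cube_root(c) == m  # plaintext recovered without the private key
```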

vas pup January 9, 2020 4:48 PM

CES 2020: Samsung’s invisible keyboard for smartphones:

https://www.bbc.com/news/av/technology-51057261/ces-2020-samsung-s-invisible-keyboard-for-smartphones

Samsung has developed a way for smartphone owners to type on a table or other surface as an alternative to tapping on the handset’s own screen.

Its SelfieType software tracks the user’s fingers via the phone’s front-facing camera and works out where the taps would correspond to being on a Qwerty keyboard.

Chris Fox tried it out at the CES tech expo in Las Vegas to see if it works in practice.

Clive Robinson January 9, 2020 5:22 PM

@ SpaceLifeForm,

Which is why I’m thinking E(S(E(S(payload))).

As you know, the outer encryption layer offers little other than hiding of its plaintext. With most block ciphers you do get a little assurance that the plaintext has not been messed with, but not enough. With PubKey and stream ciphers there are a whole bunch of nasties in the woodpile waiting for the incautious.

If I’m following on correctly you want some way to make the payload coming from the first party (Alice) only readable by one individual (Bob) and secondly for that second party not to be able to forward it onto a third party (Charlie) as though it had come directly from the first party (Alice)?

The only way you can do this is by doing the seemingly impossible, which is to tightly couple the message routing into the payload such that tampering with the payload or resending it is guaranteed to fail hard.

Well have a think on,

SPubKyA(EBnonce(plaintext)+ERnonce(route))

Where,

SPubKyA = Signature verifiable via Alice’s Public Key.

EBnonce = Encryption to Bob from Alice under a key that is unique.

ERnonce = Encryption to the Routing system from Alice under a key that is unique.

If either the plaintext or routing information is changed then the signature becomes invalid. If Bob tries to forward the plaintext to Charlie, the routing information will be incorrect and thus flag up what he is trying to do before the system even tries to send the message.

For this to work the system needs to be able to “know” Alice’s PubKey (which is not an issue) as a fundamental step; likewise there must be a way for Alice and the system to generate a valid unique key. There are known ways to build such a system.
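A minimal sketch of that shape of construction, assuming Python’s `cryptography` package, Ed25519 for the signature and ChaCha20-Poly1305 for the two per-message encryptions; how the unique keys are derived and how the routing system checks its part is deliberately left out.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

alice_signing_key = Ed25519PrivateKey.generate()  # private half of SPubKyA
bob_key = ChaCha20Poly1305.generate_key()         # stand-in for the unique Alice->Bob key
route_key = ChaCha20Poly1305.generate_key()       # stand-in for the unique Alice->routing key

def seal(plaintext: bytes, route: bytes) -> bytes:
    n1, n2 = os.urandom(12), os.urandom(12)
    e_bob = n1 + ChaCha20Poly1305(bob_key).encrypt(n1, plaintext, b"")
    e_route = n2 + ChaCha20Poly1305(route_key).encrypt(n2, route, b"")
    # Length-prefix the first ciphertext so a verifier can split the two parts.
    body = len(e_bob).to_bytes(4, "big") + e_bob + e_route
    # The signature covers both ciphertexts: changing either the payload or
    # the routing information invalidates it under Alice's public key.
    return alice_signing_key.sign(body) + body
```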

vas pup January 9, 2020 5:24 PM

@Bruce: interesting article for your involvement in policy development

https://www.dw.com/en/2020-the-future-is-now-its-time-for-new-utopias/a-51587700

“What should we do?

So far, we have mainly treated the symptoms without addressing the causes. We are repairing the problems of the last century with the solutions of the last century. But the problems are becoming increasingly complex.

The time has come to rethink the structures of the last century and perhaps find entirely new, better solutions.

Deutsche Welle’s science editorial team has set itself exactly that goal for 2020: To bring together promising approaches and proposed solutions from a wide variety of fields and to outline a constructive, realistic utopia. What could work, transport, nutrition, communication, medicine, cities and society look like in the future?

We don’t all have to stay at home, or live off the grid and or just ride our bicycles. That’s unrealistic, because for most it’s not desirable. And what we don’t want, we don’t do voluntarily. Coercion may seem effective, but there are certainly more attractive solutions that make living and working together more worthwhile.

2020 — the future is now! We need new and realistic utopias. We want urban developers and futurologists, agricultural experts and politicians, visionaries and lateral thinkers to have their say on what better structures and solutions could look like.

Seeking alternatives is laborious and challenging, but it will be worth it.”

The same applies to security, and IT security in particular, when the treatment of symptoms/problems does not address causes. That looks like a last-century approach.
Best,
VP

Clive Robinson January 9, 2020 5:41 PM

@ vas pup,

How technology-related evidence is vulnerable to accidents/tampering

This is where Occam’s Razor can be interesting. Because what it might say on small parts of things it may not say on the whole thing.

On the small part it would say that the hypothesis that an “error” was made is more probable than it was “deliberate action”.

However, when you look at the whole system, the number of “errors” becomes improbable to say the least, thus the razor would indicate “deliberate action” was the root cause in all the supposed “errors”.

Because the level of “incompetence” required for all those “errors” to happen must have been clearly visible long, long before, and if the system was working the way “it is said to do” such shortcomings would have been addressed long ago.

So bureaucratic “coverup” rather than “f**kup” is the more likely hypothesis when viewed in the macro.

MarkH January 9, 2020 6:02 PM

@SpaceLifeForm:

The Dr Dobbs scenario supposes that Bob is untrustworthy (liable to behave adversely toward Alice).

Is that part of the threat model against which you must protect?

SpaceLifeForm January 9, 2020 6:09 PM

@ Clive

“As you know the outer encryption layer offers little other than hiding of it’s plaintext.”

Not trying to hide any alleged plaintext.

Which is not plaintext at that point.

Trying to bury the inner signature.

SpaceLifeForm January 9, 2020 6:36 PM

@ MarkH

“The Dr Dobbs scenario supposes that Bob is untrustworthy (liable to behave adversely toward Alice).

Is that part of the threat model against which you must protect?”

Yep.

Did you know Charlie has made contact with both Alice and Bob? And that both Alice and Bob have trusted Charlie?

Charlie may not be who you think.

Rachel January 9, 2020 8:00 PM

https://www.theguardian.com/world/2020/jan/09/uk-accused-of-behaving-like-cowboys-over-eu-database-copying

The UK illegally copied the Schengen Information System: ‘deliberate violations and abuse’, and the explicit acknowledgment by the EU that access was provided to the US.

GDPR, anyone? Anyone have grounds for applying for administrative fines?
Anyone suffered material or non material injury or damage as a result?

The article also notes the UK didn’t bother supporting the EU with its requests for arrests and its alerts, and was only concerned when someone or something entered the UK via an alert.

Clive Robinson January 9, 2020 10:48 PM

@ SpaceLifeForm,

Trying to bury the inner signature.

Which signature? They are both inner to the final encryption.

It would appear from other comments that nobody but you knows what it is you are trying to do, which makes “making suggestions” either “pot luck”, “frustrating”, or “wrong”.

Do you want to start again from the top?

For instance why do you want short messages? The shorter they are the less entropy they have thus the lower work factor an attacker faces which means that any protection of meaning by encryption will be reduced dramatically.

MarkH January 10, 2020 3:00 AM

@SpaceLifeForm:

… what Clive said. In cryptography textbooks or references, you can find examples of a very clear and brief way of describing who the actors are, the relationships between them, and various constraints on how they interact. We’ve been trying to feel our way in the dark.

Some more things I wonder:

  1. What’s the purpose of concealing a signature? Must eavesdroppers be prevented from discovering who signed the content? In most applications, a signature may be freely published without harm.
  2. Are the messages transmitted via TCP/IP? In my limited experience, sending a thousand messages a kilobyte long is not much slower than sending a thousand 20-byte messages.
  3. Are you establishing a protocol the parties should follow using standard tools, or providing custom software that will implement the protocol you have chosen?

Based on my present understanding — that there is no trust or meaningful cooperation among the parties — you’ll be forced to use a fair amount of public key infrastructure, and be stuck with transmissions of several hundred bytes. Avoiding hashing wouldn’t change that.

If the parties could share secrets, I could propose ways to prevent forgery and protect confidentiality with transmissions as short as 20 bytes.

But with an armed standoff where everybody’s trying to screw everyone else, the lengths of keys and the standard certificates which enclose them will require longish messages.

Clive Robinson January 10, 2020 6:07 AM

@ MarkH, ALL,

The Boeing MAX story grinds on…

https://www.bbc.com/news/business-51058929

Remember the documents seen by the public are redacted (though it’s not been said how), but even so comments made by staff are a bit of an eye-opener.

It would appear that management at all levels, not just senior, were not up to doing the job in what is essentially a “hard engineering” organisation, with what are essentially high risk products that demand exacting design, manufacture, operation and maintenance.

Looks like people have forgotten the maxim of “Remember surgeons are not butchers even though they both cut flesh”.

SpaceLifeForm January 10, 2020 3:02 PM

@ Clive, MarkH

“Do you want to start again from the top?

For instance why do you want short messages? The shorter they are the less entropy they have thus the lower work factor an attacker faces which means that any protection of meaning by encryption will be reduced dramatically.”

Think short message == key.

Hence E(S(E(S(payload)))) and other bits.

To make it harder for an attacker.

Think cloud, but not current meaning.
Think NNTP, think servers all over, but not controlled by a handful of corps.

Alice and Bob have exchanged their Master Pubkeys.

And a starting set of ephemeral encryption keys and a starting set of ephemeral signing keys.

That are used for the comms. In a random mix manner.

But they are not really ephemeral.
They don’t magically expire in one day.

It depends on usage.

The Master Pubkeys are not used for comms.

Never said this is simple or fast.

Security, Privacy, Speed. Pick two.

Hint: I do not trust BGP, DNS, or CAs.


  1. What’s the purpose of concealing a signature? Must eavesdroppers be prevented from discovering who signed the content? In most applications, a signature may be freely published without harm.

It’s to make it difficult for attackers, and so that Alice and Bob know the comm is legit, so they know that Charlie did not get in the middle.

  2. Are the messages transmitted via TCP/IP? In my limited experience, sending a thousand messages a kilobyte long is not much slower than sending a thousand 20-byte messages.

Either TCP or UDP. But short. Speed is not an issue. The slowness will occur on the devices. It’s not really a bandwidth issue.
The slowness will be due to the decryption.

  3. Are you establishing a protocol the parties should follow using standard tools, or providing custom software that will implement the protocol you have chosen?

Yes.

SpaceLifeForm January 10, 2020 4:43 PM

@ Sed Contra

“If they really want to do something good, release a completely blob-free, open source Linux phone.”

Great point. Won’t happen.

$$$

MarkH January 10, 2020 10:17 PM

@SpaceLifeForm:

I’ve been sincerely trying to understand, and still feel quite confused. For example,

Think short message == key

To make it harder for an attacker

… both seemingly in response to Clive’s question about why you want short messages.

My interpretation of the second statement quoted above, is that you prefer short messages for reasons of security (“to make it harder for an attacker”), NOT because of cost or speed.

I haven’t grasped “short message == key”. If “key” is used in its usual cryptographic meaning, then short keys make life much easier for an attacker, not harder.

So I’ve no clue, as to how short messages “make it harder for an attacker”.


a starting set of ephemeral encryption keys and a starting set of ephemeral signing keys

Ephemeral encryption keys are the industry standard, but if public-key signatures are used with large key sizes, then signing keys are used persistently because they’re expected to be secure for decades. Why, then, ephemeral signing keys?


The response to my question “What’s the purpose of concealing a signature?” seems to have two parts:

to make it difficult for attackers

so that Alice and Bob know the comm is legit, so they know that Charlie did not get in the middle

As to the second part, if any standard secure signature method is used, appending the signature to the message guarantees that it was Alice (for example) who signed it and nobody else. Because of this, Bob knows (a) that the message originated from Alice, and (b) Charlie can’t have altered it. I don’t see how a final encryption step adds anything at all to these assurances. Charlie can always attempt a replay, but incorporating such protections as sequence numbers or time stamps in the messages can protect against replay.

The first part of the response conveys no information about the presumed threat model. To make it difficult for attackers to do what, pray tell?

For example, I can imagine that the purpose might be to obscure the traffic, so that Umberto can’t easily identify the message as part of this particular traffic. If this were the case, it would be informative to specify “to make it difficult for attackers to identify the message as belonging to the protocol I’m creating between Alice and Bob.”

One thousand times better to spell it out, than to leave people guessing. Precision and thoroughness of thought and expression, are what distinguish engineering from tinkering.

Orly January 11, 2020 12:44 AM

“So bureaucratic “coverup” rather than “f**kup” is the more likely hypothesis when viewed in the macro.”

You divided “errors” and “deliberate action” from each other, which is a non-starter logically.

You go on to say “errors” at such rates can only result from “incompetence”, and then assume it “surely” would have been “noticed” at some previous point were it so, but that’s another dependent addition of your own that may or may not apply.

There are deliberate errors (like strongly opinioned people in positions to assert their authority despite being mistaken) and there is incompetence that stacks, compounds and goes unexposed for various real-world reasons indefinitely without provable, demonstrable malice.

Regardless of what our gut might say about Java or Flash or Microsoft “incompetence” based on the sheer number of “errors”, OR doesn’t actually lean towards incompetence or malice based on a large number or even a lopsided ratio. It does the opposite. It says what is most likely true is probably true, in the absence of competing information, but it does NOT assume the connective tissues you brought in to make the leaps.

Going by what you consider a “large number” of “errors” without knowing the actual subtext of their creation (or lack of discovery) and instead assuming it’s “probably” malice because the number “seems” too high, that’s not Occam’s Razor – it’s the opposite. Only if you had in-situ evidence of the validity of the assumptions in the proposed dichotomy between “errors” and “malice” would that actually be OR. In reality there are massively incompetent individuals who make errors all the time and would never admit it, but to assume them “likely” malicious requires specific information rather than hitting some personal internal threshold and assuming the rest of the way.

Clive Robinson January 11, 2020 2:56 AM

@ Orly,

You divided “errors” and “deliberate action” from each other, which is a non-starter logically.

You might want to rethink that, because most logic requires something to be in one state or another. That is, a piece of equipment ‘is’ functional to its specified purpose or it ‘is not’ functional to its specified purpose. A simple Yes/No or True/False result from a question.

If the question allows any other answer, then the question is wrong, or the person reading the question does not understand it for some reason.

If you don’t like the chosen labels of “errors” and “deliberate action” for how something became ‘non-functional for its specified purpose’, then pick two you think are more relevant for when you are presented with a no-longer-functioning-to-specification item or process. But remember they have to be “binary options”. That is, “something is” or “something is not”. This has to be true at each point in your hypothesis, otherwise you can not test it, and it is thus just an assumption of some form that has to be valid if the hypothesis on which it is made is to be valid[1].

As for “incompetence”, again it is a label. But the point is, if you get repeated “non-functional for specified purpose” items, then the specification should have a calculated fail rate against which the failure rate you are observing can be tested. If the actual rate of failure is wrong, and you are not investigating why, then there is something wrong with the system the item is used in. What you choose to label it is up to you.

You then say,

There are deliberate errors (like strongly opinioned people in positions to assert their authority despite being mistaken) and there is incompetence that stacks, compounds and goes unexposed for various real-world reasons indefinitely without provable, demomnstrable malice.

You are leaping in with your own implicit but unstated labels, such as a Dunning-Kruger manager, which a specified system should have checks for.

All you are really saying is that the “system” is not fit for purpose, thus the system is failing for some reason, the implication of which from your arguments is that the specification is incorrect or incorrectly implemented. Thus the question arises as to why those in charge of setting the specification or ensuring the system works to specification have not been monitoring the system and taking remedial action for some reason[2].

I’ll leave the conversation about how people come up with at best partly specified “specifications” and meaningless “tests” for another day, even though it’s one of mankind’s biggest failings.

[1] You see assumptions in many industries; usually they are “it has a linear response”, which usually is only demonstrably true in a limited range. One such is the “plastic behaviour” of materials, another is the equivalence of animal and human models in the testing of drugs. In both cases you can find examples of the harm of such assumptions; in the case of the former, “metal fatigue” caused an aircraft to fail in flight.

[2] Look into the history of the Ford Pinto, or the tobacco industry, or any one of hundreds of others. It’s highly likely the Boeing 737 MAX will in due time become another example for the history books.

Orly January 11, 2020 9:42 AM

” Because most logic requires something to be in one state or another.”

To assume it’s either an error or deliberate without overlap is a false dichotomy. You added that. You also added that “presumably” such a volume of errors would be overseen and caught, or it’s no longer incompetence but malice. That argument can be made and in the context I might even agree, but that’s NOT Occam.

Occam’s Razor does not include extra assumptions you bring internally. The opposite. Unless you can point to evidence in-logical-situ that founds those assumptions, it’s extra.

Otherwise OR would just mean “whatever I think is likely is probably true” and you could throw in all kinds of these unfounded additional assumptions to get wherever you wanted.

“then there is something wrong with the system the item is used in”

That’s fine to say. You just can’t justify any old assertion with OR though because it seems the most plausible based on your internal, extra-contextual experience. It has to be plain for the independent 3rd party to see on the basis of itself, or it’s not OR. Labels and other postulates fall out of the equation unless you can actually back them up with evidence that can be seen by all observers. Citing OR to cobble together a theory of what is likely based on external (even likely true) postulates is not actually OR.

” Thus the question arises as to why those in charge of setting the specification or ensuring the system works to specification have not been monitoring the system and taking remedial action for some reason[2].”

Fine, but you can’t assume malice until you have evidence of any actual malice. There are deliberate errors and there is completely widespread long-lasting incompetent errors for which no actual malice is ever demonstrable.

By your postulate, you’re effectively saying OR would call Flash / Java a trojan platform just by the sheer number of flaws. That’s not actually OR, that’s a probabilistic determination from a certain specific set of assumptions.

Just FYI.

MarkH January 11, 2020 1:02 PM

@Clive et al. re. Boeing 737 MAX:

It seems to me that Clive and I long ago concluded that the Boeing 737 MAX scandal — in which a software modification turned the flight control computer into an actual killer robot — was at its core a breakdown in management of the development process.

I don’t disregard or minimize the significance of reported complaints (or really, cris de coeur) from workers who were involved in the process. Some of them are exactly on point as to where things went wrong.

Nonetheless, throughout my long career I have heard bitter complaints from engineering and production workers along the lines of “those idiots upstairs have no idea what they’re doing” and “this piece of garbage design is doomed to fail.” So far, I didn’t notice an obvious correlation between complaining and outcomes …

Surely, a proper analysis of 737 MAX management failures (which I am confident Boeing has been conducting — the price tag is now estimated as USD 20,000,000,000) will include study of the thoughts and sentiments of the people involved at many levels. Interpretation of such data requires, I propose, a degree of subtle insight.

Those who remember the cooked-up “climategate” pseudo-scandal will recall a list of emails in which climate researchers vented anger, used extremely informal terminology, indulged in “backbiting” and the like. At the time, I asked a scientist friend what he thought about the leaked emails; his response was, in essence, “that’s pretty typical of how scientists communicate in private.”


Though I understand that Clive didn’t mean to compare corporate cultures when he proposed that the 737 MAX might be grouped with the Ford Pinto and the tobacco industry — my interpretation is that he was thinking above all about regulatory responsibility — I think it worth remembering that Ford and the tobacco industry knew their products were deadly, and made reptilian determinations to profit regardless of that.

Boeing (in another kind of regulatory failure) convinced itself that their product was extremely safe. Although they have the same incentive to minimize cost and maximize profit as any other business, they know well that plane crashes are extremely damaging financially as well as morally.


We already have many clues about what happened at Boeing; for a variety of reasons, over time a fairly comprehensive account is likely to emerge.

I’ve made my own effort to imagine how this process went so far “off the rails” … and what might have prevented the catastrophe. Obviously, the failures in management were comprehensive, and must have operated at many levels. The decision to “shut their eyes” after the first crash was completely inexcusable, and occurred primarily in top management. [I accept that my diagnosis may be mistaken.]

As a product development guy, my focus is more on how a fatally flawed system came to be deployed in a context with a broadly shared understanding that safety lapses are intolerable. In the development process, I suspect that most of the actors (though we now know of exceptions) believed that the system was safe; there seems to have been a sort of shared illusion that all was OK.

To most of the people involved in the development process, in their “corner of the elephant” everything seemed to be in order.

Perhaps the process could have been saved by an “MCAS watchdog”: a small group, or even a single individual, with:

• understanding of the underlying system engineering concepts (why and how)
• a thorough grounding in the associated safety hazards
• independence (not answerable to management of the various departments)
• visibility across the range of departments and teams involved in the process
• a “skeptical charter” to assume that the thing was dangerous until proven otherwise, and to question all claims of fitness

[Boeing did have a safety review committee in place, which apparently accepted claims about technical safeguards which were in fact inaccurate — apparently due to misunderstanding, not intended deception.]


Based on what he’s written before, I guess that Clive would say — quite justly! — that (an) FAA officer(s) would have been the exact answer to this role.

In the absence of such a degree of FAA involvement, Boeing surely could have fulfilled this role internally.


I keep thinking of an analogy — I mentioned it here before — of a fatal ground collision at a U.S. airport about 30 years ago. A control tower supervisor, who was monitoring tower operations in general, was hearing the various controllers communicating with their aircraft.

According to testimony afterward (there were apparently no recordings of audio other than radio traffic), when the supervisor started to realize that something had gone terribly amiss (a taxiing airliner was essentially lost in the fog), she “stood up” and ordered all traffic to stop:

She said that her first indication that something was wrong was when the east ground controller stated “[expletive], I think this guy’s lost.” She then directed all controllers to, “Stop all traffic.” When the ground controller advised that the airplane might be on the runway, she said, “I said stop everything” in a loud voice. [NTSB Investigation Report]

The evolution of that accident occurred over a couple of hundred seconds; the supervisor’s intervention might have been soon enough, but a controller who believed that one of the accident aircraft had already lifted off did not tell that crew to abort the takeoff. The report faulted the supervisor for not verifying that all planes had in fact been stopped.

Boeing needed somebody monitoring MCAS, empowered to stand up and shout “stop everything!”

SpaceLifeForm January 11, 2020 2:05 PM

@ MarkH, Clive

My interpretation of the second statement quoted above, is that you prefer short messages for reasons of security (“to make it harder for an attacker”), NOT because of cost or speed.

[Correct. If you want Security, and Privacy, you do NOT get low cost or speed. You are going to burn a lot of cpu cycles]

I haven’t grasped “short message == key”. If “key” is used in its usual cryptographic meaning, then short keys make life much easier for an attacker, not harder.

[Payload may be a new ephemeral pubkey]

Ephemeral encryption keys are the industry standard, but if public-key signatures are used with large key sizes, then signing keys are used persistently because they’re expected to be secure for decades. Why, then, ephemeral signing keys?

[Harder to attack. Note that many signing keys have been revoked over many years. Even ICANN did so about a year ago]

As to the second part, if any standard secure signature method is used, appending the signature to the message guarantees that it was Alice (for example) who signed it and nobody else.

[Assumes facts not in evidence. See DrDobbs article. Charlie may be involved]

Charlie can always attempt a replay, but incorporating such protections as sequence numbers or time stamps in the messages can protect against replay.

[Again, assumes facts not in evidence]

For example, I can imagine that the purpose might be to obscure the traffic, so that Umberto can’t easily identify the message as part of this particular traffic. If this were the case, it would be informative to specify “to make it difficult for attackers to identify the message as belonging to the protocol I’m creating between Alice and Bob.”

[My sincere apologies. It is to obscure traffic analysis which I thought I mentioned at one point or another]

[I’ll expound on new squid]
