Steve January 16, 2023 8:00 AM

So, what does this mean? Does this mean many more people/governments can infect people — a nightmare scenario, or is this simply a better opportunity for Apple and others to better investigate and close vulnerabilities? Or both?

M January 16, 2023 8:09 AM

@Steve 8:00 AM

In the short term I would definitely suspect the former; availability is a prerequisite for usage.

In the medium term I would expect the latter; the ability for software vendors to examine attackers’ tools will help them develop defenses.

In the long term, absent at a minimum changes in the software development practices that allowed those holes to get through QA processes to begin with, I would expect that it will roughly even out as new security-vulnerability bugs are introduced and discovered.

echo January 16, 2023 9:23 AM

I tend to think the outcome of the documentation and software being released will amount to precisely sweet eff aye if some people have their way. It really just feeds into the steady state form of egomania and avoidant incompetence. Well, that would be the stock business as usual cynicism peddled by cut and paste techbros.

This isn’t about security at all. It’s about a paucity of imagination, and economics and hierarchies. Most growth really has been based on an increasing population. Most of that was internal growth. Only latterly has it become global. Modernity is a very malleable and shifting thing. To wit: products built for limited lifecycles, whether in function, compatibility, or build quality, are a way of maximising profits in a shrinking pool of opportunity while we race towards largely immovable environmental and resource deadlines requiring avoiding action.

The way to avoid all these issues with smartphones and the like is simply to stop making them. You could also turn off the cell towers too. You can imagine the scenes can’t you? Shirty execs panicking over the balance sheet turning red, and assorted coat tail “experts” having to detox.

So build them properly, and build them properly the first time, build them to last, and also don’t use them as an invasive platform to peddle all manner of stuff and nonsense nobody needs or wants.

Once the stupid tax (i.e. consumerism) has been removed then we can be more honest. Stop making stuff we don’t need to enrich the 1% and divvy the GDP up instead so we can all hit the beach.

Winter January 16, 2023 10:57 AM


So, what does this mean?

  1. People can scan devices for being infected
  2. People can collect information about what people/organisations were targeted
  3. If the “wrong” people were targeted, there will be retaliation against the firms peddling the software

Steve January 16, 2023 11:17 AM


Were your 3 points not already true prior to this software release? I don’t think the software release changes anything in that regard?

Winter January 16, 2023 11:31 AM


Were your 3 points not already true prior to this software release?

Ad 1.: scanning for infections requires the source code to be effective.

Clive Robinson January 16, 2023 1:13 PM

@ Bruce, ALL,

Beware “Not Suitable For Work” content

Some of the links given,

1, hxxps://boards.4chan….
2, hxxps://….

lead to pages that have image / ad content that some workplaces consider,

“Not Suitable For Work”(NSFW).

So best avoided.

Winter January 16, 2023 1:34 PM

Children’s comics often portray evil characters who want to destroy most of humanity. You find them also in the worst type of James Bond movies and wannabe Bond movies.

4chan is where the real people who would like to do so, but are too incompetent, gather to discuss their hatred of humanity.

Ted January 16, 2023 3:10 PM

I guess it would be too soon to hear from Cellebrite and MSAB, or any of the other companies whose products might be affected.

Maybe it’s also too soon to hear from security researchers?

MSAB’s extraction software, XRY, can supposedly work with 42,000 device profiles, has 4,500 passcode & bypass features, and can retrieve data from 4,200 app versions.

I feel like this needs a ‘cyber safety review board’ type of analysis. But, then again, I don’t know what the files contain.

Jonathan Wilson January 16, 2023 3:30 PM

If I were a phone manufacturer, I would be getting some guys to pull this software apart, figure out what exploits/bugs/loopholes are being used to gain access, and close them off in a software update (or in the next hardware revision if it can’t be fixed in software).

Same for any app developers whose apps might contain flaws that this software is exploiting.

Bear January 16, 2023 4:00 PM

I can’t ever see a story like this without assuming that the perpetrator is in the same business as the victims whose secrets are exposed, just using the same tools of the same trade making a move to control the competition. This is probably a bad day for me; that sounds like the sort of thing I say when depressed.

What I realized a while ago is that people are capable of making secure software – and that they won’t. Ever.

The temptation, desire, and perceived need to use software to do things regardless of whether we are sure of being able to secure those things is part of the human condition. The market acceptance of software which does things known to be “risky” or hard to secure, for very marginal or no consumer benefit, is evidence that we are failing on purpose.

Some days I feel a mission to protect people who don’t understand that their lives are being bought and sold. Other days I’m just too disgusted with the whole situation to have any sympathy for those who accept this situation as it is.

Winter January 16, 2023 4:10 PM


What I realized a while ago is that people are capable of making secure software

Can you make secure software?

I know no one who makes software and would claim she or he can produce secure software.

JonKnowsNothing January 16, 2023 5:06 PM

@Bear, @Winter

re: I know no one who makes software and would claim she or he can produce secure software.

Additionally, if anyone claimed they could make software secure, I’d know for sure, they had missed a puddle in the pond.

At most, I might claim that the code I wrote in the editor was “safe-ish”. I would not claim that what got uploaded to the source repository, downloaded to the build environment, compiled into OBJ and LINKED to whatever type of output it drives, would in any way, shape or form actually be what I wrote or intended to write.

Unit tests might show it was WAI, but that does not preclude an exploit. Regression tests might show it didn’t break something else, but does not preclude an exploit.
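To make that concrete, here is a minimal sketch of the gap between “works as intended” and “not exploitable” (the function, its unit test, and the injection string are all invented for this illustration):

```python
# A unit test can show code is "working as intended" (WAI) on expected
# input while proving nothing about hostile input. This query builder
# passes its test yet remains open to SQL-style injection, because it
# concatenates the input instead of parameterising it.
def build_query(username):
    return "SELECT * FROM users WHERE name = '" + username + "'"

def test_build_query():
    # The unit test only exercises the expected, benign case.
    assert build_query("alice") == "SELECT * FROM users WHERE name = 'alice'"

test_build_query()  # passes: WAI on the expected case

# The exploit the test never exercises: a crafted input that changes
# the meaning of the query entirely.
print(build_query("x' OR '1'='1"))
```

The point is not this particular bug; it is that a green test suite certifies behaviour on the inputs someone thought of, and nothing else.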

Known Bugs left Unfixed are rampant. Anyone ever seeing a Bug Report Listing with all the unfixed bugs in it would have to conclude that the whole system was insecure.

Exploits, especially the zero-day kind, are held by LEAs in order to further their info-wars-attacks on FillInTheBlank groups, organizations, countries and businesses.

These are at least 2 pools of not-secure code, both beyond the ability of a single programmer to remedy.

RL tl;dr

A major big-time player in the tech industry solved decades of unfixed bugs in 1 day: they truncated their bug database, purging any entries older than “N(small) years” from the date of the purge. Tens of thousands of bug reports vaporized in the time it took for the system to Delete Record.

&ers January 16, 2023 7:54 PM


As you all know, modern HDDs have an ATA security feature that LEAs can bypass.

While this leak is phone related, I’m interested to know: was there ever some leak of the tools that LEAs use to bypass the ATA security feature?


Clive Robinson January 16, 2023 8:49 PM

@ Steve, Ted, ALL,

Re : Is it time to worry?

@Steve asks the perfectly valid question

“So, what does this mean? Does this mean many more people/governments can infect people — a nightmare scenario,”

The short answer currently has to first take @Ted’s question into consideration,

“Maybe it’s also too soon to hear from security researchers?”

Yes it will take a while to sort through all the chaff before getting to the kernel.

But, this is not the first time this has happened to Cellebrite, so we have previous on the way they operate.

Put overly simply, the previous version of Cellebrite’s software was written in a way to ensure the pennies kept clinking into the outsized, overgorged piggybank they were turning themselves into, to get serious VC investment and, we assume, eventually a mega-payday sell-off or flotation.

From 20,000ft the software had three parts,

1, Instrumentation implant for phone.
2, Control of implant service.
3, Back end client DB and services.

In short, knowing the “implant” code does not give you anything but the basic “backdoor” channel, used to receive a push-down from Cellebrite of the code for the client’s chosen data-gathering exploits, which then forwards the data not to the client but to Cellebrite’s back-end client DB and services.

This way the client is “on the meter” for not just every phone that is implanted, but also every type of data and data quantity they want access to.

If you are a paranoid ruler with a very large Sovereign fund then the very high cost is not an issue. Likewise if you are a dictator etc. skimming 20% of the country’s GDP into “foreign investments” through your own personal “Private fund”, then again the very high cost is not an issue. But what if you are a nut-job CEO… Well, they introduced a version just for you[1].

But… Cellebrite were known to “double-tap” or more. That is, if two people wanted the same journalist bugged then they both got charged full rate, and they were not told about each other’s interest in the same journalist.

We don’t know just how many double (or more) taps they were running, but cheating the customers, I understand, was “Business as Usual” for Cellebrite.

As the old saying has it,

“No honour amongst thieves.”

But then, if part of your business model is knowingly setting up innocent people for “enhanced dissuasion” of the very permanent kind, your honour, such as you might claim to have as a business person, is more negotiable than many would think humanly possible.

The problem with the “everything goes through us” business model is it has little or no “Plausible Deniability” when somebody from the home Government comes asking questions.

But just how good is the software? Well, some have already worked out ways to hack it; if you remember[2], Moxie was not that impressed with Cellebrite’s unlawful activities.

So yup, we are going to have to wait a little while, but I’m not expecting a “WannaCry”[3] worm-type repurposing to occur. But if it’s possible, I’m reasonably certain someone will turn it into a form of ransomware, just because they can…




&ers January 16, 2023 8:59 PM


Previous leak.


lurker January 16, 2023 11:43 PM

@Clive Robinson, Winter

I had a scroll down the 4chan page to see if there were any intelligent comments. I fear the upload to that site was casting pearls before swine …

Winter January 17, 2023 3:22 AM



Re: secure software

I challenge that. We do not know about a way to produce secure non-trivial software. Especially not any software that communicates asynchronously.

Clive Robinson January 17, 2023 3:38 AM

@ lurker, Winter, ALL

Re : “Not Suitable For Work”(NSFW)

“I fear the upload to that site was casting pearls before swine…”

As @Winter noted,

“4chan is where the real people who would like to do so, but are too incompetent, gather to discuss their hatred of humanity.”

Desire and ability in this respect are mostly not coincident.

Which, thankfully for the rest of us, is lucky.

But occasionally you get those where “desire and ability” are both on the higher ends of the scale. The less smart of these go out alone, and for other reasons get caught. The smarter ones “script it up”, with some unwisely selling –as payment is mostly traceable– but the really nasty ones giving the scripts away, so that the majority of those who are daft enough not just to go to 4chan but comment on it can play along as well.

Some of us remember back to “Anonymous” and their re-packaging of the “Ion-Cannon” test code into DDoS attack code… And how so many not even “scriptkiddy” level people so unwisely used it, with some later finding out that, whilst the Piper might play for free,

“The Landlord has a price you must pay if you play”

So yeah as they used to say,

“Children that is a bad neighborhood and bad things can happen to you there. So you don’t want to go there, let alone be seen there!”.

Clive Robinson January 17, 2023 4:34 AM

@ Winter, SpaceLifeForm, ALL,

Re : s/secure/profit/

“I challenge that. We do not know about a way to produce secure non-trivial software. Especially not any software that communicates asynchronously.”

I feel you are kind of talking at cross purposes with each other.

Whilst I would agree that no non-trivial software is 100% secure[1], we can get very close, even with async-comms involved.

The real reason that most “commercial” and probably all “consumer” code is not secure is because of “profit”.

Put simply, due to the way the majority of the software industry works,

“Short delivery cycles of new features”

are required not just by managers and directors / C-suite, but by the fly-by-night shareholders of the mythical “Free Market”.

Designing secure systems is not just possible, it is actually routinely done in some “associated industries”; the problem is that delivery takes about five times as long, and the costs are ten or more times higher.

That is because certain steps between real “engineering” and “code-cutting” are seen as “an unnecessary expense”.

Because the consumer and most commercial software industry is seen as “non-critical” and “low direct risk”, any old junk/example code downloaded off the Internet can be thrown in, barely tested, and

“Fixed in the maintenance cycle”

That, we know, seldom if ever happens unless the failings are sufficiently embarrassing to affect “the bottom line”…

Which is why we have a veritable tsunami of “technical debt” building to the point where it will come crashing down.

We can already see this with the likes of crypto-coin and NFT systems, where to most people unimaginable amounts of money get stolen by obscure bugs that there is sufficient financial incentive to go find and profit by.

You could call crypto-coin / NFT “fools-code” or the “leading edge of crazy”, but the sad truth is it’s the leading edge of the “fools’ paradise” the “Information Economy” is taking us into, mainly via,

1, There is no real high-gain economic activity in the First World any more.
2, Consumer and similar “cupidity”.
3, Failure of market-place “legislation” to have any effect.

Thus we are in a very significant downward spiral, with “Web3” drilling into the swamp as fast as can be.

The fact is, some have already seen this coming and thought it through. Selling “code” is a mostly pointless exercise these days. The majority of PC usage is already sufficiently “covered” by software developed more than two decades ago. Thus trying to make people pay again and again, for what they already “own”, via increasingly useless feature upgrades has gone about as far as it can. Likewise PC hardware, which has previously driven major software upgrade cycles, is more or less more powerful than consumers need, thus exploitable demand has slowed.

The solution has in part been the bubble-market of stealing people’s privacy to plunder marketing budgets, but that’s clearly coming to an end.

The solution they see is “rent seeking” combined with the “forcing you to expose yourself” we call “Cloud Solutions”.

If you don’t pay what “they demand” every month then they “cut you off”; to ensure this has real traction, they capture your data and hold it to ransom.

Anyone who uses “Cloud Solutions” is, to use the old quote,

“A fool unto himself”

And as the other old saying puts it,

“A fool and their money are soon parted”

And by and large this is where “they” are trying to drive the consumer and low-end commercial markets. A “fools’ paradise” of a “gilded cage” most are “sleepwalking into”.

The thing is, “cloud systems” are also quite deliberately insecure, because it gives “plausible deniability” when “they” steal your Private Data and it becomes sufficiently public for you to find out it’s been stolen. The plausible deniability and sharp lawyers are just one reason why Experian, for instance, can field totally insecure systems and carry on pulling in 3 billion every time you turn the page on the financial market news.

What can I say,

“Welcome to hell, Dante’s nine circles were just the bottom of the stack. We are oh so much more ‘full stack’ these days, and growing, have a nice day.”

[1] See my past comments on this blog about “Unknown, Unknowns” for that reason.

Winter January 17, 2023 6:47 AM


We can already see this with the likes of crypto-coin and NFT systems where to most people unimaginable amounts of money get stolen by obscure bugs, that there is sufficient financial insentive to go find and profit by.

Fun Fact:
The Bitcoin Network has not been “hacked” since 2009, even though the total value of all bitcoins is on the order of $1T. No one has ever gotten any bitcoins out of the blockchains without the keys [1].

What is hacked are the “smart contracts” and wallets that interact with the blockchains. Key management is the weak spot, as always.

[1] To be more precise, no one has ever been able to do a transaction on the Bitcoin blockchain without the private keys of the “account” to pay out.
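That distinction between the ledger and key management can be sketched in a toy (big assumptions here: HMAC stands in for Bitcoin’s real ECDSA/Schnorr signatures, and this toy verifier holds the secret key, whereas real Bitcoin validators verify against a public key):

```python
import hashlib
import hmac

# Toy spend-authorisation check: a "transaction" is only accepted if
# its tag was produced with the owner's key. HMAC is a stand-in for
# a real digital signature scheme; the protocol is invented for
# illustration only.
def sign(key, tx):
    return hmac.new(key, tx, hashlib.sha256).digest()

def accepts(owner_key, tx, tag):
    # The "validator" rejects any spend whose tag does not match.
    return hmac.compare_digest(sign(owner_key, tx), tag)

owner_key = b"owner-secret"
tx = b"pay 1 coin to bob"

assert accepts(owner_key, tx, sign(owner_key, tx))     # owner can spend
assert not accepts(owner_key, tx, sign(b"guess", tx))  # forger cannot
```

The weak spot is then exactly as stated: not the ledger rule itself, but wherever `owner_key` is stored and handled.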

Phillip January 17, 2023 11:42 AM

@Jonathan Wilson, I agree. The industry is lucrative – it has the resources to do exactly what you are saying.

If this is feasible (maybe some research is needed), security through obscurity could lose again. We can handle this.

Bear January 17, 2023 3:10 PM

As someone who actually writes code, I’ll stand by it. We can make secure code, and won’t.

We’d rather have asynchronous communications, GUI’s, and things made using development kits that weren’t ready to be released. We’d rather use libraries that are less than ten years old and aren’t standard libraries distributed with every version of the compiler for at least ten years. We use software tools that aren’t universal tools used by literally everyone. We like to reuse components that have had less than a million hours of testing.

We’d rather use hardware capable of storing and executing code we don’t need stored and executed. We’d rather put a whole &%%#* operating system on a goddamn thermostat that doesn’t need one – and then enable the thermostat to do communications we don’t need it to do.

Even if we have potentially secure code, we prefer to trust other people to archive it, compile it for us, and manufacture the hardware it runs on.

Our code is insecure because we don’t give a damn about securing it. I can write secure code and absolutely nobody wants to buy or sell it, because secure code isn’t what people want. They want code that does a million unnecessary things before breakfast instead.

Winter January 17, 2023 3:42 PM


Even if we have potentially secure code, we prefer to trust other people to archive it, compile it for us, and manufacture the hardware it runs on.

You cannot secure your tap water and electricity by making your own, including the pipes and cables, pumps and generators.

Writing a sizeable software project, as well as a compiler, and designing your own chips and foundry: that requires superhuman coding powers.

Bear January 17, 2023 6:34 PM

Let us just say then, that 99% or more of our software vulnerabilities arise in or because of software that has no rational positive use as deployed – like the bash interpreter and TCP/IP stack inside a camera which you did not buy with any intention whatsoever of running bash scripts or doing communications over the internet.

If we gave a crap about security that code would not even be there. The camera does not need it. A camera is a sensor, full stop. It has the duty of delivering bits in an absolutely standard easily-inspected format to a hardware buffer where they can be read by the same well-known, ultra-simple, easy-to-find-and-inspect code that reads the bits from ANY CAMERA IN THE WORLD. Because we envision the camera itself as a thing that has any business doing communications or interpreting commands, we have already failed.

Winter January 18, 2023 1:27 AM


If we gave a crap about security that code would not even be there.

There is indeed no shortage of unnecessary bad decisions regarding software.

But places that really do spend money on safe and secure software still end up with faulty software. NASA has had its share of software errors ending in disaster. The aircraft industry used to implement fly by wire systems in threes, every system using a different OS and software stack, hoping any bugs would not be present in two of them at the same time.

My point is that the only sane way to approach software development is not to go for “security”, but fault tolerance. There will be bugs, so deal with it.

ismar January 18, 2023 1:41 AM

Re: no secure software possible – it is all about the threat model, i.e. can it be secured against a nation state vs a script kiddie?

Sumadelet January 18, 2023 2:59 AM

Secure software is a Halting Problem. There is no process that terminates in reasonable time that can tell you whether any particular piece of software is secure. Obviously, there are special cases where you can likely answer the question, but they are special cases. It’s like trying to determine whether a non-trivial sequence of numbers is random, or determining the Kolmogorov complexity of a non-trivial sequence.

Can we write secure software? Quite possibly yes, for trivial cases. The problem is determining whether trivial software is useful, or whether useful software can be generated from trivial (formally provable) programs.
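The undecidability point can be sketched with the standard diagonal argument. Everything below is an illustrative toy (the oracle and the adversary construction are invented for this sketch, not real analysis code):

```python
# If a terminating oracle is_secure(program) existed, we could build a
# program that misbehaves exactly when the oracle approves it -- so no
# such oracle can be right about every program.
def make_adversary(is_secure):
    def adversary():
        if is_secure(adversary):
            return "misbehave"   # oracle said "secure": it was wrong
        return "behave"          # oracle said "insecure": also wrong
    return adversary

def naive_oracle(program):
    return True                  # a stand-in oracle: approves everything

adv = make_adversary(naive_oracle)
print(adv())                     # the "approved" program misbehaves
```

Whatever answer the oracle gives about its own adversary, the adversary does the opposite, which is the same self-reference trick behind the halting-problem proof.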

Clive Robinson January 18, 2023 3:34 AM

@ ismar, ALL,

Re : No 100% secure software

“Can it be secured against a nation state vs a script kid”

It’s not about the type of attacker, their skill level, or even the type of attack.

To see why you have to move your thinking with changes as they occur.

Arguably as I’ve said in the past, vulnerabilities should be seen as,

“Individual instances in a class type of vector”

Thus you have three basic types of vulnerabilities,

1, Known instance, in a Known class.
2, Unknown instance, Known class.
3, Unknown instance, Unknown class.

The software industry all too frequently defends against “Known Instances” rather than “Known Classes”, which means two things,

A, Increased code size
B, Increased code complexity

Both of which we know lead to new vulnerabilities, of one form or another (the dread “solutions make problems” issue).

Addressing known classes tends to keep the code size much smaller, and about as minimal as it can get as well, so giving “more bang for your buck”, for what ever productivity measure you use.

But with the best will in the world,

1, We can not know everything.
2, Circumstances evolve.

So there will always be the “Unknown Unknowns” coming up as a consequence.

Whilst a very, very few people will get the “hinky feeling” and see, by the edges of existing class coverage, that “gaps between” exist, and so where new classes will be found and some of their characteristics. This is a very rare skill set and generally not how new vulnerability classes are found.

Worse, as I’ve found out a couple of times in the past, when you do identify a new class you get denial from others, as they only want to see “Proof of Concept”, and thus will do nothing until we get hit by a new zero day.

Most new class zerodays have been found by “chance”… That is something “odd” is seen from what is effectively “random input” and on investigating that a new class and instance within it is found.

Whilst the likes of “fuzzing” can improve the chances of finding something odd, it’s actually quite a bad way to find new classes and instances, as it’s not directed.

A side effect of which is that finding a new “something odd” falls randomly, not by researcher skill level. Hence any user can see “odd”, not just those looking, which means skill level has less to do with it than most would like to think.
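The undirected nature of fuzzing can be sketched in a few lines. The parser and its hidden bug below are invented for illustration:

```python
import random

# Undirected fuzzing sketch: a toy parser with a hidden bug -- it
# "crashes" on any input longer than 3 bytes whose first byte is 0xFF.
# The fuzz loop throws random bytes at it until something odd happens.
def toy_parse(data: bytes):
    if len(data) > 3 and data[0] == 0xFF:
        raise RuntimeError("parser crash on malformed header")
    return len(data)

def fuzz(trials=20000, seed=1):
    rng = random.Random(seed)
    for i in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            toy_parse(data)
        except RuntimeError:
            return i, data      # "something odd" found, purely by chance
    return None                 # nothing odd seen in this budget

print(fuzz())
```

Note the loop has no idea *why* the crash happens; it only reports that random input tripped something, which is exactly the “found by chance, then investigated” pattern described above.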


“As code moves forwards into new areas, new classes naturally come into existence.”

Think of all the new issues with AI and not just input data, but the order data is input etc giving rise to what are effectively “hidden biases”.

The reason for this, whilst simple enough to get your head around[1] (and thus a burst of research happens as more people “get it” and so find new instances), is that we as yet have no way to realistically stop such attacks.

That is “data evolves” and we want the AI to evolve with it, but that process by default leaves it vulnerable to any “chosen input” attack.

Finding a way to de-couple genuine change from directed change, may not actually be realistically possible in “real time”.

[1] It’s because the AI code’s input-data analysis is,

1, Dynamic, not static, and ongoing.
2, Has different attack and decay times.
3, Has sensitivities modified by past data, not “all” data.

Thus the bias point “walks”, not from the starting mean but from the last bias point. With random data input this is hard to see and tends to regress. But with selected input you can easily move a bias point to an extreme, then occasionally “top it up” against movement away from that desired point.
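A tiny numeric sketch of that walking bias point (the update rule and all numbers are illustrative assumptions, not any real AI system):

```python
# Exponential-moving-average sketch: each update moves the estimate
# from its *last* value, not from all data seen, so a run of chosen
# inputs can walk it to an extreme, and occasional "top-ups" can hold
# it there against drift back.
def ema_update(bias, x, rate=0.05):
    return bias + rate * (x - bias)

def run(stream, bias=0.0):
    for x in stream:
        bias = ema_update(bias, x)
    return bias

benign = run([0.1, -0.2, 0.05, -0.1, 0.15, -0.05])  # stays near 0
walked = run([10.0] * 100)                          # dragged towards 10
print(benign, walked)
```

With random-looking input the estimate hovers near the true mean; with selected input it ends up near the attacker’s chosen value, and nothing in the update rule distinguishes the two cases.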

Clive Robinson January 18, 2023 4:51 AM

@ Sumadelet, ALL,

Re : No 100% secure software

With regards,

“Can we write secure software? Quite possibly yes, for trivial cases. The problem is determining if trivial software is useful; or useful software can be generated from trivial (formally provable) programs.”

Actually it’s a bit more complicated than the “top down” approach “formally provable” implies.

I’ve described “bubbling up” attacks in the past, as a small subset of the “Probabilistic Security” issue within the “Castle-v-Prison” issue (see this blog a decade or so back).

Put over simply,

“People fall into the fatalistic ‘you can not build a castle on shifting sands’ thinking”

Which is provably not true (density, engineered construction strength), as English Tudor King Henry VIII ably proved in various ways –one of which was the Mary Rose disaster– on his way to building the first effective purpose-built global navy.

The solution to bubbling-up and similar bottom-up vulnerabilities that “change state” is to “engineer them out”.

A small part of this can be seen in the use of “memory tagging” in the likes of “Capability Hardware Enhanced RISC Instructions”(CHERI) and similar.

I chose to investigate a different more encompassing “Castle v Prison” approach, think of it as,

“In the box thinking for out of the box results”.

Basically our current computer models, especially that of software, are limited by “serial thinking”, which, when tied with the results of Kurt Gödel’s two incompleteness theorems at the beginning of the 1930s, proves that our current computing models can not be “secure”.

That is,

1, You can not as an observer look in the box when it is running, because it is currently your only interface to its internals.
2, The box will only tell you what it is programmed to tell you.

Thus you load a program, and “assume” behaviour based on what “WAS” loaded, not what “IS” running when you ask it to report.

Between “loading” and “asking” an error or deliberate fault may occur. In either respect you can not trust what the computer tells you. Which is why AV software will, at the end of the day, always eventually “fail the user”.

Thus you need to “reliably” first check for changes before asking, which you can not do whilst the single CPU is running.

Without getting into the weeds and below, it always reduces to a temporal “Turtles all the way down” problem.

The only way to stop it is to halt the CPU and using a state machine “walk the wire”.

The problem is the trade off in time between checking and usefully running code. The more frequently you check, the lower the probability of a fault/malware causing issues, but consequently the lower both the efficiency and productivity. The less frequently you check the greater the efficiency and productivity.

Thus you get a “trade-off”: the more secure the system, the less useful work it can do.

Hence you have to “pick the point” at which you set checking, always knowing that there is a probability your checking will fail you.
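The trade-off can be put in back-of-envelope numbers. The cost model below is an invented illustration (fixed verification cost, linear exposure), not a measurement of any real system:

```python
# Check-frequency trade-off sketch: halt the CPU and verify state every
# k work units, at an assumed fixed verification cost. More frequent
# checks shrink the undetected-fault window but also shrink throughput.
def tradeoff(k, check_cost=5.0):
    utilisation = k / (k + check_cost)  # share of time doing real work
    exposure = k                        # worst-case undetected window
    return utilisation, exposure

for k in (1, 10, 100):
    u, e = tradeoff(k)
    print(f"check every {k:>3} units: utilisation={u:.2f}, exposure={e}")
```

Picking `k` is exactly the “pick the point” decision: there is no value that maximises utilisation and minimises exposure at the same time.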

Winter January 18, 2023 5:07 AM


Secure software is a Halting Problem.

Indeed, but I have found that those working in the tech industry do not believe in the value of the findings of Computer Science.

Most of the time, people simply deny that the halting problem actually is proven unsolvable. There is always some reason why, they say, it does not apply, and why there undoubtedly must be a real solution.

JonKnowsNothing January 18, 2023 10:45 AM

@Winter, @Clive, ALL

re: But places that really do spend money on safe and secure software still end up with faulty software.

One of the indirect problems with “secure software”, is the definition of what “software” are we discussing.

There is code running on a server, PC, device, router etc. This code is device driven, perhaps has a UI or UI interface. It can be a game or something critical like an XRay or MRI machine. This type of code is often what gets labeled as insecure or has a bug that can be exploited and is generally the small end of the microscope in focus.

There is the other end of the system, which doesn’t necessarily have to have an exploitable bug – it may be working perfectly fine – yet it creates some of the worst insecurities and puts large numbers of people at risk. This is both the physical infrastructure and the transmission of data+info packets from one system to another, sometimes referred to as The Internet Backbone.

In the USA, the internet is split between 2 sections of the country, East & West, along the line of the Mississippi River. On each side of the river it fans out in concentric rings that get smaller and smaller. All the internet and networking systems connect to it, from mega corps to an App on the phone. There are only a few providers of this Backbone.

The security risk in the USA, is that USA National LEAs have a direct connect to the backbone. They hoover up everything that runs along it. They also tap into In-Out connections at “The Water Line”. Incoming from overseas lines also have direct taps. Seabed cables are tapped so often that there’s hardly any surprise anymore, when competing interests just follow the cable laying ship.

One can safely predict that any country with the wherewithal has similar programs.

So, security in code is helpful, even if it’s not Perfect Security. However, it is best to remember that even if you could get Perfect Security in CODE, the internet is Not Secure and Not Securable.

This aspect is one that hits the concept of “world wide” right in the middle of that myth. Lots of people ignore it on the premise of “inter-connectivity” and “wishful thinking”. The access to the backbone bypasses all the security you can put in the code. You might be able to encrypt the data, but Bluffdale has capacity to Collect It All and Hold It All until it becomes plain-text.

For most you have None and best you have Some.

lurker January 18, 2023 12:33 PM

re insecure network

I’m very happy to play again @Clive’s one track disc:

Does it really need to be connected? If not, don’t.

There, fixed that for you.

Clive Robinson January 18, 2023 8:35 PM

@ lurker, JonKnowsNothing, ALL,

Re : Clive’s One track disk…

It’s not a one track disk, but it’s almost always the first track on the A-Side[1][2]…

And for some reason some think it must be “the work of the Devil” because it conflicts with all their silly and mostly wrong mantras.

So there is much gnashing and wailing and so much noise to drown out every other track 😉

[1] Anyone else old enough to remember when disks/records had A and B sides, and sometimes a “45’s” rise in the “Singles Charts” was due not to the A-Side but the B-Side?

Excuse me whilst I just adjust my Zimmer-frame 😉

[2] Rumour Central has it that our host @Bruce has just crossed into his seventh decade,

“Happy Sixtieth Bruce, may you have many more.”

Thus you are now eligible to join us “old gammers” in the “4-box of candles” set. Just watch out for those little Cake Candles though… because whilst one or two are OK, when you’ve enough together those puppies generate enough heat to “cook your head” before you can blow them all out, and singed eyebrows are never a good look on anyone =(

SpaceLifeForm January 18, 2023 11:44 PM

@ Clive, lurker, JonKnowsNothing, ALL

re: 45’s

I remember 78’s. Barely.


Clive Robinson January 19, 2023 7:53 AM

@ JonKnowsNothing, lurker, ALL,

Re : Backbone security.

“security in code is helpful, even if it’s not Perfect Security. However, it is best to remember that even if you could get Perfect Security in CODE, the internet is Not Secure and Not Securable.”

The reason it’s not secure is simply that,

“We want it to work now and in the future”

Thus the “Internet Protocol”(IP) has minimal functionality at the physical layers and only a little just above, with no reliability considered at all. So it will even work with carrier pigeons (see IPoAC RFC 1149 and subsequent protocols[1]).

IP basically works on a “Fire and forget” basis. To do this it has to have,

1, Destination information.
2, Source information.
3, Sequence Order information.
4, Transit Time information.

As a minimum. Which means IP strips all basic communications privacy away in order to work (though not message privacy, other than length).
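
The four items above can be made concrete with a short sketch of what a passive tap on the wire can read from an IPv4 header (field layout per RFC 791) without any key, even when the payload itself is encrypted. The addresses are illustrative only:

```python
# Sketch: the routing metadata a minimal 20-byte IPv4 header exposes
# to any on-path observer, even when the payload is encrypted.
import socket
import struct

def build_ipv4_header(src: str, dst: str, ttl: int = 64, ident: int = 1) -> bytes:
    """Pack a minimal IPv4 header (checksum left as 0 for brevity)."""
    version_ihl = (4 << 4) | 5          # IPv4, 5 * 4 = 20-byte header
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, 20, ident, 0, ttl, 6, 0,
        socket.inet_aton(src), socket.inet_aton(dst),
    )

def observed_metadata(header: bytes) -> dict:
    """What a passive backbone tap reads in the clear."""
    fields = struct.unpack("!BBHHHBBH4s4s", header[:20])
    return {
        "dst": socket.inet_ntoa(fields[9]),   # 1. destination information
        "src": socket.inet_ntoa(fields[8]),   # 2. source information
        "ident": fields[3],                   # 3. orders/correlates fragments
        "ttl": fields[5],                     # 4. hints at transit distance
    }

hdr = build_ipv4_header("192.0.2.1", "198.51.100.7")
print(observed_metadata(hdr))
```

Nothing here needs to be decrypted: the observer simply unpacks bytes that the protocol requires to be in plain view for routers to do their job.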

To increase reliability by “re-transmit”, the “Transmission Control Protocol” adds packet acknowledgements and other mechanisms. Without going into details, these allow all sorts of “attacks” that in turn can be further used to reduce privacy and security.

Onion Routing can in theory stop all of these information leaks by reducing each “packet message” communication to just a single link or hop, moving the end-to-end routing information into the message where it can be encrypted.
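
As a rough illustration of that layered idea (not Tor’s actual protocol; the XOR “cipher” is a stand-in for real per-hop encryption, and the relay names are made up), each relay peels exactly one layer and learns only the next hop, never the full source-to-destination path:

```python
# Toy sketch of onion routing's nested encryption.
import hashlib
import json

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Placeholder cipher: XOR with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

def wrap(message: bytes, route: list) -> tuple:
    """Build the onion innermost-layer first; route is in forwarding order."""
    layer, next_hop = message, "destination"
    for name, key in reversed(route):
        plain = json.dumps({"next": next_hop, "inner": layer.hex()}).encode()
        layer, next_hop = toy_encrypt(key, plain), name
    return next_hop, layer      # (first relay, outer onion)

def peel(key: bytes, onion: bytes) -> tuple:
    """One relay's view: the next hop and the still-encrypted remainder."""
    obj = json.loads(toy_decrypt(key, onion))
    return obj["next"], bytes.fromhex(obj["inner"])

route = [("relayA", b"keyA"), ("relayB", b"keyB"), ("relayC", b"keyC")]
hop, onion = wrap(b"hello Bob", route)
for name, key in route:        # each relay peels only its own layer
    hop, onion = peel(key, onion)
print(hop, onion)
```

Each relay sees only its immediate neighbours; only the last one learns the destination, and none of them sees both ends at once, which is precisely the routing-metadata leak the plain IP header cannot avoid.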

However, Onion Routing is still “too simple” and has no temporal diffusion and confusion mechanism, thus it can be “watched” from end to end with high probability by an adversary with sufficient resources. Which is why Tor fails in so many ways, as it was not designed to defend against them.

There are other protocols that do defend against the temporal issues and I’ve mentioned what they need to do in the past.

The problem people forget is that neither privacy nor security are “inherent properties” in nature. So at a basic level “nothing is inherently secure”. Thus, as with a “lock box” or “safe”, you have to “build” security and privacy “by design” out of basic components that lack them.

Can the current IP network be changed to give more privacy and security?

Yes, without a doubt.

Will it always be private and secure?

No, for various reasons such as “resources” and “methods”.

Can we keep a sensible level of privacy and security for sufficient time to make its loss, on a case by case basis, effectively meaningless?

Yes and no, the problem is “new methods”. If our design includes fully deterministic elements, as invariably it must, then new methods to determine them have a probability of being discovered. This is the issue we are currently seeing with current key negotiation protocols. It’s not that they were thought to be secure; we know they are not by simple deductive logic[2]. The problem was that the required “resource needs” to break the security were based on certain assumptions. Due to a “potential new method”, Quantum Computing, those key negotiation protocols are now seen as very vulnerable.
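
A toy Diffie-Hellman exchange makes the point. The numbers below are deliberately tiny (real deployments use 2048-bit-plus moduli), and plain brute force stands in for a “new method” such as Shor’s algorithm: the negotiated key is only as safe as the assumption that recovering the exponent is infeasible.

```python
# Toy Diffie-Hellman: security rests on the assumed hardness of the
# discrete log. A "new method" (here, brute force over a tiny group)
# collapses that assumption entirely.
P, G = 2087, 5                     # toy public parameters (prime, generator)

def dh_public(secret: int) -> int:
    return pow(G, secret, P)

a, b = 777, 1234                   # Alice's and Bob's private exponents
A, B = dh_public(a), dh_public(b)  # exchanged in the clear
shared = pow(B, a, P)
assert shared == pow(A, b, P)      # both sides derive the same key

def break_dh(public: int) -> int:
    """Eve's 'new method': recover an exponent matching the public value."""
    for x in range(1, P):
        if pow(G, x, P) == public:
            return x
    raise ValueError("no discrete log found")

eve_key = pow(B, break_dh(A), P)   # Eve recovers the shared key
assert eve_key == shared
```

Scale the modulus up and the brute-force loop becomes infeasible, which is exactly the “resource needs” assumption; a quantum computer running Shor’s algorithm would remove it again.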

So we need to,

1, Find replacements.
2, Design the old out and the new in.

Of the two, whilst the first may prove impossible, I know the second is NOT going to happen for more reasons than mankind can solve in a thousand full moons (or about an expected human life time).

But is this actually any great surprise?

No, it’s only when something has “utility” that people start investigating it to make it more efficient in some way. Some of those new efficient methods are seen as desirable, others undesirable, but that is a matter of “Observer Point of View”, not the technology or science, which have no inherent morality or ethics properties.

Does this process worry me? Actually not really; the key negotiation protocols were, security wise, always a bad idea, and we know there are other ways to get to similar objectives more securely, but they have other issues such as those involving anonymity.

And that is the real issue: everything of even moderate complexity has not just technical but human trade-offs, and always has done.

[1] RFC 1149 was released on April 1 1990, titled “Internet Protocol over Avian Carriers”(IPoAC). Whilst done as a joke, it does work. Because to be funny in a tech crowd it had to actually be able to work in “reasonable” theory, and it has subsequently been shown to work in practice in a limited trial.

I still eagerly await “IP over Earth Worms”(IPoEW) for its potential tunneling protocols.

[2] The problem is that ultimately all communications information security falls to a “root of trust” that can only be known by the two communicating parties (Alice and Bob), and must not be known or determined by a third party (Eve etc). The problem behind the root of trust is communicating it between the two parties (Alice and Bob). It can easily be shown that using a channel protected only by information security falls to “Turtles all the way down”. Which is a problem for anonymous contact communications, which something like 9/10ths or more of Internet communications effectively are.
