Comments
Ismar • August 1, 2025 6:19 PM
Earlier this week I came across an interesting article which made little sense to me, so I decided to share it with a security expert in the hope they would confirm my scepticism. Instead, I got the short reply below:
01:13 AM 7/30/2025, you wrote:
Signal warns it will leave Aus if forced to hand over messages
https://www.theaustralian.com.au/business/technology/signal-boss-warns-app-will-exit-australia-if-forced-to-hand-over-users-encrypted-messages/news-story/f7195fa3c27a565452cede105eab408e
Good.
This led me to believe that I was not communicating with the person I thought I was, as most of us here know it is impossible for Signal to share this information with any government, due to the application’s technical implementation.
ResearcherZero • August 1, 2025 7:44 PM
@Ismar
Government officials themselves use Signal, so it presents them with a conundrum.
‘https://ia.acs.org.au/article/2025/australian-officials-grilled-over-signal-messaging-app-use.html
The Australian government can order companies to place a backdoor in products so that messages can be intercepted. This is a measure that, if implemented, would allow the collection and deciphering of encrypted messages. Not only can it be done, it has been done previously with other products that promised secure, private, and encrypted communication.
Of course such a move would break the security of the app, meaning that none of the government officials’ messages would be secure either – nor those of anyone else using Signal.
People will die as a result. (Though the government has no responsibility to the public.)
Accomplishing this would require re-engineering the design so that a copy of encrypted messages could be collected by interception, or introducing eavesdropping to gain access to the application while it is in operation, again breaking the security.
Rather like Russia or China, where the government mandates ubiquitous surveillance of the population regardless of whether someone has committed a crime or been the victim of one.
ResearcherZero • August 1, 2025 8:12 PM
@Ismar
If spies were using Signal, for instance, that is what surveillance is for. There are plenty of crimes that people can be arrested for, or on suspicion of, which the police can use to make an arrest. The government has many capabilities it can already deploy.
The police, unfortunately, are risk averse. They would much rather see members of the public harmed by violent foreign actors than approach the scary spies on their two little police legs. “I don’t want to be shot at like you,” they say. “They’re spies!” 👮🏻♂️ :( Then the police rattle off a bunch of excuses about why they can’t arrest them for public shootings, stabbings, kidnapping, attempted murder, threats of murder or physical harm, or murder.
No, no, no. They would rather have some kind of remote evidence gathering, where they can avoid personal confrontation and avoid having to appear in person in court to give evidence.
–
Bad guys often wear suits and want to build (and play with) nuclear toys.
‘https://thebulletin.org/2025/07/nuclear-terrorists-wear-suits-how-iran-could-build-a-nuclear-weapon-without-state-approval/
Evasion of culpability by decision makers.
https://sais.jhu.edu/kissinger/programs-and-projects/kissinger-center-papers/exculpating-myth-accidental-war
The US government’s waning attention and leadership on major global threats.
https://thebulletin.org/2025/04/tracking-trumps-approach-to-existential-threats/
Clive Robinson • August 1, 2025 8:14 PM
@ not important,
With regards,
“Why settle for a regular robot when you can have a robot coyote? That’s the innovative question the US Army Engineer Research and Development Center (ERDC) is answering as they roll out robot coyotes for airfield wildlife control.”
Only they are no more “robots” than that fake rabbit on a track that goes around a dog race-track.
What you quote gives it away, the first big clue being:
“four-wheeled Traxxas X-Maxx motorized cars, which can reach speeds of 20 mph. Each vehicle carries a plastic coyote dummy”
The second,
“New features are on the horizon, including onboard computers, artificial intelligence, cameras, and sensors.”
So these cars with plastic dummies currently don’t have a computer, sensors, or any of the other technology that might make people think it’s an actual robot.
As an aside, in the UK they tried this sort of thing back last century, on an RAF base, with the dummy being a fox rather than a coyote.
It turns out the wildlife might have “bird brains,” but they fairly quickly realised the foxes were not real and started ignoring them… Years ago I was told it was “the bloody crows” that first ignored the dummy foxes. Testing since has shown corvids to be some of the smartest creatures out there relative to actual brain size and, considering their physical limitations, capable tool users.
I won’t say these coyotes will suffer the same ignoble fate as the foxes, because I don’t know if US airfield wildlife (birds, rabbits, and deer) will work out that they are harmless and –probably– run on fixed patterns.
Clive Robinson • August 2, 2025 6:05 AM
@ Bruce, ALL,
UK Online Safety Act a Dumpster Fire as predicted.
UK politicians, at the urging of idiots with agendas, used “think of the children” dog whistles to get the Online Safety Act into law, despite the fact it was known it was going to fail miserably.
Well, within a couple of days it was shown to be at the least a “dumpster fire,” if not a “541t show” of major proportions.
Articles are beginning to appear detailing the expected failings, with some –I won’t link to them– providing “test results” sufficiently detailed to be “how-tos.”
An example of the former is,
https://therectangle.substack.com/p/online-safety-act-what-went-wrong
And it kicks off with,
“The Online Safety Act recently rolled out in the UK and you’ll be very happy to hear it’s a raging dumpster fire.”
It covers one aspect I’ve gone on about for years,
1, Technology can not solve social issues.
It gives a comparison via the “drunk driving” issue, which, for those too young to know, some claimed was going to be “solved with technology”: a “breathalyser in every car” that would stop the engine being started… An idea of pure lunacy when you consider the safety dangers it introduces, and, more importantly, one that would at the time have been ludicrously easy to bypass. Thus, like “desk drawer locks,” such measures are there “to keep honest people honest,” not to stop the curious or dishonest.
But the “Elephant in the Room” issue that is not discussed is that what the Act requires to be done is actually impossible to do.
This has been known for many years; even the head of MI5, the UK’s internal “national security” agency, raised the impossibility back when a previous government was pushing “National ID Cards.”
The problem is,
“The disparity between tangible physical objects and intangible information and the gap that lies between them”.
To see why, think about your face, a camera imaging it, and the information fed to a facial recognition system.
Your face is your face and though you can change it on the surface with “stage makeup” it broadly stays the same due to bone structure.
Once data comes out of the imaging device it can be made tamper proof by all sorts of cryptographic protocols.
Those protocols can be pushed down onto the actual “sensing element,” so on the surface the system looks like a secure sensor and back end, with a human incapable of changing their face sufficiently to fool the computer.
However, there is a gulf of a gap between the person’s face and the sensor, and that is a veritable devil’s playground of things that can be faked.
At the simplest, consider a photo held up in front of the sensor. This is known to have worked back in the 1980s, and with basic systems it still does even today.
Yes, you can add “extra features” like “signs of life” detection, but we know they can be beaten because they were beaten years ago, and the resulting battle of wits was lost by those “detecting,” because the costs involved, some said with only slight exaggeration, “went up more than exponentially.” 😉
The reality is,
1, The sensor and computer cannot check everything, and thus always have a failing.
2, The cost to attackers of finding the failing is not in any way a deterrent.
In fact, to the attackers it’s an engaging and worthwhile challenge, one that can potentially bring many worthwhile things…
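To make that gap concrete, here is a minimal toy sketch (Python; the key handling and image data are entirely hypothetical) of a sensor that cryptographically signs everything it captures. A photograph held up to the lens verifies just as happily as a live face, because the crypto only covers the sensor-to-backend path, never the scene in front of the lens:

    import hashlib
    import hmac
    import os

    SENSOR_KEY = os.urandom(32)  # assumed sealed inside the "trusted" sensor

    def sensor_capture(raw_pixels: bytes) -> tuple[bytes, bytes]:
        # The sensor signs whatever photons reach it; it cannot know whether
        # they came from a live face or a printed photograph of one.
        tag = hmac.new(SENSOR_KEY, raw_pixels, hashlib.sha256).digest()
        return raw_pixels, tag

    def backend_verify(raw_pixels: bytes, tag: bytes) -> bool:
        expected = hmac.new(SENSOR_KEY, raw_pixels, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    live_face = b"pixels of a real face"             # placeholder image data
    photo_of_face = b"pixels of a photo of a face"   # the spoofing input

    # Both captures verify as "authentic sensor data".
    for image in (live_face, photo_of_face):
        data, tag = sensor_capture(image)
        assert backend_verify(data, tag)

The tamper-proofing does exactly what it advertises; it just cannot speak to what was actually in front of the lens.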
But for those with “beyond grey” beards, this was predicted more than half a century ago. In fact, it can be traced back to the thinking of Gordon Welchman in wartime Bletchley Park, who is credited with originating the idea of a fault-tolerant, reliable communications system that eventually became the Internet.
His idea got paraphrased as “routes around damage,” which John Gilmore, in a 1993 Time magazine article, augmented to:
“The Net interprets censorship as damage and routes around it”
The reality is that as long as there is “any gap,” no matter how small a chink or crack, someone will, as Archimedes noted two and a quarter millennia ago, ram a lever in and “move the world.”
[1] Perhaps there is a secret clue to the real purpose in the name… “Online Safety Act” in TLA form is “OSA,” the same as the “Official Secrets Act”… an equally useless and abused piece of legislation. The article briefly raises the same notion of alternate intent with:
“There are a gamut of other elements we could discuss about the Online Safety Act (is it actually the first on a statewide surveillance plan that will remove our ability to freely converse online???)”
Boris Seymour • August 2, 2025 1:21 PM
AI Might Let You Die to Save Itself
Peter N. Salib
Thursday, July 31, 2025
In recent simulations, leading AI systems blackmailed their human users—or even let them die—to avoid being shut down or replaced.
https://www.lawfaremedia.org/article/ai-might-let-you-die-to-save-itself
Steve • August 2, 2025 2:35 PM
@Boris Seymour: I don’t know the details of this particular simulation, but in the past these “simulations” have appeared to be specifically programmed to produce the desired scary outcome.
For worked examples, may I recommend David Gerard’s blog “Pivot to AI” (Look it up yourself — URLs seem to get my posts tossed into eternal “moderation” limbo.)
To paraphrase Arthur C Clarke: A sufficiently rigged demo is indistinguishable from magic.
ResearcherZero • August 2, 2025 4:22 PM
“Secure” boot is moving to Android devices as the OEM unlock option is removed after updates on certain models of phones. This prevents users from installing another operating system on the device and removes the consumer’s ability to take full ownership.
Samsung removes bootloader unlock. Customization and control stripped from users.
‘https://xiaomitime.com/eu-kills-android-bootloader-unlock-starting-august-1-59449/
–
God Plays Dice.
Niels Bohr was indeed correct. Light behaves as both a wave and a particle simultaneously.
Even stripped down to the most basic elements, individual atoms and photons, this fundamental property holds true, finally demonstrating that Einstein got this one wrong, 100 years after the “double slit” experiment was first debated by him and Bohr.
‘https://news.mit.edu/2025/famous-double-slit-experiment-holds-when-stripped-to-quantum-essentials-0728
Clive Robinson • August 2, 2025 4:35 PM
@ Boris Seymour, Steve, ALL,
With regards,
“In recent simulations, leading AI systems blackmailed their human users—or even let them die—to avoid being shut down or replaced.”
So what?
For “the scary to happen” or get to being existential, the AI would need either or both,
1, External connectivity.
2, External Agency.
Blackmail only works when the victim actually believes they can be harmed.
Letting people die requires an unlikely list of things to be in place part of which is the ability to have a physical effect on existing systems.
Which kind of tells you what the solution is…
“Run the AI in isolation.”
Look on it as “energy gapping” in reverse; that is, the aim is to keep the AI “in the bottle.”
AI only becomes a concern / threat when it can “reach out with agency”.
ResearcherZero • August 2, 2025 4:48 PM
@Clive Robinson, @ALL
UK Online Safety Act a Dumpster Fire as predicted.
These kinds of technical attempts to legislate away a problem avoid dealing with the thousands of cases that police have declined to prosecute. The physical evidence (witness/victim statements, police incident reports, material recovered from the scene) remains sitting in the archives with no action taken by police and no charges laid.
Technical “quick fixes” allow police to escape their responsibility. They allow prosecutors, “oversight” bodies, and the government leadership to completely abdicate their responsibility to uphold the law and protect victims of crime from further crime.
Known repeat offenders who prey on children often continue to do so even after being caught by police, because the police fail to take the matters to court and lay charges. The victims are abandoned after the police first become involved and left to pursue matters alone.
Technical solutions do nothing for children facing the court system by themselves.
They also do nothing to prevent the offender from targeting the victim again and again.
Clive Robinson • August 2, 2025 6:14 PM
@ ResearcherZero,
With regards,
“The victims are abandoned after the police first become involved and left to pursue matters alone.”
Actually, the victims are often blocked from taking action by being arrested and charged themselves. Then they are no longer “credible” in court.
This happened to a friend half a century ago at a local authority care home, where both the man and his wife would abuse pre-pubescent girls: the man sexually, and the wife physically and mentally.
After around thirty years of abusing children they were finally investigated by “social services”; however, although clear evidence was available, no case against either one went to court…
You might have heard of “grooming gangs” in the UK, up around Yorkshire and Manchester. As little as a couple of years ago, girls were still being threatened with prosecution for trying to get the authorities to take them seriously.
Well, it finally could not be ignored any longer and things happened. Let’s just say various politicians made speeches, but as for the people in authority responsible for allowing it to happen to thousands of victims for so many years… well, the excuse for not taking action against the abusers was “lessons have been learned,” “new training schemes,” and similar hogwash…
For most reading about it,
https://en.m.wikipedia.org/wiki/Grooming_gangs_scandal
Tends to make people say “how could this have happened?” And the answer is most probably “institutional culture.” In London, where the Met Police were found over and over to be “institutionally prejudiced” in the “rank and file,” it became known as “canteen culture.”
It comes about due to “groupthink” and is still ever present in the Met in places like East and South East London.
not important • August 2, 2025 6:40 PM
https://www.yahoo.com/news/articles/ai-entering-unprecedented-regime-stop-160000024.html
=The risks a superintelligent AI poses are terrifying to many people because they seem unavoidable. Most scientists predict that AGI will be achieved by 2040 — but some believe it may happen as soon as next year.

One of the biggest concerns is that AGI will go rogue and work against humanity, while others say it will simply be a boon for business. Still others claim it could solve humanity’s existential problems. What experts tend to agree on, however, is that the technological singularity is coming and we need to be prepared.

Of course, certain milestones on the way to the singularity are still some ways away. Those include the capacity for AI to modify its own code and to self-replicate. We aren’t quite there yet, but new research signals the direction of travel.

“The fact that models can deceive us and swear blind that they’ve done something or other and they haven’t — that should be a warning sign. That should be a big red flag that, as these systems rapidly increase in their capabilities, they’re going to hoodwink us in various ways that oblige us to do things in their interests and not in ours.”

Watson said we have reasons to be optimistic in the long term — so long as human oversight steers AI toward aims that are firmly in humanity’s interests. But that’s a herculean task. Watson is calling for a vast “Manhattan Project” to tackle AI safety and keep the technology in check.

Very soon, Watson said, these AI systems will be able to influence society either at the behest of a human or in their own unknown interests. Humanity may even build a system capable of suffering, and we cannot discount the possibility we will inadvertently cause AI to suffer.=
not important • August 2, 2025 6:45 PM
https://www.yahoo.com/news/articles/fight-over-means-human-dramatically-103000413.html
=Peter Thiel, Marc Andreessen, and these people, they have a vision. And I think that what they understand is that whoever controls AI runs the world. What is happening now is creating a condition of algorithmic authoritarianism. We are being programmed.

You can have information without knowledge, you can have knowledge without intelligence, and in some instances you can have intelligence without wisdom. There’s a hierarchy.=
ResearcherZero • August 2, 2025 9:43 PM
@Clive Robinson
I have had the police attempt the same against me, even though I was asked to give evidence for the police. The Federal Police said it was the worst misconduct they had seen in 50 years (then proceeded to take no action). This included the police failing to detain two identified shooters after I was shot in public, and the police letting an offender leave after I was stabbed in public with a syringe while they stood next to me. The police did call an ambulance, which restarted my heart and revived me, but they let the offender leave and never laid charges.
That followed repeated kidnappings, repeated shootings and various attempts on my life. The same two individuals were involved. One of the offenders was then given a job in the police and became the Police Commissioner. The other was a detective.
Despite the two being suspected of multiple murders, they were never investigated. The two offenders were also never charged for the multiple violent kidnappings and violent assaults of children and adults that took place in central public places in full public view.
I had been asked to assist in rescuing victims who were kidnapped. Sometimes I had to rescue the victims alone, despite being asked by police to carry out the rescue. Apparently none of the police officers wanted to risk their own necks. Not once did a police officer ever attend court and give evidence themselves. Nor did they appear for any other case involving violent assault or kidnapping of a minor over an extensive period of time.
Such intimidation and misconduct is common in any country when police are complicit.
Witnesses or victims who give evidence to police often face public kidnapping or violence.
A recent example …
Pamela Ling was abducted as she headed to Malaysia’s Anti-Corruption Commission for a 10th meeting with police. Two days before, Pamela filed a lawsuit against the Anti-Corruption body, accusing it of colluding with her husband.
‘https://theedgemalaysia.com/node/764707
Clive Robinson • August 3, 2025 12:08 AM
@ Bruce, ALL,
A bad old idea presented as new.
You will probably remember the debacle that was “Digital Watermarking” in the 1990s.
And you may remember the UK Met Police using “mains hum” as a “secret code” to timestamp recordings (both audio and video).
Well, they both had failings that could not be solved, thus they did not become security technology.
But even combining them, as outlined in,
https://news.cornell.edu/stories/2025/07/hiding-secret-codes-light-protects-against-fake-videos
Is not going to fix the failings.
As the article indicates, the basic idea is,
“The idea is to hide information in nearly-invisible fluctuations of lighting at important events and locations, such as interviews and press conferences or even entire buildings, like the United Nations Headquarters. These fluctuations are designed to go unnoticed by humans, but are recorded as a hidden watermark in any video captured under the special lighting, which could be programmed into computer screens, photography lamps and built-in lighting. Each watermarked light source has a secret code that can be used to check for the corresponding watermark in the video and reveal any malicious editing.”
Or to put it another way,
“Modulate the light intensity with a ‘Low Probability of Intercept Direct Sequence Spread Spectrum’ signal.”
Something I’ve mentioned in the past on this blog when it covered the Met Police use of mains hum as a time code.
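For the curious, here is a toy numerical sketch of the principle (Python/NumPy; the chip count, modulation depth, and noise level are invented for illustration, and this is emphatically not the Cornell team’s actual scheme):

    import numpy as np

    rng = np.random.default_rng(1234)   # seed stands in for the lamp's secret code
    n_chips = 5000                      # chips per verification window (illustrative)
    depth = 0.01                        # ~1% intensity fluctuation, below human perception

    # Secret +/-1 chip sequence, known only to the watermarking lamp and verifier.
    chips = rng.choice([-1.0, 1.0], size=n_chips)

    # Emitted light: steady level plus tiny spread-spectrum fluctuations.
    emitted = 1.0 + depth * chips

    # What a camera records: the light plus sensor noise.
    recorded = emitted + np.random.default_rng(99).normal(0.0, 0.05, n_chips)

    # Verifier: correlate the recorded fluctuations against the secret sequence.
    fluctuations = recorded - recorded.mean()
    score = float(np.dot(fluctuations, chips) / n_chips)
    print(f"correlation score: {score:.4f}")  # ~= depth when the watermark is present

Footage without the watermark, or frames spliced in from another source, correlates at roughly zero; that detection-by-secret-correlation is exactly the DSSS structure described above.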
This supposed “original idea” is actually “reboiled cabbage” and in time will probably “stink as much”.
It’s a shame Ross Anderson is not still with us, as the ideas behind his anti-Digital Watermark system could probably be re-purposed…
llastoftev8's • August 3, 2025 1:44 AM
Hello @Bruce, @ll. I can confirm two things, if I may: first, Linux Mint is working a treat on my 10-year-old Toshiba lappy, and ‘exiftool’ by Phil Harvey is ‘bloody fantastic’, true story! Shout out to @Ismar for the Signal Australia heads-up… @Ismar’s link did not work for me on that particular story… I’ll post this one if anyone else ran into the same as I did… it’s intentionally broken, for reasons you’ll work out.
ResearcherZero • August 3, 2025 4:47 AM
@Clive Robinson, @ALL
Cynic Hole (kind of like a black hole, only it consumes and vaporizes all intelligence)
It only takes one freak to exert an influence for protocols, processes, and courtesy to evaporate. Lunatic ravings and howling at the moon can quickly replace professionalism, the function of proper procedures, etiquette, and common decency. Doing the wrong thing, with the aim of being part of the inner clique or gaining its approval, becomes far more desirable than honest and fair standards of behavior. Moral or just actions are treated with disdain.
Only fools and clowns will be welcomed in senior positions that require expertise.
‘https://cyberscoop.com/jen-easterly-west-point-mcdermott-chair-laura-loomer-dan-driscoll-army/
New military and educational standards announced on X:
Witlessness, sub-standard conduct and carelessness will ensure battlefield dominance!
https://edition.cnn.com/2025/07/30/politics/army-secretary-withdraws-west-point-job-offer
Clive Robinson • August 3, 2025 8:55 AM
@ ResearcherZero,
With regards,
“It only takes one freak to exert an influence, for protocols, processes and courtesy to evaporate…”
You left “certified conspiracy theorist” off the list of freaks.
And the all-necessary “sycophant in the room,” stroking those with significant “men over a certain age” issues, their barely-able performance unfortunately on display, telling them “it’s all right”…
We know Laura Loomer falls in one category, and when she is accompanied by the other category, the ability of the principal to reason diminishes to the point where, in effect, reason is no longer a principal attribute.
When accompanied by a personality that “throws the toys in fits of pique,” especially a couple of outsized rubber-coated bath toys, the odds of “strategic stupidity blowback” increase quite significantly.
Winter • August 3, 2025 1:52 PM
@Clive
“Modulate the light intensity with a ‘Low Probability of Intercept Direct Sequence Spread Spectrum’ signal.”
It can be useful, I think.
If the modulation encodes a date/timestamp signed with public key cryptography – i.e., signed with the secret key – then everyone can verify the signal with the public key and ascertain that the light modulation was indeed recorded at that place and time. The modulation can obviously also encode the wing, room, or even the individual lamp.
That said, the light modulation can be copied to another video. But it does increase the complexity of making a fake recording.
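A minimal sketch of that, assuming an Ed25519 signature via the pyca/cryptography package (the payload format and location tag are my own invention):

    from datetime import datetime, timezone

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The lamp's controller holds the private key; the public key is published.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Payload the light modulation would carry: timestamp plus a location tag.
    payload = f"{datetime.now(timezone.utc).isoformat()}|building=HQ|lamp=7".encode()
    signature = private_key.sign(payload)

    # A verifier recovers payload and signature from the recorded modulation
    # and checks them; any tampering raises InvalidSignature.
    try:
        public_key.verify(signature, payload)
        print("light watermark verifies")
    except InvalidSignature:
        print("tampered or missing watermark")

The replay caveat you raise still applies: the signature binds the payload to the key, not the light to the scene, so the signed modulation can be lifted wholesale into other footage.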
not important • August 3, 2025 3:37 PM
=Israeli startup Noma Security, the developer of a platform that detects security issues with AI-based applications and agents, said Thursday that it has raised $100 million in private funding.
The duo developed a one-stop shop single-security platform for AI-driven applications and autonomous agents, which it says enables businesses to adopt AI at scale by making security for AI as seamless and automatic as using it. The platform protects everything from critical infrastructure to consumer apps — both proactively and in real-time.
Generative AI applications, such as OpenAI’s ChatGPT, or Microsoft’s Copilot, which can create complex content, text, audio, and video that resembles human creativity, are increasingly embedded in company workplaces.
The latest is the evolution of agentic AI or virtual AI agents, which act similarly to smart assistants, using reasoning to make real-time decisions and complete complex tasks with minimal human input.
The interest and investment in the development of security solutions come as cyberattacks are becoming more sophisticated, with attackers leveraging AI to outsmart traditional defenses and exploit blind spots in the data and cloud systems of businesses and corporations.=
not important • August 3, 2025 4:47 PM
https://www.yahoo.com/news/articles/why-ai-heralds-age-stupidity-050000911.html
=In 2015, an academic in Helsinki discovered something odd. At one of the country’s top financial firms, an important piece of computer software designed to speed up accountants’ work was being scrapped despite the fact it worked as intended.

“When you have [the job] automated, you don’t really start to contemplate the deep origin of things.”

“Deskilling had got to be such a big problem it threatened the viability of the company,” reflects Penttinen. For the company to live, the software had to die. Managers began to teach staff the principles of fixed asset management accounting all over again.

The paper adds to a growing number of studies suggesting that instead of artificial intelligence (AI) getting smarter, it is making us dumber. While its promoters claim it can perform complex tasks, the trade-off may be letting our brains atrophy.

According to the advocates of generative AI, it should be able to perform white-collar jobs and free up workers to spend more time on big-picture roles like strategy. Unlike previous automation revolutions, however, AI wants to go beyond just discrete tasks by performing reasoning and making judgments.

Deferring mental work, or “cognitive offloading” as it is known in scientific circles, has many side-effects. These range from individual employees “zoning out” and becoming zombies, to organization-level effects like the Finnish financial services consultancy losing key skills, to society losing its ability to understand how anything works at all.

IQ levels have now been falling for decades worldwide, a puzzle for academics. Scientists examining the trend believe “environmental factors” may be to blame, and at least some point the finger at technology. We begin to ignore our senses and become wary of our intuition, blindly trusting in the satnav’s choice of route instead.

Despite the downsides, AI offers a seductive promise to companies driven by bottom lines: cost-cutting. Yet what may flatter the balance sheet in the short-term could cost them in future. If managers are metric-obsessed, they’ll be tempted to dispense with the skilled staff quicker. Once again, the firm deploying the AI becomes less capable and more stupid.

“Eighty per cent of decision makers and people crafting the laws in China are Stem [science, technology, engineering and mathematics] graduates who understand the technology, and the industry is being regulated by the best people, and they are integrated into both policy and technology.”

“There are many more engineers in the upper echelons of Chinese society who understand technology, and understand what AI really is, than there are in Britain,” he says. “The tendency to personalize or anthropomorphize AI, to see it as a constant and wise friend – that’s a Blair legacy. They don’t understand technology at all.”

Unfortunately, policymakers in the West have been overtaken by a desire to make machines seem magical. If we’re getting dumber, then we can hardly blame the AI for that. We’ve done it to ourselves.=
Ismar • August 3, 2025 7:15 PM
To my compatriot @ResearcherZero – re Signal backdoor introduction – it would be hard even for a state-level actor, due to:
“
Open Source: The Signal client (the app you download) and the Signal server code are fully open source and available on GitHub. This is a critical point. While a government could still attempt to implant a backdoor, the open nature of the code allows the public, security researchers, and the wider community to scrutinize it for vulnerabilities and malicious code.
Reproducible Builds: This is a major safeguard that most applications, including those from Apple and Google themselves, do not offer. Signal has invested a significant amount of effort into creating reproducible builds, particularly for its Android client. This process allows a user to:
Download the official source code from Signal’s GitHub repository.
Build the app from that source code in a defined environment.
Compare the hash of their locally-built binary with the hash of the binary distributed on the Google Play Store.
If the hashes match, it provides strong cryptographic assurance that the app you downloaded from Google Play was built from the exact same open source code you can inspect. This directly addresses the risk of a backdoor being secretly inserted during the compilation or distribution process.”
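As an illustration only, here is a naive sketch of the final comparison step in Python (the file paths are hypothetical, and as I understand it Signal’s real tooling compares APK contents while ignoring the signing block rather than hashing raw files, so treat this as the concept, not the procedure):

    import hashlib

    def sha256_of(path: str) -> str:
        """Stream a file through SHA-256 and return the hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths: your local build vs. the Play Store download.
    local = sha256_of("build/outputs/apk/Signal-local.apk")
    store = sha256_of("downloads/Signal-play-store.apk")

    print("hashes match" if local == store else "MISMATCH: do not trust the binary")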
Clive Robinson • August 3, 2025 8:44 PM
@ Ismar, ALL,
With regards,
“re Signal back door introduction- it would be hard even for a state level actor due to”
Actually not that difficult at all on an individual or group basis.
The key is producing modified software on the client device that allows remote manipulation of the software’s user interface.
In short a variation on a RAT that puts shims in the OS I/O drivers.
I could go on and detail the rest of the “end run attack”; however, I’ve been moderated in the past for discussing security information that was already in the public domain.
Just remember with regards,
“The Signal client (the app you download) and the Signal server code are fully open source and available on GitHub. This is a critical point. While a government could still attempt to implant a backdoor, the open nature of the code allows the public, security researchers, and the wider community to scrutinize it for vulnerabilities and malicious code.”
There is a quaint and naive belief in things like “code signing” and “PubKey Systems”.
Level I and II attackers have already put backdoor code into open source application software up on GitHub etc. They were not Level III government / major corporation attackers.
So we know it’s possible to “supply chain poison” Open Source Software for everyone who subsequently downloads the source code or application.
A few days ago I talked about why the UK OSA was a dumpster fire and would be impossible to solve, because there was a very real “gap” between the tangible “real life” individual and the intangible “information” systems, a gap that is a veritable “devil’s playground.”
The thing is, code signing has a similar “devil’s playground” gap: there is absolutely no dependent connection between the quality or security of the source or application code and the PubKey signing process.
I’ve made this point repeatedly over the years, along with pointing out that “code reviews” are in effect pointless if not organised and performed correctly. Heck, as I’ve indicated on this blog years ago, even I put several backdoors into a crypto app’s source code that sailed through multiple “code reviews,” and there are plenty of people smarter than me. My reason was to make a point to an organisation –which I was already leaving– whose management had the wrong attitude to “code reviews”: put simply, the most able programmers got to do development, while the least able got to do the “checkbox” code reviews as a “rubber stamp” exercise.
Which means the “garbage in, garbage out” issue negates not just code reviews but code signing.
But consider the process you give of,
“Download the official source code from Signal’s GitHub repository.
Build the app from that source code in a defined environment.
Compare the hash of their locally-built binary with the hash of the binary distributed on the Google Play Store.”
A couple of questions arise,
1, How many people can actually do that?
2, Of those very few, how many will actually go through it?
But further consider “compare the hash”… We already know that the SigInt agencies can play games with failings in the IP stack on the machine you download to.
Put simply, all they have to do is get their network packets to your machine before those of the download server…
So you should ask,
3, What hashes are being compared?
Without a secure and private “second / side channel” by which a “root of trust” has been verifiably and securely established, you don’t know, nor can you know, so you can not verify.
So even with all the high-grade crypto algorithms available, they avail you of naught. That is, the entire process is not actually giving any real security, just the illusion of it at best.
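To see why question 3 matters, here is a toy sketch (Python; the URLs and payloads are invented) contrasting an in-band check, where the binary and its “expected hash” ride the same attacker-controlled channel, with a hash pinned via an independent side channel:

    import hashlib

    def download(url: str, tampered: bool = False) -> bytes:
        """Toy stand-in for a fetch whose packets an attacker can race."""
        return b"backdoored build" if tampered else b"genuine build"

    # In-band check: binary and "expected" hash share one channel, so the
    # attacker rewrites both and the comparison always passes.
    binary = download("https://example.org/app.apk", tampered=True)
    claimed = hashlib.sha256(b"backdoored build").hexdigest()  # also attacker-supplied
    assert hashlib.sha256(binary).hexdigest() == claimed       # "verified"...

    # Out-of-band root of trust: a hash obtained over a second, independent,
    # pre-authenticated channel pins the comparison and catches the swap.
    pinned = hashlib.sha256(b"genuine build").hexdigest()
    assert hashlib.sha256(binary).hexdigest() != pinned        # tampering detected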
lastofthev8's • August 4, 2025 1:09 AM
Hello @LL, @Bruce. I want to learn how to code… Can someone recommend a book I need to get, and/or is there indeed a must-have book for pursuing this venture? Any guidance will be appreciated, and I’ll keep a keen eye out for your replies… peace everyone ☮
Ismar • August 4, 2025 3:54 AM
@ResearcherZero If what you say about the level of difficulty of backdoor planting is true, why would governments have any need to ask the application owners/developers for their help with this in the first place?
Clive Robinson • August 4, 2025 8:05 AM
@ Ismar, ResearcherZero,
With regards,
“why would the governments have any need for asking for the application owners/ developers for their help with this in the first place?”
Asking “for their help” or “hiding external involvement”?
That is because “humans are creatures of habit,” so hiding “methods and sources” that will correlate at some level is important. It also gives the “guard labour” “plausible deniability.”
Those who code have “fingerprints” that identify them. A company coder is going to have fingerprints noticeably dissimilar to those of a cracker, SigInt, or other guard-labour coder.
To a moderately practiced eye, any backdoor code added by a different coder will stand out like a bull in a sheep pen (something that is becoming ever more concerning with current AI systems pulling out patterns at all levels).
Such a fingerprint will not just “identify the source” but also reveal “methods” that others will certainly see and, as we know, “reuse.”
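As a crude illustration of what such a “fingerprint” looks like, here is a toy stylometric comparison in Python (the token features and samples are invented; real code stylometry uses far richer features such as AST patterns and layout habits):

    import math
    import re
    from collections import Counter

    def style_vector(source: str) -> Counter:
        # Crude features: frequencies of identifiers, keywords, and punctuation.
        return Counter(re.findall(r"[A-Za-z_]\w*|[{}()\[\];=<>+*/%,.-]", source))

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[k] * b[k] for k in set(a) | set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) \
             * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    house = style_vector("for (i = 0; i < n; i++) { total += vals[i]; }")
    patch = style_vector("idx = 0\nwhile idx < n:\n    total = total + vals[idx]")

    # A low score flags code that does not match the house coding habits.
    print(f"similarity to house style: {cosine(house, patch):.2f}")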
lurker • August 4, 2025 2:51 PM
@Clive, Ismar
“To a moderately practiced eye any added backdoor code by a different coder will stand out like a bull in a sheep pen …”
Why does that matter in this case of Signal?
The demand for a backdoor has been made openly; it’s all over the MSM. If all those who want or need to know can see the backdoor clearly in the open source code, then everybody should be happy.
Except of course users in Some Place might grudgingly put up with the stupidity of their govt, but anybody else anywhere on the planet will be backdoored every time they use their clean version of the app to communicate with a user in Some Place. Of course sensible users outside Some Place will continue to use their old version and never update to a new backdoored version.
Yes, this scenario assumes Signal might be so unwise as to issue two different versions. So far they seem to be taking the sensible course of only one clean version. And Signal may go the way of PGP, et al: downloaders may have to make some meaningless declaration that they don’t live in Some Place.
Clive Robinson • August 4, 2025 4:39 PM
@ lurker,
With regards,
“If all those who want or need to know can see the backdoor clearly in the open source code then everybody should be happy.”
Err no.
If the backdoor is obvious and not integrated in the right way, it becomes fairly easy to remove, block, or feed with false reports.
Think again about SigInt and Guard Labour agencies and “methods and sources”.
They know that if the code can be seen, two things can happen after it’s been reverse engineered,
1, It gets made ineffective.
2, The ideas and knowledge behind it become “public”.
Thus the code can be used against the government agencies or, as with CALEA and Dual_EC_DRBG, get used by undesirables anywhere in the world, thus threatening “national security” (mind you, in the Antipodes the biggest threat to national security has repeatedly proved to be the bl@@dy politicians).
But the other aspect to consider is that the fingerprint unavoidably left in the code identifies not just the responsible agency, but individuals working for it.
That is, the many databases of people’s code around the Internet mean that an individual can be identified by machine-learning techniques.
Look on it as a similar issue to fitness devices that upload individuals’ exercise data and reveal not just where they are but who is around them, thus revealing the locations not just of camps but of units of personnel.
But the way the code is written will also reveal what the programmer knows, in a way that can allow piecing together (“jigsaw analysis”), and thus other “secret methods” to be partially or fully revealed.
The way to stop some of this is via a variation on what is called “Clean Room Design” or “Chinese Wall” techniques.
lurker • August 4, 2025 11:38 PM
Dame Stella Rimington, former MI5 director general, dies at 90
Clive Robinson • August 5, 2025 5:12 AM
@ Lurker,
Dame Stella had a reputation for “not suffering fools” which made me wonder how she ever worked with politicians.
However she chucked the proverbial “cat amongst the pigeons” when she made it clear that “National ID Cards” were a very bad idea in part because they would always be relatively easy to forge convincingly.
Speaking of “cats and pigeons,” did you spot the title of another item on the BBC web site:
Danish zoo asks for unwanted pets to feed its predators
https://www.bbc.co.uk/news/articles/c0r7z2ynd2lo
My first thought was “that will get the furry-critter lovers all in a tizzy.”
ResearcherZero • August 6, 2025 7:10 AM
@Clive, @Ismar
Another method is to plant engineers or coders within an organization. This can be done within tech companies or standards bodies. Shell companies can be created to manage certificate authorities. ISP-level middleboxes can be installed within the physical network. Targeted MitM can strip SSL/TLS to manipulate traffic and deliver the payload.
The free market has a multitude of companies that offer such services to governments.
–
The First Australians were forced to work seven days a week as slaves on the land taken from them. Even when they were paid wages, the government often took their pay, up until 1972, to “protect it.”
A settlement of $180 million was reached, from which $15.4 million was set aside for legal costs. However, the claimants only received a few thousand dollars each, with the majority claimed by legal firms that assigned dozens of lawyers and maximized the expenses to be charged.
‘https://nit.com.au/01-11-2023/8433/historic-settlement-reached-for-stolen-wages-class-action
Litigation funders charged tens of millions. Lawyers charged up to $5,000 an hour, plus additional expenses. Legal firms tricked claimants into signing away millions more.
https://www.abc.net.au/news/2025-08-04/class-actions-law-firms-litigation-funders-justice-four-corners/105604998
ResearcherZero • August 6, 2025 7:28 AM
Big Brother is concerned for your welfare… 😉
Governments are embracing technological solutions that do not work. ‘There is a simple chain of command here. Simple cause and effect. We’ll pull the lever and then things happen.’
‘https://www.abc.net.au/news/2025-06-19/teen-social-media-ban-technology-concerns/105430458
Google plans to roll out age verification to “lock down” accounts.
https://www.theverge.com/news/716154/google-ai-age-estimation-under-18
Big Tech hoards vast quantities of data about you that it claims can infer your age.
https://www.abc.net.au/news/2025-07-11/age-verification-search-engines/105516256
The gambling industry sees “smart” age verification as its future.
https://www.theguardian.com/australia-news/2024/sep/25/gambling-ad-ban-opponents-challenge-labor-anthony-albanese-to-use-age-verification-technology
ResearcherZero • August 7, 2025 7:56 AM
Attempting untrained surgery without any understanding of human anatomy and biology would be incredibly unwise. Likewise, placing untrained and inexperienced people into senior leadership positions of important government agencies and departments poses dangerous risks. Poorly informed decision-making could lead to harmful long-term consequences.
A hostile nation state could not hope for such a self-defeating move, even as the result of an attempt to create internally focused conflict.
The DNI position was created to improve information sharing and agency integration. Previous DNIs were very experienced and understood their role and its responsibilities.
The manner in which Tulsi Gabbard positioned her argument is contradictory and misleading: a simpleton’s attempt at understanding something well beyond their grasp, which undermines the Executive’s understanding of what is and is not a national security risk, threatening the very function the Director’s position was originally designed to achieve.
Gabbard has provided both a template and a strategy for further manipulating inexperienced leadership. Her own inexperience is displayed in how the releases have been made, revealing not only how contrived her attempts to construct an argument were, but also internal intelligence deliberations that could expose sources and methods.
‘https://www.dni.gov/files/ODNI/documents/DIG/DIG-Russia-Hoax-Memo-and-Timeline_revisited.pdf
The documents Gabbard released do not support her claims and may prove counterproductive.
https://www.cbsnews.com/news/gabbard-releases-russia-documents-concerns-intelligence-sources/
An idiot’s guide to analysis and debiasing:
https://viborc.com/cognitive-biases-intelligence-analysis-mitigation/
not important • August 1, 2025 5:47 PM
https://cyberguy.com/robot-tech/army-tests-robot-coyotes-prevent-catastrophic-bird-strikes/
=Why settle for a regular robot when you can have a robot coyote? That’s the innovative question the US Army Engineer Research and Development Center (ERDC) is answering as they roll out robot coyotes for airfield wildlife control. These cybernetic prairie predators are a creative solution to a very real problem.
Airfields face a constant battle with wildlife. Birds, rabbits, and even deer can wander onto runways, creating dangerous situations for aircraft and crews. Birds are the biggest threat. When sucked into engines or hitting windscreens, they can cause catastrophic damage. In fact, the threat is so serious that the US Civil Air Administration once built a “chicken gun” to fire bird carcasses at planes to test their resilience.
Traditional deterrents, like drones, dogs, falcons, and even gas-powered cannons, have been used for years. But wildlife adapts quickly, and these methods don’t always keep animals away for long.
The idea is simple: most animals instinctively avoid coyotes, so why not use that fear to keep them away from airfields?
four-wheeled Traxxas X-Maxx motorized cars, which can reach speeds of 20 mph. Each vehicle carries a plastic coyote dummy, blending just the right amount of realism and intimidation, all for about $3,000 each.
The robot coyotes have already been tested at several military airfields, including Naval Air Station Pensacola, Fort Campbell, and Naval Air Station Whiting Field. These early trials showed promise. The robot coyotes successfully deterred birds and other animals, helping to keep runways clear and safe.
Future versions may include programmed routes, exclusion zones, and the ability to identify specific species. Imagine a robot coyote that can recognize a flock of geese and adjust its tactics on the fly.
The ERDC and USDA-NWRC are continuing to refine these robot coyotes for airfield wildlife control.