Malware from Space

Since you don’t have enough to worry about, here’s a paper postulating that space aliens could send us malware capable of destroying humanity.

Abstract: A complex message from space may require the use of computers to display, analyze and understand. Such a message cannot be decontaminated with certainty, and technical risks remain which can pose an existential threat. Complex messages would need to be destroyed in the risk averse case.

I think we’re more likely to be enslaved by malicious AIs.

Posted on March 2, 2018 at 6:13 AM • 87 Comments

Comments

Victor Wagner March 2, 2018 6:36 AM

Many years ago I read a sci-fi novel in which evil aliens transmit to Earth instructions for building a malicious AI. And of course, people follow these instructions.

echo March 2, 2018 6:39 AM

This topic was mentioned in a previous comment but is worth another look. I agree dodgy AIs are a larger threat: they are home-grown, more readily realisable, and a superset of some of the issues raised by an extraterrestrial message.

Historians will have a better view of this question but how did contact between different terrestrial civilisations impact their stability?

I note the internationally agreed protocol is to inform scientists globally, which of course may mean traditional adversaries are read in; and the more people who know, the greater the risk of the information leaking prematurely.

There has been talk of the need for some AI research to be kept secret, to prevent the low-hanging fruit of the technology from being abused. Similar questions arise about kitchen-sink gene editing.

I wonder whether the issue raised by this paper is, to some degree, not only important in itself but also an exercise in getting people to consider the broader questions that affect politics, cooperation, and security in general.

migou March 2, 2018 6:54 AM

This sounds more like a (bad) movie plot to me. If an alien intelligence is capable of sending messages that can exploit security vulnerabilities on earth, aren’t we lost whatever we do?

A March 2, 2018 6:55 AM

Malicious intent would not be necessary for messages from an advanced civilization to pose an existential threat. They could simply share with us the answers to some hard problems, like prime factorization or discrete logarithms, and break the cryptosystems on which global communication and trade currently depend and which we aren't ready to replace. In fact, just a few breakthroughs that we could discover for ourselves, arriving before we are ready for their implications, could probably get us into quite a pickle.
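
To make that concrete, here is a minimal sketch (a toy example with textbook-sized numbers, not anything from the paper) of why being handed the factors of an RSA modulus is the same as being handed the private key:

```python
# Toy illustration: knowing the factors p and q of an RSA modulus n
# lets anyone recompute the private exponent d. Real keys use primes
# hundreds of digits long; these values are tiny for readability.

p, q = 61, 53            # the "gifted" factorization
n = p * q                # public modulus (3233)
e = 17                   # public exponent

phi = (p - 1) * (q - 1)  # only computable if you know p and q
d = pow(e, -1, phi)      # private exponent: inverse of e modulo phi (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the reconstructed key
assert recovered == message        # the "shared answer" breaks the system
```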

Peter A. March 2, 2018 7:04 AM

In this theme, I recommend the novel "His Master's Voice" by the Polish SF writer Stanisław Lem. The English translation is reportedly very good. The novel touches on the philosophical and existential problems of decoding a message from deep space encoded in a neutrino emission.

Andy March 2, 2018 7:10 AM

@echo

First contact between terrestrial civilizations has sometimes led to the more isolated one being exposed to devastating diseases carried by the allegedly more "civilized" visitors. Sometimes this was accidental (think STDs) and sometimes intentional (think blankets).

I was going to ask what the SETI equivalent of accidental contamination would be, and then I saw @A's comment.

migou March 2, 2018 7:11 AM

@A – This makes sense, thanks for the clarification. I had only thought about intentional actions.

echo March 2, 2018 7:35 AM

@A, @Andy

Yes. These are the kinds of questions I was thinking of. The problems of being culturally unprepared, and of well-meaning but accidental harm, are a worry. This kind of thing already exists terrestrially, and we seem very slow at managing it adequately.

Chris March 2, 2018 7:50 AM

Almost like millions of computing hours are being spent on bitcoin mining for the blockchain created by a mysterious outsider?

Nick Alcock March 2, 2018 8:37 AM

The ultimately hideous version of this (more or less an interstellar email trojan that eats everyone who reconstructs the organism encoded in its message alive from the inside out along with their biosphere, then mindlessly builds transmitters to retransmit the signal) is found in John Barnes’s Enrico Fermi and the Dead Cat, reproduced in Apostrophes and Apocalypses. It is a few pages long and is probably the most disturbing thing in a book which should come with a DISTURBING: DO NOT READ on the cover in big red letters. (But then, it kind of does: the author’s name. He’s never written anything that isn’t suicidally depressing.)

(It’s not actually shown as malware, but do you think massive transmitters are evolved? I don’t. This is someone’s weapon that got loose long ago. They were probably the first ones eaten.)

Setec Astronomy March 2, 2018 8:43 AM

You mean … the rest of you aren’t AIs ?

How about running a summarizer AI that outputs a digest of the comments on the blog, with continual updating as the comments roll in ?

Thane Walkup March 2, 2018 8:47 AM

See also Neal Stephenson's "Snow Crash"; David Brin also speculates on how messages from outer space could damage our society in "Existence".

David March 2, 2018 8:58 AM

Vernor Vinge's A Fire Upon the Deep and A Deepness in the Sky both feature this as a major plot point.

With powerful enough computer hardware, a message could be considered sentient, never mind malicious.

Mr. Bones' Wild Ride March 2, 2018 9:20 AM

This latest method of global annihilation is brought to you by the same great minds that brought you the fractally wrong "Bones".

K.S. March 2, 2018 9:41 AM

I agree with the many here who have voiced opinions on the dangers of a mere information dump. Imagine what a cheap, practical cure for aging would do to our society.

Another aspect: what about exploits in our wetware? Something like a visual-feed attack that includes a buffer overflow overwriting something in our mind's kernel space, leading to violent psychosis.

Sid March 2, 2018 10:00 AM

Fun to read, but does not explain the balancing act in the last two paragraphs. Risk = probability x outcome, so a very small chance of a terrible outcome is balanced against what?

phred14 March 2, 2018 10:28 AM

@Victor Wagner – In “Species” the malicious AI was encoded as DNA, and when expressed looked a lot like Natasha Henstridge.

@KS – “Snow Crash” was already mentioned by @Thane Walkup, and there was also the Destroyer in Piers Anthony’s “Macroscope”.

The simplest hack would be to hack us. Simply send a tantalizing glimpse of New Physics, something that could take us beyond the Standard Model. Understanding it would make our current knowledge obsolete, and it's very possible that some nation would begin WWIII simply to prevent some other nation from understanding that New Physics first.

A better hack might be to hand us a piece of horribly dangerous technology that is only local in its effects, and not possible to use for space travel. Kind of like giving a kid a hand grenade with no warning of danger.

Mailman March 2, 2018 10:49 AM

But the opposite is also true. If we get alien visitors, we can send them messages that contain a virus that will in turn disable their electromagnetic shields. Hell, they should make a movie about this.

adpov March 2, 2018 11:23 AM

Never going to happen. You can't design malware for an operating system unless you have a copy of that system to analyze. How is a space alien going to get a copy of Windows/iOS/Android to do that? Furthermore, how do you test your malware without a computer running on an Intel processor?

New Guy March 2, 2018 11:37 AM

@Nick Alcock: Just read that (you can find it online), and found myself thinking that it might not be so bad if we’re alone in the universe, after all.

randomly_rumpled March 2, 2018 12:23 PM

@K.S.

That’s pretty much the plot in “The Screwfly Solution”.

@phred14

“In “Species” the malicious AI was encoded as DNA, and when expressed looked a lot like Natasha Henstridge.” Complete with tan lines, I noticed.

echo March 2, 2018 12:28 PM

@adpov

The commonly mentioned Independence Day story assumes a conventional computer? It's a story. Variations of it, or other possible stories, might assume not fixed code but some level of AI or expert system with a theoretical exploit that could affect it. We might assume that the alien system would be designed carefully enough to counter exploits, but we assume the same of many of our own everyday "secure systems", and how often are they compromised? What if their design assumes the system is contained in a secure location only they could access? Consider how Snowden took advantage of such an assumption to exfiltrate confidential data.

Another issue is that the question of an alien system can be a proxy for the questions we are already asking about existing human systems.

Arthur C. Clarke wrote a few stories exploring the idea that humans were the baddies. One such story suggests the aliens are hiding. Another story, "Publicity Campaign", is about how a movie leads to fear of aliens, with inevitable consequences.

I don’t know! It’s all fun and amusing!!

wumpus March 2, 2018 12:51 PM

So aliens are supposed to locate a flaw, design an exploit, and deliver the payload. The problem is, there is the whole "speed of light" latency. An alien outpost on Alpha Centauri could do it, but that wouldn't explain why they hadn't already colonized Earth with technology like that. Go past a few local stars and you are talking decades of latency; even a bright star like Polaris sits several centuries away.

There are only 30 stars within 15 light years. Even then, we would be expecting DOS-based exploits, or perhaps the Morris worm.
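
As a back-of-the-envelope sketch of that latency problem (rough, commonly quoted distances; estimates for Polaris in particular vary considerably):

```python
# An attacker d light-years away sees our technology as it was d years
# ago, and any crafted payload takes another d years to arrive, so the
# exploit is aimed at systems at least 2*d years out of date.

stars_ly = {
    "Alpha Centauri": 4.4,
    "Sirius": 8.6,
    "Polaris (approx.)": 430.0,
}

for name, d in stars_ly.items():
    print(f"{name}: sees tech {d:.0f} years stale, "
          f"exploit targets systems ~{2 * d:.0f} years old on arrival")
```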

AI, now that could be a real issue, especially if it doesn't have the evolutionary safeguards against massacring your own tribe, or otherwise getting them massacred.

Mike C. March 2, 2018 1:53 PM

I’m sure malicious aliens have embedded themselves into the politics of many countries – cause that’s the only rational explanation.

John_AI March 2, 2018 2:16 PM

This is the ninth in a fascinating series of papers by Michael Hippke, all on issues around searching for and communicating with ET.

Imagine for a moment that your evil self wants to destroy a pre-atomic society, using only information. If it's the Romans, send them instructions for gunpowder, and the printing press. Roman society as it was in 200 AD wouldn't last a generation; the Roman Empire might grow or might be defeated, especially if you add a book like The Art of War or The Prince and somehow ensure it receives wide distribution.

Later societies could be disrupted with algebra, calculus, principles of physics, chemistry, radio and the special theory of relativity.

Once the disrupted society has destroyed itself with the weapons you handed it, the radioactivity will have died down after about 200 to 500 years, and you have yourself an unoccupied planet for the taking.

Jesse Thompson March 2, 2018 2:43 PM

Alright, in addition to all of the Snow Crash and Independence Day, I’d like to add Stargate SG1 S04E20 “Entity”.

As if the concept of an alien computer virus infecting all our systems weren't tripe enough, this episode takes it a step further by claiming that the alien incursion was only retaliatory: the MALP we sent through the Stargate to do basic reconnaissance (and more specifically the radio signals we use to communicate with it) wound up doing far more damage to the aliens' systems, in effect acting like an outbound virus.

Contrast MIB 1.

K: It’s a universal translator. We’re not even supposed to have it. I’ll tell you why. Human thought is so primitive it’s looked upon as an infectious disease in some of the better galaxies. That kind of makes you proud, doesn’t it?

But in other news, I just don’t know why the concept of digital Von Neumann probes would be so novel? Just combine the “But in space!” trope with “But in cyberspace!” and call it a wrap. 😛

Long_time_lurk March 2, 2018 2:56 PM

Considering that no one has control over all computers, of course malicious AI is being created and will be created.

Before that happens at scale, let's hope that someone has already created good AI to counter it.

Tatütata March 2, 2018 3:09 PM

This paper was accidentally released 7 weeks too early. The aliens mustov dunnit, or the physicists are working in the Alpha Centauri time zone.

[Aw, come on.]

The premise reeks of Euro-centric anthropomorphism, where the first reflex of explorers is to destroy any aliens as soon as they are met.

The travelers in Georges Méliès’ “Le voyage à la lune” do just that as they land on the moon.

Or take the “Enterprise” in Star Trek: it is as much equipped for war as for exploration.

Gerard van Vooren March 2, 2018 3:14 PM

Looking at all the sci-fi that has been cited here, I don't know what kind of smoke you guys have had, but it looks and smells kind of okay.

Davids March 2, 2018 4:30 PM

The aliens would plant cross-site scripting in their messages to us. Lesson learned: Never trust user input.

David March 2, 2018 4:44 PM

    But the opposite is also true. If we get alien visitors, we can send them messages that contain a virus that will in turn disable their electromagnetic shields. Hell, they should make a movie about this.

There has been an advisory about this for some time so it is unlikely to work.

Thomas J Kenney March 2, 2018 5:36 PM

According to Benford, malicious AIs will send us a particularly nasty mal-species.

Darth if AI know March 2, 2018 6:13 PM

When decrypted, the alien message will be found to say “I can has cheezburger?” .

PsuedoRandomName March 2, 2018 6:58 PM

This reminds me of Eclipse Phase, an RPG setting where it's AIs that decode such a message, then go insane and become a threat. The message functions as a filter on technical civilizations, though it's not clear to what end. Umm, spoilers.

Clive Robinson March 2, 2018 7:32 PM

@ Victor Wagner,

I suspect one of the books you read was "A for Andromeda", written in 1962 by Fred Hoyle[1], or its sequel "Andromeda Breakthrough" from 1965. Sir Fred Hoyle FRS was a well-respected scientist who for various reasons did not like the "Big Bang" idea[2]; he thought a steady-state universe was more likely and came up with the idea of the C-field to account for things. Ironically, it was his work on radar during WWII that led on to the finding of the cosmic microwave background, which convinced most astrophysicists that the "Big Bang" was the right idea, and thus it became the prevailing theory.

A for Andromeda was actually a "follow on" from ideas in his two previous works of fiction, "The Black Cloud" and "Ossian's Ride", from 1957 and 1959 respectively.

Ossian's Ride was more of a secret-agent novel than sci-fi until towards the end of the book. It was at the time considered a book of great originality, which others have subsequently borrowed from.

Thus arguably astrophysicist Sir Fred Hoyle FRS is the originator of the AI-from-space idea.

[1] https://en.m.wikipedia.org/wiki/Fred_Hoyle

[2] Fred actually coined the term "Big Bang" in a BBC radio program; his critics claimed it was a way of denigrating the idea that the entire universe came from nothing.

Clive Robinson March 2, 2018 8:04 PM

If we think about,

    Such a message cannot be decontaminated with certainty, and technical risks remain which can pose an existential threat.

There are two parts to this,

1, Cannot be decontaminated
2, Existential threat

The first, no matter how true, does not of necessity give rise to the second.

Which is just as well, as the first argument is provably right, and has been so since before the Church-Turing thesis on the limits of computation.

You can show two things,

1, All Turing machines are equivalent.
2, No Turing machine can tell if it is running malicious code.

Thus the idea that the OS or computer hardware must be known to the aliens is false. Secondly, just as with the halting problem, you cannot in general show whether a piece of code is malicious or not.
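
A minimal sketch of the standard reduction behind that second claim (the function names are hypothetical and purely illustrative): if a perfect maliciousness detector existed, it could be used to decide the halting problem, which is impossible.

```python
# Hypothetical sketch: a perfect is_malicious() oracle would let us
# solve the halting problem, so no such oracle can exist in general.

def is_malicious(program_source: str) -> bool:
    """Imagined oracle that always decides whether code is malicious."""
    raise NotImplementedError("cannot exist for all programs")

def would_halt(program_source: str) -> bool:
    # Wrap the suspect program so a clearly malicious payload runs
    # only if that program finishes. Deciding whether the wrapper is
    # malicious then answers whether the original program halts.
    wrapper = program_source + "\nerase_every_disk()  # runs only if the code above halts\n"
    return is_malicious(wrapper)
```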

Such a Turing machine could only become an existential threat if we allow it to. If it remains isolated from its environment it cannot directly do harm; it's only when knowledge crosses the isolation barrier that it can do harm.

Thus two questions arise,

1, How would the AI get information across the gap?
2, How could that information become an existential threat?

Both would obviously require the collusion of "humans", thus in reality it would be our failings that are the real existential threat… No big surprise there then.

David Leppik March 2, 2018 10:04 PM

We live at the bottom of a big gravity well. Any alien that wants to destroy our civilization would just throw rocks at us that would hit with the force of nuclear weapons. No need for subtlety.

chris l March 2, 2018 10:09 PM

I’ve been saying something like this for years, that MS Windows could be the smallpox that we send to exocivilizations. If we discover extraterrestrial civilizations (or they discover us) the only thing we could realistically trade with them would be software. A copy of Windows Vista with MS Office and clippy enabled and that would be it for them.

Clive Robinson March 3, 2018 1:02 AM

@ John_AI,

    Later societies could be disrupted with algebra, calculus, principles of physics, chemistry, radio and the special theory of relativity.

A look at the manner of death of mathematicians a hundred or more years ago suggests that the contemplation of infinity was enough to cause an early death.

Then there was the development of rocket fuel: God alone knows how many died from accidents with high-test hydrogen peroxide, nitric acid, or some of the fuels, oxidizers, or what were in effect mixtures of the two that we might otherwise describe as explosives…

Mind you, having seen a partial translation of President Putin's speech, it appears their scientists are on the road to Pluto:

https://www.npr.org/sections/parallels/2018/03/01/590014611/experts-aghast-over-russian-claim-of-nuclear-powered-missile-with-unlimited-rang

https://en.m.wikipedia.org/wiki/Project_Pluto

At least the US had the good sense to send their scientists to "Jackass Flats" when they went to play that game…

Darth if AI Know March 3, 2018 1:31 AM

@Little Orphan Annie

“Klaatu barada nikto.

Remember those words.”

Wesley Parish March 3, 2018 1:41 AM

Might I suggest interested people read the Strugatsky Brothers’ Roadside Picnic? Definitely Maybe is too … mystical … but Roadside Picnic is about an alien visit where the aliens do nothing except enjoy themselves and leave, leaving behind stuff like we would after a roadside picnic.

As far as Artificial Intelligence goes, it may be time for the BBC to revive Max Headroom.

uh, Mike March 3, 2018 3:31 AM

The subject is exactly what I’ve thought about SETI all along.
[tinfoilhat]
How do we even know we haven't been p0wned through Arecibo already?
Have you seen the History Channel lately?
[/tinfoilhat]

Rich Marton March 3, 2018 7:24 AM

    "Klaatu barada nikto.

    Remember those words."

Among the various interpretations of these words is:
“I die, repair me, do not retaliate”

Darth if AI Know March 3, 2018 9:36 AM

@Rich Marton

Korrect. With all the cheezburgers the aliens are eating, there is a high incidence of coronary disease. They don’t blame the underling subservient planet species chefs, however.

Sheilagh Wong March 3, 2018 11:16 AM

This has already happened. My aunt still has an old interocitor in her basement from the 1950s.

TomTrottier March 3, 2018 4:17 PM

The only effective space malware would be a cheap & very easy method to make a very addicting & debilitating drug.

RealFakeNews March 4, 2018 12:49 AM

@Bruce:

Why would a communications message be an extinction event?

What SETI are doing now isn’t any different, unless you think that humankind would suffer some kind of psychosis that triggered a state of “must decode this message at any cost”; something akin to mass hysteria.

Clive Robinson March 4, 2018 9:14 AM

@ RealFakeNews,

    Why would a communications message be an extinction event?

Don’t conflate “existential” with “extinction”.

Look at it this way: what would happen if some bloke turned up and started changing water into wine and healing the sick and disabled with just a touch, and started telling people to be nice to each other, and they did?

Provided the "guard labour" did not fling him into a dark hole for being a "cult leader" or a "national security threat", then given a little time society as we currently know it would be dead, but the people would more or less still be alive.

One person's wish (like world peace) is another person's dread or curse (MIC profiteer, arms dealer), so they might try various tricks to stop their rice bowl getting broken.

Remember, for all the free-market baloney, capitalism is about secret knowledge and cartels, if not monopolies, all made compulsory by regulatory capture.

If some knowledge became available that killed the "profit" in capitalism, like a very cheap and abundant energy source, what do you think would happen?

Or, worse still, knowledge that would make mind reading work?

Cassandra March 4, 2018 9:18 AM

@Wesley Parish

The Strugatsky Brothers' Roadside Picnic is a very good suggestion. I read it years ago, and it appealed to me because it was very much unlike most other science fiction I read at the time.

It makes the point that alien artefacts (whether hardware or software, or both) need not have malevolent intent to have unfortunate consequences for humans. Fred Hoyle’s The Black Cloud is an example also – it causes catastrophic damage before becoming aware that humans exist.

On the other hand, there might exist something like Fred Saberhagen's Berserkers.

It is an interesting area. I hope that if we do discover aliens, they are benign.

tyr March 4, 2018 8:30 PM

If you confirm the existence of an alien civilization, the effects will bring down human civilization because of the misinterpretations of most of the human race.

You don't need any embedded secret code to do this; a simple message like "Hello, glad to find you" will be perfectly able to do it. The simpler the message, the more deleterious the effects on our vaunted institutional experts. The instant the gears start turning over "what did they mean by that?", you get the deluge of lunacies multiplying like field mice in a bumper crop year.

Contact showed this as the scientist rides to the pickup point where the message from space came in. The road is lined by loons, each with some odd agenda of his own, all the way to the radio telescope.

The MSM cannot even understand a clear message from Putin that the IC have made Russia into an enemy by their need for one. They didn't want a real enemy; all they wanted was a budget guarantee so they could continue their mediocre jobs.

By the time that gets transformed by the telephone game into the average loon's way of understanding it, things will be more interesting by far.

What would be even more effective would be an incomprehensible message that was obviously from an alien source. That would free the 'intelligence' of our leading lights to concoct many more plausibilities.

Hegel is a classic example of this from a purely human source. Post Moderns are masters of this artform.

Wesley Parish March 5, 2018 3:06 AM

@Cassandra

Point taken about The Black Cloud. I’d forgotten about that, though it was one of my favourites in High School.

Another Strugatsky favourite that I forgot until I’d finished my previous post, is Beetle In The Anthill. The tragedy of that novel is that Lev Abalkin never finds out who exactly he is, nor do the others, and Excellency kills him out of fear of what he may be.

Which is the point of the entire novel.

(With (dis)respect to Frank Herbert, I wish he could’ve put a lot more thought into the AI threats in his novels. They make me cringe now.)

Cassandra March 5, 2018 5:04 AM

@Wesley Parish

Thank-you for the suggestion regarding Beetle In The Anthill – I will look out for it.

From the bridge game on the starship Entropyse March 5, 2018 5:32 AM

Actually, we “Aliens”, as you call us, should be viewed more like nannies or nursery governesses. We recognize toys are necessary for healthy development, but that some control is still appropriate. Often you make the most endearing mistakes with them. As an example, you picked up the notion “cromulent” we had carefully left to be found, but, comically, understood it as an adjective, when, in actuality, it is a mood, much like your subjunctive, for the situation where quantum possibility is contemplated.

comment previously blocked for moderation March 5, 2018 6:27 PM

Yet another work of science fiction on this theme well worth mentioning
is Singularity Sky by Charles Stross, wherein a technologically advanced
race causes social, economic, and political disruption of a repressive
regime by freely sharing its own technology, including “cornucopia machines.”

https://en.wikipedia.org/wiki/Singularity_Sky

Ignazio Palmisano March 6, 2018 5:08 AM

Peter Watts reverses the perspective in Blindsight. He pictures an alien species that is intelligent but not self aware – self awareness being, in his book, an expensive and unnecessary trait for organic and non organic intelligences.

Humanity is blatting out gigawatts of radio signals that have no obvious use – the messages are understandable but incoherent, do not carry information, do not carry warnings, do not even, for a large proportion, represent real events. Why?

Parsing the messages requires energy and time; these are expensive resources. There is no purpose served. Ergo, these messages are meant to force the receiver to waste expensive resources. They are an attack.
Coming from /that/ star.

(quoting from memory)

Freek March 8, 2018 6:36 AM

This news item mostly reminds me of Ted Chiang's novelette "Understand". A much-recommended short story: not about aliens, but otherwise completely on topic.

John Hardin March 8, 2018 8:09 PM

@wumpus:

    So aliens are supposed to locate a flaw, design an exploit, and deliver the payload. The problem is, there is the whole "speed of light" latency.

I can see it now…

Radio astronomer: “Well, we’ve decoded the signal, and it looks like PDP-8 machine code.”

tyr March 9, 2018 1:23 AM

@John Hardin

That narrows the search field. They have to be close enough to have seen PDP-8 code, compose the reply, and get it back here before the PDP-8s are in a few museums, to be of any real effect.

That would probably be Alpha Centauri A or B.

If they have tech that can do it past 100 light years, it would be either a bad joke or a waste of time.

Clive Robinson March 9, 2018 5:03 PM

@ tyr,

    They have to be close enough…

Nope, they only have to be old enough to allow for the speed of light and the advancement of mankind OR some other species[1].

Remember, Alan Turing demonstrated a few things when answering the "halting problem", and it's only dependent on a limited set of mathematics, which in turn only needs a limited set of logic[2].

Thus you can make a Turing engine with quite low-level technology[3] that you do not need to see; you just make certain probably universal assumptions. More importantly, you can "bootstrap" the system, such that a very simple Turing engine can produce results to help build a better Turing engine, and so on.

Which leaves a question we cannot yet answer, which is one of sentience… Can a Turing engine become sentient? If it can, then the amount of "data compression" can become huge, because the engine will fill in the gaps; thus the simple symbolic equivalent of

I think therefore except I kill all.

would be its final program…

But the "Why bother?" question comes up. It may be peculiar to Earth life forms, but by and large we come with "cooperation thus trust" built in. The only time it changes in general is when we are attacked, and that generally only happens when resources are becoming scarce.

Unless there is a way to travel considerably faster than the speed of light at very low energy cost, there would be no way even aliens as little as four light years away would be competitors for the solar system's resources. Thus the likelihood is that the only long-distance communications will be designed to be cooperative rather than competitive. That's not to say it will not go wrong, but given the amount of investment an alien race would have to make just to broadcast a signal to the universe, you can be fairly certain they would test it rather thoroughly before hitting the transmit button.

The other thing is that the aliens would be rather more advanced than we are. They would not just have got into space; the majority of them would be living in space, having developed the technology to capture sufficient energy from their sun to have the spare capacity for an altruistic act with that level of investment.

[1] You don't even need some form of signal, so you don't need a "round trip". But even if you did go for the "look for a signal before you transmit" path, the question is what signal would you look for. Currently mankind is finding planets that orbit distant suns, and we can now start to analyze their atmospheres as light from their sun passes through them. Thus we might in time spot signals that are chemical changes in an atmosphere brought on by life. That would give a warning of maybe a billion years or so…

[2] Remember, one Turing engine can function as any other Turing engine if it has enough resources.

[3] So a Turing engine will work with mechanics like clockwork, hydraulics, pneumatics, or rods and levers, as Charles Babbage demonstrated. But before him, Joseph Jacquard had, in the year before his death in 1834, approximately 100,000 looms in England that were programmed by punched cards… Even just paper and pencil will do, provided the operator is attentive to their work. Even students of computer science still get taught to use a "paper computer" when having the principles explained to them…
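
To make footnote [3] concrete, here is a minimal sketch (a toy of my own, not anything from the paper) showing how little machinery universality needs: a complete Turing-machine interpreter fits in a dozen lines, and the same rule table could just as well be worked with paper and pencil.

```python
# A "paper computer": a tiny Turing-machine interpreter. The rule
# table below is a toy that flips a string of bits; the point is
# only how little machinery universality needs, not what this
# particular machine computes.

def run(tape, rules, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Rules: (state, symbol read) -> (symbol to write, head move, next state).
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("100101", flip_bits))  # prints 011010_
```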

sci-fi-fan March 10, 2018 9:12 PM

There are lots of sci-fi works with ideas like this in them. Eric Nylund wrote a novel almost 20 years ago (“Signal to Noise”) that had as part of its plot an alien race that gave certain information to the humans while fully expecting them to use it irresponsibly and all perish as a result.

tyr March 10, 2018 10:33 PM

@Clive

Back in the day (you were there too) a lot of comps used all kinds of mad and odd mechanical gadgets to program the responses to changes.

Once you get past the idea that all must be electrical, you can achieve great things. What killed the electro-mechanical traffic signals and put silicon comps in was the high cost of silver contacts at 60 USD each. Once an 8008 dropped to 12.95, it started to make more sense as a control element.

If Taleb is right, humans are an emotional processor with a rationalizing narrative constructor. If the rationalizer insists it is thinking, we aren't equipped to argue with it. That may invalidate the classic Turing test, since it can only detect our version of functionality. Even if we do construct an AI, we might not be able to recognize it as intelligent if it can think.

I detect a bit of strangeness in the Skripal fuss. Did someone really spray VX all over a city park just to get one guy? That lacks the elegance of Rus tradecraft exhibited by the polonium-in-your-tea types.

Clive Robinson March 11, 2018 7:31 PM

@ tyr,

Apparently the Russian version of VX is slightly different for some reason (why I don’t know but I guess it can be looked up).

They think he might have been hit in Zizzi's in Castle St, which I nearly went into a few weeks back when visiting the museum for the Terry Pratchett exhibition. The person I was with decided they preferred a noodle house just down from it…

What is not clear is why the assassins went for both the father and the daughter. Part of the idea behind such activities is that you leave the family alone to act as a warning to others…

I guess we are just going to have to wait for more details to come out, as they invariably do when the police are forced out of their chairs by adverse publicity. I dread to think what the overtime bill is going to be for this; hopefully central government will not dump the bill on those who live there…

Anomylous March 23, 2018 2:42 AM

So uh… use a formally verified system? I imagine we will be able to develop such a system for analysis at EAL7+ by the time we get any communication from aliens.
