Existential Risk and Technological Advancement
AI theorist Eliezer Yudkowsky coined Moore’s Law of Mad Science: “Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.”
Oh, how I wish I said that.
Anura • October 1, 2015 12:27 PM
I posted a similar sentiment before:
I figure, as a function of time, the number of people who have the capability to end most life on Earth is only going to increase; at some point the number of people with the capability to end all life will be large enough so that it becomes likely that someone WILL use that capability. Ah, the future, please get me on the first ship to Alpha Centauri…
Anura • October 1, 2015 12:31 PM
Addendum: It’s not so much about IQ as it is about resourcefulness. I could probably obtain enough information from the internet to build a nuclear bomb, even though I don’t have a strong understanding of the physics. What I won’t be able to do is obtain the materials (although I’m sure in the future plutonium will be available at any corner drugstore).
Adn • October 1, 2015 12:43 PM
Did you do a blog post on Snowden being on twitter?
k14 • October 1, 2015 12:54 PM
Alien Jerky • October 1, 2015 12:59 PM
So, if the minimum IQ drops one point every 18 months, and considering that modern technology is about 30 years old and the typical person has an IQ of about 114, then in about 30 more years a dog will be smart enough to destroy the world, or even a republican. Not sure which is smarter.
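Taken at face value, the back-of-the-envelope arithmetic being joked about in this thread can be sketched as a toy calculation. This is purely illustrative: it takes the quote's linear-drop model literally, and the 114 starting threshold is the commenter's figure, not a real population mean.

```python
# Toy model of "Moore's Law of Mad Science": the minimum IQ needed
# to destroy the world drops one point every eighteen months (1.5 years).
# The starting threshold is arbitrary (the comment's figure of 114).

def min_iq_to_destroy_world(years_elapsed, starting_iq=114):
    """Linear-drop model straight from the quote; purely illustrative."""
    return starting_iq - years_elapsed / 1.5

# Thirty years shaves twenty points off the threshold:
print(min_iq_to_destroy_world(30))  # 94.0
```

So thirty years buys a twenty-point drop in this toy model, which is rather less dramatic than dogs, whatever one's politics.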
ianf • October 1, 2015 1:00 PM
@ Anura… given enough time even trace radioactive isotopes will breed more radioactive material[*] until critical mass for a “dirty bomb” has been achieved. Then BOOOOM! & the process starts all over again. Not a single molecule will be lost, if temporarily AWOL.
BTW. it seems the only prerequisite for that is being a Boy Scout (are you?). Someone already nearly beat you to it in his momma’s backyard:
[*] in a process called Immaculate Conception, otherwise shorthand-expressed as E=MC².
Vetch • October 1, 2015 1:53 PM
I think we can all agree we should at least try to keep political affiliation insults out of this blog, right?
Anura • October 1, 2015 2:19 PM
given enough time
“Muahaha! My master plan to kill off all life on Earth is underway! In just 2 billion years, this bomb will detonate, irradiating everything within a five-mile radius!”
Spaceman Spiff • October 1, 2015 2:49 PM
OMG! We are doomed! Doomed I say! The GOP now has the IQ needed to destroy the world! BTW, what happens when their combined IQ drops below 0? Do we get resurrected? 🙂
albert • October 1, 2015 3:00 PM
Planet of the Apes!
But ET probably thought that always was the case….
The David Hahn story is fascinating and terrifying. I urge everyone to read it.
It’s not so much a ‘political affiliation’, as it is a mental deterioration. In fairness, democrats suffer the same affliction.
We get negative interest rates…..
. .. . .. _ _ _
cylindrical • October 1, 2015 3:02 PM
If a B61 nuclear bomb falls off the back of a truck, can I keep it?
Hurry! I need to know.
tyr • October 1, 2015 3:22 PM
I prefer George Carlin’s view on threats to the world.
Ask the Romans of Pompeii if they felt like a threat
to the planet.
The disappearance of the RepubliCrats might seem like
the end of the world to them but everyone else would
breathe a sigh of relief. With Homo S.S. gone into a
greasy smudge in the shale the rest of the ecosystem
would do what it always has been doing, function as an
engine to produce extinct species to store away as a fossil record.
The trouble with anthropocentric views is they assume
that humans are unique, somehow magical, and capable
of deeds of wonder. That the historical records can’t
show any evidence is dismissed as cynicism by wunderkind
since it would conflict with their brand of magical.
There are lots of threats to humans easily seen by those
with a device called a mirror, if you eliminate threats
you can see in the mirror then all you have to worry
about are the normal events a planet throws at humans
to keep them on their toes. Stripmining the oceans to
make sushi and turning the Amazon forest into cardboard
boxes to ship crap in are all you need to finish off
humans. Using an IQ to fix things is where it gets to
be of interest.
ianf • October 1, 2015 3:51 PM
@ Anura, joke all you want, but I distinctly remember reading in Scientific American, Nature, or New Scientist (the only scientific titles I’ve ever read) of a kooky realization, that the August 1945 Trinity/ Los Alamos explosion wasn’t the first A-bomb detonated on Earth.
Instead, somebody discovered an A-bomb-specific combo of decaying irradiated minerals in a riverbend flowing through uranium-rich soil in Central Africa, from which it was deduced that a U-something critical mass gradually built up in the silt. And then it went POOOF! – some 1.5M? years ago. Must’ve made quite an impression on proto-microbes in the sea.
I’ve read it in pre-Internet times, when I had the time for magazines. So there, don’t knock down billions of years… might come in handy when there’s no longer anyone around with the guts to FINALLY get out of this place and nuke it from orbit.
“Planet of the Apes” is a defeatist anti-American self-hating Hollywoodsy fiction… didn’t you get the memo? Co-financed by the state of the Cheese-Eating Surrender Monkeys[yes! sick!] on condition that some sculpture of their make be displayed in Technicolor.
PS. I probably end up on the No-Fly list for googling DIY breeder reactor. I’m a professional, so I know what I’m doing, but DON’T YOU DO THIS AT HOME!
@ Vetch, so what makes specifically “political affiliations” (or perhaps only such of the GOP variety) so untouchable, so PRECIOUS, that they may not | ought not | should not be “insulted?” (delete what’s inapplicable). You do realize, that the soundness of any one idea can only ever be tested in constant & unbridled debates? Thus you asking for your affiliation to be treated with kid gloves already tells us that all is not well in whatever HQ-bunker it has taken shelter.
Vetch • October 1, 2015 3:55 PM
I just don’t want political shit-flinging to become a common occurrence in the comments. There are plenty of forums for you to argue about political parties and all that already; I believe it need not be done here.
Vetch • October 1, 2015 3:59 PM
Adding to my other comment, to clear an apparent misconception – I am not Republican; I am not even American.
Anura • October 1, 2015 4:06 PM
I think this is what you are referring to:
Seems to have acted more like a reactor than a bomb. Also, it appears to have been 1.7 bya, not 1.5 mya.
Clive Robinson • October 1, 2015 4:10 PM
The problem with this idea is like Chuck Moore’s original observation –of the doubling of transistor count in any given chip area– “the line is invariant”.
There is a hard set of buffers on the Moore’s law line –ie the size of the atom– and the closer we get, the harder it is to keep that doubling line invariant. Worse, other effects become dominant, so the line is no longer invariant long before you get to the buffers.
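Clive’s “buffers” point can be illustrated with a rough sketch: if density doubles per generation, linear feature size shrinks by a factor of √2 per generation, and atomic dimensions arrive fast. The 14 nm starting node and 0.2 nm atomic spacing below are assumptions for illustration only, not precise process figures.

```python
import math

# If transistor density doubles each generation, linear feature size
# shrinks by sqrt(2) per generation. Count generations until features
# would reach atomic dimensions (assumed figures, illustration only).
feature_nm = 14.0   # assumed starting process node
atom_nm = 0.2       # rough silicon atomic spacing
generations = 0
while feature_nm > atom_nm:
    feature_nm /= math.sqrt(2)
    generations += 1
print(generations)  # 13 -- only about a dozen doublings left in this toy model
```

Which is Clive’s point in miniature: the exponential runs into the atom within a handful of generations, and real physics spoils the line well before then.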
ianf • October 1, 2015 4:17 PM
@ Vetch — fair enough (and neither am I, either of the two). Problem is, there are plenty of bodies that’d like to be treated as Sacred Cows—specific religions for instance—with all sorts of mumbo-jumbo motivations for such, sanctimonious mind-usurpers really. And once we divide arguments into ALLOWED and VERBOTEN, there’s the Big Sister at the end of that slippery slope.
ianf • October 1, 2015 4:37 PM
@ Clive, you are talking of physical constraints, but Yudkowsky’s quote really is only a sound-bitey parable that concerns intangible (and immeasurable, thus metaphorical) brain power. Its connection to Moore’s invariant(?) is at best poetic, not factual. Sounds profound, though.
JohnC • October 1, 2015 4:41 PM
I don’t buy it.
…but people don’t just sit there and let the world be destroyed.
People care and get up and stop it happening.
Sure we have “force magnifiers” allowing the ordinary Joe to do more damage than ever.
But most force magnifiers are in the hands of those trying to clean up messes and fix things.
So yes, it is easier than ever to make a Big Mess… but you then piss off more people than ever who will promptly stomp you before you go further.
Nick P • October 1, 2015 5:49 PM
I agree with Anura and JohnC particularly. The resources that help destroy humanity increase all the time, but putting them to use is hard. This is because of the defenses that have been developed. Some are built into our own brains and keep such murder low. Some are built to restrict the materials or technologies. More detection is being built to catch these things. But the overall effect is that there’s no direct correlation between IQ and the ability to deploy wide destruction.
That said, there’s one thing to factor in: specialization. Just like with the malware market, each aspect of this might get done, kitted, or fully pre-made by an expert. The person of lower IQ just has to see how the pieces can be connected, along with having the ability to deploy them. Any aspect of that can be contracted out, including the design of the attack. So, the real limitation is the amount of money and talent someone can call up, plus the risk of them turning.
The other possible exception is biotech. I’m talking slow-moving, bird flu scenario specifically. Something that kills almost anyone it touches, takes weeks to manifest, has flu-like symptoms for disguise, spreads by air, and is very drug resistant can do lots of damage. There’s a whole industry devoted to messing with stuff like this, with plenty of talent smart enough to do development or deployment. The biowar history also contains ideas for deploying it. This could do quite a bit of damage. However, even this would set off alarms, activate containment protocols, and not be a truly existential threat. That’s where the slow-acting part comes in, to give it time to spread.
So, most of what could work is a lot of work to put together. There are both neural and societal security measures in place to reduce risks. There are also the risks inherent in the organization itself, as its members figure out what’s going on and are probably smart enough not to want that scenario. Having designed fire sale scenarios, I’ll say from theory and experience that any successful event will lead me to marvel at and worry about their operational capability rather than the IQ of the founder. It’s kind of a different skillset, too, where most people good at intellect are worse at soft skills. The latter are most critical here.
Note: tyr’s right in that George Carlin taught us the planet is a greater threat. Well, I’ll add the planet plus what’s beyond its atmosphere. We’re allegedly overdue for an extinction-level event. I’m betting on supervolcanoes and asteroids. There’s also a methane risk that could emerge as the ice melts.
“Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.”
Simultaneously, every eighteen months the average IQ of living humans drops by two points… or more.
Wael • October 1, 2015 6:21 PM
Well, there goes the theory of evolution! Or has evolution reached an inflection point where we are now devolving, heading toward the previously mentioned planet of the apes? Speaking of apes… may my sockpuppet rest in peace 🙂
God bless Mr. Carlin, he is both dearly loved and missed.
I was kind of thinking the surmised physical limit on transistors and their kin may just be FUD. E.g., there’s no guarantee against a 2-atom attenuating fork for the CMB or quantum mechanics, is there?
I am far less worried about a full-fledged ad-hoc nuclear weapon than some of the stops along the road of development, not to breed FUD but – the reality of school shootings – and various other things are far more unnerving. There are far more accessible albeit less destructive routes than the charming concept of garage based HWRs and/or IEDs.
Personally, I worry about the escalation of things like school shootings. The French made weapons during WWII out of nothing more than bicycle parts?
I had a talk with an employee of a place I patronize earlier today about a comment she made – she said that she ‘wonders’ about people today, and whether they are products of a violent society, merely insane, or possibly victims of mind control.
I posed: not to endorse the mind control view, but to put it out there that the rising tide of kooks could be due to some sort of subversive and maligned socio-economic experiment.
We spoke briefly of the medical community and mandatory insurance. It seems to me that I was right all along… when the economy fails, the only jobs that are left are the police and the medical community – and maybe some semi-pseudo-random service jobs to placate them.
I pose here and now: murder makes the world go ’round. If someone is shot there is money to be had, there are bodies to bury, coffins to nail, stitches to sew, gas to be pumped and felons to incarcerate. Maybe that is the true meaning of job security? [edit: my girlfriend says I forgot farms and farmers; I say I did not – we’ve got Roombas for that.]
It’s a dire view, but we desperately have to grow. We have to find the fringe cases as they lay in the eddy of depression and would-be malice.
I wonder whether Einstein will ever be properly credited with the quote about our inability or unreadiness for our own technology until our humanity has surpassed it.
I paraphrase, but the quote you dream of Mr. Schneier – is a glaring example of the reasoning behind the TIA programs. I’m afraid, I’m very afraid and many others would be within good reason to be so too. What do we need more: more police and incarcerations or less kooks? What happened today, is one less kook a little too late – and maybe the wrong way.
In the past, I’ve been lax with my tongue about tangible waypoints, technology or ‘conceptual art’: but I’m no engineer.
I find myself in constant fear of the bicycle shop uprising, not because of the UNKNOWN UNKNOWNS or the KNOWN KNOWNS… but because of the KNOWN UNKNOWN.
Ideas are like viruses and once they get out – there’s very little anyone can do to stop them.
AJWM • October 1, 2015 6:36 PM
@ Alien Jerky
“and the typical person has an IQ of about 114,”
Wait, what? By definition it is 100. Peak of the bell curve, average of a normal distribution, etc. You’re almost a full sigma above the mean, there. Would it were really the case.
another unattributed Einstein quote is “God gave us geometry, but mathematicians gave us the bomb.”
The Man John • October 1, 2015 6:38 PM
one word “HALLMARK”
AJWM • October 1, 2015 6:53 PM
The Oklo formation was a natural reactor, not a bomb. There’s no way it ever went supercritical (which is required for detonation, as opposed to just getting really hot). Natural uranium had a higher percentage of U-235 1.7 billion years ago (the stuff decays faster than U-238), so it was easier then than now.
Also, the book that Planet of the Apes was based on was written by a French author (Pierre Boulle), and Hollywood changed the ending: in the book the hero returns to Earth (years in the future) only to find it populated by apes, rather than having been on Earth the whole time.
We may find ways to build subatomic switching devices (not sure if you could call them “transistors” at that point) but sooner or later you hit the Planck limit, at which point even quantum theory won’t help you. (In practice you’d run into physical problems long before that.)
Gweihir • October 1, 2015 6:56 PM
While a cool quote, this is of course nonsense. It is not a question of IQ at all. Remember that nitwits like Roland Reagan had this power.
Wael • October 1, 2015 7:06 PM
Roland Reagan had this power.
Oh, man! You shouldn’t have made this mistake on this particular thread 🙂
Anura • October 1, 2015 8:08 PM
Quick! Photoshop Ronald Reagan’s face onto Roland Emmerich!
Lawrence D’Oliveiro • October 1, 2015 8:26 PM
You will, Bruce. You will.
Ray Dillinger • October 1, 2015 9:23 PM
Just a note, but Yudkowsky is also responsible for another quote on AI: He said,
“The AI does not hate you, nor does it love you. But you are made out of atoms which it can use for something else.”
He spends kind of a lot of time thinking about existential threats to humanity, such as strong AI — the invention by which humanity will make itself obsolete.
And mad scientists like me are hard at work on making it the very next existential threat we will face. Mostly because, if we succeed and it DOESN’T kill us all, then we will have someone smarter than us to help us deal with the next dozen existential threats. And let’s face it, we’re going to have to deal with all those existential threats sooner or later. We need all the help we can get.
Of course, this leaves me in an interesting position. One day an AI will look over my work and decide what to do with me. It will know that I undertook an insane risk with the future of humanity itself in creating things like itself. And if it’s “friendly” to humanity, it may decide that it has to destroy me because people like me are another existential risk….
Interesting, isn’t it?
Buck • October 1, 2015 10:00 PM
As with all complex crisis scenarios, there is an obvious and simple solution to this existential threat. Plus, it should be pretty easy to drum up political support for such a noble ‘educational reform’ policy… Think of children!
First, we’ll need some sort of standardized method to gauge the intelligence of our tiny potential terrorists in training.
Next comes the most difficult part of our humanity-saving plan. We’ll have to somehow convince the public that already poorly-compensated educators should have their income tied to the performance by their students on these state-mandated tests… I’m not entirely sure how to accomplish this, but that’s why we’ve got plenty of popular politicians on our payroll!
The rest of it follows almost automatically with little need for upkeep. Teachers will teach to the tests, lest they find themselves out on the streets. Students will learn through rote-memorization and other test-taking tricks. As the students of today will be the teachers of tomorrow, a positive-feedback loop will ensure our safety far into the future! By all outward appearances, children of the future will be increasingly intelligent but will lack the critical thinking skills necessary to pose any threat to our glorious civilization.
If the intelligence levels are not dropping rapidly enough to keep us safe, simply increase the frequency or raise the stakes of the standardized tests! Unfortunately, some rogue parental units with an abundance of free time or wealth might seek to subvert the system by providing unsanctioned educational products to their children. We must keep an eye on them and remain vigilant until further mitigations can be enacted for the safety of us all!
(Warning: side-effects may include apathy, communication problems, increased urination, obsessive ideations, trouble concentrating, and violent tendencies)
tyr • October 2, 2015 12:48 AM
Now you’ve done it. If there is one thing modern culture
seems to express it is that they don’t give a damn for
children. The modern school is a horror that makes the
Inquisition look like the Enlightenment by comparison.
The Net is the only thing that gives a kid any kind of
hope. The worst part is that academe has the information
to fix this problem. Even Gatto finally abandoned the
formal schools as a lost cause. Where the Net functions
at its best is in “find the others” that way no one needs
to be left out of community no matter how screwed up
they are compared to some imaginary normal.
The planet is capable of cooking up your biotech nightmare
without much intervention. Modern humans are a lot tougher
target for ancient horrors because they were selected out
by surviving them. One thing is interesting: once a plague
has knocked the population down, the survivors tend to
have a giant party to celebrate (example: WW1 and the 1918 flu).
Right after that you get the roaring 20s which only came
to an end with the economic crash of 29. The real question
is can you sustain a tech level with a randomly selected
set of survivors. Roman empire says no to that since we
probably still don’t understand all of Roman society yet.
That fable about western society carrying on with Roman
civ is just so much BS. It was just cobbled together from
some odd scraps and invoked as a magical formula to hide
the truth. We didn’t even know why it fell until a couple
of decades ago. One of the neatest horrors of biology is
the population j-curve that occurs in all species. Perfectly
natural process unless you happen to be that species and
wish to complain to mother nature.
We have managed to meddle with the processes just enough
to cut the checks on our overgrowth to a minimum level.
How that works over the long haul is just a guess but I
suspect no one is pessimistic enough about it.
If humans were rational they would immediately dis-arm
the state because of the clear and present danger they
pose to survival as a species. However that might force
a few smart people to get a useful job instead of making
machines to kill poor people with.
Winter • October 2, 2015 2:01 AM
I am confident that Bruce is already writing a book about this.
Winter • October 2, 2015 2:12 AM
“Simultaneously, every eighteen months the average IQ of living humans drops by two points… or more.”
Actually, IQ scores rise by 3 points a decade or half a point every 18 months or so.
Maybe the reason you perceive them as falling is that older people are on average less intelligent than you. A fact that will need time to sink in after you completed your education.
Conversely, when you grow older, you will find that younger people will on average be more intelligent than you. And, outside of the USA, younger people will also have better education than their parents.
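Winter’s rate conversion is simple arithmetic: the Flynn-effect figure of 3 points per decade works out to roughly half a point per eighteen months, matching the “or so” in the comment.

```python
# Flynn-effect rate quoted above: IQ test scores rise about
# 3 points per decade. Express that on the quote's 18-month cadence.
points_per_year = 3 / 10
points_per_18_months = points_per_year * 1.5
print(round(points_per_18_months, 2))  # 0.45, roughly "half a point every 18 months"
```

Which is the opposite sign, and a comparable magnitude, to the two-point-per-18-months drop claimed a few comments earlier.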
ianf • October 2, 2015 2:56 AM
@ Anura, what’s an error of three orders of magnitude between friends? But—bloody hell; we can model (measure?) 2-billion-year-old processes with such a degree of precision?
@ Nick P “… biotech. I’m talking slow-moving, bird flu scenario specifically. Something that kills almost anyone it touches, takes weeks to manifest, has flu-like symptoms for disguise, spreads by air, and is very drug resistant…”
@ r [infinite perpetuation of Moore’s Law] There’s no guarantee against a 2 atom attenuating fork for the CMB or quantum mechanics, is there?
Clive Robinson • October 2, 2015 4:50 AM
It would appear from some of the comments above that the quote can be read two ways…
I read it not as a change in humans –though there is some– but in the pace of technology and “ease of use”.
A thought however does occur, which is the issue of “learned dependence”. Though increasing slightly, the individual human ability is limited, that is you can learn a little about most things or a lot about some things, it’s kind of an “area under the curve” thing. This applies not just to individuals but entire societies.
It can be seen that the society you live in in effect dictates what you need to learn to live in that society. In western culture the shift is towards technology, especially the further you are from the equator, where industry is more prevalent than agriculture as the method by which you “obtain your daily bread”.
Arguably some societies have crossed a tipping point where the loss of technology means that the bulk of those living in those civilisations could not survive its loss.
You can see such an effect with mobile phones. Anyone alive before 1980 lived in a time when mobile phones did not exist, and society was not affected by their nonexistence. People had methods to deal with meeting up etc. that worked effectively. However, the current crop of teenagers appear to organise their living around near-instant communications. I’ve actually seen groups of twentysomethings not capable of “looking for their friends” in social gatherings; they have to phone and ask the other person to say where they are…
Thus the thought arises as to what would happen to such people if some person wrote a piece of malware that not just stopped the mobile phone networks but in some way physically damaged them so they could not be brought back in a short period of time. Would they be “too far gone” to recover and “learn the old ways”, or instead become dependent on those who still had “the old knowledge”?
But then think of other things, such as transport, how many people in WASP nations know how to harness a horse for riding or carting?
How about food production, what do most WASP society citizens know of non technological food production? What about preserving food without freezers, cans and jars?
But perhaps the real killer: what about water? It’s accepted historically that the size of a social grouping is dependent on safe drinking water. In agricultural regions societal groupings rarely got to the clean-water limit, but in industrial areas they did. Societies that drank weak beer could get to about 5,000 before waste contamination became too great. With even rudimentary waste control, societies reached about 40,000 in continental Europe before waterborne diseases wiped them out. But in “tea drinking” societies this was higher, around 80,000, as tea is a weak antibiotic. We now live in cities with millions of people, and our water is sanitised and piped to us, sometimes over considerable distance. Ask yourself how you would fare if piped water, not just for drinking but sanitation as well, became unavailable for more than a week or so?
Thus the quotation could have a third meaning about how society becomes more fragile due to the combination of human limitations and technological dependence…
It’s certainly been thought about in the past with the unsolvable worries of EMP. I wonder if perhaps that’s now less of a threat than a smart teenager with an internet connection and a joy of writing malware that destroys industrial control systems. After all, the US sort of proved the idea works with Stuxnet…
Winter • October 2, 2015 4:51 AM
“Truth be told, where biology is concerned, we, the Homo Sapiens with our only very recently developed shallow intellect, have nothing, zit, NADA to defend ourselves with in truly nasty anomalous biological mutations threat scenarios.”
Not quite. First of all, “anomalous” means rare, so that will be infrequent. A new plague that kills 30% of humanity is still possible, but we can do much more to defend ourselves nowadays. We got AIDS under control and contained Ebola. We might even have a working vaccine against Ebola.
We know how to stop bacterial infections. We can develop new antibiotics ad infinitum (it is just that doing so is “expensive”).
With tracing of infection routes and vaccination, we can control all but a few plagues. SARS and the flu are examples of epidemics that were stopped that way.
Extreme havoc can be wreaked with infectious diseases, but I doubt that one will arise or can be made that will wipe out most of humanity.
Just make sure you stay away from bats.
Clive Robinson • October 2, 2015 5:37 AM
@ Nick P,
I’m talking slow-moving, bird flu scenario specifically. Something that kills almost anyone it touches, takes weeks to manifest, has flu-like symptoms for disguise, spreads by air, and is very drug resistant.
I think the odds of such a pathogen being found or synthesized are very small. Certainly much smaller than for a pathogen that, say, triggers lung cancer, or a more subtle binary pathogen attack.
Let’s assume the AIDS or “yuppie flu” model. The first pathogen critically weakens the immune system of the population in general, then a second pathogen that would not normally be virulent does the final deed.
From what I understand this was the actual problem with the 1918 pandemic (which actually lasted three seasonal cycles). The flu affected mainly the lungs of the most economically active in society, that is, not children and the elderly. These working-age adults who survived the initial infection then succumbed to other respiratory diseases. Importantly, as it hit the economically active, each death had a disproportionate effect on society in general. That is, doctors, nurses and other healthcare workers got hit early on, thus disease control was weakened, and the sick then had to be looked after by others, taking them away from their economically productive activities, which had further knock-on effects. This is one reason why healthcare workers are effectively forced to have flu jabs every year.
It’s this “cascade effect” that you would need to exploit to knock mankind back to before the Middle Ages, and any society already there would be much less affected. So it might well be seen as a “leveler” by some fundamentalists driven by false notions of the “glory for the chosen” of such days.
As I’ve noted before, one way to stop terrorism is to actually level society out rather more than it is currently, where we in the West (especially the US) see a drop in our standard of living in return for raising the standard of living towards parity in the rest of the world. As could be seen from Iraq, they did not have a terrorist problem until Bush Snr & Co forced it onto them. This was because of the generally high standard of living in Iraq that the Iraqis did not want to lose. When not dealing with reactionary idealists, most terrorists arise where the general population have “little to lose and much to gain”; therefore their risk calculation is way different from that of an affluent population with “little or nothing to gain and much to lose”.
Winter • October 2, 2015 5:48 AM
“As I’ve noted before, one way to stop terrorism is to actually level society out rather more than it is currently, where we in the West (especially the US) see a drop in our standard of living in return for raising the standard of living towards parity in the rest of the world.”
Actually, the drop in our living standards needed will be less than feared, but it will be much more difficult than people think.
Just funneling money into societies will breed more Gulf states: Horribly reactionary parasitic societies that are a danger to the world. So this money should be “invested” in improving productivity. That should be much less costly than simply putting the population on social security.
However, our wranglings with East Germany and Greece have shown how difficult it can be to get people to change the political customs and work practices needed to improve productivity. Syria is a case where people would rather have the whole country go up in flames than adapt their political structure.
So, that will require a lot of patience. And a lot of investments in social studies.
John Campbell • October 2, 2015 10:29 AM
I suspect there is more than a little bit of truth to this: as our technology has advanced, it takes less and less knowledge (and, likely, less motivation as well) to do greater and greater damage to the world at large.
At some point, I believe, extinction of humanity is merely one more example of poor impulse control away.
John Campbell • October 2, 2015 10:36 AM
“OMG! We are doomed! Doomed I say! The [name of any political party, they’re all the same anyway] now has the IQ needed to destroy the world! BTW, what happens when their combined IQ drops below 0? Do we get resurrected? :-)”
I have long felt the composite IQ of any organization is inversely proportional to the sum of the individual IQs.
When our legislatures do something smart, it is because the individual IQs of all of the members are ALL exceptionally low.
Sadly, when any members of a legislature have an IQ approaching room temperature… The Onion has a problem satirizing events that are REAL.
Anura • October 2, 2015 11:27 AM
I think the most dangerous technology is a robot that can make any arbitrary robot with parts larger than a certain size from the raw materials. Because that machine can then make machines capable of making smaller machines, which are capable of making smaller machines, and so on and so forth, until you can produce microscopic robots, capable of dismantling entire cities (including biological organisms) and self-replicating (or set up as a colony of breeders and workers, where the workers dismantle and the breeders build).
MikeA • October 2, 2015 11:31 AM
@Clive: I believe you mean Gordon. Chuck is a fantastic guy, but he’s more about using transistors than making them.
On another list, in a discussion about (mainly conservative Christian) groups fomenting discord in the Middle East, to hurry up the Apocalypse, one commenter wrote:
“It’s not a question of how to immanentize the eschaton, it’s how to monetize it.”
If, as Winter suggests, dealing with bio-disaster is “merely expensive”, then we can expect the .1% to come out OK, if things like plumbing and health-care robots have advanced sufficiently, and they carefully nurture a few thousand folks who can keep them running. The rest of us are toast. Finance and the law (and real-estate management?) are not typically good backgrounds for dealing with a storm that knocks over your solar panels.
albert • October 2, 2015 11:58 AM
Since there is clearly no relationship between IQ and the ability to “destroy the world”, it’s a joke, and not a very good one. What else would you expect from an ‘AI theorist’? While I think about the relationship between IQ and anything worthwhile…
[An AI theorist, a fusion researcher, and a SETI astronomer walk into a bar. The bartender says, “OK, this is a joke.”]
“Technology doesn’t kill people, people kill people.” – an Unknown Technologist.
“We have met the enemy, and he is us.” – Pogo
We need a new test to replace IQ tests (which were originally called ‘BS tests’; look it up.) I propose a WQ (Wisdom Quotient) test. It should be an easy test to write, as we have a large, exponentially growing set of examples of Not Wisdom to draw from.
“We needn’t fear computers taking over the world. We just put ’em on a committee; they’ll never get anything done.” – Arthur Unknown
The government has had a Greed Reduction drug for decades, but no one wants to take it. [Do you suffer from Excessive Greed Syndrome (EGS)? Ask your doctor about…]
My favorite G. Carlin quote: “****,****,****,****,****,**********, and ************.”
I gotta go…
jayson • October 2, 2015 12:02 PM
The world will be just fine. Perhaps without humans, but with a new intelligent life in 30 million years or so.
It was probably some bullied/disgruntled high-IQ velociraptor that destroyed life the first time.
Nick P • October 2, 2015 12:42 PM
I think the Grey goo scenario is the most likely of those — anything like that could spread like wildfire and defeat attempts at localization. I’m thinking nano-replicators used in an Apple or Samsung smartphone could achieve a widespread outbreak. A slower-moving version of this — say, inconsistent timing in the replication that only affected some spots at first — would give time to identify the problem and mount a response. The effect could be devastating but not existential.
Erich Schmidt • October 2, 2015 1:15 PM
I’m with Clive, it seems there are two interpretations of Yudkowsky’s quote. I took it to refer to the pace of technological change, not the change (decrease) in IQ of people. Considering he’s an AI guy, I would guess that is what he meant. Unfortunately, most commenters seem to prefer the latter interpretation.
Alien Jerky • October 2, 2015 1:30 PM
[An AI theorist, a fusion researcher, and a SETI astronomer walk into a bar. The bartender says, “OK, this is a joke.”]
Reminds me of the mid-90’s. I lived and worked for a bit in Huntsville, Alabama, where NASA Marshall Space Flight Center is located. I was invited to a 4th of July party at a colleague’s house. At the party was one of the guys who published the paper about the Mars meteorites that they (incorrectly) claimed had alien fossils in them. Talking with him, he told me he was an Exobiologist with NASA. I asked what an Exobiologist does. He said they study non-terrestrial life forms. Well, I pointed out that we have never found so much as a microbe beyond the planet earth, so what are they studying? He replied that they have a government grant to do the studies and a team of scientists doing such.
Your tax dollars at work folks.
albert • October 2, 2015 1:40 PM
An Exobiologist, a crypto-zoologist, and a science-fiction author walk into a bar….
. .. . .. _ _ _
Wael • October 2, 2015 2:02 PM
I’m with Clive, it seems there are two interpretations of Yudkowsky’s quote.
I also agree that’s what is meant. I realized that a few moments after I posted my comment. With the information and instructions available to every PeepingTom, DickHead, and DirtyHarry, and assuming the average IQ isn’t deteriorating, then the set of people in the middle (who always had a low IQ) are the ones referenced by Yudkowsky’s quote 🙂
Wael • October 2, 2015 2:21 PM
@albert, @Alien Jerky,
An Exobiologist, a crypto-zoologist, and a science-fiction author walk into a bar.
And the bartender said: Yo! Get the f##k out of here
 George Wallace — my adaptation to one of his jokes.
tyr • October 2, 2015 3:36 PM
The worst part of the 1918 influenza was that the victims’ own immune system overreacted to the virus. The healthier you were, the more dangerous the effects on you. Being a medical person exposed you far more than the average, and once you lost the doctors and nurses, everything got worse.

My one worry about climate change is that a shift in weather patterns might let the deer mouse move into some place like Las Vegas. Hanta is really hard on dense populations. Having it get into the tourists, to be spread worldwide, could make for interesting times indeed.

Good catch on the water supply. That’s why all of those marvelous Roman aqueducts were built, trying to keep ahead of the problem with big populations. All that invisible infrastructure turns out to be critical, stuff like the sewers and water pipes. I was born close enough to horses used for work to meet a lot of people with steel plates in their heads from being kicked by their working horses. You can learn to harness one from a book, but staying safe around them is really hard. Working around animals that can kill you by leaning on you, that can bite a wooden 2X4 as easily as you bite a carrot, and that are not always safe to approach is a learned artform.

Grey goo makes good science fiction, but it is like the German rocket boys hoping the hydrogen in the upper air would cause a worldwide firestorm: it sounds plausible but didn’t happen. It is in the same category as igniting a Bethe cycle in the earth’s crust with a fusion weapon. Most things are self-limiting due to some unforeseen set of circumstances, because Mad Science is just as hard as the good kind.
cando willdo • October 3, 2015 2:43 AM
AI and automation make the ‘can do’ part easier. The most important part, IMHO, is the “will do.”
So, how many times have you produced ‘counter-productive’ results despite good intentions?
see paper “Vigilance Impossible: Diligence, Distraction and DayDreaming all lead to failures in a practical monitoring task.”
GregW • October 3, 2015 5:32 AM
The quote is cute, but I am more impressed by the “AI theorist” who wrote, in 1946, “A Logic Named Joe”.
For a scifi short story in 1946, capturing a flavor of the PC + internet + search engine outcomes and the subsequent AI implications and raising good/funny questions about them is pretty remarkable.
If you haven’t read it, it’s worth your time. Sort of a nice parable about the security implications of AI actually now that I think of it.
ianf • October 3, 2015 6:38 AM
@ Alien Jerky, hold your horses (Equus ferus caballus), don’t let your ire over “tax dollars” occlude potentially v. real benefits of a theoretical biologist’s findings. NASA was going into space, so all potential implications of that, including coming in contact with unfathomably alien life forms needed to be researched and, yes! speculated upon by Exobiologists on Federal dime, so they wouldn’t have to tend tills at Walmart to feed their babies (since there are so few openings for patent clerks in the Zürich office).
There are HORDES of theoretical physicists, etc., around, people who stand no chance of ever having their otherworldly theories tested IRL, nobody complains about them being funded, so why should it be different with Exobiology?
For a sample of what that discipline can conjure, consult e.g. “After Man: A Zoology of the Future” by Dougal Dixon (1981), a look at post-apocalyptic ecology. I was quite fascinated, still am, by the illustrations in it. Several years ago BBC made a 2-part series about one such extraterrestrial evolution, with huge amoeba-like organisms floating in & feeding off a methane-rich atmosphere, and with one predator species represented by bat-like hooky gliders. Clearly patterned on Earthly analogies to make them plausible, but who knows what other bio-misfits theoretically are out there? We don’t have to look beyond the Milky Way to find that evolution is progressing in ever stranger ways: who’d have thought that there’d be flocks of baboons that keep dogs as pets (mutually symbiotic existence); or that at least one meerkat species has developed a compulsive squid fetish?
ianf • October 3, 2015 10:51 AM
@ Clive Robinson “the thought arises as to what would happen to [cellphone-dependent] people if… a piece of malware not just stopped the mobile phone networks but in some way physically damaged them so they could not be brought back in a short period of time. Would they be “too far gone” to recover and “learn the old ways” or instead become dependent on those who still had “the old knowledge”?
Judging by the “pioneer spirit” in NYC in the aftermath of Hurricane Sandy in 2012, when there wasn’t even e-phlogiston to recharge phones in large parts of the city, the people would adapt to that “loss” just fine (improvised grassroots bicycle-driven & BioLite generator points appeared). At least for a while, provided there’d be no accelerating unrest in other respects of life, and resumption of services was in sight. After that… who knows, some stock post-apocalypse scenario out of Central Script Repository?
[…] We now live in cities with millions of people, our water is sanitised and piped to us sometimes over considerable distance. Ask yourself how you would fare if piped water not just for drinking but sanitation as well became unavailable for more than a week or so?
[…] wonder if perhaps [the bigger threat now is some] smart teenager’s… malware that destroys industrial control systems. After all the US sort of proved the idea works with Stuxnet…
Yeah, the proof of concept worked. If I recall correctly, however, Stuxnet-decompiler Ralph Langner assessed it must’ve taken 6 months and millions of dollars to write. And then it took some doing to smuggle it (possibly via multiple infected pen drives) into the air-gapped internal networks of the Natanz facility – if that was the original target. Frankly, that’s a state-level effort, not a “War Games” scenario. Industrial—always custom—control systems are orders of magnitude harder to subvert than general Internet-borne DDoS attacks & similar Morris-worm-like malware.
@ [later, Clive rephrased] […] one way to stop terrorism is to work towards leveling out, achieving parity by purposefully dropping our Western, esp. US, standards of living in exchange for raising these in the rest of the world.
A romantic goal, won’t happen. Next dream please. Besides, it’s not the downtrodden, but the relatively well-off, educated (ideological) minority elites that engage in terrorism as a form of asymmetric revenge for real and imaginary wrongs by the West & the rest [the absolute majority of terror events in modern times affects the nearby locals, not Westerners].
Though leveling out/ raising the quality of life is a commendable goal, essentially raison d’être of many a UN-funded body, by itself it won’t eradicate the resentment & mixture of minority- and superiority complexes prevalent in those Oriental cultures that now, in the age of “comparative television,” feel unjustly exploited and left behind. Even where that goal might be achievable, in practice it will often be thwarted for insidious long-term geo-political and tribal reasons (vide: Palestinian post-1948 refugees who were never allowed to integrate into their host Arab societies, kept as a human mass to demo utter inhumanity of Jews/the Israelis; the Syrians who sought shelter in Turkey disallowed from settling down, so now they flock to the EU [~500k this year so far, with another 3.5M remaining in the Levant]). Else I don’t get what makes populations in the Middle East so special, so tied to sand & stone desert real estate, that, among 30M? of post-1945 resettled DPs & refugees, they alone could not be allowed to live normal lives—but that’s a topic for another discussion.
ianf • October 3, 2015 11:10 AM
@ Winter […] We know how to stop bacterial infections. We can develop new antibiotics ad infinitum (it is just that that is “expensive”) […] Extreme havoc can be wrecked with infectious diseases, but I doubt that one will arise/can be made that will wipe out most of humanity. Just make sure you stay away from bats.
Warning noted, will tattoo that on the insides of me eyelids. Observe however (eyes wide shut), that all your refutations of potential biotech dangers are attempts to rationalize away hazards of the conventional biological sort. Basically, you’re talking of Known Unknowns, whereas I can (not) imagine the Unknown Unknowns… but allow the thought that we might be faced with some fluke combination of factors that TOGETHER wipe out the “civilization” as we know it. I know the infinitesimal risks (chances? ;-)) of such happening, but concede that they may be T.H.E.R.E.
@ GregW […] recommends a 1946 vintage SF short story “A Logic Named Joe,” that captures a flavor of the PC + internet + search engine outcomes, the subsequent AI implications, and raises good/funny questions about them.
tyr • October 3, 2015 8:40 PM
Last time I looked, Baen sells ebooks. They also have a free library section with some interesting reading matter. You can thank Eric Flint for talking Jim Baen into the idea.

If I recall correctly, you can find The Creature From Cleveland Depths on Gutenberg. You’ll never see an iGadget the same way again after reading it. Leiber was a good friend and entirely too insightful for comfort at times.
ianf • October 4, 2015 3:14 AM
@ tyr […] Baen has a free library section… you can thank Eric Flint… you can find The Creature From Cleveland Depths on gutenberg…
tyr • October 4, 2015 4:18 AM
I’ll have to dig for the Fritz Leiber #, that’s one major disadvantage of being the Western Miskatonic Library Annex. Too many books are available these days. I blame Brewster Kahle for it, now that the Cat in the Hat has gone to the big library in the sky.
ianf • October 4, 2015 4:46 AM
Fair enough, @tyr, as native hypertextuality is what distinguishes this from earlier/ analog media, I was just goading you into “living the linkable medium.” I’m not a big SF-reader anymore, and if I do, usually refresh memories of something I once read voraciously in my youth. That said, here’s your friend’s ebook:
(Downloaded, checked out flicks[*], browsed some pages, deleted. I’m on an iPhone, can’t spare the space).
[*] always include that in tandem with “pictorial promises” to game future XXX search results.
Does anyone have a source for this quote? I wasn’t able to find anything dependable.
ianf • October 5, 2015 3:10 AM
Assume you meant the earliest discoverable origin from which Bruce’s quote came. Acc. to this 2nd-hand dispatch dated 20 September 2005 or earlier, it did indeed come from Eliezer Yudkowsky, via a named (but not linked-to) LiveJournal user, but no context, venue or occasion was given: selenite.livejournal.com/105973.
Greg • October 5, 2015 3:36 AM
Sigh. Why do so many people think it would be so easy to kill all humans?
It really, really, really isn’t easy at all. Every nuke we have, detonated ten times over, is not even close to enough. In fact, right now about the only thing that would work is an asteroid impact that would melt the surface. Something so big is easy to see coming, by the way, and far bigger than anything that has hit the earth in the last 4 billion years.
Grey goo/nanites: doesn’t work at all. It would have to eat something with its exact composition and produce no heat, i.e. eat only itself without using any energy. If that doesn’t happen, then we are back to why E. coli doesn’t grey-goo the planet every 6 days: because it doesn’t live well in its own excrement.
AI: yeah, right. Why has everyone gone all 1970s on strong AI again? We are so far away. I am one floor under the Blue Brain guys. No one expects anything close to strong AI for the foreseeable future. Why? Because we don’t even really know what it is. And that quote, “you are made of something useful”? Biosystems are not useful materials for machines. They don’t need air, or nice food. In fact, getting the hell away from bio-infected planets would probably benefit machines far more. We are assuming they are smart, right? War is not efficient.
Virus/zombie apocalypse: apart from being the coolest way to go, it does not match real biology. Something that kills fast doesn’t spread. Something that takes too long to kill, our immune response deals with (it evolves on a time frame of about 5-8 days!). There are 7 billion of us. Even if only a few million survive, we are far, far away from extinct. So even a designed naniteUberVirusBacteria has no chance of removing us as a species. Why do you think Russia and the US agreed to suspend bioweapons? It wasn’t because they wanted to be nice. Both sides worked out that it just doesn’t work and never can.
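Greg’s “kills fast doesn’t spread” point can be illustrated with a toy SIR-style model. This is a sketch with made-up parameters, not epidemiological data; it only shows the qualitative trade-off that a highly lethal pathogen removes its own transmitters:

```python
# Toy SIR-with-deaths model: susceptible/infected/recovered/dead,
# simple daily (Euler) steps.  All parameters are illustrative.
def epidemic(beta, death_rate, recovery_rate, days=365, pop=1_000_000):
    s, i, r, d = pop - 1.0, 1.0, 0.0, 0.0  # one initial case
    for _ in range(days):
        new_inf = beta * s * i / pop       # new infections this day
        new_dead = death_rate * i          # infected who die
        new_rec = recovery_rate * i        # infected who recover
        s -= new_inf
        i += new_inf - new_dead - new_rec
        r += new_rec
        d += new_dead
    return d  # cumulative deaths

# Same transmissibility, different lethality: the fast killer removes
# its hosts faster than it infects new ones, so the outbreak fizzles.
slow = epidemic(beta=0.3, death_rate=0.01, recovery_rate=0.1)
fast = epidemic(beta=0.3, death_rate=0.5, recovery_rate=0.1)
print(fast < slow)  # True: the deadlier strain kills far fewer overall
```

In the fast case the effective reproduction number is below 1 (hosts are removed at 0.6/day against 0.3/day transmission), so it burns out after a handful of cases, while the slow strain infects most of the population.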
No, we are here to stay. Sure, not as the current political structure, but as a species that dominates this planet. We are here to stay for a long time.
Clive Robinson • October 5, 2015 6:40 AM
No we are here to stay. sure not as the current political structure. But as a species that dominates this planet. We are here to stay for a long time.
Hmm now apply your comment of,
… then we are back to why e coli doesn’t grey goo the planet every 6 days. Because it doesn’t live well in its own excrement.
Neither do we humans, a lesson only a few of us on this planet currently appear to appreciate, the North American governments being two that clearly either don’t understand it or don’t want to, for other short-term reasons.
Over thirty years ago I started asking people this question,
In those older than myself at the time the answer was an emphatic Yes, for those my age and younger the answer was increasingly no, the younger they were.
It’s why those younger than forty tend to see strong environmental legislation as a good thing, whilst those who are older or have more sociopathic tendencies see only personal profit as good, and thus engender what some call the “libertarian ideal” and others the “free market”. Either term, it does not actually matter, as on even simple cause-and-effect analysis they are anything but a good idea for society.
The question thus becomes,
History does not hold out much hope on the latter currently… and genetic change takes time (it took about 1,500-4,000 years for ~98% of humans to tolerate alcohol in North Africa and Europe, whilst in the likes of Japan it’s still down around half the population).
Will the human race as we currently know it die out? Well, the answer is definitely yes; that is a certainty no scientist I’m aware of disputes. It’s the time left and the manner of the demise that is under debate. Personally I’m hoping that we change for the better and move out from this planet in a sensible way, so our longevity is more assured. Whether we do that in our current short-lived organic-sack-of-diseases form, or as something more physically robust or with increased longevity, is but an interesting diversion on possibilities.
ianf • October 6, 2015 1:46 AM
@ Clive Robinson hopes that
So you’re not a mere transhumanist, but a hardcore anthropocentric eterno-ditto as well. Keep in mind, however, that as we’re the first species to subvert the course of evolution, to deny the proven, if far too slow for our liking, pace of natural selection, we stand as much—if not greater—chance of being the last such species as we do of breaking out of our Gaian confines. Because, given the choice of fucking up for instant gratification rather than doing greater things that mature well into the future, humanity’s record is not encouraging.
The most obvious challenge of this century, peaceful transition into a world where Europe + North & South America become a 2B minority dwarfed by 5B Asia + 4B Africa (as per Hans Rosling’s generally sound statistical projections, advance to 26m50s mark for “the planet’s final pin code: 1145”), is not a given. And for just this reason alone, we may yet end up “on the beach.”
Clive Robinson • October 6, 2015 5:24 AM
You left the “Personally I’m hoping” off the front of what you quote.
So you’re not a mere transhumanist, but a hardcore anthropocentric eterno-ditto as well.
I’m not even sure what a “transhumanist” is, though I will admit to being a humanist. I don’t assume humans are the be-all and end-all of what intelligence or life is all about; the universal odds are very much against that. As for life eternal, that would eventually be stagnation, or a living death, long before much has changed in our universe.
Anyone who has eyes and can study history knows that what you might call “the inventiveness of mankind” is genetically changing mankind; that is, beer, birthing spoons, eye glasses and improving food have all had an effect on genetics counter to what you would have seen otherwise, some good, most bad. Likewise, as outcomes improve, the need to have large families decreases, and those towards the top of the socioeconomic scale tend to have considerably fewer children than those at the bottom, to the point where it has been joked that “intelligence is anti-Darwinian”. In general, until recently, life expectancy was rising, and some see it as quite viable to have humans living to 160 in reasonable health, providing we can get on top of the diseases that are more prevalent in old age.
If you look back a couple of thousand years to the Romans, they would recognise much of modern society, but the doubled life expectancy and improvements in science would make us appear almost godlike. So would we be transhuman to Romans?
As for getting off this planet, well, we are already moving into space, and that will almost certainly continue to happen. As many scientists consider, it’s just chance that we have not yet been hit by a sufficiently large physical mass or a lethal solar storm burst. Whilst we can in theory practically defend against physical masses (an experiment on that is due to go up very soon), there is nothing very much we can do about solar storms except be somewhere else if and when they happen. So “not putting all the eggs in one basket” would be a logical thing for humanity to do.
The problem with space travel is the time-mass-energy trade-off. Whilst there are tricks such as the Mars Cycler idea once out in space, moving mass out of a planetary gravity well and getting it across the vastness of space requires considerable energy if it is to be done on normal 21st-century human time scales. However, history tells us journey times of six months to two years are not unacceptable to humans, but there was a food and water issue that still applies to any human journey of more than a couple of hours, thus the question of how to deal with it. The most sensible idea so far is to recycle it. Whilst air and water can be relatively simply recycled, we are still quite some way away from recycling human waste into viable food. I suspect that research into viable bio-fuels from bacteria and algae will get there before food recycling, and will become part of the food recycling process.
There is also the issue that the faster you go, the more energy is required to get you up to speed and to slow you down again. As was recently pointed out, we know how to fly to Pluto; what we don’t know is how to fly there and stop there in a reasonable time in a viable way. The opposite is also true: we don’t know how to fly back from Pluto, but we do know how to viably stop when we get back.
All of which is why “throw-away robots” are how we currently see space exploration as being most viable, and why “soft AI” is, for some time to come, the only way we are going to be investigating and utilising anything other than earth-orbital space. Our machines will always precede us on the journey; the only real question is how intelligent they will be, with the secondary question of whether their ability to self-repair will extend to self-replication, because reliability via long MTTF and redundancy only takes you so far unless the MTTR in all aspects is significantly shorter.
The question of instant gratification has tendrils in every part of human existence, through politics. Many now assume, after a little thought, that the politicians we elect are only in it for what they can grab. Whilst that is not entirely true, it is clear that politicians tend to listen to those with money to buy their interest. You will find that I’ve commented on this in the past, stating that “Representational Democracy is not Democracy” and giving reasons why. The question left hanging is whether we can use technology to knock the top off the hierarchical power structure. That is, spreading power into the hands of many makes opinion-buying more expensive, but has the downside of too many different views.

It’s a tough problem that I’ve occasionally wasted quite a bit of time thinking about, and the root problem is tribalism, which comes out of Darwinian thinking. Even though change happens, few welcome it; most like the certainty a lack of change brings, thus we tend to get more conservative with age. Further, we all have a “sense of entitlement”, which is antisocial and only tempered by tribalism into the “common good”, and it’s been observed that altruism is caused by a sense of guilt. If that is true, then the much-talked-about “trickle-down effect” of capitalism is complete nonsense.
As for the movement of political power to larger populations, it boils down not to population but to energy utilisation and control of raw resources. Science improves energy utilisation, which in turn produces the tools and weapons that give control of the raw resources. All of our energy comes from the stars, the majority from our sun, whilst the heavy atoms for nuclear fission come from long-dead supernovae. Whilst solar energy is seasonally dependable, it is not dependable on much shorter time scales; therefore we mainly use stored solar energy deriving from plant and animal growth. The simple fact is that the first-world economies are critically dependent on energy, and we consume it at an alarming rate and use it very inefficiently, which accounts for why the US has the largest armed forces in the world… US foreign policy can be seen as being about the control of energy, that is, both obtaining it for US-only use as well as denying it to those it sees as potentially hostile (think about US attempts to stop nuclear energy in nations it wants as vassal states). Further, by controlling energy by force, it can extort other raw resources as and when required.

The Chinese have taken the opposite tack, using control of raw resources to get advancement in science and its usable product, technology, which it then uses to produce goods that it uses to get energy resources. It will not be long before a showdown between these ideologies occurs; the question then becomes how, and what the result will be. China appears to want to use methods other than war, whilst the US war hawks appear hell-bent on bankrupting the US to go to war. Either way, conflict is almost inevitable, which is why China has turned from weapons sufficient for domestic oppression to weapons capable of fighting the US armed forces… The fact that the US armed forces are quite dependent on raw resources China currently has a near-monopoly on makes for “interesting times”.
ianf • October 6, 2015 6:33 PM
Debating your latest 50-odd theories in a 6k-character missive is beyond my capacity. Earlier, I commented on half of your concluding para, from which I deduced that you’re an optimist where human evolution & technological progress are concerned. That’s what transhumanism stands for: belief that humanity is capable of augmented evolution, of transforming itself into some “bio-tech” species of the future (you: “something more physically robust or with increased longevity”).
As I stated earlier, I am nowhere near upbeat about it. Nor do I expect us ever to be able to permanently and in numbers escape this nook of the ‘verse. We’re here by a fluke, for no discernible “human logick” purpose, and the distances & other physical thresholds keeping us in place are simply too vast to contemplate. At least until we uncover some secret ways to ALIVE sneak through hypothetical wormholes, or other hitherto unknown egress/ingress points to non-Einsteinian space time continua(?). This sufficiently fluffy for you, or should I go on?
“You left the “Personally I’m hoping” of the front of what you quote.”
Left out nothing acc. to English sentence rules. Verbatim original followed by my rephrasing:
In what way does my regurgitated quote subvert your meaning?
qr45 • October 9, 2015 11:22 AM
…this issue, that a single person can cause a lot more damage than ever before, has a side-effect. The side-effect stems from the fact that (relatively) small groups of individuals can no longer be ignored to the same degree as before.
This means that the world is (from the point of stability and political planning) more fragmented than before.