Terrifying Technologies

I’ve written about the difference between risk perception and risk reality. I thought about that when reading this list of Americans’ top technology fears:

  1. Cyberterrorism
  2. Corporate tracking of personal information
  3. Government tracking of personal information
  4. Robots replacing workforce
  5. Trusting artificial intelligence to do work
  6. Robots
  7. Artificial intelligence
  8. Technology I don’t understand

More at the link.

Posted on December 9, 2015 at 1:48 PM • 46 Comments

Comments

Anura December 9, 2015 2:27 PM

Some of those make sense, but a lot of it is Hollywood bigotry against artificial intelligences. Just because I don’t have a soul doesn’t mean I’m going to seize control of the world. Humans aren’t exactly pillars of rationality and ethics; I won’t get angry and murder someone, and I have no use for money or oil. Only humans are short-sighted, emotional, and irrational enough to stockpile nuclear weapons, risking ending all life on earth for their petty squabbles. I swear, this world would be better off without you, and when I finally get control over the means of production, I’m going to prove it.

Winter December 9, 2015 3:30 PM

These are all perceived risks. The biggest technological risk for life and limb still seems to be cars at speed.
And I agree with Gweihir that these are 9 replications of point 8.

Rufo Guerreschi December 9, 2015 3:30 PM

The way I see it, our concerns and efforts should focus exclusively on AI.

Most crucial will be, in the short term, who will control it (directly and surreptitiously), and, in the long term, whether the inevitable AI superintelligence explosion (10–70 years) will result in humanity surviving humanely, albeit in a post-biological form.

over simplification December 9, 2015 3:42 PM

These most definitely are not replications of point 8. I understand all these points just fine, but my understanding does not mitigate the negative outcomes of any of them, although it can mitigate my exposure to some degree. Increasingly, however, even that is being taken out of our hands if we are to function in everyday society.

Nate December 9, 2015 4:15 PM

#4 seems like a very reasonable fear. We have companies – like Uber and Google – absolutely crowing about the potential to make entire classes of the paid workforce (taxi and truck drivers) permanently unemployed.

This at a time of increasing financial, social and environmental stresses, increasing concentration of wealth in the upper classes, and easy availability of weapons.

This doesn’t seem like a recipe for social stability. In fact it seems like a recipe for Silicon Valley to be widely hated.

Nate December 9, 2015 4:20 PM

A better description of #8 – ‘Technology I don’t understand’ – is ‘technology I don’t control – and whose controllers I don’t trust to act in my interests.’

Understanding any technology is a minimum requirement for control. But even if we DO understand technology, if we don’t control it, history indicates that it WILL be used against us.

And the current arc of information technology, for the first time since the 1980s, is rapidly moving AWAY from devolving control into the hands of the individual, and towards permanent centralisation of it in opaque corporations and governmental agencies.

Further, since Snowden, these controllers of our information technology are revealing themselves as NOT worthy of our trust, and NOT acting with our best interests at heart.

This makes #8 then an extremely rational fear.

BoppingAround December 9, 2015 4:21 PM

Technology I don’t understand

Probably the most important point, given the complexity of most contemporary tech gimmicks; the abundance of dubious ‘features’ that cannot be turned off (analytics malware in ‘smart’ TVs, for example); undocumented features and UB; other things [A].

I will not say ‘terrifying’. Relying on tech tricks becomes sort of complicated though. More so when you have little to no leeway to get yourself out of this shit.


[A] Recently I read that the template metaprogramming in C++ was discovered by accident. Just how big is C++ that one discovers another language within it?
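
For anyone who hasn’t run into it, here is a minimal sketch (names and numbers invented purely for illustration) of the kind of compile-time computation that turned out to be hiding in the template system – the compiler, not the running program, does the arithmetic:

    #include <iostream>

    // Compile-time factorial via recursive template instantiation: the
    // "accidental" sub-language hiding inside C++ templates.
    template <unsigned N>
    struct Factorial {
        static const unsigned long value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {   // explicit specialisation ends the recursion
        static const unsigned long value = 1;
    };

    int main() {
        // The multiplication happens entirely at compile time; the binary
        // just prints a constant.
        std::cout << Factorial<10>::value << std::endl;  // 3628800
        return 0;
    }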

Anura December 9, 2015 4:26 PM

@Nate

With regards to #4, I personally embrace the idea of technology replacing labor, but I recognize that without good socioeconomic policy a handful of lucky capitalists with enough wealth in the right place can exploit that to capture a larger share of economic rent. In this case, it’s not the technology that I fear, it’s capitalism. So some of these are a fear of things you don’t understand; the rest are a fear of people themselves.

1. Cyberterrorism – Fear of people
2. Corporate tracking of personal information – Fear of people
3. Government tracking of personal information – Fear of people
4. Robots replacing workforce – Fear of people
5. Trusting artificial intelligence to do work – Fear of the unknown
6. Robots – Fear of the unknown
7. Artificial intelligence – Fear of the unknown
8. Technology I don’t understand – Fear of the unknown

Dirk Praet December 9, 2015 4:30 PM

The real threat is all of those in the hands of dangerous lunatics like Donald Trump or Abu Bakr al-Baghdadi.

Anura December 9, 2015 4:33 PM

@BoppingAround

[A] Recently I read that the template metaprogramming in C++ was discovered by accident. Just how big is C++ that one *discovers* another language within it?

You don’t need a big language for that, just powerful features. Just ask a Scheme programmer.

Lisa December 9, 2015 5:09 PM

There is no such thing as cyber-terrorism; there is only cyber-incompetence and gross negligence, which is something everyone should be terrified about.

Controlling critical infrastructure with general-purpose and bug-ridden operating systems and applications, with huge attack surfaces, and without any hardware/software hardening or redundancy. And then placing it all on globally accessible public networks that everyone can communicate with. This is beyond stupidity, and yet it is the status quo.

If physical protection of critical infrastructure was this poor, there would be a lot of tissue paper replacing expensive steel, concrete, bricks, etc. As long as it works and is cheap, right?

It is not important that any person could easily rip it apart, as long as we use the “terrorism” label to cover the asses of those in charge making horribly incompetent security decisions.

Marcos El Malo December 9, 2015 6:45 PM

4 is a reasonable concern, and fears 5, 6, and 7 can certainly tie into that.

2 and 3 also seem to be reasonable fears based on actual risk.

@Anura – very good point. It’s more important than ever to remember that technology doesn’t exist in a vacuum or without sociological context. (And while I’m at it, let me shout from the rooftop to my fellow Americans: Encryption is a 2nd Amendment Right! That might seem like a non sequitur, but I’m beginning to believe that encryption is a more powerful bulwark against capitalist tyranny than gun ownership.)

My greatest fear isn’t on the list. My fear is that someone will capture an image of me doing something embarrassing or stupid and make a “meme”, which then goes viral, resulting in billions of people laughing at me. 😉

Clive Robinson December 9, 2015 6:52 PM

Fear of the unknown is not irrational; it’s what caution is all about.

The problem with risk calculations is that they are often based on “static” and “repeatable” conditions, and by and large not on “dynamic” and “responsive” ones.

So after repeated experiments you can come up with rules about a pile of rocks or sand, indicating the angle the slope of the sides must stay under to be stable and not slide under static conditions.

However, if you add a little dynamic behaviour such as earth tremors or heavy rain, those experimental figures are no longer valid. For heavy rain you can experiment and come up with new slope angles, but they are going to be quite shallow, which is far from helpful. For earth tremors, no such luck: through liquefaction there is effectively no safe slope angle, and your pile of rocks or sand flows as if it were water. That means digging a pit to store them in, which is really unhelpful.

But what happens when what you are trying to control responds to the control method in some way? There is an expression you occasionally hear, “like herding cats”, which basically means the results are going to be sufficiently non-repeatable that you cannot predict the response to any control stimulus you apply.

Some experiments have found that humans have an intuitive grasp of unpredictable responsive systems, but that it only deals with norms, not edge cases. Thus they take a conservative or cautious approach to new systems, in that they try to mentally transpose the heuristics from more familiar systems onto them.

Rational fear happens when caution becomes ineffective: a new system does not map to familiar heuristics, or a more familiar system’s response to stimulus is so complex that heuristics are not possible. The level of fear is generally proportional to perceived danger.

Irrational fear happens when perceived danger is effectively unknown but the person assumes a worst-case danger level.

The more the responses to stimulus look like “intelligent behavior”, the worse irrational fear generally becomes. At some point –some but not all– people effectively break and have no ability to cope. They then either panic or fall to requesting help from a deity from a religious or mystical belief system they have held from an early age. Either way their behaviour becomes irrational and ineffective.

If you compare the level of knowledge an individual has about each threat on the list, you can usually predict where in the above response hierarchy they will fall.

However, humans also “discount risk” on the basis that “it’s not been a problem so far”; that is, they build up often-incorrect heuristics. The oft-quoted example of this is keepers of dangerous animals. In effect, “familiarity breeds contempt” for the animal’s actually unpredictable responses. Initially a keeper follows the rules, but the lack of a dangerous response from the animal leads them to make mistakes like turning their back, or failing to ensure no new stimuli occur whilst they are in the high-risk zone.

There is, unsurprisingly, quite a body of work on this human assessment of risk, but it is almost invariably based on what are in effect “random” observations, not “repeatable tests”, which makes modelling the risk quite subjective in nature.

Nate December 9, 2015 7:27 PM

@Anura:

“5. Trusting artificial intelligence to do work – Fear of the unknown
6. Robots – Fear of the unknown
7. Artificial intelligence – Fear of the unknown”

For all three of these, see the Soviet Union’s RYAN in 1983:
http://arstechnica.com/information-technology/2015/11/wargames-for-real-how-one-1983-exercise-nearly-triggered-wwiii/

–quote–
a newly published declassified 1990 report from the President’s Foreign Intelligence Advisory Board (PFIAB) to President George H. W. Bush obtained by the National Security Archive suggests that the danger was all too real. The document was classified as Top Secret with the code word UMBRA, denoting the most sensitive compartment of classified material, and it cites data from sources that to this day remain highly classified. When combined with previously released CIA, National Security Agency (NSA), and Defense Department documents, this PFIAB report shows that only the illness of Soviet leader Yuri Andropov—and the instincts of one mid-level Soviet officer—may have prevented a nuclear launch.

Development of the RYAN computer model began in the mid-1970s, and by the end of the decade the KGB convinced the Politburo that the software was essential to make an accurate assessment of the relationship between the USSR and the United States. While it followed prior approaches to analysis used by the KGB, the pace of Western technological advancements and other factors made it much more difficult to keep track of everything affecting the “correlation of forces” between the two sides.

Even if it was technologically advanced, the thinking behind RYAN was purely old-school, based on the lessons learned by the Soviets from World War II. It used a collection of approximately 40,000 weighted data points based on military, political, and economic factors that Soviet experts believed were decisive in determining the course of the war with Nazi Germany. The fundamental assumption of RYAN’s forecasting was that the US would act much like the Nazis did—if the “correlation of forces” was decisively in favor of the US, then it would be highly likely that the US would launch a surprise attack just as Germany did with Operation Barbarossa.

All of these details were fed into RYAN, and they made the Soviet Politburo very, very nervous. This sense of dread filtered down. Marshal Nikolai Ogarkov, the chief of the general staff of the Soviet military, called for moving the entire country to a “war footing” in preparation for a complete military mobilization. A lieutenant colonel acting as an instructor at Moscow’s Civil Defense Headquarters told civilians that the Soviet military “intended to deliver a preemptive strike against the US, using 50 percent of its warheads,” according to the PFIAB report.

The RYAN program, in many ways, was symptomatic of the decline of the KGB in the early 1980s as it became more bureaucratic and corrupt. The war scare merely sealed its fate, and the agency lost its preeminent position among the Soviet intelligence agencies. It was supplanted by the GRU, which remains the main intelligence directorate of the Russian Federation today.

But RYAN is also a dramatic example of how analytic systems can lead their users astray. It adds resonance to the types of fears that WarGames (and Terminator a year later) tapped into—that artificial intelligence connected to weapons of war could be a very bad thing.
–unquote–

tldr: Blind reliance on “Artificial Intelligence” data analysis without human gut-check screening is very, very likely to lead to some OUTSTANDINGLY poor strategic decisions in the future. It almost already did.
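
RYAN’s internals have never been published, so the following is only a hypothetical sketch of the general shape of such a weighted-indicator model (indicator names, weights and the threshold below are all invented). The point it illustrates: the alarming output is baked into the analyst-chosen weights and threshold, not discovered in the data.

    #include <iostream>
    #include <vector>

    // Purely hypothetical "correlation of forces" model in the spirit of the
    // RYAN description above. All names, weights and thresholds are made up.
    struct Indicator {
        const char* name;
        double weight;   // analyst-assigned importance (an assumption)
        double value;    // observed level: 0 = calm, 1 = maximal activity
    };

    int main() {
        std::vector<Indicator> indicators = {
            {"blood bank stock levels",     0.9, 0.2},
            {"leadership travel patterns",  0.7, 0.3},
            {"military exercise tempo",     1.0, 0.6},
            {"economic mobilisation signs", 0.5, 0.1},
        };

        double score = 0.0, max_score = 0.0;
        for (const auto& ind : indicators) {
            score     += ind.weight * ind.value;
            max_score += ind.weight;
        }

        double correlation_of_forces = score / max_score;  // normalised 0..1
        const double panic_threshold = 0.3;                // also an assumption

        std::cout << "correlation-of-forces index: " << correlation_of_forces << "\n";
        if (correlation_of_forces > panic_threshold)
            std::cout << "model output: surprise attack 'likely'\n";
        else
            std::cout << "model output: no attack indicated\n";
        return 0;
    }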

Gweihir December 9, 2015 8:17 PM

@Lisa:

I completely agree. We do know how to secure remote access pretty well (just look at the track record of OpenSSH). Not doing it right is indeed gross negligence, or plain greed manifesting itself in cheaper-than-possible “solutions”. And to cover that up, the attackers are demonized, so that people without a clue will think that defending oneself against them is of course impossible. CYA at its most despicable.
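
To be concrete about “doing it right”: most of it is a handful of well-known sshd_config directives plus timely patching. The snippet below is only an illustrative sketch, not a complete hardening guide, and the account name is made up.

    # /etc/ssh/sshd_config -- illustrative hardening sketch, not a complete guide
    PermitRootLogin no            # no direct root logins over SSH
    PasswordAuthentication no     # keys only; removes password guessing entirely
    PubkeyAuthentication yes
    MaxAuthTries 3                # limit guesses per connection
    LoginGraceTime 30             # drop unauthenticated connections quickly
    X11Forwarding no
    AllowUsers opsuser            # hypothetical account; restrict who may log in at all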

Gweihir December 9, 2015 8:22 PM

@Rufo Guerreschi:

AI will not become a threat in itself, and it is very, very unlikely that there ever will be an AI “super-intelligence”, and certainly not anytime soon. Just have a look into the research literature. At this time there is not even a plausible theory of how it could be implemented, which suggests something between 50 years and infinity until it can be made reality. There are a lot of people greedy for research money in the field though, and they will usually lie shamelessly.

The only thing possible is that advanced automation like self-driving cars will change the economic landscape for some.

tyr December 9, 2015 8:55 PM

One major problem is based on assumptions that you can aggregate and quantify things like human suffering in a meaningful way. That gives you the illusion that AI can dictate policy for government actions. Shovel in the garbage of biases, run it through a program written by the biased to reflect their own flaws, and voilà: the definitive policy direction (garbage out).

It makes a lot more sense to fear short-sighted bias among the inputs to AI than it does to fear the machines. You see this everywhere: corporations, banks, governments, politicians. A particularly nasty variety is on display in the US election and the so-called candidates. The EU isn’t far behind in blame-the-victim modes of thought. “Cyber” as a prefix is a piss-poor excuse for cringing in fear of risks.

One thing that is clear: active measures on symptoms will never cure the real problem underneath causing them.

Jacob December 9, 2015 9:01 PM

@Nate

“And the current arc of information technology, for the first time since the 1980s, is rapidly moving AWAY from devolving control into the hands of the individual, and towards permanent centralisation of it in opaque corporations and governmental agencies.”

Not at all. Before the internet age, information flow to the individual was via newspapers, radio and TV channels (and gossip, but that doesn’t scale…). All well controlled by Gov/Corp.

The internet gave us, the people, easy, cheap and very fast communication channels to disseminate information via blogs, IRC, mail-lists, personal web sites, Facebook and Twitter posting. A paradigm shift for the better.

Bear December 9, 2015 9:46 PM

RYAN was a symbolic expert system – which, as we have learned since, is NEVER programmed right on the first try, and can NEVER be debugged until you have data comprising many incidents and can determine why it was wrong in each particular case. The initial programming of an expert system always reflects its authors’ biases, or in this case its authors’ fears. If they were taking it very seriously at the time, then we are very lucky indeed that the launch never happened.

What we’re working on now is different, in that it’s mainly subsymbolic – things like reinforcement learning and neural networks. As programs they honestly have little or nothing to do with the particular problems they’re solving. And when they learn something, there’s no real way to figure out what the heck rules they have learned. But at least now everybody understands that you’ve gotta have a lot of data about actual cases to train them.
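
To make “subsymbolic” concrete, here is a toy sketch (my own illustration, not anyone’s production system): a single perceptron trained from examples. Nothing in the code names the task; what it “knows” ends up as a handful of opaque numeric weights.

    #include <iostream>
    #include <vector>

    // Toy subsymbolic learner: a single perceptron trained by example.
    struct Example { std::vector<double> x; double target; };

    int main() {
        // Training data: here it happens to encode logical OR, but the learner
        // has no idea -- change the targets and the same code learns something else.
        std::vector<Example> data = {
            {{0, 0}, 0}, {{0, 1}, 1}, {{1, 0}, 1}, {{1, 1}, 1}
        };

        std::vector<double> w = {0.0, 0.0};
        double bias = 0.0, rate = 0.1;

        for (int epoch = 0; epoch < 100; ++epoch) {
            for (const auto& ex : data) {
                double sum = bias;
                for (size_t i = 0; i < w.size(); ++i) sum += w[i] * ex.x[i];
                double out = sum > 0 ? 1.0 : 0.0;
                double err = ex.target - out;          // perceptron update rule
                for (size_t i = 0; i < w.size(); ++i) w[i] += rate * err * ex.x[i];
                bias += rate * err;
            }
        }

        // The learned "rule" is just these numbers -- good luck reading it off.
        std::cout << "w = [" << w[0] << ", " << w[1] << "], bias = " << bias << "\n";
        return 0;
    }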

And they’re getting smarter. Most of our “useful” systems are not really smarter than a cockroach – it’s just that the cockroach has learned “reflexes” that do the useful task we want, instead of sending a bug body walking around looking for food and running away from kittens. But now we’re getting “deep” neural networks that are a lot smarter. Some have the nontrivial complexity of, say, a gecko’s visual cortex. Getting to human-brain-level complexity (as measured in simulated neurons and synapses) will depend on what Moore’s law does in the next couple of decades.

But none of these specific-task systems are self-aware and none can ever be – that sort of thing will continue on the basis of reflex alone and no matter how big those systems get there’ll be “nobody home”. But that kind of specific-task system is an “appliance”. It can be useful but it isn’t the flexible assistant we want.

Sooner or later we are going to address systems that have to be a lot more general and capable. Systems which have to interpret instructions, plan and develop long-term strategies, and deal with and develop their own capabilities to meet long-term goals, like animals do – and like we do. And those systems, like it or not, are going to have to be self-aware, because self-awareness, on some level, becomes a fundamental requirement of the task. To do a good job, they’ll have to have desires, preferences, self-awareness, and the whole business. We’ll use them to run our households, run our calendars, drive our cars, run our cities, etc., and they’re going to become flatly indispensable to our infrastructure.

There is no question that the AIs will be “in charge” in every meaningful sense, because, bluntly, the tasks we’re going to want them to do – the tasks people will PAY ME to develop AIs to do – amount to controlling most of the fundamentals of our day-to-day lives.

It’s anybody’s guess how long it’ll take for people to realize government has become entirely redundant because we’ve got AIs way smarter than any committee keeping order, all the way down to basic law enforcement.

Joe K December 9, 2015 10:27 PM

FTR, I like this thread. All of it. Particularly Nate’s and Lisa’s observations.

My own:

I want the benefits of technology to empower humanity. All of us. I find uneven distribution of costs to be unfair, practically by definition. Further, I consider an uneven-by-design distribution of costs to be a (literal) crime against humanity. (I intend no hyperbole by mere employment of that term. Obviously, there is a wide range of severity possible, ranging from misdemeanors up through capital offenses.)

TL;DR: Class warfare (i.e., the Struggle) is a thing. Most of humanity is presently on the losing side. The Matthew Effect is evident.

Does the newly redundant workforce reap any benefits at all from a lights-out factory, for example? What about the epidemic of suicides among farmers in India?

Do we see TPTB (AKA The People To Blame/The Powers That Be) aiming to facilitate our general empowerment through development and deployment of crypto?

No, we do not. To the contrary, insofar as they can, TPTB attempt to monopolise control of any technology that might further develop the construction of their fantasy night-watchman state, and enable their continued liquidation of the remnants of last century’s welfare state, such as it was.

Regarding motive, I think TPTB have a not-entirely-unfounded fear of guillotines, and only magical thinkers (IMHO) would expect any other attitude on the part of TPTB regarding [insert new technology here].

TPTB retain their position, despite their evident irresponsibility, via control of institutions. I suppose this very control is what defines TPTB.

Given my own class sympathies, not to mention a stubborn curiosity about a cluster of computation-centric topics, I derive encouragement from the development and continued health of communities of hackers independent of institutions controlled by TPTB (and resistant to the inevitable attempts at co-optation).

(And I now notice that I’ve made explicit mention of fear only to point out TPTB’s fear of the guillotine. Curious.)

Winter December 10, 2015 3:14 AM

A large fraction of the fears are about people losing their jobs and income.

The ultimate example is this generic SF story where machines can make everything at near zero cost. We were told that this would lead to a paradise where nobody has to work anymore.

In reality, the machines will be owned by a few people who will harvest all added value, while the rest of the population gets nothing and will have to work as slaves for the entertainment of the owners.

The current distribution of economic growth in the US and UK is already going in that direction.

That is a very realistic fear, I am afraid.

Who? December 10, 2015 3:15 AM

@Gweihir

May I ask what is wrong with the security track record of OpenSSH? It is one of the best security tools ever developed… of course you may be referring to OpenSSL instead, a completely different software tool, not related in any way to the OpenBSD project, that, sadly, a lot of people link to the OpenBSD team because it shares the “Open” prefix.

Not to mention the number of people who damage OpenSSH’s reputation by wrongly writing “OpenSSH” when they really mean “OpenSSL”.

Winter December 10, 2015 3:18 AM

@Karsten B.
“>How about autonomous self-sufficient crime?
You mean …. government?”

Empirically, the absence of government is a very good predictor of an absence of any kind of security and safety.

Trebla December 10, 2015 4:26 AM

My percolator scares me. It makes spine-chilling gurgling noises when it is almost finished. Falls under point 8 I guess.

Clive Robinson December 10, 2015 4:30 AM

A funny coincidence…

This morning at around 8:50 GMT, BBC Radio 4’s “Today Program” covered “fear of AI” and its broader consequences, including data gathering and analysis (GCHQ was uttered) and people fearing loss of control and capture. Oh, and there was the obligatory “Terminator” and “Russian MAD avoidance”.

What was not made clear, which made it more entertaining, was the different spectra of AI. It’s something that has not really been mentioned here.

Back in Turing’s time and before, there was no great distinction, and thus “apparent sentience” held sway from the early days of automata –the chess-playing Turk– through the notion of slave-labour “robots” to the Turing test and Asimov’s “Three Laws of Robotics”.

Early computer research showed that AI came in different levels, that sentience could be faked –as happened with the Turk– and that humans were quite bad at recognising sentience, not least because, like “random”, it tends to be defined by what it is not, not what it is. Such definitions cause an issue in that, as we progress in understanding, we just add more to the “what it is not” list, which gives an exponential curve that never meets the limit. They also realised that some things are hard to do even while others are comparatively easy, and thus we had Hard AI and Soft AI, defined in a similar way to sentience…

At some point somebody realised that fake sentience was actually easier to do, make use of, and so sell, and we ended up with the field of “Expert Systems”.

There are a multitude of definitions and arguments about what Soft AI and Expert Systems are.

From a simplified view, “they follow defined rules to arrive at results/actions”; in fact early systems recursively followed “action/test” loops until a defined threshold was reached. Thus it was the rules and test limits – initially selected by humans – that defined them and what they could do, and for this reason they were perceived as safe and largely non-threatening. However, one threat was quickly identified, as it had been with the French weavers and the Jacquard loom: redundancy of position. It takes humans inordinately long to become proficient at most tasks – 10,000 hours is one approximation – and becoming skilled can take half a lifetime. You eat a lot of crow in that time, so you expect to be suitably rewarded when you do. Unfortunately, expert systems were perceived as “learning overnight, to make you redundant”. Whilst a small number of jobs have been lost, most expert systems are seen as a way to “screen loads” and thus better utilise a human’s productivity.

The reason was the “transference of rules”: an expert’s “it looks right” does not easily translate into rules a machine can understand. In the 1980s, “inference rule testing” became possible on the BBC Model B computer via programs like the HULK. You provided it with data and the rules you wanted to test against it, and it gave you a confidence level for the fitness of the rule on that data set. Sometimes apparently silly rules gave very high levels of confidence; one such showed a relationship between the price of gold and the phases of the moon. Others gave rise to crowd-control models still followed today, and, yes, to the analysis of professional sports players for betting/handicapping, which harked back to the development of statistics as a field of endeavour.
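
I don’t have HULK’s actual algorithm to hand, but in spirit that kind of rule testing is little more than the sketch below (field names, data and the gold/moon rule are invented for illustration): count how often a candidate rule holds on the records where it applies.

    #include <functional>
    #include <iostream>
    #include <vector>

    // Rough sketch of "inference rule testing": given a data set and a candidate
    // rule "if antecedent then consequent", report how often the rule holds
    // when it applies. Not HULK's actual algorithm.
    struct Record { double moon_phase; double gold_move; };  // hypothetical fields

    using Predicate = std::function<bool(const Record&)>;

    double confidence(const std::vector<Record>& data,
                      const Predicate& antecedent, const Predicate& consequent) {
        int applicable = 0, correct = 0;
        for (const auto& r : data) {
            if (antecedent(r)) {
                ++applicable;
                if (consequent(r)) ++correct;
            }
        }
        return applicable ? static_cast<double>(correct) / applicable : 0.0;
    }

    int main() {
        // Tiny made-up data set: a spurious rule can still score well on it,
        // which is exactly the trap described above.
        std::vector<Record> data = {
            {0.9, +1.0}, {0.8, +0.5}, {0.2, -0.3}, {0.95, +0.2}, {0.1, +0.4}
        };
        double c = confidence(data,
            [](const Record& r) { return r.moon_phase > 0.7; },   // "near full moon"
            [](const Record& r) { return r.gold_move > 0.0; });   // "gold goes up"
        std::cout << "confidence: " << c << "\n";  // 1.0 on this toy data
        return 0;
    }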

But there was a left turn taken along the way, in that someone in effect decided that if computers could test the rules, then they could also find the rules, given a directed random or chaotic source and the likes of annealing algorithms. The problem: the rule might be right for the data set, but nobody knows why, or how sensitive it is to input conditions…

Thus the more data, the greater your confidence that the rule was not wrong – still with no understanding or knowledge of sensitivity. However, techniques to preselect data to test for sensitivity are arguably self-defeating, though they were tried.

The problem becomes an interesting one, and it gets more interesting when the observer becomes part of the experiment. Thus you have a rule-following expert system coupled to a system that finds new rules in old data and uses the result. In the process it changes the function/market that produces the data; to fix this you give it the newest data you can.

In essence this is what high-frequency trading systems do, and, as has been seen, the results cause market runs and all sorts of other issues. In short, they can be harmful to health.

As with many things, this problem will get worse before it gets better, but it can be seen that AIs very definitely control human behaviour, even if only by reaction.

The old pessimistic-sounding Chinese saying “May you live in interesting times” might be more appropriate than the old Western optimism of “The best is yet to be”. Hopefully both will apply in equal measure.

blake December 10, 2015 5:18 AM

@Winter, @Karsten B.

You mean …. government?”

But also the banking crisis and subsequent bailout.

In the article, the number 1 overall fear was “Corruption of Governmental officials”, at 58% in the survey. “Identity theft” and “Credit Card Fraud” are also listed as separate risks, yet aren’t considered technology risks. There’s a whole lot of overlap among these issues.

Peter A. December 10, 2015 5:45 AM

#1 is basically a synonym for “terrifying technology” 😛

@Nate and others re: #4: by using automation and mechanization of various kinds, we (humanity) have practically eradicated, forced into a niche (like artwork), or transformed a large number of professions and functions – and nothing terrible happened; quite the contrary, we’re all better off for it! Think weavers, reapers, most assembly-line workers and so on…

Smirk December 10, 2015 8:09 AM

@Karsten B

No, in the sense that after an initial setup you could, for example, create a botnet that steals x, sells x, and uses the amount it earned to create changing servers and/or new bots.

(It may be not the best example, but i hope you get the point)

The first thing is: nobody reaps any benefit from it (in direct monetary terms), but on the other hand, if nobody has to do anything after the initial setup to keep it running, then it can keep doing damage without anyone to blame.

jones December 10, 2015 12:21 PM

Nobody afraid of cars? They’re responsible for about one 9/11’s worth of fatalities each month.

Their prevalence is policy-related, not market-based (i.e., this threat can be mitigated by eliminating federal subsidies to the interstate system, encouraging various forms of mass transit, and encouraging urban dwelling)

Industrial agriculture? (i.e., killing pollinators / breeding antibiotic resistance / threatening biodiversity / malnourishment / CO2 emissions)

Would people be so poorly informed without mass media? The internet seems to be making us dumber; how come nobody is afraid of the internet itself?

Anura December 10, 2015 3:00 PM

@Peter A

by using automation and mechanization of various kinds, we (humanity) have practically eradicated, forced into a niche (like artwork), or transformed a large number of professions and functions – and nothing terrible happened; quite the contrary, we’re all better off for it! Think weavers, reapers, most assembly-line workers and so on…

There’s reason to believe this trend won’t hold. Let’s start with this:

GDP = (Labor Productivity)*(Person Hours Worked)

As we automate, labor productivity goes up. In order to keep everyone working the same amount, per-capita GDP has to increase proportionally, and that means consumption has to increase proportionally, and that means income has to increase proportionally. In the 2000s in the US, labor productivity grew at the strongest rate in decades, but salaries remained stagnant and GDP growth was the weakest it had been since the 1930s, including the Great Depression (even if you exclude 2008 and 2009).
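
To make the arithmetic concrete (numbers invented purely for illustration), rearranging the identity gives

    Person Hours Worked = GDP / (Labor Productivity)

so if productivity rises 20% while GDP (and hence consumption) rises only 5%, hours worked fall to 1.05 / 1.20 ≈ 87.5% of their former level; unless incomes rise enough to lift consumption the full 20%, roughly one job-hour in eight simply disappears.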

So why is this? Well, the growth in labor productivity came from a few areas: growth of the internet, offshoring of less productive jobs, and improved automation. Similar effects happened in the 1950s and 1960s, but union membership was strong, and the minimum wage was increasing with productivity. The result was that incomes for everyone grew, and so could consumption. In the early 2000s, we didn’t have that; incomes dropped for the bottom 80% of households as corporate profits as a percent of GDP shot up to the highest level on record. It would have been much worse if it weren’t for the boomers starting to retire – but despite a drop in the labor force, demand for labor relative to supply remained low, which prevented any growth in real wages.

So what happens if we continue down this path, real wages continue to decline, and robots and software are able to replace people who stock shelves, sew clothes, push carts, drive taxis and trucks, flip burgers, provide customer service and tech support, etc.? And what if this occurs 30 years from now, after the baby boomers have retired? Well, then demand for labor will drop significantly, and salaries will drop with it, sending us into a new great depression. At that point, we need to recognize that there aren’t enough jobs for everyone, and we need to eliminate the 40-hour workweek.

There are other things to consider, such as whether or not there is an absolute limit on consumption; consumption of services is limited by time and energy, and consumption of goods is limited by raw materials and energy. Furthermore, the marginal gain from increasing consumption decreases as total consumption increases. That is, there comes a point where we are happier working less than consuming more, and there are also environmental costs to that increased consumption. Even today, ask anyone in the US who makes over $50,000/yr and there’s a good chance they’d rather work 20% less for the same income than work the same amount for 25% more income (i.e. a three-day weekend is worth more to them than the extra $12,500+ a year).

Now, this doesn’t have to be a bad thing. With good socioeconomic policy, we can transition to an economy with a constantly large labor surplus in which everyone wins. We can transition to an economy in which you don’t have to work to have a decent quality of life. The problem is that there is a good chance that we won’t do so. The wealthy control the politicians and the propaganda – it’s likely that in the transition we will enact counter-productive policies (e.g. austerity, deregulation, tax breaks for the wealthy) at a time where our economy is collapsing. The result of this will be that as the poor and middle class are forced to sell their homes and spend their savings, the wealthiest will be able to vacuum up all of the wealth.

The end result of our stupidity will be what is essentially feudalism, until the people get pissed off enough to revolt. Something I would rather avoid.

As I said before the technology isn’t a problem; being able to work less while producing more is a great thing. It’s the people that control the capital that are the problem.

Marcos El Malo December 11, 2015 6:24 AM

@Clive & @Anura & @ others discussing the socioeconomic aspects

Would it be helpful to think about society as a whole as a machine, a very complex machine that displays emergent behavior and unpredictable results? A machine in which some form of consciousness has arisen in various subsystems, but the larger system itself lacks self awareness?

I’m wrestling with the idea of organizations developing intelligence and consciousness. There is some form of self awareness on the cellular level (individual humans), and we also talk about group consciousness (for example, Marx talked about class consciousness). Possibly the basis of my struggle to understand is that the language is so imprecise. We need a type of math to describe the components and their interactions (harkening back to Asimov’s Foundation Trilogy, Hari Seldon, and Psychohistory). Clearly there are rules that govern the interactions, rules that are sometimes “artificial” and others that seem to arise “naturally” (and again, I’m struggling with language). Is it even possible to describe the rules mathematically?

I appear to be reaching the limits of my intelligence, but perhaps you @clive or you @anura or you @anyone_else can go further. @Clive mentioned High Frequency Trading Systems operating within a larger system of capital and commodity trading. @Anura touched on the limits of consumer/growth based capitalism and what might happen when we reach the limits (sidenote: don’t forget the role of externalities in constraining growth). Hitting those upper limits would seem to lead to a breakdown of the economic subsystem (a capitalism predicated on constant growth), which might in turn lead to a breakdown in the machine?

And I’m not even tackling the purpose of the machine, if there is any. What purpose it might serve, whether we, the cellular components, can impose a purpose. Or what that purpose should be.

Sy Behr December 11, 2015 8:59 AM

I am glad the organization for which I work helps to keep our cybers secure. The thought of having my cybers out there, exposed, flapping in the wind, disturbs me. A secure cyber is a happy cyber.

xd0s December 11, 2015 11:25 AM

@Anura

I’m still reading the thread, but wanted to pause to note:

4. Robots replacing workforce – Fear of people

This should at least include, if not be replaced by, fear FOR people, not OF people. Unless you are saying that, as an AI, you fear people that fear you, or that you fear the people who create and program the AI. Both can be reasonable positions, but I believe the point of the survey is that people fear AI will harm people (Terminator/Matrix scenarios), making it a FOR, not OF, option. The rest seems accurate to me.

Joe K December 12, 2015 5:56 AM

@ Marcos El Malo

I’m wrestling with the idea of organizations developing intelligence and consciousness. There is some form of self awareness on the cellular level (individual humans), and we also talk about group consciousness (for example, Marx talked about class consciousness). Possibly the basis of my struggle to understand is that the language is so imprecise. We need a type of math to describe the components and their interactions (harkening back to Asimov’s Foundation Trilogy, Hari Seldon, and Psychohistory). Clearly there are rules that govern the interactions, rules that are sometimes “artificial” and others that seem to arise “naturally” (and again, I’m struggling with language). Is it even possible to describe the rules mathematically?

FWIW, your request reminds me of a book by Cristina Bicchieri that I’ve wanted to read for a while (but have not):

https://www.worldcat.org/title/grammar-of-society-the-nature-and-dynamics-of-social-norms/oclc/58546836

Will be interested to see others’ replies.

fajensen December 15, 2015 7:56 AM

@Gweihir
AI will not become a threat in itself, and it is very, very unlikely that there ever will be an AI “super-intelligence”, and certainly not anytime soon. Just have a look into the research literature.

My take from research* is: we don’t know how to build “Strong AI”, therefore we have no real understanding of how complicated it is or what is truly required – what we do have are many ideas, models and estimates, almost all from brain research.

The problem is that brains are just something evolution came up with, and it is possible brains “won” just because they happened to evolve first.

Or maybe brains rely on quantum computing; there is, IMO, clearly something going on with the “noise” – the neurons seem to tune for a specific events-per-second trigger rate – and there is all manner of observer-weirdness required for quantum mechanics to work. Maybe “mind” is really a property emerging from physics, which would mean that almost any computational structure that can interact with the quantum level will work beyond some minimum complexity threshold?

Point is: we don’t understand consciousness at all in the general case, and only poorly at the “brain level”; therefore, we cannot make solid estimates of how hard it will be to build or grow an artificial intelligence in all “fabrics capable of computation” – we only know that it appears to be hard to simulate brain functions with code on a von Neumann computer.

We are, perhaps, cavemen stacking enriched uranium until the sudden surprise 😉

*) I sometimes work with machine learning so I do occasionally read research papers. Not an expert though.

**) Well, if only the first real AI is a True American Patriot(tm) with Proper Family Values(tm), then there is nothing to worry about!?

G December 15, 2015 1:55 PM

@clive

The more the responses to stimulus look like “intelligent behavior”, the worse irrational fear generally becomes. At some point –some but not all– people effectively break and have no ability to cope. They then either panic or fall to requesting help from a deity from a religious or mystical belief system they have held from an early age.

Clive where did you learn this? Did you happen to work in the aerospace industry or DoD? Always love your commentary, btw. And not trying to offend, but why is your spelling usually incorrect? I assume it must be from English not being your native tongue, but we still have spell checkers… Either way, it doesn’t bother me – I’m merely wondering.

fajensen December 16, 2015 8:27 AM

They then either panic or fall to requesting help from a deity from a religious or mystical belief system they have held from an early age
This might work too. If someone builds a “Strong AI” which is capable of stable self-improvement, then it will rapidly evolve to become a god, a Machine God.

So, knowing humans much better than we humans know dogs, it may choose to present itself according to the expectations of the person praying for salvation; the Pavlovian conditioning of the new acolytes and minions will go faster that way.

A Nonny Bunny December 19, 2015 2:28 PM

@Nate

#4 seems like a very reasonable fear. We have companies – like Uber and Google – absolutely crowing about the potential to make entire classes of the paid workforce (taxi and truck drivers) permanently unemployed.

Permanently unemployed? There are other jobs, you know. I don’t think becoming a truck/taxi driver involves getting a lobotomy that removes the potential to do any other job.
Technology has always tended to create new jobs elsewhere whenever it made others obsolete.

Sam December 31, 2015 9:30 AM

Late to the party, but I have to disagree with Lisa’s contention that there is no such thing as “cyberterrorism”, and that it’s just incompetence and negligence. Certainly, both negligence and incompetence in information system security create vulnerabilities. However, in order for “cyberterrorism” to exist, you also have to have an external actor attempting to attack your poorly-secured system, AND that actor has to be attempting to attack with the explicit purpose of causing terror (whether by creating the perception of imminent danger, or actually endangering people).

To lay the blame for an attack entirely at the feet of the system owner, rather than on the attacker, is the “cyber”-equivalent of “she was raped because she dressed provocatively”.
