Data and Goliath's Big Idea

Data and Goliath is a book about surveillance, both government and corporate. It's an exploration in three parts: what's happening, why it matters, and what to do about it. This is a big and important issue, and one that I've been working on for decades now. We've been on a headlong path of more and more surveillance, fueled by fear--of terrorism mostly--on the government side, and convenience on the corporate side. My goal was to step back and say "wait a minute; does any of this make sense?" I'm proud of the book, and hope it will contribute to the debate.

But there's a big idea here too, and that's the balance between group interest and self-interest. Data about us is individually private, and at the same time valuable to all of us collectively. How do we decide between the two? If President Obama tells us that we have to sacrifice the privacy of our data to keep our society safe from terrorism, how do we decide if that's a good trade-off? If Google and Facebook offer us free services in exchange for allowing them to build intimate dossiers on us, how do we know whether to take the deal?

There are a lot of these sorts of deals on offer. Waze gives us real-time traffic information, but does it by collecting the location data of everyone using the service. The medical community wants our detailed health data to perform all sorts of health studies and to get early warning of pandemics. The government wants to know all about you to better deliver social services. Google wants to know everything about you for marketing purposes, but will "pay" you with free search, free e-mail, and the like.

Here's another one I describe in the book: "Social media researcher Reynol Junco analyzes the study habits of his students. Many textbooks are online, and the textbook websites collect an enormous amount of data about how--and how often--students interact with the course material. Junco augments that information with surveillance of his students' other computer activities. This is incredibly invasive research, but its duration is limited and he is gaining new understanding about how both good and bad students study--and has developed interventions aimed at improving how students learn. Did the group benefit of this study outweigh the individual privacy interest of the subjects who took part in it?"

Again and again, it's the same trade-off: individual value versus group value.

I believe this is the fundamental issue of the information age, and solving it means careful thinking about the specific issues and a moral analysis of how they affect our core values.

You can see that in some of the debate today. I know hardened privacy advocates who think it should be a crime for people to withhold their medical data from the pool of information. I know people who are fine with pretty much any corporate surveillance but want to prohibit all government surveillance, and others who advocate the exact opposite.

When possible, we need to figure out how to get the best of both: how to design systems that make use of our data collectively to benefit society as a whole, while at the same time protecting people individually.

The world isn't waiting; decisions about surveillance are being made for us--often in secret. If we don't figure this out for ourselves, others will decide what they want to do with us and our data. And we don't want that. I say: "We don't want the FBI and NSA to secretly decide what levels of government surveillance are the default on our cell phones; we want Congress to decide matters like these in an open and public debate. We don't want the governments of China and Russia to decide what censorship capabilities are built into the Internet; we want an international standards body to make those decisions. We don't want Facebook to decide the extent of privacy we enjoy amongst our friends; we want to decide for ourselves."

In my last chapter, I write: "Data is the pollution problem of the information age, and protecting privacy is the environmental challenge. Almost all computers produce personal information. It stays around, festering. How we deal with it--how we contain it and how we dispose of it--is central to the health of our information economy. Just as we look back today at the early decades of the industrial age and wonder how our ancestors could have ignored pollution in their rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we addressed the challenge of data collection and misuse."

That's it; that's our big challenge. Some of our data is best shared with others. Some of it can be 'processed'--anonymized, maybe--before reuse. Some of it needs to be disposed of properly, either immediately or after a time. And some of it should be saved forever. Knowing what data goes where is a balancing act between group and self-interest, a trade-off that will continually change as technology changes, and one that we will be debating for decades to come.

This essay previously appeared on John Scalzi's blog Whatever.

EDITED TO ADD (3/7): Hacker News thread.

Posted on March 6, 2015 at 2:10 PM • 39 Comments

Comments

Simon • March 6, 2015 2:52 PM

Maybe "our data" sounds innocuous, like what we ate for breakfast. Nobody cares.

And if you mean data used in identity theft, or data that constitutes a threat to national security, it doesn't sound like it.

Doubletalk • March 6, 2015 3:05 PM

It is not logical to ascribe values to groups.

Since a group is not alive, it can't have values.

A group is nothing more than several people regarded collectively. The membership criteria for a group are the result of a choice made by the individual thinking about the group. Different people may decide on different boundaries for different groups composed of the same individuals. In other words, there is no natural and definite group identity to which values can be assigned.

It may well be true to say that the members of a group all have values, but this is to use the term values in a distributive sense that applies to the members individually rather than collectively to the group as a whole.

Anonymous Coward • March 6, 2015 3:19 PM

There are crypto systems and information systems alike that allow us to negotiate the trade-offs between group and individual interest. Both privacy-preserving statistics and secure multi-party computation, if truly invested in, are practical technological tools for dealing with these (and other) problems. In fact, real-world applications of both of these technologies have been applied by governments to solve these dilemmas (e.g., the Danish sugar beet market).

When SSL was created it was slower, relative to the user experience of the day, and less tried and tested. SMPC and PPS can and should be built today. It will be a decade or two before they are mature and adopted everywhere, but very much as with SSL, it is of fundamental importance to start today.
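For readers wondering what secure multi-party computation looks like in miniature: below is a toy sketch of additive secret sharing, one of the core tricks behind systems like the sugar beet auction mentioned above. Each party splits its private value into random shares, and only the combination of everyone's shares reveals the aggregate. This is my own illustration (the function names and the prime are arbitrary), not any production protocol, which would also need secure channels and malicious-party defenses.

```python
import random

PRIME = 2**31 - 1  # all arithmetic happens modulo a public prime

def share(secret, n_parties):
    """Split `secret` into n random shares that sum to it mod PRIME.
    Any n-1 of the shares together are uniformly random, so they
    reveal nothing about the secret on their own."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def secure_sum(private_inputs):
    """Compute the sum of everyone's inputs without any party
    disclosing its own value: party j only ever sees the j-th share
    of each input, computes a partial sum, and the partials are
    combined at the end."""
    n = len(private_inputs)
    all_shares = [share(x, n) for x in private_inputs]
    partials = [sum(all_shares[i][j] for i in range(n)) % PRIME
                for j in range(n)]
    return sum(partials) % PRIME

bids = [120, 75, 300]  # e.g. private auction bids
print(secure_sum(bids) == sum(bids))  # True
```

The same share-then-aggregate idea generalizes from sums to the auction-clearing computations used in the Danish deployment, though real protocols use far more machinery than this sketch.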

vas pup • March 6, 2015 3:19 PM

@Bruce:"When possible, we need to figure out how to get the best of both: how to design systems that make use of our data collectively to benefit society as a whole, while at the same time protecting people individually."
I guess it depends. E.g., we test the health (including mental health) of LEOs at hiring--people who have about 30 rounds of ammunition (15x2) in their possession--and I think similar testing should be conducted on a prospective POTUS (who has the nuclear 'suitcase' in his possession) and on members of Congress, who may declare war. I mean that for some elected and appointed top-level government executives and judges, constituents should know substantially more, to prevent problems in the future (like Forrestal jumping out of the window). I guess level of health--and financial 'health' in particular, like mandatory tax return disclosure for at least the 3 years preceding the election--should be a prerequisite for top-level office at the state and federal level. To be honest with you and all respected bloggers, I don't care about the extramarital affairs of prospective or actual elected/appointed officials--that is their family problem (if it doesn't affect job functions).
And by the way, don't be afraid that a 'sleeper' or 'active' agent of a foreign government will be elected. The FBI is taking care of that (with a big margin of false positives--just in case).

When you want to marry somebody, both parties should be aware of the hidden skeletons of the prospective partner as well. The other partner should know everything, to decide yes or no with an open mind.

Conclusion: you may keep your personal information to yourself, and your privacy (as the right to be left alone) should be protected, until you ask for trust from other people: constituents, a prospective husband/wife, etc. Then trust requires truth and disclosure. My humble opinion.

tz • March 6, 2015 3:22 PM

Missing is the idea of individual rights. The discussion is purely utilitarian. Worse than Kelo (eminent domain--hey, if the government wants it, it can seize it and pay you what it thinks it is worth).

I participated in a survey at the mall last weekend. They paid me a nominal fee. One way to value data is with a paid opt-out: I give Google $X per month, and they don't track me or inspect the data in Drive or Gmail, or whatever. What is X? How much is it worth to Google to track me?

As to medical data--what happens if all the data leaks and your most private medical details end up on Pastebin, along with your SSN, birthday, etc.? So sorry, but we have to break some eggs to make an omelet. Note that while HIPAA says my data must be protected, there has not been any penalty severe enough for providers to take seriously. If a hospital administrator could spend 20 years in prison, I think they might be careful with the data. If they simply get a $50k fine, they won't care.

The problem with Waze (not Wayz) is not that they have my location, speed, and heading; it is that they link that data to me and save the linkage. I wouldn't be as bothered if, say, I got an occasional discount coupon or something to pay me for the reduced battery life of having the GPS on, and the data were then grabbed and properly anonymized, at least going forward.

Maybe we need an anonymization provider--companies could buy data, but it would be decoupled from identity across items and over time. Anything else would require proper consent.
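As a toy sketch of the decoupling idea above: an aggregator could drop the user ID the moment a report arrives and publish only per-road-segment statistics, suppressing segments with too few reports to hide in (a k-anonymity-style threshold). This is my own illustrative code, not anything Waze actually does, and real-world anonymization of location traces is considerably harder than this suggests.

```python
def anonymize_reports(reports, min_count=5):
    """Take (user_id, road_segment, speed_mph) reports, discard the
    identity, and return only the average speed per road segment.
    Segments with fewer than `min_count` reports are suppressed, so
    a lone driver on a quiet road cannot be singled out."""
    by_segment = {}
    for _user_id, segment, speed in reports:  # user_id is never stored
        by_segment.setdefault(segment, []).append(speed)
    return {seg: sum(speeds) / len(speeds)
            for seg, speeds in by_segment.items()
            if len(speeds) >= min_count}

reports = [("u1", "I-95N", 55), ("u2", "I-95N", 65)] * 3
print(anonymize_reports(reports))  # {'I-95N': 60.0}
```

The design choice here is that linkage is destroyed at ingestion rather than scrubbed later, which is exactly the "decoupled from identity over items and over time" property the comment asks for.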

But big data is much like the "mandatory binding arbitration" clauses now in nearly every contract and EULA--give up your constitutional rights to judge and jury, or your phone or other device won't work, and you won't get credit even with an 850+ FICO score. That is a similar problem, but we accept it, or ignore the implications.

Daniel • March 6, 2015 3:48 PM

@bruce

It is misguided to define the debate in terms of self-interest. I prefer the concept of self-autonomy, because autonomy encapsulates the idea of decision making that self-interest does not. Even when my self-interest is identical to the group interest, if someone else in the group makes the decision, my self-interest has been strengthened but my self-autonomy has been weakened. Who gets to decide is as critical as what gets decided.

@Doubletalk.

You are confusing labels and identities. It is true that a group is not alive and therefore cannot possess a value. Yet a value can be ascribed to a group of people in the same way that an average can be ascribed to a group of numbers. It's no different than talking about a one-to-many relationship in relational databases: we can still ascribe attributes to the many in order to manipulate the data set. It's artificial, but it's not illogical.

Bob S. • March 6, 2015 4:26 PM

I don't see a way or chance for end users to gain electronic opt-out/opt-in rights/powers whatsoever as it stands now.

Users are out-numbered, out-powered, and out-monied by geometric proportions. Of course corporations won't cut us any slack. Unfortunately, it looks like that's the way most governments are going too, with the USA in the lead.

Maybe "THEY" will mess up so bad some day it will precipitate revolt. But, certainly not in the short term. I think our best chance at electronic freedom will be some leap frog technological breakthrough that will give the common man a real choice to protect his stuff, or not: a golden ring to counteract the gold key so to speak.

(Bruce my security stuff has been unhappy with your site for a couple days, trying something new maybe?)

Doubletalk • March 6, 2015 4:33 PM

@Daniel "Yet a value can be ascribed to a group of people in the same way that an average can be ascribed to a group of numbers."

In the present context, Bruce is talking about attempting to balance individual values against group values. My point is that this is nonsensical because the two things are incomparable. The living and the undead have nothing to balance.

When you dereference the words "group values", what you end up with is any number of things depending on who is doing the interpretation. One interpretation is what you are talking about, some sort of index or mathematical construct.

Bruce is aiming at a universal principle to guide information law, and @tz is exactly right: "Missing is the idea of individual rights."

BoppingAround • March 6, 2015 4:35 PM

> Maybe "our data" sounds innocuous
...until you combine it with something else. Suddenly, what you're eating for breakfast might raise your insurance costs, should the panoptocracy invade that sphere of life and start deciding what you should eat and what you should not.

Far-fetched? Who knows.

WalksWithCrows • March 6, 2015 4:51 PM

I will have to finish the book to really have informed comment on this, but strongly do agree this is the time of laying fundamental groundwork, globally, and a time when major mistakes are likely to be made which may now be considered perfectly acceptable... but in a later day may be shown to be positively abhorrent.

Which means: people today--mainstream, moderate people--are very capable of being entirely abhorrent and unaware of it. It is critical to try to become aware of such things, because invariably they are self-destructive.

Often such behavior is overwhelmingly self-destructive, yet historically engaged in, and the people of the times are entirely unaware of it. A critical "for instance" is situations where knowledge was absent which could have prevented the collapse of cities and nations.

Historically and systematically, human beings have a profound capacity for both individual and collective self-destruction. Worse, though this is one of the most obvious things about people, individually and collectively, historically and today, there is invariably a profound, blinding subjectivity in these regards.

Which is effectively a form of conceit. The, "yeah, they did that, but that is totally not me" problem. Human beings will go to extraordinary lengths to blind themselves, in fact, objective truth is only possible for them if they are dedicated to being very painfully self-critical and socially-critical. Part of this can be considered chemical: human beings operate very much on an instinctive, chemical level and have very deep group bonding systems involved in doing so.

The capacity to reason and to control one's instinctive, chemical drives is, I believe, what is possibly redeeming here--to be more specific than merely calling it "criticism". Today? While there are many systems in the information revolution that do improve and invite people to think, they are also very capable of enabling the complete evasion of independent thought in a 24/7 stream of incoming information.

Add to these factors: even outgoing information, which can appear to be independent, honest, rigorous reasoning, may actually merely be recycling collective attitudes, prejudices, and ideas--speaking to the crowd in order to belong. It can look like reasoning, but it is not at all independent.

On these subjects, for me, what I believe to be one of the most likely destroyers of global society today is in the realm of government surveillance. [On the subject of subjective blindness & the negative power of the information revolution, I do hold there is a much more fatal flaw, but as it is not in the realm of the "privacy" discussion nor is necessary for public discourse, I will pass that topic.]

By nature government surveillance is not able to be managed, and by nature, it has a distinct trend towards authoritarian totalitarianism. Historical precedent on this is nearly unanimous. Understanding this is not difficult if people look into the historical precedents, however doing so requires unwieldy obscure research and deep diving over long periods of time on the most painful of situations. So, it is infrequently performed.

Contrast reading a massive, obscure tome on, for instance, the Holocaust or the Soviet terror with reading far more fluffy material. The former is literally painful to do, and so people rarely do it. It interests them, and has a wide interest level, but it brings a person into empathy with the grimmest of circumstances. You find a similar problem in true crime: people can binge on true crime shows, but they walk away with a decidedly different mood than if they had binged on a merely clever fictional show where the good guys always win and anyone getting hurt is merely theoretical.

Further, people have an immensely strong tendency towards group bonding. A problem I mentioned above. The Nazis did not see themselves (usually) as evil. They were "us vs them". Likewise with the Soviets. Or with Hoover and his FBI. Same is the situation today, with Americans and their situations, or British and their situations. They belong to the group. It is their life, it is the definition of who they are.

Even if they are doing the very same things and operating in exactly the very same way, it does not matter. The words are different (though they collectively make them mean the same thing), but this is because their own selves and their groups are different. What? Americans acting as if they were working for North Korea? Impossible. Can't you hear the words "democracy" and "liberty", and look and see: not even very many Koreans there at all.

"Morality" therefore is entirely subjective to the wills and needs of the group.

"Good" is what gives the individual group praise. "Bad" is what gives the individual group condemnation. Nothing more, nothing less. And today, with studies, for instance, on oxytocin, we know for sure this is chemical, instinctual, and devoid of any truly independent reasoning faculties.

eg, http://motherboard.vice.com/blog/the-brains-love-hormone-forms-social-bonds-and-preserves-traumatic-memories or http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3029708/ and later studies that continue to come out.

You can fight nature and rise above it, as anyone who has controlled themselves--say, sexually--can attest. But not easily. And often even trying to... well, "fighting against nature is nature too"... that is, they can think they are rising above it, but are merely replacing one instinct with another for greater group reward.

The specifics here of "how" and "why" government surveillance tends invariably toward "total surveillance of the citizenry", or "how it tends toward utilization of control of politicians and corporate and private VIPs", or "how it tends, by nature, to evade any manner of stricture or oversight"... I think people here have a good idea of these, and besides, each is its own immense subject... and this is already a very lengthy response.


65535 • March 6, 2015 4:51 PM

"Data is the pollution problem of the information age…” – Bruce S

Well put.

But, the pollution is toxic to the point of getting the wrong person put on a list or even killed. That data did not stop the Boston bombers and probably contributed to it. That is poisonous pollution.

Daniel • March 6, 2015 4:52 PM

@doubletalk.

People make the same argument about corporate rights--that a corporation is an artificial, legal construct that can't possess any rights because it is "undead" or "nonliving".

The problem with that arguement is that we don't live in that world. We live a world where corporations do have many of the same rights as individuals. Where people want to leave their real life fortunes to their virtual avatars in Second Life. I don't think this trend is going to go away--I think it is going to get more intense. So in the sense you mean it I don't see Bruce as doing anything other than bowing to reality. In reality, we balance the rights of the undead agianst the living. Go read the SCOTUS opinion in Hobby Lobby.

WalksWithCrows • March 6, 2015 5:01 PM

@Doubletalk

A group is nothing more than several people regarded collectively. The membership criteria for a group is the result of a choice made by the individual thinking about the group. Different people may decide on different boundaries for different groups composed of the same individuals. In other words, there is no natural and definite group identity to which values can be assigned.

Noting the rhetorical nature of your post, and moving on:

People are as subjectively individuals as they are subjectively group members. I argued this extensively above, but I might go so far as to add: unfortunately, you cannot put new wine into old wineskins. It is quite impossible (* with an unmentioned caveat ;-) ) to change a tiger's stripes or the nature of a lemming.

So, just as an individual may plunge headlong into a course of certain self-destruction, so, too, might a group. And preventing either from averting their course from outside is distinctly impossible.

Wesley Parish • March 7, 2015 4:17 AM

Being an amateur scribbler of some minor notoriety, the question naturally occurs to me: what happens when AI becomes a reality? what happens when individual data stores become "managed"/"corrupted" by AI and minute definitions of individuals become "initialized" within said AIs?

I mean, your data is you: any company that makes customer data out of whole cloth usually walks the plank when it's discovered. What happens if your data becomes self-activating, self-initializing, self-reprogramming?

This is one of the questions that I don't see either Big Business or Big Government asking. (Certainly, they'd not be so blasé about the NSA's self-inflicted protocol corruption if they actually asked that sort of question. How can we know that the data they have on each and any of us is even remotely correct from one minute to the next?) As far as I can remember, only Greg Egan has ever asked that sort of question in SF.

Mike (just plain Mike) • March 7, 2015 7:08 AM

@tz

    There was a survey in the mall last weekend I participated in. They paid me a nominal fee. One way to value it is with a paid opt-out. I give Google $x per month, and they don't track or inspect the data in drive or Gmail, or whatever. What is the X, how much is it worth Google to track me?

How about if profiting from tracking and personal data were made illegal or heavily constrained (new 'clean data act' legislation/regulation, by analogy with the clean air act)? Then $X would settle to approximately enough to cover Google's cost per user of actually running their search engine and other cloud services, along with a small profit margin for Google (free market).

Gerard van Vooren • March 7, 2015 7:36 AM

@ Mike

How about if profiting from tracking and personal data were made illegal/heavily-constrained ...

I cut the quote right there. Why would the profit of a few companies be more important than privacy? If companies want to make money they have to find another way.

Mike (just plain Mike) • March 7, 2015 7:44 AM

@Bruce (cc @WalksWithCrows)

Governing – in a stable way – the balance between individual value and group value is of course at the very root of sociality across all taxa – not just humans – and as I understand it these issues are still very much a live/open area of research in evolutionary biology. I think there may also be a subtler point here in which privacy, which looks superficially like an individual value, might actually be considered more of a group value if one adopts a sufficiently wide perspective and/or a sufficiently long time-scale.

Short diversion regarding the idea that short term efficiency can be antithetical to long term flexibility and survival: Nature often seems to be horrifyingly inefficient. For example – why bother with all the sex stuff in higher plants and animals – why not just make females able to give birth to clones of themselves – no wasting resources on those stupid males and the whole fertilisation/mate-selection rigmarole – surely it would be more efficient? The answer – as I understand it – is that actually it is a lot more efficient to just clone yourself in the short term (and indeed some taxa do go down this route from time to time and can do very well in the short term). However, in the long term (at least in higher organisms) it is doomed because the self-cloning all-girl species’ (for want of a better word) capacity for adaptation is massively reduced compared to populations that reproduce sexually – it’s fine when you’re adapted to your current circumstances – but if things change – and things always change sooner or later – then you’re probably doomed to be out-competed by those who can change themselves because all you can do is copy yourself for eternity. I think there is a rough analogy here with short-term versus long-term thinking with regard to societies and states. Nature has to strike a very fine balance between keeping things the same (so not messing up viable creatures in a given stable environment from one generation to the next) while at the same time also allowing for the inevitable requirement that things must be able to change in the long run as and when the circumstances/environment changes – and at the root of this trick in the natural world are the various mechanisms for the persistence of genetic material (maintaining stability) along with deliberate de-stabilising gene shuffling and gene transfer mechanisms – which undoubtedly do introduce short-term inefficiencies. 
Similarly, mostly what state-security is about is trying to keep everything stable and the same – on an even keel – to constrain everyone’s behavior within certain bounds in order to preserve the smooth operation of the machine that is society – and there’s nothing intrinsically wrong with that (e.g. preventing people nicking other peoples' stuff and/or arbitrarily beheading each other is surely a good thing). The problem is that the more efficiently you work out how to keep things very stable, how to preempt bad things, how to constrain everyone’s behavior within certain norms and keep things very much the same from one generation to the next – the less able your society/culture is able to change/adapt when external or internal driving factors require it – and thus in the longer term the more your society/culture is at risk of collapse/failure, and the less able it is to compete with other more flexible cultures during periods of rapid change. It’s very easy to think we – in what we believe to be this enlightened modern age – are thoroughly groovy and know what sorts of things should and should not be allowed – surely every ‘reasonable person’ can agree that we are only interested in stopping the things that are obviously harmful. 
But even if we could agree a set of rules for a perfect contemporary morality-filter for selecting our secret-policepersons I don’t think that would be enough – I worry about all the things that have previously been considered by the people in-charge (and often the majority of the proles too) in various parts of the world at various points in history to be 'obviously harmful’ and/or down-right seditious/immoral and threatening to society: land-reform, employment-reform and trade unions, universal suffrage, women’s suffrage, civil rights in the US, suggesting that the earth goes round the sun, the idea that homosexuality is not criminal, being Jewish, being a Catholic, not being a Catholic, being a communist (US), not being a communist (USSR) etc. etc. Imagine if you had a really efficient technology-based state security apparatus (Richelieu with PRISM) – you could probably hold the official line – and thus hold back what we would now view as progress – for far longer than was possible before. Maybe you could hold the line indefinitely.

We need law enforcement for sure. We need the social machine to be policed and to function well – and we need the social DNA to be able to retain its integrity over time – but we also need it to be able to change. We need rules – but we also need the rules to be able to change. This is a very hard problem – a difficult circle to square – just like it is for Nature – but I don’t think doing state security in the most efficient way possible is an unquestionably good idea. This, I think, may be what privacy is probably for in the context of otherwise highly cooperative individually-intelligent social animals. Respecting privacy is perhaps like a deliberate decision to not do state/group/society-level security as efficiently as it possibly could be done because there is an acknowledgement of the necessity of change in the rules – the acknowledgement that some of the rules could be wrong, or that the rules could become wrong over time. I think privacy is probably an essential part of the mechanism by which society can change – anything that is new (art, invention, ideas, discovery) or is against the orthodoxy (social reform) needs time to be able to grow and develop away from the crushing, critical and potentially exploitative gaze of consensus/convention/commercial-competition. Yes – there are obvious risks in deliberately allowing dark corners – huge risks – the risk is that most of the things you are allowing to grow and develop beyond your knowledge will be genuinely harmful – truly socio-pathological/criminal – there will be costs, and it is stupid to deny this or to pretend that you can eliminate all costs of allowing a given level of privacy. 
I think about the Norwegian reaction to Breivik – it seems to me they thought about it, and then decided to do pretty much nothing – to change nothing – they seemed to accept that this is the sort of thing that will happen from time to time given the level of privacy/freedom they have in their society – I don’t question, all other things being equal, that they would have been a bit more likely to have prevented or limited Breivik if they had cameras in every home/street/tree and tracked everyone – but it seems they decided not to go down that route. You surely need to balance – you need limits to privacy – but if you erase the capacity for privacy altogether in your zeal to end all criminality/anti-sociality (as defined at your given point in history) simply because, finally, you believe that the technology means you can, then you are probably dooming society to sclerosis – ossification – and ultimately revolution/collapse. It’s not an easy sell though: “we have deliberately chosen not to introduce further security measures that may well have saved your loved one because of the long-term needs of society to be able to subvert itself”. I think China is very interesting in this respect – there has clearly been a lot of change – but there is also a lot of machinery designed to be able to hold back changes/ideas that are considered undesirable. Interesting that China may be going through its phase of adjustment to industrial-pollution issues and information-pollution issues at the same point in history!

BoppingAround • March 7, 2015 9:45 AM

Wesley Parish,
> What happens if your data becomes self-activating, self-initializing, self-reprogramming?
We might see more distinct shades of us and our aspects. We see them today as if through a thick mist — think of contextual ads.

Several books I have read that concern deceit and persuasion note that people are prone to believing those whom they like or who are similar to them. Expect the systems that use these shades to be welcomed unless they are too spooky[0]. Another possible outcome is realisation of the ulterior motives of the creators of these systems; some efficiency might be lost there, but I doubt anything will be done to put them out and away from our lives.

----------------------------

[0] http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

> And we found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumes that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.”

qb • March 7, 2015 11:47 AM

Bruce,

Is your book available (or going to be available) from any DRM-free e-outlet? I remember I've got Liars and Outliers from O'Reilly, but Data and Goliath is not in their stock.

WalksWithCrows • March 7, 2015 12:34 PM

@Wesley Parish

> Being an amateur scribbler of some minor notoriety, the question naturally occurs to me: what happens when AI becomes a reality? what happens when individual data stores become "managed"/"corrupted" by AI and minute definitions of individuals become "initialized" within said AIs?

Psychological technology is already that far along. That is, people can take data about others and put themselves in a state where they believe they are that person. There was a show on something like this, "The Pretender".

Automating some process of that, say, in a program where logical trees are created, built on your data, for, say, the purpose of predictive behavioral analysis... which is, to some very meager degree, already being done.

Then the mini-similitude personality could be run against a very huge set of "what if" scenarios.


> How can we know that the data they have on each and any of us is even remotely correct from one minute to the next?

They do not. There are studies and polls done, and there are a great many areas where data collection is highly effective at such things as discerning very personal information about someone. But, on the level of really trying to understand someone, beyond the metadata... people cannot understand their own selves.


Again, stressing, I am talking about well beyond "what clothes you prefer", "where you are likely to vacation next and when" and so on. Well beyond the superficial to deeper questions.

To some degree, however, it probably would be possible to create a sort of virtual environment for said personal object and run not just scenarios on the personal object as if they existed in a vacuum... but run the personal objects against scenarios involving many people. Virtualizing society. For instance, this could be very valuable in predicting "what to say to someone to convince them to vote for your candidate".

A criterion there would be to have a lot of political data on the people, and to be able to have it meaningfully parsed.


Likely, something else could be invented, a bit more insidious: a human operated "online" bot where the bot translates "as if" Jack or Jane were speaking and not the human. Call the human Adam.

For instance, one way people anonymize themselves today, at least those who take very drastic measures, is by assuming the personality of someone else who already exists online. This prevents the problem of anyone being able to search on your name and find you do not exist online.

Much more effective for governments, where they can reroute potential traffic to the real person, but effective on minor levels for some level of anonymity. Albeit, obviously, highly unethical.

As it stands, if questioned, Adam would have to have really studied Jack or Jane. I am not saying Adam could present himself as Jack or Jane to anyone who knows them. Only to strangers.

That is AI, but AI enhanced by a human.

But, there is something else which could be done, which is to use that data and, if one can sufficiently backpost, then they could change around details of the data and create a new person with a long history online. And they could even perpetuate that, so, for instance, that person keeps up a similitude of their everyday communications.

e.g., they post to Facebook in the morning, they make xyz patterns of calls at xyz times, etc.


WalksWithCrows • March 7, 2015 3:27 PM

@Mike (Just Plain Mike)

Thank you, some very good points. Helps me with this model I am working on.

Walking through some of the points, largely out of order, however.

> You surely need to balance – you need limits to privacy – but if you erase the capacity for privacy altogether in your zeal to end all criminality/anti-sociality (as defined at your given point in history) simply because, finally, you believe that the technology means you can then you are probably dooming society to sclerosis – ossification – and ultimately revolution/collapse. It’s not an easy sell though: “we have deliberately chosen not to introduce further security measures that may well have saved your loved-one because of the long-term needs of society to be able to subvert itself”.

Well, some contrast might be noted here: that is not such a hard sell for China, but it is a hard sell for the United States.

China/US, China/South Korea, China/Japan, China/Taiwan are excellent comparison models in studies of "progress", efficiency and inefficiency.

Other useful comparisons: China/19th Century Western Europe, China modern/China 19th century, China/19th century US, etc.

I believe by taking in these various comparisons we can actually define, to some useful degree, "progress".

I do believe we can also look at the history of Western Europe, South Korea, Taiwan, Japan, the States, and other nations and also begin to flesh out a deeper definition of "progress".

And we can even come to various useful conclusions about "what were contributing factors to that progress", as well as "what were harmful factors to that progress".

This includes not only the founding documents, but as well, additions to those documents made in many revolutions of society (from scientific and informational revolutions to industrial revolutions to revolutions of definition of "rights" of individuals).

One of the sad ironies in these comparisons is that as the US made very inefficient moves for individuals, such as a wide variety of workers' rights, including safety issues both for workers and consumers... the cost of manufacturing skyrocketed. And eventually, much of the manufacturing and other fields moved to China, where the rules are nowhere near as costly.

Even worse, I have seen some predictions that China is going to have some very serious economic problems as the 3D printing industry takes off. Which I find plausible.

Put another way, I would argue it is possibly in the best interests of the "first world" nations to keep the "second" and "third" world nations where they are and, at times, worse. But are these short-term best interests, or long-term? What are the costs, and what are the values?

So, it is not entirely unlike the public-privacy versus public-safety problems.

(I am, btw, very much here including not just China, but Mexico, and, of course, the Middle East.)

Very interesting that, as a major component of first-world nations, there has arisen the need for highly accurate capabilities for predictive analysis on exactly these issues and others.

To some degree these nations are helping to keep themselves in their subservient states. To some degree they are absolutely not. So, there are also those components of the problems. And, you also see this in the public privacy (or individual rights) public exposure (or public rights) problems.


> But even if we could agree a set of rules for a perfect contemporary morality-filter for selecting our secret-policepersons I don’t think that would be enough – I worry about all the things that have previously been considered by the people in-charge (and often the majority of the proles too) in various parts of the world at various points in history to be 'obviously harmful’ and/or down-right seditious/immoral and threatening to society: land-reform, employment-reform and trade unions, universal suffrage, women’s suffrage, civil rights in the US, suggesting that the earth goes round the sun, the idea that homosexuality is not criminal, being Jewish, being a Catholic, not being a Catholic, being a communist (US), not being a communist (USSR) etc. etc. Imagine if you had a really efficient technology-based state security apparatus (Richelieu with PRISM) – you could probably hold the official line – and thus hold back what we would now view as progress – for far longer than was possible before. Maybe you could hold the line indefinitely.


Yes, very true. And I do believe this is likely one major reason "technologists" can become so passionate about the topics. And they will tend to see *it is most surely in their best interests to do so*.

Though, there are a vast number of problems here, and it is also in their own best interest to attempt to study both sides of the problem.

However, unlike in so many other fields, many arguments "the other side" might make they will not be privy to, because a critical part of the job of these law and intel agencies is either to not report something at all (including very serious nation-based attack problems and even terrorist and terrorist-organization-based attack problems), or to lie about the information. Further, even when it is reported, such as the Brits eventually declassifying portions of their WWII intelligence programs, it is often highly obscure.

And it may also be pointed out that what really needs to be understood is nation-based domestic intelligence, which is even much more obscure than 'nation vs nation' intelligence. That is, by the very nature of the business, the bad will vastly outweigh the good, because if there is good it will still be in use (even if always illegal), and so there will be little to no comment on it -- and much misleading comment.

Further, another problem with that manner of system: by design it seems to be very conducive towards exploitation for personal gain. For instance, Hoover. He not only relished the power he had and sought to expand it at every juncture he could and nearly by any means necessary, but he also had the power through these systems to ensure he could stay in power far past any reasonable expectations exactly because of these systems.

That is, the problem can bleed into one of the most "progressive" methods available to domestic secret services, which is, almost startlingly: if it is illegal then it is far more limited than if it is legal. Problem is, today, the understanding of "protecting sources" and methodologies has deeply matured. So what worked for Hoover (and in many circumstances of which we have record it did completely or partially work) cannot work today.

(This especially became clear in the late 60s and Hoover's atrocious programs against American Communists and, perhaps what really stands out to people, against Martin Luther King Jr.)

But, there are severe risks, and not the least of these is subversive actions performed by nation states. I think people calculating "it is all terrorism" are not understanding the problem.

To a certain degree this is coming to a head with the US confronting Russia over Ukraine. Many in the IT sector are aware Russia has long been putting in place effective electronic sleeper agents. There are so many junctures where there is a possibility of a foreign nation sabotaging, really in one stroke, the economy in a very devastating way.

(There are corollary problems as well, of course, outside the technical field, such as monetary stability attacks.)

gordo • March 7, 2015 9:48 PM

Brace for the Quantified Society
By Rafal Rohozinski and Robert Muggah, Open Canada January 9, 2015

An axiom of the Quantified Society is that it is global and highly networked. Stem-cell therapy and medical tourism is available from Moscow to Seoul. CCTV cameras are now almost as common in Beijing as in London, the most surveilled city on earth. The expansion of smart cities in the north and south is bringing the promise of a Singapore-like benevolent authoritarianism closer to reality than ever before. There is now talk of big data eugenics, with American and Chinese firms busily sequencing thousands of people with “very-high IQs.” Meanwhile, for the rest of us, the market is starting to reflect insurance premiums adjusted for genetically-based disease. (para. 10)

http://www.ocnus.net/artman2/publish/Research_11/Brace%20for%20the%20Quantified%20Society.shtml

WalksWithCrows • March 7, 2015 9:48 PM

@Wesley Parish

Speaking of stealing people's private data and using it for AI:

http://boingboing.net/2015/03/07/whats-up-with-these-incredib.html

(-:


IDK, the problem I have been working on is for a private science fiction project, where I try and consider "what if" a really further advanced civilization was able to communicate with earth or even, Urantia style, seeded this planet. In which case, I have come to the conclusion they probably would have been able to move into another state of being, and probably would be able to use human minds like we use computers. Hacking into them, as it were. Likely, they would even be able to create virtual constructs of humans and operate very many of them at the very same time. As well as, in general, create virtual reality bubbles in everyday reality.

As they would have no linguistic similarities at a more core level than just being able to, say, as an American might be able to speak Japanese... but not, for instance, get Japanese jokes, or understand Japanese social customs and rituals... what if they did not have animals or material objects as we did, and had an entirely alien moral system? How could they communicate without shared backgrounds? Not immediately obvious...

But if you notice, in conversations, moral understandings, major and minor background details, as well as such things as examples from nature, are all common components of language. Not difficult to fake if they seeded the earth, but truly establishing rapport in terms of "moral outlook" and such... probably would be.

So their version of a "phone" to humans would probably be similar to using an AI virtual human, for instance, to speak and translate as well as they could to us. Not entirely unlike how useful it is to have a local buddy when you go to a very foreign place.


WalksWithCrows • March 7, 2015 10:42 PM

@Gordo

> There is now talk of big data eugenics, with American and Chinese firms busily sequencing thousands of people with “very-high IQs.”

Heard of the insurance-genes issue, but not this one. That is not a wise criterion. Though the way the system currently seems to work is that quite often those who claw their way to the top of social hierarchies tend to have the most children, and other (perhaps) disturbing criteria apply.

I am surprised they do not consider longevity a higher priority.

Consistently.

Intellectually, I believe, people can create complex fantasies about "my name living on" or "I am helping generations in the future after I am dead", but the reality is they operate as if they do understand: when they are dead, such things will be entirely meaningless to them.

Wesley Parish • March 8, 2015 2:36 AM

@BoppingAround @WalksWithCrows

Thanks for your comments on my post. I did write a small story in that general direction called Mephistopheles in Silicon where a robot/android life insurance salesman goes around to someone dying of undiagnosed cancer (diagnosed by the Mephistopheles character obviously) and gets the rights to the online character of John Brand assigned to him.

I've also been fiddling/riddling around with a story on the horrors of finding the algorithms of financial derivatives, credit default swaps, and like horrors "mated" with database management systems, all in a sense of play by an awakening AI "playing with its toes". None of the characters in the story are quite sure who they'll officially be from one moment to the next: it's a take on B. Traven's Death Ship.

Mike (just plain Mike) • March 8, 2015 7:37 AM

@WalksWithCrows and others

On AI: A lot of people basically gave up on doing what I would call proper AI towards the end of the last century. Much of what is being done now that people like to call AI is really just big-data – statistical stuff based on large amounts of often noisy/lazy data – and often the data is square pegs rammed into round holes in the first place – but this approach is often ‘good-enough’ and far better/more-income-generating than anything the old hard AI (in both senses) approaches were producing for the market. The problem with this statistical stuff is that it is inherently conservative – the data is necessarily a representation of how things are or how they were, but sure, you can make exemplary game-show contestants who can tell you in the blink of an eye all the monarchs of England and the dimensions of the Golden Gate bridge – but to quote Julie Burchill slightly out of context, such apparently knowledgeable individuals are really just "a stupid person’s idea of an intelligent person". Watson is unlikely to be able to contribute any new ideas to a discussion of the constitutional merits of monarchy, or to be any help designing a bridge if you have some new unique requirement/constraint that no one has ever had to deal with before. Granted, some of these statistical systems have a limited ability to generalise, but they will always only be ‘good enough’ – they may well work in 99% of the use cases their designers un-imaginatively considered, but there will always be edge cases they can’t handle or, worse, will get completely wrong because fundamentally they (these algorithms) do not actually understand what is going on. Ask a five-year-old when Jane Austen was born and likely they won’t know the answer – ask Google the following: "OK Google – I have a piece of paper that is blue on one side and orange on the other, it is flat on a table in front of me with the orange side up and I then turn the piece of paper over – what colour do I see?" 
the five-year-old will have no trouble with answering that one, but Google will have no idea – literally. To an extent this new generation of so-called AI is a bit of a confidence trick – a bit of good ol’ Barnum hoopla – though I guess if your idea of smart/wise really is ‘knowledgeable’ then you’ll get what you pay for. A society governed by the wisdom of this sort of AI will probably be very good at maintaining (and defining) its norms – very good at reliably re-building its bridges the same way it always has, very good at deducing and repeating the formulae for past successes in engineering and the arts, and endlessly regurgitating the same-old-same-old with cycles of minor variation. The allure/promise of big-data is just another example in a long line – it is the allure of something-for-nothing built on the mystery of secret-sauce/clever-maths. The problem with suspending understanding in favour of faith in some secret-sauce/maths is that you have no real grasp of the applicability of the system – no understanding of when it might not be a good idea to use it or where it might fail completely. The wise old birds in the financial markets have this phrase "it works until it doesn’t" – and indeed the financial crisis provided quite a good example – many black-box systems, trained on large corpuses of previous knowledge, which had up to that point been demonstrably successful often became catastrophically un-successful when faced with novelty. My understanding is that those in the financial markets have now developed a healthy layer of scepticism around these formerly magical-something-for-nothing-secret-sauce approaches – but I worry that it will be a while yet before everyone else starts to shed their dewy-eyed enthusiasm and develops similar experience-based cynicism. 
I worry that something like this is the idea behind PRISM like systems: “Hey guys – we have these rocket-science AI algorithms which we can apply to a large corpus of noisy data on the entire population and then use it to identify the high-risk individuals – maybe a bit like shining white-light into a prism... and it splits the light up into its different components and spreads them out along a nice line so we can look at each type individually – that gives me an idea for a name for this thing!” I wouldn’t be at all surprised if it worked rather well actually – 99% of the time anyway. "OK Google – give me a list of the top ten people in my area most likely to commit-child-murder/become-radicalised-terrorists in the next ten years". I think Google/Watson style technology would probably be quite good at that – and if that is the case then someone will then say "if we pre-emptively monitor these people and/or maybe even lock some of them up then we will be able to save X people from horrible-death over the next ten years" and – probably – it will be possible to demonstrate in a rigorous quantitative way that, yes, all other things being equal, they are indeed correct about that – and someone will then make the argument that there will be some false positives – and they will also be correct in saying that – but then someone else will say that curtailing the freedom of the small number of false positives will be a small price to pay for saving X lives – and many will likely agree with them depending on the details of what sort of pre-emptive action is being proposed – and so that will all be very interesting if it happens – and may well get implemented – and then – as the wise-old-birds say – it will work until it doesn’t.
Incidentally, if you ask me, the way proper AI will eventually happen will be through trying to understand/model existing biological systems and/or giving things ‘bodies’ that will necessarily require approaches that allow/require these systems to have some sort of interaction with and understanding-of/internal-model-of/representation-of their external reality. There are lots of very cool people doing very cool things along these lines right now too – but it will be a while yet before a virtual-nematode or big-dog is going to be able to give you spoken directions to the nearest chip-shop, so I guess it’s going to be Barnum style statistical I-can’t-believe-it’s-not-AI at the consumer/social-policy level for all of us for the next few years (at least there’s no danger of the current big-data not-really-AIs taking over the world – they may be used as tools by people who want to take over the world, but they won’t themselves want to take over the world because they have no point of view... however, the virtual nematodes, or their oncoming successors at least, will probably be a rather different matter and may indeed be capable of harboring their own independent ambitions!) Not that things aren’t already getting a bit scary, because it is pretty much impossible to question the current not-really-AI-at-all AIs: "But why does the computer say I can’t take this flight?" "Well, actually, no one really knows because it is just this black box statistical/learning algorithm based on clever-maths that was trained on all this data and we can show that, in aggregate, the rules that it has come up with mean that not letting people like you on this flight saves lives so, sorry sir/madam, but this decision is, quite literally, un-questionable."
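To put rough numbers on the false-positive point above, here is a back-of-envelope Bayes calculation (a sketch only: the base rate and the "99% accurate" figures are invented for illustration, not taken from any real system):

```python
# Back-of-envelope Bayes calculation for a rare-event screening list.
# All figures are invented for illustration only.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(genuinely high-risk | flagged by the system)."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Suppose 1 person in 100,000 really will commit the crime, and the
# "rocket-science" algorithm is 99% accurate in both directions.
ppv = positive_predictive_value(base_rate=1e-5, sensitivity=0.99, specificity=0.99)
print(f"{ppv:.4%}")  # about 0.0989% -- over 99.9% of flagged people are false positives
```

Even a screen that is 99% accurate in both directions produces a list that is overwhelmingly false positives when the thing being predicted is rare; that is the price someone will be arguing is "small".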

Clive Robinson • March 8, 2015 11:41 AM

@ Mike (just plain Mike),

> The problem with this statistical stuff is that it is inherently conservative – the data is necessarily a representation of how things are or how they were,

By definition it's "how they were" when the original analysis was done or last updated.

However, there is a problem with all such systems that rarely gets talked about, which is that "the observer affects the outcome", and thus the predictions become self-defeating or chaotic in effect.

To see this, think of it this way: let's assume you are trying to stop people being murdered. After analysing the available data going back several years or decades, you come up with a model that you then passively test for a time period to see if it makes predictions correctly.

During this time society moves on, and will "age the model", with the result that at the end of this period it is likely to be less accurate than at the start. Therefore you update the model to account for this.

If the test period was satisfactory then the model may be put into use. If it is then some murders will be prevented, some people will be falsely accused and some murders which the model cannot predict will happen, as you would expect.

The problem is that for any given time period it is in use, it is skewing the data the model is based on, because there is no way to tell if a murder has been prevented or a person falsely accused. The only valid data is the murders that happen that the model could not predict. Thus the new data set is skewed towards what would be either unpredictable random acts of murder, or the edge cases that the model cannot predict.

If the model is updated on this skewed data it will over time become skewed and murders it would have otherwise prevented now happen.

This means that the model becomes either out of date as society moves, or it becomes biased or limited in some way.

This is because the signal-to-noise ratio drops: you are attenuating or removing the signal you can predict, whilst the random noise and what you cannot predict in effect stays at the same level over the short time period.

At some point the little signal that remains is insufficient in any given time period and the law of small numbers starts to apply. If the signal were effectively invariant then using an increased time period would effectively boost the signal marginally. However, due to societal changes the signal is not invariant, and thus increasing the time period falls foul of the law of diminishing returns rather quickly.

Thus what will happen is that initially the murder rate will drop, but it will reach a limiting case beyond which improvement will be initially random in nature. This random nature will, if not treated with care, affect the way the model is updated, and this is likely to cause the model to go into some form of oscillation or chaotic behaviour where small changes in society will cause magnified effects in the model (sometimes called "ringing").

Thus trying to improve the model beyond a certain point will have a significant risk of making things actually worse...

It's just one of the reasons why we cannot have anything approaching a model society without crime or risk. A point few politicos appear capable of understanding or taking on board, nor, for that matter, a large part of the population.

But there is a further issue to consider: whilst the random signal might remain constant over short periods, it actually increases with population size and worsening socioeconomic conditions, which generally worsen with an increase in population as, simplistically, "there are more mouths to feed from the same pot". However, as we are aware, some people consider themselves "more equal than others" and thus require a larger share of the pot as "their right". This causes a disproportionate decrease of socioeconomic conditions for those who share the rest of the pot, which means the random noise signal rises faster than expected from just the change in population size. This causes over-compensation by those who believe they are in charge, and the result is generally a "crack down" which makes socioeconomic conditions worse... effectively you get a positive feedback effect which boosts the random noise signal further... and so "the turn of the screw" increases.

From a real security point of view this is highly dangerous, as the result is eventually catastrophic failure, which is rarely peaceful when it happens in societies.
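Clive's retraining loop can be sketched in a few lines (a deliberately crude, deterministic toy: every count and rate below is invented, and the update rule is hypothetical, not any real system's):

```python
# Toy of the retraining loop described above: a deployed predictor prevents
# the murders it can see, so each new training set is skewed toward the
# random/unpredictable ones. All numbers are invented for illustration.

def simulate(periods=10, predictable=100, random_acts=100, prevent_rate=0.9):
    """Each period the model prevents a share of the predictable murders,
    then is retrained on what was actually observed."""
    skill = 0.9  # initial skill, learned from unskewed historical data
    history = []
    for t in range(periods):
        prevented = predictable * skill * prevent_rate
        seen_predictable = predictable - prevented  # what still shows up
        observed = seen_predictable + random_acts   # the new training set
        # Hypothetical update rule: skill tracks the predictable share
        # of the (now skewed) observed data.
        skill = seen_predictable / observed * 0.9
        history.append((t, round(observed), round(skill, 3)))
    return history

for period, observed, skill in simulate():
    print(period, observed, skill)
# Observed murders drop to 119 (from a no-model baseline of 200), rebound
# to 187, then "ring" before settling -- the model's own success starves
# it of the very signal it needs.
```

The point is not the particular numbers but the shape: the model's success removes its signal from the next training set, and the skill estimate swings before settling well below where it started.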

Mike (just plain Mike) • March 8, 2015 1:16 PM

@Clive: Agreed. I wasn’t going to go there – but yes – absolutely – I tried to cover myself a bit when I said “it will be possible to demonstrate in a rigorous quantitative way that, yes, all other things being equal, they are indeed correct about that [that murders will be prevented]” but, of course, once you start applying your pre-crime system then all other things will not be equal any more – you will have begun to eat your own dog-food. Soros (and others) I believe like to call this reflexivity. If you ask me it is just the bleeding-obvious to most people of an engineering disposition – but apparently an exotic and esoteric notion in other fields – particularly social policy. On the financials side, you could (and probably still can) often see quite literal ringing (over minute and even second time scales) in the price charts of some stocks and commodities.
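The ringing Mike describes can be shown with a toy over-corrective feedback loop (all numbers arbitrary; this models no real market or policy system, just the mechanism):

```python
# Toy of over-corrective feedback "ringing": each period the controller
# applies `gain` times the observed gap to the target. Numbers arbitrary.

def adjust(start=100.0, target=0.0, gain=1.8, steps=8):
    x, path = start, []
    for _ in range(steps):
        x = x - gain * (x - target)  # over-corrects whenever gain > 1
        path.append(round(x, 1))
    return path

print(adjust(gain=0.5))  # smooth decay toward 0: 50.0, 25.0, 12.5, ...
print(adjust(gain=1.8))  # ringing: -80.0, 64.0, -51.2, 41.0, ...
```

With gain below 1 the response decays smoothly; push the correction past the size of the gap and each period overshoots the target in the opposite direction, the same qualitative oscillation visible in those minute-scale price charts.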

WalksWithCrows • March 8, 2015 5:12 PM

@Wesley Parish

Thanks for sharing. Interesting story. And interesting plot & reference to the "Death Ship". Oddly, I have actually seen a Traven movie, anyway: The Treasure of the Sierra Madre, with Humphrey Bogart.

Oddly, my main reading fascination has always been news and non-fiction, but I have a very passionate interest in cinema (shows or movies)... the best sci-fi book I have read in the past few years, however, is probably Rapture of the Nerds by Doctorow, though I have also enjoyed the Night Watch/Day Watch Russian series, and Dream London.

Yet I do write a bit of sci-fi & fantasy. Partly I do not read so much because it can influence me unduly, and where I get my writing from is other stuff entirely. :-)

Though, just do so privately. I have a career already, and any message I might want to get out, I have other means of doing so.

These things said, I like the concept of an android stealing personal data from life insurance collections to further enhance their own search towards higher emulation of human beings. :-) The whole soul-AI question is one of the more interesting ones, I have found, anyway, in scifi cinema (eg, PKD's related influenced works, AI the movie, etc.)

(Have not approached the subject directly myself, but do enjoy plodding around subjects on morality, language, identity and such, and considering very alien environments, beings, and technology.)

My primary interest in sublime matters at all, though, would be in supernatural horror and fantasy. :-) (And far, far moreso even in matters discussed on this blog, though I like to wrap comp sec and espionage sort of mindsets in supernatural fantasy and horror... you know, like what death note did. :-) )


WalksWithCrows • March 8, 2015 5:28 PM

@Uh, Mike (just plain Mike)

Thanks for another very interesting post.

Now that you mention it, I have seen the whole AI search die out post-late-90s, and yes, that was exactly when the "big data" ideas really started to rise... (when LISP was a hot topic, yet massive social media and the relational promise were "the thing" in my circles, anyway)...

Shortly after I had read this subject here, was at vice, and saw this article: http://www.vice.com/read/how-to-stop-killer-robots-taking-over-the-world-212 Which kind of confirms that.

I think big data analysis for counter-terrorism and counter-espionage is very limited, but essential, if people can crunch the numbers effectively. Which they often can not. My work there has been most in heuristic systems, which is along the same lines, getting humans behind very clever systems. Because we shouldn't be lying to ourselves: not there yet to throw out the humans.

(As it was especially big pre-2007 or so, for com sec and av companies to pretend to have perfect heuristic tech, which was abysmally far from the truth. Nowadays, I do see far more promising systems, however. And no surprise they all ultimately are just much less false positive prone, and much better at providing a much more vetted list to analysts.)

I do absolutely agree you want to work from biological data in these regards. And, that said, evolutionary large-data-set number crunching might be quite interesting. Though we still know so very little even about the human mind, such things as "why do we sleep?" and "why do we dream?". But these advances continue to move, and studies from neuroscience and the behavioral sciences continue to be very interesting.

Still, people overpromise automation, and keep doing this. For all the massive indicators the Boston bomber had, while he got on a list, they missed him entirely. Osama bin Laden was not caught by data crunching; though some advanced tech was used, by and large the system worked against the actual search that succeeded. Probably in no small part because of people's misplaced confidence in the mojo of it all...


artyom • March 9, 2015 2:54 AM

Anyone familiar with the 'Scorched-Earth society' mentioned by Peter Watts?

http://www.rifters.com/real/shorts/TheScorchedEarthSociety-transcript.pdf
"Here's a wild thought: don't just offer data protection, especially when you can't guarantee it. Offer data destruction. Not BrinWorld, where everyone knows everything and lions lie down with lambs; a more hard-edged place where, when the lions come calling, we burn down our chunk of the veldt rather than hand it over.

Forget the Transparent Society. Let's call this the Scorched-Earth society."

Derek • March 9, 2015 6:18 AM

"If the model is updated on this skewed data it will over time become skewed and murders it would have otherwise prevented now happen."-Clive Robinson

That's precisely why it's so valuable for such a model to receive feedback iteratively, so that it can be incrementally corrected over time. For most cases, near perfect is as good as perfect.
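The catch Clive raised is that feedback from the model's own deployments is a skewed sample. Here is a toy simulation of my own (everything in it is invented: the two districts, the rates, the learning rate) showing how an iteratively updated model can lock in its initial bias, because it only ever observes outcomes where it chose to look:

```python
import random

random.seed(0)

# Two districts with identical true incident rates (assumed for the toy).
TRUE_RATE = {"A": 0.3, "B": 0.3}
# The model starts out biased: it thinks A is risky and B is safe.
estimate = {"A": 0.5, "B": 0.1}

for step in range(1000):
    # Deploy attention where the model thinks risk is highest...
    watched = max(estimate, key=estimate.get)
    # ...so only the watched district's incidents feed back in.
    observed = 1.0 if random.random() < TRUE_RATE[watched] else 0.0
    # Incremental update (exponential moving average, rate 0.01).
    estimate[watched] += 0.01 * (observed - estimate[watched])

# A's estimate converges toward the true 0.3, so A is always watched;
# B's stale, too-low estimate is never updated and never corrected.
print(estimate)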

Clive Robinson • March 9, 2015 8:26 AM

@ Derek,

That's precisely why it's so valuable for such a model to receive feedback iteratively, so that it can be incrementally corrected over time. For most cases, near perfect is as good as perfect.

The problem with this is threefold:

1. Even gravy trains run out of steam.
2. Such updates are never close to optimal, so not even remotely close to perfect.
3. For any system there is a minimum period below which changes cannot be distinguished from randomness in the results.

The last one is the "killer" that eventually brings the gravy train to a dead slow crawl, or worse, a dead stop. It's not something you can get around even with trickery; it's a consequence of the laws not just of our tangible physical world, but of the intangible information universe as well...

Put overly simply, reducing the time period means having a wider bandwidth, and since noise power is proportional to bandwidth, the noise goes up. However, above the signal bandwidth, increasing the bandwidth does not increase the signal, and perhaps counterintuitively it also makes the signal less certain. This means that for any system there is an optimum bandwidth for the signal.
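A minimal numeric sketch of that trade-off (my own illustration, assuming flat white noise and a band-limited signal; all numbers are invented): captured signal power grows with receiver bandwidth only up to the signal's own bandwidth, while captured noise power keeps growing without limit, so widening the receiver past the signal bandwidth only degrades the signal-to-noise ratio.

```python
SIGNAL_BW = 10.0   # assumed: signal occupies 10 kHz
SIGNAL_PSD = 5.0   # assumed: signal power per kHz inside its band
NOISE_PSD = 1.0    # assumed: flat noise power per kHz at all frequencies

def snr(receiver_bw):
    # Signal power is captured only up to SIGNAL_BW; beyond that,
    # opening the bandwidth further adds noise but no signal.
    signal_power = SIGNAL_PSD * min(receiver_bw, SIGNAL_BW)
    noise_power = NOISE_PSD * receiver_bw
    return signal_power / noise_power

for bw in (2.0, 5.0, 10.0, 20.0, 40.0):
    print(f"bandwidth {bw:5.1f} kHz -> SNR {snr(bw):.2f}")
```

Up to 10 kHz the SNR holds steady; at 20 kHz and 40 kHz it halves and halves again, which is the "optimum bandwidth" point in numbers.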

Likewise, other limits apply to such things as "signal estimation" or information prediction; to work even remotely well they have to have an invariant or near-invariant signal. Again, there is an optimal bandwidth for such estimators.

But another limit is the length of time it takes a signal to get from its point of origin to the input of the detector. This has many contributing limitations, bandwidth and coding delays being the easy ones to spot, but ultimately the one even pre-college kids know cannot be beaten: the speed of light.

There is no such thing as a "free lunch" when you are using positive feedback systems; get it wrong and you will be lucky if you don't get a face full of shrapnel.

aai • March 9, 2015 11:33 AM

"We don't want the governments of China and Russia to decide what censorship capabilities are built into the Internet; we want an international standards body to make those decisions."

Strangely enough, a couple of hours ago I read a DW article on how oppressive regimes abuse Interpol to track dissidents and the differently minded.

So we'd better be careful what we wish for.

Clive Robinson • March 9, 2015 12:07 PM

@ aai,

So we'd better be careful what we wish for.

Yes, the thing about international standards bodies is that they come in three flavours:

1. UN or equivalent supranational.
2. National.
3. Trade.

In nearly all cases they end up being infiltrated or influenced by the superpower or UN-veto-holding nations. One trick you see from the Five Eyes is engineering votes to get the equivalent of backdoors or other intel-friendly tech in place, often on a "health or safety" issue.

One more recent trick is buying-power leverage against Far East manufacturers. The US mandated GPS in mobiles for "safety reasons"; because of the size of the market, it's cheaper to put GPS in all phones than to deal with split-inventory issues.

The US intel community were reasonably certain this would happen, but just to make sure, other nations talked about mandatory GPS as well, and thus it became a done deal.

Manufacturer-based trade bodies are generally independent only through the "get up to speed" years. After the technology starts to be accepted, the manufacturers just send token representatives to fight their own corners, and this splitting up of common purpose allows the intel types to come in and throw their weight around. It's the same sort of thing that was seen with NIST and Dual EC: other representatives avoid conflict and eventually give up. I've seen it happen too many times to believe it's just coincidence; mind you, they are usually a lot more subtle than in the NIST example.

Thus the chances of getting standards unencumbered by government IC/LE agencies are low...

WalksWithCrows • March 10, 2015 4:04 PM

Anyway, I am near finishing the book, and there are quite a number of very big and powerful ideas presented therein. But yes, I would have to say it peaks in exactly this conundrum, which sums it up well:

But there's a big idea here too, and that's the balance between group interest and self-interest.

That is a sort of title in itself. But then, that is the title. And within there are so many different streams of exceedingly excellent thought.

Considering the privacy changes the US and UK are pushing for, however, I am not so sure this will be a matter of debate for decades. Maybe it is more of a good argument to find ways to short their publics' continued confidence in pushes made under the auspices of high capital, when the credit rating should actually be incredibly low.

Both Cameron and numerous key US leaders are not saying, "We want positive reform, for we have found ourselves bankrupt." Instead, they are pushing ahead for even heftier "loans" based on a consumer credit of confidence that is blatantly misplaced.

It is hard to find such chances at returns, such immense shorting opportunities where the gains are so obvious. And, for that matter, so many investing creditors still gullible enough to continue with them on such a blatantly bad track. That is quite a big potential feast coming up.


Of course, there does remain merit in those societies. I am not much of an expert on the UK, though they do, like the US, have very good and strong economies of artistic product, for instance.

I suppose relatively speaking for the size, then, it is about the same.

And then, of course, both nations have many who are not buying into this continued load of crap. Nor did they ever. In a very real sense, they have already been making long-term shorting investments, which clearly will pay off for them.

(I am not speaking about literal money in these things.)


BoppingAround • March 12, 2015 5:03 PM

Re: Europe's reaction to media usage by terrorists: in other words, censorship in action.

Alan • March 12, 2015 8:05 PM

@Doubletalk was right.

Rights don't belong to groups, they belong to individuals.

To the other guy: the SCOTUS decision on Hobby Lobby was right as far as it went, and wrong because it didn't go far enough. Part of the reason is that Roberts, along with the entire federal government, tried to continue centralizing government power in the ongoing effort to obliterate individual rights, because they want to globalize control with their world government in formation.

Utilitarian arguments are useless until people understand the moral imperative of individual rights and respect the principle that our natural state is freedom from all coercion.

The individual is greater than the group. Individual > Group = True.



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.