Replacing Judgment with Algorithms

China is considering a new “social credit” system, designed to rate everyone’s trustworthiness. Many fear that it will become a tool of social control—but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.

Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data—and more algorithms—we need to ensure that we reap the benefits while avoiding the harms.

Right now, the Chinese government is watching how companies use “social credit” scores in state-approved pilot projects. The most prominent one is Sesame Credit, and it’s much more than a financial scoring system.

Citizens are judged not only by conventional financial criteria, but by their actions and associations. Rumors abound about how this system works. Various news sites are speculating that your score will go up if you share a link from a state-sponsored news agency and go down if you post pictures of Tiananmen Square. Similarly, your score will go up if you purchase local agricultural products and down if you purchase Japanese anime. Right now the worst fears seem overblown, but could certainly come to pass in the future.

This story has spread because it’s just the sort of behavior you’d expect from the authoritarian government in China. But there’s little about the scoring systems used by Sesame Credit that’s unique to China. All of us are being categorized and judged by similar algorithms, both by companies and by governments. While the aim of these systems might not be social control, it’s often the byproduct. And if we’re not careful, the creepy results we imagine for the Chinese will be our lot as well.

Sesame Credit is largely based on a US system called FICO. That’s the system that determines your credit score. You actually have a few dozen different ones, and they determine whether you can get a mortgage, car loan or credit card, and what sorts of interest rates you’re offered. The exact algorithm is secret, but we know in general what goes into a FICO score: how much debt you have, how good you’ve been at repaying your debt, how long your credit history is and so on.
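
To make the shape of such a system concrete, here is a minimal sketch of a FICO-like score. The factor names, weights, and scaling are invented for illustration; the real formula is proprietary.

```
# Illustrative only: the real FICO formula is proprietary. The factor
# names, weights, and scaling below are invented to show the general
# shape of a score like this, not to reproduce it.

def toy_credit_score(utilization, missed_payments, history_years, recent_inquiries):
    """Map a few credit-history factors onto a 300-850-style scale."""
    score = 0.0
    score += 0.35 * (1.0 - min(utilization, 1.0))    # amounts owed vs. available credit
    score += 0.35 * (1.0 / (1 + missed_payments))    # payment history
    score += 0.20 * min(history_years / 20.0, 1.0)   # length of credit history
    score += 0.10 * (1.0 / (1 + recent_inquiries))   # new credit applications
    return round(300 + 550 * score)

print(toy_credit_score(utilization=0.2, missed_payments=0, history_years=10, recent_inquiries=1))  # 729
```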

There’s nothing about your social network, but that might change. In August, Facebook was awarded a patent on using a borrower’s social network to help determine if he or she is a good credit risk. Basically, your creditworthiness becomes dependent on the creditworthiness of your friends. Associate with deadbeats, and you’re more likely to be judged as one.
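
One way to picture what the patent describes: blend a borrower's own score with the average score of their connections. This is a hypothetical sketch, not Facebook's actual method; the blend weight and cutoff are invented.

```
# Hypothetical sketch of "creditworthiness by association": a borrower's
# own score is blended with the average score of their connections.
# The blend weight and numbers are invented; this is not Facebook's method.

def blended_score(own_score, friend_scores, social_weight=0.3):
    if not friend_scores:
        return own_score
    friends_avg = sum(friend_scores) / len(friend_scores)
    return (1 - social_weight) * own_score + social_weight * friends_avg

# A solid borrower surrounded by low scorers gets dragged under a 680 cutoff.
print(blended_score(720, [520, 540, 560]))  # 666.0
```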

Your associations can be used to judge you in other ways as well. It’s now common for employers to use social media sites to screen job applicants. This manual process is increasingly being outsourced and automated; companies like Social Intelligence, Evolv and First Advantage automatically process your social networking activity and provide hiring recommendations for employers. The dangers of this type of system—from discriminatory biases baked into the data to an obsession with scores over more nuanced measures of a person—are many.

The company Klout tried to make a business of measuring your online influence, hoping its proprietary system would become an industry standard used for things like hiring and giving out free product samples.

The US government is judging you as well. Your social media postings could get you on the terrorist watch list, affecting your ability to fly on an airplane and even get a job. In 2012, a British tourist’s tweet caused the US to deny him entry into the country. We know that the National Security Agency uses complex computer algorithms to sift through the Internet data it collects on both Americans and foreigners.

All of these systems, from Sesame Credit to the NSA’s secret algorithms, are made possible by computers and data. A couple of generations ago, you would apply for a home mortgage at a bank that knew you, and a bank manager would make a determination of your creditworthiness. Yes, the system was prone to all sorts of abuses, ranging from discrimination to an old-boy network of friends helping friends. But the system also couldn’t scale. It made no sense for a bank across the state to give you a loan, because they didn’t know you. Loans stayed local.

FICO scores changed that. Now, a computer crunches your credit history and produces a number. And you can take that number to any mortgage lender in the country. They don’t need to know you; your score is all they need to decide whether you’re trustworthy.

This score enabled the home mortgage, car loan, credit card and other lending industries to explode, but it brought with it other problems. People who don’t conform to the financial norm—having and using credit cards, for example—can have trouble getting loans when they need them. The automatic nature of the system enforces conformity.

The secrecy of the algorithms further pushes people toward conformity. If you are worried that the US government will classify you as a potential terrorist, you’re less likely to friend Muslims on Facebook. If you know that your Sesame Credit score is partly based on your not buying “subversive” products or being friends with dissidents, you’re more likely to overcompensate by not buying anything but the most innocuous books or corresponding with the most boring people.

Uber is an example of how this works. Passengers rate drivers and drivers rate passengers; both risk getting booted out of the system if their rankings get too low. This weeds out bad drivers and passengers, but it also results in marginal people being blocked from the system, and everyone else trying not to make any special requests, avoiding controversial conversation topics, and generally behaving like good corporate citizens.
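
A hypothetical sketch of how such a mutual-rating cutoff might work (the threshold and window are invented, not Uber's actual policy):

```
# Hypothetical sketch of a mutual-rating system with a deactivation cutoff.
# The threshold and rating window are invented, not Uber's actual policy.

def still_active(recent_ratings, cutoff=4.6):
    """Keep an account active only if its average recent rating meets the cutoff."""
    return sum(recent_ratings) / len(recent_ratings) >= cutoff

print(still_active([5, 5, 4, 5, 3]))  # average 4.4 -> False (deactivated)
```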

Many have documented a chilling effect among American Muslims, who avoid certain discussion topics lest they be taken the wrong way. Even if nothing would happen because of it, their free speech has been curtailed because of the secrecy surrounding government surveillance. How many of you are reluctant to Google “pressure cooker bomb”? How many are a bit worried that I used it in this essay?

This is what social control looks like in the Internet age. The Cold-War-era methods of undercover agents, informants living in your neighborhood, and agents provocateurs are too labor-intensive and inefficient. These automatic algorithms make possible a wholly new way to enforce conformity. And by accepting algorithmic classification into our lives, we’re paving the way for the same sort of thing China plans to put into place.

It doesn’t have to be this way. We can get the benefits of automatic algorithmic systems while avoiding the dangers. It’s not even hard.

The first step is to make these algorithms public. Companies and governments both balk at this, fearing that people will deliberately try to game them, but the alternative is much worse.

The second step is for these systems to be subject to oversight and accountability. It’s already illegal for these algorithms to have discriminatory outcomes, even if the discrimination isn’t deliberately designed in. This concept needs to be expanded. We as a society need to understand what we expect out of the algorithms that automatically judge us and ensure that those expectations are met.
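
One concrete form such oversight can take is a statistical test for disparate impact. As a rough illustration, here is a minimal sketch of the “four-fifths rule” of thumb used in US employment guidelines, applied to invented approval data:

```
# Sketch of a disparate-impact check using the "four-fifths rule": any
# group's selection rate should be at least 80% of the most favored
# group's rate. The approval numbers below are invented.

def four_fifths_violations(outcomes, threshold=0.8):
    """outcomes: {group: (approved, applied)} -> groups below the threshold."""
    rates = {g: approved / applied for g, (approved, applied) in outcomes.items()}
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

outcomes = {"group_a": (90, 200), "group_b": (60, 200)}
print(four_fifths_violations(outcomes))  # {'group_b': 0.67}: adverse impact flagged
```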

We also need to provide manual systems for people to challenge their classifications. Automatic algorithms are going to make mistakes, whether it’s by giving us bad credit scores or flagging us as terrorists. We need the ability to clear our names if this happens, through a process that restores human judgment.

Sesame Credit sounds like a dystopia because we can easily imagine how the Chinese government can use a system like this to enforce conformity and stifle dissent. Our own systems seem safer, because we don’t believe the corporations and governments that run them are malevolent. But the dangers are inherent in the technologies. As we move into a world where we are increasingly judged by algorithms, we need to ensure that they do so fairly and properly.

This essay previously appeared on CNN.com.

Posted on January 8, 2016 at 5:21 AM

Comments

Derek Chadwick January 8, 2016 6:08 AM

Now we need the open source community to make a serious effort on anti-profiling software such as machine learning algorithms that generate and propagate disinformation/misinformation that will overwhelm the surveillance algorithms with garbage data and spurious red flags.

Vintermann January 8, 2016 6:55 AM

Winter: that assumes the algorithms are dumber than you, and they might not be for much longer. The data mining we’ve seen so far is crude compared to what is happening, now that DNNs are taking off in a big way.

But as a student of neural nets myself, this also gives me a bit of hope. There’s one thing about DNNs that’s even more important than their power, and which is grossly underestimated. That is their objectivity. They have fewer assumptions built in than older algorithms – they can more directly optimize against what they’re supposed to, with less of a model and fewer preconceptions. Also, like older algorithms but unlike human analysts (however brainy!), they have no ego to protect or private agendas to push.

For a benign example, try Spotify’s Discover Weekly. It is scary how much sense it’s made out of my inconsistent musical tastes.

CouldntPossiblyComment January 8, 2016 6:58 AM

Great article.

Online social interactions seem hard to obfuscate outside of specific interactions where people explicitly talk in code. The bulk of social media’s purpose is to send specific data to targeted individuals who would be confused upon receipt of obfuscated equivalents.

Is it being proposed that everyone on Facebook friends N random strangers (actually… a lot already do that, but they think they’re friends because they spoke once) and adjusts those links randomly? What about on LinkedIn, where doing so would come across as grossly unprofessional?

How does one obfuscate a Twitter feed? It’s easy to filter out known bad feeds if there’s a specific few that produce obvious random garbage. How could regular citizens’ tweets be randomised & produce misinformation?

I guess I’m not seeing how activists could ‘overwhelm surveillance algorithms’ in a way that isn’t trivial to filter out by origin, short of willing participation from the bulk of the people being monitored, and without causing just as much confusion for the intended recipients as for the surveillance.

The closest I could envisage is creating hundreds of thousands of convincing fake people who post and tweet to each other, but over time you’d be able to filter them out because of the lack of interaction with ‘real people’ (degrees of separation etc) – or get shut down for spamming.

As an individual you can just sidestep the whole thing by not using any of these, but Bruce touches on this too – e.g. reaching a point where it’s considered weird & non-conforming to not have a credit card, and where you’re denied access to certain things because of it. I haven’t yet had a job interview where someone said ‘you seem to have no social media presence – could you explain that?’ but I could see it happening in a decade’s time.

HiTechHiTouch January 8, 2016 7:30 AM

May I remind people (and the author) that FICO scores are not a measure of creditworthiness so much as a measure of your potential profitability as a customer.

The “secret” algorithm downgrades people who shop around (count of credit history inquiries) and people who complain (items disputed). It selects for people who accept higher-margin prices and don’t incur support costs.

Is it proper to apply this tendency to a measure of goodness in society?

Rob January 8, 2016 7:45 AM

@CouldntPossiblyComment: “As an individual you can just sidestep the whole thing by not using any of these, but Bruce touches on this too …”

I am a minimal user of social networks: no Twitter, Facebook with only 4 friends (close family) but no posts from me, no LinkedIn, no Flickr, no anything else; a few anonymous comments on blogs; a few questions for help on DIY forums, never login to Google or Microsoft.

I tried, but couldn’t use, AirBnB: no-one would reply to my enquiries. My son tells me it’s because I don’t have a credible on-line identity.

A Proprietary World without Soul January 8, 2016 7:52 AM

Recently the highest levels of the Chinese government publicly met with the highest levels of American high tech at Microsoft’s Seattle campus. These CEOs posed for a remarkable group picture depicting the voluntary cooperation between a repressive government and companies that monetize user privacy using proprietary algorithms.

American high tech is again meeting in Silicon Valley, but this time in secret. The subjects include restoring secret, non-judicial, warrantless back doors and weakened encryption. The goal is to allow complete government access to all citizen data without any oversight, just like before Snowden.
This time the government has the gift of legal immunity under CISA. The only issues remaining are to establish profitable, confidential power-sharing agreements.

http://www.theguardian.com/technology/2016/jan/07/white-house-social-media-terrorism-meeting-facebook-apple-youtube-

Clearly this second meeting establishes a precedent where American high tech will only be allowed to operate in countries AFTER allowing the host government to eavesdrop. For their part, the companies will insist on, and be granted, legal indemnification.

It’s notable that the ad-click-dependent American ‘free’ press did not risk reporting on this meeting.
As we see, this new highly integrated mass surveillance system already allows control of the press. Governments throughout the world will be able to track their opposition with daily reports of their every thought and action. The White House/NSA/CISA team has already provided a great example.

Can this all-knowing proprietary system be weaponized by ambitious corporations, PACs or ruling political parties?
Can they manufacture evidence or set up situations to weaken the opposition? Can they tailor news?
Do corporations behave badly knowing they are above the law with no chance of being prosecuted? Look no further than hedge funds and Wall Street expanding by financing profitable high-tech surveillance.
A proprietary (zero transparency) system where every citizen becomes a high-value target.

Winter January 8, 2016 8:12 AM

@A Proprietary World without Soul
“Clearly this second meeting establishes precedent where American High tech will only be allowed to operate in countries AFTER allowing the host government to eavesdrop.”

And how is this news?

Let’s start with the US. Did you miss the Snowden documents?

scot January 8, 2016 8:53 AM

“It’s already illegal for these algorithms to have discriminatory outcomes, even if they’re not deliberately designed in.”

So how does that work? If the bias was not due to any conscious design decision–maybe not even an unconscious one–then who is at fault? That seems akin to making it illegal to rain on the 4th of July; sure, it’s a great goal, but what purpose does it actually serve, other than giving the impression of doing something without actually doing something?

Even more frightening:

http://www.bloomberg.com/news/articles/2016-01-07/apple-buys-startup-that-sees-what-s-behind-your-smile

This is, at a primitive level, mind reading. “Effective” psychics are good at this, providing prompts and reading reactions to seek out information leaking out of a mark involuntarily. With sufficiently sophisticated algorithms (or insufficiently poker faced individuals) this becomes a way of detecting thoughtcrime, and once it’s detectable, how long before it becomes illegal?

jordan January 8, 2016 9:23 AM

@scot

“It’s already illegal for these algorithms to have discriminatory outcomes, even if they’re not deliberately designed in.”

So how does that work?

The task already involves collecting, collating and cross referencing huge volumes of personal information, right? Get some help from the NSA, who already do this, and are used to doing 7 illegal things each morning before breakfast at Milliways.

Gweihir January 8, 2016 9:39 AM

This is not a new question: It is the question whether what can be done technologically should be done. Failures in this task very nearly destroyed the human race several times over (cold war), so the human race in general is not very good at this.

However, in many cases sanity does prevail in the end. The world did not die in nuclear fire because some people refused to push the button, even given clear indications that they should. ABC weapons are banned and, except for a few rogue states, nobody uses them. (They are still being researched and stockpiled by the insane, though.) Spying on citizens is at least considered a dishonorable and dangerous thing, although the desires of an honorless and paranoid minority are very problematic here.

Hence this will remain interesting, in particular which level of catastrophe will be required before enough people wise up.

paul January 8, 2016 10:06 AM

@Vintermann:

The ability of neural networks to ferret out weird regularities in a training set is two-edged at best. If the training set is truly representative and contains scores that correlate closely to the thing that the trainer says they’re looking for, that’s great. Otherwise, the neural net may end up amplifying human prejudices in really weird ways. (Way back in the 80s, when this kind of thing first got going, an acquaintance at an AI company told me the story of a neural net that crunched a large bank’s mortgage-decision data set and came back with a configuration that essentially said “All this information about income and job classification and age and credit history is irrelevant; all that’s needed to give a high-quality match to your training set is to know whether the applicant is white or black.”)

This is going to be especially true for things like social networks, where the wisdom of crowds is perhaps not all it could be.

Oh, and another vote for the weirdness of credit scores. They don’t like people who pay on time or by check/cash, regardless of the size of their balances.

Andrew January 8, 2016 10:57 AM

Progress cannot be stopped; if it can be done, it will be done. So soon we will all be subjects of such algorithms.

It will all be like the Quran, a matter of usage and interpretation. They can be good, used as a tool to help some people make decisions, or they can be pushed to the extreme and create a Nazi society. It’s all about us.

Transparency of algorithms is for certain the starting point.

Matt Bury January 8, 2016 10:59 AM

Re: Facebook and SN “analysts”: they’re committing the error of guilt by association, which is supposedly prohibited by all modern democracies. See: https://en.wikipedia.org/wiki/Freedom_of_association It doesn’t matter if citizens are sanctioned by governments or corporations; the outcome is the same. You are not free to associate with whomever you want. You must be careful who you talk to and what you say for fear of losing your job, being denied insurance, credit, or security clearance, being detained and interrogated when crossing borders, etc. It’s already happening on a frighteningly fast-growing scale, and so far next to none of it is open to appeal, judicial oversight, or democratic accountability… it’s a secret for security reasons 😉

I’ll probably get “flagged” for writing this. I wonder if it’ll affect my credit rating at some point in the not too distant future?

Lawless: Feds Pay for Secret Corporate Spies January 8, 2016 11:37 AM

Anything goes when you are above the law. It doesn’t matter in the least if it’s illegal.

Feds paid Amtrak worker to spy on passengers for 20 YEARS
…
The DEA inappropriately also asked the airport screener to report passengers carrying large sums of money, “in exchange for a reward based on money seized by the DEA.” That violated internal DEA procedures, and may have violated people’s right to be free of unreasonable searches and seizures.

http://www.washingtonexaminer.com/watchdog-feds-paid-amtrak-worker-to-spy-on-passengers/article/2579951

RequiredNameHere January 8, 2016 11:39 AM

@HiTechHiTouch

“…FICO scores are not a measure of credit worthiness as much as a measure of your potential profitability as a customer”

That. …and plain old fraud by businesses in the loan process.

I remember this one time that businesses in collaboration with banks totally ignored FICO scores and other readily available financial eligibility metrics and gave loans to folks that obviously couldn’t afford to pay them back and then bundled and sold those crap loans to the public and it nearly caused the financial collapse of a nation.

But that was forever ago. I’m sure now it’s all on the up and up.

AlanS January 8, 2016 1:32 PM

A better title might be “Hiding Judgment with Algorithms”. Yes, sometimes it might simply be replacing but in many cases the algorithms serve a strategic purpose or reflect social judgements. They are rarely neutral.

Ray Dillinger January 8, 2016 3:18 PM

I use learning algorithms on a fairly regular basis, and …

Yeah, they learn to do exactly the kind of illegal discrimination you’re talking about because it maximizes profits.

First example (thank goodness this didn’t go live): a college project in symbolic AI (a rule-based expert system, back-chaining from the imperative ‘maximize profit’ and given a bunch of rules and axioms that were supposed to model the kind of things CEOs make decisions about). The minute its axioms included the fact that “at many companies women get paid less than men,” it cut the salaries of its female employees, on the grounds that there was less competition for those employees and hence it could retain them for less pay, maximizing profits.

Second example: This was from a news story not long ago. A (Harvard?) professor with a stereotypically African-American name observed that when people entered her name in an address-lookup site, the ad on the side was usually from a company that was offering to research her criminal record, whereas when someone entered her colleague’s name the ad displayed on the side was usually from a company that offered to research employment history or school records. And this was because the Neural Network behind the ad server had learned to associate a particular set of names with more clicks on one type of ad, and a different set of names with more clicks on a different type of ad.

Third example: At my first job, I wrote a system that estimated the value of real estate. It discovered that ethnic neighborhoods had lower real estate values and, in use, advised lenders of these estimates – resulting in “lowball” offers to ethnic homeowners trying to get equity loans. There was not a single line of code in that system that checked for the racial mix of a neighborhood; it was just observing patterns including location, and estimating real estate value based on those patterns.

In the long run, as long as humans discriminate, learning algorithms will learn to exploit that discrimination for a profit – usually in ways that also discriminate against the minority. These days we try to be careful to NOT give the algorithms the information that discrimination is based on. So instead of the actual name (which must distinguish individual people, even if it’s illegal to discriminate by race) we give it a hash of the name – still unique by person but no longer ethnically distinguishable. But as long as there’s any pattern it can detect (like a cookie from a business whose clientele are mostly ethnic) these systems will learn to distinguish (and discriminate against) different races by some other criteria and it’s not necessarily easy to figure out what criteria form the pattern it’s detecting.
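
A toy illustration of that dynamic, with all numbers invented: the protected attribute is never shown to the learner, but a correlated feature carries the same signal, so the decisions still split along the hidden line.

```
# Toy illustration of proxy discrimination (all numbers invented): the
# protected attribute is never shown to the "model", but a correlated
# feature (say, a cookie from a store with a mostly-ethnic clientele)
# carries the same signal, so decisions still track the hidden attribute.

import random
random.seed(0)

rows = []
for _ in range(10000):
    protected = random.random() < 0.5                        # hidden from the model
    proxy = random.random() < (0.9 if protected else 0.1)    # correlated, visible feature
    outcome = random.random() < (0.3 if protected else 0.7)  # biased historical outcome
    rows.append((proxy, outcome))

def approval_rate(rows, proxy_value):
    """'Training' here is just the outcome rate conditioned on the visible proxy."""
    outcomes = [o for p, o in rows if p == proxy_value]
    return sum(outcomes) / len(outcomes)

# The model never saw the protected attribute, yet its decisions split on it.
print(round(approval_rate(rows, True), 2), round(approval_rate(rows, False), 2))  # ~0.34 vs ~0.66
```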

The problem is real. The algorithms work. Often they work far too well, and we can’t even figure out how they’re detecting the things they oughtn’t detect and discriminate against. And the problem won’t go away, because maximizing profits is real too. When we cut names out of the system and it can’t use them to figure out when the ‘criminal records’ ad is going to be more profitable, the people running the ad server notice their profits go down and want us to put it back.

Sancho_P January 8, 2016 5:27 PM

@Bruce:

“As we move into a world where we are increasingly judged by algorithms, we need to ensure that they do so fairly and properly.”

No, wrong way, sorry.

Let’s assume we get transparency (public algorithms + personal insight),
as well as oversight and accountability.

We could delete unfair mistakes from the score board.
Perfect. Applause. Not!

The bad thing is the judgement per se.
The judgement reflects the mindset of the average, of the sheeple and their herders.
From burning witches to allowing (?) homosexuality, have we / they changed?
Why? Because of a normalized mindset?

It’s not a question of “fairness” or justice; it would just legalize the wrong way.

Even if “they do so fairly and properly” it will chain up mankind.
Unpleasant future it is.

Impossibly Stupid January 8, 2016 6:28 PM

“And if we’re not careful, the creepy results we imagine for the Chinese will be our lot as well . . . It’s now common for employers to use social media sites to screen job applicants.”

That shark was jumped a long time ago, before social media even existed, when HR became little more than doing keyword searches on resumes. I once had a recruiter come to me asking if I had any iPad development experience, so I explained to them that I do, and it falls under the umbrella of the iOS work I list on my resume. But they needed me to make a ton of edits to add the exact words/phrases so that their stupid algorithms would make the matches. Who knows how many incompetent people are being hired because their resume is just a garbage dump that gets judged to be a great match.

tyr January 8, 2016 7:28 PM

If I remember correctly the Chinese have a dangan (dossier) on every citizen maintained by the government. It has been around since the beginning of the 20th century, maybe longer. Adding automation to that is just the normal feeping creaturism that occurs with new toolsets for bureaucrats.

If a neural net thinks a panda picture is a picture of a vulture, why would you assume the output from any neural net has universal validity? How to even discover programmer bias, let alone fix it, in proprietary software is a nightmare to even contemplate.

Neighborhood Nanny January 9, 2016 12:37 AM

The US government is judging you as well. Your social media postings could get you on the terrorist watch list, affecting your ability to fly on an airplane and even get a job.

This has always seemed like a bad situation needing fixing to me. When I was younger with more naive visions of society, I imagined that creating second class citizenry in this way was a horrible idea. It seems to me lately that the policing forces of our society are more like parasites that, instead of wanting to minimize crime, merely want a fairly easy, comfortable career. Thus when they encounter someone being ‘radicalized’, they (criminally) facilitate that person’s further engagement in a life of escalating criminal behavior. The idea being that once the escalation reaches a certain point, they come in and ‘save the day’, getting the benefits and status within society that results. In general I thought the laws against ‘entrapment’ were meant to combat this kind of outcome. But those attitudes towards ‘entrapment’ seem to be quite different than decades ago (or perhaps I just got the wrong impression when I was younger).

This is what social control looks like in the Internet age. The Cold-War-era methods of undercover agents, informants living in your neighborhood, and agents provocateurs are too labor-intensive and inefficient. These automatic algorithms make possible a wholly new way to enforce conformity. And by accepting algorithmic classification into our lives, we’re paving the way for the same sort of thing China plans to put into place.

Not disagreeing much here, but I think it is extremely important not to dismiss how modern technology vastly lessens the cost of the labor-intensive varieties. With any imagination, having access to NSA cyber tools would make the job of agent provocateurs and neighborhood spies much easier. I mean holy shit, just think about that one for a minute or two. But of course you are correct, a computer performing a task for a billion people is highly concerning. I would only emphasize that your overall concern should factor in both scenarios working in concert. The fully automated surveillance and persecution can vastly reduce the number of dissenters that need to be managed/handled/pressured/silenced. Then with that reduced number, the effectiveness of the human agents with their modern cyber tech makes things pretty profoundly different than they were during the cold war and earlier.

Our own systems seem safer, because we don’t believe the corporations and governments that run them are malevolent.

See the movie ‘The Insider’ starring Russell Crowe as the man who stood up to the Orwellian doublespeak of the tobacco conspiracy. Russian proverb: “Everything our leaders told us about communism was a lie. Everything they told us about capitalism was the truth.”

Yush January 9, 2016 2:07 AM

Fear of these kinds of algorithms is what has pushed me away from Google and search engines in general. It’s making me afraid to explore much of the Internet anymore, as surveillance is aggressively increased and its opponents barely make a peep against it.

I hate what various interest groups are trying to do to the Internet now that it has dramatically centralized. I truly, fundamentally despise this torture. This hell.

Vladimir January 9, 2016 9:29 AM

It is when we have only one mandatory rating system that we put each person in a position to “assimilate or be destroyed”. And it looks like the issue is in the “one mandatory” part, not in the “rating system” part.

Bob Paddock January 9, 2016 9:51 AM

@CouldntPossiblyComment

“Is it being proposed that everyone on Facebook friends N random strangers (actually… a lot already do that, but they think they’re friends because they spoke once) and adjusts those links randomly?”

Facebook itself has taken to sending friend requests to people I know nothing about. I’ve also gotten friend requests from people who never sent them. I now ask each person if they really sent such a request; most said they did not.

Due to my wife’s suicide from Chronic Pain, I am constantly posting health articles about Chronic Pain and the extreme dangers of Fluoroquinolone antibiotics like Levaquin and Cipro et al. People who do not know the history often think I’m the one with these debilitating conditions wishing for death. Yet I’m healthy and highly regarded for helping those people in places that people don’t see because of HIPAA privacy laws. That is the problem with any system that replaces the common sense of a human who could simply ask me about the history.

@scot et al.

How to trick a neural network into thinking a panda is a vulture.

Blake January 9, 2016 2:15 PM

You may also just violate the algorithms’ (presumed) assumptions: when they try to pass judgment, you will easily be able to prove them wrong. If everyone, or at least enough people, does this, the algorithms would be revealed for what they are: simple text-matching programs.

Jeremy Kun January 9, 2016 7:33 PM

It is worth noting that there is an emerging area of research on “fairness for algorithms.” One of the goals is to design a mathematically rigorous theory of what it even means for an algorithm to be fair in the first place.

For anyone interested, check out the website http://www.fatml.org.
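
As a toy example of the kind of definition being studied, “equalized odds” asks that a classifier’s error rates be similar across groups. A minimal sketch with invented labels and predictions:

```
# Toy check of "equalized odds", one candidate formalization of fairness:
# a classifier's true-positive and false-positive rates should be similar
# across groups. The groups, labels, and predictions below are invented.

def rates_by_group(records):
    """records: list of (group, true_label, predicted_label) with 0/1 labels."""
    stats = {}
    for group, y, yhat in records:
        c = stats.setdefault(group, {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
        if y and yhat:
            c["tp"] += 1
        elif y:
            c["fn"] += 1
        elif yhat:
            c["fp"] += 1
        else:
            c["tn"] += 1
    return {g: {"tpr": c["tp"] / max(c["tp"] + c["fn"], 1),
                "fpr": c["fp"] / max(c["fp"] + c["tn"], 1)}
            for g, c in stats.items()}

records = ([("a", 1, 1)] * 80 + [("a", 1, 0)] * 20 + [("a", 0, 1)] * 10 + [("a", 0, 0)] * 90
         + [("b", 1, 1)] * 50 + [("b", 1, 0)] * 50 + [("b", 0, 1)] * 10 + [("b", 0, 0)] * 90)
print(rates_by_group(records))
# Group "b" has a much lower true-positive rate (0.5 vs 0.8): qualified
# members of that group are being missed far more often by the classifier.
```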

AlanS January 10, 2016 7:29 PM

Since the 1870s economists have reduced the economy to algorithms. What is ‘the market’ but one big algorithm in their crazed imaginations? But their market algorithms are themselves cultural artifacts that function by disguising sociocultural factors as ‘externalities’. What’s being discussed above is just an extension of this. Algorithms express relationships. They are not ‘neutral’ or ‘fair’ except within an already assumed and unnoticed sociocultural world.

Vintermann January 11, 2016 1:49 AM

@Mark Xu Neyer

Are you familiar with Raph Levien’s work on attack resistant trust metrics, which he used for Advogato? It looks a lot like yours. In my opinion, it’s sound and underappreciated, and I wish you good luck in building something around your model. I used to play around with concepts like those too.

Especially if you manage to combine that with blockchain technology, you have something really exciting in my opinion… but that alone doesn’t guarantee it will catch on.

@paul

Yeah, I’m not surprised that a learning system with the brain of a gnat (NNs in the 80’s), when told to imitate analysts (rather than optimize outcomes directly, i.e. reduce defaults, increase profits) would just fall into discriminating on race. But since systems have become so much more powerful, it’s now more feasible to target the outcomes directly – provided the will is there.

PetrF January 11, 2016 7:28 AM

@Vintermann: Are the neural networks (big data tools) able to provide causal reasoning for a given result?

I am afraid we are slowly approaching a world, where you will find yourself on a terrorist suspect list just because a black box made that decision.

You stated that the strength of the black box is objectivity. There is no doubt about that. Therefore reasoning is not needed; consequently there is no possibility to appeal… Your constitutional rights are gone, because the machine is “objective” and makes no mistakes.

Just have a look at how fingerprint matching can be screwed up (try to google the Brandon Mayfield case), and this is just the beginning of a world where decisions are made based on machine-generated pattern matches. It is even worse with DNA matching, where the same principles of possible false positives apply (and are often ignored). A match was found, therefore you are guilty, unless you can find someone who will listen to your proof that you are innocent.

Even today, decision makers have little or no knowledge of the scientific (probabilistic) background of pattern-matching systems, yet fingerprint and DNA matching are easier to grasp than the principles of NN-based black-box decision making. Try to prove that the black box is wrong: no one will listen to you unless you are a member of Congress.

Jordan January 11, 2016 2:11 PM

It seems like one of the real problems is that often those correlations are completely valid and very strong… but we somehow also want to weigh in a desire to avoid false positives.

If you have several associates who are terrorists, it doesn’t mean that you’re a terrorist… but the odds are much, much higher that you are than if you don’t have any terrorist associates.

To what extent do we protect “society” by assuming that people who associate with terrorists may well be terrorists, and to what extent do we protect individuals by assuming that there is some better reason for their association with these terrorists?

If the data says that Elbonians are, as a group, a poor credit risk, what do we do? Do we protect profits[*] by discriminating against them, or do we ignore the data to protect those particular Elbonians who are indeed creditworthy? Maybe we could isolate out those particular correlates that are the true cause of the pattern of Elbonian un-credit-worthiness – perhaps, for instance, being covered in mud makes it hard to keep a job, and so clean Elbonians are OK – but that’s really hard.

[*] Some might consider “profits” a dirty word. Substitute instead “jobs” and “ability to invest in other loans”.

Jordan January 11, 2016 2:24 PM

Just noticed that there’s a different commenter using “jordan”. Just to be clear, there are two distinct people.

r January 11, 2016 6:32 PM

@Jordan, differentiate then. 🙂 I’ve seen the moderators here separate imposters for the benefit of others too, so thanks for pointing that out Jordan.

@all,

Adding this: Police Agencies Using Software To Generate “Threat Scores” of Suspects

http://m.slashdot.org/story/305095

More software profiling links.
