Comments

DavidFMM January 26, 2016 8:48 AM

“The program scoured billions of data points, including arrest reports, property records, commercial databases, deep Web searches and the man’s social media postings.”

Meaningless number. The numbers that matter are the data points found on the individual, not the size of the entire data set. If the number of relevant data points for the individual is too low, the rating should be inconclusive: in statistics, if the sample size is too small, a trend cannot be definitively established.
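
To make that concrete, here is a hypothetical scorer (thresholds invented; whatever Beware actually does is a trade secret) that refuses to rate anyone it has too few data points on:

```python
import math

# Hypothetical illustration: a rating that refuses to commit when the
# per-individual sample is too small to establish a trend.
def threat_rating(flagged: int, total: int, min_points: int = 30) -> str:
    if total < min_points:
        return "inconclusive"  # too few data points on this individual
    p = flagged / total
    # 95% normal-approximation confidence interval on the flagged proportion
    margin = 1.96 * math.sqrt(p * (1 - p) / total)
    if p - margin > 0.5:
        return "red"
    if p + margin < 0.1:
        return "green"
    return "yellow"

print(threat_rating(3, 5))      # 'inconclusive': 5 data points prove nothing
print(threat_rating(300, 500))  # 'red': enough data to support the trend
```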

Joe K January 26, 2016 8:57 AM

The searches return the names of residents and scans them against a range of publicly available data to generate a color-coded threat level for each person or address: green, yellow or red.

What, no purple? Tinky Winky is gonna feel left out.

Bob January 26, 2016 9:02 AM

I wonder how long it will take for someone to get shot because the machine spits out “red” and gives an officer an itchy trigger finger.

paul January 26, 2016 9:35 AM

Because we know that all that information in public databases is so accurate, and that statistical inferences apply to individuals…

This is the logical endpoint of a system that isolates police officers from the people they police. Beat cops, or even officers in patrol cars with a specified area of operation, get to know their territories and, over time, form an idea of who is what kind of potential threat (or not). Systems like this try to synthesize all that information without actually interacting with any of the people involved. Solves some problems, perhaps, but brings in a host of others.

(I immediately think that these folks read Harriet the Spy as children and missed the cautionary part.)

blake January 26, 2016 9:35 AM

What about an app that takes a photo of the police officer who just pulled you over, cross-references him against news articles, social media posts, and court records, and flashes Green / Yellow / Red for the probability of imminent physical abuse?

Larson January 26, 2016 9:47 AM

I find it very difficult to feel any sympathy for the loss of privacy of people who are constantly stroking their cell phones, texting, tweeting, and Facebooking their every thought the instant it occurs to them. They voluntarily gave up any right to privacy when they joined the social media revolution.

On the other hand, when Fresno Police Chief Jerry Dyer said: “officers are often working on scant or even inaccurate information when they respond to calls, so Beware and the Real Time Crime Center give them a sense of what may be behind the next door”, did he ever stop to think how he would feel if a similar resource were available to everyone else? Let’s suppose that an organization like copblock.org were to start a nationwide database of every police officer in the United States. This could include such information as the home address, personal phone numbers, property records, list of family members, social media postings, disciplinary records, badge number, automobile registrations, and so on, all available from public records. This database could be freely searched by anyone who had an encounter with the officer. Something like this would go a long way toward putting citizens on an equal footing with the police. Why do I get the feeling that Police Chief Jerry Dyer would find that the shoe pinches when it’s on the other foot?

albert January 26, 2016 10:47 AM

@Joe K,
Tinky Winky is listed as “probably homosexual,” so, no, he won’t feel left out. Outed, maybe, but not left out.

@Larson,
Although cops like to think of themselves as a special group, they are -citizens-, just like everyone else. Here we have ‘citizens’, and the ‘military’. The latter group falls under military rules and the military justice system, which has the force of law, recognized by the civilian gov’t.

@Anyone,
Yet another example of trying to get computers to do human work. When are folks going to learn that computers can’t do everything, especially when sound human judgment is required?

I wouldn’t be surprised if that ‘trade secret’ stuff is actually a cover for racial, sexual, religious, and other kinds of profiling.

. .. . .. — ….

Hersey Rulings Replace Equality Enforcement January 26, 2016 11:16 AM

First, why do establishment reporters omit the most obvious questions and situations?
What are the negative Big Data effects of REPORTING a crime?

What about the person calling the police to report a suspicious vehicle driving past their house?
Is the person calling being data-mined for threat or reputation analysis? Will posting on Facebook that “I Luv the police” cause them to treat the call seriously?

This is just one of millions of situations where a citizen calling to report a crime is themselves being investigated and added to a secret, unregulated dossier!
Get it, everyone?

In the past, police could only stop a person if they witnessed or suspected a crime; THEN they would run the criminal background check. Now they go after the person simply CALLING to report a possible crime.

Personally, I seldom use the 911 system, as it’s designed for real emergencies. However, I do call the non-emergency police lines (which are also recorded) for petty stuff. In recent years I’ve noticed the police couldn’t care less about minor infractions, even if it’s against city law. Is this type of system also classifying law-abiding citizens? Why are we paying taxes to enrich data-miners?

I say get a cheap burner phone to avoid being tracked by police and private corporations without your informed consent. When asked for my name, I state that this is an ‘anonymous call’.

What’s to stop employers from using your calls to police as part of a background check? Or political organizations from buying the same data?
The nightmare of Big Data being used to control citizens’ actions continues…

herman January 26, 2016 11:19 AM

@blake: I think you have a winner with your ‘cop checker’. You should run a Kickstarter campaign; you will be covered in gold.

Stasi January 26, 2016 11:31 AM

They tried this in the GDR; to this day, what happened to all the data collected is a mystery. The German intelligence agencies had their hands on it for over a decade, and won’t say even now what they did with it.

Either people don’t care or are too dumb to know how all this data can be misused by governments. We are well and truly heading towards the Orwellian nightmare.

Regarding collating information on the Police, you can bet it will be deemed illegal for reason of “National Security.”

Clive Robinson January 26, 2016 12:01 PM

@ Stasi,

Regarding collating information on the Police, you can bet it will be deemed illegal for reason of “National Security.”

You’d think that after the OPM leak, some idiots up on the Hill would start to see the light rather than be blinded by dollar signs wherever they go…

As it happens, there are already some laws in the US protecting the IDs of certain taxpayer-funded persons (anyone remember Scooter Libby?).

I doubt it would even take a twitch of POTUS’s pen hand to extend them…

That said, contrary to what many think, US jurisdiction has its limits; thus if the site were set up outside of the US and updated via a couple of non-friendly nations, there would be little that could be done.

Further, in the US privacy law is backwards in that it protects the “data collector’s rights” over those of the “data subject”. Thus any such “Cop-Base” would be a “protected work”; if the Gov decided to change that arbitrarily for police or others, they would be signing away the rights of “data collectors”. Think of what that would mean for FaceCrook or the Alphabet Soup formerly known as giggle…

It would be a mess, because laws that try to have it both ways really create loopholes you can steam mega-cruise ships through, and all sorts of other knotty issues. That would make life amusing for the onlookers in the EU as the US judiciary flips and flops like a fish out of water…

Z January 26, 2016 1:11 PM

No measure of precision? Recall? F1 score?

I think if the taxpayers of Fresno (and other cities) are going to pay for it, they deserve to at least know how well it works.
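
Even a coarse benchmark like the following sketch (counts invented for illustration) would tell the public more than it knows now:

```python
# Minimal sketch of the metrics a public report ought to include
# (the counts below are made up for illustration):
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)  # of all "red" flags, how many were real threats?
    recall = tp / (tp + fn)     # of all real threats, how many were flagged?
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 10 true threats flagged, 90 false alarms, 5 threats missed
p, r, f1 = precision_recall_f1(tp=10, fp=90, fn=5)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
# precision=0.10 recall=0.67 F1=0.17 -- a 90% false-alarm rate
```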

Not to mention that the potential psychological effects on officers certainly merit debate. The risk of predisposing first responders seems rather high.

Joe K January 26, 2016 1:21 PM

As if on cue:
Accountability in Algorithmic Decision-making – ACM Queue, January 25, 2016
http://queue.acm.org/detail.cfm?id=2886105

A pertinent excerpt:

For government, FOIA (Freedom of Information Act) and similar laws in the United States and many other jurisdictions compel disclosure of government records and data when requested, though there are, of course, exceptions, such as when a government integrates a third-party system protected by trade secrets. There has been at least one successful use of an FOIA request to compel the disclosure of government source code. In another FOIA case decided in 1992, the Federal Highway Administration resisted disclosing the algorithm it used to compute safety ratings for carriers, but ultimately lost in court to a plaintiff that successfully argued that the government must disclose the weighting of factors used in that calculation.

Thus, FOIA is of some use in dealing with the government use of algorithms, but one of the issues with current FOIA law in the U.S. is that it does not require agencies to create documents that don’t already exist. Hypothetically, a government algorithm could compute a variable in memory that corresponds to some protected class such as race and use that variable in some other downstream decision. As long as that variable in memory was never directly stored in a document, FOIA would be unable to compel its disclosure. Audit trails could help mitigate this issue by recording stepwise correlations and inferences made during the prediction process. Guidelines should be developed for when government use of an algorithm should trigger an audit trail.[see DK Citron and FA Pasquale, 2014, The Scored Society]

It may be time to reconsider FOIA regulation along the lines of what I propose be called FOIPA (Freedom of Information Processing Act). FOIPA would sidestep the issues associated with disclosing formulas or source code and instead allow the public to submit benchmark data sets that the government would be required to process through its algorithm and then provide the results. This would allow interested parties, including journalists or policy experts, to run assessments that prod the government algorithm, benchmark errors, and look for cases of discrimination or censorship. For example, you could take two rows of data that varied on just one piece of sensitive information like race and examine the outcome to determine if unjustified discrimination occurred.
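
A minimal sketch of the paired test the excerpt describes, assuming a hypothetical score() interface to the government’s black box:

```python
from typing import Any, Callable, Dict

# Hypothetical sketch of the paired test: submit two records that differ in
# exactly one sensitive field and see whether the black box's output changes.
def paired_discrimination_test(score: Callable[[Dict[str, Any]], str],
                               record: Dict[str, Any],
                               field: str, alt_value: Any) -> bool:
    """True if changing only `field` changes the algorithm's output."""
    variant = dict(record, **{field: alt_value})
    return score(record) != score(variant)

# Usage against some black-box `beware_score` (name assumed, not real):
# suspect = {"address": "123 Main St", "arrests": 0, "race": "white"}
# if paired_discrimination_test(beware_score, suspect, "race", "black"):
#     print("rating flips on race alone: evidence of discrimination")
```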

Jeremy January 26, 2016 1:39 PM

I doubt it’s possible to formulate any reasonable law that restricts your ability to run computer algorithms on data that you already have.

If you post something publicly on social media, then anyone who wants to can already read it. Your criminal record is more private, but I think we probably want the police to be able to access that, too.

I think any law of the general form “a human being is allowed to read all of that information and then summarize the results to an officer on the scene, but it is not allowed to have a COMPUTER read all of that information and summarize the results to an officer on the scene” would be deeply misguided. Ordinary civilians outside of law enforcement are going to use computer algorithms to help summarize information and make decisions; I think it is both impractical and undesirable to restrict law enforcement from doing so.

Instead, I would want to focus on these areas:

(1) It should be made clear that using a computer algorithm to help you aggregate information does NOT absolve you of the responsibility to use that information appropriately. If you shoot an innocent person, the fact that the computer gave you a “red” assessment should NOT be a defense (at least not in itself). It should be presumed reckless to rely on an algorithm’s output except to the extent that you personally understand what that output represents, what its limitations are, and to what extent it is reliable. (This suggests that it may be IMPOSSIBLE to use a “secret sauce” algorithm responsibly, though maybe there’s a way around that by empirically measuring its performance.)

(2) If the police are creating NEW data streams specifically to feed into these algorithms, then we need to ask whether the police ought to have those data streams in the first place. Searching public records, or databases that they already had? Presumed OK unless someone can articulate a specific reason to the contrary. Deploying new cameras and stingrays, creating new databases? That requires oversight and case-by-case evaluation.

steven January 26, 2016 2:26 PM

Oh, with a system this good, nobody even needs to call the cops. When someone appears on the radar as a ‘threat’, they could go arrest them right away. Perhaps before they commit the crime. Wouldn’t that be great…

Blank Archon January 26, 2016 3:09 PM

DOMINIQUE: “If you’d told your lawyer about your criminal profile she might’ve been better prepared to defend you.”

BLANK REG: “I told you, Dom, I didn’t know!”

DOMINIQUE: “How could you not know you have a criminal profile?”

BLANK REG (holds up a printout): “It’s something called a… the ‘career capability malfeasance program’.”

THEORA JONES: “CPMP. It’s the program that matches Blanks with unassigned profiles. It compares the crime template to the personality template and if it matches, you’re assumed guilty.”

EDISON CARTER: “Criminals can blank out their identities but they can’t get rid of their criminal profile.”

DOMINIQUE: “Are you saying there are more criminal profiles than there are criminals?”

THEORA JONES: “Precisely. If a Blank is arrested now they run a CPMP on him. If the computer says the probability is high enough, it’s considered a match.”

DOMINIQUE (to Reg): “So that isn’t really your profile!”

BLANK REG: “That’s what I’ve been tryin’ to tell you! Blimey, Dom!”

DOMINIQUE: “Isn’t science wonderful? Oh, Reg.”

EDISON CARTER: “Yeah, wonderfully inhuman.”

(From Max Headroom, “Academy”)

qwertty January 26, 2016 3:23 PM

Seems to me that this kind of system is very prone to positive feedback loops. When people have a “red” score, police interventions become more serious, leading to a higher rate of ensuing prosecutions. (“Well, his score was red, so we thought we might as well search the whole house, and looky, we found some drugs!” Meanwhile, the green dot and his stash are safe, since the cops will just ask him a couple of polite questions. Same thing with resisting arrest or insulting officers.)

The main problem here is that, since the system uses criminal records as input, it is likely to reproduce (or even accentuate) biases introduced during prior interactions with the target. Except that this time it won’t be racism, but “science”.
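
A toy simulation of that loop (all numbers invented): two groups offend at exactly the same rate, but the “red” group gets searched far more often, so it accumulates far more of the arrest records the score is built from.

```python
import random

# Toy model: identical behavior, different scrutiny (all numbers invented).
TRUE_OFFENSE_RATE = 0.05                    # both groups offend at the same rate
SEARCH_PROB = {"red": 0.9, "green": 0.1}    # but "red" gets searched 9x as often

arrests = {"red": 0, "green": 0}
for _ in range(10_000):
    for color, search_prob in SEARCH_PROB.items():
        offending = random.random() < TRUE_OFFENSE_RATE
        if offending and random.random() < search_prob:
            arrests[color] += 1             # recorded arrest feeds the next score

print(arrests)  # roughly {'red': 450, 'green': 50}: same behavior, 9x the record
```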

PS: How long until not having a social network presence makes you “red”?

BoppingAround January 26, 2016 4:08 PM

blake,
Easy one, cut out the green colour. The police don’t like cameras being flicked at them, so you can safely assume it’ll be yellow at the very least.

qwertty,

PS: How long until not having a social network presence makes you “red”?

Depends on whom you are dealing with. For some places and people, the answer would be that it’s here already.

HiTechHiTouch January 26, 2016 6:47 PM

I know it’s late to the discussion, but no one has mentioned that the courts have held law enforcement immune for acting on incorrect data in the NCIC (and other) databases.

I’ve personally been jailed because a clerk didn’t finish a transaction.

Afterwards my lawyer gave me case law showing I had no recourse against the clerk, the cop, or anyone else involved. No one could be held accountable for the bad information.

Needless to say, I had no way to know anything about my mysterious appearance in the database until after the guns were pointed at me and the handcuffs were on.

Wang-Lo January 26, 2016 8:20 PM

@Bob: “I wonder how long it will take for someone to get shot because the machine spits out “red” and gives an officer an itchy trigger finger.”

Or for an officer to get shot because the machine tells him the next Clyde Barrow is a “green”.

tyr January 26, 2016 9:11 PM

I knew a guy from Fresno who said their law enforcement was so draconian against ordinary folk because it was the centre of Mafia overlords and they wanted the town to have a low profile. That way no one would notice they were there.

Smokey Joe Alioto also assured the public when he was Mayor that there is no Mafia in San Francisco.

Tony H. January 27, 2016 3:48 PM

@Joe K:
“It may be time to reconsider FOIA regulation along the lines of what I propose be called FOIPA (Freedom of Information Processing Act). FOIPA would sidestep the issues associated with disclosing formulas or source code and instead allow the public to submit benchmark data sets that the government would be required to process through its algorithm and then provide the results. This would allow interested parties, including journalists or policy experts, to run assessments that prod the government algorithm, benchmark errors, and look for cases of discrimination or censorship. For example, you could take two rows of data that varied on just one piece of sensitive information like race and examine the outcome to determine if unjustified discrimination occurred.”

Of course one of the implicit inputs is, as with Volkswagen, going to be “running in benchmark mode”.
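
That is, nothing stops a vendor from shipping something like this entirely hypothetical sketch (not anything Beware is known to do):

```python
# Entirely hypothetical defeat-device sketch, in the spirit of VW: the
# vendor's black box behaves itself whenever the batch looks like an audit.
def score(record: dict, batch_source: str) -> str:
    if batch_source == "foipa_benchmark":
        # clean, defensible behavior while under audit
        return "red" if record.get("arrests", 0) > 3 else "green"
    # field behavior: whatever opaque proxies it really uses (zip code here
    # is just a stand-in for illustration)
    return "red" if record.get("zip_code") == "93701" else "green"

person = {"arrests": 0, "zip_code": "93701"}
print(score(person, "foipa_benchmark"))  # green: passes the audit
print(score(person, "dispatch"))         # red: discriminates in the field
```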

Joe K January 28, 2016 6:53 AM

@Tony H.

Of course one of the implicit inputs is, as with Volkswagen, going to be “running in benchmark mode”.

True, Diakopoulos’s imaginary FOIPA would of course be hampered by perverse institutional incentives to “comply” in bad faith. Just like with FOIA:

https://www.techdirt.com/blog/?tag=foia
https://www.eff.org/deeplinks/2016/01/case-missing-comma-why-congress-must-fix-foias-law-enforcement-exemption

All the same, I appreciated his piece for its exploration of the space between (a) full open-source for all of the State’s automated decision-making, and (b) the status quo.

I shudder to imagine how utterly worthless Beware and many of its friends in the US LEA software ecosystem are. Given the present lack of transparency, how much testing has Beware endured, and of what quality? (For that matter, are tests even definable? i.e., does a specification even exist?) Should one expect it to be any better than an air-traffic-control system sold to cargo-cultists? And, if so, by precisely how much?

And, having said all that:

http://kvpr.org/post/fresno-pd-drops-beware-threat-assessment-program

“The public was alarmed that the police department had secretly started experimenting with software that potentially labeled them as threats to public safety,” [the ACLU’s Matt] Cagle said.

The FPD will continue to use a more limited version of the program that tells officers at the crime center if there is a criminal history at a particular address. In that case, a symbol will appear on the screen that will show them previous arrests and convictions at a particular address. However, it will not assign any type of color-coded threat assessment.

Marcos El Malo January 30, 2016 9:25 PM

This Beware program sounds relatively benign compared to the Minority Report program.
