2018 Annual Report from AI Now

The research group AI Now just published its annual report. It's an excellent summary of today's AI security challenges, as well as a policy agenda to address them.

This is related, and also worth reading.

Posted on December 10, 2018 at 9:27 AM • 28 Comments

Comments

Faustus • December 10, 2018 10:30 AM

@ Bruce

If you have any interest in supporting free general-purpose programming, I wouldn't welcome this report. It is simply an attempt to use the A.I. boogeyman as a Trojan Horse to regulate all sorts of things related to computers.

The Medium article catches on to this: Most of the social problems involved predate A.I.

I am anti warrantless surveillance whether an AI looks at the data or a conventional database does. The problem is THE SENSORS. This information should not be collected. Once it is collected it is going to be analyzed in all sorts of ways, none of which is good, none of which can really be controlled. I can do massive analysis on my servers with nobody knowing the difference. The camera pointing at the street (etc) is where surveillance can be stopped.

An AI algorithm can't be biased and effective. What can be effectively biased is the data the AI learns from. If you feed an AI data from a biased justice system, it will give biased results, because what it knows is only what it is told. If I only feed a facial recognition program black faces it will probably not work as well on white faces (although some generalization should occur.)

The algorithms are not biased, AIs are not biased: WE ARE. We have to choose training data carefully to not pass on this bias. Can you imagine what conclusions an AI would draw watching MSNBC and FOX news all the time? It would effectively be insane.
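This point is easy to demonstrate in a few lines of Python. A minimal sketch with made-up numbers: two hypothetical groups share an identical true risk (30%), but the historical record flags group B twice as often; any model fit to those labels simply learns the biased base rates, not the equal underlying risk.

```python
import random

random.seed(0)

# Hypothetical illustration: the labels come from a biased process, not ground truth.
# Both groups have the same true risk, but group B is "flagged" twice as often
# in the historical data the model learns from.
def biased_label(group, true_risk=0.3):
    flag_rate = true_risk if group == "A" else min(1.0, true_risk * 2)
    return 1 if random.random() < flag_rate else 0

data = [(g, biased_label(g)) for g in ("A", "B") for _ in range(10_000)]

# A model that learns each group's historical base rate -- which is exactly
# what any well-fit classifier converges to on this data.
def learned_rate(group):
    labels = [y for g, y in data if g == group]
    return sum(labels) / len(labels)

print(round(learned_rate("A"), 2))  # close to 0.30
print(round(learned_rate("B"), 2))  # close to 0.60: the model reproduces the bias
```

Swapping in a fancier classifier changes nothing: on this data it can only converge toward the 0.30/0.60 rates it was shown, never toward the equal true risk it was never told about.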

PeaceHead • December 10, 2018 12:44 PM

The image tells a lot.
In some web browsers/configurations it can't be fully and properly viewed.
Here's a direct link to it for those who want to think about it:

https://cdn-images-1.medium.com/max/2000/1*cgcqBdxRpPIzhJ0lbQB81g.png

The image stimulates my mind more than the article.
I think the image's insights deliver.

AI really is a pain as it is in this troubled era.
I hope that benevolent AI can be properly shielded and insulated from data corruption and ethical corruption and behavioral corruption etc.

Most other AI is a SEVERE WASTE OF ALL PRECIOUS RESOURCES.

As usual, we have set up for a fool's errand.
Trashing and eradicating everything wonderful and inherently valuable just to install gear which destroys and damages everything wonderful and inherently valuable is NOT my idea of progress!!!!

So yeah, as yet, AI tends to be a pAIn. (And so do Natural Language Interfaces and Programs, which are NOT the same!)

As usual, the ones pushing for the most devastating items in all of human history tend to be very dangerous individuals and groups who lack both common sense and compassion.

I feel that it's every rational person's duty to do whatever is creatively possible to block those engineers of Armageddon every time. Otherwise, they will charge us money for breathing oxygen to pay for activation of the doomsday devices they build with monies and resources they stole from all of us.

I am not even crazy and I'm not even joking.

Peace Be To All Alliance In The Fight Against World Terrorism!

{stand down pre-emptively; pre-emptively stand down; we can't afford to accidentally extinguish all life on Earth}

It's more important to secretly save massive quantities of lives than to follow the protocols that get us all killed DEAD rather rapidly. I've done the math; I remember my humanity.

P.S.-The holidays next year will be a lot happier if we can dropkick the dwarf of oppression off our backs. Happy Holidays, everyone, regardless of creed, credo, religion, ethnicity, etc.

Phaete • December 10, 2018 2:53 PM

Another security report that shows just an empty page if your JavaScript is turned off.
Not the best start there.

The best noscript-tag experience I had was a security page that congratulated me on having JavaScript turned off; I can't remember the site, though.

Jesse Thompson • December 10, 2018 3:10 PM

I think that one of the elements that always appears to be missing in any of these conversations at the intersection of AI vs bias, regulation vs surveillance is simply one of control.

Bias can only be defined relative to an expected outcome. Somebody has to define that outcome, and then somebody else has to enforce it for it to make any difference to anyone else.

The people writing the AI Now 2018 report expect Government to define and enforce it.

Julia Powles warns that government can't really be trusted to define or to enforce it, "we" are supposed to.

Alright, then who is "we"?

"We" built the government specifically to represent our interests. Either "We" have the power to force their hand to do "our" will, or we don't in which case "we" don't have a seat at the table to begin with.

There is no way for us to band together to make all of our voices heard that is not some variant of a government: either working through the current one or else deposing them and installing a new one. Because that's all government is: the collected power of the people governed.

Sometimes not all of the people governed, especially in plutocracies, but you can't wrest control from the current government without in the process making a new one in its place regardless.

So I find it stupid to criticize the current government (regardless of the fact that of course they deserve it) without being able to offer and then collectively stand behind superior alternative suggestions about how to govern.

And then we have to agree on what alternatives we actually intend to implement. I am in favor of UBI, for example. I'd wager that 75% of the regular commenters on this blog are against it and that Bruce himself would most likely just keep his lips zipped shut in preference to touching that third rail. ;)

So how can "we" govern 300 million people in practice (the US, in this example) when we can't even work out a fair way to agree to support the same policies to start with? Some way for each of our ideas to compete in a sportsmanlike manner, get a single (or synthesized) winner, and everyone actually pledges to support whatever does win that contest?

After all, that is how democracy is supposed to work; we've just learned enough from history to know that gerrymandering (as a subset of representation; I'm actually against term-based representation these days as well) and first-past-the-post are ingredients we don't want in any new implementations.

And it wouldn't hurt to work out some actually secure manifestation of electronic voting while we're at it, too. I mean holy crap on a cracker, if we can't protect the fact that each of a large number of people checked certain boxes then how do we claim to be able to secure any assets at all, in any circumstances?

Impossibly Stupid • December 10, 2018 3:52 PM

Meh. Both links seem to conflate the current fad of machine learning with AI; it's far from it. It is silly to try to address these issues as though they were meaningful in the context of machines. The bias and overreaching surveillance and everything else are problems that are the result of our all-too-human natural "intelligence". The only way machines are going to solve our issues is if we actually do develop a genuine AI some day. Otherwise, we need better human systems to address human foibles. Sadly, we do not appear to be living at a time when humans are seriously interested in solving their problems. Looking to machines to "think" for us at this point in history thus only compounds the problem.

gordo • December 10, 2018 8:56 PM

Regarding "AI Now", "unknown knowns" and "The people themselves . . ."

---

Nostalgia, nirvana and ‘unknown knowns’ in public policy
The twin impulses affecting policy debates
By Helen Sullivan

Asia and the Pacific Policy Society
19 February 2018

Looking ahead, the impulses of nostalgia and nirvana may have increasing salience as populism feeds off the certainty of nostalgia, and the digital revolution brings the techno-political nirvana into reach.


But there is another feature of the knowledge-policy interface that is inherent in both the ‘nostalgia’ and ‘nirvana’ impulses, and impacts on how well we understand the potential of different policy models, approaches and instruments – the ‘unknown knowns’.

‘Unknown knowns’ are defined by Slavoj Žižek as those things we are unaware of knowing, or choose not to know, but nevertheless form the background to our public values and so inform how we practice public policy.

https://www.policyforum.net/nostalgia-nirvana-unknown-knowns-public-policy/

---

What Rumsfeld Doesn't Know
That He Knows About Abu Ghraib
by Slavoj Zizek

In These Times
May 21 2004

In March 2003, Rumsfeld engaged in a little bit of amateur philosophizing about the relationship between the known and the unknown: "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know." What he forgot to add was the crucial fourth term: the "unknown knowns," the things we don't know that we know-which is precisely, the Freudian unconscious, the "knowledge which doesn't know itself," as Lacan used to say.


If Rumsfeld thinks that the main dangers in the confrontation with Iraq were the "unknown unknowns," that is, the threats from Saddam whose nature we cannot even suspect, then the Abu Ghraib scandal shows that the main dangers lie in the "unknown knowns" - the disavowed beliefs, suppositions and obscene practices we pretend not to know about, even though they form the background of our public values.

http://www.lacan.com/zizekrumsfeld.htm

---

Social Problems
by Henry George 1883
Chapter 01
The Increasing Importance of Social Questions

[13] There is in all the past nothing to compare with the rapid changes now going on in the civilized world. It seems as though in the European race, and in the nineteenth century, man was just beginning to live -- just grasping his tools and becoming conscious of his powers. The snail's pace of crawling ages has suddenly become the headlong rush of the locomotive, speeding faster and faster. This rapid progress is primarily in industrial methods and material powers. But industrial changes imply social changes and necessitate political changes. Progressive societies outgrow institutions as children outgrow clothes. Social progress always requires greater intelligence in the management of public affairs; but this the more as progress is rapid and change quicker.


[14] And that the rapid changes now going on are bringing up problems that demand most earnest attention may be seen on every hand. Symptoms of danger, premonitions of violence, are appearing all over the civilized world. Creeds are dying, beliefs are changing; the old forces of conservatism are melting away. Political institutions are failing, as clearly in democratic America as in monarchical Europe. There is growing unrest and bitterness among the masses, whatever be the form of government, a blind groping for escape from conditions becoming intolerable. To attribute all this to the teachings of demagogues is like attributing the fever to the quickened pulse. It is the new wine beginning to ferment in old bottles. To put into a sailing-ship the powerful engines of a first-class ocean steamer would be to tear her to pieces with their play. So the new powers rapidly changing all the relations of society must shatter social and political organizations not adapted to meet their strain.

[15] To adjust our institutions to growing needs and changing conditions is the task which devolves upon us. Prudence, patriotism, human sympathy, and religious sentiment, alike call upon us to undertake it. There is danger in reckless change; but greater danger in blind conservatism. The problems beginning to confront us are grave -- so grave that there is fear they may not be solved in time to prevent great catastrophes. But their gravity comes from indisposition to recognize frankly and grapple boldly with them.

[16] These dangers, which menace not one country alone, but modern civilization itself, do but show that a higher civilization is struggling to be born -- that the needs and the aspirations of men have outgrown conditions and institutions that before sufficed.

[. . .]

[19] The progress of civilization requires that more and more intelligence be devoted to social affairs, and this not the intelligence of the few, but that of the many. We cannot safely leave politics to politicians, or political economy to college professors. The people themselves must think, because the people alone can act.

https://web.archive.org/web/20050427152733/http://www.schalkenbach.org:80/library/george.henry/sp01.html

Humdee • December 10, 2018 9:30 PM

It is misleading to say that bias is a social problem. Bias is a life necessity. The only comprehensible definition of social bias is someone else's preference that you happen to dislike. Thus any attempt to "fix" bias is either an attempt to paper over genuine differences of opinion or a backdoor attempt to forcibly impose one set of values on people who don't want them.

Having said that, I do agree with the larger view the Medium article takes which is the problem of data collection concentration. But that issue has nothing to do with promoting socially liberal causes and linking them together is a social and political mistake.

gordo • December 10, 2018 9:54 PM

@ Humdee,

"It is misleading to say that bias is a social problem. Bias is a life necessity."

Insofar as we fail to seek common ground, biases are social problems.

Bruce Schneier • December 11, 2018 6:01 AM

@Faustus:

"If you have any interest in supporting free general purpose programming I wouldn't welcome this report. It is simply an attempt to use the A.I. boogeyman as a Trojan Horse to regulate all sorts of things related to computers."

Given that I am generally in favor of regulating all sorts of things related to computers, this would be a good outcome -- although I would not use the words "boogeyman" and "Trojan Horse."

I get that an unregulated Internet was a really fun libertarian playground, but that was when it fundamentally didn't matter. Now it does -- now the Internet affects life and property, not to mention liberty, freedom, and democracy -- and the computer industry will need to look a lot more like the rest of society.

Bruce Schneier • December 11, 2018 6:03 AM

@Humdee

"It is misleading to say that bias is a social problem. Bias is a life necessity."

Something can be both a social problem and a life necessity.

Faustus • December 11, 2018 9:19 AM

@ Bruce

This is your house and I have no intention of being contentious with you here. I appreciate that you share this forum.

I personally can't think of any recent internet regulation that has actually been helpful, rather than decreasing liberty on the net and shoring up monopolies. Based on your posts I think you generally agree. FOSTA, for example, is causing all sorts of limitations on personal expression.

I guess net neutrality was the last regulation that seemed helpful, may it rest in peace. Surprisingly its demise hasn't yet seemed to have much fallout.

Concerns about drawing conclusions from data that includes information about protected classes (race, religion, gender, etc) and proxies for protected classes (things like address and name give a good indication of race and maybe religion for example) are really concerns about statistics. AI is not required to use this data to reinforce existing biases. It's already being used sans AI to make all sorts of decisions about people.

I would love to see credit reporting agencies better regulated. I would love for people who don't care about credit to be able to opt out of data collection. I think people should be told exactly what data points were used in making decisions about them. But I am not holding my breath!

My main point is that little of this is about AI. It is about data.

But I don't think we are going to find consensus on data. This data is used in official and unofficial policing that I think people support much more than I do. Bob wants to do a background check on his internet date. Mrs. Smith wants a background check on a potential tenant. The Jones family doesn't want "sex offenders" (which could be almost anything these days) in its neighborhood. The Rotary wants law and order. The Jacksons want to qualify for a loan that is near the limit for their income. BigCorp Insurance doesn't want to go broke paying unanticipated claims.

Data both serves us and hurts us and the data that hurts one probably helps another. This isn't a new thing in any sense except scale. Without consensus, politicians will probably continue to serve the connected and to appease loud interest groups with badly thought out policies.

As far as the dangers that are unique to AI go, like world domination and robot soldiers, I do not want these things. But do we really want to hobble our own AI work and let this tech be developed by our adversaries?

You note that I am a libertarian. It is true, but I am a humanist libertarian. I think mutually beneficial solutions are available if we could bypass our human apex-predatory natures. I don't think that leaving people to stew in misery is a good solution for anyone. I don't worship private property in the face of abject human need.

But I also think the vast majority of the world is leading a better and better existence. Almost everyone is rich in the gifts of technology. I am not jealous of the yachts and mansions of the super rich because most of them have made great contributions to our society, and obsessively addressed the needs of our culture through doing long hours of tedious and (to me) unsatisfying managerial work. They are more likely to deliver alternate energy, space travel, disease cures and a global warming solution than governments elected by confused and angry citizens.

Sancho_P • December 11, 2018 10:03 AM

@Bruce, Humdee, gordo

Maybe the word “problem” should not be in the sentence at all.
“Problem” (*here*) generally means something unpleasant, but bias is not:
It is the reason why some survive and others don’t.

Bias is pleasant, esp. if we recognize that it is bias.
It’s easy to spot in others, but we have trouble noticing our own.

Faustus • December 11, 2018 10:56 AM

@ Jurgen

Great link. I agree that ham-fistedly inserting an anti-bias bias into the input or results of an AI will ruin the optimization. I suspect that people will want to preserve any bias that benefits protected groups so cleansing the data of protected data and its proxies probably won't be satisfactory.

I think the most promising approach is to adjust the scoring (aka reward) function to add a compensatory value to adjust the score based on evidence of prior bias.
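One way to sketch that compensatory adjustment (a hypothetical toy, not a production fairness method; the groups, scores, and offset rule here are all made up): measure each group's historical mean score against a reference group, then add the gap back into the score.

```python
# Toy sketch of a compensatory scoring adjustment for prior bias.
def group_means(scores):
    """Mean historical score per group, from (group, score) pairs."""
    by_group = {}
    for group, s in scores:
        by_group.setdefault(group, []).append(s)
    return {g: sum(v) / len(v) for g, v in by_group.items()}

def compensated_score(raw, group, means, reference="A"):
    # Offset = how far this group's historical mean sits below the reference's.
    offset = means[reference] - means[group]
    return raw + offset

# Hypothetical historical scores showing group B systematically scored lower.
historical = [("A", 0.7), ("A", 0.8), ("B", 0.4), ("B", 0.5)]
means = group_means(historical)
print(round(compensated_score(0.45, "B", means), 2))  # 0.45 raw + 0.30 offset -> 0.75
```

Real debiasing is far subtler than this -- the sketch ignores *why* the historical gap exists -- but it shows where such a correction would plug into a reward function.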

But will this "affirmative action" be accepted by the majority? Will it have unexpected negative consequences?

vas pup • December 11, 2018 11:34 AM

Artificial synapses made from nanowires:
https://www.sciencedaily.com/releases/2018/12/181205133928.htm

"Scientists from Jülich together with colleagues from Aachen and Turin have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired "neuromorphic" processors, able to take over the diverse functions of biological synapses and neurons.

For years memristive cells have been ascribed the best chances of being capable of taking over the function of neurons and synapses in bioinspired computers. They alter their electrical resistance depending on the intensity and direction of the electric current flowing through them. In contrast to conventional transistors, their last resistance value remains intact even when the electric current is switched off. Memristors are thus fundamentally capable of learning."

Impossibly Stupid • December 11, 2018 12:04 PM

@Humdee

"Bias is a life necessity."

You are conflating judgement with prejudice. The former is a valid part of life, but the latter is the problem that is actually being discussed when people talk about bias. Machine learning is only presented with a subset of data, which itself is often skewed based on artifacts of historical prejudice. And since it isn't AI, it doesn't have the ability to question the completeness or accuracy of that data. It can be very wrong about its conclusions, but lacks the ability to detect its own mistakes; that's the kind of bias people are taking issue with.

@Faustus

"My main point is that little of this is about AI. It is about data."

Exactly. Machine learning breakthroughs have made sifting through data much easier, but it is not AI. And most times the derivative data that is the result is essentially a black box, because the neural networks don't have the ability to meaningfully describe what they learned.

"As far as the dangers that are unique to AI go, like world domination and robot soldiers, I do not want these things."

There's nothing there that's "unique to AI". People have long sought to dominate the masses and kill other people with machines. If anything, an actual AI might be the thing that gets us away from faulty human thinking and blind ambition. The greater danger is not that smart machines will enslave us, but that megalomaniacal sociopaths will continue to use dumb machines to do that. Intelligence, in any form, is not what a ruling party wants as a check to their power.

Faustus • December 11, 2018 12:32 PM

@ Impossibly

"If anything, an actual AI might be the thing that gets us away from faulty human thinking and blind ambition. The greater danger is not that smart machines will enslave us, but that megalomaniacal sociopaths will continue to use dumb machines to do that. Intelligence, in any form, is not what a ruling party wants as a check to their power."

This is an interesting reversal of thought. We have started assuming AIs are dangerous. Maybe that is an incorrect assumption.

I think the question hinges on whether fairness can be discovered in a similar way to the way mathematical truths can be discovered. If there is objective, computable fairness, an AI would be a likely agent to find it.

I think most or all of us probably wouldn't like that fairness, though. We are animals, not angels or computers. And we are apex predatory animals who take our position at the pinnacle of the pyramid of feeding as our right. We use the idea of fairness to strengthen the pack, to make us better predators, and ultimately to serve ourselves. I don't think we viscerally would accept a fairness that didn't benefit our predatory nature at least indirectly.

One insight of Buddhism is that "evil" never benefits the evildoer when all is taken into account. I don't think more than 10 people deeply believe this, though. And that is probably an exaggeration!

gordo • December 11, 2018 12:42 PM

@Sancho_P, Bruce, Humdee, (@Impossibly Stupid, just now seeing your post),

Considering the issues at hand, and their socio-technical nature, maybe the word prejudice is the more precise social construct, with bias working as its technical euphemism. Rewording the sentence, then, we see the problem:

It is misleading to say that prejudice is a social problem.

As Henry George put it, above:

"There is danger in reckless change; but greater danger in blind conservatism."

The danger we face comes as the two combine. Amplifying the status quo, while expedient, is the problem, and it, too, has a social component.

vas pup • December 13, 2018 9:14 AM

@all: I am in the process of reading the report. There are good suggestions that I agree with, e.g. standardization of requirements for AI training data sets.
Regarding others: e.g., if you know that women reoffend less than men and apply a gender parameter to train AI separately for both subsets, then by the same token why could you not train AI separately to predict reoffending by race?
I am afraid that the current tendency of political correctness will override real science, and AI prediction/evaluation in particular, challenging math as the basis for decision making.
Science (any science) starts with measurement and sometimes delivers results that you have to know rather than results you like: inconvenient truth. That is the nature of science, which by objective research sometimes overrides our common sense -- otherwise we would still think that the Earth is flat. At the end of the day 2+2=4 in the US, UK, Iran, Russia, China, North Korea, Saudi Arabia, Israel, Germany, and even on the Moon and Mars, because math is neutral to ideology (unlike statistics, which can be manipulated -- fake polls). For me, political correctness currently plays the same bad role against science as the Inquisition did in the Middle Ages. I stick with Ben Franklin: "No freedom of thoughts no wisdom". Just separate facts from opinions, objective reality from subjective preferences; fight ideas with ideas, not by labeling opponents, personal attacks, and a kind of social McCarthyism.

gordo • December 13, 2018 6:43 PM

@ vas pup,

The maths use statistics.

The accuracy, fairness, and limits of predicting recidivism
Julia Dressel and Hany Farid
Science Advances 17 Jan 2018:
Vol. 4, no. 1, eaao5580
DOI: 10.1126/sciadv.aao5580

One widely used criminal risk assessment tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS; Northpointe, which rebranded itself to “equivant” in January 2017), has been used to assess more than 1 million offenders since it was developed in 1998. The recidivism prediction component of COMPAS—the recidivism risk scale—has been in use since 2000. This software predicts a defendant’s risk of committing a misdemeanor or felony within 2 years of assessment from 137 features about an individual and the individual’s past criminal record.


[. . .]

While the debate over algorithmic fairness continues, we consider the more fundamental question of whether these algorithms are any better than untrained humans at predicting recidivism in a fair and accurate way. We describe the results of a study that shows that people from a popular online crowdsourcing marketplace—who, it can reasonably be assumed, have little to no expertise in criminal justice—are as accurate and fair as COMPAS at predicting recidivism. In addition, although Northpointe has not revealed the inner workings of their recidivism prediction algorithm, the accuracy of COMPAS on one data set can be explained with a simple classifier (7); we confirm their result here. In further agreement with Angelino et al. (7), we also show that although COMPAS may use up to 137 features to make a prediction, the same predictive accuracy can be achieved with only two features, and that more sophisticated classifiers do not improve prediction accuracy or fairness. Collectively, these results cast significant doubt on the entire effort of algorithmic recidivism prediction.

http://advances.sciencemag.org/content/4/1/eaao5580.full
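For a sense of how simple "only two features" is: the two features Dressel and Farid use are age and number of prior convictions. A hypothetical linear (logistic) classifier over them looks like the sketch below; the coefficients are made up purely for illustration, not the paper's fitted values.

```python
import math

# Hypothetical two-feature logistic classifier of the general shape the
# paper describes. Weights w_age, w_priors, and bias are illustrative only.
def predict_recidivism(age, priors, w_age=-0.05, w_priors=0.25, bias=0.0):
    z = bias + w_age * age + w_priors * priors  # linear score
    return 1 / (1 + math.exp(-z))               # logistic link: probability in (0, 1)

# With these made-up weights, youth and prior convictions push risk up.
young_with_priors = predict_recidivism(age=20, priors=5)
older_no_priors = predict_recidivism(age=50, priors=0)
print(young_with_priors > older_no_priors)  # True
```

The paper's point is that a model this small matches the predictive accuracy of COMPAS's 137-feature score, which is what casts doubt on the whole enterprise.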

vas pup • December 14, 2018 9:38 AM

@gordo - Thank you for the information provided.
Based on my understanding, AI and statistics used by humans (even when both employ an unbiased approach) can only provide the probability of a particular case occurring, so using them to make a final decision is not a matter of 0 or 1, but rather of making a decision between 0 and 1.
E.g., tossing a coin many times, you get a distribution closer to 50:50 the more times you repeat the test. The outcome of any single test is uncertain. But you look to be better at math than me - I'd appreciate your critique/input/clarification.
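The coin-toss intuition is the law of large numbers, and it is easy to check with a short simulation: the fraction of heads drifts toward 0.5 as the number of tosses grows, while any single toss stays unpredictable.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def heads_fraction(n):
    """Fraction of heads in n fair coin tosses."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# The fraction converges toward 0.5 as n grows; individual tosses stay 50/50.
for n in (10, 1_000, 100_000):
    print(n, round(heads_fraction(n), 3))
```

This is exactly the gap between a well-calibrated probability and a verdict about one case: the 100,000-toss estimate is tight, but it says nothing certain about toss number 100,001.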

gordo • December 14, 2018 6:48 PM

@ vas pup,

You're very kind, and no, I'm neither a mathematician nor a statistician.

On the question of recidivism likelihood, the COMPAS software does use a scale, while the human assessment performed was binary. Even so, maybe it's a comparison of apples and oranges, yet the accuracy of predictive outcomes for each method was, for all practical purposes, the same.
Given the stakes, one should expect better from software systems like COMPAS.

My comment was meant to say that, at some point, the math begs interpretation, thus, "The maths use statistics". I don't see any way around that.

vas pup • December 17, 2018 10:57 AM

@gordo in particular and @all:
Science (real/honest science) is objective (math in particular), but ideology and laws adopted by humans are subjective. I see many examples where human laws try to suppress the laws of nature with every tool available, including several levels of violence, throughout human history ever since the first state was set up and started generating laws.
My point is that legislators could adopt a law saying the sun rises in the west and sets in the east, but the sun does NOT care. That is a truism to illustrate my point. At the end of the day, human laws and ideologies (subjective/ideologically biased) which contradict the laws of nature (discovered or not yet discovered by science) are either not enforceable at all or require violent enforcement -- a kind of modern version of the Inquisition, aka the Thought Police of the XXI century.
The same applies to the utilization of cross-referenced information obtained by the IC from multiple independent sources as input for decision making at the highest level of ruling folks, versus asking the IC to provide information that confirms already-made subjective decisions. By the way, logic is fought with logical fallacies (the latter being pseudo-logic for brainwashing folks with a higher IQ than the average Joe/Jane). I'd say you need to apply the same paradigm to false/ideologically biased statements as to fighting flu: you can't totally isolate everybody from human contact (virus / false information, including AI), but you give a flu shot to create some resistance to, and recognition of, such statements based primarily on their CONTENT, not exclusively on their SOURCE. That is an open-minded approach for AI, e.g. China developed a workable gene-editing tool, CRISPR. Q: Should all other advanced nations reject it just because they don't like Chinese ideology/political structure?

gordo • December 17, 2018 8:55 PM

@ vas pup,

"That is an open-minded approach for AI, e.g. China developed a workable gene-editing tool, CRISPR. Q: Should all other advanced nations reject it just because they don't like Chinese ideology/political structure?"

Regarding the Crispr twins incident, and the Chinese government's response, please see: http://www.xinhuanet.com/english/2018-11/29/c_137640246.htm

Q:With respect to the Chinese government's response to this incident, to which law(s) of nature has(have) the Chinese government done violence?

Clive Robinson • December 18, 2018 4:04 AM

@ vas pup,

"but ideology and laws adopted by humans are subjective"

You could have added "and capricious" at the end there ;-)

vas pup • December 18, 2018 12:29 PM

@gordo: Sorry, I can't continue on that - the Moderator would ban me from the blog for my views. In short, mankind has long utilized knowledge to alter biological objects in favorable ways (like domesticating dogs, creating new sorts of plants used as food, and recently adding goat genes so that milk can be used to create super-strong strings, etc.). The Russians tried to create a hybrid of man and ape (which thankfully failed). The degree of interference with the laws of nature (in biology in particular) is ideologically biased (e.g. creating designer babies - I guess, as history confirms, such bans will affect regular folks, while folks with money will override them and go to places with a favorable ideology, OR governments will secretly work on creating super-soldiers - a kind of Hollywood come to life).
@Clive: Agree. I always stick to the point that draconian laws are not iron laws, but rather capricious laws. Those laws create an environment for their selective application: "For friends everything - for others, the Law".
I recently found a good phrase in the book 'The Perfect Weapon' (I hope to read the whole thing soon):
"In short, we are inventing new vulnerabilities faster than we are eliminating old ones." That was about cybersecurity - the same applies to laws. Some of them lie dormant in the books for decades until it is necessary to hook somebody and utilize them. That is the guilt-by-default paradigm.

vas pup • December 18, 2018 3:49 PM

@all:
Artificial intelligence is being used by musicians to help compose melodies, write lyrics and even perform. It may only be a matter of time before a computer has a number one hit.

http://www.bbc.com/future/story/20181217-the-musical-geniuses-that-cannot-hear
"And AI music is also finding its way into the charts. Music producer Alex da Kid’s track Not Easy, featuring X Ambassadors, Elle King and Wiz Khalifa was a Top 40 hit in the Billboard Chart in 2016. It used IBM Watson, a computer system capable of answering questions posed in natural language, [!!!]to read blogs, news articles, and social media to gauge emotional sentiments around topical themes[that is perfect for targeted psychological warfare - I guess has potential]. It also analysed the lyrics of the top 100 songs for each week over the previous five years.

With this data, Watson “arrived at an [!]emotional fingerprint of culture,[!]” according to IBM, which was used to help create the song’s simple lyrics.Da Kid then used Beat – IBM’s AI music making software – to pick out musical elements that would be pleasing to the listener, meaning the AI partially wrote the hit song, or at least inspired parts of it.

Google’s DeepMind team is working to take this concept even further with a new project called WaveNet, which will create “a deep generative model of raw audio waveforms… able to generate speech which mimics any human voice”. Using a network of algorithms that is modelled on the human brain, it takes in audio and can then push it out in new forms. It raises the possibility of AI not only writing music and lyrics but also singing them."



Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.