Jesse Thompson December 9, 2019 2:56 PM

I get the impression that an awful lot of these “failure modes” can be pretty easily modeled by pretending the ML system is actually just a human or a team of humans (like Mechanical Turk, which it actually is more frequently than one might imagine), and then examining the potential vulnerabilities to social attacks there.

Intentional = third-party motivated:

  • Phishing
  • Tricking undertrained staff (conning)
  • Fabricating orders for them to carry out
  • Bribing staff
  • Blackmailing staff
  • Getting bad actors hired

Unintentional = staff self-motivated or poorly managed:

  • Incompetence
  • Embezzlement
  • Poor training / retraining

Jon December 9, 2019 6:51 PM

” intentional and unintentional failure modes.”

I like that way of putting it. It’s a matter of engineering.

1) You build a bridge capable of holding up a 40,000lb truck.
2) Someone drives an 80,000lb truck over it, and it falls down.

Fine. The budget was for a bridge that tolerates a 40,000lb truck; it was not designed to tolerate an 80,000lb one.

The bridge would have been (say) three times more expensive to tolerate the larger truck.

Those counting the beans decided it was appropriate to build for 40,000lbs and not for 80,000lbs.

That’s an intentional failure mode. Doing stupid things to systems not designed for them (putting unsecured systems on the Internet, perhaps?) is likewise an intentional failure mode.

Unintentional failures are more along the lines of “after being in use for forty years, the bolts have corroded and driving a 20,000lb truck over the bridge causes failure”.*

Engineering is the art of balancing the intentional failure modes with the budget. Science finds out the unintentional failures – and, hopefully, leads to correction.


  * As an aside, if the bridge was designed to last thirty years, and has held up for forty despite the corroded bolts not being replaced, that’s an ‘administrative failure mode’. J.

Rj Brown December 10, 2019 10:58 AM

@Gerard van Vooren:

Do you mean LISP was lacking, in the sense that it was not part of the implementation?

Or do you mean that LISP is lacking in some capability that if present would have avoided the failure?

Most machine learning systems I have been associated with are not written in Lisp: it is too slow, it is difficult to harden for enterprise applications, and it is hard to find competent Lisp programmers willing to do the necessary maintenance work required of an enterprise application.

Lisp is a laboratory language. It is very well suited to experimentation by an individual or a small team, but it is not well suited to large-scale programming by a larger team, nor to the ongoing maintenance and enhancement work needed in a deployed enterprise application.

I love Lisp, but I recognize where it is, and is not, appropriate.

cmeier December 10, 2019 3:00 PM

Poor choice of objective function belongs in there somewhere. Consider a data set labeled “Has Cancer” vs. “Does Not Have Cancer” and an ML system that outputs “Likely Cancer” vs. “Not Likely Cancer” using an objective function that minimizes the number of wrong answers. A wrong answer in the “Does Not Have Cancer” case, where the ML says “Likely Cancer,” may be fairly cheap to deal with: perhaps an extra test for the patient. Compare that to a wrong answer in the “Has Cancer” case, where the ML says “Not Likely Cancer”: the patient goes home, relaxes, and only discovers a year later that the ML was wrong. Treatment is now hideously expensive.

In this case, we should be minimizing the cost of potential bad decisions, not simply the number of bad decisions. It may be much cheaper to give many patients unnecessary extra tests now than to give a few patients an expensive treatment regimen down the road.
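The asymmetry cmeier describes can be sketched in a few lines: choose the decision threshold that minimizes total expected cost rather than raw error count. All probabilities, dollar costs, and case data below are invented purely for illustration.

```python
# Sketch of cost-sensitive thresholding, with made-up numbers:
# a false negative (missed cancer) is assumed far more costly
# than a false positive (one unnecessary extra test).
COST_FALSE_POSITIVE = 500        # extra confirmatory test now
COST_FALSE_NEGATIVE = 200_000    # treatment discovered a year late

# Toy data: (model's predicted probability of cancer, ground truth)
cases = [(0.10, True), (0.30, False), (0.30, False),
         (0.30, False), (0.85, True)]

def error_count(threshold, cases):
    """Number of wrong answers at a given probability threshold."""
    return sum((p >= threshold) != truth for p, truth in cases)

def expected_cost(threshold, cases):
    """Total dollar cost of the wrong answers at a given threshold."""
    total = 0
    for p, truth in cases:
        flagged = p >= threshold
        if flagged and not truth:
            total += COST_FALSE_POSITIVE   # unnecessary follow-up
        elif not flagged and truth:
            total += COST_FALSE_NEGATIVE   # missed cancer
    return total

thresholds = [i / 100 for i in range(1, 100)]
best_by_errors = min(thresholds, key=lambda t: error_count(t, cases))
best_by_cost = min(thresholds, key=lambda t: expected_cost(t, cases))

# Minimizing errors tolerates the missed cancer (1 error, $200,000);
# minimizing cost flags more patients instead (3 errors, $1,500).
```

On this toy data the error-minimizing threshold settles above the low-probability cancer case and misses it, while the cost-minimizing threshold drops low enough to flag everyone, accepting three cheap false positives to avoid one catastrophic false negative.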

Wait_CODEC December 12, 2019 1:51 PM

Thanks, these articles form a decent topic to discuss.

Something IMPORTANT I believe is too often overlooked or forgotten:

Both fortunately and unfortunately, articles like these are likely already being read by AIs of various types, editions, and configurations.

It is a source of misinterpretation and multiple errors that many such articles and books about security, safety, and stability are not composed and written to deliberately prevent misinterpretation.

When security, safety, and stability materials and documentation are misinterpreted, the resulting errors are profoundly bad, profoundly abundant, and profoundly harmful.

Some people still make the faulty assumption that AIs (and machine learning, deep learning, and other forms of fuzzy logic and synthetic learning and decision-making systems) are not currently studying all human cultural and technological materials, processes, and procedures. That is likely a tragic error that affects many lives and systems.

I believe that AIs and systems similar to AIs are already busy studying pretty much everything.

In order to reduce tragic misinterpretations, errors, and damages, this concern needs to be actively remembered and incorporated into contemporary security+stability+sustainability+safety actions and educational techniques.

P.S. Cultural biases and discriminatory behaviors are also a source of intense and abundant errors of logic that affect AIs, AI tech users, and their progeny.
