February 12, 2016

Clarifai’s A.I. Code of Ethics, or How to Prevent a Robot Apocalypse


Most people don’t consider that artificial intelligence, before becoming “smart,” is first trained by human beings. That means A.I. is and will continue to be influenced by our human perceptions, biases, prejudices, and morals. As a company that creates A.I. products for visual recognition, we’re always thinking about how our technology will evolve and whether it can and should be pushed further.


The biggest impact that artificial intelligence has had on society, so far, seems to be through the movies. When I tell people I work at Clarifai, many of them start asking me if I have seen certain movies about A.I. I cringe a little internally when people do this, because movies like A.I., Her, and Ex Machina dramatize a similar message: beware of A.I. This warning centers on the theory that machines can become autonomous of their human creators. So the question is, how do we prevent the situation where humans lose control of machines?

 

[Image: man sitting at a desk twiddling his thumbs]


Let’s take the movie Her (as pictured above). The first part of the movie sets the scene for the creation of the OS program, Samantha, who has access to large amounts of data. When her user, Theodore, discusses a concept that she does not understand, she goes off and teaches herself, absorbing information that would take a human a lifetime to learn.


The ending of the movie Her leaves you with unanswered questions. Spoiler alert: Samantha and all the other OS’s “leave” at the end of the movie. Did they leave because they had evolved beyond their human companions? Or did Samantha leave because someone pulled the plug? Or had she evolved beyond the point of technological singularity, so that no one could actually pull the plug? Maybe she loved Theodore so much that she knew the best thing for him was to let him go? Was Samantha bad for Theodore or good for him? Was it irresponsible of the OS developers to let humans develop feelings for machines?

[Image: Terminator-style robot skeleton]

The robot apocalypse probably won’t be like this Terminator … we hope.
The movie challenges the role of ethics in A.I. and reflects the broader issues and concerns surrounding A.I. ethics. Recently, I attended an A.I. conference where the topic of ethics was only briefly touched on. I found out later that a smaller group had gotten together to discuss and determine A.I. ethics and best practices. I certainly wish I had gotten that invite, because I believe the discussion is important and needs to be held broadly. I write this post not because I have fully formed opinions, but because I want to start making arguments and, hopefully, inspire others to make their own. Without everyone’s feedback, A.I. cannot move forward ethically and responsibly.


For now, here are some ethical best practices that I am thinking about, drawn from a combination of ethics in law, privacy, education, and research.

 

  • Honesty – no censorship of data, no false data, operate transparently
  • Respect – listen to all opinions (like a peer review process), be open to criticism and the ideas of others
  • Sincerity – be earnest in all endeavors, act in good conscience
  • Gravity – be thoughtful
  • Neutrality – no self-dealing, avoid conflicts of interest
  • Kindness – concern for others
  • Legality – do not cut corners, be fair and just

Ethics are not just a lofty ideal. Once we settle on which ones apply, we need to turn them into industry best practices. The Future of Humanity Institute has noted that reinforcement learning research is already showing hints of machines capable of refusing to be shut down. Control will have to come through other means, and instilling ethics in both the creators and the created is one of them.