The Ethical Considerations of AI Development

Ever since I was young, I have been fascinated by artificial intelligence (AI) and determined to create human-level AI.

Not long ago, this ambition was seen as a fanciful dream, reserved for the cinema screen or video games. However, with the rapid development of AI, conversations about human-level machines are making their way into the mainstream, and the issue of ethics is taking center stage.

In 2014, I founded GoodAI, a research and development company based in Prague. Our aim is to develop general artificial intelligence – as fast as possible – to help humanity and understand the universe.

General versus narrow AI

When discussing AI, it is important to understand the difference between general and narrow AI.

General AI is a system that can adapt the way it approaches novel tasks, thus becoming more efficient at the tasks it encounters next. The aim of general AI is to solve problems that not even its creators can anticipate. It is often referred to as human-level AI, or strong AI, and it has not been created yet.

Narrow AI, by contrast, is something we see every day. It refers to an AI system that can perform a very specific task but does not do much else. For example, Google DeepMind recently created an AI that mastered the ancient Chinese board game Go. The machine competed in a tournament and beat the best players in the world. However, if you asked it the difference between a cat and a dog, it could not give you an answer. Narrow AI is often referred to as weak AI or specific AI.

Finally, it has been theorized that once general AI has been reached, it will not take long for AI to surpass humans in terms of intelligence, reaching a stage of superintelligence.

I believe the questions of ethics are most important in the development of general AI. Once AI reaches human levels of intelligence, how can we ensure that it will be “good” and share our values?

Master Chinese Go player Ke Jie is defeated by Google’s artificial intelligence program AlphaGo during their first match on May 23, 2017.

Creating morals

The outlook of an AI agent is very much determined by its creator, who programs and teaches it.

It is impossible to simply hard-code a set of morals, or ethics, into an AI system that tells it what to do in every scenario. It is not enough to teach basic concepts such as “right” and “wrong” or “good” and “bad.” Values and morals change with time and context, and are rarely black and white.
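
As a toy illustration of this point, here is a minimal Python sketch (the scenarios, rules, and names are invented for this example; this is not GoodAI code) of morality as a hard-coded lookup table. The table answers only the scenarios its creators anticipated and is silent on everything else:

    # Hypothetical, illustrative only: morality as a hard-coded rule table.
    HARD_CODED_RULES = {
        "person is drowning": "call for help",
        "found a lost wallet": "return it to the owner",
    }

    def act(situation: str) -> str:
        # The table can only answer for scenarios its creators anticipated.
        if situation in HARD_CODED_RULES:
            return HARD_CODED_RULES[situation]
        raise NotImplementedError(f"No rule covers: {situation!r}")

    print(act("person is drowning"))  # "call for help"
    try:
        print(act("two people are drowning"))
    except NotImplementedError as err:
        print(err)  # the table is silent on any unanticipated situation

However many rules the creators add, the world can always present a situation that falls outside the table, which is why a fixed rule set cannot substitute for a deeper understanding of values.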

Furthermore, we must aim to teach AI agents to understand things in the way we do. For example, if we give an AI the instruction “help people,” we have to be sure that the AI has the same understanding of “people” as we do.

This is why we aim to instill a deep understanding of human values in our AI. With this understanding, it will be able to make complex decisions and judgments in real-life situations.

At GoodAI, our ultimate aim is to create general AI that can be used as a powerful tool by humans. We could use it to augment our own intelligence and help us to solve some of the most pressing global issues.

This stage of development would see AI become part of our everyday activities. We would use it without even thinking, as naturally as we put on a pair of glasses. However, as humans and AI become closer, and possibly even merge, it is the understanding of human values that will be vital to making sure it is safe.

Learning like a child

Philosopher Nick Bostrom has outlined a scenario in which an AI is given a single objective – to maximize its paperclip collection. In his example, a superintelligent AI decides that eliminating humans would be an efficient way to maximize its collection.

The scenario is an extreme example of how a mundane, seemingly harmless command could lead to disaster if an AI is not sufficiently taught about humans and their values.

At GoodAI, we teach our AI agents in schools, much like you would teach a child. Our aim is to teach them a complex set of values so they are not as one-dimensional as the AI in the example. We have carefully tailored curricula that expose them to increasingly complex environments. We are teaching them to understand the world the way we do, and to respect human morals and ethics. The aim is to train them to use knowledge they have already learned and apply it to situations they are encountering for the first time – we call this gradual learning.
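
The sketch below is a rough, toy illustration of the gradual-learning idea in Python, not GoodAI's actual curricula or tools; the corridor environment, the Q-learning agent, and all names in it are invented for this example. A single agent is trained on progressively longer corridors, and the values it learns in each stage carry over to the next:

    import random

    def train(q, length, episodes=300, alpha=0.5, gamma=0.9, eps=0.1):
        # Toy environment: a corridor of cells 0..length; the agent starts
        # at 0 and must reach the goal at `length`. Actions: 0 = left, 1 = right.
        for _ in range(episodes):
            state = 0
            while state != length:
                if random.random() < eps:
                    action = random.choice([0, 1])  # occasional exploration
                else:
                    action = max([0, 1], key=lambda a: q.get((state, a), 0.0))
                nxt = max(0, state - 1) if action == 0 else state + 1
                reward = 1.0 if nxt == length else -0.01
                best_next = max(q.get((nxt, a), 0.0) for a in (0, 1))
                old = q.get((state, action), 0.0)
                # Standard Q-learning update of the state-action value.
                q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
                state = nxt

    q = {}  # the agent's knowledge, kept across all curriculum stages
    for length in (2, 4, 8):  # increasingly complex environments
        train(q, length)
        print(f"corridor of length {length}: {len(q)} state-action values learned")

Because each short corridor is contained in the longer ones, what the agent learns early transfers to the later stages, so every new environment is approached with accumulated knowledge rather than from scratch.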

I see our AI agents as blank canvases. Our job is to fill them with knowledge so that they can navigate for themselves and make decisions about what is morally and ethically acceptable.

For now, we are teaching the AI. However, with time, it is likely that AI will reach superintelligence and be far smarter than the best human minds in every field.

At this point it may be difficult to draw a line between humans and AI, because humans will be using it to augment their own abilities. At this stage, we will be able to use AI to create new, better values and completely transform society.

Race for general AI

Reaching the level of superintelligence seems a long way off, especially since we haven’t reached general AI yet. However, it is essential to make sure that the work we do now ensures the safe development of AI.

As companies, governments, and individuals race to be the first to create a general AI, there is a concern that safety may be neglected. Faster deployment of powerful AI might take priority because of the pressure of economic and military competition, and it could have devastating results if speed comes at the price of safety.

At GoodAI, we run the worldwide General AI Challenge. The second round of the Challenge launches in early 2018, and asks participants to come up with a proposal of practical steps that can be taken to avoid the AI race scenario.

We hope that this will have a positive impact on the development of AI, encourage interdisciplinary discussion among AI researchers, social scientists, game theorists, economists, and so on, and open up the topic of safety in AI development to a wider audience.

A brief history of AI

1308 - Catalan poet and theologian Ramon Llull publishes Ars generalis ultima (The Ultimate General Art), creating an early system of logic. Some believe he’s the founding father of information science.

1666 - Mathematician and philosopher Gottfried Leibniz publishes Dissertatio de arte combinatoria (On the Combinatorial Art), proposing an alphabet of human thought and arguing that all ideas are nothing but combinations of a relatively small number of simple concepts.

1763 - Thomas Bayes develops a framework for reasoning about the probability of events. Bayesian inference will become a leading approach in machine learning.

1898 - Nikola Tesla demonstrates the world’s first radio-controlled vessel. The boat is equipped with what Tesla describes as “a borrowed mind.”

1921 - Czech writer Karel Čapek introduces the word “robot” in his play R.U.R. (Rossum’s Universal Robots).

1943 - Warren S. McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity.” This will become the inspiration for computer-based “neural networks” (and later “deep learning”).

1943 - J. Presper Eckert and John Mauchly begin building ENIAC, the first general-purpose electronic computer, at the University of Pennsylvania. It is completed in 1946.

1950 - Alan Turing develops the “Turing Test”, a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

1951 - Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3000 vacuum tubes to simulate a network of 40 neurons.

August 31, 1955 - The term “artificial intelligence” is coined in a proposal by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The workshop they proposed, which took place a year later, in July and August 1956, is generally considered the official birthdate of the new field.

1959 - Arthur Samuel coins the term “machine learning,” reporting on programming a computer “so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”

1965 - Joseph Weizenbaum develops ELIZA, an interactive program that carries on a dialogue in English on any topic.

1966 - Shakey the robot is the first general-purpose mobile robot to be able to reason about its own actions.

1969 - Arthur Bryson and Yu-Chi Ho describe backpropagation as a multi-stage dynamic system optimization method. It will contribute significantly to the success of deep learning in the 2000s and 2010s.

1972 - MYCIN, an early expert system for identifying bacteria causing severe infections and recommending antibiotics, is developed at Stanford University.

1986 - The first driverless car, a Mercedes-Benz van equipped with cameras and sensors, built at Bundeswehr University in Munich under the direction of Ernst Dickmanns, drives up to 55 mph on empty streets.

1997 - Deep Blue becomes the first computer chess-playing program to beat a reigning world chess champion.

2000 - MIT’s Cynthia Breazeal develops Kismet, a robot that can recognize and simulate emotions.

2009 - Google starts developing, in secret, a driverless car. In 2014, it becomes the first car to pass a U.S. state self-driving test, in Nevada.

2011 - Watson, a natural language question answering computer, competes on Jeopardy! and defeats two former champions.

2015 - Hanson Robotics creates Sophia, a humanoid robot designed to learn and adapt to human behavior and to work alongside humans.

March 2016 - Google DeepMind’s AlphaGo defeats Go champion Lee Sedol.

August 2017 - 116 leading AI and robotics experts sign an open letter calling on the United Nations to ban lethal autonomous weapons, also known as “killer robots.”

October 2017 - The robot Sophia is granted citizenship of Saudi Arabia, becoming the first robot to receive citizenship of any country.

Author: Marek Rosa

Marek Rosa is the CEO and CTO of GoodAI, a general artificial intelligence R&D company, and the CEO and founder of Keen Software House, an independent game development studio best known for its best-seller Space Engineers (more than 2 million copies sold).