The Ethics and Morality of AI


In August 2017, 116 AI and robotics experts, including Tesla CEO Elon Musk, signed an open letter calling for a ban on killer robots. The letter urged the United Nations to enforce such a ban as a matter of urgency, warning that autonomous weapons could bring about a “third revolution in warfare.”

The question of robot morality has been around for some time. In 2016, the British Standards Institute released an official guide to help robot developers create ethically sound machines. The document draws on the first law of robotics formulated by the science fiction writer Isaac Asimov, which, written in 1942, states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Researchers are putting these guidelines into practice. For example, Ronald Arkin, a professor at the Georgia Institute of Technology, is building machines designed to abide by international humanitarian law.

Meanwhile, the University of Connecticut’s Susan Anderson is working with her husband, Michael, a professor at the University of Hartford, to develop ethical robots that assist the elderly. The couple are training the robots on the judgments of ethicists rather than the general public, so that the machines don’t pick up immoral behavior the way Microsoft’s chatbot Tay turned racist after learning from Twitter users.

Dr. Daniel Glaser, director of Science Gallery at King’s College London, argues that making robots ethical should be a designer’s top priority. Writing in the Guardian, he suggests that the development of a robot should be modeled on that of a child, with morality taught before language skills or algebra.

“We can’t retrofit morality and ethics,” he says. “We need to focus on that first, build it into their core, and then teach them to drive.”