Teaching Robots the Do's and Don'ts of Morality

Will Kitson
March 14, 2018

As machines become increasingly sophisticated, there is growing concern over how best to keep them in line. The roles they are taking on require a moral compass of sorts.

It’s a classic story of good versus evil – almost. As machines become increasingly sophisticated, concern is growing over how best to keep them in line. Films like The Terminator and 2001: A Space Odyssey provide a glimpse into people’s worst fears about powerful robots turning against humans.

AI has already become sophisticated enough to beat one of the world's best players of Go, score better than 75 percent of Americans on visual intelligence tests, and even land work writing for the Associated Press. So the best way to ensure these systems are used for good may be for humans to teach them the difference between right and wrong.

While the notion of a nefarious robot uprising is still way beyond the current state of AI technology, the time has come to teach our machine-learned friends the importance of moral decision-making. AI systems are playing a more dominant role in society. Soon, they will be driving in the streets, protecting people’s homes and monitoring cities. And these are roles that require a moral compass of sorts.

The solution could be to train AI machines in morality using a voting-based system, or crowdsourcing. In 2016, researchers from the Massachusetts Institute of Technology created the Moral Machine – an online platform that aims to build a picture of “human opinion on how machines should make decisions when faced with moral dilemmas.” The website invites users to judge what an AI-powered, self-driving car should do when faced with a choice between two destructive outcomes; for instance, whether a car carrying four passengers should collide with four pedestrians or swerve into a roadblock. The ultimate goal of crowdsourcing morality is to impart humankind’s collective ethical reasoning onto AI so that machines will make the same decisions that people would.
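
What might a voting-based approach look like in practice? The sketch below is a minimal illustration, not the Moral Machine’s actual method: it aggregates hypothetical crowd judgments on dilemma scenarios by simple majority, and the scenario labels, votes and function names are invented for illustration.

```python
from collections import Counter

# Hypothetical crowd judgments: for each dilemma, the action each respondent
# said the car should take. Scenarios, actions and votes are illustrative only.
votes = {
    "4 passengers vs 4 pedestrians": ["swerve", "stay", "swerve", "swerve", "stay"],
    "1 passenger vs 2 pedestrians": ["swerve", "swerve", "stay", "swerve"],
}

def majority_policy(votes_by_scenario):
    """Map each scenario to the action most respondents chose."""
    policy = {}
    for scenario, choices in votes_by_scenario.items():
        action, count = Counter(choices).most_common(1)[0]
        policy[scenario] = {
            "action": action,
            "agreement": count / len(choices),  # how contested the decision was
        }
    return policy

for scenario, decision in majority_policy(votes).items():
    print(f"{scenario}: {decision['action']} ({decision['agreement']:.0%} agreement)")
```

Even this toy version makes the limitation plain: the resulting “policy” is only as ethical as the votes fed into it.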

This crowdsourced approach to morality, however, is far from perfect. It relies on the assumption that humankind’s ideas of morality should be imbued into machines. “[It] doesn’t make the AI ethical,” says Cornell Law School professor James Grimmelmann. “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”

The problem has already been broached by both influential companies and governments. Germany, for instance, drafted a set of ethical guidelines for autonomous cars in June 2017, including 20 rules directing software designers to make “safety, human dignity, personal freedom of choice and data autonomy” a priority. And Google – whose parent company, Alphabet, runs the self-driving car company Waymo – has established an Ethics & Society unit within its AI research group, DeepMind. The unit is committed to “the inclusion of many voices, and ongoing critical reflection” in order to explore machine learning technology responsibly.

Any morality companies impart to AI programs is likely to be as flawed as humans are. It seems unlikely that society can teach ethical decision-making to machines without also passing on a predetermined set of biases and flaws. But for now, approaching the problem in a democratic, crowdsourced manner may be the best option available.
