
The ethical implications of robotics

In this video we see that robotic systems have the potential to cause harm. Rafael introduces the trolley problem as a way to think about ethics in robotics.

In this video Rafael explains that robotic systems can be misused, cause damage, or even kill. He introduces a thought experiment, called the trolley problem, which demonstrates the ethical implications of designing robotic systems.

The trolley problem is a classic thought experiment in ethics and psychology. Researchers from MIT developed the Moral Machine, an interactive website that generates moral dilemmas for autonomous vehicles based on the trolley problem. You can use the website to solve such dilemmas yourself: each time you are presented with two scenarios and must choose what the autonomous vehicle should do.

Optional: Learn more about the trolley problem.

Try to solve one or two scenarios on the Moral Machine website if you want to experience it and test your judgement.

The website actively collected data from January 2016 to July 2020 (though it is still available for you to try today), and responses gathered from users all around the world were used to analyse the judgements of people from different backgrounds. The findings are interesting, and you can read more in the Nature publication ‘The Moral Machine Experiment’ [1]. Here is the abstract of the paper:

“With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.”

[1] Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F. and Rahwan, I., 2018. The Moral Machine experiment. Nature, 563(7729), pp.59–64.

This article is from the free online course ‘How to Get Into Robotics’, created by FutureLearn.
