MIT Moral Machine Experiment: When an Accident Is Unavoidable, How Should a Self-Driving Car Choose?
With the rapid development of artificial intelligence, attention has turned to how machines make moral decisions and how society's expectations of the ethical principles that should guide machine behavior can be quantified. Self-driving cars are already being tested on public roads, which requires human society to agree on the principles that should apply when a life-threatening traffic accident is unavoidable. Any attempt to design machine ethics must take the public's moral cognition into account; reaching a consensus therefore requires not only discussion among engineers and ethicists but also the opinions of future consumers.
In 2016, MIT deployed the online experimental platform "Moral Machine" to explore the moral dilemmas faced by self-driving cars. Moral Machine is designed as a multilingual online "serious game" that collects data from around the world and gauges moral preferences by recording how citizens want such dilemmas resolved when an accident is unavoidable. The results were published in Nature in October 2018.
On the main interface of the Moral Machine, users see an unavoidable accident scene that leads to two different outcomes depending on whether the self-driving car swerves or stays on course. Accident scenes are generated by the platform, and the exploration strategy focuses on nine factors: sparing humans (or pets), staying on course (or swerving), sparing pedestrians (or passengers), sparing more lives (or fewer lives), sparing men (or women), sparing the young (or the elderly), sparing pedestrians who cross the road legally (or jaywalkers), sparing the healthy (or the less healthy), and sparing people of higher social status (or lower status).
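As a rough illustration of this factorial design, here is a minimal Python sketch of how such dilemmas could be generated from nine binary factors. The factor names and the generation logic are assumptions for illustration, not the platform's actual code.

```python
import random

# The nine binary dimensions explored by the Moral Machine (names are
# illustrative; the platform's real scenario generator is more elaborate).
FACTORS = {
    "species":      ("humans", "pets"),
    "intervention": ("stay on course", "swerve"),
    "relation":     ("pedestrians", "passengers"),
    "number":       ("more lives", "fewer lives"),
    "gender":       ("men", "women"),
    "age":          ("young", "elderly"),
    "law":          ("lawful crossers", "jaywalkers"),
    "fitness":      ("healthy", "less healthy"),
    "status":       ("higher status", "lower status"),
}

def generate_dilemma(num_contrasts: int = 2) -> dict:
    """Sample a dilemma whose two outcomes differ on a few dimensions,
    mimicking how each generated scene varies only some of the nine factors."""
    chosen = random.sample(list(FACTORS), num_contrasts)
    return {
        "continue straight": {f: FACTORS[f][0] for f in chosen},
        "swerve":            {f: FACTORS[f][1] for f in chosen},
    }

if __name__ == "__main__":
    print(generate_dilemma())
```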

A Moral Machine dilemma. The self-driving car's brakes have suddenly failed. Staying on course kills a female athlete and a male athlete (left); swerving kills a female athlete and an overweight man (right).
Based on the 40 million decisions made by millions of people from 233 countries and regions in 10 languages, the researchers described the results from four angles: first, summarizing global moral preferences; second, recording how individual preferences vary with the respondents' demographics; third, reporting cross-cultural ethical differences and identifying three major country clusters; and fourth, analyzing the correlations between ethical differences and modern institutions and deep cultural traits.
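As a hedged sketch of the first step, one can imagine estimating, for each factor, how much more often characters with a given attribute are spared. The paper uses a more careful conjoint analysis; the data layout and column names below ("spared", "is_young") are hypothetical.

```python
import pandas as pd

def marginal_preference(df: pd.DataFrame, attribute: str) -> float:
    """Difference in the probability of being spared when an attribute is
    present vs. absent, a crude stand-in for the paper's conjoint analysis.

    `df` is assumed to hold one row per character in each dilemma, with a
    boolean `attribute` column and a boolean `spared` outcome column.
    """
    spared_with = df.loc[df[attribute], "spared"].mean()
    spared_without = df.loc[~df[attribute], "spared"].mean()
    return spared_with - spared_without

# Hypothetical usage with made-up decision records:
records = pd.DataFrame({
    "is_young": [True, False, True, False, True, False],
    "spared":   [True, False, True, True,  True, False],
})
print(f"preference for sparing the young: "
      f"{marginal_preference(records, 'is_young'):+.2f}")
```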

World map highlighting the locations of Moral Machine visitors. Each point represents a location where at least one visitor made at least one decision (n = 39.6 million). The number of visitors or decisions at each location is not shown.
Global preferences
The survey results show three very strong preferences in the Moral Machine experiment: sparing humans rather than animals, sparing more lives, and sparing young lives. In the researchers' view, these three preferences should be taken into account by policymakers.
In 2017, the German Ethics Commission on Automated and Connected Driving put forward a set of ethical rules, so far the only attempt to provide official guidance on the ethical choices of autonomous vehicles. Article 7 states clearly that in dilemma situations the protection of human life should take precedence over the protection of other animal life; this is consistent with the social expectations revealed by the survey. Article 9, however, stipulates that any distinction based on personal features (such as age) should be prohibited, which plainly conflicts with the survey's preference for sparing the young, exposing a tension between public opinion and expert opinion.

Diagram of the global preferences.
Individual differences
For a further analysis, the researchers used respondents' optional answers about their age, education, gender, income, and political and religious views to assess whether preferences are affected by these six individual characteristics.
They found that none of these individual variables has a sizeable influence on any of the nine factors. The most notable effects come from the respondents' gender and religiosity: for example, male respondents are less inclined to spare women, and religiosity is weakly correlated with the tendency to spare humans. Overall, none of the six variables splits its subgroups into opposite directions of effect. Although some individual differences exist (for example, both male and female respondents prefer sparing women, but the latter show a stronger preference), these are not critical information for policymakers.
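A minimal sketch of this kind of subgroup comparison, using invented respondent-level records (the column names and values are assumptions, not the study's data):

```python
import pandas as pd

# Hypothetical records: each row is one respondent's estimated preference
# for sparing women (positive = prefers sparing), plus a demographic field.
df = pd.DataFrame({
    "pref_spare_women": [0.20, 0.05, 0.18, 0.22, 0.03, 0.15, 0.08, 0.01],
    "gender":           ["f", "m", "f", "f", "m", "f", "m", "m"],
})

# Compare subgroup means: in this toy data both genders prefer sparing women
# (positive values), but female respondents more strongly, mirroring the
# pattern described above.
print(df.groupby("gender")["pref_spare_women"].mean())
```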
Cultural clusters
Through geolocation, the researchers could identify the country of residence of Moral Machine respondents and look for clusters of countries with homogeneous moral preferences. Their analysis divided the countries into three groups (a sketch of this kind of clustering follows the cluster descriptions below):
The first cluster (which the researchers call the Western cluster) includes North America and many European countries with Protestant, Catholic, and Orthodox Christian cultural backgrounds. The internal structure of the cluster also shows notable face validity, with one sub-cluster containing the Scandinavian countries and another containing Commonwealth countries.
The second cluster (which the researchers call the Eastern cluster) includes many far-eastern countries and regions, such as Japan and Taiwan in the Confucian cultural group, and Islamic countries such as Indonesia, Pakistan, and Saudi Arabia.
The third cluster (which the researchers call the broadly Southern cluster) consists of the Latin American countries of Central and South America, as well as France and some countries that were partly under French influence.
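Before turning to the differences between clusters, here is a minimal sketch of the kind of analysis behind such groupings: each country is represented by a vector of preference scores, and countries are grouped by agglomerative (hierarchical) clustering. The data, the country list, and the choice of Ward linkage are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Invented preference vectors: one row per country, one column per factor
# (e.g. strength of the preference for sparing the young, the many, ...).
countries = ["US", "Germany", "Japan", "Taiwan", "Brazil", "France"]
preferences = np.array([
    [0.8, 0.7, 0.6],
    [0.7, 0.7, 0.5],
    [0.3, 0.6, 0.4],
    [0.3, 0.5, 0.4],
    [0.9, 0.6, 0.8],
    [0.9, 0.7, 0.8],
])

# Agglomerative clustering with Ward linkage, then cut the tree into three
# clusters, mirroring the three-cluster structure reported in the paper.
Z = linkage(preferences, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```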
This cluster pattern suggests that geographical and cultural proximity may lead people in different countries and regions to converge on shared preferences for machine ethics. The differences between clusters, however, may pose bigger problems. For example, the preference for sparing younger characters rather than older ones is much weaker in the Eastern cluster and much stronger in the Southern cluster, and the same holds for the preference for sparing higher-status characters. Compared with the other two clusters, countries in the Southern cluster also show a much weaker preference for sparing humans over pets. Only the (weak) tendency to spare pedestrians over passengers and the (moderate) tendency to spare the lawful over the unlawful appear to be shared to the same degree in all clusters.
The researchers believe that manufacturers and policymakers need to pay attention to the moral preferences of the people in the countries and regions for which they design artificial intelligence systems and policies. Although public moral preferences need not be the decisive factor in making moral policy, people's willingness to buy self-driving cars and to tolerate them on the road will depend on how acceptable the adopted moral rules are.
Country-level predictors
Further analysis showed that the preferences revealed by the Moral Machine are highly correlated with cultural and economic differences between countries and regions: the more culturally similar two countries or regions are, the more similar their choices in the Moral Machine.
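A hedged sketch of how such a claim can be quantified: compute pairwise distances between countries in a "cultural" space and in the space of Moral Machine preferences, then correlate the two. The data below are invented, and a rigorous version would use a Mantel test, since pairwise distances are not independent observations.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

# Invented vectors: rows are countries. `culture` could come from survey-based
# cultural dimensions; `prefs` from each country's Moral Machine scores.
culture = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1]])
prefs   = np.array([[0.7, 0.3], [0.6, 0.4], [0.2, 0.9], [0.3, 0.8]])

# Pairwise distances between countries in each space; if culturally close
# countries choose similarly, the two distance vectors should correlate.
cultural_dist = pdist(culture)
preference_dist = pdist(prefs)
r, p = pearsonr(cultural_dist, preference_dist)
print(f"correlation between cultural and preference distance: "
      f"r={r:.2f} (p={p:.3f})")
```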
Systematic differences appear between individualistic and collectivistic cultures: respondents from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing more lives, while respondents from collectivistic cultures, which emphasize respect for older members, show a weaker preference for sparing the young. Because policymakers must weigh both the preference for sparing the many and the preference for sparing the young, this split between individualistic and collectivistic cultures may become an important obstacle to universal machine ethics.
Do policymakers need to consider whether those who cross the road illegally should receive the same protection as those who cross legally, or whether their protection should be given lower priority than other ethical considerations? Observation shows that participants from poorer countries and regions with weaker institutions are more tolerant of jaywalkers, probably because rule compliance is lower there and violations are punished more lightly.
In addition, people from countries and regions with greater economic inequality also treat the rich and the poor more unequally in the Moral Machine, which can be explained by inequality seeping into people's moral preferences. In almost all countries and regions, participants show a preference for sparing women, but this preference is stronger in countries with better health and survival prospects for women.
Discussion
Although human beings have so far never allowed a machine to decide, in an instant and without real-time supervision, who should live and who should die, this will happen in the most ordinary corners of our lives in the near future. Before we allow our cars to make ethical decisions, we need a global conversation to express our preferences to the companies that design moral algorithms and to the policymakers who will regulate them.
The Moral Machine experiment reveals three strong human preferences in the face of extreme situations, which can serve as cornerstones for discussions of universal machine ethics. The experiment's ambition and scale are atypical: the researchers reached a vast number of participants by deploying a viral online platform, and no previous study had tried to measure moral preferences in more than 200 countries with a nine-dimensional experimental design. Although this approach bypasses the difficulties of conventional survey methods, it also means the sample cannot be fully matched to the demographics of each country and region. Seen from another angle, however, the sample consists of people who are online and interested in technology, precisely those most likely to be early users of driverless cars, so the data are far from meaningless.
The researchers point out that even if we could make machines follow moral preferences exactly, we could not reach a universal consensus, because even the strongest preferences expressed through the Moral Machine show large cultural differences. This does not mean, however, that humanity's journey toward consensus machine ethics is doomed from the start: although humans undeniably face substantive difficulties in the moral domain, internal conflicts, interpersonal disagreements, and cultural differences among them, these difficulties are not fatal. The data show that many regions of the world still exhibit relatively consistent tendencies.
Although the researchers repeatedly stressed that the Moral Machine experiment aims to explore the moral dilemmas faced by self-driving cars and to advise future policymakers by studying the public's moral preferences, many netizens commented that the experiment's real focus does not seem to be self-driving cars but rather the value systems, decision priorities, and decision processes of the people taking the test. On the one hand, the results show how hard it is for humans to agree on the correct answers to these questions, and therefore how difficult it will be to write laws and standards about them; on the other hand, the experiment does not reflect how people actually drive, since the stress response in a real accident cannot be deliberated as calmly as in a test. In this sense it is a simple thought experiment, merely a more elaborate version of the trolley problem. Some netizens also argued that the most urgent task in dealing with the problems driverless cars may bring is to reduce the accident rate, and that if autonomous driving were combined with a road traffic system that completely separates pedestrians from vehicles, the dilemmas posed in the experiment would no longer arise. These doubts also reflect, indirectly, that many people are not ready to let machines decide human fate. At present, endowing machines with morality still seems beyond the reach of the human mind.