When considering the development of the autonomous vehicle (AV) – a vehicle, a car for example, that no longer needs a driver – we quickly realise that ethics occupies a special place in the debates (1). This relates to the moral dilemmas that such a vehicle, driven by artificial intelligence, would face. These dilemmas would arise in tragic accident situations in which a passenger would not be able to take control of the driving. The control system would decide by itself. Its programme would therefore have to incorporate moral decision-making rules. To resolve these dilemmas, one invokes either utilitarian perspectives (the vehicle’s decision rules must maximise the well-being of all the persons involved in a situation) or deontological perspectives (its decision rules must respect the human being by never treating a person as a mere means to an end). If the steering system of the autonomous vehicle follows the first perspective, it will be able to decide who should be sacrificed in an unavoidable accident situation. If it follows the second, it will refuse to make such a choice, because designating people to be spared or sacrificed would be an inherently immoral option. The situations that make up these moral dilemmas seem to ignore the role of sentiments in ethics – the very role that sentimentalist ethics places at its centre. We discuss this absence in this post and the next.
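
To make the contrast concrete, here is a minimal sketch – purely illustrative, with hypothetical names and a deliberately crude representation of an accident situation, not a description of any actual AV system – of how a utilitarian decision rule and a deontological constraint might be encoded:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Trajectory:
    label: str
    expected_casualties: int   # harm the trajectory is predicted to cause
    designates_victims: bool   # whether it singles out people to be sacrificed

def utilitarian_choice(options: List[Trajectory]) -> Trajectory:
    # Maximise overall well-being: pick the option with the fewest expected casualties.
    return min(options, key=lambda t: t.expected_casualties)

def deontological_choice(options: List[Trajectory]) -> Optional[Trajectory]:
    # Never treat a person as a mere means: reject every option that designates
    # people to be sacrificed, even when doing so would reduce total harm.
    permissible = [t for t in options if not t.designates_victims]
    if not permissible:
        return None  # the system refuses to choose
    return min(permissible, key=lambda t: t.expected_casualties)

if __name__ == "__main__":
    options = [
        Trajectory("stay on course", expected_casualties=3, designates_victims=True),
        Trajectory("swerve", expected_casualties=1, designates_victims=True),
    ]
    print(utilitarian_choice(options).label)  # "swerve": fewest casualties
    print(deontological_choice(options))      # None: both options designate victims
```

The point of the sketch is only that, under the deontological constraint, refusal can itself be the output: where every option designates victims, the system returns no choice at all, whereas the utilitarian rule always selects one.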

1.

A review of the use of the terms “sentiment,” “emotion,” “sympathy,” and “empathy” in various recent studies on the ethical issues raised by the autonomous vehicle reveals that they are virtually absent (2). For example, the research article on the “Moral Machine” (a serious online game in which people make moral decisions when faced with accident scenarios involving AVs) (3), whose results were published recently (4), contains none of these terms. Another widely quoted article, which deals with the social dilemma of autonomous vehicles (5), does include occurrences of the words “sentiment” and “emotion,” but the first occurrence does not refer to the concept of “sentiment” in the sense of a “complex, fairly stable and lasting emotional state, composed of intellectual, emotional or moral elements, and which concerns either the ‘self’ (pride, jealousy…) or others (love, envy, hate…)” (6). The second occurrence, on the other hand, deserves to be quoted in full. It concerns the following dilemma, which the algorithm controlling the AV will have to resolve: killing the passenger of the vehicle versus killing several pedestrians.

“The utilitarian course of action, in that situation, would be for the AV to swerve and kill its passenger, but AVs programmed to follow this course of action might discourage buyers who believe their own safety should trump other considerations. Even though such situations may be exceedingly rare, their emotional saliency is likely to give them broad public exposure and a disproportionate weight in individual and public decisions about AVs. To align moral algorithms with human values, we must start a collective discussion about the ethics of AVs – that is, the moral algorithms that we are willing to accept as citizens and to be subjected to as car owners.”

The discussion takes seriously the emotions that such moral dilemmas could evoke in the public because of their consequences – harm to human life, as well as the very fact of choosing between people. But these emotions are neither defined nor analysed (this was not the subject of the researchers’ work). Moreover, it is not clear whether any of these emotions would be felt by the protagonists in a moral-dilemma situation. Indignation, for example, would be among the emotions generated by the “emotional saliency” of the situation, but it would not be shared by the protagonists, i.e. the passengers and pedestrians. Indignation belongs rather to the spectators of a situation, not to its actors. The protagonists would more likely experience fear and horror, and could be seized by panic – emotions that the public can experience only through imagination.

2.

Another article, dealing with the legal doctrine of necessity specific to Anglo-American jurisprudence, applies the concept of emotion to the circumstances of an accident (7). The possibility of derogating from what the law prohibits presupposes special circumstances that deprive the agents of freedom of choice. In the author’s words, the doctrine of necessity deals precisely with “emergency cases in which human agents have intentionally caused damages to life and property in order to avoid other damages and losses, when avoiding all evils is deemed to be impossible.” Does this doctrine apply to situations in which people in a panic make a choice that causes harm to others? And, in that case, can the agents be “excused” because the situation deprived them of their will? If this were the case, that is, if necessity were conceptually tied to excuse, the doctrine of necessity could not apply to the autonomous vehicle’s driving system:

“If necessity was only an excuse based on the weakness of human will and motivation, then it could not arguably apply to any programmed behaviour of AVs. If the point of necessity were to permit to do what is otherwise prohibited when under the emotional pressure of tragic and sudden circumstances, then it would always be impermissible to deliberately instruct in advance an artificial agent to damage anyone.”

 

3.

Cédric Villani’s recent report (in French) on artificial intelligence includes a section on emotions. It concerns social robots and automated assistants. The report notes that “the development of these machines’ capacities for empathy – i.e. their ability to express a particular emotion to adapt to [their] interlocutor at a given moment – can be beneficial in order to personalise, reassure the user.” It also raises two types of risks:

– those related to the “emotional relationships that can be established between the beneficiary and the machine, and their possible consequences (dependence, exploitation of emotional vulnerabilities, confusion with human empathy…);”

– those related to the possible use, “for commercial or surveillance purposes, [of] data on emotions obtained in real contexts.”

 

4.

Let us summarise the meaning of the three types of occurrences we have identified:

– the first concerns the emotions activated in the spectators’ minds by the perception of a moral dilemma involving an autonomous vehicle;

– the second refers to the emotional overload in emergency situations – an overload that an autonomous vehicle, deprived of the ability to experience emotions, could not undergo;

– the third involves the ability of robots to express emotions and, as a result, to put human-machine interactions in a new light.

Do these three types of meanings allow a discussion to begin on the role that a sentimentalist ethics – a morality based on sentiments – could play in resolving the moral dilemmas we have mentioned? We will discuss this issue in the next post.

Alain Anquetil

(1) We refer here to levels 4 (high automation) and 5 (complete automation) of autonomous vehicles – see “The 5 autonomous driving levels explained.” See also this document from the U.S. National Highway Traffic Safety Administration (NHTSA).

(2) These are fifteen texts relating to the ethics of the autonomous car, comprising twelve research articles and three reports, among them the report by Cédric Villani, “Donner un sens à l’intelligence artificielle : pour une stratégie nationale et européenne,” submitted to the French government on 28 March 2018.

(3) E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J.-F. Bonnefon & I. Rahwan, “The Moral Machine experiment,” Nature, Vol. 563, November 1, 2018.

(4) See “Experiment in morality illustrates challenges in programming self-driving cars to make life-or-death decisions,” The Globe and Mail, 24 October 2018.

(5) J.-F. Bonnefon, A. Shariff & I. Rahwan, “The social dilemma of autonomous vehicles,” Science, Vol. 352, No. 6293, 24 June 2016, pp. 1573-1576.

(6) The definition comes from the French site CNRTL. The sentence taken from the article referred to in the previous note relates to “public sentiment” in the sense of an opinion or belief.

(7) F. Santoni de Sio, “Killing by Autonomous Vehicles and the legal doctrine of necessity,” Ethical Theory and Moral Practice, 20(2), 2017, pp. 411-429.
