Abstract
Through modern driver assistance systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic. This influence will become even more prominent in the near future with the development of autonomous driving functionality. The need to consider ethical principles in the design of such systems is generally acknowledged, but the scope, principles and strategies for their implementation are not yet clear. Most current discussions concentrate on situations of unavoidable crashes in which the life of human beings is existentially affected. In this paper, we argue that ethical considerations should be mandatory for every algorithmic decision of autonomous vehicles, rather than being limited to hazard situations. Such ethically aligned behavior is relevant because autonomous vehicles, like any other traffic participant, operate in a shared public space, where every behavioral decision impacts the operational possibilities of others. These possibilities concern the fulfillment of road users' safety, utility and comfort needs. We propose that, to operate ethically in such a space, an autonomous vehicle will have to make its behavioral decisions according to a just distribution of operational possibilities among all traffic participants. Using an application on a partially autonomous prototype vehicle, we describe how concepts of distributive justice can be applied and implemented in the driving environment, and we demonstrate the impact on the vehicle's behavior in comparison to an advanced but egoistic decision maker.
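To make the contrast drawn in the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' implementation (which is described in the paper and in Weisswange et al. 2019). It compares an egoistic behavior selection, which maximizes only the ego vehicle's score, with one possible reading of a distributive-justice criterion (a maximin rule) applied to scores for all affected road users. All behavior options, road users and utility values are illustrative assumptions.

```python
# Illustrative sketch only: contrasts an "egoistic" behavior selection with a
# justice-oriented one that also weighs the operational possibilities
# (safety, utility, comfort) left to other road users.
# All utilities and behavior options below are hypothetical placeholders.

from typing import Dict

# Each candidate behavior maps every road user (including "ego") to a score
# summarizing how well that user's safety, utility and comfort needs are met.
BehaviorOutcomes = Dict[str, Dict[str, float]]

def egoistic_choice(options: BehaviorOutcomes) -> str:
    """Pick the behavior that maximizes the ego vehicle's own score."""
    return max(options, key=lambda b: options[b]["ego"])

def maximin_choice(options: BehaviorOutcomes) -> str:
    """Pick the behavior that maximizes the worst-off road user's score
    (one possible operationalization of a distributive-justice criterion)."""
    return max(options, key=lambda b: min(options[b].values()))

if __name__ == "__main__":
    # Hypothetical merging scene: "force_merge" favors the ego vehicle,
    # "yield_then_merge" leaves more room for another driver and a cyclist.
    options: BehaviorOutcomes = {
        "force_merge":      {"ego": 0.9, "other_car": 0.3, "cyclist": 0.4},
        "yield_then_merge": {"ego": 0.6, "other_car": 0.7, "cyclist": 0.8},
    }
    print(egoistic_choice(options))   # -> force_merge
    print(maximin_choice(options))    # -> yield_then_merge
```

In this toy scene the egoistic selector forces the merge because it only scores its own outcome, while the maximin selector yields because the worst-off participant fares better; other distributive criteria (e.g., equality- or needs-based rules) could be substituted for the selection function.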
Notes
See for example Waymo’s self-driving cars (Waymo 2017) or Volvo’s Drive Me plans for alpha-testing with real families (https://www.volvocars.com/intl/about/our-innovation-brands/intellisafe/autonomous-driving/drive-me).
The trolley problem is a classic ethical thought experiment that is frequently referred to in the ethics of autonomous vehicles. It can be described as follows: A driverless trolley is heading towards a group of people on a track. The only chance to save their lives is to pull a switch that redirects the trolley to a side track, where only a single person is killed instead. This scenario is used to compare ethical justifications in dilemma situations.
“Protecting the human right to free movement while ensuring that the opportunities to meet human needs and interests are fairly distributed” (Mladenovic and McPherson 2016, p. 1135).
Lamont and Favor (2017) present a similar set of key aspects (see the chapter “Scope and Role of Distributive Principles”).
One definition: “justice in distribution of good and evil” in a “comparative treatment of individuals” (Frankena 1963, p. 10).
“Things that every rational man is presumed to want”, for example: “liberty and opportunity, income and wealth, and the social bases of self-respect” (Rawls 2009, p. 54).
He defines common goods as “resources, opportunities, profits and advantages, roles and offices, responsibilities, taxes and burdens - in general, the common stock and the incidents of communal enterprise, which do not serve the common good unless and until they are appropriated to particular individuals” (Finnis 2011, p. 166).
This perspective has become popular as “luck egalitarianism” (Anderson 1999, p. 327).
For instance, Persson argues for an extreme egalitarian position (Persson 2007).
“Imagine a case in which the enslavement of a very small minority population will make the much larger majority population very happy” (Shafer-Landau 2007, p. 415).
For instance, Ferretti argues that any “risk imposition” is of ethical relevance, even if in the end no actual damage occurs (Ferretti 2016).
Leben is also aware of this and claims a more general applicability of his approach in the conclusion of his article (Leben 2017).
In some autonomous vehicle prototypes, behavioral decisions are made by a “black-box” machine learning algorithm that maps incoming information (e.g., image data) to behavior output based on prior training examples (Bojarski et al. 2016).
In a report by the ethics commission appointed by the German Ministry of Transport, the authors claim that autonomous vehicles should be programmed such that they “drive in a defensive and anticipatory manner, posing as little risk as possible to vulnerable road users” (Ethics-Commission 2017, p. 6).
For instance, Nyholm and Smids assume that autonomous vehicles will be programmed to optimize their passengers’ interests: “optimally safe, fuel-efficient, and travel time-efficient” (Nyholm and Smids 2018, p. 2).
For technical details on the system, see Weisswange et al. (2019).
For instance, an ambulance could be a privileged traffic participant.
See for example: Ethics-Commission (2017).
In Ethics-Commission (2017): Paragraph 9 and in more detail in the appendix (only in the German version).
Street regulations, for example in Germany, may contain general traffic principles that are relevant for such scenes. The German road traffic regulation (StVO) obliges cautious driving and mutual respect and consideration as a first principle (Paragraph 1 StVO, http://www.gesetze-im-internet.de/stvo_2013/). Although this is a vague formulation, it is still an important principle, because it allows people to be held partly responsible for, e.g., being incautious, even when strict traffic law (for example the right of way) is on their side.
References
Anderson, E. S. (1999). What is the point of equality? Ethics, 109(2), 287–337.
Bentham, J. (1996). The collected works of Jeremy Bentham: An introduction to the principles of morals and legislation. Oxford: Clarendon Press.
Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L. D., Monfort, M., Muller, U., Zhang, J., Zhang, X., Zhao, J., & Zieba, K. (2016). End to end learning for self-driving cars. In Advances in Neural Information Processing Systems 29 (NIPS 2016) Deep Learning Symposium, Barcelona, Spain. arXiv:1604.07316
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.
Charisi, V., Dennis, L. A., Fisher, M., Lieck, R., Matthias, A., Slavkovik, M., Sombetzki, J., Winfield, A. F. T., & Yampolskiy, R. (2017). Towards moral autonomous systems. CoRR arXiv:1703.04741
Coeckelbergh, M. (2016). Responsibility and the moral phenomenology of using self-driving cars. Applied Artificial Intelligence, 30(8), 748–757.
Cohen, G. A. (1989). On the currency of egalitarian justice. Ethics, 99(4), 906–944.
Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299–309.
Dworkin, R. (1981). What is equality? part 2: Equality of resources. Philosophy & Public Affairs, 10(4), 283–345.
Elster, J. (1993). Local justice: How institutions allocate scarce goods and necessary burdens. New York: Russell Sage Foundation.
Ethics-Commission (2017). Automated and connected driving. Retrieved January 30, 2018, from https://www.bmvi.de/SharedDocs/EN/Documents/G/ethic-commission-report.pdf.
Ferretti, M. P. (2016). Risk imposition and freedom. Politics, Philosophy & Economics, 15(3), 261–279.
Finnis, J. (2011). Natural law and natural rights. Clarendon Law Series. Oxford: Oxford University Press.
Frankena, W. K. (1963). Ethics. Foundations of Philosophy Series. New Jersey: Prentice-Hall.
Gerdes, J. C., & Thornton, S. M. (2016). Implementable ethics for autonomous vehicles. In M. Maurer, J. C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous driving: Technical, legal and social aspects (pp. 87–102). Berlin: Springer.
Goodall, N. (2014a). Ethical decision making during automated vehicle crashes. Transportation Research Record: Journal of the Transportation Research Board, 1(2424), 58–65.
Goodall, N. (2014b). Machine ethics and automated vehicles. In G. Meyer & S. Beiker (Eds.) Road vehicle automation (pp. 93–102), Cham: Springer.
Harsanyi, J. C. (1977). Morality and the theory of rational behavior. Social Research, 44(4), 623–656.
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630.
Liu, H.-Y. (2016). Structural discrimination and autonomous vehicles: Immunity devices, trump cards and crash optimisation. In What social robots can and should do: Proceedings of Robophilosophy 2016/TRANSOR 2016 (Vol. 290, pp. 164–173).
Jenkins, R. (2017). The need for moral algorithms in autonomous vehicles. In P. Otto (Ed.) ETHiCS: A reinvention of ethics in the digital age? (pp. 88–97), Berlin: iRights Media.
Keeling, G. (2017). Commentary: Using virtual reality to assess ethical decisions in road traffic scenarios: Applicability of value-of-life-based models and influences of time pressure. Frontiers in Behavioral Neuroscience, 11, 247.
Kleinehagenbrock, M., Nishigaki, M., Kastner, R., Schmuedderich, J., Rebhan, S., Weisswange, T. H., Kamiya, H., Mori, N., Kusuhara, S., & Ishida, S. (2015). Introduction of intelligent adaptive cruise control (i-ACC): A predictive safety system. In FAST-zero’15: 3rd International Symposium on Future Active Safety Technology Toward Zero Traffic Accidents.
Lamont, J., & Favor, C. (2017). Distributive justice. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy, winter (2017th ed.). Stanford: Metaphysics Research Lab, Stanford University.
Leben, D. (2017). A rawlsian algorithm for autonomous vehicles. Ethics and Information Technology, 19(2), 107–115.
Lin, P. (2015). Why ethics matters for autonomous cars. In M. Maurer, J. C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous Driving: Technical, Legal and Social Aspects (pp. 69–85). Berlin: Springer.
Luetge, C. (2017). The german ethics code for automated and connected driving. Philosophy & Technology, 30(4), 547–558.
Luetge, C., Rusch, H., & Uhl, M. (2014). Experimental ethics: Toward an empirical moral philosophy. Berlin: Springer.
Margalit, A. (1998). The decent society. Cambridge, MA: Harvard University Press.
Mill, J. S. (1861). Utilitarianism. Oxford: Oxford University Press.
Misselhorn, C. (2015). Collective agency and cooperation in natural and artificial systems. In C. Misselhorn (Ed.), Collective agency and cooperation in natural and artificial systems (pp. 3–24). Berlin: Springer.
Mladenovic, M. N., & McPherson, T. (2016). Engineering social justice into traffic control for self-driving vehicles? Science and Engineering Ethics, 22(4), 1131–1149.
Nussbaum, M. C. (1992). Human functioning and social justice: In defense of aristotelian essentialism. Political Theory, 20(2), 202–246.
Nyholm, S. (2018a). Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201–1219.
Nyholm, S. (2018b). The ethics of crashes with self-driving cars: A roadmap, II. Philosophy Compass, 13(7), 1–10.
Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275–1289.
Nyholm, S., & Smids, J. (2018). Automated cars meet human drivers: responsible human-robot coordination and the ethics of mixed traffic. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9445-9
Persson, I. (2007). A defence of extreme egalitarianism. In N. Holtug & K. Lippert-Rasmussen (Eds.), Egalitarianism: New essays on the nature and value of equality (pp. 83–98). Oxford: Clarendon Press.
Rawls, J. (2009). A theory of justice (Revised ed.). Cambridge, MA: Harvard University Press.
Schmidt, V. H. (1992). Adaptive justice: Local distributive justice in sociological perspective. Theory and Society, 21(6), 789–816.
Schmuedderich, J., Rebhan, S., Weisswange, T. H., Kleinehagenbrock, M., Kastner, R., Nishigaki, M., Kamiya, H., Mori, N., Kusuhara, S., & Ishida, S. (2015). A novel approach to driver behavior prediction using scene context and physical evidence for intelligent adaptive cruise control (i-ACC). In FAST-zero’15: 3rd International Symposium on Future Active Safety Technology Toward Zero Traffic Accidents.
Shafer-Landau, R. (2007). Ethical theory: An anthology. Blackwell philosophy anthologies. West Sussex: Wiley-Blackwell.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Waymo (2017). Waymo safety report: On the road to fully self-driving. Retrieved March 03, 2018, from https://waymo.com/safetyreport.
Weisswange, T.H., Rebhan, S., Bolder B., Steinhardt, N.A., Joublin, F., Schmuedderich, J., & Goerick, C. (2019). Intelligent traffic flow assist: Optimized highway driving using conditional behavior prediction. IEEE Intelligent Transportation Systems Magazine (in press).
Acknowledgements
The authors would like to thank the anonymous reviewers for their support in improving the overall quality of the paper throughout the review process.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Dietrich, M., Weisswange, T.H. Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios. Ethics Inf Technol 21, 227–239 (2019). https://doi.org/10.1007/s10676-019-09504-3