Solving the Problem of Moral Autonomy in Autonomous Vehicle Software


Until now, humans have been responsible not only for operating automobiles but also for acting as their moral decision-makers. The advent of “fully autonomous” vehicles raises the question of whether, and how, vehicles should be morally autonomous. This article addresses: (1) how moral frameworks are integrated into autonomous vehicles (“AVs”); (2) what liability attaches depending on the moral framework; and (3) how vehicles’ moral coding may be regulated in the future.1

(1) The “Knob” Approach to Autonomous Moral Decision-Making

Software developers have begun to program autonomous vehicles to operate by a set of moral rules when on the road.2  But whose morality is it?

One of the most discussed moral dilemmas that autonomous vehicles will face is whose harm takes precedence when an accident is unavoidable: the driver’s, the passengers’, or third parties’.3 Imagine, for example, that an AV approaches an intersection at high speed and a group of five people unexpectedly leaps into the intersection. Should the AV swerve to avoid the group of five even though the maneuver would result in hitting a single pedestrian, or continue forward? Is the answer different if swerving would endanger the life of the driver or passengers in the AV? What if the pedestrian is elderly? What if the pedestrian has a child in a stroller? In these situations, the vehicle’s intelligence will need to assess the dilemma and make the “right” choice.

The decision to prioritize the well-being of a driver or passengers over bystanders, or of groups of bystanders over individuals, may be hard-coded into the car as a “Utilitarian” or “Impartial” moral framework.4 If changing lanes to lessen the death toll brings greater risk to the driver’s own life, the car owner or passenger may instead prefer the path that causes the least harm to themselves; embedded as a core premise of the car’s moral framework, this preference is known as “Full Egoist” mode. Alternatively, the car owner may prioritize others’ safety over their own and want a car embedded with “Full Altruist” mode.

Some researchers have suggested, though the idea is perhaps unlikely to be implemented, leaving the choice of moral framework to the driver herself.5 This approach envisions installing a “moral knob” in the vehicle that the driver can set to her desired moral framework: “Full Altruist,” “Impartial,” or “Full Egoist.”6

All of these options exemplify a “top-down” approach—hard-coding a set of moral premises into a vehicle.7 The vehicle makes decisions within the installed framework without the need for further human intervention.8
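To make the top-down “knob” concept concrete, the sketch below shows one way a selectable moral framework might look in code. It is a minimal illustration only: the mode names come from the research discussed above, but the MoralMode enum, the choose_path function, and the harm weights are hypothetical simplifications, not any manufacturer’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class MoralMode(Enum):
    """Hypothetical 'moral knob' settings drawn from the research above."""
    FULL_ALTRUIST = "full_altruist"  # weight bystander harm most heavily
    IMPARTIAL = "impartial"          # weight all lives equally
    FULL_EGOIST = "full_egoist"      # weight occupant harm most heavily

@dataclass
class PathOption:
    name: str
    occupant_harm: float   # illustrative 0-1 expected-harm estimates
    bystander_harm: float

# Illustrative (occupant, bystander) weights; a real system would rest on a
# far richer risk model than a two-term weighted sum.
WEIGHTS = {
    MoralMode.FULL_ALTRUIST: (0.1, 0.9),
    MoralMode.IMPARTIAL: (0.5, 0.5),
    MoralMode.FULL_EGOIST: (0.9, 0.1),
}

def choose_path(options: list[PathOption], mode: MoralMode) -> PathOption:
    """Pick the path with the lowest weighted expected harm under the knob setting."""
    w_occ, w_byst = WEIGHTS[mode]
    return min(options, key=lambda p: w_occ * p.occupant_harm + w_byst * p.bystander_harm)

# The intersection dilemma above: continuing endangers five bystanders,
# while swerving endangers the occupant and one pedestrian.
options = [
    PathOption("continue", occupant_harm=0.1, bystander_harm=0.9),
    PathOption("swerve", occupant_harm=0.7, bystander_harm=0.2),
]
print(choose_path(options, MoralMode.FULL_EGOIST).name)    # continue
print(choose_path(options, MoralMode.FULL_ALTRUIST).name)  # swerve
```

Note how the same scenario yields different choices under different knob settings; that variability is exactly what drives the liability questions discussed in Part (2) below.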

Conversely, in a “bottom-up” approach, an AV observes and replicates human operators and applies machine learning to develop its own code of ethics.9 Such an approach has already proven capable of teaching an AV how to function on the road.10 Using this process, researchers at NVIDIA Corporation trained an AV to drive on roads and highways by teaching it to detect road features.11 It remains to be seen, however, whether bottom-up learning can capture every nuance of split-second moral decision-making on the road.12
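The bottom-up idea can be illustrated with a stripped-down version of behavioral cloning, the supervised-imitation setup behind the NVIDIA work cited above: a neural network learns to map camera frames to steering commands by mimicking logged human driving. The PyTorch sketch below is a hypothetical toy, not NVIDIA’s actual architecture or training code; the TinyDrivingNet model and the random stand-in data are invented for illustration.

```python
import torch
import torch.nn as nn

# Toy stand-in for end-to-end driving: a small CNN maps a camera frame to a
# single steering command, trained to imitate a human driver's recorded angles.
class TinyDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted steering angle

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frames).flatten(1))

model = TinyDrivingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-in data; a real dataset would hold hours of logged human driving.
frames = torch.randn(64, 3, 66, 200)   # batch of "camera frames"
human_steering = torch.randn(64, 1)    # steering the human actually applied

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), human_steering)  # imitate the human driver
    loss.backward()
    optimizer.step()
```

Nothing in this loop encodes an explicit moral rule: whatever “ethics” the model exhibits is whatever it absorbs from human demonstrations, which is precisely why it is unclear whether a bottom-up system can be trusted with split-second moral decisions.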

Thinking through these moral dilemmas before deploying new technology might help stimulate its acceptance. Often, a new technology’s adoption depends on consumers accepting the product notwithstanding its direct impact on human safety and well-being.13

(2) Liability for a Car’s “Ethical” Decision-Making

Even assuming society can reach consensus on the type of moral framework to implement in autonomous cars, there remains the question of who, if anyone, may face liability for a software developer’s framing of a car’s moral compass. 

There are bound to be incidents where a car’s moral framework will cause injury to others.  In one of the scenarios described above, if the car swerves to hit an elderly pedestrian, the moral code embedded in the car has chosen to harm a single bystander instead of passively allowing the death of five people directly in front of it.  If the elderly pedestrian’s surviving son brings a wrongful death suit, is it properly aimed at the driver, the hardware manufacturer, or the software manufacturer who embedded the moral framework into the car’s autonomous decision-making functions?

Regardless of whether the decision to strike one pedestrian instead of five may be justified from a moral standpoint, under the current legal framework it could still expose the creator of the car’s moral decision-making code to civil, or potentially even criminal, liability.14 Because the vehicle manufacturer is causally responsible for the swerve that creates the second harm, it could potentially be held liable for the death of the single pedestrian.15 The potential responsibility associated with encoding moral frameworks that can theoretically choose to harm is a crucial consideration in engineering the artificial intelligence of AVs.

By putting the choice into the consumer’s hands with the “Knob” approach above, manufacturers could potentially limit their liability. However, the Knob approach may alienate a large number of consumers who do not want the responsibility of choosing whom to harm, along with any resulting criminal liability. Another portion of consumers, likely a large one, would choose “Impartial,” which could shift the moral decision-making back to the AV and thus the liability back onto the manufacturer.16 We must also ask whether installing the knob itself merits exposure to some liability, and how joint and several liability may be affected.

As a more practical approach, industry groups could push for a new liability framework for AVs that limits criminal and civil liability.17 Current criminal and tort law can deter further technological development because of the potential liability manufacturers may face once AVs hit the roads in large numbers and are involved in accidents.18 Some may argue that full legislative protection could backfire by disincentivizing manufacturers from focusing on safety.19 Yet the driving motivation behind AVs is to substantially decrease the car accident fatality rate by eliminating, or at least lessening, human error. Legislators and the public alike should be reminded that this is intended to be a life-saving technology and that certain legislative protections may actually advance that goal.

A new, partial liability framework for AVs could balance competing concerns by lessening current liability but still incentivizing manufacturers to prioritize product safety.  If society is ready to accept AVs on the streets and the decision is justified by the drastic increase in safety that such technology could provide,20 then the creation of a new partial liability framework by the legislature could be critical in encouraging the safe development of such vehicles.21

(3) Exploring Regulations that Foster Consumer Confidence without Compromising Industry Innovation  

The emergence of moral decision-making in our cars also raises broader social issues of regulation and transparency.  Integrating morality into autonomous cars is at the forefront of creating ethical guidelines for artificial intelligence.22  Such ethical frameworks will soon be extended into diverse autonomous machines performing a variety of functions.23  Navigating AV morality presents the opportunity to envision the extent of government regulation to ensure safety and foster industry development.

The National Highway Traffic Safety Administration (“NHTSA”) monitors new technologies through rulemaking (amending existing standards or creating new ones) and through enforcement actions that address safety defects.24 It remains to be seen whether NHTSA will conclude that “safety” includes ensuring the “moral competence” of AVs. In that scenario, regulations may address the way the vehicle itself thinks and the manner in which it is engineered to make decisions. As a result, AV manufacturers may have to balance protecting their proprietary information against granting access to regulatory agencies.

Currently, AV manufacturers self-report on a broad range of product development issues, including ethical considerations.25 Soon, NHTSA may mandate this reporting to give consumers greater transparency into the development process.26  Until then, manufacturers can choose for themselves what should and should not be disclosed. 

Regulatory bodies in other countries have already passed ethical guideline requirements.  For example, the German “Ethics Commission” created a set of rules that delimits the way in which morality can be integrated into AVs and emphasizes public participation in regulating the moral codes embedded in the vehicles.27  In part, the rules establish that, when accidents are inevitable, coding that takes action to limit overall human harm is acceptable.28  However, characteristics such as “age, gender, and physical or mental constitution” cannot be taken into account when the vehicle makes moral decisions.29 The rules also mandate public reporting on new technologies and programming of AVs.30
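As a purely illustrative example of how such a rule might surface in engineering practice, a manufacturer could filter protected attributes out of the decision pipeline before any harm-minimization logic runs. The attribute names and the sanitize_decision_input function below are hypothetical, not taken from the Commission’s report.

```python
# Hypothetical guardrail reflecting the German rule that age, gender, and
# physical or mental constitution may not inform crash-time decisions.
PROHIBITED_ATTRIBUTES = {"age", "gender", "physical_constitution", "mental_constitution"}

def sanitize_decision_input(perceived_person: dict) -> dict:
    """Strip attributes the vehicle is forbidden to consider before the
    harm-minimization logic ever sees them."""
    return {key: value for key, value in perceived_person.items()
            if key not in PROHIBITED_ATTRIBUTES}

pedestrian = {"position": (3.2, 1.1), "velocity": (0.0, 1.4), "age": 82}
print(sanitize_decision_input(pedestrian))
# {'position': (3.2, 1.1), 'velocity': (0.0, 1.4)}
```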

While the U.S. need not mimic regulations like those of the German Ethics Commission, manufacturers and other industry players in the United States should take note of such regulations’ impact on public confidence in AVs. The public will likely want to make informed decisions about the AVs they choose to buy, ride in, or accept as fellow commuters on the roads. Such “buy-in” will likely bolster consumer confidence in and adoption of AVs and help these new technologies reach full integration in the near future.

The authors would like to recognize summer associate Shane Djokovich for his contributions in drafting this article.


1. Vicky Charisi et al., Towards Moral Autonomous Systems (Nov. 1, 2017), https://arxiv.org/pdf/1703.04741.pdf; Alex Shashkevich, Stanford Scholars, Researchers Discuss Key Ethical Questions Self-Driving Cars Present (May 22, 2017), https://news.stanford.edu/2017/05/22/stanford-scholars-researchers-discuss-key-ethical-questions-self-driving-cars-present/.

2. Can Autonomous Vehicles Make Moral and Ethical Decisions? (Aug. 14, 2017), https://www.design-engineering.com/moral-autonomous-vehicle-1004027337-1004027337/.

3. Tobias Holstein, Ethical and Social Aspects of Self-Driving Cars (Jan. 18, 2018), https://arxiv.org/pdf/1802.04103.pdf.

4. Abigail Beall, Driverless Cars Could Let You Choose Who Survives in a Crash (Oct. 13, 2017), https://www.newscientist.com/article/2150330-driverless-cars-could-let-you-choose-who-survives-in-a-crash/.

5. Id.

6. Id.

7. Amitai Etzioni and Oren Etzioni, Incorporating Ethics into Artificial Intelligence (Feb. 27, 2017), http://ai2-website.s3.amazonaws.com/publications/etzioni-ethics-into-ai.pdf.

8. Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press 2009).

9. Etzioni, supra note 7.

10. Mariusz Bojarski et al., End to End Learning for Self-Driving Cars (Apr. 25, 2016), https://arxiv.org/abs/1604.07316.

11. Id.

12. See Etzioni, supra note 7.

13. See Patrick Lin, Robot Cars and Fake Ethical Dilemmas (Apr. 3, 2017), https://www.forbes.com/sites/patricklin/2017/04/03/robot-cars-and-fake-ethical-dilemmas/.

14. See Sabine Gless, Emily Silverman, and Thomas Weigend, If Robots Cause Harm, Who Is To Blame? Self-Driving Cars And Criminal Liability (2016), http://euro.ecom.cmu.edu/program/law/08-732/AI/Gless.pdf.

15. See id.

16. See id.

17. Alexander Hevelke and Julian Nida-Rümelin, Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis (June 11, 2014), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4430591/.

18. See Gary Marchant and Rachel Lindor, The Coming Collision between Autonomous Vehicles and the Liability System (Dec. 17, 2012), https://web.law.asu.edu/Portals/31/Marchant_autonomous_vehicles.pdf.

19. Id.

20. See Nidhi Kalra and Susan M. Paddock, Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? (2016), https://orfe.princeton.edu/~alaink/SmartDrivingCars/Papers/RAND_TestingAV_HowManyMiles.pdf.

21. Hevelke, supra note 17.

22. See Charisi, supra note 1.

23. See id.

24. Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety, Nat’l Highway Traffic Safety Admin. (Sept. 2016), https://www.transportation.gov/AV/federal-automated-vehicles-policy-september-2016.

25. Id.

26. Id.

27. See Ethics Commission, Automated and Connected Driving (June 2017), https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile.

28. Id.

29. Id.

30. Id.


