When Disney Meets Dilemma: The Ethics of Self-Driving Cars

As the nascent autonomous vehicle (AV) industry grows, AV stakeholders are already contemplating how AVs should handle ethical dilemmas (e.g. “trolley problems” wherein one must decide the lesser of two evils). AV makers not only need to successfully program cars to “make” ethical decisions; they also need to CHOOSE the ethical rules by which their AVs make such decisions. As Gus Lubin suggests, the latter task entails making judgments on questions such as: “If a person falls onto the road in front of a fast-moving AV,” should the AV “swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian?”

Companies have already begun answering such questions, revealing the ethical rules upon which their AVs might operate. For example, according to Google X founder Sebastian Thrun, Google X’s cars would be programmed to hit the smaller of two objects (if they had to hit one or the other). As Lubin explained, an algorithm to hit smaller objects is already “an ethical decision…a choice to protect the passengers by minimizing their crash damage.” (Presumably, if a crash is unavoidable, the smaller object is the safer one for the passengers to hit.)
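To make concrete how such a seemingly neutral engineering rule already embeds an ethical choice, here is a minimal, hypothetical sketch of a “hit the smaller object” decision rule. The interface, labels, and sizes are invented for illustration; this is not Google X’s actual code.

    # Hypothetical sketch of a "hit the smaller object" rule; not any company's
    # real code. Obstacle sizes come from an assumed perception interface.
    def choose_obstacle(obstacle_a, obstacle_b):
        """If a collision is unavoidable, steer toward the smaller obstacle,
        on the theory that a smaller object does less damage to the passengers."""
        return obstacle_a if obstacle_a["size_m2"] <= obstacle_b["size_m2"] else obstacle_b

    # Eric's opening scenario: a pedestrian in the lane vs. a traffic barrier.
    pedestrian = {"label": "pedestrian", "size_m2": 0.5}    # sizes invented for illustration
    barrier = {"label": "traffic barrier", "size_m2": 3.0}
    print(choose_obstacle(pedestrian, barrier)["label"])    # -> pedestrian

Note that in Lubin’s scenario this damage-minimizing rule sends the car straight at the pedestrian rather than the barrier: the “engineering” heuristic has quietly decided whose safety comes first.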

The prospect of companies algorithmically programming AVs to choose one person’s death over another’s seemed problematic at first. Playing MIT’s online Moral Machine game, I had to decide whether a driverless car should choose to kill two female athletes and doctors (“stay”) or two male ones (“swerve”). Making such decisions was already uncomfortable, because doing so required me to judge whether one set of lives was more valuable than another. I felt all the more troubled as I imagined companies programming these value judgments into real-life AVs that could actually kill. Perhaps Consumer Watchdog’s Wayne Simpson felt similarly uneasy when he wrote: “The public has a right to know” whether robot cars are programmed to prioritize “the life of the passenger, the driver, or the pedestrian… If these questions are not answered in full light of day … corporations will program these cars to limit their own liability, not to conform with social mores, ethical customs, or the rule of law.”
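Part of what makes this uncomfortable is how banal the programming itself could be. As a deliberately crude, hypothetical sketch (the categories and weights below are invented for illustration; no AV maker has published anything like them), encoding Moral Machine-style judgments might amount to little more than a weighted cost table, and someone would have to pick the numbers.

    # Deliberately crude, hypothetical sketch of encoding "value of life" judgments.
    # The categories and weights are invented for illustration only.
    VALUE_WEIGHTS = {
        "passenger": 1.0,
        "pedestrian": 1.0,
        "doctor": 1.2,      # should profession (or age, or anything) change the weight?
        "athlete": 1.0,
    }

    def outcome_cost(people_harmed):
        """Total 'cost' of an outcome, given who would be harmed."""
        return sum(VALUE_WEIGHTS[person] for person in people_harmed)

    def pick_action(actions):
        """Choose the action whose expected harm scores lowest."""
        return min(actions, key=lambda action: outcome_cost(action["harmed"]))

    stay = {"name": "stay", "harmed": ["pedestrian", "doctor"]}
    swerve = {"name": "swerve", "harmed": ["passenger"]}
    print(pick_action([stay, swerve])["name"])   # -> swerve, given these made-up weights

Every entry in such a table is a value judgment of exactly the kind Simpson wants made “in full light of day.”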

Yet human drivers who confront trolley problems must make ethical choices about whom to kill or save as well, and we clearly aren’t viscerally hesitant about letting humans drive. (I’d love comments explaining why we react differently to human vs. robot drivers.) In fact, compared to robots, human drivers facing trolley problems might not accurately decide whom to kill or save based on a set of ethical principles; they might not operate under ethical principles at all. Rather, humans might panic and freeze, or act only on self-preservation instincts. Moreover, stepping back from “trolley dilemma” road situations, as Lubin writes, the “most ethical decision may be the one that gets the most AVs on the road,” given that AVs are on average safer than human drivers. As the WSJ pointed out in August 2016 (in an article Lubin cited), driverless cars could eliminate 90% of the 30,000+ deaths caused by car accidents each year, most of which result from human error.

In light of these concerns and prospects, should the government allow or encourage companies to develop (and sell) AVs programmed to operate upon a set of ethical rules? Why or why not? If yes, who should decide what AVs’ ethical rules are and how? — Eric

4 thoughts on “When Disney Meets Dilemma: The Ethics of Self-Driving Cars”

  1. Over 30 different companies, ranging from automakers like Tesla and BMW to companies not normally associated with automobiles, such as Apple and Google, are building autonomous vehicles. Research suggests that in as few as three to five years, fully autonomous vehicles will be in regular use on the road. I believe that governments should encourage companies to develop and sell AVs programmed to operate with a faster reaction time: an NHTSA study in 2015 found that it could take an AV up to 17 seconds to respond to an incident, while a human takes on average less than a second. Yet, despite this seemingly universal transition to AVs, I believe the government should not depend on companies developing AVs to solve all transportation issues, especially within cities.

    Instead, the government should encourage these companies to invest in transit, or allocate more government funding to alternate modes of transportation. Transit can move roughly 25,000 people per hour within a city, and walking about 9,000 people per hour, compared to a mere 600 people per hour for cars. Although AVs offer many benefits, they do not solve traffic congestion or address the broader need for efficient, effective transportation.

  2. Before debating whether such ethical norms should be codified and included in the training of autonomous vehicles, it’s worth remembering the point Eric brought up concerning the safety gains that would accompany the large-scale introduction of self-driving cars. Beyond simply removing most human-error-related accidents, autonomous cars surpass human ability to avoid accidents: 360-degree cameras, finely tuned SoC/GPU combinations with immense processing power (Nvidia’s soon-to-be-released DRIVE PX Pegasus performs over 320 trillion operations per second), lidar sensing, and more give autonomous cars “superhuman” senses, allowing them to drastically reduce how often such ethical dilemmas arise.

    That being said, these situations will certainly arise nonetheless. Even the best autonomous vehicles will fail to avoid some accidents, mostly because human drivers (and thus human error) will still share the road. This will become less and less relevant as autonomous cars become more prevalent and accident rates approach the 90% decrease cited in many articles.

    In the meantime, however, it’s hard to avoid these types of ethical qualms. I think many aspects of this debate are somewhat at odds with the way we treat similar situations under human control. When faced with a split-second, lose-lose situation on the road, even the most ethically minded human driver doesn’t take his/her preconceived ideas of which action would be more justifiable and come to a rational conclusion on the spot. Instead, in the very short period of time available, humans rely on instinct and gut reaction. What is uncomfortable about autonomous vehicles is that AI doesn’t necessarily have this built-in “instinct,” and as a result it must be programmed (trained) into its constitution.

    It’s worth keeping in mind that human decision-making in such cases is anything but consistent. In some cases humans simply guess (e.g., faced with an oncoming vehicle in one’s own lane, some drivers might swerve into the oncoming lane and hope the damage of swerving is less than that of remaining, braking, etc.); in other cases humans demonstrate self-sacrificial behavior (e.g., swerving off a mountain road to avoid hitting a raccoon), even when that behavior runs directly against what most would consider “preferable” (almost everyone would agree that the life of the driver is not worth risking for that of a small animal). Not only that, but we accept that the driver next to us may take different actions in a given scenario than we would, and that he/she might put our life at risk when acting on instinct. What would be interesting to elucidate is why, given the inconsistent, often erroneous behavior of human drivers that we accept as normal, autonomous systems are subject to such severe scrutiny.

  3. It seems essential that AVs have some programmed framework for reading these difficult situations and making a decision, but the question of who decides these ethical frameworks, and how, is central to developing this technology for widespread use. I think federal and state governments are going to have to create some kind of regulatory framework, but they need to balance prudent oversight against the risk that lawmakers with little understanding of the technology constrain its growth through overly restrictive or ill-considered rules. Governments can create broad guidelines to make sure AVs are safe and rely on decision-making frameworks that do not violate traffic laws and that uphold a basic standard of acceptable behavior from a driver, human or not. While these ethical dilemmas are difficult and will require tough choices, the potential benefits of AVs for traffic safety, as Eric points out, are immense.

  4. The presence of self-driving cars exerts a number of positive externalities on society, the most prominent of which is increased safety. In 2015, more than 35,000 people were killed in vehicular accidents in the U.S., and 94% of those crashes were due to poor decision-making and human error such as speeding and drunk driving. AVs, on the other hand, have operated with nearly zero error in test drives, with the only accidents occurring when humans have taken the wheel. According to the Centers for Disease Control, AVs are predicted to save 50 million lives globally within half a century. AVs also save money: the technology has the potential to save $190 billion a year in healthcare costs by reducing accidents by up to 90%. Additionally, since AVs optimize braking and acceleration for efficiency, they could reduce carbon emissions in the U.S. by 300 million tons per year.

    Despite the benefits of self-driving cars, many issues arise. AV technology has the potential to gradually displace human workers, which is especially problematic because the technology applies to such a wide range of industries. Currently in the U.S., there are 600,000 Uber drivers, 181,000 taxi drivers, 168,000 transit bus drivers, 505,000 school bus drivers, and 1 million truck drivers, putting roughly 2.5 million jobs directly at risk of automation in the coming years. As a result, income inequality will widen between low-skilled workers (AVs will reduce both the number of jobs available to them and the wages paid out) and high-skilled workers (AVs will increase their productivity).

    Rather than blocking AV development and rollout, the U.S. should tax self-driving car companies (Waymo, Uber, Tesla, etc.) and invest that money in training displaced workers for emerging engineering jobs (or perhaps allocate it to a universal basic income…). This would allow AVs to exert their positive externalities while lessening the potential increase in income and wealth inequality among workers.

    As we move forward, several questions arise. First, we will need effective federal regulation in the U.S. to make sure that AVs are rolled out responsibly. What factors will influence which laws get passed? What will states say? How will constituents react? Second, we need to address human discomfort with self-driving cars taking over the road. If AVs are 1% safer than human drivers, will we be comfortable with them? What if they’re 10% safer? 100% safer? What is the “magic” number at which we’ll be comfortable getting into a self-driving car?
