As the nascent autonomous vehicle (AV) industry grows, AV stakeholders are already contemplating how AVs should handle ethical dilemmas (e.g., “trolley problems,” in which one must choose the lesser of two evils). AV makers not only need to successfully program cars to “make” ethical decisions; they also need to CHOOSE the ethical rules by which their AVs make such decisions. As Gus Lubin suggests, the latter task entails making judgments on questions such as: “If a person falls onto the road in front of a fast-moving AV,” should the AV “swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian?”
Companies have already begun answering such questions, revealing the ethical rules upon which their AVs might operate. For example, as Google X founder Sebastian Thrun revealed, Google X’s cars would be programmed to hit the smaller of two objects if they had to hit one or the other. As Lubin explained, an algorithm to hit smaller objects is already “an ethical decision…a choice to protect the passengers by minimizing their crash damage.” (It’s presumably safer for the passengers to crash into a smaller object, if one must crash into something.)
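To make concrete how a rule like “hit the smaller object” becomes an ethical choice once written down, here is a minimal sketch of what such a decision function might look like. Everything in it (the names, the use of frontal area as a proxy for “size”) is a hypothetical assumption for illustration, not Google X’s actual implementation:

```python
# Hypothetical sketch of a "hit the smaller object" rule. Illustrative only:
# the Obstacle fields and the frontal-area size metric are assumptions,
# not Google X's actual design.
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str
    frontal_area_m2: float  # crude proxy for "size" / expected crash damage

def choose_collision_target(a: Obstacle, b: Obstacle) -> Obstacle:
    """Given that a collision with either `a` or `b` is unavoidable,
    steer toward the smaller obstacle. The comparison itself encodes
    an ethical priority: minimize harm to the passengers."""
    return a if a.frontal_area_m2 <= b.frontal_area_m2 else b

# Example: an unavoidable crash, barrier vs. truck
target = choose_collision_target(
    Obstacle("traffic barrier", 2.0),
    Obstacle("stopped truck", 8.0),
)
print(f"Steer toward: {target.label}")  # prints "Steer toward: traffic barrier"
```

Notice that nothing in this code “looks like” ethics; the value judgment hides inside a single comparison operator, which is precisely why critics want such rules disclosed.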
The prospect of companies algorithmically programming AVs to choose one person’s death over another’s seemed problematic at first. Playing MIT’s online Moral Machine game, I had to decide whether a driverless car should kill two female athletes and doctors (“stay”) or two male ones (“swerve”). Making such decisions was uncomfortable enough, because to make them I had to judge whether one set of lives was more valuable than another. I felt all the more troubled as I imagined companies programming these value judgments into real-life AVs that could actually kill. Perhaps Consumer Watchdog’s Wayne Simpson felt similarly uneasy when he wrote: “The public has a right to know” whether robot cars are programmed to prioritize “the life of the passenger, the driver, or the pedestrian… If these questions are not answered in full light of day … corporations will program these cars to limit their own liability, not to conform with social mores, ethical customs, or the rule of law.”
Yet human drivers who confront trolley problems must make ethical choices about whom to kill or save as well, and we clearly aren’t viscerally hesitant about letting humans drive. (I’d love comments explaining why we react differently to human v. robot drivers.) In fact, compared to robots, human drivers facing trolley problems might not reliably decide whom to kill or save based on a set of ethical principles; they might not operate under ethical principles at all. Rather, humans might panic and freeze, or act only on self-preservation instincts. Moreover, stepping away from “trolley dilemma” road situations, as Lubin writes, the “most ethical decision may be the one that gets the most AVs on the road,” given that AVs are on average safer than human drivers. As the WSJ pointed out in August 2016 (in an article Lubin cited), driverless cars could eliminate 90% of the 30,000+ annual U.S. traffic deaths, most of which result from human error.
In light of these concerns and prospects, should the government allow or encourage companies to develop (and sell) AVs programmed to operate upon a set of ethical rules? Why or why not? If yes, who should decide what AVs’ ethical rules are and how? — Eric