When Disney Meets Dilemma: The Ethics of Self-Driving Cars

As the nascent autonomous vehicle (AV) industry grows, AV stakeholders are already contemplating how AVs should handle ethical dilemmas (e.g. “trolley problems,” wherein one must choose the lesser of two evils). AV makers not only need to successfully program cars to “make” ethical decisions; they also need to CHOOSE the ethical rules by which their AVs make such decisions. As Gus Lubin suggests, the latter task entails making judgments on questions such as: “If a person falls onto the road in front of a fast-moving AV,” should the AV “swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian?”

Companies have already begun answering such questions, revealing the ethical rules upon which their AVs might operate. For example, Google X founder Sebastian Thrun has said that Google X’s cars would be programmed to hit the smaller of two objects (if they had to hit one or the other). As Lubin explained, an algorithm to hit smaller objects is already “an ethical decision…a choice to protect the passengers by minimizing their crash damage.” (It is presumably safer to crash into the smaller object, if one must crash into something.)
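To make concrete how even this simple rule is already an algorithmic ethical choice, here is a minimal sketch of what a size-minimizing collision rule could look like. The names and the frontal-area size metric are illustrative assumptions, not Google X’s actual implementation:

```python
# Hypothetical sketch of the "hit the smaller object" rule Thrun describes.
# The Obstacle fields and the area-based size proxy are assumptions made
# for illustration; this is not Google X's code.
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    label: str
    frontal_area_m2: float  # crude proxy for how severe the crash would be

def choose_collision_target(unavoidable: List[Obstacle]) -> Obstacle:
    """If some collision is unavoidable, steer toward the smallest obstacle."""
    return min(unavoidable, key=lambda o: o.frontal_area_m2)

# Example: forced to choose between a barrier and a trash can.
options = [Obstacle("concrete barrier", 4.0), Obstacle("trash can", 0.3)]
print(choose_collision_target(options).label)  # -> trash can
```

Note how the value judgment Lubin describes lives entirely in the choice of the `min()` key: ranking obstacles by size is, implicitly, ranking whom the crash will hurt.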

The prospect of companies algorithmically programming AVs to choose one person’s death over another’s seemed problematic at first. Playing MIT’s online Moral Machine game, I had to decide whether a driverless car should kill two female athletes and doctors (“stay”) or two male ones (“swerve”). Making such decisions was already uncomfortable, because doing so required judging whether one set of lives was more valuable than another. I felt all the more troubled imagining companies programming these value judgments into real-life AVs that could actually kill. Perhaps Consumer Watchdog’s Wayne Simpson felt similarly uneasy when he wrote: “The public has a right to know” whether robot cars are programmed to prioritize “the life of the passenger, the driver, or the pedestrian… If these questions are not answered in full light of day … corporations will program these cars to limit their own liability, not to conform with social mores, ethical customs, or the rule of law.”

Yet human drivers who confront trolley problems must make ethical choices about whom to kill or save as well, and we clearly aren’t viscerally hesitant about letting humans drive. (I’d love comments explaining why we react differently to human vs. robot drivers.) In fact, compared to robots, human drivers facing trolley problems might not accurately decide whom to kill or save based on a set of ethical principles; they might not operate under ethical principles at all. Rather, humans might panic and freeze, or act only on self-preservation instincts. Moreover, stepping away from “trolley dilemma” road situations, as Lubin writes, the “most ethical decision may be the one that gets the most AVs on the road,” given that AVs are on average safer than human drivers. As the WSJ pointed out in August 2016 (in the article Lubin cited), driverless cars could eliminate 90% of car accidents, which are mostly caused by human error and claim 30,000+ American lives annually.

In light of these concerns and prospects, should the government allow or encourage companies to develop (and sell) AVs programmed to operate upon a set of ethical rules? Why or why not? If yes, who should decide what AVs’ ethical rules are and how? — Eric

18 thoughts on “When Disney Meets Dilemma: The Ethics of Self-Driving Cars”

  1. Over 30 different companies, ranging from traditional automobile companies like Tesla and BMW to companies not normally associated with automobiles, such as Apple and Google, are constructing and creating autonomous vehicles. Research shows that in as few as three to five years, fully autonomous vehicles will be in regular use on the road. I believe that governments should encourage companies to develop and sell AVs programmed to operate with a faster reaction time. A 2015 NHTSA study found that it could take an AV up to 17 seconds to respond to an incident, while a human takes on average less than a second. Yet, despite this seemingly universal transition to AVs, I believe the government should not depend on companies developing AVs to solve all transportation issues, especially within cities.

    Instead, the government should encourage these companies to invest in transit, or allocate more government funding to alternate modes of transportation. Transit can move 25,000 people per hour within a city, and walking can move 9,000 per hour; compare those numbers to the mere 600 people per hour that cars can move. Although AVs offer many benefits, they do not solve traffic congestion or broader concerns about efficient, effective transportation.

  2. Before debating whether such ethical norms should be codified and included in the training of autonomous vehicles, it’s worth remembering the point Eric brought up concerning the safety increases that would accompany the large-scale introduction of self-driving cars. Beyond simply removing most human-error-related accidents, autonomous cars supersede human ability to avoid accidents: 360-degree cameras, finely tuned SoC/GPU combos with immense processing power (Nvidia’s soon-to-be-released DRIVE PX Pegasus completes over 320 trillion operations per second), lidar sensing, etc. give autonomous cars “superhuman” senses, allowing them to drastically reduce the occurrence of such ethical dilemmas.

    That being said, these situations will certainly arise nonetheless. Even the best autonomous vehicles would fail to avoid some accidents, mostly due to the simultaneous presence of human drivers (and thus human error). This will become less and less relevant as the prevalence of autonomous cars increases and accident rates approach the 90% decrease cited in many articles.

    In the meantime, however, it’s hard to avoid these types of ethical qualms. I think there are many aspects of this debate that are somewhat at odds with the way we treat similar situations under human control. When faced with a split-second, lose-lose situation on the road, even the most ethically minded human driver doesn’t take his/her preconceived ideas of which action would be more justifiable and come to a rational conclusion on the spot. Instead, in the very short period of time offered, humans rely on instinct and gut reaction. What is uncomfortable about autonomous vehicles is that AI doesn’t necessarily have this pre-built-in “instinct,” and as a result it must be programmed (trained) into its constitution. It’s interesting to keep in mind that human decision-making in such cases is anything but consistent. In some cases humans make a guess (e.g. faced with an oncoming vehicle in one’s own lane, some drivers might swerve into the oncoming lane and hope the damage of swerving is less than that of remaining, braking, etc.); in other cases humans demonstrate self-sacrificial behavior (e.g. swerving off a mountain road to avoid hitting a raccoon), even when that behavior runs directly against what most would consider “preferable” (almost everyone would agree that the life of the driver is not worth risking for that of a small animal). Not only that, but we accept that the driver next to us may take different actions in a given scenario than we might: that he/she might put our life at risk when acting on instinct. What would be interesting to elucidate is why, then, given the inconsistent, often erroneous behavior of human drivers that we accept as normal, autonomous systems are subject to such severe scrutiny.

  3. It seems essential that AVs have some programmed framework for reading these difficult situations and making a decision, but the question of who decides these ethical frameworks, and how, is central to the development of this technology for widespread use. I think federal and state governments are going to have to create some kind of regulatory framework for this technology, but doing so needs to balance prudent regulation against a situation where lawmakers with little understanding of this technology constrain its growth through overly restrictive or imprudent rules. Governments can create broad guidelines to make sure AVs are safe and rely on decision-making frameworks that comply with traffic laws and uphold a basic standard of acceptable behavior from a driver, human or not. While these ethical dilemmas are difficult and will require tough choices, the potential benefits of AVs for traffic safety, as Eric points out, are immense.

  4. The presence of self-driving cars exerts a number of positive externalities on society, the most prominent of which is increased safety. In 2015, more than 35,000 people were killed in vehicular accidents in the U.S. Of those crashes, 94% were due to poor decision making and human error such as speeding and drunk driving. AVs, on the other hand, have operated with nearly zero error in test drives, with the only accidents occurring when humans have taken the wheel. According to the Centers for Disease Control, AVs are predicted to save 50 million lives globally within half a century. AVs also save money. The technology has the potential to save $190 billion a year in healthcare costs by reducing up to 90% of accidents. Additionally, since AVs optimize for efficiency in braking and acceleration, there is a potential to reduce carbon emissions by 300 million tons per year in the U.S.

    Despite the benefits of self-driving cars, many issues arise. AV technology has the potential to take over human jobs and gradually phase out human workers. This is especially problematic because AV technology is applicable to such a wide range of industries. Currently in the U.S., there are 600,000 Uber drivers, 181,000 taxi drivers, 168,000 transit bus drivers, 505,000 school bus drivers, and 1 million truck drivers. At least 2.5 million jobs are directly at risk of being automated in the coming years. As a result, income inequality between low-skilled workers (AVs will reduce the number of jobs available to them and the wages paid out) and high-skilled workers (AVs will increase their productivity) will increase.

    Rather than blocking the progress of AV technology and its rollout, the U.S. should tax self-driving car companies (Waymo, Uber, Tesla, etc.) and invest that money in training displaced workers for emerging engineering jobs (or perhaps allocate it to universal basic income…). This solution would allow AVs to exert their positive externalities while lessening potential income and wealth inequality among workers.

    As we move forward, several questions arise. First, we will need efficient federal regulation in the U.S. to make sure that AVs are rolled out responsibly. What factors influence what laws get passed? What will states say? What will be the reaction of constituents? Second, we need to address the idea of human discomfort with self-driving cars taking over the road. If AVs are 1% safer than human drivers, will we be comfortable with them? What if they’re 10% safer? 100% safer? What is that “magic” number for us to be comfortable getting into a self-driving car?

  5. Intuitively, I share Eric’s discomfort. I find it hard to imagine companies programming ethical choices like “should we save a pregnant woman or a child?” But as Eric rightly pointed out, we all face ethical choices when we drive. Part of the cost of driving right now is the potential of a fatal accident. If companies were to program ethical decisions, at least those decisions would align more with societal values than if an individual were making them. An individual driver, in the moment an accident is about to occur, will almost always protect him/herself. An individual driver is also more likely to make a decision that benefits the passengers in his/her car over the pedestrians, because the passengers are more likely to be people close to him/her (e.g. children, partners). Therefore, human drivers are more likely than an AV to sacrifice a large number of people to save themselves. Since AVs would presumably be programmed to save a larger number of people over the driver, at least they are making decisions that minimize the loss of human life.

    Further, if we all transitioned to AVs, accidents would happen at a much lower rate. It is during the transitional phase when human drivers are sharing the road with AVs that problems like the one that Eric is describing are most likely to arise; a road with only AVs is free of human error and therefore less prone to accidents. We eventually want to move towards having a road with just AVs. There are several obstacles to this path, however. For example, the fact that AVs are not necessarily going to prioritize the life of a driver makes a person less willing to get into an AV. I think we need to start thinking about how best to incentivize switching to AVs.

  6. I think that a significant challenge in leading people to accept self-driving cars is accountability. We intuitively place more faith in a person’s decision-making skills than a machine’s, even though, as noted above, self-driving cars have the capacity to be more rational than human drivers. In the event of a conventional automobile accident, there are measures in place to determine who the responsible party is. Law enforcement uses a variety of sources and set processes to assess what happened and why. In the case of accidents that cause damage, injury, or death, the legal system assigns responsibility to people and in some cases imposes punishments on them. This brings up the question of who is held accountable in accidents involving automated cars. Is a developer who made a careless software error that caused a death as liable as a person who hit a pedestrian while texting and driving? It seems like these situations aren’t necessarily comparable, but they produce the same end result. In addition to the ethical questions present in the online simulation, there is a wide range of relevant regulatory and legal questions that have not been addressed.

  7. Autonomous vehicles have incredible potential to decrease accidents caused by human error, but it is important to note that the 90% statistic (“self driving vehicles could eliminate 90% of all auto accidents in the U.S., prevent up to $190 billion in damages and health costs annually and save thousands of lives,” WSJ 3/5/15) is preceded by the condition of a “widespread embrace” of the technology. As previous bloggers have mentioned, 90% of accidents will not be eliminated immediately, and it is important to think about the transition period, assuming the transition period will ever end. I think it is unlikely that our society will ever fully, or even widely, embrace AV technology. No matter how advanced and efficient public transportation gets, people still insist on driving personal vehicles. While AVs would offer the same privacy and efficiency, it seems there is still some aspect of control that humans find very difficult to give up.

    In terms of ethics, I think this is an added obstacle that will be difficult to surmount, and it ties back into the opposing articles on autonomous weapons from last week. Ackerman argued that we could make autonomous weapons ethical, but Russell et al. retorted that deciding what is ethical and what isn’t would be a complex and debated issue. The ethics issue brings up the question of whether these vehicles should be programmed to prioritize the passenger’s safety or the “greater societal good” (i.e. the least number of casualties), but this doesn’t have a simple answer. Each human driver brings his/her own set of ethics into a vehicle, and in high-intensity scenarios, those ethics may go completely out the window. There’s no easy answer, but these are questions that will have to be addressed if AVs are to be embraced by enough people to have a real effect on driving safety.

  8. When I think about autonomous vehicles, I, too, feel uncomfortable with a machine making ethical choices for its passive, “out of the loop” passengers. It is a situation similar to autonomous weapons making decisions about whether a target lives or dies. The feeling that human passengers are not in control of the vehicle seems to be the main factor that leads to the sense of discomfort we feel when thinking about riding in an autonomous vehicle. However, as Melissa Cefkin of the Human Centered Systems practice at the Nissan Research Center points out, humans forfeit control when riding in a human-driven Uber or taxi (https://www.theglobeandmail.com/globe-drive/culture/technology/the-ethical-dilemmas-of-self-drivingcars/article37803470/). Thus, the feeling of riding without control should not seem so farfetched or unfamiliar.

    Autonomous vehicles are overly feared, especially by the older generations of drivers today. It seems to me that the only way for AVs to be accepted and successful on the road is for them to be fully incorporated into our transportation system. A mixed integration of some AVs alongside our current human-driven vehicles induces more anxiety than does a full deployment of AVs all operating on a single network (though this unified network does introduce severe consequences should that network be subject to a cyber-attack). Perhaps this is where the government should focus its resources: building up defenses against a cyber-attack on a single AV network, such that it becomes virtually impossible for nefarious actors to breach.

    Channeling government resources into this sort of cyber-defense and incentivizing private companies to program the best ethical algorithm would minimize AV moral discrepancies and maximize AV safety. The “perfect” ethical algorithm does not exist, as there will always be philosophical debate over which ethical rules should be obeyed in response to a Trolley Problem (actively interfere or passively stand by), and this is not a thought exercise I wish to weigh in on here. I leave that debate to be worked out by philosophers and engineers working together to program an ethical response into an algorithm. Moreover, Larry Hutchinson, president of Toyota Canada, asserts, “The greater challenge is the artificial intelligence behind the machine. Think of the millions of situations that we process and decisions that we have to make in real traffic…We need to program that intelligence into a vehicle, but we don’t have the data yet to create a machine that can perceive and respond to the virtually endless permutations of near misses and random occurrences that happen on even a simple trip to the corner store.” (https://www.theglobeandmail.com/globe-drive/culture/technology/the-ethical-dilemmas-of-self-drivingcars/article37803470/). While a set of ethical rules is certainly an important consideration in the design of AVs, I do not believe it is the sole source of the general discomfort humans feel when imagining an AV-filled future.

    Another source of the discomfort we feel when surrendering control to a machine might stem from a fear of being replaced. In other words, automation is replacing humans in a variety of fields – from manufacturing products on assembly lines to teaching (MOOCs). We tend to think of obtaining a driver’s license and car ownership as important milestones on the path to independent adulthood. The possibility of autonomous vehicles revolutionizing the automobile industry, making the ethical choices that human drivers are used to making, might be perceived as an affront to our freedom as drivers. If it is technologically possible to program a set of ethical rules into an AV’s algorithm, then humans may begin to wonder what other skillsets previously thought to be uniquely human can be programmed into an algorithm…Creativity? Curiosity? Consciousness? Not only does this pose a serious disruption within the automobile industry, but it also threatens the broader labor market; truck drivers, taxi drivers – in the future, maybe aircraft pilots – may find themselves unemployed.

    Aside from replacement by automation, there is a myriad of other negative externalities that may accompany a full deployment of AVs. According to philosophy professor Patrick Lin, “There are concerns about advertising (could cars be programmed to drive past certain shops?), liability (who is responsible if the car is programmed to put someone at risk?), social issues (drinking could increase once drunk driving isn’t a concern), and privacy (‘an autonomous car is basically Big Brother on wheels’)” (https://qz.com/1204395/self-driving-cars-trolley-problem-philosophers-are-building-ethical-algorithms-to-solve-the-problem/). Indeed, autonomous vehicles do seem to have more negative “what-ifs” attached to their rise, and it is easy to see why they might scare some people, but this type of public reaction and speculation about the future also occurred with the rise of cell phones and their disruption of the communication industry. As Cefkin observes, “People felt they weren’t as much in control as to when they communicated with people, but we adapted. That feeling itself is socially constructed and there will be some replacement feeling of control in the future, it just won’t look the way it does today.” (https://www.theglobeandmail.com/globe-drive/culture/technology/the-ethical-dilemmas-of-self-drivingcars/article37803470/).

  9. As many of the bloggers have shown, statistics indicate that there will be fewer crashes, decreased fatalities on the road, and lower damage and health costs. However, I have to agree with Ella regarding societal acceptance of AVs and how important that is for understanding the ethics of AV use. With any introduction of new technology there will be a transition period. Many people are resistant to change when it comes to technology and tools they use almost every day, and cars and driving are a major part of our daily lives. Thus, I think Ella is right in stating that it is unlikely that AVs will be “widely embraced.”

    This means that there will be some extended period of time in which there are both human drivers and AVs, and while the AVs will be safe, it is hard to tell how human drivers will respond to the AVs on the road, and how the AVs will be able to respond to human error or erratic behavior (e.g. road rage). Moreover, this has major implications for the ethics of AVs and AV regulation: Can the transition period be shortened by mandating that everyone get an AV? Can a government force everyone to stop driving? How do you handle holdouts who want to keep driving? And if AVs never gain wide acceptance, how do we handle the mix of AVs and human drivers?

    Once AVs are on the road, there are other ethical considerations to be aware of, such as who is accountable for ensuring they are not hacked; as we saw in the cybercrime lecture, everything can be hacked, and the kinetic effects of hacking AVs could lead to major crashes. There is also the question of who is accountable for any errors in the technology, or for fatalities if they occur.

    These are all important policy implications that need to be taken into consideration. While AVs may lead to a greater good overall and lower damages, fatalities, and crashes, the path to achieve that goal seems to be very fraught and at this point not fully fleshed out.

  10. To me it seems that the discomfort around machines making ethical decisions autonomously stems from the lack of transparent culpability in high-stakes scenarios. When a human is in a “trolley problem”-type scenario, they may or may not make the most objectively permissible choice, but it is clearly up to them to defend their actions. In the case of robot decision-making, blame is abstracted to heuristics decided by executives and implemented by engineers. One decidedly tricky aspect of this scenario is that the companies who build these technologies can account for this risk through insurance policies, thus implicitly accepting the deaths they will facilitate as a cost of doing business. The size of these companies and their ability to defray the costs of deaths through insurance is frightening, because it diminishes the weight of these decisions. As long as companies can minimize bad publicity by taking defensible stances, they will be able to thrive. And from their perspective, they are biased toward protecting the passengers who are their customers, as they will certainly lose less business if an autonomous vehicle kills a pedestrian than if it kills a passenger. For this reason, it is clear that governments must step in to ensure transparent policies that reflect stances societies as a whole agree on.

  11. With regard to Jenny and Ella’s concerns, I think creating an override function allowing humans to take the wheel might alleviate concerns enough for AVs to become widespread, after which point it could be gradually phased out. Ethically speaking, I think that AVs are very valuable. People make ethical decisions when they drive now, but using AVs moves the difficult choices away from them and onto the manufacturer. People would no longer be held responsible for fatal crashes; instead, the blame would fall on whoever designed the AV.

    Autonomous vehicles are theoretically capable of operating at a far higher level and responding much faster than even the best human drivers. To throw that away over concerns about ethical problems that would exist even without AVs seems foolish. Humans also have to play the moral machine game when they drive. However, adrenaline and confusion might cause them to make a clearly “wrong” choice, such as swerving and killing five people just to save one life. A machine that is well programmed to make “correct” choices will do so under any conditions, ensuring that minimal harm is inflicted in every case.

    Autonomous vehicles, once developed past a certain point, would also eliminate the use of cars as weapons: road rage incidents would decrease, and the emerging trend of vehicle-ramming attacks would be reversed.

    It seems to me that AVs bring a number of benefits and positive externalities while inflicting minimal costs. They are safer and more efficient and in fact do not impose any new moral dilemmas on society. Therefore, efforts to develop safe and reliable AV technology should continue.

  12. The moral and ethical questions surrounding “Trolley Problems” remain the same as other issues about autonomous technology: Where do we draw the moral and ethical distinction between self-operated technology and human lives? And at what legal and/or political cost?

    Since autonomous cars are so new, I think it’s fair to say Americans can only speak in hypotheticals about Trolley Problems until scientists and engineers develop the technology further. And right now, I’m not even convinced we should be talking in hypotheticals about self-driving safety if self-driving cars cannot yet make real safety distinctions. Some of the comments here assume that present autonomous technology can recognize, and even respond to, Trolley Problem incidents. However, I wonder whether even that is true.

    My reason for skepticism is last month’s Uber accident in Arizona. On public roads, an Uber self-driving car struck and killed a pedestrian, the first pedestrian fatality caused by an autonomous vehicle. There were many contingencies behind the fatality (the incident happened at night, the operator was not paying attention to the road, and the pedestrian’s proximity made it difficult to swerve), but that is the point. The technology’s failure to respond to an unpredictable situation demonstrated that it is not yet capable of predicting and preventing an accident. Much remains to be improved; right now the technology is not even at a developmental stage where it can credibly perform its intended function.

    In principle, I agree with Connor’s basic argument that some sort of regulatory framework may ultimately be necessary to address these safety issues related to Trolley Problems. A semi-autonomous provision might be the best policy recourse, as it would maintain human oversight of the technology and allow a person to intervene if/when things go wrong and judgments need to be made. Of course, humans are imperfect, and again, I reiterate the difficulty of speaking in hypotheticals about the future. However, I think this could be one way to address the present problem we have here: bypass it completely.

  13. I think that something which has been mentioned, but not thoroughly explored, is our inherent discomfort when considering deaths caused by an autonomous vehicle, and how irrational that is compared to our acceptance of human error on the road as a cause of death. While the case Jordan mentioned was surely tragic, how many human-driven cars have killed people in the time between when AVs started driving on roads and this first civilian fatality? Aside from largely emotional reactions to “robots” making ethical decisions about human lives, the fact is that lives would be saved. Even if, as Ella rightly points out, a lack of full adoption means that the 90% reduction in fatalities cited in many of these posts is an overstatement, even a 25, 50, or 75% reduction in road fatalities means tens of thousands of lives saved per year. I think the negative reaction to AVs either making ethical choices or malfunctioning is a fear of change and new technology more than a rational calculus based on evidence. Of course, some sort of regulatory framework would have to be put in place to prevent companies from writing decision-making code that maximizes profit at the expense of safety or ethics, but with even good (as opposed to perfect) regulation, a large-scale adoption of autonomous vehicles would be a large improvement on the status quo with regard to road safety.

  14. I want to address the point Eric alluded to in his post: the ethical issues facing legislators with regard to the development and future implementation of autonomous vehicles are not new. Drivers similarly encounter “trolley” situations in which they make split-second decisions that may not abide by any set of ethical principles. Perhaps this is why creating a codified set of laws to govern the ethical choice (or pick the “lesser of two evils”) becomes so controversial.

    By setting ethical principles for autonomous vehicle development, as with Google X cars being programmed to hit the smaller object, we are making a conscious decision about who is more deserving of life, whereas in a single incident a driver may swerve instinctively or freeze in panic. I posit that we take comfort in not having to make a decision that could take another’s life; actor-observer bias allows us to shift the blame for a death onto the instantaneousness of a car accident.

    Coding autonomous vehicles forces us to make the “conscious” choice, and this makes us uncomfortable. The legalization of killing another human is problematic and unpalatable; the death penalty presents similar ethical qualms about codified killing.

  15. I personally am of the opinion that AVs will never sell if they operate under any set of ethical rules that doesn’t place a general priority on protecting passengers. That is truly the only way that these companies will be able to establish a market and appeal to consumers. As much as I’d like to think that utilitarian ethics inform my purchasing decisions, there is no way I would ever purchase a car that I knew with certainty would sacrifice my life under a wide variety of circumstances. I, along with what I would assume is a vast majority of the general public, would never consent to such terms. Imagine hitching a ride with a friend who tells you before the trip that they will undoubtedly ram you both into a brick wall if more than two pedestrians cross your path at once. You would drive yourself or find another ride.

    This is why I feel that Sebastian Thrun and Google X are coming as close as one likely can to creating an algorithm that approximates utilitarian ethics while also prioritizing passenger safety. By always hitting the smallest object, these cars would not only increase the chances that the passenger survives the crash, but also decrease the overall damage done to the outside world, whether the object(s) hit are inanimate or living. While there are certainly complications inherent to this framework (would hitting a small child really be better than hitting a grown man? Or a rogue grocery cart, for that matter?), on the whole I think such programming would still do a far better job than humans currently do of decreasing the damage caused in car accidents. Also, this algorithm’s prioritization of passenger safety would create greater consumer interest than one based purely on utilitarian ethics, which would allow AVs to become standardized more rapidly, saving hundreds of thousands of lives in the long run. A minimal sketch of this kind of lexicographic rule follows below.
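    For illustration only, here is a hypothetical sketch of the rule gestured at above (passenger safety first, external damage second). Every field, score, and threshold is an assumption invented for this example, not any manufacturer’s actual algorithm:

    ```python
    # Hypothetical sketch of a passenger-first, then harm-minimizing rule.
    # All fields, thresholds, and numbers are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Maneuver:
        name: str
        passenger_survival: float  # estimated P(passenger survives), 0..1
        external_harm: float       # estimated harm to everything outside the car

    def choose_maneuver(options: List[Maneuver],
                        survival_floor: float = 0.9) -> Maneuver:
        # Step 1 (passenger priority): drop maneuvers below the survival floor.
        safe = [m for m in options if m.passenger_survival >= survival_floor]
        # Step 2 (approximate utilitarianism): among what remains, or among
        # all options if nothing clears the floor, minimize external harm.
        return min(safe or options, key=lambda m: m.external_harm)

    options = [
        Maneuver("brake straight", passenger_survival=0.95, external_harm=0.6),
        Maneuver("swerve to shoulder", passenger_survival=0.92, external_harm=0.1),
        Maneuver("swerve into barrier", passenger_survival=0.40, external_harm=0.0),
    ]
    print(choose_maneuver(options).name)  # -> swerve to shoulder
    ```

    Notice that the entire ethical debate is compressed into the ordering of the two steps and the value of survival_floor; swap the steps and the same code becomes purely utilitarian.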

  16. As many have noted above, especially Marion, there does exist some latent fear around autonomous vehicles in general that is perhaps fueling a lot of the hysteria around the aforementioned hypothetical of having to choose between two sets of lives in the MIT Moral Machine game. In the grand scheme of things, it is important to remember that most lives lost in vehicular accidents are the result of human error, a problem that autonomous vehicles will avoid. The statistics cited by Monica do a great job of corroborating this very point. Every technological change is accompanied by some degree of suspicion from the people whose lives are affected by that technology. We should be more careful in situating the ethical dilemmas and problems of autonomous cars within the larger context of their overall benefit.

    With respect to Connor’s point about possible regulation, I do not believe the biggest fear should be overly burdensome regulation, as the technology for autonomous vehicles exists and has been proven to function well. Regulation would be difficult to pass in the first place, given that humans themselves have not solved these ethical dilemmas. In addition, if the AV companies or other technology experts lobbied hard enough for innovation that improves overall safety, lawmakers in Washington would presumably go along, given how little technical information most of them have on the subject. While regulation does seem like a concern, and in other policy areas it may impact innovation and growth, I venture to argue that the fear surrounding the regulation of autonomous vehicles may be overstated.

  17. Part of the issue is how AVs seem to codify certain moral systems into a more rigid legal framework. The trolley problem is a problem precisely because there isn’t a correct answer that exists for all people (most people tend to end up using a utilitarian calculus, but other variations of the problem show that this tends to fade when your own interests are at stake). Most people, when they drive a car, don’t consult a specific moral system as they try to figure out where they will crash. It’s a split-second decision, and most of us don’t have our ethical and moral prerogatives at the forefront of our minds at that point. So even if we all have different moral systems that we end up using, at the time of an accident it is mostly brute luck that shapes who gets affected and in what way.

    An AV is different in that it codifies a specific set of criteria. We can tell beforehand who will likely be hit, because the AV can act on those instructions in the split second before an impact occurs. For me, the discomfort isn’t so much that it is autonomous rather than a human being; it is that AVs end up making a statement, by virtue of a coding scheme, about what moral framework we prefer as a society (I don’t think that issue has been definitively resolved yet). And in the end, it seems people will have to make such a declaration, because having multiple companies encode multiple different systems seems like a regulatory disaster waiting to happen (I could be wrong on that account, however). I think the attempt to crash into the smallest object is an interesting way to sidestep the problem, but it is only a sidestep, not really a solution. If a car had to crash into either a truck with one driver in it or a small car with a family inside, which should it do? The smallest-size algorithm doesn’t work out too well there for most people and their preferences.

    In the end, though, it is true, as many people have pointed out, that AVs will end up saving more lives statistically, so we cannot just put the technology aside. I think there needs to be a more in-depth discussion of how we assign responsibility in those situations and a greater exploration of whom to blame. It goes without saying that the preferences of the system must be made public before it is ever put on the road.

  18. Reading through the comments, I was surprised to see that Wesley had a viewpoint on AVs similar to mine. Although somewhat unconventional, my first reaction to companies coding the “decision-making process” of these AVs was that no matter what we believe to be ethical, if AVs did not prioritize the lives of the passengers, they would never be successfully sold in the market. And given that private companies must care about profit, they have to code their machines in a way that benefits the customers, i.e. the passengers. This has serious implications for human lifestyles, as once pedestrians realize that AVs will always favor passengers, they will be more comfortable being in a car themselves. To answer your question then, Eric, I would argue that the government has very little role to play in this privatization/monetization of “moral standards,” other than banning AVs entirely or financially incentivizing companies not to favor only their customers (the passengers).

    According to my results in the Moral Machine, I seem to believe that women and young children deserve to be “most saved” compared with other genders, species, health levels, or social standings. This opened my eyes to a lot of my internal biases: as a woman myself, I must have subconsciously valued women’s lives more than those of other groups, and as someone who grew up around young kids (I have been involved in child education throughout my life), I seem to value children more than other age groups. Although I had never believed these two groups to rank high on my “ethical ladder,” after viewing the results it sort of makes sense. This experiment taught me that the scale of ethics depends on the person writing these laws. It is plausible that a white AV coder would want other white people, including (or because of) themselves, to survive more than other racial groups, and that a large person would want other large people to be safer. Ethics that depend upon the coder, or group of coders, complicate the situation, pushing it into more individual realms that have yet to be discussed. I’m curious to hear what other people think.
