Programming Life-and-Death Decisions into Robot-Driven Cars

Google’s corporate motto from day one has been “Don’t be evil.” Of course, anyone who wants to debate whether the company has met that objective must first consider the question: “How do you define evil?” With Google (along with Audi, Ford, Mercedes, Nissan, Toyota, and Volvo) emerging as a key player in “driverless car” technology, that question is rapidly becoming more than philosophical. The way corporations apply ethics has taken on life-and-death urgency.

So far, engineering safety features into vehicles has been ethically straightforward. Blind-spot monitors, electronic stability control, forward-collision warnings, and lane-departure warnings are good things with very little downside. But the march toward autonomous vehicle control is a continuum. In 2014, the U.S. Department of Transportation approved V2V, a gateway technology that allows cars to communicate with one another. These systems will eventually prevent a huge number of accidents by processing more data than a human can comprehend in an emergency.

But truly driverless cars introduce an ethical element that has so far been absent from automated safety features: at some point, the robotic driver of a driverless car will have to make the life-and-death decisions that human drivers confront now. There will be situations that force the robot to choose between two unpleasant outcomes, two forms of evil if you will. How can a machine be taught to make those decisions, and who decides what moral framework they are based upon?

In a typical “two bad choices” scenario, a driver must pick between horrible outcomes: sacrifice their own life to save others, or choose to live while others die. In a recent real-life example, a Russian police officer saved the children on a school bus by driving his squad car head-on into the grille of a reckless driver’s vehicle, preventing the motorist from hitting the bus. That the policeman miraculously survived in no way changes the fact that he made an instantaneous decision to save children at the potential cost of his own life.

Now manufacturers of fully automated vehicles must confront the question: Is it practical, and right, to expect a robot driver to make the kind of ethical decision that saves a dozen lives while killing its passenger? Unlike a human driver, the robot has no stake in the matter. And in terms of technology, is it even possible to program a machine to analyze visual data and commit its passenger to injury or death without that passenger’s input?

What if the robot makes the wrong decision? Computers make plenty of errors while following perfectly sound logic; bad inputs and faulty assumptions produce confident but wrong conclusions.

Taking the argument further, what if the robot driver makes a moral decision based on ambiguous data? Suppose the school bus is carrying murderers rather than schoolchildren. If the robot concludes it must save the bus because it is a school bus, then the occupant of the automated car involuntarily sacrifices their own life to save a busload of evildoers. Can a robot be programmed with a hierarchy of human value, say schoolchildren versus criminals, and will the passenger have any influence over the outcome? Would you sacrifice your life for a bus full of pickpockets but not a bus full of murderers? Admittedly, we don’t expect this type of situation to come up often, but it illustrates how complex the issue can become.
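
To see why this makes engineers uneasy, consider what such a rule would have to look like in software. The sketch below is purely hypothetical Python, not anything a manufacturer has published; the occupant categories, their numeric “weights,” and the choose_maneuver function are invented for illustration. The point is that any hierarchy of human value must be reduced to numbers somewhere, and the machine will apply those numbers even when its classification of who is on the bus turns out to be wrong.

```python
# Hypothetical sketch only: no manufacturer has published logic like this.
# It illustrates how a "hierarchy of human value" would have to be reduced
# to explicit numbers that the vehicle applies even when its perception of
# the other party is mistaken.

from dataclasses import dataclass

# Invented weights -- who decides these, and on what moral basis?
OCCUPANT_WEIGHTS = {
    "school_children": 10.0,
    "adult_civilians": 5.0,
    "convicted_criminals": 1.0,   # the hierarchy the article questions
}

@dataclass
class Outcome:
    description: str
    occupant_type: str      # what the car *believes* is in the other vehicle
    lives_at_risk: int
    passenger_survives: bool

def harm_score(outcome: Outcome) -> float:
    """Lower is 'better' under this invented moral arithmetic."""
    weight = OCCUPANT_WEIGHTS.get(outcome.occupant_type, 5.0)
    score = weight * outcome.lives_at_risk
    if not outcome.passenger_survives:
        score += 5.0  # the passenger's own life, also just a number here
    return score

def choose_maneuver(options: list[Outcome]) -> Outcome:
    # The machine simply minimizes the score -- it has no stake in the result.
    return min(options, key=harm_score)

# The bus is labeled "school_children" by the perception system.
# If that label is wrong, the passenger is sacrificed all the same.
options = [
    Outcome("swerve into barrier", "school_children", 0, passenger_survives=False),
    Outcome("hold course, hit bus", "school_children", 12, passenger_survives=True),
]
print(choose_maneuver(options).description)   # -> "swerve into barrier"
```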

This brings us hard up against the reality that robot drivers will inevitably make digital missteps that cause injury or property damage, and some mistakes will cost someone their life. At that point, the question moves beyond the ethical and becomes economic and legal. Because the robot is a machine, the question of liability becomes perhaps even more complex than it already is. Since the manufacturer programmed the robot, is it not responsible for decisions that result in wrongful death? Will government impose ethical programming standards that require all driving machines to make the same decisions? Or will the free market allow us to choose from a range of moral options that give us some control over the decision-making process in specific scenarios?
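
What would a “range of moral options” even mean in practice? One can imagine, purely as a thought experiment, a consumer-facing ethics setting constrained by a regulatory floor. The profile names, the prohibited set, and the configure_vehicle function below are all invented; no current vehicle or regulation exposes anything like this.

```python
# Invented illustration of market-chosen "moral options": an ethics setting
# the owner could select, constrained by a hypothetical regulatory minimum.
# Nothing here corresponds to a real product feature or law.

from enum import Enum

class EthicsProfile(Enum):
    PROTECT_PASSENGER = "always favor the occupants of this vehicle"
    MINIMIZE_TOTAL_HARM = "favor whichever outcome risks the fewest lives"
    PROTECT_VULNERABLE = "favor pedestrians, cyclists, and children"

# A hypothetical regulatory floor: settings the law might forbid outright.
PROHIBITED = {EthicsProfile.PROTECT_PASSENGER}

def configure_vehicle(choice: EthicsProfile) -> EthicsProfile:
    """Accept the owner's preference only if regulation permits it."""
    if choice in PROHIBITED:
        raise ValueError(f"{choice.name} is not a legal configuration")
    return choice

print(configure_vehicle(EthicsProfile.MINIMIZE_TOTAL_HARM).value)
```

Even this toy version exposes the legal problem: if the owner picks the setting, does liability shift from the manufacturer to the passenger?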

Driverless-car manufacturers are well aware of these challenges and are actively engaging philosophers and ethicists to work through the questions raised by the first generation of “social robots.” While the moral debate over the greater good versus the rights of the individual may never be settled, the issues of insurance and legal liability must be resolved for the driverless future to begin. Will the same principles of negligence, strict liability, misrepresentation, and breach of warranty apply in this brave new automated world?

The Institute of Electrical and Electronics Engineers (IEEE), a well-respected organization serving the technical professions, has ranked legal liability and public policymaking ahead of technology and infrastructure as potential deal-breakers for in-car automation. In California, the Association of California Insurance Companies is already pushing legislation to clarify that manufacturers retain all liability for damage and injuries caused by their “product.” And the RAND Corporation think tank issued a report last year suggesting that a no-fault approach to evolving car insurance laws would remove a roadblock to a technology that will ultimately improve safety.

With on-road testing under way right now and manufacturers promising preliminary rollouts by the end of the decade, look for new developments in vehicle liability law. In fact, this may be the time to supplement your law degree with a PhD in philosophy.