The ability of machines to make ethical decisions is quickly becoming a critical component in the development of self-driving cars. Ultimately, the questions raised here will drive fundamental changes in insurance and liability law.
Google’s corporate motto from day one has been “don’t be evil.” Of course, anyone who wants to debate whether the company has met that objective must first answer the question: “How do you define evil?” With Google (along with Audi, Ford, Mercedes, Nissan, Toyota and Volvo) emerging as a key player in “driverless car” technology, that question is rapidly becoming more than philosophical. The way corporations apply ethics has taken on life-and-death urgency.
So far, engineering features that improve vehicle safety have been ethically straightforward. Blind-spot monitors, electronic stability control, forward-collision and lane-departure warnings are good things, with very little downside. But the march to autonomous vehicle control is a continuum. In 2014, the U.S. Department of Transportation announced plans to enable vehicle-to-vehicle (V2V) communication, a gateway technology that allows cars to exchange data with each other. These systems will eventually prevent a huge number of accidents by providing more data than a human can comprehend in an emergency.
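To make the "more data than a human can comprehend" point concrete, here is a minimal sketch, not any real V2V protocol stack, of the kind of computation such a system performs: each car broadcasts its position and velocity, and a receiving car estimates time-to-collision and can begin braking well inside a human's reaction time. The `V2VMessage` class and `time_to_collision` function are hypothetical names invented for this illustration.

```python
from dataclasses import dataclass

@dataclass
class V2VMessage:
    """Hypothetical broadcast payload: state along a shared road axis."""
    position_m: float    # position in meters
    velocity_mps: float  # signed velocity in meters per second

def time_to_collision(own: V2VMessage, other: V2VMessage):
    """Return seconds until the gap to the car ahead closes,
    or None if the gap is not closing."""
    gap = other.position_m - own.position_m
    closing_speed = own.velocity_mps - other.velocity_mps
    if closing_speed <= 0:
        return None  # not closing; no collision on this axis
    return gap / closing_speed

# A car traveling 25 m/s (~90 km/h) with a stopped vehicle 50 m ahead:
ttc = time_to_collision(V2VMessage(0.0, 25.0), V2VMessage(50.0, 0.0))
# ttc == 2.0 seconds -- enough for automated braking to act,
# but marginal once human perception and reaction time are subtracted
```

The value of V2V in this sketch is that the stopped car's state arrives by radio before it is visible, so the two seconds can be spent braking rather than perceiving.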