Observing Thinking


Tuesday, May 10, 2016

Pros and Cons of Autonomous Cars: Part 3





We’ve been investigating some of the Pros and Cons of driverless or autonomous cars. So far, we’ve looked at them in terms of Safety/Security, Time, and Money. We have also looked at comparisons between “robot cars” and other autonomous vehicles such as buses and trains. In this column we examine perhaps the thorniest of issues: Ethics.

While you can quantitatively measure Time, Money, and Security, most ethical theories can only be evaluated qualitatively. Utilitarianism (simple definition: does the outcome of an action ensure the greatest good for the greatest number of people?) attempts to overcome this problem by weighing costs against benefits; it focuses on the consequences of an action regardless of the intention behind it. At the other end of the spectrum is Deontology, or rule-based ethics, where an act is judged right or wrong according to its adherence to a set of rules, the Ten Commandments for example. If we use Deontology as our ethical guide, then we focus more on intention than on outcome: if your intention is good, the act is good no matter the outcome.

A nice exercise in applying these two ethical theories is the “Trolley Problem” (“trolley” being the British term for what Americans would call a streetcar), which is described at:

https://en.wikipedia.org/wiki/Trolley_problem

Briefly, here is the scenario: a runaway trolley is barreling down the railway tracks. Ahead, on the tracks, five people are tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) do nothing, and the trolley kills the five people on the main track; (2) pull the lever, diverting the trolley onto the side track, where it will kill one person. Which is the correct choice? A 2009 survey found that 68% of professional philosophers would pull the lever (sacrificing the one individual to save five lives), 8% would not, and the remaining 24% had another view or could not answer. (A surprising result, as I would have expected a much higher percentage of philosophers to respond that they could not or would not answer.)

For the Utilitarian, this is a no-brainer: one choice results in five dead, the other in only one. Pull the lever and don’t look back. From the Deontological point of view, it depends on the set of rules you have chosen to follow; if they include “Thou shalt not kill,” then either action is wrong. The best one can do is modify the rule to “Thou shalt not kill, but if thou must, kill as few as possible,” which puts one on the slippery slope to Utilitarianism. Some problems have no solution. Sigh.

For our purposes, imagine a driverless car in this situation: a child darts out in front of the car, it’s too late for the AI to stop, but if it swerves it will wipe out five pedestrians. Do we let the autonomous car make the call, knowing full well that this decision is embedded in its software, and that the software was ultimately written by a team of programmers who, after all, are only human? Some say these cars should have a human override (like the emergency stop cord on a train); others say to trust the software: it has been tested (when was the last time you were tested?) and can react much faster than you.
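To make that point concrete, here is a minimal sketch, in Python, of what such an embedded decision rule might look like. Everything in it is hypothetical (the function name, the way harm is counted, the scenario inputs); the only point is that the “ethical choice” ends up as a few lines of ordinary code that some programmer had to write.

    # Hypothetical sketch: a crude utilitarian decision rule for the
    # swerve-or-brake scenario described above. Not a real autonomous-car API.

    def choose_action(people_ahead: int, people_if_swerve: int) -> str:
        """Return 'brake' or 'swerve', picking whichever harms fewer people."""
        if people_if_swerve < people_ahead:
            return "swerve"
        return "brake"

    # The one child ahead versus the five pedestrians hit by swerving:
    print(choose_action(people_ahead=1, people_if_swerve=5))  # prints "brake"

A deontological version would look quite different: instead of counting casualties, it would check each proposed action against a fixed list of rules and refuse anything forbidden, no matter the count.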

And what about letting the car break the rules when necessary? I usually give bikers a wide berth, even crossing the double line when it’s safe. Would a robot car be programmed to do the same?

All of this raises the question of responsibility. It’s your car, either owned or leased, and you had better have insurance that covers situations like this. But what about the car manufacturer, and what about the programmers who wrote the software: are they also liable?

If the thought of driverless cars scares you, consider the possibility of autonomous armed drones, whose goal is not safety but destruction. For better or for worse, science fiction is rapidly becoming fact.
