Autonomous Cars and Their Ethical Conundrum

October 14, 2015

I am rooting for autonomous cars. AARP has said I am now a Senior Citizen. I have the card they sent me in the mail to prove it. While I admit I am getting up there in age, I am concerned that someday the DMV will yank my license for age or health reasons. When that time comes, I want a self-driving car in my driveway to make sure I have the freedom to go anywhere I want, as I have since I was 16 years old. Unlike others who are hesitant about driverless cars, I would embrace them wholeheartedly, as I would sooner sit back and read, work on my laptop or peruse an iPad than deal with traffic and navigation. Even today, if all I needed to do was tell my car where to take me, then get in, sit back and enjoy the ride, I would be one happy guy.

Now, I know we are years away from getting autonomous cars on the road and getting the right kind of government regulations passed to make this possible. But the technology is getting close enough to create these types of vehicles and, in theory, they could be ready for the streets within the next 3-5 years. I suspect I have at least another 15-20 years before the DMV yanks my license, so as long as they are ready by then, I can live with that. But as I have been thinking about the various obstacles and roadblocks that must be cleared before everyone could embrace autonomous vehicles, there is one particular issue tied to their success that concerns me: ethics.

At the recent Re/code Mobile conference, there was a great panel on self-driving cars. At the end of the session, I posed this question to the speakers on the panel:

Let’s say that I am in a self-driving car. It has full control and the brakes go out. We are about to enter an intersection where a school bus has almost finished turning left, a kid on his bike is in the crosswalk just in front of the car, and an elderly woman is about to enter the crosswalk on the right. How does this car deal with this conundrum? Does it think, “If I swerve to the left, I take out a school bus with 30 kids on it. If I go straight, I take out a kid on a bike. If I swerve right, I hit the little old lady”? Is it thinking, “The bus has many lives on it, and the kid on the bike is young and has a long life ahead, but the elderly woman has lived a long life, so I will take her out,” settling on that as the least onerous solution?

I realize this question is over the top, but one can imagine many types of ethical issues a self-driving car will encounter. Understanding how the engineers, philosophers and ethicists design the final algorithms that sit at the heart of these autonomous vehicles will be very important to the success of these automobiles.

My colleague over at PC Mag, Doug Newcomb, wrote a good piece on this ethical question. He said:

“The day after getting a ride in Google’s self-driving car in Mountain View, California, I attended an event at Mercedes-Benz’s North American R&D facility in nearby Sunnyvale. Among several topics covered throughout the day, Stanford professor and head of the university’s Revs program Chris Gerdes gave a presentation that delved into the subject of ethics and autonomous cars.

“Gerdes revealed that Revs has been collaborating with Stanford’s philosophy department on ethical issues involving autonomous vehicles, while the university has also started running a series of tests to determine what kind of decisions a robotic car may make in critical situations.

“As part of his presentation, Gerdes made a case for why we need philosophers to help study these issues. He pointed out that ethical issues with self-driving cars are a moving target and ‘have no limits,’ although it’s up to engineers to ‘bound the problem.’

“To do this and move the ethics of self-driving technology beyond a mere academic discussion, Revs is running experiments with Stanford’s x1 test vehicle by placing obstacles in the road. He noted that placing different priorities within the vehicles’ software program has led to ‘very different behaviors.’”
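Gerdes’ point about priorities producing “very different behaviors” is easy to illustrate. Below is a minimal, purely hypothetical sketch in Python of a car scoring the maneuvers from my no-brakes scenario; the maneuver list, harm estimates, weights and cost function are all my own invention for illustration, not Stanford’s experiments or any automaker’s actual code.

    # Hypothetical sketch: how different priority weightings could change
    # what a self-driving car does in the no-brakes scenario above. The
    # maneuvers, harm estimates and weights are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        people_at_risk: int   # rough count of people the maneuver endangers
        impact_speed: float   # estimated collision speed in mph

    def cost(m: Maneuver, weights: dict) -> float:
        # Lower score wins under this invented model.
        return (weights["lives"] * m.people_at_risk
                + weights["severity"] * m.impact_speed)

    options = [
        Maneuver("swerve left into the school bus", 30, 15.0),
        Maneuver("go straight into the kid on the bike", 1, 25.0),
        Maneuver("swerve right into the elderly woman", 1, 20.0),
    ]

    # Two priority settings: one dominated by how many lives are at risk,
    # one dominated by how severe the impact would be.
    for label, weights in [("lives-first", {"lives": 10.0, "severity": 1.0}),
                           ("severity-first", {"lives": 1.0, "severity": 10.0})]:
        choice = min(options, key=lambda m: cost(m, weights))
        print(label, "->", choice.name)

Run as written, the two weightings pick different victims from the same three options, which is exactly the kind of divergence Gerdes described, and exactly why those weights cannot be left to engineers alone.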

I am encouraged by the work being done at Stanford’s Revs program, and I know similar work is underway at many universities and at all of the companies developing autonomous cars. Solving this ethics problem needs to be at the top of their list, right alongside the fundamental software tied to the cameras, CPUs and sensors that control the car’s functions. While I doubt they could ever program a car to handle every ethical situation it might encounter, these folks will have to go the extra mile on this issue if the public is ever to really embrace driverless cars and make them the future of personal transportation.