Autonomous Cars and Their Ethical Conundrum

I am rooting for autonomous cars. AARP has said I am now a Senior Citizen; I have the card they sent me in the mail to prove it. While I admit I am getting up there in age, I am concerned that someday the DMV will yank my license for either age or health reasons. When that time comes, I want a self-driving car in my driveway to make sure I have the freedom to go anywhere I want, as I have since I was 16 years old. Unlike others who are hesitant about driverless cars, I would embrace them wholeheartedly, as I would just as soon sit back and read, work on my laptop or peruse an iPad as deal with traffic and navigation. Even today, if all I needed to do was tell my car where to take me, then get in, sit back and enjoy the ride, I would be one happy guy.

Now I know we are years away from getting autonomous cars on the road and getting the right kind of government regulations passed to make this possible. But the technology is getting close enough to create these types of vehicles and, in theory, they could be ready for the streets within the next 3-5 years. I suspect I have at least another 15-20 years before the DMV yanks my license, so as long as they are ready by then, I can live with that. But as I have been thinking about the various obstacles and roadblocks that must be cleared before everyone can embrace autonomous vehicles, there is one particular issue tied to their success that concerns me: ethics.

At the recent Re/code Mobile Conference, there was a great panel on self-driving cars. At the end of the session, I posed this question to the speakers on the panel:

Let’s say that I am in a self-driving car. It has full control and the brakes go out. We are about to enter an intersection where a school bus has almost finished turning left, a kid on his bike is in the crosswalk just in front of the car and an elderly woman is about to enter the crosswalk on the right. How does this car deal with this conundrum? Does it think, “If I swerve to the left, I take out a school bus with 30 kids on it. If I go straight, I take out a kid on a bike. If I swerve right, I hit the little old lady”? Is it thinking, “The bus has many lives on it and the kid on the bike is young and has a long life ahead, but the elderly woman has lived a long life, so I will take her out as the least onerous solution”?

I realize this question is over the top, but one can imagine many types of ethical issues a self-driving car will encounter. Understanding how the engineers, philosophers and ethicists design the final algorithms that sit at the heart of these autonomous vehicles will be very important to the success of these automobiles.

My colleague over at PC Mag, Doug Newcomb, wrote a good piece on this ethical question. He said:

“The day after getting a ride in Google’s self-driving car in Mountain View, California, I attended an event at Mercedes-Benz’s North American R&D facility in nearby Sunnyvale. Among several topics covered throughout the day, Stanford professor and head of the university’s Revs program Chris Gerdes gave a presentation that delved into the subject of ethics and autonomous cars.

Gerdes revealed that Revs has been collaborating with Stanford’s philosophy department on ethical issues involving autonomous vehicles, while the university has also started running a series of tests to determine what kind of decisions a robotic car may make in critical situations.

As part of his presentation, Gerdes made a case for why we need philosophers to help study these issues. He pointed out that ethical issues with self-driving cars are a moving target and “have no limits,” although it’s up to engineers to “bound the problem.”

To do this and move the ethics of self-driving technology beyond a mere academic discussion, Revs is running experiments with Stanford’s x1 test vehicle by placing obstacles in the road. He noted that placing different priorities within the vehicle’s software has led to “very different behaviors.””

I am encouraged by the work being done at Stanford’s Revs program and know similar work is being done at many universities and inside all of the autonomous car makers. Solving this ethics problem needs to be at the top of their list when it comes to how they program these cars, beyond the fundamental software tied to cameras, CPUs, sensors, etc., that controls the car’s functions. While I doubt they could ever program a car’s ethical take on every situation it could encounter, these folks will have to go the extra mile on this issue if the public is ever to really embrace driverless cars and make them the future of personal transportation.
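Gerdes’s point about priorities is worth making concrete. Below is a minimal, hypothetical sketch of how weighted priorities could drive a car’s choice of emergency maneuver; every maneuver name, risk category and number is invented for illustration and does not reflect Stanford’s, Google’s or any automaker’s actual software.

```python
# Hypothetical priority-weighted maneuver selection. All maneuver names,
# risk categories and numbers below are invented for illustration; this
# is not any vendor's or university's actual algorithm.

# Rough risk scores in [0, 1] for each candidate emergency maneuver.
CANDIDATES = {
    "brake_straight": {"occupant_risk": 0.4, "pedestrian_risk": 0.6, "property_risk": 0.2},
    "swerve_left":    {"occupant_risk": 0.5, "pedestrian_risk": 0.1, "property_risk": 0.9},
    "swerve_right":   {"occupant_risk": 0.2, "pedestrian_risk": 0.8, "property_risk": 0.1},
}

def pick_maneuver(weights: dict) -> str:
    """Return the candidate maneuver with the lowest weighted total risk."""
    def cost(scores: dict) -> float:
        return sum(weights[k] * scores[k] for k in weights)
    return min(CANDIDATES, key=lambda name: cost(CANDIDATES[name]))

# Equal priorities pick one maneuver ("swerve_right" here)...
print(pick_maneuver({"occupant_risk": 1.0, "pedestrian_risk": 1.0, "property_risk": 1.0}))
# ...while a strong "protect pedestrians" priority picks another ("swerve_left").
print(pick_maneuver({"occupant_risk": 0.2, "pedestrian_risk": 2.0, "property_risk": 0.5}))
```

Even in this toy version, rebalancing the weights flips the chosen maneuver for the same situation, which is exactly why the priority-setting, and not just the sensing hardware, is where the ethical debate will be fought.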

Published by

Tim Bajarin

Tim Bajarin is the President of Creative Strategies, Inc. He is recognized as one of the leading industry consultants, analysts and futurists covering the field of personal computers and consumer technology. Mr. Bajarin has been with Creative Strategies since 1981 and has served as a consultant to most of the leading hardware and software vendors in the industry including IBM, Apple, Xerox, Compaq, Dell, AT&T, Microsoft, Polaroid, Lotus, Epson, Toshiba and numerous others.

38 thoughts on “Autonomous Cars and Their Ethical Conundrum”

  1. No doubt there will be some accidents with self-driving cars, but any ethical conundrum should be offset easily by the dramatic decrease in total accidents and fatalities. Most humans are terrible drivers, truly awful. However, most people think they are great drivers. Self-driving cars can’t arrive soon enough.

  2. I think it’s safe to say you will not live long enough to see autonomous cars. Google has really managed to hornswoggle a lot of tech geeks and journalists with their robot car demo, but it remains just that: a demo that cannot be scaled beyond the thousand or so miles of road they’ve taught their car to drive on without a staggering amount of work.

    Google is tackling something (teaching a computer to see the road and interpret its status in a flexible and intelligent manner) that we simply have no idea how to even begin trying to do and may never manage to achieve. As with so many other overhyped advances in AI research, most of their progress has been achieved by moving the goalposts rather than by actually making any headway with the difficult problem of teaching computers to think.

    Early on, Google gave up on teaching their robot car how to drive and instead focused on giving it a fantastically detailed map of the road it was to drive on. However, that means that what they’ve been showing off so proudly is a tech demo that cannot ever be scaled up from the small network of roads they’ve mapped in enough detail for their stupid robot to navigate to the entire road network.

    1. Thank you for this. “Hornswoggle” is perhaps too polite a word. We have ‘self-driving airliners’ now. They only require the attention of two full-time people, only fly prescribed routes (standardized, known well in advance) rather than on-demand routing, and still we have ‘pilot error’ as one of the leading causes of crashes.

      The notion that people who cannot be relied upon to operate steering wheels and brake pedals will be able to expertly manage programmatic car control is…optimistic.

      I am weary of the pipe dream of ‘sit back and work on my laptop’ which has sold so many on the illusion of self-driving cars.

      And the quandaries posed by this article are not only ‘ethical’. They are legal. The penalty for a driver today choosing the school bus, the elderly woman or the kid on the bike is manslaughter. Today, these acts are ‘accidental’. When we write the code to enact them, they become deliberate. Philosophy will not be a useful defense against the civil suit filed by the relatives of any element in this set of potential victims. To be completely ethical, perhaps Mr. Bajarin’s car should self-destruct, leaving only dust, not shrapnel.

      Mr. Bajarin, in your ethics problem, did you not forget that it is you that entered the intersection and created the potential death of someone? Why are you the only one guaranteed to survive? There is a fourth choice, Mr. Bajarin.

      Does this notion diminish the halo of your “self-driving” car? If you lose the philosophical debate, will you embrace the consequences of the fourth choice as eagerly as you seem ready to embrace the correct selection when there are three choices?

        1. Perhaps. AI is coming, we can only attempt to channel it in useful ways. There is, as you may imply, a real issue with AI. Do we wish “accidental” deaths (manslaughter?) to be replaced with “programmed, determined, preordained deaths” (murder?). The particular issue of auto “accidents” is perhaps an extreme one, but one promise of AI is to replace random actions with deterministic actions. It’s not clear how humans will react to that.

          But back to the example Mr. Bajarin uses: is any of us really ready for a phone call saying a computer determined that our spouse was less valuable than some other potential casualty in a given scenario, and that as a result our spouse is dead? I like to think I am logical, but I am not ready for that. To generalize, there are many value assessments and judgements we make in our lives, and it is not clear that we are anywhere close to agreeing on them, or to abiding by a group consensus, or even to allowing individuals to program “rules” about who might be killed by our car. As for the scenario where I engineer situations in which the “rules”, however devised, can be used to ordain an action by a machine so as to escape any personal culpability for that action: nobody is ready to get their arms around that. Two teenagers park a school bus on a side street, watch pedestrians crossing with traffic, and when a car comes along at the right time dart on their bicycles into the road, knowing the algorithm will choose to kill the pedestrian. What do we all think of that? Should the AI determine that the bicycle rider has “non-accidental” intentions and kill the bicyclist?

          We are not ready. Expectations? Nope. Technology? Nope. Algorithmic development? Maybe sort of. Ethics? Nope. The implications for our lives from all of this? Guess….

          1. 1100101,

            I think Mr. Bajarin’s scenario is a bit of a ‘straw man’ (I’m sure there are other, more real-world scenarios that pose a similar dilemma, so I’m not going to dwell on the particulars), but I’m sure the decision tree will be to avoid unprotected people in the vehicle’s path. Hitting another vehicle is more likely the least risky option.

            In your scenario, yes, if the vehicle has no other option, it hits the cyclists. They are at fault. In most of these scenarios, hard braking with minimal steering will probably be the most likely decision. Obstacle-avoidance steering maneuvers are fraught with peril (unpredictable outcomes).

            Unexpected obstacles in a roadway will most likely result in those obstacles being hit. Otherwise, throwing a fake kid in front of your enemy’s car, knowing the autonomous vehicle will swerve off the road and kill the occupants, would become a new form of homicide.

            You are correct, the non-technical details are as big an obstacle as the technical ones. The ‘autonomous’ vehicle train is leaving the station. The issues solved seem to outweigh the issues created. We’ll see if that ends up being true or not.

            Cheers-

      1. I think the legal issue with Mr. Bajarin’s conundrum is ‘why did the brakes fail’, not the decision made by the AI in this hypothetical autonomous vehicle. The party responsible for the brakes failing is on the hook in this situation. ;v)

        Not sure I understand your general hostility toward the interest in self-driving autos? Autonomous vehicles won’t be distracted by phones, kids, alcohol, road rage, etc.

        Your anger towards Mr. Bajarin is misdirected.

        I do agree, the roads would be safer if drunk drivers hurtling through a red light/stop sign would automatically self-destruct.

        1. Sorry if my comments seem to imply a hostility towards Mr. Bajarin. I do have a hostility towards the general effort in the media, certain companies, and tech analysts in particular, for their attempt to oversell “autonomous cars”. I have this reaction for two reasons:

          1. Some of the marketing spin (primarily from Google and Tesla, echoed by tech analysts) is full of claims of great breakthroughs. Many people do not understand that the first autonomous car was a Mercedes, on the Autobahn, in 1997. Many do not compare what Google and Tesla claim vs. what many auto companies are already shipping as product, and have for years. Companies like Mercedes, Audi, BMW and others have some or all of these “cutting edge” ideas in products that customers can buy right now, with warranties and all the normal auto product stuff.

          2. There is a general misunderstanding regarding “autonomous”. A great model to set expectations would be the autopilot systems on modern airliners, but no one wants to hear that “autonomous” does not mean “no attention paid” or “no liability incurred”. We are many years away from anything approaching true, complete autonomy. Despite what some people, desperate for media attention and stock price appreciation, would have you think.

          1. 1100101, “e” ?

            I hear you. I worked many years in the auto industry (mid-’90s at Chrysler, and about a decade at Freightliner Trucks through the mid-’00s). Google is getting an outsized share of attention for their efforts. The commercial vehicle industry even led the passenger auto segment with blind-spot warning, lane-departure warning and active cruise control systems. Passenger vehicles have had auto-parking and collision-avoidance systems (auto braking, etc.) for years.

            You are probably correct that fully autonomous vehicles are a ways off, at least in the way people imagine, where a user could plug in any address and the vehicle would drive them there. But we are much closer to the 80% (?) of what a fully autonomous vehicle could get us. It is not too difficult to imagine public places like airports implementing self-parking vehicle lots (especially for rental vehicles!). Freeway driving is lower-hanging fruit than suburban driving. Imagine letting your vehicle navigate bumper-to-bumper traffic in LA! That stuff is relatively doable. It will come in steps and users will adopt it like they do cruise control.

            I would propose much of the enthusiasm is from the desire of many people to be able to relinquish driving duties to their vehicle in many situations.

      2. There’s absolutely nothing ethical about abdicating one’s responsibilities and assuming we are absolved of them.

    2. I don’t think you’d get much agreement on your timeline from Travis Kalanick, the CEO of Uber. He has said that his intention is to make using Uber cheaper than car ownership, and the primary means of accomplishing that is to eliminate the expense of the driver, so the company wants self-driving cars. It’s well known that Uber does not let obstacles stand in their way for long, and they recently hired 50 of the best robotics experts to help them reach their goal. I expect to see Uber running autonomous cars successfully within 5 years.

      1. I still don’t even know how autonomous Google’s cars, which seem to be the most advanced for now, really are. Are they doing all the sensing and processing on board the car? Is all the necessary data onboard too, except for traffic/roadwork updates?
        If Google’s demos are based on a few kilometers of highly detailed mapping (unavailable for any other stretch of road) plus offboard real-time processing, plus street-side camera feeds, this thing is nowhere near scaling out of its playpen.

      2. The only way to make self-driving cars able to navigate anywhere is going to be to make them smart enough to interpret the road and make judgement decisions about it.

        That is currently impossible, and given the way AI research has continued to make absolutely no headway on problems like that for the past 20+ years, I really don’t think it’s anywhere on the horizon either. Moore’s law isn’t the issue here; it’s not that we lack computers fast enough to drive in an intelligent manner at highway speeds, it’s that we don’t have computers of any size that can be taught to drive in an intelligent manner at any speed. We currently don’t have the faintest idea how to build such computers.

        Google did it by cheating, adopting a “boil the ocean” approach — break the road down into millions of mappable facts and then teach a computer to navigate the map like a well trained rat. Which isn’t something you can do nationwide unless you’re willing to spend decades and hundreds of billions of dollars.

        The tech industry has a long history of vastly overestimating its ability to make progress with AI. The long line of tech companies jumping on the driverless car bandwagon is just the latest example of that.

        1. I think you’ve convinced me on the timeline issue. Given how terrible human drivers are the low hanging fruit of ‘autonomous’ vehicles is the driver assist tech that has already been making its way into vehicles you can buy today. Perhaps Apple has something up its sleeve in this regard.

  3. It would be interesting to run these “driving ethics test scenarios” by random human drivers in a “simulator” and track the decisions made. How many drivers hit the kid, or the elderly person, or the bus? (BTW, school buses are built like tanks.)

    I imagine there will be a hierarchy of “emergency action protocols”, much like Asimov’s Three Laws; see the sketch below.
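    To illustrate, here is a toy sketch of such a hierarchy, evaluated in strict priority order; every rule, situation field and action name is invented for illustration and is not any real vehicle’s logic.

    ```python
    # Toy "emergency action protocol" hierarchy, evaluated in strict
    # priority order, in the spirit of Asimov's Three Laws. All rules,
    # situation fields and action names are invented for illustration.

    EMERGENCY_PROTOCOLS = [
        # (condition, action) pairs; the first condition that holds wins.
        (lambda s: s["unprotected_people_in_path"], "brake_hard_minimal_steering"),
        (lambda s: s["vehicle_in_path"],            "brake_and_steer_to_shoulder"),
        (lambda s: True,                            "brake_to_controlled_stop"),
    ]

    def emergency_action(situation: dict) -> str:
        """Return the action of the highest-priority protocol that applies."""
        for condition, action in EMERGENCY_PROTOCOLS:
            if condition(situation):
                return action

    # Example: unprotected people in the path outrank everything else.
    print(emergency_action({"unprotected_people_in_path": True, "vehicle_in_path": True}))
    ```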

    As others have pointed out, fewer human drivers on the road will most likely translate into fewer deaths and injuries from accidents.

    While Google’s autonomous vehicle program employs very detailed mapping, they have shown that mapping populated urban areas is achievable, and current autonomous systems have a much easier time in rural highway/freeway situations. I think, barring legal issues, we will see at least some autonomous vehicles in our lifetime.

    1. I would expect in the majority of the simulations, the driver just hits the brakes in a panic, careens out of control, and basically the physics of the situation, not conscious driver action, determines who gets injured.

      The implicit assumption that the driver is able to assess the situation and make a choice based on that assessment is far more optimistic than the reality: most drivers out there are totally incapable of performing even the most rudimentary mental calculations when confronted with a life-threatening, real-time traffic emergency.

      1. Yes, I’m sure the majority of the time human drivers pretty much slam on the brakes. I wouldn’t expect much more than that either.

        I didn’t mean to insinuate drivers would be able to do much more than that; swerving would be a close second, though. My point was, it would be interesting to see the most likely thing a human would do vs. the AI.

  4. It’s a bit of a fake issue:

    1- With a human driving, the situation is much more likely to happen, even if the brakes don’t fail.
    2- With a human driving, the reflex will probably always be “save my passengers, then myself”, statistically leading to more casualties than whatever “minimum casualties rule” the car uses.
    3- I’d assume intelligent cars will have 2 or 3 sets of brakes, and will be able to use them (which most drivers can’t). Simple elevators have 2 sets of brakes; cars will probably have 3, even if one blows out the engine.
    4- An intelligent car will blow its horn, making everyone aware of the situation. Drivers rarely do.

    The question is interesting nonetheless. And probably oversimplified: is the kid a juvenile delinquent, and the old lady Mother Teresa (well, there are doubts about Mother Teresa’s ethics; pick your own saintly oldie)? Who is richer (the justice, education, and healthcare systems make most of their decisions based on that) or cuter/better dressed? Which skin color/apparent religion/ethnicity is everyone? Is any of them a celebrity?

    I’m for equal opportunity roadkill, but I’m not sure I could live with myself after killing anyone. So I’d go ditch, then electric pole, then bus (can probably take it if we’re both going slow, and the kids might learn something about safe driving), old lady, kid on bike. Of course my robot should think like me.

    1. I believe Volvo has made some statement that they will accept full liability in incidents related to fully autonomous driving. But if the bus is turning left and has almost completed the turn, it’s probably better to downshift rapidly, which the car would most likely do, and skid along the body of the 40+ foot bus to come to a stop. There’s no reason to take the full impact, and a swerve could cause a flip, which the car’s system probably tries to avoid with some type of DTCS.

      And, you’re right, the problem is oversimplified and is more an engineering problem than an ethical argument.

      1. That process could potentially allow for solutions in such situations that the average driver may be unaware of or unable to think of quickly enough.

        Joe

  5. We can’t even deploy autonomous planes and trains, for which the computing problem is far less complex (it might already have been solved), and we’re dreaming of autonomous cars?

    1. However, while for planes and trains it can arguably be assumed that the trained, professional human element is the fail-safe (accidents to the contrary notwithstanding), I could see a scenario where the power brokers decide autonomous cars should be deployed to protect us from ourselves. I would not support such a position, but I could see it happening.

      Joe

      1. What I fear is that once autonomous cars get deployed, governments, in the interest of ‘safety’, will gradually impose all sorts of restrictions on human-driven cars until they are eventually banned outright or regulated out of existence. I just know this will happen because bureaucrats will think, “a system of transportation with autonomous cars would really work perfectly if not for these pesky, unpredictable, erratic human drivers who drive the autonomous cars’ guidance systems haywire.” Mark my words.

    1. Based on the number of car accidents daily, it seems the human mind has problems with the human element as well.

      Joe

        1. But what if you could put all the wisdom of expert car handling into the hands of the not-so-expert? We are already down that path with anti-lock brakes and driver-assist systems that help with blind spots.

          I don’t think autonomous is anytime soon, but I think we are already headed toward more driver assist systems. That will make the notion of the autonomous, truly “auto”-mobile easily accepted.

          Joe

    1. Yeah, after last year’s winter in Boston, I’ve been saying the same thing. What does a self-driving car do in this scenario? You are entering a narrow street where both sides of the road are filled with snow-covered vehicles, such that there is room for only a single lane at a time. Good luck with a car that has no driver controls.

      1. My car already has a “snow-conditions” setting. From my personal experience I don’t see an autonomous car doing worse than human drivers. I lived in Maine for 10 years and Chicago and Connecticut each for two. Humans are hardly the standard for expert snow driving. Maybe a smart car will at least wait to have its roof cleared of snow before letting a driver pull out.

        Joe

  6. When these “autonomous”, “self-driving” cars figure in accidents (hitting someone, collisions, et al.), who will be the one held accountable?

    The passenger in the driver’s seat or the auto manufacturer?
