You are stretched out comfortably in your fully self-driving car, speeding down the road. You don’t even have to sit upright anymore, since cars are now designed for maximum comfort and utility rather than around the needs of a human driver.
Suddenly, an enormous concrete building section topples off the back of the construction truck driving in front of you. In a fraction of a second, this giant obstacle presents your car’s AI with a simple decision: brake or dodge.
To your left is an SUV with a human passenger.
To your right is an unprotected motorcyclist.
Straight ahead is certain death.
What should your AI do?
This is an updated version of the Trolley Problem I wrote about last month. After a lively discussion about this dilemma, Alejandro shared some interesting thoughts about this concept as it pertains to autonomous cars. Ale and I have worked together in the past, even publishing two pieces on the Turing Test in the same week (mine is here, Ale’s is here). He writes Mostly Harmless Ideas (check him out after reading this!).

Let’s dive deeper into this concept. It gets really interesting really quickly.
Thorny Scenarios
Building on the above scene, it’s tough to imagine wanting to buy an AI that would choose the option with the least chance of saving your life. Here’s Alejandro:
If the AI decides to crash into the SUV, you'll most likely survive, albeit with serious injuries, and the other driver also has a good chance of surviving. However, if the AI chooses to hit the biker, he will almost certainly die, but you will maximize your chances of not getting hurt. If the AI tries to save you at all costs, it will kill the biker.
So, killing the motorcyclist would be the “optimal solution” for you, but do we want to encourage the most selfish solution in a life-and-death scenario like this?
A utilitarian framework (an extremely common mindset in tech circles) would tell us to program the AI to hit the SUV, the option that minimizes the chance of anyone dying. But this risk calculus gets weird pretty quickly. Suppose that instead of an SUV, there are two motorcyclists, one on each side, but here's the problem: one is wearing a helmet, and the other is not. Which option minimizes the chance that someone will die?
Yikes. The original scenario was thorny enough, but this brings your decision-discomfort to a whole new level.
Even though the biker with the helmet has only a slightly better chance of surviving here, that edge could be enough to tip the decision.
Suddenly, doing the “right thing” as a motorcyclist (wearing protection) increases your chances of getting hit by a car. We probably don't want that either, or we'll all end up riding with as little protection as possible to avoid being "picked" by the AIs. That would be a very strange world.
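To see why this calculus is a formula and not just a vibe, here is a minimal sketch in Python. Every number and option name in it is invented purely for illustration; the point is the shape of the rule, not the values.

```python
# A toy sketch of the utilitarian rule "minimize expected deaths."
# All survival probabilities below are made up for illustration only.

# option -> (probability the passenger dies, probability the other party dies)
OPTIONS = {
    "brake_into_concrete":   (0.95, 0.00),
    "hit_suv":               (0.10, 0.15),
    "hit_helmeted_biker":    (0.05, 0.80),
    "hit_bare_headed_biker": (0.05, 0.90),
}

def utilitarian_choice(options):
    """Return the option with the lowest total expected deaths."""
    return min(options, key=lambda o: sum(options[o]))

print(utilitarian_choice(OPTIONS))  # -> hit_suv
del OPTIONS["hit_suv"]              # remove the SUV from the scene...
print(utilitarian_choice(OPTIONS))  # -> hit_helmeted_biker
```

Notice that nothing in the code is malicious: the same innocuous-looking min() mechanically “picks” the helmeted rider the moment the SUV disappears, which is exactly the perverse incentive described above.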
Thornier Still
Let’s repaint the original scenario to see if we can make it even more impossible.
Imagine there are two motorcycles now, one on each side of you. The concrete section slams down in front of you. If you veer right, your car will kill a pregnant woman. If you veer left, you’ll kill an elderly couple.
How does one measure the value of a potential life? How does that compare to the lives of two humans who have already lived full ones? Does age matter? Should it?
What if it was two teenagers and two older folks? What if the teenagers had just killed someone?
What if there are two children, but one of them has leukemia? This gets very uncomfortable very quickly.
We begin to get an idea of what we’re really up against here: we are asking an AI to make decisions that we ourselves can’t really make, even given a lifetime to decide. The AI will have a minute fraction of a second.
Other Ways to Decide
Another way to think about this ethical conundrum is to consider how a human being would react in a similar spot. We make decisions in-the-moment that we often regret, but we make these decisions quickly, with one part brain, one part emotions, and one part randomness.
Could we just abstain and throw our hands up in the air?
One possible solution is to refuse to choose at all.
In an impossible situation, let the AI flip a coin, so whatever happens, it will be a matter of luck. This is closer to how a human would act in this situation. It saves us from having to do strange moral calculations.
But is our moral inconvenience more important than someone's life?
This question might not be as rhetorical as it seems. Conflicts over finding the best solution can often lead to other conflicts, especially in the political arena.
Why is this question so difficult? Well, morality is complicated. It seems immoral to even consider that there could be a pre-defined answer for whose life is worth more. But that calculation will come up every time we put an AI in charge of life-and-death decisions. And the AI will have a formula, even if that formula is “choose at random.”
This had better be a formula we can live with as a collective.
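For completeness, here is what that “choose at random” formula might look like: a deliberately tiny Python sketch, with hypothetical option names, making the point that even abdication has to be programmed.

```python
import random

def coin_flip_policy(options):
    """The 'refuse to choose' formula: treat every option as morally
    equivalent and let luck decide. Even this abdication is a
    deliberate, pre-programmed policy."""
    return random.choice(list(options))

# Hypothetical options from the scenario above:
print(coin_flip_policy(["veer_left", "veer_right", "brake"]))
```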
So… Who’s At Fault?
If you're driving, you'll do what your gut tells you. And whatever that decision, whatever the consequences, any court will recognize that you did not have enough time to think, and will judge you accordingly.
But an AI had enough time to think and make a decision. That decision was pre-programmed, calculated from a pre-programmed formula, learned from data, reinforced, etc. In any case, there are humans behind this who had a lot of time to think carefully about it. And they made a conscious decision, didn't they?
Someone made the programming decision, but should that person go to jail if an AI kills someone? Who gets sued?
In our society, there are two ways disputes of this sort are usually settled, at least on a national level: violent conflict, or politics.
I’m afraid this is a discussion destined for political debate and discourse. Our politicians are most likely to be the ones who decide.
Don’t Blink
We are fast approaching the point where these aren’t just thought experiments; they are full-blown life-and-death conundrums. Self-driving cars are going to be the norm before we know it, and we need to wrap our heads around what that means.
While engineers and programmers can code in an AI's reactions, it's up to us to decide what those reactions should be.
Let's move this conversation from the realm of hypothetical scenarios to the corridors of policy and governance. Consider this place your “dinner table” for conversation, and tell me (and everyone) what you think about these dilemmas.
Leave me a comment:
Here's an aspect we always overlook in these thought experiments: we have forgiveness and tolerance of error for human drivers.
That is the monkey wrench in all the discussions of self-driving cars. We don't offer them the same.
Funnily enough, the "cyclist with a helmet" example is closer to reality than you might think.
There's been research (here's one study I found: https://www.bath.ac.uk/announcements/helmet-wearing-increases-risk-taking-and-sensation-seeking/) suggesting that drivers pass closer and in a riskier manner to cyclists who wear helmets, subconsciously believing they're better protected.
And then there's the fun twist in the form of "moral hazard" where cyclists themselves act slightly more recklessly when they're wearing a helmet.
Conclusion: Take no safety precautions whatsoever if you want to live! (Citation needed)