Vehicle automation is fascinating, may save thousands of lives each year, could reduce fuel consumption (an open question), and leaves many legal questions to be addressed (“who’s at fault when a self-driving car crashes???”). These are the required statements when writing any article about self-driving cars, and we’ve written quite a few.
One rarely mentioned aspect of vehicle automation is how a vehicle decides to take on risk. For example, say a self-driving car decides that it would have a better view of a traffic light if it moved into the left lane, out from behind an SUV with heavily tinted windows. This move would provide slightly better data, but it’s discretionary: the car wants that extra data but doesn’t need it. Any lane change carries a small risk of a crash. Is the extra data worth the additional risk?
Amazingly, Google published a patent last week discussing exactly this. Don’t let the title fool you: “Consideration of risks in active sensing for an autonomous vehicle” is not as dry as it sounds. Let ratpag break it down: when a vehicle wants to maneuver to get better data, like changing lanes to see a traffic signal, it estimates the value of the improved data and the probability of actually obtaining that improvement, based on historical examples. Multiply the benefit by the probability, and you get the “pro” score.
To measure the downsides, you do something similar. The car estimates the probability of every “bad event” that could happen, no matter how unlikely. These probabilities are again based on historical data: for example, a car is more likely to hit a pedestrian when driving near a sidewalk than when traveling one lane over. The probability of each bad event is multiplied by the magnitude of that event. Sum them up, and you get the “con” score. If the pro score outweighs the con score, change lanes. There are some additional details and caveats, but unless you actually want to read the whole patent like we did, just trust that this is the gist.
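Concretely, the decision rule boils down to an expected-value comparison. Here’s a minimal sketch of that logic; the function names, probabilities, and magnitudes below are our own illustrative assumptions, not values from the patent:

```python
# Sketch of the pro/con risk comparison described in the patent.
# All numbers here are hypothetical, chosen only to show the math.

def pro_score(info_benefit, p_improvement):
    """Expected benefit of the maneuver: value of the better data
    times the probability (from historical data) of getting it."""
    return info_benefit * p_improvement

def con_score(bad_events):
    """Expected cost: sum of probability * magnitude over every
    'bad event' the maneuver could cause, however unlikely."""
    return sum(p * magnitude for p, magnitude in bad_events)

def should_change_lanes(info_benefit, p_improvement, bad_events):
    """Maneuver only if the expected benefit outweighs the expected cost."""
    return pro_score(info_benefit, p_improvement) > con_score(bad_events)

# Hypothetical scenario: seeing the traffic light is worth 5 units,
# with a 90% chance the new lane actually gives a clear view.
pro = pro_score(5.0, 0.9)  # 4.5
# Bad events as (probability, magnitude) pairs: a minor side-swipe
# and a rare severe collision.
cons = [(0.01, 50.0), (0.0001, 5000.0)]
con = con_score(cons)  # 0.5 + 0.5 = 1.0
print(should_change_lanes(5.0, 0.9, cons))  # True: 4.5 > 1.0
```

Note that everything contentious hides in those magnitude numbers, which is exactly where the trouble starts.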
This is great news for those who have argued that automated vehicles should incorporate ethics into their decision-making systems. This same pro/con logic could be programmed into crash avoidance systems. Given a choice between hitting a pedestrian and a vehicle, basic understanding of safety dictates we hit the vehicle, and this patent is a first attempt at codifying this type of decision.
This praise comes with several qualifiers. First, it’s not clear from the patent how these risk magnitudes are determined. Google provides a table with sample magnitudes.
These were obviously pulled out of thin air for the patent, but if this system were deployed, real values would be needed. The immediate question is how to determine them. The most obvious answer is actuarial tables. But implemented naively, this could mean a self-driving car assigns a higher risk value to hitting a Tesla than a Chevy S10 based solely on damage estimates. Or to put it more morbidly: pedestrians in a poor neighborhood may have fewer resources to pursue a lawsuit, so we’ll just assign them a slightly lower risk magnitude.
There are a few possible solutions. First, make the risk magnitudes public. Let people know how you assign value to these situations. I’m sure Google will cry “trade secret!” but you know what? ratpag doesn’t care. You make $40B in annual profit, and this risk algorithm has a profound effect on people’s lives.
The other alternative is government regulated risk magnitudes, but ratpag isn’t sure that’s feasible (imagine every potential “bad event” that can occur on a roadway), and ratpag is COMPLETELY sure Google knee-jerk hates this idea. Another alternative is to give the passenger some control over risk magnitudes, but then ratpag thinks about the worst three drivers he saw today and would rather they just have no choice in anything whatsoever.
So kudos to Google for at least taking a crack at the self-driving car ethics problem, but please put some restrictions or a code of ethics in place when calculating risk magnitudes, and for the love of the game someone force them to provide these factors upon request.