As we all know by now, a self-driving Tesla crashed, killing its driver. Apparently the car's computer couldn't "see" the huge, unmarked trailer because of the solar glare. So the big truck turned left in front of the car and a classic T-bone resulted. Oops!
We will play a variety of roles in this next wave of mechanization and robotization. But perhaps the most basic is telling the computers what and how to think. A particularly fascinating part of this is a bit like the old thought experiment in philosophy: suppose you have access to a railroad switch that directs a speeding, out-of-control train. Do you throw the switch so that the train kills just one person on Track A, or let the train hurtle forward, killing five on Track B? If you divert the train to Track A, your behavior results in the killing of fewer people — lives are saved — but your pulling the lever directly resulted in the death of someone. How's that feel?
"But I saved four lives!" That's true, and you showed your utilitarian stripes in doing so. Utilitarianism arose in Britain in the eighteenth and nineteenth centuries as a model for ethical behavior. Vastly oversimplified, it posits that one (you, me, the town manager, the federal government) should behave in such a way that the most people benefit. There can be some very sad beings left behind with utilitarianism, but as long as a majority is happy, we have done the right thing. Most governments and institutions we know and understand claim they try to operate this way.
OK, back to those cars. Generically, driverless cars are called autonomous vehicles and, in spite of the Tesla crash, it's still full steam ahead (Ford says five years; Uber says some elements in weeks). These vehicles are widely anticipated to reduce road carnage, partly because they don't speed, get distracted, or do drugs. But what is that "brain" in there to do in a crisis? A recent study published in Science asked a bunch of us what we wanted the computer to make the car do when things got dicey. Virtually all respondents were good utilitarians who wanted the AVs to sacrifice their passengers for the common good. But then came the hitch: the common good is fine in the abstract, but most respondents wanted their personal AV to protect its precious passengers at all costs. So that vehicle should plow into a group of pedestrians rather than avoid them, especially if avoidance would take the car off the road, hurting or killing the special people inside.
Further, most people did not want utilitarian algorithms forced on them by car companies or the government. If you want to avoid a batch of trick-or-treaters and kill yourself in your car, fine. But keep your regulating hands off of my vehicle. Those kids shouldn't be eating all that sugar anyway.
To a motorcyclist, all of this seems a bit abstract. My car drives itself? Fine, the poor dear bores me anyway. My motorcycle? Absolutely not. Making it move the way I want to move is the whole point. Any twisting road illustrates the difference: In a car that road slows you down and is a pain; on a motorcycle, it's heaven.
But there might be another big upside to AVs. Motorcyclists manage to crash a lot, and about half the time we get help in doing so from others on the road. But what if drunk, distracted, and just plain lousy drivers are mostly in safer AVs? That camera/computer is more likely to notice that I'm in the other lane than is the wasted person wobbling around in a conventional vehicle. The camera never blinks, the CPU reacts in milliseconds, the car is moving at the posted speed, and my lane is secure. What's not to like?
Bob Engel lives in Marlboro with his motorcycles, wife, and cat. The opinions expressed by columnists do not necessarily reflect the views of the Brattleboro Reformer.