When should your self-driving car kill you?
At Digital Trends I take another look at a question that is now gaining some currency: How should autonomous cars be programmed when every available choice is bad and someone has to die in order to save as many lives as possible?
The question gets knottier the more you look at it. In two regards especially:
First, it makes sense to look at this through a utilitarian lens, but when you do, you have to be open to the possibility that it’s morally better to kill a 64-year-old who’s at the end of his productive career (hey, don’t look at me that way!) than a young parent, or a promising scientist or musician. We already consider age and health when doing triage for organ transplants. Should our cars do that for us when deciding who dies?
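Just to make the discomfort concrete, here’s a purely hypothetical sketch, in Python, of what a triage-style utilitarian policy could look like if someone actually tried to code one. Every name and number in it (expected_life_years, the cutoff at 80, the health discount) is invented for illustration; it has nothing to do with any real manufacturer’s software.

```python
# Hypothetical sketch only: a triage-style "utilitarian" policy.
# All names, weights, and cutoffs below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Person:
    age: int
    healthy: bool

def expected_life_years(p: Person) -> float:
    """Crude actuarial guess: remaining years, discounted if in poor health."""
    remaining = max(0, 80 - p.age)
    return remaining if p.healthy else remaining * 0.7

def harm_score(people: list[Person]) -> float:
    """Total 'cost' of letting this group die, under the invented metric."""
    return sum(expected_life_years(p) for p in people)

def choose_victims(option_a: list[Person], option_b: list[Person]) -> str:
    """Sacrifice whichever group's death 'costs' less by that metric."""
    return "A" if harm_score(option_a) < harm_score(option_b) else "B"

# The 64-year-old vs. the young parent from above:
print(choose_victims([Person(64, True)], [Person(34, True)]))  # -> "A"
```

The point isn’t that this little function is right. The point is that someone would have to pick those numbers, and ship them in a car.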
Second, the real question is: who gets to decide this? The developers at Google who are programming the cars? And suppose the Google software disagrees with the prioritization in Tesla’s self-driving cars? Who wins? Or do we want a cross-manufacturer agreement about whose life to sacrifice if someone has to die in an accident? A global agreement about the value of lives?
Yeah, sure. What could go wrong with that? /s