As autonomous cars come closer to being a regular sight on Colorado highways, consumers and retailers alike seem to be marveling at technology that allows a car to make split-second decisions to avoid crashes, or to minimize their impact, in order to keep humans safe. What receives far less discussion, however, are the ethical and practical issues surrounding crash optimization.
There is an inherent paradox in that term. How can you "optimize" a crash that may still leave people injured? One could argue that a severe injury is better than a lost life, so a crash steered toward property damage or lesser harm is, in that narrow sense, the better outcome.
Crash optimization forces difficult ethical decisions about how vehicles are programmed to minimize harm. For example, should a car be programmed to swerve into a larger vehicle, such as an SUV, in order to avoid hitting a motorcyclist who is not wearing a helmet? Hitting the SUV may be the better scenario, since a larger vehicle can absorb the impact while protecting its occupants. But does that make driving an SUV inherently more dangerous, because it could be targeted by autonomous vehicles programmed to optimize a crash?
Additionally, are programmers tacitly rewarding those who prefer smaller cars for their fuel efficiency and lower operating costs? Is that fair to those who choose SUVs for their versatility? These are ethical questions that could very well turn into legal ones.