Let’s say, for simplicity’s sake, that me jumping in front of the trolley will stop it. So I do that, and boom, six lives saved. But if the trolley problem is a metaphor for any real-world problem, there are millions of trolleys hurtling down millions of tracks, and whether you jump in front of one of those trolleys yourself or not, millions of people are still going to die.
This really got me thinking about something. You mentioned that sacrificing yourself will stop the trolley with one casualty and six people saved. The most common decision in the trolley problem is to sacrifice the one person on the side track to save the other five. Either of those two scenarios (sacrificing yourself or the person on the track) results in exactly one casualty, but the number of people saved is different: sacrificing yourself (or any outside person, really) saves more people than pulling the lever. I feel confident in saying that most people would prefer one casualty and six saved over one casualty and five saved (simple math, right?).
This would imply that the "solution" to the trolley problem (from a very simple, utilitarian standpoint) is not to pull the lever, nor to just stand there, but to sacrifice a completely innocent bystander...
This also ties into the question of how an AI would deal with the trolley problem. If, for example, an AI is given the instruction to "save the most people", it's not unreasonable to assume it could make a very different decision than an "ethical" human being would, or, even scarier, a decision we didn't even consider.
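Just to make that concrete, here is a toy Python sketch of what a purely count-based objective would do with the options discussed above. The option names and numbers are my own illustration, not any real system's decision procedure:

```python
# Toy illustration only: a naive objective that counts lives saved and nothing else.
# The options and figures are assumptions taken from the discussion above.
options = {
    "do nothing":         {"saved": 0, "casualties": 5},
    "pull the lever":     {"saved": 5, "casualties": 1},
    "push a bystander":   {"saved": 6, "casualties": 1},
    "sacrifice yourself": {"saved": 6, "casualties": 1},
}

# "Save the most people": pick whichever option maximizes the saved count,
# ignoring every other moral consideration.
best = max(options, key=lambda name: options[name]["saved"])
print(best, options[best])
# Note the tie-break is arbitrary: this objective sees "push a bystander" and
# "sacrifice yourself" as equally good, which is exactly the worry.
```

Under that objective the lever option never even comes out on top, which is the point: the "scary" outcome isn't a bug in the optimization, it's the instruction doing exactly what it says.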