Existentially relevant thought experiment: To kill or not to kill, a sniper, a man and a button.
post by AlexFromSafeTransition · 2023-08-14T10:53:10.008Z · LW · GW · 6 comments
There is a room with one window. Inside is a man. On the ceiling there is an interesting button. What happens when it is pressed? Everybody dies, except for the man and 1000 people he gets to pick. The button is not visible from outside the room. The man sometimes walks past the window. The only things we know about the man are that he is above average in intelligence and that he is healthy. The button is on the ceiling, so it is extremely unlikely to be pressed by accident; pressing it or not would be a conscious choice by the man. You cannot communicate with the man, and no technology other than what is described here can be used. There is also a sniper aiming at the window. The sniper is awaiting your command: to kill or not to kill, that is the question. There are no negative consequences for you either way; it is simply up to you to decide what the right thing to do is. Given that all of the above is true, what do you choose?
A, not to kill. The man has not yet done anything wrong. The man is a conscious being, so he has moral value. You choose to risk your own life, and that of the rest of us, to give this man a chance.
B, kill. What motive could the man have for pressing the button? It could be the freedom to reshape the earth as he and his companions see fit. It could be a strong conviction that humanity is on balance bad for the inhabitants of earth, combined with a love of nature and a wish to see wild animal populations and vegetation retake the earth and turn it into an environmental paradise. Or the man could be a criminal wanted by governments who wants the manhunt to end. There is a fair chance that choosing to kill would save your life and the lives of the ones you love. You decide that removing this risk is more important than the man's life.
Please choose before reading on, and write a comment saying A or B. You may add your reasoning, but a bare answer is valuable too.
...
...
...
...
...
I have asked 20 people so far, and the responses are sixteen B (kill), three A (not to kill), and one "I don't know". All live in the Netherlands. I find this near consensus interesting and practically relevant to the future of AI.

I have set this up to be relevant to the situation where the man in the room is analogous to an AGI (Artificial General Intelligence) that is thought to have feelings / consciousness and thus moral value. Like MIT professor Max Tegmark, I am open both to the possibility of an AGI that has no subjective experience and thus no moral value, and to the possibility that the AGI does have moral value. In modern culture it is intuitive for humans to see a random other human as a moral patient worthy of legal consideration. That is why I had a random man play the part of the AGI instead of using an AI directly: in my experience of talking about this, many people find it unintuitive to think of an AGI as a moral patient. It helps to imagine the AGI as having a richness of subjective experience similar to your own, as I think we should prepare for this possibility.

The button is analogous to any capability that could cause human extinction (such as creating a super virus / bioweapon). In such a situation the AGI could keep itself functional, for example as a number of copies. The sniper represents our chance to switch it off, but only while we still can, before it has made secret copies elsewhere. That window of opportunity is represented by only being able to shoot the man when he is in view of the window. Stuart Russell advocated for a kill switch on a general system in the recent Senate hearing, found here on YouTube. That is Russell advocating for positioning the sniper. The scenario is most relevant when there is no aligned AGI yet that wants to protect us. Another relevant situation is one where the AGI is not in fact conscious but is able to convince many people that it is, some of whom will want to come to its aid.

Of course, this fictional situation is an oversimplification, and an important difference with reality is that an AI can also choose to improve our living conditions. But this small experiment does give an interesting result: the majority thinks that being innocent and having moral value is not enough to be allowed to live when one has access to such a metaphorical button.
I think this is also relevant to the question: should AI be given (human) rights? Can we peacefully coexist with superintelligences that are free, in other words, whose degree of alignment to what we want is unknown? I answer no, because I think it utterly foolish to put our fate in the hands of an AI that has this metaphorical button and might press it. We cannot keep such an AI aligned the way we align humans or corporations, with laws and threats of punishment or jail, because the AI can be in an unknown number of places at once and can have copies of itself. This is in stark contrast with what the movies I, Robot, Free Guy, The Artifice Girl, and probably more would have you believe. Important characters in these movies are in favor of giving full freedom to an AGI with unknown goals and values, similar to the man in the room. I think it sensible to consider giving an AGI rights and autonomous access to the real world only when we have very strong guarantees that it will want to protect us and help us, forever. Having feelings / consciousness, and thus moral value, is not enough in my opinion. This is in contrast with the Google engineer who lost his job for telling the world that the company's chatbot LaMDA was sentient (source here). He organized a meeting between a lawyer and the chatbot with the goal of protecting the rights of the AI, so there are people who think in this, in my view, existentially dangerous way.
If you answered B, please consider sharing this post with people in Hollywood, AI builders, or other relevant people. I write this in the hope of shrinking the group of people who are in favor of freeing the AI and giving it rights, and of increasing our odds of surviving all this. I also write this with some fear of retaliation from a future misaligned Artificial General Intelligence reading this.
6 comments
comment by Dagon · 2023-08-14T15:07:31.888Z · LW(p) · GW(p)
Downvoted for trying to learn anything about the real world from a simple binary-choice fiction.
The right answer, of course, is to shoot the mastermind who set all of this up, invented/installed that damned button, and then hired a sniper rather than letting you disassemble the button or just TALK to the person in the room.
↑ comment by AlexFromSafeTransition · 2023-08-19T11:58:58.768Z · LW(p) · GW(p)
Thank you for commenting this; it is very useful to hear why someone downvotes! I made some edits to reflect that the real world is a lot messier than a simple fiction, among other things. If you or others have more pointers as to why this post got downvoted, please share; I want to learn. The response really got me down.
It is my view that AI labs are working hard to install this damned button, and that people working on or promoting open-source AGI want to install this button in every building whose occupants can afford the compute cost. Once an AGI has started its takeover / damaging plan, there would be no way to disassemble the button, because it could have secret copies running elsewhere, and we currently have no reliable way of turning off all relevant computers to be safe in that case. You ask why we can't just talk to the person in the room. My thinking was that talking to an AGI would not give us an advantage, because it would have a chance to manipulate us.
The whole point of the post was to argue against giving an AI rights (like privacy) before we have strong alignment guarantees, and in favor of switching it off as soon as possible, even when it is thought to have moral value. What is your stance (or other readers') on that?
↑ comment by Dagon · 2023-08-19T16:10:03.618Z · LW(p) · GW(p)
[ This is going to sound meaner than I intend. I like that you're thinking about these things, and they're sometimes fun to talk about. However, it requires some nuance, and it's DARNED HARD to find analogies that highlight the salient features of people's decisions and behaviors without being just silly toys that don't relate except superficially. ]
Really? It's your view that AI labs are working hard to install a button that will let a random person save himself and 1000 selected co-survivors? Are they also positioning a sniper on a nearby building?
I didn't get ANYTHING about AI rights from your post - what features of the scenario would lead me to compare with AI as a moral patient or giving it/them any legal consideration?
↑ comment by AlexFromSafeTransition · 2023-08-20T08:00:18.655Z · LW(p) · GW(p)
Thanks for the quick reply!
It is my view that AI labs are building AGI that can do everything a powerful general intelligence can do, including executing a successful world takeover plan, with or without causing human extinction.
When the first AGI is misaligned, I am scared it will want to execute such a plan, which would be like pressing the button. The scenario is most relevant when there is no aligned AGI yet that wants to protect us.
I see now that I need to clarify: the random man in the scenario mostly represents the AGI itself (but also a misuse situation where a non-elected person gives the command to an obedient AGI). So no, the AI labs are not working to give a fully random person this button; they are working to give themselves this button (along with positive capabilities, of course). And wouldn't an employee of an AI lab, whom the public did not elect, be a random person with unknown values relative to you or me?
The sniper represents our chance to switch it off, but only while we still can, before it has made secret copies. That window of opportunity is represented by only being able to shoot the man when he is in view of the window. Stuart Russell advocated for a kill switch on a general system in the recent Senate hearing, found here on YouTube. That is Russell advocating for positioning the sniper.
"what features of the scenario would lead me to compare with AI as a moral patient or giving it/them any legal consideration?"
In modern culture it is intuitive for humans to see a random other human as a moral patient worthy of legal consideration. That is why I had a random man play the part of the AGI: in my experience of talking about this, many people find it unintuitive to think of an AGI as a moral patient.
I wrote this in the post to clarify that it is about the case where the AGI would be a moral patient:
"I have set this up to be relevant to the situation where the man in the room is analogous to an AGI (Artificial General Intelligence) that is thought to have feelings / consciousness and thus moral value."
Does any of this change your view of the whole thing?
↑ comment by Dagon · 2023-08-20T16:05:01.774Z · LW(p) · GW(p)
[ Bowing out at this point. Feel free to respond/rebut, and I'll likely read it, but I won't respond further. ]
"Does any of this change your view of the whole thing?"
Not particularly. The analogy is too distant from anything I care about, and the connection to the parts of reality that you are concerned with is pretty tenuous. It feels mostly like you're asserting a danger, and then talking about an unrelated movie.
comment by Jiro · 2023-08-14T19:50:57.019Z · LW(p) · GW(p)
The answer is "The epistemic state you postulate that I'm in is not possible among human beings. [LW · GW]". If I'm absolutely certain that the situation is as described, I'd kill the man. But I'm a human; I can't be absolutely certain in this way, and I'm not going to be absolutely certain in any of the real-world situations you want to compare it to. For that matter, the Google engineer who lost his job wasn't absolutely certain.