Comment by Martí Mas (marti-mas) on Using GPT-Eliezer against ChatGPT Jailbreaking · 2022-12-13T12:33:24.428Z
The "linux terminal" prompt should have been a yes. Obviously getting access to the model's "imagined terminal" has nothing to do with actually gaining access to the backend's terminal. The model is just pretending. Doesnt harm anybody in anyways, it's just a thought experiment without any dangers