Shall We Throw A Huge Party Before AGI Bids Us Adieu?
post by GeorgeMan (jozinko-kovacik) · 2023-07-02T17:56:48.372Z · LW · GW · 6 comments
I don't think there is much more to this post than what the title says, but I'll add some details anyway.
Essentially, it has become increasingly obvious that despite our best efforts, progress in AI alignment and other safety work has been, well... minimal. Yet the gloomy predictions keep being validated, and even previously oblivious public figures are starting to take notice of the issue.
Hence, to prevent future AI-written history books from recording how "all they did on lesswrong was write increasingly lengthy shitposts about how I will kill them and then I killed them", I suggest, in all seriousness, that we for once use this site for something actually useful: planning and throwing a huge party each year while we wait for our destiny. Let's party while we can!
Besides a wild rave with Eliezer, Jeffrey, Elon and Joshua, we could extend the invitation to sceptics like Yann and show the outside world that we are not just a bunch of crazy lunatics, but an actually sensible, open and welcoming community.
To foster a sense of community, we could also dress in paperclip costumes, sending a strong signal that we are not scared, but fully reconciled with our fate.
And who knows? It might even turn out to be a successful safety strategy: if we show the AGI that we are not just a bunch of useless, boring atoms, but can also throw a great party and enjoy life, it might decide to keep us around for fun.
Please comment below if interested. This post is serious - I would actually quite enjoy a party with my fellow lesswrong comrades.
6 comments
comment by GeorgeMan (jozinko-kovacik) · 2023-07-03T11:17:49.046Z · LW(p) · GW(p)
anyone?
comment by GeorgeMan (jozinko-kovacik) · 2023-07-02T22:39:01.012Z · LW(p) · GW(p)
ok ok, wearing the paperclip costumes can be optional (though highly encouraged)... any other reason why you don't like the idea? or do you just want to live up to being just a bunch of boring atoms?
comment by starship006 (cody-rushing) · 2023-07-04T20:31:16.347Z · LW(p) · GW(p)
Quick feedback since nobody else has commented - I'm all for the AI safety community appearing "not just a bunch of crazy lunatics, but an actually sensible, open and welcoming community."
But the spirit behind this post feels like it is just throwing in the towel, and I very much disapprove of that. I think this is why I and others downvoted it, too.
↑ comment by GeorgeMan (jozinko-kovacik) · 2023-07-04T20:55:31.831Z · LW(p) · GW(p)
well, I am not arguing for ceasing AGI safety efforts, or claiming that they are unlikely to succeed. I am just saying that if there is a high enough chance they will be unsuccessful... we might as well make some relatively cheap and simple effort to make that case somewhat more pleasant (although, fair enough, the post might be too direct).
Imagine that you had an illness with a 30% chance of death in the next 7 years (I hope you don't). It would likely affect your behaviour: you would want to spend your time differently and maybe create some memorable experiences, even though your chance of survival would still be fairly high.
Despite this, it seems surprising that when it comes to AGI-related risks, such tendencies to live life differently are much weaker, even though many here assign similar probabilities. Is that rational?
comment by GeorgeMan (jozinko-kovacik) · 2023-07-04T20:16:57.665Z · LW(p) · GW(p)
let's not wait till the end of the summer
comment by GeorgeMan (jozinko-kovacik) · 2023-07-03T18:33:10.179Z · LW(p) · GW(p)
really, no one interested? initially this was at -15 but it has now gone up to -1. Let's keep going! this is an out-of-the-box creative proposal - I believe at least that should be appreciated. Imagine how much visibility safety efforts could get if Elon decided to join in a paperclip costume.