LessWrong Poll on AGI
post by Niclas Kupper (niclas-kupper) · 2022-11-10T13:13:57.387Z · LW · GW · 6 comments
This is just a short post to promote this Pol.is poll, which loosely aggregates the LessWrong community's opinions about AGI and AGI x-risk. The poll is less structured than a questionnaire: anyone can add statements that others can agree or disagree with, and Pol.is then automatically generates a nice visualization of similar stances.
I recently used Pol.is to gather complaints from PhD students in my department, and it exceeded my expectations, so I thought: why not try it here? If this is a success I might write a follow-up post with any interesting findings. These things can be slow to get started, so be sure to come back to vote on new statements that others have made.
Here is the link: https://pol.is/7yvdmar3nj
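For anyone curious what "visualization of similar stances" means mechanically, here is a minimal sketch of how one might cluster agree/disagree votes into opinion groups, assuming a simple PCA-plus-k-means pipeline with numpy and scikit-learn. This is only an illustration of the general idea, not Pol.is's actual algorithm, and the data and group count in it are made up.

```python
# Hypothetical sketch of how a Pol.is-style poll might group voters with
# similar stances. Not Pol.is's actual algorithm; it just illustrates the
# general idea: embed the agree/disagree vote matrix in 2D, then cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Rows = participants, columns = statements; 1 = agree, -1 = disagree, 0 = pass.
# Toy random data standing in for real poll responses.
rng = np.random.default_rng(seed=0)
votes = rng.choice([-1, 0, 1], size=(40, 10))

# Project voters into two dimensions so that people who voted similarly
# end up close together.
coords = PCA(n_components=2).fit_transform(votes)

# Partition voters into opinion groups. Pol.is chooses the number of groups
# dynamically; it is fixed at two here for simplicity.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

for group in np.unique(labels):
    print(f"Group {'AB'[group]}: {np.sum(labels == group)} participants")
```

The 2D projection is what makes the "map" of similar stances possible: voters who tend to agree on the same statements land near each other, and the clustering step then labels the visible groups (like Group A and Group B discussed in the comments below).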
6 comments
Comments sorted by top scores.
comment by Niclas Kupper (niclas-kupper) · 2022-11-11T14:56:18.231Z · LW(p) · GW(p)
Some early results:
- Most people disagreed with the following two statements: "I think the probability of AGI before 2030 is above 50%" and "AI-to-human safety is fundamentally the same kind of problem as any interspecies animal-to-animal cooperation problem".
- Most people agreed with the statements: "Brain-computer interfaces (e.g. neuralink tech) that is strong and safe enough to be disruptive will not be developed before AGI." and "Corporate or academic labs are likely to build AGI before any state actor does."
- There seem to be two large groups whose main disagreement is about the statement "I think the probability of AGI before 2040 is above 50%". We will call the people agreeing Group A and the people disagreeing Group B.
- Group A agreed with "By 2035 it will be possible to train and run an AGI on fewer compute resources than required by PaLM today (if society survives that long)." and "I think establishing a norm of safety testing new SotA models in secure sandboxes is a near-term priority."
- Group B agreed with "I think the chance of an AI takeover is less than 15%".
- The most uncertainty was around the following two statements: "The 'Long Reflection' seems like a good idea, and I hope humanity manages to achieve that state." and "TurnTrout's 'training story' about a diamond-maximizer seemed fatally flawed, prone to catastrophic failure."
comment by niknoble · 2022-11-11T05:21:49.549Z · LW(p) · GW(p)
That's a cool site. Group A for life!
(Edit: They switched A and B since I wrote this 😅)
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-11-11T21:37:50.704Z · LW(p) · GW(p)
The groups evolve every time someone answers a question. For a little while there were five separate groups, then it merged back to two! Fascinating to see. I hope more people join in to ask and answer questions.
comment by the gears to ascension (lahwran) · 2022-11-10T21:32:12.006Z · LW(p) · GW(p)
it'll be interesting to see what this looks like in a couple of weeks, if it keeps getting responses. I added two questions.
comment by the gears to ascension (lahwran) · 2022-12-05T08:13:08.944Z · LW(p) · GW(p)
any chance you could document more of the results from this poll now that it's had some time to settle?
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-11-11T04:31:58.262Z · LW(p) · GW(p)
I've added a few questions, and I'll be excited to see what others add.