Request for collaborators - Survey on AI risk priorities
post by whpearson · 2017-02-06T20:14:04.161Z · 12 comments
After some conversations here, I thought I would try to find out what the community of people who care about AI risk thinks the research priorities are.
To represent people's opinions fairly, I wanted to get input from people who care about the future of intelligence. I also figure that other people will have more experience designing and analyzing surveys than me, so getting their help or advice would be a good plan.
Planning document
Here is the planning document; give me a shout if you want edit rights. I'll be filling in the areas for research over the next week or so.
I'll set up a Trello board if I get a few people interested.
12 comments
comment by whpearson · 2017-02-07T19:34:22.568Z
I'm currently lacking people to put the more mainstream points across.
I'd like to know why people aren't interested in helping me.
[pollid:1197]
comment by J Thomas Moros (J_Thomas_Moros) · 2017-02-09T23:24:50.257Z
None of your survey choices seemed to fit me. I am concerned about and somewhat interested in AI risks. However, I currently would like to see more effort put into cryonics and reversing aging.
To be clear, I don't want to reduce the effort/resources currently put into AI risks. I just think they are overweighted relative to cryonics and age reversal, and I would like to see any additional resources go to those until a better balance is achieved.
comment by Lumifer · 2017-02-07T20:11:46.129Z
Do you have a short write-up somewhere about what you want to do and why other people should help you?
comment by whpearson · 2017-02-07T20:49:05.100Z
I want to gather information about what people care about in AI Risks. Other people should help me if they also want to gather information about what people care about in AI Risks.
comment by Lumifer · 2017-02-07T21:13:59.468Z
By "people" do you mean "LW people"? If you're interested in what the world cares about, running polls on LW will tell you nothing useful.
comment by whpearson · 2017-02-07T21:20:10.333Z
Oh, you've not read the document I linked to in the post. I planned to try to get it posted on LW, the EA Forum, and subreddits associated with AI and AI risk.
comment by Lumifer · 2017-02-07T22:02:37.959Z
I looked at that document. I still don't see why you think you'll be able to extract useful information out of a bunch of unqualified opinions (and a degree in psychology qualifies someone for AI risk discussions? really?). And why is the EA Forum relevant to this?
comment by whpearson · 2017-02-07T22:48:26.421Z
I'm bound to get useful information as I am only interested in what people think. If you are interested in existential risk reduction, why wouldn't you be interested in what other people think? Surviving is a team sport.
Someone here recommended the EA Forum for existential risk discussion.
comment by Lumifer · 2017-02-08T17:13:23.692Z
"If you are interested in existential risk reduction, why wouldn't you be interested in what other people think?"
For the same reasons quantum physicists don't ask the public which experiments they should run next.
"Surviving is a team sport."
Errrr... That really depends X-)
comment by whpearson · 2017-02-08T18:55:12.322Z
"For the same reasons quantum physicists don't ask the public which experiments they should run next."
But a quantum research institute funded by donations might ask the public which of the many experiments it could run would attract funding. It could then hire more researchers, answer more questions, build goodwill, etc.