Comments sorted by top scores.
comment by Mati_Roy (MathieuRoy) · 2022-01-27T02:26:47.588Z
If not, do you think there’s value in faithful representation of this topic within the art realm?
I think so
Are there any public misconceptions of AI that you think are dangerous? Or, to a lesser extreme, ones that hamper AI funding?
https://futureoflife.org/background/aimyths/
Have you ever seen the subject of AI faithfully represented in media art: ex films, books, graphic novels, etc?
See the LessWrong post "What are fiction stories related to AI alignment?". Not all of the stories listed there qualify, but some do. I think two are especially good: The Intelligence Explosion and NeXt.
Is there a go-to resource that you recommend for someone outside of the field to learn about contemporary issues around AI Safety?
FLI has articles and a podcast: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
80,000 Hours has some articles and episodes on this: https://80000hours.org/podcast/
The AI Revolution: The Road to Superintelligence by WaitButWhy
Superintelligence by Nick Bostrom
Human Compatible by Stuart Russell
For more, see my list of lists here: https://www.facebook.com/groups/aisafetyopen/posts/263224891047211/
Do you think the AI research community or LW is caricatured in any way that is harmful to AI research?
I don't know whether the overall sign is positive or negative, but I'd guess at least some of the caricatures are harmful.
Are there any specific issues around AI that concern you the most?
The alignment problem (or are you asking what concerns us the most within that scope?)
If someone said they didn’t believe AI can have any positive impact on humanity, what’s your go-to positive impact/piece of research to share?
I don't have one. Depends where they're coming from with that belief.
How did your interest in AI begin?
I don't know if I became interested in LessWrong or machine learning first -- one of those.
Do you think there is enough general awareness around AI research and safety? If not, what do you think would help ferment AI safety in public and political discourse?
That's assuming most people here want this -- I don't think that's the case.
Or, to a lesser extreme, ones that hamper AI funding?
I don't know if by "hamper" you mean reduce, but it seems to me like there are conflicting views/models here about whether that would be good or bad.
What do you personally think the likelihood of AGI is?
That is, that humans eventually create AGI, right?
↑ comment by Curious_Cruiser (Maelle_Andre) · 2022-01-27T05:33:34.822Z
The alignment problem (or are you asking what concerns us the most within that scope?)
Yes, what issue concerns you most within the scope of AI alignment? (Edited the original question for clarity, thanks.)
That's assuming most people here want this -- I don't think that's the case.
Why do you think most people here would not want greater public awareness around the topic of AI safety? (Removed the assumption from the original question.)
That is, that humans eventually create AGI, right?
Indeed! (Edited the original question to specify this.)
comment by mtaran · 2022-01-27T06:01:38.593Z
This seems like something that would be better done as a Google Form. That would make it easier for people to match questions with answers (especially on mobile), and it can be less stressful to answer questions when the answers will be kept private.
↑ comment by Curious_Cruiser (Maelle_Andre) · 2022-01-27T14:04:31.857Z
Those are great points! A Google Form has been added.