Survey - Psychological Impact of Long-Term AI Engagement

post by Manuela García (manuela-garcia) · 2024-09-17T17:31:38.383Z · 1 comment


Hello everyone,

I’m conducting a survey to better understand the psychological and emotional effects of long-term engagement with AI technologies, particularly within the AI safety community. You’re invited to take part in this anonymous questionnaire, which explores how engagement with AI may influence emotions, stress levels, and mental health.

Who should participate?

• Anyone involved in AI development, research, or policy

• Members of the AI safety community, including advocates and researchers

• Individuals concerned about the societal and existential implications of AI

If you’re interested, the report and analysis of this questionnaire will be shared with you once it’s released.

Link to the Form

Your contribution is deeply valued: it will help build a clearer understanding of the psychological challenges faced by people in the AI community and, in turn, help address the resulting stress and anxiety and develop the resilience needed to navigate these challenges with assertiveness and empathy.

Finally, I’m happy to discuss any emotional challenges related to AI in more detail, so feel free to reach out at manugarciaat@gmail.com.

Thank you in advance for your time.

1 comment


comment by Seth Herd · 2024-09-17T19:44:38.784Z

You're not going to get useful results if you lump together people working on AGI and people working on AGI safety. Likewise, mixing AI work and safety with AGI work and safety will make the results just as hard to interpret.

They each have very different implications.

Those working on AI and AGI think they're building the most useful technology the world has ever seen.

Those working on AI safety think we have a new technology with dangers and opportunities, like every previous one. They're probably pleased to be doing important work.

Those working on AGI safety think there's a very good chance the world ends soon because of unpreventable human idiocy. That can really get them down.

I can't tell which category you're interested in, or whether you can distinguish these very different viewpoints in your data.