How can a layman contribute to AI Alignment efforts, given shorter timeline/doomier scenarios?

post by AprilSR · 2022-04-02T04:34:47.154Z · LW · GW · No comments

This is a question post.


Versions of this question have probably been asked before, but MIRI at least is getting more pessimistic and I’ve seen Eliezer express multiple times that he doesn’t know what actually useful advice to give people who aren’t in the field yet.

I don’t want the world to end, but I am only a decently intelligent person of mediocre competence. I could try to read and grok alignment research until I can have productive thoughts, but I do not anticipate that helping (though I will probably start doing more reading anyway; I’m tentatively planning to try reading Jaynes). Should I go into some kind of advocacy? I don’t know how I would do productive work there either, really.

I would guess there are others in a roughly similar situation to mine. Does anyone have ideas?

Answers

answer by lc · 2022-04-02T08:17:28.104Z · LW(p) · GW(p)

The best answer is to impede existing AI research and development efforts, especially the efforts of teams like DeepMind. They are in the business of shrinking the lifespan of the world. If we really are in the endgame, then I genuinely think that's what people who believe they can't do alignment research should be focusing on. Even in the event we fail, it buys everybody else on the planet time.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-04-03T17:01:06.052Z · LW(p) · GW(p)

Well, I don't think focusing on the most famous, slightly-ahead organizations is actually all that useful; I'd expect the next-best-in-line would just step forward. Impeding data centers around the world would likely be more generally helpful. But realistically, for an individual, helping the AI safety community in a non-direct-work way is probably your best bet at contributing.

Replies from: lc
comment by lc · 2022-04-03T17:09:55.756Z · LW(p) · GW(p)

DeepMind is helping every other organization out by publishing research. Hampering DeepMind is a much more direct impediment than I think you're expecting.

answer by Chris_Leong · 2022-04-02T08:09:50.592Z · LW(p) · GW(p)

Have you considered local AI safety meetup building? The minimal version of this is just to organise a dinner every month and advertise it on Facebook and MeetUp.com. There's no need for you to be an expert on AI safety.

You might consider volunteering for Stampy as well - https://stampy.ai/

Additional thoughts:

Have you considered booking a call with AI Safety Support or applying to speak to 80,000 Hours?

You can also express interest in the next round of the AGI Safety Fundamentals course.

answer by rank-biserial · 2022-04-02T05:50:49.043Z · LW(p) · GW(p)

Eliezer replied [LW(p) · GW(p)] to a comment of mine recently, coming out in favor of going down the human augmentation path. I also think genetically engineered von Neumann babies are too far off to be realistic.

If we can really crack human motivation, I expect possible productivity gains of maybe one or two orders of magnitude.

Picture a pair of researchers, one of whom controls an electrode wired to the pleasure centers of the other. Imagine they have free access to methamphetamine and LSD.

You don't need to be a genius to make this happen.
