What are the major underlying divisions in AI safety?
post by Chris_Leong · 2022-12-06T03:28:02.694Z · 1 comment
This is a question post.
I’ve recently been thinking about how different researchers have wildly different conceptions of what needs to be done to solve alignment and which projects are net-positive.
I started making a list of core divisions:
- Empiricism vs. conceptual research: Which is more valuable, or do we need both?
- Take-off speeds: How fast will take-off be?
- Ultimate capability level: What level of intelligence will AIs reach? How much of an advantage does this provide them?
- Offense-defense balance: Does offense or defense have the advantage?
- Capabilities externalities: How bad are these?
Are there any obvious ones that I’ve missed?
Answers
answer by weverka · 2022-12-07T14:36:06.479Z
Is it likely to do more good than harm?
Comments
comment by Charlie Steiner · 2022-12-08T01:31:54.454Z
Gosh, someone made a gigantic flowchart of AI Alignment and posted it here a few months back. But I can't remember who it was at the moment.
Fortunately, I am a good googler: https://www.alignmentforum.org/s/aERZoriyHfCqvWkzg
If you're interested in categorizing all the things, you might imagine generating dichotomies by extremizing nodes or relationships in such a flowchart.