Which AI Safety research agendas are the most promising?
post by Chris_Leong · 2022-07-13T07:54:30.427Z · LW · GW · 1 comment
This is a question post.
Answers
answer by Thomas Kwa · 2022-09-12T19:46:10.595Z · LW(p) · GW(p)
Everyone disagrees, but Thomas Larsen has now answered this here [LW · GW] in a way I'm satisfied with.
answer by Alexander Gietelink Oldenziel · 2022-07-13T12:28:21.731Z · LW(p) · GW(p)
The ones with actual math.
↑ comment by celeste · 2022-07-13T13:53:01.732Z · LW(p) · GW(p)
why?
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2022-10-21T12:25:49.860Z · LW(p) · GW(p)
Although good alignment research has been done that does not involve math [e.g. here and here [LW(p) · GW(p)]], good math* remains the best high-level proxy for nontrivial, substantive, deep ideas that will actually add up to durable knowledge.
*What distinguishes good math from bad math? That's a tricky question, one that requires a strong inside view.
1 comment
Comments sorted by top scores.
comment by plex (ete) · 2022-07-13T13:35:33.402Z · LW(p) · GW(p)
Stampy has a list of some of them (and welcomes additions or corrections on the wiki entry!).