Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations)
post by Chi Nguyen · 2024-03-20T23:17:52.964Z · LW · GW · 1 comment
comment by niplav · 2024-03-21T08:20:12.613Z · LW(p) · GW(p)
Prevent sign flip and other near misses
The problem with one proposed solution (adding a dummy utility function that highly disvalues some specific, non-suffering-related thing, so that a sign flip becomes detectable; sketched below) is that the resulting utility function is not reflectively stable.
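A minimal sketch of that dummy-utility idea, as I understand it (my own illustration, not from the post; the outcome feature `canary` and the weights are hypothetical): the combined utility strongly disvalues an otherwise-neutral feature, so if the overall sign flips, the optimizer's favourite outcomes suddenly contain that feature, which is easy to check for before deployment.

```python
def main_utility(outcome: dict) -> float:
    # Stand-in for the learned/specified utility over outcomes.
    return outcome.get("welfare", 0.0)

def canary_utility(outcome: dict) -> float:
    # Dummy term: strongly disvalues a specific non-suffering-related feature.
    return -1e6 if outcome.get("canary", False) else 0.0

def combined_utility(outcome: dict, sign: float = 1.0) -> float:
    # `sign = -1.0` simulates the sign-flip failure mode.
    return sign * (main_utility(outcome) + canary_utility(outcome))

def sign_flip_suspected(best_outcome: dict) -> bool:
    # If the optimizer's preferred outcome contains the canary feature,
    # the overall sign has probably flipped.
    return bool(best_outcome.get("canary", False))

if __name__ == "__main__":
    outcomes = [
        {"welfare": 10.0, "canary": False},
        {"welfare": -5.0, "canary": True},
    ]
    for sign in (1.0, -1.0):
        best = max(outcomes, key=lambda o: combined_utility(o, sign))
        print(f"sign={sign:+.0f}", best, "flip suspected:", sign_flip_suspected(best))
```

The reflective-stability worry is exactly that an agent optimizing the combined utility has an incentive to remove or neutralize the canary term on self-modification.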
So a theory of [LW · GW] value formation [LW · GW], and especially of how to achieve vNM coherence (or whatever framework for rational preferences turns out to be the "correct" one), would be useful here. Then, during the process of value formation, humans could supervise the decision points (i.e., decide in which direction to resolve each preference); a rough sketch of what that supervision could look like is below.
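A rough sketch (again my own illustration, not from the comment) of human-supervised resolution of preference decision points, with a crude coherence check for cycles among the resolved preferences; the class and option names are hypothetical:

```python
from itertools import permutations

class SupervisedPreferences:
    """Partial preference ordering whose unresolved comparisons are
    settled by a human supervisor and then cached."""

    def __init__(self):
        self.prefers = set()  # (a, b) in prefers means "a is preferred to b"

    def resolve(self, a, b, ask_human):
        if (a, b) in self.prefers:
            return a
        if (b, a) in self.prefers:
            return b
        winner = ask_human(a, b)  # human supervises this decision point
        loser = b if winner == a else a
        self.prefers.add((winner, loser))
        return winner

    def has_cycle(self, options):
        # Coherence check in the vNM spirit: no strict preference cycles
        # among the resolved pairs over the given options.
        return any(
            (a, b) in self.prefers
            and (b, c) in self.prefers
            and (c, a) in self.prefers
            for a, b, c in permutations(options, 3)
        )

if __name__ == "__main__":
    prefs = SupervisedPreferences()
    ask = lambda a, b: a  # stand-in for the human supervisor's choice
    prefs.resolve("paperclips", "staples", ask)
    prefs.resolve("staples", "thumbtacks", ask)
    print("cycle detected:", prefs.has_cycle(["paperclips", "staples", "thumbtacks"]))
```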