Beneficial initial conditions for AGI
post by mikbp · 2023-03-14T17:41:08.504Z · LW · GW
This is a question post.
Is there anywhere an approachable (not too long, understandable by an informed layperson) list or explanation of the current understanding of what the initial conditions for an aligned AGI should or could be?
Answers
answer by JBlack
There is no such list or explanation anywhere. We, as a species, simply do not know. A large part of the reason for the existence of this site was the hope that we might develop such a list, but we have not yet succeeded.
↑ comment by mikbp · 2023-03-15T20:21:03.071Z · LW(p) · GW(p)
Oh, thank you! I thought that what doesn't exist was a list of initial conditions that we could already work on; I didn't expect that there is nothing at all, not even something far-fetched. So, if I understand correctly, for every proposal developed so far, someone has suggested a credible way an AGI could dodge it. Is that right?
↑ comment by JBlack · 2023-03-16T02:06:59.167Z · LW(p) · GW(p)
To the best of my knowledge, yes, though I am not an alignment researcher. There may be some proposals that could in fact work, and there are definitely some obscure enough that the only people who know about them also believe they will work.
As far as I know, there aren't any proposals that are generally believed to work. The field is far too young for that. We don't yet even have a good handle on what the problems will be in practice, which sort of goes with the territory when we need to consider the behaviours of things far smarter than ourselves.