ShowMeTheProbability's Shortform
post by ShowMeTheProbability · 2022-08-15T09:52:39.648Z · LW · GW · 3 comments
comment by ShowMeTheProbability · 2022-08-15T09:52:39.967Z · LW(p) · GW(p)
The lack of falsification criteria for AGI (unresearched rant)
Situation: Lots of people are talking about AGI and AGI safety, but nobody can point to one. This is a Serious Problem, and a sign that you are confused.
Problem:
- Currently proposed AGI tests are ad-hoc nonsense (https://intelligence.org/2013/08/11/what-is-agi/)
- Historically, when these tests are passed, the goalposts are shifted (the Turing test was "passed" by fooling human judges, which is highly subjective and relatively easy).
Solution:
- A robust and scalable test of abstract cognitive ability.
- A test that could be passed by a friendly AI in such a way as to communicate co-operative intent, without all the humans freaking out.
Would anyone be interested in such a test, so that we can actually detect the subject of our study? A rough sketch of the shape such a test might take follows.
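For concreteness, here is a minimal sketch (assuming nothing beyond Python's standard library) of one possible structure: a procedurally generated family of abstract-reasoning tasks plus a pre-registered pass/fail criterion. Every name and number here (make_task, passes_test, the task family, the threshold) is a hypothetical placeholder, not an existing benchmark or API.

```python
import random

def make_task(rng: random.Random) -> tuple[list[int], int]:
    """Generate a fresh sequence-induction task: given the first four
    terms of an affine sequence a*n + b, the answer is the fifth term."""
    a, b = rng.randint(1, 9), rng.randint(0, 9)
    seq = [a * n + b for n in range(5)]
    return seq[:4], seq[4]

def passes_test(agent, n_tasks: int = 1000, threshold: float = 0.99,
                seed: int = 0) -> bool:
    """Pre-registered falsification criterion: the agent must solve a
    large sample of freshly generated tasks at >= `threshold` accuracy.
    A failed batch falsifies the capability claim; swapping in a harder
    task generator scales the test."""
    rng = random.Random(seed)
    correct = sum(
        agent(prompt) == answer
        for prompt, answer in (make_task(rng) for _ in range(n_tasks))
    )
    return correct / n_tasks >= threshold
```

The toy task family is beside the point; what matters is that the pass condition is fixed in advance, which is exactly what makes the goalposts hard to move afterwards.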
comment by Raemon · 2022-08-15T20:13:47.241Z · LW(p) · GW(p)
Becoming capable of building such a test is essentially the entire field of AI alignment. (Yes, we don't have the ability to build such a test, and that's bad, but the difficulty lives in the territory. MIRI's previously stated goals were specifically to become less confused.)
comment by ShowMeTheProbability · 2022-08-15T22:30:59.790Z · LW(p) · GW(p)
Thanks for the feedback!
I'll see if my rough idea can be formalised in such a way as to constitute a (hard) test of cognition that is satisfying to humans.