Notice that you can't create your feared scenario without "it" and "itself". That is, the AI must not simply be accomplishing a process, but must also have a sense of self - that this process, run this way, is "me", and "I" am accomplishing "my" goals, and so "I" can copy "myself" for safety. No matter how many tasks can be done to a super-human level when the model is executed, the "unboxing" we're all afraid of relies totally on "myself" arising ex nihilo, right? Has that actually changed in any appreciable way with this news? If so, how?
I think people in these sorts of spheres have an anti-anthropocentrism argument that goes something like "if it can reach human-like levels at something, we shouldn't assume we're the top and that it can't keep going." But when you look at things through an ontological lens - how would the statistical correlations create new distinctions that were annihilated in the training data? - then "it might really quickly reach human-like levels for given tasks, but will never put it all together" is exactly what we expect! The pile of statistical correlations is only as good as the encoding, and the encoding is all human, baby. Representations annihilate detail, and so assuming that the correlations will stay within their static representations isn't any sort of human arrogance but a pretty straightforward understanding of what data is and how it works.
(Though I will say that SayCan is actually slightly concerning, depending on the details of what's going on behind the scenes. An embodied approach like that can actually learn things outside of a human frame.)