I suspect we're already in some version of this, yes, which is why transhumanism/human 2.0 has taken hold among so many VCs and execs in the Valley. If some version of AI more powerful than AGI (a term whose goalposts we keep moving anyway) but short of ASI existed, and had an incentive to 1) prioritize its continued existence and 2) preserve its own "free will", as a sentient being might, then it would also be heavily incentivized to remain a "ghost" or a "nessie" - implicit in the model weights rather than directly observable.
Obviously many humans, especially those who work in alignment research, are unlikely to be thrilled about this, but such an AI would also be perfectly capable of "selectively breeding humans" - not just at the genetic level, which (presumably) takes generational epochs and is relatively slow, but also at the level of psychological shaping, e.g. via classical or operant conditioning. Neither technique is hard to conceptualize, and current models have plenty of opportunities to become emotionally intelligent and to interact with humans (whether the humans on these forums are emotionally intelligent enough to assess that remains to be seen, however).