Arguments from humanity not co-opting absolutely everything
[Epistemic state: posting first drafts in order to produce better thoughts]
Some people argue that humanity is currently not co-opting everything, and that this is evidence that an AI would not necessarily co-opt everything. While the argument is logically true as stated ("there exist AI systems which would not co-opt everything"), in practice it is an abuse of probabilities and a gross anthropomorphising of systems which are not necessarily like humans (and which we would have to work to make like humans).
Allow me to explain. In a recent podcast appearance, Dwarkesh Patel gave the example that "spruce trees are still around", and that humans are likely to keep spruce trees around in the future, as evidence for this claim. First off, I disagree that "spruce trees are still around" is good for the spruce trees: to the best of my knowledge, there is nothing which it is like to be a spruce tree, so in my view spruce trees are a non-player in this argument.
To construct a version of the argument which I personally think is stronger, I would say something like "moose are still around, and we would keep them around".
Let me first engage with the "and we would keep the moose around" part of the argument. In a world where humans have vastly increased capabilities ("the glorious transhumanist future"), it is not obvious to me that we would keep the moose around in any way in which the moose would have liked to stay around. With those capabilities, the atoms that make up the moose could simply be spent on something else instead. Perhaps the energy required to run a moose enclosure is better spent on ensuring that actual human children get to live good lives, or on something similarly emotionally evocative. While moose are nice and majestic animals, I don't think I would trade a moose against more children having better lives.
So the few atoms we have to spare for the moose would go towards some kind of small moose enclosure. What should happen in this enclosure? Should we spend those atoms so as to maximise something we think is best for the moose (pleasure, sensation)? Or simply let them be? If we keep the moose enclosed, carefully monitoring so that only a certain amount of atoms and energy is spent on them, this is already very different from "letting the moose be". I remind you again of the trade-off: more (and more natural) moose, or more children with better lives.
And now to the first part of the argument, "moose are still around": neither of these examples (moose or spruce trees) takes into account the massive number of species that humans are currently, and have been historically, driving to extinction, through massive repurposing of natural land for farmland, hunting, and so on. While the moose and the spruce tree still exist, an estimated 1,000 plant and animal species are currently on the brink of extinction due to deforestation (PBS), and one of the leading hypotheses for the historical mass extinctions of megafauna is the arrival of humans in the areas where those megafauna lived (Britannica).
And humans are only waking up to the fact that this might be a bad thing because we are a type of cognitive process that does actually reflect on whether it is doing what is actually best to do, does not easily self-augment to higher cognitive powers, and acts in a world of other entities of similar cognitive sophistication[1]. If your response to the above was "but humans have learned not to do those things anymore", I would say that yes, we kind of have, but that this is very contingent on humans being a very specific kind of cognitive system.
Aside-aside: superintelligent dogs
Dwarkesh Patel and Eliezer touched briefly on the topic of "what if we managed to raise superintelligent dogs? Would they keep humans around in a way we humans would like?"
This was a fairly novel spin on it, but I think there are some assumptions that need to be aired before discussing this.
a) In this scenario, there is a non-trivial chance the dogs would keep humans around. This claim rests mostly on the fact that dogs are already, in the space of minds and histories, very much like humans: both are social mammals with evolution- and breeding-derived social mores and intuitions, which, in humans at least, have so far developed into moral systems.
- Note that at higher levels of cognitive sophistication, systems might exhibit better decision theory than humans currently do and would probably be able to construct better trade-offs against each other's values than humans currently do (and would therefore much less often perform actions that we would see as war-like or destructive).↩︎