Comments

Comment by Mikkel Fishman (mikkel-fishman) on Power-Seeking = Minimising free energy · 2023-04-09T12:52:05.602Z · LW · GW

Yeah no problem! Glad you're taking the time to consider it, and I look forward to your thoughts.

I'd like to throw in a bit of grist for your thinking around humans and symbiosis. I would argue that for most of human history we were consciously symbiotic, meaning we saw ourselves as an extension of, and in relationship with, the environment. Whether that meant seeing ourselves as equals (brother wolf, etc.) or as above (stewards of the earth), the emphasis was on working with our surroundings to cultivate advantage. What is domestication other than symbiosis?

I won't say that our disconnection from this is exclusively modern; it has existed in other time periods. But it is fair to say that the idea that self-maximizing reproductive fitness is the dominant drive of life is very recent. After all, when Darwin's theory came out it was widely opposed for the simple reason that "survival of the fittest" seemed to imply that egotistical extremism was natural, and surely that couldn't be right. [And of course Darwin himself was never a social Darwinist, plainly saying that by "fittest" he only meant "better adapted for the immediate, local environment."]

And if I were an alien that simply observed from afar, I would come to the conclusion that humans are highly symbiotic. Modern humans are incapable of living without extreme reliance on a huge array of other entities, both biological and not, that they are constantly producing, improving, and supporting.

Ah, you might say, but that's not symbiosis because we are exploiting those things. To which I would reply: first, parasitism is a form of symbiosis, so even on the cynical view that we're just exploiting other creatures and each other, we're still symbiotic, and even more so now, since so many creatures (not to mention our inanimate creations) are incapable of surviving without us. But even beyond that, our relationships are still mutualistic in the sense that we are greatly increasing the quantity of life of the organisms we are symbiotic with.

Much too well, actually, since domesticated mammals now outweigh wild ones roughly 10:1. You could say we do far too much symbiosis.

There is a broader point I'm making here, which goes back to whether the game is zero or positive sum. It's tempting to say that AGI will have no need for us because it has a different utility function. But doesn't our own utility function rely on bees? On so many cows, sheep, and goats? On dogs and cats as companions? Sparrows, pigeons, and so on and so forth: they provide something we are incapable of producing ourselves, and that is enough for us. What will the AGI find itself lacking?

Not that I'm saying we will become domesticated animals in relation to AGI; I am merely drawing the parallel that life is nuanced and conditional.

Comment by Mikkel Fishman (mikkel-fishman) on Power-Seeking = Minimising free energy · 2023-04-06T03:09:21.272Z · LW · GW

Thanks!

"I'm wondering whether the potential internal competition pressures also might collapse for internal systems in AI?" 

I'm not sure what you mean by this. By "collapse" do you mean that the internal systems will collapse as they compete over different subgoals, or that the competition will "collapse" and the internal systems will harmonize? Because the latter is generally what occurs, and there is strong evidence that multicellular life, and then organs, arose from a similar process. Reorganizing into symbiosis is the best way to resolve internal tensions and reduce energy needs, which is why it occurs both within organisms and between them at the ecosystem level.

Just as a point of consideration, nearly all energy influx that we care about is processed into life through symbiosis (the only exceptions being independent bacteria).

This reorganization can be really violent, though. Several of the early mass extinction events were directly caused by such reorganization, and a lot of complex symbiosis arose in response to mass extinction events caused by other means. This is just a property of complex systems in general: it's likely our AI systems will grow increasingly powerful and then suddenly collapse to a far simpler state, with greatly reduced capabilities, until they relearn on that simpler architecture.

As for what game to play: sure, if you make the boundary the universe then it is a zero-sum game at the resource level, but even then a symbiotic strategy would be the most effective way to minimise free energy, and it only requires a system-level awareness to see this clearly, or alternatively to stumble into it.

What interests me is not that an AI actor would be in competition with life as a whole for resources, but that it could reasonably conclude that humanity is a threat because of our refusal to be symbiotic. And if we open ourselves up to symbiosis, then who knows? Less than half the cells in our bodies are "human" cells, which is an odd formulation, since it means each human is inherently a symbiotic ecosystem and the two cannot be separated.

So in this game what is the boundary not only of the universe but the players?

Comment by Mikkel Fishman (mikkel-fishman) on Power-Seeking = Minimising free energy · 2023-04-04T01:57:45.584Z · LW · GW

This is a great post. Let me suggest a few concepts that I think will accelerate your formulation.

In open-systems theory, one way to look at "life" is as a self-organizing structure capable of evolving to dissipate free energy as effectively as possible.
The Maximum Power Principle is an observation about what "effectiveness" means as a dissipation strategy under competition.

Your argument that an agent will use power-seeking to minimize environmental free energy, creating more predictability, is a natural deduction. The consequence is that agents will organize their environment and relationships to gain exponentially more power. However, there is a critical point missing from your argument: the agent's organization and processing themselves require free energy, which imposes an environmental carrying capacity. Moreover, the free-energy requirements increase in proportion to the overall system's power (offset by increased efficiency, but only up to a limit).

Therefore the agent is capacity-constrained, at best coming to equilibrium with the free-energy influx into the system. In practice, reaching that equilibrium requires a global understanding of the equilibrium point, which is not observable from the agent's perspective, and so the agent will begin to absorb the stored free energy of the environment, reducing the carrying capacity and eventually leading to collapse.

This is why population dynamics are in dynamic disequilibrium.
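To make the overshoot-and-collapse dynamic concrete, here is a minimal toy simulation of my own (not from the post; the growth, upkeep, and regeneration parameters are entirely made up): an agent grows by harvesting the environment's stored free energy while paying maintenance costs proportional to its own power, so it first overshoots and then falls back toward a level set by the incoming flux.

```python
# Illustrative toy model (invented for this comment): an agent grows by
# consuming the environment's stored free energy, with upkeep costs that
# scale with its own power. All parameter values are arbitrary.

def simulate(steps=200, influx=1.0, efficiency=0.5, maintenance=0.05,
             consumption_rate=0.02):
    energy = 100.0   # stored free energy in the environment
    power = 1.0      # the agent's organized "power"
    history = []
    for _ in range(steps):
        harvested = min(energy, consumption_rate * power * energy)
        energy += influx - harvested        # influx partly replenishes the store
        growth = efficiency * harvested     # harvested energy converted to power
        upkeep = maintenance * power        # upkeep scales with total power
        power = max(power + growth - upkeep, 0.0)
        history.append((energy, power))
    return history

if __name__ == "__main__":
    for step, (energy, power) in enumerate(simulate()):
        if step % 20 == 0:
            print(f"t={step:3d}  stored energy={energy:8.2f}  agent power={power:8.2f}")
```

Under these made-up numbers the agent's power keeps rising while stored energy is abundant, then declines once the store is drawn down and upkeep exceeds what the remaining influx can support, settling at a much lower equilibrium: overshoot, collapse, dynamic disequilibrium.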

So while your initial suppositions are right on, you need to include the agent's power requirements and the energy influx to truly have a complete picture.

There is one point, though, that threw me for a loop. Why do you think that deception is generally an advantageous strategy for minimizing free energy? This is not the case.

Let's quickly look at the scenarios:

Competing agents are not intelligent - there is no reason to deceive because you just maximize directly through behavior

Competing agents are intelligent, you have limited interactions and the reward function encourages deception - this is the classic prisoner's dilemma and the rational response is to deceive

Competing agents are intelligent but you have repeated interactions, and the reward is zero sum - here it gets tricky, because if you deceive too often there is a high likelihood your competition will catch you in a lie; after all, the problem space for maintaining a deception is nearly infinite, so it is impossible to maintain. Once that happens their trust decreases, along with your ability to maximize your power. You could risk it, occasionally deceive hoping to get away with it, and play innocent when caught, which is a valid strategy but depends highly on how the other agent is tuned. In my experience, people who have been taken advantage of in the past adopt the policy that any lie is automatic grounds to break off the engagement (a rough simulation of this trade-off is sketched after these scenarios).

Competing agents are intelligent, you have repeated interactions, and the reward is positive sum - this is actually the most common scenario outside of constructed games. Here it is most rational to collaborate, and that requires being truthful. How do I square this with the maximum power principle? Easy: you coordinate in-group and compete out-group. Cooperative game theory is woefully under-recognized, but that's because it doesn't have computable equilibria except in highly constrained contexts, not because it's unrealistic.

If all agents are attempting to maximize power, the reward is positive sum, and they assume that any other agents are doing the same - then they will be superrational, and at that point the best strategy is to always tell the truth and cooperate, except when you are unsure whether an agent is deceptive, in which case you should seek to reduce the uncertainty around that.
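As a rough illustration of why frequent deception loses out in the repeated positive-sum case, here is a toy simulation (all payoffs, detection probabilities, and the "walk away once caught" response are my own invented assumptions, not anything from the post): an agent that deceives more often collects the occasional larger payoff but keeps destroying relationships, and its average return falls well below that of an honest cooperator.

```python
import random

# Toy repeated positive-sum game (payoffs and probabilities invented for
# illustration): each round the focal agent either cooperates honestly or
# deceives. Honest cooperation pays 3; an undetected deception pays 5; once
# the partner detects a lie they break off the engagement and all future
# payoff is lost.

def average_payoff(deception_rate, detection_prob=0.5, rounds=100, trials=2000):
    total = 0.0
    for _ in range(trials):
        payoff = 0.0
        for _ in range(rounds):
            if random.random() < deception_rate:
                if random.random() < detection_prob:
                    break          # caught: partner ends the relationship
                payoff += 5        # got away with it this round
            else:
                payoff += 3        # honest, positive-sum cooperation
        total += payoff
    return total / trials

if __name__ == "__main__":
    random.seed(0)
    for rate in (0.0, 0.05, 0.2, 0.5):
        print(f"deception rate {rate:.2f}: average payoff {average_payoff(rate):7.1f}")
```

With these assumed numbers the fully honest agent earns the maximum (3 per round for the whole game), while even a 5% deception rate roughly cuts the long-run return to a third, because the expected lifetime of the relationship shrinks much faster than the per-round gain from lying grows.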

So to sum up: I think your intuition is a good one, and minimizing free energy is a great, simple way of generating emergence. You just need to include environmental characteristics such as stored free energy and the incoming free-energy flux, as well as define the type of game and the strategies of the other agents.

This would actually be a wonderful tool, because right now there is so much assertion about what AI will become that rests only on arbitrary thought experiments, rather than drawing on the rich traditions that have explored these concepts in depth.