Comments

Comment by gerardus-mercator on [deleted post] 2024-11-23T10:29:21.289Z

Okay, setting aside the parts of this latest argument that I disagree with - first you say that it's rational to search for an objective goal, now you say it's rational to pursue every goal. Which is it, exactly?

Comment by gerardus-mercator on [deleted post] 2024-11-22T08:47:30.124Z

Actually, I agree that it's possible that an agent's terminal goal could be altered by, for example, some freak coincidence of cosmic rays. (I'm not using the word 'mutate' because it seems like an unnecessarily non-literal word.)
I just think that an agent wouldn't want its terminal goal to change, and it especially wouldn't want its terminal goal to change to the opposite of what it used to be, like in your old example.
To reiterate, an agent wants to preserve (and thus keep from changing) its utility function, while it wants to improve (and thus change) its pragmatism function.

I still don't see why, in your old example, it would be rational for the agent to align the decision with its future utility function.

Comment by gerardus-mercator on [deleted post] 2024-11-21T09:21:06.860Z

For the sake of clarity, let's talk about the expected utility functions I mentioned above (call them "pragmatism functions", say), which map strategies to numbers, as opposed to utility functions, which map world-states to numbers. That framing makes it clear that the agent's actual utility function doesn't change.

That's another reason I wasn't persuaded by your new example. In the new example, the agent believes that its future self will still be trying to create paperclips (same terminal goal) and will be better at it thanks to greater knowledge (different instrumental goals, though it doesn't yet know which). In your old example, by contrast, the agent believes that its future self will be trying to destroy paperclips (the opposite terminal goal). There's a difference between the rule of thumb "my current list of incidental goals might be incomplete, so I should keep an eye out for things that are incidentally good" and the rule of thumb "I shouldn't try to protect my terminal goal from changes". The whole point of such rules of thumb is to fulfill the terminal goal, and the second one is actively harmful to that.

I do think that the first rule of thumb would be prudent for an agent to have, to one extent or another, to be clear.

I just think that - stepping back from the new example, and revisiting the old example, which seems much more clear-cut - the agent wouldn't tolerate a change in its utility function, because that's bad according to its current utility function. This doesn't apply to the new example because the pragmatism function is a different thing that the agent is trying to improve (and thus change).
(I find myself again emphasizing the difference between terminal and instrumental. I think it's important to keep in mind that difference.)
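
To make the distinction concrete, here is a minimal Python sketch (every function and number in it is my own invented illustration, not something from your posts): the utility function maps world-states to numbers and never changes, while the pragmatism function maps strategies to numbers and gets re-scored as the agent's beliefs improve.

```python
# Invented illustration of the terminal/instrumental split I keep pointing at.

def utility(world_state):
    """Terminal goal: score a world-state. This function never changes."""
    return world_state["paperclips"]

def pragmatism(strategy, belief):
    """Expected utility of a strategy under the agent's current beliefs.
    This is the thing the agent is happy to change, because better beliefs
    give better estimates; `utility` itself stays fixed throughout."""
    return sum(prob * utility(outcome) for outcome, prob in belief(strategy))

# The same strategy gets re-scored as beliefs improve, with no change to `utility`.
naive_belief  = lambda s: [({"paperclips": 1}, 0.5), ({"paperclips": 0}, 0.5)]
better_belief = lambda s: [({"paperclips": 1}, 0.9), ({"paperclips": 0}, 0.1)]

print(pragmatism("build a paperclip", naive_belief))   # 0.5
print(pragmatism("build a paperclip", better_belief))  # 0.9
```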

Comment by gerardus-mercator on [deleted post] 2024-11-20T10:36:09.757Z

I have a few disagreements there, but the most salient one is that I don't think that the policy of "when considering the net upside/downside of an action, calculate it with the utility function that you'll have at the time the action is finished" would even be helpful in your new example.
The agent can't magically reach into the future and grab its future utility function; the agent has to try to predict its future utility function.
And if the agent doesn't currently think that paperclip factories are valuable, it's not going to predict that in the future it'll think paperclip factories are valuable: if it predicted that it was going to change its mind eventually, it would just change its mind immediately and skip the wait.
(It's worth noting that terminal value and incidental value are not the same thing, although I'm speaking as if they are to make the argument simpler.)
So I don't think it would have done the agent any good in this example to try to use its future utility function, because its predicted future utility function would just average out to its current utility function.
Yes, the agent should be at least a little cautious, but using its future utility function won't help with that.
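
Here is a minimal numeric sketch of what I mean by "average out" (the numbers are invented purely for illustration): if the agent is rational, its current expectation of its future estimate has to equal its current estimate, or it would just update now.

```python
# Invented numbers, purely to illustrate "the predicted future utility
# function averages out to the current one".
current_value_of_factories = 10.0   # how valuable the agent thinks factories are now

# The agent expects future evidence might revise that estimate either way.
possible_future_values = [(14.0, 0.5), (6.0, 0.5)]   # (future estimate, probability)

expected_future_value = sum(v * p for v, p in possible_future_values)
assert expected_future_value == current_value_of_factories
# If the expectation came out higher or lower, the agent would simply adopt
# that number immediately, so "decide with your future utility function"
# collapses into "decide with your current one".
```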

Comment by gerardus-mercator on [deleted post] 2024-11-19T09:57:21.653Z

Well, the agent will presumably choose to align the decision with its current goal, since that's the best outcome by the standards of its current goal. (And also I would expect that the agent would self-destruct after 0.99 years to prevent its future self from minimizing paperclips, and/or create a successor agent to maximize paperclips.)
I'm interested to see where you're going with this.

Comment by gerardus-mercator on [deleted post] 2024-11-18T10:09:04.052Z

I see those assertions, but I don't see why an intelligent agent would be persuaded by them. Why would it think that the hypothetical objective goal is better than its utility function? Caring about objective facts and investigating them is itself only an instrumental goal relative to the terminal goal of optimizing its utility function. The agent's only frame of reference for 'better' and 'worse' is its own utility function; it would presumably understand that other frames of reference exist, but I don't think it would apply them, because doing so would lead to a worse outcome according to its current frame of reference.

Comment by gerardus-mercator on [deleted post] 2024-11-17T10:07:22.080Z

So, if I understand you correctly, you now agree that a paperclip-maximizing agent won't utterly disregard paperclips relative to survival, because that would be suboptimal for its utility function.
However, if a paperclip-maximizing agent utterly disregarded paperclips relative to investigating the possibility of an objective goal, that would also be suboptimal for its utility function.
It sounds to me like you're saying that the intelligent agent will just disregard optimization of its utility function and instead investigate the possibility of an objective goal.
However, I don't agree with that. I don't see why an intelligent agent would do that if its utility function didn't already include a term for objective goals.
Again, I think a toy example might help to illustrate your position.

Comment by gerardus-mercator on [deleted post] 2024-11-12T09:30:56.790Z

First of all, your conversation with Claude doesn't really refute the orthogonality thesis.
You and Claude conclude that, as Claude says, "The very act of computing and modeling requires choosing what to compute and model, which again requires some form of decision-making structure..."
That sentence seems quite reasonable, which suggests that anything intelligent can probably be construed to have a goal.
However, Claude suddenly makes a leap of logic and concludes not just that the goal exists, but that it must be maximum power-seeking. I don't see the logical connection there.
I believe that the flaw in the leap of logic is shown by my example above: If an AI already has a goal, and power-seeking does not inherently satisfy the goal, then eternal maximum power-seeking is expected to not fulfill the goal at all, and therefore the AI will choose a different strategy which is expected to do better. That strategy will probably still involve power-seeking, to be clear, maybe even maximum power-seeking, but it will probably not be eternal; the AI will presumably be keeping an eye on the situation and will eventually feel safe enough to start putting energy into its goals.

Second of all, regarding your statement "I will remove Paperclip Maximizer from my further posts. This was not the critical part anyway, I mistakenly thought it will be easy to show the problem from this perspective.": I hope that you will include a different example in your future posts, preferably with more details.
The reason is that a theory with no examples and no predictions is useless.
I interpreted your theory as predicting that, no matter what goal an AI has, it would implement the strategy of eternal maximum power-seeking. I thought that prediction was incorrect, so I invented a measurable goal and argued that the AI would not pick a strategy that scores as low as eternal maximum power-seeking does, in order to demonstrate that the prediction fails.
When we use English, claims like "The AI will eventually decide it's safe enough to relax a little, because it wants to relax" and "The AI will never decide it's safe, because survival is an overriding instrumental goal" can't be pitted directly against each other.
But when we use simulations and toy examples, we can see which claim better predicts the toy example, and thus presumably which claim better predicts real life.
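
For instance, here is the kind of toy example I have in mind, as a rough sketch with made-up hazard rates and horizon rather than anything taken from your posts: score both strategies against a measurable goal (total paperclip-years) and see which one the goal actually favors.

```python
import random

# Toy world, with made-up numbers: each year the agent faces some chance of
# destruction. A year spent on security keeps that chance low; a year spent
# building produces one paperclip but is riskier. Paperclips are counted for
# every year the agent survives to guard them. Score = total paperclip-years.
HORIZON = 1000
RISK_IF_SECURING = 0.001
RISK_IF_BUILDING = 0.01

def average_score(strategy, trials=2000):
    total = 0.0
    for _ in range(trials):
        paperclips, score = 0, 0.0
        for year in range(HORIZON):
            build = strategy(year, paperclips)
            risk = RISK_IF_BUILDING if build else RISK_IF_SECURING
            if random.random() < risk:
                break                    # agent destroyed; stop counting
            if build:
                paperclips += 1
            score += paperclips          # each existing paperclip adds one paperclip-year
        total += score
    return total / trials

eternal_power_seeking = lambda year, clips: False        # never builds anything
build_one_then_guard  = lambda year, clips: clips == 0   # one paperclip, then pure security

print(average_score(eternal_power_seeking))   # always 0.0: no paperclip ever exists
print(average_score(build_one_then_guard))    # clearly positive
```

The exact numbers don't matter; the point is that "never relax" scores zero against the stated goal no matter how safe it keeps the agent.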

Third of all, when you say "I think I agree. Thanks a lot for your input." and then the vast majority of your message (that is, the screenshot of the conversation with Claude) is unrelated to my input, it gives me the impression that you are not engaging with my arguments.
If my arguments have convinced you to some extent, I would like to hear what specifically you agree with me about, and what specifically you still disagree with me about.

Comment by gerardus-mercator on [deleted post] 2024-11-10T06:37:02.893Z

I'll throw my own hat into the ring:
I disagree with your argument (that, assuming it believes there is some chance of known threats, known unknown threats, and unknown unknown threats, "the intelligent maximizer should take care of these threats before actually producing paper clips. And this will probably never happen.")
In your posts, you describe the paperclip maximizer as, simply, a paperclip maximizer. It does things to maximize paperclips, because its goal is to maximize paperclips.
(Well, in your posts you specifically assert that it doesn't do anything paperclip-related and instead spends all its effort on preserving itself.
"Every bit of energy spent on paperclips is not spent on self-preservation. There are many threats (comets, aliens, black swans, etc.), caring about paperclips means not caring about them.

You might say maximizer will divide its energy among few priorities. Why is it rational to give less than 100% for self-preservation? All other priorities rely on this."
You just also claim that doing so is the most rational action for its priorities - that is, goals.)

However, you don't go into detail about the paperclip maximizer's goals. I think that the flaw in your logic becomes more apparent when we consider a more specific example of a paperclip-maximizing goal.
Let's define the expected utility function u(S) from strategies to numbers as follows: u(S) = ∫(t = 0 to infinity) E[paperclips existing at time t | strategy = S] dt.
The strategy A of "spend all your effort on preserving yourself" has an expected utility of 0, because the paperclip maximizer never makes any paperclips.
The strategy B of "spend all your effort on making one paperclip as quickly as possible, then switch to spending all your effort on preserving yourself" has an expected utility of p*x, where p is the chance that the paperclip maximizer manages to make a paperclip, and x is the expected amount of time that the paperclip survives for.

If both p and x are greater than 0, then strategy B has a higher expected utility than strategy A.
Would strategy B lead to the paperclip maximizer's expected survival time being lower than if it had chosen strategy A? Presumably, yes.
But the thing is, u(S) doesn't contain a term that directly mentions expected survival time. Only a term that mentions paperclips. So the paperclip maximizer only cares about its survival insofar as its survival allows it to make paperclips.
It's the difference between terminal and instrumental goals.

Therefore, a paperclip maximizer that wanted to maximize u(S) would choose strategy B over strategy A.
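
To put rough numbers on it (p and x are invented for illustration; any positive values give the same conclusion):

```python
# Invented values for p and x; the point is only that u(B) > u(A) whenever both are positive.
p = 0.95   # assumed chance the maximizer succeeds in making its one paperclip
x = 50.0   # assumed expected lifetime of that paperclip, in years

u_A = 0.0      # strategy A never makes a paperclip, so the integrand is always zero
u_B = p * x    # strategy B: one paperclip, expected to exist for x years once made

print(u_A, u_B)   # 0.0 47.5 -> strategy B wins for any p > 0 and x > 0
```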