Utility functions without a maximum
post by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) · 2020-08-11T12:58:44.354Z · LW · GW
This is a question post.
Contents
Answers: Donald Hobson (4) · Andrew Kao (4) · Dagon (3)
An elementary question that has probably been discussed for 300 years, but I don't know the right keyword to google it with.
How, theoretically, do you deal (in decision theory/AI alignment) with a "noncompact" utility function? E.g. suppose your set of actions is parameterized by t in (0, 1], and U(t) = t for t < 1, U(1) = 0. Which t should the agent choose?
E.g. consider: the agent gains utility f(t) from expending a resource at time t, where f is a (sufficiently fast-growing) increasing function. When should the agent expend the resource?
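To make the difficulty concrete, here is a minimal sketch in Python (the helper names U and improve are just illustrative): whatever action the agent picks, there is a specific better one, so "maximize U" has no answer.

```python
# U has supremum 1 on (0, 1] but never attains it: every action is
# strictly dominated by another, so there is no optimal choice of t.

def U(t: float) -> float:
    assert 0.0 < t <= 1.0
    return t if t < 1.0 else 0.0

def improve(t: float) -> float:
    """Return an action strictly better than t (always possible here)."""
    return (t + 1.0) / 2.0 if t < 1.0 else 0.5

t = 0.9
for _ in range(5):
    better = improve(t)
    print(f"U({t:.6f}) = {U(t):.6f}  <  U({better:.6f}) = {U(better):.6f}")
    t = better
```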
I guess the obvious answer is "such a utility function cannot exist, because the agent always does something, and whatever it does reveals its true utility function", but it seems like it would be difficult to hard-code compactness into a utility function in a way that doesn't make the agent stupid.
Answers
answer by Donald Hobson (donald-hobson)
What you do is pick a Turing machine that you think halts eventually, but takes a really long time to do so, and then expend the resource when that TM stops.
After all, a computer with an N-bit program can't delay longer than BB(N). In practice, if most of the bits aren't logically correlated with long-running Turing machines, it can't run anywhere near that long. This inevitably becomes a game of logical uncertainty about which Turing machines halt.
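To make the BB(N) point concrete, here is a toy brute-force sketch (my own construction, not something from this answer): it enumerates every 2-state, 2-symbol Turing machine and reports the longest run among those that halt, which is exactly the kind of bound BB gives.

```python
# Enumerate all 2-state, 2-symbol Turing machines and find the longest
# halting run.  The maximum, BB(2) = 6 steps, bounds how long ANY machine
# of this size can delay before halting.

from collections import defaultdict
from itertools import product

STATES, SYMBOLS, MOVES = ("A", "B"), (0, 1), (-1, +1)
ACTIONS = list(product(SYMBOLS, MOVES, STATES + ("H",)))  # (write, move, next)

def run(table, cap=100):
    """Simulate one machine; return steps until halt, or None at the cap."""
    tape, head, state = defaultdict(int), 0, "A"
    for step in range(1, cap + 1):
        write, move, nxt = table[(state, tape[head])]
        tape[head] = write
        head += move
        if nxt == "H":
            return step
        state = nxt
    return None  # did not halt within the cap (may run forever)

keys = list(product(STATES, SYMBOLS))
best = 0
for choice in product(ACTIONS, repeat=len(keys)):
    steps = run(dict(zip(keys, choice)))
    if steps is not None:
        best = max(best, steps)

print("longest halting run among 2-state machines:", best)  # 6
```

For larger N this enumeration is hopeless (BB grows faster than any computable function), which is the point: the agent is reduced to logical uncertainty about which machines halt.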
↑ comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) · 2020-08-12T03:33:45.259Z · LW(p) · GW(p)
I don't understand the significance of using a TM -- is this any different from just applying some probability distribution over the set of actions?
↑ comment by Donald Hobson (donald-hobson) · 2020-08-12T10:51:38.491Z · LW(p) · GW(p)
Any number that the AI puts out must be computable, and I was reminded of an entry in a largest-computable-number contest: "the runtime of the longest-running Turing machine that ZFC + a large cardinality axiom can prove halts (with the proof being at most 3^^^3 symbols long)". This is an almost optimal answer, in that it is well defined if Con(ZFC + large cardinality axiom) holds, and it beats any answer you can give that relies only on ZFC + the large cardinality axiom.
An AI asked to output the largest number it can is playing a game of name-the-largest-computable-number.
↑ comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) · 2020-08-12T18:03:59.102Z · LW(p) · GW(p)
Wait, but can't the AI also choose to adopt the strategy "build another computer with a larger largest computable number"?
↑ comment by Donald Hobson (donald-hobson) · 2020-08-13T11:17:35.690Z · LW(p) · GW(p)
If the computer has a finite amount of memory and can't build more, this puts a bound on how long it can wait. If it can build more, it will.
The point is that it needs to pick some long running computation that it can be fairly sure halts eventually.
This gets into details about exactly how the AI is handling logical uncertainty.
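A small sketch of the finite-memory bound (my own illustration, not from the comment): a deterministic machine with k bits of state has at most 2^k configurations, so within 2^k steps it must either halt or revisit a configuration and loop forever.

```python
# Pigeonhole bound on waiting with finite memory: a deterministic update on
# a k-bit state must repeat a configuration within 2**k steps.

def steps_until_repeat(update, k: int, start: int = 0) -> int:
    seen, state, steps = set(), start, 0
    while state not in seen:
        seen.add(state)
        state = update(state) % (2 ** k)
        steps += 1
    return steps  # guaranteed <= 2**k

k = 16
# A full-period linear congruential update hits the bound exactly.
print(steps_until_repeat(lambda s: 5 * s + 1, k), "<=", 2 ** k)
```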
↑ comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) · 2020-08-12T17:58:20.896Z · LW(p) · GW(p)
answer by Andrew Kao (andrew-kao)
In economics, if utility is strictly increasing in t (the quantity of it consumed), we would call it a type of "good", and utility functions are often unbounded. What makes the ultimate choice of t finite is that utility from t is typically modeled as concave, while costs are convex. I think you might be able to find some literature on convex utility functions, but my impression is that there isn't much to study here:
If utility is strictly increasing in t (even accounting for the cost of its inputs, opportunity cost, etc.), then you will consume all of it that you can access. So if the stock of available t is finite, you have reduced the problem to a simpler subproblem: allocating your remaining resources (minus what you spent on t) to everything else. If t is infinite, then you have attained infinite utility and need not make any other decisions, so the problem is again uninteresting.
Of course, the assumption that utility is strictly increasing in t, regardless of circumstance, is a strong one, but I think it is what you're asking.
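As a concrete instance of that standard resolution (the functional forms, a square-root benefit and a quadratic cost, are my own illustrative assumptions, not anything from the thread):

```python
# Concave benefit sqrt(t) minus convex cost c*t**2: the benefit alone is
# unbounded in t, but the net utility has an interior maximum.

def net_utility(t: float, c: float = 0.25) -> float:
    return t ** 0.5 - c * t ** 2

# First-order condition: 1/(2*sqrt(t)) = 2*c*t  =>  t* = (4*c) ** (-2/3)
c = 0.25
t_star = (4 * c) ** (-2 / 3)
print(f"t* = {t_star:.4f}, net utility = {net_utility(t_star, c):.4f}")

# Nearby choices do strictly worse:
for t in (t_star - 0.1, t_star + 0.1):
    print(f"  net_utility({t:.4f}) = {net_utility(t, c):.4f}")
```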
↑ comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) · 2020-08-11T15:04:49.092Z · LW(p) · GW(p)
Infinite t does not necessarily deliver infinite utility.
Perhaps it would be simpler if I instead let t be in (0, 1], and U(t) = {t if t < 1; 0 if t = 1}.
It's the same problem, with 1 replacing infinity. I have edited the question with this example instead.
(It's not a particularly weird utility function -- consider, e.g. if the agent needs to expend a resource such that the utility from expending the resource at time t is some fast-growing function f(t). But never expending the resource gives zero utility. In any case, an adversarial agent can always create this situation.)
↑ comment by Andrew Kao (andrew-kao) · 2020-08-12T02:03:36.607Z · LW(p) · GW(p)
I see what you mean now, thanks for clarifying.
I'm not personally aware of any "best" or "correct" solutions, and I would be quite surprised if there were one (mathematically, at least, we know there's no maximizer). But concretely speaking, you can restrict the choice set of t to a compact set such as [\epsilon, 1 - \epsilon] (on which U does attain its maximum, at 1 - \epsilon) and develop the appropriate bounds for the analysis you're interested in. Maybe not the most satisfying answer, but I guess that's Analysis in a nutshell.
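A minimal sketch of that truncation, assuming the U(t) from the question (the epsilon values are arbitrary): the maximum now exists, and the regret relative to the unattained supremum is exactly epsilon.

```python
# Truncate the choice set just below 1 so a maximizer exists, then bound
# the loss relative to the supremum (which is 1).

def U(t: float) -> float:
    return t if t < 1.0 else 0.0

def best_truncated(eps: float) -> float:
    """On [eps, 1 - eps], U is the identity, so the max sits at 1 - eps."""
    return 1.0 - eps

for eps in (0.1, 0.01, 0.001):
    t = best_truncated(eps)
    print(f"eps={eps}: choose t={t}, U={U(t)}, regret vs sup = {1.0 - U(t):.3f}")
```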
↑ comment by Dagon · 2020-08-11T14:08:53.705Z · LW(p) · GW(p)
> In economics, we call something a "good" if utility is increasing in t—the quantity of it consumed—and utility functions are often unbounded
I don't think that's true. Modern economists say utility is increasing in t _at the relevant margins_. There is no claim that it's unbounded or that it doesn't become negative utility at some future time or quantity.
↑ comment by Andrew Kao (andrew-kao) · 2020-08-11T14:58:14.048Z · LW(p) · GW(p)
Yes, agree with you! I was more classifying the t that OP was asking about than providing a definition, but the wording was somewhat unclear. I have edited the original comment.
answer by Dagon
Basic answer: there are no infinities in the real world. Whatever resource t you're talking about is actually finite: find the limit and your problem goes away (that problem, at least; many others remain when you do math on utility, rather than doing the math on resources/universe-states and then simply transforming to utility).
↑ comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) · 2020-08-11T15:14:56.844Z · LW(p) · GW(p)
It does not require infinities. E.g. you can just reparameterize the problem to the interval (0, 1]; see the edited question. You just require an infinite set of actions.
↑ comment by Dagon · 2020-08-11T17:02:36.288Z · LW(p) · GW(p)
The answer remains the same: as far as we know, the universe is finite and quantized. At any t, there is a probability of reaching t + epsilon, which makes the standard expected utility calculation (probability × reward) applicable.
↑ comment by Abhimanyu Pallavi Sudhir (abhimanyu-pallavi-sudhir) · 2020-08-11T17:18:04.173Z · LW(p) · GW(p)
Suppose the function U(t) is increasing fast enough to outrun that probability: e.g. if the probability of reaching t is exp(-t), let U(t) be exp(2t). Then the expected utility exp(-t)·exp(2t) = exp(t) is still strictly increasing in t.
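A quick sketch of that arithmetic (the function and variable names are mine):

```python
# Discounting by the survival probability exp(-t) does not restore a
# maximum when the payoff grows like exp(2t): expected utility is exp(t),
# still strictly increasing, so "wait longer" always wins.

import math

def expected_utility(t: float) -> float:
    p_reach = math.exp(-t)    # probability the agent survives to time t
    payoff = math.exp(2 * t)  # utility of expending the resource at t
    return p_reach * payoff   # = exp(t), unbounded in t

for t in (1, 10, 100):
    print(f"t = {t}: expected utility = {expected_utility(t):.3e}")
```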
I don't think the question can be dismissed that easily.