Against Against Boredom
post by Logan Zoellner (logan-zoellner) · 2021-05-16T18:19:59.909Z · LW · GW · 8 comments
I'm trying to clarify some feelings I had after reading the post Utopic Nightmares [LW · GW]. Specifically, this bit:
But in a future world where advancing technology’s returns on the human condition stop compensating for a state less than perfect hedonism, we can imagine editing boredom out of our lives
I would like to describe a toy moral theory that--while not exactly what I believe--gets at why I would consider "eliminating boredom" morally objectionable.
Experience maximizing hedonism
Consider an agent that perceives external reality through a set of sensors s_1, s_2, ..., s_n. It uses these sensors to build a model of external reality and estimate its position in that reality at a point in time t as a state x(t). It also has a number of actions available to it at any given time.
The agent estimates the number of reachable future states V(t) and "chooses" its actions so as to maximize the value of V(T) for some future time T. Obviously if the agent is dead, it cannot perceive or affect its future state, so it estimates V(T) = 0.
Internally the agent is running some kind of hill-climbing algorithm, so it experiences a reward after choosing an action at time t of the form R(t) = dV/dt. In this way, the agent experiences pleasure when it takes actions that increase V(t) and pain when it takes actions that decrease V(t), and over time the agent learns to take actions that maximize V(T).
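To make this concrete, here is a minimal Python sketch of such an agent. Only V(t), R(t), and the hill-climb come from the description above; the integer-line toy world, the horizon, and the greedy policy are invented purely for illustration.

```python
ACTIONS = [-1, +1]

def transition(s, a):
    """Toy world: states 0..10 on a line; state 0 is "dead" and absorbing."""
    if s == 0:
        return 0
    return max(0, min(10, s + a))

def V(state, horizon=4):
    """Estimated number of distinct states reachable within `horizon` steps."""
    if state == 0:
        return 0                      # a dead agent estimates V = 0
    frontier = {state}
    for _ in range(horizon):
        frontier |= {transition(s, a) for s in frontier for a in ACTIONS}
    return len(frontier)

def step(state):
    """Greedy hill-climb on V; the felt reward is R(t) = change in V."""
    v_before = V(state)
    best = max(ACTIONS, key=lambda a: V(transition(state, a)))
    new_state = transition(state, best)
    return new_state, V(new_state) - v_before

state = 1
for t in range(6):
    state, r = step(state)
    print(t, state, r)                # rewards start positive, then fall to 0
```

Run for a few steps, the printed rewards start positive and drop to zero once V(t) plateaus, which is exactly the "boredom" condition discussed next.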
Infinite Boredom
Now consider the infinite boredom of Utopic Nightmares. In this case the agent reaches a local maximum for V(t), and R(t) is now constant (and equal to zero). But of course there is no reason why R(t) need be zero when V(t) is constant. There's no reason why we couldn't have instead used R(t) = dV/dt + c for our hill-climb. The agent would experience endless bliss (for positive values of c) or endless suffering (for negative values of c). Human experience suggests that our personal setting for c is in fact significantly negative (as humans suffer greatly from boredom).
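In code, the offset is just a constant added to the felt reward; a trivial illustration (the function name is made up):

```python
def felt_reward(dv, c):
    """R(t) = dV/dt + c: at a plateau dV/dt = 0, so the agent simply feels c forever."""
    return dv + c

for c in (-1.0, 0.0, +1.0):
    print(c, [felt_reward(0, c) for _ in range(5)])
# c < 0: endless suffering, c = 0: flat nothing, c > 0: endless bliss
```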
What might be the value of using a large negative value for c? Consider, perhaps, the case of Simulated Annealing, where the algorithm intentionally makes "wrong" moves in order to escape when trapped in a local maximum. The key consideration is that R(t) is not the thing being optimized; V(t) is. Changing R(t) in a way that doesn't increase V(t) doesn't actually improve the situation of our agent, only its perception of the situation. In any case, our minds are the product of evolution, so it appears that historically the fitness-maximizing value for c has been negative.
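Simulated annealing is a standard optimization technique rather than anything specific to this post, but a bare-bones sketch shows the mechanism: occasionally accepting moves that decrease the objective lets the search escape a local maximum, which is roughly the role a negative c would push an agent to play. The toy objective and cooling schedule below are invented for illustration.

```python
import math
import random

def anneal(f, x0, neighbors, steps=10_000, t0=5.0):
    """Maximize f by sometimes accepting "wrong" (objective-decreasing) moves."""
    x, best = x0, x0
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-9         # cooling schedule
        candidate = random.choice(neighbors(x))
        delta = f(candidate) - f(x)                # change in the objective
        if delta >= 0 or random.random() < math.exp(delta / temp):
            x = candidate                          # accept, even if it hurts
        if f(x) > f(best):
            best = x
    return best

# Toy objective: a local maximum at x=2 (value 0) and the global maximum at x=8 (value 10).
f = lambda x: -(x - 2) ** 2 if x < 5 else 10 - (x - 8) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(anneal(f, 0, neighbors))   # usually ends near x=8; pure hill-climbing stops at x=2
```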
What SHOULD we do?
At this point, an important objection can be made. Namely, one cannot derive an ought from an is. Just because humans have an existing bias toward a negative value for c, what does that tell us about what c ought to be? Why shouldn't humans be happy even if relegated to endless boredom? One argument is Chesterton's fence, i.e. until we are quite sure why we dislike boredom so, we ought not mess with it. Another is that if humans ever become "content" with boredom, we cut off all possibility of further growth (however small).
My main point, though, is that I would consider eliminating boredom wrong because it optimizes for our feelings R(t) and not our well-being V(t).
8 comments
comment by Donald Hobson (donald-hobson) · 2021-05-17T21:12:10.147Z · LW(p) · GW(p)
"A superintelligent FAI with total control over all your sensory inputs" seems to me a sufficient condition to avoid boredom. Kind of massive overkill. Unrestricted internet access is usually sufficient.
You don't need to edit out pain sensitivity from humans to avoid pain. You can have a world where nothing painful happens to people. Likewise you don't need to edit out boredom, you can have a world with lots of interesting things in it.
Think of all the things a human in the modern day might do for fun, and add at least as many things that are fun and haven't been invented yet.
comment by Jozdien · 2021-05-16T20:24:36.913Z · LW(p) · GW(p)
My main point, though, is that I would consider eliminating boredom wrong because it optimizes for our feelings R(t) and not our well-being V(t).
I'd argue boredom is the element that makes us optimize for V(t) over R(t). Boredom is why we value even temporary negatives, because of that subsequent boost back up, which indicates optimizing for V(t). Removing boredom would let you optimize for R(t) instead.
One argument is Chesterton's fence, i.e. until we are quite sure why we dislike boredom so, we ought not mess with it.
I agree. But this isn't something I propose we do now or even at the moment we have the power to; we can hold off on considering it until we know the complete ramifications. But we will at some point, and at that point, we can mess with it.
Another is that if humans ever become "content" with boredom, we cut off all possibility of further growth (however small).
Yeah, that is a downside. But there may come a time at which growth steps diminish enough that we should consider whether the lost value from not directly maxing out our pleasure stats is worth the advancement.
On a side note, I really appreciate that you took the time to write a post in response. That was my first post on LW, and the engagement is very encouraging.
comment by Logan Zoellner (logan-zoellner) · 2021-05-16T21:03:06.050Z · LW(p) · GW(p)
Boredom is why we value even temporary negatives, because of that subsequent boost back up, which indicates optimizing for V(t).
Yep, that is precisely the point of simulated annealing. Allowing temporary negative values lets you escape local maxima.
On a side note, I really appreciate that you took the time to write a post in response. That was my first post on LW, and the engagement is very encouraging.
It was an interesting post and made me think about some things I hadn't in a while, thanks for writing it!
comment by Jozdien · 2021-05-16T21:22:50.907Z · LW(p) · GW(p)
Yep, that is precisely the point of simulated annealing. Allowing temporary negative values lets you escape local maxima.
In that future scenario, we'd have a precise enough understanding of emotions and their fulfilment space to recognize local maxima. If we could ensure within reason that being caught in local maxima isn't a problem, would temporary negative values still have a place?
comment by Bernhard · 2021-05-19T18:31:57.147Z · LW(p) · GW(p)
I read the original post, and kind of liked it, but I also very much disagreed with it.
I am somewhat befuddled by the chain of reasoning in that post, as well as that of the community in general.
In mathematics, you may start from some assumptions, and derive lots of things, and if ever you come upon some inconsistencies, you normally conclude that one of your assumptions is wrong (if your derivation is okay).
Anyway, here it seems to me that you make assumptions, derive something ludicrous, and then tap yourself on the shoulder and conclude that obviously everything has to be correct. To me, that does not follow.
If you assume an omnipotent basilisk (if you multiply by infinity), then obviously you can derive anything you damn well please.
One concrete example (There were many more in the original post):
we'd have a precise enough understanding of emotions and their fulfilment space to recognize local maxima
The way to recognize local extrema is exactly to walk away from them far enough. If you know of another way, please elaborate, because I'd very much like to sell it myself if you don't mind.
Another is that if humans ever become "content" with boredom, we cut off all possibility of further growth (however small).
> Yeah, that is a downside.
I would argue that is the most important point in fact. You assume that you are looking for an optimum in a static potential landscape. The dinosaurs kind of did the same.
The only way to keep surviving in a dynamic potential landscape, is to keep optimizing, and not tap yourself on the shoulder for a job well done, and just stop.
A simple example: Kids during puberty kind of seem to be doing the opposite of whatever their parents tell them. Why? Because they know (somehow) that there are other, better minima in reach (even if your parents are the god-kings of the earth) (Who wants to be a carpenter, when you can be a Youtuber, famous for Idontreallycare...)
Anyway, in my opinion, boredom is a solution for the same class of problem, just not intergenerational, but instead more in a day-to-day manner.
comment by Jozdien · 2021-05-19T20:54:43.158Z · LW(p) · GW(p)
The way to recognize local extrema is exactly to walk away from them far enough. If you know of another way, please elaborate, because I'd very much like to sell it myself if you don't mind.
Thinking back on it after a couple of days, I think my reply about finding maximums was still caught up in indirect measures of achieving hedons. We have complete control over our sensory inputs, so we can give ourselves exactly whatever upper bound there exists. Less "semi-random walks in n-space to find extrema" and more "redefine the space so where you're standing goes as high as your program allows".
The only way to keep surviving in a dynamic potential landscape, is to keep optimizing, and not tap yourself on the shoulder for a job well done, and just stop.
For what it's worth, that was just to keep in with the fictional scenario I was describing. In a more realistic scenario of that playing out, we would task AGI with optimizing; we're just relatively standing around anyway.
In that scenario, though: why do we consider growth important? You talked about surviving, but I'm not clear on that - this was assuming a point in the future when we don't have to worry about existential risk (or they're the kind we provably can't solve, like the universe ending) or death of sentient lives. Yes, growth allows for more sophisticated methods of value attainment, but I also said that it's plausible that we reach so high that we start getting diminishing returns. Then, are the benefits of that future potential worth not reaping them to their maximum for a longer stretch of time?
comment by Bernhard · 2021-05-20T18:54:22.234Z · LW(p) · GW(p)
we would task AGI with optimizing
I see, that kind of makes sense. I still don't like it though, if that is the only process to optimize.
For me, in your fictional world, humans are to AI what in our world pets are to humans. I understand that it could come about, but I would not call it a "Utopia".
this was assuming a point in the future when we don't have to worry about existential risk
This is kind of what I meant before. Of course you can assume that, but it is such a powerful assumption that you can use it to derive nearly anything at all (just like in math if you divide by zero). Of course optimizing for survival is not important if you cannot die by definition.
comment by Jozdien · 2021-05-20T20:26:47.584Z · LW(p) · GW(p)
For me, in your fictional world, humans are to AI what in our world pets are to humans.
If I understand your meaning of this correctly, I think you're anthropomorphizing AI too much. In the scenario where AI is well aligned to our values (other scenarios probably not having much of a future to speak of), their role might be something without a good parallel to society today; maybe an active benevolent deity without innate desires.
Of course you can assume that, but it is such a powerful assumption, that you can use it to derive nearly anything at all. (Just like in math if you divide by zero). Of course optimizing for survival is not important, if you cannot die by definition.
I think it's possible we would still die at the end of the universe. But even without AI, I think there would be a future point where we can be reasonably certain of our control over our environment to the extent that barring probably unprovable problems like simulation theory, we can rest easy until then.