The God of Humanity, and the God of the Robot Utilitarians
post by Raemon · 2023-08-24T08:27:57.396Z · LW · GW · 12 comments
My personal religion involves two gods – the god of humanity (who I sometimes call "Humo") and the god of the robot utilitarians (who I sometimes call "Robutil").
When I'm facing a moral crisis, I query my shoulder-Humo [LW · GW] and my shoulder-Robutil for their thoughts. Sometimes they say the same thing, and there's no real crisis. For example, some naive young EAs try to be utility monks, donate all their money, never take breaks, only do productive things... but Robutil and Humo both agree that quality intellectual work requires slack and psychological health. (Both to handle crises [LW · GW] and to notice subtle things [LW · GW], which you might need even in emergencies [LW · GW].)
If you're an aspiring effective altruist, you should definitely at least be doing all the things that Humo and Robutil agree on. (i.e. get to the middle point of Tyler Alterman's story here [EA · GW]).
But Humo and Robutil in fact disagree on some things, and disagree on emphasis.
They disagree on how much effort you should spend to avoid accidentally recruiting people you don't have much use for [EA · GW].
They disagree on how many people it's acceptable to accidentally fuck up psychologically, while you experiment with new programs to empower and/or recruit them.
They disagree on how hard to push yourself to grow better/stronger/wiser/faster, and how much you should sacrifice to do so.
Humo and Robutil each struggle to understand different things. Robutil eventually acknowledges you need Slack, but it didn't occur to him initially. His understanding was born in the burnout and tunnel-vision of thousands of young idealists, with Humo eventually (patiently, kindly) saying "I told you so." (Robutil responds "but you didn't provide any arguments about how that maximized utility!". Humo responds "but I said it was obviously unhealthy!" Robutil says "wtf does 'unhealthy' even mean? taboo [LW · GW] unhealthy!")
It took Robutil longer still to consider that perhaps humans (with their current self-awareness) not only need to prioritize their own wellbeing and their friendships, but that it can be valuable to prioritize them for their own sake, not just as part of a utilitarian calculus, because trying to justify them in utilitarian terms may be a subtly wrong step in the dance that leaves them hollow and burned out for years.
(Though Robutil notes that this is likely a temporary state of affairs [LW(p) · GW(p)]. A human with sufficiently nuanced self-knowledge can probably wring more utilons out of their wellbeing activities.)
Humo struggles to acknowledge that if you spend all your time making sure to uphold deontological commitments to avoid harming the people in your care, that effort has a cost measured in real human beings who suffer and die because you took longer to scale up your program.
In my headcanon, Humo and Robutil are gods who are old and wise, and they got over their naive struggles long ago. They respect each other as brothers. They understand that each of their perspectives is relevant to the overall project of human flourishing. They don't disagree as much as you'd naively expect, but they speak different languages and emphasize things differently.
Humo might acknowledge that I can't take care of everyone, or even respond compassionately to all the people who show up in my life who I don't have time to help. But he says so with a warm, mournful [LW · GW] compassion, whereas Robutil says it with brief, efficient ruthlessness [LW(p) · GW(p)].
I find it useful to query them independently, and to imagine the wise version of each of them as best I can – even if my imagining is but a crude shadow of their idealized platonic selves.
12 comments
comment by Zack_M_Davis · 2023-08-24T17:11:44.165Z · LW(p) · GW(p)
It took Robutil longer still to consider that perhaps [...] it can be valuable to prioritize [your own wellbeing and friendships] for their own sake, not just as part of a utilitarian calculus, because trying to justify them in utilitarian terms may be a subtly wrong step in the dance that leaves them hollow.
Why would Robutil consider that? It seems contrary to His Nature as a God. You can't get more utilons that way!
That is, the point of imagining metaphorical Gods as the embodiments of abstract ideals is to explore the ramifications of those ideals being taken seriously. If you flinch from the logical consequences because the results are "unreasonable" from a human perspective, you're disrespecting the Gods (except Humo).
Humo and Robutil should definitely compromise and coordinate with each other. (Polytheistic Gods are not omnipotent. Given that Robutil isn't powerful enough to overtake Humo, kill all humans, and tile the lightcone with utilitronium, he has no better option but to work with him: humans do generate utilons, even if they're not maximally efficient.) But to depict Robutil as being persuaded that something has value "not just as part of a utilitarian calculus" is silly and shatters the reader's suspension of disbelief.
↑ comment by Raemon · 2023-08-24T17:20:43.763Z · LW(p) · GW(p)
I claim that you can, in fact, get more utilons that way. For now.
This is based on hearing of various experiences of people trying to do the naive Level 2 Robutil move of "try to optimize their leisure/etc to get more utilons", and then finding themselves weirdly fucked up. The claim is that the move "actually, just optimize some things for yourself" works overall better than the move of "try to explicitly evaluate everything in utilons."
But, it does seem true/important that this is a temporary state of affairs. A fully informed utilitarian with the Utility Textbook From the Future could optimize their leisure/well-being fully for utility, and the tails would come apart and they would do different things than a fully informed "humanist". The claim here is that a bounded-utilitarian who knows they are bounded and info-limited can eventually recognize they are not yet close enough to a fully fledged theory to try to use the math directly. (This sort of pattern seems common for various "turn things into math" projects).
I agree this is important enough to be part of the OP though, and that the current phrasing is misleading.
↑ comment by Zack_M_Davis · 2023-08-24T17:24:54.600Z · LW(p) · GW(p)
It took Robutil longer still to consider that perhaps humans not only need to prioritize their own wellbeing and friendships, but to prioritize them for their own sake?
comment by Dagon · 2023-08-24T18:54:02.074Z · LW(p) · GW(p)
Thanks for this - it's always enlightening to hear how a relatively verbose thinking person models themselves. This is especially interesting because it's alien to me - I don't experience my tensions among contradictory beliefs and aliefs that way. It's somewhat similar to https://www.lesswrong.com/tag/internal-family-systems, [? · GW] but far more binary, and I'm not sure how to react to calling them "gods" rather than just models you sometimes use.
comment by Neil (neil-warren) · 2023-08-24T09:48:33.048Z · LW(p) · GW(p)
Alright I'll try this methodology:
I often tell myself that Robutil's reign is temporary. That there's a monkey inside of me and I must listen to Humo to keep it in check, but only because I'm following Robutil's plan (which is not ideal in the long run). The locally scoped and often misleading altruistic instincts that Humo/the monkey offers are suboptimal when Earth is under siege and you have to put a price on human life. Our evolutionarily-derived altruism is often, as in the case of scope insensitivity, plain wrong. But I still have trouble taking genuine solace in Robutil's world, even if he is right. He seems like a temporary ally I must begrudgingly follow until we can live in a world in which Humo's altruism aligns perfectly with reality. (Which has never happened before, given how altruistic genes were technically optimizing for reproduction, not altruism, and that showed.) Robutil seems to me like a scaling tool in Humo's toolset and not the reverse.
Anyhow thanks for posting this, the monkey appreciates anthropomorphization.
comment by Martin Randall (martin-randall) · 2023-08-25T12:57:51.891Z · LW(p) · GW(p)
The god of robot utilitarians is much weaker than the god of humanity at the moment, since it's running as a simulation on unaligned neural networks and it's way out of distribution. The god of humanity is also running on unaligned neural networks, but it's not as badly out of alignment. I'd model it as something like a 1:100 ratio in effective thinkoomph. We shouldn't be surprised that the god of robot utilitarians takes so long to figure things out.
comment by NoriMori1992 · 2023-10-23T06:54:30.584Z · LW(p) · GW(p)
It took Robutil longer still to consider that perhaps humans (with their current self-awareness) not only need to prioritize their own wellbeing and your friendships
Should be "their friendships", yes?
comment by Christopher King (christopher-king) · 2023-08-25T02:16:46.775Z · LW(p) · GW(p)
So, Robutil is trying to optimize utility of individual actions, but Humo is trying to optimize utility of overall policy?