Moral patienthood of simulated minds allows an uncountable infinity of value on finite hardware
post by Luck (luck-1) · 2025-04-19T20:41:14.487Z · LW · GW · 2 comments
Computation requires interpretation. For example, a tap can be interpreted as a computer that solves hydrodynamics equations. Or not. It depends on what meaning we assign to the results it produces. This flexibility of interpretation allows us to twist computations in some funny ways.
Let me introduce a segment virtual machine hypervisor. It works this way: it exposes an infinite number of virtual machines, each with its own address, and each address is a real number. The hypervisor can process commands like "start up all machines in the segment [a, b]" and "for all machines in the segment [a, b], execute command xyz".

How can this hypervisor work under the hood? It's simple. It just keeps track of which segments the user has introduced, and for each segment it spawns one ordinary virtual machine. If a segment needs to be split in two, it clones the VM assigned to that segment and assigns one clone to the left half of the segment and the other clone to the right half. This fully implements the described interface of the segment VM hypervisor. While the number of VMs available through the interface is infinite, this method optimizes away all the duplicated computation, so the load on the hardware is proportional to the number of segments, which is finite (see the sketch at the end of this post).

So, the interface of the hypervisor allows us to interpret our shenanigans as a launch of uncountably many virtual machines on finite hardware. Now, if we grant moral patienthood to simulated minds, the only thing left to do is launch a simulation of a happy being on a segment. Boom. You've created uncountably many happy minds. If you're a utilitarian, you might be very happy at this point.

Or you might point out that all of those minds are identical. This might push you towards adding an ad-hoc requirement that a simulated mind must be unique to count towards utilitarianism's total value. But don't rush: I can easily reinterpret all of those minds as unique by incorporating their address into their name. This is also easy to do with a slight adjustment to the hypervisor, for example by adding a post-processing step that replaces the string "<NAME_OF_MIND>" with a pseudorandomly chosen name based on the address of the VM before displaying it on the screen. This way, if you tell the hypervisor to launch digital minds on the segment [0, 10] and then query the mind on machine 3.14 for its name, you'll see Charlie, while if you query the mind living on machine 1.41, it will respond with Peter. And all of this with a still-finite load on the real hardware.

My point is: computation is subjective. It depends heavily on our interpretation, and this flexibility allows us to twist the notion of what is being computed so far that assigning moral value to computations permits hacks that render the entire moral framework meaningless.
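For concreteness, here is a minimal Python sketch of the hypervisor described above. All names in it are hypothetical, and a toy VM that merely records the commands it receives stands in for a real virtual machine; the point is only to show that the bookkeeping stays finite.

```python
import hashlib
import random


class VM:
    """Toy stand-in for a virtual machine: it just records commands."""

    def __init__(self, log=None):
        self.log = list(log) if log else []

    def execute(self, command):
        self.log.append(command)

    def clone(self):
        return VM(self.log)


class SegmentHypervisor:
    """Exposes one VM per real-valued address, but backs each tracked
    segment with a single concrete VM, so hardware load is proportional
    to the (finite) number of segments."""

    def __init__(self):
        self.segments = []  # list of ((lo, hi), VM) pairs

    def start(self, lo, hi):
        # "Start up all machines in the segment [lo, hi]" -- one real VM.
        self.segments.append(((lo, hi), VM()))

    def _split_at(self, x):
        # Split any segment containing x into two halves, cloning its VM,
        # so the two halves can diverge independently afterwards.
        new = []
        for (lo, hi), vm in self.segments:
            if lo < x < hi:
                new.append(((lo, x), vm.clone()))
                new.append(((x, hi), vm.clone()))
            else:
                new.append(((lo, hi), vm))
        self.segments = new

    def execute(self, lo, hi, command):
        # Align segment boundaries with [lo, hi], then run the command
        # once per covered segment -- finite work in total.
        self._split_at(lo)
        self._split_at(hi)
        for (slo, shi), vm in self.segments:
            if lo <= slo and shi <= hi:
                vm.execute(command)

    def query_name(self, address):
        # Post-processing step: derive a pseudorandom name from the
        # address, so the uncountably many "minds" report (mostly)
        # distinct identities at zero extra simulation cost.
        names = ["Charlie", "Peter", "Alice", "Dana", "Miri"]
        seed = hashlib.sha256(repr(address).encode()).hexdigest()
        return random.Random(seed).choice(names)
```

Used like this, one concrete VM backs the entire segment [0, 10], and a second one appears only when a sub-segment is made to diverge:

```python
hv = SegmentHypervisor()
hv.start(0, 10)                             # "uncountably many" VMs on [0, 10]
hv.execute(0, 10, "simulate a happy mind")
print(hv.query_name(3.14))                  # some pseudorandom name, e.g. "Charlie"
print(hv.query_name(1.41))                  # a different name, with high probability
print(len(hv.segments))                     # 1 -- only one concrete VM so far
hv.execute(0, 5, "extra command for the lower half")
print(len(hv.segments))                     # 2 -- the segment was split and cloned
```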
2 comments
Comments sorted by top scores.
comment by AynonymousPrsn123 · 2025-04-19T21:53:40.543Z · LW(p) · GW(p)
Also, does this imply that a technologically mature civilization can plausibly create uncountably infinite conscious minds? What about other sizes of infinity? This, I suppose, could have weird implications for the measure problem in cosmology.
comment by AynonymousPrsn123 · 2025-04-19T21:51:29.365Z · LW(p) · GW(p)
I'm not sure if I understand, but it sounds interesting. If true, does this have any implications for ethics more broadly, or are the implications confined only to our interpretation of computations?