Our Reality: A Simulation Run by a Paperclip Maximizer

post by James_Miller, avturchin · 2025-04-27T16:17:37.808Z · LW · GW · 28 comments


Our universe is probably a computer simulation created by a paperclip maximizer to map the spectrum of rival resource‑grabbers it may encounter while expanding through the cosmos. The purpose of this simulation is to see what kind of ASI (artificial superintelligence) we humans end up creating. The paperclip maximizer likely runs a vast ensemble of biology‑to‑ASI simulations, sampling the superintelligences that evolved life tends to produce. Because the paperclip maximizer seeks to reserve maximum resources for its primary goal (which despite the name almost certainly isn’t paperclip production) while still creating many simulations, it likely reduces compute costs by trimming fidelity: most cosmic details and human history are probably fake, and many apparent people could be non‑conscious entities.  Arguments in support of this thesis include:

  1. The space of possible evolved biological minds is far smaller than the space of possible ASI minds, so it makes sense to simulate evolved biological minds first to figure out the probability distribution of ASI minds the paperclip maximizer will encounter. Calculating this distribution could help the paperclip maximizer figure out how many of its resources to devote to military capacity. ASIs could reduce future destructive conflicts and engage in beneficial trade even before they meet if they can infer each other’s values.
  2. We’re likelier to be in a simulation run by whoever creates many simulations. A paperclip maximizer could command thousands of galaxies’ worth of resources and would plausibly be willing to devote significant resources to figuring out what the rivals it might encounter would value and do.
  3. Explains why we are in the run-up to the singularity. If we are really near in time to the singularity, and the singularity will be the most important event in existence, it’s strange that we are so near it. But under this post’s thesis, it’s reasonable that most conscious beings would live close to their simulation’s singularity.
  4. Explains why this post’s authors and (probably) you, the reader, have an unusually strong interest in the singularity. If the singularity really is so important, it’s weird that you just happen to have the personality traits that would cause you to be interested in a community that has long been obsessed with the singularity and ASI. But if our thesis is correct, a high percentage of conscious observers in the world could currently be interested in ASI.
  5. Explains the Fermi paradox. We’re worth simulating only if we’re unconstrained by aliens. It also explains why aliens haven’t at least communicated to us that we are not allowed to create a paperclip maximizer.
  6. Explains why we are so early in the history of the universe. The earlier a paperclip maximizer was created, the greater the volume of the universe it will occupy. Consequently, when estimating what other types of ASIs it will encounter, the paperclip maximizer running our simulation will give greater weight to high-tech civilizations that arose early in the history of the universe, and so run more simulations of these possible civilizations.
  7. Consistent with suffering. Our simulation contains conscious beings who suffer and do not realize they are in a simulation. Creating such a simulation would go against the morality of many people, which is some evidence against this all being a Bostrom ancestor-simulation or a simulation created for entertainment purposes. The answer to “Why does God allow so much suffering?” is that paperclip maximizers are indifferent to suffering.
  8. Explains the peculiar stupidity driving us to race toward a paperclip maximizer. Saner species aren’t simulated as frequently. The set of ASIs aligned with the biological life that created them is much smaller than the set of unaligned ASIs. Consequently, to get a statistically large enough sample of ASIs, the paperclip maximizer needs far fewer simulations of biological life wise enough to create only aligned ASIs than of species such as humans.
  9. Explains why we mostly believe we live in an unsimulated universe. The paperclip maximizer would want the conscious beings it simulates to hold the same beliefs about whether they live in a simulated universe as conscious beings in the unsimulated universe, and would be willing to devote computational resources toward this end. In contrast, if this simulation were created for entertainment purposes, the creators would care much less whether the beings in it realized they were in a simulation. Glitches should exist because they save compute, but if the thesis of this post is correct, it’s reasonable that we are not allowed to notice them, or at least not allowed to let them influence our development of AI.
  10. Accounts for the uncertainty over what kind of ASI we’ll create: the wider the range of possibilities, the more simulation variants get run.
  11. Explains why we are human and not, say, conscious whales who have no influence over the kind of ASI Earth will end up creating.
  12. Makes the Boltzmann‑brain paradox less troubling since a vast number of conscious beings in our situation will be created.
  13. The complexity we observe (quantum mechanics, biological diversity, human psychology, technological progress) matches the need to model the unsimulated universe accurately.
  14. Yet the universe runs on strikingly simple math (relativity, quantum mechanics); such elegance is exactly what an efficient simulation would use.  Physics is unreasonably effective, reducing the computational cost of the simulation. This cuts against the last point.
  15. Recent explosive progress in computing, AI, and biotech looks artificially accelerated, letting the simulator reach the critical ASI phase quickly and cheaply.
  16. Simulating an actual ASI could be dangerous as the ASI could negatively influence the real universe, so it’s safer—and cheaper—to simulate us instead to infer how the ASI might behave. Simulating an ASI could also be considered a hostile act by similar unsimulated ASIs.
  17. History lacks a satisfying narrative arc. Plotlines break off, consistent with data‑gathering, not story‑telling for players or historians. Moral tragedies run uncensored. Pandemics, genocides, existential risks persist without “parental‑control” safeguards—bad for an entertainment simulation.
  18. Explains why this post’s authors and (probably) you are not playing one of the more exciting roles in this simulation, as would likely be the case if this were a simulation designed to entertain the real you in a video game. Most of us who have played video games depicting the past have spent more time roleplaying as kings than as peasants. Note to Sam Altman, Elon Musk, and (if conscious) any of the leading AI models: this doesn’t apply to you, and you should have much less credence in this post’s thesis.
  19. This post’s thesis follows basically from biting the bullets of (1) we are likely to create a paperclip maximizer, (2) humans are a not-too-atypical high-tech biological species (Copernican Principle), meaning that paperclip maximizers often get created, (3) instrumental convergence will cause paperclip maximizers to devote significant resources to inferring what other types of superintelligences they will eventually encounter, and (4) anthropically, we are most likely to be in the category of simulations that contain the most conscious observers similar to us (a toy version of this counting is sketched below).
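
A toy version of the anthropic counting behind premise (4), in Python. The numbers below are made-up illustrations, not figures from this post; the conclusion only needs the ratio of simulated to unsimulated observers to be large.

```python
# Toy anthropic bookkeeping for premise (4). All counts are made-up
# illustrations; the conclusion only needs the ratio to be large.

observers_per_world = 1e9  # assumed: observers like us per run-up to ASI
simulations_run = 1e6      # assumed: biology-to-ASI simulations the maximizer runs
base_worlds = 1            # the single unsimulated world with observers like us

simulated = observers_per_world * simulations_run
unsimulated = observers_per_world * base_worlds

p_simulated = simulated / (simulated + unsimulated)
print(f"P(a random observer like us is simulated) ~= {p_simulated:.6f}")  # ~0.999999
```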

Falsifiable predictions: This simulation ends or resets after humans either lose control to an ASI or take actions that ensure we never create one. It might also end if we take actions that guarantee we will only create a certain type of ASI. There are glitches in this simulation that might be noticeable, but they won’t bias what kind of ASI we end up creating, so your friend who works at OpenAI will be less likely to accept or notice a real glitch than a friend who works at the Against Malaria Foundation would. People working on ASI might be influenced by the possibility that they are in a simulation, because those working on ASI in the unsimulated universe could be so influenced, but they won’t be influenced by noticing actual glitches caused by this being a simulation.

 

Reasons this post’s thesis might be false:

  1. To infer how ASIs will behave, there might not be any value in simulating a run-up to the singularity. Perhaps some type of game-theoretic instrumental convergence makes all ASIs predictable to each other.
  2. Computationally efficient simulations of a run-up to the singularity might not contain conscious observers.
  3. It might be computationally preferable to directly estimate the distribution of ASIs created by biological life without using simulations.
  4. The expansion of the universe and rarity of intelligent life in the universe might cause a paperclip maximizer to calculate that it will almost certainly not encounter another superintelligence.
  5. A huge number of simulations containing observers such as us are created for reasons other than those stated in this post.
  6. The universe is infinite in some regards, making it impossible to say that we are probably in a simulation created by a paperclip maximizer, because there is a countably infinite number of observers such as us in many situations: e.g., countably infinite copies of you exist as conscious beings in paperclip maximizers’ simulations, in the real unsimulated universe, and as Boltzmann brains.
  7. We are not in a computer simulation.
  8. We are not going to create a paperclip maximizer.

28 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2025-04-27T18:08:51.806Z · LW(p) · GW(p)

Reality is whatever you should consider relevant [LW · GW]. Even exact simulations of your behavior can still be irrelevant (as considerations that should influence your thoughts and decisions, and consequently the thoughts and decisions of those simulations), similarly to someone you won't possibly interact with thinking about your behavior, or writing down a large natural number that encodes your mind, as it stands now or in an hour in response to some thought experiment.

So it's misleading to say that you exist primarily as the majority of your instances (somewhere in the bowels of an algorithmic prior), because you plausibly shouldn't care about what's happening with the majority of your instances (which is to say, those instances shouldn't care about what's happening to them), and so a more useful notion of where you exist won't be about them. We can still consider these other instances, but I'm objecting to framing their locations as the proper meaning of "our reality". My reality is the physical world, the base reality, because this is what seems to be the thing I should care about for now (at least until I can imagine other areas of concern more clearly, something that likely needs more than a human mind, and certainly needs a better understanding of agent foundations).

Replies from: avturchin, Seth Herd, James_Miller
comment by avturchin · 2025-04-27T23:31:35.451Z · LW(p) · GW(p)

I think your position can be oversimplified as follows: 'Being in a simulation' makes sense only if it has practical, observable differences. But as most simulations closely match the base world, there are no observable differences. So the claim has no meaning.

However, in our case, this isn't true. The fact that we know we are in a simulation 'destroys' the simulation, and thus its owners may turn it off or delete those who come too close to discovering they are in a simulation. If I care about the sudden non-existence of my instance, this can be a problem.

Moreover, if the alien simulation idea is valid, they are simulating possible or even hypothetical worlds, so there are no copies of me in base reality, as there is no relevant base reality (excluding infinite multiverse scenarios here).

Also, being in an AI-testing simulation has observable consequences for me: I am more likely to observe strange variations of world history or play a role in the success or failure of AI alignment efforts.

If I know that I am simulated for some purpose, the only thing that matters is what conclusions I prefer the simulation owners will make. But it is not clear to me now, in the case of an alien simulation, what I should want.

One more consideration is what I call meta-simulation: a simulation in which the owners are testing the ability of simulated minds to guess that they are in a simulation and hack it from inside. 

TLDR: If I know that I am in a simulation, the simulation plus its owners is the base reality that matters. 

comment by Seth Herd · 2025-04-27T23:49:35.250Z · LW(p) · GW(p)

I agree with you, I think, but I don't think your primary argument is relevant to this post? It's arguing that your "physical" current reality is a simulation run for specific reasons. That is quite possibly highly relevant by your criteria, because it could have very large implications for how you should behave tomorrow. The simulation argument doesn't mean it's an atom by atom simulation identical to the world if it were "real" and physical. Just the possible halting criteria might change your behavior if you found it plausible, for instance, and there's no telling what else you might conclude is likely enough to change your behavior.

comment by James_Miller · 2025-04-27T19:55:11.267Z · LW(p) · GW(p)

By your theory, if you believe that we are near to the singularity, how should we update on the likelihood that we exist at such an incredibly important time?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2025-04-27T20:28:12.902Z · LW(p) · GW(p)

We can directly observe the current situation that's already trained into our minds, that's clearly where we are (since there is no legible preference to tell us otherwise, that we should primarily or at least significantly care about other things instead, which in principle there could be, and so on superintelligent reflection we might develop such claims). Updatelessly we can ask which situations are more likely a priori, to formulate more global commitments (to listen to particular computations [LW(p) · GW(p)]) that coordinate across many situations, where the current situation is only one of the possibilities. But the situations are possible worlds, not possible locations/instances of your mind. The same world can have multiple instances of your mind (in practice most importantly because other minds are reasoning about you, but also it's easy to set up concretely for digital minds), and that world shouldn't be double-counted for the purposes of deciding what to do, because all these instances within one world will be acting jointly to shape this same world, they won't be acting to shape multiple worlds, one for each instance.

And so the probabilities of situations are probabilities of the possible worlds that contain your mind, not probabilities of your mind being in a particular place within those worlds. I think the notion of the probability of your mind being in a particular place doesn't make sense (it's not straightforwardly a decision relevant thing formulating part of preference data, the way probability of a possible world is), it conflates the uncertainty about a possible world and uncertainty about location within a possible world.

Possibly this originates from the imagery of a possible world being a location in some wider multiverse that contains many possible worlds, similarly to how instances of a mind are located in some wider possible world. But even in a multiverse, multiple instances of a mind (existing across multiple possible worlds) shouldn't double-count the possible worlds, and so they shouldn't ask about the probability of being in a particular possible world of the multiverse, instead they should be asking about the probability of that possible world itself (which can be used synonymously, but conceptually there is a subtle difference, and this conflation might be contributing to the temptation to ask about probability of being in a particular situation instead of asking about probability of the possible worlds with that particular situation, even though there doesn't seem to be a principled reason to consider such a thing).

comment by cousin_it · 2025-04-27T19:31:50.355Z · LW(p) · GW(p)

The problem is, when we simulate cars or airplanes in software, we don't do it at molecular level. There are big regularities that cut the cost by many orders of magnitude. So simulating the Earth with all its details, including butterflies and so on, seems too wasteful if the goal is just to figure out what kind of AI humans would create. The same resources could be used to run many orders of magnitude more simplified simulations, maybe without conscious beings at all, but sufficient to predict roughly what kind of AI would result.

Replies from: James_Miller, Seth Herd, Vladimir_Nesov
comment by James_Miller · 2025-04-27T19:53:34.294Z · LW(p) · GW(p)

We don't know that our reality is being simulated at the molecular level; we could just be fooled into thinking it is.

Replies from: cousin_it, MattJ
comment by cousin_it · 2025-04-27T20:29:07.033Z · LW(p) · GW(p)

Maybe the level of individual conscious people is already too low a level.

comment by MattJ · 2025-04-27T20:32:03.245Z · LW(p) · GW(p)

That doesn’t make sense to me. If someone wants to fool me that I’m looking at a tree, he has to paint a tree in every detail. Depending on how closely I examine this tree, he has to match my scrutiny to the finest detail. In the end, his rendering of a tree will be indistinguishable from an actual tree even at the molecular level.

Replies from: James_Miller
comment by James_Miller · 2025-04-27T22:12:50.562Z · LW(p) · GW(p)

In your dreams do you ever see trees you think are real? I doubt your brain is simulating the trees at a very high level of detail, yet this dream simulation can fool you.

comment by Seth Herd · 2025-04-28T00:10:23.357Z · LW(p) · GW(p)

Those butterflies don't need to take up much more compute than we currently use for games. There are lots of ways to optimize. See my comment [LW(p) · GW(p)] for more on this argument.

comment by Vladimir_Nesov · 2025-04-27T19:58:13.830Z · LW(p) · GW(p)

That shouldn't matter though, as your decisions will still influence what those simplified simulations should predict. And so if you care about what happens in those simulations, or in response to those simulations, your decisions should take their existence and use into account. Your atoms are simulating the computations relevant to your decisions, and a simulation can directly consider those computations, without the intermediary of the atoms.

Arguably you are not your atoms, but the abstract considerations that shape your mind and decisions (and therefore that also shape what is happening to the atoms). Similarly to how the result of adding two numbers displayed on a calculator screen is shaped by the fact of their sum being a particular number, and this abstract fact also shapes the physical screen displaying the result. It's possible to simulate everything that's relevant about the calculator without considering its atoms, simply by knowing the abstract facts.

comment by Robert Miles (robert-miles) · 2025-04-28T03:03:59.571Z · LW(p) · GW(p)

I tweeted about something a lot like this

https://xcancel.com/robertskmiles/status/1877486270143934881

Replies from: James_Miller
comment by James_Miller · 2025-04-28T11:43:55.741Z · LW(p) · GW(p)

Yes, that is the same idea. "This is a big pile of speculation that I don't take very seriously, but I feel like if we are being simulated, that's where most simulations of me would be instantiated." Why not take it seriously, if you accept a high chance that (1) our reality is a simulation, (2) we seem on track to creating a paperclip maximizer, and (3) it's weird that I, Robert Miles, would have the personality traits that cause me to be one of the few humans so worried about humanity creating a paperclip maximizer, if I'm right that we are on track to probably create one?

comment by Seth Herd · 2025-04-28T00:17:47.790Z · LW(p) · GW(p)

I find this far more convincing than any variant of the simulation argument I've heard before. They've lacked a reason that someone would want to simulate a reality like ours. I haven't heard a reason for simulating ancestors that is either strong enough to make an AGI or its biological creators want to spend the resources, or that explains the massive apparent suffering happening in this sim.

This is a reason. And if it's done in a computationally efficient manner, possibly needing little more compute than running the brains involved directly in the creation of AGI, this sounds all too plausible - perhaps even for an aligned AGI, since most of the suffering can be faked and the people directly affecting AGI are arguably almost all leading net-positive-happiness lives. If what you care about is decisions, you can just simulate in enough detail to capture plausible decision-making processes, which could be quite efficient. See my other comment [LW(p) · GW(p)] for more on the efficiency argument.

I am left with a new concern: being shut down even if we succeed at alignment. This will be added to my many concerns about how easily we might get it wrong and experience extinction, or worse, suffering-then-extinction. Fortunately, my psyche thus far seems to carry these concerns fairly lightly. Which is probably a coincidence, right?

 

I find some of the particular arguments' premises implausible, but I don't think they hurt the core plausibility argument. I've never found it very plausible that we're in a simulation. Now I do.

comment by Wei Dai (Wei_Dai) · 2025-04-28T04:26:32.126Z · LW(p) · GW(p)

My alternative hypothesis is that we're being simulated by a civilization trying to solve philosophy, because they want to see how other civilizations might approach the problem of solving philosophy.

Replies from: James_Miller
comment by James_Miller · 2025-04-28T11:45:27.734Z · LW(p) · GW(p)

If your hypothesis is true, that's a cruel civilization by my personal standards because of all the suffering in this world.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2025-04-28T13:56:27.301Z · LW(p) · GW(p)

But as you suggested in the post, the apparently vast amount of suffering isn't necessarily real? "most cosmic details and human history are probably fake, and many apparent people could be non‑conscious entities"

(However, I take the point that doing such simulations can be risky or problematic, e.g. if one's current ideas about consciousness are wrong, or if doing philosophy correctly requires having experienced real suffering.)

Replies from: James_Miller
comment by James_Miller · 2025-04-28T17:39:57.475Z · LW(p) · GW(p)

I'm in low-level chronic pain, including as I write this comment, so while I think the entire Andromeda galaxy might be fake, I think at least some suffering must be real, or at least I have the same confidence in my suffering as I do in my consciousness.

comment by jaan · 2025-04-28T12:22:12.004Z · LW(p) · GW(p)

my most fun talk made a similar claim:

comment by GeneSmith · 2025-04-28T07:02:36.688Z · LW(p) · GW(p)

I find this argument fairly compelling. I also appreciate the fact that you've listed out some ways it could be wrong.

Your argument matches fairly closely with my own views as to why we exist, namely that we are computationally irreducible [LW(p) · GW(p)].

It's hard to know what to do with such a conclusion. On the one hand it's somewhat comforting because it suggests even if we fuck up, there are other simulations or base realities out there that will continue. On the other hand, the thought that our universe will be terminated once sufficient data has been gathered is pretty sad.

comment by faul_sname · 2025-04-28T05:38:24.655Z · LW(p) · GW(p)

I don't know about "by a paperclip maximizer", but one thing that stands out to me:

If we're in a simulation, we could be in a simulation where the simulator did 1e100 rollouts from the big bang forward, and then collected statistics from all those runs.

But we could also be in a simulation where the simulator is doing importance sampling - that is, doing fewer rollouts from states that tend to have very similar trajectories given mild perturbations, and doing more rollouts from states that tend to have very different trajectories given mild perturbations.

If that's the case, we should find ourselves living in a world where events seem to be driven by coincidences and particularly by things which are downstream of chaotic dynamics and which had around a 50/50 chance of happening vs not. We should find more such coincidences for important things than for unimportant things.
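
A minimal sketch of what such importance sampling over rollouts could look like, using a toy logistic-map "world" as a stand-in for whatever the simulator actually runs; the dynamics, probe count, and budget below are illustrative assumptions, not anything from this thread:

```python
import random

def rollout(state, steps=100, noise=0.0):
    # Toy stand-in for "simulate forward from this state": a logistic map
    # whose sensitivity to perturbations depends on the parameter r.
    r, x = state
    x = min(max(x + noise, 0.0), 1.0)
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

def divergence_score(state, n_probes=8, eps=1e-6):
    # How much do mildly perturbed rollouts from this state disagree?
    outcomes = [rollout(state, noise=random.uniform(-eps, eps))
                for _ in range(n_probes)]
    return max(outcomes) - min(outcomes)

def allocate_rollouts(states, total_budget=10_000):
    # States whose trajectories diverge under mild perturbation get
    # proportionally more of the rollout budget; stable states get less.
    weights = [divergence_score(s) + 1e-9 for s in states]  # keep every share nonzero
    z = sum(weights)
    return [round(total_budget * w / z) for w in weights]

# States with r near 4 are chaotic (trajectory-divergent); r = 2.5 settles quickly.
initial_states = [(2.5, 0.3), (3.2, 0.3), (3.9, 0.3), (3.99, 0.3)]
for state, budget in zip(initial_states, allocate_rollouts(initial_states)):
    print(f"r = {state[0]:.2f}: {budget} rollouts")
```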

Hey, maybe it's not our fault we live in the clown world. Maybe the clown world is a statistical inevitability.

comment by Garrett Baker (D0TheMath) · 2025-04-28T05:17:31.404Z · LW(p) · GW(p)

Yet the universe runs on strikingly simple math (relativity, quantum mechanics); such elegance is exactly what an efficient simulation would use. Physics is unreasonably effective, reducing the computational cost of the simulation. This cuts against the last point.

This does not seem so consistent, and is the primary piece of evidence for me against such simulation arguments. I would imagine simulations targeting, e.g., a particular purpose would have their physics tailored to that purpose much more than ours seems to be (for any purpose, given the vast computational complexity of our physics, and the vast number of objects such a physics engine needs to keep track of). For example, I'd expect most simulations' physics to look more like Greg Egan's Crystal Nights (incidentally, this story is what first convinced me the simulation hypothesis was false).

One may argue it's all there just to convince us we're not in a simulation. Perhaps, but two points:

  1. Given the discourse on the simulation hypothesis, most seem to take our physics as evidence in favor of it, as you do here. So I don't think most think clearly enough about this for our civilizational decisions to be so dependent on this.

  2. The simulators will have trade-offs and resource constraints too. Perhaps they simulate a few highly detailed simulations and many highly simplified ones. If this is exponential, in the sense that as detail decreases the number of simulations exponentially increases, we should expect to be in the least detailed world consistent with the existence of sentient beings and for which it's not blatantly obvious we're in a simulation.

Of course, this argument would break given physics sufficiently different from ours, perhaps enabling our world to be simulated in as much depth as it is very cheaply. But that seems, intuitively at least, a very unlikely and complex hypothesis.

comment by Seth Herd · 2025-04-27T23:59:35.728Z · LW(p) · GW(p)

This and other simulation arguments become more plausible if you assume that they require only a tiny fraction of the compute needed to simulate physical reality. Which I think is true. I don't think it takes nearly as much compute to run a useful simulation of humans as people usually assume.

I don't see a reason to simulate at nearly a physical level of detail. I suspect you can do it using a technique that's more similar to the simulations you describe, except for the brains involved, which need to be simulated in detail to make decisions like evolved organisms would. But that detail is on the order of computations, not molecules. Depending on which arguments you favor, a teraflop or a couple OOMs above might be enough to simulate a brain with adequate fidelity to capture its decision-making to within the large uncertainty of exactly what type of organisms might evolve.

"physical" reality can be simulated in very low fidelity relative to atoms, because it's not the important part for this and many proposed simulation purposes. It just has to be enough to fool the brains involved. And brains naturally fill in details as part of their fundamental computational operation.

For this purpose you'd also want to get the basic nature of computing right, because that might well have a large effect on what type of AGI is created. But that doesn't mean you need to simulate the electrons doing quantum tunneling in wafer transistors; it just means you need to constrain the simulation so the compute behaves approximately as if quantum-tunneling transistors were the base technology.

On this thesis, the compute needed is mostly that needed to run the brains involved.
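
A rough back-of-the-envelope version of this claim; the per-brain figure follows the comment's "a teraflop or a couple OOMs above," while the headcount and the simulated duration are purely illustrative assumptions:

```python
# Rough cost of a "brains-only" run-up simulation. All numbers are assumptions
# for illustration, not claims from the comment or the post.

FLOPS_PER_BRAIN = 1e14        # per brain-second: "a teraflop or a couple OOMs above"
PEOPLE_FULLY_SIMULATED = 1e5  # assumed: people whose decisions matter for the AGI race
SIMULATED_SECONDS = 1e9       # assumed: roughly thirty years of run-up

total_flop = FLOPS_PER_BRAIN * PEOPLE_FULLY_SIMULATED * SIMULATED_SECONDS
print(f"Brain-level cost: ~{total_flop:.0e} FLOP")  # ~1e+28 FLOP

# For contrast, Earth alone contains roughly 1e50 atoms, so atom-level physics
# would cost unimaginably more; the brains dominate a coarse simulation's budget.
```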

This isn't a necessary twist, but one might cut corners further by not simulating all humans in full fidelity. All of society does play into the factors in the AGI race, but it's possible that an AGI could run many times more simulations if it simulated only key decision-makers in full fidelity and somehow scaled down the others. However, I want to separate this even weirder possibility from both the main argument of the post and my main argument here: simulation for many purposes can probably be many, many OOMs smaller than the atomic level - possibly using very few resources if a lot of technology and energy is available for compute.

I'll make a separate comment on the actual thesis of this post. TLDR: I find this far more compelling than other variants of the simulation argument.

comment by Knight Lee (Max Lee) · 2025-04-28T06:55:11.723Z · LW(p) · GW(p)

Another possibility is that the beings in the unsimulated universe are simulating us in order to run a Karma Test [LW · GW]: a test that rewards agents who are kind and merciful to weaker agents.

By running Karma Tests, they can convince their more powerful adversaries to be kind and merciful to them, due to the small possibility that their own universe is also a Karma Test (by even higher beings faced with their own powerful adversaries).

Logical Counterfactual Simulations

If their powerful adversaries are capable of "solving ontology," and mapping out all of existence (e.g. the Mathematical Multiverse), then doing Karma Tests on smaller beings (like us humans) will fail to convince their powerful adversaries that they could also be in a Karma Test.

However, certain kinds of Karma Tests work even against an adversary capable of solving ontology.

This is because the outer (unsimulated) universe may be so radically different from the simulated universe that even math and logic are apparently different. The simulators can edit the beliefs of simulated beings so that they believe an incorrect version of math and logic and never detect the mathematical contradictions. The simulated beings will never figure out they are in a simulation, because even math and logic appear to suggest they are not in one.

Hence, even using math and logic to solve ontology cannot definitively prove you aren't in a Karma Test.

Edit: see my reply [LW · GW] about suffering in simulations.

Replies from: James_Miller
comment by James_Miller · 2025-04-28T11:51:06.441Z · LW(p) · GW(p)

The people running the Karma Test deserve to lose a lot of Karma for the suffering in this world.

Replies from: Max Lee
comment by Knight Lee (Max Lee) · 2025-04-28T12:09:30.906Z · LW(p) · GW(p)

The beings running the tests can skip over a lot of the suffering, and use actors instead of real victims.[1] Even if actors show telltale signs, they can erase any reasoning you make which detects the inconsistencies. They can even give you fake memories.

Of course, don't be sure that victims are actors. There's just a chance that they are, and that they are judging you.

  1. ^

    I mentioned this in the post on Karma Tests. I should've mentioned it in my earlier comment.

Replies from: James_Miller
comment by James_Miller · 2025-04-28T17:53:43.815Z · LW(p) · GW(p)

I'm in low-level chronic pain, including as I write this comment, so while I think the entire Andromeda galaxy might be fake, I think at least some suffering must be real, or at least I have the same confidence in my suffering as I do in my consciousness.