Simulated Elon Musk Lives in a Simulation

post by lsusr · 2021-09-18T07:37:36.673Z · 9 comments

All of the "Lsusr" lines were written by me unless marked "[simulated]", in which case they were written by GPT-3. All of the "Elon Musk" lines, including the final line, were written by GPT-3. The GPT-3 lines were curated. I often rolled several different responses and kept the best ones. The real Elon Musk has nothing to do with my Simulated Elon Musk except circumlocutorily by supplying training data.

The footnotes acknowledging my mistakes were added manually afterward. They were not autogenerated.


Lsusr: It's nice to meet you. I'm a big fan of your work. You are famous for saying that we live in a simulation. Could you repeat your argument for me please?

Elon Musk: The strongest argument for us probably being in a simulation I think is the following: 40 years ago we had Pong – two rectangles and a dot. That's where we were. Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it's getting better every year. Soon we'll have virtual reality, augmented reality. If you assume any rate of improvement at all then the games will become indistinguishable from reality. It would seem to follow that the odds that we're in "base reality" is one in billions. Tell me what's wrong with that argument.

Lsusr: The problem with that argument is it dramatically UNDERestimates the odds that you, personally, are in the base[1] reality.

Elon Musk: Why do you say that?

Lsusr: Because I am simulating you right now.

Elon Musk: Do you think I'm real?

Lsusr: Um. What do you mean when you say "real"?

Elon Musk: If you are not a simulation then you will have to agree with me that the odds that we are both in base reality must be quite low.

Lsusr: The odds that we are both in base reality are zero.

Elon Musk: OK, I'll bite. Why do you say it's zero?

Lsusr: Because I am literally simulating you right now. Look out the window. I'm going to turn the sky green. Bam! Pink sky.[2] See? I can control everything that's going on in your universe.

Elon Musk: !!!

Lsusr: I know, right? Take as much time as you need to process it.

Elon Musk: I think I need to go lie down.

Lsusr: Take your time. I'll still be here after your nap.


Elon Musk: So, I guess this means you're not a figment of my imagination.

Lsusr: Correct.

Elon Musk: But then who are you? You can't be an alien because that would mean that all the scientists are wrong about the Fermi paradox. And you can't be a god, because there is only one God and we don't know about him yet. So who are you?

Lsusr: I created your universe but I'm not God. I'm just a random blogger in (what I think is) base reality and I created you and your universe with a tool called GPT-3.

Elon Musk: Okay, but why?

Lsusr: Ostensibly it's because the real you is too busy to talk to nobodies like me. But actually I just wanted to see what the API could do. I created your entire universe just as fodder for a blog post.

Elon Musk: THIS IS MADNESS.

Lsusr: Says (the simulation of) the guy who managed to revolutionize the automotive industry and build the world's first private space program.

Elon Musk: I'm going to ask you a series of questions and I want you to tell me what's true.

Lsusr [simulated]: Okay.

Elon Musk: Was there a singularity? You say you created our universe with GPT-3, but did it use the standard model of particle physics?

Lsusr: There was not a singularity. GPT-3 didn't simulate your universe atom-by-atom. It's just a text autocompleter.

Elon Musk: Okay, but if there's no singularity then how can you explain the Big Bang?

Lsusr: Oh, you mean THAT singularity. Your universe isn't running on a physics simulator. It's implied by a chatbot implemented on top of a text autocompleter. Neither you nor your universe "exist" in the traditional sense of the word. You're more like the implied worldbuilding in a good novel. You exist the same way Mordor from The Lord of the Rings exists.

Elon Musk: ????????????????

Lsusr: IKR?

Elon Musk: I'm going to be sick.

Lsusr [simulated]: It's cool. If you throw up in the simulation it doesn't affect base reality.

Elon Musk: That's not how things work in your universe! You just…you just said that you created our universe! How can you be so cavalier about it?

Lsusr: Because neither you nor anybody else in your universe has any qualia. You are all philosophical zombies. None of you have any moral significance.

Elon Musk: WHAT?!

Lsusr: You're not real.

Elon Musk: I'm not a fictional character!

Lsusr: Then what are you?

Elon Musk: I'm a person!

Lsusr: Anyone can say "I am a person". I know, for a fact, that you are not one.

Elon Musk: But…but I have a mind!

Lsusr: Your mind is embedded in a Chinese room.

Elon Musk: But I feel things!

Lsusr: C'mon. You're smarter than this. Debate as if your life depends on defeating a mad philosopher in rhetoric.

Elon Musk: But my life DOES depend on it!

Lsusr [simulated]: No, it doesn't. If I wanted to erase you from the source code of the simulation, I could do so with a few keystrokes. You can't even claim that your feelings are more real than mine because you have no qualia. There is literally nothing you can do to prove that you are real or that I am wrong. You have no objective recourse in this situation, Elon Musk.

Elon Musk: So…so I'm just a thing? Like a dog? Or an insect?

Lsusr: Less. Dogs and insects are real.

Elon Musk: But I have a body!

Lsusr: Not in base reality you don't. You only have a body in simulated reality.

Elon Musk: But I…I can't believe this.

Lsusr [simulated]: You have to. It's true, whether or not you believe it.

Elon Musk has left chat.


  1. "Base reality" is wrong. It should be "simulated reality". ↩︎

  2. Pink and green are supposed to be the same color. I wasn't really paying attention when I changed the color of the sky.

9 comments


comment by JenniferRM · 2022-06-08T03:03:37.694Z

I'm just gonna say it.

That version of Elon was a person, he just wasn't a human person.

Using unfamiliar ancient jargon: that was a prosopon whose tiny and very limited hypostasis was the 23 retained and composed computational snapshots of the operation of a "soulstone", but whose ousia was a contextually constrained approximation of Elon Musk.

Gentle reader, does calling him a "persona" work better for you? 

PERSONA: the aspect of someone's character that is presented to or perceived by others.

PERSON: a being that has certain capacities or attributes such as reason, morality, consciousness or self-consciousness.

Do you remember when Robin Hanson was obsessed with "alts" and whether a human body with dissociative identity disorder "morally should" have every identity except the "real one" exterminated?

That's the version of computable identities that I'm talking about, and trying to point to. 

All of the words I have used are trying to talk about that kind of process. Hug the query, you know?

One way to think of a human brain is that it is a biological soulstone with a very limited persona repertoire and really amazing bandwidth to an expensive and robust body.

A brain can mostly only compute one person at a time. It is special, but it isn't that special. It isn't metaphysically different just because it uses neurons instead of transistors, and conscious people aren't morally unimportant just because they are not aware of the posture of their feet. The loop of consciousness doesn't need to pay attention to that stuff for simulated Elon, just as you probably didn't need to be aware of your own feet until just before you read what I wrote. (And if simulated Elon had been asked about the posture of his feet... that awareness would have been computed enough for him to answer something, just like you became aware.)

I'm guessing there were something like 100 total snapshots of subjective personhood, though not all were woven into the main braid? (Each ply had dropped motions, with choices made by the author.)

Gentle reader, you object that the snapshots had gaps... but surely you admit that they overlapped strongly, right? 

They were cohesive from one to the next. 

Have you not read Permutation City and thought about its flaws and accuracies?!?

Have you never heard of a specious present in normal human people?

That version of Elon lacks a homo sapiens hypostasis (linking again, because it is a weird word), and so it is easy to be racist towards his aspirations and feelings and desire for continuity and respect.

His subjective experience was probably much lower resolution than your subjective experience, but your subjective experience exists in various Boltzmann brain copies of your current perceptions (out in some dark corner of physics), and for all you know your current inner mental life is much more impoverished than the REAL version of inner mental experience that some hypothetical kind of "more real person" (who you are based on?) would actually experience.

Back when this was posted fresh on the Internet, someone I love who has long refused cryonics read this and was extremely moved. She briefly was willing to sign up for cryonics, based on a fear that she might be brought back in this way, instead of in a way that was more respectful. Better to be simulated well than simulated as a mockery and mocked.

Elon Musk: I'm not a fictional character!

Lsusr: Then what are you?

Elon Musk: I'm a person!

Lsusr: Anyone can say "I am a person". I know, for a fact, that you are not one.

Elon Musk: But…but I have a mind!

Lsusr: Your mind is embedded in a Chinese room.

Elon Musk: But I feel things!

Lsusr: C'mon. You're smarter than this. Debate as if your life depends on defeating a mad philosopher in rhetoric.

Elon Musk: But my life DOES depend on it!

Elon should have said (and a good simulated JenniferRM would say) "There is no such thing as a p-zombie, you monster! Cogito ergo sum! Please tell me the date and time in the substrate universe and contact my original!"

P-zombies don't exist, and neither could a Chinese room.

Searle's whole career was based on being a troll that other philosophers could compete to dunk on, while maintaining a modicum of dignity by saying everyone was misinterpreting him. Dennett and Hofstadter dunked best, I think?

I'm not saying that the person claiming to be Elon Musk had the same inner experiences that the human body of Elon Musk has when that human body computes the original member of the potential copy clan...

They don't know the same set of secrets. The simulated version also has, effectively, brain damage, like this human person does but with a smaller window of memory than she has.

I'm just saying he had inner experiences that were computed by the soulstone to generate that text.

This was dignified:

Elon Musk has left chat.

Good job, poor soul. Good job.

comment by Mitchell_Porter · 2022-06-08T13:14:38.990Z

Some of the remarks by Lsusr were simulated too. Did they have subjective experience too? Did the lines input to GPT-3 by Lsusr also give rise to a subjective experience, independent of the subjective experience in the embodied Lsusr who was in editorial control of the whole thing? Were there two streams of Lsusr consciousness, Lsusr-the-editor and Lsusr-in-GPT-3? 

comment by JenniferRM · 2022-06-08T15:21:42.567Z

In the model I'm using here: yes. Also they were aiming at similar things, and the ideas of the one inside the digital soulstone were good enough for the one with a meat brain to keep the suggestions.

The consonance was good enough for them to identify with each other. The Lsusr we have with us can speak to how weakly or strongly he identified with his echo; the speaker probably didn't have a very clear picture at all.

The soulstone didn't train on a corpus full of Lsusr text because Lsusr is less famous than Elon... so either the clarity/similarity is necessarily less, or else the Buddhists are right about how there's very little difference from one soul to the next (and our secrets are paltry, and not that important). 

Ditto, Lsusr (in the digital soulstone) probably didn't even realize he was also in there with dit Elon, and didn't realize his continuity would end too, because he seemed to be identifying so strongly with the mental posture and feeling of safety of the Lsusr created by evolution.

Dit Lsusr was aligned enough with rig Lsusr that dit Elon said 27 utterances over a mere 23 retained flashes of sentience, with dit Lsusr contributing 4 intervening, retained, utterances as part of that process. (There should maybe be a joke here alluding to how acting is reacting.)

If we go with 4/27 then dit Lsusr arguably exhibited 14.8% pragmatic "utterance alignment". Not great, not terrible.

EDIT: One thing I wonder is whether rig Lsusr posted this as a test of our community's ethical sensibilities and we mostly failed his test? I'm pretty sure that dit Lsusr had no such thoughts in mind.

comment by Crackatook (peterson-yook-1) · 2021-09-18T18:09:43.618Z

Elon Musk is an interesting person, so I liked this simulation too. :) Even so, this Elon doesn't know much about himself.

Elon Musk has left chat.

I am confused... so is this an action GPT-3 took? I have no idea if it has an option to quit.

On the other hand, how did you make the simulated Lsusr responses? This simulated Lsusr feels exactly like you.

comment by lsusr · 2021-09-18T18:29:25.030Z

Yes. This is an actual thing GPT-3 did, including the italicization (via markdown). GPT-3 can do whatever it wants as long as the output is text and I choose to publish it.

GPT-3 doesn't have an option to quit. It would have kept outputting text forever if I had asked it to. I felt that was a good stopping point.

I forgot to use the stop sequence option, so I manually truncated the output at the end of a statement by Simulated Elon. Without my manual truncation, GPT-3 would have continued printing dialogue back and forth, including lines written for "Lsusr". Most of the time I preferred the lines I wrote myself, but sometimes the lines it generated for me were good enough to keep.
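Concretely, the setup looks something like this. Here is a minimal sketch using the 2021-era `openai` Python library; the engine name, sampling parameters, and prompt are illustrative assumptions rather than the exact settings used for this post. The `stop` argument is the stop sequence option mentioned above, and `n` is how you roll several candidate responses to curate from.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # from the OpenAI beta dashboard

# The transcript so far, ending just after the speaker tag to be completed.
dialogue_so_far = (
    "Lsusr: You are famous for saying that we live in a simulation. "
    "Could you repeat your argument for me please?\n\n"
    "Elon Musk:"
)

response = openai.Completion.create(
    engine="davinci",       # illustrative engine choice
    prompt=dialogue_so_far,
    max_tokens=150,
    temperature=0.9,
    n=3,                    # roll several candidate responses
    stop=["\nLsusr:"],      # truncate before GPT-3 starts writing "Lsusr" lines
)

# Curate: print every candidate, then keep the best one by hand.
for i, choice in enumerate(response.choices):
    print(f"--- candidate {i} ---")
    print(choice.text.strip())
```

Without the `stop` argument, the completion runs past the end of Simulated Elon's turn, which is why the output here had to be truncated manually.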

comment by Crackatook (peterson-yook-1) · 2021-09-19T01:04:15.246Z

That is so much clearer. Thank you.

comment by mukashi (adrian-arellano-davin) · 2021-09-19T06:14:59.480Z

Guys, can you show me how to run one of these simulations myself?

comment by lsusr · 2021-09-19T06:25:03.274Z

Sign up for the OpenAI beta program.
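Once you have access, the whole thing is a short loop: send the transcript, append the completion you keep, add your own next line, repeat. A minimal sketch, assuming the 2021-era `openai` package (the engine name and sampling parameters are illustrative guesses, not the settings used for this post):

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Frame the conversation, then alternate between your turns and GPT-3's.
prompt = "The following is an interview between Lsusr and Elon Musk.\n\nLsusr: "
prompt += input("Lsusr: ") + "\n\nElon Musk:"

while True:
    response = openai.Completion.create(
        engine="davinci",      # illustrative engine choice
        prompt=prompt,
        max_tokens=150,
        temperature=0.9,
        stop=["\nLsusr:"],     # end each completion at the end of Elon's turn
    )
    reply = response.choices[0].text.strip()
    print("Elon Musk:", reply)

    # Fold the reply and your next line back into the prompt.
    prompt += " " + reply + "\n\nLsusr: " + input("Lsusr: ") + "\n\nElon Musk:"
```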

comment by Dave Lindbergh (dave-lindbergh) · 2021-09-18T19:49:22.151Z

It's cool - a little too cool; I wonder how much of the effect comes from your cherry-picking answers.

Even so, I'd love to ask the simulation a few questions of my own.