LessWrong 2.0 Reader
View: New · Old · Top
As a test, I asked a non-philosopher friend of mine what their view is. Here's a transcript of our short conversation: https://docs.google.com/document/d/1s1HOhrWrcYQ5S187vmpfzZcBfolYFIbeTYgqeebNIA0/edit
I was a bit annoyingly repetitive in trying to confirm and re-confirm what their view is, but I think it's clear from the exchange that my interpretation is correct, at least for this person.
cstinesublime on Discomfort Stacking
I'm confused: is the death-to-discomfort comparison based on the cumulative grief and despair that the loved ones and friends of a person who has died might experience? Or are you suggesting that death is a superlatively uncomfortable event for the individual who is dying?
I can't see a way of making discomfort fungible with death, at least partly because experiencing discomfort requires someone to go on living.
I expect a lot more open releases this year and am committed to test their capabilities and safety guardrails rigorously.
Glad you're planning on continual testing; that seems particularly important here, where the default is that every once in a while some new report comes out with a single data point about how good some model is and people slightly freak out. Having the context of testing numerous models over time seems crucial for actually understanding the situation and being able to predict upcoming trends. Hopefully you have found, and will continue to find, ways to reduce the effort needed to run marginal experiments, e.g., having a few clearly defined tasks you repeatedly use, reusing finetuning datasets, etc.
robbbb on When is a mind me?
Is there even anybody claiming there is an experiential difference?
Yep! Ask someone with this view whether the current stream of consciousness continues from their pre-uploaded self to their post-uploaded self, like it continues when they pass through a doorway. The typical claim is some version of "this stream of consciousness will end, what comes next is only oblivion", not "oh sure, the stream of consciousness is going to continue in the same way it always does, but I prefer not to use the English word 'me' to refer to the later parts of that stream of consciousness".
This is why the disagreement here has policy implications: people with different views of personal identity have different beliefs about the desirability of mind uploading. They aren't just disagreeing about how to use words, and if they were, you'd be forced into the equally "uncharitable" perspective that someone here is very confused about how relevant word choice is to the desirability of uploading.
The alternative to this is that there is a disagreement about the appropriate semantic interpretation/analysis of the question. E.g. about what we mean when we say "I will (not) experience such and such". That seems more charitable than hypothesizing beliefs in "ghosts" or "magic".
I didn't say that the relevant people endorse a belief in ghosts or magic. (Some may do so, but many explicitly don't!)
It's a bit darkly funny that you've reached for a clearly false and super-uncharitable interpretation of what I said, in the same sentence you're chastising me for being uncharitable! But also, "charity" is a bad approach to trying to understand other people [LW · GW], and bad epistemology can get in the way of a lot of stuff.
mikhail-samin on When is a mind me?
But I hope the arguments I've laid out above make it clear what the right answer has to be: You should anticipate having both experiences.
Some quantum experiments allow us to mostly anticipate some outcomes and not others. Either quantum physics doesn’t work the way Eliezer thinks it does and the universe is too small to contain many spontaneously appearing copies of your brain, or we should be pretty surprised to continually find ourselves in such an ordered universe, where we don’t start seeing white noise over and over again.
I agree that if there are two copies of the brain that perfectly simulate it, both exist; but it’s not clear to me what I should anticipate in terms of ending up somewhere. Future versions of me with fewer copies would feel like they exist just as much as versions with many copies, or versions running on computers with thicker wires or more current.
But finding myself in an orderly universe, where quantum random number generators produce the expected frequencies of results, requires something more than the simple truth that if there’s an abstract computation being computed, well, it is computed, and if it is experiencing, it’s experiencing (independently of how many computers physically run it, in which proportions, using which physics-simulating frameworks).
I’m pretty confused about what is needed to produce a satisfying answer, conditional on a large enough universe, and the only potential explanation I came up with after thinking for ~15 minutes (before reading this post) was pretty circular and not satisfying (I’m not sure of a valid-feeling way that would allow me to consider something in my brain entangled with how true this answer is, without already relying on it).
(“What’s up with all the Boltzmann brain versions of me? Do they start seeing white noise, starting from every single moment? Why am I experiencing this instead?”)
And in a large enough universe, deciding to run on silicon instead of proteins might be pretty bad, because if the GPUs that run the brain are tiny enough, most future versions of you might end up in weird forms of quantum immortality instead of continuing in the simulation.
If I physically scale my brain size on some outputs of results of quantum dice throws but not others, do I start observing skewed frequencies of results?
gwern on I measure Google's MusicLM over 3 months as it appears to go from jaw-dropping to embarrassingly repeating itself
Any updates on this? For example, I notice that the new music services like Suno & Udio seem to be betraying a bit of mode collapse, but they certainly do not degenerate into the kind of within-song repetition that these did.
quinn-dougherty on Quinn's Shortform
tim-liptrot on Building Blocks of Politics: An Overview of Selectorate Theory
He had become so caught up in building sentences that he had almost forgotten the barbaric days when thinking was like a splash of color landing on a page.
Those redditors have pretty weak arguments. The first comment is basically "the other academics all agree with the popular claim that Gilley is criticizing, so the popular claim must be true". The second basically states, "Gilley correctly argues that Hochschild's evidence for a population decline is too weak. But if the evidence is bad, Gilley can't prove there was a genocide. Therefore Gilley is wrong".
neil-warren on Neil Warren's Shortform
FHI at Oxford
by Nick Bostrom (recently turned into song [LW · GW]):
the big creaky wheel
a thousand years to turn
thousand meetings, thousand emails, thousand rules
to keep things from changing
and heaven forbid
the setting of a precedent
yet in this magisterial inefficiency
there are spaces and hiding places
for fragile weeds to bloom
and maybe bear some singular fruit
like the FHI, a misfit prodigy
daytime a tweedy don
at dark a superhero
flying off into the night
cape a-fluttering
to intercept villains and stop catastrophes
and why not base it here?
our spandex costumes
blend in with the scholarly gowns
our unusual proclivities
are shielded from ridicule
where mortar boards are still in vogue
Update #1
Lots of info to share! Here are a bunch of awesome people confirmed as coming.
Eliezer Yudkowsky | The Sequences [? · GW] | HPMOR | Project Lawful |
Scott Alexander | SlateStarCodex | Astral Codex Ten | UNSONG |
Zvi Mowshowitz | The Zvi | Don't Worry About The Vase |
Alexander Wales | Worth the Candle | Alexander Wales |
Kevin Simler | Melting Asphalt | The Elephant in the Brain |
Katja Grace | World Spirit Sock Puppet | AI Impacts |
Sarah Constantin | Rough Diamonds |
Martin Sustrik | 250bpm | LW [LW · GW] |
Duncan Sabien | Homo Sabiens | r!Animorphs |
John Wentworth | LW [LW · GW] |
Abram Demski | LW [LW · GW] |
Alicorn | Alicorn | LW [LW · GW] |
Jacob Falkovich | PutANumOnIt | LW [LW · GW] |
Zack Davis | LW [LW · GW] |
Daystar Eld | Daystar Eld |
GeneSmith | LW [LW · GW] |
Ozy Brennan | Thing of Things |
Two sessions I'm personally quite excited to go to are Sarah Constantin's "My First Fact Post" and Alicorn's "My First GlowFic" (I want to try to do both of these things!).