Comments

Comment by qmotus on Open thread, May 15 - May 21, 2017 · 2017-05-25T14:42:28.841Z · LW · GW

Actually, I'm just interested. I've been wondering whether big world immortality is a subject that would make people a) think that the speaker is nuts, b) freak out and possibly go nuts, or c) go nuts because they think the speaker is crazy; and whether or not it's a bad idea to bring it up.

Comment by qmotus on Open thread, May 15 - May 21, 2017 · 2017-05-22T13:18:16.358Z · LW · GW

Are people close to you aware that this is a reason that you advocate cryonics?

Comment by qmotus on Open thread, May 15 - May 21, 2017 · 2017-05-22T07:52:24.712Z · LW · GW

What cosmological assumptions? Assumptions related to identity, perhaps, as discussed here. But it seems to me that MWI essentially guarantees that for every observer-moment, there will always exist a "subsequent" one, and the same seems to apply to all levels of a Tegmark multiverse.

Comment by qmotus on Open thread, May 15 - May 21, 2017 · 2017-05-17T10:01:07.271Z · LW · GW

(I'm not convinced that the universe is large enough for patternism to actually imply subjective immortality.)

Why wouldn't it be? That conclusion follows logically from many physical theories that are currently taken quite seriously.

Comment by qmotus on Open thread, May 15 - May 21, 2017 · 2017-05-16T07:55:07.362Z · LW · GW

I'm not willing to decipher your second question because this theme bothers me enough as it is, but I'll just say that I'm amazed that figuring this stuff out is not considered a higher priority by rationalists. If at some point someone can definitively tell me what to think about this, I'd be glad.

Comment by qmotus on Open thread, May 15 - May 21, 2017 · 2017-05-16T07:34:25.947Z · LW · GW

I guess we've had this discussion before, but: the difference between patternism and your version of subjective mortality is that in your version we nevertheless should not expect to exist indefinitely.

Comment by qmotus on Open thread, Mar. 20 - Mar. 26, 2017 · 2017-03-21T09:47:44.684Z · LW · GW

I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).

Comment by qmotus on Open thread, Oct. 17 - Oct. 23, 2016 · 2016-11-14T08:53:30.449Z · LW · GW

You also can't know if you're in a simulation, a Big quantum world, a big cosmological world, or if you're a reincarnation

But you can make estimates of the probabilities (EY's estimate of the big quantum world part, for example, is very close to 1).

So really I just go with my gut and try to generally make decisions that I probably won't think are stupid later given my current state of knowledge.

That just sounds pretty difficult, as my estimate of whether a decision is stupid or not may depend hugely on the assumptions I make about the world. In some cases, the decision that would be not-stupid in a big world scenario could be the complete opposite of what would make sense in a non-big world situation.

Comment by qmotus on Quantum Bayesianism · 2016-11-14T08:47:58.627Z · LW · GW

If you're looking for what these probabilities tell us about the underlying "reality"

I am. It seems to me that if quantum mechanics is about probabilities, then those probabilities have to be about something: essentially, this seems to suggest either that the underlying reality is unknown, indicating that quantum mechanics needs to be modified somehow, or that QBism is more like an "interpretation of MWI", where one chooses to only care about the one world she finds herself in.

Comment by qmotus on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-25T15:30:38.040Z · LW · GW

Fortunately, Native American populations didn't plummet because they were intentionally killed; they mostly did so because of diseases brought by Europeans.

Comment by qmotus on Open thread, Oct. 17 - Oct. 23, 2016 · 2016-10-21T10:33:49.172Z · LW · GW

Thanks for the tip. I suppose I actually used to be pretty good at not giving too many fucks. I've always cared about stuff like human rights or climate change or, more recently, AI risk, but I've never really lost much sleep over them. Basically, I think it would be nice if we solved those problems, but the idea that humanity might go extinct in the future doesn't cause me too much headache in itself. The trouble is, I think, that I've lately begun to think that I may have a personal stake in this stuff, the point illustrated by the EY post that I linked to. See also my reply to moridinamael.

Comment by qmotus on Open thread, Oct. 17 - Oct. 23, 2016 · 2016-10-21T10:28:52.687Z · LW · GW

The part about not being excited about anything sounds very accurate and is certainly a part of the problem. I've also tried just taking up projects and focusing on them, but I should probably try harder as well.

However, a big part of the problem is that it's not just that those things feel insignificant; it's also that I have a vague feeling that I'm sort of putting my own well-being in jeopardy by doing that. As I said, I'm very confused about things like life, death and existence, on a personal level. How do I focus on mundane things when I'm confused about basic things such as whether I (or anyone else) should expect to eventually die or to experience a weird-ass form of subjective anthropic immortality, and about what that actually means? Should that make me act somehow?

Comment by qmotus on Open thread, Oct. 17 - Oct. 23, 2016 · 2016-10-17T15:23:21.415Z · LW · GW

I'm having trouble figuring out what to prioritize in my life. In principle, I have a pretty good idea of what I'd like to do: for a while I have considered doing a Ph.D. in a field that is not really high impact, but not entirely useless either, combining work that is interesting (to me personally) with, hopefully, a modest salary that I could donate to worthwhile causes.

But it often feels like this is not enough. Similar to what another user posted here a while ago, reading LessWrong and about effective altruism has made me feel like nothing except AI and maybe a few other existential risks is worth focusing on (not even things that I still consider to be enormously important relative to some others). In principle I could focus on those as well. I'm not intelligent enough to do serious work on Friendly AI, but I probably could transition, relatively quickly, to working in machine learning and data science, with perhaps some opportunities to contribute and likely higher earnings.

The biggest problem, however, is that whenever I seem to be on track towards doing something useful and interesting, a monumental existential confusion kicks in and my productivity plummets. This is mostly related to thinking about life and death.

EY recently suggested that we should care about solving AGI alignment because of quantum immortality (or its cousins). This is a subject that has greatly troubled me for a long time. Thinking logically, big world immortality seems like an inescapable conclusion from some fairly basic assumptions. On the other hand, the whole idea feels completely absurd.

Having to take that seriously, even if I don't believe in it 100 percent, has made it difficult for me to find joy in the things that I do. Combining big world immortality with other usual ideas regarding existential risks and so on that are prevalent in the LW memespace sort of suggests that the most likely outcome I (or anybody else) can expect in the long run is surviving indefinitely as the only remaining human, or nearly certainly as the only remaining person among those that I currently know. Probably in increasingly bad health as well.

It doesn't help that I've never been that interested in living for a very long time, like most transhumanists seem to be. Sure, I think aging and death are problems that we should eventually solve, and in principle I don't have anything against living for a significantly longer time than the average human lifespan, but it's not something that I've been very interested in actively seeking, and if there's a significant risk that those very many years would not be very comfortable, then I quickly lose interest. So the theories that sort of make this whole death business seem like an illusion are difficult for me. And overall, the idea does make the mundane things that I do now seem even more meaningless. Obviously, this is taking its toll on my relationships with other people as well.

This has also led me to approach related topics a lot less rationally than I probably should. Because of this, I think both my estimate of the severity of the UFAI problem and my estimate of our ability to solve it have gone up, as has my estimate of the likelihood that we'll be able to beat aging in my lifetime - because those are things that seem to be necessary to escape the depressing conclusions I've pointed out.

I'm not good enough at fooling myself, though. As I said, my ability to concentrate on doing anything useful is very weak nowadays. It actually often feels easier to do something that I know is an outright waste of time but gives something to think about, like watching YouTube, playing video games or drinking beer.

I would appreciate any input. Given how seriously people here take things like the simulation argument, the singularity or MWI, existential confusion cannot be that uncommon. How do people usually deal with this kind of stuff?

Comment by qmotus on Quantum Bayesianism · 2016-10-13T14:34:07.193Z · LW · GW

I'm certainly not an instrumentalist. But the argument that MWI supporters (and some critics, like Penrose) generally make, and which I've found persuasive, is that MWI is simply what you get if you take quantum mechanics at face value. Theories like GRW modify the well-established formalism in ways for which, as far as I know, we have no empirical confirmation.

Comment by qmotus on Quantum Bayesianism · 2016-10-12T21:08:30.795Z · LW · GW

Fair enough. I feel like I have a fairly good intuitive understanding of quantum mechanics, but it's still almost entirely intuitive, and so is probably entirely inadequate beyond this point. But I've read speculations like this, and it sounds like things can get interesting: it's just that it's unclear to me how seriously we should take them at this stage, and some of them take MWI as a starting point, too.

Regarding QBism, my idea of it is mostly based on a very short presentation by Rüdiger Schack at a panel, and the thing that confuses me is this: if quantum mechanics is entirely about probability, then what do those probabilities tell us about?

Comment by qmotus on Quantum Bayesianism · 2016-10-12T20:57:14.803Z · LW · GW

I'm not sure what you mean by OR, but if it refers to Penrose's interpretation (my guess, because it sounds like Orch-OR), then I believe that it indeed changes QM as a theory.

Comment by qmotus on Quantum Bayesianism · 2016-10-12T20:55:19.031Z · LW · GW

Guess I'll have to read that paper and see how much of it I can understand. Just at a glance, it seems that in the end they propose that one of the modified theories, like the GRW interpretation, might be the right way forward. I guess that's possible, but how seriously should we take those when we have no empirical reasons to prefer them?

Comment by qmotus on Quantum Bayesianism · 2016-10-11T22:00:55.292Z · LW · GW

If it doesn't fundamentally change quantum mechanics as a theory, is the picture likely to turn out fundamentally different from MWI? Roger Penrose, a vocal MWI critic, seems to wholeheartedly agree that QM implies MWI; it's just that he thinks that this means the theory is wrong. David Deutsch, I believe, has said that he's not certain that quantum mechanics is correct; but any modification of the theory, according to him, is unlikely to do away with the parallel universes.

QBism, too, seems to me to essentially accept the MWI picture as the underlying ontology, but then says that we should only care about the worlds that we actually observe (Sean Carroll has presented criticism similar to this, and mentioned that it sounds more like therapy to him), although it could be that I've misunderstood something.

Comment by qmotus on Quantum Bayesianism · 2016-10-11T09:59:55.574Z · LW · GW

Do you think that we're likely to find something in those directions that would give a reason to prefer some other interpretation than MWI?

Comment by qmotus on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-15T10:54:07.158Z · LW · GW

It could be that reality has nasty things in mind for us that we can't yet see and that we cannot affect in any way, and therefore I would be happier if I didn't know of them in advance. Encountering a new idea like this that somebody has discovered is one of my constant worries when browsing this site.

Comment by qmotus on Open Thread, Sept 5. - Sept 11. 2016 · 2016-09-06T17:22:49.785Z · LW · GW

Wouldn't that mean surviving alone?

Comment by qmotus on The map of ideas how the Universe appeared from nothing · 2016-09-03T12:36:29.510Z · LW · GW

MUH has a certain appeal, but it has its problems as well, as you say (and substituting CUH for MUH feels a little ad hoc to me), and I fear parsimony can lead us astray here in any case. I still think it's a good attempt, but we should not be too eager to accept it.

Maybe you should make a map of reasons why this question matters. It's probably been regarded as an uninteresting question since it is difficult (if not impossible) to test empirically, and because of this humanity has overall not directed enough brainpower to solving it.

Comment by qmotus on Open Thread, Aug. 22 - 28, 2016 · 2016-08-24T10:31:07.734Z · LW · GW

Uh, I think you should format your post so that somebody reading that warning would also have time to react to it and actually avoid reading the thing you're warning about.

Comment by qmotus on Open Thread, Aug. 15. - Aug 21. 2016 · 2016-08-18T17:20:57.491Z · LW · GW

With those assumptions (especially modal realism), I don't think your original statement that our simulation was not terminated this time quite makes sense; there could be a bajillion simulations identical to this one, and even if most of them were shut down, we wouldn't notice anything.

In fact, I'm not sure what saying "we are in a simulation" or "we are not in a simulation" exactly means.

Comment by qmotus on Open Thread, Aug. 1 - Aug 7. 2016 · 2016-08-02T11:56:00.558Z · LW · GW

It all looks like political fight between Plan A and Plan B. You suggest not to implement Plan B as it would show real need to implement Plan A (cutting emissions).

That's one thing. But also, let's say that we choose Plan B, and this is taken as a sign that reducing emissions is unnecessary and global emissions soar. We then start pumping aerosols into the atmosphere to cool the climate.

Then something happens and this process stops: we face unexpected technical hurdles, or maybe the implementation of this plan has been largely left to a smallish number of nations and they are incapable or unwilling to implement it anymore, perhaps a large-scale war occurs, or something like that. Because of the extra CO2, we'd probably be worse off than if we had even partially succeeded with Plan A. So what's the expected payoff of choosing A or B?
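
(To frame that in rough expected-value terms - a back-of-the-envelope sketch with made-up symbols, nothing more: let $p$ be the probability that the aerosol program can be sustained for as long as it's needed. Then, roughly,

$E[\text{Plan B}] = p \cdot V_{\text{sustained}} + (1 - p) \cdot V_{\text{disrupted}}$,

where $V_{\text{disrupted}}$ stands for a world with the extra CO2 plus an abrupt warming rebound. Since that is plausibly worse than even a partial success of Plan A, the comparison hinges on how large $p$ really is.)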

As I said, I'm a bit wary of this, but I also think that it's important to research climate engineering technologies and make plans so that they can be implemented if (and probably when) necessary. The best option would probably be a mixture of plans A and B, but as you said, it looks like a bit of a prisoner's dilemma.

Comment by qmotus on Open Thread, Aug. 1 - Aug 7. 2016 · 2016-08-02T10:15:14.090Z · LW · GW

I would still be a bit reluctant to advocate climate engineering, though. The main worry, of course, is that if we choose that route, we need to commit to it in the long term, like you said. Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough about such measures to say that they're safe? Of course, if we believe that history will end anyway within decades or centuries because of the singularity, the long-term effects of such measures may not matter so much.

Also, many people, whether or not they're environmentalists strictly speaking, care about keeping our ecosystems at least somewhat undisrupted, and large scale climate engineering doesn't fit too well with that view.

But I agree that we're not progressing fast enough with emissions reductions (we're not progressing with them at all, actually), so we'll probably have to resort to some kind of plan B eventually.

Comment by qmotus on Open Thread, Aug. 1 - Aug 7. 2016 · 2016-08-02T10:04:41.224Z · LW · GW

I think many EAs consider climate change to be very important, but often just think that it receives a lot of attention already and solving it is difficult, and that there are therefore better things to focus on. Like 80,000 Hours, for example.

Comment by qmotus on Earning money with/for work in AI safety · 2016-07-21T17:14:13.515Z · LW · GW

Will your results ultimately take the form of blog posts such as those, or peer-reviewed publications, or something else?

I think FRI's research agenda is interesting and that they may very well work on important questions that hardly anyone else does, but I haven't yet supported them, as I'm not certain about their ability to deliver actual results or about the impact of their research, and I find it a tad odd that it's supported by effective altruism organizations, since I don't see any demonstration of effectiveness so far. (No offence though, it looks promising.)

Comment by qmotus on The map of future models · 2016-07-05T08:50:36.517Z · LW · GW

I wouldn't call cryonics life extension; sounds more like resurrection to me. And, well, "potentially indefinite life extension" after that, sure.

Comment by qmotus on Open Thread May 23 - May 29, 2016 · 2016-05-28T15:18:58.780Z · LW · GW

I bet many LessWrongers are just not interested in signing up. That's not irrational, or rational, it's just a matter of preferences.

Comment by qmotus on Open Thread May 2 - May 8, 2016 · 2016-05-03T16:37:39.840Z · LW · GW

either we are in a simulation or we are not, which is obviously true

Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be "real minds" dwelling in "real brains", and some would be simulated.

Comment by qmotus on Does immortality imply eternal existence in linear time? · 2016-04-27T15:29:41.851Z · LW · GW

If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.

Well... Let's say I make a copy of you at time t. I can also make them forget which one is which. Then, at time t + 1, I will tickle the copy a lot. After that, I go back in time to t - 1, tell you of my intentions, and ask you if you expect to get tickled. What do you reply?

Does it make any sense to you to say that you expect to experience both being and not being tickled?

Comment by qmotus on Does immortality imply eternal existence in linear time? · 2016-04-25T17:17:05.557Z · LW · GW
  1. Maybe; it would probably think so, at least if it wasn't told otherwise.

  2. Both would probably think so.

  3. All three might think so.

  4. I find that a bit scary.

  5. Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?

Comment by qmotus on Does immortality imply eternal existence in linear time? · 2016-04-25T11:27:36.424Z · LW · GW

Let's suppose that the contents of a brain are uploaded to a computer, or that a person is anesthetized and a single atom in their brain is replaced. What exactly would it mean to say that personal identity doesn't persist in such situations?

Comment by qmotus on Does immortality imply eternal existence in linear time? · 2016-04-25T10:18:41.485Z · LW · GW

If there's no objective right answer, you can just decide for yourself. If you want immortality and decide that a simulation of 'you' is not actually 'you', I guess you ('you'?) will indeed need to find a way to extend your biological life. If you're happy with just the simulation existing, then maybe brain uploading or FAI is the way to go. But we're not going to "find out" the right answer to those questions if there is no right answer.

But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.

Are you talking about the hard problem of consciousness? I'm mostly with Daniel Dennett here and think that the hard problem probably doesn't actually exist (but I wouldn't say that I'm absolutely certain about this), but if you think that the hard problem needs to be solved, then I guess this identity business also becomes somewhat more problematic.

Comment by qmotus on Does immortality imply eternal existence in linear time? · 2016-04-24T18:15:42.607Z · LW · GW

Isn't it purely a matter of definition? You can say that a version of you with one atom replaced is you or that it isn't, or that a simulation of you either is or isn't you, but there's no objective right answer. It is worth noting, though, that if you don't tell the different-by-one-atom version, or the simulated version, of the fact, they would probably never question being you.

Comment by qmotus on Open Thread April 11 - April 17, 2016 · 2016-04-15T06:55:37.052Z · LW · GW

I suppose so, and that's where the problems for consequentialism arise.

Comment by qmotus on Open Thread April 11 - April 17, 2016 · 2016-04-13T18:04:42.849Z · LW · GW

What I've noticed is that this has caused me to slide towards prioritizing issues that affect me personally (meaning that I care somewhat more about climate change and less about animal rights than I have previously done).

Comment by qmotus on Open Thread April 11 - April 17, 2016 · 2016-04-11T12:11:33.478Z · LW · GW

Past surveys show that most LessWrongers are consequentialists, and many are also effective altruism advocates. What do they think of infinities in ethics?

As I've intuitively always favoured some kind of negative utilitarianism, this has caused me some confusion.

Comment by qmotus on Lesswrong 2016 Survey · 2016-04-05T16:13:02.237Z · LW · GW

Peak oil said we'd run out of oil Real Soon Now, full stop

Peak oil refers to the moment when oil production reaches its maximum, after which it declines. It doesn't say that we'll run out of oil soon, just that production will slow down. If consumption increases at the same time, this will lead to scarcity.

If you are trying to rebuild you don't need much oil

Well, that probably depends on how much damage has been done. If civilization literally had to be rebuilt from scratch, I'd wager that a very significant portion of that cheap oil would have to be used.

Comment by qmotus on Lesswrong 2016 Survey · 2016-04-05T15:31:36.315Z · LW · GW

Besides, can we now finally admit peak oil was wrong?

Unfortunately, we can't. While we're not going to run out of oil soon (in fact, we should stop burning it for climate reasons long before we do; also, peak oil is not about oil depletion), we are running out of cheap oil. The EROEI of oil has fallen significantly since we started extracting it on a large scale.

This is highly relevant for what is discussed here. In the early 20th century, we could produce around 100 units of energy from oil for every unit of energy we used to extract it; those rebuilding the civilization from scratch today or in the future would have to make do with far less.
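
(For concreteness, EROEI, energy returned on energy invested, is just the ratio

$\text{EROEI} = E_{\text{returned}} / E_{\text{invested}}$,

so an EROEI of 100 means one unit of energy spent on extraction yields 100 units back, a net surplus of 99, while at an EROEI of, say, 10 the same unit nets only 9. The 100 is the historical figure mentioned above; the 10 is purely illustrative.)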

Comment by qmotus on Open Thread March 28 - April 3 , 2016 · 2016-04-05T10:11:49.236Z · LW · GW

Another interpretation is that it is a name for an implication of MWI that even many people who fully accept MWI seem to somehow miss (or deny, for some reason; just have a look at discussions in relevant Reddit subs, for example).

Objective-collapse theories in a spatially or temporally infinite universe, or with eternal inflation etc., actually say that it holds with nonzero but very small probability, but essentially give it an infinite number of chances to happen, meaning that this scenario is for all practical purposes identical to MWI. But I take what you are saying to mean something like "if the world were the way the normal intuitions of most people say it is", in which case I still think there's a world of difference between very small probability and very small measure.

I'm not entirely convinced by the usual EY/LW argument that utilitarianism can be salvaged in an MWI setting by caring about measure, but I can understand it and find it reasonable. But when this is translated to a first-person view, I find it difficult. The reason I believe that the Sun will rise tomorrow morning is not because my past observations indicate that it will happen in a majority of "branches" ("branches" or "worlds" of course not being a real thing, but a convenient shorthand), but because it seems like the most likely thing for me to experience, given past experiences. But if I'm in a submarine with turchin and x-risk is about to be realized, I don't get how I could "expect" that I will most likely blow up or be turned into a pile of paperclips like everyone else, while I will certainly (and only) experience it not happening. If QI is an attitude, and a bad one too, I don't understand how to adopt any other attitude.

Actually, I think there are at least a couple of variations of this attitude. The first one, which people take upon first hearing of the idea and giving it some credibility, is basically "so I'm immortal, yay; now I could play quantum Russian roulette and make myself rich"; the second one, after thinking about it a bit more, is much more pessimistic. There are probably others, but I suppose you could say that underneath there is this core idea that somehow it makes sense to say "I'm alive" if even a very small fraction of my original measure still exists.

Comment by qmotus on Open Thread March 28 - April 3 , 2016 · 2016-04-03T21:13:20.288Z · LW · GW

I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics.

Turchin may have something else in mind, but personally (since I've also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to death will be their final ones (and death will inevitably come). If "QI works", this will not be true: there will never be final experiences, but instead there will be an eternal (or perhaps almost eternal) chain of experiences and thus no final death, from a first-person point of view.

Of course, it could be that if you've accepted MWI and the basic idea of multiple future selves implied by it then this is not very radical, but it sounds like a pretty radical departure from our usual way of thinking to me.

Comment by qmotus on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-31T15:38:22.114Z · LW · GW

I find that about as convincing as "if you see a watch there must be a watchmaker" style arguments.

I don't see the similarity here.

There are a number of ways theorized to test if we're in various kinds of simulation and so far they've all turned up negative.

Oh?

String theory is famously bad at being usable to predict even mundane things even if it is elegant and "flat" is not the same as "infinite".

It basically makes no new testable predictions right now. Doesn't mean that it won't do so in the future. (I have no opinion about string theory myself, but a lot of physicists do see it as promising. Some don't. As far as I know, we currently know of no good alternative that's less weird.)

Comment by qmotus on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-31T07:49:30.157Z · LW · GW

As yet we have ~zero evidence for being in a simulation.

We have evidence (albeit no "smoking-gun evidence") for eternal inflation, we have evidence for a flat and thus infinite universe, string theory is right now our best guess at what the theory of everything is like; these all predict a multiverse where everything possible happens and where somebody should thus be expected to simulate you.

Your odds of waking up in the hands of someone extremely unfriendly is unchanged. You're just making it more likely that one fork of yourself might wake up in friendly hands.

Well, I think that qualifies. Our language is a bit inadequate for discussing situations with multiple future selves.

Comment by qmotus on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-29T12:58:25.778Z · LW · GW

you can somewhat salvage traditional notions of fear ... Simulationist Heaven ... It does take the sting off death though

I find the optimism that often prevails on LW regarding this a bit strange. Frankly, I find this resurrection stuff quite terrifying myself.

I am continuously amused how catholic this cosmology ends up by sheer logic.

Yeah. It does make me wonder if we should take a much more critical stance towards the premises that lead us to it. Sure enough, the universe is under no obligation to make any sense to us; but isn't it still a bit suspicious that it's turning out to be kind of bat-shit insane?

Comment by qmotus on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-29T12:55:41.038Z · LW · GW

Of course not. But whether people here agree with him or not, they usually at least think that his arguments need to be considered seriously.

Comment by qmotus on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-29T12:53:14.489Z · LW · GW

I don't believe in nested simulverse etc

You mean none of what I mentioned? Why not?

but I feel I should point out that even if some of those things were true waking up one way does not preclude waking up one or more of the other ways in addition to that.

You're right. I should have said "make it more likely", not "make sure".

Comment by qmotus on Open Thread March 28 - April 3 , 2016 · 2016-03-28T12:01:53.796Z · LW · GW

I think the point is that if extinction is not immediate, then the whole civilisation can't exploit big world immortality to survive; every single member of that civilisation would still survive in their own piece of reality, but alone.

Comment by qmotus on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-27T20:25:20.521Z · LW · GW

By "the preface" do you mean the "memetic hazard warnings"?

Yes.

I don't think that is claiming that it is a rational response to claims about the word.

I don't get this. I see a very straightforward claim that cryonics is a rational response. What do you mean?

This is a quantum immortality argument. If you actually believe in quantum immortality, you have bigger problems. Here is Eliezer offering cryonics as a solution to those, too.

I've read that as well. It's the same argument, essentially (quantum immortality doesn't actually have much to do with MWI in particular). Basically, Eliezer is saying that quantum immortality is probably true, it could be very bad, and we should sign up for cryonics as a precaution.