Hope and False Hope
post by TekhneMakre · 2021-09-04T09:46:23.513Z · LW · GW · 60 comments
I'm trying to figure out why people don't want to really consider cryonics as an option. E.g., maybe they never really consider anything; maybe life sucks for them and they don't want to live; maybe they are afraid of future justice; maybe they are afraid of social punishment for doing weird stuff (woo-oo-oooh, Walt Disney's severed head, woo-oo-oo-ooh); maybe they don't have the money; maybe they judge that to even reconsider whether it's actually a scam is to be scammed; maybe they don't think humanity will last long enough; maybe they don't think it'll ever be feasible to reconstitute a person via superintelligent nanotechnology from frozen brain tissue (what a ridiculous opinion, lol). Stated reasons often include death being good, appropriate, comprehensible, poetic, expected, natural, etc., but I don't feel like I can yet make any empathetic or logical sense out of that response, and it always strikes me as a cover for something else, though I don't know.
Another hypothesis I hadn't crystallized before, as a sort of generalization of the fear of scams: maybe they are afraid of taking on False Hope. Some terms:
- Having a Wish (that X) or Wishing (for X) is a stance of being ready to take whatever opportunities are given to make the world be a certain way (be X). A Wish is ambiguously either the Wishing stance, or X itself (so you could reach a Wish).
- A person's Life-force is their energy, attention, effort, care, thought, problem-solving, optimization, interest, planning, consideration, orientation, etc.
- A Hope is a way of comporting yourself that's well-suited for pursuing a Wish that you believe you can reach. Someone with Hope in a Wish for X will invest their Life-force towards bringing about X. Footnotes: [1] [2]. Hope also ambiguously refers to elements of a strategy generated by Hope, which can also be a source of Hope (e.g. "This treaty is our last hope for peace."). [3]
- A False Hope is a Hope that won't actually bring about the Wish. [4] [5] False Hope is terrifying because Hope is powerful: if you'd go past the edges of the Earth to save a loved one, you might burn up all your Life-force for nothing on a False Hope, or worse, put your Life-force towards ends that are worse than nothing.
Some people exploit and create False Hope. Hope for money attracts pyramid schemers, Hope for growth in production via capital attracts predatory lenders, Hope for child-rearing attracts sexual users, Hope for any Wish shared by many people (i.e. political will) attracts politicians, Hope for any Wish that's hard to verify progress towards attracts charlatans, and searching for someone to invest Hope in attracts narcissists and cult leaders. [6]
So, here's my new guess about people who are not interested in seriously thinking about cryonics. They have a Wish to live and Wishes for each loved one to live, but they've mourned any Hope [7] in that Wish, including any Hope in their ability to discern new opportunities to reach the Wish. Cryonics offers (what therefore appears to be necessarily False) Hope. The Hope of cryonics is compelling, as it's addressed at some of their true Wishes. So the seeming False Hope is terrifying to the core, because it has a shot at recruiting much or most of their Life-force, and then subjecting that investment to pointless incineration, permanent incarceration [8], or disastrous mistargeting.
Footnotes:
[1] When someone yearns (wistfully, say) for X, they have a Wish for it but do not have Hope in that Wish; in X's absence they don't seek it, but if unsought it showed up for the taking, they'd take it and cherish it from then on. When someone despairs, they are being pushed to give up Hope. To mourn (for a person, but also for a mission, a place, a home, a friendship, etc.) is to give up Hope, but not always entirely to give up the Wish (a Hopeless prisoner can mourn but still yearn to be free).
[2] A Hope isn't just a belief that a Wish is possible, as a world. You could Wish X and believe the world admits the possibility of X but believe that your Life-force isn't enough to bring it about. The latter can be a self-fulfilling prophecy: if you don't have Hope then you will not follow through on your plans and other people will not become Hopeful about your Wish, so you will have no help from your future self and from other people. So Hope is sort of like a decision and sort of like a belief, an equilibrium state between the world's possibilities and your history of decisions. Cf. predictive coding.
[3] More precisely, you can invest a strategy with Hope by organizing the Hope's Life-force under the assumption that you'll pursue that strategy. So a strategy represents a Hope, and e.g. is something that multiple people participating in the same Hope can choose to coordinate to do. A belief that a strategy might work is cause/reason to Hope in general, and is reason to invest that specific strategy with Hope.
[4] False Hope can, for example, be caused by mistakenly thinking that a strategy is coherent / can be enacted ("I'll save the world by getting a lot of political power by any means necessary, and then [[somehow]] I'll decide to make the right things happen."); or caused by an incorrect belief that something is possible (e.g., a perpetual motion machine); or by trusting someone who isn't trustworthy for this Hope; or by a runaway autogenesis of Hope ("I'm feeling good, so my efforts will work") taken too far (following a "success spiral" off a cliff, perhaps as in a manic episode?). There can of course be a false lack of Hope (i.e. you actually could do it if you tried, maybe only if you really try for real), for the same reasons. People also exploit this, e.g. masters convincing the underclass not to revolt, or an abuser degrading their victim's self-esteem to trick them into thinking they can't do anything without the abuser. People can cause other people to have a False Hope or false lack of Hope by having or simulating the same.
[5] It might be righter to think of Hope as having a nested or network structure. (For example, mourning for a friendship, without giving up Hope in possible future friendships; or mourning for friendship in general, but keeping Hope in business partnerships.) Then a Wish is just a Hope that hasn't invested Life-force into any particular strategy. Though, I'd want to say that it is always possible to choose to invest in a highly general meta-strategy of searching for new strategies and meta-strategies and understanding, and investing in that strategy constitutes a proper Hope, not a mere Wish. Maybe despair is giving up a certain False Hope, and it pushes also on the Hope that originally invested in the False Hope. Despair pushes to give up more Hope because the Hope has been revealed to be False or to have made an investment in False Hope, and perhaps it has been revealed to have been founded on False Hope, which poor foundation may feel, or be, nearly indistinguishable from poor investment. The process of despair is the conflict between pushing "from below / inside" to give up more Hope as False, and pushing "from above / outside" to not give up the broader Hope, because maybe it will regenerate new Hope that can't be seen yet. Desperation is the conflict not quite resolving, leaving you stuck in a strategy that you know you shouldn't be investing Life-force in. Mourning is fully giving up a Hope, though maybe never giving up the Wish (if your dead relative came to you alive, you wouldn't ignore them).
[6] I think money isn't exactly embodied Hope, but rather is a tool to replace some functions of Hope. SpaceX employees probably have a lot of Hope in SpaceX, Elon, Mars, etc., but they need to eat and sleep etc., and it's somehow easier to arrange that with money than by having Hope in space also concentrated in specific farmers, house builders, etc. I think I've heard that sometimes people become less prosocial when paid a small amount of money than when paid no money (in many contexts) (which I'm not sure how to interpret but it's suggestive to me of "oh, ok, they aren't viewing this as part of some broader Hope we might share, it's just trade between individuals").
[7] Maybe there's two kinds of desperation: there's the kind where you feel stuck, but if only you had faith in the broader Hope, you could just mourn and then regenerate new Hope; and then there's the kind where there isn't any broader Hope, and if you mourn then you are giving up the Wish as well, and the only option is to desperately try to grow a new strategy out of nothing, out of the lack of a meta-strategy / broader Hope that you've left yourself with. This suggests somehow factoring your Hope in advance to be able to give up enough but not entirely, unless you really want to give up entirely.
[8] Hope that's shaped a certain way, a way that deemphasizes [9] working with one's self across time, gets fenced into an impoverished area of strategy-space. Like those people, the ones who've already got their strategy worked out, in whatever arena, and there's just no convincing them to try something else. E.g., an activist who doesn't have purchase on their mission, and just sort of wanders around exhorting deaf ears. They care, but they are stuck.
[9] Hope crucially has openness: exploration is valuable in part because explorers gain information, and information is valuable because it indicates effective decisions, and working with your future self is a key aspect of the stance I'm calling Hope; so Hope is connected to exploration. Hope is also connected to evaluating possible strategies with a type of optimism ("angelic semantics"), and expecting/deciding that you'll find the goodness mountains across the valleys that surround your local optimum if you perturb yourself.
60 comments
Comments sorted by top scores.
comment by tslarm · 2021-09-05T14:04:02.719Z · LW(p) · GW(p)
I'm not sure if you'd count me as having "really considered" cryonics; I've genuinely thought about the topic, but I've never been tempted to take any steps toward signing up, or researching it in depth. Here are a couple of reasons:
- Failure (or 'success' with caveats) needn't merely leave me with the default outcome (nothingness) -- some of the conceivable failure modes are horrifying. I don't want to go into detail, but there's the potential for a lot of suffering.
- I'm instinctively biased toward myself and the preservation of my own life, but I don't think I particularly ought to be. My self-preservation instinct doesn't kick in when I think about cryonics, and nobody who loves me is going to be cryopreserved, so my absence won't hurt anyone. I see no reason to try to override my natural tendency to do nothing.
- I don't see any compelling reason to assign fundamental value to the extension of a life. When I think about personal identity in a reductionist way, I see nothing intrinsically important about the connections that make a set of experience-moments part of one person's life rather than another's (or some others'). I instinctively want to continue to live, and I definitely want the people I care about to continue to live -- but where those desires aren't naturally present, I don't see why I should push myself to try to create them or override their absence. (Of course I don't want anyone to go through the pain of death and bereavement, but that's a separate question; see the next sub-point.)
- Of course, if I could convince myself to have high confidence in cryonics, it would probably take the sting out of my and my loved ones' deaths (provided I could convince them to sign up too). But if we were to sign up without much confidence, we would all still have to go through the ordinary experiences of fear and grief, whether or not we subsequently woke up in the future.
- Why don't these instincts kick in with respect to cryonics? Probably a lack of confidence that it has much chance of working (based on a combination of lazy heuristics and some genuine thought/reading), and the fact that it is weird enough not to map easily onto anything I instinctively recognise as 'survival'. So I have to think about it, and when I do I reach the conclusions I've mentioned here.
↑ comment by TekhneMakre · 2021-09-06T00:06:04.650Z · LW(p) · GW(p)
>Failure (or 'success' with caveats) needn't merely leave me with the default outcome (nothingness) -- some of the conceivable failure modes are horrifying. I don't want to go into detail, but there's the potential for a lot of suffering.
(This, by the way, does seem like a potentially solid reason to not want cryopreservation, and it might also describe a cryptic motive of people who haven't thought about this as much as you have, which I gestured at by "afraid of future justice".)
↑ comment by tslarm · 2021-09-06T00:27:39.480Z · LW(p) · GW(p)
It's an interesting one -- I think people differ hugely in terms of both how they weigh (actual or potential) happiness against suffering, and how much they care about prolonging life per se. I'm pretty sure I've seen people on LW and/or SSC say they would prefer to suffer intensely forever rather than die, whereas I am very much on the opposite side of that question. I'm also unusually conservative when it comes to trading off suffering against happiness.
I don't know how much this comes down to differing in-the-moment experiences (some people tend to experience positive/negative feelings more/less intensely than others in similar circumstances), differing after-the-fact judgments and even memories (some people tend to forget how bad an unpleasant experience was; some are disproportionately traumatised by it), differing life circumstances, etc. I do suspect it's largely based on some combination of factors like these, rather than disagreements that are primarily intellectual.
edit: I've kind of conflated the happiness-suffering tradeoff and the 'suffering versus oblivion' dilemma here. In the second paragraph I was mostly talking about the happiness-suffering tradeoff.
↑ comment by TekhneMakre · 2021-09-06T00:32:48.364Z · LW(p) · GW(p)
It's an intellectual disagreement in the sense that it's part of a false lack of Hope, and if that isn't being corrected by local gradients I don't see what other recourse there is besides reason.
↑ comment by tslarm · 2021-09-06T00:49:36.898Z · LW(p) · GW(p)
If we agreed on the probability of each possible outcome of cryonic preservation, but disagreed on whether the risk was worth it, how would we go about trying to convince the other they were wrong?
↑ comment by TekhneMakre · 2021-09-06T01:00:29.529Z · LW(p) · GW(p)
The point isn't to convince each other, the point is to find places where one or the other has true and useful information and ideas that the other doesn't have.
↑ comment by TekhneMakre · 2021-09-06T00:54:40.545Z · LW(p) · GW(p)
The point of my post is that the probabilities themselves depend on whether we consider the risk worth it. To say it another way, which flattens some of the phenomenology I'm trying to do but might get the point across, I'm saying it's a coordination problem, and computing beliefs in a CDT way is failing to get the benefits of participating fully in the possibilities of the coordination problem.
edit: Like, if everyone thought it was worth it, then it would be executed well (maybe), so the probability would be much higher, so it is worth it. A "self-fulfilling prophecy", from a CDT perspective.
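To make that fixed-point picture concrete, here's a toy numerical sketch (all the numbers and functional forms are made up; the only point is that a pessimistic consensus and a Hopeful consensus can each be self-consistent):

```python
# Toy model of the self-fulfilling prophecy: beliefs -> investment -> success odds -> beliefs.
# Every number here is invented for illustration.

def success_probability(fraction_invested: float) -> float:
    # Assumption: the more people seriously invest (money, research, organizational
    # competence), the better the whole cryonics pipeline actually works.
    return 0.05 + 0.9 * fraction_invested

def fraction_who_invest(perceived_probability: float) -> float:
    # Assumption: people's "worth it" thresholds are spread uniformly between
    # 0.2 and 0.6, so this is the fraction whose threshold gets cleared.
    return min(1.0, max(0.0, (perceived_probability - 0.2) / 0.4))

def equilibrium(initial_fraction: float, steps: int = 50) -> float:
    # Iterate belief -> behavior -> outcome until it settles.
    f = initial_fraction
    for _ in range(steps):
        f = fraction_who_invest(success_probability(f))
    return f

print(equilibrium(0.1))  # -> 0.0: widespread pessimism keeps it not worth doing
print(equilibrium(0.6))  # -> 1.0: widespread Hope makes it worth doing
```

With these made-up parameters, a shared expectation below roughly 30% adoption decays to "nobody invests, so it doesn't work", while one above it grows to "everyone invests, so it does".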
↑ comment by TekhneMakre · 2021-09-05T21:29:31.345Z · LW(p) · GW(p)
>I'm instinctively biased toward myself and the preservation of my own life, but I don't think I particularly ought to be.
>I see no reason to try to override my natural tendency to do nothing.
>but where those desires aren't naturally present, I don't see why I should push myself to try to create them or override their absence.
To me this sounds like you're speaking from the perspective of a third party, asking what tslarm should do. Is that right? Do you know who the third party is?
↑ comment by tslarm · 2021-09-05T23:47:04.606Z · LW(p) · GW(p)
Not a literal third party, but I do try to think about ethical questions from the perspective of a hypothetical impartial observer. (With my fundamental values, though; so if it's anyone, it's basically me behind a veil of ignorance.)
↑ comment by TekhneMakre · 2021-09-05T23:49:09.825Z · LW(p) · GW(p)
In what sense do they have your values if they don't recommend taking actions that would lead to things you want?
↑ comment by tslarm · 2021-09-05T23:56:56.357Z · LW(p) · GW(p)
I think there's some confusion between us -- why do you say "they don't recommend taking actions that would lead to things you want"?
edit: actually, I think I know roughly what you mean -- hang on and I'll edit this into a proper response.
What I consider my 'fundamental values' are pretty few: suffering is bad, happiness is good, and some sort of commitment to equality, such that a set of experiences don't matter more or less because of who is having them. Those are the values that my 'impartial observer' shares with me; if we're using the veil of ignorance metaphor, we can leave out the commitment to equality and let that arise naturally via the veil.
As a real person I am of course very partial, and I also care about some things that, on reflection, I see as only having instrumental or contingent importance. When I'm thinking about an ethical question, I try to adopt the impartial perspective; then, in guiding my real-world actions, my view of what is ethical will be integrated with everything else that affects my behaviour, and some sort of compromise will be reached. So, with respect to cryonics: if I don't feel any desire to do it, and I don't see any ethical reason to do it, what's the problem with not doing it?
↑ comment by TekhneMakre · 2021-09-06T00:21:18.699Z · LW(p) · GW(p)
>suffering is bad, happiness is good
Suffering isn't bad and happiness isn't good. Suffering without motion is bad, and suffering with motion is good (I'm not just saying the suffering is worth it, I'm saying the suffering itself is good, it's the raising and working out of conflicts that need to be worked out (it would have been better to avoid the conflict in the first place, but the suffering isn't the point)). Happiness is something that sometimes happens when you and the world are on the way towards good things. You've flattened your actual values, which really refer to and relate with the world, into mere states of mind (which previously were derivative aspects of your values). If you were really as you described, you'd become a suicidal heroin abuser.
>When I'm thinking about an ethical question, I try to adopt the impartial perspective; then, in guiding my real-world actions, my view of what is ethical will be integrated with everything else that affects my behaviour, and some sort of compromise will be reached.
This sounds like you've reached a "compromise" that entails basically not trying too hard to do anything. This seems very likely to be undesirable from either the "ethical" perspective or the "real person" perspective.
>if I don't feel any desire to do it, and I don't see any ethical reason to do it, what's the problem with not doing it?
You list (1) feelings of desire, and (2) ethical reasons. But what about reasons to desire, or reasons that your desire takes up and makes part of itself?
↑ comment by tslarm · 2021-09-06T00:39:01.052Z · LW(p) · GW(p)
There's just no way we're going to agree on the really fundamental stuff; your first-paragraph assertions are as unconvincing to me as I'm sure mine are to you.
>This sounds like you've reached a "compromise" that entails basically not trying too hard to do anything. This seems very likely to be undesirable from either the "ethical" perspective or the "real person" perspective.
I've accepted that I'm imperfect with respect to my ethical system. I don't know how you got from there to the assumption that I'm 'not trying too hard to do anything'.
Do you think you behave in a perfectly ethical way? If so, I think you're almost certainly either deceiving yourself or simply making the compromise one level up, i.e. adopting a very undemanding ethics.
>You list (1) feelings of desire, and (2) ethical reasons. But what about reasons to desire
I think these are encompassed in my ethical considerations -- I still matter from that perspective, just not disproportionately much. So if something would be good for me, that fact has ethical importance.
>or reasons that your desire takes up and makes part of itself?
I don't think I know what this means.
↑ comment by TekhneMakre · 2021-09-06T00:52:36.565Z · LW(p) · GW(p)
>There's just no way we're going to agree on the really fundamental stuff; your first-paragraph assertions are as unconvincing to me as I'm sure mine are to you.
Aren't we on LW to discuss difficult important questions?
Do you think you see why someone might call it a type error to think that happiness is what we "want" or "value"? I could expand on that point and that might help.
>I've accepted that I'm imperfect with respect to my ethical system. I don't know how you got from there to the assumption that I'm 'not trying too hard to do anything'.
I don't want to talk with you as if I'm speaking on behalf of an ethical coalition that's external to you. I want to present ideas to you that your inside view can consider and extract translated ideas that are true and useful by its own judgement.
I'm saying that conflicts between your "ethics" and your "real person" should be interesting as points of potential greater understanding, but it sounds like you're throwing away those opportunities.
>these are encompassed in my ethical considerations -- I still matter from that perspective, just not disproportionately much
But the way you speak makes it sounds like your ethics consider the "feelings of desire" of your "real person" to be purely moral patiency, without moral agency; your "real person" can suffer like a tortured animal, but can't creatively elaborate desire into an open-ended exploration of possibilities that takes ideas into itself, like a free agent.
>or reasons that your desire takes up and makes part of itself?
>I don't think I know what this means.
It means that desire isn't a feeling, it's just the most noticeable aspect of an agent. Agents can incorporate new information and ideas, and hence become more strategic (including more in accordance with ethical behavior).
↑ comment by tslarm · 2021-09-06T01:09:45.222Z · LW(p) · GW(p)
>Aren't we on LW to discuss difficult important questions?
You're welcome to make the arguments! I'm just trying to be honest here, that I think we're extremely unlikely to change each other's minds at that level. (IME, productive discussion on purely moral/evaluative questions is quite rare, and usually requires some common ground at a lower level.)
>Do you think you see why someone might call it a type error to think that happiness is what we "want" or "value"?
If you're saying I'm the one making that error, I think I failed to get across my position. I don't think happiness is necessarily what we want or value -- I was using the word to refer to a certain quality of conscious experience, basically the opposite of suffering.
>I'm saying that conflicts between your "ethics" and your "real person" should be interesting as points of potential greater understanding, but it sounds like you're throwing away those opportunities.
Fair enough -- again, feel free to make the arguments. I've got to go afk for a while, but I will come back and at least try to consider them.
I'm still a bit confused by the rest of your post -- some of our issues may be semantic (we seem to use some key words quite differently) but I suspect some fundamental disagreements are also getting in the way. Sorry that this is a bit of a non-response; if you do want to go deeper, I'll try to give your words proper consideration later today or tomorrow.
↑ comment by TekhneMakre · 2021-09-06T01:21:45.869Z · LW(p) · GW(p)
> I don't think happiness is necessarily what we want or value -- I was using the word to refer to a certain quality of conscious experience, basically the opposite of suffering.
You wrote above:
>What I consider my 'fundamental values' are pretty few: suffering is bad, happiness is good, and some sort of commitment to equality, such that a set of experiences don't matter more or less because of who is having them. Those are the values that my 'impartial observer' shares with me;
It sounds like you said here that "happiness is good" is one of your "fundamental values". Maybe this doesn't respond to what you mean, but what I'm saying is that it's almost a type error to say "this quality of conscious experience is what I'm trying to make happen in the world". The quality of conscious experience you're talking about is a derivative aspect, like a component or a side effect, of a process of your mind learning to understand and affect the world to get what it wants. So I can see how it's confusing: if you want X, and whenever you get X you're happy, then saying "what I'm trying to get is being happy, and X is a subgoal of that" almost captures all of the X that you want. But it's backwards, if you see what I mean; you're happy because you got what you wanted, it's not that you wanted to be happy. If you optimized medium-strongly for that, you'd become a heroin addict or similar. If you optimized normal-strongly, you'd self deceive about unpleasant stuff, as people do. If you optimized for what you actually want, you'd be happy (or at least happier) in a wholesome way (the opposite of the unwholesome happiness of heroin).
>Aren't we on LW to discuss difficult important questions?
>You're welcome to make the arguments!
Ok. It's not just arguments, but also more generally communicating provocations towards reevaluating things (for example, poetry might be not at all an argument but might evoke something which then usefully transforms once evoked, in a way that truly changes your beliefs and strategies). Part of what I'm trying to do is to make more of [whatever it is that's really guiding our behavior and thinking] be available to attend to and change.
↑ comment by tslarm · 2021-09-06T09:36:03.057Z · LW(p) · GW(p)
>It sounds like you said here that "happiness is good" is one of your "fundamental values". Maybe this doesn't respond to what you mean, but what I'm saying is that it's almost a type error to say "this quality of conscious experience is what I'm trying to make happen in the world". The quality of conscious experience you're talking about is a derivative aspect, like a component or a side effect, of a process of your mind learning to understand and affect the world to get what it wants. So I can see how it's confusing: if you want X, and whenever you get X you're happy, then saying "what I'm trying to get is being happy, and X is a subgoal of that" almost captures all of the X that you want. But it's backwards, if you see what I mean; you're happy because you got what you wanted, it's not that you wanted to be happy. [..] If you optimized for what you actually want, you'd be happy (or at least happier) in a wholesome way (the opposite of the unwholesome happiness of heroin).
I agree that trying to optimise directly for happiness is often counterproductive, and having goals and desires other than happiness is often the best way to actually attain happiness; I disagree that this implies happiness isn't what fundamentally matters, or what I personally value. (I don't define 'what I value' as 'what I am actually trying to bring about', though -- I believe it is possible, and common, to fail to act in accordance with one's values. And yes, in my case there's the potential for significant tension between what I believe, after lengthy consideration, is fundamentally good, and what I aim at or care about moment-to-moment. But I don't think this carries the implications you seem to think it carries.)
>If you optimized medium-strongly for that, you'd become a heroin addict or similar.
Strongly disagree; I don't personally know any heroin addicts, but from what I have read and seen, becoming a heroin addict is a terrible strategy if one wants to be happy and avoid suffering over the medium-long term. Your first reaction may be that this is a pedantic quibble, but I don't think there is actually anything close to true 'wireheading' currently available to us; the real-world examples people give always have obvious downsides even from a purely hedonistic standpoint. So I think they do more to mislead and confuse than to clarify or support any anti-hedonist arguments.
↑ comment by TekhneMakre · 2021-09-06T10:19:34.030Z · LW(p) · GW(p)
> don't define 'what I value' as 'what I am actually trying to bring about', though
How do you define it, and why do you care about that definition?
↑ comment by tslarm · 2021-09-07T05:35:24.507Z · LW(p) · GW(p)
>How do you define it
It's context-dependent, but if we're still talking about the bit where I said my "fundamental values" include "happiness is good", I meant that I think increasing the amount of happiness and reducing the amount of suffering in the world makes the world better. (edit: And not instrumentally, i.e. because of something else that suffering and happiness lead to -- the point is that to me, happiness and suffering are the ground-level things that matter; other things become instrumentally important by virtue of promoting happiness and reducing suffering.)
>why do you care about that definition?
I'm not sure exactly what you're asking here -- why do I want to define the word that way? or why did I feel the need to point out that I don't define the word in the other way? or why do I care about the thing that definition points to? or something else.
↑ comment by TekhneMakre · 2021-09-07T07:56:45.035Z · LW(p) · GW(p)
I'm saying that it seems to be a useful concept, the concept of "the aims that an agent continues to pursue (across, say, learning new information, inventing new strategies, changing its conceptual vocabulary)". That's what I'd call values.
You wrote that you have an "impartial observer" who shares "fundamental values" with you. I didn't believe you that the impartial observer shares values with you, because it doesn't try to recommend actions that would bring about what your "real person" would want: in your words, "I instinctively want to continue to live, and I definitely want the people I care about to continue to live". Here the "impartial observer" is using the word "instinctively" as a derogatory word, turning the aim of the "real person" into a mere "feeling of desire", rather than a "fundamental value". That is, the "impartial observer" pretends that your "instincts" are aimed merely at feelings, as in "take the sting out of my and my loved ones' deaths", as if the point of cryonics is merely to avoid the unpleasant thought of them dying rather than a person you love never living more and their shell rotting in the ground.
Your "impartial observer" then explicitly declared that it would not attempt to facilitate the "real person" with expanding its desires across new information, strategies, and ideas, and would treat them as mere inconveniences that might meaninglessly show up in place or another:
"...desires aren't naturally present, I [[observer]] don't see why I [[observer]] should push myself [["real person"]] to try to create them or override their absence"
"...these instincts kick in with respect to cryonics..."
"...the fact that it is weird enough not to map easily onto anything I instinctively recognise as 'survival'..."
I perceive this as a self-destructive conflict, and I wanted to explore and make precise what you meant by "the values that my 'impartial observer' shares with me", because that seems like part of the conflict.
>I meant that I think increasing the amount of happiness and reducing the amount of suffering in the world makes the world better
What I'm saying is that happiness is what your brain does when its "is the world getting better" detector is returning "hell yeah!". So what you're saying is a vicious circle. (It's fine though, because your "is the world getting better" detector should still be mostly intact. You just have to decide to listen to it, rather than pretending that you want to trick it.)
↑ comment by tslarm · 2021-09-07T11:25:32.481Z · LW(p) · GW(p)
>You wrote that you have an "impartial observer" who shares "fundamental values" with you [...]
I feel like you're reifying the impartial observer, and drawing some dubious conclusions from that. The impartial observer is just a metaphor -- it's me, trying to think about the world from a certain perspective. (I know you haven't literally failed to realise that, but it's hard for me to make sense of some of the things you're saying, unless there's some kind of confusion between us on that point.)
All of my varied and sometimes conflicting feelings, beliefs, instincts, desires etc. are equally real. Some of them I endorse on reflection, others I don't; some of them I see as pointing at something fundamentally important, others I don't.
the "impartial observer" pretends that your "instincts" are aimed merely at feelings
I don't think I've ever suggested that my instincts are "aimed merely at feelings" -- if they're 'aimed' at anything other than their direct targets, it probably makes more sense to say they're aimed at the propagation of my genes, which is presumably why they're part of me in the first place. And on reflection, I don't see the propagation of my genes as the supreme good to be aimed at above all else, so it's not surprising that I'm sometimes going to disagree with my instincts.
>as if the point of cryonics is [...]
"the point of cryonics" can be whatever someone signing up wants it to be! I get that for some people, death is the ultimate bad thing, and I have some sympathy with them (you?) on that. I don't like death, I'm afraid of it, etc. I haven't talked myself into thinking that I'm fine with it. But, on reflection, and like I said a few comments up, when I think about personal identity and what it actually means for a specific person to persist through time, I'm not convinced that it is fundamentally important whether an experience-moment belongs to one particular entity or another -- or whether a set of experience-moments belongs to one entity or a group of others. (And that's what's fundamentally important to me -- conscious experience. That's what I think matters in the world; the quality of conscious experiences is what I think makes the world good or bad, better or worse.)
None of this means that death doesn't suck. But to me, it primarily sucks because of all the pain it causes. If we all somehow really got used to it, to the point that we could meet it without fear or horror, and could farewell each other without overwhelming grief, I would see that as a great improvement. A hundred generations living a hundred years each doesn't seem intrinsically worse to me than a single quasi-immortal generation living for 10,000 years. Right now I'd take the second option, because yeah, death sucks. But (setting aside the physical decay that in fact tends to precede and accompany it), in my opinion the degree to which it sucks is contingent on our psychology.
>I perceive this as a self-destructive conflict, and I wanted to explore and make precise what you meant by "the values that my 'impartial observer' shares with me", because that seems like part of the conflict.
>[...]
>What I'm saying is that happiness is what your brain does when its "is the world getting better" detector is returning "hell yeah!". So what you're saying is a vicious circle. (It's fine though, because your "is the world getting better" detector should still be mostly intact. You just have to decide to listen to it, rather than pretending that you want to trick it.)
I appreciate your directness, but I don't really appreciate the ratio of confident, prescriptive psychoanalysis to actual argument. You're asserting a lot, but giving me few reasons to take your assertions seriously enough to gain anything from them. (I don't mean this conversation should be about providing me with some gain -- but I don't get the sense you are open to having your own mind changed on any of the topics we're discussing; your purpose seems to be to fix me in some way.) I genuinely disagree with you on the fundamental importance of happiness. I might be wrong, but I'm not simply confused -- at least not in a way that you can dispel simply by asking questions and asserting your own conflicting beliefs.
Sorry if that comes across in an insulting way; I do appreciate your attempts to work through these issues with me. But this has felt like a fairly one-sided dialogue, in the sense that you seem to think exactly one of us has a lot to learn. Which isn't necessarily a problem, and perhaps it's the attitude most of us take into most such discussions -- but if you want to teach me, I need you to do more to positively support your own convictions, rather than just confidently assert them and try to socratic-dialogue your way to a diagnosis of what's wrong with mine.
↑ comment by TekhneMakre · 2021-09-07T12:27:04.211Z · LW(p) · GW(p)
>the ratio of confident, prescriptive psychoanalysis to actual argument
I appreciate you engaging generally, and specifically mentioning these process points. The reason I'm stating things without caveats etc. is that it feels like there's a huge gulf between us, and so it seems like the only way that would possibly get anywhere is to make clear conjectures and describe them as bluntly as possible, so that key points of disagreement can come to the front. I want to provide arguments for the propositions, but I want to direct efforts to do that towards where it matters most, so I'm hoping to home in on key points. I'm not hoping to dispel your confusions just by stating some position, I'm hoping to clarify your position in contrast to points I'm stating. My psychoanalyses are rude in some sense, and I want to hold them very lightly; I do at least put uncertainty-words on them (e.g. "I perceive this as....") to hopefully indicate that I'm describing not something that I'm permanently confident of, but something that's my current best guess given the data.
>You're asserting a lot, but giving me few reasons to take your assertions seriously enough to gain anything from them.
> I genuinely disagree with you on the fundamental importance of happiness
I described a view of what happiness is, and the implication of that view that happiness isn't a terminal value. I don't think you responded to that, except to say that you disagreed with the implication, and that you have a different definition of value, which is "making the world better". Maybe it would help if you expanded more on your disagreement? Did my argument make sense, or is there something to clarify?
The reason this seems important to me is that upthread you said:
>Not a literal third party, but I do try to think about ethical questions from the perspective of a hypothetical impartial observer. (With my fundamental values, though; so if it's anyone, it's basically me behind a veil of ignorance.)
Basically I think our disagreement is over whether the impartial judgements actually share your values. I've been trying to point out how it looks a lot more like the impartial judgements are using a different criterion for what constitutes a better world than the criterion implied by your desires. E.g. on the one hand you're afraid of your loved ones dying, which I take to imply that the world is better if your loved ones don't die. On the other hand some of your other statements sound like the only problem is the fear and unhappiness around death. So basically my question is, how do you know that the impartial conclusions are right, given that you still have fear of your loved ones dying?
Another point that might matter is that I don't think it makes sense to talk about "moments of conscious experience" as isolated from the person who's experiencing them. Which opens the door for death mattering--if we care about conscious experience, and conscious experience implies identity across time, we might care about those identities continuing. The reason I think it doesn't make sense to talk of isolated experience is that experience seems like it always involves beliefs and significance, not mere valence or data.
↑ comment by tslarm · 2021-09-07T14:10:27.318Z · LW(p) · GW(p)
Re your first paragraph -- fair enough, and thanks for clarifying. Something about this approach has rubbed me the wrong way, but I am stressed IRL at the moment and that is probably making me pricklier than I would otherwise be. (By the way, so that I don't waste your time, I should say that I might stop responding at some point before anything is resolved. If so, please don't interpret that as an unfriendly or insulting response -- it will just be the result of me realising that I'm finding the conversation stressful, and/or spending too much time on it, and should probably leave it alone.)
>I described a view of what happiness is, and the implication of that view that happiness isn't a terminal value.
I think you're referring to the following lines -- let me know if I've missed others.
>Happiness is something that sometimes happens when you and the world are on the way towards good things.
Depending on exactly how you mean this, I think it might beg the question, or at least be missing a definition of 'good things' and a justification for why that excludes happiness. Or, if you mean 'good things' loosely enough that I might agree with the quoted sentence, I don't think it bears on the question of whether happiness is/ought to be a terminal value.
>The quality of conscious experience you're talking about is a derivative aspect, like a component or a side effect, of a process of your mind learning to understand and affect the world to get what it wants.
I would quibble with this, if "your mind learning to understand and affect the world to get what it wants" is intended as an exhaustive description of how happiness arises -- but more to the point, I don't see how it implies that I shouldn't consider happiness to be a fundamentally, intrinsically good thing.
>happiness is what your brain does when its "is the world getting better" detector is returning "hell yeah!"
Again, even if this is true, I don't think it bears on the fundamental point. I don't see anything necessarily unreasonable about wanting everyone, including me, to experience the feeling they get when their 'world getting better' module is firing. (And seeing that feeling, rather than whatever triggers it, as the really important thing.)
I think you see a conflict between one (unconscious) part of my mind saying 'the world is getting better [in some way that isn't entirely about me or other people feeling happier or suffering less], have some happiness as a reward!' and the part that writes and talks and (thinks that it) reasons saying 'increasing happiness and reducing suffering is what it means for the world to get better!'. But I just don't have a problem with that conflict, or at least I don't see how it implies that the 'happiness is good' side is wrong. (Likewise for the conflict between my 'wanting' one thing in a moral sense and 'wanting' other, sometimes conflicting things in other senses.)
>Basically I think our disagreement is over whether the impartial judgements actually share your values. I've been trying to point out how it looks a lot more like the impartial judgements are using a different criterion for what constitutes a better world than the criterion implied by your desires. E.g. on the one hand you're afraid of your loved ones dying, which I take to imply that the world is better if your loved ones don't die. On the other hand some of your other statements sound like the only problem is the fear and unhappiness around death. So basically my question is, how do you know that the impartial conclusions are right, given that you still have fear of your loved ones dying?
From a certain perspective I'm not confident that they're right, but I don't see any good reason for you to be confident that they're wrong. I am confident that they're right in the sense that my ground level, endorsed-upon-careful-reflection moral/evaluative convictions just seem like fundamental truths to me. I realise there's absolutely no reason for anyone else to find that convincing -- but I think everyone who has moral or axiological opinions is making the same leap of faith at some point, or else fudging their way around it by conflating the normative and the merely descriptive. When you examine your convictions and keep asking 'why', at some point you're either going to hit bottom or find yourself using circular reasoning. (Or I guess there could be some kind of infinite regress, but I'm not sure what that would look like and I don't think it would be an improvement over the other options.)
I know that's probably not very satisfying, but that's basically why I said above that I can't see us changing each other's mind at this fundamental level. I've got my ground-level convictions, you've got yours, we've both thought about them pretty hard, and unless one of us can either prove that the other is being inconsistent or come up with a novel and surprisingly powerful appeal to intuition, I'm not sure what we could say to each other to shift them.
>Another point that might matter is that I don't think it makes sense to talk about "moments of conscious experience" as isolated from the person who's experiencing them. Which opens the door for death mattering--if we care about conscious experience, and conscious experience implies identity across time, we might care about those identities continuing. The reason I think it doesn't make sense to talk of isolated experience is that experience seems like it always involves beliefs and significance, not mere valence or data.
I should have gone to bed a while ago and this is a big topic, so I won't try to respond now, but I agree that this sort of disagreement is probably important. I do think I'm more likely to change my views on personal identity, moments of experience etc. than on most of what we've been discussing, so it could be fruitful to elaborate on your position if you feel like it.
(But I should make it clear that I see consciousness -- in the 'hard problem', qualia, David Chalmers sense -- as real and irreducible (and, as is probably obvious by now, supremely important). That doesn't mean I think worrying about the hard problem is productive -- as best I can tell there's no possible argument or set of empirical data that would solve it -- but I find every claim to have dissolved the problem, every attempt to define qualia out of existence, etc., excruciatingly unconvincing. So if your position on personal identity etc. conflicts with mine on those points, it would probably be a waste of time to elaborate on it with the intention of convincing me -- though of course it could still serve to clarify a point of disagreement.)
↑ comment by TekhneMakre · 2021-09-08T02:39:24.352Z · LW(p) · GW(p)
>I think everyone who has moral or axiological opinions is making the same leap of faith at some point, or else fudging their way around it by conflating the normative and the merely descriptive
This may be right, but we can still notice differences, especially huge ones, and trace back their origins. It actually seems pretty surprising, and at least interesting, if you and I have wildly, metaphysically disparate values.
↑ comment by tslarm · 2021-09-08T05:04:28.992Z · LW(p) · GW(p)
To this end I think it would help if you laid out your own ground-level values, and explained to whatever extent is possible why you hold them (and perhaps in what sense you think they are correct).
↑ comment by TekhneMakre · 2021-09-08T05:31:07.886Z · LW(p) · GW(p)
I mean, at risk of seeming flippant, I just want to say "basically all the values your 'real person' holds"?
Like, it's just all that stuff we both think is good. Play, life, children, exploration; empowering others to get what they want, and freeing them from pointless suffering; understanding, creating, expressing, communicating, ...
I'm just... not doing the last step where I abstract that into a mental state, and then replace it with that mental state. The "correctness" comes from Reason, it's just that the Reason is applied to more greatly empower me to make the world better, to make tradeoffs and prioritizations, to clarify things, to propagate logical implications... For example, say I have an urge to harm someone. I generally decide to nevertheless not harm them, because I disagree with the intuition. Maybe it was put there by evolution fighting some game I don't want to fight, maybe it was a traumatic reaction I had to something years ago; anyway, I currently believe the world will be better if I don't do that. If I harm someone, they'll be less empowered to get what they want; I'll less live among people who are getting what they want, and sharing with me; etc.
↑ comment by TekhneMakre · 2021-09-08T02:26:51.717Z · LW(p) · GW(p)
> I don't see how it implies that I shouldn't consider happiness to be a fundamentally, intrinsically good thing
Because it's replacing the thing with your reaction to the thing. Does this make sense, as stated?
What I'm saying is, when we ask "what should I consider to be a fundamentally good thing", we have nothing else to appeal to other than (the learned generalizations of) those things which our happiness comes from. Like, we're asking for clarification about what our good-thing-detectors are aimed at. So I'm pointing out that, on the face of it, your stated fundamental values---happiness, non-suffering---are actually very very different from the pre-theoretic fundamental values---i.e. the things your good-thing-detectors detect, such as having kids, living, nurturing, connecting with people, understanding things, exploring, playing, creating, expressing, etc. Happiness is a mental event, those things are things that happen in the world or in relation to the world. Does this make sense? This feels like a fundamental point to me, and I'm not sure we've gotten shared clarity about this.
>I don't see anything necessarily unreasonable about wanting everyone, including me, to experience the feeling they get when their 'world getting better' module is firing. (And seeing that feeling, rather than whatever triggers it, as the really important thing.)
I mean, it's not "necessarily unreasonable", in the sense of the orthogonality thesis of values---one could imagine an agent that coherently wants certain mental states to exist. I'm saying a weaker claim: it's just not what you actually value. (Yes this is in some sense a rude claim, but I'm not sure what else to do, given that it's how the world seems to me and it's relevant and it would be more rude to pretend that's not my current position. I don't necessarily think you ought to engage with this as an argument, exactly. More like a hypothesis, which you could come to understand, and by understanding it you could come to recognize it as true or false of yourself; if you want to reject it before understanding it (not saying you're doing that, just hypothetically) then I don't see much to be gained by discussing it, though maybe it would help other people.) A reason I think it's not actually what you value is that I suspect you wouldn't press a button that would make everyone you love be super happy, with no suffering, and none of their material aims would be achieved (other than happiness), i.e. they wouldn't explore or have kids, they wouldn't play games or tell stories or make things, etc., or in general Live in any normal sense of the word; and you wouldn't press a button like that for yourself. Would you?
↑ comment by tslarm · 2021-09-08T04:59:28.515Z · LW(p) · GW(p)
>Because it's replacing the thing with your reaction to the thing. Does this make sense, as stated?
Not without an extra premise somewhere.
>we're asking for clarification about what our good-thing-detectors are aimed at
I think this is something we disagree on. It seems to me that one of your premises is "what is good = what our good-thing detectors are aimed at", and I don't share that premise. Or, to the extent that I do, the good-thing detector I privilege is different from the one you privilege; I see no reason to care more about my pre-theoretic good-thing detector than the 'good-thing detector' that is my whole process of moral and evaluative reflection and reasoning.
>your stated fundamental values---happiness, non-suffering---are actually very very different from the pre-theoretic fundamental values---i.e. the things your good-thing-detectors detect, such as having kids, living, nurturing, connecting with people, understanding things, exploring, playing, creating, expressing, etc.
That's the thing -- I'm okay with that, and I still don't see why I ought not to be.
>Happiness is a mental event, those things are things that happen in the world or in relation to the world. Does this make sense?
Of course -- and the mental events are the things that I think ultimately matter.
>I'm saying a weaker claim: it's just not what you actually value.
I think this is true for some definitions of value, so to some degree our disagreement here is semantic. But it also seems that we disagree about which senses of 'value' or 'values' are important. I have moral values that are not reducible to, or straightforwardly derivable from, the values you could infer from my behaviour. Like I said, I am imperfect by my own lights -- my moral beliefs and judgments are one important input to my decision-making, but they're not the only ones and they don't always win. (In fact I'm not always even thinking on those terms; as I presume most people do, I spend a lot of my time more or less on autopilot. The autopilot was not programmed independently from my moral values, but nor is it simply an implementation (even an imperfect heuristic one) of them.)
>A reason I think it's not actually what you value is that I suspect you wouldn't press a button that would make everyone you love be super happy, with no suffering, and none of their material aims would be achieved (other than happiness), i.e. they wouldn't explore or have kids, they wouldn't play games or tell stories or make things, etc., or in general Live in any normal sense of the word; and you wouldn't press a button like that for yourself. Would you?
I've often thought about this sort of question, and honestly it's hard to know which versions of wireheading/experience-machining I would or wouldn't do. One reason is that in all realistic scenarios, I would distrust the technology and be terrified of the ways it might backfire. But also, I am well aware that I might hold back from doing what I believed I ought to do -- perhaps especially with respect to other people, because I have a (healthy, in the real world) instinctive aversion to overriding other people's autonomy even for their own good. Again though, the way I use these words, there is definitely no contradiction between the propositions "I believe state of the world X would be better", "I believe I ought to make the world better where possible", and "in reality I might not bring about state X even if I could".
edit: FWIW on the concrete question you asked, IF I somehow had complete faith in the experience machine reliably working as advertised, and IF all my loved ones were enthusiastically on board with the idea, I reckon I would happily plug us all in. In reality they probably wouldn't be, so I would have to choose between upsetting them terribly by doing it alone, or plugging them in against their wishes, and I reckon in that case I would probably end up doing neither and sticking with the status quo.
edit again: That idea of "complete faith" in the machine having no unexpected downsides is hard to fully internalise; in all realistic cases I would have at least some doubt, and that would make it easy for all the other pro-status-quo considerations to win out. But if I was truly 100% convinced that I could give myself and everyone else the best possible life, as far as all our conscious experiences were concerned? It would be really hard to rationalise a decision to pass that up. I still can't imagine doing it to other people if they were begging me not to, but I think I would desperately try to convince them and be very upset when I inevitably failed. And if/when there was nobody left to be seriously hurt by my plugging myself in, and the option was still available to me, I think I'd do that.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-08T05:24:45.008Z · LW(p) · GW(p)
>to some degree our disagreement here is semantic
The merely-lexical ambiguity is irrelevant of course. You responded to the top level post giving your reasons for not taking action re/ cryonics. So we're just talking about whatever actually affects your behavior. I'm taking sides in your conflict, trying to talk to the part of you that wants to affect the world, against the part of you that wants to prevent you from trying to affect the world (by tricking your good-world-detectors).
>I see no reason to care more about my pre-theoretic good-thing detector than the 'good-thing detector' that is my whole process of moral and evaluative reflection and reasoning.
Reflection and reasoning, we can agree these things are good. I'm not attacking reason, I'm trying to implement reason by asking about the reasoning that you took to go from your pre-theoretic good-thing-detector to your post-theoretic good-thing judgements. I'm pointing out that there seems, prima facie, to be a huge divergence between these two. Do you see the apparent huge divergence? There could be a huge divergence without there being a mistake, that's sort of the point of reason, to reach conclusions you didn't know already. It's just that I don't at all see the reasoning that led you there, and it still seems to have produced wrong conclusions. So my question is, what was the reasoning that brought you to the conclusion that, despite what your pre-theoretic good-thing-detectors are aimed at (play, life, etc.), actually what's a good thing is happiness (contra life)? So far I don't think you've described that reasoning, only stated that its result is that you value happiness. (Which is fine, I haven't asked so explicitly, and maybe it's hard to describe.)
Replies from: tslarm↑ comment by tslarm · 2021-09-08T06:01:53.870Z · LW(p) · GW(p)
The 'reasoning' is basically just teasing out implications, checking for contradictions, that sort of thing. The 'reflection' includes what could probably be described as a bunch of appeals to intuition. I don't think I can explain or justify those in a particularly interesting or useful way; but I will restate that I can only assume you're doing the same thing at some point.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-08T08:05:45.813Z · LW(p) · GW(p)
How, in broad strokes, does one tease out the implication that one cares mainly about happiness and suffering, from the pre-theoretic caring about kids, life, play, etc.?
Replies from: tslarm↑ comment by tslarm · 2021-09-08T09:03:41.619Z · LW(p) · GW(p)
Well I pre-theoretically care about happiness and suffering too. I hate suffering, and I hate inflicting suffering or knowing others are suffering. I like being happy, and like making others happy or knowing they're happy. So it's not really a process of teasing out, it's a process of boiling down, by asking myself which things seem to matter intrinsically and which instrumentally. One way of doing this is to consider hypothetical situations, and selectively vary them and observe the difference each variation makes to my assessment of the situation. (edit: so that's one place the 'teasing out' happens -- I'll work out what value set X implies about hypothetical scenarios a, b, and c, and see if I'm happy to endorse those implications. It's probably roughly what Rawls meant by 'reflective equilibrium' -- induce principles, deduce their implications, repeat until you're more or less satisfied.)
Basically, conscious states are the only things I have direct access to, and I 'know' (in a way that I couldn't argue someone else into accepting, if they didn't perceive it directly, but that is more obvious to me than just about anything else) that some of them are good and some of them are bad. Via emotional empathy and intellectual awareness of apparently relevant similarities, I deduce that other people and animals have a similar capacity for conscious experience, and that it's good when they have pleasant experiences and bad when they have unpleasant ones. (edit: and these convictions are the ones I remain sure of, at the end of the boiling-down/reflective equilibrium process)
I think I'll bow out of the discussion now -- I think we've both done our best, but to be blunt, I feel like I'm having to repeatedly assure you that I do mean the things I've said and I have thought about them, and like you are still trying to cure me of 'mistakes' that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don't share.
Replies from: TekhneMakre, TekhneMakre↑ comment by TekhneMakre · 2021-09-08T09:19:25.609Z · LW(p) · GW(p)
>Well I pre-theoretically care about happiness and suffering too.
That you think this, and that it might be the case, for the record, wasn't previously obvious to me, and makes a notch more sense out of the discussion.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-08T09:21:11.139Z · LW(p) · GW(p)
For example, it makes me curious as to whether, when observing, say, a pre-civilization group of humans, I'd end up wanting to describe them as caring about happiness and suffering, beyond caring about various non-emotional things.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-08T09:25:41.879Z · LW(p) · GW(p)
Ok, actually I can see a non-Goodharting reason to care about emotional states as such, though it's still instrumental, so isn't what tslarm was talking about: emotional states are blunt-force brain events, and so in a context (e.g. modern life) where the locality of emotions doesn't fit into the locality of the demands of life, emotions are disruptive, especially suffering, or maybe more subtly any lack of happiness.
↑ comment by TekhneMakre · 2021-09-08T09:17:17.817Z · LW(p) · GW(p)
>I think I'll bow out of the discussion now
Ok, thanks for engaging. Be well. Or I guess, be happy and unsufferful.
>I think we've both done our best, but to be blunt, I feel like I'm having to repeatedly assure you that I do mean the things I've said and I have thought about them, and like you are still trying to cure me of 'mistakes' that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don't share.
I don't want to poke you more and risk making you engage when you don't want to, but just as a signpost for future people, I'll note that I don't recognize this as describing what happened (except of course that you felt what you say you felt, and that's evidence that I'm wrong about what happened).
Replies from: tslarm↑ comment by tslarm · 2021-09-08T09:33:05.425Z · LW(p) · GW(p)
>Ok, thanks for engaging. Be well. Or I guess, be happy and unsufferful.
Cheers. I won't plug you into the experience machine if you don't sign me up for cryonics :)
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-08T09:34:35.158Z · LW(p) · GW(p)
Deal! I'm glad we can realize gains from trade across metaphysical chasms.
comment by rsaarelm · 2021-09-05T10:56:27.628Z · LW(p) · GW(p)
One thing I've started thinking more about after first hearing of cryonics is that keeping an organization around and alive in the long term, on the order of centuries, is really hard. One of the first ways to fail in the O-ring chain leading from cryopreservation to successful revivification is for the cryonics organization storing the vitrified bodies to dissolve or become terminally incompetent, leaving the bodies to melt and rot.
Concerns about health care system dysfunction notwithstanding, there is still very thick social proof that seeing an accredited doctor is a net positive when you're ill, and also that the medical system will continue to be reasonably reliable and socially supported, so that a medicine you rely on in the long term suddenly becoming unavailable is an alarming and unexpected event rather than a common occurrence. The social proof of cryonics orgs is mostly that they're sort of there, about as notable as they were ten or twenty years ago, and they have absolutely no buy-in from wider society or legislatures. That buy-in would create expectations that random emergency responders and medical personnel will help fulfill your cryonics contract when you're incapable of action, or that there would be some reaction other than "good riddance to the charlatans" if the orgs looked like they were about to go under.
As it stands, I can apply the abstraction "if I get sick, I can go to the hospital", because "hospital" is a robust category within the wider society. I do not feel like I can currently make the similarly abstract statement "I make a contract with a cryonics facility to have myself cryopreserved when I'm clinically dead", because there currently isn't a social category of "cryonics facility" the way there is one of "hospital". There is a small handful of particular cryonics organizations of varying apparent competency, founded and run by people operating from a particular late 20th century techno-optimistic subculture (the one that things like Extropianism came out of), which seems to be both in decline and actively shunned by many ideologues of a more recent cultural zeitgeist. As it stands, I'm entirely indifferent to a hospital CEO retiring, because I'm quite confident the wider society has the will and ability to perpetuate the hospital organization, but I'm quite a bit concerned about what will happen with the present-day cryonics orgs when their CEOs retire, because the orgs have no similar societal support network, and it also looks like we might be moving on from the cultural period that inspired competent people to found or join cryonics orgs.
Replies from: JBlack, TekhneMakre↑ comment by JBlack · 2021-09-05T12:37:46.835Z · LW(p) · GW(p)
I'd be substantially more confident in cryonics if it were actually supported by society with stable funding, regulations, transparency, priority in case of natural disasters, ongoing well-supported research, guarantees about future coverage of revival and treatment costs, and so on.
Even then I have strong doubts about uninterrupted maintenance of clients for anything like a hundred years. Even with the best intentions, more than 99.9999% uptime for any single organization (including through natural disasters and changes in society) is hard. And yet, that's the easier part of the problem.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-05T22:03:44.278Z · LW(p) · GW(p)
>99.9999%
I think this is too many nines. If you have to last 100 years, say, and LN takes over a month to boil off and let the patients thaw, then it's more like 99.9% uptime.
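Rough arithmetic for that, assuming (purely for illustration) a 100-year horizon and about a month of boil-off margin before patients thaw:

```python
# Back-of-envelope sketch: what uptime corresponds to tolerating roughly
# one month of LN2-refill downtime over a 100-year storage run?
# Assumed numbers (mine, for illustration): 100-year horizon, ~1 month of
# boil-off margin before the patients thaw.
YEARS = 100
MONTHS_OF_MARGIN = 1

total_months = YEARS * 12                      # 1200 months
allowed_downtime_fraction = MONTHS_OF_MARGIN / total_months
required_uptime = 1 - allowed_downtime_fraction

print(f"{required_uptime:.4%}")                # ~99.92%: roughly three nines, not six
```

(And since only a single outage longer than the boil-off time actually matters, even this overstates the requirement somewhat.)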
>if it were actually supported by society
So, I agree with this, but I want to make a clear distinction between a supposed "current default" state of the world/society, vs. what can/will actually happen. When Peter Thiel is asked to predict the future, he responds with something like: it doesn't quite make sense to predict the parts of the future that depend on our choices, and it makes more sense to think of it as deciding rather than predicting. Part of the point of my post is that an important strategic consideration is "If we all had Hope in this, then would it succeed?", sometimes more important than "If I act in causal best-response to the current default, what can I get?". Does this make sense to you? This is maybe the main point I want to get across.
↑ comment by TekhneMakre · 2021-09-05T21:19:48.844Z · LW(p) · GW(p)
I think this makes sense. The main things that make my probabilities still fairly high for success:
1. It doesn't seem super complicated to keep the basic functionality going. I haven't looked into it much, but IIUC basically you just keep putting liquid nitrogen in the dewars, and LN is not expensive or hard to produce.
2. Probably economic and technological progress is going to accelerate this century, implying more material and technological abundance, meaning that cryopreservation will be less difficult and won't need to be maintained for too long.
3. Part of the point of my post is to discuss the dynamic that results from people saying "Oh, well, most everyone else doesn't want X, so X won't succeed, so I will not act as though X will succeed", but where this is actually a false lack of Hope. Of course, it could be a correct lack of Hope; but there's self-reinforcing dynamics here, so I want to point out to you that "what society thinks" has some non-determinacy in it, i.e. you and people like you can to some extent in some ways choose what society thinks.
comment by gjm · 2021-09-04T11:58:57.370Z · LW(p) · GW(p)
This, or something like it, is also one reason why the sorts of hopes and fears about AI that are common on LW are not so common in the rest of the world. "These people say that technological developments that might be just around the corner have the potential to reshape the world completely, and therefore we need to sink a lot of time and effort and money into worrying about 'AI safety'; well, we've heard that sort of thing before. We've learned not to dedicate ourselves to millenarian religions, and this is just the same thing in fancy dress."
It's a very sensible heuristic. It will fail catastrophically any time there is an outrageously large threat or opportunity that isn't easy to see. (Arguably it's been doing so over the last few decades with climate change. Arguably something similar is at work in people who refuse vaccination on the grounds that COVID-19 isn't so bad as everyone says it is.) Not using it will fail pretty badly any time there isn't an outrageously large threat or opportunity, but there's something that can be plausibly presented as one. I don't know of any approach actually usable by a majority of people that doesn't suffer one or the other of those failure modes.
comment by Dagon · 2021-09-05T15:11:11.100Z · LW(p) · GW(p)
I'm trying to figure out why people don't want to really consider cryonics as an option.
Which people? Many have considered cryonics (and decided not to pursue it), many have not. You jump to some generalizations without many examples of why you think these are the correct cruxes, and you don't state who you think you can help/convince with this analysis.
I think you make a VERY wrong turn when you go from "fear of being scammed" to "fear of false hope". It's not the false hope that makes scams unpleasant for me (it may actually be part of it for some); it's the WASTE involved in thinking about, paying for, and otherwise investing in the scam.
Cryonics is not a scam, in that everyone involved seems to be quite sincere in delivering as much of their promise as they can. But it's the same cost/benefit calculation as many scams - some expense for a hard-to-measure and probably very small chance of success.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-05T21:31:03.335Z · LW(p) · GW(p)
> it's the WASTE involved in thinking about, paying for, and otherwise investing in the scam.
That is what I said in the post, which indicates you didn't think about what I said.
comment by Tom (tom-1) · 2021-09-05T13:49:35.335Z · LW(p) · GW(p)
It does surprise me that cryonics is not more popular than it is.
I'd like to add another consideration to your list of impediments: the difficulty of actually executing upon a plan to get oneself cryopreserved.
Let's say you are not concerned with sudden, unexpected death - which should make things simpler and cheaper - but you do want a plan to preempt mental decline, eg. dementia. Assume also that you are completely confident that you can do whatever is required of yourself - the hardest part should be something akin to committing suicide, but I think this would be less difficult for the sort of people who would consider cryopreservation (eg. no religious qualms about suicide).
Nevertheless, upon investigation, it appears to be near impossible in the current US regulatory environment to make this happen. And it's much worse if you are not in the US, since you then need to get yourself or your body/brain there.
Perhaps the development of cryopreservation operations in countries with less developed (or less well enforced) regulatory frameworks would help address this, eg. I think I read that there is now a company in Russia...
comment by JBlack · 2021-09-04T14:06:35.269Z · LW(p) · GW(p)
While it does seem worthwhile from a purely selfish point of view, $150k+ for a small chance of revival (my estimate: no more than 2%) seems expensive from the point of view of things that money can buy to promote the future welfare of people I care about.
Replies from: TekhneMakre, TekhneMakre↑ comment by TekhneMakre · 2021-09-04T15:23:56.127Z · LW(p) · GW(p)
At what probability, roughly, would it start to be appealing? Is it more like, say, 5%, 15%, or 50%?
What are the main points of failure that you anticipate in the plan? E.g. do you think the preservation technique is probably insufficient; brain emulation or nanotech etc. won't be feasible; human society won't make it to a transhumanist world; people in the future won't care enough; or what?
Replies from: JBlack↑ comment by JBlack · 2021-09-05T04:29:34.318Z · LW(p) · GW(p)
If I could be confident of 5%, it would be attractive right now. The problem isn't really any single point of failure; the problem is that there are way too many points of failure that all have pretty good chances of happening, any single one of which dooms all or most of the clients. Even so, if I had substantially more assets then it would be attractive even at 0-2%.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-05T05:28:50.193Z · LW(p) · GW(p)
Most of the points of failure seem like things that could be averted by a small group of people who care about the patients not thawing. Usually when people care about something it is at least feasible, if not inevitable, that all the avertable failures are actually averted. Rocket launches, for example, have at least hundreds of points of potential failure. It sounds like your reasoning would imply that rocket launches could never ever happen. In other words, I'm saying, the points of failure seem pretty correlated with each other, with shared factors being "civilization doesn't fall apart" and "there's people with resources who care about the patients".
See https://www.lesswrong.com/posts/ebiCeBHr7At8Yyq9R/being-half-rational-about-pascal-s-wager-is-even-worse?commentId=4ZrmawPKwNqMbzMXy [LW(p) · GW(p)]
and
https://www.lesswrong.com/posts/ebiCeBHr7At8Yyq9R/being-half-rational-about-pascal-s-wager-is-even-worse?commentId=XxANusJcNkiRcq7kF [LW(p) · GW(p)]
Is your 5% here an actual probability estimate (e.g., you have experience making probability estimates that you get feedback on and followed a similar process in this case; or, you'd make a bet against someone who said "20% probability" or on the other side "1% probability" if it were easy to have the bet resolved), or is it more an impressionistic nonquantitative statement of your sense of the plausibility of cryonics working? I ask because maybe the meta-question here is less "what specifically is JBlack's expected utility calculation and where might there be important flaws, if any", and more "how is JBlack dealing with decision making for nebulous questions like cryonics".
Replies from: JBlack↑ comment by JBlack · 2021-09-05T08:34:51.948Z · LW(p) · GW(p)
I have in fact looked into cryonics as a possible life-extension mechanism, looked at a bunch of the possible failure modes, and many of these cannot be reliably averted by a group of well-meaning people. If you're actually trying to model "people who are not currently investing in cryonic preservation", then it does little good to post hypotheses such as "they are too scared of false hope". Maybe some are, but certainly not all.
Also yes, my threshold around 5% is where I have calculated that it would be "worth it to me", and my upper bound (not estimate) of 2% is based on some weeks of investigation of cryonic technology, the social context in which it occurs, and expectations for the next 50 years. If there have been any exciting revolutions in the past ten years that significantly alter these numbers, I haven't seen them mentioned anywhere (including company websites and sites like this).
As far as bets go, I am literally staking my life on this estimate, am I not?
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-05T08:48:36.290Z · LW(p) · GW(p)
>then it does little good to post hypotheses such as "they are too scared of false hope"
What would you recommend? I spend time talking to such people, but I'm just one person and I wish there were more theory about this. I'm surprised LW isn't interested.
>Maybe some are, but certainly not all.
What do you view as the main factors?
>many of these cannot be reliably averted by a group of well-meaning people
Which ones are you thinking of? Some that I'm worried about:
-- forceful intervention (e.g. by a state or invader) that prevents people from putting LN in the dewars
-- collapse of civilization, such that it's too expensive to produce LN and even people who care a lot can't rig something together (have you looked into this? do you know under what circumstances this might happen?)
-- X-risk, e.g. AGI risk, killing everyone (in particular, preventing humanity from developing nanotech etc.)
> investigation of cryonic technology, the social context in which it occurs, and expectations for the next 50 years
Which seem like the key points of failure? You said there are lots of points of failure, but that doesn't by itself give an estimate: if we expand conjunctions, we also have to expand disjunctions. For example, I could be worried that I might die in a way that makes my cryo-preservation worse--I'm left dead for a few days, there's physical trauma to my brain, etc. This does decrease the probability of success, but we also have to factor in uncertainty about how good preservation needs to be; it's not just P(perfect preservation) x P(humanity makes it to nanotech), it's additionally P(bad preservation) x P(nanotech) x P(souls can be well reconstructed from bad preservations).
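Spelling that out with a toy calculation (all numbers invented for illustration, not estimates of anything):

```python
# Toy illustration: expanding the disjunction over preservation quality,
# rather than counting only the "everything goes right" conjunction.
# All probabilities below are made up for the sake of the example.
p_nanotech = 0.3          # humanity develops the needed revival tech
p_good_preservation = 0.5
p_bad_preservation = 0.3  # e.g. delayed suspension, some trauma
p_revive_from_good = 0.8
p_revive_from_bad = 0.2   # reconstruction from degraded tissue

# Conjunction-only view: only the perfect-preservation path counts.
p_conjunction_only = p_good_preservation * p_nanotech * p_revive_from_good

# Disjunctive view: degraded preservation is a second success path.
p_with_disjunction = p_nanotech * (
    p_good_preservation * p_revive_from_good
    + p_bad_preservation * p_revive_from_bad
)

print(p_conjunction_only, p_with_disjunction)  # 0.12 vs 0.138
```

The point isn't the particular numbers; it's that listing more ways to fail without also listing the extra ways to succeed systematically biases the estimate downward.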
↑ comment by TekhneMakre · 2021-09-04T14:38:21.367Z · LW(p) · GW(p)
Life insurance costs in the ballpark of your internet bill, give or take. Have you already maxed out on life insurance?
Replies from: JBlack↑ comment by JBlack · 2021-09-05T04:05:56.746Z · LW(p) · GW(p)
Life insurance is insurance: a way of paying extra to deal with expensive events that have a low probability of occurring, to give you a high probability of (financially) surviving them. Paying that amount extra for something that is nearly guaranteed to happen, and that gives you only a small chance of getting past it, seems the exact opposite of the case where insurance makes sense.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-05T05:01:19.839Z · LW(p) · GW(p)
I don't follow. The point is that (1) you don't know when you're going to die, it could be tomorrow and it could be in 30 years, and (2) you don't have $150,000 lying around right now, or at any given time between now and 100 years from now, to pay for your preservation if you happen to die. If, for example, you get a so-called "universal life" policy, you pay into a sort of savings account for some decades; after some point it's just money, I think, and until that point it's worth $150,000 (or whatever you're paying for) if you die. It's like "leasing money that you only get if you die", so that, like a lease, you get to have the thing the whole time, not just after you've saved up the money.
(This is a normal thing that people do to fund cryonic suspension: https://www.cryonics.org/resources/life-insurance )
Replies from: JBlack↑ comment by JBlack · 2021-09-05T12:21:25.882Z · LW(p) · GW(p)
I'm aware that this is a thing that people do. I expect that people doing it have very much (at least order-of-magnitude) greater confidence that the process will work, since the probability thresholds that make cryonics-funded-by-insurance worthwhile are substantially higher than for cryonics-funded-by-investments, unless capacity for investment is negligible and the insurance is very cheap.
That is, it's really only for people in their 20s who don't have much income and yet want to pay ten thousand dollars or so to reduce the probability of dying permanently in the next decade by something like 0.0005. Every decade in which they don't build up enough to pay for it outright, they're on a losing treadmill, because the premiums typically more than double per decade of age, and on top of that they have been forgoing investment growth on that money the whole time.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-05T22:35:20.417Z · LW(p) · GW(p)
>Every decade in which they don't build up enough to pay for it outright, they're on a losing treadmill because the premiums typically more than double per decade of age,
Some universal life insurance policies are fixed premium. But yeah, this mainly helps if you get it when you're younger.
>reduce the probability of dying permanently in the next decade by something like 0.0005
A male in the US aged 20-30 has about a 0.015 chance of dying in that period; a female, about half that. https://www.ssa.gov/oact/STATS/table4c6.html
We can condition on being healthy, not taking big risks, and not committing suicide; I'm not sure how much that changes things. Your estimates suggest it lowers the risk by a factor of 15 or so, if the probability of cryonics working is 50%. Eyeballing causes of death (https://www.worldlifeexpectancy.com/usa-cause-of-death-by-age-and-gender), this seems implausible; a factor of, say, 3 seems plausible, though maybe dying from homicide and car accidents is much more controllable than I'm assuming.
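Here's the arithmetic behind that "factor of 15", using the SSA figure above, your stated 0.0005 reduction, and an assumed-for-illustration 50% chance of cryonics working:

```python
# Back-of-envelope sketch. The 50% figure is an illustrative assumption
# from the comment above, not anyone's actual estimate.
p_die_decade_male_20s = 0.015   # ~chance a US male aged 20-30 dies in that decade (SSA table)
p_cryonics_works = 0.5          # illustrative value
stated_reduction = 0.0005       # reduction in permanent-death risk stated above

# Implied decade death probability after conditioning on health, low risk-taking, etc.
implied_p_die = stated_reduction / p_cryonics_works      # 0.001
factor = p_die_decade_male_20s / implied_p_die           # ~15x lower than the base rate

print(implied_p_die, factor)
```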
Replies from: JBlack↑ comment by JBlack · 2021-09-06T08:54:40.385Z · LW(p) · GW(p)
What you are describing is covered by the condition at the start of my post: "very much (at least order-of-magnitude) greater confidence that the process will work".
My calculation is based on the minimum probability of it working at which it would be worthwhile for me, which is around a 5% chance of success.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2021-09-06T09:25:19.121Z · LW(p) · GW(p)
I'm saying that people in their 20s who have at least an order of magnitude greater confidence than you that cryonics will work don't need to care at the level of 0.0005, just at the level of 0.003, which is much greater. This seems in conflict with what you wrote.
Replies from: JBlack