What's an important (new) idea you haven't had time to argue for yet?

post by Mati_Roy (MathieuRoy) · 2019-12-10T20:36:31.353Z · LW · GW · 2 comments

This is a question post.

Of course, I'm not expecting you to argue for the idea in your answer; simply mentioning its conclusion is enough. :)

Answers

answer by romeostevensit · 2019-12-10T21:41:05.537Z · LW(p) · GW(p)

People like to pretend they are doing fine by using a cognitive algorithm for judging this that is riddled with the availability heuristic, epistemically unsound dialectics, and other biases. Almost everyone I meet is physically and emotionally unwell and shies away from thinking about it. What rare engagement does happen occurs with close intimates, who are selected for having the same blind spots as they do.

It's like everyone has this massive assumption that things will turn out fine, even though the default outcome is terrible (see rates of obesity and of medicated mental illness). Or they just have learned helplessness about learned helplessness.

comment by Jiro · 2019-12-10T22:19:07.823Z · LW(p) · GW(p)

Under what circumstances do you get people telling you they are fine? That doesn't happen to me very much--"I'm fine" as part of normal conversation does not literally mean that they are fine.

Replies from: romeostevensit
comment by romeostevensit · 2019-12-11T21:06:59.272Z · LW(p) · GW(p)

It's more like strong resistance to change on the theory that the current trajectory doesn't wind up as a flaming pile of wreckage.

comment by Dagon · 2019-12-11T22:10:17.790Z · LW(p) · GW(p)

I think you'd need to define "fine" a little better for me to understand your argument. The likely result, for each of us, is death. I feel pretty helpless about that, and it's both learned and reasoned.

In the meantime, I do some things that make it slightly more pleasant for me and others, and perhaps (very hard to measure) more likely that there will be more others in the future than there would otherwise be. But I also do things that are contrary to those long-term goals, which bring shorter-term joy or expectation of survival.

The default (and inevitable) outcome _is_ terrible. And that's fine.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2019-12-12T12:40:07.938Z · LW(p) · GW(p)

by "that's fine", you mean "I learned helplessness", right? (just checking, because I'm not sure what it means to say that something terrible is fine)

Replies from: Dagon
comment by Dagon · 2019-12-12T19:13:50.470Z · LW(p) · GW(p)

I think it's actual helplessness, not "learned helplessness". "learned", in this context, usually implies "incorrect".

"that's fine" means I believe that a change in my actions will cause more harm than it does good to my life-satisfaction index (or whatever we're calling the artificial construct of "sum of (possibly discounted) future utility stream"). It's perfectly reasonable to say it's "terrible" compared to some non-real ideal, and "fine" compared to actual likely futures.

Unless you're just saying "people are hopelessly bad at modeling the world and making decisions", in which case I agree, but the problem is WAY deeper than you imply here.

Replies from: romeostevensit, MathieuRoy
comment by romeostevensit · 2019-12-12T19:21:10.468Z · LW(p) · GW(p)

I did a bad job of conveying that I'm specifically trying to highlight the attentional failures involved.

comment by Mati_Roy (MathieuRoy) · 2019-12-12T23:03:37.109Z · LW(p) · GW(p)

I'm not sure what causes you to like this framing or what it does for you psychologically, but personally it seems important to me to differentiate what's aligned with my preferences from what's fixable, as two different concepts. I think having a single word for both "things that can be changed, but are okay as they are" and "things that can't be changed, but are not okay as they are" would render my cognition pretty confused, but maybe that's a cognitive hack to feel better or something.

Replies from: Dagon
comment by Dagon · 2019-12-13T00:36:16.290Z · LW(p) · GW(p)

Interesting - I do suspect there's a personality difference that makes us prefer different framings for this. For me, it would be maddening to have preferences over unreachable states.

answer by shminux · 2019-12-11T04:58:19.817Z · LW(p) · GW(p)

Traversable wormholes, were they to exist for any length of time, would act as electric and gravitational Faraday cages, i.e., they would attenuate non-normal electric and gravitational fields exponentially inside their throats, with a scale parameter set by the mouth size (the throat circumference). Consequently, the electric/gravitational field around them is non-conservative. This follows straightforwardly from solving the Laplace equation, but as far as I can find it is never discussed in the literature.
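A minimal sketch of where the exponential attenuation comes from, assuming an idealized straight cylindrical throat of radius $R$ and the flat-space Laplace equation (my simplification, not a full general-relativistic treatment):

$$\nabla^2\phi = 0, \qquad \phi(r,\theta,z) = \sum_{m,n} c_{mn}\, J_m\!\left(\frac{x_{mn}\, r}{R}\right) e^{im\theta}\, e^{-x_{mn} z / R},$$

where $J_m$ are Bessel functions and $x_{mn}$ are the zeros fixed by the boundary condition at the throat wall. Every mode with transverse structure ($x_{mn} > 0$) decays along the throat with length scale $R/x_{mn} \lesssim R$; this is the same below-cutoff attenuation that makes an ordinary conducting tube a Faraday cage, with only the uniform mode surviving.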

answer by Dagon · 2019-12-10T21:27:58.299Z · LW(p) · GW(p)

Not new, but possibly more important than it gets credit for. I haven't had time to figure out why it doesn't apply pretty broadly to all optimization-under-constraints problems.

https://en.wikipedia.org/wiki/Theory_of_the_second_best
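A toy numeric illustration of the theorem, using a made-up consumer problem of my own (the numbers below are illustrative assumptions, not anything from the article): a consumer maximizes $U(x, y) = \ln x + \ln y$ under a budget, and an unremovable distortion caps $x$ below its first-best level. Second best says that holding $y$ at its first-best value is then no longer optimal.

```python
import math

# Toy illustration of the theory of the second best (illustrative numbers).
# A consumer maximizes U(x, y) = ln(x) + ln(y) subject to px*x + py*y <= m.

px, py, m = 1.0, 1.0, 10.0   # prices and income (assumed values)

# First-best allocation: with log utility, spend half the budget on each good.
x_fb = m / (2 * px)          # = 5.0
y_fb = m / (2 * py)          # = 5.0

# An unremovable distortion caps x strictly below its first-best level.
xbar = 2.0

# Naive policy: keep y at its first-best value despite the distortion.
u_naive = math.log(xbar) + math.log(y_fb)

# Second-best policy: reoptimize y given x = xbar (spend the rest on y).
y_sb = (m - px * xbar) / py  # = 8.0, deviating from the first-best y
u_sb = math.log(xbar) + math.log(y_sb)

print(f"first-best (x, y) = ({x_fb}, {y_fb})")
print(f"utility with y held at first-best: {u_naive:.4f}")   # ~2.3026
print(f"utility with y reoptimized:        {u_sb:.4f}")      # ~2.7726
```

Once the constraint on $x$ binds, the optimum violates the undistorted first-order condition for $y$ as well: satisfying the remaining "first-best" conditions one at a time is not what a constrained optimizer should do.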


answer by Mati_Roy (MathieuRoy) · 2019-12-10T20:41:08.137Z · LW(p) · GW(p)

Updated: 2019-12-10

Two of them:

  • there are a lot of advantages to video-recording your life (I want to write much more about this, and have only had time for a very brief overview so far: https://matiroy.com/writings/Should-I-record-my-life.html)
  • if MWI is true and today's cryonics is good enough, we can use a quantum lottery to cryopreserve literally everyone for the cost of setting up the lottery plus some overhead (probably much less than 100k USD); see the rough sketch just below
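A back-of-the-envelope sketch of that cost claim (every number below is my illustrative assumption, not a figure anyone has committed to): under MWI a bet settled by a quantum random bit is won in some branch no matter how small its measure, so a fair-odds ticket whose winning branch pays enough to cryopreserve everyone costs only the stake plus setup overhead.

```python
# Back-of-the-envelope sketch of the quantum-lottery argument.
# All figures are illustrative assumptions.

stake = 20_000             # USD staked on the lottery (assumed)
overhead = 30_000          # setup and administration costs (assumed)
people = 8 * 10**9         # roughly everyone alive (assumed)
cost_per_person = 100_000  # USD per cryopreservation (assumed)

target_payout = people * cost_per_person   # 8e14 USD in the winning branch

# At actuarially fair odds, a stake of S buys win probability S / payout.
p_win = stake / target_payout

print(f"required per-branch win probability: {p_win:.2e}")     # 2.50e-11
print(f"cost paid in every branch: {stake + overhead:,} USD")  # 50,000 USD

# Under MWI, the branch with measure p_win receives the full payout, so
# everyone gets cryopreserved in that branch; under a single-world reading
# this is just an astronomically unlikely bet.
```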
comment by habryka (habryka4) · 2019-12-10T20:56:42.240Z · LW(p) · GW(p)

I am confused. If MWI is true, we are all already immortal, and every living mind is instantiated a very large number of times, probably literally forever (since entropy doesn't actually increase in the full multiverse; the apparent increase is just a result of statistical correlation. But if you buy the quantum immortality argument, you no longer care about this anyway).

Replies from: MathieuRoy, Viliam
comment by Mati_Roy (MathieuRoy) · 2019-12-11T16:26:04.142Z · LW(p) · GW(p)

I disagree, but haven't had time to write up why yet. :)

comment by Viliam · 2019-12-10T21:15:53.935Z · LW(p) · GW(p)

If the lottery paid for cryonics and a luxurious life afterwards, we could increase the chance of luxurious immortality.

Quantum immortality only makes you immortal, but you probably also want to have a good life.

2 comments

Comments sorted by top scores.

comment by Mati_Roy (MathieuRoy) · 2019-12-10T20:41:50.016Z · LW(p) · GW(p)

Maybe it would be a useful norm for people to keep such a list of ideas; it would allow us to move faster.

Replies from: ChristianKl
comment by ChristianKl · 2019-12-10T21:15:09.471Z · LW(p) · GW(p)

It seems that in many cases ideas don't just need arguments in their favor, but also explanation and model-building, to be useful to others.