Speculative Evopsych, Ep. 1

post by Optimization Process · 2018-11-22T19:00:04.676Z · LW · GW · 9 comments

(cw death, religion, suicide, evolutionary psychology, shameless tongue-in-cheek meta-contrarianism)

I have a passing interest in biology, so I recently bought some fruit flies to run experiments on. I did two things to them. First, I bred them for intelligence. The details are kinda boring, so let's fast-forward: after a few tens of millions of generations, they were respectably intelligent, with language and culture and technology and so on.

In parallel with that, and more interestingly, whenever a fly was about to die of injury, I immediately plucked it out of the box, healed it, and put it in a different box (“Box Two”), a magnificent paradise where it blissfully lived out the rest of its days. Evolutionarily, of course, relocation was equivalent to death, and so the flies evolved to treat them the same: you could still make somebody stop affecting your world by stabbing them, and their kin would still grieve and seek revenge – the only difference was the lack of a corpse.

It didn’t really matter that the two boxes were separated only by a pane of glass, and that the flies in Box One could clearly see their “deceased” fellows living fantastic lives in Box Two. They “knew” on an abstract, intellectual level that getting “fatally” wounded wouldn’t actually make them stop having conscious experiences like death would. But evolution doesn’t care about that distinction; so it doesn’t select for organisms that care about that distinction; so the flies generally disregarded Box Two.
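
The selection argument here can be made concrete with a toy simulation. Below is a minimal sketch (mine, not the author's): the trait name `fears_relocation`, the injury-risk numbers, and the population parameters are all illustrative assumptions. The only point it illustrates is that a trait is selected purely through its effect on who stays in the breeding population of Box One; whatever happens in Box Two never feeds back into fitness.

```python
import random

# Illustrative parameters (assumptions, not from the post).
POP_SIZE = 1000
GENERATIONS = 200
MUTATION_RATE = 0.01
INJURY_RISK_IF_FEARFUL = 0.05    # cautious flies rarely get "fatally" injured
INJURY_RISK_IF_FEARLESS = 0.30   # incautious flies get relocated more often

def next_generation(population):
    """One generation: relocation removes a fly from the Box One gene pool,
    no matter how nice Box Two is; survivors reproduce with slight mutation."""
    survivors = []
    for fears_relocation in population:
        risk = INJURY_RISK_IF_FEARFUL if fears_relocation else INJURY_RISK_IF_FEARLESS
        if random.random() > risk:        # not relocated; stays in the gene pool
            survivors.append(fears_relocation)
    if not survivors:                     # vanishingly unlikely guard
        survivors = population
    children = []
    for _ in range(POP_SIZE):
        parent = random.choice(survivors)
        child = parent if random.random() > MUTATION_RATE else not parent
        children.append(child)
    return children

population = [random.random() < 0.5 for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = next_generation(population)

print(f"fraction fearing relocation: {sum(population) / POP_SIZE:.2f}")
```

Under these assumed numbers the fearful trait comes to dominate, even though Box Two is, by stipulation, paradise: Box Two welfare simply never enters the update.
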

A small subculture in Box One claimed that “if anybody actually believed in Box Two and all its wonders, they’d stab themself through the heart in order to get there faster. Everybody’s literally-mortal fear of relocation proves that they don’t truly believe in Box Two, they only – at best – believe they believe.”

Strangely, nobody found this argument convincing.

9 comments

Comments sorted by top scores.

comment by Ben Pace (Benito) · 2018-11-22T21:33:48.060Z · LW(p) · GW(p)

Edit: This comment substantially misunderstood the post. See Jessica's comment [LW(p) · GW(p)], which contains an explanation and examples.

shameless tongue-in-cheek meta-contrarianism

Indeed. I'm quite baffled that you wrote this.

If there is a subtler point you're trying to make, I feel like this example is a bad one [LW · GW].

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-11-22T21:39:35.804Z · LW(p) · GW(p)

How is this a bad/political example? I read this as a useful and interesting thought experiment on the implications of epiphenomena with respect to ecosystems.

Replies from: taymon-beal, Benito
comment by Taymon Beal (taymon-beal) · 2018-11-23T00:07:14.660Z · LW(p) · GW(p)

Would you mind saying in non-metaphorical terms what you thought the point was? I think this would help produce a better picture of how hard it would have been to make the same point in a less inflammatory way.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-11-23T00:39:39.677Z · LW(p) · GW(p)

Ecosystems, and organisms in them, generally don't care about stuff that can't be turned into power-within-the-ecosystem. Box two exists, but unless the members of box one can utilize box two for e.g. information/computation/communication, it doesn't matter to anyone in box one.

Other places where this applies:

  • Highly competitive industries won't care about externalities or the long-term future. Externalities and the future are in box two. They might not even be modeled.
  • Young people have a personal interest in making their lives better when they're older, but under sufficient competitive pressure (e.g. in competitive workplaces, or in status-based social groups) they won't act on it. Nursing homes are box two.
  • People playing power games at a high level (e.g. in politics) will have a hard time caring about anything not directly relevant to the power game. Most of the actual effects are, from the perspective of the power game, in box two; those effects that actually are directly relevant get modeled as part of the power game itself, i.e. box one. Signing a bill is not about the policy effects; it's about signalling, because the policy effects only affect the power game on a pretty long timescale (and likely won't even be modeled from within the power game), while signalling affects it immediately.

(These examples are somewhat weaker for making the point, because the case is much clearer for evolution; humans are sometimes rational agents that act non-ecologically.)

Replies from: Benito
comment by Ben Pace (Benito) · 2018-11-23T01:31:11.030Z · LW(p) · GW(p)

The examples are really clear and make the OP much more interesting to me, thanks. I retract my criticism.

comment by Ben Pace (Benito) · 2018-11-22T22:39:00.427Z · LW(p) · GW(p)

I feel like the core example here has a long history of being argued for with increasingly strong anti-epistemologies [LW · GW], and so it feels like an especially strong example of a thing to not spend time trying to steelman. We should expect such arguments for it to reliably be really good at making us confused without there being a useful insight behind our confusion.

If the argument is just being used as an example to make an interesting point about, as you say, epiphenomena and selection processes, then I think there is probably a large swathe of examples that aren't this particular example.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2018-11-22T23:17:08.439Z · LW(p) · GW(p)

The point is really analytically simple and doesn't require steelmanning to understand; you can just read the post. You don't need to use the outside view for arguments like this; you can just spend a small amount of effort trying to understand it (argument screens off authority [LW · GW]). It isn't even arguing positively for the existence of the afterlife; it's at most arguing against one particularly weak argument against it.

Contrast this with, say, the ontological argument, which is not analytically simple, has obvious problems/counterexamples, and might be worth understanding more deeply, but likely isn't worth the effort since (from an atheistic perspective) it's likely on priors to be made in bad faith based on a motivated confusion.

In general if "politics is the mindkiller" is preventing you from considering all analytical arguments that you interpret as being on a particular side of a political issue, then "politics is the mindkiller" is likely mindkilling you more than politics itself. (My impression was that religion wasn't even a significant political issue around here, since opinion is so near-unanimously against its literal truth...)

I don't see a different example that makes the point as strongly and clearly, do you see one?

Replies from: Benito
comment by Ben Pace (Benito) · 2018-11-22T23:48:38.343Z · LW(p) · GW(p)

You may be right. It certainly seems likely to me that the author was just picking a narratively good example.

I did recently encounter some arguments surprisingly similar to the one in the OP (things similar to this), definitely designed to be deeply confusing, and I was also incredibly surprised to find the environment I was in (not LW, but some thoughtful people) taking them seriously and being confused by them. That made me lower my threshold for pointing out this type of cognitive route as bad and not worth exploring. I haven't had the time to think up as clear an example as the OP's; as I say, it seems plausible that this one is just the most narratively simple. There are often religion metaphors in abstract problems (e.g. decision theory) that are clearly natural to use.

You say you found the OP to be a useful thought experiment, and that already makes me think I might be mistaken; I'm pretty sure the part of me that thought the example was bad would also have predicted that you wouldn't find it very useful.

Replies from: cousin_it
comment by cousin_it · 2018-11-23T00:51:12.768Z · LW(p) · GW(p)

I think the OP is more about evolution giving us irrational drives that override our intellect. For example, if someone believes that bungee jumping is safe but is still afraid to jump, their belief is right but their fear is wrong, so the fear shouldn't be taken as a strong argument against the belief.