Thoughts on The Replacing Guilt Series — pt 1
post by dragohole · 2019-07-05T19:35:58.053Z
I’m currently going through Nate Soares’s Replacing Guilt series on caring, hosted on mindingourway.com. He starts with the issue of listless guilt. The first post is about the stamp collector, the second is about allowing yourself to fight for something.
I want to log my thoughts on this, because it seems that there’s enough to chew on here.
First, a bit on the stamp collector.
I agree with the general message. It’s true, people aren’t optimized solely for pleasure-seeking. There are other things. Evolutionary psychology offers a sufficient explanation, I believe. The heuristic “people are ultimately hedonists” is simply wrong; its predictive power is weak.
Now, about the second post.
Yes, you can care about things in the world. You can only access the world through your map, but that doesn’t make wireheading the final goal.
I, too, think that the word “want” is loaded. Let’s dissect.
People are embedded in their environments. Their motivational systems are complex. There’s a primal “draw” towards certain things, mediated by different aspects of those systems. For example, it’s generally easier to feel this primal “draw” towards concrete things, which impinge on our senses (let’s not talk predictive processing here).
But we can also use our executive functions in tandem with the ebb and flow of our primal motivations. You can “force” yourself to start working on something, and the short bursts of reinforcement for achieving measurable (micro-)milestones will do the rest.
What’s important though is the link between the “forceful” and the “primal”, S2 and S1. There’s only so much you can realistically do purely through S2 control. Usually it’s both S2 and S1 that affect your productivity.
(Correct me if I’m wrong here. It might be possible to run on S2 for a long time, but I believe that at some point your local productivity will tank so hard it won’t be worth continuing. Furthermore, if you try to abuse your S1 with S2 this way for longer periods of time, S1 will start resisting, and you’ll probably burn out and/or ruin the rest of your life. Alright, I just stated a bunch of obvious but potentially important stuff for someone who hasn’t thought about it before. You might be one of them, and if you are, here’s your insight: you don’t actually need motivation to do stuff, thanks to your S2. Go now, try it.)
I think it’s important to also tell yourself that you’re allowed to care about the external world for selfish reasons, too, and to actually change the world! It’s something that comes with the understanding that “people are ultimately hedonists” is too weak, and that pure pleasure isn’t the terminal value, because it simply doesn’t work. Soares mentions that true selfishness doesn’t lead to listless guilt, and he’s correct. The people who connect “selfishness” to “the root of my mild depression” are using a shallow conception of selfishness, shallow in the sense of “it’s a shitty model, wake up!”.
(for those who start screaming “wireheading!” — sure, but it doesn’t exist yet, so don’t make decisions based on stupid thought experiments, more on this later)
Then Soares gives us this thought experiment. Read through it, if you want to refresh your memory.
Imagine you live alone in the woods, having forsaken civilization when the Unethical Psychologist Authoritarians came to power a few years back.
Your only companion is your dog, twelve years old, who you raised from a puppy. (If you have a pet or have ever had a pet, use them instead.)
You're aware of the fact that humans have figured out how to do some pretty impressive perception modification (which is part of what allowed the Unethical Psychologist Authoritarians to come to power).
One day, a psychologist comes to you and offers you a deal. They'd like to take your dog out back and shoot it. If you let them do so, they'll clean things up, erase your memories of this conversation, and then alter your perceptions such that you perceive exactly what you would have if they hadn't shot your dog. (Don't worry, they'll also have people track you and alter the perceptions of anyone else who would see the dog, so that they also see the dog, so that you won't seem crazy. And they'll remove that fact from your mind, so you don't worry about being tracked.)
In return, they'll give you a dollar.
Most people reject it! Their gut feeling screams “DOGGIE DIES” and they refuse.
I say, they’re probably not being consequentialist enough.
Imagine you took the offer. How is the world different now?
You try to touch the dog. Presumably, the correct neurons get activated, and you feel the gentle brush of its hair on your hand. You can’t move your hand any further than the presumed physical location of the dog allows, because the unethical-but-smart psychologists somehow magically have your inhibitory system under control, and they’ve calculated the density of the relevant part of the dog, so you feel more or less the right amount of pushback.
Does the dog run around like a real dog, barking loudly and chasing squirrels? Well, of course! The unethical-but-smart psychologists did everything to ensure that any difference in the dog’s perceived behavior is so small it’s unnoticeable to the human eye, so there’s nothing uncanny about this happy but not-truly-real creature.
If we can’t find any significant consequences in your routine interactions with the dog, what about everything else? How do the evil psychologists actually make this happen? What kind of horrible voodoo artifact would it take to control not only your perception, but perceptions of everyone else?
One might say: but you can’t use the dog’s bones in order to dig its grave, because only your perceptions get controlled. That’s true. Your perception of “digging a hole” might be altered, but you won’t be able to actually dig a hole, because perceptions aren’t spades.
Now, what if you try to dig a hole with the poor dog’s bones, and an actual hole appears? You try putting your hand inside, and nothing pushes back. It’s a real hole.
Congratulations, the unethical-but-smart psychologists created an instrument that allows them to rewrite reality however they want, in a fine-grained and very precise way, too! You spill the dog's guts? Yep, here they are, fitting right into the hole you dug out. You feed the dog cheap canned food and it poops it out. (It's even better if it doesn't. I'd buy a genetically modified pet that doesn't shit everywhere. "But it's important that my dog poops, because I care about reality!" Oh my god, it must also be important to you that humans suffer and die, because you care about reality. Naturalistic fallacy much?) Everything works as it's supposed to.
Do you still think that taking a dollar is the wrong choice, even though literally nothing changes afterwards? If you do, do you think it’s a rational choice? Or is your S1 deluding you?
You don’t care about minimal differences. If the atoms that constitute the dog are a bit off, so what? It doesn’t necessarily matter, and you know it. If you ever shaved your dog, you altered its atomic constitution by a great amount, probably several times. If the dog does in fact act a bit differently, but in a way impossible to detect with the human eye, so what?
“But the dog isn’t actually a dog, it’s some whacko mechanism that impinges on my senses and makes me see a dog!” Yes! The real dog isn’t actually a dog either; it’s particles that impinge on your senses and make you see a dog! Oh wait...
If there are no real consequences that make a real difference, then you should take the offer. Maybe the voodoo artifact of the evil psychologists is, in fact, very unstable, and you want to destroy it before it destroys everything else, but let’s leave it to sci-fi authors, shall we?
It might be a rational choice to refuse the offer, but only if the bad consequences of the choice outweigh the good ones. If I’m confident that taking the offer and getting the dog shot will lead to a better world, I will do it. Did you flinch? Great! Now shut up and multiply.
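Concretely, “shut up and multiply” cashes out as an expected-value comparison: weigh each outcome by its probability and pick the option with the higher sum. Here’s a minimal sketch; the probabilities and utilities are entirely made up by me for illustration, not taken from Nate or the thought experiment.

```python
# A minimal sketch of the expected-value comparison behind "shut up and
# multiply". All probabilities and utilities are invented for illustration.

def expected_utility(outcomes):
    """Sum of probability * utility over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical outcomes of taking the offer: (probability, utility).
take = [
    (0.95, 1.0),     # nothing of importance changes; you're a dollar richer
    (0.05, -100.0),  # the psychologists' artifact has nasty side effects
]

# Hypothetical outcomes of refusing: (probability, utility).
refuse = [(1.0, 0.0)]  # status quo

print(expected_utility(take))    # 0.95 - 5.0 = -4.05
print(expected_utility(refuse))  # 0.0
```

With these made-up numbers, distrust of the psychologists dominates and refusing wins, which is exactly the kind of consideration I allow for below.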
I used the words “truly real”. The dog doesn’t matter; the consequences of the phenomenon that you call “dog” matter. “Real” is just a heuristic, and a useful one at that, because the world is complex and artificial interference can easily become misguided and lead to disastrous consequences when applied to larger-scale phenomena.
Anyway, what am I getting at? The fact that you chose to refuse the offer, if you did, doesn’t necessarily mean that you care about reality. If it’s true that nothing of importance changes in the world after your dog gets shot, then your feelings deceive you. If you say that minuscule differences matter to you, and that even microscopic alterations in the “natural” behavior of the dog matter in themselves, then you’re either a liar or a fool. Fight me.
But if you refuse the offer because you think that “real” is a good heuristic, and you don’t have enough evidence to trust the damned psychologists, then I applaud you.
I’m sure that there are certain people who were also left dissatisfied with the thought experiment and went “meh, I’m still right after all”.
If you’re one of them, you’re wrong.
Not in your analysis, but in the way you model reality. No, heroin isn’t all you need; you know how most actual, real people who go that route end up. No, wireheading isn’t all you need, because you live in a world that doesn’t have it, and probably won’t have it until you die.
Yes, I’m leading you to a pretty obvious conclusion.
Here it is.
STOP MAKING STUPID INFERENCES FROM DUMB THOUGHT EXPERIMENTS.
Orient yourself to reality.
Look.
When you conduct a thought experiment, you create a very rough, simplified, tiny model of some part of existence, and then struggle to update on the fake evidence that you’re getting. And you can succeed! And it’s gonna be awful and horrible and bad!
Thought experiments can be useful, but you need to know exactly what you are getting yourself into, and how exactly to interpret the evidence you’re getting from them, and what exact hypotheses they are supporting. Don’t fall prey to folk psychology.
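To make “updating on fake evidence” concrete: Bayes’ rule will turn whatever likelihoods a toy model feeds it into near-certainty, whether or not the model resembles reality. A small sketch of my own (the likelihood numbers are invented):

```python
# A toy illustration of updating on evidence generated by a model.
# If the model's likelihoods are wrong, Bayes' rule still converges
# confidently: to confidence about the model's world, not the real one.

def update(prior, lik_h, lik_not_h):
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    num = prior * lik_h
    return num / (num + (1 - prior) * lik_not_h)

p = 0.5  # start agnostic about the hypothesis
for _ in range(10):  # ten pieces of "evidence" produced by the toy model
    p = update(p, lik_h=0.8, lik_not_h=0.4)

print(f"{p:.4f}")  # ~0.9990: near-certainty built entirely on the model
```

The arithmetic is fine; the garbage-in, garbage-out step is the likelihoods. That’s what “know exactly what you’re getting yourself into” means here.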
Let’s get back to selfishness. Selfishness is definitely not hedonism, not in the strict sense of the word. In order to learn what selfishness is, you need to study humans, not philosophy. And if you do, you’ll find out that a bunch of stuff matters: our aliefs, other humans, sex, prestige, et cetera. And then there’s you, a unique individual with a given set of traits and abilities. And they’re malleable, at least in principle, because neuroplasticity. And there’s your environment that’s variously suited (or not) to your traits and abilities. Learn! Learn and flourish!
Soares is correct in advising you to search your memory for motivating material.
Memory reconsolidation served you well and gave you almost perfect, clean, censored memories of the good stuff.
“Wait, you’re saying it’s not real, aren’t you?”
Yes! Or no! So what! Who cares! You’re a human, act in ways which are suited to humans, don’t follow some dry, abstract idea. If the memory keeps you moving, then it has served its purpose already. Keep in mind that reality can be different from how you remember it, but also remember (ha!) that without change there won’t be change. Prepare yourself for the journey and go exploring!
“But which one to pick?”
Who cares? Satisfice if you can! You want to launch yourself into the world and get destroyed by it, in order to get reborn and become stronger, like a phoenix.
Nate is also right in saying that you won’t get rid of the listlessness after merely daydreaming for a while. That’s not how humans work, obviously. But it’s a start.
Motivation and desire won’t just appear in front of you in a puff of smoke; they’re born out of S1 crashing into experience. And what a coincidence! Our Inner Sim can generate experience on the spot! You’ll need ways to keep the flame alive, though.
Maybe Nate knew all this already. Maybe he decided to be superrational and shuffle his arguments in order to convince you. I don’t know.
I’m offering you the Hard Way out.
You’ll probably struggle with inner conflicts a lot more if you follow this Way.
Keep chipping away at them and you’ll get rewarded by better, stronger models, which won’t get broken by reality.
You know why? Because they will complete it.
9 comments
comment by romeostevensit · 2019-07-06T01:20:06.492Z
Most opium users don't end up addicted. It's not at all clear that refusing to let the self-selected bottom 10% of sufferers wirehead, on the strength of some thought experiment about slippery slopes for the other 90%, is correct.
also see https://www.wireheading.com/
comment by dxu · 2019-07-05T20:24:38.383Z
I used the words “truly real”. The dog doesn’t matter, the consequences of the phenomenon that you call “dog” matter.
Wrong. This misses the point of the thought experiment entirely, which is precisely that people are allowed to care about things that aren't detectable by any empirical test. If someone is being tortured in a spaceship that's constantly accelerating away from me, such that the ship and I cannot interact with each other even in principle, I can nonetheless hold that it would be morally better if that person were rescued from their torture (though I myself obviously can't do the rescuing). There is nothing incoherent about this.
In the case of the dog, what matters to me is the dog's mental state. I do not care that I observe a phenomenon exhibiting dog-like behavior; I care that there is an actual mind producing that behavior. If the dog wags its tail to indicate contentment, I want the dog to actually feel content. If the dog is actually dead and I'm observing a phantom dog, then there is no mental state to which the dog's behavior is tied, and hence a crucial element is missing--even if that element is something I can't detect even in principle, even if I myself have no reason to think the element is absent. There is nothing incoherent about this, either.
Fundamentally, you seem to be demanding that other people toss out any preferences they may have that do not conform to the doctrine of the logical positivists. I see no reason to accede to this demand, and as there is nothing in standard preference theory that forces me to accede, I think I will continue to maintain preferences whose scope includes things that actually exist, and not just things I think exist.
↑ comment by dragohole · 2019-07-05T20:38:36.556Z
Nothing incoherent about the first part with the spaceship.
What's an actual mind? How do you know that a dog has it? Would you care about an alien living creature that has a different mind-design and doesn't feel qualia? Anyway, if you have no reason to think that the element is absent, then you'll believe that it's present. It's precisely because you feel that something is (or will be) missing that you refuse the offer. You do have some priors about what consequences will be produced by your choice, and that's OK. Nothing incoherent in refusing the offer. That is, if you do have reasons to believe that that's the case.
I'm talking consequentialism, not logical positivism.
EDIT: It might just be a misunderstanding. When I'm talking about phenomena, I'm not talking about qualia, I'm talking about the general category of "events that take place in reality".
EDIT2: Ah. I don't think that there can be two worlds which are completely identical yet different (p-zombies stuff). But yeah, if we find out that the differences between a mind that experiences qualia and the mind that doesn't are insignificant (e.g. aliens!), then I do think it's weird to care about qualia, especially when there are so many other things to care about. But that's, like, my opinion, dude. It's fine if you disagree.
↑ comment by dxu · 2019-07-05T20:50:15.693Z
What's an actual mind?
My philosophy of mind is not yet advanced enough to answer this question. (However, the fact that I am unable to answer a question at present does not imply that there is no answer.)
How do you know that a dog has it?
In a certain sense, I don't. However, I am reasonably confident that regardless of whatever actually constitutes mindfulness, enough of it is shared between the dog and myself that if the dog turns out not to have a mind, then I also do not have a mind. Since I currently believe I do, in fact, have a mind, it follows that I believe the dog does as well.
(Perhaps you do not believe dogs have minds. In that case, the correct response would be to replace the dog in the thought experiment with something you do believe has a mind--for example, a close friend or family member.)
Would you care about an alien living creature that has a different mind-design and doesn't feel qualia?
Most likely not, though I remain uncertain enough about my own preferences that what I just said could be false.
Anyway, if you have no reason to think that the element is absent, then you'll believe that it's present. It's precisely because you feel that something is (or will be) missing that you refuse the offer. You do have some priors about what consequences will be produced by your choice, and that's OK. Nothing incoherent in refusing the offer. That is, if you do have reasons to believe that that's the case.
I agree with this, but it seems not to square with what you wrote originally:
Do you still think that taking a dollar is the wrong choice, even though literally nothing changes afterwards? If you do, do you think it’s a rational choice? Or is your S1 deluding you?
↑ comment by dragohole · 2019-07-05T21:04:21.044Z
We're assuming that 'literally nothing [of importance] changes'.
I'm not claiming it follows from what I described earlier in the post, it's an assumption, made in order to make a point, because thought experiment :)
Though I concede that it's not clear from what I wrote.
comment by Shmi (shminux) · 2019-07-05T23:17:57.266Z
I couldn't figure out how your post is related to the subject of your title, guilt.
↑ comment by habryka (habryka4) · 2019-07-06T00:19:33.365Z
It's mostly commentary on Nate Soares's sequence "Replacing Guilt" on Mindingourway.com
↑ comment by Shmi (shminux) · 2019-07-06T00:45:37.113Z
Ah. I guess a link to the source would be useful.