Comments sorted by top scores.
comment by Said Achmiz (SaidAchmiz) · 2017-10-05T13:08:36.652Z · LW(p) · GW(p)
An alternative model:
“I could try to get myself to like broccoli. But it is impossible that I will succeed. It is quite possible, however, that I’ll convince myself that I’ve succeeded; that I’ll think I’ve succeeded; and so, armed with the new (and mistaken) belief that I now like broccoli, I’ll start eating it. This will be terrible, because I will then be eating a thing I don’t actually enjoy, and I’ll be suffering from the cognitive dissonance of thinking that I’ve successfully convinced myself to like broccoli, of deriving conscious pride and satisfaction from having convinced myself to like broccoli, while not enjoying it in the least. In the worst case, that pride and satisfaction in (allegedly but not really) successful self-modification will become a part of my identity; and my (fake) liking of broccoli will one day contribute to a major psychological dislocation (when that aspect of self-identity comes into conflict with reality), with catastrophic consequences; or the internal conflict will continually erode my psyche with nameless anxieties and other demons of the mind, with less-catastrophic, but no less horrible, consequences.”
I, for one, refuse to let broccoli destroy me.
↑ comment by Conor Moreton · 2017-10-05T15:58:17.704Z · LW(p) · GW(p)
Oooooh, I like this a lot. Can I copy it into the post above?
↑ comment by Said Achmiz (SaidAchmiz) · 2017-10-05T16:31:04.520Z · LW(p) · GW(p)
Sure.
↑ comment by Rhaine · 2017-10-05T16:03:52.646Z · LW(p) · GW(p)
This sounds extremely relatable! I think what works for me, when I have a fear of adopting a fake belief about who I am as a person, is reminding myself that personalities are arbitrary and fake by nature (as in, there is no objective list of traits you objectively have; you just have a list of reactions you would have to different impulses), and so if some belief passes the "self Turing test" (me believing it is true), then it is just as real and good as any other belief I have about myself.
Unless there is something really wrong going on with my perception and my brain is actually malfunctioning and 60% of my beliefs are actually fake and no one is telling me anything because the malfunctioning can hide itself so well... Madoka save our anxious souls.
↑ comment by Said Achmiz (SaidAchmiz) · 2017-10-05T16:35:42.493Z · LW(p) · GW(p)
there is no objective list of traits you objectively have, you just have a list of reactions you would have to different impulses
What are personality traits, but reactions to impulses?
What you said sounds like “you don’t have personality traits; all you have are [a synonym for ‘personality traits’]”.
if some belief passes the "self Turing test" (me believing it is true), then it is just as real and good as any other belief I have about myself
Beliefs do not have to be “not real” to be mistaken.
With respect, it seems to me that you are confused about some basic things, like what beliefs are, and what they are for…
comment by magfrump · 2017-10-05T18:57:35.320Z · LW(p) · GW(p)
I tend to have a sort of opposite problem to the Hufflepuff trap: I can run through the exercise and see practical actions I could take, but then do not find them compelling.
Often I can even pull myself all the way through to writing down the specifics of an exercise I could do within five minutes and then... I just leave it on the page and go back to playing phone games.
Some of the things you say about willpower at the end resonate more with me, but the place where willpower needs to be used is different for me, and telling myself I need to use willpower doesn't result in willpower being used. Many posts in the past have discussed this problem, and I don't want to derail the discussion (though I haven't really found any of them to help me much).
But moving forward from that: if there are people whose problems are more "flinch away from acknowledging the correct behavior" and my problems are more "fail to execute on the correct behavior", that suggests an interesting division-of-labor possibility for paired productivity. Unfortunately, one of the important components in getting there is "hang out with other rationalists in a more personal setting", which is perhaps chief among the correct behaviors I've failed to execute on.
comment by Raemon · 2017-10-05T04:52:20.315Z · LW(p) · GW(p)
Medium-note: I think I mostly understood what you were saying here, but this was the first time in your series I felt a sense of "this seems like a 201 level course that I've only taken the 101 level intro to, and am missing a fair bit of background."
↑ comment by Conor Moreton · 2017-10-05T05:50:02.039Z · LW(p) · GW(p)
Uh-oh. LMK where to add context?
Edit: have done a quick second pass and tried to clean things up a bit.
↑ comment by Raemon · 2017-10-05T18:49:10.092Z · LW(p) · GW(p)
It might have been more like 'this feels like 201 talk after I took a year's break from university: I know all the terms, but am struggling to keep up with everything'
↑ comment by Raemon · 2017-10-05T19:11:26.071Z · LW(p) · GW(p)
I think one fundamental confusion was that your description of a Hufflepuff Trap... didn't match my conception of what a Hufflepuff Trap would mean (it seemed like any ambitious person, regardless of approach/methodology, could get caught by that thing, and it didn't have anything I think of as specific to Hufflepuff). This, plus a higher ratio of (unique jargon : normal words), made it hard for the essay to hang together.
comment by [deleted] · 2017-10-05T12:34:40.453Z · LW(p) · GW(p)
If I understand this correctly, then I agree with the premise that an error lies in identifying too strongly with a current set of preferences. In situations where right now "I want y, but y first requires x and I don't want x", it's easy for me to fall into a default mode of inaction, instead of examining/testing my preferences and maybe finding that x isn't such a nuisance after all. If my preferences are equally weighted, then I just choose one or the other. If it turns out that I have a higher-order preference for y, it makes sense for me to supplement attempts to get y with techniques like ITC mapping and Trigger-Action Planning, despite the cost of x. Future selves may or may not assign positive value to x, and if it still turns out that in fact u(¬x) > u(y), then often it's trivially easy for me to revert to the original set of preferences. [Edit: Unless, of course, as Said pointed out, some kind of social pressure has led me to believe that the cost of x is not as large as it really is.]
[Edit: I don't think it's very likely that a person is going to fool themselves into liking x if they really, totally, utterly hate the thought of x, and it would be a bad thing to persist with x if it would lead to suffering in the long term. This is what I mean by a higher-order preference, i.e., that if the net gain from y outweighs the cost of x then clearly, obviously, absolutely: do x! This is obvious in the case where y = ¬x. Just try the damn broccoli. If you fail, your identity is not and should not be at stake. Nor should it be a herculean feat of willpower to test the water.
One example where my suggestion might break down: replace eating broccoli with smoking tobacco in the original formulation. This 'bug' seems perfectly reasonable now, right? Here, the pleasure of smoking may outweigh the health costs, but most people have a higher-order preference for longevity. If I were to begin smoking, I might find it pleasurable, and come into conflict with my higher-order preferences. In this case, is it better to reorder my preferences, to ignore them and remain in conflict with myself, or to quit (with some difficulty)? Clearly, reordering my preferences would be a bad thing for my health, and thus an irrational choice. Ignoring my preferences would eventually lead to some severe psychological problems. Also, it may not be so easy to revert to the original preference, but there are tools (nicotine replacement/vaping) that make it significantly easier. I'd argue that in most cases we come out better for the experiment, as it gives us an opportunity to update on potentially ill-founded preferences. If you end up making a u-turn, then at least there is some more solid evidence for your original desire to avoid the broccoli. (I've tried to sketch this comparison more formally below.)
In the spirit of epistemic rationality, we should always be willing to test cherished beliefs and update them if they turn out to be useless. Like you say, "push through the partial objections", but I would add: only up to the point where these internal objections start to become real problems.]
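(A rough formalization of the comparison above; the notation merely extends the u(¬x) > u(y) shorthand I used, and is my own illustration rather than anything from the post:)

```latex
% Notation (mine): u(y) is the value of the goal, u(\neg x) the value
% of avoiding the instrumental action x.
\[
  \text{attempt } x \;\iff\; u(y) > u(\neg x),
  \qquad
  \text{revert} \;\iff\; u(\neg x) > u(y) \text{ once tried.}
\]
% The smoking case is the one where the first-order term (immediate
% pleasure) and the higher-order term (longevity) disagree, so the
% comparison has to be made at the higher-order level.
```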
↑ comment by Said Achmiz (SaidAchmiz) · 2017-10-05T14:54:00.581Z · LW(p) · GW(p)
Just try the damn broccoli.
Nope.
Elaborating:
Sometimes, people try to get me to do or try things which, in my estimation, it would probably be good for me to do/try.
In such cases, I refuse to do or try these things. In fact, I refuse even more vehemently and consistently than when people try to get me to do or try things that I expect to dislike, or that I really don’t think are good ideas. In fact, I think such cases—when people are trying to get me to do things that I think might be good ideas or that I think I might like—are the cases when it’s especially important to consistently and credibly refuse.
“Why??”, you ask?
Two reasons:
First, because I definitely do not want to establish the precedent that I can be talked into things. (No, I do not think that “taking ideas seriously” is mostly—much less entirely—a good trait.) This would give people an incentive to try and talk me into things; I don’t want that.
(Aside: a big part of the problem is that this precedent-setting, and resulting perceived incentives, occur even if I decide to do or try the thing for totally unrelated reasons; protestations of “no, I’m not doing this because you told me I should!” are obviously useless.)
(Naturally, people who know/recognize and expect this pattern may perceive a way to manipulate me, via “reverse psychology”; fortunately, defenses against this sort of thing are fairly easy—things like “on a randomly determined subset of occasions, break this pattern”, etc.)
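(For concreteness, a toy sketch of that randomized defense; the function and the probability value are my own illustration, not anything specified in the thread:)

```python
import random

def respond_to_suggestion(p_break: float = 0.2) -> str:
    """Default policy: refuse whenever someone tries to talk me into something.

    A deterministic always-refuse rule is exploitable by reverse psychology
    (just suggest the opposite of what you actually want), so on a randomly
    determined subset of occasions the pattern is deliberately broken.
    """
    if random.random() < p_break:
        return "comply"  # pattern break: follow the suggestion this time
    return "refuse"      # the usual, credibly consistent refusal
```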
Second, because cases when I think a thing might be a good idea, or that I might enjoy it, are precisely those cases when I am most susceptible to being persuaded to do/try a thing which, on reflection, is not actually a good idea/enjoyable, and to subsequently falsely convincing myself that it is a good idea or that I do enjoy it—in short, the danger is greatest, then, of social pressure / social proof overriding my own judgment. Thus it's most critical, at such times, to firmly refuse.
P.S.: There are some people whom I trust to such a degree—of whom I am so certain that they have my best interests in mind and that they know me well enough that their model of my preferences and so on is accurate—that if they suggest something to me, then the reasoning I outlined above is inapplicable. There are very, very few such people. No one who has known me for less than a decade could possibly be included in that set.
↑ comment by [deleted] · 2017-10-05T15:15:28.874Z · LW(p) · GW(p)
Ah, okay; with regard to your first objection, I'm sorry if it seemed like I was considering cases where another agent is trying to update your preferences on your behalf. I didn't intend this. I admire your will to avoid being manipulated by others and to avoid social pressure, but I imagined this preference updating to be happening internally, without too much weight given to how your preferences might be perceived by others. Extra care should be taken, then, to make others less aware of your internal mechanisms, to avoid 'hacking' from unwholesome characters. Personally, in almost all cases I don't find that social pressure overrides my judgement, but I can't speak for others in the same way (I'm more of a "stuff you, I do what I want" kind of guy). I'll amend my comment in light of this.
With regard to your second objection, I can see how a misstep could lead me into thinking a bad idea (in reality) is a good one, but I'm operating on a baseline assumption that an ideal agent would recognise these bad ideas quite quickly.
Do you think my general assumption that it's better to be more open to updates is just wrong? I'm relatively new to the rationalsphere, and appreciate the disagreement!
↑ comment by Said Achmiz (SaidAchmiz) · 2017-10-05T15:45:04.757Z · LW(p) · GW(p)
I imagined this preference updating to be happening internally
I’m not sure I quite grasp your meaning, here; rephrase?
I'm more of a 'stuff you, I do what I want' kind of guy
Yes. So am I. It takes vigilance to maintain this, else you may find yourself going along with others while still holding on to the self-image of a "stuff you, I do what I want" kind of guy.
an ideal agent would
Perhaps, but I am not an ideal agent. Are you an ideal agent?
Do you think my general assumption that it's better to be more open to updates is just wrong?
More than what?
(In any case, I strongly suspect the optimum here varies interpersonally; but I will say that, in my experience, most people who consider themselves “rationalists” should probably be less open to updates.)
I'm relatively new to the rationalsphere, and appreciate the disagreement!
As do I (despite not being new). :)
↑ comment by [deleted] · 2017-10-05T15:54:17.521Z · LW(p) · GW(p)
I’m not sure I quite grasp your meaning, here; rephrase?
In your mind, I guess.
Perhaps, but I am not an ideal agent. Are you an ideal agent?
Definitely not! But at least with respect to my personal preferences, I'm pretty good at identifying those which are good or bad and having the commitment (willpower?) to change them if they're bad.
More than what?
More than the character in the thought experiment who is unwilling to try the broccoli, I suppose.
I'll take stock of your comments in any case and think more about this.
comment by [deleted] · 2017-10-05T04:42:12.079Z · LW(p) · GW(p)
I just wanted to say that I've had the "broccoli error" response before, and the phrasing was almost identical to the one in my head.
↑ comment by Conor Moreton · 2017-10-05T06:04:58.135Z · LW(p) · GW(p)
Yeah. For me in this case it's something like, "But I don't want to establish good eating and exercise habits, because then I'll have to eat right and exercise forever."
comment by Rossin · 2017-10-05T06:51:34.074Z · LW(p) · GW(p)
I definitely have had the experience of trying to live up to a standard and it feeling awful, which then inhibits the desire to make future attempts. I think that feeling indicates a need to investigate whether you're making things difficult for yourself. For example, I would often attempt to learn too many things at once and do too much work at once, because I thought the ideal person would basically be learning and working all the time. Then, when I felt myself breaking down, it sent my stress levels through the roof, because if I couldn't keep going, that meant I was just unable to be the ideal I wanted to be. Instead of asking, "okay, the way I'm doing this is clearly unsustainable, but the standard is still worthwhile, so how do I change my way of thinking or going about this thing?", I would just try to force myself to continue on, feeling constantly stressed about trying not to fail.
But when I began to ask that question, I saw that I could decrease the work I was putting on myself to something I could actually manage all the time, and that this would actually be most productive in the long run. And I recognized that sometimes I'll just be exhausted and unable to do something, and that doesn't make my whole attempt to live up to the standard a failure. Thus, it became easier to live up to the standard, or rather, my ideal standard shifted organically to the standard of how I think I can become the best me, instead of the standard I ascribed to an unspecified ideal person who I am just not capable of being.
↑ comment by Conor Moreton · 2017-10-05T15:58:45.506Z · LW(p) · GW(p)
Yeah. I think this probably ties in strongly with Zvi's post about slack.
comment by Qiaochu_Yuan · 2018-01-22T03:51:45.427Z · LW(p) · GW(p)
(This is a comment that ended up on the wrong post again.)