post by [deleted]

Comments sorted by top scores.

comment by Bucky · 2019-12-13T10:47:13.055Z · LW(p) · GW(p)

Firstly, I'm with you on your model of status, and the point about the availability of perceived opportunity for additional status in a hyper-connected world is really interesting.

Where I have a big disagreement is in the lesson to take from this. Your argument is that we should essentially try to turn off status as a motivator. I would suggest it would be wiser to try to better align status motivations with the things we actually value.

I struggle hugely with akrasia. If I didn't have some external motivation then I'd probably just lie in bed all day watching TV. I don't know if I'm unusually susceptible to this, but my impression is that it's a fairly common problem, even if to a lesser extent in some people.

One of my solutions to this is to deliberately do things for the sake of status. Or rather, I look for opportunities where gaining status aligns with doing things which I think are good.

As an example, take karma on LessWrong. This isn't completely analogous to status, but every time I get karma I feel a little (or sometimes big!) boost of self-worth. If writing on LessWrong is aligned with my values then this is a good thing. If you add in a cash prize from someone respected in the community then my status circuit is triggered significantly, motivating me to write an answer even if the actual size of the cash prize doesn't justify the amount of time put in! [1] I could try to fight against this and not allow status triggers, but I don't think that would actually improve my self-actualisation.

In a non-LW context, if status in the eyes of my family is important, I won't just spend my time watching TV but will also spend time playing with my kids. I would play with my kids anyway, as I know it's the right thing to do and is fun, but on those occasions where TV is more appealing, listening to my status motivation can help me do the right thing while expending less will-power. [2]

On a practical level, I'm not sure that trying to ban status motivations would work. As you point out, a status high is readily achievable elsewhere, so if opportunities for status were banned within one community this would just subconsciously motivate me to look elsewhere.

[1] This isn't a complaint!

[2] I am aware that confessing to this in most places would be seen as a huge social faux pas, I'm hoping LW will be more understanding.


Replies from: None, Gurkenglas
comment by [deleted] · 2019-12-13T16:09:29.844Z · LW(p) · GW(p)
I am aware that confessing to this in most places would be seen as a huge social faux pas, I'm hoping LW will be more understanding.

You're good. You're just confessing something that is true for most of us anyway.

Where I have a big disagreement is in the lesson to take from this. Your argument is that we should essentially try to turn off status as a motivator. I would suggest it would be wiser to try to better align status motivations with the things we actually value.

Up to a point. It is certainly true that status motivations have led to great things, and I'm personally also someone who is highly status-driven but manages to mostly align that drive with at least neutral things. But there's more to it.

I struggle hugely with akrasia. If I didn't have some external motivation then I'd probably just lie in bed all day watching tv.

The other great humanist psychologist besides Maslow was Carl Rogers. His thinking can be seen as an expansion on this "subagent motivation is perceived opportunity" idea. He proposed an ideal self versus an actual self. The ideal self is what you imagine you could and should be; your actual self is what you imagine you are. The difference between the ideal self and the actual self, he said, is the cause of suffering. I believe that Buddhism backs this up too.

I'd like to expand on that and say that the difference between your ideal self (which seems like a broader class of things that includes perceived opportunity, but also social standards, the conditions you're used to, biological hardwiring, etc.) and your actual self is what activates your subagents. The bigger the difference, the more strongly your subagents are activated by it.

Furthermore, the level of activation of your subagents causes cognitive dissonance (a.k.a. akrasia): one or more of your subagents not getting what they want even though they're activated.

And THAT is my slightly-more-gears-level model of where suffering comes from.

So here's what I think is actually going on with you: you're torn between multiple motivations until the status subagent comes along and pulls you out of your deadlock, because it's stronger than everything else. So now there's less cognitive dissonance and you're happy that this status incentive came along. It cut your Gordian knot. However, I think it's also possible to resolve this dissonance in a more constructive way, i.e. to untie the knot. In some sense the status incentive pushes you into a local optimum.
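The deadlock-breaking dynamic above can be sketched as a toy model. This is purely my own illustration (the subagents, activation numbers, and the tie-breaking rule are all invented for the example, not anything from the post), but it shows how a strong extra motivator can reduce total dissonance simply by resolving a stalemate, without making any existing subagent happier:

```python
# Toy model: each subagent is (activation, preferred_action).
# "Cognitive dissonance" = total activation of subagents whose
# preferred action is not the one actually taken (None = paralysis,
# so every activated subagent is frustrated).

def dissonance(subagents, chosen_action):
    """Sum of activation of every subagent that doesn't get what it wants."""
    return sum(act for act, action in subagents if action != chosen_action)

def choose(subagents):
    """Pick the action backed by the most total activation; a tie is deadlock."""
    totals = {}
    for act, action in subagents:
        totals[action] = totals.get(action, 0) + act
    best = max(totals.values())
    winners = [a for a, v in totals.items() if v == best]
    return winners[0] if len(winners) == 1 else None  # tie -> no action taken

# Deadlock: two equally strong subagents want different things.
deadlock = [(5, "watch TV"), (5, "write post")]
print(choose(deadlock), dissonance(deadlock, choose(deadlock)))
# -> None 10 (nothing happens, everyone is frustrated)

# A strong status subagent breaks the tie: dissonance drops from 10 to 5,
# even though the "watch TV" subagent is exactly as frustrated as before.
with_status = deadlock + [(8, "write post")]
print(choose(with_status), dissonance(with_status, choose(with_status)))
# -> write post 5
```

On this toy reading, the status incentive helps by cutting the knot (halving the dissonance), but the "watch TV" subagent's frustration is untouched, which is one way of seeing why it's a local optimum rather than a genuine resolution.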

I realise that I'm probably hard to follow. There's too much to unpack here. I should probably try and write a sequence.


Replies from: Bucky
comment by Bucky · 2019-12-13T22:04:53.410Z · LW(p) · GW(p)

I think that’s a good explanation. I agree that the solution to akrasia I describe is kind of hacked together and far from ideal. If you have a better solution I would be very interested, and it would change my attitude to status significantly. I suspect that this is the largest inferential gap you would have to cross to get your point across to me, although as I mentioned I’m not sure how central I am as an example.

I’m not sure suffering is the correct frame here - I don’t really feel like akrasia causes me to suffer. If I give in then I feel a bit disappointed with myself, but the agent which wants me to be a better person isn’t very emotional (which I think is part of the problem). Again, there may be an inferential gap here.

comment by Gurkenglas · 2019-12-13T12:02:20.585Z · LW(p) · GW(p)

"I have trouble getting myself doing the right thing, focusing on what selfish reasons I have to do it helps." sounds entirely socially reasonable to me. Maybe that's just because we here believe that picking and choosing what x=selfish arguments to listen to is not aligned with x=selfishness.

Replies from: Bucky
comment by Bucky · 2019-12-13T12:51:57.541Z · LW(p) · GW(p)

This is a beautifully succinct way of phrasing it. I still have enough deontologist in me to feel a little dirty every time I do it though!

comment by Gordon Seidoh Worley (gworley) · 2019-12-13T18:43:10.580Z · LW(p) · GW(p)
You run out of perceived opportunity.

This is an aside, but I want to think about what this means in terms of predictive processing.

What would it mean, in PP terms, to have an experience that could be reified as "running out of perceived opportunity"? The running out part seems straightforward: that's having succeeded in hitting a setpoint (and this fits with the satiation explanation you've been using). So that would make the setpoint perceived opportunity, but what does that mean?

I think the trick is to understand it as expectation of getting what we want. That is, an opportunity is an expectation to minimize deviation from some setpoint, in this case let's say for status (keeping in mind that at a neurological level there is almost certainly not a single control system for status on a single variable, it instead being a thing made up of many little parts that get combined together in correlated ways that allow us to reasonably lump them together as "status").

Thus this phenomenon of status satisficing seems explainable by, and would be predicted by, PP, contingent on status being neurologically encoded via setpoints - and status is such a robust phenomenon in humans that it seems unlikely that this would not be the case.
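The setpoint reading above can be made concrete with a minimal control-loop sketch. This is my own toy illustration: the single scalar "status" variable, the linear gain, and the numbers are all simplifying assumptions for the example, not claims about neural implementation:

```python
# Toy PP-style control loop: the drive (prediction error) is the gap
# between a status setpoint and currently perceived status; acting on
# the drive closes part of the gap each step, so motivation decays
# toward zero as the setpoint is approached -- i.e. satiation.

def status_drive(setpoint, perceived):
    """Error signal the system tries to minimize (floored at zero)."""
    return max(0.0, setpoint - perceived)

def pursue(setpoint, perceived, gain=0.5, steps=10):
    """Each step, acting on the drive closes `gain` of the remaining gap."""
    trajectory = [perceived]
    for _ in range(steps):
        perceived += gain * status_drive(setpoint, perceived)
        trajectory.append(perceived)
    return trajectory

traj = pursue(setpoint=10.0, perceived=2.0)
print(traj[-1], status_drive(10.0, traj[-1]))  # near setpoint, drive near zero
```

On this reading, "running out of perceived opportunity" is the drive decaying as perceived status approaches the setpoint, while the post's perpetually available opportunity would correspond to a setpoint that keeps moving upward, so the error signal never reaches zero.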

comment by avturchin · 2019-12-13T14:17:44.460Z · LW(p) · GW(p)

I have observed that "availability of perceived opportunity for additional status" results in the multiplication of new projects ("maybe I should start an Instagram?" etc.), which is followed by exhausting multitasking and most projects being left unfinished. The end result is zero finished projects after years of work, and low status in many fields.

comment by Isnasene · 2019-12-14T02:53:31.020Z · LW(p) · GW(p)
But here's the kicker: in this globalist hyper-connected century, we don't really run out of perceived opportunity anymore. What does happen, is that we're perpetually stuck with motivations that people in the past would have perceived as morally depraved

Agreed, but it's also worth noting that this can run the other way too. This globalist hyper-connected century can also provide us with motivations that seem unusually noble. Part of the 80,000 Hours schtick is the idea that we're uniquely advantaged to do extremely good things for the world, and the internet is pretty good at helping us discover bottomless pits of suffering. Because our heuristics for "being a good person" and "having the status associated with being a good person" are pretty muddied, pursuit of these noble goals can also often be driven by this Molochian sense of competition, and have the same negative psychological effects as competition for any other kind of available opportunity.

This kind of thing has also had a good bit of discussion in effective altruism -- the feeling of constant competition for constantly available opportunity produces psychological costs:

“Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constraint… (20 applications later) … Yeah, when we said that we need people, we meant capable people. Not you. You suck.”

-- After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation [EA · GW]

And from another comment on that article by Peter Hurford [EA(p) · GW(p)] to help show how this is about status:

I really wish we (as an EA community) didn't work so hard to accidentally make earning to give so uncool. It's a job that is well within the reach of anyone, especially if you don't have unrealistic expectations of how much money you need to make and donate to feel good about your contributions. It's also a very flexible career path and can build you good career capital along the way.

As a pretty capable person who's both seen the great things Effective Altruism has done and who has stayed on the side-lines so far (partially because thinking too hard about effective altruism has made me feel bad about myself in the past), I'm not really sure what actual solutions there are for this.

So how do we stop the status subagent? By removing our opportunity for status altogether. I have the suspicion that healthy, high-trust cultures tend to make status as predictable and as hard to change as possible

It's kind of fun to contrast this with effective altruism/LessWrongian culture which, for good reason, tries to maximize altruistic achievement or performance and deliberately eschews a lot of conventional social signalling to do so (e.g. worried that you're not credentialled enough to get a high-status position? Check out these people who got cool AI research roles by saving a bit of money and self-studying really hard!).

In general, it seems like cultures with predictable and hard-to-change status roles are necessarily sacrificing something like "optimized performance" or "equality of opportunity", since they use low-noise proxies (you can be 100% certain about whether you went to college) while actual metrics of performance and capability tend to a) be noisy, b) be hard for an individual to predict about themselves and c) be hard to rank-order relative to competitors (remember the chaos of extra-curriculars and SAT studying that ran up to applying to college? 'Cuz I do).

Idk what the optimal balance of "psychological benefits from well-defined status roles" and "success benefits from optimizing actual success" is but hopefully someone figures it out. Otherwise Moloch is just gonna pick for us.