Wanting to Succeed on Every Metric Presented

post by Logan Riggs (elriggs) · 2021-04-12T20:43:01.240Z · LW · GW · 25 comments

There’s a tendency to want to score high on every metric you come across. When I first read Kegan’s 5 stages of adult development, I wanted to be a stage 5 meta-rationalist! Reading the meditation book “The Mind Illuminated” (TMI), I wanted to be stage 10 (and enlightened and in the 8th jhana and…)! I remember seeing this dancer moonwalk sideways and wanting to be that good too!

This tendency is harmful.

But isn’t it good to want to be good at things? Depends on the "things" and your personal goals. What I’m pointing out is a tendency to become emotionally invested in metrics and standards, without careful thought on what you actually value. If you don’t seriously investigate your own personal preferences and taste, you may spend years of your life invested in something you don’t actually care about. By adding this habit of reflection, you could become much happier than you are right now.

[Note: I believe most people are bad at figuring out what they actually value and prefer. For example, I thought skilled pianists were cool and high status, but when I actually became decent enough to wow your average Joe, being cool in those moments wasn’t as cool as I thought it would be. As they say, “Wanting is better than having”.]

There’s a difference between wanting to score 100’s/all A+’s and scoring well enough to get a job. There’s a difference between reading multiple textbooks cover-to-cover and reading the 40% or so that seems relevant to your tasks. There are tradeoffs; you can’t optimize for everything. When you come across a metric that you really want to score highly on, nail down the tradeoffs in fine-grained detail. What about this do you actually care about? What’s the minimum you could score on this metric and still get what you want? What do you actually want? Speaking out loud or writing this out is good for getting an outside view and noticing confusion.

Noticing this pattern is half the battle. To make it concrete, here are examples from my life:

Running - I ran cross country and track for 3 years, but then realized I don’t enjoy running long distance. Later I found out that sprinting is fun! If I had been better at knowing my values, I could’ve just played ultimate frisbee with friends instead.

Dancing - I used to imagine dancing at weddings and such and looking really cool! I remember being really self-conscious and slightly miserable when I did dance in front of others. Trying to impress people is disappointing (and trying to be cool is so uncool). Now I value dancing because it’s fun and a good workout; I don’t worry about recording myself and consistently improving, or about hypothetical dancing scenarios.

Kegan’s 5 stages of development - I used to want to be stage 5, and I remember reading lots of David Chapman’s work to figure this out. I believe I benefited from this, but ironically I would’ve understood it better if I had considered my own values more carefully. Now I value it as a useful framing for how large segments of people interpret the world. [See? I pointed out that it’s just another system with its own set of limits. I’m a cool kid now, right?]

Meditation - Becoming enlightened or reaching TMI stage 10 sounded really cool! I’ve spent 100’s of hours meditating now, but I would’ve been much better off if I had crystallized in my head the skills being optimized and how improving those skills improved my life. It wasn’t the “wanting to be enlightened prevented becoming enlightened” trope; rather, optimizing for a fuzzy “enlightened” metric was worse than optimizing for more tractable metrics with clear feedback.

What I value now from meditation is being happier, accepting reality, being okay with metaphysical uncertainty (not freaking out when realizing I can’t directly control all my thoughts, or noticing my sense of self being constructed), and maintaining awareness of context, all of which are much clearer metrics that I actually care about.

Grades - I wanted all A’s and to work my hardest on every assignment, wasting a lot of time I could’ve spent elsewhere! Afterwards, I learned to do just enough to graduate and signal with my GPA that I’m a hard worker/smart. [Once, I missed my final exam where I needed a 60 to keep an A, dropping me to a C. I thought it was hilarious. Thanks Nate!]

Social Appraisals - I used to be emotionally affected by most everybody’s social rewards and punishments (i.e. attention and praise vs ignoring and criticism). I’ve felt awful and disappointed so many times because of this! I’ve come to realize that I actually only care about <10 people’s opinion of my worth, and they all care about me and know me well. [Note: this is separate from taking someone’s thoughts into consideration]

The post that prompted this was Specializing in problems we don’t understand [LW · GW]. Great post! I noticed the compulsion to work on this problem immediately without considering my current context and goals, so I wrote this post instead.

Topics people in this community may benefit from re-evaluating are:

So… do you feel compelled to succeed according to the metric I’ve presented?

25 comments


comment by cata · 2021-04-13T21:45:35.147Z · LW(p) · GW(p)

I am a pretty serious chess player and, among other things, chess gave me a clearer perception of the direct cost in time and effort involved in succeeding at any given pursuit. You can look at any chess player's tournament history and watch as they convert time spent into improved ability at an increasingly steep exchange rate.

As a result, I can confidently think things like "I could possibly become a grandmaster, but I would have to dedicate my life to it for ten or twenty years as a full-time job, and that's not worth it to me. On the other hand, I could probably become a national master with several more years of moderate work in evenings and weekends, and that sounds appealing," and I now think of my skill in other fields in similar terms. Consequently, I am less inclined to, e.g., randomly decide that I want to become a Linux kernel wizard, or learn a foreign language, or learn to draw really well, because I clearly perceive that those actions have a quite substantial cost.

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-14T03:19:49.886Z · LW(p) · GW(p)

There is something here along the lines of "becoming skilled at a thing helps you better understand the appeal (and costs) of being skilled at other things". It's definitely not the only thing you need, though, because I've been highly skilled at improv piano but still desired these other things.

What I want to point out in the post is the disconnect between becoming highly skilled and what you actually value. It's like eating food because it's popular as opposed to actually tasting it and seeing if you like that taste (there was an old story here on LW about this, I think). 

Making the cost explicit does help ("it would take decades to become a grandmaster"), but there can be a lack of feedback on why becoming a national master sounds appealing to you. The idea of being [cool title] sounds appealing, but is the actual, visceral, moment-to-moment experience of it undeniably enjoyable to you? (In this case, you can only give an educated guess until you become it, but an educated guess can be good enough!)

comment by Hazard · 2021-04-13T02:27:38.782Z · LW(p) · GW(p)

I liked that you provided a lot of examples!

If the details are available within you, I'd love to hear more about what the experience of noticing these fake values was like. Say, for getting A's, I'd hazard a guess that at some point pre-this-revelation you did something like "thinking about why A's matter". What was that like? What was different about that reflection compared to more recent reflection? Has it been mostly a matter of learning to pay attention, after which it's all easy, or have you had to learn what different sorts of motivation and fake/real values feel like, or something else?

Does it feel like there were any "pre-requisites" for being able to notice the difference?

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-13T07:12:09.527Z · LW(p) · GW(p)

If the details are available within you

Maybe! One framing is: I expected "great accomplishments that people I admire say are good" to make me very happy or very liked, but the reality was not as great, and sometimes even negative. This pattern was hidden because:

  •  I wasn't explicit with my expectations - if I had been clear about how happy getting all A's would make me, and had paid attention when I did get an A, I would have noticed the disconnect sooner.
    • Related: making explicit the goals that getting all A's was supposed to serve (seriously considering why it matters, in fine-grained detail) would've been much better aligned with my goals than the proxy "get all A's". This serious analysis is not something I really did, but rationality skills of thinking an idea through while noticing confusion helped (I include focusing [LW · GW] here).
  • I was in love with the idea of e.g. running a marathon and didn't pay attention to how accomplishing it actually made me feel in physical sensations, or how the process of achieving that goal made me feel in physical sensations. This even happened with food! I used to eat a box of Zebra cakes (processed pastry cakes) on my drive home, but one time I decided to actually taste one instead of eating while distracted (inspired by mindful-eating meditation). It was actually kind of gross, waxy and weirdly sweet, and I haven't eaten more than a few of them these past several years.

I liked that you provided a lot of examples!

Thanks! Real life examples keep me honest. I was even thinking of your post [LW · GW], specifically the image of you scrambling to maintain and improve 10+ skills. How would you answer your own question?

Replies from: Hazard
comment by Hazard · 2021-04-15T02:47:06.073Z · LW(p) · GW(p)

Sometimes when I'm writing an email to someone at work, I notice I'm making various faces, as if to convey the emotion in the sentence I'm writing. It's like... I'm composing a sentence, I'm imagining what I'm trying to express, and I'm imagining that expression, and along with that come the physical faces and mental stances of the thing I'm expressing. It's like I'm trying to fill in and inhabit some imagined state.

Over the past year I've noticed a similar sort of feeling when I'm thinking about something I could potentially do, and I'm being motivated by appearing impressive. The idea/thought is there, and then I try to "fill it up" and momentarily live in that world. There's normally a slight tension in my forehead that starts to form. There's also a sort of "zooming in" feeling in my head. It probably sounds drastic typed out like this, but it's all pretty subtle and I didn't notice it for a while.

Anywho, mostly if I find myself pleasurably stewing in the imagined state of the thing, it's a sign for me that it's about impressiveness. I seem not to sit in the idea when there are other motivations? I can't think of any reason why that would be the case, but it seems to be true for me.

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-17T18:43:07.013Z · LW(p) · GW(p)

I'm currently interested in the idea of "the physical sensation correlates of different mental states": becoming intimately aware of the visceral, physical felt sense of being stressed or triggered, or relaxed and open, or having a strong sense of self or a small sense of identity, or a strong emotion in physical sensations only, or a strong emotion with a story and sense of self attached, or...

Specifically, practicing this would look like paying attention to your body's felt sense while doing [thing] in different ways (like interacting with your emotions using different systems' techniques). Building this skill creates higher-quality feedback from your body's felt sense, allowing a greater ability to identify different states in the wild. This post's idea of hijacked values, and your comment, point to a specific feeling attached to hijacked values.

This better bodily intuition may be a more natural, long-term solution to these types of problems than what I would naively come up with (like TAPs, or denying the part of me that actually wants the "bad" thing).

comment by nim · 2021-04-13T15:52:33.132Z · LW(p) · GW(p)

I agree with your claims that "I believe most people are bad at figuring out what they actually value and prefer" and "by adding this habit of reflection, you could become much happier than you are right now", but I experience the scales you're discussing as extremely handy self-exploration tools, and I can't tell from the post whether you do.

I read in the post a description of a relationship with assessment scales where the scales serve as instructions to a person -- "I want to attain this state because it's at the top of the list". I also read it as contrasting this scales-as-orders mindset against a paradigm where instructions are cherry-picked out of scales based on personal preferences and values that are both intrinsic and obvious to oneself.

I think there's a third approach to scales or metrics: they can serve as a distillation of others' research more like a library or a department store than a set of orders. When I look at scales like the ones you describe, I tend to treat them the way I treat an engaging book, asking questions like:

  • What would it be like to be at this stage? What would suck about being there?
  • If I had to pick just one of these stages to occupy indefinitely, which one would I actually like best, based on these descriptions?
  • What evidence have I seen that someone could be in several of these stages at once, or none?
  • What hypothetical people can I imagine who are technically at a "lower" stage but get more/better things done than those at a "higher" stage or vice versa?

In these ways, internalizing a well-researched scale gives me the feeling I get when I pick up an item in a video game that reveals a section of the world map which was hidden before.

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-13T17:18:36.505Z · LW(p) · GW(p)

Your bulleted self-inquiries are very useful. These seem like more playful questions that I would feel comfortable asking someone else if I felt they were being hijacked by a metric/scale (where a more naive approach could come across as judgmental and untactful).

Not all of your questions fit every situation of course, but that's not the point! Actually, I want to try out a few examples:

Long-distance running

What would it be like to be very skilled? I would be much fitter than I am now, so less winded when doing other things. I feel like there's a bragging angle, but who likes a bragger?

What would suck? The long practice hours; I would likely be more prone to injuries and joint problems.

What's the good part of training to be a skilled runner? Consistently being outside would be nice. I think I would feel better after training.

What would be the bad part of training? That out-of-breath feeling and burning muscles is uncomfortable.

Are there people who aren't skilled long-distance runners, but are still better in a meaningful way? Swimmers are very fit, have greater upper-body strength, and aren't as prone to injuries (though looking it up, they do suffer shoulder injuries).

AI Alignment Researcher

What would it look like to be successful? Being paid to do research full time. Making meaningful contributions that reduce x-risk. Having lots of smart people who will listen and give you feedback. Having a good understanding of lots of different, interesting topics.

What would suck about it? Maybe being in an official position would cause counter-productive pressure/responsibility to make meaningful contributions. I would be open to more criticism. I might feel responsible, and slightly helpless, regarding people who want to work on alignment but have trouble finding funding.

What would be great about the process of becoming successful? Learning interesting subjects and becoming better at working through ideas.  Gaining new frames to view problems. Meeting new people to discuss interesting ideas and "iron sharpening iron". Knowing I'm working on something that feels legitimately important.

What would suck about the process? The context-loading of math texts is something to get used to. There's a chance of failure due to lack of skill or not knowing the right people. There is no road map guaranteeing success, so there's a lot of uncertainty about what to do specifically.

Are there people who are also great, but not successful alignment researchers? There are people who are good at communicating these ideas to others (for persuasion and distillation), or who work at machine learning jobs and will be in good positions of power for AI safety concerns. There are also other x-risks to work on out there, and EA fields that also viscerally feel important.

I'll leave it here due to time, but I think I would add "How could I make the process of getting good more enjoyable?" and would make explicit which goals I actually care about.

comment by Dagon · 2021-04-12T22:15:39.768Z · LW(p) · GW(p)

The flip side of this is tradeoff bias (a term I just made up; it's related to false dichotomies). Assuming that you have to give up something to get a desired goal, and that costs always equal rewards, is a mistake. Some people CAN get all A's, excel at sports, and have a satisfying social life. Those people should absolutely do so.

I think the post has good underlying advice: don't beat yourself up or make bad tradeoffs if you CAN'T have it all. Experimenting to understand tradeoffs, and making reasoned choices about what you really value, is necessary. But don't give up things you CAN get, just because you assume there's a cost you can't identify.

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-12T22:43:20.792Z · LW(p) · GW(p)

I may have been misleading, but my point is not about tradeoffs; it's about not pursuing things that you don't actually care about upon reflection.

Thanks for bringing this up. I believe explicitly stating tradeoffs is important because you may then realize that you actually don't care about them. For example, I don't actually care about being "enlightened" or reaching stage 10 in TMI (though I thought I did). I would have come to a better conclusion, and had better meditation sessions sooner, if I had made the metrics I care about explicit.

[Though, this isn't true for looking cool dancing or eating new foods because I don't know if I like them until it happens]

comment by Davidmanheim · 2021-04-14T11:28:10.726Z · LW(p) · GW(p)

Noting the obvious connection to Goodhart's law [LW · GW] - elsewhere I've described the mistake of pushing to maximize easy-to-measure / cognitively available items rather than true goals.

comment by Logan Riggs (elriggs) · 2021-04-12T21:23:10.254Z · LW(p) · GW(p)

This post's purpose is only to point out the pattern and nudge basic self-reflection, which is sometimes enough to solve the problem. It doesn't solve all problems regarding hijacked values, which is what I'm currently trying to find a good-enough solution to (or create a good-enough taxonomy of types of hijacked values and heuristics).

For example, some of these are identity based. I saw myself as a hard worker, so I worked hard at every school assignment, even when it wasn't at all necessary. 

Others are gamification-y (like a video game): 100%-ing a game, reaching a certain rank/level, daily rewards, recommended videos/articles, or a cliff-hanger in a story with the next chapter available.

Others are extreme social incentive-y, such as cults, abusive relationships, and multi-level marketing, where local rewards and punishments become more salient than what you used to value (or would value if you left that environment for a few years).

I'm currently not in love with these divisions. A better framing (according to the metric of achieving your goals better) would make it clear what action to take in each situation to better know your true values.

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-13T00:22:49.467Z · LW(p) · GW(p)

For the gamification ones, they tend to involve a bunch of open loops that you're left wanting to resolve (the cliff-hanger is a great example). This causes thoughts regarding the loop to come up spontaneously. In context, the loops aren't that important, but locally they may appear more important (like being angry at a loved one for interrupting you or preventing you from finishing a show/chapter/etc.). I think being triggered in general counts here. A typical antidote is the traditional "take a walk", and, regarding meditation, better awareness and capacity to let go (I'm not arguing that meditation works here, but I may write a post on that).

This is different from cults and abusive relationships, where there is a strong motivation to leave your normal environment (the type of abusive relationship I have in mind is "you can't see your friends anymore"), making the local rewards and punishments more salient as time goes by. I may even include drugs with withdrawals here. The usual solution is leaving those environments for healthier ones to compare against, though this happens in transitions due to ideas coupling (bucket errors). [This feels unsubstantiated to me and would benefit from more specific examples.] The gamification one had two answers: "change environment" or "change your relationship to the environment". There may be some situations where you're forced into a horrible environment and your only choice is to change your relationship to it, but that would require some high-level meditation insights in my opinion. "Leaving" seems the most actionable response. Maybe "recognizing" you're in a cult is an important vein for future thought.

The identity-based ones cause ignoring/flinching away from incompatible thoughts. These may benefit from becoming more sensitive to subtle thoughts you typically ignore (noticing confusion was a similar process for me). I feel like meditating relates, but I'm unsure of the mechanism. It's mumble mumble everything is empty mumble.

There's also a thread on "horrible events cause you to realize what's important" to look into.

comment by Yoav Ravid · 2021-04-13T09:50:29.028Z · LW(p) · GW(p)

I think Rationality itself is also a metric that fits this pattern (similarly to enlightenment). Taking into account that becoming more rational isn't free, and might actually have a substantial cost, I'm pretty sure for many people it's not worth it to invest in becoming more rational. I feel there's a full post to be written here, but I don't yet have the clarity to write it.

Replies from: johnswentworth, elriggs
comment by johnswentworth · 2021-04-13T17:05:16.404Z · LW(p) · GW(p)

I actually just started drafting a post in this vein. I'm framing the question as "what are the relative advantages and disadvantages of explicit rationality?". It's a natural follow-up to problems we don't understand [LW · GW]: absent practice in rationality and being agenty (whether we call it that or not), we'll most likely end up as cultural-adaptation-executors [? · GW]. That works well mainly for problems where cultural/economic selection pressures have already produced good strategies. Explicit rationality is potentially useful mainly when that's not the case - either because some change has messed up evolved strategies, or because selection pressures are misaligned with what we want, or because the cultural/economic search mechanisms were insufficient to find good strategies in the first place.

Replies from: elriggs, Yoav Ravid
comment by Logan Riggs (elriggs) · 2021-04-13T17:24:13.347Z · LW(p) · GW(p)

Regarding "problems we don't understand", you pointed out an important meta-systematic skill: figuring out when different systems apply and don't apply (by applying newly learned systems to a list of 20 or so big problems).

The new post you're alluding to sounds interesting, but rationality is a loaded term. Do you have specific skills of rationality in mind for that post?

Replies from: johnswentworth
comment by johnswentworth · 2021-04-13T17:37:46.189Z · LW(p) · GW(p)

Do you have specific skills of rationality in mind for that post?

No, which is part of the point. I do intend to start from the sequences-esque notion of the term (i.e. "rationality is systematized winning"), but I don't necessarily intend to point to the kinds of things LW-style "rationality" currently focuses on. Indeed, there are some things LW-style "rationality" currently focuses on which I do not think are particularly useful for systematized winning, or are at least overemphasized.

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-13T19:19:27.540Z · LW(p) · GW(p)

No, which is part of the point.

I don't know what point you're referring to here. Do you mean that listing specific skills of rationality is bad for systematized winning?

I also want to wrangle more specifics from you, but I can just wait for your post :)

Replies from: johnswentworth
comment by johnswentworth · 2021-04-13T19:34:26.967Z · LW(p) · GW(p)

Oh, I mean that part of the point of the post is to talk about what relative advantages/disadvantages rationality should have, in principle, if we're doing it right - as opposed to whatever specific skills or strategies today's rationalist community happens to have stumbled on. It's about the relative advantages of the rationality practices which we hopefully converge to in the long run, not necessarily the rationality practices we have today.

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-13T19:47:05.706Z · LW(p) · GW(p)

Oh! That makes sense as a post on its own.

A list of pros and cons of current rationalist techniques could then be compared to your ideal version of rationality to see what's lacking (or to point out holes in the "ideal version"). Also, "current rationality techniques" is ill-defined in my head; the closest thing I can imagine is the CFAR manual, though that is not the list I would've made.

Replies from: johnswentworth
comment by johnswentworth · 2021-04-13T22:01:55.207Z · LW(p) · GW(p)

Yup, exactly.

comment by Yoav Ravid · 2021-04-13T17:47:59.474Z · LW(p) · GW(p)

Nice! Looking forward to reading your post. I wrote a few notes myself under the title "Should You Become Rational"*, but they didn't turn into enough for a post. One of the things I wanted to consider is whether it's someone's duty to become more rational, which I think is an interesting question (and it's a topic that was discussed on LW; see Your Rationality is My Business [LW · GW]). My current conclusion is that your obligation to become more rational is relative to how much influence you have, or wish to have, on the world and on other people. Of course, even if true, this point might be slightly moot, since only someone who is already interested in rationality is likely to agree with it; others are unlikely to care.

* I went with "rational" pretty much for lack of a better word that still kept the title short; I didn't want to use "rationalist", as that's an identification with a specific group, which isn't the point.

comment by Logan Riggs (elriggs) · 2021-04-13T12:24:24.325Z · LW(p) · GW(p)

"Rationality" was a vague metric for me when I first started reading the sequences. Breaking it down into clear skills (taking ideas seriously, noticing confusion, "truth" as predictive accuracy, etc) with explicit benefits and common pitfalls would be useful.

Once you nail down what metrics you're talking about when you say "rationality", I believe the costs and benefits of investing in becoming more rational will be clearer. 

Feel free to brainstorm as replies to this comment, I would enjoy a full post on the subject.

comment by Pablo Repetto (pablo-repetto-1) · 2021-04-12T22:22:44.650Z · LW(p) · GW(p)

May I propose "appraisals" as a substitute for "opinions"? It is more precise, in that it implies a judgement of worth.

Replies from: elriggs
comment by Logan Riggs (elriggs) · 2021-04-12T22:48:54.238Z · LW(p) · GW(p)

Thanks! Changed to "social appraisals". "Someone's opinion" is definitely a loaded term which may lead to pattern matching. I'm also fine with more novel phrasing, since it's explained immediately after.