Posts

How much should we value life? 2021-09-06T22:30:29.976Z
A brief review of The Scout Mindset 2021-08-26T20:47:08.731Z
When Programmers Don't Understand Code, Don't Blame The User 2021-08-18T19:59:04.285Z
A Qualitative and Intuitive Explanation of Expected Value 2021-08-10T03:31:13.314Z
When writing triggers memory reconsolidation 2021-07-25T22:10:26.517Z
Believing vs understanding 2021-07-24T03:39:44.168Z
Preparing for ambition 2021-07-19T06:13:10.477Z
Happy paths and the planning fallacy 2021-07-18T23:26:30.920Z
Bad names make you open the box 2021-06-09T03:19:14.107Z
Why don't long running conversations happen on LessWrong? 2021-05-30T22:36:03.951Z
Don't feel bad about not knowing basic things 2021-05-24T01:49:57.637Z
Is driving worth the risk? 2021-05-11T05:04:47.935Z
Taking the outside view on code quality 2021-05-07T04:16:52.912Z
Naming and pointer thickness 2021-04-28T06:35:08.865Z
Bayes' theorem, plausible deniability, and smiley faces 2021-04-11T20:41:10.324Z
Think like an educator about code quality 2021-03-27T05:43:52.579Z
The best frequently don't rise to the top 2021-03-25T06:10:20.278Z
The best things are often free or cheap 2021-03-18T02:57:15.012Z
Five examples 2021-02-14T02:47:07.317Z
How should you go about valuing your time? 2021-01-10T06:54:56.372Z
Babble Thread 2021-01-09T21:52:12.383Z
Thoughts on Mustachianism 2021-01-09T09:27:36.839Z
Conversation, event loops, and error handling 2021-01-08T08:05:49.224Z
Give it a google 2020-12-29T05:30:39.133Z
adamzerner's Shortform 2020-12-16T09:51:03.460Z
Why I love stand up comedy 2020-12-16T09:34:22.198Z
Bad reductionism 2020-12-16T08:21:33.944Z
Debugging the student 2020-12-16T07:07:09.470Z
Map and Territory: Summary and Thoughts 2020-12-05T08:21:07.031Z
Writing to think 2020-11-17T07:54:44.523Z
When socializing, to what extent does walking reduce the risk of contracting Covid as opposed to being stationary? 2020-11-16T00:39:30.182Z
What are some good examples of fake beliefs? 2020-11-14T07:40:19.776Z
What is the right phrase for "theoretical evidence"? 2020-11-01T20:43:38.747Z
What is our true life expectancy? 2020-10-23T23:17:13.414Z
Should we use qualifiers in speech? 2020-10-23T04:46:10.075Z
Blog posts as epistemic trust builders 2020-09-27T01:47:07.830Z
Losing the forest for the trees with grid drawings 2020-09-24T21:13:35.180Z
Updates Thread 2020-09-09T04:34:20.509Z
More Right 2020-07-22T03:36:54.007Z
In praise of contributing examples, analogies and lingo 2020-07-13T06:43:48.975Z
What gripes do you have with Mustachianism? 2020-06-11T23:42:42.472Z
Does taking extreme measures to avoid the coronavirus make sense when you factor in the possibility of a really long life? 2020-06-05T00:58:49.775Z
"No evidence" as a Valley of Bad Rationality 2020-03-28T23:45:44.927Z
Is the Covid-19 crisis a good time for x-risk outreach? 2020-03-19T16:14:45.344Z
Is the coronavirus the most important thing to be focusing on right now? 2020-03-18T22:48:17.191Z
Assorted thoughts on the coronavirus 2020-03-18T07:08:30.614Z
Why would panic during this coronavirus pandemic be a bad thing? 2020-03-08T08:32:50.753Z
Reflections on Premium Poker Tools: Part 4 - Smaller things that I've learned 2019-10-11T01:26:40.240Z
Reflections on Premium Poker Tools: Part 3 - What I've learned 2019-10-11T00:49:10.739Z
Reflections on Premium Poker Tools: Part 2 - Deciding to call it quits 2019-10-09T04:17:25.259Z

Comments

Comment by adamzerner on adamzerner's Shortform · 2021-09-18T20:52:23.655Z · LW · GW

I suspect that the term "cognitive" is often over/misused.

Let me explain what my understanding of the term is. I think of it as "a disagreement with behaviorism". If you think about how psychology progressed as a field, first there was Freudian stuff that wasn't very scientific. Then behaviorism emerged as a response to that, saying "Hey, you have to actually measure stuff and do things scientifically!" But behaviorists didn't think you could measure what goes on inside someone's head. All you could do was measure the stimulus and how the human responded. Then cognitive people came along and said, "Er, actually, we have some creative ways of measuring what's going on in there." So, the term "cognitive", to me at least, refers very broadly to that stuff that goes on inside someone's head.

Now think about a phrase like "cognitive bias". Does "cognitive" seem appropriate? To me it seems way too broad. Something like "epistemic bias" seems more appropriate.

Comment by adamzerner on £2000 bounty - contraceptives (and UTI) literature review · 2021-09-17T01:41:47.704Z · LW · GW

Interpreted charitably, I believe that mschons' comment is claiming that for this situation in particular, combing through Google Scholar isn't the best approach, not that it is inappropriate in general.

Comment by adamzerner on Writing On The Pareto Frontier · 2021-09-17T01:27:27.927Z · LW · GW

Note that Pareto optimality is again relevant to choosing examples/explanations: different examples will make sense to different people. Just offering very different examples from what others have written before is a good way to reach the Pareto frontier.

I think that this is an important point. Personally, I didn't realize it until I read Non-Expert Explanation.

The way I think about it, "clear" is a 2-place word. It doesn't make sense to say that an explanation is clear, full stop. You have to say that it is clear to Alice. Or clear to Bob. What is clear to one person might not be clear to another person.

In the language of Pareto frontiers, I suppose you could say that one axis is "clearness to Alice" and another "clearness to Bob", etc. And even if you do poorly on the other axes of "clearness to Carol", "clearness to Dave", etc., the post could still be on the Pareto frontier if you can't do better along eg. "clearness to Carol" without trading off how well you're doing on eg. "clearness to Alice". There's no opportunity to do better along one axis without doing worse along another. You wrote the best article out there that targets Alice and Bob.

All of this is of course related to what was said in the <Topic> For <Audience> section as well.

It may also be worth noting that being on the Pareto frontier doesn't necessarily make a post good. Eg. if you write a post that is incredibly good at explaining calculus to John Doe, but terrible at explaining it to everyone else in the world, and John Doe has no interest in calculus, that post would be on the Pareto frontier, but would also be silly to write.
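To make the frontier picture concrete, here's a rough sketch in Python (my own illustration; the posts, readers, and clarity scores are all made up):

  # Clarity scores: how clear each (hypothetical) post is to each (hypothetical) reader.
  posts = {
      "post_a": {"alice": 9, "bob": 8, "carol": 2},
      "post_b": {"alice": 6, "bob": 6, "carol": 6},
      "post_c": {"alice": 5, "bob": 5, "carol": 5},  # worse than post_b for every reader
  }

  def dominates(x, y):
      # x dominates y if it's at least as clear for every reader and strictly clearer for at least one.
      return all(x[r] >= y[r] for r in y) and any(x[r] > y[r] for r in y)

  # A post is on the Pareto frontier if no other post dominates it.
  frontier = [
      name for name, scores in posts.items()
      if not any(dominates(other, scores)
                 for other_name, other in posts.items() if other_name != name)
  ]

  print(frontier)  # ['post_a', 'post_b']; post_c is off the frontier

post_a is great for Alice and Bob but bad for Carol, yet nothing beats it on every axis, so it stays on the frontier, which is exactly the "very different examples" point.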

Comment by adamzerner on LessWrong is paying $500 for Book Reviews · 2021-09-14T04:23:23.544Z · LW · GW

Elicit prediction for the probability that there will be more than 5.5 submissions that receive a payout by October 13th (the title doesn't mention receiving a payout, but I intended for it to).

My thinking: a quick scan of the Book Reviews tag indicates about 2.5 posts with that tag per month. I suppose another 0.5-1 or so are book reviews but just haven't been tagged as such. So my baseline is roughly 3-3.5 book reviews over the month until October 13th. From there, I expect the $500 reward to give a solid bump, but nothing too crazy.

Comment by adamzerner on How much should we value life? · 2021-09-12T02:48:47.238Z · LW · GW

I think we're getting closer to agreement as I'm starting to see what you're getting at. My comment here would be that yes, your values can shift, and they have shifted after thinking hard about what post-Singularity life will be like and getting all excited. But the shift it has caused is a larger multiplier in front of the temporal discounted integral, not the disabling of temporal discounting altogether.

I'm in agreement here! Some follow-up questions: what are your thoughts on how much discounting should be done? Relatedly, what are your thoughts on how much we should value life? Is it obvious that past eg. 500 years, it's far enough into the future that it becomes negligible? If not, why aren't these things discussed? Also, do you share my impression that people (on LW) largely assume that life expectancy is something like 80 years and life is valued at something like $10M?

Yes, but I don't think there is any layer of reasoning beneath that preference. Evading death is just something that is very much hard-coded into us by evolution.

Regardless of whether it stems from a layer of reasoning or whether it is hard-coded, doesn't it imply that you aren't doing too much temporal discounting? If you did a lot of temporal discounting and didn't value the years beyond eg. 250 years old very much, then death wouldn't be that bad, right?

Comment by adamzerner on How much should we value life? · 2021-09-11T19:21:21.947Z · LW · GW

I think I may have led us down the wrong path here. The ultimate question is the one of temporal discounting, and that question depends on how much we do/should value those post-singularity life years. If values can't shift, then there isn't really anything to talk about; you just ask yourself how much you value those years, and then move on. But if they can shift, and you acknowledge that they can, then we can discuss some thought experiments and stuff. It doesn't seem important to discuss whether those shifts are due to discovering more about your pre-existing values, or due to actually changing those pre-existing values.

Haha, unfortunately you posed the question to the one guy out of 100 who would gladly answer "Absolutely", followed by "What's wrong with being an immature teenager?"

Ah, I see. You and I probably just have very different intuitions regarding what to value then, and I sense that thought experiments won't bring us much closer.

Actually, I wonder what you think of this. Are you someone who sees death as a wildly terrible thing (I am)? If so, isn't it because you place a correspondingly high value on the years of life you'd be losing?

The other is through value shift, along the lines of "I just ate a whole pizza and now that my food-craving brain-subassembly has shut up my value function consists mostly of concerns for my long-term health". Even setting temporal discounting aside, I fail to see why your post-dinner-values should take precedence over your pre-dinner-values, or for that matter why deathbed-values should take precedence over teenage-values. They are both equally real moments of conscious experience.

In the pizza example, I think the value shift would be more along the lines of "I was prioritizing my current self too much relative to my future selves". Presumably, post-dinner-values would incorporate pre-dinner-self. Eg. it wouldn't just say, "Screw my past self, my values are only about the present moment and onwards." So I see your current set of values as being the most "accurate", in which case regret minimization seems like it makes sense.

Comment by adamzerner on How much should we value life? · 2021-09-11T06:50:49.835Z · LW · GW

To be sure, I don't actually think whether Accelerationism is right has any effect on the validity of your points.

Yeah I agree. Seems like it just gives a small bump in the likelihood of living to the singularity, but that has a very small impact on the larger question, because the larger question is a lot more sensitive to other variables like how long a lifespan you'd expect post-singularity, and how much if any temporal discounting should be done.

To the rest of your points about exponential growth, I unfortunately don't understand them well enough to really be able to respond, sorry.

I'm not sure the prescriptive context is meaningful with regard to values. It's like having a preference over preferences. You want whatever you want, and what you should want doesn't matter because you don't actually want that, wherever that should came from.

First of all, thank you very much for engaging with me here. This is exactly the sort of thing I was hoping to get in the comments. A good critique that I hadn't thought (enough) about, and one that hits my central point hard, rather than just hitting tangential points that don't have much influence on the central point (although I appreciate those too). I also think you're expressing yourself very clearly, which makes it pleasant to engage with.

The more I think about it, the more I think I should apply some temporal discounting. However, I still lean towards not doing it too much.

In some theoretical sense, I agree that rationality can't tell you what to value, only how to achieve your values (as well as how to figure out what is true). But in a more practical sense, I think that oftentimes you can examine your values and realize that, well, I shouldn't say "there is good reason to change them", but I guess I could say "you find yourself inspired to change them" or "they've just been changed". Like you mention, thought experiments can be a great tool, but I think it's more than just that they help you discover things. I think they can inspire you to change your values. I do agree that it isn't really something that you can just decide to change though.

As an example, consider an immature teenager who doesn't care at all about his future self and just wants to have fun right now. Would you say, "Well, he values what he values."? Probably not.

So then, I think this question of temporal discounting is really one that needs to be explored. It's not enough to just say, "10k years from now? I don't care about that." Maybe we're being immature teenagers.

Comment by adamzerner on How much should we value life? · 2021-09-09T22:48:06.873Z · LW · GW

Thank you for the post, it was quite a nostalgia trip back to 2015 for me because of all the Wait But Why references.

Sure thing! Yeah I felt similar nostalgia. I love and miss Wait But Why.

However, my impression is that the Kurzweilian Accelerationism school of thought has largely fallen out of favor in transhumanist circles since that time, with prominent figures like Peter Thiel and Scott Alexander arguing that not only are we not accelerating, we can barely even keep up with 19th century humanity in terms of growth rate. Life expectancy in the US has actually gone down in recent years for the first time.

To continue the nostalgia:

  1. The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn’t totally smooth and uniform. Kurzweil explains that progress happens in “S-curves”:

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

  1. Slow growth (the early phase of exponential growth)
  2. Rapid growth (the late, explosive phase of exponential growth)
  3. A leveling off as the particular paradigm matures

If you look only at very recent history, the part of the S-curve you’re on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that’s missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Maybe you're right though. I don't have a good enough understanding to say. Thanks for contributing this thought.

I felt like it was moot to go into such detailed analysis of the other factors when this one alone could easily vary by ten orders of magnitude.

Yeah, in retrospect that does make sense. I think I got lost in the weeds.

An important consideration that was left out is temporal discounting.

...

By choosing a sufficiently large yet physically plausible number (such as 100 trillion years), you could justify almost any measure to reduce your risk of dying before Singularity and unambiguously resolve e.g. the question of driving risk.

But I doubt that's a good representation of your actual values. I think you're much more likely to do exponential discounting of future value, making the integral of value over time finite even in the limit of infinite time. This should lead to much more stable results.

Sigh. Yeah, this was a crucial thing that I left out. It might even be the crux. My mistake for not talking about it in the OP.

Here are my thoughts. Descriptively, I see that temporal discounting is something that people do. But prescriptively, I don't see why it's something that we should do. Maybe I am just different, but when I think about, say, 100 year old me vs current 28 year old me, I don't feel like I should prioritize that version less. Like everyone else, there is a big part of me that thinks "Ugh, let me just eat the pizza instead of the salad, forget about future me". But when I think about what I should do, and how I should prioritize future me vs present me, I don't really feel like there should be discounting.

That was for 100 year old me. What about 1k? 10k? 100k? Eh, maybe. The feeling of "screw it, forget that future self" is stronger. But I still question whether that is how I should weigh things. After all, 100k year old me is still just as conscious and real as 28 year old me.

Regret minimization seems relevant, although I've only ever taken a cursory look at the concept. If I didn't weigh 100k year old me enough, it could be something I end up regretting.

Another perspective is the one I take at the end in the Assume The Hypothetical section. If you were going to live to 100k years old, how would you feel? For me it makes me feel like I should really value that future self.

However, once we start talking about stuff like a trillion years and 3^^^3, I start to think, "This is getting crazy, maybe we should be temporally discounting." Why at that point and not at 100k? Idk. Because that's what my intuition is? I guess I don't have a great grasp on this question.
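One way to get more of a grasp (this is just me putting your exponential-discounting point into symbols, with arbitrary illustrative numbers): with a constant value density v per year and a discount rate r > 0,

  \int_0^\infty v \, e^{-rt} \, dt = v/r,

which is finite, whereas with r = 0 the integral over an unbounded lifespan diverges. And the choice of r does nearly all the work: at r = 0.1% per year the weight on year t is e^{-0.001t}, so years past roughly 5,000 contribute almost nothing, while at r = 0.0000001 per year even year 100,000 still gets weight e^{-0.01} ≈ 0.99. So "how much discounting" really does seem like the crux.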

Really? This is LessWrong after all^^

Yeah. I have a lot of respect for LessWrongers in general but I do think that there are some common failure modes, and not Taking Ideas Seriously enough is one of them. (I mean that criticism constructively not contentiously, of course.)

And I think that the reception this post has gotten is decently strong evidence that I was correct (not perfect of course; there are alternative explanations). I see this as a very important topic that doesn't have obvious conclusions, and thus should receive more attention. Even if you are correct about temporal discounting, it is not at all obvious to me how much we should discount and how that affects the final conclusion regarding how much we should value life.

Comment by adamzerner on How much should we value life? · 2021-09-09T22:10:12.902Z · LW · GW

You'll happily overwrite past-you with present-you

Why do you say that? I don't see it as overwriting. I am 28 years old. The way I see it is, I live 28 years, then I go back to the time I was born, then I re-live those 28 years, and so I get to be alive for 56 years.

Comment by adamzerner on How much should we value life? · 2021-09-09T05:53:59.840Z · LW · GW

You're welcome. Thanks for the compliment. I'm glad you enjoyed it :)

Comment by adamzerner on How much should we value life? · 2021-09-09T04:52:22.667Z · LW · GW

What behaviors would such a person have that differ from a person who values their life at $1 billion?

Driving, perhaps. I arrived at something like $2.50/mile at a $10B valuation of life. So for a $1B valuation, that'd be $0.25/mile, which seems reasonable to pay in various situations. But at a $3^^^3 valuation it would no longer be worth it.
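Spelling out the scaling behind those numbers (just the linear relationship; I'm not re-deriving the per-mile fatality risk here): the expected cost of a mile is (risk of dying per mile) × (value of life), so the cost scales linearly with the valuation:

  $2.50/mile × ($1B / $10B) = $0.25/mile

And multiplying the valuation by something like 3^^^3 multiplies the per-mile cost by that same factor, at which point essentially no drive is worth taking.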

Is there anything deterring you from valuing your life at $3^^^3?

With the estimates I made in this post, it doesn't seem reasonable to value it at something crazily high like that.

Comment by adamzerner on How much should we value life? · 2021-09-08T18:58:17.879Z · LW · GW

I don't have a great grasp on the question of "what makes you you". However, I do feel solid about "yesterday you" = "present moment you" = "100k years from now you". In which case, if you live for eg. 100k years, there isn't an issue where the person alive 100k years from now isn't you.

If that's still "you", should it be a problem for your mind to be erased, and that mind to be written in its place?

Yes, I see that as a problem because it'd still be a short lifespan. You wouldn't be alive and conscious from years 30 through 100k. I would like to maximize the amount of years that I am alive and conscious (and happy).

Comment by adamzerner on How much should we value life? · 2021-09-07T22:55:33.969Z · LW · GW

If I'm currently willing to pay $100k to avoid a 1% chance of dying, that doesn't mean that a 100x increase in my estimate of life expectancy will convince me to pay $100k to avoid a 0.01% chance of dying - that change might bankrupt me.

I'm not following how this example is influenced by your wealth. In both scenarios, you are paying $100k. If it was $100k to avoid a 1% chance vs a $1k to avoid a 0.01% chance, then I see how wealth matters. If you have $105k in savings, paying $100k would bring you down to $5k in savings which is a big deal, whereas paying $1k would bring you down to $104k which isn't too big a deal.
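To put rough numbers on that variant (purely illustrative, assuming log utility of savings): with u(w) = ln(w) and $105k in savings, paying $100k costs ln(105k) − ln(5k) ≈ 3.04, while paying $1k costs ln(105k) − ln(104k) ≈ 0.01. So the bigger payment is roughly 300x as painful rather than 100x; that's the sense in which the size of the payment relative to your wealth matters.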

I think this is due to (something like) diminishing marginal utility. But even with that factored in, my sense is that the tremendous value of post-singularity life overwhelms it.

Comment by adamzerner on How much should we value life? · 2021-09-07T16:24:03.782Z · LW · GW

Huh, good question. Initially I assumed the answer was "yes, basically" and thought the probability was high enough that it wasn't worth getting into. But the scenarios you mention are making me less sure of that.

I'd love to get input from others on this. It's actually a question I plan on investigating further anyway as I do some research and decide whether or not I want to sign up for cryonics.

Comment by adamzerner on Humanity is Winning the Fight Against Infectious Disease · 2021-09-06T20:56:23.021Z · LW · GW

IMO yes, it is a gigantic problem. I agree that there are tradeoffs where progress implies some amount of tail risk as a consequence. The thing is that I don't think we are navigating those tradeoffs well. The analogy I like to use is that our technological progress is like giving a machine gun to a child. Bad things are bound to happen. To use that analogy, when/if we mature to the level of Competent Adult or something, that would be the time to start playing with machine guns.

Comment by adamzerner on Josh Jacobson's Shortform · 2021-09-01T07:52:43.467Z · LW · GW

If you're interested in a perspective from left field, I wrote a post about how we should perhaps be valuing life wildly more than we currently are, and that if so, driving becomes a terrible idea.

Comment by adamzerner on How To Write Quickly While Maintaining Epistemic Rigor · 2021-09-01T04:58:59.453Z · LW · GW

My advice is: don’t try to persuade people that the idea is true/good. Persuasion is a bad habit from high school. Instead, try to accurately describe where the idea came from, the path which led you to think it’s true/plausible/worth a look.

Wow, this makes a ton of sense. I can't believe I never realized/internalized it before.

Comment by adamzerner on How To Write Quickly While Maintaining Epistemic Rigor · 2021-09-01T04:57:10.719Z · LW · GW

This reminds me of Julia Galef's video Something I like about Richard Dawkins.

I was trying to put my finger on what it was, and I think I've pinned it down. Basically, Richard would bring up topics of conversation not because he had a well-articulated, definitive opinion that he wanted to share about that topic, but because he thought it was interesting. And he didn't yet know what he thought about it.

So just for example, we ended up on the topic of communication styles. And he noted, with a hint of curiosity in his voice, that it actually comes across as kind of aggressive, or confrontational when someone speaks very clearly and to the point, without adding a lot of qualifying phrases and statements around their point.

And he mused aloud, "I wonder why that is? Why would that be?" And we thought about it together.

And I think that's very rare. Most people -- even intellectually curious people -- in conversation, will make points that they've basically already thought about, already decided how they feel about. And it's actually quite rare for someone to introduce topics of conversation and want to just figure it out together, on the spot.

Comment by adamzerner on Humanity is Winning the Fight Against Infectious Disease · 2021-08-31T05:51:46.249Z · LW · GW

We might be winning, but by developing technologically as much as we are, I think we are playing with fire when it comes to tail risk and Taleb's idea of fragility.

Comment by adamzerner on Gravity Turn · 2021-08-31T04:21:31.141Z · LW · GW

I get the sense that this gravity turn analogy is applicable and useful in contexts beyond academic research (in mathematics). However, other applications aren't coming to my mind right now. If anyone has ideas, I'd love to hear them.

Comment by adamzerner on adamzerner's Shortform · 2021-08-23T02:57:14.026Z · LW · GW

I was just listening to the Why Buddhism Is True episode of the Rationally Speaking podcast. They were talking about what the goal of meditation is. The interviewee, Robert Wright, explains:

the Buddha said in the first famous sermon, he basically laid out the goal, "Let's try to end suffering."

What an ambitious goal! But let's suppose that it was achieved. What would be the implications?

Well, there are many. But one that stands out to me as particularly important as well as ignored is that it might be a solution to existential risk. Maybe if people were all happy, they'd be inclined to sit back, take a deep breath, stop fighting, take their foot off the gas, and start working towards solutions to existential risks.

Comment by adamzerner on When Programmers Don't Understand Code, Don't Blame The User · 2021-08-20T19:44:56.421Z · LW · GW

I'm glad to hear that! I started learning Haskell this year but haven't reached the point you're describing.

Comment by adamzerner on When Programmers Don't Understand Code, Don't Blame The User · 2021-08-19T15:50:23.446Z · LW · GW

Yes! This! I wanted to make this point in the OP as well, but couldn't think of good examples or arguments, so I just stuck to programming. I'd love to see you or someone else expand on it in a separate post though.

Now that I think about it, Getting Eulered is similar.

Comment by adamzerner on When Programmers Don't Understand Code, Don't Blame The User · 2021-08-18T23:45:18.895Z · LW · GW

That's a good point. I may have come across too strongly with my point. What I intended to say is that you should (strongly) consider that the code isn't simple enough, not that you should assume it by default.

Comment by adamzerner on Out to Get You · 2021-08-16T15:35:16.261Z · LW · GW

Max loss is the price.

That's really interesting. By nature, I tend to flinch away from losing money harder than I flinch away from losing things like time/attention/etc. I think most people are like this. But you make a good point that the former is capped while the latter is often uncapped (in a sense). That alone isn't enough to make the latter scarier than the former, but it certainly gives it a nice push in that direction.

Comment by adamzerner on Out to Get You · 2021-08-16T15:30:11.414Z · LW · GW

I am among those with a natural hatred of Getting Got. I needed to learn to relax and enjoy the show when the deal is Worth It. Getting Got imposes a large emotional cost for people like me. I have worked to put this aside when it’s time to Get Got, while preserving my instincts as a defense. That’s hard.

This reminds me of Chapter 19: Delayed Gratification of HPMoR.

This is the chapter where Harry learns to lose.

Professor Quirrell walked back to his desk and resumed leaning on it. "Sometimes we forget the most basic things, since it has been too long since we learned them. I realized I had done the same with my own lesson plan. You do not teach students to throw until you have taught them to fall. And I must not teach you to fight if you do not understand how to lose."

Comment by adamzerner on Staying Grounded · 2021-08-14T22:10:53.468Z · LW · GW

But you can't do away with them entirely, you just don't have enough processing power.

I don't think that's what johnswentworth is advocating.

This isn’t always the best choice, but it’s usually worth at least thinking about how to do it - because the process of thinking about it forces you to recognize that the Symbol does not necessarily give the thing, and consider what’s actually needed.

Comment by adamzerner on Staying Grounded · 2021-08-14T21:48:05.388Z · LW · GW

One concept that comes to mind is Lost Purposes. Maybe it explains why people fail to stay grounded.

Eg. maybe someone who pursues Good Grades initially does so because it is a means to the end of being Satisfied With Life, but somewhere along the way they start enjoying the feeling of getting Good Grades, and simultaneously forget that the end goal they had in mind was being Satisfied With Life.

I think this hypothesis is similar to some hypotheses you mention in the post, but is subtly different. Maybe.

  • For the hypothesis that people only say they are pursuing whatever end goal, eg. being Satisfied With Life, but in reality are just seeking social status, I think that implies that this was the plan all along. Whereas with Lost Purposes, there is a shift: at first you were pursuing A, but then you shift and start pursuing B.
  • For the hypothesis that people are Goodharting on the Symbol, I'm not sure. That seems pretty similar to Lost Purposes actually. I suspect that there are subtle differences that elude me at the moment.

I suspect that the Lost Purposes hypothesis is related to being Poisoned By Ambition. When you pursue something hard, even if you logically understand that it is a means to some distinct end, your brain becomes convinced that it is an end in itself, and produces emotions accordingly. I further suspect that Staying Grounded is largely about preventing this from happening.

I wish I knew more about how to successfully do so. Saying "Fuck The Symbols" sounds somewhat promising. It's sorta like demonstrating to your brain, "See look! I don't actually care about this Symbol!". IIRC, there's evidence that your brain kinda observes your behavior and uses it to make inferences about what you care about, if that makes sense. But I'm not optimistic. I get the feeling that Staying Grounded is a hard problem to solve.

Comment by adamzerner on Staying Grounded · 2021-08-14T21:36:44.320Z · LW · GW

I really like the structure of this post.

In particular, 1) starting off by giving the reader a sense of what the post is about. I think that sort of context is useful before proceeding to the section on examples. I also like how this was accomplished so quickly.

And then 2) diving into various examples before proceeding to the larger idea. Eliezer wrote about this in SotW: Be Specific, and it's always stuck with me.

(Also, note how this post starts off with a specific example - an instance of the concrete-abstract writing pattern in which you state the example first and the generalization afterward. This is one of the most common bits of nonfiction writing advice I dispense: "Open with the concrete example, not the abstract explanation!")

I'm a big fan of it. And I predict that over the course of time, this pattern will become more and more popular on Less Wrong, and then in the blogosphere more broadly.

Pulling it off is one of those things that is weirdly difficult though, I think. Same with coming up with examples. So well done!

Comment by adamzerner on A Qualitative and Intuitive Explanation of Expected Value · 2021-08-12T03:16:05.501Z · LW · GW

I was a little busy at the time, but mostly because I'm familiar with (the concept of) expected value - so since I wasn't learning something new, I decided to move on.

Gotcha, thank you.

I wonder if that's a result of a tendency towards inaction, in general.

Hm, I don't get that impression but I'm not sure. I'm gonna keep an eye out for this.

Comment by adamzerner on A Qualitative and Intuitive Explanation of Expected Value · 2021-08-10T19:51:17.578Z · LW · GW

That could be explained in passing by adding something like:

Yeah, but it still does go off in a different direction from the larger point of explaining what expected value is and why it is common sensical. I could see adding it though. It felt like a close call to me as I was writing this. I also considered touching on how this expected value stuff is supported by the academics.

Whether or not it's actually "really, really harmful"

I see what you're saying about how people do reasonably well in everyday life by using their heuristics. But my opinion is that "reasonably well" is a stretch, and the lack of expected value based thinking is in fact pretty harmful. Unfortunately, my thoughts on this are pretty jumbled and I'm not at the point of being able to explain why I believe this very well. Maybe I'll figure it out and write up my thoughts in a future post.

(I also stopped reading around 'Would you buy the lottery ticket?')

Would you mind letting me know why? I promise I won't take offense or anything, it'd just be useful feedback for me to improve as a writer.

Comment by adamzerner on A Qualitative and Intuitive Explanation of Expected Value · 2021-08-10T13:15:00.937Z · LW · GW

Yeah I was going to address this but didn't want to get too sidetracked with diminishing marginal utility.

Comment by adamzerner on When writing triggers memory reconsolidation · 2021-08-02T23:54:28.351Z · LW · GW

That's really cool, thanks for pointing that out!

Comment by adamzerner on When writing triggers memory reconsolidation · 2021-08-02T17:17:22.016Z · LW · GW

I'd welcome that. It seems useful for the non-expert explanation reasons I talked about in that last paragraph.

Comment by adamzerner on Wanted: Foom-scared alignment research partner · 2021-07-27T22:59:44.338Z · LW · GW

I meet all six of those bullet points, basically. Emailed ya.

Comment by adamzerner on Believing vs understanding · 2021-07-24T22:46:15.293Z · LW · GW

Yeah that does make sense. I guess it depends on the feature in question and how close the competition is.

Comment by adamzerner on ($1000 bounty) How effective are marginal vaccine doses against the covid delta variant? · 2021-07-22T01:49:13.226Z · LW · GW

I've thought about doing something similar: posing a question and offering a bounty for responses to the extent that they change my mind. I haven't pulled the trigger yet. I don't know why. Kudos to you for doing so!

Comment by adamzerner on Happy paths and the planning fallacy · 2021-07-19T20:53:04.599Z · LW · GW

In software development we sometimes play this game where you estimate how long a task will take, and then by sleight of hand the estimate becomes a commitment.

Yeah, I ran into that recently and it led to issues. Not fun.

Comment by adamzerner on Preparing for ambition · 2021-07-19T20:52:20.571Z · LW · GW

I actually don't share that impression. The impression I get is that even for successful startups, there's a sort of hedonic adaptation where your sights end up being set on a new goal, which then produces stress. Eg. feeling content is fleeting and it is common to feel perpetually "behind".

Comment by adamzerner on Preparing for ambition · 2021-07-19T17:42:19.868Z · LW · GW

No problem.

Comment by adamzerner on Preparing for ambition · 2021-07-19T16:21:48.514Z · LW · GW

To make sure I am understanding, are you saying that extra stress gives you an extra "oomph" that helps you achieve your goals? For me, it was the opposite, so marginal stress was harmful for me. Furthermore, my understanding is that this is true for others as well. Extra stress only provides an "oomph" when it is "good stress", not "bad stress".

Comment by adamzerner on [deleted post] 2021-07-18T22:21:30.910Z

Seems worth referencing and linking to the HPMoR chapter Noticing Confusion.

Comment by adamzerner on Precognition · 2021-07-14T03:25:14.154Z · LW · GW

$10M is a roughly standard value that the US government uses: https://en.wikipedia.org/wiki/Value_of_life#United_States.

As for the 1000x, the short answer is "the singularity seems pretty plausible, and given how powerful a singularity would be, assuming we get the alignment problem right, I'd think we'd live really, really, really long lives". I go into some more detail in https://www.lesswrong.com/posts/AL6jdmpcxESxQTpfQ/is-driving-worth-the-risk, although that post is still very hand-wavy.

Comment by adamzerner on Precognition · 2021-07-14T00:44:26.788Z · LW · GW

In the OP, jasoncrawford says:

We see this in the US with covid: The same people who didn’t realize that we all should be wearing masks, when they were life-saving, are now slow to realize/admit that we can stop wearing them.

At $10M/life, I can* agree that a mask isn't worth it. But if we eg. 1000x the value of life, wearing a mask provides something like $5 of value instead of $0.005, and at that price it would be worth it. So the question becomes whether we should value life wildly highly like this.

The point I was trying to make in my comment was, "perhaps we should value life wildly highly like this". The following is mentioned in the OP as a way to do "precognition": "2. Listen to other independent thinkers." As I explain in this post, I get the impression that if you go by the judgements of these expert, independent thinkers, it might be justifiable to value life wildly highly.

*Not wearing a mask feels like defecting in a prisoner's dilemma to me, which makes me feel icky.

Comment by adamzerner on Precognition · 2021-07-02T19:04:46.274Z · LW · GW

I'm not seeing where there is a tradeoff. Let's assume you are making a 1hr grocery run in SF. From there, you are faced with a choice: mask or no mask. My argument is that life should be valued wildly highly, and thus wearing a mask is worth it. It's not like wearing a mask takes money or time that you could spend doing something more valuable (essentially).

Comment by adamzerner on Precognition · 2021-06-30T21:01:44.954Z · LW · GW

The blogrolls on https://astralcodexten.substack.com/ and https://www.overcomingbias.com/ come to mind as lists.

Comment by adamzerner on Precognition · 2021-06-30T17:38:14.977Z · LW · GW

are now slow to realize/admit that we can stop wearing them.

I'd like to offer a "precognition" rebuttal to this. If you value life at the typical $10M, one microcovid = $0.01, so skipping a mask "costs" $0.01. But if you value life at eg. 1000x that amount, then skipping a mask costs $10, a bad price to pay. Applying "2. Listen to other independent thinkers.", my read on it is that the consensus opinion of experts is that the singularity is quite plausible, and that this would imply wildly longer lifespans. Putting those two things together, it seems reasonable to value life wildly more than is typical, and more than you seem to be valuing it in saying that we shouldn't wear masks indoors anymore.
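To spell out the scaling (just arithmetic on the numbers above; the per-microcovid figure is itself a rough conversion): the dollar cost of a microcovid scales linearly with how much you value life, so $0.01/microcovid × 1000 = $10/microcovid. The whole disagreement therefore lives in that 1000x factor, not in the microcovid accounting.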

Comment by adamzerner on Precognition · 2021-06-30T17:17:43.229Z · LW · GW

Definitely. I feel confident that it is something that will stick with me and have a meaningful influence on my thinking.

Comment by adamzerner on Bad names make you open the box · 2021-06-19T07:02:50.040Z · LW · GW

System 1 vs System 2 is a good example of poor naming in the academic community.

Comment by adamzerner on Walking to School · 2021-06-14T16:38:07.232Z · LW · GW

You might be interested in this perspective from the Not Just Bikes YouTube channel: because Amsterdam is designed for pedestrians and bikers, the problem of worrying about your kids being hit by a car mostly goes away.