Updates Thread

post by adamzerner · 2020-09-09T04:34:20.509Z · LW · GW · 41 comments

If you've updated your belief about something you think is worth noting, post it here.


Comments sorted by top scores.

comment by adamzerner · 2020-09-09T04:55:37.601Z · LW(p) · GW(p)

Decent shift away from assuming by default that decisions made by large organizations are reasonable.

I'm in the process of interviewing with Google for a programming job and the recruiter initially told me they do the interview in a Google Doc, and to practice coding in the Google Doc so I'm familiar with the environment for the interview.

I tried doing so and found it very frustrating. The vertical space between lines is too large. Page breaks get in the way. There are just a lot of annoying things about trying to program in a Google Doc.

So then, why would Google choose to have people use it for interviews? They're aware of these difficulties, and yet they chose to use Google Docs for interviews anyway. Why? They're a bunch of smart people, so surely there must be things in the trade-off calculus that make it worthwhile.

I looked around a little bit because I was curious, and the only good thing I found was this Quora page on it. There wasn't much insight. The main upside seemed to be that it was easier for hiring committees to comment on what the candidate wrote and discuss it as they decide whether or not to pass the candidate. That makes sense as an upside, but it doesn't explain why they'd use Google Docs, because you could just have the candidate program in a normal text editor and then copy-paste it into Google Docs afterwards. And I know that I'm not the first person to have thought of that idea. So at this point I just felt confused, not sure whether to give Google the benefit of the doubt or to trust my intuitive sense that having a candidate program in a Google Doc is an awful idea.

Today I had my phone interview, and they're using this interviewing.google.com thing where you code in a normal (enough) editor. Woo hoo! My interviewer was actually in the engineering productivity department at Google and I asked him about it at the end of the interview (impulsive; probably not the most beneficial thing I could have chosen to ask about). We didn't have much time to talk about it but his response seemed like he felt like this new approach is clearly better than using Google Docs, from which I infer that there wasn't some hidden benefit to using Google Docs that I was overlooking.

I also interpret the fact that they moved from using Google Docs to using the normal editor as evidence that the initial decision to use Google Docs wasn't carefully considered. I'm having trouble articulating why I interpret this as evidence. In worlds where there is some hidden benefit that makes Google Docs superior to a normal editor for these interviews, I just wouldn't have expected them to shift to this new approach. It's possible that the initial decision to use Google Docs was reasonable and carefully considered, and they just came across new information that led to them changing their minds, but it feels more likely that it wasn't carefully considered initially and what happened was more "Wait, this is stupid, why are we using Google Docs for interviews? Let's do something better." And if that's true for an organization as reputable as Google, I'd expect it to happen in all sorts of other organizations. Meaning that the next time I think to myself, "This seems obviously stupid. But they're smart people. Should I give them the benefit of the doubt?", I lean a decent amount more towards answering "No".

comment by ChristianKl · 2020-09-10T10:57:51.557Z · LW(p) · GW(p)

That makes sense as an upside, but it doesn't explain why they'd use Google Docs, because you could just have the candidate program in a normal text editor and then copy-paste it into Google Docs afterwards. 

It's plausible that they not only care about the final code but also about the process of writing the code. Simply copy-pasting might not give them that data. On the other hand, their custom-built editor for interviewing.google.com might provide it.

comment by adamzerner · 2020-09-10T16:05:50.165Z · LW(p) · GW(p)

That makes sense as an alternative hypothesis.

comment by adamzerner · 2020-09-09T05:33:23.647Z · LW(p) · GW(p)

Credibility of the CDC on SARS-CoV-2 [LW · GW] is related, but to me it belongs to a reference class that is at least moderately different: 1) because it is in the arena of politics, and 2) because they have an incentive to lie to the public for infohazard reasons, regardless of whether or not you agree with that. What I'm trying to discuss with the Google example above is the reference class of an organization "getting it wrong" for "internal" reasons rather than "external" ones.

comment by Rudi C (rudi-c) · 2020-09-09T17:24:30.996Z · LW(p) · GW(p)

It’s pretty simple, I think: the costs of Google Docs’ problems fall on you, with a small cost to Google itself, and a negligible cost to the decision-makers at Google who are responsible.

PS: Couldn’t you just copy the code you wrote in an editor to the Doc? If not, this might be a hidden upside: they can watch as people code in Google Docs (as far as I remember), but doing this with an editor is somewhat harder. (VSCode’s Live Share or using a TUI editor in a shared tmux session seem like better solutions to me, but Google optimizes for the lowest common denominator.)

comment by adamzerner · 2020-09-09T19:10:59.224Z · LW(p) · GW(p)

It’s pretty simple, I think: the costs of Google Docs’ problems fall on you, with a small cost to Google itself, and a negligible cost to the decision-makers at Google who are responsible.

Wouldn't it hurt the signal-to-noise ratio in evaluating candidates?

PS: Couldn’t you just copy the code you wrote in an editor to the Doc?

Yes. To me the implication of this is that it'd make sense to do so. I'm not sure how it relates to your follow up point.

They can watch as people code in Google Docs (as far as I remember), but doing this with an editor is somewhat harder.

There are options. http://collabedit.com/ is my go-to.

comment by habryka (habryka4) · 2020-09-10T18:50:19.788Z · LW(p) · GW(p)

Pretty substantial shift that simplicity in programming is much better and more achievable than I thought

This one is a bit hard to characterize, but I think over the past few months I got better as a software engineer, and finally gained enough clarity around how a bunch of stuff works that I didn't really understand previously (like compilers, VMs, a bunch of networking stuff, and type systems) to get a sense of which Chesterton's fences in programming land you want to keep up, and which ones you want to tear down. I've overall updated more toward "if something looks really unnecessarily complicated, it probably is indeed unnecessarily complicated, and not just hiding some complexity that comes up in the implementation". A substantial fraction of this is definitely also the result of me just looking into a lot of JS libraries, and repeatedly finding them doing really dumb stuff that could be done much more simply.

comment by Dagon · 2020-09-10T20:57:04.137Z · LW(p) · GW(p)

This pretty much describes my mumble-many years of increasing scope as a programmer. There is plenty of irreducible complexity around, but there's FAR MORE accidental complexity from incorrect attempts at encapsulation, forcing edge cases into "standard mechanisms" (bloating those standards), failing to force edge cases into "standard mechanisms" (bloating the non-core systems), insufficiently abstracted or too-abstracted models of data and behaviors, etc.

It's easy to move complexity around a system. Done haphazardly, this increases overall complexity. Done carefully, it can decrease overall complexity. Knowing the difference (and being able to push back on (l)users who are not listening to process changes that make the world easier) is what makes it worth hiring a Principal rather than just a Senior SDE.

comment by adamzerner · 2020-09-09T04:47:15.921Z · LW(p) · GW(p)

Significant shift in favor of voting in (presidential) elections being worthwhile.

Previously I figured that the chance of your vote mattering — in the consequentialist sense of actually leading to a different candidate being elected — is so incredibly small that voting isn't something that is actually worthwhile. With the US presidential election coming up I decided to revisit that belief.

I googled and came across What Are the Chances Your Vote Matters? by Andrew Gelman. I didn't read it too carefully but I see that he estimates the chances of your vote mattering ranging from one in a million to one in a trillion. Those odds may seem low, but he also makes the following argument:

If your vote is decisive, it will make a difference for over 300 million people. If you think your preferred candidate could bring the equivalent of a $100 improvement in the quality of life to the average American—not an implausible number, given the size of the federal budget and the impact of decisions in foreign policy, health, the courts, and other areas—you’re now buying a $30 billion lottery ticket. With this payoff, a 1 in 10 million chance of being decisive isn’t bad odds.

$100/person seems incredibly low, but even at that estimate it's enough for voting to have a pretty high expected value.
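Gelman's back-of-the-envelope numbers are easy to sketch out explicitly (the variable names are mine; the figures all come from the quote above):

```python
# Expected value of a single vote, using the numbers from Gelman's quote.
p_decisive = 1 / 10_000_000        # chance your vote decides the election
benefit_per_person = 100           # assumed $100 improvement per American
population = 300_000_000           # roughly the US population

payoff = benefit_per_person * population   # the "$30 billion lottery ticket"
expected_value = p_decisive * payoff

print(f"Payoff if decisive: ${payoff:,}")                   # $30,000,000,000
print(f"Expected value of voting: ${expected_value:,.0f}")  # $3,000
```

Note that the conclusion is sensitive to where in Gelman's one-in-a-million to one-in-a-trillion range your situation actually falls: at the one-in-a-trillion end, the same payoff gives an expected value of only about three cents.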

That assumes his estimates of the chance of your vote mattering are in the right ballpark, but I figure that they are. I recall seeing Gelman come up in the rationality community various times, including in the sidebar of Overcoming Bias. That's enough evidence for me to find him highly trustworthy.

In retrospect, I feel silly for having previously thought that voting wasn't worthwhile. How could I have overlooked the insanely large payoff part of the expected value calculation?

comment by adamzerner · 2020-09-09T05:27:17.461Z · LW(p) · GW(p)

In retrospect, I feel silly for having previously thought that voting wasn't worthwhile. How could I have overlooked the insanely large payoff part of the expected value calculation?

Moderate shift towards distrusting my own reasoning ability.

I feel like this is a pretty big thing for me to have overlooked. And that my overlooking it points towards me generally being vulnerable to overlooking similarly important things in the future, in which case I can't expect myself to reason super well about things.

comment by romeostevensit · 2020-09-10T01:39:15.582Z · LW(p) · GW(p)

Instead of thinking of it as a global variable, I tend to think of my reasoning ability as 'good but narrow.' Which is to say, it can arrive at good local results but is prone to missing better optima, even nearby ones, if they lie along dimensions that the problem domain doesn't highlight. The higher the dimensionality of the problem, the more I view my results as highly provisional, and more as a function of time spent versus result quality/completeness.

comment by adamzerner · 2020-09-10T03:49:56.717Z · LW(p) · GW(p)

I like that way of thinking about it. The ability to notice those other dimensions seems like a hugely important skill though. It reminds me of this excerpt from HPMOR:

A Muggle security expert would have called it fence-post security, like building a fence-post over a hundred metres high in the middle of the desert. Only a very obliging attacker would try to climb the fence-post. Anyone sensible would just walk around the fence-post, and making the fence-post even higher wouldn't stop that.

Once you forgot to be scared of how impossible the problem was supposed to be, it wasn't even difficult...

comment by eapache (evan-huus) · 2020-09-09T12:23:16.389Z · LW(p) · GW(p)

Significant shift in the magnitude of positive impact for physical exercise on mental state.

There’s been a lot of research over the years indicating that exercise makes you happier and generally feel better. I’ve always trusted that research to be accurate, but vastly underestimated the size of the effect until I started exercising regularly myself a few months ago. I’m not sure if I’ve actually experienced a greater boost than the research suggests is normal, or if I just never bothered reading the research closely enough and made some bad assumptions.

comment by Neel Nanda (neel-nanda-1) · 2020-09-09T18:41:30.191Z · LW(p) · GW(p)

I'd be curious to hear more about this shift, and how long it took before it became noticeable - exercising more is something I'm currently trying to motivate myself to do!

comment by eapache (evan-huus) · 2020-09-09T22:58:21.406Z · LW(p) · GW(p)

For me specifically, I've learned recently that a basic workout in the morning has big effects on my mood throughout the rest of the day. Specifically I do https://www.youtube.com/watch?v=mmq5zZfmIws before breakfast, which only takes about 10 minutes. The change was noticeable the very first day, and still is - even on days when I skip for a good reason and don't feel bad about it, I still end up noticeably grumpier and more sluggish.

Worth noting that I've jogged in the morning before, and also done that same workout before at other times of the day. So to be more specific, there's something about resistance training in the morning that is really beneficial for me.

Motivation has become a lot easier because of this; my System 1 has recognized the positive association and so it's become something I (mostly) want to do now.

comment by adamzerner · 2020-09-10T03:56:46.573Z · LW(p) · GW(p)

I like that NYT 7 Minute Workout a lot. I also noticed that doing it made me happier. I stopped because I have Achilles problems though.

comment by romeostevensit · 2020-09-10T01:52:32.311Z · LW(p) · GW(p)

Updated even further in the direction that our understanding of 'intentional' behavior is extremely confused, and that this confusion is why so many downstream areas have difficulty with formalisms (paradoxes, uncomputable results, etc.). Sense that this is one of the key bottlenecks in designing systems that generate cooperative equilibria among those systems' users. We don't have a design language for incentive management.

Updated in the direction that there's a bunch of path dependence baked into what we learn is safely ignorable early on in life, and that this contributes to the fragility of our transfer learning. Hidden parameter. This is in addition to, and in contrast with, the 'trauma' story of how early childhood affects us.

Updated that ADHD significantly affects my day to day experience.

Updated my sense of how people spend their time upon reviewing the top 100 most visited websites.

Updated in the direction that there is no 'cure' for cancer. Not in the sense that there is no cure for any specific cancer, but that cancer is a (universal?) dynamic that can occur at any level of abstraction/optimization.

comment by MakoYass · 2020-09-11T02:00:51.824Z · LW(p) · GW(p)

I don't think cancer is universal. I think it's a phenomenon of a level of evolutionary maturity where mutation sometimes has to come at the expense of sustainable accounting. Life may effectively transcend it; not absolutely, since it is impossible to transcend it absolutely, but well enough.

comment by adamzerner · 2020-09-09T19:23:41.980Z · LW(p) · GW(p)

Decent shift away from thinking that knowledge of algorithms and data structures is likely to matter in programming.

I read Vue.js Creator Evan You Interview this morning. This stuck out to me:

Evrone: You joined Google Creative Lab as a creative technologist with an Art History major. Did you experience any lack of math, algorithms and data structures education while working on the Vue? Do we need to study computer science theory to become programmers, or do we need to learn how to be "software writers" and prefer code that is boring but easy to understand?

Evan: Honestly not much — personally I think that Vue, or front-end frameworks in general, isn’t a particularly math/algorithm intensive field (compared to databases, for example). I also still don’t consider myself very strong in algorithm or data structures. It definitely helps to be good in those, but building a popular framework has a lot more to do with understanding your users, designing sensible APIs, building communities, and long term maintenance commitment.

I would have expected front-end frameworks to require a good deal of algorithm intensiveness. I'm not sure exactly how to update on this evidence.

To take a simplistic approach, I'm thinking about it like this. Imagine a spectrum of how "complicated" an app is. On one end are complicated apps that require a lot of algorithmic intensiveness, and on the other are simple apps that don't. I see front-end frameworks as being at maybe the 80th percentile in complexity, and so hearing that they don't actually require algorithmic intensiveness makes me feel like things in the ballpark of the 80th percentile all drop off somewhat.

comment by habryka (habryka4) · 2020-09-09T19:28:40.426Z · LW(p) · GW(p)

Huh, my sense is that it's usually the opposite. "Complicated" domains aren't very amenable to algorithmic solutions, and are just lots of messy special-cases with no easily-embeddable structure. And "simple" domains are ones where you can make a lot of progress with nice algorithms, because they actually have some embeddable structure you care about.

comment by adamzerner · 2020-09-10T07:50:22.059Z · LW(p) · GW(p)

Small-to-decent update against "group rationality practice" being of interest to LessWrongers.

I had originally predicted this thread would get a good amount more upvotes and comments. More generally, I felt optimistic about "group rationality practice" being a type of post that would be of interest to LessWrongers. My object-level model still tells me that I'm right, but the data point of this post shifts me away from it a small-to-decent amount.

comment by adamzerner · 2020-09-09T05:46:02.507Z · LW(p) · GW(p)

Decent shift in favor of the Pareto principle applying to cooking.

This one is more about saliency than about changing my beliefs, but let's roll with it anyway.

I cooked tomato sauce last night and it came out great. But I took a very not-fancy approach to it. I just sauteed a bunch of garlic in olive oil and butter, added some red pepper flakes, dumped in three cans of tomato puree, and let it simmer for about five or six hours.

Previously I've messed around with Serious Eats' much more complicated version. It includes adding fish sauce, tomato paste, chopped onions and carrots, whole onions and carrots while simmering, using an oven instead of the stove top, red wine, basil, oregano, and whatever else. After messing around with different versions of all that, it seems to me that, along the lines of the Pareto principle, there are a few things that are responsible for the large majority of taste differences: 1) how long you simmer it for, 2) how much fat you use, and 3) how much acid you use. Everything else seems like it only has a marginal impact. And last night I felt like I got those variables just right (actually it could have used a little more acidity, but I didn't have any red wine).

But this goes against the message I feel like I receive a lot in the culinary world: that all these little things are important. I guess the message I'm trying to point at is like an anti-Pareto principle. Which sounds like I'm strawmanning, but I don't think I am.

Anyway, I guess I've always been a "culinary pareto" person rather than a "culinary anti-pareto" person, but something about last night just made it feel very salient to me. And I think this shift in saliency also serves the function of shifting my beliefs in practice.

comment by Viliam · 2020-09-09T20:19:34.647Z · LW(p) · GW(p)

Since COVID-19 I am cooking at home a lot, and I would say that most details don't matter (either the difference is difficult to notice, or the difference is negligible). Even cooking a soup 30 minutes longer (I got distracted and forgot I was cooking) made no big difference.

Exceptions: burning food; adding too much salt or acid.

Possible explanation is that some people are more sensitive about the taste, and those may be the ones that add the tiny details in recipes. They may be overrepresented among professional cooks.

Before I got some experience and self-confidence, I was often scared by too many details in the recipes. These days I mostly perceive the recipe as a "binary code" and try to see the "source code" behind it. The source code is like "cook A and B together, optionally add C or D", with some implied rules like "always use E with A, unless specifically told otherwise". The amounts officially specified with two significant digits usually don't have to be taken too precisely; plus or minus 20% is often perfectly okay. Sometimes a detail actually matters... you will find out by experimenting; then you can underline that part of the recipe.

I would like to see a Pareto cookbook. ("Potato soup: Peel and cut a few potatoes, cook in water for 10-30 minutes, add 1/4 teaspoon of salt. Expert version: while cooking add some bay leaves and a little fat.") So that one could start with the simple version, and optionally add the less important details later.

comment by adamzerner · 2020-09-09T20:48:36.777Z · LW(p) · GW(p)

These days I mostly perceive the recipe as a "binary code" and try to see the "source code" behind it.

Wow, that's an awesome analogy!

I would like to see a Pareto cookbook.

I was thinking the same thing. I spend way too much time watching cooking videos on YouTube, and so if there was something like that out there I feel like there's a good chance I would have stumbled across it at this point. Although I'd say Adam Ragusea is reasonably close.

comment by romeostevensit · 2020-09-10T01:35:44.756Z · LW(p) · GW(p)

>I would like to see a Pareto cookbook.

+1. There are some entries in this genre, but I've found them to be low quality and still aimed at dramatically higher effort-to-results ratios, because of the selection effect on the sort of person who would write a cookbook and take lots of actions for granted. I want recipes by Musashi. If you make an extra movement you'll lose your arm in a sword fight.

comment by romeostevensit · 2020-09-10T01:33:36.883Z · LW(p) · GW(p)

If it weren't for frozen peas and carrots my vegetable consumption would halve.

comment by adamzerner · 2020-09-10T03:44:58.878Z · LW(p) · GW(p)

Frozen peas are a pretty big staple for me as well. I find them to be a bit inconsistent though. At best they're sweet and kinda juicy, but at worst they don't have that sweetness and are sorta mealy. Any tips?

I've never been able to eat frozen carrots because of the texture. Do you like them or just put up with them?

comment by romeostevensit · 2020-09-10T07:50:56.425Z · LW(p) · GW(p)

I don't have nearly that much food quality resolution.

comment by adamzerner · 2020-09-10T07:17:57.331Z · LW(p) · GW(p)

Small update in favor of video games being worthwhile.

I've always been an anti-video-games person. Because a) I presume there are many better things to do with one's time, regardless of one's goals. And b) because I presume video games are rather addicting, and thus the potential downside is amplified.

But recently I started playing some video (well, computer) games, and a) they've been making me happy. Perhaps there are some better options, but I think right now I'm enjoying playing them more than the things I normally assume are better than video games, like reading a book or socializing. And b) I'm only finding it slightly addicting.

This has made me think that I've overestimated (a) and (b), but only by a small amount.

comment by Pattern · 2020-09-11T02:12:57.768Z · LW(p) · GW(p)

What games changed your mind?

comment by MakoYass · 2020-09-11T02:04:03.309Z · LW(p) · GW(p)

Nintendo games often seem designed intentionally to be anti-addictive. Shallow systems to the point of only being desirable in short sessions, divided into chunks, wholesome in their explicit reminders to take frequent breaks, having clean endings.

An activity in an atmosphere as an occasional treatment for some malaise, perhaps.

comment by adamzerner · 2020-09-11T02:42:24.649Z · LW(p) · GW(p)

That's good to know. I'll keep it in mind.

comment by adamzerner · 2020-09-09T05:04:15.749Z · LW(p) · GW(p)

Decent shift away from thinking that frequent in-person social contact is necessary for most people.

Since March I have been extremely isolated. I live with my girlfriend, but other than her I haven't really interacted with other humans in person. I saw her friends in person two or three times. Other than that, everything else has been <30-second conversations (paying the rent, getting groceries, etc.). And those conversations have happened only about twice a month. So that's an extremely low level of in-person social contact.

I feel a little bit of craving for in person social contact, but only a little bit. Which surprises me, because I would have expected to feel a good amount more.

My impression is that on the spectrum of "how often a person needs in-person social contact", I require less than other people, but I'm not too extreme. Maybe at the 10th or 20th percentile, something like that. And so if this is how I'm feeling, I'd expect people at the 30th percentile to feel a little more craving, people at the 40th to feel a little more than that, and people at the 50th a little more than that.

It's hard to give a good qualitative description of this, but my impression is that the implication is that people up to, e.g., the 80th percentile wouldn't experience a significant amount of distress from this low a level of social contact. Which is not what I thought before my experiences since March.

Rather than speculating from this one data point, it would probably be more fruitful to look into what researchers have found, but this still feels worth writing up as an exercise at least.

comment by Zian · 2020-09-10T18:51:30.881Z · LW(p) · GW(p)

Update that being really smart with nearly instant access to the right answer is not going to save people from making basic cognitive mistakes.

Background : I have an extremely common health condition. There are well established treatment guidelines freely available and those guidelines have been quite consistent for several years. They say that if life gets worse, try X. Then try Y if that doesn't help enough.

When I told my doctor that things weren't good and outright said I knew that such information existed (as a gentle reminder), she suggested doing Z. Z had not been recommended for many years.

When Z failed, she wanted to try Y but I explicitly asked about X. She agreed it was a good idea and suggested an X-like solution.

The pharmacy got an electronic prescription for A, which is not like X at all and is administered differently than all X-like things.

Thankfully, the mistake was caught at that point by the pharmacy and I got X.

Probable mistakes :

  • not using a checklist or any clinical reference material
  • overly trusting one's memory
  • ignoring contradictory facts from the medical records / drug prescribing software
  • lack of a quick feedback loop for medication questions from a pharmacy (the pharmacy first reported that a problem existed over 5 days before I notified the doctor)

comment by adamzerner · 2020-09-10T21:01:18.618Z · LW(p) · GW(p)

I have the same beliefs and have had similar experiences with doctors. A simple search for literature reviews of my condition (Achilles tendinitis) showed that things they were doing, like prescribing anti-inflammatories, weren't effective. I suspect that they learned things once in medical school and don't take the time to stay up to date. And also that there is an element of social reinforcement if their doctor friends are all doing the same thing.

It makes me think back to what Romeo said [LW(p) · GW(p)] about reasoning ability being "good but narrow". That it can easily just completely overlook certain dimensions. That idea has been swimming around in my head, and I am feeling more and more confident that it is hugely important.

comment by adamzerner · 2020-09-10T07:44:15.631Z · LW(p) · GW(p)

Small update in favor of the importance of brand. And, correspondingly, against the importance of merit.

I was just listening to Joe Rogan's interview of Robert Sapolsky. Partly because I like Sapolsky, and partly because I myself tried starting a podcast, failed at it, and found interviewing to be a much more difficult skill than I previously expected; I am now curious about what makes a good interviewer, and have tried listening to a few Joe Rogan interviews because he's supposed to be a great interviewer.

But I have been pretty unimpressed with Rogan. In his interview of Sapolsky, he jumps right into the topic of toxoplasmosis, which is a cat parasite. My thoughts:

  • If you had a spectrum of all the possible topics you could talk to Robert Sapolsky about, this one would maybe be at the 10th-20th percentile in terms of interest to the general population, I'd guess.
  • I found the conversation to be very difficult to follow and was tempted to give up on it. And I expect that I am probably around the 80th-90th percentile in terms of listeners who would be able to follow it.
  • I got the impression that some of the questions he asked were motivated by him wanting to sound smart rather than by what would best steer the conversation in the direction that would most benefit the podcast.

This all makes me suspect that Rogan isn't actually that great of an interviewer, and that the success of his podcast is largely due to a positive feedback loop: the podcast is successful, so interesting people want to be on it, which brings more success, which gives interesting people more incentive to be on it.

It's not a large update though, just a small one. I didn't think any of this through too carefully and I recognize that success is a tricky thing to understand and explain. And also that Rogan does have a good reputation as an interviewer, not just as having a good podcast.

comment by adamzerner · 2020-09-10T07:56:59.660Z · LW(p) · GW(p)

Small update in favor of it being important to have better vocabulary to describe one's confidence (and, more generally, one's thoughts).

I've been saying things like "a decent shift away" a lot. "Decent", "small", "plausible", "a good amount", "somewhat of an impact", "a significant degree", "trivial impact" — these are all terms that I find myself reaching for. But the "menu" of terms at my disposal feels very insufficient. I wish I had better terms to describe my thoughts.

I've always been a big believer in the importance of this (the linguistic relativity hypothesis, roughly). But the experience of writing up comments for this post has shifted me a small amount further in that direction.

Furthermore, I read through some of the CFAR handbook today, and that too has contributed to this small shift. I didn't feel like I learned anything new, per se, but a lot of the terminology, phrases and expressions they use were new to me, and I expect that they'll be pretty beneficial.

comment by adamzerner · 2020-09-10T07:27:30.189Z · LW(p) · GW(p)

Small update in favor of writing being good for my mental health.

You know that sound your computer makes when the CPU is really active? The fan kicks on to cool it down. My girlfriend says that she can see this happening to me when my mind is running.

And my mind runs a lot. All of the comments I've made here are examples of threads that run in my head throughout the day.

It's pretty unpleasant. I have to think more about why exactly that is, but part of it is a) that I feel like the threads are "running away from me" and I need to "catch them", and b) because they constantly pop up and interrupt what I was previously doing or thinking about. Maybe a better way to describe it would be to call it "cognitive hyperventilating".

Writing them all out here is helping me a little bit. But a) it's only a little bit, and b) I already knew this from the time I've spent journaling. So the new evidence I have only allows for a small update. It would be wrong to rehash the previous/historical evidence I have and update on it again (I recall Eliezer writing about this at some point).

If anyone has had similar experiences or has any advice, I'd love to hear it.

comment by ChristianKl · 2020-09-10T10:00:04.916Z · LW(p) · GW(p)

As far as I understand it, to keep your awareness buffer from being filled by your mind running, you need to fill it with something else.

Meditation is a way to learn to fill your attention with what you perceive. Various physical activities also fill the awareness buffer.

comment by adamzerner · 2020-09-10T07:21:59.816Z · LW(p) · GW(p)

Decent update in favor of the top idea in your mind being really important.

Paul Graham wrote an essay called The Top Idea in Your Mind. He argues (to paraphrase from my memory) that a) you only have space for ~1 thing as the top thing in your mind, and b) that this one thing is what your brain is going to be processing and thinking about subconsciously, and is what you're going to be making progress on.

Since starting this Updates Thread post, I've noticed myself thinking about the updates I make in everyday life, and looking for more pieces of evidence that I can update on. I think it's because this stuff is the top idea in my mind right now.

(Like other updates, I think this one is more about saliency than actually changing my beliefs. I need to think more about what the differences between saliency and updating actually are and how they relate to each other. I'd love to hear more about what others think about this.)