Posts

Blog posts as epistemic trust builders 2020-09-27T01:47:07.830Z · score: 19 (4 votes)
Losing the forest for the trees with grid drawings 2020-09-24T21:13:35.180Z · score: 17 (8 votes)
Updates Thread 2020-09-09T04:34:20.509Z · score: 49 (15 votes)
More Right 2020-07-22T03:36:54.007Z · score: 22 (17 votes)
In praise of contributing examples, analogies and lingo 2020-07-13T06:43:48.975Z · score: 28 (11 votes)
What gripes do you have with Mustachianism? 2020-06-11T23:42:42.472Z · score: 12 (5 votes)
Does taking extreme measures to avoid the coronavirus make sense when you factor in the possibility of a really long life? 2020-06-05T00:58:49.775Z · score: 4 (2 votes)
"No evidence" as a Valley of Bad Rationality 2020-03-28T23:45:44.927Z · score: 108 (44 votes)
Is the Covid-19 crisis a good time for x-risk outreach? 2020-03-19T16:14:45.344Z · score: 17 (7 votes)
Is the coronavirus the most important thing to be focusing on right now? 2020-03-18T22:48:17.191Z · score: 51 (21 votes)
Assorted thoughts on the coronavirus 2020-03-18T07:08:30.614Z · score: 11 (5 votes)
Why would panic during this coronavirus pandemic be a bad thing? 2020-03-08T08:32:50.753Z · score: 9 (6 votes)
Reflections on Premium Poker Tools: Part 4 - Smaller things that I've learned 2019-10-11T01:26:40.240Z · score: 19 (7 votes)
Reflections on Premium Poker Tools: Part 3 - What I've learned 2019-10-11T00:49:10.739Z · score: 27 (10 votes)
Reflections on Premium Poker Tools: Part 2 - Deciding to call it quits 2019-10-09T04:17:25.259Z · score: 40 (10 votes)
Reflections on Premium Poker Tools: Part 1 - My journey 2019-10-09T00:42:05.694Z · score: 43 (14 votes)
Feature Request: Self-imposed Time Restrictions 2019-05-15T22:35:15.883Z · score: 22 (7 votes)
You can be wrong about what you like, and you often are 2018-12-17T23:49:39.935Z · score: 32 (10 votes)
What is abstraction? 2018-12-15T08:36:01.089Z · score: 25 (8 votes)
Trivial inconveniences as an antidote to akrasia 2018-05-18T05:34:55.430Z · score: 49 (16 votes)
Science like a chef 2018-02-08T21:23:45.425Z · score: 76 (25 votes)
Productivity: Working towards a summary of what we know 2017-11-09T22:04:28.389Z · score: 97 (51 votes)
Idea for LessWrong: Video Tutoring 2017-06-23T21:40:50.118Z · score: 13 (13 votes)
Develop skills, or "dive in" and start a startup? 2017-05-26T18:07:34.109Z · score: 1 (2 votes)
How I'd Introduce LessWrong to an Outsider 2017-05-03T04:32:21.396Z · score: 8 (6 votes)
New meet up in Las Vegas! 2017-04-28T23:57:21.098Z · score: 2 (3 votes)
Meetup : Las Vegas Meetup 2017-04-28T00:52:37.705Z · score: 0 (1 votes)
Should we admit it when a person/group is "better" than another person/group? 2016-02-16T09:43:48.330Z · score: 0 (14 votes)
Sports 2015-12-26T19:54:39.204Z · score: 12 (13 votes)
Non-communicable Evidence 2015-11-17T03:46:01.503Z · score: 10 (17 votes)
What is your rationalist backstory? 2015-09-25T01:25:04.036Z · score: 8 (9 votes)
Why Don't Rationalists Win? 2015-09-05T00:57:28.156Z · score: 1 (13 votes)
Test Driven Thinking 2015-07-24T18:38:46.991Z · score: 3 (6 votes)
Is Greed Stupid? 2015-06-23T20:38:34.027Z · score: -6 (18 votes)
Effective altruism and political power 2015-06-17T17:47:11.509Z · score: 4 (6 votes)
Ideas to Improve LessWrong 2015-05-25T22:55:00.818Z · score: 10 (11 votes)
Communicating via writing vs. in person 2015-05-22T04:58:06.373Z · score: 4 (5 votes)
Lessons from each HPMOR chapter in one line [link] 2015-04-09T14:51:53.411Z · score: 11 (12 votes)
How urgent is it to intuitively understand Bayesianism? 2015-04-07T00:43:43.215Z · score: 7 (8 votes)
Learning by Doing 2015-03-24T01:56:43.462Z · score: 4 (7 votes)
Saving for the long term 2015-02-24T03:33:32.183Z · score: 7 (8 votes)
[LINK] Wait But Why - The AI Revolution Part 2 2015-02-04T16:02:08.888Z · score: 17 (18 votes)
Respond to what they probably meant 2015-01-17T23:37:38.135Z · score: 11 (18 votes)
The Superstar Effect 2015-01-03T06:11:19.710Z · score: 10 (19 votes)
Ways to improve LessWrong 2014-09-14T02:25:26.228Z · score: 5 (6 votes)
Is it a good idea to use Soylent once/twice a day? 2014-09-08T00:00:36.118Z · score: 7 (11 votes)
What motivates politicians? 2014-09-05T05:41:01.629Z · score: 3 (8 votes)
Why are people "put off by rationality"? 2014-08-05T18:15:03.905Z · score: 3 (10 votes)
What do rationalists think about the afterlife? 2014-05-13T21:46:48.131Z · score: -17 (27 votes)
A medium for more rational discussion 2014-02-24T17:20:49.248Z · score: 10 (17 votes)

Comments

Comment by adamzerner on Blog posts as epistemic trust builders · 2020-09-27T18:37:48.310Z · score: 2 (1 votes) · LW · GW

Interesting points about social networks and link aggregators. I think you're right.

But at the same time, after years of reading Hacker News, I've started to notice the same authors and find myself going "Oh, I remember you" when I browse HN. It's possible that this experience is rare, but my impression is that I'm a pretty "middle of the pack" reader, and so I expect that others have similar experiences. So then, it seems to me that the effect is still large enough to be worth noting.

Comment by adamzerner on The rationalist community's location problem · 2020-09-26T18:09:16.931Z · score: 2 (1 votes) · LW · GW

What are the benefits you have in mind of making other connections? Intellectual? Hedonic? Networking?

Intellectual: To me, online discussion does a pretty good job providing diversity of opinion and conversation.

Hedonic: I'm under the impression that the 80/20 principle usually applies heavily here, in the sense that the first two people you spend the most time with provide a huge chunk of the value, the next five provide a good amount, and then there's a drop-off. If that's true, then marginal rationalist interactions would be filling in the tail end and not providing too much value.

Networking: This does make sense. After seeing Raemon's comment and sleeping on it I woke up feeling like this could be a big deal. Mostly because of the fact that rationalist organizations do a lot of good for the world. Secondly because although it may be possible to "do networking stuff" remotely, in practice that just doesn't really happen.

Comment by adamzerner on The rationalist community's location problem · 2020-09-26T17:53:55.826Z · score: 2 (1 votes) · LW · GW

In any case, whether or not it would work in normal times, it seems like not a priority right now given the state of the world :P 

Yeah, I definitely agree with that.

Perhaps, but I've found that without a Schelling event like the annual SSC Meetups Everywhere (sadly and obviously canceled this year, maybe I should do something to replace it...), people almost never take that step of reaching out. The map is just so passive, although maybe the real problem is as you implied: that we don't have critical mass.

Hm, maybe it just needs a kickstart. Like if someone from LW sends out a cold email: "Hey, there are 5 other LessWrongers around you. Interested in starting a meetup?"* From there, if you can get that meetup to happen and the people meet each other in person maybe they'll keep in touch.

Something like that happened for me with Indie Hackers. They reached out to me with that message; I started a meetup, and it was sustained for over a year until covid.

*I noticed last night that you can subscribe to this on the community map, but it's opt in and difficult to find, and I suspect those two things explain why it hasn't worked.

Comment by adamzerner on The rationalist community's location problem · 2020-09-26T17:45:37.554Z · score: 2 (1 votes) · LW · GW

Thank you :)

Comment by adamzerner on The rationalist community's location problem · 2020-09-26T04:39:26.420Z · score: 2 (1 votes) · LW · GW

Original: Hm, in my mind that stuff could largely be done remotely, but I'm probably underestimating the importance of in person interaction.

New: This does make sense. After seeing Raemon's comment and sleeping on it I woke up feeling like this could be a big deal. Mostly because of the fact that rationalist organizations do a lot of good for the world. Secondly because although it may be possible to "do networking stuff" remotely, in practice that just doesn't really happen.

Comment by adamzerner on The rationalist community's location problem · 2020-09-26T03:35:25.008Z · score: 2 (1 votes) · LW · GW

I've always suspected that connecting rationalists with other rationalists who are already nearby would be relatively low-hanging fruit, e.g. by pushing the community map harder.

Comment by adamzerner on The rationalist community's location problem · 2020-09-26T03:18:53.364Z · score: 2 (1 votes) · LW · GW

Is there a large benefit to being in a rationalist hub versus living in a rationalist house? Personally I'm pretty sure my answer to that question would be "no", but I'm curious how others feel.

Comment by adamzerner on The rationalist community's location problem · 2020-09-26T02:58:43.062Z · score: 8 (4 votes) · LW · GW

I've lived in Vegas for the past four years or so and have a lot of thoughts about it as a place to live. I wrote some of them up on the Mr. Money Mustache forum and can elaborate if anyone is interested.

My main thought is that for 3-4 months out of the year it's hot enough that you really can't be outside (100+ degrees during the day with a brutal sun), and that to me is a pretty big issue. I expect that too many people would be put off by it for it to work as a rationalist hub.

I also tried starting a LessWrong meetup here and never had anyone show up.

Comment by adamzerner on The rationalist community's location problem · 2020-09-26T02:49:06.784Z · score: 2 (1 votes) · LW · GW

Similar point: it seems to me that having multiple hubs makes sense.

Comment by adamzerner on The new Editor · 2020-09-24T01:10:40.407Z · score: 2 (1 votes) · LW · GW

Awesome!

Comment by adamzerner on The new Editor · 2020-09-23T21:14:31.198Z · score: 4 (2 votes) · LW · GW

I didn't realize that you can copy-paste images. Whoops! In this comment I initially got a link for the image because I didn't know you could copy-paste.

Comment by adamzerner on The new Editor · 2020-09-23T19:52:21.990Z · score: 2 (1 votes) · LW · GW

Embedding screenshots would be really cool as a future feature IMO.

Comment by adamzerner on The new Editor · 2020-09-23T19:50:12.790Z · score: 6 (3 votes) · LW · GW

There's a small bug I've noticed. Steps to reproduce:

  1. Type.
  2. Delete everything you've typed. The placeholder text will appear.
  3. Type again. The new text will overlap with the placeholder text for a few seconds.

Here's what it looks like when the text overlaps the placeholder text:

In the past I think I recall it taking longer than a few seconds for the overlap to go away.

Comment by adamzerner on Gems from the Wiki: Do The Math, Then Burn The Math and Go With Your Gut · 2020-09-18T00:34:58.994Z · score: 6 (3 votes) · LW · GW

Relevant excerpt from Chapter 86 of HPMOR:

Harry was wondering if he could even get a Bayesian calculation out of this. Of course, the point of a subjective Bayesian calculation wasn't that, after you made up a bunch of numbers, multiplying them out would give you an exactly right answer. The real point was that the process of making up numbers would force you to tally all the relevant facts and weigh all the relative probabilities. Like realizing, as soon as you actually thought about the probability of the Dark Mark not-fading if You-Know-Who was dead, that the probability wasn't low enough for the observation to count as strong evidence. One version of the process was to tally hypotheses and list out evidence, make up all the numbers, do the calculation, and then throw out the final answer and go with your brain's gut feeling after you'd forced it to really weigh everything. The trouble was that the items of evidence weren't conditionally independent, and there were multiple interacting background facts of interest...
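The mechanical half of that process is just an odds-form Bayes update over made-up likelihood ratios; here's a minimal sketch, with every number an invented placeholder:

```python
# Sketch of the "do the math" half: an odds-form Bayesian update.
# Every number below is an invented placeholder; making them up is the point.

prior_odds = 1 / 9  # subjective prior odds for hypothesis H (1:9)

# Likelihood ratios P(evidence | H) / P(evidence | not H), one per item of evidence.
# Treated as conditionally independent, which the excerpt notes is the weak spot.
likelihood_ratios = [3.0, 0.8, 5.0]

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability of H: {posterior_prob:.2f}")  # ~0.57 with these numbers

# ...then, per the excerpt, throw the number out and go with the gut feeling
# produced by having been forced to weigh each item.
```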

Comment by adamzerner on Low hanging fruits (LWCW 2020) · 2020-09-15T22:28:52.554Z · score: 5 (3 votes) · LW · GW

I've been wanting to write a post about low hanging fruits for a while. I get the sense that there are a lot of them and that pursuing them is often the best use of one's time.

Comment by adamzerner on Covid 9/10: Vitamin D · 2020-09-11T05:18:12.490Z · score: 2 (1 votes) · LW · GW

Any thoughts on how Trump's admission to lying about covid to prevent panic changes things?

Comment by adamzerner on Updates Thread · 2020-09-11T02:42:24.649Z · score: 2 (1 votes) · LW · GW

That's good to know. I'll keep it in mind.

Comment by adamzerner on Updates Thread · 2020-09-11T02:41:45.133Z · score: 2 (1 votes) · LW · GW

Bloons Tower Defense 5.

Comment by adamzerner on Updates Thread · 2020-09-10T21:01:18.618Z · score: 2 (1 votes) · LW · GW

I have the same beliefs and have had similar experiences with doctors. A simple search for literature reviews of my condition (Achilles tendinitis) showed that things they were doing, like prescribing anti-inflammatories, weren't effective. I suspect that they learned things once in medical school and don't take the time to stay up to date. And also that there is an element of social reinforcement if their doctor friends are all doing the same thing.

It makes me think back to what Romeo said about reasoning ability being "good but narrow": that it can easily just completely overlook certain dimensions. That idea has been swimming around in my head, and I'm feeling more and more confident that it's hugely important.

Comment by adamzerner on Updates Thread · 2020-09-10T16:05:50.165Z · score: 2 (1 votes) · LW · GW

That makes sense as an alternative hypothesis.

Comment by adamzerner on Updates Thread · 2020-09-10T07:56:59.660Z · score: 2 (1 votes) · LW · GW

Small update in favor of it being important to have better vocabulary to describe one's confidence (and, more generally, one's thoughts).

I've been saying things like "a decent shift away" a lot. "Decent", "small", "plausible", "a good amount", "somewhat of an impact", "a significant degree", "trivial impact" — these are all terms that I find myself reaching for. But the "menu" of terms at my disposal feels very insufficient. I wish I had better terms to describe my thoughts.

I've always been a big believer in the importance of this (the linguistic relativity hypothesis, roughly). But the experience of writing up comments for this post has shifted me a small amount further in that direction.

Furthermore, I read through some of the CFAR handbook today, and that too has contributed to this small shift. I didn't feel like I learned anything new, per se, but a lot of the terminology, phrases and expressions they use were new to me, and I expect that they'll be pretty beneficial.

Comment by adamzerner on Updates Thread · 2020-09-10T07:50:22.059Z · score: 5 (4 votes) · LW · GW

Small-to-decent update against "group rationality practice" being of interest to LessWrongers.

I had originally predicted that this thread would get a good amount more upvotes and comments. More generally, I felt optimistic about "group rationality practice" being a type of post that would be of interest to LessWrongers. My object-level model still tells me that I'm right, but the data point of this post shifts me away from it a small-to-decent amount.

Comment by adamzerner on Updates Thread · 2020-09-10T07:44:15.631Z · score: 3 (2 votes) · LW · GW

Small update in favor of the importance of brand. And, correspondingly, against the importance of merit.

I was just listening to Joe Rogan's interview of Robert Sapolsky. Partly because I like Sapolsky, and partly because I tried starting a podcast myself, failed at it, and found interviewing to be a much more difficult skill than I'd expected. I'm now curious about what makes a good interviewer, and I've tried listening to a few Joe Rogan interviews because he's supposed to be a great interviewer.

But I have been pretty unimpressed with Rogan. In his interview of Sapolsky, he jumps right into the topic of toxoplasmosis, which is a cat parasite. My thoughts:

  • If you had a spectrum of all the possible topics you could talk to Robert Sapolsky about, this one would maybe be at the 10th-20th percentile in terms of interest to the general population, I'd guess.
  • I found the conversation to be very difficult to follow and was tempted to give up on it. And I expect that I am probably around the 80th-90th percentile in terms of listeners who would be able to follow it.
  • I got the impression that some of the questions he asked were motivated by him wanting to sound smart rather than by what would best steer the conversation in the direction that would most benefit the podcast.

This all makes me suspect that Rogan isn't actually that great of an interviewer, and that the success of his podcast is largely due to a positive feedback loop: the podcast is successful, so interesting people want to be on it, which brings more success, which gives interesting people more incentive to be on it.

It's not a large update though, just a small one. I didn't think any of this through too carefully and I recognize that success is a tricky thing to understand and explain. And also that Rogan does have a good reputation as an interviewer, not just as having a good podcast.

Comment by adamzerner on Updates Thread · 2020-09-10T07:27:30.189Z · score: 2 (1 votes) · LW · GW

Small update in favor of writing being good for my mental health.

You know that sound your computer makes when the CPU is really active? The fan kicks on to cool it down. My girlfriend says that she can see this happening to me when my mind is running.

And my mind runs a lot. All of the comments I've made here are examples of threads that run in my head throughout the day.

It's pretty unpleasant. I have to think more about why exactly that is, but part of it is a) that I feel like the threads are "running away from me" and I need to "catch them", and b) that they constantly pop up and interrupt what I was previously doing or thinking about. Maybe a better way to describe it would be to call it "cognitive hyperventilating".

Writing them all out here is helping me a little bit. But a) it's only a little bit, and b) I already knew this from the time I've spent journaling. So the new evidence I have only allows for a small update. It would be wrong to rehash the previous/historical evidence I have and update on it again (I recall Eliezer writing about this at some point).

If anyone has had similar experiences or has any advice, I'd love to hear it.

Comment by adamzerner on Updates Thread · 2020-09-10T07:21:59.816Z · score: 2 (1 votes) · LW · GW

Decent update in favor of the top idea in your mind being really important.

Paul Graham wrote an essay called The Top Idea in Your Mind. He argues (to paraphrase from my memory) that a) you only have space for ~1 thing as the top thing in your mind, and b) that this one thing is what your brain is going to be processing and thinking about subconsciously, and is what you're going to be making progress on.

Since starting this Updates Thread post, I've noticed myself thinking about the updates I make in everyday life, and looking for more pieces of evidence that I can update on. I think it's because this stuff is the top idea in my mind right now.

(Like other updates, I think this one is more about saliency than actually changing my beliefs. I need to think more about what the differences between saliency and updating actually are and how they relate to each other. I'd love to hear more about what others think about this.)

Comment by adamzerner on Updates Thread · 2020-09-10T07:17:57.331Z · score: 4 (2 votes) · LW · GW

Small update in favor of video games being worthwhile.

I've always been an anti-video games person, because a) I presume there are many better things to do with one's time, regardless of one's goals, and b) I presume video games are rather addicting, and thus the potential downside is amplified.

But recently I started playing some video (well, computer) games and a) they've been making me happy. Perhaps there are some better options, but I think right now I'm enjoying playing them more than the things I normally assume are better than video games, like reading a book or socializing. And b) I'm only finding them slightly addicting.

This has made me think that I've overestimated (a) and (b), but only by a small amount.

Comment by adamzerner on Updates Thread · 2020-09-10T03:56:46.573Z · score: 4 (2 votes) · LW · GW

I like the NYT 7-Minute Workout a lot. I also noticed that doing it made me happier. I stopped because I have Achilles problems though.

Comment by adamzerner on Updates Thread · 2020-09-10T03:49:56.717Z · score: 2 (1 votes) · LW · GW

I like that way of thinking about it. The ability to notice those other dimensions seems like a hugely important skill though. It reminds me of this excerpt from HPMOR:

A Muggle security expert would have called it fence-post security, like building a fence-post over a hundred metres high in the middle of the desert. Only a very obliging attacker would try to climb the fence-post. Anyone sensible would just walk around the fence-post, and making the fence-post even higher wouldn't stop that.

Once you forgot to be scared of how impossible the problem was supposed to be, it wasn't even difficult...

Comment by adamzerner on Updates Thread · 2020-09-10T03:44:58.878Z · score: 2 (1 votes) · LW · GW

Frozen peas are a pretty big staple for me as well. I find them to be a bit inconsistent though. At best they're sweet and kinda juicy, but at worst they don't have that sweetness and are sorta mealy. Any tips?

I've never been able to eat frozen carrots because of the texture. Do you like them or just put up with them?

Comment by adamzerner on Updates Thread · 2020-09-09T20:48:36.777Z · score: 7 (4 votes) · LW · GW

These days I mostly perceive the recipe as a "binary code" and try to see the "source code" behind it.

Wow, that's an awesome analogy!

I would like to see a Pareto cookbook.

I was thinking the same thing. I spend way too much time watching cooking videos on YouTube, and so if there was something like that out there I feel like there's a good chance I would have stumbled across it at this point. Although I'd say Adam Ragusea is reasonably close.

Comment by adamzerner on Updates Thread · 2020-09-09T19:23:41.980Z · score: 6 (3 votes) · LW · GW

Decent shift away from thinking that knowledge of algorithms and data structures is likely to matter in programming.

I read Vue.js Creator Evan You Interview this morning. This stuck out to me:

Evrone: You joined Google Creative Lab as a creative technologist with an Art History major. Did you experience any lack of math, algorithms and data structures education while working on the Vue? Do we need to study computer science theory to become programmers, or do we need to learn how to be "software writers" and prefer code that is boring but easy to understand?

Evan: Honestly not much — personally I think that Vue, or front-end frameworks in general, isn’t a particularly math/algorithm intensive field (compared to databases, for example). I also still don’t consider myself very strong in algorithm or data structures. It definitely helps to be good in those, but building a popular framework has a lot more to do with understanding your users, designing sensible APIs, building communities, and long term maintenance commitment.

I would have expected front-end frameworks to require a good deal of algorithm intensiveness. I'm not sure exactly how to update on this evidence.

To take a simplistic approach, I'm thinking about it like this. Imagine a spectrum of how "complicated" an app is. On one end are complicated apps that require a lot of algorithmic intensiveness, and on the other are simple apps that don't. I see front-end frameworks as being at maybe the 80th percentile in complexity, and so hearing that they don't actually require algorithmic intensiveness makes me feel like things in the ballpark of the 80th percentile all drop off somewhat.

Comment by adamzerner on Updates Thread · 2020-09-09T19:10:59.224Z · score: 2 (1 votes) · LW · GW

It’s pretty simple, I think; The cost of the problems of Google Doc fall on you, with a small cost on Google itself, and negligible cost on the decision makers in Google responsible.

Wouldn't it hurt the signal-to-noise ratio in evaluating candidates?

PS: Couldn’t you just copy the code you wrote in an editor to the Doc?

Yes. To me the implication of this is that it'd make sense to do so. I'm not sure how it relates to your follow up point.

They can watch as people code on Google Doc (as far as I remember), but doing this with an editor is somewhat harder.

There are options. http://collabedit.com/ is my go-to.

Comment by adamzerner on Updates Thread · 2020-09-09T05:46:02.507Z · score: 5 (3 votes) · LW · GW

Decent shift in favor of the pareto principle applying to cooking.

This one is more about saliency than about changing my beliefs, but let's roll with it anyway.

I cooked tomato sauce last night and it came out great. But I took a very not-fancy approach to it. I just sauteed a bunch of garlic in olive oil and butter, added some red pepper flakes, dumped in three cans of tomato puree, and let it simmer for about five or six hours.

Previously I've messed around with Serious Eats' much more complicated version. It includes adding fish sauce, tomato paste, chopped onions and carrots, whole onions and carrots while simmering, using an oven instead of the stove top, red wine, basil, oregano, and whatever else. After messing around with different versions of all that it seems to me that along the lines of the pareto principle, there are a few things that are responsible for the large majority of taste differences: 1) how long you simmer it for, 2) how much fat you use, and 3) how much acid you use. Everything else seems like it only has a marginal impact. And last night I felt like I got those variables just right (actually it could have used a little more acidity but I didn't have any red wine).

But this goes against the message I feel like I receive a lot in the culinary world that all these little things are important. I guess the message I'm trying to point at is like an anti-pareto principle. Which sounds like I'm strawmanning, but I don't think I am.

Anyway, I guess I've always been a "culinary pareto" person rather than a "culinary anti-pareto" person, but something about last night just made it feel very salient to me. And I think this shift in saliency also serves the function of shifting my beliefs in practice.

Comment by adamzerner on Updates Thread · 2020-09-09T05:33:23.647Z · score: 5 (3 votes) · LW · GW

Credibility of the CDC on SARS-CoV-2 is related, but to me it belongs to a reference class that is at least moderately different: 1) because it's in the arena of politics, and 2) because they have an incentive to lie to the public for infohazard reasons, regardless of whether or not you agree with it. What I'm trying to discuss with the Google example above is the reference class of an organization "getting it wrong" for "internal" reasons rather than "external" ones.

Comment by adamzerner on Updates Thread · 2020-09-09T05:27:17.461Z · score: 2 (1 votes) · LW · GW

In retrospect, I feel silly for having previously thought that voting wasn't worthwhile. How could I have overlooked the insanely large payoff part of the expected value calculation?

Moderate shift towards distrusting my own reasoning ability.

I feel like this is a pretty big thing for me to have overlooked. And that my overlooking it points towards me generally being vulnerable to overlooking similarly important things in the future, in which case I can't expect myself to reason super well about things.

Comment by adamzerner on Updates Thread · 2020-09-09T05:04:15.749Z · score: 3 (2 votes) · LW · GW

Decent shift away from thinking that frequent in-person social contact is necessary for most people.

Since March I have been extremely isolated. I live with my girlfriend, but other than her I haven't really interacted with other humans in person. I saw her friends in person two or three times. Other than that everything else has been <30 second conversations (paying the rent, getting groceries, etc.). And those conversations have only happened about twice a month. So that's an extremely low level of in-person social contact.

I feel a little bit of craving for in person social contact, but only a little bit. Which surprises me, because I would have expected to feel a good amount more.

My impression is that on the spectrum of "how much in-person social contact a person needs", I require less than other people, but I'm not too extreme. Maybe at the 10th or 20th percentile, something like that. And so if this is how I'm feeling, I'd expect people at the 30th percentile to feel a little more craving, people at the 40th to feel a little more than that, and the 50th a little more than that.

It's hard to give a good qualitative description of this, but my impression is that the implication is that people up to eg. the 80th percentile wouldn't experience a significant amount of distress or anything from this low a level of social contact. Which is not what I thought before my experiences since March.

Rather than speculating from this one data point, it would probably be more fruitful to look into what researchers have found, but this still feels worth writing up as an exercise at least.

Comment by adamzerner on Updates Thread · 2020-09-09T04:55:37.601Z · score: 13 (7 votes) · LW · GW

Decent shift away from assuming by default that decisions made by large organizations are reasonable.

I'm in the process of interviewing with Google for a programming job and the recruiter initially told me they do the interview in a Google Doc, and to practice coding in the Google Doc so I'm familiar with the environment for the interview.

I tried doing so and found it very frustrating. The vertical space between lines is too large. Page breaks get in the way. There are just a lot of annoying things about trying to program in a Google Doc.

So then, why would Google choose to have people use it for interviews? They're aware of these difficulties, and yet they chose to use Google Docs for interviews anyway. Why? They're a bunch of smart people, so surely there must be things in the trade-off calculus that make it worthwhile.

I looked around a little bit because I was curious, and the only good thing I found was this Quora page on it. There wasn't much insight. The big thing seemed to be that it's easier for hiring committees to comment on what the candidate wrote and discuss it as they decide whether or not to pass the candidate. That makes sense as an upside, but it doesn't explain why they'd use Google Docs, because you could just have the candidate program in a normal text editor and then copy-paste the code into Google Docs afterwards. And I know that I'm not the first person to have thought of that idea. So at this point I just felt confused, not sure whether to give Google the benefit of the doubt or to trust my intuitive sense that having a candidate program in a Google Doc is an awful idea.

Today I had my phone interview, and they're using this interviewing.google.com thing where you code in a normal (enough) editor. Woo hoo! My interviewer was actually in the engineering productivity department at Google and I asked him about it at the end of the interview (impulsive; probably not the most beneficial thing I could have chosen to ask about). We didn't have much time to talk about it, but his response suggested that he felt this new approach is clearly better than using Google Docs, from which I infer that there wasn't some hidden benefit to using Google Docs that I was overlooking.

I also interpret the fact that they moved from using Google Docs to using the normal editor as evidence that the initial decision to use Google Docs wasn't carefully considered. I'm having trouble articulating why I interpret this as evidence. In worlds where there is some hidden benefit that makes Google Docs superior to a normal editor for these interviews, I just wouldn't have expected them to shift to this new approach. It's possible that the initial decision to use Google Docs was reasonable and carefully considered, and they just came across new information that led to them changing their minds, but it feels more likely that it wasn't carefully considered initially and what happened was more "Wait, this is stupid, why are we using Google Docs for interviews? Let's do something better." And if that's true for an organization as reputable as Google, I'd expect it to happen in all sorts of other organizations. Meaning that the next time I think to myself, "This seems obviously stupid. But they're smart people. Should I give them the benefit of the doubt?", I lean a decent amount more towards answering "No".

Comment by adamzerner on Updates Thread · 2020-09-09T04:47:15.921Z · score: 7 (6 votes) · LW · GW

Significant shift in favor of voting in (presidential) elections being worthwhile.

Previously I figured that the chance of your vote mattering — in the consequentialist sense of actually leading to a different candidate being elected — is so incredibly small that voting isn't something that is actually worthwhile. With the US presidential election coming up I decided to revisit that belief.

I googled and came across What Are the Chances Your Vote Matters? by Andrew Gelman. I didn't read it too carefully, but I see that he estimates the chance of your vote mattering as ranging from one in a million to one in a trillion. Those odds may seem low, but he also makes the following argument:

If your vote is decisive, it will make a difference for over 300 million people. If you think your preferred candidate could bring the equivalent of a $100 improvement in the quality of life to the average American—not an implausible number, given the size of the federal budget and the impact of decisions in foreign policy, health, the courts, and other areas—you’re now buying a $30 billion lottery ticket. With this payoff, a 1 in 10 million chance of being decisive isn’t bad odds.

$100/person seems incredibly low, but even at that estimate it's enough for voting to have a pretty high expected value.
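Spelling out the expected-value arithmetic behind that, treating the 1-in-10-million chance and the $100-per-person benefit as assumed inputs:

```python
# Rough expected-value sketch of Gelman's argument; both inputs are the
# quoted assumptions, not measured quantities.

p_decisive = 1 / 10_000_000      # assumed chance that one vote decides the election
benefit_per_person = 100         # assumed $ improvement per American
population = 300_000_000         # roughly the US population

total_benefit = benefit_per_person * population  # the "$30 billion lottery ticket"
expected_value = p_decisive * total_benefit

print(f"total benefit if decisive: ${total_benefit:,.0f}")    # $30,000,000,000
print(f"expected value of one vote: ${expected_value:,.0f}")  # about $3,000
```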

Assuming his estimates of the chance of your vote mattering are in the right ballpark. But I figure that they are. I recall seeing Gelman come up in the rationality community various times, including in the sidebar of Overcoming Bias. That's enough evidence for me to find him highly trustworthy.

In retrospect, I feel silly for having previously thought that voting wasn't worthwhile. How could I have overlooked the insanely large payoff part of the expected value calculation?

Comment by adamzerner on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-09-01T20:29:35.663Z · score: 2 (1 votes) · LW · GW

There's a lot of trickiness in "if you just let anyone submit disagreeing statements, you're opening yourself up to managing arguments about whether so-and-so is a crackpot or whatever" and that sounds like a huge pain, I'm not sure if there's a way to sidestep that.

I don't think it'd really be possible to sidestep it 100%, but if you e.g. only accept statements from people with PhDs, maybe that'd be good enough. E.g. maybe the benefit of the extra inputs would outweigh the fact that the sources aren't fully vetted.

Comment by adamzerner on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-09-01T20:26:24.873Z · score: 3 (2 votes) · LW · GW

To me microCOVID's defaults seem close enough to the truth that the ideal version you describe wouldn't provide too much marginal value.

Especially since, at least to me, the value is mostly in knowing what activities I will/won't do rather than nailing down the precise number of microCOVIDs. Eg. knowing that eating at a restaurant inside is 8,500 microCOVIDs instead of 10,000 wouldn't be enough to get me to eat at a restaurant inside, so it doesn't really matter to me whether the real number is 8,500 or 10,000. However, given the wide confidence intervals, maybe this point doesn't have too much weight.

Comment by adamzerner on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-09-01T16:52:52.619Z · score: 2 (1 votes) · LW · GW

I keep coming back to the "dollars conversion" because there's a very real sense in which we're trained our entire lives to evaluate how to price things in dollars; if I tell you a meal costs $25 you have an instant sense of whether that's cheap or outrageous. Since we don't have a similar fine-tuned model for risk, piggybacking one on the other could be a good way to build intuition faster.

That's a great way to put it. And since the goal of the microCOVID project is behavior change (presumably), I think it's crucial to get the "have an instant sense of whether it's cheap or outrageous" part right. Without that I fear that only the most committed people would be motivated enough to change their behavior, but a lot of those people are probably being cautious to begin with.

Anecdotally, I was talking to my brother (not super committed) about it last night, and that data point supported what I'm saying.

Comment by adamzerner on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-09-01T16:43:23.225Z · score: 3 (2 votes) · LW · GW

He was saying that it is worth $10k to him to avoid the experience of being sick with but not dying from Covid.

Impact on others can be incorporated into the dollar estimate using R0 and the value you place on those other lives as parameters.

Edit: microCOVIDs also exclude impact on others.
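A minimal sketch of what that adjustment might look like, where the effective R and the dollar value placed on each onward infection are made-up placeholders (the $10k personal figure is the one from the quoted comment):

```python
# Sketch of folding impact on others into the dollar cost of an activity.
# All parameter values here are invented placeholders, not estimates, and
# only first-generation transmission is counted, to keep things simple.

def activity_cost_dollars(microcovids, personal_cost_per_covid,
                          effective_r, cost_per_onward_infection):
    """Expected dollar cost of an activity, including expected onward infections."""
    p_infection = microcovids / 1_000_000
    cost_to_others = effective_r * cost_per_onward_infection
    return p_infection * (personal_cost_per_covid + cost_to_others)

# Example: 100 microCOVIDs, the $10k personal cost from the quoted comment,
# an assumed effective R of 0.9, and an assumed $10k per onward infection.
print(f"{activity_cost_dollars(100, 10_000, 0.9, 10_000):.2f}")  # -> 1.90 dollars
```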

Comment by adamzerner on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-08-31T16:42:26.989Z · score: 2 (1 votes) · LW · GW

Ah I see. My mistake for missing that!

Comment by adamzerner on microCOVID.org: A tool to estimate COVID risk from common activities · 2020-08-31T01:26:22.293Z · score: 8 (7 votes) · LW · GW

I think it'd be cool to go from microCOVIDs to expected QALYs lost, and then from there put a rough dollar figure on it based on the value of a QALY.

Edit:

  • 10 microCOVIDs
  • = 1 in 100k chance of getting COVID
  • = 1 in 100M chance of dying from COVID @ 0.1% fatality rate
  • = 0.0000005 expected QALYs lost @ 50 QALYs available to lose
  • = $0.10 @ $200k/QALY
  • = $0.01 / microCOVID with these assumptions

Eg. 10 microCOVIDs = 0.0005 expected QALYs lost (assuming 50 QALYs available to lose) = $100 (@ $200k/QALY).

Knowing that it "costs" about $100 to hang out with two friends outside feels a lot more concrete and actionable than knowing that there's a 1 in 100k chance it gives me COVID, in no small part due to scope insensitivity.

Comment by adamzerner on Covid 8/20: A Little Progress · 2020-08-20T18:23:54.219Z · score: 2 (1 votes) · LW · GW

I'd be interested in hearing some discussion of what is happening in other countries: a) because I'm curious about what's happening, but also b) because I figure it says something about what we can expect in the US.

Comment by adamzerner on Is Wirecutter still good? · 2020-08-07T23:40:14.629Z · score: 6 (4 votes) · LW · GW

My data point: They're my go-to source so I've made various purchases based off their recommendations over the years, and I've been pretty happy with them as a whole.

Also, what Romeo says about them being an 80:20 option seems very plausible to me.

Comment by adamzerner on Tools for keeping focused · 2020-08-05T19:38:43.612Z · score: 2 (1 votes) · LW · GW

SelfControl is by far my favorite productivity tool. You can block a website for a period of time in a way that's irreversible, even if you uninstall the SelfControl app itself. I use it in tandem with auto-selfcontrol, which is used to schedule and run blocks automatically. I'd also recommend extending the max block length to something like a week rather than 24 hours. I like having longer periods of time, like a few days at least, without internet.

Comment by adamzerner on What are you looking for in a Less Wrong post? · 2020-08-02T19:52:51.615Z · score: 2 (1 votes) · LW · GW

Yeah. Also Write To Say Stuff Worth Knowing by Robin Hanson.

Comment by adamzerner on What are you looking for in a Less Wrong post? · 2020-08-01T21:19:30.519Z · score: 14 (7 votes) · LW · GW

For me, it boils down to being useful.

For something to be useful, it first has to be true. From there, there's a bunch of different ways for a post to close the gap and be something that I find useful. Maybe it teaches me how to be happy. Maybe it teaches something about rationality. Maybe it teaches me something about how the world works.

Comment by adamzerner on Tagging Open Call / Discussion Thread · 2020-08-01T19:46:04.436Z · score: 7 (4 votes) · LW · GW

When I click "Add Tag", this is what I see:

Non-expanded view of Add Tag

Then I clicked to show more, because I know there are a lot more tags and want to make sure that if I tag a post it has all of the proper tags (because if I don't it'll be marked as tagged and it's likely that no one will return to it to add the proper tags):

Expanded view of Add Tag

But this view isn't organized well like the concepts portal is (below), so I felt the need to skim through each individual tag, which took a long time. Seems like it'd be a good idea to organize the above view to look more like the below view.

Concepts portal view of tags