Interesting points about social networks and link aggregators. I think you're right.
But at the same time, after years of reading Hacker News, I've started to notice the same authors and find myself going "Oh, I remember you" when I browse HN. It's possible that this experience is rare, but my impression is that I'm a pretty "middle of the pack" reader, so I expect that others have similar experiences. If so, the effect still seems large enough to be worth noting.
What are the benefits you have in mind of making other connections? Intellectual? Hedonic? Networking?
Intellectual: To me, online discussion does a pretty good job providing diversity of opinion and conversation.
Hedonic: I'm under the impression that the 80/20 principle usually applies heavily here, in the sense that the 2 people you spend the most time with provide a huge chunk of the value, the next 5 provide a good amount, then there's a drop-off, etc. If that's true, then the marginal rationalist interactions would be filling in the tail end and not providing too much value.
Networking: This does make sense. After seeing Raemon's comment and sleeping on it I woke up feeling like this could be a big deal. Mostly because of the fact that rationalist organizations do a lot of good for the world. Secondly because although it may be possible to "do networking stuff" remotely, in practice that just doesn't really happen.
In any case, whether or not it would work in normal times, it seems like not a priority right now given the state of the world :P
Yeah, I definitely agree with that.
Perhaps, but I've found that without a Schelling event like the annual SSC Meetups Everywhere (sadly and obviously canceled this year, maybe I should do something to replace it...), people almost never take that step of reaching out. The map is just so passive, although maybe the real problem is as you implied: that we don't have critical mass.
Hm, maybe it just needs a kickstart. Like if someone from LW sends out a cold email: "Hey, there are 5 other LessWrongers around you. Interested in starting a meetup?"* From there, if you can get that meetup to happen and the people meet each other in person maybe they'll keep in touch.
Something like that happened for me with Indie Hackers. They reached out to me with that message, I started a meetup and it was sustained for over a year until covid.
*I noticed last night that you can subscribe to this on the community map, but it's opt in and difficult to find, and I suspect those two things explain why it hasn't worked.
I've lived in Vegas for the past four years or so and have a lot of thoughts about it as a place to live. I wrote some of them up on the Mr. Money Mustache forum and can elaborate if anyone is interested.
My main thought is that for 3-4 months out of the year it's hot enough that you really can't be outside (100+ degrees during the day with a brutal sun), which to me is a pretty big issue. I expect that too many people would be put off by it for it to work as a rationalist hub.
I also tried starting a LessWrong meetup here and never had anyone show up.
Harry was wondering if he could even get a Bayesian calculation out of this. Of course, the point of a subjective Bayesian calculation wasn't that, after you made up a bunch of numbers, multiplying them out would give you an exactly right answer. The real point was that the process of making up numbers would force you to tally all the relevant facts and weigh all the relative probabilities. Like realizing, as soon as you actually thought about the probability of the Dark Mark not-fading if You-Know-Who was dead, that the probability wasn't low enough for the observation to count as strong evidence. One version of the process was to tally hypotheses and list out evidence, make up all the numbers, do the calculation, and then throw out the final answer and go with your brain's gut feeling after you'd forced it to really weigh everything. The trouble was that the items of evidence weren't conditionally independent, and there were multiple interacting background facts of interest...
I have the same beliefs and have had similar experiences with doctors. A simple search for literature reviews of my condition (Achilles tendinitis) showed that things they were doing, like prescribing anti-inflammatories, weren't effective. I suspect that they learned things once in medical school and don't take the time to stay up to date. And also that there is an element of social reinforcement if their doctor friends are all doing the same thing.
It makes me think back to what Romeo said about reasoning ability being "good but narrow". That it can easily just completely overlook certain dimensions. That idea has been swimming around in my head and I am feeling more and more confident that it is hugely important.
Small update in favor of it being important to have better vocabulary to describe one's confidence (and, more generally, one's thoughts).
I've been saying things like "a decent shift away" a lot. "Decent", "small", "plausible", "a good amount", "somewhat of an impact", "a significant degree", "trivial impact" — these are all terms that I find myself reaching for. But the "menu" of terms at my disposal feels very insufficient. I wish I had better terms to describe my thoughts.
I've always been a big believer in the importance of this (the linguistic relativity hypothesis, roughly). But the experience of writing up comments for this post has shifted me a small amount further in that direction.
Furthermore, I read through some of the CFAR handbook today, and that too has contributed to this small shift. I didn't feel like I learned anything new, per se, but a lot of the terminology, phrases and expressions they use were new to me, and I expect that they'll be pretty beneficial.
Small-to-decent update against "group rationality practice" being of interest to LessWrongers.
I had originally predicted that this thread would get a good amount more upvotes and comments. More generally, I felt optimistic about "group rationality practice" being a type of post that would be of interest to LessWrongers. My object-level model still tells me that I'm right, but the data point of this post shifts me away from it a small-to-decent amount.
Small update in favor of the importance of brand. And, correspondingly, against the importance of merit.
I was just listening to Joe Rogan's interview of Robert Sapolsky. Partly because I like Sapolsky, and partly because I myself tried starting a podcast, failed at it + found interviewing to be a much more difficult skill than I previously expected, am now curious about what makes a good interviewer, and have tried listening to a few Joe Rogan interviews because he's supposed to be a great interviewer.
But I have been pretty unimpressed with Rogan. In his interview of Sapolsky, he jumps right into the topic of toxoplasmosis, a disease caused by a cat parasite. My thoughts:
If you had a spectrum of all the possible topics you could talk to Robert Sapolsky about, this one would maybe be at the 10th-20th percentile in terms of interest to the general population, I'd guess.
I found the conversation to be very difficult to follow and was tempted to give up on it. And I expect that I am probably around the 80th-90th percentile in terms of listeners who would be able to follow it.
I got the impression that some of the questions he asked were motivated by him wanting to sound smart rather than by what would best steer the conversation in the direction that would most benefit the podcast.
This all makes me suspect that Rogan isn't actually that great of an interviewer, and that the success of his podcast is largely due to a positive feedback loop where the podcast is successful, interesting people want to be on it, more success, more incentive for interesting people to be on it.
It's not a large update though, just a small one. I didn't think any of this through too carefully and I recognize that success is a tricky thing to understand and explain. And also that Rogan does have a good reputation as an interviewer, not just as having a good podcast.
Small update in favor of writing being good for my mental health.
You know that sound your computer makes when the CPU is really active? The fan kicks on to cool it down. My girlfriend says that she can see this happening to me when my mind is running.
And my mind runs a lot. All of the comments I've made here are examples of threads that run in my head throughout the day.
It's pretty unpleasant. I have to think more about why exactly that is, but part of it is a) that I feel like the threads are "running away from me" and I need to "catch them", and b) because they constantly pop up and interrupt what I was previously doing or thinking about. Maybe a better way to describe it would be to call it "cognitive hyperventilating".
Writing them all out here is helping me a little bit. But a) it's only a little bit, and b) I already knew this from the time I've spent journaling. So the new evidence I have only allows for a small update. It would be wrong to rehash the previous/historical evidence I have and update on it again (I recall Eliezer writing about this at some point).
If anyone has had similar experiences or has any advice, I'd love to hear it.
Decent update in favor of the top idea in your mind being really important.
Paul Graham wrote an essay called The Top Idea in Your Mind. He argues (to paraphrase from my memory) that a) you only have space for ~1 thing as the top thing in your mind, and b) that this one thing is what your brain is going to be processing and thinking about subconsciously, and is what you're going to be making progress on.
Since starting this Updates Thread post, I've noticed myself thinking about the updates I make in everyday life, and looking for more pieces of evidence that I can update on. I think it's because this stuff is the top idea in my mind right now.
(Like other updates, I think this one is more about saliency than actually changing my beliefs. I need to think more about what the differences between saliency and updating actually are and how they relate to each other. I'd love to hear more about what others think about this.)
Small update in favor of video games being worthwhile.
I've always been an anti-video games person. Because a) I presume there are many better things to do with one's time, regardless of one's goals. And b) because I presume video games are rather addicting, and thus the potential downside is amplified.
But recently I started playing some video (well, computer) games and a) they've been making me happy. Perhaps there are some better options, but I think right now I'm enjoying playing them more than the things I normally assume are better than video games, like reading a book or socializing. And b) I'm only finding it slightly addicting.
This has made me think that I've overestimated (a) and (b), but only by a small amount.
I like that way of thinking about it. The ability to notice those other dimensions seems like a hugely important skill though. It reminds me of this excerpt from HPMOR:
A Muggle security expert would have called it fence-post security, like building a fence-post over a hundred metres high in the middle of the desert. Only a very obliging attacker would try to climb the fence-post. Anyone sensible would just walk around the fence-post, and making the fence-post even higher wouldn't stop that.
Once you forgot to be scared of how impossible the problem was supposed to be, it wasn't even difficult...
Frozen peas are a pretty big staple for me as well. I find them to be a bit inconsistent though. At best they're sweet and kinda juicy, but at worst they don't have that sweetness and are sorta mealy. Any tips?
I've never been able to eat frozen carrots because of the texture. Do you like them or just put up with them?
These days I mostly perceive the recipe as a "binary code" and try to see the "source code" behind it.
Wow, that's an awesome analogy!
I would like to see a Pareto cookbook.
I was thinking the same thing. I spend way too much time watching cooking videos on YouTube, and so if there was something like that out there I feel like there's a good chance I would have stumbled across it at this point. Although I'd say Adam Ragusea is reasonably close.
Evrone: You joined Google Creative Lab as a creative technologist with an Art History major. Did you experience any lack of math, algorithms and data structures education while working on Vue? Do we need to study computer science theory to become programmers, or do we need to learn how to be "software writers" and prefer code that is boring but easy to understand?
Evan: Honestly not much — personally I think that Vue, or front-end frameworks in general, isn’t a particularly math/algorithm intensive field (compared to databases, for example). I also still don’t consider myself very strong in algorithm or data structures. It definitely helps to be good in those, but building a popular framework has a lot more to do with understanding your users, designing sensible APIs, building communities, and long term maintenance commitment.
I would have expected front-end frameworks to require a good deal of algorithm intensiveness. I'm not sure exactly how to update on this evidence.
To take a simplistic approach, I'm thinking about it like this. Imagine a spectrum of how "complicated" an app is. On one end are complicated apps that require a lot of algorithmic intensiveness, and on the other are simple apps that don't. I see front-end frameworks as being at maybe the 80th percentile in complexity, and so hearing that they don't actually require algorithmic intensiveness makes me feel like things in the ballpark of the 80th percentile all drop off somewhat.
Decent shift in favor of the pareto principle applying to cooking.
This one is more about saliency than about changing my beliefs, but let's roll with it anyway.
I cooked tomato sauce last night and it came out great. But I took a very not-fancy approach to it. I just sauteed a bunch of garlic in olive oil and butter, added some red pepper flakes, dumped in three cans of tomato puree, and let it simmer for about five or six hours.
Previously I've messed around with Serious Eats' much more complicated version. It includes adding fish sauce, tomato paste, chopped onions and carrots, whole onions and carrots while simmering, using an oven instead of the stove top, red wine, basil, oregano, and whatever else. After messing around with different versions of all that it seems to me that along the lines of the pareto principle, there are a few things that are responsible for the large majority of taste differences: 1) how long you simmer it for, 2) how much fat you use, and 3) how much acid you use. Everything else seems like it only has a marginal impact. And last night I felt like I got those variables just right (actually it could have used a little more acidity but I didn't have any red wine).
But this goes against the message I feel like I receive a lot in the culinary world that all these little things are important. I guess the message I'm trying to point at is like an anti-pareto principle. Which sounds like I'm strawmanning, but I don't think I am.
Anyway, I guess I've always been a "culinary pareto" person rather than a "culinary anti-pareto" person, but something about last night just made it feel very salient to me. And I think this shift in saliency also serves the function of shifting my beliefs in practice.
Credibility of the CDC on SARS-CoV-2 is related, but to me it belongs to a reference class that is at least moderately different: 1) because it's in the arena of politics, and 2) because they have an incentive to lie to the public for infohazard reasons, regardless of whether or not you agree with that. What I'm trying to discuss with the Google example above is the reference class of an organization "getting it wrong" for "internal" reasons rather than "external" ones.
In retrospect, I feel silly for having previously thought that voting wasn't worthwhile. How could I have overlooked the insanely large payoff part of the expected value calculation?
Moderate shift towards distrusting my own reasoning ability.
I feel like this is a pretty big thing for me to have overlooked. And that my overlooking it points towards me generally being vulnerable to overlooking similarly important things in the future, in which case I can't expect myself to reason super well about things.
Decent shift away from thinking that frequent in-person social contact is necessary for most people.
Since March I have been extremely isolated. I live with my girlfriend, but other than her I haven't really interacted with other humans in person. I saw her friends in person two or three times. Other than that, everything else has been <30-second conversations (paying the rent, getting groceries, etc.), and those conversations have happened only about twice a month. So that's an extremely low level of in-person social contact.
I feel a little bit of craving for in-person social contact, but only a little bit, which surprises me, because I would have expected to feel a good amount more.
My impression is that on the spectrum of "how often a person needs in-person social contact", I require less than other people, but I'm not too extreme; maybe at the 10th or 20th percentile, something like that. And so if this is how I'm feeling, I'd expect people at the 30th percentile to feel a little more craving, people at the 40th a little more than that, the 50th a little more than that.
It's hard to give a good qualitative description of this, but my impression is that the implication is that people up to eg. the 80th percentile wouldn't experience a significant amount of distress or anything from this low a level of social contact. Which is not what I thought before my experiences since March.
Rather than speculating from this one data point, it would probably be more fruitful to look into what researchers have found, but this still feels worth writing up as an exercise at least.
Decent shift away from assuming by default that decisions made by large organizations are reasonable.
I'm in the process of interviewing with Google for a programming job and the recruiter initially told me they do the interview in a Google Doc, and to practice coding in the Google Doc so I'm familiar with the environment for the interview.
I tried doing so and found it very frustrating. The vertical space between lines is too large. Page breaks get in the way. There is just a lot of annoying things about trying to program in a Google Doc.
So then, why would Google choose to have people use it for interviews? They're aware of these difficulties, and yet they chose to use Google Docs for interviews anyway. Why? They're a bunch of smart people, so surely there must be things in the trade-off calculus that make it worthwhile.
I looked around a little bit because I was curious, and the only good thing I found was this Quora page on it. There wasn't much insight. The main benefit seemed to be that it's easier for hiring committees to comment on what the candidate wrote and discuss it as they decide whether or not to pass the candidate. That makes sense as an upside, but it doesn't explain why they'd use Google Docs, because you could just have the candidate program in a normal text editor and then copy-paste it into Google Docs afterwards. And I know that I'm not the first person to have thought of that idea. So at this point I just felt confused, not sure whether to give Google the benefit of the doubt or to trust my intuitive sense that having a candidate program in a Google Doc is an awful idea.
Today I had my phone interview, and they're using this interviewing.google.com thing where you code in a normal (enough) editor. Woo hoo! My interviewer was actually in the engineering productivity department at Google and I asked him about it at the end of the interview (impulsive; probably not the most beneficial thing I could have chosen to ask about). We didn't have much time to talk about it but his response seemed like he felt like this new approach is clearly better than using Google Docs, from which I infer that there wasn't some hidden benefit to using Google Docs that I was overlooking.
I also interpret the fact that they moved from using Google Docs to the normal editor as evidence that the initial decision to use Google Docs wasn't carefully considered. I'm having trouble articulating why I interpret this as evidence. In worlds where there is some hidden benefit that makes Google Docs superior to a normal editor for these interviews, I just wouldn't have expected them to shift to this new approach. It's possible that the initial decision to use Google Docs was reasonable and carefully considered, and they just came across new information that led them to change their minds, but it feels more likely that the decision wasn't carefully considered initially and what happened was more "Wait, this is stupid, why are we using Google Docs for interviews? Let's do something better." And if that's true for an organization as reputable as Google, I'd expect it to happen in all sorts of other organizations. Meaning that the next time I think to myself, "This seems obviously stupid. But they're smart people. Should I give them the benefit of the doubt?", I lean a decent amount more towards answering "No".
Significant shift in favor of voting in (presidential) elections being worthwhile.
Previously I figured that the chance of your vote mattering — in the consequentialist sense of actually leading to a different candidate being elected — is so incredibly small that voting isn't something that is actually worthwhile. With the US presidential election coming up I decided to revisit that belief.
I googled and came across What Are the Chances Your Vote Matters? by Andrew Gelman. I didn't read it too carefully but I see that he estimates the chances of your vote mattering ranging from one in a million to one in a trillion. Those odds may seem low, but he also makes the following argument:
If your vote is decisive, it will make a difference for over 300 million people. If you think your preferred candidate could bring the equivalent of a $100 improvement in the quality of life to the average American—not an implausible number, given the size of the federal budget and the impact of decisions in foreign policy, health, the courts, and other areas—you’re now buying a $30 billion lottery ticket. With this payoff, a 1 in 10 million chance of being decisive isn’t bad odds.
$100/person seems incredibly low, but even at that estimate it's enough for voting to have a pretty high expected value.
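To spell out the arithmetic, here is a minimal sketch in Python using Gelman's illustrative numbers from the quote above (these are his round figures, not precise estimates):

```python
people_affected = 300_000_000       # roughly the US population
benefit_per_person = 100            # Gelman's illustrative $100/person improvement
odds_of_decisive_vote = 10_000_000  # i.e., a 1 in 10 million chance your vote is decisive

total_benefit = people_affected * benefit_per_person    # the "$30 billion lottery ticket"
expected_value = total_benefit / odds_of_decisive_vote  # expected social value per vote, in dollars
print(expected_value)  # 3000.0
```

So even with a one-in-ten-million chance of being decisive, the expected social value of a vote comes out to roughly $3,000 under these assumptions.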
That assumes his estimates of the chances of your vote mattering are in the right ballpark, but I figure that they are. I recall seeing Gelman come up in the rationality community various times, including in the sidebar of Overcoming Bias. That's enough evidence for me to find him highly trustworthy.
In retrospect, I feel silly for having previously thought that voting wasn't worthwhile. How could I have overlooked the insanely large payoff part of the expected value calculation?
There's a lot of trickiness in "if you just let anyone submit disagreeing statements, you're opening yourself up to managing arguments about whether so-and-so is a crackpot or whatever" and that sounds like a huge pain, I'm not sure if there's a way to sidestep that.
I don't think it'd really be possible to sidestep it 100%, but if you eg. only accept statements from people with PhDs, maybe that'd be good enough. Eg. maybe the benefit of the extra inputs would outweigh the fact that the sources aren't fully vetted.
To me microCOVID's defaults seem close enough to the truth that the ideal version you describe wouldn't provide too much marginal value.
Especially since, at least to me, the value is mostly in knowing what activities I will/won't do rather than nailing down the precise number of microCOVIDs. Eg. knowing that eating at a restaurant inside is 8,500 microCOVIDs instead of 10,000 wouldn't be enough to get me to eat at a restaurant inside, so it doesn't really matter to me whether the real number is 8,500 or 10,000. However, given the wide confidence intervals, maybe this point doesn't have too much weight.
I keep coming back to the "dollars conversion" because there's a very real sense in which we're trained our entire lives to evaluate how to price things in dollars; if I tell you a meal costs $25 you have an instant sense of whether that's cheap or outrageous. Since we don't have a similar fine-tuned model for risk, piggybacking one on the other could be a good way to build intuition faster.
That's a great way to put it. And since the goal of the microCOVID project is behavior change (presumably), I think it's crucial to get the "have an instant sense of whether it's cheap or outrageous" part right. Without that I fear that only the most committed people would be motivated enough to change their behavior, but a lot of those people are probably being cautious to begin with.
Anecdotally, I was talking to my brother (not super committed) about it last night, and that data point supported what I'm saying.
I think it'd be cool to go from microCOVIDs to expected QALYs lost, and then from there put a rough dollar figure on it based on the value of a QALY.
Eg. 10 microCOVIDs:
= 1 in 100k chance of getting COVID
= 1 in 100M chance of dying from COVID @ 0.1% fatality rate
= 0.0000005 expected QALYs lost @ 50 QALYs available to lose
= $0.10 @ $200k/QALY
= $0.01 / microCOVID with these assumptions
Eg. 10 microCOVIDs = 0.0005 expected QALYs lost (if you treat getting COVID as costing the full 50 QALYs available to lose) = $100 (@ $200k/QALY).
Knowing that it "costs" about $100 to hang out with two friends outside feels a lot more concrete and actionable than knowing that there's a 1 in 100k chance it gives me COVID, in no small part due to scope insensitivity.
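To make the conversion chain concrete, here's a minimal sketch in Python. The function name and parameter defaults are mine; the 0.1% fatality rate, 50 QALYs remaining, and $200k per QALY are illustrative assumptions, not settled values:

```python
def microcovids_to_dollars(
    microcovids: float,
    fatality_rate: float = 0.001,          # assumed 0.1% infection fatality rate
    qalys_at_stake: float = 50.0,          # assumed QALYs a typical person has left
    dollars_per_qaly: float = 200_000.0,   # assumed dollar value of a QALY
) -> float:
    """Convert a microCOVID count into an expected dollar cost."""
    p_infection = microcovids / 1_000_000  # 1 microCOVID = a one-in-a-million chance of infection
    p_death = p_infection * fatality_rate
    expected_qalys_lost = p_death * qalys_at_stake
    return expected_qalys_lost * dollars_per_qaly

# 10 microCOVIDs come out to roughly $0.10 under these assumptions
print(microcovids_to_dollars(10))
```

Plugging in different fatality rates or QALY values is just a matter of overriding the defaults, which makes it easy to see how sensitive the dollar figure is to each assumption.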
I'd be interested in hearing some discussion of what is happening in other countries. a) Because I'm curious about what's happening but also b) because I figure it says something about what we can expect in the US.
SelfControl is by far my favorite productivity tool. You can block a website for a period of time in such a way that is irreversible, even if you uninstall the SelfControl app itself. I use it in tandem with auto-selfcontrol, which is used to schedule and run blocks automatically. I'd also recommend extending the max block length to like a week or something rather than 24 hours. I like having longer periods of time like a few days at least without internet.
For something to be useful, it first has to be true. From there, there's a bunch of different ways for a post to close the gap and be something that I find useful. Maybe it teaches me how to be happy. Maybe it teaches something about rationality. Maybe it teaches me something about how the world works.
Then I clicked to show more, because I know there are a lot more tags and want to make sure that if I tag a post it has all of the proper tags (because if I don't it'll be marked as tagged and it's likely that no one will return to it to add the proper tags):
But this view isn't organized well like the concepts portal is (below), so I felt the need to skim through each individual tag, which took a long time. Seems like it'd be a good idea to organize the above view to look more like the below view.