Posts

2019 Review: Voting Results! 2021-02-01T03:10:19.284Z
Last day of voting for the 2019 review! 2021-01-26T00:46:35.426Z
The Great Karma Reckoning 2021-01-15T05:19:32.447Z
COVID-19: home stretch and fourth wave Q&A 2021-01-06T22:44:29.382Z
The medical test paradox: Can redesigning Bayes rule help? 2020-12-24T22:08:30.602Z
The LessWrong 2019 Review 2020-12-02T11:21:11.533Z
Sunday November 29th, 12:00PM (PT) — Andreas Stuhlmüller on Combining GPT-3 and Forecasting 2020-11-26T22:14:35.540Z
[Event] Ajeya's Timeline Report Reading Group #1 (Nov. 17, 6:30PM - 8:00PM PT) 2020-11-14T19:14:43.550Z
Sunday November 15th, 12:00PM (PT) — talks by Abram Demski, Daniel Kokotajlo and (maybe) more! 2020-11-13T00:53:17.126Z
Open & Welcome Thread – November 2020 2020-11-03T20:46:12.745Z
Sunday October 11th, 12:00PM (PT) — talks by Alex Zhu, Martin Sustrik and Steve Byrnes 2020-10-07T23:22:14.050Z
The new Editor 2020-09-23T02:25:53.914Z
AI Advantages [Gems from the Wiki] 2020-09-22T22:44:36.671Z
Sunday September 27, 12:00PM (PT) — talks by Alex Flint, Alex Zhu and more 2020-09-22T21:59:56.546Z
Gems from the Wiki: Do The Math, Then Burn The Math and Go With Your Gut 2020-09-17T22:41:24.097Z
Sunday September 20, 12:00PM (PT) — talks by Eric Rogstad, Daniel Kokotajlo and more 2020-09-17T00:27:47.735Z
Gems from the Wiki: Paranoid Debating 2020-09-15T03:51:10.453Z
Gems from the Wiki: Acausal Trade 2020-09-13T00:23:32.421Z
Notes on good judgement and how to develop it (80,000 Hours) 2020-09-12T17:51:27.174Z
How Much Computational Power Does It Take to Match the Human Brain? 2020-09-12T06:38:29.693Z
What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers 2020-09-12T01:46:07.349Z
‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) 2020-09-11T20:31:00.990Z
Sunday September 13, 12:00PM (PT) — talks by John Wentworth, Liron and more 2020-09-10T19:49:06.325Z
How To Fermi Model 2020-09-09T05:13:19.243Z
Conflict, the Rules of Engagement, and Professionalism 2020-09-05T05:04:16.081Z
Open & Welcome Thread - September 2020 2020-09-04T18:14:17.056Z
Sunday September 6, 12pm (PT) — Casual hanging out with the LessWrong community 2020-09-03T02:08:25.687Z
Open & Welcome Thread - August 2020 2020-08-06T06:16:50.337Z
Use resilience, instead of imprecision, to communicate uncertainty 2020-07-20T05:08:52.759Z
The New Frontpage Design & Opening Tag Creation! 2020-07-09T04:37:01.137Z
AI Research Considerations for Human Existential Safety (ARCHES) 2020-07-09T02:49:27.267Z
Open & Welcome Thread - July 2020 2020-07-02T22:41:35.440Z
Open & Welcome Thread - June 2020 2020-06-02T18:19:36.166Z
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T05:00:35.435Z
[Announcement] LessWrong will be down for ~1 hour on the evening of April 10th around 10PM PDT (5:00AM GMT) 2020-04-09T05:09:24.241Z
April Fools: Announcing LessWrong 3.0 – Now in VR! 2020-04-01T08:00:15.199Z
Rob Bensinger's COVID-19 overview 2020-03-28T21:47:31.684Z
Coronavirus Research Ideas for EAs 2020-03-27T22:10:35.767Z
March 25: Daily Coronavirus Updates 2020-03-27T04:32:18.530Z
March 24th: Daily Coronavirus Link Updates 2020-03-26T02:22:35.214Z
March 22nd & 23rd: Coronavirus Link Updates 2020-03-25T01:08:14.499Z
March 21st: Daily Coronavirus Links 2020-03-23T00:43:29.913Z
March 20th: Daily Coronavirus Links 2020-03-21T19:17:33.320Z
March 19th: Daily Coronavirus Links 2020-03-21T00:00:54.173Z
Sarah Constantin: Oxygen Supplementation 101 2020-03-20T01:00:16.453Z
March 18th: Daily Coronavirus Links 2020-03-19T22:20:27.217Z
March 17th: Daily Coronavirus Links 2020-03-18T20:55:45.372Z
March 16th: Daily Coronavirus Links 2020-03-18T00:00:33.273Z
Kevin Simler: Outbreak 2020-03-16T22:50:37.994Z
March 14/15th: Daily Coronavirus link updates 2020-03-16T22:24:11.637Z

Comments

Comment by habryka (habryka4) on The Scout Mindset - read-along · 2021-04-22T01:33:58.630Z · LW · GW

That's good to hear! I haven't yet gotten super far into the book, so can't judge for myself yet, and my guess about doing more first-principles reasoning was mostly based on priors.

Comment by habryka (habryka4) on The Scout Mindset - read-along · 2021-04-22T00:50:14.412Z · LW · GW

I am really glad about this choice, and I have also made similar epistemic updates over the last few years. My guess is that if I were to write a book, I would probably make a similar choice (though probably with more first-principles reasoning and a lot more Fermi estimates, even though the latter sure sounds like it would cut into my sales :P).

Comment by habryka (habryka4) on The Scout Mindset - read-along · 2021-04-18T01:36:08.505Z · LW · GW

Thank you for doing this! 

I was planning to listen to the book, and I hope I can get around to leaving some comments with thoughts here.

Comment by habryka (habryka4) on People Will Listen · 2021-04-13T05:40:49.977Z · LW · GW

I know of 2-3 friends who lost $5k+ when Mt. Gox went down, and who didn't hold crypto anywhere else.

Comment by habryka (habryka4) on "Taking your environment as object" vs "Being subject to your environment" · 2021-04-12T04:50:27.288Z · LW · GW

Yeah, the current phrase feels confusing to me. If a human takes something else as a subject, that... feels like it has some different connotations. In my mind the two opposing phrases are "being subject to" (passive) and "taking as object" (active).

Comment by habryka (habryka4) on niplav's Shortform · 2021-04-10T04:33:15.700Z · LW · GW

Yep, that's fine. I am not a moral prescriptivist who tells you what you have to care about. 

I do think that you are probably going to change your mind on this at some point in the next millennium if we ever get to live that long, and I do have a bunch of arguments that feel relevant, but I don't think it's completely implausible you really don't care.

I do think that not caring about people who are far away is pretty common, and building EA on that assumption seems fine. Not all clubs and institutions need to be justifiable to everyone.

Comment by habryka (habryka4) on Monastery and Throne · 2021-04-09T23:30:28.947Z · LW · GW

Hmm, I do think I honestly believe that behavioral scientists might be worse than the average politician at predicting public response. Like, I am not totally confident, but I think I would take a 50% bet. So this strikes me as overall mildly bad (though not very bad; I don't expect either of these two groups to be very good at this).

Comment by habryka (habryka4) on niplav's Shortform · 2021-04-09T23:29:23.335Z · LW · GW

I don't know, I think it's a pretty decent argument. I agree it sometimes gets overused, but given its assumptions, "you care about people far away as much as people close by", "there are lots of people far away you can help much more than people close by", and "here is a situation where you would help someone close by, so you might also want to help the people far away in the same way" form a totally valid logical chain of inference that seems useful to have in discussions on ethics.

Like, you don't need to take it to an extreme, but it seems locally valid and totally fine to use, even if not all the assumptions that make it locally valid are always fully explicated.

Comment by habryka (habryka4) on Open and Welcome Thread - April 2021 · 2021-04-09T23:09:40.083Z · LW · GW

Yeah, that makes sense. Will be more careful with moving old historical posts to the frontpage for this reason.

Comment by habryka (habryka4) on Open and Welcome Thread - April 2021 · 2021-04-08T20:08:42.690Z · LW · GW

Can you paste the link to the RSS feed? We've recently moved a bunch of old sequences posts to the frontpage that we missed when we did our initial pass in 2017, so that seems like the most likely cause, if you are subscribed to a feed that filters only on frontpage posts.

Comment by habryka (habryka4) on Open & Welcome Thread – March 2021 · 2021-04-06T20:47:40.432Z · LW · GW

Yeah, I really want to get around to this. I am sorry for splitting the feature-set awkwardly across two editors!

Comment by habryka (habryka4) on Rationalism before the Sequences · 2021-04-06T03:54:52.953Z · LW · GW

Woah, at least one of those summaries seems really quite inaccurate. Bad enough that like, I feel like I should step in as a moderator and be like "wait, this doesn't seem OK". 

I am not very familiar with ESR's opinions, but your summary of "white people at BLM protests should be assumed to be communists and shot at will" is really misrepresenting the thing he actually said. What he actually said was "White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends;", with the key difference being "white rioters" instead of "white people". While there is still plenty to criticize in that sentence, this seems like a really crucial distinction that makes that sentence drastically less bad.

Topics like this tend to get really politicized and emotional, which I think means it's reasonable to apply some extra scrutiny and care to not misrepresent what other people said, and generally err on the side of quoting verbatim (ideally while giving substantial additional context).

Comment by habryka (habryka4) on Would a post about optimizing physical attractiveness be fitting for this forum? · 2021-04-05T07:44:09.618Z · LW · GW

Lukeprog wrote some related posts a while ago: https://www.lesswrong.com/posts/x8Fp9NMgDWbuMpizA/rationality-lessons-learned-from-irrational-adventures-in

In particular the stuff on fashion.

Comment by habryka (habryka4) on What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) · 2021-04-01T01:35:57.085Z · LW · GW

This is great, thank you! 

Minor formatting note: The italics font on both the AI Alignment Forum and LessWrong isn't super well suited to large blocks of text, so I took the liberty of unitalicizing a bunch of the large blockquotes (which should be sufficiently distinguishable as blockquotes without the italics). Though I am totally happy to reverse it if you prefer the previous formatting.

Comment by habryka (habryka4) on Rationalism before the Sequences · 2021-03-31T22:07:57.341Z · LW · GW

This post of mine feels closely related: https://www.lesswrong.com/posts/xhE4TriBSPywGuhqi/integrity-and-accountability-are-core-parts-of-rationality 

  • I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a large variety of other domains in which their incentives are not well optimized.
  • People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize to "will have accurate beliefs in other domains".

    One is strongly predictive of the other, and that’s in part due to general thinking skills and broad cognitive ability. But another major piece of the puzzle is the person's ability to build and seek out environments with good incentive structures.
  • Everyone is highly irrational in their beliefs about at least some aspects of reality, and positions of power in particular tend to encourage strong incentives that don't tend to be optimally aligned with the truth. This means that highly competent people in positions of power often have less accurate beliefs than competent people who are not in positions of power.
  • The design of systems that hold people who have power and influence accountable in a way that aligns their interests with both forming accurate beliefs and the interests of humanity at large is a really important problem, and is a major determinant of the overall quality of the decision-making ability of a community. General rationality training helps, but for collective decision making the creation of accountability systems, the tracking of outcome metrics and the design of incentives is at least as big of a factor as the degree to which the individual members of the community are able to come to accurate beliefs on their own.

Comment by habryka (habryka4) on Rationalism before the Sequences · 2021-03-30T22:51:38.052Z · LW · GW

Mod note: I moved this to frontpage despite it being a bit similar to things we've historically left on people's personal blog. Usually there are three checks I run for deciding whether to put something on the frontpage: 

  1. Is it not timeless? 
  2. Is it trying to sell you something, or persuade you, or leverage a bunch of social connections to get you to do something?  (e.g. eliciting donations usually falls in this category)
  3. Is it about community inside-baseball that makes it hard to participate in if you aren't part of the social network?

For this essay, I think the answer is "No" for basically all three (with the last one maybe being a bit true, but not really), so overall I decided to move this to the frontpage.

Comment by habryka (habryka4) on Conceptual engineering: the revolution in philosophy you've never heard of · 2021-03-30T04:47:36.741Z · LW · GW

Hi, I'm new to this site so not sure if late comments are still answered...

Late comments are generally encouraged around here; we aim to have discussion that stands the test of time, and we don't ever shut down comment threads because they are too old.

Comment by habryka (habryka4) on deluks917's Shortform · 2021-03-27T21:08:14.968Z · LW · GW

That sucks. Sorry for your loss.

Comment by habryka (habryka4) on Is a Self-Iterating AGI Vulnerable to Thompson-style Trojans? · 2021-03-25T18:10:24.118Z · LW · GW

Edit note: I fixed your images for you. They seemed broken on Chrome since the server on which they were hosted didn't support https. 

Comment by habryka4 on [deleted post] 2021-03-24T05:49:36.383Z

I think the Open Thread is probably a generally better place to bring up random new ideas related to Roko's basilisk stuff. This page is more for discussing the current content of the page, and how it might be improved.

Comment by habryka (habryka4) on Open & Welcome Thread – March 2021 · 2021-03-23T04:00:22.284Z · LW · GW

Yep, agree, also want this. Just a bit complicated tech-wise and UI-wise, so it's a reasonably large investment.

Comment by habryka (habryka4) on Book review: Why we sleep · 2021-03-22T22:37:32.658Z · LW · GW

Note that guzey's excellent writeup on this definitely qualifies, and I offered to send him the money, but if I remember correctly he didn't want it, and we will settle it informally when we hang out in the future sometime.

Comment by habryka (habryka4) on A Retrospective Look at the Guild of Servants: Alpha Phase · 2021-03-22T22:31:01.959Z · LW · GW

This is great! Thank you for writing this up, and I am looking forward to seeing where this goes!

(I probably have more detailed thoughts, but not sure whether I will get around to writing them up, so it seemed better to leave encouragement instead of nothing)

Comment by habryka (habryka4) on just_browsing's Shortform · 2021-03-16T04:53:42.173Z · LW · GW

AirPods are amazing at switching between devices (in particular Macs and iPhones). They're the only set of headphones that seems to have made this work reliably.

Comment by habryka (habryka4) on Direct effects matter! · 2021-03-16T00:39:12.125Z · LW · GW

Yeah, I meant it as "I think this comment is OK and shouldn't be deleted or cause the author to get a warning, but it seemed like the kind of thing that could lead to followup comments that would be quite bad"

Comment by habryka (habryka4) on Rad Hardman's Shortform · 2021-03-15T07:06:24.482Z · LW · GW

We have a setting that allows you to view them exactly that way. But I think too large a fraction of people reading LW posts are lurkers, which makes me hesitant to force them to press an additional button for every poll; still, it seems like a reasonable setting for some people to opt into.

Comment by habryka (habryka4) on RSS Feeds are fixed and should be properly functional this time · 2021-03-15T01:19:33.548Z · LW · GW

In the left sidebar menu, click on Subscribe (RSS/Email).

Comment by habryka (habryka4) on Direct effects matter! · 2021-03-14T05:29:42.649Z · LW · GW

Mod note: We generally try to keep generalizations about political parties, and central culture-war topics in general, out of most of the site discussion. I think this comment is fine, but I would prefer that the comments on this post not become a "the left thinks or the right thinks" type of discussion, which I think is rarely fruitful.

Comment by habryka (habryka4) on How can we stop talking past each other when it comes to postrationality? · 2021-03-12T20:45:18.570Z · LW · GW

David Gerard not only has 1000 karma but also for a long time had admin rights on at least our Wiki. I think it's strawmanning him to say that he just doesn't understand LessWrong when he spent years in our community and then decided that it's not the right place for him anymore.

No, just because you spend years here does not mean you understand the core ideas. 

I think we have plenty of evidence that David Gerard frequently makes up random strawmen that have nothing to do with us. Maybe there is a small corner of his mind that does have an accurate model of what we are about, but almost always when he writes something, he says random things that have very little to do with what we actually do.

Comment by habryka (habryka4) on How can we stop talking past each other when it comes to postrationality? · 2021-03-12T20:43:47.973Z · LW · GW

No, his critique of Bayesianism is also attacking something very different from the sequences; it is again talking about something much narrower. Indeed, substantial fractions of the sequences overlap with his critique of Bayesianism (in particular all the stuff about embeddedness, logical uncertainty, incomputability and TDT-style concerns). I don't think he agrees with everything in the sequences, but when he writes critiques, I am pretty sure he is responding to something other than the sequences.

Comment by habryka (habryka4) on How can we stop talking past each other when it comes to postrationality? · 2021-03-12T17:55:19.310Z · LW · GW

Also, having 220 karma on the site is really not much evidence you understand what rationality is about. David Gerard has over 1000 karma and very clearly doesn't understand what the site is about either.

I am pretty sure Chapman has also said he hasn't read the sequences, though generally I think he understands most content on the site fine. The problem is again not that he doesn't understand the site, but just that he is using the word rationality to mean something completely different. I like a bunch of his critique, and indeed Eliezer made 90% of the same critiques when he talks about "old Rationality" in the sequences.

See this tweet: 

https://twitter.com/Meaningness/status/1298019579978014720

Not sure to what extent I’m subtweeted here, but in case clarification is helpful, by “rationalism” I do NOT mean LW. It’s weird that the Berkeley people think “rationalism” is something they invented, when it’s been around for 2600+ years.

Prototypical rationalists are Plato, Kant, Bertrand Russell, not Yudkowsky. If you want a serious living figure, Chomsky or Dennett maybe.

Comment by habryka (habryka4) on How can we stop talking past each other when it comes to postrationality? · 2021-03-12T17:51:13.256Z · LW · GW

David Chapman has said himself that when he is referring to rationality, what he is talking about has nothing to do with LessWrong. He is referring to the much older philosophical movement of "Rationalism". The whole thing with Chapman is literally just an annoying semantic misunderstanding. He also has some specific critiques of things that Eliezer said, but 95% of the time when he critiques rationalism, it has absolutely nothing to do with what is written on this site.

Comment by habryka (habryka4) on Rad Hardman's Shortform · 2021-03-12T07:40:52.359Z · LW · GW

I also prefer spoiler blocks

Comment by habryka (habryka4) on [Lecture Club] Awakening from the Meaning Crisis · 2021-03-12T07:21:17.731Z · LW · GW

Required some integration from both sides. But yeah, the new editor made it much easier.

Comment by habryka4 on [deleted post] 2021-03-11T20:31:23.723Z

My guess is we want to rename this tag to "Quantified Self" since that sure seems like it should get a tag?

Comment by habryka (habryka4) on supposedlyfun's Shortform · 2021-03-10T22:38:00.984Z · LW · GW

Yep, I expect some people will want them turned off, which is why we tried to make that pretty easy! It might also make sense to batch them into a weekly batch instead of a daily one, which I've done at some points to reduce the degree to which I felt like I was goodharting on them.

Comment by habryka (habryka4) on A whirlwind tour of Ethereum finance · 2021-03-06T21:55:55.356Z · LW · GW

Thanks for reporting! Will make sure we fix this soon.

Comment by habryka (habryka4) on John_Maxwell's Shortform · 2021-03-05T00:52:53.752Z · LW · GW

  • Replication Crisis definitely hit hard. Lots of stuff there. 
  • People's timelines have changed quite a bit. People used to plan for 50-60 years, now it's much more like 20-30 years. 
  • Bayesianism is much less the basis for stuff. I think this one is still propagating, but I think Embedded Agency had a big effect here, at least on me and a bunch of other people I know.
  • There were a lot of shifts on the spectrum "just do explicit reasoning for everything" to "figuring out how to interface with your System 1 sure seems really important". I think Eliezer was mostly ahead of the curve here, and early on in LessWrong's lifetime we kind of fell prey to following our own stereotypes.
  • A lot of EA related stuff. Like, there is now a lot of good analysis and thinking about how to maximize impact, and if you read old EA-adjacent discussions, they sure strike me as getting a ton of stuff wrong.
  • Spaced repetition. I think the pendulum on this swung somewhat too far, but I think people used to be like "yeah, spaced repetition is just really great and you should use it for everything" and these days the consensus is more like "use spaced repetition in a bunch of narrow contexts, but overall memorizing stuff isn't that great". I do actually think rationalists are currently underusing spaced repetition, but overall I feel like there was a large shift here. 
  • Nootropics. I feel like in the past many more people were like "you should take this whole stack of drugs to make you smarter". I see that advice a lot less, and would advise many fewer people to follow that advice, though not actually sure how much I reflectively endorse that.
  • A bunch of AI Alignment stuff in the space of "don't try to solve the AI Alignment problem directly, instead try to build stuff that doesn't really want to achieve goals in a coherent sense and use that to stabilize the situation". I think this was kind of similar to the S1 stuff, where Eliezer seemed ahead of the curve, but the community consensus was kind of behind. 

Comment by habryka (habryka4) on Seven Years of Spaced Repetition Software in the Classroom · 2021-03-04T07:23:38.417Z · LW · GW

Wow, it's great to see follow-up posts over the course of seven years. Thank you so much for the work you put into this! I am really looking forward to reading this thoroughly sometime in the next few days.

Comment by habryka4 on [deleted post] 2021-03-03T02:22:15.771Z

Yep, I feel similarly, though overall I think the EA Forum is pursuing a cultural strategy that is somewhat different from ours, which makes it a bit less costly, but not much. I have generally been open about various cultural concerns I've had about the EA Forum when talking to CEA.

Comment by habryka (habryka4) on Takeaways from one year of lockdown · 2021-03-02T23:19:45.817Z · LW · GW

Also... it seems really unreasonable to say "if you can't handle 10 hours of grueling negotiations about what COVID precautions to take, you're weak and I need to cut you out of my life and/or take away decisionmaking power from you during times of stress." I would guess that, uhh, most people are weak by that definition.

To be clear, I do indeed think we have the luxury to exclude most people from our lives. In fact, any rule that doesn't exclude 90%+ of people from your life to a very large degree seems far too lax to me.

Also, 10 hours really doesn't seem that much over the course of a pandemic, so I do think the above holds for me. It just seems really really crucial to maintain coordination ability in crises, and this will require some harsh decisions.

Comment by habryka (habryka4) on Introduction to Reinforcement Learning · 2021-03-01T19:03:11.219Z · LW · GW

Note: There is a broken image in the post: 

Comment by habryka (habryka4) on Anna and Oliver discuss Children and X-Risk · 2021-02-28T18:38:41.326Z · LW · GW

I am pretty confused on this, and as I said above, don't put much weight on this study because I also have some sense that the author isn't super trustworthy (though I haven't found any critique of this specific paper). 

Overall, my current sense is that the effect on women in particular is quite strong, and women who choose to have children will reduce their chance of major achievement by at least 40% or so. For men it's probably weaker, and I am a lot less sure what the data says. 

Comment by habryka (habryka4) on Anna and Oliver discuss Children and X-Risk · 2021-02-28T02:06:59.130Z · LW · GW

There is also this paper, which aims to show that as soon as great scientists marry, they very quickly stop producing great achievements, but something about it irks me and I don't currently put a ton of weight on it:

Comment by habryka (habryka4) on Anna and Oliver discuss Children and X-Risk · 2021-02-28T02:04:46.197Z · LW · GW

This analysis was one of the most useful I have found: https://academic.oup.com/psychsocgerontology/article/64B/6/767/550078

Abstract: 

Compared with married parents, childless married couples tend to have slightly more income and about 5% more wealth. Unmarried childless men enjoy no income advantage over unmarried fathers but have 24%–33% more wealth. Compared with older unmarried mothers, unmarried childless women have 12%–31% more income and about 33% more wealth. The strength of these relationships increases as one moves up the distribution of income or wealth.

At the higher levels of wealth, the effects become quite strong. 

And when I looked at random datasets like "what percentage of famous scientists, according to this one random collection of scientist biographies, are married?", I found about 27% of them to be unmarried, a stark increase over the roughly 15% population average. Traditional marriage here is used as a proxy for having children, partially because I think they have effects via the same mechanisms, and partially because they are heavily correlated.

Comment by habryka (habryka4) on Anna and Oliver discuss Children and X-Risk · 2021-02-28T02:01:43.710Z · LW · GW

Yep, in general about 85% of people have kids, and something like half of the people who don't are childless because of fertility problems or other issues that tend to get them classified as "involuntarily childless" in a bunch of studies. So the population to study here (people who voluntarily don't have children) has historically made up only something like 7% of the population. So just looking through lists of successful people and seeing that most of them have kids isn't really going to provide a ton of evidence.
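
To make that base-rate point concrete, here is a minimal back-of-envelope sketch. The population shares are the ones from this comment; the 2x achievement multiplier for the voluntarily childless group is purely hypothetical, picked only for illustration:

```python
# Back-of-envelope sketch of the base-rate argument above.
# Population shares are taken from the comment; the relative achievement
# rates are made-up numbers, used only to illustrate the point.

shares = {
    "has_kids": 0.85,
    "involuntarily_childless": 0.08,
    "voluntarily_childless": 0.07,
}
# Hypothetical: suppose voluntarily not having kids doubled the rate of major achievement.
relative_rate = {
    "has_kids": 1.0,
    "involuntarily_childless": 1.0,
    "voluntarily_childless": 2.0,
}

# Expected share of high achievers coming from each group.
weights = {group: shares[group] * relative_rate[group] for group in shares}
total = sum(weights.values())
for group, weight in weights.items():
    print(f"{group}: {weight / total:.0%} of high achievers")

# Even with the made-up 2x multiplier, roughly 79% of high achievers still have kids,
# so "most successful people have kids" is only weak evidence either way.
```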

Comment by habryka (habryka4) on Covid 2/25: Holding Pattern · 2021-02-27T00:24:27.401Z · LW · GW

Just to check, do you want us to reimport, or did you do it yourself?

Comment by habryka (habryka4) on Utility Maximization = Description Length Minimization · 2021-02-24T21:04:30.975Z · LW · GW

Promoted to curated: As Adele says, this feels related to a bunch of the Jeffrey-Bolker rotation ideas, which I've referenced many, many times since then, but in a way that feels somewhat independent, which makes me more excited about there being some deeper underlying structure here.

I've also had something like this in my mind for a while, but haven't gotten around to formalizing it, and I think I've seen other people make similar arguments in the past, which makes this a valuable clarification and synthesis that I expect to get referenced a bunch.

Comment by habryka (habryka4) on Apply to Effective Altruism Funds now · 2021-02-24T07:20:13.493Z · LW · GW

Maybe, but it really depends on whether you have a good track record or there is some other reason why it seems like a good idea to fund from an altruistic perspective.

Comment by habryka (habryka4) on Best way to write a bicolor article on Less Wrong? · 2021-02-23T18:08:22.027Z · LW · GW

If you ever want to do anything particularly weird in an article, you can send me plain HTML via the Intercom and I will insert it into the post directly (after doing some basic sanitization). This will usually make the post admin-only editable (if you used any HTML features that are admin-only), but it works well enough, and I've done this a few times for articles that really wanted to use color (Beth's AI Safety Debate writeup is one that comes to mind here).