Posts

Open & Welcome Thread - December 2019 2019-12-03T00:00:29.481Z · score: 12 (3 votes)
Matthew Walker's "Why We Sleep" Is Riddled with Scientific and Factual Errors 2019-11-16T20:27:57.039Z · score: 57 (32 votes)
Open & Welcome Thread - November 2019 2019-11-02T20:06:54.030Z · score: 12 (4 votes)
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:44:28.241Z · score: 29 (5 votes)
AI Alignment Open Thread October 2019 2019-10-04T01:28:15.597Z · score: 28 (8 votes)
Long-Term Future Fund: August 2019 grant recommendations 2019-10-03T20:41:16.291Z · score: 37 (10 votes)
Survival and Flourishing Fund Applications closing in 3 days 2019-10-02T00:12:21.287Z · score: 21 (4 votes)
SSC Meetups Everywhere: St. Louis, MO 2019-09-14T06:41:26.972Z · score: 0 (0 votes)
SSC Meetups Everywhere: Singapore 2019-09-14T06:38:47.621Z · score: 0 (0 votes)
SSC Meetups Everywhere: San Antonio, TX 2019-09-14T06:37:06.931Z · score: 0 (0 votes)
SSC Meetups Everywhere: Rochester, NY 2019-09-14T06:35:57.399Z · score: 2 (1 votes)
SSC Meetups Everywhere: Rio de Janeiro, Brazil 2019-09-14T06:34:49.726Z · score: 0 (0 votes)
SSC Meetups Everywhere: Riga, Latvia 2019-09-14T06:31:30.880Z · score: 0 (0 votes)
SSC Meetups Everywhere: Reno, NV 2019-09-14T06:24:01.941Z · score: 0 (0 votes)
SSC Meetups Everywhere: Pune, India 2019-09-14T06:22:00.590Z · score: 0 (0 votes)
SSC Meetups Everywhere: Prague, Czechia 2019-09-14T06:17:22.395Z · score: 0 (0 votes)
SSC Meetups Everywhere: Pittsburgh, PA 2019-09-14T06:13:43.997Z · score: 0 (0 votes)
SSC Meetups Everywhere: Phoenix, AZ 2019-09-14T06:10:21.429Z · score: 0 (0 votes)
SSC Meetups Everywhere: Oxford, UK 2019-09-14T05:59:04.728Z · score: 0 (0 votes)
SSC Meetups Everywhere: Ottawa, Canada 2019-09-14T05:56:03.155Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Oslo, Norway 2019-09-14T05:52:44.748Z · score: 0 (0 votes)
SSC Meetups Everywhere: Orange County 2019-09-14T05:49:28.441Z · score: 0 (0 votes)
SSC Meetups Everywhere: Oklahoma City 2019-09-14T05:44:02.157Z · score: 0 (0 votes)
SSC Meetups Everywhere: Norman, OK 2019-09-14T05:37:04.278Z · score: 0 (0 votes)
SSC Meetups Everywhere: New York City, NY 2019-09-14T05:33:27.384Z · score: 0 (0 votes)
SSC Meetups Everywhere: New Haven, CT 2019-09-14T05:29:45.664Z · score: 0 (0 votes)
SSC Meetups Everywhere: New Delhi, India 2019-09-14T05:27:28.837Z · score: 0 (0 votes)
SSC Meetups Everywhere: Munich, Germany 2019-09-14T05:22:58.408Z · score: 1 (1 votes)
SSC Meetups Everywhere: Moscow, Russia 2019-09-14T05:14:03.792Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Miami, FL 2019-09-14T03:36:45.087Z · score: 0 (0 votes)
SSC Meetups Everywhere: Memphis, TN 2019-09-14T03:34:28.740Z · score: 0 (0 votes)
SSC Meetups Everywhere: Melbourne, Australia 2019-09-14T03:32:23.510Z · score: 0 (0 votes)
SSC Meetups Everywhere: Medellin, Colombia 2019-09-14T03:30:32.369Z · score: 0 (0 votes)
SSC Meetups Everywhere: Manchester, UK 2019-09-14T03:28:08.448Z · score: 0 (0 votes)
SSC Meetups Everywhere: Madrid, Spain 2019-09-14T03:26:27.015Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Madison, WI 2019-09-14T03:24:44.933Z · score: 0 (0 votes)
SSC Meetups Everywhere: Lexington, KY 2019-09-14T03:19:52.765Z · score: 0 (0 votes)
SSC Meetups Everywhere: Kitchener-Waterloo, ON 2019-09-14T03:16:50.644Z · score: 0 (0 votes)
SSC Meetups Everywhere: Kiev, Ukraine 2019-09-14T03:14:32.244Z · score: 0 (0 votes)
SSC Meetups Everywhere: Jacksonville, FL 2019-09-14T03:11:45.407Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Iowa City, IA 2019-09-14T03:10:24.372Z · score: 0 (0 votes)
SSC Meetups Everywhere: Indianapolis, IN 2019-09-14T03:05:13.331Z · score: 0 (0 votes)
SSC Meetups Everywhere: Honolulu, HI 2019-09-14T03:02:49.127Z · score: 0 (0 votes)
SSC Meetups Everywhere: Helsinki, Finland 2019-09-14T03:01:22.561Z · score: 0 (0 votes)
SSC Meetups Everywhere: Fairbanks, AK 2019-09-14T02:58:05.828Z · score: 0 (0 votes)
SSC Meetups Everywhere: Halifax, Nova Scotia, Canada 2019-09-14T02:54:32.900Z · score: 0 (0 votes)
SSC Meetups Everywhere: Edinburgh, Scotland 2019-09-14T02:52:42.732Z · score: 0 (0 votes)
SSC Meetups Everywhere: Denver, CO 2019-09-14T02:48:01.870Z · score: 0 (0 votes)
SSC Meetups Everywhere: Columbus, OH 2019-09-14T02:38:23.758Z · score: 0 (0 votes)
SSC Meetups Everywhere: Cologne, Germany 2019-09-14T02:36:47.508Z · score: 0 (0 votes)

Comments

Comment by habryka4 on Applications of Economic Models to Physiology? · 2019-12-11T04:03:07.248Z · score: 6 (3 votes) · LW · GW

I've also spent 30 minutes looking for work in this space and didn't find anything. The closest I could find was Neuroeconomics.

Comment by habryka4 on The Credit Assignment Problem · 2019-12-11T02:51:32.106Z · score: 2 (1 votes) · LW · GW

Promoted to curated: It's been a while since this post came out, but I've been thinking about the "credit assignment" abstraction a lot since then, and have found it quite useful. I also really like how the post made me curious about a lot of different aspects of the world, and how it invited me to boggle at the world together with you.

I also really appreciated your long responses to questions in the comments, which clarified a lot of things for me. 

One thing comes to mind that might improve the post, though I think it's mostly a matter of differing audiences:

I think some sections of the post reference a lot of really high-level concepts, in a way that is valuable as a reference but might also cause a lot of people to bounce off of it (even people with a pretty strong AI Alignment background). I can imagine a version of the post that includes very short explanations of those concepts, or moves them into a context where they are more clearly marked as optional (since I think the post stands well without at least some of those high-level concepts).

Comment by habryka4 on Is Rationalist Self-Improvement Real? · 2019-12-11T01:54:58.532Z · score: 2 (1 votes) · LW · GW

*nods* Seems good. I agree that there are much more interesting things to discuss.

Comment by habryka4 on Is Rationalist Self-Improvement Real? · 2019-12-11T00:27:58.555Z · score: 8 (4 votes) · LW · GW

*nods* You did say the following:

I honestly don’t see how they could sensibly be aggregated into anything at all resembling a natural category

I interpreted that as saying "there is no resemblance between attending a CFAR workshop and reading the sequences", which seems to me to overlook the natural categories of "they both involve reading/listening to largely overlapping concepts" and "their creators largely shared the same aims in the effects they tried to produce in people".

I think there is a valuable and useful argument to be made here that, in the context of trying to analyze the impact of these interventions, you want to be careful to account for the important differences between reading a many-book-length set of explanations and going to an in-person workshop with in-person instructors, but that doesn't seem to me to be what you said in your original comment. You just said that there is no sensible way to put these things into the same category, which seems obviously wrong to me, since there clearly is a lot of shared structure to analyze between these interventions.

Comment by habryka4 on Is Rationalist Self-Improvement Real? · 2019-12-10T22:00:14.233Z · score: 8 (4 votes) · LW · GW

I mean, a lot of the CFAR curriculum is based on content in the sequences, the handbook covers a lot of the same declarative content, and they set out with highly related goals (with Eliezer helping with early curriculum development, though much less so in recent years). The beginning of R:A-Z even explicitly highlights how he thinks CFAR is filling in many of the gaps he left in the sequences, clearly implying that they share the same aim.

Sure, there are differences, but overall they are highly related and I think can meaningfully be judged to be in a natural category, similar to how a textbook and a university class or workshop on the same subject are obviously related, even though they will differ on many relevant dimensions.

Comment by habryka4 on Robin Hanson on the futurist focus on AI · 2019-12-10T21:52:32.939Z · score: 9 (4 votes) · LW · GW

Note that all three of the linked papers are about "boundedly rational agents with perfectly rational principals" or about "equally boundedly rational agents and principals". I have so far been unable to find any papers that follow the described pattern of "boundedly rational principals and perfectly rational agents".

Comment by habryka4 on What's an important (new) idea you haven't had time to argue for yet? · 2019-12-10T20:56:42.240Z · score: 6 (4 votes) · LW · GW

I am confused. If MWI is true, we are all already immortal, and every living mind is instantiated a very large number of times, probably literally forever (since entropy doesn't actually decrease in the full multiverse, and is just a result of statistical correlation, but if you buy the quantum immortality argument you no longer care about this).

Comment by habryka4 on ozziegooen's Shortform · 2019-12-10T20:10:15.892Z · score: 8 (4 votes) · LW · GW

Bayesian agents are logically omniscient, and I think a large fraction of deceptive practices rely on asymmetries in computation time between two agents with access to slightly different information (like generating a lie and checking the consistency of this new statement against all my previous statements).

My sense is also that two-player games with Bayesian agents are actually underspecified and give rise to all kinds of weird things due to the necessity of infinite regress (i.e. an agent modeling the other agent modeling themselves modeling the other agent, etc.), which doesn't actually reliably converge, though I am not confident. A lot of decision theory seems to do weird things with Bayesian agents.

So overall, I am not sure how well you can prove theorems in this space without having made a lot of progress in decision theory, and I expect a lot of our confusions in decision theory to be resolved by moving away from Bayesianism.

Comment by habryka4 on "I don't know." · 2019-12-10T06:13:53.024Z · score: 2 (1 votes) · LW · GW

Yep, that's correct. We experimented with some other indicators, but this was the one that seemed least intrusive.

Comment by habryka4 on Books on the zeitgeist of science during Lord Kelvin's time. · 2019-12-09T20:58:14.071Z · score: 12 (3 votes) · LW · GW

I am also interested in this, and would give around $50 for some good sources on this (this is not a commitment that I will pay whoever gives the best answer to this question, just that if an answer is good enough, I will send that person $50).

Comment by habryka4 on Drowning children are rare · 2019-12-08T18:09:12.498Z · score: 20 (8 votes) · LW · GW

I mean, I agree that Coca-Cola engages in marketing practices that try to fabricate associations that are not particularly truth-oriented, but that's very different from the thing with Theranos.

I model Coca-Cola mostly as damaging to my health, and model its short-term positive performance effects as basically fully mediated via caffeine, but I still think it's providing me value above and beyond those benefits, and outweighing the costs in certain situations.

Theranos seems highly disanalogous, since I think almost no one who knew the actual extent of Theranos' capabilities, and had accurate beliefs about its technologies, would give money to them. I have pretty confident bounds on the effects of Coca-Cola, and still decide to sometimes give them my money, and would be highly surprised if there turned out to be a fact about Coke that its internal executives are aware of (even subconsciously) that would drastically change that assessment for me, and it doesn't seem like that's what you are arguing for.

Comment by habryka4 on Drowning children are rare · 2019-12-06T19:05:12.849Z · score: 14 (6 votes) · LW · GW

Somewhat confused by the Coca-Cola example. I don't buy Coke very often, but it usually seems worth it to me when I do buy it (in small amounts, since I do think it tastes pretty good). Is the claim that they are not providing any value some kind of assumption about my coherent extrapolated volition?

Comment by habryka4 on LW Team Updates - December 2019 · 2019-12-06T00:45:55.412Z · score: 6 (4 votes) · LW · GW

Yeah, I agree with this. I've been more annoyed by performance lately as well, and we are pretty close to shipping a variety of performance improvements that I expect will make a significant difference here (and have a few more in the works after that, though I think it will be quite a while until we are competitive with GreaterWrong performance-wise, in large part due to fundamentally different architectures).

Comment by habryka4 on Gears-Level Models are Capital Investments · 2019-12-05T21:06:14.278Z · score: 2 (1 votes) · LW · GW

Promoted to curated: I think this post captured some core ideas about prediction and modeling in a really clear way, and I particularly liked how it used a lot of examples and was just generally very concrete in how it explained things.

Comment by habryka4 on BrienneYudkowsky's Shortform · 2019-12-04T23:55:44.485Z · score: 19 (3 votes) · LW · GW

I really like this concept. It currently feels to me like a mixture of a fact post and an essay.

From the fact-post post: 

You explicitly do not look for opinion, even expert opinion. You avoid news, and you're wary of think-tank white papers. You're looking for raw information. You are taking a sola scriptura approach, for better and for worse.

And then you start letting the data show you things. 

You see things that are surprising or odd, and you note that. 

You see facts that seem to be inconsistent with each other, and you look into the data sources and methodology until you clear up the mystery.

You orient towards the random, the unfamiliar, the things that are totally unfamiliar to your experience. One of the major exports of Germany is valves?  When was the last time I even thought about valves? Why valves, what do you use valves in?  OK, show me a list of all the different kinds of machine parts, by percent of total exports.  

From Paul Graham's essay post: 

Figure out what? You don't know yet. And so you can't begin with a thesis, because you don't have one, and may never have one. An essay doesn't begin with a statement, but with a question. In a real essay, you don't take a position and defend it. You notice a door that's ajar, and you open it and walk in to see what's inside.

If all you want to do is figure things out, why do you need to write anything, though? Why not just sit and think? Well, there precisely is Montaigne's great discovery. Expressing ideas helps to form them. Indeed, helps is far too weak a word. Most of what ends up in my essays I only thought of when I sat down to write them. That's why I write them.

Comment by habryka4 on What additional features would you like on LessWrong? · 2019-12-04T22:43:02.272Z · score: 4 (3 votes) · LW · GW

Yep, feel free to ping us on Intercom and we will gladly change your username. 

Comment by habryka4 on [AN #76]: How dataset size affects robustness, and benchmarking safe exploration by measuring constraint violations · 2019-12-04T21:13:25.045Z · score: 4 (3 votes) · LW · GW

Natural Language Processing.

Not to be confused with Neuro-Linguistic Programming.

Comment by habryka4 on Open & Welcome Thread - December 2019 · 2019-12-04T18:52:07.877Z · score: 2 (1 votes) · LW · GW

Variable width is the web's default, so it's definitely not harder to do. Many very old websites (10+ years old) use variable width, from before anyone started thinking about typography on the web, so in terms of web technologies, that's definitely the default.

Comment by habryka4 on Open & Welcome Thread - December 2019 · 2019-12-04T04:57:18.488Z · score: 11 (3 votes) · LW · GW

I have a bunch of thoughts on this, some quick ones:

The reading experience on wikis is very heavily optimized for skimming. That drives some of the following design choices:

  • Longer line widths create a more distinct right outline for the text, which makes it easier to orient while quickly scrolling past things
  • Since most text is never going to be read, a lot of text is smaller, and the line lengths are longer to vertically compress the text, making it overall faster to navigate around different sections of the page
  • The content aims to be canonical and comprehensive; both of these create a much more concrete distinction between "the article" and "the discussion", since you need to apply the canonicity and comprehensiveness criteria only to the article and not to the discussion
  • Because of the focus on comprehensiveness, you generally want to impose structure not only on every single article, but on the whole knowledge graph. But in order to do that, you need to actually bring the knowledge graph into a format you can constrain, which you can only do for internal links, not for external links.

Comment by habryka4 on Two types of mathematician · 2019-12-02T19:44:57.070Z · score: 4 (2 votes) · LW · GW

I've referenced the Grothendieck quote in this post many times since it came out, and the quote itself seems important enough to be worth curating.

I've also referenced this post a few times in a broader context around different mathematical practices, though definitely much less frequently than I've referenced the Grothendieck quote. 

Comment by habryka4 on Inadequate Equilibria vs. Governance of the Commons · 2019-12-02T18:45:37.438Z · score: 2 (1 votes) · LW · GW

I mostly just endorse everything in my curation notice, and have referenced this post a few times in the last 1.5 years. 

Comment by habryka4 on Argument, intuition, and recursion · 2019-12-02T06:07:05.027Z · score: 3 (2 votes) · LW · GW

I've gotten a lot of value out of posts in the reference class of "attempts at somewhat complete models of what good reasoning looks like", and this post is one of them.

I don't think I fully agree with the model outlined here, but I think the post did succeed at adding it to my toolbox.

Comment by habryka4 on Everything I ever needed to know, I learned from World of Warcraft: Goodhart’s law · 2019-12-02T06:01:29.730Z · score: 5 (2 votes) · LW · GW

I've referenced this post a few times as a very good and concrete example of Goodhart's law, one that illustrated the costs while also showing the actual (usually good) reasons why people put metrics in place in the first place.

Comment by habryka4 on The funnel of human experience · 2019-12-02T05:51:44.359Z · score: 5 (3 votes) · LW · GW

I still endorse everything in my curation notice, and also think that the question of what fraction of human experience is happening right now is an important point to be calibrated on in order to have good intuitions about scientific progress and the general rate of change for the modern world. 

Comment by habryka4 on Naming the Nameless · 2019-12-02T05:48:48.947Z · score: 4 (2 votes) · LW · GW

I... don't know exactly why I think this post is important, but I think it's really quite important, and I would really like to see it clarified via the review process. 

I think this was one of the posts that changed my mind quite a bit over the last year, mostly by changing my relationship to legibility. While the post doesn't directly mention legibility, I think it's highly related.

Comment by habryka4 on Toolbox-thinking and Law-thinking · 2019-12-02T05:44:07.256Z · score: 4 (2 votes) · LW · GW

I've used this analogy quite a few times, and have also gotten a good amount of mileage out of categorizing my own mental processes according to this classification.

Comment by habryka4 on Towards a New Impact Measure · 2019-12-02T04:24:16.045Z · score: 15 (5 votes) · LW · GW

This post, and TurnTrout's work in general, have taken the impact measure approach far beyond what I thought was possible, which turned out to be both a valuable lesson for me in being less confident about my opinions around AI Alignment, and valuable in that it helped me clarify and think much better about a significant fraction of the AI Alignment problem. 

I've since discussed TurnTrout's approach to impact measures with many people. 

Comment by habryka4 on Understanding is translation · 2019-12-02T04:22:36.714Z · score: 7 (3 votes) · LW · GW

This post struck me as exceptional because it conveyed a pretty core concept in very few words, and it just kind of ended up sticking with me. It's not like I hadn't previously thought of the search for isomorphisms as an important part of understanding, but this post allowed me to make that more explicit, and provided a good common reference for it.

Comment by habryka4 on On the Chatham House Rule · 2019-12-02T04:20:57.723Z · score: 11 (4 votes) · LW · GW

A significant fraction of events I go to operate under the Chatham House Rule. A significant fraction of the organizers of those events don't seem to understand the full consequences of the rule, and I've referenced this post multiple times when talking to people about it.

Comment by habryka4 on What makes people intellectually active? · 2019-12-02T04:08:49.305Z · score: 2 (1 votes) · LW · GW

The answers to this question were really great, and I've referenced many of them since this post was written. I've found them quite useful in my personal reflections on how I can sustain being intellectually generative and active myself, and on how to build an organization in which other people are able to do so.

Comment by habryka4 on How did academia ensure papers were correct in the early 20th Century? · 2019-12-02T04:02:11.296Z · score: 5 (3 votes) · LW · GW

It's a really important question, and the answers actually helped me answer it (though they were far from comprehensive). 

Comment by habryka4 on The Bat and Ball Problem Revisited · 2019-12-02T03:50:15.023Z · score: 4 (2 votes) · LW · GW

I've referenced the cognitive reflection test as one of those litmus tests of rationality, where I feel like any decent practice of rationality should get people to reliably answer the questions on that test. I found this to actually be the best coverage of the whole test, and its analysis of people's reasoning to be a significant step up from what I've seen in other coverage of the test.

Comment by habryka4 on Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical · 2019-12-02T03:45:49.779Z · score: 4 (2 votes) · LW · GW

I think the question of "how good are governments and large institutions in general at aggregating information and handling dangerous technologies?" is a really key question for dealing with potential catastrophic risks from technology. In trying to answer that question, I've referenced this post a few times.

Comment by habryka4 on Research: Rescuers during the Holocaust · 2019-12-02T03:39:31.682Z · score: 3 (2 votes) · LW · GW

I've come back to this post a few times, mostly as a concrete example of an approach to understanding human minds that consists of pointing to large effect sizes in human behavior that help you a lot in putting bounds on hypothesis space. 

Comment by habryka4 on Inconvenience Is Qualitatively Bad · 2019-12-02T02:52:12.568Z · score: 4 (2 votes) · LW · GW

I think the type of person who tries to systematize their thinking a lot tends to also be particularly susceptible to arguments of the type "why don't you just do X?". I think these arguments are very widespread and have large effects on people, and I've used this post as a reference a few times to counteract those arguments in the many cases where they were wrongly applied. 

Comment by habryka4 on eigen's Shortform · 2019-12-02T01:20:57.259Z · score: 4 (2 votes) · LW · GW

Hmm, I like this idea. I've been thinking of ways to curate and synthesize comment sections for a while, and the original sequences might be a good place to put that in action. 

Comment by habryka4 on Robustness to Scale · 2019-12-02T01:05:41.706Z · score: 2 (1 votes) · LW · GW

I've used the concepts in this post a lot when discussing various things related to AI Alignment. I think asking "how robust is this AI design to various ways of scaling up?" has become one of my go-to hammers for evaluating a lot of AI Alignment proposals, and I've gotten a lot of mileage out of that. 

Comment by habryka4 on Circling · 2019-12-02T00:58:48.419Z · score: 5 (2 votes) · LW · GW

While I've always had many hesitations around circling as a packaged deal, I have come to believe that as a practice it ended up addressing many things that I care about, and in many important settings I would now encourage people to engage in circling-related practices. As such, I think it has actually played a pretty key role in developing my current models of group dynamics, and in particular the effects of various social relationships on the formation of beliefs.

This post is, I think, the best written explanation of circling we have, so it's quite valuable to review, and it has a good chance of deserving a place in our collection of the best posts of 2018.

Comment by habryka4 on eigen's Shortform · 2019-12-01T19:14:00.747Z · score: 8 (3 votes) · LW · GW

I've reread them about 3-4 times. Two of those times were with comments (the first time and the most recent time). I found reading the comments quite valuable.

Comment by habryka4 on Useful Does Not Mean Secure · 2019-11-30T10:24:43.985Z · score: 12 (6 votes) · LW · GW

You can also write markdown comments on LW; just enable the "use markdown editor" option in your user settings.

Comment by habryka4 on Caring less · 2019-11-29T23:08:00.685Z · score: 2 (1 votes) · LW · GW

I think this post summarizes a really key phenomenon when thinking about how collective reasoning works, and the discussion around it provides some good explanations. 

I had explained this observation many times before this post even came out, but with this post I finally had a pretty concrete and concise reference, and I have used it a few times for that purpose.

Comment by habryka4 on Is Clickbait Destroying Our General Intelligence? · 2019-11-29T22:10:48.635Z · score: 12 (3 votes) · LW · GW

I kind of have conflicting feelings about this post, but still think it should at least be nominated for the 2018 review. 

I think the point about memetically transmitted ideas only really being able to perform a shallow, though maybe still crucial, part of cognition is pretty important, and might deserve a nomination on its own.

The overall point about clickbait and the internet also feels really important to me, but I feel really conflicted because it kind of pattern-matches to a narrative that I think performs badly from a reference-class forecasting perspective. I do think the Goodhart's law points are pretty clear, but I really wish we could do some more systematic study of whether the things Eliezer is pointing to are real.

So overall, I really want this post to be reviewed, at least so that we can maybe collectively put some effort into finding more empirical support for Eliezer's claims in this post, and see whether they hold up. If they do, then I think that is of quite significant importance.

Comment by habryka4 on Unrolling social metacognition: Three levels of meta are not enough. · 2019-11-29T22:01:17.341Z · score: 6 (3 votes) · LW · GW

I've thought a lot about this post in the last year, and have also referenced it a few times in the broader context of talking to people about ideas around common knowledge. I think it, together with Ben's post on common knowledge, communicates the core concept quite well.

Comment by habryka4 on A voting theory primer for rationalists · 2019-11-29T21:45:20.753Z · score: 2 (1 votes) · LW · GW

I think voting theory is pretty useful, and this is the best introduction I know of. I've linked it to a bunch of people over the last two years who were interested in getting a basic overview of voting theory, and it seemed to be broadly well-received.

Comment by habryka4 on Metaphilosophical competence can't be disentangled from alignment · 2019-11-29T21:44:08.104Z · score: 7 (3 votes) · LW · GW

I think there is a key question in AI Alignment that Wei Dai has also talked about, that is something like "is it even safe to scale up a human?", and I think this post is one of the best on that topic. 

Comment by habryka4 on The Loudest Alarm Is Probably False · 2019-11-29T21:24:11.323Z · score: 2 (1 votes) · LW · GW

I mostly second Vaniver's nomination. I've also found this post really useful when thinking about LessWrong as an organization, and how my own preferences might often be actively pushing things in the wrong direction. 

Comment by habryka4 on On the Loss and Preservation of Knowledge · 2019-11-29T21:11:11.751Z · score: 6 (3 votes) · LW · GW

Of Samo's posts, I think this one is the one that stuck with me the most, probably because of my strong interest in intellectual institutions and how to build them.

I've more broadly found Samo's worldview helpful in many situations, and found this post to be one of the best introductions to it. 

Comment by habryka4 on Toward a New Technical Explanation of Technical Explanation · 2019-11-29T20:54:42.184Z · score: 12 (3 votes) · LW · GW

This post actually got me to understand how logical induction works, and also caused me to eventually give up on bayesianism as the foundation of epistemology in embedded contexts (together with Abram's other post on the untrollable mathematician). 

Comment by habryka4 on An Untrollable Mathematician Illustrated · 2019-11-29T20:53:02.046Z · score: 12 (3 votes) · LW · GW

I think this post, together with Abram's other post "Towards a new technical explanation" actually convinced me that a bayesian approach to epistemology can't work in an embedded context, which was a really big shift for me. 

Comment by habryka4 on Extended Quote on the Institution of Academia · 2019-11-29T20:47:06.289Z · score: 2 (1 votes) · LW · GW

This post gave me a really concrete model of academia and its role in society, one that I've extensively built on since then for a lot of my thinking about LessWrong, and also about the broader problem of how to distill and combine knowledge for large groups of people.