I've also spent 30 minutes looking for anything in this space and didn't find anything. The closest that I could find was Neuroeconomics.
Promoted to curated: It's been a while since this post has come out, but I've been thinking of the "credit assignment" abstraction a lot since then, and found it quite useful. I also really like the way the post made me curious about a lot of different aspects of the world, and I liked the way it invited me to boggle at the world together with you.
I also really appreciated your long responses to questions in the comments, which clarified a lot of things for me.
One thing comes to mind that might improve the post, though I think this is mostly a matter of competing audiences:
I think some sections of the post end up referencing a lot of really high-level concepts, in a way that I think is valuable as a reference, but also in a way that might cause a lot of people to bounce off of it (even people with a pretty strong AI Alignment background). I can imagine a post that includes very short explanations of those concepts, or moves them into a context where they are more clearly marked as optional (since I think the post stands well without at least some of those high-level concepts).
nods Seems good. I agree that there are much more interesting things to discuss.
nods You did say the following:
I honestly don’t see how they could sensibly be aggregated into anything at all resembling a natural category
I interpreted that as saying "there is no resemblance between attending a CFAR workshop and reading the sequences", which seems to me to deny even natural commonalities like "they both involve reading/listening to largely overlapping concepts" and "their creators largely shared the same aims for the effects they were trying to produce in people".
I think there is a valuable and useful argument to be made here that, in the context of trying to analyze the impact of these interventions, you want to be careful to account for the important differences between reading a many-book-length set of explanations and going to an in-person workshop with in-person instructors, but that doesn't seem to me to be what you said in your original comment. You just said that there is no sensible way to put these things into the same category, which seems obviously wrong to me, since there clearly is a lot of shared structure to analyze between these interventions.
I mean, a lot of the CFAR curriculum is based on content in the sequences, the handbook covers a lot of the same declarative content, and they are setting out with highly related goals (with Eliezer helping with early curriculum development, though much less so in recent years). The beginning of R:A-Z even explicitly highlights how he thinks CFAR is filling in many of the gaps he left in the sequences, clearly implying that they are part of the same aim.
Sure, there are differences, but overall they are highly related and I think can meaningfully be judged to be in a natural category. Similar to how a textbook and a university class or workshop on the same subject are obviously related, even though they will differ on many relevant dimensions.
Note that all three of the linked papers are about "boundedly rational agents with perfectly rational principals" or about "equally boundedly rational agents and principals". I have been so far unable to find any papers that follow the described pattern of "boundedly rational principals and perfectly rational agents".
I am confused. If MWI is true, we are all already immortal, and every living mind is instantiated a very large number of times, probably literally forever (since entropy doesn't actually increase in the full multiverse, the apparent increase being just a result of statistical correlation, though if you buy the quantum immortality argument you no longer care about this).
Bayesian agents are logically omniscient, and I think a large fraction of deceptive practices rely on asymmetries in computation time between two agents with access to slightly different information (like generating a lie and then checking the consistency of that new statement against all of my previous statements).
My sense is also that two-player games between Bayesian agents are actually underspecified and give rise to all kinds of weird things due to the necessity for infinite regress (i.e. an agent modeling the other agent modeling themselves modeling the other agent, etc.), which doesn't actually reliably converge, though I am not confident. A lot of decision theory seems to do weird things with Bayesian agents.
So overall, I am not sure how well you can prove theorems in this space without having made a lot of progress in decision theory, and I expect a lot of our confusions in decision theory to be resolved by moving away from Bayesianism.
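To make the computation-time asymmetry described above concrete, here is a minimal toy sketch (my own illustration, not anything from the post or the literature; the world model and the consistency check are hypothetical simplifications): an honest agent just reads answers off its model of the world, while a deceptive agent has to check each new fabricated statement against everything it has already said, a cost that grows with the length of the conversation.

```python
# Toy sketch of the asymmetry: honest answers are cheap, consistent lies are not.
# Everything here (the world model, the consistency check) is a hypothetical simplification.

world = {"city": "Berlin", "job": "engineer", "age": "34"}

def honest_answer(question: str) -> str:
    # An honest agent just reads the answer off its model of the world: O(1) per question.
    return world[question]

def deceptive_answer(question: str, fabricated: dict, prior_claims: list) -> str:
    # A deceptive agent invents an answer, then must check it against every claim
    # it has already made, so each new statement costs O(len(prior_claims)).
    candidate = fabricated.get(question, world[question])
    for past_question, past_answer in prior_claims:
        if past_question == question and past_answer != candidate:
            raise ValueError("new statement contradicts an earlier one")
    prior_claims.append((question, candidate))
    return candidate

claims = []
print(honest_answer("city"))                                # "Berlin"
print(deceptive_answer("city", {"city": "Paris"}, claims))  # "Paris", checked against 0 prior claims
print(deceptive_answer("age", {}, claims))                  # "34", checked against 1 prior claim
```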
Yep, that's correct. We experimented with some other indicators, but this was the one that seemed least intrusive.
I am also interested in this, and would give around $50 for some good sources on this (this is not a commitment that I will pay whoever gives the best answer to this question, just that if an answer is good enough, I will send the person $50).
I mean, I agree that Coca-Cola engages in marketing practices that try to fabricate associations that are not particularly truth-oriented, but that's very different from the thing with Theranos.
I model Coca-Cola mostly as damaging for my health, and model its short-term positive performance effects as basically fully mediated via caffeine, but I still think it's providing me value above and beyond those benefits, and outweighing the costs in certain situations.
Theranos seems highly disanalogous, since I think almost no one who knew the actual extent of Theranos' capabilities, and had accurate beliefs about its technologies, would give money to them. I have pretty confident bounds on the effects of Coca-Cola, and still decide to sometimes give them my money, and would be highly surprised if there turns out to be a fact about Coke that its internal executives are aware of (even subconsciously) that would drastically change that assessment for me, and it doesn't seem like that's what you are arguing for.
Somewhat confused by the Coca-Cola example. I don't buy Coke very often, but it seems usually worth it to me when I do buy it (in small amounts, since I do think it tastes pretty good). Is the claim that they are not providing any value some kind of assumption about my coherent extrapolated volition?
Yeah, I agree with this. I've been more annoyed by performance as well lately, and we are pretty close to shipping a variety of performance improvements that I expect will make a significant difference here (and have a few more in the works afterwards, though I think it will be quite a while until we are competitive with GreaterWrong performance-wise, in large part due to just fundamentally different architectures).
Promoted to curated: I think this post captured some core ideas in predictions and modeling in a really clear way, and I particularly liked how it used a lot of examples and was just generally very concrete in how it explained things.
I really like this concept. It currently feels to me like a mixture between a fact post and an essay.
From the fact-post post:
You explicitly do not look for opinion, even expert opinion. You avoid news, and you're wary of think-tank white papers. You're looking for raw information. You are taking a sola scriptura approach, for better and for worse.
And then you start letting the data show you things.
You see things that are surprising or odd, and you note that.
You see facts that seem to be inconsistent with each other, and you look into the data sources and methodology until you clear up the mystery.
You orient towards the random, the unfamiliar, the things that are totally unfamiliar to your experience. One of the major exports of Germany is valves? When was the last time I even thought about valves? Why valves, what do you use valves in? OK, show me a list of all the different kinds of machine parts, by percent of total exports.
From Paul Graham's essay on essays:
Figure out what? You don't know yet. And so you can't begin with a thesis, because you don't have one, and may never have one. An essay doesn't begin with a statement, but with a question. In a real essay, you don't take a position and defend it. You notice a door that's ajar, and you open it and walk in to see what's inside.
If all you want to do is figure things out, why do you need to write anything, though? Why not just sit and think? Well, there precisely is Montaigne's great discovery. Expressing ideas helps to form them. Indeed, helps is far too weak a word. Most of what ends up in my essays I only thought of when I sat down to write them. That's why I write them.
Yep, feel free to ping us on Intercom and we will gladly change your username.
Natural Language Processing.
Not to be confused with Neuro-Linguistic Programming.
Variable width is the web's default, so it's definitely not harder to do. Many very old websites (10+ years old) used variable width before anyone started thinking about typography on the web, so in terms of web technologies it's clearly the default.
I have a bunch of thoughts on this, some quick ones:
The reading experience on wikis is very heavily optimized for skimming. This causes some of the following design choices:
- Longer line widths create a more distinct right outline of the text, which makes it easier to orient while quickly scrolling past things
- Since most text is never going to be read, a lot of the text is smaller, and the line lengths are longer to vertically compress the text, making it overall faster to navigate around different sections of the page
- The content aims to be canonical and comprehensive; both of these create a much more concrete distinction between "the article" and "the discussion", since you need to apply the canonicity and comprehensiveness criteria only to the article and not the discussion
- Because of the focus on comprehensiveness, you generally want to impose structure not only on every single article, but on the whole knowledge graph. But in order to do that, you need to actually bring the knowledge graph into a format you can constrain, which you can only do for internal links, and not external links.
I've referenced the Grothendieck quote in this post many times since it came out, and the quote itself seems important enough to be worth curating.
I've also referenced this post a few times in a broader context around different mathematical practices, though definitely much less frequently than I've referenced the Grothendieck quote.
I mostly just endorse everything in my curation notice, and have referenced this post a few times in the last 1.5 years.
I've gotten a lot of value out of posts in the reference class of "attempts at somewhat complete models of what good reasoning looks like", and this one has been one of them.
I don't think I fully agree with the model outlined here, but I think the post did succeed at adding it to my toolbox.
I've referenced this post a few times as a very good and concrete example of Goodhart's law, one that felt like it both illustrated the costs and showed the actual (usually good) reasons why people put metrics in place in the first place.
I still endorse everything in my curation notice, and also think that the question of what fraction of human experience is happening right now is an important point to be calibrated on in order to have good intuitions about scientific progress and the general rate of change for the modern world.
I... don't know exactly why I think this post is important, but I think it's really quite important, and I would really like to see it clarified via the review process.
I think this post was one of the posts that changed my mind over the last year quite a bit, mostly by changing my relationship to legibility. While this post doesn't directly mention it, I think it's highly related.
I've used this analogy quite a few times, and also got a good amount of mileage out of categorizing my own mental processes according to this classification.
This post, and TurnTrout's work in general, have taken the impact measure approach far beyond what I thought was possible, which turned out to be both a valuable lesson for me in being less confident about my opinions around AI Alignment, and valuable in that it helped me clarify and think much better about a significant fraction of the AI Alignment problem.
I've since discussed TurnTrout's approach to impact measures with many people.
This post struck me as exceptional because it conveyed a pretty core concept in very few words, and it just kind of ended up sticking with me. It's not like I hadn't previously thought of the search for isomorphisms as an important part of understanding, but this post allowed me to make that more explicit, and provided a good common reference to it.
A significant fraction of events I go to are operated under Chatham House rules. A significant fraction of the organizers of those events don't seem to understand the full consequences of those rules, and I've referenced this post multiple times when talking to people about those rules.
The answers to this question were really great, and I've referenced many of them since the time this post was written. I've found them quite useful in my personal reflections on how I can sustain being intellectually generative and active myself, and on how to build an organization in which other people are able to do so.
It's a really important question, and the answers actually helped me answer it (though they were far from comprehensive).
I've referenced the cognitive reflection test as one of those litmus tests of rationality, where I feel like any decent practice of rationality should get people to reliably answer the questions on that test. I found this to actually be the best coverage of the whole test, and its analysis of people's reasoning to be a significant step up from what I've seen in other coverage of the test.
I think the question of "how good are governments and large institutions in general at aggregating information and handling dangerous technologies?" is a really key question for dealing with potential catastrophic risks from technology. In trying to answer that question, I've referenced this post a few times.
I've come back to this post a few times, mostly as a concrete example of an approach to understanding human minds that consists of pointing to large effect sizes in human behavior that help you a lot in putting bounds on hypothesis space.
I think the type of person who tries to systematize their thinking a lot tends to also be particularly susceptible to arguments of the type "why don't you just do X?". I think these arguments are very widespread and have large effects on people, and I've used this post as a reference a few times to counteract those arguments in the many cases where they were wrongly applied.
Hmm, I like this idea. I've been thinking of ways to curate and synthesize comment sections for a while, and the original sequences might be a good place to put that in action.
I've used the concepts in this post a lot when discussing various things related to AI Alignment. I think asking "how robust is this AI design to various ways of scaling up?" has become one of my go-to hammers for evaluating a lot of AI Alignment proposals, and I've gotten a lot of mileage out of that.
While I've always had many hesitations around circling as a packaged deal, I have come to believe that as a practice it ended up addressing many things that I care about, and in many important settings I would now encourage people to engage in circling-related practices. As such, I think it has actually played a pretty key role in developing my current models of group dynamics, and in-particular the effects of various social relationships on the formation of beliefs.
This post is I think the best written explanation of circling we have, so I think it's quite valuable to review, and has a good chance of deserving a place in our collection of best posts of 2018.
I've reread them about 3-4 times. Two of those times were with comments (the first time and the most recent time). I found reading the comments quite valuable.
You can also write markdown comments on LW, just enable the "use markdown editor" option in your user settings.
I think this post summarizes a really key phenomenon when thinking about how collective reasoning works, and the discussion around it provides some good explanations.
I've explained this observation many times before this post even came out, but with this post I finally had a pretty concrete and concise reference, and have used it a few times for that purpose.
I kind of have conflicting feelings about this post, but still think it should at least be nominated for the 2018 review.
I think the point about memetically transmitted ideas only really being able to perform a shallow, though maybe still crucial, part of cognition is pretty important, and might on its own be enough to deserve a nomination.
The overall point about clickbait and the internet also feels really important to me, but I feel really conflicted because it kind of pattern-matches to a narrative that I think performs badly from a reference-class forecasting perspective. I do think the Goodhart's law points are pretty clear, but I really wish we could do some more systematic study of whether the things that Eliezer is pointing to are real.
So overall, I think I really want this to be reviewed, at least so that we can maybe collectively put some effort into finding more empirical evidence for Eliezer's claims in this post, and see whether they hold up. If they do, then I think that is of quite significant importance.
I've thought a lot about this post in the last year, and also referenced it a few times in the broader context of talking to people about ideas around common knowledge. I think it, together with Ben's post on common knowledge, communicates the core concept quite well.
I think voting theory is pretty useful, and this is the best introduction I know of. I've linked it to a bunch of people in the last two years who were interested in getting a basic overview of voting theory, and it seemed to broadly be well-received.
I think there is a key question in AI Alignment that Wei Dai has also talked about, that is something like "is it even safe to scale up a human?", and I think this post is one of the best on that topic.
I mostly second Vaniver's nomination. I've also found this post really useful when thinking about LessWrong as an organization, and how my own preferences might often be actively pushing things in the wrong direction.
Of Samo's posts, I think this one is the one that stuck with me the most, probably because of my strong interest in intellectual institutions and how to build them.
I've more broadly found Samo's worldview helpful in many situations, and found this post to be one of the best introductions to it.
This post actually got me to understand how logical induction works, and also caused me to eventually give up on Bayesianism as the foundation of epistemology in embedded contexts (together with Abram's other post on the untrollable mathematician).
I think this post, together with Abram's other post "Towards a New Technical Explanation", actually convinced me that a Bayesian approach to epistemology can't work in an embedded context, which was a really big shift for me.
This post gave me a really concrete model of academia and its role in society, in a way that I've extensively built on since then, both for a lot of my thinking on LessWrong and for the broader problem of how to distill and combine knowledge for large groups of people.