I've been doing Quantified Intuitions' Estimation Game every month. I really enjoy it. A big thing I've learned from it is the instinct to think in terms of orders of magnitude.
Well, not necessarily orders of magnitude, but something similar. For example, a friend just asked me about building a little web app calculator to provide better handicaps in golf scrambles. In the past I'd get a little overwhelmed thinking about how much time such a project would take and default to saying no. But this time I noticed myself approaching it differently.
Will it take minutes? Eh, probably not. Hours? Possibly, but seems a little optimistic. Days? Yeah, seems about right. Weeks? Eh, possibly, but even with the planning fallacy, I'd be surprised. Months? No, it won't take that long. Years? No way.
With this approach I can figure out the right ballpark very quickly. It's helpful.
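To make "something similar" concrete, here's a quick toy calculation of my own (nothing rigorous): the gaps between those buckets are each roughly a factor of 4 to 60, i.e. somewhere between half an order of magnitude and almost two.

```python
# Ratios between adjacent time buckets, to show why stepping through
# "minutes? hours? days? ..." behaves a lot like stepping through
# orders of magnitude.
import math

buckets_in_minutes = {
    "minute": 1,
    "hour": 60,
    "day": 60 * 24,
    "week": 60 * 24 * 7,
    "month": 60 * 24 * 30,
    "year": 60 * 24 * 365,
}

names = list(buckets_in_minutes)
for prev, curr in zip(names, names[1:]):
    ratio = buckets_in_minutes[curr] / buckets_in_minutes[prev]
    print(f"{prev} -> {curr}: x{ratio:.1f} ({math.log10(ratio):.2f} orders of magnitude)")
```

The buckets aren't literally powers of ten, but stepping through them scans the timescale in roughly log-sized jumps, which I think is why it feels so similar.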
Many years after having read it, I'm finding that the "Perils of Interacting With Acquaintances" section in The Great Perils of Social Interaction has really stuck with me. It is probably one of the more useful pieces of practical advice I've come across in my life. I think it's illustrated really well in this barber story:
But that assumes that you can only be normal around someone you know well, which is not true. I started using a new barber last year, and I was pleasantly surprised when instead of making small talk or asking me questions about my life, he just started talking to me like I was his friend or involving me in his conversations with the other barber. By doing so, he spared both of us the massive inauthenticity of a typical barber-customer relationship and I actually enjoy going there now.
I make it a point to "be normal" around people and it's become something of a habit. One I'm glad that I've formed.
I get the sense that autism is particularly unclear, but I haven't looked closely enough at other conditions to be confident in that.
Something I've always wondered about is what I'll call sub-threshold successes. Some examples:
- A stand up comedian is performing. Their jokes are funny enough to make you smile, but not funny enough to pass the threshold of getting you to laugh. The result is that the comedian bombs.
- Posts or comments on an internet forum are appreciated but not appreciated enough to get people to upvote.
- A restaurant or product is good, but not good enough to motivate people to leave ratings or write reviews.
It feels to me like there is an inefficiency occurring in these sorts of situations. To get an accurate view of how successful something is you'd want to incorporate all of the data, not just data that passes whatever (positive or negative) threshold is in play. But I think the inefficiencies are usually not easy to improve on.
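To illustrate the inefficiency with the comedian example, here's a toy simulation (all numbers invented): each audience member only laughs when a joke clears some threshold, so most of the underlying signal about quality never becomes observable.

```python
# Toy model of sub-threshold feedback: only reactions above a threshold
# produce an observable signal (a laugh), so most of the data is lost.
import random

random.seed(0)

def observed_laughs(mean_funniness, n_jokes=30, laugh_threshold=8.0):
    """Count jokes whose perceived funniness (0-10 scale) clears the laugh threshold."""
    scores = [random.gauss(mean_funniness, 1.0) for _ in range(n_jokes)]
    return sum(score >= laugh_threshold for score in scores)

print("decent comedian (mean 6.5):  ", observed_laughs(6.5), "laughs")
print("mediocre comedian (mean 5.0):", observed_laughs(5.0), "laughs")
# In most runs both counts are small, even though one comedian's material
# is clearly better; the threshold throws most of the signal away.
```

Aggregating the full scores (the smiles, the mild appreciation) would give a much clearer picture than counting only the reactions that clear the threshold.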
In A Sketch of Good Communication -- or really, in the Share Models, Not Beliefs sequence, which A Sketch of Good Communication is part of -- the author proposes that, hm, I'm not sure exactly how to phrase it.
I think the author (Ben Pace) is proposing that in some contexts, it is good to spend a lot of effort building up and improving your models of things. And that in those contexts, if you just adopt the belief of others without improving your model, well, that won't be good.
I think the big thing here is research. In the context of research, Ben proposes that it's important to build up and improve your model. And for you to share with the community what beliefs your model outputs.
This seems correct to me. But I'm pretty sure that it isn't true in other contexts.
For example, I wanted to buy a new thermometer recently. Infrared ones are convenient, so I wanted to know if they're comparably accurate to oral ones. I googled it and Cleveland Clinic says they are. Boom. Good enough for me. In this context, I don't think it was worth spending the effort updating my model of thermometer accuracy. In this context, I just need the output.
I think it'd be interesting to hear people's thoughts on when it is and isn't important to improve your models. In what contexts?
I think it'd also be interesting to hear more about why exactly it is harmful, in the context of intellectual progress, to stray from building and improving your models. There's probably a lot to say. I think I remember the book Superforecasting talking about this, but I forget.
Hm. On the one hand, I agree that there are distinct things at play here and share the instinct that it'd be appropriate to have different words for these different things. But on the other hand, I'm not sure if the different words should fall under the umbrella of solitude, like "romantic solitude" and "seeing human faces solitude".
I dunno, maybe it should. After all, it seems that in different conceptualizations of solitude, it's about being isolated from something (others' minds, others' physical presence).
Ultimately, I'm trusting Newport here. I think highly of him and know that he's read a lot of relevant literature. At the same time, I still wouldn't argue too confidently that his preferred definition is the most useful one.
That makes sense. I didn't mean to imply that such an extreme degree of isolation is a net positive. I don't think it is.
That makes sense. Although I think the larger point I was making still stands: that in reading the book you're primarily consuming someone else's thoughts, just like you would be if the author sat there on the bench lecturing you (it'd be different if it were more of a two-way conversation; I should have clarified that in the post).
I suppose "primarily" isn't true for all readers, for all books. Perhaps some readers go slowly enough where they actually spend more of their time contemplating than they do reading, but I get the sense that that is pretty rare.
Cool! I have a feeling you'd like a lot of Cal Newport's work like Digital Minimalism and Deep Work.
When I'm walking around or riding the train, I want to be able to hear what's going on around me.
That makes sense about walking around, but why do you want to hear what's going on around you when you're riding the train?
Yeah, that all makes sense. I think solitude probably exists along a spectrum, where in listening to music maybe you have 8/10 solitude instead of 10/10 but in watching a TV show you only get 2/10. The relevant question is probably "to what extent are the outputs of other minds influencing your thoughts".
Actually, now that I think about it, I wonder why we're focusing on the outputs of other minds. What about other things that influence your thoughts? Like, I don't know, bumble bees flying around you? I'm afraid of bumble bees so I know I'd have trouble focusing on my own thoughts in that scenario.
That said, I'm sure that outputs of other minds are probably a large majority of what is intrusive and prevents you from focusing on your own thoughts. But it still seems to me like the thing we actually care about is being able to focus on your own thoughts, not just reducing your exposure to the outputs of other minds.
Hm. I was actually assuming in this post that the podcasts in question were actually "Effective Information" as opposed to "Trivia" or "Mental Masturbation". The issue is that even if they are "Effective Information", you also need to have solitude in your "diet", and the benefit of additional "Effective Information" probably isn't worth the cost of less solitude.
But I'm also realizing now that much of the time podcasts aren't actually "Effective Information" and are instead something like "Trivia" or "Mental Masturbation". And I see that as a separate but also relevant problem. And I think that carbs are probably a good analogy for that too. Or maybe something like refined sugar. It's a quick hedonic hit and probably ok to have in limited doses, but you really don't want to have too much of it in your diet.
The claim is that it's helpful, not that it's necessary. I certainly agree that good ideas can come from low-solitude things like conversations.
But I think solitude also has lingering benefits. Like, maybe experiencing some solitude puts you in position to have productive conversations. On the other hand, maybe if you spend weeks in solitude-debt you'll be in a poor position to have productive conversations. Something like that.
I would buy various forms of merch, including clothing. I feel very fond of LessWrong and would find it cool to wear a shirt or something with that brand.
No. DOGE didn't cross my mind. It was most directly inspired by the experience of realizing that I can factor in the journey as well as the destination with my startup.
I think it can generate negative externalities at times. However, I think that in terms of expected value it's usually positive.
In public policy, experimenting is valuable. In particular, it provides a positive externality.
Let's say that a city tests out a somewhat quirky idea like paying NIMBYs to shut up about new housing. If that policy works well, other cities benefit because now they can use and benefit from that approach.
So then, shouldn't there be some sort of subsidy for cities that test out new policy ideas? Isn't it generally a good thing to subsidize things that provide positive externalities?
I'm sure there is a lot to consider. I'm not enough of a public policy person to know what the considerations are though or how to weigh them.
Pet peeve: when places close before their stated close time. For example, I was just at the library. Their signs say that they close at 6pm. However, they kick people out at 5:45pm. This caught me off guard and caused me to break my focus at a bad time.
The reason that places do this, I assume, is that employees need to leave when their shift ends. In this case with the library, it probably takes 15 minutes or so to get everyone to leave, so they spend the last 15 minutes of their shift shooing people out. But why not make the official closing time 5:45pm while continuing to end the employees' shifts at 6:00pm?
I also run into this with restaurants. With restaurants, it's a little more complicated because there are usually two different closing times that are relevant to patrons: when the kitchen closes and when doors close. Unless food is served ~immediately like at Chipotle or something, it wouldn't make sense to make these two times equivalent. If it takes 10 minutes to cook a meal, doors close at 9:00pm, and someone orders a meal at 8:59pm, well, you won't be able to serve the meal before they need to be out.
But there's an easy solution to this: just list each of the two close times. It seems like that would make everyone happy.
I wonder how much of that is actually based on science, and how much is just superstition / scams.
In basketball there isn't any certification. Coaches/trainers usually are former players themselves who have had some amount of success, so that points towards them being competent to some extent. There's also the fact that if you don't feel like you're making progress with a coach you can fire them and hire a new one. But I think there is also a reasonably sized risk of the coach lacking competence and certain players sticking with them anyway, for a variety of reasons.
I'm sure that similar things are true in other fields, including athletics but also in fields like chess where there isn't a degree you could get. In fields with certifications and degrees it probably happens less often, but I know I've dealt with my fair share of incompetent MDs and PhDs.
So ultimately, I agree with the sentiment that finding competent coaches might involve some friction, but despite that, it still feels to me like a very tractable problem. Relatedly, I'm seeing now that there has been some activity on the topic of coaching in the EA community.
What is specific, from this perspective, for AI alignment researchers? Maybe the feeling of great responsibility, higher chance of burnout and nightmares?
I don't expect that the needs of alignment researchers are too unique when compared to the needs of other intellectuals. I mention alignment researchers because I think they're a prototypical example of people having large, positive impacts on the world, as opposed to intellectuals who study string theory or something.
I was just watching this Andrew Huberman video titled "Train to Gain Energy & Avoid Brain Fog". The interviewee was talking about track athletes and stuff their coaches would have them do.
It made me think back to Anders Ericsson's book Peak: Secrets from the New Science of Expertise. The book is popular for discussing the importance of deliberate practice, but another big takeaway from the book is the importance of receiving coaching. I think that takeaway gets overlooked. Top performers in fields like chess, music and athletics almost universally receive coaching.
And at the highest levels the performers will have a team of coaches. LeBron James is famous for spending roughly $1.5 million a year on his body.
And he’s like, “Well, he’s replicated the gym that whatever team — whether it was Miami or Cleveland — he’s replicated all the equipment they have in the team’s gym in his house. He has two trainers. Everywhere he goes, he has a trainer with him.” I’m paraphrasing what he told me, so I might not be getting all these facts right. He’s got chefs. He has all the science of how to sleep. All these different things. Masseuses. Everything he does in his life is constructed to have him play basketball and to stay on the court and to be as healthy as possible and to absorb punishment when he goes into the basket and he gets crushed by people.
This makes me think about AI safety. I feel like the top alignment researchers -- and ideally a majority of competent alignment researchers -- should have such coaching and resources available to them.
I'm not exactly sure what form this would take. Academic/technical coaches? Writing coach? Performance psychologists? A sleep specialist? Nutritionist? Meditation coach?
All of this costs money of course. I'm not arguing that this is the most efficient place to allocate our limited resources. I don't have enough of an understanding of what the other options are to make such an argument.
But I will say that providing such resources to alignment researchers seems like it should pretty meaningfully improve their productivity. And if so, then we are in fact funding constrained. I recall (earlier?) conversations about funding not being a constraint; rather, that the real constraint is a lack of good places to spend such money. If coaching meaningfully boosts researcher productivity, that looks like a good place to spend it.
Also relevant is that this is perhaps an easier sell to prospective donors than something more wacky. Like, it seems like a safe bet to have a solid impact, and there's a precedent for providing expert performers with such coaching, so maybe that sort of thing is appealing to prospective donors.
Finally, I recall hearing at some point that in a field like physics, the very top researchers -- people like Einstein -- have a very disproportionate impact. If so, I'd think that it's at least pretty plausible that something similar is true in the field of AI alignment. And if it is, then it'd probably make sense to spend time 1) figuring out who the Einsteins are and then 2) investing in them and doing what we can to maximize their impact.
Wow, I just watched this video where Feynman makes an incredible analogy between the rules of chess and the rules of our physical world.
You watch the pieces move and try to figure out the underlying rules. Maybe you come up with a rule about bishops needing to stay on the same color, and that rule lasts a while. But then you realize that there is a deeper rule that explains the rule you've held to be true: bishops can only move diagonally.
I'm butchering the analogy though and am going to stop talking now. Just go watch the video. It's poetic.
One thing to keep in mind is that, from what I understand, ovens are very imprecise so you gotta exercise some judgement when using them. For example, even if you set your oven to 400°F, it might only reach 325°F. Especially if you open the oven to check on the food (that lets out a lot of heat).
I've also heard that when baking on sheet pans, you can get very different results based on how well seasoned your sheet pan is. That shouldn't affect this dish though since the intent is for the top to be the crispy part and that happens via convection rather than conduction. But maybe how high or low you place the baking dish in your oven will affect the crispiness.
As another variation, I wonder how it'd come out if you used a sheet pan instead of a baking dish. I'd think that you'd get more crispy bits because of the increase in surface area of potato that is exposed to heat. Personally I'm a big fan of those crispy bits!
You'd probably need to use multiple sheet pans, but that doesn't seem like much of an inconvenience. You can also vary the crispiness by varying the amount of exposed surface area. Like, even if you use a sheet pan you can still kinda stack the potatoes on top of one another in order to reduce the exposed surface area.
I have not seen that post. Thank you for pointing me to it! I'm not sure when I'll get to it but I added it to my todo list to read and potentially discuss further here.
Scott's take on the relative futility of resolving high-level generators of disagreement (which seems to be beyond Level 7? Not sure) within reasonable timeframes is kind of depressing.
Very interesting! This is actually the topic that I really wanted to get to. I haven't been able to figure out a good way to get a conversation or blog post started on that topic though, and my attempts to do so led me to writing this (tangential) post.
I could see that happening, but in general, no, I wouldn't expect podcast hosts to already be aware of a substantial subset of arguments from the other side.
My impression is that podcasters do some prep but in general aren't spending many days let alone multiple weeks or months of prep. When you interview a wide variety of people and discuss a wide variety of topics, as many podcasters including the ones I mentioned do, I think that means that hosts will generally not be aware of a substantial subset of arguments from the other side.
For the sake of argument, I'll accept your points about memes, genes, and technology being domains where growth is usually exponential. But even if those points are true, I think we still need an argument that growth is almost always exponential across all/most domains.
The central claim that "almost all growth is exponential growth" is an interesting one. However, I am not really seeing that this post makes an argument for it. It feels more like it is just stating it as a claim.
I would expect an argument to be something like "here is some deep principle that says that growth is almost always in proportion to the thing's current size". And then to give a bunch of examples of this being the case in various domains. (I found the examples in the opening paragraph to be odd. Bike 200 miles a week or never? Huh?) I also think it'd be helpful to point out counterexamples and spend some time commenting on them.
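For concreteness, the kind of deep principle I'd want to see spelled out (this is my sketch, not the post's) is something like: whenever a quantity's growth rate is proportional to its current size, it grows exponentially,

$$\frac{dX}{dt} = kX \quad\Longrightarrow\quad X(t) = X(0)\,e^{kt},$$

and then the burden of the post would be to argue that most domains actually satisfy that proportionality, and to say something about the domains that don't.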
[This contains spoilers for the show The Sopranos.]
In the realm of epistemics, it is a sin to double-count evidence. From One Argument Against An Army:
I talked about a style of reasoning in which not a single contrary argument is allowed, with the result that every non-supporting observation has to be argued away. Here I suggest that when people encounter a contrary argument, they prevent themselves from downshifting their confidence by rehearsing already-known support.
Suppose the country of Freedonia is debating whether its neighbor, Sylvania, is responsible for a recent rash of meteor strikes on its cities. There are several pieces of evidence suggesting this: the meteors struck cities close to the Sylvanian border; there was unusual activity in the Sylvanian stock markets before the strikes; and the Sylvanian ambassador Trentino was heard muttering about “heavenly vengeance.”
Someone comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. They have trade with us of billions of dinars annually.” “Well,” you reply, “the meteors struck cities close to Sylvania, there was suspicious activity in their stock market, and their ambassador spoke of heavenly vengeance afterward.” Since these three arguments outweigh the first, you keep your belief that Sylvania is responsible—you believe rather than disbelieve, qualitatively. Clearly, the balance of evidence weighs against Sylvania.
Then another comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. Directing an asteroid strike is really hard. Sylvania doesn’t even have a space program.” You reply, “But the meteors struck cities close to Sylvania, and their investors knew it, and the ambassador came right out and admitted it!” Again, these three arguments outweigh the first (by three arguments against one argument), so you keep your belief that Sylvania is responsible.
Indeed, your convictions are strengthened. On two separate occasions now, you have evaluated the balance of evidence, and both times the balance was tilted against Sylvania by a ratio of 3 to 1.
You encounter further arguments by the pro-Sylvania traitors—again, and again, and a hundred times again—but each time the new argument is handily defeated by 3 to 1. And on every occasion, you feel yourself becoming more confident that Sylvania was indeed responsible, shifting your prior according to the felt balance of evidence.
The problem, of course, is that by rehearsing arguments you already knew, you are double-counting the evidence. This would be a grave sin even if you double-counted all the evidence. (Imagine a scientist who does an experiment with 50 subjects and fails to obtain statistically significant results, so the scientist counts all the data twice.)
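To make the double-counting failure concrete in odds form, here's a toy calculation (the likelihood ratios are invented by me, not taken from the original post). It compares counting each piece of evidence exactly once against "rehearsing" the three supporting arguments every time a contrary one shows up.

```python
# Toy odds bookkeeping for the Freedonia example (likelihood ratios invented).
# Correct practice multiplies each piece of evidence into the odds exactly once.
prior_odds = 1.0                  # 1:1 odds that Sylvania is responsible
support = [3.0, 2.0, 2.0]         # border strikes, stock activity, ambassador
contrary = [1 / 4, 1 / 5]         # trade ties, no space program

counted_once = prior_odds
for lr in support + contrary:
    counted_once *= lr            # each piece enters exactly once

rehearsed = prior_odds
for counter_lr in contrary:
    rehearsed *= counter_lr       # the new contrary argument...
    for lr in support:
        rehearsed *= lr           # ...answered by re-counting all the old support

print(f"each piece counted once:        {counted_once:.2f}")  # 0.60 -> leans against
print(f"support rehearsed at each step: {rehearsed:.2f}")     # 7.20 -> leans in favor
```

With these made-up numbers, the evidence counted once actually leans against Sylvania, while the rehearsed version appears to pile up in favor. Same evidence, opposite conclusion.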
I had the thought that something similar probably applies to morality as well. I'm thinking of Tony Soprano.
People say that Soprano is an asshole. Some say he is a sociopath. I'm not sure where I stand. But I finished watching The Sopranos recently and one thought that I frequently had when he'd do something harmful is that his hand was kinda forced.
For example, there was a character in the show named Adriana. Adriana became an informant to the FBI at some point. When Tony learned this, he had her killed.
Having someone killed is, in some sense, bad. But did Tony have a choice? If he didn't have her killed, she very well could have gotten Tony and the rest of the mob members sent to jail, or perhaps sentenced to the death penalty. When that is the calculus, we usually don't expect the person in Tony's shoes to prioritize the person in Adriana's shoes.
It makes me think back to when I played poker. Sometimes you end up in a bad spot. It looks like you just don't have any good options. Folding seems too nitty. Calling is gross. Raising feels dubious. No move you make will end well.
But alas, you do in fact have to make a decision. The goal is not necessarily to find a move that will be good in an absolute sense. It's to make the best move relative to the other moves you can make. To criticize someone who chooses the best move in a relative sense because it is a bad move in an absolute sense is unfair. You have to look at it from the point-of-decision.
Of course, you also want to look back at how you got yourself in the bad spot in the first place. Like if you made a bad decision on the flop that put you in a bad spot on the turn, you want to call out the play you made on the flop as bad and learn from it. But you don't want to "double count" the move you made on the flop once you've moved on to analyzing the next street.
Using this analogy, I think Tony Soprano made some incredibly bad preflop moves that set himself up for a shit show. And then he didn't do himself any favors on the flop. But once he was on later streets like the turn and river, I'm not sure how bad his decisions actually were. And more generally, I think it probably makes sense to avoid "double counting" the mistakes people made on earlier streets when they are faced with decisions on later streets.
I spent the day browsing the website of Josh W. Comeau yesterday. He writes educational content about web development. I am in awe.
For so many reasons. The quality of the writing. The clarity of the thinking. The mastery of the subject matter. The metaphors. The analogies. The quality and attention to detail of the website itself. Try zooming in to 300%. It still looks gorgeous.
One thing that he's got me thinking about is the place that sound effects and animation have on a website. Previously my opinion was that you should usually just leave 'em out. Focus on more important things. It's hard to implement them well; they usually just make the site feel tacky. They also add a decent amount of complexity.
But Josh does such a good job of utilizing sound effects and animation! Try clicking one of the icons on the top right. Or moving your cursor around those dots at the top of the home page. Or clicking the "heart" button at the end of the "Table of contents" section for one of his posts. It's so satisfying.
I'm realizing that my previous opinion was largely a cached thought. When I think about it now, I arrive at a different perspective. Right now I'm suspecting that both sound effects and animations should be treated as something to aspire towards. If it's a smaller-scale site and you don't have the skills or the resources to incorporate them, that's ok. But if it's a larger-scale site, I dunno, I feel like it's often worth prioritizing.
Anyway, the main thing I want to talk about is his usage of demos that you can explore. For example, check out his demos of how the flex-grow property works. It's one thing to read the docs. It's another to see a visualization. It's another to play with a demo. They say that a picture is worth a thousand words. How many words is an interactive demo worth?
I don't think such demos always add that much value. Like for flex-grow, I think it adds some value, but not too much compared to a visualization like the one here. On the other hand, the demo for content vs items actually made the concept click for me in a way that I don't think it would have without the demo. It makes me think back to Explorable Explanations along with some of the other work of Bret Victor.
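For anyone who hasn't played with flexbox, here's roughly the rule those flex-grow demos are visualizing, as a little Python sketch (simplified: it ignores flex-basis, min/max constraints, and the special case where the grow values sum to less than 1).

```python
# Rough sketch of how flex-grow distributes a container's leftover space:
# each item gets extra space in proportion to its flex-grow value.
def distribute(container_px, base_sizes_px, grow_values):
    free = container_px - sum(base_sizes_px)
    total_grow = sum(grow_values)
    return [
        base + free * (grow / total_grow)
        for base, grow in zip(base_sizes_px, grow_values)
    ]

# Three 100px items in a 600px container, with flex-grow values 1, 1, 2:
print(distribute(600, [100, 100, 100], [1, 1, 2]))
# -> [175.0, 175.0, 250.0]; the flex-grow: 2 item absorbs half the 300px of free space
```

That's easy enough to state in prose or code, but it's the kind of thing that lands differently when you can drag the numbers around and watch the boxes resize.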
So yeah, sometimes demos really do add a lot of educational value. But even when they don't, I think that they can also add a lot of value in other ways. For example, by being engaging, or by providing delight.
This all makes me wonder about how worthwhile it is for writers to try to incorporate such interactive demos into their posts. I'm coming from a place where I'm thinking about the fact that they often add a lot of value. Yes, they're also pretty costly, but if they add a lot of value, I dunno, maybe it's worth paying the cost. Or maybe it's worth figuring out a way to lower it. I'm also coming from a place where I observe that people will, at times, put a lot of effort into some sort of written material that they produce, rewriting it and revising it and whatnot.
Then again, it is pretty costly to incorporate such demos. You'd have to learn to code, and be pretty good at it. You'd have to develop a pretty strong intuition for good design. Those are skills that take years to learn. Maybe doing so is worthwhile if your main thing is writing or education, but otherwise, probably not.
Another thing you could do is pay someone who has those skills. Again, doing so is going to be expensive. Maybe the cost is worth it for a large project like a book or something, but for something more on the scale of a blog post, probably not.
As a creative solution, I wonder whether it'd make sense to find young people earlier in their careers who have the needed skills and who are looking to get real world experience, make connections, and add to their resumes. Finding such people would probably be a decent amount of work though. Maybe if there was a platform to help? Meh. I feel cynical. Fundamentally, you're trying to get valuable, skilled labor for free. Feels too much like an uphill battle.
I suppose that like all things you could just point to the fact that AI will be good enough to do this sort of thing at some point. However, I don't think that observation is a helpful one. The conversation here is about how to improve content that you produce via interactive demos. Once AI is good enough to freely or cheaply produce those demos, it'll also be good enough to just produce the overall content.
What about people like me? I like to write. I want to produce good content. I am a front end leaning web developer. I think I have an eye for design. Maybe I am the type of person who could take the time and produce these sorts of interactive demos for the content I produce?
Nah. I don't think it'd be practical. Right now I have other things aside from writing that I'm prioritizing and I'm not looking to spend more than something on the scale of hours for a given post. At other points I aimed to spend something more on the scale of days for a given post, but even that is probably too short of a time scale to justify interactive demos. I think interactive demos become relevant when you're dealing with weeks, if not months. And so even if you have the skills, I think it often isn't practical if you aren't, e.g., a book author or something.
Maybe there is a deeper issue here. Maybe it's that we are the kind that can't cooperate. From a God's Eye perspective, I feel like I'd much prefer to take 100 authors and have them coordinate to produce 1 amazing blog post than for them to go off on their own and produce 100 mediocre blog posts. But observing this is hardly a solution. If we were able to solve this problem of a lack of cooperation it'd have impacts far beyond explorable explanations.
So overall, I guess I'm not really seeing anything actionable here with respect to the interactive demos.
I just came across That's Not an Abstraction, That's Just a Layer of Indirection on Hacker News today. It makes a very similar point that I make in this post, but adds a very helpful term: indirection. When you have to "open the box", the box serves as an indirection.
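A small made-up example of the distinction as I understand it (both functions are hypothetical, just for illustration): a wrapper that forwards every decision back to you is indirection, because you still have to open the box; a wrapper that makes those decisions for you is an abstraction.

```python
import json
from urllib.request import Request, urlopen

# Indirection: the wrapper exposes all the same knobs as the thing it wraps,
# so to call it you still have to understand everything underneath.
def do_request(url, method, headers, data, timeout):
    return urlopen(Request(url, data=data, headers=headers, method=method), timeout=timeout)

# Abstraction: it makes the routine decisions itself and exposes a smaller
# idea ("give me the JSON at this URL") that you can think in terms of.
def fetch_json(url, timeout=10):
    request = Request(url, headers={"Accept": "application/json"})
    with urlopen(request, timeout=timeout) as response:
        return json.loads(response.read().decode("utf-8"))
```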
When I was a student at Fullstack Academy, a coding bootcamp, they had us all do this (mapping it to the control key), along with a few other changes to such settings like making the key repeat rate faster. I think I got this script from them.
My instinct is that it's not the type of thing to hack at with workarounds without buy in from the LW team.
If there was buy in from them I expect that it wouldn't be much effort to add some sort of functionality. At least not for a version one; iterating on it could definitely take time, but you could hold off on spending that time iterating if there isn't enough interest, so the initial investment wouldn't be high effort.
I think this is a great idea, at least in the distillation aspect.
Thanks!
Having briefer statements of the most important posts would be very useful in growing the rationalist community.
I think you're right, but I think it's also important to think about dilution. Making things lower-effort and more appealing to the masses brings down the walls of the garden, which "dilutes" things inside the garden.
But I'm just saying that this is a consideration. And there are lots of considerations. I feel confused about how to enumerate through them, weigh them, and figure out which way the arrow points: towards being more appealing to the masses or less appealing. I know I probably indicated that I lean towards the former when I talked about "summaries, analyses and distillations" in my OP, but I want to clarify that I feel very uncertain and if anything probably lean towards the latter.
But even if we did want to focus on having taller walls, I think the "more is possible" point that I was ultimately trying to gesture at in my OP still stands. It's just that the "more" part might mean things like higher quality explanations, more and better examples of what the post is describing, knowledge checks, and exercises.
Since we don't currently have that list of distilled posts (AFAIK - anyone?)
There is the Sequence Highlights which has an estimated reading time of eight hours.
Sometimes when I'm reading old blog posts on LessWrong, like old Sequence posts, I have something that I want to write up as a comment, and I'm never sure where to write that comment.
I could write it on the original post, but if I do that it's unlikely to be seen and to generate conversation. Alternatively, I could write it on my Shortform or on the Open Thread. That would get a reasonable amount of visibility, but... I dunno... something feels defect-y and uncooperative about that for some reason.
I guess what's driving that feeling is probably the thought that in a perfect world conversations about posts would happen in the comments section of the post, and by posting elsewhere I'm contributing to the problem.
But now that I write that out I'm feeling like that's a bit of a silly thought. Fixing the problem would take a larger concentration of force than just me posting a few comments on old Sequence posts once in a while. By posting my comments in the comments sections of the corresponding posts, I'm not really moving the needle. So I don't think I endorse any feelings of guilt here.
I would like to see people write high-effort summaries, analyses and distillations of the posts in The Sequences.
When Eliezer wrote the original posts, he was writing one blog post a day for two years. Surely you could do a better job presenting the content that he produced in one day if you, say, took four months applying principles of pedagogy and iterating on it as a side project. I get the sense that more is possible.
This seems like a particularly good project for people who want to write but don't know what to write about. I've talked with a variety of people who are in that boat.
One issue with such distillation posts is discoverability. Maybe you write the post, it receives some upvotes, some people see it, and then it disappears into the ether. Ideally when someone in the future goes to read the corresponding sequence post they would be aware that your distillation post is available as a sort of sister content to the original content. LessWrong does have the "Mentioned in" section at the bottom of posts, but that doesn't feel like it is sufficient.
I recently started going through some of Rationality from AI to Zombies again. A big reason why is the fact that there are audio recordings of the posts. It's easy to listen to a post or two as I walk my dog, or a handful of posts instead of some random hour-long podcast that I would otherwise listen to.
I originally read (most of) The Sequences maybe 13 or 14 years ago when I was in college. At various times since then I've made somewhat deliberate efforts to revisit them. Other times I've re-read random posts as opposed to larger collections of posts. Anyway, the point I want to make is that it's been a while.
I've been a little surprised in my feelings as I re-read them. Some of them feel notably less good than what I remember. Others blow my mind and are incredible.
The Mysterious Answers sequence is one that I felt disappointed by. I felt like the posts weren't very clear and that there wasn't much substance. I think the main overarching point of the sequence is that an explanation can't say that all outcomes are equally probable. It has to say that some outcomes are more probable than others. But that just seems kinda obvious.
I think it's quite plausible that there are "good" reasons why I felt disappointed as I re-read this and other sequences. Maybe there are important things that are going over my head. Or maybe I actually understand things too well now after hanging around this community for so long.
One post that hit me kinda hard that I really enjoyed after re-reading it was Rationality and the English Language, and then the follow up post, Human Evil and Muddled Thinking. The posts helped me grok how powerful language can be.
If you really want an artist’s perspective on rationality, then read Orwell; he is mandatory reading for rationalists as well as authors. Orwell was not a scientist, but a writer; his tools were not numbers, but words; his adversary was not Nature, but human evil. If you wish to imprison people for years without trial, you must think of some other way to say it than “I’m going to imprison Mr. Jennings for years without trial.” You must muddy the listener’s thinking, prevent clear images from outraging conscience. You say, “Unreliable elements were subjected to an alternative justice process.”
I'm pretty sure that I read those posts before, along with a bunch of related posts and stuff, but for whatever reason the re-read still meaningfully improved my understanding of the concept.
I assume you mean wearing a helmet while being in a car to reduce the risk of car-related injuries and deaths. I actually looked into this and from what I remember, helmets do more harm than good. They have the benefit of protecting you from hitting your head against something, but the issue with accidents comes much more from the whiplash, and by adding more weight to (the top of) your head, helmets have the cost of making whiplash worse, and this cost outweighs the benefits by a fair amount.
Yes! I've always been a huge believer in this idea that the ease of eating a food is important and underrated. Very underrated.
I'm reminded of this clip of Anthony Bourdain talking about burgers and how people often put slices of bacon on a burger, but that doing so makes the burger difficult to eat. Presumably because when you go to take a bite, the whole slice of bacon often ends up sliding off the burger.
Am I making this more enjoyable by adding bacon? Maybe. How should that bacon be introduced into the question? It's an engineering and structural problem as much as it is a flavor experience. You really have to consider all of those things. One of the greatest sins in "burgerdom" I think is making a burger that's just difficult to eat.
I've noticed that there's a pretty big difference in the discussion that follows from me showing someone a draft of a post and asking for comments and the discussion in the comments section after I publish a post. The former is richer and more enjoyable whereas the latter doesn't usually result in much back and forth. And I get the sense that this is true for other authors as well.
I guess one important thing might be that with drafts, you're talking to people who you know. But I actually don't suspect that this plays much of a role, at least on LessWrong. As an anecdote, I've had some incredible conversations with the guy who reviews drafts of posts on LessWrong for free and I had never talked to him previously.
I wonder what it is about drafts. I wonder if it can or should be incorporated into regular posts.
Thanks Marvin! I'm glad to hear that you enjoyed the post and that it was helpful.
Imho your post should be linked to all definitions of the sunk cost fallacy.
I actually think the issue was more akin to the planning fallacy. Like when I'd think to myself "another two months to build this feature and then things will be good", it wasn't so much that I was compelled because of the time I had sunk into the journey, it was more that I genuinely anticipated that the results would be better than they actually were.
It isn't active, sorry. See the update at the top of the post.
See also: https://www.painscience.com/articles/strength-training-frequency.php.
Summary:
Strength training is not only more beneficial for general fitness than most people realize, it isn’t even necessary to spend hours at the gym every week to get those benefits. Almost any amount of it is much better than nothing. While more effort will produce better results, the returns diminish rapidly. Just one or two half hour sessions per week can get most of the results that you’d get from two to three times that much of an investment (and that’s a deliberately conservative estimate). This is broadly true of any form of exercise, but especially so with strength training. In a world where virtually everything in health and fitness is controversial, this is actually fairly settled science.
Oh I see, that makes sense. In retrospect that is a little obvious that you don't have to choose one or the other :)
So does the choice of which type of fiber to take boil down to the question of the importance of constipation vs microbiome and cholesterol? It's seeming to me like if the former is more important you should take soluble non-fermentable fiber, if the latter is more important you should take soluble fermentable fiber (or eat it in a whole food), and that insoluble fiber is never/rarely the best option.
Funny. I have a Dropbox folder where I store video tours of all the apartments I've ever lived in. Like, I spend a minute or two walking around the apartment and taking a video with my phone.
I'm not sure why, exactly. Partly because it's fun to look back. Partly because I don't want to "lose" something that's been with me for so long.
I suspect that such video tours are more appropriate for a large majority of people. 10 hours and $200-$500 sounds like a lot. And you could always convert the video tour into digital art some time in the future if you find the nostalgia is really hitting you.
Hm. I hear ya. Good point. I'm not sure whether I agree or disagree.
I'm trying to think of an analogy and came up with the following. Imagine you go to McDonalds with some friends and someone comments that their burger would be better if they used prime ribeye for their ground beef.
I guess it's technically true, but something also feels off about it to me that I'm having trouble putting my finger on. Maybe it's that it feels like a moot point to discuss things that would make something better that are also impractical to implement.
I just looked up Gish gallops on Wikipedia. Here's the first paragraph:
The Gish gallop (/ˈɡɪʃ ˈɡæləp/) is a rhetorical technique in which a person in a debate attempts to overwhelm an opponent by abandoning formal debating principles, providing an excessive number of arguments with no regard for the accuracy or strength of those arguments and that are impossible to address adequately in the time allotted to the opponent. Gish galloping prioritizes the quantity of the galloper's arguments at the expense of their quality.
I disagree that focusing on the central point is a recipe for Gish gallops and that it leads to Schrodinger's importance.
Well, I think that in combination with a bunch of other poor epistemic norms it might be a recipe for those things, but a) not by itself and b) I think the norms would have to be pretty poor. Like, I don't expect that you need 10/10 level epistemic norms in the presence of focusing on the central point to shield from those failure modes; I think you just need something more like 3/10 level epistemic norms. Here on LessWrong I think our epistemic norms are strong enough that focusing on the central point doesn't put us at risk of things like Gish gallops and Schrodinger's importance.
I actually disagree with this. I haven't thought too hard about it and might just not be seeing it, but on first thought I am not really seeing how such evidence would make the post "much stronger".
To elaborate, I like to use Paul Graham's Disagreement Hierarchy as a lens to look through for the question of how strong a post is. In particular, I like to focus pretty hard on the central point (DH6) rather than supporting and tangential points. I think the central point plays a very large role in determining how strong a post is.
Here, my interpretation of the central point(s) is something like this:
- Poverty is largely determined by the weakest link in the chain.
- Anoxan is a helpful example to illustrate this.
- It's not too clear what drives poverty today, and so it's not too clear that UBI would meaningfully reduce poverty.
I thought the post did a nice job of making those central points. Sure, something like a survey of the research in positive psychology could provide more support for point #1, for example, but I dunno, I found the sort of intuitive argument for point #1 to be pretty strong, I'm pretty persuaded by it, and so I don't think I'd update too hard in response to the survey of positive psychology research.
Another thing I think about when asking myself how strong a post is, is how "far along" it is. Is it an off-the-cuff conversation starter? An informal write up of something that's been moderately refined? A formal write up of something that has been significantly refined?
I think this post was somewhere towards the beginning of the spectrum (note: it was originally a tweet, not a LessWrong post). So then, for things like citations supporting empirical claims, I don't think it's reasonable to expect very much from the author, and so I lean away from viewing the lack of citations as something that (meaningfully) weakens the post.
What would it be like for people to not be poor?
I reply: You wouldn't see people working 60-hour weeks, at jobs where they have to smile and bear it when their bosses abuse them.
I appreciate the concrete, illustrative examples used in this discussion, but I also want to recognize that they are only the beginnings of a "real" answer to the question of what it would be like to not be poor.
In other words, in an attempt to describe what he sees as poverty, I think Eliezer has taken the strategy of pointing to a few points in Thingspace and saying "here are some points; the stuff over here around these points is roughly what I'm trying to gesture at". He hasn't taken too much of a stab at drawing the boundaries. I'd like to take a small stab at drawing some boundaries.
It seems to me that poverty is about QALYs. Let's wave our hands a bit and say that QALYs are a function of 1) the "cards you're dealt" and 2) how you "play your hand". With that, I think that we can think about poverty as happening when someone is dealt cards that make it "difficult" for them to have "enough" QALYs.
This happens in our world when you have to spend 40 hours a week smiling and bearing it. It happens in Anoxan when you take shallow breaths to conserve oxygen for your kids. And it happened to hunter-gatherers in times of scarcity.
There are many circumstances that can make it difficult to live a happy life. And as Eliezer calls out, it is quite possible for one "bad apple circumstance", like an Anoxan resident not having enough oxygen, to spoil the bunch. For you to enjoy abundance in a lot of areas but scarcity in one/few other areas, and for the scarcity to be enough to drive poverty despite the abundance. I suppose then that poverty is driven in large part by the strength of the "weakest link".
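As a toy way of putting numbers on that weakest-link intuition (my framing, not Eliezer's): score a handful of life domains from 0-10 and compare an average-style aggregation to a min-style one.

```python
# Invented scores for two bundles of life domains, contrasting an average
# aggregation with a weakest-link (min) aggregation.
domains_typical = {"oxygen": 9, "food": 7, "shelter": 7, "leisure": 6}
domains_anoxan = {"oxygen": 2, "food": 9, "shelter": 9, "leisure": 9}

def average(scores):
    return sum(scores.values()) / len(scores)

def weakest_link(scores):
    return min(scores.values())

for name, scores in [("typical", domains_typical), ("anoxan", domains_anoxan)]:
    print(f"{name}: average={average(scores):.2f}, weakest link={weakest_link(scores)}")
# Both bundles average 7.25, but the Anoxan bundle's weakest link is 2
# versus 6: the one scarce resource dominates how the whole bundle feels.
```

The averages are identical, but the min tells the real story, which is roughly what I mean by poverty being driven in large part by the weakest link.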
Note that I don't think this dynamic needs to be very conscious on anyone's part. I think that humans instinctively execute good game theory because evolution selected for it, even if the human executing just feels a wordless pull to that kind of behavior.
Yup, exactly. It makes me think back to The Moral Animal by Robert Wright. It's been a while since I read it so take what follows with a grain of salt, because I could be butchering some stuff, but that book makes the argument that this sort of thing goes beyond friendship and into all types of emotions and moral feelings.
Like if you're at the grocery store and someone just cuts you in line for no reason, one way of looking at it is that the cost to you is negligible -- you just need to wait an additional 45 seconds for them to check out -- and so the rational thing would be to just let it happen. You could confront them, but what exactly would you have to gain? Suppose you are traveling and will never see any of the people in the area ever again.
But we have evolved such that this situation would evoke some strong emotions regarding unfairness, and these emotions would often drive you to confront the person who cut you in line. I forget if this stuff is more at the individual level or the cultural level.