I wonder how much of that is actually based on science, and how much is just superstition / scams.
In basketball there isn't any certification. Coaches/trainers are usually former players themselves who have had some amount of success, so that points towards them being competent to some extent. There's also the fact that if you don't feel like you're making progress with a coach you can fire them and hire a new one. But I think there's also a reasonably sized risk of a coach lacking competence and certain players sticking with them anyway, for a variety of reasons.
I'm sure that similar things are true in other fields, including athletics but also in fields like chess where there isn't a degree you could get. In fields with certifications and degrees it probably happens less often, but I know I've dealt with my fair share of incompetent MDs and PhDs.
So ultimately, I agree with the sentiment that finding competent coaches might involve some friction, but despite that, it still feels to me like a very tractable problem. Relatedly, I'm seeing now that there has been some activity on the topic of coaching in the EA community.
What is specific, from this perspective, for AI alignment researchers? Maybe the feeling of great responsibility, higher chance of burnout and nightmares?
I don't expect that the needs of alignment researchers are too unique when compared to the needs of other intellectuals. I mention alignment researchers because I think they're a prototypical example of people having large, positive impacts on the world, as opposed to intellectuals who study string theory or something.
I was just watching this Andrew Huberman video titled "Train to Gain Energy & Avoid Brain Fog". The interviewee was talking about track athletes and stuff their coaches would have them do.
It made me think back to Anders Ericsson's book Peak: Secrets from the New Science of Expertise. The book is popular for discussing the importance of deliberate practice, but another big takeaway from the book is the importance of receiving coaching. I think that takeaway gets overlooked. Top performers in fields like chess, music and athletics almost universally receive coaching.
And at the highest levels the performers will have a team of coaches. LeBron James is famous for spending roughly $1.5 million a year on his body.
And he’s like, “Well, he’s replicated the gym that whatever team — whether it was Miami or Cleveland — he’s replicated all the equipment they have in the team’s gym in his house. He has two trainers. Everywhere he goes, he has a trainer with him.” I’m paraphrasing what he told me, so I might not be getting all these facts right. He’s got chefs. He has all the science of how to sleep. All these different things. Masseuses. Everything he does in his life is constructed to have him play basketball and to stay on the court and to be as healthy as possible and to absorb punishment when he goes into the basket and he gets crushed by people.
This makes me think about AI safety. I feel like the top alignment researchers -- and ideally a majority of competent alignment researchers -- should have such coaching and resources available to them.
I'm not exactly sure what form this would take. Academic/technical coaches? Writing coach? Performance psychologists? A sleep specialist? Nutritionist? Meditation coach?
All of this costs money of course. I'm not arguing that this is the most efficient place to allocate our limited resources. I don't have enough of an understanding of what the other options are to make such an argument.
But I will say that providing such resources to alignment researchers seems like it should pretty meaningfully improve their productivity. And if so, we are in fact funding constrained. I recall (earlier?) conversations claiming that funding isn't a constraint, and that the real constraint is a lack of good places to spend such money; if coaching meaningfully improves researcher productivity, it would be such a place.
Also relevant is that this is perhaps an easier sell to prospective donors than something more wacky. Like, it seems like a safe bet to have a solid impact, and there's a precedent for providing expert performers with such coaching, so maybe that sort of thing is appealing to prospective donors.
Finally, I recall hearing at some point that in a field like physics, the very top researchers -- people like Einstein -- have a very disproportionate impact. If so, I'd think that it's at least pretty plausible that something similar is true in the field of AI alignment. And if it is, then it'd probably make sense to spend time 1) figuring out who the Einsteins are and then 2) investing in them and doing what we can to maximize their impact.
Wow, I just watched this video where Feynman makes an incredible analogy between the rules of chess and the rules of our physical world.
You watch the pieces move and try to figure out the underlying rules. Maybe you come up with a rule about bishops needing to stay on the same color, and that rule lasts a while. But then you realize that there is a deeper rule that explains the rule you've held to be true: bishops can only move diagonally.
I'm butchering the analogy though and am going to stop talking now. Just go watch the video. It's poetic.
One thing to keep in mind is that, from what I understand, ovens are very imprecise so you gotta exercise some judgement when using them. For example, even if you set your oven to 400°F, it might only reach 325°F. Especially if you open the oven to check on the food (that lets out a lot of heat).
I've also heard that when baking on sheet pans, you can get very different results based on how well seasoned your sheet pan is. That shouldn't affect this dish though since the intent is for the top to be the crispy part and that happens via convection rather than conduction. But maybe how high or low you place the baking dish in your oven will affect the crispiness.
As another variation, I wonder how it'd come out if you used a sheet pan instead of a baking dish. I'd think that you'd get more crispy bits because of the increase in surface area of potato that is exposed to heat. Personally I'm a big fan of those crispy bits!
You'd probably need to use multiple sheet pans, but that doesn't seem like much of an inconvenience. You can also vary the crispiness by varying the amount of exposed surface area. Like, even if you use a sheet pan you can still kinda stack the potatoes on top of one another in order to reduce the exposed surface area.
I have not seen that post. Thank you for pointing me to it! I'm not sure when I'll get to it but I added it to my todo list to read and potentially discuss further here.
Scott's take on the relative futility of resolving high-level generators of disagreement (which seems to be beyond Level 7? Not sure) within reasonable timeframes is kind of depressing.
Very interesting! This is actually the topic that I really wanted to get to. I haven't been able to figure out a good way to get a conversation or blog post started on that topic though, and my attempts to do so led me to writing this (tangential) post.
I could see that happening, but in general, no, I wouldn't expect podcast hosts to already be aware of a substantial subset of arguments from the other side.
My impression is that podcasters do some prep, but in general they aren't spending many days, let alone weeks or months, on prep. When you interview a wide variety of people and discuss a wide variety of topics, as many podcasters including the ones I mentioned do, I think that means that hosts will generally not be aware of a substantial subset of arguments from the other side.
For the sake of argument, I'll accept your points about memes, genes, and technology being domains where growth is usually exponential. But even if those points are true, I think we still need an argument that growth is almost always exponential across all/most domains.
The central claim that "almost all growth is exponential growth" is an interesting one. However, I am not really seeing that this post makes an argument for it. It feels more like it is just stating it as a claim.
I would expect an argument to be something like "here is some deep principle that says that growth is almost always in proportion to the thing's current size". And then to give a bunch of examples of this being the case in various domains. (I found the examples in the opening paragraph to be odd. Bike 200 miles a week or never? Huh?) I also think it'd be helpful to point out counterexamples and spend some time commenting on them.
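For reference, the formal version of that principle is short. If a thing's growth rate is proportional to its current size, exponential growth follows immediately:

```latex
\frac{dx}{dt} = kx \quad\Longrightarrow\quad x(t) = x(0)\,e^{kt}
```

So the real work is arguing that most domains actually satisfy the "growth proportional to current size" condition; the math from there is automatic.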
[This contains spoilers for the show The Sopranos.]
In the realm of epistemics, it is a sin to double-count evidence. From One Argument Against An Army:
I talked about a style of reasoning in which not a single contrary argument is allowed, with the result that every non-supporting observation has to be argued away. Here I suggest that when people encounter a contrary argument, they prevent themselves from downshifting their confidence by rehearsing already-known support.
Suppose the country of Freedonia is debating whether its neighbor, Sylvania, is responsible for a recent rash of meteor strikes on its cities. There are several pieces of evidence suggesting this: the meteors struck cities close to the Sylvanian border; there was unusual activity in the Sylvanian stock markets before the strikes; and the Sylvanian ambassador Trentino was heard muttering about “heavenly vengeance.”
Someone comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. They have trade with us of billions of dinars annually.” “Well,” you reply, “the meteors struck cities close to Sylvania, there was suspicious activity in their stock market, and their ambassador spoke of heavenly vengeance afterward.” Since these three arguments outweigh the first, you keep your belief that Sylvania is responsible—you believe rather than disbelieve, qualitatively. Clearly, the balance of evidence weighs against Sylvania.
Then another comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. Directing an asteroid strike is really hard. Sylvania doesn’t even have a space program.” You reply, “But the meteors struck cities close to Sylvania, and their investors knew it, and the ambassador came right out and admitted it!” Again, these three arguments outweigh the first (by three arguments against one argument), so you keep your belief that Sylvania is responsible.
Indeed, your convictions are strengthened. On two separate occasions now, you have evaluated the balance of evidence, and both times the balance was tilted against Sylvania by a ratio of 3 to 1.
You encounter further arguments by the pro-Sylvania traitors—again, and again, and a hundred times again—but each time the new argument is handily defeated by 3 to 1. And on every occasion, you feel yourself becoming more confident that Sylvania was indeed responsible, shifting your prior according to the felt balance of evidence.
The problem, of course, is that by rehearsing arguments you already knew, you are double-counting the evidence. This would be a grave sin even if you double-counted all the evidence. (Imagine a scientist who does an experiment with 50 subjects and fails to obtain statistically significant results, so the scientist counts all the data twice.)
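As an aside, that scientist example is easy to make concrete. Here's a minimal sketch (assuming NumPy and SciPy; the effect size and seed are arbitrary choices of mine) showing how counting the same 50 data points twice manufactures significance out of nothing:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 50 subjects with a small true effect: the honest test is underpowered.
data = rng.normal(loc=0.2, scale=1.0, size=50)

_, p_honest = stats.ttest_1samp(data, popmean=0.0)

# The sin: count every observation twice. No new information was collected,
# but the apparent sample size doubles, the t-statistic inflates by ~sqrt(2),
# and the p-value shrinks.
doubled = np.concatenate([data, data])
_, p_doubled = stats.ttest_1samp(doubled, popmean=0.0)

print(f"honest analysis:     n={data.size},    p={p_honest:.3f}")
print(f"double-counted data: n={doubled.size}, p={p_doubled:.3f}")
```

Rehearsing the same three Sylvania arguments against each new counterargument is the informal version of the same move.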
I had the thought that something similar probably applies to morality as well. I'm thinking of Tony Soprano.
People say that Tony Soprano is an asshole. Some say he is a sociopath. I'm not sure where I stand. But I finished watching The Sopranos recently, and one thought I frequently had when he'd do something harmful was that his hand was kinda forced.
For example, there was a character in the show named Adriana. Adriana became an informant to the FBI at some point. When Tony learned this, he had her killed.
Having someone killed is, in some sense, bad. But did Tony have a choice? If he hadn't had her killed, she very well could have gotten Tony and the rest of the mob members sent to jail, or perhaps sentenced to death. When that is the calculus, we usually don't expect the person in Tony's shoes to prioritize the person in Adriana's shoes.
It makes me think back to when I played poker. Sometimes you end up in a bad spot. It looks like you just don't have any good options. Folding seems too nitty. Calling is gross. Raising feels dubious. No move you make will end well.
But alas, you do in fact have to make a decision. The goal is not necessarily to find a move that will be good in an absolute sense. It's to make the best move relative to the other moves you can make. To criticize someone who chooses the best move in a relative sense because it is a bad move in an absolute sense is unfair. You have to look at it from the point-of-decision.
Of course, you also want to look back at how you got yourself in the bad spot in the first place. Like if you made a bad decision on the flop that put you in a bad spot on the turn, you want to call out the play you made on the flop as bad and learn from it. But you don't want to "double count" the move you made on the flop once you've moved on to analyzing the next street.
Using this analogy, I think Tony Soprano made some incredibly bad preflop moves that set himself up for a shit show. And then he didn't do himself any favors on the flop. But once he was on later streets like the turn and river, I'm not sure how bad his decisions actually were. And more generally, I think it probably makes sense to avoid "double counting" the mistakes people made on earlier streets when they are faced with decisions on later streets.
I spent the day browsing the website of Josh W. Comeau yesterday. He writes educational content about web development. I am in awe.
For so many reasons. The quality of the writing. The clarity of the thinking. The mastery of the subject matter. The metaphors. The analogies. The quality and attention to detail of the website itself. Try zooming in to 300%. It still looks gorgeous.
One thing that he's got me thinking about is the place that sound effects and animation have on a website. Previously my opinion was that you should usually just leave 'em out. Focus on more important things. It's hard to implement them well; they usually just make the site feel tacky. They also add a decent amount of complexity.
But Josh does such a good job of utilizing sound effects and animation! Try clicking one of the icons on the top right. Or moving your cursor around those dots at the top of the home page. Or clicking the "heart" button at the end of the "Table of contents" section for one of his posts. It's so satisfying.
I'm realizing that my previous opinion was largely a cached thought. When I think about it now, I arrive at a different perspective. Right now I'm suspecting that both sound effects and animations should be treated as something to aspire towards. If it's a smaller-scale site and you don't have the skills or the resources to incorporate them, that's ok. But if it's a larger-scale site, I dunno, I feel like it's often worth prioritizing.
Anyway, the main thing I want to talk about is his usage of demos that you can explore. For example, check out his demos of how the flex-grow property works. It's one thing to read the docs. It's another to see a visualization. It's another to play with a demo. They say that a picture is worth a thousand words. How many words is an interactive demo worth?
I don't think such demos always add that much value. Like for flex-grow, I think it adds some value, but not too much compared to a visualization like the one here. On the other hand, the demo for content vs items actually made the concept click for me in a way that I don't think it would have without the demo. It makes me think back to Explorable Explanations along with some of the other work of Bret Victor.
So yeah, sometimes demos really do add a lot of educational value. But even when they don't, I think that they can also add a lot of value in other ways. For example, by being engaging, or by providing delight.
This all makes me wonder about how worthwhile it is for writers to try to incorporate such interactive demos into their posts. I'm coming from a place where I'm thinking about the fact that they often add a lot of value. Yes, they're also pretty costly, but if they add a lot of value, I dunno, maybe it's worth paying the cost. Or maybe it's worth figuring out a way to lower it. I'm also coming from a place where I observe that people will, at times, put a lot of effort into some sort of written material that they produce, rewriting it and revising it and whatnot.
Then again, it is pretty costly to incorporate such demos. You'd have to learn to code, and be pretty good at it. You'd have to develop a pretty strong intuition for good design. Those are skills that take years to learn. Maybe doing so is worthwhile if your main thing is writing or education, but otherwise, probably not.
Another thing you could do is pay someone who has those skills. Again, doing so is going to be expensive. Maybe the cost is worth it for a large project like a book, but for something more on the scale of a blog post, probably not.
As a creative solution, I wonder whether it'd make sense to find young people earlier in their careers who have the needed skills and who are looking to get real world experience, make connections, and add to their resumes. Finding such people would probably be a decent amount of work though. Maybe if there was a platform to help? Meh. I feel cynical. Fundamentally, you're trying to get valuable, skilled labor for free. Feels too much like an uphill battle.
I suppose that like all things you could just point to the fact that AI will be good enough to do this sort of thing at some point. However, I don't think that observation is a helpful one. The conversation here is about how to improve content that you produce via interactive demos. Once AI is good enough to freely or cheaply produce those demos, it'll also be good enough to just produce the overall content.
What about people like me? I like to write. I want to produce good content. I am a front end leaning web developer. I think I have an eye for design. Maybe I am the type of person who could take the time and produce these sorts of interactive demos for the content I produce?
Nah. I don't think it'd be practical. Right now I have other things aside from writing that I'm prioritizing and I'm not looking to spend more than something on the scale of hours for a given post. At other points I aimed to spend something more on the scale of days for a given post, but even that is probably too short of a time scale to justify interactive demos. I think interactive demos become relevant when you're dealing with weeks, if not months. And so even if you have the skills, I think it often isn't practical if you aren't eg. a book author or something.
Maybe there is a deeper issue here. Maybe it's that we are the kind that can't cooperate. From a God's Eye perspective, I feel like I'd much prefer to take 100 authors and have them coordinate to produce 1 amazing blog post than for them to go off on their own and produce 100 mediocre blog posts. But observing this is hardly a solution. If we were able to solve this problem of a lack of cooperation it'd have impacts far beyond explorable explanations.
So overall, I guess I'm not really seeing anything actionable here with respect to the interactive demos.
I just came across That's Not an Abstraction, That's Just a Layer of Indirection on Hacker News today. It makes a very similar point that I make in this post, but adds a very helpful term: indirection. When you have to "open the box", the box serves as an indirection.
When I was a student at Fullstack Academy, a coding bootcamp, they had us all do this (mapping it to the control key), along with a few other settings changes, like making the key repeat rate faster. I think I got this script from them.
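For anyone who wants to replicate it, here's a minimal sketch of what such a script can look like on macOS. I don't remember whether this matches Fullstack's script; the hidutil usage IDs and defaults keys are real, but the specific repeat-rate values are my guesses:

```python
import subprocess

# Remap Caps Lock (usage ID 0x700000039) to Left Control (0x7000000E0).
# hidutil reads these IDs from the USB HID usage tables.
remap = (
    '{"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x700000039,'
    '"HIDKeyboardModifierMappingDst":0x7000000E0}]}'
)
subprocess.run(["hidutil", "property", "--set", remap], check=True)

# Make held keys repeat faster (lower = faster; takes effect after re-login).
subprocess.run(["defaults", "write", "-g", "KeyRepeat", "-int", "2"], check=True)
subprocess.run(["defaults", "write", "-g", "InitialKeyRepeat", "-int", "15"], check=True)
```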
My instinct is that it's not the type of thing to hack at with workarounds without buy-in from the LW team.
If there were buy-in from them, I expect that it wouldn't be much effort to add some sort of functionality. At least not for a version one; iterating on it could definitely take time, but you could hold off on that if there isn't enough interest, so the initial investment would be low.
I think this is a great idea, at least in the distillation aspect.
Thanks!
Having briefer statements of the most important posts would be very useful in growing the rationalist community.
I think you're right, but I think it's also important to think about dilution. Making things lower-effort and more appealing to the masses brings down the walls of the garden, which "dilutes" things inside the garden.
But I'm just saying that this is a consideration. And there are lots of considerations. I feel confused about how to enumerate through them, weigh them, and figure out which way the arrow points: towards being more appealing to the masses or less appealing. I know I probably indicated that I lean towards the former when I talked about "summaries, analyses and distillations" in my OP, but I want to clarify that I feel very uncertain and if anything probably lean towards the latter.
But even if we did want to focus on having taller walls, I think the "more is possible" point that I was ultimately trying to gesture at in my OP still stands. It's just that the "more" might mean things like higher quality explanations, more and better examples of what the post is describing, knowledge checks, and exercises.
Since we don't currently have that list of distilled posts (AFAIK - anyone?)
There is the Sequence Highlights, which has an estimated reading time of eight hours.
Sometimes when I'm reading old blog posts on LessWrong, like old Sequence posts, I have something that I want to write up as a comment, and I'm never sure where to write that comment.
I could write it on the original post, but if I do that it's unlikely to be seen and to generate conversation. Alternatively, I could write it on my Shortform or on the Open Thread. That would get a reasonable amount of visibility, but... I dunno... something feels defect-y and uncooperative about that for some reason.
I guess what's driving that feeling is probably the thought that in a perfect world conversations about posts would happen in the comments section of the post, and by posting elsewhere I'm contributing to the problem.
But now that I write that out, I'm feeling like that's a bit of a silly thought. Fixing the problem would take a larger concentration of force than just me posting a few comments on old Sequence posts once in a while. By posting my comments in the comments sections of the corresponding posts, I'm not really moving the needle. So I don't think I endorse any feelings of guilt here.
I would like to see people write high-effort summaries, analyses and distillations of the posts in The Sequences.
When Eliezer wrote the original posts, he was writing one blog post a day for two years. Surely you could do a better job presenting the content that he produced in one day if you, say, took four months applying principles of pedagogy and iterating on it as a side project. I get the sense that more is possible.
This seems like a particularly good project for people who want to write but don't know what to write about. I've talked with a variety of people who are in that boat.
One issue with such distillation posts is discoverability. Maybe you write the post, it receives some upvotes, some people see it, and then it disappears into the ether. Ideally when someone in the future goes to read the corresponding sequence post they would be aware that your distillation post is available as a sort of sister content to the original content. LessWrong does have the "Mentioned in" section at the bottom of posts, but that doesn't feel like it is sufficient.
I recently started going through some of Rationality from AI to Zombies again. A big reason why is the fact that there are audio recordings of the posts. It's easy to listen to a post or two as I walk my dog, or a handful of posts instead of some random hour-long podcast that I would otherwise listen to.
I originally read (most of) The Sequences maybe 13 or 14 years ago when I was in college. At various times since then I've made somewhat deliberate efforts to revisit them. Other times I've re-read random posts as opposed to larger collections of posts. Anyway, the point I want to make is that it's been a while.
I've been a little surprised by my feelings as I re-read them. Some of the posts feel notably less good than what I remember. Others blow my mind and are incredible.
The Mysterious Answers sequence is one that I felt disappointed by. I felt like the posts weren't very clear and that there wasn't much substance. I think the main overarching point of the sequence is that an explanation can't say that all outcomes are equally probable. It has to say that some outcomes are more probable than others. But that just seems kinda obvious.
I think it's quite plausible that there are "good" reasons why I felt disappointed as I re-read this and other sequences. Maybe there are important things that are going over my head. Or maybe I actually understand things too well now after hanging around this community for so long.
One post that hit me kinda hard and that I really enjoyed after re-reading it was Rationality and the English Language, and then the follow-up post, Human Evil and Muddled Thinking. The posts helped me grok how powerful language can be.
If you really want an artist’s perspective on rationality, then read Orwell; he is mandatory reading for rationalists as well as authors. Orwell was not a scientist, but a writer; his tools were not numbers, but words; his adversary was not Nature, but human evil. If you wish to imprison people for years without trial, you must think of some other way to say it than “I’m going to imprison Mr. Jennings for years without trial.” You must muddy the listener’s thinking, prevent clear images from outraging conscience. You say, “Unreliable elements were subjected to an alternative justice process.”
I'm pretty sure that I read those posts before, along with a bunch of related posts and stuff, but for whatever reason the re-read still meaningfully improved my understanding of the concept.
I assume you mean wearing a helmet while in a car to reduce the risk of car-related injuries and deaths. I actually looked into this, and from what I remember, helmets do more harm than good. They have the benefit of protecting you from hitting your head against something, but the harm in accidents comes much more from whiplash, and by adding weight to (the top of) your head, helmets make whiplash worse. That cost outweighs the benefits by a fair amount.
Yes! I've always been a huge believer in this idea that the ease of eating a food is important and underrated. Very underrated.
I'm reminded of this clip of Anthony Bourdain talking about burgers and how people often put slices of bacon on a burger, but in doing so make the burger difficult to eat. Presumably because when you go to take a bite, the whole slice of bacon often ends up sliding off the burger.
Am I making this more enjoyable by adding bacon? Maybe. How should that bacon be introduced into the question? It's an engineering and structural problem as much as it is a flavor experience. You really have to consider all of those things. One of the greatest sins in "burgerdom" I think is making a burger that's just difficult to eat.
I've noticed that there's a pretty big difference in the discussion that follows from me showing someone a draft of a post and asking for comments and the discussion in the comments section after I publish a post. The former is richer and more enjoyable whereas the latter doesn't usually result in much back and forth. And I get the sense that this is true for other authors as well.
I guess one important thing might be that with drafts, you're talking to people who you know. But I actually don't suspect that this plays much of a role, at least on LessWrong. As an anecdote, I've had some incredible conversations with the guy who reviews drafts of posts on LessWrong for free and I had never talked to him previously.
I wonder what it is about drafts. I wonder if it can or should be incorporated into regular posts.
Thanks Marvin! I'm glad to hear that you enjoyed the post and that it was helpful.
Imho your post should be linked to all definitions of the sunk cost fallacy.
I actually think the issue was more akin to the planning fallacy. Like when I'd think to myself "another two months to build this feature and then things will be good", it wasn't so much that I was compelled because of the time I had sunk into the journey, it was more that I genuinely anticipated that the results would be better than they actually were.
It isn't active, sorry. See the update at the top of the post.
See also: https://www.painscience.com/articles/strength-training-frequency.php.
Summary:
Strength training is not only more beneficial for general fitness than most people realize, it isn’t even necessary to spend hours at the gym every week to get those benefits. Almost any amount of it is much better than nothing. While more effort will produce better results, the returns diminish rapidly. Just one or two half hour sessions per week can get most of the results that you’d get from two to three times that much of an investment (and that’s a deliberately conservative estimate). This is broadly true of any form of exercise, but especially so with strength training. In a world where virtually everything in health and fitness is controversial, this is actually fairly settled science.
Oh I see, that makes sense. In retrospect it's a little obvious that you don't have to choose one or the other :)
So does the choice of which type of fiber to take boil down to the question of the importance of constipation vs microbiome and cholesterol? It's seeming to me like if the former is more important you should take soluble non-fermentable fiber, if the latter is more important you should take soluble fermentable fiber (or eat it in a whole food), and that insoluble fiber is never/rarely the best option.
Funny. I have a Dropbox folder where I store video tours of all the apartments I've ever lived in. Like, I spend a minute or two walking around the apartment and taking a video with my phone.
I'm not sure why, exactly. Partly because it's fun to look back. Partly because I don't want to "lose" something that's been with me for so long.
I suspect that such video tours are more appropriate for a large majority of people. 10 hours and $200-$500 sounds like a lot. And you could always convert the video tour into digital art some time in the future if you find the nostalgia is really hitting you.
Hm. I hear ya. Good point. I'm not sure whether I agree or disagree.
I'm trying to think of an analogy and came up with the following. Imagine you go to McDonald's with some friends and someone comments that their burger would be better if they used prime ribeye for their ground beef.
I guess it's technically true, but something also feels off about it to me that I'm having trouble putting my finger on. Maybe it's that it feels like a moot point to discuss things that would make something better that are also impractical to implement.
I just looked up Gish gallops on Wikipedia. Here's the first paragraph:
The Gish gallop (/ˈɡɪʃ ˈɡæləp/) is a rhetorical technique in which a person in a debate attempts to overwhelm an opponent by abandoning formal debating principles, providing an excessive number of arguments with no regard for the accuracy or strength of those arguments and that are impossible to address adequately in the time allotted to the opponent. Gish galloping prioritizes the quantity of the galloper's arguments at the expense of their quality.
I disagree that focusing on the central point is a recipe for Gish gallops and that it leads to Schrodinger's importance.
Well, I think that in combination with a bunch of other poor epistemic norms it might be a recipe for those things, but a) not by itself and b) I think the norms would have to be pretty poor. Like, I don't expect that you need 10/10 level epistemic norms in the presence of focusing on the central point to shield from those failure modes; I think you just need something more like 3/10 level epistemic norms. Here on LessWrong I think our epistemic norms are strong enough that focusing on the central point doesn't put us at risk of things like Gish gallops and Schrodinger's importance.
I actually disagree with this. I haven't thought too hard about it and might just not be seeing it, but on first thought I am not really seeing how such evidence would make the post "much stronger".
To elaborate, I like to use Paul Graham's Disagreement Hierarchy as a lens to look through for the question of how strong a post is. In particular, I like to focus pretty hard on the central point (DH6) rather than supporting and tangential points. I think the central point plays a very large role in determining how strong a post is.
Here, my interpretation of the central point(s) is something like this:
- Poverty is largely determined by the weakest link in the chain.
- Anoxan is a helpful example to illustrate this.
- It's not too clear what drives poverty today, and so it's not too clear that UBI would meaningfully reduce poverty.
I thought the post did a nice job of making those central points. Sure, something like a survey of the research in positive psychology could provide more support for point #1, but I dunno, I found the intuitive argument for point #1 to be pretty strong, I'm pretty persuaded by it, and so I don't think I'd update too hard in response to such a survey.
Another thing I think about when asking how strong a post is: how "far along" it is. Is it an off-the-cuff conversation starter? An informal write-up of something that's been moderately refined? A formal write-up of something that has been significantly refined?
I think this post was somewhere towards the beginning of the spectrum (note: it was originally a tweet, not a LessWrong post). So then, for things like citations supporting empirical claims, I don't think it's reasonable to expect very much from the author, and so I lean away from viewing the lack of citations as something that (meaningfully) weakens the post.
What would it be like for people to not be poor?
I reply: You wouldn't see people working 60-hour weeks, at jobs where they have to smile and bear it when their bosses abuse them.
I appreciate the concrete, illustrative examples used in this discussion, but I also want to recognize that they are only the beginnings of a "real" answer to the question of what it would be like to not be poor.
In other words, in an attempt to describe what he sees as poverty, I think Eliezer has taken the strategy of pointing to a few points in Thingspace and saying "here are some points; the stuff over here around these points is roughly what I'm trying to gesture at". He hasn't taken too much of a stab at drawing the boundaries. I'd like to take a small stab at drawing some boundaries.
It seems to me that poverty is about QALYs. Let's wave our hands a bit and say that QALYs are a function of 1) the "cards you're dealt" and 2) how you "play your hand". With that, I think that we can think about poverty as happening when someone is dealt cards that make it "difficult" for them to have "enough" QALYs.
This happens in our world when you have to spend 40 hours a week smiling and bearing it. It happens in Anoxan when you take shallow breaths to conserve oxygen for your kids. And it happened to hunter-gatherers in times of scarcity.
There are many circumstances that can make it difficult to live a happy life. And as Eliezer calls out, it is quite possible for one "bad apple circumstance", like an Anoxan resident not having enough oxygen, to spoil the bunch. For you to enjoy abundance in a lot of areas but scarcity in one/few other areas, and for the scarcity to be enough to drive poverty despite the abundance. I suppose then that poverty is driven in large part by the strength of the "weakest link".
Note that I don't think this dynamic needs to be very conscious on anyone's part. I think that humans instinctively execute good game theory because evolution selected for it, even if the human executing just feels a wordless pull to that kind of behavior.
Yup, exactly. It makes me think back to The Moral Animal by Robert Wright. It's been a while since I read it so take what follows with a grain of salt, because I could be butchering some stuff, but that book makes the argument that this sort of thing goes beyond friendship and into all types of emotions and moral feelings.
Like if you're at the grocery store and someone just cuts you in line for no reason, one way of looking at it is that the cost to you is negligible -- you just need to wait an additional 45 seconds for them to check out -- and so the rational thing would be to just let it happen. You could confront them, but what exactly would you have to gain? Suppose you are traveling and will never see any of the people in the area ever again.
But we have evolved such that this situation would evoke some strong emotions regarding unfairness, and these emotions would often drive you to confront the person who cut you in line. I forget if this stuff is more at the individual level or the cultural level.
Why? Because extra information could help me impress them.
I've always been pretty against the idea of trying to impress people on dates.
It risks false positives. Ie. it risks a situation where you succeed at impressing them, go on more dates or have a longer relationship than you otherwise would, and then realize that you aren't compatible and break up. Which isn't necessarily a bad thing but I think it is more often than not.
Impressing your date also reduces the risk of false negatives, which is a good thing. Ie. it helps avoid the scenario where someone who you're compatible with rejects you. Maybe this is too starry-eyed, but I like to think that if you just bring your true self to the table, are open-minded, and push yourself to be a little vulnerable, the risk of such false negatives is pretty low.
I think this is especially relevant because I think the emotionally healthy person heuristic probably says to try to impress your date.
Hm yeah, I feel the same way. Good point.
America's response to covid seems like one example of this.
If I'm remembering correctly from Zvi's blog posts, he criticized the US's policy for being a sort of worst of both worlds middle ground. A strong, decisive requirement to enforce things like masking and distancing might have actually eradicated the virus and thus been worthwhile. But if you're not going to take an aggressive enough stance, you should just forget it: half-hearted mitigation policies don't do enough to "complete the bridge" and so aren't worth the economic and social costs.
It's not a perfect example. The "unfinished bridge" here provides positive value, not zero value. But I think the amount of positive value is low enough that it would be useful to round it down to zero. The important thing is that you get a big jump in value once you cross some threshold of progress.
I think a lot of philanthropic causes are probably in a similar boat.
When there are lots of small groups spread around making very marginal progress on a bunch of different goals, it's as if they're building a bunch of unfinished bridges. This too isn't a perfect example because the "unfinished bridges" provide some value, but like the covid example, I think the amount of value is small enough that we can just round it to zero.
On the other hand, when people get a little barbaric and rally around a single cause, there might be enough concentration of force to complete the bridge.
Project idea: virtual water coolers for LessWrong
Previous: Virtual water coolers
Here's an idea: what if there was a virtual water cooler for LessWrong?
- There'd be Zoom chats with three people per chat. Each chat is a virtual water cooler.
- The user journey would begin by the user expressing that they'd like to join a virtual water cooler.
- Once they do, they'd be invited to join one.
- I think it'd make sense to restrict access to users based on karma. Maybe only 100+ karma users are allowed.
- To start, that could be it. In the future you could do some investigation into things like how many people there should be per chat.
Seems like an experiment that is both cheap and worthwhile.
If there is interest I'd be happy to create an MVP; a minimal sketch of the matching logic is below.
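To give a sense of how small version one could be, here's that sketch. Everything in it is hypothetical: the names, the karma threshold, and the assumption that some separate code creates the actual Zoom rooms.

```python
from dataclasses import dataclass

KARMA_THRESHOLD = 100  # assumed gate, per the bullet above
CHAT_SIZE = 3          # people per virtual water cooler

@dataclass
class User:
    username: str
    karma: int

def match_into_chats(waiting: list[User]) -> list[list[User]]:
    """Group eligible waiting users into water-cooler chats of CHAT_SIZE."""
    eligible = [u for u in waiting if u.karma >= KARMA_THRESHOLD]
    groups = [eligible[i:i + CHAT_SIZE] for i in range(0, len(eligible), CHAT_SIZE)]
    # Keep a partial group waiting rather than opening an undersized chat.
    return [g for g in groups if len(g) == CHAT_SIZE]
```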
(Related: it could be interesting to abstract this and build a sort of "virtual water cooler platform builder" such that eg. LessWrong could use the builder to build a virtual water cooler platform for LessWrong and OtherCommunity could use the builder to build a virtual water cooler platform for their community.)
Update: I tried a few doses of Adderall, up to 15mg. I didn't notice anything.
I was envisioning that you can organize a festival incrementally, investing more time and money into it as you receive more and more validation, and that taking this approach would de-risk it to the point where overall, it's "not that risky".
For example, to start off you can email or message a handful of potential attendees. If they aren't excited by the idea you can stop there, but if they are then you can proceed to start looking into things like cost and logistics. I'm not sure how pragmatic this iterative approach actually is though. What do you think?
Also, it seems to me that you wouldn't have to actually risk losing any of your own money. I'd imagine that you'd 1) talk to the hostel, agree on a price, have them "hold the spot" for you, 2) get sign ups, 3) pay using the money you get from attendees.
Although now that I think about it, I'm realizing that it probably isn't that simple. For example, the hostel cost ~$5k; maybe the money from the attendees would have covered it all, but maybe fewer attendees signed up than expected and the organizers ended up having to pay out of pocket.
On the other hand, maybe there is funding available for situations like these.
Virtual watercoolers
As I mentioned in some recent Shortform posts, I recently listened to the Bayesian Conspiracy podcast's episode on the LessOnline festival and it got me thinking.
One thing I think is cool is that Ben Pace was saying how the valuable thing about these festivals isn't the presentations, it's the time spent mingling in between the presentations, and so they decided with LessOnline to just ditch the presentations and make it all about mingling. Which got me thinking about mingling.
It seems plausible to me that such mingling can and should happen more online. And I wonder whether an important thing about mingling in the physical world is that, how do I say this, you're just in the same physical space, next to each other, with nothing else you're supposed to be doing, and in fact what you're supposed to be doing is talking to one another.
Well, I guess you don't have to be talking to one another. It's also cool if you just want to hang out and sip on a drink or something. It's similar to the office water cooler: it's fine if you're just hanging out drinking some water, but it's also normal to chit chat with your coworkers.
I wonder whether it'd be good to design a virtual watercooler. A digital place that mimics aspects of the situations I've been describing (festivals, office watercoolers).
1. By being available in the virtual watercooler, it's implied that you're pretty available to chit chat, but it's also cool if you're just hanging out doing something low key like sipping a drink. You shouldn't be doing something more substantial though.
2. The virtual watercooler should be organized around a certain theme. It should attract a certain group of people and filter out people who don't fit in. Just like festivals and office water coolers.
In particular, this feels to me like something that might be worth exploring for LessWrong.
Note: I know that there are various Slack and Discord groups but they don't meet conditions (1) or (2).
More dakka with festivals
In the rationality community people are currently excited about the LessOnline festival. Furthermore, my impression is that similar festivals are generally quite successful: people enjoy them, have stimulating discussions, form new relationships, are exposed to new and interesting ideas, express that they got a lot out of it, etc.
So then, this feels to me like a situation where More Dakka applies. Organize more festivals!
How? Who? I dunno, but these seem like questions worth discussing.
Some initial thoughts:
- Assurance contracts seem like quite the promising tool.
- You probably don't need a hero license to go out and organize a festival.
- Trying to organize a festival probably isn't risky. It doesn't seem like it'd involve too much time or money.
I wish there were more discussion posts on LessWrong.
Right now it feels like it weakly if not moderately violates some sort of cultural norm to publish a discussion post (similar but to a lesser extent on the Shortform). Something low effort of the form "X is a topic I'd like to discuss. A, B and C are a few initial thoughts I have about it. What do you guys think?"
It seems to me like something we should encourage though. Here's how I'm thinking about it. Such "discussion posts" currently happen informally in social circles. Maybe you'll text a friend. Maybe you'll bring it up at a meetup. Maybe you'll post about it in a private Slack group.
But if it's appropriate in those contexts, why shouldn't it be appropriate on LessWrong? Why not benefit from having it be visible to more people? The more eyes you get on it, the better the chance someone has something helpful, insightful, or just generally useful to contribute.
The big downside I see is that it would screw up the post feed. Like when you go to lesswrong.com and see the list of posts, you don't want that list to have a bunch of low quality discussion posts you're not interested in. You don't want to spend time and energy sifting through the noise to find the signal.
But this is easily solved with filters. Authors could mark/categorize/tag their posts as being a low-effort discussion post, and people who don't want to see such posts in their feed can apply a filter to filter these discussion posts out.
Context: I was listening to the Bayesian Conspiracy podcast's episode on LessOnline. Hearing them talk about the sorts of discussions they envision happening there made me think about why that sort of thing doesn't happen more on LessWrong. Like, whatever you'd say to the group of people you're hanging out with at LessOnline, why not publish a quick discussion post about it on LessWrong?
Hm, maybe.
Sometimes it can be a win-win situation. For example, if the call leads to you identifying a problem they're having and solving it in a mutually beneficial way.
But often times that isn't the case. From their perspective, the chances are low enough where, yeah, maybe the cold call just feels spammy and annoying.
I think that cold calls can be worthwhile from behind a veil of ignorance though. That's the barometer I like to use. If I were behind a veil of ignorance, would I endorse the cold call? Some cold calls are well targeted and genuine, in which case I would endorse them from behind a veil of ignorance. Others are spammy and thoughtless, in which case I wouldn't endorse them.
I agree with everything you've said. Let me try to clarify where it is that I think we might be disagreeing.
I am of the opinion that some "narrow problems" are "good candidates" to build "narrow solutions" for but that other "narrow problems" are not good candidates to build "narrow solutions" for and instead really call for being solved as part of an all-in-one solution.
I think you would agree with this. I don't think you would make the argument that all "narrow problems" are "good candidates" to build "narrow solutions" for.
Furthermore, as I argue in the post, I think that the level of "cohesion" often plays an important role in how "appropriate" it is to use a "narrow solution" for a "narrow problem". I think you would agree with this as well.
I suspect that our only real disagreement here is how we would weigh the tradeoffs. I think I lean moderately more in the direction of thinking that cohesiveness is important enough to make various "narrow problems" insufficiently good candidates for a "narrow solution" and you lean moderately more in the direction of thinking that cohesiveness isn't too big a deal and the "narrow problem" still is a good candidate for building a "narrow solution" for.
To be clear, I don't think that any of this means that I should attempt to build all-in-one products. I think it means that in my calculus for what "narrow problem" I should attempt to tackle I should factor in the level of cohesion.
In practice, all-in-one tools always need a significant degree of setup, configuration and customization before they are useful for the customer. Salesforce, for example, requires so much customization, you can make a career out of just doing Salesforce customization.
I can see that being true for all-in-one tools like Salesforce that are intended to be used across industries, but what about all-in-one tools that are more targeted?
For example, Bikedesk is an all-in-one piece of software that is specifically for bike shops and I would guess that the overall amount of setup and configuration for a shop using Bikedesk is lower than that of a bike shop using a handful of more specific tools.
The tradeoff is between a narrowly focused tool that does one job extremely well immediately, with little or no setup
I suppose the "little or no setup" part is sometimes this is the case, but it seems to me that often times it is not the case. Specifically, when the level of cohesiveness is high it seems to me that it is probably not the case.
Using the bike shop as an example, inventory management software that isn't part of an all-in-one solution needs inventory data to be fed to it and thus will require a moderate amount of setup and configuration.
See also Adam Ragusea's podcast episode on the topic.
Hm, gotcha.
It's tough, I think there are a lot of tradeoffs to navigate.
- You could join a big company. You'll 1) get paid, 2) work on something that lots of people use, but 3) you'll be a small cog in a large machine, and it sounds like that's not really what you're looking for. It sounds like you enjoy autonomy and having a meaningful and large degree of ownership.
- You could work on your own project. That addresses 3. But then 1 and 2 become pretty big risks. It's hard to build something that makes good money and lots of people use.
- You could join an open source project that lots of people use and is lacking contributors. But there's often not really a path to getting paid there.
- Something interesting: https://fresh.deno.dev/. I really like what they're doing. I personally think it's the best web framework out there. And there's only one person working on it. He's an incredible developer. Deno is paying him to work on it. I'm not sure if they'd be open to paying a second contributor. And I am not too optimistic that Fresh will become something that many people use.
- Working on LessWrong is an interesting possibility. After all, you're a longtime user and have the right skillset. However, 1) I'm not sure how good the prospects are for getting paid, 2) it's a relatively small community so you wouldn't be getting that "tons and tons of people use something I built" feeling, and 3) given that it's later stage and there's a handful of other developers working on it, I'm not sure if it'd provide you enough feeling of ownership.
- Joining a small company seems like the most realistic way to get 1, 2 and 3, but the magnitude of each might not be ideal: smaller companies tend to pay less, have fewer users, and still have enough employees such that you don't really have that much ownership.
My best guess is that starting your own company would be best. Something closer to an indie hacker-y lifestyle SaaS business than a "swing for the fences" VC-backed business. The latter is probably better if you're earning to give and looking to maximize impact, but since you're leaning more towards designing a good life for yourself, I think the former is better, and I also think most people would agree with that. I've seen a lot of VC's be very open about the fact that the "swing for the fences" approach is frequently not actually in the founder's interest.
I'm looking to do the lifestyle SaaS business thing right now btw. If you're interested in that I'd love to chat: shoot me a DM.
I was thinking that too actually. And at the time I was thinking that for cohesion-related reasons, it's often the case that there just isn't a market for narrow tools like inventory software and instead the market demands an all-in-one tool, in which case there wouldn't be a demand for a tool that solves the problem of many formats of POS system data.
But now I'm not so sure. I'm feeling pretty agnostic. I'm not clear on how often the market demand is largely for all-in-one solutions vs how often there is a market demand for narrow solutions.
I guess it's a matter of pros and cons and tradeoffs.
On the one hand, a product that solves a narrow and specific problem can focus more on that problem and do a better job of addressing it than a general, all-in-one product can. But on the other hand, it still seems to me that what I propose about cohesion stands.
Using Anrok as an example, on the one hand the fact that Anrok is narrowly focused on tax and thus is able to do a better job of solving tax-related problems works in Anrok's favor. But on the other hand, there are cohesion-related things that work against Anrok such as having to integrate with other tools and such as customers having to spend more time shopping (with an all-in-one solution they just buy one thing and are done).
I suppose you'd agree that there are in fact tradeoffs at play here and that the real question is which direction the scale tends to lean. And I suppose you are of the opinion that the scale tends to lean in favor of narrower, more targeted solutions rather than broader, more all-in-one solutions. Is all of that true? If so, would you mind elaborating on why you hold that belief?
Kudos for writing this post. I know it's promotional/self-interested, but I think that's fine. It's also pro-social. Having the rule/norm to encourage this type of post seems unlikely to be abused in a net-negative sort of way (assuming some reasonable restrictions are in place).
What are your goals? Money? Impact? Meaning? To what extent?
I think it'd also be helpful to elaborate on your skillset. Front end? Back end? Game design? Mobile apps? Design? Product? Data science?