That's an excellent thought experiment!
Piggybacking off of it, suppose the man is struggling in life and otherwise doesn't get any exercise. Suppose that Sunday morning digging really improves his health and quality of life a lot. Is the activity justified?
How about if he's depressed? What if he isn't digging by himself but is instead digging with a tight-knit community of other gold believers? The digging provides him with feelings of warmth and connection that make life worth living. Is it justified then?
Where I'm coming from is that, supposing we view truth as an end in and of itself, I want to question how much weight we give to truth relative to other ends we are interested in. I think that regardless of whether you are a consequentialist or virtue ethicist or deontologist or whatever, non-naive versions of these philosophies will weigh different considerations against one another.[1]
And so I don't think OP's position here indicates that he assigns a low value to truth. I suspect, rather, that he is weighing truth against other important considerations and feels that the calculus comes out in favor of sacrificing some truth for the sake of other important things.
[1] Thanks to Gordon for helping me understand this in this dialogue!
My thesis is that most people, including the overwhelmingly atheist and non-religious rationalist crowd, would be better off if they actively participated in an organized religion.
My argument is roughly that religions uniquely provide a source of meaning, community, and life guidance not available elsewhere, and to the extent anything that doesn't consider itself a religion provides these, it's because it's imitating the package of things that makes something a religion.
I would have liked to see the post focus more on the second paragraph. I feel like the post focused on it only minimally, with the majority instead spent on related topics like which religion one should choose.
I think this is also promising as a way of coordinating online video chats.
I love this analogy! And the idea. I'm bullish on it being a good one.
But I've always thought that's the wrong way to think about these sorts of things. The question isn't whether it's a good idea, it's whether it's worth trying as an experiment. And that question feels to me like it has an obvious answer of "yes". It's cheap, low-downside, plausible, and has a high enough upside.
I've also always thought that, while seemingly mundane, these sorts of questions about how to have effective meetups might be really important. After all:
Lighthaven is a project by Lightcone Infrastructure, which is dedicated to facilitating intellectual progress on humanity's most important questions.
- https://www.lighthaven.space/
Well, "really important" might be a stretch. I see it as a Very General Helper move. It improves the productivity of people who are doing important things. Maybe "really important" should be reserved for more direct moves to tackle the central bottleneck. I'm not sure. I suppose it depends on the specifics.
If you decide to do this anyways, you will usually not get audiovisual feedback from the other audience members that it was rude/cringeworthy for you to interject, even if internally they are desperate for you to stop doing it.
You also very well might not get this feedback from the presenter. They may not be confrontational enough to call you out on it. And with the spotlight on them, they may feel uncomfortable doing things like sighing in exasperation or showing frustration in their facial expressions and body language.
at a rationalist conference
Not that I expect you to disagree, but to make it explicit, I don't think this is something that is specific to rationalist conferences. I think it applies to a large majority of conferences.
I took a cooking class once. The instructor's take on this was that yes, people do have too much sodium. But that is largely because processed food and food at restaurants has crazy amounts of sodium. Salting food that you cook at home is totally fine and is really hard to overdo in terms of health impact.
In fact, she called it out as a common failure mode where home cooks are afraid to use too much salt in their food. Not only is doing so ok, but even if it wasn't, by making your food taste better, it might motivate you to eat at home more and on balance lower your total sodium intake.
How come no special priority to salt? From what I understand, getting the salt level right is essential ("salt to taste"). Doing so makes a dish taste "right" and brings out the flavors of the other ingredients, making them taste more like themselves, without necessarily making the dish taste noticeably saltier.
Hm, maybe. I feel like sometimes "seasoning" can refer to "salt and spices" but in other contexts, like the first sentence of my OP, it points more toward spices.
That seems plausible. There's also hedonic adaptation stuff. Things that seem gross to us might have been fine to people in earlier eras. Although Claude claims that, even accounting for all of this, people still often found their food to be gross.
I just made some dinner and was thinking about how salt and spices[1] now are dirt cheap, but throughout history they were precious and expensive. I did some digging and apparently low and middle class people didn't even really have access to spices. It was more for the wealthy.
Salt was important mainly to preserve food. They didn't have fridges back then! So even poor people usually had some amount of salt to preserve small quantities of food, but they had to be smart about how they allocated it.
In researching this I came to realize that throughout history, food was usually pretty gross. Meats were partially spoiled, fats went rancid, grains were moldy. This would often cause digestive problems. Food poisoning was a part of life.
Could you imagine! That must have been terrible!
Meanwhile, today, not only is it cheap to access food that is safe to eat, it's cheap to use basically as much salt and spices as you want. Fry up some potatoes in vegetable oil with salt and spices. Throw together some beans and rice. Incorporate a cheap acid if you're feeling fancy -- maybe some malt vinegar with the potatoes or white vinegar with the beans and rice. It's delicious!
I suppose there are tons of examples of how good we have it today, and how bad people had it throughout history. I like thinking about this sort of thing though. I'm not sure why, exactly. I think I feel some sort of obligation. An obligation to view these sorts of things as they actually are rather than how they compare to the Joneses, and to appreciate when I truly do have it good.
[1] It feels weird to say the phrase "salt and spices". It feels like it's an error and that I meant to say "salt and pepper". Maybe there's a more elegant way of saying "salt and spices", but it of course isn't an error.
It makes me think back to something I heard about "salt and pepper", maybe in the book How To Taste. We often think of them as going together and being on equal footing. They aren't on equal footing though, and they don't always have to go together. Salt is much more important. Most dishes need salt. Pepper is much more optional. Really, pepper is a spice, and the question is 1) if you want to add spice to your dish and 2) if so, what spice. You might not want to add spice, and if you do want to add spice, pepper might not be the spice you want to add. So maybe "salt and spices" should be a phrase that is used more often than "salt and pepper".
Would anyone be interested in having a conversation with me about morality? Either publicly[1] or privately.
I have some thoughts about morality but I don't feel like they're too refined. I'm interested in being challenged and working through these thoughts with someone who's relatively knowledgeable. I could instead spend a bunch of time eg. digging through the Stanford Encyclopedia of Philosophy to refine my thoughts, but a) I'm not motivated enough to do that and b) I think it'd be easier and more fun to have a conversation with someone about it.
- To start, I think you need to be clear about what it is you're actually asking when you talk about morality. It's important to have clear and specific questions. It's important to avoid wrong questions. When we ask if something is moral, are we asking whether it is desirable? To you? To the average person? To the average educated person? To one's Coherent Extrapolated Volition (CEV)? To some sort of average CEV? Are we asking whether it is behavior that we want to punish in order to achieve desirable outcomes for a group? Reward?
- It seems to me that a lot of philosophizing about morality and moral frameworks is about fit. Like, we have intuitions about what is and isn't moral in different scenarios, and we try to come up with general rules and frameworks that do a good job of "fitting" these intuitions.
- A lot of times our intuitions end up being contradictory. When this happens, you could spend time examining it and arriving at some sort of new perspective that no longer has the contradiction. But maybe it's ok to have these contradictions. And/or maybe it's too much work to actually get rid of them all.
- I feel like there's something to be said for more "enlightened" feelings about morality. Like if you think that A is desirable but that preference is based on incorrect belief X, and if you believed ~X you'd instead prefer B, something seems "good" about moving from A to B.
- I'm having trouble putting my finger on what I mean by the above bullet point though. Ultimately I don't see a way to cross the is-ought gap. Maybe what I mean is that I personally prefer for my moral preferences to be based on things that are true, but I can't argue that I ought to have such a preference.
- As discussed in this dialogue, it seems to me that non-naive versions of moral philosophies end up being pretty similar to one another in practice. A naive deontologist might tell you not to lie to save a child from a murderer, but a non-naive deontologist would probably weigh the "don't lie" rule against other rules and come to the conclusion that you should lie to save the child. I think in practice, things usually add up to normality.
- I kinda feel like everything is consequentialism. Consider a virtue ethicist who says that what they ultimately care about is acting in a virtuous way. Well, isn't that a consequence? Aren't they saying that the consequence they care about is them/others acting virtuously, as opposed to eg. a utilitarian caring about consequences involving utility?
[1] The feature's been de-emphasized but you can initiate a dialog from another user's profile page.
I learned about S-curves recently. It was in the context of bike networks. As you add bike infrastructure, at first it doesn't lead to much adoption because the infrastructure isn't good enough to get people to actually use it. Then you pass some threshold and you get lots of adoption. Finally, you hit a saturation point where improvements don't move the needle much because things are already good.
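To make the shape concrete, here's a minimal sketch of the kind of curve I have in mind, using a logistic function as a toy model (the parameters are made up for illustration, not taken from anything I read about bike networks):

```python
import math

def adoption(quality, midpoint=50.0, steepness=0.15):
    """Toy logistic S-curve: fraction of potential riders who adopt,
    given infrastructure quality on a 0-100 scale.
    midpoint and steepness are made-up illustrative parameters."""
    return 1 / (1 + math.exp(-steepness * (quality - midpoint)))

# Early investment barely moves adoption, mid-range investment moves it a lot,
# and late investment barely moves it again (saturation).
for q in (10, 30, 50, 70, 90):
    print(q, round(adoption(q), 3))
```

The flat start is the introduction phase, the steep middle is the growth phase, and the flat end is saturation.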
I think this is a really cool concept. I wish I knew about it when I wrote Beware unfinished bridges.
I feel like there are a lot of situations where people try to make progress on the "introduction phase" of the S-curve without having a plan for actually reaching the growth phase. It happens with bike infrastructure. If a startup founder working on a new social network did this, it'd likely be fatal. I'm struggling to come up with good examples of this though.
Also, I wonder if there is a name for this failure mode where you work on the introduction phase without having a plan for actually reaching the growth phase. Seems worth naming.
Haha, yup. I have a Shoulder Justis now that frequently reminds me of this to disambiguate words like "this" and "that", which I'm grateful for.
Yeah, that seems plausible. I have no issues with that sort of a recommendation. I think cover-to-cover recommendations also happen not infrequently though.
I don't think social obligations play much if any role in my pet peeve here. If someone recommends a book to me without considering the large investment of time I'd have to make to read it, but doesn't apply any social pressure, I'd still find that to be frustrating.
I guess it's kinda like if someone recommends a certain sandwich without factoring in the cost. Maybe the sandwich is really good, but if it's $1,000, it isn't worth it. And if it's moderately good but costs $25, it also isn't worth it. More generally, whether something is worthwhile depends on both the costs and the benefits, and I think that in making recommendations one should consider them both.
My claim isn't that they capture all the content or that they are a perfect replacement. My (implied) claim is that they are a good 80-20 option.
A pet peeve of mine is when people recommend books (or media, or other things) without considering how large of an investment they are to read. Books usually take 10 hours or so to read. If you're going to go slow and really dig into it, it's probably more like 20+ hours. To make the claim "I think you should read this book", the expected benefit should outweigh the relatively large investment of time.
Actually, no, the bar is higher than that. There are middle-ground options other than reading the book. You can find a summary, a review, listen to an interview with the author about the book, or find blog posts on the same topic. So to recommend reading the book in full, doing so has to be better than one of those middle-ground options, or worthwhile after having completed one of the middle-ground options.
To be charitable, maybe people frequently aren't being literal when they recommend books. Maybe they're not actually saying "I think it would be worth your time to read this book in full, and that you should prioritize doing so some time in the next few months". Maybe they are just saying they thought the book was solid.
Now, every program believes they give students a chance to practice because they have them work with real clients, during what is even called "practicums". But seeing clients does not count as practice, at least not according to the huge body of research in the area of skill development.
According to the science, seeing clients would be categorized, not as practice, but as "performance". In order for something to be considered practice, it needs to be focused on one skill at a time. And when you're actually seeing a client, you're having to use a dozen or more skills at once, in real time, without a chance to slow down and focus on one skill long enough to improve upon it.
The research on expertise is clear: performance, where you're doing the whole thing at once, does not lead to improvement in one's abilities. That's why therapists, on average, don't improve in their outcomes with more years of experience.
The truth is, having the chance to see more clients (gain clinical experience) does not make us better therapists. What does? Something called deliberate practice.
-- Dr. Tori Olds, Picking a Graduate Program | How to Become a Therapist - Part 4 of 6
I was thinking about what I mean when I say that something is "wrong" in a moral sense. It's frustrating and a little embarrassing that I don't immediately have a clear answer to this.
My first thought was that I'm referring to doing something that is socially suboptimal in a utilitarian sense. Something you wouldn't want to do from behind a veil of ignorance.
But I don't think that fully captures it. Suppose you catch a cold, go to a coffee shop when you're pre-symptomatic, and infect someone. I wouldn't consider that to be wrong. It was unintentional. So I think intent matters. But it doesn't have to be fully intentional either. Negligence can still be wrong.
So is it "impact + intent", then? No, I don't think so. I just bought a $5.25 coffee. I could have donated that money and fed however many starving families. From behind a veil of ignorance, I wouldn't endorse the purchase. And yet I wouldn't call it "wrong".
This thought process has highlighted for me that I'm not quite sure where to draw the boundaries. And I think this is why people talk about "gesturing". Like, "I'm trying to gesture at this idea". I'm at a place where I can gesture at what I mean by "wrongness". I can say that it is in this general area of thingspace, but can't be more precise. The less precise your boundaries/clouds, the more of a gesture it is, I suppose. I'd like to see a (canonical) post on the topic of gesturing.
In these situations I suppose there's probably wisdom in replacing the symbol with the substance. Ditching the label, talking directly about the properties, talking less about the central node.
Many people (including me) have opinions on current US president Donald Trump, none of which are relevant here because, as is well-known to LessWrong, politics is the mind-killer.
I think that "none of which are relevant" is too strong a statement and is somewhat of a misconception. From the linked post:
If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.
So one question is about how ok it is to use examples from the domain of contemporary politics. I think it's pretty widely agreed upon on LessWrong that you should aim to avoid doing so.
But another question is whether it is ok to discuss contemporary politics. I think opinions differ here. Some think it is more ok than others. Most opinions probably hover around something like "it is ok sometimes but there are downsides to doing so, so approach with caution". I took a glance at the FAQ and didn't see any discussion of or guidance on how to approach the topic.
Related: 0 and 1 Are Not Probabilities
I've been doing Quantified Intuitions' Estimation Game every month. I really enjoy it. A big thing I've learned from it is the instinct to think in terms of orders of magnitude.
Well, not necessarily orders of magnitude, but something similar. For example, a friend just asked me about building a little web app calculator to provide better handicaps in golf scrambles. In the past I'd get a little overwhelmed thinking about how much time such a project would take and default to saying no. But this time I noticed myself approaching it differently.
Will it take minutes? Eh, probably not. Hours? Possibly, but seems a little optimistic. Days? Yeah, seems about right. Weeks? Eh, possibly, but even with the planning fallacy, I'd be surprised. Months? No, it won't take that long. Years? No way.
With this approach I can figure out the right ballpark very quickly. It's helpful.
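A related trick (standard Fermi-estimation advice, not something specific to the Estimation Game): once you've bracketed the answer between a scale that feels too small and one that feels too big, the geometric mean of the two bounds is a decent point estimate.

```python
import math

def geometric_mean(low, high):
    """Point estimate between an order-of-magnitude-style lower and upper bound."""
    return math.sqrt(low * high)

# e.g. "definitely more than a full day (~8 working hours), definitely less
# than a couple of weeks (~80 working hours)" -> roughly 25 hours, i.e. days.
print(geometric_mean(8, 80))  # ~25.3
```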
Many years after having read it, I'm finding that the "Perils of Interacting With Acquaintances" section in The Great Perils of Social Interaction has really stuck with me. It is probably one of the more useful pieces of practical advice I've come across in my life. I think it's illustrated really well in this barber story:
But that assumes that you can only be normal around someone you know well, which is not true. I started using a new barber last year, and I was pleasantly surprised when instead of making small talk or asking me questions about my life, he just started talking to me like I was his friend or involving me in his conversations with the other barber. By doing so, he spared both of us the massive inauthenticity of a typical barber-customer relationship and I actually enjoy going there now.
I make it a point to "be normal" around people and it's become something of a habit. One I'm glad that I've formed.
I get the sense that autism is particularly unclear, but I haven't looked closely enough at other conditions to be confident in that.
Something I've always wondered about is what I'll call sub-threshold successes. Some examples:
- A stand-up comedian is performing. Their jokes are funny enough to make you smile, but not funny enough to pass the threshold of getting you to laugh. The result is that the comedian bombs.
- Posts or comments on an internet forum are appreciated but not appreciated enough to get people to upvote.
- A restaurant or product is good, but not good enough to motivate people to leave ratings or write reviews.
It feels to me like there is an inefficiency occurring in these sorts of situations. To get an accurate view of how successful something is you'd want to incorporate all of the data, not just data that passes whatever (positive or negative) threshold is in play. But I think the inefficiencies are usually not easy to improve on.
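As a toy illustration of the inefficiency (all of these numbers are made up): if only experiences above some threshold generate a visible signal, the visible data can badly understate how well a decent-but-not-great performer actually did.

```python
import random

random.seed(0)

# Made-up model: each audience member's enjoyment is roughly normal around 6/10,
# but they only produce a visible signal (laugh, upvote, review) above a threshold.
THRESHOLD = 8
enjoyment = [random.gauss(6, 1.5) for _ in range(1000)]

visible_signals = sum(1 for e in enjoyment if e >= THRESHOLD)
true_average = sum(enjoyment) / len(enjoyment)

print(f"True average enjoyment: {true_average:.1f} / 10")
print(f"Visible signals: {visible_signals} / 1000")
# Most of the audience quietly enjoyed it, but the observable signal looks like a flop.
```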
In A Sketch of Good Communication -- or really, in the Share Models, Not Beliefs sequence, which A Sketch of Good Communication is part of -- the author proposes that, hm, I'm not sure exactly how to phrase it.
I think the author (Ben Pace) is proposing that in some contexts, it is good to spend a lot of effort building up and improving your models of things. And that in those contexts, if you just adopt the belief of others without improving your model, well, that won't be good.
I think the big thing here is research. In the context of research, Ben proposes that it's important to build up and improve your model. And for you to share with the community what beliefs your model outputs.
This seems correct to me. But I'm pretty sure that it isn't true in other contexts.
For example, I wanted to buy a new thermometer recently. Infrared ones are convenient, so I wanted to know if they're comparably accurate to oral ones. I googled it and Cleveland Clinic says they are. Boom. Good enough for me. In this context, I don't think it was worth spending the effort updating my model of thermometer accuracy. In this context, I just need the output.
I think it'd be interesting to hear people's thoughts on when it is and isn't important to improve your models. In what contexts?
I think it'd also be interesting to hear more about why exactly it is harmful in the context of intellectual progress to stray away from building and improving your models. There's probably a lot to say. I think I remember the book Superforecasting talking about this, but I forget.
Hm. On the one hand, I agree that there are distinct things at play here and share the instinct that it'd be appropriate to have different words for these different things. But on the other hand, I'm not sure if the different words should fall under the umbrella of solitude, like "romantic solitude" and "seeing human faces solitude".
I dunno, maybe it should. After all, it seems that in different conceptualizations of solitude, it's about being isolated from something (others' minds, others' physical presence).
Ultimately, I'm trusting Newport here. I think highly of him and know that he's read a lot of relevant literature. At the same time, I still wouldn't argue too confidently that his preferred definition is the most useful one.
That makes sense. I didn't mean to imply that such an extreme degree of isolation is a net positive. I don't think it is.
That makes sense. Although I think the larger point I was making still stands: that in reading the book you're primarily consuming someone else's thoughts, just like you would be if the author sat there on the bench lecturing you (it'd be different if it were more of a two-way conversation; I should have clarified that in the post).
I suppose "primarily" isn't true for all readers, for all books. Perhaps some readers go slowly enough where they actually spend more of their time contemplating than they do reading, but I get the sense that that is pretty rare.
Cool! I have a feeling you'd like a lot of Cal Newport's work like Digital Minimalism and Deep Work.
When I'm walking around or riding the train, I want to be able to hear what's going on around me.
That makes sense about walking around, but why do you want to hear what's going on around you when you're riding the train?
Yeah, that all makes sense. I think solitude probably exists along a spectrum, where in listening to music maybe you have 8/10 solitude instead of 10/10 but in watching a TV show you only get 2/10. The relevant question is probably "to what extent are the outputs of other minds influencing your thoughts".
Actually, now that I think about it, I wonder why we're focusing on the outputs of other minds. What about other things that influence your thoughts? Like, I don't know, bumble bees flying around you? I'm afraid of bumble bees so I know I'd have trouble focusing on my own thoughts in that scenario.
That said, I'm sure that outputs of other minds are probably a large majority of what is intrusive and prevents you from focusing on your own thoughts. But it still seems to me like the thing we actually care about is being able to focus on your own thoughts, not just reducing your exposure to the outputs of other minds.
Hm. I was actually assuming in this post that the podcasts in question were actually "Effective Information" as opposed to "Trivia" or "Mental Masturbation". The issue is that even if they are "Effective Information", you also need to have solitude in your "diet", and the benefit of additional "Effective Information" probably isn't worth the cost of less solitude.
But I'm also realizing now that much of the time podcasts aren't actually "Effective Information" and are instead something like "Trivia" or "Mental Masturbation". And I see that as a separate but also relevant problem. And I think that carbs is probably a good analogy for that too. Or maybe something like refined sugar. It's a quick hedonic hit and probably ok to have in limited doses, but you really don't want to have too much of it in your diet.
The claim is that it's helpful, not that it's necessary. I certainly agree that good ideas can come from low-solitude things like conversations.
But I think solitude also has lingering benefits. Like, maybe experiencing some solitude puts you in position to have productive conversations. On the other hand, maybe if you spend weeks in solitude-debt you'll be in a poor position to have productive conversations. Something like that.
I would buy various forms of merch, including clothing. I feel very fond of LessWrong and would find it cool to wear a shirt or something with that brand.
No. DOGE didn't cross my mind. It was most directly inspired by the experience of realizing that I can factor in the journey as well as the destination with my startup.
I think it can generate negative externalities at times. However, I think that in terms of expected value it's usually positive.
In public policy, experimenting is valuable. In particular, it provides a positive externality.
Let's say that a city tests out a somewhat quirky idea like paying NIMBYs to shut up about new housing. If that policy works well, other cities benefit because now they can use and benefit from that approach.
So then, shouldn't there be some sort of subsidy for cities that test out new policy ideas? Isn't it generally a good thing to subsidize things that provide positive externalities?
I'm sure there is a lot to consider. I'm not enough of a public policy person to know what the considerations are though or how to weigh them.
Pet peeve: when places close before their stated close time. For example, I was just at the library. Their signs say that they close at 6pm. However, they kick people out at 5:45pm. This caught me off guard and caused me to break my focus at a bad time.
The reason that places do this, I assume, is because employees need to leave when their shift ends. In this case with the library, it probably takes 15 minutes or so to get everyone to leave, so they spend the last 15 minutes of their shift shooing people out. But why not make the official closing time 5:45pm while continuing to end employees' shifts at 6:00pm?
I also run into this with restaurants. With restaurants, it's a little more complicated because there are usually two different closing times that are relevant to patrons: when the kitchen closes and when doors close. Unless food is served ~immediately like at Chipotle or something, it wouldn't make sense to make these two times equivalent. If it takes 10 minutes to cook a meal, doors close at 9:00pm, and someone orders a meal at 8:59pm, well, you won't be able to serve the meal before they need to be out.
But there's an easy solution to this: just list each of the two close times. It seems like that would make everyone happy.
I wonder how much of that is actually based on science, and how much is just superstition / scams.
In basketball there isn't any certification. Coaches/trainers usually are former players themselves who have had some amount of success, so that points towards them being competent to some extent. There's also the fact that if you don't feel like you're making progress with a coach you can fire them and hire a new one. But I think there is also a reasonably sized risk of the coach lacking competence and certain players sticking with them anyway, for a variety of reasons.
I'm sure that similar things are true in other fields, including athletics but also in fields like chess where there isn't a degree you could get. In fields with certifications and degrees it probably happens less often, but I know I've dealt with my fair share of incompetent MDs and PhDs.
So ultimately, I agree with the sentiment that finding competent coaches might involve some friction, but despite that, it still feels to me like a very tractable problem. Relatedly, I'm seeing now that there has been some activity on the topic of coaching in the EA community.
What is specific, from this perspective, for AI alignment researchers? Maybe the feeling of great responsibility, higher chance of burnout and nightmares?
I don't expect that the needs of alignment researchers are too unique when compared to the needs of other intellectuals. I mention alignment researchers because I think they're a prototypical example of people having large, positive impacts on the world, as opposed to intellectuals who study string theory or something.
I was just watching this Andrew Huberman video titled "Train to Gain Energy & Avoid Brain Fog". The interviewee was talking about track athletes and stuff their coaches would have them do.
It made me think back to Anders Ericsson's book Peak: Secrets from the New Science of Expertise. The book is popular for discussing the importance of deliberate practice, but another big takeaway from the book is the importance of receiving coaching. I think that takeaway gets overlooked. Top performers in fields like chess, music and athletics almost universally receive coaching.
And at the highest levels the performers will have a team of coaches. LeBron James is famous for spending roughly $1.5 million a year on his body.
And he’s like, “Well, he’s replicated the gym that whatever team — whether it was Miami or Cleveland — he’s replicated all the equipment they have in the team’s gym in his house. He has two trainers. Everywhere he goes, he has a trainer with him.” I’m paraphrasing what he told me, so I might not be getting all these facts right. He’s got chefs. He has all the science of how to sleep. All these different things. Masseuses. Everything he does in his life is constructed to have him play basketball and to stay on the court and to be as healthy as possible and to absorb punishment when he goes into the basket and he gets crushed by people.
This makes me think about AI safety. I feel like the top alignment researchers -- and ideally a majority of competent alignment researchers -- should have such coaching and resources available to them.
I'm not exactly sure what form this would take. Academic/technical coaches? Writing coach? Performance psychologists? A sleep specialist? Nutritionist? Meditation coach?
All of this costs money of course. I'm not arguing that this is the most efficient place to allocate our limited resources. I don't have enough of an understanding of what the other options are to make such an argument.
But I will say that providing such resources to alignment researchers seems like it should pretty meaningfully improve their productivity. And if so, then we are in fact funding constrained. I recall (earlier?) conversations about funding not being a constraint, with the real constraint instead being that there aren't good places to spend such money; this seems like it could be such a place.
Also relevant is that this is perhaps an easier sell to prospective donors than something more wacky. Like, it seems like a safe bet to have a solid impact, and there's a precedent for providing expert performers with such coaching, so maybe that sort of thing is appealing to prospective donors.
Finally, I recall hearing at some point that in a field like physics, the very top researchers -- people like Einstein -- have a very disproportionate impact. If so, I'd think that it's at least pretty plausible that something similar is true in the field of AI alignment. And if it is, then it'd probably make sense to spend time 1) figuring out who the Einsteins are and then 2) investing in them and doing what we can to maximize their impact.
Wow, I just watched this video where Feynman makes an incredible analogy between the rules of chess and the rules of our physical world.
You watch the pieces move and try to figure out the underlying rules. Maybe you come up with a rule about bishops needing to stay on the same color, and that rule lasts a while. But then you realize that there is a deeper rule that explains the rule you've held to be true: bishops can only move diagonally.
I'm butchering the analogy though and am going to stop talking now. Just go watch the video. It's poetic.
One thing to keep in mind is that, from what I understand, ovens are very imprecise so you gotta exercise some judgement when using them. For example, even if you set your oven to 400°F, it might only reach 325°F. Especially if you open the oven to check on the food (that lets out a lot of heat).
I've also heard that when baking on sheet pans, you can get very different results based on how well seasoned your sheet pan is. That shouldn't affect this dish though since the intent is for the top to be the crispy part and that happens via convection rather than conduction. But maybe how high or low you place the baking dish in your oven will affect the crispiness.
As another variation, I wonder how it'd come out if you used a sheet pan instead of a baking dish. I'd think that you'd get more crispy bits because of the increase in surface area of potato that is exposed to heat. Personally I'm a big fan of those crispy bits!
You'd probably need to use multiple sheet pans, but that doesn't seem like much of an inconvenience. You can also vary the crispiness by varying the amount of exposed surface area. Like, even if you use a sheet pan you can still kinda stack the potatoes on top of one another in order to reduce the exposed surface area.
I have not seen that post. Thank you for pointing me to it! I'm not sure when I'll get to it but I added it to my todo list to read and potentially discuss further here.
Scott's take on the relative futility of resolving high-level generators of disagreement (which seems to be beyond Level 7? Not sure) within reasonable timeframes is kind of depressing.
Very interesting! This is actually the topic that I really wanted to get to. I haven't been able to figure out a good way to get a conversation or blog post started on that topic though, and my attempts to do so led me to write this (tangential) post.
I could see that happening, but in general, no, I wouldn't expect podcast hosts to already be aware of a substantial subset of arguments from the other side.
My impression is that podcasters do some prep but in general aren't spending many days let alone multiple weeks or months of prep. When you interview a wide variety of people and discuss a wide variety of topics, as many podcasters including the ones I mentioned do, I think that means that hosts will generally not be aware of a substantial subset of arguments from the other side.
For the sake of argument, I'll accept your points about memes, genes, and technology being domains where growth is usually exponential. But even if those points are true, I think we still need an argument that growth is almost always exponential across all/most domains.
The central claim that "almost all growth is exponential growth" is an interesting one. However, I am not really seeing that this post makes an argument for it. It feels more like it is just stating it as a claim.
I would expect an argument to be something like "here is some deep principle that says that growth is almost always in proportion to the thing's current size". And then to give a bunch of examples of this being the case in various domains. (I found the examples in the opening paragraph to be odd. Bike 200 miles a week or never? Huh?) I also think it'd be helpful to point out counterexamples and spend some time commenting on them.
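For what it's worth, the deep principle I'd expect such an argument to lean on is the standard one: if a quantity's growth rate is proportional to its current size, it grows exponentially. In symbols (textbook math, not something the post states):

$$\frac{dx}{dt} = kx \;\Longrightarrow\; x(t) = x(0)\,e^{kt}$$

The burden would then be to show that the "growth rate proportional to current size" condition actually holds across most of the domains the post cares about.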
[This contains spoilers for the show The Sopranos.]
In the realm of epistemics, it is a sin to double-count evidence. From One Argument Against An Army:
I talked about a style of reasoning in which not a single contrary argument is allowed, with the result that every non-supporting observation has to be argued away. Here I suggest that when people encounter a contrary argument, they prevent themselves from downshifting their confidence by rehearsing already-known support.
Suppose the country of Freedonia is debating whether its neighbor, Sylvania, is responsible for a recent rash of meteor strikes on its cities. There are several pieces of evidence suggesting this: the meteors struck cities close to the Sylvanian border; there was unusual activity in the Sylvanian stock markets before the strikes; and the Sylvanian ambassador Trentino was heard muttering about “heavenly vengeance.”
Someone comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. They have trade with us of billions of dinars annually.” “Well,” you reply, “the meteors struck cities close to Sylvania, there was suspicious activity in their stock market, and their ambassador spoke of heavenly vengeance afterward.” Since these three arguments outweigh the first, you keep your belief that Sylvania is responsible—you believe rather than disbelieve, qualitatively. Clearly, the balance of evidence weighs against Sylvania.
Then another comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. Directing an asteroid strike is really hard. Sylvania doesn’t even have a space program.” You reply, “But the meteors struck cities close to Sylvania, and their investors knew it, and the ambassador came right out and admitted it!” Again, these three arguments outweigh the first (by three arguments against one argument), so you keep your belief that Sylvania is responsible.
Indeed, your convictions are strengthened. On two separate occasions now, you have evaluated the balance of evidence, and both times the balance was tilted against Sylvania by a ratio of 3 to 1.
You encounter further arguments by the pro-Sylvania traitors—again, and again, and a hundred times again—but each time the new argument is handily defeated by 3 to 1. And on every occasion, you feel yourself becoming more confident that Sylvania was indeed responsible, shifting your prior according to the felt balance of evidence.
The problem, of course, is that by rehearsing arguments you already knew, you are double-counting the evidence. This would be a grave sin even if you double-counted all the evidence. (Imagine a scientist who does an experiment with 50 subjects and fails to obtain statistically significant results, so the scientist counts all the data twice.)
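To make the double-counting failure concrete in numbers (the likelihood ratios here are made up for illustration, not from the essay):

```python
import math

def update(log_odds, likelihood_ratio):
    """Bayesian update in log-odds form: add the log of the likelihood ratio."""
    return log_odds + math.log(likelihood_ratio)

# Three pieces of pro-guilt evidence, each (say) 3:1 in favor, counted once:
log_odds = 0.0  # start at even odds
for lr in (3, 3, 3):
    log_odds = update(log_odds, lr)

# Two pieces of exculpatory evidence, each (say) 4:1 against:
for lr in (1/4, 1/4):
    log_odds = update(log_odds, lr)

print(f"Counting everything once: odds of guilt = {math.exp(log_odds):.2f} : 1")

# The failure mode: re-rehearsing the same three pro-guilt arguments every time
# a new counterargument shows up, i.e. counting them again and again.
log_odds_bad = 0.0
for lr in (3, 3, 3):
    log_odds_bad = update(log_odds_bad, lr)
for counter in (1/4, 1/4):
    log_odds_bad = update(log_odds_bad, counter)
    for lr in (3, 3, 3):  # double-counting the already-known support
        log_odds_bad = update(log_odds_bad, lr)

print(f"Rehearsing known support each time: odds = {math.exp(log_odds_bad):.2f} : 1")
```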
I had the thought that something similar probably applies to morality as well. I'm thinking of Tony Soprano.
People say that Soprano is an asshole. Some say he is a sociopath. I'm not sure where I stand. But I finished watching The Sopranos recently and one thought that I frequently had when he'd do something harmful is that his hand was kinda forced.
For example, there was a character in the show named Adriana. Adriana became an informant to the FBI at some point. When Tony learned this, he had her killed.
Having someone killed is, in some sense, bad. But did Tony have a choice? If he didn't, she very well could have gotten Tony and the rest of the mob members sent to jail, or perhaps sentenced to death. When that is the calculus, we usually don't expect the person in Tony's shoes to prioritize the person in Adriana's shoes.
It makes me think back to when I played poker. Sometimes you end up in a bad spot. It looks like you just don't have any good options. Folding seems too nitty. Calling is gross. Raising feels dubious. No move you make will end well.
But alas, you do in fact have to make a decision. The goal is not necessarily to find a move that will be good in an absolute sense. It's to make the best move relative to the other moves you can make. To criticize someone who chooses the best move in a relative sense because it is a bad move in an absolute sense is unfair. You have to look at it from the point-of-decision.
Of course, you also want to look back at how you got yourself in the bad spot in the first place. Like if you made a bad decision on the flop that put you in a bad spot on the turn, you want to call out the play you made on the flop as bad and learn from it. But you don't want to "double count" the move you made on the flop once you've moved on to analyzing the next street.
Using this analogy, I think Tony Soprano made some incredibly bad preflop moves that set himself up for a shit show. And then he didn't do himself any favors on the flop. But once he was on later streets like the turn and river, I'm not sure how bad his decisions actually were. And more generally, I think it probably makes sense to avoid "double counting" the mistakes people made on earlier streets when they are faced with decisions on later streets.