This is a lovely post and it really resonated with me. I've yet to really orient myself in the EA world, but "fix the normalization of child abuse" is something I have in my mind as a potential cause area. Really happy to hear you've gotten out, even if the permanent damage from sleep deprivation is still sad.
I'm currently working on a text document full of equations that use variables with extremely long names. I'm in the process of simplifying it by renaming the variables. For complicated reasons, I have to do this by hand.
Just now, I noticed that there's a series of variables O1-O16, and another series of variables F17-F25. For technical reasons relating to the work I'm doing, I'm very confident that the name switch is arbitrary and that I can safely rename the F's to O's without changing the meaning of the equations.
But I'm doing this by hand. If I'm wrong, I will potentially waste a lot of work by (1) making this change, (2) making a bunch of other changes, (3) realizing I was wrong, (4) undoing all the other changes, (5) undoing this change, and (6) re-doing all the changes that came after it.
And for a moment, this spurred me to become less confident about the arbitrariness of the naming convention!
The correct thought would have been "I'm quite confident about this, but seeing as the stakes are high if I'm wrong and I can always do this later, it's still not worth it to make the changes now."
The problem here was that I was conflating "X is very likely true" with "I must do the thing I would do if X was certain". I knew instinctively that making the changes now was a bad idea, and then I incorrectly reasoned that it was because it was likely to go wrong. It's actually unlikely to go wrong, it's just that if it does go wrong, it's a huge inconvenience.
It's funny that this came up on LessWrong around this time, as I've just recently been thinking about how to get vim-like behavior out of arbitrary text boxes. Except I also have the additional problem that I'm somewhat unsatisfied with vim. I've been trying to put together my own editor with an "API first" mentality, so that I might be able to, I don't know, eventually produce some kind of GTK widget that acts like my editor by default. Or something. And then maybe it'll be easy to make a variant of, say, Thunderbird, in which the email-editing text box is one of those instead of a normal text box.
(If you're curious, I have two complaints about vim. (1) It's a little bloated, what with being able to open a terminal inside of the editor and using a presumably baked-in variant of sed to do find-and-replace rather than making you go through a generic "run such-and-such program on such-and-such text selection" command if you want the fancy sed stuff. And (2) its commands are slightly irregular, like how d/foo deletes everything up to what the cursor would land on if you just typed /foo but how dfi deletes everything up to and including what the cursor would land on if you just typed fi.)
It seems like "agent X puts a particular dollar value on human life" might be ambiguous between "agent X acts as though human lives are worth exactly N dollars each" and "agent X's internal thoughts explicitly assign a dollar value of N to a human life". I wonder if that's causing some confusion surrounding this topic. (I didn't watch the linked video.)
If you think traffic RNG is bad in the Glitchless category, you should watch someone streaming any% attempts. The current WR has a three-mile damage boost glitch that skips the better part of the commute, saving 13 minutes, and the gal who got it had to grind over 14k attempts for it (about a dozen of them got similar boosts but died on impact).
This reminds me of something I thought of a while back, that I'd like to start doing again now that I've remembered it. Whenever I sense myself getting unfairly annoyed at someone (which happens a lot) I try to imagine that I'm watching a movie in which that person is the protagonist. I imagine that I know what their story and struggles are, and that I'm rooting for them every step of the way. Now that I'm getting into fiction writing, I might also try imagining that I'm writing them as a character, which has the same vibe as the other techniques. The one time I've actually tried this so far, it worked really well!
Re the second sentence: lol. Yeah, I bet you're right.
Your last paragraph is interesting to me. I don't think I can say that I've had the same experience, though I do think that some people have that effect on me. I can think of at least one person with whom I normally don't run out of gas while talking. But I think other people actually amplify the problem. For example, I meet with three of my friends for a friendly debate on a weekly basis, and the things they say frequently run against the grain of my mind, and I often run out of gas while trying to figure out how to respond to them.
This very much matches my own experiences! Keeping something in the back of my mind has always been somewhere between difficult and impossible for me, and for that reason I set timers for all important events during the day (classes, interviews, meetings, etc). I also carry a pocket-sized notebook and a writing utensil with me wherever I go, in case I stumble on something that I have to deal with "later".
I have also found my attention drifting away in the middle of conversations, and I too have cultivated the skill of non-rudely admitting to it and asking the other person to repeat themselves.
As for improvising... I play piano, and the main thing I do is improvise! I find improv sessions much easier to stay engaged in than sessions spent trying to read through sheet music.
And, I also have a ton of projects that are 1/4 to 3/4 done (though I think that's probably common to a larger subset of people than the other things).
So thanks for sharing your experiences! I had never seriously considered the possibility that I had ADHD before, even though I've known for a while that I have a somewhat atypical mind. I'm gonna look into that! Makes note in said pocket-sized notebook.
Side note: I think one reason I never wondered whether I have ADHD is that, in my perception, claiming to have ADHD is something of a "fad" among people in my age group, and I think my brain sort of silently assumed that that means it's not also a real condition that people can actually suffer from. That's gonna be a WHOOPS from me, dawg.
When somebody is advocating taking an action, I think it can be productive to ask "Is there a good reason to do that?" rather than "Why should we do that?" because the former phrasing explicitly allows for the possibility that there is no good reason, which I think makes it both intellectually easier to realize that and socially easier to say it.
To answer that question, it might help to consider when you even need to measure effort. Off the cuff, I'm not actually sure there are any such cases (?). Maybe you're an employer and you need to measure how much effort your employees are putting in? But on second thought that's actually a classic case where you don't need to measure effort, and you only need to measure results.
pain isn't the unit of effort, but for many things it's correlated with whatever that unit is.
I think this correlation only appears if you're choosing strategies well. If you're tasked with earning a lot of money to give to charity, and you generate a list of 100 possible strategies, then you should toss out all the strategies that don't lie on the Pareto boundary of pain and success. (In other words, if strategy A is both less effective and more painful than strategy B, then you should never choose strategy A.) Pain will correlate with success in the remaining pool of strategies, but it doesn't correlate in the set of all strategies. And OP is saying that people often choose strategies that are off the Pareto boundary because they specifically select pain-inducing strategies under the misconception that those strategies will all be successful as well.
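The toss-out step can be sketched in a few lines. This is just an illustration: the strategy names and their (pain, success) numbers are all made up.

```python
# Keep only strategies on the Pareto boundary of (low pain, high success).
# A strategy is dominated if some other strategy is at least as successful
# and no more painful, and strictly better on at least one of the two axes.

def pareto_front(strategies):
    """strategies: list of (name, pain, success) tuples."""
    front = []
    for name, pain, success in strategies:
        dominated = any(
            p <= pain and s >= success and (p < pain or s > success)
            for _, p, s in strategies
        )
        if not dominated:
            front.append((name, pain, success))
    return front

# Hypothetical strategies: pain on a 0-10 scale, success in dollars raised.
candidates = [
    ("cold-call donors", 8, 9000),
    ("run a bake sale",  2, 1000),
    ("grind overtime",   9, 4000),  # dominated: more painful AND less successful than cold-calling
    ("write a grant",    3, 6000),
]

print(pareto_front(candidates))  # "grind overtime" is the only strategy tossed out
```

Within the surviving pool, accepting more pain does buy more success (the front is monotone by construction), which is exactly the correlation that disappears once dominated strategies are back in the set.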
For what it's worth, I value you even though you're a stranger and even if your life is still going poorly. I often hear people saying how much better their life got after 30, after 40, after 50. Imagine how much larger the effect could be after cryosuspension!
I've been thinking of signing up for cryonics recently. The main hurdle is that it seems like it'll be kind of complicated, since at the moment I'm still on my parents' insurance, and I don't really know how all this stuff works. I've been worrying that the ugh field surrounding the task might end up being my cause of death by causing me to look on cryonics less favorably just because I subconsciously want to avoid even thinking about what a hassle it will be.
But then I realized that I can get around the problem by pre-committing to sign up for cryonics no matter what, then just cancelling it if I decide I don't want it.
It will be MUCH easier to make an unbiased decision if choosing cryonics means doing nothing rather than meaning that I have to go do a bunch of complicated paperwork now. It will be well worth a few months (or even years) of dues.
Eliezer, you're definitely setting up a straw man here. Of course it's not just you -- pretty much everybody suffers from this particular misunderstanding of logical positivism.
How do you know that the phrase "logical positivism" refers to the correct formulation of the idea, rather than an exaggerated version? I have no trouble at all believing that a group of people discovered the very important notion that untestable claims can be meaningless, and then accidentally went way overboard into believing that difficult-to-test claims are meaningless too.
There's evidence to be had in the fact that, though it's been known for a long time, it's not a big field of study with clear experts.
This is true. It's only a mild comfort to me, though, since I don't have too much faith in humanity's ability to conjure up fields of study for important problems. But I do have some faith.
From very light googling, it seems likely to happen over hundreds or thousands of years, which puts it pretty far down the list of x-risk worries IMO.
Also true. This makes me update away from "we might wake up dead tomorrow" and towards "the future might be pretty odd, like maybe we'll all wear radiation suits when we're outside for a few generations".
('overdue') presumes some knowledge of mechanism, which I don't have. Roughly speaking it's a 1 in 300,000 risk each year and not extinction level.
Am I misunderstanding, or is this an argument from ignorance? The article says we're overdue; that makes it sound like someone has an idea of what the mechanism is, and that person is saying that according to their model, we're overdue. Actually, come to think of it, "overdue" might not imply knowledge of a mechanism at all! Maybe we simply have good reason to believe that this has happened about every 300,000 years for ages, and conclude that "we're overdue" is a good guess.
it's not as though the field temporarily disappears completely!
I'll just throw in my two cents here and say that I was somewhat surprised by how serious Ben's post is. I was around for the Petrov Day celebration last year, and I also thought of it as just a fun little game. I can't remember if I screwed around with the button or not (I can't even remember if there was a button for me).
Then again, I do take Ben's point: a person does have a responsibility to notice when something that's being treated like a game is actually serious and important. Not that I think 24 hours of LW being down is necessarily "serious and important".
Overall, though, I'm not throwing much of a reputation hit (if any at all) into my mental books for you.
Yeah. This post could also serve, more or less verbatim, as a write-up of my own current thoughts on the matter. In particular, this section really nails it:
As above, my claim is not that the photon disappears. That would indeed be a silly idea. My claim is that the very claim that a photon "exists" is meaningless. We have a map that makes predictions. The map contains a photon, and it contains that photon even outside any areas relevant to predictions, but why should I care? The map is for making predictions, not for ontology.
I don't suppose that. I suppose that the concept of a photon actually existing is meaningless and irrelevant to the model.
This latter belief is an "additional fact". It's more complicated than "these equations describe my expectations".
And the two issues you mention — the spaceship that's leaving Earth to establish a colony that won't causally interact with us, and the question of whether other people have internal experiences — are the only two notes of dissonance in my own understanding.
(Actually, I do disagree with "altruism is hard to ground regardless". For me, it's very easy to ground. Supposing that the question "Do other people have internal conscious experiences?" is meaningful and that the answer is "yes", I just very simply would prefer those people to have pleasant experiences rather than unpleasant ones. Then again, you may mean that it's hard to convince other people to be altruistic, if that isn't their inclination. In that case, I agree.)
Thanks for pointing this out. I think the OP might have gotten their conclusion from this paragraph:
(Note that, in the web page that the OP links to, this very paragraph is quoted, but for some reason "energy" is substituted for "center-of-mass". Not sure what's going on there.)
In any case, this paragraph makes it sound like participants who inherited a wrong theory did do worse on tests of understanding (even though participants who inherited some theory did the same on average as those who inherited only data, which I guess implies that those who inherited a right theory did better). I'm slightly put off by the fact that this nuance isn't present in the OP's post, and that they haven't responded to your comment, but not nearly as much as I was after reading only your comment, before I went and read (the first 200 lines of) the paper for myself.
Yeah, I just... stopped worrying about these kinds of things. (In my case, "these kinds of things" refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can't win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; [...]
I see. In that case, I think we're reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian -- you can often say things like "if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I'll just eat that cost; I need groceries!". My state of uncertainty is that I've barely put five minutes of thought into the question "I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long."
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
Well, that's another reference to "popular" theism. Popular theism is a subset of theism in general, which itself is a subset of "worlds in which there's something I should be doing that has infinite importance".
On the other hand, if you assume an evil god, then... maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O
So... you can't really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
This advice makes sense, though given the state of uncertainty described above, I would say I'm already on it.
Psychologically, if you can't get rid of the idea of supernatural, maybe it would be better to believe in an actually good god. [...]
This is a good fallback plan for the contingency in which I can't figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!
[...] your theory can still benefit from some concepts having shorter words for historical reasons [...]
Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...
But historical evidence shows that humans are quite bad at this.
...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a "reasonable" set of priors.
I would still hesitate to call it a "formalism", though IIRC you haven't used that word. In my re-listen of the sequences, I've just gotten to the part where Eliezer uses that word. Well, I guess I'll take it up with somebody who calls it that.
By the way, it's just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam's razor. I'm nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
[...] The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file.
Insightful comments! I see the connection: really, every compression of a file is a compression into the shortest program that will output that file, where the programming language is the decompression algorithm and the search algorithm that finds the shortest program isn't guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really really apt decompression routine (one that captures very well the nuanced nonrandomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
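A toy way to see the connection, using `zlib` as a stand-in: the decompressor plays the role of the programming language, and the compressed bytes are the "program" that outputs the file. Structured data admits a short program; random data doesn't.

```python
import os
import zlib

# A real compressor is a crude stand-in for "shortest program that
# outputs the file": the decompressor is the language, the compressed
# bytes are the program.

structured = b"ab" * 5000        # highly regular: a short "program" suffices
random_ish = os.urandom(10000)   # no structure for the compressor to exploit

# The regular file compresses to a tiny fraction of its size; the random
# one stays about as long as it started (or even slightly longer).
print(len(zlib.compress(structured)))
print(len(zlib.compress(random_ish)))
```

Of course `zlib` is the "decent but imperfect search algorithm" case: it only finds short programs expressible in its own limited language of repeats and back-references, not the shortest program in any general sense.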
> But then my concern just transforms into "what if there's a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc".
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Well, if the "inside/outside the universe" distinction is going to mean "is/isn't causally connected to the universe at all" and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.
But in any case, the distinction isn't too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I'd be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.
Aha, no, the mind reading part is just one of several cultures I'm mentioning. (Guess Culture, to be exact.) If I default to being an Asker but somebody else is a Guesser, I might have the following interaction with them:
Me: [looking at some cookies they just made] These look delicious! Would it be all right if I ate one?
Them: [obviously uncomfortable] Uhm... uh... I mean, I guess so...
Here, it's retroactively clear that, in their eyes, I've overstepped a boundary just by asking. But I usually can't tell in advance what things I'm allowed to ask and what things I'm not allowed to ask. There could be some rule that I just haven't discovered yet, but because I haven't discovered it yet, it feels to me like each case is arbitrary, and thus it feels like I'm being required to read people's minds each time. Hence why I'm tempted to call Guess Culture "Read-my-mind Culture".
(Contrast this to Ask Culture, where the rule is, to me, very simple and easy to discover: every request is acceptable to make, and if the other person doesn't want you to do what you're asking to do, they just say "no".)
The Civ analogy makes sense, and I certainly wouldn't stop at disproving all actually-practiced religions (though at the moment I don't even feel equipped to do that).
Well, you cannot disprove such a thing, because it is logically possible. (Obviously, "possible" does not automatically imply "it happened".) But unless you assume it is "simulations all the way up", there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
Are you sure it's logically possible in the strict sense? Maybe there's some hidden line of reasoning we haven't yet discovered that shows that this universe isn't a simulation! (Of course, there's a lot of question-untangling that has to happen first, like whether "is this a simulation?" is even an appropriate question to ask. See also: Greg Egan's book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)
It's just a cosmic horror that you need to learn to live with. There are more.
This sounds like the kind of thing someone might say who is already relatively confident they won't suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
(WARNING: graphic imagery) ...upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).
Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn't still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?
Any programming language; for large enough values it doesn't matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and implement the algorithm in Python. So if you chose a wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied to debating what happens in general when the data grow.
The constant-sized penalty makes sense. But I don't understand the claim that this concept is usually applied in the context of looking at how things grow. Occam's razor is (apparently) formulated in terms of raw Kolmogorov complexity -- the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X.
Let's say general relativity is being compared against Theory T, and the programming language is Python. Doesn't it make a huge difference whether you're allowed to "pip install general-relativity" before you begin?
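To make the worry concrete, here's a toy sketch of the constant-penalty argument. All the byte counts are hypothetical numbers invented for illustration, not real measurements of anything.

```python
# Toy model of the constant-penalty argument. In a language where the
# library is preinstalled, the "program" for general relativity is just
# an import plus a call; in a bare language you must inline the library.

LIBRARY_SIZE = 50_000  # hypothetical: size of an inlined general-relativity library
CALL_SIZE = 40         # hypothetical: size of "import gr; gr.predict(data)"

def description_length(theory_code_size, library_preinstalled):
    # If the library is preinstalled, its size counts as part of the
    # language, not the program, so the theory looks much shorter.
    if library_preinstalled:
        return theory_code_size
    return theory_code_size + LIBRARY_SIZE

print(description_length(CALL_SIZE, library_preinstalled=True))   # 40
print(description_length(CALL_SIZE, library_preinstalled=False))  # 50040
```

The gap between the two languages is indeed a fixed constant (here `LIBRARY_SIZE`), which is why it washes out asymptotically; but when comparing two specific finite theories, that constant can single-handedly decide which one looks "simpler", which is exactly the concern about being allowed to "pip install general-relativity" first.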
But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
I agree that these intuitions can exist, but if I'm going to use them, then I detest this process being called a formalization! If I'm allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don't I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form "programming languages that generate priors that work tend to have characteristic X" can be transformed into wisdom of the form "priors that work tend to have characteristic X".
Just an intuition pump: [...]
I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn't seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.
(Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Can I ask which related concepts you mean?
[...] so it is the complexity of the outside universe.
Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into "what if there's a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc".
Epistemic status: really shaky, but I think there's something here.
I naturally feel a lot of resistance to the way culture/norm differences are characterized in posts like Ask and Guess and Wait vs Interrupt Culture. I naturally want to give them little pet names, like:
Guess culture = "read my fucking mind, you badwrong idiot" culture.
Ask culture = nothing, because this is just how normal, non-insane people act.
I think this feeling is generated by various negative experiences I've had with people around me, who, no matter where I am, always seem to share between them one culture or another that I don't really understand the rules of. This leads to a lot of interactions where I'm being told by everyone around me that I'm being a jerk, even when I can "clearly see" that there is nothing I could have done that would have been correct in their eyes, or that what they wanted me to do was impossible or unreasonable.
But I'm starting to wonder if I need to let go of this. When I feel someone is treating me unfairly, it could just be that (1) they are speaking in Culture 1, while (2) I am listening in Culture 2 and hearing something they don't mean to transmit. If I were more tuned in to what people meant to say, my perception of people who use other norms might change.
I feel there's at least one more important pair of cultures, and although I haven't mentioned it yet, it's the one I had in mind most while writing this post. Something like:
Culture 1: Everyone speaks for themselves only, unless explicitly stated otherwise. Putting words in someone's mouth or saying that they are "implying" something they didn't literally say is completely unacceptable. False accusations are taken seriously and reflect poorly on the accuser.
Culture 2: The things you say reflect not only on you but also on people "associated" with you. If X is what you believe, you might have to say Y instead if saying X could be taken the wrong way. If someone is being a jerk, you don't have to extend the courtesy of articulating their mistake to them correctly; you can just shun them off in whatever way is easiest.
I don't really know how real this dichotomy is, and if it is real, I don't know for sure how I feel about one being "right" and the other being "wrong". I tried semi-hard to give a neutral take on the distinction, but I don't think I succeeded. Can people reading this tell which culture I naturally feel opposed to? Do you think I've correctly put my finger on another real dichotomy? Which set of norms, if either, do you feel more in tune with?
I wouldn't call the dead chieftain a god -- that would just be a word game.
But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution, were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.
Wait wait! You say a god-like being created by evolution cannot be a creator of the universe. But that's only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful "alien" in another universe, who itself came about from an evolutionary process in its own universe.
It might be "omniscient" in the sense that it can think 1000x as fast as us and has 1000x as much working memory and is familiar with thinking habits that are 1000x as good as ours, but that's a moot point. The real thing I'm worried about isn't whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
I haven't yet put much thought into this, since I only recently came to believe that this topic merits serious thought, but the existence of such a powerful creature seems like a plausible avenue to the conclusion "I have an infinite fate and it depends on me doing/avoiding X".
[...] Occam's razor [...]
This is another area where my understanding could stand to be improved (and where I expect it will be during my next read-through of the sequences). I'm not sure exactly what kind of simplicity Occam's razor uses. Apparently it can be formalized as Kolmogorov complexity, but the only definition I've ever found for that term is "the Kolmogorov complexity of X is the length of the shortest computer program that would output X". But this definition is itself in need of formalization. Which programming language? And what if X is something other than a stream of bits, such as a dandelion? And even once that's answered, I'm not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
(All that being said, I'd like to note that I'm keeping in mind that just because I don't understand these things doesn't mean there's nothing to them. Do you know of any good learning resources for someone who has my confusions about these topics?)
And it's not like you created the universe by simulating it, because you are merely following the mathematical rules; so it's more like the math created that universe and you are only observing it.
If the beings in that mathematical universe will pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math.
That much makes sense, but I think it excludes a possibly important class of universe that is based on math but also depends on a constant stream of data from an outside source. Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
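A minimal sketch of such a ruleset, with an invented rule: standard Conway updates when the switch is off, and a tweaked birth rule when it's on. The specific rule tweak is arbitrary; the point is only that the trajectory is no longer a function of the initial state alone.

```python
# A Life-like rule whose update depends on one outside bit (the "light
# switch" in the attic). Cells are a set of live (x, y) coordinates.

def neighbors(cells, x, y):
    """Count live neighbors of (x, y)."""
    return sum(
        (x + dx, y + dy) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )

def step(cells, switch_on):
    """One tick; switch_on is the external bit fed into the rules."""
    # Invented rule: the switch adds 6 to the set of birth counts.
    birth = {3, 6} if switch_on else {3}
    survive = {2, 3}
    candidates = {(x + dx, y + dy) for (x, y) in cells
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    nxt = set()
    for cell in candidates:
        n = neighbors(cells, *cell)
        if cell in cells:
            if n in survive:
                nxt.add(cell)
        elif n in birth:
            nxt.add(cell)
    return nxt

blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker, False))  # the familiar oscillation, now one input among two
```

A dead cell with exactly six live neighbors is born only when the switch is on, so by flipping the switch at the right ticks, an outside observer can steer the simulated world while still "following the mathematical rules" at every step.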
You make a good point -- even if my belief was technically true, it could still have been poorly framed and not actionable (is there a name for this failure mode?).
But in fact, I think it's not even obvious that it was technically true. If we say "calories in" is the sum of the calorie counts on the labels of each food item you eat (let's assume the labels are accurate) then could there not still be some nutrient X that needs to be present for your body to extract the calories? Say, you need at least an ounce of X to process 100 calories? If so, then one could eat the same amount of food, but less X, and potentially lose weight.
Or perhaps the human body can only process food between four and eight hours after eating it, and it doesn't try as hard to extract calories if you aren't being active, so scheduling your meals to take place four hours before you sit around doing nothing would make them "count less".
Calories are (presumably?) a measure of chemical potential energy, but remember that matter itself can also be converted into energy. There's no antimatter engine inside my gut, so my body fails to extract all of the energy present in each piece of food. Couldn't the mechanism of digestion also fail to extract all the chemical potential energy of species "calorie"?
Thanks for the feedback! Here's another one for ya. A relatively long time ago I used to be pretty concerned about Pascal's wager, but then I devised some clever reasoning why it all cancels out and I don't need to think about it. I reasoned that one of three things must be true:
1. I don't have an immortal soul. In this case, I might as well be a good person.
2. I have an immortal soul, and after my bodily death I will be assigned to one of a handful of infinite fates, depending on how good of a person I was. In this case it's very important that I be a good person.
3. Same as above, but the decision process is something else. In this case I have no way of knowing how my infinite fate will be decided, so I might as well be a good person during my mortal life and hope for the best.
But then, post-LW, I realized that there are two issues with this:
1. It doesn't make any sense to separate out case 2 from the enormous ocean of possibilities allowed for by case 3. Or rather, I can separate it, but then I need to probabilistically penalize it relative to case 3, and I also need to slightly shift the "expected judgment criterion" found in case 3 away from "being a good person is the way to get a good infinite fate", and it all balances out.
2. More importantly, this argument flippantly supposes that I have no way of discerning what process, if any, will be used to assign me an infinite fate. An infinite fate, mind you. I ought to be putting in more thought than this even if I thought the afterlife only lasted an hour, let alone eternity.
So now I am back to being rather concerned about Pascal's wager, or more generally, the possibility that I have an immortal soul and need to worry about where it eventually ends up.
From my first read-through of the sequences I remember that they claim to show that the idea of there being a god is somewhat nonsensical, but I didn't quite catch the argument the first time around. So my first line of attack is to read through the sequences again, more carefully this time, and see if they really do give a valid reason to believe that.
This belief wasn't really affecting my eating habits, so I don't think I'll be changing much. My rules are basically:
1. No meat (I'm a vegetarian for moral reasons).
2. If I feel hungry but I can tell my stomach is full by looking at or touching my belly, I'm probably just bored or thirsty, and I should consider not eating anything.
3. Try to eat at least a meal's worth of "light" food (like toast or cereal, as opposed to pizza or nachos) per day. This last rule is just to keep me from getting stomach aches, which happens if I eat too much "heavy" food in too short a time span.
I think I might contend that this kind of reflects an agnostic position. But I'm glad you asked, because I hadn't noticed before that rule 2 actually does implicitly assume some relationship between "amount of food" and "weight change", and is put in place so I don't gain weight. So I guess I should really have said that what I tossed out the window was the extra detail that calories alone determine the effect food will have on one's weight. I still believe, for normal cases, that taking the same eating pattern but scaling it up (eating more of everything but keeping the ratios the same) will result in weight gain.
It's happened again: I've realized that one of my old beliefs (pre-LW) is just plain dumb.
I used to look around at all the various diets (Paleo, Keto, low carb, low fat, etc.) and feel angry at people for having such low epistemic standards. Like, there's a new theory of nutrition every two years, and people still put faith in them every time? Everybody swears by a different diet and this is common knowledge, but people still swear by diets? And the reasoning is that "fat" (the nutrient) has the same name as "fat" (the body part people are trying to get rid of)?
Then I encountered the "calories in = calories out" theory, which says that the only thing you need to do to lose weight is to make sure that you burn more calories than you eat.
And I thought to myself, "yeah, obviously."
Because, you see, if the orthodox asserts X and the heterodox asserts Y, and the orthodox is dumb, then Y must be true!
Anyway, I hadn't thought about this belief in a while, but I randomly remembered it a few minutes ago, and as soon as I remembered its origins, I chucked it out the window.
(PS: I wouldn't be flabbergasted if the belief turned out true anyway. But I've reverted my map from the "I know how the world is" state to the "I'm awaiting additional evidence" state.)
In a normal scientific field, you build a theory, push it to the limit with experimental evidence, and then replace it with something better when it breaks down.
LW-style rationality is not a normal scientific field.
I was under the impression that CFAR was doing something like this, using evidence to figure out which techniques actually do what they seem like they're doing. If not... uh-oh! (Uh-oh in the sense that I believed something for no reason, not in the sense that CFAR would therefore be badwrong in my eyes.)
It's a community dialog centered around a shared set of wisdom-stories. [...] I posit that we are likely to be an average example of such a community, with an average amount of wisdom and an average set of foibles.
I'm not sure I know what kind of community you're talking about. Are there other readily-available examples?
One of those foibles will be [...] Another will be [...] And a third will be [...]
How do you know?
More charitably, I do think these are real risks. Especially the first, which I think I may fall victim to, at least with Eliezer's writings.
My anxiety is that I/we are getting off-track, alienated from ourselves, and obsessed with proxy metrics for rationality. [...] We focus on what life changes fit into the framework or what will be interesting to others in this community, rather than what we actually need to do. I'd like to see more storytelling and attempts to draw original wisdom from them, and more contrarian takes/heresy.
My current belief (and strong hope) is that the attitude of this community is exactly such that if you are right about that, you will be able to convince people of it. "You're not making improvements, you're just roleplaying making improvements" seems like the kind of advice a typical LessWronger would be open to hearing.
By the way, I saw your two recent posts (criticism of popular LW posts, praise of popular LW posts) and I think they're good stuff. The more I think on this, the more I wonder if the need for "contrarian takes" of LW content has been a blind spot for me in my first year of rationality. It's an especially insidious one if so, because I normally spit out contrarian takes as naturally as I breathe.
Sorry that this is all horrible horrible punditry, darkly hinting and with no verifiable claims, but I don't have the time to make it sharper.