Thanks for pointing this out. I think the OP might have gotten their conclusion from this paragraph:
(Note that, in the web page that the OP links to, this very paragraph is quoted, but for some reason "energy" is substituted for "center-of-mass". Not sure what's going on there.)
In any case, this paragraph makes it sound like participants who inherited a wrong theory did do worse on tests of understanding (even though participants who inherited some theory did the same on average as those who inherited only data, which I guess implies that those who inherited a right theory did better). I'm slightly put off by the fact that this nuance isn't present in the OP's post, and that they haven't responded to your comment, but not nearly as much as I had been when I'd read only your comment, before I went to read (the first 200 lines of) the paper for myself.
Yeah, I just... stopped worrying about these kinds of things. (In my case, "these kinds of things" refers, e.g., to very unlikely Everett branches, which I still consider more likely than gods.) You just can't win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; [...]
I see. In that case, I think we're reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian -- you can often say things like "if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I'll just eat that cost; I need groceries!". My state of uncertainty is that I've barely put five minutes of thought into the question "I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long."
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
Well, that's another reference to "popular" theism. Popular theism is a subset of theism in general, which itself is a subset of "worlds in which there's something I should be doing that has infinite importance".
On the other hand, if you assume an evil god, then... maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O
So... you can't really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
This advice makes sense, though given the state of uncertainty described above, I would say I'm already on it.
Psychologically, if you can't get rid of the idea of supernatural, maybe it would be better to believe in an actually good god. [...]
This is a good fallback plan for the contingency in which I can't figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!
[...] your theory can still benefit from some concepts having shorter words for historical reasons [...]
Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...
But historical evidence shows that humans are quite bad at this.
...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a "reasonable" set of priors.
I would still hesitate to call it a "formalism", though IIRC I don't think you've used that word. In my re-listen of the sequences, I've just gotten to the part where Eliezer uses that word. Well, I guess I'll take it up with somebody who calls it that.
By the way, it's just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam's razor. I'm nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
[...] The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file.
Insightful comments! I see the connection: really, every compression of a file is a translation of it into a short program that will output that file, where the programming language is the decompression algorithm and the search algorithm that finds the short program isn't guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really, really apt decompression routine (one that captures very well the nuanced nonrandomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
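A minimal sketch of this correspondence, using zlib purely as a stand-in for the decompressor-as-programming-language (the function name and sample data are mine, not from the comment above):

```python
import os
import zlib

def approx_program_length(data: bytes) -> int:
    # The compressed bytes play the role of the "program"; zlib's
    # decompressor plays the role of the programming language. A real
    # Kolmogorov-style measure would use the *shortest* program, which
    # is uncomputable; zlib is just a decent but imperfect searcher.
    return len(zlib.compress(data, level=9))

structured = b"ab" * 500        # highly regular: admits a short "program"
random_ish = os.urandom(1000)   # mostly incompressible: long "program"

assert approx_program_length(structured) < approx_program_length(random_ish)
```

The gap between zlib and the hypothetical best compressor is exactly the gap between a decent search heuristic and the shortest-program oracle.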
> But then my concern just transforms into "what if there's a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc".
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Well, if the "inside/outside the universe" distinction is going to mean "is/isn't causally connected to the universe at all" and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.
But in any case, the distinction isn't too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I'd be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.
Aha, no, the mind reading part is just one of several cultures I'm mentioning. (Guess Culture, to be exact.) If I default to being an Asker but somebody else is a Guesser, I might have the following interaction with them:
Me: [looking at some cookies they just made] These look delicious! Would it be all right if I ate one?
Them: [obviously uncomfortable] Uhm... uh... I mean, I guess so...
Here, it's retroactively clear that, in their eyes, I've overstepped a boundary just by asking. But I usually can't tell in advance which things I'm allowed to ask and which things I'm not. There could be some rule that I just haven't discovered yet, but because I haven't discovered it yet, it feels to me like each case is arbitrary, and thus it feels like I'm being required to read people's minds each time. Hence my temptation to call Guess Culture "Read-my-mind Culture".
(Contrast this to Ask Culture, where the rule is, to me, very simple and easy to discover: every request is acceptable to make, and if the other person doesn't want you to do what you're asking to do, they just say "no".)
The Civ analogy makes sense, and I certainly wouldn't stop at disproving all actually-practiced religions (though at the moment I don't even feel equipped to do that).
Well, you cannot disprove such a thing, because it is logically possible. (Obviously, "possible" does not automatically imply "it happened".) But unless you assume it is "simulations all the way up", there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
Are you sure it's logically possible in the strict sense? Maybe there's some hidden line of reasoning we haven't yet discovered that shows that this universe isn't a simulation! (Of course, there's a lot of question-untangling that has to happen first, like whether "is this a simulation?" is even an appropriate question to ask. See also: Greg Egan's book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)
It's just a cosmic horror that you need to learn to live with. There are more.
This sounds like the kind of thing someone might say who is already relatively confident they won't suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
(WARNING: graphic imagery) ...upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).
Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn't still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?
Any programming language; for large enough values it doesn't matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and then implement the algorithm in Python. So if you chose the wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied when debating what happens in general as the data grow.
The constant-sized penalty makes sense. But I don't understand the claim that this concept is usually applied in the context of looking at how things grow. Occam's razor is (apparently) formulated in terms of raw Kolmogorov complexity -- the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X.
Let's say general relativity is being compared against Theory T, and the programming language is Python. Doesn't it make a huge difference whether you're allowed to "pip install general-relativity" before you begin?
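For reference, the "constant penalty" the parent comment appeals to is the invariance theorem of Kolmogorov complexity; a sketch of the two claims in play (notation mine):

```latex
% Invariance theorem: for any two universal languages U and V there is
% a constant c_{U,V} (roughly, the length of a V-interpreter written
% in U) such that, for every string x,
K_U(x) \le K_V(x) + c_{U,V}.
% The Occam prior under discussion assigns each hypothesis x
P(x) \propto 2^{-K(x)}.
```

The constant washes out in asymptotic arguments, but in a one-shot comparison of two fixed theories nothing forces $c_{U,V}$ to be small relative to the programs being compared, which is the worry about "pip install general-relativity".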
But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
I agree that these intuitions can exist, but if I'm going to use them, then I detest this process being called a formalization! If I'm allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don't I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form "programming languages that generate priors that work tend to have characteristic X" can be transformed into wisdom of the form "priors that work tend to have characteristic X".
Just an intuition pump: [...]
I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn't seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.
(Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Can I ask which related concepts you mean?
[...] so it is the complexity of the outside universe.
Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into "what if there's a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc".
Epistemic status: really shaky, but I think there's something here.
I naturally feel a lot of resistance to the way culture/norm differences are characterized in posts like Ask and Guess and Wait vs Interrupt Culture. I naturally want to give them little pet names, like:
Guess culture = "read my fucking mind, you badwrong idiot" culture.
Ask culture = nothing, because this is just how normal, non-insane people act.
I think this feeling is generated by various negative experiences I've had with people around me, who, no matter where I am, always seem to share between them one culture or another that I don't really understand the rules of. This leads to a lot of interactions where I'm being told by everyone around me that I'm being a jerk, even when I can "clearly see" that there is nothing I could have done that would have been correct in their eyes, or that what they wanted me to do was impossible or unreasonable.
But I'm starting to wonder if I need to let go of this. When I feel someone is treating me unfairly, it could just be because (1) they are speaking in Culture 1, and (2) I am listening in Culture 2 and hearing something they don't mean to transmit. If I were more tuned in to what people meant to say, my perception of people who use other norms might change.
I feel there's at least one more important pair of cultures, and although I haven't mentioned it yet, it's the one I had in mind most while writing this post. Something like:
Culture 1: Everyone speaks for themselves only, unless explicitly stated otherwise. Putting words in someone's mouth or saying that they are "implying" something they didn't literally say is completely unacceptable. False accusations are taken seriously and reflect poorly on the accuser.
Culture 2: The things you say reflect not only on you but also on people "associated" with you. If X is what you believe, you might have to say Y instead if saying X could be taken the wrong way. If someone is being a jerk, you don't have to extend the courtesy of articulating their mistake to them correctly; you can just shun them off in whatever way is easiest.
I don't really know how real this dichotomy is, and if it is real, I don't know for sure how I feel about one being "right" and the other being "wrong". I tried semi-hard to give a neutral take on the distinction, but I don't think I succeeded. Can people reading this tell which culture I naturally feel opposed to? Do you think I've correctly put my finger on another real dichotomy? Which set of norms, if either, do you feel more in tune with?
I wouldn't call the dead chieftain a god -- that would just be a word game.
But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution, were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.
Wait wait! You say a god-like being created by evolution cannot be a creator of the universe. But that's only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful "alien" in another universe, who itself came about from an evolutionary process in its own universe.
It might be "omniscient" in the sense that it can think 1000x as fast as us and has 1000x as much working memory and is familiar with thinking habits that are 1000x as good as ours, but that's a moot point. The real thing I'm worried about isn't whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
I haven't yet put much thought into this, since I only recently came to believe that this topic merits serious thought, but the existence of such a powerful creature seems like a plausible avenue to the conclusion "I have an infinite fate and it depends on me doing/avoiding X".
[...] Occam's razor [...]
This is another area where my understanding could stand to be improved (and where I expect it will be during my next read-through of the sequences). I'm not sure exactly what kind of simplicity Occam's razor uses. Apparently it can be formalized as Kolmogorov complexity, but the only definition I've ever found for that term is "the Kolmogorov Complexity of X is the length of the shortest computer program that would output X". But this definition is itself in need of formalization. Which programming language? And what if X is something other than a stream of bits, such as a dandelion? And even once that's answered, I'm not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
(All that being said, I'd like to note that I'm keeping in mind that just because I don't understand these things doesn't mean there's nothing to them. Do you know of any good learning resources for someone who has my confusions about these topics?)
And it's not like you created the universe by simulating it, because you are merely following the mathematical rules; so it's more like the math created that universe and you are only observing it.
If the beings in that mathematical universe will pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math.
That much makes sense, but I think it excludes a possibly important class of universe that is based on math but also depends on a constant stream of data from an outside source. Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
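A toy sketch of such a rule set (the function, the rule tweak, and the blinker example are mine; the `switch_on` flag stands in for the hypothetical attic light switch):

```python
from collections import Counter

def step(cells: set, switch_on: bool) -> set:
    """One generation of a Life-like rule whose survival condition
    depends on an external bit: the state of the light switch."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Standard Life survival is {2, 3}; switching the light off makes
    # the universe harsher. Someone outside can thus "answer prayers"
    # while still following the (extended) mathematical rules.
    survive = {2, 3} if switch_on else {3}
    born = {3}
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n in born or (cell in cells and n in survive)
    }

blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker, True))   # standard Life: the blinker rotates
print(step(blinker, False))  # harsher rule: the center cell dies too
```

From the inside, the creatures' physics would look lawful but would contain an unexplained time-varying parameter, which is the channel for intervention.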
You make a good point -- even if my belief was technically true, it could still have been poorly framed and inactionable (is there a name for this failure mode?).
But in fact, I think it's not even obvious that it was technically true. If we say "calories in" is the sum of the calorie counts on the labels of each food item you eat (let's assume the labels are accurate) then could there not still be some nutrient X that needs to be present for your body to extract the calories? Say, you need at least an ounce of X to process 100 calories? If so, then one could eat the same amount of food, but less X, and potentially lose weight.
Or perhaps the human body can only process food between four and eight hours after eating it, and it doesn't try as hard to extract calories if you aren't being active, so scheduling your meals to take place four hours before you sit around doing nothing would make them "count less".
Calories are (presumably?) a measure of chemical potential energy, but remember that matter itself can also be converted into energy. There's no antimatter engine inside my gut, so my body fails to extract all of the energy present in each piece of food. Couldn't the mechanism of digestion also fail to extract all the chemical potential energy of species "calorie"?
Thanks for the feedback! Here's another one for ya. A relatively long time ago I used to be pretty concerned about Pascal's wager, but then I devised some clever reasoning why it all cancels out and I don't need to think about it. I reasoned that one of three things must be true:
1. I don't have an immortal soul. In this case, I might as well be a good person.
2. I have an immortal soul, and after my bodily death I will be assigned to one of a handful of infinite fates, depending on how good of a person I was. In this case it's very important that I be a good person.
3. Same as above, but the decision process is something else. In this case I have no way of knowing how my infinite fate will be decided, so I might as well be a good person during my mortal life and hope for the best.
But then, post-LW, I realized that there are two issues with this:
1. It doesn't make any sense to separate out case 2 from the enormous ocean of possibilities allowed for by case 3. Or rather, I can separate it, but then I need to probabilistically penalize it relative to case 3, and I also need to slightly shift the "expected judgment criterion" found in case 3 away from "being a good person is the way to get a good infinite fate", and it all balances out.
2. More importantly, this argument flippantly supposes that I have no way of discerning what process, if any, will be used to assign me an infinite fate. An infinite fate, mind you. I ought to be putting in more thought than this even if I thought the afterlife only lasted an hour, let alone eternity.
So now I am back to being rather concerned about Pascal's wager, or more generally, the possibility that I have an immortal soul and need to worry about where it eventually ends up.
From my first read-through of the sequences I remember that it claims to show that the idea of there being a god is somewhat nonsensical, but I didn't quite catch it the first time around. So my first line of attack is to read through the sequences again, more carefully this time, and see if they really do give a valid reason to believe that.
This belief wasn't really affecting my eating habits, so I don't think I'll be changing much. My rules are basically:
1. No meat (I'm a vegetarian for moral reasons).
2. If I feel hungry but I can see/feel my stomach being full by looking at / touching my belly, I'm probably just bored or thirsty and I should consider not eating anything.
3. Try to eat at least a meal's worth of "light" food (like toast or cereal as opposed to pizza or nachos) per day. This last rule is just to keep me from getting stomach aches, which happens if I eat too much "heavy" food in too short a time span.
I think I might contend that this kind of reflects an agnostic position. But I'm glad you asked, because I hadn't noticed before that rule 2 actually does implicitly assume some relationship between "amount of food" and "weight change", and is put in place so I don't gain weight. So I guess I should really have said that what I tossed out the window was the extra detail that calories alone determine the effect food will have on one's weight. I still believe, for normal cases, that taking the same eating pattern but scaling it up (eating more of everything but keeping the ratios the same) will result in weight gain.
It's happened again: I've realized that one of my old beliefs (pre-LW) is just plain dumb.
I used to look around at all the various diets (Paleo, Keto, low-carb, low-fat, etc.) and feel angry at people for having such low epistemic standards. Like, there's a new theory of nutrition every two years, and people still put faith in them every time? Everybody swears by a different diet and this is common knowledge, but people still swear by diets? And the reasoning is that "fat" (the nutrient) has the same name as "fat" (the body part people are trying to get rid of)?
Then I encountered the "calories in = calories out" theory, which says that the only thing you need to do to lose weight is to make sure that you burn more calories than you eat.
And I thought to myself, "yeah, obviously."
Because, you see, if the orthodox asserts X and the heterodox asserts Y, and the orthodox is dumb, then Y must be true!
Anyway, I hadn't thought about this belief in a while, but I randomly remembered it a few minutes ago, and as soon as I remembered its origins, I chucked it out the window.
(PS: I wouldn't be flabbergasted if the belief turned out true anyway. But I've reverted my map from the "I know how the world is" state to the "I'm awaiting additional evidence" state.)
In a normal scientific field, you build a theory, push it to the limit with experimental evidence, and then replace it with something better when it breaks down.
LW-style rationality is not a normal scientific field.
I was under the impression that CFAR was doing something like this, using evidence to figure out which techniques actually do what they seem like they're doing. If not... uh-oh! (Uh-oh in the sense that I believed something for no reason, not in the sense that CFAR would therefore be badwrong in my eyes.)
It's a community dialog centered around a shared set of wisdom-stories. [...] I posit that we are likely to be an average example of such a community, with an average amount of wisdom and an average set of foibles.
I'm not sure I know what kind of community you're talking about. Are there other readily-available examples?
One of those foibles will be [...] Another will be [...] And a third will be [...]
How do you know?
More charitably, I do think these are real risks. Especially the first, which I think I may fall victim to, at least with Eliezer's writings.
My anxiety is that I/we are getting off-track, alienated from ourselves, and obsessed with proxy metrics for rationality. [...] We focus on what life changes fit into the framework or what will be interesting to others in this community, rather than what we actually need to do. I'd like to see more storytelling and attempts to draw original wisdom from them, and more contrarian takes/heresy.
My current belief (and strong hope) is that the attitude of this community is exactly such that if you are right about that, you will be able to convince people of it. "You're not making improvements, you're just roleplaying making improvements" seems like the kind of advice a typical LessWronger would be open to hearing.
By the way, I saw your two recent posts (criticism of popular LW posts, praise of popular LW posts) and I think they're good stuff. The more I think on this, the more I wonder if the need for "contrarian takes" of LW content has been a blind spot for me in my first year of rationality. It's an especially insidious one if so, because I normally spit out contrarian takes as naturally as I breathe.
Sorry that this is all horrible horrible punditry, darkly hinting and with no verifiable claims, but I don't have the time to make it sharper.
One obstacle to discovering how the sequences were affected is that some of the dependencies on psychology/sociology/etc might not be explicitly called out, or might not even have been explicit in Eliezer's own mind as he wrote. But I would just say that means we'll have to work harder at sussing out the truth.
I want to begin my response by noting that I'm in the stage of learning about rationality where I feel that there are still things I don't yet know that, when I learn them, will flush some old conclusions completely down the toilet. (I think this is what Nick Bostrom calls a crucial consideration). So, if there's evidence and/or reasoning motivating your position beyond that which you've shared already, you should make sure to identify it and let me know what it is, and it might genuinely change my position.
That said, I think the arguments I see in this comment are flawed. Before I say why, let me first say exactly what I think the points of disagreement are. First, the replication crisis. I think the following statement (written by me, but taken partly from your post) is one you would agree with and I am rather skeptical of:
Many of the conclusions found in LessWrong's early writings have been cast into doubt, on account of having relied on social psychology results that have been cast into doubt.
I read the first few books of the sequences about a year ago, and then I read all of the sequences a couple of months ago. From what I recall, the heuristics & biases program and Bayesian statistics played a dominant role in generating Eliezer's conclusions, with some evolutionary theory serving to exemplify shortcomings in human reasoning by contrasting what evolutionary theorists used to believe with what we now know (see the Simple Math of Evolution sequence). I don't recall much reliance on social psychology, though I also don't have a very good grasp on what that field studies, so I might not recognize its findings when I see them. Are there specific examples of posts you can give whose conclusions you think (a) rely on results that failed replication and (b) are dubious because of it?
I'd like to note that, although I haven't checked his examples myself, I suspect Eliezer knew to be careful about this kind of thing. In How They Nail It Down he explains that a handful of scientific studies aren't enough to believe a phenomenon is real, but that a suite of hundreds of studies, each pitting the orthodox formulation against some alternate interpretation and finding the orthodox interpretation superior, is. He uses the Conjunction Fallacy, one of his go-to examples of human bias, as an example of a phenomenon that passes the test. Perhaps Eliezer managed to identify the phenomena which had not yet been nailed down (and would go on to fail replication) and managed not to rely on them?
Now the second disagreement. I think you would say, and I would not, that:
Rationality has expected conclusions, such as "AI is a serious problem" or "the many-worlds interpretation of quantum physics is the correct one", that you are supposed to come to. Furthermore, you are not supposed to doubt these conclusions -- you're just supposed to believe them.
I admit that Eliezer's position on doubt is more nuanced than I was remembering it as I wrote everything above this sentence. But have a look at The Proper Use of Doubt, from the sequence Letting Go. In this essay, he warns against having doubts that are too ineffectual; in other words, he advises his audience to make sure they act on their doubts, and that, if appropriate, the process of acting on their doubts actually results in "tearing a cherished belief to shreds." (emphasis mine).
[...] rationality, for them, is not an objective procedure. It's a thoroughly human act, and it's also a lifestyle and an attitude.
I'm not entirely sure what you're getting at with the "objective procedure / human act" distinction. Based only on the labels, I would tentatively agree that rationality is very much a human act. Overcoming biases specific to the human brain is one of its pillars, after all. But I'm not sure what this has to do with either of the points I raised in my comment. Maybe you could put it another way?
It is the systematization of these intuitive, introspection-based techniques that I'm worried about. Now that some self-appointed experts with a nonprofit have produced this (genuinely valuable) material, it makes it easier for people to use the techniques with the expectation of the results the creators tell them they'll receive, rather than doing their own introspection and coming up with original insights and contradictory findings.
Now, where else have I heard of that sort of thing before?
You've probably seen something like it at the heart of every knowledge-gathering endeavor that lasted more than one generation. Everything I know about particle physics was taught to me; none of it derives from original thought on my part. This includes the general attitude that the universe is made of tiny bits whose behavior can be characterized very accurately by mathematical equations. If I wanted to derive knowledge myself, I would have to go out to my back yard and start doing experiments with rocks -- unaware not only of facts like the mass of a proton, not only of the existence of protons, but also of the existence of knowledge such as "protons exist". I would never cross that gap in a single lifetime.
It seems to me that there is a trade-off between original thought, which is good, and speed of development of a collaborative effort, which is also good. Telling your students more results in faster development, but less original thought and therefore less potential to catch mistakes. Telling them less results in more original thought, but also more wheel-reinvention. I admit that there will be some tendency for people to read about techniques of rationality and then immediately fall victim to the placebo effect. But I think there is also some tendency for Eliezer and CFAR to be smart, say true & useful things, and then pass them on to others who go on to get good use out of them.
Would you agree with that last statement? Do you think my "trade-off" analysis is appropriate? If so, is it just that you think the rationalist community leans too far toward teaching-much and away from teaching-little? Or have I completely mis-characterized the problem you see in rationalist teachings (exemplified by Boggling)?
So much of LessWrong's early writings are steeped in scientific findings that died in replication.
Uh-oh, I didn't know about this. Does anyone know which ones?
My fear about systematized rationality is that it supplies us with methods and expected conclusions, [...]. I'm still a believer in the kind of art that undermines your confidence in the answers it provides.
What? What are the "expected conclusions" of rationality? My understanding was that rationality is supposed to be *exactly* the kind of art you describe in the second sentence here.
Disclaimer: I sort of skimmed this post, maybe I'm missing something.
It works, but it's self-certified, so your browser is probably blocking it. You can add an exception if you'd like, but I should have it fixed "soon" (3rd party certification can be gotten for free, I just need to get around to it).
I was about to type out a rebuttal of this, but halfway through I realized I actually agree with you. The "some non-random property" of the digits of the powers of two is that they are all digits found in order inside of powers of two. I would even go so far as to say that if the statement really can't be proven (even taking into account the fact that the digits aren't truly random) then there's a sense in which it isn't true. (And if it can't be proven false, then I'd also say it isn't false.)
Ah, I see what you're saying now. So it is analogous to the cancer example: higher stakes make less-likely-to-succeed-efforts more worth doing. (When compared with lower stakes, not when compared with efforts more likely to succeed, of course.) That makes sense.
As a side note, I wonder if I should have had him bet on a less specific series of events. The way the story currently reads makes it almost sound like I'm just rehashing the "burdensome details" sequence, but what I was really trying to call out was the fairly specific fallacy of "X is all the information I have access to, therefore X is enough information to make the decision".
Overall I wish I had put more thought into this story. I did let it simmer in my mind for a few days after writing it but before posting it, but the decision to finally publish was kind of impulsive, and I didn't try very hard to determine whether it was comprehensible before doing so. Oops! I've updated towards "I need to do more work to make my writing clear".
In the cancer diagnosis example, part of the reason that I would think it's less clear that Sylvanus is being an idiot is that you really might be able to get some evidence about the presence of cancer by paying close attention to the affected organ.
I think I see where you're coming from, though. The importance of a cancer diagnosis (compared to a news addiction) does mean that trying out various apparently dumb ways of getting at the truth becomes a lot more sane. But I don't think I understand what you're saying in the first sentence. What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?
(For context, my knowledge about the various existential risks humanity might face is pretty shallow, but I'm on board with the idea that they can exist and are important to think about and act upon.)
Yeah, that's almost certainly because they are all https links as well. In another branch of this comment thread Raven has pointed me to a place where I can get an https certificate for free, so I should be able to fix this soonish. Thanks!
I wasn't aware of the image enhancement stuff, but it sounds different from what I'm getting at. If I had to write a moral for this story, I would say "Just because X is all the information you have to go on doesn't mean that X is enough information to give a high-quality answer to the problem at hand".
One time where I feel like I see people making this mistake is with the problem of induction. There are those who say "well, if you take the problem of induction too seriously, you can't know for sure that the sun will rise tomorrow!" and conclude that there must be an issue with the problem of induction, rather than wondering whether they really might not know for sure whether the sun will rise tomorrow.
I (believe that I) saw Scott Alexander make this sort of mistake in one of his recent posts, but I can't go check because... well, the blog doesn't exist at the moment. Actually, I heard it through the podcast, which is still available, so I might just listen back to the recent episodes and see if I (1) find the snippet I'm thinking of and (2) still think it's an instance of this mistake. If condition 1 is met, I'll come back and edit in a report of condition 2.
Stupid idea: Have a handful of students from each school volunteer to be assigned extremely difficult, real-world tasks, such as "become an officer at Microsoft within the next five years". These people would be putting any other of their life plans on hold, so you'd need to incentivize them with some kind of reward and/or sense of honor/loyalty to their school.
Ach, nuts. I even spent a minute trying to understand where I'd gone wrong, reasoning that it wasn't all that likely that Jacob's post would contain something as strange as the thing I thought I was seeing. Oh well.
Leftists blame loneliness on capitalism — single people buy twice as many toasters, sex toys, and Netflix subscriptions.
I know you aren't saying you agree with this logic, but I'll just point out that in the case of toasters and Netflix subscriptions, there's a much more obvious explanation, which is that a couple living together only needs one toaster between them, so on average they only buy .5 toasters each.
I was wondering what people would think of that. I chose this name because it "seemed cool", which I put in quotes because it refers to a specific kind of feeling that I can't really articulate. Short titles often give me this feeling.
If you think it's too short (eg, it seems spammy or you think it might annoy other users to see it) then let me know and I'll be happy to come up with something that gives a better idea of what the post is about.
Reducing those small frictions results in many more notes and less disruption of the current task: you think of something, add a note in a few seconds, and continue working.
I upvoted because of this snippet: it's an important aspect of the situation that I forgot to call out in the main post.
Would you mind sharing your code?
Sure! This one is actually a one-liner: it's simply "gedit ~/Documents/lists/$1", which you put in a file called "l" in your ~/bin/ directory. If you prefer a different editor, you can swap out "gedit" for "emacs" or the command used to launch whatever editor you like. (This advice is directed at others reading this comment chain; you probably already know how to do that.)
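For anyone who wants to set this up, here's a sketch of that script. For illustration it's written to a temporary directory rather than ~/bin, so the snippet is self-contained; in real use you'd save it as ~/bin/l and make it executable.

```shell
mkdir -p /tmp/l-demo
cat > /tmp/l-demo/l <<'EOF'
#!/bin/sh
# Open the named list in your editor; swap "gedit" for emacs, vim, etc.
exec gedit ~/Documents/lists/"$1"
EOF
chmod +x /tmp/l-demo/l
```

Once it's on your PATH, `l groceries` opens ~/Documents/lists/groceries.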
I found that setting up some keybindings so I could rely solely on the keyboard was also a good improvement.
That's a good idea. I currently have a piece of software that I use to type diacritics (for Toaq) but I'm not super happy with it — it kind of bugs out on occasion and can be slow to insert the characters I want. The software I'm using is AutoKey. What do you use? Are you happy with it?
I recommend also implementing some scripts to search on the web [...]
This is also a good idea. I'm pretty fast at typing and pretty slow with the mouse, so I'd probably instead make a macro for "prompt me for a search key, open a new tab, search that thing, then take me back to the tab I was in before".
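As a sketch of what I mean (the tool names here are assumptions, not something I've actually wired up: zenity for the prompt, xdg-open to hand the URL to the default browser, and python3 for the URL-encoding):

```shell
# Hypothetical search macro: prompt for a query, then open a web search
# for it in a new browser tab.

urlencode() {
  # Percent-encode the query so spaces and punctuation survive the URL.
  python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$1"
}

websearch() {
  # zenity pops up a text-entry dialog; if it's cancelled, do nothing.
  query=$(zenity --entry --title="Web search" --text="Search for:") || return
  xdg-open "https://duckduckgo.com/?q=$(urlencode "$query")"
}
```

You'd then bind `websearch` to a key via your window manager or a tool like AutoKey; returning to the previous tab would still be a separate keystroke.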
This is a good point. I'd do well to remember that repeated phrases stick in the mind: I'm currently on a bit of a reification spree where I'm giving names to a whole bunch of personal concepts (like moods, mental tools, etc) and since I would like these phrases to stick in the mind I think I shall repeat them.
I think I prefer the status quo design, but not very strongly. Between the two designs pictured here, I at first preferred the one where the authors weren't bolded, but now I think I prefer the one where the whole line is bolded, since "[insert author whose posts I enjoy] has posted something" is as newsworthy as "there's a post called [title I find enticing]".
Something I've noticed about myself is that I tend to underestimate how much I can get used to things, so I might end up just as happy with whichever design is chosen.
I just noticed that I've got two similarity clusters in my mind that keep getting called to my attention by wording dichotomies like high-priority and low-priority, but that would themselves be better labeled as big and small. This was causing me to interpret phrases like "doing a string of low-priority tasks" as having a positive affect (!) because what it called to mind was my own activity of doing a string of small, on-average medium-priority tasks.
My thought process might improve overall if I toss out the "big" and "small" similarity clusters and replace them with clusters that really are centered around "high-priority" and "low-priority".
Strong upvote for the very clear explanation of the basics. I would definitely read any further posts elaborating on this -- for example, if you explained some of the simple quantum gates and maybe an algorithm or two that asymptotically outperforms its classical analog.
I don't have much to add to this discussion, but I want to note that I'm extremely interested in any further insights you have about this, because this problem has always bothered me.
I expect you've already thought of this, but you might get some epistemic mileage out of looking at what primary documents a fact traces back to and reasoning about what those documents can/cannot possibly prove about the past. For example, if the claim about boat sizes had traced back to a set of documents about boats only in Europe, you would know to be suspicious.
I'm starting to feel frustrated (and confused) by this conversation, because it feels to me like people are responding to something other than what I'm saying. Let me try to clarify what I'm getting at.
As far as I know, this conversation began on Put A Num On It, where Jacob used the phrase "overcoming intuition" as a name for one of his hypotheses about why rationalists are more polygamous than others. He says:
The willingness to entertain the idea that your intuitions about truth may be wrong is a prerequisite for learning Rationality, and Rationality further cultivates that skill.
So it seems to me that he was trying to bind the phrase "overcoming intuition" to the idea of overcoming the tight grip that intuitions hold over most people. Not throwing out all of our intuitions' conclusions (I completely agree that that would be bad) but rather getting our intuitions under control so that we don't just automatically obey them at every turn.
Do you agree that this is what Jacob meant by the phrase? Separately, do you agree that this is a reasonable thing to do?
Since I am confused, I will generate some hypotheses about what's going on:
I have completely misunderstood Jacob's intent for what the phrase means.
Evidence in favor of this hypothesis: This is going to sound sulky, but I cross my heart that I'm just trying to be a good rationalist: I seem to be the one with the less popular opinion here. Obviously it might just be the case that I really do have an unpopular opinion, but it's also exactly what you'd expect to see if I was on a different page than everybody else about what we were talking about.
Others are jumping into the conversation late and are not aware of the commentary given about the phrase by the person who wrote it.
Evidence in favor of this hypothesis: The comment at the top of this chain says "I absolutely hate this phrase and everything it represents", but I don't see why OP would feel so strongly about "the willingness to entertain the idea that your intuitions about truth may be wrong", which is what the phrase is being used to represent here. This makes me think that the phrase represents something other than that to OP.
Before I joined the site, there was some divide between people who really did think that we should throw out all of our intuitions' conclusions and people who did not (who acknowledged that correct conclusions can sometimes rest on illegible reasoning), and people responding to this comment chain are thinking of that divide when they profess their hatred of the phrase "overcoming intuition".
Evidence in favor of this hypothesis: Even though I pointed out that Jacob was using the phrase to mean a certain thing, two different people have insisted that the phrase really means something different. That makes me think that the phrase has a history in this community.
Why not? I'm sorry if I'm being dense, but my understanding is that this community's main focus is fixing the issues that human intuitions have. All it takes is a relabeling of "bias" with the word "intuition" to describe this process as "overcoming intuition". Is that not what the phrase stands for? Or, a more specific guess, does it stand for a specific variant of rationality in which the whole intuition really is what you try to overcome?
(Or, yet another alternative, do you disagree with this community's main stated goal? This isn't my main guess because it looks like you're a fairly prevalent and popular participant here, neither of which I would expect for somebody with fringe views.)