Posts

How worried are the relevant experts about a magnetic pole reversal? 2020-10-07T16:19:51.607Z
Why associative operations? 2020-07-16T12:36:47.802Z
The allegory of the hospital 2020-07-02T21:46:09.269Z
l 2019-12-30T05:23:51.727Z
What's going on with "provability"? 2019-10-13T03:59:08.748Z
[Linkpost] Otu and Lai: a story 2019-09-15T20:21:12.445Z
How has rationalism helped you? 2019-08-24T01:31:06.616Z
Information empathy 2019-07-30T01:32:45.174Z
Sunny's Shortform 2019-07-28T02:40:05.241Z
Bayes' Theorem in three pictures 2019-07-21T07:01:45.068Z
Why it feels like everything is a trade-off 2019-07-18T01:33:04.764Z

Comments

Comment by sunny-from-qad on Pain is not the unit of Effort · 2020-11-28T00:57:19.071Z · LW · GW

To answer that question, it might help to consider when you even need to measure effort. Off the cuff, I'm not actually sure there are any such cases. Maybe you're an employer and you need to measure how much effort your employees are putting in? But on second thought, that's actually a classic case where you don't need to measure effort; you only need to measure results.

(Disclaimer: I have never employed anybody.)

Comment by sunny-from-qad on Pain is not the unit of Effort · 2020-11-25T09:41:17.683Z · LW · GW

pain isn't the unit of effort, but for many things it's correlated with whatever that unit is.

I think this correlation only appears if you're choosing strategies well. If you're tasked with earning a lot of money to give to charity, and you generate a list of 100 possible strategies, then you should toss out all the strategies that don't lie on the Pareto boundary of pain and success. (In other words, if strategy A is both less effective and more painful than strategy B, then you should never choose strategy A.) Pain will correlate with success in the remaining pool of strategies, but it doesn't correlate in the set of all strategies. And the OP is saying that people often choose strategies that are off the Pareto boundary, because they specifically select pain-inducing strategies under the misconception that those strategies will all be successful as well.
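
Here's a toy sketch of that filtering effect (made-up numbers, Python purely for illustration, not anything from the post): among all strategies, pain and success need not correlate at all, but once the dominated strategies are discarded, more pain only stays on the menu when it buys more success.

```python
import random

random.seed(1)

# Made-up strategies: (pain, success) pairs with no built-in correlation at all.
strategies = [(random.random(), random.random()) for _ in range(100)]

def pareto_frontier(options):
    """Drop any strategy for which some other strategy is both less painful and more successful."""
    return [
        (pain, success)
        for pain, success in options
        if not any(p < pain and s > success for p, s in options)
    ]

frontier = pareto_frontier(strategies)

# Among the surviving strategies, sorting by pain also sorts by success: the
# correlation is an artifact of discarding dominated options, not a fact about
# the full strategy pool.
print(sorted(frontier))
```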

Comment by sunny-from-qad on Sunny's Shortform · 2020-10-31T08:49:45.420Z · LW · GW

A koan:

If the laundry needs to be done, put in a load of laundry.
If the world needs to be saved, save the world.
If you want pizza for dinner, go preheat the oven.

Comment by sunny-from-qad on You Only Live Twice · 2020-10-30T13:25:32.946Z · LW · GW

So it's been 10 years. How are you feeling about cryonics now?

Comment by sunny-from-qad on You Only Live Twice · 2020-10-30T13:24:39.678Z · LW · GW

It's been ten years. How are you enjoying life?

For what it's worth, I value you even though you're a stranger and even if your life is still going poorly. I often hear people saying how much better their life got after 30, after 40, after 50. Imagine how much larger the effect could be after cryosuspension!

Comment by sunny-from-qad on Sunny's Shortform · 2020-10-30T08:27:12.457Z · LW · GW

I've been thinking of signing up for cryonics recently. The main hurdle is that it seems like it'll be kind of complicated, since at the moment I'm still on my parent's insurance, and I don't really know how all this stuff works. I've been worrying that the ugh field surrounding the task might end up being my cause of death by causing me to look on cryonics less favorably just because I subconsciously want to avoid even thinking about what a hassle it will be.

But then I realized that I can get around the problem by pre-committing to sign up for cryonics no matter what, then just cancelling it if I decide I don't want it.

It will be MUCH easier to make an unbiased decision if choosing cryonics means doing nothing, rather than meaning that I have to go do a bunch of complicated paperwork right now. It will be well worth a few months (or even years) of dues.

Comment by sunny-from-qad on No Logical Positivist I · 2020-10-28T16:44:07.625Z · LW · GW

Eliezer, you're definitely setting up a straw man here. Of course it's not just you -- pretty much everybody suffers from this particular misunderstanding of logical positivism.

How do you know that the phrase "logical positivism" refers to the correct formulation of the idea, rather than an exaggerated version? I have no trouble at all believing that a group of people discovered the very important notion that untestable claims can be meaningless, and then accidentally went way overboard into believing that difficult-to-test claims are meaningless too.

Comment by sunny-from-qad on Willpower Hax #487: Execute by Default · 2020-10-24T09:08:22.221Z · LW · GW

So it's been 11 years. Do you still remember pjeby's advice? Did it change your life?

Comment by sunny-from-qad on How worried are the relevant experts about a magnetic pole reversal? · 2020-10-08T07:46:46.396Z · LW · GW

There's evidence to be had in the fact that, though it's been known for a long time, it's not a big field of study with clear experts.

This is true. It's only a mild comfort to me, though, since I don't have too much faith in humanity's ability to conjure up fields of study for important problems. But I do have some faith.

From very light googling, it seems likely to happen over hundreds or thousands of years, which puts it pretty far down the list of x-risk worries IMO.

Also true. This makes me update away from "we might wake up dead tomorrow" and towards "the future might be pretty odd, like maybe we'll all wear radiation suits when we're outside for a few generations".

Comment by sunny-from-qad on How worried are the relevant experts about a magnetic pole reversal? · 2020-10-08T07:42:50.163Z · LW · GW

('overdue') presumes some knowledge of mechanism, which I don't have. Roughly speaking it's a 1 in 300,000 risk each year and not extinction level.

Am I misunderstanding, or is this an argument from ignorance? The article says we're overdue; that makes it sound like someone has an idea of what the mechanism is, and that person is saying that according to their model, we're overdue. Actually, come to think of it, "overdue" might not imply knowledge of a mechanism at all! Maybe we simply have good reason to believe that this has happened about every 300,000 years for ages, and conclude that "we're overdue" is a good guess.

it's not as though the field temporarily disappears completely!

How do you know?

Comment by sunny-from-qad on How worried are the relevant experts about a magnetic pole reversal? · 2020-10-08T07:38:11.770Z · LW · GW

2. ('overdue') presumes some knowledge of mechanism, which I don't have. Roughly speaking it's a 1 in 300,000 risk each year and not extinction level.

 

Comment by sunny-from-qad on Postmortem to Petrov Day, 2020 · 2020-10-07T16:51:10.797Z · LW · GW

I'll just throw in my two cents here and say that I was somewhat surprised by how serious Ben's post is. I was around for the Petrov Day celebration last year, and I also thought of it as just a fun little game. I can't remember if I screwed around with the button or not (I can't even remember if there was a button for me).

Then again, I do take Ben's point: a person does have a responsibility to notice when something that's being treated like a game is actually serious and important. Not that I think 24 hours of LW being down is necessarily "serious and important".

Overall, though, I'm not throwing much of a reputation hit (if any at all) into my mental books for you.

Comment by sunny-from-qad on This Territory Does Not Exist · 2020-08-13T23:58:58.250Z · LW · GW

Yeah. This post could also serve, more or less verbatim, as a write-up of my own current thoughts on the matter. In particular, this section really nails it:

As above, my claim is not that the photon disappears. That would indeed be a silly idea. My claim is that the very claim that a photon "exists" is meaningless. We have a map that makes predictions. The map contains a photon, and it contains that photon even outside any areas relevant to predictions, but why should I care? The map is for making predictions, not for ontology.

[...]

I don't suppose that. I suppose that the concept of a photon actually existing is meaningless and irrelevant to the model.

[...]

This latter belief is an "additional fact". It's more complicated than "these equations describe my expectations".

And the two issues you mention — the spaceship that's leaving Earth to establish a colony that won't causally interact with us, and the question of whether other people have internal experiences — are the only two notes of dissonance in my own understanding.

(Actually, I do disagree with "altruism is hard to ground regardless". For me, it's very easy to ground. Supposing that the question "Do other people have internal conscious experiences?" is meaningful and that the answer is "yes", I just very simply would prefer those people to have pleasant experiences rather than unpleasant ones. Then again, you may mean that it's hard to convince other people to be altruistic, if that isn't their inclination. In that case, I agree.)

Comment by sunny-from-qad on The Valley of Bad Theory · 2020-08-09T02:54:59.680Z · LW · GW

Thanks for pointing this out. I think the OP might have gotten their conclusion from this paragraph:

(Note that, in the web page that the OP links to, this very paragraph is quoted, but for some reason "energy" is substituted for "center-of-mass". Not sure what's going on there.)

In any case, this paragraph makes it sound like participants who inherited a wrong theory did do worse on tests of understanding (even though participants who inherited some theory did the same on average as those who inherited only data, which I guess implies that those who inherited a right theory did better). I'm slightly put off by the fact that this nuance isn't present in the OP's post, and that they haven't responded to your comment, but not nearly as much as I had been when I'd read only your comment, before I went to read (the first 200 lines of) the paper for myself.

Comment by sunny-from-qad on Sunny's Shortform · 2020-08-06T21:35:15.212Z · LW · GW

Kk! Thanks for the discussion :)

Comment by sunny-from-qad on Sunny's Shortform · 2020-08-06T01:28:15.394Z · LW · GW

Yeah, I just... stopped worrying about these kinds of things. (In my case, "these kinds of things" refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can't win this game. There are million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; [...]

I see. In that case, I think we're reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian -- you can often say things like "if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I'll just eat that cost; I need groceries!". My state of uncertainty is that I've barely put five minutes of thought into the question "I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long."

And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...

Well, that's another reference to "popular" theism. Popular theism is a subset of theism in general, which itself is a subset of "worlds in which there's something I should be doing that has infinite importance".

On the other hand, if you assume an evil god, then... maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.

Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O

So... you can't really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.

This advice makes sense, though given the state of uncertainty described above, I would say I'm already on it.

Psychologically, if you can't get rid of the idea of supernatural, maybe it would be better to believe in an actually good god. [...]

This is a good fallback plan for the contingency in which I can't figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!

[...] your theory can still benefit from some concepts having shorter words for historical reasons [...]

Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...

But historical evidence shows that humans are quite bad at this.

...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a "reasonable" set of priors.

I would still hesitate to call it a "formalism", though IIRC I don't think you've used that word. In my re-listen of the sequences, I've just gotten to the part where Eliezer uses that word. Well, I guess I'll take it up with somebody who calls it that.

By the way, it's just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam's razor. I'm nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?

[...] The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file.

Insightful comments! I see the connection: really, every compression of a file is a compression into a program that will output that file, where the programming language is the decompression algorithm and the search for the shortest such program isn't guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really, really apt decompression routine (one that captures very well the nuanced nonrandomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
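
Here's a tiny sketch of that framing (Python's zlib used purely as an illustration): the compressed bytes play the role of the "program", and the decompression routine plays the role of the interpreter/language that runs it.

```python
import zlib

# A highly non-random file: repetition is exactly the kind of structure a
# decompressor's "language" can describe cheaply.
original = b"the quick brown fox jumps over the lazy dog\n" * 1000

# The compressed bytes act as a short "program"; zlib.decompress is the
# interpreter that runs it to reproduce the original file exactly.
program = zlib.compress(original, 9)
assert zlib.decompress(program) == original

print(len(original), len(program))  # the "program" is far shorter than its output
```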

> But then my concern just transforms into "what if there's a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc".

Then we are no longer talking about gods in the modern sense, but about powerful aliens.

Well, if the "inside/outside the universe" distinction is going to mean "is/isn't causally connected to the universe at all" and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.

But in any case, the distinction isn't too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I'd be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.

Comment by sunny-from-qad on Sunny's Shortform · 2020-08-02T12:04:07.368Z · LW · GW

Aha, no, the mind reading part is just one of several cultures I'm mentioning. (Guess Culture, to be exact.) If I default to being an Asker but somebody else is a Guesser, I might have the following interaction with them:

Me: [looking at some cookies they just made] These look delicious! Would it be all right if I ate one?

Them: [obviously uncomfortable] Uhm... uh... I mean, I guess so...

Here, it's retroactively clear that, in their eyes, I've overstepped a boundary just by asking. But I usually can't tell in advance what things I'm allowed to ask and what things I'm not allowed to ask. There could be some rule that I just haven't discovered yet, but because I haven't discovered it yet, it feels to me like each case is arbitrary, and thus it feels like I'm being required to read people's minds each time. Hence why I'm tempted to call Guess Culture "Read-my-mind Culture".

(Contrast this to Ask Culture, where the rule is, to me, very simple and easy to discover: every request is acceptable to make, and if the other person doesn't want you to do what you're asking to do, they just say "no".)

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-31T21:32:53.565Z · LW · GW

I couldn't parse this question. Which part are you referring to by "it", and what do you mean by "instead of asking you"?

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-31T16:48:01.643Z · LW · GW

The Civ analogy makes sense, and I certainly wouldn't stop at disproving all actually-practiced religions (though at the moment I don't even feel equipped to do that).

Well, you cannot disprove such thing, because it is logically possible. (Obviously, "possible" does not automatically imply "it happened".) But unless you assume it is "simulations all the way up", there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.

Are you sure it's logically possible in the strict sense? Maybe there's some hidden line of reasoning we haven't yet discovered that shows that this universe isn't a simulation! (Of course, there's a lot of question-untangling that has to happen first, like whether "is this a simulation?" is even an appropriate question to ask. See also: Greg Egan's book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)

It's just a cosmic horror that you need to learn to live with. There are more.

This sounds like the kind of thing someone might say who is already relatively confident they won't suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...

(WARNING: graphic imagery) ...upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).

Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn't still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?

Any programming language; for large enough values it doesn't matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and implementing the algorithm in Python. So if you chose a wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied to debating what happens in general when the data grow.

The constant-sized penalty makes sense. But I don't understand the claim that this concept is usually applied in the context of looking at how things grow. Occam's razor is (apparently) formulated in terms of raw Kolmogorov complexity -- the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X. 

Let's say general relativity is being compared against Theory T, and the programming language is Python. Doesn't it make a huge difference whether you're allowed to "pip install general-relativity" before you begin? 
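
To make this concrete, here's a toy sketch with made-up description lengths (nothing here is a real measurement of general relativity or of any actual theory): a penalty applied equally to every hypothesis cancels out of the relative odds, while a "library" that shortens only one hypothesis does shift them.

```python
# Toy numbers: description lengths (in bits) for two hypotheses.
complexities = {"general_relativity": 40, "theory_T": 55}

def normalized_priors(lengths, penalty=0):
    """Solomonoff-style prior proportional to 2^-(length + penalty), normalized."""
    weights = {h: 2.0 ** -(k + penalty) for h, k in lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# The same constant penalty for every hypothesis (e.g. the fixed cost of writing
# a Python emulator in Java) cancels out of the relative odds:
print(normalized_priors(complexities))
print(normalized_priors(complexities, penalty=200))

# But a library that shortens only one hypothesis ("pip install general-relativity")
# is an asymmetric discount, and it really does change the odds:
print(normalized_priors({"general_relativity": 10, "theory_T": 55}))
```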

But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.

I agree that these intuitions can exist, but if I'm going to use them, then I detest this process being called a formalization! If I'm allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don't I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form "programming languages that generate priors that work tend to have characteristic X" can be transformed into wisdom of the form "priors that work tend to have characteristic X".

Just an intuition pump: [...]

I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn't seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.

(Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)

Can I ask which related concepts you mean?

[...] so it is the complexity of the outside universe.

Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into "what if there's a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc".

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-30T06:20:30.101Z · LW · GW

Epistemic status: really shaky, but I think there's something here.

I naturally feel a lot of resistance to the way culture/norm differences are characterized in posts like Ask and Guess and Wait vs Interrupt Culture. I naturally want to give them little pet names, like:

  • Guess culture = "read my fucking mind, you badwrong idiot" culture.
  • Ask culture = nothing, because this is just how normal, non-insane people act.

I think this feeling is generated by various negative experiences I've had with people around me, who, no matter where I am, always seem to share between them one culture or another that I don't really understand the rules of. This leads to a lot of interactions where I'm being told by everyone around me that I'm being a jerk, even when I can "clearly see" that there is nothing I could have done that would have been correct in their eyes, or that what they wanted me to do was impossible or unreasonable.

But I'm starting to wonder if I need to let go of this. When I feel someone is treating me unfairly, it could just be because (1) they are speaking in Culture 1, and (2) I am listening in Culture 2 and hearing something they don't mean to transmit. If I were more tuned in to what people meant to say, my perception of people who use other norms might change.

I feel there's at least one more important pair of cultures, and although I haven't mentioned it yet, it's the one I had in mind most while writing this post. Something like:

  • Culture 1: Everyone speaks for themselves only, unless explicitly stated otherwise. Putting words in someone's mouth or saying that they are "implying" something they didn't literally say is completely unacceptable. False accusations are taken seriously and reflect poorly on the accuser.
  • Culture 2: The things you say reflect not only on you but also on people "associated" with you. If X is what you believe, you might have to say Y instead if saying X could be taken the wrong way. If someone is being a jerk, you don't have to extend the courtesy of articulating their mistake to them correctly; you can just shun them off in whatever way is easiest.

I don't really know how real this dichotomy is, and if it is real, I don't know for sure how I feel about one being "right" and the other being "wrong". I tried semi-hard to give a neutral take on the distinction, but I don't think I succeeded. Can people reading this tell which culture I naturally feel opposed to? Do you think I've correctly put my finger on another real dichotomy? Which set of norms, if either, do you feel more in tune with?

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-29T20:50:20.021Z · LW · GW

But atoms aren't similar to calories, are they? I maintain that this hypothesis could be literally false, rather than simply unhelpful.

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-29T20:38:53.698Z · LW · GW

I wouldn't call the dead chieftain a god -- that would just be a word game.

But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution, were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.

Wait wait! You say a god-like being created by evolution cannot be a creator of the universe. But that's only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful "alien" in another universe, who itself came about from an evolutionary process in its own universe.

It might be "omniscient" in the sense that it can think 1000x as fast as us and has 1000x as much working memory and is familiar with thinking habits that are 1000x as good as ours, but that's a moot point. The real thing I'm worried about isn't whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.

I haven't yet put much thought into this, since I only recently came to believe that this topic merits serious thought, but the existence of such a powerful creature seems like a plausible avenue to the conclusion "I have an infinite fate and it depends on me doing/avoiding X".

[...] Occam's razor [...]

This is another area where my understanding could stand to be improved (and where I expect it will be during my next read-through of the sequences). I'm not sure exactly what kind of simplicity Occam's razor uses. Apparently it can be formalized as Kolmogorov complexity, but the only definition I've ever found for that term is "the Kolmogorov Complexity of X is the length of the shortest computer program that would output X". But this definition is itself in need of formalization. Which programming language? And what if X is something other than a stream of bits, such as a dandelion? And even once that's answered, I'm not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.

(All that being said, I'd like to note that I'm keeping in mind that just because I don't understand these things doesn't mean there's nothing to them. Do you know of any good learning resources for someone who has my confusions about these topics?)

And it's not like you created the universe by simulating it, because you are merely following the mathematical rules; so it's more like the math created that universe and you are only observing it.

If the beings in that mathematical universe will pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math.

That much makes sense, but I think it excludes a possibly important class of universe that is based on math but also depends on a constant stream of data from an outside source. Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
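
Here's a minimal sketch of the kind of rule I have in mind (a toy 1D automaton of my own, not the real Game of Life): the next state depends both on the current cell array and on one external input bit per step, so an outside observer can steer the universe without ever breaking its rules.

```python
def step(cells, switch_on):
    """One update of a toy 1D automaton: a cell turns on iff exactly one of its
    neighbours is on, XOR-ed with the external 'light switch' bit."""
    n = len(cells)
    nxt = []
    for i in range(n):
        neighbours = cells[(i - 1) % n] + cells[(i + 1) % n]
        nxt.append((1 if neighbours == 1 else 0) ^ int(switch_on))
    return nxt

# The same starting array evolves differently depending on the outside input,
# so someone "outside the universe" can answer prayers while still following
# the universe's own mathematical rules.
cells = [0, 0, 1, 0, 0, 0, 1, 0]
print(step(cells, switch_on=False))
print(step(cells, switch_on=True))
```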

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-29T03:57:21.615Z · LW · GW

You make a good point -- even if my belief was technically true, it could still have been poorly framed and inactionable (is there a name for this failure mode?).

But in fact, I think it's not even obvious that it was technically true. If we say "calories in" is the sum of the calorie counts on the labels of each food item you eat (let's assume the labels are accurate) then could there not still be some nutrient X that needs to be present for your body to extract the calories? Say, you need at least an ounce of X to process 100 calories? If so, then one could eat the same amount of food, but less X, and potentially lose weight.

Or perhaps the human body can only process food between four and eight hours after eating it, and it doesn't try as hard to extract calories if you aren't being active, so scheduling your meals to take place four hours before you sit around doing nothing would make them "count less".

Calories are (presumably?) a measure of chemical potential energy, but remember that matter itself can also be converted into energy. There's no antimatter engine inside my gut, so my body fails to extract all of the energy present in each piece of food. Couldn't the mechanism of digestion also fail to extract all the chemical potential energy of species "calorie"?

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-28T08:27:46.279Z · LW · GW

Thanks for the feedback! Here's another one for ya. A relatively long time ago I used to be pretty concerned about Pascal's wager, but then I devised some clever reasoning why it all cancels out and I don't need to think about it. I reasoned that one of three things must be true:

  1. I don't have an immortal soul. In this case, I might as well be a good person.
  2. I have an immortal soul, and after my bodily death I will be assigned to one of a handful of infinite fates, depending on how good of a person I was. In this case it's very important that I be a good person.
  3. Same as above, but the decision process is something else. In this case I have no way of knowing how my infinite fate will be decided, so I might as well be a good person during my mortal life and hope for the best.

But then, post-LW, I realized that there are two issues with this:

  • It doesn't make any sense to separate out case 2 from the enormous ocean of possibilities allowed for by case 3. Or rather, I can separate it, but then I need to probabilistically penalize it relative to case 3, and I also need to slightly shift the "expected judgment criterion" found in case 3 away from "being a good person is the way to get a good infinite fate", and it all balances out.
  • More importantly, this argument flippantly supposes that I have no way of discerning what process, if any, will be used to assign me an infinite fate. An infinite fate, mind you. I ought to be putting in more thought than this even if I thought the afterlife only lasted an hour, let alone eternity.

So now I am back to being rather concerned about Pascal's wager, or more generally, the possibility that I have an immortal soul and need to worry about where it eventually ends up.

From my first read-through of the sequences I remember that it claims to show that the idea of there being a god is somewhat nonsensical, but I didn't quite catch it the first time around. So my first line of attack is to read through the sequences again, more carefully this time, and see if they really do give a valid reason to believe that.

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-28T01:38:43.150Z · LW · GW

This belief wasn't really affecting my eating habits, so I don't think I'll be changing much. My rules are basically:

  1. No meat (I'm a vegetarian for moral reasons).
  2. If I feel hungry but I can see/feel my stomach being full by looking at / touching my belly, I'm probably just bored or thirsty and I should consider not eating anything.
  3. Try to eat at least a meal's worth of "light" food (like toast or cereal as opposed to pizza or nachos) per day. This last rule is just to keep me from getting stomach aches, which happens if I eat too much "heavy" food in too short a time span.

I think I might contend that this kind of reflects an agnostic position. But I'm glad you asked, because I hadn't noticed before that rule 2 actually does implicitly assume some relationship between "amount of food" and "weight change", and is put in place so I don't gain weight. So I guess I should really have said that what I tossed out the window was the extra detail that calories alone determine the effect food will have on one's weight. I still believe, for normal cases, that taking the same eating pattern but scaling it up (eating more of everything but keeping the ratios the same) will result in weight gain.

Comment by sunny-from-qad on Sunny's Shortform · 2020-07-27T11:10:48.057Z · LW · GW

It's happened again: I've realized that one of my old beliefs (pre-LW) is just plain dumb.

I used to look around at all the various diets (Paleo, Keto, low carb, low fat, etc.) and feel angry at people for having such low epistemic standards. Like, there's a new theory of nutrition every two years, and people still put faith in them every time? Everybody swears by a different diet and this is common knowledge, but people still swear by diets? And the reasoning is that "fat" (the nutrient) has the same name as "fat" (the body part people are trying to get rid of)?

Then I encountered the "calories in = calories out" theory, which says that the only thing you need to do to lose weight is to make sure that you burn more calories than you eat.

And I thought to myself, "yeah, obviously."

Because, you see, if the orthodox asserts X and the heterodox asserts Y, and the orthodox is dumb, then Y must be true!

Anyway, I hadn't thought about this belief in a while, but I randomly remembered it a few minutes ago, and as soon as I remembered its origins, I chucked it out the window.

Oops!

(PS: I wouldn't be flabbergasted if the belief turned out true anyway. But I've reverted my map from the "I know how the world is" state to the "I'm awaiting additional evidence" state.)

Comment by sunny-from-qad on Telling more rational stories · 2020-07-21T16:10:53.405Z · LW · GW

In a normal scientific field, you build a theory, push it to the limit with experimental evidence, and then replace it with something better when it breaks down.

LW-style rationality is not a normal scientific field.

I was under the impression that CFAR was doing something like this, using evidence to figure out which techniques actually do what they seem like they're doing. If not... uh-oh! (Uh-oh in the sense that I believed something for no reason, not in the sense that CFAR would therefore be badwrong in my eyes.)

It's a community dialog centered around a shared set of wisdom-stories. [...] I posit that we are likely to be an average example of such a community, with an average amount of wisdom and an average set of foibles.

I'm not sure I know what kind of community you're talking about. Are there other readily-available examples?

One of those foibles will be [...] Another will be [...] And a third will be [...]

How do you know?

More charitably, I do think these are real risks. Especially the first, which I think I may fall victim to, at least with Eliezer's writings.

My anxiety is that I/we are getting off-track, alienated from ourselves, and obsessed with proxy metrics for rationality. [...]  We focus on what life changes fit into the framework or what will be interesting to others in this community, rather than what we actually need to do. I'd like to see more storytelling and attempts to draw original wisdom from them, and more contrarian takes/heresy.

My current belief (and strong hope) is that the attitude of this community is exactly such that if you are right about that, you will be able to convince people of it. "You're not making improvements, you're just roleplaying making improvements" seems like the kind of advice a typical LessWronger would be open to hearing.

By the way, I saw your two recent posts (criticism of popular LW posts, praise of popular LW posts) and I think they're good stuff. The more I think on this, the more I wonder if the need for "contrarian takes" of LW content has been a blind spot for me in my first year of rationality. It's an especially insidious one if so, because I normally spit out contrarian takes as naturally as I breathe.

Sorry that this is all horrible horrible punditry, darkly hinting and with no verifiable claims, but I don't have the time to make it sharper.

I've been there! ^^

Comment by sunny-from-qad on Criticism of some popular LW articles · 2020-07-19T14:14:59.587Z · LW · GW

As another example, when I get into a debate with someone in the comments section, I tend to upvote the other person's comments as long as they're reasonably well-thought-out and well-written.

Comment by sunny-from-qad on Telling more rational stories · 2020-07-19T08:46:24.935Z · LW · GW

One obstacle to discovering how the sequences were affected is that some of the dependencies on psychology/sociology/etc might not be explicitly called out, or might not even have been explicit in Eliezer's own mind as he wrote. But I would just say that means we'll have to work harder at sussing out the truth.

Comment by sunny-from-qad on Telling more rational stories · 2020-07-18T13:06:11.060Z · LW · GW

I want to begin my response by noting that I'm in the stage of learning about rationality where I feel that there are still things I don't yet know that, when I learn them, will flush some old conclusions completely down the toilet. (I think this is what Nick Bostrom calls a crucial consideration). So, if there's evidence and/or reasoning motivating your position beyond that which you've shared already, you should make sure to identify it and let me know what it is, and it might genuinely change my position.

That said, I think the arguments I see in this comment are flawed. Before I say why, let me first say exactly what I think the points of disagreement are. First, the replication crisis. I think the following statement (written by me, but taken partly from your post) is one you would agree with and I am rather skeptical of:

Many of the conclusions found in LessWrong's early writings have been cast into doubt, on account of having relied on social psychology results that have been cast into doubt.

I read the first few books of the sequences about a year ago, and then I read all of the sequences a couple of months ago. From what I recall, the heuristics & biases program and Bayesian statistics played a dominant role in generating his conclusions, with some evolutionary theory serving to exemplify shortcomings in human reasoning by contrasting what evolutionary theorists used to believe with what we now know (see the Simple Math of Evolution sequence). I don't recall much reliance on social psychology, though I also don't have a very good grasp on what that field studies, so I might not recognize its findings when I see them. Are there specific examples of posts you can give whose conclusions you think (a) rely on results that failed replication and (b) are dubious because of it?

I'd like to note that, although I haven't checked his examples myself, I suspect Eliezer knew to be careful about this kind of thing. In How They Nail It Down he explains that a handful of scientific studies aren't enough to believe a phenomenon is real, but that a suite of hundreds of studies, each pitting the orthodox formulation against some alternate interpretation and finding the orthodox interpretation superior, is. He uses the Conjunction Fallacy, one of his go-to examples of human bias, as an example of a phenomenon that passes the test. Perhaps Eliezer managed to identify the phenomena which had not yet been nailed down (and would go on to fail replication) and managed not to rely on them?

Now the second disagreement. I think you would say, and I would not, that:

Rationality has expected conclusions, such as "AI is a serious problem" or "the many-worlds interpretation of quantum physics is the correct one", that you are supposed to come to. Furthermore, you are not supposed to doubt these conclusions -- you're just supposed to believe them.

I admit that Eliezer's position on doubt is more nuanced than I was remembering it as I wrote everything above this sentence. But have a look at The Proper Use of Doubt, from the sequence Letting Go. In this essay, he warns against having doubts that are too ineffectual; in other words, he advises his audience to make sure they act on their doubts, and that, if appropriate, the process of acting on their doubts actually results in "tearing a cherished belief to shreds." (emphasis mine).

[...] rationality, for them, is not an objective procedure. It's a thoroughly human act, and it's also a lifestyle and an attitude.

I'm not entirely sure what you're getting at with the "objective procedure / human act" distinction. Based only on the labels, I would tentatively agree that rationality is very much a human act. Overcoming biases specific to the human brain is one of its pillars, after all. But I'm not sure what this has to do with either of the points I raised in my comment. Maybe you could put it another way?

It is the systematization of these intuitive, introspection-based techniques that I'm worried about. Now that some self-appointed experts with a nonprofit have produced this (genuinely valuable) material, it makes it easier for people to use the techniques with the expectation of the results the creators tell them they'll receive, rather than doing their own introspection and coming up with original insights and contradictory findings.

Now, where else have I heard of that sort of thing before?

You've probably seen something like it at the heart of every knowledge-gathering endeavor that lasted more than one generation. Everything I know about particle physics was taught to me; none of it derives from original thought on my part. This includes the general attitude that the universe is made of tiny bits whose behavior can be characterized very accurately by mathematical equations. If I wanted to derive knowledge myself, I would have to go out to my back yard and start doing experiments with rocks -- unaware not only of facts like the mass of a proton, not only of the existence of protons, but also of the existence of knowledge such as "protons exist". I would never cross that gap in a single lifetime.

It seems to me that there is a trade-off between original thought, which is good, and speed of development of a collaborative effort, which is also good. Telling your students more results in faster development, but less original thought and therefore less potential to catch mistakes. Telling them less results in more original thought but also more wheel-reinvention. I admit that there will be some tendency for people to read about techniques of rationality and then immediately fall victim to the placebo effect. But I think there is also some tendency for Eliezer and CFAR to be smart, say true & useful things, and then pass them on to others who go on to get good use out of them.

Would you agree with that last statement? Do you think my "trade-off" analysis is appropriate? If so, is it just that you think the rationalist community leans too far towards teaching-much and too far away from teaching-little? Or have I completely mis-characterized the problem you see in rationalist teachings (exemplified by Boggling)?

Comment by sunny-from-qad on Telling more rational stories · 2020-07-18T02:24:14.027Z · LW · GW

So much of LessWrong's early writings are steeped in scientific findings that died in replication.

Uh-oh, I didn't know about this. Does anyone know which ones?

My fear about systematized rationality is that it supplies us with methods and expected conclusions, [...]. I'm still a believer in the kind of art that undermines your confidence in the answers it provides. 

What? What are the "expected conclusions" of rationality? My understanding was that rationality is supposed to be *exactly* the kind of art you describe in the second sentence here.

Disclaimer: I sort of skimmed this post, maybe I'm missing something.

Comment by sunny-from-qad on Why associative operations? · 2020-07-16T20:29:59.259Z · LW · GW

It works, but its certificate is self-signed, so your browser is probably blocking it. You can add an exception if you'd like, but I should have it fixed "soon" (a certificate from a third-party CA can be gotten for free, I just need to get around to it).

Comment by sunny-from-qad on Why associative operations? · 2020-07-16T19:50:48.924Z · LW · GW

Good stuff, thanks for your comment!

Comment by sunny-from-qad on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-16T18:37:12.645Z · LW · GW

I was about to type out a rebuttal of this, but halfway through I realized I actually agree with you. The "some non-random property" of the digits of the powers of two is that they are all digits found in order inside of powers of two. I would even go so far as to say that if the statement really can't be proven (even taking into account the fact that the digits aren't truly random) then there's a sense in which it isn't true. (And if it can't be proven false, then I'd also say it isn't false.)

Comment by sunny-from-qad on The allegory of the hospital · 2020-07-16T11:36:52.775Z · LW · GW

Ah, I see what you're saying now. So it is analogous to the cancer example: higher stakes make less-likely-to-succeed-efforts more worth doing. (When compared with lower stakes, not when compared with efforts more likely to succeed, of course.) That makes sense.

Comment by sunny-from-qad on The allegory of the hospital · 2020-07-15T20:03:31.234Z · LW · GW

As a side note, I wonder if I should have had him bet on a less specific series of events. The way the story is currently written makes it almost sound like I'm just rehashing the "burdensome details" sequence, but what I was really trying to call out was the fairly specific fallacy of "X is all the information I have access to, therefore X is enough information to make the decision".

Overall I wish I had put more thought into this story. I did let it simmer in my mind for a few days after writing it but before posting it, but the decision to finally publish was kind of impulsive, and I didn't try very hard to determine if it was comprehensible before doing so. Oops! I've updated towards "I need to do more work to make my writing clear".

Comment by sunny-from-qad on The allegory of the hospital · 2020-07-15T19:56:52.247Z · LW · GW

In the cancer diagnosis example, part of the reason that I would think it's less clear that Sylvanus is being an idiot is that you really might be able to get some evidence about the presence of cancer by paying close attention to the affected organ.

I think I see where you're coming from, though. The importance of a cancer diagnosis (compared to a news addiction) does mean that trying out various apparently dumb ways of getting at the truth becomes a lot more sane. But I don't think I understand what you're saying in the first sentence. What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?

(For context, my knowledge about the various existential risks humanity might face is pretty shallow, but I'm on board with the idea that they can exist and are important to think about and act upon.)

Comment by sunny-from-qad on The allegory of the hospital · 2020-07-03T16:19:24.374Z · LW · GW

Yeah, that's almost certainly because they are all https links as well. In another branch of this comment thread Raven has pointed me to a place where I can get an https certificate for free, so I should be able to fix this soonish. Thanks!

Comment by sunny-from-qad on The allegory of the hospital · 2020-07-03T16:17:03.216Z · LW · GW

I wasn't aware of the image enhancement stuff, but it sounds different from what I'm getting at. If I had to write a moral for this story, I would say "Just because X is all the information you have to go on doesn't mean that X is enough information to give a high-quality answer to the problem at hand".

One time where I feel like I see people making this mistake is with the problem of induction. There are those who say "well, if you take the problem of induction too seriously, you can't know for sure that the sun will rise tomorrow!" and conclude that there must be an issue with the problem of induction, rather than wondering whether they really might not know for sure whether the sun will rise tomorrow.

I (believe that I) saw Scott Alexander make this sort of mistake in one of his recent posts, but I can't go check because... well, the blog doesn't exist at the moment. Actually, I heard it through the podcast, which is still available, so I might just listen back to the recent episodes and see if I (1) find the snippet I'm thinking of and (2) still think it's an instance of this mistake. If condition 1 is met, I'll come back and edit in a report of condition 2.

Comment by sunny-from-qad on The allegory of the hospital · 2020-07-03T16:00:16.724Z · LW · GW

Ooh, I'll check that out. Thanks for the tip!

Comment by sunny-from-qad on The allegory of the hospital · 2020-07-03T03:19:54.543Z · LW · GW

Thanks for the feedback. It seems at least a handful of people are in the same boat, so I might try to re-work this in the future.

Comment by sunny-from-qad on The allegory of the hospital · 2020-07-03T03:18:28.449Z · LW · GW

Thank you, that's a consequence of the certificate being self-signed. I've changed the link from https to http, which prevents the issue.

Comment by sunny-from-qad on 3 Levels of Rationality Verification · 2020-05-05T06:43:46.099Z · LW · GW

Stupid idea: Have a handful of students from each school volunteer to be assigned extremely difficult, real-world tasks, such as "become an officer at Microsoft within the next five years". These people would be putting any other of their life plans on hold, so you'd need to incentivize them with some kind of reward and/or sense of honor/loyalty to their school.

Comment by sunny-from-qad on Sunny's Shortform · 2020-04-29T14:18:21.386Z · LW · GW

When you ask a question to a crowd, the answers you get back have a statistical bias towards overconfidence, because people with higher confidence in their answers are more likely to respond.
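
A quick toy simulation of what I mean (made-up numbers, purely illustrative): if each person's chance of answering scales with their confidence, the average confidence among the answers you hear overstates the crowd average.

```python
import random

random.seed(0)

# Made-up population: each person's confidence in their answer, between 0 and 1.
crowd = [random.random() for _ in range(100_000)]

# Assume the chance of actually speaking up scales with confidence.
respondents = [c for c in crowd if random.random() < c]

print(sum(crowd) / len(crowd))              # ~0.5: average confidence in the crowd
print(sum(respondents) / len(respondents))  # ~0.67: average confidence among the answers you hear
```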

Comment by sunny-from-qad on Adding Up To Normality · 2020-03-25T02:07:31.484Z · LW · GW

I like this post.

promise yourself to keep steering the plane mostly as normal while you think about lift

This is a good, short, memorable proverb to remember the point of the post by.

Comment by sunny-from-qad on Go F*** Someone · 2020-01-16T02:58:00.274Z · LW · GW

Ach, nuts. I even spent a minute trying to understand where I'd gone wrong, reasoning that it wasn't all that likely that Jacob's post would contain something as strange as the thing I thought I was seeing. Oh well.

Comment by sunny-from-qad on Go F*** Someone · 2020-01-16T02:18:48.981Z · LW · GW
Leftists blame loneliness on capitalism — single people buy twice as many toasters, sex toys, and Netflix subscriptions.

I know you aren't saying you agree with this logic, but I'll just point out that in the case of toasters and Netflix subscriptions, there's a much more obvious explanation, which is that a couple living together only needs one toaster between them, so on average they only buy .5 toasters each.

Comment by sunny-from-qad on l · 2019-12-31T00:47:49.889Z · LW · GW

I was wondering what people would think of that. I chose this name because it "seemed cool", which I put in quotes because it refers to a specific kind of feeling that I can't really articulate. Short titles often give me this feeling.

If you think it's too short (eg, it seems spammy or you think it might annoy other users to see it) then let me know and I'll be happy to come up with something that gives a better idea of what the post is about.

Comment by sunny-from-qad on l · 2019-12-31T00:43:35.811Z · LW · GW
reducing those small frictions result in much more notes and less disruption of the current task, you think of something, a note is added in a few seconds and you can continue working on.

I upvoted for this snippet because it's an important aspect of the situation that I forgot to call out in the main post.

Would you mind sharing your code?

Sure! This one is actually a one-liner: it's simply "gedit ~/Documents/lists/$1", which you put in a file called "l" in your ~/bin/ directory. If you prefer a different editor, you can swap out "gedit" for "emacs" or the command used to launch whatever editor you like. (This advice is directed at others reading this comment chain, you probably already know how to do that.)

I found that using some keybindings to rely solely on the keyboard also made a good improvement.

That's a good idea. I currently have a piece of software that I use to type diacritics (for Toaq) but I'm not super happy with it — it kind of bugs out on occasion and can be slow to insert the characters I want. The software I'm using is AutoKey. What do you use? Are you happy with it?

I recommend also implementing some scripts to search on the web [...]

This is also a good idea. I'm pretty fast at typing and pretty slow with the mouse, so I'd probably instead make a macro for "prompt me for a search key, open a new tab, search that thing, then take me back to the tab I was in before".


Comment by sunny-from-qad on l · 2019-12-31T00:33:11.321Z · LW · GW

Thanks for the link. Your guess is right: from a cursory glance it looks like this software would be a bit too heavyweight for my purposes. But, I bet somebody will benefit from seeing this.