Posts

Arguments in parallel vs arguments in series 2021-11-09T08:31:46.965Z
Does anyone else sometimes "run out of gas" when trying to think? 2021-02-03T12:57:55.188Z
I object (in theory) 2020-12-30T12:50:56.420Z
How worried are the relevant experts about a magnetic pole reversal? 2020-10-07T16:19:51.607Z
Why associative operations? 2020-07-16T12:36:47.802Z
The allegory of the hospital 2020-07-02T21:46:09.269Z
l 2019-12-30T05:23:51.727Z
What's going on with "provability"? 2019-10-13T03:59:08.748Z
[Linkpost] Otu and Lai: a story 2019-09-15T20:21:12.445Z
How has rationalism helped you? 2019-08-24T01:31:06.616Z
Information empathy 2019-07-30T01:32:45.174Z
Sunny's Shortform 2019-07-28T02:40:05.241Z
Bayes' Theorem in three pictures 2019-07-21T07:01:45.068Z
Why it feels like everything is a trade-off 2019-07-18T01:33:04.764Z

Comments

Comment by Sunny from QAD (Evan Rysdam) on Arguments in parallel vs arguments in series · 2021-11-11T07:56:51.245Z · LW · GW

Right. The 100 arguments the person gives aren't 100 parallel arguments in favor of them having good reasons to believe evolution is false, for exactly the reason you give. So my reasoning doesn't stop you from concluding that they have no good reason to disbelieve.

And, they are still 100 arguments in parallel that evolution is false, and my reasoning in the post correctly implies that you can't read five of them, see that they aren't good arguments, and conclude that evolution is true. (That conclusion requires a good argument in favor of evolution, not a bad one against it.)

Comment by Sunny from QAD (Evan Rysdam) on Arguments in parallel vs arguments in series · 2021-11-11T07:50:22.883Z · LW · GW

Yeah. I should have made it clear that this post is prescribing a way to evaluate incoming arguments, rather than describing how outgoing arguments will be received by your audience.

Comment by Sunny from QAD (Evan Rysdam) on alenglander's Shortform · 2021-09-08T07:34:09.586Z · LW · GW

Alternate framing: if you already know that criticisms coming from one's outgroup are usually received poorly, then the fact that they are received better when coming from the ingroup is a hidden "success mode" that perhaps people could use to make criticisms go down easier somehow.

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2021-09-08T07:29:20.805Z · LW · GW

Idea: "Ugh-field trades", where people trade away their obligations that they've developed ugh-fields for in exchange for other people's obligations. Both people get fresh non-ugh-fielded tasks. Works only in cases where the task can be done by somebody else, which won't be every time but might be often enough for this to work.

Comment by Sunny from QAD (Evan Rysdam) on Fermi Fingers · 2021-08-10T22:52:33.702Z · LW · GW

or 10^(+/- 35) if you're weird

Excuse you, you mean 6^(+/- 35) !

Comment by Sunny from QAD (Evan Rysdam) on Working With Monsters · 2021-07-21T10:08:06.252Z · LW · GW

This is a nice story, and nicely captures the internal dissonance I feel about cooperating with people who disagree with me about my "pet issue", though like many good stories it's a little simpler and more extreme than what I actually feel.

Comment by Sunny from QAD (Evan Rysdam) on Precognition · 2021-06-22T01:10:05.928Z · LW · GW

This could be a great seed for a short story. The protagonist can supposedly see the future but actually they're just really really good at seeing the present and making wise bets. 

Comment by Sunny from QAD (Evan Rysdam) on Precognition · 2021-06-22T01:07:59.657Z · LW · GW

May I see it too? 

Asking because the post above advised me to purchase cheap chances at huge upsides and this seems like one of those ^^

Comment by Sunny from QAD (Evan Rysdam) on Small and Vulnerable · 2021-05-10T05:01:37.670Z · LW · GW

This is a lovely post and it really resonated with me. I've yet to really orient myself in the EA world, but "fix the normalization of child abuse" is something I have in my mind as a potential cause area. Really happy to hear you've gotten out, even if the permanent damage from sleep deprivation is still sad.

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2021-04-24T01:44:06.116Z · LW · GW

I just caught myself committing a bucket error.

I'm currently working on a text document full of equations that use variables with extremely long names. I'm in the process of simplifying it by renaming the variables. For complicated reasons, I have to do this by hand.

Just now, I noticed that there's a series of variables O1-O16, and another series of variables F17-F25. For technical reasons relating to the work I'm doing, I'm very confident that the name switch is arbitrary and that I can safely rename the F's to O's without changing the meaning of the equations.

But I'm doing this by hand. If I'm wrong, I will potentially waste a lot of work by (1) making this change, (2) making a bunch of other changes, (3) realizing I was wrong, (4) undoing all the other changes, (5) undoing this change, and (6) re-doing all the changes that came after it.

And for a moment, this spurred me to become less confident about the arbitrariness of the naming convention!

The correct thought would have been "I'm quite confident about this, but seeing as the stakes are high if I'm wrong and I can always do this later, it's still not worth it to make the changes now."

The problem here was that I was conflating "X is very likely true" with "I must do the thing I would do if X were certain". I knew instinctively that making the changes now was a bad idea, and then I incorrectly reasoned that it was because it was likely to go wrong. It's actually unlikely to go wrong; it's just that if it does go wrong, it's a huge inconvenience.

Whoops.

Comment by Sunny from QAD (Evan Rysdam) on Vim · 2021-04-07T23:35:08.170Z · LW · GW

It's funny that this came up on LessWrong around this time, as I've just recently been thinking about how to get vim-like behavior out of arbitrary text boxes. Except I also have the additional problem that I'm somewhat unsatisfied with vim. I've been trying to put together my own editor with an "API first" mentality, so that I might be able to, I don't know, eventually produce some kind of GTK widget that acts like my editor by default. Or something. And then maybe it'll be easy to make a variant of, say, Thunderbird, in which the email-editing text box is one of those instead of a normal text box.

(If you're curious, I have two complaints about vim. (1) It's a little bloated, what with being able to open a terminal inside of the editor and using a presumably baked-in variant of sed to do find-and-replace rather than making you go through a generic "run such-and-such program on such-and-such text selection" command if you want the fancy sed stuff. And (2) its commands are slightly irregular, like how d/foo deletes everything up to what the cursor would land on if you just typed /foo but how dfi deletes everything up to and including what the cursor would land on if you just typed fi.)

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2021-04-01T18:54:35.435Z · LW · GW

Also as a side note, I'm curious what's actually in the paywalled posts. Surely people didn't write a bunch of really high-quality content just for an April Fools' day joke?

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2021-04-01T18:51:46.860Z · LW · GW

I was 100%, completely, unreservedly fooled by this year's April Fools' joke. Hilarious XDD

Comment by Sunny from QAD (Evan Rysdam) on Privacy vs proof of character · 2021-02-28T14:01:13.717Z · LW · GW

the paucity of scenarios where such a proof would be desired (either due to a lack of importance of such character, or a lack of relevant doubt),

(or by differing opinion of what counts as desirable character!)

Comment by Sunny from QAD (Evan Rysdam) on Privacy vs proof of character · 2021-02-28T13:59:33.671Z · LW · GW

To summarize: a binary property P is either discernable (can't keep your status private) or not (can't prove your status).

Comment by Sunny from QAD (Evan Rysdam) on Remember that to value something infinitely is usually to give it a finite dollar value · 2021-02-16T18:27:48.675Z · LW · GW

It seems like "agent X puts a particular dollar value on human life" might be ambiguous between "agent X acts as though human lives are worth exactly N dollars each" and "agent X's internal thoughts explicitly assign a dollar value of N to a human life". I wonder if that's causing some confusion surrounding this topic. (I didn't watch the linked video.)

Comment by Sunny from QAD (Evan Rysdam) on Chinese History · 2021-02-15T08:04:20.266Z · LW · GW

I haven't read the post, but I thought I should let you know that several questions have answers that are not spoiler'd.

Comment by Sunny from QAD (Evan Rysdam) on Speedrunning my Morning Makes the Coffee Taste Weird · 2021-02-12T14:50:12.032Z · LW · GW

(The glitch exploits a subpixel misalignment present in about 0.1% of Toyota cars and is extremely difficult to execute even if you know you have a car with the alignment issue right in front of you.)

Comment by Sunny from QAD (Evan Rysdam) on Speedrunning my Morning Makes the Coffee Taste Weird · 2021-02-12T14:48:17.757Z · LW · GW

If you think traffic RNG is bad in the Glitchless category, you should watch someone streaming any% attempts. The current WR has a three-mile damage boost glitch that skips the better part of the commute, saving 13 minutes, and the gal who got it had to grind over 14k attempts for it (about a dozen of them got similar boosts but died on impact).

Comment by Sunny from QAD (Evan Rysdam) on Image vs. Impact: Can public commitment be counterproductive for achievement? · 2021-02-12T02:22:44.164Z · LW · GW

Your comment blew my mind.

Alternative: write important things many times.

Comment by Sunny from QAD (Evan Rysdam) on Speedrunning my Morning Makes the Coffee Taste Weird · 2021-02-12T00:34:30.828Z · LW · GW

Nice time. Here are some thoughts for possible additional timesaves:

  • Wake your partner up before even putting the coffee on so she can be a little more awake when she's helping with your hair.
  • Sleep in your work clothes to skip the part where you get dressed.
  • Drive 20-30mph over the speed limit. (This is probably best as an IL strat, since if you crash or get pulled over then the run is pretty much dead.)

If you manage to get all these in a run, then depending on the length of your commute I think you'll be able to gold this split by 5-10 more minutes.

Comment by Sunny from QAD (Evan Rysdam) on Ways of being with you · 2021-02-05T17:34:15.577Z · LW · GW

This reminds me of something I thought of a while back, that I'd like to start doing again now that I've remembered it. Whenever I sense myself getting unfairly annoyed at someone (which happens a lot) I try to imagine that I'm watching a movie in which that person is the protagonist. I imagine that I know what their story and struggles are, and that I'm rooting for them every step of the way. Now that I'm getting into fiction writing, I might also try imagining that I'm writing them as a character, which has the same vibe as the other techniques. The one time I've actually tried this so far, it worked really well!

Comment by Sunny from QAD (Evan Rysdam) on Does anyone else sometimes "run out of gas" when trying to think? · 2021-02-04T12:22:33.464Z · LW · GW

Thanks for your response!

Comment by Sunny from QAD (Evan Rysdam) on Does anyone else sometimes "run out of gas" when trying to think? · 2021-02-04T12:16:38.611Z · LW · GW

Thanks for sharing!

Comment by Sunny from QAD (Evan Rysdam) on Does anyone else sometimes "run out of gas" when trying to think? · 2021-02-04T12:10:08.103Z · LW · GW

Re the second sentence: lol. Yeah, I bet you're right.

Your last paragraph is interesting to me. I don't think I can say that I've had the same experience, though I do think that some people have that effect on me. I can think of at least one person I normally don't run out of gas when talking to. But I think other people actually amplify the problem. For example, I meet with three of my friends for a friendly debate on a weekly basis, and the things they say frequently run against the grain of my mind, and I often run out of gas while trying to figure out how to respond to them.

Comment by Sunny from QAD (Evan Rysdam) on Does anyone else sometimes "run out of gas" when trying to think? · 2021-02-04T11:59:24.199Z · LW · GW

Thanks for sharing!

Comment by Sunny from QAD (Evan Rysdam) on Does anyone else sometimes "run out of gas" when trying to think? · 2021-02-04T11:58:24.403Z · LW · GW

This very much matches my own experiences! Keeping something in the back of my mind has always been somewhere between difficult and impossible for me, and for that reason I set timers for all important events during the day (classes, interviews, meetings, etc.). I also carry a pocket-sized notebook and a writing utensil with me wherever I go, in case I stumble on something that I have to deal with "later".

I have also found my attention drifting away in the middle of conversations, and I too have cultivated the skill of non-rudely admitting to it and asking the other person to repeat themselves.

As for improvising... I play piano, and the main thing I do is improvise! I find improv sessions much easier to stay engaged in than sessions spent trying to read through sheet music.

And, I also have a ton of projects that are 1/4 to 3/4 done (though I think that's probably common to a larger subset of people than the other things).

So thanks for sharing your experiences! I had never seriously considered the possibility that I had ADHD before, even though I've known for a while that I have a somewhat atypical mind. I'm gonna look into that! Makes note in said pocket-sized notebook.

Side note: I think one reason I never wondered whether I have ADHD is that, in my perception, claiming to have ADHD is something of a "fad" among people in my age group, and I think my brain sort of silently assumed that that means it's not also a real condition that people can actually suffer from. That's gonna be a WHOOPS from me, dawg.

Comment by Sunny from QAD (Evan Rysdam) on I object (in theory) · 2020-12-31T10:52:53.914Z · LW · GW

Good point! I admit that although I've thought about this incident many times, this has never occurred to me.

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-12-16T11:56:14.643Z · LW · GW

When somebody is advocating taking an action, I think it can be productive to ask "Is there a good reason to do that?" rather than "Why should we do that?" because the former phrasing explicitly allows for the possibility that there is no good reason, which I think makes it both intellectually easier to realize that and socially easier to say it.

Comment by Sunny from QAD (Evan Rysdam) on Pain is not the unit of Effort · 2020-11-28T00:57:19.071Z · LW · GW

To answer that question, it might help to consider the situations where you even need to measure effort. Off the cuff, I'm not actually sure there are any (?). Maybe you're an employer and you need to measure how much effort your employees are putting in? But on second thought, that's actually a classic case where you don't need to measure effort, and you only need to measure results.

(Disclaimer: I have never employed anybody.)

Comment by Sunny from QAD (Evan Rysdam) on Pain is not the unit of Effort · 2020-11-25T09:41:17.683Z · LW · GW

pain isn't the unit of effort, but for many things it's correlated with whatever that unit is.

I think this correlation only appears if you're choosing strategies well. If you're tasked with earning a lot of money to give to charity, and you generate a list of 100 possible strategies, then you should toss out all the strategies that don't lie on the Pareto boundary of pain and success. (In other words, if strategy A is both less effective and more painful than strategy B, then you should never choose strategy A.) Pain will correlate with success in the remaining pool of strategies, but it doesn't correlate in the set of all strategies. And OP is saying that people often choose strategies that are off the Pareto boundary because they specifically select pain-inducing strategies under the misconception that those strategies will all be successful as well.
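(To make the filtering step concrete, here's a minimal sketch, assuming each strategy is just a hypothetical (pain, success) pair; the function name and the numbers are illustrative, not anything from the post:)

```python
# Minimal sketch: keep only strategies on the pain/success Pareto boundary.
# Lower pain is better, higher success is better; a strategy is dominated if
# some other strategy is at least as good on both axes and strictly better on one.

def pareto_boundary(strategies):
    """strategies: list of (pain, success) tuples."""
    kept = []
    for i, (pain_i, succ_i) in enumerate(strategies):
        dominated = any(
            pain_j <= pain_i and succ_j >= succ_i and (pain_j < pain_i or succ_j > succ_i)
            for j, (pain_j, succ_j) in enumerate(strategies)
            if j != i
        )
        if not dominated:
            kept.append((pain_i, succ_i))
    return kept

# Illustrative numbers only: (8, 40) is both more painful and less successful
# than (3, 50), so it never belongs in the pool you actually choose from.
print(pareto_boundary([(3, 50), (8, 40), (6, 70), (9, 90)]))  # [(3, 50), (6, 70), (9, 90)]
```

In the toy example, pain and success do correlate among the survivors (3 → 50, 6 → 70, 9 → 90), which is the only place the correlation shows up.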

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-10-31T08:49:45.420Z · LW · GW

A koan:

If the laundry needs to be done, put in a load of laundry.
If the world needs to be saved, save the world.
If you want pizza for dinner, go preheat the oven.

Comment by Sunny from QAD (Evan Rysdam) on You Only Live Twice · 2020-10-30T13:25:32.946Z · LW · GW

So it's been 10 years. How are you feeling about cryonics now?

Comment by Sunny from QAD (Evan Rysdam) on You Only Live Twice · 2020-10-30T13:24:39.678Z · LW · GW

It's been ten years. How are you enjoying life?

For what it's worth, I value you even though you're a stranger and even if your life is still going poorly. I often hear people saying how much better their life got after 30, after 40, after 50. Imagine how much larger the effect could be after cryosuspension!

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-10-30T08:27:12.457Z · LW · GW

I've been thinking of signing up for cryonics recently. The main hurdle is that it seems like it'll be kind of complicated, since at the moment I'm still on my parent's insurance, and I don't really know how all this stuff works. I've been worrying that the ugh field surrounding the task might end up being my cause of death by causing me to look on cryonics less favorably just because I subconsciously want to avoid even thinking about what a hassle it will be.

But then I realized that I can get around the problem by pre-committing to sign up for cryonics no matter what, then just cancelling it if I decide I don't want it.

It will be MUCH easier to make an unbiased decision if choosing cryonics means doing nothing rather than meaning that I have to go do a bunch of complicated paperwork now. It will be well worth a few months (or even years) of dues.

Comment by Sunny from QAD (Evan Rysdam) on No Logical Positivist I · 2020-10-28T16:44:07.625Z · LW · GW

Eliezer, you're definitely setting up a straw man here. Of course it's not just you -- pretty much everybody suffers from this particular misunderstanding of logical positivism.

How do you know that the phrase "logical positivism" refers to the correct formulation of the idea, rather than an exaggerated version? I have no trouble at all believing that a group of people discovered the very important notion that untestable claims can be meaningless, and then accidentally went way overboard into believing that difficult-to-test claims are meaningless too.

Comment by Sunny from QAD (Evan Rysdam) on Willpower Hax #487: Execute by Default · 2020-10-24T09:08:22.221Z · LW · GW

So it's been 11 years. Do you still remember pjeby's advice? Did it change your life?

Comment by Sunny from QAD (Evan Rysdam) on How worried are the relevant experts about a magnetic pole reversal? · 2020-10-08T07:46:46.396Z · LW · GW

There's evidence to be had in the fact that, though it's been known for a long time, it's not a big field of study with clear experts.

This is true. It's only a mild comfort to me, though, since I don't have too much faith in humanity's ability to conjure up fields of study for important problems. But I do have some faith.

From very light googling, it seems likely to happen over hundreds or thousands of years, which puts it pretty far down the list of x-risk worries IMO.

Also true. This makes me update away from "we might wake up dead tomorrow" and towards "the future might be pretty odd, like maybe we'll all wear radiation suits when we're outside for a few generations".

Comment by Sunny from QAD (Evan Rysdam) on How worried are the relevant experts about a magnetic pole reversal? · 2020-10-08T07:42:50.163Z · LW · GW

('overdue') presumes some knowledge of mechanism, which I don't have. Roughly speaking it's a 1 in 300,000 risk each year and not extinction level.

Am I misunderstanding, or is this an argument from ignorance? The article says we're overdue; that makes it sound like someone has an idea of what the mechanism is, and that person is saying that according to their model, we're overdue. Actually, come to think of it, "overdue" might not imply knowledge of a mechanism at all! Maybe we simply have good reason to believe that this has happened about every 300,000 years for ages, and conclude that "we're overdue" is a good guess.

it's not as though the field temporarily disappears completely!

How do you know?

Comment by Sunny from QAD (Evan Rysdam) on How worried are the relevant experts about a magnetic pole reversal? · 2020-10-08T07:38:11.770Z · LW · GW

Comment by Sunny from QAD (Evan Rysdam) on Postmortem to Petrov Day, 2020 · 2020-10-07T16:51:10.797Z · LW · GW

I'll just throw in my two cents here and say that I was somewhat surprised by how serious Ben's post is. I was around for the Petrov Day celebration last year, and I also thought of it as just a fun little game. I can't remember if I screwed around with the button or not (I can't even remember if there was a button for me).

Then again, I do take Ben's point: a person does have a responsibility to notice when something that's being treated like a game is actually serious and important. Not that I think 24 hours of LW being down is necessarily "serious and important".

Overall, though, I'm not throwing much of a reputation hit (if any at all) into my mental books for you.

Comment by Sunny from QAD (Evan Rysdam) on This Territory Does Not Exist · 2020-08-13T23:58:58.250Z · LW · GW

Yeah. This post could also serve, more or less verbatim, as a write-up of my own current thoughts on the matter. In particular, this section really nails it:

As above, my claim is not that the photon disappears. That would indeed be a silly idea. My claim is that the very claim that a photon "exists" is meaningless. We have a map that makes predictions. The map contains a photon, and it contains that photon even outside any areas relevant to predictions, but why should I care? The map is for making predictions, not for ontology.

[...]

I don't suppose that. I suppose that the concept of a photon actually existing is meaningless and irrelevant to the model.

[...]

This latter belief is an "additional fact". It's more complicated than "these equations describe my expectations".

And the two issues you mention — the spaceship that's leaving Earth to establish a colony that won't causally interact with us, and the question of whether other people have internal experiences — are the only two notes of dissonance in my own understanding.

(Actually, I do disagree with "altruism is hard to ground regardless". For me, it's very easy to ground. Supposing that the question "Do other people have internal conscious experiences?" is meaningful and that the answer is "yes", I just very simply would prefer those people to have pleasant experiences rather than unpleasant ones. Then again, you may mean that it's hard to convince other people to be altruistic, if that isn't their inclination. In that case, I agree.)

Comment by Sunny from QAD (Evan Rysdam) on The Valley of Bad Theory · 2020-08-09T02:54:59.680Z · LW · GW

Thanks for pointing this out. I think the OP might have gotten their conclusion from this paragraph:

(Note that, in the web page that the OP links to, this very paragraph is quoted, but for some reason "energy" is substituted for "center-of-mass". Not sure what's going on there.)

In any case, this paragraph makes it sound like participants who inherited a wrong theory did do worse on tests of understanding (even though participants who inherited some theory did the same on average as those who inherited only data, which I guess implies that those who inherited a right theory did better). I'm slightly off-put by the fact that this nuance isn't present in the OP's post, and that they haven't responded to your comment, but not nearly as much as I had been when I'd read only your comment, before I went to read (the first 200 lines of) the paper for myself.

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-08-06T21:35:15.212Z · LW · GW

Kk! Thanks for the discussion :)

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-08-06T01:28:15.394Z · LW · GW

Yeah, I just... stopped worrying about these kinds of things. (In my case, "these kinds of things" refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can't win this game. There are million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; [...]

I see. In that case, I think we're reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian -- you can often say things like "if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I'll just eat that cost; I need groceries!". My state of uncertainty is that I've barely put five minutes of thought into the question "I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long."

And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...

Well, that's another reference to "popular" theism. Popular theism is a subset of theism in general, which itself is a subset of "worlds in which there's something I should be doing that has infinite importance".

On the other hand, if you assume an evil god, then... maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.

Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O

So... you can't really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.

This advice makes sense, though given the state of uncertainty described above, I would say I'm already on it.

Psychologically, if you can't get rid of the idea of supernatural, maybe it would be better to believe in an actually good god. [...]

This is a good fallback plan for the contingency in which I can't figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!

[...] your theory can still benefit from some concepts having shorter words for historical reasons [...]

Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...

But historical evidence shows that humans are quite bad at this.

...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a "reasonable" set of priors.

I would still hesitate to call it a "formalism", though IIRC I don't think you've used that word. In my re-listen of the sequences, I've just gotten to the part where Eliezer uses that word. Well, I guess I'll take it up with somebody who calls it that.

By the way, it's just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam's razor. I'm nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?

[...] The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file.

Insightful comments! I see the connection: really, every compression of a file is a compression into the shortest program that will output that file, where the programming language is the decompression algorithm and the search algorithm that finds the shortest program isn't guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really really apt decompression routine (one that captures very well the nuanced nonrandomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
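(A small illustration of that correspondence, assuming Python's standard zlib as a stand-in decompression routine; the compressed payload plus a fixed-size decompressor stub together form a program that outputs the original file, so their combined length upper-bounds the file's Kolmogorov complexity up to that constant. The stub shown is schematic, not meant to be executed as-is:)

```python
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 100

# Compressing the file really does produce a "program" for regenerating it:
# the program is (fixed-size decompressor stub) + (compressed payload).
payload = zlib.compress(original, 9)

# Schematic stub only -- in real life the interpreter itself is part of the constant.
decompressor_stub = b"import sys,zlib;sys.stdout.buffer.write(zlib.decompress(PAYLOAD))"

program_length = len(decompressor_stub) + len(payload)
print(len(original), program_length)  # the "program" is far shorter than the raw file

# A better compressor = an apter decompression routine plus a better search for a
# short payload; a perfect one would need an (uncomputable) shortest-program oracle.
assert zlib.decompress(payload) == original
```

The fixed-size stub plays the role of the constant-sized penalty that comes with committing to any particular language.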

> But then my concern just transforms into "what if there's a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc".

Then we are no longer talking about gods in the modern sense, but about powerful aliens.

Well, if the "inside/outside the universe" distinction is going to mean "is/isn't causally connected to the universe at all" and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.

But in any case, the distinction isn't too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I'd be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-08-02T12:04:07.368Z · LW · GW

Aha, no, the mind reading part is just one of several cultures I'm mentioning. (Guess Culture, to be exact.) If I default to being an Asker but somebody else is a Guesser, I might have the following interaction with them:

Me: [looking at some cookies they just made] These look delicious! Would it be all right if I ate one?

Them: [obviously uncomfortable] Uhm... uh... I mean, I guess so...

Here, it's retroactively clear that, in their eyes, I've overstepped a boundary just by asking. But I usually can't tell in advance what things I'm allowed to ask and what things I'm not allowed to ask. There could be some rule that I just haven't discovered yet, but because I haven't discovered it yet, it feels to me like each case is arbitrary, and thus it feels like I'm being required to read people's minds each time. Hence why I'm tempted to call Guess Culture "Read-my-mind Culture".

(Contrast this to Ask Culture, where the rule is, to me, very simple and easy to discover: every request is acceptable to make, and if the other person doesn't want you to do what you're asking to do, they just say "no".)

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-07-31T21:32:53.565Z · LW · GW

I couldn't parse this question. Which part are you referring to by "it", and what do you mean by "instead of asking you"?

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-07-31T16:48:01.643Z · LW · GW

The Civ analogy makes sense, and I certainly wouldn't stop at disproving all actually-practiced religions (though at the moment I don't even feel equipped to do that).

Well, you cannot disprove such thing, because it is logically possible. (Obviously, "possible" does not automatically imply "it happened".) But unless you assume it is "simulations all the way up", there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.

Are you sure it's logically possible in the strict sense? Maybe there's some hidden line of reasoning we haven't yet discovered that shows that this universe isn't a simulation! (Of course, there's a lot of question-untangling that has to happen first, like whether "is this a simulation?" is even an appropriate question to ask. See also: Greg Egan's book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)

It's just a cosmic horror that you need to learn to live with. There are more.

This sounds like the kind of thing someone might say who is already relatively confident they won't suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...

(WARNING: graphic imagery) ...upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).

Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn't still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?

Any programming language; for large enough values it doesn't matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and implementing the algorithm in Python. So if you chose a wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied to debating what happens in general when the data grow.

The constant-sized penalty makes sense. But I don't understand the claim that this concept is usually applied in the context of looking at how things grow. Occam's razor is (apparently) formulated in terms of raw Kolmogorov complexity -- the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X. 
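(Spelling that out in the usual Solomonoff/Kolmogorov-style notation; this is the standard textbook form, not something quoted from the discussion:)

$$P(X) \propto 2^{-K(X)}, \qquad K(X) = \min\{\, |p| : U(p) = X \,\}$$

Here U is some fixed universal machine and |p| is the length of program p in bits, so the choice of U is exactly the "choice of programming language" at issue; switching machines shifts K(X) by at most an additive constant (the length of an interpreter), which is the constant-sized penalty mentioned above.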

Let's say general relativity is being compared against Theory T, and the programming language is Python. Doesn't it make a huge difference whether you're allowed to "pip install general-relativity" before you begin? 

But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.

I agree that these intuitions can exist, but if I'm going to use them, then I detest this process being called a formalization! If I'm allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don't I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form "programming languages that generate priors that work tend to have characteristic X" can be transformed into wisdom of the form "priors that work tend to have characteristic X".

Just an intuition pump: [...]

I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn't seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.

(Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)

Can I ask which related concepts you mean?

[...] so it is the complexity of the outside universe.

Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into "what if there's a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc".

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-07-30T06:20:30.101Z · LW · GW

Epistemic status: really shaky, but I think there's something here.

I naturally feel a lot of resistance to the way culture/norm differences are characterized in posts like Ask and Guess and Wait vs Interrupt Culture. I naturally want to give them little pet names, like:

  • Guess culture = "read my fucking mind, you badwrong idiot" culture.
  • Ask culture = nothing, because this is just how normal, non-insane people act.

I think this feeling is generated by various negative experiences I've had with people around me, who, no matter where I am, always seem to share between them one culture or another that I don't really understand the rules of. This leads to a lot of interactions where I'm being told by everyone around me that I'm being a jerk, even when I can "clearly see" that there is nothing I could have done that would have been correct in their eyes, or that what they wanted me to do was impossible or unreasonable.

But I'm starting to wonder if I need to let go of this. When I feel someone is treating me unfairly, it could just be because (1) they are speaking in Culture 1, and (2) I am listening in Culture 2 and hearing something they don't mean to transmit. If I were more tuned in to what people meant to say, my perception of people who use other norms might change.

I feel there's at least one more important pair of cultures, and although I haven't mentioned it yet, it's the one I had in mind most while writing this post. Something like:

  • Culture 1: Everyone speaks for themselves only, unless explicitly stated otherwise. Putting words in someone's mouth or saying that they are "implying" something they didn't literally say is completely unacceptable. False accusations are taken seriously and reflect poorly on the accuser.
  • Culture 2: The things you say reflect not only on you but also on people "associated" with you. If X is what you believe, you might have to say Y instead if saying X could be taken the wrong way. If someone is being a jerk, you don't have to extend the courtesy of articulating their mistake to them correctly; you can just shun them off in whatever way is easiest.

I don't really know how real this dichotomy is, and if it is real, I don't know for sure how I feel about one being "right" and the other being "wrong". I tried semi-hard to give a neutral take on the distinction, but I don't think I succeeded. Can people reading this tell which culture I naturally feel opposed to? Do you think I've correctly put my finger on another real dichotomy? Which set of norms, if either, do you feel more in tune with?

Comment by Sunny from QAD (Evan Rysdam) on Sunny's Shortform · 2020-07-29T20:50:20.021Z · LW · GW

But atoms aren't similar to calories, are they? I maintain that this hypothesis could be literally false, rather than simply unhelpful.