Posts

Ten Modes of Culture War Discourse 2024-01-31T13:58:20.572Z
On the proper date for solstice celebrations 2023-10-20T13:55:02.999Z
Proof of posteriority: a defense against AI-generated misinformation 2023-07-17T12:04:29.593Z
What is some unnecessarily obscure jargon that people here tend to use? 2023-07-12T13:52:22.832Z
Through a panel, darkly: a case study in internet BS detection 2023-07-02T13:40:48.186Z
Solstice song: Here Lies the Dragon 2022-12-26T16:08:34.740Z
Austin LW meetup notes: The FTX Affair 2022-11-22T14:01:15.625Z
Charging for the Dharma 2022-11-11T14:02:51.811Z
Is there a good way to award a fixed prize in a prediction contest? 2022-11-02T21:37:45.111Z
Crossword puzzle: LessWrong Halloween 2022 2022-10-21T12:41:58.676Z
Adversarial epistemology 2022-08-24T16:57:03.165Z
LW Meetup @ DEFCON (Las Vegas) - 5-7pm Thu. Aug. 11 at Forum Food Court (Caesars) 2022-08-08T14:57:34.588Z
To what extent is your AGI timeline bimodal or otherwise "bumpy"? 2022-05-16T17:42:54.281Z
The Tree of Worlds (Solstice speech) 2021-12-20T04:39:39.313Z
The shoot-the-moon strategy 2021-07-21T16:19:48.226Z
The Schelling Game (a.k.a. the Coordination Game) 2021-05-03T14:31:38.267Z
Texas Freeze Retrospective: meetup notes 2021-03-03T14:48:18.965Z
Texas Freeze Retrospective & Emergency Planning (Non-Texans Welcome!) 2021-02-25T00:05:24.199Z
Teacher's Password: The LessWrong Mystery Hunt Team 2020-12-04T00:04:42.900Z
Interest survey: Forming an MIT Mystery Hunt team (Jan. 15-18, 2021) 2020-11-13T18:33:13.745Z
Austin Petrov Day: 6:30pm 9/26 2020-09-08T14:23:53.079Z
Socially-distanced outdoor Petrov Day ceremonial manual 2020-08-24T14:11:49.804Z
Austin LW/SSC Far-comers Meetup: Feb. 8, 1:30pm 2020-01-14T14:46:33.173Z
Austin LW: Survey for far-traveling attendees Jan-Feb 2020 2019-12-30T15:40:22.442Z
Austin meetup notes Nov. 16, 2019: SSC discussion 2019-11-19T13:30:53.446Z
Austin LW/SSC/EA "Meetups Everywhere" Meetup: 9/30 6pm 2019-09-12T05:40:14.096Z

Comments

Comment by jchan on If I care about measure, choices have additional burden (+AI generated LW-comments) · 2024-11-16T23:50:59.700Z · LW · GW

However, in Many-Worlds Interpretation (MWI), I split my measure between multiple variants, which will be functionally different enough to regard my future selves as different minds. Thus, the act of choice itself lessens my measure by a factor of approximately 10. If I care about this, I'm caring about something unobservable.

If we're going to make sense of living in a branching multiverse, then we'll need to adopt a more fluid concept of personal identity.

Scenario: I take a sleeping pill that will make me fall asleep in 30 minutes. However, the person who wakes up in my bed the next morning will have no memory of that 30-minute period; his last memory will be of taking the pill.

If I imagine myself experiencing that 30-minute interval, intuitively it doesn't at all feel like "I have less than 30 minutes to live." Instead, it feels like I'd be pretty much indifferent to being in this situation - maybe the person who wakes up tomorrow is not "me" in the artificial sense of having a forward-looking continuity of consciousness with my current self, but that's not really what I care about anyway. He is similar enough to current-me that I value his existence and well-being to nearly the same degree as I do my own; in other words, he "is me" for all practical purposes.

The same is true of the versions of me in nearby world branches. I can no longer observe or influence them, but they still "matter" to me. Of course, the degree of self-identification will decrease over time as they diverge, but then again, so does my degree of identification with the "me" many decades in the future, even assuming a single timeline.

Comment by jchan on Inquisitive vs. adversarial rationality · 2024-09-19T01:48:31.497Z · LW · GW

This can be a great time-saver because it relies on each party to present the best possible case for their side. This means I don't have to do any evidence-gathering myself; I just need to evaluate the arguments presented, with that heuristic in mind. For example, if the pro-X side cites a bunch of sources in favor of X, but I look into them and find them unconvincing, then this is pretty good evidence against X, and I don't have to go combing through all the other sources myself. The mere existence of bad arguments for X is not in itself evidence against X, but the fact that they're presented as the best possible arguments is.

Of course the problem is, outside of a legal proceeding, parties rarely have that strong an incentive to dig up the best possible arguments. Their time is limited as well, and they don't really suffer much consequence from failing to convince you. Also, the discussion medium might structurally impede the best arguments from being given (e.g. replies in a Twitter thread need to be posted quickly or else nobody will see them). Or worse yet, a skilled propaganda campaign can flood the zone with bad pro-X arguments from personages who appear to be pro-X but are secretly against it, knowing that the audience is going to be evaluating these arguments using the adversarial heuristic.

Comment by jchan on social lemon markets · 2024-04-26T15:47:46.584Z · LW · GW

In my experience, Americans are actually eager to talk to strangers and make friends with them if and only if they have some good reason to be where they are and talk to those people, besides making friends.

A corollary of this is that if anyone at an [X] gathering is asked “So, what got you into [X]?” and answers “I heard there’s a great community around [X]”, then that person needs to be given the cold shoulder and made to feel unwelcome, because otherwise the bubble of deniability is pierced and the lemon spiral will set in, ruining it for everyone else.

However, this is pretty harsh, and I’m not confident enough in this chain of reasoning to actually “gatekeep” people like this in practice. Does this ring true to you?

Comment by jchan on On green · 2024-03-22T18:04:52.155Z · LW · GW

I highly recommend Val Plumwood's essay Tasteless: towards a food-based approach to death for a "green-according-to-green" perspective.

Plumwood would turn the "deep atheism" framing on its head, by saying in effect "No, you (the rationalist) are the real theist". The idea is that even if you've rejected Cartesian/Platonic dualism in metaphysics, you might still cling for historical reasons to a metaethical-dualist view that a "real monist" would reject, i.e. the dualism between the evaluator and the evaluated, or between the subject and object of moral values. Plumwood (I think) would say that even the "yin" (acceptance of nature) framing is missing the mark, because it still assumes a distinction between the one doing the accepting and the nature being accepted, positing that they simply happen to be aligned through some fortunate circumstance, rather than being one and the same thing.

Comment by jchan on Ten Modes of Culture War Discourse · 2024-02-04T14:25:52.970Z · LW · GW

It's a question of whether drawing a boundary on the "aligned vs. unaligned" continuum produces an empirically-valid category; and to this end, I think we need to restrict the scope to the issues actually being discussed by the parties, or else every case will land on the "unaligned" side. Here, both parties agree on where they stand vis-a-vis C and D, and so would be "Antagonistic" in any discussion of those options, but since nobody is proposing them, the conversation they actually have shouldn't be characterized as such.

Comment by jchan on Ten Modes of Culture War Discourse · 2024-02-01T15:54:52.375Z · LW · GW

On the contrary, I'd say internet forum debating is a central example of what I'm talking about.

Comment by jchan on Ten Modes of Culture War Discourse · 2024-02-01T15:53:32.127Z · LW · GW

This "trying to convince" is where the discussion will inevitably lead, at least if Alice and Bob are somewhat self-aware. After the object-level issues have been tabled and the debate is now about whether Alice is really on Bob's side, Bob will view this as just another sophisticated trick by Alice. In my experience, Bob-as-the-Mule can only be dislodged when someone other than Alice comes along, who already has a credible stance of sincere friendship towards him, and repeats the same object-level points that Alice made. Only then will Bob realize that his conversation with Alice had been Cassandra/Mule.

(Example I've heard: "At first I was indifferent about whether I should get the COVID vaccine, but then I heard [detestable left-wing personalities] saying I should get it, so I decided not to out of spite. Only when [heroic right-wing personality] told me it was safe did I get it.")

Comment by jchan on Ten Modes of Culture War Discourse · 2024-02-01T15:52:27.256Z · LW · GW

#1 - I hadn't thought of it in those terms, but that's a great example.

#2 - I think this relates to the involvement of the third-party audience. Free speech will be "an effective arena of battle for your group" if you think the audience will side with you once they learn the truth about what [outgroup] is up to. Suppose Alice and Bob are the rival groups, and Carol is the audience, and:

  • Alice/Bob are SE/SE (Antagonist/Antagonist)
  • Alice/Carol are SF/IE (Guru/Rebel)
  • Bob/Carol are IF/SE (Siren/Sailor)

If this is really what's going on, Alice will be in favor of the debate continuing because she thinks it'll persuade Carol to join her, while Bob is opposed to the debate for the same reason. This is why I personally am pro-free-speech - because I think I'm often in the role of Carol, and supporting free speech is a "tell" for who's really on my side.

Comment by jchan on A discussion of normative ethics · 2024-01-17T19:55:34.905Z · LW · GW

I think this is not a great example because the virtues being extolled here are orthogonal to the outcome.

Would it still be possible to explain these virtues in a consequentialist way, or is it only some virtues that can be explained in this way?

And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I'm not sure what the conflict between virtue ethics and consequentialism would be here.

The special difficulty here is that the two sides are following the same virtue-ethics framework, and come into conflict precisely because of that. So, whatever this framework is, it cannot be cashed out into a single corresponding consequentialist framework that gives the same prescriptions.

Comment by jchan on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T19:51:37.696Z · LW · GW

It could be that people regard the likelihood of being resurrected into a bad situation (e.g. as a zoo exhibit, a tortured worker em, etc.) as outweighing that of a positive outcome.

Comment by jchan on A discussion of normative ethics · 2024-01-11T17:46:49.325Z · LW · GW

Aren't there situations (at least in some virtue-ethics systems) where it's fundamentally impossible to reduce (or reconcile) virtue-ethics to consequentialism because actions tending towards the same consequence are called both virtuous and unvirtuous depending on who does them? (Or, conversely, where virtuous conduct calls for people to do things whose consequences are in direct opposition.)

For example, the Iliad portrays both Achilles (Greek) and Hector (Trojan) as embodying the virtues of bravery/loyalty/etc. for fighting for their respective sides, even though Achilles's consequentialist goal is for Troy to fall, and Hector's is for that not to happen. Is this an accurate characterization of how virtue-ethics works? Is it possible to explain this in a consequentialist frame?

Comment by jchan on Austin LW/SSC Winter Solstice 2023 · 2023-12-21T18:22:51.384Z · LW · GW

Thanks everyone for coming! Feedback survey here: https://forms.gle/w32pisonKdwK1bHJ6

Comment by jchan on Portable Chargers are Great · 2023-11-22T06:34:35.825Z · LW · GW

It's also nice to be able to charge up in a place where directly plugging in your device would be inconvenient or would risk theft, e.g. at a busy cafe where the only outlet is across the room from your table.

Comment by jchan on What is an "anti-Occamian prior"? · 2023-10-23T18:38:19.247Z · LW · GW

I want to say something like: "The bigger N is, the bigger a computer needs to be in order to implement that prior; and given that your brain is the size that it is, it can't possibly be setting N=3↑↑↑↑↑3."

Now, this isn't strictly correct, since the Solomonoff prior is uncomputable regardless of the computer's size, etc. - but is there some kernel of truth there? Like, is there a way of approximating the Solomonoff prior efficiently, which becomes less efficient the larger N gets?

Comment by jchan on Discussion: LLaMA Leak & Whistleblowing in pre-AGI era · 2023-03-07T16:27:07.200Z · LW · GW

I'm unsure whether it's a good thing that LLaMA exists in the first place, but given that it does, it's probably better that it leak than that it remain private.

What are the possible bad consequences of inventing LLaMA-level LLMs? I can think of three. However, #1 and #2 are of a peculiar kind where the downsides are actually mitigated rather than worsened by greater proliferation. I don't think #3 is a big concern at the moment, but this may change as LLM capabilities improve (and please correct me if I'm wrong in my impression of current capabilities).

  1. Economic disruption: LLMs may lead to unemployment because it's cheaper to use one than to hire a human to do the same work. However, given that they already exist, it's only a question of whether the economic gains accrue to a few large corporations or to a wider mass of people. If you think economic inequality is bad (whether per se or due to its consequences), then you'll think the LLaMA leak is a good thing.
  2. Informational chaos: You can never know whether product reviews, political opinions, etc. are actually genuine expressions of what some human being thinks rather than AI-generated fluff created by actors with an interest in deceiving you. This was already a problem (i.e. paid shills), but with LLMs it's much easier to generate disinformation at scale. However, this problem "solves itself" once LLMs are so easily accessible that everyone knows not to trust anything they read anyway. (By contrast, if LLMs are kept private, AI-generated content seems more trustworthy because it comes in a wider context where most content is still human-authored.)
  3. Infohazard production: If e.g. there's some way of building a devastating bioweapon using household materials, then it'd be really bad if LLaMA made this knowledge more accessible, or could discover it anew. However, I haven't seen any evidence that LLaMA is capable of discovering new scientific knowledge that's not in the training set, or that querying it to surface existing such knowledge is any more effective than using a regular search engine. But this may change with more advanced models.

Comment by jchan on Talking to God · 2023-01-03T23:11:56.008Z · LW · GW

One time, a bunch of particularly indecisive friends had started an email thread in order to arrange a get-together. Several of them proposed various times/locations but nobody expressed any preferences among them. With the date drawing near, I broke the deadlock by saying something like "I have consulted the omens and determined that X is the most auspicious time/place for us to meet." (I hope they understood I was joking!) I have also used coin-flips or the hash of an upcoming Bitcoin block for similar purposes.

I think the sociological dynamic is something like: Nobody really cares what we coordinate on, but they do care about (a) not wanting to be seen as unjustifiably grabbing social status by imposing a single choice on everyone else, and (b) not wanting to accept lower status by going along with someone else's preference. So, to coordinate, we defer the choice to some "objective" external process, so that nobody's social status is altered by it.
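For the Bitcoin-block version, here's a minimal sketch of the mechanics (the options and block hash below are made-up placeholders, not from any real event): everyone agrees in advance on a future block height, and once that block is mined, its hash deterministically picks the option.

    # Minimal sketch (hypothetical values): pick one option using the hash of a
    # pre-agreed future Bitcoin block, so that no individual's preference decides it.
    options = [
        "Saturday 2pm at the park",
        "Sunday 11am at the coffee shop",
        "Friday 7pm at Alice's place",
    ]
    # Placeholder - paste the real hash of the agreed-upon block once it's mined.
    block_hash = "00000000000000000001a2b3c4d5e6f7089abcdef0123456789abcdef0123456"
    print(options[int(block_hash, 16) % len(options)])

(The modulo step is very slightly biased when the number of options doesn't divide 2^256 evenly, but for a handful of options the bias is negligible.)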

An example where this didn't work: The Gregorian calendar took centuries to be adopted throughout Europe, despite being justified by "objective" astronomical data, because non-Catholic countries thought of it as a "papal imposition" whose acceptance would imply acceptance of the Pope's authority over the whole Christian church. (Much better to stick with Julius Caesar's calendar instead!)

Comment by jchan on Shared reality: a key driver of human behavior · 2022-12-26T17:00:44.672Z · LW · GW

This may shed some light onto why people have fun playing the Schelling game. It's always amusing when I discover how uncannily others' thoughts match my own, e.g. when I think to myself "X! No, X is too obscure, I should probably say the more common answer Y instead", and then it turns out X is the majority answer after all.

Comment by jchan on Austin LW/SSC Winter Solstice 2022 · 2022-12-21T14:48:35.343Z · LW · GW

Thanks everyone for coming! Feedback survey here: https://forms.gle/Nx4vqmXZnJ8EuuKP9

Comment by jchan on Boston Solstice 2022 Retrospective · 2022-12-20T20:34:44.993Z · LW · GW

What exactly did you do with the candles? I've seen pictures and read posts mentioning the fact that candles are used at solstice events, but I'm having trouble imagining how it works without being logistically awkward. E.g.:

  1. Where are the candles stored before they're passed out to the audience?
  2. At what point are the candles passed out? Do people get up from their seats, go get a candle, and then return to their seats, or do you pass around a basket full of candles?
  3. When are the candles initially lit? Before or after they're distributed?
  4. When are the candles extinguished during the "darkening" phase? How does each person know when to extinguish their own candle?
  5. Is there a point later when people can ditch their candles? Otherwise, it must be annoying to have to hold a lit candle throughout the whole "brightening" phase.
  6. What happens to the candles at the end?

Comment by jchan on The True Spirit of Solstice? · 2022-12-15T18:36:21.873Z · LW · GW

I wrote up the following a few weeks ago in a document I shared with our solstice group, which seems to independently parallel G Gordon Worley III's points:

To- | morrow can be brighter than [1]
to- | day, although the night is cold [2]
the | stars may seem so very far
a- | way... [3]
But | courage, hope and reason burn,
in | every mind, each lesson learned, [4]
[5] | shining light to guide our way,
[6] | make tomorrow brighter than [7]
to- | day....

  1. It's weird that the comma isn't here, but rather 1 beat later.
  2. The unnecessary syncopation on "night is cold" is all but guaranteed to throw people off.
  3. If this is supposed to rhyme with "today" from before, it falls flat because "today" is not really at the end of the line, despite the way it's written.
  4. A rhyme is set up here with "burn"/"learned," but there is no analogous rhyme in the first stanza.
  5. It really feels like there should be an unstressed pickup syllable here, based on the expectation set by all the previous measures.
  6. Same here.
  7. The stanza should really end here, but it goes on for another measure. (A 9-measure phrase? Who does that?)

To clarify some of these points:

  • 1 & 3: There's a mismatch between the poetic grouping of words and the rhythmical grouping, which is probably why bgaesop stumbles at that spot. This mismatch is made obvious by writing out the words according to the rhythmical grouping, as above.
  • 2: The "official" version has "night is cold" on a downbeat with the rhythm "16th, 8th, quarter", which is a very unusual rhythm. Notice that in the live recording here, the group attempts the syncopated rhythm the first time, but stumbles into "the stars may seem...", and then reverts to the much more natural rhythm "8th, 8th, dotted-8th" in all subsequent iterations.
  • 7: Mozart's Musical Joke makes fun of bad compositions by starting off with a 7-measure phrase. Phrases are usually in powers of 2 or "nice" composite numbers like 6 or 12; a large prime number like 7 is silly because it can't be imagined as having any internal regularity. You could maybe get away with 9 if it can be thought of as three 3-measure subphrases, but this song doesn't do that.

In my opinion, a good singalong song must have very low or zero tolerance for any irregularities in rhyme or rhythm. In LW jargon, if you think of the song as a stream of data which people are trying to predict in real time, you want them to quickly form an accurate, low-Kolmogorov-complexity model of the whole song based on just a small amount of input at the beginning.

(I've always hated singing "the bombs" in the Star-Spangled Banner!)

Comment by jchan on Austin LW meetup notes: The FTX Affair · 2022-11-24T15:05:09.882Z · LW · GW

I think most non-experts still have only a vague understanding of what cryptocurrency actually is, and just mentally lump together all related enterprises into one big category - which is reinforced by the fact that people involved in one kind of business will tend to get involved in others as well. FTX is an exchange, Alameda is a fund, and FTT is a currency, and each of these things could theoretically exist apart from the others, but a layperson will point at all of them and say "FTX" in the same way as one might refer to a PlayStation console as "the Nintendo."

Legally speaking this is nonsense, but when we're talking about "social context," a lack of clarity in the common understanding of what exactly these businesses do might provide an opening for self-deception on the part of the people running them, regarding what illegal activities are "socially acceptable" in their field.

Comment by jchan on Austin LW meetup notes: The FTX Affair · 2022-11-22T14:06:05.233Z · LW · GW

Meta question: What do you think of this style of presenting information? Is it useful?

Comment by jchan on Charging for the Dharma · 2022-11-12T18:17:31.199Z · LW · GW

The more resources people in a community have, the easier it is for them to run events that are free for the participants. The tech community has plenty of money and therefore many tech events are free.

This applies to "top-down funded" events, like a networking thing held at some tech startup's office, or a bunch of people having their travel expenses paid to attend a conference. There are different considerations with regard to ideological messages conveyed through such events (which I might get into in another post), but this is different from the central example of a "tech/finance/science bubble event" that I'm thinking of, which is "a bunch of people meeting in a cafe/bar/park".

Or alternatively, do it the way the church does and have no entrance fee and ask for donations during the event.

I would indeed have found this less off-putting, though I'm not sure exactly why.

Comment by jchan on Charging for the Dharma · 2022-11-12T18:05:25.506Z · LW · GW

This is a fair point but I think not the whole story. The events that I'm used to (not just LW and related meetups, but also other things that happen to attract a similar STEM-heavy crowd) are generally held in cafes/bars/parks where nobody has to pay anything to put on the event, so it seems like financial slack isn't a factor in whether those events happen or not.

Could it be an issue of organizers' free time? I don't think it's particularly time-consuming to run a meetup, especially if you're not dealing with money and accounting, though I could be wrong.

We might also consider the nature of the activity. One can't very well meditate in a bar, but parks are still an option, albeit less comfortable than a yoga studio. But isn't it worth accepting the discomfort for the sake of bringing in more people? Depends on what you're trying to do, I guess.

Comment by jchan on Charging for the Dharma · 2022-11-12T17:29:32.468Z · LW · GW

Really helpful to hear an on-the-ground perspective!

(I do live in America - Austin specifically.)

I don't think this issue is specific to spirituality; these are just the most salient examples I can think of where it's been dealt with for a long time and explicitly discussed in ancient texts. (For a non-spiritual example, according to Wikipedia the Platonic Academy didn't charge fees either, though I doubt they left any surviving writings explaining why.)

How would you respond to someone who says "I can easily pay the recommended donation of $20, but I don't value this event/activity nearly as much as you seem to think I should, so I'm going to pay only $5 so that it's still positive-on-net for me to be here"? In other words, pay-what-you-want as opposed to pay-what-you-can.

If I were in your position I'd probably welcome such a person at first, but if they keep coming back while still paying only $5 I might be inclined to think negatively of them, or pressure them to either pay more or leave. Which also seems like a bad thing, so maybe it's best to collect donations anonymously so that nobody feels pressured.

The problem is that the functions of "doing X" and "convincing people that doing X is worthwhile" are often being served simultaneously by the same activities, and are difficult to disentangle.

Comment by jchan on Where the logical fallacy is not (Generalization From Fictional Evidence) · 2022-11-11T18:24:31.303Z · LW · GW

You are forced to trust what others tell you.

The difference between fiction and non-fiction is that non-fiction at least purports to be true, while fiction doesn't. I can decide whether I want to trust what Herodotus says, but it's meaningless to speak of "trusting" the Sherlock Holmes stories because they don't make any claims about the world. Imagining that they do is where the fallacy comes in.

For example, kung-fu movies give a misleading impression of how actual fights work, not because the directors are untrustworthy or misinformed, but because it's more fun than watching realistic fights, and they're optimizing for that, not for realism.

Comment by jchan on Charging for the Dharma · 2022-11-11T18:10:19.662Z · LW · GW

If you categorically don’t pay people who are purveyors of values, then you are declaring that you want nobody to be a purveyor of values as their full-time job.

Would this really be a bad thing? The current situation seems like a defect/defect equilibrium - I want there to be full-time advocates for Good Values, but only to counteract all the other full-time advocates for Bad Values. It would be better if we could just agree to ratchet down the ideological arms race so that we can spend our time on more productive, non-zero-sum activities.

But unlike soldiers in a literal arms race, value-purveyors ("preachers" for short) only have what power we give them. A world where full-time preachers are ipso facto regarded as untrustworthy seems more achievable than one in which we all magically agree to dismantle our militaries.

I think there could be a lot of value generated by having more people organize valuable events and take money for them.

Perhaps, but this positive value will be more than counteracted by the negative value generated by Bad-Values-havers also organizing more events.

This intuitively seems true to me, but may not be obvious. It's based on the assumption that some attributes of an ideology (e.g. the presence of sincere advocates) are relatively more truth-correlated than other attributes (e.g. the profitability of events). Therefore, increasing the weight with which these more-truth-correlated attributes contribute to swaying public opinion, and decreasing the weight of less-truth-correlated attributes, will tend to promote the truth winning out.

(I have more points to add, but I'll do that in another comment.)

Comment by jchan on Is there a good way to award a fixed prize in a prediction contest? · 2022-11-03T00:38:17.487Z · LW · GW

OK, so if I understand this correctly, the proposed method is:

  1. For each question, determine the log score, i.e. the natural logarithm of the probability that was assigned to the outcome that ended up happening.
  2. Sum these log scores for each contestant to get a total score.
  3. For each contestant, find e to the power of his/her total score.
  4. Distribute the prize in proportion to these numbers: each contestant's fraction is their number divided by the sum of that number across all contestants.

(Edit: I suppose it's simpler to just multiply all of each contestant's probabilities together, and distribute the award proportional to that result.)
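A minimal sketch of this payout rule in Python (my own illustration; the contestants and probabilities below are invented):

    import math

    def prize_shares(predictions, prize=100.0):
        # predictions: {name: list of probabilities that contestant assigned to
        # the outcome that actually happened, one entry per question}
        weights = {name: math.exp(sum(math.log(p) for p in ps))  # = product of the p's
                   for name, ps in predictions.items()}
        total = sum(weights.values())
        return {name: prize * w / total for name, w in weights.items()}

    # Hypothetical three-question contest:
    print(prize_shares({"Alice": [0.9, 0.6, 0.7], "Bob": [0.5, 0.5, 0.5]}))
    # Alice's product is 0.378 vs. Bob's 0.125, so Alice gets about 75% of the prize.

One side effect of this rule: a single confident wrong answer (probability near zero on what actually happened) wipes out a contestant's share almost entirely, reflecting the log score's severe penalty for overconfidence.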

Comment by jchan on Mind is uncountable · 2022-11-02T21:49:52.703Z · LW · GW

I have a vague memory of a dream which had a lasting effect on my concept of personal identity. In the dream, there were two characters who each observed the same event from different perspectives, but were not at the time aware of each other's thoughts. However, when I woke up, I equally remembered "being" each of those characters, even though I also remembered that they were not the same person at the time. This showed me that it's possible for two separate minds to merge into one, and that personal identity is not transitive.

Comment by jchan on Humans do acausal coordination all the time · 2022-11-02T21:45:22.970Z · LW · GW

See also Newcomblike problems are the norm.

When I discuss this with people, the response is often something like: My value system includes a term for people other than myself - indeed, that's what "morality" is - so it's redundant / double-counting to posit that I should value others' well-being also as an acausal "means" to achieving my own ends. However, I get the sense that this disagreement is purely semantic.

Comment by jchan on Crossword puzzle: LessWrong Halloween 2022 · 2022-10-22T00:34:33.633Z · LW · GW

Hint:

It's a character from a movie.

Comment by jchan on Crossword puzzle: LessWrong Halloween 2022 · 2022-10-21T22:54:24.860Z · LW · GW

It turns out Japanese words are really useful for filling in crosswords, since they have so many vowels.

Comment by jchan on Crossword puzzle: LessWrong Halloween 2022 · 2022-10-21T17:25:32.777Z · LW · GW

Well done! This is faster than I expected it to be solved.

Comment by jchan on Supposing Europe is headed for a serious energy crisis this winter, what can/should one do as an individual to prepare? · 2022-09-01T19:24:31.235Z · LW · GW

Texas Freeze Retrospective may have some useful info.

Comment by jchan on Adversarial epistemology · 2022-08-27T22:26:44.513Z · LW · GW

If the cryptography example is too distracting, we could instead imagine a non-cryptographic means to the same end, e.g. printing the surveys on leaflets which the employees stuff into envelopes and drop into a raffle tumbler.

The point remains, however, because (just as with the blinded signatures) this method of conducting a survey is very much outside-the-norm, and it would be a drastic world-modeling failure to assume that the HR department actually considered the raffle-tumbler method but decided against it because they secretly do want to deanonymize the surveys. Much more likely is that they simply never considered the option.

But if employees did start adopting the rule "don't trust the anonymity of surveys that aren't conducted via raffle tumbler", even though this is epistemically irrational at first, it would eventually compel HR departments to start using the tumbler method, whereupon the odd surveys that still are being conducted by email will stick out, and it would now be rational to mistrust them. In short, the Adversarial Argument is "irrational" but creates the conditions for its own rationality, which is why I describe it as an "acausal negotiation tactic".

Comment by jchan on Adversarial epistemology · 2022-08-27T22:25:10.914Z · LW · GW

You mention "Infra-Bayesianism" in that Twitter thread - do you think that's related to what I'm talking about here?

Comment by jchan on Adversarial epistemology · 2022-08-27T22:24:28.456Z · LW · GW

This is interesting, because it seems that you've proved the validity of the "Strong Adversarial Argument", at least in a situation where we can say:

This event is incompatible with XYZ, since Y should have been called.

In other words, we can use the Adversarial Argument (in a normal Bayesian way, not as an acausal negotiation tactic) when we're in a setting where the rule against hearsay is enforced. But what reason could we have had for adopting that rule in the first place? It could not have been because of the reasoning you've laid out here, which presupposes that the rule is already in force! The rule is epistemically self-fulfilling, but its initial justification would have seemed epistemically "irrational".

So, why do we apply it in a courtroom setting but not in ordinary conversation? In short, because the stakes are higher and there's a strong positive incentive to deceive.

Comment by jchan on The Validity of Self-Locating Probabilities · 2021-08-21T08:12:05.255Z · LW · GW

To make it slightly more concrete, we could say: one copy is put in a red room, and the other in a green room; but at first the lights are off, so both rooms are pitch black. I wake up in the darkness and ask myself: when I turn on the light, will I see red or green?

There’s something odd about this question. “Standard LessWrong Reductionism” must regard it as meaningless, because otherwise it would be a question about the scenario that remains unanswered even after all physical facts about it are known, thus refuting reductionism. But from the perspective of the test subject, it certainly seems like a real question.

Can we bite this bullet? I think so. The key is the word “I” - when the question is asked, the asker doesn’t know which physical entity “I” refers to, so it’s unsurprising that the question seems open even though all the physical facts are known. By analogy, if you were given detailed physical data of the two moons of Mars, and then you were asked “Which one is Phobos and which one is Deimos?”, you might not know the answer, but not because there’s some mysterious extra-physical fact about them.

So far so good, but now we face an even tougher bullet: If we accept quantum many-worlds and/or modal realism (as many LWers do), then we must accept that all probability questions are of this same kind, because there are versions of me elsewhere in the multiverse that experience all possible outcomes.

Unless we want to throw out the notion of probabilities altogether, we’ll need some way of understanding self-location problems besides dismissing them as meaningless. But I think the key is in recognizing that probability is ultimately in the map, not the territory, however real it may seem to us - i.e. it is a tool for a rational agent to achieve its goals, and nothing more.

Comment by jchan on The Schelling Game (a.k.a. the Coordination Game) · 2021-05-03T22:38:56.188Z · LW · GW

Thinking more about this:

  1. Is it possible to get good at this game?
  2. Does this game teach any useful skills?

I don't think there's a generalized skill of being good at this game as such, but you can get good at it when playing with a particular group, as you become more familiar with their thought processes. Playing the game might not develop any individual's skills, but it can help the group as a whole develop camaraderie by encouraging people to make mental models of each other.

Comment by jchan on The Schelling Game (a.k.a. the Coordination Game) · 2021-05-03T22:33:42.362Z · LW · GW

I've played a variant like this before, except that only one clue would be active at once - if the clue is neither defeated nor contacted within some amount of time, then we'd move on to another clue, but the first clue can be re-asked later. The amount of state seemed manageable for roadtrips/hikes/etc.

Comment by jchan on Unconvenient consequences of the logic behind the second law of thermodynamics · 2021-03-09T01:00:11.269Z · LW · GW

Maybe we are anthropically more likely to find ourselves in places with low Kolmogorov complexity descriptions. ("All possible bitstrings, in order" is not a good law of physics, just because it contains us somewhere).

Another way of thinking about this, which amounts to the same thing: Holding the laws of physics constant, the Solomonoff prior will assign much more probability to a universe that evolves from a minimal-entropy initial state, than to one that starts off in thermal equilibrium. In other words:

  • Description 1: The laws of physics + The Big Bang
  • Description 2: The laws of physics + some arbitrary configuration of particles

Description 1 is much shorter than Description 2, because the Big Bang is much simpler to describe than some arbitrary configuration of particles. Even after the heat-death of the universe, it's still simpler to describe it as "the Big Bang, 10^zillion years on" rather than by exhaustive enumeration of all the particles.
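To put rough numbers on "much more probability" (my own back-of-the-envelope sketch, using the standard 2^(-length) weighting over programs):

    \frac{P(\text{Description 1})}{P(\text{Description 2})} \approx 2^{\ell_2 - \ell_1}

where \ell_1 and \ell_2 are the two description lengths in bits. "The Big Bang" costs only a handful of bits on top of the laws of physics, while an arbitrary particle configuration costs something on the order of the universe's entropy in bits, so the ratio is astronomically large.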

This dispenses with the "paradox" of Boltzmann Brains, and Roger Penrose's puzzle about why the Big Bang had such low entropy despite its overwhelming improbability.

Comment by jchan on Unconvenient consequences of the logic behind the second law of thermodynamics · 2021-03-09T00:49:17.864Z · LW · GW

Here's the way I understand it: A low-entropy state takes fewer bits to describe, and a high-entropy state takes more. Therefore, a high-entropy state can contain a description of a low-entropy state, but not vice-versa. This means that memories of the state of the universe can only point in the direction of decreasing entropy, i.e. into the past.

Comment by jchan on Texas Freeze Retrospective: meetup notes · 2021-03-04T07:21:08.170Z · LW · GW

I think the "normal items that helped" category is especially important, because it's costly in terms of money, time, and space to get prepper gear specifically for the whole long tail of possible disasters. If resources are limited, then it's best to focus on buying things that are both useful in everyday life and also are the general kind-of-thing that's useful in disaster scenarios, even if you can't specifically anticipate how.

Comment by jchan on Texas Freeze Retrospective: meetup notes · 2021-03-04T07:06:50.181Z · LW · GW

Good to know that this was useful. I hadn't thought of this meetup as "journalism," but I suppose it was in a sense.

Comment by jchan on Teacher's Password: The LessWrong Mystery Hunt Team · 2020-12-04T01:38:14.707Z · LW · GW

Same here.

Comment by jchan on Interest survey: Forming an MIT Mystery Hunt team (Jan. 15-18, 2021) · 2020-11-13T23:06:04.434Z · LW · GW

You may be right... I just need a rough headcount now, so if you want to take time to ponder the team name feel free to leave it blank now and then submit the form again later with your suggestion. (Edited the form to say so.)

Comment by jchan on The Solomonoff Prior is Malign · 2020-10-25T18:37:39.867Z · LW · GW

I'm trying to wrap my head around this. Would the following be an accurate restatement of the argument?

  1. Start with the Dr. Evil thought experiment, which shows that it's possible to be coerced into doing something by an agent who has no physical access to you, other than communication.
  2. We can extend this to the case where the agents are in two separate universes, if we suppose that (a) the communication can be replaced with an acausal negotiation, with each agent deducing the existence and motives of the other; and that (b) the Earthlings (the ones coercing Dr. Evil) care about what goes on in Dr. Evil's universe.
    • Argument for (a): With sufficient computing power, one can run simulations of another universe to figure out what agents live within that universe.
    • Argument for (b): For example, the Earthlings might want Dr. Evil to write embodied replicas of them in his own universe, thus increasing the measure of their own consciousness. This is not different in kind from you wanting to increase the probability of your own survival - in both cases, the goal is to increase the measure of worlds in which you live.
  3. To promote their goal, when the Earthlings run their simulation of Dr. Evil, they will intervene in the simulation to punish/reward the simulated Dr. Evil depending on whether he does what they (the Earthlings) want.
  4. For his own part, Dr. Evil, if he is using the Solomonoff prior to predict what happens next in his universe, must give some probability to the hypothesis that his being in such a simulation is in fact what explains all of his experiences up till that point (rather than his being a ground-level being). And if that hypothesis is true, then Dr. Evil will expect to be rewarded/punished based on whether he carries out the wishes of the Earthlings. So, he will modify his actions accordingly.
  5. The probability of the simulation hypothesis may be non-negligible, because the Solomonoff prior considers only the complexity of the hypothesis and not that of the computation unfolding from it. In fact, the hypothesis "There is a universe with laws A+B+C, which produces Earthlings who run a simulation with laws X+Y+Z which produces Dr. Evil, but then intervene in the simulation as described in #3" may actually be simpler (and thus more probable) than "There is a universe with laws X+Y+Z which produces Dr. Evil, and those laws hold forever".

Comment by jchan on Postmortem to Petrov Day, 2020 · 2020-10-04T13:32:34.084Z · LW · GW

I’d suggest that even a counterfactual donation of $100 to charity not occurring would feel more significant than the frontpage going down for a day.

This suggests an interesting idea: A charity drive for the week leading up to Petrov Day, on condition that the funds will be publicly wasted if anyone pushes the button (e.g. by sending bitcoin to a dead-end address, or donating to two opposing politicians' campaigns).

Comment by jchan on What are examples of Rationalist fable-like stories? · 2020-09-29T06:01:56.640Z · LW · GW

Archimedes's Chronophone

Comment by jchan on On "Not Screwing Up Ritual Candles" · 2020-09-28T17:57:13.773Z · LW · GW

For an outdoor ceremony, you'll want to avoid open flames because (a) the wind might blow them out, and (b) they'll attract bugs that die in the flame. Instead you can use lanterns like these. (Peel off the branding sticker for a cleaner look.) The aesthetic ends up being more rugged/industrial than fancy/refined.

Practical considerations when using these lanterns:

  1. The glass window and the upper surface of the lantern get extremely hot (enough to boil water, at least). Use an oven mitt to manipulate these parts.
  2. For this reason, opening and closing the window is cumbersome. To light the lantern or transfer the flame, use a thin bamboo skewer that you can insert through the gap in the top of the lantern. When you're done with the skewer, douse it in a jar of sand (not water, so you can reuse it).
    • This method also loses the "Candle #1 [being] the one lighting Candle #2, rather than vice-versa" distinction.
    • What does the skewer itself symbolize? Perhaps "the generations who died carrying #1 forward to #2 without ever seeing the result" (I dunno, I just made that up now; maybe it doesn't need to symbolize anything.)
  3. The flame can be extinguished by pushing down the top of the lantern (using an oven mitt) into its "collapsed" position, and then placing an inverted glass bowl on top of it for 3-5 seconds to choke off its oxygen supply. (Glass, rather than ceramic or metal, so that you can see when the flame has gone out.) Then un-collapse the lantern, again using the oven mitt. (See the video on the Amazon page for a demo of collapsing/uncollapsing.)
    • Or, you can blow sharply through the top of the lantern, but this is difficult if you're wearing a mask.
    • If you've opened the window in order to pour wax from the candle, collapsing+uncollapsing is the easiest way to re-close the window.