Comment by qiaochu_yuan on What Vibing Feels Like · 2019-03-11T23:52:17.674Z · score: 8 (5 votes) · LW · GW

Yes, I strongly agree that this is missing and it sucks. I have a lot to say about why I think this is happening, which hopefully will be converted from a Google Doc into a series of blog posts soonish.

Comment by qiaochu_yuan on Policy-Based vs Willpower-Based Intentions · 2019-02-28T20:27:34.381Z · score: 10 (5 votes) · LW · GW

There's an interesting thing authentic relating people do at workshops that they call "setting intentions," and I think it works in a different way than either of these. The difference seems to me to be that the intention is being "held by the group." I'm not sure how to explain what I mean by this. There are at least two visible signs of it:

1) people remind each other about the intention, and

2) people reward each other for following through on the intention.

If everyone knows the intention is being held by the group in this way, it both comes to mind more easily and feels more socially acceptable to follow through on (the latter might be causing the former). In my experience group intentions also require almost no willpower, but they also don't feel quite like policies to me (that would be "agreements") - they're more like especially salient affordances.

The key ritual is that at some point someone asks "so, do we all agree to hold this intention?" and we raise our hands if so - and we look around the room so we can see each other's hands. That way the collective holding of the intention can enter common knowledge.

Said another way, it's something like trying to define the values of the group-mind we're attempting to come together to form.

---

I relate to your resistance to willpower-based intentions. It's something like, a lot of people have an "inner authoritarian" or "inner tyrant" that is the internalized avatar of other people making them do stuff when they were younger (parents, teachers, etc.), whose job it is to suppress the parts of them that are unacceptable according to outer tyrants. You can live under your inner tyrant's reign of terror, which works as long as submitting to the inner tyrant keeps your life running smoothly, e.g. it placates your outer tyrants and they feed you and stuff.

At some point this strategy can stop working, and then other parts of you might engage in an internal rebellion against your inner tyrant; I think I was in a state like this for most of 2017 and 2018, and probably still am now to some extent. At this stage using willpower can feel like giving in to the inner tyrant.

Then I think there's some further stage of development that involves developing "internal leadership," whatever that is.

There's a bit in the Guru Papers about this. One quote, I think in the context of submitting to renunciate morality (e.g. Christian morality):

Maintaining denial actually requires constant surveillance of the thing you are pretending isn’t there. This deepens the internal splits that renunciation promises to heal. It requires the construction of a covert inner authoritarian to keep control over the “bad” stuff you reject. This inner tyrant is probably not strong enough to do the job on its own, so you submit to an external authority whose job is to strengthen the internal tyrant.
Comment by qiaochu_yuan on Informal Post on Motivation · 2019-02-27T09:30:46.803Z · score: 18 (4 votes) · LW · GW

Glad to see you're writing about this! I think motivation is a really central topic and there's lots more to be said about it than has been said so far around here.

When we're struggling with motivation to do X, it's because only S2 predicts/believes that X will lead to reward. S1 isn't convinced. Your S2 models say X is good, but S1 models don't see it. This isn't necessarily a bug. S2 reasoning can be really shitty, people can come up with all kinds of dumb plans for things that won't help, and it's not so bad that their S1 models don't go along with them.

I think these days S1 and S2 have become semantic stopsigns, and in general I recommend that people stop using these terms both in their thinking and verbally, and instead try to get more specific about what parts of their mind actually disagree and why. I can report, for example, that CFAR doesn't use these terms internally.

Anna Salamon used to say, in the context of teaching internal double crux at CFAR workshops, that there's no such thing as an S1 vs. S2 conflict. All conflicts are "S1 vs. S1." "Your S2," whatever that means, may be capable of engaging in logical reasoning and having explicit verbal models about things, but the part of you that cares about the output of all of that reasoning is a part of your S1 (in my internal dialect, just "a part of you"), and you'll make more progress once you start identifying what part that is.

---

Here's an example of what getting more specific might look like. Suppose I'm a high school student and "my S1" says play video games and "my S2" says do my homework. What is actually going on here?

One version could be that I know I get social reinforcement from my parents and my teachers to do homework, or more sinisterly that I get socially punished for not doing it. So in this case "my S2" is a stopsign blocking an inquiry into the power structure of school, and generally the lack of power children have in modern western society, which is both harder and less comfortable to think about than "akrasia."

Another version is someone told me to do my homework so I'll go to a good college so I'll get a good job. In this case "my S2" is a stopsign blocking an inquiry into why I care about any of the nodes in this causal diagram - maybe I want to go to a good college because it'll make me feel better about myself, maybe I want to get a good job to avoid disappointing my parents, etc.

That's on the S2 side, but "my S1" is also blocking inquiry. Why do I want to play video games? Not "my S1," just me; I can own that desire. There are obvious stories to tell about video games being more immediately pleasurable and addictive than most other things I could do, and those stories have some weight, but they're also a distraction; much easier to think about than why I wouldn't rather do anything else. In my actual experience, the times in my life I have played video games the most, the reasons were mostly emotional: I was really depressed and lonely and felt like a failure, and video games (and lots of other stuff) distracted me from feeling those things. Those feelings were very painful to think about, and that pain prevented me from even looking at the structure of this problem, let alone debugging it, for a long time.

(One sign I was doing this is that the video games I chose were not optimized for pleasure. I deliberately avoided video games that could be fun in a challenging way, because I didn't want to feel bad about doing poorly at them. Another sign is that everything else I did was also chosen for its ability to distract: for example, watching anime (never live-action TV, too uncomfortably close to real life), reading fiction (never nonfiction, again too uncomfortably close to real life), etc.)

An inference I take from this model is that you are best able to focus on long-term S2 goals (those which have the least inherent ability to influence you) if you have taken care of the rest of things which motivate you. Eat enough, sleep enough, spend time with friends. When you're trying to fight the desire to address those things, you're using willpower, and willpower is a stopgap measure.

Strongly agree, except that I wouldn't use the term "S2 goals." That's a stopsign. Again I suggest getting more specific: what part of you has those goals and why? Where did they come from?

So part of what I need to do now is really figure out how to do green-brain, growth-orientation motivation over red-brain, deficit-reduction motivation.

If I understand correctly what you mean by this, I have a lot of thoughts about how to do this. The short, unsatisfying version, which will probably surprise no one, is "find out what you actually want by learning how to have feelings."

The long version can be explained in terms of Internal Family Systems. The deal is that procrastinative behaviors like playing a lot of video games are evidence that you're trying to avoid feeling a bad feeling, and that that bad feeling is being generated by a part of you that IFS calls an "exile." Exiles are producing bad feelings in order to get you to avoid a catastrophic situation that resembles a catastrophic situation earlier in your life that you weren't prepared for; for example, if you were overwhelmed by criticism from your parents as a child, you might have an exile that floods you with pain whenever people criticize you, especially people you really respect in a way that might cause you to project your parents onto them.

Exiles are paired with parts called protectors, whose job it is to protect exiles from being triggered. In the criticism example, that might look like avoiding people who criticize you, avoiding doing things you might get criticized for, or feeling sleepy or confused when someone manages to criticize you anyway.

Behavior that's driven by exile / protector dynamics (approximately "red-brain, deficit-reduction," if I understand you correctly) can become very dysfunctional, as the goal of avoiding psychological pain becomes a worse and worse proxy for avoiding bad situations. In extreme cases it can almost completely block your access to what you want, as that becomes less of a priority than avoiding pain. In the criticism example, you might be so paralyzed by the possibility that someone could criticize you for doing things that you stop doing anything.

There are lots of different ways to get exiles and protectors to chill the fuck out, and once they do you get to find out what you actually want when you aren't just trying to avoid pain. It's good times. See also my comment on Kaj's IFS post.

Comment by qiaochu_yuan on Epistemic Tenure · 2019-02-27T01:32:57.477Z · score: 11 (5 votes) · LW · GW

This seems like a bad idea to me; I think people who are trying to have good ideas should develop courage instead. If you don't have courage your ideas are being pushed around by fear in general, and asking for a particular source of that fear to be ameliorated will not solve the general problem.

Comment by qiaochu_yuan on Two Small Experiments on GPT-2 · 2019-02-21T04:33:52.864Z · score: 17 (10 votes) · LW · GW

Thanks for writing this up! I'm excited to see more people running experiments like this.

When you say "if I take X as a prompt, I get Y," how many trials did you run? In my own experimentation I've found lil' GPT-2's performance to be really variable across trials, and in some cases I've needed 5 trials to get results I even sort of liked.

My sense overall of how lil' GPT-2 functions, after playing with it for a while on several different kinds of prompts, is that it has a strong sense of genre, and has done something like learning a bunch of different genre conventions for the different types of texts in the training set. If the prompt strongly resembles a genre familiar from training it will run with that genre, although sometimes it'll wander off into another genre. It does quite poorly on prompts that I suspect don't strongly match any genre in the training set.
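If it helps anyone reproduce this kind of multi-trial poking around, here's a minimal sketch using the Hugging Face transformers port of the 124M checkpoint (an assumption on my part; these experiments may well have used OpenAI's released gpt-2 code directly, and the sampling settings below are just illustrative):

```python
# Minimal sketch: sample several completions per prompt so the trial-to-trial
# variability is visible. Assumes the Hugging Face `transformers` package and
# its "gpt2" (124M) checkpoint; top_k and max_length are illustrative choices,
# not the settings actually used in the experiments discussed here.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Question: what is your name? Answer:"  # the Turing-test prompt below
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling is stochastic, so each returned sequence is a separate "trial."
outputs = model.generate(
    input_ids,
    do_sample=True,
    top_k=40,
    max_length=200,
    num_return_sequences=5,
)
for i, out in enumerate(outputs, start=1):
    print(f"--- trial {i} ---")
    print(tokenizer.decode(out, skip_special_tokens=True))
```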

For example, I tried to run a Turing test (mostly as a joke) by prompting with "Question: what is your name? Answer:" and I got this on my ~2nd to 4th trial (don't remember), with my speculations as to genre in [square brackets]:

Foulplay
I'm a Guardian Huntress, a deviation of the <Here> Monks keyword. I'm like a little "mechanic-creature hire", but is Disassembler which is general clique-of-guy? I usually live in forms of Priests or Whispers of the Old Gods, but if it's Asian Brotherhood of Magi..... [RP forum?]
deepmen
Reply ~29000 0 ~13 min 0 By TrueBeliever For MA teenage adventurer: Ooops It looks stupid but I'm given the wrong answer, and this logic is corrupt. Other first hand studies: sometimes your famed mechanical fiends attack other nvrock's - I had such too - but you're supposed to have flying paths to defend them would not it? This alien spawn is being unnecessary and a waste of time Man - you stole the Saalis... [gaming forum?]
Shadow Netherlands
Reply ~120200 0 ~30 min 0 By john : I know there's no better examples of
a landing back off 2lands ( ou ever?) Low challenge difficulty
a lot of new cards in my deck,
which doesn't draw cards: stacks high levels Some need life if w/o rest of deck already Defense Emperor's | Inferno/Neck Scorer Necronomicon Mysticpetal Mana Enigma Mortalswords Mysticmist Scout Zarthai Sniper [MtG or Hearthstone forum?]
It's a multiplier and it increases the yield on a lv301 fighter fighter, next to nothing it's probably never in my deck or some random deck Dofrone's | Boltforge Fastboat Sling dmt3 Efreet Flattestalker Infernal Eater Toxic Water Hurricane For another Holy orb suggested... [gaming forum? LoL?]

Was planning on posting a longer (mostly humorous) post with my own results but that post is low priority so I don't know when it's going to happen.

Comment by qiaochu_yuan on Building up to an Internal Family Systems model · 2019-01-29T01:29:11.129Z · score: 30 (11 votes) · LW · GW

Thanks for writing this! I am very excited that this post exists. I think what this model suggests about procrastination and addiction alone (namely, that they're things managers and firefighters are doing to protect exiles) is already huge, and it resonates strongly with my experience.

In the beginning of 2018 I experienced a dramatic shift that I still don't quite understand; my sense of it at the time was that there was this crippling fear / shame that had been preventing me from doing almost anything, that suddenly lifted (for several reasons, it's a long story). That had many dramatic effects, and one of the most noticeable ones was that I almost completely stopped wanting to watch TV, read manga, play video games, or any of my other addiction / procrastination behaviors. It became very clear that the purpose of all of those behaviors was numbing and distraction ("general purpose feeling obliterators" used by firefighters, as waveman says in another comment) from how shitty I felt all the time, and after the shift I basically felt so good that I didn't want or need to do that anymore.

(This lasted for awhile but not forever; I crashed hard in September (long story again) before experiencing a very similar shift again a few weeks ago.)

Another closely related effect is that many things that had been too scary for me to think about became thinkable (e.g. regrettable dynamics in my romantic relationships), and I think this is a crucial observation for the rationality project. When you have exile-manager-firefighter dynamics going on and you don't know how to unblend from them, you cannot think clearly about anything that triggers the exile, and trying to make yourself do it anyway will generate tremendous internal resistance in one form or another (getting angry, getting bored, getting sleepy, getting confused, all sorts of crap), first from managers trying to block the thoughts and then from firefighters trying to distract you from the thoughts. Top priority is noticing that this is happening and then attending to the underlying emotional dynamics.

Comment by qiaochu_yuan on Transhumanism as Simplified Humanism · 2018-12-12T03:13:29.687Z · score: 8 (5 votes) · LW · GW

I like this reading and don't have much of an objection to it.

Comment by qiaochu_yuan on Transhumanism as Simplified Humanism · 2018-12-07T07:12:11.186Z · score: 14 (16 votes) · LW · GW

This is a bad argument for transhumanism; it proves way too much. I'm a little surprised that this needs to be said.

Consider: "having food is good. Having more and tastier food is better. This is common sense. Transfoodism is the philosophy that we should take this common sense seriously, and have as much food as possible, as tasty as we can make it, even if doing so involves strange new technology." But we tried that, and what happened was obesity, addiction, terrible things happening to our gut flora, etc. It is just blatantly false in general that having more of a good thing is better.

As for "common sense": in many human societies it was "common sense" to own slaves, to beat your children, again etc. Today it's "common sense" to circumcise male babies, to eat meat, to send people who commit petty crimes to jail, etc., to pick some examples of things that might be considered morally repugnant by future human societies. Common sense is mostly moral fashion, or if you prefer it's mostly the memes that were most virulent when you were growing up, and it's clearly unreliable as a guide to moral behavior in general.

Figuring out the right thing to do is hard, and it's hard for comprehensible reasons. Value is complex and fragile; you were the one who told us that!

---

In the direction of what I actually believe: I think that there's a huge difference between preventing a bad thing happening and making a good thing happen, e.g. I don't consider preventing an IQ drop equivalent to raising IQ. The boy has had an IQ of 120 his entire life and we want to preserve that, but the girl has had an IQ of 110 her entire life and we want to change that. Preserving and changing are different, and preserving vs. changing people in particular is morally complicated. Again the argument Eliezer uses here is bad and proves too much:

Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.

Consider: "either it's better to be male than female, in which case we should transition all women to men. Or it's better to be female than male, in which case we should transition all men to women."

---

What I can appreciate about this post is that it's an attempt to puncture bad arguments against transhumanism, and if it had been written more explicitly to do that as opposed to presenting an argument for transhumanism, I wouldn't have a problem with it.

Comment by qiaochu_yuan on Preliminary thoughts on moral weight · 2018-08-14T18:52:27.014Z · score: 11 (12 votes) · LW · GW

This whole conversation makes me deeply uncomfortable. I expect to strongly disagree at pretty low levels with almost anyone else trying to have this conversation, I don't know how to resolve those disagreements, and meanwhile I worry about people seriously advocating for positions that seem deeply confused to me and those positions spreading memetically.

For example: why do people think consciousness has anything to do with moral weight?

Comment by qiaochu_yuan on The ever expanding moral circle · 2018-08-14T05:16:00.774Z · score: 20 (9 votes) · LW · GW

Relevant reading: gwern's The Narrowing Circle. He makes the important point that moral circles have actually narrowed in various ways, and also that it never feels that way because the things outside the circle don't seem to matter anymore. Two straightforward examples are gods and our dead ancestors.

Comment by qiaochu_yuan on Open Thread August 2018 · 2018-08-02T23:49:38.583Z · score: 7 (4 votes) · LW · GW

Does anyone else get the sense that it feels vaguely low-status to post in open threads? If so I don't really know what to do about this.

Comment by qiaochu_yuan on Strategies of Personal Growth · 2018-07-30T18:36:08.130Z · score: 12 (7 votes) · LW · GW

This makes sense, but I also want to register that I viscerally dislike "controlling the elephant" as a frame, in roughly the same way as I viscerally dislike "controlling children" as a frame.

Comment by qiaochu_yuan on Strategies of Personal Growth · 2018-07-28T19:26:48.385Z · score: 5 (5 votes) · LW · GW

Huh. Can you go into more detail about what you've done and how it's helped you? Real curious.

Comment by qiaochu_yuan on Strategies of Personal Growth · 2018-07-28T18:52:50.308Z · score: 20 (14 votes) · LW · GW
I think the original mythology of the rationality community is based around cheat codes

A lot of the original mythology, in the sense of the things Eliezer wrote about in the sequences, is about avoiding self-deception. I continue to think this is very important but think the writing in the Sequences doesn't do a good job of teaching it.

The main issue I see with the cheat code / munchkin philosophy as it actually played out on LW is that it involved a lot of stuff I would describe as tricking yourself or the rider fighting against / overriding the elephant, e.g. strategies like attempting to reward yourself for the behavior you "want" in order to fix your "akrasia." Nothing along these lines, e.g. Beeminder, worked for me when I experimented with them, and the whole time my actual bottleneck was that I was very sad and very lonely and distracting myself from and numbing that (which accounted for a huge portion of my "akrasia"; the rest was poor health, sleep and nutrition in particular).

Comment by qiaochu_yuan on ISO: Name of Problem · 2018-07-24T18:26:37.375Z · score: -1 (3 votes) · LW · GW

This question feels confused to me but I'm having some difficulty precisely describing the nature of the confusion. When a human programmer sets up an IRL problem they get to choose what the domain of the reward function is. If the reward function is, for example, a function of the pixels of a video frame, IRL (hopefully) learns which video frames human drivers appear to prefer and which they don't, based on which such preferences best reproduce driving data.

You might imagine that with unrealistic amounts of computational power IRL might attempt to understand what's going on by modeling the underlying physics at the level of atoms, but that would be an astonishingly inefficient way to reproduce driving data even if it did work. IRL algorithms tend to have things like complexity penalties to make it possible to select e.g. a "simplest" reward function out of the many reward functions that could reproduce the data (this is a prior but a pretty reasonable and justifiable one as far as I can tell) and even with large amounts of computational power I expect it would still not be worth using a substantially more complicated reward function than necessary.

Comment by qiaochu_yuan on ISO: Name of Problem · 2018-07-24T17:26:19.620Z · score: 13 (5 votes) · LW · GW

IRL does not need to answer this question along the way to solving the problem it's designed to solve. Consider, for example, using IRL for autonomous driving. The input is a bunch of human-generated driving data, for example video from inside a car as a human drives it or more abstract (time, position, etc.) data tracking the car over time, and IRL attempts to learn a reward function which produces a policy which produces driving data that mimics its input data. At no point in this process does IRL need to do anything like reason about the distinction between, say, the car and the human; the point is that all of the interesting variation in the data is in fact (from our point of view) being driven by the human's choices, so to the extent that IRL succeeds it is hopefully capturing the human's reward structure wrt driving at the intuitively obvious level.

In particular, a large part of what selects the level at which to work is the human programmer's choice of how to set up the IRL problem: the format of the input data, the format of the reward function, and the format of the IRL algorithm's actions.

In any case, in MIRI terminology this is related to multi-level world models.

Comment by qiaochu_yuan on Replace yourself first if you're moving to the Bay · 2018-07-23T18:49:54.698Z · score: 10 (7 votes) · LW · GW

Thanks for the mirror! My recommendation is more complicated than this, and I'm not sure how to describe it succinctly. I think there is a skill you can learn through practices like circling which is something like getting in direct emotional contact with a group, as distinct from (but related to) getting in direct emotional contact with the individual humans in that group. From there you have a basis for asking yourself questions like, how healthy is this group? How will the health of the group change if you remove this member from it? Etc.

It also sounds like there's an implicit thing in your mirror that is something like "...instead of doing explicit verbal reasoning," and I don't mean to imply that either.

Comment by qiaochu_yuan on Replace yourself first if you're moving to the Bay · 2018-07-23T01:24:45.279Z · score: 14 (10 votes) · LW · GW

I appreciate the thought. I don't feel like I've laid out my position in very much detail so I'm not at all convinced that you've accurately understood it. Can you mirror back to me what you think my position is? (Edit: I guess I really want you to pass my ITT which is a somewhat bigger ask.)

In particular, when I say "real, living, breathing entity" I did not mean to imply a human entity; groups are their own sorts of entities and need to be understood on their own terms, but I think it does not even occur to many people to try in the sense that I have in mind.

Comment by qiaochu_yuan on Replace yourself first if you're moving to the Bay · 2018-07-22T22:00:33.210Z · score: 13 (11 votes) · LW · GW

(For additional context on this comment you can read this FB status of mine about tribes.)

There's something strange about the way in which many of us were trained to accept as normal that two of the biggest transitions in our lives - high school to college, college to a job - get packaged in with abandoning a community. In both of those cases it's not as bad as it could be because everyone is sort of abandoning the community at the same time, but it still normalizes the thing in a way that bugs me.

There's a similar normalization of abandonment, I think, in the way people treat break-ups by default. Yes, there are such things as toxic relationships, and yes, I want people to be able to just leave those without feeling like they owe their ex-partner anything if that's what they need to do, but there are two distinct moves that are being bucketed here. I've been lucky enough to get to see two examples recently of what it looks like for a couple to break up without abandonment: they mutually decide that the relationship isn't working, but they don't stop loving each other at all throughout the process of getting out of the relationship, and they stay in touch with the emotional impact the other is experiencing throughout. It's very beautiful and I feel a lot of hope that things can be better seeing it.

What I think I'm trying to say is that there's something I want to encourage that's upstream of all of your suggestions, which is something like seeing a community as a real, living, breathing entity built out of the connections between a bunch of people, and being in touch emotionally with the impact of tearing your connections away from that entity. I imagine this might be more difficult in local communities where people might end up in logistically important roles without... I'm not sure how to say this succinctly without using some Val language, but like, having the corresponding emotional connections to other community members that ought to naturally accompany those roles? Something like a woman who ends up effectively being a maid in a household without being properly connected to and respected as a mother and wife.

Comment by qiaochu_yuan on Osmosis learning: a crucial consideration for the craft · 2018-07-18T18:59:42.547Z · score: 10 (2 votes) · LW · GW

Yes, absolutely. This is what graduate school and CFAR workshops are for. I used to say both of the following things back in 2013-2014:

  • that nearly all of the value of CFAR workshops came from absorbing habits of thought from the instructors (I think this less now, the curriculum's gotten a lot stronger), and
  • that the most powerful rationality technique was moving to Berkeley (I sort of still think this but now I expect Zvi to get mad at me for saying it).

I have personally benefited a ton over the last year and a half through osmosing things from different groups of relationalists - strong circling facilitators and the like - and I think most rationalists have a lot to learn in that direction. I've been growing increasingly excited about meeting people who are both strong relationalists and strong rationalists and think that both skillsets are necessary for anything really good to happen.

There is this unfortunate dynamic where it's really quite hard to compete for the attention of the strongest local rationalists, who are extremely deliberate about how they spend their time and generally too busy saving the world to do much mentorship, which is part of why it's important to be osmosing from other people too (also for the sake of diversity, bringing new stuff into the community, etc.).

Comment by qiaochu_yuan on A framework for thinking about wireheading · 2018-07-16T23:54:58.631Z · score: 2 (1 votes) · LW · GW

I think your description of the human relationship to heroin is just wrong. First of all, lots of people in fact do heroin. Second, heroin generates reward but not necessarily long-term reward; kids are taught in school about addiction, tolerance, and other sorts of bad things that might happen to you in the long run (including social disapproval, which I bet is a much more important reason than you're modeling) if you do too much heroin.

Video games are to my mind a much clearer example of wireheading in humans, especially the ones furthest in the fake achievement direction, and people indulge in those constantly. Also television and similar.

Comment by qiaochu_yuan on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-15T20:43:29.559Z · score: 14 (4 votes) · LW · GW
In particular, you shouldn't force yourself to believe that you're attractive.

And I never said this.

But there's a thing that can happen when someone else gaslights you into believing that you're unattractive, which makes it true, and you might be interested in undoing that damage, for example.

Comment by qiaochu_yuan on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-15T20:41:43.874Z · score: 9 (5 votes) · LW · GW

Yes, this.

There's a thing MIRI people talk about, about the distinction between "cartesian" and "naturalized" agents: a cartesian agent is something like AIXI that has a "cartesian boundary" separating itself from the environment, so it can try to have accurate beliefs about the environment, then try to take the best actions on the environment given those beliefs. But a naturalized agent, which is what we actually are and what any AI we build actually is, is part of the environment; there is no cartesian boundary. Among other things this means that the environment is too big to fully model, and it's much less clear what it even means for the agent to contemplate taking different actions. Scott Garrabrant has said that he does not understand what naturalized agency means; among other things this means we don't have a toy model that deserves to be called "naturalized AIXI."

There's a way in which I think the LW zeitgeist treats humans as cartesian agents, and I think fully internalizing that you're a naturalized agent looks very different, although my concepts and words around this are still relatively nebulous.

Comment by qiaochu_yuan on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-12T21:26:48.137Z · score: 21 (8 votes) · LW · GW
The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!

I want to point out that this is not an esoteric abstract problem but a concrete issue that actual humans face all the time. There's a large class of propositions whose truth value is heavily affected by how much you believe (and by "believe" I mean "alieve") them - e.g. propositions about yourself like "I am confident" or even "I am attractive" - and I think the LW zeitgeist doesn't really engage with this. Your beliefs about yourself express themselves in muscle tension which has real effects on your body, and from there leak out in your body language to affect how other people treat you; you are almost always in the state Harry describes in HPMoR of having your cognition constrained by the direct effects of believing things on the world as opposed to just by the effects of actions you take on the basis of your beliefs.

There's an amusing tie-in here to one of the standard ways to break the prediction market game we used to play at CFAR workshops. At the beginning we claim "the best strategy is to always write down your true probability at any time," but the argument that's supposed to establish this has a hidden assumption that the act of doing so doesn't affect the situation the prediction market is about, and it's easy to write down prediction markets violating this assumption, e.g. "the last bet on this prediction market will be under 50%."
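To spell out the broken example concretely, here's a tiny sketch (a deliberately silly model of my own, not how the workshop game is actually scored) showing that no report is consistent with the outcome it brings about:

```python
# The market resolves on "the last bet on this prediction market will be
# under 50%", so the outcome is a function of the last report itself.
def outcome(last_bet: float) -> float:
    return 1.0 if last_bet < 0.5 else 0.0

# A report p is self-consistent if it matches the probability of the outcome
# it induces. Scanning reports in 1% steps finds no such fixed point.
consistent = [p / 100 for p in range(101)
              if abs(outcome(p / 100) - p / 100) < 1e-9]
print(consistent)  # [] -- "always write down your true probability" breaks here
```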

Comment by qiaochu_yuan on An Exercise in Applied Rationality: A New Apartment · 2018-07-09T22:04:34.878Z · score: 4 (2 votes) · LW · GW

I do not. Fortunately, you can just test it empirically for yourself!

Comment by qiaochu_yuan on An Exercise in Applied Rationality: A New Apartment · 2018-07-09T08:04:11.882Z · score: 12 (4 votes) · LW · GW

General advice that I think applies to basically everybody is to try to solidly lock down sleep, diet, and exercise (not sure what order these go in exactly).

Random sleep tips:

  • Try to sleep in as much darkness as possible. Blackout curtains + a sleep mask is as dark as I know how to easily make things, although you might find the sleep mask takes some getting used to. Just a sleep mask is already pretty good.
  • Blue light from screens at night disrupts your sleep; use f.lux or equivalent to straightforwardly deal with this.
  • Lower body temperature makes it easier to sleep, so take hot showers at night, which cause your body to cool down in response.
  • If you're having trouble falling asleep at a consistent time, consider supplementing small (on the order of 0.1 mg) amounts of melatonin. (Edit: see SSC post on melatonin for more, which recommends 0.3 mg.) A lot of the melatonin you'll find commercially is 3-5 mg and that's too much. I deal with this by biting off small pieces, not sure if that's a good idea. (Melatonin stopped working for me in February anyway, not sure what's up with that.)

I have thoughts about diet and exercise but fewer general recommendations; the main thing you want here is something that feels good and is sustainable to you.

Other than that, something feels off to me about the framing of the question. I feel like I'd have to know a lot more about what kind of person you are and what kind of things you want out of your life to give reasonable answers. Everything is just very contextual.

Comment by qiaochu_yuan on Stories of Summer Solstice · 2018-07-09T07:51:26.854Z · score: 6 (3 votes) · LW · GW

The drum circle leading up to sunset was beautiful, but the drum (+ dance + singing) circle after sunset was really fun. I drummed and it was fun! Then I danced and it was fun! Then I sang and it was fun! Nat started improvising a melody and I tried that and it was fun, and then Nat started improvising lyrics and I tried that and it was even better

and then we played one of my favorite games, sing-as-many-songs-with-the-same-chord-progression-at-the-same-time-as-possible, with the most people I've ever gotten to do it with –

anyway, all of that made me really happy, and I feel really grateful to everyone who helped make it all possible.

Comment by Qiaochu_Yuan on [deleted post] 2018-06-28T13:11:12.539Z

Whoops, hang on, I definitely did not intend for all of these posts to be crossposted to LW. I thought Ben Pace had set things up so that only things tagged #lw would be crossposted, but that doesn't seem to have been what happened. My bad. I can't seem to delete the posts or make them invisible.

Comment by qiaochu_yuan on Last Chance to Fund the Berkeley REACH · 2018-06-28T11:09:07.927Z · score: 26 (6 votes) · LW · GW

Pledged $50 / mo. I haven't been to an event at REACH yet but I'm happy about the events I've seen on Facebook being hosted there, and expect to attend and/or host something there in the nearish future if it keeps existing. Everything Ray et al. have been writing about community health and so forth resonates with me and I'm happy to put my money where my resonance is.

Comment by qiaochu_yuan on Why kids stop asking why · 2018-06-06T00:25:18.948Z · score: 6 (1 votes) · LW · GW

I think a pattern that makes sense to me is cycles of exploration and exploitation: learn about the world, act on that understanding, use the observations you acquired from acting to guide further learning, etc. The world is big and complicated enough that I think you don't hit anything close to diminishing marginal returns on asking "why?" (my experience, if anything, has been increasing marginal returns as I've gotten better at learning things), although I agree that it's important to get some acting going on in there too.

Comment by qiaochu_yuan on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2018-06-04T18:26:22.200Z · score: 20 (4 votes) · LW · GW

Benquo, this is really great to hear. This is a shift I went through gradually throughout 2017 and I think it's really important.

Comment by qiaochu_yuan on Teaching Methodologies & Techniques · 2018-06-04T15:20:43.853Z · score: 21 (4 votes) · LW · GW

Teaching is not about methodology; it's metis, not episteme. (I am also not a schoolteacher but I have taught at CFAR workshops.)

I love cousin_it's suggestion that you should start teaching a student regularly as soon as possible, but I have an additional suggestion about how to spend that time: namely, your goal should not be to teach anyone anything but to find out how students' minds work (and since anyone can be a student, this means your goal is to find out how people's minds work), and how those minds interface with the material you want to teach. E.g. if you attempt to teach your student X and they're not getting it, instead of being frustrated at how they're not getting it, be curious about what's happening for the student instead of getting it. How are they interpreting the words you're saying? What models, if any, are they building in their head of the situation? Etc. etc.

Comment by qiaochu_yuan on Duncan Sabien on Moderating LessWrong · 2018-05-26T00:19:27.717Z · score: 25 (5 votes) · LW · GW
Not actually approaching a discussion collaboratively.
Not being up-to-speed enough to contribute to a discussion.

Yeah, these are two of the things that have been turning me off from trying to keep up with comments the most. I don't really have any ideas short of incredibly aggressive moderation under a much higher bar for comments and users than has been set so far.

Comment by qiaochu_yuan on Duncan Sabien on Moderating LessWrong · 2018-05-25T20:22:36.525Z · score: 62 (17 votes) · LW · GW

(Meta, in case it's relevant to anyone: it's felt to me like LW has been deluged in the last few weeks by people saying things that seem very clearly wrong to me (this is not at all specific to this post / discussion), but in such a way that it would take a lot of effort on my part to explain clearly what seems wrong to me in most of the cases. I'm not willing to spend the time necessary to address every such thing or even most of them, but I also don't have a clear sense of how to prioritize, so I approximately haven't been commenting at all as a result because it just feels exhausting. I'm making an exception for this post more or less on a whim.)

There's a lot I like about this post, and I agree that a lot of benquo's comments were problematic. That said, when I put on my circling hat, it seems pretty clear to me that benquo was speaking from a place of being triggered, and I would have confidently predicted that the conversation would not have improved unless this was addressed in some way. I have some sense of how I would address this in person but less of a sense of how to reasonably address it on LW.

There's something in Duncan's proposed norms in the direction of "be responsible for your own triggered-ness." And there's something I like about that in principle, but I also think that in practice almost nobody on LW can do this reliably, including Duncan, and I would want norms that fail more gracefully in the presence of multiple people being triggered and not handling it ideally. At the very least, I want this concept of being triggered to be in common knowledge (I think it's not even in mutual knowledge at the moment) so we can talk about it when it's relevant, and ideally I'd want norms that make it okay to say things like "hey, based on X, Y, and Z I suspect you're currently a little triggered, do you want to slow down this conversation in A, B, or C way?" without this being taken as, like, a horrendous overreaching accusation.

Comment by qiaochu_yuan on Visions of Summer Solstice · 2018-05-21T23:24:18.819Z · score: 6 (1 votes) · LW · GW

I don't know their names or anything else about who they were. One of them just seemed really remarkably clueless (I'm honestly just confused; he looked 25 or something but talked like he was maybe 15?), and he had a friend who said he was party-hopping or something and who hit on a woman I knew in a way that I expect creeped her out, based on my read of her body language, though I didn't actually check with her afterwards. Neither of them seemed like rationalists to me at all, and this isn't an issue I've encountered at other rationalist events. I have no idea where they came from.

Comment by qiaochu_yuan on The Second Circle · 2018-05-21T23:02:06.355Z · score: 13 (3 votes) · LW · GW
unwilling-to-quite-express-preferences-but-expressing-them-anyway kind of very circling-way?

I think expressing preferences is fine, but there's usually some more fine-grained aspect of your experience of having the preference that you can also talk about. E.g. "I want to change topics" is fine, but better would be "I'm feeling impatient with the current discussion and want to talk about something that will feel more productive," or something. And then someone else might ask you more about how it feels to be impatient and want something more productive to happen, etc.

The suggestions on facilitation style point towards the don't-introduce-everyone-at-once concept being important, since it doesn't seem compatible with not following that principle.

Little confused about how to parse this sentence. Not sure what "don't-introduce-everyone-at-once concept" means, or what principle you're referring to with the phrase "that principle."

It sounds like you think that having an explicit object-other-than-itself that isn't pure raw ground level is a mistake, or at least a different class of thing than what you think is the valuable thing? Say a bit more there?

I think it's something like an advanced thing to try, and it's not something I'd start beginners on by default, although this isn't a strong opinion, and I might change my mind if I experimented with it more.

What I expect to happen to most groups of people if you try to start them circling on an explicit object-level topic is that they'll mostly talk about the topic in a way that makes it harder for them to see what's going on in their experience and the experience of other people in the circle, e.g. if everyone is regurgitating cached thoughts and/or signaling intelligence or whatever. If there are enough experienced circlers in the circle then I'd feel more confident that this sort of thing will get called out if it starts happening, but I'd be uncomfortable trying it with a circle consisting entirely of beginners, especially if the facilitator isn't very experienced. There's a thing about building form here.

What often happens in the absence of a topic to start with, especially in groups of people who know each other already, is that a topic sort of emerges naturally out of the circling dynamics, e.g. maybe Person A says something that triggers Person B and then the topic is whatever's happening between A and B, and sometimes a third Person C gets involved and then the topic is whatever's happening with the three of them. But in order to get to this point with beginners, the facilitator needs to be able to guide people to the point where they feel comfortable saying things that might make other people in the circle feel uncomfortable, and that's tricky to do as a beginner.

In general, it's worth mentioning that, in my view, a lot of the advice and guidelines people give beginners about circling are training wheels / band-aids meant to help you avoid various flavors of being distracted away from your experience, and once you get good enough at zeroing in on your experience you can mostly discard them.

Comment by qiaochu_yuan on The Second Circle · 2018-05-21T07:29:59.021Z · score: 21 (4 votes) · LW · GW

(Epistemic status: intuitions built off of somewhere around 70 hours of circling over the course of the last year and a half, including facilitating somewhere around 15-20 circles.)

The topic of the meetup is… circling. Circling, it seems, is about circling. We’re explicitly supposed not to talk about anything. Or try to accomplish anything, other than connect.
The art must have an end other than itself or it collapses into infinite recursion.

The art does have an end other than itself (at least I think it does), but it's not a good idea for a beginner to focus on what they think that end is while learning the art. Circling is like meditation in that way.

Then disaster struck – Jacob worried that disaster had struck. And felt he had to Do Something. Treat the situation as bad.
Which made it bad. From there, all downhill. Nothing disastrous, but less connection, more awkward, no road to recovery.

Facilitating circles is really quite difficult and this is a reasonably large component of why; it takes a decent amount of tacit knowledge and skill to learn how to navigate issues like this as a facilitator. Part of the skill involves welcoming whatever is happening in the circle, including your own sense that Something Is Wrong and needs to be Fixed, and in the long run learning how to take all that as object.

Circling Europe has a philosophy towards facilitation called "surrendered leadership" that I'm not particularly qualified to explain, but roughly it involves thinking of facilitation as just being a really good participant, as opposed to a person who is in charge of trying to make the circle "good" as opposed to "bad," whatever those mean.

I also think it's in fact a mistake to focus explicitly on connection as a goal while circling, although others might disagree. At least for beginners I think this is a distraction from finding out what is even happening in everyone's experience at all.

Tonight, we’d had a circle about circling. Previously, we’d had a circle about something quite important. An object level to work with, and build upon, to prevent the meta cycle. So tonight felt not real, like a game. Previously was not a game.

Circles are not about circling; to the extent that they're about anything, they're about what the participants are experiencing in the moment. Your experience is the object level.

You might be experiencing a bunch of meta thoughts about circling, and you can talk about those thoughts if you want, but a different thing you can do, that a facilitator may or may not attempt to encourage you to do, is to talk about the experience of having those thoughts, especially any emotional flavor or accompanying sensations in your body. E.g. rather than "I'm worried we'll be going too meta," something like "I notice I feel frustrated and impatient; I have a desire to tap my fist against the ground, and I feel a flushing in my face and upper chest. I imagine the frustration and impatience is about a worry that we're going too meta and wasting time, and the idea of wasting time has me feeling angry."

You also might be experiencing your thoughts wandering off to an unrelated topic, and you can talk about that - not necessarily the topic, but your experience of your thoughts wandering off to that topic. E.g. "I notice my thoughts wandering to a really interesting essay I read. I'm a little worried that I'm no longer 'circling right,' whatever that means, and feeling some embarrassment and awkwardness around that."

Comment by qiaochu_yuan on Visions of Summer Solstice · 2018-05-21T06:39:25.729Z · score: 13 (5 votes) · LW · GW

So, I like this, and also I'm going to use this space to make a complaint about how winter solstice went, which has some bearing on how much I'll want to go to summer solstice insofar as I'm worried about it having the same problem.

Namely: 1) I felt like the atmosphere of winter solstice really wanted to be nice-Thanksgiving-dinner-with-people-I-know-and-like, but in fact I did not know and/or like a lot of the people there, so the whole experience felt tonally dissonant to me, and 2) beyond the tone at large, there were several specific people there (all men) who creeped me out, who I expect creeped out other people there, and whom I have never seen at a rationalist event before or since, and this seems really bad.

I don't know by what process, if any, guests were filtered, but if the answer is "basically none" I think this is basically incompatible with wanting a Thanksgiving-dinner-ish vibe.

Comment by qiaochu_yuan on Challenges to Christiano’s capability amplification proposal · 2018-05-21T06:31:11.553Z · score: 26 (9 votes) · LW · GW

So, congratulations are in order to the LW team for putting in the work necessary to create the features that Eliezer wanted before coming back (IIRC mostly reign of terror moderation?). Hooray! The Eliezer-posts-things-on-Facebook equilibrium was so much worse for so many reasons, not least of which is how hard it is to search for old FB posts / refer back to them in other discussions.

Comment by qiaochu_yuan on Personal relationships with goodness · 2018-05-21T02:27:00.653Z · score: 11 (2 votes) · LW · GW
I'm a little perplexed about what you find horrifying about the side-taking hypothesis.

I think there was a part of me that was still in some sense a moral realist and the side-taking hypothesis broke it.

Comment by qiaochu_yuan on The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo · 2018-05-20T22:03:30.046Z · score: 45 (11 votes) · LW · GW
it's a general issue with the way CFAR operates, building up intense social connections over the course of a weekend, then dropping them suddenly.

So, this is definitely a thing that happens, and I'm aware of and sad about it, but it's worth pointing out that this is a generic property of all sufficiently good workshops and things like workshops (e.g. summer camps) everywhere (the ones that aren't sufficiently good don't build the intense social connections in the first place), and to the extent that it's a problem CFAR runs into, 1) I think it's a little unfair to characterize it as the result of something CFAR is particularly doing that other similar organizations aren't doing, and 2) as far as I know nobody else knows what to do about this either.

Or are you suggesting that the workshops shouldn't be trying to build intense social connections?

Comment by qiaochu_yuan on Can our universe contain a perfect simulation of itself? · 2018-05-20T21:59:10.442Z · score: 6 (1 votes) · LW · GW
I'm curious about what you think would be a meaningful definition of a "perfect encoding".

Part of the point of the excerpt you quoted from Aaronson is that in any notion of an encoding, some of the computational work is being done by the decoding procedure, whatever that is. So e.g. you can specify a programming language, and build a compiler that will compile and execute programs in that programming language, and then talk about a program perfectly encoding something if it outputs that thing when run. Some of the computational work is being done by the program but some of it's being done by the compiler.
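A cartoon version of how the decoder can end up doing all of the work (my own toy example, not Aaronson's):

```python
# A degenerate "encoding" where the encoded form carries no information at all
# and the decoding procedure does every bit of the computational work.
SECRET = "the quick brown fox jumps over the lazy dog"

def encode(message: str) -> str:
    assert message == SECRET  # this scheme only "encodes" one message
    return ""                 # the code itself is empty

def decode(code: str) -> str:
    assert code == ""
    return SECRET             # the decoder just contains the answer

assert decode(encode(SECRET)) == SECRET
print("round trip works, but calling this a perfect encoding would be silly")
```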

Comment by qiaochu_yuan on Everything I ever needed to know, I learned from World of Warcraft: Incentives and rewards · 2018-05-20T02:55:32.910Z · score: 13 (3 votes) · LW · GW

Spliddit solves a one-off fair division problem, but the problem faced by WoW guilds is importantly different because 1) it's iterated, 2) you have to sink in resources to get access to each round of the fair division problem (killing a raid boss or whatever), and 3) players can leave at any time if the rewards aren't good enough.

Comment by qiaochu_yuan on Can our universe contain a perfect simulation of itself? · 2018-05-20T02:47:35.344Z · score: 33 (8 votes) · LW · GW

Basically I think you're confused. You correctly begin to identify the core of the problem and then don't engage with it here:

It seems like what counts as a perfect simulation hides most of the complexity of this problem. For now, I'm going to hand-wave a bit and say a perfect simulation is anything that can assess the truth value of an arbitrary proposition about the universe.

Interpreting this definition sufficiently strongly, the following straightforward diagonalization argument can be applied: suppose you had such a simulation, and that it had some sort of output channel that it used to display answers to questions about the universe. Ask it "is the answer to this question that's about to be displayed in the output channel no?"

In the positive direction, there is the following lovely theorem: without loss of generality, you can always assume that a program has access to a copy of its own source code. Given this fact, one might try to apply the above diagonalization argument, as follows: a program attempts to run itself, outputting true if it outputs false, and outputting false if it outputs true. What happens?

Straightforward: the program doesn't halt, outputting nothing; you can think of this as being the result of the program repeatedly calling itself over and over again. A slight modification of this argument produces the usual proof of the undecidability of the halting problem.
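If it's useful to see this concretely, here's a minimal sketch in Python, where "access to its own source code" is cashed out as the function literally calling itself:

```python
# Diagonal program: run a copy of yourself and output the opposite answer.
# It can never settle on an output, which here shows up as infinite recursion.
import sys

def diagonal() -> bool:
    copys_answer = diagonal()   # "run my own source code"
    return not copys_answer     # output true iff the copy outputs false

if __name__ == "__main__":
    sys.setrecursionlimit(5000)
    try:
        diagonal()
    except RecursionError:
        print("never halts: the self-call recurses forever (the stack gave out first)")
```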

Comment by qiaochu_yuan on A Sketch of Good Communication · 2018-05-16T23:54:21.006Z · score: 6 (1 votes) · LW · GW

Yeah, "the" was much too strong and that is definitely not a thing I think. I don't appreciate the indirect accusation that I'm trying to get people to defer to my authority by inducing high time preference in them.

Comment by qiaochu_yuan on Personal relationships with goodness · 2018-05-15T00:09:31.169Z · score: 6 (1 votes) · LW · GW

So, yes, in addition to my own story I have more thoughts about what kind of story I want for people in general, roughly along these lines:

And he said – no, absolutely, stay in your career right now. In fact, his philosophy was that you should do exactly what you feel like all the time, and not worry about altruism at all, because eventually you’ll work through your own problems, and figure yourself out, and then you’ll just naturally become an effective altruist.

Or not, and that would also be fine.

I have strong intuitions about a thing which I'll roughly label "not skipping developmental stages." I think there is something like a developmental stage at which thinking about altruism is natural and won't slowly corrupt your soul, and I worry about something like people not knowing what stage they're at, not being at this stage, and trying to pretend to themselves and others that they are. The problem, roughly, is that I think most people are trying to do EA at Kegan 3, which is subject to tons of Goodharting / signaling issues, and it seems like a bad idea to me to seriously try to do EA until Kegan 4 or 5.

Comment by qiaochu_yuan on Personal relationships with goodness · 2018-05-14T21:48:13.435Z · score: 52 (12 votes) · LW · GW

I think this whole discussion so far hides dangerous amounts of confusion around the concept of "good," and any serious progress will involve unpacking this confusion in much more detail. Here are some other stories I think it's important to have in the mix when thinking about this.

Goodness is about signaling: You know this one. In the ancestral environment people wanted to signal that they would make useful allies, which involves having properties like standing up for your friends, keeping your promises, etc. Perhaps they even wanted to signal that they would be good leaders of the tribe, which involves having properties like looking out for the well-being of the tribe. Also, humans are bad at lying. All this adds up to a strong incentive to signal both to yourself and to others that you care about doing things that are "good" = things that would make you a desirable ally or leader, or whatever.

Goodness is about coordinating decisions about who to back in social conflicts: This is the side-taking hypothesis of morality. Read the link for more details. This is maybe the most horrifying idea I've come across in the last year.

Goodness is an eldritch horror / egregore: Some crazy societal / cultural process indoctrinated you with this concept of "good" for reasons that have basically nothing to do with what you want. Cf. people who have been indoctrinated with communism or a religion, or fictional people living in a dystopia. There is just this distributed entity running on a bunch of humans propagating itself through virulent memes, and who knows what it's optimizing for, but probably not what I want.

My story is some kind of complicated mix of these; many parts of it are nonverbal and verbalizing them would require some effort on my part. But if I had to try verbalizing, it might go something like this:

"Many people, including me, seem to have some concept of what it means for a person or action to be 'good.' It seems like a complicated concept and I notice I'm confused. When I try to label a person as 'good' or 'bad,' including myself, it feels like I am basically always making some kind of mistake, maybe a type error. I have some kind of desire to be able to label myself 'good,' which seems to come from some sense that if I am 'good' then I 'deserve' (another complicated concept I notice confusion around) to be happy, or 'deserve' other people's love, or something like that.

This concept I have of 'goodness' came from somewhere, and I'm not sure I trust whatever process it came from. I have some sense that my desire to use it is protecting something, but whatever that is I'd rather work with it directly.

What seems a lot less complicated than 'goodness' or 'badness' is thinking about what I want. I want a lot of things, many of which involve other people. I have some sense of what it means for people to be able to trust each other and cooperate in a way that makes both of them better off, and a lot of what I want revolves around this; I want to be a trustworthy person who can cooperate with other people in ways that make both of us better off, so I can get other things I want. I also want to continue existing so I can get all the other things I want. I in fact don't want to do a lot of the actions that I might naively want to label as 'bad' because they would make me less trustworthy and I don't want that.

I have the sense that I'm made out of a bunch of parts that want different things, and those parts are still in the process of learning how to trust and cooperate with each other so they can all get more of what they want."

One thing I've been playing with in the last few months is learning to stop being subject to the concept of goodness. It's been very freeing; I feel a lot more capable of thinking through the actual consequences of my actions (including decision-theoretic consequences) and deciding if I want those consequences or not, as opposed to feeling shackled by a bunch of deontological constraints that were put into place by processes I don't trust.

Comment by qiaochu_yuan on Terrorism, Tylenol, and dangerous information · 2018-05-13T18:37:20.543Z · score: 35 (8 votes) · LW · GW

This is not a good argument against caring about terrorism; I wrote a blog post about this but it seems to be frequently misunderstood so it's probably not very good.

Comment by qiaochu_yuan on Advocating for factual advocacy · 2018-05-06T21:12:29.143Z · score: 23 (5 votes) · LW · GW
Human morality is mostly a justification mechanism, trying to give coherence to the actions we were going to do anyway.

Here is an even more Hansonian view that I think makes better predictions: the side-taking hypothesis says that morality is a social tool for deciding which side to support in a conflict between groups. Extended quote:

Here is a distinctive human problem that just might explain our distinctive moral condemnation: Humans, more than any other species, support each other in fights, whether fistfights, yelling matches, or gossip campaigns. In most animal species, fights are mano-a-mano or between fixed groups. Humans, however, face complicated conflicts in which bystanders are pressured to choose sides in other people’s fights, and it’s unclear who will take which side. Think about the intrigues of family feuds, office politics, or international relations.
One side-taking strategy is supporting the higher-status fighter like a boss against a coworker or parent against child. However, this encourages bullies because higher-ups can exploit their position. Another strategy is to form alliances with friends and loyally support them. Alliances deflate bullies but create another problem: When everyone sides with their own friend, the group tends to split into evenly matched sides and fights escalate. This is costly for bystanders because they get scuffed up fighting their friends’ battles.
Moral condemnation offers a third strategy for choosing sides. People can use moral judgment to assess the wrongness of fighters’ actions and then choose sides against whoever was most immoral. When all bystanders use this strategy, they all take the same side and avoid the costs of escalated fighting. That is, moral condemnation functions to synchronize people’s side-taking decisions. This moral strategy is, of course, mostly unconscious just like other evolved programs for vision, movement, language, and so on.
For moral side-taking to work, the group needs to invent and debate moral rules to cover the most common fights—rules about violence, sex, resources, etc. Humans are quite motivated to do just this. Once moral rules are established, people can use accusations of wrongdoing as coercive threats to turn the group, including your family and friends, against you [emphasis mine].

What the side-taking hypothesis suggests is that making the moral case for e.g. vegetarianism is a matter of convincing people to gang up against non-vegetarians in various ways, or rather convincing people that other people will do this. Insofar as you think this is bad, you might want to spread vegetarianism through a conduit other than morality.

Worth meditating on the side-taking hypothesis as it applies to the recent debacle around the vegan blogger who bought ice cream for a kid and got shamed by other vegans over it.

Comment by qiaochu_yuan on Bayes' Law is About Multiple Hypothesis Testing · 2018-05-04T17:56:42.064Z · score: 37 (7 votes) · LW · GW
Now we've got it: we see the need to enumerate every hypothesis we can in order to test even one hypothesis properly.

A cached handle I have for this is "the negation of a hypothesis is not a hypothesis"; said another way, "the negation of a model is not a model." Insofar as a hypothesis / model is a thing that makes predictions, "not (a thing that makes predictions)" isn't a thing that makes predictions. E.g. "person X just didn't understand the concept" is not a hypothesis about what's going on when person X gets a problem wrong on a test.
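
As a toy illustration of why you have to enumerate concrete alternatives (my own sketch, not from the post or comment; the coin hypotheses and the `posterior` helper are made up for the example): Bayes' rule needs each hypothesis to supply a likelihood for the data, and "not this hypothesis" supplies none until you replace it with specific hypotheses that each make predictions.

```python
from math import comb

# Each hypothesis must say how probable the observed data is under it.
# "The coin is not fair" can't do that; concrete alternative biases can.

def posterior(priors, likelihoods, data):
    """priors: {name: P(H)}; likelihoods: {name: function data -> P(data | H)}."""
    unnormalized = {h: priors[h] * likelihoods[h](data) for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

def binomial_likelihood(p):
    # P(heads out of flips | the coin lands heads with probability p)
    return lambda d: comb(d[1], d[0]) * p ** d[0] * (1 - p) ** (d[1] - d[0])

data = (8, 10)  # 8 heads in 10 flips

priors = {"fair (p=0.5)": 0.5, "biased (p=0.7)": 0.25, "biased (p=0.9)": 0.25}
likelihoods = {
    "fair (p=0.5)": binomial_likelihood(0.5),
    "biased (p=0.7)": binomial_likelihood(0.7),
    "biased (p=0.9)": binomial_likelihood(0.9),
}

print(posterior(priors, likelihoods, data))
```

Dropping the two biased hypotheses and keeping only "fair" leaves nothing to update against, which is the sense in which testing even one hypothesis properly requires enumerating the alternatives.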

Rationalist Lent is over · 2018-03-30T05:57:03.117Z · score: 49 (19 votes)

The Math Learning Experiment · 2018-03-21T21:59:04.682Z · score: 124 (35 votes)

Deciphering China's AI Dream · 2018-03-18T03:26:13.471Z · score: 35 (8 votes)

Are you the rider or the elephant? · 2018-02-21T07:25:04.371Z · score: 74 (26 votes)

Rationalist Lent · 2018-02-13T23:55:29.713Z · score: 84 (29 votes)

Paper Trauma · 2018-01-31T22:05:43.859Z · score: 128 (49 votes)

CFAR workshop with new instructors in Seattle, 6/7-6/11 · 2017-05-20T00:18:22.109Z · score: 8 (9 votes)

In memory of Thomas Schelling · 2016-12-13T22:17:51.257Z · score: 10 (11 votes)

Against utility functions · 2014-06-19T05:56:29.877Z · score: 43 (48 votes)

What resources have increasing marginal utility? · 2014-06-14T03:43:14.195Z · score: 36 (37 votes)

The January 2013 CFAR workshop: one-year retrospective · 2014-02-18T18:41:13.935Z · score: 34 (37 votes)

Useful Questions Repository · 2013-07-25T02:58:35.717Z · score: 23 (24 votes)

Evidential Decision Theory, Selection Bias, and Reference Classes · 2013-07-08T05:16:48.460Z · score: 25 (26 votes)

[LINK] Cantor's theorem, the prisoner's dilemma, and the halting problem · 2013-06-30T20:26:03.002Z · score: 13 (14 votes)

[LINK] The Selected Papers Network · 2013-06-14T20:20:21.542Z · score: 9 (10 votes)

Useful Concepts Repository · 2013-06-10T06:12:49.639Z · score: 32 (33 votes)

[LINK] Sign up for DAGGRE to improve science and technology forecasting · 2013-05-26T00:08:55.793Z · score: 3 (4 votes)

[LINK] Soylent crowdfunding · 2013-05-21T19:09:31.034Z · score: 7 (14 votes)

Privileging the Question · 2013-04-29T18:30:35.545Z · score: 110 (108 votes)

[LINK] Causal Entropic Forces · 2013-04-20T23:57:34.160Z · score: 5 (10 votes)

Post Request Thread · 2013-04-11T01:28:46.351Z · score: 20 (21 votes)

Solved Problems Repository · 2013-03-27T04:51:54.419Z · score: 28 (32 votes)

Boring Advice Repository · 2013-03-07T04:33:41.739Z · score: 60 (64 votes)

Think Like a Supervillain · 2013-02-20T08:34:55.618Z · score: 31 (46 votes)

Rationalist Lent · 2013-02-14T06:32:40.415Z · score: 44 (44 votes)

Thoughts on the January CFAR workshop · 2013-01-31T10:16:08.725Z · score: 37 (38 votes)