Posts

16 types of useful predictions 2015-04-10T03:31:39.018Z
A map of Bay Area memespace 2013-09-23T17:34:58.111Z
Three ways CFAR has changed my view of rationality 2013-09-10T18:24:21.981Z
CFAR workshop, June 15th, Salt Lake City UT 2013-06-06T16:47:42.749Z
New applied rationality workshops (April, May, and July) 2013-04-09T02:58:58.442Z
Life Extension versus Replacement 2011-11-30T01:47:10.475Z
How rationality can make your life more awesome 2011-11-29T01:23:26.980Z

Comments

Comment by Julia_Galef on The Scout Mindset - read-along · 2021-04-22T08:40:57.780Z · LW · GW

... By the way, you might've misunderstood the point of the Elon Musk examples. The point wasn't that he's some exemplar of honesty. It was that he was motivated to try to make his companies succeed despite believing that the most likely outcome was failure. (i.e., he is a counterexample to the common claim "Entrepreneurs have to believe they are going to succeed, or else they won't be motivated to try")

Comment by Julia_Galef on The Scout Mindset - read-along · 2021-04-22T00:55:56.356Z · LW · GW

Thanks! I do also rely to some extent on reasoning... for example, Chapter 3 is my argument for why we should expect to be better off with (on the margin) more scout mindset and less soldier mindset, compared to our default settings. I point out some basic facts about human psychology (e.g., the fact that we over-weight immediate consequences relative to delayed consequences) and explain why it seems to me those facts imply that we would have a tendency to use scout mindset less often than we should, even just for our own self-interest.

The nice thing about argumentation (as compared to citing studies) is that it's pretty transparent -- the reader can evaluate my logic for themselves and decide if they buy it.

Comment by Julia_Galef on The Scout Mindset - read-along · 2021-04-22T00:19:20.592Z · LW · GW

Hey Ozzie! Thanks for reading / reviewing.

I originally hoped to write a more “scholarly” book, but I spent months reading the literature on motivated reasoning and thought it was mostly pretty bad, and anyway not the actual cause of my confidence in the core claims of the book such as “You should be in scout mindset more often.” So instead I focused on the goal of giving lots of examples of scout mindset in different domains, and addressing some of the common objections to scout mindset, in hopes of inspiring people to practice it more often. 

I left in a handful of studies that I had greater-than-average confidence in (for various reasons, which I might elaborate on in a blog post – e.g. I felt they had good external validity and no obvious methodological flaws). But I tried not to make it sound like those studies were definitive, nor that they were the main cause of my belief in my claims.

Ultimately I’m pretty happy with my choice. I understand why it might be disappointing for someone expecting a lot of research... but I think it's an unfortunate reality, given the current state of the social sciences, that books which cite a lot of social science studies tend to give off an impression of rigor that is not deserved.

Comment by Julia_Galef on Why startup founders have mood swings (and why they may have uses) · 2015-12-10T23:56:47.325Z · LW · GW

This doesn't really ring true to me (as a model of my personal subjective experience).

The model in this post says despair is "a sign that important evidence has been building up in your buffer, unacknowledged, and that it’s time now to integrate it into your plans."

But most of the times that I've cycled intermittently into despair over some project (or relationship), it's been because of facts I already knew, consciously, about the project. I'm just becoming re-focused on them. And I wouldn't be surprised if things like low blood sugar or anxiety spilling over from other areas of my life are major causes of some Fact X seeming far more gloomy on one particular day than it did just the day before.

And similarly, most of the times I cycle back out of despair, it's not because of some new information I learned or an update I made to my plans. It's because, e.g., I went to sleep and woke up the next morning and things seemed okay again. Or because my best friend reminded me of optimistic Facts Y and Z which I already knew about, but hadn't been thinking about.

Comment by Julia_Galef on Speculative rationality skills and appropriable research or anecdote · 2015-07-22T00:39:58.612Z · LW · GW

Hey, I'm one of the founders of CFAR (and used to teach the Reference Class Hopping session you mentioned).

You seem to be misinformed about what CFAR is claiming about our material. Just to use Reference Class Hopping as an example: It's not the same as reference class forecasting. It involves doing reference class forecasting (in the first half of the session), then finding ways to put yourself in a different reference class so that your forecast will be more encouraging. We're very explicit about the difference.

I've emailed experts in reference class forecasting, described our "hopping" extension to the basic forecasting technique, and asked: "Is anyone doing research on this?" Their response: "No, but what you're doing sounds useful." [If I get permission to quote the source here I will do so.]

This is pretty standard for most of our classes that are based on existing techniques. We cite the literature, then explain how we're extending it and why.

Comment by Julia_Galef on 16 types of useful predictions · 2015-04-08T19:47:38.608Z · LW · GW

I usually try to mix it up. A quick count shows 6 male examples and 2 female examples, which was not a deliberate choice, but I suppose I can be more intentional about an even split in the future.

Comment by Julia_Galef on Harper's Magazine article on LW/MIRI/CFAR and Ethereum · 2014-12-15T19:13:12.466Z · LW · GW

Thanks for showing up and clarifying, Sam!

I'd be curious to hear more about the ways in which you think CFAR is over-(epistemically) hygienic. Feel free to email me if you prefer, but I bet a lot of people here would also be interested to hear your critique.

Comment by Julia_Galef on Harper's Magazine article on LW/MIRI/CFAR and Ethereum · 2014-12-15T19:01:49.782Z · LW · GW

Sure, here's a CDC overview: http://www.cdc.gov/handwashing/show-me-the-science-hand-sanitizer.html Hand sanitizers seem to be imperfect but better than nothing. And since people are surely not going to wash their hands every time they cough, sneeze, or touch communal surfaces, supplementing normal handwashing with hand sanitizer seems like a probably-helpful precaution.

But note that this has turned out to be an accidental tangent since the "overhygienic" criticism was actually meant to refer to epistemic hygiene! (I am potentially also indignant about the newly clarified criticism, but would need more detail from Sam to find out what, exactly, about our epistemic hygiene he objects to.)

Comment by Julia_Galef on Harper's Magazine article on LW/MIRI/CFAR and Ethereum · 2014-12-12T23:55:43.901Z · LW · GW

Edited to reflect the fact that, no, we certainly don't insist. We just warn people that it's common to get sick during the workshop, because you're probably getting less sleep and you're in close contact with so many other people (many of whom have recently been in airports, etc.). And that it's good practice to use hand sanitizer regularly, not just for your own sake but for others'.

Comment by Julia_Galef on Harper's Magazine article on LW/MIRI/CFAR and Ethereum · 2014-12-12T23:07:50.185Z · LW · GW

Perhaps this is silly of me, but the single word in the article that made me indignantly exclaim "What!?" was when he called CFAR "overhygienic."

I mean... you can call us nerdy, weird in some ways, obsessed with productivity, with some justification! But how can you take issue with our insistence [Edit: more like strong encouragement!] that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?

[Edit: The author has clarified above that "overhygienic" was meant to refer to epistemic hygiene, not literal hygiene.]

Comment by Julia_Galef on Tell Culture · 2014-01-19T08:18:50.155Z · LW · GW

"I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."

I read this suggested line and felt a little worried. I hope rationalist culture doesn't head in that direction.

There are plenty of times when I agree a policy of frankness can be useful, but one of the risks of such a policy is that it can become an excuse to abdicate responsibility for your effect on other people.

If you tell me that you're having an aversive reaction to our conversation, but can't tell me why, it's going to stress me out, and I'm going to feel compelled to go back over our conversation to see if I can figure out what I did to cause that reaction in you. That's a non-negligible burden to dump on someone.

If, instead, you find an excuse to leave the conversation gracefully (no need for annoyed body language), you can reflect on the conversation later and decide if there is anything in particular I did to cause your aversive reaction. Maybe so, and you want to bring it up with me later. Or maybe you decide you overreacted to a comment I made, which you now believe you misinterpreted. Or maybe you decide you were just anxious about something unrelated. Overall, chances are good that you can save me a lot of stress and self-consciousness by dealing with your emotions yourself as a first pass, and making them my problem only if (upon reflection) you decide that it would be helpful to do so.

Comment by Julia_Galef on Why CFAR? · 2014-01-09T19:25:56.821Z · LW · GW

Yes, that makes a lot of sense!

Since we don't have any programmers on staff at the moment, we went with the less-than-ideal solution of a manual thermometer, which we update about once a day -- but it certainly would be better to have it happen automatically.

For now, I've gone with the kluge-y solution of an "Updated January XXth" note directly above the menu bar. Thanks for the comment.

Comment by Julia_Galef on Why CFAR? · 2014-01-02T19:01:06.326Z · LW · GW

"several mainstream media articles about CFAR on their way, including one forthcoming shortly in the Wall Street Journal"

That article's up now -- it was on the cover of the Personal Journal section of the WSJ, on December 31st. Here's the online version: More Rational Resolutions

Comment by Julia_Galef on A map of Bay Area memespace · 2013-09-24T07:12:20.774Z · LW · GW

Great one, thanks!

Comment by Julia_Galef on A map of Bay Area memespace · 2013-09-24T06:58:55.658Z · LW · GW

Agreed. I might add them to a future version of this map.

This time around I held off mainly because I was stumped by how to add them; drugs really do pervade so many of these groups, in different variants: psychedelics are strong among the counterculture and New Age culture, nootropics are more popular among rationalists and biohackers/Quantified Self, and both are popular among transhumanists. (See this H+ article for a discussion of psychedelic transhumanists.)

Comment by Julia_Galef on Three ways CFAR has changed my view of rationality · 2013-09-10T07:47:00.695Z · LW · GW

Well, I'd say that LW does take account of who we are. They just haven't had the impetus to do so quite as thoroughly as CFAR has. As a result there are aspects of applied rationality, or "rationality for humans" as I sometimes call it, that CFAR has developed and LW hasn't.

Comment by Julia_Galef on New applied rationality workshops (April, May, and July) · 2013-04-08T18:23:47.710Z · LW · GW

If it makes you feel less hesitant, we've given refunds twice: one person at a workshop last year said he'd expected polish and suits, and another said he enjoyed it but wasn't sure it was going to help enough with his current life situation to be worth it.

Comment by Julia_Galef on New applied rationality workshops (April, May, and July) · 2013-04-08T15:44:17.649Z · LW · GW

Fixed now, sorry!

Comment by Julia_Galef on New applied rationality workshops (April, May, and July) · 2013-04-08T15:35:10.942Z · LW · GW

Fixed! Thanks, I apparently didn't understand how links worked in this system.

Comment by Julia_Galef on Nov 16-18: Rationality for Entrepreneurs · 2012-11-09T00:09:21.926Z · LW · GW

Not sure what kind of evidence you're looking for here; that's just a description of our selection criteria for attendees.

Comment by Julia_Galef on Life Extension versus Replacement · 2011-11-30T19:52:30.829Z · LW · GW

Preferring utilitarianism is a moral intuition, just like preferring Life Extension. The former's a general intuition; the latter's an intuition about a specific case.

So it's not a priori clear which intuition to modify (general or specific) when the two conflict.

Comment by Julia_Galef on Life Extension versus Replacement · 2011-11-30T19:47:13.118Z · LW · GW

Right -- I don't claim any of my moral intuitions to be true or correct; I'm an error theorist, when it comes down to it.

But I do want my intuitions to be consistent with each other. So if I have the intuition that utility is the only thing I value for its own sake, and I have the intuition that Life Extension is better than Replacement, then something's gotta give.

Comment by Julia_Galef on Life Extension versus Replacement · 2011-11-30T17:31:59.795Z · LW · GW

When our intuitions in a particular case contradict the moral theory we thought we held, we need some justification for amending the moral theory other than "I want to."

Comment by Julia_Galef on Life Extension versus Replacement · 2011-11-30T17:22:18.428Z · LW · GW

I agree, and that's why my intuition pushes me towards Life Extension. But how does that fact fit into utilitarianism? And if you're diverging from utilitarianism, what are you replacing it with?

Comment by Julia_Galef on How rationality can make your life more awesome · 2011-11-30T05:10:41.931Z · LW · GW

Excellent.

Comment by Julia_Galef on Life Extension versus Replacement · 2011-11-30T03:23:51.301Z · LW · GW

One doesn't have to be better than the other. That's what's in dispute.

I think making this comparison is important philosophically, because of the implications our answer has for other utilitarian dilemmas, but it's also important practically, in shaping our decisions about how to allocate our efforts to better the world.

Comment by Julia_Galef on Life Extension versus Replacement · 2011-11-30T03:19:24.828Z · LW · GW

Thanks -- but if I'm reading your post correctly, your arguments hinge on the utility experienced in Life Extension being greater than that in Replacement. Is that right? If I stipulate that the utility is equal, would your answer change?

Comment by Julia_Galef on Life Extension versus Replacement · 2011-11-30T02:52:41.025Z · LW · GW

Ah, true! I edited it again to include the original setup, so that people will know what Logos01 and drethelin are referring to.

Comment by Julia_Galef on Life Extension versus Replacement · 2011-11-30T02:19:31.206Z · LW · GW

Thanks -- I fixed the setup.

Comment by Julia_Galef on How rationality can make your life more awesome · 2011-11-29T07:03:17.666Z · LW · GW

My framing was meant to be encouraging you to disproportionately question beliefs which, if false, make you worse off. But motivated skepticism is disproportionately questioning beliefs that you want to be false. That's an important difference, I think.

Are you claiming that my version is also a form of motivated skepticism (perhaps a weaker form)? Or do you think my version's fine, but that I need to make it clearer in the text how what I'm encouraging is different from motivated skepticism?

Comment by Julia_Galef on Communicating rationality to the public: Julia Galef's "The Straw Vulcan" · 2011-11-26T22:36:15.215Z · LW · GW

Incidentally, the filmmaker didn't capture my slide with the diagram of the revised model of rationality and emotions in ideal human* decision-making, so I've uploaded it.

The Straw Vulcan model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-00-pm.png

My revised model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-14-pm.png

*I realize now that I need this modifier, at least on Less Wrong!

Comment by Julia_Galef on Communicating rationality to the public: Julia Galef's "The Straw Vulcan" · 2011-11-26T22:14:49.307Z · LW · GW

Great point; that's true in many cases, such as when you're trying to decide which school to go to, and you make the decision deliberatively while taking into account the data from your intuitive reactions to the schools.

But in other cases, such as chess-playing, aren't you mainly just deciding based on your System 1 judgments? (Admittedly I'm no chess player; that's just my impression of how it works.)

I agree you need to use System 2 for your meta-judgment about which system to use in a particular context, but once you've made that meta-judgment, I think there are some cases in which you make the actual judgment based on System 1.

Am I correctly understanding your point?

Comment by Julia_Galef on Communicating rationality to the public: Julia Galef's "The Straw Vulcan" · 2011-11-26T22:00:29.974Z · LW · GW

Yup, I went through the same reasoning myself -- I decided on "System 1" and "System 2" for their neutral tone, and also because they're Stanovich's preferred terms.

Comment by Julia_Galef on Communicating rationality to the public: Julia Galef's "The Straw Vulcan" · 2011-11-26T20:52:24.769Z · LW · GW

Good question. My intended meaning was closest to (h). (Although isn't (g) pretty much equivalent?)

Comment by Julia_Galef on New Rationality Blog: 'Measure of Doubt' · 2011-04-01T20:53:58.167Z · LW · GW

Hey, thanks for the shoutout! @SilasBarta -- Yeah, I first encountered the mirror paradox in G&R, but I ended up explaining it differently than Drescher did, drawing on Gardner as well as some discussions with a friend, so I didn't end up quoting Drescher after all. I do like his explanation, though.

Comment by Julia_Galef on Disguised Queries · 2010-03-12T20:01:31.627Z · LW · GW

This was a really clarifying post for me. I had gotten to the point of noticing that "What is X?" debates were really just debates over the definition of X, but I hadn't yet taken the next step of asking why people care about how X is defined.

I think another great example of a disguised query is the recurring debate, "Is this art?" People have really widely varying definitions of "art" (e.g., some people's definition includes "aesthetically interesting," other people's definition merely requires "conceptually interesting") -- and in one sense, once both parties explain how they use the word "art," the debate should resolve pretty quickly.

But of course, since it's a disguised query, the question "Is this art?" should really be followed up with the question "Why does it matter?" As far as I can tell, the disguised query in this case is usually "does this deserve to be taken seriously?" which can be translated in practice into, "Is this the sort of thing that deserves to be exhibited in a gallery?" And that's certainly a real, non-semantic debate. But we can have that debate without ever needing to decide whether to apply the label "art" to something -- in fact, I think the debate would be much clearer if we left the word "art" out of it altogether.

I've elaborated on this topic on Rationally Speaking (http://rationallyspeaking.blogspot.com/2010/03/is-this-art-and-why-thats-wrong.html), where I cite this LW post. Thanks, Eliezer.

Comment by Julia_Galef on Dissolving the Question · 2010-01-02T23:46:04.578Z · LW · GW

Eliezer, you wrote:

But when you're really done, you'll know you're done. Dissolving the question is an unmistakable feeling...

I'm not so sure. There have been a number of mysteries throughout history that were resolved by science, but people didn't immediately feel as if the scientific explanation really resolved the question, even though it does to us now -- like the explanation of light as being electromagnetic waves.

I frequently find it tricky to determine whether a feeling of dissatisfaction indicates that I haven't gotten to the root of a problem, or whether it indicates that I just need time to become comfortable with the explanation. For instance, it feels to me like my moral intuitions are objectively correct rules about how people should and shouldn't behave. Yet my reason tells me that they are simply emotional reactions built into my brain by some combination of biology and conditioning. I've gotten somewhat more used to that fact over time, but it certainly didn't feel at first like it successfully explained why I feel that X is "wrong" or Y is "right."

Comment by Julia_Galef on Two Truths and a Lie · 2009-12-27T18:12:08.794Z · LW · GW

I like the cuteness of turning an old parlor game into a theory-test. But I suspect a more direct and effective test would be to take one true fact, invert it, and then ask your test subject which statement fits their theory better. (I always try to do that to myself when I'm fitting my own pet theory to a new fact I've just heard, but it's hard once I already know which one is true.)

Other advantages of this test over the original one proposed in the post: (1) You don't have to go to the trouble of thinking up fake data (a problematic endeavor, because there is some art to coming up with a realistic-sounding false fact -- and also because you actually have to do some research to make sure that you didn't generate a true fact by accident). (2) Your test subject only has a 1 in 2 shot at guessing right by chance, as opposed to a 2 in 3 shot.
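To make those chance baselines concrete, here's a minimal simulation sketch (a hypothetical illustration, assuming "guessing right" means endorsing a statement that turns out to be true):

```python
import random

TRIALS = 100_000

# Original test: two truths and a lie. A guesser with no real theory
# endorses one of the three statements at random; 2 of the 3 are true.
original_hits = sum(
    random.choice([True, True, False]) for _ in range(TRIALS)
)

# Proposed test: one true fact and its inversion. A guesser with no
# real theory picks one of the two at random; 1 of the 2 is true.
proposed_hits = sum(
    random.choice([True, False]) for _ in range(TRIALS)
)

print(f"Two truths and a lie, chance success: {original_hits / TRIALS:.3f}")  # ~0.667
print(f"Inverted-fact test, chance success:  {proposed_hits / TRIALS:.3f}")  # ~0.500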