Comment by annasalamon on Rule Thinkers In, Not Out · 2019-02-28T00:02:57.342Z · score: 102 (39 votes) · LW · GW

I used to make the argument in the OP a lot. I applied it (among other applications) to Michael Vassar, about whom many people complained to me (“I can’t believe he made obviously-fallacious argument X; why does anybody listen to him”), and whom I encouraged them to continue listening to anyhow. I now regret this.

Here are the two main points I think past-me was missing:

1. Vetting and common knowledge creation are important functions, and ridicule of obviously-fallacious reasoning plays an important role in discerning which thinkers can (or can’t) help fill these functions.

(Communities — like the community of physicists, or the community of folks attempting to contribute to AI safety — tend to take a bunch of conclusions for granted without each-time-reexamining them, while trying to add to the frontier of knowledge/reasoning/planning. This can be useful, and it requires a community vetting function. This vetting function is commonly built via having a kind of “good standing” that thinkers/writers can be ruled out of (and into), and taking a claim as “established knowledge that can be built on” when ~all “thinkers in good standing” agree on that claim.

I realize the OP kind-of acknowledges this when discussing “social engineering”, so maybe the OP gets this right? But I undervalued this function in the past, and the term “social engineering” seems to me dismissive of a function that in my current view contributes substantially to a group’s ability to produce new knowledge.)

2. Even when a reader is seeking help brainstorming hypotheses (rather than vetting conclusions), they can still be lied-to and manipulated, and such lies/manipulations can sometimes disrupt their thinking for long and costly periods of time (e.g., handing Ayn Rand to the wrong 14-year-old; or, in my opinion, handing Michael Vassar to a substantial minority of smart aspiring rationalists). Distinguishing which thinkers are likely to lie or manipulate is a function more easily fulfilled by a group sharing info that rules thinkers out for past instances of manipulative or dishonest tactics (rather than by the individual listener planning to ignore past bad arguments and to just successfully detect every single manipulative tactic on their own).

So, for example, Julia Galef helpfully notes a case where Steven Pinker straightforwardly misrepresents basic facts about who said what. This is helpful to me in ruling out Steven Pinker as someone who I can trust not to lie to me about even straightforwardly checkable facts.

Similarly, back in 2011, a friend complained to me that Michael would cause EAs to choose the wrong career paths by telling them exaggerated things about their own specialness. This matched my own observations of what he was doing. Michael himself told me that he sometimes lied to people (not his words) and told them that the thing that would most help with AI risk was for them to continue in their present careers anyhow (he said this was useful because that way they wouldn’t rationalize that AI risk must be false). Despite these and similar instances, I continued to recommend people talk to him because I had “ruled him in” as a source of some good novel ideas, and I did this without warning people about the rest of it. I think this was a mistake. (I also think that my recommending Michael led to considerable damage over time, but trying to establish that claim would require more discussion than seems to fit here.)

To be clear, I still think hypothesis-generating thinkers are valuable even when unreliable, and I still think that honest and non-manipulative thinkers should not be “ruled out” as hypothesis-sources for having some mistaken hypotheses (and should be “ruled in” for having even one correct-important-and-novel hypothesis). I just care more about the caveats here than I used to.

Comment by annasalamon on Some Thoughts on My Psychiatry Practice · 2019-01-19T19:08:16.179Z · score: 3 (1 votes) · LW · GW

Thanks for the portraits; I appreciate getting to read them. I'm curious what would happen if you got one of them to read "The Elephant in the Brain". (No idea if it'd be good or bad. Just seems like it might have some chance at causing something different.)

Comment by annasalamon on CFAR 2017 Retrospective · 2017-12-19T20:59:34.864Z · score: 27 (10 votes) · LW · GW

I continue to think CFAR is among the best places to donate re: turning money into existential risk reduction (including this year -- basically because the good we do seems almost linear in the number of free-to-participants programs we can run (because those can target high-impact AI stuff), and because the number of free-to-participants programs we can run is more or less linear in donations within the range in which donations might plausibly take us). If anyone wants my take on how this works, or on our last year or our upcoming year or anything like that, I'd be glad to talk: anna at rationality dot org.

Comment by annasalamon on In the presence of disinformation, collective epistemology requires local modeling · 2017-12-16T20:59:13.387Z · score: 55 (17 votes) · LW · GW

RyanCarey writes:

If you are someone of median intelligence who just wants to carry out a usual trade like making shoes or something, you can largely get by with received wisdom.

AFAICT, this only holds if you're in a stable sociopolitical/economic context -- and, more specifically still, the kind of stable sociopolitical environment that provides relatively benign information-sources. Examples of folks who didn't fall into this category: (a) folks living in eastern Europe in the late 1930s (especially if Jewish, but even if not; regardless of how traditional their trade was); (b) folks living in the Soviet Union (required navigating a non-explicit layer of received-from-underground knowledge); (c) folks literally making shoes during time-periods in which shoe-making was disrupted by the industrial revolution. It is to my mind an open question whether any significant portion of the US/Europe/etc. will fall into the "can get by largely with received wisdom" reference class across the next 10 years. (They might. I just actually can't tell.)

"Flinching away from truth” is often about *protecting* the epistemology

2016-12-20T18:39:18.737Z · score: 89 (87 votes)

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

2016-12-12T19:39:50.084Z · score: 38 (38 votes)
Comment by annasalamon on CFAR's new mission statement (on our website) · 2016-12-10T21:36:19.357Z · score: 0 (0 votes) · LW · GW

This is fair; I had in mind basic high school / Newtonian physics of everyday objects. (E.g., "If I drop this penny off this building, how long will it take to hit the ground?", or, more messily, "If I drive twice as fast, what impact would that have on the kinetic energy with which I would crash into a tree / what impact would that have on how badly deformed my car and I would be if I crash into a tree?").
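(For concreteness, a quick back-of-the-envelope version of those two questions, with a made-up ~50 m building height and neglecting air resistance:

    t = \sqrt{2h/g} \approx \sqrt{(2 \cdot 50 \text{ m}) / (9.8 \text{ m/s}^2)} \approx 3.2 \text{ s}
    KE = \tfrac{1}{2} m v^2, \quad \text{so } v \to 2v \text{ gives } KE \to 4\,KE

i.e., the penny takes roughly three seconds to fall, and driving twice as fast means roughly four times the crash energy for the car and tree to absorb.)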

Comment by annasalamon on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 2016-12-10T18:48:21.092Z · score: 5 (5 votes) · LW · GW

We would indeed love to help those people train.

Comment by annasalamon on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 2016-12-10T18:44:57.202Z · score: 9 (9 votes) · LW · GW

Yes. Or will seriously attempt this, at least. It seems required for cooperation and good epistemic hygiene.

Comment by annasalamon on CFAR's new mission statement (on our website) · 2016-12-10T18:41:35.421Z · score: 1 (1 votes) · LW · GW

Thanks; good point; will add links.

Comment by annasalamon on CFAR's new mission statement (on our website) · 2016-12-10T18:40:10.562Z · score: 1 (1 votes) · LW · GW

In case there are folks following Discussion but not Main: this mission statement was released along with:

Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-10T18:32:31.300Z · score: 1 (1 votes) · LW · GW

Oh, sorry, the two new docs are posted and were linked in the new ETA:

http://lesswrong.com/lw/o9h/further_discussion_of_cfars_focus_on_ai_safety/ and http://lesswrong.com/r/discussion/lw/o9j/cfars_new_mission_statement_on_our_website/

CFAR's new mission statement (on our website)

2016-12-10T08:37:27.093Z · score: 7 (8 votes)
Comment by AnnaSalamon on [deleted post] 2016-12-10T08:31:08.209Z

Apologies; the link is broken and I'm not sure how to edit or delete it; real link is: http://rationality.org/about/mission

Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-10T08:02:59.842Z · score: 3 (3 votes) · LW · GW

Thanks for the thoughts; I appreciate it.

I agree with you that framing is important; I just deleted the old ETA. (For anyone interested, it used to read:

ETA: Having talked just now to people at our open house, I would like to clarify: Even though our aim is explicitly AI Safety...
CFAR does still need an art of rationality, and a community of rationality geeks that support that. We will still be investing at least some in that community. We will also still be running some "explore" workshops of different sorts aiming at patching gaps in the art (funding permitting), not all of which will be deliberately and explicitly backchained from AI Safety (although some will). Play is generative of a full rationality art. (In addition to sometimes targeting things more narrowly at particular high-impact groups, and otherwise more directly backchaining.) (More in subsequent posts.))

I'm curious where our two new docs leave you; I think they make clearer that we will still be doing some rationality qua rationality.

Will comment later re: separate organizations; I agree this is an interesting idea; my guess is that there isn't enough money and staff firepower to run a good standalone rationality organization in CFAR's stead, and also that CFAR retains quite an interest in a standalone rationality community and should therefore support it... but I'm definitely interested in thoughts on this.

Julia will be launching a small spinoff organization called Convergence, facilitating double crux conversations between EAs and EA-adjacent people in, e.g., tech and academia. It'll be under the auspices of CFAR for now but will not have opinions on AI. I'm not sure if that hits any of what you're after.

Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-08T09:23:00.255Z · score: 1 (1 votes) · LW · GW

I think we'll have it all posted by Dec 18 or so, if you want to wait and see. My personal impression is that MIRI and CFAR are both very good buys this year and that it would be best for each to receive some donation (collectively, not from each person); I expect the case for MIRI to be somewhat more straightforward, though.

I'd be happy to Skype/email with you or anyone re: the likely effects of donating to CFAR, especially after we get our posts up.

Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-04T20:33:22.886Z · score: 1 (1 votes) · LW · GW

No, it was just a pun. I believe trying to improve predictive accuracy is better than trying to promote view X (for basically any value of "X"), which is what I was hoping the pun of "Brier Boosting" off "Signal Boosting" would point to; it wasn't a reference to the Brier score as such.
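(Background, in case the term is unfamiliar: the Brier score being punned on is a standard measure of the accuracy of probabilistic forecasts. For binary outcomes it is roughly

    \mathrm{BS} = \frac{1}{N} \sum_{t=1}^{N} (f_t - o_t)^2

where f_t is the forecast probability and o_t is the actual outcome (0 or 1); lower is better.)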

Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-04T20:31:26.758Z · score: 6 (6 votes) · LW · GW

Coming up. Working on a blog post about it; will probably have it up in ~4 days.

Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-03T20:53:02.066Z · score: 0 (0 votes) · LW · GW

Thanks!

CFAR’s new focus, and AI Safety

2016-12-03T18:09:13.688Z · score: 31 (32 votes)
Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-03T06:44:14.158Z · score: 6 (6 votes) · LW · GW

We'll be continuing the workshops, at least for now, with less direct focus, but with probably a similar amount of net development time going into them even if the emphasis is on more targeted programs. This is partly because we value the existence of an independent rationality community (varied folks doing varied things adds to the art and increases its integrity), and partly because we’re still dependent on the workshop revenue for part of our operating budget.

Re: others taking up the mantle: we are working to bootstrap an instructor training; have long been encouraging our mentors and alumni to run their own thingies; and are glad to help others do so. Also, Kaj Sotala seems to be developing some interesting training thingies designed to be shared.

Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-03T04:56:18.572Z · score: 0 (0 votes) · LW · GW

Got any that contrast with "raising awareness" or "outreach"?

Comment by annasalamon on Fact Posts: How and Why · 2016-12-03T02:50:07.867Z · score: 5 (3 votes) · LW · GW

You can basically believe the contents of an intro chemistry textbook; you have to be much more careful with the contents of an intro psychology or sociology textbook.

Comment by annasalamon on CFAR’s new focus, and AI Safety · 2016-12-03T02:23:04.099Z · score: 12 (12 votes) · LW · GW

Interesting idea; shall consider.

Comment by annasalamon on Double Crux — A Strategy for Resolving Disagreement · 2016-12-01T04:18:11.622Z · score: 4 (4 votes) · LW · GW

TAPs = Trigger Action Planning; referred to in the scholarly literature as "Implementation intentions". The Inner Simulator unit is CFAR's way of referring to what you actually expect to see happen (as contrasted with, say, your verbally stated "beliefs".)

Good point re: being careful about implied common knowledge.

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T18:16:33.261Z · score: 13 (15 votes) · LW · GW

I'm empowered to hunt down the relevant people and start conversations about it with people who are themselves empowered to make the shift (e.g., to talk to Nate/Eliezer/MIRI, and to Matt Fallshaw, who runs Trike Apps).

I like the idea of granting domain ownership if we in fact go down the BDFL route.

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T00:51:50.591Z · score: 15 (15 votes) · LW · GW

It is actually not obvious to me that we gain by having upvotes/downvotes be private (rather than making it visible to readers which users upvoted or downvoted which posts, as on Facebook). But I haven't thought about it much.

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T23:24:35.055Z · score: 17 (17 votes) · LW · GW

I think you are wrong about the need for a single Schelling point and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. Lesswrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy, it makes its way around the network and is added to our history.

So: this is subtle. But to my mind, the main issue isn't that ideas won't mostly-percolate. (Yes, lots of folks seem to be referring to Crony Beliefs. Yes, Moloch. Yes, etc.) It's rather that there isn't a process for: creating common knowledge that an idea has percolated; having people feel empowered to author a reply to an idea (e.g., pointing out an apparent error in its arguments) while having faith that if their argument is clear and correct, others will force the original author to eventually reply; creating a common core of people who have a common core of arguments/analysis/evidence they can take for granted (as with Eliezer's Sequences), etc.

I'm not sure how to fully explicitly model it. But it's not mostly about the odds that a given post will spread (let's call that probability "p"). It's more about a bunch of second-order effects from thingies requiring that p^4 or something be large (e.g., that you will both have read the post I want to reference (p), and I'll know you'll have read it (~p^2), and that that'll be true for a large enough fraction of my audience that I don't have to painfully write my post to avoid being misunderstood by the people who haven't read that one post (maybe ~p^3 or something, depending on threshold proportion), for each of the "that one posts" that I want to reference (again, some slightly higher conjunctive requirement, with the probability correspondingly going down)...

I wish I knew how to model this more coherently.
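(One crude way to see the conjunctive effect, with made-up numbers: if p is the chance that a given reader has absorbed a given post, then requirements that stack two, three, or four such conditions scale roughly like powers of p --

    p = 0.7:  \; p^2 \approx 0.49, \; p^3 \approx 0.34, \; p^4 \approx 0.24
    p = 0.95: \; p^2 \approx 0.90, \; p^3 \approx 0.86, \; p^4 \approx 0.81

-- so a central locus that pushes p close to 1 keeps most of the conjunctive value, while a mesh network with only moderate p loses most of it.)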

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T22:29:20.096Z · score: 36 (36 votes) · LW · GW

Re: 1, I vote for Vaniver as LW's BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site; conduct fundraisers on behalf of the site; etc. (He already has the technical admin powers, and has been playing some of this role in a low key way; but I suspect he's been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)

Anyone want to join me in this, or else make a counterproposal?

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T22:23:15.888Z · score: 2 (2 votes) · LW · GW

Ignoring the feasibility question for a minute, I'm confused about whether it would be desirable (if feasible). There are some obvious advantages to making it easy for people to choose what to read. And as a general heuristic, making it easy for people to do things they want to do seems usually good/cooperative. But there are also strong advantages to having common knowledge of particular content/arguments (a canon; a single thread of assumed "yes that's okay to assume and build on"); and making user displays individual (as e.g. Facebook does) cuts heavily against that.

(I realize you weren't talking about what was all-things-considered desirable, only about what feels exciting/boring.)

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T21:11:52.834Z · score: 6 (6 votes) · LW · GW

It seems to me that for larger communities, there should be both: (a) a central core that everyone keeps up on, regardless of subtopical interest; and (b) topical centers that build in themselves, and that those contributing to that topical center are expected to be up on, but that members of other topical centers are not necessarily up on. (So that folks contributing to a given subtopical center should be expected to be keeping up with both that subtopic, and the central canon.)

It seems to me that (a) probably should be located on LW or similar, and that, if/as the community grows, the number of posts within (a) can remain capped by some "keep up withable" number, with quality standards rising as needed.

On the importance of Less Wrong, or another single conversational locus

2016-11-27T17:13:08.956Z · score: 91 (90 votes)
Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T08:39:16.809Z · score: 5 (5 votes) · LW · GW

good intellectual content

Yes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synthesizing/blog posting", or ...) that could themselves be written up, in order to help more folks develop the ability to add good content.

Probably too meta / too clever an idea, but may be worth some individual brainstorms?

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T08:30:43.098Z · score: 8 (8 votes) · LW · GW

(ii) seems good, and worth adding more hands and voices to; it seems to me we can do it in a distributed fashion, and just start adding to LW and going for momentum, though.

sarahconstantin and some others have in fact been doing something like (ii), which was, I suspect, a partial cause of e.g. this post of mine, and of:

Efforts to add to (ii) would I think be extremely welcome; it is a good idea, and I may do more of it as well.

If anyone reading has a desire to revitalize LW, reading some of these or other posts and adding a substantive (or appreciative) comment is another way to encourage thoughtful posting.

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T08:17:29.936Z · score: 14 (14 votes) · LW · GW

Thoughts on RyanCarey's problems list, point by point:

Until we fix the following problems, our efforts to attract writers will be pushing uphill against a strong incentive gradient:

Not sure all of them are "problems", exactly. I agree that incentive gradients matter, though.

Comments on the specific "problems":

1 Posts on LessWrong are far less aesthetically pleasing than is now possible with modern web design, such as on Medium. The design is also slightly worse than on the EA Forum and SSC.

Insofar as 1 is true, it seems like a genuine and simple bug that is probably worth fixing. Matt Graves is I believe the person to talk to if one has ideas or $ to contribute to this. (Or the Arbital crew, insofar as they're taking suggestions.)

2 Posts on LessWrong are much less likely to get shared / go viral than posts on Medium and so have lower expected views. [snip]

The extent to which this is a bug depends on the extent to which posts are aimed at "going viral" / getting shared. If our aim is intellectual generativity, then we do want to attract the best minds of the internet to come think with us, and that does require sometimes having posts go viral. But it doesn't require optimizing the average post for that; it in fact almost benefits from having most posts exist in the relative quiet of a stable community, a community (ideally) with deep intellectual context with which to digest that particular post, such that one can often speak to that community without worrying about whether one's points will be intelligible or palatable to newcomers.

Insofar as writers expect on a visceral level that "number of shares" is the useful thing... people will be pulling against an incentive gradient when choosing LW over Facebook. Insofar as writers come to expect on a visceral level that “adding to this centralized conversational project” tracks value, and that number of shares (from parties who don’t then join the conversation, and who don’t carry on their own good intellectual work elsewhere) is mostly a distraction or blinking light… the incentive may actually come to feel different.

People do sometimes do what is hard when they perceive it to be useful.

3 Comments on LessWrong are more critical and less polite than comments on other sites.

I feel there’s an avoidable part of this, which we should avoid; and then an actually useful part of this, which we should keep (and should endeavor to develop positive affect around — when one accurately perceives the usefulness of a thing, it can sometimes come to feel better). See Sarah’s recent post: On Trying Not To Be Wrong

4 Posts on LessWrong are held in lower regard in academic communities like ML and policy than posts elsewhere, including on Medium.

This seems like a bad sign, though I am not sure what to do about it. I don’t think it’s worth compromising the integrity of our conversation for the sake of outside palatability; cross-posting seems plausible; I’d also like to understand it more.

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T07:01:11.106Z · score: 17 (21 votes) · LW · GW

I am extremely excited about this. I suspect we should proceed trying to reboot Less Wrong, without waiting, while also attempting to aid Arbital in any ways that can help (test users, etc.).

Comment by annasalamon on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T06:25:03.600Z · score: 17 (17 votes) · LW · GW

Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)

This seems right to me. It seems to me that "moderation" in this sense is perhaps better phrased as "active enforcement of community norms of good discourse", not necessarily by folks with admin privileges as such. Also simply explicating what norms are expected, or hashing out in common what norms there should be. (E.g., perhaps there should be a norm of posting all "arguments you want the community to be aware of" to Less Wrong or another central place, and of keeping up with all highly upvoted / promoted / otherwise "single point of coordination-marked" posts to LW.)

I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on less wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.

Comment by annasalamon on Lost Purposes · 2016-11-17T05:23:31.892Z · score: 2 (2 votes) · LW · GW

"More than anything, you must be thinking of carrying your movement through to cutting him. You must thoroughly research this." (Emphasis added.)

I wish I could upvote this post multiple times. Especially the "You must thoroughly research this" part, although also all the examples of institutions failing to accomplish much of anything, even when you might naively have really thought they would (e.g., individual health procedures work; how could large-scale institutions for doing those procedures not work?). I read this post back in 2008 and was like "oh, cool, it's a nice handle for a common little bug that happens to me and other people". I reread it just now, and was more like "Shit, good point, we actually do need to 'thoroughly research' how to avoid lost purposes, or how to carry our movements through to cutting the enemy. For lack of this, the EA and AI Safety movements have perhaps mostly wasted their last several years." Recommending the "thoroughly research" part to anyone who wants to carry on a complex task across a set of tens or hundreds (or thousands) of people without having all the value dissipate into noise.

Comment by annasalamon on Several free CFAR summer programs on rationality and AI safety · 2016-04-16T06:04:31.379Z · score: 0 (0 votes) · LW · GW

Working through these slowly; should be up to date by 4/24.

Several free CFAR summer programs on rationality and AI safety

2016-04-14T02:35:03.742Z · score: 18 (21 votes)
Comment by annasalamon on Several free CFAR summer programs on rationality and AI safety · 2016-04-11T18:51:28.482Z · score: 2 (2 votes) · LW · GW

WAISS, MSFP, CfML, and (for high-school-aged folk) EuroSPARC all have some ability to apply for travel assistance.

Comment by annasalamon on Several free CFAR summer programs on rationality and AI safety · 2016-04-10T19:02:45.136Z · score: 1 (1 votes) · LW · GW

Alas, yes; I found that unfortunate as well, since I, too, had wanted to attend both!

Consider having sparse insides

2016-04-01T00:07:07.777Z · score: 12 (17 votes)
Comment by annasalamon on Why CFAR's Mission? · 2016-02-01T01:26:08.798Z · score: 1 (1 votes) · LW · GW

The fundraiser closes today at midnight Pacific time; if you've been planning to donate, now is the moment. Marginal funds seem to me to be extremely impactful this year; I'd be happy to discuss. http://rationality.org/donate-2015/

Comment by annasalamon on Why CFAR's Mission? · 2016-01-26T09:02:01.613Z · score: 2 (2 votes) · LW · GW

It feels like there is an implicit assumption in CFAR's agenda that most of the important things are going to happen in one or two decades from now.

I don't think this; it seems to me that the next decade or two may be pivotal, but they may well not be, and the rest of the century matters quite a bit as well in expectation.

There are three main reasons we've focused mainly on adults:

  1. Adults can contribute more rapidly, and so can be part of a process of compounding careful-thinking resources in a shorter-term way. E.g. if adults are hired now by MIRI, they improve the ratio of thoughtfulness within those thinking about AI safety, and this can in turn impact the culture of the field, the quality of future years’ research, etc.

  2. For reasons resembling (1), adults provide a faster “grounded feedback cycle”. E.g., adults who come in with business or scientific experience can tell us right away whether the curricula feel promising to them; students and teens are more likely to be indiscriminately enthusiastic.

  3. Adults can often pay their own way at the workshops; children can’t; we therefore cannot afford to run very many workshops for kids until we somehow acquire either more donations, or more financial resources in some other way.

Nevertheless, I agree with you that programs targeting children can be higher impact per person and are extremely worthwhile in the medium- to long-run. This is indeed part of the motivation for SPARC, and expanding such programs is key to our long-term aims; marginal donations are key to our ability to do these quickly, and not just eventually.

Comment by annasalamon on The correct response to uncertainty is *not* half-speed · 2016-01-16T02:19:39.903Z · score: 3 (5 votes) · LW · GW

It's true there are situations in which this isn't the case, but I think they're rare enough that it's worth acknowledging the value of hesitation in many cases and trying to be clear about distinguishing valid from invalid hesitation.

It seems to me that thinking through uncertainties and scenarios is often really really important, as is making specific safeguards that will help you if your model turns out to be wrong; but I claim that there is a different meaning of "hesitation" that is like "keeping most of my psyche in a state of roadblock while I kind-of hang out with my friend while also feeling anxious about my paper", or something, that is very different from actually concretely picturing the two scenarios, and figuring out how to create an outcome I'd like given both possibilities. I'm not expressing it well, but does the distinction I am trying to gesture at make sense?

Comment by annasalamon on The correct response to uncertainty is *not* half-speed · 2016-01-16T02:16:36.851Z · score: 4 (6 votes) · LW · GW

If you take a weighted sum of (75% likely 60 mph forward) + (25% likely 60 mph backward), you get (30 mph forward).
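(Spelled out, counting backward as negative velocity:

    0.75 \cdot (+60 \text{ mph}) + 0.25 \cdot (-60 \text{ mph}) = 45 - 15 = +30 \text{ mph}

i.e., the expected velocity is 30 mph forward, even though neither hypothesis recommends driving at 30 mph.)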

Stopping briefly to choose a plan might've been sensible, if it was easier to think while holding still; stopping after that (I had no GPS or navigation ability) wouldn't've helped; I had to proceed in some direction to find out where the hotel was, and there was no point in doing that not at full speed.

Often a person should hedge bets in some fashion, or should take some action under uncertainty that is different from the action one would take if one were certain of model 1 or of model 2. The point is that "hedging" or "acting under uncertainty" in this way is different in many particulars from the sort of "kind of working" that people often end up accidentally doing, from a naiver sort of average. Often it e.g. involves running info-gathering tests at full speed, one after another. Or e.g., betting "blue" each time in this experiment, while also attempting to form better models.
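(A toy version of the "betting blue" point, assuming the experiment referenced is the usual probability-matching setup in which blue comes up with some fixed probability q > 1/2: always betting blue is correct with probability q, while betting blue only a fraction q of the time is correct with probability q^2 + (1-q)^2 --

    q = 0.7: \; \text{always blue wins } 0.70 \text{ of the time}, \quad \text{matching wins } 0.7^2 + 0.3^2 = 0.58

-- i.e., acting fully on your best current model beats diluting it, even while you keep gathering evidence and updating that model.)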

The correct response to uncertainty is *not* half-speed

2016-01-15T22:55:03.407Z · score: 82 (86 votes)
Comment by annasalamon on Why CFAR's Mission? · 2016-01-11T03:45:25.473Z · score: 5 (5 votes) · LW · GW

Example practice problems and small real problems:

  • Fermi estimation of everyday quantities (e.g., "how many minutes will I spend commuting over the next year? What's the expected savings if I set a 5-minute timer to try to optimize that?" -- a toy worked version appears below the list);
  • Figuring out why I'm averse to work/social task X and how to modify that;
  • Finding ways to optimize recurring task X;
  • Locating the "crux" of a disagreement about a trivia problem ("How many barrels of oil were sold worldwide in 1970?" pursued with two players and no internet) or a harder-to-check problem ("What are the most effective charities today?"), such that trading evidence for the crux produces shifts in one's own and/or the other player's views.
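(The toy worked version of the commute example promised above, with made-up numbers: suppose ~25 minutes each way, ~250 working days per year --

    2 \times 25 \text{ min/day} \times 250 \text{ days/yr} = 12{,}500 \text{ min/yr} \approx 208 \text{ hours}

so even a 10% improvement (~20 hours per year) would repay a 5-minute optimization check hundreds of times over.)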

Larger real problems: Not much to point to as yet. Some CFAR alums are running start-ups, doing scientific research for MIRI or elsewhere, etc. and I imagine make estimates of various quantities in real life, but I don't know of any discoveries of note. Yet.

Comment by annasalamon on Why CFAR's Mission? · 2016-01-11T03:19:35.050Z · score: 3 (3 votes) · LW · GW

If we can get politicians to be more sane about short, medium, and long-term existential risk, it seems like that would be a win-win scenario. What are CFAR's thoughts on that?

Getting politicians to be more sane sounds awesome, but somewhat harder for us and more outside our immediate reach than getting STEM-heavy students to be more sane. I realize I said "who are most likely to actually usefully impact the world", but I should perhaps instead have said "who have high values for the product of [likely to usefully impact the world if they think well] * [comparatively easy for us to assist in acquiring good thinking skills]"; and STEM seems to help with both of these.

Still, we are keen to have aspiring politicians, civil servants, etc. at our workshops; we've found financial aid for several such people in the past, and we'd love it if you or others would recommend our workshops to aspiring rationalists who are interested in this path (as well as in other paths).

Comment by annasalamon on Why CFAR's Mission? · 2016-01-11T03:11:54.517Z · score: 5 (5 votes) · LW · GW

See my reply above. It is worth noting also that there is follow-up after the workshop (emails, group Skype calls, 1-on-1 follow-up sessions, and accountability buddies), and that the workshops are for many an entry-point into the alumni community and a longer-term community of practice (with many participating in the google group; attending our weekly alumni dojo; attending yearly alumni reunions and occasional advanced workshops, etc.).

(Even so, our methodology is not what I would pick if our goal were to help participants memorize rote facts. But for ways of thinking, it seems to work better than anything else we've found. So far.)

Comment by annasalamon on Why CFAR's Mission? · 2016-01-11T02:59:41.506Z · score: 14 (14 votes) · LW · GW

The short answer: because we're trying to teach a kind of thinking rather than a pile of information, and this kind of thinking seems to be more easily acquired in an immersive multi-day context -- especially a context in which participants have set aside their ordinary commitments, and are free to question their normal modes of working/socializing/etc. without needing to answer their emails meanwhile.

Why I think this: CFAR experimented quite a bit with short classes (1 hour, 3 hours, etc.), daylong commuter events, multi-day commuter events, and workshops of varying numbers of days. We ran our first immersive workshop 6 months into our existence, after much experimentation with short formats; and we continued to experiment extensively with varied formats thereafter.

We found that participants were far more likely to fill in high scores to "0 to 10, are you glad you came?" at multi-day residential events. We found also that they seemed to us to engage with the material more fully and acquire the "mindset" of applied rationality more easily and more deeply, and that conversations relaxed, opened up, and became more honest/engaged as each workshop progressed, with participants feeling free to e.g. question whether their apparently insoluble problems were in fact insoluble, whether they in fact wanted to stay in the careers they felt "already stuck" in, whether they could "become a math person after all" or "learn social skills after all" or come to care about the world even if they hadn't been born that way, etc.

We also find we learn more from participants with whom we have more extensive contact, and the residential setting provides that well per unit staff time -- we can really get in the mode of hanging out with a given set of participants, trying to understand where they're at, forming hypotheses that might help, trying those hypotheses real-time in a really data-rich setting, seeing why that didn't quite work, and trying again... And developing better curricula is perhaps CFAR's main focus.

That said, as discussed in our year-end review & fundraiser post, we are planning to attempt more writing, both for the sake of scalable reading and for the sake of more explicitly formulating some of what we think we know. It'll be interesting to see how that goes.

(You might also check out Critch's recent post on why CFAR has focused so much on residential workshops.)

Comment by annasalamon on [Stub] The problem with Chesterton's Fence · 2016-01-09T06:42:02.868Z · score: 1 (1 votes) · LW · GW

Nobody worked hard on making sure people could remove fences without understanding them ..., so this principle is not protected.

This seems false to me. I agree with Stuart's opening suggestion that democracy, free markets, and the Enlightenment more generally are in part designed to make it easy to dismantle historical patterns (e.g. religion, guilds, aristocracy, traditions; one can see this discussion explicitly in e.g. Adam Smith, Locke, Tocqueville, Bacon). Bostrom's "status quo bias" also comes to mind.

Why CFAR's Mission?

2016-01-02T23:23:30.935Z · score: 41 (42 votes)
Comment by annasalamon on Why CFAR? The view from 2015 · 2015-12-21T11:02:24.366Z · score: 8 (8 votes) · LW · GW

Thank you! We appreciate this enormously.

Comment by annasalamon on Why CFAR? The view from 2015 · 2015-12-21T11:01:59.890Z · score: 3 (3 votes) · LW · GW

Thanks!

Comment by annasalamon on Why CFAR? The view from 2015 · 2015-12-21T11:01:55.603Z · score: 6 (6 votes) · LW · GW

Thanks!

Comment by annasalamon on Why CFAR? The view from 2015 · 2015-12-21T08:52:43.782Z · score: 3 (3 votes) · LW · GW

We revised the text some after posting; apologies to anyone who replied to original text that has now been changed.

Comment by annasalamon on Why CFAR? The view from 2015 · 2015-12-21T08:51:25.084Z · score: 4 (4 votes) · LW · GW

Sorry. Original phrasing around how we were now going to measure was pretty bad, I agree. I just edited it. I had been bothered by the very text you quoted, and we had an internal thread where we all discussed that and agreed that the phrases were wrong... but we were slow about that, and you commented while we were discussing! The new text more closely reflects the actual structure of how we've been thinking about it all.

It's a bit tricky to publish a long post with many co-editors without letting something inaccurate through (at least in a sleep-deprived marathon like we very rationally used before publishing this one...; there were a bunch of us working collaboratively on the text...); but we should probably in fact have edited a bit more before posting; anyhow, my apologies for editing this text on you after you commented.

Why startup founders have mood swings (and why they may have uses)

2015-12-09T18:59:51.323Z · score: 52 (54 votes)

Two Growth Curves

2015-10-02T00:59:45.489Z · score: 35 (38 votes)

CFAR-run MIRI Summer Fellows program: July 7-26

2015-04-28T19:04:27.403Z · score: 23 (24 votes)

Attempted Telekinesis

2015-02-07T18:53:12.436Z · score: 85 (84 votes)

How to learn soft skills

2015-02-07T05:22:53.790Z · score: 51 (49 votes)

CFAR fundraiser far from filled; 4 days remaining

2015-01-27T07:26:36.878Z · score: 42 (47 votes)

CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype

2014-12-26T15:33:08.388Z · score: 61 (64 votes)

Upcoming CFAR events: Lower-cost bay area intro workshop; EU workshops; and others

2014-10-02T00:08:44.071Z · score: 17 (18 votes)

Why CFAR?

2013-12-28T23:25:10.296Z · score: 71 (74 votes)

Meetup : CFAR visits Salt Lake City

2013-06-15T04:43:54.594Z · score: 4 (5 votes)

Want to have a CFAR instructor visit your LW group?

2013-04-20T07:04:08.521Z · score: 17 (18 votes)

CFAR is hiring a logistics manager

2013-04-05T22:32:52.108Z · score: 12 (13 votes)

Applied Rationality Workshops: Jan 25-28 and March 1-4

2013-01-03T01:00:34.531Z · score: 20 (21 votes)

Nov 16-18: Rationality for Entrepreneurs

2012-11-08T18:15:15.281Z · score: 25 (30 votes)

Checklist of Rationality Habits

2012-11-07T21:19:19.244Z · score: 123 (121 votes)

Possible meetup: Singapore

2012-08-21T18:52:07.108Z · score: 6 (7 votes)

Center for Modern Rationality currently hiring: Executive assistants, Teachers, Research assistants, Consultants.

2012-04-13T20:28:06.071Z · score: 4 (9 votes)

Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28

2012-03-29T20:48:48.227Z · score: 24 (27 votes)

How do you notice when you're rationalizing?

2012-03-02T07:28:21.698Z · score: 12 (13 votes)

Urges vs. Goals: The analogy to anticipation and belief

2012-01-24T23:57:04.122Z · score: 82 (84 votes)

Poll results: LW probably doesn't cause akrasia

2011-11-16T18:03:39.359Z · score: 47 (50 votes)

Meetup : Talk on Singularity scenarios and optimal philanthropy, followed by informal meet-up

2011-10-10T04:26:09.284Z · score: 4 (7 votes)

[Question] Do you know a good game or demo for demonstrating sunk costs?

2011-09-08T20:07:55.420Z · score: 5 (6 votes)

[LINK] How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects

2011-08-29T05:27:31.636Z · score: 14 (15 votes)

Upcoming meet-ups

2011-06-21T22:28:40.610Z · score: 3 (6 votes)

Upcoming meet-ups:

2011-06-11T22:16:09.641Z · score: 9 (12 votes)

Upcoming meet-ups: Buenos Aires, Minneapolis, Ottawa, Edinburgh, Cambridge, London, DC

2011-05-13T20:49:59.007Z · score: 29 (34 votes)

Mini-camp on Rationality, Awesomeness, and Existential Risk (May 28 through June 4, 2011)

2011-04-24T08:10:13.048Z · score: 39 (42 votes)

Learned Blankness

2011-04-18T18:55:32.552Z · score: 140 (133 votes)

Talk and Meetup today 4/4 in San Diego

2011-04-04T11:40:05.167Z · score: 6 (7 votes)

Use curiosity

2011-02-25T22:23:54.462Z · score: 58 (59 votes)

Make your training useful

2011-02-12T02:14:03.597Z · score: 93 (96 votes)

Starting a LW meet-up is easy.

2011-02-01T04:05:43.179Z · score: 39 (40 votes)

Branches of rationality

2011-01-12T03:24:35.656Z · score: 75 (78 votes)

If reductionism is the hammer, what nails are out there?

2010-12-11T13:58:18.087Z · score: 14 (17 votes)

Anthropologists and "science": dark side epistemology?

2010-12-10T10:49:41.139Z · score: 10 (11 votes)

Were atoms real?

2010-12-08T17:30:37.453Z · score: 61 (74 votes)

Help request: What is the Kolmogorov complexity of computable approximations to AIXI?

2010-12-05T10:23:55.626Z · score: 4 (5 votes)

Goals for which Less Wrong does (and doesn't) help

2010-11-18T22:37:36.984Z · score: 62 (61 votes)

Making your explicit reasoning trustworthy

2010-10-29T00:00:25.408Z · score: 82 (86 votes)

Any LW-ers in Munich, Athens, or Israel?

2010-10-11T06:56:01.092Z · score: 5 (6 votes)