Posts

Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" 2015-03-28T09:17:55.577Z · score: 15 (19 votes)
Musk on AGI Timeframes 2014-11-17T01:36:12.012Z · score: 19 (23 votes)

Comments

Comment by artaxerxes on 2017 LessWrong Survey · 2017-09-15T13:13:55.805Z · score: 19 (19 votes) · LW · GW

I did it!

Comment by artaxerxes on Superintelligence discussed on Startalk · 2017-03-20T05:31:18.581Z · score: 1 (1 votes) · LW · GW

The segment on superintelligence starts at 45:00; it's a rerun of a podcast from two years ago. Musk says it's a concern. Bill Nye, commenting on Musk's remarks afterwards, says that we would just unplug it and is dismissive. Neil is similarly skeptical and half-heartedly plays devil's advocate, but clearly agrees with Nye.

Comment by artaxerxes on Open thread, Feb. 13 - Feb. 19, 2017 · 2017-02-16T13:49:02.246Z · score: 1 (1 votes) · LW · GW

I suspect it's possible that it's even more open to being abused by assholes. Or at least, pushing in the direction of "tell" may mean less opportunity for asshole abuse in many cases.

Comment by artaxerxes on Dan Carlin six hour podcast on history of atomic weapons · 2017-02-09T17:47:45.984Z · score: 1 (1 votes) · LW · GW

I've heard good things about Dan Carlin's history podcasts, but I've never been sure which to listen to first. Is this a good choice, does it assume you've heard some of his others, or are other episodes better to start with?

Comment by artaxerxes on Open thread, Jan. 30 - Feb. 05, 2017 · 2017-02-07T09:26:00.524Z · score: 1 (1 votes) · LW · GW

Whose Goodreads accounts do you follow?

Comment by artaxerxes on Open thread, Jan. 30 - Feb. 05, 2017 · 2017-01-31T21:33:17.614Z · score: 9 (9 votes) · LW · GW

If you buy a Humble Bundle these days, it's possible to use their neat sliders to allocate all of the money you're spending towards charities of your choice via the PayPal Giving Fund, including LessWrong favourites like MIRI, SENS and the Against Malaria Foundation. This strikes me as a relatively interesting avenue for charitable giving, considering that it is (at least apparently) as effective per dollar spent as a direct donation to these charities would be.

Contrast this with buying games from the Humble Store, which merely allocates 5% of the money spent to a chosen charity, or using Amazon Smile, which allocates a minuscule 0.5% of the purchase price of anything you buy. While these services are obviously a lot more versatile in terms of the products on offer, they seem to me more like something you set up if you're going to be buying stuff anyway, whereas this appears to be a particular opportunity.

Here are a few examples of the kinds of people for whom I think this might be worthwhile:

  1. People who are interested in video games or comics or whatever else is available in Humble Bundles, who can purchase them entirely guilt-free, knowing that the money is going to organisations they like.

  2. People who are averse to more direct giving and donations for whatever reason, who can support organisations they approve of in a more comfortable, transactional way, similar to buying merchandise.

  3. People who are expected to give gifts as part of a social obligation, and for whom the kinds of products offered in these bundles make appropriate gifts, who can give them while all of the money spent goes to support their pet cause.

Comment by artaxerxes on Open thread, Nov. 14 - Nov. 20, 2016 · 2016-11-15T05:18:03.100Z · score: 6 (6 votes) · LW · GW

Can anyone explain to me what non-religious spirituality means, exactly? I had always thought it was a vague-to-meaningless new age term in that context, but I've been hearing people like Sam Harris use it unironically, and 5+% of LW are apparently "atheist but spiritual" according to the last survey, so I figure it's worth asking in case I'm missing something non-obvious. The Wikipedia page describes a lot of distinct ideas when it isn't impenetrable, so that didn't help. There's one line there that says

The term "spiritual" is now frequently used in contexts in which the term "religious" was formerly employed.

and that's mostly how I'm familiar with its usage as well.

Comment by artaxerxes on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-14T03:01:18.247Z · score: 0 (0 votes) · LW · GW

This is a really good comment, and I would love to hear responses to objections of this flavour from Eliezer etc.

Saying "we haven't had a nuclear exchange with Russia yet, therefor our foreign policy and diplomatic strategy is good" is an obvious fallacy. Maybe we've just been lucky.

I mean, it's less about whether or not current policy is good than about trying to work out how likely it is that policies resulting from Trump's election will be worse. You can presuppose that current policies are awful and still think that Trump is likely to make things much worse.

Comment by artaxerxes on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-14T02:38:59.134Z · score: 2 (2 votes) · LW · GW

Like, reading through Yudkowsky's stuff, his LW writings and HPMOR, there is the persistent sense that he is 2 guys.

One guy is like "Here are all of these things you need to think about to make sure that you are effective at getting your values implemented". I love that guy. Read his stuff. Big fan.

Other guy is like "Here are my values!" That guy...eh, not a fan. Reading him you get the idea that the whole "I am a superhero and I am killing God" stuff is not sarcastic.

It is the second guy who writes his facebook posts.

Yes, I agree with this sentiment and am relieved someone else communicated it so I didn't have to work out how to phrase it.

I don't share (and I don't think my side shares), Yudkowsky's fetish for saving every life. When he talks about malaria nets as the most effective way to save lives, I am nodding, but I am nodding along to the idea of finding the most effective way to get what you want done, done. Not at the idea that I've got a duty to preserve every pulse.

I don't think Yudkowsky thinks malaria nets are the best use of money anyway, even if they are currently the clearest estimate of where to put your money in order to maximise lives saved in the short term. In that sense I don't think you disagree with him: he doesn't fetishize preserving pulses either. Or at least, that's what I remember reading. The first thing I could find corroborating that model of his viewpoint is his interview with Horgan.

There is a conceivable world where there is no intelligence explosion and no superintelligence. Or where, a related but logically distinct proposition, the tricks that machine learning experts will inevitably build up for controlling infrahuman AIs carry over pretty well to the human-equivalent and superhuman regime. Or where moral internalism is true and therefore all sufficiently advanced AIs are inevitably nice. In conceivable worlds like that, all the work and worry of the Machine Intelligence Research Institute comes to nothing and was never necessary in the first place, representing some lost number of mosquito nets that could otherwise have been bought by the Against Malaria Foundation.

There’s also a conceivable world where you work hard and fight malaria, where you work hard and keep the carbon emissions to not much worse than they are already (or use geoengineering to mitigate mistakes already made). And then it ends up making no difference because your civilization failed to solve the AI alignment problem, and all the children you saved with those malaria nets grew up only to be killed by nanomachines in their sleep. (Vivid detail warning! I don’t actually know what the final hours will be like and whether nanomachines will be involved. But if we’re happy to visualize what it’s like to put a mosquito net over a bed, and then we refuse to ever visualize in concrete detail what it’s like for our civilization to fail AI alignment, that can also lead us astray.)

I think that people who try to do thought-out philanthropy, e.g., Holden Karnofsky of Givewell, would unhesitatingly agree that these are both conceivable worlds we prefer not to enter. The question is just which of these two worlds is more probable as the one we should avoid. And again, the central principle of rationality is not to disbelieve in goblins because goblins are foolish and low-prestige, or to believe in goblins because they are exciting or beautiful. The central principle of rationality is to figure out which observational signs and logical validities can distinguish which of these two conceivable worlds is the metaphorical equivalent of believing in goblins.

I think it’s the first world that’s improbable and the second one that’s probable. I’m aware that in trying to convince people of that, I’m swimming uphill against a sense of eternal normality – the sense that this transient and temporary civilization of ours that has existed for only a few decades, that this species of ours that has existed for only an eyeblink of evolutionary and geological time, is all that makes sense and shall surely last forever. But given that I do think the first conceivable world is just a fond dream, it should be clear why I don’t think we should ignore a problem we’ll predictably have to panic about later. The mission of the Machine Intelligence Research Institute is to do today that research which, 30 years from now, people will desperately wish had begun 30 years earlier.

Also, on this:

Yes, electing Hillary Clinton would have been a better way to ensure world prosperity than electing Donald Trump would. That is not what we are trying to do. We want to ensure American prosperity.

Especially here, I'm pretty sure Eliezer is more concerned about general civilisational collapse and other globally negative outcomes, which he sees as non-trivially more likely with Trump as president. I don't think this is so much a difference in values, specifically in how much each of you values each level of the concentric circles of groups around you. At the very least, I don't think he would agree that a Trump presidency is likely to result in better American prosperity than a Clinton one.

I just want to point out that Yudkowsky is making the factual mistake of modeling us as being shitty at achieving his goals, when in truth we are canny at achieving our own.

I think this is probably not what's going on. I honestly think Eliezer is taking a bigger-picture view, in the sense that he is more concerned about an increased probability of doomsday scenarios and other outcomes unambiguously bad for most human goals. That's the message I got from his Facebook posts, anyway.

Comment by artaxerxes on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-12T15:50:07.328Z · score: 3 (3 votes) · LW · GW

LessWrong has, if anything, made me more able to derive excitement and joy from minor things, so if I were you I would check whether LW is really to blame, or whether other factors are causing this problem.

Comment by artaxerxes on September 2016 Media Thread · 2016-09-04T02:28:40.523Z · score: 0 (0 votes) · LW · GW

You didn't link to your MAL review for The Wind Rises!

Comment by artaxerxes on September 2016 Media Thread · 2016-09-04T02:25:14.076Z · score: 2 (2 votes) · LW · GW

Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity by Holden Karnofsky. Somehow missed this when it was posted in May.

Compare, for example, Thoughts on the Singularity Institute (SI), one of the most highly upvoted posts ever on LessWrong.

Edit: See also Some Key Ways in Which I've Changed My Mind Over the Last Several Years

Comment by artaxerxes on Open Thread, Aug. 1 - Aug 7. 2016 · 2016-08-01T04:03:50.211Z · score: 1 (1 votes) · LW · GW

What's the worst-case scenario involving climate change, given that for some reason no large-scale wars occur due to the instability it contributes?

Climate change is very mainstream, with plenty of people and dollars working on the issue. LW and LW-adjacent groups discuss many causes that are thought to be higher impact and have more room for attention.

But I realised recently that my understanding of climate-change-related risks could probably be better, and I can't easily compare the scale of those risks to other causes. In particular, I'm interested in estimates of metrics such as lives lost, economic cost, and the like.

If anyone can give me a rundown or point me in the right direction that would be appreciated.

Comment by artaxerxes on April 2016 Media Thread · 2016-04-08T06:11:36.359Z · score: 1 (1 votes) · LW · GW

Sure, but that doesn't change all the tax he evaded.

Comment by artaxerxes on April 2016 Media Thread · 2016-04-06T11:47:45.059Z · score: 2 (2 votes) · LW · GW

There is this.

Comment by artaxerxes on April 2016 Media Thread · 2016-04-06T11:45:25.703Z · score: 1 (1 votes) · LW · GW

Not to mention all that tax evasion never actually got resolved.

Comment by artaxerxes on Open Thread March 28 - April 3 , 2016 · 2016-04-01T01:29:16.094Z · score: 7 (7 votes) · LW · GW

CGP Grey has read Bostrom's Superintelligence.

Transcript of the relevant section:

Q: What do you consider the biggest threat to humanity?

A: Last Q&A video I mentioned opinions and how to change them. The hardest changes are the ones where you're invested in the idea, and I've been a techno-optimist 100% all of my life, but [Superintelligence: Paths, Dangers, Strategies] put a real asterisk on that in a way I didn't want. And now Artificial Intelligence is on my near term threat list in a deeply unwelcome way. But it would be self-delusional to ignore a convincing argument because I don't want it to be true.

I like how this response describes motivated cognition, the difficulty of changing your mind, and the Litany of Gendlin.

He also apparently discusses this topic on his podcast, and links to the amazon page for the book in the description of the video.

Grey's video about technological unemployment was pretty big when it came out, and it seemed to me at the time that he wasn't far off from realising that there were other, rather plausible implications of increasing AI capability as well, so it's cool to see that it happened.

Comment by artaxerxes on Open Thread March 28 - April 3 , 2016 · 2016-03-31T10:39:17.301Z · score: 3 (3 votes) · LW · GW

This exists, at least.

Comment by artaxerxes on Lesswrong 2016 Survey · 2016-03-27T07:42:08.188Z · score: 36 (36 votes) · LW · GW

Took it!

It ended somewhat more quickly this time.

Comment by artaxerxes on Lesswrong 2016 Survey · 2016-03-27T07:09:56.999Z · score: 3 (3 votes) · LW · GW

Typo in question 42:

Yes but I don't think it's logical conclusions apply for other reasons

Comment by artaxerxes on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T06:16:26.825Z · score: 2 (2 votes) · LW · GW

Dawkins' The Greatest Show on Earth is pretty comprehensive. The shorter the work compared to that, the more you risk missing widely held misconceptions people have.

Comment by artaxerxes on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T06:58:41.762Z · score: 6 (6 votes) · LW · GW

Not a guide, but I think the vocabulary you use matters a lot. Try tabooing 'rationality'; the word itself mindkills some people straight to straw Vulcan, etc. Do the same with any other words that have the same effect.

Comment by artaxerxes on Open thread, December 7-13, 2015 · 2015-12-12T03:46:03.446Z · score: 0 (0 votes) · LW · GW

I recall being taught to argue towards the predetermined point of view in schools and extra-curricular activities like debating. Is that counterproductive or suboptimal?

This has been talked about before. One suggestion is to not make it a habit.

Comment by artaxerxes on Open thread, December 7-13, 2015 · 2015-12-12T03:37:46.189Z · score: 0 (0 votes) · LW · GW

Could you without intentionally listening to music for 30 days?

Can you rephrase this?

Comment by artaxerxes on December 2015 Media Thread · 2015-12-02T20:41:43.252Z · score: 1 (1 votes) · LW · GW

Yeah, I pretty much agree, but the important point is that any superintelligent-ant-hive hypothesis would have to be at least as plausible and relevant to the topic of the book as Hanson's ems to make it in. Note that Bostrom dismisses brain-computer interfaces as a superintelligence pathway fairly quickly.

Comment by artaxerxes on December 2015 Media Thread · 2015-12-02T16:44:29.992Z · score: 1 (1 votes) · LW · GW

This interlude is included despite the fact that Hanson’s proposed scenario is in contradiction to the main thrust of Bostrom’s argument, namely, that the real threat is rapidly self-improving A.I.

I can't say I agree with your reasoning about why Hanson's ideas are in the book. I think the book's content is written with accuracy in mind first and foremost, and I think Hanson's ideas are there because Bostrom thinks they're a genuinely plausible direction the future could go, especially if recursively self-improving AI of the kind traditionally envisioned turns out to be unlikely, difficult, or impossible for whatever reason. I don't think those ideas are there in an effort to mine the halo effect.

And really, the book's main thrust is in the title: Paths, Dangers, Strategies. Even if these outcomes are not necessarily mutually exclusive (including the possibility of singletons forming out of initially multipolar outcomes, as discussed from p. 176 onwards), talking about potential pathways is very obviously relevant, I would have thought.

Comment by artaxerxes on November 2015 Media Thread · 2015-11-11T21:28:25.327Z · score: 0 (0 votes) · LW · GW

Announced? Orokamonogatari came out in October.

Comment by artaxerxes on Rationality Quotes Thread November 2015 · 2015-11-08T17:45:22.331Z · score: 3 (3 votes) · LW · GW

This is a great quote, but even more so than Custers and Lees, I feel like we need someone not so much on the front lines as someone to win the whole war - maybe Lincoln, though my knowledge of the American Civil War is poor. Preventing death from most relevant causes (aging, infectious disease, etc.) seems within reach before the end of the century, as a conservative guess. Winning that war sooner means that society will no longer need so many generals, Lees, Custers or otherwise.

Comment by artaxerxes on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-06T09:32:47.627Z · score: 0 (0 votes) · LW · GW

It's rather depressing that progress of this kind seems so impossible. Thanks for the link.

Comment by artaxerxes on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-06T09:29:14.382Z · score: 0 (0 votes) · LW · GW

The videos you linked were already accounted for. The vid of the Superintelligence panel with Musk, Soares, Russell, Bostrom, etc. is the one that's been missing for so long.

Comment by artaxerxes on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-01T12:47:52.639Z · score: 4 (4 votes) · LW · GW

There are still plenty of videos from EA Global nowhere to be found on the net. If anyone could point me in the direction of, for example, the superintelligence panel with Elon Musk, Nate Soares, Stuart Russell, and Nick Bostrom that'd be great.

Why has the organisation of uploading these videos been so poor? I am assuming the intention is not to hide away any record of what went on at these events. Only the EA Global Melbourne vids are currently easily findable.

Comment by artaxerxes on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-10-30T04:24:07.114Z · score: 0 (0 votes) · LW · GW

https://www.reddit.com/r/nootropics is a decent start, also check the sidebar

Comment by artaxerxes on Open thread, Oct. 19 - Oct. 25, 2015 · 2015-10-21T05:55:04.035Z · score: 4 (4 votes) · LW · GW

This post discusses MIRI, and what they can do with funding of different levels.

What are you looking for, more specifically?

Comment by artaxerxes on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-14T10:49:01.502Z · score: 2 (2 votes) · LW · GW

You could be depressed.

Comment by artaxerxes on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-08T17:41:31.927Z · score: 13 (13 votes) · LW · GW

I think it's great; the ideas getting out is what matters. Whether Eliezer gets some credit or not, the whole reason he said this stuff in the first place was so that people would understand it, repeat it and spread the concept, and that's exactly what's going on. If anything, Eliezer was trying very early on to optimize for the most convincing and easily understandable phrases, analogies, arguments, etc., so the fact that other people are repeating them, or perhaps convergently evolving towards them, shows that he did a good job.

And really, if Eliezer's status as a non-formally-educated autodidact or whatever else is problematic or works against the spread of the information, then I don't see a problem with not crediting him in every single reddit post and news article. The priority is presumably ensuring greater awareness of the problems, and part of that is having prestigious people like Stephen Hawking deliver the info. It's not like there aren't dated posts and PDFs online showing Eliezer saying this stuff more than a decade ago; people can find out how early he was on this train.

Comment by artaxerxes on October 2015 Media Thread · 2015-10-04T19:49:18.680Z · score: 0 (0 votes) · LW · GW

I was fairly unimpressed by the first few episodes of season 1, is season 2 significantly better?

Comment by artaxerxes on Instrumental Rationality Questions Thread · 2015-08-26T10:34:07.343Z · score: 1 (1 votes) · LW · GW

As long as the other person is still pretty happy, it doesn't really matter too much if you're happier. That's not to say that things can't go wrong, but it's not a hard rule that people must be of equal happiness levels in order to be together successfully.

Comment by artaxerxes on Why people want to die · 2015-08-25T09:42:26.392Z · score: 2 (4 votes) · LW · GW

If you want to know the specific numbers of how unusual you are compared to the rest of LW, 18.2% of LW is married, and 18.4% of LW has at least 1 kid, according to the most recent survey results.

Comment by artaxerxes on Open Thread - Aug 24 - Aug 30 · 2015-08-24T12:15:16.239Z · score: 1 (1 votes) · LW · GW

From this description alone, it sounds like a good idea if you don't plan on having kids.

Comment by artaxerxes on Open Thread - Aug 24 - Aug 30 · 2015-08-24T12:11:31.303Z · score: 0 (0 votes) · LW · GW

Wonderful, thanks.

Comment by artaxerxes on Open Thread - Aug 24 - Aug 30 · 2015-08-24T11:16:28.384Z · score: 1 (1 votes) · LW · GW

I'll ask again: where are the videos from the various EA Global events?

If the talks haven't been recorded, or even if they just haven't been uploaded yet, I feel like the EA community is probably missing out on a fair amount of publicity, views and attention by not having their relevant content out and viewable promptly, while it's all still fresh. This is especially true since I hear some big names went to some of these events.

I am assuming effective altruists want their point of view to spread and become more popular. It seems to me that they are not being particularly effective at making this happen so far. The events themselves may or may not have been good enough to make up for this, but how would I know?

Comment by artaxerxes on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-17T11:11:48.068Z · score: 6 (6 votes) · LW · GW

Where are all the vids of the speeches and other content from the EA Global events? Surely they were recorded?

Comment by artaxerxes on We really need a "cryonics sales pitch" article. · 2015-08-06T14:03:07.301Z · score: 0 (0 votes) · LW · GW

Where's the answer?

Comment by artaxerxes on Immortality Roadmap · 2015-07-29T03:28:07.733Z · score: 2 (2 votes) · LW · GW

I like these maps. Keep up the good work!

The pdf is fine, but the image version of this map is unreadable. It might be worth making a higher resolution version of it.

Comment by artaxerxes on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-28T19:29:47.459Z · score: 8 (8 votes) · LW · GW

You said

Do you have suggestions for either: a. dealing with it b. getting people to answer the right question

I said

I just want to yell at people; "answer the question I asked! not the one you felt like answering that was similar to the one I asked because you thought that was what I wanted to hear about or ask about!".

Try this, except instead of yelling, say it nicely.

and I also said

One thing you could do as an example is some variation of "oh sorry, I must have phrased the question poorly, I meant (the question again, perhaps phrased differently or with more detail or with example answers or whatever)".

So I answered the question in detail.

Perhaps you aren't very good at recognizing when someone has answered your question? Obviously this is only one data point so we can't look into it too heavily, but we have at least established that this is something you are capable of doing.

Comment by artaxerxes on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-28T00:16:37.247Z · score: 3 (3 votes) · LW · GW

Yeah, this happens.

I just want to yell at people; "answer the question I asked! not the one you felt like answering that was similar to the one I asked because you thought that was what I wanted to hear about or ask about!".

Try this, except instead of yelling, say it nicely.

One thing you could do as an example is some variation of "oh sorry, I must have phrased the question poorly, I meant (the question again, perhaps phrased differently or with more detail or with example answers or whatever)".

Comment by artaxerxes on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-16T06:09:10.650Z · score: 0 (0 votes) · LW · GW

Wouldn't "inter-family" be between different families? I'm not sure, but "intra-family" makes more sense to me, if you're trying to refer to incestuous relationships. A quick google search suggests the same.

I'm not sure what society will do, but I don't see anything wrong with incest or incestuous relationships in general, and I don't believe they should be illegal. That's not to say that incestuous relationships can't have something wrong with them, but from what I can tell, when they do, it's for reasons separate from the fact that they are incestuous (paedophilia, abuse, power imbalance, whatever).

Comment by artaxerxes on Open Thread, Jun. 22 - Jun. 28, 2015 · 2015-06-22T04:38:02.690Z · score: 20 (20 votes) · LW · GW

A short, nicely animated adaptation of The Unfinished Fable of the Sparrows from Bostrom's book was made recently.

Comment by artaxerxes on Stupid Questions June 2015 · 2015-05-31T03:56:28.724Z · score: 1 (3 votes) · LW · GW

Maybe not for that reason. But the opportunity cost of having kids, for example in terms of time and money, is pretty high. You could easily make an argument that those resources would be more effectively used for higher impact activities.

The money as dead children analogy might be particularly useful here, since we're comparing kids with kids.

Comment by artaxerxes on Leaving LessWrong for a more rational life · 2015-05-22T01:30:54.311Z · score: 8 (10 votes) · LW · GW

Show me the evidence that the impact of CFAR instruction has higher expected humanitarian benefit dollar-for-dollar than an equivalent donation to SENS, or pick-your-favorite-charity.

I don't think it does, but I don't think we were comparing CFAR to SENS or other charities endorsed by effective altruists; I was contesting the claim that CFAR is comparable to religions and mumbo jumbo:

Is it correct to compare CFAR with religions and mumbo jumbo?

I think it is.

I mean, they're literally basing their curricula on cognitive science. If you look at their FAQ, they give examples of the kinds of scientifically grounded, evidence-based methods they use for improving rationality:

While research on cognitive biases has been booming for decades, we’ve spent more time identifying biases than coming up with ways to evade them.

There are a handful of simple techniques that have been repeatedly shown to help people make better decisions. “Consider the opposite” is a name for the habit of asking oneself, “Is there any reason why my initial view might be wrong?” That simple, general habit has been shown to be useful in combating a wide variety of biases, including overconfidence, hindsight biases, confirmation bias, and anchoring effects [see Arkes, 1991; Arkes, Faust, Guilmette, & Hart, 1988; Koehler, 1994; Koriat, Lichtenstein, & Fischhoff, 1980; Larrick, 2004; Mussweiler, Strack, & Pfeiffer, 2000].

Most of us sometimes fall prey to the planning fallacy, where we underestimate the amount of time it’s going to take us to complete a project. But one strategy that’s been shown to work, and which we teach in our workshops, is “reference class forecasting,” which entails asking yourself how long it’s taken you, or people you know, to complete similar tasks [see Buehler, Griffin, & Ross (2002)].

A third technique which has strong empirical backing (though it is not typically classified under “de-biasing” research) is cognitive therapy, which has successfully improved participants’ depression and anxiety by the use of rational thinking habits like asking oneself, “What evidence do I have for that assumption?” Cognitive therapy in particular is an encouraging demonstration that simple rational thinking techniques can become automatic and regularly used, to great effect.

So I just don't know where someone is coming from when they suggest that CFAR's methods are no better grounded in evidence than religion and mumbo jumbo, when CFAR grounds its methods on evidence as a rule, while religion and mumbo jumbo are grounded on other things, like universal quantum consciousness and gods.