Podcast: What's Wrong With LessWrong

post by Alfred · 2022-12-21T07:06:08.728Z · LW · GW · 11 comments

This is a link post for https://youtu.be/Fl_ZQlYKdz4

if you needed a reason to exit this subculture, here are several dozen, including the cult of genius, ingroup overtrust, insularity, out-of-touchness, lack of rigor, and the lack of a sharp culture in the current environment.

timestamps are included so that you can skip between the video's topics.

finally, there are 34 references in the description. I would have included more, but I hit the character limit.

11 comments

Comments sorted by top scores.

comment by starship006 (cody-rushing) · 2022-12-21T07:26:42.385Z · LW(p) · GW(p)

I'm trying to engage with your criticism in good faith, but I can't help but get the feeling that a lot of your critiques here boil down to "you guys are weird": your privacy norms are weird, your vocabulary is weird, you present yourselves as weird, etc. And while I may agree that LessWrongers sometimes feel out of touch with reality, this criticism, coupled with some of the other object-level disagreements you were making, seems to overlook the many benefits that LessWrong provides; I can personally attest that I've improved my thinking as a whole because of this site. If that makes me a little weird, then I'll accept that as a way to help me shape the world as I see fit. And hopefully I can become a little less weird through the same rationality skills this site helps develop.

Replies from: Alfred
comment by Alfred · 2022-12-21T07:55:58.592Z · LW(p) · GW(p)

1. https://www.amazon.com/Cambridge-Handbook-Reasoning-Handbooks-Psychology/dp/0521531012

2. https://www.amazon.com/Rationality-What-Seems-Scarce-Matters/dp/B08X4X4SQ4

3. https://www.amazon.com/Cengage-Advantage-Books-Understanding-Introduction/dp/1285197364

4. https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555

5. https://www.amazon.com/Predictably-Irrational-audiobook/dp/B0014EAHNQ

6. https://www.amazon.com/BIASES-HEURISTICS-Collection-Heuristics-Everything/dp/1078432317

7. https://www.amazon.com/Informal-Logical-Fallacies-Brief-Guide/dp/0761854339

there is very little about rationality to be learned here that cannot be learned from these texts.

Replies from: Viliam
comment by Viliam · 2022-12-21T13:33:05.929Z · LW(p) · GW(p)

I agree that LessWrong is not the exclusive source of most/all of the ideas found here.

I think this means less than (I think) you are trying to suggest. For example, right now I am reading a textbook on set theory, and although I am pretty sure that 99-100% of the information there could also be found in other sources, that is not a sufficient reason to throw the textbook away. There are other possible advantages, such as being more accessible, putting all the information in the same place, and showing the connections.

Another important thing is what is not included. Like, if you show me a set of people who read "Thinking Fast and Slow" and "Predictably Irrational", I would expect that many of them have also enjoyed reading Malcolm Gladwell and Nassim Taleb, etc. You know, these things are a genre, and yes, if you read a lot of this genre, you will be familiar with the good ideas in the LW Sequences. But the genre also comes with a lot of strong beliefs that do not replicate. (Talking for 10 minutes with someone who reads Taleb's tweets regularly makes me want to scream.)

Then, there is the community. Reading the books is nice, but then I typically want to discuss them with someone. In the extreme case, to discuss how the things we learned could be applied to improve our everyday lives. (And again, what is excluded is just as important as what is included.)

Replies from: Alfred
comment by Alfred · 2022-12-22T21:59:19.641Z · LW(p) · GW(p)

But the genre also comes with a lot of strong beliefs that do not replicate. (Talking for 10 minutes with someone who reads Taleb's tweets regularly makes me want to scream.)

 

By this criterion, absolutely no one should be using LessWrong as a vehicle for learning. The Malcolm Gladwell reader you proposed might have been a comparable misinformation vehicle in, say, 2011, but as of 2022 LessWrong is by a chasmic margin worse about this. It's debatable whether the average LessWrong user even reads what they're talking about anymore.

I can name a real-life example: in a local discord of about 100 people, Aella argued that the MBTI is better understood holistically under the framework of Jungian psychology, and that looking at the validity of each subtest (e.g. "E/I", "N/S", "T/F", "J/P") is wrongly reductive. This is not just incorrect, it is the opposite of true; it fundamentally misunderstands what psychometric validity even is. I wrote a fairly long correction of this, but I am not sure anyone bothered to read it; most people will take what community leaders say at face value, because the mission statement of the ingroup of LessWrong is "people who are rational", and the thinking goes that someone who is rational, surely, would have taken care of this. (This was not at all the case.)
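To make "the validity of each subtest" concrete, here is a minimal illustrative sketch of one standard per-subtest check (test-retest stability of a dichotomized score). The data are simulated and the numbers are my own illustration, not anything from that discord thread:

```python
# Purely illustrative simulation: a continuous trait score measured twice,
# then dichotomized into a "type letter" at the midpoint.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
true_trait = rng.normal(0.0, 1.0, n)             # latent E/I-style trait
session1 = true_trait + rng.normal(0.0, 0.5, n)  # first administration
session2 = true_trait + rng.normal(0.0, 0.5, n)  # retest

# The continuous score is fairly stable across sessions...
r, _ = stats.pearsonr(session1, session2)

# ...but the dichotomized letter flips for respondents near the cutoff,
# so a noticeable fraction of people "change type" on retest.
same_letter = np.mean((session1 > 0) == (session2 > 0))

print(f"test-retest correlation of continuous score: {r:.2f}")
print(f"proportion keeping the same letter: {same_letter:.2f}")
```

The point is only that each subtest yields a score whose reliability and validity can be measured on its own terms; a "holistic" framing does not exempt the instrument from checks like this.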

I don't think further examples will help, but they are abundant throughout this sphere; there is a reason I spent 30 minutes of that audio debunking the pseudoscientific and even quasi-mystical beliefs common to Alexander Kruel's sphere of influence.

Replies from: Viliam
comment by Viliam · 2022-12-23T18:47:32.165Z · LW(p) · GW(p)

Aella argued that the MBTI is better understood holistically under the framework of Jungian psychology, and that looking at the validity of each subtest (e.g. "E/I", "N/S", "T/F", "J/P") is wrongly reductive. This is not just incorrect, it is the opposite of true; it fundamentally misunderstands what psychometric validity even is.

I 100% agree.

From my perspective, the root of the problem is that we do not have clear boundaries for {rationality, Less Wrong, this kind of stuff}. Because if I tell you "hey, from my perspective Aella is not a community leader, and if she posted such claims on LW they would get downvoted", on the one hand, I sincerely mean it, and this was the reply I originally wanted to write; on the other hand, I would understand if that seemed like a motte-and-bailey definition of the rationalist community... whenever someone says something embarrassing, we quickly redefine the rationalist community to mean "not this person, or at least not this specific embarrassing statement". Especially considering that Aella is known to many people in the rationalist community, and occasionally posts on Less Wrong (not about MBTI though).

I would more strongly object against associating LW with "quasi-mystical beliefs common to Alexander Kruel's sphere of influence". I mean, Alexander Kruel is like the #2 greatest hater of Less Wrong (the #1 place belongs to David Gerard), so it does not make any sense to me to blame his opinions on us.

Scott uses the term "rationalist-adjacent" for people who hang out with actual LW members and absorb some of their memes, but also disagree with them in many ways. So, from my perspective, "rationalist proper" is what is written on Less Wrong (both the articles and the comments), plus most of what Eliezer or Scott write on other places; and "rationalist adjacent" is the comment section of ACX, discord, meetups, etc., including Aella (and also - although I hate to admit it - Alexander Kruel).

I agree that the "rationalist adjacent" sphere is full of pseudoscientific bullshit. Not sure if strictly worse, but definitely not better than Malcolm Gladwell. :(

I am not sure what to do about this. I am pretty sure that many other communities have a similar problem, too. You have a few "hardcore members", and then you have many people who enjoy hanging out with them... how should you properly explain that "these things are popular among the people who hang out with us, but they are actually not popular among the hardcore members"?

It seems to me that people who enjoy reading the { Gladwell, Taleb, Ariely, Kahneman... } genre usually seem happy when they find Less Wrong. It is yet another source of insight porn they can add to their collection and expand their vocabulary with. The only problem is the intolerance of Less Wrong toward some ideas (such as religion), but this is not a problem if you simply cherry-pick the parts you like. Those people usually do not identify as rationalists.

Now we would need a nice way to signal that a given forum is full of people familiar with rationalist memes, but that, despite this, most of them are not rationalists. Such as the ACX comment section, and probably the discord you mentioned.

comment by Viliam · 2022-12-21T14:07:12.890Z · LW(p) · GW(p)

"Most of the sequences are not about rationality, but about things that Eliezer considers cool, such as AI or evolutionary psychology..."

I see that we disagree a lot already at the beginning. You say that rationality is about overcoming biases. I agree with that, but then I am also curious why those biases exist. As I see it, human biases are either random quirks of human evolution (which evolutionary psychology might explain), or something that happens to intelligences in general (and then we should also expect AI to be prone to them).

Also, what are the biases? A frequent approach I have seen is providing a list of "fallacies" that you are supposed to avoid. That is a thing that can easily be abused; if you know enough fallacies, you can dismiss almost everything you do not like (start by rejecting science as a "fallacy of arguing from authority", and then use the rest of the list to shoot down any attempt to rederive the knowledge from scratch).

But maybe more importantly, how does this kind of rationality survive under reflection? Rationality defined as avoiding the list of fallacies in Wikipedia or in some textbook... but why exactly this list, as opposed to making my own list, or maybe using some definition of correct thinking provided by a helpful political or religious institution? Why is Wikipedia or a Cambridge Handbook the correct source of the list of fallacies? Sounds to me like a fallacy of authority, or a fallacy of majority, or one of those other things you want me to avoid doing.

What if there is a fallacy that hasn't been discovered yet? If I proposed one, how would we know whether it should be added to the list? (Is it okay if I edit Wikipedia to add "fallacy of political correctness"? Just kidding.)

"AI is not rationality. AI is machine learning. Just call it what it is."

The recent debates on LW are often about machine learning, because that is the current hot thing. But other approaches were tried in the past, for example expert systems. Who knows, maybe in hindsight we will laugh about everything that was not machine learning as an obvious dead end. Maybe. Anyway, the artificial intelligence mentioned in the Sequences is defined more broadly.

And if machine learning turns out to be the only way to get artificial intelligence... I think it will be quite important to consider its rationality and biases. Especially when it becomes smarter than humans, or if it starts to control a significant part of the economy or the military.

If a sufficiently smart artificial intelligence becomes widely accessible as a smartphone app, so that you can ask it any question (voice recognition, you do not even have to type) and it will give you a good answer with probability 99%, and when it becomes cheap enough that most people can afford it... at that moment, the question of AI rationality and alignment with human values will be more important than human rationality, because at that moment most humans will outsource their thinking to the cloud. (Just like today many people object to learning things, because you can find anything on Google. But on Google, you still need to use the right keywords, separate good info from nonsense, and figure out how to apply this to your current problem. The AI will do all these things for you.)

Replies from: ChristianKl, Alfred
comment by ChristianKl · 2022-12-21T14:39:44.924Z · LW(p) · GW(p)

"AI is not rationality. AI is machine learning. Just call it what it is."

If we take GPT-3 as an example: ChatGPT is able to multiply two four-digit numbers, while earlier versions of GPT-3 couldn't.

The way ChatGPT is able to do that is by automatically switching to thinking through the multiplication step by step instead of trusting its "intuition".

That's very similar to how a human needs to think through this multiplication step-by-step to get the right answer. 
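As a rough illustration of what "step by step" means here (a sketch of explicit long multiplication, not a claim about how any model is implemented internally):

```python
# Write out each partial product before summing, rather than producing
# the answer in a single leap. Assumes non-negative integers.
def long_multiply(a: int, b: int) -> int:
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * (10 ** place)
        print(f"{a} x {digit} x 10^{place} = {partial}")
        total += partial
    print(f"sum of partial products = {total}")
    return total

assert long_multiply(4732, 5861) == 4732 * 5861
```

The intermediate steps are made explicit, which is the behaviour being described above.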

I don't know exactly how they did it, but I think there's a good chance that they provided it with a lot of relevant training data that suggests this reasoning heuristic when asked to multiply two four-digit numbers.

When it comes to moving from the current ChatGPT capabilities to new capabilities I would expect that a good portion of the work is to think about what heuristics it should use to face new problems and then create training data that demonstrate those heuristics. 

This way of thinking about which heuristics should be used is the topic of rationality. Some rationality topics are specific to human minds, but plenty are more general and also important for AI.

comment by Alfred · 2022-12-22T21:28:44.191Z · LW(p) · GW(p)

A bias is an error in weighting or proportion or emphasis. This differs from a fallacy, which is an error in reasoning specifically. Just to make up an example, an attentional bias would be a misapplication of attention -- the famous gorilla experiment -- but there would be no reasoning underlying this error per se. The ad hominem fallacy contains at least implicit reasoning about truth-valued claims.

Yes, it's possible that AI could be a concern for rationality. But AI is an object of rationality; in this sense, AI is like carbon emissions: there is room for applied rationality, absolutely, but it is not rationality itself. People who read about AI through this medium are not necessarily learning about rationality. They may be, but they also may not be. As such, the overfocus on AI is a massive departure from the original subject matter, much as it would be if LessWrong became overwhelmed with ways to reduce carbon emissions.

Anyway -- that aside, I actually don't disagree much at all with most of what you said.

The issue is that when these concerns have been applied to the foundation of a community concerned with the same things, they have been staggeringly wrongheaded and have resulted in disparities between mission statements and practical realities, which is more or less the basis of my objection. I am no stranger to criticizing intellectual communities; I have outright argued that we should expand the federal defunding criteria to include certain major universities such as UC Berkeley itself. For all of the faults that have been levied against academia (and I have been such a critic of these norms that I appear in Tucker Carlson's book "Ship of Fools", p. 130, as a Person Rebelling Against Academic Norms), I have never had a discussion as absurd as the one I had when questioning why MIRI should receive Effective Altruism funding. It was and still is one of the most bizarre and frankly concerning lines of reasoning I've ever experienced, especially when contrasted with EA leaders' positions on addressing homelessness or the drug war. The concept of LessWrong and much of EA is not, on its face, objectionable; what has resulted absolutely is.

Replies from: Viliam
comment by Viliam · 2022-12-23T19:36:45.134Z · LW(p) · GW(p)

why MIRI should receive Effective Altruism funding

I guess the argument is that (a) a superhuman AI will probably be developed soon, (b) whether it is properly aligned with human values or not will have tremendous impact on the future of humanity, and (c) MIRI is one of the organizations that take this problem most seriously.

If you agree with all three parts, then the funding makes sense. If you disagree with any one of them, it does not. At least from a political perspective, it would be better not to talk about funding missions that require belief in several controversial statements to justify them.

This is partially about the plausibility of the claims, and partially about prevention vs. reaction. Other EA charities are reactive: a problem already exists, and we want to solve it. In the case of malaria, it is not about curing the people who are already sick, but about preventing other people from getting sick... but anyway, people sick with malaria already exist.

I was looking for an analogy where humanity spent a lot of resources on prevention, but I actually can't recall any. Even recently with covid, a lot of people had to die first; perhaps at the beginning we could have prevented all this, but precisely because it had not happened yet, it didn't seem important.

comment by TAG · 2022-12-23T20:55:39.583Z · LW(p) · GW(p)

right now I am reading a textbook on set theory, and although I am pretty sure that 99-100% of the information there could also be found in other sources, that is not a sufficient reason to throw the textbook away.

What about there being information in the other books that isn't in the one you're reading?

As far as I know, there isn't some big Manichean battle in the world of set theory, but there are such battles in many other areas.

You can read a book on one of those subjects, and it can create a misleading impression without saying anything false, by omitting the other side of the story.

comment by Alfred · 2022-12-21T07:07:09.893Z · LW(p) · GW(p)

note: if your comment contains the phrase "my tribe" it's best autodeleted on the grounds that you are a clueless dweeb who is part of the problem