Most smart and skilled people are outside of the EA/rationalist community: an analysis
post by titotal (lombertini) · 2024-07-12T12:13:56.215Z · LW · GW · 36 comments
This is a link post for https://open.substack.com/pub/titotal/p/most-smart-and-skilled-people-are?r=1e0is3&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
Comments sorted by top scores.
comment by Steven Byrnes (steve2152) · 2024-07-12T15:40:39.137Z · LW(p) · GW(p)
(this comment is partly self-plagiarized from here [LW(p) · GW(p)])
Before doing any project or entering any field, you need to catch up on existing intellectual discussion on the subject.
I think this is way too strong. There are only so many hours in a day, and they trade off between
- (A) “try to understand the work / ideas of previous thinkers” and
- (B) “just sit down and try to figure out the right answer”.
It’s nuts to assert that the “correct” tradeoff is to do (A) until there is absolutely no (A) left to possibly do, and only then do you earn the right to start in on (B). People should do (A) and (B) in whatever ratio is most effective for figuring out the right answer. I often do (B), and I assume that I’m probably reinventing a wheel, but it’s not worth my time to go digging for it. And then maybe someone shares relevant prior work in the comments section. That’s awesome! Much appreciated! And nothing went wrong anywhere in this process! See also here.
A weaker statement would be “People in LW/EA commonly err in navigating this tradeoff, by doing too much (B) and not enough (A).” That weaker statement is certainly true in some cases. And the opposite is true in other cases. We can argue about particular examples, I suppose. I imagine that I have different examples in mind than you do.
~~
To be clear, I think your post has large kernels of truth and I’m happy you wrote it.
↑ comment by Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-07-15T10:38:27.070Z · LW(p) · GW(p)
'Before doing any project or entering any field, you need to catch up on existing intellectual discussion on the subject.'
I think this is way too strong.
Still probably directionally correct, though, especially for the typical EA / rationalist, especially in AI safety research (most often relatively young and junior in terms of research experience / taste).
On the tradeoff between (A) “try to understand the work / ideas of previous thinkers” and
(B) “just sit down and try to figure out the right answer”, I think (A) might have already been made significantly easier by chatbots like Claude 3.5, while (B) probably hasn't changed anywhere near as much. I expect the differential to probably increase in the near term, with better LLMs.
↑ comment by TsviBT · 2024-07-15T15:33:38.135Z · LW(p) · GW(p)
especially in AI safety research
This is insanely wrong; it's exactly opposite of the truth. If you want to do something cool in the world, you should learn more stuff from what other humans have done. If, on the other hand, you want to solve the insanely hard engineering/philosophy problem of AGI alignment in time for humanity to not be wiped out, you absolutely should prioritize solving the problem from scratch.
↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-07-16T03:23:05.567Z · LW(p) · GW(p)
insanely wrong
I'd like to offer some serious pushback on the practice of using words like "insane" to describe positions that are not obviously false and which a great number of generally reasonable and well-informed members of the community agree with. It is particularly inappropriate to do that when you have offered no concrete, object-level arguments or explanations [1] for why AI safety researchers should prioritize "solving the problem from scratch."
Adding in phrases like "it's exactly opposite of the truth" and "absolutely" not only fails to help your case, but in my view actively makes things worse by using substance-free rhetoric that misleads readers into thinking the case you are bringing forward is stronger than it actually is or that this matter is so obvious and trivial that they shouldn't even need to think very hard about it before taking your side.
[1] By which I mean, you have included no such arguments in this particular comment, nor have you linked to any other place containing arguments that you agree with on this topic, nor have you offered any explanations in any other comments on this post (I checked, and you have made no other comments on it yet), nor does a cursory look at your profile seem to indicate any posts or recent comments where such ideas might appear.
↑ comment by TsviBT · 2024-07-16T08:19:14.661Z · LW(p) · GW(p)
I disagree re/ the word "insane". The position to which I stated a counterposition is insane.
"it's exactly opposite of the truth" and "absolutely" not only fails to help your case, but in my view actively makes things worse by using substance-free rhetoric that misleads readers into thinking the case you are bringing forward is stronger than it actually is or that this matter is so obvious and trivial that they shouldn't even need to think very hard about it before taking your side.
I disagree; I think I should state my actual position. The phrases you quoted have meaning and convey my position more than if they were removed.
↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-07-16T13:21:20.755Z · LW(p) · GW(p)
I disagree; I think I should state my actual position. The phrases you quoted have meaning and convey my position more than if they were removed.
It does not matter one bit if this is your "actual position". The point of community norms about discourse is that they constrain what is or isn't appropriate to say in a given situation; they function on the meta-level by setting up proper incentives [LW · GW] for users to take into account when crafting their contributions here, independently of their personal assessments about who is right on the object-level. So your response is entirely off-topic, and the fact you expected it not to be is revealing of a more fundamental confusion in your thinking about this matter.
Moderation (when done properly) does not act solely to resolve individual disputes on the basis of purely local characteristics to try to ensure specific outcomes. Remember [LW(p) · GW(p)], the law is not an optimizer [LW · GW], but rather a system informed by principles of mechanism design [? · GW] that generates a specific, legible, and predictable set of real rules [LW · GW] about what is or isn't acceptable, a system that does not bend in response to the clever arguments [LW · GW] of an individual who thinks that he alone is special and exempt from them.
As Duncan Sabien once wrote [LW · GW]:
Standards are not really popular. Most people don't like them. Or rather, most people like them in the abstract, but chafe when they get in the way, and it's pretty rare for someone to not think that their personal exception to the standard is more justified than others' violations. Half the people here, I think, don't even see the problem that I'm trying to point at. Or they see it, but they don't see it as a problem.
I think it would have been weird and useless for you to straight-up lie in your previous comment [LW(p) · GW(p)], so of course you thought what you were saying communicated your real position. Why else would you have written it? But "communicate whatever you truly feel about something, regardless of what form your writing takes" is a truly terrible way of organizing any community in which meaningful intellectual progress [? · GW] is intended. By contrast, giving explanations and reasoning in situations where you label the beliefs of others as "insane" prevents conversations from becoming needlessly heated and spiraling into Demon Threads [LW · GW], while also building towards [LW(p) · GW(p)] a community that maintains high-quality contributions.
All of this stuff has already been covered in the large number of expositions people have given over the last few years on what LessWrong is about and what principles animate norms and user behavior (1 [LW · GW], 2 [LW · GW], 3 [LW · GW], 4 [LW · GW], 5 [LW · GW], etc).
↑ comment by TsviBT · 2024-07-16T15:10:39.085Z · LW(p) · GW(p)
I doubt that we're going to get anything useful here, but as an indication of where I'm coming from:
- I would basically agree with what you're saying if my first comment had been ad hominem, like "Bogdan is a doo-doo head". That's unhelpful, irrelevant, mean, inflammatory, and corrosive to the culture. (Also it's false lol.)
- I think a position can be wrong, can be insanely wrong (which means something like "is very far from the truth, is wrong in a way that produces very wrong actions, and is being produced by a process which is failing to update in a way that it should and is failing to notice that fact"), and can be exactly opposite of the truth (for example, "Redwoods are short, grass is tall" is, perhaps depending on contexts, just about the exact opposite of the truth). And these facts are often knowable and relevant if true. And therefore should be said--in a truth-seeking context. And this is the situation we're in.
- If you had responded to my original comment with something like
"Your choice of words makes it seem like you're angry or something, and this is coming out in a way that seems like a strong bid for something, e.g. attention or agreement or something. It's a bit hard to orient to that because it's not clear what if anything you're angry about, and so readers are forced to either rudely ignore / dismiss, or engage with someone who seems a bit angry or standoffish without knowing why. Can you more directly say what's going on, e.g. what you're angry about and what you might request, so we can evaluate that more explicitly?"
or whatever is the analogous thing that's true for you, then we could have talked about that. Instead you described my relatively accurate and intentional presentation of my views as "misleading readers into thinking the case you are bringing forward is stronger than it actually is or that this matter is so obvious and trivial...", which sounds to me like you have a problem in your own thinking and norms of discourse, which is that you're requiring that statements other people make be from the perspective of [the theory that's shared between the expected community of speakers and listeners] in order for you to think they're appropriate or non-misleading.
- The fact that I have to explain this to you is probably bad, and is probably mostly your responsibility, and you should reevaluate your behavior. (I'm not trying to be gentle here, and if gentleness would help then you deserve it--but you probably won't get it here from me.)
↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-07-16T15:30:27.520Z · LW(p) · GW(p)
I think a position can be wrong, can be insanely wrong (which means something like "is very far from the truth, is wrong in a way that produces very wrong actions, and is being produced by a process which is failing to update in a way that it should and is failing to notice that fact"), and can be exactly opposite of the truth (for example, "Redwoods are short, grass is tall" is, perhaps depending on contexts, just about the exact opposite of the truth). And these facts are often knowable and relevant if true. And therefore should be said--in a truth-seeking context.
I agree to some extent, which is why I said [LW(p) · GW(p)] the following to gears:
The fact that you chose the word "insane" to describe something that did not seem obviously false, that had a fair bit of support in this community, and that you had not given any arguments against at the time was the problem.
The fact that you think something is "insane" is informationally useful to other people, and, all else equal, should be communicated. But all else is not equal, because (as I explained in my previous comments) it is a fabricated option [LW · GW] to think that relaxing norms around the way in which particular kinds of information are communicated will not negatively affect the quality of the conversation that unfolds afterwards.
So you could (at least in my view, not sure what the mods think) say something is "insane" if you explain why, because this allows for opportunities to drag the conversation away from mud-slinging Demon Threads [LW · GW] and towards the object-level arguments being discussed (and, in this case, saying you think your interlocutor's position is crazy could actually be helpful at times, since it signals a great degree of disagreement and allows for quicker identification of how many inferential distances [LW · GW] lie between you and the other commenters). Likewise, you could give your conclusions without presenting arguments or explanations for them, as long as your position is not stated in an overly inflammatory manner, because this then incentivizes useful and clear-headed discourse later on when users can ask what the arguments actually are. But if you go the third route, then you maximize the likelihood of the conversation getting derailed.
"Your choice of words makes it seem like you're angry or something, and this is coming out in a way that seems like a strong bid for something, e.g. attention or agreement or something. It's a bit hard to orient to that because it's not clear what if anything you're angry about, and so readers are forced to either rudely ignore / dismiss, or engage with someone who seems a bit angry or standoffish without knowing why. Can you more directly say what's going on, e.g. what you're angry about and what you might request, so we can evaluate that more explicitly?"
This framing focuses on the wrong part, I think. You can be as angry as you want to when you are commenting on LessWrong, and it seems inappropriate to enforce norms about the emotions one is supposed to feel when contributing here. The part that matters is whether specific norms of discourse are getting violated (about the literal things someone is writing, not how they feel in that moment), in which case (as I have argued above) I believe the internal state of mind of the person violating them is largely irrelevant.
you have a problem in your own thinking and norms of discourse
I'm also not sure what you mean by this. You also implied later on that "requiring that statements other people make be from the perspective of [the theory that's shared between the expected community of speakers and listeners] in order for you to think they're appropriate" is wrong, which... doesn't make sense to me, because that's the very definition of the word appropriate: "meeting the requirements [i.e. norms] of a purpose or situation."
The same statement can be appropriate or inappropriate, depending on the rules and norms of the community it is made in.
↑ comment by TsviBT · 2024-07-16T15:37:51.733Z · LW(p) · GW(p)
to think that relaxing norms around the way in which particular kinds of information are communicated will not negatively affect the quality of the conversation that unfolds afterwards.
If this happens because someone says something true, relevant, and useful, in a way that doesn't have alternative expressions that are really easy and obvious to do (such as deleting the statement "So and so is a doo-doo head"), then it's the fault of the conversation, not the statement.
↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-07-16T15:44:08.360Z · LW(p) · GW(p)
doesn't have alternative expressions
The alternative expression, in this particular case (not in the mine run of cases), is not to change the word "insane" (because it seems you are certain enough in your belief that it is applicable here that it makes sense for you to communicate this idea some way), but rather to simply write more (or link to a place that contains arguments which relate, with particularity, to the situation at hand) by explaining why you think it is true that the statement is "insane".
If you are so confident in your conclusion that you are willing to label the articulation of the opposing view as "insane", then it should be straightforward (and more importantly, should not take so much time that it becomes daunting) to give reasons for that, at the time you make that labeling.
↑ comment by TsviBT · 2024-07-16T15:46:08.462Z · LW(p) · GW(p)
it should be straightforward (and more importantly, should not take so much time that it becomes daunting) to give reasons for that
NOPE!
↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-07-16T15:47:32.173Z · LW(p) · GW(p)
I think I'm going to bow out of this conversation right now, since it doesn't seem you want to meaningfully engage.
↑ comment by TsviBT · 2024-07-16T08:16:50.695Z · LW(p) · GW(p)
The comment I was responding to also didn't offer serious relevant arguments.
https://tsvibt.blogspot.com/2023/09/a-hermeneutic-net-for-agency.html
↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-07-16T13:33:29.650Z · LW(p) · GW(p)
The comment I was responding to also didn't offer serious relevant arguments.
And it didn't label the position it was arguing against as "insane", so this is also entirely off-topic.
It would be ideal for users to always describe why they have reached the conclusions they have, but that is a fabricated option [LW · GW] which does not take into account the basic observation that requiring such explanations creates such a tremendous disincentive to commenting that it would drastically reduce the quantity of useful contributions in the community, thus making things worse off than they were before.
So the compromise we reach is one in which users can state their conclusions in a relatively neutral manner that does not poison the discourse that comes afterwards, and then if another user has a question or a disagreement about this matter, later on they can then have a regular, non-Demon Thread [LW · GW] discussion about it in which they explain their models and the evidence they had to reach their positions.
↑ comment by the gears to ascension (lahwran) · 2024-07-16T13:50:09.841Z · LW(p) · GW(p)
I think you are also expressing high emotive confidence in your comments. You are presenting a case, and your expressed confidence is slightly lower, but still elevated.
↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-07-16T14:03:48.596Z · LW(p) · GW(p)
I agree[1], and I think it is entirely appropriate to do so, given that I have given some explanations of the mental models behind my positions on these matters.
For clarity, I'll summarize my conclusion here, on the basis of what I have explained before (1 [LW(p) · GW(p)], 2 [LW(p) · GW(p)], 3 [LW(p) · GW(p)]):
- It is fine[2] to label opinions you disagree with as "insane".
- It is fine to give your conclusions without explaining the reasons behind your positions.[3]
- It is not fine to do 1 and 2 at the same time.
With regards to your "taboo off-topic" reaction, what I mean by "off-topic" in this case is "irrelevant to the discussion at hand, by focusing on the wrong level of abstraction (meta-level norms vs object-level discourse) and by attempting to say the other person behaved similarly [LW(p) · GW(p)], which is incorrect as a factual matter (see the distinction between points 2 and 3 above), but more importantly, immaterial to the topic at hand even if true".
[1] I suspect my regular use of italics is part of what is giving off this impression.
[2] Although it is not ideal in most situations and should be (lightly) discouraged in most spots.
[3] Although it would be best to be willing to engage in discussion about those reasons later on if other users challenge you on them.
↑ comment by Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-07-16T09:48:00.250Z · LW(p) · GW(p)
The comment I was responding to also didn't offer serious relevant arguments.
I'm time-bottlenecked now, but I'll give one example. Consider the Natural Abstraction Hypothesis (NAH) agenda (which, fwiw, I think is an example of considerably-better-than-average work on trying to solve the problem from scratch). I'd argue that even for someone interested in this agenda:
1. Most of the relevant work has come (and will keep coming) from outside the LW community (see e.g. The Platonic Representation Hypothesis and compare the literature reviewed there with NAH-related work on LW).
2. (Given the previous point) the typical AI safety researcher interested in NAH would do better to spend most of their time (at least at the very beginning) looking at potentially relevant literature outside LW, rather than either trying to start from scratch, or mostly looking at LW literature.
↑ comment by TsviBT · 2024-07-16T10:10:20.516Z · LW(p) · GW(p)
considerably-better-than-average work on trying to solve the problem from scratch
It's considerably better than average but is a drop in the bucket and is probably mostly wasted motion. And it's a pretty noncentral example of trying to solve the problem from scratch. I think most people reading this comment just don't even know what that would look like.
even for someone interested in this agenda
At a glance, this comment seems like it might be part of a pretty strong case that [the concrete ML-related implications of NAH] are much better investigated by the ML community compared to LW alignment people. I doubt that the philosophically more interesting aspects of Wentworth's perspectives relating to NAH are better served by looking at ML stuff, compared to trying from scratch or looking at Wentworth's and related LW-ish writing. (I'm unsure about the mathematically interesting aspects; the alternative wouldn't be in the ML community but would be in the mathematical community.)
And most importantly "someone interested in this agenda" is already a somewhat nonsensical or question-begging conditional. You brought up "AI safety research" specifically, and by that term you are morally obliged to mean [the field of study aimed at figuring out how to make cognitive systems that are more capable than humanity and also serve human value]. That pursuit is better served by trying from scratch. (Yes, I still haven't presented an affirmative case. That's because we haven't even communicated about the proposition yet.)
↑ comment by the gears to ascension (lahwran) · 2024-07-16T13:41:19.507Z · LW(p) · GW(p)
Links have a high attrition rate; cf. the ratio of people overcoming a trivial inconvenience. Post your arguments compressed inline to get more eyeballs on them.
↑ comment by the gears to ascension (lahwran) · 2024-07-16T14:02:09.834Z · LW(p) · GW(p)
Can you expand on this concisely inline? I strongly agree with the comment you're replying to, and think it has been one of MIRI's biggest weaknesses in the past decade that they didn't build the fortitude to be able to read existing work without becoming confused by its irrelevance. But I also think your and Abram's research direction intuitions seem like some of the most important in the field right now, alongside Wentworth's. I'd like to understand what it is that has held you back from speed reading external work for hunch seeding for so long. To me, it seems like solving from scratch is best done not from scratch, if that makes sense. Don't defer to what you read.
↑ comment by TsviBT · 2024-07-16T14:44:30.878Z · LW(p) · GW(p)
I'd like to understand what it is that has held you back from speed reading external work for hunch seeding for so long.
Well currently I'm not really doing alignment research. My plans / goals / orientation / thinking style have changed over the years, so I've read stuff or tried to read stuff more or less during different periods. When I'm doing my best thinking, yes, I read things for idea seeding / as provocations, but it's only that--I most certainly am not speed reading, the opposite really: read one paragraph, think for an hour and then maybe write stuff. And I'm obviously not reading some random ML paper, jesus christ. Philosophy, metamathematics, theoretical biology, linguistics, psychology, ethology, ... much more interesting and useful.
To me, it seems like solving from scratch is best done not from scratch, if that makes sense.
Absolutely, I 100% agree, IIUC. I also think:
- A great majority of the time, when people talk about reading stuff (to "get up to speed", to "see what other people have done on the subject", to "get inspiration", to "become more informed", to "see what approaches/questions there are"...), they are not doing this "from scratch not from scratch" thing.
- "the typical EA / rationalist, especially in AI safety research (most often relatively young and junior in terms of research experience / taste)" is absolutely and pretty extremely erring on the side of failing to ever even try to solve the actual problem at all.
Don't defer to what you read.
Yeah, I generally agree (https://tsvibt.blogspot.com/2022/09/dangers-of-deferrence.html), though you probably should defer about some stuff at least provisionally (for example, you should probably try out, for a while, the stance of deferring to well-respected philosophers about what questions are interesting).
I think it's just not appreciated how much people defer to what they read. Specifically, there's a lot of frame deference. This is usually fine and good in lots of contexts (you don't need to, like, question epistemology super hard to become a good engineer, or question whether we should actually be basing our buildings off of liquid material rather than solid material or something). It's catastrophic in AGI alignment, because our frames are bad.
Not sure I answered your question.
↑ comment by Lucius Bushnaq (Lblack) · 2024-07-16T15:27:16.010Z · LW(p) · GW(p)
I think this is particularly incorrect for alignment, relative to a more typical STEM research field. Alignment is very young[1]. There's a lot less existing work worth reading than in a field like, say, lattice quantum field theory. Due to this, the time investment required to start contributing at the research frontier is very low, relatively speaking.
This is definitely changing. There's a lot more useful work than there was when I started dipping my toe into alignment three years ago. But compared to something like particle physics, it's still very little.
[1] In terms of the total number of smart-person-hours invested.
comment by Mitchell_Porter · 2024-07-13T10:51:16.798Z · LW(p) · GW(p)
I'm curious - if you repeated this study, but with "the set of all Ivy League graduates" instead of "the EA/rationalist community", how does it compare?
comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-07-14T08:40:53.362Z · LW(p) · GW(p)
Preach, brother.
One hundred twenty percent agreed. Hubris is the downfall of the rationalist project.
comment by rotatingpaguro · 2024-07-12T12:32:45.218Z · LW(p) · GW(p)
Before doing any project or entering any field, you need to catch up on existing intellectual discussion on the subject.
My current take on this topic is to follow this scheme:
1) dedicate some time to think about the problem on your own, without searching the literature
2) look in the literature, compare with your own thoughts
3) get feedback from a human in the field
4) repeat
Do you think (1) makes sense, or is your position extreme enough to reject (1) altogether, or spend only a very short time on it, say < 1 hour?
↑ comment by metachirality · 2024-07-13T04:20:51.092Z · LW(p) · GW(p)
IMO trying the problem yourself before researching it makes you appreciate what other people have already done even more. It's pretty easy to fall victim to hindsight bias if you haven't experienced the difficulty of actually getting anywhere.
comment by ChristianKl · 2024-07-13T16:58:49.545Z · LW(p) · GW(p)
Rationalism loves jargon, including jargon that is just completely unnecessary. For example, the phrase “epistemic status” is a fun technique where you say how confident you are in a post you make. But it could be entirely replaced with the phrase “confidence level”, which means pretty much the exact same thing.
Jargon is good when it allows us to make distinctions. The phrase “epistemic status” as used in this community does not mean the same thing as “confidence level”.
A confidence level boils down to the probability that a given claim is true. It might be phrased in vaguer language, but it's about the likelihood that a given thesis is correct.
If I say "Epistemic status: This is written in textbooks of the field" I'm not stating a probability about whether or not my claim is true. I can make the statement without having to be explicit about my confidence in the textbooks of a field. Different readers might have different confidence levels in textbooks of the field I'm talking about.
If I listen to someone making claims about physics and Bob says A is very likely while Dave says A is certainly false, I get both of their confidence levels. If I additionally learn that the epistemic status of Bob is that he's a physics professor speaking in his field of expertise, while Dave never engaged academically with physics but spent a lot of time thinking about physics independently, I learn something that goes beyond what I got from listening to both of their confidence levels.
This saves everybody a whole lot of time. But unfortunately a lot of articles in the ea/rat community seem to only cite or look at other blog posts in the same community. It has a severe case of “not invented here” syndrome.
This is generally true for academia as well: academia generally cites ideas only if those ideas have been expressed by other academics, and is frequently even focused on whether they have been expressed within its own discipline.
If you want an example of this dynamic, Nassim Taleb writes in The Black Swan about how what economists call the Black–Scholes formula was known to quants before under another name. Economists still credit Black–Scholes for it, because what traders do is "not invented here".
That said, of course reading broadly is good.
↑ comment by Steven Byrnes (steve2152) · 2024-07-13T22:52:33.923Z · LW(p) · GW(p)
I use rationalist jargon when I judge that the benefits (of pointing to a particular thing) outweigh the costs (of putting off potential readers). And my opinion is that “epistemic status” doesn’t make the cut.
Basically, I think that if you write an “epistemic status” at the top of a blog post, and then delete the two words “epistemic status” while keeping everything else the same, it works just about as well. See for example the top of this post [LW · GW].
comment by Ebenezer Dukakis (valley9) · 2024-07-13T03:55:57.739Z · LW(p) · GW(p)
Great post. Self-selection seems huge for online communities, and I think it's no different on these fora.
Confidence level: General vague impressions and assorted thoughts follow; could very well be wrong on some details.
A disagreement I have with both the rationalist and EA communities is what the process of coming to robust conclusions looks like. In those communities, it seems like the strategy is often to identify a few super-geniuses who go do a super-deep analysis, and come to a conclusion that's assumed to be robust and trustworthy. See the "Groupthink" section on this page for specifics.
From my perspective, I would rather see an ordinary-genius do an ordinary-depth analysis, and then have a bunch of other people ask a bunch of hard questions. If the analysis holds up against all those hard questions, then the conclusion can be taken as robust.
Everyone brings their own incentives, intuitions, and knowledge to a problem. If a single person focuses a lot on a problem, they run into diminishing returns regarding the number of angles of attack. It seems more effective to generate a lot of angles of attack by taking the union of everyone's thoughts.
From my perspective, placing a lot of trust in top EA/LW thought leaders ironically makes them less trustworthy, because people stop asking why the emperor has no clothes.
The problem with saying the emperor has no clothes is: either you show yourself a fool, or else you're attacking a high-status person. Not a good prospect either way, in social terms.
EA/LW communities are an unusual niche with opaque membership norms, and people may want to retain their "insider" status. So they do extra homework before accusing the emperor of nudity, and might just procrastinate indefinitely.
There can also be a subtle aspect of circular reasoning to thought leadership: "we know this person is great because of their insights", but also "we know this insight is great because of the person who said it". (Certain celebrity users on these fora get 50+ positive karma on basically every top-level post. Hard to believe that the authorship isn't coloring the perception of the content.)
A recent illustration of these principles might be the pivot to AI Pause. IIRC, it took a "super-genius" (Katja Grace) writing a super long post before Pause became popular. If an outsider simply said: "So AI is bad, why not make it illegal?" -- I bet they would've been downvoted [EA(p) · GW(p)]. And once that's downvoted, no one feels obligated to reply. (Note, also -- I don't believe there was much reasoning transparency regarding why the pause strategy was considered unpromising at the time. You kinda had to be an insider like Katja to know the reasoning in order to critique it.)
In conclusion, I suspect there are a fair number of mistaken community beliefs which survive because (1) no "super-genius" has yet written a super-long post about them, and (2) poking around by asking hard questions is disincentivized.
↑ comment by ChristianKl · 2024-07-13T20:46:06.124Z · LW(p) · GW(p)
From my perspective, I would rather see an ordinary-genius do an ordinary-depth analysis, and then have a bunch of other people ask a bunch of hard questions. If the analysis holds up against all those hard questions, then the conclusion can be taken as robust.
On LessWrong, there's a comment section where hard questions can be asked and are asked frequently. The same is true on ACX.
On the other hand, GiveWell recommendations don't allow raising hard questions in the same way and most of the grant decisions are made behind closed doors.
A recent illustration of these principles might be the pivot to AI Pause. [...] I don't believe there was much reasoning transparency regarding why the pause strategy was considered unpromising at the time.
I don't think AI policy is a good example for discourse on LessWrong. There are strategic reasons to be less transparent about how to affect public policy than for most other topics. Everything that's written publicly can be easily picked up by journalists wanting to write stories about AI.
I think you can argue that more reasoning transparency around AI policy would be good, but it's not something that generalizes over other topics on LessWrong.
↑ comment by Ebenezer Dukakis (valley9) · 2024-07-14T19:49:06.720Z · LW(p) · GW(p)
On LessWrong, there's a comment section where hard questions can be asked and are asked frequently.
In my experience, asking hard questions here is quite socially unrewarding. I could probably think of a dozen or so cases where I think the LW consensus "emperor" has no clothes, that I haven't posted about, just because I expect it to be an exercise in frustration. I think I will probably quit posting here soon.
I don't think AI policy is a good example for discourse on LessWrong. There are strategic reasons to be less transparent about how to affect public policy than for most other topics.
In terms of advocacy methods, sure. In terms of desired policies, I generally disagree.
Everything that's written publicly can be easily picked up by journalists wanting to write stories about AI.
If that's what we are worried about, there is plenty of low-hanging fruit in terms of e.g. not tweeting wildly provocative stuff for no reason. (You can ask for examples, but be warned, sharing them might increase the probability that a journalist writes about them!)
comment by Screwtape · 2024-07-16T16:44:30.248Z · LW(p) · GW(p)
Noting that I'm upvoting, but mostly for the "How Big is EA and rationalism" section. I've had "get a good order of magnitude estimate for the community" on my backlog for a while and never got it into a place that felt publishable. I'm glad someone got to it!
comment by Matt Goldenberg (mr-hire) · 2024-07-13T06:46:14.577Z · LW(p) · GW(p)
median rationalist at roughly MENSA level. This still feels wrong to me: if they’re so smart, where are the nobel laureates? The famous physicists? And why does arguing on Lesswrong make me feel like banging my head against the wall?
I think you'd have to consider both Scott Aaronson and Tyler Cowen to be rationalist-adjacent, and both are considered intellectual heavyweights.
Dustin Moskovitz is EA-adjacent, again considered a heavyweight, but applied to business rather than academia.
Then there's the second point, but unfortunately I haven't seen any evidence that someone being smart makes them pleasant to argue with (the contrary, in fact).
↑ comment by metachirality · 2024-07-13T15:19:12.355Z · LW(p) · GW(p)
Emmett Shear might also count, but he might merely be rationalist-adjacent.
comment by Review Bot · 2024-07-16T02:41:34.171Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?