alenglander's Shortform

post by Aryeh Englander (alenglander) · 2019-12-02T15:35:02.996Z · LW · GW · 10 comments

10 comments

comment by Aryeh Englander (alenglander) · 2021-09-06T20:25:03.426Z · LW(p) · GW(p)

A failure mode I think I've noticed, including among rationalists (and certainly in myself!): if someone in your in-group criticizes something about the group, people often consider that critique reasonable. If someone outside the group levels the exact same criticism, it feels like an attack on the group, and your tribal defensiveness kicks into gear, potentially making you more susceptible to confirmation / disconfirmation bias or the like. I've noticed myself, and I'm pretty sure others in the rationalist community, doing this, even reacting in clearly different ways to the exact same critique depending on whether it comes from an in-group member or an outsider.

Do you think this is correct or off the mark? Also, is there a name for this, and have there been studies about it?

Replies from: Viliam, Evan Rysdam, wunan
comment by Viliam · 2021-09-07T14:15:10.561Z · LW(p) · GW(p)

Different connotations? For example, if you say "looking at the results, most rationalists are actually not that smart", as inside criticism it seems to imply "therefore, we should try harder and do rationality more consistently, or admit our limitations and develop techniques that take them into account", but as outside criticism it seems to imply "therefore you smartasses should not be so dismissive of astrology and homeopathy; unlike you, those guys make lots of money".

It reminds me of "concern trolling", whose bailey is "any criticism or suggestion made by the outgroup", but whose motte, if I understand it correctly, is trying to convince the group to do something suboptimal, create discord in the group, or waste the group's time by offering what seems to be helpful advice or a genuine question.

Like, after outside criticism, the usual next step is offering advice on how to fix the problem. "Despite all the talk about winning, most rationalists don't seem to significantly win at life. As a rationalist, you should reject armchair reasoning in favor of empirical evidence. I know many people whose lives have dramatically improved after finding Jesus. You should try it, too..."

And sometimes the next step hasn't been taken yet, but you already expect it. Which could be a mistake, but often is not. It is easy to err in either direction.

Replies from: alenglander
comment by Aryeh Englander (alenglander) · 2021-09-08T00:37:46.012Z · LW(p) · GW(p)

Yes, this sounds like a reasonable interpretation.

comment by Sunny from QAD (Evan Rysdam) · 2021-09-08T07:34:09.586Z · LW(p) · GW(p)

Alternate framing: if you already know that criticisms coming from one's outgroup are usually received poorly, then the fact that they are received better when coming from the ingroup is a hidden "success mode" that perhaps people could use to make criticisms go down easier somehow.

comment by wunan · 2021-09-06T21:06:25.790Z · LW(p) · GW(p)

Do you have some examples? I've noticed that rationalists tend to ascribe good faith to outside criticisms too often, to the extent that obviously bad-faith criticisms are treated as invitations for discussion. For example, there was an article about SSC in the New Yorker that came out after Scott deleted SSC but before the NYT article. Many rationalists failed to recognize the New Yorker article as the hit piece I believe it clearly was, even more clearly in hindsight now that the NYT article has come out.

Replies from: alenglander
comment by Aryeh Englander (alenglander) · 2021-09-07T02:54:14.069Z · LW(p) · GW(p)

I am reluctant to mention specific examples, partly because maybe I've misunderstood and partly because I hate being at all confrontational. But regardless, I have definitely seen this outside the rationalist community, and I have definitely noticed myself doing it. Usually it's only in my head, though: I feel upset when a critique comes from outside my group, but if someone inside the group says the same thing, I'll mentally nod along.

comment by Aryeh Englander (alenglander) · 2022-03-24T18:52:16.683Z · LW(p) · GW(p)

I keep having off-the-cuff questions I would love to ask the community, but I don't know where the right place to post them is. I don't usually have the time to polish the questions so that they are high quality, cite appropriate sources and previous discussions, etc., but I would still like them answered! Typically these are the types of questions I might post on Facebook, but I think I would get higher-quality answers here.

Do questions of this sort belong as question posts, shortform posts, or comments on the monthly open threads? Or do they in fact belong on Facebook rather than here, since they are not at all polished or well researched beyond some quick Google searches? And if I ask in a shortform post or as a comment on the open thread, will that get only a small fraction of the attention (and therefore the responses) that it would get as a separate question post?

Replies from: Dagon, Raemon
comment by Dagon · 2022-03-24T22:26:52.458Z · LW(p) · GW(p)

A lot depends on the question and the kinds of responses you hope to get. I don't think it's necessarily about polish and citations of previous work, but you do need enough specificity and clarity to show what you already know and which specific part of the topic you're asking about.

High-quality answers come from high-quality questions. There are things you can ask on Facebook that don't work well here if you're not putting much effort into preparation, and there are things you can ask in shortform that you can't easily ask in a top-level question post (without getting downvoted). You're correct that you get less traction in those places, but the expectations are also lower. Also, shortform (and Facebook) carries more expectation of refinement and discussion via comments, whereas capital-Q Question posts are generally (but not always) expected to elicit answers.

All that said, you've been here long enough to earn 500+ karma, so I'd recommend just trying stuff out. The only way to spend that karma is to make risky posts, so get good value out of it by experimenting! I strongly expect that some questions will catch people's attention even if the post isn't well prepared, and some questions won't get useful responses no matter how perfect your form is.

comment by Raemon · 2022-03-24T19:11:21.825Z · LW(p) · GW(p)

Any of the three options you listed is fine. Question posts don't need to be super-high-polish. Shortform and Open Threads are more or less interchangeable.

comment by Aryeh Englander (alenglander) · 2019-12-02T15:35:03.228Z · LW(p) · GW(p)

Something I've been thinking about recently: I've been reading several discussions of potential risks from AI, especially the essays and interviews on AI Impacts. A lot of these discussions seem to me to center on trying to extrapolate from known data, or on analyzing whether AI is or is not analogous to various historical transitions.

But reasoning from historical precedent or extrapolated data is only one way of looking at these issues. The other way is more like what Bostrom did in Superintelligence: reasoning from theoretical models of how AI works, what could go wrong, how the world would likely react, and so on.

It seems to me that the more you lean on the historical-analogies / extrapolated-data approach, the more skeptical you'll be of claims that AI risk is a huge problem. Conversely, the more you lean on reasoning from theoretical models, the more concerned you'll be. I'd put Robin Hanson close to the extreme end of the extrapolated-data approach, and Eliezer Yudkowsky and Nick Bostrom close to the extreme end of the theoretical-models approach. AI Impacts seems to fall closer to Hanson on this spectrum.

Of course, there's no real hard line between the two approaches. Reasoning from historical precedent and extrapolated data necessarily requires some theoretical modeling, and vice versa. But I still think the basic distinction holds value.

If this is right, then the question is how much weight we should put on each type of reasoning, and why.

Thoughts?