Sherrinford's Shortform

post by Sherrinford · 2020-05-02T17:19:22.661Z · LW · GW · 45 comments


Comments sorted by top scores.

comment by Sherrinford · 2020-11-06T08:17:01.105Z · LW(p) · GW(p)

It would be great if people first did some literature research before presenting their theory of life, the universe and everything. If they did not find any literature, they should say so.

Replies from: Dagon
comment by Dagon · 2020-11-06T17:42:32.408Z · LW(p) · GW(p)

I considered looking for any studies or documentation about whether blog and website posts are improved by prior research or references.  But then I got distracted, so I just wrote this comment instead.

Replies from: Sherrinford
comment by Sherrinford · 2020-11-06T19:08:05.960Z · LW(p) · GW(p)

At least you didn't write a long longform post :)

comment by Sherrinford · 2021-05-30T08:59:14.485Z · LW(p) · GW(p)

1.) Conflict theory in practice: you see conflicts of interest, explain them to your ingroup, and if they don't agree, they are corrupted by the enemy.

2.) Mistake theory in practice: you identify behavior as bad, explain that to everybody, and if they don't agree, either move to 1.) or note that people are very stupid.

comment by Sherrinford · 2021-05-18T09:56:09.392Z · LW(p) · GW(p)

Saying that "the control system" does something is about as informative as saying that "emergence" is the cause of something.

Replies from: Measure
comment by Measure · 2021-05-18T19:11:50.578Z · LW(p) · GW(p)

"Control system" means something a bit more specific than "whatever it is that causes the behavior of the system". A control system has an input, an output, and a set point. It varies the output based on the difference (error) between the input and the desired set point (sometimes it has terms for the derivative or for the accumulation of the error, but not always). In practice this means that it is hard to move the output away from the set point since the control system will respond by pushing in the opposite direction.
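The push-back described above can be sketched in a few lines. This is a toy illustration of my own (the set point, gain, and starting value are made-up numbers, not from the comment): a proportional controller that corrects the output by a fixed fraction of the error each step.

```python
# Toy proportional controller: each step it corrects the output by a
# fixed fraction (the gain) of the error between set point and output.

SET_POINT = 37.0
GAIN = 0.5  # fraction of the error corrected per step

def step(output, set_point=SET_POINT, gain=GAIN):
    error = set_point - output    # how far the output is from the set point
    return output + gain * error  # push back proportionally to the error

output = 50.0  # the output has been pushed away from the set point
for _ in range(20):
    output = step(output)
# The error shrinks by the gain factor each step, so the output is
# driven back very close to the set point.
```

With a gain of 0.5 the error halves every step, so after 20 steps the output is within about 10^-5 of the set point; this is the sense in which it is "hard to move the output away from the set point".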

Replies from: Sherrinford
comment by Sherrinford · 2021-05-18T19:35:56.624Z · LW(p) · GW(p)

In the context in which I have been seeing the statement that "the" control system moves a certain behavior, there is nothing but the claim that the control system does exactly what it is claimed to do. No precise explanation. No precise prediction (sure, the claim is that the output moves towards the set point, but nothing about the time dimension). If anything, the term is always used to "explain" behavior ex post.

comment by Sherrinford · 2020-07-07T16:44:13.235Z · LW(p) · GW(p)

Currently reading Fooled by Randomness, almost 20 years after it was published. By now I have read about a third of it. So far, it seems neither very insightful nor dense; all the insights (or observations) seem to be what you can read in the (relatively short) Wikipedia article. It is also not extremely entertaining.

I wonder whether it was a revealing, revolutionary book back in the day, or whether it reads differently to people with a certain background (or lack thereof), such that my impression is, in some sense, biased. I also wonder whether the other books by Taleb are better, but given the praise that FbR seems to have received, I guess it is not likely that The Black Swan would be fundamentally different from FbR.

Replies from: rudi-c
comment by Rudi C (rudi-c) · 2020-07-12T18:18:00.838Z · LW(p) · GW(p)

I read The Black Swan early in my introduction to heuristics and biases, in my teens. I remember that the book was quite illuminating for me, though I disliked Taleb's narcissism and his disrespect for the truth. I don't think it was so much "insightful" as helping me internalize a few big insights. The book's content definitely overlaps a lot with beginner rationality, so you might not find it worthwhile after all. I read a bit of FbR and about half of Antifragile as well, but I found those much less interesting.

An aside: Taleb talks about general topics. It's hard to say new things in that market (it's saturated), and the best parts of his new insights have already become part of the common lexicon.

comment by Sherrinford · 2020-06-13T18:52:24.602Z · LW(p) · GW(p)

New results published in Cell suggest that SARS-CoV-2 gets into the body via the nasal mucosa and then into deep parts of the lung via body fluids, and possibly into the brain. A second part of the same study suggests that there may be partial immunity to SARS-CoV-2 in people who had SARS or MERS. (Disclaimer: I only read a newspaper summary.)

comment by Sherrinford · 2021-05-23T19:23:10.957Z · LW(p) · GW(p)

If LessWrong had to delete itself for some reason, where would you go instead?

Replies from: Viliam
comment by Viliam · 2021-05-25T10:01:47.471Z · LW(p) · GW(p)

I suppose many people would then move to ACX, and there I would ask the same question in an Open Thread.

Replies from: Sherrinford
comment by Sherrinford · 2021-05-25T12:37:09.528Z · LW(p) · GW(p)

Do you think that commenting in Open Threads is very similar to posting and commenting here?

Replies from: Viliam
comment by Viliam · 2021-05-25T15:02:00.371Z · LW(p) · GW(p)

More like, I would ask in the ACX Open Thread "what is the place you go to now that LW is gone?" And then I would follow the crowd.

Posting in ACX Open Threads as such... well, you get a smart audience, but there is simply too much content. Reading those threads is a full-time job.

Replies from: Sherrinford
comment by Sherrinford · 2021-05-27T15:28:06.777Z · LW(p) · GW(p)

Interesting focal point, though I wonder how strong the overlap is.

comment by Sherrinford · 2020-07-29T07:10:25.733Z · LW(p) · GW(p)

The results of Bob Jacobs's LessWrong survey [LW · GW] are quite interesting. It's a pity the sample is so small.

The visualized results (link in his post) are univariate, but I would like to highlight some things:

49 out of 56 respondents identify as "White"
53 out of 59 respondents were born male, and 46 out of 58 identify as male cisgender
47 of 59 identify as heterosexual
1 out of 55 works in a "blue collar" profession
Most people identify as "left of center" in some sense. At the same time, 30 out of 55 identify as "libertarian" (multiple answers were allowed).
31 of 59 respondents think they are at least "upper middle class"; 22 of 59 think the family they were raised in was "upper middle class". (Background: in social science surveys, wealthy people usually underestimate their position, and poor people overestimate theirs, but to a lesser extent.)

I would not have guessed the left-of-center identification, and I would have slightly underestimated the share of male (cisgender).

Replies from: Viliam
comment by Viliam · 2020-08-01T20:59:05.901Z · LW(p) · GW(p)
I would not have guessed the left-of-center identification

If you have 9 people who identify as left-wing and 1 person who identifies as right-wing, many people will hysterically denounce the entire group as "extreme right", based on the fact that the 1 person wasn't banned.

Furthermore, if you have people who identify as left-wing, but don't fully buy the current Twitter left-wing orthodoxy, they too will be denounced by some as "extreme right".

This skews the perception.

Replies from: Sherrinford
comment by Sherrinford · 2020-08-02T06:19:57.788Z · LW(p) · GW(p)

I don't think that fits what I am talking about:

  1. The survey was non-binary. Your first claim does not distinguish between extremes and moderates.
  2. The survey was anonymous. You cannot ban anonymous people.
  3. I see no reason why people should have overstated their leftishness.
  4. If your statement is meant to explain why my perception differs from the result, it does not fit. My perception based on posts and comments would have been relatively more rightwing, less liberal / social democratic / green etc.
  5. I don't see where leftwing lesswrongers are denounced as rightwing extremists. In particular, I don't see where this explains people identifying as leftwing in the survey.
Replies from: Viliam
comment by Viliam · 2020-08-02T15:38:13.808Z · LW(p) · GW(p)

My model is that in the USA most intelligent people are left-wing, especially when you define "left-wing" to mean the 50% of the political spectrum, not just the extreme. And there seem to be many Americans on Less Wrong, just like on most English-speaking websites.

(Note that I am not discussing here why this is so. Maybe the left-wing is inherently correct. Or maybe the intelligent people are just more likely to attend universities where they get brainwashed by the establishment. I am not discussing the cause here, merely observing the outcome.)

So, I would expect Less Wrong to be mostly left-wing (in the 50% sense). My question is, why were you surprised by this outcome?

I don't see where leftwing lesswrongers are denounced as rightwing extremists.

For example, "neoreaction" is the only flavor of politics that is mentioned in the Wikipedia article about LessWrong. It does not claim that it is the predominant political belief, and it even says that Yudkowsky disagrees with them. Nonetheless, it is the only political opinion mentioned in connection with Less Wrong. (This is about making associations rather than making arguments.) So a reader who does not know how to read between the lines properly might leave with an impression that LW is mostly right-wing. (Which is exactly the intended outcome, in my opinion.) And Wikipedia is not the only place where this game of associations is played.

Replies from: Sherrinford, Dirichlet-to-Neumann
comment by Sherrinford · 2020-08-02T19:20:25.174Z · LW(p) · GW(p)
"My model is that in USA most intelligent people are left-wing. Especially when you define "left-wing" to mean the 50% of the political spectrum, not just the extreme."

I agree. (I assume that by political spectrum you refer to something "objective"?)

And there seem to be many Americans on Less Wrong, just like on most English-speaking websites.

Given the whole Bay-area thing, I would have expected a higher share. In the survey, 37 out of 60 say they are residing in the US.

So, I would expect Less Wrong to be mostly left-wing (in the 50% sense). My question is, why were you surprised by this outcome?

Having been in this forum for a while, my impressions based on posts and comments led me to believe that less than 50% of people on LessWrong would say of themselves that they are on values 1-5 of a 1-10 scale from left-wing to right-wing. In fact, 41 of 56 did so.

For example, "neoreaction" is the only flavor of politics that is mentioned in the Wikipedia article about LessWrong. It does not claim that it is the predominant political belief, and it even says that Yudkowsky disagrees with them. Nonetheless, it is the only political opinion mentioned in connection with Less Wrong. (This is about making associations rather than making arguments.) So a reader who does not know how to read between the lines properly might leave with an impression that LW is mostly right-wing. (Which is exactly the intended outcome, in my opinion.) And Wikipedia is not the only place where this game of associations is played.

The Wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated. I don't agree on the "intended outcome", or rather, I do not see why I should believe that.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2020-08-02T20:20:33.091Z · LW(p) · GW(p)

The wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated.

It's not true, though! The article claims: "The neoreactionary movement first grew on LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology".

I mean, okay, it's true that we've had discussions on eugenics [LW · GW] and evolutionary psychology [LW · GW], and it's true that a few of the contrarian nerds who enthusiastically read Overcoming Bias back in the late 'aughts were also a few of the contrarian nerds who enthusiastically read Unqualified Reservations. But "first grew" (Wikipedia) and "originated" (your comment) really don't seem like a fair summary of that kind of minor overlap in readership. No one was doing neoreactionary political theorizing on this website. Okay, I don't have an exact formalization of what I mean by "no one" in the previous sentence because I haven't personally read and remembered every post in our archives [? · GW]; maybe there are nonzero posts with nonnegative karma that could be construed to match this description. Still, in essence, you can only make the claim "true" by gerrymandering the construal of those words [LW · GW].

And yet the characterization will remain in Wikipedia's view of us—glancing at the talk page, I don't expect to win an edit war with David Gerard [LW(p) · GW(p)].

Replies from: jimrandomh, Sherrinford
comment by jimrandomh · 2021-03-08T00:46:04.722Z · LW(p) · GW(p)

I don't expect to win an edit war with David Gerard.

Now might be a good time to revisit that belief. He just got a topic ban on "editing about Scott Siskind, broadly construed". I made the case on the LessWrong article talk page that this topic ban could be construed as extending to the LessWrong article, and also that there is a separate case against him editing that article, based on behavior similar to the behavior he got the topic ban for.

Replies from: Sherrinford, Sherrinford
comment by Sherrinford · 2021-03-08T09:34:55.171Z · LW(p) · GW(p)

It would probably really take a lot of time to even understand what is and what is not considered to be in accordance with Wikipedia rules. I note that, as in any other volunteer organization, a lot depends on who wants to put more time and effort into fighting for his/her convictions.

Replies from: Viliam
comment by Viliam · 2021-05-25T15:48:32.044Z · LW(p) · GW(p)

The Wikipedia rules are... meaningful, if you interpret them in good faith. But if you are using arguments as soldiers [? · GW], then in pretty much every situation, for any side, you can find a rule that can be used in its favor. The key is to find it, and to express it using words familiar to other admins.

For example, if a person edits an article they are somehow related to, it is either a good thing (we want to encourage experts to edit Wikipedia) or a bad thing (conflict of interest). Depending on whether you agree with the person or not, you choose the relevant rule and insist that it applies. Similarly, most content can be removed as not important (Wikipedia is an encyclopedia, not a collection of everything) or kept as important to some people (Wikipedia is not on paper, we do not need to worry about the number of pages). Short articles can be removed (as useless) or kept (because even a short article encourages people to extend it). Then there is a debate about which sources are considered "reliable" scientifically (demand higher rigor if you disagree with the conclusion, anything goes if you agree) and politically (conservative or neutral). In short, the rules do not enforce themselves; they need people to enforce their interpretation.

As you said, if you visibly volunteer a lot, you gain status within the community. When a conflict escalates, the higher-status person has a much better chance to win.

More experienced people can use more sophisticated techniques. For example, if you are high status and you break enough rules that there is a realistic chance you might get banned, say: "guys, I honestly believe I did nothing wrong, but I value our friendship and peace so much that I have decided to stop editing the article, because I love this community so much." Then everyone rejoices that the problem was resolved without having to ban a high-status person. Two weeks later you change your mind and start editing the article again. If anyone proposes a ban again, your friends will dismiss it, because "we already had this debate, stop wasting everyone's time". (David Gerard tried to play this card, and almost succeeded.)

There is also the passive-aggressive art of treating opponents with subtle disrespect and framing their activities in the worst possible light, and when someone does the same to you, crying "assuming good faith is the fundamental principle of Wikipedia debates". Generally, accusing your opponents of breaking Wikipedia rules is a good way to gain support among admins. For example, if you complain about Wikipedia bias on a different website, it can be linked as proof of "brigading".

Original research is discouraged on Wikipedia, but of course this itself becomes a topic of debate (anything your opponent said, unless it is literally a quote, is original research; but of course you cannot really write an encyclopedic article as a concatenation of quotes). You can play the game of making parts of an article longer or shorter depending on whether they put the target in a favorable or unfavorable light. I could go on... but generally, there are all kinds of tricks, and the high-status volunteer is more likely to know them, and is more likely to be forgiven for using them.

comment by Sherrinford · 2021-03-08T10:22:16.944Z · LW(p) · GW(p)

I had to sigh when I read "it can be hard to find editors who don't have a strong opinion about the person. But this is very far from that, likely one reason why the NYT actually used David Gerard as a source".

comment by Sherrinford · 2020-08-03T06:02:51.603Z · LW(p) · GW(p)

Interesting. I had maybe read the Wikipedia article a long time ago, but it did not leave any impression in my memory. Now rereading it, I did not find it dramatic, but I see your point.

Tbh, I still do not fully understand how Wikipedia works (that is, I do not have a model of who determines how an article develops). And "originated" (ok, maybe that is only almost and not fully identical to "first grew") is just what I got from the article. The problem with the association is that it is hard to definitively determine what even makes things mentionable, but once somebody publicly has to distance himself from something, this indicates a public kind of association.

Further reading the article, my impression is that it indeed cites things that in Wikipedia count as sources for its claims. If the impression of LessWrong is distorted, then this may be a problem of what kinds of things on LessWrong are covered by media publications? Or maybe it is all just selective citing, but then it should be possible to cite other things.

Replies from: Viliam
comment by Viliam · 2020-08-03T14:36:35.797Z · LW(p) · GW(p)

In theory, Wikipedia strives to be impartial. In practice, the rules are always only as good as the judges who uphold them. (All legal systems involve some degree of human judgment somewhere in the loop, because it is impossible to write a set of rules that covers everything and doesn't allow some clever abuse. That's why we talk about the letter and the spirit of the law.)

How do you become a Wikipedia admin? You need to spend a lot of time editing Wikipedia in a way other admins consider helpful, and you need to be interested in getting the role. (Probably a few more technical details I forgot.) The good thing is that by doing a lot of useful work you send a costly signal that you care about Wikipedia. The bad thing is that if a certain political opinion becomes dominant among the existing admins, there is no mechanism to fix this bias; it's actually the other way round, because edits disagreeing with the consensus would be judged as harmful, and would probably disqualify their author from becoming an admin in the future.

I don't assume bad faith from most Wikipedia editors. Being wrong about something feels the same from inside as being right; and if other people agree with you, that is usually a good sign. But if you have a few bad actors who can play it smart, who can pretend that their personal grudges are how they actually see the world... considering that other admins already see them as part of the same team, and the same political bias means they already roughly agree on who the good guys and the bad guys are... it is not difficult to defend their decisions in front of a jury of their peers. An outsider has no chance in this fight, because the insider is fluent in the local lingo. Whatever they want to argue, they can find a wiki-rule pointing in that direction; of course it would be just as easy for them to find a wiki-rule pointing in the opposite direction (e.g. if you want to edit an article about something you are personally involved with, you have a "conflict of interest", which is a bad thing; if I want to do the same thing, my personal involvement makes me a "subject-matter expert", which is a good thing; your repetitive editing of the article to make your point is "vandalism", my repetitive editing of the article to make an opposite point is "reverting vandalism"); and then the other admins will nod and say: "of course, if this is what the wiki-rules say, our job is to obey them".

The specific admin who is so obsessed with Less Wrong is David Gerard from RationalWiki. He has kept a grudge for almost a decade, since he added Less Wrong to his website as an example of pseudoscience, mostly because of the quantum physics sequence. After it was explained to him that "many worlds" is actually one of the mainstream interpretations among scientists, he failed to say oops [LW · GW], and continued in the spirit of: well, maybe I was technically wrong about the quantum thing, but still... and spent the last decade trying to find and document everything that is wrong with Less Wrong. (Roko's Basilisk -- a controversial comment that was posted on LW once, deleted by Eliezer along with the whole thread, then posted on RationalWiki as "this is what people at Less Wrong actually believe". Because the fact that it was deleted is somehow a proof that deep inside we actually agree with it, but we don't want the world to know. Neoreaction -- a small group of people who enjoyed debating their edgy beliefs on Less Wrong, were considered entertaining for a while, then became boring and were kicked out. Again, the fact that they were not kicked out sooner is evidence of something dark.) Now if you look at who makes the most edits on the Wikipedia page about Less Wrong: it's David Gerard. If you go through the edit history and look at the individual changes, most of them are small and innocent, but they are all in the same direction: the basilisk and neoreaction must remain in the article, no matter how minuscule they are from the perspective of someone who actually reads Less Wrong; on the other hand, mentions of effective altruism must be kept as short as possible. All of this is technically true and defensible, but... I'd argue that the Less Wrong described by the Wikipedia article does not resemble the Less Wrong its readers know, and that we have David Gerard and his decade-long work to thank for this fact.

If the impression of lesswrong is distorted, then this may be a problem of what kinds of thing on lesswrong are covered by media publications?

True, but most of the information in the media originates from RationalWiki, where it was written by David Gerard. A decade ago, RationalWiki used to be quite high in Google rankings, if I remember correctly; any journalist who did a simple background check would find it. Then he or she would ask about the juicy things in the interview, and regardless of the answer, the juicy things would be mentioned in the article. Which means that the next journalist would now find them both on RationalWiki and in the previous article, which means that he or she would again make a part of the interview about it, reinforcing the connection. It is hard to find an article about Less Wrong that does not mention Roko's Basilisk, despite the fact that it is discussed here rarely, and usually in the context of "guys, I have read about this thing called Roko's Basilisk in the media, and I can't find anything about it here, could you please explain to me what this is about?"

Part of this is the clickbait nature of media: given the choice between debating neoreaction and debating technical details of the latest decision theory, it doesn't matter which topic is more relevant to Less Wrong per se, they know that their audience doesn't care about the latter. And part of the problem with Wikipedia is that it is downstream of the clickbait journalism. They try to use more serious sources, but sometimes there is simply no other source on the topic.

Replies from: Sherrinford
comment by Sherrinford · 2020-08-03T21:13:31.201Z · LW(p) · GW(p)

Thanks for the history overview! Very interesting. Concerning the Wikipedia dynamics, I agree that this is plausible, as it is a plausible development in nearly every volunteer organization, in particular if they try to be grassroots-democratic. The Wikipedia-media problem is known, though in this particular case I was a bit surprised about the "original research" and "reliable source" distinction; many articles there did not seem very "serious". On the other hand, during this whole "lost in hyperspace", I also found "A frequent poster to LessWrong was Michael Anissimov, who was MIRI's media director until 2013", which was news to me. In internet years, all this is so long ago that I did not have any such associations. (I would rather have expected LessWrong to be notable for demanding the dissolution of the WHO, but probably that is not yet clickbaity enough.)

comment by Dirichlet-to-Neumann · 2021-05-25T14:30:40.165Z · LW(p) · GW(p)

My model is that what is called "left of center" in the USA is "far right, at least economically"* in Europe (and what the USA calls "socialism" is "what everyone agrees with").

*"economically" does a fair bit of work here - on issues like immigration, for example, the left-right divide is the same as in the US.

comment by Sherrinford · 2020-06-01T09:28:12.449Z · LW(p) · GW(p)

You would hope that people actually saw steelmanning as an ideal to follow. If that was ever true, the corona pandemic and the policy response seem to have killed the demand for it. It seems to be becoming acceptable to attribute just about any kind of seemingly-wrong behavior to either incredible stupidity or incredible malice, both proving that all institutions are completely broken.

Replies from: Dagon
comment by Dagon · 2020-06-01T16:14:04.189Z · LW(p) · GW(p)

I like the word "institurions". Some mix of institutions, intuitions, and centurions, and I agree that they're completely broken.

Replies from: Sherrinford
comment by Sherrinford · 2020-06-01T16:48:44.791Z · LW(p) · GW(p)

:-) Thanks. But I corrected it.

comment by Sherrinford · 2021-04-26T19:39:30.787Z · LW(p) · GW(p)

I guess this is a really bad time to write book reviews for lesswrong.

comment by Sherrinford · 2021-04-24T13:44:00.845Z · LW(p) · GW(p)

When people write articles containing wrong statements and statements without evidence or source, you can use your knowledge of the wrong statements to update the probability that the statements without evidence or source are true.
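A toy Bayesian version of this update (all numbers here are my own illustrative assumptions, not from the comment): treat each verifiably wrong statement as evidence about the author's reliability, then trust the unsourced statements in proportion to the posterior.

```python
# Toy Bayesian update: how much should spotting n verifiably wrong
# statements lower our confidence that the author is reliable?

def posterior_reliable(prior, p_wrong_if_reliable, p_wrong_if_unreliable, n_wrong):
    # Likelihood of observing n_wrong false statements under each hypothesis,
    # treating the statements as independent observations.
    like_reliable = p_wrong_if_reliable ** n_wrong
    like_unreliable = p_wrong_if_unreliable ** n_wrong
    evidence = like_reliable * prior + like_unreliable * (1 - prior)
    return like_reliable * prior / evidence

# Start fairly confident in the author; assume reliable authors rarely
# publish checkable falsehoods, while unreliable ones do so often.
post = posterior_reliable(prior=0.7, p_wrong_if_reliable=0.05,
                          p_wrong_if_unreliable=0.4, n_wrong=2)
# Two wrong statements drop confidence from 0.7 to about 0.035, and the
# unsourced claims inherit that reduced trust.
```

The exact numbers are arbitrary; the point is only that known-wrong statements are evidence about everything else the author asserts without a source.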

Replies from: Dagon
comment by Dagon · 2021-04-24T15:40:51.554Z · LW(p) · GW(p)

Kind of the reverse of Gell-Mann Amnesia. Arguably, it should be applied to editorial units (sites, publications, etc.), not just to individual authors.

Replies from: Sherrinford
comment by Sherrinford · 2021-04-24T18:48:14.226Z · LW(p) · GW(p)

Yes. I hope certain forums and sites I regularly read don't continue developing into a direction of not demanding evidence and sources for claims.

By the way, there is also the danger that someone at some point just exploits his/her own reputation to push an agenda.

comment by Sherrinford · 2021-03-21T17:00:13.321Z · LW(p) · GW(p)

More articles on the supposed AstraZeneca blood-clot mechanism, adding to this [LW(p) · GW(p)]:

(All in German, but I think that in general, automated translation has become really good.)

comment by Sherrinford · 2020-12-16T13:22:35.743Z · LW(p) · GW(p)

I would love to see examples of contributions with actual steelmanning instead of just seeing people pay lip service to it.

Replies from: niplav
comment by niplav · 2020-12-16T16:15:01.096Z · LW(p) · GW(p)

I believe that steelmanning has mostly been deprecated and replaced with Ideological Turing Tests.

Replies from: Kaj_Sotala, Sherrinford
comment by Kaj_Sotala · 2020-12-16T17:13:36.130Z · LW(p) · GW(p)

ITTs and steelmanning feel like they serve different (though overlapping) purposes to me. For example, if I am talking with people who are not X (libertarians, socialists, transhumanists, car-owners...), we can try to steelman an argument in favor of X together. But we can't do an ITT of X, since that would require us to talk to someone who is X.

Replies from: Sherrinford
comment by Sherrinford · 2020-12-16T18:21:17.824Z · LW(p) · GW(p)

Yes, though I assume the best test for whether you really steelman someone would be if you can take a break and ask her whether your representation fits.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-17T12:35:10.275Z · LW(p) · GW(p)

I don't think an ITT is a good test of a steelman. Often you're coming from a different frame from someone else, and strong arguments to you will be framed completely differently from strong arguments for someone else.

Replies from: Sherrinford
comment by Sherrinford · 2020-12-17T14:19:19.854Z · LW(p) · GW(p)

Yes maybe an ITT tests a fleshman instead of a steelman or a strawman...

comment by Sherrinford · 2020-12-16T16:49:19.212Z · LW(p) · GW(p)

What I mean is: 

I would like people who write articles about the supposed actions or motivations of other people - or government agencies, firms, or whatever - to actually try to present those actions and motivations in a way that at least assumes the actors are not completely dumb, evil, or pathetic. It seems to be fashionable that when people do not see the sense behind actions, they do not try hard but jump to the conclusion that the actions must be due to some despicable, stupid, or at least equilibrium-inefficient behavior (e.g. some claims about "signalling", with no proper analysis of whether the claim makes sense in the given situation). This may feel very insightful; after all, the writer seemingly has a deeper insight into social structures than the social agents themselves. But supposed insights that feel too good can be dangerous. And that a model is plausible does not mean that it applies to every situation.

comment by Sherrinford · 2020-10-25T17:48:27.423Z · LW(p) · GW(p)

Among EA-minded people interested in preventing climate change, it seems Clean Air Task Force (CATF) is seen very favorably. Why? The "Climate Change Cause Area Report" by Founders Pledge (PDF) gives an overview.

CATF's work is introduced as follows:

"It was founded in 1996 with the aim of enacting federal policy reducing the air pollution caused by American coal-fired power plants. This campaign has been highly successful and has been a contributing factor to the retirement of a large portion of the US coal fleet." (p. 5)

On p. 88, you will read:

"Do they have a good track record? CATF have conceived of and led several successful advocacy campaigns in the US, which have had very large public health and environmental benefits. According to our rough model, through their past work, they have averted a tonne of CO2e for around $1.

Is their future work cost-effective? Going forward, CATF plans to continue its work on power plant regulation and to advocate for policy support for innovative but neglected low carbon technologies.

Given their track record and the nature of their future projects, we think it is likely that a donation to CATF would avert a tonne of CO2e for $0.10-$1."

On p. 91:

"CATF was founded in 1996 to advocate for regulation of the damaging air pollution produced by the US coal fleet, initially focusing on sulphur dioxide (SO2) and nitrogen oxides (NOx). They later advocated for controls on mercury emissions. The theory of change was that the cost of emission controls for conventional pollutants and mercury would result in the retirement or curtailment of coal plant operation, resulting in reductions in CO2 (and other) emissions. CATF conceived of the campaign goal, designed the strategy, and led the campaign, in turn drawing in philanthropic support and recruiting other environmental NGOs to the campaign."

How does the evaluation work? A spreadsheet with the evaluation shows the benefits of the policy impact.

Where do the numbers come from? The spreadsheet states "subjective input" in several cells. The "Climate Change Cause Area Report" by Founders Pledge (pp. 129ff.) states that "CATF is typical of research and policy advocacy organisations in that it has worked on heterogeneous projects. This makes it difficult to evaluate all of CATF's past work, as this would require us to assess their counterfactual impact in a range of different contexts in which numerous actors are pushing for the same outcome." The report then asks e.g. how much CATF "brought the relevant regulation forward", and the answers seem to rely strongly on assessments by CATF itself. Nonetheless, it makes assessments like "Our very rough realistic estimate is therefore that CATF brought the relevant regulation forward by 12 months. The 90% confidence interval around this estimate is 6 months to 2 years." On p. 91 you can read: "Through each of these mechanisms, CATF increased the probability that regulation was introduced earlier in time. Our highly uncertain realistic estimate is that through their work, CATF brought regulation on US coal plants forward by 18 months, with a lower bound of 9 months and a higher bound of 4 years. CATF believe this to be a major underestimate, and have told us that they could have brought the relevant regulation forward by ten years."

While of course it's fine to give subjective estimates, they should be taken with a grain of salt. It seems the comparison is much more reliant on such subjectivity than when you evaluate charities with concrete, repeatedly applied health interventions.
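To see how much that one subjective input matters, here is a back-of-the-envelope sketch (the spending and emissions figures are placeholders of my own, not numbers from the report): credit the campaign with the regulation's annual emission reduction for however many months it was brought forward, and divide spending by that.

```python
# Back-of-the-envelope: cost per tonne CO2e as a function of the
# subjective "months brought forward" input. All inputs are placeholders.

def cost_per_tonne(spend_usd, annual_tonnes_averted, months_forward):
    # Counterfactual impact = annual reduction, credited only for the
    # months the campaign is assumed to have sped the regulation up.
    tonnes_credited = annual_tonnes_averted * months_forward / 12
    return spend_usd / tonnes_credited

SPEND = 10e6    # placeholder campaign spending, USD
ANNUAL = 100e6  # placeholder tonnes CO2e averted per year by the regulation

low_est = cost_per_tonne(SPEND, ANNUAL, months_forward=6)     # lower bound
mid_est = cost_per_tonne(SPEND, ANNUAL, months_forward=12)    # "realistic"
catf_est = cost_per_tonne(SPEND, ANNUAL, months_forward=120)  # CATF's own claim
# Moving from 6 months to ten years changes the estimate by a factor
# of 20 ($0.20 vs. $0.01 per tonne here), purely from one subjective input.
```

Whatever the true spending and emissions figures are, the cost-per-tonne estimate scales inversely with the "months brought forward" assumption, so the spread between the evaluators' lower bound and CATF's own claim dominates everything else in the model.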

What, if anything, could be biased?

In addition to the (probably unavoidable) reliance on self-reported information, the following paragraph made me wonder:

"CATF have told us that at the time the campaign was conceived, major environmental organisations were opposed to reopening the question of plant emissions after the Clean Air Act Amendments of 1990, as they feared the possibility that legislative debate would unravel other parts of the Act. This is based on conversations at the time with the American Lung Association, Environmental Defense Fund, and the Natural Resources Defense Council."

How can we know whether such fears were justified ex ante? How do we guard against survivorship or hindsight bias?