Sherrinford's Shortform

post by Sherrinford · 2020-05-02T17:19:22.661Z · LW · GW · 116 comments

Contents

116 comments

Comments sorted by top scores.

comment by Sherrinford · 2021-05-30T08:59:14.485Z · LW(p) · GW(p)

1.) Conflict theory in practice: you see conflicts of interest, explain them to your ingroup, and if they don't agree, they are corrupted by the enemy.

2.) Mistake theory in practice: you identify behavior as bad, explain that to everybody, and if they don't agree, either move to 1.) or note that people are very stupid.

comment by Sherrinford · 2020-11-06T08:17:01.105Z · LW(p) · GW(p)

It would be great if people first did some literature research before presenting their theory of life, the universe, and everything. If they did not find any literature, they should say so.

Replies from: Dagon
comment by Dagon · 2020-11-06T17:42:32.408Z · LW(p) · GW(p)

I considered looking for any studies or documentation about whether blog and website posts are improved by prior research or references.  But then I got distracted, so I just wrote this comment instead.

Replies from: Sherrinford
comment by Sherrinford · 2020-11-06T19:08:05.960Z · LW(p) · GW(p)

At least you didn't write a long longform post :)

comment by Sherrinford · 2023-02-07T20:58:04.003Z · LW(p) · GW(p)

German Jokes translated by ChatGPT

Me:

Here's a German joke, could you please translate it to English such that the joke is still funny? Zwei Kerzen unterhalten sich. Sagt die eine zur anderen: "Ist Wasser eigentlich gefährlich?" Die andere antwortet: "Ja, davon kannst du ausgehen!"

ChatGPT: 

Here's a translation of the German joke into English:

Two candles are talking to each other. One says to the other, "Is water dangerous?" The other answers, "Yes, you can count on it!"

The joke still retains its humor in English, as it relies on a simple play on words and a pun, with the word "count" having both a numerical meaning and a meaning of "rely on."

Me:

In Zentralamerika gibt es zwar schlechte Lackierer, aber Guatemala.

ChatGPT: 

Here's a translation of the German joke into English:

In Central America, there may be bad painters, but there's always Guatemala.

The joke still retains its humor in English, as it relies on a play on words and a pun, with the name "Guatemala" sounding similar to the German word "gut malen," which means "to paint well."

Me:

Frage: "Für wen kauft ein Egoist Obst?" Antwort: "Pfirsich!"

ChatGPT:

Here's a translation of the German joke into English:

Question: "For whom does an egotist buy fruit?" Answer: "A Peach!"

The joke still retains its humor in English, as it relies on a play on words and a pun, with the word "peach" sounding similar to the word "piece," as in a piece of fruit, but also having a secondary meaning of "an attractive or admirable person."

Replies from: ChristianKl
comment by ChristianKl · 2023-02-08T15:18:31.682Z · LW(p) · GW(p)

This seems pretty useless for people who don't know both languages, as they have no way to judge it. 

For those who don't know German: basically, all the jokes lose their original humor, but ChatGPT pretends that they keep it. 

Replies from: Sherrinford
comment by Sherrinford · 2023-02-08T18:52:05.710Z · LW(p) · GW(p)

Thanks, Christian! Of course you are right! In this case, the shortform entry was meant to be funny for people who understand both languages, like you. But I see that it can also be misleading for people who do not.

comment by Sherrinford · 2021-05-18T09:56:09.392Z · LW(p) · GW(p)

Saying that "the control system" does something is about as informative as saying that "emergence" is the cause of something.

Replies from: Measure
comment by Measure · 2021-05-18T19:11:50.578Z · LW(p) · GW(p)

"Control system" means something a bit more specific than "whatever it is that causes the behavior of the system". A control system has an input, an output, and a set point. It varies the output based on the difference (error) between the input and the desired set point (sometimes it has terms for the derivative or for the accumulation of the error, but not always). In practice this means that it is hard to move the output away from the set point since the control system will respond by pushing in the opposite direction.

Replies from: Sherrinford
comment by Sherrinford · 2021-05-18T19:35:56.624Z · LW(p) · GW(p)

In the context in which I have been seeing the statement that "the" control system moves a certain behavior, there is nothing but the claim that the control system does exactly what it is claimed to do. No precise explanation. No precise prediction (sure, the claim is that the output moves towards the set point, but nothing about the time dimension). If anything, the term is always used to "explain" behavior ex post.

comment by Sherrinford · 2020-07-07T16:44:13.235Z · LW(p) · GW(p)

Currently reading Fooled by Randomness, almost 20 years after it was published. By now I have read about a third of it. So far, it seems neither very insightful nor dense; all the insights (or observations) seem to be what you can read in the (relatively short) Wikipedia article. It is also not extremely entertaining.

I wonder whether it was a revealing, revolutionary book back in the day, or whether it reads differently for people with a certain background (or lack thereof), such that my impression is, in some sense, biased. I also wonder whether Taleb's other books are better, but given the praise that FbR seems to have received, I guess it is not likely that The Black Swan is fundamentally different from FbR.

Replies from: rudi-c
comment by Rudi C (rudi-c) · 2020-07-12T18:18:00.838Z · LW(p) · GW(p)

I read Black Swan early in my introduction to heuristics and biases, in my teens. I remember that the book was quite illuminating for me, though I disliked Taleb's narcissism and his disrespect for the truth. I don't think it was so much "insightful" as helping me internalize a few big insights. The book's content definitely overlaps a lot with beginner rationality, so you might not find it worthwhile after all. I read a bit of FbR and about half of Antifragile as well, but I found those much less interesting.

An aside: Taleb talks about general topics. It's hard to say new things in that market (it's saturated), and the best parts of his new insights have already become part of the common lexicon.

comment by Sherrinford · 2020-06-13T18:52:24.602Z · LW(p) · GW(p)

New results published in Cell suggest that SARS-CoV-2 enters the body via the nasal mucosa and then reaches deep parts of the lung via body fluids, and possibly the brain. A second part of the same study suggests that people who had SARS or MERS may have partial immunity against SARS-CoV-2. (Disclaimer: I only read a newspaper summary.)

comment by Sherrinford · 2024-09-26T19:00:37.395Z · LW(p) · GW(p)

My impression is that the OpenAI thing has a larger effective negative impact on the world than the FTX thing, but fewer people will notice it.

Replies from: habryka4
comment by habryka (habryka4) · 2024-09-26T19:43:28.345Z · LW(p) · GW(p)

The OpenAI thing also seems pretty huge. A ton of random YouTube channels I watch are covering it, there is a high-profile lawsuit from Elon, and my overall sense is that more people are following along than did with the FTX thing.

Replies from: Sherrinford
comment by Sherrinford · 2024-09-27T05:45:54.200Z · LW(p) · GW(p)

That would be great, but maybe it is covered much more in your bubble than in large newspapers etc.? Moreover, if this is covered like the OpenAI-internal fight last year, the typical news-outlet comment will be: "Crazy sci-fi cult paranoid people are making noise about this totally sensible change in the institutional structure of this very productive firm!"

Replies from: nikolas-kuhn
comment by Amalthea (nikolas-kuhn) · 2024-09-27T07:52:03.665Z · LW(p) · GW(p)

It's certainly covered by the NYT, although their angle is "OpenAI is growing up".

comment by Sherrinford · 2023-02-16T11:02:07.326Z · LW(p) · GW(p)

People worried about AI alignment: At some point, an AI might succeed in convincing people to give it access to the internet!

Big internet company seeing an AI: Hm, we could just build the AI into our search engine.

comment by Sherrinford · 2022-11-19T20:14:12.358Z · LW(p) · GW(p)

A list of some beliefs of which I changed my subjective probabilities because of FTX (in one direction or the other):

  • "When rich EA community people give money to political candidates, fear that they just want to buy influence so as to avoid regulation is misplaced."
  • "The EA community is very good at understanding and managing risks, including low-probability high-effect risks."
  • "When success stories are too good to be true, they aren't true."
  • "Crypto is a good thing."
  • "People in general understand risk diversification."

comment by Sherrinford · 2021-09-10T15:10:12.156Z · LW(p) · GW(p)

Seeing a blog post from 2019 that called the UK and Australia fascistic because melatonin is a prescription medicine there makes me update slightly in the direction that discourse norms in the EA/LW communities did not decline due to corona as much as I had thought.

comment by Sherrinford · 2021-07-27T16:47:19.912Z · LW(p) · GW(p)

Having put some thought into the 80,000 Hours career planning document, I think it is time for the next "some weeks of thinking" projects.

Either it's gonna be similar planning processes:

  • the life of my kids, 20 years in the future
  • where should I spend the next 5 years? And where the next 20?
  • a plan for personal finances
  • a health plan
  • a sports plan
  • whom to spend time with
  • personal volunteering/politics/ea
  • writing projects

Or it's gonna be concrete learning projects:

  • GTD
  • a language
  • Python or R
  • project management
  • ... (some of them more like refreshers)

Your thoughts are appreciated.

Replies from: zac-hatfield-dodds, przemyslaw-czechowski
comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2021-07-27T23:55:19.998Z · LW(p) · GW(p)

Kids, location, finances, and health are all extraordinarily high-leverage to think about - at least if you act on your plans.

Personally I'd start with personal finance, mostly because it should be pretty quick and simple to sort out (not always easy to stick to, but simple). The personalfinance subreddit has good flowcharts to follow, and I wrote a list of investing resources here [LW(p) · GW(p)] if you want more detail than "buy index funds and get on with the rest of your life".

comment by Jan Czechowski (przemyslaw-czechowski) · 2021-08-01T13:38:49.520Z · LW(p) · GW(p)

Did you consider looking at it from an "options" rather than a "goals" perspective? Rather than defining goals and looking for the optimal path to get there, you can look at / brainstorm exploitable options that you have available and that seem to have high returns, and then prioritize them. I recently spent half a day writing down cool ideas for things to do, then collected them in Todoist, and since then, whenever I have time, I go through them. And add something new.

Replies from: Sherrinford
comment by Sherrinford · 2021-08-02T19:16:49.211Z · LW(p) · GW(p)

That seems like an interesting approach that may potentially make good use of personal energy.

comment by Sherrinford · 2021-05-23T19:23:10.957Z · LW(p) · GW(p)

If lesswrong.com had to delete itself for some reason, where would you go instead?

Replies from: Viliam
comment by Viliam · 2021-05-25T10:01:47.471Z · LW(p) · GW(p)

I suppose many people would then move to astralcodexten.substack.com , and there I would ask the same question in an Open Thread.

Replies from: Sherrinford
comment by Sherrinford · 2021-05-25T12:37:09.528Z · LW(p) · GW(p)

Do you think that commenting in Open Threads is very similar to posting and commenting here?

Replies from: Viliam
comment by Viliam · 2021-05-25T15:02:00.371Z · LW(p) · GW(p)

More like, I would ask in the ACX Open Thread "what is the place you go to now that LW is gone?" And then I would follow the crowd.

Posting in ACX Open Threads as such... well, you get a smart audience, but there is simply too much content. Reading those threads is a full-time job.

Replies from: Sherrinford
comment by Sherrinford · 2021-05-27T15:28:06.777Z · LW(p) · GW(p)

Interesting focal point, though I wonder how strong the overlap is.

comment by Sherrinford · 2020-07-29T07:10:25.733Z · LW(p) · GW(p)

The results of Bob Jacobs' LessWrong survey [LW · GW] are quite interesting. It's a pity the sample is so small.

The visualized results (link in his post) are univariate, but I would like to highlight some things:

  • 49 out of 56 respondents identify as "White".
  • 53 out of 59 respondents were born male, and 46 out of 58 identify as male cisgender.
  • 47 of 59 identify as heterosexual (comparison: https://en.wikipedia.org/wiki/Demographics_of_sexual_orientation).
  • 1 out of 55 works in a "blue collar" profession.
  • Most people identify as "left of center" in some sense. At the same time, 30 out of 55 identify as "libertarian", but multiple answers were allowed.
  • 31 of 59 respondents think they are at least "upper middle class"; 22 of 59 think the family they were raised in was "upper middle class". (Background: In social science surveys, wealthy people usually underestimate their position, and poor people overestimate theirs, but to a lesser extent.)

I would not have guessed the left-of-center identification, and I would have slightly underestimated the share of male (cisgender).

Replies from: Viliam
comment by Viliam · 2020-08-01T20:59:05.901Z · LW(p) · GW(p)

"I would not have guessed the left-of-center identification"

If you have 9 people who identify as left-wing and 1 person who identifies as right-wing, many people will hysterically denounce the entire group as "extreme right", based on the fact that the 1 person wasn't banned.

Furthermore, if you have people who identify as left-wing, but don't fully buy the current Twitter left-wing orthodoxy, they too will be denounced by some as "extreme right".

This skews the perception.

Replies from: Sherrinford
comment by Sherrinford · 2020-08-02T06:19:57.788Z · LW(p) · GW(p)

I don't think that fits what I am talking about:

  1. The survey was non-binary. Your first claim does not distinguish extremes and moderates.
  2. The survey was anonymous. You cannot ban anonymous people.
  3. I see no reason why people should have overstated their leftishness.
  4. If your statement is meant to explain why my perception differs from the result, it does not fit. My perception based on posts and comments would have been relatively more rightwing, less liberal / social democratic / green etc.
  5. I don't see where leftwing lesswrongers are denounced as rightwing extremists. In particular, I don't see where this explains people identifying as leftwing in the survey.
Replies from: Viliam
comment by Viliam · 2020-08-02T15:38:13.808Z · LW(p) · GW(p)

My model is that in the USA most intelligent people are left-wing, especially when you define "left-wing" to mean the left 50% of the political spectrum, not just the extreme. And there seem to be many Americans on Less Wrong, just like on most English-speaking websites.

(Note that I am not discussing here why this is so. Maybe the left-wing is inherently correct. Or maybe the intelligent people are just more likely to attend universities where they get brainwashed by the establishment. I am not discussing the cause here, merely observing the outcome.)

So, I would expect Less Wrong to be mostly left-wing (in the 50% sense). My question is, why were you surprised by this outcome?

"I don't see where leftwing lesswrongers are denounced as rightwing extremists."

For example, "neoreaction" is the only flavor of politics that is mentioned in the Wikipedia article about LessWrong. It does not claim that it is the predominant political belief, and it even says that Yudkowsky disagrees with them. Nonetheless, it is the only political opinion mentioned in connection with Less Wrong. (This is about making associations rather than making arguments.) So a reader who does not know how to read between the lines properly, might leave with an impression that LW is mostly right-wing. (Which is exactly the intended outcome, in my opinion.) And Wikipedia is not the only place where this game of associations is played.

Replies from: Sherrinford, Dirichlet-to-Neumann
comment by Sherrinford · 2020-08-02T19:20:25.174Z · LW(p) · GW(p)
"My model is that in USA most intelligent people are left-wing. Especially when you define "left-wing" to mean the 50% of the political spectrum, not just the extreme."

I agree. (I assume that by political spectrum you refer to something "objective"?)

"And there seem to be many Americans on Less Wrong, just like on most English-speaking websites."

Given the whole Bay-area thing, I would have expected a higher share. In the survey, 37 out of 60 say they are residing in the US.

"So, I would expect Less Wrong to be mostly left-wing (in the 50% sense). My question is, why were you surprised by this outcome?"

Having been in this forum for a while, my impressions based on posts and comments led me to believe that fewer than 50% of people on LessWrong would say of themselves that they are on values 1-5 of a 1-10 scale from left-wing to right-wing. In fact, 41/56 did so.

"For example, "neoreaction" is the only flavor of politics that is mentioned in the Wikipedia article about LessWrong. It does not claim that it is the predominant political belief, and it even says that Yudkowsky disagrees with them. Nonetheless, it is the only political opinion mentioned in connection with Less Wrong. (This is about making associations rather than making arguments.) So a reader who does not know how to read between the lines properly, might leave with an impression that LW is mostly right-wing. (Which is exactly the intended outcome, in my opinion.) And Wikipedia is not the only place where this game of associations is played."

The Wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated. I don't agree on the "intended outcome", or rather, I do not see why I should believe that.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2020-08-02T20:20:33.091Z · LW(p) · GW(p)

The wikipedia article, as far as I can see, explains in that paragraph where the neoreactionary movement originated.

It's not true, though! The article claims: "The neoreactionary movement first grew on LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology".

I mean, okay, it's true that we've had discussions on eugenics [LW · GW] and evolutionary psychology [LW · GW], and it's true that a few of the contrarian nerds who enthusiastically read Overcoming Bias back in the late 'aughts were also a few of the contrarian nerds who enthusiastically read Unqualified Reservations. But "first grew" (Wikipedia) and "originated" (your comment) really don't seem like a fair summary of that kind of minor overlap in readership. No one was doing neoreactionary political theorizing on this website. Okay, I don't have an exact formalization of what I mean by "no one" in the previous sentence because I haven't personally read and remembered every post in our archives [? · GW]; maybe there are nonzero posts with nonnegative karma that could be construed to match this description. Still, in essence, you can only make the claim "true" by gerrymandering the construal of those words [LW · GW].

And yet the characterization will remain in Wikipedia's view of us—glancing at the talk page, I don't expect to win an edit war with David Gerard [LW(p) · GW(p)].

Replies from: jimrandomh, Sherrinford
comment by jimrandomh · 2021-03-08T00:46:04.722Z · LW(p) · GW(p)

"I don't expect to win an edit war with David Gerard."

Now might be a good time to revisit that belief. He just got a topic ban on "editing about Scott Siskind, broadly construed". I make the case on the LessWrong article talk page that this topic ban could be construed as extending to the LessWrong article, and also that there is a separate case against him editing that article, based on behavior similar to the behavior that earned him the topic ban.

Replies from: Sherrinford, Sherrinford
comment by Sherrinford · 2021-03-08T09:34:55.171Z · LW(p) · GW(p)

It would probably take a lot of time to even understand what is and what is not considered to be in accordance with Wikipedia rules. I note that, as in any other volunteer organization, a lot depends on who wants to put more time and effort into fighting for his/her convictions.

Replies from: Viliam
comment by Viliam · 2021-05-25T15:48:32.044Z · LW(p) · GW(p)

The Wikipedia rules are... meaningful, if you interpret them in good faith. But if you are using arguments as soldiers [? · GW], then in pretty much every situation you can find a rule that can be used in favor of any side. The key is to find it, and to express it using words familiar to other admins.

For example, if a person edits an article they are somehow related to, it is either a good thing (we want to encourage experts to edit Wikipedia) or a bad thing (conflict of interest). Depending on whether you agree with the person or not, you choose the relevant rule, and insist that it applies. Similarly, most content can be removed as not important (Wikipedia is an encyclopedia, not a collection of everything) or kept as important to some people (Wikipedia is not on paper, so we do not need to worry about the number of pages). Short articles can be removed (as useless) or kept (because even a short article encourages people to extend it). Then there is a debate about which sources are considered "reliable", scientifically (demand higher rigor if you disagree with the conclusion, anything goes if you agree) and politically (conservative or neutral). In short, the rules do not enforce themselves; they need people to enforce their interpretation.

As you said, if you visibly volunteer a lot, you gain status within the community. When a conflict escalates, the higher-status person has a much better chance to win.

More experienced people can use more sophisticated techniques. For example, if you are high status and you break enough rules that there is a realistic chance you might get banned, say: "Guys, I honestly believe I did nothing wrong, but I value our friendship and peace so much that I have decided to stop editing the article, because I love this community so much." Then everyone rejoices that the problem was resolved without having to ban a high-status person. Two weeks later you change your mind and start editing the article again. If anyone proposes a ban again, your friends will dismiss it, because "we already had this debate, stop wasting everyone's time". (David Gerard tried to play this card, and almost succeeded.)

There is also the passive-aggressive art of treating opponents with subtle disrespect and framing their activities in the worst possible light, and when someone does the same to you, crying "assuming good faith is the fundamental principle of Wikipedia debates". Generally, accusing your opponents of breaking Wikipedia rules is a good way to gain support among admins. For example, if you complain about Wikipedia bias on a different website, it can be linked as proof of "brigading".

Original research is discouraged on Wikipedia, but of course this itself becomes a topic of debate (anything your opponent says, unless it is literally a quote, is original research; but of course you cannot really write an encyclopedic article as a concatenation of quotes). You can play the game of making parts of an article longer or shorter depending on whether they put the target in a favorable or unfavorable light. I could go on... but generally, there are all kinds of tricks, and the high-status volunteer is more likely to know them, and more likely to be forgiven for using them.

comment by Sherrinford · 2021-03-08T10:22:16.944Z · LW(p) · GW(p)

I had to sigh when I read "it can be hard to find editors who don't have a strong opinion about the person. But this is very far from that, likely one reason why the NYT actually used David Gerard as a source".

comment by Sherrinford · 2020-08-03T06:02:51.603Z · LW(p) · GW(p)

Interesting. I had maybe read the Wikipedia article a long time ago, but it did not leave any impression in my memory. Now rereading it, I did not find it dramatic, but I see your point.

Tbh, I still do not fully understand how Wikipedia works (that is, I do not have a model of who determines how an article develops). And the "originated" (ok, maybe that is only almost and not fully identical to "first grew") is just what I got from the article. The problem with the association is that it is hard to determine what even makes things mentionable, but once somebody publicly has to distance himself from something, this indicates a public kind of association.

Further reading the article, my impression is that it indeed cites things that count as sources on Wikipedia for its claims. If the impression of LessWrong is distorted, then this may be a problem of what kinds of things on LessWrong are covered by media publications? Or maybe it is all just selective citing, but then it should be possible to cite other things.

Replies from: Viliam
comment by Viliam · 2020-08-03T14:36:35.797Z · LW(p) · GW(p)

In theory, Wikipedia strives to be impartial. In practice, the rules are always only as good as the judges who uphold them. (All legal systems involve some degree of human judgment somewhere in the loop, because it is impossible to write a set of rules that covers everything and doesn't allow some clever abuse. That's why we talk about the letter and the spirit of the law.)

How to become a Wikipedia admin? You need to spend a lot of time editing Wikipedia in a way other admins consider helpful, and you need to be interested in getting the role. (Probably a few more technical details I forgot.) The good thing is that by doing a lot of useful work you send a costly signal that you care about Wikipedia. The bad thing is that if certain political opinion becomes dominant among the existing admins, there is no mechanism to fix this bias; it's actually the other way round, because edits disagreeing with the consensus would be judged as harmful, and would probably disqualify their author from becoming an admin in the future.

I don't assume bad faith from most Wikipedia editors. Being wrong about something feels the same from inside as being right; and if other people agree with you, that is usually a good sign. But if you have a few bad actors who can play it smart, who can pretend that their personal grudges are how they actually see the world... considering that other admins already see them as part of the same team, and the same political bias means they already roughly agree on who are the good guys and who are the bad guys... it is not difficult to defend their decisions in front of a jury of their peers. An outsider has no chance in this fight, because the insider is fluent in the local lingo. Whatever they want to argue, they can find a wiki-rule pointing in that direction; of course it would be just as easy for them to find a wiki-rule pointing in the opposite direction (e.g. if you want to edit an article about something you are personally involved with, you have a "conflict of interest", which is a bad thing; if I want to do the same thing, my personal involvement makes me a "subject-matter expert", which is a good thing; your repetitive editing of the article to make your point is "vandalism", my repetitive editing of the article to make an opposite point is "reverting vandalism"); and then the other admins will nod and say: "of course, if this is what the wiki-rules say, our job is to obey them".

The specific admin who is so obsessed with Less Wrong is David Gerard from RationalWiki. He has kept a grudge for almost a decade, ever since he added Less Wrong to his website as an example of pseudoscience, mostly because of the quantum physics sequence. After it was explained to him that "many worlds" is actually one of the mainstream interpretations among scientists, he failed to say oops [LW · GW], and continued in the spirit of: well, maybe I was technically wrong about the quantum thing, but still... and spent the last decade trying to find and document everything that is wrong with Less Wrong. (Roko's Basilisk -- a controversial comment that was posted on LW once, deleted by Eliezer along with the whole thread, then posted on RationalWiki as "this is what people at Less Wrong actually believe". Because the fact that it was deleted is somehow proof that deep inside we actually agree with it, but we don't want the world to know. Neoreaction -- a small group of people who enjoyed debating their edgy beliefs on Less Wrong, were considered entertaining for a while, then became boring and were kicked out. Again, the fact that they were not kicked out sooner is evidence of something dark.) Now if you look at who makes the most edits on the Wikipedia page about Less Wrong: it's David Gerard. If you go through the edit history and look at the individual changes, most of them are small and innocent, but they are all in the same direction: the basilisk and neoreaction must remain in the article, no matter how minuscule they are from the perspective of someone who actually reads Less Wrong; on the other hand, mentions of effective altruism must be kept as short as possible. All of this is technically true and defensible, but... I'd argue that the Less Wrong described by the Wikipedia article does not resemble the Less Wrong its readers know, and we have David Gerard and his decade-long work to thank for this fact.

"If the impression of LessWrong is distorted, then this may be a problem of what kinds of things on LessWrong are covered by media publications?"

True, but most of the information in the media originates from RationalWiki, where it was written by David Gerard. A decade ago, RationalWiki used to be quite high in Google rankings, if I remember correctly; any journalist who did a simple background check would find it. Then he or she would ask about the juicy things in the interview, and regardless of the answer, the juicy things would be mentioned in the article. Which means that the next journalist would now find them both at RationalWiki and in the previous article, which means that he or she would again devote a part of the interview to them, reinforcing the connection. It is hard to find an article about Less Wrong that does not mention Roko's Basilisk, despite the fact that it is discussed here rarely, and usually in the context of "guys, I have read about this thing called Roko's Basilisk in the media, and I can't find anything about it here, could you please explain to me what this is about?"

Part of this is the clickbait nature of the media: given the choice between debating neoreaction and debating the technical details of the latest decision theory, it doesn't matter which topic is more relevant to Less Wrong per se; they know that their audience doesn't care about the latter. And part of the problem with Wikipedia is that it is downstream of the clickbait journalism. They try to use more serious sources, but sometimes there is simply no other source on the topic.

Replies from: Sherrinford
comment by Sherrinford · 2020-08-03T21:13:31.201Z · LW(p) · GW(p)

Thanks for the history overview! Very interesting. Concerning the Wikipedia dynamics, I agree that this is plausible, as it is a plausible development of nearly every volunteer organization, in particular if they try to be grassroots-democratic. The Wikipedia-media problem is known (https://xkcd.com/978/), though in this particular case I was a bit surprised about the "original research" and "reliable source" distinction. Many articles there did not seem very "serious". On the other hand, during this whole "lost in hyperspace", I also found "A frequent poster to LessWrong was Michael Anissimov, who was MIRI's media director until 2013." (https://splinternews.com/the-strange-and-conflicting-world-views-of-silicon-vall-1793857715), which was news to me. In internet years, all this is so long ago that I did not have any such associations. (I would rather have expected LessWrong to be notable for demanding the dissolution of the WHO, but probably that is not yet clickbaity enough.)

comment by Dirichlet-to-Neumann · 2021-05-25T14:30:40.165Z · LW(p) · GW(p)

My model is that what is called "left of center" in the USA is "far right, at least economically"* in Europe (and what the USA calls "socialism" is "what everyone agrees with").

*"Economically" does a fair bit of work here - on issues like immigration, for example, the left-right divide is the same as in the US.

comment by Sherrinford · 2020-06-01T09:28:12.449Z · LW(p) · GW(p)

You would hope that people actually saw steelmanning as an ideal to follow. If that was ever true, the corona pandemic and the policy response seem to have killed the demand for this. It seems to become acceptable to attribute just any kind of seemingly-wrong behavior to either incredible stupidity or incredible malice, both proving that all institutions are completely broken.

Replies from: Dagon
comment by Dagon · 2020-06-01T16:14:04.189Z · LW(p) · GW(p)

I like the word "institurions". Some mix of institutions, intuitions, and centurions, and I agree that they're completely broken.

Replies from: Sherrinford
comment by Sherrinford · 2020-06-01T16:48:44.791Z · LW(p) · GW(p)

:-) Thanks. But I corrected it.

comment by ChristianKl · 2021-09-11T16:29:37.925Z · LW(p) · GW(p)

I remember reading a post about how the US Navy's number of ships went down and the average age of the ships went up a lot over time. It was about how the US military in general spends a lot of its money very inefficiently.

Unfortunately, it might have been a decade since I read it, and I can't find it. Does anybody here know the post I'm referring to?

Replies from: Sherrinford
comment by Sherrinford · 2021-09-11T18:30:24.959Z · LW(p) · GW(p)

I assume you put that as a comment on my shortform by accident? Or am I misunderstanding something?

Anyway, maybe you mean this: https://www.defensenews.com/naval/2018/06/07/the-us-navys-ships-are-getting-old-they-might-be-getting-a-lot-older/

Replies from: ChristianKl
comment by ChristianKl · 2021-09-11T21:57:30.010Z · LW(p) · GW(p)

Yes, it was on your shortform by accident. I wasn't even aware that it's possible to make a top-level post on others' shortforms.

comment by Sherrinford · 2021-07-26T19:54:44.471Z · LW(p) · GW(p)

It's funny that in the interview episode "Rob Wiblin on how he ended up the way he is" of the 80,000 Hours podcast, Misha Saul says that parents don't have much of an influence on the development of their own children (biodeterminism), but at the same time the whole interview is about important, formative experiences.

Replies from: pktechgirl, przemyslaw-czechowski
comment by Elizabeth (pktechgirl) · 2021-07-26T22:33:06.662Z · LW(p) · GW(p)

The thing the available evidence can show us is that variations in parenting choices don't explain much variation in outcomes among the populations studied, which tend to be WEIRD and, in the case of adoption studies, have passed an explicit bar for estimated parenting quality. This is not the same as "parents don't have much of an influence on the development of their own children", which is easily disproven by looking at the 5th-percentile parents.

Replies from: Sherrinford
comment by Sherrinford · 2021-07-27T16:37:24.545Z · LW(p) · GW(p)

I am not sure I exactly understand what that says. Something like "parenting choices are only important if they are really bad"?

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2021-07-27T20:59:24.699Z · LW(p) · GW(p)

Not quite. The point is that the studies are mostly done on parents with pretty similar styles, so there isn't enough variation in parenting style to lead to detectable variation in outcomes. It's the equivalent of studying how water affects plant growth, but giving every plant within 5% of what you think its ideal is, while varying other factors (light, nutrients, plant genetics) tremendously. Your results don't show that water doesn't matter for plants; they show that the impact of that small variation in water is lost amongst the noise.
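
A toy simulation can make this restriction-of-range point concrete. This is a minimal sketch with invented numbers (not from any of the studies discussed): "water" genuinely matters more per unit than "light", but because it only varies within a narrow band, its correlation with growth comes out much smaller.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)

def growth(water, light):
    # Ground truth: per unit, water matters four times as much as light.
    return 2.0 * water + 0.5 * light + random.gauss(0, 20)

# "Water" (parenting) is restricted to 95-105% of its ideal,
# while "light" (everything else) varies widely.
water = [random.uniform(95, 105) for _ in range(2000)]
light = [random.uniform(0, 100) for _ in range(2000)]
g = [growth(w, l) for w, l in zip(water, light)]

print(statistics.correlation(water, g))  # small: too little variation to detect
print(statistics.correlation(light, g))  # much larger: wide variation shows through
```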

Replies from: Sherrinford
comment by Sherrinford · 2021-07-28T06:31:18.961Z · LW(p) · GW(p)

Very interesting, thanks.

comment by Jan Czechowski (przemyslaw-czechowski) · 2021-07-26T20:16:56.085Z · LW(p) · GW(p)

The best book I have ever only read the review of: Selfish Reasons to Have More Kids [LW · GW]. It seems to have a good summary of the evidence on upbringing, if you're interested in the subject.

Replies from: Sherrinford
comment by Sherrinford · 2021-07-26T21:40:21.915Z · LW(p) · GW(p)

Thanks. Which evidence for upbringing do you mean in this case? I don't doubt that everybody is in some sense "biodetermined", but it's ironic that the podcast episode then mainly talks about certain formative experiences the two speakers had when they met each other in high school, and how Rob was influenced by his mother and his father as role models.

comment by Sherrinford · 2024-10-02T14:47:01.167Z · LW(p) · GW(p)

In the last weeks, I saw some posts or comments arguing why it would be in the self-interest of an extremely powerful AI to leave some power or habitat or whatever to humans. This seems to try to answer the broader question "Why should an AI do things that we want even though we are powerless?" But it skips the complicated question "What do we actually want an AI to do?" If we can answer that second question, then maybe the whole "please don't do things that we really do not want" quest becomes easier to solve.

comment by Sherrinford · 2024-09-27T06:35:39.048Z · LW(p) · GW(p)

When I write a post and select text, a menu appears where I can select text appearance properties etc. However, in my latest post, this menu does not appear when I edit the post and select text. Any idea why that could be the case?

Replies from: Measure
comment by Measure · 2024-09-27T12:56:57.174Z · LW(p) · GW(p)

Did you switch to the markdown editor?

Replies from: Sherrinford
comment by Sherrinford · 2024-09-27T13:58:38.995Z · LW(p) · GW(p)

Sorry, but where/how would I do that?

Replies from: Measure
comment by Measure · 2024-09-27T20:00:43.562Z · LW(p) · GW(p)

There should be a dropdown menu at the left side in the input box (opposite the "submit" button).

Replies from: Sherrinford
comment by Sherrinford · 2024-09-28T07:26:31.158Z · LW(p) · GW(p)

Thanks for helping. In the end, I deleted the post and started from scratch and then it worked.

comment by Sherrinford · 2024-09-07T12:35:50.752Z · LW(p) · GW(p)

Metaculus should adjust election forecasting questions such that forecasters are forced to make their forecasts add up to 100% over all options (with an additional option "none of the above").

Replies from: Dagon
comment by Dagon · 2024-09-07T16:54:58.589Z · LW(p) · GW(p)

What goals (for Metaculus, for observers, or for predictors) does that serve?  

Replies from: Sherrinford
comment by Sherrinford · 2024-09-07T17:12:59.840Z · LW(p) · GW(p)

The goal would be that forecasters are forced to make internally consistent forecasts. That should reduce noise, firstly by reducing unintentional errors, secondly by cleaning up probabilities (by quasi-automatically adjusting the percentages of candidates who may previously have been considered low-but-relevant-probability candidates), and thirdly by crowding out forecasters who do not want to give consistent forecasts (which I assume correlates with low-quality forecasts). It should also make forecasts more legible and thus increase the demand for Metaculus.

Metaculus currently lists 20 people who could be elected US President ("This question will resolve as Yes for the person who wins the 2024 US presidential election, and No for all other options.", "Closes Nov 7, 2024"), and the sum of their probabilities is greater than 104%. Either this is not consistent, or I don't understand it; and, with all due modesty, if the latter is the reason for my confusion, then many people in the target audience will also be confused.
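
For illustration, a minimal sketch of the kind of consistency enforcement I have in mind (the candidate names and numbers are made up; the 1.04 total mirrors the greater-than-104% case above):

```python
def normalize(forecast):
    """Rescale raw probabilities so they sum to exactly 1."""
    total = sum(forecast.values())
    return {option: p / total for option, p in forecast.items()}

raw = {
    "Candidate A": 0.55, "Candidate B": 0.45,
    "Candidate C": 0.03, "none of the above": 0.01,
}  # sums to 1.04
consistent = normalize(raw)
print(sum(consistent.values()))  # 1.0 after rescaling
```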

comment by Sherrinford · 2024-07-10T08:42:06.186Z · LW(p) · GW(p)

Noah Smith writes about 

"1) AI flooding social media with slop, and 2) foreign governments flooding English-language social media with disinformation. Well, if you take a look at the screenshot at the top of this post, you’ll see the intersection of the two!"

Check the screenshot in his post and tell me whether you see a rabbit or a duck.

I see a person called A. Mason writing on Twitter and ironically subverting the assumption that she is a bot, by answering with the requested poem but letting it end with a sentence about Biden that confirms her original statement and doesn't rhyme.

Of course, this could also be an AI being so smart that it can create exactly that impression. This would be the start of the disintegration of social reality.

Replies from: whestler, quetzal_rainbow
comment by whestler · 2024-07-10T13:32:11.827Z · LW(p) · GW(p)

I had a look, and no, I read it as a bot. I think if it were a human writing a witty response, they would likely have: 

a) used the format to poke fun at the other user (Toby)

b) made the last lines rhyme.

Also, I wanted to check further so I looked up the account and it's suspended. https://x.com/AnnetteMas80550
Not definitive proof, but certainly evidence in that direction.

Replies from: Sherrinford
comment by Sherrinford · 2024-07-10T15:02:05.289Z · LW(p) · GW(p)

That's interesting, because 

b) Wouldn't an LLM let it end in a rhyme exactly because that is what a user would expect it to do? Therefore, I thought not letting it end in a rhyme is like saying "don't annoy me, now I am going to make fun of you!" 

a) If my reading of b) is correct, then the account DID poke fun at the other user.

So, in a way, your reply confirms my rabbit/duck interpretation of the situation, and I assume people will have many more rabbit/duck situations in the future.

Of course you are right that the account suspension is evidence.

Replies from: whestler
comment by whestler · 2024-07-10T15:10:48.301Z · LW(p) · GW(p)

I think it's very likely we'll see more situations like this (and more ambiguous situations than this). I recall a story of an early Turing test experiment using hand-coded scripts some time in the 2000s, where one of the most convincing chatbot contestants was one which said something like:

"Does not compute, Beep boop! :)" 

pretending to be a human pretending to be a robot for a joke.

comment by quetzal_rainbow · 2024-07-10T16:04:43.361Z · LW(p) · GW(p)

On LiveJournal, if a username has numbers like that at the end, it is usually a bot.

Replies from: Sherrinford
comment by Sherrinford · 2024-07-10T18:14:48.437Z · LW(p) · GW(p)

That is often the case, but not always, so it counts as a bit of evidence but not very much. Otherwise it would be very easy to automatically delete bots on Twitter and similar platforms.

comment by Sherrinford · 2024-06-12T21:39:04.325Z · LW(p) · GW(p)

How much does technological progress of consumer goods increase happiness? 

Sure, I prefer a modern TV to the TV we had in the '90s if I have to choose, but if I compare "pleasure I had from watching Star Trek TNG, which was only possible at 3 pm" to "pleasure I have from watching one of the many currently available TV shows at any time of the day", I am not really sure the second one feels much better.

comment by Sherrinford · 2024-02-20T07:30:12.839Z · LW(p) · GW(p)

The "Snake cult of consciousness" theory sounds extremely fascinating. At the same time, it also sounds like the explanations of why the pyramids were built by aliens. For laypeople, it is hard to distinguish between important insights and clever nonsense.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2024-02-20T14:31:54.869Z · LW(p) · GW(p)

"Snake cult of consciousness"

What's your p(Thulsa Doom)?

comment by Sherrinford · 2023-06-08T21:26:26.222Z · LW(p) · GW(p)

Why do nuclear-energy fan articles often mention France as a positive example without discussing the drastic reduction of French nuclear power generation in 2022?

comment by Sherrinford · 2023-06-01T21:27:05.641Z · LW(p) · GW(p)

In the recent EconTalk podcast with Tyler Cowen, Cowen explicitly and strictly demands a mathematical model of AI risk, claiming that something like that does not exist.

At the same time, he sees "Hayekian" arguments as a kind of benchmark. As far as I know, there is no mathematical benchmark model of the classical Hayek argument.

The same is true for Cowen's demand for loyalty to the US constitution. There is no mathematical model for that.

All of Cowen's claims and demands are asymmetrical. The doomers are emotional, he says. The non-doomers are just reasonable.

comment by Sherrinford · 2023-05-31T20:55:55.213Z · LW(p) · GW(p)

One thing it might be helpful for powerful beings to learn: "It's in general not okay to enforce your wishes on others."

However, ethics is complicated and you will probably find many cases where enforcing your wishes on others is actually okay.

Moreover, if the learning dataset is humanity's behavior, then it's probably a problem that enforcing takes place all the time.

comment by Sherrinford · 2023-05-31T07:01:41.022Z · LW(p) · GW(p)

Typical fiction has probably framed thinking about the development of intelligence in non-humans in bad ways.

  • C-3PO from Star Wars seems like a nervous language nerd.
  • Data from Star Trek mostly seems like a human with a built-in calculator.
  • The Planet of the Apes suggests that getting intelligence means getting a draw from a human-population IQ distribution.
  • All other artificial intelligences can be outsmarted. Otherwise the movie would not work.
  • Data, C-3PO, etc. never have wifi.

Replies from: Dagon
comment by Dagon · 2023-05-31T15:43:32.481Z · LW(p) · GW(p)

I'm not sure the causality is that clear - typical fiction pulls from cultural norms at least as much as it pushes them.  I fully agree with your underlying belief that common thinking about intelligence is massively oversimplified.  Note that this applies to HUMAN intelligence as well.

comment by Sherrinford · 2023-05-20T15:03:29.434Z · LW(p) · GW(p)

I sometimes read claims like "doing strength training more than [insert small number of] times per week is useless, see scientific evidence". Another impression is: people doing lots of exercise per week get big muscles. How do these observations relate to reality? What is the fitness production function, or the optimal schedule, if one really wanted to become something like a Schwarzenegger? (I don't. This is just curiosity.)

comment by Sherrinford · 2022-11-10T20:31:07.363Z · LW(p) · GW(p)

Several posts in this forum or the EA Forum seem to suggest that it's kind of standard among people in the community to have invested a large share of their personal wealth into crypto, and into FTX in particular.

Why should this be assumed? Are there statistics on that?

I used to believe that most people have a diversified portfolio, both inside and outside of the communities. Isn't that the case?

Replies from: Dagon
comment by Dagon · 2022-11-10T22:18:20.318Z · LW(p) · GW(p)

I think it's ... not common nor standard, but not uncommon that especially younger people in techie communities have an outsized share of their net worth in crypto.  I doubt they INVESTED the majority, but if they invested a small-ish amount very early, it will have become the lion's share in the explosion of a few years ago.  

Depending on the population/segment you're talking about, most people have no significant portfolio, just savings and debt.  Many will add real-estate before financial investments.  Then there are a fair subset of (again, mostly younger) people who are RobinHood-style traders.  Buy for fun, maybe make a lot, maybe lose most of what you put in.  It's a minority who have enough assets and long-term focus to actually design and implement a portfolio.  Of that minority, a majority is diversified via mutual funds, often in retirement accounts.

Replies from: Sherrinford
comment by Sherrinford · 2022-11-10T23:08:57.047Z · LW(p) · GW(p)

So if I understand you correctly, some techies in the LW and EA communities put some money in, and due to the strong increase in crypto prices, the worth of their crypto assets increased a lot. Now that a lot of it is dust in the wind, these people have lost a lot, but if it had only grown from a small-ish amount, they are not really worse off than they would have been with conservative investments, right?

I am asking because I still don't fully understand posts like this [EA · GW]. I think I understand the "job security" part and also would understand if it was about the monetary loss to important EA causes, both because SBF gave money to EA. But the text explicitly mentions "personal savings" even before job security, and does not explicitly mention EA causes.

Replies from: Dagon
comment by Dagon · 2022-11-10T23:24:11.431Z · LW(p) · GW(p)

Ah, misunderstood "people in the community", and answered for a much larger community of rationalist and adjacent techies.

I don't really follow the EA side of things that closely (I support the stated mission behind the name, but it seems to have become rather cultish on many topics).  I presume the post you reference is mostly about Sam Bankman-Fried and his circle - he's the CEO/founder of FTX, and also a fairly vocal/visible EA ... participant?  Donor?  Unsure of the exact descriptor, but I can imagine that many in that group are afraid for him and his loss of wealth (and loss of status due to seemingly bad behavior), as well as many of them being personally invested in that particular branch of crypto.

comment by Sherrinford · 2022-03-18T21:48:44.808Z · LW(p) · GW(p)

What are the properties that make Substack so successful? At first glance, Substack blogs seem less structured than e.g. wordpress.com blogs. On Substack, the "Archive" of a blog is just a long list. Distributing new articles via email does not seem like a spectacular feature, and in any case it should be possible on other blog platforms as well. What am I missing?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-03-20T19:14:35.157Z · LW(p) · GW(p)

I think it's less about the tech and more about the social aspect. Why is Substack succeeding? Because they paid a bunch of people to use it, and so far some of those people are continuing to make good money writing on the platform. Then others want to use it because maybe they can make the same kind of money, or at least be on the same platform where all the other cool writers they like are.

Replies from: Sherrinford
comment by Sherrinford · 2022-03-20T22:14:25.493Z · LW(p) · GW(p)

Interesting perspective, and a bit disappointing.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-03-20T22:42:22.091Z · LW(p) · GW(p)

Maybe? Succeeding in business is about more than the tech. I've worked on stuff that was definitely "better" than the competition but lost because we didn't get other stuff right: sales, marketing, etc. I guess it'd be cool if the best tech always won, but the world doesn't usually work that way.

comment by Sherrinford · 2021-09-17T09:24:22.035Z · LW(p) · GW(p)

There may be a certain risk that downvoting culture replaces "comment and discussion" culture (at least at the margins). A reason for that may be that there is no clear idea of what a downvote (or an upvote) actually means, such that possibly some people just upvote if the content of a comment / post confirms their worldview (and vice versa).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-09-17T11:50:15.986Z · LW(p) · GW(p)

"confirms their worldview"

A lot of things people talk about are not at all about "their worldview" in the sense of beliefs and values; this characterization is often enough noncentral. I'm arguing about the use of words in this comment; is that an element of my worldview? Perhaps I value accurate use of words, and can't find a suitable counterexample.

Replies from: Sherrinford
comment by Sherrinford · 2021-09-17T12:23:04.360Z · LW(p) · GW(p)

Maybe worldview is a word that comes along with too many associations? What about "prior belief"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-09-17T12:49:36.016Z · LW(p) · GW(p)

Some hypotheses are not beliefs (they are beliefs-in-development that aren't yet ready for making predictions), and many constructions are not even hypotheses in this sense (they are not about the real world). I don't believe there is a unifying concept behind the things people talk about, different concepts are salient for different statements.

Replies from: Sherrinford
comment by Sherrinford · 2021-09-17T13:03:34.529Z · LW(p) · GW(p)

I don't think that contradicts my original statement strongly. The statement is itself a hypothesis, but I wrote it down because I find it likely that it describes behavior. However, I don't have a strong degree of confidence about it. 

Some comments may not be in the worldview / belief category, and in this case it may be the case that the people I hypothesized about may just neither upvote nor downvote. It is also possible that in this case voting on posts or comments may be motivated by different things.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-09-17T13:25:53.692Z · LW(p) · GW(p)

"I don't think that contradicts my original statement strongly."

I don't think it contradicts it at all; it's unrelated to your original statement, only to the use of a word in it that can be steelmanned away in the obvious manner.

comment by Sherrinford · 2021-04-26T19:39:30.787Z · LW(p) · GW(p)

I guess this is a really bad time to write book reviews for LessWrong.

comment by Sherrinford · 2021-04-24T13:44:00.845Z · LW(p) · GW(p)

When people write articles containing wrong statements and statements without evidence or source, you can use your knowledge of the wrong statements to update the probability that the statements without evidence or source are true.
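
For illustration, a minimal Bayes-rule sketch of that update (all probabilities invented):

```python
# Prior that the author is careless about checking claims, and likelihoods
# of spotting a demonstrably wrong statement under each hypothesis.
p_careless = 0.3
p_wrong_given_careless = 0.5
p_wrong_given_careful = 0.05

p_wrong = (p_wrong_given_careless * p_careless
           + p_wrong_given_careful * (1 - p_careless))
posterior_careless = p_wrong_given_careless * p_careless / p_wrong
print(round(posterior_careless, 2))  # ~0.81: having spotted a wrong statement,
# you should now also trust the article's unsourced statements less.
```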

Replies from: Dagon
comment by Dagon · 2021-04-24T15:40:51.554Z · LW(p) · GW(p)

Kind of the reverse of Gell-Mann Amnesia (https://en.wikipedia.org/wiki/Michael_Crichton#GellMannAmnesiaEffect).  Arguably, it should be applied to editorial units (sites, publications, etc.), not just to individual authors.

Replies from: Sherrinford
comment by Sherrinford · 2021-04-24T18:48:14.226Z · LW(p) · GW(p)

Yes. I hope certain forums and sites I regularly read don't continue developing in a direction of not demanding evidence and sources for claims.

By the way, there is also the danger that someone at some point just exploits his/her own reputation to push an agenda.

comment by Sherrinford · 2021-03-21T17:00:13.321Z · LW(p) · GW(p)

More articles on the supposed AstraZeneca blood clot mechanism, adding to this [LW(p) · GW(p)]:

(All in German, but I think that in general, automated translation has become really good.)

comment by Sherrinford · 2020-12-16T13:22:35.743Z · LW(p) · GW(p)

I would love to see examples of contributions with actual steelmanning instead of just seeing people who pay lipservice to it.

Replies from: niplav
comment by niplav · 2020-12-16T16:15:01.096Z · LW(p) · GW(p)

I believe that steelmanning has mostly been deprecated and replaced with Ideological Turing Tests.

Replies from: Kaj_Sotala, Sherrinford
comment by Kaj_Sotala · 2020-12-16T17:13:36.130Z · LW(p) · GW(p)

ITTs and steelmanning feel like they serve different (though overlapping) purposes to me. For example, if I am talking with people who are not X (libertarians, socialists, transhumanists, car-owners...), we can try to steelman an argument in favor of X together. But we can't do an ITT of X, since that would require us to talk to someone who is X.

Replies from: Sherrinford
comment by Sherrinford · 2020-12-16T18:21:17.824Z · LW(p) · GW(p)

Yes, though I assume the best test of whether you really steelman someone would be whether you can take a break and ask her whether your representation fits.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-17T12:35:10.275Z · LW(p) · GW(p)

I don't think an ITT is a good test of a steelman. Often you're coming from a different frame from someone else, and strong arguments to you will be framed completely differently from strong arguments for someone else.

Replies from: Sherrinford
comment by Sherrinford · 2020-12-17T14:19:19.854Z · LW(p) · GW(p)

Yes, maybe an ITT tests a fleshman instead of a steelman or a strawman...

comment by Sherrinford · 2020-12-16T16:49:19.212Z · LW(p) · GW(p)

What I mean is: 

I would like people who write articles about the supposed actions or motivations of other people - or government agencies, firms, or whatever - to actually try to present those actions and motivations in a way that at least assumes the actors are not completely dumb, evil, or pathetic. It seems to be fashionable that when people do not see the sense behind actions, they do not try hard but jump to the conclusion that the actions must be due to some despicable, stupid, or at least equilibrium-inefficient behavior (e.g. some claims about "signalling", with no proper analysis of whether the claim makes sense in the given situation). This may feel very insightful; after all, the writer seemingly has a deeper insight into social structures than the social agents themselves. But supposed insights that feel too good can be dangerous. And that a model is plausible does not mean that it applies to every situation.

comment by Sherrinford · 2020-10-25T17:48:27.423Z · LW(p) · GW(p)

Among EA-minded people interested in preventing climate change, it seems Clean Air Task Force (CATF) is seen very favorably. Why? The "Climate Change Cause Area Report" by Founders Pledge (PDF) gives an overview.

CATF's work is introduced as follows:

"It was founded in 1996 with the aim of enacting federal policy reducing the air pollution caused by American coal-fired power plants. This campaign has been highly successful and has been a contributing factor to the retirement of a large portion of the US coal fleet." (p. 5)

On p. 88, you will read:

"Do they have a a good track record? CATF have conceived of and led several successful advocacy campaigns in the US, which have had very large public health and environmental benefits. According to our rough model, through their past work, they have averted a tonne of CO 2 e for around $1.

Is their future work cost- - effective? Going forward, CATF plans to continue its work on power plant regulation and to advocate for policy support for innovative but neglected low carbon technologies.

Given their track record and the nature of their future projects, we think it is likely that a donation to CATF would avert a tonne of CO 2 e for $0.10-$1."

On p. 91:

"CATF was founded in 1996 to advocate for regulation of the damaging air pollution produced by the US coal fleet, initially focusing on sulphur dioxide (SO 2 ) and nitrogen oxide (NO x ). They later advocated for controls on mercury emissions. The theory of change was that the cost of emission controls for conventional pollutants and mercury would result in the retirement or curtailment of coal plant operation resulting in reductions in CO 2 (and other) emissions. CATF conceived of the campaign goal, designed the strategy, and led the campaign, in turn drawing in philanthropic support and recruiting other environmental NGOs to the campaign."

How does the evaluation work? A spreadsheet model shows the estimated benefits of CATF's policy impact.

Where do the numbers come from? The spreadsheet states "subjective input" in several cells. The "Climate Change Cause Area Report" by Founders Pledge (from p. 129) states that "CATF is typical of research and policy advocacy organisations in that it has worked on heterogeneous projects. This makes it difficult to evaluate all of CATF's past work, as this would require us to assess their counterfactual impact in a range of different contexts in which numerous actors are pushing for the same outcome." The report then asks, e.g., how much CATF "brought the relevant regulation forward", and the answers seem to rely strongly on CATF's own assessment. Nonetheless, it makes assessments like "Our very rough realistic estimate is therefore that CATF brought the relevant regulation forward by 12 months. The 90% confidence interval around this estimate is 6 months to 2 years." On p. 91 you can read: "Through each of these mechanisms, CATF increased the probability that regulation was introduced earlier in time. Our highly uncertain realistic estimate is that through their work, CATF brought regulation on US coal plants forward by 18 months, with a lower bound of 9 months and a higher bound of 4 years. CATF believe this to be a major underestimate, and have told us that they could have brought the relevant regulation forward by ten years."
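
As a toy illustration of how such subjective inputs propagate into the headline figure, here is a minimal sketch (not Founders Pledge's actual model; every number below is a hypothetical placeholder, not taken from the report):

```python
import random

# Minimal sketch (hypothetical numbers throughout): turning a subjective
# "months brought forward" estimate into a cost-per-tonne figure.

def cost_per_tonne(months_forward, annual_tonnes_averted, spend_usd):
    """Cost per tonne CO2e if regulation was accelerated by months_forward."""
    tonnes = annual_tonnes_averted * (months_forward / 12)
    return spend_usd / tonnes

# Crude Monte Carlo over the subjective 90% interval (here 6-24 months),
# approximated with a uniform draw purely for simplicity.
samples = sorted(
    cost_per_tonne(random.uniform(6, 24),
                   annual_tonnes_averted=50_000_000,  # hypothetical
                   spend_usd=10_000_000)              # hypothetical
    for _ in range(100_000)
)
print(f"5th pct: ${samples[5_000]:.2f}/tCO2e, "
      f"median: ${samples[50_000]:.2f}, "
      f"95th pct: ${samples[95_000]:.2f}")
```

The point is just that the output interval is driven almost entirely by the subjective "months brought forward" input, so the headline $/tCO2e range inherits all of that subjectivity.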

While of course it's fine to give subjective estimates, they should be taken with a grain of salt. The comparison seems to rely much more on such subjectivity than evaluations of charities with concrete, repeatedly applied health interventions do.

What, if anything, could be biased?

In addition to the (probably unavoidable) reliance on self-reported information, the following paragraph made me wonder:

"CATF have told us that at the time the campaign was conceived, major environmental organisations were opposed to reopening the question of plant emissions after the Clean Act Amendments of 1990, as they feared the possibility that legislative debate would unravel other parts of the Act. 216 This is based on conversations at the time with the American Lung Association, Environmental Defense Fund, and the Natural Resources Defense Council."

How can we know whether such fears were justified ex ante? How do we guard against survivorship or hindsight bias?

comment by Sherrinford · 2022-11-14T17:37:07.696Z · LW(p) · GW(p)

In the latest episode of the Bayesian Conspiracy podcast, the hosts discuss moral behavior in the context of FTX, and they treat the claim that the FTX collapse is evidence that you shouldn't trust people who look kind of weird as a totally normal part of the conversation. Could we maybe just slightly slow down all these discussions?

comment by Sherrinford · 2023-10-12T15:46:43.454Z · LW(p) · GW(p)

Still waiting for a change in the general attitude toward Elon Musk in certain rationality (etc.) circles, a change that would also take into account what kind of news sources Musk promotes on X.

Replies from: Dagon
comment by Dagon · 2023-10-12T17:05:35.759Z · LW(p) · GW(p)

Interesting.  Would you share your current measurement, and prediction of change over what timescale?  In my circles (which are not mostly self-identified as "rationality etc"), most of the change happened already - a shift from "annoying crazy person, with a genius for getting cool stuff into production" to "complete maniac who had lucky results when his lunacy was constrained, but that's no longer the case".  This change happened mostly before and during the Twitter/X runup, not since the actual sale and changes.  

Replies from: Sherrinford
comment by Sherrinford · 2023-10-12T22:02:30.344Z · LW(p) · GW(p)

Measurement would be an exaggeration.

I see fandom. I hardly see people discussing that Elon Musk retweets and promotes very problematic things, which suggests that he builds his political worldview on very low-quality sources. And I hardly see people noticing that such concentration of power can in itself be a problem.

Replies from: ChristianKl, Dagon
comment by ChristianKl · 2023-10-19T16:54:46.350Z · LW(p) · GW(p)

And I hardly see people noticing that such concentration of power can in itself be a problem.

Concentration of power is problematic, but it's also necessary for getting things done. Take Robert Moses: the concentration of power around him was problematic, but he got things built.

Fear of concentration of power in individual people is one core feature of the Great Stagnation.

As far as Musk's retweeting goes, its impact is not very large compared to the effects of projects like Starship.

Replies from: Sherrinford
comment by Sherrinford · 2023-10-20T08:35:42.202Z · LW(p) · GW(p)

"Concentration of power is problematic but it's also necessary for things getting done."

Sure, some amount of power may be productive, but a very high concentration of power can be problematic, for example because it puts people at the powerful person's mercy.

"Fear of concentration of power on individual people is one core feature of the Great Stagnation."

I assume "feature" in this case means correlate, not cause.

"As far as Musk's retweeting goes, it's impact is not very large compared to the the effects of projects like Starship."

This is hard to compare. Influencing opinions is relevant. 

I would also say that Musk's tweets are informative for forming expectations about what he might use control of strategically important technologies for in the future. In general, I would prefer it if a person did not have infinite power to determine the ability of societies to act, and I guess there is an amount of power lower than infinite at which this already becomes problematic.

comment by Dagon · 2023-10-13T00:32:53.219Z · LW(p) · GW(p)

It's always surprising (which means it should never be surprising) how much of a bubble I (and you, it seems) am in.  I have multiple partially-overlapping groups, NONE of which view Musk as overall admirable, though some are more balanced in evaluating his successes and offenses.