Comment cross-posted to the Effective Altruism Forum
Edit, 15 December 2024: I'm not sure why this comment has been downvoted so heavily in the couple of hours since I posted it, though I can guess. I wrote this comment off the cuff, so I didn't put as much effort into writing it as clearly or succinctly as I could, or maybe should, have. So, I understand how it might read: as a long, meandering nitpick of a few statements near the beginning of the podcast episode, without my having listened to the whole episode yet. On that reading, I call a bunch of ex-EAs naive idiots, the way Elizabeth referred to herself as at least formerly being a naive idiot, then say even future effective altruists will be proven to be idiots, and that those still propagating EA after so long, like Scott Alexander, might be the most naive and idiotic of all. To be clear, I also included myself, so this reading would also imply that I'm calling myself a naive idiot.
That's not what I meant to say. I would downvote that comment too. What I'm actually saying is:
- If what Elizabeth is saying about having been a naive idiot is true, then it would seem to follow that many current and former effective altruists, including many rationalists, were also naive idiots for similar reasons.
- If that were the case, then it would be consistent with greater truth-seeking, and with criticizing others for not pursuing truth-seeking about EA with enough integrity, to point out to those hundreds of other people that they either once were, or maybe still are, naive idiots.
- If Elizabeth or anyone else wouldn't do that, not only because they'd consider it mean, but moreover because they wouldn't think it true, then they should apply the same standard to themselves and reconsider whether they really were just naive idiots.
- I'm disputing the "naive idiocy" hypothesis here as spurious, because it comes down to the question of whether someone like Tim, and by extension someone like me in the same position, who has also mulled over quitting EA, is still being a naive idiot on account of not yet having updated to the conclusion Elizabeth has already reached.
- That matters because it seems to be one of the major cruxes of whether someone like Tim, or me, would update and choose to quit EA entirely, which is the point of this dialogue. If that's not a true crux of disagreement here, speculating about whether hundreds of current and former effective altruists have been naive idiots is a waste of time.
I've begun listening to this podcast episode. Only a few minutes in, I feel a need to clarify a point of contention over some of what Elizabeth said:
Yeah. I do want to say part of that is because I was a naive idiot and there's things I should never have taken at face value. But also I think if people are making excuses for a movement, that I shouldn't have been that naive, that's pretty bad for the movement.
She also mentioned that she considers herself to have caused harm by propagating EA. It seems like she might be being too hard on herself. While she might consider being that hard on herself appropriate, the problem could be what her conviction implies. There are clearly still some individual, long-time effective altruists she respects, like Tim, even if she's done engaging with the EA community as a whole. If that weren't true, I doubt this podcast would've been launched in the first place. Having been so heavily involved in the EA community for so long, and still being so involved in the rationality community, she may know hundreds of people, friends, who either still are effective altruists or used to be and no longer are. Regarding the sort of harm caused by EA propagating itself as a movement, she gives this as a main example:
The fact that EA recruits so heavily and dogmatically among college students really bothers me.
Hearing that made me think about a criticism of the organization of EA groups for university students made last year by Dave Banerjee, former president of the student EA club at Columbia University. His was one of the most upvoted criticisms of such groups, and how they're managed, ever posted to the EA Forum. While Dave apparently reached what are presumably some of the same conclusions as Elizabeth about the problems with evangelical university EA groups, he did so with a much quicker turnaround: he made that major update while still a university student, whereas it took her several years. I don't mention that to imply she was necessarily more naive or idiotic than he was. From another angle, given that he was propagating a much bigger EA club than Elizabeth ever did, at a time when EA was being driven to grow much faster than when Elizabeth was more involved with EA movement/community building, Dave could easily have been responsible for causing more harm. Perhaps, then, he was an even more naive idiot than she ever was.
I've known other university students, formerly effective altruists who helped build student EA clubs, who quit because they also felt betrayed by EA as a community. Given that EA won't be changing overnight, in spite of whoever considers it imperative that some of its movement-building activities stop, there will be teenagers in the coming months who come through EA with a similar experience. They're teenagers who may be chewed up and spit out, feeling ashamed of their complicity in causing harm through propagating EA as well. They may not have even graduated high school yet, and within a year or two, they may also be(come) those effective altruists, then former effective altruists, whom Elizabeth anticipates and predicts she would call naive idiots. Yet those are the very young people Elizabeth would seek to prevent from being harmed by joining EA in the first place. It's not evident that there's any discrete point at which they cease being those who should heed her warning and instead become naive idiots to chastise.
Elizabeth also mentions how she was introduced to EA in the first place.
I'd read Scott Alexander's blog for a long time, so I vaguely knew the term effective altruist. Then I met one of the two co-founders of Seattle EA on OkCupid and he invited me to the in-person meetings that were just getting started and I got very invested.
A year ago, Scott Alexander wrote a post entitled In Continued Defense of Effective Altruism. While I'm aware he made some later posts responding to criticisms of it, I'm guessing he hasn't abandoned the thesis of that post in its entirety. Meanwhile, as the author of one of the most popular blogs associated with either the rationality or EA communities, if not the most popular, Scott Alexander may still be drawing more people into the EA community than almost any other writer. If that means he may be causing more harm by propagating EA than almost any other rationalist still supportive of EA, then, at least in the particular way Elizabeth has in mind, Scott may right now continue to be one of the most naive idiots in the rationality community. The same may be true of so many effective altruists Elizabeth got to know in Seattle.
A popular refrain among rationalists, as I'm aware, is: speak truth, even if your voice trembles. Never mind the internet: Elizabeth could literally go meet hundreds of effective altruists and rationalists she has known in the Bay Area and Seattle, and tell them that for years they, too, were naive idiots, or that they're still being naive idiots. Doing so could be how Elizabeth prevents them from causing harm. In not being willing to say so, she may counterfactually be causing far more harm, by saying and doing so much less to stop EA from propagating than she knows she can.
Whether it be Scott Alexander, or so many of her friends who have been or still are in EA, or those who've helped propagate university student groups like Dave Banerjee, or those young adults who will come and go through EA university groups by the year 2026, there are hundreds of people Elizabeth should be willing to call, to their faces, naive idiots. It's not a matter of whether she, or anyone, expects that to work as some sort of convincing argument. That's the sort of perhaps cynical and dishonest calculation she, and others, rightly criticize in EA. She should tell all of them that, if she believes it, even if her voice trembles. If she doesn't believe it, that merits an explanation of how she considers herself to have been a naive idiot while so many of them were not. If she can't convincingly justify, not just to herself but to others, why she was exceptional in her naive idiocy, then perhaps she should reconsider her belief that even she was a naive idiot.
In my opinion, neither she nor so many other former effective altruists were just naive idiots. Whatever mistakes they made, epistemically or practically, I doubt the explanation is that simple. The operationalization of "naive idiocy" here doesn't seem like a decently measurable function of, say, how long it took someone to recognize just how much harm they were causing by propagating EA, and how much harm they did cause in that period of time. "Naive idiocy" here doesn't seem to be all that coherent an explanation for why so many effective altruists got so much so wrong for so long.
I suspect there's a deeper crux of disagreement here, one that hasn't been pinpointed yet by Elizabeth or Tim. It's one I might be able to discern if I put in the effort, though I don't yet have a sense of what it might be either. I could try, given that I still consider myself an effective altruist, though I too ceased to be an EA group organizer last year, on account of not being confident in helping grow the EA movement further, even if I've continued participating in it for what I consider its redeeming qualities.
If someone doesn't want to keep trying to change EA for the better, and instead opts to criticize it to steer others away from it, it may not be true that they were just naive idiots before. If they can't substantiate their former naive idiocy, then to refer to themselves as having only been naive idiots, and by extension imply that so many others they've known still are or were naive idiots too, is neither true nor useful. In that case, if Elizabeth would still consider herself to have been a naive idiot, that isn't helpful, and maybe it is also a matter of her, truly, being too hard on herself. If you're someone who has felt similarly, but you couldn't bring yourself to call so many friends you made in EA a bunch of naive idiots to their faces because you'd consider that false or too hard on them, maybe you're being too hard on yourself too. Whatever you want to see happen with EA, us being too hard on ourselves like that isn't helpful to anyone.
Do you mean Evan Hubinger, Evan R. Murphy, or a different Evan? (I would be surprised and humbled if it was me, though my priors on that are low.)
How do you square encouraging others to weigh in on EA fundraising, and presumably the assumption that anyone in the EA community can trust you as a collaborator of any sort, with your stated intention, as you put it in July, of probably seeking to shut down at some point in the future?
The Substack post only mentions that a researcher leaked the document, not that any researcher authored it. The document could've been written up by one or more Google staffers who aren't directly doing the research themselves, like a project manager or a research assistant.
Nothing in the document should necessarily be taken as representative of Google, or any particular department, though the value of any insights drawn from the document could vary based on what AI research project(s)/department the authors of the document work on/in. This document is scant evidence in any direction of how representative the statements made are of Google and its leadership, or any of the teams or leaders of any particular projects or departments at Google focused on the relevant approaches to AI research.
Thanks for making this comment. I had a similar comment in mind. You're right that nobody should assume any statements in this document represent the viewpoint of Google, or any of its subsidiaries, like DeepMind, or any department therein. Nor should it be assumed that the researcher(s) who authored or leaked this document are department or project leads. The Substack post only mentions that a researcher leaked the document, not that any researcher authored it. The document could've been written up by one or more Google staffers who aren't directly doing the research themselves, like a project manager or a research assistant.
On the other hand, there isn't enough information to assume it was only one or more "random" staffers at Google. Again, nothing in the document should necessarily be taken as representative of Google, or any particular department, though the value of any insights drawn from the document could vary based on what AI research project(s)/department the authors of the document work on/in.
That might not be a useful question to puzzle over much, since we could easily never find out who the anonymous author(s) of the document is/are. Yet the chance that the authors aren't purely "random" researchers should still be kept in mind.
Thank you for this detailed reply. It's valuable, so I appreciate the time and effort you've put into it.
The thoughts I've got to respond with are EA-focused concerns that would be tangential to the rationality community, so I'll draft a top-level post for the EA Forum instead of replying here on LW. I'll also read your EA Forum post and the other links you've shared to incorporate into my later response.
Please also send me a private message if you want to set up continuing the conversation over email, or over a call sometime.
I've edited the post, changing "resentment from rationalists elsewhere to the Bay Area community" to "resentment from rationalists elsewhere toward the Bay Area community," because that seems to reduce the ambiguity some. My use of the word 'resentment' was intentional.
Thanks for catching those. The word 'is' was missing. The word "idea" was meant to be "ideal." I've made the changes.
I'm thinking of asking as another question post, or at least a post seeking feedback, probably more than trying to stake a strong claim. Provoking debate for the sake of it would hinder that goal, so I'd try to write any post in a way that avoids that. Those filters applied to any post I might write wouldn't hinder any kind of feedback I'd seek. The social barriers to posting raised by others with the concerns you expressed seem high enough that I'm unsure I'll post it after all.
This is a concern I take seriously. While it is possible increasing awareness of the problem of AI will make things worse overall, I think a more likely outcome is that it will be neutral to good.
Another consideration is how it may be a risk for long-termists to not pursue new ways of conveying the importance and challenge of ensuring human control of transformative AI. There is a certain principle of being cautious in EA. Yet in general we don't self-reflect enough to notice when being cautious by default is irrational on the margin.
Recognizing the risks of acts of omission is a habit William MacAskill has been trying to encourage and cultivate in the EA community during the last year. Yet it's a principle we've acknowledged since the beginning. Consequentialism doesn't distinguish between action and inaction, where inaction is the failure to take an appropriate, crucial, or necessary action to prevent a negative outcome. Risk aversion is focused on in the LessWrong Sequences more than most cognitive biases.
It's now evident that past attempts at public communication about existential risks (x-risks) from AI have altogether proven to be neither sufficient nor adequate. It may be less a matter of drawing more attention to the issue than of drawing more of the right kind of attention. In other words, carefully inducing changes in how AI x-risks are perceived by various sections of the public is necessary.
The way we as a community can help you ensure the book strikes the right balance may be to keep doing what MacAskill recommends:
- Stay in constant communication about our plans with others, inside and outside of the EA community, who have similar aims to do the most good they can
- Remember that, in the standard solution to the unilateralist’s dilemma, it’s the median view that’s right (rather than the most optimistic or most pessimistic view)
- Be highly willing to course-correct in response to feedback
I'm aware it's a rather narrow range of ideas, but a small set of standard options that most people adhere to is how the issue is represented in popular discourse, which is what I was going off of as a starting point. Other comments on my post have established that that isn't what to go off of. I've also mentioned that being exposed to ideas I may not have thought of myself is part of why I want to have an open discussion on LW. My goal has been to gauge whether that's a discussion any significant portion of the LW user-base is indeed open to having. The best answer I've been able to surmise thus far is: "yes, if it's done right."
As to the question of whether I can hold myself to those standards and maintain them, I'll interpret the question not as rhetorical but as literal. My answer is: yes, I expect I would be able to hold myself to those standards and maintain them. I wouldn't have asked the original question in the first place if I thought there wasn't at least a significant chance I could. I'm aware that how I'm writing this may seem to betray gross overconfidence on my part.
I'll try here to convince you otherwise by providing context regarding my perceived strawmanning of korin43's comment. The upshot of why it's not a strawman is that my position is the relatively extreme one, putting me in opposition to most people who broadly adopt my side of the issue (i.e., pro-choice). I expect it's much more plausible that I am the one who is evil, crazy, insane, etc., than almost everyone who might disagree with me. Part of what I want to do is a 'sanity check,' figuratively speaking.
1. My position on abortion is one that most might describe as 'radically pro-choice.' The kind of position most would consider more extreme than mine is the kind that would go further to an outcome like banning anti-abortion/pro-life protests (which is an additional position I reject).
2. I embraced my current position on the basis of a rational appeal that contradicted the moral intuitions I had at the time. It still contradicts my moral intuitions. My prior moral intuition is also one I understand to be among the more common ones (moral consideration should be given to an unborn infant or whatnot after the second trimester, or after the point when the infant could independently survive outside the womb). The fact that this leaves me in a state of some confusion, and that others on LessWrong may be able to help me deconfuse better than I can by myself, is why I want to ask the question.
3. What I consider a relatively rational basis for my position is one I expect only holds among those who broadly share similar moral intuitions. By "assumptions diametrically opposite mine," I meant someone having an intuition that what would render a fetus worth moral consideration is not based on its capacity for sentience but on it having an immortal soul imbued by God. In that case, I don't know of any way I might start making a direct appeal as to why someone should accept my position. The only approach I can think of is to start indirectly by convincing someone much of their own religion is false. That's not something I'm confident I could do with enough competence to make such an attempt worthwhile.
I meant to include the hyperlink to the original source in my post but I forgot to, so thanks for catching that. I've now added it to the OP.
It seems like the kind of post I have in mind would be respected more if I'm willing and prepared to put in the effort of moderating the comments well too. I won't make such a post before I'm ready to commit the time and effort to doing so. Thank you for being so direct about why you suspect I'm wrong. Voluntary explanations for the crux of a disagreement or a perception of irrationality are not provided on LessWrong nearly often enough.
I am thinking of making a question post to ask because I expect there may be others who are able to address an issue related to legal access to abortion in a way that is actually good. I expect I might be able to write a post that would be considered not to "suck," though it might be so-so as opposed to unusually good.
My concern was that even by only asking a question, even one asked well and framed so as to elicit better responses, I would still be downvoted. It seems, though, that if I put serious effort into it, the question post would not be heavily downvoted.
I'm not as concerned about potential reputational harm to myself as I am about harm to others. I also have a responsibility to communicate in ways that minimize undue reputational harm to others. Yet I'd want to talk about abortion in terms of either public policy or philosophical arguments, so it'd be a relatively jargon-filled and high-context discussion either way.
My impression has been that, without much in the way of checking, it's presumed that a position presented will have been adopted for bad epistemological reasons and has little to do with rationality. I'm not asking about subjects I want to or would frame as political. I'm asking if there are some subjects that will be treated as though they are inherently political even when they are not.
For me it's not so much about moral intuitions as about rational arguments. That may not hold up if someone has assumptions diametrically opposite mine, like the unborn being sacred or otherwise special in some way that assigns a moral weight to them incomparably higher than the moral weight assigned to pregnant persons. That's something I'd be willing to write about if that itself is considered interesting. My intention is to ask what the best compromises are among the various positions offered by the side of the debate opposite mine, so that's very different from perspectives unfit for LW.
I'm not an active rationalist anymore, but I've 'been around' for a decade. I still occasionally post on LessWrong because it's interesting or valuable enough for some subjects. That the rationality community functions the way you describe, and the norms that entails, is an example of why I don't participate in it as much anymore. Thank you, though, for the feedback.
Thank you!
This is great news! This could even be a topic for one of our meetups!
Thanks. Do you feel like you have a sense of what proportion of long-termists you know are forecasting that way? Or do you know of some way one might learn more about forecasts like this and the reasoning or models behind them?
I think the difficulty with answering this question is that many of the disagreements boil down to differences in estimates for how long it will take to operationalize lab-grade capabilities.
The same point was made on the Effective Altruism Forum and it's a considerable one. Yet I expected that.
The problem frustrating me is that the number of individuals who have volunteered their own numbers is so low as to be an insignificant minority. One person doesn't disagree with themselves unless there is model uncertainty or the like. Unless individual posts or comments in all of that debate provide specific estimates or timelines, not enough people are providing helpful quantitative information that would take only trivial effort to provide.
Thank you though for providing your own numbers.
Upvoted. Thanks.
I'll state that in my opinion it shouldn't necessarily have to be the responsibility of MIRI, or even Eliezer, to clarify what was meant by a stated position that has been taken out of context. I'm not sure, but it seems as though at least a significant minority of those who've been alarmed by some of Eliezer's statements haven't read the full post that would put them in a less dramatic context.
Yet errant signals seem important to rectify, as the misconceptions they create make it harder for MIRI to coordinate with other actors in the field of AI alignment.
My impression is that misunderstanding about all of this is widespread, in that there are at least a few people across every part of the field who don't understand what MIRI is about these days at all. I don't know how widespread it is in terms of how significant a portion of other actors in the field are generally confused about MIRI.
I don't know what "this" is referring to in your sentence.
I was referring to the fact that there are meta-jokes in the post about which parts are or are not jokes.
I want to push back a bit against a norm I think you're arguing for, along the lines of: we should impose much higher standards for sharing views that assert high p(doom), than for sharing views that assert low p(doom).
I'm sorry I didn't express myself more clearly. There shouldn't be a higher standard for sharing views that assert a high(er) probability of doom. That's not what I was arguing for. I've been under the impression that Eliezer and maybe others have been sharing a view of a most extreme probability of doom, but without explaining their reasoning, or how their model changed from before. It's the latter part that has been provoking confusion.
I still don't know what incoherence you have in mind. Stuff like 'Eliezer has a high p(doom)' doesn't strike me as good evidence for a 'your strategy is incoherent' hypothesis; high and low p(doom) are just different probabilities about the physical world.
With the reasons for Eliezer or others at MIRI being more pessimistic than ever before seeming unclear, one possibility that came to mind is that there isn't enough self-awareness of why the model changed, or that MIRI has for a few months had no idea what direction it's going in now. That would lend itself to not having a coherent strategy at this time. Your reply has clarified, though, that it's more that what MIRI's strategic pivot will be is still in flux, or at least that communicating it well publicly will take some more time, so I'm not thinking any of that now.
I do appreciate the effort you, Eliezer and others at MIRI have put into what you've been publishing. I eagerly await a strategy update from MIRI.
I'll only mention one more thing, which hasn't bugged me as much but has bugged others in conversations I've participated in. The issue is that Eliezer appears to think, but without any follow-up, that most other approaches to AI alignment distinct from MIRI's, including ones that otherwise draw inspiration from the rationality community, will also fail to bear fruit. Like, the takeaway presumably isn't that other alignment researchers should just give up, or just come work for MIRI...? But then what is it?
A lack of an answer to that question has left some people feeling like they've been hung out to dry.
Thank you for the detailed response. It helps significantly.
The parts of the post that are an April Fool's Joke, AFAIK, are the title of the post, and the answer to Q6. The answer to Q6 is a joke because it's sort-of-pretending the rest of the post is an April Fool's joke.
It shouldn’t be surprising that others are confused if this is your best guess about what the post means altogether.
believing p(doom) is high isn't a strategy, and adopting a specific mental framing device isn't really a "strategy" either). (I'm even more confused by how this could be MIRI's "policy".)
Most would probably be as confused as you are at the notion that “dying with dignity” is a strategy. I was thinking that the meaning of the title, stripped of hyperbole, was not a change in MIRI’s research agenda but some more “meta-level” organizational philosophy.
I’m paraphrasing here, so correct me if I’m wrong, but some of the recent dialogues between Eliezer and other AI alignment researchers in the last several months contained statements from Eliezer like “We [at least Nate and Eliezer] don’t think what MIRI has been doing for the last few years will work, and we don’t have a sense of what direction to go now”, and “I think maybe most other approaches in AI alignment have almost no chance of making any progress on the alignment problem.”
Maybe many people would have known better what Eliezer meant had they read the entirety of the post(s) in question. Yet the posts were so long and complicated that Scott Alexander bothered to write a summary of only one of them, and there are several more.
As far as I’m aware, the reasoning motivating the kind of sentiments Eliezer expressed wasn’t much explained elsewhere. Between the confusion and concern that has caused, and the ambiguity of the above post, the idea that MIRI’s strategy might right now be in a position of (temporary) incoherence was apparently plausible enough to a significant minority of readers.
The parts of your comment excerpted below are valuable and may even have saved MIRI a lot of work trying to deconfuse others had they been publicly stated at some point in the last few months:
A plurality of MIRI's research leadership, adjusted for org decision-making weight, thinks humanity's success probability is very low, and will (continue to) make org decisions accordingly.
MIRI is strongly in favor of its researchers building their own models and doing the work that makes sense to them; individual MIRI researchers' choices of direction don't require sign-off from Eliezer or Nate.
They [at least Eliezer and Nate] updated a lot toward existential wins being likelier if the larger community moves toward having much more candid and honest conversations, and generally produces more people who are thinking exceptionally clearly about the problem.
Summary: The ambiguity as to how much of the above is a joke appears to be a way for Eliezer or others to maintain plausible deniability about the seriousness of apparently extreme but little-backed claims. This comes after a lack of adequate handling, on the part of the relevant parties, of the impact of Eliezer’s output in recent months on various communities, such as rationality and effective altruism. Virtually none of this has indicated what real, meaningful changes can be expected in MIRI’s work. As MIRI’s work depends in large part on the communities supporting them understanding what the organization is really doing, MIRI’s leadership should clarify what the real or official relationship is between their current research and strategy, and Eliezer’s output in the last year.
Strongly downvoted.
Q6 doesn't appear to clarify whether this is all an April Fool's Day joke. I expect that's why some others have asked the question again in their comments. I won't myself ask again because I anticipate I won't receive a better answer than those already provided.
My guess is that some aspects of this are something of a joke, or that for some aspects the joke is a tone of exaggeration or hyperbole. I'm aware some aspects aren't jokes, as Eliezer has publicly expressed some of the opinions above for months now. I expect one reason for the ambiguity is that exploiting April Fool's Day to publish this post provides plausible deniability about the seriousness of apparently extreme but poorly substantiated claims. That may be because of, in my opinion, the inadequate handling thus far of the impact this discourse has had on the relevant communities (e.g., AI alignment, effective altruism, long-termism, existential risk reduction, rationality, etc.).
In contradiction to the title of this post, there is little to no content conveying what a change in strategy entails, i.e., what MIRI will really do differently than at any time in the past. Insofar as Eliezer has been sincere above, it appears this is only an attempt to forestall panic and facilitate a change in those communities toward accepting the presumed inevitability of existential catastrophe. While that effort is appreciated, it doesn’t reveal anything about what meaningful changes a new strategy at MIRI would entail. It has also thus far been ambiguous what the relationship is between some of the dialogues between Eliezer and others published in the last year, and whatever official changes there may be in MIRI’s work.
Other than Eliezer, the individuals who have commented and who have a clear, direct, professional relationship with MIRI are:
- Rob Bensinger, Communications Lead
- Abram Demski, Research Staff
- Anna Salamon, Board Director
- Vanessa Kosoy, Research Associate
None of their comments here clarify any of this ambiguity. Nor has Eliezer clearly spelled out the relationship between the perspective he is now expressing and MIRI’s official strategy. Until that’s clarified, it’s not clear how seriously any of the above should be taken as meaningfully impacting MIRI’s work. At this stage, MIRI’s leadership (Nate Soares and Malo Bourgon) should provide that clarification, perhaps in tandem with Rob Bensinger and other MIRI researchers, but in a way independent of Eliezer’s recent output.
Here is an update on our efforts in Canada.
1. There are nearly five of us who would be willing to sponsor a refugee to settle in Canada (indefinitely or for however long the war might last). There is a requisite amount of money that must be committed beforehand to cover at least a few months' worth of the costs of settling in Canada and living here. Determining whether 3 or more of us would be able to cover those costs appears to be the most significant remaining bottleneck before we decide whether to take this on.
2. There are two effective altruists in the province of Alberta who would be willing to sponsor another refugee if one needs that help. If you or someone you know is in touch with someone living in Alberta, in particular the Calgary or Edmonton areas, who might be willing to sponsor as a refugee a community member from Ukraine, please reply with a comment or contact me.
Thanks for flagging all of that. I've made all of those edits.
That isn't something I had thought of, but it makes sense as the most significant reason that, at least so far, I hadn't considered.
I notice this post has only received downvotes, other than the strong upvote it received by default from me as the original poster. My guess would be this post has been downvoted because it's (perceived as):
- an unnecessary and nitpicking question.
- maybe implying MIRI and the rationality community are not authoritative sources in the field of AI alignment.
That was not my intention. I'd like to know what other reasons there may be for why this post was downvoted, so please reply if you can think of any or you are one of the users who downvoted this post.
"AI alignment" is the term MIRI (among other actors in the field) ostensibly prefers over "AI safety" for referring to the control problem, to distinguish it from other AI-related ethics or security issues that don't constitute x-risks. Of course, the extra jargon could be confusing for a large audience being exposed to AI safety and alignment concerns for the first time. In the case of introducing the field to prospective entrants or students, keeping it simpler as you do may very easily be the better way to go.
Strongly upvoted. Thanks for your comprehensive review. This might be the best answer I've ever received for any question I've asked on LW.
In my opinion, given that these other actors who've adopted the term are arguably leaders in the field more than MIRI, it's valid for someone in the rationality community to claim it's in fact the preferred term. A more accurate statement would be:
- There is a general or growing preference for the term AI alignment be used instead of AI safety to refer to the control problem.
- There isn't complete consensus on this, but there may not be a good reason for that; it may only be due to inertia in the field from years ago, when the control problem wasn't distinguished as often from other ethics or security concerns about advanced AI.
Clarifying all of that by default isn't necessary but it would be worth mentioning if anyone asks which organizations or researchers beyond MIRI also agree.
Thanks for flagging this.
- I had presumed that "AI alignment" was being used as a shorthand for x-risks from AI, but I hadn't thought that through. I'm also not aware that anyone from the rationality community I've seen express this kind of statement really meant "AI alignment" to cover all x-risks from AI. That's my mistake. I'll presume they're referring to only the control problem and edit my post to clarify that.
- As I understand it, s-risks are a sub-class of x-risks, as an existential risk is not only an extinction risk but any risk of the future trajectory of Earth-originating intelligence being permanently and irreversibly altered for the worse.
There are several signals the government might be trying to send that come to mind:
- It may be only one government agency or department, or a small set of agencies/departments, that are currently focused on the control problem. They may also still need to work on other tasks with government agencies/departments that have national security as the greatest priority. Even if a department internally thinks about the control problem in terms of global security, they may want to publicly reinforce national security as a top priority to keep a good working relationship with other departments they work closely with.
- Whatever arms of the government are focused on the control problem may be signaling to the public or electorate, or politicians more directly accountable to the public/electorate, to remain popular and retain access to resources.
I previously wasn't as aware that this is a pattern in how so many people have experienced responses to criticism from Geoff and Leverage in the past.
Yeah. At this point, everyone coming together to sort this out, as a way of building a virtuous spiral that makes speaking up feel safe enough that it doesn't even need to be a courageous thing to do or whatever, is what I was getting at, and it's the kind of thing I think your comment also represents.
For what it's worth, my opinion is that you sharing your perspective is the opposite of making a mistake.
In the past, I've been someone who has found it difficult and costly to talk about Leverage and the dynamics around it, or about organizations that are or have been affiliated with effective altruism, though when I have spoken up, I've done so more than most. I would have done it more, but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up so often again with what sometimes amounted to nothing more than peer pressure.
That was a few years ago. For lots of reasons, it's now easier, less costly, and less risky for me, and easier not to feel fear. I don't know yet what I'll say regarding any or all of this related to Leverage, because I don't have any sense of how I might be prompted or provoked to respond. Yet I expect I'll have more to say, and I don't have any particular feelings yet about what I might share as relevant. I'm sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.
Those asking others to come forward with facts in the interest of a long(er)-term common good could establish norms that serve as assurance, or insurance, that someone will be protected against potential retaliation against their reputation. I can't claim to know much about setting up effective norms for defending whistleblowers, though.
I dipped my toe into openly commenting last week, and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post".
Leverage Research hosted a virtual open house and AMA a couple weeks ago for their relaunch as a new kind of organization that has been percolating for the last couple years. I attended. One subject Geoff and I talked about was the debacle that was the article in The New York Times (NYT) on Scott Alexander from several months ago. I expressed my opinion that:
- Scott Alexander could have managed his online presence much better than he did on and off for a number of years.
- Scott Alexander and the rationality community in general could have handled the situation much better than they did.
- Those are parts of this whole affair that too few in the rationality community have been willing to face, acknowledge, or discuss in terms of what can be learned from the mistakes made.
- Nonetheless, NYT was the instigating party in whatever part of the situation constituted a conflict between NYT and Scott Alexander and his supporters, and NYT is the party that should be held more accountable, and is more blameworthy, if anyone wants to make it about blame.
Geoff nodded, mostly in agreement, and shared his own perspective on the matter, which I won't share. Yet if Geoff considers NYT to have done one or more things wrong in that case, presumably he can appreciate why moves toward de-anonymizing a critic would be wrong here as well.
You yourself, Ryan, never made the mistake of posting your comments online in a way that might make it easier for someone else to de-anonymize you. If you made any mistake, it's that you didn't anticipate how adeptly Geoff would apparently infer or discern your identity. I expect it wouldn't have been so hard for Geoff to figure out it was you, because you shared information about the internal activities at Leverage Research that only a small number of people would have had access to.
Yet that's not something you should have had to anticipate. A presumption of good faith in a community or organization entails a common assumption that nobody would do that to their peers. Whatever Geoff himself has been thinking about you as the author of those posts, he understands exactly how de-anonymizing you, or anyone else, would also be considered a serious violation of a commonly respected norm.
Based on how you wrote your comment, it seems the email you received may have come across as intimidating. Obviously I don't expect you to disclose anything else about it, and I would respect and understand if you don't, but it seems the email may have been meant as a well-intended warning. If so, there is also a chance Geoff had discerned that you were the account-holder for 'throwaway' (at least at the time of the posts in question) but hasn't even considered the possibility of de-anonymizing you, at least in more than a private setting. Yet either way, Geoff has begun responding in a way that, if he were to act on it further, would only become more disrespectful to you, your privacy, and your anonymity.
Of course, if it's not already obvious to anyone, neither am I someone who has an impersonal relationship with Leverage Research as an organization. I'm writing this comment with the anticipation that Geoff may read it himself or may not be comfortable with what I've disclosed above. Yet what I've shared was not from a particularly private conversation. It was during an AMA Leverage Research hosted that was open to the public. I've already explained above as well that in this comment I could have disclosed more, like what Geoff himself personally said, but I haven't. I mention that to also show that I am trying to come at this with good faith toward Geoff as well.
During the Leverage AMA, I also asked a question that Geoff called the kind of 'hard-hitting journalistic' question he wanted more people to have asked. If that's something he respected during the AMA, I expect this comment is one he would be willing to accept being in public as well.
Regarding the example given of problems related to pseudoscientific quacks and cranks: at this point it seems obvious that we need to take for granted that there will be causal factors that, absent effective interventions, will induce large sections of society to embrace pseudoscientific conspiracy theories. In other words, we should assume that if there is another pandemic in a decade or two, there will be more conspiracy theories.
At that point in time, people will be wary of science again because they'll recall the conspiracies they believed in from the pandemic of 2019-2022 and how their misgivings about the science back then were never resolved either. Just as there is lingering skepticism of human-caused climate change now, in the post-QAnon world a couple of decades from now, don't be shocked if there are conspiracy theories about catastrophic natural disasters being caused by weather control carried out by the same governments that tried to convince everyone decades earlier that climate change was real.
At present, we live in a world where the state of conspiracy theories in society has evolved to a point where it's insufficient to think about them in terms of how they were thought about even a decade ago. Conspiracy theories like how the moon landing was faked, or even how 9/11 was a false flag attack, don't seem to have the weight and staying power of conspiracy theories today. A decade from now, I expect COVID-19 conspiracy theories won't be the butt of jokes the same way those other conspiracy theories are. Those other conspiracy theories didn't cause thousands of people to have their lives so needlessly shortened. I'm aware that in the last few years there has been increased investment in academia in researching the nature of conspiracy theories as a way to combat them.
It also doesn't help that we live in a time when some of the worst modern conspiracies, or otherwise clandestine activities by governments, are being confirmed. From lies early in the pandemic about the scientific consensus on the effectiveness of masks, to the common denial of any evidence that COVID-19 could have originated in a lab outbreak, there are examples regarding that specific case alone. From declassified documents in recent years proving CIA conspiracies from decades ago, to stories breaking every couple of years about the lengths governments have gone to in covering up their illegal and clandestine activities, it's becoming harder in general to blame anyone for believing conspiracy theories.
Given how low the rate of crankery among scientists is, and yet how that alone has proven sufficient to lend a veneer of scientific credibility to the most extreme COVID-19 conspiracy theories, it seems the main chokepoint won't be neutralizing the spread of the message at its original source, that small percentage of cranks among experts. (By neutralize I don't mean anything like stopping their capacity to speak freely, but countering them with other free speech, in the form of a strategy composed of communication tactics deploying the most convincing knock-down arguments as soon as any given crank is on the brink of becoming popular.) It's also self-evident that it's insufficient to try to undo the spread of a conspiracy theory once it's hit critical mass.
Based on the few articles I've read on the research that's been done on this subject in the last few years, the chokepoint in the conspiracy theory pipeline to focus on for the greatest impact may be neutralizing their viral spread as they first begin growing in popularity on social media. Again, with the cranks at the beginning of that pipeline, stopping the spread of so many conspiracy theories at their points of origin may prove too difficult. The best bet may not be to eliminate them in the first place but to minimize how much they spread once that spread becomes apparent.
This entails anticipating different kinds of conspiracy theories before they happen, perhaps years in advance. In other words, for the most damaging kinds of conspiracy theories one can most easily imagine taking root among the populace in the years to come, the time to begin mitigating the impact they will have is now.
Regarding the potential of prediction markets to combat this kind of problem, we could suggest that the prediction markets that are already related to the rationality community in some way begin facilitating predictions of future (new developments in) conspiracy theories starting now.
The fact that many scientists are awful communicators who are lousy at telling stories is not a point against them. It means that they were more interested in figuring out the truth than figuring out how to win popularity contests.
This implies to me that there is a market for science communicators whose careers specialize in winning popularity contests, but who do so to spread the message of scientific consensus in a way optimized to combat the most dangerous pseudoscience and misinformation/disinformation. It seemed like the Skeptics movement was trying to do the latter part, if not the part about winning popularity contests, at some point over a decade ago, but it's been sidetracked by lots of other things since.
For some science communicators to go about their craft in a way meant to win popularity contests may raise red flags about how it could backfire and those are potential problems worth thinking about. Yet I expect the case for doing so, in terms of cost-benefit analysis, is sufficient to justify considering this option.
First, don't trust any source that consistently sides with one political party or one political ideology, because Politics is the Mind Killer.
One challenge with this is that it's harder to tell what the ideology in question is. If anti-vaxxers are pulled from among the populations of wingnuts on both the left and the right, I'm inclined to take lots of people whose views consistently side with one political party much more seriously not only on vaccines but on many other issues as well.
It's quantitatively difficult to meet one million people, e.g., in terms of the amount of time it takes to accomplish that feat, and how qualitatively hard it is makes it seem almost undoable, but to me it's more imaginable. I've worked in customer service and sales jobs in multiple industries.
I never kept count well enough to know if I ever met one hundred people in one day, but it could easily have been several dozen people every day. I wouldn't be surprised if someone working the till at a McDonald's in Manhattan met over one hundred people on some days. Most people won't work a career like that for 27 years straight, but enough do. I expect I could recall hundreds of people I interacted with only once, but it would take a lot of effort to track all of that and it would still be a minority of them.
Nonetheless, I thought it notable that meeting one million people in one's own lifetime is common enough that it wouldn't surprise me if at least a few million people in the United States had met over one million other individuals.
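As a rough back-of-envelope check on the 27-year figure (assuming, purely for illustration, about one hundred new people met per day, every day of the year):

$$\frac{1{,}000{,}000\ \text{people}}{100\ \text{people/day} \times 365\ \text{days/year}} \approx 27.4\ \text{years}$$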
One overlooked complication here is the extent to which honor is still socially constructed in particular circumstances. One helpful way to frame practical ethics is to distinguish between public and private morality. Almost nobody subscribes to a value system that exists in a vacuum independent of the at least somewhat subjective influence of their social environment. Having integrity can sometimes still mean subverting one's personal morality to live up to societal standards imposed upon oneself.
To commit suicide after a sufficiently shameful act has been part of a traditional code of honor in some aspects of Japanese culture for centuries. Presumably not everyone who commits suicide in Japan out of a sense of duty and honor feels in their heart of hearts that it is definitely the right choice. Yet they still feel obliged to act upon a code of honor they don't believe in, the same way a soldier is still supposed to follow the orders of a commanding officer even if the soldier disagrees with them.
This mixture of what public and private morality mean for one's honor and integrity, to the point that people will sacrifice their lives for the sake of relatively arbitrary external societal standards, points to how honor can't be so easily distinguished from PR in this way.
PR is about managing how an antagonist could distort your words and actions to portray you in a negative light.
There are narrow contexts in which the overwhelming purpose of PR, to the exclusion of almost any other concern, is to manage how an antagonist could distort one's words and actions to depict one in a hostile way. That's not the only good reason for PR in general.
Much of PR is about finding the right ways to communicate accurately what an organization is trying to do. Miscommunication may trigger others into fearing what one really intends to do, so that they anticipate needing to respond with hostility when one starts acting to achieve one's stated goal. In such a case, one is antagonizing others by sending an errant signal. Minimizing the rate of communication errors is only one example of another reason organizations engage in PR.
I'm coming to this article by way of being linked from a Facebook group though I am also an occasional LessWrong user. I would have asked this question in the comments of the FB post where this post was linked, but since the comments were closed there, I'll ask it here: What was (or were) the reason(s) behind:
- Posting this to a FB group with the comments open;
- Waiting until a few comments had been made, then closing them on FB and then asking for commenters to comment on this LW post instead?
I understand why someone would do this if they thought a platform with a higher variance for quality of discourse, like FB or another social media website, was delivering a significantly lower quality of feedback than one would hope or expect to receive on LW. Yet I read the comments on the FB post in question, in a group frequented by members of the rationality community, and none of them stuck out to me as defying what have become the expected norms and standards for discourse on LW.
What seems to matter is (1) that such a focus was chosen because interventions in that area are believed to be the most impactful, and (2) that this belief was reached from (a) welfarist premises and (b) rigorous reasoning of the sort one generally associates with EA.
This seems like a thin concept of EA. I know there are organizations who choose to pursue interventions based on them being in an area they believe to be (among) the most impactful, and based on welfarist premises and rigorous reasoning. Yet they don't identify as EA organizations. That would be because they disagree with the consensus in EA about what constitutes 'the most impactful,' 'the greatest welfare,' and/or 'rigorous reasoning.' So, the consensus position(s) in EA of how to interpret all those notions could be thought of as the thick concept of EA.
Also, this definition seems to be a prescriptive definition of "EA organizations," as opposed to a descriptive one. That is, all the features you mentioned seem necessary to define EA-aligned organizations as they exist, but I'm not convinced they're sufficient to capture all the characteristics of the typical EA-aligned organization. If they were sufficient, any NPO that could identify as an EA-aligned organization would do so. Yet there are some that don't. An example of a typical feature of EA-aligned NPOs that is superficial but describes them in practice would be receiving most of their funding from sources also aligned with EA (e.g., the Open Philanthropy Project, the EA Funds, EA-aligned donors, etc.).
Technical Aside: Upvoted for being a thoughtful albeit challenging response that impelled me to clarify why I'm asking this as part of a framework for a broader project of analysis I'm currently pursuing.
Summary:
I'm working on a global comparative analysis of funding/granting orgs not only in EA, but also in those movements/communities that overlap with EA, including x-risk.
Many in EA may evaluate/assess the relative effectiveness of the orgs in question according to the standard normative framework(s) of EA, as opposed to the lens(es)/framework(s) through which such orgs evaluate/assess themselves, or would prefer to be evaluated/assessed by other principals and agencies.
I expect that the EA community will want to know to what extent various orgs are amenable to change in practice or self-evaluation/self-assessment according to the standard normative framework(s) of EA, however more reductive they may be than ones employed for evaluating the effectiveness of funding allocation in x-risk of other communities, such as the rationality community.
Ergo, it may be in the self-interest of any funding/granting org in the x-risk space to precisely clarify its relationship to the EA community/movement, perhaps as operationalized through the heuristic framework of "(self-)identification as an EA-aligned organization." I assume that includes BERI.
I care because I'm working on a comparative analysis of funds and grants among EA-aligned organizations.
For the sake of completeness, this will extend to funding and grantmaking organizations that are part of other movements that overlap with, or are constituent movements of, effective altruism. This includes existential risk reduction.
Most of this series of analyses will be a review, as opposed to an evaluation or assessment. I believe the more of those normative judgements I leave out of the analysis the better, leaving them to the community. I'm not confident it's feasible to produce such a comparative analysis competently without at least a minimum of normative comparison. Yet, more importantly, the information could, and likely would, be used by various communities/movements with a stake in x-risk reduction (e.g., EA, rationality, Long-Term World Improvement, transhumanism, etc.) to make those normative judgements far beyond what is contained in my own analysis.
I will include in a discussion section a variety of standards by which each of those communities might evaluate or assess BERI in relation to other funds and grants focused on x-risk reduction, most of which are run by EA-aligned organizations that form the structural core not only of EA, but also of x-risk reduction. Hundreds, if not thousands, of individuals, including donors, vocal supporters, and managers of the funds and grants run by EA-aligned organizations, will be inclined to evaluate/assess each of these funds/grants focused on x-risk reduction through an EA lens. Some of the funding/granting orgs in x-risk reduction may diverge in opinion about what is best in the practice and evaluation/assessment of funding allocation in x-risk reduction.
Out of respect for those funding/granting orgs in x-risk reduction that do diverge in opinion from those standards in EA, I would like to know that, so as to include those details in the discussion section. This is important because it will inform how the EA community engages with those orgs after my comparative analysis is complete. One shouldn't realistically expect many in the EA community to evaluate/assess such orgs through those orgs' own normative frameworks, e.g., where the norms of the rationality community diverge from those of EA. My experience is that they won't have the patience to read many blog posts about how the rationality community, as separate from EA, practices and evaluates/assesses x-risk reduction efforts differently than EA does, and why the rationality community's approaches are potentially superior. I expect many in EA will prefer a framework that is, however unfortunately, more reductive than applying conceptual tools like factorization and 'caching out' to parse more nuanced frameworks for evaluating x-risk reduction efforts.
So, it's less about what I, Evan Gaensbauer, care about and more about what hundreds, if not thousands, of others in EA and beyond care about in terms of evaluating/assessing funding/granting orgs in x-risk reduction. That will go more smoothly, both for the funding/granting orgs in question and for x-risk reducers in the EA community, if it's known whether those orgs fit into the heuristic framework of "identifying (or not) as an EA-aligned organization." Ergo, it may be in the interest of those funding/granting orgs to clarify their relationship to EA as a movement/community, even if there are trade-offs, real or perceived, before I publish this series of comparative analyses. I imagine that includes BERI.
Summary: I'm aware of a lot of examples of real debates that inspired this dialogue. It seems that in those real cases, disagreement with, or criticism of, public claims or accusations of lying against different professional organizations in effective altruism, or AI risk, has repeatedly been interpreted generically as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the different kinds of disputes raised by public accusations of lying, the repeated accusations, and the justifications for them, are built into long, complicated theories. These theories don't appear to respond at all to the content of the disagreements with the public accusations of lying and dishonesty, and that's why the repeated accusations and justifications are poorly received.
These complicated theories don't have anything to do with what people actually want when public accusations of dishonesty or lying are made: what is typically called 'hard' (e.g., robust, empirical, etc.) evidence. If you were to make narrow claims of dishonesty in more modest language, based on just the best evidence you have, and were willing to defend the claims on that basis, instead of making broad claims of dishonesty in ambiguous language based on complicated theories, they would be received better. That doesn't mean the theories of how dishonesty functions in communities, as an exploration of social epistemology, shouldn't be written. It's just that they don't come across as the most compelling evidence to substantiate public accusations of dishonesty.
For me it's never been so complicated as to require involving decision theory. It's as simple as this: the problem is that some of the basic claims are inflated into much larger, more exaggerated or hyperbolic claims. They also require readers, presumably a general audience from the effective altruism or rationality communities, to have prior knowledge of a bunch of things they may not be familiar with. They will only be able to parse the claims being made by reading a series of long, dense blog posts that don't really emphasize the thing these communities should be most concerned about.
Sometimes the claims being made are that GiveWell is being dishonest, and sometimes they are something like "because of this, the entire effective altruism movement has been totally compromised, and is also incorrigibly dishonest." There is disagreement, some of it disputing how the numbers were used in the counterpoint to GiveWell, and some of it about the hyperbolic claims that appear intended to smear more people than whoever at GiveWell, or elsewhere in the EA community, is responsible. It appears as though people like you or Ben don't sort through, parse, and work through these different disagreements or criticisms. It appears as though you just take all of that at face value as confirmation that the rest of the EA community doesn't want to hear the truth, and that people worship GiveWell at the expense of any honesty, or something.
It's also been my experience that, in these discussions of complicated subjects that appear very truncated to those unfamiliar with them, the instructions are simply to go read some much larger body of writing or theory to understand why and how people are deceiving themselves, each other, and the public in the ways you're claiming. This is often said as if it's completely reasonable to claim that it's the responsibility of a bunch of people with other criticisms or disagreements to go read tons of other content while you're calling people liars, instead of you finding a different way to say what you're trying to say.
I'm not even saying that you shouldn't publicly accuse people of being liars if you really think they're lying. If you believe GiveWell or other actors in effective altruism have failed to change their public messaging after being correctly shown, by their own convictions, to be wrong, then just say that. It's not necessary to claim that the entire effective altruism community is therefore also dishonest. That is especially the case for members of the EA community who disagree with you, not because they dishonestly refused to accept the facts they were confronted with, but because they were disputing the claims being made, and their interlocutor refused to engage, or deflected all kinds of disagreements.
I'm sure there are lots of responses to criticisms of EA that have been needlessly hostile. Yet reacting, and writing strings of posts, as though the whole body of responses were consistently garbage is just not an accurate picture of the responses you and Ben have received. Again, if you want to write long essays about the implications for social epistemology of how people react to public accusations of dishonesty, that's fine. It would just suit most people better if that were done entirely separately from the accusations of dishonesty. If you're publicly accusing some people of being dishonest, accuse those and only those people, very specifically. Stop tarring so many other people with such a broad brush.
I haven't read your recent article accusing some actors in AI alignment of being liars. This dialogue seems to be both about that and a response to other examples; I'm mostly going off those other examples. If you want to say someone is being dishonest, just say that. Substantiate it with the closest thing you have to hard or empirical evidence that some kind of dishonesty is going on. It's not going to work to rely on an idiosyncratic theory of how what someone is saying meets some technical definition of dishonesty that defies common sense. I'm very critical of a lot of things that happen in effective altruism myself. It's just that the way you and Ben have gone about it is so poorly executed, and backfires so much, that I don't think there is any chance of you resolving the problems you're trying to resolve with your typical approaches.
So, I've given up on keeping up with the articles you write criticizing things happening in effective altruism, at least on a regular basis. Sometimes others nudge me to look at them. I might get around to them eventually. It's honestly at the point, though, where the pattern I've learned to follow is not to be open-minded about whether the criticisms being made of effective altruism are worth taking seriously.
The problem I have isn't the problems being pointed out, or that different organizations are being criticized for their alleged mistakes. It's that the presentation of the problem, and the criticism being made, are often so convoluted I can't understand them, before I can even figure out whether I agree. I find that I am generally more open-minded than most people in effective altruism about taking seriously criticisms made of the community, or related organizations. Yet I've learned to suspend that for the criticisms you and Ben make, for the reasons I gave, because it's just not worth the time and effort.