“A Harmful Idea”

post by philosophytorres · 2022-01-14T22:09:14.215Z · LW · GW · 7 comments


Phil Torres

I was originally planning on writing a short reply to Avital Balwit’s recent critique [EA · GW] of my critique of longtermism, but because her post gets almost everything wrong about my criticisms (and longtermism), I’m going to do what I did with Steven Pinker’s chapter on “Existential Threats” in Enlightenment Now and reproduce the entire text while interpolating comments/corrections/clarifications where needed. This makes the whole thing much more tedious, but I think it’s necessary to see how deeply misguided Balwit’s claims are. Please read to the end.

For context, note that Balwit is a researcher at the Future of Humanity Institute (FHI), which is run by Nick Bostrom, whose writings have been a major target of my critiques. Balwit is thus defending her boss's views in her EA Forum post, which presents an obvious conflict of interest. Furthermore, EA Forum posts that gain favor with leaders in the community (e.g., scholars at FHI) can win prizes [EA · GW] of up to $500, which provides an extra monetary incentive to make claims that do not undermine established beliefs within the community. [EDIT: I have just been informed that this was discontinued. My apologies for the error.] Indeed, one of the fundamental problems with the EA community is that they claim, on the one hand, to care about "crucial considerations," epistemic openness, and being "less wrong," yet, on the other, determine funding opportunities (given the $46.1 billion in committed funding to EA causes) based on whether one's views align with or deviate from these established views. There is room to debate within the general framework; but if you question the framework itself, you're in trouble. Hence, if Balwit had, say, agreed with my critique, her chances of future positions at FHI, or of funding, would disappear, for the same reason that critics of longtermism like myself will never get an EA grant. The vast fortune of EA and the aversion to criticism have made honest, open debate about the underlying philosophical problems with longtermism almost impossible. (This points to a serious issue with organizations like the Future of Humanity Institute: they want to remain affiliated with prestigious universities like Oxford but, at the same time, to undermine academic freedom by controlling what people at those organizations say publicly, e.g., what they publish. More on this in subsequent articles!)

 

———————————

Introduction 

What is longtermism?

Longtermism is the view that positively influencing the long term future is a key moral priority of our time. It’s based on the ideas that future people have moral worth, there could be very large numbers of future people, and that what we do today can affect how well or poorly their lives go [1].

This is very misleading, although I don’t think that is Balwit’s fault. (I’d blame the leading evangelists of the view.)

In the citation that Balwit provides, MacAskill says: “if we want to do the most good, we should focus on bringing about those changes to society that do the most to improve the very long-run future, and where the time scales involved here are much longer than people normally think about.” He describes this idea—longtermism—as “the most major kind of realization I've had over” the past 10 years.

"Doing the most good," i.e., maximizing the good, doesn’t just mean ensuring that those people who happen to exist in the future have lives that are “worth living” (whatever that means exactly). Consider two scenarios: (1) Joey hides a nuclear bomb in Manhattan that he sets to explode in exactly 1,000 years, killing exactly 1 million people who would have brought 1,000 utils into the world (thereby “improving” the world, they would say, impersonally). (2) Suzy creates a technology that will somehow (the details are unnecessary) prevent 1 million people who would otherwise have been born from being born in exactly 1,000 years, where there people would have brought 1,000 utils into the world. If maximizing the good, impersonally conceived, tallied up from the point of view of the universe (whatever that means exactly), is what matters, then these are indistinguishable. If you are willing to torture Joey to prevent him from hiding the bomb, then you should also be willing to torture Suzy to prevent her from creating that technology. Many moral theories would—assuming conditions that would virtually never actually obtain in the real world, such as perfect knowledge about the bomb, number of casualties, efficacy of torture, etc.—permit torturing Joey to save those lives, but I would not for a moment say that torturing Suzy is in any way permissible, even if torturing Joey is. Total impersonalist utilitarianism (by far the most compelling moral theory with respect to longtermism) has a problem here.

On the “stronger” versions of longtermism—e.g., the ones obsessed with calculating how many disembodied conscious minds could exist if we were to colonize space and build vast computer simulations powered by Dyson swarms that clutter every corner of our future light cone—“improving” the world means creating as many of these people as possible. That’s why it’s misleading to say “It’s based on the ideas that future people have moral worth,” which most non-experts will interpret as “insofar as someone exists in the future, they matter as much as people who exist today.” For Bostrom, Greaves, MacAskill, and the others (or at least on the views they articulate), the loss of a current life is no different, morally speaking, than the failure of a merely possible person (someone who, as such, doesn’t currently exist) to be born, assuming that person would have a “happy” (net-positive) life. This of course makes sense if we’re just fungible value containers. As Peter Singer writes:

Bostrom takes a similar view [as Parfit], beginning his discussion by inviting us to “assume that, holding the quality and duration of a life constant, its value does not depend on when it occurs or on whether it already exists or is yet to be brought into existence as a result of future events and choices.” This assumption implies that the value lost when an existing person dies is no greater than the value lost when a child is not conceived, if the quality and duration of the lives are the same. 

Greaves and MacAskill say this explicitly in their recent defense of longtermism.

In other words, bracketing the extra harms that death causes those who survive the decedent, the death of your best friend is no different, morally speaking, than the non-birth of a merely possible non-person who we could call “Jack.” In my view, the loss of my best friend is in an entirely different universe from the non-birth of Jack. But others disagree.

Humanity might last for a very long time. A typical species’ lifespan would mean there are hundreds of thousands of years ahead of us [2] and the Earth will remain habitable for hundreds of millions of years [3]. If history were a novel, we may be living on its very first page [4]. More than just focusing on this mind bending scope, we can imagine — at least vaguely — the characters that might populate it: billions and billions of people: people who will feel the sun on their skin, fall in love, laugh at a joke, and experience all the other joys that life has to offer. Yet, our society pays relatively little attention to how our actions might affect people in the future.

Concern for future generations is not a new idea. Environmentalists have advocated for the interests of future people for many decades. Concern for future generations is enshrined in the Iroquois Nation’s constitution. John Adams, the second U.S. president, believed American institutions might last for thousands of years [5], while Ben Franklin bequeathed money to American cities under the provision that it could only be used centuries later.

Completely different! “If people exist in 5,000 years, they shouldn’t have to deal with our nuclear waste” is radically not-the-same as “it’s very important that 10^45 (Greaves and MacAskill’s number) digital persons come to exist in the Milky Way.” I’d be willing to bet anyone a large sum of money that if one could have presented any of the five nations in the Iroquois Confederacy with Bostrom’s calculations of 10^58 future consciousnesses in computer simulations and asked them whether we should be fretting about these people not existing if, say, the existential catastrophe of “technological stagnation” were to occur, they would guffaw. This is partly why I say that one should care about the long term but not be a longtermist.

That being said, there are several distinctive aspects of recent longtermist research and thinking, including the sheer timescales under consideration, the particular global problems that have been highlighted, and the consideration for the immense potential value of the future. Those engaged in longtermist research often look for events that will impact not just centuries, but potentially the whole future of civilisation — which might amount to millions or even billions of years.

Let's be more precise (slight digression). Longtermism has a crucial empirical aspect: if physical eschatologists were to discover that the heat death (or proton decay, or whatever) will happen in 1,000 years, then we would have greater reason to focus on current people (i.e., short-termism would be consistent with longtermism). Longtermists like to talk in terms of billions or trillions of years, calculating a finite but astronomical number of merely possible people (current non-persons). However, some physicists have speculated that we could escape the heat death by, e.g., tunneling into a neighboring universe, something that we might be able to do iteratively. Hence, the future number of people could very well be infinite; and if so, however improbable this might be, we’re not dealing with a Pascal’s mugging but with Pascal’s wager. Longtermists tend to talk as if the former is the situation, when really it’s the latter, which is a very big problem that no one seems to be addressing. Calculations of expected value should use the number “infinity” rather than, e.g., “10^45” or “10^54” or “10^58.” But I digress.
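To make the structural point concrete, here is a minimal sketch of the expected-value arithmetic at issue, where N is the number of future people and Δp the reduction in existential risk; the Δp figure is purely illustrative (an assumption for exposition, not anyone’s published estimate), while 10^58 is the figure quoted above:

\[
\mathbb{E}[\text{value of reducing existential risk by } \Delta p] \;=\; \Delta p \cdot N .
\]

With finite stakes (say, Δp = 10^-10 and N = 10^58 future people), the product is 10^48 expected lives: astronomical but bounded, with the structure of a Pascal’s mugging. If instead N = ∞ (escape from the heat death via iterated tunneling into neighboring universes), the expected value is infinite for any nonzero Δp, and the argument takes the structure of Pascal’s wager.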

As for the global problems, a particular focus has been on existential risks: risks that threaten the destruction of humanity’s long-term potential [6]. Risks that have been highlighted by longtermist researchers include those from advanced artificial intelligence, engineered pathogens, nuclear war, extreme climate change, global totalitarianism, and others. If you care about the wellbeing of future generations, and take the long term seriously, then it’s of crucial importance to mitigate these or similarly threatening risks.

Again, very misleading. See above: this subtly equivocates between the conditional "insofar as these people exist" and "we must ensure that as many new happy people (living in vast computer simulations) come to exist as possible." Completely different.

Finally, recent longtermist thinking is distinct in its consideration of the magnitude of value that could exist,

What exactly is this “value”? It would have been nice for Balwit to have been clearer. For example, should we exist for the sake of maximizing “value,” or should “value” exist for the sake of benefitting people? I endorse the latter, not the former, while total impersonalist utilitarians endorse the former: the one and only reason that you and I matter is because we are the fungible containers of value; our death would be bad, impersonally speaking, from the point of view of the universe (i.e., the moral point of view), only if our utility is positive; otherwise it would improve the world. Is that the right way to think about human beings?

and the potential harm that could occur if we fail to protect it. For example, existential risks could bring about the extinction of humanity or all life on earth, the unrecovered collapse of civilisation, or the permanent, global establishment of a harmful ideology or some unjust institutional structure.

“Harmful ideology”—that was my thesis in the Aeon article!

Criticisms of longtermism

Phil Torres recently wrote two essays critical of longtermism (which this essay presumes the reader is already familiar with). Much of this criticism misses the mark because Torres does not accurately explain what longtermism is, and fails to capture the heterogeneity in longtermist thought.

As I repeatedly and explicitly say in the articles, my specific focus is the sort of “longtermism” found in Bostrom’s oeuvre, as well as recent publications by Ord, Greaves, and MacAskill. Indeed, I would refer to myself as a "longtermist," but not the sort that could provide reasons to nuke Germany (as in the excellent example given by Olle Häggström), reasons based on claims made by, e.g., Bostrom.

He does sometimes gesture at important issues that require further discussion and reflection among longtermists, but because he often misrepresents longtermist positions, he ultimately adds more heat than light to those issues. 

I do not mean to deter criticism in general.

See the end of this response: the EA community (specifically, those at the top of the power hierarchy) is not only unfriendly toward criticisms, but is actually preventing criticisms from being put forward. That’s … not good.

I have read critical pieces [EA · GW] which helped refine and sharpen my own understanding of what longtermism should be aiming for, but I think it is also important to respond to criticism — particularly to the elements which seem off-base.

One housekeeping note — this piece largely focuses on criticisms from the Aeon essay, as it is more comprehensive. I have tried to note when I am answering a point that is solely in the Current Affairs piece. 

Beware of Missing Context 

If this is what longtermism is, why does it seem otherwise in Torres’ articles? One answer is selective quotation. 

For example, Torres quotes Bostrom saying that “priority number one, two, three and four should … be to reduce existential risk”. But he omits the crucial qualifier at the beginning of the sentence: “[f]or standard utilitarians.” Bostrom is exploring what follows from a particular ethical view, not endorsing that view himself.

But I am not criticizing Bostrom himself, I am criticizing the view that Bostrom articulates. This is precisely what, for example, Peter Singer does in his The Most Good You Can Do, which I quoted above. I assume that Balwit would have a similar problem with Singer writing things like “Bostrom takes a similar view …”? If so, she’d be misguided. Neither Singer nor I am in any way off-base in talking about “Bostrom’s view” the way that we do.

As for the qualifier, I later make the case that an integral component of the sort of longtermism that arises from Bostrom (et al.)’s view is the deeply alienating moral theory of total impersonalist utilitarianism. This is what largely motivates the argument of “astronomical waste.” It is what animates most of Greaves and MacAskill’s paper on longtermism. Etc., etc. And for good reason. As Joseph Carlsmith once said at an EA conference, “the simplest argument for the long-term future is if you’re a hedonistic total utilitarian.” True. (By contrast, if you're a Scanlonian contractualist, then all those future merely possible lives never coming into existence means nothing and, consequently, longtermism doesn't get more than an inch off the ground.)

In his case, Bostrom is not even a consequentialist [7].

First, I don’t see Bostrom saying this in any of those papers. Second, it does not matter much whether Bostrom is a consequentialist; I am, once again, criticizing the positions articulated by Bostrom and others, and these positions have important similarities with forms of consequentialism like total impersonalist utilitarianism. Third, someone can be a consequentialist while denying that she or he is a consequentialist.

Much the same can be said for MacAskill and Greaves’s paper “The case for strong longtermism,” where they work through the implications of variations on “total utilitarianism” before discussing what follows if this assumption is relaxed. Torres cuts the framing assumptions around these quotations, which are critical to understanding what contexts these conclusions actually apply in.

They provide the case for longtermism; I am criticizing that case. Indeed, I would strongly encourage people to read the article (here) because I believe that doing so will clearly show that my critiques are on-point, substantive, and wholly accurate. (People should also read this and this, the latter of which, in particular, they might find horrifying because of its apparent endorsement of mass global, invasive surveillance. Bostrom endorses preemptive war/violence to ensure that we create a posthuman civilization here.)

More generally, it should be borne in mind that Torres quotes from academic philosophy papers and then evaluates the quoted statements as if they were direct advice for everyday actions or policy. It should not be surprising that this produces strange results — nor is it how we treat other philosophical works, otherwise we would spend a lot of energy worrying about letting philosophy professors get too close to trolleys.

The field of Existential Risk Studies, like a few other fields (e.g., contemporary climatology), has an integral activist component. It is not about describing the world (as it were) but actively changing it. It makes, and aims to make, prescriptions, some of which have infiltrated the worldviews of literally the most powerful human beings in all of human history (e.g., Musk). This is precisely why critiquing longtermism is an extremely urgent matter. Frankly, I find a lot of the longtermist literature sophomoric (e.g., the idea of the “Long Reflection” is, to me, truly one of the silliest ideas I've come across), but the fact that longtermism is a massively influential ideology and the EA community that champions it has some $46.1 billion in committed funding is why I spend way more time than I’d like on this issue. (In other words, intellectually I find it dull and uninteresting. Most of Bostrom’s work—which actually consists of ideas that originated with other authors(!)—is not sophisticated. But somehow Bostrom has become a massively influential figure. So I’m not looking away.)

In another instance, Torres quotes Bostrom’s paper “The Future of Humanity” to show how longtermism makes one uncaring towards non-existential catastrophes.

To quote Singer:

Should the reduction of existential risk really take priority over these other causes? Bostrom is willing to draw this conclusion: “Unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy.” To refer to donating to help the global poor or reduce animal suffering as a “feel-good project” on which resources are “frittered away” is harsh language (italics added).

In a section where Bostrom is distinguishing between catastrophes that kill all humans or permanently limit our potential, versus catastrophes that do not have permanent effects on humanity’s development, Torres highlights the fact that Bostrom calls this latter group of events “a potentially recoverable setback: a giant massacre for man, a small misstep for mankind.” Torres does not mention the very next line where Bostrom writes, “[a]n existential catastrophe is therefore qualitatively distinct from a ‘mere’ collapse of global civilization, although in terms of our moral and prudential attitudes perhaps we should simply view both as unimaginably bad outcomes.”

Bostrom distinguishes between the concepts of an existential catastrophe and the collapse of civilization, and immediately suggests that we should regard both as unimaginably bad. The non-existential catastrophe does not shrink in importance from the perspective of longtermism. Rather, the existential catastrophe looms even larger — both outcomes remain so bad as to strain the imagination.

This is where context really does matter. In the broader context of Bostrom’s work, the notion of “unimaginably bad” does virtually zero work, unlike “existential risk,” “maxipok,” etc., which do a lot of work. See, for example, Singer’s discussion of Bostrom’s position. Singer gets it right (and so do I). The overall thrust of Bostrom’s position is clear if one considers the totality of his claims.

A particularly egregious example of selective quotation is when Torres quotes three sentences from Nick Beckstead’s PhD thesis, where Beckstead claims it is plausible that saving a life in a rich country is potentially more instrumentally important — because of its impacts on future generations — than saving a life in a poor country. In his Current Affairs piece, Torres claims that these lines could be used to show that longtermism supports white supremacy.

I would strongly encourage all readers to contact, e.g., a professor whose work focuses on white supremacy and ask them about Beckstead’s passage. Indeed, give them the whole chapter to read. That’s what I did. That’s actually how I came to this conclusion.

All Torres uses to support this claim are three sentences from a 198-page thesis. He states, before he offers the line, that “[Toby] Ord enthusiastically praises [the thesis] as one of the most important contributions to the longtermist literature,” not bothering to note that Ord might be praising any of the other 197 pages. It is key to note that the rest of the thesis does not deal with obligations to those in rich or poor countries, but makes the argument that “from a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.” It is primarily a forceful moral argument for the value of the long term future and the actions we can take to protect it.

First of all, what Beckstead wrote is truly terrible. I have suggested that CEA denounce it on their website, but they have not taken that advice, so far as I know. Second, this is an absurd argument. Imagine that a philosophical ally of mine writes a book that I think is absolutely fantastic (I say so publicly). At one point, just in passing, the author claims that California wildfires were ignited by Jewish space lasers. The author says nothing else about this. Now, if I were to declare, “This is the best book ever,” one might reasonably have some questions! Even more, though, Beckstead’s claim about prioritizing the saving of lives in rich countries over those in poor countries follows directly from his argument. To extend the analogy, then, it would be as if the rest of the book I had praised as the best ever itself laid out an argument that actually supports the claim that California wildfires were ignited by Jewish space lasers. (Continued below …)

Torres also does not place the quotation with the relevant context about the individual. Nick Beckstead was among the first members of Giving What We Can [? · GW], a movement whose members have donated over 240 million dollars to effective charities, primarily in lower income countries. Beckstead joined GWWC in 2010 when it was solely global poverty focused, founded the first GWWC group in the US, donated thousands of dollars to global poverty interventions as a graduate student making about $20,000 per year, and served on the organization's board. This was all happening during the time he wrote this dissertation. 

Again, I’m criticizing an idea, not the actions of particular people—an idea (longtermism) that has influenced quite literally the single most powerful human being in all of human history. Good for Beckstead for (genuinely) donating to global poverty causes, something that I very much support. And good for, say, someone who accepts arguments that lead to racist conclusions but does not act in racist ways in real life (to extend the analogy). I’m still going to criticize their arguments.

Also worth noting that the trend right now, so far as I know, within the EA community is away from global poverty and toward longtermism. I think Holden Karnofsky is an example, but I also recall seeing statistics. That's very unsettling: real people, poor people, are going to get left behind once again because the number of merely possible non-persons in computer simulations is so huge (10^58—although, again, the number entered into expected value calculations should be "infinity").

But what about that specific quote? In this quote, Beckstead is looking at which interventions are likely to save the lives of future people

One can’t save the life of someone who doesn’t exist.

 — who are, inherently, a powerless and voiceless group. When he says that he might favor saving a life in a wealthy country he is not saying this because he believes that person is intrinsically more valuable.

That’s not what I’m saying, though, Balwit!

As a utilitarian-leaning philosopher, Beckstead holds that these lives have the same intrinsic value.

So, to be clear, imagine a runaway trolley. Standard situation: one person on track 2, five people on track 1. We might all agree that we should throw the switch, directing the trolley down track 2. But now imagine that there are no people on track 1, but for reasons that are causally complicated, if the trolley continues down track 1, five merely possible people (current non-persons) will never be born. Apparently, Beckstead would still throw the switch, thereby killing the one actual person. (This is one of the fundamental reasons that I am not a “longtermist,” contra Balwit's claim; it also links directly to the torture scenario mentioned above.)

He then raises another consideration about the second-order effects of saving different lives: the person in the wealthy country might be better placed to prevent future catastrophes or invent critical technology that will improve the lives of our many descendents. Additionally, in the actual line Torres quotes, Beckstead writes that this conclusion only holds “all else being equal,” and we know all else is not equal — donations go further in the lower-income countries, and interventions there are comparatively neglected, which is why many prominent longtermists, such as Beckstead, have focused on donating to causes in low income countries. The quote is part of a philosophical exploration that raises some complex issues, but doesn't have clear practical consequences. It certainly does not mean that in practice longtermists support saving the lives of those in rich countries rather than poor.

It should be unsettling for starving people in poor countries that it is only “in practice,” not in theory, that they should, in fact, get the help they need. My point, though, is that a theory that leads to the “in theory” conclusion that we should care more about people in rich countries has gone seriously wrong, for the same reason that a theory that leads to the conclusion that we should kill babies has gone seriously wrong, even if, in practice, there are contingent reasons we should not actually kill babies.

In general, throughout the two Torres pieces, one should be careful to take any of the particularly surprising quotations about longtermism at face value because of how frequently they are stripped of important context.

This conclusion could only be compelling, in my view, if one hasn’t read the whole body of literature closely. In my experience, even folks at FHI are sometimes surprised to discover some of the truly wild, awful things that Bostrom has written, i.e., "the particularly surprising quotations" are much more likely to appear surprising because most people haven't carefully read Bostrom's work.

Reading the complete pieces they came from will show this. It seems that Torres goes in with the aim to prove longtermism is dangerous and misguided, and is willing to shape the quotes he finds to this end, rather than give a more balanced and carefully-argued view on this philosophy.

It is very, very dangerous. Häggström captures this nicely in the passage of Here Be Dragons that I excerpt in the Aeon article (nuking Germany). Similarly, Peter Singer writes:

But, as Phil Torres has pointed out, viewing current problems—other than our species’ extinction—through the lens of “longtermism” and “existential risk” can shrink those problems to almost nothing, while providing a rationale for doing almost anything to increase our odds of surviving long enough to spread beyond Earth.

If someone were to take the longtermist view that Bostrom (et al.) proposes seriously, it could have absolutely catastrophic consequences, leading to nuclear first strikes (for the sake of future “value”), the implementation of a global invasive surveillance system, people being inclined to ignore or minimize non-runaway climate change, and so on. Others who have pointed this out include Vaden Masri (here) and Ben Chugg (here).

Also worth noting that Balwit ignores the many, many other quotes that I provide from Bostrom and others that are genuinely atrocious. This is a good argumentative strategy on Balwit’s part, but it doesn’t mean that these quotes have somehow stopped existing.

Now I would like to go through the various criticisms that Torres raises about longtermism and answer them in greater depth.

Climate change

Torres is critical of longtermism’s treatment of climate change. Torres claims that longtermists do not call climate change an existential risk, and he conflates not calling climate change an existential risk with not caring about it at all.

Where do I say that? Where do I say that longtermists don’t at all care about climate change? Balwit is completely misrepresenting my argument. Nowhere do I make such a strong claim, nor would I.

There are several questions to disentangle here:

The answer to the first question is straightforward. Longtermists do care about climate change.

Multiple problems with this. First, I wonder whether Balwit and I are using the word “longtermist” in the same way. Are we talking about the same class of people? I have no doubt that some self-described “longtermists” care about climate change; indeed, I know some folks who would call themselves “longtermists” but think the views defended by Bostrom, Ord, etc. are despicable for one reason or another (including myself). Once again, it is their version of longtermism that is my target, and indeed those who have defended it have repeatedly claimed that existential risk mitigation should be our primary focus, and that climate change almost certainly isn’t an existential risk. How many more quotes of them saying this must I provide?

There are researchers at longtermist organizations

Note: I wouldn’t really describe CSER as a “longtermist organization.” Too many non-longtermists and longtermism skeptics there.

who study climate change and there are active debates [EA · GW] among longtermists over how best to use donations to mitigate climate change, and longtermists have helped contribute millions to climate change charities. There is active discussion about nuclear power and how to ensure that if geoengineering is done, that it is done safely and responsibly. These are not hallmarks of a community that does not care about climate change.

I never said otherwise!

Although it is fair to say that longtermists direct fewer resources towards it than they do towards other causes like biosecurity or AI safety — this also has to do with how many resources are already being directed towards climate change versus towards these other issues, which will be discussed more here [EA · GW]

There is disagreement among longtermists on the empirical question about whether, and the degree to which, climate change increases the risk of the full extinction of humanity or an unrecoverable collapse of civilization. Some think climate change is unlikely to cause either outcome. Some think it is plausible that it could. Open questions include the extent to which climate change:

Interpolation: Two centuries ago, the centurial probability of human extinction was utterly negligible. Today, it is perhaps between 15 and 25 percent, and will almost certainly continue to grow in the future (as Ord himself suggests in The Precipice—at the risk of assuming that what he writes reflects what he actually thinks, which seems not to be the case). And no, space colonization isn’t going to save us from ourselves. So, talk of technological progress, especially in the context of “existential risks,” seems a bit perverse to me! (Not to mention the fact that folks in the Global North are almost entirely responsible for this radical rise in extinction risk, which of course threatens those everywhere else in the world, who might want nothing to do with such insanity, with total annihilation. That’s deeply unfair, a moral issue that I've never once heard any "longtermist" address.)

Neither group disagrees that climate change will have horrible effects that are worth working to stop.

Never said otherwise. Pointed out, though, that if one takes seriously what is actually written in the x-risk literature, climate change is only something to really worry about if there’s a runaway greenhouse effect. Otherwise, don’t fritter away our finite resources!

Finally, on the terminological question: for longtermists who do not think climate change will cause the full extinction of humanity or an unrecoverable collapse of civilization, it makes sense that they do not call it an existential risk, given the definition of existential risk.

Correct. In fact, I would tentatively agree that climate change is unlikely to prevent us, in the long run, from fully subjugating the natural world, maximizing economic productivity, colonizing space (bracketing the argument in the above link), and simulating 10^58 people in vast computer simulations.

We have terms to designate different types of events: if someone calls one horrible event a genocide and another a murder, this does not imply that they are fine with murders. Longtermists still think climate change is very bad, and are strongly in favour of climate change mitigation.

Again, this is a complete misrepresentation of what I say. It’s good to know that longtermists think that mitigation policies should be implemented. But as Bostrom writes—once again—when faced with mitigating two risks, given finite resources, we shouldn’t “fritter … away” those resources on “feel-good projects.” That's his stated view, and I shouldn't be admonished for quoting him.

Torres gestures angrily multiple times that longtermists are callous not to call climate change an existential risk, but he does not even argue that climate change is one.

Why would I need to? In critiquing longtermism, I’m adopting the longtermist phraseology. In that phraseology, “existential risk” has a very specific definition—e.g., a definition that actually groups a devastating, horrible nuclear conflict in which 9 billion people starve to death together with humanity living great lives at roughly our level of technological development for the next 800 million years (at which point we die with dignity, let’s say, as the oceans evaporate). These are co-categorical; both are existential catastrophes. I strongly disapprove of this definition; it’s outrageous. That said, is climate change an “existential risk” on a much more reasonable, less Baconian-capitalistic definition of the term? Absolutely. But this is precisely the point: a theoretical framework that says (a) existential risk mitigation is the top global priority; minuscule reductions in x-risk are the moral equivalent of saving billions of actual people; give to the rich (unless contingent circumstances mean that we should do otherwise); don’t fritter away your resources on global poverty and animal welfare (from Singer); etc.; and (b) that climate change, unless there’s a highly improbable nonlinear catastrophe, isn’t an existential risk, is deeply flawed.

In the Aeon piece, he at times refers to it as a “dire threat” and that climate change will “caus[e] island nations to disappear, trigge[r] mass migrations and kil[l] millions of people.” Longtermists would agree with these descriptions — and would certainly think these are horrible outcomes worth preventing. What Torres does not argue is that climate change will cause the “premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development [11]"

At this point, I hope it is becoming clear just how far Balwit misses the target. Nearly every paragraph is chock-full of inaccurate or misleading claims; perhaps such propaganda is not surprising given that, once again, her boss is the guy she’s defending. People who defend their bosses tend to get rewarded, while those who undermine their boss's views (academic freedom be damned!) tend to get punished. So, I’m going to stop commenting in just a bit, for now, and tackle the second half of her blog post next week.

There are many reasons for longtermists to care about climate change. These include the near-term suffering it will cause, that it does have long-term effects [12], and that climate change will also worsen other existential threats, which we will return to below. Additionally, many climate activists who have never heard the word “longtermism” are motivated by concern for future people, for example, appealing to the effects of climate change on the lives of their children and grandchildren.

I’d bet ya lots of money that most climate activists think, or would think, Bostromian/Ordian longtermism/existential risk are profoundly implausible ideas. This is the one thing actually in my favor: most folks find the existential risk framework utterly bizarre. Because it is.

Potential 

Torres never clearly defines longtermism. Instead Torres writes, “The initial thing to notice is that longtermism, as proposed by Bostrom and Beckstead, is not equivalent to ‘caring about the long term’ or ‘valuing the wellbeing of future generations’. It goes way beyond this.” What Torres takes issue with is that longtermism does not just focus on avoiding human extinction because of the suffering involved in all humans being annihilated, but that there is some further harm from a loss of “potential.” 

Torres misses something here and continues to do so throughout the rest of the piece — potential is not some abstract notion, it refers to the billions of people [13] who now do not get to exist.

Balwit contradicts herself here. Those “people” are not people; they are non-persons. They do not exist. They are figments of our imagination. How much more abstract can one get? Worse, these non-existent non-persons in computer simulations ten trillion years from now are said to have “interests” that matter no less than those of the actual human beings living right now in destitute regions of the world, dying of starvation because of climate change caused by the Global North. (I am well aware of the problems with person-affecting views. If Balwit would like to get more technical, she should let me know. Happy to have that conversation.)

Imagine if everyone on earth discovered they were sterile. There would obviously be suffering from the fact that many living wanted to have children and now realize they cannot have them, but there would also be some additional badness from the fact that no one would be around to experience the good things about living.

No, there wouldn’t. Do you bemoan the fact that Jack—a merely possible person who could have been born two decades ago but now never will because the egg and sperm that would have created him no longer exist—was never born? Should we cry for the billions and billions and billions and billions of people who could have come into existence but now never will, just the way we cry about people who die in real, actual genocides, wars, etc.? Obviously not. Figments of our imagination should not keep us awake at night.

We might be glad that no one is around to experience suffering. But there would also be no one around to witness the beauty of nature, to laugh at a joke, to listen to music, to look at a painting, and to have all the other worthwhile experiences in life. This seems like a tragic outcome [14].

Many people, probably the majority of philosophers, disagree. (A colleague and I should soon have a survey out showing this.) Here are just four examples: here, here, here, here.

Torres keeps his discussion of potential abstract and mocks the grand language that longtermists use to describe our “extremely long and prosperous future,” but he never makes the connection that “potential” implies actual experiencing beings. Extinction forecloses the lives of billions and billions. Yes, that does seem terrible.

Jack isn’t suffering because he wasn’t born. He doesn’t exist. People who don’t exist cannot cry, bleed, be afraid, starve, regret, worry, etc., the way actual people can. Think about what’s happening here. My sociological hypothesis is that this view is attractive to wealthy white people (see the statistics about the EA community) because it provides yet another reason for such individuals not to care about the sufferings of those in the Global South. Instead, it focuses attention away from them and onto imaginary non-persons who may or may not ever come to exist billions or trillions of years from now. It’s the perfect religious worldview for wealthy white people. Indeed, it’s not the “grand” language that I chuckle about; it’s language that is virtually indistinguishable from the sort of language found in religious texts (e.g., "vast and glorious").

Like with longtermism, Torres does not offer a clear definition of existential risk. Existential risks are those risks which threaten the destruction of humanity’s long-term potential.

Note that the term has had many definitions. I published an entire paper on this in Inquiry (although it now needs to be updated). For example, Bostrom’s original definition was made explicitly in transhumanist terms: an x-risk is any event that would prevent us from creating a civilization of posthumans. That’s it—that’s the scenario that we must avoid, priority one, two, three, and four. (In fact, Bostrom even published it in a journal that had just been renamed from the Journal of Transhumanism, and I doubt that any mainstream journal would have published anything of the sort. Historically, Existential Risk Studies emerged directly from the modern transhumanist movement that itself took shape mostly in the 1990s. And this makes sense: transhumanists want to keep their technological cake and eat it, too.)

Some longtermists prefer to focus only on risks which could cause extinction, because this is a particularly crisp example of this destruction. There are fairly intuitive ways to see how other outcomes might also cause this: imagine a future where humanity does not go extinct, but instead falls into a global totalitarian regime which is maintained by technological surveillance so effective that its residents can never break free from it.

Or imagine a world in which we decide not to create more “advanced” (i.e., super-dangerous) technology, but focus instead on being very happy living in wonderful little green communities where everyone is friendly and everyone owns a kitten and watches the sunsets and marvels at the firmament for the remainder of Earth’s history. This scenario, too, would count as an “existential catastrophe.” Note also that Bostrom has literally argued that we should seriously consider implementing a global, invasive surveillance system that monitors everyone on the planet! My gosh!

Returning to the totalitarian scenario:

That seems like a far worse future than one where humanity is free to govern itself in the way it likes. This is one example of “locking in” a negative future for humanity, but not the only one. It seems that working to prevent extinction or outcomes where humanity is left in a permanently worse position are sensible and valuable pursuits.  He also quips that longtermists have coined a “scary-sounding term” for catastrophes that could bring about these outcomes: “an existential risk.” It seems deeply inappropriate to think that extinction or permanent harm to humanity should be met with anything other than a “scary-sounding” term.

I don’t think that failing to create trillions and trillions of people (nearly all of whom would exist in computer simulations, according to longtermist calculations) deserves the scary-sounding term “existential risk.”

I will return to the rest of this later. But the accuracy of the text doesn’t get any better.

Before closing, I’d like to point out that there is really no reason for anyone in the EA community to engage with my writings, so I was surprised to see this. This whole “debate” was over before it began, and the anti-longtermism crowd has already lost. Why? Because there is simply no way that anyone can compete with $46.1 billion in committed funding. The marketplace of ideas is run in large part by dollar bills, not good arguments; there are so many desperate philosophers and academics that flashing large grants in front of them is enough for them to accept the project’s goals even if they don’t agree with the project’s assumptions. I know, personally, so many people in the EA community, including some at the leading institutes, who find longtermism atrocious, but have no plans of saying so in public (or even in most private settings) for the obvious reason that they don’t want to lose funding opportunities. My own example provides a case study in what happens when folks criticize the community, which really doesn’t like to be criticized, despite claiming the opposite. I lost three collaborators, for example, not because colleagues disagreed with me (to the contrary), but because they were afraid that, by continuing to work with me, they’d incur the wrath of their colleagues, with repercussions for their careers. As David Pearce, who co-founded the World Transhumanist Association with Bostrom in 1998, and many others have noted, “cancel culture” is a real problem within the EA community (“Sadly, Phil Torres is correct to speak of EAs who have been ‘intimidated, silenced, or canceled.’”), which stems in part from its extreme elitism. (Indeed, as Ben Goertzel writes: “For instance, the elitist strain of thinking that one sees in the background in Bostrom is plainly and openly articulated in Yudkowsky, with many of the same practical conclusions (e.g. that it may well be best if advanced AI is developed in secret by a small elite group).” Longtermists, especially those at FHI, are deeply anti-democratic.)

So, EA folks, there is nothing to gain by engaging with my critiques. These critiques may have alerted a handful of academics and members of the public to the religious extremism of longtermist thinking, but in the long run good ideas cannot compete with billions of dollars. Obviously, this poses a direct and immediate problem for EA’s supposed commitment to epistemic openness and crucial considerations: those who are critical of the movement don’t get funding, and those who get funding but become critical of the movement lose it. If EA really cared about crucial considerations, it would fund work by critics. But it doesn’t, and won’t, and this is partly why I think it’s very much a secular religion built around the worship of future value, impersonally conceived.

Please read my Aeon article here: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

Please read my Current Affairs article here: currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk?fbclid=IwAR1zhM1QqoEqF1bhNicGcZ5la7wGp3Q4hU0t9ytfbM2gBGrhOjGhFOl-NC8


 

7 comments

Comments sorted by top scores.

comment by Vaniver · 2021-12-22T18:36:02.201Z · LW(p) · GW(p)

I would strongly encourage all readers to contact, e.g., a professor whose work focuses on white supremacy and ask them about Beckstead’s passage. Indeed, give them the whole chapter to read. That’s what I did. That’s actually how I came to this conclusion.

I think 'white supremacy' is, unfortunately, a pretty loaded term in a culture war, which will almost necessarily lead to people talking past each other. ["I'm not the KKK!" -> "I wasn't saying you're like the KKK, I'm saying that you're perpetuating a system of injustice."]

I think that often when this accusation is levied, it's done by someone who is trying to be less selfish against someone who is probably being more selfish. For example, if I were to talk about immigration restrictions as being white supremacist because they structurally benefit (more white) citizens at the expense of (less white) non-citizens, you could see how the label might fit (even tho it might not, at all, be the frame chosen by the immigration restrictionist side, especially in a place like France which has done quite a lot to detach citizenship and race), and also how someone interested in fairness might immediately lean towards the side paying for the transfer instead of receiving it.

I think this is probably not the case here, where I think Bostrom and Beckstead and others have identified moral patients who we could help if we chose to, and people interested in social justice have identified moral patients who we could help if we chose to, and so both sides are pushing against selfishness and towards a more fair, more equal future; the question is how to compare the two, and I think terms of abuse probably won't help.

comment by Charlie Steiner · 2022-01-18T06:31:01.677Z · LW(p) · GW(p)

I feel like this argument isn't aided by people on both sides missing that it's okay (not to mention expected) to have complicated preferences about the universe. Reason is the slave of the passions - you shouldn't arrive at population ethics by reasoning from simple premises, because human values aren't simple.

(Of course, given the choice between talking about the world as people care about it versus not having to admit they were on the wrong track, I'm sure many philosophers would say that trying to make population ethics simple was a great idea and still is, we just need to separate the "ethics" in question from what people actually will act on or be motivated by.)

So far it probably sounds like I'm agreeing with you and dunking on those professional philosophers at FHI, but that's not so. A lot of the common-sense problems with longtermism also only work if you assume that people can't have complicated preferences about how the future of the universe goes. E.g. you can just not like people getting blown up by bombs, this doesn't mean you have to want people to be packed into the universe like sardines.

You might say that this makes me an odd duck, and that I (along with CSER et al.) am not a true scotsman longtermist. I would counter that actually, pretty much everyone takes practical actions based on this correct picture where they have complicated preferences about how they want the universe to end up, but because philosophy is confusing many people make verbal descriptions of ethics using wrong pictures.

comment by M. Y. Zuo · 2022-09-15T15:39:51.055Z · LW(p) · GW(p)

Just discovered this 9 months after the fact. To be fair to Phil, the OP, since the post is at -41 karma (22 total votes) and has only 2 top level comments in 9 months other than his own, it does lend a bit of credibility to the claim that some kind of 'cancelling' happened. 

But, Phil's comments seem to be geared to incite emotional discomfort as part of an agitation strategy. I wouldn't be surprised if a lot of folks downvoted out of principle and didn't care enough to engage with the claims. So the overall claim is not that convincing, bit of an own-goal.

Maybe Nick Bostrom and his supporters have some weird ideas, and weird claims that are glossed over; that seems possible. Yet to then impute that they must be degenerates seems like a nasty rhetorical trick, which really undermines Phil's position in the eyes of any experienced reader.

After all anyone could equally claim Phil must also be a degenerate because of x, y, or z. Needless to say this line of reasoning is ridiculous as it could lead to the conclusion that everyone on Earth can become suspect just by saying these magic words.

There's also the meta problem of discussions like this quickly devolving to Godwin's law, where multiple sides compete to come up with clever ways to analogize each other's positions to the least defensible group.

If Phil refines and only presents his strongest claims there might be more serious discussion on the alleged drawbacks of 'longtermism'. 

comment by philosophytorres · 2021-12-22T16:56:52.206Z · LW(p) · GW(p)

I guarantee that the religious ideologues who have so far downvoted this haven't read it.

comment by Dr. David Mathers · 2021-12-22T17:47:31.731Z · LW(p) · GW(p)

It's pretty telling that you think there's no chance that anyone who doesn't like your arguments is acting in good faith. I say that as someone who actually agrees that we should (probably, pop. ethics is hard!) reject total utilitarianism on the grounds that bringing someone into existence is just obviously less important than preventing a death, and that this means that longtermists are calling for important resources to be misallocated. (That is true of any false view about how EA resources should be spent though!) But I find your general tone of 'people have reasons to be biased against me so therefore nobody can possibly disagree with me in good faith or non-fanatically' extraordinarily off-putting, and think its most likely effect is to cause a backfire where people in the middle move towards the simple total utilitarian view.

comment by TAG · 2022-01-18T19:06:21.519Z · LW(p) · GW(p)

It's also telling that there are lots of downvotes, and very little critique.

comment by Tobias H (clearthis) · 2022-01-20T07:44:31.882Z · LW(p) · GW(p)

There has been quite a lot of discussion over on the EA Forum:
https://forum.effectivealtruism.org/search?terms=phil%20torres [? · GW]

Avital Balwit linked to this lesswrong post in the comments of her own response to his longtermism critique (because Phil Torres is currently banned from the forum, afaik):
https://forum.effectivealtruism.org/posts/kageSSDLSMpuwkPKK/response-to-recent-criticisms-of-longtermism-1#6ZzPqhcBAELDiAJhw [EA(p) · GW(p)]