Could this be an unusually good time to Earn To Give?
post by TomGardiner (HorusXVI) · 2025-03-04T21:51:19.148Z
This is a link post for https://forum.effectivealtruism.org/posts/dnKvAJcRko9ZPJcA6/could-this-be-an-unusually-good-time-to-earn-to-give
I think there could be compelling reasons to prioritise Earning To Give highly, depending on one's options. What follows is a "hot take": a quick explanation of this claim, together with a request for input from the community. It may not be a claim I would stand by upon reflection.
I base the argument on a few key assumptions, listed below. Each could be debated in its own right, but I would prefer to keep discussion of them outside this post and its comments. This is both for brevity and because, in making them, I am largely deferring to people better informed on the subject than I am. The Intelligence Curse by Luke Drago is a good backdrop for this.
- Whether or not we see AGI or superintelligence, AI will have significantly reduced the availability of white-collar jobs by 2030, and will continue to reduce it thereafter.
- AI will eventually drive an enormous increase in world GDP.
- The combination of these will produce a degree of wealth inequality that is both unprecedented and near-totally locked in.
If AI advances make white-collar workers redundant by outperforming them at lower cost, we are living in a dwindling window in which individuals can still determine their own financial destiny. Government action and philanthropy notwithstanding, one's assets may never grow appreciably again once one's labour has become replaceable. The window for starting a new profession may be shorter still, since entry-level jobs are likely the easiest to automate and companies will find it easier to stop hiring people than to start firing them.
That this may be the fate of much of humanity in the not-too-distant future seems really bleak. While my ear is not the closest to the ground on all things AI, my intuition is that humanity will not have the collective wisdom to restructure society in time to prevent this from leading to a technocratic feudal hierarchy. Frankly, I'm alarmed that, having engaged with EA consistently for 7+ years, I have only heard this discussed very recently. Furthermore, the Trump Administration has proven itself willing to use America's economic and military superiority to pressure other states into arguably exploitative deals (tariffs, offering Ukraine security guarantees in exchange for mineral resources) and to shed altruistic commitments (foreign aid). My assumption is that if this Administration, or a similar successor, oversaw the arrival of workplace-changing AI, the furthest it would cast its moral circle would be American citizens. Those in other countries may have very unclear routes to income.
Should this scenario come to pass, altruistic individuals who bought shares in the relevant companies before the economic explosion could do disproportionate good. The number of actors able to steer the course of the future at all will have shrunk by orders of magnitude, and I would predict that most of them will be more consumed by their rivalries than by any desire to help others; others have pointed out that this was generally the case in medieval feudal systems. Depending on the scale of investment, even a single such person could save dozens, hundreds, or even thousands of people from destitution. If that person possessed charisma or political aptitude, their influence over other asset owners could improve the lives of a great many. Given that being immensely wealthy leaves many doors open for conventional Earning To Give if this scenario doesn't come to pass (and I would advocate donating at least 10% of income along the way), it seems sensible to me for an EA to aggressively pursue wealth in the short term.
If one has a clear career path towards helping solve the alignment problem, or towards achieving the governance policies required to bring transformative AI into the world for the benefit of all, I unequivocally endorse pursuing it as a priority. These considerations are for those without such a clear path. I will now offer a vignette from my own circumstances, both to provide a concrete example and because I genuinely want advice!
I have spent four years serving as a military officer. A friend of mine works at a top financial services firm with a demonstrable preference for hiring ex-military personnel. He can point to examples of people with CVs arguably weaker than mine, in both military and academic terms, being hired into roles paying £250k/year. With his help, it is plausible that I could secure such a position. Based on concrete examples in which I upheld my ethics despite significant temptation not to, I am confident that I would suffer no more than trivial value drift while earning this wage, or on becoming ludicrously wealthy thereafter. I am also confident that I have demonstrated sufficient resilience in my current profession to handle life as a trader, at least for a while. With much less confidence, I believe I would be at least average in my ability to persuade other wealthy people to buy into altruistic ideals.
My main alternative is to seek mid-to-senior operations management roles at EA and adjacent organisations with a longtermist focus. I won't belabour why I think these roles would be valuable, nor do I mean to diminish the contributions that can be made in them. This theory of impact does, of course, rely heavily on whichever org I join actually delivering impactful results; money can almost certainly buy results, but of a fundamentally more limited kind.
So, should someone in my position Earn To Invest And Then Give, or work on pressing problems directly?