Pre-ASI: The case for an enlightened mind, capital, and AI literacy in maximizing the good life
post by Noahh (noah-jackson) · 2025-02-21T00:03:47.922Z · LW · GW · 3 comments
I. Premises
- The goal is to live a life well lived, measured by maximizing well-being (whatever your utility function is).
- ASI would be able to engineer mental states through advanced pharmaceutical interventions, brain machine interface, etc., that radically improve human experience relative to the greatest pleasures on offer in today's world. If one assumes these advancements will occur in our lifetime, then the potential utility in the post-ASI period far exceeds anything achievable pre-ASI. This shifts the calculus — the pre-ASI period should be used to best position oneself in a post-ASI world in order to secure access to said technologies.
This article explores three high-leverage ways one can spend their time in the pre-ASI period given those premises. The optimal distribution of time spent on each depends on ASI timelines and individual circumstances, but the principles behind them remain true.
II. The cases:
1. Enlightened mind
Enlightened mind: A secure mind that can remain equanimous amidst any external stressor, is in control of itself, and desires the “right” things.
In a world without ASI, the best way to maximize the good life would be to directly improve one’s experience of life through creating a ‘happier’ mind. The reason this still holds importance, even in an ASI trajectory, is the non-trivial chance that humans become extinct. If extinction occurs, the only thing that matters is the life lived pre-ASI. In this case, an enlightened mind serves as a hedge against extinction.
On top of that, if humanity is on its way out, the transition is likely to be tumultuous. The world would face severe instability, with institutions collapsing and the gradual loss of friends and family. A mind resilient to suffering, whatever the external circumstances, would significantly mitigate distress during this period. Even in the scenario where we avoid extinction, there will still be massive societal change and constant uncertainty on the way to utopia.
A mind that cannot be overwhelmed will also provide clarity and enable better decision-making in the rapidly changing landscape. Put simply, if we can’t get a single good night’s sleep because we’re stressing too much about ASI, we’re unlikely to be effective during the transition.
Some might argue that if ASI leads to an unimaginably high-utility future, then even with something like a 90% risk of extinction, the rational move would still be to focus entirely on preparing for an ASI world rather than hedging, given the expected utility of that future. One’s comfort level with being mugged, in the Pascal’s-mugging sense, ultimately determines one’s stance on this issue.
Even if we’re comfortable with getting mugged, an enlightened mind may still be crucial for a different reason. If humans remain in control of AI, we may simply use it as a tool to more doggedly pursue our existing desires. However, if those desires aren’t optimal for well-being, ASI could end up worsening our lives rather than improving them. For instance, if status-driven comparison is suboptimal for happiness but remains an unchecked desire, ASI might simply change the playing field—where we’re now competing over the size of our star and planetary collections.
The broader principle here is that we must ensure we desire the right things—those that genuinely maximize well-being. Otherwise, we risk using ASI to further entrench us in misguided pursuits.
A mind in control of itself will be important during the transition to ASI, as companies wield advancing technology to make vices increasingly appealing and addictive: think of a sex-robot girlfriend that leaves one unfulfilled, or hyper-palatable sugary food that kills you via diabetes before you ever reach true ASI. Until the self-control pill is created, a mind resilient to impulses may be very valuable.
I use ‘enlightened mind’ here to broadly refer to a mind that maximizes your utility function in the present, rather than to invoke specific images of meditation. While meditation is likely one of the most effective tools for this, other interventions—such as therapy, philosophy, relationships, etc.—may be more or less effective depending on the individual.
2. Capital
An article titled By Default, Capital Will Matter More Than Ever After AGI [LW · GW] makes a compelling argument about how wealth and economic power will be determined in a post-AGI world. The argument, in short, is that humans currently generate value through skills and knowledge that the free market deems valuable enough to pay for. However, as AI surpasses humans across all cognitive and creative domains, these traditional levers for success will become obsolete. Why would a business hire us if it can hire a vastly more capable AI? Why would a venture capitalist invest in us if they can back an AI entrepreneur? Rather than generating economic value through work, one’s economic standing will depend on how much capital one holds before the transition.
In a world where life-extending technologies and utility-maximizing experiences come with a price tag, accessing the highest-end versions—those that provide vastly greater utility, at least in an absolute sense, relative to anything experienced pre-ASI—will be crucial. Even if ASI creates a world of abundance, it will still exist within a finite universe. If resources and opportunities are not perfectly distributed, the best advancements may remain limited to the wealthiest, making capital the key determinant of access. If one’s relative economic position is locked in post-ASI, then accumulating wealth now may be the only window in a very long lifetime to influence that outcome.
Not only will wealth determine access to high-utility advancements, but it may also influence how ASI shapes reality. Those with significant capital could steer AI development and policy through lobbying, funding research, or controlling key AI infrastructure. If political influence remains something that can be bought, then accumulating wealth now isn’t just about personal access—it’s about having a say in the world ASI creates.
This may sound grim, but it’s important to note that if even a semblance of UBI is implemented and we land in a deep utopia, life will still be extraordinarily good for everyone. It’s also possible that the most utility-maximizing technologies will be inexpensive to produce and included even in the broke-tier UBI plan. Additionally, there may be a scenario where a central superintelligence, rather than individual wealthy humans or politicians, oversees society and equally allocates resources, making capital irrelevant altogether.
But if you don’t believe those scenarios are likely and want to climb the economic ladder, now is the time to do so (hint: leverage AI).
3. AI literacy
Having a solid understanding of AI—both technically and in terms of its broader potential societal impact—provides significant advantages. It refines decision-making, clarifies the landscape of opportunities, and enhances one’s ability to navigate the rapidly evolving world AI is creating.
One benefit is improving the accuracy of one’s predictions regarding AGI/ASI timelines. Whether ASI arrives in five years or thirty years drastically affects major life choices, from entrepreneurial ventures to personal goals.
Understanding AI’s capabilities also provides insight into which fields will be most impacted, allowing one to gain expertise in key areas that lead to influence or financial opportunities. For example, if neuromodulation appears to be one of the early AI-driven technologies likely to take off, positioning oneself in that field—whether through research, investment, or entrepreneurship—could be highly advantageous. Conversely, this knowledge can also help avoid industries that are at risk of rapid automation, such as software engineering.
Technical knowledge helps separate hype from reality, preventing one from getting caught up in the news cycle and bogus claims about AI’s capabilities. Understanding its actual progress and limitations provides a grounded perspective.
Networking and influence may also stem from AI literacy. Being able to engage with those knowledgeable in the field provides partnerships and well-informed peers.
Finally, AI safety is an area where AI literacy can have a direct impact on humanity’s survival. A strong understanding of AI allows one to contribute to discussions and efforts to mitigate existential risks. Even without working directly in AI safety, being informed enables one to influence their sphere—whether by advocating for sound policies, supporting alignment efforts, or sounding the alarm about the importance of the problem. Even from the standpoint of the stated goal of maximizing your own good life: if humans go extinct, you’ll be one of them, and if humans reach utopia, you’ll also be one of them. Not to mention friends, family, and the other conscious creatures that probably hold weight in your utility function.
III. Beyond the self
Although this post has been centered around maximizing our good life, I’m not entirely sure that these concepts—or the idea of a distinct “you” or “me”—have any real basis in reality. This article implicitly assumes a notion of Personal Identity, which, while not the focus of the post, fundamentally shapes how we think about everything.
Now, let me summarize a thousand years of philosophy and challenge the idea of personal identity in a few sentences. There’s no clear boundary of “me.” No definition—whether based on body, brain, personality, or memory—holds up entirely under scrutiny.
When we look at experience itself, there is no fixed “I” to be found. Boundaries between self and other aren’t innate to reality but drawn after the fact. We carve up this vast space of experience into “mine” and “yours,” but these divisions are somewhat arbitrary. What I call me is just a shifting collection of thoughts, sensations, and memories that blend into the broader space of all experience. The difference between “you” and “I” isn’t fundamental, rather a mental construct.
If that didn’t make sense, it’s probably not your fault—go read Parfit. If it did, well… we should probably get careers in AI safety.
Key Takeaways
- ASI Radically Shifts the Utility Landscape
- Post-ASI technologies will enable engineered mental states and vastly higher well-being than anything achievable today.
- The pre-ASI period may be used to best ensure a future where those technologies exist and one has access to them.
- Three High-Leverage Ways to Prepare for ASI:
- Enlightened mind – A hedge against extinction and a tool for navigating the transition to ASI, whether it leads to human survival or not. Mental resilience minimizes suffering, improves decision-making, and ensures one wields ASI for truly well-being-maximizing goals rather than getting trapped in suboptimal pursuits or hyper-addictive vices.
- Capital – Wealth may determine access to the best post-ASI advancements and may shape AI development. If wealth-based stratification persists, accumulating capital now may be the only window to influence one’s future standing.
- AI literacy – AI knowledge enhances decision-making, improves ASI timeline predictions, helps separate hype from reality, provides entrepreneurial leverage, and allows one to contribute to AI safety.
- The Role of AI Safety and the Bigger Picture
- Contributing to safe AI development could be one of the highest-leverage ways to shape the future for yourself and others.
- Personal Identity: The lines between maximizing my good life and everyone else’s are constructed and somewhat arbitrary upon closer examination. Thus it might be more rational to frame our pursuits as optimizing for the good of humanity.
3 comments
comment by Viliam · 2025-02-21T10:52:52.674Z · LW(p) · GW(p)
When we look at experience itself, there is no fixed “I” to be found. Boundaries between self and other aren’t innate to reality but drawn after the fact. We carve up this vast space of experience into “mine” and “yours,” but these divisions are somewhat arbitrary.
The boundaries are somewhat arbitrary, but it seems to me that if we keep going in this direction far enough, at the end of the road is equanimity with the universe being converted to paperclips. (Which would be a wrong thing in my current unenlightened opinion.) After all, there is no sharp boundary between "me" and a paperclip.
↑ comment by Noahh (noah-jackson) · 2025-02-21T15:18:24.947Z · LW(p) · GW(p)
I see where you’re coming from, but my point about boundaries applies specifically within the domain of conscious experience. There’s no clear boundary between ‘you’ and ‘me’ in that space because consciousness doesn’t seem to have non-arbitrary borders. But paperclips aren’t conscious, so they don’t even exist within that domain of experience to begin with.
So while self/other distinctions might be constructed, that doesn’t mean we should erase distinctions that actually matter—like the difference between something that has subjective experience and something that doesn’t. That’s why I wouldn’t extend the same boundary-dissolving logic to a paperclip (or a rock, or a chair) in the same way I would to other conscious beings.
comment by Richard_Kennaway · 2025-02-21T15:34:13.560Z · LW(p) · GW(p)
When we look at experience itself, there is no fixed “I” to be found.
Speak for yourself. That whole paragraph does not resemble my experience. You recommend Parfit, but I've read Parfit and others and remain true to myself.