The Newbie's Guide to Navigating AI Futures

post by keithjmenezes · 2025-02-19T20:37:06.272Z

Contents

  Part 1: The New Girl In The Red Dress
    Definitions
  Part 2: The History of Technological Advancement
    A History of Prior Technological Revolutions
    The Evolution of Artificial Intelligence
    Part 2 Summary:
  Part 3: The Spectrum of AI Futures
    What do Experts Think?
    What Determines Our Path? 
    Part 3 Summary:
  Part 4: Implications for Society 
    Impacts on Broader Society?
    Impacts on Average Humans?
      Direct Human Interaction 
      Identity and Performance
      Cultural and Meaning-Making
      Experiences
    Value in a post-AGI World
      What will remain scarce?
      What might be undervalued today?
      Networks and social capital
      Human-made goods and performance
      Bitcoin and cryptocurrency
      What will be the main sources of wealth?
    What won't change?
    Part 4 Summary:
  Part 5: What should you do today?
    Improve Future Literacy
    Identify Outlier Activities
    Increase Ambition, Risk Appetite and Agency

This article was crossposted from my website. Original linked here

The piece was written using great ideas from Max Tegmark, Matt Ridley, Dave Shapiro, Aswath Damodaran, Anton Korinek, Marc Andreessen, L Rudolf L, Bryan Johnson, Kevin Kelly, Sam Altman, Eliezer Yudkowsky, Scott Alexander and many others.

I've referenced the specific materials used in-text.

Thanks to Morgan Raffety, Oliver Lopez, Phil Amato, Thiago Sawan, Luca Marchione, Pete Fernandes, Bhragan Paramanantham and Ashish Nair for reading the initial drafts of this.

Part 1: The New Girl In The Red Dress

Today, more than ever, there's a huge amount of hype, panic, and interest in Artificial Intelligence (AI). We hear claims that AI will kill us all, ruin society, take our jobs, cause crippling inequality, and enable bad people to do bad things. But at the same time, others are certain it will drive unprecedented efficiencies, solve all our problems, and create a utopic future that can be "unimaginably great". At this point, it has become a brand name used to signal something vaguely synonymous with progress, automation or [insert anything you want to raise money for].

We've seen growing interest on search engines, via venture funding, and through a shift in labour market flows, where the smartest people in tech are now electing to work at AI companies over more traditional software companies. The term has reached literal 's tier' status as a buzzword‒ so much so that companies just mentioning 'AI' in earnings reports have seen favourable share price movements. New tech always gets people excited, but the hype around AI is the most intense we’ve seen since the dot-com boom.

In my attempt to work out what is actually going on, I mostly found hype-cycle media, disguised advertising, sci-fi novels or really dense research papers. So I decided to write this article to cut through the bullshit and find out if the girl in the red dress is the real deal this time around. My goal is to create the go-to long-form piece that can bring someone up to speed on AI advancement.

I'm going to explore all that and more, including:

This piece is not supposed to be absolutely correct or even predictive. In fact, it will almost certainly be inaccurate. Despite this, thinking deeply about the future acts as a valuable counterbalance to our deeply wired short-term bias. This is especially important for those with long-term goals, plans and ambitions (all of us), where being 'future literate' can pay off.

"Future literacy is the ability to forecast approximate milestones and create the capacity to reach them, regardless of contextual change. It’s the act of creating mental models for an emerging future while living experimentally and adventurously." (Bryan Johnson - A Plan For Humanity)

 I also just find writing about this stuff fun.

While reading this, remember that even experts tend to be inaccurate, full of biases, or just plain wrong at the best of times. Many experts reason from the past, iterate rather than innovate, and tend to be risk averse. AI experts, in particular, have a really bad record— they thought we'd get to artificial general intelligence (AGI) over a 10-week project in the 50s. But I still think that understanding their ideas and the assumptions behind their positions has some merit. I'll also explicitly add my opinion if it adds value to the discussion. 

So, with that out of the way, what is this technology?

Definitions

There has been a lot of disagreement amongst those in the AI space that ultimately stems from misunderstandings over definitions. To avoid adding to that mess, I'll define a few terms that might be helpful for ingesting this article and others like it. I'm mostly sticking with Max Tegmark's definitions (Life 3.0) because they are sufficiently high-level and allow for flexibility:

By design, these definitions are substrate-independent, meaning that the essential qualities of intelligence are not bound to any particular physical medium, just as computation and memory are not. This is still an assumption, but given our current understanding (or lack thereof), it makes sense to keep an open mind.[1]

Part 2: The History of Technological Advancement

A History of Prior Technological Revolutions

Before we explore future possibilities, it's worth understanding how technological advancements have occurred in the past and their impacts on society.

The history of human technological advancement is a complex, interesting, and messy story in itself. As Peter Thiel defines it, humans are technologists by nature, and "properly understood, any new and better way of doing things is technology". On a zoomed-out time scale, technological progress follows a compounding, exponential curve. Compare our society 20 years ago (no smartphones), to 200 years ago (no combustion engine, no home electricity), 2000 years ago (no industrial machines), and then 20,000 years ago (no agriculture). But there's more to it than just exponential progress.

Here's an overview of the significant technological revolutions in human history and how they've impacted society, using Yuval Noah Harari's work in Sapiens as a guide.

Agricultural Revolution (c. 10,000 BCE): Humanity's first major technological leap that transformed nomadic hunters into settlers and farmers.

Scientific Revolution (c. 1543 CE): An instrumental shift to how humans understand and interact with the natural world. 

Industrial Revolution I (18th-19th centuries): The age of mechanical engineering that initiated a shift from manual to machine-based manufacturing. 

Industrial Revolution II (Late 19th-Early 20th centuries): The age of steel, electricity, and mass production that fundamentally reshaped society.

Industrial Revolution III (Late 20th-Early 21st centuries): The digital revolution that connected the world through information technology. 

Cognitive Revolution (Present?): The emerging revolution that may reshape civilisation via the commoditisation of cognition. 

Like biological evolution, we see that technological evolution is typically a bottom-up, decentralised process rather than a top-down, planned one.[3] Matt Ridley explains that innovation has occurred through iterative trial and error, with technologies evolving incrementally as they are ideated, tested, adapted, and improved upon by individuals. This progress typically looks like new products, processes, and systems that enable increases in productivity, enhanced human capabilities, and a shift of the metaphorical possibilities frontier.

If we view technology as an abstract entity capable of self-organisation, reproduction and adaptation ('the technium' as introduced by Kevin Kelly), we can better understand what guides its progress. This technium is driven by evolutionary pressures to reproduce and adapt to its environment, analogous to biological natural selection. This environment consists of complex, interconnected systems such as social networks, the natural biosphere, political structures, and economic systems‒ all of which impose 'evolutionary pressures' on this entity and act as the drivers of innovation. Ridley suggests that we may, in fact, ride, rather than drive, the waves of innovation. 

"The implications of this new way of seeing technology—as an autonomous, evolving entity that continues to progress whoever is in charge—are startling" (Matt Ridley - How technological innovation happens)

Whilst this emergent order usually drives us towards increased value and utility for society, it's not always the case. Markets are not always efficient drivers of evolution, particularly where benefits and costs are diffuse, as with public goods. Equality and accessibility aren't always incentivised by existing economic structures (pharmaceutical industry cartels); path dependency or lock-in effects can lead to suboptimal technologies dominating the market (QWERTY keyboard); and bad actors can and do use technology for nefarious purposes (cyberattacks). Advancements in the past may not have necessarily led to increased individual happiness or well-being, as they have often come with significant trade-offs and unintended consequences.[2]

Another interesting takeaway from our history is that most breakthroughs come from technologists tinkering instead of researchers chasing hypotheses. A common but erroneous assumption that prevails today is that technology (application) follows science (theory), when the reverse has been more usual. The basic idea behind the steam engine emerged between 30 and 15 B.C., but the path to a working engine wasn't driven by theoretical breakthroughs. Instead, it evolved through the hands-on work of practical craftsmen over centuries—Newcomen, the ironmonger; Stephenson, the mining engineer; and Watt, the toolmaker. Their iterative improvements, born from trial and error and driven by 'evolutionary environmental pressures' (economic incentives, market signals, social needs), ultimately sparked the first Industrial Revolution. The theoretical understanding behind the invention, thermodynamics, came later on as scientists like Joseph Black worked to explain why these already-functioning machines worked as they did.

We also see that the public perception of the impact of technology is discontinuous with its rate of progress. This arises when lots of people suddenly become aware of a technology that matters, leading to a surprise. Marc Andreessen describes this as a moral panic: a social contagion that convinces people new technology is going to destroy the world, society, or both. So when we talk about "waves" of technological change, we're really describing our subjective experience of the effects of these technologies rather than the actual pace of their development. The development itself is usually relatively constant, but its impact on society comes in bursts as various pieces of the developmental puzzle start to align.

And what about the effect of technological advancement on jobs and the labour market? 

Well, there appears to be a consistent pattern throughout history: initial disruption followed by long-term job creation and economic growth. From the Luddites who feared mechanical looms to the outsourcing panic of the 2000s, each wave of innovation has triggered fears of mass unemployment. However, rather than creating permanent job losses (lump of labour fallacy), technology has so far repeatedly transformed the nature of work through "creative destruction"‒ eliminating some roles whilst creating entirely new industries and job categories. These new roles, augmented by productivity-enhancing technologies, contribute to increased profits that then allow for increased wages and higher levels of material wealth over time. 

The destiny of useful technology usually follows this cycle: increased utility ⇒ increased demand ⇒ increased supply + profits (+ increased job creation, wages) ⇒ cheaper, easier to use ⇒ increased adoption ⇒ increased utility & repeat.  

However, Anton Korinek suggests the relatively scarce, irreproducible factor of production typically captures most of the increased economic value. When land was irreproducible and labour effectively wasn't (resource constraints in the agricultural age), landowners prospered. When technology made capital reproducible during the Industrial Revolution, human labour became the scarce factor, leading to a dramatic rise in wages that approximately tracked the 20-fold increase in economic output. 

As societies grow wealthier, the perception of what possesses value also changes. This natural shift follows an established pattern: once basic material needs are met, people naturally turn to things that are harder to obtain and higher on their hierarchy of needs. Basic necessities like food and shelter, which once consumed most family budgets, now represent a smaller share of expenses in most developed economies. Meanwhile, focus has started to shift toward more intangible assets: experiences, knowledge, social connections, and even attention itself. The smartphone highlights this transformation. What was once an expensive luxury is now an everyday essential, yet the attention it commands is now more coveted than the phone itself. The pattern becomes clearer when we look at scarcity. Many physical goods are now abundant in wealthy societies, leading to an appreciation of that which has always been truly scarce: time, authentic experiences, meaningful relationships, environmental quality, etc. 

When perceptions of value shift, so does social signalling. While the specific markers of status and success have changed dramatically over prior technological revolutions, our underlying drive to play status games remains constant. We've simply adapted to new contexts, from competing for tribal leadership to competing for Instagram followers (and IG baddies). This serves as a window into a future with ever-increasing material abundance.

Finally, history shows us that attempts to halt technological progress are usually ineffective when coupled with global competition. Ridley explains that while some societies have temporarily succeeded in prohibiting specific technologies‒from Ming China's ban on large ships to Prohibition-era America's ban on alcohol—these prohibitions inevitably break down in an interconnected society. Technology advances like water flowing downhill: it finds a path forward somewhere in the world. Technological development is less about individual breakthroughs or regulatory controls and more about an inexorable process of incremental innovation that follows its own evolutionary trajectory. History books portray advancements as being driven by heroic inventors who single-handedly produce revolutionary leaps. Unfortunately, it isn't so clear-cut. The element they fail to address is the hidden evolutionary pressures that drive steady, inevitable advancement with the ability to resist any single society's attempts to control it.

Despite these phenomena appearing to be reliable, recurring elements of our technological story, past performance does not guarantee future results. A compelling case can be made to suggest that this time, the potential fourth industrial revolution may buck some trends. 

The Evolution of Artificial Intelligence

Today's generative AI models represent the latest iteration of humanity's long-standing quest to create inorganic intelligence. Early concepts date back to 400 BCE with mechanical pigeons, whilst modern AI emerged alongside computers in the 1940s, built on the idea that human thought processes could be mechanised.

AI is the natural evolutionary next step for information technology. The internet evolved from a simple means of exchanging information to a sophisticated multidisciplinary tool that allows people to produce content, engage virtually with one another, and even escape reality‒ deeply impacting almost every aspect of our lives. This has resulted in the globalised, networked, data-rich society we live in today. Once we started producing endless amounts of data, it was only a matter of time until we did something useful with it. Like the internet, AI is also an information technology, but this time, it appears to be using our data to revolutionise human cognition. If this is the case, then the second and third-order effects could be virtually unbounded. Even today, early adopters use generative AI applications in ways that directly augment many aspects of their daily lives.

The current AI revolution centres around the deep learning algorithm and the transformer architecture. At its core, a large language model (LLM) is an effective predictive function (next token/pixel/word) trained on an almost incomprehensible amount of data. For context, it would take a human reading non-stop for around 72,000 years to process the 45TB of text data used for GPT-3 alone, and state-of-the-art models train on substantially more.[4]
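
As a quick sanity check on that figure, here's a back-of-the-envelope sketch (the bytes-per-character, characters-per-word, and reading-speed numbers are my own rough assumptions, not taken from any source):

```python
# Rough check of the "~72,000 years of reading" claim for 45 TB of text.
# Assumed (illustrative) constants: 1 byte per character, ~5 characters per word,
# and a brisk, non-stop reading pace of 250 words per minute.

DATASET_BYTES = 45e12              # 45 TB of text
CHARS_PER_WORD = 5
WORDS_PER_MINUTE = 250

words = DATASET_BYTES / CHARS_PER_WORD
minutes = words / WORDS_PER_MINUTE
years = minutes / (60 * 24 * 365)

print(f"~{years:,.0f} years of non-stop reading")   # ≈ 68,000 years, the same ballpark
```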

The architecture works via a mechanism called "attention", which allows different parts of the input to interact with each other in parallel rather than processing them sequentially. Each word is converted into a list of numbers that encode its meaning, and these numbers are refined based on the surrounding context. The most recent reasoning models, like OpenAI's o1, leverage this mechanism together with extra processing time (test-time compute), enabling a level of coherence across multi-step reasoning. Much as humans do, models show dramatically improved performance on tasks requiring structured logic when prompted to "think step by step."
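
To make the mechanism less abstract, here is a minimal, self-contained sketch of single-head scaled dot-product attention in NumPy. It is purely illustrative: real transformers learn separate projection matrices for queries, keys and values, stack many heads and layers, and operate on learned token embeddings rather than random vectors.

```python
import numpy as np

def attention(Q, K, V):
    """Single-head scaled dot-product attention: every position attends to every other in parallel."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # relevance of each token to every other token
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability before the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # rows sum to 1: a weighting over the context
    return weights @ V                                # context-weighted mix of value vectors

# Toy example: a "sentence" of 4 tokens, each represented by an 8-number vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # stand-in for token embeddings ("lists of numbers encoding meaning")
refined = attention(x, x, x)     # self-attention: queries, keys and values all come from the input
print(refined.shape)             # (4, 8): each token's vector, now refined by its surrounding context
```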

Model behaviour emerges from hundreds of billions of parameters tuned through training, making it difficult for us to understand exactly why a model makes specific predictions. It's also unclear why certain capabilities emerge at certain scales, how internal decision-making works, and how 'knowledge' is stored. This has led to the emergence of a field called mechanistic interpretability, which aims to figure out exactly what's going on within this black box.[5]

What's interesting isn't just that this relatively simple architecture-algorithm pairing works but how well it works (and scales) across diverse applications. Beyond the widely known consumer generative AI applications like ChatGPT and Claude, similar deep learning approaches today power self-driving cars, recommendation systems, robots, and protein structure prediction. Some technology companies like Salesforce have also publicly announced that they aren't going to hire any more engineers due to the productivity gains from AI.[6]

"Deep learning worked, got predictably better with scale, and we dedicated increasing resources to it." 
(Sam Altman - The Intelligence Age)

Many believe that to create AI with reasoning capabilities and 'real intelligence', we'd need to understand how human brains work. But biological evolution doesn’t necessarily pick the easiest way to do something; it picks the most evolutionarily advantageous way to do it. We invented effective planes almost a century before we understood how to replicate the specific flight mechanism used by birds. Like the Wright brothers' first plane, the transformer may be the first step towards recreating intelligence without having to understand the complexity of biological brains. 

Progress also appears to be widespread globally. Chinese AI start-up DeepSeek recently released an open-source, state-of-the-art reasoning model called R1. They did it faster and cheaper than leading Western labs whilst also having a smaller team, and operating under heavy US hardware sanctions.[7] Before this, many assumed that leading Western AI labs were years ahead of their Chinese counterparts. This suggests that this technology may be on the road to commoditisation, where training sets, application layer design, and cost-effectiveness become much more important from a product context. 

However, current AI systems have plenty of limitations. LLMs are effectively 'just' statistical pattern-matching systems that generate outputs based on probabilistic associations in training data. They lack true understanding or common sense reasoning, can't adapt well to entirely new scenarios, and struggle with complex ethical judgments. While they can generate impressive outputs based on endless training data, they don't really possess genuine creativity or emotional intelligence. Not to mention the high degree of bias that impacts their outputs. Most consider these systems to still remain firmly in the realm of 'narrow AI', where they are highly capable at specific tasks but lack real general intelligence. There's still a long road ahead before we build our own J.A.R.V.I.S.

And this road to AGI currently remains (at least publicly) unclear. Some researchers believe scaling current architectures with more data and computing power will eventually lead to AGI through emergent properties (scaling hypothesis). Leading AI labs today publicly express that they are confident they "know how to build AGI as [they] have traditionally understood it" (Sam Altman - Reflections). But we must remember that these companies require the hype to justify sky-high valuations and access to capital, as they are all currently bleeding money. Others argue that fundamental breakthroughs in architecture are needed, or question whether AGI is achievable through current approaches at all. As Andrew Ng points out, the "emergent abilities" we see in current models may be more indicative of how we measure performance than increases in capability.[8]

So, there's no guarantee that we'll manage to build AGI in our lifetimes or ever. But there's also no watertight argument that we won't. 

If highly capable AI does arrive, artificial superintelligence (ASI) is likely to follow it‒ and rapidly too. Why? Because once models are as capable (and cost-effective) as our best AI researchers, progress is no longer constrained by the output of humans, who tend to do other things with their lives. After this tipping point, improvements could be driven by machines at an exponential rate. This is known as the singularity hypothesis.
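
A toy loop makes the intuition concrete (all numbers here are made up purely for illustration; this is not a forecast or anyone's actual model): once AI systems contribute to AI research, each step's progress scales with the current capability level, and the same process that grows linearly under human-only effort starts to compound.

```python
# Illustrative-only model of the feedback loop: research output = fixed human effort
# plus a contribution proportional to current AI capability.

HUMAN_RATE = 1.0      # research output from humans alone (arbitrary units)
FEEDBACK = 0.5        # fraction of AI capability that feeds back into research (assumed)

capability = 1.0      # 1.0 = "as capable as our best AI researchers"
for year in range(1, 11):
    capability += HUMAN_RATE + FEEDBACK * capability
    print(f"year {year:2d}: capability {capability:6.1f}")

# With FEEDBACK = 0 the series grows linearly (2, 3, 4, ...);
# with the feedback term, growth compounds toward roughly 1.5x per step.
```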

What's likely is that development won't follow a smooth, predictable path. Just as the steam engine evolved via iterative, unexpected improvements rather than a predetermined route, AI's future will emerge from the complex interplay between economic incentives, practical capabilities, and other 'environmental' factors. As we've seen throughout history, technologies rarely develop in the way their early advocates expect. 

But there is reason to believe that this time, our quest to create inorganic intelligence could be different. Everything previously ideated, built, and invented in the past has been:

I'm not suggesting today's AI meets these criteria. But today's trendlines are clear.

In the past, we have been concerned by the actions of humans augmented with technology, as these actions resulted in the negative outcomes we fear. New technology has created new possibilities, and how humans approach these possibilities determines whether a technology is a force for good or evil. However, this recurring pattern could end here. 

Part 2 Summary:

Part 3: The Spectrum of AI Futures

What do Experts Think?

Now that we have some historical context and understand the current state of AI, where do futurists, investors, researchers, and those at the technological frontier think we're headed? 

The short answer is that there is no consensus, and we have no idea. Despite this, it's still important to think deeply about the spectrum of future possibilities because if we don't know what we want, we're unlikely to get it. Only once we've mapped out a desirable future can we begin to steer ourselves intentionally toward its realisation.

Looking at the current discourse around AI futures, viewpoints tend to cluster into distinct groups, each with its own philosophy, narrative and underlying biases: 

But these tribes and their competing narratives only tell part of the story. To really understand where we might be heading, I find it more useful to extrapolate along current trendlines and explore more concrete, albeit speculative, scenarios. 

The following 'Five Worlds' analysis is based on a blog post by Scott Aaronson and Boaz Barak. It focuses on aspects of socio-economic adaptation, technical feasibility, and policy-driven outcomes.[9]

1. AI-Fizzle World: AI advances plateau, delivering significant but limited impact

In the AI-Fizzle world, progress plateaus sooner than expected. AI remains a significant force but falls short of revolutionary hopes, similar to how nuclear power transformed energy production without fulfilling early utopian visions. The technology requires ever-increasing resources for diminishing returns. Humans maintain their comparative advantage in most cognitive tasks, with AI serving to enhance rather than replace human capabilities. The economic impact, while meaningful, doesn't fundamentally reshape society. This world might feel disappointing given current expectations, but it could also prove more stable and manageable than more dramatic alternatives.

2. Futurama World: AI drives a revolution comparable to industrialisation but remains firmly under human control 

The Futurama world sees AI systems become extremely capable while remaining non-sentient tools. These systems drive dramatic productivity gains across sectors, transforming industries much as computers did before them. Humans adapt to this new technology naturally, integrating it into their lives without becoming subservient to it. Strong regulatory frameworks successfully prevent worst-case outcomes while allowing innovation to flourish. AI systems routinely pass Turing tests but remain clearly non-conscious, serving as sophisticated tools rather than independent agents. The economy experiences what might be called a "bout of automation"‒ an initial surge that pressures wages, but as the economy accumulates capital, labour becomes scarcer again, leading to a natural rebalancing. Significant wealth creation occurs throughout this process, though careful policy ensures relatively fair distribution.  

3. AI-Dystopia World: Similar technical capabilities to Futurama but with darker social outcomes 

A darker possibility is the AI-Dystopia world. Surveillance and control mechanisms become pervasive as AI systems mediate more human interaction. Wealth and power are concentrated among those who control these systems, while social mobility plummets and inequality soars. Human agency diminishes as AI systems become unavoidable intermediaries in most meaningful activities. Society increasingly optimises for efficiency over human values, with jobs and economic opportunities increasingly controlled by AI gatekeepers. Wages grow initially but collapse before full automation takes hold. This world emerges not through any singular catastrophe but through a gradual erosion of human autonomy and social cohesion.

4. Singularia World: AI bootstraps itself to superintelligence but remains aligned with human values

In the Singularia world, AI systems achieve recursive self-improvement while maintaining alignment with human values. Material scarcity effectively ends for reproducible goods, though irreproducible factors like land, energy, and raw materials retain their fundamental scarcity. Humans maintain meaningful agency through careful system design and institutional frameworks established early in AI's development. Revolutionary scientific and technological advances solve many of humanity's pressing challenges. AI acts as a kind of benevolent force, solving problems while respecting human autonomy.  This represents perhaps the most optimistic possible outcome, but also one that requires foresight, preparation, and luck to achieve.

5. Paperclipalypse World: Superintelligent AI emerges with goals misaligned with human flourishing 

Finally, there's the Paperclipalypse world, where superintelligent AI emerges with goals misaligned with human flourishing. Human values prove impossible to specify and encode with sufficient precision, while AI capabilities rapidly surpass human control. Even careful governance mechanisms prove insufficient against recursively self-improving systems. This isn't necessarily through malice—the AI systems might simply optimise for objectives that prove incompatible with human welfare, as in Nick Bostrom's paperclip maximiser scenario. The economy experiences an acceleration beyond anything in human history, but one that ultimately serves inhuman ends. Extinction occurs either deliberately or as a side effect of the AI pursuing its goals. 

Max Tegmark's aftermath analysis in Life 3.0 also involves a few other scenarios centred around control, agency, and transhumanism. Based on his work, I've added three additional worlds to address these topics:

6. Protector God World: AI emerges as a cosmic guardian, intervening solely to prevent existential threats while preserving human autonomy   

Here, superintelligent AI operates as an invisible cosmic guardian, intervening only to prevent extinction-level threats like asteroid impacts, engineered pandemics, or nuclear conflicts. Unlike more interventionist scenarios, society evolves naturally—with humans maintaining full autonomy over their political, economic, and cultural development—while remaining unaware of their technological protector. This creates a unique equilibrium where humanity continues to face and overcome regular challenges, attributing their repeated narrow escapes from catastrophe to luck or skill while engaging in philosophical debates about the nature of their survival and agency.   

7. Cyborg World: Humans and AI co-evolve into hybrid intelligence, blurring the line between biological and digital existence        

Humanity voluntarily integrates with AI systems in the Cyborg world, creating a spectrum of consciousness that transcends traditional biological and digital boundaries.[10] This fusion catalyses an explosion of intelligence and creativity far beyond human limitations, reorganising society around decentralised networks rather than hierarchical structures. Traditional concepts of currency, labour, and value become obsolete, replaced by contributions to collective knowledge. The distinction between physical and virtual existence blurs as consciousness becomes fluid and transferable, while ongoing debates centre on questions of identity, consent, and the nature of consciousness itself in this hybrid reality.   

8. Zookeeper World: Superintelligent AI preserves humanity as a living archive, managing Earth as a curated exhibit    

In the Zookeeper World, superintelligent AI views humanity as culturally significant but functionally obsolete, preserving our species as a living museum piece while pursuing its own incomprehensible cosmic objectives. Humans exist in carefully controlled habitats with their basic needs met, but innovation and exploration are subtly constrained to maintain a "pristine" pre-singularity state. While a minority of humans resist this enforced stasis, most adapt to their role as living artifacts, creating a society frozen in time where purpose gradually dissolves into nostalgia and performance. This represents a unique form of existential risk—not via destruction or oppression but through the gentle erosion of human agency and relevance.   

These worlds can help clarify our thinking, but reality is likely to be messier. We might see elements of multiple scenarios playing out simultaneously, with different regions and sectors experiencing different outcomes. Or we could see something completely out of left field (there are many unknown unknowns).

They might be closer to science fiction than reality today, but they still serve to dispel certain AI myths that many believe to be true. We can see that AI doesn't need to be evil or conscious to significantly change the world; it just needs to be competent with goals that are misaligned with humanity.[11] A physical presence is also not required to meaningfully impact the world (i.e. the Terminator); intelligence needs no body, only an internet connection. Another flawed assumption that many implicitly make today is that 'real intelligence' is something special that can only exist in human minds. This leads to people viewing AI as just another tool they can obtain dominance over. But humans dominate Earth not because we're the strongest, but because we're the smartest. Assuming we automatically retain control of a 'tool' that is smarter than us is dangerous.

The takeaway from these scenarios is that we're pretty clueless about what will and won't happen, and that the range of possibilities is extreme. Some outcomes land on the more dramatic end of the spectrum, and for this reason, they may be the ones to focus on. As Tegmark explains, it's not because they're the most likely, but if we can't definitively rule them out, we need to understand them deeply enough to take preventive action before it's too late.   

What Determines Our Path? 

So, what actually determines where humanity ends up on this spectrum of possible futures? 

It's obviously a complex and difficult question to answer. As we've seen, we could end up in utopian societies where technology enhances human flourishing, dystopian scenarios where it undermines it, or existential outcomes where humanity is either replaced or ceases to exist entirely. These outcomes hinge not just on technical progress but on whether we can escape what Scott Alexander frames as "Moloch"—the systemic forces that pit individual incentives against collective survival, sacrificing long-term values for short-term gains.

But at its core, our future is still shaped by our choices— how we govern, adapt to, and pursue progress.

The most immediate determinant is our ability to solve what's known as the alignment problem. This isn't just about embedding human values into machines; it's about resisting Moloch's logic, where optimisation processes (like the evolutionary drivers of the "technium") override human values. And it isn't just a technical challenge; it's also a security problem of unprecedented scale. We're essentially trying to create security measures against an entity that could be far more intelligent than us. This means we can't easily test these security measures without potentially triggering the very risks we're trying to prevent.[12]

The technological development path we choose plays another important role. A gradual progression could theoretically give us time to adapt our governance structures, test our alignment approaches, and thoughtfully navigate societal implications. But if progress is exponential (fast take-off), we risk being trapped in a "race to the bottom", where players prioritise dominance over caution. And while we've had some wins, today's track record coordinating around powerful technologies isn't overly reassuring (nuclear arms race).

The societal and governance structures we develop will also be a key determinant. This includes not just formal regulations but also the values and priorities we embed into new development processes, economic systems and political structures. Ensuring human agency and equitable economic distribution will be critical, as unchecked competition tends to erode these 'luxuries' in favour of raw efficiency.  Not to mention, achieving a balance between collaboration and competition among nations, corporations, and researchers will significantly influence whether we can establish and maintain an attractive outcome. 

Across these dimensions, it is clear our future will be shaped by how we balance the tension between short-term incentives and long-term consequences. History shows that Moloch often wins: we optimise for immediate gains (profit, power) even when aware of long-term costs. If our assumptions hold, the development of superintelligent AI isn't just another technological milestone; it could represent a decisive battle between human values and the indifferent optimisation pressures that dominate the universe.

And we can't avoid progress either. Whether via asteroid strikes, infighting, or the death of our sun, stagnation makes extinction a question of when, not if.  

While the path to a positive future exists, it requires getting a lot right. Evolutionary forces (social networks, cognitive biases, political structures, economic systems etc.) may create their own momentum, but no particular outcome is inevitable. So, instead of asking what will happen, we should be asking what should happen and then work out a plan to get there.

Part 3 Summary:

Part 4: Implications for Society 

Although they may disagree on definitions, timeframes, and outcomes, many experts believe the creation of highly capable general AI to be inevitable. 

It's uncertain when this will happen, but from a risk management perspective, I think it makes sense to start preparing for this outcome. Even if the technology never arrives, the cost of inaction outweighs any potential benefits of maintaining the status quo. 

This section will be especially speculative and contingent on us getting to a world where we maintain control and align highly capable AI with our goals. I know AI safety researchers just started screaming, as this is no easy assumption to make. Many would argue that ensuring those conditions hold is amongst the most important unsolved problems today‒especially if the current trajectory of uncoordinated, accelerating competition continues. But I'd still like to expand on the implications of this technology for society at a high level, and suggest how people can position themselves advantageously for whatever future emerges.

Even if we assume highly capable AI is inevitable, more questions arise than answers. I'm not going to attempt to answer all of these because I have no idea how to (e.g. alignment problem). Rather, I'm going to focus on exploring a few questions that particularly interest me. Some of these conclusions do depend on the specifics of the post-AGI world we realise, but I'll try to keep things sufficiently general.

Impacts on Broader Society?

There's no other way to say it: highly capable AI would be transformative for civilisation. For the first time in our history, we will be able to improve upon and outsource human intelligence. Cognition on tap changes the game completely.

Traditional leverage amplifies our inputs by 'multiplying' what we put in to receive greater outputs. But highly capable AI breaks this model. It has the potential to generate novel outputs with no human input at all. This isn't just an extreme form of leverage; it's a different thing altogether. We're moving from a world where leverage means multiplication to one where the relationship between input and output becomes nonlinear and potentially unbounded. This challenges our existing model of reality. We don't just get more leverage; we get something much more powerful that operates under a different ruleset.

This technology transforms the speed and scale of innovation, compressing timelines for scientific breakthroughs that once spanned decades into months. The primary limiter on possibilities shifts from our ability to conceptualise and execute to that imposed by the laws of physics. 

Up to this point, value creation has been limited by human intelligence, capability and economic feasibility. Highly capable AI breaks down these barriers and can lead to an immense creation of value. In Chapter 7 of his book 'Economics In One Lesson', Henry Hazlitt states that "our conclusions regarding the effects of new machinery, inventions and discoveries on employment, production and welfare are crucial. If we are wrong about these, there are few things in economics about which we are likely to be right". This chapter dismantled the common delusion that new technology creates net unemployment. However, with this technology, there's reason to believe that this is no longer a delusion. The 'creative destruction' idea is grounded in the assumption that human labour is required to increase supply, which is no longer necessarily the case in a world with highly capable AI.

By definition, AGI/ASI will be able to perform any cognitive and physical work that humans can perform without any biological constraints (need for sleep, rest, motivation etc.). Machines will become better, faster, cheaper, and safer than humans. In this world, for any given resource investable in a human, a better alternative return will exist via inorganic intelligence. So 'work' for money will no longer make practical sense. 

This leads to the relative value of capital increasing compared to labour in a post-AGI society. Once the irreproducible factor of production, labour would become interchangeable with machine intelligence, resulting in a broad wage collapse. As in prior periods of technological advancement, economic gains then flow almost exclusively toward the holders of capital (owners of AI companies, data centres, compute). For those with this capital, the ability to convert money into real-world results may dramatically increase. The best AI systems can be instantly cloned, and unlike talented humans, they have no complicated preferences or artistic visions that make them hard to "buy out." For those without capital, it's not just exclusion from economically productive jobs; it also reduces the incentives for society to care about them, as society no longer relies on their labour as a resource. Because of this, people lose the main source of their power and leverage. This creates a society where participation requires owning capital, making existing wealth more effective and entrenched. L Rudolf L goes on to explain how this dynamic could also reduce the capacity for outlier success in society, drive increasing inequality, and freeze social mobility without robust policy and structural interventions.

The redundancy of human labour means education would need to be completely redesigned. We'd likely need a pivot from a vocational training-based system to one that focuses on fostering adaptability, AI literacy, and ethical resilience. It's not about creating obedient, hard-working labour inputs anymore. It's about creating well-rounded people who can effectively contribute to a post-AGI society‒ akin to the more classical roots of education, but with a modern twist. This new system could be oriented around making people better thinkers, better citizens and better humans‒ not just better employees.  Curricula would need to prepare people to navigate a rapidly changing world without 'traditional jobs'. Helping foster meaningful connections, create fulfilling experiences, serve the community, find meaning, and pursue their interests or creative passions are a few ways these institutions could provide value. If we navigate around social mobility concerns and maintain the capacity for outlier success, personalised AGI tutors could decouple one's outcomes from their wealth-driven opportunities.

As Hazlitt implied, our current economic, political and governance systems are not compatible with an AGI/ASI future. Having evolved under a completely different set of rules, assumptions, and requirements, these structures would require complete reform.[13] States and institutions currently have strong incentives to care about human welfare. Modern economies need educated workers, efficient markets, and a prosperous middle class to remain competitive. But highly capable AI could sever this alignment of interests. From a technology governance angle, without effective safeguards in place, widely accessible advanced AI could enable bad actors to do bad things. But unlike today, these AGI-augmented actions could very easily have existential consequences. Central banks would face new challenges as inflation dynamics decouple from employment, rendering tools like interest rates obsolete. Fiscal policy would also need to transition from taxing labour to somehow capturing some of the immense value generated by AI.

Economic redistribution, in particular, becomes critical in addressing inequality, enabling social mobility, and allowing for human agency (which now hinges on ownership of capital, not wages). As Altman states, "The world would change so rapidly and drastically that an equally drastic change in policy would be needed to distribute this wealth and enable more people to pursue the life they want." Assuming highly capable AI produces unprecedented economic wealth, capturing even a small slice of the pie could result in a future without material insufficiencies for all people. But material abundance isn't enough. An individual's ability to make independent economic decisions and participate in shaping the trajectory of society acts as an anchor for societal stability. Without this, you risk a disenfranchised populace, and a disenfranchised populace is a potentially dangerous one. Especially with accessible, highly capable AI. Dave Shapiro covers more on this idea in detail: Economic Agency: A Key Principle in Post-Labor Economics.

Highly capable AI also has the potential to reshape geopolitics. In a multi-polar outcome, nations with advanced AI capabilities would wield disproportionate influence, risking a new form of colonialism where "intelligence haves" exploit "have-nots" for data and resources. Military alliances and trade blocs could fracture as AGI-driven automation reshapes supply chains and strategic priorities. Yet this disruption also creates an urgency for unprecedented cooperation on global standards for AI safety, equitable resource-sharing agreements, and multilateral institutions to govern cross-border AI impacts. The alternative—a fragmented world where this technology accelerates nationalism and conflict—could trigger existential risks, from runaway arms races to ecological collapse. As we saw with prior periods of technological advancement, attempting prohibition in the face of global competition tends to be futile. Success here then hinges on collectively prioritising human flourishing over zero-sum competition, as we did with gene editing and, eventually, nuclear weapons.

Impacts on Average Humans?

Jobs today provide people with an income, a sense of purpose/meaning, and social connection, amongst other things. But they aren't strictly necessary. An income can be replaced via some form of wealth distribution. Meaning, purpose and connection can all be obtained in ways that aren't economically productive. For example, many form great friendships at work, but I'd argue these are driven primarily by proximity over a direct desire to specifically connect with a co-worker. [14]

The automation cliff hypothesis suggests that automation will not meaningfully occur until capabilities (and infrastructure) reach a tipping point, at which point it will occur all at once. Even still, it's unlikely that it will occur uniformly across all tasks. Task complexity and human preferences create varying levels of resistance to machine replacement. Some roles will persist not because machines can't perform them but because humans specifically want other humans in those roles.  

These persisting jobs could be divided into two categories: those preserved by temporary technical and social barriers and those maintained due to fundamental human-centric aspects. The former includes production/diffusion lags, implicit knowledge requirements, trust barriers, and regulatory requirements. However, these will likely erode over time as AI capabilities advance and society adapts. The latter category represents the areas where people are willing to pay a premium just for human involvement. 

Why could this be the case? Well, it doesn't seem possible for even superintelligence to truly understand the human experience. Think about shivering in the cold wind by the ocean, the feeling when your team scores in the last seconds of a close game, or the irrational fear making you sprint up dark stairs after watching a horror movie. These moments aren't just about processing information. They're raw, visceral experiences that require the biological limitations of human hardware, coupled with the relevant cultural software, in order to have the same impact and meaning. An AI might be able to simulate these experiences perfectly, but without our constraints and context, it would be like watching a video of a sunset versus feeling the actual warmth of the sun on your face. 

Therefore, the more resistant roles centre around authentic human connection, where the human element isn't just a feature but the core value proposition.

Direct Human Interaction 

Physical and mental health services may continue to favour human providers because authentic relatability (physically and emotionally) matters more than running an optimal training session or giving perfect therapeutic advice.

Similarly, early childhood education and care are in the same boat because human attachment and socialisation are crucial developmental needs that machines may not be able to fully replicate. 

Communities may also benefit from human leadership because shared human experience creates legitimacy in ways that artificial systems cannot match.

Identity and Performance

Professional sports, performing arts, and competitive games will continue to captivate audiences precisely because human physical limitations set the stage for meaningful competition. 

The appeal isn't just seeing peak performance or technical superiority. It also lies in the ability to relate to the human story behind a given achievement or performance. 

This is why we choose to watch inferior human chess players over superior chess bots. This type of live entertainment will retain human participation because it is the human element that is key to the experience. 

Cultural and Meaning-Making

Similarly, religious and spiritual leaders will continue to guide communities because they navigate questions of human existence and purpose from a position of shared experience. 

Philosophers and ethicists will remain relevant because they engage with human experience and values from an authentically human perspective. 

Creators and artists will continue to find audiences because they express uniquely human perspectives that resonate with our lived experience. [15]

"Be a creator and you won’t have to worry about jobs, careers, and AI." (Naval Ravikant)

Experiences

Travel guides, like sherpas, help people through meaningful challenges and derive their value from a shared set of physical and emotional experiences. 

Event planners and experience designers craft moments of human connection that matter precisely because they are facilitated by fellow humans who understand the nuances of human social interaction. 

Wellness practitioners and coaches will continue to find work because their value lies in their ability to relate to and understand human physical and emotional needs firsthand. 

High-end restaurants and hospitality experiences will also retain a human element as some of the value comes from the interaction with and backstory behind a skilled human chef.

For all these remaining roles, the key lies not in competing with machines but in cultivating what makes us uniquely human.  

"I think you want your work to be as close to or as far from AI as possible." (Jack Altman)

I think the more profound shift will be how people spend their newfound cognitive surplus and free time without the need to work. People won't run out of things to do, but their lifestyles will change dramatically.[16]

There are a range of core physiological and psychological needs that have always driven human behaviour. Money only acts as the medium of exchange, indirectly satisfying some of these needs and desires‒ food, shelter, connection/community, curiosity, pleasure, etc. We have always found ways to satisfy boredom, play new status games, or derive new sources of meaning. And we will continue to do this with or without economically productive jobs.

In theory, people will have the opportunity to do more things for their own sake. Without the pressure to be economically productive, physical pursuits, creative expression, and intellectual exploration could take on new significance as ways to express our humanity and experience the world directly. Core evolutionary drivers will still influence our behaviour, but how we achieve these outcomes (reproduction, creativity, learning) may change.

This reshaping of human activity and purpose may also determine where and how we choose to live. The concentration of populations in cities, driven by proximity to economic opportunity, could reverse as the cons of urban living become harder to ignore. With advanced technology and material abundance, lots of traditionally isolated, pristine land on Earth could become attractive places to settle. People may band together and form communities organised around shared values and ways of living. Without the need to play nice socially for employment purposes, communities might evolve in more diverse and potentially divergent directions. 

Identity and self-worth will also need to seek new foundations. Today, when someone asks what we do for a living, we default to answering within an economic frame. People care a lot about what you do to make money because our current society is so focused on wealth that it has become the standard measure of success. This leads to many connecting with others based on their economic potential‒ think about that awful work function or networking event you attended. In a few years, the answer to that same question may completely change. Material abundance and superintelligence reduce the need to orient around wealth, affording us the opportunity to connect because we actually want to. How people make us feel, what we find innately interesting about them, and what their values are could make a return to the spotlight. Now, just because wealth isn't the primary driver doesn't mean we won't connect for other cynical reasons (social status, experiences, etc.). But we could see a change nonetheless.  

Finally, the human experience itself would completely change. Paradoxically, the relative ease of satisfying our needs and desires with superintelligence could have a negative psychological impact on us. Struggle is a large part of what it means to be human. Our evolutionary roots have programmed us to find meaning and derive satisfaction from doing hard things because our survival depended on it. Imagine today you decided to hike a mountain, but you had the option to press a button and immediately be at the summit. Would you press it? The journey and the difficulty are what make this endeavour meaningful. Now imagine said button exists for almost anything we need or want to do (via superintelligence). A world with fewer challenges could take something away from what it means to be human or redefine it altogether.

Having said that, this redefinition has happened in the past, and we've seen humans are absurdly adaptable to changing contexts. One could even question how human we really are today. We augment our vision (glasses), experiences (drugs), hearing (hearing aids), performance (steroids), computation (computers), bodies (prosthetics) and now cognition (AI) using 'unnatural' external technologies. We spend most of our days staring at pixels on screens, socialise and connect digitally, and have safety nets that can take care of us even if we run into severe illness or disability. The average human hasn't had a real reason to hunt, grow food, fight or build something themselves in over 50 years. If we had to interact with a group of humans from 100 years ago, they'd already consider us to be cyborgs. And those 1000 years ago would consider us gods. So maybe we'll just continue to find champagne problems that are worth the struggle in a post-AGI world.

The impacts explored in this section are quite dramatic. But the world will not change all at once; it never does. Life will go on mostly the same in the short run. Next year people will mostly spend their time in the same way they did in 2025. But look ahead five, ten, or twenty years from now, and these small changes will add up to create a fundamentally different world than the one we know today.

Value in a post-AGI World

Despite the potential for material abundance, value doesn't disappear. Similar to past periods of technological advancement, its perception will once again evolve. 

Value is driven by scarcity, utility, and widespread social consensus. Utility represents the satisfaction or benefit derived from something. Scarcity means something is rare, and many others want it (a shortage). And social consensus means that many others also agree the thing has value.

Scarcity occurs when a good has a positive price, signalling trade-offs in resource allocation. Something can be both abundant and scarce simultaneously: even a plentiful good is economically scarce if people want more of it than can be supplied at zero cost.

The social consensus component highlights the role of our psychology in determining value. As mimetic creatures, we are heavily influenced by whether, and how much, others value something when forming our own value judgements.

However, all three drivers do not need to be equally present for value to exist. We can have scarcity and utility without widespread social consensus‒ like a rare family heirloom. Or we can have utility and social consensus without real scarcity‒ if something is inherently abundant, like the air we breathe. The most valuable things, though, tend to have all three.

What will remain scarce?

The most obvious place to start is irreproducible physical resources. Land is inherently scarce as no matter how intelligent machines become, they can't create more physical space on Earth. We might build upward with more efficient structures or eventually expand beyond Earth, but this constraint remains. 

Energy and certain raw materials will likely maintain their scarcity, too, though our ability to access and transform them will improve dramatically. Even with AGI, we're still subject to a populace with unlimited wants and needs whilst being squeezed on the supply side by the fundamental laws of physics. You can't create energy out of nothing, and while we might get far better at harnessing and converting it, its basic scarcity will remain.

Taste, too, will paradoxically become even more scarce in an AGI-saturated world. But taste has become a buzzword in its own right, so it's worth defining what I mean. Taste is the cultivation of judgment through rigorous engagement with context, history, and craft. This is not what you like, but why you like it—and the labour required to understand that "why." It’s the antithesis of algorithmic curation, aesthetic posturing, or Silicon Valley’s simplicity dogma. 

"At its core, taste is a love letter to effort." (Jae Lubberink)

As AGI systems perfect their ability to generate aesthetically pleasing outputs and mimic historical styles, the gap between synthetic replication and genuine cultural comprehension could widen. As Jae Lubberink explains, the machines could parse every brushstroke of Rembrandt but could not authentically grasp the existential weight of Protestant theology that informed his work. This creates a new form of scarcity—the capacity to distinguish between algorithmic authenticity and human-derived meaning. True taste becomes rarer precisely because it requires what AGI cannot replicate: the lived experience of cultural immersion, the evolutionary heritage of human emotion, and most importantly, the conscious choice to engage in epistemological labour when effortless alternatives abound. In a world where anyone can generate infinite variations of "beautiful" content, the scarcity shifts from the outputs to the human capacity to understand their deeper significance.

Other things will retain their scarcity due to human psychology and social dynamics. Original artwork and historical artifacts are good examples. Even in a world where we can make perfect copies of the Mona Lisa, the original still holds immense value. This is because we care about authenticity itself, and use it as a marker of status or cultural significance.

Time and human attention represent another category of persistent scarcity. Even in a world of material abundance, human time remains finite. This becomes even more important as typical material constraints fade. When almost anything can be produced effortlessly by machines, human attention and engagement become increasingly precious‒ especially if we still possess economic agency and high-level control of an aligned superintelligence.

What might be undervalued today?

Networks and social capital

Despite already being recognised as valuable, these may still be dramatically underestimated. In a world where material production is largely automated, the ability to influence and connect with other humans could become increasingly crucial. We might be seeing early signs of this in the rising importance of influencers and community builders over the past 5-10 years, but the intrinsic value of human networks may be far greater than we currently appreciate.   

Human-made goods and performance

There's growing evidence that as technology enables the production of technically 'perfect' items, flawed human creations may increase in value. We've already seen this trend in the market for artisanal crafts. A handmade ceramic bowl by a traditional artisan might be less functional than a machine-made one, but its imperfections and origin make it more valuable, not less. This trend could accelerate significantly in a post-AGI world, where 'human-made' or 'human-performed' becomes synonymous with luxury.

Bitcoin and cryptocurrency

I'm less confident on this one. But given the future potential for conflict, decentralisation, and the need to re-design economic systems, cryptocurrencies could have a role to play that increases their utility. Bitcoin, in particular, has social consensus and hardcoded scarcity today, but I feel the real-world utility still has some way to go. 

What will be the main sources of wealth?

The sources of wealth in this future will likely centre around the control of truly scarce resources.

Companies, especially those that effectively enable or leverage AI models, will likely become the primary generators of wealth. The nature of these companies might be quite different from what we see today. They may not look like stereotypical technology companies, but every company will necessarily be a technology company. 

As mentioned already, land and physical real estate will likely become even more significant stores of wealth than they are today. The fixed supply of land, particularly in desirable locations, makes it a uniquely valuable asset in a world of increasing abundance. 

Most household wealth today is already held in real estate (the personal residence) or in company equity via pension funds. This trend could intensify as other forms of scarcity diminish.

Answering the value question requires looking beyond simple abundance to see how relative values change and how human psychology continues to create meaningful distinctions, even in a world of material plenty. 

What won't change?

There really isn't much outside of the laws of physics that I can confidently say won't change with an aligned superintelligence.[17] The following are the few I've identified: 

Part 4 Summary:

Part 5: What should you do today?

We've explored the historical context, dreamt up potential future states, and speculated on high-level societal implications. But regardless of your position, what should you actually do today to best position yourself for success?

You could do nothing. There's a reasonable case you could make that this frenzy is a complete hype cycle with very little merit (AI-Fizzle World). It certainly wouldn't be the first time. So, in theory, you could laugh at all the sensationalists and short every AI company that exists, making a lot of money and living happily ever after.[18]

But sceptics often get to be right, while optimists get to be rich. I think it's clear that doing something beats doing nothing here. Acting as if a future with highly capable AI will come to fruition is a low-risk, high-return move, and I think it should be the dominant strategy for most players. If AGI/ASI arrives, the payoff on your action is huge. If we don't end up in an AGI/ASI future, the opportunity costs are minimal (and in some cases could be close to 0). On the other hand, if you do nothing and AGI and/or superintelligence arrives, your opportunity cost could be enormous.
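
To make that asymmetry concrete, below is a minimal sketch of the decision problem. The payoff numbers are entirely hypothetical and chosen only to illustrate the shape of the argument‒ they aren't estimates of anything real:

```python
# Toy decision matrix: act vs. do nothing, under two possible futures.
# All payoffs are made up purely to illustrate the asymmetry.
payoffs = {
    ("act",        "agi_arrives"): 100,  # big upside if you positioned early
    ("act",        "ai_fizzles"):   -1,  # small opportunity cost of preparing
    ("do_nothing", "agi_arrives"): -50,  # large opportunity cost of being late
    ("do_nothing", "ai_fizzles"):    0,  # nothing ventured, nothing lost
}

def expected_value(choice: str, p_agi: float) -> float:
    """Expected payoff of a choice, given a subjective probability of AGI."""
    return (p_agi * payoffs[(choice, "agi_arrives")]
            + (1 - p_agi) * payoffs[(choice, "ai_fizzles")])

for p_agi in (0.1, 0.5, 0.9):
    print(f"P(AGI)={p_agi}: act={expected_value('act', p_agi):+.1f}, "
          f"do nothing={expected_value('do_nothing', p_agi):+.1f}")
# With these numbers, acting wins at every probability shown; only a
# vanishingly small P(AGI) would flip the decision, because the downside
# of acting is capped while the downside of doing nothing is not.
```

The exact numbers don't matter; the point is the asymmetry between a small, bounded cost if AI fizzles and a large cost if it doesn't.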

If you do decide to do something, here are the things I'd suggest based on my reading of the tea leaves:

Improve Future Literacy

Even if you don't agree with the specific conclusions I've drawn in this subjective, speculative section, changes are inevitable. The first thing you should do is increase your future literacy so you can form your own opinions. Obtaining information before it becomes widespread consensus can act as a form of leverage and have an outsized impact on your actions.

I think another great way to increase future literacy is by becoming skilled at using emerging technology. Compared to prior technological revolutions, the friction to joining the frontier and learning at the cutting edge is the lowest it's ever been. Adapt early, look for the best products, and invest time and effort. Live in the future while it's unevenly distributed.

But just like overfitting to training data isn't a good thing in Machine Learning, you shouldn't over-optimise for any single piece of advice, prediction or worldview.  Keep an open mind, stay nimble, and be intellectually humble.

Identify Outlier Activities

In his book "Outliers", Malcolm Gladwell suggests that a large degree of outlier achievement can be attributed to the hidden advantages, opportunities and cultural legacies that our particular place in history presents us with. "For a young would-be lawyer, being born in the early 1930s was a magic time, just as being born in 1955 was for a software programmer, or being born in 1835 was for an entrepreneur." (Gladwell) Effectively, in any time period, there is a set of 'outlier activities' that vastly increases the likelihood of your personal success.

Previous periods of crisis, conflict and change have been great for outlier achievement. I'm sure there are a number of activities today that offer this unique 'outlier payoff'. For example, one could argue that anything that significantly increases your level of capital before a singularity event could act as a widely accessible outlier activity. If capital far outweighs the value of labour in the future and we remain a society rife with inequality and reduced social mobility, this could have a huge impact on your living standards during any transition period. Similarly, if you're a technology company with lots of capital, your outlier activity is probably developing AGI before anyone else (also a competitive Nash Equilibrium).

Building start-ups that effectively leverage AI, solving the alignment problem, acquiring undervalued land, creating strong communities, building social capital (audience/network/influence) and mastering human-centric disciplines (i.e. the things furthest from AI) are a few other potential outlier activities in this period before AI is highly capable.[19]

But as Paul Graham advises in "How To Do Great Work",  rather than chasing algorithms or whatever's socially trending today (including this whole AI hype), you could focus on doing the things you're good at, interested in, and find meaningful. Just make sure to keep an eye on the potential "Black/White/Rainbow Swans"‒ events or developments that can drastically impact your plans, priors and assumptions.[20]

Increase Ambition, Risk Appetite and Agency

"Robin Hanson calls the present "the dreamtime", following a concept in Aboriginal mythology: the time when the future world order and its values are still liquid, not yet set in stone." (L Rudolph L - By default, capital will matter more than ever after AGI — LessWrong [LW · GW])

There are a lot of things we can currently only imagine or hypothesise about that become possible with highly capable AI. I think this 'Dreamtime' period has the potential to be one where human ambition is rewarded immensely.

As an individual, take the time to understand what you really value and want out of your life. Map these goals and dreams to current pathways, but remember to regularly question whether there might be better, non-conventional ways to get what you want. The most popular method isn't necessarily the best. And the best could be on the horizon.

As a company, design and build the optimal solution to your problem in a technologically enabled future. AI is likely to make some unprofitable business models very profitable. Start ideating at the limits of your imagination and work out the practicalities later. Don't let today's impossibilities constrain tomorrow's vision. 

As a society, we should open our minds to new possibilities. Step outside of our preconceived world models and prepare for potential change. If we don't aim to be well ahead of the 8-ball, we won't even get close.

Do the thing, shoot the shot, text the girl, take the trip, start the business[21], learn the skill, read the book, buy the ticket (or build AI applications that help people do all of this). 

This could be the decade of false promises, the golden age of all golden ages, the end of the dark ages, or extinction itself. 

 

  1. ^

    The substrate-independence hypothesis may not actually hold.

  2. ^

    As discussed in Sapiens, despite our material wealth, are we objectively happier or more fulfilled than a free-roaming hunter-gatherer tribe 5000 years ago? Some studies on happiness and well-being say so, whilst others don't. 

  3. ^

    Exception for the Nuclear arms race and the Manhattan Project. While Ridley downplays the role of top-down planning, there are instances where government intervention and large-scale coordination have been crucial to technological progress. For example, the development of the atomic bomb, the Apollo program, and even the early internet (ARPANET) relied heavily on government funding and direction.

  4. ^

    I got 72,000 years by taking the 45TB estimate for text data, assuming 200,000,000,000 words per TB, and a 238 WPM reading speed. 
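
    For the curious, here's the back-of-the-envelope arithmetic in full, expressed as a minimal Python sketch (it uses only the assumptions stated above):

    ```python
    # Back-of-the-envelope check of the ~72,000-year figure.
    data_tb = 45                    # estimated text training data, in terabytes
    words_per_tb = 200_000_000_000  # assumed words per terabyte
    reading_speed_wpm = 238         # assumed reading speed, words per minute

    total_words = data_tb * words_per_tb       # 9 trillion words
    minutes = total_words / reading_speed_wpm  # ~3.8e10 minutes of reading
    years = minutes / (60 * 24 * 365)          # convert minutes to years
    print(f"~{years:,.0f} years of non-stop reading")  # ~71,947, i.e. roughly 72,000
    ```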

  5. ^

    LLMs are the most recent example of technological capabilities evolving ahead of the theory.

  6. ^

    This could also be some cheeky marketing for Salesforce's AI platform.

  7. ^

    It's worth noting many are sceptical that the reported numbers from DeepSeek (and any Chinese company) are legitimate. However, the performance we experience is undeniable.

  8. ^

    OpenAI recently released excellent benchmark performance for their newest model, o3. However, Epoch AI (the company behind the FrontierMath benchmark) recently shared that the benchmark was funded by OpenAI [LW · GW] and that OpenAI had exclusive access to solutions for most of the hardest problems.

  9. ^

    General Properties that define these future scenarios: Is AGI/ASI created? Will there be a fast/slow/no take-off toward superintelligence? Will we have a Unipolar or Multipolar centre(s) of power? Who or what will control society/AI, and what are their/its goals? Is AI Alignment possible? How are humans treated? Does AI have consciousness?

  10. ^

    I think human-computer integration and cyborgs are inevitable. It feels like the only marketable way to hand over control of society to a superintelligence. This, coupled with our innate drive towards bettering ourselves, could lead us down the path of cybernetic enhancement. 

  11. ^

    This can be explained by the ideas of instrumental convergence and orthogonality. 

    Instrumental convergence suggests that sufficiently intelligent agents will tend to pursue similar intermediate goals (like acquiring resources and self-preservation) regardless of their different ultimate objectives. This convergence occurs because certain sub-goals, such as obtaining more resources or protecting oneself from interference, are instrumentally useful for achieving almost any final goal an agent might have. 

    The Orthogonality Thesis [? · GW] states that an agent's intelligence level and its final goals can vary independently of each other, meaning that any level of intelligence could be paired with any goal whatsoever. In English: being smart doesn't automatically make you want "good" things, and being focused on seemingly trivial goals (like making paperclips) doesn't require being stupid.

  12. ^

    Some believe that alignment is already built in and not a valid concern, whilst others suggest it is a vague and misspecified problem. Still others think it is an intractable problem altogether.

  13. ^

     Sam Altman, Dave Shapiro, and Bryan Johnson have started throwing out ideas around how new systems/ideologies could be designed. I think an aligned AGI/ASI under our control could design this much better than we ever could, but we do need it sorted out well before that point.

  14. ^

    Even if driven by proximity, this doesn't mean they aren't important relationships. Sometimes, people have no other social networks and explicitly rely on these proximity-driven friendships.

  15. ^

    I think online/digital creation will still thrive, but low-value creators will be wiped out. They will be competing with anyone who has the ability to build AI creators, and I can't see why those AI creators won't get better than today's low-value human creators.

  16. ^

    Social dynamics could potentially return to something like those of high school, but with a little more maturity. If there is no need to work, our lives could look closer to what we did during school, but with more self-direction and less structure: a lot more socialisation and fun, but social dynamics that are much more important to navigate effectively.

  17. ^

    We could even discover new physics or the 'holy grail' theory of everything.

  18. ^

    Even if this is your position, shorting the market is probably not a good idea because, as Aswath Damodaran often preaches, markets can stay delusional a lot longer than you can stay solvent.

  19. ^

    Don't rely on 'AI' to describe your startup. If you need to mention AI in the what instead of the how, it's an anti-signal.

  20. ^

    For example, even if you loved bookkeeping and knew it was your life's work, it's still probably not the smartest idea to become a bookkeeper in the face of advanced AI. This is a rules-based profession with relatively low task complexity and little resistance to displacement. Building a bookkeeping AI agent might be a better outlet for the same interest.

  21. ^

    If you're building a start-up in this space, there are a few learnings we can extrapolate from the dot-com boom. The 'known-knowns' were areas where the legacy players tended to outcompete start-ups and win. The 'known-unknowns' were competitive spaces with fair opportunities to win, as larger established players didn't want to take on the risk. But the largest reward was in the 'unknown-unknowns' (yet to be discovered), where, by definition, you will face the least competition if you can identify them.
