Techno-humanism is techno-optimism for the 21st century

post by Richard_Ngo (ricraz) · 2023-10-27T18:37:39.776Z · 5 comments

This is a link post for https://www.mindthefuture.info/p/techno-humanism-is-techno-optimism

Contents

  The triumphs of techno-optimism
  Three cracks in techno-optimism
  AI and techno-optimism
  AI and techno-humanism
  Balancing the tradeoffs
5 comments

Lately I’ve been reading about the history of economic thought, with the goal of understanding how today’s foundational ideas were originally developed. My biggest takeaway has been that economics is more of a moving target than I’d realized. Early economists were studying economies which lacked many of the defining features of modern economies: limited liability corporations, trade unions, depressions, labor markets, capital markets, and so on. The resulting theories were often right for their time; but that didn’t stop many from being wrong (sometimes disastrously so) for ours.

That’s also how I feel about the philosophy of techno-optimism, as recently defended in Marc Andreessen’s Techno-Optimist Manifesto. I’ll start this post with the many things it gets right; then explore why, over the last century, the power of technology itself has left straightforward techno-optimism outdated. I’ll finish by outlining an alternative to techno-optimism: techno-humanism, which focuses not just on building powerful engines of progress, but also on developing a deep scientific understanding of how they can advance the values we care about most.

The triumphs of techno-optimism

Here’s where I agree with Andreessen: throughout almost the entirety of human history, techno-optimism was fundamentally correct as a vision for how humans should strive to improve the world. Technology and markets have given us health and wealth unimaginable to people from previous centuries, despite consistent opposition to them. Even now, in the face of overwhelming historical evidence, we still dramatically underrate the potential of technology to solve problems ranging from climate change to neglected tropical diseases to extreme poverty and many more. This skepticism holds back not only speculative technologies but also ones that are right in front of us: miraculous solutions to huge problems (like vaccines for COVID and malaria, and gene drives for mosquito eradication) are regularly stalled by political or bureaucratic obstructionism, costing millions of lives. The RTS,S malaria vaccine spent twenty-three years bogged down in clinical trials, with six years of that delay caused by WHO “precautions” even after other regulators had already approved it. Fighting against the ideologies and institutional practices which cause tragedies like these is an incredibly important cause.

We often similarly underrate the other core tenet of techno-optimism: individual liberty. This was a radical principle when Enlightenment thinkers and leaders enshrined it—and, unfortunately, it remains a radical principle today, despite having proven its worth over and over again. It turns out that free markets can lead to near-miraculous prosperity; that free speech is the strongest enemy of tyranny; and that open-source projects (whether building software or building knowledge) can be extraordinarily successful. Yet all of this is constantly threatened by the creeping march of centralization and overregulation. Opponents of liberty always offer lofty rhetoric about the near-term benefits of intervention, but fail to grapple with the extent to which centralized power is inevitably captured or subverted, sometimes with horrific consequences. Defeating that creep requires a near-pathological focus on freedom—which, little by little, has added up to a far better world. Even an ideally benevolent government simply couldn’t compete with billions of humans using the tools available to them to improve their own lives and the lives of those they interact with, including by discovering solutions that no central planner would have imagined.

Not only is techno-optimism vastly underappreciated, it’s often dismissed with shockingly bad arguments that display little understanding of basic history or economics. Because it’s so hard to viscerally understand how much better the world has gotten, even its most cogent critics fail to grapple with the sheer scale of the benefits of techno-optimism. Those benefits accrue to billions of people—and they aren’t just incidental gains, but increases in health and wealth that would have been inconceivable a few centuries ago. Familiarity with the past is, in this case, the best justification for optimism about the future: almost all of the barriers to progress we face today pale in comparison to the barriers that we’ve already overcome.

I say all this to emphasize that I very viscerally feel the hope and the beauty of techno-optimism. The idea that progress and knowledge can grow the pie for everyone is a stunningly powerful one; so too is the possibility that we live in a world where the principles of freedom and openness would triumph over any obstacle, if we only believed in them. And so when I criticize techno-optimism, I do so not from a place of scorn, but rather from a place of wistfulness. I wish I could support techno-optimism without reservations.

Three cracks in techno-optimism

But techno-optimism is not the right philosophy for the 21st century. Over the last century in particular, cracks have been developing in the techno-optimist narrative. These are not marginal cracks—not the whataboutism with which techno-optimism is usually met. These are cracks which, like the benefits of techno-optimism, play out at the largest of scales. I’ll talk about three: war, exploitation, and civilizational vulnerability.

One: the increasing scale of war. World War 1 was a bloody, protracted mess because of the deployment of new technologies like machine guns and rapid-firing artillery. World War 2 was even worse, with whole cities firebombed and nuked, amidst the industrial-scale slaughter of soldiers and civilians alike. And the Cold War was more dangerous still, leaving the world teetering on the brink of global nuclear war. Mutually Assured Destruction wasn’t enough to prevent close calls; our current civilization owes its existence to the courage of Stanislav Petrov, Vasily Arkhipov, and probably others we don’t yet know about. So you cannot be a full-throated techno-optimist without explaining how to reliably avoid catastrophe from the weapons that techno-optimists construct. And no such explanation exists yet, because so far we have been avoiding nuclear holocaust through luck and individual heroism. When the next weapon with planetary-scale destructive capabilities is developed, as it inevitably will be, we need far more robust mechanisms preventing it from being deployed.

Two: the increasing scale of exploitation. Technology allows the centralization of power, and its use to oppress the less powerful at far larger scales than before. Historical examples abound—most notably the Atlantic slave trade and the mass deaths of civilians under 20th-century communist and fascist regimes. But since it’s too easy to chalk these up as mistakes of the past which we’re now too enlightened to repeat, I’ll focus on an example that continues today: the mass torture of animals in factory farms. This started less than a century ago, in response to advances in logistics and disease-reduction technologies. Yet it has grown to a staggering scale since then: the number of animals killed in factory farms every year is comparable to the number of humans who have ever lived. The sheer scale of this suffering forces any serious moral thinker to ask: how can we stop it as soon as possible? And how can we ensure that nothing like it ever happens again?

Three: increasing vulnerability to reckless or malicious use of technology. Markets and technology have made the world far more robust in many ways. We should be deeply impressed by how supply chains which criss-cross the world remained largely intact even at the height of COVID. But there are other ways in which the world has become more vulnerable to our mistakes. The most prominent is gain-of-function research on pathogens. Not only are we now capable of engineering global pandemics, but China- and US-funded scientists probably already have. And there’s no reason that the next one couldn’t be far worse, especially if released deliberately. Nor is bioengineering the sole domain in which (deliberate or accidental) offense may overwhelm defense to a catastrophic degree; other possibilities include geoengineering, asteroid redirection, nanotechnology, and new fields that we haven’t yet imagined.

These cracks all hurt the techno-optimist worldview. But I don’t think they take it down; techno-optimists have partial responses to all of them. The world has become far more peaceful as a result of our increasing wealth—even if wars can be far bigger, they’re also far rarer now. Technology will produce tastier meat substitutes and cheap clean meat; when it does, factory farming will end, and humanity will look back on it in horror. And while we can’t yet robustly prevent the accidental or deliberate deployment of catastrophically powerful weapons (like nuclear weapons or engineered pandemics), we might yet stumble through regardless, as we have so far. So if the cracks above were the main problems with techno-optimism, I would still be a techno-optimist. I would have compunctions about the development of even more powerful weapons, and about all the sentient beings (whether farmed animals or wild animals or future people or artificial minds) which remained outside our circle of concern. But I’d still believe that any “cure” for those problems which undermined techno-optimism would be worse than the disease.

Yet I am not a techno-optimist, because we are about to leave the era in which humans are the most intelligent beings on earth. And the era of artificial intelligence will prise open these cracks until the deeper holes in the techno-optimist worldview become clear.

AI and techno-optimism

Until now, the primary forces shaping the world have been human intelligence and human agency, which allow us to envisage the outcomes we want, identify paths towards them, and consistently pursue them. Soon, artificial intelligence and artificial agency will match ours; and soon after that AI will far surpass us. I’ll describe at a very high level how I expect this to play out, in two stages.

Firstly: AIs used as tools will supercharge the dynamics I described above. The benefits of individual freedom will expand, as individuals become far more empowered, and can innovate far faster. But so too will the cracks enabled by technology: war, exploitation, and civilizational vulnerability. Which effect will be bigger? I simply don’t know; and nobody else does either. Predicting the offense-defense balance of new technology is incredibly difficult, because it requires accounting for all the different uses that future innovators will come up with. What we can predict is that the stakes will be higher than ever before: 21st-century technology could magnify the worst disasters of the 20th century, like world wars and totalitarianism.

Perhaps, despite that, the best path is still to rush ahead: to prioritize building new technology now, and assume that we can sort out the rest later. This is a very fragile strategy, though: even if you (and your government) would use new technologies responsibly, many others will not. And there’s still relatively little attention and effort focused on trying to avert the large-scale risks of new technologies directly. So the techno-optimist response to risks like the ones I’ve described above is shallow: in theory it opposes them, but in practice it may well make them worse.

Secondly: we’ll develop AI agents with values of their own. As AIs automate more and more complex tasks, over longer and longer timeframes, they’ll need to make more and more value-laden judgment calls about which actions and outcomes to favor. Eventually, viewing them as tools will be clearly inadequate, and we’ll need to treat them as agents in their own right—agents whose values may or may not align with our own. Note that this might only happen once they’ve significantly surpassed human intelligence—but given how fast progress in the field has been, this is something we should be planning for well in advance.

Humans who are free to make their own decisions tend to push the world to be better in terms of our values—that’s the techno-optimist position. But artificial agents who are making many decisions will push the world to be better in terms of their values. This isn’t necessarily a bad thing. Humans are hypocritical, short-sighted, often selfish, sometimes sadistic—and so it’s possible that AIs will be better custodians of our moral values than we are. We might train them to be wise, and kind, and consistently nudge us towards a better world—not by overriding human judgments, but rather by serving as teachers and mentors, giving us the help we need to become better people and build a better civilization. I think this is the most likely outcome, and it’s one we should be incredibly excited about.

But it’s also possible that AIs will develop alien values which conflict with those of humans. If so, when we give them instructions, they’ll appear to work towards our ends, but consistently make choices which bolster their power and undermine our own. Of course, AIs will start off with very little power—we’ll be able to shut them down whenever we detect misbehavior. But AIs will be able to coordinate with each other far better than humans can, communicate in ways we’re incapable of interpreting, and carry out tasks that we’re incapable of overseeing. They’ll become embedded in our economies in ways that amplify the effects of their decisions: hundreds of millions of people will use copies of a single model on a daily basis. And as AIs become ever more intelligent, the risks will grow. When agents are far more capable than the principals on whose behalf they act, principal-agent problems can become very severe. When it comes to superhuman AI agents, we should think about the risks less in terms of financial costs, or even human costs, and more in terms of political instability: careless principals risk losing control entirely.

How plausible is this scenario, really? That’s far too large a question to address here; instead, see this open letter, this position paper, and this curriculum. But although there’s disagreement on many details, there’s a broad consensus that we simply don’t understand how AI motivations develop, or how those motivations generalize to novel situations. And although there’s widespread disagreement about the trajectory of AI capabilities, what’s much less controversial is that when AI does significantly surpass human capabilities, we should be wary of putting it in positions where it can accumulate power unless we have very good reasons to trust it.

It’s also worth noting that Andreessen’s version of techno-optimism draws heavily from Nick Land’s philosophy of accelerationism, which expects us to lose control, and is actively excited about it. “Nothing human makes it out of the near future”, Land writes, and celebrates: “The planet has been run by imbeciles for long enough.” I read those words and shudder. Land displays a deep contempt for the things that make us ourselves; his philosophy is fundamentally anti-humanist (as Scott Alexander argues more extensively in his Meditations on Moloch). And while his position is extreme, it reflects a problem at the core of techno-optimism: the faster you go, the less time you have to orient to your surroundings, and the easier it is to diverge from the things you actually care about. Nor does speed even buy us very much. On a cosmic scale, we have plenty of time, plenty of resources available to us, and plenty of space to expand. The one thing we don’t have is a reset button in case we lose control.

AI and techno-humanism

So we need a philosophy which combines an appreciation for the incredible track record of both technology and liberty with a focus on ensuring that they actually end up promoting our values. This mindset is common amongst Effective Altruists—but Effective Altruism is an agglomeration of many very different perspectives, drawn together not by a shared vision about the future but by shared beliefs about how we’re obliged to act. I’d like to point more directly to an overarching positive vision of what humanity should aim towards. Transhumanism offers one such vision, but it’s so radically individualist that it glosses over the relationships and communities that are the most meaningful aspects of most people’s lives. (Note, for example, how Bostrom’s Letter from Utopia barely mentions the existence of other people; while his introduction to transhumanism relegates relationships to the final paragraph of the postscript.) So I’ll borrow a term coined by Yuval Harari, and call the philosophy that I’ve been describing techno-humanism.

Harari describes techno-humanism as an ideology focused on upgrading humans to allow our actions and values to remain relevant in an AI-dominated future. I broadly agree with his characterization (and will explore it more in future posts), but both the benefits and the risks of re-engineering human brains are still a long way away. On a shorter timeframe, I think a different way of “upgrading” our minds is more relevant: developing a deep understanding of our values and how technology can help achieve them. Flying cars and rockets are cool, but the things we ultimately care about are far more complex and far more opaque to us. We understand machines but not minds; algorithms but not institutions; economies but not communities; prices but not values. Insofar as we face risks from poor political decisions, or misaligned AIs, or society becoming more fragile, from a techno-humanist perspective it’s because we lack the understanding to do better.

This is a standard criticism of techno-optimism—and usually an unproductive one, since the main alternative typically given is to defer to academic humanities departments which produce far more ideology than understanding. But techno-humanism instead advocates trying to develop this understanding using the most powerful tools we have: science and technology. To give just a few examples of what this could look like: studying artificial minds and their motivations will allow us to build more trustworthy AIs, teach us about our own minds, and help us figure out how the two should interface. The internet should be full of experiments in how humans can interact—prediction markets, delegative democracies, adversarial collaborations, and many more—whose findings can then improve existing institutions. We should leverage insights from domains like game theory, voting theory, network theory, and bargaining theory to help understand and reimagine politics—starting with better voting systems and ideally going far further. And we should design sophisticated protocols for testing and verifying the knowledge that will be generated by AIs, so that we can avoid replicating the replication crises that currently plague many fields.
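
As a small concrete illustration of the “better voting systems” point above, here is a minimal sketch in Python comparing plurality voting with approval voting. The ballots are invented purely for illustration, and this is a toy model rather than anything from the original post: under plurality, a candidate most voters don’t support can win simply because the opposing vote splits, while approval voting counts every candidate a voter finds acceptable.

    from collections import Counter

    # Hypothetical ballots, invented for illustration: each voter lists every
    # candidate they approve of, with their single favorite listed first.
    ballots = [
        ["A"], ["A"], ["A"], ["A"], ["A"],   # 5 voters approve only A
        ["B", "C"], ["B", "C"], ["B", "C"],  # 3 voters favor B but also approve C
        ["C", "B"], ["C", "B"],              # 2 voters favor C but also approve B
        ["C"],                               # 1 voter approves only C
    ]

    # Plurality counts only each voter's first choice.
    plurality = Counter(ballot[0] for ballot in ballots)

    # Approval counts every candidate a voter approves of.
    approval = Counter(c for ballot in ballots for c in ballot)

    print(plurality.most_common())  # [('A', 5), ('B', 3), ('C', 3)] -> A wins
    print(approval.most_common())   # [('C', 6), ('A', 5), ('B', 5)] -> C wins

Here A wins under plurality despite being approved by fewer than half the voters, while C, acceptable to a majority, wins under approval voting. Real mechanism design is far subtler, but toy examples like this hint at how much room there is to improve on the status quo.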

This may sound overly optimistic. But some of the most insightful fields of knowledge—like economics and evolutionary biology—uncovered deep structure in incredibly complex domains via identifying just a few core insights. And we’ll soon have AI assistance in uncovering patterns and principles that would otherwise be beyond our grasp. Meanwhile, platforms like Wikipedia and Stack Overflow have been successful beyond all expectations; it’s likely that there are others which could be just as valuable, if only there were more people trying to build them. So I think that the techno-humanist project has a huge amount of potential, and will only become more important over time.

Balancing the tradeoffs

So far I’ve described techno-humanism primarily in terms of advances that techno-optimists would also be excited about. But inevitably, there will also be clashes between those who prioritize avoiding the risks I’ve outlined and those who don’t. From a techno-optimist perspective—a perspective that has proven its worth over and over again during the last few centuries—slowing down technological progress has a cost measured in millions of lives. This is an invisible graveyard which is brushed aside even by the politicians and bureaucrats most responsible for it; no wonder many techno-optimists feel driven to push for unfettered acceleration.

But from a techno-humanist perspective, reckless technological progress has a cost measured in expected fractions of humanity’s entire future. Human civilization used to be a toddler: constantly tripping over and hurting itself, but never putting itself in any real danger. Now human civilization is a teenager: driving fast, experimenting with mind-altering substances, and genuinely capable of wrecking itself. We don’t need the car to go faster—it’s already constantly accelerating. Instead, we need to ensure that the steering wheel and brakes are working impeccably—and that we’re in a fit state to use them to prevent non-human or anti-human forces from controlling the direction of our society.

How can people who are torn between these two perspectives weigh them against each other? On a purely numerical level, humanity’s potential to build an intergalactic civilization renders “fractions of humanity’s future” by far the larger consideration. But that math is too blasé—it’s the same calculation that can be, and often has been, used to justify centralization of power, totalitarianism, and eventual atrocities. And so we should be extremely, extremely careful when using arguments that appeal to “humanity’s entire future” to override time-tested principles. That doesn’t imply that we should never do so. But wherever possible, techno-optimists and techno-humanists should try to cooperate rather than fight. After all, techno-humanism is also primarily about making progress: specifically, the type of progress that will be needed to defuse the crises sparked by other types of progress. The disagreement isn’t about where we should end up; it’s about the ordering of steps along the way.
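
To spell out the numerical comparison above with a deliberately crude sketch (every figure below is made up purely for illustration; nothing here comes from the original post):

    # All numbers are hypothetical, chosen only to illustrate the scale mismatch.
    future_lives = 1e16      # a made-up estimate of lives in a spacefaring future
    delay_cost = 1e7         # a made-up cost, in lives, of slowing progress
    risk_reduction = 1e-4    # a made-up 0.01% cut in the chance of losing it all

    expected_saved = risk_reduction * future_lives
    print(expected_saved, delay_cost)  # 1e12 vs 1e7: five orders of magnitude apart

On those made-up numbers the future dominates by five orders of magnitude, which is exactly why the warning above matters: calculations like this can be made to justify almost anything.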

The two groups should also challenge each other to do better in areas where we disagree, so that we can eventually reach a synthesis. One challenge that techno-humanists should pose to techno-optimists: be more broadly ambitious! We know that technology and markets can work incredibly well, and have a near-miraculous ability to overcome obstacles. And so it’s easy and natural to see them as solutions to all the challenges confronting us. But the most courageous and ambitious version of techno-optimism needs to grapple with the possibility that our downfall will come not from a lack of technology, but rather from an overabundance of it—and the possibility that to prevent it we need progress on the things that have historically been hardest to improve, like the quality of political decision-making. In other words, techno-humanism aims to harness human ingenuity (and technological progress) to make “steering wheels” and “brakes” more sophisticated and discerning, rather than the blunt cudgels that they often are today.

My other challenge for techno-optimists: be optimistic not just about the benefits of technological growth, but also about its robustness. The most visceral enemy of techno-optimism is stagnation. And it’s easy to see harbingers of stagnation all around us: overregulation, NIMBYism, illiberalism, degrowth advocacy, and so on. But when we zoom out enough to see the millennia-long exponential curve leading up to our current position, it seems far less plausible that these setbacks will actually derail the long-term trend, no matter how outrageous the latest news cycle is. On the contrary: taking AGI seriously implies that innovation is on the cusp of speeding up dramatically, as improvements generated by AIs feed back into the next generation of AIs. In light of that, a preference for slower AI progress is less like Luddism, and more like carefully braking as we approach a sharp bend in the road.

Techno-optimists should challenge techno-humanists to improve as well. I can’t speak for them, but my best guess for the challenges that techno-optimists should pose to techno-humanists:

Techno-humanists need to articulate a compelling positive vision, one which inspires people to fight for it. Above, I’ve listed some ideas which have the potential to improve our collective understanding and decision-making abilities; but there’s far more work to be done in actually fleshing out those ideas, and pushing towards their implementation. And even if we succeeded, what then? What would it actually look like for humanity to make consistently sensible decisions, and leverage technology to promote our long-term flourishing? Knowing that would allow us to better steer towards those good futures.

Techno-humanists should grapple more seriously with the incredible track record of techno-optimism. Throughout history, people have consistently and dramatically underrated how valuable scientific and technological progress can be. That’s no coincidence: characterizing which breakthroughs are possible is often a big chunk of the work required to actually make those breakthroughs. Nor is it a coincidence that people dramatically underrate the value of liberty—decentralization works so well precisely because there are so many things that central planners can’t predict. So even if you find my arguments above compelling, we should continue to be very wary of falling into the same trap.

The purpose of this blog is to meet those challenges. Few of the ideas in this post are original to me, but they lay the groundwork for future posts which will explore more novel territory. My next post will build on them by arguing that an understanding-first approach is feasible even when it comes to the biggest questions facing us—that we can look ahead to see the broad outlines of where humanity is going, and use that knowledge to steer towards a future that is both deeply human and deeply humane.

5 comments

comment by matto · 2023-11-05T21:40:43.638Z

Excellent post. I wholeheartedly agree that progress should be driven by humanistic values as that appears to be the only viable way of steering it toward a future in which humanity can flourish.

I'm somewhat confused though. The techno-optimist space already seems to be largely and strongly permeated with humanist values. For example, Jason Crawford's Roots of Progress regularly posts/links things like a startup using technology to decrease the costs of beautiful sculpture, a venture to use bacteria to eradicate cavities, or a newsletter about producing high-quality policy (amongst other things like small-scale nuclear energy, vaccine technology, and interesting histories of technology). Even Andreessen's manifesto cites people like Percy, Fuller, and Nietzsche, all of whom had rather humanistic and positive visions of humanity.

I think that's a rather stark contrast to transhumanism or accelerationism, which I've never found alluring precisely because they seemed to lack a grounded focus on humans and humanity.

I do find Andreessen's mention of Nick Land troubling for precisely the reasons you wrote about, but I wonder how much of that is him making use of Land's ideas to explain the economics, rather than subscribing to Land's human-less vision.

I'm not trying to argue about definitions. I guess what I'm trying to say is that techno-optimism seems to have possessed a strong humanistic spirit for a long time already, marking it as very different from competing technology-focused communities of thought. Perhaps it makes more sense to fuel the humanist side of techno-optimism rather than forking it into its own thing?

Either way, looking forward to more posts! Especially curious about deeper takes on AI.

comment by Amalthea (nikolas-kuhn) · 2023-10-28T08:48:14.398Z

I find it quite alienating that you seem to be conflating "techno-optimism" with "technological progress".

Particularly, I think "techno-optimism" beyond "recognizing that technological progress is often good (and maybe to a larger extent than is often recognized)" easily rises to the level of an ideology, in that it diverts from truth-seeking (exemplified by Andreessen).

Basically, I agree on most of the object-level points you make but, in my intuition, having an additional emotional layer of attachment to so-and-so belief is not a thing we want in and of itself.

comment by jmh · 2023-10-28T11:31:33.734Z

"When the next weapon with planetary-scale destructive capabilities is developed, as it inevitably will be, we need far more robust mechanisms preventing it from being deployed."

Just a small clarification for me. When you say "deployed", do you mean not used, or not actually produced? Using atomic weapons as the example: would the theory of how to build a bomb be known, but we somehow prevent anyone from building one; or would the bomb get built, but we somehow prevent it from ever being used?

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-07T22:18:59.522Z

"The internet should be full of experiments in how humans can interact"

Ok, just want to take this opportunity to talk about something I would like. I would love to be able to poll people from around the world, in their native languages, in a balanced and fair way that didn't privilege particular countries or economic classes. I would be willing to pay $2-5 for a poll consisting of a handful of reasonably easy-to-answer questions. I want to know about things like: the common needs and desires of people in various places and economic situations, their moral intuitions on various topics, their hopes for the future, and their visions of a better world.

I also think it would be interesting to be able to have an LLM-based program give an interactive interview, and maybe find out interesting things I hadn't thought to ask.

I'd like to turn these anonymized responses into reports of human experience and post them as a sort of searchable newsfeed somewhere like Our World in Data. Who needs what where? Who hopes for what? 

Seems like this is something that our technological advances should enable.  

comment by Fergus Fettes (fergus-fettes) · 2023-10-28T04:54:57.996Z

One criticism of humanism you don't seem to touch on is,

  • isn't it possible that humanism directly contributes to the ongoing animal welfare catastrophe?

And indeed, it was something very like humanism (let's call it specific humanism) that laid the ideological foundation for the slave trade and the Holocaust.

My view is that humanism can be thought of as a hangover of Christian values, the belief that our minds are the endowments of God.

But if we have been touched by the angels, perhaps the non-metaphorical component of that is the development of the infosphere/memetic landscape/culture. Which is close to synonymous with technology. (Edit: that is, if you consider e.g. writing a technology.)