Searching for the Root of the Tree of Evil

post by Ivan Vendrov (ivan-vendrov) · 2024-06-08T17:05:53.950Z · LW · GW · 14 comments

This is a link post for https://nothinghuman.substack.com/p/searching-for-the-root-of-the-tree

“There are a thousand hacking at the branches of evil to one who is striking at the root”

Henry David Thoreau, Walden

The world is full of problems: pain, poverty, illness, war, pollution, to pick a few among thousands. Many of us feel like we need to Do Something about these problems. There’s just one problem (sorry): there are so many problems that focusing on one specific problem feels wrong. Why choose that one? How dare you ignore all the other problems? Do those people suffering not matter to you? Can’t you hear their screams?

One response is the effective altruist’s shut up and multiply: loosely, find a framework that lets you rank problems against each other, and then work on the most important problems. Yes, war is hell, but solving it is not tractable or neglected. So you buy anti-malarial bednets and try your best to ignore the screams.

Another is to try to identify the root cause of all the problems, and work on that. Some popular candidates include metascience, “healing our collective trauma”, “awakening from the meaning crisis”, “solving intelligence and using it to solve everything else”.

Sometimes you can justify this on effective altruist grounds, but it’s also very neat psychologically: you don’t have to ignore the screams. Whenever a new problem comes to your attention, you can say “yes, my work is helping with this also”. You can channel that empathy into whatever you were already doing. And afterwards, you can sleep easy, knowing that you helped.

This is awfully convenient, and you might be tempted to dismiss it as cope, as a way to feel good about yourself without ever doing any real, object-level, devil-is-in-the-details kind of work. And yet… this meta-strategy has worked at least once before, and worked so well as to dwarf all the object-level charity ever wrought by human hands and hearts. I’m referring to Francis Bacon’s invention of science.

Quoting the historian and novelist Ada Palmer’s beautiful essay On Progress and Historical Change:

Then in the early seventeenth century, Francis Bacon invented progress.

If we work together — said he — if we observe the world around us, study, share our findings, collaborate, uncover as a human team the secret causes of things hidden in nature, we can base new inventions on our new knowledge which will, in small ways, little by little, make human life just a little easier, just a little better, warm us in winter, shield us in storm, make our crops fail a little less, give us some way to heal the child on his bed.  We can make every generation’s experience on this Earth a little better than our own.  […] Let us found a new method — the Scientific Method — and with it dedicate ourselves to the advancement of knowledge of the secret causes of things, and the expansion of the bounds of human empire to the achievement of all things possible.

There are many caveats: the Great Man theory of history is flawed, Bacon was just a spokesman for a broader community of British proto-scientists, science would have probably emerged anyway at some point, most progress comes from tinkerers and engineers rather than scientists proper. Nonetheless… has anyone ever been more right about anything than Francis Bacon was about the importance of the scientific method?

We now have an existence proof of a successful meta-intervention. Francis Bacon identified the root cause of most 17th century problems (our collective ignorance about the natural world) and worked to address it by developing and evangelizing the scientific method. What is the equivalent today?

Some say: still ignorance about the natural world! We need to double down on Science! If we just understood biology better we could cure all diseases, develop ever more nutritious foods, and find ways to arrest ecosystem collapse and environmental degradation. If we just understood physics better we could unlock new abundant sources of energy, solve poverty and climate change, and travel to the stars.

I don’t buy it. Science has already taught us how to be healthy: eat whole foods, exercise a lot, spend time outside, live in community. Yet we do less and less of all these things. Physics has already given us nuclear power, an incredibly cheap, reliable, safe, low-emission source of energy. We don’t use it because it’s politically unpopular. Our biologists and ecologists already know quite well how to protect ecosystems, as our national parks show. We just don’t care enough to do it.

Maybe the bottleneck is in distributing our scientific understanding more broadly? This is the position of those who see lack of education as the root of all problems. I don’t buy this one either. Our best analyses of education suggest that schools are ineffective at disseminating scientific knowledge. Even the knowledge that does get disseminated isn’t acted on: how many people at this point don’t know that regular exercise is the number one health intervention in the developed world?

All these examples suggest the bottleneck has something to do with people and their interactions. Perhaps we’ve reached diminishing returns on natural science, but human and social sciences are still incredibly valuable. If we could only understand psychology and sociology and economics well enough, we could design interventions that convince people to act in accordance with their best interests, and those of society as a whole.

This feels compelling, but I can’t help but notice we’ve sunk hundreds of billions of dollars and millions of our brightest minds into this research over the last century, and yet… are we actually better at organizing our social life in accordance with human flourishing than the Victorians, the Qing, or the pre-conquest Lakota? In some ways yes, in some ways no. If we ignore the differences in material prosperity (largely due to our better science and engineering, not social science), I’d call it a wash at best. Admittedly, natural science took a long time to pay off: Bacon wrote Novum Organum in 1620 and the Royal Society was founded in 1660; the British economy started growing rapidly only around 1820. Perhaps all this psychology and social science research will pay off in the end. But I’m not holding my breath.

A final meta-intervention I’ll bring up to dismiss is the one exemplified by DeepMind’s founding mission statement: “solve intelligence. use it to solve everything else”. This has been the ideology of Silicon Valley for the last year or so, and seems poised to become the ideology of financial capitalism and the military-industrial complex as a whole. Instead of solving object-level problems, we are funneling all our surplus capital, our best analytical minds, and (soon) all our surplus energy towards increasing the supply of raw silicon-based intelligence in the hopes that it will solve our problems for us. I’ll address this more fully in a future essay, but briefly I have the same reaction to it as Fouché had to the murder of the Duc D’Enghien: “It is worse than a crime; it is a mistake”. Intelligence is merely an accelerant; it will amplify both the best and the worst trends in society; more intelligence helps only if you believe we are on a good trajectory on net, and our main problem is that we’re not getting there fast enough.

Enough criticism: what do I think the real root of all evil is? As you might have guessed from the above, I believe it’s our inability to understand and cooperate with each other at scale. There are different words for the thing we need more of: trust. social fabric. asabiyyah. attunement. love. But all these words are deceptively cozy, suggesting we just need to Retvrn to our tribal and relational instincts and all will be okay. This is a dangerous illusion. Our old social technologies for cooperation did not scale to the complexity and enormity of the modern world, and were replaced by global capital markets and massive state and corporate bureaucracies. We need to find the principles underlying these cozy words, and find a way to make them scale. Much like Bacon and Boyle built a Knowledge Machine that takes in irrational argumentative academics and produces scientific knowledge; much like Hinton and Schmidhuber and Sutskever built a Learning Machine that takes in oceans of data and computation and produces raw intelligence; we need to build a Cooperation Machine that takes in atomized people and raw intelligence and produces mutual understanding and harmonious collective action.

Thanks to Richard Ngo for his inimitable combination of encouragement and skepticism that helped sharpen the ideas in this essay.

14 comments

Comments sorted by top scores.

comment by cousin_it · 2024-06-12T17:17:23.960Z · LW(p) · GW(p)

You point out several problems in the world: people have unhealthy lifestyles, nuclear power isn't used to its full potential, ecosystems are not protected, our social lives are not in accordance with human flourishing. Then you say all these problems could be solved by a "cooperation machine". But you don't seem to explain why these problems could be solved by the same "machine". Maybe they're all separate problems that need to be solved separately.

Maybe one exercise to try is holding off on proposing solutions. Can you discuss the problems in more detail, but without mentioning any solutions? Can you point out commonalities between the problems themselves? For example, "all these problems would be solved by a cooperation machine" wouldn't count, and "they happen because people are bad at cooperating" is too vague. I'm looking for something more like "these problems happen because people get angry", or "because people get deceived by politicians". Does that make sense? Can you give it a try?

comment by weightt an (weightt-an) · 2024-06-09T18:58:03.384Z · LW(p) · GW(p)

Sooo, you need to build some super Shapley values calculator and additionally embed it into our preexisting coordination mechanisms such that people who use it on average do better than people who don't.
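
For concreteness, a minimal sketch of just the calculator half, on a toy coalition game (the player names and payoff function `v` here are illustrative assumptions; embedding this into preexisting coordination mechanisms is the unsolved part):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a small coalition game.

    players: list of hashable player ids.
    v: payoff function mapping a frozenset of players to a number.
    Runtime is exponential in len(players), so toy sizes only.
    """
    n = len(players)
    result = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                # Probability that exactly S precedes i in a random ordering.
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        result[i] = total
    return result

# Toy game: alice and bob can only produce value together; carol adds nothing.
v = lambda S: 10.0 if {"alice", "bob"} <= S else 0.0
print(shapley_values(["alice", "bob", "carol"], v))
# -> alice and bob each get 5.0, carol gets 0.0
```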

Replies from: ivan-vendrov
comment by Ivan Vendrov (ivan-vendrov) · 2024-06-10T13:13:45.897Z · LW(p) · GW(p)

would love to see this idea worked out a little more!

comment by Seth Herd · 2024-06-11T23:10:59.919Z · LW(p) · GW(p)

Magnificent! In particular, your pacing and composition were perfect: when I had a thought, you'd answer it within a sentence to a paragraph.

Unfortunately, improved coordination is going to have to be accomplished extremely quickly to help with superintelligence, which is going to happen very soon now on a historical timeframe. See my other comment here for thoughts on a reputation system that could be used to find better leadership to represent groups.

A similar algorithm balancing priorities for each individual could work. But those with power would have to want to cooperate with those without. Some of them are nice people and do; lots of them aren't and don't. There are solutions for cooperation and we should search for them. There aren't solutions for people simply wanting different, conflicting things, and two of the problems you mention are in that category: arguably, for the most part, people simply don't want to be healthy or prevent climate change as much as they want to have fun now.

I guess you could say they're bad at cooperating with their future selves, so a better coordination mechanism could help that short-term bias, too.

To your point about improved understanding: you imply that AI could be part of that improvement, and I very much agree. Even current LLMs are probably able to do "translation" of cultural viewpoints (e.g., explain to a liberal how someone could ever support anti-abortion laws without simply wanting to control women's bodies and punish sex); see the sketch at the end of this comment. Getting people to use them or exposing them to those arguments would be the hard part. This is sort of back to the problem being that humans don't want to cooperate.

They'd want to cooperate if you could show them that the simple provable fact is that they'll probably get terrible results if they keep on competing. And that might very well be the case with a multipolar scenario with AGIs aligned to different humans' intentions.
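
Returning to the "translation" idea above, a minimal sketch of what it might look like, assuming only a generic `complete(prompt)` text-in, text-out wrapper around whichever LLM is available (the function name and prompt wording are illustrative assumptions, not a tested recipe):

```python
def translate_viewpoint(complete, position, audience):
    """Steelman `position` in terms `audience` can hear, via an LLM.

    `complete` is assumed to be a text-in, text-out wrapper around
    whatever LLM API is available.
    """
    prompt = (
        "Explain, charitably and in good faith, why a thoughtful person "
        f"might hold this position: {position}\n"
        f"Write for an audience of: {audience}. Avoid caricature; focus "
        "on the underlying values and experiences."
    )
    return complete(prompt)

# Usage with any LLM client, e.g.:
# translate_viewpoint(my_llm, "anti-abortion laws", "secular liberals")
```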

comment by Viliam · 2024-06-09T22:58:07.369Z · LW(p) · GW(p)

It seems to me that the next level will be one of the following, or some combination of them:

  • a nicer, more cooperative society
  • prediction markets
  • superhuman artificial intelligence
  • eugenics
  • genetically modified humans
  • cyborgs
  • ems

In some sense, these are all just different strategies for "becoming smarter"; the main difference is between creating individuals that are smarter (AI, mutants, cyborgs), creating more of the smart individuals (eugenics, ems), or improving cooperation between existing smart individuals (niceness, prediction markets).

In the current situation, I see the problem as the following:

  • most people are not smart; most smart people are not sane
  • most people are kinda nice, but the bad actors are somehow over-represented
  • a lot of energy is wasted in zero-sum competitions, and we do not have good mechanisms for cooperation
  • we do not even have good ways to aggregate information about people

The first point is obvious. Maybe not if you live in the Bay Area, but I assume that everywhere else the lack of smart and sane people is visible and painful. I have no idea how to build a nicer society with stupid and insane people. Democracy selects for ideas that appeal to many, i.e. to the stupid and insane majority. Dictatorship selects for people who are not nice. My best guess would be to bet on creating a smart and sane subculture, and hope that it will inspire other people. But outside of the Bay Area we probably don't have enough people to start it, and within the Bay Area most people are on drugs or otherwise crazy.

The second point... I wish there was a short explanation I could point at, but the concept is approximately in the direction of "you are not hiring the top 1%" and "the asshole filter". It's a combination of "less desirable people circulate more (because they have to)" and "naive defenses against bad people are actually more likely to discourage good ones (because they respect your intent to be left alone) and less likely to discourage bad ones (because predators have a motive to get to you)". As a result, our interactions with random people are more likely to be unpleasant than the statistics of the population would suggest.

The fourth point... I often feel like there should be some mechanism for "people reviewing people", where we could get information about other people in ways more efficient and reliable than gossip. Otherwise, trust can't scale. But when I imagine such a review system, there are many obvious problems with no obvious solutions. For starters, people lie. If I say "avoid him, he punched me for no reason", nothing prevents the other person from writing exactly the same thing about me, even if that did not happen. A naive solution might be to expect that people will reciprocate both good and bad reviews, and thus treat all relations as undirected; we cannot know whether X or Y is the bad guy, but we can know that X and Y don't like each other. Then look at the rest of the graph, and if X has conflicts with a dozen different people, and Y only has a conflict with X, then probably X is the bad guy. Except, nope, maybe Y just asked all his friends to falsely accuse X. Also, maybe Y has a lot of money and threatens to sue everyone who gives him a bad rating in the system.
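
That graph heuristic is concrete enough to sketch. A toy version, assuming mutual accusations can be collected as undirected edges (the names and threshold are illustrative), which inherits exactly the failure modes just described, coordinated false accusations included:

```python
from collections import defaultdict

def flag_suspects(conflicts, threshold=3):
    """Toy version of the conflict-graph heuristic described above.

    conflicts: pairs (a, b) meaning "a and b accuse each other",
    treated as undirected edges. Returns everyone whose number of
    distinct conflict partners reaches `threshold`, a crude proxy
    for "probably the bad guy".
    """
    partners = defaultdict(set)
    for a, b in conflicts:
        partners[a].add(b)
        partners[b].add(a)
    return {p for p, rest in partners.items() if len(rest) >= threshold}

# X has conflicts with a dozen different people; Y only with X.
edges = [("X", f"person{i}") for i in range(12)] + [("Y", "X")]
print(flag_suspects(edges))  # -> {'X'}
```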

Replies from: Seth Herd
comment by Seth Herd · 2024-06-11T23:01:26.048Z · LW(p) · GW(p)

I very much agree that

most people are kinda nice, but the bad actors are somehow over-represented

I think that people who want power tend to get it. Power isn't a nice thing to want, and it's kind of not even a sane thing to want. The same sort of thing applies to a platform or a loud voice, another form of overrepresentation.

I agree that one way to better cooperation is a better asshole filter and a better way to share information about people. Reputation systems to date have suffered the shortcomings you mention, but that doesn't mean there aren't solutions.

If these people frequently reciprocate bad reviews, corroborated by the same people, their votes can be discounted to the point of being worthless. Any algorithm can be gamed, but if it's kept private, it might be made very hard to game. And shifting the algorithm retroactively discourages gaming, since any gaming attempt puts your future voting weight at risk.
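
A minimal sketch of that down-weighting rule, assuming reviews arrive as (reviewer, target, score) triples (the data shape and penalty factor are illustrative assumptions; per the comment, the real rule would presumably be kept private):

```python
def vote_weights(reviews, penalty=0.5):
    """Toy sketch of down-weighting reciprocated bad reviews.

    reviews: (reviewer, target, score) triples; negative score = bad
    review. A reviewer's weight is multiplied by `penalty` for each
    person they trade negative reviews with, so mutual-retaliation
    pairs lose influence while one-sided reports keep full weight.
    """
    bad = {(r, t) for r, t, s in reviews if s < 0}
    weights = {r: 1.0 for r, _, _ in reviews}
    for r, t in bad:
        if (t, r) in bad:  # reciprocated negative review
            weights[r] *= penalty
    return weights

# X and Y retaliate against each other; Z's report of X stands alone.
reviews = [("X", "Y", -1), ("Y", "X", -1), ("Z", "X", -1)]
print(vote_weights(reviews))  # -> {'X': 0.5, 'Y': 0.5, 'Z': 1.0}
```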

The stakes are quite high here, but I don't know of any work suggesting that the problem is unsolvable.

Replies from: Viliam
comment by Viliam · 2024-06-12T15:22:36.725Z · LW(p) · GW(p)

It's bad that getting power is positively correlated with wanting power, rather than with being competent, nice, and sane. But that's the natural outcome; there would have to be some force acting in the opposite direction to get a different outcome.

People instinctively hate those who have power, but there are a few problems with the instinct. First, the world is complicated -- if someone magically offered me a position to rule something, I would be quite aware that I am not competent enough to take it. (If the options were either me, or a person who I believe is clearly a bad choice, I would probably accept the deal, but the next step would be desperately trying to find smart and trustworthy advisors; ideally hoping that I would also get a budget to pay them.) Second, the well-known trick for people who want power is to make you believe that they will fight for you, against some dangerous outgroup.

it's kind of not even a sane thing to want.

If the stakes are small, and I am obviously the most competent person in the room, it makes sense to attempt to take the power to make the decisions. But with big things, such as national politics, there are too many people competing, too big a cost to pay, too many unpredictable influences... you either dedicate your entire life to it, or you don't have much of a chance.

I think the reasonable way to get power is to start with smaller stakes and move up gradually. That way, however far you get, you did what you were able to do. Many people ignore this; they focus on the big things they see on TV, where they have approximately zero chance to influence anything, while ignoring less shiny options, such as municipal politics, where there is a greater chance to succeed. But this is a problem for rationalists outside of the Bay Area: if each of us lives in a different city, we do not have much of an opportunity to cooperate on the municipal level.

comment by FlorianH (florian-habermacher) · 2024-06-09T17:29:32.003Z · LW(p) · GW(p)

Interesting read, though I find it hard to see exactly what your main message is. Two points strike me as potentially relevant regarding

what do I think the real root of all evil is? As you might have guessed from the above, I believe it’s our inability to understand and cooperate with each other at scale. There are different words for the thing we need more of: trust. social fabric. asabiyyah. attunement. love.

The first is the more relevant; the second is a term I'd simply consider core to any discussion of the topic:

  1. Even increased "trust. social fabric." is not so clearly a step forward. Let's assume people remain similarly self-interested, similarly egoistic, but are able to cooperate better in limited groups: it is easy to imagine circumstances in which the dominant effects include (i) making it easier for hierarchies in tyrannical dictatorships to cooperate in oppressing their populations, and/or (ii) making it easier for firms to cooperate to create and exploit market power, replacing some reasonably well-working markets with, say, crazily exploitative oligopolies and oligarchies.
  2. Altruism: might one call out the sheer limitation of our degree of altruism[1] toward the wider population as the single most dominant root of the tree of evil? Or, say, lack of altruism, combined with the necessarily imperfect self-interested positive collaboration, given that our world features (i) our limited rationality and (ii) a hugely complex natural and economic environment? Increase our altruism, and most of the billions of bad incentives we're exposed to today become a bit less disastrous...

 

  1. ^

    Along with self-serving bias, i.e. our brain's sneaky way of reducing our actual behavioral/exhibited altruism to levels even below our (already limited) 'heartfelt' degree of altruistic interest, so that we often think we are acting in other people's interests while in reality pursuing our own.

Replies from: ivan-vendrov
comment by Ivan Vendrov (ivan-vendrov) · 2024-06-10T13:25:47.339Z · LW(p) · GW(p)
  1. Agree trust and cooperation are dual use, and I'm not sure how to think about this yet; perhaps the most important form of coordination is the one that prevents (directly or via substitution) harmful forms of coordination from arising.
  2. One reason I wouldn't call lack of altruism the root is that it's not clear how to intervene on it; it's like calling the laws of physics the root of all evil. I prefer to think about "how to reduce transaction costs to self-interested collaboration". I'm also less sure that a society of people with more altruistic motives will necessarily do better... the nice thing about self-interest is that your degree of care is proportional to your degree of knowledge about the situation. A society of extremely altruistic people who are constantly devoting resources to solve what they believe to be other people's problems may actually be less effective at ensuring flourishing.
Replies from: florian-habermacher
comment by FlorianH (florian-habermacher) · 2024-06-11T21:47:08.810Z · LW(p) · GW(p)

I'm neither entirely convinced by nor entirely against the idea of defining 'root cause' essentially as 'where intervention is plausible'. Either way, to me that way of defining it would not have to exclude "altruism" as a candidate: (i) there could be scope to re-engineer ourselves to become more altruistic, and (ii) without doing that, gosh, how infinitely difficult it feels to improve the world truly systematically (as you rightly point out).

That is strongly related to Unfit for the Future: The Need for Moral Enhancement (whose core story is spot on imho, even though I find quite a few of the details in the book substandard).

comment by Vaughn Papenhausen (Ikaxas) · 2024-06-09T15:09:30.934Z · LW(p) · GW(p)

I like and agree with a lot in this essay. But I have to admit I'm confused by your conclusion. You dismiss social science research as probably not going anywhere, but then your positive proposal is basically more social science research. Doesn't building "a Cooperation Machine that takes in atomized people and raw intelligence and produces mutual understanding and harmonious collective action" require being "better at organizing our social life in accordance with human flourishing than the Victorians, the Qing, or the pre-conquest Lakota" in exactly the way you claim social science is trying and failing to produce?

Replies from: ivan-vendrov, Mo Nastri
comment by Ivan Vendrov (ivan-vendrov) · 2024-06-10T13:18:49.953Z · LW(p) · GW(p)

You're right that the conclusion is quite underspecified: how exactly do we build such a cooperation machine?

I don't know yet, but my bet is more on engineering, product design, and infrastructure than on social science. More like building a better Reddit or Uber (or supporting infrastructure layers like the WWW and the Internet) than like writing papers.

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2024-06-10T15:49:17.114Z · LW(p) · GW(p)

Okay, I see better now where you're coming from and how you're thinking that social science could be hopeless and yet we can still build a cooperation machine. I still suspect you'll need some innovations in social science to implement such a machine. Even if we assume that we have a black box machine that does what you say, you still have to be sure that people will use the machine, so you'll need enough understanding of social science to either predict that they will, or somehow get them to.

But even if you solve the problem of implementation, I suspect you'll need innovations in social science in order to even design such a machine. In order to understand what kind of technology or infrastructure would increase trust, asabiyyah, etc., you need to understand people. And maybe you think the understanding we already have of people with our current social science is already enough to tell us what we'd need to build such a machine. But you sounded pretty pessimistic about our current social science. (I'm making no claim one way or the other about our current social science, just trying to draw out tensions between different parts of your piece.)

comment by Mo Putera (Mo Nastri) · 2024-06-09T17:26:57.734Z · LW(p) · GW(p)

That's not the sense I get from skimming his second most recent post, but I don't understand what he's getting at well enough to speak in his place.