post by [deleted]

Comments sorted by top scores.

comment by Logan Zoellner (logan-zoellner) · 2023-11-14T22:26:04.375Z

Since human values are not solely about power acquisition, evolution is continuously pushing the world away from them and towards states that are all about it, and our values fight an uphill battle against that. They are just a coincidence of their time, after all.

Assuming an awful lot of our conclusion today, aren't we?

Replies from: hold_my_fish, ricardo-meneghin-filho
comment by hold_my_fish · 2023-11-15T07:44:05.729Z

The OP and the linked PDF seem to me to express a view of natural selection that is oddly common yet strikes me as dualistic. The idea is that natural selection produces bad outcomes, so we're doomed. But we're already the product of natural selection--if natural selection produced exclusively bad outcomes, we would already be living in one!

Sometimes people attempt to salvage their pessimistic view of natural selection by saying, well, we're not doing what we're supposed to do according to natural selection, and that's why the world isn't dystopian. But that doesn't work either: the whole point of natural selection is that we're operating according to strategies that are successful under conditions of natural selection (because the other ones died out).

So then the next attempt is to say: ah, but our environment is much different now--our behavior is outdated, dating back to a time when being non-evil worked, and being evil is optimal now. This at least gets closer to plausibility (our behavior is indeed outdated in many ways, with eating habits as an obvious example), but it's still strange in quite a few ways:

  • If what's good about the world is due to a leftover natural human tendency to goodness, then how come the world is so much less violent now than it was during our evolutionary history?
  • If the modern world makes evil optimal, how come evil kept taking Ls in the 20th century (WW2 and the Cold War being the biggest examples)?
  • If our outdated behavior is really that far off optimal, how come it has kept our population booming for thousands of years, under conditions quite different from those of our evolutionary history? Even now, fertility crisis notwithstanding, the human population is still growing, and we're among the most successful species ever to exist on Earth.

But despite these factors that make me doubt that we humans have suboptimally inherited an innate tendency to goodness, it's conceivable. What often comes next, though, is a disturbing policy suggestion: encode "human values" in some superintelligent AI that is installed as supreme eternal dictator of the universe. Leaving aside the issue of whether "human values" even makes sense as a concept (since it seems to me that various nasty youknowwhos of history, being undoubtedly Homo sapiens, have as much of a claim to the title as you or I), totalitarianism is bad.

It's not just that totalitarianism is bad to live in, though that's invariably true in the real world. It also seems to be ineffective. It lost in WW2, then in the Cold War. It's been performing badly in North Korea for decades. And it's increasingly dragging down modern China. Totalitarianism is evidently unfavored by natural selection. Granted, if there are no alternatives to compete against, it can persist (as seen in North Korea), so maybe a human-originated singular totalitarianism can persist for a billion years until it gets steamrolled by aliens running a more effective system of social organization.

One final thought: it may be that natural selection actually favors AI that cares more about humans than humans care about each other. Sound preposterous? Consider that there are species (such as Tasmanian devils) that present-day humans care about conserving but where the members of the species don't show much friendliness to each other.

Replies from: ricardo-meneghin-filho
comment by Ricardo Meneghin (ricardo-meneghin-filho) · 2023-11-15T09:21:16.313Z

Natural selection doesn't produce "bad" outcomes; it produces expansionist, power-seeking outcomes--not at the level of the individual, but of the whole system. Along the way it produces many intermediate states that have adjacent, even more power-seeking states which are simply harder to find.

Humans developed several altruistic values because they were what produced the most fitness in the local search natural selection was running at the time: cooperating with individuals from your tribe led to better outcomes than purely selfish behavior.
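To make the "local search" framing concrete, here is a minimal toy sketch in Python (the fitness landscape and all names are invented for illustration, not taken from the post): greedy hill climbing settles on a locally optimal "cooperative" plateau and never reaches a higher-fitness state that single mutations can't path to--which is the sense in which more power-seeking states can exist yet be harder to find.

```python
import random

# Toy, invented fitness landscape: each "organism" is a bit string.
# A "cooperative" plateau at half ones is a local optimum; the all-ones
# state is a far higher, more "power-seeking" optimum, but greedy
# one-bit steps can't reach it from the plateau.

def fitness(state):
    ones = sum(state)
    n = len(state)
    if ones == n:
        return 2.0 * n                 # distant, higher-fitness optimum
    return n - abs(ones - n // 2)      # peak at the cooperative plateau

def hill_climb(state, steps=2000):
    # Greedy local search: accept a random one-bit flip only if fitness improves.
    for _ in range(steps):
        i = random.randrange(len(state))
        neighbor = state.copy()
        neighbor[i] ^= 1
        if fitness(neighbor) > fitness(state):
            state = neighbor
    return state

random.seed(0)
start = [random.randint(0, 1) for _ in range(20)]
end = hill_climb(start)
print(sum(end), fitness(end))  # typically 10 ones: stuck at the local plateau
```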

The modern world doesn't make "evil" optimal. Violence has declined because negative-sum games among similarly capable individuals are an incredible waste of resources, and we are undergoing selection against them at many different levels: throughout history, violent people frequently died in battle or were executed; societies that strongly punished violence prospered more than ones that didn't; and cultures and religions that discouraged harm made the groups that adopted them prosper more.

I'm not sure what in the OP or the linked paper would lead you to the conclusions you've drawn.

The reason we shouldn't expect cooperation from AI is that it is remarkably more powerful than humans, and it may very well achieve better outcomes by paying the tiny cost of fighting humans if it can then turn all of us into more of itself. I'm sure the pigs caged in our factory farms wouldn't agree with your sense that the passage of time favors "goodness".

There is also a huge asymmetry in AIs' capacity for self-modification, expansion, and merging. In fact, I'd expect AIs to be less violent among themselves than humans are, merging into single entities to avoid wasteful negative-sum competition--something that is impossible for humans to do.

One final thought: it may be that natural selection actually favors AI that cares more about humans than humans care about each other. Sound preposterous? Consider that there are species (such as Tasmanian devils) that present-day humans care about conserving but where the members of the species don't show much friendliness to each other.

Regarding this, I don't think it's preposterous at all. It might be that initial cooperation with humans gives a head start to the first AI, "locking in" a cooperative value that it carries forward even once it no longer needs it. But longer term, I don't know what would happen.

comment by Ricardo Meneghin (ricardo-meneghin-filho) · 2023-11-15T09:24:25.945Z

I'm not sure which part of this you think is assuming the conclusion. That our values are not maximally about power acquisition should be clear. That evolution is continuously pushing the world in that direction is what I've tried to explain in the post, though it should be read as "there is a force pushing in that direction" rather than "the resulting force is in that direction".