Are there non-AI projects focused on defeating Moloch globally?

post by eapache (evan-huus) · 2020-09-14T02:13:11.252Z · LW · GW · 7 comments


Meditations on Moloch lays out a rather pessimistic view of the future, and then offers a super-intelligent AI "gardener" as the solution. A lot of the rationalist community is focused on AI, which makes sense in that light (and of course because of the existential risk of unaligned AI), but I don't know of any projects focused on non-AI approaches to countering or defeating Moloch. Some projects exist to counter specific local coordination problems, but apparently none to counter the global "gardening" problem from the original post? Am I missing such a project? Is there a reason that AI is the only plausible solution? Is this low-hanging fruit waiting to be picked?

edited to add some clarifications:

Answers

answer by avturchin · 2020-09-14T14:11:36.514Z · LW(p) · GW(p)

For Marx, capitalism was Moloch, and communism was a solution.

For the Unabomber, the way to stop Moloch was to destroy complex technological society, and with it all complex coordination problems.

comment by Viliam · 2020-09-14T21:09:02.834Z · LW(p) · GW(p)

Let's generalize this a bit, and call these types of solutions:

Singleton solutions -- there will be no coordination problems if everything is ruled by one royal dynasty / one political party / one recursively self-improving artificial intelligence.

Typical problems:

Requires absolute power; not sure if we can get there without sacrificing everything to Moloch during the wars between competing royal dynasties / political systems / artificial intelligences.

Does not answer how the singleton makes decisions internally: royal succession problems / infighting in the political party / interaction between individual modules of the AI.

Fragility of outcome; there is a risk of huge disutility if we happen to get an insane king / a political party with an inhumane ideology / an unfriendly artificial intelligence.

Primitivism solutions -- all problems will be simple if we make our lifestyle simple.

Typical problems:

Avoiding Moloch is an instrumental goal; the terminal goal is to promote human well-being. But in primitive societies people starve, get sick, most of their kids die, etc.

Doesn't work in the long term; even if you reduced the entire planet to the stone age, there would be a competition over who gets out of the stone age first.

In a primitive society, some formerly easy coordination problems may become harder to solve when you don't have the internet or phones.

comment by maximkazhenkov · 2020-09-15T06:44:07.567Z · LW(p) · GW(p)
Singleton solutions -- there will be no coordination problems if everything is ruled by one royal dynasty / one political party / one recursively self-improving artificial intelligence.

Royal dynasties and political parties are not singletons by any stretch of the imagination. Infighting is Moloch. But even if we assumed an immortal benevolent human dictator, a dictator only exercises power through keys to power and still has to constantly fight off competition for it. Stalin didn't start the Great Purge for shits and giggles; rather, it is a pattern that keeps repeating with rulers throughout history.

The hope with artificial superintelligence is that, due to the wide design space of possible AIs, we can perhaps pick one that is sub-agent stable and free of mesa-optimization, and also more powerful than all other agents in the universe combined by a huge margin. If no AI can satisfy these conditions, we are just as doomed.

Primitivism solutions -- all problems will be simple if we make our lifestyle simple.

That's not defeating Moloch, that's surrendering completely and unconditionally to Moloch in its original form of natural selection.

comment by eapache (evan-huus) · 2020-09-15T11:54:17.952Z · LW(p) · GW(p)

An immortal benevolent human dictator isn't a singleton either. Human cells cooperate to make humans because that tends to be their most effective competitive strategy. The cells in an immortal all-powerful human dictator would have a different payoff matrix and would likely start defecting over time.
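
A purely illustrative sketch of that dynamic (all payoff numbers are made up, nothing from the thread): standard replicator dynamics on a two-strategy game where defection pays slightly more than cooperation. A population that starts almost entirely cooperative still drifts to defection over time.

```python
# Payoff to the row strategy against the column strategy (invented numbers).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 4, ("D", "D"): 1,
}

def step(x_c, dt=0.01):
    """One replicator-dynamics step; x_c is the cooperator share."""
    x_d = 1 - x_c
    fit_c = PAYOFF[("C", "C")] * x_c + PAYOFF[("C", "D")] * x_d
    fit_d = PAYOFF[("D", "C")] * x_c + PAYOFF[("D", "D")] * x_d
    fit_avg = x_c * fit_c + x_d * fit_d
    return x_c + dt * x_c * (fit_c - fit_avg)

x_c = 0.99  # start with almost all cooperators
for _ in range(5000):
    x_c = step(x_c)
print(f"cooperator share: {x_c:.3f}")  # heads toward 0
```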

comment by eapache (evan-huus) · 2020-09-14T15:35:21.682Z · LW(p) · GW(p)

These are interesting parallels (maybe? The Unabomber parallel seems odd, but I don't actually know enough about him to critique it properly), but they don't seem to answer my question. If there is an answer being implied, please spell it out more explicitly. Otherwise maybe this belongs as a comment, not an answer?

answer by 4thWayWastrel · 2020-09-14T22:27:07.193Z · LW(p) · GW(p)

RadicalxChange is a movement that grew out of a book called Radical Markets, which proposes mechanism-design changes we could use to fund public goods (which would take a large bite out of the Moloch issue). Can recommend the book and/or the 80,000 Hours episode with Glen Weyl as an intro.
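
For a concrete flavour of the mechanisms involved: quadratic funding, from the "Liberal Radicalism" paper by Buterin, Hitzig and Weyl, matches broad support more generously than concentrated money. A minimal sketch (simplified; real deployments also cap the match to a fixed pool, which this toy version omits):

```python
from math import sqrt

def quadratic_funding(contributions):
    """Total funding a project receives under (uncapped) quadratic funding:
    the square of the sum of the square roots of individual contributions."""
    return sum(sqrt(c) for c in contributions) ** 2

# Many small donors beat one large donor of the same total amount:
print(quadratic_funding([1] * 100))  # 100 donors giving $1 each -> $10,000
print(quadratic_funding([100]))      # 1 donor giving $100       -> $100
```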

Other promising options I've seen but not looked into in as much depth

Generally speaking, one might lump these approaches under "anti-Moloch memetic warfare", which in a way is what Scott was doing: spreading memes that identify Moloch as an issue and proposing different ways of self-organising to the network.

answer by ChristianKl · 2020-09-14T10:48:44.255Z · LW(p) · GW(p)

There's the Game B discourse around creating social norms that defeat Moloch.

comment by eapache (evan-huus) · 2020-09-14T11:52:05.243Z · LW(p) · GW(p)

This answer is interesting, but underspecified for somebody who’s never heard of this. What is Game B? Where is it? Google just returns a bunch of board game links.

edit: Ah, finally got to https://www.gameb.wiki/

comment by ChristianKl · 2020-09-14T14:35:47.190Z · LW(p) · GW(p)

I'm not sure what the best point of entry is. YouTube videos like https://www.youtube.com/watch?v=HL5bcgpprxY do give some explanation.

comment by Viliam · 2020-09-14T21:27:59.382Z · LW(p) · GW(p)

Skimmed the wiki, watched the first 15 minutes of the video, still have no idea whether there is anything specific. So far it seems to me like a group of people who are trying to improve the world by talking to each other about how important it is to improve the world.

You seem to know something about it, could you please post three specific examples? (I mean, examples other than making a video or a web page about how Game B is an important thing.)

comment by ChristianKl · 2020-09-14T22:58:23.053Z · LW(p) · GW(p)

That's a bit like saying: "What are all those AI safety people talking about? Can you please give me three specific examples of how they propose safety mechanisms should work?"

I haven't seen easy answers or a good link for them. At the same time, the project is one of the answers to the question in the OP.

comment by Raj Thimmiah (raj-thimmiah) · 2020-09-15T03:43:08.031Z · LW(p) · GW(p)

I actually have been wondering about the safety mechanism stuff; if anyone wants to give examples of things actually produced in AI alignment, I'd be interested in hearing about them.

7 comments


comment by Donald Hobson (donald-hobson) · 2020-09-14T07:29:49.758Z · LW(p) · GW(p)

There aren't many other plausible technological options for things that could defeat Moloch.

A sufficiently smart and benevolent team of uploaded humans could possibly act as a singleton, in the scenario where one team gets mind uploading first and the hardware is good enough to run uploads really fast.


What I would actually expect in this scenario is a short period of uploads doing AI research followed by a Foom.

But if we suppose that FAI is really difficult, and that the uploads know about this, and about Moloch, then they could largely squash Moloch, at least for a while.

(I am unsure whether or not some subtle Moloch-like process would sneak back in, but at least the blatantly Molochy processes would be gone for a while.)

For example, if each copy of a person has any control over which copy is duplicated when more people are needed, then most of the population will have had life experiences that make them want to get copied a lot.
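
A toy model of that selection pressure (parameters made up, and "wants to be copied" simplified to a fixed score per upload rather than accumulated life experience): if the desire to be copied even weakly influences which copy gets duplicated, the population's average desire ratchets upward.

```python
import random

random.seed(1)
population = [random.random() for _ in range(100)]  # desire scores in [0, 1]
initial_mean = sum(population) / len(population)

for _ in range(1000):  # 1000 duplication events as more people are needed
    # Selection weighted by score: eager copies maneuver into being duplicated.
    parent = random.choices(population, weights=population)[0]
    population.append(parent)

final_mean = sum(population) / len(population)
print(f"mean desire to be copied: {initial_mean:.2f} -> {final_mean:.2f}")
```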

comment by eapache (evan-huus) · 2020-09-14T15:32:18.193Z · LW(p) · GW(p)

There aren't many other plausible technological options for things that could defeat Moloch.

Why? What about non-technological solutions?

comment by Donald Hobson (donald-hobson) · 2020-09-14T20:38:53.428Z · LW(p) · GW(p)

Moloch appears any time multiple agents have similar levels of power and different goals; wherever you have agents of comparable capability with different utility functions, a form of Moloch emerges.

With current tech, it would be very hard to give total power to one human. The power would have to be borrowed, in the sense that their power lies in setting a Nash equilibrium as a Schelling point. "Everyone do X and kill anyone who breaks this rule" is a Nash equilibrium: if everyone else is doing it, you had better do it too. The dictator sets the Schelling point by choice of X. The dictator is forced to quash any rebels or lose power. Another Moloch.
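
A toy best-response check (payoff numbers are invented purely for illustration) of why that rule is a Nash equilibrium, and why it evaporates once enforcement stops:

```python
COST_OF_X = 1     # assumed small cost of complying with the dictator's X
PUNISHMENT = 100  # assumed penalty inflicted on a lone rule-breaker

def payoff(my_action, others_enforce=True):
    """Payoff to a single agent, given whether everyone else upholds the rule."""
    if my_action == "conform":
        return -COST_OF_X
    return -PUNISHMENT if others_enforce else 0

# If everyone else conforms and punishes, conforming is the best response:
assert payoff("conform") > payoff("deviate")
# If enforcement collapses, deviating becomes the better response, which is
# why the dictator must quash rebels or lose power:
assert payoff("deviate", others_enforce=False) > payoff("conform", others_enforce=False)
print("both checks pass")
```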


Given that we have limited control over the preferences of new humans, there are likely to be some differences in utility functions between humans. Humans can die, go mad, etc. You need to be able to transfer power to a new human without any adverse selection pressure in the choice of successor.


One face of Moloch is evolution. To stop it, you need to keep resetting the gene pool with fresh DNA from long-term storage; otherwise, over time, the population genome might drift in a direction you don't like.
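
A minimal sketch of that drift, using standard Wright-Fisher sampling (the population size, horizon, and reset interval are made up): without intervention an allele's frequency wanders until it fixes at 0 or 1, while periodically restocking from stored DNA keeps it near its original value.

```python
import random

def simulate(genes=500, generations=2000, reset_every=None, p0=0.5):
    """Final frequency of one allele under pure drift, with optional resets."""
    p = p0
    for g in range(generations):
        if reset_every and g % reset_every == 0:
            p = p0  # reset the gene pool from long-term storage
        # Each gene in the next generation is drawn from the current pool.
        p = sum(random.random() < p for _ in range(genes)) / genes
    return p

random.seed(0)
print("no resets:  ", simulate())                # usually drifts toward 0 or 1
print("with resets:", simulate(reset_every=50))  # stays near the original 0.5
```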

We might be able to keep Moloch at a reasonably low damage level, just a sliver of Moloch making things not quite as nice as they could be, at least if people know about Moloch and go out of their way to destroy it.

comment by maximkazhenkov · 2020-09-15T07:22:53.684Z · LW(p) · GW(p)
  • Maybe one-shot Prisoner's Dilemma is rare [LW · GW] and Moloch doesn't turn out to be a big issue after all (see the sketch after this list)
  • On the other hand, perhaps the FAI solution is just sweeping all the hard problems under the AI-alignment rug and isn't any more viable than engineering a global social system that is stable over millions of years (possibly using human genetic engineering [LW · GW])
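
On the first point, a standard textbook illustration (the usual payoffs T > R > P > S; nothing here is from the thread) of why it matters whether a Prisoner's Dilemma is one-shot or repeated:

```python
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

# One-shot: defecting is the better reply to either move the opponent makes.
assert T > R and P > S

delta = 0.9  # probability the interaction continues for another round

# Against a grim-trigger partner (cooperates until you defect, then punishes
# forever), compare the two discounted payoff streams:
cooperate_forever = R / (1 - delta)        # R + R*delta + R*delta^2 + ...
defect_once = T + delta * P / (1 - delta)  # T now, then P in every later round
print(cooperate_forever > defect_once)     # True here: 30.0 > 14.0
```
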
comment by Donald Hobson (donald-hobson) · 2020-09-15T09:44:34.971Z · LW(p) · GW(p)

If we assume that super-intelligent AI is a thing, you have to engineer a global social system that's stable over millions of years and where no one makes ASI in that time.

comment by maximkazhenkov · 2020-09-15T10:35:28.656Z · LW(p) · GW(p)

Well, this requirement doesn't appear to be particularly stringent compared to the ability to suppress overpopulation and other dysgenic pressures, which any such global social system would need. It would have to be totalitarian anyway (though not necessarily centralized).

It is also a useful question to ask whether there are alternative existential opportunities if super-intelligent AI doesn't turn out to be a thing. For me that's the most intriguing aspect of the FAI problem; there are plenty of existential risks to go around but FAI as an existential opportunity is unique.

comment by Uriel Fiori (uriel-fiori) · 2020-09-14T23:27:20.043Z · LW(p) · GW(p)

Really can't help, because I happen to think Moloch isn't only inevitable but positively good (and not just better than the alternatives, but actually "best possible world" type of good).