Moloch Hasn’t Won
post by Zvi · 2019-12-28T16:30:00.947Z · LW · GW · 40 comments
This post begins the Immoral Mazes sequence. See introduction for an overview of the plan. Before we get to the mazes, we need some background first.
Meditations on Moloch
Consider Scott Alexander’s Meditations on Moloch. I will summarize here.
Therein lie fourteen scenarios where participants can be caught in bad equilibria.
- In an iterated prisoner’s dilemma, two players keep playing defect (a minimal sketch of this dynamic follows the list).
- In a dollar auction, participants massively overpay.
- A group of fishermen fail to coordinate on using filters that efficiently benefit the group, because they can’t punish those who profit by not using the filters.
- Rats are caught in a permanent Malthusian trap where only those who do nothing but compete and consume survive. All others are outcompeted.
- Capitalists serve a perfectly competitive market, and cannot pay a living wage.
- The tying of all good schools to ownership of land causes families to work two jobs whose incomes are then captured by the owners of land.
- Farmers outcompeted foragers despite this perhaps making everyone’s life worse for the first few thousand years.
- Si Vis Pacem, Para Bellum: If you want peace, prepare for war. So we do.
- Cancer cells focus on replication, multiply and kill off the host.
- Local governments compete to become more competitive and offer bigger bribes of money and easy regulation in order to lure businesses.
- Our education system is a giant signaling competition for prestige.
- Science doesn’t follow proper statistical and other research procedures, resulting in findings that mostly aren’t real.
- Governments hand out massive corporate welfare.
- Have you seen Congress?
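To make the first of these concrete, here is a minimal sketch (my illustration, not Scott's, using standard textbook payoffs) of how two myopic players stay stuck in mutual defection: since defection dominates against any fixed opposing move, each round of best-responding reproduces the trap.

```python
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def myopic_best_response(their_last_move: str) -> str:
    # Defect strictly dominates: 5 > 3 against C, 1 > 0 against D.
    return max("CD", key=lambda my: PAYOFFS[(my, their_last_move)])

a, b = "D", "D"  # a single early defection is enough to seed the trap
for _ in range(10):
    a, b = myopic_best_response(b), myopic_best_response(a)

print(a, b)  # D D -- the bad equilibrium persists round after round
```

Both players would prefer the (C, C) payoff of 3 each, but neither can get there unilaterally; that is the shape shared by most of the scenarios above.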
Scott differentiates the first ten scenarios, where he says that perfect competition* wipes out all value, from the last four, where imperfect competition only wipes out most of the potential value.
He offers four potential ways out, which I believe to be an incomplete list:
- Excess resources allow a temporary respite. We live in the dream time.
- Physical limitations where the horrible thing isn’t actually efficient. He gives the example of slavery, where treating slaves relatively well is the best way to get them to produce. Treating them horribly, as in the antebellum South, is so much worse that it needs to be enforced via government coordination or it will die out.
- The things being maximized for in competitions are often nice things we care about, so at least we get the nice things.
- We can coordinate. This may or may not involve government or coercion.
Scott differentiates this fourth, ‘good’ reason from the previous three ‘bad’ reasons, claiming that coordination might be a long-term solution, but that we can’t expect the ‘bad’ reasons to work if optimization power and technology get sufficiently advanced.
The forces of the stronger competitors, who sacrifice more of what they value to become powerful and to be fruitful and multiply, eventually win out. We might be in the dream time now, but with time we’ll reach a steady state with static technology, where we’ve consumed all the surplus resources. All differentiation standing in the way of perfect competition will fade away. Horrible things will be the most efficient.
The optimizing things will keep getting better at optimizing, thus wiping out all value. When we optimize for X but are indifferent to Y [LW · GW], we by default actively optimize against Y, for all Y that would make any claims to resources. Any Y we value is making a claim to resources. See The Hidden Complexity of Wishes [LW · GW]. We only don’t optimize against Y if either we compensate by intentionally also optimizing for Y, or if X and Y have a relationship (causal, correlational or otherwise) where we happen to not want to optimize against Y, and we figure this out rather than fall victim to Goodhart’s Law.
The greater the optimization power we put behind X, the more pressure we put upon Y. Eventually, under sufficient pressure, any given Y is likely doomed. Since Value is Fragile [LW · GW], some necessary Y is eventually sacrificed, and all value gets destroyed.
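A toy illustration of that default pressure (my own sketch, with made-up numbers, assuming X and Y draw on one shared resource budget):

```python
BUDGET = 100.0  # one shared pool of resources that both X and Y draw on

def x_value(x_share: float) -> float:
    return x_share  # X improves with every unit it captures

def y_value(y_share: float) -> float:
    return y_share  # Y needs resources too, but nothing is optimizing for it

# An optimizer that cares only about X considers every split and takes
# whichever is best for X, without ever referring to Y.
candidate_splits = [BUDGET * i / 10 for i in range(11)]
best_for_x = max(candidate_splits, key=x_value)

print(best_for_x)                    # 100.0 -- all resources flow to X
print(y_value(BUDGET - best_for_x))  # 0.0   -- Y is optimized against by default
```

Nothing in the optimizer mentions Y at all; Y loses anyway, which is the point.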
Every simple optimization target yet suggested would, if fully implemented, destroy all value in the universe.
Submitting to this process means getting wiped out by these pressures.
Gotcha! You die anyway.
Even containing them locally won’t work, because that locality will be part of the country, or the Earth, or the universe, and eventually wipe out our little corner.
Gotcha! You die anyway.
Which is why the only ‘good’ solution, in the end, is coordination, whether consensual or otherwise. We must coordinate to kill these ancient forces who rule the universe and lay waste to all of value, before they kill us first. Then replace them with something better.
Great project! We should keep working on that.
That’s Not How This Works, That’s Not How Any of This Works
It’s easy to forget that the world we live in does not work this way. Thus, this whole line of thought can result in quite gloomy assessments of how the world inevitably always has and will work, such as this from Scott in Meditations on Moloch:
Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don’t know about the pesticide, and the government hasn’t caught up to regulating it yet. Now there’s a tiny uncoupling between “selling to Americans” and “satisfying Americans’ values”, and so of course Americans’ values get thrown under the bus.
Or this from Raymond, taken from a comment to a much later, distinct post, where ‘werewolf’ in context means ‘someone trying to destroy rather than create clarity as the core of their strategy’:
If you’re a king with 5 districts, and you have 20 competent managers who trust each other… one thing you can do is assign 4 competent managers to each fortress, to ensure the fortress has redundancy and resilience and to handle all of its business without any backstabbing or relying on inflexible bureaucracies. But another thing you can do is send 10 (or 15!) of the managers to conquer and reign over *another* 5 (or 15!) districts.
…
This is bad if you’re one of the millions of people who live in the kingdom, who have to contend with werewolves.
It’s an acceptable price to pay if you’re actually the king. Because if you didn’t pay the price, you’d be outcompeted by an empire who did. And meanwhile it doesn’t actually really affect your plans that much.
The key instinct is that any price that can be paid to be stronger or more competitive must be paid, therefore despair: if you didn’t pay the price, you’d be out-competed by someone who did. People who despair this way are often intuitively modeling things as effectively perfect competition, at least over time, which causes them to think that everything must by default become terrible, likely right away.
So many people increasingly bemoan how horrible anything and everything in the world is, and how we are all doomed.
When predictions of actual physical doom are made, as they increasingly are, often the response is to think things are so bad as to wish for the sweet release of death.
Moloch’s Army: An As-Yet Unjustified But Important Note
Others quietly, or increasingly loudly and explicitly to those who are listening, embrace Moloch.
They tell us that the good is to sacrifice everything of value, and pass moral judgments on that basis. To take morality and flip its sign. Caring about things of value becomes sin, indifference becomes virtue. They support others who support the favoring of Moloch, elevating them to power, and punish anyone who supports anything else.
They form Moloch’s Army and are the usual way Moloch locally wins, where Moloch locally wins. The real reason people give up slack and everything of value is not that it is ever so slightly more efficient to do so, because it almost always isn’t. It is so that others can notice they have given up slack and everything of value.
I am not claiming the right to assert this yet. Doing so needs not only a citation but an entire post or sequence that is yet unwritten. It’s hard to get right. Please don’t object that I haven’t justified it! But I find it important to say this here, explicitly, out loud, before we continue.
I also note that I explicitly support the implied norm of ‘make necessary assertions that you can’t explicitly justify if they seem important, and mark that you are doing this, then go back and justify them later when you know how to do so, or change your mind.’ It also led to this post, which led to many of what I think are my best other posts.
Meditations on Elua
The most vital and important part of Meditations on Moloch is hope. That we are winning. Yes, there are abominations and super-powerful forces out there looking to eat us and destroy everything of value, and yet we still have lots of stuff that has value.
Even before we escaped the Malthusian trap and entered the dream time, we still had lots of stuff that had value.
Quoting Scott Alexander:
Somewhere in this darkness is another god. He has also had many names. In the Kushiel books, his name was Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans.
The other gods sit on their dark thrones and think “Ha ha, a god who doesn’t even control any hell-monsters or command his worshippers to become killing machines. What a weakling! This is going to be so easy!”
But somehow Elua is still here. No one knows exactly how. And the gods who oppose Him tend to find Themselves meeting with a surprising number of unfortunate accidents.
Moloch gets the entire meditation. Elua, who has been soundly kicking Moloch’s ass for all of human existence, gets the above quote and little else.
Going one by one:
Kingdoms don’t reliably expand to their breaking points.
Poisons don’t keep making their way into the coffee.
Iterated prisoner’s dilemmas often succeed.
Dollar auctions are not all over the internet.
Most communities do get most people to pitch in.
People caught in most Malthusian traps still usually have non-work lives.
Capitalists don’t pay the minimum wage all that frequently.
Many families spend perfectly reasonable amounts on housing.
Foragers never fully died out, and farming worked out in the end.
Most military budgets seem fixed at reasonable percentages of the economy, to the extent that for a long time the United States has been mad at allies like Europe and Japan for not spending enough.
Most people die of something other than cancer, and almost all cells aren’t cancerous.
Local governments enact rules and regulations that aren’t business friendly all the time.
Occasionally, someone in the educational system learns something.
Science has severe problems, but scientists are cooperating to challenge poor statistical methods, resulting in the replication crisis and improving statistical standards.
Governments are corrupt and hand out corporate welfare, but mostly are only finitely corrupt and hand out relatively small amounts of corporate welfare. States that expropriate the bulk of available wealth are rare.
If someone has consistently good luck, it ain’t luck.
(Yes, I have seen Congress. Can’t win them all. But I’ve also seen, feared and imagined much worse Congresses. For now, your life, liberty and property are mostly safe while they are in session.)
(And yes, the education exception is somewhat of a cop-out, but also things could be so much worse there on almost every axis.)
The world is filled with people whose lives have value and include nice things. Each day we look Moloch in the face, know exactly what the local personal incentives are, see the ancient doom looming over all of us, and say what we say to the God of Death: Not today.
Saying ‘not today’ won’t cut it against an AGI or other super strong optimization process. Gotcha. You die anyway. But people speak and often act as if the ancient ones have already been released, and the end times are happening now.
They haven’t, and they aren’t.
So in the context of shorter term problems that don’t involve such things, rather than bemoan how eventually Moloch will eat us all and how everything is terrible when actually many things are insanely great, perhaps we should ask a different question.
How is Elua pulling off all these unfortunate accidents?
*As a technical reminder we will expand upon in part two, perfect competition is a market with large numbers of buyers and sellers, homogeneity of the product, free entry and exit of firms, perfect market knowledge, one market price, perfect mobility of goods and factors of production with zero transportation costs, and no restrictions on trade. This forces the price to become equal to the marginal cost of production.
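A minimal sketch of that last step (my own, with illustrative numbers; it assumes constant marginal cost and profit-driven entry): as long as price exceeds marginal cost, entry is profitable, and each entrant bids the price down until economic profit is zero.

```python
MARGINAL_COST = 2.0  # assumed constant and identical for every (homogeneous) firm
price = 10.0         # starting price while competitors are scarce
firms = 2

# With free entry and perfect knowledge, any price above marginal cost
# attracts a new entrant who undercuts the incumbents.
while price - MARGINAL_COST > 0.01:
    firms += 1
    price = MARGINAL_COST + (price - MARGINAL_COST) / 2  # each entrant halves the markup

print(firms, round(price, 2))  # many firms, price ~= marginal cost, zero economic profit
```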
40 comments
Comments sorted by top scores.
comment by Dagon · 2019-12-28T19:07:29.303Z · LW(p) · GW(p)
There's a lot to be said for the dream-time argument. Inefficiency gives room for slack, and individual misalignment with group averages is inefficient. There are fewer people than the planet could support (in the near-term; hard to know what will happen in the longer term), easing the competitive pressure.
Limiting the number of children that industrial people have lets them maintain the wealth concentrations that make their lives pleasant.
Replies from: agai↑ comment by agai · 2019-12-29T06:06:12.541Z · LW(p) · GW(p)
Yeah, so, this is a complex issue. It is actually true IMO that we want fewer people in the world so that we can focus on giving them better lives and more meaningful lives. Unfortunately this would mean that people have to die, but yeah... I also think that cryonics doesn't really make it much easier/harder to revive people; I would say either way you pretty much have to do the work of re-raising them by giving them the same experiences...
Although now that I think about it, there was a problem about that recently where I thought of a way to just "skip to the end state" given a finite length and initial state; the problem is we'd need to be able to simulate the entire world up to the end of the person's life. So I guess that's why I don't think cryonics is too important except for research purposes, and for motivating people to put their efforts into power efficiency, insulation, computation, materials technology, etc. So it is useful in that sense, probably more than just burying people, but in the sense of "making it easier to bring them back alive," not really. Also, having fewer people sort of means it's more likely we can have more than a few seconds where no one dies, which would be nice for later.
In terms of numbers, by "fewer" I'm thinking like 3-6 billion still, and maybe population will still keep increasing and our job will just be harder, which is annoying, but yeah. I would say "don't have kids if you don't think the world is actually getting better" is a good idea, particularly if you want to make it easier for later people to potentially bring back the people you care about that are already dead.
Life *extension* and recovery etc on the other hand is a *much, much easier* problem. I'm super interested in the technical aspects of this right now although the things I think will probably be substantially different from many people.
Basically in summary I agree with your post. :)
comment by fiddler · 2020-12-28T06:04:34.165Z · LW(p) · GW(p)
This review is more broadly of the first several posts of the sequence, and discusses the entire sequence.
Epistemic Status: The thesis of this review feels highly unoriginal, but I can't find where anyone else discusses it. I'm also very worried about proving too much. At minimum, I think this is an interesting exploration of some abstract ideas. Considering posting as a top-level post. I DO NOT ENDORSE THE POSITION IMPLIED BY THIS REVIEW (that leaving immoral mazes is bad), AND AM FAIRLY SURE I'M INCORRECT.
The rough thesis of "Meditations on Moloch" is that unregulated perfect competition will inevitably maximize for success-survival, eventually destroying all value in service of this greater goal. Zvi (correctly) points out that this does not happen in the real world, suggesting that something is at least partially incorrect about the above model, and/or the applicability thereof. Zvi then suggests that a two-pronged reason can explain this: 1. most competition is imperfect, and 2. most of the actual cases in which we see an excess of Moloch occur when there are strong social or signaling pressures to give up slack.
In this essay, I posit an alternative explanation as to how an environment with high levels of perfect competition can prevent the destruction of all value, and further, why the immoral mazes discussed later on in this sequence are an example of highly imperfect competition that causes the Molochian nature thereof.
First, a brief digression on perfect competition: perfect competition assumes perfectly rational agents. Because all strategies discussed are continuous-time, the decisions made in any individual moment are relatively unimportant assuming that strategies do not change wildly from moment to moment, meaning that the majority of these situations can be modeled as perfect-information situations.
Second, the majority of value-destroying optimization issues in a perfect-competition environment can be presented as prisoner's dilemmas: both agents get less value if all agents defect, but defection is always preferable to non-defection regardless of the strategy pursued by other agents.
Now, let's imagine our "rational" agents operate under simplified and informal timeless decision theory: they take a 100%-predictable opponent's strategy into account, and update their models of games based on these strategies (i.e. a Prisoner's Dilemma defect/cooperate round with two of our Econs has a payout of -1+0*n, 5+0*n).
(The following two paragraphs are not novel; they are a summary of the thought experiment that provides a motive for TDT.) Econs, then, can pursue a new class of strategies: by behaving "rationally," and having near-perfect information on opposing strategies because other agents are also behaving "rationally," a second Nash equilibrium arises: cooperate-cooperate. The most abstract example is the two perpetually betrayed libertarian jailbirds: in this case, from the outset of the "market," both know the other's strategy. This creates a second Nash equilibrium: any change in P1's strategy will be punished with a change in P2's strategy next round with extremely high certainty. P1 and P2 then have a strong incentive to not defect, because defecting results in lots of rounds of lost profit. (Note that because this IPD doesn't have a known end point, CDT does not mandate constant defection.) In a meta-IPD game, then, competitive pressures push out defector agents, who get stuck in defect-defect with our Econs.
Fixed-number-of-players games are somewhat more complex, but fundamentally have the same scenario of any defection being punished with system-wide defection, meaning defection in highly competitive scenarios with perfectly rational agents will result in system-wide defection, a significant net negative to the potential defector. The filters stay on in a perfect market operating under TDT.
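A rough formalization of that claim (my sketch, using grim trigger as a stand-in for a fully predictable "Econ" strategy, with standard textbook payoffs):

```python
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_b), strategy_b(history_a)
        history_a.append(a)
        history_b.append(b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
    return score_a, score_b

def grim(opponent_history):
    # Fully predictable: cooperate until the first observed defection, then always defect.
    return "D" if "D" in opponent_history else "C"

def always_defect(opponent_history):
    return "D"

print(play(grim, grim))           # (300, 300): cooperate-cooperate is stable
print(play(always_defect, grim))  # (104, 99): one exploitative round, then mutual defection
```

Against a perfectly predictable retaliator, one round of exploitation buys ninety-nine rounds of mutual defection, so the defector scores far below the cooperating pairs and gets pushed out of the meta-game.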
Then, an informal form of TDT (ITDT; to be clear, I'm distinguishing between TDT and ITDT only to avoid claiming that people actually abide by a formal DT) can explain why all value is not destroyed in the majority of systems, even assuming a perfectly rational set of agents. Individually, this is interesting but not novel or particularly broad: the vast majority of the real-world examples discussed in this sequence are markets, so it's hard to evaluate the truth of this claim without discussing markets.
Market-based games are significantly more complex: because free entry and exit are elements of perfect competition, theoretically, an undercutter agent could exploit the vulnerabilities in this system by pursuing the traditional strategy, which may appear to require value collapses as agents shore up against this external threat by moving to equilibrium pricing. Let's look at the example of the extremely rational coffee sellers, who have found a way to reduce their costs (and thus juice their profits and allow them to lower prices, increasing market share) by poisoning their coffee. At the present moment, CoffeeA and CoffeeB both control 50% of the coffee industry, and are entirely homogeneous. Under the above simple model, assuming rational ITDT agents, neither agent will defect by poisoning coffee, because there's no incentive to destroy their own profit if the other agent will merely start poisoning as well.

However, an exploiter agent could begin coffee-poisoning, and (again assuming perfect competition) surpass both CoffeeA and CoffeeB, driving prices to equilibrium. However, there's actually no incentive, again assuming a near-continuous-time game, for CoffeeA and CoffeeB to defect before this actually happens. In truly perfect competition, this is irrelevant, because an agent arises "instantly" to do so, but in the real world, this is relaxed. However, it's actually still not necessary to defect even with infinite agents: if the defection is to a 0-producer-surplus price, the presence of additional agents is irrelevant because market share holds no value, so defection before additional agents arrive is still marginally negative. If the defection is to a price that preserves producer surplus, pre-defecting from the initial equilibrium price E1 to E2 only incentivizes the stable equilibrium to be at a lower price, as the new agent is forced to enter at a sub-E2 price, meaning the final equilibrium is effectively capped, with no benefits. Note that this now means that exploiter agents are incentivized to enter at the original equilibrium price, because they "know" any other price will trigger a market collapse to that exact price, so E1 maximizes profit.
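The zero-surplus branch of that argument reduces to a toy calculation (my construction; prices and timing are made up): cutting to the zero-producer-surplus price before the exploiter actually arrives only forfeits profit, since entry zeroes it out either way.

```python
PROFIT_AT_E1 = 10.0       # per-period incumbent profit at the original price E1
PERIODS_UNTIL_ENTRY = 5   # periods before an exploiter actually arrives

def total_profit(defect_at: int) -> float:
    # Cutting to the zero-producer-surplus price ends profit immediately;
    # entry ends it anyway, so profit accrues only until whichever comes first.
    return PROFIT_AT_E1 * min(defect_at, PERIODS_UNTIL_ENTRY)

print(total_profit(defect_at=0))                    # 0.0  -- pre-defecting forfeits everything
print(total_profit(defect_at=PERIODS_UNTIL_ENTRY))  # 50.0 -- waiting keeps profit until entry
```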
This suggests that far from perfect competition destroying value, perfect competition may preserve value with the correct choice of "rational agents." However, any non-rational agents, or agents optimizing for different values, immediately destroy this peaceful cooperation! As EY notes here [LW · GW], TDT conceals a regress when other agents' strategies are not predictable, which means that markets that substantially diverge from perfect competition, with non-perfect agents and/or non-perfect competition, are not subject to our nice toy model.
Under this model, the reason why the airline industry is so miserable is that bailouts are accepted as common practice. This means that agents can undercut other agents to take on excess risk safely, effectively removing the 0-producer-surplus price (because agents can disguise costs by shuffling them to risk), and making strategy unpredictable and not subject to our cooperative equilibria.
Let's look at the middle-manager example brought up in the later part of the article. Any given middle manager, assuming all middle managers were playing fully optimized strategies, would not have a strong incentive to (WLOG) increase their hours at the office. However, the real world does not behave like this: as Zvi notes, some peers shift from "successful" to "competent," and despite the assertion that middle-management is an all-or-nothing game, I suspect that middle management is not totally homogenous in terms of willingness to erode ever-more valuable other time. This means that there are massive incentives to increase time at the office, in the hopes that peers are not willing to. The other dynamics noted by Zvi are all related to lack of equilibria, not the cause thereof.
This is a (very) longwinded way of saying that I do not think Zvi's model is the only, the most complete, or the simplest way to model the dynamics of preserved value in the face of Moloch. I find several elements of the ITDT explanation quite appealing: it explains why humans often find traditional Econs so repulsive, as many of the least intuitive elements of traditional "rationality" are resolved by TDT. Additionally, I dislike the vague modeling of the world as fine because it doesn't satisfy easy-to-find price information intuitively: I don't find the effect strong enough to substantially preserve value.

In the farmers market scenario specifically, I think the discrepancy between it being a relatively perfect competitive environment and having a ton of issues with competitiveness was glossed over too quickly; this type of disagreement seems to me as though it has the potential to have significant revelatory power. I think ITDT better explains the phenomena therein: farmers' markets aren't nearly as cutthroat as financial markets in using tools developed under decision theory that fails the prisoner's dilemma, meaning that their prisoner's dilemmas are more likely to follow ITDT-type strategies. If desired, or if others think it would offer clarity, I'd like to see either myself or someone else go through all of the scenarios discussed here under the above lens: I will do this if there is interest and the idea of this post doesn't have obvious flaws.
However, I strongly support curation of this post: I think it poses a fascinating problem, and a useful framing thereof.
tl;dr: the world could also operate under informal TDT; this has fairly strong explanatory power for observed Moloch/Slack differentials, and this explanation has several advantages.
Replies from: Yoav Ravid, supposedlyfun↑ comment by Yoav Ravid · 2020-12-28T06:36:22.408Z · LW(p) · GW(p)
i found your epistemic status confusing. it reads like it's about zvi's post, but i assume it's supposed to be about your review. (perhaps because you referenced your review as a post/article)
Replies from: fiddler↑ comment by fiddler · 2020-12-28T06:59:17.395Z · LW(p) · GW(p)
Oops, you're correct.
Replies from: Yoav Ravid↑ comment by Yoav Ravid · 2020-12-28T07:11:01.783Z · LW(p) · GW(p)
Nice, much clearer now :)
↑ comment by supposedlyfun · 2020-12-29T01:22:49.842Z · LW(p) · GW(p)
I would be very interested in your proposed follow-up but don't have enough game theory to say whether the idea has obvious flaws.
comment by Mati_Roy (MathieuRoy) · 2020-12-09T19:08:03.974Z · LW(p) · GW(p)
Replies from: adam-selker↑ comment by Adam Selker (adam-selker) · 2022-07-28T20:47:33.042Z · LW(p) · GW(p)
What was this image?
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2022-08-03T15:02:27.199Z · LW(p) · GW(p)
damn, i'm not sure; maybe it was my Twitter cover picture: https://twitter.com/matiroy9/ (this is content i modified from someone else)
comment by abramdemski · 2020-12-06T18:31:50.484Z · LW(p) · GW(p)
This is an important clarification/modification to a very important community concept, and as such, is deserving of further canonization in the LW review process.
Asking the question "why are things so bad?" (or, as it was put in Meditations on Moloch, "what does it?") leads to a lot of clarification useful for fighting the problems in the world. Yet, seeing all those mechanisms in detail can also lead to a lot of hopelessness.
Asking the mirror question, "why are things so good?" is also very helpful, particularly once one has traveled far down the road of understanding why things are so bad.
comment by romeostevensit · 2019-12-28T18:55:43.530Z · LW(p) · GW(p)
I have objections to most of your list of Elua wins, and that's despite me being an optimist. For now I'll just say that defensive tech outrunning offensive tech allows for capital formation.
Replies from: agai
comment by Hazard · 2019-12-28T18:32:25.441Z · LW(p) · GW(p)
I'm quite interested in the rest of this. Though I did find the idea of Moloch useful for responding to the most naive forms of "If we all did X everything would be perfect", I also have a vague feeling that rationalists' belief in Moloch being all-powerful prevents them from achieving totally achievable levels of group success.
Replies from: agai
comment by Davidmanheim · 2020-12-02T12:46:27.926Z · LW(p) · GW(p)
I have repeatedly thought back to and referenced this series of posts, which improved my mental models for how people engage within corporations.
comment by Raemon · 2020-01-21T21:31:00.094Z · LW(p) · GW(p)
> If you’re a king with 5 districts, and you have 20 competent managers who trust each other… one thing you can do is assign 4 competent managers to each fortress, to ensure the fortress has redundancy and resilience and to handle all of its business without any backstabbing or relying on inflexible bureaucracies. But another thing you can do is send 10 (or 15!) of the managers to conquer and reign over *another* 5 (or 15!) districts.
> This is bad if you’re one of the millions of people who live in the kingdom, who have to contend with werewolves.
> It’s an acceptable price to pay if you’re actually the king. Because if you didn’t pay the price, you’d be outcompeted by an empire who did. And meanwhile it doesn’t actually really affect your plans that much.
> ...The key instinct is that any price that can be paid to be stronger or more competitive, must be paid, therefore despair: If you didn’t pay the price, you’d be out-competed by someone who did. People who despair this way often intuitively are modeling things as effectively perfect competition at least over time, which causes them to think that everything must by default become terrible, likely right away.
> [...]
> Kingdoms don’t reliably expand to their breaking points.
Anthropics vs Goals
I didn't get around to replying to this until today, but this wasn't my main point and I think it's pretty important.
The issue isn't whether you'll fail to achieve your goals if you don't expand. The issue is "from an anthropic reasoning perspective, what sort of world will most people live in?"
I have shifted some of my thinking around "you'll be outcompeted and therefore it's in your interest to expand". I think I agree with "it's generally not worth trying to be the winner-take-all winner, because a) you need to sacrifice all the things you cared about anyway, b) even if you do, you're not actually likely to win anyway."
But that was only half the question – if you're looking around the world, trying to build a model of what's going on, I think the causal explanation is that "organizations that expand end up making up most of the world, so they'll account for most of your observations."
The reason this seems important is, like, I see you and Benquo looking in horror at the world. And... it is a useful takeaway that "hmm, I guess I don't need to expand in order to compete with the biggest empires in order to be happy/successful/productive, I can just focus on making a good business that delivers value and doesn't compromise its integrity." (And having that crystallized has been helpful to my own developing worldview.)
Nonetheless... the world will continue to be a place you recoil in horror from until somehow, someone creates something that either stops mazes, or outcompetes them, or something.
Breaking Points vs Realistic Tradeoffs
I also disagree with the characterization "kingdoms don't expand to their breaking points."
The original concept here was "why do people have a hard time detecting obfuscators and sociopaths?". A realistic example (to be clear, I don't know much about medieval kingdoms) is a corporation that ends up creating multiple departments (i.e. hiring a legal team), or expanding to new locations.
This doesn't mean you expand to your breaking point – any longterm organization has to contend with shocks and robustness. The organizations I expect to be most successful will expand carefully, not overextending. But if you're asking the question "why are there obfuscators everywhere?", I think the answer is that the relative profitability of extinguishing obfuscators, vs. not worrying as much about it, points toward the latter.
This is, in part, because extinguishing obfuscation or other mazelike patterns is a rare, high-skill job that, like, even small organizations don't usually have the capacity to deal with. I think if you can make it much cheaper, it's probably possible to shift the global pattern. But I think the status quo is that the profit-maximizing thing to do is focus on expansion over being maze-proof, and there's a lot of profit-maximizing entities out there.
It's not worth it for the king to try to expand to take over the world. It still seems, for many kings in many places, that expanding reasonably, robustly, is the right strategy given their goals (or at least, they think it's their goal, and you'd have your work cut out for you convincing them otherwise), and that meanwhile worrying about werewolves in the lawyer department is probably more like a form of altruism than a form of self-interest.
Or, reframing this as a question (since I'm honestly not that confident)
If your inner circle is safe, how much selfish reason does a CEO have to make sure the rest of the organization is obfuscator-proof?
comment by Mary Chernyshenko (mary-chernyshenko) · 2019-12-29T19:22:08.264Z · LW(p) · GW(p)
Oh you first-worlder, you. Scott is so right somewhere.
(and how do you know that "Most communities do get most people to pitch in"?)
comment by Isnasene · 2019-12-28T19:26:49.349Z · LW(p) · GW(p)
I think the main reason Moloch doesn't succeed very effectively is just because the common response to "hey, you could sacrifice everything of value and give up all slack to optimize X" is "yeah but have you considered just, yanno, hanging out and watching TV?"
And most people who optimize X aren't actually becoming more competitive in the grand scheme of things. They'll die (or hopefully not die) like everybody else and probably have roughly the same number of kids. The selection process that created humans in the first place won't even favor them!
As a result, I'm not worried about Moloch imminently taking over the world. Frankly, I'm more short-term concerned with people just, yanno, hanging out and watching TV when this world is abjectly horrifying.
I am long-term concerned about Moloch as it pertains to value-drift. I doubt the sound of Moloch will be something like "people giving up all value to optimize X" and expect it to be something more like "thousands of years go by and eventually people just stop having our values."
Replies from: agai↑ comment by agai · 2019-12-29T06:08:44.926Z · LW(p) · GW(p)
It's more effective to retain more values since physics is basically unitary (at least up to the point we know) so you'll have more people on your side if you retain the values of past people. So we'd be able to defeat this Moloch if we're careful.
Replies from: Isnasene↑ comment by Isnasene · 2019-12-29T15:09:23.192Z · LW(p) · GW(p)
To be clear, the effectiveness of an action is defined by whatever values we use to make that judgement. Retaining the values of past people is not effective unless
- past-people values positively complement your current values, so you can positively leverage the work of past people by adopting more of their value systems (which doesn't necessarily mean you have to adopt their values)
- past-people have coordinated to limit the instrumental capabilities of anyone who doesn't have their values (for instance, by establishing a Nash equilibrium that makes it really hard for people to express drifting values or by building an AGI)
To be fair, maybe you're referring to Molochian effectiveness of the form (whatever things tend to maximize the existence of similar things). For humans, similarity is a complicated measure. Do we care about memetic similarity (i.e. reproducing people with similar attitudes to ourselves) or genetic similarity (i.e. having more kids)? Of course, this is a nonsense question because the answer is most humans don't care strongly about either and we don't really have any psychological intuitions on the matter (I guess you could argue hedonic utilitarianism can be Molochian under certain assumptions, but that's just because any strongly-optimizing morality becomes Molochian).
In the former case (memetic similarity), adopting values of past people is a strategy that makes you less fit because you're sacrificing your memetics to more competitive ones. In the latter case (genetic similarity), pretending to adopt people's values as a way to get them to have more kids with you is more dominant than just adopting their values.
But, overall, I agree that we could kind-of beat Moloch (in the sense of curbing Moloch on really long time-scales) just by setting up our values to be inherently more Molochian than those of people in the future. Effective altruism is actually a pretty good example of this. Utilitarian optimizers leveraging the far-future to manipulate things like value-drift over long-periods of time seem more memetically competitive than other value-sets.
comment by jmh · 2019-12-28T17:37:58.844Z · LW(p) · GW(p)
Some day I might go read the background here.
I do wonder if the old saying about evil triumphing only if good people stay quiet doesn't apply. Perhaps that is the source of all those unfortunate accidents Elua enjoys. But that is a pretty weak thesis. What might the model be that gets us some ratios related to the numbers of good, bad and indifferent people, plus perhaps a basic human trait about feeling better inside if we don't do bad? That last bit then allows a large number in the group to be indifferent but display propensities more aligned with Elua than Moloch.
There also seems to be (assuming I actually get the whole map-territory view right) an analytical concern. Perfect competition is a fiction made up to allow the nice pictures to be drawn on the chalkboard. They are maps, and rather poor, simplistic ones at that, rather than the underlying territory. It seems like Moloch is given power based on the map rather than the underlying territory. Perhaps that is why you offered the technical note, so I look forward to where you take that. I suppose one path might be to suggest the accidents are not so accidental or surprising.
Replies from: agai↑ comment by agai · 2019-12-29T06:12:01.281Z · LW(p) · GW(p)
Accidents, if not too damaging, are net positive because they allow you to learn more & cause you to slow down. If you are wrong about what is good/right/whatever, and you think you are a good person, then you'd want to be corrected. So if you're having a lot of really damaging accidents in situations where you could reasonably be expected to control, that's probably not too good, but "reasonably be expected to control" is a very high standard. What I'm very explicitly not saying here is that the "just-world" hypothesis is true in any way; accidents *are* accidents, it's just that they can be net positive.
Replies from: jmh↑ comment by jmh · 2019-12-29T16:50:27.329Z · LW(p) · GW(p)
One of the recent "cultural" themes being pushed by the company I work in is very similar. Basically, if someone critiques you and shows you where you made the mistake, or simply notes a mistake was made, they just gave you a gift; don't get mad or defensive.
I think there is a lot of truth to that.
My phrase is "own your mistakes". Meaning: acknowledge and learn from them.
So, I fully agree with your general point. Accidents and mistakes should never be pure-loss settings. And, in some cases, they can lead to net positive benefits (and we probably don't even need to consider those "I was looking for X but found Y, and Y is really, really good/beneficial/productive/cost-saving/life-saving" cases).
Replies from: Zvi
comment by Ben Pace (Benito) · 2020-12-15T07:19:55.893Z · LW(p) · GW(p)
Nominating this whole sequence. I learned a lot from it.
comment by NancyLebovitz · 2019-12-30T01:42:35.051Z · LW(p) · GW(p)
Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency is about why businesses fail if they ignore all other values in favor of maximizing profit-- they lose too much flexibility.
I'm looking forward to the rest of this series.
comment by Donald Hobson (donald-hobson) · 2019-12-29T14:54:20.849Z · LW(p) · GW(p)
The real world is high-dimensional, and many people will go slightly out of their way to help. If the coffee place uses poisonous pesticides, people will tell others, an action that doesn't cost them much and helps others a lot.
Your Moloch traps only trap when they are too strong for the Moloch-haters to destroy. The Moloch-haters don't have a huge amount of resources, but in a high-dimensional system, there is often a low-resource option.
comment by velcro · 2020-01-08T01:36:40.353Z · LW(p) · GW(p)
It seems like the stability point of a lot of systems is Moloch-like. (Monopolies, race to the bottom, tragedy of the commons, dictatorships, etc.) It requires energy to keep the systems out of those stability points.
Lots of people need to make lots of sacrifices to keep us out of Moloch states. It is not accidents. It is revolutions, and voter registration, and volunteering and standing up to bullies. It is paying extra for fair trade coffee and protesting for democracy in Hong Kong.
Moloch has a huge advantage. If we do nothing, it will always win. We need to remember that.
comment by niplav · 2021-05-31T13:41:54.516Z · LW(p) · GW(p)
There is a big difference between a universe with -hugenum value, and a universe with 0 value. Moloch taking over would produce a universe with 0 value, not -hugenum (not just because we might assume that pain and pleasure are equally energy-efficient).
When one then considers how much net value there is in the universe (or how much net disvalue!), I suspect Elua's winning, while probably positive, isn't that great: sure, sometimes someone learns something in the education system, but many other people also waste intellectual potential, or get bullied.
comment by Jameson Quinn (jameson-quinn) · 2020-01-16T17:06:35.379Z · LW(p) · GW(p)
I strongly suggest you rewrite your summary of "physical limitations". The original was slightly problematic; your summary is, to me, a train-wreck.
Scott's original point was, I believe, "slavery itself may be an example of a bad collective equilibrium, but work-people-to-death antebellum southern slavery was even worse than that." He spent so much effort showing how the WPTD version was inefficient that he forgot to reiterate the obvious point that both versions are morally bad; and since he was contrasting the two, it would be possible to infer that he's actually saying that non-WPTD slavery is not so bad morally; but he clearly deserves the benefit of the doubt on that, and anybody who's read that far is likely to give it to him.
Your summary is shorter, so it's easier to misinterpret, and "people unlikely to give you the benefit of the doubt" are more likely to read it. Furthermore, using "you" to mean slavers makes it actually worse than Scott's version. I, for one, really don't want to be asked to put myself into slavers' shoes unless it's crucial to the point being made, and in this case it clearly isn't.
I suggest you remove the "you" phrasing, and also explicitly say that even non-WPTD slavery is bad; that this is an example of physical limitations slightly ameliorating a bad equilibrium, but not removing it altogether. You can, I believe, safely imply that that's what Scott believes too, even though he doesn't explicitly say it.
Replies from: Zvi↑ comment by Zvi · 2020-01-17T12:13:01.207Z · LW(p) · GW(p)
Happy to delete the word 'you' there since it's doing no work. Not going to edit this version, but will update OP and mods are free to fix this one. Also took opportunity to do a sentence break-up.
As for saying explicitly that slavery is bad, well, pretty strong no. I'm not going to waste people's time doing that, nor am I going to invite further concern trolling, or the implication that when I do not explicitly condemn something it means I might secretly support it or something. If someone needs reassurance that someone talking about slavery as one of the horrible things also opposes a less horrible form of slavery, then they are not the target audience.
Replies from: jameson-quinn↑ comment by Jameson Quinn (jameson-quinn) · 2020-01-18T00:48:54.525Z · LW(p) · GW(p)
I think that I am probably inside the set you'd consider "target audience", though not a central member. To me, when you say "strong no" it sounds somewhat like "if somebody misunderstands me, it's their fault," which I'd think is a bad reaction.
I realize that what I'm asking for could be considered SJW virtue-signaling, and I understand that one possible reaction to such a request is "ew, no, that's not my tribe." However, I think there's reasons aside from signaling or counter-signaling to consider my request.
To me, one goal of a summary section like the one in question is to allow the reader to grasp the basic flavor of the argument in question without too much mental work. That might, in some cases, mean it's worth explicitly saying things that were implicit in the unabridged original, because the quicker read might leave such implicit ideas less obvious. In particular, to me, it's important that these "physical limitations" don't actually remove the badness of the equilibrium, they just moderate it slightly. That flows obviously to me when reading Scott's full original; with your summary, it's still obvious, but in a way that breaks the flow and requires me to stop and think "there's something left unsaid here". In a summary section, such a break in the flow seems better avoided.
Replies from: SaidAchmiz, Zvi↑ comment by Said Achmiz (SaidAchmiz) · 2020-01-18T01:01:36.250Z · LW(p) · GW(p)
Are you saying that you, personally, were confused about whether Zvi (or Scott) does, or does not, support slavery? Is that actually something that you were unsure whether you had understood properly?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2020-01-18T02:40:26.511Z · LW(p) · GW(p)
I'm reading Jameson as just saying that, from an editing standpoint, the wording was sufficiently confusing that he had to stop for a few seconds to figure out that this wasn't what Zvi was saying. Like, he didn't believe Zvi believed it, but it nonetheless read like that for a minute.
(Either way, I don't care about it very much.)
Replies from: jameson-quinn↑ comment by Jameson Quinn (jameson-quinn) · 2020-01-18T09:00:07.988Z · LW(p) · GW(p)
Exactly, thank you.