Moloch Hasn’t Won

post by Zvi · 2019-12-28T16:30:00.947Z · score: 123 (46 votes) · LW · GW · 28 comments

Contents

  Meditations on Moloch
  That’s Not How This Works, That’s Not How Any of This Works
  Moloch’s Army: An As-Yet Unjustified But Important Note
  Meditations on Elua
28 comments

This post begins the Immoral Mazes sequence. See introduction for an overview of the plan. Before we get to the mazes, we need some background first.

Meditations on Moloch

Consider Scott Alexander’s Meditations on Moloch. I will summarize here. 

Therein lie fourteen scenarios where participants can be caught in bad equilibria.

  1. In an iterated prisoner’s dilemma, two players keep playing defect.
  2. In a dollar auction, participants massively overpay.
  3. A group of fishermen fail to coordinate on using filters that efficiently benefit the group, because they can’t punish those who profit by not using the filters.
  4. Rats are caught in a permanent Malthusian trap where only those who do nothing but compete and consume survive. All others are outcompeted.
  5. Capitalists serve a perfectly competitive market, and cannot pay a living wage.
  6. The tying of all good schools to ownership of land causes families to work two jobs whose incomes are then captured by the owners of land.
  7. Farmers outcompeted foragers despite this perhaps making everyone’s life worse for the first few thousand years.
  8. Si Vis Pacem, Para Bellum: If you want peace, prepare for war. So we do.
  9. Cancer cells focus on replication, multiply and kill off the host.
  10. Local governments compete to become more competitive and offer bigger bribes of money and easy regulation in order to lure businesses.
  11. Our education system is a giant signaling competition for prestige.
  12. Science doesn’t follow proper statistical and other research procedures, resulting in findings that mostly aren’t real.
  13. Governments hand out massive corporate welfare.
  14. Have you seen Congress?

Scott differentiates the first ten scenarios, where he says that perfect competition* wipes out all value, from the latter four, where imperfect competition wipes out only most of the potential value.
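The first scenario is concrete enough to sketch in code. The following is a minimal, illustrative simulation (my own, not from Scott's essay; the payoff values are the conventional textbook ones) showing both the trap and the escape: mutual defection is a stable bad equilibrium, but in the iterated game a simple retaliating strategy like tit-for-tat sustains cooperation.

```python
# Illustrative sketch (not from the original post): standard payoffs
# for the prisoner's dilemma. Each round, both players pick
# "C" (cooperate) or "D" (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: the bad equilibrium
}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game; each strategy sees the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        last_a, last_b = move_a, move_b
    return score_a, score_b

def always_defect(opponent_last):
    return "D"

def tit_for_tat(opponent_last):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if opponent_last is None else opponent_last

# Two defectors stay stuck at the bad equilibrium: 100 points each.
print(play(always_defect, always_defect))  # (100, 100)
# Two tit-for-tat players sustain cooperation: 300 points each.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
```

The point of the sketch is that the equilibrium depends on the game's structure: defection dominates a single round, but repetition plus the ability to retaliate makes cooperation self-enforcing, which foreshadows the claim later in this post that iterated prisoner's dilemmas often succeed.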

He offers four potential ways out, which I believe to be an incomplete list:

 

  1. Excess resources allow a temporary respite. We live in the dream time.
  2. Physical limitations, where the horrible thing isn’t actually efficient. He gives the example of slavery: treating slaves relatively well is the best way to get them to produce, and treating them horribly, as in the antebellum South, is so much worse that it needs to be enforced via government coordination or it will die out.
  3. The things being maximized for in competitions are often nice things we care about, so at least we get the nice things.
  4. We can coordinate. This may or may not involve government or coercion.

Scott differentiates this fourth, ‘good’ reason from the previous three ‘bad’ reasons, claiming coordination might be a long-term solution, but that we can’t expect the ‘bad’ reasons to work if optimization power and technology get sufficiently advanced.

The forces of the stronger competitors, who sacrifice more of what they value to become powerful and to be fruitful and multiply, eventually win out. We might be in the dream time now, but with time we’ll reach a steady state with static technology, where we’ve consumed all the surplus resources. All differentiation standing in the way of perfect competition will fade away. Horrible things will be the most efficient. 

The optimizing things will keep getting better at optimizing, thus wiping out all value. When we optimize for X but are indifferent to Y [LW · GW], we by default actively optimize against Y, for all Y that would make any claims to resources. Any Y we value is making a claim to resources. See The Hidden Complexity of Wishes [LW · GW]. We only don’t optimize against Y if either we compensate by intentionally also optimizing for Y, or if X and Y have a relationship (causal, correlational or otherwise) where we happen to not want to optimize against Y, and we figure this out rather than fall victim to Goodhart’s Law.

The greater the optimization power we put behind X, the more pressure we put upon Y. Eventually, under sufficient pressure, any given Y is likely doomed. Since Value is Fragile [LW · GW], some necessary Y is eventually sacrificed, and all value gets destroyed.

Every simple optimization target yet suggested would, if fully implemented, destroy all value in the universe. 

Submitting to this process means getting wiped out by these pressures.

Gotcha! You die anyway.

Even containing them locally won’t work, because that locality will be part of the country, or the Earth, or the universe, and eventually wipe out our little corner.

Gotcha! You die anyway.

Which is why the only ‘good’ solution, in the end, is coordination, whether consensual or otherwise. We must coordinate to kill these ancient forces who rule the universe and lay waste to all of value, before they kill us first. Then replace them with something better. 

Great project! We should keep working on that. 

That’s Not How This Works, That’s Not How Any of This Works

It’s easy to forget that the world we live in does not work this way. Thus, this whole line of thought can result in quite gloomy assessments of how the world inevitably always has and will work, such as this from Scott in Meditations on Moloch:

Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don’t know about the pesticide, and the government hasn’t caught up to regulating it yet. Now there’s a tiny uncoupling between “selling to Americans” and “satisfying Americans’ values”, and so of course Americans’ values get thrown under the bus.

Or this from Raymond, taken from a comment to a much later, distinct post, where ‘werewolf’ in context means ‘someone trying to destroy rather than create clarity as the core of their strategy’:

If you’re a king with 5 districts, and you have 20 competent managers who trust each other… one thing you can do is assign 4 competent managers to each fortress, to ensure the fortress has redundancy and resilience and to handle all of its business without any backstabbing or relying on inflexible bureaucracies. But another thing you can do is send 10 (or 15!) of the managers to conquer and reign over *another* 5 (or 15!) districts.

This is bad if you’re one of the millions of people who live in the kingdom, who have to contend with werewolves.

It’s an acceptable price to pay if you’re actually the king. Because if you didn’t pay the price, you’d be outcompeted by an empire who did. And meanwhile it doesn’t actually really affect your plans that much.

The key instinct is that any price that can be paid to be stronger or more competitive must be paid, therefore despair: if you didn’t pay the price, you’d be out-competed by someone who did. People who despair this way are often intuitively modeling things as effectively perfect competition, at least over time, which causes them to think that everything must by default become terrible, likely right away.

So many people increasingly bemoan how horrible anything and everything in the world is, and how we are all doomed.

When predictions of actual physical doom are made, as they increasingly are, often the response is to think things are so bad as to wish for the sweet release of death. 

Moloch’s Army: An As-Yet Unjustified But Important Note

Others quietly, or increasingly loudly and explicitly to those who are listening, embrace Moloch.

They tell us that the good is to sacrifice everything of value, and pass moral judgments on that basis. To take morality and flip its sign. Caring about things of value becomes sin, indifference becomes virtue. They support others who support the favoring of Moloch, elevating them to power, and punish anyone who supports anything else.

They form Moloch’s Army and are the usual way Moloch locally wins, where Moloch locally wins. The real reason people give up slack and everything of value is not that it is ever so slightly more efficient to do so, because it almost always isn’t. It is so that others can notice they have given up slack and everything of value.

I am not claiming the right to assert this yet. Doing so needs not only a citation but an entire post or sequence that is yet unwritten. It’s hard to get right. Please don’t object that I haven’t justified it! But I find it important to say this here, explicitly, out loud, before we continue. 

I also note that I explicitly support the implied norm of ‘make necessary assertions that you can’t explicitly justify if they seem important, and mark that you are doing this, then go back and justify them later when you know how to do so, or change your mind.’ It also led to this post, which led to many of what I think are my best other posts.

Meditations on Elua

The most vital and important part of Meditations on Moloch is hope. That we are winning. Yes, there are abominations and super-powerful forces out there looking to eat us and destroy everything of value, and yet we still have lots of stuff that has value. 

Even before we escaped the Malthusian trap and entered the dream time, we still had lots of stuff that had value. 

Quoting Scott Alexander:

Somewhere in this darkness is another god. He has also had many names. In the Kushiel books, his name was Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans.

The other gods sit on their dark thrones and think “Ha ha, a god who doesn’t even control any hell-monsters or command his worshippers to become killing machines. What a weakling! This is going to be so easy!”

But somehow Elua is still here. No one knows exactly how. And the gods who oppose Him tend to find Themselves meeting with a surprising number of unfortunate accidents.

Moloch gets the entire meditation. Elua, who has been soundly kicking Moloch’s ass for all of human existence, gets the above quote and little else. 

Going one by one:

Kingdoms don’t reliably expand to their breaking points.

Poisons don’t keep making their way into the coffee.

Iterated prisoner’s dilemmas often succeed.

Dollar auctions are not all over the internet.

Most communities do get most people to pitch in.

People caught in most Malthusian traps still usually have non-work lives.

Capitalists don’t pay the minimum wage all that frequently.

Many families spend perfectly reasonable amounts on housing.

Foragers never fully died out, also farming worked out in the end.

Most military budgets seem fixed at reasonable percentages of the economy, to the extent that for a long time the United States has been mad at its allies, like Europe and Japan, for not spending enough.

Most people die of something other than cancer, and almost all cells aren’t cancerous.

Local governments enact rules and regulations that aren’t business friendly all the time.

Occasionally, someone in the educational system learns something.

Science has severe problems, but scientists are cooperating to challenge poor statistical methods, resulting in the replication crisis and improving statistical standards.

Governments are corrupt and hand out corporate welfare, but mostly are only finitely corrupt and hand out relatively small amounts of corporate welfare. States that expropriate the bulk of available wealth are rare. 

If someone has consistently good luck, it ain’t luck.
 

(Yes, I have seen Congress. Can’t win them all. But I’ve also seen, feared and imagined much worse Congresses. For now, your life, liberty and property are mostly safe while they are in session.)

(And yes the education exception is somewhat of a cop out but also things could be so much worse there on almost every axis.)

The world is filled with people whose lives have value and include nice things. Each day we look Moloch in the face, know exactly what the local personal incentives are, see the ancient doom looming over all of us, and say what we say to the God of Death: Not today.

Saying ‘not today’ won’t cut it against an AGI or other super strong optimization process. Gotcha. You die anyway. But people speak and often act as if the ancient ones have already been released, and the end times are happening now.

They haven’t, and they aren’t. 

So in the context of shorter term problems that don’t involve such things, rather than bemoan how eventually Moloch will eat us all and how everything is terrible when actually many things are insanely great, perhaps we should ask a different question. 

How is Elua pulling off all these unfortunate accidents?

*As a technical reminder we will expand upon in part two, perfect competition is a market with large numbers of buyers and sellers, homogeneity of the product, free entry and exit of firms, perfect market knowledge, one market price, perfect mobility of goods and factors of production with zero transportation costs, and no restrictions on trade. This forces the price to become equal to the marginal cost of production.  

28 comments

Comments sorted by top scores.

comment by Jay · 2019-12-29T14:08:35.049Z · score: 13 (5 votes) · LW(p) · GW(p)

Moloch's biggest enemy is change. In a rapidly changing environment, "Moloch" presents as crippling overspecialization. Since "slack" is useful in a wide variety of circumstances, rapid change selects for slack.

comment by Dagon · 2019-12-28T19:07:29.303Z · score: 13 (5 votes) · LW(p) · GW(p)

There's a lot to be said for the dream-time argument. Inefficiency gives room for slack, and individual misalignment with group averages is inefficient. There are fewer people than the planet could support (in the near-term; hard to know what will happen in the longer term), easing the competitive pressure.

Limiting the number of children that industrial people have lets them maintain the wealth concentrations that make their lives pleasant.

comment by agai · 2019-12-29T06:06:12.541Z · score: 0 (0 votes) · LW(p) · GW(p)

Yeah, so, this is a complex issue. It is actually true IMO that we want fewer people in the world so that we can focus on giving them better lives and more meaningful lives. Unfortunately this would mean that people have to die, but yeah... I also think that cryogenics doesn't really make it much easier/harder to revive people, I would say either way you pretty much have to do the work of re-raising them by giving them the same experiences...

Although now I think about it there was a problem about that recently where I thought of a way to just "skip to the end state" given a finite length and initial state, the problem is we'd need to be able to simulate the entire world up to the end of the person's life. So I guess yeah that's why I don't think cryonics is too important except for research purposes and I guess motivating people to put their efforts into power efficiency, insulation, computation, materials technology etc. So it is useful in that sense probably more than just burying people but in the sense of "making it easier to bring them back alive," not really. Also sort of means having fewer people makes it more likely we can have more than a few seconds where no one dies, which would be nice for later.

In terms of numbers, "fewer" I'm thinking like 3-6 billion still, and maybe population will still keep increasing and our job will just be harder, which is annoying, but yeah. I would say "don't have kids if you don't think the world is actually getting better" is a good idea, particularly if you want to make it easier for later people to potentially bring back the people you care about that are already dead.

Life *extension* and recovery etc on the other hand is a *much, much easier* problem. I'm super interested in the technical aspects of this right now although the things I think will probably be substantially different from many people.

Basically in summary I agree with your post. :)

comment by romeostevensit · 2019-12-28T18:55:43.530Z · score: 7 (5 votes) · LW(p) · GW(p)

I have objections to most of your list of Elua wins, and that's despite me being an optimist. For now I'll just say that defensive tech outrunning offensive tech allows for capital formation.

comment by agai · 2019-12-29T06:14:47.605Z · score: 1 (1 votes) · LW(p) · GW(p)

Would you be able to expand on those? I thought they were quite apt.

comment by Raemon · 2020-01-21T21:31:00.094Z · score: 6 (3 votes) · LW(p) · GW(p)

> If you’re a king with 5 districts, and you have 20 competent managers who trust each other… one thing you can do is assign 4 competent managers to each fortress, to ensure the fortress has redundancy and resilience and to handle all of its business without any backstabbing or relying on inflexible bureaucracies. But another thing you can do is send 10 (or 15!) of the managers to conquer and reign over *another* 5 (or 15!) districts.

> This is bad if you’re one of the millions of people who live in the kingdom, who have to contend with werewolves.

> It’s an acceptable price to pay if you’re actually the king. Because if you didn’t pay the price, you’d be outcompeted by an empire who did. And meanwhile it doesn’t actually really affect your plans that much.

...

> The key instinct is that any price that can be paid to be stronger or more competitive, must be paid, therefore despair: If you didn’t pay the price, you’d be out-competed by someone who did. People who despair this way often intuitively are modeling things as effectively perfect competition at least over time, which causes them to think that everything must by default become terrible, likely right away.

[...]

> Kingdoms don’t reliably expand to their breaking points.

Anthropics vs Goals

I didn't get around to replying to this until today, but this wasn't my main point and I think it's pretty important.

The issue isn't whether you'll fail to achieve your goals if you don't expand. The issue is "from an anthropic reasoning perspective, what sort of world will most people live in?"

I have shifted some of my thinking around "you'll be outcompeted and therefore it's in your interest to expand". I think I agree with "it's generally not worth trying to be the winner-take-all winner, because a) you need to sacrifice all the things you cared about anyway, b) even if you do, you're not actually likely to win anyway."

But that was only half the question – if you're looking around the world, trying to build a model of what's going on, I think the causal explanation is that "organizations that expand end up making up most of the world, so they'll account for most of your observations."

The reason this seems important is, like, I see you and Benquo looking in horror at the world. And... it is a useful takeaway that "hmm, I guess I don't need to expand in order to compete with the biggest empires in order to be happy/successful/productive, I can just focus on making a good business that delivers value and doesn't compromise its integrity." (And having that crystallized has been helpful to my own developing worldview)

Nonetheless... the world will continue to be a place you recoil in horror from until somehow, someone creates something that either stops mazes, or outcompetes them, or something.

Breaking Points vs Realistic Tradeoffs

I also disagree with the characterization "kingdoms don't expand to their breaking points."

The original concept here was "why do people have a hard time detecting obfuscators and sociopaths?". A realistic example (to be clear I don't know much about medieval kingdoms), is a corporation that ends up creating multiple departments (i.e. hiring a legal team), or expanding to new locations.

This doesn't mean you expand to your breaking point – any longterm organization has to contend with shocks and robustness. The organizations I expect to be most successful will expand carefully, not overextending. But if you're asking the question "why are there obfuscators everywhere?", I think the answer is because the relative profitability of extinguishing obfuscators, vs. not worrying as much about it, points in the latter direction.

This is, in part, because extinguishing obfuscating or other mazelike patterns is a rare, high-skill job that, like, even small organizations don't usually have the capacity to deal with. I think if you can make it much cheaper, it's probably possible to shift the global pattern. But I think the status quo is that the profit-maximizing thing to do is focus on expansion over being maze-proof, and there are a lot of profit-maximizing entities out there.

It's not worth it for the king to try to expand to take over the world. It still seems, for many kings in many places, that expanding reasonably, robustly, is still the right strategy given their goals (or at least, they think it's their goal, and you'd have your work cut out for you convincing them otherwise), and that meanwhile worrying about werewolves in the lawyer department is probably more like a form of altruism than a form of self-interest. 

Or, reframing this as a question (since I'm honestly not that confident)

If your inner circle is safe, how much selfish reason does a CEO have to make sure the rest of the organization is obfuscator-proof?

comment by Hazard · 2019-12-28T18:32:25.441Z · score: 6 (3 votes) · LW(p) · GW(p)

I'm quite interested in the rest of this. Though I did find the idea of Moloch useful for responding to the most naive forms of "If we all did X everything would be perfect", I also have a vague feeling that rationalists' belief in Moloch being all-powerful prevents them from achieving totally achievable levels of group success.

comment by agai · 2019-12-29T06:16:09.957Z · score: 1 (1 votes) · LW(p) · GW(p)

Yes. Although Moloch is "kind of" all powerful, there are also different levels of "all powerful" so there can be "more all powerful" things. :)

comment by Mary Chernyshenko (mary-chernyshenko) · 2019-12-29T19:22:08.264Z · score: 5 (4 votes) · LW(p) · GW(p)

Oh you first-worlder, you. Scott is so right somewhere.

(and how do you know that "Most communities do get most people to pitch in"?)

comment by NancyLebovitz · 2019-12-30T01:42:35.051Z · score: 4 (2 votes) · LW(p) · GW(p)

Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency is about why businesses fail if they ignore all other values in favor of maximizing profit-- they lose too much flexibility.

I'm looking forward to the rest of this series.

comment by Donald Hobson (donald-hobson) · 2019-12-29T14:54:20.849Z · score: 4 (4 votes) · LW(p) · GW(p)

The real world is high dimensional, and many people will go slightly out of their way to help. If the coffee place uses poisonous pesticides, people will tell others, an action that doesn't cost them much and helps others a lot.

Your Moloch traps only trap when they are too strong for the Moloch haters to destroy. The Moloch haters don't have a huge amount of resources, but in a high dimensional system, there is often a low resource option.

comment by Isnasene · 2019-12-28T19:26:49.349Z · score: 4 (3 votes) · LW(p) · GW(p)

I think the main reason Moloch doesn't succeed very effectively is just because the common response to "hey, you could sacrifice everything of value and give up all slack to optimize X" is "yeah but have you considered just, yanno, hanging out and watching TV?"

And most people who optimize X aren't actually becoming more competitive in the grand scheme of things. They'll die (or hopefully not die) like everybody else and probably have roughly the same number of kids. The selection process that created humans in the first place won't even favor them!

As a result, I'm not worried about Moloch imminently taking over the world. Frankly, I'm more short-term concerned with people just, yanno, hanging out and watching TV when this world is abjectly horrifying.

I am long-term concerned about Moloch as it pertains to value-drift. I doubt the sound of Moloch will be something like "people giving up all value to optimize X" and expect it to be something more like "thousands of years go by and eventually people just stop having our values."

comment by agai · 2019-12-29T06:08:44.926Z · score: 0 (0 votes) · LW(p) · GW(p)

It's more effective to retain more values since physics is basically unitary (at least up to the point we know) so you'll have more people on your side if you retain the values of past people. So we'd be able to defeat this Moloch if we're careful.

comment by Isnasene · 2019-12-29T15:09:23.192Z · score: 2 (2 votes) · LW(p) · GW(p)

To be clear, the effectiveness of an action is defined by whatever values we use to make that judgement. Retaining the values of past people is not effective unless

  • past-people values positively complement your current values so you can positively leverage the work of past people by adopting more of their value systems (which doesn't necessarily mean you have to adopt their values)
  • past-people have coordinated to limit the instrumental capabilities of anyone who doesn't have their values (for instance, by establishing a Nash equilibrium that makes it really hard for people to express drifting values or by building an AGI)

To be fair, maybe you're referring to Molochian effectiveness of the form (whatever things tend to maximize the existence of similar things). For humans, similarity is a complicated measure. Do we care about memetic similarity (ie reproducing people with similar attitudes as ourselves) or genetic similarity (ie having more kids)? Of course, this is a nonsense question because the answer is most humans don't care strongly about either and we don't really have any psychological intuitions on the matter (I guess you could argue hedonic utilitarianism can be Molochian under certain assumptions but that's just because any strongly-optimizing morality becomes Molochian).

In the former case (memetic similarity), adopting values of past people is a strategy that makes you less fit because you're sacrificing your memetics to more competitive ones. In the latter case (genetic similarity), pretending to adopt people's values as a way to get them to have more kids with you is more dominant than just adopting their values.

But, overall, I agree that we could kind-of beat Moloch (in the sense of curbing Moloch on really long time-scales) just by setting up our values to be inherently more Molochian than those of people in the future. Effective altruism is actually a pretty good example of this. Utilitarian optimizers leveraging the far-future to manipulate things like value-drift over long-periods of time seem more memetically competitive than other value-sets.



comment by jmh · 2019-12-28T17:37:58.844Z · score: 4 (3 votes) · LW(p) · GW(p)

Some day I might go read the background here.

I do wonder if the old saying about evil triumphing only if good people stay quiet doesn't apply. Perhaps that is the source of all those unfortunate accidents Elua enjoys. But that is a pretty weak thesis. What might the model be that gets us some ratios related to the numbers of good, bad and indifferent people, plus perhaps a basic human trait about feeling better inside if we don't do bad? That last bit then allows a large number in the group to be indifferent but display propensities more aligned with Elua than Moloch.

There also seems to be (assuming I actually get the whole map-territory view right) an analytical concern. Perfect competition is a fiction made up to allow the nice pictures to be drawn on the chalkboard. It is a map, and a rather poor, simplistic one at that, rather than the underlying territory. Seems like Moloch is given power based on the map rather than the underlying territory. Perhaps that is why you offered the technical note, so I look forward to where you take that. I suppose one path might be to suggest the accidents are not so accidental or surprising.

comment by agai · 2019-12-29T06:12:01.281Z · score: 0 (0 votes) · LW(p) · GW(p)

Accidents, if not too damaging, are net positive because they allow you to learn more & cause you to slow down. If you are wrong about what is good/right/whatever, and you think you are a good person, then you'd want to be corrected. So if you're having a lot of really damaging accidents in situations where you could reasonably be expected to control, that's probably not too good, but "reasonably be expected to control" is a very high standard. What I'm very explicitly not saying here is that the "just-world" hypothesis is true in any way; accidents *are* accidents, it's just that they can be net positive.

comment by jmh · 2019-12-29T16:50:27.329Z · score: 3 (3 votes) · LW(p) · GW(p)

One of the recent "cultural" themes being pushed by company I work in is very similar. Basically, if someone critiques you and shows you where you made the mistake, or simply notes a mistake was made, they just gave you a gift, don't get mad or defensive.

I think there is a lot of truth to that.

My phrase is "own your mistakes". Meaning: acknowledge and learn from them.

So, I fully agree with your general point. Accidents and mistakes should never be pure loss settings. And, in some cases they can lead to net positive benefits (and we probably don't even need to consider those "I was looking for X but found Y, and Y is really, really good/beneficial/productive/cost saving/life saving..." cases).

comment by Zvi · 2019-12-30T13:23:52.687Z · score: 6 (3 votes) · LW(p) · GW(p)

Jane Street Capital was very big on owning your mistakes. I already believed in it as much or more than anyone I knew at the time, but they made it clear I didn't believe in it enough, and they were right.

comment by Jameson Quinn (jameson-quinn) · 2020-01-16T17:06:35.379Z · score: 2 (1 votes) · LW(p) · GW(p)

I strongly suggest you rewrite your summary of "physical limitations". The original was slightly problematic; your summary is, to me, a train-wreck.

Scott's original point was, I believe, "slavery itself may be an example of a bad collective equilibrium, but work-people-to-death antebellum southern slavery was even worse than that." He spent so much effort showing how the WPTD version was inefficient that he forgot to reiterate the obvious point that both versions are morally bad; and since he was contrasting the two, it would be possible to infer that he's actually saying that non-WPTD slavery is not so bad morally; but he clearly deserves the benefit of the doubt on that, and anybody who's read that far is likely to give it to him.

Your summary is shorter, so it's easier to misinterpret, and "people unlikely to give you the benefit of the doubt" are more likely to read it. Furthermore, using "you" to mean slavers makes it actually worse than Scott's version. I, for one, really don't want to be asked to put myself into slavers' shoes unless it's crucial to the point being made, and in this case it clearly isn't.

I suggest you remove the "you" phrasing, and also explicitly say that even non-WPTD slavery is bad; that this is an example of physical limitations slightly ameliorating a bad equilibrium, but not removing it altogether. You can, I believe, safely imply that that's what Scott believes too, even though he doesn't explicitly say it.

comment by Zvi · 2020-01-17T12:13:01.207Z · score: 19 (6 votes) · LW(p) · GW(p)

Happy to delete the word 'you' there since it's doing no work. Not going to edit this version, but will update OP and mods are free to fix this one. Also took opportunity to do a sentence break-up.

As for saying explicitly that slavery is bad, well, pretty strong no. I'm not going to waste people's time doing that, nor am I going to invite further concern trolling, or the implication that when I do not explicitly condemn something it means I might secretly support it or something. If someone needs reassurance that someone talking about slavery as one of the horrible things also opposes a less horrible form of slavery, then they are not the target audience.

comment by Jameson Quinn (jameson-quinn) · 2020-01-18T00:48:54.525Z · score: 5 (2 votes) · LW(p) · GW(p)

I think that I am probably inside the set you'd consider "target audience", though not a central member. To me, when you say "strong no" it sounds somewhat like "if somebody misunderstands me, it's their fault," which I'd think is a bad reaction.

I realize that what I'm asking for could be considered SJW virtue-signaling, and I understand that one possible reaction to such a request is "ew, no, that's not my tribe." However, I think there's reasons aside from signaling or counter-signaling to consider my request.

To me, one goal of a summary section like the one in question is to allow the reader to grasp the basic flavor of the argument in question without too much mental work. That might, in some cases, mean it's worth explicitly saying things that were implicit in the unabridged original, because the quicker read might leave such implicit ideas less obvious. In particular, to me, it's important that these "physical limitations" don't actually remove the badness of the equilibrium, they just moderate it slightly. That flows obviously to me when reading Scott's full original; with your summary, it's still obvious, but in a way that breaks the flow and requires me to stop and think "there's something left unsaid here". In a summary section, such a break in the flow seems better avoided.

comment by Said Achmiz (SaidAchmiz) · 2020-01-18T01:01:36.250Z · score: 10 (2 votes) · LW(p) · GW(p)

Are you saying that you, personally, were confused about whether Zvi (or Scott) does, or does not, support slavery? Is that actually something that you were unsure whether you had understood properly?

comment by Ben Pace (Benito) · 2020-01-18T02:40:26.511Z · score: 4 (2 votes) · LW(p) · GW(p)

I'm reading Jameson as just saying that, from an editing standpoint, the wording was sufficiently confusing that he had to stop for a few seconds to figure out that this wasn't what Zvi was saying. Like, he didn't believe Zvi believed it, but it nonetheless read like that for a minute.

(Either way, I don't care about it very much.)

comment by Jameson Quinn (jameson-quinn) · 2020-01-18T09:00:07.988Z · score: 4 (2 votes) · LW(p) · GW(p)

Exactly, thank you.

comment by Zvi · 2020-01-19T21:59:30.156Z · score: 2 (1 votes) · LW(p) · GW(p)

I did a little more work to make it flow better in OP, and I'm going to let it drop there unless a bunch of other people confirm they had this same issue and it actually mattered (and with the new version).

comment by AnthonyC · 2019-12-29T00:21:39.093Z · score: 2 (2 votes) · LW(p) · GW(p)

I can't wait to read the rest of this sequence.

My starting opinion is still that Elua has to exist to win, and Moloch doesn't.

comment by agai · 2019-12-29T06:14:09.568Z · score: 0 (0 votes) · LW(p) · GW(p)

They both exist in different realms, however Elua's is bigger so by default Elua would win, but only if people care to live more in Elua's realm than Moloch's. Getting the map-territory distinction right is pretty important I think.

comment by velcro · 2020-01-08T01:36:40.353Z · score: 1 (1 votes) · LW(p) · GW(p)

It seems like the stability point of a lot of systems is Moloch-like. (Monopolies, race to the bottom, tragedy of the commons, dictatorships, etc.) It requires energy to keep the systems out of those stability points.

Lots of people need to make lots of sacrifices to keep us out of Moloch states. It is not accidents. It is revolutions, and voter registration, and volunteering and standing up to bullies. It is paying extra for fair trade coffee and protesting for democracy in Hong Kong.

Moloch has a huge advantage. If we do nothing, it will always win. We need to remember that.