Moloch: optimisation, "and" vs "or", information, and sacrificial ems
post by Stuart_Armstrong · 2014-08-06T15:57:58.010Z · LW · GW · Legacy · 59 comments
Go read Yvain/Scott's Meditations On Moloch. It's one of the most beautiful, disturbing, poetical looks at the future that I've ever seen.
Go read it.
Don't worry, I can wait. I'm only a piece of text, my patience is infinite.
De-dum, de-dum.
You sure you've read it?
Ok, I believe you...
Really.
I hope you wouldn't deceive an innocent and trusting blog post? You wouldn't be enough of a monster to abuse the trust of a being as defenceless as a constant string of ASCII symbols?
Of course not. So you'd have read that post before proceeding to the next paragraph, wouldn't you? Of course you would.
Academic Moloch
Ok, now to the point. The "Moloch" idea is very interesting, and, at the FHI, we may try to do some research in this area (naming it something more respectable/boring, of course, something like "how to avoid stable value-losing civilization attractors").
The project hasn't started yet, but a few caveats to the Moloch idea have already occurred to me. First of all, it's not obligatory for an optimisation process to trample everything we value into the mud. This is likely to happen with an AI's motivations, but it isn't inevitable for optimisation processes in general.
One way of seeing this is the difference between "or" and "and". Take the democratic election optimisation process. It's clear, as Scott argues, that this optimises badly in some ways. It encourages appearance over substance, some types of corruption, etc... But it also optimises along some positive axes, with some clear, relatively stable differences between the parties which reflect some voters' preferences, and punishment for particularly inept behaviour from leaders (I might argue that the main benefit of democracy is not the final vote between the available options, but the filtering out of many pernicious options because they'd never be politically viable). The question is whether these two strands of optimisation can be traded off against each other, or if a minimum of each is required. So can we run a campaign that is purely appearance-based, without any substantive position ("or": a maximum on one axis is enough), or do you need a minimum of substance and a minimum of appearance to buy off different constituencies ("and": you need some achievements on all axes)? And no, I'm not interested in discussing current political examples.
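To make the "or"/"and" distinction concrete, here is a minimal toy sketch in Python. The max/min fitness functions, the two axes, and all the numbers are my own illustrative assumptions, not anything from Scott's post or from the research project:

```python
# Toy contrast between "or" and "and" optimisation pressure on two axes.

def fitness_or(appearance, substance):
    """'Or' regime: excelling on any single axis is enough to win."""
    return max(appearance, substance)

def fitness_and(appearance, substance):
    """'And' regime: a campaign is only as strong as its weakest axis."""
    return min(appearance, substance)

campaigns = {
    "pure fluff":  (1.0, 0.0),
    "pure policy": (0.0, 1.0),
    "balanced":    (0.6, 0.6),
}

for name, (appearance, substance) in campaigns.items():
    print(f"{name:12}  or-regime: {fitness_or(appearance, substance):.1f}  "
          f"and-regime: {fitness_and(appearance, substance):.1f}")

# "Or" regime: "pure fluff" ties for the top score (1.0), so optimisation
# pressure is free to discard substance entirely.
# "And" regime: only "balanced" scores above zero (0.6), so competition
# preserves some minimum on every axis.
```

If real electoral competition is closer to the "and" regime, Moloch can squeeze substance down to its minimum viable level, but not all the way to zero.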
Another example Scott gave was of the capitalist optimisation process, and how it in theory matches customers' and producers' interests, but could go very wrong:
Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don't know about the pesticide, and the government hasn't caught up to regulating it yet. Now there's a tiny uncoupling between "selling to [customers]" and "satisfying [customers'] values", and so of course [customers'] values get thrown under the bus.
This effect can be combated to some extent with extra information. If the customers (or journalists, bloggers, etc...) know about this, then the coffee plantations will suffer. "Our food is harming us!" isn't exactly a hard story to publicise. This certainly doesn't work in every case, but increased information is something that technological progress would bring, and this needs to be considered when asking whether optimisation processes will inevitably tend to a bad equilibrium as technology improves. An accurate theory of nutrition, for instance, would have great positive impact if its recommendations could be measured.
Finally, Zack Davis's poem about the em stripped of (almost all) humanity got me thinking. The end result of that process is tragic for two reasons: first, the em retains enough humanity to have curiosity, only to get killed for this. And secondly, that em once was human. If the em was entirely stripped of human desires, the situation would be less tragic. And if the em was further constructed in a process that didn't destroy any humans, this would be even more desirable. Ultimately, if the economy could be powered by entities developed non-destructively from humans, and which were clearly not conscious or suffering themselves, this would be no different than powering the economy with the non-conscious machines we use today. This might happen if certain pieces of a human-em could be extracted, copied and networked into an effective, non-conscious entity. In that scenario, humans and human-ems could be the capital owners, and the non-conscious modified ems could be the workers. The connection of this with the Moloch argument is that it shows that certain nightmare scenarios could in some circumstances be adjusted to much better outcomes, with a small amount of coordination.
The point of the post
The reason I posted this is to get people's suggestions about ideas relevant to a "Moloch" research project, and what they thought of the ideas I'd had so far.
59 comments
Comments sorted by top scores.
comment by Quinn · 2014-08-11T02:50:06.238Z · LW(p) · GW(p)
Because the length of Scott's Moloch post greatly exceeds my working memory (to the extent that I had trouble remembering what the point was by the end) I made these notes. I hope this is the right place to share them.
Notes on Moloch (ancient god of child sacrifice)
http://slatestarcodex.com/2014/07/30/meditations-on-moloch/
1. Intro - no real content.
2. Moloch as coordination failure: everyone makes a sacrifice to optimize for a zero-sum competition, ends up with the same relative status, but worse absolute status.
- 10 examples: Prisoner's Dilemma, dollar auctions, fish-farming story (tragedy of the commons), Malthusian trap, ruthless/exploitative Capitalist markets, the two-income trap, agriculture, arms races, cancer, political race to the bottom (lowering taxes to attract business)
- 4 partial examples: inefficient education, inefficient science, government corruption (corporate welfare), Congress (representatives voting against good of nation for good of constituency)
3. Existing systems are created by incentive structures, not agents, e.g. Las Vegas caused by a known bias in human reward circuitry, not optimization for human values.
4. But sometimes we move uphill anyway. Possible explanations:
- Excess resources / we are in the dream time and can afford non-competitive behavior.
- Physical limitations to what can be sacrificed
- Economic competition actually producing positive utility for consumers (but this is fragile)
- Coordination, e.g. via governments, guilds, friendships, etc.
5. Technology/ingenuity creates new opportunities to fall into such traps. Technology overcomes physical limitations, consumes excess resources. Automation further decouples economic activity from human values. Technology can improve coordination, but can also exacerbate existing conflicts by giving all sides more power.
AGI opens up whole new worlds of traps: Yudkowsky's paperclipper, Hanson's subsistence-level ems, Bostrom's Disneyland with no children.
6 & 7. Gnon - basically the god of the conservative scarcity mindset. Nick Land advocates compliance; Nyan wants to capture Gnon and build a walled garden. Scott warns that Moloch is far more terrifying than Gnon and will kill both of them anyway.
8 & 9. So we have to kill this Moloch guy, by lifting a better God to Heaven (Elua).
↑ comment by lukeprog · 2014-08-17T15:45:52.198Z · LW(p) · GW(p)
everyone makes a sacrifice to optimize for a zero-sum competition, ends up with the same relative status, but worse absolute status.
I'm a bit surprised I haven't seen this particular incentives problem named in the academic literature. It is related in different ways to economic concepts like tragedy of the commons, social trap, tyranny of small decisions, and information asymmetry, but isn't identical with or fully captured by any of them.
↑ comment by Wei Dai (Wei_Dai) · 2014-08-18T04:55:38.438Z · LW(p) · GW(p)
See also positional good, which pre-dates Robert Frank's "positional arms race".
↑ comment by lukeprog · 2014-08-24T03:29:10.124Z · LW(p) · GW(p)
Thanks. That led me to Robert Frank on positional externalities.
↑ comment by satt · 2014-08-18T02:04:14.513Z · LW(p) · GW(p)
everyone makes a sacrifice to optimize for a zero-sum competition, ends up with the same relative status, but worse absolute status.
I'm a bit surprised I haven't seen this particular incentives problem named in the academic literature.
Robert H. Frank has called it a "positional arms race". In a relatively recent article on higher education he gives this summary:
Participants in virtually all winner-take-all markets face strong incentives to invest in performance enhancement, thereby to increase their chances of coming out ahead. As in the classic military arms race, however, many such investments prove mutually offsetting in the end. When each nation spends more on bombs, the balance of power is no different than if none had spent more. Yet that fact alone provides no escape for individual participants. Countries may find it burdensome to spend a lot on bombs, but the alternative—to be less well-armed than their rivals—is even worse.
In light of the growing importance of rank in the education marketplace, universities face increasing pressure to bid for the various resources that facilitate the quest for high rank. These pressures have spawned a positional arms race that already has proved extremely costly, and promises to become more so.
↑ comment by Stuart_Armstrong · 2014-08-11T10:24:00.462Z · LW(p) · GW(p)
Yep, a fair summary, with none of the wild poetry :-)
↑ comment by [deleted] · 2016-03-05T03:50:19.287Z · LW(p) · GW(p)
I came here to complain that I don't understand meditations on moloch at all. Can someone explain the points more explicitly? And you have. Thank you.
If:
- Excess resources / we are in the dream time and can afford non-competitive behavior.
- Physical limitations to what can be sacrificed
- Economic competition actually producing positive utility for consumers (but this is fragile)
- Coordination, e.g. via governments, guilds, friendships, etc.
Then those factors are the critical systemic elements of progress.
However, is there any reason to believe those are the critical systemic elements, rather than others?
comment by ChristianKl · 2014-08-06T21:11:38.194Z · LW(p) · GW(p)
This effect can be combated to some extent with extra information. If the customers (or journalists, bloggers, etc...) know about this, then the coffee plantations will suffer.
The US food industry has successfully lobbied for laws to prevent animal rights activists from filming how it treats its animals.
A corporation can argue that the exact mix it feeds to animals is a trade secret and go after people who spread information about it. Companies have far more resources to fight legal battles than bloggers or journalists in underfunded newsrooms.
UK libel law would be another instance of how corporations can fight journalists who spread the information that a product harms customers' health.
↑ comment by Stuart_Armstrong · 2014-08-07T10:16:10.934Z · LW(p) · GW(p)
Interestingly, those laws are examples of coordination, which suggests the picture is more nuanced than it seems (which is always a safe prediction :-)
comment by Kaj_Sotala · 2014-08-07T13:00:21.427Z · LW(p) · GW(p)
As I mentioned in the comments of Scott's post, I've been thinking about turning my "Technology will destroy human nature" essay into a formal paper, but I'd probably need someone more familiar with evolutionary biology as my co-author to make sure that the analogies to evolution make sense and to otherwise help develop it. TWDHN was basically talking about various physical limits that are currently stopping us from racing to the bottom but which technology will eventually overcome. Now that I've read Meditations, I might want the paper to discuss the other things holding Moloch in check that Scott talks about (excess resources, utility maximization, coordination), too.
↑ comment by atorm · 2014-08-07T22:43:21.961Z · LW(p) · GW(p)
I'm a biologist looking for co-authorships.
↑ comment by Kaj_Sotala · 2014-08-17T15:00:23.482Z · LW(p) · GW(p)
Sorry for the late response! I was avoiding LW.
Here's a copy of an e-mail where I summarized the argument in TWDHN before, and suggested some directions for the paper (note that this was written before I had read the post on Moloch):
In a nutshell, the argument goes something like:
- Evolution adapts creatures to the regularities of their environment, with organisms evolving to use those regularities to their advantage.
- A special case of such regularities are constraints, things which an organism must adapt to even though it may be costly: for example, very cold weather forces an organism to spend a part of its energy reserves on growing a fur or other forms of insulation.
- If a constraint disappears from the environment, evolution will gradually eliminate the costly adaptations that developed in response to it. If the Arctic Circle were to become warm, polar bears would eventually lose their fur or be outcompeted by organisms that never had a thick fur in the first place.
- Many fundamental features of human nature are likely adaptations to various constraints: e.g. the notion of distinct individuals and personal identity may only exist because we are incapable of linking our brains directly together and merging into one vast hive mind. Conscious thought may only exist because consciousness acts as an "error handler" to deal with situations where our learned habits are incapable of doing the job right, and might become unnecessary if there was a way of pre-programming us with such good habits that they always got the job done. Etc.
- The process of technological development acts to remove various constraints in our environment: for example, it may one day become possible to actually link minds together directly.
- If technology does remove previous constraints from our environment, the things that we consider fundamental human values would actually become costly and unnecessary, and be gradually eliminated as organisms without those "burdens" would do better.
What I'd like to do in the paper would be to state the above argument more rigorously and clearly, provide evidence in favor of it, clarify things that I'm uncertain about (Does it make sense to distinguish constraints from just regularities in general? Should one make a distinction between constraints in the environment and constraints from what evolution can do with biological cells?), discuss various possible constraints as well as what might eliminate them and how much of an advantage that would give to entities that didn't need to take them into account, raise the possibility of some of this actually being a good thing, etc. Stuff like that. :-)
Does that argument sound sensible (rather than something that represents a total misunderstanding of evolutionary biology) and something that you'd like to work on? Thoughts on how to expand it to take Moloch into account?
Also, could you say a little more about your background and amount of experience in the field?
↑ comment by atorm · 2014-08-22T03:02:27.689Z · LW(p) · GW(p)
This argument seems like something I would need to think long and hard about, which I see as a good thing: it seems rare to me that non-trivial things are simple and apparent. I don't see any glaring misinterpretation of natural selection. I would be interested in working on it in a "dialogue intellectually and hammer out more complete and concrete ideas" sense. I'm answering this quickly in a tired state because I'm not on LW as much as I used to be and I don't want to forget.
I'm getting a PhD in a biological field that is not Evolution. Both this and my undergraduate education covered evolution because it underlies all the biological fields. I have one publication out that discusses evolution but is not actually specifically relevant to this topic. I'll happily share more detail in private communications if you can't find an explicitly evolutionary biologist.
↑ comment by Kaj_Sotala · 2014-08-23T17:04:28.645Z · LW(p) · GW(p)
That sounds good to me. :-)
E-mail me at xuenay@gmail.com and we can talk about things in more detail?
↑ comment by Stuart_Armstrong · 2014-08-07T14:35:41.048Z · LW(p) · GW(p)
We may make use of you and your ideas :-)
comment by Dagon · 2014-08-07T06:46:53.673Z · LW(p) · GW(p)
First of all, it's not obligatory for an optimisation process to trample everything we value into the mud.
In an intense competition environment, it is obligatory. Any resources spent not optimizing on a fitness axis necessarily make the entity more likely to lose.
Which implies to me that the only way out is to compete so thoroughly and be rich enough that we can act like we're not in competition, and can afford to waste effort on values other than survival/expansion.
↑ comment by Stuart_Armstrong · 2014-08-07T10:19:27.900Z · LW(p) · GW(p)
If the optimum for any political party is to produce 80% fluff and 20% substance, then optimisation pressure will push them towards it. (fun little observation: it seems to me that parties go something like 80-20 fluff substance in how they spend their money, but 20-80 in how individual party members spend their time).
↑ comment by satt · 2014-08-12T02:11:29.133Z · LW(p) · GW(p)
Unfortunately, that kind of breathing-space-at-the-optimum seems a lot more likely in the case of political parties than for humanity as a whole.
↑ comment by Stuart_Armstrong · 2014-08-12T08:49:58.807Z · LW(p) · GW(p)
Is it? Products can be dangerous, but they can't instantly kill their purchaser; it's at least conceivable that this, plus increased information, would restrict how bad various products could get. It's not clear that there are no fences on the various slopes.
↑ comment by satt · 2014-08-16T15:47:47.978Z · LW(p) · GW(p)
What I had in mind were the two largest traps: societies which maintained breathing space being overrun by societies which ruthlessly optimized to overrun other societies, and our entire planet being overrun by more efficient extraterrestrial intelligences which ruthlessly optimized for ability to expand through the universe.
I agree that for more mundane cases like dangerous consumer products and political parties, there'll probably be some "fences on the various slopes". But they will be cold comfort indeed if we get wiped out by Malthusian limit-embracing aliens in a century's time!
↑ comment by Stuart_Armstrong · 2014-08-18T10:12:12.222Z · LW(p) · GW(p)
I take your point.
But it occurs to me that ruthlessly efficient societies need to be highly coordinated societies, which may push in other directions; I wonder if there's something worth digging into there...
↑ comment by satt · 2014-08-20T01:18:54.991Z · LW(p) · GW(p)
Another hopeful thought: we might escape being eaten for an unexpectedly long time because evolution is stupid. It might consistently program organic life to maximize for proxies of reproductive success like social status, long life, or ready access to food, rather than the ability to tile the universe with copies of itself.
This in no way implies humanity's safe forever; evolution would almost surely blunder into creating a copy-maximizing species eventually, by sheer random accident if nothing else. But humanity's window of safety might be millions or billions or trillions of years rather than millennia.
comment by ChristianKl · 2014-08-07T00:18:21.565Z · LW(p) · GW(p)
Ok, now to the point. The "Moloch" idea is very interesting, and, at the FHI, we may try to do some research in this area (naming it something more respectable/boring, of course, something like "how to avoid stable value-losing civilization attractors").
Given that we don't want to stay with jargon that's not open to outsiders, it would be great if we could find a good catch phrase that's not as long as the one you suggest.
↑ comment by Stuart_Armstrong · 2014-08-07T10:17:21.949Z · LW(p) · GW(p)
Given that we don't want to stay with jargon that's not open to outsiders
You haven't experienced academia, have you? ;-)
But a good catch phrase alongside the dull description would be a good idea.
comment by KnaveOfAllTrades · 2014-08-07T01:04:04.035Z · LW(p) · GW(p)
A recurring problem with these forms of civilizational inadequacy is bystander effect/first-mover disadvantage/prisoners' dilemma/etc, and the obvious solutions (there might be others) are coordination or enforcement. Seeing if there are other solutions and seeing how far people have already run with coordination and enforcement seems promising. Even if one is pessimistic about how easily the problems can be addressed and thinks we're probably screwed anyway but slightly less probably screwed if we try, then the value of information is still very high; this is a common feature of FHI's work, which, by the way, I consider extremely valuable!
What reasons might we have to believe or disbelieve that we can do better than (or significantly improve) governments, the UN, NATO, sanctions, treaty-making, etc.?
comment by IlyaShpitser · 2014-08-06T20:00:51.266Z · LW(p) · GW(p)
Why is Elua still here?
↑ comment by Vulture · 2014-08-06T21:57:18.175Z · LW(p) · GW(p)
Moloch is slow. And humanity is young. And Civilization is even younger.
↑ comment by James_Miller · 2014-08-06T22:36:10.480Z · LW(p) · GW(p)
Human productivity relies on Elua, so Moloch won't destroy her until after we have fully automated our economy. But after that don't expect Moloch to act slowly.
↑ comment by Stuart_Armstrong · 2014-08-07T10:12:39.118Z · LW(p) · GW(p)
Past Eluas have died (see the examples of traditional warrior ethos, of letting children bring themselves up, of personal interactions in all exchanges, and a whole bunch of things that make sense in tribal environments but not in ours).
↑ comment by ChristianKl · 2014-08-07T11:08:08.417Z · LW(p) · GW(p)
Between Omega, Moloch, Elua and the Basilisk we have quite a proliferation of gods on LW.
↑ comment by IlyaShpitser · 2014-08-07T18:03:26.319Z · LW(p) · GW(p)
That's what happens when you try to quit religion cold turkey.
↑ comment by James_Miller · 2014-08-08T01:46:37.539Z · LW(p) · GW(p)
One interpretation of Greek Mythology is that the Greeks didn't really believe in their gods, but the gods represented different aspects of human nature. We are sort of doing the same here and so following in the classical tradition.
↑ comment by Vulture · 2014-08-08T02:35:31.733Z · LW(p) · GW(p)
I agree that this is a possible interpretation, but (just to be clear) it isn't a very sensible one, is it? Being one of those "People we would naturally expect to be very different from us in some respect are actually very similar, only they behave exactly as if they differed, for a complicated reason" things
↑ comment by SilentCal · 2014-08-11T20:46:21.233Z · LW(p) · GW(p)
I don't know about Greek Mythology, but this is totally a real thing regarding Hinduism (though not necessarily the predominant view). http://en.wikipedia.org/wiki/Atheism_in_Hinduism
↑ comment by A1987dM (army1987) · 2014-08-08T10:03:47.925Z · LW(p) · GW(p)
(I suspect certain Greeks believed in them literally and other Greeks believed in them as metaphors but they never realized they disagreed for reasons akin to these.)
↑ comment by Azathoth123 · 2014-08-09T04:03:19.374Z · LW(p) · GW(p)
They did in fact have open disagreements. Most famously, Socrates was executed over one.
↑ comment by ChristianKl · 2014-08-08T09:38:59.926Z · LW(p) · GW(p)
That probably depends a lot on what you mean by "really believe". They probably didn't believe in the same sense of "believe" that 21st-century Christians believe in their God.
↑ comment by fubarobfusco · 2014-08-08T18:24:59.321Z · LW(p) · GW(p)
Not all 21st-century Christians "believe" in the same sense, either. If a future anthropologist or classicist were to reconstruct the "beliefs" of modern Christianity from the kind of patchwork sources that we have for ancient Greek myth, they might have a pretty hard time.
↑ comment by advancedatheist · 2014-08-08T19:29:52.527Z · LW(p) · GW(p)
I wonder what people living 10,000 years from now (assuming that "people" would even exist then, however defined) would think of Christianity. By that time, one of the world's dominant religions might have started 8,000 years from now/2,000 years "before" their time, and only a few antiquarians would even know of the existence of the Christian religion. They would have to reconstruct it from fragmentary evidence, comparable to efforts to reconstruct, say, the ancient Sumerian religion.
↑ comment by Lumifer · 2014-08-07T14:52:52.304Z · LW(p) · GW(p)
Don't forget Eris :-D
↑ comment by ChristianKl · 2014-08-07T15:40:44.290Z · LW(p) · GW(p)
I did forget. Could you link to a source?
↑ comment by A1987dM (army1987) · 2014-08-09T08:37:46.117Z · LW(p) · GW(p)
Seconded. The presence of this user makes googling for "Eris site:lesswrong.com" useless.
↑ comment by NancyLebovitz · 2014-08-24T04:45:00.999Z · LW(p) · GW(p)
I haven't seen much explicit discussion of Murphy, but all the talk about AIs going wrong is a tribute to his influence.
↑ comment by mako yass (MakoYass) · 2014-12-21T03:44:16.010Z · LW(p) · GW(p)
Add The Matrix Lords to the list. If you take acausal cooperation seriously, your relationship with them is a complex one.
comment by Lumifer · 2014-08-06T20:33:22.743Z · LW(p) · GW(p)
One word which is notably missing from Yvain's excellent blog post is "externalities". The concept is there, but acknowledging that it's externalities we're talking about would be helpful. There is a fair amount of literature on them.
↑ comment by Stuart_Armstrong · 2014-08-07T10:14:40.360Z · LW(p) · GW(p)
The concept is more general and more interesting than externalities. One of the Ethiopian coffee-grower examples was about asymmetric information, not externalities. And sacrificing everything superfluous to stay competitive is a good description of what firms do, and is generally seen as positive in economics.
↑ comment by Lumifer · 2014-08-07T14:39:34.185Z · LW(p) · GW(p)
Well, I'm not convinced there is a single concept involved. What Yvain talks about is complex and multilayered. There are externalities and information asymmetries and attractor basins, etc. If I was forced to pick one expression I'd say something like emergent system behaviour, but that's not quite it either.
And sacrificing everything superfluous to stay competitive is a good description of what firms do
Beg to disagree. That is one factor which drives their behaviour and a major one, too, but there are others as well.
↑ comment by Vulture · 2014-08-14T13:12:55.348Z · LW(p) · GW(p)
Well, I'm not convinced there is a single concept involved. What Yvain talks about is complex and multilayered. There are externalities and information asymmetries and attractor basins, etc. If I was forced to pick one expression I'd say something like emergent system behaviour, but that's not quite it either.
Am I the only one who didn't feel like the central Moloch concept was hard to reach? As I understand it, "Moloch" just refers to a specific kind of coordination failure, in which participants in a competitive market-like environment "defect" by throwing their own values under the bus, to no ultimate gain besides a temporary positional advantage over those who do not do so. Obviously this phenomenon can create/contribute to various eschatological attractor basins, of the paperclip-tiles/disneyland-with-no-children sort; without the "Moloch" concept it might be non-obvious that/why a marketplace of agents with human values could end up in such a situation.
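A minimal payoff sketch of that dynamic (all the numbers are my own illustrative assumptions, not anything from Scott's post): sacrificing your values strictly dominates, yet when everyone sacrifices, relative standing is unchanged and everyone is absolutely worse off, which is exactly the prisoner's-dilemma shape of the trap:

```python
# Toy model: sacrificing values for a positional edge as a prisoner's dilemma.
# All numbers are illustrative assumptions.

VALUE_RETAINED = {"keep": 5, "sacrifice": 0}  # intrinsic value you hold on to
PRIZE = 12                                    # positional prize being fought over

def payoff(me, other):
    mine = VALUE_RETAINED[me]
    if me == "sacrifice" and other == "keep":
        mine += PRIZE        # out-compete the non-sacrificer outright
    elif me == other:
        mine += PRIZE / 2    # evenly matched: split the prize
    return mine

for a in ("keep", "sacrifice"):
    for b in ("keep", "sacrifice"):
        print(f"A {a:9} / B {b:9} -> A: {payoff(a, b):4.1f}, B: {payoff(b, a):4.1f}")

# keep/keep           -> 11.0 each
# sacrifice/keep      -> 12.0 vs 5.0   (sacrificing strictly dominates)
# sacrifice/sacrifice ->  6.0 each     (same relative status, worse absolute)
```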
comment by RomeoStevens · 2014-08-07T03:22:05.386Z · LW(p) · GW(p)
It seems to me that people have a preference against defection when they are able to grasp that it is in fact a PD scenario. This creates a lower bound on how complex a scenario has to be before people will be confused enough to defect reliably and not either notice or be able to come up with reasonable safeguards. As we create more robust complex things I would naively expect the problem to get worse, but OTOH we have the countervailing trend of coordination and information dissemination getting easier. So...we should want to live in a world where we take it more seriously when economists point out perverse incentives? That starts getting political since many economists have policy axes to grind.
comment by Azathoth123 · 2014-08-08T04:14:04.444Z · LW(p) · GW(p)
Yvain's post is confused in a number of ways; in fact, I get the feeling that he hasn't added much to the articles he links at the beginning.
The most blatant problem is his Las Vegas example. He asserts:
Like all good mystical experiences, it happened in Vegas. I was standing on top of one of their many tall buildings, looking down at the city below, all lit up in the dark. If you’ve never been to Vegas, it is really impressive. Skyscrapers and lights in every variety strange and beautiful all clustered together. And I had two thoughts, crystal clear:
It is glorious that we can create something like this.
It is shameful that we did.
What reason does Yvain give for it being shameful? That it's an inefficient use of resources. This is an interesting objection, given that the rest of the essay consists of him objecting to the process of Moloch/Gnon destroying all human values in the name of efficiency. So when faced with an example of Gnon doing the opposite, i.e., building something beautiful in the middle of the desert despite concerns about inefficiency, how does he react? By declaring that "there is no philosophy on earth that would endorse" its existence.
Yeah, Yvain is not off to a good start here.
↑ comment by fubarobfusco · 2014-08-08T17:38:13.531Z · LW(p) · GW(p)
It doesn't seem to me that the author's objection to Las Vegas is that it is an inefficient use of resources. He does mention use of resources, but that isn't the main point of that section. (Italics in the original; boldface added.)
Like, by what standard is building gigantic forty-story-high indoor replicas of Venice, Paris, Rome, Egypt, and Camelot side-by-side, filled with albino tigers, in the middle of the most inhospitable desert in North America, a remotely sane use of our civilization’s limited resources?
And it occurred to me that maybe there is no philosophy on Earth that would endorse the existence of Las Vegas. Even Objectivism, which is usually my go-to philosophy for justifying the excesses of capitalism, at least grounds it in the belief that capitalism improves people’s lives. Henry Ford was virtuous because he allowed lots of otherwise car-less people to obtain cars and so made them better off. What does Vegas do? Promise a bunch of shmucks free money and not give it to them.
Las Vegas doesn’t exist because of some decision to hedonically optimize civilization, it exists because of a quirk in dopaminergic reward circuits, plus the microstructure of an uneven regulatory environment, plus Schelling points. A rational central planner with a god’s-eye-view, contemplating these facts, might have thought “Hm, dopaminergic reward circuits have a quirk where certain tasks with slightly negative risk-benefit ratios get an emotional valence associated with slightly positive risk-benefit ratios, let’s see if we can educate people to beware of that.” People within the system, following the incentives created by these facts, think: “Let’s build a forty-story-high indoor replica of ancient Rome full of albino tigers in the middle of the desert, and so become slightly richer than people who didn’t!”
It isn't just that Vegas pours money into a hole in the desert that could be better used on something that makes people a lot better off. It's that Vegas makes people worse off by exploiting a bug in human cognition. And that the incentive structure of modern capitalism — with a little help from organized crime, historically — drove lots of resources into exploiting this ugly bug.
A self-aware designer of ants would probably want to fix the bug that leads to ant mills, the glitch in trail-following behavior that allows hundreds or thousands of ants to purposelessly walk in a loop until they walk themselves to death. But for an ant, following trails is a good (incentivized) behavior, even though it sometimes gets "exploited" by an ant mill.
The point isn't "Vegas is bad because it's not optimal." It's "Vegas is a negative-sum condition arising from a bug in an economic algorithm implemented cellularly. Reflection allows us to notice that bug, and capitalism gives us the opportunity to exploit it but not to fix it."
Ants aren't smart enough to worry about ant mills. Humans are smart enough to worry about civilization degenerating into negative-sum cognitive-bug-exploiting apparatus.
↑ comment by NancyLebovitz · 2014-08-24T04:52:01.752Z · LW(p) · GW(p)
The Elua thing about Las Vegas isn't that people can be snagged by intermittent reward, it's that people would like to have some sparkle with their intermittent rewards, so you get the extravagant architecture.