I.e. I agree with your analysis that they (and artemisinin treatment) are great and worth doing if the local governments don't tax or steal them (in various ways) too intensively.
Douglas,
It's $1000 per life, not per net, because in most cases nets or treatment won't avert a death.
g,
There's plenty of room to work on vaccines and drugs for tropical diseases, improved strains of African crops like cassava, drip irrigation devices, charcoal technology, etc.
http://en.wikipedia.org/wiki/Amy_Smith http://web.mit.edu/newsoffice/2008/lemelson-sustainability-0423.html
kebko,
The best interventions today seem to cost $1000 per life saved. Much of the trillion dollars was Cold War payoffs, bribing African leaders not to go Communist, so the fact that it was stolen/wasted wasn't that much of a concern.
I tend to prefer spending money on developing cheaper treatments and Africa-suitable technologies, then putting them in the public domain. That produces value but nothing to steal.
Regarding g's point, I note that there's a well-established market niche for this sort of thing: it's like the popularity of Ward Connerly among conservatives as an opponent of affirmative action, or Ayaan Hirsi Ali (not to downplay the murderous persecution she has suffered, or necessarily to attack her views) among advocates of war against Muslim countries. She'll probably sell a fair number of books, get support from conservative foundations, and some nice speaking engagements.
Steven,
Information value.
g,
This is based on the diavlog with Tyler Cowen, who did explicitly say that decision theory and other standard methodologies don't apply well to Pascalian cases.
Pablo,
Vagueness might leave you unable to subjectively distinguish probabilities, but you would still expect that an idealized reasoner, using Solomonoff induction with unbounded computing power and your sensory info, would not view the probabilities as exactly balancing, which would give infinite information value to further study of the question.
The idea that further study wouldn't unbalance estimates in humans is empirically false in the case of a number of smart people who have undertaken it, and looks like another rationalization.
The fallacious arguments against Pascal's Wager are usually followed by motivated stopping.
"that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God)." Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing. A better argument is that wasting time thinking about Christianity will distract you from more probable weird-physics and Simulation Hypothesis Wagers.
A more important criticism is that humans just physiologically don't have any emotions that scale linearly. To the extent that we approximate utility functions, we approximate ones with bounded utility: even utilitarians have only a bounded concern with acting (or aspiring to act, or believing that they aspire to act) as though their concern with good consequences were close to linear in the consequences, i.e. a bounded interest in 'shutting up and multiplying.'
Robin,
What standard do you use to identify "good tastes and values" to be open to?
This looks like a relatively clear case of an excessive narrative-to-signal ratio.
And again, babyeating norms would need to invade in a similar fashion, and without norms other than babyeating, the communal feeding pen selects for zero provisioning effort.
"If most of the total cost of growing a child lies in feeding it past the rapid growth stage, rather than birthing 50 infants and feeding them up to that point,"
From their visibility in the transmitted images it seems the disproportion isn't absurdly great. Also, if the scaling issues with their brains were so extreme, why didn't they become dwarfs? One big tool-using crystal being versus 500 tool-using dwarfs of equal intelligence seems like bad news for the giant.
"You're also postulating that a whole group gets this mutation in one shot - but even if you say "genetic drift", it seems pretty disadvantageous to a single invader."
Altruistic punishers don't need to be common, one or two can coordinate a group (the altruistic punisher recruits with the credible threat of punishment, and then imposes the norm on the whole group), and an allele for increased provisioning wouldn't directly conflict with babyeating instincts.
"I fear that you have not managed to convince me of this. If the general idiom of children in pens is stable, then the adults contributing lots and lots of children (as many as possible) is also evolutionarily stable."
I have a tribe of Babyeaters that each put 90% of their effort into reproducing and 10% into contributing to the common food supply of the pen. This winds up producing 5000 offspring, 30 of which are not eaten and are just adequately fed by the 10% of total resources allocated to the food supply. Now consider an allele, X, that disposes carriers to engage in altruistic punishment, and punishment of non-punishers, in support of a norm that adults spend most of their effort on contributing to the food supply (redirecting energy previously wasted, with thermodynamic losses, on producing and maintaining offspring destined to be devoured, toward offspring that will grow into adults). Every individual in the tribe will tend to have more surviving offspring, and the group will tend to be victorious in intertribal extermination warfare. Group selection will thus favor the spread of X, probably quite a bit more strongly than it would favor the spread of an allele for support of the babyeating norm (X achieves the benefits of babyeating while recovering the metabolic resources wasted on devoured babies). The more closely X aligns offspring production and food contribution, the more it will be spread by group selection and the more it will reduce babyeating.
In a world with many groups, all engaging in winnowing-level babyeating, allele X can enter, spread, and vastly reduce babyeating. What is unconvincing about that argument?
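A minimal sketch of that arithmetic in Python (the two per-unit-effort rates are linear extrapolations from the hypothetical 90%/10% tribe above; linearity, and the 1% sweep grid, are my assumptions, not part of the original example):

    # Back-of-envelope model of the allele-X argument. Rates extrapolated
    # linearly from the hypothetical tribe: 90% reproductive effort -> 5000
    # offspring produced; 10% food effort -> food for 30 survivors.

    def surviving_adults(food_effort,
                         offspring_per_effort=5000 / 0.9,
                         survivors_fed_per_effort=30 / 0.1):
        """Surviving adults = min(offspring produced, offspring the pen can feed)."""
        produced = offspring_per_effort * (1.0 - food_effort)
        fed = survivors_fed_per_effort * food_effort
        return min(produced, fed)

    baseline = surviving_adults(0.1)  # the 90%/10% tribe: ~30 surviving adults

    # Allele X enforces a higher food-contribution norm; sweep for the best norm.
    best_norm = max((f / 100 for f in range(101)), key=surviving_adults)
    print(baseline, best_norm, surviving_adults(best_norm))
    # -> ~30, 0.94, ~282: a tribe enforcing X fields nearly ten times as many
    # surviving adults per generation, the group-selective edge claimed above.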
"Suppose that all Babyeaters make equal contributions to the food pen; their leftover (variance in) food resources could be used to grow their own bodies, bribe desirable mates (those of good genetic material as witnessed by their large food contributions), or create larger numbers of offspring."
Different alleles might drive altruistic punishment (including of non-punishers) in support of many different levels of demand on tribe members. Group selection would support alleles supporting norms such that the mean contribution to the pen food supply was well-matched with the mean number of offspring contributed to the pen. Variance doesn't invalidate that conclusion.
Michael,
I guess it depends on whether the fantastic element can adequately stand in for whatever it is supposed to represent. Magic starship physics can be used to create a Prisoner's Dilemma without trouble, since PDs are well understood, and it's fairly obvious that we will face them in the future. No-Singularity and FTL, so that we can have human characters, are also understandable as translation tools. If Babyeaters are a stand-in for 'abhorrent alien evolved morality' to an audience that already grasps the topic, then the details of their evolution don't matter. If, however, they are supposed to make the possibility of a nasty evolved morality come alive to cosmopolitan optimistic science fiction fans or transhumanists, then they should be relatively probable.
Eliezer,
On the other hand, since you've already written the story, using one of your favorite examples of the nonanthropomorphic nature of evolution as inspiration for the Babyeaters, and have no authorial line of retreat available at this time, we can probably leave this horse for dead.
Eliezer, you're right that the coordination mechanisms would be imperfect, so it's an overstatement to say NO babyeating would occur; I meant that you wouldn't have the 'winnowing' sort of babyeating, with consistent orders-of-magnitude disproportions between pre- and post-babyeating offspring populations.
Nits. I'd say there are probably lots of at-least-Babyeater-level-abhorrent evolutionary paths (not that Babyeaters are that bad, I'd rather have a Babyeater world than paperclips) making up a big share of evolved civilizations (it looks like the great majority, but it's very tough to be confident). Any lack of calm is irritation at the use of a dubious example of abhorrent evolved morality when you could have used one that was both more probable AND more abhorrent.
I wonder about the psychological mechanisms and intuitions at work in the Babyeaters. After all, human babies don't look like Babyeater babies, they're less intelligent, etc. Their intellectual extension of strong intuitions to exotic cases might well be much more flexible than their applications to situations from the EEA, e.g. satisfying them by drinking cocktails containing millions of blastocysts. Similarly, human intuitions start to go haywire in exotic sci-fi thought experiments and strange modern situations.
"I don't understand why you think that provisioning your own offspring is a group advantage." If parents could selectively provision their own offspring in the common pen, then the group would not be wracked by intense commons-problem selective pressures driving provisioning towards zero and reproduction towards the maximum (thus resulting in extermination by more numerous tribes).
Actually, babyeating in the common pen isn't even internally stable. Let's take the assumptions of the situation as given:
- There is intertribal extermination warfare. Larger tribes tend to win and grow. Equal division of food among excessive numbers of offspring results in fewer surviving adults, and thus slower tribal population growth and more likely extermination.
- All offspring are placed in a common pen.
- Food placed in the common pen is automatically equally divided among those in the pen and adults cannot selectively provision.
- Group selection has resulted in collective enforced babyeating to reduce offspring numbers (without regard for parentage of the offspring) in the common pen to the level that will maximize the number of surviving adults given the availability of food resources.
- Individuals vary genetically in ways that affect their relative investment in producing offspring and in agricultural production to place into the common pen.
Under these circumstances, there will be intense selective pressure for individuals that put all their energy (after survival) into producing more offspring (which directly increase their reproductive fitness) rather than agricultural production (which is divided between their offspring and the offspring of the rest of the tribe). As more and more offspring are produced (in metabolically wasteful fashion) and less and less food is available, the tribe is on the path to extinction.
Groups that survive will be those in which social intelligence is used to punish (by death, devouring of offspring before they are placed in the pen, etc) those making low food contributions relative to offspring production. Remembering offspring production would be cognitively demanding, and only one side of the tradeoff needs to be measured, so we can guess that punishment of those making small food contributions would develop. This would force a homogeneous level of reproductive effort, and group selection would push this level to the optimal tradeoff between agriculture and offspring production for group population growth, with just enough offspring to make optimal use of the food supply. This group is internally stable, and has much higher population growth than one wracked by commons problems, but it will also have no babyeating in the common pen.
I.e. sister ants with their parents alive don't need complex social recognition and punishment mechanisms to deal with conflicting individual and group interests, since their best outcomes coincide. That coincidence of interests can be almost as complete as for a group of clones.
Given ant chromosomal structure, an ant is more related to her sisters than her offspring, and a single female can convert food/resources to offspring roughly as well as two females each with half the resources.
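For concreteness, the standard haplodiploid relatedness arithmetic (this assumes a singly-mated queen, which the ant example above doesn't state explicitly):

$r_{\text{full sisters}} = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{3}{4} \;>\; \tfrac{1}{2} = r_{\text{mother-offspring}}$

since a female shares her father's entire haploid genome with every full sister, but only half of her own genome with each offspring.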
Even relatively strong social recognition and coordination systems, as in primates, leave plenty of opportunities to shirk and betray. Behaviors of selective provisioning and parental investment (the cheating that already sometimes occurs and is punished among Babyeaters) serve both group and individual fitness, reducing the strength of group selection needed to maintain the altruistic punishment of shirkers. It would thus be easier for it to evolve, and groups of selective-provisioners would on average have a competitive advantage (since the group-beneficial slow population growth would degrade more slowly) against groups with the dispositions in the story.
Now, if the social coordination mechanisms got absurdly strong, much stronger than in any human society ever, this would no longer be the case. Likewise, if the story's Babyeaters became universal, selective-provisioners would not be able to arise among them. So there is no contradiction, but there is a probabilistic surprise.
Re: "MST3K Mantra"
Very improbable evolved beings don't make for good warnings about the precious moral miracle of human values. It would be better to use an example of a plausible 'near-miss,' e.g. by extrapolating from something common in Earth species.
"Why doesn't modern society securitize hard assets into money of zero maturity, instead of using a purely abstract debt-based currency to denominate debts? Because it would be slightly more complicated, that's why." Eliezer,
I think you're mistaken about the relative complexity of parents selectively provisioning their own offspring, versus the baroque and complex adaptations for social intelligence and coordination required for this system to be stable.
"And anyone who tried to cheat, to hide away a child, or even go easier on their own children during the winnowing - well, the Babyeaters treated the merciful parents the same way that human tribes treat their traitors."
This means that the Babyeaters were capable of recognizing and preferring their own children after birth. Selectively provisioning your own offspring is an extremely common adaptation, as is allocating resources preferentially (e.g. starving runts) and most of the necessary complexity already seems to exist among the Babyeaters. Separate pens/nests are simpler than evolving a complex set of adaptations to manage and enforce an even-handed winnowing.
Consider that with pooled offspring in a single pen, we now have two commons problems: aside from even-handed winnowing, Babyeaters have strong incentives to shirk in their agricultural labor. For the Babyeaters to develop a set of immensely powerful adaptations for managing such conflicts of interest (exceedingly strong by the standards of Earth's biodiversity) is going to take evolution a long time, during which selective provisioning/penning/devouring would likely take hold in some groups and then sweep the population.
"makes the large numb" Is obviously a result of an incomplete edit.
Why didn't the Babyeaters develop the practice of separate pens for each family, with tribes redistributing common resources (e.g. erratic, potentially rotting, meat from hunts) among parents, and parents feeding children out of their share? Maybe their brains lacked the capacity to recognize so many distinct offspring, but why not spray them with a pheromone? Producing vast numbers of offspring with big expensive full-size brains (which is itself implausible), with the large number to be destroyed immediately, would impose huge metabolic costs relative to privatizing the commons and distinguishing between offspring, then adjusting clutch-size based on parental resources.
Richard,
You missed (5): preserve your goals/utility function to ensure that the resources acquired serve your goals. Avoiding transformation into Goal System Zero is a nearly universal instrumental value (none of the rest are universal either).
Patrick,
Those are instrumental reasons, and could be addressed in other ways. I was trying to point out that giving up big chunks of our personality for instrumental benefits can be a real trade-off.
http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
"Ingroup-outgroup dynamics, the way we're most motivated only when we have someone to fear and hate: this too is an evolved value, and most of the people here would prefer to do away with it if we can."
So you would want to eliminate your special care for family, friends, and lovers? Or are you really just saying that your degree of ingroup-outgroup concern is less than average and you wish everyone was as cosmopolitan as you? Or, because ingroup-concern is indexical and results in different values for different ingroups, do you wish everyone shared your precise ingroup concerns? Or that you are in a Prisoner's Dilemma with other groups (or worse), and you think the benefit of changing the values of others would be enough for you to accept a deal in which your own ingroup-concern was eliminated?
Roko, the Minimum Message Length of that wish would be MUCH greater if you weren't using information already built into English and our concepts.
James,
"I have set guards in the air that prohibit lethal violence, and any damage less than lethal, your body shall repair." I'm not sure whether this would prohibit the attainment or creation of superintelligence (capable of overwhelming the guards), but if not then this doesn't do that much to resolve existential risks. Still, unaging beings would look to the future, and thus there would be plenty of people who remembered the personal effects of an FAI screw-up when it became possible to try again (although it might also lead to overconfidence).
"How about "Every time nerds on OB discuss human relationships, one decibel of evidence is added to the hypothesis that the singularity will look like a sci-fi fanfic novel""
That gets to near-certainty too fast.
Interest in previously boring (due to repetition) things regenerates over time. Eating strawberries every six months may not be as good as the first time (although nostalgia may make it better), but it's not obvious that it declines in utility.
We may also actively value non-boredom in some mid-level contexts, e.g. in sexual fidelity, or for desires that we consider central to our identity/narratives.
Why didn't you look on his website?
TGGP,
The society of Brave New World was exceedingly stable and not improving. Our current society has some chance of becoming much better.
Well, they won't be doing numerically identical pieces of work. Are you thinking of things like patronage and nepotism positions that exist solely to hand money to their holders? An auto company employee who comes to 'work' and sits at a desk doing nothing from 9 to 5 in order to collect a paycheck, which is offered because of the UAW, isn't contributing anything to the company or the economy, but his enrichment makes a difference to the union leaders, since he will provide union dues and a vote. Many people are in this category, but the most blatant ones are still only a small fraction of the population.
If you start to include positions like government-subsidized social services jobs with nil or negative effects on recipients, people providing medicine that has nil or negative effect on health, people working in subsidized private industries (agriculture subsidies, etc) that destroy wealth on net, office politics positions, organized crime, legal versions of same (telemarketers selling bogus products or tricking people into signing harmful contracts), and similar categories you could wind up with a majority of the population in many countries. But if by 'making a difference to someone' Robin just means that an employer benefits from having the employee on staff, most such jobs wouldn't qualify.
I'm just confused by your distinction between mutation and other reasons to fall into different self-consistent attractors. I could wind up in one reflective equilibrium rather than another because I happened to consider one rational argument before another, because of early exposure to values, genetic mutations, infectious diseases, nutrition, etc. It seems peculiar to single out the distinction between genetic mutation and everything else. I thought 'mutation' might be shorthand for things that change your starting values or reflective processes before extensive moral philosophy and reflection, and so would include early formation of terminal values by experience/imitation, but apparently not.
"(b) my being a mutant,"
It looks like (especially young) humans have quite a lot of ability to pick up a wide variety of basic moral concerns, in a structured fashion, e.g. assigning ingroups, objects of purity-concerns, etc. Being raised in an environment of science-fiction and Modern Orthodox Judaism may have given you quite unusual terminal values without mutation (although personality genetics probably play a role here too). I don't think you would characterize this as an instance of c), would you?
Daniel,
Every decision rule we could use will result in some amount of suffering and death in some Everett branches, possible worlds, etc, so we have to use numbers and proportions. There are more and simpler interpretations of a human brain as a mind than there are such interpretations of a rock. If we're not mostly Boltzmann-brain interpretations of rocks, that seems like an avenue worth pursuing.
Robin,
In that case can you respond to Eliezer more generally: what are some of the deviations from the competitive scenario that you would expect to prefer (upon reflection) that a singleton implement?
On the valuation of slaves, this comment seemed explicit to me.
nazgulnarsil,
A solar powered holodeck would be in trouble in deep space, particularly when the nearby stars are surrounded with Matrioshka shells/Dyson spheres. Not to mention being followed and preceded by smarter and more powerful entities.
Robin,
Do you think singleton scenarios in aggregate are very unlikely? If you are considering whether to push for a competitive outcome, then a rough distribution over projected singleton outcomes, and utilities for projected outcomes, will be important.
More specifically, you wrote that creating entities with strong altruistic preferences directed towards rich legacy humans would be bad, that the lives of the entities (despite satisfying their preferences) would be less valuable than those of hardscrapple frontier folk. It's not clear why you think that the existence of agents with those preferences would be bad relative to the existence of obsessive hardscrapple replicators. What if, as Nick Bostrom suggests, evolutionary pressures result in agents with architectures you would find non-eudaimonic in similar fashion? What if hardscrapple replicators find that they can best expand in the universe by creating lots of slave-minds that only care about executing instructions, rather than intrinsically caring about reproductive success?
nazgulnarsil,
Scarcity can be restored very, very shortly after satiation with digital reproduction and Malthusian population growth.
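As a rough illustration (the doubling time and resource ratio below are assumed numbers, not from the discussion): with doubling time $\tau$, a population of $N_0$ digital minds hits any fixed resource cap $R$ after

$t = \tau \log_2(R/N_0)$, e.g. $\tau = 1\ \text{day},\ R/N_0 = 10^{12} \implies t \approx 40\ \text{days}.$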
Robin,
Some brute preferences and values may be inculcated by connected social processes. Social psychology seems to point to flexible moral learning among young people (e.g. developing strong moral feelings about ritual purity as one's culture defines it through early exposure to adults reacting in the prescribed ways). Sexual psychology seems to show similar effects: there is a dizzying variety of learned sexual fetishes, and they tend to be culturally laden and connected to the experiences of today, but that doesn't make them wrong. Moral education dedicated to upholding the status quo may create real preferences for that status quo (in addition to the bias you mention, not in place of it), in a 'moral miracle' but not a physical one:
"It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular."
This doesn't make sense to me. A superintelligence could:
- Create a semi-random plausible human brain emulation de novo; whatever this emulation was, it would be the continuation of some set of human lives.
- Conduct simulations to explore the likely distribution of minds across the multiverse, as well as the degree to which emulations continuing their lives (in desirable fashions) would serve its altruistic goals. Vast numbers of copies could then be run accordingly, and the costs of exploratory simulation would be negligible by comparison, so there would be little advantage to continuing the lives of beings within our causal region as opposed to entities discovered in exploratory simulation.
If we're only concerned about proportions within 'extended-beings,' then there's more bang for the buck in running emulations of rare and exotic beings (fewer emulations are required to change their proportions). The mere fact that we find current people to exist suggests anthropically that they are relatively common (and thus that it's expensive to change their proportions), so current local people would actually be neglected almost entirely by your sort of Big World average utilitarian.
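A toy model of the 'bang for the buck' point (the copy counts and utilities are made up purely for illustration):

    # Average utility across instantiations of an 'extended being': adding k
    # happy emulations (utility 1) to n existing copies (utility 0) shifts the
    # being's average utility by k / (n + k).

    def average_utility_gain(existing_copies, happy_emulations):
        return happy_emulations / (existing_copies + happy_emulations)

    print(average_utility_gain(10, 90))     # 0.9   -- rare being: shifts cheaply
    print(average_utility_gain(10**9, 90))  # ~9e-8 -- common being: barely moves

    # So a Big World average utilitarian gets far more leverage from emulating
    # rare/exotic beings than from the (anthropically common) people around us.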
"If this is so, isn't it almost probability 1 that CEV will be abandoned at some point?"
Phil, if a CEV makes choices for reasons, why would you expect it to have a significant chance of reversing that decision without any new evidence or reasons, and for this chance to be independent across periods? I can be free to cut off my hand with an axe, even if the chance that I'll do it is very low, since I have reasons not to.
" I can see arguing with the feasibility of hard takeoff (I don't buy it myself), but if you accept that step, Eliezer's intentions seem correct."
Bambi,
Robin has already said just that. I think Eliezer is right that this is a large discussion, and when many of the commenters haven't carefully followed it, comments bringing up points that have already been explicitly addressed will take up a larger and larger share of the comment pool.
"Carl and Roko, I really wasn't trying to lay out a moral position," I was noting apparent value differences between you and Eliezer that might be relevant to his pondering of 'Lines of Retreat.'
"though I was expressing mild horror at encouraging total war, a horror I expected (incorrectly it seems) would be widely shared." It is shared, but there are offsetting benefits of accurate discussion.
"Eliezer, sometimes in a conversation one needs a rapid back and forth, often to clarify what exactly people mean by things they say. In such a situation a format like the one we are using, long daily blog posts, can work particularly badly." Why not have an online chat, and post the transcript?
Russell,
A broadly shared moral code with provisions for its propagation and defense is one of Bostrom's examples of a singleton. If altruistic punishment of the type you describe is costly, then evolved hardscrapple replicators won't reduce their expected reproductive fitness by punishing those who abuse the helpless. We can empathize with and help the helpless for the same reason that we can take contraceptives: evolution hasn't yet been able to stop us without incurring outweighing disadvantages.