Comment by davidpearce on Four Focus Areas of Effective Altruism · 2014-07-28T07:53:42.281Z · LW · GW

Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibration is done safely, intelligently and conservatively - a big "if", for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs but mandated(?) by CU.

Is this too rosy a scenario?

Comment by davidpearce on Confused as to usefulness of 'consciousness' as a concept · 2014-07-19T18:16:25.489Z · LW · GW

Eli, sorry, could you elaborate? Thanks!

Comment by davidpearce on Confused as to usefulness of 'consciousness' as a concept · 2014-07-19T11:00:12.285Z · LW · GW

Eli, fair point.

Comment by davidpearce on Confused as to usefulness of 'consciousness' as a concept · 2014-07-19T08:41:53.776Z · LW · GW

Eli, it's too quick to dismiss placing moral value on all conscious creatures as "very warm-and-fuzzy". If we're psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren't going to win any Fields medals - though chickens can recognise logical relationships and perform transitive inferences (cf. the "pecking order"). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example, feels awful regardless of your species-identity. Such panic involves a complete absence or breakdown of reflective self-awareness - illustrating how the most intense forms of consciousness don't involve sophisticated meta-cognition.

Either way, if we can ethically justify spending, say, $100,000 salvaging a 23-week-old human micro-preemie, then impartial benevolence dictates caring for beings of greater sentience and sapience as well - or at the very least, not actively harming them.

Comment by davidpearce on [Video Link] PostHuman: An Introduction to Transhumanism · 2014-01-14T12:27:16.320Z · LW · GW

"Health is a state of complete [sic] physical, mental and social well-being": the World Health Organization definition of health. Knb, I don't doubt that sometimes you're right. But is phasing out the biology of involuntary suffering really too "extreme" - any more than radical life-extension or radical intelligence-amplification? When talking to anyone new to transhumanism, I try also to make the most compelling case I can for radical superlongevity and extreme superintelligence - biological, Kurzweilian and MIRI conceptions alike. Yet for a large minority of people - stretching from Buddhists to wholly secular victims of chronic depression and chronic pain disorders - dealing with suffering in one guise or another is the central issue. Recall how for hundreds of millions of people in the world today, time hangs heavy - and the prospect of intelligence-amplification without improved subjective well-being leaves them cold. So your worry cuts both ways.

Anyhow, IMO the makers of the BIOPS video have done a fantastic job. Kudos. I gather future episodes of the series will tackle different conceptions of posthuman superintelligence - not least from the MIRI perspective.

Comment by davidpearce on Group Rationality Diary, August 1-15 · 2013-08-03T13:14:30.359Z · LW · GW

This is a difficult question. By analogy, should rich cannibals or human child abusers be legally permitted to indulge their pleasures if they offset the harm they cause with sufficiently large charitable donations to orphanages or children's charities elsewhere? On (indirect) utilitarian grounds if nothing else, we would all(?) favour an absolute legal prohibition on cannibalism and human child abuse. This analogy breaks down if the neuroscientific evidence suggesting that pigs, for example, are at least as sentient as prelinguistic human toddlers turns out to be mistaken. But I'm deeply pessimistic that this is the case.

Comment by davidpearce on Arguments Against Speciesism · 2013-08-01T16:57:17.727Z · LW · GW

Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept? Either way, making self-awareness a precondition of moral status has troubling implications. For example, consider what happens to verbally competent adults when feelings of intense fear turn into uncontrollable panic. In states of "blind" panic, reflective self-awareness and the capacity for any kind of meta-cognition are lost. Panic disorder is extraordinarily unpleasant. Are we to make the claim that such panic-ridden states aren't themselves important - only the memories of such states that a traumatised subject reports when s/he regains a measure of composure and some semblance of reflective self-awareness is restored? A pig, for example, or a prelinguistic human toddler, doesn't have the meta-cognitive capacity to self-reflect on such states. But I don't think we are ethically entitled to induce them - any more than we are ethically entitled to waterboard a normal adult human. I would hope posthuman superintelligence can engineer such states out of existence - in human and nonhuman animals alike.

Comment by davidpearce on Arguments Against Speciesism · 2013-08-01T08:43:55.662Z · LW · GW

Birds lack a neocortex. But members of at least one species, the European magpie, have convincingly passed the "mirror test" (cf. "Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition"). Most ethologists recognise passing the mirror test as evidence of a self-concept. As well as the higher primates (chimpanzees, orangutans, bonobos, gorillas), members of other species who have passed the mirror test include elephants, orcas and bottlenose dolphins. Humans generally fail the mirror test below the age of eighteen months.

Comment by davidpearce on Arguments Against Speciesism · 2013-07-31T20:22:28.340Z · LW · GW

Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)

Comment by davidpearce on Arguments Against Speciesism · 2013-07-31T16:21:42.106Z · LW · GW

Larks, by analogy, could a racist acknowledge that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect, but race is one of the things that has to be equal? If you think the "other things being equal" caveat dilutes the definition of speciesism so it's worthless, perhaps drop it - I was just trying to spike some guns.

Comment by davidpearce on Arguments Against Speciesism · 2013-07-31T13:21:46.321Z · LW · GW

Larks, all humans, even anencephalic babies, are more sentient than all Anopheles mosquitoes. So when human interests conflict irreconcilably with the interests of Anopheles mosquitoes, there is no need to conduct a careful case-by-case study of their comparative sentience. Identifying species membership alone is enough. By contrast, most pigs are more sentient than some humans. Unlike the antispeciesist, the speciesist claims that the interests of the human take precedence over the interests of the pig simply in virtue of species membership. (Heart-warming, yes, but irrational altruism - by antispeciesist criteria at any rate.) I try to say a bit more (without citing the Daily Mail) here:

Comment by davidpearce on Arguments Against Speciesism · 2013-07-29T11:56:58.081Z · LW · GW

Vanvier, you say that you wouldn't be averse to a quick end for young human children who are not going to live to see their third birthday. What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass those of a typical human toddler or mature pig?

Comment by davidpearce on Arguments Against Speciesism · 2013-07-29T10:18:42.903Z · LW · GW

Vanvier, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves? Young human children with genetic disorders are given love, care and respect - even if the nature of their illness means they will never live to see their third birthday. We don't hold their lack of "potential" against them. Likewise, pigs are never going to acquire generative syntax or do calculus. But their lack of cognitive sophistication doesn't make them any less sentient.

Comment by davidpearce on Arguments Against Speciesism · 2013-07-28T23:15:59.146Z · LW · GW

jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e.g. language production, solving mathematical equations, introspecting one's thought-episodes, etc - all those cognitive capacities that make mature members of our species distinctively human. For sure, this evidence is suggestive, not conclusive. But the supportive evidence converges with e.g. microelectrode studies using awake human subjects. Such studies suggest the limbic brain structures that generate our most intense experiences are evolutionarily very ancient. Also, the same genes, same neurotransmitter pathways and same responses to noxious stimuli are found in our fellow vertebrates. In view of how humans treat nonhumans, I think we ought to be worried that humans could be catastrophically mistaken about nonhuman animal sentience.

Comment by davidpearce on Why Eat Less Meat? · 2013-07-27T11:26:08.061Z · LW · GW

Obamacare for elephants probably doesn't rank highly in the priorities of most lesswrongers. But from an anthropocentric perspective, isn't an analogous scenario for human beings - i.e. to stay free living but not "wild" - the most utopian outcome if the MIRI conception of an Intelligence Explosion comes to pass?

Comment by davidpearce on Why Eat Less Meat? · 2013-07-25T11:22:14.541Z · LW · GW

RobbBB, in what sense can phenomenal agony be an "illusion"? If your pain becomes so bad that abstract thought is impossible, does your agony - or the "illusion of agony" - somehow stop? The same genes, same neurotransmitters, same anatomical pathways and same behavioural responses to noxious stimuli are found in humans and the nonhuman animals in our factory-farms. A reasonable (but unproven) inference is that factory-farmed nonhumans endure misery - or the "illusion of misery" as the eliminativist puts it - as do abused human infants and toddlers.

Comment by davidpearce on Why Eat Less Meat? · 2013-07-24T13:43:03.443Z · LW · GW

drnickbone, the argument that meat-eating can be ethically justified if conditions of factory-farmed animals are improved so their lives are "barely" worth living is problematic. As it stands, the argument justifies human cannibalism. Breeding human babies for the pot is potentially ethically justified because the infants in question wouldn't otherwise exist - although they are factory-farmed, runs this thought-experiment, their lives are at least "barely" worth living because they don't self-mutilate or show the grosser signs of psychological trauma. No, I'm sure you don't buy this argument - but then we shouldn't buy it for nonhuman animals either.

Comment by davidpearce on Why Eat Less Meat? · 2013-07-24T10:18:13.527Z · LW · GW

Indeed so. Factory-farmed nonhuman animals are debeaked, tail-docked, castrated (etc) to prevent them from mutilating themselves and each other. Self-mutilatory behaviour in particular suggests an extraordinarily severe level of chronic distress. Compare how desperate human beings must be before we self-mutilate. A meat-eater can (correctly) respond that the behavioural and neuroscientific evidence that factory-farmed animals suffer a lot is merely suggestive, not conclusive. But we're not trying to defeat philosophical scepticism, just act on the best available evidence. Humans who persuade ourselves that factory-farmed animals are happy are simply kidding ourselves - we're trying to rationalise the ethically indefensible.

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-30T17:48:48.357Z · LW · GW

SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from "exploiting and killing ore-bearing rocks" does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn't there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I'd have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-30T10:21:30.699Z · LW · GW

SaidAchmiz, one difference between factory farming and the Holocaust is that the Nazis believed in the existence of an international conspiracy of the Jews to destroy the Aryan people. Humanity's only justification for exploiting and killing nonhuman animals is that we enjoy the taste of their flesh. No one believes that factory-farmed nonhuman animals have done "us" any harm. Perhaps the parallel with the (human) Holocaust fails for another reason. Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence. A third possible reason for denying the parallel is the issue of potential. Pigs (etc) lack the variant of the FOXP2 gene implicated in generative syntax. In consequence, pigs will never match the cognitive capacities of most (though not all) adult humans. The problem with this argument is that we don't regard, say, humans with infantile Tay-Sachs, who lack the potential to become cognitively mature adults, as any less worthy of love, care and respect than healthy toddlers. Indeed the Nazi treatment of congenitally handicapped humans (the "euthanasia" program) is often confused with the Holocaust, for which it provided many of the technical personnel. A fourth reason to deny the parallel with the human Holocaust is that it's offensive to Jewish people. Yet this uncomfortable parallel has been drawn by some Jewish writers: the comparison with "an eternal Treblinka", for example, was made by Isaac Bashevis Singer, the Jewish-American Nobel laureate. Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.

Comment by davidpearce on Is our continued existence evidence that Mutually Assured Destruction worked? · 2013-06-19T08:36:07.234Z · LW · GW

Yes, assuming post-Everett quantum mechanics, our continued existence needn't be interpreted as evidence that Mutually Assured Destruction works, but rather as an anthropic selection effect. It's unclear why (at least in our family of branches) Hugh Everett, who certainly took his own thesis seriously, spent much of his later life working for the Pentagon targeting thermonuclear weaponry on cities. For Everett must have realised that in countless world-branches, such weapons would actually be used. Either way, the idea that Mutually Assured Destruction works could prove ethically catastrophic this century if taken seriously.

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-17T17:52:04.348Z · LW · GW

Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let's hope that what it's like to be an asphyxiating fish, for example, doesn't remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-17T16:54:32.062Z · LW · GW

Elharo, which is more interesting? Wireheading - or "the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living"? Yes, I agree, the latter certainly sounds more exciting; but "from the inside", quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but "from the inside" it presumably feels sublime.

However, we don't need to choose between aiming for a utilitronium shockwave and conserving the status quo. The point of recalibrating our hedonic treadmill is that life can be fabulously richer - in principle orders of magnitude richer - for everyone without being any less diverse, and without forcing us to give up our existing values and preference architectures. (cf. "The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life".) In principle, there is nothing to stop benign (super)intelligence from spreading such reward pathway enhancements across the phylogenetic tree.

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-16T13:12:48.230Z · LW · GW

Elharo, I take your point, but surely we do want humans to enjoy healthy lives free from hunger and disease and safe from parasites and predators? Utopian technology promises similar blessings to nonhuman sentients too. Human and nonhuman animals alike typically flourish best when free-living but not "wild".

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-16T09:49:53.421Z · LW · GW

Eugine, in answer to your question: yes. If we are committed to the well-being of all sentience in our forward light-cone, then we can't simultaneously conserve predators in their existing guise. Humans are not obligate carnivores; and the in vitro meat revolution may shortly make this debate redundant; but it's questionable whether posthuman superintelligence committed to the well-being of all sentience could conserve humans in their existing guise either.

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-15T18:18:58.924Z · LW · GW

SaidAchmiz, you're right. The issue isn't settled: I wish it were so. The Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus does express a non-anthropocentric commitment to the well-being of all sentience. ["We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise."] But I wonder what percentage of lesswrongers would support such a far-reaching statement?

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-13T17:36:48.277Z · LW · GW

SaidAchmiz, I wonder if a more revealing question would be: if / when in vitro meat products of equivalent taste and price hit the market, will you switch? Lesswrong readers tend not to be technophobes, so I assume the majority(?) of lesswrongers who are not already vegetarian will make the transition. However, you say above that you are "not interested in reducing the suffering of animals". Do you mean that you are literally indifferent one way or the other to nonhuman animal suffering - in which case presumably you won't bother changing to the cruelty-free alternative? Or do you mean merely that you don't consider nonhuman animal suffering important?

Comment by davidpearce on Effective Altruism Through Advertising Vegetarianism? · 2013-06-13T12:16:32.785Z · LW · GW

Eliezer, is that the right way to do the maths? If a high-status opinion-former publicly signals that he's quitting meat because it's ethically indefensible, then others are more likely to follow suit - and the chain-reaction continues. For sure, studies purportedly showing longer lifespans, higher IQs etc of vegetarians aren't very impressive because there are too many possible confounding variables. But what such studies surely do illustrate is that any health-benefits of meat-eating vs vegetarianism, if they exist, must be exceedingly subtle. Either way, practising friendliness towards cognitively humble lifeforms might not strike AI researchers as an urgent challenge now. But isn't the task of ensuring that precisely such an outcome ensues from a hypothetical Intelligence Explosion right at the heart of MIRI's mission - as I understand it at any rate?
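[A toy way of making the maths in the comment above explicit. The persuasion rate r below is purely hypothetical, not an empirical estimate: if each new vegetarian inspires an expected r further converts, who in turn inspire more, the total expected impact per initial convert is a geometric series - so a high-status opinion-former who raises r counts for more than one "quiet" convert.]

```python
def total_impact(r: float) -> float:
    """Expected total converts per initial convert, summing the
    chain reaction 1 + r + r^2 + ... (converges for 0 <= r < 1)."""
    if not 0.0 <= r < 1.0:
        raise ValueError("r must be in [0, 1)")
    return 1.0 / (1.0 - r)

# A silent convert (r = 0) counts once; a hypothetical opinion-former
# whose public example persuades an expected 0.5 further converts
# counts for twice as much, and impact grows nonlinearly as r -> 1.
```

On this admittedly simplistic model, the point of public signalling is that it acts on r, the multiplier, rather than adding a single convert.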

Comment by davidpearce on Who thinks quantum computing will be necessary for AI? · 2013-06-06T12:15:30.088Z · LW · GW

Tim, perhaps I'm mistaken; you know lesswrongers better than me. But in any such poll I'd also want to ask respondents who believe the USA is a unitary subject of experience whether they believe such a conjecture is consistent with reductive physicalism.

Comment by davidpearce on Who thinks quantum computing will be necessary for AI? · 2013-06-06T12:14:10.279Z · LW · GW

Wedrifid, yes, if Schwitzgebel's conjecture were true, then farewell to reductive physicalism and the ontological unity of science. The USA is a "zombie". Its functionally interconnected but skull-bound minds are individually conscious; and sometimes the behaviour of the USA as a whole is amenable to functional description; but the USA is not a unitary subject of experience. However, the problem with relying on this intuitive response is that the phenomenology of our own minds seems to entail exactly the sort of strong ontological emergence we're excluding for the USA. Let's assume, as microelectrode studies tentatively confirm, that individual neurons can support rudimentary experience. How can we rigorously derive bound experiential objects, let alone the fleeting synchronic unity of the self, from discrete, distributed, membrane-bound classical feature processors? Dreamless sleep aside, why aren't we mere patterns of "mind dust"?

None of this might seem relevant to ChrisHallquist's question. Computationally speaking, who cares whether Deep Blue, Watson, or Alpha Dog (etc) are unitary subjects of experience? But anyone who wants to save reductive physicalism should at least consider why quantum mind theorists are prepared to contemplate a role for macroscopic quantum coherence in the CNS. Max Tegmark hasn't refuted quantum mind; he's made a plausible but unargued assumption, namely that sub-picosecond decoherence timescales are too short to do any computational and/or phenomenological work. Maybe so; but this assumption remains to be empirically tested. If all we find is "noise", then I don't see how reductive physicalism can be saved.

Comment by davidpearce on Who thinks quantum computing will be necessary for AI? · 2013-06-04T17:12:01.940Z · LW · GW

Huh, yes, in my view C. elegans is a P-zombie. If we grant reductive physicalism, the primitive nervous system of C. elegans can't support a unitary subject of experience. At most, its individual ganglia may be endowed with the rudiments of unitary consciousness. But otherwise, C. elegans can effectively be modelled classically. Most of us probably wouldn't agree with philosopher Eric Schwitzgebel ("If Materialism Is True, the United States Is Probably Conscious"). But exactly the same dilemma confronts those who treat neurons as essentially discrete, membrane-bound classical objects. Even if (rightly IMO) we take Strawsonian physicalism seriously, we still need to explain how classical neuronal "mind-dust" could generate bound experiential objects or a unitary subject of experience without invoking some sort of strong emergence.

Comment by davidpearce on Who thinks quantum computing will be necessary for AI? · 2013-06-03T22:13:04.287Z · LW · GW

Alas so. IMO a solution to the phenomenal binding problem is critical to understanding the evolutionary success of organic robots over the past 540 million years - and why classical digital computers are (and will remain) insentient zombies, not unitary minds. This conjecture may be false; but it has the virtue of being testable. If / when our experimental apparatus allows probing the CNS at the sub-picosecond timescales above which Max Tegmark ("Why the brain is probably not a quantum computer") posits thermally-induced decoherence, then I think we'll get a huge surprise! I predict we'll find, not random psychotic "noise", but instead the formal, quantum-coherent physical shadows of the macroscopic bound phenomenal objects of everyday experience - computationally optimised by hundreds of millions of years of evolution, i.e. a perfect structural match. By contrast, critics of the quantum mind conjecture must presumably predict we'll find just "noise".

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-22T07:58:25.896Z · LW · GW

Cruelty-free in vitro meat can potentially replace the flesh of all sentient beings currently used for food. Yes, it's more efficient; it also makes high-tech Jainism less of a pipedream.

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-21T09:11:59.916Z · LW · GW

I disagree with Peter Singer here. So I'm not best placed to argue his position. But Singer is acutely sensitive to the potential risks of any notion of lives not worth living. Recall Singer lost three of his grandparents in the Holocaust. Let's just say it's not obvious that an incurable victim of, say, infantile Tay-Sachs disease, who is going to die at around four years old after a chronic pain-ridden existence, is better off alive. We can't put this question to the victim: the nature of the disorder means s/he is not cognitively competent to understand the question.

Either way, the case for the expanding circle doesn't depend on an alleged growth in empathy per se. If, as I think quite likely, we eventually enlarge our sphere of concern to the well-being of all sentience, this outcome may owe as much to the trait of high-AQ hyper-systematising as to any widening or deepening compassion. By way of example, consider the work of Bill Gates in cost-effective investments in global health (vaccinations etc) and indeed in meat alternatives ("the future of meat is vegan"). Not even his greatest admirers would describe Gates as unusually empathetic. But he is unusually rational - and the growth in secular scientific rationalism looks set to continue.

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-19T10:40:19.342Z · LW · GW

Nornagest, fair point. See too "The Brain Functional Networks Associated to Human and Animal Suffering Differ among Omnivores, Vegetarians and Vegans".

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-19T09:17:39.178Z · LW · GW

Eugine, are you doing Peter Singer justice? What motivates Singer's position isn't a range of empathetic concern that's stunted in comparison to people who favour the universal sanctity of human life. Rather it's a different conception of the threshold below which a life is not worth living. We find similar debates over the so-called "Logic of the Larder" for factory-farmed non-human animals. Actually, one may agree with Singer - both his utilitarian ethics and his bleak diagnosis of some human and nonhuman lives - and still argue against his policy prescriptions on indirect utilitarian grounds. But this would take us far afield.

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-18T11:23:49.076Z · LW · GW

On (indirect) utilitarian grounds, we may make a strong case that enshrining the sanctity of life in law will lead to better consequences than legalising infanticide. So I disagree with Singer here. But I'm not sure Singer's willingness to defend infanticide as (sometimes) the lesser evil is a counterexample to the broad sweep of the generalisation of the expanding circle. We're not talking about some Iron Law of Moral Progress.

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-18T08:24:05.257Z · LW · GW

The growth of science has led to a decline in animism. So in one sense, our sphere of concern has narrowed. But within the sphere of sentience, I think Singer and Pinker are broadly correct. Also, utopian technology makes even the weakest forms of benevolence vastly more effective. Consider, say, vaccination. Even if, pessimistically, one doesn't foresee any net growth in empathetic concern, technology increasingly makes the costs of benevolence trivial.

[Once again, I'm not addressing here the prospect of hypothetical paperclippers - just mind-reading humans with a pain-pleasure (dis)value axis.]

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-17T11:37:44.693Z · LW · GW

An expanding circle of empathetic concern needn't reflect a net gain in compassion. Naively, one might imagine that e.g. vegans are more compassionate than vegetarians. But I know of no evidence that this is the case. Tellingly, female vegetarians outnumber male vegetarians by around 2:1, but the ratio of male to female vegans is roughly 1:1. So an expanding circle may reflect our reduced tolerance of inconsistency / cognitive dissonance. Men are more likely to be utilitarian hyper-systematisers.

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-16T21:53:34.971Z · LW · GW

There is no guarantee that greater perspective-taking capacity will be matched with equivalent action. But presumably greater empathetic concern makes such action more likely. [cf. Steven Pinker's "The Better Angels of Our Nature". Pinker aptly chronicles e.g. the growth in consideration of the interests of nonhuman animals; but this greater concern hasn't (yet) led to an end to the growth of factory-farming. In practice, I suspect in vitro meat will be the game-changer.]

The attributes of superintelligence? Well, the growth of scientific knowledge has been paralleled by a growth in awareness - and partial correction - of all sorts of cognitive biases that were fitness-enhancing in the ancestral environment of adaptedness. Extrapolating, I was assuming that full-spectrum superintelligences would be capable of accessing and impartially weighing all possible first-person perspectives and acting accordingly. But I'm making a lot of contestable assumptions here. And see too the perils of:

Comment by davidpearce on New report: Intelligence Explosion Microeconomics · 2013-05-16T19:21:58.741Z · LW · GW

Perhaps it's worth distinguishing the Convergence vs Orthogonality theses for: 1) biological minds with a pain-pleasure (dis)value axis. 2) hypothetical paperclippers.

Unless we believe that the expanding circle of compassion is likely to contract, IMO a strong case can be made that rational agents will tend to phase out the biology of suffering in their forward light-cone. I'm assuming, controversially, that superintelligent biological posthumans will not be prey to the egocentric illusion that was fitness-enhancing on the African savannah. Hence the scientific view-from-nowhere, i.e. no arbitrarily privileged reference frames.

But what about 2? I confess I still struggle with the notion of a superintelligent paperclipper. But if we grant that such a prospect is feasible and even probable, then I agree the Orthogonality thesis is most likely true.

Comment by davidpearce on Decision Theory FAQ · 2013-04-06T12:34:25.931Z · LW · GW

notsonewuser, yes, "a (very) lossy compression", that's a good way of putting it - not just burger-eating Jane's lossy representation of the first-person perspective of a cow, but also her lossy representation of her pensioner namesake with atherosclerosis forty years hence. Insofar as Jane is ideally rational, she will take pains to offset such lossiness before acting.

Ants? Yes, you could indeed choose not to have your brain reconfigured so as faithfully to access their subjective panic and distress. Likewise, a touchy-feely super-empathiser can choose not to have her brain reconfigured so she better understands the formal, structural features of the world - or what it means to be a good Bayesian rationalist. But insofar as you aspire to be an ideal rational agent, you must aspire to maximum representational fidelity to the first-person and third-person facts alike. This is a constraint on idealised rationality, not a plea for us to be more moral - although yes, the ethical implications may turn out to be profound.

The Hedonistic Imperative? Well, I wrote HI in 1995. The Abolitionist Project (2007) is shorter, more up-to-date, and (I hope) more readable. Of course, you don't need to buy into my quirky ideas on ideal rationality or ethics to believe that we should use biotech and infotech to phase out the biology of suffering throughout the living world.

On a different note, I don't know who'll be around in London next month. But on May 11, there is a book launch of the Springer volume, "Singularity Hypotheses: A Scientific and Philosophical Assessment".

I'll be making the case for imminent biologically-based superintelligence. I trust there will be speakers to put the Kurzweilian and MIRI / LessWrong perspectives. I fear a consensus may prove elusive. But Springer have commissioned a second volume - perhaps to tie up any loose ends.

Comment by davidpearce on Decision Theory FAQ · 2013-03-22T17:45:44.797Z · LW · GW

True, Desrtopa. But just as doing mathematics is harder when mathematicians can't agree on what constitutes a valid proof (cf. constructivists versus nonconstructivists), likewise formalising a normative account of ideal rational agency is harder where disagreement exists over the criteria of rationality.

Comment by davidpearce on Decision Theory FAQ · 2013-03-21T12:08:26.163Z · LW · GW

Tim, when dreaming, one has a generic delusion, i.e. the background assumption that one is awake, and a specific delusion, i.e. the particular content of one's dream. But given we're constructing a FAQ of ideal rational agency, no such radical scepticism about perception is at stake - merely eliminating a source of systematic bias that is generic to cognitive agents evolved under pressure of natural selection. For sure, there may be some deluded folk who don't recognise it's a bias and who believe instead they really are the centre of the universe - and therefore their interests and preferences carry special ontological weight. But Luke's FAQ is expressly about normative decision theory. The FAQ explicitly contrasts itself with descriptive decision theory, which "studies how non-ideal agents (e.g. humans) actually choose."

Comment by davidpearce on Decision Theory FAQ · 2013-03-19T16:51:40.989Z · LW · GW

Khafra, one doesn't need to be a moral realist to give impartial weight to interests / preference strengths. Ideal rational agent Jill need no more be a moral realist when taking into consideration the stronger but introspectively inaccessible preferences of her slaves than when taking into account the stronger but introspectively inaccessible preference of her namesake and distant successor, Pensioner Jill, not to be destitute in old age as she weighs whether to raid her savings account. Ideal rationalist Jill does not mistake an epistemological limitation on her part for an ontological truth. Of course, in practice flesh-and-blood Jill may sometimes be akratic. But this, I think, is a separate issue.

Comment by davidpearce on Decision Theory FAQ · 2013-03-19T05:49:51.922Z · LW · GW

khafra, could you clarify? On your account, who in a slaveholding society is the ideal rational agent? Both Jill and Jane want a comfortable life. To keep things simple, let's assume they are both meta-ethical anti-realists. Both Jill and Jane know their slaves have an even stronger preference to be free - albeit not a preference introspectively accessible to our two agents in question. Jill's conception of ideal rational agency leads her impartially to satisfy the objectively stronger preferences and free her slaves. Jane, on the other hand, acknowledges their preference is stronger - but she allows her introspectively accessible but weaker preference to trump what she can't directly access. After all, Jane reasons, her slaves have no mechanism to satisfy their stronger preference for freedom. In other words, are we dealing with ideal rational agency or realpolitik? Likewise with burger-eater Jane and Vegan Jill today.

Comment by davidpearce on Decision Theory FAQ · 2013-03-18T18:02:46.121Z · LW · GW

The issue of how an ideal rational agent should act is indeed distinct from the issue of what mechanism could ensure we become ideal rational agents, impartially weighing the strength of preferences / interests regardless of the power of the subject of experience who holds them. Thus if we lived in a (human) slave-owning society, then as white slave-owners we might "pragmatically" choose to discount the preferences of black slaves from our ideal rational decision theory. After all, what is the point of impartially weighing the "preferences" of different subjects of experience without considering the agent that holds / implements them? For our Slaveowners' Decision Theory FAQ, let's pragmatically order over agents by their ability to accomplish their goals, instead of by "rationality". And likewise today with captive nonhuman animals in our factory farms? Hmmm....

Comment by davidpearce on Decision Theory FAQ · 2013-03-18T14:45:57.106Z · LW · GW

Pragmatic? khafra, possibly I interpreted the FAQ too literally. ["Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose."] Whether in practice a conception of rationality that privileges a class of weaker preferences over stronger preferences will stand the test of time is clearly speculative. But if we're discussing ideal, perfectly rational agents - or even crude approximations to ideal perfectly rational agents - then a compelling case can be made for an impartial and objective weighing of preferences instead.
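The contrast between these two conceptions of rationality can be made concrete in a toy sketch. The numbers and function names below are purely illustrative inventions (nothing here comes from the discussion itself): an "impartial" agent maximises the raw sum of preference strengths, while a "realpolitik" agent scales each preference by the power of the agent who holds it.

```python
# Toy sketch (hypothetical numbers): impartial vs power-weighted
# aggregation of preferences. Each preference is a pair
# (strength, power-of-the-agent-holding-it).

def impartial_choice(options):
    """Pick the option maximising the sum of raw preference strengths."""
    return max(options, key=lambda o: sum(s for s, _ in o["preferences"]))

def realpolitik_choice(options):
    """Pick the option maximising preference strength weighted by holder's power."""
    return max(options, key=lambda o: sum(s * p for s, p in o["preferences"]))

# The slaves' preference for freedom is stronger (+/-10 vs +/-3),
# but its holders have no power to enforce it (power = 0.0).
options = [
    {"name": "free the slaves", "preferences": [(10, 0.0), (-3, 1.0)]},
    {"name": "keep the slaves", "preferences": [(-10, 0.0), (3, 1.0)]},
]

print(impartial_choice(options)["name"])    # impartial weighing frees the slaves
print(realpolitik_choice(options)["name"])  # power-weighted weighing does not
```

On these (made-up) numbers the two rules diverge: impartial weighing of preference strengths frees the slaves, while weighting by the holder's power keeps them enslaved, which is the sense in which a "pragmatic" ordering over agents smuggles realpolitik into the definition of rationality.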

Comment by davidpearce on Decision Theory FAQ · 2013-03-18T13:38:39.923Z · LW · GW

IlyaShpitser, you might want to glance briefly through the above discussion for some context. [But don't feel obliged; life is short!] The nature of rationality is a controversial topic in the philosophy of science. Let's just say that if either epistemic or instrumental rationality were purely a question of maths, the route to knowledge would be unimaginably easier.

Comment by davidpearce on Decision Theory FAQ · 2013-03-18T12:59:18.449Z · LW · GW

IlyaShpitser, is someone who steals from their own pension fund an even bigger bastard, as you put it? Or irrational? What's at stake here is which preferences or interests to include in a utility function.